
Markov Chains

Pi19404
February 23, 2014

Contents

Markov Chains
0.1 Introduction
    0.1.1 Sequence Classification
    0.1.2 Generating a Sequence
    0.1.3 Code

Markov Chains
0.1 Introduction
In this article we will look at Markov models and their application to the classification of discrete sequential data.


Markov processes are examples of stochastic processes that generate random sequences of outcomes or states according to certain probabilities. Markov chains can be considered mathematical descriptions of Markov models with a discrete set of states. A Markov chain is an integer-time process $\{X_n;\ n \geq 0\}$ in which each random variable $X_n$ is integer valued and depends on the past only through the most recent random variable $X_{n-1}$, for all integers $n \geq 1$.

$\{X_n;\ n \in \mathbb{N}\}$ is a discrete Markov chain on the state space $S = \{1, \ldots, M\}$.

At each time instant $t$, the system changes state and makes a transition. Markov chains satisfy the Markov and stationarity properties. For a first-order Markov chain, the Markov property states that the state of the system at time $t+1$ depends only on the state of the system at time $t$. The Markov chain is said to be memoryless due to this property.

$$Pr(X_{t+1} = x_{t+1} \mid X_t = x_t, \ldots, X_1 = x_1) = Pr(X_{t+1} = x_{t+1} \mid X_t = x_t)$$

A stationarity assumption is also made, which implies that the transition probabilities do not depend on time.

$$Pr(X_{t+1} = x_j \mid X_t = x_i) = P_{ij} \quad \forall\, t \text{ and } \forall\, i, j \in \{1, \ldots, M\}$$

Thus we are looking at processes whose sample functions are sequences of integers between $1$ and $M$.

Thus a Markov process is parameterized by the transition probabilities $P_{ij}$ and the initial probabilities $P^{0}_{i}$.

Markov chains can be represented by directed graphs, where each state is represented by a node and each directed arc represents a non-zero transition probability. If a Markov chain has $M$ states, the transition probabilities can be represented by an $M \times M$ matrix.

$$T = \begin{bmatrix} P_{11} & P_{12} & \cdots & P_{1M} \\ P_{21} & P_{22} & \cdots & P_{2M} \\ \vdots & \vdots & \ddots & \vdots \\ P_{M1} & P_{M2} & \cdots & P_{MM} \end{bmatrix}, \qquad \sum_{j} P_{ij} = 1$$

The matrix $T$ is a stochastic matrix: the elements in each row sum to 1. This implies that a transition must occur from the present state to one of the $M$ states.
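Since every valid transition matrix must satisfy this row-sum property, it can be checked programmatically. Below is a minimal sketch using Eigen (the matrix library used by the code later in this article); isStochastic is a hypothetical helper, not part of OpenVision:

#include <Eigen/Dense>
#include <cmath>

//returns true if every row of T sums to 1 within a tolerance,
//i.e. T is a valid row-stochastic transition matrix
bool isStochastic(const Eigen::MatrixXf &T, float tol = 1e-6f)
{
    for (int i = 0; i < T.rows(); i++)
    {
        if (std::fabs(T.row(i).sum() - 1.0f) > tol)
            return false;
    }
    return true;
}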

The probability of a sequence being generated by the Markov chain is given by

$$P(X) = \pi(x_0) \prod_{t=1}^{T} p(x_t \mid x_{t-1})$$

where $p(x_t \mid x_{t-1})$ is the probability of observing state $x_t$ at time instant $t$ given that the state at time $t-1$ is $x_{t-1}$, and $\pi(x_0)$ is the initial probability of the state $x_0$.
Let us consider two models with the following initial probability vectors and transition matrices.

$$\pi_1 = \begin{bmatrix} 1 & 0 & 0 \end{bmatrix}, \qquad T_1 = \begin{bmatrix} 0.6 & 0.4 & 0 \\ 0.5 & 0.4 & 0.1 \\ 0.3 & 0.4 & 0.3 \end{bmatrix}$$

$$\pi_2 = \begin{bmatrix} 0.1 & 0.5 & 0.4 \end{bmatrix}, \qquad T_2 = \begin{bmatrix} 0.9 & 0.05 & 0.05 \\ 0.3 & 0.1 & 0.6 \\ 0.3 & 0.5 & 0.2 \end{bmatrix}$$


0.1.1 Sequence Classification

The sequences generated from these two Markov chains are:

$$S_1 = \begin{bmatrix} 1 & 2 & 1 & 2 & 3 & 3 & 3 & 2 & 1 & 1 \end{bmatrix}$$

$$S_2 = \begin{bmatrix} 3 & 2 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \end{bmatrix}$$

In sequence 1, since $P_{13} = 0$, we observe no transition from state 1 to state 3, and the dominant transition out of state 1 is $1 \to 1$. In sequence 2, since the dominant transition is $1 \to 1$ (with probability 0.9), we observe a long run of 1s.

We can also compute the probability that a sequence has been generated by a given Markov process. Sequence 1 has probability $8.64 \times 10^{-5}$ under the first model and $2.43 \times 10^{-7}$ under the second model. Sequence 2 has probability 0 of being generated by the first model and probability 0.0287 under the second model. Thus if we have a sequence and know that it was generated by one of the two models, we can always predict which model generated it by choosing the model that assigns it the maximum probability. Thus we can use Markov chains for sequence modelling and classification.
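Expanding sequence 1 under the first model by the chain rule shows where this number comes from:

$$P(S_1) = \pi_1(1)\, P_{12} P_{21} P_{12} P_{23} P_{33} P_{33} P_{32} P_{21} P_{11} = 1 \times 0.4 \times 0.5 \times 0.4 \times 0.1 \times 0.3 \times 0.3 \times 0.4 \times 0.5 \times 0.6 = 8.64 \times 10^{-5}$$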

/**
 * @brief The markovChain class : markovChain is a class
 * which encapsulates the representation of a discrete markov
 * chain. A markov chain is composed of a transition matrix
 * and an initial probability matrix.
 */
class markovChain
{
public:
    markovChain(){};

    /* _transition holds the transition probability matrix
     * _initial holds the initial probability matrix
     */
    MatrixXf _transition;
    MatrixXf _initial;

    /**
     * @brief setModel : function to set the parameters
     * of the model
     * @param transition : NxN transition matrix
     * @param initial : 1xN initial probability matrix
     */
    void setModel(Mat transition, Mat initial)
    {
        _transition = EigenUtils::setData(transition);
        _initial = EigenUtils::setData(initial);
    }

    /**
     * @brief computeProbability : compute the probability
     * that the sequence was generated by the markov chain
     * @param sequence : a vector of integer states
     * starting from 0
     * @return : the probability of the sequence
     */
    float computeProbability(vector<int> sequence)
    {
        //probability of the initial state
        float res = _initial(0, sequence[0]);
        //multiply by the transition probability of each
        //consecutive pair of states
        for (int i = 0; i < (int)sequence.size() - 1; i++)
        {
            res = res * _transition(sequence[i], sequence[i + 1]);
        }
        return res;
    }
};
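The class above can be exercised with the two models defined earlier. The following is a minimal usage sketch (hypothetical driver code, not part of OpenVision); note that the states are 0-indexed in the code, so sequence 1 becomes 0 1 0 1 2 2 2 1 0 0:

markovChain m1, m2;
//transition and initial probability matrices of the two models
Mat t1 = (Mat_<float>(3, 3) << 0.6, 0.4, 0.0,
                               0.5, 0.4, 0.1,
                               0.3, 0.4, 0.3);
Mat p1 = (Mat_<float>(1, 3) << 1.0, 0.0, 0.0);
Mat t2 = (Mat_<float>(3, 3) << 0.9, 0.05, 0.05,
                               0.3, 0.10, 0.60,
                               0.3, 0.50, 0.20);
Mat p2 = (Mat_<float>(1, 3) << 0.1, 0.5, 0.4);
m1.setModel(t1, p1);
m2.setModel(t2, p2);

//sequence 1 with 0-indexed states
int data[] = {0, 1, 0, 1, 2, 2, 2, 1, 0, 0};
vector<int> seq(data, data + 10);
//classify by choosing the model with maximum probability
float prob1 = m1.computeProbability(seq);
float prob2 = m2.computeProbability(seq);
int label = (prob1 >= prob2) ? 1 : 2;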


0.1.2 Generating a Sequence


The idea behind generating a sequence from a Markov process is to use a uniform random number generator. For each row of the initial probability or transition matrix, a state is selected by comparing a uniform random value against the cumulative probabilities of the row. For example, suppose the row contains the values

[0.6, 0.4, 0]

If the uniform random value falls between 0 and 0.6, then state 0 is returned. If it falls between 0.6 and 1, then state 1 is returned.

/**
 * @brief initialRand : function to sample a random state
 * @param matrix : input probability matrix
 * @param index : row of the matrix to consider
 * @return : the sampled state
 */
int initialRand(MatrixXf matrix, int index)
{
    //uniform random value in [0,1]
    float u = ((float)rand()) / RAND_MAX;
    //cumulative probability of the first state in the row
    float s = matrix(index, 0);
    int i = 0;
    //advance while the cumulative probability is below u
    //and columns of the matrix remain
    while (u > s && (i < matrix.cols() - 1))
    {
        i = i + 1;
        s = s + matrix(index, i);
    }
    return i;
}

The first step is to use the above method to select an initial state by passing the initial probability matrix as input. Each subsequent state is then selected by passing the transition probability matrix as input, using the row corresponding to the current state.

/**
 * @brief generateSequence is a function that generates
 * a sequence of specified length
 * @param n : is the length of the sequence
 * @return : is a vector of integers representing the
 * generated sequence
 */
vector<int> generateSequence(int n)
{
    vector<int> result;
    result.resize(n);
    //select a random initial state using the initial probability matrix
    int index = initialRand(_initial, 0);
    result[0] = index;
    for (int i = 1; i < n; i++)
    {
        //select a random transition to the next state using the
        //row of the transition matrix for the current state
        index = initialRand(_transition, index);
        result[i] = index;
    }
    return result;
}
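A similarly minimal sketch of sequence generation, assuming a markovChain object m2 configured with the second model's parameters as in the earlier sketch; since $P_{11} = 0.9$ dominates, the output should show long runs of state 0:

//seed the uniform random generator used by initialRand
srand((unsigned)time(0));
vector<int> seq = m2.generateSequence(10);
for (int i = 0; i < (int)seq.size(); i++)
{
    cout << seq[i] << " ";
}
cout << endl;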

0.1.3 Code
The code can be found at https://github.com/pi19404/OpenVision in the files ImgML/markovchain.cpp and ImgML/markovchain.hpp.

