Discrete Markov Chain: 2-State System
[Figure: two-state transition diagram. From State 1: probability 1/2 of remaining in State 1, 1/2 of moving to State 2. From State 2: probability 3/4 of remaining in State 2, 1/4 of leaving State 2 for State 1.]
State probability (system starts in State 1)

Time
interval    State 1                            State 2
0           1.0                                0.0
1           0.500                              0.500
2           (1/2)(1/2) + (1/2)(1/4) = 0.375    (1/2)(1/2) + (1/2)(3/4) = 0.625
3           0.344                              0.656
4           0.336                              0.664
5           0.334                              0.666
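The table values can be reproduced by iterating the state probabilities one interval at a time. A short Python sketch (not part of the original notes), using the transition probabilities above:

```python
# Transition matrix: P[i][j] = probability of moving from state i+1 to state j+1
P = [[0.5, 0.5],
     [0.25, 0.75]]

p = [1.0, 0.0]  # P(0): system starts in State 1
for n in range(1, 6):
    # One interval: p <- p . P (row vector times matrix)
    p = [p[0] * P[0][0] + p[1] * P[1][0],   # next probability of State 1
         p[0] * P[0][1] + p[1] * P[1][1]]   # next probability of State 2
    print(n, round(p[0], 3), round(p[1], 3))
```

The printed rows match the table, ending with 0.334 and 0.666 at interval 5.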
[Figure: state probabilities vs. time interval, for a system starting in State 1 and one starting in State 2; in both cases the probabilities converge toward 1/3 for State 1 and 2/3 for State 2.]
Ergodic System:
every state of the system can be reached from every other
state, directly or through intermediate states.
Systems with absorbing states are not ergodic.
Absorbing State: a state which, once entered, cannot be left,
e.g. a system failure state in a mission-oriented system
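As a sketch (the helper names below are my own, not from the notes), the ergodicity condition above can be checked by testing whether every state can reach every other state through nonzero-probability transitions:

```python
def reachable(P, i):
    """Set of states reachable from state i via nonzero-probability steps."""
    seen, stack = {i}, [i]
    while stack:
        s = stack.pop()
        for t, prob in enumerate(P[s]):
            if prob > 0 and t not in seen:
                seen.add(t)
                stack.append(t)
    return seen

def is_ergodic(P):
    """Every state can reach every other state, directly or indirectly."""
    n = len(P)
    return all(reachable(P, i) == set(range(n)) for i in range(n))

# The 2-state example above is ergodic:
print(is_ergodic([[0.5, 0.5], [0.25, 0.75]]))                    # True
# A system whose last state is absorbing (row [0, 0, 1]) is not:
print(is_ergodic([[0.75, 0.25, 0], [0, 0.5, 0.5], [0, 0, 1]]))   # False
```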
Stochastic transitional probability matrix (rows: from node, columns: to node):

                 to nodes
                 1     2     ..    n

from nodes  1  | P11   P12   ..   P1n |
            2  | P21   P22   ..   P2n |
P =         .. | ..    ..    ..   ..  |
            n  | Pn1   Pn2   ..   Pnn |

The sum of probabilities in each row must be unity.
Transient behavior:
State probabilities after n intervals are given by

P(n) = P(0) . P^n

where P(0) is the initial probability vector
(state probabilities at the initial condition)
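The formula P(n) = P(0) . P^n can be evaluated as repeated vector-matrix multiplication. A minimal sketch in plain Python (function names are my own):

```python
def mat_vec(p, P):
    """Row vector times matrix: p . P."""
    n = len(P)
    return [sum(p[i] * P[i][j] for i in range(n)) for j in range(n)]

def transient(p0, P, n):
    """State probabilities after n intervals: P(n) = P(0) . P^n."""
    p = list(p0)
    for _ in range(n):
        p = mat_vec(p, P)
    return p

# Two intervals of the 2-state chain, starting in State 1:
print(transient([1, 0], [[0.5, 0.5], [0.25, 0.75]], 2))  # [0.375, 0.625]
```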
Example: two states; from State 1 the system remains with probability 1/2 or moves to State 2 with probability 1/2; from State 2 it moves to State 1 with probability 1/4 or remains with probability 3/4.

P = | 1/2   1/2 |
    | 1/4   3/4 |

Starting in State 1: P(0) = [ 1  0 ]

P(2) = P(0) . P^2 = [ 0.375   0.625 ]

Limiting state probabilities [P1 P2] satisfy

[P1 P2] = [P1 P2] . P

which gives

(1/2) P1 + (1/4) P2 = P1    (1)
P1 + P2 = 1                 (2)

Solving: P1 = 0.333 and P2 = 0.667
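For a 2-state chain, equations (1) and (2) reduce to a flow balance P1 . p12 = P2 . p21. A small sketch (helper name is my own) solving the example exactly with fractions:

```python
from fractions import Fraction as F

def steady_state_2(P):
    """Limiting probabilities of a 2-state chain from the balance
    P1 * p12 = P2 * p21 together with P1 + P2 = 1."""
    p12, p21 = P[0][1], P[1][0]
    P1 = p21 / (p12 + p21)
    return P1, 1 - P1

P = [[F(1, 2), F(1, 2)],
     [F(1, 4), F(3, 4)]]
print(steady_state_2(P))  # (Fraction(1, 3), Fraction(2, 3))
```

This agrees with the decimal values above: P1 = 1/3 = 0.333 and P2 = 2/3 = 0.667.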
Absorbing States
System states which, once entered, cannot be left until the
system starts a new mission,
e.g. failure states in mission-oriented systems
Need to evaluate:
How many time intervals does the system operate on
average before entering the absorbing state?
Expected number of time intervals:
N = [ I - Q ]^-1
where
I = identity matrix
Q = truncated matrix created by deleting the row(s) and column(s)
associated with the absorbing states
[Figure: three-state transition diagram; State 3 is an absorbing state.]
Example:

[Figure: three-state diagram. From State 1: remain with 3/4, move to State 2 with 1/4; from State 2: remain with 1/2, move to State 3 with 1/2; State 3 is the absorbing state.]

Stochastic transitional probability matrix:

           1     2     3
      1 | 3/4   1/4   0   |
P =   2 | 0     1/2   1/2 |
      3 | 0     0     1   |      (State 3 is the absorbing state)

Truncated matrix (deleting Row 3 and Column 3 from P):

Q = | 3/4   1/4 |
    | 0     1/2 |
N = [ I - Q ]^-1 = | 1/4   -1/4 |^-1  =  | 4   2 |
                   | 0      1/2 |         | 0   2 |

i.e. the average number of time intervals spent in State 1, given that the system
starts in State 1, is N11 = 4.
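The inverse above can be checked with the standard 2x2 inverse formula. A sketch (not from the notes) using exact fractions:

```python
from fractions import Fraction as F

# Truncated matrix Q for the example (State 3 absorbing, row/column 3 deleted)
Q = [[F(3, 4), F(1, 4)],
     [F(0),    F(1, 2)]]

# M = I - Q
a, b = 1 - Q[0][0], -Q[0][1]
c, d = -Q[1][0], 1 - Q[1][1]

# 2x2 inverse: (1/det) * [[d, -b], [-c, a]]
det = a * d - b * c
N = [[d / det, -b / det],
     [-c / det, a / det]]
print(N)  # [[4, 2], [0, 2]] as Fractions; N[0][0] = 4 matches the slide
```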