
Markov Model

A model representing the different resident states of a system, and the transitions between those states
(applicable to repairable, as well as non-repairable systems)
System behavior that varies randomly with time and space is known
as a stochastic process.
A stochastic process that meets the following requirements is a
Markovian process; otherwise it is non-Markovian.
Requirements:
1. system states must be identifiable
2. lack of memory: future states are independent of all past
states, except the present state
3. stationary: probability of transition between 2 states is the
same at all times
Requirements 2 & 3 are met by systems with probability distributions
characterized by a constant hazard rate.
Markov Approach:
- discrete (time or space) → Discrete Markov Chain
- continuous (time) → Continuous Markov Process

Discrete Markov Chain

2-State System

[State transition diagram: State 1 remains in State 1 with probability 1/2 and moves to State 2 with probability 1/2; State 2 remains in State 2 with probability 3/4 and moves to State 1 with probability 1/4]

P[remaining in State 2] + P[leaving State 2] = 3/4 + 1/4 = 1

The behavior of the system (the probability of residing in a state after a
number of time intervals) can be illustrated by a tree diagram.
Probability of any branch = product of the probabilities of each step in the branch
Probability of residing in a state = sum of the probabilities of the branches that lead to that state

State probabilities (time dependent) of the 2-state system, starting in State 1:

Time interval   State 1                            State 2
0               1.0                                0.0
1               (1.0)(1/2) = 0.500                 (1.0)(1/2) = 0.500
2               (1/2)(1/2) + (1/2)(1/4) = 0.375    (1/2)(1/2) + (1/2)(3/4) = 0.625
3               0.344                              0.656
4               0.336                              0.664
5               0.334                              0.666
[Plot: state probability vs. number of time intervals, for a system starting in State 1 and a system starting in State 2; both converge to the same limiting values]
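The table above can also be generated numerically. A minimal sketch, assuming Python with NumPy (the variable names are illustrative), propagating the state probability vector one interval at a time:

```python
import numpy as np

# transition matrix of the 2-state system (rows = from-state, columns = to-state)
P = np.array([[0.50, 0.50],
              [0.25, 0.75]])

p = np.array([1.0, 0.0])  # start in State 1
for n in range(6):
    print(f"interval {n}: State 1 = {p[0]:.3f}, State 2 = {p[1]:.3f}")
    p = p @ P             # advance one time interval
```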

As the number of time intervals increases, the state probability tends to a
constant (limiting) value, known as the limiting state probability.
Transient behavior (time-dependent state probability) depends
on the initial condition.
The limiting state probability of an ergodic system (or process) is
independent of the initial condition.

Ergodic System:
every state of the system can be reached from every other
state, directly or through intermediate states
Systems with absorbing states are not ergodic.
Absorbing State: a state that, once entered, cannot be left
e.g. a system failure state in a mission-oriented system

Evaluation Procedure using the Markov Model:
- develop the Markov model for the component (or system)
- evaluate the state probabilities (time dependent or limiting
state) using:
o Tree diagram: impractical for large systems or a large
number of time intervals
o Stochastic Transitional Probability Matrix
o Other techniques for the continuous Markov process will
be discussed later

Stochastic Transitional Probability Matrix

Square matrix (order = number of states)
Rows: from states
Columns: to states
Element Pij = probability of transition from state i to state j

                 to nodes
                  1     2    ..    n
    from     1 [ P11   P12   ..   P1n ]
P = nodes    2 [ P21   P22   ..   P2n ]
            .. [  ..    ..   ..    .. ]
             n [ Pn1   Pn2   ..   Pnn ]

(the sum of the probabilities in each row must be unity)
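This row-sum property is easy to check numerically; a minimal sketch assuming Python with NumPy, using the 2-state matrix from the example below:

```python
import numpy as np

# 2-state stochastic transitional probability matrix (rows = from, columns = to)
P = np.array([[0.50, 0.50],
              [0.25, 0.75]])

# the sum of the probabilities in each row must be unity
assert np.allclose(P.sum(axis=1), 1.0)
```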

Transient behavior:
The state probabilities after n intervals are given by
P(n) = P(0) · P^n
where P(0) is the initial probability vector
(the state probabilities at the initial condition)
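A minimal sketch of this formula, under the same NumPy assumption, with P and P(0) taken from the 2-state example:

```python
import numpy as np

P = np.array([[0.50, 0.50],
              [0.25, 0.75]])
p0 = np.array([1.0, 0.0])               # P(0): system starts in State 1

n = 2
pn = p0 @ np.linalg.matrix_power(P, n)  # P(n) = P(0) . P^n
print(pn)                               # [0.375 0.625]
```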

Limiting State Probability:

Repeated multiplication of P until the resulting matrix does not change with
further multiplications; equivalently, solve

α · P = α

where α = the limiting probability vector

Example:

[State transition diagram: State 1 remains in State 1 with probability 1/2 and moves to State 2 with probability 1/2; State 2 remains in State 2 with probability 3/4 and moves to State 1 with probability 1/4]

Stochastic Transitional Probability Matrix,

P = | 1/2  1/2 |
    | 1/4  3/4 |

If the system starts in State 1, the initial probability vector

P(0) = [ 1  0 ]

State probabilities after interval 2,

P(2) = P(0) · P^2 = [ 1  0 ] · | 1/2  1/2 |^2 = [ 0.375  0.625 ]
                               | 1/4  3/4 |

Limiting State Probabilities:

α = [ P1  P2 ]

where P1 = limiting probability of being in State 1
      P2 = limiting probability of being in State 2

Using the equation α · P = α,

[ P1  P2 ] · | 1/2  1/2 | = [ P1  P2 ]    (1)
             | 1/4  3/4 |

P1 + P2 = 1    (2)

Solving (1) & (2), P1 = 0.333 and P2 = 0.667
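Instead of repeated multiplication, the limiting probabilities can be found by solving α · P = α together with the normalization condition. A sketch under the same NumPy assumption:

```python
import numpy as np

P = np.array([[0.50, 0.50],
              [0.25, 0.75]])
n = P.shape[0]

# alpha.P = alpha  <=>  (P.T - I).alpha.T = 0; one of these equations is
# redundant, so replace it with the normalization condition sum(alpha) = 1
A = P.T - np.eye(n)
A[-1, :] = 1.0
b = np.zeros(n)
b[-1] = 1.0

alpha = np.linalg.solve(A, b)
print(alpha)                # ~[0.333 0.667], matching P1 and P2 above
```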

Absorbing States
System states that, once entered, cannot be left until the
system starts a new mission.
e.g. failure states in mission-oriented systems
Need to evaluate:
How many time intervals does the system operate on
average before entering the absorbing state?
Expected number of time intervals,
N = [ I - Q ]^-1
where,
I = identity matrix
Q = truncated matrix created by deleting the row(s) and column(s)
associated with the absorbing states

[State transition diagram: a 3-state system in which State 3 is an absorbing state]

Example:

[State transition diagram: State 1 remains in State 1 with probability 3/4 and moves to State 2 with probability 1/4; State 2 remains in State 2 with probability 1/2 and moves to State 3 with probability 1/2; State 3 is the absorbing state]

Stochastic Transitional Probability Matrix,

          1    2    3
     1 [ 3/4  1/4   0  ]
P =  2 [  0   1/2  1/2 ]
     3 [  0    0    1  ]   <- absorbing state

Truncated Matrix (deleting Row 3 & Column 3 from P),

          1    2
Q =  1 [ 3/4  1/4 ]
     2 [  0   1/2 ]

Average number of time intervals spent in each state before entering
the absorbing state,

N = [ I - Q ]^-1

  = { | 1  0 |  -  | 3/4  1/4 | }^-1  =  | 1/4  -1/4 |^-1  =  | 4  2 |
    { | 0  1 |     |  0   1/2 | }        |  0    1/2 |        | 0  2 |

i.e. the average number of time intervals spent in State 1, given that the
system starts in State 1, is N11 = 4.
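A sketch of the same calculation under the NumPy assumption, with Q taken from the example above:

```python
import numpy as np

# truncated matrix Q (transient States 1 and 2; absorbing State 3 removed)
Q = np.array([[0.75, 0.25],
              [0.00, 0.50]])

N = np.linalg.inv(np.eye(2) - Q)   # N = [I - Q]^-1
print(N)
# [[4. 2.]
#  [0. 2.]]  -> starting in State 1, the system spends on average 4 intervals
#              in State 1 and 2 intervals in State 2 before absorption
```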
