
Lecture 12.5: Additional Issues Concerning Discrete-Time Markov Chains

Topics
  Review of DTMC
  Classification of states
  Economic analysis
  First passage times
  Absorbing states

Discrete-Time Markov Chain

A stochastic process { Xn }, where n ∈ N = { 0, 1, 2, . . . }, is called a discrete-time Markov chain if

    Pr{ Xn+1 = j | X0 = k0, . . . , Xn-1 = kn-1, Xn = i } = Pr{ Xn+1 = j | Xn = i }

for every i, j, k0, . . . , kn-1 and for every n. The probabilities on the right-hand side are the transition probabilities.

The future behavior of the system depends only on the current state i and not on any of the previous states.

Stationary Transition Probabilities

    Pr{ Xn+1 = j | Xn = i } = Pr{ X1 = j | X0 = i } for all n

(The transition probabilities don't change over time.)
We will only consider stationary Markov chains.

The one-step transition matrix for a Markov chain with states S = { 0, 1, 2 } is

        | p00  p01  p02 |
    P = | p10  p11  p12 |
        | p20  p21  p22 |

where pij = Pr{ X1 = j | X0 = i }.
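The row-sum property and n-step probabilities are easy to check numerically. A minimal sketch in Python; the numeric entries below are hypothetical, chosen only for illustration:

```python
# One-step transition matrix for a 3-state chain S = {0, 1, 2}.
# The numeric entries are hypothetical, for illustration only.
P = [[0.5, 0.3, 0.2],
     [0.1, 0.6, 0.3],
     [0.4, 0.4, 0.2]]

# Every row of a transition matrix must sum to 1.
for row in P:
    assert abs(sum(row) - 1.0) < 1e-9

def mat_mul(A, B):
    """Multiply two square matrices stored as lists of lists."""
    n = len(A)
    return [[sum(A[i][r] * B[r][j] for r in range(n)) for j in range(n)]
            for i in range(n)]

# Two-step transition probabilities: Pr{X2 = j | X0 = i} = (P*P)[i][j].
P2 = mat_mul(P, P)
print(P2[0][1])  # Pr{X2 = 1 | X0 = 0}, about 0.41
```

The same multiplication, repeated, gives pij(n) for any n.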

Classification of States

Accessible: it is possible to go from state i to state j (a path exists in the network from i to j).

[State-transition network omitted: states 0-3 with winning arcs a0-a3 and losing arcs d1-d4.]

Two states communicate if both are accessible from each other. A chain is irreducible if all states communicate.
State i is recurrent if the system is certain to return to it at some time in the future after leaving it.
If a state is not recurrent, it is transient.

Classification of States (continued)

A state is periodic if it can only return to itself after a fixed number of transitions greater than 1 (or a multiple of that fixed number).
A state that is not periodic is aperiodic.

[Figure a omitted: a cycle in which each state is visited every 3 iterations.]
[Figure b omitted: a chain in which each state is visited in multiples of 3 iterations.]

Classification of States (continued)

An absorbing state is one that locks in the system once it enters.

[State-transition network omitted: states 0-4 in a line, with winning arcs ai and losing arcs di.]

This diagram might represent the wealth of a gambler who begins with $2 and makes a series of wagers for $1 each. Let ai be the event of winning in state i and di the event of losing in state i.
There are two absorbing states: 0 and 4.

Classification of States (continued)

Class: a set of states that communicate with each other.
A class is either all recurrent or all transient, and may be either all periodic or all aperiodic.
States in a transient class communicate only with each other, so no arcs enter any of the corresponding nodes in the network diagram from outside the class. Arcs may leave, though, passing from a node in the class to one outside.
Illustration of Concepts: Example 1

[State-transition network omitted.]

Transitions possible (X = pij > 0):

    State    0    1    2    3
      0      0    X    X    0
      1      X    0    0    0
      2      0    0    0    X
      3      X    0    0    X

Every pair of states communicates, forming a single recurrent class; moreover, the states are not periodic (note the self-loop at state 3).

Thus the stochastic process is aperiodic and irreducible.

Illustration of Concepts: Example 2

[State-transition network omitted.]

Transitions possible (X = pij > 0):

    State    0    1    2    3    4
      0      X    X    0    0    0
      1      X    X    0    0    0
      2      0    0    X    0    0
      3      0    0    X    X    0
      4      X    0    0    0    0

States 0 and 1 communicate and form a recurrent class.

States 3 and 4 form separate transient classes.

State 2 is an absorbing state and forms a recurrent class.

Illustration of Concepts: Example 3

[State-transition network omitted.]

Transitions possible (X = pij > 0):

    State    0    1    2    3
      0      0    X    X    0
      1      0    0    0    X
      2      0    0    0    X
      3      X    0    0    0

Every state communicates with every other state, so we have an irreducible stochastic process.

Periodic? Yes: every return to a state takes a multiple of 3 transitions, so the Markov chain is irreducible and periodic.

Classification of States: Example

             1     2     3     4     5
      1     0.4   0.6    0     0     0
      2     0.5   0.5    0     0     0
  P = 3      0     0    0.3   0.7    0
      4      0     0    0.5   0.4   0.1
      5      0     0     0    0.8   0.2

[State-transition network omitted.]

A state j is accessible from state i if pij(n) > 0 for some n > 0.


In the example, state 2 is accessible from state 1, and state 3 is accessible from state 5, but state 3 is not accessible from state 2.
States i and j communicate if i is accessible from j and j is accessible from i.

States 1 & 2 communicate; also states 3, 4 & 5 communicate.
States 2 & 4 do not communicate.

States 1 & 2 form one communicating class; states 3, 4 & 5 form a 2nd communicating class.

If all states in a Markov chain communicate (i.e., all states are members of the same communicating class), then the chain is irreducible.

The current example is not an irreducible Markov chain; neither is the Gambler's Ruin example, which has 3 classes: {0}, {1, 2, 3} and {4}.
First Passage Times

Let fii = probability that the process will eventually return to state i, given that it starts in state i.
If fii = 1, then state i is called recurrent.
If fii < 1, then state i is called transient.
If pii = 1, then state i is called an absorbing state.
The example above has no absorbing states; states 0 & 4 are absorbing in the Gambler's Ruin problem.

The period of state i is the largest k such that all paths leading back to i have a length that is a multiple of k; i.e., pii(n) = 0 unless n = k, 2k, 3k, . . .
If a process can be in state i at time n or at time n + 1, having started at state i, then state i is aperiodic.
Each of the states in the current example is aperiodic.

Example of Periodicity - Gambler's Ruin

States 1, 2 and 3 each have period 2.

            0     1     2     3     4
      0     1     0     0     0     0
      1    1-p    0     p     0     0
  P = 2     0    1-p    0     p     0
      3     0     0    1-p    0     p
      4     0     0     0     0     1
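A sketch of detecting the period numerically: take the gcd of every step count n (up to a finite horizon) at which a return is possible, i.e., pii(n) > 0. The horizon of 50 steps is an arbitrary illustration choice, not part of the definition:

```python
from math import gcd

# Gambler's Ruin chain with p = 0.75 (hypothetical value for illustration).
p = 0.75
P = [[1,     0,     0,     0,     0],
     [1 - p, 0,     p,     0,     0],
     [0,     1 - p, 0,     p,     0],
     [0,     0,     1 - p, 0,     p],
     [0,     0,     0,     0,     1]]

def period(P, i, horizon=50):
    """gcd of all step counts n <= horizon with pii(n) > 0."""
    n = len(P)
    Pn = [row[:] for row in P]  # current power, starts at P^1
    g = 0
    for step in range(1, horizon + 1):
        if Pn[i][i] > 0:
            g = gcd(g, step)
        # advance to the next power: Pn <- Pn * P
        Pn = [[sum(Pn[a][r] * P[r][b] for r in range(n)) for b in range(n)]
              for a in range(n)]
    return g

print(period(P, 1), period(P, 2), period(P, 3))  # → 2 2 2
```

States 0 and 4 are absorbing (they return at every step), so the same function gives them period 1.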

If all states in a Markov chain are recurrent and aperiodic, and the chain is irreducible, then the chain is ergodic.

Existence of Steady-State Probabilities

A Markov chain is ergodic if it is aperiodic and allows the attainment of any future state from any initial state after one or more transitions. If these conditions hold, then

    πj = lim (n→∞) pij(n) = steady-state probability for state j

For example,

        | 0.8   0   0.2 |
    P = | 0.4  0.3  0.3 |
        |  0   0.9  0.1 |

[State-transition network omitted.]

Conclusion: the chain is ergodic.
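For this 3-state example, the steady-state vector can be approximated by power iteration: repeatedly apply q ← qP until q stops changing, which works because the chain is ergodic. A sketch (the iteration count is arbitrary but far more than sufficient here):

```python
# The 3-state ergodic example matrix from the slide.
P = [[0.8, 0.0, 0.2],
     [0.4, 0.3, 0.3],
     [0.0, 0.9, 0.1]]

q = [1.0, 0.0, 0.0]          # any initial distribution works
for _ in range(1000):
    # q <- q P  (distribution after one more transition)
    q = [sum(q[i] * P[i][j] for i in range(3)) for j in range(3)]

print([round(x, 4) for x in q])  # → [0.5294, 0.2647, 0.2059]
```

The exact solution of the balance equations is π = (18/34, 9/34, 7/34), which matches the printed values.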

Economic Analysis

Two kinds of economic effects:
(i) those incurred when the system is in a specified state, and
(ii) those incurred when the system makes a transition from one state to another.

The cost (profit) of being in a particular state is represented by the m-dimensional column vector

    CS = (c1S, c2S, . . . , cmS)T

where each component ciS is the cost associated with state i.

The cost of a transition is embodied in the m × m matrix CR = (cijR), where each component specifies the cost of going from state i to state j in a single step.

Expected Cost for Markov Chain

Expected cost of being in state i:

    ci = ciS + Σ (j=1..m) cijR pij

Let C = (c1, . . . , cm)T,
let ei = (0, . . . , 0, 1, 0, . . . , 0) be the ith row of the m × m identity matrix, and
let fn = the random variable representing the economic return associated with the stochastic process at time n.

Property 3: Let {Xn : n = 0, 1, . . .} be a Markov chain with finite state space S, state-transition matrix P, and expected state cost (profit) vector C. Assuming that the process starts in state i, the expected cost (profit) at the nth step is given by

    E[fn(Xn) | X0 = i] = ei P(n) C

where P(n) is the n-step transition matrix.
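Property 3 can be sketched in code: build ei, apply P a total of n times, then take the dot product with C. The two-state chain and the cost values below are hypothetical, chosen only for illustration:

```python
# Hypothetical 2-state chain and expected state costs.
P = [[0.9, 0.1],
     [0.5, 0.5]]
C = [10.0, 100.0]   # expected one-period cost of each state

def expected_cost(P, C, i, n):
    """E[fn(Xn) | X0 = i] = ei P^n C, computed by repeated q <- q P."""
    m = len(P)
    q = [1.0 if s == i else 0.0 for s in range(m)]   # q = ei
    for _ in range(n):
        q = [sum(q[a] * P[a][b] for a in range(m)) for b in range(m)]
    return sum(q[s] * C[s] for s in range(m))

print(expected_cost(P, C, 0, 1))  # one step ahead from state 0, about 19.0
```

At n = 0 the function simply returns C[i], and as n grows it approaches the long-run average cost of Property 6.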

Additional Cost Results

What if the initial state is not known?

Property 5: Let {Xn : n = 0, 1, . . .} be a Markov chain with finite state space S, state-transition matrix P, initial probability vector q(0), and expected state cost (profit) vector C. The expected economic return at the nth step is given by

    E[fn(Xn) | q(0)] = q(0) P(n) C.

Property 6: Let {Xn : n = 0, 1, . . .} be a Markov chain with finite state space S, state-transition matrix P, steady-state vector π, and expected state cost (profit) vector C. Then the long-run average return per unit time is given by

    Σ (i∈S) πi ci = πC.

Insurance Company Example

An insurance company charges customers annual premiums based on their accident history in the following fashion:

  No accident in last 2 years:          $250 annual premium
  Accidents in each of last 2 years:    $800 annual premium
  Accident in only 1 of last 2 years:   $400 annual premium

Historical statistics:
1. If a customer had an accident last year, then they have a 10% chance of having one this year;
2. If they had no accident last year, then they have a 3% chance of having one this year.

Problem: Find the steady-state probabilities and the long-run average annual premium paid by the customer.

Solution approach: Construct a Markov chain with four states: (N, N), (N, Y), (Y, N), (Y, Y), where these indicate (accident last year, accident this year).

    P =
             (N, N)  (N, Y)  (Y, N)  (Y, Y)
    (N, N)    0.97    0.03    0       0
    (N, Y)    0       0       0.90    0.10
    (Y, N)    0.97    0.03    0       0
    (Y, Y)    0       0       0.90    0.10

State-Transition Network for Insurance Company

[Network omitted: (N,N) has a 0.97 self-loop and a 0.03 arc to (N,Y); (N,Y) sends 0.90 to (Y,N) and 0.10 to (Y,Y); (Y,N) sends 0.97 to (N,N) and 0.03 to (N,Y); (Y,Y) sends 0.90 to (Y,N) and 0.10 back to itself.]
This is an ergodic Markov chain:
  All states communicate (irreducible).
  Each state is recurrent (you will return, eventually).
  Each state is aperiodic.

Solving the Steady-State Equations

    πj = Σ (i) πi pij  for each state j
    Σ (j) πj = 1,  πj ≥ 0 for all j

For the insurance chain:

    π(N,N) = 0.97 π(N,N) + 0.97 π(Y,N)
    π(N,Y) = 0.03 π(N,N) + 0.03 π(Y,N)
    π(Y,N) = 0.90 π(N,Y) + 0.90 π(Y,Y)
    π(N,N) + π(N,Y) + π(Y,N) + π(Y,Y) = 1

Solution:
    π(N,N) = 0.939, π(N,Y) = 0.029, π(Y,N) = 0.029, π(Y,Y) = 0.003

and the long-run average annual premium is

    0.93871(250) + 0.029032(400) + 0.029032(400) + 0.003226(800) = $260.48
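A quick numerical check of the steady-state solution and the long-run premium, sketched with power iteration rather than by solving the balance equations directly:

```python
# Insurance chain: states (N,N), (N,Y), (Y,N), (Y,Y).
P = [[0.97, 0.03, 0.00, 0.00],   # (N,N)
     [0.00, 0.00, 0.90, 0.10],   # (N,Y)
     [0.97, 0.03, 0.00, 0.00],   # (Y,N)
     [0.00, 0.00, 0.90, 0.10]]   # (Y,Y)
premium = [250.0, 400.0, 400.0, 800.0]

pi = [0.25] * 4                  # any starting distribution works
for _ in range(2000):
    pi = [sum(pi[i] * P[i][j] for i in range(4)) for j in range(4)]

avg = sum(pi[j] * premium[j] for j in range(4))
print([round(x, 4) for x in pi], round(avg, 2))
```

The exact steady-state vector is (29.1/31, 0.9/31, 0.9/31, 0.1/31), giving an average premium of 8075/31, about $260.48 per year.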

Markov Chain Add-in Matrix

Output of the Markov chain add-in: a regular matrix (rows sum to 1) with 4 recurrent states, 1 recurrent state class, and 0 transient states.

    Index  State   (N, N)  (N, Y)  (Y, N)  (Y, Y)   Sum   Status
      0    (N, N)   0.97    0.03    0       0        1    Class-1
      1    (N, Y)   0       0       0.9     0.1      1    Class-1
      2    (Y, N)   0.97    0.03    0       0        1    Class-1
      3    (Y, Y)   0       0       0.9     0.1      1    Class-1

Economic Data and Solution

Economic data (measure: cost; discount rate: 0). The transition cost matrix is all zeros, so the expected cost of each state equals its state cost.

    Index  State   State Cost
      0    (N, N)     250
      1    (N, Y)     400
      2    (Y, N)     400
      3    (Y, Y)     800

Steady-state analysis. The vector π shows the long-run probability of each state:

    (N, N)    (N, Y)     (Y, N)     (Y, Y)    Expected cost per period
    0.93871  0.029032   0.029032   0.003226       260.483871

Transient Analysis for Insurance Company

Starting from the initial distribution q(0) = (0, 0, 0.9, 0.1):

    Step   (N, N)    (N, Y)    (Y, N)    (Y, Y)    Step Cost   Cum. Cost
      0    0         0         0.9       0.1        440          440
      1    0.873     0.027     0.09      0.01       273.05       713.05
      2    0.93411   0.02889   0.0333    0.0037     261.3635     974.4135
      3    0.938388  0.029022  0.029331  0.003259   260.5454    1234.959
      4    0.938687  0.029032  0.029053  0.003228   260.4882    1495.447
      5    0.938708  0.029032  0.029034  0.003226   260.4842    1755.931
      6    0.93871   0.029032  0.029032  0.003226   260.4839    2016.415
      7    0.93871   0.029032  0.029032  0.003226   260.4839    2276.899
      8    0.93871   0.029032  0.029032  0.003226   260.4839    2537.383
      9    0.93871   0.029032  0.029032  0.003226   260.4839    2797.867

    Average cost: 260.1622      Discounted cost: 5203.243

First Passage Times

Let μij = expected number of steps to transition from state i to state j.

If the probability that we will eventually visit state j, given that we start in i, is less than 1, then μij = +∞.

For example, in the Gambler's Ruin problem, μ20 = +∞ because there is a positive probability that we will be absorbed in state 4 given that we start in state 2.

Computations when All States are Recurrent

If the probability of eventually visiting state j, given that we start in i, is 1, then the expected number of steps until we first visit j is given by

    μij = 1 + Σ (r ≠ j) pir μrj ,  for i = 0, 1, . . . , m-1

It will always take at least one step; we go from i to r in the first step with probability pir, and it then takes μrj steps on average to get from r to j.

For fixed j, we have a linear system of m equations in the m unknowns μij, i = 0, 1, . . . , m-1.

First-Passage Analysis for Insurance Company

Suppose that we start in state (N,N) and want to find the expected number of years until we have accidents in two consecutive years, i.e., until we first enter (Y,Y). This transition will occur with probability 1, eventually.

For convenience, number the states

      0       1       2       3
    (N,N)   (N,Y)   (Y,N)   (Y,Y)

Then,

    μ03 = 1 + p00 μ03 + p01 μ13 + p02 μ23
    μ13 = 1 + p10 μ03 + p11 μ13 + p12 μ23
    μ23 = 1 + p20 μ03 + p21 μ13 + p22 μ23

First-Passage Computations

Using

    P =
             (N, N)  (N, Y)  (Y, N)  (Y, Y)
    (N, N)    0.97    0.03    0       0
    (N, Y)    0       0       0.90    0.10
    (Y, N)    0.97    0.03    0       0
    (Y, Y)    0       0       0.90    0.10

with the states numbered 0, 1, 2, 3 as above, the equations become

    μ03 = 1 + 0.97 μ03 + 0.03 μ13
    μ13 = 1 + 0.9 μ23
    μ23 = 1 + 0.97 μ03 + 0.03 μ13

Solution: μ03 = 343.3, μ13 = 310, μ23 = 343.3

So, on average it takes 343.3 years to transition from (N,N) to (Y,Y).
Note that μ03 = μ23. Why? (States (N,N) and (Y,N) have identical rows in P.)
Note also that μ13 < μ03.
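The first-passage system can be solved numerically by fixed-point iteration on μi3 = 1 + Σ (r ≠ 3) pir μr3. A sketch: the iteration converges because state 3 is eventually reached from every state, and the large iteration count compensates for the slow convergence rate (roughly 0.997 per step for this chain):

```python
# Insurance chain, states numbered 0=(N,N), 1=(N,Y), 2=(Y,N), 3=(Y,Y).
P = [[0.97, 0.03, 0.00, 0.00],
     [0.00, 0.00, 0.90, 0.10],
     [0.97, 0.03, 0.00, 0.00],
     [0.00, 0.00, 0.90, 0.10]]

mu = [0.0, 0.0, 0.0]   # mu[i] approximates mu_i3 for i = 0, 1, 2
for _ in range(50000):
    # mu_i3 <- 1 + sum over r != 3 of p_ir * mu_r3
    mu = [1.0 + sum(P[i][r] * mu[r] for r in range(3)) for i in range(3)]

print([round(m, 1) for m in mu])  # → [343.3, 310.0, 343.3]
```

The exact answers are μ03 = μ23 = 1030/3 and μ13 = 310, agreeing with the hand solution above.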

Expected number of steps until the first passage into state 3, (Y,Y):

    From:   (N, N)     (N, Y)     (Y, N)     (Y, Y)
           343.3333     310      343.3333     310

Game of Craps

Probability of a win on the first roll = Pr{ 7 or 11 } = 0.167 + 0.056 = 0.222
Probability of a loss on the first roll = Pr{ 2, 3, 12 } = 0.028 + 0.056 + 0.028 = 0.111

(Blank entries below are 0; Win and Lose are absorbing.)

    P =
           Start   Win    Lose    P4     P5     P6     P8     P9     P10
    Start         0.222  0.111  0.083  0.111  0.139  0.139  0.111  0.083
    Win            1
    Lose                   1
    P4            0.083  0.167  0.75
    P5            0.111  0.167         0.722
    P6            0.139  0.167                0.694
    P8            0.139  0.167                       0.694
    P9            0.111  0.167                              0.722
    P10           0.083  0.167                                     0.75

First Passage Probabilities for Craps

    Rolls   Start-Win   Start-Lose    Sum    Cumulative
      1       0.222       0.111      0.333     0.333
      2       0.077       0.111      0.188     0.522
      3       0.055       0.080      0.135     0.656
      4       0.039       0.057      0.097     0.753
      5       0.028       0.041      0.069     0.822
      6       0.020       0.030      0.050     0.872
      7       0.014       0.021      0.036     0.908
      8       0.010       0.015      0.026     0.933
      9       0.007       0.011      0.018     0.952
     10       0.005       0.008      0.013     0.965
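These first-passage probabilities can be reproduced from the transition matrix: since Win is absorbing, the probability of first passage into Win at roll n is the increase in the absorbed probability from roll n-1 to roll n. A sketch for the Start-Win column:

```python
# Craps chain: Start, Win, Lose, then the six point states.
# Win (index 1) and Lose (index 2) are absorbing.
P = [
    [0, 0.222, 0.111, 0.083, 0.111, 0.139, 0.139, 0.111, 0.083],  # Start
    [0, 1,     0,     0,     0,     0,     0,     0,     0],      # Win
    [0, 0,     1,     0,     0,     0,     0,     0,     0],      # Lose
    [0, 0.083, 0.167, 0.75,  0,     0,     0,     0,     0],      # P4
    [0, 0.111, 0.167, 0,     0.722, 0,     0,     0,     0],      # P5
    [0, 0.139, 0.167, 0,     0,     0.694, 0,     0,     0],      # P6
    [0, 0.139, 0.167, 0,     0,     0,     0.694, 0,     0],      # P8
    [0, 0.111, 0.167, 0,     0,     0,     0,     0.722, 0],      # P9
    [0, 0.083, 0.167, 0,     0,     0,     0,     0,     0.75],   # P10
]

q = [1.0] + [0.0] * 8    # start in Start
first_win = []           # Pr{first passage into Win at roll n}
prev = 0.0
for roll in range(1, 11):
    q = [sum(q[i] * P[i][j] for i in range(9)) for j in range(9)]
    first_win.append(round(q[1] - prev, 3))
    prev = q[1]

print(first_win)  # matches the Start-Win column in the table above
```

The Start-Lose column follows the same way from index 2, and the cumulative column is the running sum of Win and Lose together.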

Absorbing States

An absorbing state is a state j with pjj = 1.

Given that we start in state i, we can calculate the probability of being absorbed in state j. We essentially performed this calculation for the Gambler's Ruin problem by finding P(n) = (pij(n)) for large n, but we can use a more efficient analysis like that used for calculating first passage times.

Let 0, 1, . . . , k be transient states and k + 1, . . . , m - 1 be absorbing states.
Let qij = probability of being absorbed in state j given that we start in transient state i.
Then for each absorbing state j we have the relationship

    qij = pij + Σ (r=0..k) pir qrj ,  i = 0, 1, . . . , k

The first term accounts for going directly to j; the sum accounts for going first to a transient state r and from there eventually to j.

For fixed j (an absorbing state) we have k + 1 linear equations in the k + 1 unknowns qij, i = 0, 1, . . . , k.

Absorbing States - Gambler's Ruin

Suppose that we start with $2 and want to calculate the probability of going broke, i.e., of being absorbed in state 0.

Since state 4 is also absorbing, q40 = 0, and thus

    q20 = p20 + p21 q10 + p22 q20 + p23 q30 (+ p24 q40)
    q10 = p10 + p11 q10 + p12 q20 + p13 q30 + 0
    q30 = p30 + p31 q10 + p32 q20 + p33 q30 + 0

where

            0     1     2     3     4
      0     1     0     0     0     0
      1    1-p    0     p     0     0
  P = 2     0    1-p    0     p     0
      3     0     0    1-p    0     p
      4     0     0     0     0     1

Solution to Gambler's Ruin Example


Now we have three equations with three unknowns.
Using p = 0.75 (probability of winning a single bet)

we have
q20 = 0 + 0.25 q10 + 0.75 q30
q10 = 0.25 + 0.75 q20
q30 = 0 + 0.25 q20
Solving yields q10 = 0.325, q20 = 0.1, q30 = 0.025
(This is consistent with the values found earlier.)
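The same absorption probabilities can be computed by iterating q = b + Aq, where A holds the transition probabilities among the transient states 1, 2, 3 and b the one-step probabilities of absorption into state 0. A sketch with p = 0.75:

```python
# Gambler's Ruin: absorption into state 0 (going broke), p = 0.75.
p = 0.75
A = [[0.0,   p,   0.0],   # from state 1 to transient states 1, 2, 3
     [1 - p, 0.0, p],     # from state 2
     [0.0, 1 - p, 0.0]]   # from state 3
b = [1 - p, 0.0, 0.0]     # one-step absorption into state 0

q = [0.0, 0.0, 0.0]       # q[i-1] approximates q_i0
for _ in range(1000):
    q = [b[i] + sum(A[i][r] * q[r] for r in range(3)) for i in range(3)]

print([round(x, 3) for x in q])  # → [0.325, 0.1, 0.025]
```

The iteration converges because A is sub-stochastic (its spectral radius is below 1), and it reproduces q10 = 0.325, q20 = 0.1, q30 = 0.025.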

What You Should Know About the Mathematics of DTMCs

  How to classify states.
  What an ergodic process is.
  How to perform economic analysis.
  How to compute first passage times.
  How to compute absorption probabilities.
