
Solutions Manual

to accompany

Probability,
Random Variables
and
Stochastic Processes
Fourth Edition

Athanasios Papoulis
Polytechnic University

S. Unnikrishna Pillai
Polytechnic University
Solutions Manual to accompany
PROBABILITY, RANDOM VARIABLES AND STOCHASTIC PROCESSES, FOURTH EDITION
ATHANASIOS PAPOULIS

Published by McGraw-Hill Higher Education, an imprint of The McGraw-Hill Companies, Inc., 1221 Avenue of the Americas,
New York, NY 10020. Copyright © 2002 by The McGraw-Hill Companies, Inc. All rights reserved.

The contents, or parts thereof, may be reproduced in print form solely for classroom use with PROBABILITY, RANDOM
VARIABLES AND STOCHASTIC PROCESSES, FOURTH EDITION, provided such reproductions bear copyright notice, but may
not be reproduced in any other form or for any other purpose without the prior written consent of The McGraw-Hill Companies, Inc.,
including, but not limited to, in any network or other electronic storage or transmission, or broadcast for distance learning.

www.mhhe.com
Problem Solutions for Chapter 3
3.1 (a) P(A occurs at least twice in n trials)
= 1 − P(A never occurs in n trials) − P(A occurs once in n trials)
= 1 − (1 − p)^n − np(1 − p)^(n−1)

(b) P(A occurs at least thrice in n trials)
= 1 − P(A never occurs in n trials) − P(A occurs once in n trials)
  − P(A occurs twice in n trials)
= 1 − (1 − p)^n − np(1 − p)^(n−1) − [n(n − 1)/2] p^2 (1 − p)^(n−2)

3.2
P(double six) = (1/6)(1/6) = 1/36

P("double six at least three times in n = 50 trials")
= 1 − C(50,0)(1/36)^0(35/36)^50 − C(50,1)(1/36)^1(35/36)^49 − C(50,2)(1/36)^2(35/36)^48
= 0.162
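
As a quick numerical check (not part of the original manual), the same at-least-three probability can be evaluated with Python's standard library:

```python
from math import comb

p, n = 1 / 36, 50       # P(double six) per throw, number of throws
p_at_most_two = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(3))
print(1 - p_at_most_two)    # ≈ 0.162
```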
3.6 (a)
p1 = 1 − (5/6)^6 = 0.665

(b)
1 − (5/6)^12 − C(12,1)(1/6)(5/6)^11 = 0.619

(c)
1 − (5/6)^18 − C(18,1)(1/6)(5/6)^17 − C(18,2)(1/6)^2(5/6)^16 = 0.597
3.7 (a) Let n represent the number of wins required in 50 games so that
the net gain or loss does not exceed $1. This gives the net gain to be

−1 < n − (50 − n)/2 < 1
16 < n < 17.3
n = 17

P(net gain does not exceed $1) = C(50,17) (1/4)^17 (3/4)^33 = 0.0432
P(net gain or loss exceeds $1) = 1 − 0.0432 = 0.9568

(b) Let n represent the number of wins required so that the net gain
or loss does not exceed $5. This gives

−5 < n − (50 − n)/2 < 5
13.3 < n < 20

P(net gain does not exceed $5) = Σ_{n=14}^{19} C(50,n) (1/4)^n (3/4)^(50−n) = 0.349
P(net gain or loss exceeds $5) = 1 − 0.349 = 0.651
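
A short numerical check (not part of the original manual), assuming, as in the solution above, a win probability of 1/4 per game:

```python
from math import comb

p = 1 / 4                                   # win probability per game
pmf = lambda k: comb(50, k) * p**k * (1 - p)**(50 - k)

print(pmf(17))                              # (a) ≈ 0.043
print(sum(pmf(k) for k in range(14, 20)))   # (b) ≈ 0.349
```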
3.8 Define the events
A = "r successes in n Bernoulli trials"
B = "success at the ith Bernoulli trial"
C = "r − 1 successes in the remaining n − 1 Bernoulli trials excluding the ith trial"

P(A) = C(n, r) p^r q^(n−r)
P(B) = p
P(C) = C(n−1, r−1) p^(r−1) q^(n−r)

We need
P(B|A) = P(AB)/P(A) = P(BC)/P(A) = P(B) P(C)/P(A) = r/n.
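
The identity P(B|A) = r/n can be spot-checked by simulation (a sketch, not part of the original manual; the parameter values below are arbitrary):

```python
import random

n, r, p, trials = 10, 4, 0.3, 200_000
hits = total = 0
for _ in range(trials):
    x = [random.random() < p for _ in range(n)]
    if sum(x) == r:          # condition on A: exactly r successes
        total += 1
        hits += x[0]         # B: success at the first trial (i = 1)
print(hits / total, r / n)   # both ≈ 0.4
```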
 
3.9 There are C(52,13) ways of selecting 13 cards out of 52 cards. The
number of ways to select 13 cards of any suit (out of 13 cards) equals
C(13,13) = 1. Four such (mutually exclusive) suits give the total number
of favorable outcomes to be 4. Thus the desired probability is given by

4 / C(52,13) = 6.3 × 10^−12
3.10 Using the hint, we obtain

p (Nk+1 − Nk) = q (Nk − Nk−1) − 1

Let
Mk+1 = Nk+1 − Nk

so that the above iteration gives

Mk+1 = (q/p) Mk − 1/p
     = (q/p)^k M1 − [1/(p − q)] [1 − (q/p)^k] ,     p ≠ q
     = M1 − k/p ,                                   p = q

This gives

Ni = Σ_{k=0}^{i−1} Mk+1
   = (M1 + 1/(p − q)) Σ_{k=0}^{i−1} (q/p)^k − i/(p − q) ,     p ≠ q
   = i M1 − i(i − 1)/(2p) ,                                   p = q

where we have used N0 = 0. Similarly Na+b = 0 gives

M1 + 1/(p − q) = [(a + b)/(p − q)] · (1 − q/p)/(1 − (q/p)^(a+b)) .

Thus
Ni = [(a + b)/(p − q)] · (1 − (q/p)^i)/(1 − (q/p)^(a+b)) − i/(p − q) ,     p ≠ q
   = i(a + b − i) ,                                                        p = q

which gives for i = a

Na = [(a + b)/(p − q)] · (1 − (q/p)^a)/(1 − (q/p)^(a+b)) − a/(p − q) ,     p ≠ q
   = ab ,                                                                  p = q

   = b/(2p − 1) − [(a + b)/(2p − 1)] · (1 − (p/q)^b)/(1 − (p/q)^(a+b)) ,   p ≠ q
   = ab ,                                                                  p = q
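
As a sanity check (not part of the original manual), a Monte Carlo estimate of the expected duration Na can be compared with the p = q result Na = ab:

```python
import random

def mean_duration(a, b, p, trials=50_000):
    """Monte Carlo estimate of N_a: expected number of plays until the
    fortune, starting at a, reaches 0 or a + b (win probability p per play)."""
    total = 0
    for _ in range(trials):
        x, steps = a, 0
        while 0 < x < a + b:
            x += 1 if random.random() < p else -1
            steps += 1
        total += steps
    return total / trials

print(mean_duration(3, 4, 0.5), 3 * 4)   # p = q: both ≈ ab = 12
```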
3.11
Pn = p Pn+1 + q Pn−1

Arguing as in (3.43), we get the corresponding iteration equation

Pn+1 − Pn = (q/p)(Pn − Pn−1)

and proceed as in Example 3.15.
3.12 Suppose one bets on k = 1, 2, ..., 6. Then

p1 = P(k appears on one die)        = C(3,1) (1/6) (5/6)^2
p2 = P(k appears on two dice)       = C(3,2) (1/6)^2 (5/6)
p3 = P(k appears on all three dice) = (1/6)^3
p0 = P(k appears on none)           = (5/6)^3

Thus, we get

Net gain = 2 p1 + 3 p2 + 4 p3 − p0 = 0.343.
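
A one-line numerical check (not part of the original manual):

```python
from math import comb

p = [comb(3, k) * (1/6)**k * (5/6)**(3 - k) for k in range(4)]   # p0..p3
print(2*p[1] + 3*p[2] + 4*p[3] - p[0])                           # ≈ 0.343
```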

Chapter 15

15.1 The chain represented by


 
        |  0   1/2  1/2 |
  P  =  | 1/2   0   1/2 |
        | 1/2  1/2   0  |

is irreducible and aperiodic.


The second chain is also irreducible and aperiodic.
The third chain has two aperiodic closed sets {e1 , e2 } and {e3 , e4 }
and a transient state e5 .

15.2 Note that both the row sums and column sums are unity in this
case. Hence P represents a doubly stochastic matrix here, and
 
          | 1  1  · · ·  1  1 |
P^n  →    | 1  1  · · ·  1  1 |  · 1/(m + 1)      as n → ∞,
          | :  :          :  : |
          | 1  1  · · ·  1  1 |

so that
lim_{n→∞} P{xn = ek} = 1/(m + 1) ,     k = 0, 1, 2, · · · m.
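
A quick numerical illustration (not part of the original manual), using the 3 × 3 chain of 15.1 and numpy:

```python
import numpy as np

P = np.array([[0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])          # irreducible, aperiodic, doubly stochastic
print(np.linalg.matrix_power(P, 20))     # every entry ≈ 1/3 = 1/(m + 1) with m = 2
```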
15.3 This is the “success runs” problem discussed in Example 15-11
and 15-23. From Example 15-23, we get
ui+1 = pi,i+1 ui = ui/(i + 1) = u0/(i + 1)!

so that from (15-206)

Σ_{k=0}^{∞} uk = u0 Σ_{k=0}^{∞} 1/k! = e · u0 = 1

gives u0 = 1/e and the steady state probabilities are given by


uk = (1/e)/k! ,     k = 1, 2, · · ·
15.4 If the zeroth generation has size m, then the overall process may
be considered as the sum of m independent and identically distributed
branching processes xn^(k), k = 1, 2, · · · m, each corresponding to unity
size at the zeroth generation. Hence if π0 represents the probability of
extinction for any one of these individual processes, then the overall
probability of extinction is given by

lim_{n→∞} P[xn = 0 | x0 = m]
  = P[{xn^(1) = 0 | x0^(1) = 1} ∩ {xn^(2) = 0 | x0^(2) = 1} ∩ · · · ∩ {xn^(m) = 0 | x0^(m) = 1}]
  = Π_{k=1}^{m} P[xn^(k) = 0 | x0^(k) = 1]
  = π0^m

15.5 From (15-288)-(15-289),

P (z) = p0 + p1 z + p2 z 2 , since pk = 0, k ≥ 3.

Also p0 + p1 + p2 = 1, and from (15-307) the extinction probability is


given by solving the equation

P (z) = z.

Notice that
P (z) − z = p0 − (1 − p1 )z + p2 z 2
= p0 − (p0 + p2 )z + p2 z 2
= (z − 1)(p2 z − p0 )

and hence the two roots of the equation P (z) = z are given by
z1 = 1 ,     z2 = p0/p2 .
Thus if p2 < p0 , then z2 > 1 and hence the smallest positive root of
P (z) = z is 1, and it represents the probability of extinction. It follows

that such a tribe, which does not produce offspring in abundance, is
bound to become extinct.
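
A numerical illustration (not part of the original manual; the offspring probabilities below are made up): iterating z ← P(z) from z = 0 converges to the smallest non-negative root of P(z) = z, i.e. the extinction probability.

```python
def extinction_prob(p0, p1, p2, iters=500):
    """Fixed-point iteration z <- P(z) = p0 + p1*z + p2*z^2 starting at 0."""
    z = 0.0
    for _ in range(iters):
        z = p0 + p1 * z + p2 * z * z
    return z

print(extinction_prob(0.5, 0.3, 0.2))   # p2 < p0: ≈ 1 (extinction certain)
print(extinction_prob(0.2, 0.3, 0.5))   # p2 > p0: ≈ p0/p2 = 0.4
```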
15.6 Define the branching process {xn }
xn+1 = Σ_{k=1}^{xn} yk

where yk are i.i.d random variables with common moment generating


function P (z) so that (see (15-287)-(15-289))
P 0 (1) = E{yk } = µ.
Thus
E{xn+1 | xn} = E{ Σ_{k=1}^{xn} yk | xn = m }
             = E{ Σ_{k=1}^{m} yk | xn = m }
             = E{ Σ_{k=1}^{m} yk } = m E{yk} = xn µ

Similarly
E{xn+2 |xn } = E{E{xn+2 |xn+1 , xn }}
= E{E{xn+2 |xn+1 }|xn }
= E{µxn+1 |xn } = µ2 xn

and in general we obtain


E{xn+r |xn } = µr xn . (i)
Also from (15-310)-(15-311)
E{xn } = µn . (ii)
Define
wn = xn/µ^n .                                   (iii)
This gives
E{wn } = 1.
Dividing both sides of (i) by µ^(n+r) we get

E{ xn+r/µ^(n+r) | xn = x } = µ^r · xn/µ^(n+r) = xn/µ^n = wn

or
E{ wn+r | wn = x/µ^n ≜ w } = wn
which gives
E{wn+r |wn } = wn ,
the desired result.
15.7
sn = x 1 + x 2 + · · · + x n
where xn are i.i.d. random variables. We have

sn+1 = sn + xn+1

so that

E{sn+1 |sn } = E{sn + xn+1 |sn } = sn + E{xn+1 } = sn .

Hence {sn } represents a Martingale.


15.8 (a) From Bayes’ theorem

P{xn = j | xn+1 = i} = P{xn+1 = i | xn = j} P{xn = j} / P{xn+1 = i}
                     = qj pji / qi  ≜  p*ij ,                        (i)

where we have assumed the chain to be in steady state.


(b) Notice that time-reversibility is equivalent to

p∗ij = pij

and using (i) this gives


p*ij = qj pji / qi = pij                                             (ii)

or, for a time-reversible chain we get

qj pji = qi pij . (iii)



Thus using (ii) we obtain by direct substitution


pij pjk pki = (qj pji/qi) (qk pkj/qj) (qi pik/qk)
            = pik pkj pji ,

the desired result.


15.9 (a) It is given that A = AT , (aij = aji ) and aij > 0. Define the ith
row sum

ri = Σ_k aik > 0 ,     i = 1, 2, · · ·

and let
pij = aij / Σ_k aik = aij/ri .
Then
pji = aji / Σ_m ajm = aji/rj = aij/rj
    = (ri/rj)(aij/ri) = (ri/rj) pij                                  (i)

or
ri pij = rj pji .
Hence
Σ_i ri pij = Σ_i rj pji = rj Σ_i pji = rj ,                          (ii)

since
Σ_i pji = Σ_i aji / rj = rj/rj = 1.
Notice that (ii) satisfies the steady state probability distribution equa-
tion (15-167) with
qi = c r i , i = 1, 2, · · ·
where c is given by
c Σ_i ri = Σ_i qi = 1   =⇒   c = 1 / Σ_i ri = 1 / Σ_i Σ_j aij .

Thus
qi = ri / Σ_i ri = Σ_j aij / Σ_i Σ_j aij > 0                         (iii)
represents the stationary probability distribution of the chain.
With (iii) in (i) we get
pji = (qi/qj) pij
or
pij = qj pji / qi = p*ij
and hence the chain is time-reversible.
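
A short numerical check of this result (not part of the original manual), using a random symmetric matrix and numpy:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((4, 4))
A = A + A.T                                   # symmetric with positive entries
P = A / A.sum(axis=1, keepdims=True)          # p_ij = a_ij / r_i
q = A.sum(axis=1) / A.sum()                   # q_i = r_i / sum_k r_k
print(np.allclose(q @ P, q))                  # True: q is stationary
print(np.allclose(q[:, None] * P, (q[:, None] * P).T))   # True: q_i p_ij = q_j p_ji
```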
15.10 (a) M = (mij ) is given by
M = (I − W )−1
or
(I − W )M = I
M = I + WM
which gives
mij = δij + Σ_k wik mkj ,     ei , ej ∈ T
    = δij + Σ_k pik mkj ,     ei , ej ∈ T

(b) The general case is solved in pages 743-744. From page 744,
with N = 6 (2 absorbing states; 5 transient states), and with r = p/q
we obtain

mij = (r^j − 1)(r^(6−i) − 1) / [(p − q)(r^6 − 1)] ,            j ≤ i

mij = (r^i − 1)(r^(6−i) − r^(j−i)) / [(p − q)(r^6 − 1)] ,      j ≥ i.



15.11 If a stochastic matrix A = (aij ), aij > 0 corresponds to the two-


step transition matrix of a Markov chain, then there must exist another
stochastic matrix P such that
A = P^2 ,     P = (pij)

where
pij > 0 ,     Σ_j pij = 1,

and this may not always be possible. For example, in a two-state chain,
let
        P  =  |   α     1 − α |
              | 1 − β     β   |

so that
        A = P^2  =  | α^2 + (1 − α)(1 − β)      (α + β)(1 − α)       |
                    | (α + β)(1 − β)            β^2 + (1 − α)(1 − β) |

This gives the sum of its diagonal entries to be

a11 + a22 = α^2 + 2(1 − α)(1 − β) + β^2
          = (α + β)^2 − 2(α + β) + 2                                 (i)
          = 1 + (α + β − 1)^2 ≥ 1.

Hence condition (i) is necessary. Since 0 < α < 1, 0 < β < 1, we also
get 1 < a11 + a22 ≤ 2. Further, the condition (i) is also sufficient in the
2 × 2 case, since a11 + a22 > 1 gives

(α + β − 1)2 = a11 + a22 − 1 > 0

and hence
α + β = 1 ± √(a11 + a22 − 1)
and this equation may be solved for all admissible set of values 0 <
α < 1 and 0 < β < 1.
15.12 In this case the chain is irreducible and aperiodic and there are
no absorption states. The steady state distribution {uk } satisfies (15-
167), and hence we get

uk = Σ_j uj pjk = Σ_{j=0}^{N} uj C(N, k) pj^k qj^(N−k) .

Then, if α > 0 and β > 0, “fixation to pure genes” does not occur.
15.13 The transition probabilities in all these cases are given by (page
765) (15A-7) for specific values of A(z) = B(z) as shown in Exam-
ples 15A-1, 15A-2 and 15A-3. The eigenvalues in general satisfy the
equation
Σ_j pij xj^(k) = λk xi^(k) ,     k = 0, 1, 2, · · · N

and trivially Σ_j pij = 1 for all i implies λ0 = 1 is an eigenvalue in all
cases.
However to determine the remaining eigenvalues we can exploit the
relation in (15A-7). From there the corresponding conditional moment
generating function in (15-291) is given by
G(s) = Σ_{j=0}^{N} pij s^j                                           (i)

where from (15A-7)


pij = {A^i(z)}_j {B^(N−i)(z)}_(N−j) / {A^i(z) B^(N−i)(z)}_N
    = [coefficient of s^j z^N in A^i(sz) B^(N−i)(z)] / {A^i(z) B^(N−i)(z)}_N          (ii)

Substituting (ii) in (i) we get the compact expression


G(s) = {A^i(sz) B^(N−i)(z)}_N / {A^i(z) B^(N−i)(z)}_N .              (iii)
Differentiating G(s) with respect to s we obtain
G′(s) = Σ_{j=0}^{N} pij j s^(j−1)
      = {i A^(i−1)(sz) A′(sz) z B^(N−i)(z)}_N / {A^i(z) B^(N−i)(z)}_N                 (iv)
      = i · {A^(i−1)(sz) A′(sz) B^(N−i)(z)}_(N−1) / {A^i(z) B^(N−i)(z)}_N .

Letting s = 1 in the above expression we get


G′(1) = Σ_{j=0}^{N} pij j = i · {A^(i−1)(z) A′(z) B^(N−i)(z)}_(N−1) / {A^i(z) B^(N−i)(z)}_N .      (v)
In the special case when A(z) = B(z), Eq.(v) reduces to
Σ_{j=0}^{N} pij j = λ1 i                                             (vi)

where
λ1 = {A^(N−1)(z) A′(z)}_(N−1) / {A^N(z)}_N .                         (vii)
Notice that (vi) can be written as
P x1 = λ1 x1 ,     x1 = [0, 1, 2, · · · N]^T
and by direct computation with A(z) = B(z) = (q + pz)2 (Example
15A-1) we obtain
λ1 = {(q + pz)^(2(N−1)) 2p(q + pz)}_N / {(q + pz)^(2N)}_N
   = 2p {(q + pz)^(2N−1)}_(N−1) / {(q + pz)^(2N)}_N
   = 2p C(2N−1, N−1) q^N p^(N−1) / [C(2N, N) q^N p^N] = 1.

Thus Σ_{j=0}^{N} pij j = i, and from (15-224) these chains represent Martingales.
(Similarly for Examples 15A-2 and 15A-3 as well.)


To determine the remaining eigenvalues we differentiate G′(s) once
more. This gives

G″(s) = Σ_{j=0}^{N} pij j(j − 1) s^(j−2)
      = {i(i − 1) A^(i−2)(sz) [A′(sz)]^2 z B^(N−i)(z) + i A^(i−1)(sz) A″(sz) z B^(N−i)(z)}_(N−1) / {A^i(z) B^(N−i)(z)}_N
      = {i A^(i−2)(sz) B^(N−i)(z) [(i − 1)(A′(sz))^2 + A(sz) A″(sz)]}_(N−2) / {A^i(z) B^(N−i)(z)}_N .

With s = 1, and A(z) = B(z), the above expression simplifies to

Σ_{j=0}^{N} pij j(j − 1) = λ2 i(i − 1) + i µ2                        (viii)

where
λ2 = {A^(N−2)(z) [A′(z)]^2}_(N−2) / {A^N(z)}_N
and
µ2 = {A^(N−1)(z) A″(z)}_(N−2) / {A^N(z)}_N .
Eq. (viii) can be rewritten as

Σ_{j=0}^{N} pij j^2 = λ2 i^2 + (polynomial in i of degree ≤ 1)

and in general repeating this procedure it follows that (show this)

Σ_{j=0}^{N} pij j^k = λk i^k + (polynomial in i of degree ≤ k − 1)          (ix)

where
λk = {A^(N−k)(z) [A′(z)]^k}_(N−k) / {A^N(z)}_N ,     k = 1, 2, · · · N.     (x)
Equations (viii)–(x) motivate us to consider the identities

P qk = λk qk                                                         (xi)

where qk are polynomials in i of degree ≤ k, and by proper choice of
constants they can be chosen in that form. It follows that λk , k =
1, 2, · · · N given by (ix) represent the desired eigenvalues.

(a) The transition probabilities in this case follow from Example 15A-1
(pages 765-766) with A(z) = B(z) = (q + pz)^2. Thus using (ix) we
obtain the desired eigenvalues to be

λk = {(q + pz)^(2(N−k)) [2p(q + pz)]^k}_(N−k) / {(q + pz)^(2N)}_N
   = 2^k p^k {(q + pz)^(2N−k)}_(N−k) / {(q + pz)^(2N)}_N
   = 2^k C(2N−k, N−k) / C(2N, N) ,     k = 1, 2, · · · N.

(b) The transition probabilities in this case follow from Example 15A-2
(page 766) with
A(z) = B(z) = e^(λ(z−1))
and hence
λk = {e^(λ(N−k)(z−1)) λ^k e^(λk(z−1))}_(N−k) / {e^(λN(z−1))}_N
   = λ^k {e^(λNz)}_(N−k) / {e^(λNz)}_N = λ^k [(λN)^(N−k)/(N − k)!] / [(λN)^N/N!]
   = N! / [(N − k)! N^k] = (1 − 1/N)(1 − 2/N) · · · (1 − (k − 1)/N) ,     k = 1, 2, · · · N

(c) The transition probabilities in this case follow from Example 15A-3
(pages 766-767) with
A(z) = B(z) = q/(1 − pz).
Thus
λk = p^k {1/(1 − pz)^(N+k)}_(N−k) / {1/(1 − pz)^N}_N
   = (−1)^k C(−(N + k), N − k) / C(−N, N)
   = C(2N − 1, N − k) / C(2N − 1, N) ,     k = 1, 2, · · · N

15.14 From (15-240), the mean time to absorption vector is given by

m = (I − W )−1 E, E = [1, 1, · · · 1]T ,

where
Wjk = pjk ,     j, k = 1, 2, · · · N − 1,

with pjk as given in (15-30) and (15-31) respectively.

15.15 The mean time to absorption satisfies (15-240). From there

mi = 1 + Σ_{k∈T} pik mk = 1 + pi,i+1 mi+1 + pi,i−1 mi−1

= 1 + p mi+1 + q mi−1 ,

or
mk = 1 + p mk+1 + q mk−1 .

This gives
p (mk+1 − mk ) = q (mk − mk−1 ) − 1

Let
Mk+1 = mk+1 − mk

so that the above iteration gives

Mk+1 = (q/p) Mk − 1/p
     = (q/p)^k M1 − (1/p)[1 + q/p + (q/p)^2 + · · · + (q/p)^(k−1)]
     = (q/p)^k M1 − [1/(p − q)] [1 − (q/p)^k] ,     p ≠ q
     = M1 − k/p ,                                   p = q

This gives

mi = Σ_{k=0}^{i−1} Mk+1
   = (M1 + 1/(p − q)) Σ_{k=0}^{i−1} (q/p)^k − i/(p − q) ,     p ≠ q
   = i M1 − i(i − 1)/(2p) ,                                   p = q

   = (M1 + 1/(p − q)) (1 − (q/p)^i)/(1 − q/p) − i/(p − q) ,   p ≠ q
   = i M1 − i(i − 1)/(2p) ,                                   p = q

where we have used m0 = 0. Similarly ma+b = 0 gives

M1 + 1/(p − q) = [(a + b)/(p − q)] · (1 − q/p)/(1 − (q/p)^(a+b)) .

Thus
mi = [(a + b)/(p − q)] · (1 − (q/p)^i)/(1 − (q/p)^(a+b)) − i/(p − q) ,     p ≠ q
   = i(a + b − i) ,                                                        p = q
which gives for i = a

ma = [(a + b)/(p − q)] · (1 − (q/p)^a)/(1 − (q/p)^(a+b)) − a/(p − q) ,     p ≠ q
   = ab ,                                                                  p = q

   = b/(2p − 1) − [(a + b)/(2p − 1)] · (1 − (p/q)^b)/(1 − (p/q)^(a+b)) ,   p ≠ q
   = ab ,                                                                  p = q

by writing

(1 − (q/p)^a)/(1 − (q/p)^(a+b)) = 1 − [(q/p)^a − (q/p)^(a+b)]/(1 − (q/p)^(a+b))
                                = 1 − (1 − (p/q)^b)/(1 − (p/q)^(a+b)) .
(see also problem 3-10).
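
As a check (not part of the original manual), the closed form for ma can be compared with a direct numerical solution of the linear equations mi = 1 + p mi+1 + q mi−1 with m0 = ma+b = 0; the values a = 3, b = 7, p = 0.6 are illustrative.

```python
import numpy as np

a, b, p = 3, 7, 0.6
q, N = 1 - p, a + b
A = np.eye(N - 1)                    # unknowns m_1 .. m_{N-1}
for i in range(1, N):
    if i + 1 <= N - 1:
        A[i - 1, i] -= p
    if i >= 2:
        A[i - 1, i - 2] -= q
m = np.linalg.solve(A, np.ones(N - 1))
r = q / p
closed = (N / (p - q)) * (1 - r**a) / (1 - r**N) - a / (p - q)
print(m[a - 1], closed)              # both ≈ 20.8
```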

Chapter 16

16.1 Use (16-132) with r = 1. This gives

pn = (ρ^n/n!) p0 ,     n ≤ 1
   = ρ^n p0 ,          1 < n ≤ m

i.e., pn = ρ^n p0 for 0 ≤ n ≤ m. Thus

Σ_{n=0}^{m} pn = p0 Σ_{n=0}^{m} ρ^n = p0 (1 − ρ^(m+1))/(1 − ρ) = 1

=⇒ p0 = (1 − ρ)/(1 − ρ^(m+1))

and hence

pn = [(1 − ρ)/(1 − ρ^(m+1))] ρ^n ,     0 ≤ n ≤ m,   ρ ≠ 1

and in the limit ρ → 1 we get

pn = 1/(m + 1) ,     ρ = 1.

16.2 (a) Let n1 (t) = X + Y , where X and Y represent the two queues.
Then
pn = P{n1(t) = n} = P{X + Y = n}
   = Σ_{k=0}^{n} P{X = k} P{Y = n − k}
   = Σ_{k=0}^{n} (1 − ρ) ρ^k (1 − ρ) ρ^(n−k)                         (i)
   = (n + 1)(1 − ρ)^2 ρ^n ,     n = 0, 1, 2, · · ·

where ρ = λ/µ.
(b) When the two queues are merged, the new input rate is λ′ =
λ + λ = 2λ. Thus from (16-102)

pn = (λ′/µ)^n/n! p0 = (2ρ)^n/n! p0 ,               n < 2
   = (2^2/2!) (λ′/2µ)^n p0 = 2 ρ^n p0 ,            n ≥ 2.

Hence
Σ_{k=0}^{∞} pk = p0 (1 + 2ρ + 2 Σ_{k=2}^{∞} ρ^k)
              = p0 (1 + 2ρ + 2ρ^2/(1 − ρ))
              = [p0/(1 − ρ)] ((1 + 2ρ)(1 − ρ) + 2ρ^2)
              = [p0/(1 − ρ)] (1 + ρ) = 1

=⇒ p0 = (1 − ρ)/(1 + ρ) ,     (ρ = λ/µ).                             (ii)

Thus
pn = 2 (1 − ρ) ρ^n/(1 + ρ) ,     n ≥ 1
   = (1 − ρ)/(1 + ρ) ,           n = 0                               (iii)

(c) For an M/M/1 queue the average number of items waiting is
given by (use (16-106) with r = 1)

L′1 = E{X} = Σ_{n=2}^{∞} (n − 1) pn

where pn is as in (16-88). Thus

L′1 = Σ_{n=2}^{∞} (n − 1)(1 − ρ) ρ^n
    = (1 − ρ) ρ^2 Σ_{n=2}^{∞} (n − 1) ρ^(n−2)
    = (1 − ρ) ρ^2 Σ_{k=1}^{∞} k ρ^(k−1)                              (iv)
    = (1 − ρ) ρ^2 · 1/(1 − ρ)^2 = ρ^2/(1 − ρ).

Since n1(t) = X + Y we have

L1 = E{n1(t)} = E{X} + E{Y} = 2 L′1 = 2ρ^2/(1 − ρ)                   (v)

For L2 we can use (16-106)-(16-107) with r = 2. Using (iii), this
gives

L2 = [ρ/(1 − ρ)^2] pr
   = 2 [(1 − ρ) ρ^2/(1 + ρ)] · ρ/(1 − ρ)^2 = 2ρ^3/(1 − ρ^2)          (vi)
   = [2ρ^2/(1 − ρ)] · ρ/(1 + ρ) < L1

From (vi), a single queue configuration is more efficient than two
separate queues.
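
A quick numerical comparison (not part of the original manual; ρ = 0.8 is an illustrative utilization):

```python
rho = 0.8                                  # per-server utilization lambda/mu
L1 = 2 * rho**2 / (1 - rho)                # two separate M/M/1 queues, from (v)
L2 = 2 * rho**3 / (1 - rho**2)             # single shared two-server queue, from (vi)
print(L1, L2)                              # 6.4 vs about 2.84: sharing wins
```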
16.3 The only non-zero probabilities of this process are

λ0,0 = −λ0 = −mλ ,        λ0,1 = mλ

λi,i+1 = (m − i) λ ,      λi,i−1 = i µ

λi,i = −[(m − i) λ + i µ] ,     i = 1, 2, · · · , m − 1

λm,m = −λm,m−1 = −mµ.

Substituting these into (16-63) of the text, we get

m λ p0 = µ p1 (i)

[(m − i)λ + iµ] pi = (m − i + 1) λ pi−1 + (i + 1) µ pi+1 ,     i = 1, 2, · · · , m − 1      (ii)
and
m µ pm = λ pm−1 . (iii)
Solving (i)-(iii) we get
pi = C(m, i) [λ/(λ + µ)]^i [µ/(λ + µ)]^(m−i) ,     i = 0, 1, 2, · · · , m
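
A numerical check of this binomial stationary distribution (not part of the original manual; m, λ, µ below are arbitrary):

```python
import numpy as np
from math import comb

m, lam, mu = 4, 1.3, 0.7
Q = np.zeros((m + 1, m + 1))               # generator of the birth-death chain
for i in range(m + 1):
    if i < m:
        Q[i, i + 1] = (m - i) * lam
    if i > 0:
        Q[i, i - 1] = i * mu
    Q[i, i] = -Q[i].sum()
p = np.array([comb(m, i) * (lam / (lam + mu))**i * (mu / (lam + mu))**(m - i)
              for i in range(m + 1)])
print(np.allclose(p @ Q, 0))               # True: p is the stationary distribution
```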

16.4 (a) In this case

pn = (λ/µ1)(λ/µ1) · · · (λ/µ1) p0 = ρ1^n p0 ,                                        n < m
   = (λ/µ1) · · · (λ/µ1)(λ/µ2) · · · (λ/µ2) p0 = ρ1^(m−1) ρ2^(n−m+1) p0 ,            n ≥ m,

where
Σ_{n=0}^{∞} pn = p0 [ Σ_{k=0}^{m−1} ρ1^k + ρ1^(m−1) ρ2 Σ_{n=0}^{∞} ρ2^n ]
              = p0 [ (1 − ρ1^m)/(1 − ρ1) + ρ1^(m−1) ρ2/(1 − ρ2) ] = 1

gives
p0 = [ (1 − ρ1^m)/(1 − ρ1) + ρ1^(m−1) ρ2/(1 − ρ2) ]^(−1) .

(b)
L = Σ_{n=0}^{∞} n pn
  = p0 [ Σ_{n=0}^{m−1} n ρ1^n + Σ_{n=m}^{∞} n ρ1^(m−1) ρ2^(n−m+1) ]
  = p0 [ ρ1 Σ_{n=0}^{m−1} n ρ1^(n−1) + ρ1 (ρ1/ρ2)^(m−2) Σ_{n=m}^{∞} n ρ2^(n−1) ]
  = p0 [ ρ1 (d/dρ1) Σ_{n=0}^{m−1} ρ1^n + ρ1 (ρ1/ρ2)^(m−2) (d/dρ2) Σ_{n=m}^{∞} ρ2^n ]
  = p0 [ ρ1 (d/dρ1)((1 − ρ1^m)/(1 − ρ1)) + ρ1 (ρ1/ρ2)^(m−2) (d/dρ2)(ρ2^m/(1 − ρ2)) ]
  = p0 [ ρ1 (1 − m ρ1^(m−1) + (m − 1)ρ1^m)/(1 − ρ1)^2 + ρ2 ρ1^(m−1) (m − (m − 1)ρ2)/(1 − ρ2)^2 ] .

16.5 In this case

λj = λ ,      j < r          µj = j µ ,     j < r
   = p λ ,    j ≥ r             = r µ ,     j ≥ r.

Using (16-73)-(16-74), this gives

pn = [(λ/µ)^n / n!] p0 ,                          n < r
   = [(λ/µ)^r / r!] (pλ/rµ)^(n−r) p0 ,            n ≥ r.

16.6
P{w > t} = Σ_{n=r}^{m−1} pn P(w > t | n)
         = Σ_{n=r}^{m−1} pn (1 − Fw(t|n)) = Σ_{n=r}^{m−1} pr (λ/γµ)^(n−r) (1 − Fw(t|n))

fw(t|n) = [(γµ)^(n−r+1) t^(n−r) / (n − r)!] e^(−γµt)                 (see (16-116))

and
Fw(t|n) = 1 − Σ_{k=0}^{n−r} [(γµt)^k / k!] e^(−γµt)                  (see 4.)

so that
1 − Fw(t|n) = Σ_{k=0}^{n−r} [(γµt)^k / k!] e^(−γµt)

P{w > t} = Σ_{n=r}^{m−1} pr (λ/γµ)^(n−r) Σ_{k=0}^{n−r} [(γµt)^k / k!] e^(−γµt)
         = pr Σ_{i=0}^{m−r−1} ρ^i Σ_{k=0}^{i} [(γµt)^k / k!] e^(−γµt) ,     n − r = i
         = pr e^(−γµt) Σ_{k=0}^{m−r−1} Σ_{i=0}^{k} ρ^k (γµt)^i / i!

Since Σ_{k=0}^{m−r−1} Σ_{i=0}^{k} = Σ_{i=0}^{m−r−1} Σ_{k=i}^{m−r−1} ,

P{w > t} = pr e^(−γµt) Σ_{i=0}^{m−r−1} [(γµt)^i / i!] Σ_{k=i}^{m−r−1} ρ^k
         = [pr/(1 − ρ)] e^(−γµt) Σ_{i=0}^{m−r−1} [(γµt)^i / i!] (ρ^i − ρ^(m−r)) ,     ρ = λ/γµ.

Note that m → ∞ =⇒ M/M/r/m =⇒ M/M/r and

P(w > t) = [pr/(1 − ρ)] e^(−γµt) Σ_{i=0}^{∞} (γµρt)^i / i!
         = [pr/(1 − ρ)] e^(−γµ(1−ρ)t) ,     t > 0,

and it agrees with (16.119).


16.7 (a) Use the hints.

(b)
−(λ + µ) Σ_{n=1}^{∞} pn z^n + (µ/z) Σ_{n=1}^{∞} pn+1 z^(n+1) + λ Σ_{n=1}^{∞} Σ_{k=1}^{n} pn−k ck z^n = 0

−(ρ + 1)(P(z) − p0) + (1/z)(P(z) − p0 − p1 z) + ρ Σ_{k=1}^{∞} ck z^k Σ_{m=0}^{∞} pm z^m = 0

which gives
P(z)[1 − z − ρz (1 − C(z))] = p0 (1 − z)
or
P(z) = p0 (1 − z) / [1 − z − ρz (1 − C(z))] .

Letting z → 1,
1 = P(1) = −p0 / [−1 − ρ(1 − C(z)) + ρz C′(z)]|_(z=1) = −p0 / [−1 + ρ C′(1)]
=⇒ p0 = 1 − ρ′ ,     ρ′ = ρ C′(1).

Let
D(z) = (1 − C(z)) / (1 − z).
Then
P(z) = (1 − ρ′) / (1 − ρ z D(z)) .

(c) This gives

P′(z) = (1 − ρ′) [ρ D(z) + ρ z D′(z)] / (1 − ρ z D(z))^2

L = P′(1) = [(1 − ρ′)/(1 − ρ′)^2] ρ (D(1) + D′(1)) = [ρ/(1 − ρ′)] (C′(1) + D′(1))

C′(1) = E(X)

D(z) = (1 − C(z))/(1 − z)
D′(z) = [(1 − z)(−C′(z)) − (1 − C(z))(−1)] / (1 − z)^2
      = [1 − C(z) − (1 − z) C′(z)] / (1 − z)^2

By L'Hôpital's rule

D′(1) = lim_{z→1} [−C′(z) + C′(z) − (1 − z) C″(z)] / [−2(1 − z)]
      = lim_{z→1} C″(z)/2 = C″(1)/2 = (1/2) Σ_k k(k − 1) ck = [E(X^2) − E(X)]/2

L = ρ [E(X) + E(X^2)] / [2 (1 − ρ E(X))] .

(d) Here C(z) = z^m, so E(X) = m and

P(z) = (1 − ρm) / (1 − ρ Σ_{k=1}^{m} z^k)

D(z) = (1 − z^m)/(1 − z) = Σ_{k=0}^{m−1} z^k

E(X) = m ,     E(X^2) = m^2

L = ρ(m + m^2) / [2(1 − ρm)]

(e) Here C(z) = qz/(1 − pz), so

D(z) = (1 − C(z))/(1 − z) = [1 − pz − qz] / [(1 − z)(1 − pz)] = (1 − z)/[(1 − z)(1 − pz)] = 1/(1 − pz)

P(z) = (1 − ρ′)/(1 − ρ z D(z)) = (1 − ρ′)(1 − pz)/(1 − pz − ρz) = (1 − ρ′)(1 − pz)/[1 − (p + ρ)z]

C′(z) = [(1 − pz)q − qz(−p)]/(1 − pz)^2 ,  so that  C′(1) = q/q^2 = 1/q ,   D(1) = C′(1),

and, as in (c), D′(1) = C″(1)/2. Hence

L = P′(1) = [1/(1 − ρ′)] [ρ C′(1) + ρ D′(1)]
  = [1/(1 − ρ′)] [ρ E(X) + ρ (E(X^2) − E(X))/2] = ρ [E(X) + E(X^2)] / [2(1 − ρ′)] .

16.8 (a) Use the hints.


(b)

−(λ + µ) Σ_{n=1}^{∞} pn z^n + (µ/z^m) Σ_{n=1}^{∞} pn+m z^(n+m) + λ z Σ_{n=1}^{∞} pn−1 z^(n−1) = 0

or
−(1 + ρ)(P(z) − p0) + (1/z^m)(P(z) − Σ_{k=0}^{m} pk z^k) + ρ z P(z) = 0

which gives
P(z)[ρ z^(m+1) − (ρ + 1) z^m + 1] = Σ_{k=0}^{m} pk z^k − p0 (1 + ρ) z^m

or
P(z) = [Σ_{k=0}^{m} pk z^k − p0 (1 + ρ) z^m] / [ρ z^(m+1) − (ρ + 1) z^m + 1] = N(z)/M(z).      (i)

(c) Consider the denominator polynomial M (z) in (i) given by

M (z) = ρ z m+1 − (1 + ρ) z m + 1 = f (z) + g(z)

where
f (z) = −(1 + ρ) z m ,
g(z) = 1 + ρ z m+1 .
Notice that |f (z)| > |g(z)| on the circle |z| = 1 + ε for sufficiently small ε > 0.
Hence by Rouché's theorem f (z) and f (z) + g(z) have the same number
of zeros inside the circle |z| = 1 + ε. But f (z) has m zeros inside
that circle. Hence f (z) + g(z) = M (z) also has m zeros inside the
circle |z| = 1 + ε. Hence
M (z) = M1 (z) (z − z0 ) (ii)
where |z0 | > 1 and M1 (z) is a polynomial of degree m whose zeros are
all inside or on the unit circle. But the moment generating function
P (z) is analytic inside and on the unit circle. Hence all the m zeros
of M (z) that are inside or on the unit circle must cancel out with the
zeros of the numerator polynomial of P (z). Hence

N (z) = M1 (z) a. (iii)



Using (ii) and (iii) in (i) we get

P(z) = N(z)/M(z) = a/(z − z0).

But P(1) = 1 gives a = 1 − z0, or

P(z) = (z0 − 1)/(z0 − z) = (1 − 1/z0) Σ_{n=0}^{∞} (z/z0)^n

=⇒ pn = (1 − 1/z0)(1/z0)^n = (1 − r) r^n ,     n ≥ 0                 (iv)

where r = 1/z0.

(d) Average system size

L = Σ_{n=0}^{∞} n pn = r/(1 − r).
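
The root z0 and the resulting geometric distribution (iv) can be computed numerically; the sketch below (not part of the original manual) uses numpy's polynomial root finder with illustrative values ρ = 0.4, m = 3.

```python
import numpy as np

rho, m = 0.4, 3                                        # illustrative values
coeffs = [rho, -(1 + rho)] + [0.0] * (m - 1) + [1.0]   # M(z) in falling powers of z
z0 = max(np.roots(coeffs), key=abs).real               # the one root with |z| > 1
r = 1 / z0
pn = (1 - r) * r**np.arange(60)                        # geometric solution (iv)
print(z0, pn.sum())                                    # pn sums to ≈ 1
```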

16.9 (a) Use the hints in the previous problem.


(b)
∞ ∞ ∞
(λ + µ) pn z n + µ pn+m z n + λ pn−1 z n
X X X

n=m n=m n=m
m−1 2m−1
à ! à !
k 1
pk z k
X X
−(1 + ρ) P (z) − pk z + m P (z) −
k=0 z k=0
m−2
à !
pk z k = 0.
X
+ρ z P (z) −
k=0
After some simplifications we get
h i m−1
P (z) ρ z m+1 − (ρ + 1) z m + 1 = (1 − z m ) pk z k
X

k=0
or
m−1
X m−1
X
(1 − z m ) pk z k (z0 − 1) zk
k=0 k=0
P (z) = =
ρ z m+1 − (ρ + 1) z m + 1 m (z0 − z)

where we have made use of Rouché's theorem and P(1) = 1 as in
problem 16-8.
(c)
P(z) = Σ_{n=0}^{∞} pn z^n = [(1 − r)/m] Σ_{k=0}^{m−1} z^k / (1 − rz)

gives
pn = (1 + r + · · · + r^n) p0 ,                          n ≤ m − 1
   = r^(n−m+1) (1 + r + · · · + r^(m−1)) p0 ,            n ≥ m

where
p0 = (1 − r)/m ,     r = 1/z0 .

Finally
L = Σ_{n=0}^{∞} n pn = P′(1).

But
P′(z) = [(1 − r)/m] [ (Σ_{k=1}^{m−1} k z^(k−1))(1 − rz) + r Σ_{k=0}^{m−1} z^k ] / (1 − rz)^2

so that
L = P′(1) = [(1 − r)/m] [ (m(m − 1)/2)(1 − r) + r m ] / (1 − r)^2
  = (m − 1)/2 + r/(1 − r) .

16.10 Proceeding as in (16-212),

ψA(u) = ∫_0^∞ e^(−uτ) dA(τ) = [λm/(u + λm)]^m .

This gives

B(z) = ψA(µ(1 − z)) = [λm/(µ(1 − z) + λm)]^m
     = [1/(1 + (1 − z)/ρ)]^m = [ρ/((1 + ρ) − z)]^m ,     ρ = λm/µ.          (i)

Thus the equation B(z) = z for π0 reduces to

[ρ/((1 + ρ) − z)]^m = z
or
ρ/((1 + ρ) − z) = z^(1/m) ,

which is the same as

ρ z^(−1/m) = (1 + ρ) − z .                                                  (ii)

Let x = z^(−1/m). Substituting this into (ii) we get

ρ x = (1 + ρ) − x^(−m)
or
ρ x^(m+1) − (1 + ρ) x^m + 1 = 0 .                                           (iii)

16.11 From Example 16.7, Eq.(16-214), the characteristic equation for


Q(z) is given by (ρ = λ/m µ)

1 − z[1 + ρ (1 − z)]m = 0

which is equivalent to

1 + ρ (1 − z) = z −1/m . (i)

Let x = z 1/m in this case, so that (i) reduces to

[(1 + ρ) − ρ xm ] x = 1

or the characteristic equation satisfies

ρ xm+1 − (1 + ρ) x + 1 = 0. (ii)

16.12 Here the service time distribution is given by


dB(t)/dt = Σ_{i=1}^{k} di δ(t − Ti)

and its Laplace transform equals

Φs(s) = Σ_{i=1}^{k} di e^(−s Ti)                                     (i)

Substituting (i) into (15.219), we get

A(z) = Φs(λ(1 − z))
     = Σ_{i=1}^{k} di e^(−λ Ti (1−z))
     = Σ_{i=1}^{k} di e^(−λ Ti) e^(λ Ti z)
     = Σ_{i=1}^{k} di e^(−λ Ti) Σ_{j=0}^{∞} (λ Ti)^j z^j / j! = Σ_{j=0}^{∞} aj z^j .

Hence
aj = Σ_{i=1}^{k} di e^(−λ Ti) (λ Ti)^j / j! ,     j = 0, 1, 2, · · ·          (ii)

To get an explicit formula for the steady state probabilities {qn }, we


can make use of the analysis in (16.194)-(16.204) for an M/G/1 queue.
From (16.203)-(16.204), let
c0 = 1 − a0 ,     cn = 1 − Σ_{k=0}^{n} ak ,     n ≥ 1

and let {ck^(m)} represent the m-fold convolution of the sequence {ck}
with itself. Then the steady-state probabilities are given by (16.203) as

qn = (1 − ρ) Σ_{m=0}^{∞} Σ_{k=0}^{n} ak cn−k^(m) .

(b) State-Dependent Service Distribution


Let Bi (t) represent the service-time distribution for those customers
entering the system, where the most recent departure left i customers
in the queue. In that case, (15.218) modifies to

ak,i = P {Ak |Bi }

where
Ak = ”k customers arrive during a service time”
and

Bi = ”i customers in the system at the most recent departure.”

This gives
ak,i = ∫_0^∞ e^(−λt) [(λt)^k/k!] dBi(t)

     = ∫_0^∞ e^(−λt) [(λt)^k/k!] µ1 e^(−µ1 t) dt = µ1 λ^k/(λ + µ1)^(k+1) ,     i = 0          (i)

     = ∫_0^∞ e^(−λt) [(λt)^k/k!] µ2 e^(−µ2 t) dt = µ2 λ^k/(λ + µ2)^(k+1) ,     i ≥ 1

This gives

Ai(z) = Σ_{k=0}^{∞} ak,i z^k = 1/(1 + ρ1(1 − z)) ,     i = 0
                             = 1/(1 + ρ2(1 − z)) ,     i ≥ 1          (ii)
where ρ1 = λ/µ1 , ρ2 = λ/µ2 . Proceeding as in Example 15.24, the


steady state probabilities satisfy [(15.210) gets modified]
qj = q0 aj,0 + Σ_{i=1}^{j+1} qi aj−i+1,i                              (iii)

and (see (15.212))

Q(z) = Σ_{j=0}^{∞} qj z^j
     = q0 Σ_{j=0}^{∞} aj,0 z^j + Σ_{j=0}^{∞} Σ_{i=1}^{j+1} qi aj−i+1,i z^j
     = q0 A0(z) + Σ_{i=1}^{∞} qi z^i Σ_{m=0}^{∞} am,i z^m z^(−1)      (iv)
     = q0 A0(z) + (Q(z) − q0) A1(z)/z
where (see (ii))


A0(z) = 1/(1 + ρ1(1 − z))                                             (v)
and
A1(z) = 1/(1 + ρ2(1 − z)) .                                           (vi)

From (iv)
Q(z) = q0 (z A0(z) − A1(z)) / (z − A1(z)) .                           (vii)

Since
Q(1) = 1 = q0 [A0(1) + A0′(1) − A1′(1)] / [1 − A1′(1)]
         = q0 (1 + ρ1 − ρ2)/(1 − ρ2)

we obtain
q0 = (1 − ρ2)/(1 + ρ1 − ρ2) .                                         (viii)

Substituting (viii) into (vii) we can rewrite Q(z) as

Q(z) = (1 − ρ2) [(1 − z) A1(z)/(A1(z) − z)] · [1/(1 + ρ1 − ρ2)] [(1 − z A0(z)/A1(z))/(1 − z)]
     = [(1 − ρ2)/(1 − ρ2 z)] · [1/(1 + ρ1 − ρ2)] [1 − (ρ2/(1 + ρ1)) z] / [1 − (ρ1/(1 + ρ1)) z]
     = Q1(z) Q2(z)                                                    (ix)

where
Q1(z) = (1 − ρ2)/(1 − ρ2 z) = (1 − ρ2) Σ_{k=0}^{∞} ρ2^k z^k

and
Q2(z) = [1/(1 + ρ1 − ρ2)] [1 − (ρ2/(1 + ρ1)) z] Σ_{i=0}^{∞} (ρ1/(1 + ρ1))^i z^i .

Finally, substituting Q1(z) and Q2(z) into (ix) we obtain

qn = q0 [ Σ_{i=0}^{n} ρ2^i (ρ1/(1 + ρ1))^(n−i) − Σ_{i=0}^{n−1} ρ2^(i+1) ρ1^(n−i−1)/(1 + ρ1)^(n−i) ] ,     n = 1, 2, · · ·

with q0 as in (viii).

16.13 From (16-209), the Laplace transform of the waiting time distri-
bution is given by

Ψw(s) = (1 − ρ) / [1 − λ (1 − Φs(s))/s]
      = (1 − ρ) / [1 − ρ µ (1 − Φs(s))/s] .                           (i)

Let
Fr(t) = µ ∫_0^t [1 − B(τ)] dτ = µ [ t − ∫_0^t B(τ) dτ ]               (ii)

represent the residual service time distribution. Then its Laplace
transform is given by

ΦF(s) = L{Fr(t)} = µ (1/s − Φs(s)/s) = µ (1 − Φs(s))/s .              (iii)

Substituting (iii) into (i) we get

Ψw(s) = (1 − ρ)/(1 − ρ ΦF(s)) = (1 − ρ) Σ_{n=0}^{∞} [ρ ΦF(s)]^n ,     |ΦF(s)| < 1.      (iv)

Taking the inverse transform of (iv) we get

Fw(t) = (1 − ρ) Σ_{n=0}^{∞} ρ^n Fr^(n)(t) ,

where Fr^(n)(t) is the n-th convolution of Fr(t) with itself.


16.14 Let ρ in (16.198), which represents the average number of customers
that arrive during any service period, be greater than one. Notice that

ρ = A′(1) > 1

where
A(z) = Σ_{k=0}^{∞} ak z^k .

From Theorem 15.9 on Extinction probability (pages 759-760) it


follows that if ρ = A′(1) > 1, the equation

A(z) = z                                                              (i)

has a unique positive root π0 < 1. On the other hand, the transient
state probabilities {σi } satisfy the equation (15.236). By direct substi-
tution with xi = π0i we get

Σ_{j=1}^{∞} pij xj = Σ_{j=1}^{∞} aj−i+1 π0^j                          (ii)

where we have made use of pij = aj−i+1 , i ≥ 1 in (15.33) for an


M/G/1 queue. Using k = j − i + 1 in (ii), it reduces to

Σ_{k=2−i}^{∞} ak π0^(k+i−1) = π0^(i−1) Σ_{k=0}^{∞} ak π0^k
                            = π0^(i−1) π0 = π0^i = xi                 (iii)

since π0 satisfies (i). Thus if ρ > 1, the M/G/1 system is transient


with probabilities σi = π0i .

16.15 (a) The transition probability matrix here is the truncated version
of (15.34) given by

        | a0   a1   a2   · · ·   am−2     1 − Σ_{k=0}^{m−2} ak |
        | a0   a1   a2   · · ·   am−2     1 − Σ_{k=0}^{m−2} ak |
        |  0   a0   a1   · · ·   am−3     1 − Σ_{k=0}^{m−3} ak |
  P  =  |  :    :    :             :               :           |          (i)
        |  0    0    0   · · ·   a0  a1      1 − (a0 + a1)     |
        |  0    0    0   · · ·    0  a0          1 − a0        |

and it corresponds to the upper left hand block matrix in (15.34)


followed by an mth column that makes each row sum equal to unity.
(b) By direct substitution of (i) into (15-167), the steady state prob-
abilities {qj*}, j = 0, 1, · · · , m − 1, satisfy

qj* = q0* aj + Σ_{i=1}^{j+1} qi* aj−i+1 ,     j = 0, 1, 2, · · · , m − 2        (ii)

and the normalization condition gives

qm−1* = 1 − Σ_{i=0}^{m−2} qi* .                                       (iii)

Notice that (ii) is the same as the first m − 1 equations in (15-210)
for an M/G/1 queue. Hence the desired solution {qj*} must satisfy
the first m − 1 equations in (15-210) as well. Since the unique solution
set to (15.210) is given by {qj} in (16.203), it follows that the desired
probabilities satisfy

qj* = c qj ,     j = 0, 1, 2, · · · , m − 1                           (iv)

where the qj are as in (16.203) for an M/G/1 queue. From (iii)
we also get the normalization constant c to be

c = 1 / Σ_{i=0}^{m−1} qi .                                            (v)

16.16 (a) The event {X(t) = k} can occur in several mutually exclusive
ways, viz., in the interval (0, t), n customers arrive and k of them
continue their service beyond t. Let An = “n arrivals in (0, t)”, and
Bk,n =“exactly k services among the n arrivals continue beyond t”,
then by the theorem of total probability


P{X(t) = k} = Σ_{n=k}^{∞} P{An ∩ Bk,n} = Σ_{n=k}^{∞} P{Bk,n|An} P(An).

But P (An ) = e−λt (λt)n /n!, and to evaluate P {Bk,n |An }, we argue as
follows: From (9.28), under the condition that there are n arrivals in
(0, t), the joint distribution of the arrival instants agrees with the joint
distribution of n independent random variables arranged in increasing
order and distributed uniformly in (0, t). Hence the probability that a
service time S does not terminate by t, given that its starting time x
has a uniform distribution in (0, t) is given by

pt = ∫_0^t P(S > t − x | x = x) fx(x) dx
   = ∫_0^t [1 − B(t − x)] (1/t) dx = (1/t) ∫_0^t (1 − B(τ)) dτ = α(t)/t

Thus Bk,n given An has a Binomial distribution, so that

P{Bk,n|An} = C(n, k) pt^k (1 − pt)^(n−k) ,     k = 0, 1, 2, · · · n,

and
P{X(t) = k} = Σ_{n=k}^{∞} e^(−λt) [(λt)^n/n!] C(n, k) (α(t)/t)^k [(1/t) ∫_0^t B(τ) dτ]^(n−k)
            = e^(−λt) [λα(t)]^k/k! Σ_{n=k}^{∞} [λ ∫_0^t B(τ) dτ]^(n−k)/(n − k)!
            = [λα(t)]^k/k! · e^(−λ[t − ∫_0^t B(τ) dτ])
            = [λα(t)]^k/k! · e^(−λ ∫_0^t [1 − B(τ)] dτ)
            = [λα(t)]^k/k! · e^(−λ α(t)) ,     k = 0, 1, 2, · · ·          (i)
(b)
lim_{t→∞} α(t) = ∫_0^∞ [1 − B(τ)] dτ = E{s}                           (ii)

where we have made use of (5-52)-(5-53). Using (ii) in (i), we obtain

lim_{t→∞} P{x(t) = k} = e^(−ρ) ρ^k/k!                                 (iii)

where ρ = λ E{s}.
