VOLUME I: THEORY
Leonard Kleinrock
Professor
Computer Science Department
School of Engineering and Applied Science
University of California, Los Angeles
A Wiley-Interscience Publication
" Ah, ' All thina s come to th ose who wait.'
They come, but cjten come too late."
From Lady Mary M. Curr ie: Tout Vient a Qui Sait Attendre (1890)
Preface
How much time did you waste waiting in line this week? It seems we cannot escape frequent delays, and they are getting progressively worse! In this text we study the phenomena of standing, waiting, and serving, and we call this study queueing theory.
Any system in which arrivals place demands upon a finite-capacity resource may be termed a queueing system. In particular, if the arrival times of these demands are unpredictable, or if the size of these demands is unpredictable, then conflicts for the use of the resource will arise and queues of waiting customers will form. The lengths of these queues depend upon two aspects of the flow pattern: first, they depend upon the average rate at which demands are placed upon the resource; and second, they depend upon the statistical fluctuations of this rate. Certainly, when the average rate exceeds the capacity, then the system breaks down and unbounded queues will begin to form; it is the effect of this average overload which then dominates the growth of queues. However, even if the average rate is less than the system capacity, then here, too, we have the formation of queues due to the statistical fluctuations and spurts of arrivals that may occur; the effect of these variations is greatly magnified when the average load approaches (but does not necessarily exceed) that of the system capacity. The simplicity of these queueing structures is deceptive, and in our studies we will often find ourselves in deep analytic waters. Fortunately, a familiar and fundamental law of science permeates our queueing investigations. This law is the conservation of flow, which states that the rate at which flow increases within a system is equal to the difference between the flow rate into and the flow rate out of that system. This observation permits us to write down the basic system equations for rather complex structures in a relatively easy fashion.
The purpose of this book, then, is to present the theory of queues at the first-year graduate level. It is assumed that the student has been exposed to a first course in probability theory; however, in Appendix II of this text we give a probability theory refresher and state the basic principles that we shall need. It is also helpful (but not necessary) if the student has had some exposure to transforms, although in this case we present a rather complete
o Volume I
o Volume II
are keyed to the page where they appear in order to simplify the task of locating the explanatory material associated with each result.
Each chapter contains its own list of references keyed alphabetically to the author and year; for example, [KLEI 74] would reference this book. All equations of importance have been marked with the symbol ■, and it is these which are included in the summary of important equations. Each chapter includes a set of exercises which, in some cases, extend the material in that chapter; the reader is urged to work them out.
the face of the real world's complicated models, the mathematicians proceeded to advance the field of queueing theory rapidly and elegantly. The frontiers of this research proceeded into the far reaches of deep and complex mathematics. It was soon found that the really interesting models did not yield to solution and the field quieted down considerably. It was mainly with the advent of digital computers that once again the tools of queueing theory were brought to bear on a class of practical problems, but this time with great success. The fact is that at present, one of the few tools we have for analyzing the performance of computer systems is that of queueing theory, and this explains its popularity among engineers and scientists today. A wealth of new problems are being formulated in terms of this theory and new tools and methods are being developed to meet the challenge of these problems. Moreover, the application of digital computers in solving the equations of queueing theory has spawned new interest in the field. It is hoped that this two-volume series will provide the reader with an appreciation for and competence in the methods of analysis and application as we now see them.
I take great pleasure in closing this Preface by acknowledging those individuals and institutions that made it possible for me to bring this book into being. First, I would like to thank all those who participated in creating the stimulating environment of the Computer Science Department at UCLA, which encouraged and fostered my effort in this direction. Acknowledgment is due the Advanced Research Projects Agency of the Department of Defense, which enabled me to participate in some of the most exciting and advanced computer systems and networks ever developed. Furthermore, the John Simon Guggenheim Foundation provided me with a Fellowship for the academic year 1971-1972, during which time I was able to further pursue my investigations. Hundreds of students who have passed through my queueing-systems courses have in major and minor ways contributed to the creation of this book, and I am happy to acknowledge the special help offered by Arne Nilsson, Johnny Wong, Simon Lam, Fouad Tobagi, Farouk Kamoun, Robert Rice, and Thomas Sikes. My academic and professional colleagues have all been very supportive of this endeavor. To the typists I owe all. By far the largest portion of this book was typed by Charlotte La Roche, and I will be forever in her debt. To Diana Skocypec and Cynthia Ellman I give my deepest thanks for carrying out the enormous task of proofreading and correction-making in a rapid, enthusiastic, and supportive fashion. Others who contributed in major ways are Barbara Warren, Jean Dubinsky, Jean D'Fucci, and Gloria Roy. I owe a great debt of thanks to my family (and especially to my wife, Stella) who have stood by me and supported me well beyond the call of duty or marriage contract. Lastly, I would certainly be remiss in omitting an acknowledgment to my ever-faithful dictating machine, which was constantly talking back to me.

LEONARD KLEINROCK
March, 1974
Contents
VOLUME I
PART I: PRELIMINARIES
Epilogue 319
Index 411
VOLUME II
1. Notation
2. General Results
3. Markov, Birth-Death, and Poisson Processes
4. The M/M/1 Queue
5. The M/M/m Queueing System
6. Markovian Queueing Networks
7. The M/G/1 Queue
8. The G/M/1 Queue
9. The G/M/m Queue
10. The G/G/1 Queue
1. The Model
2. An Approach for Calculating Average Waiting Times
3. The Delay Cycle, Generalized Busy Periods, and Waiting Time Distributions
4. Conservation Laws
5. The Last-Come-First-Serve Queueing Discipline
6. Head-of-the-Line Priorities
7. Time-Dependent Priorities
8. Optimal Bribing for Queue Position
9. Service-Time-Dependent Disciplines
1. Resource Sharing
2. Some Contrasts and Trade-Offs
3. Network Structures and Packet Switching
4. The ARPANET-An Operational Description of an Existing Network
5. Definitions, the Model, and the Problem Statements
6. Delay Analysis
7. The Capacity Assignment Problem
8. The Traffic Flow Assignment Problem
9. The Capacity and Flow Assignment Problem
10. Some Topological Considerations-Applications to the ARPANET
11. Satellite Packet Switching
12. Ground Radio Packet Switching
Glossary
Summary of Results
Index
QUEUEING SYSTEMS
VOLUME I: THEORY
PART I
PRELIMINARIES
It is difficult to see the forest for the trees (especially if one is in a mob
rather than in a well-ordered queue). Likewise, it is often difficult to see the
impact of a collection of mathematical results as you try to master them; it is
only after one gains the understanding and appreciation for their application
to real-world problems that one can say with confidence that he understands
the use of a set of tools .
The two chapters contained in this preliminary part are each extreme in
opposite directions. The first chapter gives a global picture of where queueing
systems arise and why they are important. Entertaining examples are provided
as we lure the reader on. In the second chapter, on random processes, we
plunge deeply into mathematical definitions and techniques (quickly losing
sight of our long-range goals); the reader is urged not to falter under this
siege since it is perhaps the worst he will meet in passing through the text.
Specifically, Chapter 2 begins with some very useful graphical means for
displaying the dynamics of customer behavior in a queueing system. We then
introduce stochastic processes through the study of customer arrival, be-
havior, and backlog in a very general queueing system and carefully lead the
reader to one of the most significant results in queueing theory, namely,
Little's result, using very simple arguments. Having thus introduced the
concept of a stochastic process we then offer a rather compact treatment
which compares many well-known (but not well-distinguished) processes and
casts them in a common terminology and notation, leading finally to Figure
2.4 in which we see the basic relationships among these processes; the reader
is quickly brought to realize the central role played by the Poisson process
because of its position as the common intersection of all the stochastic
processes considered in this chapter. We then give a treatment of Markov
chains in discrete and continuous time; these sections are perhaps the tough-
est sledding for the novice, and it is perfectly acceptable if he passes over some
of this material on a first reading. At the conclusion of Section 2.4 we find
ourselves face to face with the important birth-death processes and it is here
One of life's more disagreeable activities, namely, waiting in line, is the delightful subject of this book. One might reasonably ask, "What does it profit a man to study such unpleasant phenomena?" The answer, of course, is that through understanding we gain compassion, and it is exactly this which we need since people will be waiting in longer and longer queues as civilization progresses, and we must find ways to tolerate these unpleasant situations. Think for a moment how much time is spent in one's daily activities waiting in some form of a queue: waiting for breakfast; stopped at a traffic light; slowed down on the highways and freeways; delayed at the entrance to one's parking facility; queued for access to an elevator; standing in line for the morning coffee; holding the telephone as it rings, and so on. The list is endless, and too often, so are the queues.
The orderliness of queues varies from place to place around the world. For example, the English are terribly susceptible to formation of orderly queues, whereas some of the Mediterranean peoples consider the idea ludicrous (have you ever tried clearing the embarkation procedure at the Port of Brindisi?). A common slogan in the U.S. Army is, "Hurry up and wait." Such is the nature of the phenomena we wish to study.
the railway network, the dam, the telephone or telegraph network, the supermarket checkout counter, and the computer processing system, respectively. The "finite capacity" refers to the fact that the channel can satisfy the demands (placed upon it by the commodity) at a finite rate only. It is clear that the analyses of many of these systems require analytic tools drawn from a variety of disciplines and, as we shall see, queueing theory is just one such discipline.
When one analyzes systems of flow, they naturally break into two classes: steady and unsteady flow. The first class consists of those systems in which the flow proceeds in a predictable fashion. That is, the quantity of flow is exactly known and is constant over the interval of interest; the time when that flow appears at the channel, and how much of a demand that flow places upon the channel, is known and constant. These systems are trivial to analyze in the case of a single channel. For example, consider a pineapple factory in which empty tin cans are being transported along a conveyor belt to a point at which they must be filled with pineapple slices and must then proceed further down the conveyor belt for additional operations. In this case, assume that the cans arrive at a constant rate of one can per second and that the pineapple-filling operation takes nine-tenths of one second per can. These numbers are constant for all cans and all filling operations. Clearly this system will function in a reliable and smooth fashion as long as the assumptions stated above continue to exist. We may say that the arrival rate R is one can per second and the maximum service rate (or capacity) C is 1/0.9 = 1.111... filling operations per second. The example above is for the case R < C. However, if we have the condition R > C, we all know what happens: cans and/or pineapple slices begin to inundate and overflow in the factory! Thus we see that the mean capacity of the system must exceed the average flow requirements if chaotic congestion is to be avoided; this is true for all systems of flow. This simple observation tells most of the story. Such systems are of little interest theoretically.
The more interesting case of steady flow is that of a network of channels. For stable flow, we obviously require that R < C on each channel in the network. However, we now run into some serious combinatorial problems. For example, let us consider a railway network in the fictitious land of Hatafla. See Figure 1.1. The scenario here is that figs grown in the city of Abra must be transported to the destination city of Cadabra, making use of the railway network shown. The numbers on each channel (section of railway) in Figure 1.1 refer to the maximum number of bushels of figs which that channel can handle per day. We are now confronted with the following fig flow problem: How many bushels of figs per day can be sent from Abra to Cadabra, and in what fashion shall this flow of figs take place? The answer to such questions of maximal "traffic" flow in a variety of networks is nicely
Figure 1.1 Maximal flow problem. [Network diagram: railway channels connect Abra to Cadabra through the intermediate cities Zeus, Nonabel, Sucsamad, and Oriac; each channel is labeled with its capacity in bushels of figs per day.]
settled by a well-known result in network flow theory referred to as the max-flow-min-cut theorem. To state this theorem, we first define a cut as a set of channels which, once removed from the network, will separate all possible flow from the origin (Abra) to the destination (Cadabra). We define the capacity of such a cut to be the total fig flow that can travel across that cut in the direction from origin to destination. For example, one cut consists of the branches from Abra to Zeus, Sucsamad to Zeus, and Sucsamad to Oriac; the capacity of this cut is clearly 23 bushels of figs per day. The max-flow-min-cut theorem states that the maximum flow that can pass between an origin and a destination is the minimum capacity of all cuts. In our example it can be seen that the maximum flow is therefore 21 bushels of figs per day (work it out). In general, one must consider all cuts that separate a given origin and destination. This computation can be enormously time consuming. Fortunately, there exists an extremely powerful method for finding not only what is the maximum flow, but also which flow pattern achieves this maximum flow. This procedure is known as the labeling algorithm (due to Ford and Fulkerson [FORD 62]) and is efficient in that the computational requirement grows as a small power of the number of nodes; we present the algorithm in Volume II, Chapter 5.
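The labeling algorithm itself is deferred to Volume II, Chapter 5, but a minimal sketch of the idea is easy to give: repeatedly label nodes reachable from the origin through unused capacity, and push flow along any labeled path to the destination. In the Python sketch below, the network and its capacities are illustrative assumptions only (the text does not list every channel capacity of Figure 1.1), so the printed value applies to the assumed data rather than to the 21-bushel answer of the Hatafla example.

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Breadth-first labeling version of the Ford-Fulkerson procedure.

    capacity: dict mapping (u, v) -> channel capacity; absent pairs mean no channel.
    Returns the value of the maximum flow from source to sink.
    """
    residual = dict(capacity)                         # residual capacities
    nodes = {u for u, _ in capacity} | {v for _, v in capacity}
    flow = 0
    while True:
        # Label nodes reachable from the source through positive residual capacity.
        labels = {source: None}
        frontier = deque([source])
        while frontier and sink not in labels:
            u = frontier.popleft()
            for v in nodes:
                if v not in labels and residual.get((u, v), 0) > 0:
                    labels[v] = u
                    frontier.append(v)
        if sink not in labels:
            return flow                               # no augmenting path remains
        # Trace the labeled path back from the sink and find its bottleneck.
        path, v = [], sink
        while labels[v] is not None:
            path.append((labels[v], v))
            v = labels[v]
        bottleneck = min(residual[u, v] for u, v in path)
        for u, v in path:                             # augment along the path
            residual[u, v] -= bottleneck
            residual[v, u] = residual.get((v, u), 0) + bottleneck
        flow += bottleneck

# Hypothetical capacities (bushels of figs per day); NOT the values of Figure 1.1.
caps = {("Abra", "Zeus"): 9, ("Abra", "Sucsamad"): 12, ("Zeus", "Nonabel"): 8,
        ("Sucsamad", "Zeus"): 2, ("Sucsamad", "Oriac"): 6, ("Sucsamad", "Nonabel"): 5,
        ("Nonabel", "Cadabra"): 15, ("Oriac", "Cadabra"): 7}
print(max_flow(caps, "Abra", "Cadabra"))
```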
In addition to maximal flow problems, one can pose numerous other interesting and worthwhile questions regarding flow in such networks. For example, one might inquire into the minimal cost network which will support a given flow if we assign costs to each of the channels. Also, one might ask the same questions in networks when more than one origin and destination exist. Complicating matters further, we might insist that a given network support flow of various kinds, for example, bushels of figs, cartons of cartridges, and barrels of oil. This multicommodity flow problem is an extremely difficult one, and its solution typically requires considerable computational effort. These and numerous other significant problems in network flow theory are addressed in the comprehensive text by Frank and Frisch [FRAN 71] and we shall see them again in Volume II, Chapter 5. Network flow theory itself requires methods from graph theory, combinatorial
contained and described within queueing theory, to which much of this text devotes itself. This requires a background in probability theory as well as an understanding of complex variables and some of the usual transform-calculus methods; this material is reviewed in Appendices I and II.
As in the case of deterministic flow, we may enlarge our scope of problems to that of networks of channels in which random flow is encountered. An example of such a system would be that of a computer network. Such a system consists of computers connected together by a set of communication lines where the capacity of these lines for carrying information is finite. Let us return to the fictitious land of Hatafla and assume that the railway network considered earlier is now in fact a computer network. Assume that users located at Abra require computational effort on the facility at Cadabra. The particular times at which these requests are made are themselves unpredictable, and the commands or instructions that describe these requests are also of unpredictable length. It is these commands which must be transmitted to Cadabra over our communication net as messages. When a message is inserted into the network at Abra, and after an appropriate decision rule (referred to as a routing procedure) is accessed, then the message proceeds through the network along some path. If a portion of this path is busy, and it may well be, then the message must queue up in front of the busy channel and wait for it to become free. Constant decisions must be made regarding the flow of messages and routing procedures. Hopefully, the message will eventually emerge at Cadabra, the computation will be performed, and the results will then be inserted into the network for delivery back at Abra.
It is clear that the problems exemplified by our computer network involve a variety of extremely complex queueing problems, as well as network flow and decision problems. In an earlier work [KLEI 64] the author addressed himself to certain aspects of these questions. We develop the analysis of these systems later in Volume II, Chapter 5.
Having thus classified* systems of flow, we hope that the reader understands where in the general scheme of things the field of queueing theory may be placed. The methods from this theory are central to analyzing most stochastic flow problems, and it is clear from an examination of the current literature that the field and in particular its applications are growing in a viable and purposeful fashion.

* The classification described above places queueing systems within the class of systems of flow. This approach identifies and emphasizes the fields of application for queueing theory. An alternative approach would have been to place queueing theory as belonging to the field of applied stochastic processes; this classification would have emphasized the mathematical structure of queueing theory rather than its applications. The point of view taken in this two-volume book is the former one, namely, with application of the theory as its major goal rather than extension of the mathematical formalism and results.
The assumption in most of queueing theory is that these interarrival times are independent, identically distributed random variables (and, therefore, the stream of arrivals forms a stationary renewal process; see Chapter 2). Thus, only the distribution A(t), which describes the time between arrivals, is usually of significance. The second statistical quantity that must be described is the amount of demand these arrivals place upon the channel; this is usually referred to as the service time, whose probability distribution is denoted by B(x), that is,

B(x) = P[service time ≤ x]     (1.2)

Here service time refers to the length of time that a customer spends in the service facility.
Now regarding the structure and discipline of the service facility, one must specify a variety of additional quantities. One of these is the extent of storage capacity available to hold waiting customers, and typically this quantity is described in terms of the variable K; often K is taken to be infinite. An additional specification involves the number of service stations available, and if more than one is available, then perhaps the distribution of service time will differ for each, in which case the distribution B(x) will include a subscript to indicate that fact. On the other hand, it is sometimes the case that the arriving stream consists of more than one identifiable class of customers; in such a case the interarrival distribution A(t) as well as the service distribution B(x) may each be characteristic of each class and will be identified again by use of a subscript on these distributions. Another important structural description of a queueing system is that of the queueing discipline; this describes the order in which customers are taken from the queue and allowed into service. For example, some standard queueing disciplines are first-come-first-serve (FCFS), last-come-first-serve (LCFS), and random order of service. When the arriving customers are distinguishable according to groups, then we encounter the case of priority queueing disciplines in which priority

* The notation P[A] denotes, as usual, the "probability of the event A."
among groups may be established. A further statement regarding the availability of the service facility is also necessary in case the service facility is occasionally required to pay attention to other tasks (as, for example, its own breakdown). Beyond this, queueing systems may enjoy customer behavior in the form of defections from the queue, jockeying among the many queues, balking before entering a queue, bribing for queue position, cheating for queue position, and a variety of other interesting and not-unexpected humanlike characteristics. We will encounter these as we move through the text in an orderly fashion (first-come-first-serve according to page number).
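To make the effect of the queueing discipline concrete, the toy sketch below (an invented example, not taken from the text) replays one fixed set of waiting customers under FCFS and under LCFS and prints the resulting orders of entry into service; a real system would, of course, interleave new arrivals with service completions.

```python
from collections import deque

def service_order(waiting, discipline):
    """Order in which the customers now waiting would enter service.

    waiting: customer labels listed in order of arrival.
    discipline: "FCFS" serves the oldest waiting customer first, "LCFS" the newest.
    """
    queue = deque(waiting)
    order = []
    while queue:
        order.append(queue.popleft() if discipline == "FCFS" else queue.pop())
    return order

customers = ["C1", "C2", "C3", "C4", "C5"]       # hypothetical arrival order
print(service_order(customers, "FCFS"))          # ['C1', 'C2', 'C3', 'C4', 'C5']
print(service_order(customers, "LCFS"))          # ['C5', 'C4', 'C3', 'C2', 'C1']
```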
Now that we have indicated how one must specify a queueing system, it is appropriate that we identify the measures of performance and effectiveness that we shall obtain by analysis. Basically, we are interested in the waiting time for a customer, the number of customers in the system, the length of a busy period (the continuous interval during which the server is busy), the length of an idle period, and the current work backlog expressed in units of time. All these quantities are random variables and thus we seek their complete probabilistic description (i.e., their probability distribution function). Usually, however, to give the distribution function is to give more than one can easily make use of. Consequently, we often settle for the first few moments (mean, variance, etc.).
Happily, we shall begin with simple considerations and develop the tools in a straightforward fashion, paying attention to the essential details of analysis. In the following pages we will encounter a variety of simple queueing problems, simple at least in the sense of description and usually rather sophisticated in terms of solution. However, in order to do this properly, we first devote our efforts in the following chapter to describing some of the important random processes that make up the arrival and service processes in our queueing systems.
REFERENCES

FORD 62  Ford, L. R. and D. R. Fulkerson, Flows in Networks, Princeton University Press (Princeton, N.J.), 1962.
FRAN 71  Frank, H. and I. T. Frisch, Communication, Transmission, and Transportation Networks, Addison-Wesley (Reading, Mass.), 1971.
KLEI 64  Kleinrock, L., Communication Nets: Stochastic Message Flow and Delay, McGraw-Hill (New York), 1964, out of print. Reprinted by Dover (New York), 1972.
2
Some Important Random Processes
We assume that the reader is familiar with the basic elementary notions, terminology, and concepts of probability theory. The particular aspects of that theory which we require are presented in summary fashion in Appendix II to serve as a review for those readers desiring a quick refresher and reminder; it is recommended that the material therein be reviewed, especially Section II.4 on transforms, generating functions, and characteristic functions. Included in Appendix II are the following important definitions, concepts, and results:
structure of queues. Also, we wish to provide the reader a glimpse as to where we are heading in this journey.
It is our purpose in this section to define some notation, both symbolic and graphic, and then to introduce one of the basic stochastic processes that we find in queueing systems. Further, we will derive a simple but significant result, which relates some first moments of importance in these systems. In so doing, we will be in a position to define the quantities and processes that we will spend many pages studying later in the text.
The system we consider is the very general queueing system G/G/m; recall (from the Preface) that this is a system whose interarrival time distribution A(t) is completely arbitrary and whose service time distribution B(x) is also completely arbitrary (all interarrival times and service times are assumed to be independent of each other). The system has m servers and order of service is also quite arbitrary (in particular, it need not be first-come-first-serve). We focus attention on the flow of customers as they arrive, pass through, and eventually leave this system; as such, we choose to number the customers with the subscript n and define C_n as follows:

C_n denotes the nth customer to enter the system     (2.1)

Thus, we may portray our system as in Figure 2.1 in which the box represents the queueing system and the flow of customers both in and out of the system is shown. One can immediately define some random processes of interest. For example, we are interested in N(t) where*

N(t) ≜ number of customers in the system at time t     (2.2)

Another stochastic process of interest is the unfinished work U(t) that exists in the system at time t, that is,

U(t) ≜ the unfinished work in the system at time t
     ≜ the remaining time required to empty the system of all customers present at time t     (2.3)

Whenever U(t) > 0, then the system is said to be busy, and only when U(t) = 0 is the system said to be idle. The duration and location of these busy and idle periods are also quantities of interest.
Figure 2.1 A general queueing system. [Box diagram: customers C_n flow into the queueing system and eventually depart from it.]
The details of these stochastic processes may be observed first by defining the following variables and then by displaying these variables on an appropriate time diagram to be discussed below. We begin with the definitions. Recalling that the nth customer is denoted by C_n, we define his arrival time to the queueing system as

τ_n ≜ arrival time for C_n     (2.4)

t_n ≜ interarrival time between C_{n-1} and C_n = τ_n - τ_{n-1}     (2.5)

We assume that these interarrival times are drawn from the distribution A(t) independently of n, that is,

P[t_n ≤ t] = A(t)     (2.6)

The total time spent in the system by C_n is the sum of his waiting time and service time, which we denote by

s_n = w_n + x_n

Thus we have defined for the nth customer his arrival time, "his" interarrival time, his service time, his waiting time, and his system time. We find

* The terms "waiting time" and "queueing time" have conflicting definitions within the body of queueing-theory literature. The former sometimes refers to the total time spent in the system, and the latter then refers to the total time spent on queue; however, these two definitions are occasionally reversed. We attempt to remove that confusion by defining waiting and queueing time to be the same quantity, namely, the time spent waiting on queue (but not being served); a more appropriate term perhaps would be "wasted time." The total time spent in the system will be referred to as "system time" (occasionally known as "flow time").
expedient at this point to elaborate somewhat further on notation. Let us consider the interarrival time t_n once again. We will have occasion to refer to the limiting random variable t̃ defined by

t̃ ≜ lim_{n→∞} t_n     (2.11)

which we denote by t_n → t̃. (We have already required that the interarrival times t_n have a distribution independent of n, but this will not necessarily be the case with many other random variables of interest.) The typical notation for the probability distribution function (PDF) will be

P[t_n ≤ t] = A_n(t)     (2.12)

and for the limiting PDF

P[t̃ ≤ t] = A(t)     (2.13)

This we denote by A_n(t) → A(t); of course, for the interarrival time we have assumed that A_n(t) = A(t), which gives rise to Eq. (2.6). Similarly, the probability density function (pdf) for t_n and t̃ will be a_n(t) and a(t), respectively, and will be denoted as a_n(t) → a(t). Finally, the Laplace transform (see Appendix II) of these pdf's will be denoted by A_n*(s) and A*(s), respectively, with the obvious notation A_n*(s) → A*(s). The use of the letter A (and a) is meant as a cue to remind the reader that they refer to the interarrival time. Of course, the moments of the interarrival time are of interest and they will be denoted as follows*:

E[t_n] ≜ t̄_n     (2.14)

According to our usual notation, the mean interarrival time for the limiting random variable will be given† by t̄ in the sense that t̄_n → t̄. As it turns out, t̄, which is the average interarrival time between customers, is used so frequently in our equations that a special notation has been adopted as follows:

t̄ ≜ 1/λ     (2.15)

Thus λ represents the average arrival rate of customers to our queueing system. Higher moments of the interarrival time are also of interest and so we define the kth moment by

E[t_n^k] ≜ t̄_n^k     k = 0, 1, 2, . . .     (2.16)

* The notation E[ ] denotes the expectation of the quantity within square brackets. As shown, we also adopt the overbar notation to denote expectation.
† Actually, we should use the notation t with both a tilde and a bar, but this is excessive and will be simplified to t̄. The same simplification will be applied to many of our other random variables.
t̄ = 1/λ = a_1 = a     (2.17)

That is, three special notations exist for the mean interarrival time; in particular, the use of the symbol a is very common and various of these forms will be used throughout the text as appropriate. Summarizing the information with regard to the interarrival time we have the following shorthand glossary:

t_n = interarrival time between C_n and C_{n-1}
t_n → t̃,   A_n(t) → A(t),   a_n(t) → a(t),   A_n*(s) → A*(s)
t̄_n → t̄ = 1/λ = a_1 = a,   t̄_n^k → t̄^k = a_k     (2.18)
In a similar manner we identify the notation associated with x_n, w_n, and s_n as follows:

x_n = service time for C_n
x_n → x̃,   B_n(x) → B(x),   b_n(x) → b(x),   B_n*(s) → B*(s)
x̄_n → x̄ = 1/μ = b_1 = b,   x̄_n^k → x̄^k = b_k     (2.19)

w_n = waiting time for C_n
w_n → w̃,   W_n(y) → W(y),   w_n(y) → w(y),   W_n*(s) → W*(s)
w̄_n → w̄ = W     (2.20)

s_n = system time for C_n
s_n → s̃,   S_n(y) → S(y),   s_n(y) → s(y),   S_n*(s) → S*(s)
s̄_n → s̄ = T     (2.21)
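As a numerical companion to this glossary, the short sketch below draws hypothetical interarrival and service samples (exponentially distributed, purely for illustration; the rates are assumptions) and estimates t̄ = 1/λ, x̄ = 1/μ, and a higher moment, in the spirit of Eqs. (2.18) and (2.19).

```python
import random

random.seed(1)
n = 200_000
lam, mu = 0.8, 1.0        # assumed arrival rate (lambda) and service rate (mu)

t_samples = [random.expovariate(lam) for _ in range(n)]   # interarrival times t_n
x_samples = [random.expovariate(mu) for _ in range(n)]    # service times x_n

def moment(samples, k):
    """Sample analogue of the k-th moment E[t^k]."""
    return sum(s ** k for s in samples) / len(samples)

print("mean interarrival time ~", round(moment(t_samples, 1), 3), "(1/lambda =", 1 / lam, ")")
print("mean service time      ~", round(moment(x_samples, 1), 3), "(1/mu =", 1 / mu, ")")
print("second moment of t     ~", round(moment(t_samples, 2), 3),
      "(exponential value 2/lambda^2 =", round(2 / lam ** 2, 3), ")")
```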
Figure 2.2 Time-diagram notation for queues. [The diagram shows a Queue axis and a Server axis: each customer C_n arrives at time τ_n, waits in the queue, and then occupies the server for its service time.]
[Figure 2.3: number of customers (vertical axis, 0 to 12) plotted against time t; the two staircase curves are sample functions of the arrival and departure counting processes.]
Alternatively, we may position ourselves at the output of the queueing system and count the number of departures that leave; this we denote by

δ(t) ≜ number of departures (of customers) from the system in (0, t)     (2.23)

Sample functions for these two stochastic processes are shown in Figure 2.3. Clearly N(t), the number in the system at time t, must be given by

N(t) = α(t) - δ(t)

On the other hand, the total area between these two curves up to some point, say t, represents the total time all customers have spent in the system (measured in units of customer-seconds) during the interval (0, t); let us denote this cumulative area by γ(t). Moreover, let λ_t be defined as the average arrival rate (customers per second) during the interval (0, t); that is,

λ_t ≜ α(t)/t     (2.24)

We may define T_t as the system time per customer averaged over all customers in the interval (0, t); since γ(t) represents the accumulated customer-seconds up to time t, we may divide by the number of arrivals up to that point to obtain

T_t = γ(t)/α(t)     (2.25)
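All of the quantities α(t), δ(t), γ(t), λ_t, and T_t can be computed directly from a record of arrival and departure instants, which is a convenient way to watch the graphical argument at work. The sketch below does this for a handful of hypothetical customers (the times are invented for illustration) and checks the identity that the time-average number in system over (0, t), namely γ(t)/t, equals λ_t T_t.

```python
def little_check(arrivals, departures, t):
    """Compute alpha(t), delta(t), lambda_t, T_t and the average number in system.

    arrivals[i] and departures[i] are the arrival and departure instants of
    customer i (departures[i] >= arrivals[i]); a customer still present at t
    contributes time only up to t.
    """
    alpha = sum(1 for a in arrivals if a <= t)            # arrivals in (0, t)
    delta = sum(1 for d in departures if d <= t)          # departures in (0, t)
    # gamma(t): accumulated customer-seconds, the area between the two curves.
    gamma = sum(min(d, t) - a for a, d in zip(arrivals, departures) if a <= t)
    lam_t = alpha / t                                     # average arrival rate
    T_t = gamma / alpha                                   # average system time
    N_bar = gamma / t                                     # average number in system
    return alpha, delta, lam_t, T_t, N_bar

# Hypothetical arrival and departure instants (seconds), purely illustrative.
arr = [1.0, 2.5, 3.0, 6.0, 7.5]
dep = [2.0, 5.5, 6.5, 9.0, 9.5]
alpha, delta, lam_t, T_t, N_bar = little_check(arr, dep, t=10.0)
print("alpha(t) =", alpha, "  delta(t) =", delta)
print("lambda_t * T_t =", round(lam_t * T_t, 4), "  average number in system =", round(N_bar, 4))
```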
only the server (or servers) itself; in this case our equation would have reduced to

N̄_s = λx̄     (2.27)

where N̄_s refers to the average number of customers in the service facility (or facilities) and x̄, of course, refers to the average time spent in the service box. Note that it is always true that

T = x̄ + W     (2.28) ■

The queueing system could refer to a specific class of customers, perhaps based on priority or some other attribute of this class, in which case the same relationship would apply. In other words, the average arrival rate of customers to a "queueing system" times the average time spent by customers in that "system" is equal to the average number of customers in the "system," regardless of how we define that "system."
We now discuss a basic parameter ρ, which is commonly referred to as the utilization factor. The utilization factor is in a fundamental sense really the ratio R/C, which we introduced in Chapter 1. It is the ratio of the rate at which "work" enters the system to the maximum rate (capacity) at which the system can perform this work; the work an arriving customer brings into the system equals the number of seconds of service he requires. So, in the case of a single-server system, the definition for ρ becomes

ρ ≜ (average arrival rate of customers) × (average service time)
  = λx̄     (2.29) ■

This last is true since a single-server system has a maximum capacity for doing work, which equals 1 sec/sec, and each arriving customer brings an amount of work equal to x̄ sec; since, on the average, λ customers arrive per second, then λx̄ sec of work are brought in by customers each second that passes, on the average. In the case of multiple servers (say, m servers) the definition remains the same when one considers the ratio R/C, where now the work capacity of the system is m sec/sec; expressed in terms of system parameters we then have

ρ ≜ λx̄/m     (2.30) ■
Equations (2.29) and (2.30) apply in the case when the maximum service rate is independent of the system state; if this is not the case, then a more careful definition must be provided. The rate at which work enters the system is sometimes referred to as the traffic intensity of the system and is usually expressed in Erlangs; in single-server systems, the utilization factor is equal to the traffic intensity, whereas for (m) multiple servers, the traffic intensity equals mρ. So long as 0 ≤ ρ < 1, then ρ may be interpreted as

ρ = E[fraction of busy servers]     (2.31)
[In the case of an infinite number of servers, the utilization factor ρ plays no important part, and instead we are interested in the number of busy servers (and its expectation).]
Indeed, for the system G/G/1 to be stable, it must be that R < C, that is, 0 ≤ ρ < 1. Occasionally, we permit the case ρ = 1 within the range of stability (in particular for the system D/D/1). Stability here once again refers to the fact that limiting distributions for all random variables of interest exist, and that all customers are eventually served. In such a case we may carry out the following simple calculation. We let τ be an arbitrarily long time interval; during this interval we expect (by the law of large numbers) with probability 1 that the number of arrivals will be very nearly equal to λτ. Moreover, let us define p_0 as the probability that the server is idle at some randomly selected time. We may, therefore, say that during the interval τ, the server is busy for τ - τp_0 sec, and so with probability 1, the number of customers served during the interval τ is very nearly (τ - τp_0)/x̄. We may now equate the number of arrivals to the number served during this interval, which gives, for large τ,

λτ = (τ - τp_0)/x̄

from which p_0 = 1 - λx̄ = 1 - ρ.
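This little argument is easy to check by simulation. The sketch below simulates a single-server FCFS queue with Poisson arrivals and exponential service; the M/M/1 choice and the numerical rates are assumptions made only for illustration (the argument above needs nothing more than G/G/1), and the long-run fraction of idle time is compared with 1 - ρ = 1 - λx̄.

```python
import random

random.seed(7)
lam, mu = 0.6, 1.0          # assumed arrival rate and service rate (x_bar = 1/mu)
n_customers = 200_000

clock = 0.0                 # arrival instant of the current customer
server_free_at = 0.0        # instant at which the server clears its backlog
busy_time = 0.0

for _ in range(n_customers):
    clock += random.expovariate(lam)          # next arrival instant
    start = max(clock, server_free_at)        # service begins when the server is free
    service = random.expovariate(mu)
    busy_time += service
    server_free_at = start + service

total_time = max(clock, server_free_at)
print("estimated idle probability p0 ~", round(1.0 - busy_time / total_time, 4))
print("1 - rho =", round(1.0 - lam / mu, 4))
```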
for all x = (x_1, x_2, . . . , x_n), t = (t_1, t_2, . . . , t_n), and n. As mentioned there, this is a formidable task; fortunately, many interesting stochastic processes permit a simpler description. In any case, it is the function F_X(x; t) that really describes the dependencies among the random variables of the stochastic process. Below we describe some of the usual types of stochastic processes that are characterized by different kinds of dependency relations among their random variables. We provide this classification in order to give the reader a global view of this field so that he may better understand in which particular
regions he is operating as we proceed with our study of queueing theory and its related stochastic processes.

(a) Stationary Processes. As we discuss at the very end of Appendix II, a stochastic process X(t) is said to be stationary if F_X(x; t) is invariant to shifts in time for all values of its arguments; that is, given any constant τ the following must hold:

F_X(x; t + τ) = F_X(x; t)     (2.34)

(b) Independent Processes. The simplest and most trivial stochastic process to consider is the random sequence in which {X_n} forms a set of independent random variables; that is, the joint pdf defined for our stochastic process in Appendix II must factor into the product, thusly

f(x_1, x_2, . . . , x_n; t_1, . . . , t_n) = f_1(x_1; t_1) f_2(x_2; t_2) · · · f_n(x_n; t_n)     (2.35)
is, the way in which the entire past history affects the future of the process is completely summarized in the current value of the process.
In the case of a discrete-time Markov chain the instants when state changes may occur are preordained to be at the integers 0, 1, 2, . . . , n, . . . . In the case of the continuous-time Markov chain, however, the transitions between states may take place at any instant in time. Thus we are led to consider the random variable that describes how long the process remains in its current (discrete) state before making a transition to some other state. Because the Markov property insists that the past history be completely summarized in the specification of the current state, then we are not free to require that a specification also be given as to how long the process has been in its current state! This imposes a heavy constraint on the distribution of time that the process may remain in a given state. In fact, as we shall see in Eq. (2.85), this state time must be exponentially distributed. In a real sense, then, the exponential distribution is a continuous distribution which is "memoryless" (we will discuss this notion at considerable length later in this chapter). Similarly, in the discrete-time Markov chain, the process may remain in the given state for a time that must be geometrically distributed; this is the only discrete probability mass function that is memoryless. This memoryless property is required of all Markov chains and restricts the generality of the processes one would like to consider.
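The memoryless property just described is easy to verify numerically for the exponential distribution. The sketch below (rate and offsets chosen arbitrarily) estimates P[X > s + t | X > s] and compares it with P[X > t]; repeating the experiment with, say, uniformly distributed samples shows how the property fails for other distributions.

```python
import random

random.seed(3)
rate, s, t = 1.5, 0.7, 1.2           # assumed rate and time offsets, purely illustrative
n = 1_000_000

samples = [random.expovariate(rate) for _ in range(n)]
survived_s = [x for x in samples if x > s]

p_cond = sum(1 for x in survived_s if x > s + t) / len(survived_s)   # P[X > s+t | X > s]
p_uncond = sum(1 for x in samples if x > t) / n                      # P[X > t]
print("P[X > s+t | X > s] ~", round(p_cond, 4))
print("P[X > t]           ~", round(p_uncond, 4))   # both ~ exp(-rate * t)
```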
Expressed analytically, the Markov property may be written as

P[X(t_{n+1}) = x_{n+1} | X(t_n) = x_n, X(t_{n-1}) = x_{n-1}, . . . , X(t_1) = x_1]
    = P[X(t_{n+1}) = x_{n+1} | X(t_n) = x_n]     (2.36)

where t_1 < t_2 < · · · < t_n < t_{n+1} and x_i is included in some discrete state space.
The consideration of Markov processes is central to the study of queueing theory and much of this text is devoted to that study. Therefore, a good portion of this chapter deals with discrete- and continuous-time Markov chains.

(d) Birth-death Processes. A very important special class of Markov chains has come to be known as the birth-death process. These may be either discrete- or continuous-time processes in which the defining condition is that state transitions take place between neighboring states only. That is, one may choose the set of integers as the discrete state space (with no loss of generality), and then the birth-death process requires that if X_n = i, then X_{n+1} = i - 1, i, or i + 1 and no other. As we shall see, birth-death processes have played a significant role in the development of queueing theory. For the moment, however, let us proceed with our general view of stochastic processes to see how each fits into the general scheme of things.
(e) Semi-Markov Processes. We begin by discussing discrete-time semi-Markov processes. The discrete-time Markov chain had the property that at every unit interval on the time axis the process was required to make a transition from the current state to some other state (possibly back to the same state). The transition probabilities were completely arbitrary; however, the requirement that a transition be made at every unit time (which really came about because of the Markov property) leads to the fact that the time spent in a state is geometrically distributed [as we shall see in Eq. (2.66)]. As mentioned earlier, this imposes a strong restriction on the kinds of processes we may consider. If we wish to relax that restriction, namely, to permit an arbitrary distribution of time the process may remain in a state, then we are led directly into the notion of a discrete-time semi-Markov process; specifically, we now permit the times between state transitions to obey an arbitrary probability distribution. Note, however, that at the instants of state transitions, the process behaves just like an ordinary Markov chain and, in fact, at those instants we say we have an imbedded Markov chain.
Now the definition of a continuous-time semi-Markov process follows directly. Here we permit state transitions at any instant in time. However, as opposed to the Markov process, which required an exponentially distributed time in state, we now permit an arbitrary distribution. This then affords us much greater generality, which we are happy to employ in our study of queueing systems. Here, again, the imbedded Markov process is defined at those instants of state transition. Certainly, the class of Markov processes is contained within the class of semi-Markov processes.
(f) Random Walks. In the study of random processes one often encounters a process referred to as a random walk. A random walk may be thought of as a particle moving among states in some (say, discrete) state space. What is of interest is to identify the location of the particle in that state space. The salient feature of a random walk is that the next position the process occupies is equal to the previous position plus a random variable whose value is drawn independently from an arbitrary distribution; this distribution, however, does not change with the state of the process.* That is, a sequence of random variables {S_n} is referred to as a random walk (starting at the origin) if

S_n = X_1 + X_2 + · · · + X_n     n = 1, 2, . . .     (2.37)

where S_0 = 0 and X_1, X_2, . . . is a sequence of independent random variables with a common distribution. The index n merely counts the number of state transitions the process goes through; of course, if the instants of these transitions are taken from a discrete set, then we have a discrete-time random walk, whereas if they are taken from a continuum, then we have a continuous-time random walk. In any case, we assume that the interval between these transitions is distributed in an arbitrary way and so a random walk is a special case of a semi-Markov process.* In the case when the common distribution for X_n is a discrete distribution, then we have a discrete-state random walk; in this case the transition probability p_ij of going from state i to state j will depend only upon the difference in indices j - i (which we denote by q_{j-i}).
An example of a continuous-time random walk is that of Brownian motion; in the discrete-time case an example is the total number of heads observed in a sequence of independent coin tosses.
A random walk is occasionally referred to as a process with "independent increments."
(g) Renewal Processes. A renewal process is related† to a random walk. However, the interest is not in following a particle among many states but rather in counting transitions that take place as a function of time. That is, we consider the real time axis on which is laid out a sequence of points; the distribution of time between adjacent points is an arbitrary common distribution and each point corresponds to an instant of a state transition. We assume that the process begins in state 0 [i.e., X(0) = 0] and increases by unity at each transition epoch; that is, X(t) equals the number of state transitions that have taken place by t. In this sense it is a special case of a random walk in which q_1 = 1 and q_i = 0 for i ≠ 1. We may think of Eq. (2.37) as describing a renewal process in which S_n is the random variable denoting the time at which the nth transition takes place. As earlier, the sequence {X_n} is a set of independent identically distributed random variables where X_n now represents the time between the (n - 1)th and nth transition. One should be careful to distinguish the interpretation of Eq. (2.37) when it applies to renewal processes as here and when it applies to a random walk as earlier. The difference is that here in the renewal process the equation describes the time of the nth renewal or transition, whereas in the random walk it describes the state of the process and the time between state transitions is some other random variable.
An important example of a renewal process is the set of arrival instants to the G/G/m queue. In this case, X_n is identified with the interarrival time.
* Usually, the distribution of time between intervals is of little concern in a random walk; emphasis is placed on the value (position) S_n after n transitions. Often, it is assumed that this distribution of interval time is memoryless, thereby making the random walk a special case of Markov processes; we are more generous in our definition here and permit an arbitrary distribution.
† It may be considered to be a special case of the random walk as defined in (f) above. A renewal process is occasionally referred to as a recurrent process.
Figure 2.4 Relationships among the interesting random processes. [The figure shows overlapping classes of processes with their defining conditions: SMP (p_ij arbitrary; state-time distribution arbitrary), MP (p_ij arbitrary), RW (p_ij = q_{j-i}; state-time distribution arbitrary), RP (q_1 = 1; state-time distribution arbitrary).] SMP: Semi-Markov process; MP: Markov process; RW: Random walk; RP: Renewal process; BD: Birth-Death Process.
It is now possible to classify states of a Markov chain according to the value obtained for f_j. In particular, if f_j = 1 then state E_j is said to be recurrent; if, on the other hand, f_j < 1, then state E_j is said to be transient. Furthermore, if the only possible steps at which our hippie can return to state E_j are γ, 2γ, 3γ, . . . (where γ > 1 and is the largest such integer), then state E_j is said to be periodic with period γ; if γ = 1, then E_j is aperiodic.
Considering states for which f_j = 1, we may then define the mean recurrence time of E_j as

M_j ≜ Σ_{n=1}^∞ n f_j^(n)     (2.42)

* Many of the interesting Markov chains which one encounters in queueing theory are irreducible.
This is merely the average time to return to E_j. With this we may then classify states even further. In particular, if M_j = ∞, then E_j is said to be recurrent null, whereas if M_j < ∞, then E_j is said to be recurrent nonnull. Let us define π_j^(n) to be the probability of finding the system in state E_j at the nth step, that is,

π_j^(n) ≜ P[X_n = j]     (2.43) ■
We may now state (without proof) two important theorems. The first comments on the set of states for an irreducible Markov chain.

Theorem 1  The states of an irreducible Markov chain are either all transient or all recurrent nonnull or all recurrent null. If periodic, then all states have the same period γ.

Assuming that our hippie wanders forever, he will pass through the various cities of the nation many times, and we inquire as to whether or not there exists a stationary probability distribution {π_j} describing his probability of being in city j at some time arbitrarily far into the future. [A probability distribution P_j is said to be a stationary distribution if when we choose it for our initial state distribution (that is, π_j^(0) = P_j) then for all n we will have π_j^(n) = P_j.] Solving for {π_j} is a most important part of the analysis of Markov chains. Our second theorem addresses itself to this question.

Theorem 2  In an irreducible and aperiodic homogeneous Markov chain the limiting probabilities

π_j = lim_{n→∞} π_j^(n)     (2.44)

always exist and are independent of the initial state probability distribution. Moreover, either
(a) all states are transient or all states are recurrent null, in which cases π_j = 0 for all j and there exists no stationary distribution, or
(b) all states are recurrent nonnull and then π_j > 0 for all j, in which case the set {π_j} is a stationary probability distribution and

π_j = 1/M_j     (2.45)

In this case the quantities π_j are uniquely determined through the following equations

1 = Σ_i π_i     (2.46)
π_j = Σ_i π_i p_ij     (2.47)
π = πP     (2.50) ■
P = | 0     3/4   1/4 |
    | 1/4   0     3/4 |
    | 1/4   1/4   1/2 |
and so we may solve Eq. (2.50) by considering the three equations derivable from it, that is,

π_0 = 0·π_0 + (1/4)π_1 + (1/4)π_2
π_1 = (3/4)π_0 + 0·π_1 + (1/4)π_2     (2.51)
π_2 = (1/4)π_0 + (3/4)π_1 + (1/2)π_2

Note from Eq. (2.51) that the first of these three equations equals the negative sum of the second and third, indicating that there is a linear dependence among them. It always will be the case that one of the equations will be linearly dependent on the others, and it is therefore necessary to introduce the additional conservation relationship as given in Eq. (2.46) in order to solve the system. In our example we then require

π_0 + π_1 + π_2 = 1     (2.52)
Thus the solution is obtained by simultaneously solving any two of the
equations given by Eq. (2.51) along with Eq. (2.52). Solving we obtain
π_0 = 1/5 = 0.20,   π_1 = 7/25 = 0.28,   π_2 = 13/25 = 0.52

and so

π = [1/5, 7/25, 13/25]     (2.53)
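The same stationary vector can be checked numerically by doing exactly what was done above: drop one of the dependent balance equations of (2.51) and replace it with the conservation relation (2.52). The sketch below uses numpy purely as a convenience.

```python
import numpy as np

P = np.array([[0.0, 0.75, 0.25],
              [0.25, 0.0, 0.75],
              [0.25, 0.25, 0.5]])     # transition matrix of the example above

# pi = pi P  is equivalent to  (P^T - I) pi^T = 0; replace one equation
# (here the last row) by the normalization sum(pi) = 1.
A = P.T - np.eye(3)
A[-1, :] = 1.0
b = np.array([0.0, 0.0, 1.0])
pi = np.linalg.solve(A, b)
print(pi)      # -> [0.2  0.28  0.52], i.e., [1/5, 7/25, 13/25]
```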
π = πP
which is Eq. (2.50) again. Note that the solution for π is independent of the initial state vector. Applying this to our example, let us assume that our hippie begins in the city of Abra at time 0 with probability 1, that is,

π^(0) = [1, 0, 0]     (2.57)

From this we may calculate the sequence of values π^(n), and these are given in the chart below. The limiting value π as given in Eq. (2.53) is also entered in this chart.
n          0      1      2      3      4      ∞
π_0^(n)    1      0      0.250  0.187  0.203  0.20
π_1^(n)    0      0.750  0.063  0.359  0.254  0.28
π_2^(n)    0      0.250  0.688  0.453  0.543  0.52
We may alternatively have chosen to assume that the hippie begins in the city of Zeus with probability 1, which would give rise to the initial state vector

π^(0) = [0, 1, 0]     (2.58)

and which results in the following table:
n          0      1      2      3      4      ∞
π_0^(n)    0      0.25   0.187  0.203  0.199  0.20
π_1^(n)    1      0      0.375  0.250  0.289  0.28
π_2^(n)    0      0.75   0.438  0.547  0.512  0.52
Finally, we may assume that the hippie begins in the remaining city (state E_2) with probability 1, giving the initial state vector

π^(0) = [0, 0, 1]     (2.59)

which results in the following table:

n          0      1      2      3      4      ∞
π_0^(n)    0      0.25   0.187  0.203  0.199  0.20
π_1^(n)    0      0.25   0.313  0.266  0.285  0.28
π_2^(n)    1      0.50   0.500  0.531  0.516  0.52
From these calculations we may make a number of observations. First, we
see that after only four steps the quantities π_j^(n) for a given value of j are almost identical regardless of the city in which we began. The rapidity with which these quantities converge, as we shall soon see, depends upon the eigenvalues of P. In all cases, however, we observe that the limiting values at infinity are rapidly approached and, as stated earlier, are independent of the initial position of the particle.
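The three charts can be reproduced by iterating the one-step recursion π^(n) = π^(n-1)P directly from each initial vector; the sketch below (again using numpy for convenience) prints the first few steps for all three starting cities and makes the common limit visible.

```python
import numpy as np

P = np.array([[0.0, 0.75, 0.25],
              [0.25, 0.0, 0.75],
              [0.25, 0.25, 0.5]])

for start in ([1, 0, 0], [0, 1, 0], [0, 0, 1]):
    pi = np.array(start, dtype=float)
    history = [pi]
    for _ in range(4):                 # steps n = 1, 2, 3, 4
        pi = pi @ P                    # pi^(n) = pi^(n-1) P
        history.append(pi)
    print("start", start, "->", [np.round(v, 3).tolist() for v in history])
```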
In order to get a better physical feel for what is occurring, it is instructive to follow the probabilities for the various states of the Markov chain as time evolves. To this end we introduce the notion of baricentric coordinates, which are extremely useful in portraying probability vectors. Consider a probability vector with N components (i.e., a Markov process with N states in our case) and a tetrahedron in N - 1 dimensions. In our example N = 3 and so our tetrahedron becomes an equilateral triangle in two dimensions. In general, we let the height of this tetrahedron be unity. Any probability vector π^(n) may be represented as a point in this N - 1 space by identifying each component of that probability vector with a distance from one face of the tetrahedron. That is, we measure from face j a distance equal to the probability associated with that component π_j^(n); if we do this for each face and therefore for each component, we will specify one point within the tetrahedron and that point correctly identifies our probability vector. Each unique probability vector will map into a unique point in this space, and it is easy to determine the probability measure from its location in that space. In our example we may plot the three initial state vectors as given in Eqs. (2.57)-(2.59) as shown in Figure 2.6. The numbers in parentheses represent which probability components are to be measured from the face associated with those numbers. The initial state vector corresponding to Eq. (2.59), for
[Figure 2.6: an equilateral triangle of height 1 whose vertices are labeled [1, 0, 0], [0, 1, 0], and [0, 0, 1]; the faces are numbered to show which probability component is measured from each face.]
This transform will certainly exist in the unit disk, that is, |z| ≤ 1. We now apply the transform method to Eq. (2.55) over its range of application (n = 1, 2, . . .); this we do by first multiplying that equation by z^n and then summing from 1 to infinity, thus

* The steps involved in applying this method are summarized on pp. 74-75 of this chapter.
The parenthetical term on the right-hand side of this last equation is recognized as Π(z) simply by changing the index of summation. Thus we find

Π(z) = π^(0)[I - zP]^(-1)     (2.61)

where I is the identity matrix and the (-1) notation implies the matrix inverse. If we can invert this equation, we will have, by the uniqueness of transforms, the transient solution; that is, using the double-headed, double-barred arrow notation as in Appendix I to denote transform pairs, we have

Π(z) ⟺ π^(n) = π^(0)P^n     (2.62)
P = | 0     3/4   1/4 |
    | 1/4   0     3/4 |
    | 1/4   1/4   1/2 |
First we must form

I - zP = | 1          -(3/4)z    -(1/4)z     |
         | -(1/4)z    1          -(3/4)z     |
         | -(1/4)z    -(1/4)z    1 - (1/2)z  |
Next, in order to find the inverse of this matrix we must form its determinant, which is (1 - z)[1 + (1/4)z]^2; thus

[I - zP]^(-1) = 1 / {(1 - z)[1 + (1/4)z]^2} ×

    | 1 - (1/2)z - (3/16)z^2    (3/4)z - (5/16)z^2        (1/4)z + (9/16)z^2  |
    | (1/4)z + (1/16)z^2        1 - (1/2)z - (1/16)z^2    (3/4)z + (1/16)z^2  |
    | (1/4)z + (1/16)z^2        (1/4)z + (3/16)z^2        1 - (3/16)z^2       |
Having found the matrix inverse, we are now faced with finding the inverse transform of this matrix, which will yield P^n. This we do as usual by carrying out a partial fraction expansion (see Appendix I). The fact that we have a matrix presents no problem; we merely note that each element in the matrix is itself a rational function of z which must be expanded in partial fractions term by term. (This task is simplified if the matrix is written as the sum of three matrices: a constant matrix; a constant matrix times z; and a constant matrix times z².) Since we have three roots in the denominator of our rational functions we expect three terms in our partial fraction expansion. Carrying
[I - zP]^{-1} = \frac{1/25}{1-z}\begin{pmatrix} 5 & 7 & 13 \\ 5 & 7 & 13 \\ 5 & 7 & 13 \end{pmatrix}
+ \frac{1/5}{(1+z/4)^2}\begin{pmatrix} 0 & -8 & 8 \\ 0 & 2 & -2 \\ 0 & 2 & -2 \end{pmatrix}
+ \frac{1/25}{1+z/4}\begin{pmatrix} 20 & 33 & -53 \\ -5 & 8 & -3 \\ -5 & -17 & 22 \end{pmatrix}    (2.64)
We observe immediately from this expansion that the matrix associated with the root (1 − z) gives precisely the equilibrium solution we found by direct methods [see Eq. (2.53)]; the fact that each row of this matrix is identical reflects the fact that the equilibrium solution is independent of the initial state. The other matrices associated with roots greater than unity in absolute value will always be what are known as differential matrices (each of whose rows must sum to zero). Inverting on z we finally obtain (by our tables in Appendix I)
P^n = \frac{1}{25}\begin{pmatrix} 5 & 7 & 13 \\ 5 & 7 & 13 \\ 5 & 7 & 13 \end{pmatrix}
+ \frac{1}{5}(n+1)\left(-\frac{1}{4}\right)^n \begin{pmatrix} 0 & -8 & 8 \\ 0 & 2 & -2 \\ 0 & 2 & -2 \end{pmatrix}
+ \frac{1}{25}\left(-\frac{1}{4}\right)^n \begin{pmatrix} 20 & 33 & -53 \\ -5 & 8 & -3 \\ -5 & -17 & 22 \end{pmatrix}    (2.65)
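As a quick numerical illustration of this result (a minimal sketch in Python, assuming NumPy is available; the matrix is the three-state example above), raising P to successive powers shows every row converging to the equilibrium vector (5/25, 7/25, 13/25), with the transient terms dying out as (−1/4)^n:

```python
# Sketch: P^n converges to a matrix of identical rows equal to the equilibrium vector.
import numpy as np

P = np.array([[0.0, 0.75, 0.25],
              [0.25, 0.0, 0.75],
              [0.25, 0.25, 0.5]])

for n in (1, 2, 4, 8, 16):
    print(n)
    print(np.linalg.matrix_power(P, n).round(4))

print("equilibrium:", np.array([5, 7, 13]) / 25)
```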
P[system remains in E_i for exactly m additional steps, given that it has just entered E_i] = (1 - p_{ii})\, p_{ii}^{m}    (2.66)

which gives the probability that the system will be in state E_j at step n, given
for m ≤ q ≤ n. This last equation must hold for any stochastic process (not necessarily Markovian) since we are considering all mutually exclusive and exhaustive possibilities. From the definition of conditional probability we may rewrite this last equation as

p_{ij}(m, n) = \sum_k P[X_q = k \mid X_m = i]\, P[X_n = j \mid X_m = i,\, X_q = k]    (2.69)
Equations (2.74) and (2.75) are known as the forward Chapman-Kolmogorov equations for discrete-time Markov chains since they are written at the forward (most recent time) end of the interval. On the other hand, we could have chosen q = m + 1, in which case we obtain the backward Chapman-Kolmogorov equations. For the inhomogeneous chain the state probabilities obey

π^{(n+1)} = π^{(n)} P(n)

whose solution is

π^{(n+1)} = π^{(0)} P(0) P(1) \cdots P(n)    (2.79)
These last two equations correspond to Eqs. (2.55) and (2.56), respectively, for the homogeneous case. The Chapman-Kolmogorov equations give us a means for describing the time-dependent probabilities of many interesting queueing systems that we develop in later chapters.*
Before leaving discrete-time Markov chains, we wish to introduce the special case of discrete-time birth-death processes. A birth-death process is an example of a Markov process that may be thought of as modeling changes in the size of a population. In what follows we say that the system is in state E_k when the population consists of k members. We further assume that changes in population size occur by at most one; that is, a "birth" will change the population's size to one greater, whereas a "death" will lower the population size to one less. In considering birth-death processes we do not permit multiple births or bulk disasters; such possibilities will be considered
* It is clear from this development that all Markov processes must satisfy the Chapman-Kolmogorov equations. Let us note, however, that all processes that satisfy the Chapman-Kolmogorov equation are not necessarily Markov processes; see, for example, p. 203 of [PARZ 62].
later in the text and correspond to random walks. We will consider the Markov chain to be homogeneous in that the transition probabilities p_ij do not change with time; however, certainly they will be a function of the state of the system. Thus we have that for our discrete-time birth-death process

p_{ij} = \begin{cases} d_i & j = i-1 \\ 1 - b_i - d_i & j = i \\ b_i & j = i+1 \\ 0 & \text{otherwise} \end{cases}    (2.80)
Here d_i is the probability that at the next time step a single death will occur, driving the population size down to i − 1, given that the population size now is i. Similarly, b_i is the probability that a single birth will occur, given that the current size is i, thereby driving the population size to i + 1 at the next time step. 1 − b_i − d_i is the probability that neither of these events will occur and that at the next time step the population size will not change. Only these three possibilities are permitted. Clearly d_0 = 0, since we can have no deaths when there is no one in the population to die. However, contrary to intuition we do permit b_0 > 0; this corresponds to a birth when there are no members in the population. Whereas this may seem to be spontaneous generation, or perhaps divine creation, it does provide a meaningful model in terms of queueing theory. The model is as follows: The population corresponds to the customers in the queueing system; a death corresponds to a customer departure from that system; and a birth corresponds to a customer arrival to that system. Thus we see it is perfectly feasible to have an arrival (a birth) to an empty system! The stationary probability transition matrix for the general birth-death process then appears as follows:
P = \begin{pmatrix}
1-b_0 & b_0 & 0 & 0 & \cdots \\
d_1 & 1-b_1-d_1 & b_1 & 0 & \cdots \\
0 & d_2 & 1-b_2-d_2 & b_2 & \cdots \\
0 & 0 & d_3 & 1-b_3-d_3 & \ddots \\
\vdots & & & \ddots & \ddots
\end{pmatrix}
If we are dealing with a finite chain, then the last row of this matrix would be [0 0 ⋯ 0 d_N 1−d_N], which illustrates the fact that no births are permitted when the population has reached its maximum size N. We see that the P
matrix has nonzero terms only along the main diagonal and along the diagonals directly above and below it. This is a highly specialized form for the transition probability matrix, and as such we might expect that it can be solved. To solve the birth-death process means to find the solution for the state probabilities π^(n). As we have seen, the general form of solution for these probabilities is given in Eqs. (2.55) and (2.56), and the equation that describes the limiting solution (as n → ∞) is given in Eq. (2.50). We also demonstrated earlier the z-transform method for finding the solution. Of course, due to this special structure of the birth-death transition matrix, we might expect a more explicit solution. We defer discussion of the solution to the material on continuous-time Markov chains, which we now investigate.
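The tridiagonal structure of Eq. (2.80) is easy to exercise numerically. The following sketch (assuming NumPy; the birth and death probabilities are illustrative, not from the text) builds the transition matrix and iterates π^(n+1) = π^(n)P of Eq. (2.55) until the state probabilities settle:

```python
import numpy as np

def bd_transition_matrix(b, d):
    """Tridiagonal matrix of Eq. (2.80) for a finite chain.
    b[k] is the birth probability in state k, d[k] the death probability
    (d[0] must be 0); the diagonal absorbs the rest so each row sums to one."""
    N = len(b)
    P = np.zeros((N, N))
    for k in range(N):
        if k > 0:
            P[k, k - 1] = d[k]
        if k < N - 1:
            P[k, k + 1] = b[k]
        P[k, k] = 1.0 - P[k].sum()
    return P

# Illustrative chain on 6 states: b_k = 0.3, d_k = 0.2.
P = bd_transition_matrix([0.3] * 6, [0.0] + [0.2] * 5)
pi = np.array([1.0, 0, 0, 0, 0, 0])      # start in state 0
for _ in range(500):
    pi = pi @ P                          # Eq. (2.55)
print(pi.round(4))                       # limiting distribution
```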
where h(t) is a function only of the additional time t (and not of the expended time s).* We may rewrite this conditional probability as follows:

P[τ_i > s + t \mid τ_i > s] = \frac{P[τ_i > s + t,\ τ_i > s]}{P[τ_i > s]} = \frac{P[τ_i > s + t]}{P[τ_i > s]}
This last step follows since the event τ_i > s + t implies the event τ_i > s. Rewriting this last equation and introducing h(t) once again we find

P[τ_i > s + t] = P[τ_i > s]\, h(t)    (2.82)

Setting s = 0 and observing that P[τ_i > 0] = 1, we have immediately that

P[τ_i > t] = h(t)

Using this last equation in Eq. (2.82) we then obtain

P[τ_i > s + t] = P[τ_i > s]\, P[τ_i > t]    (2.83)

for s, t ≥ 0. (Setting s = t = 0 we again require P[τ_i > 0] = 1.) We now
show that the only continuous distribution satisfying Eq. (2.83) is the
* The symbol s is used as a time variable in this section only and should not be confused with its use as a transform variable elsewhere.
\frac{d P[τ_i > s + t]}{ds} = -f_{τ_i}(s)\, P[τ_i > t]

where we have taken advantage of Eq. (2.84). Dividing both sides by P[τ_i > t] and setting s = 0 we have

\frac{1}{P[τ_i > t]} \frac{d P[τ_i > t]}{dt} = -f_{τ_i}(0)

or

P[τ_i > t] = e^{-f_{τ_i}(0)\, t}

Now we use Eq. (2.84) again to obtain the pdf for τ_i as

f_{τ_i}(t) = f_{τ_i}(0)\, e^{-f_{τ_i}(0)\, t}    (2.87)
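The memoryless property expressed by Eq. (2.83) is easy to see empirically. The following is a small simulation sketch (parameters are illustrative; it uses only the Python standard library) that compares P[τ > s + t | τ > s] with P[τ > t] for exponentially distributed samples:

```python
import random

random.seed(1)
lam, s, t = 2.0, 0.4, 0.7
samples = [random.expovariate(lam) for _ in range(200_000)]

survived_s = [x for x in samples if x > s]
lhs = sum(x > s + t for x in survived_s) / len(survived_s)   # P[tau > s+t | tau > s]
rhs = sum(x > t for x in samples) / len(samples)             # P[tau > t]
print(lhs, rhs)   # both close to exp(-lam * t) ~ 0.2466
```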
j"
t~~,
I I
get)
ye t) =
0
Thus we have interpreted the terms in Eq. (2.92); this is nothing more than the forward Chapman-Kolmogorov equation for the continuous-time Markov chain.
In a similar fashion, beginning with Eq. (2.77), we may derive the backward Chapman-Kolmogorov equation
The forward and backward matrix equations just derived may be expressed through their individual terms as follows. The forward equation gives us [with the additional condition that the passage to the limit in Eq. (2.95) is uniform in i for fixed j]
(2.98)
(2.99)
p_{ij}(t, t) = \begin{cases} 1 & i = j \\ 0 & i \neq j \end{cases}
These equations [(2.98) and (2.99)] uniquely determine the transition probabilities p_ij(s, t) and must, of course, also satisfy Eq. (2.87) as well as the initial conditions.
In matrix notation we may exhibit the solution to the forward and backward Eqs. (2.92) and (2.97), respectively, in a straightforward manner; the
result is*
This corresponds to the discrete-time solution given in Eq. (2.79). The matrix differential equation corresponding to Eq. (2.103) is easily seen to be

\frac{d\boldsymbol{π}(t)}{dt} = \boldsymbol{π}(t)\, Q(t)
This last is similar in form to Eq. (2.92) and may be expressed in terms of its elements as

(2.105)

The similarity between Eqs. (2.105) and (2.98) is not accidental. The latter describes the probability that the process is in state E_j at time t given that it was in state E_i at time s. The former merely gives the probability that the system is in state E_j at time t; information as to where the process began is given in the initial state probability vector π(0). If indeed π_k(0) = 1 for k = i and π_k(0) = 0 for k ≠ i, then we are stating for sure that the system was in state E_i at time 0. In this case π_j(t) will be identically equal to p_ij(0, t). Both forms for this probability are often used; the form p_ij(s, t) is used when
* The expression e^{Pt}, where P is a square matrix, is defined as the following matrix power series:

e^{Pt} = I + Pt + P^2 \frac{t^2}{2!} + P^3 \frac{t^3}{3!} + \cdots
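For a homogeneous chain the matrix exponential gives the state probabilities directly as π(t) = π(0)e^{Qt}. The sketch below (assuming NumPy and SciPy are available; the generator Q is a small illustrative example, not from the text) checks the footnote's power series against a library routine and evaluates π(t):

```python
import numpy as np
from scipy.linalg import expm

Q = np.array([[-1.0, 1.0, 0.0],
              [2.0, -3.0, 1.0],
              [0.0, 2.0, -2.0]])   # rows sum to zero, as for any generator
t = 0.5

# Truncated power series I + Qt + (Qt)^2/2! + ... from the footnote.
series, term = np.eye(3), np.eye(3)
for n in range(1, 25):
    term = term @ (Q * t) / n
    series += term

print(np.allclose(series, expm(Q * t)))   # True
pi0 = np.array([1.0, 0.0, 0.0])
print(pi0 @ expm(Q * t))                  # state probabilities at time t
```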
(2 . 111)
Now for the state probabilities themselves we have the differential equation

\frac{dπ_j(t)}{dt} = q_{jj}\, π_j(t) + \sum_{k \neq j} q_{kj}\, π_k(t)    (2.114)
For an irreducible homogeneous Markov chain it can be shown that the following limits always exist and are independent of the initial state of the chain, namely,

\lim_{t \to \infty} p_{ij}(t) = π_j

This set {π_j} will form the limiting state probability distribution. For an ergodic Markov chain we will have the further limit, which will be independent of the initial distribution, namely,

\lim_{t \to \infty} π_j(t) = π_j

This limiting distribution is given uniquely as the solution of the following system of linear equations:
(2.117)

and

μ_k = q_{k,k-1}

Thus our infinitesimal generator for the general homogeneous birth-death process takes the form

Q = \begin{pmatrix}
-λ_0 & λ_0 & 0 & 0 & 0 & \cdots \\
μ_1 & -(λ_1+μ_1) & λ_1 & 0 & 0 & \cdots \\
0 & μ_2 & -(λ_2+μ_2) & λ_2 & 0 & \cdots \\
0 & 0 & μ_3 & -(λ_3+μ_3) & λ_3 & \cdots \\
\vdots & & & \ddots & \ddots & \ddots
\end{pmatrix}
Note that except for the main, upper, and lower diagonals, all terms are zero. To be more explicit, the assumptions we need for the birth-death process are that it is a homogeneous Markov chain X(t) on the states 0, 1, 2, ..., that births and deaths are independent (this follows directly from the Markov property), and

B1: P[exactly 1 birth in (t, t + Δt) | k in population] = λ_k Δt + o(Δt)
D1: P[exactly 1 death in (t, t + Δt) | k in population] = μ_k Δt + o(Δt)
B2: P[exactly 0 births in (t, t + Δt) | k in population] = 1 − λ_k Δt + o(Δt)
D2: P[exactly 0 deaths in (t, t + Δt) | k in population] = 1 − μ_k Δt + o(Δt)
From these assumptions we see that multiple births, multiple deaths, or, in fact, both a birth and a death in a small time interval are prohibited in the sense that each such multiple event is of order o(Δt).
What we wish to solve for is the probability that the population size is k at some time t; this we denote by*

P_k(t) ≜ P[X(t) = k]

This calculation could be carried out directly by using our result in Eq. (2.114) for π_j(t) and our specific values for q_ij. However, since the derivation of these equations for the birth-death process is so straightforward and follows from first principles, we choose not to use the heavy machinery we developed in the previous section, which tends to camouflage the simplicity of the basic approach, but rather to rederive them below. The reader is encouraged to identify the parallel steps in this development and compare them to the more general steps taken earlier. Note in terms of our previous
These three cases are portrayed in Figure 2.8. The probability for the first of these possibilities is merely the probability P_k(t) that we were in state E_k at time t times the probability h_k(Δt) that we moved from state E_k to state E_k (i.e., had neither a birth nor a death) during the next Δt seconds; this is represented by the first term on the right-hand side of Eq. (2.120) below. The second and third terms on the right-hand side of that equation correspond, respectively, to the second and third cases listed above. We need not concern ourselves specifically with transitions from states other than nearest neighbors to state E_k since we have assumed that such transitions in an interval of
* We use X(t) here to denote the number in system at time t to be consistent with the use of X(t) for our general stochastic process. Certainly we could have used N(t) as defined earlier; we use N(t) outside of this chapter.
We may add the three probabilities above since these events are clearly mutually exclusive. Of course, Eq. (2.120) only makes sense in the case k ≥ 1, since clearly we could not have had −1 members in the population. For the case k = 0 we need the special boundary equation given by Eq. (2.121). In addition, conservation of probability requires

\sum_{k=0}^{\infty} P_k(t) = 1    (2.122)

To solve the system represented by Eqs. (2.120)-(2.122) we must make use of our assumptions B1, D1, B2, and D2 in order to evaluate the coefficients
in these equations. Carrying out this operation our equations convert to

P_k(t + Δt) = P_k(t)[1 − λ_k Δt + o(Δt)][1 − μ_k Δt + o(Δt)]
  + P_{k-1}(t)[λ_{k-1} Δt + o(Δt)]
  + P_{k+1}(t)[μ_{k+1} Δt + o(Δt)]
  + o(Δt)        k ≥ 1    (2.123)

P_0(t + Δt) = P_0(t)[1 − λ_0 Δt + o(Δt)]
  + P_1(t)[μ_1 Δt + o(Δt)]
  + o(Δt)        k = 0    (2.124)
In Eq. (2.124) we have used the assumption that it is impossible to have a death when the population is of size 0 (i.e., μ_0 = 0) and the assumption that one indeed can have a birth when the population size is 0 (λ_0 ≥ 0). Expanding the right-hand side of Eqs. (2.123) and (2.124) and taking the limit as Δt → 0, we have

\frac{dP_k(t)}{dt} = -(λ_k + μ_k)P_k(t) + λ_{k-1}P_{k-1}(t) + μ_{k+1}P_{k+1}(t)        k ≥ 1
\frac{dP_0(t)}{dt} = -λ_0 P_0(t) + μ_1 P_1(t)        k = 0        (2.127)

These are the differential-difference equations for the birth-death process; we recognize them as Eq. (2.114) and their solution will give the behavior of P_k(t). It remains for us to solve them. (Note that this set was obtained by essentially using the Chapman-Kolmogorov equations.)
In order to solve Eqs. (2.127) for the time-dependent behavior P_k(t) we now require our initial conditions; that is, we must specify P_k(0) for k = 0, 1, 2, .... In addition, we further require that Eq. (2.122) be satisfied.
Let us pause temporarily to describe a simple inspection technique for finding the differential-difference equations given above. We begin by observing that an alternate way for displaying the information contained in the Q matrix is by means of the state-transition-rate diagram. In such a diagram the state E_k is represented by an oval surrounding the number k. Each nonzero infinitesimal rate q_ij (the elements of the Q matrix) is represented in the state-transition-rate diagram by a directed branch pointing from E_i to E_j and labeled with the value q_ij. Furthermore, since it is clear that the terms along the main diagonal of Q contain no new information [see Eqs. (2.96) and (2.118)] we do not include the "self"-loop from E_j back to E_j. Thus the state-transition-rate diagram for the general birth-death process is as shown in Figure 2.9.
In viewing this figure we may tru ly think of a pa rticle in motion moving
among the se states; the branches identify the per mitted transitions and th e
bra nch labels give the infinitesimal rates at which th ese transitions take
place. We emph asize that the labels on the ordered link s refer to birth and
dea th rates a nd not to probabilities. If one wishes to con vert these labels to
proba bilities, one must multiply eac h by the quant ity dt to obtain the
probabili ty of such a transition occurring in the next interval of time whose
duration is dt , In t hat case it is also necessary to put self-loops on each -state
indicating the prob ab ility that in the next interval of time dt the system re-
mains in the given state . No te that t he sta te-transition-rate diagra m contains
exactly the sa me informati on as does the tr ansition-rate matrix Q .
Co ncentra ting on state E k we observe that one may en ter it only from state
E k_1 or from sta te Ek+l an d similarly one leaves state E k only by entering
sta te Ek - 1 or sta te Ek + 1 • From this picture we see why such processes are
referr ed to as "nearest-neighbo r" birt h-deat h processes .
Since we a re considering a dynamic situatio n it is clea r that the difference
between the rate a t which the system ent ers Ek and the ra te at which the system
leaves E k must be equal to the rate of change of "flow" into that state . This
But thi s is exactly Eq . (2. l27) ! Of course, we ha ve not a tte nded to th e details
for the bound ar y state Eo but it is easy to see that the rate argument ju st given
lead s to the correct equa tio n fo r k = O. Ob ser ve that each ter m in Eq .
(2.128) is of the form : pr obability of bein g in a particular state at tim e t
multiplied by the infinitesimal rate of leaving that state. It is clear that wha t
we have done is to draw an imaginary boundary surrou nd ing sta te Ek and
hav e calcul ated th e pr obability flow rates cr ossing th at boundary , where we
place opposite signs . on flows entering as oppos ed to leaving ; thi s tot al
computatio n is th en set equa l to the time derivati ve of the prob ability flow
rate into that sta te.
Actu ally there is no reason for selecting a single sta te as the "system" for
wh ich the flow equ ati on s mu st hold . In fact one may encl ose an y number of
sta tes wit hin a contour a nd th en write a flow equ ati on for all flow crossing '
th at boundary. Th e only d an ger in de aling with such a con glomerate set is
th a t one may write down a dependent set of equ ati ons rather than an inde-
pendent set; on the other hand , if one systema tically encloses each sta te
sing ly a nd writes d own a con servation law for each, then one is guaran teed to
have a n independent set o f equ a tions for the syste m with the qu ali fication
th at the co nservatio n of prob ability given by Eq. (2. 122) mu st also be
a pplied. * T hus we have a simple inspection techn iqu e for a rriving at the
equa tions of moti on for the birth-death proce ss. As we sha ll see lat er th is
ap proa ch is perfectly suita ble for other M ar kov pr ocesses (includi ng sem i- .
M arkov p rocesses) a nd will be used extensively ; the se observa tio ns also lead
us to the no tion of globa l and local balan ce equ ati on s (see C ha pter 4).
At this point it is important for the reader to recognize and accept the fact that the birth-death process described above is capable of providing the
* When the number of states is finite (say, K states) then any set of K − 1 single-node state equations will be independent. The additional equation needed is Eq. (2.122).
P_0(t) = e^{-λt}

Inserting this last into Eq. (2.129) for k = 1 results in

P_1(t) = λt\, e^{-λt}

Continuing by induction, then, we finally have as a solution to Eq. (2.129)

P_k(t) = \frac{(λt)^k}{k!}\, e^{-λt}        k ≥ 0,\ t ≥ 0    (2.131)
This is the celebrated Poisson distribution. It is a pure birth process with constant birth rate λ and gives rise to a sequence of birth epochs which are
* Transient behavior is discussed elsewhere in this text, notably in Chapter 2 (Vol. II). For an excellent treatment the reader is referred to [COHE 69].
said to constitute a Poisson process. Let us study the Poisson process more carefully and show its relationship to the exponential distribution.
The Poisson process is central to much of elementary and intermediate queueing theory and is widely used in their development. The special position of this process comes about for two reasons. First, as we have seen, it is the "innermost circle" in Figure 2.4 and, therefore, enjoys a number of marvelous and simplifying analytical and probabilistic properties; this will become undeniably apparent in our subsequent development. The second reason for its great importance is that, in fact, numerous natural physical and organic processes exhibit behavior that is meaningfully modeled by Poisson processes. For example, as Fry [FRY 28] so graphically points out, one of the first observations of the Poisson process was that it properly represented the number of army soldiers killed due to being kicked (in the head?) by their horses. Other examples include the sequence of gamma rays emitted from a radioactive particle, and the sequence of times at which telephone calls are originated in the telephone network. In fact, it was shown by Palm [PALM 43] and Khinchin [KHIN 60] that in many cases the sum of a large number of independent stationary renewal processes (each with an arbitrary distribution of renewal time) will tend to a Poisson process. This is an important limit theorem and explains why Poisson processes appear so often in nature where the aggregate effect of a large number of individuals or particles is under observation.
Since this development is intended for our use in the study of queueing systems, let us immediately adopt queueing notation and also condition ourselves to discussing a Poisson process as the arrival of customers to some queueing facility rather than as the birth of new members in a population. Thus λ is the average rate at which these customers arrive. With the initial condition in Eq. (2.130), P_k(t) gives the probability that k arrivals occur during the time interval (0, t). It is intuitively clear, since the average arrival rate is λ per second, that the average number of arrivals in an interval of length t must be λt. Let us carry out the calculation of this last intuitive statement. Defining K as the number of arrivals in this interval of length t [previously we used α(t)] we have
E[K] = \sum_{k=0}^{\infty} k P_k(t)
     = e^{-λt} \sum_{k=0}^{\infty} k \frac{(λt)^k}{k!}
     = e^{-λt} λt \sum_{k=1}^{\infty} \frac{(λt)^{k-1}}{(k-1)!}
     = e^{-λt} λt \sum_{k=0}^{\infty} \frac{(λt)^k}{k!}
     = λt    (2.132)
for |z| ≤ 1. Applying this to the Poisson distribution derived above we have

E[z^K] = \sum_{k=0}^{\infty} z^k P_k(t)
       = \sum_{k=0}^{\infty} e^{-λt} \frac{(λtz)^k}{k!}
       = e^{-λt + λtz}
Figure 2.10  The Poisson distribution.
and so

G(z) ≜ E[z^K] = e^{-λt(1 - z)}    (2.134)

We shall make considerable use of this result for the z-transform of a Poisson distribution. For example, we may now easily calculate the mean and variance as given in Eqs. (2.132) and (2.133) by taking advantage of the special properties of the z-transform (see Appendix II) as follows*:
G^{(1)}(1) = \left.\frac{\partial}{\partial z} E[z^K]\right|_{z=1} = E[K]
We have introduced the Poisson process here as a pure birth process and we have found an expression for P_k(t), the probability distribution for the number of arrivals during a given interval of length t. Now let us consider the joint distribution of the arrival instants when it is known beforehand that exactly k arrivals have occurred during that interval. We break the interval (0, t) into 2k + 1 intervals as shown in Figure 2.11. We are interested in A_k, which is defined to be the event that exactly one arrival occurs in each of the intervals {β_j} and that no arrival occurs in any of the intervals {α_j}. We wish to calculate the probability that the event A_k occurs given that exactly k arrivals have occurred in the interval (0, t); from the definition of conditional probability we thus have

P[A_k \mid \text{exactly } k \text{ arrivals in } (0, t)] = \frac{P[A_k \text{ and exactly } k \text{ arrivals in } (0, t)]}{P[\text{exactly } k \text{ arrivals in } (0, t)]}    (2.135)
When we consider Poisson arrivals in nonoverlapping intervals, we are considering independent events whose joint probability may be calculated as the product of the individual probabilities (i.e., the Poisson process has independent increments). We note from Eq. (2.131), therefore, that

P[\text{one arrival in interval of length } β_j] = λβ_j e^{-λβ_j}

and

P[\text{no arrival in interval of length } α_j] = e^{-λα_j}

Using this in Eq. (2.135) we have directly

P[A_k \mid \text{exactly } k \text{ arrivals in } (0, t)]
 = \frac{(λβ_1\, λβ_2 \cdots λβ_k\, e^{-λβ_1} e^{-λβ_2} \cdots e^{-λβ_k})(e^{-λα_1} e^{-λα_2} \cdots e^{-λα_{k+1}})}{[(λt)^k/k!]\, e^{-λt}}
 = β_1 β_2 \cdots β_k \frac{k!}{t^k}    (2.136)
On the other hand, let us consider a new process that selects k points in the interval (0, t) independently, where each point is uniformly distributed over this interval. Let us now make the same calculation that we did for the Poisson process, namely,

P[A_k \mid \text{exactly } k \text{ arrivals in } (0, t)] = \left(\frac{β_1}{t}\right)\left(\frac{β_2}{t}\right)\cdots\left(\frac{β_k}{t}\right) k!    (2.137)
where the term k! comes about since we do not distinguish among the permutations of the k points among the k chosen intervals. We observe that the two conditional probabilities given in Eqs. (2.136) and (2.137) are the same and, therefore, conclude that if an interval of length t contains exactly k arrivals from a Poisson process, then the joint distribution of the instants when these arrivals occurred is the same as the distribution of k points uniformly distributed over the same interval.
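This property is easily visualized by simulation. The sketch below (illustrative parameters, Python standard library only) generates Poisson arrival streams, keeps only those realizations with exactly k arrivals in (0, t), and checks that the retained arrival instants behave like uniform points on (0, t):

```python
import random

random.seed(2)
lam, t, k_target = 1.5, 10.0, 12
conditioned = []
while len(conditioned) < 200:
    arrivals, clock = [], 0.0
    while True:
        clock += random.expovariate(lam)   # exponential interarrival times
        if clock > t:
            break
        arrivals.append(clock)
    if len(arrivals) == k_target:          # condition on exactly k arrivals
        conditioned.extend(arrivals)

# For uniform points on (0, 10) the mean position is t/2 = 5.
print(sum(conditioned) / len(conditioned))
```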
Furthermore, it is easy to show from the properties of our birth process that the Poisson process is one with independent increments; that is, defining X(s, s + t) as the number of arrivals in the interval (s, s + t), then the following is true:

P[X(s, s + t) = k] = \frac{(λt)^k e^{-λt}}{k!}

regardless of the location of this interval.
We would now like to investigate the intimate relationship between the Poisson process and the exponential distribution. This distribution also plays a central role in queueing theory. We consider the random variable t̃, which we recall is the time between adjacent arrivals in a queueing system, and whose PDF and pdf are given by A(t) and a(t), respectively, as already agreed for the interarrival times. From its definition, then, a(t)Δt + o(Δt) is the probability that the next arrival occurs at least t sec and at most (t + Δt) sec from the time of the last arrival.
Since the definition of A(t) is merely the probability that the time between arrivals is ≤ t, it must clearly be given by

A(t) = P[t̃ ≤ t] = 1 − P_0(t) = 1 − e^{-λt}
and so

P[t̃ ≤ t + t_0 \mid t̃ > t_0] = 1 − e^{-λt}    (2.140)

This result shows that the distribution of remaining time until the next arrival, given that t_0 sec has elapsed since the last arrival, is identically equal to the unconditional distribution of interarrival time. The impact of this statement is that our probabilistic feeling regarding the time until a future arrival occurs is independent of how long it has been since the last arrival occurred. That is, the future of an exponentially distributed random variable
arrival occurs within the next Δt sec. From Eq. (2.140) we have

P[t̃ ≤ t_0 + Δt \mid t̃ > t_0] = 1 − e^{-λΔt}
  = 1 − \left[1 − λΔt + \frac{(λΔt)^2}{2!} − \cdots\right]
  = λΔt + o(Δt)    (2.141)

Equation (2.141) tells us, given that an arrival has not yet occurred, that the probability of it occurring in the next interval of length Δt sec is λΔt + o(Δt). But this is exactly assumption B1 from the opening paragraphs of this section. Furthermore, the probability of no arrival in the interval (t_0, t_0 + Δt) is calculated as
We use a trick here to evaluate the (simple) integral by recognizing that the integrand is no more than the partial derivative of the following integral,

\int_0^{\infty} t\, λ e^{-λt}\, dt = -λ \frac{\partial}{\partial λ} \int_0^{\infty} e^{-λt}\, dt
and so

\bar{t} = \frac{1}{λ}    (2.142)

Thus we have that the average interarrival time for an exponential distribution is given by 1/λ. This result is intuitively pleasing if we examine Eq. (2.141) and observe that the probability of an arrival in an interval of length Δt is given by λΔt [+ o(Δt)] and thus λ itself must be the average rate of arrivals; thus the average time between arrivals must be 1/λ. In order to evaluate the variance, we first calculate the second moment for the interarrival time as follows:

E[(t̃)^2] = \int_0^{\infty} t^2\, λ e^{-λt}\, dt = \frac{2}{λ^2}
σ_{t̃}^2 = E[(t̃)^2] - (\bar{t})^2

and so

σ_{t̃}^2 = \frac{2}{λ^2} - \left(\frac{1}{λ}\right)^2 = \frac{1}{λ^2}    (2.143)
As usual, these two moments could more easily have been calculated by first considering the Laplace transform of the probability density function for this random variable. The notation for the Laplace transform of the interarrival pdf is A*(s). In this special case of the exponential distribution we then have the following:

A^*(s) ≜ \int_0^{\infty} e^{-st} a(t)\, dt = \int_0^{\infty} e^{-st} λ e^{-λt}\, dt

and so

A^*(s) = \frac{λ}{s + λ}    (2.144)
Equation (2.144) thus gives the Laplace transform for the exponential density function. From Appendix II we recognize that the mean of this density function is given by

\bar{t} = -\left.\frac{dA^*(s)}{ds}\right|_{s=0} = \left.\frac{λ}{(s+λ)^2}\right|_{s=0} = \frac{1}{λ}

The second moment is also calculated in a similar fashion:

E[(t̃)^2] = \left.\frac{d^2 A^*(s)}{ds^2}\right|_{s=0} = \left.\frac{2λ}{(s+λ)^3}\right|_{s=0} = \frac{2}{λ^2}
and so

σ_{t̃}^2 = \frac{2}{λ^2} - \frac{1}{λ^2} = \frac{1}{λ^2}

Thus we see the ease with which moments can be calculated by making use of transforms.
Note also that the coefficient of variation [see Eq. (II.23)] for the exponential is

\frac{σ_{t̃}}{\bar{t}} = 1    (2.145)
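The transform manipulations above are also pleasant to hand to a computer algebra system. A sketch (assuming SymPy; the symbols mirror Eq. (2.144)) that reproduces the mean, second moment, variance, and coefficient of variation:

```python
import sympy as sp

s, lam = sp.symbols('s lam', positive=True)
A = lam / (s + lam)                        # Eq. (2.144)

mean = -sp.diff(A, s).subs(s, 0)           # 1/lam
second = sp.diff(A, s, 2).subs(s, 0)       # 2/lam**2
var = sp.simplify(second - mean**2)        # 1/lam**2
cv = sp.simplify(sp.sqrt(var) / mean)      # coefficient of variation = 1
print(mean, second, var, cv)
```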
We define f_X(x) to be the pdf for this random variable. From Appendix II we should immediately recognize that the density of X is given by the convolution of the densities of the individual interarrival times, since they are independently distributed. Of course, this convolution operation is a bit lengthy to carry out, so let us use our further result in Appendix II, which tells us that the Laplace transform of the pdf for the sum of independent random variables is equal to the product of the Laplace transforms of the density for each. In our case each interarrival time has a common exponential distribution and therefore the Laplace transform for the pdf of X will merely be the kth power of A*(s), where A*(s) is given by Eq. (2.144); that is, defining

X^*(s) = \int_0^{\infty} e^{-sx} f_X(x)\, dx

we have

X^*(s) = [A^*(s)]^k = \left(\frac{λ}{s+λ}\right)^k

thus

f_X(x) = \frac{λ(λx)^{k-1}}{(k-1)!}\, e^{-λx}        x ≥ 0
This family of density functions (one for each value of k) is referred to as the family of Erlang distributions. We will have considerable use for this family later when we discuss the method of stages in Chapter 4.
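The "sum of k exponential stages" view lends itself to a direct check. A small sketch (illustrative λ and k, Python standard library only) that samples the sum of k exponential interarrival times and compares the result with the Erlang pdf given above:

```python
import math
import random

random.seed(3)
lam, k = 2.0, 4
samples = [sum(random.expovariate(lam) for _ in range(k)) for _ in range(100_000)]

print(sum(samples) / len(samples), k / lam)   # empirical vs. exact mean of Erlang-k

x, h = 1.5, 0.05                               # crude density estimate at x
emp = sum(abs(v - x) < h / 2 for v in samples) / (len(samples) * h)
exact = lam * (lam * x) ** (k - 1) / math.factorial(k - 1) * math.exp(-lam * x)
print(emp, exact)
```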
So much for the Poisson arrival process and its relation to the exponential distribution. Let us now return to the birth-death equations and consider a more general pure birth process in which we permit state-dependent birth rates λ_k (for the Poisson process, we had λ_k = λ). We once again insist that the death rates μ_k = 0. From Eq. (2.127) this yields the set of equations

\frac{dP_k(t)}{dt} = -λ_k P_k(t) + λ_{k-1} P_{k-1}(t)        k ≥ 1
\frac{dP_0(t)}{dt} = -λ_0 P_0(t)        k = 0        (2.148)
Again, let us assume the initial distribution as given in Eq. (2.130), which states that (with probability one) the population begins with 0 members at time 0. Solving for P_0(t) we have

P_0(t) = e^{-λ_0 t}

The general solution* for P_k(t) is given below with an explicit expression for the first two values of k:
\frac{dP_k(t)}{dt} = -μ P_k(t) + μ P_{k+1}(t)        0 < k < N

\frac{dP_N(t)}{dt} = -μ P_N(t)        k = N
M/M/1:

λ_k = λ,\ k = 0, 1, 2, \ldots \quad\Rightarrow\quad A(t) = 1 - e^{-λt}
μ_k = μ,\ k = 1, 2, 3, \ldots \quad\Rightarrow\quad B(x) = 1 - e^{-μx}    (2.151)
It should be clear why A(t) is of exponential form from our earlier discussion relating the exponential interarrival distribution to the Poisson arrival process. In a similar fashion, since the death rate is constant (μ_k = μ, k = 1, 2, ...), the same reasoning leads to the observation that the time between deaths is also exponentially distributed (in this case with parameter μ). However, deaths correspond in the queueing system to service
\frac{dP_k(t)}{dt} = -(λ + μ)P_k(t) + λP_{k-1}(t) + μP_{k+1}(t)        k ≥ 1

\frac{dP_0(t)}{dt} = -λP_0(t) + μP_1(t)        k = 0        (2.152)
Many methods are available for solving this set of equations. Here, we choose to use the method of z-transforms developed in Appendix I. We have already seen one application of this method earlier in this chapter [when we defined the transform in Eq. (2.60) and applied it to the system of equations (2.55) to obtain the algebraic equation (2.61)]. Recall that the steps involved in applying the method of z-transforms to the solution of a set of difference equations may be summarized as follows:

P(z, t) ≜ \sum_{k=0}^{\infty} P_k(t)\, z^k    (2.153)

Next we multiply the kth differential equation by z^k (step 1) and then sum over all permitted k (k = 1, 2, ...) (step 2) to yield a single differential equation for the z-transform of P_k(t):
* We sometimes obtain a differential equation at this stage if our original set of difference equations was, in fact, a set of differential-difference equations. When this occurs, we are effectively back to step 1 of this procedure as far as the differential variable (usually time) is concerned. We then proceed through steps 1-5 a second time using the Laplace transform of this new variable; our transform multiplier becomes e^{-st}, our sums become integrals, and our "tricks" become the properties associated with Laplace transforms (see Appendix I). Similar "returns to step 1" occur whenever a function of more than one variable is transformed; for each discrete variable, we require a z-transform and, for each continuous variable, we require a Laplace transform.
* When additional unknowns remain, we must appeal to the analyticity of the transform and observe that in its region of analyticity the transform must have a zero to cancel each pole (singularity) if the transform is to remain bounded. These additional conditions completely remove any remaining unknowns. This procedure will often be used and explained in the next few chapters.
Returning to step 1, applying this transform to Eq. (2.155), and taking advantage of property 11 of Table I.3 in Appendix I, we obtain
Of course, P_k(0+) is just our initial condition; whereas earlier we took the simple point of view that the system was empty at time 0 [that is, P_0(0+) = 1 and all other terms P_k(0+) = 0 for k ≠ 0], we now generalize and permit i customers to be present at time 0, that is,

P_k(0^+) = \begin{cases} 1 & k = i \\ 0 & k \neq i \end{cases}    (2.161)

When i = 0 we have our original initial condition. Substituting Eq. (2.161) into Eq. (2.160) we see immediately that

P(z, 0^+) = z^i

which we may place into Eq. (2.159) to obtain

P^*(z, s) = \frac{z^{i+1} - μ(1-z)P_0^*(s)}{sz - (1-z)(μ - λz)}    (2.162)
We are almost finished with step 5 except for the fact that the unknown function P_0*(s) appears in our equation. The second footnote to step 5 tells us how to proceed. From here on the analysis becomes a bit complex and it is beyond our desire at this point to continue the calculation; instead we relegate the excruciating details to the exercises below (see Exercise 2.20). It suffices to say that P_0*(s) is determined through the denominator roots of Eq. (2.162), which then leaves us with an explicit expression for our double transform. We are now at step 6 and must attempt to invert on both the transform variables; the exercises require the reader to show that the result of this inversion yields the final solution for our transient analysis, namely,
(2.165)
and
k Z. -1 (2.166)
EXERCISES
2.1. Consider K independent sources of customers where the interarrival time between customers for each source is exponentially distributed with parameter λ_k (i.e., each source is a Poisson process). Now consider the arrival stream, which is formed by merging the input from each of the K sources defined above. Prove that this merged stream is also Poisson with parameter λ = λ_1 + λ_2 + ... + λ_K.

2.2. Referring back to the previous problem, consider this merged Poisson stream and now assume that we wish to break it up into several branches. Let p_i be the probability that a customer from the merged stream is assigned to substream i. If the overall rate is λ customers per second, and if the substream probabilities p_i are chosen for each customer independently, then show that each of these substreams is a Poisson process with rate λp_i.
where I_k(x) is the modified Bessel function of the first kind of order k. This last expression is most disheartening. What it has to say is that an appropriate model for the simplest interesting queueing system (discussed further in the next chapter) leads to an ugly expression for the time-dependent behavior of its state probabilities. As a consequence, we can only hope for greater complexity and obscurity in attempting to find the time-dependent behavior of more general queueing systems.
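When the closed form is this unwieldy, the transient behavior can instead be obtained numerically by truncating the state space of Eq. (2.152) and integrating the differential-difference equations directly. A crude Euler-step sketch (parameters and truncation level are illustrative assumptions):

```python
# Truncated M/M/1 transient: integrate Eq. (2.152) on states 0..N.
lam, mu, N = 0.5, 1.0, 40        # rho = 0.5; truncate at 40 customers
P = [0.0] * (N + 1)
P[0] = 1.0                       # start empty
dt, T = 0.001, 50.0

for _ in range(int(T / dt)):
    dP = [0.0] * (N + 1)
    dP[0] = -lam * P[0] + mu * P[1]
    for k in range(1, N):
        dP[k] = -(lam + mu) * P[k] + lam * P[k - 1] + mu * P[k + 1]
    dP[N] = -mu * P[N] + lam * P[N - 1]          # reflecting truncation boundary
    P = [p + dt * d for p, d in zip(P, dP)]

print(P[0])          # approaches the equilibrium value 1 - rho = 0.5
```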
More will be said about time-dependent results later in the text. Our main
purpose now is to focus upon the equilibrium behavior of queueing systems
rather than upon their transient behavior (which is far more difficult). In the
next chapter the equilibrium behavior for birth-death queueing systems will
be studied and in Chapter 4 more general Markovian queues in equilibrium
will be considered. Only when we reach Chapter 5, Chapter 8, and then
Chapter 2 (Volume II) will the time-dependent behavior be considered again .
Let us now proceed to the simplest equilibrium behavior.
REFERENCES
BHAR 60  Bharucha-Reid, A. T., Elements of the Theory of Markov Processes and Their Applications, McGraw-Hill (New York), 1960.
COHE 69  Cohen, J. W., The Single Server Queue, North-Holland (Amsterdam), 1969.
EILO 69  Eilon, S., "A Simpler Proof of L = λW," Operations Research, 17, 915-916 (1969).
FELL 66  Feller, W., An Introduction to Probability Theory and Its Applications, Vol. II, Wiley (New York), 1966.
FRY 28  Fry, T. C., Probability and Its Engineering Uses, Van Nostrand (New York), 1928.
HOWA 71  Howard, R. A., Dynamic Probabilistic Systems, Vol. I (Markov Models) and Vol. II (Semi-Markov and Decision Processes), Wiley
p_{ij} = e^{-λ} \sum_{n=0}^{\min(i,j)} \binom{i}{n} p^n q^{i-n} \frac{λ^{j-n}}{(j-n)!}

where p + q = 1 (0 < p < 1).
(a) Is this chain irreducible? Periodic? Explain.
(b) We wish to find

π_i = equilibrium probability of E_i

P(z) ≜ \sum_{i=0}^{\infty} π_i z^i

(d) Recursively (i.e., repeatedly) apply the result in (c) to itself and show that the nth recursion gives
2.8. Show that any point in or on the equilateral triangle of unit height
shown in Figure 2.6 represents a three-component probability vector
in the sense that the sum of the distances from any such point to each
of the three sides must always equal unity.
2.9. Consider a pure birth process with constant birth rate λ. Let us consider an interval of length T, which we divide up into m segments each of length T/m. Define Δt ≜ T/m.
(a) For Δt small, find the probability that a single arrival occurs in each of exactly k of the m intervals and that no arrivals occur in the remaining m − k intervals.
(b) Consider the limit as Δt → 0, that is, as m → ∞ for fixed T, and evaluate the probability P_k(T) that exactly k arrivals occur in the interval of length T.
2.10. Consider a population of bacteria of size N(t) at time t for which N(0) = 1. We consider this to be a pure birth process in which any member of the population will split into two new members in the interval (t, t + Δt) with probability λΔt + o(Δt) or will remain unchanged in this interval with probability 1 − λΔt + o(Δt) as Δt → 0.
(a) Let P_k(t) = P[N(t) = k] and write down the set of differential-difference equations that must be satisfied by these probabilities.
(b) From part (a) show that the z-transform P(z, t) for N(t) must satisfy

P(z, t) = \frac{z e^{-λt}}{1 - z + z e^{-λt}}

(c) Find E[N(t)].
(d) Solve for P_k(t).
(e) Solve for P(z, t), E[N(t)], and P_k(t) that satisfy the initial condition N(0) = n ≥ 1.
(f) Consider the corresponding deterministic problem in which each bacterium splits into two every 1/λ sec and compare with the answer in part (c).
2.11. Consider a birth-death process with coefficients
k=O k=1
k#-O k#-I
k~O
k~O
P(z, t) = 2,Pk(t)Zk
k=O
and find the partial differential equation that P(z, t ) must sat isfy.
(c) Show that the solution to this equ ation is
(K - k )A k~K
Ak ={
o k > K
Write down the differential-difference equations for
(c) What is the value of P(1, t)? Give a verbal interpretation for the expression

\bar{N}(t) = \lim_{z \to 1} \frac{\partial}{\partial z} P(z, t)

(d) Assuming that the population size begins with i members at time 0, find an ordinary differential equation for N̄(t) and then solve for N̄(t). Consider the case λ = μ as well as λ ≠ μ.
(e) Find the limiting value for N̄(t) in the case λ < μ (as t → ∞).
2.16. Consider the equations of motion in Eq. (2.148) and define the Laplace transform P_k*(s) of P_k(t). Show that

P_k^*(s) = \frac{\prod_{i=0}^{k-1} λ_i}{\prod_{i=0}^{k} (s + λ_i)}
2.17. Consider a time interval (0, t) during which a Poisson process generates arrivals at an average rate λ. Derive Eq. (2.147) by considering the two events: exactly k − 1 arrivals occur in the interval (0, t − Δt) and the event that exactly one arrival occurs in the interval (t − Δt, t). Considering the limit as Δt → 0 we immediately arrive at our desired result.
2.18. A barber opens up for business at t = 0. Customers arrive at random in a Poisson fashion; that is, the pdf of interarrival time is a(t) = λe^{-λt}. Each haircut takes X sec (where X is some random variable). Find the probability P that the second arriving customer will not have to wait and also find W, the average value of his waiting time, for the two following cases:
i. X = c = constant.
ii. X is exponentially distributed with pdf

b(x) = μe^{-μx}
2.19. At t = 0 customer A places a request for service and finds all m servers busy and n other customers waiting for service in an M/M/m queueing system. All customers wait as long as necessary for service, waiting customers are served in order of arrival, and no new requests for service are permitted after t = 0. Service times are assumed to be mutually independent, identical, exponentially distributed random variables, each with mean duration 1/μ.
(a) Find the expected length of time customer A spends waiting for service in the queue.
(b) Find the expected length of time from the arrival of customer A at t = 0 until the system becomes completely empty (all customers complete service).
(c) Let X be the order of completion of service of customer A; that is, X = k if A is the kth customer to complete service after t = 0. Find P[X = k] (k = 1, 2, ..., m + n + 1).
(d) Find the probability that customer A completes service before the customer immediately ahead of him in the queue.
(e) Let W be the amount of time customer A waits for service. Find P[W > x].
2.20. In this problem we wish to proceed from Eq. (2.162) to the transient solution in Eq. (2.163). Since P*(z, s) must converge in the region |z| ≤ 1 for Re(s) > 0, then, in this region, the zeros of the denominator in Eq. (2.162) must also be zeros of the numerator.
(a) Find those two values of z that give the denominator zeros, and denote them by α_1(s), α_2(s), where |α_2(s)| < |α_1(s)|.
(b) Using Rouché's theorem (see Appendix I) show that the denominator of P*(z, s) has a single zero within the unit disk |z| ≤ 1.
(c) Requiring that the numerator of P*(z, s) vanish at z = α_2(s) from our earlier considerations, find an explicit expression for P_0*(s).
(d) Write P*(z, s) in terms of α_1(s) = α_1 and α_2(s) = α_2. Then show that this equation may be reduced to
.
k pkl _t-lf k(al)-=-
[s+ ";S2 -
2.A.
4A.u]-k
where ρ and a are as defined in Eqs. (2.164) and (2.165) and where I_k(x) is the modified Bessel function of the first kind of order k as defined in Eq. (2.166). Using these facts and the simple relations among Bessel functions, namely,
X(t) = LXi
i =l
PART II
ELEMENTARY
QUEUEING THEORY
Elementary here means that all the systems we consider are pure Markov-
ian and, therefore, our state description is convenient and manageable. In
Part I we developed the time-dependent equations for the behavior of
birth-death processes ; here in Chapter 3 we address the eq uilibrium solut ion
for these systems. The key eq uation in th is chapte r is Eq. (3.11), and the
balance of the material is the simple application of that formula. It , in fact ,
is no more than the solution to the equation π = πP derived in Chapter 2.
The key tool used here is again that which we find throughout the text,
namely, the calculation of flow rates across the bou ndaries of a closed
system. In the case of equilibrium we merely ask that the rate of flow into be
equal to the rate of flow out of a system. The application of these basic
results is more than just an exercise for it is here that we first obtain some
equations of use in engineering and designing queueing systems. The classical
M/M/1 queue is studied and some of its important performance measures
are evaluated. More comple x models involving finite storage, multiple
servers, finite customer population, and the like, are developed in the balance
of this chapter. In Chapter 4 we leave the birth-death systems and allow
more general Markovian queue s, once again to be studied in equilibrium. We
find that the technique s here are similar to our earlier ones, but find that no
general solution such as Eq. (3.11) is available ; each system is a case unt o
itself and so we are rapidly led into the solutions of difference equations,
which force us to look carefully at the method of z-transforms for these
solutions. The ingenious method of stages introduced by Erlang is considered
here and its generality discussed. At the end of the chapter we introduce (for
later use in Volume II) networks of Markovian queues in which we take
exquisite ad vantage of the memoryle ss propertie s that Mark ovian queues
provide even in a network environment. At this point , however, we have
essent ially exhausted the use of the memoryless distribution and we must
depa rt from that crutch in the following parts.
3
Recall from the previous chapter that the limit given in Eq. (3.1) is independent of the initial conditions.
Just as we used the state-transition-rate diagram as an inspection technique for writing down the equations of motion in Chapter 2, so may we use the same concept in writing down the equilibrium equations [Eqs. (3.2) and (3.3)] directly from that diagram. In this equilibrium case it is clear that flow must be conserved in the sense that the input flow must equal the output flow from a given state. For example, if we look at Figure 2.9 once again and concentrate on state E_k in equilibrium, we observe that
and
(3.6)
p_k = p_0 \frac{λ_0 λ_1 \cdots λ_{k-1}}{μ_1 μ_2 \cdots μ_k}    (3.10)

To validate this assertion we need merely use the inductive argument and apply Eq. (3.10) to Eq. (3.4), solving for p_{k+1}. Carrying out this operation we do, in fact, find that (3.10) is the solution to the general birth-death process in this steady-state or limiting case. We have thus expressed all equilibrium probabilities p_k in terms of a single unknown constant p_0:

p_k = p_0 \prod_{i=0}^{k-1} \frac{λ_i}{μ_{i+1}}        k = 0, 1, 2, \ldots    (3.11)

(Recall the usual convention that an empty product is unity by definition.) Equation (3.5) provides the additional condition that allows us to determine p_0; thus, summing over all k, we obtain

p_0 = \frac{1}{1 + \displaystyle\sum_{k=1}^{\infty} \prod_{i=0}^{k-1} \frac{λ_i}{μ_{i+1}}}    (3.12)
Th is "product" solution for P» (k = 0 , I , 2, . . .) simply obt ained , is a
principal equati on in elementary queueing theory and, in fact, is the point of
dep arture for all of our further solutions in this chapter.
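Equations (3.11) and (3.12) are also directly usable for numerical work on any birth-death chain. The following is a small helper sketch (the function names and the truncation of the infinite sum are assumptions for illustration; the example rates give the familiar M/M/1 answer):

```python
def bd_equilibrium(lam, mu, K):
    """Evaluate Eqs. (3.11)-(3.12): lam(k), mu(k) are the birth/death rate
    functions; the normalizing sum is truncated at state K for numerical work."""
    products = [1.0]                              # empty product, k = 0
    for k in range(1, K + 1):
        products.append(products[-1] * lam(k - 1) / mu(k))
    p0 = 1.0 / sum(products)                      # Eq. (3.12)
    return [p0 * prod for prod in products]       # Eq. (3.11)

# Example: constant rates lam = 0.5, mu = 1.0 (an M/M/1 chain);
# p_k should be close to (1 - rho) * rho**k with rho = 0.5.
p = bd_equilibrium(lambda k: 0.5, lambda k: 1.0, K=50)
print(p[:4])    # ~ [0.5, 0.25, 0.125, 0.0625]
```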
A second easy way to obtain the solution to Eq. (3.4) is to rewrite that equation as follows:

(3.13)

Defining

(3.14)

we have from Eq. (3.13) that

(3.15)

Clearly Eq. (3.15) implies that

g_{-1} = 0

and so the constant in Eq. (3.16) must be 0. Setting g_k equal to 0, we immediately obtain from Eq. (3.14)

p_{k+1} = \frac{λ_k}{μ_{k+1}}\, p_k    (3.17)

(3.18)

(3.19)
* It is easy to construct counterexamples to this case, and so we require the precise arguments which follow.
On the other hand, all states will be recurrent null if and only if

Recurrent null:  S_1 = ∞,  S_2 = ∞

and all states will be transient if and only if

Transient:  S_2 < ∞

It is the ergodic case that gives rise to the equilibrium probabilities {p_k} and that is of most interest to our studies. We note that the condition for ergodicity is met whenever the sequence {λ_k/μ_k} remains below unity from some k onwards, that is, if there exists some k_0 such that for all k ≥ k_0 we have

\frac{λ_k}{μ_k} < 1    (3.20)

We will find this to be true in most of the queueing systems we study.
We are now ready to apply our general solution as given in Eqs. (3.11) and (3.12) to some very important special cases. Before we launch headlong into that discussion, let us put at ease those readers who feel that the birth-death constraints of permitting only nearest-neighbor transitions are too confining. It is true that the solution given in Eqs. (3.11) and (3.12) applies only to nearest-neighbor birth-death processes. However, rest assured that the equilibrium methods we have described can be extended to more general than nearest-neighbor systems; these generalizations are considered in Chapter 4.

Figure 3.1  State-transition-rate diagram for M/M/1.
p_k = p_0 \prod_{i=0}^{k-1} \frac{λ}{μ}

or

p_k = p_0 \left(\frac{λ}{μ}\right)^k    (3.21)
The result is immediate. The conditions for our system to be ergodic (and, therefore, to have an equilibrium solution p_k > 0) are that S_1 < ∞ and S_2 = ∞; in this case the first condition becomes

\sum_{k=0}^{\infty} \left(\frac{λ}{μ}\right)^k < ∞

The series on the left-hand side of the inequality will converge if and only if λ/μ < 1. The second condition for ergodicity becomes
This last condition will be satisfied if λ/μ ≤ 1; thus the necessary and sufficient condition for ergodicity in the M/M/1 queue is simply λ < μ. In order to solve for p_0 we use Eq. (3.12) [or Eq. (3.5) as suits the reader] and obtain
\bar{N} = \sum_{k=0}^{\infty} k p_k = (1 - ρ) \sum_{k=0}^{\infty} k ρ^k

Using a trick similar to the one used in deriving Eq. (2.142) we have

\bar{N} = (1 - ρ)\, ρ \frac{\partial}{\partial ρ} \sum_{k=0}^{\infty} ρ^k = (1 - ρ)\, ρ \frac{\partial}{\partial ρ} \frac{1}{1 - ρ}

\bar{N} = \frac{ρ}{1 - ρ}    (3.24)

* If we inspect the transient solution for M/M/1 given in Eq. (2.163), we see the term (1 − ρ)ρ^k; the reader may verify that, for ρ < 1, the limit of the transient solution agrees with our solution here.
The behavior of the expected number in the system is plotted in Figure 3.3. By similar methods we find that the variance of the number in the system is given by

σ_N^2 = \frac{ρ}{(1 - ρ)^2}    (3.25)

We may now apply Little's result directly from Eq. (2.25) in order to obtain
T = \frac{\bar{N}}{λ} = \left(\frac{ρ}{1-ρ}\right)\left(\frac{1}{λ}\right)

T = \frac{1/μ}{1 - ρ}    (3.26)
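A quick numerical look at Eqs. (3.24) and (3.26) (a sketch with illustrative utilizations) makes the deterioration near ρ = 1 vivid and also confirms Little's result T = N̄/λ:

```python
mu = 1.0
for rho in (0.5, 0.8, 0.9, 0.99):
    lam = rho * mu
    N = rho / (1 - rho)            # Eq. (3.24)
    T = (1 / mu) / (1 - rho)       # Eq. (3.26)
    print(rho, N, T, N / lam)      # N / lam equals T (Little's result)
```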
* We observe that at ρ = 1 the system behavior is unstable; this is not surprising if one recalls that ρ < 1 was our condition for ergodicity. What is perhaps surprising is that the behavior of the average number N̄ and of the average system time T deteriorates so badly as ρ → 1 from below; we had seen for steady flow systems in Chapter 1 that so long as R < C (which corresponds to the case ρ < 1) no queue formed and smooth, rapid flow proceeded through the system. Here in the M/M/1 queue we find this is no longer true and that we pay an extreme penalty when we attempt to run the system near (but below) its capacity. The
simple pole at ρ = 1. This type of behavior with respect to ρ as ρ approaches 1 is characteristic of almost every queueing system one can encounter. We will see it again in M/G/1 in Chapter 5 as well as in the heavy traffic behavior of G/G/1 (and also in the tight bounds on G/G/1 behavior) in Volume II, Chapter 2.
Another interesting quantity to calculate is the probability of finding at least k customers in the system:

P[\geq k \text{ in system}] = \sum_{i=k}^{\infty} (1 - ρ)ρ^i = ρ^k
Figure 3.5  State-transition-rate diagram for discouraged arrivals.
p_k = p_0 \left(\frac{α}{μ}\right)^k \frac{1}{k!}    (3.29)

λ̄ = μ(1 − e^{-α/μ})

T = \frac{α}{μ^2 (1 − e^{-α/μ})}    (3.32)

* Note that this result could have been obtained from λ̄ = Σ_k λ_k p_k. The reader should verify this last calculation.
3.4. M/M/∞: RESPONSIVE SERVERS (INFINITE NUMBER OF SERVERS)
p_k = p_0 \prod_{i=0}^{k-1} \frac{λ}{(i+1)μ} = p_0 \left(\frac{λ}{μ}\right)^k \frac{1}{k!}    (3.33)

\bar{N} = \frac{λ}{μ}

Here, too, the ergodic condition is simply λ/μ < ∞. It appears then that a system of discouraged arrivals behaves exactly the same as a system that includes a responsive server. However, Little's result provides a different (and simpler) form for T here than that given in Eq. (3.32); thus

T = \frac{1}{μ}

This answer is, of course, obvious since if we use the interpretation where each arriving customer is granted his own server, then his time in system will be merely his service time, which clearly equals 1/μ on the average.
p_k = p_0 \left(\frac{λ}{μ}\right)^k \frac{1}{k!}    (3.35)
Similarly, for k ≥ m,

p_k = p_0 \prod_{i=0}^{m-1} \frac{λ}{(i+1)μ} \prod_{i=m}^{k-1} \frac{λ}{mμ} = p_0 \left(\frac{λ}{μ}\right)^k \frac{1}{m!\, m^{k-m}}    (3.36)
Collecting together the results from Eqs. (3.35) and (3.36) we have

p_k = \begin{cases} p_0 \dfrac{(mρ)^k}{k!} & k ≤ m \\[2mm] p_0 \dfrac{ρ^k m^m}{m!} & k ≥ m \end{cases}    (3.37)

where

ρ = \frac{λ}{mμ} < 1    (3.38)
This expression for ρ follows that in Eq. (2.30) and is consistent with our
p_0 = \left[ 1 + \sum_{k=1}^{m-1} \frac{(mρ)^k}{k!} + \sum_{k=m}^{\infty} \frac{(mρ)^k}{m!\, m^{k-m}} \right]^{-1}

and so

p_0 = \left[ \sum_{k=0}^{m-1} \frac{(mρ)^k}{k!} + \left(\frac{(mρ)^m}{m!}\right)\left(\frac{1}{1-ρ}\right) \right]^{-1}    (3.39)
The probability that an arriving customer is forced to join the queue is given by

P[\text{queueing}] = \sum_{k=m}^{\infty} p_k

Thus

P[\text{queueing}] = \frac{\dfrac{(mρ)^m}{m!}\left(\dfrac{1}{1-ρ}\right)}{\displaystyle\sum_{k=0}^{m-1} \frac{(mρ)^k}{k!} + \frac{(mρ)^m}{m!}\left(\frac{1}{1-ρ}\right)}    (3.40)

This probability is of wide use in telephony and gives the probability that no trunk (i.e., server) is available for an arriving call (customer) in a system of m trunks; it is referred to as Erlang's C formula and is often denoted* by C(m, λ/μ).
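Equation (3.40) is straightforward to evaluate directly. A sketch (the function name and the example loads are illustrative; the direct sums are fine for moderate m):

```python
import math

def erlang_c(m, a):
    """P[queueing] of Eq. (3.40) for M/M/m, with offered load a = lam/mu = m*rho."""
    rho = a / m
    head = sum(a ** k / math.factorial(k) for k in range(m))
    tail = (a ** m / math.factorial(m)) * (1 / (1 - rho))
    return tail / (head + tail)

print(erlang_c(1, 0.5))    # single server: equals rho = 0.5, as expected
print(erlang_c(10, 8.0))   # ten trunks offered 8 Erlangs of traffic
```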
Figure 3.8  State-transition-rate diagram for the case of finite storage room M/M/1/K.
λ_k = \begin{cases} λ & k < K \\ 0 & k ≥ K \end{cases}

μ_k = μ \qquad k = 1, 2, \ldots, K

From Eq. (3.20), we see that this system is always ergodic. The state-transition-rate diagram for this finite Markov chain is shown in Figure 3.8. Proceeding directly with Eq. (3.11) we obtain

p_k = p_0 \prod_{i=0}^{k-1} \frac{λ}{μ} \qquad k ≤ K

or

p_k = p_0 \left(\frac{λ}{μ}\right)^k \qquad k ≤ K    (3.41)

Of course, we also have

p_k = 0 \qquad k > K    (3.42)

In order to solve for p_0 we use Eqs. (3.41) and (3.42) in Eq. (3.12) to obtain

and so

p_0 = \frac{1 - λ/μ}{1 - (λ/μ)^{K+1}}

Thus, finally,

p_k = \begin{cases} \dfrac{1 - λ/μ}{1 - (λ/μ)^{K+1}} \left(\dfrac{λ}{μ}\right)^k & k ≤ K \\[2mm] 0 & \text{otherwise} \end{cases}    (3.43)
3.7. M/M/m/m: m-SERVER LOSS SYSTEMS
Figure 3.9  State-transition-rate diagram for m-server loss system M/M/m/m.
p_k = \begin{cases} \dfrac{1}{1 + λ/μ} & k = 0 \\[2mm] \dfrac{λ/μ}{1 + λ/μ} & k = 1 = K \\[2mm] 0 & \text{otherwise} \end{cases}    (3.44)
p_0 = \left[ \sum_{k=0}^{m} \left(\frac{λ}{μ}\right)^k \frac{1}{k!} \right]^{-1}

p_m = \frac{(λ/μ)^m / m!}{\displaystyle\sum_{k=0}^{m} (λ/μ)^k / k!}    (3.46)
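The blocking probability of Eq. (3.46) is usually computed with the standard recursion B(0) = 1, B(k) = aB(k−1)/(k + aB(k−1)), which is numerically safer than the factorials for large m; that recursion is not derived in the text, so the sketch below (illustrative parameters) simply checks it against the direct form of Eq. (3.46):

```python
import math

def erlang_b(m, a):
    """p_m of Eq. (3.46): probability all m servers are busy, offered load a = lam/mu."""
    B = 1.0
    for k in range(1, m + 1):
        B = a * B / (k + a * B)     # standard stable recursion, equivalent to (3.46)
    return B

m, a = 5, 3.0
direct = (a ** m / math.factorial(m)) / sum(a ** k / math.factorial(k) for k in range(m + 1))
print(erlang_b(m, a), direct)       # identical values
```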
Thus

p_k = \begin{cases} p_0 \left(\dfrac{λ}{μ}\right)^k \dfrac{M!}{(M-k)!} & k ≤ M \\[2mm] 0 & k > M \end{cases}    (3.47)
λ_k = \begin{cases} λ(M - k) & 0 ≤ k ≤ M \\ 0 & \text{otherwise} \end{cases}

μ_k = kμ \qquad k = 1, 2, \ldots
Clearly, this too is an ergodic system. The finite state-transition-rate diagram is shown in Figure 3.11. Solving this system, we have from Eq. (3.11)

p_k = p_0 \prod_{i=0}^{k-1} \frac{λ(M-i)}{(i+1)μ} = p_0 \left(\frac{λ}{μ}\right)^k \binom{M}{k}

where

\binom{M}{k} = \frac{M!}{k!\,(M-k)!}

Figure 3.11  State-transition-rate diagram for "infinite"-server finite population system M/M/∞//M.
Thus

p_k = \begin{cases} \dfrac{(λ/μ)^k \binom{M}{k}}{(1 + λ/μ)^M} & 0 ≤ k ≤ M \\[2mm] 0 & \text{otherwise} \end{cases}    (3.50)
We may easily calculate the expected number of people in the system from

\bar{N} = \frac{\displaystyle\sum_{k \geq 0} k \left(\frac{λ}{μ}\right)^k \binom{M}{k}}{(1 + λ/μ)^M}

Using the partial-differentiation trick such as for obtaining Eq. (3.24) we then have

\bar{N} = \frac{M λ/μ}{1 + λ/μ}
λ_k = \begin{cases} λ(M - k) & 0 ≤ k ≤ K - 1 \\ 0 & \text{otherwise} \end{cases}

μ_k = \begin{cases} kμ & 0 ≤ k ≤ m \\ mμ & m ≤ k ≤ K \end{cases}
3.10. M /M /m /K /M: FINITE POPUL ATION, m-SERVER CASE 109
M~ (1.1- 1) ~ (M- m+ l p (M- m+ 2) .\ ( .\I- m) ~ (M- K+ I) ~
~
IJ Zu
.. :e:B:GD3: ...~ (m - I) J: mu mu /Il 1J
k- I A(M - i)
Pk = Po IT
.~o
(.I + I) P.
-_ po(~)k(Mk)
r: O~k~lII-l (3.51)
(~)(;r k = 0, 1, .. . , m (3.53)
Pk = i (~) (2:)'
. ~o I p.
This is kn own as the Engset distribution.
We could continue these examples ad nauseam but we will instead take a benevolent approach and terminate the set of examples here. Additional examples are given in the exercises. It should be clear to the reader by now that a large number of interesting queueing structures can be modeled with the birth-death process. In particular, we have demonstrated the ability to model the multiple-server case, the finite-population case, the finite-storage case, and combinations thereof. The common element in all of these is that the solution for the equilibrium probabilities {p_k} is given in Eqs. (3.11) and (3.12). Only systems whose solutions are given by these equations have been considered in this chapter. However, there are many other Markovian systems that lend themselves to simple solution and which are important in queueing theory. In the next chapter (4) we consider the equilibrium solution for Markovian queues; in Chapter 5 we will generalize to semi-Markov processes in which the service time distribution B(x) is permitted to be general, and in Chapter 6 we revert back to the exponential service time case, but permit the interarrival time distribution A(t) to be general; in both of these cases an imbedded Markov chain will be identified and solved. Only when both A(t) and B(x) are nonexponential do we require the methods of advanced queueing theory discussed in Chapter 8. (There are some special nonexponential distributions that may be described with the theory of Markov processes and these too are discussed in Chapter 4.)
EXERCISES

3.1. Consider a pure Markovian queueing system in which

    λ_k = \begin{cases} λ & 0 ≤ k ≤ K \\ 2λ & K < k \end{cases}

    μ_k = μ \qquad k = 1, 2, \ldots

    (a) Find the equilibrium probabilities p_k for the number in the system.
    (b) What relationship must exist among the parameters of the problem in order that the system be stable and, therefore, that this equilibrium solution in fact be reached? Interpret this answer in terms of the possible dynamics of the system.
3.2. Consider a Markovian queueing system in which

    λ_k = λ α^k \qquad k ≥ 0, \quad 0 ≤ α < 1

    μ_k = μ \qquad k ≥ 1

    Find the equilibrium probabilities p_k = \lim_{t→∞} P_k(t).
3.4. Consider an M/M/1 system with parameters λ, μ in which customers are impatient. Specifically, upon arrival, customers estimate their queueing time w and then join the queue with probability e^{-αw} (or leave with probability 1 - e^{-αw}). The estimate is w = k/μ when the new arrival finds k in the system. Assume 0 ≤ α.

    (a) In terms of p_0, find the equilibrium probabilities p_k of finding k in the system. Give an expression for p_0 in terms of the system parameters.
    (b) For 0 < α, 0 < ρ, under what conditions will the equilibrium solution hold?
    (c) For α → ∞, find p_k explicitly and find the average number in the system.
3.5. Consider a birth-death system with the following birth and death coefficients:

    λ_k = (k + 2)λ \qquad k = 0, 1, 2, \ldots
    μ_k = kμ \qquad k = 1, 2, 3, \ldots

    All other coefficients are zero.
    (a) Solve for p_k. Be sure to express your answer explicitly in terms of λ, k, and μ only.
    (b) Find the average number of customers in the system.

3.6. Consider a birth-death system with coefficients

    λ_k = αk(K_2 - k) \qquad k = K_1, K_1 + 1, \ldots, K_2
    μ_k = βk(k - K_1) \qquad k = K_1, K_1 + 1, \ldots, K_2

    where K_1 ≤ K_2 and where these coefficients are zero outside the range K_1 ≤ k ≤ K_2. Solve for p_k (assuming that the system initially contains K_1 ≤ k ≤ K_2 customers).

3.7. Consider an M/M/m system that is to serve the pooled sum of two Poisson arrival streams; the ith stream has an average arrival rate given by λ_i and exponentially distributed service times with mean 1/μ_i (i = 1, 2). The first stream is an ordinary stream whereby each arrival requires exactly one of the m servers; if all m servers are busy then any newly arriving customer of type 1 is lost. Customers from the second class each require the simultaneous use of m_0 servers (and will occupy them all simultaneously for the same exponentially distributed amount of time whose mean is 1/μ_2 sec); if a customer from this class finds fewer than m_0 idle servers then he too is lost to the system. Find the fraction of type 1 customers and the fraction of type 2 customers that are lost.
3.8. Consider a finite customer population system with a single server such as that considered in Section 3.8; let the parameters M, λ be replaced by M, λ'. It can be shown that if M → ∞ and λ' → 0 such that lim Mλ' = λ, then the finite population system becomes an infinite population system with exponential interarrival times (at a mean rate of λ customers per second). Now consider the case of Section 3.10; the parameters of that case are now to be denoted M, λ', m, μ, K in the obvious way. Show what value these parameters must take on if they are to represent the earlier cases described in Sections 3.2, 3.4, 3.5, 3.6, 3.7, 3.8, or 3.9.
3.9. Using the definition for B(m, λ/μ) in Section 3.7 and the definition of C(m, λ/μ) given in Section 3.5, establish the following for λ/μ > 0 and m = 1, 2, ...:

    (b) C(m, λ/μ) ...

... have occurred at the end of this interval as well as any customers who are about to leave at this point.

    (c) Solve for the expected value of the number of customers at these points.
3.11. Consider an M/M/1 system with "feedback"; by this we mean that when a customer departs from service he has probability a of rejoining the tail of the queue after a random feedback time, which is exponentially distributed (with mean 1/γ sec); on the other hand, with probability 1 - a he will depart forever after completing service. It is clear that a customer may return many times to the tail of the queue before making his eventual final departure. Let p_{kj} be the equilibrium probability that there are k customers in the "system" (that is, in the queue and the service facility) and that there are j customers in the process of returning to the system.

    (a) Write down the set of difference equations for the equilibrium probabilities p_{kj}.
    (b) Defining the double z-transform
    P(z_1, z_2) = \sum_{k=0}^{∞} \sum_{j=0}^{∞} p_{kj}\, z_1^k z_2^j

    show that the equilibrium equations lead to a functional equation relating P(z_1, z_2) and P(0, z_2).

3.12. A fixed number M of customers circulate between two service stages arranged in a cycle (Stage 1 followed by Stage 2, whose output returns to Stage 1).
Both servers are of the exponential type with rates μ_1 and μ_2, respectively. Let

    p_k = P[k \text{ customers in stage 1 and } M-k \text{ in stage 2}]

    (a) Draw the state-transition-rate diagram.
    (b) Write down the relationship among {p_k}.
    (c) Find

    P(z) = \sum_{k=0}^{M} p_k z^k

    (d) Find p_k.
3.13. Consider an M/M/1 queue with parameters λ and μ. A customer in the queue will defect (depart without service) with probability αΔt + o(Δt) in any interval of duration Δt.

    (a) Draw the state-transition-rate diagram.
    (b) Express p_{k+1} in terms of p_k.
    (c) For α = μ, solve for p_k (k = 0, 1, 2, ...).

3.14. Let us elaborate on the M/M/1/K system of Section 3.6.

    (a) Evaluate p_k when λ = μ.
    (b) Find N̄ for λ ≠ μ and for λ = μ.
    (c) Find T by carefully solving for the average arrival rate to the system.
4  Markovian Queues in Equilibrium

We consider the special Erlangian distribution E_r, which is applied to the queueing systems M/E_r/1 and E_r/M/1. We find that the system M/E_r/1 has an interpretation as a bulk arrival process whose general form we study further; similarly the system E_r/M/1 may be interpreted as a bulk service system, which we also investigate separately. We then consider the more general systems E_{r_a}/E_{r_b}/1 and step beyond that to mention a broad class of G/G/1 systems that are derivable from the Erlangian by "series-parallel" combinations. Finally, we consider the case of queueing networks in which all the underlying distributions once again are of the memoryless type. As we shall see, in most of these cases we obtain a product form of solution.
4.1. THE EQUILIBRIUM EQUATIONS

As we saw in Chapter 2, the vector of equilibrium probabilities p = (p_0, p_1, p_2, ...) for a Markov chain with infinitesimal generator Q must satisfy

    \mathbf{p} Q = 0

with the additional conservation relation given in Eq. (2.117), namely,

    \sum_k p_k = 1
This vector equation describes the "equations of motion" in equilibrium. In Chapter 3 we presented a graphical inspection method for writing down equations of motion making use of the state-transition-rate diagram. For the equilibrium case that method was based on the observation that the probabilistic flow rate into a state must equal the probabilistic flow rate out of that state. It is clear that this notion of flow conservation applies more generally than only to the birth-death process, but in fact to any Markov chain. Thus we may construct "non-nearest-neighbor" systems and still expect that our flow conservation technique should work; this in fact is the case. Our approach then is to describe our Markov chain in terms of a state diagram and then apply conservation of flow to each state in turn. This graphical representation is often easier for this purpose than, in fact, is the verbal, mathematical, or matrix description of the system. Once we have this graphical representation we can, by inspection, write down the equations that govern the system dynamics. As an example, let us consider the very simple three-state Markov chain (which clearly is not a birth-death process since the transition E_0 to E_2 is permitted), as shown in Figure 4.1. Writing down the flow conservation law for each state yields
    λ p_0 = μ p_2    (4.1)

    λ p_1 = \frac{λ}{2} p_0    (4.2)

    μ p_2 = \frac{λ}{2} p_0 + λ p_1    (4.3)

where Eqs. (4.1), (4.2), and (4.3) correspond to the flow conservation for states E_0, E_1, and E_2, respectively. Observe also that the last equation is exactly the sum of the first two; we always have exactly one redundant equation in these finite Markov chains. We know that the additional equation required is

    p_0 + p_1 + p_2 = 1    (4.4)
Voila! Simple as pie. In fact, it is as "simple" as inverting a set of simultaneous linear equations.
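To make the "inverting simultaneous linear equations" remark concrete, here is a small sketch (ours, not the text's) that solves two of the balance equations as reconstructed above, Eqs. (4.1) and (4.2), together with the normalization (4.4); the particular rate values are arbitrary.

    import numpy as np

    lam, mu = 1.0, 2.0
    # Unknowns: p0, p1, p2.  Two independent balance equations plus normalization.
    A = np.array([
        [lam,        0.0, -mu ],   # (4.1)  lambda*p0 - mu*p2 = 0
        [-lam / 2.0, lam,  0.0],   # (4.2)  lambda*p1 - (lambda/2)*p0 = 0
        [1.0,        1.0,  1.0],   # (4.4)  p0 + p1 + p2 = 1
    ])
    b = np.array([0.0, 0.0, 1.0])
    p = np.linalg.solve(A, b)
    print(p, p.sum())   # equilibrium probabilities p0, p1, p2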
We take advantage of this inspection technique in solving a number of Markov chains in equilibrium in the balance of this chapter.*
As in the previous chapter we are here concerned with the limiting probability defined as p_k = \lim_{t→∞} P[N(t) = k], assuming it exists. This probability may be interpreted as giving the proportion of time that the system spends in state E_k. One could, in fact, estimate this probability by measuring how often the system contained k customers as compared to the total measurement time. Another quantity of interest (perhaps of greater interest) in queueing systems is the probability that an arriving customer finds the system in state E_k; that is, we consider the equilibrium probability

    r_k = \lim_{t→∞} R_k(t)

in the case of an ergodic system. One might intuitively feel that in all cases p_k = r_k, but it is easy to show that this is not generally true. For example, let us consider the (non-Markovian) system D/D/1 in which arrivals are uniformly spaced in time such that we get one arrival every \bar{t} sec exactly; the service-time requirements are identical for all customers and equal, say

* It should also be clear that this inspection technique permits us to write down the time-dependent state probabilities P_k(t) directly, as we have already seen for the case of birth-death processes; these time-dependent equations will in fact be exactly Eq. (2.114).
where P_k(t) is, as before, the probability that the system is in state E_k at time t and where R_k(t) is the probability that a customer arriving at time t finds the system in state E_k. Specifically, for our system with Poisson arrivals we define A(t, t+Δt) to be the event that an arrival occurs in the interval (t, t+Δt); then we have

    R_k(t) = \lim_{Δt→0} P[N(t) = k \mid A(t, t+Δt)]    (4.5)

[where N(t) gives the number in system at time t]. Using our definition of conditional probability we may rewrite R_k(t) as

    R_k(t) = \lim_{Δt→0} \frac{P[N(t) = k,\; A(t, t+Δt)]}{P[A(t, t+Δt)]} = \lim_{Δt→0} \frac{P[A(t, t+Δt) \mid N(t) = k]\, P[N(t) = k]}{P[A(t, t+Δt)]}

Now for the case of Poisson arrivals we know (due to the memoryless property) that the event A(t, t+Δt) must be independent of the number in the system at time t (and also of the time t itself); consequently P[A(t, t+Δt) | N(t) = k] = P[A(t, t+Δt)], and so we have

    R_k(t) = P_k(t)    (4.6)

4.2. THE METHOD OF STAGES - ERLANGIAN DISTRIBUTION E_r

Consider first the single-stage exponential server shown in Figure 4.2, with service-time pdf

    b(x) = \frac{dB(x)}{dx} = μ e^{-μx} \qquad x ≥ 0    (4.7)
The notation of the figure shows an oval which represents the service facility and is labeled with the symbol μ, which represents the service-rate parameter as in Eq. (4.7).

* As we shall see in Chapter 5, a newer approach to this problem, the "method of imbedded Markov chains," was not available at the time of Erlang.
† Identical observations apply also to the interarrival time distribution.

Figure 4.2  The single-stage exponential server.

The reader will recall from Chapter 2 that the exponential distribution has a mean and variance given by

    E[\tilde{x}] = \frac{1}{μ}

    σ_b^2 = \frac{1}{μ^2}

where the subscript b on σ_b^2 identifies this as the service-time variance.
Now consider the system shown in Figure 4.3. In this figure the large oval represents the service facility. The internal structure of this service facility is revealed as a series or tandem connection of two smaller ovals. Each of these small ovals represents a single exponential server such as that depicted in Figure 4.2; in Figure 4.3, however, the small ovals are labeled internally with the parameter 2μ, indicating that they each have a pdf given by

    h(y) = 2μ e^{-2μy} \qquad y ≥ 0    (4.8)

Thus the mean and variance for h(y) are E[\tilde{y}] = 1/2μ and σ_h^2 = (1/2μ)^2. The fashion in which this two-stage service facility functions is that upon departure of a customer from this facility a new customer is allowed to enter from the left. This new customer enters stage 1 and remains there for an amount of time randomly chosen from h(y). Upon his departure from this first stage he then proceeds immediately into the second stage and spends an amount of time there equal to a random variable drawn independently once again from h(y). After this second random interval expires he then departs from the service facility and at this point only may a new customer enter the facility from the left. We see, then, that one, and only one, customer
Figure 4.3  The two-stage service facility.

Since the total service time is the sum of two independent stage times, its transform is

    B^*(s) = [H^*(s)]^2

But we already know the transform of the exponential from Eq. (2.144), and so

    H^*(s) = \frac{2μ}{s + 2μ}

Thus

    B^*(s) = \left(\frac{2μ}{s + 2μ}\right)^2
* As an example of a two-stage service facility in which only one stage may be active at a time, consider a courtroom in a small town. A queue of defendants forms, waiting for trial. The judge tries a case (the first service stage) and then fines the defendant. The second stage consists of paying the fine to the court clerk. However, in this small town, the judge is also the clerk and so he moves over to the clerk's desk, collects the fine, releases the defendant, goes back to his bench, and then accepts the next defendant into "service."
from the density function given in Eq. (4.12). We choose the first of these three methods since it is most straightforward (the reader may verify the other two for his own satisfaction). Since the time spent in service is the sum of two random variables, it is clear that the expected time in service is the sum of the expectations of each. Thus we have

    E[\tilde{x}] = 2E[\tilde{y}] = \frac{1}{μ}

Similarly, since the two random variables being summed are independent, we may therefore sum their variances to find the variance of the sum:

    σ_b^2 = σ_h^2 + σ_h^2 = \frac{1}{2μ^2}

Note that we have arranged matters such that the mean time in service in the single-stage system of Figure 4.2 and the two-stage system of Figure 4.3 is the same. We accomplished this by speeding up each of the two-stage service stations by a factor of 2. Note further that the variance of the two-stage system is one-half the variance of the one-stage system.
The previous paragraph introduced the notion of a two-stage service facility but we have yet to discuss the crucial point. Let us consider the state variable for a queueing system with Poisson arrivals and a two-stage exponential server as given in Figure 4.3. As always, as part of our state description, we must record the number of customers waiting in the queue. In addition we must supply sufficient information about the service facility so as to summarize the relevant past history. Owing to the memoryless property of the exponential distribution it is enough to indicate which of the following three possible situations may be found within the service facility: either both stages are idle (indicating an empty service facility); or the first stage is busy and the second stage is idle; or the first stage is idle and the second stage is busy. This service-facility state information may be supplied by identifying the stage of service in which the customer may be found. Our state description then becomes a two-dimensional vector that specifies the number of customers in queue and the number of stages yet to be completed by our customer in service. The time this customer has already spent in his current stage of service is irrelevant in calculating the future behavior of the system. Once again we have a Markov process with a discrete (two-dimensional) state space!

The method generalizes and so now we consider the case in which we provide an r-stage service facility, as shown in Figure 4.4. In this system, of course, when a customer departs by exiting from the right side of the oval service facility a new customer may then enter from the left side and proceed one stage at a time through the sequence of r stages. Upon his departure from

Figure 4.4  The r-stage Erlangian server E_r.
the rth stage a new customer again may then enter, and so on. The time that he spends in the ith stage is drawn from the density function

    h(y) = rμ e^{-rμy} \qquad y ≥ 0

whose mean is

    E[\tilde{y}] = \frac{1}{rμ}

It should be clear to the reader that we have chosen each stage in this system to have a service rate equal to rμ in order that the mean service time remain constant:

    E[\tilde{x}] = r\left(\frac{1}{rμ}\right) = \frac{1}{μ}

Similarly, since the stage times are independent we may add the variances to obtain

    σ_b^2 = r\left(\frac{1}{rμ}\right)^2 = \frac{1}{rμ^2}

and so the coefficient of variation of the r-stage Erlangian service time is

    C_b = \frac{σ_b}{E[\tilde{x}]} = \frac{1}{\sqrt{r}}    (4.14)

Figure 4.5  The family of r-stage Erlangian distributions E_r.
    x_{peak} = \left(\frac{r-1}{r}\right)\frac{1}{μ}    (4.17)

Thus we see that the location of the peak moves rather quickly toward its final location at 1/μ.
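As a small numerical check (our own illustration, not the text's), one can simulate the r-stage server by summing r independent exponential stages of rate rμ and confirm that the mean stays at 1/μ while the coefficient of variation falls as 1/√r:

    import random, statistics

    def erlang_sample(r: int, mu: float) -> float:
        """One service time drawn as the sum of r exponential stages, each of rate r*mu."""
        return sum(random.expovariate(r * mu) for _ in range(r))

    mu, n = 1.0, 200_000
    for r in (1, 2, 4, 16):
        samples = [erlang_sample(r, mu) for _ in range(n)]
        mean = statistics.fmean(samples)
        cb = statistics.pstdev(samples) / mean
        print(f"r={r:2d}  mean~{mean:.3f}  C_b~{cb:.3f}  (theory: 1.000, {1/r**0.5:.3f})")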
We now show that the limiting distribution is, in fact, a unit impulse by considering the limit of the Laplace transform given in Eq. (4.15):

    \lim_{r→∞} B^*(s) = \lim_{r→∞} \left(\frac{rμ}{s + rμ}\right)^r = \lim_{r→∞} \left(\frac{1}{1 + s/rμ}\right)^r

    \lim_{r→∞} B^*(s) = e^{-s/μ}    (4.18)

distribution by the symbol E_r (not to be confused with the notation for the state of a random process). Since our state variable is discrete, we are in a position to analyze the queueing system* M/E_r/1. This we do in the following section. Moreover, we will use the same technique in Section 4.4 to decompose the interarrival time distribution A(t) into an r-stage Erlangian distribution. Note in these next two sections that we neurotically require at least one of our distributions to be a pure exponential (this is also true for Chapters 5 and 6).
4.3. THE QUEUE M/E_r/1

Let P_j denote the equilibrium probability of finding j service stages in the system; then the probability of finding k customers is

    p_k = \sum_{j=(k-1)r+1}^{kr} P_j \qquad k = 1, 2, 3, \ldots

* Clearly this is a special case of the system M/G/1 which we will analyze in Chapter 5 using the imbedded Markov chain approach.
† Note that this converts our proposed two-dimensional state vector into a one-dimensional description.
And now for the beauty of Erlang's approach: We may represent the state-transition-rate diagram for stages in our system as shown in Figure 4.6. Focusing on state E_j we see that it is entered from below by a state which is r positions to its left and also entered from above by state E_{j+1}; the former transition is due to the arrival of r new stages when a new customer enters, and the latter is due to the completion of one stage within the r-stage service facility. Furthermore we may leave state E_j at a rate λ due to an arrival and at a rate rμ due to a service completion. Of course, we have special boundary conditions for states E_0, E_1, ..., E_{r-1}. In order to handle the boundary situation simply let us agree, as in Chapter 3, that state probabilities with negative subscripts are in fact zero. We thus define

    P_j = 0 \qquad j < 0    (4.21)

We may now write down the system state equations immediately by using our flow conservation inspection method. (Note that we are writing the forward equations in equilibrium.) Thus we have

    λ P_0 = rμ P_1    (4.22)

    (λ + rμ) P_j = λ P_{j-r} + rμ P_{j+1} \qquad j ≥ 1    (4.23)
Let us now use our "familiar" method of solving difference equations, namely the z-transform. Thus we define

    P(z) = \sum_{j=0}^{∞} P_j z^j

As usual, we multiply the jth equation given in Eq. (4.23) by z^j and then sum over all applicable j. This yields

    \sum_{j=1}^{∞} (λ + rμ) P_j z^j = \sum_{j=1}^{∞} λ P_{j-r} z^j + \sum_{j=1}^{∞} rμ P_{j+1} z^j

Rewriting we have
The first term on the right-hand side of this last equation is obtained by taking special note of Eq. (4.21). Simplifying we have

    P(z) = \frac{P_0[λ + rμ - (rμ/z)] - rμ P_1}{λ + rμ - λz^r - (rμ/z)}

We may now use Eq. (4.22) to simplify this last further:

    P(z) = \frac{rμ P_0[1 - (1/z)]}{λ + rμ - λz^r - (rμ/z)}

yielding finally

    P(z) = \frac{rμ P_0 (1 - z)}{rμ + λz^{r+1} - (λ + rμ)z}    (4.24)
We may evaluate the constant P_0 by recognizing that P(1) = 1 and using L'Hospital's rule; thus

    P(1) = 1 = \frac{rμ P_0}{rμ - λr}

giving (observe that P_0 = p_0)

    P_0 = 1 - \frac{λ}{μ}

In this system the arrival rate is λ and the average service time is held fixed at 1/μ independent of r. Thus we recognize that our utilization factor is

    ρ = λ\bar{x} = \frac{λ}{μ}    (4.25)

Substituting back into Eq. (4.24) we find

    P(z) = \frac{rμ(1 - ρ)(1 - z)}{rμ + λz^{r+1} - (λ + rμ)z}    (4.26)

We must now invert this z-transform to find the distribution of the number of stages in the system.

The case r = 1, which is clearly the system M/M/1, presents no difficulties; this case yields

    P(z) = \frac{μ(1 - ρ)(1 - z)}{μ + λz^2 - (λ + μ)z} = \frac{(1 - ρ)(1 - z)}{1 + ρz^2 - (1 + ρ)z}
The denominator factors into (1 - z)(1 - ρz) and so canceling the common term (1 - z) we obtain

    P(z) = \frac{1 - ρ}{1 - ρz}

We recognize this function as entry 6 in Table I.2 of Appendix I, and so we have immediately

    P_k = (1 - ρ)ρ^k \qquad k = 0, 1, 2, \ldots    (4.27)

Now in the case r = 1 it is clear that P_k = p_k and so Eq. (4.27) gives us the distribution of the number of customers in the system M/M/1, as we had seen previously in Eq. (3.23).
For arbitrary values of r things are a bit more complex. The usual approach to inverting a z-transform such as that given in Eq. (4.26) is to make a partial fraction expansion and then to invert each term by inspection; let us follow this approach. Before we can carry out this expansion we must identify the r + 1 zeroes of the denominator polynomial. Unity is easily seen to be one such. The denominator may therefore be written as (1 - z)[rμ - λ(z + z^2 + \cdots + z^r)], where the remaining r zeroes (which we choose to denote by z_1, z_2, ..., z_r) are the roots of the bracketed expression. Once we have found these roots* (which are unique) we may then write the denominator as rμ(1 - z)(1 - z/z_1) \cdots (1 - z/z_r). Substituting this back into Eq. (4.26) we find

    P(z) = \frac{1 - ρ}{(1 - z/z_1)(1 - z/z_2) \cdots (1 - z/z_r)}

Our partial fraction expansion now yields

    P(z) = \sum_{i=1}^{r} \frac{A_i}{1 - z/z_i}    (4.28)

where the constants A_i are obtained from the expansion in the usual way

* Many of the analytic problems in queueing theory reduce to the (difficult) task of locating the roots of a function.

and where as before P_0 = 1 - ρ. Thus we see for the system M/E_r/1 that the distribution of the number of stages in the system is a weighted sum of geometric distributions. The waiting-time distribution may be calculated using the methods developed later in Chapter 5.
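As a purely numerical alternative to locating the roots, the stage probabilities can be generated directly from the balance equations as written above in Eqs. (4.22) and (4.23) and then grouped r at a time to give the customer distribution; the following sketch (ours, with arbitrarily chosen parameters) illustrates the idea.

    def m_er_1_distribution(lam: float, mu: float, r: int, n_customers: int):
        """M/E_r/1: stage probabilities P_j by recursion, then p_k = sum of stages (k-1)r+1 .. kr.

        Uses P_0 = 1 - rho, P_1 = lam*P_0/(r*mu), and
        P_{j+1} = [(lam + r*mu) P_j - lam P_{j-r}] / (r*mu)   for j >= 1  (Eq. 4.23 rearranged).
        """
        rho = lam / mu
        assert rho < 1.0, "need rho < 1 for stability"
        rmu = r * mu
        n_stages = n_customers * r
        P = [0.0] * (n_stages + 2)
        P[0] = 1.0 - rho
        P[1] = lam * P[0] / rmu
        for j in range(1, n_stages + 1):
            prev = P[j - r] if j - r >= 0 else 0.0
            P[j + 1] = ((lam + rmu) * P[j] - lam * prev) / rmu
        return [P[0]] + [sum(P[(k - 1) * r + 1: k * r + 1]) for k in range(1, n_customers + 1)]

    print(m_er_1_distribution(lam=0.5, mu=1.0, r=2, n_customers=8))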
4.4. THE QUEUE E_r/M/1

We now reverse the roles and decompose the interarrival time into r exponential "arrival" stages, so that when k customers are present and the arriving customer is in his ith arrival stage, the number of arrival stages in the system is

    j = rk + i - 1

Once again let us use the definition given in Eq. (4.20) so that P_j is defined to be the probability of j arrival stages in the system; as always p_k will be the probability of finding k customers in the system, where now

    p_k = \sum_{j=rk}^{rk+r-1} P_j
The system we have defined is an irreducible ergodic Markov chain with its state-transition-rate diagram for stages given in Figure 4.7. Note that when a customer departs from service, he "removes" r stages of "arrival" from the system. Using our inspection method, we may write down the equilibrium equations as

    rλ P_0 = μ P_r    (4.32)

    rλ P_j = rλ P_{j-1} + μ P_{j+r} \qquad 1 ≤ j ≤ r-1    (4.33)

    (rλ + μ) P_j = rλ P_{j-1} + μ P_{j+r} \qquad r ≤ j    (4.34)
Again we define the z-transform for these probabilities as

    P(z) = \sum_{j=0}^{∞} P_j z^j

Multiplying the jth equation by z^j and summing we obtain

    (μ + rλ)[P(z) - P_0] - μ\sum_{j=1}^{r-1} P_j z^j = rλ z P(z) + \frac{μ}{z^r}\left[P(z) - \sum_{j=0}^{r} P_j z^j\right]

We may now use Eq. (4.32) to eliminate the term P_r and then finally solve for P(z); the result, Eq. (4.35), has a denominator with exactly one root, which we denote by z_0, lying outside the unit circle, and the requirement that P(z) remain bounded in |z| ≤ 1 determines the boundary probabilities up to a single constant K. Requiring P(1) = 1 then gives

    K = \frac{r}{1 - 1/z_0}

and so we have

    P(z) = \frac{(1 - z^r)(1 - 1/z_0)}{r(1 - z)(1 - z/z_0)}    (4.36)
    z^r F(z) \Longleftrightarrow f_{j-r}

where we recall that the notation ⟺ indicates a transform pair. With this observation, then, writing Eq. (4.36) as P(z) = (1 - z^r)F(z) we have

    P_j = f_j - f_{j-r}    (4.37)

and the partial-fraction expansion of F(z) inverts to give

    f_j = \begin{cases} \dfrac{1}{r}\left(1 - z_0^{-j-1}\right) & j ≥ 0 \\ 0 & j < 0 \end{cases}    (4.38)

First we solve for P_j in the range j ≥ r; from Eqs. (4.37) and (4.38) we therefore have

    P_j = \frac{1}{r}\, z_0^{\,r-j-1}\left(1 - z_0^{-r}\right)    (4.39)
We may simplify this last expression by recognizing that the denominator of Eq. (4.35) must equal zero for z = z_0; this observation leads to the equality rρ(z_0 - 1) = 1 - z_0^{-r}, and so Eq. (4.39) becomes

    P_j = ρ(z_0 - 1) z_0^{\,r-j-1} \qquad j ≥ r    (4.40)

On the other hand, in the range 0 ≤ j < r we have that f_{j-r} = 0, and so P_j is easily found for the rest of our range. Combining this and Eq. (4.40) we finally obtain the distribution for the number of arrival stages in our system:

    P_j = \begin{cases} \dfrac{1}{r}\left(1 - z_0^{-j-1}\right) & 0 ≤ j < r \\ ρ(z_0 - 1) z_0^{\,r-j-1} & j ≥ r \end{cases}    (4.41)

Using our earlier relationship between p_k and P_j we find (the reader should check this algebra for himself) that the distribution of the number of customers in the system is given by

    p_k = \begin{cases} 1 - ρ & k = 0 \\ ρ(z_0^{\,r} - 1) z_0^{-rk} & k > 0 \end{cases}    (4.42)

We note that this distribution for number of customers is geometric with a slightly modified first term. We could at this point calculate the waiting time distribution, but we will postpone that until we study the system G/M/1 in Chapter 6.
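For a concrete number, z_0 can be found by a simple bisection on the relation rρ(z_0 - 1) = 1 - z_0^{-r} quoted above, after which Eq. (4.42) gives the queue-length distribution; the sketch below is our own illustration.

    def er_m_1_pk(lam: float, mu: float, r: int, k_max: int):
        """Distribution of the number of customers in E_r/M/1 via Eq. (4.42).
        rho = lam/mu < 1; z0 > 1 solves  r*rho*(z - 1) = 1 - z**(-r).
        """
        rho = lam / mu
        assert rho < 1.0
        g = lambda z: r * rho * (z - 1.0) - (1.0 - z ** (-r))
        lo, hi = 1.0 + 1e-9, 2.0
        while g(hi) < 0.0:          # bracket the root: g < 0 just above 1, g -> +inf for large z
            hi *= 2.0
        for _ in range(200):        # bisection
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if g(mid) < 0.0 else (lo, mid)
        z0 = 0.5 * (lo + hi)
        p = [1.0 - rho] + [rho * (z0 ** r - 1.0) * z0 ** (-r * k) for k in range(1, k_max + 1)]
        return z0, p

    z0, p = er_m_1_pk(lam=0.5, mu=1.0, r=2, k_max=10)
    print(z0, sum(p))   # truncated sum approaches 1; for r = 1 this reduces to M/M/1 with z0 = 1/rho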
4.5. BULK ARRIVAL SYSTEMS

(As an example, one may think of random-size families arriving at the doctor's office for individual vaccinations.) As usual, we will assume that the arrival rate (of bulks) is λ. Taking the number of customers in the system as our state variable, we have the state-transition-rate diagram of Figure 4.8. In this figure we have shown details only for state E_k for clarity. Thus we find that we can enter E_k from any state below it (since we permit bulks of any size to arrive); similarly, we can move from state E_k to any state above it, the net rate at which we leave E_k being λg_1 + λg_2 + \cdots = λ\sum_{i=1}^{∞} g_i = λ. If, as usual, we define p_k to be the equilibrium probability for the number of customers in the system, then we may write down the following equilibrium

* To make the correspondence complete, the parameter for this exponential distribution should indeed be rμ. However, in the following development, we will choose the parameter merely to be μ and recall this fact whenever we compare the bulk arrival system to the system M/E_r/1.
We may interchange the order of summation for the double sum [Eq. (4.47)] and then solve for P(z), obtaining

    P(z) = \frac{μ P_0 (1 - z)}{μ(1 - z) - λz[1 - G(z)]}

To eliminate P_0 we use P(1) = 1; direct application yields the indeterminate form 0/0 and so we must use L'Hospital's rule, which gives p_0 = 1 - ρ. We obtain

    P(z) = \frac{μ(1 - ρ)(1 - z)}{μ(1 - z) - λz[1 - G(z)]}    (4.49)
This is the final solution for the transform of the number of customers in the bulk arrival M/M/1 system. Once the sequence {g_k} is given, we may then face the problem of inverting this transform. One may calculate the mean and variance of the number of customers in the system in terms of the system parameters directly from P(z) (see Exercise 4.8). Let us note that the appropriate definition for the utilization factor ρ must be carefully defined here. Recall that ρ is the average arrival rate of customers times the average service time. In our case, the average arrival rate of customers is the product of the average arrival rate of bulks and the average bulk size. From Eq. (II.29) we have immediately that the average bulk size must be G'(1). Thus we naturally conclude that the appropriate definition for ρ in this system is

    ρ = \frac{λ G'(1)}{μ}    (4.50)

It is instructive to consider the special case where all bulk sizes are the same, namely,

    g_k = \begin{cases} 1 & k = r \\ 0 & k ≠ r \end{cases}

Clearly, this is the simplified bulk system discussed in the beginning of this section; it corresponds exactly to the system M/E_r/1 (where we must make the minor modification as indicated in our earlier footnote that μ must now be replaced by rμ). We find immediately that G(z) = z^r and after substituting this into our solution Eq. (4.49) we find that it corresponds exactly to our earlier solution Eq. (4.26) as, of course, it must.
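Numerically, once the bulk-size distribution {g_k} is fixed, the equilibrium probabilities can also be generated directly from the flow-balance equations of this system (flow out of E_k balanced by a service completion from above and bulk arrivals from below); the balance equations written in the docstring below are our own statement of that balance, and the chosen bulk-size distribution is arbitrary.

    def bulk_arrival_mm1(lam: float, mu: float, g: dict, n_max: int):
        """Equilibrium p_k for the bulk-arrival M/M/1 queue, by forward recursion.

        g maps bulk size i (>= 1) to its probability g_i.
        Balance:  lam p_0 = mu p_1   and, for k >= 1,
                  (lam + mu) p_k = mu p_{k+1} + lam * sum_{i<k} p_i g_{k-i},
        with p_0 = 1 - rho and rho = lam * G'(1) / mu  (Eq. 4.50).
        """
        mean_bulk = sum(i * gi for i, gi in g.items())
        rho = lam * mean_bulk / mu
        assert rho < 1.0
        p = [0.0] * (n_max + 2)
        p[0] = 1.0 - rho
        p[1] = lam * p[0] / mu
        for k in range(1, n_max + 1):
            inflow_from_below = lam * sum(p[i] * g.get(k - i, 0.0) for i in range(k))
            p[k + 1] = ((lam + mu) * p[k] - inflow_from_below) / mu
        return p[: n_max + 1]

    p = bulk_arrival_mm1(lam=0.3, mu=1.0, g={1: 0.5, 2: 0.3, 3: 0.2}, n_max=30)
    print(sum(p))   # should be close to 1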
4.6. BULK SERVICE SYSTEMS

* For example, the shared taxis in Israel do not (usually) depart until they have collected a full load of customers, all of whom receive service simultaneously.

Writing down the equilibrium equations for the system:

    (λ + μ) p_k = μ p_{k+r} + λ p_{k-1} \qquad k ≥ 1

    λ p_0 = μ(p_1 + p_2 + \cdots + p_r)    (4.51)

Let us now apply our z-transform method; as usual we define

    P(z) = \sum_{k=0}^{∞} p_k z^k

We then multiply by z^k, sum, and then identify P(z) to obtain in the usual way
We now give the same arguments regarding the location of the denominator roots; in particular, of the r + 1 denominator zeroes, exactly one will occur at the point z = 1, exactly r - 1 will be such that |z| < 1, and only one will be found, which we will denote by z_0, such that |z_0| > 1. Now let us study the numerator of Eq. (4.52). We note that this is a polynomial in z of degree r. Clearly one root occurs at z = 1. By arguments now familiar to us, P(z) must remain bounded in the region |z| < 1, and so the r - 1 remaining zeroes of the numerator must exactly match the r - 1 zeroes of the denominator for which |z| < 1; as a consequence of this the two polynomials of degree r - 1 must be proportional, that is,

    \frac{K \sum_{k=0}^{r-1} p_k (z^k - z^r)}{1 - z} = \frac{rρ z^{r+1} - (1 + rρ) z^r + 1}{(1 - z)(1 - z/z_0)}

Taking advantage of this last equation we may then cancel common factors in the numerator and denominator of Eq. (4.52) to obtain

    P(z) = \frac{1}{K(1 - z/z_0)}

The constant K may be evaluated in the usual way by requiring that P(1) = 1, which provides the following simple form for our generating function:

    P(z) = \frac{1 - 1/z_0}{1 - z/z_0}

from which

    p_k = \left(1 - \frac{1}{z_0}\right)\left(\frac{1}{z_0}\right)^k \qquad k = 0, 1, 2, \ldots    (4.54)

Once again we see the familiar geometric distribution appear in the solution of our Markovian queueing systems!
4.7. SERIES-PARALLEL STAGES: GENERALIZATIONS

The r-stage Erlangian distribution provides a coefficient of variation that is less than that of the exponential distribution [from Eq. (4.14) we see that C_b = 1/\sqrt{r}, whereas for r = 1 the exponential gives C_b = 1] and so in some sense Erlangian random variables are "more regular" than exponential variables. This situation is certainly less than completely general.

One direction for generalization would be to remove the restriction that one of our two basic queueing distributions must be exponential; that is, we certainly could consider the system E_{r_a}/E_{r_b}/1 in which we have an r_a-stage Erlangian distribution for the interarrival times and an r_b-stage Erlangian distribution for the service times.* On the other hand, we could attempt to generalize by broadening the class of distributions we consider beyond that of the Erlangian. This we do next.
We wish to find a stage-type arrangement that gives larger coefficients of variation than the exponential. One might consider a generalization of the r-stage Erlangian in which we permit each stage to have a different service rate (say, the ith stage has rate μ_i). Perhaps this will extend the range of C_b above unity. In this case we will have, instead of Eq. (4.15), a Laplace transform for the service-time pdf given by

    B^*(s) = \prod_{i=1}^{r} \frac{μ_i}{s + μ_i}

The service time density b(x) will merely be the convolution of r exponential densities, each with its own parameter μ_i. The squared coefficient of variation in this case is easily shown [see Eq. (II.26), Appendix II] to be

    C_b^2 = \frac{\sum_{i=1}^{r} 1/μ_i^2}{\left(\sum_{i=1}^{r} 1/μ_i\right)^2}

which again cannot exceed unity.

Figure 4.10  A two-stage parallel server H_2.
facility he will proceed to service stage 1 with probability α_1 or will proceed to service stage 2 with probability α_2, where α_1 + α_2 = 1. He will then spend an exponentially distributed interval of time in the ith such stage whose mean is 1/μ_i sec. After that interval the customer departs and only then is a new customer allowed into the service facility. It is clear from this description that the service time pdf will be given by

    b(x) = α_1 μ_1 e^{-μ_1 x} + α_2 μ_2 e^{-μ_2 x} \qquad x ≥ 0

and also we have

    B^*(s) = \frac{α_1 μ_1}{s + μ_1} + \frac{α_2 μ_2}{s + μ_2}    (4.56)

Clearly, for the R-branch parallel server,

    b(x) = \sum_{i=1}^{R} α_i μ_i e^{-μ_i x} \qquad x ≥ 0    (4.57)

and

    B^*(s) = \sum_{i=1}^{R} α_i \frac{μ_i}{s + μ_i}

The pdf given in Eq. (4.57) is referred to as the hyperexponential distribution and is denoted by H_R. Hopefully, the coefficient of variation (C_b = σ_b/\bar{x}) is now greater than unity and therefore represents a wider variation than
that of the exponential. Let us prove this. From Eq. (II.26) we find immediately that

    \bar{x} = \sum_{i=1}^{R} \frac{α_i}{μ_i}

    \overline{x^2} = \sum_{i=1}^{R} \frac{2α_i}{μ_i^2}    (4.58)

Now, Eq. (II.35), the Cauchy-Schwarz inequality, may also be expressed as follows (for a_i, b_i real):

    \left(\sum_{i=1}^{R} a_i b_i\right)^2 ≤ \left(\sum_{i=1}^{R} a_i^2\right)\left(\sum_{i=1}^{R} b_i^2\right)    (4.59)

(This is often referred to as the Cauchy inequality.) If we make the association a_i = \sqrt{α_i}, b_i = \sqrt{α_i}/μ_i, then Eq. (4.59) shows

    \left(\sum_{i=1}^{R} \frac{α_i}{μ_i}\right)^2 ≤ \sum_{i=1}^{R} \frac{α_i}{μ_i^2}

and therefore, from Eq. (4.58),

    C_b^2 = \frac{\overline{x^2}}{(\bar{x})^2} - 1 ≥ 1    (4.60)

which proves the desired result.
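A quick numerical check (our own) of this result: for any choice of branch probabilities α_i and rates μ_i, the hyperexponential's coefficient of variation computed from Eq. (4.58) comes out at or above unity.

    def hyperexp_cv(alphas, mus):
        """Coefficient of variation C_b for an H_R (hyperexponential) distribution.
        mean   = sum(alpha_i / mu_i)            [Eq. (4.58)]
        E[x^2] = sum(2 * alpha_i / mu_i**2)
        """
        assert abs(sum(alphas) - 1.0) < 1e-12
        mean = sum(a / m for a, m in zip(alphas, mus))
        second = sum(2.0 * a / m ** 2 for a, m in zip(alphas, mus))
        return (second / mean ** 2 - 1.0) ** 0.5

    print(hyperexp_cv([0.2, 0.8], [0.25, 2.0]))   # noticeably greater than 1
    print(hyperexp_cv([1.0], [1.0]))              # degenerates to exponential: exactly 1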
One might expect that an analysis by the method of stages exists for the systems M/H_R/1, H_R/M/1, H_{R_a}/H_{R_b}/1, and this is indeed true. The reason that the analysis can proceed is that we may take account of the nonexponential character of the service (or arrival) facility merely by specifying which stage within the service (or arrival) facility the customer currently occupies. This information along with a statement regarding the number of customers in the system creates a Markov chain, which may then be studied much as was done earlier in this chapter.

For example, the system M/H_2/1 would have the state-transition-rate diagram shown in Figure 4.12. In this figure the designation k_i implies that the system contains k customers and that the customer in service is located in stage i (i = 1, 2). The transitions for higher numbered states are identical to the transitions between states 1_i and 2_i.
We are now led directly into the following generalization of series stages and parallel stages; specifically, we are free to combine series and parallel stages within a single service facility, choosing the ith of R parallel branches (each a series of exponential stages) with probability α_i; the corresponding transform is given in Eq. (4.62). If we further permit the jth stage of the ith parallel branch to have a service rate given by μ_{ij}, then we find that the Laplace transform of the service time density will be generalized to

    B^*(s) = \sum_{i=1}^{R} α_i \prod_{j=1}^{r_i} \left(\frac{μ_{ij}}{s + μ_{ij}}\right)    (4.63)

These generalities lead to rather complex system equations.
Another way to create the series-parallel effect is as follows. Consider the service facility shown in Figure 4.14. In this system there are r service stages, only one of which may be occupied at a given time. Customers enter from the left and depart to the right. Before entering the ith stage an independent choice is made such that with probability β_i the customer will proceed into the ith exponential service stage and with probability α_i he will depart from the system immediately; clearly we require β_i + α_i = 1 for i = 1, 2, ..., r. After completing the rth stage he will depart from the system with probability 1. One may immediately write down the Laplace transform of the pdf for the system [Eq. (4.64)], where α_{r+1} = 1. One is tempted to consider more general transitions among stages than that shown in this last figure; for example, rather than choosing only between immediate departure and entry into the next stage one might consider feedback or feedforward to other stages. Cox [COX 55] has shown that no further generality is introduced with this feedback and feedforward concept over that of the system shown in Figure 4.14.
It is clear that each of these last three expressions for B^*(s) may be rewritten as a rational function of s, that is, as a ratio of polynomials in s. The positions of the poles (zeroes of the denominator polynomial) for B^*(s) will of necessity be located on the negative real axis of the complex s-plane. This is not quite as general as we would like, since an arbitrary pdf for service time may have poles located anywhere in the negative half s-plane [that is, for Re(s) < 0]. Cox [COX 55] has studied this problem and suggests that complex values for the exponential stage parameters be permitted; the argument is that whereas this corresponds to no physically realizable exponential stage, so long as we provide poles in complex conjugate pairs then the entire service facility will have a real pdf, which corresponds to the feasible cases. If we permit complex-conjugate pairs of poles then we have complete generality in synthesizing any rational function of s for our service-time transform B^*(s). In addition, we have in effect outlined a method of solving these systems by keeping track of the state of the service facility. Moreover, we can similarly construct an interarrival time distribution from series-parallel stages, and thereby we are capable of considering any G/G/1 system where the distributions have transforms that are rational functions of s.
It is further true that any nonrational function of s may be approximated arbitrarily closely with rational functions.* Thus in principle we have solved a very general problem. Let us discuss this method of solution. The state description clearly will be the number of customers in the system, the stage in which the arriving customer finds himself within the (stage-type) arrival box, and the stage in which the customer finds himself in service. From this we may draw a (horribly complicated) state-transition diagram. Once we have this diagram we may (by inspection) write down the equilibrium equations in a rather straightforward manner; this large set of equations will typically have many boundary conditions. However, these equations will all be linear in the unknowns and so the solution method is straightforward (albeit extremely tedious). What more natural setup for a computer solution could one ask for? Indeed, a digital computer is extremely adept at solving large sets of linear equations (such a task is much easier for a digital computer to handle than is a small set of nonlinear equations). In carrying out the digital solution of this (typically infinite) set of linear equations, we must reduce it to a finite set; this can only be done in an approximate way by first deciding at what point we are satisfied in truncating the sequence p_0, p_1, p_2, ....

* In a real sense, then, we are faced with an approximation problem: how may we "best" approximate a given distribution by one that has a rational transform. If we are given a pdf in numerical form then Prony's method [WHIT 44] is one acceptable procedure. On the other hand, if the pdf is given analytically it is difficult to describe a general procedure for suitable approximation. Of course one would like to make these approximations with the fewest number of stages possible. We comment that if one wishes to fit the first and second moments of a given distribution by the method of stages then the number of stages cannot be significantly less than 1/C_b^2; unfortunately, this implies that when the distribution tends to concentrate around a fixed value, then the number of stages required grows rather quickly.

Then we may solve the finite set and perhaps extrapolate the solution to the infinite set; all this is in the way of approximation and hopefully we are able to carry out the computation far enough so that the neglected terms are indeed negligible.

One must not overemphasize the usefulness of this procedure; this solution method is not as yet automated but does at least in principle provide a method of approach. Other analytic methods for handling the more complex queueing situations are discussed in the balance of this book.
4.8. NETWORKS OF MARKOVIAN QUEUES

Figure 4.15  A two-node tandem network.
Consider the two-node tandem network of Figure 4.15: customers arrive at node one in a Poisson stream at rate λ, node one contains a single exponential server of rate μ, and node two contains an exponential server also of rate μ. The basic question is to solve for the interarrival time distribution feeding node two; this certainly will be equivalent to the interdeparture time distribution from node one. Let d(t) be the pdf describing the interdeparture process from node one and as usual let its Laplace transform be denoted by D^*(s). Let us now calculate D^*(s). When a customer departs from node one either a second customer is available in the queue and ready to be taken into service immediately or the queue is empty. In the first case, the time until this next customer departs from node one will be distributed exactly as a service time and in that case we will have

    D^*(s)\big|_{\text{node one nonempty}} = B^*(s)

On the other hand, if the node is empty upon this first customer's departure then we must wait for the sum of two intervals, the first being the time until the second customer arrives and the next being his service time; since these two intervals are independently distributed then the pdf of the sum must be the convolution of the pdf's for each. Certainly then the transform of the sum pdf will be the product of the transforms of the individual pdf's and so we have

    D^*(s)\big|_{\text{node one empty}} = \frac{λ}{s + λ} B^*(s)

where we have given the explicit expression for the transform of the interarrival time density. Since we have an exponential server we may also write B^*(s) = μ/(s + μ); furthermore, as we shall discuss in Chapter 5, the probability of a departure leaving behind an empty system is the same as the probability of an arrival finding an empty system, namely, 1 - ρ. This permits us to write down the unconditional transform for the interdeparture time density as

    D^*(s) = ρ\,\frac{μ}{s+μ} + (1-ρ)\,\frac{λ}{s+λ}\,\frac{μ}{s+μ} = \frac{λ}{s + λ}    (4.65)

and so the interdeparture time distribution is given by

    D(t) = 1 - e^{-λt} \qquad t ≥ 0
Thus we find the remarkable conclusion that the interdeparture times are exponentially distributed with the same parameter as the interarrival times! In other words (in the case of a stable stationary queueing system), a Poisson process driving an exponential server generates a Poisson process for departures. This startling result is usually referred to as Burke's theorem [BURK 56]; a number of others also studied the problem (see, for example, the discussion in [SAAT 65]). In fact, Burke's theorem says more, namely, that the steady-state output of a stable M/M/m queue with input parameter λ and service-time parameter μ for each of the m channels is in fact a Poisson process at the same rate λ. Burke also established that the output process was independent of the other processes in the system. It has also been shown that the M/M/m system is the only such FCFS system with this property. Returning now to Figure 4.15 we see therefore that node two is driven by an independent Poisson arrival process and therefore it too behaves like an M/M/1 system and so may be analyzed independently of node one. In fact Burke's theorem tells us that we may connect many multiple-server nodes (each server with exponential pdf) together in a feedforward* network fashion and still preserve this node-by-node decomposition.
Jackson [JACK 57] addressed himself to this question by considering an arbitrary network of queues. The system he studied consists of N nodes, where the ith node consists of m_i exponential servers each with parameter μ_i; further, the ith node receives arrivals from outside the system in the form of a Poisson process at rate γ_i. Thus if N = 1 then we have an M/M/m system. Upon leaving the ith node a customer then proceeds to the jth node with probability r_{ij}; this formulation permits the case where r_{ii} ≥ 0. On the other hand, after completing service in the ith node the probability that the customer departs from the network (never to return again) is given by 1 - \sum_{j=1}^{N} r_{ij}. We must calculate the total average arrival rate of customers to a given node. To do so, we must sum the (Poisson) arrivals from outside the system plus arrivals (not necessarily Poisson) from all internal nodes; that is, denoting the total average arrival rate to node i by λ_i, we easily find that this set of parameters must satisfy the following equations:

    λ_i = γ_i + \sum_{j=1}^{N} λ_j r_{ji} \qquad i = 1, 2, \ldots, N    (4.66)

In order for all nodes in this system to represent ergodic Markov chains we require that λ_i < m_i μ_i for all i; again we caution the reader not to confuse the nodes in this discussion with the system states of each node from our earlier state descriptions.

* Specifically we do not permit feedback paths since this may destroy the Poisson nature of the feedback departure stream. In spite of this, the following discussion of Jackson's work points out that even networks with feedback are such that the individual nodes behave as if they were fed totally by Poisson arrivals, when in fact they are not.
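A small sketch (ours; the routing matrix and rates are made up for illustration) of the computation this setup calls for: solve the traffic equations (4.66) for the λ_i and check λ_i < m_i μ_i, after which each node can be treated as an M/M/m_i queue.

    import numpy as np

    # Hypothetical 3-node open network: gamma[i] = external Poisson rate into node i,
    # R[i][j] = probability of going from node i to node j after service.
    gamma = np.array([1.0, 0.5, 0.0])
    R = np.array([[0.0, 0.6, 0.2],
                  [0.0, 0.0, 0.8],
                  [0.1, 0.0, 0.0]])
    m  = np.array([1, 2, 1])          # servers per node
    mu = np.array([3.0, 1.5, 2.5])    # service rate per server

    # Traffic equations (4.66):  lambda = gamma + R^T lambda
    lam = np.linalg.solve(np.eye(3) - R.T, gamma)
    rho = lam / (m * mu)
    print("lambda_i =", lam)
    print("rho_i    =", rho, "(all must be < 1 for ergodicity)")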
- (4.67)
and ' pi (k ,) is given as the solutio n to the classical M/M / m system [see. for
example, Eqs . (3.37)-(3.39) with the obvious chan ge in not ation ]! This last
result is commonly referred to as Jack son's theorem . On ce agai n we see the
"product" form of solution for Mark o vian queues in equ ilibriu m.
A modification of Jackson's network of queues was considered by Gordon and Newell [GORD 67]. The modification they investigated was that of a closed Markovian network in the sense that a fixed and finite number of customers, say K, are considered to be in the system and are trapped in that system in the sense that no others may enter and none of these may leave; this corresponds to Jackson's case in which \sum_{j=1}^{N} r_{ij} = 1 and γ_i = 0 for all i. (An interesting example of this class of systems known as cyclic queues had been considered earlier by Koenigsberg [KOEN 58]; a cyclic queue is a tandem queue in which the last stage is connected back to the first.) In the general case considered by Gordon and Newell we do not quite expect a product solution since there is a dependency among the elements of the state vector (k_1, k_2, ..., k_N) as follows:

    \sum_{i=1}^{N} k_i = K    (4.68)

As is the case for Jackson's model we assume that this discrete-state Markov process is irreducible and therefore a unique equilibrium probability distribution exists for p(k_1, k_2, ..., k_N). In this model, however, there is a finite number of states; in particular, it is easy to see that the number of distinguishable states of the system is equal to the number of ways in which one can place K customers among the N nodes, and is equal to the binomial coefficient

    \binom{N + K - 1}{N - 1}
The following equations describe the behavior of the equilibrium distribution of customers in this closed system and may be written by inspection as

    p(k_1, k_2, \ldots, k_N) \sum_{i=1}^{N} σ_{k_i - 1}\, α_i(k_i)\, μ_i = \sum_{i=1}^{N} \sum_{j=1}^{N} σ_{k_j - 1}\, α_i(k_i + 1)\, μ_i\, r_{ij}\, p(k_1, \ldots, k_i + 1, \ldots, k_j - 1, \ldots, k_N)    (4.69)

where the discrete unit step-function defined in Appendix I takes the form

    σ_k = \begin{cases} 1 & k = 0, 1, 2, \ldots \\ 0 & k < 0 \end{cases}    (4.70)

and is included in the equilibrium equations to indicate the fact that the service rate must be zero when a given node is empty; furthermore we define

    α_i(k_i) = \begin{cases} k_i & k_i ≤ m_i \\ m_i & k_i ≥ m_i \end{cases}

which merely gives the number of customers in service in the ith node when there are k_i customers at that node. As usual the left-hand side of Eq. (4.69) describes the flow of probability out of state (k_1, k_2, ..., k_N) whereas the right-hand side accounts for the flow of probability into that state from neighboring states. Let us proceed to write down the solution to these equations. We define the function β_i(k_i) as follows:

    β_i(k_i) = \begin{cases} k_i! & k_i ≤ m_i \\ m_i!\, m_i^{\,k_i - m_i} & k_i ≥ m_i \end{cases}
Consider a set of numbers {x_i}, which are solutions to the following set of linear equations:

    μ_i x_i = \sum_{j=1}^{N} μ_j x_j r_{ji} \qquad i = 1, 2, \ldots, N    (4.71)

* Again the reader is cautioned that, on the one hand, we have been considering Markov chains in which the quantities p_{ij} refer to the transition probabilities among the possible states that the system may take on, whereas, on the other hand, we have in this section in addition been considering a network of queueing systems in which the probabilities r_{ij} refer to transitions that customers make between nodes in that network.
Thus we see the product solution directly for this marginal distribution [Eq. (4.74)] and, of course, it is similar to Jackson's theorem in Eq. (4.67); note that in one case we have an open system (one that permits external arrivals) and in the other case we have a closed system. As we shall see in Chapter 4, Volume II, this model has significant applications in time-shared and multi-access computer systems.
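To make the closed-network computation concrete, here is a brute-force sketch (our own; it assumes single-server nodes, made-up rates, and the standard Gordon-Newell product form p(k) proportional to the product of x_i^{k_i}, which for single servers has all β_i(k_i) = 1): it solves Eq. (4.71) for the x_i and then normalizes over all states with k_1 + ... + k_N = K.

    import itertools
    import numpy as np

    mu = np.array([2.0, 1.0, 1.5])            # single exponential server at each node
    R  = np.array([[0.0, 1.0, 0.0],           # cyclic routing: 1 -> 2 -> 3 -> 1
                   [0.0, 0.0, 1.0],
                   [1.0, 0.0, 0.0]])
    N, K = len(mu), 4                          # 3 nodes, 4 circulating customers

    # Eq. (4.71): mu_i x_i = sum_j mu_j x_j r_ji  (solution unique only up to a scale factor)
    A = np.diag(mu) - R.T @ np.diag(mu)
    A[0] = np.ones(N)                          # replace one redundant equation by a scale choice
    x = np.linalg.solve(A, np.eye(N)[0])

    def states(n, k):                          # all (k_1, ..., k_n) with sum k
        if n == 1:
            yield (k,)
        else:
            for first in range(k + 1):
                for rest in states(n - 1, k - first):
                    yield (first,) + rest

    weights = {s: float(np.prod(x ** np.array(s))) for s in states(N, K)}
    G = sum(weights.values())                  # normalization constant
    p = {s: w / G for s, w in weights.items()}
    print(len(p), sum(p.values()))             # (N+K-1 choose N-1) states; probabilities sum to 1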
Jackson [JACK 63] later considered an even more general open queueing system, which includes the closed system just considered as a special case. The new wrinkles introduced by Jackson are, first, that the customer arrival process is permitted to depend upon the total number of customers in the system (using this, he easily creates closed networks) and, second, that the service rate at any node may be a function of the number of customers in that node. Thus defining

    S(\mathbf{k}) = k_1 + k_2 + \cdots + k_N
we then permit the total arrival rate to be a function of S(k) when the system state is given by the vector k. Similarly we define the exponential service rate at node i to be μ_{ik_i} when there are k_i customers at that node (including those in service). As earlier, we have the node transition probabilities r_{ij} (i, j = 1, 2, ..., N) with the following additional definitions: r_{0i} is the probability that the next externally generated arrival will enter the network at node i; r_{i,N+1} is the probability that a customer leaving node i departs from the system; and r_{0,N+1} is the probability that the next arrival will require no service from the system and leave immediately upon arrival. Thus we see that in this case γ_i = r_{0i} γ(S(k)), where γ(S(k)) is the total external arrival rate to the system [conditioned on the number of customers S(k) at the moment] from our external Poisson process. It can be seen that the probability of a customer arriving at node i_1 and then passing through the node sequence i_2, i_3, ..., i_n and then departing is given by r_{0 i_1} r_{i_1 i_2} r_{i_2 i_3} \cdots r_{i_{n-1} i_n} r_{i_n, N+1}. Rather than seek the solution of Eq. (4.66) for the traffic rates, since they are functions of the total number of customers in the system we rather seek the solution for the following equivalent set:

    e_i = r_{0i} + \sum_{j=1}^{N} e_j r_{ji} \qquad i = 1, 2, \ldots, N    (4.75)

[In the case where the arrival rates are independent of the number in the system then Eqs. (4.66) and (4.75) differ by a multiplicative factor equal to the total arrival rate of customers to the system.] We assume that the solution to Eq. (4.75) exists, is unique, and is such that e_i ≥ 0 for all i; this is equivalent to assuming that with probability 1 a customer's journey through the network is of finite length. e_i is, in fact, the expected number of times a customer will visit node i in passing through the network.
Let us define the time-dependent state probabilities as P_k(t), the probability that the system is in state k at time t. In the forward equations for these probabilities, terms are omitted when any component of the vector argument goes negative; k(i-) = k except for its ith component, which takes on the value k_i - 1; k(i+) = k except for its ith component, which takes on the value k_i + 1; and k(i,j) = k except that its ith component is k_i - 1 and its jth component is k_j + 1, where i ≠ j. Complex as this notation appears, its interpretation should be rather straightforward for the reader. Jackson shows that the equilibrium distribution is unique (if it exists) and defines it in our earlier notation to be \lim_{t→∞} P_k(t) = p_k = p(k_1, k_2, ..., k_N). In order to give the equilibrium solution for p_k we must unfortunately define the following further notation:
    F(K) = \prod_{n=0}^{K-1} γ(n) \qquad K = 0, 1, 2, \ldots    (4.78)

    f(\mathbf{k}) = \prod_{i=1}^{N} \prod_{j=1}^{k_i} \frac{e_i}{μ_{ij}}    (4.79)

    H(K) = \sum_{\mathbf{k} \in A} f(\mathbf{k})    (4.80)

where the set A shown in Eq. (4.80) is the same as that defined for Eq. (4.73), namely the set of state vectors with S(k) = K, and where G = \sum_{K=0}^{∞} H(K)F(K). In terms of these definitions then Jackson's more general theorem states that if G < ∞ then a unique equilibrium-state probability distribution exists for the general state-dependent networks and is given by

    p_{\mathbf{k}} = \frac{1}{G}\, f(\mathbf{k})\, F(S(\mathbf{k}))    (4.82)
Again we detect the product form of solution. It is also possible to show that in the case when arrivals are independent of the total number in the system [that is, γ = γ(S(k)) is constant] then even in the case of state-dependent service rates Jackson's first theorem applies, namely, that the joint pdf factors into the product of the individual pdf's given in Eq. (4.67). In fact p_i(k_i) turns out to be the same as the probability distribution for the number of customers in a single-node system where arrivals come from a Poisson process at rate γe_i and with the state-dependent service rates μ_{ik_i}, such as we have derived for our general birth-death process in Chapter 3. Thus one impact of Jackson's second theorem is that for the constant-arrival-rate case, the equilibrium probability distributions of the number of customers in the system at individual centers are independent of other centers; in addition, each of these distributions is identical to the well-known single-node service center with the same parameters.* A remarkable result!

This last theorem is perhaps as far as one can go† with simple Markovian networks, since it seems to extend Burke's theorem in its most general sense. When one relaxes the Markovian assumption on arrivals and/or service times, then extreme complexity in the interdeparture process arises not only from its marginal distribution, but also from its lack of independence on other state variables.
These Markovian queueing networks lead to rather depressing sets of
(linear) system equations; this is due to the enormous (yet finite) state
description. It is indeed remarkable that such systems do possess reasonably
straightforward solutions. The key to solution lies in the observation that
these systems may be represented as Markovian population processes, as
neatly described by Kingman [KING 69] and as recently pursued by Chandy
[CHAN 72]. In particular, a Markov population process is a continuous-time
Markov chain over the set of finite-dimensional state vectors k = (k_1, k_2, ...,
k_N) for which transitions are permitted only between states‡: k and k(i+)
(an external arrival at node i); k and k(i-) (an external departure from node
i); and k and k(i, j) (an internal transfer from node i to node j). Kingman
gives an elegant discussion of the interesting classes and properties of these
processes (using the notion and properties of reversible Markov chains).
Chandy discusses some of these issues by observing that the equilibrium
probabilities for the system states obey not only the global-balance equations
that we have so far seen (and which typically lead to product-form solutions)
but also that this system of equations may be decomposed into many sets of
smaller systems of equations, each of which is simpler to solve. This trans-
formed set is referred to as the set of "local"-balance equations, which we
now proceed to discuss.
The concept of local balance is most valuable when one deals with a net-
work of queues. However, the concept does apply to single-node Markovian
queues, and in fact we have already seen an example of local balance at play.
* This model also permits one to handle the closed queueing systems studied by Gordon
and Newell. In order to create the constant total number of customers one need merely set
γ(k) = 0 for k ≥ K and γ(K - 1) = ∞, where K is the fixed number one wishes to contain
within the system. In order to keep the node transition probabilities identical in the open and
closed systems, let us denote the former as earlier by r_ij and the latter now by r_ij'; to make
the limit of Jackson's general system equivalent to the closed system of Gordon and
Newell we then require r_ij' = r_ij + (r_{i,N+1})(r_{0j}).
† In Chapter 4, Volume II, we describe some recent results that do in fact extend the model
to handle different customer classes and different service disciplines at each node (permit-
ting, in some cases, more general service-time distributions).
‡ See the definitions following Eq. (4.77).
\binom{N + K - 1}{N - 1} = 6
Each of these global-balance equations is of the form whereby the left-hand
side represents the flow out of a state and the right-hand side represents the
flow into that state. Equations (4.83)-(4.85) are already local-balance
equations, as we shall see; Eqs. (4.86)-(4.88) have been written so that the
first term on the left-hand side of each equation balances the first term on the
right-hand side of that equation, and likewise for the second terms. Thus
Eq. (4.86) gives rise to the following local-balance equations:
p(1, 0, 1) = \frac{\mu_1}{\mu_3}\, p(2, 0, 0)

p(2, 0, 0) = \left[1 + \frac{\mu_1}{\mu_3} + \frac{\mu_1}{\mu_2} + \frac{\mu_1^2}{\mu_2 \mu_3} + \left(\frac{\mu_1}{\mu_3}\right)^2 + \left(\frac{\mu_1}{\mu_2}\right)^2\right]^{-1} \qquad (4.91)
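The normalization in Eq. (4.91) is easy to check numerically. The sketch below (with arbitrarily chosen service rates, purely for illustration) enumerates the six states of the three-node cyclic network holding K = 2 customers, assigns each state its unnormalized product-form weight ∏_i (1/μ_i)^{k_i}, and confirms that the resulting p(2, 0, 0) agrees with the bracketed expression above.

    from itertools import product

    # Illustrative service rates for the three cyclic nodes (not from the text)
    mu = {1: 2.0, 2: 3.0, 3: 5.0}
    K = 2   # customers circulating in the closed network

    # Enumerate all states (k1, k2, k3) with k1 + k2 + k3 = K
    states = [s for s in product(range(K + 1), repeat=3) if sum(s) == K]
    assert len(states) == 6        # binomial(N+K-1, N-1) = C(4, 2) = 6

    weight = {s: (1 / mu[1]) ** s[0] * (1 / mu[2]) ** s[1] * (1 / mu[3]) ** s[2]
              for s in states}
    G = sum(weight.values())
    p = {s: w / G for s, w in weight.items()}

    # Compare with the closed form of Eq. (4.91)
    m1, m2, m3 = mu[1], mu[2], mu[3]
    rhs = 1.0 / (1 + m1/m3 + m1/m2 + m1**2/(m2*m3) + (m1/m3)**2 + (m1/m2)**2)
    print(p[(2, 0, 0)], rhs)       # the two numbers agree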
Had we allowed all possible transitions among nodes (rather than the cyclic
behavior in this example) then the state-transition-rate diagram would have
permitted transitions in both directions where now only unidirectional
transitions are permitted; however, it will always be true that only transitions
to nearest-neighbor states (in this two-dimensional diagram) are permitted,
so that such a diagram can always be drawn in a planar fashion. For example,
had we allowed four customers in an arbitrarily connected three-node
network, then the state-transition-rate diagram would have been as shown in
Figure 4.18. In this diagram we represent possible transitions between nodes
by an undirected branch (representing two one-way branches in opposite
directions). Also, we have collected together sets of branches by joining them
with a heavy line, and these are meant to represent branches whose contri-
butions appear in the same local-balance equation. These diagrams can be
extended to higher dimensions when there are more than three nodes in the
system. In particular, with four nodes we get a tetrahedron (that is, a three-
dimensional simplex). In general, with N nodes we will get an (N - 1)-
dimensional simplex with K + 1 nodes along each edge (where K = number
of customers in the closed system). We note in these diagrams that all nodes
lying in a given straight line (parallel to any base of the simplex) maintain one
component of the state vector at a constant value and that this value increases
or decreases by unity as one moves to a parallel set of nodes. The local-
balance equations are identified as balancing flow in that set of branches that
connects a given node on one of these constant lines to all other nodes on
that constant line adjacent and parallel to this node, and that decreases by
unity that component that had been held constant. In summary, then, the
local-balance equations are trivial to write down, and if one can succeed in
finding a solution that satisfies them, then one has found the solution to the
global-balance equations as well!
As we see, most of these Markovian networks lead to rather complex
systems of linear equations. Wallace and Rosenberg [WALL 66] propose a
numerical solution method for a large class of these equations which is
computationally efficient. They discuss a computer program, which is designed
to evaluate the equilibrium probability distributions of state variables in
very large finite Markovian queueing networks. Specifically, it is designed to
solve the equilibrium equations of the form given in Eqs. (2.50) and (2.116),
namely, π = πP and πQ = 0. The procedure is of the "power-iteration"
type such that if π(i) is the ith iterate then π(i + 1) = π(i)R is the (i + 1)th
iterate; the matrix R is either equal to the matrix αP + (1 - α)I (where α is a
scalar) or equal to the matrix βQ + I (where β is a scalar and I is the identity
matrix), depending upon which of the two above equations is to be solved.
The scalars α and β are chosen carefully so as to give an efficient convergence
to the solution of these equations. The speed of solution is quite remarkable
and the reader is referred to [WALL 66] and its references for further details.
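A minimal sketch of a power iteration of this kind is given below; the small chain and the choice α = 1/2 are hypothetical, and no attempt is made to reproduce the convergence refinements of [WALL 66].

    import numpy as np

    # A small discrete-time chain (illustrative only); rows of P sum to 1
    P = np.array([[0.5, 0.5, 0.0],
                  [0.3, 0.4, 0.3],
                  [0.0, 0.6, 0.4]])

    alpha = 0.5                               # damping scalar
    R = alpha * P + (1 - alpha) * np.eye(3)   # iteration matrix for  pi = pi P

    pi = np.full(3, 1.0 / 3)                  # arbitrary starting probability vector
    for _ in range(500):
        pi = pi @ R                           # pi(i + 1) = pi(i) R

    # Compare with a direct solution of pi (P - I) = 0 with sum(pi) = 1
    A = np.vstack([(P - np.eye(3)).T, np.ones(3)])
    b = np.array([0.0, 0.0, 0.0, 1.0])
    pi_direct, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(pi, pi_direct)

Because R merely mixes P with the identity, its fixed point is the same equilibrium vector, and the damping keeps the iteration well behaved.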
Thus ends our study of purely Markovian systems in equilibrium. The
unifying feature throughout Chapters 3 and 4 has been that these systems
give rise to product-type solutions; one is therefore urged to look for
solutions of this form whenever Markovian queueing systems are encoun-
tered. In the next chapter we permit either A(t) or B(x) (but not both) to be
of arbitrary form, requiring the other to remain in exponential form.
REFERENCES
BURK 56 Burke, P. J., "The Output of a Queueing System," Operations Research,
4, 699-704 (1956).
CHAN 72 Chandy, K. M., "The Analysis and Solutions for General Queueing
Networks," Proc. Sixth Annual Princeton Conference on Information
Sciences and Systems, Princeton University, March 1972.
COX 55 Cox, D. R., "A Use of Complex Probabilities in the Theory of Sto-
chastic Processes," Proceedings of the Cambridge Philosophical Society,
51, 313-319 (1955).
GORD 67 Gordon, W. J. and G. F. Newell, "Closed Queueing Systems with
Exponential Servers," Operations Research, 15, 254-265 (1967).
JACK 57 Jackson, J. R., "Networks of Waiting Lines," Operations Research, 5,
518-521 (1957).
JACK 63 Jackson, J. R., "Jobshop-Like Queueing Systems," Management
Science, 10, 131-142 (1963).
KING 69 Kingman, J. F. C., "Markov Population Processes," Journal of Applied
Probability, 6, 1-18 (1969).
KOEN 58 Koenigsberg, E., "Cyclic Queues," Operational Research Quarterly, 9,
22-35 (1958).
SAAT 65 Saaty, T. L., "Stochastic Network Flows: Advances in Networks of
Queues," Proc. Symp. Congestion Theory, Univ. of North Carolina
Press (Chapel Hill), 86-107 (1965).
WALL 66 Wallace, V. L. and R. S. Rosenberg, "Markovian Models and Numer-
ical Analysis of Computer System Behavior," AFIPS Spring Joint
Computer Conference Proc., 141-148 (1966).
WHIT 44 Whittaker, E. and G. Robinson, The Calculus of Observations, 4th ed.,
Blackie (London), (1944).
EXERCISES
4.1. Consider the Markovian queueing system shown below. Branch
labels are birth and death rates. Node labels give the number of
customers in the system.
For the bulk arrival system studied in Section 4.5, find the mean N̄
and variance σ_N² for the number of customers in the system. Express
your answers in terms of the moments of the bulk arrival distribution.
Consider an M/M/1 system with the following variation: Whenever
the server becomes free, he accepts two customers (if at least two are
available) from the queue into service simultaneously. Of these two
customers, only one receives service; when the service for this one is
completed, both customers depart (and so the other customer got a
"free ride").
If only one customer is available in the queue when the server
becomes free, then that customer is accepted alone and is serviced;
if a new customer happens to arrive when this single customer is being
served, then the new customer joins the old one in service and this
new customer receives a "free ride."
In all cases, the service time is exponentially distributed with mean
1/μ sec and the average (Poisson) arrival rate is λ customers per
second.
(a) Draw the appropriate state diagram.
(b) Write down the appropriate difference equations for p_k =
equilibrium probability of finding k customers in the system.
(c) Solve for P(z) in terms of p_0 and p_1.
(d) Express p_1 in terms of p_0.
We consider the denominator polynomial in Eq. (4.35) for the system
E_r/M/1. Of the r + 1 roots, we know that one occurs at z = 1. Use
Rouché's theorem (see Appendix I) to show that exactly r - 1 of the
remaining r roots lie in the unit disk |z| ≤ 1 and therefore exactly
one root, say z_0, lies in the region |z_0| > 1.
Show that the solution to Eq. (4.71) gives a set of variables {x_i} which
guarantees that Eq. (4.72) is indeed the solution to Eq. (4.69).
(a) Draw the state-transition-rate diagram showing local balance
for the case (N = 3, K = 5) with the following structure:
j
o 2 3
o o o o
o o I - ~ ~
2 0 o o
where 0 < ξ < 1 and nodes 0 and N + 1 are the "source" and "sink"
nodes, respectively. We also have (for some integer K)
k_1 + k_2 ≠ K
k_1 + k_2 = K
and assume the system initially contains K customers.
(a) Find e_i (i = 1, 2) as given in Eq. (4.75).
(b) Since N = 2, let us denote p(k_1, k_2) = p(k_1, K - k_1) by h_{k_1}.
Find the balance equations for h_{k_1}.
(c) Solve these equations for h_{k_1} explicitly.
(d) By considering the fraction of time the first node is busy, find
the time between customer departures from the network (via
node 1, of course).
PART III
INTERMEDIATE
QUEUEING THEORY
We are here concerned with those queueing systems for which we can still
apply certain simplifications due to their Markovian nature. We encounter
those systems that are representable as imbedded Markov chains, namely,
the M/G/1 and the G/M/m queues. In Chapter 5 we rapidly develop the basic
equilibrium equations for M/G/1, giving the notorious Pollaczek-Khinchin
equations for queue length and waiting time. We next discuss the busy period
and, finally, introduce some moderately advanced techniques for studying
these systems, even commenting a bit on the time-dependent solutions.
Similarly for the queue G/M/m in Chapter 6, we find that we can make some
very specific statements about the equilibrium system behavior and, in fact,
find that the conditional distribution of waiting time will always be exponen-
tial regardless of the interarrival time distribution! Similarly, the conditional
queue-length distribution is shown to be geometric. We note in this part that
the methods of solution are quite different from those studied in Part II, but
that much of the underlying behavior is similar; in particular the mean
queue size, the mean waiting time, and the mean busy period duration
are all inversely proportional to 1 - ρ as earlier. In Chapter 7 we briefly
investigate a rather pleasing interpretation of transforms in terms of
probabilities.
The techniques we had used in Chapter 3 [the explicit product solution
of Eq. (3.11)] and in Chapter 4 (flow conservation) are replaced by an
indirect z-transform approach in Chapter 5. However, in Chapter 6, we return
once again to the flow conservation inherent in the π = πP solution.
5
The Queue M/G/1
* Usually a state description is given in terms of a vector which describes the system's
state at time t. A vector v(t) is a state vector if, given v(t) and all inputs to this system during
the interval (t, t_1) (where t < t_1), then we are capable of solving for the state vector v(t_1).
Clearly it behooves us to choose a state vector containing that information that permits us
to calculate quantities of importance for understanding system behavior.
† We saw in Chapter 4 that occasionally we record the number of stages in the system rather
than the number of customers.
variables, is discussed in the exercises at the end of this chapter; more will be
said about this method in the next section. We also discuss the busy period
analysis [GAVE 59], which leads to the waiting-time distribution (see
Section 5.10). Beyond these there exist other approaches to non-Markovian
queueing systems, among which are the random-walk and combinatorial
approaches [TAKA 67] and the method of Green's function [KEIL 65].
\overline{x^k} \triangleq \int_0^\infty x^k\, b(x)\, dx

and we sometimes express these service-time moments by b_k \triangleq \overline{x^k}.
Let us discuss the state description (vector) for the M/G/1 system. If at
some time t we hope to summarize the complete past history of this system,
then it is clear that we must certainly specify N(t), the number of customers
present at time t. Moreover, we must specify X_0(t), the service time already
received by the customer in service at time t; this is necessary since the
service-time distribution is not necessarily of the memoryless type. (Clearly,
we need not specify how long it has been since the last arrival entered the
system, since the arrival process is of the memoryless type.) Thus we see that
the random process N(t) is a non-Markovian process. However, the vector
[N(t), X_0(t)] is a Markov process and is an appropriate state vector for the
M/G/1 system, since it completely summarizes all past history relevant to the
future system development.
We have thus gone from a single-component description of state in
elementary queueing theory to what appears to be a two-component descrip-
tion here in intermediate queueing theory. Let us examine the inherent
difference between these two state descriptions. In elementary queueing
theory, it is sufficient to provide N(t), the number in the system at time t, and
we then have a Markov process with a discrete-state space, where the states
themselves are either finite or countable in number. When we proceed to the
current situation where we need a two-dimensional state description, we find
that the number in the system N(t) is still denumerable, but now we must also
provide X_0(t), the expended service time, which is continuous. We have thus
evolved from a discrete-state description to a continuous-state description,
and this essential difference complicates the analysis.
It is possible to proceed with a general theory based upon the couplet
[N(t), X_0(t)] as a state vector, and such a method of solution is referred to as
the method of supplementary variables. For a treatment of this sort the reader
is referred to Cox [COX 55] and Kendall [KEND 53]; Henderson [HEND
72] also discusses this method, but chooses the remaining service time instead
of the expended service time as the supplementary variable. In this text we
choose to use the method of the imbedded Markov chain as discussed below.
However, before we proceed with the method itself, it is clear that we should
understand some properties of the expended service time; this we do in the
following section.
[Figure 5.1: time axis showing the arrivals A_k, the selected interval X, its age X_0, and the residual life Y.]
in the excellent monograph by Cox [COX 62] or in the fine expository
article by Smith [SMIT 58]; the reader is also encouraged to see Feller
[FELL 66]. The basic diagram is that given in Figure 5.1. In this figure we let
A_k denote the kth automobile, which we assume arrives at time τ_k. We assume
that the intervals τ_{k+1} - τ_k are independent and identically distributed
random variables with common distribution F(x) and pdf given by

f(x) \triangleq \frac{dF(x)}{dx} \qquad (5.2)
Let us now choose a random point in time, say t, when our hippie arrives at
the roadside cafe. In this figure, A_{n-1} is the last automobile to arrive prior to t
and A_n will be the first automobile to arrive after t. We let X denote this
"special" interarrival time and we let Y denote the time that our hippie must
wait until the next arrival. Clearly, the sequence of arrival points {τ_k} forms
a renewal process; renewal theory discusses the instantaneous replacement
of components. In this case, {τ_k} forms the sequence of instants when the
old component fails and is replaced by a new component. In the language
of renewal theory X is said to be the lifetime of the component under con-
sideration, Y is said to be the residual life of that component at time t, and
X_0 = X - Y is referred to as the age of that component at time t. Let us adopt
that terminology and proceed to find the pdf for X and Y, the lifetime and
residual life of our selected component. We assume that the renewal process
has been operating for an arbitrarily long time since we are interested only
in limiting distributions.
The amazing result we will find is that X is not distributed according to
F(x). In terms of our earlier example this means that the interval which the
hippie happens to select by his arrival at the cafe is not a typical interval.
In fact, herein lies the solution to our paradox: A long interval is more likely
to be "intercepted" by our hippie than a short one. In the case of a Poisson
process we shall see that this bias causes the selected interval to be on the
average twice as long as a typical interval.
Let the residual life Y have a distribution

\hat{F}(x) \triangleq P[Y \le x] \qquad (5.3)

with density

\hat{f}(x) \triangleq \frac{d\hat{F}(x)}{dx} \qquad (5.4)

Similarly, let the selected lifetime X have a pdf f_X(x) and PDF F_X(x) where

F_X(x) \triangleq P[X \le x] \qquad (5.5)
In Exercise 5.2 we direct the reader through a rigorous derivation for the
residual lifetime density \hat{f}(x). Rather than proceed through those details, let
us give an intuitive derivation for the density that takes advantage of our
physical intuition regarding this problem. Our basic observation is that long
intervals between renewal points occupy larger segments of the time axis
than do shorter intervals, and therefore it is more likely that our random point
t will fall in a long interval. If we accept this, then we recognize that the
probability that an interval of length x is chosen should be proportional to
the length (x) as well as to the relative occurrence of such intervals [which is
given by f(x) dx]. Thus, for the selected interval, we may write

f_X(x) = \frac{x f(x)}{m_1} \qquad (5.8)

where m_1 denotes the mean interarrival (lifetime) interval. This is our first
result. Let us proceed now to find the density of residual life \hat{f}(x). If we are
told that X = x, then the probability that the residual life Y does not exceed
the value y is given by

P[Y \le y \mid X = x] = \frac{y}{x}

for 0 ≤ y ≤ x; this last is true since we have randomly chosen a point within
this selected interval, and therefore this point must be uniformly distributed
within that interval. Thus we may write down the joint density of X and Y as

f_{XY}(x, y) = f_X(x)\,\frac{1}{x} = \frac{f(x)}{m_1}

for 0 ≤ y ≤ x. Integrating over x we obtain \hat{f}(y), which is the unconditional
density for Y, namely,

\hat{f}(y)\, dy = \int_{x=y}^{\infty} \frac{f(x)}{m_1}\, dy\, dx

This immediately gives the final result:

\hat{f}(y) = \frac{1 - F(y)}{m_1} \qquad (5.10)

This is our second result. It gives the density of residual life in terms of the
common distribution of interval length and its mean.*
Let us express this last result in terms of transforms. Using our usual
transform notation we have the following correspondences:

f(x) ⇔ F*(s)
\hat{f}(x) ⇔ \hat{F}*(s)

Clearly, all the random variables we have been discussing in this section are
nonnegative, and so the relationship in Eq. (5.10) may be transformed directly
by use of entry 5 in Table I.4 and entry 13 in Table I.3 to give

\hat{F}^*(s) = \frac{1 - F^*(s)}{s\, m_1} \qquad (5.11)
It is now a trivial matter to find the moments of residual life in terms of the
moments of the lifetimes themselves. We denote the nth moment of the life-
time by m_n and the nth moment of the residual life by r_n, that is,
m_n \triangleq \int_0^\infty x^n f(x)\, dx and r_n \triangleq \int_0^\infty y^n \hat{f}(y)\, dy. Then

r_n = \frac{m_{n+1}}{(n + 1)\, m_1} \qquad (5.14)

This important formula is most often used to evaluate r_1, the mean residual
life, which is found equal to

r_1 = \frac{m_2}{2 m_1} \qquad (5.15)

r_1 = \frac{m_1}{2} + \frac{\sigma^2}{2 m_1} \qquad (5.16)
This last form shows that the correct answer to the hippie paradox is m_1/2,
half the mean interarrival time, only if the variance is zero (regularly spaced
arrivals); however, for Poisson arrivals, m_1 = 1/λ and σ² = 1/λ², giving
r_1 = 1/λ = m_1, which confirms our earlier solution to the hippie paradox of
residual life. Note that m_1/2 ≤ r_1 and that r_1 will grow without bound as σ² → ∞.
The result for the mean residual life (r_1) is a rather counterintuitive result; we
will see it appear again and again.
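The length-biasing argument of this section is easy to confirm by simulation. The sketch below (with assumed parameters and an exponential lifetime chosen only for illustration) selects an interval with probability proportional to its length, picks a uniform point inside it, and compares the average residual life against m_1/2 + σ²/(2m_1) from Eq. (5.16).

    import numpy as np

    rng = np.random.default_rng(1)
    lam = 1.0
    X = rng.exponential(1.0 / lam, size=200_000)   # lifetimes drawn from f(x)

    m1, var = X.mean(), X.var()

    # Length-biased selection: an interval is "intercepted" with probability
    # proportional to its length, exactly as in the derivation of Eq. (5.8)
    idx = rng.choice(len(X), size=100_000, p=X / X.sum())
    selected = X[idx]
    residual = rng.uniform(0.0, selected)          # uniform position within the interval

    print("selected-interval mean:", selected.mean(), " (about 2*m1 for the exponential)")
    print("mean residual life    :", residual.mean())
    print("m1/2 + var/(2*m1)     :", m1 / 2 + var / (2 * m1))

For the exponential case both printed estimates should be close to m_1, illustrating the doubling effect described above; replacing the lifetime distribution with a constant makes the residual mean drop to m_1/2.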
Before leaving renewal theory we take this opportunity to quote some other
useful results. In the language of renewal theory the age-dependent failure
rate r(x) is defined as the instantaneous rate at which a component will fail
given that it has already attained an age of x; that is, r(x) dx ≜ P[x <
lifetime of component ≤ x + dx | lifetime > x]. From first principles, we see
that this conditional density is

r(x) = \frac{f(x)}{1 - F(x)} \qquad (5.17)

where once again f(x) and F(x) refer to the common distribution of compo-
nent lifetime. The renewal function H(x) is defined to be the expected number
of renewals in the interval (0, x), and the renewal density h(x) is merely the
renewal rate at time x, defined by

h(x) \triangleq \frac{dH(x)}{dx}

for which it may be shown that \lim_{x \to \infty} h(x) = 1/m_1.
This merely says that in the limit one cannot identify when the renewal
process began, and so the rate at which components are renewed is equal to
the inverse of the average time between renewals (m_1). We note that h(x) is
not a pdf; in fact, its integral diverges in the typical case. Nevertheless, it
does possess a Laplace transform, which we denote by H*(s). It is easy to
show that the following relationship exists between this transform and the
transform of the underlying pdf for renewals, namely:

H^*(s) = \frac{F^*(s)}{1 - F^*(s)}

More will not be said about renewal theory at this point. Again the reader
is urged to consult the references mentioned above.
Our approach for the balance of this chapter is first to find the mean
number in system, a result referred to as the Pollaczek-Khinchin mean-value
formula.* Following that we obtain the generating function for the distri-
bution of number of customers in the system and then the transform for both
the waiting-time and total system-time distributions. These last transform
results we shall refer to as Pollaczek-Khinchin transform equations.*
Furthermore, we solve for the transform of the busy-period duration and
for the number served in the busy period; we then show how to derive
waiting-time results from the busy-period analysis. Lastly, we derive the
Takacs integrodifferential equation for the unfinished work in the system. We
begin by defining some notation and identifying the transition probabilities
associated with our imbedded Markov chain.
In addition, we introduce two new random variables of considerable interest:
q_n = number of customers left behind by the departure of C_n from service
v_n = number of customers arriving during the service of C_n
• There is considerable disagreement within the queueing theory literature regarding the
names for the mean-value and transform equations. Some authors refer to the mean-value
expression as the Pollaczek-Khinchin formula, whereas others reserve that term for the
transform equations. We attempt to relieve that confusion by adding the appropriate
adjectives to these names.
We are interested in solving for the distribution of q_n, namely, P[q_n = k],
which is, in fact, a time-dependent probability; its limiting distribution
(as n → ∞) corresponds to d_k, which we know is equal to p_k, the basic
distribution discussed in Chapters 3 and 4 previously. In carrying out that
solution we will find that the number of arriving customers v_n plays a crucial
role.
As in Chapter 2, we find that the transition probabilities describe our
Markov chain; thus we define the one-step transition probabilities
p_{ij} \triangleq P[q_{n+1} = j \mid q_n = i], which form the matrix

P = \begin{bmatrix} \alpha_0 & \alpha_1 & \alpha_2 & \alpha_3 & \cdots \\ \alpha_0 & \alpha_1 & \alpha_2 & \alpha_3 & \cdots \\ 0 & \alpha_0 & \alpha_1 & \alpha_2 & \cdots \\ 0 & 0 & \alpha_0 & \alpha_1 & \cdots \\ 0 & 0 & 0 & \alpha_0 & \cdots \\ \vdots & & & & \ddots \end{bmatrix}

where

\alpha_k \triangleq P[v_{n+1} = k] \qquad (5.26)

For example, the jth component of the first row of this matrix gives the
probability that the previous customer left behind an empty system and that
during the service of C_{n+1} exactly j customers arrived (all of whom were
left behind by the departure of C_{n+1}); similarly, for other than the first row,
the entry p_{ij} for j ≥ i - 1 gives the probability that exactly j - i + 1
customers arrived during the service period for C_{n+1}, given that C_n left behind
exactly i customers; of these i customers one was indeed C_{n+1} and this
accounts for the +1 term in this last computation. The state-transition-
probability diagram for this Markov chain is shown in Figure 5.3, in which
we show only transitions out of E_i.
Let us now calculate α_k. We observe first of all that the arrival process (a
Poisson process at a rate of λ customers per second) is independent of the state
of the queueing system. Similarly, x_n, the service time for C_n, is independent
Figure 5.3 State-transition-probability diagram for the M/G/1 imbedded Markov
chain.
where again b(x) = dB(x)/dx is the pdf for service time. Since we have a
Poisson arrival process, we may replace the probability beneath the integral
by the expression given in Eq. (2.131), that is,

\alpha_k = \int_0^\infty \frac{(\lambda x)^k}{k!}\, e^{-\lambda x}\, b(x)\, dx \qquad (5.28)

As usual, we define the utilization factor

\rho = \lambda \bar{x}

and point out that this Markov chain is ergodic if ρ < 1 (unless specified
otherwise, we shall assume ρ < 1 below).
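For specific service-time distributions the α_k can be written in closed form. The sketch below (with assumed values of λ and the service parameters) evaluates Eq. (5.28) for exponential and for constant service, and checks that the α_k sum to one and have mean ρ.

    import math

    lam = 0.8          # arrival rate (assumed)
    mu = 1.0           # M/M/1 service rate; M/D/1 uses constant service time 1/mu
    rho = lam / mu
    K = 60             # truncation for the numerical checks

    def alpha_mm1(k):
        # Eq. (5.28) with b(x) = mu e^{-mu x} gives a geometric-like form
        return (mu / (lam + mu)) * (lam / (lam + mu)) ** k

    def alpha_md1(k):
        # constant service time d = 1/mu: alpha_k is Poisson with mean lam*d
        d = 1.0 / mu
        return math.exp(-lam * d) * (lam * d) ** k / math.factorial(k)

    for name, alpha in (("M/M/1", alpha_mm1), ("M/D/1", alpha_md1)):
        total = sum(alpha(k) for k in range(K))
        mean = sum(k * alpha(k) for k in range(K))
        print(f"{name}: sum = {total:.6f}, mean arrivals per service = {mean:.4f} (rho = {rho})")

Both rows report a sum of essentially 1 and a mean of ρ, which anticipates the stability discussion below.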
The stationary probabilities may be obtained from the vector equation
p = pP, where p = [p_0, p_1, p_2, ...] whose kth component p_k (= π_k) is
merely the limiting probability that a departing customer will leave behind k
customers, namely,

p_k = P[\tilde{q} = k] \qquad (5.29)

In the following section we find the mean value E[q̃] and in the section
following that we find the z-transform for p_k.
which certainly will exist in the case where our imbedded chain is ergodic.
Our first step is to find an equation relating the random variable q_{n+1} to
the random variable q_n by considering two cases. The first is shown in Figure
5.4 (using our time-diagram notation) and corresponds to the case where C_n
leaves behind a nonempty system (i.e., q_n > 0). Note that we are assuming a
first-come-first-served queueing discipline, although this assumption only
affects waiting times and not queue lengths or busy periods. We see from
Figure 5.4 that q_n is clearly greater than zero since C_{n+1} is already in the
system when C_n departs. We purposely do not show when customer C_{n+2}
arrives since that is unimportant to our developing argument. We wish now
to find an expression for q_{n+1}, the number of customers left behind when C_{n+1}
departs. This is clearly given as equal to q_n, the number of customers present
when C_n departed, less 1 (since customer C_{n+1} departs himself) plus the
number of customers that arrive during the service interval x_{n+1}. This last
term is clearly equal to v_{n+1} by definition and is shown as a "set" of arrivals
[Figure 5.4: time diagram for the case q_n > 0; the v_{n+1} arrivals during the service of C_{n+1} are left behind at his departure.]
[Figure 5.5: time diagram for the case q_n = 0; C_{n+1} arrives to an empty system after C_n departs.]
q_{n+1} = q_n - 1 + v_{n+1} \qquad q_n > 0 \qquad (5.31)

Now consider the second case where q_n = 0, that is, our departing custo-
mer leaves behind an empty system; this is illustrated in Figure 5.5. In this
case we see that q_n is clearly zero since C_{n+1} has not yet arrived by the time
C_n departs. Thus q_{n+1}, the number of customers left behind by the departure
of C_{n+1}, is merely equal to the number of arrivals during his service time.
Thus

q_{n+1} = v_{n+1} \qquad q_n = 0 \qquad (5.32)

Collecting together Eq. (5.31) and Eq. (5.32) we have

q_{n+1} = \begin{cases} q_n - 1 + v_{n+1} & q_n > 0 \\ v_{n+1} & q_n = 0 \end{cases} \qquad (5.33)

It is convenient at this point to introduce Δ_k, the shifted discrete step function

\Delta_k \triangleq \begin{cases} 1 & k = 1, 2, \ldots \\ 0 & k \le 0 \end{cases} \qquad (5.34)

With this definition we may rewrite Eq. (5.33) compactly as

q_{n+1} = q_n - \Delta_{q_n} + v_{n+1} \qquad (5.35)
Equation (5.35) is the key equation for the study of M/G/1 systems. It
remains for us to extract from Eq. (5.35) the mean value* for q_n. As usual, we
concern ourselves not with the time-dependent behavior (which is indexed
by the subscript n) but rather with the limiting distribution for the random
variable q_n, which we denote by q̃. Accordingly we assume that the jth
moment of q_n exists in the limit as n goes to infinity independent of n,
namely,

\lim_{n \to \infty} E[q_n^{\,j}] = E[\tilde{q}^{\,j}] \qquad (5.36)

Alas, the expectation we were seeking drops out of this equation, which
yields instead

E[\Delta_{\tilde{q}}] = E[\tilde{v}] \qquad (5.37)

What insight does this last equation provide us? (Note that since v is the
number of arrivals during a customer's service time, which is independent of
n, the index on v_n could have been dropped even before we went to the limit.)
We have by definition that

E[\Delta_{\tilde{q}}] = \sum_{k=0}^{\infty} \Delta_k\, P[\tilde{q} = k]

which is just P[q̃ > 0]; moreover, a direct calculation (carried out below) shows that

E[\tilde{v}] = \rho \qquad (5.41)

We thus have the perfectly reasonable conclusion that the expected number
of arrivals per service interval is equal to ρ (= λx̄). For stability we of course
require ρ < 1, and so Eq. (5.41) indicates that customers must arrive more
slowly than they can be served (on the average).
We now return to the task of solving for the expected value of q̃. Forming
the first moment of Eq. (5.35) yielded interesting results but failed to give the
desired expectation. Let us now attempt to find this average value by first
squaring Eq. (5.35) and then taking expectations as follows:

q_{n+1}^2 = q_n^2 + \Delta_{q_n}^2 + v_{n+1}^2 - 2 q_n \Delta_{q_n} + 2 q_n v_{n+1} - 2 \Delta_{q_n} v_{n+1} \qquad (5.42)

From our definition in Eq. (5.34) we have (Δ_{q_n})² = Δ_{q_n} and also q_n Δ_{q_n} = q_n.
Applying this to Eq. (5.42) and taking expectations, we have

E[\tilde{q}] = \rho + \frac{E[\tilde{v}^2] - E[\tilde{v}]}{2(1 - \rho)} \qquad (5.43)
V(z) = \sum_{k=0}^{\infty} \left[\int_0^\infty \frac{(\lambda x)^k}{k!}\, e^{-\lambda x}\, b(x)\, dx\right] z^k

Our summation and integral are well behaved, and we may interchange the
order of these two operations to obtain

V(z) = \int_0^\infty e^{-\lambda x} \left(\sum_{k=0}^{\infty} \frac{(\lambda x z)^k}{k!}\right) b(x)\, dx = \int_0^\infty e^{-(\lambda - \lambda z)x}\, b(x)\, dx \qquad (5.45)

At this point we define (as usual) the Laplace transform B*(s) for the service
time pdf as

B^*(s) \triangleq \int_0^\infty e^{-sx}\, b(x)\, dx

We note that Eq. (5.45) is of this form, with the complex variable s replaced
by λ - λz, and so we recognize the important result that

V(z) = B^*(\lambda - \lambda z) \qquad (5.46)
In order to simplify the notation for these limiting derivative operations, we
have used the more usual superscript notation with the argument replaced
by its limit. Furthermore, we now resort to the overbar notation to denote the
expected value of the random variable below that bar.† Thus Eqs. (5.47)-
(5.49) become

B^{*(k)}(0) = (-1)^k\, \overline{x^k} \qquad (5.50)
V^{(1)}(1) = \bar{v} \qquad (5.51)
V^{(2)}(1) = \overline{v^2} - \bar{v} \qquad (5.52)

Of course, we must also have the conservation of probability given by
† Recall from Eq. (2.19) that E[x_n^k] \triangleq \overline{x^k} = b_k (rather than the more cumbersome notation
\overline{(x)^k} which one might expect). We take the same liberties with ṽ and q̃, namely, \overline{v^k} = E[\tilde{v}^k]
and \overline{q^k} = E[\tilde{q}^k].
This last may be calculated as

V^{(1)}(1) = -\lambda \left.\frac{dB^*(y)}{dy}\right|_{y=0}

so that

\bar{v} = \lambda \bar{x} \qquad (5.58)

But λx̄ is just ρ, and we have once again established that which we knew from
Eq. (5.41), namely, v̄ = ρ. (This certainly is encouraging.) We may continue
to pick up higher moments by differentiating Eq. (5.54) once again to
obtain

\frac{d^2 V(z)}{dz^2} = \frac{d^2 B^*(\lambda - \lambda z)}{dz^2} \qquad (5.59)
Using the first derivative of B*(y) we now form its second derivative as
follows:

\frac{d^2 B^*(\lambda - \lambda z)}{dz^2} = \frac{d}{dz}\left[-\lambda\, \frac{dB^*(y)}{dy}\right] = -\lambda \left(\frac{d^2 B^*(y)}{dy^2}\right)\left(\frac{dy}{dz}\right)

or

\frac{d^2 B^*(\lambda - \lambda z)}{dz^2} = \lambda^2\, \frac{d^2 B^*(y)}{dy^2} \qquad (5.60)

Evaluating this at z = 1 (that is, at y = 0) and using Eqs. (5.50) and (5.52), we have

\overline{v^2} = \lambda^2\, \overline{x^2} + \rho \qquad (5.61)

We have thus finally solved for \overline{v^2}. This clearly is the quantity required in
order to evaluate Eq. (5.43). If we so desired (and with suitable energy) we
could continue this differentiation game and extract additional moments of ṽ
in terms of the moments of x̃; we prefer not to yield to that temptation here.
Returning to Eq. (5.43) we apply Eq. (5.61) to obtain

\bar{q} = \rho + \frac{\lambda^2\, \overline{x^2}}{2(1 - \rho)} \qquad (5.62)
This is the result we were after! It expresses the average queue size at customer
departure instants in terms of known quantities, namely, the utilization
factor (ρ = λx̄), λ, and \overline{x^2} (the second moment of the service-time
distribution). Let us rewrite this result in terms of C_b^2 ≜ σ_b²/(x̄)², the squared
coefficient of variation for service time:

\bar{q} = \rho + \rho^2\, \frac{1 + C_b^2}{2(1 - \rho)} \qquad (5.63)

The mean number in system is

\bar{N} = \sum_{k=0}^{\infty} k\, P[\tilde{q} = k] \qquad (5.64)

whereas the mean number in queue (not counting the customer in service) is

\bar{N}_q = \sum_{k=1}^{\infty} (k - 1)\, P[\tilde{q} = k]

This easily gives us

\bar{N}_q = \sum_{k=0}^{\infty} k\, P[\tilde{q} = k] - \sum_{k=1}^{\infty} P[\tilde{q} = k]

\bar{N}_q = \bar{N} - \rho \qquad (5.65)

This simple formula gives the general relationship we were seeking.
As an example of the P-K mean-value formula, in the case of an M/M/1
system, we have that the coefficient of variation for the exponential distri-
bution is unity [see Eq. (2.145)]. Thus for this system we have

\bar{q} = \rho + \rho^2\, \frac{2}{2(1 - \rho)}

or

\bar{q} = \frac{\rho}{1 - \rho} \qquad \text{M/M/1} \qquad (5.66)

Equation (5.66) gives the expected number of customers left behind by a
departing customer. Compare this to the expression for the average number of
customers in an M/M/1 system as given in Eq. (3.24). They are identical and
lend validity to our earlier statements that the method of the imbedded
Markov chain in the M/G/1 case gives rise to a solution that is good at all
points in time. As a second example, let us consider the service-time dis-
tribution in which service time is a constant and equal to x̄. Such systems are
described by the notation M/D/1, as we mentioned earlier. In this case
clearly C_b² = 0 and so we have

\bar{q} = \rho + \rho^2\, \frac{1}{2(1 - \rho)}

\bar{q} = \frac{\rho}{1 - \rho} - \frac{\rho^2}{2(1 - \rho)} \qquad \text{M/D/1} \qquad (5.67)

Thus the M/D/1 system has ρ²/[2(1 - ρ)] fewer customers on the average than
the M/M/1 system, demonstrating the earlier statement that q̄ increases with
the variance of the service-time distribution.
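One can check the mean-value formula directly by iterating the basic recurrence (5.35). The sketch below (with assumed rates) simulates q_{n+1} = q_n - Δ_{q_n} + v_{n+1} for exponential and for constant service times and compares the running average of q_n with Eq. (5.63).

    import numpy as np

    rng = np.random.default_rng(7)
    lam, mu = 0.8, 1.0          # assumed arrival and service rates
    rho = lam / mu
    N = 200_000                 # number of imbedded (departure) points simulated

    def simulate(draw_service):
        q, total = 0, 0
        for _ in range(N):
            x = draw_service()                # service time of the next customer served
            v = rng.poisson(lam * x)          # Poisson arrivals during that service
            q = max(q - 1, 0) + v             # Eq. (5.35)
            total += q
        return total / N

    def pk_mean(cb2):                         # Eq. (5.63)
        return rho + rho ** 2 * (1 + cb2) / (2 * (1 - rho))

    print("M/M/1 simulated:", simulate(lambda: rng.exponential(1 / mu)),
          " P-K:", pk_mean(1.0))
    print("M/D/1 simulated:", simulate(lambda: 1 / mu),
          " P-K:", pk_mean(0.0))

The simulated averages fall close to ρ/(1 - ρ) = 4 and to the smaller M/D/1 value, respectively, illustrating the dependence on the service-time variance just discussed.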
As a further example, consider the two-stage hyperexponential service-time
pdf

b(x) = \frac{1}{4}\,\lambda e^{-\lambda x} + \frac{3}{4}\,2\lambda e^{-2\lambda x} \qquad x \ge 0 \qquad (5.68)

That is, the service facility consists of two parallel service stages, as shown in
Figure 5.6. Note that λ is also the arrival rate, as usual. We may immediately
calculate x̄ = 5/(8λ) and σ_b² = 31/(64λ²), which yields C_b² = 31/25. Thus

\bar{q} = \rho + \rho^2\, \frac{2.24}{2(1 - \rho)} = \frac{\rho}{1 - \rho} + \frac{0.12\,\rho^2}{1 - \rho}

Thus we see the (small) increase in q̄ for the (small) increase in C_b² over the
value of unity for M/M/1. We note in this example that ρ is fixed at ρ =
λx̄ = 5/8; therefore, q̄ = 1.79, whereas for M/M/1 at this value of ρ we get
q̄ = 1.66. We have introduced this M/H₂/1 example here since we intend to
carry it (and the M/M/1 example) through our M/G/1 discussion.
The main result of this section is the Pollaczek-Khinchin formula for
the mean number in system, as given in Eq. (5.63). This result becomes a
special case of our results in the next section, but we feel that its development
has been useful as a pedagogical device. Moreover, in obtaining this result
we established the basic equation for M/G/1 given in Eq. (5.35). We also
obtained the general relationship between V(z) and B*(s), as given in Eq.
(5.46); from this we are able to obtain the moments for the number of arrivals
during a service interval.
We have not as yet derived any results regarding time spent in the system;
we are now in a position to do so. We recall Little's result:

\bar{N} = \lambda T

This result relates the expected number of customers N̄ in a system to λ, the
arrival rate of customers, and to T, their average time in the system. For
M/G/1 we have derived Eq. (5.63), which is the expected number in the
system at customer departure instants. We may therefore apply Little's result
to this expected number in order to obtain the average time spent in the
system (queue + service). We know that q̄ also represents the average
number of customers found at random, and so we may equate q̄ = N̄. Thus
we have

\bar{N} = \rho + \rho^2\, \frac{1 + C_b^2}{2(1 - \rho)} = \lambda T

T = \bar{x} + \frac{\rho \bar{x}\,(1 + C_b^2)}{2(1 - \rho)} \qquad (5.69)
This last is easily interpreted. The average total time spent in system is clearly
the average time spent in service plus the average time spent in the queue.
The first term above is merely the average service time and thus the second
term must represent the average queueing time (which we denote by W).
Thus we have that the average queueing time is

W = \frac{\rho \bar{x}\,(1 + C_b^2)}{2(1 - \rho)}

or

W = \frac{W_0}{1 - \rho} \qquad (5.70)

where W_0 ≜ λ\overline{x^2}/2; W_0 is the average remaining service time for the customer
(if any) found in service by a new arrival (work it out using the mean residual
life formula). A particularly nice normalization factor is now apparent.
Consider T, the average time spent in system. It is natural to compare this
time to x̄, the average service time required of the system by a customer.
Thus the ratio T/x̄ expresses the ratio of time spent in system to time required
of the system and represents the factor by which the system inconveniences
customers due to the fact that they are sharing the system with other custom-
ers. If we use this normalization in Eqs. (5.69) and (5.70), we arrive at the
following, where now time is expressed in units of average service intervals:

\frac{T}{\bar{x}} = 1 + \rho\, \frac{1 + C_b^2}{2(1 - \rho)} \qquad (5.71)

\frac{W}{\bar{x}} = \rho\, \frac{1 + C_b^2}{2(1 - \rho)} \qquad (5.72)

Each of these last two equations is also referred to as the P-K mean-value
formula [along with Eq. (5.63)]. Here we see the linear fashion in which the
statistical fluctuations of the input processes create delays (i.e., 1 + C_b² is
the sum of the squared interarrival-time and service-time coefficients of
variation). Further, we see the highly nonlinear dependence of delays upon
the average load ρ.
Let us now compare the mean normalized queueing time for the systems*
M/M/1 and M/D/1; these have a squared coefficient of variation C_b² equal to
1 and 0, respectively. Applying this to Eq. (5.72) we have

\frac{W}{\bar{x}} = \frac{\rho}{1 - \rho} \qquad \text{M/M/1} \qquad (5.73)

\frac{W}{\bar{x}} = \frac{\rho}{2(1 - \rho)} \qquad \text{M/D/1} \qquad (5.74)

Note that the system with constant service time (M/D/1) has half the average
waiting time of the system with exponentially distributed service time
(M/M/1). Thus, as we commented earlier, the time in the system and the
number in the system both grow in proportion to the variance of the
service-time distribution.
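The comparison is easy to tabulate. The short sketch below evaluates the normalized wait of Eq. (5.72) for the three service-time distributions met so far at a few arbitrary loads, making the factor-of-two gap between M/D/1 and M/M/1 (and the slightly larger M/H₂/1 value) visible at a glance.

    # Normalized mean wait W/x_bar = rho*(1 + Cb^2)/(2*(1 - rho)), Eq. (5.72)
    cases = {"M/D/1": 0.0, "M/M/1": 1.0, "M/H2/1 example": 31.0 / 25.0}

    for rho in (0.3, 0.6, 0.9):                  # arbitrary illustrative loads
        row = {name: rho * (1 + cb2) / (2 * (1 - rho)) for name, cb2 in cases.items()}
        print(f"rho={rho}: " + ", ".join(f"{n}={v:.3f}" for n, v in row.items()))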
Let us now proceed to find the distribution of the number in the system.
* Of less interest is our highly specialized M/H₂/1 example for which we obtain W/x̄ =
1.12ρ/(1 - ρ).
taking expectations we were able to obtain P-K formulas that gave the
expected number in the system [Eq. (5.63)] and the normalized expected
time in the system [Eq. (5.71)]. If we were now to seek the second moment of
the number in the system we could obtain this quantity by first cubing Eq.
(5.75) and then taking expectations. In this operation it is clear that the
expectation E[q̃³] would cancel on both sides of the equation once the limit
on n was taken; this would then leave an expression for the second moment of
q̃. Similarly, all higher moments can be obtained by raising Eq. (5.75) to
successively higher powers and then forming expectations.* In this section,
however, we choose to go after the distribution for q_n itself (actually we
consider the limiting random variable q̃). As it turns out, we will obtain a
result which gives the z-transform for this distribution rather than the dis-
tribution itself. In principle, these last two are completely equivalent; in
practice, we sometimes face great difficulty in inverting from the z-transform
back to the distribution. Nevertheless, we can pick off the moments of the
distribution of q̃ from the z-transform in extremely simple fashion by making
use of the usual properties of transforms and their derivatives.
Let us now proceed to calculate the z-transform for the probability of
finding k customers in the system immediately following the departure of a
customer. We begin by defining the z-transform for the random variable q_n
as

Q_n(z) \triangleq \sum_{k=0}^{\infty} P[q_n = k]\, z^k \qquad (5.76)

From Appendix II (and from the definition of expected value) we have that
this z-transform (or probability generating function) is also given by

Q_n(z) = E[z^{q_n}] \qquad (5.77)

* Specifically, the kth power leads to an expression for E[q̃^{k-1}] that involves the first k
moments of service time.
Let us now take expectations:

E[z^{q_{n+1}}] = E[z^{q_n - \Delta_{q_n} + v_{n+1}}]

Using Eq. (5.77) we recognize the left-hand side of this last as Q_{n+1}(z).
Similarly, we may write the right-hand side of this equation as the expecta-
tion of the product of two factors, giving us

Q_{n+1}(z) = E[z^{q_n - \Delta_{q_n}}\, z^{v_{n+1}}] \qquad (5.79)

We now observe, as earlier, that the random variable v_{n+1} (which represents
the number of arrivals during the service of C_{n+1}) is independent of the
random variable q_n (which is the number of customers left behind upon the
departure of C_n). Since this is true, then the two factors within the expectation
on the right-hand side of Eq. (5.79) must themselves be independent (since
functions of independent random variables are also independent). We may
thus write the expectation of the product in that equation as the product of
the expectations:

Q_{n+1}(z) = E[z^{q_n - \Delta_{q_n}}]\, E[z^{v_{n+1}}] \qquad (5.80)

The second of these two expectations we again recognize as being independent
of the subscript n + 1; we thus remove the subscript and consider the random
variable ṽ again. From Eq. (5.44) we then recognize that the second expecta-
tion on the right-hand side of Eq. (5.80) is merely

E[z^{v_{n+1}}] = E[z^{\tilde{v}}] = V(z)

We thus have

Q_{n+1}(z) = V(z)\, E[z^{q_n - \Delta_{q_n}}] \qquad (5.81)

The only complicating factor in this last equation is the expectation. Let us
examine this term separately; from the definition of expectation we have

E[z^{q_n - \Delta_{q_n}}] = \sum_{k=0}^{\infty} P[q_n = k]\, z^{k - \Delta_k}

The difficult part of this summation is that the exponent on z contains Δ_k,
which takes on one of two values according to the value of k. In order to
simplify this special behavior we write the summation by exposing the first
term separately:

E[z^{q_n - \Delta_{q_n}}] = P[q_n = 0]\, z^{0-0} + \sum_{k=1}^{\infty} P[q_n = k]\, z^{k-1} \qquad (5.82)

Regarding the sum in this last equation we see that it is almost of the form
given in Eq. (5.76); the differences are that we have one fewer power of z
and also that we are missing the first term in the sum. Both these deficiencies
may be corrected as follows:

\sum_{k=1}^{\infty} P[q_n = k]\, z^{k-1} = \frac{1}{z} \sum_{k=0}^{\infty} P[q_n = k]\, z^{k} - \frac{1}{z}\, P[q_n = 0]\, z^0 \qquad (5.83)
Applying this to Eq. (5.82) and recognizing that the sum on the right-hand
side of Eq. (5.83) is merely Q_n(z), we have

E[z^{q_n - \Delta_{q_n}}] = P[q_n = 0] + \frac{1}{z}\, Q_n(z) - \frac{1}{z}\, P[q_n = 0] \qquad (5.84)

We now take the limit as n goes to infinity and recognize the limiting value
expressed in Eq. (5.36). We thus have

Q(z) = V(z)\left[P[\tilde{q} = 0]\left(1 - \frac{1}{z}\right) + \frac{1}{z}\, Q(z)\right]

Solving for Q(z), and noting that the requirement Q(1) = 1 forces P[q̃ = 0] = 1 - ρ, we find

Q(z) = \frac{V(z)\,(1 - \rho)\,(1 - 1/z)}{1 - V(z)/z}

Finally we multiply numerator and denominator of this last by (-z) and use
our result in Eq. (5.46) to arrive at the well-known equation that gives the
z-transform for the number of customers in the system,

Q(z) = B^*(\lambda - \lambda z)\, \frac{(1 - \rho)(1 - z)}{B^*(\lambda - \lambda z) - z}

For the system M/M/1 we have B*(s) = μ/(s + μ), and so

Q(z) = \frac{\mu}{\lambda - \lambda z + \mu}\; \frac{(1 - \rho)(1 - z)}{\dfrac{\mu}{\lambda - \lambda z + \mu} - z}

Noting that ρ = λ/μ, we have

Q(z) = \frac{1 - \rho}{1 - \rho z} \qquad (5.88)
Equation (5.88) is the solution for the z-transform of the distribution of the
number of people in the system. We can reach a point such as this with many
service-time distributions B(x); for the exponential distribution we can evalu-
ate the inverse transform (by inspection!). We find immediately that

p_k = (1 - \rho)\rho^k \qquad k = 0, 1, 2, \ldots \qquad (5.89)

This then is the familiar solution for M/M/1. If the reader refers back to
Eq. (3.23), he will find the same function for the probability of k customers
in the M/M/1 system. However, Eq. (3.23) gives the solution for all points
in time whereas Eq. (5.89) gives the solution only at the imbedded Markov
points (namely, at the departure instants for customers). The fact that these
two answers are identical is no surprise for two reasons: first, because we
told you so (we said that the imbedded Markov points give solutions that
are good at all points); and second, because we recognize that the M/M/1
system forms a continuous-time Markov chain.
As a second example, we consider the system M/H₂/1 whose pdf for service
time was given in Eq. (5.68). By inspection we may find B*(s), which gives

B^*(s) = \frac{1}{4}\,\frac{\lambda}{s + \lambda} + \frac{3}{4}\,\frac{2\lambda}{s + 2\lambda} \qquad (5.90)

Applying this to the P-K transform equation above, we obtain

Q(z) = \frac{(1 - \rho)\left[1 - (7/15)z\right]}{\left[1 - (2/5)z\right]\left[1 - (2/3)z\right]}

We now expand Q(z) in partial fractions, which gives

Q(z) = (1 - \rho)\left(\frac{1/4}{1 - (2/5)z} + \frac{3/4}{1 - (2/3)z}\right)

This last may be inverted by inspection (by now the reader should recognize
the sixth entry in Table I.2) to give

p_k = (1 - \rho)\left[\tfrac{1}{4}\left(\tfrac{2}{5}\right)^k + \tfrac{3}{4}\left(\tfrac{2}{3}\right)^k\right]

Lastly, we note that the value for ρ has already been calculated at 5/8, and
so for a final solution we have

p_k = \tfrac{3}{32}\left(\tfrac{2}{5}\right)^k + \tfrac{9}{32}\left(\tfrac{2}{3}\right)^k \qquad k = 0, 1, 2, \ldots \qquad (5.92)

It should not surprise us to find this sum of geometric terms for our solution.
Further examples will be found in the exercises. For now we terminate the
discussion of how many customers are in the system and proceed with the
calculation of how long a customer spends in the system.
calculati on of how long a cu st omer spends in the system .
Time------;;....
Ououo - , -;.., _ _
\.~----,v~---'}
"n
~.
arrive
Figure 5.7 Derivation of V(z) = B* (i. - i.z).
In Figure 5.7, the reader is reminded of the structure from which we obtained
this equation. Recall that V(z) is the z-transform of the number of customer
arrivals in a particular interval, where the arrival process is Poisson at a rate λ
customers per second. The particular time interval involved happens to be
the service interval for C_n; this interval has distribution B(x) with Laplace
transform B*(s). The derived relation between V(z) and B*(s) is given in
Eq. (5.93). The important observation to make now is that a relationship
of this form must exist between any two random variables where the one
identifies the number of customer arrivals from a Poisson process and the
other describes the time interval over which we are counting these customer
arrivals. It clearly makes no difference what the interpretation of this time
interval is, only that we give the distribution of its length; in Eq. (5.93) it
just so happens that the interval involved is a service interval. Let us now
direct our attention to Figure 5.8, which concentrates on the time spent in the
system for C_n. In this figure we have traced the history of C_n. The interval
labeled w_n identifies the time from when C_n enters the queue until that cus-
tomer leaves the queue and enters service; it is clearly the waiting time in queue
for C_n. We have also identified the service time x_n for C_n. We may thus
Figure 5.8 Derivation of Q(z) = S*(λ - λz).
identify the total time spent in system s_n for C_n:

s_n = w_n + x_n \qquad (5.94)

We have earlier defined q_n as the number of customers left behind upon the
departure of C_n. In considering a first-come-first-served system it is clear
that all those customers present upon the arrival of C_n must depart before he
does; consequently, those customers that C_n leaves behind him (a total of
q_n) must be precisely those who arrive during his stay in the system. Thus,
referring to Figure 5.8, we may identify those customers who arrive during
the time interval s_n as being our previously defined random variable q_n.
The reader is now asked to compare Figures 5.7 and 5.8. In both cases we
have a Poisson arrival process at rate λ customers per second. In Figure 5.7 we
inquire into the number of arrivals (v_n) during the interval whose duration
is given by x_n; in Figure 5.8 we inquire into the number of arrivals (q_n)
during an interval whose duration is given by s_n. We now define the distri-
bution for the total time spent in system for C_n as

S_n(y) \triangleq P[s_n \le y] \qquad (5.95)

Since we are assuming ergodicity, we recognize immediately that the limit of
this distribution (as n goes to infinity) must be independent of n. We denote
this limit by S(y) and the limiting random variable by s [i.e., S_n(y) → S(y)
and s_n → s]. Thus

S(y) \triangleq P[s \le y] \qquad (5.96)

Finally, we define the Laplace transform of the pdf for total time in system as

S^*(s) \triangleq \int_0^\infty e^{-sy}\, dS(y) \qquad (5.97)

With these definitions we go back to the analogy between Figures 5.7 and
5.8. Clearly, since v_n is analogous to q_n, then V(z) must be analogous to
Q(z), since each describes the generating function for the respective number
distribution. Similarly, since x_n is analogous to s_n, then B*(s) must be
analogous to S*(s). We have therefore by direct analogy from Eq. (5.93)
that†

Q(z) = S^*(\lambda - \lambda z) \qquad (5.98)

Since we already have an explicit expression for Q(z) as given in the P-K
transform equation, we may therefore use that with Eq. (5.98) to give an
explicit expression for S*(s) as

S^*(\lambda - \lambda z) = B^*(\lambda - \lambda z)\, \frac{(1 - \rho)(1 - z)}{B^*(\lambda - \lambda z) - z} \qquad (5.99)

† This can be derived directly by the unconvinced reader in a fashion similar to that which
led to Eqs. (5.28) and (5.46).
This last equation is just crying for the obvious change of variable

s = \lambda - \lambda z

which gives

z = 1 - \frac{s}{\lambda}

Making this change of variable in Eq. (5.99) we then have

S^*(s) = B^*(s)\, \frac{s(1 - \rho)}{s - \lambda + \lambda B^*(s)} \qquad (5.100)

We also define the waiting-time distribution for C_n as

W_n(y) \triangleq P[w_n \le y] \qquad (5.101)

Furthermore, we define the limiting quantities (as n → ∞), W_n(y) → W(y)
and w_n → w̃, so that

W(y) \triangleq P[\tilde{w} \le y] \qquad (5.102)

The corresponding Laplace transform is

W^*(s) \triangleq \int_0^\infty e^{-sy}\, dW(y) \qquad (5.103)

From Eq. (5.94) we may derive the distribution of w̃ from the distribution
of s and x (we drop subscript notation now since we are considering equi-
librium behavior). Since a customer's service time is independent of his
queueing time, we have that s, the time spent in system for some customer,
is the sum of two independent random variables: w̃ (his queueing time) and x
(his service time). That is, Eq. (5.94) has the limiting form

s = w + x \qquad (5.104)

As derived in Appendix II the Laplace transform of the pdf of a random
variable that is itself the sum of two independent random variables is equal
to the product of the Laplace transforms for the pdf of each. Consequently,
we have

S^*(s) = W^*(s)\, B^*(s)

Thus from Eq. (5.100) we obtain immediately that

W^*(s) = \frac{s(1 - \rho)}{s - \lambda + \lambda B^*(s)} \qquad (5.105)

This is the desired expression for the Laplace transform of the queueing
(waiting)-time distribution. Here we have the third equation that will be
referred to as the P-K transform equation.
Let us rewrite the P-K transform equation for waiting time as follows:

W^*(s) = \frac{1 - \rho}{1 - \rho\left[\dfrac{1 - B^*(s)}{s\bar{x}}\right]} \qquad (5.106)

We recognize the bracketed term in the denominator of this equation to be
exactly the Laplace transform associated with the density of residual service
time from Eq. (5.11). Using our special notation for residual densities and
their transforms, we define

\hat{B}^*(s) \triangleq \frac{1 - B^*(s)}{s\bar{x}} \qquad (5.107)

and are therefore permitted to write

W^*(s) = \frac{1 - \rho}{1 - \rho\, \hat{B}^*(s)} \qquad (5.108)
This observation is truly amazing since we recognized at the outset that the
problem with the M/G/1 analysis was to take account of the expended
service time for the man in service. From that investigation we found that the
residual service time remaining for the customer in service had a pdf given
by b̂(x), whose Laplace transform is given in Eq. (5.107). In a sense there is a
poetic justice in its appearance at this point in the final solution. Let us
follow Benes [BENE 56] in inverting this transform in terms of these residual
service time densities. Equation (5.108) may be expanded as the following
power series:

W^*(s) = (1 - \rho)\sum_{k=0}^{\infty} \rho^k\, [\hat{B}^*(s)]^k \qquad (5.109)

From Appendix I we know that the kth power of a Laplace transform corre-
sponds to the k-fold convolution of the inverse transform with itself. As in
Appendix I the symbol ⊛ is used to denote the convolution operator, and
we now choose to denote the k-fold convolution of a function f(x) with
itself by the use of a parenthetical subscript as follows:

f_{(k)}(x) \triangleq \underbrace{f(x) \circledast f(x) \circledast \cdots \circledast f(x)}_{k\text{-fold convolution}} \qquad (5.110)
Using this notation we may by inspection invert Eq. (5.109) to obtain the
waiting-time pdf, which we denote by w(y) ≜ dW(y)/dy; it is given by

w(y) = \sum_{k=0}^{\infty} (1 - \rho)\rho^k\, \hat{b}_{(k)}(y) \qquad (5.111)

This is a most intriguing result! It states that the waiting-time pdf is given by
a weighted sum of convolved residual service time pdf's. The interesting
observation is that the weighting factor is simply (1 - ρ)ρ^k, which we now
recognize to be the probability distribution for the number of customers in
an M/M/1 system. Tempting as it is to try to give a physical explanation for
the simplicity of this result and its relation to M/M/1, no satisfactory,
intuitive explanation has been found to explain this dramatic form. We
note that the contribution to the waiting-time density decreases geometrically
with ρ in this series. Thus, for ρ not especially close to unity, we expect the
high-order terms to be of less and less significance, and one practical applica-
tion of this equation is to provide a rapidly converging approximation to the
density of waiting time.
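For M/M/1 this series is easy to evaluate, since the residual service density b̂(y) is again exponential and its k-fold convolution is the k-stage Erlang density. The sketch below (with assumed λ and μ) sums the first several terms of Eq. (5.111), expressed as a CDF, and compares the result with the known M/M/1 waiting-time distribution W(y) = 1 - ρe^{-μ(1-ρ)y}.

    import math

    lam, mu = 0.8, 1.0        # assumed rates
    rho = lam / mu

    def erlang_cdf(k, y):
        """CDF of the k-fold convolution of the exponential residual density mu*e^{-mu y}."""
        if k == 0:
            return 1.0        # 0-fold convolution: a unit impulse at the origin
        return 1.0 - sum(math.exp(-mu * y) * (mu * y) ** j / math.factorial(j)
                         for j in range(k))

    def W_series(y, terms=80):
        # W(y) = sum_k (1 - rho) rho^k * CDF of the k-fold convolved residual density
        return sum((1 - rho) * rho ** k * erlang_cdf(k, y) for k in range(terms))

    for y in (0.5, 2.0, 5.0):
        exact = 1 - rho * math.exp(-mu * (1 - rho) * y)
        print(f"y={y}: series={W_series(y):.6f}  exact={exact:.6f}")

With ρ = 0.8 only a few dozen terms already reproduce the exact values to several decimal places, illustrating the rapid convergence just mentioned.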
So far in this section we have established two principal results, namely,
the P-K transform equations for time in system and time in queue given in
Eqs. (5.100) and (5.105), respectively. In the previous section we have already
given the first moment of these two random variables [see Eqs. (5.69) and
(5.70)]. We wish now to give a recurrence formula for the moments of the
waiting time. We denote the kth moment of the waiting time E[w^k], as usual,
by \overline{w^k}. Takacs [TAKA 62b] has shown that if \overline{x^{i+1}} is finite, then so also are
w̄, \overline{w^2}, ..., \overline{w^i}; we now adopt our slightly simplified notation for the ith
moment of service time as follows: b_i ≜ \overline{x^i}. The Takacs recurrence formula is

\overline{w^k} = \frac{\lambda}{1 - \rho} \sum_{i=1}^{k} \binom{k}{i} \frac{b_{i+1}}{i + 1}\, \overline{w^{k-i}} \qquad (5.112)

where \overline{w^0} ≜ 1. From this formula we may write down the first couple of
moments for waiting time (and note that the first moment of waiting time
agrees with the P-K formula):

\bar{w}\ (= W) = \frac{\lambda b_2}{2(1 - \rho)} \qquad (5.113)

\overline{w^2} = 2(\bar{w})^2 + \frac{\lambda b_3}{3(1 - \rho)} \qquad (5.114)

In order to obtain similar moments for the total time in system, that is,
E[s^k], which we denote by \overline{s^k}, we need merely take advantage of Eq. (5.104);
from this equation we find

\overline{s^k} = E[(w + x)^k] \qquad (5.115)
Using the binomial expansion and the independence between waiting time
and service time for a given customer, we find

\overline{s^k} = \sum_{i=0}^{k} \binom{k}{i}\, \overline{w^{k-i}}\, b_i \qquad (5.116)

Thus calculating the moments of the waiting time from Eq. (5.112) also
permits us to calculate the moments of time in system from this last equation.
In Exercise 5.25, we derive a relationship between \overline{s^k} and the moments of
the number in system; the simplest of these is Little's result, and the others
are useful generalizations.
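The recurrence (5.112) is straightforward to program. The sketch below (with assumed rates, as an illustration only) computes the first few waiting-time moments for an M/M/1 queue from the service-time moments b_i = i!/μ^i and checks the first two against Eqs. (5.113) and (5.114).

    import math

    lam, mu = 0.8, 1.0                             # assumed rates
    rho = lam / mu
    b = lambda i: math.factorial(i) / mu ** i      # moments of the exponential service time

    def waiting_moments(kmax):
        """Takacs recurrence, Eq. (5.112): returns [w^0, w^1, ..., w^kmax] (overbars)."""
        w = [1.0]                                  # w^0 = 1
        for k in range(1, kmax + 1):
            s = sum(math.comb(k, i) * b(i + 1) / (i + 1) * w[k - i]
                    for i in range(1, k + 1))
            w.append(lam / (1 - rho) * s)
        return w

    w = waiting_moments(3)
    print("first moment :", w[1], "  Eq. (5.113):", lam * b(2) / (2 * (1 - rho)))
    print("second moment:", w[2], "  Eq. (5.114):", 2 * w[1] ** 2 + lam * b(3) / (3 * (1 - rho)))

For these parameters the first moment comes out to ρ/[μ(1 - ρ)] = 4, as the P-K mean-value formula requires.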
At the end of Section 3.2, we promised the reader that we would develop
the pdf for the time spent in the system for an M/M/1 queueing system. We
are now in a position to fulfill that promise. Let us in fact find both the
distribution of waiting time and the distribution of system time for customers in
M/M/1. Using Eq. (5.87) for the system M/M/1 we may calculate S*(s) from
Eq. (5.100) as follows:

S^*(s) = \frac{\mu}{s + \mu}\left[\frac{s(1 - \rho)}{s - \lambda + \lambda\mu/(s + \mu)}\right]

S^*(s) = \frac{\mu(1 - \rho)}{s + \mu(1 - \rho)} \qquad \text{M/M/1} \qquad (5.117)

This equation gives the Laplace transform of the pdf for time in the system,
which we denote, as usual, by s(y) ≜ dS(y)/dy. Fortunately (as is usual with
the case M/M/1), we recognize the inverse of this transform by inspection.
Thus we have immediately that

s(y) = \mu(1 - \rho)\, e^{-\mu(1-\rho)y} \qquad y \ge 0 \qquad \text{M/M/1} \qquad (5.118)

The corresponding PDF is given by

S(y) = 1 - e^{-\mu(1-\rho)y} \qquad y \ge 0 \qquad \text{M/M/1} \qquad (5.119)

Similarly, from Eq. (5.105) we may obtain W*(s) as

W^*(s) = \frac{s(1 - \rho)}{s - \lambda + \lambda\mu/(s + \mu)} = \frac{(s + \mu)(1 - \rho)}{s + (\mu - \lambda)} \qquad (5.120)
I" (y )
o y
This expression gives the Laplace transform for the pdf of waiting time, which
we denote, as usual, by w(y) ≜ dW(y)/dy. From entry 2 in Table I.4 of
Appendix I, we recognize that the inverse transform of (1 - ρ) must be an
impulse at the origin; thus by inspection we have

w(y) = (1 - \rho)\, u_0(y) + \lambda(1 - \rho)\, e^{-\mu(1-\rho)y} \qquad y \ge 0 \qquad \text{M/M/1}

Recall also that the probability that an arriving customer finds k customers in
the M/M/1 system is

p_k = (1 - \rho)\rho^k \qquad (5.124)
* A simple exponential form for the tail of the waiting-time distribution (that is, the
probabilities associated with long waits) can be derived for the system M/G/1. We postpone
a discussion of this asymptotic result until Chapter 2, Volume II, in which we establish
this result for the more general system G/G/1.
We repeat again that this is the same expression we found in Eq. (5.89), and we know by now that this result applies for all points in time. We wish to form the Laplace transform of the pdf of total time in the system by considering this Laplace transform conditioned on the number of customers found in the system upon the arrival of a new customer. We begin as generally as possible and first consider the system M/G/1. In particular, we define the conditional distribution

$$S(y \mid k) = P[\text{customer's total time in system} \le y \mid \text{he finds } k \text{ in system upon his arrival}]$$

$$S^*(s \mid k) = \int_0^\infty e^{-sy}\,dS(y \mid k) \tag{5.125}$$
Now it is clear that if a customer finds no one in system upon his arrival, then he must spend an amount of time in the system exactly equal to his own service time, and so we have

$$S^*(s \mid 0) = B^*(s)$$
On the other hand, if our arriving customer finds exactly one customer ahead of him, then he remains in the system for a time equal to the time to finish the man in service plus his own service time; since these two intervals are independent, the Laplace transform of the density of this sum must be the product of the Laplace transforms of each density, giving

$$S^*(s \mid 1) = \hat{B}^*(s)B^*(s)$$

where $\hat{B}^*(s)$ is, again, the transform for the pdf of residual service time. Similarly, if our arriving customer finds k in front of him, then his total system time is the sum of the k service times associated with each of these customers plus his own service time. These k + 1 random variables are all independent, and k of them are drawn from the same distribution B(x). Thus we have the k-fold product of $B^*(s)$ with $\hat{B}^*(s)$, giving

$$S^*(s \mid k) = [B^*(s)]^k\hat{B}^*(s) \tag{5.126}$$
Equation (5.126) holds for M/G/1. Now for our M/M/1 problem, we have that $B^*(s) = \mu/(s+\mu)$ and, similarly, for $\hat{B}^*(s)$ (memoryless); thus we have

$$S^*(s \mid k) = \left(\frac{\mu}{s+\mu}\right)^{k+1} \tag{5.127}$$
In order to obtain $S^*(s)$ we need merely weight the transform $S^*(s \mid k)$ with the probability $p_k$ of our customer finding k in the system upon his arrival, namely,

$$S^*(s) = \sum_{k=0}^{\infty}S^*(s \mid k)p_k$$

$$S^*(s) = \sum_{k=0}^{\infty}\left(\frac{\mu}{s+\mu}\right)^{k+1}(1-\rho)\rho^k = \frac{\mu(1-\rho)}{s+\mu(1-\rho)} \tag{5.128}$$
We recognize that Eq. (5.128) is identical to Eq. (5.117), and so the remaining steps leading to Eq. (5.118) follow immediately. This demonstration of a simpler method for calculating the distribution of system time in the M/M/1 queue establishes the following important fact: In the development of Eq. (5.128) we were required to consider a sum of random variables, each distributed by the same exponential distribution; the number of terms in that sum was itself a random variable distributed geometrically. What we found was that this geometric weighting on a sum of identically distributed exponential random variables was itself exponential [see Eq. (5.118)]. This result is true in general, namely, that a geometric sum of exponential random variables is itself exponentially distributed.
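This fact is easy to see empirically. The following Monte Carlo sketch is my own illustration, not from the text; the parameter names `mu` and `rho` are illustrative.

```python
# A geometric sum of i.i.d. exponentials is itself exponential:
# with P[N = k] = (1 - rho) rho^k, the sum of N + 1 exponential(mu)
# variables is exponential with rate mu(1 - rho) (the M/M/1 system time).
import random

def geometric_sum_of_exponentials(mu, rho):
    total = random.expovariate(mu)          # always at least one term
    while random.random() < rho:            # continue with probability rho
        total += random.expovariate(mu)
    return total

mu, rho, n = 1.0, 0.5, 200_000
samples = [geometric_sum_of_exponentials(mu, rho) for _ in range(n)]
print(sum(samples) / n)                     # approaches 1/(mu(1 - rho)) = 2.0
```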
Let us now carry out the calculations for our M/H2/1 example. Using the expression for $B^*(s)$ given in Eq. (5.90), and applying this to the P-K transform equation for waiting-time density, we have

$$W^*(s) = \frac{4s(1-\rho)(s+\lambda)(s+2\lambda)}{4(s-\lambda)(s+\lambda)(s+2\lambda) + 8\lambda^3 + 7\lambda^2 s}$$

This simplifies, upon factoring the denominator, to give

$$W^*(s) = \frac{(1-\rho)(s+\lambda)(s+2\lambda)}{[s+(3/2)\lambda][s+(1/2)\lambda]}$$

Once again, we must divide numerator by denominator to reduce the degree of the numerator by one, giving

$$W^*(s) = (1-\rho) + \frac{\lambda(1-\rho)[s+(5/4)\lambda]}{[s+(3/2)\lambda][s+(1/2)\lambda]}$$

We may now carry out our partial-fraction expansion:

$$W^*(s) = (1-\rho)\left[1 + \frac{\lambda/4}{s+(3/2)\lambda} + \frac{3\lambda/4}{s+(1/2)\lambda}\right]$$
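As a quick sanity check of this expansion (my own addition, with an arbitrary illustrative value of λ), one can evaluate both sides numerically:

```python
# Verify the partial-fraction identity behind the W*(s) expansion above.
# The common factor (1 - rho) cancels, so it is omitted here.
lam = 1.3
for s in (0.7, 2.0, 5.5):
    lhs = (s + lam) * (s + 2 * lam) / ((s + 1.5 * lam) * (s + 0.5 * lam))
    rhs = 1 + (lam / 4) / (s + 1.5 * lam) + (3 * lam / 4) / (s + 0.5 * lam)
    print(abs(lhs - rhs))   # ~1e-16
# Term-by-term inversion then gives
# w(y) = (1-rho)[u0(y) + (lam/4) e^{-(3 lam/2) y} + (3 lam/4) e^{-(lam/2) y}].
```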
We now recall the important stochastic process U(t) as defined in Eq. (2.3):

U(t) ≜ the unfinished work in the system at time t
     ≜ the remaining time required to empty the system of all customers present at time t
[Figure 5.10: (a) The unfinished work and the busy period; (b) the customer history.]
The first customer $C_1$ enters the system at time $\tau_1$ and brings with him an amount of work (that is, a required service time) of size $x_1$. This customer finds the system idle, and therefore his arrival terminates the previous idle period and initiates a new busy period. Prior to his arrival we assumed the system to be empty, and therefore the unfinished work was clearly zero. At the instant of the arrival of $C_1$ the system backlog or unfinished work jumps to the size $x_1$, since it would take this long to empty the system if we allowed no further entries beyond this instant. As time progresses from $\tau_1$ and the server works on $C_1$, this unfinished work reduces at the rate of 1 sec/sec, and so U(t) decreases with slope equal to -1. $t_2$ sec later, at time $\tau_2$, we observe that $C_2$ enters the system and forces the unfinished work U(t) to make another vertical jump of magnitude $x_2$, equal to the service time for $C_2$. The function then decreases again at a rate of 1 sec/sec until customer $C_3$ enters at time $\tau_3$, forcing a vertical jump again of size $x_3$. U(t) continues to decrease as the server works on the customers in the system until it reaches the instant $\tau_1 + Y_1$, at which time he has successfully emptied the system of all customers and of all work. This
then terminates the busy period and initiates a new idle period. The idle period is terminated at time $\tau_4$ when $C_4$ enters. This second busy period serves only one customer before the system goes idle again. The third busy period serves two customers. And so it continues. For reference we show in Figure 5.10b our usual double-time-axis representation for the same sequence of customer arrivals and service times, drawn to the same scale as Figure 5.10a and under an assumed first-come-first-served discipline. Thus we can say that U(t) is a function which has vertical jumps at the customer-arrival instants (these jumps equaling the service times for those customers) and which decreases at a rate of 1 sec/sec so long as it is positive; when it reaches a value of zero, it remains there until the next customer arrival. This stochastic process is a continuous-state Markov process subject to discontinuous jumps; we have not encountered such a process before.

Observe from Figure 5.10a that the departure instants may be obtained by extrapolating the linearly decreasing portions of U(t) down to the horizontal axis; at these intercepts a customer departure occurs and a new customer service begins. Again we emphasize that this last observation is good only for the first-come-first-served system. What is important, however, is to observe that the function U(t) itself is independent of the order of service! The only requirement for this last statement to hold is that the server remain busy as long as some customer is in the system and that no customers depart before they are completely served; such a system is said to be "work conserving" (see Chapter 3, Volume II). The truth of this independence is evident when one considers the definition of U(t).
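The description of U(t) translates directly into a computation. The sketch below is my own, not from the text; it simulates the M/G/1 unfinished work from arrival instants and service times alone, which makes the work-conservation point concrete: no service order appears anywhere.

```python
# Sample-path computation of the unfinished work U(t) for M/G/1:
# U jumps by the service time at each arrival and drains at 1 sec/sec.
import random

def unfinished_work(arrivals, services, t):
    """U(t) for a work-conserving single server."""
    u, now = 0.0, 0.0
    for a, x in zip(arrivals, services):
        if a > t:
            break
        u = max(0.0, u - (a - now)) + x   # drain at unit rate, then jump by x
        now = a
    return max(0.0, u - (t - now))

random.seed(1)
lam, mu, T = 0.8, 1.0, 50.0
arrivals, t = [], 0.0
while t < T:
    t += random.expovariate(lam)          # Poisson arrivals
    arrivals.append(t)
services = [random.expovariate(mu) for _ in arrivals]   # exponential here
print(unfinished_work(arrivals, services, 25.0))
```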
Now for the idle-period and busy-period distributions. Recall that the interarrival times are exponentially distributed:

$$A(t) = 1 - e^{-\lambda t} \qquad t \ge 0 \tag{5.130}$$

The calculation of the idle-period distribution is trivial for the system M/G/1. Observe that when the system terminates a busy period, a new idle period must begin, and this idle period will terminate immediately upon the arrival of the next customer. Since we have a memoryless distribution, the time until the next customer arrival is distributed according to Eq. (5.130), and therefore we have

$$F(y) = 1 - e^{-\lambda y} \qquad y \ge 0 \tag{5.133}$$

So much for the idle-time distribution in M/G/1.
[Figure 5.11: The busy period under last-come-first-served; the marked span of duration Y is the busy period generated by C1.]
Now for the busy-period distribution; this is not quite so simple. The reader is referred to Figure 5.11. In part (a) of this figure we once again observe the unfinished work U(t). We assume that the system is empty just prior to the instant $\tau_1$, at which time customer $C_1$ initiates a busy period of duration Y. His service time is equal to $x_1$. It is clear that this customer will depart from the system at time $\tau_1 + x_1$. During his service other customers
may arrive to the system, and it is they who will continue the busy period. For the function shown, three other customers ($C_2$, $C_3$, and $C_4$) arrive during the interval of $C_1$'s service. We now make use of a brilliant device due to Takacs [TAKA 62a]. In particular, we choose to permute the order in which customers are served so as to create a last-come-first-served (LCFS) queueing discipline* (recall that the duration of a busy period is independent of the order in which customers are served). The motivation for the reordering of customers will soon be apparent. At the departure of $C_1$ we then take into service the newest customer, which in our example is $C_4$. In addition, since all future arrivals during this busy period must be served before (LCFS!) any customers (besides $C_4$) who arrived during $C_1$'s service (in this case $C_2$ and $C_3$), we may as well consider them to be (temporarily) out of the system. Thus, when $C_4$ enters service, it is as if he initiated a new busy period, which we will refer to as a "sub-busy period"; the sub-busy period generated by $C_4$ will have a duration $X_4$ exactly as long as it takes to service $C_4$ and all those who enter the system to find it busy (remember that $C_2$ and $C_3$ are not considered to be in the system at this time). Thus in Figure 5.11a we show the sub-busy period generated by $C_4$, during which customers $C_4$, $C_6$, and $C_5$ get serviced in that order. At time $\tau_1 + x_1 + X_4$ this sub-busy period ends, and we now continue the last-come-first-served order of service by bringing $C_3$ back into the system. It is clear that he may be considered as generating his own sub-busy period, of duration $X_3$, during which all of his "descendants" receive service in the last-come-first-served order (namely, $C_3$, $C_7$, $C_8$, and $C_9$). Finally, then, the system empties again, we reintroduce $C_2$, and permit his sub-busy period (of length $X_2$) to run its course (and complete the major busy period), in which customers get serviced in the order $C_2$, $C_{10}$, and finally $C_{11}$.
Figure 5.11a shows that the contour of any sub-busy period is identical with the contour of the main busy period over the same time interval and is merely shifted down by a constant amount; this shift, in fact, is equal to the summed service time of all those customers who arrived during $C_1$'s service time and who have not yet been allowed to generate their own sub-busy periods. The details of customer history are shown in Figure 5.11c, and the total number in the system at any time under this discipline is shown in Figure 5.11b. Thus, as far as the queueing system is concerned, it is strictly a last-come-first-served system from start to finish. However, our analysis is simplified if we focus upon the sub-busy periods and observe that each behaves statistically in a fashion identical to the major busy period generated by $C_1$. This is clear since all the sub-busy periods, as well as the major busy period, are each initiated by a single customer, whose service times are all drawn independently from the same distribution; each sub-busy period continues until the system catches up to the work load, in the sense that the unfinished work function U(t) drops to zero. Thus we recognize that the random variables $\{X_k\}$ are each independent and identically distributed and have the same distribution as Y, the duration of the major busy period.

* This is a "push-down" stack. This is only one of many permutations that "work"; it happens that LCFS is convenient for pedagogical purposes.
In Figure 5.11c the reader may follow the customer history in detail; the solid black region in this figure identifies the customer being served during that time interval. At each customer departure the server "floats up" to the top of the customer contour to engage the most recent arrival at that time; occasionally the server "floats down" to the customer directly below him, such as at the departure of $C_6$. The server may truly be thought of as floating up to the highest customer, there to be held by him until his departure, and so on. Occasionally, however, we see that our server "falls down" through a gap in order to pick up the most recent arrival to the system, for example, at the departure of $C_5$. It is at such instants that new sub-busy periods begin, and only when the server falls down to hit the horizontal axis does the major busy period terminate.
Our point of view is now clear: the duration of a busy period Y is the sum of $1 + \tilde{v}$ random variables, the first of which is the service time for $C_1$ and the remainder of which are random variables describing the durations of the sub-busy periods, each of which is distributed as a busy period itself. $\tilde{v}$ is a random variable equal to the number of customer arrivals during $C_1$'s service interval. Thus we have the important relation

$$Y = x_1 + X_2 + X_3 + \cdots + X_{\tilde{v}+1}$$
Once again we remind the reader that these transforms may also be expressed as expectation operators, namely:

$$G^*(s) \triangleq E[e^{-sY}]$$
Let us now take advantage of the powerful technique of conditioning used so often in probability theory; this technique permits one to write down the probability associated with a complex event by conditioning that event on
enough given conditions so that the conditional probability may be written down by inspection. The unconditional probability is then obtained by multiplying by the probability of each condition and summing over all mutually exclusive and exhaustive conditions. In our case we choose to condition Y on two events: the duration of $C_1$'s service and the number of customer arrivals during his service. With this point of view we then calculate the following conditional transform:
$$E[e^{-sY} \mid x_1 = x, \tilde{v} = k] = E[e^{-s(x + X_{k+1} + \cdots + X_2)}] = E[e^{-sx}e^{-sX_{k+1}}\cdots e^{-sX_2}]$$
Since the sub-busy periods have durations that are independent of each other, we may write this last as

$$E[e^{-sY} \mid x_1 = x, \tilde{v} = k] = E[e^{-sx}]E[e^{-sX_{k+1}}]\cdots E[e^{-sX_2}]$$

Since x is a given constant we have $E[e^{-sx}] = e^{-sx}$, and further, since the sub-busy periods are identically distributed with corresponding transforms $G^*(s)$, we have

$$E[e^{-sY} \mid x_1 = x, \tilde{v} = k] = e^{-sx}[G^*(s)]^k$$
Since $\tilde{v}$ represents the number of arrivals during an interval of length x, $\tilde{v}$ must have a Poisson distribution whose mean is $\lambda x$. We may therefore remove the condition on $\tilde{v}$ as follows:

$$E[e^{-sY} \mid x_1 = x] = \sum_{k=0}^{\infty}\frac{(\lambda x)^k}{k!}e^{-\lambda x}e^{-sx}[G^*(s)]^k = e^{-x[s+\lambda-\lambda G^*(s)]}$$

This last we recognize as the transform of the pdf for service time evaluated at a value equal to the bracketed term in the exponent; that is, removing the condition on $x_1$ we obtain the fundamental functional equation for the busy period:

$$G^*(s) = B^*(s + \lambda - \lambda G^*(s)) \tag{5.137}$$
We now define

$$g_k \triangleq E[Y^k] \tag{5.139}$$

as the kth moment of the busy-period distribution, and we intend to express the first few moments in terms of the moments of the service-time distribution, namely, $\overline{x^n}$. As usual we have

$$g_k = (-1)^kG^{*(k)}(0) \qquad \overline{x^k} = (-1)^kB^{*(k)}(0) \tag{5.140}$$
From Eq. (5.137) we then obtain directly

$$g_1 = -G^{*(1)}(0) = -B^{*(1)}(0)\,\frac{d}{ds}\bigl[s+\lambda-\lambda G^*(s)\bigr]\Big|_{s=0} = -B^{*(1)}(0)[1-\lambda G^{*(1)}(0)]$$

[note, for s = 0, that $s + \lambda - \lambda G^*(s) = 0$] and so

$$g_1 = \bar{x}(1 + \lambda g_1)$$

Solving for $g_1$ and recalling that $\rho = \lambda\bar{x}$, we then have

$$g_1 = \frac{\bar{x}}{1-\rho} \tag{5.141}$$
If we compare this last result with Eq. (3.26), we find that the average length of a busy period for the system M/G/1 is equal to the average time a customer spends in an M/M/1 system and depends only on λ and $\bar{x}$.
Let us now chase down the second moment of the busy period. Proceeding from Eqs. (5.140) and (5.137), and differentiating twice, we obtain

$$g_2 = \overline{x^2}(1+\lambda g_1)^2 + \bar{x}\lambda g_2$$

Solving for $g_2$ and using our result for $g_1$, we have

$$g_2 = \frac{\overline{x^2}(1+\lambda g_1)^2}{1-\lambda\bar{x}} = \frac{\overline{x^2}[1+\lambda\bar{x}/(1-\rho)]^2}{1-\rho}$$

and so finally

$$g_2 = \frac{\overline{x^2}}{(1-\rho)^3} \tag{5.142}$$
This last result gives the second moment of the busy period, and it is interesting to note the cube in the denominator; this effect does not occur when one calculates the second moment of the wait in the system, where only a square power appears [see Eq. (5.114)]. We may now easily calculate the variance of the busy period, denoted by $\sigma_Y^2$, as follows:

$$\sigma_Y^2 = g_2 - g_1^2 = \frac{\overline{x^2}}{(1-\rho)^3} - \frac{(\bar{x})^2}{(1-\rho)^2}$$

and so

$$\sigma_Y^2 = \frac{\sigma_b^2 + \rho(\bar{x})^2}{(1-\rho)^3} \tag{5.143}$$

where $\sigma_b^2$ is the variance of the service-time distribution.
Proceeding as above we find that

$$g_3 = \frac{\overline{x^3}}{(1-\rho)^4} + \frac{3\lambda(\overline{x^2})^2}{(1-\rho)^5}$$

$$g_4 = \frac{\overline{x^4}}{(1-\rho)^5} + \frac{10\lambda\,\overline{x^2}\,\overline{x^3}}{(1-\rho)^6} + \frac{15\lambda^2(\overline{x^2})^3}{(1-\rho)^7}$$

We observe that the factor $(1-\rho)$ goes up in powers of 2 for the dominant term of each succeeding moment of the busy period, and this determines the behavior as $\rho \to 1$.
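These moment formulas are easy to tabulate. The following sketch is mine (not from the text) and simply collects Eqs. (5.141)-(5.143) together with the $g_3$ and $g_4$ expressions; `x1` through `x4` stand for the service moments $\bar{x}, \overline{x^2}, \overline{x^3}, \overline{x^4}$.

```python
# Busy-period moments of M/G/1 from the service-time moments.
def busy_period_moments(lam, x1, x2, x3, x4):
    rho = lam * x1
    g1 = x1 / (1 - rho)                                   # (5.141)
    g2 = x2 / (1 - rho)**3                                # (5.142)
    var = (x2 - x1**2 + rho * x1**2) / (1 - rho)**3       # (5.143)
    g3 = x3 / (1 - rho)**4 + 3 * lam * x2**2 / (1 - rho)**5
    g4 = (x4 / (1 - rho)**5 + 10 * lam * x2 * x3 / (1 - rho)**6
          + 15 * lam**2 * x2**3 / (1 - rho)**7)
    return g1, g2, var, g3, g4

# M/M/1 check with lam = 0.5, mu = 1 (x_k = k!/mu^k): g1 = 2, g2 = 16.
print(busy_period_moments(0.5, 1.0, 2.0, 6.0, 24.0))
```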
We now consider some examples of inverting Eq. (5.137). We begin with the M/M/1 queueing system. We have

$$B^*(s) = \frac{\mu}{s+\mu}$$

which we apply to Eq. (5.137) to obtain

$$G^*(s) = \frac{\mu}{s+\lambda-\lambda G^*(s)+\mu}$$

or

$$\lambda[G^*(s)]^2 - (\mu+\lambda+s)G^*(s) + \mu = 0$$

Solving for $G^*(s)$ and restricting our solution to the required (stable) case, for which $|G^*(s)| \le 1$ for $\mathrm{Re}(s) \ge 0$, gives

$$G^*(s) = \frac{\lambda+\mu+s-\sqrt{(\lambda+\mu+s)^2-4\lambda\mu}}{2\lambda}$$

and so

$$G^*(0) = \begin{cases}1 & \rho < 1\\[2pt] 1/\rho & \rho > 1\end{cases}$$

Thus

$$P[\text{busy period ends in M/M/1}] = \begin{cases}1 & \rho < 1\\[2pt] 1/\rho & \rho > 1\end{cases} \tag{5.147}$$
The busy-period pdf given in Eq. (5.145) is much more complex than we would have wished for this simplest of interesting queueing systems! It is indicative of the fact that Eq. (5.137) is usually uninvertible for more general service-time distributions.
As a second example, let's see how well we can do with our M/H2/1 example. Using the expression for $B^*(s)$ in our functional equation for the busy period we get

$$G^*(s) = \frac{8\lambda^2 + 7\lambda[s+\lambda-\lambda G^*(s)]}{4[s+\lambda-\lambda G^*(s)+\lambda][s+\lambda-\lambda G^*(s)+2\lambda]}$$

which leads directly to the cubic equation

$$4\lambda^2[G^*(s)]^3 - 4\lambda(2s+5\lambda)[G^*(s)]^2 + (4s^2+20\lambda s+31\lambda^2)G^*(s) - (15\lambda^2+7\lambda s) = 0$$

This last is not easily solved, and so we stall at this point in our attempt to invert $G^*(s)$. We will return to the functional equation for the busy period when we discuss priority queueing in Chapter 3, Volume II. This will lead us to the concept of a delay cycle, which is a slight generalization of the busy-period analysis we have just carried out and which greatly simplifies priority queueing calculations.
Let us now consider the number of customers $N_{bp}$ served in a busy period, with distribution

$$f_n = P[N_{bp} = n] \tag{5.148}$$

The best we can do is to obtain a functional equation for its z-transform, defined as

$$F(z) \triangleq \sum_{n=1}^{\infty}f_nz^n \tag{5.149}$$

The term for n = 0 is omitted from this definition since at least one customer must be served in a busy period. We recall that the random variable $\tilde{v}$ represents the number of arrivals during a service period and that its z-transform V(z) obeys the equation derived earlier, namely,

$$V(z) = B^*(\lambda - \lambda z)$$
But each of the $N_i$ is distributed exactly the same as $N_{bp}$ and, therefore,

$$E[z^{N_{bp}} \mid \tilde{v} = k] = z[F(z)]^k$$

Removing the condition on the number of arrivals we have

$$F(z) = z\sum_{k=0}^{\infty}P[\tilde{v} = k][F(z)]^k$$

From Eq. (5.44) we recognize this last summation as V(z) (the z-transform associated with $\tilde{v}$) with transform variable F(z); thus we have

$$F(z) = zB^*(\lambda - \lambda F(z)) \tag{5.152}$$
Denoting by $h_k$ the kth moment of the number served in a busy period, we find the first moment by differentiating Eq. (5.152) at z = 1:

$$h_1 = F^{(1)}(1) = B^{*(1)}(0)[-\lambda F^{(1)}(1)] + B^*(0)$$

Thus

$$h_1 = \frac{1}{1-\rho}$$

Proceeding to the second derivative, we find

$$h_2 = \frac{2\rho(1-\rho)+\lambda^2\overline{x^2}}{(1-\rho)^3} + \frac{1}{1-\rho} \tag{5.154}$$

and, for the variance of the number served in a busy period,

$$\sigma_{N_{bp}}^2 = \frac{\rho(1-\rho)+\lambda^2\overline{x^2}}{(1-\rho)^3} \tag{5.155}$$
As an example we again use the simple case of the M/M/1 system to solve for F(z) from Eq. (5.152). Carrying this out we find

$$F(z) = \frac{z\mu}{\mu+\lambda-\lambda F(z)}$$

$$\lambda F^2(z) - (\mu+\lambda)F(z) + \mu z = 0$$

Solving (and again choosing the root that yields a proper probability distribution),

$$F(z) = \frac{\lambda+\mu-\sqrt{(\lambda+\mu)^2-4\lambda\mu z}}{2\lambda} \tag{5.157}$$
As a second example we consider the system M/D/1. For this system we have $b(x) = u_0(x-\bar{x})$, and from entry 3 in Table I.4 we have immediately that

$$B^*(s) = e^{-s\bar{x}}$$

Using this in our functional equation we obtain

$$F(z) = ze^{-\rho}e^{\rho F(z)} \tag{5.158}$$

where as usual $\rho = \lambda\bar{x}$. It is convenient to make the substitution $u = z\rho e^{-\rho}$ and $H(u) = \rho F(z)$, which then permits us to rewrite Eq. (5.158) as

$$u = H(u)e^{-H(u)}$$

The solution to this equation may be obtained [RIOR 62], and then our original function may be evaluated to give

$$F(z) = \sum_{n=1}^{\infty}\frac{(n\rho)^{n-1}}{n!}e^{-n\rho}z^n$$
From this power series we recognize immediately that the distribution for the number served in the M/D/1 busy period is given explicitly by

$$f_n = \frac{(n\rho)^{n-1}}{n!}e^{-n\rho} \tag{5.159}$$
For the case of a constant service time we know that if the busy period serves n customers then it must be of duration $n\bar{x}$, and therefore we may immediately write down the solution for the M/D/1 busy-period distribution as

$$G(y) = \sum_{n=1}^{\lfloor y/\bar{x}\rfloor}\frac{(n\rho)^{n-1}}{n!}e^{-n\rho} \tag{5.160}$$
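Both of these expressions are directly computable. The sketch below is my own illustration, not from the text, with illustrative parameter values.

```python
# The distribution (5.159) of the number served in an M/D/1 busy period
# and the busy-period distribution (5.160) that follows from it.
from math import exp, factorial, floor

def f(n, rho):
    """P[n customers served in an M/D/1 busy period], Eq. (5.159)."""
    return (n * rho)**(n - 1) / factorial(n) * exp(-n * rho)

def G(y, rho, xbar):
    """P[busy period <= y] for M/D/1, Eq. (5.160)."""
    return sum(f(n, rho) for n in range(1, floor(y / xbar) + 1))

rho, xbar = 0.5, 1.0
print(sum(f(n, rho) for n in range(1, 200)))   # -> 1, since rho < 1
print(G(10.0, rho, xbar))
```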
In the alternative decomposition of the busy period into successive intervals $X_0, X_1, X_2, \ldots$ (where $X_0 = x_1$ is the service time of the customer initiating the busy period, and $X_i$ is the total service time of the customers arriving during $X_{i-1}$), we see that Y, the duration of the total busy period, is given by

$$Y = \sum_{i=0}^{\infty}X_i$$

We define

$$X_i(y) = P[X_i \le y]$$

and the corresponding Laplace transform of the associated pdf to be

$$X_i^*(s) \triangleq \int_0^{\infty}e^{-sy}\,dX_i(y) = E[e^{-sX_i}]$$
We wish to derive a recurrence relation among the $X_i^*(s)$. This derivation is much like that in Section 5.8, which led up to Eq. (5.137). That is, we first condition our transform sufficiently so that we may write it down by inspection; the conditions are on the interval length $X_{i-1}$ and on the number of arrivals $n_{i-1}$ during that interval. That is, we may write

$$E[e^{-sX_i} \mid X_{i-1} = y,\, n_{i-1} = n] = [B^*(s)]^n$$
This last follows from our convolution property leading to the multiplication of transforms in the case when the variables are independent; here we have n independent service times, all with identical distributions. We may uncondition first on n:

$$E[e^{-sX_i} \mid X_{i-1} = y] = \sum_{n=0}^{\infty}\frac{(\lambda y)^n}{n!}e^{-\lambda y}[B^*(s)]^n$$
and next on y:

$$E[e^{-sX_i}] = \int_{y=0}^{\infty}\sum_{n=0}^{\infty}\frac{(\lambda y)^n}{n!}e^{-\lambda y}[B^*(s)]^n\,dX_{i-1}(y)$$

Clearly, the left-hand side is $X_i^*(s)$; evaluating the sum on the right-hand side leads us to

$$X_i^*(s) = \int_{y=0}^{\infty}e^{-[\lambda-\lambda B^*(s)]y}\,dX_{i-1}(y)$$

This integral is recognized as the transform of the pdf for $X_{i-1}$, namely,

$$X_i^*(s) = X_{i-1}^*(\lambda - \lambda B^*(s)) \tag{5.161}$$
Conditioning further on our tagged arrival entering during the ith interval, one finds

$$E[e^{-s\tilde{w}} \mid i] = \int_{y=0}^{\infty}\int_{y'=0}^{y}e^{-[s-\lambda+\lambda B^*(s)]y'}e^{-[\lambda-\lambda B^*(s)]y}\,dy'\,dX_i(y)\Big/E[X_i]$$

$$= \int_{y=0}^{\infty}\frac{e^{-sy} - e^{-[\lambda-\lambda B^*(s)]y}}{[-s+\lambda-\lambda B^*(s)]\,E[X_i]}\,dX_i(y)$$

$$E[e^{-s\tilde{w}} \mid i] = \frac{X_i^*(s) - X_i^*(\lambda-\lambda B^*(s))}{[-s+\lambda-\lambda B^*(s)]\,E[X_i]}$$
But now Eq. (5.161) permits us to rewrite the second of these transforms to obtain

$$E[e^{-s\tilde{w}} \mid i] = \frac{X_{i+1}^*(s) - X_i^*(s)}{[s-\lambda+\lambda B^*(s)]\,E[X_i]}$$
Now we may remove the condition on our arrival entering during the ith interval by weighting this last expression by the probability $E[X_i]/E[Y]$ that we have formerly expressed for the occurrence of this event (still conditioned on our arrival entering during a busy period), and so we have

$$E[e^{-s\tilde{w}} \mid \text{enter in busy period}] = \frac{1}{[s-\lambda+\lambda B^*(s)]\,E[Y]}\sum_{i=0}^{\infty}\bigl[X_{i+1}^*(s) - X_i^*(s)\bigr]$$
This last sum nicely collapses to yield $1 - X_0^*(s)$, since $X_i^*(s) = 1$ for those intervals beyond the busy period (recall $X_i = 0$ for $i \ge i_0$); also, since $X_0 = x_1$, a service time, we have $X_0^*(s) = B^*(s)$, and so we arrive at

$$E[e^{-s\tilde{w}} \mid \text{enter in busy period}] = \frac{1 - B^*(s)}{[s-\lambda+\lambda B^*(s)]\,E[Y]}$$
From previous considerations we know that the probability of an arrival entering during a busy period is merely $\rho = \lambda\bar{x}$ (and for sure he must wait for service in such a case); further, we may evaluate the average length of the busy period E[Y] either from our previous calculation in Eq. (5.141) or from elementary considerations to give $E[Y] = \bar{x}/(1-\rho)$. Thus, unconditioning on an arrival finding the system busy, we finally have

$$E[e^{-s\tilde{w}}] = (1-\rho)E[e^{-s\tilde{w}} \mid \text{enter in idle period}] + \rho E[e^{-s\tilde{w}} \mid \text{enter in busy period}]$$

$$= (1-\rho) + \rho\,\frac{[1-B^*(s)](1-\rho)}{[s-\lambda+\lambda B^*(s)]\,\bar{x}}$$

$$= \frac{s(1-\rho)}{s-\lambda+\lambda B^*(s)} \tag{5.163}$$
Voilà! This is exactly the P-K transform equation for waiting time, namely, $W^*(s) \triangleq E[e^{-s\tilde{w}}]$ given in Eq. (5.105). Thus we have shown how to go from a busy-period analysis to the calculation of waiting time in the system. This method is reported upon in [CONW 67], and we will have occasion to return to it in Chapter 3, Volume II.
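The final algebraic collapse above can be checked symbolically. The sketch below is mine, not from the text; it assumes the `sympy` library, with `B` standing for $B^*(s)$ and `xbar` for $\bar{x}$.

```python
# Symbolic check that unconditioning yields the P-K form (5.163).
import sympy as sp

s, lam, xbar, B = sp.symbols('s lam xbar B', positive=True)
rho = lam * xbar
expr = (1 - rho) + rho * (1 - B) * (1 - rho) / ((s - lam + lam * B) * xbar)
pk = s * (1 - rho) / (s - lam + lam * B)
print(sp.simplify(expr - pk))   # prints 0
```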
The kth ladder index is defined as the instant when the random walk R(t) rises from its kth new minimum (and the value of this minimum is referred to as the ladder height). In Figure 5.13 the first three ladder indices are indicated by heavy dots. Fluctuation theory concerns itself with the distribution of such ladder indices and is amply discussed both in Feller [FELL 66] and in Prabhu [PRAB 65], in which the applications of that theory to queueing processes are considered. Here we merely make the observation that each ladder index identifies the arrival instant of a customer who begins a new busy period, and it is this observation that makes them interesting for queueing theory. Moreover, whenever R(t) drops below its previous ladder height, a busy period terminates, as shown in Figure 5.13. Thus, between the occurrence of a ladder index and the first time R(t) drops below the corresponding ladder height, a busy period ensues, and both R(t) and U(t) have exactly the same shape, where the former is shifted down from the latter by an amount exactly equal to the accumulated idle time since the end of the first busy period. One sees that we are quickly led into methods from combinatorial theory when we deal with such indices.
In a similar vein, Takacs has successfully applied combinatorial theory to the study of the busy period. He considers this subject in depth in his book [TAKA 67] on combinatorial methods as applied to queueing theory and develops, as his cornerstone, a generalization of the classical ballot theorem. The classical ballot theorem concerns itself with the counting of votes in a two-way contest involving candidate A and candidate B. If we assume that A scores a votes and B scores b votes, that $a \ge mb$ where m is a nonnegative integer, and if we let P be the probability that throughout the
counting of votes A continually leads B by a factor greater than m, and further, if all possible sequences of voting records are equally likely, then the classical ballot theorem states that

$$P = \frac{a-mb}{a+b} \tag{5.164}$$
This theorem originated in 1887 (see [TAKA 67] for its history). Takacs generalized this theorem and phrased it in terms of cards drawn from an urn in the following way. Consider an urn with n cards, where the cards are marked with the nonnegative integers $k_1, k_2, \ldots, k_n$ and where

$$\sum_{i=1}^{n}k_i = k \le n$$

(that is, the ith card in the set is marked with the integer $k_i$). Assume that all n cards are drawn without replacement from the urn. Let $\nu_r$ (r = 1, ..., n) be the number on the card drawn at the rth drawing. Let

$$N_r = \nu_1 + \nu_2 + \cdots + \nu_r \qquad r = 1, 2, \ldots, n$$

$N_r$ is thus the sum of the numbers on all cards drawn up through the rth draw. Takacs' generalization of the classical ballot theorem states that

$$P[N_r < r \text{ for all } r = 1, 2, \ldots, n] = \frac{n-k}{n} \tag{5.165}$$
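The theorem is easily tested by simulation. The sketch below is my own illustration, not from the text; the particular card values are an arbitrary example.

```python
# Monte Carlo check of Takacs' urn form of the ballot theorem, Eq. (5.165).
import random

def trial(cards):
    """Draw all cards in random order; True if N_r < r for every r."""
    random.shuffle(cards)
    total = 0
    for r, v in enumerate(cards, start=1):
        total += v
        if total >= r:
            return False
    return True

cards = [3, 0, 0, 2, 0, 1, 0, 0, 0, 0]        # n = 10, k = 6
n, k, runs = len(cards), sum(cards), 100_000
hits = sum(trial(cards[:]) for _ in range(runs))
print(hits / runs, (n - k) / n)               # both near 0.4
```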
The proof of this theorem is not especially difficult but will not be reproduced here. Note the simplicity of the theorem and, in particular, that the probability expressed is independent of the particular set of integers $k_i$ and depends only upon their sum k. We may identify $\nu_r$ as the number of customer arrivals during the service of the rth customer in a busy period of an M/G/1 queueing system. Thus $N_r + 1$ is the cumulative number of arrivals up to the conclusion of the rth customer's service during a busy period. We are thus involved in a race between $N_r + 1$ and r: As soon as r equals $N_r + 1$, the busy period must terminate since, at this point, we have served exactly as many as have arrived (including the customer who initiated the busy period), and so the system empties. If we now let $N_{bp}$ be the number of customers served in a busy period, it is possible to apply Eq. (5.165) and obtain the following result [TAKA 67]:

$$f_n = P[N_{bp} = n] = \frac{1}{n}\int_0^{\infty}e^{-\lambda x}\frac{(\lambda x)^{n-1}}{(n-1)!}\,dB_{(n)}(x) \tag{5.168}$$

where $B_{(n)}(x)$ denotes the n-fold convolution of the service-time distribution with itself, and so,

$$G(y) = \int_0^{y}\sum_{n=1}^{\infty}e^{-\lambda x}\frac{(\lambda x)^{n-1}}{n!}\,b_{(n)}(x)\,dx \tag{5.169}$$

where $b_{(n)}(x) = dB_{(n)}(x)/dx$.
Thus Eq. (5.169) is an explicit expression, in terms of known quantities, for the distribution of the busy period, and in fact it may be used in place of the expression given in Eq. (5.137), the Laplace transform of dG(y)/dy. This is the expression we had promised earlier, although we have expressed it as an infinite summation; nevertheless, it does provide the ability to approximate the busy-period distribution numerically in any given situation. Similarly, Eq. (5.168) gives an explicit expression for the number served in the busy period.
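As an illustration of this numerical use of Eq. (5.169), the sketch below (mine, not from the text) treats M/M/1, where $b_{(n)}$ is the Erlang density, truncates the sum, and integrates with a simple trapezoidal rule; the truncation limit and step count are arbitrary choices.

```python
# Numerical approximation of the M/M/1 busy-period distribution via (5.169).
from math import exp, factorial

def integrand(x, lam, mu, nmax):
    # b_(n)(x) for exponential service is Erlang: mu (mu x)^(n-1) e^{-mu x}/(n-1)!
    return sum(exp(-lam * x) * (lam * x)**(n - 1) / factorial(n)
               * mu * (mu * x)**(n - 1) * exp(-mu * x) / factorial(n - 1)
               for n in range(1, nmax + 1))

def G(y, lam, mu, nmax=60, steps=2000):
    h = y / steps
    vals = [integrand(i * h, lam, mu, nmax) for i in range(steps + 1)]
    return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

print(G(20.0, 0.5, 1.0))   # approaches 1 as y grows, since rho = 0.5 < 1
```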
The reader may have observed that our study of the busy period has really been the study of a transient phenomenon, and this is one of the reasons that the development bogged down. In the next section we consider certain aspects of the transient solution for M/G/1 a bit further.
Let $F(w,t) = P[U(t) \le w]$ denote the distribution of the unfinished work at time t. Takacs' integrodifferential equation for this function is

$$\frac{\partial F(w,t)}{\partial t} = \frac{\partial F(w,t)}{\partial w} - \lambda F(w,t) + \lambda\int_{x=0}^{w}B(w-x)\,d_xF(x,t) \tag{5.172}$$
Takacs [TAKA 55] derived this equation for the more general case of a nonhomogeneous Poisson process, namely, where the arrival rate λ(t) depends upon t. He showed that this equation is good for almost all w ≥ 0 and t ≥ 0; it does *not* hold at those w and t for which ∂F(w,t)/∂w has an accumulation of probability (namely, an impulse). This occurs, in particular, at w = 0 and would give rise to the term $F(0,t)u_0(w)$ in ∂F(w,t)/∂w, whereas no other term in the equation contains such an impulse.
We may gain more information from the Takacs integrodifferential equation if we transform it on the variable w (and not on t); thus, using the transform variable r, we define

$$W^{*\cdot}(r,t) \triangleq \int_0^{\infty}e^{-rw}\,d_wF(w,t) \tag{5.173}$$

We use the notation (*·) to denote transformation on the first, but not the second, argument. The symbol W is chosen since, as we shall see, $\lim_{t\to\infty}W^{*\cdot}(r,t) = W^*(r)$, which is our former transform for the waiting-time pdf [see, for example, Eq. (5.103)].
Let us examine the transform of each term in Eq. (5.172) separately. First we note that since $F(w,t) = \int_{0^-}^{w}d_xF(x,t)$, then from entry 13 in Table I.3 of Appendix I (and its footnote) we must have

$$\int_{0^-}^{\infty}F(w,t)e^{-rw}\,dw = \frac{W^{*\cdot}(r,t)}{r} + F(0^-,t)$$

However, since the unfinished work and the service time are both nonnegative random variables, it must be that $F(0^-,t) = B(0^-) = 0$ always. We recognize that the last term in the Takacs integrodifferential equation is a convolution between B(w) and ∂F(w,t)/∂w, and therefore the transform of this convolution (including the constant multiplier λ) must be (by properties 10 and 13 in that same table) $\lambda W^{*\cdot}(r,t)[B^*(r) - B(0^-)]/r = \lambda W^{*\cdot}(r,t)B^*(r)/r$.
Now it is clear that the transform for the term ∂F(w,t)/∂w will be $W^{*\cdot}(r,t)$; but this transform includes $F(0^+,t)$, the transform of the impulse located at the origin for this partial derivative, and since we know that the Takacs integrodifferential equation does not contain that impulse, it must be subtracted out. Thus we have from Eq. (5.172)

$$\frac{\partial W^{*\cdot}(r,t)}{\partial t} = [r-\lambda+\lambda B^*(r)]W^{*\cdot}(r,t) - rF(0^+,t) \tag{5.174}$$

which may be rewritten as

$$\frac{\partial W^{*\cdot}(r,t)}{\partial t} - [r-\lambda+\lambda B^*(r)]W^{*\cdot}(r,t) = -rF(0^+,t) \tag{5.175}$$

Takacs gives the solution to this equation {p. 51, Eq. (8) in [TAKA 62b]}.
We may now transform on our second variable t by first defining the double transforms

$$F^{**}(r,s) \triangleq \int_0^{\infty}e^{-st}W^{*\cdot}(r,t)\,dt \tag{5.176}$$

$$F_0^*(s) \triangleq \int_0^{\infty}e^{-st}F(0^+,t)\,dt \tag{5.177}$$

We may now transform Eq. (5.175), using the transform property given as entry 11 in Table I.3 (and its footnote), to obtain

$$F^{**}(r,s) = \frac{(r/\eta)e^{-\eta w_0} - e^{-rw_0}}{\lambda B^*(r) - \lambda + r - s} \tag{5.180}$$

in which $w_0 = U(0)$ is the initial backlog and η = η(s) is the root used in determining the boundary function $F_0^*(s)$.
We will return to this equation later, in Chapter 2, Volume II, when we discuss the diffusion approximation. For now it behooves us to investigate the steady-state values of these functions; in particular, it can be shown that F(w,t) has a limit as t → ∞ so long as ρ < 1, and this limit will be independent of the initial condition
F(w, 0); we denote this limit by $F(w) = \lim_{t\to\infty}F(w,t)$, and from Eq. (5.172) we find that it must satisfy the following equation:

$$\frac{dF(w)}{dw} = \lambda F(w) - \lambda\int_{x=0}^{w}B(w-x)\,dF(x) \tag{5.181}$$
Furthermore, for ρ < 1, $W^*(r) \triangleq \lim_{t\to\infty}W^{*\cdot}(r,t)$ will exist and be independent of the initial distribution. Taking the transform of Eq. (5.181) we find, as we did in deriving Eq. (5.174),

$$0 = [r-\lambda+\lambda B^*(r)]W^*(r) - rF(0^+)$$

where $F(0^+) = \lim_{t\to\infty}F(0^+,t)$ and equals the probability that the unfinished work is zero. This last may be rewritten to give

$$W^*(r) = \frac{rF(0^+)}{r-\lambda+\lambda B^*(r)}$$

However, we require $W^*(0) = 1$, which requires that the unknown constant $F(0^+)$ have the value $F(0^+) = 1-\rho$. Finally we have

$$W^*(r) = \frac{r(1-\rho)}{r-\lambda+\lambda B^*(r)}$$

and we have recovered the P-K transform equation for waiting time once again.
REFERENCES

BENE 56 Beneš, V. E., "On Queues with Poisson Arrivals," Annals of Mathematical Statistics, 28, 670-677 (1956).
CONW 67 Conway, R. W., W. L. Maxwell, and L. W. Miller, Theory of Scheduling, Addison-Wesley (Reading, Mass.), 1967.
COX 55 Cox, D. R., "The Analysis of Non-Markovian Stochastic Processes by the Inclusion of Supplementary Variables," Proc. Camb. Phil. Soc. (Math. and Phys. Sci.), 51, 433-441 (1955).
COX 62 Cox, D. R., Renewal Theory, Methuen (London), 1962.
FELL 66 Feller, W., Probability Theory and Its Applications, Vol. II, Wiley (New York), 1966.
GAVE 59 Gaver, D. P., Jr., "Imbedded Markov Chain Analysis of a Waiting-Line Process in Continuous Time," Annals of Mathematical Statistics, 30, 698-720 (1959).
HEND 72 Henderson, W., "Alternative Approaches to the Analysis of the M/G/1 and G/M/1 Queues," Operations Research, 15, 92-101 (1972).
KEIL 65 Keilson, J., "The Role of Green's Functions in Congestion Theory," Proc. Symposium on Congestion Theory, Univ. of North Carolina Press, 43-71 (1965).
KEND 51 Kendall, D. G., "Some Problems in the Theory of Queues," Journal of the Royal Statistical Society, Ser. B, 13, 151-185 (1951).
KEND 53 Kendall, D. G., "Stochastic Processes Occurring in the Theory of Queues and their Analysis by the Method of the Imbedded Markov Chain," Annals of Mathematical Statistics, 24, 338-354 (1953).
KHIN 32 Khinchin, A. Y., "Mathematical Theory of Stationary Queues," Mat. Sbornik, 39, 73-84 (1932).
LIND 52 Lindley, D. V., "The Theory of Queues with a Single Server," Proc. Cambridge Philosophical Society, 48, 277-289 (1952).
PALM 43 Palm, C., "Intensitätsschwankungen im Fernsprechverkehr," Ericsson Technics, 6, 1-189 (1943).
POLL 30 Pollaczek, F., "Über eine Aufgabe der Wahrscheinlichkeitstheorie," I-II, Math. Zeitschrift, 32, 64-100, 729-750 (1930).
PRAB 65 Prabhu, N. U., Queues and Inventories, Wiley (New York), 1965.
RIOR 62 Riordan, J., Stochastic Service Systems, Wiley (New York), 1962.
SMIT 58 Smith, W. L., "Renewal Theory and its Ramifications," Journal of the Royal Statistical Society, Ser. B, 20, 243-302 (1958).
TAKA 55 Takacs, L., "Investigation of Waiting Time Problems by Reduction to Markov Processes," Acta Math. Acad. Sci. Hung., 6, 101-129 (1955).
TAKA 62a Takacs, L., Introduction to the Theory of Queues, Oxford University Press (New York), 1962.
TAKA 62b Takacs, L., "A Single-Server Queue with Poisson Input," Operations Research, 10, 388-397 (1962).
TAKA 67 Takacs, L., Combinatorial Methods in the Theory of Stochastic Processes, Wiley (New York), 1967.
EXERCISES
5.1. Prove Eq. (5.14) from Eq. (5.11).

5.2. Here we derive the residual lifetime density $\hat{f}(x)$ discussed in Section 5.2. We use the notation of Figure 5.1.
(a) Observing that the event {Y ≤ y} can occur if and only if $t < \tau_k \le t+y < \tau_{k+1}$ for some k, show that
(b) Observing that $\tau_k \le x$ if and only if α(x), the number of "arrivals" in (0, x), is at least k, that is, $P[\tau_k \le x] = P[\alpha(x) \ge k]$, show that

$$\sum_{k=1}^{\infty}P[\tau_k \le x] = \sum_{k=1}^{\infty}kP[\alpha(x) = k]$$
where

$$r(x_0) = \frac{b(x_0)}{1-B(x_0)}$$

(b) Let $p_k = \lim_{t\to\infty}P_k(t)$ and $p_k(x_0) = \lim_{t\to\infty}P_k(t,x_0)$. From (a) we have the equilibrium results

$$\text{(i)} \quad \frac{\partial p_k(x_0)}{\partial x_0} = -[\lambda + r(x_0)]p_k(x_0) + \lambda p_{k-1}(x_0) \qquad k \ge 1$$

$$\frac{\partial R(z,x_0)}{\partial x_0} = [\lambda z - \lambda - r(x_0)]R(z,x_0)$$
and

$$P_k(t) = \sum_{n\ge k}e^{-\lambda t}\frac{(\lambda t)^n}{n!}\binom{n}{k}\left[\frac{1}{t}\int_0^t[1-B(x)]\,dx\right]^k\left[\frac{1}{t}\int_0^tB(x)\,dx\right]^{n-k}$$

[HINT: $(1/t)\int_0^tB(x)\,dx$ is the probability that a customer's service terminates by time t, given that his arrival time was uniformly distributed over the interval (0, t). See Eq. (2.137) also.]
(b) Show that $p_k \triangleq \lim_{t\to\infty}P_k(t)$ is

$$p_k = \frac{(\lambda\bar{x})^k}{k!}e^{-\lambda\bar{x}}$$
5.9. Consider M/E_r/1.
(a) Find the polynomial for $G^*(s)$.
(b) Solve for S(y) = P[time in system ≤ y].

5.10. Consider an M/D/1 system for which $\bar{x}$ = 2 sec.
(a) Show that the residual service-time pdf $\hat{b}(x)$ is a rectangular distribution.
(b) For ρ = 0.25, show that the result of Eq. (5.111) with four terms may be used as a good approximation to the distribution of queueing time.
5.11. Consider an M/G/1 queue in which bulk arrivals occur at rate λ and with probability $g_r$ that r customers arrive together at an arrival instant.
(a) Show that the z-transform of the number of customers arriving in an interval of length t is $e^{-\lambda t[1-G(z)]}$, where $G(z) = \sum_r g_rz^r$.
(b) Show that the z-transform of the random variable $\tilde{v}$, the number of arrivals during the service of a customer, is $B^*[\lambda - \lambda G(z)]$.

5.12. Consider the M/G/1 bulk-arrival system in the previous problem. Using the method of imbedded Markov chains:
(a) Find the expected queue size. [HINT: Show that $\rho = \lambda\bar{g}\bar{x}$ and

$$Q(z) = \frac{(1-\rho)(1-z)B^*[\lambda-\lambda G(z)]}{B^*[\lambda-\lambda G(z)] - z}$$ ]
(b) Using Little's result, find the ratio $W/\bar{x}$ of the expected wait in queue to the average service time.
(c) Using the same method (imbedded Markov chain), find the expected number of groups in the queue (averaged over departure times). [HINTS: Show that $D(z) = \beta^*(\lambda - \lambda z)$, where D(z) is the generating function for the number of groups arriving during the service time for an entire group and where $\beta^*(s)$ is the Laplace transform of the service-time density for an entire group. Also note that $\beta^*(s) = G[B^*(s)]$, which allows us to show that $\overline{r^2} = (\bar{x})^2(\overline{g^2}-\bar{g}) + \overline{x^2}\,\bar{g}$, where $\overline{r^2}$ is the second moment of the group service time.]
(d) Using Little's result, find $W_g$, the expected wait in queue for a group (measured from the arrival time of the group until the start of service of the first member of the group), and show that

$$\frac{W_g}{\bar{x}} = \frac{\rho}{2(1-\rho)}\left[1 + C_b^2 + \frac{\overline{g^2}-\bar{g}}{\bar{g}}\right]$$
(e) If the customers within a group arriving together are served in random order, show that the ratio of the mean waiting time for a single customer to the average service time for a single customer is $W_g/\bar{x}$ from (d) increased by $\frac{1}{2}\bar{g}(1+C_g^2) - \frac{1}{2}$.
5.13. Consider an M/G/1 system in which service is instantaneous but is only available at "service instants," the intervals between successive service instants being independently distributed with PDF F(x). The maximum number of customers that can be served at any service instant is m. Note that this is a bulk-service system.
(a) Show that if $q_n$ is the number of customers in the system just before the nth service instant, then

$$q_{n+1} = \begin{cases}q_n + v_n - m & q_n \ge m\\ v_n & q_n < m\end{cases}$$

where $v_n$ is the number of arrivals in the interval between the nth and (n+1)th service instants.
(b) Prove that the probability generating function of $v_n$ is $F^*(\lambda - \lambda z)$, where $F^*(s)$ is the Laplace transform of dF(x). Hence show that Q(z) is

$$Q(z) = \frac{\sum_{k=0}^{m-1}p_k(z^m - z^k)}{z^m[F^*(\lambda-\lambda z)]^{-1} - 1}$$

$$Q(z) = \frac{m(z-1)}{z^m - z}$$
5.17. Consider an M/G/1 queue. Let E be the event that T sec have elapsed since the arrival of the last customer. We begin at a random time and
measure the time w until event E next occurs. This measurement may involve the observation of many customer arrivals before E occurs.
(a) Let A(t) be the interarrival-time distribution for those intervals during which E does not occur. Find A(t).
(b) Find $A^*(s) = \int_0^\infty e^{-st}\,dA(t)$.
(c) Find $W^*(s \mid n) = \int_0^\infty e^{-sw}\,dW(w \mid n)$, where $W(w \mid n)$ = P[time to event E ≤ w | n arrivals occur before E].
(d) Find $W^*(s) = \int_0^\infty e^{-sw}\,dW(w)$, where W(w) = P[time to event E ≤ w].
(e) Find the mean time to event E.
5.18. Consider an M/G/1 system in which time is divided into intervals of length q sec each. Assume that arrivals are Bernoulli, that is,

P[1 arrival in any interval] = λq
P[0 arrivals in any interval] = 1 - λq
P[more than 1 arrival in any interval] = 0

Assume that a customer's service time $\tilde{x}$ is some multiple of q sec such that

P[service time = nq sec] = $g_n$, n = 0, 1, 2, ...

(a) Find E[number of arrivals in an interval].
(b) Find the average arrival rate.
(c) Express $E[\tilde{x}] \triangleq \bar{x}$ and $E[\tilde{x}(\tilde{x}-q)] \triangleq \overline{x^2} - \bar{x}q$ in terms of the moments of the $g_n$ distribution (i.e., let $g_k \triangleq \sum_{n=0}^{\infty}n^kg_n$).
(d) Find $\gamma_{mn}$ = P[m customers arrive in nq sec].
(e) Let $v_m$ = P[m customers arrive during the service of a customer] and let
$$V(z) \triangleq \sum_{m=0}^{\infty}v_mz^m$$

$$\bar{x}_T = \bar{x} + \cdots$$

(d) Let

$$Q_T(z) = \sum_{k=0}^{\infty}p_k^Tz^k$$
again. Let $F(z) = \sum_{j=1}^{\infty}f_jz^j$ be the z-transform for the number of customers awaiting service when the server returns from vacation to find at least one customer waiting (that is, $f_j$ is the probability that at the initiation of a busy period the server finds j customers awaiting service).
(a) Derive an expression which gives $q_{n+1}$ in terms of $q_n$, $v_{n+1}$, and j (the number of customer arrivals during the server's vacation).
(b) Derive an expression for Q(z), where $Q(z) = \lim_{n\to\infty}E[z^{q_n}]$, in terms of $p_0$ (equal to the probability that a departing customer leaves 0 customers behind). [HINT: Condition on j.]
(c) Show that $p_0 = (1-\rho)/F^{(1)}(1)$, where $F^{(1)}(1) = \partial F(z)/\partial z|_{z=1}$ and $\rho = \lambda\bar{x}$.
(d) Assume now that the service vacation will end whenever a new customer enters the empty system. For this case find F(z) and show that when we substitute it back into our answer for (b) we arrive at the classical M/G/1 solution.
5.24. We recognize that an arriving customer who finds k others in the system is delayed by the remaining service time for the customer in service plus the sum of (k-1) complete service times.
(a) Using the notation and approach of Exercise 5.7, show that we may express the transform of the waiting-time pdf as

$$W^*(s) = p_0 + \int_0^{\infty}\sum_{k=1}^{\infty}p_k(x_0)[B^*(s)]^{k-1}\cdots$$
We have so far studied systems of the type M/M/1 and its variants (elementary queueing theory) and M/G/1 (intermediate queueing theory). The next natural system to study is G/M/1, in which we have an arbitrary interarrival-time distribution A(t) and an exponentially distributed service time. It turns out that the m-server system G/M/m is almost as easy to study as the single-server system G/M/1, and so we proceed directly to the m-server case. This study falls within intermediate queueing theory along with M/G/1, and it too may be solved using the method of the imbedded Markov chain, as elegantly presented by Kendall [KEND 51].
[Figure 6.2: State-transition-probability diagram for the G/M/m imbedded Markov chain.]
$$\rho = \frac{\lambda}{m\mu} \tag{6.4}$$

Once again this is the average rate at which work enters the system ($\lambda\bar{x} = \lambda/\mu$ sec of work per elapsed second) divided by the maximum rate at which the system can do work (m sec of work per elapsed second). Thus our condition for ergodicity is simply ρ < 1. In the ergodic case we are assured that an equilibrium probability distribution will exist describing the number of customers present at the arrival instants; thus we define

$$r_k = \lim_{n\to\infty}P[q_n = k] \tag{6.5}$$
In the region where all i+1 customers can be accommodated in service, we see that i+1-j customers will complete their service. Since service times are exponentially distributed, the probability that any given customer will depart within t sec after the arrival of $C_n$ is given by $1-e^{-\mu t}$; similarly, the probability that a given customer will not depart by this time is $e^{-\mu t}$. Therefore, in this region we have

$$P[i+1-j \text{ departures within } t \text{ sec after } C_n \text{ arrives} \mid q_n = i] = \binom{i+1}{i+1-j}(1-e^{-\mu t})^{i+1-j}(e^{-\mu t})^{j}$$

where

$$\binom{i+1}{i+1-j} = \binom{i+1}{j}$$

merely counts the number of ways in which we can choose the i+1-j customers to depart out of the i+1 that are available in the system. With $t_{n+1}$ as the interarrival time between $C_n$ and $C_{n+1}$, Eq. (6.8) gives $P[q_{n+1} = j \mid q_n = i, t_{n+1} = t]$; removing the condition on $t_{n+1}$ yields the one-step transition probability in that range. When all m servers remain busy throughout the interarrival time, departures occur as a Poisson process at rate mμ, and we have

$$p_{ij} = \int_{t=0}^{\infty}\frac{(m\mu t)^{i+1-j}}{(i+1-j)!}e^{-m\mu t}\,dA(t) \qquad m \le j \le i+1 \tag{6.10}$$
Note that in Eq. (6.10) the indices i and j appear only as the difference i+1-j, and so it behooves us to define a new quantity with a single index:

$$\beta_{i+1-j} \triangleq p_{ij} \qquad m \le j \le i+1,\; m \le i \tag{6.11}$$

where $\beta_n$ is the probability of serving n customers during an interarrival time, given that all m servers remain busy during this interval; thus, with n = i+1-j, we have

$$\beta_n = p_{i,i+1-n} = \int_{t=0}^{\infty}\frac{(m\mu t)^n}{n!}e^{-m\mu t}\,dA(t) \qquad 0 \le n \le i+1-m,\; m \le i \tag{6.12}$$
The last case we must consider (region 4) is j < m < i+1, which describes the situation where $C_n$ arrives to find m customers in service and i-m waiting in queue (which he joins); upon the arrival of $C_{n+1}$ there are exactly j customers, all of whom are in service. If we assume that it requires y sec until the queue empties, then one may calculate $p_{ij}$ in a straightforward manner to yield (see Exercise 6.1)

$$p_{ij} = \int_{t=0}^{\infty}\int_{y=0}^{t}\binom{m}{j}e^{-j\mu(t-y)}\bigl[1-e^{-\mu(t-y)}\bigr]^{m-j}\,\frac{(m\mu y)^{i-m}}{(i-m)!}\,m\mu e^{-m\mu y}\,dy\,dA(t) \qquad j < m < i+1 \tag{6.13}$$
Thus Eqs. (6.3), (6.9), (6.12), and (6.13) give the complete description of the one-step transition probabilities for the G/M/m system. Having established the form for our one-step transition probabilities, we may place them in the transition matrix

$$P = \begin{bmatrix}
p_{00} & p_{01} & 0 & 0 & 0 & 0 & \cdots\\
p_{10} & p_{11} & p_{12} & 0 & 0 & 0 & \cdots\\
p_{20} & p_{21} & p_{22} & p_{23} & 0 & 0 & \cdots\\
\vdots & & & & & & \\
p_{m-2,0} & p_{m-2,1} & \cdots & p_{m-2,m-1} & 0 & 0 & \cdots\\
p_{m-1,0} & p_{m-1,1} & \cdots & p_{m-1,m-1} & \beta_0 & 0 & \cdots\\
p_{m,0} & p_{m,1} & \cdots & p_{m,m-1} & \beta_1 & \beta_0 & \cdots\\
\vdots & & & & & &
\end{bmatrix}$$

In this matrix all terms above the upper diagonal are zero, and the terms $\beta_n$ are given through Eq. (6.12). The "boundary" terms, denoted in this matrix by their generic symbol $p_{ij}$, are given either by Eq. (6.9) or (6.13) according to the range of the subscripts i and j. Of most importance to us are the transition probabilities $\beta_n$.
We have that the probability of reaching state $E_{k+1}$ no times between returns to state $E_k$ is equal to $1-\beta_0$ (that is, given we are in state $E_k$, the only way we can reach state $E_{k+1}$ before our next visit to state $E_k$ is for no customers to be served, which has probability $\beta_0$, and so the probability of not getting to $E_{k+1}$ first is $1-\beta_0$, the probability of serving at least one). Furthermore, let

γ = P[leave state $E_{k+1}$ and return to it some time later without passing through states $E_j$, where j ≤ k]
  = P[leave state $E_{k+1}$ and return to it later without passing through state $E_k$]

This last is true since a visit to state $E_j$ for j ≤ k must result in a visit to state $E_k$ before next returning to state $E_{k+1}$ (we move up only one state at a time). We note that γ is independent of k so long as k ≥ m-1 (i.e., all m servers are busy). We have the simple calculation

$$P[n \text{ occurrences of state } E_{k+1} \text{ between two successive visits to state } E_k] = \gamma^{n-1}(1-\gamma)\beta_0$$

This last equation is calculated as the probability ($\beta_0$) of reaching state $E_{k+1}$ at all, times the probability ($\gamma^{n-1}$) of returning to $E_{k+1}$ a total of n-1 times without first touching state $E_k$, times the probability ($1-\gamma$) of then visiting state $E_k$ without first returning to state $E_{k+1}$. From this we may calculate

$$u_k = \sum_{n=1}^{\infty}n\gamma^{n-1}(1-\gamma)\beta_0 = \frac{\beta_0}{1-\gamma}$$
Now let $N_k(t)$ denote the number of arrivals in (0, t) that find ourselves in state $E_k$; thus we may write

$$\sigma = \lim_{t\to\infty}\frac{N_{k+1}(t)}{N_k(t)} = \frac{\beta_0}{1-\gamma} \qquad k \ge m-1 \tag{6.17}$$

However, the limit is merely the ratio of the steady-state probability of finding the system in state $E_{k+1}$ to the probability of finding it in state $E_k$. Consequently, we have established

$$\frac{r_{k+1}}{r_k} = \sigma \qquad k \ge m-1 \tag{6.18}$$

The solution to this last set of equations is clearly

$$r_k = K\sigma^k \qquad k \ge m-1 \tag{6.19}$$

for some constant K. This is a basic result, which says that the distribution of the number of customers found at the arrival instants is geometric for the case k ≥ m-1. It remains for us to find σ and K, as well as $r_k$ for k < m-1.
Our intuitive reasoning (which may easily be made rigorous by results from renewal theory) has led us to the basic equation (6.19). We could have "pulled this out of a hat" by guessing that the solution to Eq. (6.6) for the probability vector $\mathbf{r} = [r_0, r_1, r_2, \ldots]$ might perhaps be of the form

$$r_k = K\sigma^k \tag{6.20}$$

This flash of brilliance would, of course, have been correct (as our calculations have just shown); once we suspect this result we may easily verify it by considering the kth equation (k ≥ m) in the set (6.6), which reads

$$r_k = K\sigma^k = \sum_{i=0}^{\infty}r_ip_{ik} = \sum_{i=k-1}^{\infty}r_ip_{ik} = \sum_{i=k-1}^{\infty}K\sigma^i\beta_{i+1-k}$$
Of course we know $\beta_n$ from Eq. (6.12), which permits the following calculation (after dividing through by $K\sigma^{k-1}$):

$$\sigma = \sum_{n=0}^{\infty}\sigma^n\int_{t=0}^{\infty}\frac{(m\mu t)^n}{n!}e^{-m\mu t}\,dA(t) = \int_0^{\infty}e^{-(m\mu-m\mu\sigma)t}\,dA(t)$$

This equation must be satisfied if our assumed ("calculated") guess is to be correct. However, we recognize this last integral as the Laplace transform for the pdf of interarrival times evaluated at a special point; thus we have

$$\sigma = A^*(m\mu - m\mu\sigma) \tag{6.21}$$

This functional equation for σ must be satisfied if our assumed solution is to be acceptable. It can be shown [TAKA 62] that so long as ρ < 1 there is a unique real solution for σ in the range 0 < σ < 1, and it is this solution which we seek; note that σ = 1 must always be a solution of the functional equation since A*(0) = 1.
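In practice the root in (0, 1) is easily found numerically. The following sketch is my own, not from the text; starting the iteration at 0 keeps it away from the trivial root σ = 1.

```python
# Solve sigma = A*(m*mu - m*mu*sigma), Eq. (6.21), by fixed-point iteration.
def solve_sigma(A_star, m, mu, tol=1e-12):
    sigma = 0.0
    for _ in range(10_000):
        new = A_star(m * mu - m * mu * sigma)
        if abs(new - sigma) < tol:
            return new
        sigma = new
    raise RuntimeError("iteration did not converge")

# M/M/1 check: A*(s) = lam/(s + lam) gives sigma = rho exactly.
lam, mu = 0.5, 1.0
print(solve_sigma(lambda s: lam / (s + lam), m=1, mu=mu))   # -> 0.5
```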
We now have the defining equation for σ, and it remains for us to find the unknown constant K as well as $r_k$ for k = 0, 1, ..., m-2. Before we settle these questions, however, let us establish some additional important results for the G/M/m system using Eq. (6.19), our basic result so far. This basic result establishes that the distribution for number in system is geometric in the range k ≥ m-1. Working from there, let us now calculate the probability that an arriving customer must wait for service. Clearly

$$P[\text{arrival queues}] = \sum_{k=m}^{\infty}r_k = \sum_{k=m}^{\infty}K\sigma^k = \frac{K\sigma^m}{1-\sigma} \tag{6.22}$$

(This operation is permissible since 0 < σ < 1, as discussed above.) The conditional probability of finding a queue length of size n, given that a customer must queue, is

$$P[\text{queue size} = n \mid \text{arrival queues}] = \frac{r_{m+n}}{P[\text{arrival queues}]}$$

and so

$$P[\text{queue size} = n \mid \text{arrival queues}] = \frac{K\sigma^{n+m}}{K\sigma^m/(1-\sigma)} = (1-\sigma)\sigma^n \qquad n \ge 0 \tag{6.23}$$

Thus we conclude that the conditional queue-length distribution (given that a queue exists) is geometric for any G/M/m system.
Defining

$$W^*(s \mid n) = E[e^{-s\tilde{w}} \mid \text{arrival queues and queue size} = n] \tag{6.24}$$

we have

$$W^*(s \mid \text{arrival queues}) = \sum_{n=0}^{\infty}W^*(s \mid n)P[\text{queue size} = n \mid \text{arrival queues}]$$

A queued customer who finds n others in the queue must wait for n+1 service completions, each exponentially distributed with parameter mμ (all m servers being busy), so that $W^*(s \mid n) = [m\mu/(s+m\mu)]^{n+1}$ and

$$W^*(s \mid \text{arrival queues}) = \sum_{n=0}^{\infty}(1-\sigma)\sigma^n\left(\frac{m\mu}{s+m\mu}\right)^{n+1} = (1-\sigma)\frac{m\mu}{s+m\mu-m\mu\sigma}$$

Luckily, we recognize the inverse of this Laplace transform by inspection, thereby yielding the following conditional pdf for queueing time:

$$w(y \mid \text{arrival queues}) = (1-\sigma)m\mu e^{-m\mu(1-\sigma)y} \qquad y \ge 0 \tag{6.26}$$
For the single-server system G/M/1 (m = 1), the geometric form holds for all states:

$$r_k = K\sigma^k \qquad k = 0, 1, 2, \ldots$$

K is now easily evaluated since these probabilities must sum to unity. From this we obtain immediately

$$r_k = (1-\sigma)\sigma^k \qquad k = 0, 1, 2, \ldots \tag{6.27}$$

where σ is the root in (0, 1) of

$$\sigma = A^*(\mu - \mu\sigma) \tag{6.28}$$

The waiting-time distribution now follows from

$$W(y) = 1 - P[\text{arrival queues}]\,P[\tilde{w} > y \mid \text{arrival queues}] - P[\text{no queue}]\,P[\tilde{w} > y \mid \text{no queue}] \tag{6.29}$$

Clearly, the last term in this equation is zero; the remaining conditional probability in this last expression may be obtained by integrating Eq. (6.26) from y to infinity for m = 1; this computation gives $e^{-\mu(1-\sigma)y}$, and since σ is the
probability of queueing, we have immediately from Eq. (6.29) that

$$W(y) = 1 - \sigma e^{-\mu(1-\sigma)y} \qquad y \ge 0 \tag{6.30}$$
We have the remarkable conclusion that the unconditional waiting-time distribution is exponential (with a jump of size 1-σ at the origin) for the system G/M/1. If we compare this result to Eq. (5.123) and Figure 5.9, which give the waiting-time distribution for M/M/1, we see that the results agree with σ replacing ρ. That is, *the queueing-time distribution for G/M/1 is of the same form as for M/M/1!*

By straightforward calculation, we also have that the mean wait in G/M/1 is

$$W = \frac{\sigma}{\mu(1-\sigma)} \tag{6.31}$$
Example

Let us now illustrate this method for the example M/M/1. Since $A(t) = 1-e^{-\lambda t}$ (t ≥ 0), we have $A^*(s) = \lambda/(s+\lambda)$, and the appropriate root of Eq. (6.28) is immediately

$$\sigma = \frac{\lambda}{\mu} = \rho \qquad \text{M/M/1} \tag{6.33}$$

which yields from Eq. (6.27)

$$r_k = (1-\rho)\rho^k \tag{6.34}$$

This, of course, is our usual solution for M/M/1. Further, using σ = ρ as the value for σ in our waiting-time distribution [Eq. (6.30)], we come up immediately with the known solution given in Eq. (5.123).
Example

As a second example, consider the interarrival-time transform

$$A^*(s) = \frac{2\mu^2}{(s+\mu)(s+2\mu)} \tag{6.35}$$

Note that this corresponds to an E2/M/1 system in which the two arrival stages have different death rates; we choose these rates to be linear multiples of the service rate μ. As always, our first step is to evaluate σ from Eq. (6.28), and so we have

$$\sigma = \frac{2\mu^2}{(\mu-\mu\sigma+\mu)(\mu-\mu\sigma+2\mu)}$$

This leads directly to the cubic equation

$$\sigma^3 - 5\sigma^2 + 6\sigma - 2 = 0$$

We know for sure that σ = 1 is always a root of Eq. (6.28), and this permits the straightforward factoring

$$(\sigma-1)(\sigma-2-\sqrt{2})(\sigma-2+\sqrt{2}) = 0$$

and the root we require (the one lying in (0, 1)) is $\sigma = 2-\sqrt{2} \approx 0.586$.
appears in the form of Eq. (6.20); we may factor out the term $K\sigma^{m-1}$ to obtain

$$r_k = K\sigma^{m-1}R_k \tag{6.38}$$

where

$$R_k \triangleq \frac{r_k}{K\sigma^{m-1}} \qquad k = 0, 1, \ldots, m-2 \tag{6.39}$$

are the (as yet unknown) boundary terms, while $R_k = \sigma^{k-m+1}$ for k ≥ m-1.
We have as yet not used the first m-1 equations represented by the matrix equation (6.6). We now require them for the evaluation of our unknown terms (of which there are m-1). In terms of our one-step transition probabilities $p_{ik}$ we then have

$$R_k = \sum_{i=k-1}^{\infty}R_ip_{ik} \qquad k = 0, 1, \ldots, m-2$$

$$R_k = \sum_{i=k-1}^{m-2}R_ip_{ik} + \sum_{i=m-1}^{\infty}\sigma^{i+1-m}p_{ik}$$

This triangular set of equations may be solved recursively for the unknowns:

$$R_{k-1} = \frac{R_k - \sum_{i=k}^{m-2}R_ip_{ik} - \sum_{i=m-1}^{\infty}\sigma^{i+1-m}p_{ik}}{p_{k-1,k}} \tag{6.41}$$

Defining $J \triangleq K\sigma^{m-1}$ (so that $r_k = JR_k$), normalization then gives

$$J = \frac{1}{\dfrac{1}{1-\sigma} + \displaystyle\sum_{k=0}^{m-2}R_k} \tag{6.42}$$
This then provides a complete prescription for evaluating the distribution of the number of customers in the system. We point out that Takacs [TAKA 62] gives an explicit (albeit complex) expression for these boundary probabilities.

Let us now determine the distribution of waiting time in this system [we have already seen the conditional distribution in Eq. (6.26)]. First we have the probability that an arriving customer need not queue, given by

$$W(0) = \sum_{k=0}^{m-1}r_k = J\sum_{k=0}^{m-1}R_k \tag{6.43}$$

If we now remove the condition on k, we may write the unconditional distribution as

$$W(y) = W(0) + J\sum_{k=m}^{\infty}\int_0^y\frac{(m\mu)(m\mu x)^{k-m}\sigma^{k-m+1}}{(k-m)!}e^{-m\mu x}\,dx \tag{6.44}$$

We may now use the expression for J in Eq. (6.42) and for W(0) in Eq. (6.43) and carry out the integration in Eq. (6.44) to obtain

$$W(y) = 1 - \frac{J\sigma}{1-\sigma}e^{-m\mu(1-\sigma)y} \qquad y \ge 0 \tag{6.45}$$

This is the final solution for our waiting-time distribution and shows that in the general case G/M/m *we still have the exponential distribution (with an accumulation point at the origin) for waiting time!*
We may calculate the average waiting time either from Eq. (6.45) or as follows. As we saw, a customer who arrives to find k ≥ m others in the system must wait until k-m+1 services are complete, each of which takes on the average 1/(mμ) sec. We now sum over all those cases where our customer must wait, to obtain

$$E[\tilde{w}] = W = \sum_{k=m}^{\infty}\frac{1}{m\mu}(k-m+1)r_k$$

But in this range we know that $r_k = K\sigma^k$, and so

$$W = \frac{K}{m\mu}\sum_{k=m}^{\infty}(k-m+1)\sigma^k = \frac{K\sigma^m}{m\mu(1-\sigma)^2}$$
As an example of the boundary calculation, consider the case m = 2, for which $r_0$ is the single unknown boundary probability. Normalization requires

$$\sum_{k=0}^{\infty}r_k = 1 = r_0 + \sum_{k=1}^{\infty}K\sigma^k$$

and so

$$K = \frac{(1-r_0)(1-\sigma)}{\sigma} \tag{6.46}$$

Our task now is to find another relation between K and $r_0$. This we may do from Eq. (6.41), which states

$$R_0 = \frac{R_1 - \sum_{i=1}^{\infty}\sigma^{i-1}p_{i1}}{p_{01}} \tag{6.47}$$

with $R_1 = 1$.
Evaluating the required transition probabilities (for example, $p_{11} = \int_0^{\infty}\binom{2}{1}[1-e^{-\mu t}]e^{-\mu t}\,dA(t) = 2A^*(\mu) - 2A^*(2\mu)$, and Eq. (6.13) for i ≥ 2), one finds

$$\sum_{i=2}^{\infty}\sigma^{i-1}p_{i1} = 2A^*(2\mu) + \frac{2\sigma}{2\sigma-1}\bigl[1 - 2A^*(\mu)\bigr]$$

Substituting back into Eq. (6.51) we find

$$R_0 = \frac{2A^*(\mu)-1}{(2\sigma-1)A^*(\mu)}$$
However, from Eq. (6.39) we know that

$$R_0 = \frac{r_0}{K\sigma}$$

and so we may express $r_0$ as

$$r_0 = \frac{K\sigma[1-2A^*(\mu)]}{(1-2\sigma)A^*(\mu)} \tag{6.53}$$
Thus Eqs. (6.46) and (6.53) give us two equations in our two unknowns K and $r_0$, which when solved simultaneously lead to

$$r_0 = \frac{(1-\sigma)[1-2A^*(\mu)]}{1-\sigma-A^*(\mu)}$$
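The whole m = 2 recipe fits in a few lines of code. The sketch below is my own, not from the text; it reuses the fixed-point iteration for σ and, purely for illustration, feeds in the E2-style transform of Eq. (6.35).

```python
# Assemble the G/M/2 boundary solution: sigma from sigma = A*(2mu - 2mu*sigma),
# r0 from the closed form just derived, K from Eq. (6.46); then r_k = K sigma^k
# for k >= 1.
def gm2(A_star, mu):
    sigma = 0.0
    for _ in range(10_000):               # fixed point in (0, 1)
        sigma = A_star(2 * mu - 2 * mu * sigma)
    r0 = (1 - sigma) * (1 - 2 * A_star(mu)) / (1 - sigma - A_star(mu))
    K = (1 - r0) * (1 - sigma) / sigma    # Eq. (6.46)
    return sigma, r0, K

mu = 1.0
A_star = lambda s: 2 * mu**2 / ((s + mu) * (s + 2 * mu))
sigma, r0, K = gm2(A_star, mu)
print(sigma, r0, K)    # r0 + K*sigma/(1 - sigma) = 1 by construction
```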
This completes our study of the G/M/m queue. Some further results of interest may be found in [DESM 73]. In the next chapter, we view transforms as probabilities and gain considerable reduction in the analytic effort required to solve equilibrium and transient queueing problems.
REFERENCES
COHE 69 Cohen, J. W., The Single Ser ver Queue , Wiley (New York) 1969.
DESM 73 De Smit, J. H. A., "On the Many Server Queue with Exponential
Service Times," Advances in Applied Probability, 5,1 70-1 82 (1973).
KEND 51 Kendall, D. G. , " Some Problems in the Theory of Queues," Journal
of the Royal Statistical Society , Ser. E., 13, 151-1 85 (1951).
TAKA 62 Takacs, L., Introduction to the Theory of Queues, Oxford University
Press (New York) 1962.
EXERCISES
6.1. Pro ve Eq. (6.13). [HINT: condition on an interarrival time of dur at ion
t and then further conditi on on the time (~ t) it will take to empty the
queue.]
6.2. Cons ider E2[M[ I (with infinite qu eueing room).
(a) Solve for r k in terms of G.
(b) Evaluate G explicitly.
6.3. Conside r M[M[m .
(a) How do Pk and r k co mpare?
(b) Co mpare Eqs. (6.22) and (3.40).
6.4. Prove Eq. (6.31).
6.5. Show that Eq. (6.52) follows from Eq. (6.50).
6.6. Consider an H 2 /MfI system in which }.t = 2, }'2 = I, f1 = 2, and
()(l = 5[8.
(a) Find G .
(b) Find r k •
(c) Fin d Il'(Y) .
(d) Find W.
6.7. Conside r a D[MfI system with f1 = 2 and with the same p as in the
previous exercise.
(a) Find G (correct to two decimal places).
260 TH E QUEUE G/M/m
(b) Find r k •
(c) Find w( y).
(d) Find W.
6.8. Consider a G/M /I queueing system with room for at most two customers
(one in service plus one waiting) . Find rk (k = 0, 1,2) in terms of fl
and A * (5).
6.9. Consider a G/M/I system in which the cost of making a customer wait
y sec is
c(y) = aebV
(3) Find the average cost of queueing for a customer.
(b) Under what conditions will the average cost be finite?
J
7
W hen on e stud ies stoch astic pr ocesses such as in queueing the ory , one
finds th at the wo rk d ivides int o two parts. The first part typically requires a
careful probabilistic argument in order to arrive a t expressions inv olvin g the
random va riables of interest. * The second part is then one of a nalysis in which
the f ormal manipulation of symbo ls takes place either in the o rigina l domain
or in so me transformed d omain. Whereas the prob abil istic a rgu men ts typi-
ca lly must be made with great care, they nevertheless leave one with a com-
fort abl e feeling th at the " p hysics" of th e situ ation a re con stantly withi n one's
understand ing a nd gras p. On the other hand , whereas the a na lytic manipula-
tions that o ne carries out in the seco nd part tend to be rather stra ightfo rwa rd
(albeit difficul t) formal o peratio ns, one is unfortunately left with the uneasy
feelin g th a t these man ipul at ion s relate back to the origina l p roblem in no
clearly understandable fashi on. This " no nphysica l" aspect to problem
so lving typically is taken o n when one moves into the domain of transforms,
(either Laplace o r z-t ra nsfo rms).
In th is ch ap ter we de mon str at e that one ma y deal wit h tr a nsforms a nd sti ll
maintain a hand le on the prob abilistic arguments taking place as the se
tra nsfo rms a re manipulated. Th ere a re tw o separat e opera tions involved : the
" ma rking" of custo mers ; a nd the observatio n of "catas tro phe" p roc esses.
Together the se meth od s a re referred to as the meth od of collective marks.
Both opera tio ns need not necessarily be used simultaneou sly , and we study
them sepa ra tely bel ow. This ma terial is dra wn princip ally from [R U N N 65] ;
these ideas were introduced by van Dan tzig [VA N 48] in order to expose th e
probabil ist ic int erpreta tion for tr an sforms.
We first consider a Pois son a rrival pr ocess with a mean a rrival rat e of }.
customers per second. Assume that customers are marked as ab ove. Let us
co nsider the probability
It is clear that k cu stomers will arrive in the int er val (0 , t ) with the pro babil ity
(At)ke-At/k !. M oreover, with probability zk, none of these k cu stomers will
be marked ; thi s last is true since marking takes place independently a mo ng
customers. Now summing over all values o f k we have immediately that
0) (}. t)ke-.lt
q(z, t) = .2 Zk
k- O k!
= e.lt( z-Il (7.4)
G oin g back to Eq . (2. 134) we see that Eq . (7.4) is merel y the generati ng fun c-
tion for a Poisson arriva l proce ss. We thus conclude th at the genera ting
functi on for th is arrival process may also be interpreted as the prob abili stic
qu antity expressed in Eq . (7.3). This will not be the first time we may give a
probabili stic in terpretati on for a generat ing fun ction! I '
f
Example 2: M /M/ ro
We con sider the birth-death queueing system with a n infinite number of
servers. We also assume a t time t = 0 that there a re i customers present. Th e
parameters of our system as usual are). a nd f.l [i.e., A(t) = I - e- At a nd
B (x ) = I - e-~X] .
We are intere sted in the qu antity
1
264 THE METHOD OF COLLECTIVE MARKS
Thus we may calculate the probability that there a re no marked c usto mers
a t time t as follows:
co
P( z, t) =I P[k arrive in (0, t)]
k= O
X {P [new arrival is not a marked customer present a t tW
x {P [initial customer is not a marked customer present at t]}i
II U sing o ur established relati onships we a rrive a t
I
r
P(z , t) =
Example 3: M JGJI
In this exa mple we co nsider the FCFS M JGfI system. Re call that the
random va riables w. , t . +! , t . + 2 , ••• , x., x.+1 , . . . a re all independent of
each o ther. As usu al , we define B *(s) a nd W n *(s) as the Laplace tr an sform's
for the service-time pdf b(x) and the waitin g-time pdf w n(Y) for C n, respe ctively.
We define the event
{no u In
.
1Y, IV .
} a {no customers who arrive during the waiting}
-- .
trrne 0
f Cn a re mark e d (7.10)
We wish to find the probability of this event, that is, P [no M in Jr. ]. Con -
ditioning on the number of arriving customers and on the wait ing time lI ' n ,
a nd then removing the se conditions , we have
.
P[no M In w.] = I<Xl 1 <Xl (i.)k .
....JL. e- A·zk dWn(y )
k- O 0 k!
Furthermore , we have
P[no Min Wn + X n and C n+l marked] = 0 if Wn+l >0
since if C n H mu st wait, then he must arrive in the interval w, + X n a nd it is
impossible for him to be ma rked and stilI to ha ve the event {no M in W n +
Xn} . T hus the first term on the right-hand side of Eq . (7.15) mu st be P[ wnH =
0](1 - z) where th is seco nd factor is merely the probability th at C n+l is
marked. Now cons ider the second term on the right-hand side of Eq. (7. I 5) ;
I as sh own in F igure 7.1 it is clear th at no custo mers arrive between C n and
CnH , and therefore the customers of interest (namely, those arriving after
I
266 TH E METHOD OF COLLECTIVE MARKS
c;
1< Wn + X fl - - - - -- -71
Server
IE wn
Queue
t
c;
No
arrivals .-
I
C n-tl
wn + 1 >1
,
\. v- - - - ---)
A rrivals of
interest
Figure 7.1 Arrivals of interest during lV n + X n •
Cn+l does, but yet in the inte rval ll'n + x n ) must arrive in the interva l 11''' +1
since this interval will end when II' n + X n ends. Thu s this second term must be
P[no Min IV n + X n and C n+l not mar ked) = P[no Mi n IVn+l )
X P[C n+ 1 not mar ked)
= P[n o M in II' n +1) Z
From the se observations and the result of Eq. (7. \ I) we may write Eq. (7.15)
as
P[no M in +
lV n Xn)
WOes) = s( 1 - p) (7.18)
s - ). +i.B*(s)
which, of course, is the P-K transform equ ation for waiting time.
j
268 T HE M ETHOD OF CO LLECTIVE MARK S
=J,nPn(t)
n -O
But from its definition we see that Pn(l ) = F(n,(I) - F(n+I)(I) and so we ha ve
OX>
= .L F(n,(I) (7.2 1)
n- l
If we now permit a Poisson catastrophe process (a t rate y) to devel op we may
a sk for the expectat ion o f the followin g random var iable:
N; ~ number of even ts occurring before th e first ca tas tro p he (7.22)
W ith probability ve:" dt the first catastrophe will occur in th e interval
(I, I + dl ) and then H (I) will give the expected num ber of events occurr ing
before this first ca ta strophe, th at is,
H (t ) = E[ N e I first ca tastrophe occurs in (I, I + dl )]
Summing over a ll possibilities we may then writ e
REFERENCES
RUNN 65 Runnenburg. J. Th ., " On the Use of the Method of Collective Marks
in Queueing Theory," Proc. Symposium O il Congest ion Theory , eds,
W. L. Smith and W. E. Wilkinson, University of North Carolina
Press (1965).
VAN 48 van Dantzig. D., " Sur la methode des fonctions generatrices,"
Colloques internationaux du CN RS, 13, 29-45 (1948).
270 TH E METH OD OF COLLECTIVE MARKS
EXERCISES
7.1. Consider the M/G /l system sho wn in the figure belo w with average
arri val rate A and service-time distribution = B(x) . Customers a~e
served first-come-first-served from queue A until they either leave or
receive a sec of service, at which time they join an entra nce box as shown
in the figure. Cu stomers continue to collect in the ent rance box forming
,J sec of service
received
Server
a gro up until queue A empties and the server becomes free. At this point ,
the entrance box "dumps" all it has collected as a bulk arrival to queue
B. Queue B will receive service until a new arrival (to be referred to as a
"starter") join s queue A at which time the server switche s from queue B
to serve queue A and the customer who is preempted returns to the head
of queue B. The entrance box then begins to fill and the process repeat s.
Let
g. = P[entrance box delivers bulk of size n to queue B]
G(z) = '"
I gnzn
.-0
(a) Give a pr obabili stic interpretat ion for G(z) using the method of
collective marks.
(b) Gi ven th at the " starter" reaches the entrance box, and usin g the
method of collective marks find [in term s of A, a, B (' ), and G(z)]
(c) Gi ven that the "starter" does not reach the entrance box, find Pk
as defined above.
(d) From (b) and (c), give an expres sion (involving an integral) for
G(z) in terms of A, a , B(') , and itself.
(e) From (d) find the average bulk size ii = I ~-o ng•.
EXER CISES 271
ADVANCED MATERIAL
273
8
We have so far made effective use of the Mark ovian property in the
queuein g systems M/M /l , M/G/l , and G/M /m . We must now leave behind
man y (but not all) of the simplificatio ns that deri ve from the Markovian
p roperty and find new meth ods for studying the more difficult system GIG /I.
In this chapter we solve the G/G/l system equat ion s by spectral meth ods,
makin g use of tran sform and complex-varia ble techniques. There are , how-
ever, numerous other approa ches : In Section 5.11 we introduced the ladder
ind ices and pointed out the way in which they were related to important
events in queueing system s ; these idea s can be extended and applied to the
general system GIG /I. Fluctuations of sums of random variables (i.e., the
ladd er indices) have been studied by Ander sen [AND E 53a, AND E 53b,
A ND E 54] and also by Spit zer [SPIT 56, SPIT 60], who simplified and
expanded Andersen' s work . Thi s led, among other thin gs, to Spitzer's identity ,
of great impo rta nce in that app roach to queueing theory. Much earlier (in
the 1930's) Pollaczek considered a form alism for solving these systems and his
approach (summarized in 1957 [POLL 57]) is now referred to as Pollaczek's
method. More recently, Kingman [KING 66] has developed an algebra fo r
queues, which places all these meth ods in a commo n fram ewor k and exposes
the und erlying similarity among them ; he also identifies where the problem
gets difficult and why, but unfortunately he shows that this method does not
exte nd to the multiple server system. Keilson [KEIL 65] applies the method
of Green's fun ction. Benes [BENE 63] studied G/G fl through the unfinished
work and its "relat ives."
Let us now esta blish the basic equat ions for this system.
We assume that the random variables {t n } and {x n } are independent and are
given , respectively, by the distribution functions A (t) and R(x) independent
of the subscript n. As always, we look for a Markov process to simplify our
analysis. Recall for M/G/I , that the unfinished work U( t) is a Markov proce ss
for all t. For G/G/I , it should be clear that although U( t ) is no longer
Markovian, imbedded within U(t) is a crucial Markov pr ocess defined at the
customer-arrival tim es. At these regeneration points, all of the past history
that is pertinent to future beh avior is completely summa rized in the current
value of U(t ). That is, for FCFS system s, the value of the unfinished work
just p rior to the arrival of C; is exactl y equ al to his waiting time (IVn ) and
this Mar k ov process is the object of our study . In Figures 8.1 a nd 8.2 we use
the time-diagram notation for queues (as defined in F igure 2.2) to illust rat e
the history of en in two cases : Figure 8. 1 displays the case where C n +!
arrives to the system before C; departs from the service facility ; and Figur e
8.2 sho ws the case in which Cn + 1 arrives to an empty system. Fo r the con-
dit ion s of Figure 8.1 it is clear th at
That is,
if (8.1)
The condition expressed in Eq . (8. 1) ass ures that C n +! ar rives to find a busy
Cn _ 1 c,
Xn
"'n
Server I•
c; Cn + 1 Time --;:-
Queue
t,.., 'Ulnf.' .. I
C. c..,,
Figure 8.1 The case where C n+1 arrives to find a busy system.
J
8.1. LINDLEY'S INTEGRAL EQUATION 277
Server
t-- w• x.~
T im e~
C. ell +1
Qu eue
t n +1
I
c.
Figure 8.2 The case where C n+l arrives to find an idle system.
(8.3)
This random variable is merely the difference between the service time for C;
and the interarrival time between Cn+l and Cn (for a stable system we will
require th at the expectation of u.; be negative). We may thus combine Eqs.
(8.1)--(8.3) to obtain the follo wing fundamental and yet elementary relation-
ship , first established by Lindley [LIND 52];
if IV n + U n :?: 0 (8.4)
if IV n + U n::::;; 0
The term IV. + Un is merely the sum of the unfinished work (w.) found by
C; plus the service time (x. ), which he now adds to the unfini shed work,
less the time durat ion (t'+l) until the a rrival of the next customer Cn+l;
if th is qu antity is nonnegati ve then it represents the a mount of unfini shed
work found by Cn+l and therefore represents his waiting time wn+1 • How-
ever, if this quantity goes nega tive it indicates that an interval of time has
elap sed since the a rrival of Cn, which exceed s the a mount of un finished wo rk
present in the system j ust after th e arrival of Cn' thereby ind icating that the
system has go ne idle by the time Cn+l arrives.
We may write Eq . (8.4) as
1"n+1 = max [0, II' n + un] (8.5)
We introduce the notation (x)+ ;;, max [0, x ] ; we then have
- (8.6)
1
278 T HE QUEUE GIGII
Since the random va ria bles {In} and {xn} are independent a mo ng themselves
and each other, then one o bserves that the sequence of random variables
{1I'0, W I' IVz, . •.} forms a Markov process with sta tiona ry tran siti on prob-
abilities. This can be seen immediately from Eq . (8.4) since the new value
IV n +! depends upon the previou s sequence of random vari abl es W ; (i =
0, I, . .. , n) o nly through the most recent value IV n plus a random varia ble
lin' which is independent of the random variables 11'; for all i ~ n.
Let us solve Eq . (8.5) recursively beginning with W o as a n initi al condition.
We ha ve (defining Co to be our initi al arrival)
IV, = + lIo)+
(IVO
IVz =(II', + lI,) + = max [0, w, + lI l]
= max [0, lI, + max (0, Wo + lIo)]
= max [0, lI l, III + lIo + IVO]
W3 = (w z + IIz)+ = max [0, Wz + liz]
= max [0, liz + max (0, lI" lI, + lIo + 11'0)]
= max [0, liz, liz + lIl , liz + III + lIO + wo]
11': ~ max [0, lIo, lIo + u. , lIo + lI, + liz, . . . , lIO + III + ... + lI n_ Z,
lIO + III + ... + lI n_ Z + lI + 11'0 ] (8.8)
n_ ,
Equation (8.8) is obta ined from Eq. (8.7) by relabeling the ra ndo m varia bles
u.. It is no w con venient to define the qu antities U; as
n- I
o; = III; (8.9)
i= O
Uo = 0
We thus have from Eq, (8.8)
Equation (8. 12) is our usual condition for stabilityas ma y be seen from the
following:
E[lI nl = E[ X n - t n+ll
= E[ xnl - E[t n+I]
= x-i
= i(p - 1) (8 .13)
where as usual we assume that the expected service time is x and the expected
interarrival time is I (and we have p = xli). From Eqs , (8. 12) and (8. 13) we
see we have requi red th at p < I , as is our usual cond ition for sta bility. Let us
denote (as usu al) the stati on ary d istribution for IV n (a nd also the refore for
\I'n') by
lim P[ w n ~ y ] = lim P[ w n ' ~ y ] = W(y ) (8.14)
n-Xl n -CD
which mu st exist for p < 1 [LIND 52]. Thus W(y ) will be our ass um ed
sta tio na ry distribution for time spent in queue ; we will not dwell up on the
proof o f its existence but rather up on the me th od for its calcul ati on . As we
kno w for such Markov processes, this limiting distribution is ind ependen t o f
the initi al sta te 11"0'
Before proceeding to the formal derivation of result s let us inves tiga te the
way in which Eq. (8.7) in fact produces the waiting time. This we do by
exa mple; consider Figure 8.3, which represen ts the unfin ished work U(t) .
Fo r the sequence of arrivals a nd departures given in this figure, we pre sent the
table bel ow sho wing the interarri val times t n + b service time s x n ' the rand om
variables U n' and the wait ing time W n as measured from the dia gram ; in the
last row of this table we give the waiting time s W n as calculated from Eq.
(8.7) as follows.
j
280 TH E QUEUE G/G/l
U(t)
Arrivals
Depa rtures
Figure 8.3
t t C,t t t
Co C,
~
Co
C J C4
~ ~
C, C2
~ ~
CJ C4
tt
C. C.
..
t
C,
~
C. C. C,
•
e"
~
e"
1,,+1 2 2 5 2 7
Xn 2 3 2 3 2 3
/I n -I -4 2 -I -6
Wn 0 2 2 0 2 0 measured from Fig. 8.3
Wn 0 2 2 0 2 0 calculated from Eq. 8.7
IVO = 0
IV 1= max (0, Iro + uo) = max (0, I) = 1
IV2 = ma x (0 , ub U 1 + U o + 11.0) = ma x (0 , 1,2) = 2
IVa = max (0, u2 , u2 + u1 , u2 + u1 + U o + Iro) = ma x (0, - 1, 0, I) = 1
IV. = ma x (0, Ua, lI a + U 2 , lI a + u2 + Il l ' Ua + 112 + III + Uo + IVO)
= max (0, I , 0 , I , 2) = 2
IVS = max (0, II. , II. + Ua, II, + lI a + U2 , II. + Ua + 112 + Ill'
U, + U a + U 2 + III + 110 + IVo)
= max (0, -4, -3 , -4, -3 , -2) =0
\1'. = max (0, 2, -2, -I, -2, -1 ,0) = 2
IV 7 = ma x (0, -I , 1, -3, -2, -3, - 2, - I) = 1
IVB = ma x (0, -6, -7, -5, -9, -8, -9, -8 , -7) = 0
IV. = ma x (0, I, -5 , - 6, -4, -8 , -7 , - 8, -7 , -6) = 1
J
8.1. LlNDLEY'S INTEGRAL EQUAnON 281
These calculations are quite revealing. For example, whenever we find an m
for which W m = 0, then the .m rightmost calculations in Eq . (8.7) need be
made no more in calcu lating W n for all n > m ; this is due to the fact that a
busy period has ended and the service times and intera rrival times from tha t
busy period cannot affect the calculations in future busy periods. Thus we
see the isolating effect of'idle periods which ensue between busy periods.
Furthermore, when W m = 0, then the rightmost term (Um + 11'0) gives the
(negative of the) total accumulated idle time of the system during the interval
(0, T m).
Let us now proceed with the theory for calculating W(y). We define
Cn(u) as the PDF for the ra ndom variable U n> that is,
(8.15)
and we note that un is not restricted to a half line. We now derive the expres-
sion for C n(u) in terms of A (r) and B(x):
Note that the integral given in Eq . (8.17) is very much like a convolution form
for aCt) and B(x) ; it is not quite a straight convolution since the distribution
C(u) represents the difference between X n and t n+1 rather than the sum.
Using our convolution notation (@), and defining cn(u) ~ dCn(u) jdu we have
J
282 THE QUEUE G/ G/I
For y ~ 0 we have fro m Eq. (8.4)
= i~P[Un:::; Y - wi Wn = w) d W.(w)
Further, it is clear th at
W( y) = 0 for y <0
Combining these last two we have Lindley 's integral equation [LI ND 52),
which is seen to be an integral equation of the Wiener-Hopf type [SPIT 57).
j
1
8.2. SP ECTR AL SOLUTI ON TO LINDL EY'S INTEGRAL EQUATION 283
Let us now show a third form for this equation. By the simple variable change
u = y - IV for the a rgument of our distributio ns we fina lly arrive at
W(y) = (fo"
- Q)
W(1i - u) dC(u) y ~ O
y<O
- (8.23)
Equations (8.21), (8.22), and (8.23) all describe the basic integral equati on
which governs the beha vior of GIGII. These integral equ ations, as menti oned
ab ove, are Weiner-Hopf-type integral equations and are not unfamiliar in
the theory of stochastic processes.
One observes from these forms tha t Lindley 's integral equat ion is almo st,
but not quite, a convolution integral. The imp ortant distinction between a
convolution integral and that given in Lind ley's equation is that the latter
integral form holds only when the varia ble is nonnegative; the distribution
functi on is identically zero for values of negativ e argument. Unfortunately,
since the integral ho lds on ly for the half-line we must borrow techniques from
the the ory of complex variables and from contour integration in or der to
solve our system. We find a similar difficulty in the design of optimal linear
filters in the mathematical theory of co mmun icat ion ; there too, a Weiner-
Hopf integral equ ati on describe s the optimal solution, except that for linear
filters, the unkn own appear s as one factor in the inte grand rather than as in
o ur case in que ueing theory, where the unknown appears on both sides of the
integral equation. Neverthel ess, the solution techniques are a mazingly
similar and the read er acqua inted with the theory of optimal realiza ble linear
filter s will find the following ar guments famil iar.
In the next section, we give a fairly general solutio n to Lindley's integral
equat ion by the use of spectral (transform) methods. In Exercise 8.6 we
examine a solution approach by mean s of an example that doe s not require
tran sform s ; the example chosen is the system D/E,/I con sidered by Lindley.
In that (direct) approa ch it is requ ired to ass ume the solution for m. We now
conside r the spectral so lution to Lindley's equation in which such assum ed
so lution form s will not be necessary.
0 y ~ O
C>
W_(y ) = ( "
L"
W(y - u ) d C(u ) y<O
(8.24)
Note that the left-hand side of Eq. (8.23) might consistently be written as
W+(y) in the same way in which we defined the left-hand side of Eq. (8.24).
We now observe that if we add Eqs. (8.23) and (8.24) then the right-hand
side takes on the integral express ion for all values of the argument, that is,
. aCt)
hm
t - oo e
-Dt < 00 (8.26)
The condition (8.26) really insists that the pdf associated with the inter-
arrival time dr ops off a t least as fast as an exponen tial for very large inter-
arrival times. From this cond ition it may be seen from Eq. (8.17) that the
behavior of C (u) as u ..... - 00 is governed by the behavior of the interarri val
time ; this is true since as u takes on large negative values the argument for the
service-time distribution can be made positive only for lar ge values of t ,
which also appears as the argument for the interarrival time density. T hus
we can show
. C(u )
hm ---v;: < 00
u - - oo e
That is, C( u) is O (~U) as u ---+ - 00 . If we now use this fact in Eq. (8.24) it is
easy to establish that W_(y) is also O(eD") as y ---+ - 00 •
• The notat ion O(g (x» as x _ X o refers to a ny function that (as x - xo> decays to zero
at least as rapidl y asg(x) [where g(x) > OJ , that is,
x-xo sv:
8.2. SPECTRAL SOLUTION TO LINDLEY'S I NTEGRAL EQUATION 285
Let us now define some (bilateral) transforms for various of ou r functions .
For the Laplace transform of W_ (y) we define
Note that <I>+(s) is the Laplace transform of t he PDF for waiting time,
whereas in previous chapters we have defined WOes) as the Laplace transform
of the pdf for waiting time ; thu s by entry I I of Table 1.3, we have
s<l>+(s) = WOes) (8 .29)
Since there are regions for Eqs. (8.23) and (8.24) in which the functions drop
to zero, we may therefore rewrite these transform s as
j
286 THE QUEUE G /G /l
transform of this convolutio n mu st therefore give the produ ct of the
Laplace tran sform <1>+(5) (for the waiting-time distributi on) and C*(s) (for
the density on ii ). The transform of the left-hand side we recognize from Eqs.
(8.30) and (8.31) as being <I>+(s) + <I>_(s) , thu s
<I>+(s) + <I>_(s) = <1>+(s)C* (s)
From Eq. (8.32) we therefore obtai n
<I>+(s) + <I>_(s) = <1>+(s)A *( - s)B * (s)
which gives us
<I>-Cs) = <I>+(s)[ A * (- s)B * (s) - I] (8.33)
We ha ve already established that both <I> _(s) a nd A*(-s) are analytic in the
region Re (s) < D. Furthermore, since <I>+(s) and B* (s) a re transform s of
bounded function s of nonnegative varia bles then both funct ions must be
. analytic in the region Re (s) > O.
We now come to the spect rum fa ctorizat ion. The purpose of this factoriza-
tion is to find a suitable repre sentation for the term
A*(-s)B*(s) - 1 (8.34)
in the form of two fact ors. Let us pause for a moment and recall the method
of stages whereby Erla ng conceived the ingenious idea of approxi mating a
distribution by mean s of a collect ion of series and parallel exponent ial stages.
The Laplace transform for the pdf's obtaina ble in this fashion was genera lly
given in Eq. (4.62) or Eq. (4.64); we immediately recognize these to be rati onal
functions of s (tha t is, a rati o of a polynomial in s divided by a polynomial
in s). We may simila rly conceive of appro ximating the Laplace transfor ms
A *( - s) and B* (s) each in such form s; if we so app roximate, then the term
given by Eq. (8.34) will also be a rat ion al functio n of s. We thus choose to
consider th ose queue ing systems for which A *(s) and B *(s) may be suitably
approximated with (o r which are given initia lly as) such rational functio ns
of s, in which case we then p ropose to for m the following spectrum factor iza-
tion
• Liouville's theorem states, "If I(~) is analytic and bounded for all finite values of z,
then I(z) is a constant."
288 TH E QUEUE G/G /l
[TIT C 52], which immediately establishes that this function must be a constant
(say, K). We thus have .
Let us now consider the limit of this equation as s -+ 0 ; working with the
right-h and side we have
Equat ion (8.43) provides a means of calcul atin g the constant K in our solu-
tion for $ +(s) as given in Eq. (8.41). If we make a Taylor expan sion of the
funct ion 'Y+(s) around s = 0 [viz., 'Y...(s) = 'Y+(O) + s'Y<;> (O) + (s2/2 !)'Y~ I(O)
+ ...] and note from Eqs. (8.35) and (8.36) that 'Y +(0) = 0, we then
recognize that this limit may also be written as
. d'Y+(s)
K=hm--- (8.44)
.-0 ds
J
8.2. SPECTRAL SOLUTION TO LINDLEY'S INT EGRAL EQUATION 289
and this provides us with an alternate way for calculating the constant K.
We may further explore this constant K by examining the behavior of
<1>+(5)o/+(S) anywhere in the region Re (5) > 0 [i.e., see Eq. (8.40»); we
choose to examine this beh avior in the limit as 5 --->- a::J where we kn ow from
Eq. (8.37) that 0/+(5) behaves as 5 does ; that is,
K = lim
s.. . oo )o
r~e-"w(~)
s
dx
As 5 --->- cc we may pull the con stant term W(O+) outside the inte gral and
then obtain the value of the rema ining integral, which is unit y. We thus
obtain
'(8.45)
This establishes that the con stant K is merely the probability that an arriving
customer need not queue]. .
In conclusion then , assuming that we can find the appro priate spectru m
fact ori zati on in Eq . (8.35) we may immediately solve for the Lapl ace trans-
form of the waitin g-time distribution through Eq . (8.41), where the con stant
K is given in eith er of the three forms Eq . (8.43), (8.44), or (8.45). Of course
it then remain s to invert the transform but the pr oblems involved in that
calcul ati on have been faced before in numerous of our other solution form s.
H is possible to carry out the solution of this problem by concentrating on
0/_ (5) rather than '1'+(5) , and in some cases this simplifies the calcul ation s. In
such cases we may proceed from Eq, (8.35) to obtai n
t Note t ha t W (O+) is not necessaril y equ al to I - p. which is the fracti on of lime the server
is idle . (T hese two are equ al for the system M{G{I.)
290 THE QUEUE GIGII
giving
K = °+ o/_(O)[-x + f]
K = 0/_(0 )(1 - p)i (8 .49)
Thus , if we wish to use 'F_(s) in our so lutio n form , we obtai n the transform
of the waiting-time di st ribution fr om Eq. (8.47), where the unknown constant
K is evalu ated in terms of 'L(s) through Eq. (8.49) .
Summari zin g then , o nce we have ca rried out the spectru m factori zat ion
as indicated in Eq . (8.35), we may proceed in one of two directions in solving
fo r <1l +(s) , the tra nsfo rm of the waiting-t ime distributi on . The firs t me th od
gives us
- (8.50)
Example 1: M IMI]
Our old fr iend MIMI I is extremely straightfo rwa rd and sho uld serve to
clarify the meaning of spectru m factori zati on. Since both the intera rriva l time
a nd the service time a re exponentially di stributed random va ria bles, we
immediately have A *(5) = AI(s + }.) and B*(s) = fll(s + fl) , wh ere x = I/,u
a nd i = I /A . In order to solve for <I>+(s) (the transform of th e wai ting time
di stribution), we mu st first form the expression given in Eq. (8.34), tha t is,
!
--
8.2. SPECTRAL SOLUTIO N TO LI NDL EY'S INTEGRAL EQUATION 291
Thus , from Eq. (8.35), we obtain
In Figure 8.4 we show the location of the zeroes (denoted by a circle) and
poles (deno ted by a cross) in the complex s-plane for the funct ion given in
Eq. (8.52). No te that in this particular example the ro ots of the numer at or
(zeroes of the expression) and the roots of the denominat or (poles of the
expression) are especially simple to find ; in general , one of the most difficult
parts of this method of spectrum factori zation is to solve for the ' roots. In
order to fact orize we require that conditions (8.36) and (8.37) mainta in.
Inspecting the pole- zero plot in Figure 8.4 and rememberin g that 0/ +(s)
must be analytic and zero-free for Re (s) > 0, we may collect together the
two zeroes (at s = 0 and s = - po + I.) and one pole (at s = - po) and still
satisfy this requ ired condition. Similarly , 0/ _(s) must be a nalytic and free
from zeroes for the Re (s) < D for some D > 0; we can obtain such a
cond ition if we allow this functi on to contain the rema ining pole (at s = J.)
and choose D = J.. Th iswe show in Figur e 8.5.
Thus we have
Im (s)
s-plane
- - -"*--()---e>---*- - - - - Re(s)
-p
_I
292 THE QUEUE GIG II
Im(s) Im(s)
K = lim o/+(s)
.-0 S
. s +/l-A
= hm -----'--
.- 0 S + /l
=I - p (8.55)
Our expression for the La place tran sform of the waiting time PDF for M IMI !
is therefore fro m Eq. (8.41),
<I>+(s) = (I - p)(s +
/l) (8.56)
s(s + /l - A)
A t this poi nt, typically, we attempt to invert the transform to get the waiting-
time dist ribution. H owever , for thi s M/M /I example, we have already
carr ied out th is inversion for W *(s) = s<I>+(s) in going from Eq. (5.120)
to Eq . (5.123). T he solutio n we o btai n is th e familiar form,
y~O (8.57)
Example 2: GIMllt
1
8.2. SPECTRAL SOLUTION TO LI NDLEY'S INTEGRAL EQU ATION 293
and so we have
o/+(s} = flA*( - s) - s - fl
(8.58)
'I'_(s) s +fl
In order to factorize we must find the roots of the numerator in this equ ation.
We need not concern ourselves with the poles due to A *( - s) since they mu st
lie in the region Re (s) > 0 [i.e., A (t ) = 0 for t < 0) and we are attempting to
find o/+(s), which cannot include any such poles. Thus we only study the
zeroes of the function
s + fl - flA*(- s) = 0 (8.59)
Clearly, one root of this equation occ urs at s = O. In order to find the
remaining roots , we make use of Rouche's theorem (given in Appendix I but
which we repeat here) :
Rouche's Theorem Iff( s) and g(s ) are analytic functions of s inside and
on a closed contour C, and also iflg(s)1 < /f(s)1 on C, thenf(s) andf (s) +
g (s) have the same number of zeroes inside C.
f (s) = s + fl
g(s) = -flA*( - s)
We have by definition
We now ch oose C to be the contour that runs up the imaginary axis a nd then
forms a n infinite-radius semicircle moving counterclockwise and surround ing
the left half of the s-plane, as shown in Figure 8.6. We consider thi s contour
since we are concerned abo ut all the pole s a nd zeroes in Re (s) < 0 so that
we may properly include them in 'Y+(s) [recall that 0/ _(s) may contain none
such); Rouche's theorem will give us information concerning the number of
zeroes in Re (s) < 0, which we must consider. As usual , we assume that the
real a nd imaginary parts of the complex variable s are given by a and (0),
respectively, that is, for j = J=I
s = a + jw
294 TH E QUEUE GIGII
Im (, )
s- plane
~ l,ui: dA(t)1
=,u (8.60)
Similarly we hav e
1I(s)I = Is + ,u l (8.6 1)
Now, examining the contour C as shown in Figure 8.6, we observe th at for
all points on the contour, except at s = 0, we ha ve from Eqs. (8.60) and
(8.61) that
II(s)1 = Is + ,ul > ,u ~ Ig(s)j (8.62)
This follows since s + ,u (for s on C) is a vector whose length is the distance
from the point -,u to the point on C where s is located. We are almos t in a
8.2. SPECTRAL SOLUTION TO LINDL EY'S INTEGRAL EQUATION 295
Im(. ) = w
s-p lane
positi on to a pply Rouche's the orem; the onl y remaining con sidera tion is to
sho w that 1/(s)1 > Ig(s)1 in the vicin ity s = O. For thi s purpose we allow the
conto ur C to make a small semicircula r excu rsion to th e left of the o rigin
as show n in Figure 8.7. We note at s = 0 tha t Ig(O)1 = 1/(0)1 = fl, which
doe s no t sati sfy the conditions for Rouch e's the ore m. The small semicircular
excursion of radius £(£ > 0) that we take to the left of the ori gin overco mes
thi s difficult y as follows. Cons ider ing a n a rbitra ry point s on thi s semicircle
(see the figure) , which lies at an a ngle () with the a-axis, we may write s =
a + jw = - £ cos () + j £ sin 0 and so we ha ve
2
1/ (5)1 = Is + fl l 2 = 1-£ cos () + j £sin () + fl l 2
Formin g the product of (s + fl) and its co mp lex co njugate, we get
N ote that the sma llest value for I/(s) I occurs for () = O. Eva lua ting g( s) on
th is sa me semicircula r excursion we have
F ro m the power-series expan sion of the expon enti al inside the inte gral we
have
Ig(sW = fl21I.~ [I + (- £ cos () + j« sin 0)1 +... 1 dA(e)12
I
296 T HE Q UEUE GIGII
(8.64)
where, as usual, p = xli = II!1-i. Now since () lies in the ran ge -n/2 ~ (} ~
71"12 , which gives cos (} ~
0, we have as E -+-O that on the shrinking semi-
circle surro unding the origin
2 2 2!1-E
!1- - 2!1- E co s (} >!1- - - cos /} (8.65)
p
This last is true since p < I for our stable system . The left-ha:nd side of
Inequality (8.65) is merely the expression given in Eq. (8.63) for I/(s)/2
correct up to thefirst order in E , and the right-hand side is merely the expres-
sion in Eq. (8.64) for Ig(S)j2, again correct up to the first order in E. Thus we
have shown that in the vicinit y s = 0, I/(s) 1 > jg(s)l. Thi s fact now having
been established for all points on the contour C , we may apply Rouche's
theorem and state that I (s) and I (s) + g(s) have the same number of zeroes
inside the contour C. Since I (s) has only one zero (at s = - !1-) it is clear that
the expression given in Eq. (8.59) [/(s) + g(s) ] ha s only one zero for Re (s) <
0 ; let this zero occur at the point s = - S1' As discussed a bove, the point
5 = 0 is also a root of Eq. (8.59).
We may therefore write Eq . (8.58) as
where the first bracketed term contains no poles and no zero es in Re (s) ~ 0
(we have di vided out the only two zeroes at s = 0 and s = - 51 in this half-
plane). We now wish to extend the region Re (s) ~ 0 into the region Re (s) <
D and we choose D (> 0) such that no new zeroes or poles of Eq. (8.59)
are introduced as we extend to this new region . The first br acket qu alifies
for ['I"_(S)]-1, and we see immediatel y that the second bracket qual ifies for
'I"+(s) since none of its zeroes (s = 0, s = - S1) or poles (s = -!1-) are in
--
8.2. SPECTR AL SOLU TION TO LINDLEY 'S INT EGRAL EQUATION 297
Re (5) > O. We may then factorize Eq. (8.66) in the following form:
<1>+(5) = St(p. + s)
P.5(5 SI)+
The partial-fraction expansion for this last function gives us
W(y) = 1- (1 - ; )e- S 1Y
y ~0 (8 .71)
The reader is urged to compare this last result with that given in Eq. (6.30),
also for the system G{Mjl; the comparison is clear and in both cases there
is a single constant that must be solved for. In the solution given here that
constant is solved as the root of Eq. (8.59) with Re (s) < 0; in the equati on
given in Chap ter 6, one must solve Eq. (6.28), which is equivalent to Eq.
(8.59).
Example 3:
The example for G{Mjl can be carried no further in the general case. We
find it instructive therefore to consider a more specific G{M{I example and
finish the calculations; the example we choose is the one we used in Chapter 6,
for which A *(s ) is given in Eq . (6.35) and corresponds to an Ez{M{1 system,
where the two arrival stages have different death rates. For that example we
298 THE QUEUE GIG II
s-plane
)(
I'
note that the poles of A *( -s) occur at the points s = fl , s = 21t, which as
promised lie in the region Re (s) > O. As our first step in factori zing we form
S~~2~ C: J-
\
= [(fl - - S)] 1
'f'"+(s)
-- - - (S - fl - fl JZ)] rls(s - It + It J 2l]
'L(s) [ (fl - s)(2fl - s) s + ,u
8.3. KI NGMAN'S ALGE BRA FOR QUEUES 299
In th is form we rec ogn ize the first bracket as II' F_(s) and the seco nd bracket
as 'F+(s). Thus we have
\I'"+(s) = s(s - It +
It J 2) (8.73)
s +1t
We may evaluate the con stant K fro m Eq . (8.69) to find
s
K = ...! = -1 + ./2- (8.74)
It
a nd thi s of co urse corresponds to W(O+) , whic h is the probability tha t a ne w
arrival mu st wa it for service. Finally then we substi tute these values into
Eq. (8.7 1) to find
W(y) = I - (2 - J 2)e- p ( v "2- Uy y ~0 (8 .75)
which as expected correspon ds exactly to E q . (6.37).
T his method of spectru m facto rizatio n has been used successfully by Rice
[R IC E 62], who con siders the busy peri od for the GfGfl system. Amon g the
interesting results available, there is one corresponding to the limiting distri-
bution of lon g waiting time s in the hea vy-traffic case (which we de velop in
Section 2.1 of Volume II) ; Rice gives a similar a pp ro ximatio n for the dura-
tion of a busy peri od in the heav y tr affic ca sco
U n_ 1 + . .. + U 1 , U n_ 1 + . . . + U o + H'o]
We observed ea rlier th at {lV n } is a Markov process with sta tio nary tr an sition
prob abilities ; its tot al stoc has tic structure is givcn by P[ lV m+ n :::; y 11I'm= x],
which may be calcu lated as a n n-fold integral ove r the n-d imensional joint
d istri b utio n of the n rando m va ria bles \I' m +!' .• . , \l'm+ n ove r that regio n o f
the space which results in 11·.,+ n :::; y. T his ca lculation is mu ch too complicated
an d so we look fo r alternative means to so lve this p ro blem. Pollaczek [POLL
57] used a spectra l ap proach and comp lex integrals to carry ou t the sol ution.
Lind ley [LI ND 52) observed that Il' n has the sa me d istribution as defin edIV : ,
ea rlier as
300 THE QUEU E G/G/I
If we have the case E[u n ] < 0, which corresponds to p = xli < I, then a
stable solution exists for the limiting random variable IV such that
clearly the convolution IVn(Y) 0 c(y). This convolution will result in a density
funct ion that has nonne gative va lues for negative as well as positive values of
its argument. However Eq. (8.76) requires that our next step in the calcula-
tion of IVn+1 (Y) is to calculate the pdf associated with (wn + u n)+; this re-
quires that we take the total probability associated with all negative argu-
ments for this density just found [i.e., for II'n(Y)0 c(y) ] and collect it
together as an impul se of probability located at the origin for wn+1 (Y)' The
value of this impulse will ju st be the integral of our former density on the
negative half line. We say in this case that " we sweep the probability in the
negative half line up to the origin ." The values found from the convolution
on the positive half line are correct for W n +1 in that region. The algebra that
describes this operation is that which Kingman introduces for stud ying the
system G/G /1. Our iterative procedure continues by next forming the
convolution of wn+1(Y) with c(y), sweeping the pr obability in the negative
half line up to the origin to form \I' n+2(Y) and then proceed s to form ll'n+3(Y)
in a like fashion , and so on.
7
8.3. KINGMAN'S ALGEBRA FOR ' QUEUES 301
The elements of this algebra consist of all finite signed measures on the
real line (for example, a pdf on the real line). For any two such measures,
say hi and h 2 , the sum hi + h2 a nd also all scalar multiples of either belong
to th is algebra. The product o peration hi 0 h 2 is defined as the convoluti on
of hi with h2 • It can be shown th at this algebra is a real commutative algebra.
There a lso exists an identity element denoted by e such that e 0 h = h
for an y h in the algebra, and it is clear that e will merely be a unit impulse
located at the origin. We are interested in operators that map real functions
into other real functi ons and that are measurable. Specifically we are inter-
ested in the operato r that takes a value x and maps it into the value (z) " ,
where as usual we have (x)+ ~ max [0, x). Let us denote this operator by 11',
which is not to be confused with the matrix of the transition probabilities
used in Chapter 2 ; thus, if we let A denote some event which is measurable,
and let h(A) = P{w : X(w)tA } denote the measure of this event, then 11' is
defined through
11'[h(A») = P{w: X +(w)€A }
We note the linearity of this operator, that is, 11'(ah) = a11'(h) and
11'(h l + h2 ) = 11'(h l ) + 11'(h2 ) . Thus we have a commutative algebra (with
identity) alo ng with the line ar o pera to r 11' that maps this algebra into itself.
Since [(x) +]+ = (x)+ we see that an important property of this operator 11'
is th at
The solution of this equation is of main interest in solving G/G/1. The re-
maining p orti on of th is secti on gives a succinct summa ry of some elegant
results invol ving thi s algebra; only the courageous are encouraged to con-
tinue.
302 TH E QUEUE GIGII
The particular formalism used for constructing this algebra and car rying
out the solution of Eq. (8.79) is what distinguishes the various meth ods we
have menti oned above . In order to see the relationship among the various
approaches we now introduce Spitzer's identity . In order to state this identit y,
which involves the recurrence relation given in Eq. (8.78), we must intr oduce
the following z-transform :
co
X(z, y) =I wn(y)z' (8.80)
n= O
Addition and scalar multiplication may be defined in the ob vious way for
this power series and "multiplication" will be defined as corre sponding to
convolution as is the usual case for tran sforms. Spitzer's identity is then given
as
(8.81)
where
y ~ log [e - zc(y)] (8.82)
Thus wn(Y) may be found by expanding X( z, y) as a power series in z and
picking out the coefficient of Z' . It is not difficult to show that
II
I
I
8.3. KIN GMAN'S ALG EBRA FOR QUEUES 303
1
304 THE QUEUE G jGjl
Using Eq. (8.90) and recognizing that the moments of the limiting distribution
on IV n must be independent of the subscript we have
E[(y)2] = 2E[lvii] + E[(ii)2]
We now revert to the simpler notation for moments, " ok ~ E[(w)'], etc. Since
W n and Un are independent random variables we have E[wu] = wii; using this
and Eq , (8.92) we find
_ .:l u2 y2
IV=W=---- - (8.94)
2ii 2y
Recalling that the mean residual life of a random variable X is given by
X2f2X, we observe that W is merely the mean residual life of -u less the
mean residu al life of y! We must now evaluate the second moment of ii.
Since ii = x - i ; then u2 = (x - i)2, which gives .
(8.95)
, •
where a; and a/ are the variance of the interarrival-time and service-time
densities, respectively. Using this expression and our previou s result for u
we may thu s convert Eq. (8.94) to
y2
(8.96)
2y
We must now calculate the first two moments of y [we already know that
fj= i(l - p) but wish to express it differently to eliminate a constant].
This we do by conditi oning these moments with respect to the occurrence of
an idle peri od. Th at is, let us define
Go = P[y > 0]
= P[arrival finds the system idlej (8.97)
1
306 TH E QUEUE GIGII
It is clear tha t we have a stable system when ao > O. Fu rthermore , since we
have defined an idle period to occur only when the system remain s idle for a
nonzero interva l of time, we have that
I > 0] =
P[fi ~ y y P[idle period ~ y] (8.98)
a nd this last is just the idle-period distribution earlier den oted by F(y ). We
denote by I the random variable represent ing the idle period. Now we may
calculate the following:
The expectation in this last equation is merely the expected value of I and
so we have .
(8.99)
Similarly, we find
(8.100)
Thus, in particul ar, y2/2y = 12/21 (a o cancels!) and so we may rewrite the
expre ssion for Win Eq. (8.96) as
2
IV = ao" + ao" + (i)"(1 - p)2 1
- (8.101)
2i(1 - p) 21
Unfortunately this is as far as we can go in establ ishing W for GIG /I. The
calculation now invo lves the dete rmin ation of the first two moments of the
idle period. In general, for G/Gfl we cannot easily solve for these rnojnents
since the idle period depends upon the particul ar way in which the previous
•
busy period terminated. However in Chapter 2, Volume II , we place bounds
on the second term in this equation , thereby bounding the mean wait W.
As we did for M IGII in Chapter 5 we now return to our basic equat ion '
(8.91) relat ing the import ant rand om variables and attempt to find the
transform of the waiting time density W* (s) ~ E[ e- siD ] for GIG/I. As one
might expect this will involve the idle-time distribution as well. Formin g the
tran sform on both sides of Eq . (8.91) we have
(8.102)
-
8.4. THE IDL E T IME AND D UA LIT Y 307
In order to evalu a te the left-h and side o f thi s tra nsform expression we ta ke
advantage of the fact that o nly o ne or the other of the ra ndom variables
IV n +l and Yn may be nonzero. Accordingly, we have
where as in the past C*(s) is the Laplace tr an sform for the den sity describing
the random varia ble ii . Thi s last equation fina lly gives us [M ARS 68]
* 0 0 [1 - 1*(- s)]
W (s) =~ 1 - C*(s) - (8.106) •
which represents the genera lization o f the Pollaczek-Khinchin tran sform
equ at ion given in Chap ter 5 and which now appl ies to the system G/G/ 1.
Clearly this eq ua tio n hold s a t least alo ng the imagi nary ax is of the complex s-
plane , since in that case it become s the characteristic func tion of the various
distr ibution s which a re kn own to exist .
Let us now co nsider some examples.
Example 1: M/M /1
For thi s system we know that the idle-period d istributi on is the same as the
intera rri val-time dist ribution, nam ely ,
F(y) = P[l ~ y ] = I - e- 1 • y ~ O (8.107)
308 THE QUEUE G /G/I
*
W (s) =
( I - p)[1 - A/(A - s)]
"------'-'~-'-"---'-=
I - ).fl/(A - s)(s + fl)
- (I - p)s(s + fl)
= (A - s)(s + fl) - Afl
(1 - p)(s + fl)
(8.109)
S + fl - A
which is the sa me as Eq . (5. I20).
Example 2: M /G//
In thi s case the idle-time d istribution is as in M/M /I ; howe ver , we mu st
lea ve the va ria nce for the ser vice-t ime distribution as a n unkn own. We
.. obta in
A2 [(I/A 2 ) + a b2] + (I - p)2 1
W = --=-'---'---'-"---.::....:..."---'-------'--'-
2).(1 - p) J,
(1 +C 2
b )
•
=p (8 .1 10)
p)
2fl (1 -
which is the P-K formula . Also , C*(s) = B*(S)A/ <J. - s) a nd agai n Go =
(I - pl· Equation (8 . 106) then gives
This last is of course correct since the equilibrium waiting time in the (stable)
system D/D!I is always zero.
Since i, I, and] are all constants, we have B*(s) = e- ii , A *(s) = e-" and
I*(s) = e- ,l = r ,m -p). Also, with probabil ity one an arrival finds the system
empty; thus ao = I. Then Eq. (8.106) gives
and so w(y) = uo(y ), an impulse at the origin which of course checks with the
result that no waiting occurs.
Now let us define the random varia ble t, (which as we shall see is related to
an idle time) as
(8 .114)
310 TH E QUEU E GIG II
for k ::;; K. That is t, is merely the amount by which the new ascending ladder
height exceeds the previous ascending ladder height. Since all of the random
variables u; are independent then the rand om variables i k cond itioned on K
are independent and identically distributed. If we now let I - a = P[Un ::;;
Un. for all n > nk ] then we may easily calculate the distribution for K as
l+l+···+I-=u
1 2 K n l -Un o + Un! -Un l + ···+ Un g - U" K - l
=Un x
where no ~ 0 and Uo ~ O. Thus we see that whas the same distribution as
11 + ... + i K and so we may write
E[e-'W] = I
E[E[e- ,l1,+·· ·+Jx ) K]]
= E[(i*(s))K] (8.116)
\
\ \
where I *(s) is the Lapl ace transform for the pdf of each of the t, (each of
which we now denote simply by I). We may now evaluate the expectation in
Eq . (8.116) by using the distribution for Kin Eq. (8.115) finally to yield
W*( ) 1- a - (8.117)
s = I _ ai*(s)
Here then , is yet another expre ssion for W*(s) in the GIGII system.
We now wish to interpret the random variable 1 by considering -a ."dual"
queue (whose varia bles we will distinguish by the use of the symbol "), The
I
dual queue for the GIGII system considered above is the queue in which the
service times x n in the original system become the interarrival times i n+1 in
the dual queue and also the interarrival time s I n + 1 from the origin al queue
become the service time s xn in the dual queue.] It is clear then that the
random variable Un for the dual queue will merely be Un = xn '- i n+! =
I n+! - X n = -Un and defining O n = Uo + ... + Un_ 1 for the dual queu e we
have
(8.118)
t Clearly , if the origina l queue is sta ble, the du al must be unstabl e, a nd conversely (except
that both may be unstable if p = I) .
-
8.4. THE IDLE TIME AND D UALITY 311
as the relationship am ong the dual and the original queues. It is then clear
from our discussion in Section 5.11 that the ascending and descending ladder
indice s are interchanged for the original and the dual queue (the same is
true of the ladder heights). Therefore the first ascending ladder index n) in the
original queue will correspond to the first descending ladder index in the dual
queue ; however, we recall that descending ladder indices correspond to the
arrival of a customer who terminates an idle period. We denote this customer
by en,. Clearly the length of the idle period that he terminates in the dual queue
is the difference between the accum ulated interarrival times and the accumu-
lated service times for all customers up to his arrival (these services must
have taken place in the first busy period), that is, for the dual queue,
.-
= U n,
(8.119)
where we have used Eq. (8.114) at the last step. Thus we see that th e random
variable j is merely th e idle period in th e dual queue and so our Eq . (8.117)
relate s the transform of the waiting time in the original queue to the transform
of the idle time pdf in the du al queue [contrast this with Eq . (8.106), which
relates this waiting-time transform to the tran sform of the idle time in its own
queue] .
Thi s d uality observation permits some rather powerful conclusions to be
d rawn in simp le fashion (and these are discussed at length in [FELL 66],
especially Sections VI.9 and XII.5). Let us discuss two of these .
Example 4: GIMII
If we have a sta ble G IM II queue (with i = II), and x = IIp.) then the du al
is an unstable queue of the type M IGI I (with i = IIp. and if = II), and so 1
(the distribution of idle time in the dual queue) will be of exp onential form;
therefore 1*(s ) = p.1(s + p.), which gives from Eq. (8.11 7) the follo wing
---
312 TIlE QU EUE G/GfI
result for the original G{M /I queue :
In vertin g this a nd form ing the PDF for waiting time we have
Example 5: M IGI}
As a second example let the original queue be of the form M /G fl and there-
fore the dual is of the form G/M /I. Since 0" = P[ w > OJ it must be that
0" = P for M IG/I . Now in the du al system, since a busy period end s at a
random point in time (and since the service time in this dual queue is memory-
less), an idle per iod will have a durati on equ al to the residual life of an
interarrival time ; therefore from Eq. (5.1I) we see that
and when these calc ulatio ns are applied to Eq. (8.117) we have
W *(s) = I - P (8.122)
1- p{[l - B*(s)]/sx }
which is the P-K transform equ ati on for wailing time rewritten as in Eq.
(5.106).
Th is conclu des our study of G/G /1. Sad to say, we have been unable 10
give analytic expressions for the waiting-time distribution explicitly in terms
of known qu antities. In fact, we have not even succeeded for the mean wait
W! Nevert heless, we have given a method for handling the rational case by
spectrum fact orizati on , which is quite effective. In Chapter 2, Volume II,
we return to G/G{I and succeed in extracting many of its important proper-
ties throu gh the use of bounds , inequ alities, and approximations.
REFERENCES 313
REFERENCES
ANDE 53a Andersen , S. E., "On Sum s of Symmetrically Dependent Random
Vari ables," Skan . A k tuar., 36, 123-138 (1953).
ANDE 53b Andersen, S. E., "On the Fluctuations of Sum s of Random Variables
I," Math . Scand., 1,263-285 (1953).
ANDE 54 Andersen , S. E., "On the Fluctuati on s o f Sums o f Random Varia bles
II," Math . Scand., 2, 195-223 (1954).
BENE 63 Benes, V. E., General St ochastic Processes in the Theory of Queues ,
Addison-Wesley (Reading, Mass.) , 1963.
FELL 66 Feller, W. , Probability Theory and its Applications, Vol. II, Wiley
(New Yo rk), 1966.
KEIL 65 Keilson, J ., " The Role of Green's Function s in Congestion The ory,"
Proc. Symp, on Congestion Theory (edited by W. L. Smith and W. E.
Wilkinson) Uni v. of North Carolina Press (Chapel Hill) , 43-71
(1965).
KING 66 Kingman, J. F. C,; "On the Algebra of Queues," Journal of Applied
Probability , 3, 285-326 (1966).
LIND 52 Lindley , D. V., "The Theory of Queues with a Single Server," Proc.
Cambridge Philosophical Society , 48, 277-289 (1952).
MARS 68 Marshall, K. T., "Some Relationships between the Distributions of
Waiting Time , Idle Time, and Interoutput Time in the GI/G /I
Queue," SIAM Journal Applied Math ., 16,324-327 (1968).
POLL 57 Pollaczek, F ., Problemes St ochastiques Poses par le Phenomene de
Formation dune' Queue d'Attente a un Guichet et par de Phenomenes
Apparentes, Gauthiers Villars (Paris), 1957.
RICE 62 R ice, S. o., " Single Server System s," Bell Sys tem Technical Journal,
41, Part I : "Relations Between Some Averages," 269- 278 a nd Part II:
"Busy Period s," 279-310 (1962).
SMIT 53 Smith, W. L., " O n the Di stribution of Queueing Times," Proc.
Cambridg e Philosophical Society , 49, 449-461 (1953).
SPIT 56 Spitzer, F . "A Combinatorial Lemma and its Application to Proba-
bility Theory," Transactions of the American Math ematical Society ,
82, 323-339 ( 1956).
SPIT 57 Spitzer, E , "The Wiener-Hopf Equ ation whose Kernel is a Prob-
ability Density," Duke Math ematics Journal, 24, 327-344 (1957).
SPIT 60 Spit zer , F . "A Tauberian Theorem a nd its Probability Interpreta-
tio n," Transactions ofthe American M athematical Society, 94,150-1 60
(1960).
SYSK 62 Syski, R. , Introduction to Cong estion Theory in Telephone Systems ,
Oliver and Boyd (London), 1962.
TITC 52 Titchmarsh, E. C, ; Theory of Functions, Oxford Uni v, Press (London) ,
1952.
WEND 58 Wendel, F. G ., "Spitzer'S Formula; a Short Proof," Proc. American
Math ematical Society, 9, 905-908 (1958).
3 14 THE QUEUE GIGII
WOLF 70 Wolff, R. W., " Bounds and Inequalities in Queueing," unpublished
notes, Department of Industrial Engineering and Operations Re-
search, University of California (Berkeley), 1970.
EXERCISES
8.1. From Eq. (8.18) show th at C*(s) = A *( - s)B *(s).
8.2. Find ceu) for MIM I!.
8.3. Consider the system MIDI I with a fixed service time of x sec.
(3) Find
ceu) = P[u n .::;; II]
a nd sketch its shape.
(b) Find E[u n ].
8.4 . For the seq uence o f random variables given below, generate the figure
corresponding to Figure 8.3 and complete the ta ble.
II 0 2 3 4 5 6 7 8 9
\ t n -t 1 2 I I 5 7 2 2 6
\ x. 3 4 2 3 3 4 2 3
/In
lV . measured
w. calculated
2
W(y - u) = W(y) - u W(l)(y) + -u W(21(y) + R(u, y)
2
where w (nl(y) is the nth derivat ive of W(y) and R (II, y) is such th at
~'" R (u, y) dC(~) is negligible due to the slow varia tion of W( y)
when p = I - E. Let Il k denote the kth mom ent of ii.
(3) Und er these conditio ns con vert Lindley's integral eq uation to a
seco nd-orde r linear d ifferential eq uation involving Ii" and ii.
(b) Wit h the boundary cond ition W(O) = 0, solve the equation
foun d in (a) and expre ss the mean wait W in term s of the first
two moments of i and x.
8.6. Consider the DIErl1 queueing system, with a con stant intera rrival
time (of [sec) and a service-time pdf given as in Eq. (4.16).
(3) Fi nd ceu).
--
EXERC ISES 315
W( y - i) = f W( y - w) dB(w) for y ~ {
r
lV(y) = 1 + L G,e'" y~O
i- I
_, ., ( rp. )r =
e ' = i 1,2, . . . , r
rp. + cx,
i (rp.. +a,cx . = 0 j = 0, 1, . . . , r - 1
i_ 0 i )' +!
Consider next a G/G/1 system whose interarrival-time and service-time transforms are

A*(s) = \frac{2}{(s + 1)(s + 2)}

B*(s) = \frac{1}{s + 1}

(a) Find the expression for Ψ₊(s)/Ψ₋(s) and show the pole-zero plot in the s-plane.
(b) Use spectrum factorization to find Ψ₊(s) and Ψ₋(s).
(c) Find Φ₊(s).
(d) Find W(y).
(e) Find the average waiting time W.
(f) We solved for W(y) by the method of spectrum factorization. Can you describe another way to find W(y)?
8.10. Consider the system M/G/1. Using the spectral solution method for Lindley's integral equation, find
(a) Ψ₊(s). [HINT: Interpret (1 - B*(s))/(s x̄).]
(b) Ψ₋(s).
(c) Φ₊(s).
8.11. Consider the queue E₂/E_r/1.
(a) Show that

\frac{\Psi_+(s)}{\Psi_-(s)} = \frac{F(s)}{1 - F(s)}

where

F(s) = \frac{1 - A^*(s)}{s \bar{t}}
A P PEN DIX I
fa ~
to den ote the fac t that get) is the output of our system when f (t) is applied as
input. A system is sa id to be linear if, when
and
(1.2)
for a ny T. If the a bove two properties both hold , then ou r system is sai d to be
a linear time-invariant system, a nd it is these with which we con cern ourselves
fo r the momen t. .
Whenever o ne studies such systems , o ne find s th a t complex exponent ial
fun ctions of time a ppea r throu ghout the solution . Further, as we sha ll see,
the tr an sforms o f interes t merely represent ways of dec omposing functi on s of
time int o sums (o r int egrals) of complex exp onentials. Th at is, co mp lex
expo nentials form the building blocks of ou r tr an sforms, a nd so, we m ust
inq uire further to d iscover why the se complex exponentials pervade o ur
thin kin g with such systems. Let us now pose the fundam ent al qu estio n,
namel y , which fun ction s of tim e f( t) may pass through o u r linear ti me-
invariant systems with no change in form ; th at is, for whichf(t) will g( l) =
Hf (t }, where H is so me sca la r multiplier (with respect to I) ? If we can discover
such functi on s f( t} we will then have found the "eigenfunctions," or "char-
acteri stic functi on s," o r " inva ria nts" of ou r system. Denotin g th ese eigen-
fun ction s by f ,(I) it will be show n th at th ey mu st be of the following form (to
within a n a rbitrary sca la r multiplier) :
!e(t} = e st (IA)
where s is, in general, a complex variable. That is, the complex exponentials given in (I.4) form the set of eigenfunctions for all linear time-invariant systems. This result is so fundamental that it is worthwhile devoting a few lines to its derivation. Thus let us assume that when we apply f_e(t) the output is of the form g_e(t), that is,

f_e(t) = e^{st} → g_e(t)

By time-invariance, f_e(t + τ) → g_e(t + τ); but f_e(t + τ) = e^{sτ} e^{st} = e^{sτ} f_e(t), and so, by the linearity property, we have

e^{sτ} g_e(t) = g_e(t + τ)

The unique solution to this equation is given by

g_e(t) = H e^{st}

which confirms our earlier hypothesis that the complex exponentials pass through our linear time-invariant systems unchanged except for the scalar multiplier H. H is independent of t but may certainly be a function of s, and so we choose to write it as H = H(s). Therefore, we have the final conclusion that

e^{st} → H(s) e^{st}    (I.5)
and this fundamental result exposes the eigenfunctions of our systems.

In this way the complex exponentials are seen to be the basic functions in the study of linear time-invariant systems. Moreover, if it is true that the input to such a system is a complex exponential, then it is a trivial computation to evaluate the output of that system from Eq. (I.5) if we are given the function H(s). Thus it is natural that for any input f(t) we would hope to be able to decompose f(t) into a sum (or integral) of complex exponentials, each of which contributes to the overall output g(t) through a computation of the form given in Eq. (I.5). Then the overall output may be found by summing (integrating) these individual components of the output. (The fact that the sum of the individual outputs is the same as the output of the sum of the individual inputs, that is, the complex exponential decomposition, is due to the linearity of our system.) The process of decomposing our input into sums of exponentials, computing the response to each from Eq. (I.5), and then reconstituting the output from sums of exponentials is precisely the procedure we refer to as transform analysis.
Similar considerations apply to discrete-time linear systems, in which the input f_n and output g_n are sequences defined on the integers n. For such systems we write

f_n → g_n    (I.6)

and the properties of linearity and time-invariance take the forms

a f_n^{(1)} + b f_n^{(2)} → a g_n^{(1)} + b g_n^{(2)}    (I.7)

f_{n+m} → g_{n+m}    (I.8)

where m is some integer constant. Here Eq. (I.7) is the expression of linearity whereas Eq. (I.8) is the expression of time-invariance for our discrete systems. We may ask the same fundamental question for these discrete systems and, of course, the answer will be essentially the same, namely, that the eigenfunctions are given by f_n = z^n and that

z^n → H(z) z^n    (I.10)

where H(z) is a function independent of n. This merely expresses the fact that the set of functions {z^n} for any value of z forms the set of eigenfunctions for discrete linear time-invariant systems. Moreover, the function (constant) H either in Eq. (I.5) or (I.10) tells us precisely how much of a given complex exponential we get out of our linear system when we insert a unit amount of that exponential at the input. That is, H really describes the effect of the system on these exponentials; for this reason it is usually referred to as the system (or transfer) function.
Let us pursue this line of reasoning somewhat further. As we all know, a common way to discover what is inside a system is to kick it, hard and quickly. For our systems this corresponds to providing an input only at time t = 0 and then observing the subsequent output. Thus let us define the Kronecker delta function (also known as the unit function) as

u_n = \begin{cases} 1, & n = 0 \\ 0, & n \ne 0 \end{cases}    (I.11)

When we apply u_n to our system it is common to refer to the output as the unit response, and this is usually denoted by h_n; that is,

u_n → h_n
and so, by the time-invariance property of Eq. (I.8),

u_{n+m} → h_{n+m}

Furthermore, if we consider a set of inputs {f_n^{(i)}} and define the output for each of these by f_n^{(i)} → g_n^{(i)}, then from the linearity property in Eq. (I.7) we have

\sum_m a_m f_n^{(m)} → \sum_m a_m g_n^{(m)}

In particular, choosing the shifted unit functions as inputs with the weights z^m, we obtain

\sum_m u_{n+m} z^m → \sum_m h_{n+m} z^m

where the sums range over all integer values of m. From the definition in Eq. (I.11) it is clear that the sum on the left-hand side of this equation has only one nonzero term, namely, for m = -n, and this term is equal to unity, so that the left-hand side is just z^{-n}; moreover, let us make the change of variable k = n + m in the sum on the right-hand side of this expression, giving

z^{-n} → z^{-n} \sum_k h_k z^k

This last equation is now in the same form as Eq. (I.10); it is obvious then that we have the relationship

H(z) = \sum_k h_k z^k    (I.14)
This last equation relates the system function H(z) to the unit response h_k. Recall that our linear time-invariant system was completely* specified by knowledge of H(z), since we could then determine the output for any of our eigenfunctions; similarly, knowledge of the unit response also completely* determines the operation of our linear time-invariant system. Thus it is no surprise that some explicit relationship must exist between the two, and, of course, this is given in Eq. (I.14).
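As a small numerical illustration of this relationship (a minimal Python sketch; the unit response h and the value of z are arbitrary choices, not taken from the text): feeding the eigenfunction x_n = z^{-n} through a finite unit response h_n scales it exactly by H(z) = Σ_k h_k z^k, as in Eq. (I.14).

    import numpy as np

    h = np.array([1.0, 0.5, 0.25])        # an arbitrary (finite) unit response
    z = 0.9 + 0.3j                        # an arbitrary complex value of z

    n = np.arange(len(h), 30)             # interior indices, so the full kernel applies
    x = z ** (-n)                         # the eigenfunction x_n = z^(-n)
    y = sum(h[k] * z ** (-(n - k)) for k in range(len(h)))   # y_n = sum_k h_k x_{n-k}

    H = sum(h[k] * z ** k for k in range(len(h)))            # H(z) from Eq. (I.14)
    print(np.max(np.abs(y / x - H)))      # ~1e-16: the output is H(z) times the input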
Finally, we are in a position to answer the question: why transforms? The key lies in the expression (I.14), which is, itself, a transform (in this case a z-transform), which converts† the time function h_k into a function of a complex variable H(z). This transform arose naturally in our study of linear time-invariant systems and was not introduced into the analysis in an artificial way. We shall see later that a similar relationship exists for continuous-time systems as well, and this gives rise to the Laplace transform. Recalling that continuous-time systems may be described by constant-coefficient linear differential equations and that the use of transforms greatly simplifies the solution of these equations, we are not surprised that discrete-time systems lead to sets of constant-coefficient linear difference equations whose solution is simplified by the use of z-transforms. Lastly, we comment that the inputs f and the outputs g are easily decomposed into weighted sums of complex exponentials by means of transforms, and of course, once this is done, then results such as (I.5) or (I.10) immediately give us the component-by-component output of our system for each of these inputs; the total output is then formed by summing the output components as in Eq. (I.13).

* Completely specified in the sense that the only additional required information is the initial state of the system (e.g., the initial conditions of all the energy storage elements). Usually, the system is assumed to be in the zero-energy state, in which case we truly have a complete specification.

† Transforms not only change the form in which the information describing a given function is presented, but they also present this information in a simplified form which is convenient for mathematical manipulation.
The fact that these transforms arise naturally in our system studies is really only a partial answer to our basic question regarding their use in analysis. The other and more pragmatic reason is that they greatly simplify the analysis itself; most often, in fact, the analysis can only proceed with the use of transforms, leading us to a partial solution from which properties of the system behavior may be derived.

The remainder of this appendix is devoted to giving examples and properties of these two principal transforms, which are so useful in queueing theory.
I.2. THE Z-TRANSFORM

The z-transform of a sequence f_n (with f_n = 0 for n < 0) is defined as

F(z) = \sum_{n=0}^{\infty} f_n z^n    (I.15)

If the sum over all terms in the sequence f_n is finite, then certainly the unit disk |z| ≤ 1 represents a range of analyticity for F(z).* In such a case we have

F(1) = \sum_{n=0}^{\infty} f_n < \infty    (I.16)

From Eq. (I.15) again, exactly one term will be nonzero when we transform the shifted unit function, giving

u_{n-k} ⟺ z^k

As a third example, let us consider the unit step function defined by

δ_n = 1,   n = 0, 1, 2, ...

(recall that all functions are zero for n < 0). In this case we have a geometric series, that is,

δ_n ⟺ \sum_{n=0}^{\infty} z^n = \frac{1}{1 - z}    (I.19)

We note in this case that we must have |z| < 1 in order for the z-transform to exist. An extremely important sequence often encountered is the geometric series

f_n = A α^n,   n = 0, 1, 2, ...
* A function of a complex variable is said to be analytic at a point in the complex plane if that function has a unique derivative at that point. The Cauchy-Riemann necessary and sufficient condition for analyticity of such functions may be found in any text on functions of a complex variable [AHLF 66].
† The double bar denotes the transform relationship, whereas the double heads on the arrow indicate that the journey may be made in either direction, f ⟹ F and F ⟹ f.
Its z-transform may be calculated as

F(z) = \sum_{n=0}^{\infty} A α^n z^n = \frac{A}{1 - αz}

And so

A α^n ⟺ \frac{A}{1 - αz}    (I.20)

where, of course, the region of analyticity for this function is |z| < 1/|α|; note that α may be greater or less than unity.
Linear transformations such as the z-transform enjoy a number of important properties. Many of these are listed in Table I.1. However, it is instructive for us to derive the convolution property, which is most important in queueing systems. Let us consider two functions of discrete time f_n and g_n, which may take on nonzero values only for the nonnegative integers. Their respective z-transforms are, of course, F(z) and G(z). Let ⊗ denote the convolution operator, which is defined for f_n and g_n as follows:

f_n ⊗ g_n \triangleq \sum_{k=0}^{n} f_{n-k}\, g_k

We are interested in deriving the z-transform of the convolution of f_n and g_n, and this we do as follows:

f_n ⊗ g_n ⟺ \sum_{n=0}^{\infty} (f_n ⊗ g_n) z^n = \sum_{n=0}^{\infty} \sum_{k=0}^{n} f_{n-k}\, g_k\, z^{n-k} z^k

However, since \sum_{n=0}^{\infty} \sum_{k=0}^{n} = \sum_{k=0}^{\infty} \sum_{n=k}^{\infty}, we have

f_n ⊗ g_n ⟺ \sum_{k=0}^{\infty} g_k z^k \sum_{n=k}^{\infty} f_{n-k} z^{n-k} = G(z) F(z)
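A minimal numerical sketch of this convolution property (Python; the sequences f_n = (1/2)^n and g_n = (1/4)^n and the evaluation point z = 0.3 are arbitrary choices):

    # F(z) = 1/(1 - z/2) and G(z) = 1/(1 - z/4) for |z| small enough
    N, zv = 200, 0.3
    f = [0.5 ** n for n in range(N)]
    g = [0.25 ** n for n in range(N)]
    conv = [sum(f[n - k] * g[k] for k in range(n + 1)) for n in range(N)]

    C = sum(conv[n] * zv ** n for n in range(N))   # truncated transform of the convolution
    print(C, 1 / (1 - zv / 2) / (1 - zv / 4))      # the two agree to machine precision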
Table I.1
Some Properties of the z-Transform

    SEQUENCE (n = 0, 1, 2, ...)                        z-TRANSFORM
1.  f_n                                                F(z) = \sum_{n=0}^{\infty} f_n z^n
2.  a f_n + b g_n                                      a F(z) + b G(z)
3.  a^n f_n                                            F(az)
4.  f_{n/k} (n a multiple of k; 0 otherwise)           F(z^k)
5.  f_{n+1}                                            [F(z) - f_0]/z
6.  f_{n+k}                                            [F(z) - \sum_{i=0}^{k-1} f_i z^i]/z^k
7.  f_{n-1}                                            z F(z)
8.  f_{n-k}                                            z^k F(z)
9.  n f_n                                              z \frac{d}{dz} F(z)
10. n(n-1)(n-2) ⋯ (n-m+1) f_n                          z^m \frac{d^m}{dz^m} F(z)
11. f_n ⊗ g_n                                          F(z) G(z)
12. f_n - f_{n-1}                                      (1 - z) F(z)
13. \sum_{k=0}^{n} f_k                                 F(z)/(1 - z)
14. ∂f_n/∂a (a is a parameter of f_n)                  ∂F(z)/∂a
15. Series sum property                                F(1) = \sum_{n=0}^{\infty} f_n
16. Alternating sum property                           F(-1) = \sum_{n=0}^{\infty} (-1)^n f_n
17. Initial value theorem                              f_0 = F(0)
18. nth term                                           f_n = \frac{1}{n!} \left. \frac{d^n F(z)}{dz^n} \right|_{z=0}
19. Final value theorem                                \lim_{n\to\infty} f_n = \lim_{z\to 1} (1 - z) F(z)
2 "he
Transform Pairs rm
"he
SEQUENCE z-TRA NSFORM ion
the
co
itly
n = 0, 1, 2, . . . F (z) = 2: I nzn
n= O vay
sed
(~
n = 0
, 1/ ~ 0
zk
, 1 1/ = 0 ,1 , 2 , .. .
1 - z
Zk
her
) is
1 - z
~i a l
A uo r
1 - «z wer
ctZ hen
(I - ctZ)2
Z ' to
(I - z )'
ress
In S-
ctZ(I + ctZ)
.ing
(I - ctZ )3
lOW
z(1 + z) for
(I - z)" . h is
1 IS a
1) ,,_
(I - ctz)2 5 in
rms
I)
(I - z )"-
I to
out
l + m)(1/ + m - I) . . . (1/ + 1)ctn .eed
(I _ ctz)m+l
eZ
Some comments regarding these tables are in order. First, in the property table we note that Property 2 is a statement of linearity, and Properties 3 and 4 are statements regarding scale change in the transform and time domain, respectively. Properties 5-8 regard translation in time and are most useful. In particular, note from Property 7 that the unit delay (delay by one unit of time) results in multiplication of the transform by the factor z, whereas Property 5 states that a unit advance involves division by the factor z. Properties 9 and 10 show multiplication of the sequence by terms of the form n(n - 1) ⋯ (n - m). Combinations of these may be used in order to find, for example, the transform of n²f_n; this may be done by recognizing that n² = n(n - 1) + n, and so the transform of n²f_n is merely z² d²F(z)/dz² + z dF(z)/dz. This shows the simple differentiation technique for obtaining more complex transforms. Perhaps the most important, however, is Property 11, showing that the convolution of two time sequences has a transform that is the product of the transform of each time sequence separately. Properties 12 and 13 refer to the difference and summation of various terms in the sequence. Property 14 shows that if a is an independent parameter of f_n, differentiating the sequence with respect to this parameter is equivalent to differentiating the transform. Property 15 is also important and shows that the transform expression may be evaluated at z = 1 directly to give the sum of all terms in the sequence. Property 16 merely shows how to calculate the alternating sum. From the definition of the z-transform, the initial value theorem given in Property 17 is obvious and shows how to calculate the initial term of the sequence directly from the transform. Property 18, on the other hand, shows how to calculate any term in the original sequence directly from its z-transform by successive differentiation; this then corresponds to one method for calculating the sequence given its transform. It can be seen from Property 18 that the sequence f_n forms the coefficients in the Taylor-series expansion of F(z) about the point 0. Since this power-series expansion is unique, it is clear that the inversion process is also unique. Property 19 gives a direct method for calculating the final value of a sequence from its z-transform.

Table I.2 lists some useful transform pairs. This table can be extended considerably by making use of the properties listed in Table I.1; in some cases this has already been done. For example, Pair 5 is derived from Pair 4 by use of the delay theorem given as entry 8 in Table I.1. One of the more useful relationships is given in Pair 6 considered earlier.

Thus we see the effect of compressing a time sequence f_n into a single function of the complex variable z. Recall that the use of the variable z was to tag the terms in the sequence f_n so that they could be recovered from the compressed function; that is, f_n was tagged with the factor z^n. We have
seen how to form the z-transform of the sequence [through Eq. (I.15)]. The problem confronting us now is to find the sequence f_n given the z-transform F(z). There are basically three methods for carrying out this inversion. The first is the power-series method, which attempts to take the given function F(z) and express it as a power series in z; once this is done, the terms in the sequence f_n may be picked off by inspection since the tagging is now explicitly exposed. The power series may be obtained in one of two ways: the first way we have already seen through the relationship given as Item 18 in Table I.1, that is,

f_n = \frac{1}{n!} \left. \frac{d^n F(z)}{dz^n} \right|_{z=0}

(this method is useful if one is only interested in a few terms but is rather tedious if many terms are required); the second way is useful if F(z) is expressible as a rational function of z (that is, as the ratio of a polynomial in z over a polynomial in z), and in this case one may divide the denominator into the numerator to pick off the sequence of leading terms in the power series directly. The power-series expansion method is usually difficult when many terms are required.
The second and most useful method for inverting z-transforms [that is, to calculate f_n from F(z)] is the inspection method. That is, one attempts to express F(z) in a fashion such that it consists of terms that are recognizable as transform pairs, for example, from Table I.2. The standard approach for placing F(z) in this form is to carry out a partial-fraction expansion,* which we now discuss. The partial-fraction expansion is merely an algebraic technique for expressing rational functions of z as sums of simple terms, each of which is easily inverted. In particular, we will attempt to express a rational F(z) as a sum of terms, each of which looks either like a simple pole (see entry 6 in Table I.2) or like a multiple pole (see entry 13). Since the sum of the transforms equals the transform of the sum, we may apply Property 2 from Table I.1 to invert each of these now recognizable forms separately, thereby carrying out the required inversion. To carry out the partial-fraction expansion we proceed as follows. We assume that F(z) is in rational form, that is,

F(z) = \frac{N(z)}{D(z)}

where both the numerator N(z) and the denominator D(z) are polynomials in z.
* This procedure is related to the Laurent expansion of F(z) around each pole [GUIL 49].
We assume further that D(z) has been put in the factored form

D(z) = \prod_{i=1}^{k} (1 - α_i z)^{m_i}    (I.21)

Equation (I.21) implies that the ith root at z = 1/α_i occurs with multiplicity m_i. [We note here that in most problems of interest, the difficult part of the solution is to take an arbitrary polynomial such as D(z) and to find its roots α_i so that it may be put in the factored form given in Eq. (I.21). At this point we assume that that difficult task has been accomplished.] If F(z) is in this form then it is possible to express it as follows [GUIL 49]:

F(z) = \sum_{i=1}^{k} \sum_{j=1}^{m_i} \frac{A_{ij}}{(1 - α_i z)^{m_i - j + 1}}    (I.22)

This last form is exactly what we were looking for, since each term in this sum may be found in our table of transform pairs; in particular it is Pair 13 (and in the simplest case it is Pair 6). Thus if we succeed in carrying out the partial-fraction expansion, then by inspection we have our time sequence f_n. It remains now to describe the method for calculating the coefficients A_{ij}. The general expression for such a term is given by

A_{ij} = \frac{1}{(j-1)!} \left( \frac{-1}{α_i} \right)^{j-1} \left. \frac{d^{j-1}}{dz^{j-1}} \left[ (1 - α_i z)^{m_i} \frac{N(z)}{D(z)} \right] \right|_{z = 1/α_i}    (I.23)

This rather formidable procedure is, in fact, rather straightforward as long as the function F(z) is not terribly complex.
* We note here that a partial-fraction expansion may be carried out only if the degree of the numerator polynomial is strictly less than the degree of the denominator polynomial; if this is not the case, then it is necessary to divide the denominator into the numerator until the remainder is of lower degree than the denominator. This remainder divided by the original denominator may then be expanded in partial fractions by the method shown; the terms generated from the division also may be inverted by inspection, making use of transform pair 3 in Table I.2. An alternative way of satisfying the degree condition is to attempt to factor out enough powers of z from the numerator, if possible.
It is worthwhile at this point to carry out an example in order to demonstrate the method. Let us assume F(z) is given by

F(z) = \frac{4z^2 (1 - 8z)}{(1 - 4z)(1 - 2z)^2}    (I.24)

To satisfy the degree condition we first factor out z² and expand the remainder, G(z), in partial fractions:

G(z) = \frac{4(1 - 8z)}{(1 - 4z)(1 - 2z)^2} = \frac{A_{11}}{1 - 4z} + \frac{A_{21}}{(1 - 2z)^2} + \frac{A_{22}}{1 - 2z}
Coefficients such as A_{11} (that is, coefficients of simple poles) are easily obtained from Eq. (I.23) by multiplying the original function by the factor corresponding to the pole and then evaluating the result at the pole itself (that is, when z takes on a value that drives the factor to 0). Thus in our example we have

A_{11} = (1 - 4z)\, G(z) \big|_{z=1/4} = \frac{4[1 - (8/4)]}{[1 - (2/4)]^2} = -16

A_{21} = (1 - 2z)^2\, G(z) \big|_{z=1/2} = \frac{4[1 - (8/2)]}{1 - (4/2)} = 12
The coefficient of the lower-order term at the double pole requires one differentiation:

A_{22} = -\frac{1}{2} \frac{d}{dz} \left[ (1 - 2z)^2 G(z) \right]_{z=1/2}
       = -\frac{1}{2} \frac{d}{dz} \left[ \frac{4(1 - 8z)}{1 - 4z} \right]_{z=1/2}
       = -\frac{1}{2} \cdot \frac{(1 - 4z)(-32) - 4(1 - 8z)(-4)}{(1 - 4z)^2} \Big|_{z=1/2}
       = 8
Thus we conclude that

G(z) = \frac{-16}{1 - 4z} + \frac{12}{(1 - 2z)^2} + \frac{8}{1 - 2z}
This is easily shown to be equal to the original factored form of G(z) by placing these terms over a common denominator. Our next step is to invert G(z) by inspection. This we do by observing that the first and third terms are of the form given by transform pair 6 in Table I.2 and that the second term is given by transform pair 13. This, coupled with the linearity property 2 in Table I.1, gives immediately

G(z) ⟺ g_n = \begin{cases} 0, & n < 0 \\ -16(4)^n + 12(n+1)(2)^n + 8(2)^n, & n = 0, 1, 2, \ldots \end{cases}    (I.25)
Of course, we must now account for the factor z² to give the expression for f_n. As mentioned above, we do this by taking advantage of Property 8 in Table I.1, and so we have (for n = 2, 3, ...)

f_n = g_{n-2}

The third method for inverting z-transforms is the inversion-integral method, in which

f_n = \frac{-1}{2\pi j} \oint_C F(z)\, z^{-1-n}\, dz    (I.26)
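Both the partial-fraction expansion above and the power-series method can be checked mechanically (a minimal sketch assuming the sympy symbolic library is available):

    from sympy import symbols, apart, series

    z = symbols('z')
    F = 4*z**2*(1 - 8*z) / ((1 - 4*z)*(1 - 2*z)**2)

    # partial fractions of G(z) = F(z)/z**2: equals -16/(1-4z) + 12/(1-2z)**2 + 8/(1-2z)
    print(apart(F / z**2, z))

    # power-series method: the coefficient of z^n is f_n = g_{n-2};
    # expect 4*z**2 + 0*z**3 - 80*z**4 + ...
    print(series(F, z, 0, 5))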
where j = √-1 and the integral is evaluated in the complex z-plane around a closed circular contour C, which is large enough* to surround all poles of F(z). This method of evaluation works properly when factors of the form z^k are removed from the expression; the reduced expression is then evaluated and the final solution is obtained by taking advantage of Property 8 in Table I.1, as we shall see below. This contour integration is most easily performed by making use of the Cauchy residue theorem [GUIL 49]. This theorem may be stated as follows: the integral around C equals 2πj times the sum of the residues at the poles enclosed by C,

\oint_C F(z)\, dz = 2\pi j \sum_i r_i    (I.27)

where the residue r_i at a pole of order k located at z = z_i is given by

r_i = \frac{1}{(k-1)!} \left. \frac{d^{k-1}}{dz^{k-1}} \left[ (z - z_i)^k F(z) \right] \right|_{z = z_i}
For our example this gives

g_n = \frac{-1}{2\pi j} \oint_C \frac{4 z^{-1-n} (1 - 8z)}{(1 - 4z)(1 - 2z)^2}\, dz
* Since Jordan's lemma (see p. 353) requires that F(z) → 0 as z → ∞ if we are to let the contour grow, we require that any function F(z) that we consider have this property; thus for rational functions of z, if the numerator degree is not less than the denominator degree, then we must divide the numerator by the denominator until the remainder is of lower degree than the denominator, as we have seen earlier. The terms generated by this division are easily transformed by inspection, as discussed earlier, and it is only the remaining function which we now consider in this inversion method for the z-transform.
† A function F(z) of a complex variable z is said to be analytic in a region of the complex plane if it is single-valued and differentiable at every point in that region.
where C is a circle large enough to enclose the poles of F(z) at z = 1/4 and z = 1/2. Using the residue theorem and Eq. (I.27) we find that the residue at z = 1/4 is given by

r_{1/4} = \left( z - \frac{1}{4} \right) \frac{4 z^{-1-n} (1 - 8z)}{(1 - 4z)(1 - 2z)^2} \Big|_{z=1/4} = -\frac{(1/4)^{-1-n} [1 - (8/4)]}{[1 - (2/4)]^2} = 16(4)^n
whereas the residue at z = 1/2 is calculated as

r_{1/2} = \frac{d}{dz} \left[ \left( z - \frac{1}{2} \right)^2 \frac{4 z^{-1-n}(1 - 8z)}{(1 - 4z)(1 - 2z)^2} \right]_{z=1/2}
       = \frac{d}{dz} \left[ \frac{z^{-1-n}(1 - 8z)}{1 - 4z} \right]_{z=1/2}
       = \frac{(1 - 4z)[(-1-n) z^{-2-n}(1 - 8z) + z^{-1-n}(-8)] - z^{-1-n}(1 - 8z)(-4)}{(1 - 4z)^2} \Big|_{z=1/2}
       = (-1)\left[ (-1-n)\left(\frac{1}{2}\right)^{-2-n}(-3) + \left(\frac{1}{2}\right)^{-1-n}(-8) \right] - \left(\frac{1}{2}\right)^{-1-n}(-3)(-4)
       = -12(n+1)\, 2^n + 16(2)^n - 24(2)^n
Now we must take 2πj times the sum of the residues and then multiply by the factor preceding the integral in Eq. (I.26) (thus we must take -1 times the sum of the residues) to yield

g_n = -16(4)^n + 12(n+1)(2)^n + 8(2)^n,   n = 0, 1, 2, \ldots

in agreement with Eq. (I.25).
I.3. THE LAPLACE TRANSFORM

The (bilateral) Laplace transform of a time function f(t) is defined as

F^*(s) = \int_{-\infty}^{\infty} f(t)\, e^{-st}\, dt    (I.28)

Again, we have adopted the notation for general Laplace transforms in which we use a capital letter for the transform of a function of time, which is described in terms of a lowercase letter. This is usually referred to as the "two-sided," or "bilateral," Laplace transform since it operates on both the negative and positive time axes. We have assumed that f(t) = 0 for t < 0, and in this case the lower limit of integration may be replaced by 0-, which is defined as the limit of 0 - ε as ε (> 0) goes to zero; further, we often denote this lower limit merely by 0 with the understanding that it is meant as 0- (usually this will cause no confusion). There also exists what is known as the "one-sided" Laplace transform in which the lower limit is replaced by 0+, which is defined as the limit of 0 + ε as ε (> 0) goes to zero; this one-sided transform has application in the solution of transient problems in linear systems. It is important that the reader distinguish between these two transforms with zero as their lower limit since in the former case (the bilateral transform) any accumulation at the origin (as, for example, the unit impulse defined below) will be included in the transform, whereas in the latter case (the one-sided transform) it will be omitted.
For our assumed case in which f(t) = 0 for t < 0 we may write our transform as

F^*(s) = \int_{0}^{\infty} f(t)\, e^{-st}\, dt    (I.29)

where, we repeat, the lower limit is to be interpreted as 0-. This Laplace transform will exist so long as f(t) grows no faster than an exponential, that is, so long as there is some real number σ_a such that

e^{-σ_a t}\, |f(t)| → 0   as t → ∞
The smallest possible value for σ_a is referred to as the abscissa of absolute convergence. Again we state that the Laplace transform F*(s) for a given function f(t) is unique.

If the integral of f(t) is finite, then certainly the right-half plane Re(s) ≥ 0 represents a region of analyticity for F*(s); the notation Re( ) reads as "the real part of the complex function within the parentheses." In such a case we have, corresponding to Eq. (I.16),

F^*(0) = \int_{0}^{\infty} f(t)\, dt < \infty    (I.30)

From our earlier definition in Eq. (I.9) we see that properties for the z-transform when z = 1 will correspond to properties for the Laplace transform when s = 0 as, for example, in Eqs. (I.16) and (I.30).

Let us now consider some important examples of Laplace transforms. We use notation here identical to that used in Eq. (I.17) for z-transforms, namely, we use a double-barred, double-headed arrow to denote the relationship between a function and its transform; thus, Eq. (I.29) may be written as

f(t) ⟺ F^*(s)    (I.31)
As a first example, consider the function f(t) = A e^{-at} for t ≥ 0. We then have

F^*(s) = \int_{0}^{\infty} A e^{-at} e^{-st}\, dt = A \int_{0}^{\infty} e^{-(s+a)t}\, dt = \frac{A}{s + a}

And so we have the fundamental relationship

A e^{-at}\, δ(t) ⟺ \frac{A}{s + a}    (I.32)
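A one-line numerical check of (I.32) is easy to carry out (a Python sketch assuming the scipy library; the values A = 2, a = 3, and the real test point s = 1.5 are arbitrary):

    import numpy as np
    from scipy.integrate import quad

    A, a, s = 2.0, 3.0, 1.5
    val, _ = quad(lambda t: A * np.exp(-(s + a) * t), 0, np.inf)
    print(val, A / (s + a))   # both print 0.4444...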
where we have defined the unit step function in continuous time as

δ(t) = \begin{cases} 1, & t \ge 0 \\ 0, & t < 0 \end{cases}    (I.33)
u_0(t) corresponds to highly concentrated unit-area pulses that are of such short duration that they cannot be distinguished by available measurement instruments from other perhaps briefer pulses. Therefore, as one might expect, the exact shape of the pulse is unimportant; rather, only its time of occurrence and its area matter. This function has been studied and utilized by scientists for many years [GUIL 49], among them Dirac, and so the unit impulse function is often referred to as the Dirac delta function. For a long time pure mathematicians refrained from using u_0(t) since it is a highly improper function, but years ago Schwartz's theory of distributions [SCHW 59] put the concept of a unit impulse function on firm mathematical ground. Part of the difficulty lies with the fact that the unit impulse function is not a function at all, but merely provides a notational way for handling discontinuities and their derivatives. In this regard we will introduce the unit impulse as the limit of a sequence without appealing to the more sophisticated generalized functions that place much of what we do in a more rigorous framework.
Consider the following sequence of functions, indexed by the parameter α:

f_α(t) = \begin{cases} α, & |t| \le 1/2α \\ 0, & |t| > 1/2α \end{cases}

This rectangular waveform has a height and width dependent upon the parameter α, as shown in Figure I.2. Note that this function has a constant area of unity (hence the name unit impulse function). As α increases, we note that the pulse gets taller and narrower. The limit of this sequence as α → ∞ (or the limit of any one of an infinite number of other sequences with similar properties, i.e., increasing height, decreasing width, unit area) is what we mean by the unit impulse "function." Thus we are led to the following description of the unit impulse function:
u_0(t) = \begin{cases} \infty, & t = 0 \\ 0, & t \ne 0 \end{cases}

\int_{-\infty}^{\infty} u_0(t)\, dt = 1
Figure I.2 A sequence of functions (α = 1, 2, 4, 8) whose limit is the unit impulse function u_0(t).
An impulse is drawn as a vertical arrow at its point of occurrence, with a number alongside indicating the area of the impulse; that is, A times a unit impulse function located at the point t = a is denoted as A u_0(t - a) and is depicted as in Figure I.3.

Figure I.3 Graphical representation of A u_0(t - a).
Let us now consider the integral of the unit impulse function. It is clear that if we integrate from -∞ to a point t where t < 0, then the total integral must be 0, whereas if t > 0, then we will have successfully integrated past the unit impulse and thereby will have accumulated a total area of unity. Thus we conclude

\int_{-\infty}^{t} u_0(x)\, dx = \begin{cases} 1, & t \ge 0 \\ 0, & t < 0 \end{cases}

But we note immediately that the right-hand side is the same as the definition of the unit step function given in Eq. (I.33). Therefore, we conclude that the unit step function is the integral of the unit impulse function, and so the "derivative" of the unit step function must therefore be a unit impulse function. However, we recognize that the derivative of this discontinuous function (the step function) is not properly defined; once again we appeal to the theory of distributions to place this operation on a firm mathematical foundation. We will therefore assume this is a proper operation and proceed to use the unit impulse function as if it were an ordinary function.
One of the very important properties of the unit impulse function is its sifting property; that is, for an arbitrary differentiable function g(t) we have

\int_{-\infty}^{\infty} g(x)\, u_0(x - t)\, dx = g(t)

This last equation merely says that the integral of the product of our function g(x) with an impulse located at x = t "sifts" the function g(x) to produce its value at t, g(t). We note that it is possible also to define the derivative of the unit impulse, which we denote by u_1(t) = du_0(t)/dt; this is known as the unit doublet and has the property that it is everywhere 0 except in the vicinity of the origin, where it runs off to ∞ just to the left of the origin and off to -∞
just to the right of the origin and, in addition, has a total area equal to zero. Such functions correspond to electrostatic dipoles, for example, used in physics. In fact, an impulse function may be likened to the force placed on a piece of paper when it is laid over the edge of a knife and pressed down, whereas a unit doublet is similar to the force the paper experiences when cut with scissors. Higher-order derivatives are possible, and in general we may have u_n(t) = du_{n-1}(t)/dt. In fact, as we have seen, we may also go back down the sequence by integrating these functions as, for example, by generating the unit step function as the integral of the unit impulse function; the obvious notation for the unit step function, therefore, would be u_{-1}(t), and so we may write u_0(t) = du_{-1}(t)/dt. [Note, from Eq. (I.33), that we have also reserved the notation δ(t) to represent the unit step function.] Thus we have defined an infinite sequence of specialized functions beginning with the unit impulse and proceeding to higher-order derivatives such as the doublet, and so on, as well as integrating the unit impulse and thereby generating the unit step function, the ramp, namely,
u_{-2}(t) \triangleq \int_{-\infty}^{t} u_{-1}(x)\, dx = \begin{cases} t, & t \ge 0 \\ 0, & t < 0 \end{cases}

the parabola, namely,

u_{-3}(t) \triangleq \int_{-\infty}^{t} u_{-2}(x)\, dx = \begin{cases} t^2/2, & t \ge 0 \\ 0, & t < 0 \end{cases}

and in general

u_{-n}(t) = \begin{cases} \dfrac{t^{n-1}}{(n-1)!}, & t \ge 0 \\ 0, & t < 0 \end{cases}    (I.35)
This entire family is called the family of singularity functions, and the most important members are the unit step function and the unit impulse function.

Let us now return to our main discussion and consider the Laplace transform of u_0(t). We proceed directly from Eq. (I.28) to obtain

\int_{-\infty}^{\infty} u_0(t)\, e^{-st}\, dt = 1

(Note that the lower limit is interpreted as 0-.) Thus we see that the unit impulse has a Laplace transform equal to the constant unity.
Let us now consider some of the important properties of the transformation. As with the z-transform, the convolution property is the most important, and we proceed to derive it here in the continuous-time case. Thus, consider two functions of continuous time f(t) and g(t), which take on nonzero values only for t ≥ 0, and define their convolution by

f(t) ⊗ g(t) \triangleq \int_{0}^{t} f(t - x)\, g(x)\, dx

We may then ask for the Laplace transform of this convolution. We obtain this formally by plugging into Eq. (I.28) as follows:

f(t) ⊗ g(t) ⟺ \int_{0}^{\infty} \left[ \int_{0}^{t} f(t - x)\, g(x)\, dx \right] e^{-st}\, dt

Interchanging the order of integration and making the change of variable v = t - x, this becomes

\int_{0}^{\infty} g(x)\, e^{-sx}\, dx \int_{0}^{\infty} f(v)\, e^{-sv}\, dv

And so we have

f(t) ⊗ g(t) ⟺ F^*(s)\, G^*(s)
Once again we see that the transform of the convolution of two functions equals the product of the transforms of each. In Table I.3 we list a number of important properties of the Laplace transform, and in Table I.4 we list some of the important transforms themselves. In these tables we adopt the usual notation:

\frac{d^n f(t)}{dt^n} \triangleq f^{(n)}(t)    (I.37)

For example, f^{(-1)}(t) = \int_{-\infty}^{t} f(x)\, dx; when we deal with functions that are zero for t < 0, then f^{(-1)}(0^-) = 0. We comment here that the one-sided transform that uses 0+ as a lower limit in its definition is quite commonly used in transient analysis, but we prefer 0- so as to include impulses at the origin.
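The convolution property just derived can also be verified symbolically (a minimal sketch assuming the sympy library; the two exponentials are arbitrary choices):

    from sympy import symbols, exp, integrate, laplace_transform, simplify

    t, x = symbols('t x', positive=True)
    s = symbols('s', positive=True)

    f, g = exp(-t), exp(-2*t)
    conv = integrate(f.subs(t, t - x) * g.subs(t, x), (x, 0, t))  # e^{-t} - e^{-2t}

    F = laplace_transform(f, t, s, noconds=True)
    G = laplace_transform(g, t, s, noconds=True)
    C = laplace_transform(conv, t, s, noconds=True)
    print(simplify(C - F*G))   # 0: the transform of the convolution is F*(s)G*(s)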
The table of properties permits one to compute many transform pairs from a given pair. Property 2 is the statement of linearity, and Property 3 describes the effect of a scale change. Property 4 gives the effect of a translation in time,
Table I.3
Some Properties of the Laplace Transform

    FUNCTION                                  TRANSFORM
1.  f(t)                                      F^*(s) = \int_{0^-}^{\infty} f(t)\, e^{-st}\, dt
2.  a f(t) + b g(t)                           a F^*(s) + b G^*(s)
3.  f(at)                                     (1/a)\, F^*(s/a)
4.  f(t - a)                                  e^{-as}\, F^*(s)
5.  e^{-at} f(t)                              F^*(s + a)
6.  t f(t)                                    -\dfrac{dF^*(s)}{ds}
7.  t^n f(t)                                  (-1)^n \dfrac{d^n F^*(s)}{ds^n}
8.  f(t)/t                                    \int_{s}^{\infty} F^*(s_1)\, ds_1
9.  f(t)/t^n                                  \int_{s_1=s}^{\infty} ds_1 \int_{s_2=s_1}^{\infty} ds_2 \cdots \int_{s_n=s_{n-1}}^{\infty} ds_n\, F^*(s_n)
10. f(t) ⊗ g(t)                               F^*(s)\, G^*(s)
11. df(t)/dt                                  s F^*(s)
12. d^n f(t)/dt^n                             s^n F^*(s)
13. \int_{-\infty}^{t} f(t)\, dt              F^*(s)/s
14. \int_{-\infty}^{t} \cdots \int f(t)(dt)^n (n times)    F^*(s)/s^n
15. ∂f(t)/∂a (a is a parameter)               ∂F^*(s)/∂a
16. Integral property                         F^*(0) = \int_{0^-}^{\infty} f(t)\, dt
17. Initial value theorem                     f(0^+) = \lim_{s\to\infty} s F^*(s)
18. Final value theorem                       \lim_{t\to\infty} f(t) = \lim_{s\to 0} s F^*(s)

For functions that are not zero for t < 0, the n-fold integral transforms as

\int_{-\infty}^{t} \cdots \int f(t)(dt)^n ⟺ \frac{F^*(s)}{s^n} + \frac{f^{(-1)}(0^-)}{s^n} + \frac{f^{(-2)}(0^-)}{s^{n-1}} + \cdots + \frac{f^{(-n)}(0^-)}{s}
Table I.4
Some Laplace Transform Pairs

    FUNCTION                                  TRANSFORM
1.  f(t)                                      F^*(s) = \int_{0^-}^{\infty} f(t)\, e^{-st}\, dt
2.  u_0(t)                                    1
3.  u_0(t - a)                                e^{-as}
4.  u_n(t)                                    s^n
5.  u_{-1}(t) = δ(t)                          1/s
6.  u_{-1}(t - a)                             e^{-as}/s
7.  u_{-n}(t) = \dfrac{t^{n-1}}{(n-1)!}\, δ(t)    1/s^n
8.  A e^{-at}\, δ(t)                          \dfrac{A}{s + a}
9.  t e^{-at}\, δ(t)                          \dfrac{1}{(s + a)^2}
10. \dfrac{t^n}{n!} e^{-at}\, δ(t)            \dfrac{1}{(s + a)^{n+1}}
whereas Property 5, its dual, gives the effect of a parameter shift in the transform domain. Properties 6 and 7 show the effect of multiplication by t (to some power), which corresponds to differentiation in the transform domain; similarly, Properties 8 and 9 show the effect of division by t (to some power), which corresponds to integration. Property 10, a most important property (derived earlier), shows the effect of convolution in the time domain going over to simple multiplication in the transform domain. Properties 11 and 12 give the effect of time differentiation; it should be noted that this corresponds to multiplication by s (to a power equal to the number of differentiations in time) times the original transform. In a similar way, Properties 13 and 14 show the effect of time integration going over to division by s in the transform domain. Property 15 shows that differentiation with respect to a parameter of f(t) corresponds to differentiation in the transform domain as well. Property 16, the integral property, shows the simple way in which the transform may be evaluated at the origin to give the total integral
of f(t). Properties 17 and 18, the initial and final value theorems, show how to compute the values for f(t) at t = 0 and t = ∞ directly from the transform.

In Table I.4 we have a rather short list of important Laplace transform pairs. Much more extensive tables exist and may be found elsewhere [DOET 61]. Of course, as we said earlier, the table shown can be extended considerably by making use of the properties listed in Table I.3. We note, for example, that transform pair 3 in Table I.4 is obtained from transform pair 2 by application of Property 4 in Table I.3. We point out again that this table is limited in length since we have included only those functions that find relevance to the material contained in this text.

So far in this discussion of Laplace transforms we have been considering only functions f(t) for which f(t) = 0 for t < 0. This will be satisfactory for most of the work we consider in this text. However, there is an occasional need for transforming a function of time which may be nonzero anywhere on the real-time axis. For this purpose we must once again consider the lower limit of integration to be -∞, that is,

F^*(s) = \int_{-\infty}^{\infty} f(t)\, e^{-st}\, dt    (I.38)
One can easily show that this (bilateral) Laplace transform may be calculated in terms of one-sided time functions and their transforms as follows. First we define

f_-(t) = \begin{cases} f(t), & t < 0 \\ 0, & t \ge 0 \end{cases}

f_+(t) = \begin{cases} 0, & t < 0 \\ f(t), & t \ge 0 \end{cases}
As always, these Laplace transforms have abscissas of absolute convergence. Let us therefore define σ₊ as the convergence abscissa for F₊*(s); this implies that the region of convergence for F₊*(s) is Re(s) > σ₊. Similarly, F₋*(s) will have some abscissa of absolute convergence, which we will denote by σ₋, which implies that F₋*(s) converges for Re(s) > σ₋. It then follows directly that F₋*(-s) will have the same convergence abscissa (σ₋) but will converge for Re(s) < σ₋. Thus we have a situation where F*(s) converges for σ₊ < Re(s) < σ₋, and therefore we will have a "convergence strip" if and only if σ₊ < σ₋; if such is not the case, then it is not useful to define F*(s). Of course, a similar argument can be made in the case of z-transforms for functions that take on nonzero values for negative time indices.
So far we have seen the effect of tagging our time function f(t) with the complex exponential e^{-st} and then compressing (integrating) over all such tagged functions to form a new function, namely, the transform F*(s). The purpose of the tagging was so that we could later "untransform" or, if you will, "unwind" the transform in order to obtain f(t) once again. In principle we know this is possible since a transform and its time function are uniquely related. So far, we have specified how to go in the one direction from f(t) to F*(s). Let us now discuss the problem of inverting the Laplace transform F*(s) to recover f(t). There are basically two methods for conducting this inversion: the inspection method and the formal inversion-integral method. These two methods are very similar.
First let us discuss the inspection method, which is perhaps the most useful scheme for inverting transforms. Here, as with z-transforms, the approach is to rewrite F*(s) as a sum of terms, each of which can be recognized from the table of Laplace transform pairs. Then, making use of the linearity property, we may invert the transform term by term, and then sum the results to recover f(t). Once again, the basic method for writing F*(s) as a sum of recognizable terms is that of the partial-fraction expansion. Our description of that method will be somewhat shortened here since we have discussed it at some length in the z-transform section. First, we will assume that F*(s) is a rational function of s, namely,

F^*(s) = \frac{N(s)}{D(s)}

where both the numerator N(s) and denominator D(s) are polynomials in s. Again, we assume that the degree of N(s) is less than the degree of D(s); if this is not the case, N(s) must be divided by D(s) until the remainder is of degree less than the degree of D(s), and then the partial-fraction expansion is carried out for this remainder, whereas the terms of the quotient resulting from the division will be simple powers of s, which may be inverted by appealing to Transform 4 in Table I.4. In addition, we will assume that the
" hard" part of the problem ha s been done , namely, that D (s) has been put in
facto red form
k
D(s) =
.
II (s + ai)m,
i :::lt l
(1.39)
,
Bkl Bk 2 Bk m •
+ + + ... + (1.40)
(s + ak)m. (s + ak)'n.-, (s + ak)
Once we have expressed F*(s) as above we are then in a position to invert each term in this sum by inspection from Table I.4. In particular, Pairs 8 (for simple poles) and 10 (for multiple poles) give us the answer directly. As before, the method for calculating the coefficients B_{ij} is given in general by

B_{ij} = \frac{1}{(j-1)!} \left. \frac{d^{j-1}}{ds^{j-1}} \left[ (s + a_i)^{m_i} \frac{N(s)}{D(s)} \right] \right|_{s = -a_i}    (I.41)

Thus we have a complete prescription for finding f(t) from F*(s) by inspection in those cases where F*(s) is rational and where D(s) has been factored as in Eq. (I.39). This method works very well in those cases where F*(s) is not overly complex.
To elucidate some of these principles let us carry out a simple example. Assume that F*(s) is given by

F^*(s) = \frac{8(s^2 + 3s + 1)}{(s + 3)(s + 1)^3}    (I.42)

The partial-fraction expansion takes the form

F^*(s) = \frac{B_{11}}{s + 3} + \frac{B_{21}}{(s + 1)^3} + \frac{B_{22}}{(s + 1)^2} + \frac{B_{23}}{s + 1}
Evaluation of the coefficients B_{ij} proceeds as follows. B_{11} is especially simple since no differentiations are required, and we obtain

B_{11} = (s + 3)\, F^*(s) \big|_{s=-3} = \frac{8(9 - 9 + 1)}{(-2)^3} = -1

B_{21} = (s + 1)^3\, F^*(s) \big|_{s=-1} = \frac{8(1 - 3 + 1)}{-1 + 3} = -4

B_{22} = \frac{d}{ds} \left[ \frac{8(s^2 + 3s + 1)}{s + 3} \right]_{s=-1}
       = 8\, \frac{(s + 3)(2s + 3) - (s^2 + 3s + 1)(1)}{(s + 3)^2} \Big|_{s=-1}
       = 8\, \frac{s^2 + 6s + 8}{(s + 3)^2} \Big|_{s=-1} = 8\, \frac{1 - 6 + 8}{(2)^2} = 6

Lastly, the calculation of B_{23} involves two differentiations; however, we have already carried out the first differentiation, and so we take advantage of the form we have derived in B_{22} just prior to evaluation at s = -1; furthermore, we note that since j = 3, we have for the first time an effect due to the term (j - 1)! from Eq. (I.41). Thus

B_{23} = \frac{1}{2!} \frac{d^2}{ds^2} \left[ \frac{8(s^2 + 3s + 1)}{s + 3} \right]_{s=-1}
       = \frac{1}{2}(8) \frac{d}{ds} \left[ \frac{s^2 + 6s + 8}{(s + 3)^2} \right]_{s=-1}
       = 4\, \frac{(s + 3)^2 (2s + 6) - (s^2 + 6s + 8)(2)(s + 3)}{(s + 3)^4} \Big|_{s=-1}
       = 4\, \frac{(2)^2(4) - (1 - 6 + 8)(2)(2)}{(2)^4} = 1
This completes the evaluation of the constants B_{ij} to give the partial-fraction expansion

F^*(s) = \frac{-1}{s + 3} + \frac{-4}{(s + 1)^3} + \frac{6}{(s + 1)^2} + \frac{1}{s + 1}    (I.43)
This last form lends itself to inversion by inspection as we had promised. In particular, we observe that the first and last terms invert directly according to transform pair 8 from Table I.4, whereas the second and third terms invert directly from Pair 10 of that table; thus we have for t ≥ 0 the following:

f(t) = -e^{-3t} - 2t^2 e^{-t} + 6t e^{-t} + e^{-t}    (I.44)
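Both the expansion (I.43) and the inversion (I.44) can be checked mechanically (a sketch assuming the sympy library):

    from sympy import symbols, apart, inverse_laplace_transform, expand

    s = symbols('s')
    t = symbols('t', positive=True)

    F = 8*(s**2 + 3*s + 1) / ((s + 3)*(s + 1)**3)
    print(apart(F, s))   # -1/(s+3) - 4/(s+1)**3 + 6/(s+1)**2 + 1/(s+1)

    # expect -exp(-3t) - 2t^2 exp(-t) + 6t exp(-t) + exp(-t),
    # possibly times a Heaviside(t) factor (equal to 1 for t > 0)
    print(expand(inverse_laplace_transform(F, s, t)))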
The second method for inverting Laplace transforms is the formal inversion integral

f(t) = \frac{1}{2\pi j} \int_{σ_c - j\infty}^{σ_c + j\infty} F^*(s)\, e^{st}\, ds    (I.45)

for t ≥ 0 and σ_c > σ_a. The integration in the complex s-plane is taken to be a straight-line integration parallel to the imaginary axis and lying to the right of σ_a, the abscissa of absolute convergence for F*(s). The usual means for carrying out this integration is to make use of the Cauchy residue theorem as applied to the integral in the complex domain around a closed contour. The closed contour we choose for this purpose is a semicircle of infinite radius, as shown in Figure I.4. In this figure we see that the path of integration required for Eq. (I.45) is s₃ - s₁ and that the semicircle of infinite radius closing this contour is given as s₁ - s₂ - s₃. If the integral along the path s₁ - s₂ - s₃ is 0, then the integral along the entire closed contour will in fact give us f(t) from Eq. (I.45). To establish that this contribution is 0, we need Jordan's lemma.
[Figure I.4 The closed contour in the s-plane: the line of integration parallel to the axis ω = Im(s), closed for t > 0 by a semicircle of infinite radius in the left half-plane σ = Re(s).]
Thus, in order to carry out the complex inversion integral shown in Eq. (I.45), we must first express F*(s) in a form for which Jordan's lemma applies. Having done this we may then evaluate the integral around the closed contour C by calculating residues and using Cauchy's residue theorem. This is most easily carried out if F*(s) is in rational form with a factored denominator as in Eq. (I.39). In order for Jordan's lemma to apply, we will require, as we did before, that the degree of the numerator be strictly less than the degree of the denominator, and if this is not so, we must divide the rational function until the remainder has this property. That is all there is to the method. Let us carry this out on our previous example, namely that given in Eq. (I.42). We note this is already in a form for which Jordan's lemma applies, and so we may proceed directly with Cauchy's residue theorem. Our poles are located at s = -3 and s = -1. We begin by calculating the residue at s = -3, thus

r_{-3} = (s + 3)\, F^*(s)\, e^{st} \big|_{s=-3} = \frac{8(s^2 + 3s + 1)\, e^{st}}{(s + 1)^3} \Big|_{s=-3} = \frac{8(9 - 9 + 1)\, e^{-3t}}{(-2)^3} = -e^{-3t}
whereas the residue at s = -1 is calculated as

r_{-1} = \frac{1}{2!} \frac{d^2}{ds^2} \left[ (s + 1)^3\, F^*(s)\, e^{st} \right]_{s=-1}
       = \frac{1}{2} \frac{d^2}{ds^2} \left[ \frac{8(s^2 + 3s + 1)\, e^{st}}{s + 3} \right]_{s=-1}
       = \frac{1}{2} \frac{d}{ds} \left[ \frac{(s + 3)[8(2s + 3)\, e^{st} + 8(s^2 + 3s + 1)\, t\, e^{st}] - 8(s^2 + 3s + 1)\, e^{st}}{(s + 3)^2} \right]_{s=-1}

Carrying out this last differentiation and evaluating at s = -1 yields

r_{-1} = -2t^2 e^{-t} + 6t e^{-t} + e^{-t}

and the sum of the residues, r_{-3} + r_{-1}, reproduces exactly the function f(t) found in Eq. (I.44) by the inspection method.
As may be anticipated from our contour integration methods, it is sometimes necessary to determine exactly how many singularities of a function exist within a closed region. A very powerful and convenient theorem which aids us in this determination is given as follows:

Rouché's Theorem [GUIL 49]: If f(s) and g(s) are analytic functions of s inside and on a closed contour C, and also if |g(s)| < |f(s)| on C, then f(s) and f(s) + g(s) have the same number of zeroes inside C.
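For example, take f(s) = 5s and g(s) = s³ + s + 1 with C the unit circle |s| = 1. On C we have |g(s)| ≤ 3 < 5 = |f(s)|, and f(s) = 5s has exactly one zero inside C; hence s³ + 6s + 1 also has exactly one zero in |s| < 1. It is precisely this kind of counting of the zeroes of a transform denominator inside the unit disk (or a half-plane) for which Rouché's theorem is used in queueing theory.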
I.4. TRANSFORMS IN DIFFERENCE AND DIFFERENTIAL EQUATIONS

Consider the Nth-order linear difference equation with constant coefficients

\sum_{i=0}^{N} a_i\, g_{n-i} = e_n    (I.46)

where the a_i are given constants and e_n is a given driving sequence. As usual, the complete solution will consist of a homogeneous solution g_n^{(h)} [which solves Eq. (I.46) when e_n = 0] plus a particular solution g_n^{(p)}. For the homogeneous solution we try g_n = A α^n, which upon substitution into the homogeneous equation gives

\sum_{i=0}^{N} a_i\, A\, α^{n-i} = 0    (I.48)

This Nth-order polynomial clearly has N solutions, which we will denote by α₁, α₂, ..., α_N, assuming for the moment that all the α_i are distinct. Associated with each such solution is an arbitrary constant A_i which will be determined from the initial conditions for the difference equation (of which there must be N). By cancelling the common term A α^{n-N} from Eq. (I.48) we finally arrive at the characteristic equation which determines the values α_i:

\sum_{i=0}^{N} a_i\, α^{N-i} = 0    (I.49)
Thus the search for the homogeneous solution is now reduced to finding the N roots of our characteristic equation (I.49). If all N of the α_i are distinct, then the homogeneous solution is

g_n^{(h)} = A_1 α_1^n + A_2 α_2^n + \cdots + A_N α_N^n

In the case of nondistinct roots we have a slightly different situation. In particular, let α₁ be a multiple root of order k; in this case the k equal roots will contribute to the homogeneous solution in the following form:

(A_{11} n^{k-1} + A_{12} n^{k-2} + \cdots + A_{1,k-1} n + A_{1k})\, α_1^n

and similarly for any other multiple roots. As far as the particular solution g_n^{(p)} is concerned, we know that it must be found by an appropriate guess from the form of e_n.
Let us illustrate some of these principles by means of an example. Consider the second-order difference equation

6 g_n - 5 g_{n-1} + g_{n-2} = 6 \left( \frac{1}{5} \right)^n,   n = 2, 3, 4, \ldots    (I.50)

This equation gives the relationship among the unknown functions g_n for n = 2, 3, 4, .... Of course, we must give two initial conditions (since the order is 2) and we choose these to be g_0 = 0, g_1 = 6/5. In order to find the homogeneous solution we must form Eq. (I.49), which in this case becomes

6α^2 - 5α + 1 = 0

whose roots are α₁ = 1/2 and α₂ = 1/3; thus the homogeneous solution takes the form

g_n^{(h)} = A_1 \left( \frac{1}{2} \right)^n + A_2 \left( \frac{1}{3} \right)^n
The particular solution must be guessed at, and the correct guess in this case is

g_n^{(p)} = B \left( \frac{1}{5} \right)^n

If we plug g_n^{(p)} as given back into our basic equation, namely, Eq. (I.50), we find that B = 1, and so we are convinced that the particular solution is correct. Thus our complete solution is given by

g_n = A_1 \left( \frac{1}{2} \right)^n + A_2 \left( \frac{1}{3} \right)^n + \left( \frac{1}{5} \right)^n

We use the initial conditions to solve for A₁ and A₂ and find A₁ = 8 and A₂ = -9. Thus our final solution is

g_n = 8 \left( \frac{1}{2} \right)^n - 9 \left( \frac{1}{3} \right)^n + \left( \frac{1}{5} \right)^n,   n = 0, 1, 2, \ldots    (I.51)

This completes the standard way for solving our difference equation.
Let us now describe the method of z-transforms for solving difference equations. Assume once again that we are given Eq. (I.46) and that it is good in the range n = k, k + 1, .... Our approach begins by defining the following z-transform:

G(z) = \sum_{n=0}^{\infty} g_n z^n    (I.52)

From our earlier discussion we know that once we have found G(z) we may then apply our inversion techniques to find the desired solution g_n. Our next step is to multiply the nth equation from Eq. (I.46) by z^n and then form the sum of all such multiplied equations from k to infinity; that is, we form

\sum_{n=k}^{\infty} \sum_{i=0}^{N} a_i\, g_{n-i}\, z^n = \sum_{n=k}^{\infty} e_n z^n

We then carry out the summations and attempt to recognize G(z) in this single equation. Next we solve for G(z) algebraically and then proceed with our inversion techniques to obtain the solution. This method does not require that we guess at the particular solution, and so in that sense is simpler than the direct method; however, as we shall see, it still has the basic difficulty that we must solve the characteristic equation [Eq. (I.49)], and in general this is the difficult part of the solution. However, even if we cannot solve for the roots α_i, it is possible to obtain meaningful properties of the solution g_n from the transform G(z) itself.
Let us solve our earlier example using the method of z-transforms. Accordingly we begin with Eq. (I.50), multiply by z^n and then sum; the sum will go from 2 to infinity since this is the applicable range for that equation. Thus

6 \sum_{n=2}^{\infty} g_n z^n - 5 \sum_{n=2}^{\infty} g_{n-1} z^n + \sum_{n=2}^{\infty} g_{n-2} z^n = 6 \sum_{n=2}^{\infty} \left( \frac{1}{5} \right)^n z^n

We now factor out enough powers of z from each sum so that these powers match the subscript on g, thusly:

6 \sum_{n=2}^{\infty} g_n z^n - 5z \sum_{n=2}^{\infty} g_{n-1} z^{n-1} + z^2 \sum_{n=2}^{\infty} g_{n-2} z^{n-2} = 6 \sum_{n=2}^{\infty} \left( \frac{1}{5} \right)^n z^n

Focusing on the first summation we see that it is almost of the form G(z) except that it is missing the terms for n = 0 and n = 1 [see Eq. (I.52)]; applying this observation to each of the sums on the left-hand side and carrying out the summation on the right-hand side directly, we find

6[G(z) - g_0 - g_1 z] - 5z[G(z) - g_0] + z^2 G(z) = \frac{6(1/5)^2 z^2}{1 - (1/5)z}

Observe how the first term in this last equation reflects the fact that our summation was missing the first two terms for G(z). Solving for G(z) algebraically we find

G(z) = \frac{-9}{1 - (1/3)z} + \frac{8}{1 - (1/2)z} + \frac{1}{1 - (1/5)z}

which by our usual inversion methods yields the final solution

g_n = 8 \left( \frac{1}{2} \right)^n - 9 \left( \frac{1}{3} \right)^n + \left( \frac{1}{5} \right)^n,   n = 0, 1, 2, \ldots

Note that this is exactly the same as Eq. (I.51), and so our method checks. We comment here that even were we not able to invert the given form for G(z), we could still have found certain of its properties; for example, we could find
that the sum of all terms is given immediately by G(1), that is,

\sum_{n=0}^{\infty} g_n = G(1) = \frac{-9}{1 - 1/3} + \frac{8}{1 - 1/2} + \frac{1}{1 - 1/5} = \frac{15}{4}
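A minimal numerical cross-check of this example (Python; iterating the recurrence directly from the initial conditions and comparing with the closed form and with G(1)):

    g = [0.0, 6/5]                                  # g_0 = 0, g_1 = 6/5
    for n in range(2, 40):                          # 6 g_n - 5 g_{n-1} + g_{n-2} = 6 (1/5)^n
        g.append((5*g[n-1] - g[n-2] + 6*(1/5)**n) / 6)

    closed = [8*(1/2)**n - 9*(1/3)**n + (1/5)**n for n in range(40)]
    print(max(abs(a - b) for a, b in zip(g, closed)))   # ~0
    print(sum(closed), 15/4)                            # the partial sum is already near G(1)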
Let us now consider the linear differential equation with constant coefficients

\sum_{i=0}^{N} a_i \frac{d^i f(t)}{dt^i} = e(t)    (I.53)

Here the coefficients a_i are given constants, and e(t) is a given driving function. Along with this equation we must also be given N initial conditions in order to carry out a complete solution; these conditions typically are the values of the first N derivatives at some instant, usually at time zero. It is required to find the function f(t). As usual, we will have a homogeneous solution f^{(h)}(t), which solves the homogeneous equation [when e(t) = 0], as well as a particular solution f^{(p)}(t) that corresponds to the nonhomogeneous equation. The form for the homogeneous solution will be

f(t) = A e^{αt}

This equation will have N solutions α₁, α₂, ..., α_N, which must solve the characteristic equation

\sum_{i=0}^{N} a_i α^i = 0

which is equivalent to Eq. (I.49) with a change in subscripts. If all of the α_i are distinct, then the general form for our homogeneous solution will be

f^{(h)}(t) = \sum_{i=1}^{N} A_i e^{α_i t}

The evaluation of the coefficients A_i is carried out making use of the initial conditions. In the case of multiple roots we have the following modification. Let us assume that α₁ is a repeated root of order k; this multiple root will contribute to the homogeneous solution in the following way:

(A_{11} t^{k-1} + A_{12} t^{k-2} + \cdots + A_{1k})\, e^{α_1 t}
and in the case of more than one multiple root the modification is obvious. As usual, one must guess in order to find the particular solution f^{(p)}(t). The complete solution then is, of course, the sum of the homogeneous and particular solutions, namely,

f(t) = f^{(h)}(t) + f^{(p)}(t)
Let us apply this method to the solution of the following differential equation for illustrative purposes:

\frac{d^2 f(t)}{dt^2} - 6 \frac{df(t)}{dt} + 9 f(t) = 2t

with f(0^-) = 0 and f^{(1)}(0^-) = 0. The characteristic equation α² - 6α + 9 = 0 yields the double root

α₁ = α₂ = 3

and so the homogeneous solution must be of the form

f^{(h)}(t) = (A_{11} t + A_{12})\, e^{3t}

The particular solution corresponding to the driving function 2t is

f^{(p)}(t) = \frac{2}{9} t + \frac{4}{27}

Since our initial conditions state that both f(t) and its first derivative must be zero at t = 0-, we find that A_{11} = 2/9 and A_{12} = -4/27, which gives for our final and complete solution

f(t) = \frac{2}{9} t + \frac{4}{27} + \frac{2}{9} t e^{3t} - \frac{4}{27} e^{3t}

which is identical to our former solution given in Eq. (I.56).
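This solution may be confirmed symbolically (a sketch assuming the sympy library and using the differential equation as reconstructed above):

    from sympy import Function, Eq, dsolve, symbols

    t = symbols('t')
    f = Function('f')

    ode = Eq(f(t).diff(t, 2) - 6*f(t).diff(t) + 9*f(t), 2*t)
    sol = dsolve(ode, f(t), ics={f(0): 0, f(t).diff(t).subs(t, 0): 0})
    print(sol)   # f(t) = 2*t/9 + 4/27 + (2*t/9 - 4/27)*exp(3*t)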
In our study of queueing systems we often encounter not only difference equations and differential equations but also the combination in the form of differential-difference equations. That is, if we refer back to Eq. (I.53) and replace the time functions by time functions that depend upon an index, say n, and if we then display a set of differential equations for various values of n, then we have an infinite set of differential-difference equations. The solution to such equations often requires that we take both the z-transform on the discrete index n and the Laplace transform on the continuous time parameter t. Examples of this type of analysis are to be found in the text itself.
REFERENCES

AHLF 66  Ahlfors, L. V., Complex Analysis, 2nd Edition, McGraw-Hill (New York), 1966.
CADZ 73  Cadzow, J. A., Discrete-Time Systems, Prentice-Hall (Englewood Cliffs, N.J.), 1973.
DOET 61  Doetsch, G., Guide to the Applications of Laplace Transforms, Van Nostrand (Princeton), 1961.
GUIL 49  Guillemin, E. A., The Mathematics of Circuit Analysis, Wiley (New York), 1949.
JURY 64  Jury, E. I., Theory and Application of the z-Transform Method, Wiley (New York), 1964.
SCHW 59  Schwartz, L., Théorie des Distributions, 2nd printing, Actualités scientifiques et industrielles Nos. 1245 and 1122, Hermann et Cie. (Paris), Vol. 1 (1957), Vol. 2 (1959).
WIDD 46  Widder, D. V., The Laplace Transform, Princeton University Press (Princeton), 1946.
APPENDIX II

It is appropriate at this point to define some set-theoretic notation [for example, the use of the symbol ∪ in property (c)]. Typically, we describe an event A as follows: A = {ω : ω satisfies the membership property for the event A}; this is read as "A is the set of sample points ω such that ω satisfies the membership property for the event A." The first additional notion we need is the conditional probability of the event A given the event B, defined as

P[A \mid B] \triangleq \frac{P[AB]}{P[B]}

whenever P[B] ≠ 0. The introduction of the conditional event B forces us to restrict attention from the original sample space S to a new sample space defined by the event B; since this new constrained sample space must now have a total probability of unity, we magnify the probabilities associated with conditional events by dividing by the term P[B] as given above.
The second additional notion we need is that of statistical independence of events. Two events A, B are said to be statistically independent if and only if

P[AB] = P[A]\, P[B]    (II.5)

For three events A, B, C we require that each pair of events satisfies Eq. (II.5) and in addition

P[ABC] = P[A]\, P[B]\, P[C]

This definition extends, of course, to n events, requiring the n-fold factoring of the probability expression as well as all the (n - 1)-fold factorings, all the way
down to all the pairwi se factorings. It is easy to see for two ind ependen t
events A, B that PtA I B} = Pt A], which merely says that knowledge of the
occurrence of the event B in no way affects the probability of the Occurrence
of the independent event A .
The theorem of total probability is especially simple and important. It relates the probability of an event B and a set of mutually exclusive, exhaustive events {A_i} as defined in Eq. (II.4). The theorem is

P[B] = Σ_{i=1}^{n} P[A_i B]

which merely says that if the event B is to occur it must occur in conjunction with exactly one of the mutually exclusive, exhaustive events A_i. However, from the definition of conditional probability we may always write

P[A_i B] = P[A_i | B] P[B] = P[B | A_i] P[A_i]

Thus we have the second important form of the theorem of total probability, namely,

P[B] = Σ_{i=1}^{n} P[B | A_i] P[A_i]

This last equation is perhaps one of the most useful for us in studying queueing theory. It suggests the following approach for finding the probability of some complex event B: first condition the event B on some event A_i in such a way that the calculation of the probability of B given this condition is less complex, and then multiply by the probability of the conditioning event A_i to yield the joint probability P[A_i B]; this having been done for a set of mutually exclusive, exhaustive events {A_i}, we may then sum these probabilities to find the probability of the event B. Of course, this approach can be extended: we may wish to condition the event B on more than one event, then uncondition each of these events suitably (by multiplying by the probability of the appropriate condition), and then sum over all possible forms of all conditions. We will use this approach many times in the text.
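As a small illustration of this conditioning approach, the following sketch computes P[B] by conditioning on a set of mutually exclusive, exhaustive events. The two conditioning events (which of two coins is selected) and all numerical values are illustrative assumptions only.

```python
# Theorem of total probability: P[B] = sum over i of P[B | A_i] * P[A_i].
# The events and numbers below are illustrative assumptions.

P_A = {"fair": 0.5, "biased": 0.5}           # mutually exclusive, exhaustive events A_i
P_B_given_A = {"fair": 0.5, "biased": 0.9}   # P[B | A_i], with B = "the coin shows heads"

P_B = sum(P_B_given_A[a] * P_A[a] for a in P_A)
print(P_B)                                   # 0.5*0.5 + 0.9*0.5 = 0.70
```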
We now come to the well-known Bayes' theorem. Once again we consider a set of events {A_i}, which are mutually exclusive and exhaustive. The theorem says

P[A_i | B] = P[B | A_i] P[A_i] / Σ_{j=1}^{n} P[B | A_j] P[A_j]
A simple example is in order here to illustrate some of these ideas. Consider that you have just entered a gambling casino in Las Vegas. You approach a dealer who is known to have an identical twin brother; the twins cannot be distinguished. It is further known that one of the twins is an honest dealer whereas the second twin is a cheating dealer, in the sense that when you play with the honest dealer you lose with probability one-half, whereas when you play with the cheating dealer you lose with probability p (if p is greater than one-half, he is cheating against you, whereas if p is less than one-half he is cheating for you). Furthermore, it is equally likely that upon entering the casino you will find one or the other of these two dealers. Consider that you now play one game with the particular twin whom you encounter and, further, that you lose. Of course you are disappointed, and you would now like to calculate the probability that the dealer you faced was in fact the cheat, for if you can establish that this probability is close to unity, you have a case for suing the casino. Let D_H be the event that you play with the honest dealer and let D_C be the event that you play with the cheating dealer; further, let L be the event that you lose. What we are then asking for is P[D_C | L]. It is not immediately obvious how to make this calculation; however, if we apply Bayes' theorem the calculation itself is trivial, for

P[D_C | L] = P[L | D_C] P[D_C] / (P[L | D_C] P[D_C] + P[L | D_H] P[D_H])
           = p(1/2) / (p(1/2) + (1/2)(1/2)) = 2p / (2p + 1)

This is the answer we were seeking, and we find that the probability of having faced a cheating dealer, given that we lost in one play, ranges from 0 (p = 0) to 2/3 (p = 1). Thus, even if we know that the cheating dealer is completely dishonest (p = 1), we can only say that with probability 2/3 we faced this cheat, given that we lost one play.
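A quick check of this result, both from the formula and by simulating the game exactly as described, is sketched below; the value of p and the number of simulated plays are arbitrary illustrative choices.

```python
import random

# Verify P[D_C | L] = 2p/(2p + 1) for the twin-dealer example.
p = 0.75
analytic = 2 * p / (2 * p + 1)

trials, lost, lost_with_cheat = 200_000, 0, 0
for _ in range(trials):
    cheat = random.random() < 0.5                  # the two dealers are equally likely
    lose = random.random() < (p if cheat else 0.5)
    if lose:
        lost += 1
        lost_with_cheat += cheat
print(analytic, lost_with_cheat / lost)            # the two numbers should be close
```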
As a final word on elementary topics, let us remind the reader that the number of permutations of N objects taken K at a time is

N!/(N − K)! = N(N − 1) ··· (N − K + 1)

and that the number of combinations of N objects taken K at a time, the binomial coefficient, is

(N over K) = N!/(K!(N − K)!)
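In Python these two counts are available directly; the values of N and K below are arbitrary illustrative choices.

```python
from math import comb, perm   # available in Python 3.8+

N, K = 52, 5
print(perm(N, K))   # N!/(N-K)!        ordered selections
print(comb(N, K))   # N!/(K!(N-K)!)    unordered selections
```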
II.2. RANDOM VARIABLES

A random variable X is a mapping X(ω) from the sample points ω of a probability system onto the real line. As an example used repeatedly below, consider a single play of blackjack in which you may win five dollars (event W), draw (event D), or lose five dollars (event L); the corresponding random variable is

X(ω) = { +5, ω ∈ W;   0, ω ∈ D;   −5, ω ∈ L }
[Figure: the mapping X(ω) onto the real line, and the resulting PDF F_X(x), a staircase rising from 0 to 3/8 at x = −5, to 5/8 at x = 0, and to 1 at x = +5.]
Having displayed the PDF F_X(x) ≜ P[X ≤ x] for this example, we now proceed to the definition of the probability density function (pdf), defined as

f_X(x) ≜ dF_X(x)/dx    (II.10)

Thus the pdf is a function which, when integrated over an interval, gives the probability that the random variable X lies in that interval; namely, for a < b we have

P[a < X ≤ b] = ∫_a^b f_X(x) dx

From this, and the axiom stated in Eq. (II.1), we see that this last equation implies

f_X(x) ≥ 0

As an example, consider the exponential distribution

F_X(x) = { 1 − e^{−λx},  0 ≤ x;   0,  x < 0 }    (II.12)

where λ > 0. The corresponding pdf is given by

f_X(x) = { λe^{−λx},  0 ≤ x;   0,  x < 0 }    (II.13)
For this example, the probability that the random variable lies between the values a (> 0) and b (> a) may be calculated in either of the two following ways:

P[a < X ≤ b] = F_X(b) − F_X(a) = e^{−λa} − e^{−λb}

P[a < X ≤ b] = ∫_a^b f_X(x) dx = e^{−λa} − e^{−λb}
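The two computations above are easy to verify numerically; in the sketch below the values of λ, a, and b are illustrative assumptions, and the integral of the pdf is approximated by a simple Riemann sum.

```python
from math import exp

# P[a < X <= b] for the exponential pdf f_X(x) = lam*exp(-lam*x), computed two ways.
lam, a, b = 2.0, 0.5, 1.5

via_PDF = (1 - exp(-lam * b)) - (1 - exp(-lam * a))       # F_X(b) - F_X(a)

n = 100_000                                               # crude Riemann sum of the pdf
dx = (b - a) / n
via_pdf = sum(lam * exp(-lam * (a + (i + 0.5) * dx)) * dx for i in range(n))

print(via_PDF, via_pdf)         # both approximately exp(-lam*a) - exp(-lam*b)
```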
From our blackjack example we notice that the PDF has a derivative which is everywhere 0 except at the three critical points (x = −5, 0, +5). In order to complete the definition of the pdf when the PDF is discontinuous, we recognize that we must introduce a function such that, when it is integrated over the region of the discontinuity, it yields a value equal to the size of the discontinuous jump; that is, in the blackjack example the probability density function* must be such that when integrated from −5 − ε to −5 + ε (for small ε > 0) it should yield a probability equal to 3/8. Such a function has already been studied in Appendix I and is, of course, the impulse function (or Dirac delta function). Recall that such a function u_0(x) is given by

u_0(x) = { ∞,  x = 0;   0,  x ≠ 0 }

∫_{−∞}^{∞} u_0(x) dx = 1
* In the discrete case, the function P[X = x_k] is often referred to as the probability mass function. The generalization to the pdf leads one to the notion of a mass density function.
* It can be shown that any PDF may be decomposed into a sum of three parts, namely, a pure jump function (containing only discontinuous jumps), a purely continuous portion, and a singular portion (which rarely occurs in distribution functions of interest and which will be considered no further in this text).
[Figure: (a) the PDF and (b) the pdf for a continuous random variable.]
We next consider the natural extension of the PDF to two random variables, namely,

F_XY(x, y) ≜ P[X ≤ x, Y ≤ y]

which is merely the probability that X takes on a value less than or equal to x at the same time that Y takes on a value less than or equal to y; that is, it is the sum of the probabilities associated with all sample points in the intersection of the two events {ω : X(ω) ≤ x} and {ω : Y(ω) ≤ y}. F_XY(x, y) is referred to as the joint PDF. Of course, associated with this function is a joint probability density function defined as

f_XY(x, y) ≜ ∂²F_XY(x, y) / ∂x ∂y

Given a joint pdf, one naturally inquires as to the "marginal" density function for one of the variables, and this is clearly given by integrating over all possible values of the second variable; thus

f_X(x) = ∫_{−∞}^{∞} f_XY(x, y) dy

Similarly, we may define the conditional pdf of X given Y as

f_{X|Y}(x | y) ≜ ∂P[X ≤ x | Y = y]/∂x = f_XY(x, y) / f_Y(y)

much as in the definition of conditional probability for events.
To review again, we see that a random variable is defined as a mapping from the sample space for some probability system into the real line, and from this mapping the PDF may easily be determined. Usually, however, a random variable is not given in terms of its sample space and the mapping, but rather directly in terms of its PDF or pdf.
It is possible to define one random variable Y in terms of a second random variable X, in which case Y would be referred to as a function of the random variable X. In its most general form we then have

Y = g(X)    (II.15)

where g(·) is some given function of its argument. Thus, once the value for X is determined, the value for Y may be computed; however, the value for X depends upon the sample point ω, and therefore so does the value of Y, which we may therefore write as Y = Y(ω) = g(X(ω)). Given the random variable X and its PDF, one should be able to calculate the PDF for the random variable Y once the function g(·) is known. In principle, the computation takes the form F_Y(y) = P[Y ≤ y] = P[g(X) ≤ y]. An especially important case is the sum of n random variables,

Y = Σ_{i=1}^{n} X_i    (II.16)
Let us derive the distribution function of the sum of two independent random variables (n = 2). It is clear that this distribution is given by

F_Y(y) = P[X_1 + X_2 ≤ y]

We have the situation shown in Figure II.6. Integrating over the indicated region we have

F_Y(y) = ∫_{−∞}^{∞} [ ∫_{−∞}^{y − x_2} f_{X_1}(x_1) dx_1 ] f_{X_2}(x_2) dx_2
       = ∫_{−∞}^{∞} F_{X_1}(y − x_2) f_{X_2}(x_2) dx_2

Differentiating with respect to y yields the pdf

f_Y(y) = ∫_{−∞}^{∞} f_{X_1}(y − x_2) f_{X_2}(x_2) dx_2

This last equation is merely the convolution of the density functions for X_1 and X_2 and, as in Eq. (I.36), we denote this convolution operator (which is both associative and commutative) by an asterisk enclosed within a circle. Thus

f_Y(y) = f_{X_1}(y) ⊛ f_{X_2}(y)

In a similar fashion, one easily shows for the case of arbitrary n that the pdf for Y as defined in Eq. (II.16) is given by the convolution of the pdf's for the X_i, that is,

f_Y(y) = f_{X_1}(y) ⊛ f_{X_2}(y) ⊛ ··· ⊛ f_{X_n}(y)    (II.17)
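The following sketch checks Eq. (II.17) numerically for n = 2 with exponentially distributed X_1 and X_2; the closed-form value λ²y e^{−λy} used for comparison is the well-known two-stage Erlangian density and serves only as a check (λ, y, and the step size are illustrative assumptions).

```python
from math import exp

# Numerical convolution of two exponential pdfs, evaluated at a single point y.
lam, y, dx = 1.5, 2.0, 0.0001

def f(x):                       # common exponential pdf
    return lam * exp(-lam * x) if x >= 0 else 0.0

conv = sum(f(i * dx) * f(y - i * dx) * dx for i in range(int(y / dx) + 1))
print(conv, lam * lam * y * exp(-lam * y))   # the two values should agree closely
```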
II.3. EXPECTATION
In this section we discuss certain measures associated with the PDF and the pdf for a random variable. These measures will in general be called expectations, and they deal with integrals of the pdf. As we saw in the last section, the pdf involves certain difficulties in its definition, and these difficulties were handily resolved by the use of impulse functions. However, in much of the literature on probability theory, and in most of the literature on queueing theory, the use of impulses is either not accepted, not understood, or not known; as a result, special care and notation have been built up to get around the problem of differentiating discontinuous functions. The result is that many of the integrals encountered are Stieltjes integrals rather than the usual Riemann integrals with which we are most familiar. Let us take a moment to define the Stieltjes integral. A Stieltjes integral is defined in terms of a nondecreasing function F(x) and a continuous function ψ(x); in addition, two sets of points {t_k} and {ξ_k} such that t_{k−1} < ξ_k ≤ t_k are defined, and a limit is considered where max |t_k − t_{k−1}| → 0. From these definitions, consider the sum Σ_k ψ(ξ_k)[F(t_k) − F(t_{k−1})]; its limit, when it exists, is the Stieltjes integral, written

∫ ψ(x) dF(x)

Of course, we recognize that the PDF may be identified with the function F in this definition and that dF(x) may be identified with the pdf [say, f(x)] through

dF(x) = f(x) dx

by definition. Without the use of impulses the pdf may not exist; however, the Stieltjes integral will always exist, and therefore it avoids the issue of impulses. However, in this text we will feel free to incorporate impulse functions and therefore will work with both the Riemann and Stieltjes integrals; when impulses are permitted in the function f(x) we then have the
following identity:

∫ ψ(x) dF(x) = ∫ ψ(x) f(x) dx

We may now define the expectation (or mean, or average value) of the random variable X as

E[X] ≜ X̄ = ∫_{−∞}^{∞} x dF_X(x)    (II.18)

This last is given in the form of a Stieltjes integral; in the form of a Riemann integral we have, of course,

E[X] = X̄ = ∫_{−∞}^{∞} x f_X(x) dx

Consider now a random variable Y defined as a function of X, namely Y = g(X). We may define the expectation E_Y[Y] for Y in terms of its PDF just as we did for X; the subscript Y on the expectation is there to distinguish expectation with respect to Y as opposed to any other random variable (in this case X). Thus we have

E_Y[Y] = ∫_{−∞}^{∞} y dF_Y(y)
This last computation requires that we find either F_Y(y) or f_Y(y), which, as mentioned in the previous section, may be a rather complex computation. However, the fundamental theorem of expectation gives a much more straightforward calculation for this expectation in terms of the distribution of the underlying random variable X, namely,

E_Y[Y] = E_X[g(X)] = ∫_{−∞}^{∞} g(x) f_X(x) dx
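As a small numerical illustration of the fundamental theorem of expectation, the sketch below computes E[g(X)] directly from f_X(x) for an exponentially distributed X and g(x) = x², without ever finding the distribution of Y = g(X). The rate λ, the step size, and the truncation of the integral are illustrative assumptions (the exact value 2/λ² reappears later in this appendix).

```python
from math import exp

lam, dx, x_max = 2.0, 0.0005, 40.0

# E[g(X)] = integral of g(x)*f_X(x) dx, with g(x) = x**2 and f_X exponential.
E_gX = 0.0
for i in range(int(x_max / dx)):
    x = (i + 0.5) * dx
    E_gX += x**2 * lam * exp(-lam * x) * dx

print(E_gX, 2 / lam**2)     # both approximately 0.5
```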
We may compute the expectation of the sum of two random variables by the following obvious generalization of the one-dimensional case:

E[X + Y] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} (x + y) f_XY(x, y) dx dy
         = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x f_XY(x, y) dx dy + ∫_{−∞}^{∞} ∫_{−∞}^{∞} y f_XY(x, y) dx dy
         = E[X] + E[Y]    (II.19)

In the special case where the two random variables X and Y are independent, we may write the pdf for this joint random variable as the product of the pdf's for the individual random variables, thus obtaining, for any functions g_1 and g_2,

E[g_1(X) g_2(Y)] = E[g_1(X)] E[g_2(Y)]    (II.21)

The nth central moment of X is E[(X − X̄)^n]; it may be expressed in terms of the first n moments themselves. To show this we first write down the following identity, making use of the binomial theorem:

(X − X̄)^n = Σ_{k=0}^{n} (n over k) X^k (−X̄)^{n−k}    (II.22)

Taking expectations term by term expresses the nth central moment in terms of the moments E[X^k], k = 0, 1, ..., n. In particular, the first central moment is always zero:

E[X − X̄] = X̄ − X̄ = 0
The second central moment is the important quantity known as the variance of X and is denoted by σ_X²:

σ_X² ≜ E[(X − X̄)²] = E[X²] − (X̄)²

In the second equality of this last equation we have taken advantage of Eq. (II.22) and have expressed the variance (a central moment) in terms of the first two moments themselves. The square root of the variance, σ_X, is referred to as the standard deviation. The ratio of the standard deviation to the mean of a random variable is a most important quantity in statistics and also in queueing theory; this ratio is referred to as the coefficient of variation and is denoted by

C_X ≜ σ_X / X̄    (II.23)
The characteristic function of the random variable X, denoted φ_X(u), is defined as

φ_X(u) ≜ E[e^{juX}] = ∫_{−∞}^{∞} e^{jux} f_X(x) dx

An important property of the characteristic function may be seen by expanding the exponential in the integrand in terms of its power series and then integrating each term separately, as follows:

φ_X(u) = 1 + juX̄ + ((ju)²/2!) E[X²] + ···

From this expansion, we see that the characteristic function is expressed in terms of all the moments of X. Now, if we set u = 0 we find that φ_X(0) = 1. Similarly, if we first form dφ_X(u)/du and then set u = 0, we obtain jX̄. Thus, in general, we have

d^n φ_X(u)/du^n |_{u=0} = j^n E[X^n]    (II.24)

This last important result gives a rather simple way for calculating a constant times the nth moment of the random variable X.
Since this property is frequently used, we find it convenient to adopt the following simplified notation [consistent with that in Eq. (I.37)] for the nth derivative of an arbitrary function g(x), evaluated at some fixed value x = x_0:

g^{(n)}(x_0) ≜ d^n g(x)/dx^n |_{x=x_0}    (II.25)

In this notation, Eq. (II.24) reads

φ_X^{(n)}(0) = j^n E[X^n]

The moment generating function, denoted by M_X(v), is given below along with the appropriate differential relationship that yields the nth moment of X directly:

M_X(v) ≜ E[e^{vX}] = ∫_{−∞}^{∞} e^{vx} f_X(x) dx

M_X^{(n)}(0) = E[X^n]

where v is a real variable. From this last property it is easy to see where the name "moment generating function" comes from. The derivation of this moment relationship is the same as that for the characteristic function.
Another important and useful function is the Laplace transform of the pdf of a random variable X. We find it expedient to use a notation now in which the PDF for a random variable is labeled in a way that identifies the random variable without the use of subscripts. Thus, for example, if we have a
random variable X which represents, say, the interarrival time between adjacent customers to a system, then we define A(x) to be the PDF for X:

A(x) = P[X ≤ x]

where the symbol A is keyed to the word "Arrival." Further, the pdf for this example would be denoted a(x). Finally, then, we denote the Laplace transform of a(x) by A*(s), and it is given by the following:

A*(s) ≜ E[e^{−sX}] = ∫_{0⁻}^{∞} e^{−sx} a(x) dx

The reader should take special note that the lower limit 0 is defined as 0⁻; that is, the limit comes in from the left so that we specifically mean to include any impulse functions at the origin. In a fashion identical to that for the moment generating function and for the characteristic function, we may find the moments of X through the following formula:

A*^{(n)}(0) = (−1)^n E[X^n]    (II.26)

For nonnegative random variables we have

|A*(s)| ≤ ∫_{0⁻}^{∞} |e^{−sx}| |a(x)| dx

But the complex variable s consists of a real part Re(s) = σ and an imaginary part Im(s) = ω such that s = σ + jω. Then we have

|e^{−sx}| = |e^{−σx} e^{−jωx}| ≤ |e^{−σx}| |e^{−jωx}| = e^{−σx}

Moreover, for Re(s) ≥ 0 and x ≥ 0, e^{−σx} ≤ 1, and so we have from these last two equations, and from ∫_{0⁻}^{∞} a(x) dx = 1,

|A*(s)| ≤ 1    for Re(s) ≥ 0

It is clear that the three functions φ_X(u), M_X(v), A*(s) are all close relatives of each other. In particular, we have the following relationship:

φ_X(js) = M_X(−s) = A*(s)

Thus we are not surprised that the moment generating properties (by differentiation) are so similar for each; this property is the central property that we will take advantage of in our studies. Thus the nth moment of X is calculable from any of the following expressions:

E[X^n] = j^{−n} φ_X^{(n)}(0)
E[X^n] = M_X^{(n)}(0)
E[X^n] = (−1)^n A*^{(n)}(0)
It is perhaps worthwhile to carry out an example demonstrating these properties. Consider the continuous random variable X which represents, say, the interarrival time of customers to a system and which is exponentially distributed, that is,

f_X(x) = a(x) = { λe^{−λx},  x ≥ 0;   0,  x < 0 }

By direct substitution into the defining integrals we find immediately that

φ_X(u) = λ/(λ − ju)
M_X(v) = λ/(λ − v)
A*(s) = λ/(λ + s)

From any of these (or directly) we find the mean

X̄ = 1/λ

and we may also verify that the second moment may be calculated from any of the three to yield

E[X²] = 2/λ²
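These moment calculations are easily checked symbolically; the following sketch differentiates the three transforms just obtained (using the sympy package, an assumption of convenience, not something used in the text) and recovers the second moment 2/λ² from each.

```python
import sympy as sp

lam, u, v, s = sp.symbols('lam u v s', positive=True)

phi = lam / (lam - sp.I * u)      # characteristic function of the exponential
M   = lam / (lam - v)             # moment generating function
A   = lam / (lam + s)             # Laplace transform of the pdf

n = 2
m_phi = sp.simplify(sp.diff(phi, u, n).subs(u, 0) / sp.I**n)
m_M   = sp.simplify(sp.diff(M, v, n).subs(v, 0))
m_A   = sp.simplify((-1)**n * sp.diff(A, s, n).subs(s, 0))
print(m_phi, m_M, m_A)            # each should print 2/lam**2
```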
For a discrete random variable X that takes on only integer values, with

g_k ≜ P[X = k]

we make use of the probability generating function, denoted by G(z), as follows:

G(z) ≜ E[z^X] = Σ_k z^k g_k    (II.27)

For |z| ≤ 1 we have

|G(z)| ≤ Σ_k |z^k| |g_k| ≤ Σ_k g_k

and so

|G(z)| ≤ 1    for |z| ≤ 1    (II.28)

Note that the first derivative evaluated at z = 1 yields the first moment of X,

G^{(1)}(1) = X̄    (II.29)

whereas the second derivative evaluated at z = 1 yields

G^{(2)}(1) = E[X²] − X̄

For our blackjack example (with X the winnings from a single play) we have

G(z) = (3/8) z^{−5} + 1/4 + (3/8) z^5

We note here that, of course, G(1) = 1 and, further, that the mean winnings may be calculated as

X̄ = G^{(1)}(1) = 0
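A short symbolic check of this generating function (again using sympy purely as a convenience) confirms G(1) = 1, the zero mean, and the second moment obtained from G^{(2)}(1) + G^{(1)}(1).

```python
import sympy as sp

z = sp.symbols('z', positive=True)
G = sp.Rational(3, 8) * z**-5 + sp.Rational(1, 4) + sp.Rational(3, 8) * z**5

G1 = sp.diff(G, z).subs(z, 1)          # first moment, X-bar
G2 = sp.diff(G, z, 2).subs(z, 1)       # E[X(X-1)]
print(G.subs(z, 1), G1, G2 + G1)       # 1, 0, and E[X^2] = 75/4
```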
Consider now the sum Y = X_1 + X_2 + ··· + X_n of n independent random variables, and let us calculate its characteristic function:

φ_Y(u) ≜ E[e^{juY}] = E[e^{ju(X_1 + X_2 + ··· + X_n)}] = E[e^{juX_1} e^{juX_2} ··· e^{juX_n}]

Now in Eq. (II.21) we showed that the expectation of the product of functions of independent random variables is equal to the product of the expectations of each function separately; applying this to the above we have

φ_Y(u) = E[e^{juX_1}] E[e^{juX_2}] ··· E[e^{juX_n}]

Of course the right-hand side of this equation is just a product of characteristic functions, and so

φ_Y(u) = φ_{X_1}(u) φ_{X_2}(u) ··· φ_{X_n}(u)    (II.30)

When the X_i are also identically distributed, each with characteristic function φ_X(u), this becomes

φ_Y(u) = [φ_X(u)]^n    (II.31)

We have thus shown that the characteristic function of a sum of n identically
distributed independent random variables is the nth power of the character-
istic function of the individual random variable itself. This important result
also applies to our other transforms, namely, the moment generating func-
tion, the Laplace transform and the z-transform. It is this significant property
that accounts, in no small way, for the widespread use of transforms in
probability theory and in the theory of stochastic processes.
Let us say a few more words now about sums of independent random variables. We have seen in Eq. (II.17) that the pdf of a sum of independent random variables is equal to the convolution of the pdf for each; also, we have seen in Eq. (II.30) that the transform of the sum is equal to the product of the transforms for each. From Eq. (II.19) it is clear (regardless of the independence) that the expectation of the sum equals the sum of the expectations, namely,

Ȳ = X̄_1 + X̄_2 + ··· + X̄_n    (II.32)

For n = 2 we see that the second moment of Y must be

E[Y²] = E[(X_1 + X_2)²] = E[X_1²] + 2E[X_1X_2] + E[X_2²]

And also in this case

(Ȳ)² = (X̄_1 + X̄_2)² = (X̄_1)² + 2X̄_1X̄_2 + (X̄_2)²

Forming the variance of Y and then using these last two equations we have

σ_Y² = E[Y²] − (Ȳ)²
     = E[X_1²] − (X̄_1)² + E[X_2²] − (X̄_2)² + 2(E[X_1X_2] − X̄_1X̄_2)
     = σ_{X_1}² + σ_{X_2}² + 2(E[X_1X_2] − X̄_1X̄_2)

and the last term vanishes when X_1 and X_2 are independent. In a similar fashion it is easy to show that the variance of the sum of n independent random variables is equal to the sum of the variances of each, that is,

σ_Y² = Σ_{i=1}^{n} σ_{X_i}²
Continuing with sums of independent random variables, let us now assume that the number of these variables to be summed together is itself a random variable; that is, we define

Y = Σ_{i=1}^{N} X_i

where N is a random variable independent of the (identically distributed) X_i. Conditioning on N = k and using the independence of the X_i, we find

Y*(s) ≜ E[e^{−sY}] = Σ_k P[N = k] [X*(s)]^k    (II.33)

where we have denoted the Laplace transform of the pdf for each of the X_i by X*(s). The final expression given in Eq. (II.33) is immediately recognized as the z-transform (probability generating function) of N evaluated at z = X*(s).
II.5. INEQUALITIES AND LIMIT THEOREMS

A number of simple inequalities bound the tail of a distribution in terms of its moments. The first is the Markov inequality, which states that for a nonnegative random variable X and any x > 0,

P[X ≥ x] ≤ X̄ / x

Since only the mean value of the random variable is utilized, this inequality is rather weak. The Chebyshev inequality makes use of the mean and variance and is somewhat tighter; it states that for any x > 0,

P[|X − X̄| ≥ x] ≤ σ_X² / x²

Other simple inequalities involve moments of two random variables, as follows. First we have the Cauchy–Schwarz inequality, which makes a statement about the expectation of a product of random variables in terms of the second moments of each:

(E[XY])² ≤ E[X²] E[Y²]    (II.35)

A generalization of this last is Hölder's inequality, which states for α > 1, β > 1, α⁻¹ + β⁻¹ = 1, and X > 0, Y > 0 that

E[XY] ≤ (E[X^α])^{1/α} (E[Y^β])^{1/β}
We also have the triangle inequality

E[|X + Y|] ≤ E[|X|] + E[|Y|]

A generalization of the triangle inequality, which is known as the c_r-inequality, is

E[|X + Y|^r] ≤ c_r (E[|X|^r] + E[|Y|^r])

where

c_r = { 1,  0 < r ≤ 1;   2^{r−1},  1 < r }

Next we bound the expectation of a convex function g of an arbitrary random variable X (whose first moment X̄ is assumed to exist). A convex function g(x) is one that lies on or below all of its chords; that is, for any x_1 ≤ x_2 and 0 ≤ α ≤ 1,

g(αx_1 + (1 − α)x_2) ≤ αg(x_1) + (1 − α)g(x_2)

For such a function, Jensen's inequality states that E[g(X)] ≥ g(X̄).
We turn now to limit theorems for sums of independent, identically distributed random variables X_1, X_2, ..., each with mean X̄ and variance σ_X². Consider the arithmetic mean of the first n of them,

W_n = (1/n) Σ_{i=1}^{n} X_i

whose mean is E[W_n] = X̄ and whose variance is σ_X²/n.
If we now apply the Chebyshev inequality to the random variable W_n and make use of these last two observations, we may express our bound in terms of the mean and variance of the random variable X itself, thus:

P[|W_n − X̄| ≥ x] ≤ σ_X² / (nx²)    (II.36)

This very important result says that the arithmetic mean of the sum of n independent and identically distributed random variables will approach its expected value as n increases. This is due to the decreasing value of σ_X²/nX̄² as n grows (σ_X²/X̄² remains constant). In fact, this leads us directly to the weak law of large numbers, namely, that for any ε > 0 we have

lim_{n→∞} P[|W_n − X̄| ≥ ε] = 0

The strong law of large numbers states that

lim_{n→∞} W_n = X̄    with probability one

The central limit theorem concerns the normalized sum

Z_n = (Σ_{i=1}^{n} X_i − nX̄) / (σ_X n^{1/2})    (II.37)

and states that

lim_{n→∞} P[Z_n ≤ x] = Φ(x)

where

Φ(x) = ∫_{−∞}^{x} (1/(2π)^{1/2}) e^{−y²/2} dy

That is, the appropriately normalized sum of a large number of independent random variables tends to a Gaussian, or normal, distribution. There are many other forms of the central limit theorem that deal, for example, with dependent random variables.
A rather sophisticated means for bounding the tail of the sum of a large number of independent random variables is available in the form of the Chernoff bound. It involves an inequality similar to the Markov and Chebyshev inequalities, but makes use of the entire distribution of the random variable itself (in particular, the moment generating function). Thus let us consider the sum of n independent, identically distributed random variables X_i as given by

Y = Σ_{i=1}^{n} X_i

From Eq. (II.31) we know that the moment generating function for Y, M_Y(v), is related to the moment generating function for each of the random variables X_i [namely, M_X(v)] through the relationship

M_Y(v) = [M_X(v)]^n    (II.38)

As with our earlier inequalities, we are interested in the probability that our sum exceeds a certain value, and this may be calculated as

P[Y ≥ y] = ∫_{−∞}^{∞} u_{−1}(x − y) f_Y(x) dx

where u_{−1}(·) is the unit step function. Clearly, for v ≥ 0 the unit step function [see Eq. (I.33)] is bounded above by the following exponential:

u_{−1}(x − y) ≤ e^{v(x−y)}

so that

P[Y ≥ y] ≤ e^{−vy} ∫_{−∞}^{∞} e^{vx} f_Y(x) dx

However, the integral on the right-hand side of this equation is merely the moment generating function for Y, and so we have

P[Y ≥ y] ≤ e^{−vy} M_Y(v),    v ≥ 0    (II.40)

Let us now define the "semi-invariant" generating function

γ_Y(v) ≜ log M_Y(v)

so that, from Eq. (II.38),

γ_Y(v) = n γ_X(v)

Applying these last two to Eq. (II.40) we arrive at

P[Y ≥ y] ≤ e^{−vy + nγ_X(v)},    v ≥ 0
Since this last is good for any value of v (≥ 0), we should choose v to create the tightest possible bound; this is simply carried out by differentiating the exponent and setting it equal to zero. We thus find the optimum relationship between v and y as

y = nγ_X^{(1)}(v)    (II.41)

Thus the Chernoff bound for the tail of a density function takes the final form*

P[Y ≥ nγ_X^{(1)}(v)] ≤ e^{n[γ_X(v) − vγ_X^{(1)}(v)]},    v ≥ 0    (II.42)

It is perhaps worthwhile to carry out an example demonstrating the use of this last bounding procedure. For this purpose, let us go back to the second paragraph in this appendix, in which we estimated the odds that at least 490,000 heads would occur in a million tosses of a fair coin. Of course, that calculation is the same as calculating the probability that no more than 510,000 heads will occur in the same experiment, assuming the coin is fair. In this example the random variable X may be chosen as follows:

X = { 1,  heads;   0,  tails }

Since Y is the sum of a million trials of this experiment, we have that n = 10^6, and we now ask for the complementary probability that Y add up to 510,000 or more, namely, P[Y ≥ 510,000]. The moment generating function for X is
M_X(v) = 1/2 + (1/2)e^v

and so

γ_X(v) = log [(1/2)(1 + e^v)]

Similarly,

γ_X^{(1)}(v) = e^v / (1 + e^v)

From our formula (II.41) we then must have

nγ_X^{(1)}(v) = 10^6 e^v/(1 + e^v) = 510,000 = y

Thus we have

e^v = 51/49

and

v = log (51/49)
* The same derivation leads to a bound on the "lower tail," in which all three inequalities from Eq. (II.42) face thusly: ≤. For example, v ≤ 0.
Thus we see typically how v might be calculated. Plugging these values back into Eq. (II.42) we conclude

P[Y ≥ 510,000] ≤ e^{10^6 [log(50/49) − 0.51 log(51/49)]}

This computation shows that the probability of exceeding 510,000 heads in a million tosses of a fair coin is less than 10^{−86} (this is where the number in our opening paragraphs comes from). An alternative way of carrying out this computation would be to make use of the central limit theorem. Let us do so as an example. For this we require the calculation of the mean and variance of X, which are easily seen to be X̄ = 1/2, σ_X² = 1/4. Thus from Eq. (II.37) we have

Z_n = (Y − 10^6(1/2)) / ((1/2)10^3)

If we require Y to be greater than 510,000, then we are requiring that Z_n be greater than 20. If we now go to a table of the cumulative normal distribution, we find that

P[Z_n ≥ 20] = 1 − Φ(20) ≈ 25 × 10^{−90}

Again we see the extreme implausibility of such an event occurring. On the other hand, the Chebyshev inequality, as given in Eq. (II.36), yields the following:

P[|W_n − 1/2| ≥ 0.01] ≤ 0.25 / (10^6 × 10^{−4}) = 25 × 10^{−4}

This result is twice as large as it should be for our calculation, since we have effectively calculated both tails (namely, the probability that more than 510,000 or less than 490,000 heads would occur); thus the appropriate answer for the Chebyshev inequality would be that the probability of exceeding 510,000 heads is less than or equal to 12.5 × 10^{−4}. Note what a poor result this inequality gives compared to the central limit theorem approximation, which in this case is comparable to the Chernoff bound.
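The three numerical comparisons made above are easy to reproduce; the sketch below evaluates the Chernoff bound, the normal-tail approximation (via the complementary error function, a convenience not used in the text), and the Chebyshev bound for the same coin-tossing experiment.

```python
from math import log, exp, sqrt, erfc

n, y = 10**6, 510_000

# Chernoff bound: exponent n*[gamma(v) - v*gamma'(v)] with e**v = 51/49
exponent = n * (log(50 / 49) - 0.51 * log(51 / 49))
print("Chernoff bound    :", exp(exponent))

# Central limit theorem: P[Z_n >= 20] = 1 - Phi(20)
z = (y - n * 0.5) / (0.5 * sqrt(n))
print("Normal tail (z=20):", 0.5 * erfc(z / sqrt(2)))

# Chebyshev (both tails), then halved for the upper tail alone
cheb = 0.25 / (n * 0.01**2)
print("Chebyshev         :", cheb / 2)
```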
II.6. STOCHASTIC PROCESSES

For a stationary random process X(t), the mean is independent of time,

E[X(t)] = X̄    (II.43)

and the autocorrelation function R_XX(t_1, t_2) ≜ E[X(t_1)X(t_2)] depends only on the difference of its arguments,

R_XX(t_1, t_2) = R_XX(t_2 − t_1)    (II.44)

that is, R_XX is a function only of the time difference τ = t_2 − t_1. In the stationary case, then, random processes are characterized in the second-order theory only by a constant (their mean X̄) and a one-dimensional function R_XX(τ). A random process is said to be wide-sense stationary if Eqs. (II.43) and (II.44) hold. Note that all stationary processes are wide-sense stationary, but not conversely.
Glossary of Notation*

(Only the notation used often in this book is included below.)

NOTATION†           DEFINITION                                    TYPICAL PAGE REFERENCE

A_n(t) = A(t)       P[t_n ≤ t] = P[t̃ ≤ t]                          13
A_n*(s) = A*(s)     Laplace transform of a(t)                       14
a_k                 kth moment of a(t)                              14
a_n(t) = a(t)       dA_n(t)/dt = dA(t)/dt                           14
B_n(x) = B(x)       P[x_n ≤ x] = P[x̃ ≤ x]                          14
B_n*(s) = B*(s)     Laplace transform of b(x)                       14
b_k                 kth moment of b(x)                              14
b_n(x) = b(x)       dB_n(x)/dx = dB(x)/dx                           14
C_b²                Coefficient of variation for service time      187
C_n                 nth customer to enter the system                11
C_n(u) = C(u)       P[u_n ≤ u]                                     281
C_n*(s) = C*(s)     Laplace transform of c_n(u) = c(u)             285
c_n(u) = c(u)       dC_n(u)/du = dC(u)/du                          281
D                   Denotes deterministic distribution             viii
d_k                 P[q = k]                                       176
E[X] = X̄           Expectation of the random variable X           378
E_i                 System state i                                  27
E_r                 Denotes r-stage Erlangian distribution         124
FCFS                First-come-first-served                          8
F_X(x)              P[X ≤ x]                                       370
f_X(x)              dF_X(x)/dx                                     371
G                   Denotes general distribution                   viii

* In those few cases where a symbol has more than one meaning, the context (or a specific statement) resolves the ambiguity.
† The use of the notation y_n → y is meant to indicate that y = lim y_n as n → ∞, whereas y(t) → y indicates that y = lim y(t) as t → ∞.
Summary of Important Results

GENERAL SYSTEMS

ρ = λx̄    (G/G/1)                                                    18
ρ ≜ λx̄/m    (G/G/m)                                                  18
T = x̄ + W                                                            18
N = λT    (Little's result)                                           17
N_q = λW                                                              17
N_q = N − ρ                                                          188
dP_k(t)/dt = flow rate into E_k − flow rate out of E_k                59
p_k = r_k    (for Poisson arrivals)                                  176
r_k = d_k    [N(t) makes unit changes]                               176
MARKOV PROCESSES

For a summary of discrete-state Markov chains, see the table on pp. 402–403.

POISSON PROCESSES

P_k(t) = ((λt)^k / k!) e^{−λt},    k ≥ 0, t ≥ 0                       60
E[N(t)] = λt                                                          62
BIRTH-DEATH SYSTEMS

dP_k(t)/dt = −(λ_k + μ_k)P_k(t) + λ_{k−1}P_{k−1}(t) + μ_{k+1}P_{k+1}(t),    k ≥ 1      57
dP_0(t)/dt = −λ_0 P_0(t) + μ_1 P_1(t),    k = 0                                        57

p_k = p_0 ∏_{i=0}^{k−1} (λ_i/μ_{i+1})    (equilibrium solution)                         92

p_0 = 1 / [1 + Σ_{k=1}^{∞} ∏_{i=0}^{k−1} (λ_i/μ_{i+1})]                                 92
M/M/1

W = ρ / [μ(1 − ρ)]                                                   191
T = (1/μ) / (1 − ρ)                                                   98
P[≥ k in system] = ρ^k                                                99
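As a small illustration of these M/M/1 results, the sketch below evaluates T, W, and N for arbitrary example values of λ and μ; the distribution p_k = (1 − ρ)ρ^k used at the end follows directly from the tail probability ρ^k listed above.

```python
# Illustrative M/M/1 calculation; lam and mu are arbitrary example values.
lam, mu = 0.8, 1.0
rho = lam / mu

T = (1 / mu) / (1 - rho)          # mean time in system
W = (rho / mu) / (1 - rho)        # mean wait in queue
N = lam * T                       # Little's result
p = [(1 - rho) * rho**k for k in range(5)]   # p_k = P[>=k] - P[>=k+1] = (1-rho)*rho**k
print(T, W, N, p)
```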
DISCRETE-STATE MARKOV CHAINS (summary table, pp. 402–403)

Discrete-time chains (homogeneous; non-homogeneous):
  One-step transition probability:  p_ij ≜ P[X_{n+1} = j | X_n = i];   p_ij(n, n+1) ≜ P[X_{n+1} = j | X_n = i]
  Matrix of one-step transition probabilities:  P ≜ [p_ij];   P(n) ≜ [p_ij(n, n+1)]
  Multiple-step transition probabilities:  p_ij^{(m)} ≜ P[X_{n+m} = j | X_n = i];   p_ij(m, n) ≜ P[X_n = j | X_m = i]
  Matrix of multiple-step transition probabilities:  P^{(m)} ≜ [p_ij^{(m)}];   H(m, n) ≜ [p_ij(m, n)]
  State probability:  π_j^{(n)} ≜ P[X_n = j],  with vector Π^{(n)} ≜ [π_j^{(n)}]
  Forward equation:  π^{(n)} = π^{(n−1)}P;   π^{(n)} = π^{(n−1)}P(n − 1)
  Solution:  π^{(n)} = π^{(0)}P^n;   π^{(n)} = π^{(0)}P(0)P(1)···P(n − 1)
  Equilibrium solution:  π = πP
  Transform relationship:  [I − zP]^{−1} ⇔ P^n

Continuous-time chains (homogeneous; non-homogeneous):
  Transition probability over an increment:  p_ij(Δt) ≜ P[X(t + Δt) = j | X(t) = i];   p_ij(t, t + Δt) ≜ P[X(t + Δt) = j | X(t) = i]
  Interval transition probabilities:  p_ij(t) ≜ P[X(s + t) = j | X(s) = i];   p_ij(s, t) ≜ P[X(t) = j | X(s) = i]
  Matrices of interval transition probabilities:  H(t) ≜ [p_ij(t)];   H(s, t) ≜ [p_ij(s, t)]
  Transition-rate matrix:  Q = lim_{Δt→0} [P(Δt) − I]/Δt;   Q(t) = lim_{Δt→0} [P(t, t + Δt) − I]/Δt
  State probability:  π_j(t) ≜ P[X(t) = j],  with vector Π(t) ≜ [π_j(t)]
  Forward equation:  dπ(t)/dt = π(t)Q;   dπ(t)/dt = π(t)Q(t)
  Solution:  π(t) = π(0)e^{Qt};   π(t) = π(0) exp[∫_0^t Q(u) du]
  Equilibrium solution:  πQ = 0
  Transform relationship:  [sI − Q]^{−1} ⇔ H(t)
f_n = (1/n) (2n−2 over n−1) ρ^{n−1} (1 + ρ)^{1−2n}                    218

p_k = { [(1 − λ/μ)/(1 − (λ/μ)^{K+1})] (λ/μ)^k,  0 ≤ k ≤ K;   0,  otherwise }    (M/M/1/K)    104

p_k = [M!/(M − k)!] (λ/μ)^k / { Σ_{i=0}^{M} [M!/(M − i)!] (λ/μ)^i }    (M/M/1//M)    107

M/M/m

p_k = { p_0 (mρ)^k/k!,  k ≤ m;   p_0 (mρ)^m ρ^{k−m}/m!,  k ≥ m }                102

P[queueing] = [(mρ)^m/(m!(1 − ρ))] / [ Σ_{k=0}^{m−1} (mρ)^k/k! + (mρ)^m/(m!(1 − ρ)) ]    (Erlang C formula)    103

p_k = [(λ/μ)^k/k!] / [ Σ_{i=0}^{m} (λ/μ)^i/i! ]    (M/M/m/m)                    105
M/D/1

W = ρx̄ / [2(1 − ρ)]                                                  191

G(y) = Σ_{n=1}^{⌊y/x̄⌋} ((nρ)^{n−1}/n!) e^{−nρ}                       219

f_n = ((nρ)^{n−1}/n!) e^{−nρ}                                         219
p_k = (1 − ρ) Σ_{i=1}^{r} A_i z_i^{−k}    (M/E_r/1; z_i, i = 1, 2, ..., r)      129

p_k = { 1 − ρ,  k = 0;   ρ(z_0^r − 1) z_0^{−rk},  k > 0 }    (E_r/M/1)           133
H_R (R-stage Hyperexponential Distribution)

b(x) = Σ_{i=1}^{R} α_i μ_i e^{−μ_i x},    x ≥ 0                       141

C_b² ≥ 1                                                              143

MARKOVIAN NETWORKS

λ_i = γ_i + Σ_{j=1}^{N} λ_j r_{ji}                                    149

(closed)                                                              152
M/G/1

f̂(y) = [1 − F(y)]/m_1    (residual-life density)                     172
v̄ = ρ                                                                183
v̄² − v̄ = λ²x̄² = ρ²(1 + C_b²)                                        187
V(z) = B*(λ − λz)                                                     184
q̄ = ρ + ρ²(1 + C_b²)/[2(1 − ρ)]    (P-K mean value formula)          187

T/x̄ = 1 + ρ(1 + C_b²)/[2(1 − ρ)]    (P-K mean value formula)         191

W/x̄ = ρ(1 + C_b²)/[2(1 − ρ)]    (P-K mean value formula)             191

W = W_0/(1 − ρ)    (P-K mean value formula)                           190

W_0 ≜ λx̄²/2                                                          190

W*(s) = s(1 − ρ)/[s − λ + λB*(s)]    (P-K transform equation)         200

S*(s) = B*(s) s(1 − ρ)/[s − λ + λB*(s)]                               199
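The P-K mean value formulas are easily evaluated numerically; in the sketch below the arrival rate, mean service time, and squared coefficient of variation are arbitrary example values, and Little's result from the general-systems list above is used as a consistency check.

```python
# Illustrative P-K mean value calculation for an M/G/1 queue.
lam, xbar, Cb2 = 0.5, 1.2, 4.0      # arrival rate, mean service time, C_b**2 (assumed)
rho = lam * xbar

W = rho * xbar * (1 + Cb2) / (2 * (1 - rho))     # mean wait in queue
T = xbar + W                                     # mean time in system
q = rho + rho**2 * (1 + Cb2) / (2 * (1 - rho))   # mean number in system
print(W, T, q, lam * T)                          # note q == lam*T (Little's result)
```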
P[idle period ≤ y] = 1 − e^{−λy},    y ≥ 0                            208

G*(s) = B*(s + λ − λG*(s))                                            212

G(y) = ∫_0^y Σ_{n=1}^{∞} e^{−λx} ((λx)^{n−1}/n!) b_{(n)}(x) dx        226

g_1 = x̄/(1 − ρ)                                                      213

g_2 = x̄²/(1 − ρ)³                                                    214

σ_Y² = [σ_b² + ρ(x̄)²]/(1 − ρ)³                                       214

g_3 = x̄³/(1 − ρ)⁴ + 3λ(x̄²)²/(1 − ρ)⁵                                214

h_1 = 1/(1 − ρ)                                                       217

h_2 = [ρ(1 − ρ) + λ²x̄²]/(1 − ρ)³ + 1/(1 − ρ)²                        218

σ_h² = [ρ(1 − ρ) + λ²x̄²]/(1 − ρ)³                                    218

∂F(w, t)/∂t = ∂F(w, t)/∂w − λF(w, t) + λ ∫_0^w B(w − x) d_x F(x, t)   227
M/G/∞

T = x̄                                                                234
s(y) = b(y)                                                           234

G/M/1

r_k = (1 − σ)σ^k,    k = 0, 1, 2, ...                                 251
σ = A*(μ − μσ)                                                        251
W(y) = 1 − σe^{−μ(1−σ)y},    y ≥ 0                                    252
W = σ/[μ(1 − σ)]                                                      252
G/M/m

σ = A*(mμ − mμσ)                                                      249
P[queue size = n | arriving customer queues] = (1 − σ)σ^n,    n ≥ 0    249
R_k = Σ_{i=k}^{m−2} R_i p_ik + Σ_{i=m−1}^{∞} σ^{i+1−m} p_ik           254

J = 1/(1 − σ) + Σ_{k=0}^{m−2} R_k                                     254

W = Jσ/[mμ(1 − σ)²]                                                   256
G/G/1

w_{n+1} = (w_n + u_n)^+                                               277
c(u) = a(−u) ⊛ b(u)                                                   281

W(y) = { ∫_{−∞}^{y} W(y − u) dC(u),  y ≥ 0;   0,  y < 0 }    (Lindley's integral equation)    283

A*(−s)B*(s) − 1 = Ψ_+(s)/Ψ_−(s)                                       286

Φ_+(s) = [lim_{s→0} Ψ_+(s)/s] / Ψ_+(s) = W(0^+)/Ψ_+(s)                290

Φ_+(s) = Ψ_−(0)(1 − ρ) t̄ / {[A*(−s)B*(s) − 1] Ψ_−(s)}                290