1 IEOR 6711: Continuous-Time Markov Chains

Copyright © 2009 by Karl Sigman

A Markov chain in discrete time, {X_n : n ≥ 0}, remains in any state for exactly one unit of time before making a transition (change of state). We proceed now to relax this restriction by allowing a chain to spend a continuous amount of time in any state, but in such a way as to retain the Markov property. As motivation, suppose we consider the rat in the open maze. Clearly it is more realistic to be able to keep track of where the rat is at any continuous time t ≥ 0 as opposed to only where the rat is after n steps. Assume throughout that our state space is S = Z = {..., −2, −1, 0, 1, 2, ...} (or some subset thereof). Suppose now that whenever a chain enters state i ∈ S, independent of the past, the length of time spent in state i is a continuous, strictly positive (and proper) random variable H_i called the holding time in state i.

When the holding time ends, the process then makes a transition into state j according to transition probability P_ij, independent of the past, and so on.¹ Letting X(t) denote the state at time t, we end up with a continuous-time stochastic process {X(t) : t ≥ 0} with state space S.
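The description above — hold in state i for a random time H_i, then jump according to P_ij — translates directly into a simulation. A minimal sketch, not from the notes: the two-state rates and jump matrix are made-up values, and exponential holding times are used for concreteness (the notes shortly show the Markov property forces this choice).

```python
import random

def simulate_ctmc(P, rates, x0, t_end, seed=42):
    """Simulate one path: hold in state i for an Exp(rates[i]) time,
    then jump to state j with probability P[i][j] (the embedded chain)."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    path = [(0.0, x0)]                      # (time of entry, state)
    while t < t_end:
        t += rng.expovariate(rates[x])      # holding time H_i ~ exp(a_i)
        x = rng.choices(range(len(P[x])), weights=P[x])[0]
        path.append((t, x))
    return path

# Hypothetical two-state chain: a_0 = 1, a_1 = 2, and the chain always
# swaps states at a transition (P_01 = P_10 = 1).
path = simulate_ctmc(P=[[0, 1], [1, 0]], rates=[1.0, 2.0], x0=0, t_end=10.0)
```

The returned path records the entry time of each visited state, which is all that is needed to reconstruct X(t) for any t.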

Our objective is to place conditions on the holding times to ensure that the continuous-time process satisfies the Markov property: the future, {X(s+t) : t ≥ 0}, given the present state, X(s), is independent of the past, {X(u) : 0 ≤ u < s}. Such a process will be called a continuous-time Markov chain (CTMC), and as we will conclude shortly, the holding times will have to be exponentially distributed. The formal definition is given by

Definition. A stochastic process {X(t) : t ≥ 0} with discrete state space S is called a continuous-time Markov chain (CTMC) if for all t ≥ 0, s ≥ 0, i ∈ S, j ∈ S,

P(X(s+t) = j | X(s) = i, {X(u) : 0 ≤ u < s}) = P(X(s+t) = j | X(s) = i) = P_ij(t).

P_ij(t) is the probability that the chain will be in state j, t time units from now, given it is in state i now. For each t ≥ 0 there is a transition matrix P(t) = (P_ij(t)), and P(0) = I, the identity matrix. As for discrete-time Markov chains, we are assuming here that the distribution of the future, given the present state X(s), does not depend on the present time s, but only on the present state X(s) = i, whatever it is, and the amount of time that has elapsed, t, since time s. In particular, P_ij(t) = P(X(t) = j | X(0) = i).

¹ P_ii > 0 is allowed, meaning that a transition back into state i from state i can occur. Each time this happens though, a new H_i, independent of the past, determines the new length of time spent in state i.

Note that, unlike the discrete-time case, there is no smallest "next time" until the next transition; there is a continuum of such possible times t.

For each fixed i and j, P_ij(t), t ≥ 0, defines a function which in principle can be studied by use of calculus and differential equations. Although this makes the analysis of CTMCs more difficult/technical than for discrete-time chains, we will, nonetheless, find that many similarities with discrete-time chains follow, and many useful results can be obtained.

A little thought reveals that the holding times must have the memoryless property and thus are exponentially distributed. To see this, suppose that X(t) = i. Time t lies somewhere in the middle of the holding time H_i for state i. The future after time t tells us, in particular, the remaining holding time in state i, whereas the past before time t tells us, in particular, the age of the holding time (how long the process has been in state i). In order for the future to be independent of the past given X(t) = i, we deduce that the remaining holding time must only depend (in distribution) on i and be independent of its age; the memoryless property follows.
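The memoryless claim used here, P(H > s + t | H > s) = P(H > t) when H ~ exp(a), is easy to verify numerically. A minimal sketch, where the rate a and the times s and t are arbitrary made-up values:

```python
import math
import random

# Numerical check that H ~ exp(a) is memoryless.
rng = random.Random(0)
a, s, t = 1.5, 0.4, 0.7
samples = [rng.expovariate(a) for _ in range(200_000)]

survivors = [h for h in samples if h > s]                 # condition on H > s
cond_tail = sum(h > s + t for h in survivors) / len(survivors)
exact_tail = math.exp(-a * t)                             # P(H > t) = e^{-at}
```

The estimated conditional tail probability agrees with the unconditional tail e^{−at}: the age s of the holding time carries no information about the remaining time.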

Since an exponential distribution is completely determined by its rate, we conclude that for each i ∈ S, there exists a constant (rate) a_i > 0, such that the chain, when entering state i, remains there, independent of the past, for an amount of time H_i ~ exp(a_i):

A CTMC makes transitions from state to state, independent of the past, according to a discrete-time Markov chain, but once entering a state remains in that state, independent of the past, for an exponentially distributed amount of time before changing state again.

Thus a CTMC can simply be described by a transition matrix P = (P_ij), describing how the chain changes state step-by-step at transition epochs, together with a set of rates {a_i : i ∈ S}, the holding-time rates. Each time state i is visited, the chain spends, on average, E(H_i) = 1/a_i units of time there before moving on.

Interpretation of the {a_i} as transition rates; the transition rate matrix (infinitesimal generator) Q

Assume here that P_ii = 0 for all i ∈ S. a_i can be interpreted as the transition rate out of state i given that X(t) = i; the intuitive idea being that the exponential holding time will end, independent of the past, in the next dt units of time with probability a_i dt. This can be made rigorous by recalling the little-o(h) results for the Poisson process: Let {N_i(t)} denote a Poisson counting process at rate a_i.

Then P(N_i(h) > 0) = a_i h + o(h) and P(N_i(h) = 1) = a_i h + o(h) (and P(N_i(h) > 1) = o(h)). Thus

lim_{h→0} P(X(h) ≠ i | X(0) = i)/h = lim_{h→0} P(N_i(h) = 1)/h = lim_{h→0} (a_i h + o(h))/h = a_i.   (1)

More generally, for j ≠ i, we additionally use the Markov property asserting that once leaving state i the chain will, independent of the past, next go to state j with probability P_ij to obtain

P′_ij(0) = lim_{h→0} P_ij(h)/h = lim_{h→0} P(N_i(h) = 1) P_ij /h = a_i P_ij.   (2)

a_i P_ij can thus be interpreted as the transition rate from state i to state j given that the chain is currently in state i. Similarly, P_ii(h) = 1 − P(X(h) ≠ i | X(0) = i), and so

P′_ii(0) = lim_{h→0} (P_ii(h) − 1)/h = lim_{h→0} −P(N_i(h) = 1)/h = −a_i.   (3)

Definition. The matrix Q = P′(0), given explicitly by (2) and (3), is called the transition rate matrix, or infinitesimal generator, of the Markov chain.

For example, if S = {0, 1, 2, 3, 4}, then

Q =
  ( −a_0         a_0 P_{0,1}   a_0 P_{0,2}   a_0 P_{0,3}   a_0 P_{0,4} )
  ( a_1 P_{1,0}  −a_1          a_1 P_{1,2}   a_1 P_{1,3}   a_1 P_{1,4} )
  ( a_2 P_{2,0}  a_2 P_{2,1}   −a_2          a_2 P_{2,3}   a_2 P_{2,4} )
  ( a_3 P_{3,0}  a_3 P_{3,1}   a_3 P_{3,2}   −a_3          a_3 P_{3,4} )
  ( a_4 P_{4,0}  a_4 P_{4,1}   a_4 P_{4,2}   a_4 P_{4,3}   −a_4        )

Note in passing that since we assume that P_ii = 0, i ∈ S, we conclude that each row of Q sums to 0.

The embedded discrete-time Markov chain

Letting τ_n denote the time at which the nth change of state (transition) occurs, we see that X_n = X(τ_n+), the state right after the nth transition, defines the underlying discrete-time Markov chain, called the embedded Markov chain.

{X_n} keeps track, consecutively, of the states visited right after each transition, and moves from state to state according to the one-step transition probabilities P_ij = P(X_{n+1} = j | X_n = i). This transition matrix (P_ij), together with the holding-time rates {a_i}, completely determines the CTMC.

Chapman-Kolmogorov equations

The Chapman-Kolmogorov equations for discrete-time Markov chains generalize to

Lemma (Chapman-Kolmogorov equations for CTMCs). For all t ≥ 0, s ≥ 0,

P(t+s) = P(s)P(t),

that is, for all t ≥ 0, s ≥ 0, i ∈ S, j ∈ S,

P_ij(t+s) = Σ_{k∈S} P_ik(s) P_kj(t).

As for discrete-time chains, the (easy) proof involves first conditioning on what state k the chain is in at time s given that X(0) = i, yielding P_ik(s), and then using the Markov property to conclude that the probability that the chain, now in state k, would then be in state j after an additional t time units is, independent of the past, P_kj(t).
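Both the generator construction (off-diagonal entries a_i P_ij, diagonal entries −a_i) and the Chapman-Kolmogorov identity can be checked numerically. The sketch below uses the standard fact that P(t) = e^{Qt}, computed by a truncated Taylor series, only as a way to produce a legitimate transition matrix; the three-state rates and embedded matrix are made-up illustrative values.

```python
import numpy as np

def generator(rates, P):
    """Q = P'(0): Q[i][j] = a_i * P_ij for j != i, Q[i][i] = -a_i (P_ii = 0)."""
    Q = np.array(rates, dtype=float)[:, None] * np.array(P, dtype=float)
    np.fill_diagonal(Q, -np.array(rates, dtype=float))
    return Q

def transition_matrix(Q, t, terms=60):
    """P(t) = e^{Qt}, via a truncated Taylor series (fine for small ||Qt||)."""
    out = np.eye(Q.shape[0])
    term = np.eye(Q.shape[0])
    for k in range(1, terms):
        term = term @ (Q * t) / k
        out = out + term
    return out

a = [1.0, 2.0, 4.0]                      # hypothetical holding-time rates
P = [[0.0, 0.5, 0.5],                    # hypothetical embedded chain, P_ii = 0
     [1.0, 0.0, 0.0],
     [0.25, 0.75, 0.0]]
Q = generator(a, P)
assert np.allclose(Q.sum(axis=1), 0.0)   # each row of Q sums to 0

# Chapman-Kolmogorov: P(s + t) = P(s) P(t)
s, t = 0.3, 0.8
lhs = transition_matrix(Q, s + t)
rhs = transition_matrix(Q, s) @ transition_matrix(Q, t)
```

In practice a library routine such as SciPy's matrix exponential would replace the hand-rolled series; the point here is only that the two sides of the lemma agree entry by entry.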

Examples of CTMCs

1. Poisson counting process: Let {N(t) : t ≥ 0} be the counting process for a Poisson process {t_n} at rate λ. Then {N(t)} forms a CTMC with S = {0, 1, 2, ...}, P_{i,i+1} = 1, a_i = λ, i ≥ 0: If N(t) = i then, by the memoryless property, the next arrival, arrival i+1, will, independent of the past, occur after an exponentially distributed amount of time at rate λ. The holding time in state i is simply the interarrival time, t_{i+1} − t_i, and τ_n = t_n since N(t) only changes state at an arrival time. Assuming that N(0) = 0, we conclude that X_n = N(t_n+) = n, n ≥ 0; the embedded chain is deterministic. This is a very special kind of CTMC for several reasons: (1) all holding times H_i have the same rate a_i = λ, and (2) N(t) is a non-decreasing process; it increases by one at each arrival time, and remains constant otherwise. As t → ∞, N(t) → ∞, step by step.

2. Consider the rat in the closed maze, in which at each transition the rat is equally likely to move to one of the neighboring two cells, but where now we assume that the holding time, H_i, in cell i is exponential at rate a_i = i, i = 1, 2, 3, 4.

Time is in minutes (say). Let X(t) denote the cell that the rat is in at time t. Given the rat is now in cell 2 (say), he will remain there, independent of the past, for an exponential amount of time with mean 1/2, and then move, independent of the past, to either cell 1 or 4. The other transitions are similarly explained. {X(t)} forms a CTMC. Note how cell 4 has the shortest holding time (mean 1/4 minutes), and cell 1 has the longest (mean 1 minute). Of intrinsic interest is to calculate the long-run proportion of time (continuous time now) that the rat spends in each cell:

P_i := lim_{t→∞} (1/t) ∫_0^t I{X(s) = i} ds,  i = 1, 2, 3, 4.

We will learn how to compute these later; they serve as the continuous-time analog to the discrete-time stationary probabilities π_i for discrete-time Markov chains. The vector P = (P_1, P_2, P_3, P_4) is called the limiting (stationary) distribution for the CTMC.

3. M/M/1 queue: Arrivals to a single-server queue are Poisson at rate λ.

There is one line (queue) to wait in, and customers independently (and independent of the Poisson arrival process) have service times {S_n} that are exponentially distributed at rate μ. We assume that customers join the tail of the queue, and hence begin service in the order that they arrive: first-in-queue-first-out-of-queue (FIFO). Let X(t) denote the number of customers in the system at time t, where "system" means the line plus the service area. So (for example), X(t) = 2 means that there is one customer in service and one waiting in line. Note that a transition can only occur at customer arrival or departure times, and that departures occur whenever a service completion occurs. At an arrival time X(t) jumps up by the amount 1, whereas at a departure time X(t) jumps down by the amount 1.

Determining the rates a_i: If X(t) = 0 then only an arrival can occur next, so the holding time is given by H_0 ~ exp(λ), the time until the next arrival; a_0 = λ, the arrival rate.
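A sketch of the M/M/1 chain as a simulation, following the determination of the rates begun above. From state 0 the holding time is exp(λ), as just derived; for states i ≥ 1 this sketch assumes the next event is whichever of the exp(λ) arrival clock and the exp(μ) service clock rings first, so the holding time is exp(λ+μ) and the jump is an arrival with probability λ/(λ+μ) (the competing-exponentials argument). The numerical rates below are arbitrary choices with λ < μ so the queue is stable.

```python
import random

def mm1_occupancy(lam, mu, n_events=200_000, seed=7):
    """Simulate X(t) for the M/M/1 queue and return the fraction of
    (simulated) time spent at each level of the system."""
    rng = random.Random(seed)
    x, time_at = 0, {}
    for _ in range(n_events):
        rate = lam if x == 0 else lam + mu   # a_0 = lam; a_i = lam + mu, i >= 1
        time_at[x] = time_at.get(x, 0.0) + rng.expovariate(rate)
        if x == 0 or rng.random() < lam / (lam + mu):
            x += 1                            # arrival
        else:
            x -= 1                            # departure
    total = sum(time_at.values())
    return {i: v / total for i, v in time_at.items()}

# Arbitrary stable example: lam = 1, mu = 2.
dist = mm1_occupancy(lam=1.0, mu=2.0)
```

The resulting time fractions are an empirical version of the limiting distribution discussed above: most of the time the system is empty or nearly empty when λ < μ.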

