
1 Limiting distribution for a Markov chain


Copyright © 2009 by Karl Sigman

In these lecture notes, we shall study the limiting behavior of Markov chains as time n → ∞. In particular, under suitable easy-to-check conditions, we will see that a Markov chain possesses a limiting probability distribution, π = (π_j)_{j∈S}, and that the chain, if started off initially with such a distribution, will be a stationary stochastic process. We will also see that we can find π by merely solving a set of linear equations.

1.1 Communication classes and irreducibility for Markov chains

For a Markov chain with state space S, consider a pair of states (i, j). We say that j is reachable from i, denoted by i → j, if there exists an integer n ≥ 0 such that P^n_{ij} > 0. This means that starting in state i, there is a positive probability (but not necessarily equal to 1) that the chain will be in state j at time n (that is, n steps later): P(X_n = j | X_0 = i) > 0.
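
For a finite chain, reachability can be checked directly from powers of the transition matrix. The sketch below is an illustration and not part of the original notes: it assumes the states are indexed 0, ..., N − 1 and that P is a NumPy array whose rows sum to 1. Since any state that is reachable at all is reachable within N − 1 steps, examining P^0 through P^{N−1} suffices.

```python
import numpy as np

def is_reachable(P, i, j):
    """Check whether state j is reachable from state i, i.e. whether
    P^n[i, j] > 0 for some 0 <= n <= N - 1.  On a finite chain with N
    states, a reachable state is reachable within N - 1 steps, so the
    search can safely stop there."""
    N = len(P)
    Pn = np.eye(N)          # P^0 = identity: every state reaches itself
    for _ in range(N):
        if Pn[i, j] > 0:
            return True
        Pn = Pn @ P         # advance to the next power of P
    return False
```

Two states i and j then communicate precisely when is_reachable(P, i, j) and is_reachable(P, j, i) both return True.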

If j is reachable from i, and i is reachable from j, then the states i and j are said to communicate, denoted by i ↔ j. The relation defined by communication satisfies the following three properties:

1. All states communicate with themselves: P^0_{ii} = 1 > 0.
2. If i ↔ j, then j ↔ i.
3. If i ↔ k and k ↔ j, then i ↔ j.

The above conditions imply that communication is an example of an equivalence relation, meaning that it shares the same three properties as the more familiar equality relation "=": i = i; if i = j, then j = i; if i = k and k = j, then i = j.

Only condition 3 above needs some justification, so we now prove it for completeness: Suppose there exist integers n, m such that P^n_{ik} > 0 and P^m_{kj} > 0. Letting l = n + m, we conclude that P^l_{ij} ≥ P^n_{ik} P^m_{kj} > 0, where we have formally used the Chapman-Kolmogorov equations. The point is that the chain can (with positive probability) go from i to j by first going from i to k (n steps) and then (independently of the past) going from k to j (an additional m steps).
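
To make the Chapman-Kolmogorov step concrete, here is a small numeric check; the 3-state matrix below is hypothetical and chosen only for illustration, not taken from the notes.

```python
import numpy as np

# A small hypothetical 3-state chain, used only to illustrate the bound.
P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.3, 0.5],
              [0.1, 0.0, 0.9]])

i, k, j = 0, 1, 2
n, m = 1, 1

P_n = np.linalg.matrix_power(P, n)
P_m = np.linalg.matrix_power(P, m)
P_l = np.linalg.matrix_power(P, n + m)

# Chapman-Kolmogorov: P^{n+m}[i, j] = sum over k of P^n[i, k] * P^m[k, j],
# so the single term through k is a lower bound on the (n+m)-step probability.
assert P_l[i, j] >= P_n[i, k] * P_m[k, j]
print(P_l[i, j], ">=", P_n[i, k] * P_m[k, j])   # here: 0.25 >= 0.25
```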

If we consider the rat in the open maze, we easily see that the set of states C1 = {1, 2, 3, 4} all communicate with one another, but state 0 only communicates with itself (since it is an absorbing state). Whereas state 0 is reachable from the other states, i → 0, no other state can be reached from state 0. We conclude that the state space S = {0, 1, 2, 3, 4} can be broken up into two disjoint subsets, C1 = {1, 2, 3, 4} and C2 = {0}, whose union equals S, and such that each of these subsets has the property that all states within it communicate. Disjoint means that their intersection contains no elements: C1 ∩ C2 = ∅. A little thought reveals that this kind of disjoint decomposition can be done with any Markov chain:

Proposition 1.1 For each Markov chain, there exists a unique decomposition of the state space S into a sequence of disjoint subsets C1, C2, ...,

S = ∪_{i=1}^∞ C_i,

in which each subset has the property that all states within it communicate. Each such subset is called a communication class of the Markov chain. (Note that i ↔ i always holds trivially, since P^0_{ii} = P(X_0 = i | X_0 = i) = 1.)

If we now consider the rat in the closed maze, S = {1, 2, 3, 4}, then we see that there is only one communication class C = {1, 2, 3, 4} = S: all states communicate. This is an example of what is called an irreducible Markov chain.

Definition 1.1 A Markov chain for which there is only one communication class is called an irreducible Markov chain: all states communicate.

Examples:

1. The simple random walk is irreducible. Here, S = {..., −1, 0, 1, ...}. Since 0 < p < 1, we can always reach any state from any other state, doing so step-by-step, using the fact that P_{i,i+1} = p, P_{i,i−1} = 1 − p. For example, −4 → 2 since P^6_{−4,2} ≥ p^6 > 0, and 2 → −4 since P^6_{2,−4} ≥ (1 − p)^6 > 0; thus −4 ↔ 2. In general, P^n_{i,j} > 0 for n = |i − j|.

2. The random walk from the gambler's ruin problem is not irreducible. Here, the random walk is restricted to the finite state space {0, 1, ..., N} and P_{00} = P_{NN} = 1. C1 = {0}, C2 = {1, ..., N − 1}, and C3 = {N} are the communication classes.

3. Consider a Markov chain with S = {0, 1, 2, 3} and transition matrix given by

P =
    1/2  1/2   0    0
    1/2  1/2   0    0
    1/3  1/6  1/6  1/3
     0    0    0    1

Notice how states 0, 1 keep to themselves in that, whereas they communicate with each other, no other state is reachable from them (together they form an absorbing set). Thus C1 = {0, 1}. Whereas every state is reachable from state 2, getting to state 2 is not possible from any other state; thus C2 = {2}. Finally, state 3 is absorbing, and so C3 = {3}. This example illustrates the general method of deducing communication classes by analyzing the transition matrix.
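
This deduction can be automated. Below is a minimal Python sketch (an illustration, not part of the original notes) that computes the communication classes of the Example 3 matrix by testing two-way reachability through powers of P; the expected output is the decomposition C1 = {0, 1}, C2 = {2}, C3 = {3} derived above.

```python
import numpy as np

# The 4-state transition matrix from Example 3 above.
P = np.array([[1/2, 1/2, 0,   0  ],
              [1/2, 1/2, 0,   0  ],
              [1/3, 1/6, 1/6, 1/3],
              [0,   0,   0,   1  ]])

N = len(P)

# reach[i, j] is True iff j is reachable from i in some n >= 0 steps;
# checking powers P^0 .. P^(N-1) is enough on a finite chain.
reach = np.eye(N, dtype=bool)
Pn = np.eye(N)
for _ in range(N):
    reach |= Pn > 0
    Pn = Pn @ P

# i and j communicate iff each is reachable from the other; group the
# states into communication classes accordingly.
classes = []
for i in range(N):
    if not any(i in c for c in classes):
        classes.append({j for j in range(N) if reach[i, j] and reach[j, i]})

print(classes)   # expected: [{0, 1}, {2}, {3}]
```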

1.2 Recurrence and stationary distributions

1.2.1 Recurrence and transience

Let τ_ii denote the return time to state i given X_0 = i:

τ_ii = min{n ≥ 1 : X_n = i | X_0 = i}, with τ_ii := ∞ if X_n ≠ i for all n ≥ 1.

It represents the amount of time (number of steps) until the chain returns to state i, given that it started in state i. Note how never returning is allowed for in the definition by setting τ_ii = ∞, so a return occurs if and only if τ_ii < ∞.

f_i := P(τ_ii < ∞) is thus the probability of ever returning to state i, given that the chain started in state i. A state i is called recurrent if f_i = 1, and transient if f_i < 1. By the (strong) Markov property, once the chain revisits state i, the future is independent of the past, and it is as if the chain is starting all over again in state i for the first time: each time state i is visited, it will be revisited with the same probability f_i, independent of the past. In particular, if f_i = 1, then the chain will return to state i over and over again, an infinite number of times; that is why the word "recurrent" is used. If state i is transient (f_i < 1), then it will only be visited a finite (random) number of times (after which only the remaining states j ≠ i can be visited by the chain).
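
For a concrete check, f_i can be estimated by simulation. The sketch below is an illustration, not from the notes; it reuses the Example 3 matrix, for which state 2 can only be revisited if the chain stays put on its first step (state 2 is unreachable from every other state), so f_2 = P_{22} = 1/6 ≈ 0.167.

```python
import numpy as np

rng = np.random.default_rng(0)

P = np.array([[1/2, 1/2, 0,   0  ],
              [1/2, 1/2, 0,   0  ],
              [1/3, 1/6, 1/6, 1/3],
              [0,   0,   0,   1  ]])

def returned(P, i, max_steps=200):
    """Simulate one path from X0 = i; report whether it returns to i
    within max_steps steps (a proxy for the event {tau_ii < infinity})."""
    x = i
    for _ in range(max_steps):
        x = rng.choice(len(P), p=P[x])
        if x == i:
            return True
    return False

trials = 10_000
f2_hat = sum(returned(P, 2) for _ in range(trials)) / trials
print(f2_hat)   # should be close to f_2 = 1/6 ≈ 0.167
```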

Counting over all time, the total number of visits to state i, given that X_0 = i, is given by the infinite sum of indicator rvs

N_i = Σ_{n=0}^∞ I{X_n = i | X_0 = i},   (1)

and has a geometric distribution,

P(N_i = n) = f_i^{n−1}(1 − f_i), n ≥ 1.

(We count the initial visit X_0 = i as the first visit.) The expected number of visits is thus given by E(N_i) = 1/(1 − f_i), and we conclude that a state i is recurrent (f_i = 1) if and only if E(N_i) = ∞, or equivalently, a state i is transient (f_i < 1) if and only if E(N_i) < ∞. Taking expected values in (1) yields

E(N_i) = Σ_{n=0}^∞ P^n_{i,i},

because E(I{X_n = i | X_0 = i}) = P(X_n = i | X_0 = i) = P^n_{i,i}. We thus obtain

Proposition 1.2 A state i is recurrent if and only if Σ_{n=0}^∞ P^n_{i,i} = ∞, and transient otherwise.

Proposition 1.3 For any communication class C, either all states in C are recurrent or all states in C are transient. Thus: if i and j communicate and i is recurrent, then so is j. Equivalently, if i and j communicate and i is transient, then so is j.
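
Before moving on, Proposition 1.2 can be checked numerically on the Example 3 chain (again a sketch of mine, not part of the notes). There P^n_{2,2} = (1/6)^n, since remaining able to return means never having left state 2, so Σ_n P^n_{2,2} = 1/(1 − 1/6) = 1.2 = E(N_2) < ∞ and state 2 is transient, while for the recurrent states the partial sums grow without bound.

```python
import numpy as np

P = np.array([[1/2, 1/2, 0,   0  ],
              [1/2, 1/2, 0,   0  ],
              [1/3, 1/6, 1/6, 1/3],
              [0,   0,   0,   1  ]])

# Partial sums of sum_n P^n[i, i]: bounded for transient states,
# divergent for recurrent ones.
total = np.zeros(len(P))
Pn = np.eye(len(P))
for n in range(200):
    total += np.diag(Pn)
    Pn = Pn @ P

print(total)   # the state-2 entry converges to E(N_2) = 1.2; the others keep growing
```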

In particular, for an irreducible Markov chain, either all states are recurrent or all states are transient.

Proof: Suppose two states i ≠ j communicate; choose an appropriate n so that p = P^n_{i,j} > 0. Now if i is recurrent, then so must be j, because every time i is visited there is this same positive probability p (a "success" probability) that j will be visited n steps later. But i being recurrent means it will be visited over and over again, an infinite number of times, so viewing these visits as a sequence of Bernoulli trials, we conclude that eventually there will be a success. (Formally, we are using the Borel-Cantelli theorem.)

Definition 1.2 For an irreducible Markov chain, if all states are recurrent, then we say that the Markov chain is recurrent, and transient otherwise.

The rat in the closed maze yields a recurrent Markov chain.

The rat in the open maze yields a Markov chain that is not irreducible; there are two communication classes, C1 = {1, 2, 3, 4} and C2 = {0}. C1 is transient, whereas C2 is recurrent.

Clearly, if the state space is finite for a given Markov chain, then not all of the states can be transient (for otherwise, after a finite number of steps (time) the chain would leave every state never to return; where would it go?). Thus we conclude that

Proposition 1.4 An irreducible Markov chain with a finite state space is always recurrent: all states are recurrent.

Finally, observe (from the argument that if two states communicate and one is recurrent, then so is the other) that for an irreducible recurrent chain, even if we start in some other state X_0 ≠ i, the chain will still visit state i an infinite number of times: for an irreducible recurrent Markov chain, each state j will be visited over and over again (an infinite number of times) regardless of the initial state X_0 = i. For example, if the rat in the closed maze starts off in cell 3, it will still return over and over again to cell 1. (A simulation illustrating this is sketched below.)
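
The sketch below illustrates this claim by simulation. It is not from the notes, and it assumes the closed maze is the 2×2 grid of cells {1, 2, 3, 4} from the earlier lecture notes, in which the rat moves to each of its two adjacent cells with probability 1/2; if the maze there is wired differently, the matrix below should be adjusted accordingly.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed closed-maze transition matrix: cell 1 adjoins cells 2 and 3,
# cell 2 adjoins 1 and 4, cell 3 adjoins 1 and 4, cell 4 adjoins 2 and 3;
# the rat moves to each adjacent cell with probability 1/2.
# States are indexed 0..3 for cells 1..4.
P = np.array([[0,   1/2, 1/2, 0  ],
              [1/2, 0,   0,   1/2],
              [1/2, 0,   0,   1/2],
              [0,   1/2, 1/2, 0  ]])

steps = 100_000
x = 2                      # start in cell 3
visits = np.zeros(4, dtype=int)
for _ in range(steps):
    x = rng.choice(4, p=P[x])
    visits[x] += 1

print(visits)   # every cell, including cell 1, is visited again and again
```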

1.3 Expected return time to a given state: positive recurrence and null recurrence

A recurrent state j is called positive recurrent if the expected amount of time to return to state j, given that the chain started in state j, has a finite first moment: E(τ_jj) < ∞. A recurrent state j for which E(τ_jj) = ∞ is called null recurrent. In general, even for i ≠ j, we define

τ_ij := min{n ≥ 1 : X_n = j | X_0 = i},

the time (after time 0) until reaching state j given X_0 = i.

Proposition 1.5 Suppose i and j are both recurrent. If i and j communicate and if j is positive recurrent (E(τ_jj) < ∞), then i is positive recurrent (E(τ_ii) < ∞) and also E(τ_ij) < ∞. In particular, all states in a recurrent communication class are either all together positive recurrent or all together null recurrent.

Proof: Assume that E(τ_jj) < ∞ and that i and j communicate. Choose the smallest n ≥ 1 such that P^n_{j,i} > 0. With X_0 = j, let A = {X_k ≠ j, 1 ≤ k ≤ n, X_n = i}; P(A) > 0.
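
Returning to the definition above, E(τ_jj) can also be estimated by simulation. The sketch (mine, not the notes') reuses the assumed closed-maze matrix; because that matrix is doubly stochastic, its stationary distribution is uniform, and the mean return time to any cell is 1/(1/4) = 4 steps (a standard identity for positive recurrent irreducible chains), so the estimate should come out near 4.

```python
import numpy as np

rng = np.random.default_rng(2)

# Same assumed closed-maze matrix as above (cells 1..4 indexed 0..3).
P = np.array([[0,   1/2, 1/2, 0  ],
              [1/2, 0,   0,   1/2],
              [1/2, 0,   0,   1/2],
              [0,   1/2, 1/2, 0  ]])

def return_time(j):
    """Sample tau_jj: the number of steps taken to come back to j from j."""
    x, n = j, 0
    while True:
        x = rng.choice(4, p=P[x])
        n += 1
        if x == j:
            return n

samples = [return_time(0) for _ in range(10_000)]
print(np.mean(samples))   # close to E(tau_11) = 4 for this symmetric maze
```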

