MARKOV CHAINS: BASIC THEORY - University of Chicago


1. MARKOV CHAINS AND THEIR TRANSITION PROBABILITIES

1.1. Definition and First Examples.

Definition. A (discrete-time) Markov chain with (finite or countable) state space $\mathcal{X}$ is a sequence $X_0, X_1, \dots$ of $\mathcal{X}$-valued random variables such that for all states $i, j, k_0, k_1, \dots$ and all times $n = 0, 1, 2, \dots$,

(1)   $P(X_{n+1} = j \mid X_n = i, X_{n-1} = k_{n-1}, \dots) = p(i, j)$

where $p(i, j)$ depends only on the states $i, j$, and not on the time $n$ or the previous states $k_{n-1}, k_{n-2}, \dots$. The numbers $p(i, j)$ are called the transition probabilities of the chain.

Example. The simple random walk on the integer lattice $\mathbb{Z}^d$ is the Markov chain whose transition probabilities are

$p(x, x \pm e_i) = 1/(2d)$ for all $x \in \mathbb{Z}^d$,

where $e_1, e_2, \dots, e_d$ are the standard unit vectors in $\mathbb{Z}^d$. In other terms, the simple random walk moves, at each step, to a randomly chosen nearest neighbor.

Example. The random transposition Markov chain on the permutation group $S_N$ (the set of all permutations of $N$ cards) is the Markov chain whose transition probabilities are

$p(x, \tau x) = 1 \big/ \binom{N}{2}$ for all transpositions $\tau$;   $p(x, y) = 0$ otherwise.

A transposition is a permutation that exchanges two cards.
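To make the random walk example concrete, here is a minimal Python sketch (not part of the original notes); the function name srw_step and the choice $d = 2$ are illustrative assumptions.

```python
import random

def srw_step(x, d):
    """One step of simple random walk on Z^d: pick a coordinate
    uniformly, then move +1 or -1 in that coordinate, so each of
    the 2d nearest neighbors has probability 1/(2d)."""
    i = random.randrange(d)
    step = random.choice([-1, 1])
    y = list(x)
    y[i] += step
    return tuple(y)

# Sample a short trajectory started at the origin of Z^2.
x = (0, 0)
path = [x]
for _ in range(10):
    x = srw_step(x, d=2)
    path.append(x)
print(path)
```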

Notice that there are exactly $\binom{N}{2}$ transpositions. Thus, the Markov chain proceeds by the following rule: at each step, choose two different cards at random and switch them.

Example. The Ehrenfest urn model with $N$ balls is the Markov chain on the state space $\mathcal{X} = \{0, 1\}^N$ that evolves as follows: At each time $n = 1, 2, \dots$, a random index $j \in [N]$ is chosen, and the $j$th coordinate of the last state is flipped. Thus, the transition probabilities are

$p(x, y) = 1/N$ if the vectors $x, y$ differ in exactly 1 coordinate;   $p(x, y) = 0$ otherwise.

The Ehrenfest model is a simple model of particle diffusion: Imagine a room with two compartments 0 and 1, and $N$ molecules distributed throughout the two compartments (customarily called urns). At each time, one of the molecules is chosen at random and moved from its current compartment to the other.
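The Ehrenfest dynamics are easy to simulate. The following sketch (an addition, not from the notes; $N = 20$ and the step count are arbitrary choices) flips a uniformly chosen coordinate at each step and tracks how many molecules occupy compartment 1.

```python
import random

def ehrenfest_step(x):
    """One step of the Ehrenfest chain on {0,1}^N: choose a
    coordinate j uniformly at random and flip it, so each of the
    N neighboring states has probability 1/N."""
    j = random.randrange(len(x))
    y = list(x)
    y[j] ^= 1  # flip the j-th coordinate
    return tuple(y)

N = 20
x = tuple([0] * N)          # all molecules start in compartment 0
counts = []
for _ in range(100):
    x = ehrenfest_step(x)
    counts.append(sum(x))    # molecules currently in compartment 1
print(counts[-10:])
```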

Example. Let $\xi_1, \xi_2, \dots$ be independent, identically distributed, positive integer-valued random variables with common distribution $\{q_m\}_{m \geq 1}$. Think of these as being the lifetimes of a sequence of AAA batteries that I use in my wireless keyboard. Whenever a battery fails, I immediately replace it by a new one. Consequently, the partial sums $S_n := \sum_{j=1}^{n} \xi_j$ are the times at which batteries are replaced. In this context, the sequence of random variables $\{S_n\}_{n \geq 0}$ is called a renewal process. There are several interesting Markov chains associated with a renewal process:

(A) The age process $A_1, A_2, \dots$ is the sequence of random variables that record the time elapsed since the last battery failure; in other words, $A_n$ is the age of the battery in use at time $n$. At each step, the age process either increases by $+1$, or it jumps to 0. It is not difficult to verify that it is a Markov chain with transition probabilities

$p(m, m+1) = \sum_{j=m+1}^{\infty} q_j \Big/ \sum_{j=m}^{\infty} q_j$,   $p(m, 0) = q_m \Big/ \sum_{j=m}^{\infty} q_j$.

(B) The residual lifetime process $R_1, R_2, \dots$ is the sequence of random variables that record the time until the next battery failure, that is, the remaining lifetime of the battery currently in use. (If a new battery is installed at time $n$, the residual lifetime is the lifetime of the new battery, not 0.) The sequence $R_n$ is a Markov chain with transition probabilities

$p(m, m-1) = 1$ if $m \geq 2$;   $p(1, m) = q_m$ for all $m \geq 1$.

(Both renewal chains are sketched in code below.)

Proposition. If $X_n$ is a Markov chain with transition probabilities $p(x, y)$, then for every sequence of states $x_0, x_1, \dots, x_{n+m}$,

(2)   $P(X_{m+i} = x_{m+i} \ \forall\, 0 < i \leq n \mid X_i = x_i \ \forall\, 0 \leq i \leq m) = \prod_{i=1}^{n} p(x_{m+i-1}, x_{m+i})$.

Consequently, the $n$-step transition probabilities

(3)   $p_n(x, y) := P(X_{n+m} = y \mid X_m = x)$

depend only on the time lag $n$ and the initial and terminal states $x, y$, but not on $m$.

Proof. The first statement can be proved by a completely routine induction argument, using the definition of a Markov chain and elementary properties of conditional probabilities. The second follows from the first, by summing over all possible sequences $x_{m+i}$ of intermediate states: the right side of equation (2) makes it clear that this sum does not depend on $m$, since the factors in the product depend only on the transitions $x_i, x_{i+1}$ made, and not on the times at which they are made.
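Here is a sketch of the two renewal chains, an addition to the notes. It assumes a uniform lifetime distribution on $\{1, \dots, L\}$ so that the state spaces are finite and the formulas apply exactly; the convention that a fresh battery has age 0 (with $q_0 = 0$) is also an assumption about the boundary case.

```python
import numpy as np

# Uniform lifetime distribution on {1, ..., L}: q_m = 1/L.
L = 6
q = np.zeros(L + 1)          # q[0] = 0: lifetimes are >= 1
q[1:] = 1.0 / L

def tail(m):
    """Tail sum  sum_{j >= m} q_j  of the lifetime distribution."""
    return q[m:].sum()

# Age process on states {0, 1, ..., L}: from age m the battery
# either survives (age m+1) or fails (age resets to 0).
A = np.zeros((L + 1, L + 1))
for m in range(L):           # from age L the battery must fail
    A[m, m + 1] = tail(m + 1) / tail(m)
    A[m, 0] = q[m] / tail(m)
A[L, 0] = 1.0

# Residual lifetime process on states {1, ..., L}: count down by 1,
# and from state 1 jump to the lifetime of the fresh battery.
R = np.zeros((L, L))
for m in range(2, L + 1):
    R[m - 1, m - 2] = 1.0    # p(m, m-1) = 1
for m in range(1, L + 1):
    R[0, m - 1] = q[m]       # p(1, m) = q_m

# Both transition matrices are stochastic: every row sums to 1.
print(A.sum(axis=1), R.sum(axis=1))
```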

1.2. The Chapman-Kolmogorov Equations and the Transition Probability Matrix. Assume henceforth that $\{X_n\}_{n \geq 0}$ is a discrete-time Markov chain on a state space $\mathcal{X}$ with transition probabilities $p(i, j)$. Define the transition probability matrix $\mathbf{P}$ of the chain to be the $\mathcal{X} \times \mathcal{X}$ matrix with entries $p(i, j)$, that is, the matrix whose $i$th row consists of the transition probabilities $p(i, j)$ for $j \in \mathcal{X}$:

(4)   $\mathbf{P} = (p(i, j))_{i, j \in \mathcal{X}}$

If $\mathcal{X}$ has $N$ elements, then $\mathbf{P}$ is an $N \times N$ matrix, and if $\mathcal{X}$ is infinite, then $\mathbf{P}$ is an infinite-by-infinite matrix. Also, the row sums of $\mathbf{P}$ must all be 1, by the law of total probability. A matrix with this property is called stochastic.

Definition. A nonnegative matrix is a matrix with nonnegative entries. A stochastic matrix is a square nonnegative matrix all of whose row sums are 1. A substochastic matrix is a square nonnegative matrix all of whose row sums are $\leq 1$. A doubly stochastic matrix is a stochastic matrix all of whose column sums are 1.

Observe that if you start with a stochastic matrix and delete the rows and columns indexed by some set of states, you get a substochastic matrix.
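A quick numerical illustration of the last observation, using a small made-up stochastic matrix (the matrix itself is an arbitrary example, not from the notes):

```python
import numpy as np

# A hypothetical stochastic matrix on states {0, 1, 2, 3}.
P = np.array([
    [0.5, 0.5, 0.0, 0.0],
    [0.2, 0.3, 0.5, 0.0],
    [0.0, 0.1, 0.4, 0.5],
    [0.0, 0.0, 0.6, 0.4],
])
assert np.allclose(P.sum(axis=1), 1.0)   # stochastic: rows sum to 1

# Delete the row and column indexed by state 3: the result is
# substochastic (nonnegative, with row sums <= 1).
keep = [0, 1, 2]
Q = P[np.ix_(keep, keep)]
print(Q.sum(axis=1))   # [1.0, 1.0, 0.5] -- all <= 1
```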

The resulting substochastic matrix contains information about the so-called first-passage times to various states, as you will see in one of the homework problems.

Proposition. The $n$-step transition probabilities $p_n(i, j)$ are the entries of the $n$th power $\mathbf{P}^n$ of the matrix $\mathbf{P}$. Consequently, the $n$-step transition probabilities $p_n(i, j)$ satisfy the Chapman-Kolmogorov equations

(5)   $p_{n+m}(i, j) = \sum_{k \in \mathcal{X}} p_n(i, k)\, p_m(k, j)$.

Proof. It is easiest to start by directly proving the Chapman-Kolmogorov equations, by a double induction, first on $n$, then on $m$. The case $n = 1$, $m = 1$ follows directly from the definition of a Markov chain and the law of total probability (to get from $i$ to $j$ in two steps, the Markov chain has to go through some intermediate state $k$). The induction steps are left as an exercise. Once the Chapman-Kolmogorov equation is established, it follows that the $n$-step transition probabilities $p_n(x, y)$ are the entries of $\mathbf{P}^n$, because equation (5) is the rule for matrix multiplication.
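Equation (5) is straightforward to check numerically; the sketch below (an addition, with an arbitrary 3-state matrix and arbitrary lags $n = 3$, $m = 4$) verifies that $\mathbf{P}^{n+m} = \mathbf{P}^n \mathbf{P}^m$.

```python
import numpy as np

P = np.array([
    [0.5, 0.5, 0.0],
    [0.2, 0.3, 0.5],
    [0.0, 0.6, 0.4],
])

n, m = 3, 4
Pn = np.linalg.matrix_power(P, n)
Pm = np.linalg.matrix_power(P, m)
Pnm = np.linalg.matrix_power(P, n + m)

# Chapman-Kolmogorov: P^(n+m) = P^n P^m; entrywise this is
# p_{n+m}(i,j) = sum_k p_n(i,k) p_m(k,j).
assert np.allclose(Pnm, Pn @ Pm)
print(Pnm[0, 2])   # a 7-step transition probability p_7(0, 2)
```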

Suppose now that the initial state $X_0$ is random, with distribution $\nu$, that is, $P^{\nu}\{X_0 = i\} = \nu(i)$ for all states $i \in \mathcal{X}$. (Note: Henceforth, when a probability distribution is used as a superscript as above, it denotes the initial distribution, that is, the distribution of $X_0$.) Then by the Chapman-Kolmogorov equations and the law of total probability,

$P^{\nu}\{X_n = j\} = \sum_i \nu(i)\, p_n(i, j)$;

equivalently, if the initial distribution is $\nu^T$ (here we are viewing probability distributions on $\mathcal{X}$ as row vectors), then the distribution after $n$ steps is $\nu^T \mathbf{P}^n$. Notice that if there is a probability distribution $\pi$ on $\mathcal{X}$ such that $\pi^T = \pi^T \mathbf{P}$, then $\pi^T = \pi^T \mathbf{P}^n$ for all $n \geq 1$. Consequently, if the Markov chain has initial distribution $\pi$, then the marginal distribution of $X_n$ will be $\pi$ for all $n$. For this reason, such a probability distribution is called stationary:

Definition. A probability distribution $\pi$ on $\mathcal{X}$ is stationary if

(6)   $\pi^T = \pi^T \mathbf{P}$.
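For a finite chain, a stationary distribution can be found numerically as a left eigenvector of $\mathbf{P}$ with eigenvalue 1. A minimal sketch (an addition, reusing the arbitrary 3-state matrix from above):

```python
import numpy as np

P = np.array([
    [0.5, 0.5, 0.0],
    [0.2, 0.3, 0.5],
    [0.0, 0.6, 0.4],
])

# Solve pi^T = pi^T P: pi is a left eigenvector of P for
# eigenvalue 1, normalized to sum to 1.
vals, vecs = np.linalg.eig(P.T)
i = np.argmin(np.abs(vals - 1.0))
pi = np.real(vecs[:, i])
pi = pi / pi.sum()

print(pi)                        # stationary distribution
print(np.allclose(pi @ P, pi))   # pi^T P = pi^T  -> True
# And pi^T P^n = pi^T for all n, e.g. n = 5:
print(np.allclose(pi @ np.linalg.matrix_power(P, 5), pi))
```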

1.3. Accessibility and Communicating Classes. State $j$ is said to be accessible from state $i$ if there is a positive-probability path from $i$ to $j$, that is, if there is a finite sequence of states $k_0, k_1, \dots, k_m$ such that $k_0 = i$, $k_m = j$, and $p(k_t, k_{t+1}) > 0$ for each $t = 0, 1, \dots, m-1$. States $i$ and $j$ are said to communicate if each is accessible from the other. This relation is denoted by $i \leftrightarrow j$.

Proposition. Communication is an equivalence relation. In particular, it is transitive: if $i$ communicates with $j$ and $j$ communicates with $k$, then $i$ communicates with $k$.

The proof is an exercise. It follows that the state space $\mathcal{X}$ is uniquely partitioned into communicating classes (the equivalence classes of the relation $\leftrightarrow$). If there is only one communicating class (that is, if every state is accessible from every other), then the Markov chain (or its transition probability matrix) is said to be irreducible. In general, if there is more than one communicating class, then states in one communicating class $C_1$ may be accessible from states in another class $C_2$; however, in such a case no state of $C_2$ can be accessible from a state of $C_1$ (why?).
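For a finite chain, the communicating classes can be computed by testing mutual accessibility. The sketch below (an addition; the transitive-closure approach and the example matrix are my own choices) finds them with a Floyd-Warshall-style reachability computation.

```python
import numpy as np

def communicating_classes(P):
    """Partition the states of a finite transition matrix into
    communicating classes: i <-> j iff each is accessible from
    the other along a positive-probability path."""
    n = len(P)
    reach = (P > 0) | np.eye(n, dtype=bool)   # one-step + trivial paths
    for k in range(n):                        # transitive closure
        reach |= reach[:, [k]] & reach[[k], :]
    comm = reach & reach.T                    # mutual accessibility
    classes, seen = [], set()
    for i in range(n):
        if i not in seen:
            cls = {j for j in range(n) if comm[i, j]}
            classes.append(sorted(cls))
            seen |= cls
    return classes

# Two classes: {0, 1} leads into {2}, but there is no way back,
# so states 0 and 2 are not in the same class.
P = np.array([
    [0.5, 0.4, 0.1],
    [0.3, 0.6, 0.1],
    [0.0, 0.0, 1.0],
])
print(communicating_classes(P))   # [[0, 1], [2]]
```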

Definition. The period of a state $i$ is the greatest common divisor of the set $\{n \in \mathbb{N} : p_n(i, i) > 0\}$. If every state has period 1, then the Markov chain (or its transition probability matrix) is said to be aperiodic.

Note: If $i$ is not accessible from itself, then the period is the gcd of the empty set; by convention, we define the period in this case to be $+\infty$.

Example: Consider simple random walk on the integers. If at time zero the walk starts in state $X_0 = 0$, then at any subsequent even time the state must be an even integer, and at any odd time the state must be an odd integer (why?). Consequently, all states have period 2.

Proposition. If states $i, j$ communicate, then they must have the same period. Consequently, if the Markov chain is irreducible, then all states have the same period.

The proof is another easy exercise. There is a simple test to check whether an irreducible Markov chain is aperiodic: if there is a state $i$ for which the 1-step transition probability $p(i, i) > 0$, then the chain is aperiodic.
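The period can be computed directly from the definition by inspecting powers of $\mathbf{P}$. In this sketch (an addition; the cutoff n_max is a heuristic assumption, since the definition ranges over all $n$), simple random walk on a 6-cycle exhibits period 2 at every state, matching the example above.

```python
import numpy as np
from math import gcd
from functools import reduce

def period(P, i, n_max=200):
    """Period of state i: gcd of the return times {n : p_n(i,i) > 0},
    examined up to n_max steps. For a finite chain the gcd stabilizes
    quickly, but n_max is a heuristic cutoff, not a proof."""
    times = []
    Pn = np.eye(len(P))
    for n in range(1, n_max + 1):
        Pn = Pn @ P
        if Pn[i, i] > 1e-12:       # numerically positive return probability
            times.append(n)
    return reduce(gcd, times) if times else float("inf")

# Simple random walk on the 6-cycle: returns take an even number
# of steps, so every state has period 2.
n = 6
P = np.zeros((n, n))
for s in range(n):
    P[s, (s - 1) % n] = 0.5
    P[s, (s + 1) % n] = 0.5
print([period(P, s) for s in range(n)])   # [2, 2, 2, 2, 2, 2]
```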

Proposition. If the Markov chain has a stationary probability distribution $\pi$ for which $\pi(i) > 0$, and if states $i, j$ communicate, then $\pi(j) > 0$.

Proof. It suffices to show (why?) that if $p(i, j) > 0$, then $\pi(j) > 0$. But by definition (6),

$\pi(j) = \sum_k \pi(k)\, p(k, j) \geq \pi(i)\, p(i, j) > 0$.

2. FINITE STATE MARKOV CHAINS

If the state space is finite and all states communicate (that is, the Markov chain is irreducible), then in the long run, regardless of the initial condition, the Markov chain must settle into a steady state. Formally:

Theorem 3. An irreducible Markov chain $X_n$ on a finite state space $\mathcal{X}$ has a unique stationary distribution $\pi$. Furthermore, if the Markov chain is not only irreducible but also aperiodic, then for any initial distribution $\nu$,

(7)   $\lim_{n \to \infty} P^{\nu}\{X_n = j\} = \pi(j)$ for all $j \in \mathcal{X}$.

The remainder of this section is devoted to the proof of this theorem. Assume throughout that the hypotheses of Theorem 3 are met, and let $\mathbf{P}$ be the transition probability matrix of the Markov chain. We will prove Theorem 3 by studying the action of the transition probability matrix on the set $\mathcal{P} = \mathcal{P}_{\mathcal{X}}$ of probability distributions on $\mathcal{X}$.
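The convergence (7) is easy to observe numerically before proving it. The sketch below (an addition, reusing the arbitrary irreducible, aperiodic 3-state matrix from earlier) iterates $\nu^T \mathbf{P}^n$ from a deterministic start and compares the result with the stationary distribution.

```python
import numpy as np

# An irreducible, aperiodic chain on 3 states (hypothetical example).
P = np.array([
    [0.5, 0.5, 0.0],
    [0.2, 0.3, 0.5],
    [0.0, 0.6, 0.4],
])

# Stationary distribution: left eigenvector of P for eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
pi /= pi.sum()

# Theorem 3: nu^T P^n -> pi^T for any initial distribution nu.
nu = np.array([1.0, 0.0, 0.0])     # start deterministically in state 0
dist = nu
for _ in range(50):
    dist = dist @ P
print(dist, pi)                    # the two should be nearly equal
print(np.abs(dist - pi).max())     # maximum deviation after 50 steps
```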

