
MARKOV CHAINS: BASIC THEORY - University of Chicago




1. MARKOV CHAINS AND THEIR TRANSITION PROBABILITIES

Definition 1. A (discrete-time) Markov chain with (finite or countable) state space X is a sequence X_0, X_1, ... of X-valued random variables such that for all states i, j, k_0, k_1, ... and all times n = 0, 1, 2, ...,

(1) P(X_{n+1} = j | X_n = i, X_{n−1} = k_{n−1}, ...) = p(i, j),

where p(i, j) depends only on the states i, j, and not on the time n or the previous states k_{n−1}, k_{n−2}, .... The numbers p(i, j) are called the transition probabilities of the chain.

Example 1. The simple random walk on the integer lattice Z^d is the Markov chain whose transition probabilities are

p(x, x ± e_i) = 1/(2d) for all x ∈ Z^d,

where e_1, e_2,

..., e_d are the standard unit vectors in Z^d. In other terms, the simple random walk moves, at each step, to a randomly chosen nearest neighbor.

Example 2. The random transposition Markov chain on the permutation group S_N (the set of all permutations of N cards) is a Markov chain whose transition probabilities are

p(x, σx) = 1/C(N, 2) for all transpositions σ; p(x, y) = 0 otherwise,

where a transposition is a permutation that exchanges two cards. Notice that there are exactly C(N, 2) = N(N−1)/2 transpositions. Thus, the Markov chain proceeds by the following rule: at each step, choose two different cards at random and switch them.

Example 3. The Ehrenfest urn model with N balls is the Markov chain on the state space X = {0, 1}^N that evolves as follows: at each time n = 1, 2, ...,

a random index j ∈ [N] is chosen, and the jth coordinate of the last state is flipped. Thus, the transition probabilities are

p(x, y) = 1/N if the vectors x, y differ in exactly one coordinate; p(x, y) = 0 otherwise.

The Ehrenfest model is a simple model of particle diffusion: imagine a room with two compartments 0 and 1, and N molecules distributed throughout the two compartments (customarily called urns). At each time, one of the molecules is chosen at random and moved from its current compartment to the other.

Example 4. Let ξ_1, ξ_2, ... be independent, identically distributed, positive integer-valued random variables with common distribution {q_m}_{m≥1}.

Think of these as being the lifetimes of a sequence of AAA batteries that I use in my wireless keyboard. Whenever a battery fails, I immediately replace it by a new one. Consequently, the partial sums S_n := Σ_{j=1}^n ξ_j are the times at which batteries are replaced. In this context, the sequence of random variables {S_n}_{n≥0} is called a renewal process. There are several interesting Markov chains associated with a renewal process:

(A) The age process A_1, A_2, ... is the sequence of random variables that record the time elapsed since the last battery failure; in other words, A_n is the age of the battery in use at time n.

At each step, the age process either increases by +1 or jumps to 0. It is not difficult to verify that it is a Markov chain with transition probabilities

p(m, m+1) = Σ_{j=m+1}^∞ q_j / Σ_{j=m}^∞ q_j,  p(m, 0) = q_m / Σ_{j=m}^∞ q_j.

(B) The residual lifetime process R_1, R_2, ... is the sequence of random variables that record the time until the next battery failure, that is, the remaining lifetime of the battery currently in use. (If a new battery is installed at time n, the residual lifetime is the lifetime of the new battery, not 0.) The sequence R_n is a Markov chain with transition probabilities

p(m, m−1) = 1 if m ≥ 2;  p(1, m) = q_m for all m ≥ 1.

Proposition 1. If X_n is a Markov chain with transition probabilities p(x, y), then for every sequence of states x_0, x_1,

..., x_{n+m},

(2) P(X_{m+i} = x_{m+i} for all 0 < i ≤ n | X_i = x_i for all 0 ≤ i ≤ m) = Π_{i=1}^n p(x_{m+i−1}, x_{m+i}).

Consequently, the n-step transition probabilities

(3) p_n(x, y) := P(X_{n+m} = y | X_m = x)

depend only on the time lag n and the initial and terminal states x, y, but not on m.

Proof. The first statement can be proved by a completely routine induction argument, using the definition of a Markov chain and elementary properties of conditional probabilities. The second follows from the first, by summing over all possible sequences x_{m+i} of intermediate states: the right side of equation (2) makes it clear that this sum does not depend on m, since the factors in the product depend only on the transitions x_i, x_{i+1} made, and not on the times at which they are made.
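The path-product formula (2) can be checked concretely: summing the products of one-step probabilities over all intermediate states yields the n-step probabilities of equation (3). The following is a minimal sketch for a hypothetical two-state chain (the matrix `p` below is an illustrative choice, not one from the notes), using exact rational arithmetic:

```python
from fractions import Fraction
from itertools import product

# hypothetical two-state chain; p[i][j] is the one-step transition probability
p = [[Fraction(1, 2), Fraction(1, 2)],
     [Fraction(1, 4), Fraction(3, 4)]]

def n_step(x, y, n):
    """p_n(x, y): sum, over all sequences of intermediate states, of the
    product of one-step transition probabilities along the path, as in (2)."""
    total = Fraction(0)
    for mid in product(range(2), repeat=n - 1):
        path = (x,) + mid + (y,)
        prob = Fraction(1)
        for a, b in zip(path, path[1:]):
            prob *= p[a][b]
        total += prob
    return total

# the n-step probabilities out of each state form a probability distribution
assert n_step(0, 0, 3) + n_step(0, 1, 3) == 1
```

For instance, p_2(0, 0) = p(0,0)p(0,0) + p(0,1)p(1,0) = 1/4 + 1/8 = 3/8 here, and the same value is obtained no matter at which time m the two steps are taken, illustrating time homogeneity.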

2. THE CHAPMAN–KOLMOGOROV EQUATIONS AND THE TRANSITION PROBABILITY MATRIX

Assume henceforth that {X_n}_{n≥0} is a discrete-time Markov chain on a state space X with transition probabilities p(i, j). Define the transition probability matrix P of the chain to be the X × X matrix with entries p(i, j), that is, the matrix whose ith row consists of the transition probabilities p(i, j) for j ∈ X:

(4) P = (p(i, j))_{i,j∈X}.

If X has N elements, then P is an N × N matrix, and if X is infinite, then P is an infinite-by-infinite matrix. Also, the row sums of P must all be 1, by the law of total probability.
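As a quick sanity check, the row-sum property is easy to verify numerically. A minimal sketch, assuming a hypothetical 3-state chain and using exact rationals so the sums come out to exactly 1:

```python
from fractions import Fraction

# hypothetical 3-state chain; row i lists the probabilities p(i, j) for j = 0, 1, 2
P = [[Fraction(1, 2), Fraction(1, 2), Fraction(0)],
     [Fraction(1, 3), Fraction(1, 3), Fraction(1, 3)],
     [Fraction(0),    Fraction(1, 4), Fraction(3, 4)]]

# every row sums to 1, by the law of total probability
assert all(sum(row) == 1 for row in P)
```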

A matrix with this property is called stochastic.

Definition 2. A nonnegative matrix is a matrix with nonnegative entries. A stochastic matrix is a square nonnegative matrix all of whose row sums are 1. A substochastic matrix is a square nonnegative matrix all of whose row sums are at most 1. A doubly stochastic matrix is a stochastic matrix all of whose column sums are 1.

Observe that if you start with a stochastic matrix and delete the rows and columns indexed by a set of states, you get a substochastic matrix. The resulting substochastic matrix contains information about the so-called first-passage times to various states, as you will see in one of the homework problems.

Proposition 2. The n-step transition probabilities p_n(i, j) are the entries of the nth power P^n of the matrix P.
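The observation about deleting rows and columns can be illustrated directly. A minimal sketch, with hypothetical helper names and an illustrative 3-state matrix (none of these are from the notes):

```python
from fractions import Fraction

def is_stochastic(M):
    """Nonnegative entries and every row sums to exactly 1."""
    return all(x >= 0 for row in M for x in row) and all(sum(row) == 1 for row in M)

def is_substochastic(M):
    """Nonnegative entries and every row sums to at most 1."""
    return all(x >= 0 for row in M for x in row) and all(sum(row) <= 1 for row in M)

def delete_states(M, kill):
    """Delete the rows and columns indexed by the state set `kill`."""
    keep = [i for i in range(len(M)) if i not in kill]
    return [[M[i][j] for j in keep] for i in keep]

# hypothetical 3-state stochastic matrix
P = [[Fraction(1, 2), Fraction(1, 2), Fraction(0)],
     [Fraction(1, 3), Fraction(1, 3), Fraction(1, 3)],
     [Fraction(0),    Fraction(1, 4), Fraction(3, 4)]]

assert is_stochastic(P)
Q = delete_states(P, {2})   # remove state 2's row and column
assert is_substochastic(Q) and not is_stochastic(Q)
```

Deleting state 2 removes the probability mass that flowed into it, so some rows of the reduced matrix Q sum to less than 1, which is exactly what makes Q substochastic rather than stochastic.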

Consequently, the n-step transition probabilities p_n(i, j) satisfy the Chapman–Kolmogorov equations

(5) p_{n+m}(i, j) = Σ_{k∈X} p_n(i, k) p_m(k, j).

Proof. It is easiest to start by directly proving the Chapman–Kolmogorov equations, by a double induction, first on n, then on m. The case n = 1, m = 1 follows directly from the definition of a Markov chain and the law of total probability (to get from i to j in two steps, the Markov chain has to go through some intermediate state k). The induction steps are left as an exercise. Once the Chapman–Kolmogorov equation is established, it follows that the n-step transition probabilities p_n(x, y) are the entries of P^n, because equation (5) is the rule for matrix multiplication.
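Since equation (5) is exactly the rule for matrix multiplication, it can be verified mechanically with matrix powers. A minimal sketch, again for a hypothetical two-state chain with exact rational entries:

```python
from fractions import Fraction

def matmul(A, B):
    """Ordinary matrix product: (AB)[i][j] = sum_k A[i][k] * B[k][j]."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def matpow(M, n):
    """nth matrix power, starting from the identity."""
    R = [[Fraction(int(i == j)) for j in range(len(M))] for i in range(len(M))]
    for _ in range(n):
        R = matmul(R, M)
    return R

# hypothetical two-state transition matrix
P = [[Fraction(1, 2), Fraction(1, 2)],
     [Fraction(1, 4), Fraction(3, 4)]]

# Chapman–Kolmogorov with n = 2, m = 3: P^(n+m) = P^n P^m, entrywise
n, m = 2, 3
assert matpow(P, n + m) == matmul(matpow(P, n), matpow(P, m))
```

The entrywise equality P^{n+m}[i][j] = Σ_k P^n[i][k] P^m[k][j] is precisely equation (5) for this chain.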

Suppose now that the initial state X_0 is random, with distribution μ, that is, P^μ{X_0 = i} = μ(i) for all states i ∈ X. (Note: henceforth, when a probability distribution μ is used as a superscript as above, it denotes the initial distribution, that is, the distribution of X_0.) Then by the Chapman–Kolmogorov equations and the law of total probability,

P^μ{X_n = j} = Σ_i μ(i) p_n(i, j);

equivalently, if the initial distribution is μ^T (here we are viewing probability distributions on X as row vectors), then the distribution after n steps is μ^T P^n.
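The row-vector update μ^T ↦ μ^T P can be iterated to propagate the distribution of the chain forward in time. A minimal sketch, with a hypothetical two-state chain started deterministically in state 0:

```python
from fractions import Fraction

def step_distribution(mu, P):
    """One step of mu^T P: the jth entry is sum_i mu(i) p(i, j)."""
    n = len(P)
    return [sum(mu[i] * P[i][j] for i in range(n)) for j in range(n)]

# hypothetical two-state transition matrix
P = [[Fraction(1, 2), Fraction(1, 2)],
     [Fraction(1, 4), Fraction(3, 4)]]

mu = [Fraction(1), Fraction(0)]    # X_0 = 0 with probability 1
for _ in range(3):                  # distribution of X_3 is mu^T P^3
    mu = step_distribution(mu, P)
    assert sum(mu) == 1             # stays a probability distribution
```

Because each row of P sums to 1, the total mass Σ_j (μ^T P)_j = Σ_i μ(i) Σ_j p(i, j) = Σ_i μ(i) = 1 is preserved at every step, as the assertion in the loop confirms.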

