
1 Discrete-time Markov chains - Columbia University

Copyright © 2009 by Karl Sigman

1.1 Stochastic processes in discrete time

A stochastic process in discrete time $n \in \mathbb{N} = \{0,1,2,\ldots\}$ is a sequence of random variables (rvs) $X_0, X_1, X_2, \ldots$ denoted by $X = \{X_n : n \geq 0\}$ (or just $X = \{X_n\}$). We refer to the value $X_n$ as the state of the process at time $n$, with $X_0$ denoting the initial state. If the random variables take values in a discrete space such as the integers $\mathbb{Z} = \{\ldots,-2,-1,0,1,2,\ldots\}$ (or some subset of them), then the stochastic process is said to be discrete-valued; we then denote the states by $i, j$ and so on. In general, however, the collection of possible values that the $X_n$ can take on is called the state space, is denoted by $S$ and could be, for example, $d$-dimensional Euclidean space $\mathbb{R}^d$, $d \geq 1$, or a subset of it. Stochastic processes are meant to model the evolution over time of real phenomena for which randomness is inherent.




For example, $X_n$ could denote the price of a stock $n$ days from now, the population size of a given species after $n$ years, the amount of bandwidth in use in a telecommunications network after $n$ hours of operation, or the amount of money that an insurance risk company has right after it pays out its $n$th claim. The insurance risk example illustrates how "time" $n$ need not really be time, but instead can be a sequential indexing of some kind of events. Other such examples: $X_n$ denotes the amount of water in a reservoir after the $n$th rain storm of the year, $X_n$ denotes the amount of time that the $n$th arrival to a hospital must wait before being admitted, or $X_n$ denotes the outcome (heads or tails) of the $n$th flip of a coin. The main challenge in the stochastic modeling of something is in choosing a model that has, on the one hand, enough complexity to capture the complexity of the phenomena in question, but has, on the other hand, enough structure and simplicity to allow one to compute things of interest.

In the context of our examples given above, we may be interested in computing $P(X_{30} > 50)$ for a stock that we bought for $X_0 = \$35$ per share, or computing the probability that the insurance risk company eventually gets ruined (runs out of money), $P(X_n < 0 \text{ for some } n > 0)$, or computing the long-run average waiting time of arrivals to the hospital, $\lim_{N \to \infty} \frac{1}{N} \sum_{n=1}^{N} X_n$. As a very simple example, consider the sequential tossing of a fair coin. We let $X_n$ denote the outcome of the $n$th toss. We can take the $X_n$ as iid $p = 1/2$ Bernoulli rvs, $P(X_n = 0) = P(X_n = 1) = 1/2$, with $X_n = 1$ denoting that the $n$th flip landed heads, and $X_n = 0$ denoting that it landed tails. We also would assume that the sequence of rvs are independent.
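The coin-tossing setup above is easy to simulate. As a minimal sketch (Python code not part of the original notes; the sample size `N` is an arbitrary choice), we can generate the iid Bernoulli sequence and check that the long-run average of the $X_n$ settles near $p = 1/2$:

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

# X_n = 1 (heads) or 0 (tails), each with probability 1/2, independently.
N = 100_000
X = [random.randint(0, 1) for _ in range(N)]

# Long-run average (1/N) * sum of the X_n; by the strong law of large
# numbers this should be close to 1/2 for large N.
avg = sum(X) / N
print(avg)  # prints a value close to 0.5
```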

This then yields an example of an independent and identically distributed (iid) sequence of rvs. Such sequences are easy to deal with, for they are defined by a single distribution (in this case Bernoulli) and are independent, hence lend themselves directly to powerful theorems in probability such as the strong law of large numbers and the central limit theorem. For the other examples given above, however, an iid sequence would not capture enough complexity, since we expect some correlations among the rvs $X_n$. For example, in the hospital example, if the waiting time $X_n$ is very large (and arrivals wait first-in-first-out) then we would expect $X_{n+1}$ to be very large as well.

In the next section we introduce a stochastic process called a Markov chain which does allow for correlations and also has enough structure and simplicity to allow for computations to be carried out. We will also see that Markov chains can be used to model a number of the above examples.

1.2 Definition of a Markov chain

We shall assume that the state space $S = \mathbb{Z} = \{\ldots,-2,-1,0,1,2,\ldots\}$, the integers, or a proper subset of the integers. Typical examples are $S = \mathbb{N} = \{0,1,2,\ldots\}$, the non-negative integers, or $S = \{0,1,\ldots,a\}$, or $S = \{-b,\ldots,0,1,\ldots,a\}$ for some integers $a, b > 0$, in which case the state space is finite.

A stochastic process $\{X_n\}$ is called a Markov chain if for all times $n \geq 0$ and all states $i_0, \ldots, i, j \in S$,

$$P(X_{n+1} = j \mid X_n = i, X_{n-1} = i_{n-1}, \ldots, X_0 = i_0) = P(X_{n+1} = j \mid X_n = i) = P_{ij}. \qquad (1)$$

$P_{ij}$ denotes the probability that the chain, whenever in state $i$, moves next (one unit of time later) into state $j$, and is referred to as a one-step transition probability. The square matrix $P = (P_{ij})$, $i, j \in S$, is called the one-step transition matrix, and since when leaving state $i$ the chain must move to one of the states $j \in S$, each row sums to one (i.e., forms a probability distribution): for each $i \in S$,

$$\sum_{j \in S} P_{ij} = 1.$$

We are assuming that the transition probabilities do not depend on the time $n$, and so, in particular, using $n = 0$ in (1) yields $P_{ij} = P(X_1 = j \mid X_0 = i)$.

The defining Markov property (1) can be described in words as: the future is independent of the past given the present state. Letting $n$ be the present time, the future after time $n$ is $\{X_{n+1}, X_{n+2}, \ldots\}$, the present state is $X_n$, and the past is $\{X_0, \ldots, X_{n-1}\}$. If the value $X_n = i$ is known, then the future evolution of the chain only depends (at most) on $i$, in that it is stochastically independent of the past values $X_{n-1}, \ldots, X_0$: conditional on the rv $X_n$, the future sequence of rvs $\{X_{n+1}, X_{n+2}, \ldots\}$ is independent of the past sequence of rvs $\{X_0, \ldots, X_{n-1}\}$.

Examples of Markov chains

The rat in the open maze: Consider a rat in a maze with four cells, indexed 1-4, and the outside (freedom), indexed by 0 (which can only be reached via cell 4). The rat starts initially in a given cell and then takes a move to another cell, continuing to do so until finally reaching freedom.

We assume that at each move (transition) the rat, independent of the past, is equally likely to choose from among the neighboring cells (so we are assuming that the rat does not learn from past mistakes). This then yields a Markov chain, where $X_n$ denotes the cell visited right after the $n$th move; $S = \{0,1,2,3,4\}$. For example, whenever the rat is in cell 1, it moves next (regardless of its past) into cell 2 or 3 with probability 1/2: $P_{1,2} = P_{1,3} = 1/2$. We assume that when the rat escapes it remains escaped forever after, so we have $P_{0,0} = 1$, $P_{0,i} = 0$, $i \in \{1,2,3,4\}$. The transition matrix is given by

$$P = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1/2 & 1/2 & 0 \\ 0 & 1/2 & 0 & 0 & 1/2 \\ 0 & 1/2 & 0 & 0 & 1/2 \\ 1/3 & 0 & 1/3 & 1/3 & 0 \end{pmatrix}.$$

State 0 here is an example of an absorbing state: whenever the chain enters state 0, it remains in that state forever after; $P(X_{n+1} = 0 \mid X_n = 0) = P_{00} = 1$.
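The maze chain is small enough to simulate directly. As a sketch (Python code not part of the original notes; the number of runs is an arbitrary choice), we can encode the transition matrix, confirm that each row is a probability distribution, and estimate by simulation the expected number of moves to reach freedom from cell 1:

```python
import random

random.seed(1)  # fixed seed so the run is reproducible

# One-step transition matrix of the rat-in-a-maze chain, states 0..4
# (state 0 = freedom, which is absorbing).
P = [
    [1,   0,   0,   0,   0  ],
    [0,   0,   1/2, 1/2, 0  ],
    [0,   1/2, 0,   0,   1/2],
    [0,   1/2, 0,   0,   1/2],
    [1/3, 0,   1/3, 1/3, 0  ],
]
# Each row must sum to one.
assert all(abs(sum(row) - 1) < 1e-9 for row in P)

def escape_time(start):
    """Number of moves until the chain first hits the absorbing state 0."""
    state, steps = start, 0
    while state != 0:
        # Sample the next state from row `state` of P.
        state = random.choices(range(5), weights=P[state])[0]
        steps += 1
    return steps

# Monte Carlo estimate of the expected number of moves to freedom from
# cell 1; the exact first-step analysis in the notes gives 13.
runs = 20_000
estimate = sum(escape_time(1) for _ in range(runs)) / runs
print(estimate)  # close to 13
```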

Of interest is determining the expected number of moves required until the rat reaches freedom given that the rat starts initially in cell $i$. Let $\tau_{i,0} = \min\{n \geq 0 : X_n = 0 \mid X_0 = i\}$, the number of moves required to reach freedom when starting in cell $i$. We wish to compute $E(\tau_{i,0})$. The trick is to condition on $X_1$. For example, let us try to compute $E(\tau_{1,0})$; we thus assume that $X_0 = 1$. Then $X_1 = 2$ or $3$ with probability $1/2$ each, and $E(\tau_{1,0} \mid X_1 = 2) = 1 + E(\tau_{2,0})$; $E(\tau_{1,0} \mid X_1 = 3) = 1 + E(\tau_{3,0})$. The point is that the rat must take at least 1 step to get out, and if the first step is to cell 2, then by the Markov property, the remaining number of steps is as if the rat started initially in cell 2 and we wish to calculate $E(\tau_{2,0})$, the expected number of steps required to reach freedom from cell 2; similarly if $X_1 = 3$.

Thus

$$E(\tau_{1,0}) = E(\tau_{1,0} \mid X_1 = 2)P(X_1 = 2 \mid X_0 = 1) + E(\tau_{1,0} \mid X_1 = 3)P(X_1 = 3 \mid X_0 = 1)$$
$$= (1 + E(\tau_{2,0}))(1/2) + (1 + E(\tau_{3,0}))(1/2) = 1 + E(\tau_{2,0})(1/2) + E(\tau_{3,0})(1/2).$$

Using the same trick on each of $E(\tau_{2,0})$, $E(\tau_{3,0})$, $E(\tau_{4,0})$ yields, in the end, four linear equations with the four unknowns. $E(\tau_{4,0})$ is a little different since it is only from cell 4 that the rat can escape: $E(\tau_{4,0}) = \frac{1}{3}(1) + \frac{1}{3}(1 + E(\tau_{2,0})) + \frac{1}{3}(1 + E(\tau_{3,0})) = 1 + \frac{1}{3}E(\tau_{2,0}) + \frac{1}{3}E(\tau_{3,0})$. The full set of equations is

$$E(\tau_{1,0}) = 1 + \tfrac{1}{2}E(\tau_{2,0}) + \tfrac{1}{2}E(\tau_{3,0}),$$
$$E(\tau_{2,0}) = 1 + \tfrac{1}{2}E(\tau_{1,0}) + \tfrac{1}{2}E(\tau_{4,0}),$$
$$E(\tau_{3,0}) = 1 + \tfrac{1}{2}E(\tau_{1,0}) + \tfrac{1}{2}E(\tau_{4,0}),$$
$$E(\tau_{4,0}) = 1 + \tfrac{1}{3}E(\tau_{2,0}) + \tfrac{1}{3}E(\tau_{3,0}).$$

Solving (details left to the reader) yields $E(\tau_{1,0}) = 13$, $E(\tau_{2,0}) = 12$, $E(\tau_{3,0}) = 12$, $E(\tau_{4,0}) = 9$.

An independent and identically distributed (iid) sequence: Any iid sequence forms a Markov chain, for if $\{X_n\}$ is iid, then $\{X_{n+1}, X_{n+2}, \ldots\}$
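The details of the solving, left to the reader above, can be checked mechanically. As a sketch (Python code not part of the original notes; the solver is a generic Gauss-Jordan elimination, an assumed implementation choice), moving every unknown to the left-hand side gives a 4x4 system $Ax = b$ that exact rational arithmetic solves cleanly:

```python
from fractions import Fraction as F

# The four first-step-analysis equations, rearranged as A x = b with
# x = (E tau_10, E tau_20, E tau_30, E tau_40):
#   E10 - 1/2 E20 - 1/2 E30           = 1
#  -1/2 E10 + E20           - 1/2 E40 = 1
#  -1/2 E10       + E30     - 1/2 E40 = 1
#        - 1/3 E20 - 1/3 E30 +    E40 = 1
A = [
    [F(1),      F(-1, 2), F(-1, 2), F(0)],
    [F(-1, 2),  F(1),     F(0),     F(-1, 2)],
    [F(-1, 2),  F(0),     F(1),     F(-1, 2)],
    [F(0),      F(-1, 3), F(-1, 3), F(1)],
]
b = [F(1), F(1), F(1), F(1)]

def solve(A, b):
    """Gauss-Jordan elimination on the augmented matrix [A | b]."""
    n = len(A)
    M = [row[:] + [rhs] for row, rhs in zip(A, b)]
    for col in range(n):
        # Swap up a row with a nonzero pivot, normalize it, then
        # eliminate this column from every other row.
        pivot = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[pivot] = M[pivot], M[col]
        M[col] = [v / M[col][col] for v in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                factor = M[r][col]
                M[r] = [v - factor * w for v, w in zip(M[r], M[col])]
    return [M[r][n] for r in range(n)]

print([int(x) for x in solve(A, b)])  # -> [13, 12, 12, 9]
```

The exact answers 13, 12, 12, 9 match the values stated above, and the symmetry $E(\tau_{2,0}) = E(\tau_{3,0})$ is visible directly in the equations.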

