
13 Introduction to Stationary Distributions



We first briefly review the classification of states in a Markov chain with a quick example, and then begin the discussion of the important notion of a stationary distribution. First, let's review a little bit with the following example.

Example: Suppose we have the following transition matrix on states 1 through 10:

    P = (10 × 10 transition matrix; entries not reproduced in this transcription)

Determine the equivalence classes, the period of each equivalence class, and whether each equivalence class is transient or recurrent.

Solution: The state space is small enough (10 elements) that one effective way to determine the classes is to just start following possible paths. When you see 1's in the matrix, a good place to start is in a state with a 1 in the corresponding row.

If we start in state 1, we see that the path 1 → 7 → 10 → 1 must be followed with probability 1. This immediately tells us that the set {1, 7, 10} is a recurrent class with period 3. Next, we see that if we start in state 9, then we just stay there forever; thus, {9} is a recurrent class with period 1. Similarly, we can see that {4} is a recurrent class with period 1. Next suppose we start in state 2. From state 2 we can go directly to states 2, 3, 4 or 5. We also see that from state 3 we can get to state 2 (by the path 3 → 8 → 2) and from state 5 we can get to state 2 (directly). Therefore, state 2 communicates with states 3 and 5.

We don't need to check whether state 2 communicates with states 1, 4, 7, 9, or 10 (why?). From state 2 we can get to state 6 (by the path 2 → 5 → 6), but from state 6 we must go to either state 4 or state 7; therefore from state 6 we cannot get to state 2. So states 2 and 6 do not communicate. Finally, we can see that states 2 and 8 do communicate. Therefore, {2, 3, 5, 8} is an equivalence class. It is transient, because from this class we can get to state 4 (and never come back). Finally, its period is 1, because the period of state 2 is clearly 1 (we can start in state 2 and come back to state 2 in 1 step).

The only state left that is still unclassified is state 6, which is in a class by itself, {6}, and is clearly transient. Note that p_66(n) = 0 for all n > 0, so the set of times at which we could possibly return to state 6 is the empty set. By convention, we will say that the greatest common divisor of the empty set is infinity, so the period of state 6 is infinity.

Sometimes a useful technique for determining the equivalence classes in a Markov chain is to draw what is called a state transition diagram, which is a graph with one node for each state and with a (directed) edge between nodes i and j if p_ij > 0.

We also usually write the transition probability p_ij beside the directed edge between nodes i and j if p_ij > 0. For example, here is the state transition diagram for the previous example:

(Figure: State Transition Diagram for Preceding Example)

Since the diagram displays all one-step transitions pictorially, it is usually easier to see the equivalence classes with the diagram than just by looking at the transition matrix. It helps if the diagram can be drawn neatly, with, for example, no edges crossing each other.

Usually when we construct a Markov model for some system, the equivalence classes, if there is more than one, are apparent or obvious, because we designed the model so that certain states go together, and we designed them to be transient or recurrent. At other times we may be trying to verify, modify, improve, or just understand someone else's (complicated) model, and one of the first things we may want to know is how to classify the states. It may not be obvious or even easy to determine the equivalence classes if the state space is large and there are many transitions that don't follow a regular pattern. For S finite, the following algorithm determines T(i), the set of states accessible from i; F(i), the set of states from which i is accessible; and C(i) = F(i) ∩ T(i), the equivalence class of state i, for each state i:

1. For each state i ∈ S, let T(i) = {i} and F(i) = ∅, the empty set.
2. For each state i ∈ S, do the following: for each state k ∈ T(i), add to T(i) all states j such that p_kj > 0 (if j is not already in T(i)). Repeat this step until no further addition is possible.
3. For each state i ∈ S, do the following: for each state j ∈ S, add state j to F(i) if state i is in T(j).
4. For each state i ∈ S, let C(i) = F(i) ∩ T(i).

Note that if C(i) = T(i) (the equivalence class containing i equals the set of states that are accessible from i), then C(i) is closed (hence recurrent, since we are assuming S is finite for this algorithm). This algorithm is taken from An Introduction to Stochastic Processes, by Edward P. C. Kao, Duxbury Press, 1997. Also in this reference is the listing of a MATLAB implementation of this algorithm.

Analyzing Markov Chains

Now that we know the general architecture of a Markov chain, it's time to look at how we might analyse a Markov chain to make predictions about system behaviour.
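For finite S the four steps above translate almost line for line into code. Here is a minimal Python sketch (not Kao's MATLAB listing; the 4-state matrix is a hypothetical example, not the 10-state chain above):

```python
def classify(P):
    """Return T(i), F(i), C(i) for each state of a finite chain with matrix P."""
    S = range(len(P))
    # Step 1: T(i) = {i}, F(i) = empty set.
    T = [{i} for i in S]
    F = [set() for _ in S]
    # Step 2: grow T(i) until no further addition is possible.
    for i in S:
        changed = True
        while changed:
            changed = False
            for k in list(T[i]):
                for j in S:
                    if P[k][j] > 0 and j not in T[i]:
                        T[i].add(j)
                        changed = True
    # Step 3: add j to F(i) whenever i is accessible from j.
    for i in S:
        for j in S:
            if i in T[j]:
                F[i].add(j)
    # Step 4: C(i) is the intersection of F(i) and T(i).
    C = [F[i] & T[i] for i in S]
    return T, F, C

# Hypothetical example: {0} transient, {1, 2} a closed class, {3} absorbing.
P = [[0.5, 0.5, 0.0, 0.0],
     [0.0, 0.0, 1.0, 0.0],
     [0.0, 1.0, 0.0, 0.0],
     [0.0, 0.0, 0.0, 1.0]]
T, F, C = classify(P)
print(C)             # [{0}, {1, 2}, {1, 2}, {3}]
print(C[1] == T[1])  # True: class {1, 2} is closed, hence recurrent
```

Note how the closedness test C(i) = T(i) falls out for free: C[1] == T[1] holds for the recurrent class {1, 2}, while C[0] != T[0] for the transient state 0.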

For this we'll first consider the concept of a stationary distribution. This is distinct from the notion of limiting probabilities, which we'll consider a bit later. First, let's define what we mean when we say that a process is stationary.

Definition: A (discrete-time) stochastic process {X_n : n ≥ 0} is stationary if for any time points i_1, ..., i_n and any m ≥ 0, the joint distribution of (X_{i_1}, ..., X_{i_n}) is the same as the joint distribution of (X_{i_1+m}, ..., X_{i_n+m}).

So "stationary" refers to "stationary in time". In particular, for a stationary process, the distribution of X_n is the same for all n. So why do we care whether our Markov chain is stationary?

Well, if it were stationary, and we knew what the distribution of each X_n was, then we would know a lot, because we would know the long-run proportion of time that the Markov chain was in any state. For example, suppose that the process was stationary and we knew that P(X_n = 2) = 1/10 for every n. Then over 1000 time periods we should expect that roughly 100 of those time periods were spent in state 2, and over N time periods roughly N/10 of those time periods were spent in state 2. As N goes to infinity, the proportion of time spent in state 2 will converge to 1/10 (this can be proved rigorously by some form of the Strong Law of Large Numbers).
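This long-run-proportion claim is easy to check by simulation. A sketch with a hypothetical two-state chain whose stationary distribution puts probability 1/4 on state 1 (the matrix and sample size are invented for illustration; the chain below is started from state 0 rather than from its stationary distribution, but since it is ergodic the long-run proportion converges to 1/4 either way):

```python
import random

random.seed(0)  # make the run reproducible

# Hypothetical two-state chain; its stationary distribution is (3/4, 1/4).
P = [[0.9, 0.1],
     [0.3, 0.7]]

def step(state):
    """Take one step of the chain from the given state."""
    return 0 if random.random() < P[state][0] else 1

N = 200_000
state = 0
visits_to_1 = 0
for _ in range(N):
    state = step(state)
    visits_to_1 += (state == 1)

# The long-run proportion of time in state 1 should be close to 1/4.
print(visits_to_1 / N)
```

Rerunning with larger N pulls the printed proportion ever closer to 1/4, exactly the Strong-Law behaviour described above.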

One of the attractive features of Markov chains is that we can often make them stationary, and there is a nice and neat characterization of the distribution of X_n when it is stationary. We discuss this next.

Stationary Distributions

So how do we make a Markov chain stationary? If it can be made stationary (and not all of them can; for example, the simple random walk cannot be made stationary and, more generally, a Markov chain in which all states are transient or null recurrent cannot be made stationary), then making it stationary is simply a matter of choosing the right initial distribution for X_0.
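Concretely, the "right initial distribution" is a row vector π with πP = π: if X_0 has distribution π, then X_1 (and hence every X_n) has distribution π as well. A sketch, on a hypothetical two-state chain, of finding π numerically as a left eigenvector of P (using NumPy; the matrix is invented for illustration):

```python
import numpy as np

# Hypothetical two-state transition matrix.
P = np.array([[0.9, 0.1],
              [0.3, 0.7]])

# A left eigenvector of P with eigenvalue 1 (i.e. pi @ P = pi) is a
# right eigenvector of P transpose; normalize it to sum to 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi = pi / pi.sum()

print(pi)       # ≈ [0.75 0.25]
print(pi @ P)   # the same vector: the distribution is preserved step to step
```

Starting the chain with P(X_0 = 0) = 3/4 and P(X_0 = 1) = 1/4 therefore makes it stationary, matching the long-run proportions a simulation of this chain would produce.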

