Markov Chains - dartmouth.edu

Chapter 11 Markov Chains

11.1 Introduction

Most of our study of probability has dealt with independent trials processes. These processes are the basis of classical probability theory and much of statistics. We have discussed two of the principal theorems for these processes: the Law of Large Numbers and the Central Limit Theorem.

We have seen that when a sequence of chance experiments forms an independent trials process, the possible outcomes for each experiment are the same and occur with the same probability. Further, knowledge of the outcomes of the previous experiments does not influence our predictions for the outcomes of the next experiment. The distribution for the outcomes of a single experiment is sufficient to construct a tree and a tree measure for a sequence of $n$ experiments, and we can answer any probability question about these experiments by using this tree measure. Modern probability theory studies chance processes for which the knowledge of previous outcomes influences predictions for future experiments.

In principle, when we observe a sequence of chance experiments, all of the past outcomes could influence our predictions for the next experiment. For example, this should be the case in predicting a student's grades on a sequence of exams in a course. But to allow this much generality would make it very difficult to prove general results.

In 1907, A. A. Markov began the study of an important new type of chance process. In this process, the outcome of a given experiment can affect the outcome of the next experiment. This type of process is called a Markov chain.

Specifying a Markov Chain

We describe a Markov chain as follows: We have a set of states, $S = \{s_1, s_2, \ldots, s_r\}$. The process starts in one of these states and moves successively from one state to another. Each move is called a step. If the chain is currently in state $s_i$, then it moves to state $s_j$ at the next step with a probability denoted by $p_{ij}$, and this probability does not depend upon which states the chain was in before the current state.

The probabilities $p_{ij}$ are called transition probabilities. The process can remain in the state it is in, and this occurs with probability $p_{ii}$. An initial probability distribution, defined on $S$, specifies the starting state. Usually this is done by specifying a particular state as the starting state.

R. A. Howard[1] provides us with a picturesque description of a Markov chain as a frog jumping on a set of lily pads. The frog starts on one of the pads and then jumps from lily pad to lily pad with the appropriate transition probabilities.

[1] R. A. Howard, Dynamic Probabilistic Systems, vol. 1 (New York: John Wiley and Sons, 1971).
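
To make the definition concrete, here is a minimal sketch, not part of the original text, of how a single step of such a chain might be sampled in Python; the state names and the function name are our own illustrative choices.

```python
import random

# Transition probabilities p_ij: P[s_i][s_j] is the probability of moving
# from state s_i to state s_j in one step. Each row sums to 1.
P = {
    "s1": {"s1": 0.5,  "s2": 0.5},
    "s2": {"s1": 0.25, "s2": 0.75},
}

def step(state, P):
    """Sample the next state: s_j is chosen with probability p_ij."""
    states = list(P[state].keys())
    probs = list(P[state].values())
    return random.choices(states, weights=probs)[0]

# Run the process for a few steps from a fixed starting state.
state = "s1"
for _ in range(5):
    state = step(state, P)
    print(state)
```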

Example 11.1. According to Kemeny, Snell, and Thompson,[2] the Land of Oz is blessed by many things, but not by good weather. They never have two nice days in a row. If they have a nice day, they are just as likely to have snow as rain the next day. If they have snow or rain, they have an even chance of having the same kind of weather the next day. If there is change from snow or rain, only half of the time is this a change to a nice day. With this information we form a Markov chain as follows. We take as states the kinds of weather R, N, and S. From the above information we determine the transition probabilities. These are most conveniently represented in a square array as

\[
P = \bordermatrix{
  & R   & N   & S   \cr
R & 1/2 & 1/4 & 1/4 \cr
N & 1/2 & 0   & 1/2 \cr
S & 1/4 & 1/4 & 1/2 \cr
}
\]

[2] J. G. Kemeny, J. L. Snell, G. L. Thompson, Introduction to Finite Mathematics, 3rd ed. (Englewood Cliffs, NJ: Prentice-Hall, 1974).

Transition Matrix

The entries in the first row of the matrix $P$ in Example 11.1 represent the probabilities for the various kinds of weather following a rainy day. Similarly, the entries in the second and third rows represent the probabilities for the various kinds of weather following nice and snowy days, respectively. Such a square array is called the matrix of transition probabilities, or the transition matrix.
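
As a quick sketch of our own, not from the original text, the Land of Oz matrix can be written out numerically with the states ordered R, N, S; the checks confirm that each row is a probability distribution and that the N-to-N entry is 0, reflecting the rule that there are never two nice days in a row.

```python
import numpy as np

# Land of Oz transition matrix; rows and columns ordered R, N, S.
P = np.array([
    [1/2, 1/4, 1/4],   # from Rain
    [1/2, 0.0, 1/2],   # from Nice: never two nice days in a row
    [1/4, 1/4, 1/2],   # from Snow
])

assert np.allclose(P.sum(axis=1), 1.0)  # each row sums to 1
assert P[1, 1] == 0.0                   # P(nice tomorrow | nice today) = 0
```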

We consider the question of determining the probability that, given the chain is in state $i$ today, it will be in state $j$ two days from now. We denote this probability by $p^{(2)}_{ij}$. In Example 11.1, we see that if it is rainy today then the event that it is snowy two days from now is the disjoint union of the following three events: 1) it is rainy tomorrow and snowy two days from now, 2) it is nice tomorrow and snowy two days from now, and 3) it is snowy tomorrow and snowy two days from now. The probability of the first of these events is the product of the conditional probability that it is rainy tomorrow, given that it is rainy today, and the conditional probability that it is snowy two days from now, given that it is rainy tomorrow. Using the transition matrix $P$, we can write this product as $p_{11}p_{13}$.

The other two events also have probabilities that can be written as products of entries of $P$. Thus, we have

\[
p^{(2)}_{13} = p_{11}p_{13} + p_{12}p_{23} + p_{13}p_{33}.
\]

This equation should remind the reader of a dot product of two vectors; we are dotting the first row of $P$ with the third column of $P$. This is just what is done in obtaining the $1,3$-entry of the product of $P$ with itself. In general, if a Markov chain has $r$ states, then

\[
p^{(2)}_{ij} = \sum_{k=1}^{r} p_{ik}p_{kj}.
\]

The following general theorem is easy to prove by using the above observation.

Theorem 11.1. Let $P$ be the transition matrix of a Markov chain. The $ij$th entry $p^{(n)}_{ij}$ of the matrix $P^n$ gives the probability that the Markov chain, starting in state $s_i$, will be in state $s_j$ after $n$ steps.

The proof of this theorem is left as an exercise (Exercise 17).
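
A quick numerical check of this identity, as a sketch under the same R, N, S ordering (the variable names are ours): the $1,3$-entry of $P^2$ should equal the dot product of the first row of $P$ with the third column of $P$.

```python
import numpy as np

P = np.array([
    [1/2, 1/4, 1/4],   # Rain
    [1/2, 0.0, 1/2],   # Nice
    [1/4, 1/4, 1/2],   # Snow
])

# p^(2)_13 as the explicit sum p11*p13 + p12*p23 + p13*p33
# (indices shift by one because numpy arrays are 0-based).
p2_13 = P[0, 0]*P[0, 2] + P[0, 1]*P[1, 2] + P[0, 2]*P[2, 2]

# The same entry obtained from the matrix product of P with itself.
P2 = P @ P

assert np.isclose(p2_13, P2[0, 2])
print(P2[0, 2])  # 0.375, the Rain-to-Snow entry of P^2
```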

Example 11.2 (Example 11.1 continued). Consider again the weather in the Land of Oz. We know that the powers of the transition matrix give us interesting information about the process as it evolves. We shall be particularly interested in the state of the chain after a large number of steps. The program MatrixPowers computes the powers of $P$.

We have run the program MatrixPowers for the Land of Oz example to compute the successive powers of $P$ from 1 to 6. The results are shown in Table 11.1. We note that after six days our weather predictions are, to three-decimal-place accuracy, independent of today's weather. The probabilities for the three types of weather, R, N, and S, are .4, .2, and .4 no matter where the chain started. This is an example of a type of Markov chain called a regular Markov chain. For this type of chain, it is true that long-range predictions are independent of the starting state. Not all chains are regular, but this is an important class of chains that we shall study in detail later.

With the states ordered Rain, Nice, Snow, the successive powers are:

\[
P^1 = \begin{pmatrix} .500 & .250 & .250 \\ .500 & .000 & .500 \\ .250 & .250 & .500 \end{pmatrix}
\qquad
P^2 = \begin{pmatrix} .438 & .188 & .375 \\ .375 & .250 & .375 \\ .375 & .188 & .438 \end{pmatrix}
\]
\[
P^3 = \begin{pmatrix} .406 & .203 & .391 \\ .406 & .188 & .406 \\ .391 & .203 & .406 \end{pmatrix}
\qquad
P^4 = \begin{pmatrix} .402 & .199 & .398 \\ .398 & .203 & .398 \\ .398 & .199 & .402 \end{pmatrix}
\]
\[
P^5 = \begin{pmatrix} .400 & .200 & .399 \\ .400 & .199 & .400 \\ .399 & .200 & .400 \end{pmatrix}
\qquad
P^6 = \begin{pmatrix} .400 & .200 & .400 \\ .400 & .200 & .400 \\ .400 & .200 & .400 \end{pmatrix}
\]

Table 11.1: Powers of the Land of Oz transition matrix.
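
The MatrixPowers program itself is not reproduced in this transcription; the following numpy stand-in is our own sketch of what it computes, printing the same six powers to three decimal places.

```python
import numpy as np

P = np.array([
    [1/2, 1/4, 1/4],   # Rain
    [1/2, 0.0, 1/2],   # Nice
    [1/4, 1/4, 1/2],   # Snow
])

# Print P^1 through P^6, rounded to three decimals as in Table 11.1.
for n in range(1, 7):
    print(f"P^{n}:")
    print(np.round(np.linalg.matrix_power(P, n), 3))
```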

We now consider the long-term behavior of a Markov chain when it starts in a state chosen by a probability distribution on the set of states, which we will call a probability vector. A probability vector with $r$ components is a row vector whose entries are non-negative and sum to 1. If $u$ is a probability vector which represents the initial state of a Markov chain, then we think of the $i$th component of $u$ as representing the probability that the chain starts in state $s_i$.

With this interpretation of random starting states, it is easy to prove the following theorem.

Theorem 11.2. Let $P$ be the transition matrix of a Markov chain, and let $u$ be the probability vector which represents the starting distribution. Then the probability that the chain is in state $s_i$ after $n$ steps is the $i$th entry in the vector

\[
u^{(n)} = uP^n.
\]

The proof of this theorem is left as an exercise (Exercise 18).

We note that if we want to examine the behavior of the chain under the assumption that it starts in a certain state $s_i$, we simply choose $u$ to be the probability vector with $i$th entry equal to 1 and all other entries equal to 0.

Example 11.3. In the Land of Oz example (Example 11.1) let the initial probability vector $u$ equal $(1/3, 1/3, 1/3)$. Then we can calculate the distribution of the states after three days using Theorem 11.2 and our previous calculation of $P^3$. We obtain

\[
u^{(3)} = uP^3 = (1/3, 1/3, 1/3)
\begin{pmatrix} .406 & .203 & .391 \\ .406 & .188 & .406 \\ .391 & .203 & .406 \end{pmatrix}
= (.401, .198, .401).
\]
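
The same computation can be repeated numerically, as a brief sketch of ours; it uses the exact entries of $P$ rather than the rounded table values, and agrees with $(.401, .198, .401)$ to three places.

```python
import numpy as np

P = np.array([
    [1/2, 1/4, 1/4],   # Rain
    [1/2, 0.0, 1/2],   # Nice
    [1/4, 1/4, 1/2],   # Snow
])

u = np.array([1/3, 1/3, 1/3])          # uniform starting distribution
u3 = u @ np.linalg.matrix_power(P, 3)  # distribution after three days

print(np.round(u3, 3))  # [0.401 0.198 0.401]
```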

Examples

The following examples of Markov chains will be used throughout the chapter for exercises.

Example 11.4. The President of the United States tells person A his or her intention to run or not to run in the next election. Then A relays the news to B, who in turn relays the message to C, and so forth, always to some new person. We assume that there is a probability $a$ that a person will change the answer from yes to no when transmitting it to the next person and a probability $b$ that he or she will change it from no to yes. We choose as states the message, either yes or no. The transition matrix is then

\[
P = \bordermatrix{
    & \text{yes} & \text{no} \cr
\text{yes} & 1-a & a   \cr
\text{no}  & b   & 1-b \cr
}
\]

The initial state represents the President's choice.

Example 11.5. Each time a certain horse runs in a three-horse race, he has probability 1/2 of winning, 1/4 of coming in second, and 1/4 of coming in third, independent of the outcome of any previous race. We have an independent trials process, but it can also be considered from the point of view of Markov chain theory. The transition matrix is

\[
P = \bordermatrix{
  & W  & P   & S   \cr
W & .5 & .25 & .25 \cr
P & .5 & .25 & .25 \cr
S & .5 & .25 & .25 \cr
}
\]

Example 11.6. In the Dark Ages, Harvard, Dartmouth, and Yale admitted only male students.
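
Returning to the message chain of Example 11.4, a brief sketch of ours (the flip probabilities $a = 0.1$ and $b = 0.2$ are illustrative values of our own; the original leaves $a$ and $b$ symbolic) shows how quickly the message distribution forgets the President's original answer:

```python
import numpy as np

a, b = 0.1, 0.2   # illustrative flip probabilities, not from the text

# Transition matrix for the message chain, states ordered (yes, no).
P = np.array([
    [1 - a, a],
    [b, 1 - b],
])

# After many relays the distribution no longer depends on the initial answer:
# starting from "yes" (1, 0) or "no" (0, 1) gives the same result.
print(np.round(np.array([1.0, 0.0]) @ np.linalg.matrix_power(P, 50), 3))  # [0.667 0.333]
print(np.round(np.array([0.0, 1.0]) @ np.linalg.matrix_power(P, 50), 3))  # [0.667 0.333]
```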

