Chapter 1 Markov Chains - Yale University


Chapter 1

Markov Chains

A sequence of random variables $X_0, X_1, \ldots$ with values in a countable set $S$ is a Markov chain if at any time $n$, the future states (or values) $X_{n+1}, X_{n+2}, \ldots$ depend on the history $X_0, \ldots, X_n$ only through the present state $X_n$. Markov chains are fundamental stochastic processes that have many diverse applications. This is because a Markov chain represents any dynamical system whose states satisfy the recursion $X_n = f(X_{n-1}, Y_n)$, $n \ge 1$, where $Y_1, Y_2, \ldots$ are independent and identically distributed (i.i.d.) and $f$ is a deterministic function. That is, the new state $X_n$ is simply a function of the last state and an auxiliary random variable. Such system dynamics are typical of those for queue lengths in call centers, stresses on materials, waiting times in production and service facilities, inventories in supply chains, parallel-processing software, water levels in dams, insurance funds, stock prices, etc.
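Since the recursion $X_n = f(X_{n-1}, Y_n)$ fully specifies the dynamics, such a system is straightforward to simulate. Below is a minimal Python sketch; the particular $f$ and the distribution of the $Y_n$ (a toy queue with i.i.d. batch arrivals and one unit of service per period) are hypothetical choices for illustration, not taken from the text.

```python
import random

def f(x, y):
    # Hypothetical dynamics (not from the text): a queue-like state that
    # gains y arrivals and completes one unit of work when nonempty.
    return max(x + y - 1, 0)

def simulate(n_steps, seed=0):
    rng = random.Random(seed)
    x = 0                             # initial state X0
    path = [x]
    for _ in range(n_steps):
        y = rng.choice([0, 1, 2])     # i.i.d. auxiliary variables Y1, Y2, ...
        x = f(x, y)                   # the recursion X_n = f(X_{n-1}, Y_n)
        path.append(x)
    return path

print(simulate(10))
```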

This chapter begins by describing the basic structure of a Markov chain and how its single-step transition probabilities determine its evolution. For instance, what is the probability of reaching a certain state, and how long does it take to reach it? The next and main part of the chapter characterizes the stationary or equilibrium distribution of Markov chains. These distributions are the basis of limiting averages of various cost and performance parameters associated with Markov chains. Considerable discussion is devoted to branching phenomena, stochastic networks, and time-reversible chains. Included are examples of Markov chains that represent queueing, production systems, inventory control, reliability, and Monte Carlo simulations.

Before getting into the main text, a reader would benefit by a brief review of conditional probabilities later in this chapter and of related material on random variables and distributions in Sections 1–4 in the Appendix. The rest of the Appendix, which provides more background on probability, would be appropriate for later reading.

1.1 Introduction

This section introduces Markov chains and describes a few examples. A discrete-time stochastic process $\{X_n : n \ge 0\}$ on a countable set $S$ is a collection of $S$-valued random variables defined on a probability space $(\Omega, \mathcal{F}, P)$.

The $P$ is a probability measure on a family of events $\mathcal{F}$ (a $\sigma$-field) in an event-space $\Omega$.¹ The set $S$ is the state space of the process, and the value $X_n \in S$ is the state of the process at time $n$. The $n$ may represent a parameter other than time, such as a length or a job number.

The finite-dimensional distributions of the process are

$$P\{X_0 = i_0, \ldots, X_n = i_n\}, \qquad i_0, \ldots, i_n \in S, \; n \ge 0.$$

These probabilities uniquely determine the probabilities of all events of the process. Consequently, two stochastic processes (defined on different probability spaces or the same one) are equal in distribution if their finite-dimensional distributions are equal. Various types of stochastic processes are defined by specifying the dependency among the variables that determine the finite-dimensional distributions, or by specifying the manner in which the process evolves over time (the system dynamics). A Markov chain is defined as follows.

Definition 1. A stochastic process $X = \{X_n : n \ge 0\}$ on a countable set $S$ is a Markov chain if, for any $i, j \in S$ and $n \ge 0$,

$$P\{X_{n+1} = j \mid X_0, \ldots, X_n\} = P\{X_{n+1} = j \mid X_n\}, \qquad (1.1)$$

$$P\{X_{n+1} = j \mid X_n = i\} = p_{ij}. \qquad (1.2)$$

The $p_{ij}$ is the probability that the Markov chain jumps from state $i$ to state $j$. These transition probabilities satisfy $\sum_{j \in S} p_{ij} = 1$, $i \in S$, and the matrix $P = (p_{ij})$ is the transition matrix of the chain.

Condition (1.1), called the Markov property, says that, at any time $n$, the next state $X_{n+1}$ is conditionally independent of the past $X_0, \ldots, X_{n-1}$ given the present state $X_n$. In other words, the next state is dependent on the past and present only through the present state. The Markov property is an elementary condition that is satisfied by the state of many stochastic phenomena. Consequently, Markov chains, and related continuous-time Markov processes, are natural models or building blocks for applications.
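As a concrete illustration of these definitions, the following sketch builds a small transition matrix (the entries are hypothetical; the only constraint from the text is that each row sums to 1), verifies that row-sum property, and applies the one-step update $(\alpha P)_j = \sum_i \alpha_i p_{ij}$, a standard consequence of the definitions, to an initial distribution.

```python
from fractions import Fraction

# A hypothetical transition matrix on S = {0, 1, 2}; the only requirement
# from the text is that each row of P = (p_ij) sums to 1.
P = [
    [Fraction(1, 2), Fraction(1, 2), Fraction(0)],
    [Fraction(1, 3), Fraction(1, 3), Fraction(1, 3)],
    [Fraction(0),    Fraction(1, 4), Fraction(3, 4)],
]

# Check the defining property: sum_j p_ij = 1 for every state i.
assert all(sum(row) == 1 for row in P)

# One-step evolution of a distribution: (alpha P)_j = sum_i alpha_i p_ij.
alpha = [Fraction(1), Fraction(0), Fraction(0)]   # start in state 0
alpha_next = [sum(alpha[i] * P[i][j] for i in range(3)) for j in range(3)]
print(alpha_next)   # exact distribution of X1
```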

Condition (1.2) simply says the transition probabilities do not depend on the time parameter $n$; the Markov chain is therefore time-homogeneous. If the transition probabilities were functions of time, the process $X_n$ would be a non-time-homogeneous Markov chain. Such chains are like time-homogeneous chains, but the time dependency introduces added accounting details that we will not address here. See Exercises 12 and 13 for further insight.

¹ Further details on probability spaces are in the Appendix. We follow the convention of not displaying the space $(\Omega, \mathcal{F}, P)$ every time random variables or processes are introduced; it is mentioned only when needed for clarity.

Since the state space $S$ is countable, we will sometimes label the states by integers, such as $S = \{0, 1, 2, \ldots\}$ (or $S = \{1, \ldots, m\}$). Under this labeling, the transition matrix has the form

$$P = \begin{pmatrix} p_{00} & p_{01} & p_{02} & \cdots \\ p_{10} & p_{11} & p_{12} & \cdots \\ p_{20} & p_{21} & p_{22} & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}$$

We end this section with a few preliminary examples.

Example 2. Binomial Markov Chain. A Bernoulli process is a sequence of independent trials in which each trial results in a success or failure with respective probabilities $p$ and $q = 1 - p$. Let $X_n$ denote the number of successes in $n$ trials, for $n \ge 1$.

By direct reasoning, it follows that $X_n$ has a binomial distribution with parameters $n$ and $p$:

$$P\{X_n = k\} = \binom{n}{k} p^k (1-p)^{n-k}, \qquad 0 \le k \le n.$$

Next, suppose at the $n$th trial that $X_n = i$. Then at the next trial, $X_{n+1}$ will equal $i+1$ or $i$ with probabilities $p$ and $1-p$, respectively, regardless of the values of $X_1, \ldots, X_{n-1}$. Hence $X_n$ is a Markov chain with transition probabilities $p_{i,i+1} = p$, $p_{ii} = 1-p$ and $p_{ij} = 0$ otherwise. This binomial Markov chain is a special case of the following random walk.

Example 3. Random Walk. Suppose $Y_1, Y_2, \ldots$ are i.i.d. integer-valued random variables, and define $X_0 = 0$ and $X_n = \sum_{m=1}^{n} Y_m$, $n \ge 1$. The process $X_n$ is a random walk on the set of integers $S$, where $Y_n$ is the step size at time $n$. A random walk represents a quantity that changes over time (e.g., a stock price, an inventory level, or a gambler's fortune) such that its increments (step sizes) are i.i.d. Since $X_{n+1} = X_n + Y_{n+1}$, and $Y_{n+1}$ is independent of $X_0, \ldots, X_n$, it follows that, for any $i, j \in S$ and $n \ge 0$,

$$P\{X_{n+1} = j \mid X_0, \ldots, X_{n-1}, X_n = i\} = P\{X_n + Y_{n+1} = j \mid X_n = i\} = P\{Y_1 = j - i\}.$$

Therefore, the random walk $X_n$ is a Markov chain on the integers $S$ with transition probabilities $p_{ij} = P\{Y_1 = j - i\}$.

When the step sizes $Y_n$ take values $1$ or $-1$ with $p = P\{Y_1 = 1\}$ and $q = P\{Y_1 = -1\}$, the chain $X_n$ is a simple random walk. Its transition probabilities, for each $i$, are

$$p_{i,i+1} = p, \qquad p_{i,i-1} = q, \qquad p_{ij} = 0 \ \text{ for } j \ne i+1 \text{ or } i-1.$$

This type of walk restricted to a finite state space is described next.

Example 4. Gambler's Ruin. Consider a Markov chain on $S = \{0, 1, \ldots, m\}$ with transition matrix

$$P = \begin{pmatrix} 1 & 0 & 0 & \cdots & 0 & 0 \\ q & 0 & p & \cdots & 0 & 0 \\ \vdots & \ddots & \ddots & \ddots & & \vdots \\ 0 & \cdots & q & 0 & p & 0 \\ 0 & \cdots & 0 & q & 0 & p \\ 0 & \cdots & 0 & 0 & 0 & 1 \end{pmatrix}$$

One can interpret the state of the Markov chain as the fortune of a gambler who repeatedly plays a game in which the gambler wins or loses \$1 with respective probabilities $p$ and $q = 1 - p$. If the fortune reaches state 0, the gambler is ruined since $p_{00} = 1$ (state 0 is absorbing; the chain stays there forever). On the other hand, if the fortune reaches $m$, the gambler retires with the fortune $m$ since $p_{mm} = 1$ ($m$ is another absorbing state).
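The ruin probability in this example can be estimated by direct Monte Carlo simulation of the chain. The sketch below is a minimal illustration; the parameter values are arbitrary, and the final comment cites the classical fair-game answer $1 - i_0/m$ (a standard result, not derived in this section) as a sanity check.

```python
import random

def ruin_probability(m, i0, p, n_runs=100_000, seed=1):
    """Estimate by simulation the probability that a gambler starting with
    fortune i0 hits 0 before m, winning each play with probability p."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(n_runs):
        i = i0
        while 0 < i < m:                   # states 0 and m are absorbing
            i += 1 if rng.random() < p else -1
        ruined += (i == 0)
    return ruined / n_runs

# For the fair game p = q = 1/2, the classical ruin probability from
# fortune i0 is 1 - i0/m; here 1 - 3/10 = 0.7.
print(ruin_probability(m=10, i0=3, p=0.5))
```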

A versatile generalization to state-dependent gambles (and other applications as well) is a chain with a tridiagonal transition matrix of the form

$$P = \begin{pmatrix} r_0 & p_0 & & & \\ q_1 & r_1 & p_1 & & \\ & \ddots & \ddots & \ddots & \\ & & q_{m-1} & r_{m-1} & p_{m-1} \\ & & & q_m & r_m \end{pmatrix}$$

In this case, the outcome of the game depends on the gambler's fortune. When the fortune is $i$, the gambler either wins or loses \$1 with respective probabilities $p_i$ or $q_i$, or breaks even (the fortune does not change) with probability $r_i$. Another interpretation is that the state of the chain is the location of a random walk with state-dependent steps of size $-1$, $0$, or $1$.
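A small helper makes the structure of this matrix explicit: row $i$ places $q_i$, $r_i$, $p_i$ on the sub-, main, and super-diagonal. The function name and the parameter values below are hypothetical illustrations, chosen only so that each row sums to 1.

```python
def state_dependent_matrix(p, q, r):
    """Build the (m+1) x (m+1) tridiagonal transition matrix whose row i is
    (..., q_i, r_i, p_i, ...). Assumes p[i] + q[i] + r[i] == 1 for each i,
    with q[0] == 0 and p[m] == 0 so the chain stays inside {0, ..., m}."""
    m = len(p) - 1
    P = [[0.0] * (m + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        P[i][i] = r[i]                 # break even with probability r_i
        if i > 0:
            P[i][i - 1] = q[i]         # lose $1 with probability q_i
        if i < m:
            P[i][i + 1] = p[i]         # win $1 with probability p_i
    return P

# Hypothetical parameters for m = 3; any choice with p_i + q_i + r_i = 1 works.
P = state_dependent_matrix(p=[0.5, 0.3, 0.3, 0.0],
                           q=[0.0, 0.4, 0.4, 0.6],
                           r=[0.5, 0.3, 0.3, 0.4])
assert all(abs(sum(row) - 1) < 1e-12 for row in P)
```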

Markov chains are common models for a variety of systems and phenomena, such as the following, in which the Markov property is reasonable.

Example 5. Flexible Manufacturing System. Consider a machine that is capable of producing three types of parts. The state of the machine at time period $n$ is denoted by a random variable $X_n$ that takes values in $S = \{0, 1, 2, 3\}$, where 0 means the machine is idle and $i = 1, 2$ or $3$ means the machine produces a type $i$ part in the time period. Suppose the machine's production schedule is Markovian in the sense that the next type of part it produces, or an idle period, depends only on its current state, and the probabilities of these changes do not depend on time. Then $X_n$ is a Markov chain. For instance, its transition matrix might be

$$P = \begin{pmatrix} 1/5 & 1/5 & 1/5 & 2/5 \\ 1/10 & 1/2 & 1/10 & 3/10 \\ 1/5 & 0 & 1/5 & 3/5 \\ 1/5 & 0 & 2/5 & 2/5 \end{pmatrix}$$

Such probabilities can be estimated as in Exercise 65, provided one can observe the evolution of the system. Otherwise, the probabilities can be determined by subjective reasoning or other techniques.

Note that at any period in which the machine produces a type 1 part, in the next period it produces a type 1, 2 or 3 part with respective probabilities 1/2, 1/10, and 3/10. Also, whenever the machine is idle, it remains idle in the next period with probability 1/5. Consequently, the probability the machine is idle for $m$ periods is $(4/5)(1/5)^{m-1}$, which is a geometric distribution with parameter $4/5$; see the exercises.
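The geometric idle-time claim is easy to check numerically. The sketch below simulates idle spells directly from $p_{00} = 1/5$ (it does not need to run the whole chain) and compares the empirical run-length frequencies against $(4/5)(1/5)^{m-1}$.

```python
import random

P00 = 1/5   # probability the machine stays idle, from Example 5

def idle_run_length(rng):
    """Length of one idle spell: starting in state 0, count periods
    until the chain leaves state 0."""
    m = 1
    while rng.random() < P00:
        m += 1
    return m

rng = random.Random(2)
runs = [idle_run_length(rng) for _ in range(100_000)]
for m in range(1, 5):
    empirical = runs.count(m) / len(runs)
    theoretical = (4/5) * (1/5) ** (m - 1)   # geometric with parameter 4/5
    print(m, round(empirical, 4), round(theoretical, 4))
```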

1.2 Probabilities of Sample Paths

A basic issue in analyzing the structure of a stochastic process is to describe its finite-dimensional distributions. This section shows that these distributions for a Markov chain $X_n$ are simply products of its transition probabilities and the probability distribution of the initial state $X_0$. Finite-dimensional distributions and more general properties of sample paths² are conveniently expressed by $n$-step transition probabilities, which are obtained as the $n$th product of the transition matrix.

Proposition 6. Suppose $X_n$ is a Markov chain on $S$ with transition probabilities $p_{ij}$ and initial distribution $\alpha_i = P\{X_0 = i\}$. Then, for any $i_0, \ldots, i_n \in S$ and $n \ge 0$,

$$P\{X_0 = i_0, \ldots, X_n = i_n\} = \alpha_{i_0} p_{i_0,i_1} \cdots p_{i_{n-1},i_n}.$$

Proof. Proceeding by induction, this statement is obvious for $n = 0$. Next, assume it is true for some $n$, and let $A_n = \{X_0 = i_0, \ldots, X_n = i_n\}$. Then the statement is true for $n+1$, since

$$P(A_{n+1}) = P(A_n) P\{X_{n+1} = i_{n+1} \mid A_n\} = \alpha_{i_0} p_{i_0,i_1} \cdots p_{i_{n-1},i_n} p_{i_n,i_{n+1}}.$$

² A sample path of a stochastic process $X_n$ is a realization of it as a function of time. For instance, if $X_0(\omega) = i_0, \ldots, X_n(\omega) = i_n$, then $i_0, \ldots, i_n$ is the sample path associated with the outcome $\omega$.
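The proposition translates directly into code: the probability of a finite sample path is the initial probability times the product of the one-step transition probabilities along the path. The two-state chain below is a hypothetical example for illustration.

```python
def path_probability(alpha, P, path):
    """P{X0 = i0, ..., Xn = in} = alpha_{i0} * p_{i0,i1} * ... * p_{i_{n-1},i_n},
    as in the proposition above."""
    prob = alpha[path[0]]
    for i, j in zip(path, path[1:]):
        prob *= P[i][j]
    return prob

# Hypothetical two-state chain.
alpha = [0.5, 0.5]
P = [[0.9, 0.1],
     [0.2, 0.8]]
print(path_probability(alpha, P, [0, 0, 1, 1]))   # 0.5 * 0.9 * 0.1 * 0.8 = 0.036
```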

