


ANDERS TOLVER

AN INTRODUCTION TO MARKOV CHAINS

LECTURE NOTES FOR STOCHASTIC PROCESSES

Anders Tolver
Department of Mathematical Sciences
University of Copenhagen
Universitetsparken 5
DK-2100 Copenhagen Ø, Denmark
email: …

… printing, November 2016
Copyright Anders Tolver
ISBN: 978-87-7078-952-3

Preface

These lecture notes have been developed for the course Stochastic Processes at the Department of Mathematical Sciences, University of Copenhagen, during the teaching years 2010-2016. The material covers aspects of the theory for time-homogeneous Markov chains in discrete and continuous time on finite or countable state spaces. The backbone of this work is the collection of examples and exercises in Chapters 2 and 3. It is my hope that all mathematical results and tools required to solve the exercises are contained in Chapters 2 and 3 and in Appendix B. The manuscript was never intended to provide complete mathematical proofs of all the main results since these may be found elsewhere in the literature.

Further, inclusion of (even more) long and technical proofs would remove focus from the main goal, which is to learn how to solve problems. However, any lecturer using these lecture notes should spend part of the lectures on (sketches of) proofs in order to illustrate how to work with Markov chains in a formally correct way. This may include adding a number of formal arguments not present in the lecture notes. Some exercises in Appendix C are formulated as step-by-step instructions on how to construct formal proofs of selected theoretical results. It is definitely advisable (and probably necessary) to consult other textbooks on Markov chains in order to be able to solve all of the exercises in Appendix C. I advise students to postpone these exercises until they feel familiar with the exercises in Chapters 2 and 3. For further reading I can recommend the books by Asmussen [2003], Brémaud [1999] and Lawler [2006]. My own introduction to the topic was the lecture notes (in Danish) by Jacobsen and Keiding [1985].

Many of the exercises presented in Chapter 3 are greatly inspired by examples in Ragnar Norberg's lecture notes on Basic Life Insurance Mathematics (version: September 2002). The presentation of the mathematical results on Markov chains has many similarities to various lecture notes by Jacobsen and Keiding [1985], by Nielsen, S. F., and by Jensen, S. Part of this material has been used for Stochastic Processes 2010/2011-2015/2016 at University of Copenhagen. I thank Massimiliano Tamborrino, Ketil Biering Tvermosegaard, Nina Munkholt, Rene Aakjær Jensen, Niels Olsen, Frederik Riis Mikkelsen, Mads Rystok Bisgaard, Anne Helby Pedersen, Martin Jakobsen, Camilla Nicole Schaldemose, Susanne Ditlevsen, and the students for many useful comments to this revised version. A special thanks goes to Christian Duffau-Rasmussen, Andreas Bjerre-Nielsen, Mathias Luidor Heltberg, Frederik Sørensen, Gitte Lerche Aalborg, Line Rosendahl Meldgaard Pedersen and Søren Wengel Mogensen who agreed to spend a few hours to provide me with useful feedback on the very first version of the lecture notes.

October 2016, Anders Tolver

Contents

1 Introduction
   Motivation and some examples of Markov chains
   About these lecture notes
   Transition diagrams
   Overview of exercises
2 Markov chains in discrete time
   Definition of a Markov chain
   Classification of states
   Limit results and invariant probabilities
   Absorption probabilities
   Exercises
3 Markov chains in continuous time
   Definition and the minimal construction of a Markov chain
   Properties of the transition probabilities
   Invariant probabilities and absorption
   Birth-and-death processes
   Exercises
A Random variables and stochastic processes
   Probability measures
   Random variables
   Stochastic processes
B Mathematical tools
   Conditional …
   Formulas for sums and …
   Results for …
   First order differential equations
   Second order linear recurrence equations
   The ratio test for …
   How to do certain computations in R
C Proofs of selected results
   … of visits to state …
   … of invariant …
   The ergodic theorem for discrete-time Markov chains
D Bibliography
E Index

1 Introduction

Motivation and some examples of Markov chains

When my first child started in daycare, I started to register the outcome of a stochastic variable with two possible outcomes:

ill: meaning that the child is not ready for daycare
ok: meaning that the child is ready for daycare

Consecutive recordings of the health state of a child made every morning are an excellent example of a sample of a discrete-time stochastic process.

The sampling regime is discrete because I do not register the health state continuously at any time point but only once a day. The process is stochastic (in contrast to deterministic) because I never know with certainty whether the child will be ill or healthy on the following day. A sample of the health state on the first 17 days (also referred to as the sample path) is given below:

ok, ok, ok, ok, ok, ill, ill, ill, ill, ill, ok, ok, ok, ok, ok, ok, ill, ...

A stochastic process is a mathematical model for a sequence of random variables. The model should allow us to compute the probability of various events associated with a random phenomenon. Personally, I was particularly interested in the following type of questions:

- Will the child be ready for daycare tomorrow, permitting me to go to work?
- Will the child be ready for daycare on Friday, where I have a meeting that cannot be cancelled?
- Will the child be ready for daycare for all days during the next week?
- What is the average time between two periods of illness?

- How many days should I expect to be off work to take care of a sick child within the next year?

One of the aims of this course is to learn how to build mathematical models for random events that allow us to compute the answers to the type of questions listed above.

[Figure: The trajectory of a symmetric random walk on Z × Z observed at times n = 5, 10, 100 when it starts in state (0, 0) at time n = 0. Note that the figure does not allow us to reconstruct the entire sample path (= sequence of visited states). It turns out that the random walk returned to state (0, 0) at time n = 15. If you …, you will know the probability that it will ever return to state (0, 0).]

We are only going to deal with a very simple class of mathematical models for random events, namely the class of Markov chains on a finite or countable state space. The state space is the set of possible values for the observations. Thus, for the example above the state space consists of two states: ill and ok.
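Although transition probabilities are only defined formally in Chapter 2, the daycare questions can already be explored by simulation. Below is a minimal sketch in R (the language used for computations in Appendix B), assuming hypothetical transition probabilities: the values 0.1 and 0.3 are chosen purely for illustration and are not taken from the notes or from data.

    # Minimal sketch: simulate the two-state (ill/ok) health chain.
    simulate_health <- function(n_days, start = "ok") {
      p_ok_to_ill <- 0.1  # hypothetical: P(ill tomorrow | ok today)
      p_ill_to_ok <- 0.3  # hypothetical: P(ok tomorrow | ill today)
      x <- character(n_days)
      x[1] <- start
      for (t in seq_len(n_days)[-1]) {
        x[t] <- if (x[t - 1] == "ok") {
          if (runif(1) < p_ok_to_ill) "ill" else "ok"
        } else {
          if (runif(1) < p_ill_to_ok) "ok" else "ill"
        }
      }
      x
    }

    set.seed(1)
    simulate_health(17)  # one sample path of length 17

    # Monte Carlo estimate of P(ok on each of the next 7 days | ok today):
    mean(replicate(10000, all(simulate_health(8)[-1] == "ok")))

With the machinery of Chapter 2 such quantities can be computed exactly rather than estimated by simulation.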

Below you will find an example of a Markov chain on a countably infinite state space, but first we want to discuss what kind of restrictions are put on a model by assuming that it is a Markov chain.

Within the class of stochastic processes one could say that Markov chains are characterised by the dynamical property that they never look back. The way a Markov chain continues tomorrow is affected by where it is today but independent of where it was yesterday or the day before yesterday. As long as we know the (present) value of a Markov chain, our prediction about the future behaviour of the process does not change if we get additional information about past recordings of the process.

It is clear that many random processes from real life do not satisfy the assumption imposed by a Markov chain. When we want to guess whether a child will be ready for daycare tomorrow, it probably influences our prediction whether the child has only been ill today or whether it has been ill for the past 5 days. This suggests that a Markov chain might not be a reasonable mathematical model to describe the health state of a child.
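In symbols, the "never look back" property is the Markov property, which Chapter 2 makes precise; for a discrete-time process {X_n}_{n ≥ 0} on a countable state space S the standard formulation reads

    P( X_{n+1} = j \mid X_n = i, X_{n-1} = i_{n-1}, \ldots, X_0 = i_0 )
        = P( X_{n+1} = j \mid X_n = i )

for all n ≥ 0 and all states i_0, ..., i_{n-1}, i, j ∈ S for which the conditioning event has positive probability.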

We shall now give an example of a Markov chain on a countably infinite state space. The outcome of the stochastic process is generated in a way such that the Markov property clearly holds. The state space consists of the grid of points labeled by pairs of integers. We assume that the process starts at time zero in state (0, 0) and that (every day) the process moves one step in one of the four directions: up, down, left, right. Each direction is chosen with equal probability (= 1/4). This stochastic process is called the (symmetric) random walk on the state space

Z × Z = {(i, j) | i, j ∈ Z}.

The process satisfies the Markov property because (by construction!) the next state is always constructed by moving one step in a random direction from the current state, no matter how the process arrived at the current state.

The fact that the state space of the symmetric random walk on Z × Z is infinite makes it a more complicated but at the same time more interesting mathematical object to study. We may still (with some work) answer simple questions of the form

- What is the probability that the random walk is in state (2, 2) at time n = 10?

- What is the probability that the random walk does not return to state (0, 0) during the next time period of length 5?

However, as the state space is infinite there are a number of events that may or may not occur even if we consider an infinite time horizon. In particular, it is not obvious

- if the random walk will ever reach (hit) state (2, 2),
- if the random walk will ever return to state (0, 0),
- what the average number of visits to state (0, 0) will be if we consider a very long time horizon up to time n = 1000.

The last three questions have to do with the recurrence properties of the random walk. Much of our work on this course is centered around mathematical tools to study the long run behaviour of Markov chains on countably infinite state spaces.
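Finite simulations cannot settle whether the walk ever returns to (0, 0), but they give useful first impressions of the questions above. The following R sketch simulates the symmetric random walk and produces Monte Carlo estimates for two of the quantities; the replication counts are arbitrary choices, and all estimates carry Monte Carlo error.

    # Minimal sketch: simulate the symmetric random walk on Z x Z.
    # Each step moves up, down, left or right with probability 1/4.
    visits_to_origin <- function(n_steps) {
      steps <- matrix(c(0, 1,   0, -1,   1, 0,   -1, 0),
                      ncol = 2, byrow = TRUE)
      dirs  <- sample(1:4, n_steps, replace = TRUE)
      path  <- apply(steps[dirs, , drop = FALSE], 2, cumsum)
      sum(path[, 1] == 0 & path[, 2] == 0)  # returns to (0, 0) after time 0
    }

    set.seed(1)
    # Estimated average number of visits to (0, 0) up to time n = 1000:
    mean(replicate(1000, visits_to_origin(1000)))
    # Estimated probability of no return to (0, 0) within the next 5 steps:
    mean(replicate(10000, visits_to_origin(5) == 0))

Of course, a finite-horizon simulation can only approximate the infinite-horizon events; the recurrence theory developed later in the notes is what answers them exactly.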

So far we have only discussed mathematical models for random events that are observed at discrete time points, for instance once every day. During this course we shall also consider stochastic processes in continuous time, where the value of a random experiment is available at any time point.

The windows of my office offer an excellent view of all the cars and busses driving on Nørre Allé. The first example of a continuous-time stochastic process that comes to my mind is the number of yellow busses that have passed since I started writing this paragraph of the lecture notes (so far I have counted four!). Clearly, as I have kept my eyes fixed on Nørre Allé at any time, the value of the process is available at any time. My collected data is an example of a continuous-time stochastic process. The possible values of the process are N0 = {0, 1, 2, ...}, hence the state space is countably infinite. The jump structure of the process is, however, very simple as only jumps (upwards) of size 1 may occur. In the literature this is referred to as a counting process.

A mathematical model for the counting process of busses on Nørre Allé must describe the probability distribution of the passage times. Formally, a stochastic process includes the description of a probability space (Ω, F, P) and a family of random variables X(t), t ∈ [0, ∞), defined on Ω, with values X(t)(ω). For the stochastic process {X(t)}_{t ≥ 0} to be a Markov chain, the distribution of {X(t)}_{t ≥ 0} must satisfy some mathematical conditions that are hard to state and verify formally.
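A classical first model for a counting process of this kind is the Poisson process (a special case of the continuous-time Markov chains studied in Chapter 3), in which the times between passages are independent exponentially distributed random variables. The sketch below simulates bus passages under this assumption; the rate of 4 busses per hour and the 2-hour observation window are hypothetical values invented for illustration.

    # Minimal sketch: bus passages modelled as a Poisson counting process.
    set.seed(1)
    rate    <- 4                       # hypothetical rate: busses per hour
    horizon <- 2                       # observation window: 2 hours
    gaps    <- rexp(200, rate = rate)  # i.i.d. exponential inter-arrival times
    times   <- cumsum(gaps)            # passage times of consecutive busses
    times   <- times[times <= horizon]
    length(times)                      # N(2): busses counted by time t = 2

The simulated sample path is piecewise constant and only jumps upwards by 1, matching the jump structure described above.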

