
Expected Value and Markov Chains

Karen Ge

September 16, 2016

Abstract. A Markov chain is a random process that moves from one state to another such that the next state of the process depends only on where the process is at the present state. An absorbing state is a state that is impossible to leave once reached. We survey common methods used to find the expected number of steps needed for a random walker to reach an absorbing state in a Markov chain. These methods are: solving a system of linear equations, using a transition matrix, and using a characteristic equation.

Keywords: probability, expected value, absorbing Markov chains, transition matrix, state diagram

1 Expected Value

In this section, we give a brief review of some basic definitions, properties, and examples of expected value.

Let the random variable $X$ take on values $x_1, x_2, x_3, \ldots$ with probabilities $p_1 = P(X = x_1)$, $p_2 = P(X = x_2)$, $p_3 = P(X = x_3)$, $\ldots$, respectively. We call $P$ the probability distribution function of $X$ and define the expected value of $X$ to be

$$E(X) = \sum_i x_i\,p_i = \sum_i x_i\,P(X = x_i).$$

Example 1. What is the expected number of rolls of a fair die until a 6 turns up?

Solution. We get a 6 on the first roll with probability $\frac{1}{6}$. The probability that a 6 does not turn up on the first roll but does turn up on the second roll is $\frac{5}{6}\cdot\frac{1}{6}$. The probability that a 6 does not turn up on the first two rolls but does turn up on the third roll is $\left(\frac{5}{6}\right)^2\cdot\frac{1}{6}$, and so on. Thus the expected number of rolls is

$$E = 1\cdot\frac{1}{6} + 2\cdot\frac{5}{6}\cdot\frac{1}{6} + 3\cdot\left(\frac{5}{6}\right)^2\cdot\frac{1}{6} + 4\cdot\left(\frac{5}{6}\right)^3\cdot\frac{1}{6} + \cdots \tag{1}$$

Multiplying both sides of the equation by $\frac{5}{6}$, we get

$$\frac{5}{6}E = \frac{5}{6}\cdot\frac{1}{6} + 2\cdot\left(\frac{5}{6}\right)^2\cdot\frac{1}{6} + 3\cdot\left(\frac{5}{6}\right)^3\cdot\frac{1}{6} + \cdots \tag{2}$$

Subtracting (2) from (1) gives

$$\frac{1}{6}E = \frac{1}{6}\left(1 + \frac{5}{6} + \left(\frac{5}{6}\right)^2 + \left(\frac{5}{6}\right)^3 + \cdots\right) = \frac{1}{6}\cdot\frac{1}{1-\frac{5}{6}} = 1,$$

so $E = 6$.
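Example 1 is easy to check by simulation. Below is a minimal Python sketch of our own (the helper name, seed, and trial count are arbitrary, not from the paper); the sample mean should land near 6:

```python
import random

def rolls_until_six(rng):
    """Roll a fair die until a 6 appears; return the number of rolls."""
    rolls = 1
    while rng.randint(1, 6) != 6:
        rolls += 1
    return rolls

rng = random.Random(0)
trials = 100_000
print(sum(rolls_until_six(rng) for _ in range(trials)) / trials)  # ~6.0
```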

That is, the expected number of rolls of a fair die until a 6 turns up is 6. This can be generalized to the following theorem. Its proof is left to the reader.

Theorem 1. If an event has a probability of $p$ of occurring in a trial, then the expected number of trials to obtain the first occurrence of this event in a sequence of trials is $\frac{1}{p}$.

Example 2. What is the expected number of rolls of a fair die until all 6 numbers turn up?

Solution. The first number turns up on the first roll with probability 1, so the expected number of rolls until we get the first number is 1. After that, a different number turns up with probability $\frac{5}{6}$.

By Theorem 1, we see that the expected number of rolls until we get a result with probability $p$ is $\frac{1}{p}$. Thus, the expected number of rolls until we get two different numbers is $1 + \frac{6}{5}$. We proceed with similar reasoning and get that the expected number of rolls of a fair die until all 6 numbers turn up is

$$1 + \frac{6}{5} + \frac{6}{4} + \frac{6}{3} + \frac{6}{2} + \frac{6}{1} = 14.7.$$

Theorem 2 (Linearity of Expectation). Given two random variables $X$ and $Y$, we have $E(X + Y) = E(X) + E(Y)$.

Proof.

$$E(X+Y) = \sum_{i,j}(x_i + y_j)\,P(X = x_i, Y = y_j) = \sum_i\sum_j x_i\,P(X = x_i, Y = y_j) + \sum_i\sum_j y_j\,P(X = x_i, Y = y_j).$$

Note that $\sum_j P(X = x_i, Y = y_j) = P(X = x_i)$ and $\sum_i P(X = x_i, Y = y_j) = P(Y = y_j)$. Therefore,

$$E(X+Y) = \sum_i x_i\,P(X = x_i) + \sum_j y_j\,P(Y = y_j) = E(X) + E(Y).$$

Note that in the theorem above, $X$ and $Y$ do not have to be independent.
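The coupon-collector computation above splits the total count into six geometric stages and adds their expectations, which is linearity of expectation at work. A short simulation sketch of our own (helper name, seed, and trial count are ours) should land near $14.7$:

```python
import random

def rolls_until_all_six(rng):
    """Roll a fair die until every face 1-6 has appeared; return the roll count."""
    seen, rolls = set(), 0
    while len(seen) < 6:
        seen.add(rng.randint(1, 6))
        rolls += 1
    return rolls

rng = random.Random(0)
trials = 100_000
print(sum(rolls_until_all_six(rng) for _ in range(trials)) / trials)  # ~14.7
```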

Theorem 3. Let $X$ be a random variable taking only non-negative integer values. Then

$$E(X) = \sum_{i=1}^{\infty} P(X \ge i).$$

Proof.

$$\sum_{i=1}^{\infty} P(X \ge i) = \sum_{i=1}^{\infty}\sum_{j=i}^{\infty} P(X = j).$$

We can think of the RHS of the equation above as the summation of the entries of an upper triangular matrix (all entries below the main diagonal are zero) by rows. This summation can also be carried out by adding the entries by columns. Therefore,

$$\sum_{i=1}^{\infty}\sum_{j=i}^{\infty} P(X = j) = \sum_{j=1}^{\infty}\sum_{i=1}^{j} P(X = j) = \sum_{j=1}^{\infty} j\,P(X = j) = E(X).$$
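To see the tail-sum identity in action, here is a small check of our own (not from the paper) on the distribution of a single fair die roll, where both sides equal $\frac{7}{2}$:

```python
from fractions import Fraction

# Distribution of one fair die roll: P(X = j) = 1/6 for j = 1..6.
p = {j: Fraction(1, 6) for j in range(1, 7)}

# Direct definition: E(X) = sum_j j * P(X = j).
direct = sum(j * pj for j, pj in p.items())

# Tail-sum form from Theorem 3: E(X) = sum_{i >= 1} P(X >= i).
tails = sum(sum(pj for j, pj in p.items() if j >= i) for i in range(1, 7))

assert direct == tails == Fraction(7, 2)
```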

Example 3. What is the expected number of real numbers, chosen uniformly at random from the interval $[0, 1]$, one must select until their sum exceeds 1?

Solution. Let $N$ be the first integer index such that the sum of the real numbers $x_1, x_2, \ldots, x_N$ exceeds 1. We compute $P(N > n)$. Note that

$$P(N > n) = P\left(\sum_{i=1}^{n} x_i < 1\right).$$

This is exactly the volume of an $n$-dimensional simplex. A 1-dimensional simplex is a line segment of unit length, so it has volume (length) 1. A 2-dimensional simplex is a right isosceles triangle with two legs of unit length, so its volume (area) is $\frac{1}{2}$. A 3-dimensional simplex is a right triangular pyramid with three legs of unit length, so its volume is $\frac{1}{6}$. In general, an $n$-dimensional simplex has volume $\frac{1}{n!}$. By Theorem 3, we have

$$E(N) = \sum_{i=1}^{\infty} P(N \ge i) = \sum_{i=0}^{\infty} P(N > i) = P(N > 0) + P(N > 1) + \cdots = 1 + \frac{1}{1!} + \frac{1}{2!} + \frac{1}{3!} + \cdots = e.$$
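A Monte Carlo sketch of our own (seed and trial count arbitrary) makes the appearance of $e \approx 2.71828$ easy to believe:

```python
import random

def draws_until_sum_exceeds_one(rng):
    """Draw uniform [0, 1) numbers until the running sum exceeds 1."""
    total, count = 0.0, 0
    while total <= 1.0:
        total += rng.random()
        count += 1
    return count

rng = random.Random(2016)
trials = 200_000
print(sum(draws_until_sum_exceeds_one(rng) for _ in range(trials)) / trials)  # ~2.718
```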

2 Markov Chains

A Markov chain is a random process that moves from one state to another such that the next state of the process depends only on where the process is at present. Each transition is called a step. In a Markov chain, the next step of the process depends only on the present state; it does not matter how the process reached the current state. In other words, the process is memoryless. An absorbing state is a state that is impossible to leave once reached. A state that is not absorbing is called a transient state. If every state of a Markov chain can reach an absorbing state, the Markov chain is called an absorbing Markov chain.

2.1 Solving a System of Linear Equations

Example 4. A process moves on the integers 1, 2, 3, 4, and 5.

It starts at 1 and, on each successive step, moves to an integer greater than its present position, moving with equal probability to each of the remaining larger integers, until it reaches 5. Find the expected number of steps it takes to reach the integer 5.

Solution. Let $E(i)$ be the expected number of steps the process takes to reach integer 5 from integer $i$. We need to find $E(1)$. Clearly $E(5) = 0$ and $E(4) = 1$. When the process is at 3, there is probability $\frac{1}{2}$ each to reach 4 or 5. Thus,

$$E(3) = \frac{1}{2}\big(E(4) + 1\big) + \frac{1}{2}\big(E(5) + 1\big) = \frac{3}{2}.$$

Similarly, when the process is at 2, there is probability $\frac{1}{3}$ each to reach 3, 4, or 5.

Thus,

$$E(2) = \frac{1}{3}\big(E(3) + 1\big) + \frac{1}{3}\big(E(4) + 1\big) + \frac{1}{3}\big(E(5) + 1\big) = \frac{11}{6}.$$

Finally, when the process is at 1, there is probability $\frac{1}{4}$ each to reach 2, 3, 4, or 5. Therefore,

$$E(1) = \frac{1}{4}\big(E(2) + 1\big) + \frac{1}{4}\big(E(3) + 1\big) + \frac{1}{4}\big(E(4) + 1\big) + \frac{1}{4}\big(E(5) + 1\big) = \frac{25}{12}.$$
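The back-substitution above mechanizes nicely; a minimal Python sketch of our own (using exact arithmetic with fractions) reproduces $E(1) = \frac{25}{12}$:

```python
from fractions import Fraction

# E[i] = expected number of steps to reach 5 starting from integer i.
E = {5: Fraction(0)}
for i in range(4, 0, -1):          # solve E(4), E(3), E(2), E(1) in turn
    targets = range(i + 1, 6)      # equally likely larger integers
    E[i] = 1 + sum(E[j] for j in targets) / len(targets)

print(E[1])  # 25/12
```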

2.2 Using a Transition Matrix

Example 5. The Ice Castle in Arendelle has the shape of an icosahedron. Anna stands at the bottom vertex (A) and needs to reach the top vertex (E) to meet her sister Elsa. At each step, Anna randomly picks one of the five adjacent edges with equal probability and moves along that edge to the next vertex. Find the expected number of steps Anna takes to reach the top for the first time.

Solution. We will use a transition matrix to solve this problem. Note that the icosahedron can be divided into 4 layers. Layer 0: Anna's starting point (A); Layer 1: the vertices (B) connected with vertex A; Layer 2: the vertices (C) connected with vertex E; and Layer 3: Anna's ending point (E). Thus there are four states in this Markov chain. Define $p_{ij}$ to be the probability that Anna goes from state $i$ to state $j$. The matrix $P = (p_{ij})$ is called the transition matrix of the Markov chain. We have

$$P = \begin{pmatrix} 0 & 1 & 0 & 0 \\ \frac{1}{5} & \frac{2}{5} & \frac{2}{5} & 0 \\ 0 & \frac{2}{5} & \frac{2}{5} & \frac{1}{5} \\ 0 & 0 & 0 & 1 \end{pmatrix}.$$

We see that state (E) is an absorbing state. Let $Q$ be the sub-matrix of $P$ without the rows and columns of any absorbing states.
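A standard computation with $Q$ forms the fundamental matrix $N = (I - Q)^{-1}$, whose row sums give the expected number of steps to absorption from each transient state. The NumPy sketch below is our own illustration of that step, not quoted from the paper:

```python
import numpy as np

# Transition matrix over states (A, B-layer, C-layer, E); E is absorbing.
P = np.array([
    [0,   1,   0,   0  ],
    [1/5, 2/5, 2/5, 0  ],
    [0,   2/5, 2/5, 1/5],
    [0,   0,   0,   1  ],
])

Q = P[:3, :3]                      # transient-to-transient sub-matrix
N = np.linalg.inv(np.eye(3) - Q)   # fundamental matrix N = (I - Q)^(-1)
t = N @ np.ones(3)                 # expected steps to absorption per state

print(t[0])  # expected steps for Anna from the bottom vertex A: ~15.0
```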

