
Math 312 - Markov chains, Google's PageRank algorithm



Transcription of Math 312 - Markov chains, Google's PageRank algorithm

Math 312: Markov chains and Google's PageRank algorithm
Jeff Jauregui, October 25, 2012

Random processes

Goal: model a random process in which a system transitions from one state to another at discrete time steps. At each time, say there are n states the system could be in. At time k, we model the system as a vector x_k in R^n whose entries represent the probability of being in each of the n states. Here k = 0, 1, 2, ..., and the initial state is x_0. A probability vector is a vector in R^n whose entries are nonnegative and sum to 1.
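The two defining properties of a probability vector (nonnegative entries, sum equal to 1) can be checked mechanically. A minimal sketch in Python; the example vectors are illustrative, not taken from the lecture:

```python
def is_probability_vector(x, tol=1e-9):
    """True if x has nonnegative entries that sum to 1 (up to rounding)."""
    return all(e >= -tol for e in x) and abs(sum(x) - 1.0) <= tol

# A distribution over n = 3 states: valid.
print(is_probability_vector([0.2, 0.5, 0.3]))   # True
# Entries sum to 1, but one is negative: not a valid distribution.
print(is_probability_vector([0.7, 0.5, -0.2]))  # False
```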

Example: cities and suburbs

Model a population split between the city and the suburbs. Say x_0 = (0.6, 0.4), meaning 60% live in the city and 40% live in the suburbs. Given: each year, 5% of city dwellers move to the suburbs (the rest stay), and 3% of suburbanites move to the city (the rest stay). Let x_k = (c_k, s_k). We're told:

  c_{k+1} = 0.95 c_k + 0.03 s_k
  s_{k+1} = 0.05 c_k + 0.97 s_k

that is, x_{k+1} = M x_k with

  M = [ 0.95  0.03 ]
      [ 0.05  0.97 ]

Then x_1 = (0.582, 0.418), x_2 ≈ (0.565, 0.435), x_10 ≈ (0.473, 0.527), ..., and the iterates x_k converge to (0.375, 0.625).
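The year-by-year update can be run numerically. A sketch, assuming only the data stated above (the matrix entries 0.95, 0.03, 0.05, 0.97 come from the 5% and 3% migration rates):

```python
# Columns of M are probability vectors: column 0 says where city
# dwellers go in one year, column 1 where suburbanites go.
M = [[0.95, 0.03],
     [0.05, 0.97]]

def step(M, x):
    """One year of migration: returns the matrix-vector product M x."""
    return [sum(M[i][j] * x[j] for j in range(len(x))) for i in range(len(M))]

x = [0.60, 0.40]                  # x_0: 60% city, 40% suburbs
for k in range(1, 11):
    x = step(M, x)
    if k in (1, 2, 10):
        print(f"x_{k} = ({x[0]:.4f}, {x[1]:.4f})")
# The iterates drift toward the steady state (0.375, 0.625).
```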

Markov chains

Definition. A Markov matrix (or stochastic matrix) is a square matrix M whose columns are probability vectors. A Markov chain is a sequence of probability vectors x_0, x_1, x_2, ... such that x_{k+1} = M x_k for some Markov matrix M. Idea: a Markov chain is determined by two pieces of data: the matrix M and the initial vector x_0.

Steady-state vectors

Given a Markov matrix M, does there exist a steady-state vector? This would be a probability vector x such that M x = x. Exercise: solve for the steady-state vector in the city/suburb example.
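For the 2x2 case this can be solved in closed form. Writing M = [[1-p, q], [p, 1-q]], the condition M x = x reduces to p c = q s, and together with c + s = 1 this gives x = (q/(p+q), p/(p+q)). A sketch applying that formula to the city/suburb matrix, with p = 0.05 and q = 0.03 taken from the stated migration rates:

```python
def steady_state_2x2(p, q):
    """Steady state of M = [[1-p, q], [p, 1-q]]: M x = x with c + s = 1."""
    return (q / (p + q), p / (p + q))

c, s = steady_state_2x2(p=0.05, q=0.03)
print(c, s)   # roughly 0.375 and 0.625: 37.5% city, 62.5% suburbs long-run

# Sanity check that (c, s) really is fixed by one migration step:
assert abs(0.95 * c + 0.03 * s - c) < 1e-12
```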

Voter preferences

Suppose voter preferences (for parties D, R, and L) shift around randomly via a Markov matrix A (entries not preserved in the transcription). For instance, 20% of supporters of D transition to R each time step.
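The transcription keeps only one entry of A: the 20% D-to-R transition. As a sketch, here is a hypothetical matrix consistent with that single fact (every other entry is invented for illustration), iterated to its steady state:

```python
# Hypothetical voter-transition matrix; columns (D, R, L) are
# probability vectors. Only A[1][0] = 0.20 (20% of D -> R) comes from
# the lecture; all other entries are made up for this sketch.
A = [[0.70, 0.10, 0.10],
     [0.20, 0.80, 0.30],
     [0.10, 0.10, 0.60]]

def step(A, x):
    """One step of the chain: returns A x."""
    return [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]

x = [1 / 3, 1 / 3, 1 / 3]      # start from an even three-way split
for _ in range(500):           # iterate x <- A x until it stops moving
    x = step(A, x)
print([round(v, 4) for v in x])   # approaches (0.25, 0.55, 0.2)
```

With these invented entries the chain settles at 25% D, 55% R, 20% L regardless of the starting split, since all entries of A are positive.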

Questions

- How do we know a steady-state vector exists?
- Does a steady-state vector always have nonnegative entries?
- Is a steady-state vector unique? Can you ever guarantee it?
- Does the Markov chain always settle down to a steady-state vector?
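These questions are what the Perron-Frobenius theory behind PageRank settles: for a Markov matrix with all entries positive, a steady-state vector exists, is unique, has nonnegative entries, and the chain converges to it. The transcription ends before the PageRank slides, but the idea is to model a random web surfer as a Markov chain and rank each page by its steady-state probability. A minimal sketch on an invented four-page link graph; the damping factor 0.85 is the commonly cited Google value, not a detail taken from this lecture:

```python
# links[j] = pages that page j links to (an invented toy web).
links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}
n, d = 4, 0.85                      # d = damping factor

# Column j of the "Google matrix" G: with probability d the surfer
# follows one of page j's links uniformly; with probability 1 - d it
# teleports to a uniformly random page. Every entry of G is positive,
# so Perron-Frobenius applies and power iteration converges.
G = [[(1 - d) / n + (d / len(links[j]) if i in links[j] else 0.0)
      for j in range(n)] for i in range(n)]

x = [1 / n] * n                     # start the surfer uniformly
for _ in range(100):                # power iteration: x <- G x
    x = [sum(G[i][j] * x[j] for j in range(n)) for i in range(n)]

print(sorted(range(n), key=lambda i: -x[i]))   # pages, best rank first
```

On this toy graph, page 2 ends up ranked highest (three pages link to it), while page 3, which nothing links to, keeps only the baseline teleportation mass of (1 - d)/n.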

