
5 Random Walks and Markov Chains

A random walk on a directed graph consists of a sequence of vertices generated from a start vertex by selecting an edge, traversing the edge to a new vertex, and repeating the process. We will see that if the graph is strongly connected, then the fraction of time the walk spends at the various vertices of the graph converges to a stationary probability distribution. Since the graph is directed, there might be vertices with no out edges and hence nowhere for the walk to go. Vertices in a strongly connected component with no in edges from the remainder of the graph can never be reached unless the walk starts there, and once the walk leaves a strongly connected component it can never return. For this reason, most of our discussion of random walks will involve strongly connected graphs.

Start a random walk at a vertex x_0 and think of the starting probability distribution as putting a mass of one on x_0 and zero on every other vertex. More generally, one could start with any probability distribution p, where p is a row vector with nonnegative components summing to one, with p_x being the probability of starting at vertex x.



The probability of being at vertex x at time t+1 is the sum over each adjacent vertex y of being at y at time t and taking the transition from y to x. Let p(t) be a row vector with a component for each vertex specifying the probability mass of that vertex at time t, and let p(t+1) be the row vector of probabilities at time t+1. In matrix notation,

p(t) P = p(t+1)

where the ij-th entry of the matrix P is the probability of the walk at vertex i selecting the edge to vertex j.

A fundamental property of a random walk is that in the limit, the long-term average probability of being at a particular vertex is independent of the start vertex, or of any initial probability distribution over vertices, provided only that the underlying graph is strongly connected. These limiting probabilities are called the stationary probabilities. This fundamental theorem is proved below.

A special case of random walks, namely random walks on undirected graphs, has important connections to electrical networks. Here, each edge has a parameter called conductance, like electrical conductance, and if the walk is at vertex u, it chooses the edge from among all edges incident to u with probabilities proportional to their conductances. Two important quantities are the hitting time, the expected time to reach vertex y starting at vertex x, and the cover time, the expected time to visit every vertex.
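The update p(t) P = p(t+1) can be sketched in a few lines of Python. The 3-vertex chain below is an invented example, not one from the text; iterating the update drives p(t) toward the stationary probabilities.

```python
# A minimal sketch of the distribution update p(t) P = p(t+1) on an invented
# 3-vertex strongly connected graph.

def step(p, P):
    """One step of the walk in distribution form: returns the row vector p P."""
    n = len(p)
    return [sum(p[i] * P[i][j] for i in range(n)) for j in range(n)]

P = [[0.0, 0.5, 0.5],     # row i holds the transition probabilities out of i
     [1.0, 0.0, 0.0],
     [0.5, 0.5, 0.0]]

p = [1.0, 0.0, 0.0]       # start: all mass on vertex 0
for _ in range(100):
    p = step(p, P)

print([round(x, 3) for x in p])   # close to (4/9, 1/3, 2/9)
```

Because this example chain is strongly connected and aperiodic, p(t) itself converges; for periodic chains only the running average does, which is why the fundamental theorem is stated in terms of averages.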

These quantities are all bounded above by polynomials in the number of vertices. Proofs of these facts will rely on the analogy between random walks and electrical networks. (Probability vectors are represented by row vectors to simplify notation in equations like p(t) P = p(t+1).)

random walk                        Markov chain
graph                              stochastic process
vertex                             state
strongly connected                 persistent
aperiodic                          aperiodic
strongly connected and aperiodic   ergodic

Table 5.1: Correspondence between the terminology of random walks and Markov chains.

The theory of random walks was developed in computer science, with an important application in defining the pagerank of pages on the World Wide Web by their stationary probability. An equivalent concept called a Markov chain had previously been developed. A Markov chain has a finite set of states. For each pair of states x and y, there is a transition probability p_xy of going from state x to state y, where for each x, Σ_y p_xy = 1.

A random walk in the Markov chain starts at some state. At a given time step, if it is in state x, the next state y is selected randomly with probability p_xy. A Markov chain can be represented by a directed graph with a vertex representing each state and an edge with weight p_xy from vertex x to vertex y. We say that the Markov chain is connected if the underlying directed graph is strongly connected, that is, if there is a directed path from every vertex to every other vertex. The matrix P consisting of the p_xy is called the transition probability matrix of the chain. The terms random walk and Markov chain are used interchangeably; the correspondence between the terminologies of random walks and Markov chains is given in Table 5.1.

A state of a Markov chain is persistent if it has the property that, should the state ever be reached, the random process will return to it with probability one. This is equivalent to the property that the state is in a strongly connected component with no out edges. For most of the chapter, we assume that the underlying directed graph is strongly connected. We discuss here briefly what might happen if we do not have a strongly connected graph. Consider the directed graph in the figure, with components A, B, and C.
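Since most of the chapter assumes strong connectivity, here is a small sketch of the two properties a transition matrix is assumed to have: each row is a probability distribution, and every state can reach every other along edges with p_xy > 0. The matrices are invented for illustration.

```python
# Checks for the standing assumptions on a transition matrix P (nested lists).

def is_stochastic(P, tol=1e-12):
    """Every row of P is a probability distribution."""
    return all(min(row) >= 0 and abs(sum(row) - 1.0) < tol for row in P)

def reachable(P, s):
    """States reachable from s along edges with p_xy > 0 (depth-first search)."""
    seen, stack = {s}, [s]
    while stack:
        x = stack.pop()
        for y, pxy in enumerate(P[x]):
            if pxy > 0 and y not in seen:
                seen.add(y)
                stack.append(y)
    return seen

def strongly_connected(P):
    """Every state reaches every other state."""
    return all(len(reachable(P, s)) == len(P) for s in range(len(P)))

P_cycle = [[0.0, 1.0],
           [1.0, 0.0]]        # strongly connected two-state cycle
P_trap  = [[1.0, 0.0],
           [0.5, 0.5]]        # state 0 is absorbing: not strongly connected
```

In the second matrix, state 0 plays the role of component C above: once reached, the walk never leaves it, so it is persistent while state 1 is not.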

Starting from any vertex in A, there is a nonzero probability of eventually reaching any vertex in A. However, the probability of returning to a vertex in A is less than one, and thus vertices in A, and similarly vertices in B, are not persistent. From any vertex in C, the walk eventually will return with probability one to that vertex, since there is no way of leaving component C. Thus, vertices in C are persistent.

[Figure: (a) A directed graph with vertices having no out edges and a strongly connected component A with no in edges. (b) …]

Markov chains are used to model situations where all the information of the system needed to predict the future can be encoded in the current state. A typical example is speech, where for a small k the current state encodes the last k syllables uttered by the speaker. Given the current state, there is a certain probability of each syllable being uttered next, and these probabilities can be used to calculate the probability of a longer utterance. Another example is a gambler's assets, which can be modeled as a Markov chain where the current state is the amount of money the gambler has on hand. The model would only be valid if the gambler's bets depend only on current assets, not on past history. Later in the chapter, we study the widely used Markov Chain Monte Carlo method (MCMC).

Here, the objective is to sample a large space according to some probability distribution. The number of elements in the space may be very large, say 10^100. One designs a Markov chain whose states correspond to the elements of the space. The transition probabilities of the chain are designed so that the stationary probability of the chain is the probability distribution p with which we want to sample. One samples by taking a random walk until the probability distribution is close to the stationary distribution of the chain, and then selects the point the walk is at. The walk then continues for a number of steps until its distribution is no longer dependent on where the walk was when the first element was selected; a second point is then selected, and so on. Although it is impossible to store the graph in a computer, since it has 10^100 vertices, to do the walk one needs only to store the vertex the walk is at and be able to generate the adjacent vertices. What is critical is that the probability distribution of the walk converges to the stationary probability in time logarithmic in the number of states.

We mention two motivating examples. The first is to estimate the probability of a region R in d-space according to a probability density like the Gaussian. Put down a grid and make each grid point that is in R a state of the Markov chain.
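The MCMC idea can be sketched with a toy Metropolis-style walk; this particular sampler and target are illustrative assumptions, not the text's construction. The walk proposes a neighboring state and accepts with probability min(1, p(y)/p(x)), which makes the target p the stationary distribution.

```python
import random

def metropolis_walk(target, n, steps, seed=1):
    """Walk on states 0..n-1 arranged in a ring; returns empirical frequencies.

    target: an unnormalized positive weight function on states (a toy stand-in
    for the distribution p we want to sample)."""
    rng = random.Random(seed)
    x = 0
    counts = [0] * n
    for _ in range(steps):
        y = (x + rng.choice([-1, 1])) % n          # propose a neighbor
        if rng.random() < min(1.0, target(y) / target(x)):
            x = y                                   # accept; otherwise stay put
        counts[x] += 1
    return [c / steps for c in counts]

# Target stationary distribution proportional to (1, 2, 3, 4).
freq = metropolis_walk(lambda s: s + 1, 4, 200_000)
print([round(f, 2) for f in freq])                  # close to [0.1, 0.2, 0.3, 0.4]
```

Note that the walk never needs the full transition matrix, only the current state and its neighbors' weights, which is exactly what makes MCMC usable on spaces with 10^100 elements.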

Given a probability density p, design the transition probabilities of a Markov chain so that the stationary distribution is exactly p. In general, the number of states grows exponentially in the dimension d, but the time to converge to the stationary distribution grows only polynomially in d.

The second example is an n×n grid in the plane with a particle at each grid point. Each particle has a spin of ±1, so there are 2^(n^2) spin configurations, and the energy of a configuration is a function of its spins. A central problem in statistical mechanics is to sample spin configurations according to their probability. It is easy to design a Markov chain with one state per spin configuration so that the stationary probability of a state is a function of the state's energy. If a random walk gets close to the stationary probability in time polynomial in n rather than in 2^(n^2), then one can sample spin configurations efficiently according to their probability.

A quantity called the mixing time, loosely defined as the time needed to get close to the stationary distribution, is often much smaller than the number of states. Later, we relate the mixing time to a combinatorial notion called normalized conductance and derive good upper bounds on the mixing time in many cases.

Let p(t) be the probability distribution after t steps of a random walk. Define the long-term average probability distribution a(t) by

a(t) = (1/t) (p(0) + p(1) + ... + p(t-1)).

The fundamental theorem of Markov chains asserts that the long-term probability distribution of a connected Markov chain converges to a unique limit probability vector, which we denote by π. Executing one more step, starting from this limit distribution, we get back the same distribution. In matrix notation, πP = π, where P is the matrix of transition probabilities. In fact, there is a unique probability vector (nonnegative components summing to one) satisfying πP = π, and this vector is the limit. Since one step does not change the distribution, any number of steps would not change it either; for this reason, π is called the stationary distribution.

Before proving the fundamental theorem of Markov chains, we first prove a lemma. Let P be the transition probability matrix for a connected Markov chain. The lemma states that the n×(n+1) matrix A = [P − I, 1], obtained by augmenting the matrix P − I with an additional column of ones, has rank n.

Proof: If the rank of A = [P − I, 1] were less than n, there would be two linearly independent solutions to Ax = 0. Each row in P sums to one, so each row in P − I sums to zero. Thus x = (1, 0), where all but the last coordinate of x is 1, is one solution to Ax = 0. Assume there were a second solution (x, α) perpendicular to (1, 0).
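The theorem is stated for the running average a(t) rather than for p(t) itself because p(t) need not converge when the chain is periodic. A two-state deterministic cycle (a made-up example) shows this: p(t) alternates between (1, 0) and (0, 1) forever, yet a(t) converges to π = (1/2, 1/2).

```python
# For a periodic chain, p(t) oscillates but the average a(t) still converges.

P = [[0.0, 1.0],
     [1.0, 0.0]]          # deterministic two-state cycle

p = [1.0, 0.0]            # p(0): all mass on state 0
total = [0.0, 0.0]
t = 1000
for _ in range(t):
    total = [total[i] + p[i] for i in range(2)]                   # accumulate p(0..t-1)
    p = [sum(p[i] * P[i][j] for i in range(2)) for j in range(2)] # one step: p <- p P

a = [s / t for s in total]   # a(t) = (p(0) + ... + p(t-1)) / t
print(p, a)                  # p is still oscillating; a is (1/2, 1/2)
```

Note that πP = π holds here for π = (1/2, 1/2) even though lim p(t) does not exist, which is exactly the gap the averaged statement closes.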

Then (P − I)x + α1 = 0, or x_i = Σ_j p_ij x_j + α. Each x_i is a convex combination of some of the x_j, plus α. Let S be the set of i for which x_i attains its maximum value. The complement of S is not empty, since x is perpendicular to 1 and hence Σ_j x_j = 0. Connectedness implies that some x_k of maximum value is adjacent to some x_l of lower value; thus x_k > Σ_j p_kj x_j, so α must be greater than 0 in x_k = Σ_j p_kj x_j + α. A symmetric argument, with T the set of i for which x_i takes its minimum value, implies α < 0, producing a contradiction and thereby proving the lemma.

Theorem (Fundamental Theorem of Markov Chains): For a connected Markov chain there is a unique probability vector π satisfying πP = π. Moreover, for any starting distribution, lim_{t→∞} a(t) exists and equals π.

Proof: Note that a(t) is itself a probability vector, since its components are nonnegative and sum to 1. Run one step of the Markov chain starting with distribution a(t); the distribution after the step is a(t)P.

Calculate the change in probabilities due to this step:

a(t)P − a(t) = (1/t) [p(0)P + p(1)P + ... + p(t−1)P] − (1/t) [p(0) + p(1) + ... + p(t−1)]
             = (1/t) [p(1) + p(2) + ... + p(t)] − (1/t) [p(0) + p(1) + ... + p(t−1)]
             = (1/t) (p(t) − p(0)).

Thus b(t) = a(t)P − a(t) satisfies |b(t)| ≤ 2/t → 0 as t → ∞. By the lemma above, A has rank n, so the n×n submatrix B of A consisting of all its columns except the first is invertible. Let c(t) be obtained from b(t) by removing the first entry. Then a(t)B = [c(t), 1], and so a(t) = [c(t), 1] B^(−1) → [0, 1] B^(−1). We have the theorem with π = [0, 1] B^(−1).

Observe that the expected time r_x for a Markov chain starting in state x to return to state x is the reciprocal of the stationary probability of x, that is, r_x = 1/π_x. Intuitively, this follows by observing that if a long walk always returned to state x in exactly r_x steps, the frequency of being in state x would be 1/r_x. A rigorous proof requires the Strong Law of Large Numbers.

We finish this section with the following lemma, useful in establishing that a probability distribution is the stationary probability distribution for a random walk on a connected graph with edge probabilities: for a random walk on a strongly connected graph with probabilities on the edges, if the vector π satisfies π_x p_xy = π_y p_yx for all x and y, and Σ_x π_x = 1, then π is the stationary distribution of the walk.
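The closing lemma can be sanity-checked numerically. The chain below is an invented 3-vertex example (a lazy walk on a path): the vector π satisfies π_x p_xy = π_y p_yx for every pair, and, as the lemma asserts, πP = π.

```python
# Check that detailed balance (pi_x p_xy = pi_y p_yx) implies pi P = pi
# on a small invented example.

P = [[0.50, 0.50, 0.00],     # lazy random walk on the path 0 - 1 - 2
     [0.25, 0.50, 0.25],
     [0.00, 0.50, 0.50]]

pi = [0.25, 0.50, 0.25]      # candidate stationary distribution (sums to one)

# The hypothesis of the lemma: pi_x p_xy = pi_y p_yx for every pair x, y.
balanced = all(abs(pi[x] * P[x][y] - pi[y] * P[y][x]) < 1e-12
               for x in range(3) for y in range(3))

# The conclusion: the y-th entry of pi P, namely sum_x pi_x p_xy, equals pi_y.
piP = [sum(pi[x] * P[x][y] for x in range(3)) for y in range(3)]
stationary = all(abs(piP[y] - pi[y]) < 1e-12 for y in range(3))

print(balanced, stationary)   # True True
```

The one-line reason the lemma holds: summing π_x p_xy = π_y p_yx over x gives Σ_x π_x p_xy = π_y Σ_x p_yx = π_y, which is exactly (πP)_y = π_y.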

