
Markov Chains and Applications - University of Chicago


Markov Chains and Applications
Alexander Volfovsky
August 17, 2007

Abstract. In this paper I provide a quick overview of stochastic processes and then quickly delve into a discussion of Markov chains.


There is some assumed knowledge of basic calculus, probability, and matrix theory. I would like to thank Sam Raskin for inspirational conversations when the paper seemed to be stuck. The book that is used in order to find statements of some of the theorems is the reference guide by Larry Wasserman, All of Statistics.

1 Introduction

In a deterministic world, it is good to know that sometimes randomness can still occur. A stochastic process is the exact opposite of a deterministic one: it is a random process. Given a stochastic process and the function by which it is defined, we can speak of likely outcomes of the process. Section 2 defines Markov chains and goes through their main properties, as well as some interesting examples of the actions that can be performed with them, drawing on an important aspect of Markov chains: the Markov chain Monte Carlo (MCMC) method. Section 3 employs importance sampling in order to demonstrate this method.

2 Markov Chains

Markov chains are stochastic processes that have the Markov property.

Definition (Markov property). Informally, it is the condition that, given the present state, the past and future states are independent. Formally, we define it as follows:

    P(X_n = x | X_0, ..., X_{n−1}) = P(X_n = x | X_{n−1})   ∀ n, ∀ x.

Throughout, P always represents a transition matrix, with entries p_ij; p_ij(n) denotes the n-step transition probability P(X_n = j | X_0 = i).

Definition. State i is recurrent if P(X_n = i for some n ≥ 1 | X_0 = i) = 1; otherwise it is transient.

Definition. A chain is irreducible if every state can be reached from every other state: for all i, j there is some n with p_ij(n) > 0.

We state without proof the following results: a state i is recurrent if and only if Σ_n p_ii(n) = ∞, and it is transient if and only if Σ_n p_ii(n) < ∞.

Claim. The states of an irreducible chain are either all transient or all recurrent.

Proof. We take the shortest path from state i to state j (let it have n steps), and the shortest path from j to i (let it have m steps). Thus we have p_ij(n) = a > 0 and p_ji(m) = b > 0, and so

    p_ii(l + n + m) ≥ p_ij(n) p_jj(l) p_ji(m) = ab·p_jj(l),
    p_jj(l + n + m) ≥ p_ji(m) p_ii(l) p_ij(n) = ab·p_ii(l).

So it is obvious that Σ_n p_ii(n) and Σ_n p_jj(n) are either both finite or both infinite, and by the above results the states i and j are either both transient or both recurrent.

Claim. All states of a finite, irreducible Markov chain are recurrent.

Proof. For any states i, j there are n, m ≥ 0 such that p_ij(n) > 0 and p_ji(m) > 0, so

    Σ_{l=1}^∞ p_jj(l) ≥ Σ_{l=n+m+1}^∞ p_jj(l) ≥ Σ_{k=1}^∞ p_ji(m) p_ii(k) p_ij(n) = p_ji(m) (Σ_{k=1}^∞ p_ii(k)) p_ij(n),

which is infinite whenever i is recurrent. We hope that at this point the reader realizes a fundamental truth about transient states (as it becomes relevant soon).
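As a concrete illustration of the Markov property, a chain with a given transition matrix can be simulated directly: the draw of X_{n+1} uses only the current state X_n, never the earlier history. The following is a minimal Python sketch; the function names and the example matrix are illustrative, not from the paper.

```python
import random

def step(P, cur, rng):
    """Draw the next state using only the current one -- the Markov property."""
    r, acc = rng.random(), 0.0
    for j, p in enumerate(P[cur]):
        acc += p
        if r < acc:
            return j
    return len(P[cur]) - 1  # guard against floating-point round-off

def simulate_chain(P, x0, steps, rng):
    """Simulate `steps` transitions of the chain with transition matrix P."""
    path = [x0]
    for _ in range(steps):
        path.append(step(P, path[-1], rng))
    return path

rng = random.Random(0)
P = [[0.9, 0.1],   # hypothetical two-state transition matrix
     [0.5, 0.5]]
path = simulate_chain(P, 0, 10, rng)
```

Long runs of such a simulation are exactly what the MCMC methods discussed later rely on.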

We have that, given a nonzero probability p of returning to transient state i after starting from it, the distribution of the number of returns to i is geometric, and so has finite expectation. For any state j there is an n > 0 such that p_ij(n) > 0, so for m > n we have p_ii(m) ≥ p_ij(n) p_ji(m − n), and thus:

    Σ_{k=1}^∞ p_ii(k) ≥ Σ_{k=n+1}^∞ p_ii(k) ≥ p_ij(n) (Σ_{k=n+1}^∞ p_ji(k − n)) = p_ij(n) (Σ_{l=1}^∞ p_ji(l)),

which implies that

    Σ_{l=1}^∞ p_ji(l) ≤ (1 / p_ij(n)) (Σ_{k=1}^∞ p_ii(k)) < ∞

whenever i is transient. Now fix a state i and consider the number of returns to it. If every state were transient then, since there are finitely many states, the expected number of times that we pass some state j would be infinite, and so the expected number of returns to state j after starting at state j would also be infinite; this contradicts the expected number of returns to a transient state being finite.

Definition. The mean recurrence time for a recurrent state i is m_i = Σ_n n·f_ii(n), where f_ii(n) is the probability that the chain first returns to i in exactly n steps. A recurrent state is null if m_i = ∞ and non-null otherwise.

The above result can easily be extended to the fact that a finite-state Markov chain has all its recurrent states non-null.

Definition. The period of state i is d(i) = d if p_ii(n) = 0 for d ∤ n and d = gcd{n | p_ii(n) > 0}.

Thus a state is periodic if d(i) > 1 and aperiodic if d(i) = 1.

Definition. For a recurrent state i, consider lim_{n→∞} p_ii(n·d(i)): if it is greater than zero, the state is non-null recurrent; otherwise it is null recurrent.

Claim. If i ↔ j then d(i) = d(j).

Proof. We consider m, n such that p_ij(n) > 0 and p_ji(m) > 0. From the Kolmogorov-Chapman equations* we have

    p_ii(m + n) = Σ_{k=1}^N p_ik(n) p_ki(m) ≥ p_ij(n) p_ji(m).

Now by definition p_ii(n + m) > 0 and d(i) | (n + m). Now we can consider p_ii(m + l + n) and apply the same reasoning as above to arrive at:

    p_ii(m + l + n) = Σ_{r=1}^N Σ_{k=1}^N p_ir(n) p_rk(l) p_ki(m) ≥ p_ij(n) p_jj(l) p_ji(m).

So if we have that p_jj(l) > 0, then p_ii(m + l + n) > 0 and so d(i) | (n + m + l); but combining this with d(i) | (m + n) we get that d(i) | l, and so, since d(j) = gcd{l | p_jj(l) > 0}, we get that d(j) ≥ d(i).

*The proof of the Kolmogorov-Chapman equations is provided in the Appendix.

We can apply the same logic going from j to i, and so we arrive at the conclusion that d(i) = d(j).

Definition. A chain is ergodic if all of its states are non-null recurrent and aperiodic.

Definition. Let π be a probability mass function; then we say that π is a stationary probability distribution if π = πP (that is, if π is an eigenvector of the transition probability matrix P).

Definition. A limiting distribution exists if P^n converges, as n → ∞, to a matrix all of whose rows equal some distribution π.

Theorem (Fundamental theorem for Markov chains). An irreducible, ergodic Markov chain has a unique stationary distribution π. The limiting distribution exists and is equal to π.

Proof. Since the chain is ergodic, it is non-null recurrent, which implies from the above that π_j = lim_{n→∞} p_ij(n) > 0 for all j. For any finite M,

    Σ_{j=0}^M p_ij(n) ≤ Σ_{j=0}^∞ p_ij(n) = 1,

and, letting n → ∞, we get that Σ_{j=0}^M π_j ≤ 1 for all M, which implies that the same is true for the infinite case: Σ_{j=0}^∞ π_j ≤ 1. Considering n + 1 steps,

    p_ij(n + 1) = Σ_{k=0}^∞ p_ik(n) p_kj ≥ Σ_{k=0}^M p_ik(n) p_kj,

which implies that π_j ≥ Σ_{k=0}^M π_k p_kj for all M, and hence that π_j ≥ Σ_{k=0}^∞ π_k p_kj. If this inequality were strict for some j, then summing over j would give

    Σ_j π_j > Σ_j Σ_k π_k p_kj = Σ_k π_k Σ_j p_kj = Σ_k π_k,

a contradiction, so π_j = Σ_k π_k p_kj for every j. Finally, we define π′_i = π_i / Σ_k π_k to be the normalized stationary distribution.
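The theorem can be checked numerically: repeatedly multiplying any starting distribution by P (power iteration) converges, for an irreducible ergodic chain, to the unique stationary π. The three-state matrix below is a made-up example; because it is doubly stochastic (rows and columns sum to 1), its stationary distribution is the uniform one, which gives a known answer to check against.

```python
def stationary(P, iters=200):
    """Approximate pi satisfying pi = pi P by power iteration,
    starting from the uniform distribution."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[k] * P[k][j] for k in range(n)) for j in range(n)]
    return pi

# Hypothetical 3-state chain; doubly stochastic, so pi = (1/3, 1/3, 1/3).
P = [[0.5, 0.3, 0.2],
     [0.2, 0.5, 0.3],
     [0.3, 0.2, 0.5]]
pi = stationary(P)
```

Power iteration converges geometrically at a rate set by the second-largest eigenvalue modulus of P, so 200 iterations are far more than enough here.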

For uniqueness, suppose σ is any stationary distribution and start the chain from it; then

    σ_i = P(X_n = i) = Σ_j P(X_n = i | X_0 = j) P(X_0 = j) = Σ_j p_ji(n) σ_j ≥ Σ_{j=0}^M p_ji(n) σ_j,

and taking n → ∞ and then M → ∞ we get σ_i ≥ Σ_j π_i σ_j = π_i. But we know from the transition matrix that p_ji(n) ≤ 1, and so

    σ_i ≤ Σ_{j=0}^M p_ji(n) σ_j + Σ_{j=M+1}^∞ σ_j;

taking n → ∞ we get σ_i ≤ π_i Σ_{j=0}^M σ_j + Σ_{j=M+1}^∞ σ_j. Since σ is a stationary distribution it sums to 1, and so letting M → ∞ we get σ_i ≤ π_i. Hence σ_i = π_i: any stationary distribution equals the limiting distribution.

The above process thus shows the existence of a limiting distribution, and so we now know that an ergodic chain converges to its unique stationary distribution. This allows us to take any bounded function g and say, with probability 1, that

    lim_{N→∞} (1/N) Σ_{n=1}^N g(X_n) = E_π(g) = Σ_j g(j) π_j.

Definition. π satisfies detailed balance if π_i p_ij = p_ji π_j for all i, j.

Claim. If π satisfies detailed balance, then π is a stationary distribution.

Proof. We consider the jth element of πP, which is Σ_i π_i p_ij = Σ_i π_j p_ji = π_j Σ_i p_ji = π_j.

The points made above are illustrated with examples from Wasserman's book (provided there as exercises rather than as solved examples).
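The detailed-balance claim is easy to verify numerically. A small sketch follows; the three-state chain and its π are invented for illustration, not taken from the paper.

```python
def satisfies_detailed_balance(pi, P, tol=1e-12):
    """Check pi_i * p_ij == pi_j * p_ji for all pairs of states."""
    n = len(P)
    return all(abs(pi[i] * P[i][j] - pi[j] * P[j][i]) < tol
               for i in range(n) for j in range(n))

def is_stationary(pi, P, tol=1e-12):
    """Check the jth entry of pi P equals pi_j for every j."""
    n = len(P)
    return all(abs(sum(pi[k] * P[k][j] for k in range(n)) - pi[j]) < tol
               for j in range(n))

# A random-walk-style chain on 3 states (hypothetical numbers):
P = [[0.50, 0.50, 0.00],
     [0.25, 0.50, 0.25],
     [0.00, 0.50, 0.50]]
pi = [0.25, 0.50, 0.25]
```

Here pi_0 * p_01 = 0.125 = pi_1 * p_10 and pi_1 * p_12 = 0.125 = pi_2 * p_21, so detailed balance holds, and the claim predicts that `is_stationary(pi, P)` must also hold.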

Example. Consider a two-state Markov chain with state space S = {1, 2} and transition matrix

    P = [ 1−a    a  ]
        [  b    1−b ]

where 0 < a, b < 1. Prove that

    lim_{n→∞} P^n = [ b/(a+b)   a/(a+b) ]
                    [ b/(a+b)   a/(a+b) ].

Solution. To verify this we need to first show that the chain is irreducible and ergodic; that is, we need to show that all states are recurrent, non-null, and aperiodic. Since p_ii(1) > 0, we have that this chain is aperiodic, and since a, b > 0 it is irreducible. The chain must be at some location at every step, so Σ_{j∈S} p_ij(n) = 1, and hence lim_{n→∞} Σ_{j∈S} p_ij(n) = 1; in our case S is finite, so the limit may be taken inside the sum. Now, if every state in our MC were transient or null recurrent, we would have lim_{n→∞} p_ij(n) = 0, thus contradicting the above statement, so at least one state must be non-null recurrent. Further, since we are in a finite-state MC, we have that all states are non-null recurrent (due to question (7) below). To find the stationary distribution we compute

    [π_1 π_2] P = [ π_1(1−a) + π_2·b,  π_1·a + π_2(1−b) ],

and now we can solve this system of equations with the added restriction π_1 + π_2 = 1, obtaining

    [π_1 π_2] = [ b/(a+b)   a/(a+b) ].

By the fundamental theorem this is also the limiting distribution, which is exactly the claimed limit of P^n.

Example. Let

    P = [ 0 1 ]
        [ 1 0 ].

Show that π = (1/2, 1/2) is a stationary distribution. Does this chain converge?

Solution. All we are asked to show is that π = [1/2 1/2] is a stationary distribution, so:

    [1/2 1/2] · [ 0 1 ; 1 0 ] = [ 0 + 1/2,  1/2 + 0 ] = [ 1/2 1/2 ].

However, the chain does not converge: it is clear that if we start at a fixed state we simply alternate between the two states. The period of each of the states is 2, and so the chain does not converge.

3 Importance Sampling

Importance sampling is used in statistics as a variance-reduction method. The method that we describe below does not necessarily optimize the variance, but importance sampling allows us to estimate the distribution of a random variable using observations from a different distribution. The idea behind the process is that, due to the weighting of the random variable from which we have the observations, we get a better estimate during the simulation; there is a best (in terms of minimizing the variance) weight. In the basic importance sampling problem we are trying to estimate a quantity I using

    Î = (1/N) Σ_{i=1}^N h(X_i) f(X_i) / g(X_i),

where the X_i are drawn from g.
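A sketch of the estimator Î under assumed choices of mine (none of these specifics are from the paper): f is the standard normal density, h(x) = x², so the true value is I = E[X²] = 1, and the proposal g is a wider normal that we actually sample from.

```python
import math
import random

def norm_pdf(x, mu=0.0, sigma=1.0):
    """Density of the Normal(mu, sigma^2) distribution."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def importance_estimate(h, f_pdf, g_pdf, g_sample, N, rng):
    """Estimate I = E_f[h(X)] by sampling X_i ~ g and averaging
    the weighted terms h(X_i) * f(X_i) / g(X_i)."""
    total = 0.0
    for _ in range(N):
        x = g_sample(rng)
        total += h(x) * f_pdf(x) / g_pdf(x)
    return total / N

rng = random.Random(42)
# True answer: E[X^2] = 1 for X ~ N(0,1); we sample from N(0, 2) instead.
est = importance_estimate(
    h=lambda x: x * x,
    f_pdf=lambda x: norm_pdf(x),
    g_pdf=lambda x: norm_pdf(x, sigma=2.0),
    g_sample=lambda r: r.gauss(0.0, 2.0),
    N=200_000,
    rng=rng,
)
```

With 200,000 draws the estimate should sit within a few thousandths of 1; the wider proposal keeps the weights f/g bounded, which is what keeps the variance of the estimator finite.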

The optimal choice of g in this case is

    g(x) = |h(x)| f(x) / ∫ |h(s)| f(s) ds.

Example (from Larry Wasserman's All of Statistics). Let f_{X,Y}(x, y) be a bivariate density and let (X_1, Y_1), ..., (X_N, Y_N) ∼ f_{X,Y}. Let w(x) be a known probability density function, and define

    f̂_X(x) = (1/N) Σ_{i=1}^N f_{X,Y}(x, Y_i) w(X_i) / f_{X,Y}(X_i, Y_i).

(1) Show that, for each x, f̂_X(x) →p f_X(x).

(2) Let Y ∼ N(0, 1) and X | Y = y ∼ N(y, 1 + y²). Use the method in (1) to estimate f_X(x).

Solution. Each summand is its own random variable, and we note that they are all identically and independently distributed. We compute the expectation of one of them:

    E[ f_{X,Y}(x, Y_i) w(X_i) / f_{X,Y}(X_i, Y_i) ]
      = ∫∫ [ f_{X,Y}(x, y) w(z) / f_{X,Y}(z, y) ] f_{X,Y}(z, y) dz dy
      = ∫∫ f_{X,Y}(x, y) w(z) dz dy
      = ∫ f_{X,Y}(x, y) dy = f_X(x),

since w integrates to 1. And so we can apply the law of large numbers to note that

    (1/N) Σ_{i=1}^N f_{X,Y}(x, Y_i) w(X_i) / f_{X,Y}(X_i, Y_i) →p E[ f_{X,Y}(x, Y_i) w(X_i) / f_{X,Y}(X_i, Y_i) ],

which is the same as f̂_X(x) →p f_X(x). The variance is computed similarly, providing simply the easily verifiable answer:

    var f̂_X(x) = (1/N) [ ∫∫ f²_{X,Y}(x, y) w²(z) / f_{X,Y}(z, y) dz dy − f²_X(x) ].
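The estimator f̂_X(x) from the exercise can be written directly. A sketch (the function names are mine, not Wasserman's), demonstrated on the exercise's model Y ∼ N(0,1), X | Y = y ∼ N(y, 1 + y²), with w taken to be the standard normal density:

```python
import math
import random

def norm_pdf(t, mu=0.0, var=1.0):
    """Density of the Normal(mu, var) distribution."""
    return math.exp(-0.5 * (t - mu) ** 2 / var) / math.sqrt(2 * math.pi * var)

def estimate_marginal(samples, joint_pdf, w_pdf, x):
    """f_X_hat(x) = (1/N) * sum_i joint(x, Y_i) * w(X_i) / joint(X_i, Y_i)."""
    return sum(joint_pdf(x, y) * w_pdf(xi) / joint_pdf(xi, y)
               for xi, y in samples) / len(samples)

# Draw (X_i, Y_i) from the exercise's model: Y ~ N(0,1), X|Y=y ~ N(y, 1+y^2).
rng = random.Random(0)
samples = []
for _ in range(5000):
    y = rng.gauss(0.0, 1.0)
    xi = rng.gauss(y, math.sqrt(1.0 + y * y))
    samples.append((xi, y))

joint = lambda x, y: norm_pdf(x, mu=y, var=1.0 + y * y) * norm_pdf(y)
fhat = estimate_marginal(samples, joint, norm_pdf, 0.0)  # estimate of f_X(0)
```

A useful sanity check: if X and Y are independent and w equals the marginal of X, every summand collapses to exactly f_X(x), so the estimator has zero variance in that degenerate case.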

For (2) we have

    f_X(x) = ∫ f_{X|Y}(x|y) f_Y(y) dy = ∫ (1/√(2π(1+y²))) e^{−(x−y)²/(2(1+y²))} · (1/√(2π)) e^{−y²/2} dy.

We use importance sampling as in (1) in order to estimate f_X(x), taking w to be the standard normal density. Then:

    f̂_X(x) = (1/N) Σ_{i=1}^N f_{X,Y}(x, Y_i) w(X_i) / f_{X,Y}(X_i, Y_i)
            = (1/N) Σ_{i=1}^N f_{X|Y}(x | Y_i) w(X_i) / f_{X|Y}(X_i | Y_i)
            = (1/N) Σ_{i=1}^N (1/√(2π)) exp{ −(1/(2(1+Y_i²))) [ (x−Y_i)² − (X_i−Y_i)² + X_i²(1+Y_i²) ] }.

Using the above, we have performed a very basic application of the Markov chain Monte Carlo method. The above-discussed method is a very basic and introductory one; there are many possible algorithms that we can apply in order to arrive at the best possible estimate, but they are a topic for another paper and will only be mentioned briefly. Among them are the Metropolis-Hastings algorithms, which use a conditional proposal distribution: one supposes that X_0 was chosen arbitrarily and then proceeds to use the proposal distribution in order to generate candidates that are either added to the chain or are overlooked based on a specific acceptance rule. There are many different incarnations of this algorithm, with different suggested proposal distributions.
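The Metropolis-Hastings idea the paper closes with can be sketched in a few lines. Below is a random-walk Metropolis sampler, a special case of Metropolis-Hastings with a symmetric Normal proposal; the target, step size, and chain length are arbitrary choices of mine, with the target a standard normal known only up to its normalizing constant.

```python
import math
import random

def metropolis(log_target, x0, steps, step_size, rng):
    """Random-walk Metropolis: propose x' = x + Normal(0, step_size^2),
    accept with probability min(1, target(x') / target(x)); otherwise the
    candidate is overlooked and the chain stays at x."""
    chain, x = [x0], x0
    lp = log_target(x0)
    for _ in range(steps):
        cand = x + rng.gauss(0.0, step_size)
        lp_cand = log_target(cand)
        if rng.random() < math.exp(min(0.0, lp_cand - lp)):
            x, lp = cand, lp_cand
        chain.append(x)
    return chain

rng = random.Random(1)
# Target: standard normal, specified only up to its normalizing constant.
chain = metropolis(lambda x: -0.5 * x * x, 0.0, 50_000, 1.0, rng)
```

Because the acceptance ratio only involves the ratio of target densities, the unknown normalizing constant cancels, which is what makes the method useful for densities such as f_X above that we can only evaluate up to proportionality.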

