
Stochastic Differential Equations. - NYU Courant





Chapter 4. Stochastic Differential Equations.

4.1 Existence and Uniqueness.

Our goal in this chapter is to construct Markov processes that are diffusions in $\mathbb{R}^d$ corresponding to specified coefficients $a(t,x)=\{a_{i,j}(t,x)\}$ and $b(t,x)=\{b_i(t,x)\}$. Ito's method consists of starting from any $(\Omega,\mathcal{F}_t,P)$ and an adapted Brownian motion $\beta(t,\omega)=\{\beta_i(t,\omega)\}$ relative to $(\Omega,\mathcal{F}_t,P)$, with values in $\mathbb{R}^d$; that is to say, $\beta(\cdot)$ has almost surely continuous paths and
$$\exp\Big[\langle\theta,\beta(t)\rangle-\frac{t\|\theta\|^2}{2}\Big]$$
is a martingale with respect to $(\Omega,\mathcal{F}_t,P)$ for every $\theta\in\mathbb{R}^d$. The basic assumptions on $a$ and $b$ are that the symmetric positive semidefinite matrix $a(t,x)$ can be written as $a(t,x)=\sigma(t,x)\,\sigma^*(t,x)$ for some matrix $\sigma(t,x)$ that satisfies a Lipschitz condition
$$\|\sigma(t,x)-\sigma(t,y)\|\le A\,|x-y|,$$
and that the coefficients $b_i(t,x)$ satisfy a similar condition
$$\|b(t,x)-b(t,y)\|\le A\,|x-y|.$$
For simplicity we will assume that for some constant $C$, $\|\sigma(t,x)\|\le C$ and $\|b(t,x)\|\le C$. Note that the choice of $\sigma$ is not unique. We only assume that there is a choice of $\sigma$ that satisfies the Lipschitz condition; the bounds are mainly a matter of convenience. Given $s_0\ge 0$ and an $\mathcal{F}_{s_0}$-measurable, $\mathbb{R}^d$-valued square integrable function $\xi_0(\omega)$,

there exists an almost surely continuous, progressively measurable function $\xi(t)=\xi(t,\omega)$ for $t\ge s_0$ that solves the equation
$$\xi(t)=\xi_0+\int_{s_0}^t\sigma(s,\xi(s))\,d\beta(s)+\int_{s_0}^t b(s,\xi(s))\,ds.\tag{4.1}$$
The solution is unique in the class of progressively measurable functions. The proofs of existence and uniqueness follow very closely the standard Picard method for constructing solutions to ODEs. We define $\xi_0(t)\equiv\xi_0$ for $t\ge s_0$ and define successively, for $k\ge 1$,
$$\xi_k(t)=\xi_0+\int_{s_0}^t\sigma(s,\xi_{k-1}(s))\,d\beta(s)+\int_{s_0}^t b(s,\xi_{k-1}(s))\,ds.\tag{4.2}$$
Let us remark that the iterations are well defined: they generate progressively measurable, almost surely continuous functions at each stage, and by induction they are well defined. In order to prove the convergence of the iteration scheme we estimate successive differences. Let us assume without loss of generality that $s_0=0$, and pick a time interval $[0,T]$ in which we will prove convergence. Since $T$ is arbitrary, that will be enough. If we denote the difference $\xi_k(t)-\xi_{k-1}(t)$ by $\eta_k(t)$, we have
$$\eta_{k+1}(t)=\int_0^t[\sigma(s,\xi_k(s))-\sigma(s,\xi_{k-1}(s))]\,d\beta(s)+\int_0^t[b(s,\xi_k(s))-b(s,\xi_{k-1}(s))]\,ds=\int_0^t\hat\sigma_k(s)\,d\beta(s)+\int_0^t e_k(s)\,ds.\tag{4.3}$$
Because of the Lipschitz assumption,
$$\|\hat\sigma_k(s)\|\le A\,\|\eta_k(s)\|\quad\text{and}\quad\|e_k(s)\|\le A\,\|\eta_k(s)\|.\tag{4.4}$$
We can estimate
$$\sup_{0\le\tau\le t}\|\eta_{k+1}(\tau)\|\le\sup_{0\le\tau\le t}\Big\|\int_0^\tau\hat\sigma_k(s)\,d\beta(s)\Big\|+\int_0^t\|e_k(s)\|\,ds.$$
By Doob's inequality for martingales, the properties of stochastic integrals and equation (4.4),
$$E\Big[\sup_{0\le\tau\le t}\Big\|\int_0^\tau\hat\sigma_k(s)\,d\beta(s)\Big\|^2\Big]\le C_0\,E\Big[\Big\|\int_0^t\hat\sigma_k(s)\,d\beta(s)\Big\|^2\Big]=C_1\int_0^t E\big[\|\hat\sigma_k(s)\|^2\big]\,ds\le A^2C_1\int_0^t E\big[\|\eta_k(s)\|^2\big]\,ds.$$
On the other hand we can also estimate, for $t\le T$,
$$E\Big[\Big(\int_0^t\|e_k(s)\|\,ds\Big)^2\Big]\le T\,E\Big[\int_0^t\|e_k(s)\|^2\,ds\Big]\le A^2T\int_0^t E\big[\|\eta_k(s)\|^2\big]\,ds.$$
Putting the two pieces together, if we denote $\Delta_k(t)=E\big[\sup_{0\le\tau\le t}\|\eta_k(\tau)\|^2\big]$, then, with $C_T=A^2C_1(1+T)$,
$$\Delta_{k+1}(t)\le C_T\int_0^t\Delta_k(s)\,ds.$$
Clearly
$$\eta_1(t)=\int_0^t\sigma(s,\xi_0)\,d\beta(s)+\int_0^t b(s,\xi_0)\,ds$$
and $\Delta_1(t)\le C_T\,t$. By induction,
$$\Delta_k(t)\le\frac{C_T^k\,t^k}{k!}.$$
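The Picard iteration above can be watched converging numerically. The sketch below runs the scheme on an Euler time grid for one fixed Brownian path; the coefficients $\sigma$ and $b$ are arbitrary bounded Lipschitz choices invented for the illustration, not anything prescribed by the text.

```python
import numpy as np

# Picard iteration for  xi(t) = x0 + int sigma(xi) dbeta + int b(xi) ds,
# discretized on a grid, along one fixed Brownian path.  The coefficients
# are arbitrary bounded Lipschitz examples (Lipschitz constant A = 0.2),
# taken time-independent for simplicity.
rng = np.random.default_rng(0)

T, N = 1.0, 1000
dt = T / N
dbeta = rng.normal(0.0, np.sqrt(dt), N)     # increments of one Brownian path

sigma = lambda x: 1.0 + 0.2 * np.sin(x)
b     = lambda x: 0.2 * np.cos(x)
x0 = 1.0

def picard_step(xi):
    """Plug the previous iterate into both integrals (Euler sums)."""
    out = np.empty(N + 1)
    out[0] = x0
    out[1:] = x0 + np.cumsum(sigma(xi[:-1]) * dbeta + b(xi[:-1]) * dt)
    return out

xi = np.full(N + 1, x0)                     # xi_0(t) == x0
sup_diffs = []
for k in range(8):
    nxt = picard_step(xi)
    sup_diffs.append(float(np.max(np.abs(nxt - xi))))
    xi = nxt

print(sup_diffs)    # sup norms of the successive differences eta_k
```

The printed sup-norm differences play the role of $\eta_k$; their rapid decay mirrors the $C_T^k t^k/k!$ bound.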

From the convergence of $\sum_k\big[\frac{C_T^k T^k}{k!}\big]^{\frac12}$ we conclude that
$$\sum_k E\Big[\sup_{0\le t\le T}\|\eta_k(t)\|\Big]<\infty.$$
By Fubini's theorem,
$$\sum_k\sup_{0\le t\le T}\|\eta_k(t)\|<\infty\quad a.e.\ P.$$
In other words, for almost all $\omega$ with respect to $P$, $\lim_{k\to\infty}\xi_k(t)=\xi(t)$ exists uniformly in any finite time interval $[0,T]$. The limit $\xi(t)$ is easily seen to be a progressively measurable solution of the equation.

Uniqueness is a slight variation of the same method. If we have two solutions $\xi(t)$ and $\xi'(t)$, their difference $\eta(t)$ satisfies
$$\eta(t)=\int_0^t[\sigma(s,\xi(s))-\sigma(s,\xi'(s))]\,d\beta(s)+\int_0^t[b(s,\xi(s))-b(s,\xi'(s))]\,ds=\int_0^t\hat\sigma(s)\,d\beta(s)+\int_0^t e(s)\,ds$$
with $\|\hat\sigma(s)\|\le A\,\|\eta(s)\|$ and $\|e(s)\|\le A\,\|\eta(s)\|$. Just as in the proof of convergence, for the quantity $\Delta(t)=E\big[\sup_{0\le s\le t}\|\eta(s)\|^2\big]$ we can now obtain
$$\Delta(t)\le C_T\int_0^t\Delta(s)\,ds.$$
We have the obvious estimate $\Delta(t)\le C_T$, and we obtain by iteration
$$\Delta(t)\le\frac{(C_T)^{k+1}\,t^k}{k!}$$
for every $k$. Therefore $\Delta(t)\equiv 0$, implying uniqueness.

The uniqueness theorem has a special form: if two solutions of the equation are constructed on the same space, for the same Brownian motion, with the same choice of $\sigma$, then they are identical for almost all $\omega$.
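The step from the integral inequality to the factorial bound, used in both the convergence and the uniqueness arguments, is a short induction; written out:

```latex
% Induction: \Delta(t) \le C_T gives the case k = 0.  Assuming
% \Delta(t) \le (C_T)^{k+1} t^k / k! and feeding it back into
% \Delta(t) \le C_T \int_0^t \Delta(s)\,ds yields
\Delta(t) \;\le\; C_T \int_0^t \frac{(C_T)^{k+1} s^{k}}{k!}\,ds
         \;=\; \frac{(C_T)^{k+2}\, t^{k+1}}{(k+1)!}\,,
% and since (C_T)^{k+1} t^k / k! \to 0 as k \to \infty for fixed t,
% it follows that \Delta(t) = 0.
```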

This seems to leave open the possibility that somehow different choices of $\sigma$, or constructions in different probability spaces, could produce different results. That this is not the case is easily established. Before we return to this, let us proceed with the consequences. We can start with a constant $x$ for our initial value at some time $s$ and construct a solution $\xi(t)=\xi(t;s,x)$ for $t\ge s$. If we define
$$p(s,x;t,A)=P\big[\xi(t;s,x)\in A\big],$$
then our solutions are Markov processes with transition probability $p(s,x;t,A)$. The proof is based on the following argument. Because of uniqueness, the solution starting from time $0$ can be solved up to time $s$, and then we can start again at time $s$ with the initial value equal to the old solution, and we should not get anything other than the solution obtainable in a single step. In other words,
$$\xi(t;s,\xi(s;0,x))=\xi(t;0,x).$$
Since the solution $\xi(t;s,\xi(s;0,x))$ only depends on $\xi(s;0,x)$, which is $\mathcal{F}_s$-measurable, and on the increments $d\beta$ of the Brownian paths over $[s,t]$, which are independent of $\mathcal{F}_s$, the conditional distribution is
$$P[\xi(t)\in A\,|\,\mathcal{F}_s]=P[\xi(t;s,\xi(s))\in A\,|\,\mathcal{F}_s]=P[\xi(t;s,z)\in A]\big|_{z=\xi(s)}=p(s,\xi(s);t,A),$$
establishing the Markov property. A similar argument will yield the strong Markov property.
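The restart identity $\xi(t;s,\xi(s;0,x))=\xi(t;0,x)$ also holds exactly, step by step, for the Euler discretization of the equation, provided both runs consume the same Brownian increments. A minimal sketch (the coefficients are again arbitrary illustrative choices):

```python
import numpy as np

# The flow identity behind the Markov property holds exactly for the
# Euler scheme when the direct run and the restarted run consume the
# same sequence of Brownian increments.
rng = np.random.default_rng(1)

T, N = 1.0, 1000
dt = T / N
dbeta = rng.normal(0.0, np.sqrt(dt), N)

sigma = lambda t, x: 1.0 + 0.2 * np.sin(x)
b     = lambda t, x: 0.2 * np.cos(x)

def solve(x_start, i0, i1):
    """Euler scheme for dx = sigma dbeta + b dt over grid steps [i0, i1)."""
    x = x_start
    for i in range(i0, i1):
        t = i * dt
        x = x + sigma(t, x) * dbeta[i] + b(t, x) * dt
    return x

x0 = 1.0
m = N // 2                                  # restart time s = T/2
one_step = solve(x0, 0, N)                  # solve on [0, T] in one go
restart  = solve(solve(x0, 0, m), m, N)     # solve to s, restart from xi(s)
print(one_step, restart)
```

Because both runs perform the identical sequence of floating-point operations, the two values agree exactly, not merely up to rounding.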

We use the fact that after a stopping time $\tau$ the future increments of the Brownian motion are still independent of the $\sigma$-field $\mathcal{F}_\tau$. There are some details to check about restarting the SDE at a stopping time, but this is left as an exercise.

If we have two solutions, on two different spaces, of the same equation with the same constant (non-random) initial value, with the same $\sigma$ and $b$ that satisfy our assumptions, then they have the same distributions as stochastic processes. If we look at our construction, each iteration $\xi_k(t)$ was a well-defined function of $\xi_{k-1}$ and the Brownian increments. The iteration scheme is the same in both. At each stage they are identical functions of different Brownian motions; therefore they have the same distribution. Pass to the limit.

If $\xi(t)$ is any solution anywhere, for any choice of the square root, then $\xi$ is a diffusion corresponding to the coefficients $a=\sigma\sigma^*$ and $b$, and can be represented, by enlarging the space if necessary, as a solution of the same equation with any arbitrary choice of the square root.

In particular, if one square root is available with the Lipschitz property and $b$ is also Lipschitz, we are back in the old situation. Therefore if there is a Lipschitz choice available, then the distribution of any solution with any choice of the square root is identical to the one coming from the Lipschitz choice. In particular, the distributions of any two Lipschitz choices are identical.

4.2 Some Examples. A Discussion of Uniqueness.

The Ornstein-Uhlenbeck process: the stochastic differential equation
$$dx(t)=\sigma\,d\beta(t)-a\,x(t)\,dt,\qquad x(0)=x_0$$
has an explicit solution
$$x(t)=e^{-at}x_0+\sigma e^{-at}\int_0^t e^{as}\,d\beta(s),$$
which has a Gaussian distribution with mean $e^{-at}x_0$ and variance given by
$$\sigma^2(t)=\sigma^2e^{-2at}\int_0^t e^{2as}\,ds=\frac{\sigma^2}{2a}\big(1-e^{-2at}\big).$$
This is a Markov process with stationary Gaussian transition probability densities:
$$p(t;x,y)=\frac{1}{\sqrt{2\pi}\,\sigma(t)}\exp\Big[-\frac{(y-e^{-at}x)^2}{2\sigma^2(t)}\Big].$$
This is particularly interesting when $a>0$, which is the stable case; then
$$\lim_{t\to\infty}\sigma^2(t)=\bar\sigma^2=\frac{\sigma^2}{2a}\qquad\text{and}\qquad\lim_{t\to\infty}p(t;x,y)=\frac{1}{\sqrt{2\pi}\,\bar\sigma}\exp\Big[-\frac{y^2}{2\bar\sigma^2}\Big].$$

Geometric Brownian motion: the function $x(t)=x_0\exp\big[\sigma\beta(t)+\mu t\big]$ satisfies, according to Ito's formula, the equation
$$dx(t)=\sigma x(t)\,d\beta(t)+\Big(\mu+\frac{\sigma^2}{2}\Big)x(t)\,dt,\qquad x(0)=x_0,$$
so that a solution of
$$dx(t)=\sigma x(t)\,d\beta(t)+\mu x(t)\,dt,\qquad x(0)=x_0$$
is provided by
$$x(t)=x_0\exp\Big[\sigma\beta(t)+\Big(\mu-\frac{\sigma^2}{2}\Big)t\Big].$$
Notice the behavior
$$\frac{1}{t}\log x(t)\simeq\mu-\frac{\sigma^2}{2}\quad a.e.$$
as well as
$$\frac{1}{t}\log E[x(t)]\simeq\mu.$$
The explanation is that the larger expectation is accounted for by certain very large values with very small probabilities.
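The Ornstein-Uhlenbeck variance formula and its stationary limit can be checked directly against the defining integral; the parameter values below are arbitrary.

```python
import numpy as np

# Check of the Ornstein-Uhlenbeck variance formula
#   sigma2(t) = sigma^2 e^{-2at} int_0^t e^{2as} ds
#             = sigma^2 / (2a) * (1 - e^{-2at})
# and of the stationary limit sigma^2 / (2a).
a, sig = 0.7, 1.3        # arbitrary stable-case parameters (a > 0)

def var_integral(t, n=200_000):
    """Left-Riemann evaluation of sigma^2 e^{-2at} int_0^t e^{2as} ds."""
    s = np.linspace(0.0, t, n, endpoint=False)
    ds = t / n
    return sig**2 * np.exp(-2 * a * t) * np.sum(np.exp(2 * a * s)) * ds

def var_closed(t):
    return sig**2 / (2 * a) * (1.0 - np.exp(-2 * a * t))

for t in (0.1, 1.0, 5.0):
    print(t, var_integral(t), var_closed(t))

stationary = sig**2 / (2 * a)
print(stationary, var_closed(50.0))   # the variance saturates at sigma^2/(2a)
```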

The solution
$$x(t)=x_0\exp\Big[\sigma\beta(t)+\Big(\mu-\frac{\sigma^2}{2}\Big)t\Big]$$
of
$$dx(t)=\sigma x(t)\,d\beta(t)+\mu x(t)\,dt,\qquad x(0)=x_0$$
is a nice smooth map of Brownian paths and makes sense for all continuous functions $f$:
$$x(t;f)=x_0\exp\Big[\sigma f(t)+\Big(\mu-\frac{\sigma^2}{2}\Big)t\Big],$$
and for smooth functions as well. If we replace $\beta$ by a smooth path $f$, it solves
$$dx(t)=\sigma x(t)\,df(t)+\Big(\mu-\frac{\sigma^2}{2}\Big)x(t)\,dt,\qquad x(0)=x_0.$$
The Ito map satisfies the wrong equation on smooth paths. This is unavoidable.

There are various ways of constructing a solution that corresponds to a diffusion with coefficients $a(t,x)=\{a_{i,j}(t,x)\}$ and $b(t,x)=\{b_i(t,x)\}$. For a square root $\sigma$ satisfying $\sigma\sigma^*=a$ we can attempt to solve the SDE
$$dx(t)=\sigma(t,x(t))\,d\beta(t)+b(t,x(t))\,dt,\qquad x(0)=x_0$$
on the Wiener space and get a map $\beta(\cdot)\to x(\cdot)$. Such a solution, if it exists, will be called a strong solution. A martingale solution is a measure $P$ on $\Omega=C([0,\infty);\mathbb{R}^d)$ such that $P[x(0)=x_0]=1$ and for each smooth $f$ the expression
$$f(x(t))-f(x(0))-\int_0^t(L_sf)(x(s))\,ds$$
is a martingale with respect to $(\Omega,\mathcal{F}_t,P)$. If we can construct on some probability space $(\Omega,\mathcal{F}_t,P)$ a Brownian motion $\beta(\cdot)$ and an $x(\cdot)$ that satisfy
$$x(t)=x_0+\int_0^t\sigma(s,x(s))\,d\beta(s)+\int_0^t b(s,x(s))\,ds,$$
then we call $x(\cdot)$ a weak solution to the SDE.
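The "wrong equation on smooth paths" point can be seen numerically: driving the equation $dx=\sigma x\,df+\mu x\,dt$ with a smooth path $f$ produces $x_0e^{\sigma f(t)+\mu t}$ (the chain rule, with no quadratic-variation correction), not the Ito-map value $x_0e^{\sigma f(t)+(\mu-\sigma^2/2)t}$. A sketch with the arbitrary choice $f(t)=\sin t$:

```python
import math

# For a smooth driver f, the Riemann-Stieltjes equation
#   dx = sigma*x*df + mu*x*dt,  x(0) = x0,
# is solved by x0*exp(sigma*f(t) + mu*t), whereas the Ito map would
# give x0*exp(sigma*f(t) + (mu - sigma^2/2)*t).  Parameters arbitrary.
sigma, mu, x0, T, N = 0.8, 0.3, 1.0, 1.0, 200_000
f = math.sin

x, dt = x0, T / N
for i in range(N):
    t = i * dt
    df = f(t + dt) - f(t)
    x += sigma * x * df + mu * x * dt    # Euler step along the smooth path

smooth_answer = x0 * math.exp(sigma * f(T) + mu * T)
ito_answer    = x0 * math.exp(sigma * f(T) + (mu - sigma**2 / 2) * T)
print(x, smooth_answer, ito_answer)
```

The Euler result lands on the chain-rule answer and stays well away from the Ito-map value, which is the content of the remark above.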

We make the following observations. Any strong solution is a weak solution, and if $\sigma$ is Lipschitz, then any weak solution is a strong solution. In particular, two weak solutions on the same space involving the same Brownian motion are identical. The distribution $P$ of any weak solution is a martingale solution, and conversely any martingale solution is the distribution of some weak solution. For a given square root $\sigma$, if we define the $2d\times 2d$ matrix $\tilde a$ as the $2\times 2$ matrix of $d\times d$ blocks
$$\tilde a=\begin{bmatrix}a&\sigma\\ \sigma^*&I\end{bmatrix}$$
and $\tilde b$ as $(b,0)$, then a weak solution of $\sigma,b$ is the same as a martingale solution of $\tilde a,\tilde b$.

Any two weak solutions on different probability spaces can be put on the same space with the same Brownian motion. This needs an explanation. What we mean is the following: let $P_1$ and $P_2$ be two martingale solutions for $\tilde a,\tilde b$. Then we can construct a $Q$ which is a martingale solution for the $3d$-dimensional problem, with coordinates $x,y,z$, for $\hat a,\hat b$, where, in blocks of $d\times d$,
$$\hat a=\begin{bmatrix}a(x)&\sigma(x)\sigma^*(y)&\sigma(x)\\ \sigma(y)\sigma^*(x)&a(y)&\sigma(y)\\ \sigma^*(x)&\sigma^*(y)&I\end{bmatrix}$$
while $\hat b$ is given by $[b(t,x),b(t,y),0]$, and which has the following two additional properties:

1. The distribution of the $x,z$ coordinates is $P_1$ and that of the $y,z$ coordinates is $P_2$.

2. Given the $z$ coordinate, the $x$ and $y$ coordinates are conditionally independent.

We start with $P$, the Wiener measure, and $P_i^\omega$, the conditional distribution of $x(\cdot)$ given the Brownian motion under $P_i$, and define
$$Q=P(d\omega)\,\big[P_1^\omega\otimes P_2^\omega\big];$$
i.e. we build in conditional independence. We can check that $Q$ is a martingale solution for the $3d$-dimensional problem. This construction allows us to make the following observation: if it is true that for some $\sigma,b$ any two weak solutions on the same space with the same Brownian motion are identical, then any weak solution is a strong solution, and in such a context the martingale solution is unique.
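Returning to the block matrix $\tilde a$ introduced above: one way to see why it takes that form is that for a weak solution the pair $(x(t),\beta(t))$ is itself a diffusion, with joint covariance built from $d\langle x,x\rangle=a\,dt$, $d\langle x,\beta\rangle=\sigma\,dt$, $d\langle\beta,\beta\rangle=I\,dt$; equivalently, $\tilde a$ factors through the $2d\times d$ matrix $\tilde\sigma=\binom{\sigma}{I}$:

```latex
\tilde a \;=\;
\begin{bmatrix} a & \sigma \\ \sigma^* & I \end{bmatrix}
\;=\;
\begin{bmatrix} \sigma \\ I \end{bmatrix}
\begin{bmatrix} \sigma^* & I \end{bmatrix}
\;=\; \tilde\sigma\,\tilde\sigma^{\,*},
% using a = sigma sigma^*; in particular \tilde a is automatically
% symmetric and positive semidefinite, as a diffusion matrix must be.
```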

