
MATH 545, Stochastic Calculus Problem set 2

January 24, 2019

These problems are due on TUE Feb 5th. You can give them to me in class or drop them in my box. In all of the problems $E$ denotes the expected value with respect to the specified probability measure. Read [Klebaner], Chapter 4 and the Brownian Motion Notes (by FEB 7th).

Problem 1 (Klebaner, Exercise ). Let $\{B_t\}_{t\ge 0}$ be a standard Brownian motion. Show that $\{X_t\}_{t\in[0,T]}$, defined as below, is a Brownian motion.

a) $X_t = -B_t$.

We check that the defining properties of Brownian motion hold. It is clear that $X_0 = 0$ and that the increments of the process are independent. For $t > s$, the increments can be written as
$$(-B_t) - (-B_s) = -(B_t - B_s).$$
Because $B_t - B_s$ is a Gaussian RV with mean $0$ and variance $t - s$, $-(B_t - B_s)$ must have the same distribution.

b) $X_t = B_{T-t} - B_T$ for $T < \infty$.

It is clear that $X_0 = 0$. For $t > s$, the increments of the process are given by
$$X_t - X_s = (B_{T-t} - B_T) - (B_{T-s} - B_T) = B_{T-t} - B_{T-s}.$$
These increments are independent of $X_s = B_{T-s} - B_T$ by the independent increments property of Brownian motion.

The increments are also clearly Gaussian random variables with mean $0$ and variance
$$\mathrm{Var}(X_t - X_s) = \mathrm{Var}(B_{T-t} - B_{T-s}) = |T - t - (T - s)| = t - s.$$

c) $X_t = cB_{t/c^2}$ for all $c > 0$, $T < \infty$.

The independence of increments and the $X_0 = 0$ property are trivial, as they are not affected by the scaling. The increments are clearly Gaussian random variables, as they are differences of Gaussian random variables, and the scaling preserves the mean-$0$ property. The variance of the increments is given by
$$\mathrm{Var}(X_t - X_s) = \mathrm{Var}(cB_{t/c^2} - cB_{s/c^2}) = c^2\,\mathrm{Var}(B_{t/c^2} - B_{s/c^2}) = c^2(t/c^2 - s/c^2) = t - s.$$

d) $X_t = \begin{cases} tB_{1/t}, & \text{if } t > 0,\\ 0, & \text{if } t = 0.\end{cases}$

By Theorem 3.3 in [Klebaner], it suffices to show that $X_t$ is a mean-zero Gaussian process with covariance structure $\mathrm{Cov}(X_s, X_t) = \min(s,t)$. Because rescaling time and Brownian motion paths does not affect the mean of the process nor its Gaussian structure, the first two points above are trivial. Then, for $s < t$, we compute the covariance structure:
$$\mathrm{Cov}(X_t, X_s) = \mathrm{Cov}(tB_{1/t}, sB_{1/s}) = ts\,\mathrm{Cov}(B_{1/t}, B_{1/s}) = ts\min(1/t, 1/s) = \frac{ts}{t} = \min(t,s).$$
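As a quick numerical sanity check of parts c) and d) (not part of the solution), the following sketch simulates Brownian motion at the relevant times and compares empirical variances and covariances with the values computed above. It assumes NumPy is available; the times, the constant $c$ and the sample size are arbitrary illustration choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def brownian_values(times, n_paths, rng):
    """Sample B at the given (sorted, positive) times for n_paths independent paths."""
    times = np.asarray(times, dtype=float)
    dt = np.diff(np.concatenate(([0.0], times)))             # lengths of the increments
    increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, len(times)))
    return np.cumsum(increments, axis=1)                      # columns are B_{t_1}, ..., B_{t_k}

s, t, c, n_paths = 0.4, 1.3, 2.0, 200_000

# Part c): X_t = c * B_{t/c^2}; the increment X_t - X_s should have variance t - s.
B = brownian_values([s / c**2, t / c**2], n_paths, rng)
X = c * B
print("Var(X_t - X_s) ~", np.var(X[:, 1] - X[:, 0]), " expected:", t - s)

# Part d): X_t = t * B_{1/t}; the covariance Cov(X_t, X_s) should be min(s, t).
# Note 1/t < 1/s since s < t, so we sample B at (1/t, 1/s) in increasing order.
B = brownian_values([1.0 / t, 1.0 / s], n_paths, rng)
Xt, Xs = t * B[:, 0], s * B[:, 1]
print("Cov(X_t, X_s)  ~", np.cov(Xt, Xs)[0, 1], " expected:", min(s, t))
```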

To show continuity of the given process at $0$, one could use the strong law of large numbers for a sum of independent Gaussian random variables:
$$\frac{1}{n}B_n = \frac{1}{n}\sum_{i=0}^{n-1}(B_{i+1} - B_i) \to 0$$
in the sense of almost sure convergence of random variables as $n \to \infty$.

Problem 2 (Klebaner, Exercise ). Let $B_t$ and $W_t$ be two independent Brownian motions. Show that $X_t := (B_t + W_t)/\sqrt{2}$ is also a Brownian motion. Find the correlation between $B_t$ and $X_t$.

We have $X_0 = 0$, and $X_t$ has independent increments. The increments $X_t - X_s$ are mean-$0$ Gaussian random variables. The variance of the increments is given by
$$\mathrm{Var}(X_t - X_s) = \tfrac{1}{2}\,\mathrm{Var}\big((B_t + W_t) - (B_s + W_s)\big) = \tfrac{1}{2}\big(\mathrm{Var}(B_t - B_s) + \mathrm{Var}(W_t - W_s) + 2\,\mathrm{Cov}(B_t - B_s, W_t - W_s)\big) = \tfrac{1}{2}\big((t-s) + (t-s) + 0\big) = t - s,$$
where in the second-to-last equality we used that $W$ and $B$ are independent.

Problem 3 (Klebaner, Exercise ). Let $M_t := \max_{0\le s\le t} B_s$. Show that the random variables $|B_t|$, $M_t$ and $M_t - B_t$ have the same distribution for all $t > 0$.

We have seen in class that for $m > 0$ we have $P[M_t > m] = 2P[B_t > m]$, so
$$\varrho_M(m) = -\frac{\partial}{\partial m}P[M_t > m] = -2\frac{\partial}{\partial m}P[B_t > m] = 2\varrho_{0,0,t}(m),$$
where $\varrho_{0,0,t}$ denotes the density of a $N(0,t)$ random variable, while for $m < 0$ we have $\varrho_M(m) = 0$.

Similarly we have that
$$\varrho_{|B|}(m) = -\frac{\partial}{\partial m}P[B_t > m] - \frac{\partial}{\partial m}P[B_t < -m] = 2\varrho_{0,0,t}(m).$$
The third part of the exercise can be solved by simply applying the Thm. in [Klebaner] and integrating:
$$\varrho_{M-B}(m) = \int_{-m}^{\infty} \varrho_{B,M}(x, m + x)\,dx = \int_{-m}^{\infty} \frac{2\big(2(m+x) - x\big)}{\sqrt{2\pi}\,t^{3/2}}\, e^{-\frac{(2(m+x) - x)^2}{2t}}\,dx = \int_{0}^{\infty} \frac{2(m+x)}{\sqrt{2\pi}\,t^{3/2}}\, e^{-\frac{(m+x)^2}{2t}}\,dx$$
$$= -2\frac{\partial}{\partial m}\Big[\frac{1}{\sqrt{2\pi t}}\int_{0}^{\infty} e^{-\frac{(m+x)^2}{2t}}\,dx\Big] = -2\frac{\partial}{\partial m}\Big[\frac{1}{\sqrt{2\pi t}}\int_{m}^{\infty} e^{-\frac{x^2}{2t}}\,dx\Big] = -2\frac{\partial}{\partial m}P[B_t > m] = 2\varrho_{0,0,t}(m)$$
for $m > 0$, where in the third and fifth equalities we have performed the change of variables $x \mapsto x + m$.

(Hint for Problem 1(d): Use Theorem 3.3 in [Klebaner]. Some extra work is needed at $t = 0$ here to prove continuity. To this end, you can use that $\lim_{t\to 0} tB_{1/t} = \lim_{n\to\infty}\frac{1}{n}B_n = 0$ almost surely.)

Problem 4. Let $\{B_t\}_{t\ge 0}$ be a standard Brownian motion.

A) For any $0 \le s < t$, show that the joint distribution of $(B_s, B_t)$ is a bivariate normal distribution and determine the mean vector and covariance matrix of this bivariate normal distribution.

We use the result at the end of pp. 59 of [Klebaner] for the random variables $X = B_s$ and $Y = B_t - B_s$, which are independent Gaussian random variables with mean $0$ and variances $s$ and $t - s$. Hence, the random vector $(B_s, B_t) = (X, X + Y)$ is distributed according to a $2$-dimensional Gaussian distribution with mean vector $\mu = (0,0)$ and covariance matrix
$$\Sigma = \begin{pmatrix} s & s \\ s & t \end{pmatrix}.$$
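The covariance matrix above is easy to confirm by simulation. The following minimal sketch (not part of the solution; it assumes NumPy, and the times and sample size are arbitrary) samples $(B_s, B_t)$ as $(X, X + Y)$ with independent $X \sim N(0,s)$ and $Y \sim N(0,t-s)$ and prints the empirical covariance matrix, which should be close to the theoretical one.

```python
import numpy as np

rng = np.random.default_rng(1)
s, t, n = 0.7, 2.0, 500_000

# (B_s, B_t) = (X, X + Y) with independent X ~ N(0, s) and Y ~ N(0, t - s).
X = rng.normal(0.0, np.sqrt(s), size=n)
Y = rng.normal(0.0, np.sqrt(t - s), size=n)
Bs, Bt = X, X + Y

print("empirical  :\n", np.cov(Bs, Bt))
print("theoretical:\n", np.array([[s, s], [s, t]]))
```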

B) Find a matrix $A = \begin{pmatrix} a_{ss} & a_{st} \\ a_{ts} & a_{tt} \end{pmatrix}$ so that $\begin{pmatrix} Z_1 \\ Z_2 \end{pmatrix}$, defined as follows, has a standard bivariate normal distribution:
$$\begin{pmatrix} Z_1 \\ Z_2 \end{pmatrix} := A\begin{pmatrix} B_s \\ B_t \end{pmatrix} = \begin{pmatrix} a_{ss} & a_{st} \\ a_{ts} & a_{tt} \end{pmatrix}\begin{pmatrix} B_s \\ B_t \end{pmatrix} = \begin{pmatrix} a_{ss}B_s + a_{st}B_t \\ a_{ts}B_s + a_{tt}B_t \end{pmatrix}.$$
A standard bivariate normal distribution is a bivariate normal distribution where the means of both coordinate variables are zero and the covariance matrix is the identity matrix. You can use the fact that any linear combination of random variables following a multivariate normal distribution has a multivariate normal distribution.

Let $\begin{pmatrix} Z_1 \\ Z_2 \end{pmatrix}$ have a standard bivariate normal distribution. We have
$$\begin{pmatrix} B_s \\ B_t \end{pmatrix} \overset{D}{=} \begin{pmatrix} \sqrt{s} & 0 \\ \sqrt{s} & \sqrt{t-s} \end{pmatrix}\begin{pmatrix} Z_1 \\ Z_2 \end{pmatrix}.$$
To see this, one can check that the right side has a centered bivariate normal distribution with covariance matrix $\begin{pmatrix} s & s \\ s & t \end{pmatrix}$. Thus, $A$ must be the inverse of $\begin{pmatrix} \sqrt{s} & 0 \\ \sqrt{s} & \sqrt{t-s} \end{pmatrix}$, i.e.,
$$A = \begin{pmatrix} \sqrt{s} & 0 \\ \sqrt{s} & \sqrt{t-s} \end{pmatrix}^{-1} = \begin{pmatrix} \frac{1}{\sqrt{s}} & 0 \\ -\frac{1}{\sqrt{t-s}} & \frac{1}{\sqrt{t-s}} \end{pmatrix}.$$

(Hint: The discussion at the end of pp. 59 of [Klebaner] should be useful. Or, using the independence of $B_s$ and $B_t - B_s$, you can first find the joint density of $(B_s, B_t - B_s)$, then do a transformation to get the joint density of $(B_s, B_t)$ and recognize that it is a density of some bivariate normal distribution.)

Problem 5 (More martingales in Brownian Motion). Let $\{B_t\}_{t\ge 0}$ be a standard Brownian motion with natural filtration $\mathbb{F} = \{\mathcal{F}_t\}_{t\ge 0}$, where $\mathcal{F}_t = \sigma(B_s, 0\le s\le t)$.

(a) Compute $E[B_t^4 \mid \mathcal{F}_s]$ for $t > s \ge 0$.

We have that
$$E[B_t^4 \mid \mathcal{F}_s] = E\big[(B_s + (B_t - B_s))^4 \mid \mathcal{F}_s\big] = E\big[B_s^4 + 4B_s^3(B_t - B_s) + 6B_s^2(B_t - B_s)^2 + 4B_s(B_t - B_s)^3 + (B_t - B_s)^4 \mid \mathcal{F}_s\big]$$
$$= B_s^4 + 4B_s^3\,E[B_t - B_s \mid \mathcal{F}_s] + 6B_s^2\,E[(B_t - B_s)^2 \mid \mathcal{F}_s] + 4B_s\,E[(B_t - B_s)^3 \mid \mathcal{F}_s] + E[(B_t - B_s)^4 \mid \mathcal{F}_s]$$
$$= B_s^4 + 6B_s^2\,E[(B_t - B_s)^2 \mid \mathcal{F}_s] + E[(B_t - B_s)^4 \mid \mathcal{F}_s] = B_s^4 + 6B_s^2(t-s) + 3(t-s)^2,$$
where in the third equality we have used that $B_s$ is $\mathcal{F}_s$-measurable and that $B_t - B_s$ is independent of $\mathcal{F}_s$, in the fourth that odd moments of centered normal distributions are $0$, and in the last the moments of Brownian motion increments, $E[(B_t - B_s)^2] = t - s$ and $E[(B_t - B_s)^4] = 3(t-s)^2$.

(b) Consider the function $f_4(t,x) = x^4 - 6tx^2 + 3t^2$. Show that $\{M_t\}_{t\ge 0}$ given by $M_t = f_4(t, B_t) = B_t^4 - 6tB_t^2 + 3t^2$ is a martingale adapted to the filtration $\mathbb{F}$.

We only check that $E[M_t \mid \mathcal{F}_s] = M_s$. In light of the above we have that
$$E[M_t \mid \mathcal{F}_s] = E[B_t^4 \mid \mathcal{F}_s] - 6t\,E[B_t^2 \mid \mathcal{F}_s] + 3t^2 = E[B_t^4 \mid \mathcal{F}_s] - 6t\big(B_s^2 + (t-s)\big) + 3t^2$$
$$= B_s^4 + 6B_s^2(t-s) + 3(t-s)^2 - 6t\big(B_s^2 + (t-s)\big) + 3t^2 = B_s^4 - 6sB_s^2 + 3s^2 = M_s.$$

(c) We have shown in class that $M_t = f(t, B_t)$ is a martingale for the cases
$$f(t,x) = f_1(t,x) := x, \qquad f(t,x) = f_2(t,x) := x^2 - t, \qquad f(t,x) = g(\lambda, t, x) := e^{\lambda x - \lambda^2 t/2},$$
and the corresponding martingales are $\{B_t\}_{t\ge 0}$, $\{B_t^2 - t\}_{t\ge 0}$ and $\{e^{\lambda B_t - \lambda^2 t/2}\}_{t\ge 0}$.
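Before the PDE check below, here is a quick Monte Carlo sanity check of parts (a) and (b); it is an illustrative sketch only (assuming NumPy; the times, the value of $B_s$ and the sample size are arbitrary), where conditioning on $\mathcal{F}_s$ is emulated by fixing $B_s$ and averaging over the independent increment.

```python
import numpy as np

rng = np.random.default_rng(2)
s, t, n = 1.0, 2.5, 1_000_000

Bs = 0.8                                   # a fixed value of B_s (emulates conditioning on F_s)
incr = rng.normal(0.0, np.sqrt(t - s), n)  # B_t - B_s ~ N(0, t - s), independent of F_s
Bt = Bs + incr

# Part (a): E[B_t^4 | F_s] should equal B_s^4 + 6 B_s^2 (t - s) + 3 (t - s)^2.
print(np.mean(Bt**4), "vs", Bs**4 + 6 * Bs**2 * (t - s) + 3 * (t - s)**2)

# Part (b): E[M_t | F_s] should equal M_s for M_t = B_t^4 - 6 t B_t^2 + 3 t^2.
Mt = Bt**4 - 6 * t * Bt**2 + 3 * t**2
Ms = Bs**4 - 6 * s * Bs**2 + 3 * s**2
print(np.mean(Mt), "vs", Ms)
```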

Check that $f_1$, $f_2$ and $f_4$ are solutions of the following (time reversed) heat equation
$$\Big(\frac{\partial}{\partial t} + \frac{1}{2}\frac{\partial^2}{\partial x^2}\Big)f(t,x) = 0 \qquad \text{(PDE)}$$
with initial condition $f_n(0,x) = x^n$, for $n = 1, 2, 4$.

The proof is immediate by inserting the suggested solution into the left hand side of the PDE and checking that the result is $0$.

(d) (Optional) Find $f_3(t,x)$ so that (i) it is a solution to (PDE) with initial condition $f_3(0,x) = x^3$ and (ii) $\{f_3(t, B_t)\}_{t\ge 0}$ is a martingale. Hint: You can do this part by solving the PDE from the given initial condition (if you know how to), or by computing $E[B_t^3 \mid \mathcal{F}_s]$ and guessing what $f_3(t,x)$ should look like.

(e) (Optional) Check that, in fact, we have
$$f_1(t,x) = \frac{\partial g(\lambda, t, x)}{\partial \lambda}\Big|_{\lambda = 0}, \qquad f_2(t,x) = \frac{\partial^2 g(\lambda, t, x)}{\partial \lambda^2}\Big|_{\lambda = 0}.$$
You can use this part to find your solution to part (d).

Problem 6 (Klebaner, Exercise ). The first zero of the standard Brownian motion is $B_0 = 0$. What is the second zero?

By the arcsine law seen in class, the probability that Brownian motion has a zero in the interval $(a,b)$ is given by
$$P[B_t = 0 \text{ for some } t \in (a,b)] = \frac{2}{\pi}\arccos\sqrt{a/b}.$$
We see that setting $a = 0$ (and taking $b \to 0$) results in a probability of $1$, so the second $0$ of Brownian motion is also $0$: the zeros of Brownian motion accumulate at $0$.

An alternative way to prove the same result is by arguing similarly to Example in [Klebaner]: it is possible to show that the probability that the sign of Brownian motion is constant on an interval $(0, \varepsilon)$ is $0$ independently of $\varepsilon$, proving the claim.
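As an aside, the arccos form of the arcsine law quoted above can be checked numerically. The sketch below is an illustration only (it assumes NumPy; the choices of $a$, $b$ and the sample size are arbitrary): it samples $B_a$ and the increment $B_b - B_a$, and uses the reflection principle from Problem 3, by which, conditionally on $B_a$, the path returns to $0$ during $(a,b)$ with probability $P[|B_b - B_a| > |B_a|]$, so the indicator below has the correct mean even though it is not pathwise the same event.

```python
import numpy as np

rng = np.random.default_rng(3)
a, b, n = 0.5, 2.0, 1_000_000

Ba = rng.normal(0.0, np.sqrt(a), n)        # B_a ~ N(0, a)
incr = rng.normal(0.0, np.sqrt(b - a), n)  # B_b - B_a ~ N(0, b - a), independent of B_a

# Conditionally on B_a, the probability of a zero in (a, b) is P[|B_b - B_a| > |B_a|]
# (reflection principle), so averaging this indicator estimates P[zero in (a, b)].
has_zero = np.abs(incr) > np.abs(Ba)

print("empirical  :", has_zero.mean())
print("arcsine law:", 2 / np.pi * np.arccos(np.sqrt(a / b)))
```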

Problem 7 (Optional: Diffusion and Brownian Motion). Let $B_t$ be a standard Brownian motion starting from zero and define
$$p(t,x) = \frac{1}{\sqrt{2\pi t}}\,e^{-\frac{x^2}{2t}}.$$
Given any $x \in \mathbb{R}$, define $X_t = x + B_t$. Of course $X_t$ is just a Brownian motion starting from $x$ at time $0$. Given a smooth bounded function $f: \mathbb{R} \to \mathbb{R}$, we define the function $u(x,t)$ by
$$u(x,t) = E_x[f(X_t)],$$
where we have decorated the expectation with the subscript $x$ to remind us that we are starting from the point $x$.

Explain why
$$u(x,t) = \int f(y)\,p(t, x - y)\,dy.$$

Show by direct calculation using the formula from the previous question that for $t > 0$, $u(x,t)$ satisfies the diffusion equation
$$\frac{\partial u}{\partial t} = c\,\frac{\partial^2 u}{\partial x^2}$$
for some constant $c$. (Find the correct $c$!)

Show that $\lim_{t\to 0} u(t,x) = f(x)$, and hence that $u$ satisfies the initial condition for the diffusion equation.

Problem 8 (Optional). Let $\{\xi_k : k = 0, 1, \dots\}$ be a collection of mutually independent standard Gaussian random variables with mean zero and variance one. Define
$$X(t) = \frac{t}{\sqrt{\pi}}\,\xi_0 + \sqrt{\frac{2}{\pi}}\sum_{k=1}^{\infty}\frac{\sin(kt)}{k}\,\xi_k.$$
Show that on the interval $[0, \pi]$, $X(t)$ has the same mean, variance and covariance as Brownian motion. In fact, it is Brownian motion.
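As a numerical illustration of the covariance claim (not part of the problem; it assumes NumPy, uses a finite truncation of the series with the normalization written above, and arbitrary times and sample sizes), the sketch below compares the empirical covariance of $X(s)$ and $X(t)$ with $\min(s,t)$ for a pair of times in $[0, \pi]$.

```python
import numpy as np

rng = np.random.default_rng(4)
n_paths, K = 100_000, 200          # sample size and series truncation level
s, t = 0.9, 2.1                    # two times in [0, pi]

k = np.arange(1, K + 1)
# Coefficient vectors of the truncated series:
# X(u) ~ (u / sqrt(pi)) * xi_0 + sqrt(2/pi) * sum_{k=1}^{K} sin(k u) / k * xi_k.
coeff = np.array([
    np.concatenate(([u / np.sqrt(np.pi)], np.sqrt(2 / np.pi) * np.sin(k * u) / k))
    for u in (s, t)
])                                              # shape (2, K + 1)

xi = rng.normal(size=(n_paths, K + 1))          # xi_0, ..., xi_K for each sample
Xs, Xt = (xi @ coeff.T).T                       # X(s) and X(t) for each sample

print("Cov(X(s), X(t)) ~", np.cov(Xs, Xt)[0, 1], " expected:", min(s, t))
print("Var(X(t))       ~", np.var(Xt), " expected:", t)
```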

For extra credit, prove that $X(t)$ is indeed a Brownian motion on $[0, \pi]$. (There are a number of ways to do this. One is to see $X$ as the limit of the finite sums, which are each continuous functions. Then prove that $X$ is the uniform limit of these continuous functions and hence is itself continuous.)

Then observe that, formally, the time derivative of $X(t)$ is the sum of all frequencies with random amplitudes which are independent and identical $N(0,1)$ Gaussian random variables. This is the origin of the term white noise, since all frequencies are equally represented, as in white light.

In the above calculations you may need the fact that
$$\min(t,s) = \frac{ts}{\pi} + \frac{2}{\pi}\sum_{k=1}^{\infty}\frac{\sin(kt)\sin(ks)}{k^2}.$$
(If you are interested, this can be shown by periodically extending $\min(t,s)$ to the interval $[-\pi, \pi]$ and then showing that it has the same Fourier transform as the right-hand side of the above expression. Then use the fact that two continuous functions with the same Fourier transform are equal on $[-\pi, \pi]$.)
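The series identity above can also be checked directly by evaluating a large partial sum; the following sketch (an illustration only, assuming NumPy, with an arbitrary truncation level) does this for a few pairs $(s,t)$ in $[0, \pi]$.

```python
import numpy as np

def min_series(t, s, K=100_000):
    """Partial sum of ts/pi + (2/pi) * sum_{k=1}^{K} sin(kt) sin(ks) / k^2."""
    k = np.arange(1, K + 1)
    return t * s / np.pi + 2 / np.pi * np.sum(np.sin(k * t) * np.sin(k * s) / k**2)

for t, s in [(0.3, 2.0), (1.0, 1.0), (2.5, 3.0)]:
    print(f"t={t}, s={s}: series ~ {min_series(t, s):.5f}, min(t, s) = {min(t, s)}")
```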

Problem 9 (optional). If $S$ and $T$ are both stopping times relative to the filtration $\mathbb{F} = \{\mathcal{F}_t\}_{t\ge 0}$, then $\max(S,T)$ and $S + T$ are also stopping times relative to $\mathbb{F}$.

For $\max(S,T)$ this is immediate, since $\{\max(S,T) \le t\} = \{S \le t\} \cap \{T \le t\} \in \mathcal{F}_t$. For $S + T$, we need to check that for any $t \ge 0$, $\{S + T \le t\} \in \mathcal{F}_t$, which is equivalent to $\{S + T > t\} \in \mathcal{F}_t$. To do this, we write
$$\{S + T > t\} = \{S = 0,\ S + T > t\} \cup \{S = t,\ S + T > t\} \cup \{S > t\} \cup \{0 < S < t,\ S + T > t\}.$$
Apparently,
$$\{S = 0,\ S + T > t\} = \{S = 0\} \cap \{T > t\} \in \mathcal{F}_t,$$
$$\{S = t,\ S + T > t\} = \{S = t\} \cap \{T > 0\} \in \mathcal{F}_t,$$
and $\{S > t\} = \{S \le t\}^c \in \mathcal{F}_t$. So it suffices to show that the remaining set $\{0 < S < t,\ S + T > t\}$ is in $\mathcal{F}_t$. Notice that
$$\{0 < S < t,\ S + T > t\} = \bigcup_{r \in \mathbb{Q} \cap (0,t)} \{r < S < t,\ T > t - r\}. \qquad (*)$$
The right side of $(*)$ is in $\mathcal{F}_t$, because it is a countable union of sets in $\mathcal{F}_t$: for each $r \in \mathbb{Q} \cap (0,t)$,
$$\{r < S < t,\ T > t - r\} = \{r < S < t\} \cap \{T > t - r\} \in \mathcal{F}_t.$$
To show $(*)$, denote the left side and the right side by $A_L$ and $A_R$, respectively. For each $r \in \mathbb{Q} \cap (0,t)$,
$$\{r < S(\omega) < t,\ T(\omega) > t - r\} \subset \{r < S(\omega) < t,\ T(\omega) + S(\omega) > t\} \subset \{0 < S(\omega) < t,\ T(\omega) + S(\omega) > t\} = A_L,$$
which means $A_R \subset A_L$. Moreover, for any $\omega \in A_L$, choose
$$0 < \varepsilon < \min\{S(\omega) + T(\omega) - t,\ S(\omega)\},$$
and then choose $q \in (S(\omega) - \varepsilon,\ S(\omega) - \varepsilon/2) \cap \mathbb{Q}$. Then we have $q < S(\omega) < t$ and
$$S(\omega) + T(\omega) > t + \varepsilon \implies T(\omega) > t - (S(\omega) - \varepsilon) > t - q,$$
thus $\omega \in \{q < S < t,\ T > t - q\}$ for such a choice of $q \in \mathbb{Q} \cap (0,t)$, which shows $A_L \subset A_R$ and proves $(*)$.

