Chapter 11 Tutorial: The Kalman Filter

The Kalman filter [1] has long been regarded as the optimal solution to many tracking and data prediction tasks [2]. In this chapter the filter is constructed as a mean squared error minimiser, but an alternative derivation of the filter is also provided, relating it to maximum likelihood statistics.

The purpose of filtering is to extract the required information from a signal, ignoring everything else; this defines the goal of the filter. Consider an observed signal of the form

    y_k = a_k x_k + n_k

where y_k is the time-dependent observed signal, a_k is a gain term, x_k is the information-bearing signal and n_k is the additive noise. The difference between the estimate \hat{x}_k and x_k itself is termed the error:

    f(e_k) = f(x_k - \hat{x}_k)

The particular shape of f(e_k) is dependent upon the application; however, it is clear that the function should be both positive and increase monotonically [3].


An error function which exhibits these characteristics is the squared error function:

    f(e_k) = (x_k - \hat{x}_k)^2

Since it is necessary to consider the ability of the filter to predict many data over a period of time, a more meaningful metric is the expected value of the error function:

    \text{loss} = E[f(e_k)]

This results in the mean squared error (MSE) function:

    \epsilon(t) = E[e_k^2]

An alternative view redefines the goal of the filter to finding the \hat{x} which maximises the probability, or likelihood, of the measurement:

    \max \, P(y \mid \hat{x})

Assuming that the additive random noise is Gaussian distributed with a standard deviation of \sigma_k gives

    P(y_k \mid \hat{x}_k) = K_k \exp\left( -\frac{(y_k - a_k \hat{x}_k)^2}{2\sigma_k^2} \right)

and, taking the measurements together,

    P(y \mid \hat{x}) = \prod_k K_k \exp\left( -\frac{(y_k - a_k \hat{x}_k)^2}{2\sigma_k^2} \right)

which leads to

    \log P(y \mid \hat{x}) = -\frac{1}{2} \sum_k \frac{(y_k - a_k \hat{x}_k)^2}{\sigma_k^2} + \text{constant}

This may be maximised by variation of \hat{x}_k; note that under Gaussian noise this is equivalent to minimising the \sigma-weighted squared error, so the two goals coincide.

The Kalman filter is defined as being that filter, from the set of all possible filters, which minimises the MSE. Mention should also be made of the earlier work of Norbert Wiener [4], whose solution takes the form of a finite impulse response (FIR) filter; suffice to say that his solution uses both the autocorrelation and the cross-correlation of the received signal with the original data in order to derive an impulse response for the filter. Kalman's prescription has some advantages over Wiener's.
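As a sanity check on the equivalence of the two views, the following sketch (all numerical values are illustrative assumptions, not from the text) estimates a constant signal x from observations y_k = a_k x + n_k and confirms that the closed-form weighted least squares estimate sits at the maximum of the Gaussian log-likelihood:

```python
import numpy as np

rng = np.random.default_rng(0)
x_true = 3.0                                # illustrative true signal value
a = rng.uniform(0.5, 2.0, size=200)         # gain terms a_k (assumed values)
sigma = rng.uniform(0.1, 1.0, size=200)     # per-sample noise std dev sigma_k
y = a * x_true + rng.normal(0.0, sigma)     # y_k = a_k x + n_k

def log_likelihood(x):
    # log P(y|x) = -1/2 sum((y_k - a_k x)^2 / sigma_k^2) + constant
    return -0.5 * np.sum((y - a * x) ** 2 / sigma ** 2)

# Setting d(log P)/dx = 0 gives the sigma-weighted least squares estimate.
x_ml = np.sum(a * y / sigma ** 2) / np.sum(a ** 2 / sigma ** 2)
```

Evaluating the log-likelihood on either side of `x_ml` confirms it is a maximum, and the estimate lands close to the assumed true value.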

It sidesteps the need to determine the impulse response of the filter. Kalman described his filter using state space techniques which, unlike Wiener's prescription, enable the filter to be used as either a smoother, a filter or a predictor; the ability of the Kalman filter to predict data has proven particularly useful. Describing the filter in terms of state space methods also simplifies its implementation in the discrete domain. The process to be estimated is modelled as

    x_{k+1} = \Phi x_k + w_k

where x_k is the state vector of the process at time k, (n x 1); \Phi is the state transition matrix of the process from the state at k to the state at k+1, and is assumed stationary over time, (n x n); and w_k is the associated white noise process with known covariance, (n x 1).
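The state model can be exercised with a small simulation; the constant-velocity transition matrix and the noise covariance below are illustrative assumptions, not values from the text:

```python
import numpy as np

dt = 1.0
Phi = np.array([[1.0, dt],
                [0.0, 1.0]])      # state transition matrix (n x n), n = 2
Q = 0.01 * np.eye(2)              # process noise covariance (assumed value)

rng = np.random.default_rng(1)
x = np.array([0.0, 1.0])          # initial state: position 0, velocity 1
trajectory = [x]
for _ in range(50):
    w = rng.multivariate_normal(np.zeros(2), Q)  # w_k with covariance Q
    x = Phi @ x + w                              # x_{k+1} = Phi x_k + w_k
    trajectory.append(x)
trajectory = np.array(trajectory)
```

With this constant-velocity model the position component drifts at roughly one unit per step, perturbed by the accumulated process noise.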

Observations on this state can be modelled in the form

    z_k = H x_k + v_k

where z_k is the actual measurement of x at time k, (m x 1); H is the noiseless connection between the state vector and the measurement vector, and is assumed stationary over time, (m x n); and v_k is the associated measurement error, (m x 1). As was shown earlier, for the minimisation of the MSE to yield the optimal filter the noise covariances must be known:

    Q = E[w_k w_k^T]
    R = E[v_k v_k^T]

The mean squared error criterion is captured by the error covariance matrix at time k, (n x n):

    P_k = E[e_k e_k^T] = E[(x_k - \hat{x}_k)(x_k - \hat{x}_k)^T]

Assuming the prior estimate of \hat{x}_k is called \hat{x}'_k, an update equation for the new estimate is formed by combining the old estimate with the measurement data:

    \hat{x}_k = \hat{x}'_k + K_k (z_k - H \hat{x}'_k)

where K_k is the Kalman gain. The term z_k - H \hat{x}'_k is known as the innovation, or measurement residual:

    i_k = z_k - H \hat{x}'_k

Substituting the measurement model into the update equation gives

    \hat{x}_k = \hat{x}'_k + K_k (H x_k + v_k - H \hat{x}'_k)

and substituting this into the expression for P_k gives

    P_k = E\big[ [(I - K_k H)(x_k - \hat{x}'_k) - K_k v_k][(I - K_k H)(x_k - \hat{x}'_k) - K_k v_k]^T \big]

At this point it is noted that x_k - \hat{x}'_k is the error of the prior estimate, which is uncorrelated with the measurement noise, so the expectation separates:

    P_k = (I - K_k H) \, E[(x_k - \hat{x}'_k)(x_k - \hat{x}'_k)^T] \, (I - K_k H)^T + K_k E[v_k v_k^T] K_k^T

giving

    P_k = (I - K_k H) P'_k (I - K_k H)^T + K_k R K_k^T
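The covariance update just derived holds for any gain K_k, optimal or not, which makes it useful as a numerical check. The matrices below are small illustrative values (not from the text), with a deliberately suboptimal gain:

```python
import numpy as np

n = 2
P_prior = np.array([[2.0, 0.3],
                    [0.3, 1.0]])   # prior error covariance P'_k (assumed)
H = np.array([[1.0, 0.0]])         # measurement matrix (m x n)
R = np.array([[0.5]])              # measurement noise covariance (assumed)
K = np.array([[0.4], [0.1]])       # an arbitrary, suboptimal gain
I = np.eye(n)

# P_k = (I - K H) P'_k (I - K H)^T + K R K^T  -- valid for ANY gain K
A = I - K @ H
P_post = A @ P_prior @ A.T + K @ R @ K.T
```

Because this form is a sum of two positive semi-definite terms it stays symmetric and positive definite even for a poor gain, which is one reason it is often preferred in implementations.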

The diagonal of the error covariance, written over adjacent times as

    P_{kk} = \begin{pmatrix} E[e_{k-1} e_{k-1}^T] & E[e_k e_{k-1}^T] & E[e_{k+1} e_{k-1}^T] \\ E[e_{k-1} e_k^T] & E[e_k e_k^T] & E[e_{k+1} e_k^T] \\ E[e_{k-1} e_{k+1}^T] & E[e_k e_{k+1}^T] & E[e_{k+1} e_{k+1}^T] \end{pmatrix}

contains the mean squared errors, so minimising the MSE amounts to minimising the trace of P_k. The trace is first differentiated with respect to K_k and the result set to zero in order to find the minimising gain. Expanding the previous expression for P_k gives

    P_k = P'_k - K_k H P'_k - P'_k H^T K_k^T + K_k (H P'_k H^T + R) K_k^T

Note that the trace of a matrix is equal to the trace of its transpose, therefore it may be written as

    T[P_k] = T[P'_k] - 2\,T[K_k H P'_k] + T[K_k (H P'_k H^T + R) K_k^T]

where T[P_k] is the trace of the matrix P_k. Differentiating with respect to K_k gives

    \frac{dT[P_k]}{dK_k} = -2 (H P'_k)^T + 2 K_k (H P'_k H^T + R)

Setting to zero and re-arranging gives

    (H P'_k)^T = K_k (H P'_k H^T + R)

Now solving for K_k gives

    K_k = P'_k H^T (H P'_k H^T + R)^{-1}

This is the Kalman gain equation. The innovation i_k has an associated covariance, defined as

    S_k = H P'_k H^T + R

Finally, substituting the optimal gain back into the expanded expression for P_k gives

    P_k = P'_k - P'_k H^T (H P'_k H^T + R)^{-1} H P'_k = P'_k - K_k H P'_k = (I - K_k H) P'_k

The state estimate is projected into the next interval using the state transition matrix:

    \hat{x}'_{k+1} = \Phi \hat{x}_k

To complete the recursion it is necessary to find an equation which projects the error covariance matrix into the next time interval, k+1. This is achieved by first forming an expression for the prior error:

    e'_{k+1} = x_{k+1} - \hat{x}'_{k+1} = (\Phi x_k + w_k) - \Phi \hat{x}_k = \Phi e_k + w_k

Extending this to the covariance at k+1:

    P'_{k+1} = E[e'_{k+1} e'^T_{k+1}] = E[(\Phi e_k + w_k)(\Phi e_k + w_k)^T]

Note that e_k and w_k have zero cross-correlation, because the noise w_k actually accumulates between k and k+1.
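That the derived gain actually minimises the trace can be checked numerically: perturbing K_k in any direction should never reduce the trace of P_k when P_k is formed with the general expression. The matrices below are illustrative values, not from the text:

```python
import numpy as np

P_prior = np.array([[2.0, 0.3],
                    [0.3, 1.0]])   # prior covariance P'_k (assumed values)
H = np.array([[1.0, 0.0]])
R = np.array([[0.5]])
I = np.eye(2)

def trace_P(K):
    # Trace of the general-form covariance update, valid for any gain K.
    A = I - K @ H
    return np.trace(A @ P_prior @ A.T + K @ R @ K.T)

# K_k = P'_k H^T (H P'_k H^T + R)^{-1}
K_opt = P_prior @ H.T @ np.linalg.inv(H @ P_prior @ H.T + R)

# For the optimal gain the general form collapses to (I - K H) P'_k.
A = I - K_opt @ H
P_simple = A @ P_prior
```

The trace is a quadratic in K_k with positive definite curvature, so the stationary point found by the derivative is the global minimum; random perturbations confirm this.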

Accordingly,

    P'_{k+1} = E[\Phi e_k (\Phi e_k)^T] + E[w_k w_k^T] = \Phi P_k \Phi^T + Q

This completes the recursive filter: measurements drive the gain update, estimate update and covariance update, and projection into k+1 produces the next cycle's prior estimates. [Figure: the recursive Kalman filter loop, from initial estimates through updated state estimates to projected estimates.] The equations are summarised as:

    Kalman gain:        K_k = P'_k H^T (H P'_k H^T + R)^{-1}
    Update estimate:    \hat{x}_k = \hat{x}'_k + K_k (z_k - H \hat{x}'_k)
    Update covariance:  P_k = (I - K_k H) P'_k
    Project into k+1:   \hat{x}'_{k+1} = \Phi \hat{x}_k,   P'_{k+1} = \Phi P_k \Phi^T + Q

The Kalman filter as a chi-square merit function. The objective of the Kalman filter is to minimise the mean squared error, as was derived earlier. Fitting a set of model parameters to a model of a process is known as least squares fitting; carried out recursively, as here, the Kalman filter is commonly known as a recursive least squares (RLS) filter. This gives a different perspective on what the Kalman filter is doing. Consider the chi-square merit function

    \chi^2 = \sum_{i=1}^{k} \left( \frac{z_i - h(a_i; x)}{\sigma_i} \right)^2

where z_i is the measured value, h is the data model with parameters x, assumed linear in a, and \sigma_i is the standard deviation associated with the measured value. Equivalently,

    \chi^2 = \sum_{i=1}^{k} \frac{1}{\sigma_i^2} \, [z_i - h(a_i; x)]^2

Representing the chi-square in vector form, and using notation from the earlier Kalman derivation,

    \chi^2_k = [z_k - h(a; x_k)] \, R^{-1} \, [z_k - h(a; x_k)]^T
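The complete recursion summarised above translates directly into code. The sketch below (model matrices and noise levels are illustrative assumptions) tracks a constant-velocity target from noisy position-only measurements:

```python
import numpy as np

dt = 1.0
Phi = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
H = np.array([[1.0, 0.0]])                # observe position only
Q = 0.001 * np.eye(2)                     # process noise covariance (assumed)
R = np.array([[0.25]])                    # measurement noise covariance (assumed)

rng = np.random.default_rng(3)
x_true = np.array([0.0, 1.0])             # simulated true state
x_hat = np.zeros(2)                       # initial state estimate
P = 10.0 * np.eye(2)                      # initial error covariance (vague prior)

for _ in range(100):
    # Project into k: x'_k = Phi x_{k-1},  P'_k = Phi P_{k-1} Phi^T + Q
    x_hat = Phi @ x_hat
    P = Phi @ P @ Phi.T + Q
    # Simulate the true process and its measurement z_k = H x_k + v_k
    x_true = Phi @ x_true + rng.multivariate_normal(np.zeros(2), Q)
    z = H @ x_true + rng.normal(0.0, np.sqrt(R[0, 0]), size=1)
    # Gain update: K_k = P'_k H^T (H P'_k H^T + R)^{-1}
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    # Estimate update: x_k = x'_k + K_k (z_k - H x'_k)
    x_hat = x_hat + K @ (z - H @ x_hat)
    # Covariance update: P_k = (I - K_k H) P'_k
    P = (np.eye(2) - K @ H) @ P
```

After a hundred cycles both the measured position and the unobserved velocity are recovered, and the error covariance has contracted from the vague prior to a small steady-state value.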

where R^{-1} is the matrix of inverse squared variances. The optimal parameter estimate computed from information up to and including the (k-1)'th measurement is \hat{x}_{k-1}, with covariance P'_{k-1}; the merit function up to time k-1 may therefore be re-written as

    \chi^2_{k-1} = (x_{k-1} - \hat{x}_{k-1}) \, P'^{-1}_{k-1} \, (x_{k-1} - \hat{x}_{k-1})^T

To combine the new data with the previous, fitting the model parameters so as to minimise the overall chi-square function, the merit function becomes the summation of the two:

    \chi^2 = (x_{k-1} - \hat{x}_{k-1}) \, P'^{-1}_{k-1} \, (x_{k-1} - \hat{x}_{k-1})^T + [z_k - h(a; x_k)] \, R^{-1} \, [z_k - h(a; x_k)]^T

where the first derivative of this is given by

    \frac{d\chi^2}{dx} = 2 P'^{-1}_{k-1} (x_{k-1} - \hat{x}_{k-1}) - 2 \nabla_x h(a; x_k)^T R^{-1} [z_k - h(a; x_k)]

The model function h(a; x_k), with parameters fitted from information to date, may be considered as

    h(a; x_k) = h(a; \hat{x}_k + \delta x_k)

where \delta x_k = x_k - \hat{x}_k. Expanding to first order,

    h(\hat{x}_k + \delta x) = h(\hat{x}_k) + \delta x \, \nabla_x h(\hat{x}_k)

so that

    \frac{d\chi^2}{dx} = 2 P'^{-1}_k (x_k - \hat{x}_k) - 2 \nabla_x h(a; \hat{x}_k)^T R^{-1} [z_k - h(a; \hat{x}_k) - (x_k - \hat{x}_k) \nabla_x h(a; \hat{x}_k)]

For a system which is linear in a, the model derivative is constant and may be written as

    \nabla_x h(a; x_k) = \nabla_x h(a; \hat{x}_k) = H

giving

    \frac{d\chi^2}{dx} = 2 P'^{-1}_k \, \delta x_k + 2 H^T R^{-1} H \, \delta x_k - 2 H^T R^{-1} [z_k - h(a; \hat{x}_k)]

Re-arranging gives

    \frac{d\chi^2}{dx} = 2 \left( P'^{-1}_k + H^T R^{-1} H \right) \delta x_k - 2 H^T R^{-1} [z_k - h(a; \hat{x}_k)]

For a minimum the derivative is zero; rearranging in terms of \delta x_k gives:
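For a linear model the stationary point of the combined merit function can be cross-checked directly: the closed-form step should minimise \chi^2 against random perturbations. All numerical values below are illustrative assumptions:

```python
import numpy as np

inv = np.linalg.inv
P_prior = np.array([[2.0, 0.3],
                    [0.3, 1.0]])   # covariance of the prior estimate (assumed)
H = np.array([[1.0, 0.0]])         # linear model: h(x) = H x
R = np.array([[0.5]])              # measurement variance (assumed)
x_hat = np.array([1.0, 0.5])       # estimate from information to date (assumed)
z = np.array([1.8])                # new measurement (assumed)

def chi2(x):
    # Prior term plus new-measurement term of the combined merit function.
    d, r = x - x_hat, z - H @ x
    return d @ inv(P_prior) @ d + r @ inv(R) @ r

# Closed-form minimising step from the derivation above:
# delta_x = (P'^-1 + H^T R^-1 H)^-1 H^T R^-1 (z - H x_hat)
dx = inv(inv(P_prior) + H.T @ inv(R) @ H) @ (H.T @ inv(R) @ (z - H @ x_hat))
x_star = x_hat + dx
```

Since the merit function is a positive definite quadratic, the stationary point is the unique global minimum.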

    \delta x_k = \left( P'^{-1}_k + H^T R^{-1} H \right)^{-1} H^T R^{-1} [z_k - h(a; \hat{x}_k)]

    x_k = \hat{x}_k + \left( P'^{-1}_k + H^T R^{-1} H \right)^{-1} H^T R^{-1} [z_k - h(a; \hat{x}_k)]

This allows a gain K_k to be identified as

    K_k = \left( P'^{-1}_k + H^T R^{-1} H \right)^{-1} H^T R^{-1}

giving a parameter update equation of the form

    x_k = \hat{x}_k + K_k [z_k - h(a; \hat{x}_k)]

The updated inverse covariance is

    P^{-1}_k = P'^{-1}_k + H^T R^{-1} H

where P^{-1}_k is known as the information matrix.[1] The equivalence of this form with the earlier covariance update can be verified: the two results

    P_k = (I - K_k H) P'_k   and   P^{-1}_k = P'^{-1}_k + H^T R^{-1} H

are consistent provided that

    (I - K_k H) P'_k \left( P'^{-1}_k + H^T R^{-1} H \right) = I

Substituting for K_k gives

    \left[ P'_k - P'_k H^T (H P'_k H^T + R)^{-1} H P'_k \right] \left( P'^{-1}_k + H^T R^{-1} H \right) = I

    I + P'_k H^T \left[ R^{-1} - (H P'_k H^T + R)^{-1} - (H P'_k H^T + R)^{-1} H P'_k H^T R^{-1} \right] H = I

    P'_k H^T (H P'_k H^T + R)^{-1} \left[ (H P'_k H^T + R) R^{-1} - I - H P'_k H^T R^{-1} \right] H = 0

and the bracketed term is identically zero, confirming the identity.

[1] When the Kalman filter is built around the information matrix it is known as the information filter.
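The algebraic equivalence is also easy to confirm numerically: with illustrative matrices (assumed values, not from the text), the covariance-form and information-form expressions for both the gain and the updated covariance agree:

```python
import numpy as np

inv = np.linalg.inv
P_prior = np.array([[2.0, 0.3],
                    [0.3, 1.0]])   # prior covariance P'_k (assumed values)
H = np.array([[1.0, 0.0]])
R = np.array([[0.5]])

# Covariance form (from the MSE derivation):
#   K = P' H^T (H P' H^T + R)^-1,  P = (I - K H) P'
K_cov = P_prior @ H.T @ inv(H @ P_prior @ H.T + R)
P_cov = (np.eye(2) - K_cov @ H) @ P_prior

# Information form (from the chi-square derivation):
#   P^-1 = P'^-1 + H^T R^-1 H,  K = P H^T R^-1
info = inv(P_prior) + H.T @ inv(R) @ H    # the information matrix
P_info = inv(info)
K_info = P_info @ H.T @ inv(R)
```

The two routes are related by the matrix inversion lemma; in practice the information form is preferred when measurements are high-dimensional or when the prior is very uncertain (P'^{-1} near zero).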

