Extended Kalman Filter Tutorial

Gabriel A. Terejanu
Department of Computer Science and Engineering
University at Buffalo, Buffalo, NY 14260
terejanu@buffalo.edu


1 Dynamic process

Consider the following nonlinear system, described by the difference equation and the observation model with additive noise:

x_k = f(x_{k-1}) + w_{k-1}    (1)
z_k = h(x_k) + v_k    (2)

The initial state x_0 is a random vector with known mean μ_0 = E[x_0] and covariance P_0 = E[(x_0 - μ_0)(x_0 - μ_0)^T].

In the following we assume that the random vector w_k captures uncertainties in the model and v_k denotes the measurement noise. Both are temporally uncorrelated (white noise), zero-mean random sequences with known covariances, and both are uncorrelated with the initial state x_0:

E[w_k] = 0,   E[w_k w_k^T] = Q_k,   E[w_k w_j^T] = 0 for k ≠ j,   E[w_k x_0^T] = 0 for all k    (3)
E[v_k] = 0,   E[v_k v_k^T] = R_k,   E[v_k v_j^T] = 0 for k ≠ j,   E[v_k x_0^T] = 0 for all k    (4)

Also the two random vectors w_k and v_k are uncorrelated:

E[w_k v_j^T] = 0 for all k and j    (5)

The vector functions f(·) and h(·) are assumed to be C^1 functions (the function and its first derivative are continuous on the given domain).
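To make the setup concrete, the sketch below gives one hypothetical instance of the model (1)-(2) in Python/NumPy. The particular f, h, time step dt, and noise covariances are illustrative assumptions, not part of the tutorial.

```python
import numpy as np

# Hypothetical instance of the model (1)-(2), for illustration only:
# a 2-state system x = [position, velocity] with a nonlinear drag
# term in f and a nonlinear range observation in h.

dt = 0.1  # time step (illustrative choice)

def f(x):
    """Process model f(x_{k-1}) of eq. (1)."""
    pos, vel = x
    return np.array([pos + dt * vel,
                     vel - dt * 0.1 * vel * abs(vel)])  # quadratic drag

def h(x):
    """Observation model h(x_k) of eq. (2): range to a sensor at unit height."""
    pos, _ = x
    return np.array([np.sqrt(1.0 + pos**2)])

Q = 0.01 * np.eye(2)    # process noise covariance Q_k (assumed)
R = np.array([[0.1]])   # measurement noise covariance R_k (assumed)
```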

Dimension and description of variables:

x_k     n × 1    State vector
w_k     n × 1    Process noise vector
z_k     m × 1    Observation vector
v_k     m × 1    Measurement noise vector
f(·)    n × 1    Process nonlinear vector function
h(·)    m × 1    Observation nonlinear vector function
Q_k     n × n    Process noise covariance matrix
R_k     m × m    Measurement noise covariance matrix

2 EKF derivation

Assuming the nonlinearities in the dynamic and the observation model are smooth, we can expand f(x_k) and h(x_k) in Taylor series and in this way approximate the forecast and the next estimate.

Model Forecast Step

Initially, since the only available information is the mean μ_0 and the covariance P_0 of the initial state, the initial optimal estimate x_0^a and its error covariance are:

x_0^a = μ_0 = E[x_0]    (6)
P_0 = E[(x_0 - x_0^a)(x_0 - x_0^a)^T]    (7)
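In code, initialization (6)-(7) just seeds the filter with the prior moments. A minimal sketch; the values of mu0 and P0 below are hypothetical placeholders for the known initial mean and covariance:

```python
import numpy as np

# Initialization, eqs. (6)-(7): the first analysis state is the prior
# mean, and its error covariance is the prior covariance.
mu0 = np.zeros(2)   # placeholder for the known mean of x_0
P0 = np.eye(2)      # placeholder for the known covariance of x_0

x_a, P = mu0.copy(), P0.copy()   # x_0^a = mu_0, with covariance P_0
```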

Assume now that we have an optimal estimate x_{k-1}^a ≡ E[x_{k-1} | Z_{k-1}] with covariance P_{k-1} at time k-1. The predictable part of x_k is given by:

x_k^f ≡ E[x_k | Z_{k-1}]    (8)
      = E[f(x_{k-1}) + w_{k-1} | Z_{k-1}]
      = E[f(x_{k-1}) | Z_{k-1}]

Expanding f(·) in a Taylor series about x_{k-1}^a we get:

f(x_{k-1}) ≈ f(x_{k-1}^a) + J_f(x_{k-1}^a)(x_{k-1} - x_{k-1}^a) + H.O.T.    (9)

where J_f is the Jacobian of f(·) and the higher order terms (H.O.T.) are considered negligible; hence the Extended Kalman Filter is also called the First-Order Filter. The Jacobian is defined as:

J_f ≡ [ ∂f_1/∂x_1   ∂f_1/∂x_2   …   ∂f_1/∂x_n ]
      [     ⋮           ⋮        ⋱       ⋮     ]    (10)
      [ ∂f_n/∂x_1   ∂f_n/∂x_2   …   ∂f_n/∂x_n ]

where f(x) = (f_1(x), f_2(x), …, f_n(x))^T and x = (x_1, x_2, …, x_n)^T. Eq. (9) becomes:

f(x_{k-1}) ≈ f(x_{k-1}^a) + J_f(x_{k-1}^a) e_{k-1}    (11)

where e_{k-1} ≡ x_{k-1} - x_{k-1}^a. The expected value of f(x_{k-1}) conditioned on Z_{k-1} is then:

E[f(x_{k-1}) | Z_{k-1}] ≈ f(x_{k-1}^a) + J_f(x_{k-1}^a) E[e_{k-1} | Z_{k-1}]    (12)

where E[e_{k-1} | Z_{k-1}] = 0.
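When the analytic partial derivatives in (10) are tedious to derive, the Jacobian can be approximated by finite differences. A minimal sketch; the function name and the step size eps are illustrative assumptions:

```python
import numpy as np

def numerical_jacobian(func, x, eps=1e-6):
    """Forward-difference approximation of the Jacobian in eq. (10):
    J[i, j] ~ d func_i / d x_j, evaluated at x."""
    fx = np.atleast_1d(func(x))
    J = np.zeros((fx.size, x.size))
    for j in range(x.size):
        dx = np.zeros_like(x, dtype=float)
        dx[j] = eps
        J[:, j] = (np.atleast_1d(func(x + dx)) - fx) / eps
    return J
```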

Thus the forecast value of x_k is:

x_k^f ≈ f(x_{k-1}^a)    (13)

Substituting (11) in the forecast error equation gives:

e_k^f ≡ x_k - x_k^f    (14)
      = f(x_{k-1}) + w_{k-1} - f(x_{k-1}^a)
      ≈ J_f(x_{k-1}^a) e_{k-1} + w_{k-1}

The forecast error covariance is then:

P_k^f ≡ E[e_k^f (e_k^f)^T]    (15)
      = J_f(x_{k-1}^a) E[e_{k-1} e_{k-1}^T] J_f^T(x_{k-1}^a) + E[w_{k-1} w_{k-1}^T]
      = J_f(x_{k-1}^a) P_{k-1} J_f^T(x_{k-1}^a) + Q_{k-1}
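Equations (13) and (15) translate directly into a predict routine. A minimal sketch, assuming Jf is a callable returning the Jacobian at a point (for example the numerical_jacobian helper above):

```python
import numpy as np

def forecast_step(x_a, P, f, Jf, Q):
    """Model forecast step, eqs. (13) and (15):
    x_f ~ f(x_a),  P_f = J_f(x_a) P J_f(x_a)^T + Q."""
    x_f = f(x_a)
    J = Jf(x_a)
    P_f = J @ P @ J.T + Q
    return x_f, P_f
```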

Data Assimilation Step

At time k we have two pieces of information: the forecast value x_k^f with covariance P_k^f, and the measurement z_k with covariance R_k. Our goal is to approximate the best unbiased estimate, in the least squares sense, x_k^a of x_k. One way is to assume the estimate is a linear combination of both x_k^f and z_k [4]. Let:

x_k^a = a + K_k z_k    (16)

From the unbiasedness condition, using z_k = h(x_k) + v_k, E[e_k^f | Z_k] = 0 and the fact that v_k is zero-mean:

0 = E[x_k - x_k^a | Z_k]    (17)
  = E[(x_k^f + e_k^f) - (a + K_k h(x_k) + K_k v_k) | Z_k]
  = x_k^f - a - K_k E[h(x_k) | Z_k]

a = x_k^f - K_k E[h(x_k) | Z_k]    (18)

Substituting (18) in (16):

x_k^a = x_k^f + K_k (z_k - E[h(x_k) | Z_k])    (19)

Following the same steps as in the model forecast step, expanding h(·) in a Taylor series about x_k^f we have:

h(x_k) ≈ h(x_k^f) + J_h(x_k^f)(x_k - x_k^f) + H.O.T.    (20)

where J_h is the Jacobian of h(·) and the higher order terms are considered negligible. The Jacobian of h(·) is defined as:

J_h ≡ [ ∂h_1/∂x_1   ∂h_1/∂x_2   …   ∂h_1/∂x_n ]
      [     ⋮           ⋮        ⋱       ⋮     ]    (21)
      [ ∂h_m/∂x_1   ∂h_m/∂x_2   …   ∂h_m/∂x_n ]

where h(x) = (h_1(x), h_2(x), …, h_m(x))^T and x = (x_1, x_2, …, x_n)^T. Taking the expectation on both sides of (20) conditioned on Z_k:

E[h(x_k) | Z_k] ≈ h(x_k^f) + J_h(x_k^f) E[e_k^f | Z_k]    (22)

where E[e_k^f | Z_k] = 0.

Substituting in (19), the state estimate is:

x_k^a ≈ x_k^f + K_k (z_k - h(x_k^f))    (23)

The error in the estimate x_k^a is:

e_k ≡ x_k - x_k^a    (24)
    = f(x_{k-1}) + w_{k-1} - x_k^f - K_k (z_k - h(x_k^f))
    ≈ f(x_{k-1}) - f(x_{k-1}^a) + w_{k-1} - K_k (h(x_k) - h(x_k^f) + v_k)
    ≈ J_f(x_{k-1}^a) e_{k-1} + w_{k-1} - K_k (J_h(x_k^f) e_k^f + v_k)
    ≈ J_f(x_{k-1}^a) e_{k-1} + w_{k-1} - K_k J_h(x_k^f)(J_f(x_{k-1}^a) e_{k-1} + w_{k-1}) - K_k v_k
    = (I - K_k J_h(x_k^f)) J_f(x_{k-1}^a) e_{k-1} + (I - K_k J_h(x_k^f)) w_{k-1} - K_k v_k

Then the posterior covariance of the new estimate is:

P_k ≡ E[e_k e_k^T]    (25)
    = (I - K_k J_h(x_k^f)) J_f(x_{k-1}^a) P_{k-1} J_f^T(x_{k-1}^a) (I - K_k J_h(x_k^f))^T
      + (I - K_k J_h(x_k^f)) Q_{k-1} (I - K_k J_h(x_k^f))^T + K_k R_k K_k^T
    = (I - K_k J_h(x_k^f)) P_k^f (I - K_k J_h(x_k^f))^T + K_k R_k K_k^T
    = P_k^f - K_k J_h(x_k^f) P_k^f - P_k^f J_h^T(x_k^f) K_k^T + K_k J_h(x_k^f) P_k^f J_h^T(x_k^f) K_k^T + K_k R_k K_k^T

The posterior covariance formula holds for any K_k.
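Because the factored form of (25) holds for any gain K_k, it is often used in practice as a numerically robust covariance update: it preserves symmetry and positive semi-definiteness under round-off better than the short form derived below. A minimal sketch of that update:

```python
import numpy as np

def joseph_covariance(P_f, K, H, R):
    """Posterior covariance in the form of eq. (25), valid for any K:
    P = (I - K H) P_f (I - K H)^T + K R K^T, with H = J_h(x_f)."""
    A = np.eye(P_f.shape[0]) - K @ H
    return A @ P_f @ A.T + K @ R @ K.T
```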

As in the standard Kalman Filter, we find K_k by minimizing tr(P_k):

∂ tr(P_k) / ∂K_k = -(J_h(x_k^f) P_k^f)^T - P_k^f J_h^T(x_k^f) + 2 K_k J_h(x_k^f) P_k^f J_h^T(x_k^f) + 2 K_k R_k = 0    (26)

Hence the Kalman gain is:

K_k = P_k^f J_h^T(x_k^f) [ J_h(x_k^f) P_k^f J_h^T(x_k^f) + R_k ]^{-1}    (27)

Substituting this back in (25) gives:

P_k = (I - K_k J_h(x_k^f)) P_k^f - (I - K_k J_h(x_k^f)) P_k^f J_h^T(x_k^f) K_k^T + K_k R_k K_k^T    (28)
    = (I - K_k J_h(x_k^f)) P_k^f - [ P_k^f J_h^T(x_k^f) - K_k J_h(x_k^f) P_k^f J_h^T(x_k^f) - K_k R_k ] K_k^T
    = (I - K_k J_h(x_k^f)) P_k^f - [ P_k^f J_h^T(x_k^f) - K_k ( J_h(x_k^f) P_k^f J_h^T(x_k^f) + R_k ) ] K_k^T
    = (I - K_k J_h(x_k^f)) P_k^f - [ P_k^f J_h^T(x_k^f) - P_k^f J_h^T(x_k^f) ] K_k^T
    = (I - K_k J_h(x_k^f)) P_k^f

3 Summary of Extended Kalman Filter

Model and Observation:

x_k = f(x_{k-1}) + w_{k-1}
z_k = h(x_k) + v_k

Initialization:

x_0^a = μ_0 with error covariance P_0

Model Forecast Step / Predictor:

x_k^f ≈ f(x_{k-1}^a)
P_k^f = J_f(x_{k-1}^a) P_{k-1} J_f^T(x_{k-1}^a) + Q_{k-1}

Data Assimilation Step / Corrector:

x_k^a ≈ x_k^f + K_k (z_k - h(x_k^f))
K_k = P_k^f J_h^T(x_k^f) [ J_h(x_k^f) P_k^f J_h^T(x_k^f) + R_k ]^{-1}
P_k = (I - K_k J_h(x_k^f)) P_k^f
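The summary maps one-to-one onto code. A minimal sketch of one full predictor/corrector cycle, assuming f, h, Jf, Jh are callables as in the earlier sketches:

```python
import numpy as np

def ekf_step(x_a, P, z, f, h, Jf, Jh, Q, R):
    """One EKF cycle following the summary above."""
    # Model forecast step / predictor: eqs. (13) and (15)
    x_f = f(x_a)
    F = Jf(x_a)
    P_f = F @ P @ F.T + Q

    # Data assimilation step / corrector: eqs. (23), (27), (28)
    H = Jh(x_f)
    S = H @ P_f @ H.T + R                  # innovation covariance
    K = P_f @ H.T @ np.linalg.inv(S)       # Kalman gain, eq. (27)
    x_a = x_f + K @ (z - h(x_f))           # state update, eq. (23)
    P = (np.eye(x_f.size) - K @ H) @ P_f   # covariance update, eq. (28)
    return x_a, P

# Usage with the hypothetical model and Jacobian helper above:
#   x_a, P = mu0, P0
#   for z in measurements:
#       x_a, P = ekf_step(x_a, P, z, f, h,
#                         lambda x: numerical_jacobian(f, x),
#                         lambda x: numerical_jacobian(h, x), Q, R)
```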

4 Iterated Extended Kalman Filter

In the EKF, h(·) is linearized about the predicted state estimate x_k^f. The IEKF tries to linearize it about the most recent estimate, improving the accuracy this way [3, 1]. This is achieved by recomputing x_k^a, K_k, and P_k at each iteration. Denote by x_{k,i}^a the estimate at time k and iteration i. The iteration is initialized with x_{k,0}^a = x_k^f, and the measurement update step becomes, for each i:

x_{k,i+1}^a ≈ x_k^f + K_{k,i} (z_k - h(x_{k,i}^a))
K_{k,i} = P_k^f J_h^T(x_{k,i}^a) [ J_h(x_{k,i}^a) P_k^f J_h^T(x_{k,i}^a) + R_k ]^{-1}
P_{k,i} = (I - K_{k,i} J_h(x_{k,i}^a)) P_k^f

If there is little improvement between two consecutive iterations, the iterative process is stopped. The extra accuracy reached this way comes at a higher computational cost.
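A sketch of the iterated measurement update under the equations above. The iteration cap and stopping tolerance are illustrative choices, since the tutorial only says to stop when there is little improvement between consecutive iterations:

```python
import numpy as np

def iekf_update(x_f, P_f, z, h, Jh, R, max_iter=10, tol=1e-8):
    """Iterated measurement update of Section 4: re-linearize h about
    the latest iterate, starting from x_{k,0}^a = x_f."""
    x_i = x_f
    for _ in range(max_iter):
        H = Jh(x_i)
        K = P_f @ H.T @ np.linalg.inv(H @ P_f @ H.T + R)
        x_next = x_f + K @ (z - h(x_i))
        if np.linalg.norm(x_next - x_i) < tol:   # little improvement: stop
            x_i = x_next
            break
        x_i = x_next
    P = (np.eye(x_f.size) - K @ H) @ P_f
    return x_i, P
```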

5 Stability

Since Q_k and R_k are symmetric positive definite matrices, we can write:

Q_k = G_k G_k^T    (29)
R_k = D_k D_k^T    (30)

Denote by φ and χ the higher order terms resulting from the following subtractions:

f(x_k) - f(x_k^a) = J_f(x_k^a) e_k + φ(x_k, x_k^a)    (31)
h(x_k) - h(x_k^a) = J_h(x_k^a) e_k + χ(x_k, x_k^a)    (32)

Konrad Reif showed in [2] that the estimation error remains bounded if the following hold:

1. There are positive real numbers α, β, p_1, p_2 > 0 such that for every k:

   ‖J_f(x_k^a)‖ ≤ α    (33)
   ‖J_h(x_k^a)‖ ≤ β    (34)
   p_1 I ≤ P_k ≤ p_2 I    (35)

2. J_f(x_k^a) is nonsingular for every k.

3. There are positive real numbers ε_φ, ε_χ, κ_φ, κ_χ > 0 such that the nonlinear functions φ, χ are bounded via:

   ‖φ(x_k, x_k^a)‖ ≤ κ_φ ‖x_k - x_k^a‖^2   for ‖x_k - x_k^a‖ ≤ ε_φ    (36)
   ‖χ(x_k, x_k^a)‖ ≤ κ_χ ‖x_k - x_k^a‖^2   for ‖x_k - x_k^a‖ ≤ ε_χ    (37)

Then the estimation error e_k is exponentially bounded in mean square and bounded with probability one, provided that the initial estimation error satisfies:

‖e_0‖ ≤ ε    (38)

and the covariance matrices of the noise terms are bounded via:

G_k G_k^T ≤ δ I    (39)
D_k D_k^T ≤ δ I    (40)

for some δ, ε > 0.
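The hypotheses (33)-(35) are analytical conditions on the model, but they can be monitored numerically along a filter run. A minimal sketch; the function and the bounds alpha, beta, p1, p2 are user-supplied assumptions, not part of the theorem's statement:

```python
import numpy as np

def reif_conditions_hold(F, H, P, alpha, beta, p1, p2):
    """Check conditions (33)-(35) at one step of a filter run,
    with F = J_f(x_k^a) and H = J_h(x_k^a)."""
    eigs = np.linalg.eigvalsh(P)              # P is symmetric
    return (np.linalg.norm(F, 2) <= alpha     # eq. (33)
            and np.linalg.norm(H, 2) <= beta  # eq. (34)
            and eigs.min() >= p1              # eq. (35), lower bound
            and eigs.max() <= p2)             # eq. (35), upper bound
```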

6 Conclusion

In the EKF the state distribution is propagated analytically through the first-order linearization of the nonlinear system. It does not take into account that x_k is a random variable with inherent uncertainty, and it requires that the first two terms of the Taylor series dominate the remaining terms. A second-order version exists [4, 5], but the computational complexity it requires makes it unfeasible for practical usage in real-time applications or high-dimensional systems.

References

[1] Arthur Gelb. Applied Optimal Estimation. MIT Press, 1974.
[2] Konrad Reif, Stefan Günther, Engin Yaz, and Rolf Unbehauen. Stochastic Stability of the Discrete-Time Extended Kalman Filter. IEEE Transactions on Automatic Control, 1999.

