
Unscented Kalman Filter Tutorial

Gabriel A. Terejanu
Department of Computer Science and Engineering
University at Buffalo, Buffalo, NY

Introduction

The Unscented Kalman Filter (UKF) belongs to a bigger class of filters called Sigma-Point Kalman Filters or Linear Regression Kalman Filters, which use the statistical linearization technique [1, 5]. This technique linearizes a nonlinear function of a random variable through a linear regression between $n$ points drawn from the prior distribution of the random variable. Since we are considering the spread of the random variable, the technique tends to be more accurate than Taylor series linearization [7]. The same family of filters includes the Central Difference Kalman Filter, the Divided Difference Filter, and the square-root alternatives for the UKF and CDKF [7]. In the EKF the state distribution is propagated analytically through a first-order linearization of the nonlinear system, due to which the posterior mean and covariance can be corrupted.




The UKF, which is a derivative-free alternative to the EKF, overcomes this problem by using a deterministic sampling approach [9]. The state distribution is represented using a minimal set of carefully chosen sample points, called sigma points. Like the EKF, the UKF consists of the same two steps: model forecast and data assimilation, except that they are now preceded by another step for the selection of the sigma points.

UKF Algorithm

The UKF is founded on the intuition that it is easier to approximate a probability distribution than it is to approximate an arbitrary nonlinear function or transformation [4]. The sigma points are chosen so that their mean and covariance are exactly $x^a_{k-1}$ and $P_{k-1}$. Each sigma point is then propagated through the nonlinearity, yielding in the end a cloud of transformed points. The new estimated mean and covariance are then computed based on their statistics. This process is called the unscented transformation. The unscented transformation is a method for calculating the statistics of a random variable which undergoes a nonlinear transformation [9].

Consider the following nonlinear system, described by the difference equation and the observation model with additive noise:

$$x_k = f(x_{k-1}) + w_{k-1} \quad (1)$$
$$z_k = h(x_k) + v_k \quad (2)$$

The initial state $x_0$ is a random vector with known mean $\mu_0 = E[x_0]$ and covariance $P_0 = E[(x_0 - \mu_0)(x_0 - \mu_0)^T]$. In the case of non-additive process and measurement noise, the unscented transformation scheme is applied to the augmented state [9]:

$$x^{aug}_k = [\, x_k^T \;\; w_{k-1}^T \;\; v_k^T \,]^T \quad (3)$$

Sigma Points Set Selection

Let $\mathcal{X}_{k-1}$ be a set of $2n+1$ sigma points (where $n$ is the dimension of the state space) and their associated weights:

$$\mathcal{X}_{k-1} = \left\{ \left( x^j_{k-1}, W^j \right) \mid j = 0 \ldots 2n \right\} \quad (4)$$

Consider the following selection of sigma points, a selection that incorporates higher-order information in the selected points [4]:

$$x^0_{k-1} = x^a_{k-1} \quad (5)$$
$$-1 < W^0 < 1 \quad (6)$$
$$x^i_{k-1} = x^a_{k-1} + \left( \sqrt{\tfrac{n}{1-W^0} P_{k-1}} \right)_i \quad \text{for all } i = 1 \ldots n \quad (7)$$
$$x^{i+n}_{k-1} = x^a_{k-1} - \left( \sqrt{\tfrac{n}{1-W^0} P_{k-1}} \right)_i \quad \text{for all } i = 1 \ldots n \quad (8)$$
$$W^j = \frac{1 - W^0}{2n} \quad \text{for all } j = 1 \ldots 2n \quad (9)$$

where the weights must obey the condition

$$\sum_{j=0}^{2n} W^j = 1 \quad (10)$$

and $\left( \sqrt{\tfrac{n}{1-W^0} P_{k-1}} \right)_i$ is the $i$-th row or column of the matrix square root of $\tfrac{n}{1-W^0} P_{k-1}$. $W^0$ controls the position of the sigma points: for $W^0 \geq 0$ the points tend to move further from the origin, while for $W^0 \leq 0$ the points tend to be closer to the origin.
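The sigma-point selection of eqs. (5)-(9) can be sketched numerically. The snippet below is a minimal sketch, not the author's code: it uses a Cholesky factor as the matrix square root (any valid square root would do), and the prior mean and covariance values are hypothetical. The weighted sample statistics of the resulting points reproduce the prior exactly.

```python
import numpy as np

def sigma_points(x, P, W0=1.0 / 3.0):
    """Build the 2n+1 sigma points and weights of eqs. (5)-(9).
    A Cholesky factor serves as the matrix square root of n/(1-W0) * P."""
    n = x.size
    S = np.linalg.cholesky(n / (1.0 - W0) * P)      # S S^T = n/(1-W0) P
    pts = [x]                                       # x^0 = x^a, eq. (5)
    pts += [x + S[:, i] for i in range(n)]          # eq. (7), i-th column of S
    pts += [x - S[:, i] for i in range(n)]          # eq. (8)
    W = np.full(2 * n + 1, (1.0 - W0) / (2 * n))    # eq. (9)
    W[0] = W0                                       # weights sum to 1, eq. (10)
    return np.array(pts), W

# Hypothetical prior: the weighted sample mean and covariance must recover it.
x = np.array([1.0, -2.0])
P = np.array([[2.0, 0.3], [0.3, 1.0]])
X, W = sigma_points(x, P)
mean = W @ X
cov = sum(w * np.outer(p - mean, p - mean) for w, p in zip(W, X))
```

Any $W^0$ in $(-1, 1)$ works here; larger values push the symmetric points further from the mean, as noted above.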

A more general selection scheme for the sigma points, called the scaled unscented transformation, is given in [9, 2].

Model Forecast Step

Each sigma point is propagated through the nonlinear process model:

$$x^{f,j}_k = f(x^j_{k-1}) \quad (11)$$

The transformed points are used to compute the mean and covariance of the forecast value of $x_k$:

$$x^f_k = \sum_{j=0}^{2n} W^j x^{f,j}_k \quad (12)$$
$$P^f_k = \sum_{j=0}^{2n} W^j \left( x^{f,j}_k - x^f_k \right)\left( x^{f,j}_k - x^f_k \right)^T + Q_{k-1} \quad (13)$$

We then propagate the sigma points through the nonlinear observation model:

$$z^{f,j}_{k-1} = h(x^j_{k-1}) \quad (14)$$

With the resulting transformed observations, their mean and covariance (the innovation covariance) are computed:

$$z^f_{k-1} = \sum_{j=0}^{2n} W^j z^{f,j}_{k-1} \quad (15)$$
$$\text{Cov}(\tilde{z}^f_{k-1}) = \sum_{j=0}^{2n} W^j \left( z^{f,j}_{k-1} - z^f_{k-1} \right)\left( z^{f,j}_{k-1} - z^f_{k-1} \right)^T + R_k \quad (16)$$

The cross covariance between $\tilde{x}^f_k$ and $\tilde{z}^f_{k-1}$ is:

$$\text{Cov}(\tilde{x}^f_k, \tilde{z}^f_{k-1}) = \sum_{j=0}^{2n} W^j \left( x^{f,j}_k - x^f_k \right)\left( z^{f,j}_{k-1} - z^f_{k-1} \right)^T \quad (17)$$

Data Assimilation Step

We want to combine the information obtained in the forecast step with the new observation $z_k$. As in the KF, assume that the estimate has the following form:

$$x^a_k = x^f_k + K_k \left( z_k - z^f_{k-1} \right) \quad (18)$$

The gain $K_k$ is given by:

$$K_k = \text{Cov}(\tilde{x}^f_k, \tilde{z}^f_{k-1}) \, \text{Cov}^{-1}(\tilde{z}^f_{k-1}) \quad (19)$$

The posterior covariance is updated by the following formula:

$$P_k = P^f_k - K_k \, \text{Cov}(\tilde{z}^f_{k-1}) \, K_k^T \quad (20)$$

Square-Root UKF

Note that in order to compute the new set of sigma points we need the square-root matrix of the posterior covariance each time ($P_k = S_k S_k^T$).
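Putting eqs. (11)-(20) together, one full UKF cycle can be sketched as below. This is a minimal sketch, not the author's implementation: the process model `f`, observation model `h`, and all numerical values are hypothetical placeholders, and NumPy's Cholesky factor stands in for the matrix square root of the sigma-point selection.

```python
import numpy as np

def ukf_step(xa, P, z, f, h, Q, R, W0=1.0 / 3.0):
    """One UKF forecast + data-assimilation cycle, eqs. (11)-(20)."""
    n = xa.size
    S = np.linalg.cholesky(n / (1.0 - W0) * P)          # sigma-point spread
    X = np.vstack([xa[None, :], xa + S.T, xa - S.T])    # 2n+1 sigma points, eqs. (5)-(8)
    W = np.full(2 * n + 1, (1.0 - W0) / (2 * n))        # weights, eq. (9)
    W[0] = W0

    Xf = np.array([f(x) for x in X])                    # eq. (11)
    xf = W @ Xf                                         # eq. (12)
    Pf = (W[:, None] * (Xf - xf)).T @ (Xf - xf) + Q     # eq. (13)

    Zf = np.array([h(x) for x in X])                    # eq. (14)
    zf = W @ Zf                                         # eq. (15)
    Pzz = (W[:, None] * (Zf - zf)).T @ (Zf - zf) + R    # eq. (16)
    Pxz = (W[:, None] * (Xf - xf)).T @ (Zf - zf)        # eq. (17)

    K = Pxz @ np.linalg.inv(Pzz)                        # eq. (19)
    return xf + K @ (z - zf), Pf - K @ Pzz @ K.T        # eqs. (18), (20)

# Hypothetical models and values, purely for illustration.
f = lambda x: np.array([x[0] + 0.1 * x[1], x[1] + 0.05 * np.sin(x[0])])
h = lambda x: np.array([x[0]])
xa, P0 = np.array([0.5, -0.2]), 0.2 * np.eye(2)
xa_new, P_new = ukf_step(xa, P0, np.array([0.6]), f, h,
                         0.01 * np.eye(2), 0.1 * np.eye(1))
```

Following the text, the observation sigma points in eq. (14) are the prior points $x^j_{k-1}$ rather than redrawn ones; the square-root variant below redraws them after the time update.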

Since the update is applied to the full posterior covariance, we can change the algorithm to propagate the square-root matrix $S_k$ directly. The selection scheme of the sigma points becomes:

$$x^0_{k-1} = x^a_{k-1} \quad (21)$$
$$-1 < W^0 < 1 \quad (22)$$
$$x^i_{k-1} = x^a_{k-1} + \sqrt{\tfrac{n}{1-W^0}} \left( S_{k-1} \right)_i \quad \text{for all } i = 1 \ldots n \quad (23)$$
$$x^{i+n}_{k-1} = x^a_{k-1} - \sqrt{\tfrac{n}{1-W^0}} \left( S_{k-1} \right)_i \quad \text{for all } i = 1 \ldots n \quad (24)$$
$$W^j = \frac{1 - W^0}{2n} \quad \text{for all } j = 1 \ldots 2n \quad (25)$$

The filter is initialized by computing the initial square-root matrix via a Cholesky factorization of the full error covariance matrix:

$$S_0 = \text{chol}\left( E[(x_0 - \mu_0)(x_0 - \mu_0)^T] \right) \quad (26)$$

Since $W^j > 0$ for all $j \geq 1$, in the time update step the forecast covariance matrix can be written as:

$$
\begin{aligned}
P^f_k &= \sum_{j=0}^{2n} W^j \left( x^{f,j}_k - x^f_k \right)\left( x^{f,j}_k - x^f_k \right)^T + Q_{k-1} \\
&= \sum_{j=1}^{2n} \left[ \sqrt{W^j}\left( x^{f,j}_k - x^f_k \right) \right]\left[ \sqrt{W^j}\left( x^{f,j}_k - x^f_k \right) \right]^T + \sqrt{Q_{k-1}}\,\sqrt{Q_{k-1}}^T + W^0 \left( x^{f,0}_k - x^f_k \right)\left( x^{f,0}_k - x^f_k \right)^T \\
&= \left[ \sqrt{W^j}\left( x^{f,j}_k - x^f_k \right) \;\; \sqrt{Q_{k-1}} \right]\left[ \sqrt{W^j}\left( x^{f,j}_k - x^f_k \right) \;\; \sqrt{Q_{k-1}} \right]^T + W^0 \left( x^{f,0}_k - x^f_k \right)\left( x^{f,0}_k - x^f_k \right)^T, \quad j = 1 \ldots 2n \quad (27)
\end{aligned}
$$

where $\sqrt{Q_{k-1}}$ is the square-root matrix of the process noise covariance matrix. This form is computationally undesirable, since we have tripled the number of columns: $\left[ \sqrt{W^j}\left( x^{f,j}_k - x^f_k \right) \;\; \sqrt{Q_{k-1}} \right] \in \mathbb{R}^{n \times 3n}$ for $j = 1 \ldots 2n$.
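The algebraic rearrangement in eq. (27) can be checked numerically: stacking the scaled deviations next to the square root of $Q_{k-1}$ and adding the $W^0$ rank-1 term reproduces the forecast covariance of eq. (13). The sigma-point values and noise covariance below are hypothetical random data, used only to exercise the identity.

```python
import numpy as np

rng = np.random.default_rng(0)
n, W0 = 2, 1.0 / 3.0
Wj = (1.0 - W0) / (2 * n)

Xf = rng.normal(size=(2 * n + 1, n))     # hypothetical forecast sigma points x_k^{f,j}
Q = np.diag([0.04, 0.09])                # hypothetical process noise covariance
W = np.full(2 * n + 1, Wj); W[0] = W0
xf = W @ Xf                              # forecast mean, eq. (12)

# Left-hand side: the weighted outer-product sum of eq. (13).
Pf = sum(w * np.outer(d, d) for w, d in zip(W, Xf - xf)) + Q

# Right-hand side: the n x 3n compound matrix of eq. (27) plus the W0 rank-1 term.
C = np.hstack([np.sqrt(Wj) * (Xf[1:] - xf).T, np.linalg.cholesky(Q)])
d0 = Xf[0] - xf
Pf_compound = C @ C.T + W0 * np.outer(d0, d0)
```

The $3n$-column width of `C` is exactly the growth the QR decomposition below is introduced to compress.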

We can use the QR decomposition to express the transpose of the above matrix in terms of an orthogonal matrix $O_k \in \mathbb{R}^{3n \times n}$ and an upper triangular matrix $(S^f_k)^T \in \mathbb{R}^{n \times n}$:

$$\left[ \sqrt{W^j}\left( x^{f,j}_k - x^f_k \right) \;\; \sqrt{Q_{k-1}} \right]^T = O_k (S^f_k)^T, \quad j = 1 \ldots 2n$$

Hence the error covariance matrix:

$$P^f_k = S^f_k O_k^T O_k (S^f_k)^T + W^0 \left( x^{f,0}_k - x^f_k \right)\left( x^{f,0}_k - x^f_k \right)^T = S^f_k (S^f_k)^T + W^0 \left( x^{f,0}_k - x^f_k \right)\left( x^{f,0}_k - x^f_k \right)^T \quad (28)$$

In order to include the effect of the last term in the square-root matrix, we have to perform a rank-1 update to the Cholesky factorization:

$$S^f_k = \text{cholupdate}\left( S^f_k,\; x^{f,0}_k - x^f_k,\; \text{sgn}\{W^0\}\sqrt{|W^0|} \right) \quad (29)$$

where sgn is the sign function and cholupdate returns the Cholesky factor of $S^f_k (S^f_k)^T + W^0 \left( x^{f,0}_k - x^f_k \right)\left( x^{f,0}_k - x^f_k \right)^T$. Therefore the forecast covariance matrix can be written $P^f_k = S^f_k (S^f_k)^T$. In the same way, the posterior covariance can be expressed as $P_k = S_k S_k^T$ and the innovation covariance as $\text{Cov}(\tilde{z}^f_{k-1}) = S_{\tilde{z}^f_{k-1}} S_{\tilde{z}^f_{k-1}}^T$.

Time update summary:

$$x^{f,j}_k = f(x^j_{k-1}) \quad (30)$$
$$x^f_k = \sum_{j=0}^{2n} W^j x^{f,j}_k \quad (31)$$
$$S^f_k = \text{qr}\left\{ \left[ \sqrt{W^j}\left( x^{f,j}_k - x^f_k \right) \;\; \sqrt{Q_{k-1}} \right] \right\}, \quad j = 1 \ldots 2n \quad (32)$$
$$S^f_k = \text{cholupdate}\left( S^f_k,\; x^{f,0}_k - x^f_k,\; \text{sgn}\{W^0\}\sqrt{|W^0|} \right) \quad (33)$$

Redraw the sigma points to incorporate the effect of the process noise:

$$x^{f,0}_k = x^f_k \quad (34)$$
$$x^{f,i}_k = x^f_k + \sqrt{\tfrac{n}{1-W^0}} \left( S^f_k \right)_i \quad \text{for all } i = 1 \ldots n \quad (35)$$
$$x^{f,i+n}_k = x^f_k - \sqrt{\tfrac{n}{1-W^0}} \left( S^f_k \right)_i \quad \text{for all } i = 1 \ldots n \quad (36)$$

Propagate the new sigma points through the measurement model:

$$z^{f,j}_{k-1} = h(x^{f,j}_{k-1}) \quad (37)$$
$$z^f_{k-1} = \sum_{j=0}^{2n} W^j z^{f,j}_{k-1} \quad (38)$$
$$S_{\tilde{z}^f_{k-1}} = \text{qr}\left\{ \left[ \sqrt{W^j}\left( z^{f,j}_{k-1} - z^f_{k-1} \right) \;\; \sqrt{R_k} \right] \right\}, \quad j = 1 \ldots 2n \quad (39)$$
$$S_{\tilde{z}^f_{k-1}} = \text{cholupdate}\left( S_{\tilde{z}^f_{k-1}},\; z^{f,0}_{k-1} - z^f_{k-1},\; \text{sgn}\{W^0\}\sqrt{|W^0|} \right) \quad (40)$$
$$\text{Cov}(\tilde{x}^f_k, \tilde{z}^f_{k-1}) = \sum_{j=0}^{2n} W^j \left( x^{f,j}_k - x^f_k \right)\left( z^{f,j}_{k-1} - z^f_{k-1} \right)^T \quad (41)$$

The qr function returns only the lower triangular factor.

Measurement update summary:

$$x^a_k = x^f_k + K_k \left( z_k - z^f_{k-1} \right) \quad (42)$$
$$K_k = \left( \text{Cov}(\tilde{x}^f_k, \tilde{z}^f_{k-1}) \, / \, S_{\tilde{z}^f_{k-1}}^T \right) / \, S_{\tilde{z}^f_{k-1}} \quad (43)$$
$$S_k = \text{cholupdate}\left( S^f_k,\; K_k S_{\tilde{z}^f_{k-1}},\; -1 \right) \quad (44)$$

where $/$ denotes a back-substitution operation. This is a better alternative to the matrix inversion in eq. (19). Since the Cholesky factor is a lower triangular matrix, we can find $K_k$ using two back-substitution operations in the equation:

$$K_k \left( S_{\tilde{z}^f_{k-1}} S_{\tilde{z}^f_{k-1}}^T \right) = \text{Cov}(\tilde{x}^f_k, \tilde{z}^f_{k-1}) \quad (45)$$

In eq. (44), since the middle argument of the cholupdate function is a matrix rather than a vector, the result is a sequence of consecutive rank-1 downdates of the Cholesky factor, one for each column of the middle argument. Because the QR decomposition and the Cholesky factorization tend to control round-off errors better and there are no matrix inversions, the SR-UKF has better numerical properties and it also guarantees positive semi-definiteness of the underlying state covariance [8].
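The qr/cholupdate pipeline of eqs. (32)-(33) can be sketched with NumPy. Since NumPy ships no cholupdate, a plain rank-1 update/downdate recurrence is included below as an assumption of how such a routine behaves; the sigma-point data are hypothetical. The check confirms that the resulting factor satisfies $S^f_k (S^f_k)^T = P^f_k$.

```python
import numpy as np

def cholupdate(S, u, sign=1.0):
    """Rank-1 update of a lower-triangular Cholesky factor: returns
    chol(S S^T + sign * u u^T). sign=-1 performs a downdate and assumes
    the result remains positive definite."""
    S, u = S.copy(), u.copy()
    n = u.size
    for i in range(n):
        r = np.sqrt(S[i, i] ** 2 + sign * u[i] ** 2)
        c, s = r / S[i, i], u[i] / S[i, i]
        S[i, i] = r
        if i + 1 < n:
            S[i + 1:, i] = (S[i + 1:, i] + sign * s * u[i + 1:]) / c
            u[i + 1:] = c * u[i + 1:] - s * S[i + 1:, i]
    return S

rng = np.random.default_rng(1)
n, W0 = 2, 1.0 / 3.0
Wj = (1.0 - W0) / (2 * n)
Xf = rng.normal(size=(2 * n + 1, n))             # hypothetical forecast sigma points
Q = np.diag([0.04, 0.09])                        # hypothetical process noise covariance
W = np.full(2 * n + 1, Wj); W[0] = W0
xf = W @ Xf                                      # eq. (31)

# eq. (32): economy QR of the transposed n x 3n compound matrix; the
# transposed upper-triangular factor is the forecast square root S_f.
A = np.hstack([np.sqrt(Wj) * (Xf[1:] - xf).T, np.linalg.cholesky(Q)])
R = np.linalg.qr(A.T)[1]
Sf = (R * np.sign(np.diag(R))[:, None]).T        # force a positive diagonal

# eq. (33): fold in the W0 term via a rank-1 update (W0 > 0 here).
Sf = cholupdate(Sf, np.sqrt(abs(W0)) * (Xf[0] - xf), sign=np.sign(W0))

# Reference: the full-covariance forecast of eq. (13).
Pf_full = sum(w * np.outer(d, d) for w, d in zip(W, Xf - xf)) + Q
```

Exactly the same pattern with $\sqrt{R_k}$ in place of $\sqrt{Q_{k-1}}$ yields the innovation square root of eqs. (39)-(40), and $K_k$ then follows from two triangular solves as in eq. (45).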

Conclusion

The UKF represents the extra uncertainty on a linearized function due to linearization errors by the covariance of the deviations between the nonlinear and the linearized function in the regression points [6]. The approximations obtained with at least $2n+1$ sampling points are accurate to the 3rd order for Gaussian inputs for all nonlinearities, and at least to the 2nd order for non-Gaussian inputs. The Reduced Sigma Point Filters [3] use only $n+1$ sampling points, but they do not take into account the linearization errors.

References

[1] Arthur Gelb, editor. Applied Optimal Estimation. MIT Press, 1974.
[2] S. Julier. The Scaled Unscented Transformation, 1999.
[3] S. Julier and J. K. Uhlmann. Reduced Sigma Point Filters for the Propagation of Means and Covariances Through Nonlinear Transformations. 2002.
[4] S. J. Julier and J. K. Uhlmann. Unscented Filtering and Nonlinear Estimation. Proceedings of the IEEE, 92(3):401-422, 2004.
[5] Tine Lefebvre and Herman Bruyninckx. Kalman Filters for Nonlinear Systems: A Comparison of Performance.
[6] Tine Lefebvre and Herman Bruyninckx. Comment on "A New Method for the Nonlinear Transformation of Means and Covariances in Filters and Estimators". IEEE Transactions on Automatic Control, 2002.
[7] R. van der Merwe. Sigma-Point Kalman Filters for Probabilistic Inference in Dynamic State-Space Models. Technical report, 2003.
[8] R. van der Merwe and E. Wan. The Square-Root Unscented Kalman Filter for State and Parameter Estimation, 2001.
[9] E. Wan and R. van der Merwe. The Unscented Kalman Filter. Wiley Publishing.

