
Kalman Filtering and Model Estimation - Steven Lillywhite




Transcription of Kalman Filtering and Model Estimation - Steven Lillywhite

Kalman Filtering and Model Estimation (1/29)
Steven Lillywhite

Introduction (2/29)
We aim to cover:
- the basics of the Kalman filter
- the relationship with MLE
- some real examples

Overview (3/29)
1 Some Applications
2 Some History
3 Minimum Variance Estimation
4 Kalman Filter: State-Space Form; Kalman Filter Algorithm; Initial State Conditions; Stability
5 Maximum Likelihood Estimation
6 Estimating Commodities Models

Applications (4/29)
Keywords: estimation, control theory, signal processing, filtering, linear stochastic systems.
Used in the Apollo lunar landing. From the NASA Ames website: "The Kalman-Schmidt filter was embedded in the Apollo navigation computer and ultimately into all air navigation systems, and laid the foundation for Ames' future leadership in flight and air traffic research."

Applications (4/29), continued
Satellite tracking. Missile tracking. Radar vision. Robotics. Speech processing. Mathematical finance.

Some History (5/29): Gauss was no dummy!
Late 1700s. Problem: estimate planet and comet motion using data from telescope observations. Gauss first uses the least-squares method at the age of 18.
Fisher introduces the method of maximum likelihood.
Wiener-Kolmogorov linear minimum variance estimation technique: signal processing; unwieldy for large data sets.
1960: Kalman introduces the Kalman filter.

Minimum Variance Estimators (6/29)
Let (Ω, P) be a probability space.
Definition (Estimator). Let X, Y_1, Y_2, ..., Y_n ∈ L²(P) be random variables, and let Y := (Y_1, Y_2, ..., Y_n). By an estimator X̂ for X given Y we mean a random variable of the form X̂ = g∘Y, where g: R^n → R is a given Borel-measurable function.
Definition (Minimum Variance Estimator). An estimator X̂ of X given Y is called a minimum variance estimator if
    ||X̂ − X|| ≤ ||h∘Y − X||    (1)
for all Borel-measurable h.

Let us denote MVE(X|Y) := X̂.

Minimum Variance Estimators (7/29)
MVE(X|Y) = E(X|Y). Let M(Y) := {g∘Y | g Borel-measurable, g∘Y ∈ L²(P)}. Then M(Y) is a closed subspace of L²(P), and MVE(X|Y) is the projection of X onto M(Y). As a corollary, MVE(X|Y) exists, is unique, and is characterized by the condition:
    (MVE(X|Y) − X) ⊥ M(Y)    (2)

Linear Minimum Variance Estimators (8/29)
Definition (Linear Estimator). Let X, Y_1, Y_2, ..., Y_n ∈ L²(P) be random variables, and let Y := (Y_1, Y_2, ..., Y_n). By a linear estimator X̂ for X given Y we mean a random variable of the form X̂ = g∘Y, where g: R^n → R is a given linear map.
Let us ramp it up a bit by letting X be vector-valued.
Definition (Best Linear Minimum Variance Estimator). Let X ∈ L²(P)^n, Y ∈ L²(P)^m. A linear estimator X̂ of X given Y is called a best linear minimum variance estimator if
    ||X̂ − X|| ≤ ||h∘Y − X||    (3)
for all linear h. Let us denote BLMVE(X|Y) := X̂.
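As a concrete illustration (not from the slides), the following NumPy sketch recovers the best linear map from simulated data: for zero-mean jointly Gaussian variables, BLMVE(X|Y) is given by Cov(X, Y) Cov(Y)^{-1} Y, which in the Gaussian case coincides with E(X|Y). The variable names and the coefficients (1.5, −0.5) are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated jointly Gaussian pair: X scalar, Y = (Y1, Y2).
# By construction the true conditional mean is E(X|Y) = 1.5*Y1 - 0.5*Y2.
n = 100_000
Y = rng.standard_normal((n, 2))
X = 1.5 * Y[:, 0] - 0.5 * Y[:, 1] + 0.3 * rng.standard_normal(n)

# BLMVE(X|Y) = h(Y), where the linear map h minimizes E||h(Y) - X||^2;
# for zero-mean variables h = Cov(X, Y) Cov(Y)^{-1}.
cov_Y = np.cov(Y, rowvar=False)
cov_XY = np.array([np.cov(X, Y[:, k])[0, 1] for k in range(2)])
h = cov_XY @ np.linalg.inv(cov_Y)

# In the Gaussian case BLMVE = MVE (up to a constant), so h should
# recover the coefficients (1.5, -0.5) up to sampling error.
print(h)
```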

Here h is given by an n × m matrix.

Linear Minimum Variance Estimators (9/29)
If X and Y are multivariate normal, then MVE(X|Y) = BLMVE(X|Y) (up to a constant term).

State-Space Form (10/29)
Definition (State-Space Form). The state-space form is defined by the following pair of equations:
    x_{i+1} = J_i x_i + g_i + u_i    (state)
    z_i = H_i x_i + b_i + w_i    (observation)
Here x_i, z_i are vectors representing a discrete random process. In general the elements of x_i are not observable; the z_i are the observations. We assume that u_i and w_i are white noise processes. We assume that all vectors and matrices take values in Euclidean space and can vary with i, but, apart from x_i and z_i, that they only vary in a deterministic way.

State-Space Form (11/29)
Furthermore, we denote Q_i := E(u_i u_i^T) and R_i := E(w_i w_i^T), and assume that the following hold:
    E(u_i x_0^T) = 0    (4)
    E(w_i x_0^T) = 0    (5)
    E(u_i w_j^T) = 0 for all i, j    (6)

Kalman Filter Notation (12/29)
Definition. Denote Y_j := (z_0, z_1, ..., z_j).
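To make the state-space definition concrete, here is a minimal simulation of the pair of equations above for an assumed scalar, time-invariant system. The particular values of J, H, g, b, Q, R are illustrative, not from the slides.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative time-invariant scalar system (values invented for the example).
J = np.array([[0.9]])   # state transition matrix J_i
H = np.array([[1.0]])   # observation matrix H_i
g = np.array([0.1])     # state intercept g_i
b = np.array([0.0])     # observation intercept b_i
Q = np.array([[0.04]])  # Q = E(u_i u_i^T)
R = np.array([[0.25]])  # R = E(w_i w_i^T)

def simulate(n, x0):
    """Draw (x_i, z_i), i = 0..n-1, from the state-space recursions above."""
    x = np.empty((n, 1))
    z = np.empty((n, 1))
    x[0] = x0
    for i in range(n):
        # observation: z_i = H x_i + b + w_i
        z[i] = H @ x[i] + b + rng.multivariate_normal(np.zeros(1), R)
        if i + 1 < n:
            # state: x_{i+1} = J x_i + g + u_i
            x[i + 1] = J @ x[i] + g + rng.multivariate_normal(np.zeros(1), Q)
    return x, z

# The stationary mean of this state process is g/(1 - J) = 1.0.
xs, zs = simulate(200, x0=np.array([1.0]))
```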

By x̂_{i|j} (resp. ẑ_{i|j}) we shall mean the best linear minimum variance estimate (BLMVE) of x_i (resp. z_i) based on Y_j. We also define
    P_{i|j} := E{(x_i − x̂_{i|j})(x_i − x̂_{i|j})^T}    (7)
and call this the error matrix. When i = j, the estimate is called a filtered estimate; when i > j, a predicted estimate; and when i < j, a smoothed estimate.

Discrete Kalman Filter (13/29)
Theorem (Kalman, 1960). The BLMVE x̂_{i|i} may be generated recursively by
    x̂_{i+1|i} = J_i x̂_{i|i} + g_i    (predicted state)
    P_{i+1|i} = J_i P_{i|i} J_i^T + Q_i    (predicted state error matrix)
    ẑ_{i+1|i} = H_{i+1} x̂_{i+1|i} + b_{i+1}    (predicted observation)
    r_{i+1} := z_{i+1} − ẑ_{i+1|i}    (predicted observation error)
    Σ_{i+1} := H_{i+1} P_{i+1|i} H_{i+1}^T + R_{i+1}    (predicted observation error matrix)
    K_{i+1} = P_{i+1|i} H_{i+1}^T Σ_{i+1}^{-1}    (Kalman gain)
    x̂_{i+1|i+1} = x̂_{i+1|i} + K_{i+1} r_{i+1}    (next filtered state)
    P_{i+1|i+1} = [I − K_{i+1} H_{i+1}] P_{i+1|i}    (next filtered state error matrix)
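The recursion in the theorem translates almost line-for-line into code. The following NumPy sketch is not the author's implementation; it assumes time-invariant system matrices for brevity (the recursion itself allows them to vary with i) and starts from x̂_{0|0}, P_{0|0}, consuming observations z_1, ..., z_n.

```python
import numpy as np

def kalman_filter(z, J, H, g, b, Q, R, x0, P0):
    """One pass of the discrete Kalman filter recursion above.

    z: (n, m) array of observations z_1..z_n; x0, P0 are the initial
    filtered state and error matrix. Returns the filtered states and
    filtered error matrices at each step.
    """
    n, m = z.shape
    k = x0.shape[0]
    x_filt = np.empty((n, k))
    P_filt = np.empty((n, k, k))
    x, P = x0, P0
    for i in range(n):
        # Prediction step.
        x_pred = J @ x + g                    # predicted state
        P_pred = J @ P @ J.T + Q              # predicted state error matrix
        z_pred = H @ x_pred + b               # predicted observation
        r = z[i] - z_pred                     # predicted observation error
        S = H @ P_pred @ H.T + R              # predicted obs error matrix
        # Update step.
        K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
        x = x_pred + K @ r                    # next filtered state
        P = (np.eye(k) - K @ H) @ P_pred      # next filtered state error
        x_filt[i], P_filt[i] = x, P
    return x_filt, P_filt
```

For instance, with J = H = I, g = b = 0 and Q = 0 (a constant hidden state observed in noise), the filter reduces to recursive averaging of the observations, and the filtered state converges to the sample mean.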

Discrete Kalman Filter (14/29)
If the initial state x_0 and the innovations u_i, w_i are multivariate Gaussian, then the forecasts x̂_{i|j} (resp. ẑ_{i|j}) are minimum variance estimators (MVE).
Note that the updated filtered state estimate is a sum of the predicted state estimate and the predicted observation error weighted by the gain matrix. Note that the gain matrix is proportional to the predicted state error covariance matrix, and inversely proportional to the predicted observation error covariance matrix. Thus, in updating the state estimator, more weight is given to the observation error when the error in the predicted state estimate is large, and less when the observation error is large.

Kalman Filter Initial State Conditions (15/29)
To run the Kalman filter, we begin with the pair x̂_{0|0}, P_{0|0} (alternatively, one may also use x̂_{1|0}, P_{1|0}).

A difficulty with the Kalman filter is the determination of these initial conditions. In many real applications, the distribution for x_0 is unknown. Several approaches are used:
- For a stationary state series, we can compute x̂_{0|0}, P_{0|0} from the stationary distribution.
- Diffuse prior: x̂_{0|0} = 0 and P_{0|0} = kI with k >> 0.
- Alternatively, one may treat x_0 as a fixed vector, taking x̂_{0|0} = x_0 and P_{0|0} = 0, and estimate its components by treating them as extra parameters in the model. The details are more involved.
A rule of thumb is that for long time series, the initial state conditions will have little impact.

Kalman Filter Stability (16/29)
Under certain conditions, the error matrices P_{i+1|i} (equivalently P_{i|i}) will stabilize:
    lim_{i→∞} P_{i+1|i} = P̄    (8)
with P̄ independent of P_{1|0}. Convergence is often exponentially fast. This means that for stable filters, the initial state conditions won't have much impact so long as we have enough data to get to a stable state.

One needs to be more concerned with initial state conditions in small samples.
One can gain a significant computational advantage by exploiting convergence in the filter. Especially when the matrices are time-invariant, the predicted observation error matrix and the Kalman gain stabilize; see the next slide.

Kalman Filter Stability (17/29)
This part of the algorithm is independent of the data:
    P_{i+1|i} = J_i P_{i|i} J_i^T + Q_i    (predicted state error matrix)
    Σ_{i+1} := H_{i+1} P_{i+1|i} H_{i+1}^T + R_{i+1}    (predicted observation error matrix)
    K_{i+1} = P_{i+1|i} H_{i+1}^T Σ_{i+1}^{-1}    (Kalman gain)
    P_{i+1|i+1} = [I − K_{i+1} H_{i+1}] P_{i+1|i}    (next filtered state error matrix)
This part depends on the data:
    x̂_{i+1|i} = J_i x̂_{i|i} + g_i    (predicted state)
    ẑ_{i+1|i} = H_{i+1} x̂_{i+1|i} + b_{i+1}    (predicted observation)
    r_{i+1} := z_{i+1} − ẑ_{i+1|i}    (predicted observation error)
    x̂_{i+1|i+1} = x̂_{i+1|i} + K_{i+1} r_{i+1}    (next filtered state)

Kalman Filter Divergence (18/29)
Numerical instability in the algorithm, round-off errors, etc.,
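The split into data-independent and data-dependent recursions can be exploited directly: the error matrices and gains can be computed once, offline, and reused across data sets (or frozen at their limit once they stabilize). A sketch of the data-independent part, with illustrative scalar values not taken from the slides:

```python
import numpy as np

def precompute_gains(J, H, Q, R, P0, n):
    """Run only the data-independent recursions from the slide above,
    returning the Kalman gain sequence K_1..K_n (time-invariant system)."""
    k = P0.shape[0]
    gains = []
    P = P0
    for _ in range(n):
        P_pred = J @ P @ J.T + Q                 # predicted state error matrix
        S = H @ P_pred @ H.T + R                 # predicted obs error matrix
        K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
        P = (np.eye(k) - K @ H) @ P_pred         # filtered state error matrix
        gains.append(K)
    return gains

# For a stable system the gains converge exponentially fast, so later
# steps of the filter can reuse the limiting gain.
J = np.array([[0.8]])
H = np.array([[1.0]])
Q = np.array([[0.1]])
R = np.array([[0.5]])
K = precompute_gains(J, H, Q, R, P0=np.array([[5.0]]), n=50)
print(np.allclose(K[-1], K[-2]))  # True: the gain has stabilized
```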

can cause divergence in the filter.
If the underlying state model does not fit the real-world process well, then the filter can diverge.
If we cannot observe some of the state variables (or linear combinations of them), then we can get divergence in the filter.

Kalman Filter: Other Items (19/29)
Kalman filter advantage: real-time updating. No need to store past data to update the current estimate.
It can handle missing data, since the matrices in the algorithm can vary over time.
Smoothing: the filter algorithm above gives the BLMVE at time t based on data up to time t. However, once all the data is in, we can make better estimates of the state variables at time t using also the data after t.
There are alternative forms of the filter algorithm based on algebraic manipulation of the equations. The information filter computes P^{-1}_{i|i}; depending on the situation, this can be more (or less) efficient. The square-root filter uses square roots of P^{-1}_{i|i}.

It is more computationally burdensome, but can improve numerical stability.

Kalman Filter: Other Items (20/29)
Non-linear state-space filters: the Extended Kalman Filter. Here, we allow arbitrary functions in the state-space formulation, rather than the linear functions above:
    x_{i+1} = f(x_i, g_i, u_i)    (state)
    z_i = h(x_i, b_i, w_i)    (observation)
One proceeds by linearizing the functions about the estimates at each step, and thereby obtains an analogous filter. There is also a continuous-time version of the filter, due to Kalman and Bucy.

Maximum Likelihood Estimation (21/29)
If the initial state x_0 and the innovations u_i, w_i are multivariate Gaussian, then the distribution of z_i conditional on the set Y_{i−1} is also Gaussian, and the error matrices above are covariance matrices of the errors:
    z_i | Y_{i−1} ~ N(ẑ_{i|i−1}, Σ_i)    (9)
Now let us suppose that the state-space vectors and matrices depend on certain unknown parameters.
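Equation (9) is what connects the filter to MLE: since z_i | Y_{i−1} ~ N(ẑ_{i|i−1}, Σ_i), the log-likelihood decomposes over the prediction errors, log L = −(1/2) Σ_i [m log 2π + log det Σ_i + r_i^T Σ_i^{-1} r_i], and can be accumulated during a single filter pass. A sketch (not the author's code; intercepts g, b omitted for brevity):

```python
import numpy as np

def kalman_loglik(z, J, H, Q, R, x0, P0):
    """Gaussian log-likelihood via the prediction error decomposition:
    z_i | Y_{i-1} ~ N(z_pred_i, S_i), so
    log L = -0.5 * sum_i [m*log(2*pi) + log det S_i + r_i' S_i^{-1} r_i].
    """
    n, m = z.shape
    x, P = x0, P0
    loglik = 0.0
    for i in range(n):
        # Data-independent and data-dependent prediction steps.
        x_pred = J @ x
        P_pred = J @ P @ J.T + Q
        r = z[i] - H @ x_pred                 # prediction error r_i
        S = H @ P_pred @ H.T + R              # its covariance matrix S_i
        S_inv = np.linalg.inv(S)
        # Accumulate the Gaussian log-density of r_i.
        loglik -= 0.5 * (m * np.log(2 * np.pi)
                         + np.log(np.linalg.det(S))
                         + r @ S_inv @ r)
        # Standard filter update.
        K = P_pred @ H.T @ S_inv
        x = x_pred + K @ r
        P = (np.eye(x.shape[0]) - K @ H) @ P_pred
    return loglik
```

To estimate unknown parameters one would wrap this in an optimizer, e.g. minimizing the negative log-likelihood over (J, H, Q, R) with scipy.optimize.minimize.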

