
9.5: Extended Kalman Filter


    In the extended Kalman filter, the state transition and observation models do not need to be linear functions of the state but may instead be differentiable functions. The action update step looks as follows:

    \[\hat{\textbf{x}}_{k'|k-1}=f(\hat{\textbf{x}}_{k-1|k-1},\textbf{u}_{k-1})\]

    Here f() is a function of the old state \(\textbf{x}_{k-1}\) and the control input \(\textbf{u}_{k-1}\). This is nothing other than the odometry update we are used to, where f() describes the forward kinematics of the robot, \(\textbf{x}_{k}\) its pose, and \(\textbf{u}_{k}\) the wheel-speeds we set.

    We can also calculate the covariance matrix of the robot's position:

    \[\mathbf{P}_{k'|k-1}=\nabla_{x,y,\theta}f\,\mathbf{P}_{k-1|k-1}\,\nabla_{x,y,\theta}f^{T}+\nabla_{\Delta_{r,l}}f\,\mathbf{Q}_{k-1}\,\nabla_{\Delta_{r,l}}f^{T}\]

    This is nothing other than the error propagation law applied to the odometry of the robot: \(\mathbf{Q}_{k-1}\) is the covariance matrix of the wheel-slip, and the two gradients are the Jacobians of the forward-kinematic equations f() with respect to the robot's position (indicated by the index x, y, θ) and with respect to the wheel-slip of the left and right wheel, respectively.
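    Both prediction equations translate almost directly into code. The following is a minimal NumPy sketch under the assumption that the forward model f and its two Jacobians are available as functions; the names (ekf_predict, jac_x, jac_u) are illustrative, not taken from the text.

```python
import numpy as np

def ekf_predict(x, P, u, f, jac_x, jac_u, Q):
    """One EKF action (prediction) update.

    x: (n,) state estimate x_{k-1|k-1}
    P: (n,n) state covariance P_{k-1|k-1}
    u: control input (e.g. wheel displacements)
    f: forward model, f(x, u) -> new state
    jac_x: Jacobian of f w.r.t. the state, evaluated at (x, u) -> (n,n)
    jac_u: Jacobian of f w.r.t. the input, evaluated at (x, u) -> (n,m)
    Q: (m,m) covariance of the input noise (e.g. wheel slip)
    """
    x_pred = f(x, u)
    Fx = jac_x(x, u)
    Fu = jac_u(x, u)
    # Error propagation law: P' = Fx P Fx^T + Fu Q Fu^T
    P_pred = Fx @ P @ Fx.T + Fu @ Q @ Fu.T
    return x_pred, P_pred
```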

    The perception update (or correction) step looks as follows:

    \[\hat{\boldsymbol{x}}_{k|k'}=\hat{\boldsymbol{x}}_{k'|k-1}+\boldsymbol{K}_{k'}\boldsymbol{y}_{k'}\]

    \[\boldsymbol{P}_{k|k'}=(I-\boldsymbol{K}_{k'}\boldsymbol{H}_{k'})\boldsymbol{P}_{k'|k-1}\]

    At this point the indices k should start making sense. We are calculating everything twice: once we update from k − 1 to an intermediate result k' during the action update, and obtain the final result after the perception update where we go from k' to k.

    We need to calculate three additional variables:

    1. The innovation \(\boldsymbol{y}_{k}=\boldsymbol{z}_{k}-h(\hat{\boldsymbol{x}}_{k|k-1})\)
    2. The covariance of the innovation \(\boldsymbol{S}_{k}=\boldsymbol{H}_{k}\boldsymbol{P}_{k|k-1}\boldsymbol{H}_{k}^{T}+\boldsymbol{R}_{k}\)
    3. The (near-optimal) Kalman gain \(\boldsymbol{K}_{k}=\boldsymbol{P}_{k|k-1}\boldsymbol{H}_{k}^{T}\boldsymbol{S}_{k}^{-1}\)

    Here h() is the observation model and H its Jacobian. How these equations are derived is involved (and is one of the fundamental results in control theory), but the idea is the same as introduced above: we wish to minimize the error of the prediction.
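    Together with the update equations above, these three quantities make up the complete perception update. The sketch below is a minimal NumPy version under the assumption that the observation model h and its Jacobian are available as functions; the function and argument names are illustrative.

```python
import numpy as np

def ekf_correct(x_pred, P_pred, z, h, jac_h, R):
    """One EKF perception (measurement) update.

    x_pred, P_pred: predicted state x_{k'|k-1} and covariance P_{k'|k-1}
    z: measurement z_k
    h: observation model, h(x) -> predicted measurement
    jac_h: Jacobian H_k of h, evaluated at x_pred
    R: measurement noise covariance R_k
    """
    H = jac_h(x_pred)
    y = z - h(x_pred)                      # innovation
    S = H @ P_pred @ H.T + R               # covariance of the innovation
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x_new, P_new
```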

    9.5.1. Odometry using the Kalman Filter

    We will show how a mobile robot equipped with a laser scanner can optimally fuse unreliable odometry and unreliable sensing with a correct map to correct its position estimate. Whereas the action update step is equivalent to the forward kinematics and error propagation we have seen before, computing the observation model and the “innovation” requires additional steps.

    1. Prediction Update: We assume for now that the reader is familiar with calculating \(\hat{\boldsymbol{x}}_{k'|k-1}=f(x,y,\theta)^{T}\) and its variance \(\boldsymbol{P}_{k'|k-1}\). Here, \(\boldsymbol{Q}_{k-1}\), the covariance matrix of the wheel-slip error, is given by

    \[Q_{k-1}=\begin{bmatrix}
    k_{r}|\Delta s_{r}| & 0\\
    0 & k_{l}|\Delta s_{l}|
    \end{bmatrix}\]

    where \(\Delta s_{r}\) and \(\Delta s_{l}\) are the distances traveled by the right and left wheel and \(k_{r}\), \(k_{l}\) are constants. See also the odometry lab for detailed derivations of these calculations and how to estimate \(k_{r}\) and \(k_{l}\). The state vector \(\hat{\boldsymbol{x}}_{k'|k-1}\) is a 3x1 vector, the covariance matrix \(\boldsymbol{P}_{k'|k-1}\) is a 3x3 matrix, and the Jacobian \(\nabla_{\Delta_{r,l}}f\) used during error propagation is a 3x2 matrix. See the error propagation section for details on how to calculate \(\nabla_{\Delta_{r,l}}f\).
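    As a concrete illustration, the whole prediction step for a differential-drive robot might look like the sketch below. The forward-kinematic model and its Jacobians follow the standard differential-drive odometry equations (with wheelbase b) derived in the odometry and error propagation sections; the function name and argument layout are assumptions for illustration only.

```python
import numpy as np

def odometry_predict(x, P, ds_r, ds_l, b, k_r, k_l):
    """Prediction update for a differential-drive robot.

    x = (x, y, theta) as a NumPy array, P its 3x3 covariance,
    ds_r/ds_l the wheel displacements, b the wheelbase,
    k_r/k_l the wheel-slip constants.
    """
    ds = (ds_r + ds_l) / 2.0
    dtheta = (ds_r - ds_l) / b
    a = x[2] + dtheta / 2.0

    # Forward kinematics f(x, y, theta, ds_r, ds_l)
    x_pred = x + np.array([ds * np.cos(a), ds * np.sin(a), dtheta])

    # Jacobian with respect to the state (3x3)
    Fx = np.array([[1.0, 0.0, -ds * np.sin(a)],
                   [0.0, 1.0,  ds * np.cos(a)],
                   [0.0, 0.0,  1.0]])

    # Jacobian with respect to the wheel displacements (3x2)
    Fu = np.array([
        [0.5 * np.cos(a) - ds / (2 * b) * np.sin(a),
         0.5 * np.cos(a) + ds / (2 * b) * np.sin(a)],
        [0.5 * np.sin(a) + ds / (2 * b) * np.cos(a),
         0.5 * np.sin(a) - ds / (2 * b) * np.cos(a)],
        [1.0 / b, -1.0 / b]])

    # Wheel-slip covariance Q_{k-1}
    Q = np.diag([k_r * abs(ds_r), k_l * abs(ds_l)])

    # Error propagation law
    P_pred = Fx @ P @ Fx.T + Fu @ Q @ Fu.T
    return x_pred, P_pred
```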

    2. Observation: Let us now assume that we can detect line features \(\boldsymbol{z}_{k,i}=(\alpha_{i},r_{i})^{T}\), where \(\alpha_{i}\) and \(r_{i}\) are the angle and distance of the line in the coordinate system of the robot. These line features are subject to variances \(\sigma_{\alpha,i}\) and \(\sigma_{r,i}\), which make up the diagonal of \(\boldsymbol{R}_{k}\). See the line detection section for a derivation of how the angle and distance as well as their variances can be calculated from a laser scanner. Each observation is a 2x1 vector.
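    For illustration, a single line observation and its covariance could be assembled as follows; the numbers are made up, and in practice they come out of the line-fitting step described in the line detection section.

```python
import numpy as np

# One detected line feature: angle alpha_i, distance r_i, and their
# variances (illustrative values, normally produced by line fitting).
alpha_i, r_i = 0.3, 1.2            # radians, meters
var_alpha, var_r = 1e-3, 4e-4

z_i = np.array([alpha_i, r_i])     # 2x1 observation z_{k,i}
R_i = np.diag([var_alpha, var_r])  # 2x2 measurement covariance block
```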

    3. Measurement Prediction: We assume that we can uniquely identify the lines we are seeing and retrieve their true position from a map. This is much easier for unique features, but it can also be done for lines by assuming that our error is small enough that we can simply search the map and pick the closest lines. As features are stored in global coordinates, we need to transform them into how the robot would see them. In practice the map is nothing but a list of lines, each with an angle and a distance, but this time with respect to the origin of the global coordinate system. Transforming them into robot coordinates is straightforward. With \(\hat{\boldsymbol{x}}_{k}=(x_{k},y_{k},\theta_{k})^{T}\) the robot's pose and \(m_{i}=(\alpha_{i},r_{i})\) the corresponding entry from the map, we can write

    \[h(\hat{\boldsymbol{x}}_{k|k-1})=\begin{bmatrix}
    \alpha_{k,i}\\
    r_{k,i}
    \end{bmatrix}=h(\boldsymbol{x},m_{i})=\begin{bmatrix}
    \alpha_{i}-\theta\\
    r_{i}-(x\cos(\alpha_{i})+y\sin(\alpha_{i}))
    \end{bmatrix}\]

    and calculate its Jacobian \(\boldsymbol{H}_{k}\) as the partial derivatives of \(\alpha\) with respect to x, y, and θ in the first row, and the partial derivatives of r in the second. How h() predicts the distance at which the robot should see the feature with distance \(r_{i}\) stored in the map is illustrated in the figure below.

    Figure: Example of how to predict the distance to a feature the robot would see, given its estimated position and the feature's known location on the map.
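    A minimal sketch of the measurement prediction and its Jacobian, following the equation above; the function name and argument layout are illustrative.

```python
import numpy as np

def measurement_prediction(x, m_i):
    """Predict how a robot at pose x = (x, y, theta) would see the map
    line m_i = (alpha_i, r_i), and return the Jacobian H_k (2x3)."""
    px, py, theta = x
    alpha_i, r_i = m_i

    # h(x, m_i) from the equation above
    h = np.array([alpha_i - theta,
                  r_i - (px * np.cos(alpha_i) + py * np.sin(alpha_i))])

    # Rows: d(alpha)/d(x, y, theta) and d(r)/d(x, y, theta)
    H = np.array([[0.0, 0.0, -1.0],
                  [-np.cos(alpha_i), -np.sin(alpha_i), 0.0]])
    return h, H
```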

    4. Matching: We are now equipped with a measurement \(\boldsymbol{z}_{k}\) and a prediction \(h(\hat{\boldsymbol{x}}_{k|k-1})\) based on all features stored in our map. We can now calculate the innovation

    \[\boldsymbol{y}_{k}=\boldsymbol{z}_{k}-h(\hat{\boldsymbol{x}}_{k|k-1})\]
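    One practical detail worth noting: the angular component of the innovation should be wrapped to \([-\pi,\pi)\), otherwise a line predicted at +179° but measured at −179° produces a large spurious error. A minimal sketch follows; the wrapping step is a practical addition, not part of the equation above.

```python
import numpy as np

def innovation(z, h_pred):
    """Innovation y_k = z_k - h(x_hat), wrapping the angle difference."""
    y = z - h_pred
    y[0] = (y[0] + np.pi) % (2 * np.pi) - np.pi  # wrap angle to [-pi, pi)
    return y
```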

    5. Estimation: We now have all the ingredients to perform the perception update step of the Kalman filter:

    \[\hat{\boldsymbol{x}}_{k|k'}=\hat{\boldsymbol{x}}_{k'|k-1}+\boldsymbol{K}_{k'}\boldsymbol{y}_{k'}\]

    \[\boldsymbol{P}_{k|k'}=(I-\boldsymbol{K}_{k'}\boldsymbol{H}_{k'})\boldsymbol{P}_{k'|k-1}\]

    This provides us with an update of our position that fuses the odometry input with the information we can extract from features in the environment, weighted by their respective variances. That is, if the variance of your previous position is high (because you have no idea where you are), but the variance of your measurement is low (maybe from a GPS or a symbol on the Ratslife wall), the Kalman filter will put more emphasis on your sensor. If your sensors are poor (maybe because you cannot tell different lines/walls apart), more emphasis will be on the odometry.

    As the state vector is a 3x1 vector and the innovation a 2x1 vector, the Kalman gain must be a 3x2 matrix. This can also be seen from the covariance matrix, which must come out as a 3x3 matrix, given that the Jacobian of the observation function is a 2x3 matrix. We can now calculate the covariance of the innovation and the Kalman gain using

    \[\boldsymbol{S}_{k}=\boldsymbol{H}_{k}\boldsymbol{P}_{k|k-1}\boldsymbol{H}_{k}^{T}+\boldsymbol{R}_{k}\]

    \[\boldsymbol{K}_{k}=\boldsymbol{P}_{k|k-1}\boldsymbol{H}_{k}^{T}\boldsymbol{S}_{k}^{-1}\]
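    Putting the pieces together, one complete perception update for a single matched line might look like the sketch below, which also makes the dimensions from the previous paragraph explicit. All numbers are made up for illustration.

```python
import numpy as np

# Predicted pose and covariance from the odometry (prediction) step
x_pred = np.array([0.5, 0.2, 0.1])     # 3x1 state (x, y, theta)
P_pred = np.diag([0.01, 0.01, 0.005])  # 3x3 covariance

# One matched line: measurement z_k, its covariance R_k, and the map entry
z = np.array([0.28, 1.15])             # 2x1 (alpha, r)
R = np.diag([1e-3, 4e-4])              # 2x2
alpha_i, r_i = 0.3, 1.2                # matching map line m_i

# Measurement prediction h(x_hat) and its Jacobian H_k (2x3)
h_pred = np.array([alpha_i - x_pred[2],
                   r_i - (x_pred[0] * np.cos(alpha_i)
                          + x_pred[1] * np.sin(alpha_i))])
H = np.array([[0.0, 0.0, -1.0],
              [-np.cos(alpha_i), -np.sin(alpha_i), 0.0]])

y = z - h_pred                         # 2x1 innovation
S = H @ P_pred @ H.T + R               # 2x2 covariance of the innovation
K = P_pred @ H.T @ np.linalg.inv(S)    # 3x2 Kalman gain

x_new = x_pred + K @ y                 # 3x1 updated pose
P_new = (np.eye(3) - K @ H) @ P_pred   # 3x3 updated covariance
```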


    This page titled 9.5: Extended Kalman Filter is shared under a CC BY-NC 4.0 license and was authored, remixed, and/or curated by Nikolaus Correll via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.