ACM 116: The Kalman filter


Example
General Setup
Derivation
Numerical examples

Estimating the voltage

1D tracking

2D tracking

Example: Navigation Problem


Truck on frictionless, straight rails
Initial position X_0 = 0
Movement is buffeted by random accelerations
We measure the position every \Delta t seconds
State variables (X_k, V_k): position and velocity at time k\Delta t

X_k = X_{k-1} + V_{k-1} \Delta t + a_{k-1} \Delta t^2 / 2
V_k = V_{k-1} + a_{k-1} \Delta t

where a_{k-1} is a random acceleration
Observations
Y_k = X_k + Z_k
where Z_k is a noise term.
The goal is to estimate the position and velocity at all times.
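For concreteness, a minimal MATLAB sketch of this setup (the time step and noise levels are illustrative assumptions, not values from the slides):

% Simulate the truck dynamics and noisy position measurements
dt = 1;              % sampling interval Delta t (assumed)
N = 50;              % number of time steps
sigma_a = 0.2;       % std of the random acceleration (assumed)
sigma_z = 1.0;       % std of the measurement noise (assumed)
X = zeros(1,N); V = zeros(1,N); Y = zeros(1,N);
Y(1) = X(1) + sigma_z*randn;
for k = 2:N
  a = sigma_a*randn;                        % random acceleration a_{k-1}
  X(k) = X(k-1) + V(k-1)*dt + a*dt^2/2;     % position update
  V(k) = V(k-1) + a*dt;                     % velocity update
  Y(k) = X(k) + sigma_z*randn;              % noisy observation Y_k
end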

General Setup
Estimation of a stochastic dynamic system
Dynamics
X_k = F_{k-1} X_{k-1} + B_{k-1} u_{k-1} + W_{k-1}

X_k : state of the system at time k

u_{k-1} : control input

W_{k-1} : noise

Observations
Y_k = H_k X_k + Z_k

Y_k is observed

Z_k is noise

The noise realizations are all independent


Goal: predict the state X_k from the past data Y_0, Y_1, . . . , Y_{k-1}.

Derivation
Derivation in the simpler model where the dynamics is of the form
X_k = a_{k-1} X_{k-1} + W_{k-1}
and the observations are
Y_k = X_k + Z_k
The objective is to find, for each time k, the minimum-MSE filter based on Y_0, Y_1, . . . , Y_{k-1}:

\hat{X}_k = \sum_{j=1}^{k} h_j^{(k-1)} Y_{k-j}

To find the filter, we apply the orthogonality principle

E\Big[ \Big( X_k - \sum_{j=1}^{k} h_j^{(k-1)} Y_{k-j} \Big) Y_\ell \Big] = 0,    \ell = 0, 1, . . . , k-1.
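Written out, these are k linear equations (the normal equations) in the k coefficients h_1^{(k-1)}, . . . , h_k^{(k-1)}:

\sum_{j=1}^{k} h_j^{(k-1)} \, E(Y_{k-j} Y_\ell) = E(X_k Y_\ell),    \ell = 0, 1, . . . , k-1.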

Recursion
The beautiful thing about the Kalman filter is that one can almost deduce the
optimal filter to predict X_{k+1} from that predicting X_k:

h_{j+1}^{(k)} = (a_k - h_1^{(k)}) h_j^{(k-1)},    j = 1, . . . , k.

Given the filter h^{(k-1)}, we only need to find h_1^{(k)} to get the filter at the next
time step.

How to find h_1^{(k)}?
Observe that the next prediction is equal to

\hat{X}_{k+1} = h_1^{(k)} Y_k + \sum_{j=1}^{k} (a_k - h_1^{(k)}) h_j^{(k-1)} Y_{k-j}
             = a_k \hat{X}_k + h_1^{(k)} (Y_k - \hat{X}_k)

Interpretation

\hat{X}_{k+1} = a_k \hat{X}_k + h_1^{(k)} I_k

a_k \hat{X}_k is the prediction based on the estimate at time k

h_1^{(k)} I_k is a corrective term which is available since we now see Y_k

h_1^{(k)} is called the gain

I_k = Y_k - \hat{X}_k is called the innovation

Error of Prediction
To find h_1^{(k)}, we look at the error of prediction

\epsilon_k = X_k - \hat{X}_k

We have the recursion

\epsilon_{k+1} = (a_k - h_1^{(k)}) \epsilon_k + W_k - h_1^{(k)} Z_k

\epsilon_0 = Z_0
E(\epsilon_k) = 0
E(\epsilon_{k+1}^2) = [a_k - h_1^{(k)}]^2 E(\epsilon_k^2) + E(W_k^2) + [h_1^{(k)}]^2 E(Z_k^2)

To minimize the MSE E(\epsilon_{k+1}^2), we adjust h_1^{(k)} so that

\partial_{h_1^{(k)}} E(\epsilon_{k+1}^2) = 0 = -2 (a_k - h_1^{(k)}) E(\epsilon_k^2) + 2 h_1^{(k)} E(Z_k^2)

which gives

h_1^{(k)} = a_k E(\epsilon_k^2) / ( E(\epsilon_k^2) + E(Z_k^2) )

Note that this gives the recurrence relation

E(\epsilon_{k+1}^2) = a_k (a_k - h_1^{(k)}) E(\epsilon_k^2) + E(W_k^2)

The Kalman Filter Algorithm


Initialization: \hat{X}_0 = 0,  E(\epsilon_0^2) = E(Z_0^2)

Loop: for k = 0, 1, . . .

    h_1^{(k)} = a_k E(\epsilon_k^2) / ( E(\epsilon_k^2) + E(Z_k^2) )

    \hat{X}_{k+1} = a_k \hat{X}_k + h_1^{(k)} (Y_k - \hat{X}_k)

    E(\epsilon_{k+1}^2) = a_k (a_k - h_1^{(k)}) E(\epsilon_k^2) + E(W_k^2)
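A minimal MATLAB sketch of this scalar recursion on simulated data (the constants a, varW, varZ are illustrative assumptions; MATLAB indices start at 1, so entry k corresponds to time k-1):

% Scalar Kalman filter implementing the loop above
a = 0.9; varW = 0.1; varZ = 1;        % a_k, E(W_k^2), E(Z_k^2): assumed constants
N = 50;
X = zeros(1,N);
for k = 2:N
  X(k) = a*X(k-1) + sqrt(varW)*randn; % state: X_k = a_{k-1} X_{k-1} + W_{k-1}
end
Y = X + sqrt(varZ)*randn(1,N);        % observations: Y_k = X_k + Z_k
Xhat = zeros(1,N); P = zeros(1,N);    % Xhat(k) is the estimate, P(k) = E(eps_k^2)
P(1) = varZ;                          % initialization: Xhat_0 = 0, E(eps_0^2) = E(Z_0^2)
for k = 1:N-1
  h = a*P(k)/(P(k) + varZ);                    % gain h_1^{(k)}
  Xhat(k+1) = a*Xhat(k) + h*(Y(k) - Xhat(k));  % predict and correct with the innovation
  P(k+1) = a*(a - h)*P(k) + varW;              % error-variance recursion
end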

Benefits
Requires no knowledge about the structure of W_k and Z_k (only variances)
Easy implementation
Many applications

Inertial guidance system

Autopilot

Satellite navigation system

Many others


General Formulation
X_k = F_{k-1} X_{k-1} + W_{k-1}
Y_k = H_k X_k + Z_k
The covariance of W_k is Q_k and that of Z_k is R_k.
Two variables:

\hat{X}_{k|k} : estimate of the state at time k based upon Y_0, . . . , Y_k

E_{k|k} : error covariance matrix, E_{k|k} = Cov(X_k - \hat{X}_{k|k})

Prediction

\hat{X}_{k+1|k} = F_k \hat{X}_{k|k}
E_{k+1|k} = F_k E_{k|k} F_k^T + Q_k

Update

I_{k+1} = Y_{k+1} - H_{k+1} \hat{X}_{k+1|k}                    Innovation
S_{k+1} = H_{k+1} E_{k+1|k} H_{k+1}^T + R_{k+1}                Innovation covariance
K_{k+1} = E_{k+1|k} H_{k+1}^T S_{k+1}^{-1}                     Kalman gain
\hat{X}_{k+1|k+1} = \hat{X}_{k+1|k} + K_{k+1} I_{k+1}          Updated state estimate
E_{k+1|k+1} = (Id - K_{k+1} H_{k+1}) E_{k+1|k}                 Updated error covariance
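The predict/update cycle translates directly into code. A minimal MATLAB sketch of one filtering pass under the model above (kf_pass is an illustrative helper name; F, H, Q, R and the initial conditions are user-supplied):

% One full Kalman-filter pass in the matrix formulation above
% y: os-by-T data; F, H, Q, R, x0, E0 as in the slides (user-supplied)
function [xhat, E] = kf_pass(y, F, H, Q, R, x0, E0)
  T = size(y, 2); n = numel(x0);
  xhat = zeros(n, T); x = x0; E = E0;
  for t = 1:T
    I = y(:,t) - H*x;              % innovation
    S = H*E*H' + R;                % innovation covariance
    K = E*H'/S;                    % Kalman gain, K = E*H'*inv(S)
    x = x + K*I;                   % updated state estimate
    E = (eye(n) - K*H)*E;          % updated error covariance
    xhat(:,t) = x;                 % store the filtered estimate
    x = F*x;                       % prediction for the next step
    E = F*E*F' + Q;                % predicted error covariance
  end
end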


Estimating Constant Voltage


We wish to estimate some voltage which is almost constant except for some
small random fluctuations. Our measuring device is imperfect (e.g. because of
a poor A/D conversion). The process is governed by

X_k = X_0 + W_k,    k = 1, 2, . . .

with X_0 = 0.5 V, and the measurements are

Z_k = X_k + V_k,    k = 1, 2, . . .

where W_k, V_k are uncorrelated Gaussian white noise processes, with

R := Var(V_k) = 0.01,    Var(W_k) = 10^{-5}.
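The figures that follow vary the measurement variance Rest assumed by the filter while the data are generated with the true R. A minimal sketch of such an experiment, reusing the scalar recursion above with a_k = 1 (all variable names are illustrative):

% Effect of the assumed measurement variance Rest (true R = 0.01)
R = 0.01; Q = 1e-5; N = 50;
X = 0.5 + sqrt(Q)*randn(1,N);            % X_k = X_0 + W_k, small fluctuations around 0.5 V
Z = X + sqrt(R)*randn(1,N);              % Z_k = X_k + V_k, noisy measurements
Rest = 0.01;                             % try 0.0001 (optimistic) or 1 (pessimistic)
Xhat = zeros(1,N); P = Rest;             % the filter believes Var(V_k) = Rest
for k = 1:N-1
  h = P/(P + Rest);                              % gain with a_k = 1
  Xhat(k+1) = Xhat(k) + h*(Z(k) - Xhat(k));      % predict and correct
  P = (1 - h)*P + Q;                             % error-variance recursion
end
plot(1:N, Xhat, 1:N, X, 1:N, Z, '.');
legend('estimate', 'exact process', 'measurements');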


[Figure: accurate knowledge of the measurement variance (Rest = R = 0.01); estimate, exact process, and measurements of the voltage vs. iteration.]

[Figure: optimistic estimate of the measurement variance (Rest = 0.0001); estimate, exact process, and measurements of the voltage vs. iteration.]

[Figure: pessimistic estimate of the measurement variance (Rest = 1); estimate, exact process, and measurements of the voltage vs. iteration.]

1D Tracking
Estimation of the position of a vehicle.
Let X be the state variable (position and speed), and A the transition matrix

A = \begin{pmatrix} 1 & \Delta t \\ 0 & 1 \end{pmatrix}.

The process is governed by

X_{n+1} = A X_n + W_n

where W_n is a zero-mean Gaussian white noise process. The observation is

Y_n = C X_n + Z_n

where the matrix C only picks up the position and Z_n is another zero-mean
Gaussian white noise process independent of W_n.
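For illustration, a minimal MATLAB sketch that builds A and C, simulates the model, and reuses the kf_pass sketch from the general formulation above (the time step and noise levels are assumptions, not values from the slides):

% 1-D tracking: build the model matrices and run the filter sketch from above
dt = 1;                          % sampling interval (assumed)
A = [1 dt; 0 1];                 % transition matrix (position, speed)
C = [1 0];                       % observation picks up the position only
Q = 0.1*eye(2);                  % process noise covariance (assumed)
R = 4;                           % measurement noise variance (assumed)
N = 25;
X = zeros(2,N); Y = zeros(1,N);
for n = 2:N
  X(:,n) = A*X(:,n-1) + sqrtm(Q)*randn(2,1);   % X_{n+1} = A X_n + W_n
  Y(n) = C*X(:,n) + sqrt(R)*randn;             % Y_n = C X_n + Z_n
end
xfilt = kf_pass(Y, A, C, Q, R, zeros(2,1), eye(2));   % filter sketch defined earlier
plot(1:N, X(1,:), 1:N, Y, 'x', 1:N, xfilt(1,:));
legend('exact position', 'measured position', 'estimated position');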

[Figure: estimation of a moving vehicle in 1-D; exact, measured, and estimated position vs. time.]

2D Example
General setup
X(t+1) = F X(t) + W(t),    W ~ N(0, Q)
Y(t) = H X(t) + V(t),      V ~ N(0, R)

Moving particle at constant velocity subject to random perturbations in its
trajectory. The new position (x_1, x_2) is the old position plus the velocity
(dx_1, dx_2) plus noise w.

\begin{pmatrix} x_1(t) \\ x_2(t) \\ dx_1(t) \\ dx_2(t) \end{pmatrix}
=
\begin{pmatrix} 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} x_1(t-1) \\ x_2(t-1) \\ dx_1(t-1) \\ dx_2(t-1) \end{pmatrix}
+
\begin{pmatrix} w_1(t-1) \\ w_2(t-1) \\ dw_1(t-1) \\ dw_2(t-1) \end{pmatrix}

Observations
We only observe the position of the particle.

\begin{pmatrix} y_1(t) \\ y_2(t) \end{pmatrix}
=
\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{pmatrix}
\begin{pmatrix} x_1(t) \\ x_2(t) \\ dx_1(t) \\ dx_2(t) \end{pmatrix}
+
\begin{pmatrix} v_1(t) \\ v_2(t) \end{pmatrix}
Source: http://www.cs.ubc.ca/murphyk/Software/Kalman/kalman.html


Implementation
% Make a point move in the 2D plane
% State = (x y xdot ydot). We only observe (x y).

% This code was used to generate Figure 17.9 of


% "Artificial Intelligence: a Modern Approach",
% Russell and Norvig, 2nd edition, Prentice Hall, in preparation.
% X(t+1) = F X(t) + noise(Q)
% Y(t) = H X(t) + noise(R)
ss = 4; % state size
os = 2; % observation size
F = [1 0 1 0; 0 1 0 1; 0 0 1 0; 0 0 0 1];
H = [1 0 0 0; 0 1 0 0];
Q = 1*eye(ss);
R = 10*eye(os);
initx = [10 10 1 0]';
initV = 10*eye(ss);


seed = 8; rand('state', seed);
randn('state', seed);
T = 50;
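% sample_lds (from the toolbox referenced above) simulates T time steps of the
% model, returning the states x and the observations y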
[x,y] = sample_lds(F,H,Q,R,initx,T);


Apply Kalman Filter


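% kalman_filter (same toolbox) returns the filtered state estimates xfilt and their covariances Vfilt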
[xfilt,Vfilt] = kalman_filter(y,F,H,Q,R,initx,initV);
dfilt = x([1 2],:) - xfilt([1 2],:);
mse_filt = sqrt(sum(sum(dfilt.^2)))
figure;
plot(x(1,:), x(2,:), 'ks-');
hold on
plot(y(1,:), y(2,:), 'g*');
plot(xfilt(1,:), xfilt(2,:), 'rx:');
hold off
legend('true', 'observed', 'filtered', 0)
xlabel('X1'), ylabel('X2')

[Figures: true, observed, and filtered particle trajectories in the (X1, X2) plane.]

Apply Kalman Smoother


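% kalman_smoother (same toolbox) estimates each state from the entire observation sequence (fixed-interval smoothing)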
[xsmooth, Vsmooth] = kalman_smoother(y,F,H,Q,R,initx,initV);
dsmooth = x([1 2],:) - xsmooth([1 2],:);
mse_smooth = sqrt(sum(sum(dsmooth.^2)))
figure;
hold on
plot(x(1,:), x(2,:), 'ks-');
plot(y(1,:), y(2,:), 'g*');
plot(xsmooth(1,:), xsmooth(2,:), 'rx:');
hold off
legend('true', 'observed', 'smoothed', 0)
xlabel('X1'), ylabel('X2')

[Figures: true, observed, and smoothed particle trajectories in the (X1, X2) plane.]