Noisy data
Some Applied Math
Introduction
[Figure: the predict-correct cycle. The previous estimate x̂_{k-1} is projected forward to give the prediction x̂_k^-, which is then corrected with the new measurement to produce x̂_k.]
State Model

    x_k = A x_{k-1} + w_{k-1}        or        x_{k+1} = A_k x_k + w_k

where A is the state transition matrix and w is the process noise.
Measurement Model

    z_k = H x_k + v_k

where H is the measurement matrix mapping the state x_k into measurement space, and v_k is the measurement noise.
Preparation

    A = | 1  0 |        state transition
        | 0  1 |

    Q = E[w w^T] = | Q_xx   0   |        process noise covariance
                   |  0    Q_yy |

    R = E[v v^T] = | R_xx   0   |        measurement noise covariance
                   |  0    R_yy |

Initialization

    x̂_0 = H z_0,        P_0 = 0
Predict

    x̂_k^- = A x̂_{k-1}                    (state transition)
    P_k^- = A P_{k-1} A^T + Q            (uncertainty projection)
Correct

    K    = P_k^- H^T (H P_k^- H^T + R)^{-1}    (gain, mapping into measurement space)
    x̂_k  = x̂_k^- + K (z_k - H x̂_k^-)           (actual minus predicted measurement)
    P_k  = (I - K H) P_k^-
Summary

    x̂_k^- = A x̂_{k-1}
    P_k^- = A P_{k-1} A^T + Q
    K     = P_k^- H^T (H P_k^- H^T + R)^{-1}
    x̂_k   = x̂_k^- + K (z_k - H x̂_k^-)
    P_k   = (I - K H) P_k^-
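The five summary equations map directly onto code. A minimal sketch in Python with NumPy, using an assumed 2-state constant-position model (A = H = I) and illustrative noise covariances, none of which come from the text:

```python
import numpy as np

# A sketch of the five summary equations for an assumed 2-state
# constant-position model (A = H = I) with illustrative noise values.
A = np.eye(2)                # state transition matrix
H = np.eye(2)                # measurement matrix
Q = np.diag([1e-4, 1e-4])    # process noise covariance (assumed)
R = np.diag([0.1, 0.1])      # measurement noise covariance (assumed)

x = np.zeros(2)              # initial state estimate
P = np.eye(2)                # initial error covariance

def kalman_step(x, P, z):
    # Predict
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Correct
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

# Track a fixed true state (3, -2) from noisy measurements.
rng = np.random.default_rng(0)
true_state = np.array([3.0, -2.0])
for _ in range(50):
    z = true_state + rng.normal(0.0, R[0, 0] ** 0.5, size=2)
    x, P = kalman_step(x, P, z)
```

With Q small relative to R, the filter behaves like a weighted running average of the measurements, and the covariance P shrinks as evidence accumulates.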
The System

    x_{k+1} = A_k x_k + w_k
    z_k     = H_k x_k + v_k

[Block diagram: the white process noise w_k is added to A_k x_k to form x_{k+1}; a unit delay z^{-1} produces x_k, which is fed back through the state transition matrix A_k and mapped by H_k to the output, where the measurement noise v_k is added to give the measurement z_k.]
The predictor produces the prior estimate of x_k. It will turn out to be a system that looks much like the actual system.
The Predictor

[Block diagram: the predictor mirrors the system — a unit delay z^{-1} with feedback through A_k generates the state estimate, and the measurement z_k is injected through the gain A_k K.]
The Filter

[Block diagram: the same structure as the predictor, but the estimate is taken after the measurement z_k has been incorporated.]
The Kalman gain is the optimal weighting matrix for combining new
sensor data with a prior estimate to obtain a new estimate.
[Diagram: input u(t) drives a linear system with output y(t), described by]

    ẋ = F x + G u
    y = B x
[Diagram: u(t) enters the forward path

    ( b_m s^m + b_{m-1} s^{m-1} + ... + b_0 ) / ( s^n + a_{n-1} s^{n-1} + ... + a_0 )

whose output is y(t); the denominator coefficients form the feedback path.]
[Block diagram: a chain of four integrators 1/s produces the states x4, x3, x2, x1; the feedback path applies -a3, -a2, -a1, -a0 and the forward path weights b3, b2, b1, b0 into Y(s).]

    G(s) = ( b3 s^3 + b2 s^2 + b1 s + b0 ) / ( s^4 + a3 s^3 + a2 s^2 + a1 s + a0 )
         = ( b3 s^-1 + b2 s^-2 + b1 s^-3 + b0 s^-4 ) / ( 1 + a3 s^-1 + a2 s^-2 + a1 s^-3 + a0 s^-4 )

State equations:

    ẋ1 = x2 ;  ẋ2 = x3 ;  ẋ3 = x4
    ẋ4 = -a0 x1 - a1 x2 - a2 x3 - a3 x4 + u

    y(t) = b0 x1 + b1 x2 + b2 x3 + b3 x4

In matrix form:

    d/dt [ x1 ]   [  0    1    0    0  ] [ x1 ]   [ 0 ]
         [ x2 ] = [  0    0    1    0  ] [ x2 ] + [ 0 ] u(t)
         [ x3 ]   [  0    0    0    1  ] [ x3 ]   [ 0 ]
         [ x4 ]   [ -a0  -a1  -a2  -a3 ] [ x4 ]   [ 1 ]

    y(t) = [ b0  b1  b2  b3 ] [ x1  x2  x3  x4 ]^T
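The canonical realization above can be sketched in code. The coefficient values below are made up for illustration; the structure (a superdiagonal of ones, the negated denominator coefficients in the last row) is the point:

```python
import numpy as np

# A sketch of the controllable canonical realization above for
# G(s) = (b3 s^3 + b2 s^2 + b1 s + b0) / (s^4 + a3 s^3 + a2 s^2 + a1 s + a0).
# The coefficient values are made up for illustration.
a = [2.0, 3.0, 4.0, 5.0]          # a0 .. a3 (assumed)
b = [1.0, 0.5, 0.2, 0.1]          # b0 .. b3 (assumed)

n = len(a)
A = np.zeros((n, n))
A[:-1, 1:] = np.eye(n - 1)        # superdiagonal of ones: x1' = x2, etc.
A[-1, :] = [-c for c in a]        # last row: -a0 ... -a3
B = np.zeros((n, 1)); B[-1, 0] = 1.0
C = np.array([b])                 # y = b0 x1 + b1 x2 + b2 x3 + b3 x4

# The eigenvalues of A are the poles: its characteristic polynomial
# recovers the denominator coefficients.
char = np.poly(A)                 # approximately [1, a3, a2, a1, a0]

# DC gain G(0) = C (-A)^{-1} B, which equals b0 / a0 for this form.
dc_gain = (C @ np.linalg.solve(-A, B)).item()
```

Checking the characteristic polynomial and the DC gain against the transfer-function coefficients is a quick way to confirm the realization was wired correctly.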
[Diagram: u(t) enters the forward path

    ( β_m s^m + β_{m-1} s^{m-1} + ... + β_0 ) / ( s^n + α_{n-1} s^{n-1} + ... + α_0 )

whose output is y(t); the α coefficients form the feedback path.]

The fourth-order case, realized with the same integrator chain:

    G(s) = ( β3 s^3 + β2 s^2 + β1 s + β0 ) / ( s^4 + α3 s^3 + α2 s^2 + α1 s + α0 )
         = ( β3 s^-1 + β2 s^-2 + β1 s^-3 + β0 s^-4 ) / ( 1 + α3 s^-1 + α2 s^-2 + α1 s^-3 + α0 s^-4 )

State equations:

    ẋ1 = x2 ;  ẋ2 = x3 ;  ẋ3 = x4
    ẋ4 = -α0 x1 - α1 x2 - α2 x3 - α3 x4 + u

    y(t) = β0 x1 + β1 x2 + β2 x3 + β3 x4

In discrete-time matrix form, driven by the noise w:

    [ x1(k+1) ]   [  0    1    0    0  ] [ x1(k) ]   [ 0 ]
    [ x2(k+1) ] = [  0    0    1    0  ] [ x2(k) ] + [ 0 ] w(k)
    [ x3(k+1) ]   [  0    0    0    1  ] [ x3(k) ]   [ 0 ]
    [ x4(k+1) ]   [ -α0  -α1  -α2  -α3 ] [ x4(k) ]   [ 1 ]

    y(k) = [ β0  β1  β2  β3 ] [ x1(k)  x2(k)  x3(k)  x4(k) ]^T
    X_{k+1} = Φ_k X_k + W_k
    Y_k     = B_k X_k

    X_k — state vector at time k
    Φ_k — state transition matrix taking X_k to X_{k+1}
    B_k — output equation gain matrix
    E[X_k] = 0
The prior estimate is X̂_k^- = E[X_k], so the error in the prediction X̂_k^- is

    e_k^- = X_k - X̂_k^-        (X_k true; X̂_k^- predicted)

with mean

    E[e_k^-] = E[X_k - X̂_k^-]
The prior error covariance is

    P_k^- = E[ e_k^- (e_k^-)^T ] = E[ (X_k - X̂_k^-)(X_k - X̂_k^-)^T ]

and the updated estimate can be written either as

    X̂_k = (I - K_k H_k) X̂_k^- + K_k Z_k

or, equivalently,

    X̂_k = X̂_k^- + K_k (Z_k - H_k X̂_k^-)
    x̂(+) = x̂(-) + K_gain [ z - H x̂(-) ]

The residual z - H x̂(-) compares the actual measurement z, which is based on the (noisy) measurement, with H x̂(-), the measurement predicted from the prior estimate.
There are two basic processes that are modeled by a Kalman filter. The first process
is a model describing how the error state vector changes in time. This model is the
system dynamics model. The second model defines the relationship between the
error state vector and any measurements processed by the filter and is the
measurement model.
[Diagram: the filter loops between the Time Update (Predict) and the Measurement Update (Correct).]
    p(x) = 1 / sqrt( (2π)^n det P ) · exp( -(1/2) x^T P^{-1} x )
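As a quick sanity check, the density above can be evaluated numerically; for n = 1 and P = 1 it must reduce to the standard normal density at the origin, 1/sqrt(2π). A sketch with NumPy:

```python
import numpy as np

# A sketch of the multivariate Gaussian density above for a zero-mean x:
# p(x) = exp(-(1/2) x^T P^{-1} x) / sqrt((2*pi)^n det P)
def gauss_pdf(x, P):
    n = len(x)
    norm = np.sqrt((2 * np.pi) ** n * np.linalg.det(P))
    return np.exp(-0.5 * x @ np.linalg.inv(P) @ x) / norm

# Sanity checks: n = 1, P = 1 gives 1/sqrt(2*pi); n = 2, P = I gives 1/(2*pi).
p1 = gauss_pdf(np.array([0.0]), np.eye(1))
p2 = gauss_pdf(np.zeros(2), np.eye(2))
```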
Likelihood Functions

    ℓ(x, μ, Y) = exp( -(1/2) [x - μ]^T Y [x - μ] )
The purpose of a Kalman filter is to optimally estimate the values of the variables describing the state of a system from a multidimensional signal contaminated by noise.

[Diagram: multiple noise inputs enter the system; the Kalman filter converts the multiple noisy measurements into multiple state-variable estimates.]
The multiple measurements (at each time point) are also vectors
that a recursive algorithm processes sequentially in time. This
means that the algorithm iteratively repeats itself for each new
measurement vector, using only values stored from the previous
cycle. This procedure distinguishes itself from batch-processing
algorithms, which must save all past measurements.
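The recursive-versus-batch distinction can be seen in miniature with a running mean, which keeps only the previous estimate and a counter rather than every past sample:

```python
# The recursive-versus-batch distinction in miniature: a running mean that
# stores only the previous estimate and a counter, not the past samples.
def recursive_mean(samples):
    mean, k = 0.0, 0
    for z in samples:
        k += 1
        mean += (z - mean) / k   # update uses only the stored mean and k
    return mean

data = [2.0, 4.0, 6.0, 8.0]
running = recursive_mean(data)
batch = sum(data) / len(data)    # the batch version needs all samples at once
```

Both produce the same answer; the recursive form is what allows the Kalman filter to run cycle after cycle with constant memory.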
Finally, to prepare for the next measurement vector, the filter must
project the updated state estimate and its associated covariance to
the next measurement time.
[Diagram: the predicted initial state estimate and its covariance seed the first measurement update.]
Mathematical Definitions
The variance and the closely related standard deviation are measures of how
spread out a distribution is; they quantify estimation quality.
The covariance is a statistical measure of correlation of
the fluctuations of two different quantities. Intuitively,
covariance is the measure of how much two variables
vary together.
Least squares is a mathematical optimization technique
which, when given a series of measured data, attempts
to find a function which closely approximates the data (a
"best fit"). It attempts to minimize the sum of the squares
of the ordinate differences (called residuals) between
points generated by the function and corresponding
points in the data. It is sometimes called least mean
squares.
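A minimal least-squares fit of the kind described above, using NumPy's `lstsq` on synthetic data that lies exactly on the line y = 2t + 1, so the fit should recover those coefficients:

```python
import numpy as np

# A minimal least-squares fit as described above: find the line a*t + b
# minimizing the sum of squared residuals (the ordinate differences).
t = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 5.0, 7.0])               # synthetic data on y = 2t + 1

design = np.column_stack([t, np.ones_like(t)])   # columns: slope, intercept
(slope, intercept), *_ = np.linalg.lstsq(design, y, rcond=None)
```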
    x_{k+1} = x_k                [1]

    z_k = x_k + v_k              [2]
How It Works

The Kalman gain is the optimal weighting matrix for combining new
sensor data with a prior estimate to obtain a new estimate.
u(k) is called white noise, which means it is uncorrelated with all other
random variables and, most especially, uncorrelated with its own past values.
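That defining property, no correlation with past values, is easy to check empirically. A sketch with NumPy, using a synthetic Gaussian white-noise sequence:

```python
import numpy as np

# White noise is uncorrelated with its own past: the sample correlation
# between u(k) and u(k-1) should be near zero for a synthetic sequence.
rng = np.random.default_rng(42)
u = rng.normal(0.0, 1.0, size=100_000)
lag1_corr = np.corrcoef(u[:-1], u[1:])[0, 1]
```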
In later lessons we will extend the Kalman filter to cases where the
dynamic equation is not linear and where the driving noise is not white. But
for this lesson, the dynamic equation is linear and w is white noise
with zero mean.
Now suppose that at time t0 someone came along and told you he
thought x(t0) = 1000 but that he might be in error and he thinks the
variance of his error is equal to P. Suppose that you had a great
deal of confidence in this person and were, therefore, convinced
that this was the best possible estimate of x(t0). This is the initial
estimate of x. It is sometimes called the a priori estimate.
Substitute the above equations in for x(t1) and new xe and you get
ye = M*new xe
for our numerical example we would have ye = 900
Dr. Kalman says the new best estimate of x(t1) is given by
Newer xe = new xe + K*(y(1) - M*new xe)
= new xe + K*(y(1) - ye) (Eq. 3)
where K is a number called the Kalman gain.
Notice that y(1) - ye is just our error in estimating y(1). For our
example, this error equals +300. Part of it is due to the noise w, and
part to our error in estimating x.
If all the error were due to our error in estimating x, then we would
be convinced that new xe was low by 300. Setting K = 1 would
correct our estimate by the full 300. But since some of this error is
due to w, we will make a correction of less than 300 to come up with
newer xe. We will set K to some number less than one.
These are the five equations of the Kalman filter. At time t2, we start
again using newer xe to be the value of xe to insert in equation 1
and newer P as the value of P in equation 2.
Then we calculate K from equation 4 and use that along with the
new measurement, y(2), in equation 3 to get another estimate of x
and we use equation 5 to get the corresponding P. And this goes on
computer cycle after computer cycle.
In the multi-dimensional Kalman filter, x is a column matrix with
many components. For example if we were determining the orbit of
a satellite, x would have 3 components corresponding to the
position of the satellite and 3 more corresponding to the velocity
plus other components corresponding to other random variables.
Equations 1 through 5 would become matrix equations, and the
simplicity and intuitive logic of the Kalman filter become obscured.
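The scalar lesson above can be sketched numerically. The values below (M = 1, the initial variance P, the measurement noise variance R) are illustrative assumptions, not taken from the text:

```python
import random

# A numerical sketch of the scalar lesson: constant state x = 1000, a priori
# estimate xe = 1000 with variance P, measurements y = M*x + w. The noise
# variances here are illustrative assumptions.
random.seed(1)
x_true = 1000.0
M = 1.0
P = 100.0      # variance of the initial estimate (assumed)
R = 900.0      # measurement noise variance (assumed)
xe = 1000.0    # a priori estimate

for _ in range(20):
    y = M * x_true + random.gauss(0.0, R ** 0.5)
    K = P * M / (M * P * M + R)    # Eq. 4: the gain is always less than one
    xe = xe + K * (y - M * xe)     # Eq. 3: correct by a fraction of the residual
    P = (1.0 - K * M) * P          # Eq. 5: the variance shrinks each cycle
```

Because some of each residual is due to the noise w, the gain K stays below one, and each cycle moves the estimate only part of the way toward the new measurement.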
[System diagram: a DGPS reference station transmits corrections through a
radio modem link to the vehicle's DGPS receiver; the Kalman filter fuses the
DGPS output with the wheel speed sensor and magnetic compass measurements.]
    [ v_E ]
    [ v_N ] = f( h, δ, S_wheel, S_wheelspeed ) + v

    v_E = (1 + S_wheelspeed) S_wheel sin( h - δ ) + v
    v_N = (1 + S_wheelspeed) S_wheel cos( h - δ ) + v

    v_E — east velocity;  v_N — north velocity;  v — sensor noise
The Kalman filter provides a simple algorithm that lends itself easily to
integrated systems and requires only adequate statistical models of
the state variables and associated noises for its optimal
performance.
[Diagram: the inertial navigator's output, corrupted by measurement error, is
combined with external measurements to produce the best total estimates.]
Case Study