
Kalman filter

[Figure: the prediction/update cycle of the Kalman filter. Prediction step: prior knowledge of the state, based on e.g. a physical model, is propagated to the next timestep. Update step: the prediction is compared to measurements, producing the output estimate of the state. Caption: The Kalman filter keeps track of the estimated state of the system and the variance or uncertainty of the estimate. The estimate is updated using a state transition model and measurements. x̂_{k|k-1} denotes the estimate of the system's state at time step k before the k-th measurement y_k has been taken into account; P_{k|k-1} is the corresponding uncertainty.]

Kalman filtering, also known as linear quadratic estimation (LQE), is an algorithm that uses a series of measurements observed over time, containing statistical noise and other inaccuracies, and produces estimates of unknown variables that tend to be more accurate than those based on a single measurement alone, by using Bayesian inference and estimating a joint probability distribution over the variables for each timeframe. The filter is named after Rudolf E. Kálmán, one of the primary developers of its theory.

The Kalman filter has numerous applications in technology. A common application is for guidance, navigation, and control of vehicles, particularly aircraft and spacecraft.[1] Furthermore, the Kalman filter is a widely applied concept in time series analysis used in fields such as signal processing and econometrics. Kalman filters also are one of the main topics in the field of robotic motion planning and control, and they are sometimes included in trajectory optimization. The Kalman filter also works for modeling the central nervous system's control of movement. Due to the time delay between issuing motor commands and receiving sensory feedback, use of the Kalman filter provides a realistic model for making estimates of the current state of the motor system and issuing updated commands.[2]

The algorithm works in a two-step process. In the prediction step, the Kalman filter produces estimates of the current state variables, along with their uncertainties. Once the outcome of the next measurement (necessarily corrupted with some amount of error, including random noise) is observed, these estimates are updated using a weighted average, with more weight being given to estimates with higher certainty. The algorithm is recursive: it can run in real time, using only the present input measurements and the previously calculated state and its uncertainty matrix; no additional past information is required.

The Kalman filter does not make any assumption that the errors are Gaussian.[3] However, the filter yields the exact conditional probability estimate in the special case that all errors are Gaussian-distributed.

Extensions and generalizations to the method have also been developed, such as the extended Kalman filter and the unscented Kalman filter, which work on nonlinear systems. The underlying model is a Bayesian model similar to a hidden Markov model, but one in which the state space of the latent variables is continuous and all latent and observed variables have Gaussian distributions.

1 Naming and historical development

The filter is named after Hungarian émigré Rudolf E. Kálmán, although Thorvald Nicolai Thiele[4][5] and Peter Swerling developed a similar algorithm earlier. Richard S. Bucy of the University of Southern California contributed to the theory, leading to it often being called the Kalman–Bucy filter. Stanley F. Schmidt is generally credited with developing the first implementation of a Kalman filter. He realized that the filter could be divided into two distinct parts, with one part for time periods between sensor outputs and another part for incorporating measurements.[6] It was during a visit by Kálmán to the NASA Ames Research Center that Schmidt saw the applicability of Kálmán's ideas to the nonlinear problem of trajectory estimation for the Apollo program, leading to its incorporation in the Apollo navigation computer. This Kalman filter was first described and partially developed in technical papers by Swerling (1958), Kalman (1960) and Kalman and Bucy (1961).

Kalman filters have been vital in the implementation of the navigation systems of U.S. Navy nuclear ballistic missile submarines, and in the guidance and navigation systems of cruise missiles such as the U.S. Navy's Tomahawk missile and the U.S. Air Force's Air Launched Cruise Missile. They are also used in the guidance and navigation systems of reusable launch vehicles and the attitude control and navigation systems of spacecraft which dock at the International Space Station.[7]

This digital filter is sometimes called the Stratonovich–Kalman–Bucy filter because it is a special case of a more general, non-linear filter developed somewhat earlier by the Soviet mathematician Ruslan Stratonovich.[8][9][10][11] In fact, some of the special case linear filter's equations appeared in papers by Stratonovich that were published before summer 1960, when Kalman met with Stratonovich during a conference in Moscow.
2 Overview of the calculation

The Kalman filter uses a system's dynamics model (e.g., physical laws of motion), known control inputs to that system, and multiple sequential measurements (such as from sensors) to form an estimate of the system's varying quantities (its state) that is better than the estimate obtained by using only one measurement alone. As such, it is a common sensor fusion and data fusion algorithm.

Noisy sensor data, approximations in the equations that describe the system evolution, and external factors that are not accounted for all place limits on how well it is possible to determine the system's state. The Kalman filter deals effectively with the uncertainty due to noisy sensor data and, to some extent, with random external factors. The Kalman filter produces an estimate of the state of the system as an average of the system's predicted state and of the new measurement using a weighted average. The purpose of the weights is that values with better (i.e., smaller) estimated uncertainty are "trusted" more. The weights are calculated from the covariance, a measure of the estimated uncertainty of the prediction of the system's state. The result of the weighted average is a new state estimate that lies between the predicted and measured state, and has a better estimated uncertainty than either alone. This process is repeated at every time step, with the new estimate and its covariance informing the prediction used in the following iteration. This means that the Kalman filter works recursively and requires only the last "best guess", rather than the entire history, of a system's state to calculate a new state.

The relative certainty of the measurements and current state estimate is an important consideration, and it is common to discuss the response of the filter in terms of the Kalman filter's gain. The Kalman gain is the relative weight given to the measurements and current state estimate, and can be "tuned" to achieve particular performance. With a high gain, the filter places more weight on the most recent measurements, and thus follows them more responsively. With a low gain, the filter follows the model predictions more closely. At the extremes, a high gain close to one will result in a more jumpy estimated trajectory, while a low gain close to zero will smooth out noise but decrease the responsiveness.

When performing the actual calculations for the filter (as discussed below), the state estimate and covariances are coded into matrices to handle the multiple dimensions involved in a single set of calculations. This allows for a representation of linear relationships between different state variables (such as position, velocity, and acceleration) in any of the transition models or covariances.

3 Example application

As an example application, consider the problem of determining the precise location of a truck. The truck can be equipped with a GPS unit that provides an estimate of the position within a few meters. The GPS estimate is likely to be noisy; readings 'jump around' rapidly, though remaining within a few meters of the real position. In addition, since the truck is expected to follow the laws of physics, its position can also be estimated by integrating its velocity over time, determined by keeping track of wheel revolutions and the angle of the steering wheel. This is a technique known as dead reckoning. Typically, dead reckoning will provide a very smooth estimate of the truck's position, but it will drift over time as small errors accumulate.

In this example, the Kalman filter can be thought of as operating in two distinct phases: predict and update. In the prediction phase, the truck's old position will be modified according to the physical laws of motion (the dynamic or "state transition" model). Not only will a new position estimate be calculated, but a new covariance will be calculated as well. Perhaps the covariance is proportional to the speed of the truck because we are more uncertain about the accuracy of the dead reckoning position estimate at high speeds but very certain about the position estimate when moving slowly. Next, in the update phase, a measurement of the truck's position is taken from the GPS unit. Along with this measurement comes some amount of uncertainty, and its covariance relative to that of the prediction from the previous phase determines how much the new measurement will affect the updated prediction. Ideally, as the dead reckoning estimates tend to drift away from the real position, the GPS measurement should pull the position estimate back towards the real position but not disturb it to the point of becoming rapidly jumping and noisy.
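To make the predict/update cycle concrete, the following is a minimal numerical sketch of such a loop for a one-dimensional truck tracked by a noisy position sensor. All matrix values and noise levels here are illustrative assumptions, not values from the article; the equations used are derived in the sections below.

```python
import numpy as np

# Minimal sketch of the two-phase predict/update cycle described above,
# for a hypothetical 1-D truck tracked by a noisy position sensor.
dt = 1.0                                # timestep (s), assumed
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition: position, velocity
H = np.array([[1.0, 0.0]])              # we observe position only
Q = 0.01 * np.eye(2)                    # process noise covariance (assumed)
R = np.array([[9.0]])                   # GPS noise covariance, ~3 m std dev

x = np.array([[0.0], [1.0]])            # initial state estimate
P = np.eye(2)                           # initial estimate covariance

def step(x, P, z):
    # Predict: propagate the estimate and its uncertainty through the model.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: blend the prediction with the measurement z via the Kalman gain.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

for z in [1.2, 2.1, 2.9, 4.2]:          # made-up GPS readings (m)
    x, P = step(x, P, np.array([[z]]))
    print(x.ravel())
```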
4 Technical description and context

The Kalman filter is an efficient recursive filter that estimates the internal state of a linear dynamic system from a series of noisy measurements. It is used in a wide range of engineering and econometric applications from radar and computer vision to estimation of structural macroeconomic models,[12][13] and is an important topic in control theory and control systems engineering. Together with the linear-quadratic regulator (LQR), the Kalman filter solves the linear-quadratic-Gaussian control problem (LQG). The Kalman filter, the linear-quadratic regulator and the linear-quadratic-Gaussian controller are solutions to what arguably are the most fundamental problems in control theory.

In most applications, the internal state is much larger (more degrees of freedom) than the few "observable" parameters which are measured. However, by combining a series of measurements, the Kalman filter can estimate the entire internal state.

In Dempster–Shafer theory, each state equation or observation is considered a special case of a linear belief function and the Kalman filter is a special case of combining linear belief functions on a join-tree or Markov tree. Additional approaches include belief filters which use Bayes or evidential updates to the state equations.

A wide variety of Kalman filters have now been developed, from Kalman's original formulation, now called the "simple" Kalman filter, the Kalman–Bucy filter, Schmidt's "extended" filter, the information filter, and a variety of "square-root" filters that were developed by Bierman, Thornton and many others. Perhaps the most commonly used type of very simple Kalman filter is the phase-locked loop, which is now ubiquitous in radios, especially frequency modulation (FM) radios, television sets, satellite communications receivers, outer space communications systems, and nearly any other electronic communications equipment.

5 Underlying dynamical system model

The Kalman filters are based on linear dynamical systems discretized in the time domain. They are modelled on a Markov chain built on linear operators perturbed by errors that may include Gaussian noise. The state of the system is represented as a vector of real numbers. At each discrete time increment, a linear operator is applied to the state to generate the new state, with some noise mixed in, and optionally some information from the controls on the system if they are known. Then, another linear operator mixed with more noise generates the observed outputs from the true ("hidden") state. The Kalman filter may be regarded as analogous to the hidden Markov model, with the key difference that the hidden state variables take values in a continuous space (as opposed to a discrete state space as in the hidden Markov model). There is a strong duality between the equations of the Kalman filter and those of the hidden Markov model. A review of this and other models is given in Roweis and Ghahramani (1999)[14] and Hamilton (1994), Chapter 13.[15]

In order to use the Kalman filter to estimate the internal state of a process given only a sequence of noisy observations, one must model the process in accordance with the framework of the Kalman filter. This means specifying the following matrices: F_k, the state-transition model; H_k, the observation model; Q_k, the covariance of the process noise; R_k, the covariance of the observation noise; and sometimes B_k, the control-input model, for each time-step k, as described below.

[Figure: model underlying the Kalman filter, showing timesteps k−1, k, k+1 with observed measurements z, user-supplied controls u, and hidden states x, P, connected by the matrices F, B, H and the noise terms w ~ N(0, Q) and v ~ N(0, R). Caption: Squares represent matrices. Ellipses represent multivariate normal distributions (with the mean and covariance matrix enclosed). Unenclosed values are vectors. In the simple case, the various matrices are constant with time, and thus the subscripts are dropped, but the Kalman filter allows any of them to change each time step.]

The Kalman filter model assumes the true state at time k is evolved from the state at (k − 1) according to

x_k = F_k x_{k-1} + B_k u_k + w_k

where

F_k is the state transition model which is applied to the previous state x_{k-1};
B_k is the control-input model which is applied to the control vector u_k;
w_k is the process noise which is assumed to be drawn from a zero mean multivariate normal distribution N with covariance Q_k: w_k ~ N(0, Q_k).

At time k an observation (or measurement) z_k of the true state x_k is made according to

z_k = H_k x_k + v_k

where

H_k is the observation model which maps the true state space into the observed space, and
v_k is the observation noise which is assumed to be zero mean Gaussian white noise with covariance R_k: v_k ~ N(0, R_k).

The initial state, and the noise vectors at each step {x_0, w_1, ..., w_k, v_1, ..., v_k} are all assumed to be mutually independent.
Many real dynamical systems do not exactly fit this model. In fact, unmodelled dynamics can seriously degrade the filter performance, even when it was supposed to work with unknown stochastic signals as inputs. The reason for this is that the effect of unmodelled dynamics depends on the input and, therefore, can bring the estimation algorithm to instability (it diverges). On the other hand, independent white noise signals will not make the algorithm diverge. The problem of distinguishing between measurement noise and unmodelled dynamics is a difficult one and is treated in control theory under the framework of robust control.[16][17]

6 Details

The Kalman filter is a recursive estimator. This means that only the estimated state from the previous time step and the current measurement are needed to compute the estimate for the current state. In contrast to batch estimation techniques, no history of observations and/or estimates is required. In what follows, the notation x̂_{n|m} represents the estimate of x at time n given observations up to and including at time m ≤ n.

The state of the filter is represented by two variables:

x̂_{k|k}, the a posteriori state estimate at time k given observations up to and including at time k;
P_{k|k}, the a posteriori error covariance matrix (a measure of the estimated accuracy of the state estimate).

The Kalman filter can be written as a single equation, however it is most often conceptualized as two distinct phases: "Predict" and "Update". The predict phase uses the state estimate from the previous timestep to produce an estimate of the state at the current timestep. This predicted state estimate is also known as the a priori state estimate because, although it is an estimate of the state at the current timestep, it does not include observation information from the current timestep. In the update phase, the current a priori prediction is combined with current observation information to refine the state estimate. This improved estimate is termed the a posteriori state estimate.

Typically, the two phases alternate, with the prediction advancing the state until the next scheduled observation, and the update incorporating the observation. However, this is not necessary; if an observation is unavailable for some reason, the update may be skipped and multiple prediction steps performed. Likewise, if multiple independent observations are available at the same time, multiple update steps may be performed (typically with different observation matrices H_k).[18][19]

6.1 Predict

Predicted (a priori) state estimate:
\hat{x}_{k|k-1} = F_k \hat{x}_{k-1|k-1} + B_k u_k

Predicted (a priori) estimate covariance:
P_{k|k-1} = F_k P_{k-1|k-1} F_k^T + Q_k

6.2 Update

Innovation or measurement pre-fit residual:
\tilde{y}_k = z_k - H_k \hat{x}_{k|k-1}

Innovation (or pre-fit residual) covariance:
S_k = H_k P_{k|k-1} H_k^T + R_k

Optimal Kalman gain:
K_k = P_{k|k-1} H_k^T S_k^{-1}

Updated (a posteriori) state estimate:
\hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k \tilde{y}_k

Updated (a posteriori) estimate covariance:
P_{k|k} = (I - K_k H_k) P_{k|k-1}

The formula for the updated estimate covariance above is only valid for the optimal Kalman gain. Usage of other gain values requires a more complex formula found in the derivations section.

6.3 Invariants

If the model is accurate, and the values for x̂_{0|0} and P_{0|0} accurately reflect the distribution of the initial state values, then the following invariants are preserved:

E[x_k - \hat{x}_{k|k}] = E[x_k - \hat{x}_{k|k-1}] = 0
E[\tilde{y}_k] = 0

where E[ξ] is the expected value of ξ. That is, all estimates have a mean error of zero.

Also:

P_{k|k} = cov(x_k - \hat{x}_{k|k})
P_{k|k-1} = cov(x_k - \hat{x}_{k|k-1})
S_k = cov(\tilde{y}_k)

so covariance matrices accurately reflect the covariance of estimates.

6.4 Estimation of the noise covariances Q_k and R_k

Practical implementation of the Kalman filter is often difficult due to the difficulty of getting a good estimate of the noise covariance matrices Q_k and R_k. Extensive research has been done in this field to estimate these covariances from data. One of the more promising and practical approaches to do this is the autocovariance least-squares (ALS) technique that uses the time-lagged autocovariances of routine operating data to estimate the covariances.[20][21] The GNU Octave and Matlab code used to calculate the noise covariance matrices using the ALS technique is available online under the GNU General Public License.[22]
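The predict and update equations above translate directly into code. The following sketch (with arbitrary illustrative matrices, no control term) implements both phases and empirically checks the invariant that the estimation error has zero mean:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative constant model (assumed values, not from the article).
F = np.array([[1.0, 1.0], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 0.05 * np.eye(2)
R = np.array([[1.0]])

def predict(x, P):
    # x_hat_{k|k-1} = F x_hat_{k-1|k-1};  P_{k|k-1} = F P F^T + Q
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    y = z - H @ x                       # innovation
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # optimal Kalman gain
    return x + K @ y, (np.eye(2) - K @ H) @ P

# Monte Carlo check of the invariant E[x_k - x_hat_{k|k}] = 0.
errors = []
for trial in range(2000):
    x_true = rng.multivariate_normal(np.zeros(2), np.eye(2))
    x_est, P = np.zeros(2), np.eye(2)   # matches the prior on x_true
    for k in range(10):
        x_true = F @ x_true + rng.multivariate_normal(np.zeros(2), Q)
        z = H @ x_true + rng.multivariate_normal(np.zeros(1), R)
        x_est, P = predict(x_est, P)
        x_est, P = update(x_est, P, z)
    errors.append(x_true - x_est)
print(np.mean(errors, axis=0))          # should be close to [0, 0]
```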
6.5 Optimality and performance

It follows from theory that the Kalman filter is the optimal linear filter in cases where a) the model perfectly matches the real system, b) the entering noise is "white" (uncorrelated) and c) the covariances of the noise are exactly known. Several methods for the noise covariance estimation have been proposed during past decades, including ALS, mentioned in the section above. After the covariances are estimated, it is useful to evaluate the performance of the filter; i.e., whether it is possible to improve the state estimation quality. If the Kalman filter works optimally, the innovation sequence (the output prediction error) is white noise, therefore the whiteness property of the innovations measures filter performance. Several different methods can be used for this purpose.[23] If the noise terms are non-Gaussian distributed, methods for assessing the performance of the filter estimate, which use probability inequalities or large-sample theory, are given in [24] and [25].

7 Example application, technical

[Figure: truth; filtered process; observations.]

Consider a truck on frictionless, straight rails. Initially, the truck is stationary at position 0, but it is buffeted this way and that by random uncontrolled forces. We measure the position of the truck every Δt seconds, but these measurements are imprecise; we want to maintain a model of where the truck is and what its velocity is. We show here how we derive the model from which we create our Kalman filter.

Since F, H, R, Q are constant, their time indices are dropped.

The position and velocity of the truck are described by the linear state space

x_k = \begin{bmatrix} x \\ \dot{x} \end{bmatrix}

where ẋ is the velocity, that is, the derivative of position with respect to time.

We assume that between the (k − 1) and k timestep, uncontrolled forces cause a constant acceleration of a_k that is normally distributed, with mean 0 and standard deviation σ_a. From Newton's laws of motion we conclude that

x_k = F x_{k-1} + G a_k

(note that there is no Bu term since we have no known control inputs; instead, we assume that a_k is the effect of an unknown input and G applies that effect to the state vector) where

F = \begin{bmatrix} 1 & \Delta t \\ 0 & 1 \end{bmatrix}

G = \begin{bmatrix} \frac{1}{2}\Delta t^2 \\ \Delta t \end{bmatrix}

so that

x_k = F x_{k-1} + w_k

where

w_k \sim N(0, Q), \quad Q = G G^T \sigma_a^2 = \begin{bmatrix} \frac{1}{4}\Delta t^4 & \frac{1}{2}\Delta t^3 \\ \frac{1}{2}\Delta t^3 & \Delta t^2 \end{bmatrix} \sigma_a^2 .

Please note that the matrix Q is not full rank (it is of rank one if Δt ≠ 0). Hence, the distribution N(0, Q) is not absolutely continuous and has no probability density function. Another way to express this, avoiding explicit degenerate distributions, is given by

w_k \sim G \cdot N(0, \sigma_a^2) .

At each time step, a noisy measurement of the true position of the truck is made. Let us suppose the measurement noise v_k is also normally distributed, with mean 0 and standard deviation σ_z.

z_k = H x_k + v_k

where

H = \begin{bmatrix} 1 & 0 \end{bmatrix}

and

R = E[v_k v_k^T] = [\sigma_z^2] .

We know the initial starting state of the truck with perfect precision, so we initialize

\hat{x}_{0|0} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}

and to tell the filter that we know the exact position and velocity, we give it a zero covariance matrix:
P_{0|0} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}

If the initial position and velocity are not known perfectly, the covariance matrix should be initialized with suitable variances on its diagonal:

P_{0|0} = \begin{bmatrix} \sigma_x^2 & 0 \\ 0 & \sigma_{\dot{x}}^2 \end{bmatrix}

The filter will then prefer the information from the first measurements over the information already in the model.
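The derivation above maps directly onto code. The sketch below (with assumed values Δt = 1, σ_a = 0.2, σ_z = 3, chosen only for illustration) builds the model, simulates the buffeted truck, and runs the filter:

```python
import numpy as np

rng = np.random.default_rng(2)

dt, sigma_a, sigma_z = 1.0, 0.2, 3.0    # assumed values for illustration

F = np.array([[1.0, dt], [0.0, 1.0]])
G = np.array([[0.5 * dt**2], [dt]])
Q = G @ G.T * sigma_a**2                # rank-one process noise covariance
H = np.array([[1.0, 0.0]])
R = np.array([[sigma_z**2]])

x_true = np.zeros(2)                    # truck starts at rest at position 0
x_est = np.zeros(2)                     # exact initial knowledge ...
P = np.zeros((2, 2))                    # ... hence zero initial covariance

for k in range(20):
    a = rng.normal(0.0, sigma_a)        # random acceleration a_k
    x_true = F @ x_true + G.ravel() * a
    z = H @ x_true + rng.normal(0.0, sigma_z, size=1)
    # Predict
    x_est = F @ x_est
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_est = x_est + K @ (z - H @ x_est)
    P = (np.eye(2) - K @ H) @ P
    print(f"k={k:2d}  true pos={x_true[0]:7.2f}  est pos={x_est[0]:7.2f}")
```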
8 Derivations

8.1 Deriving the a posteriori estimate covariance matrix

Starting with our invariant on the error covariance P_{k|k} as above,

P_{k|k} = cov(x_k - \hat{x}_{k|k})

substitute in the definition of x̂_{k|k}

P_{k|k} = cov(x_k - (\hat{x}_{k|k-1} + K_k \tilde{y}_k))

and substitute ỹ_k

P_{k|k} = cov(x_k - (\hat{x}_{k|k-1} + K_k (z_k - H_k \hat{x}_{k|k-1})))

and z_k

P_{k|k} = cov(x_k - (\hat{x}_{k|k-1} + K_k (H_k x_k + v_k - H_k \hat{x}_{k|k-1})))

and by collecting the error vectors we get

P_{k|k} = cov((I - K_k H_k)(x_k - \hat{x}_{k|k-1}) - K_k v_k)

Since the measurement error v_k is uncorrelated with the other terms, this becomes

P_{k|k} = cov((I - K_k H_k)(x_k - \hat{x}_{k|k-1})) + cov(K_k v_k)

by the properties of vector covariance this becomes

P_{k|k} = (I - K_k H_k) cov(x_k - \hat{x}_{k|k-1}) (I - K_k H_k)^T + K_k cov(v_k) K_k^T

which, using our invariant on P_{k|k-1} and the definition of R_k, becomes

P_{k|k} = (I - K_k H_k) P_{k|k-1} (I - K_k H_k)^T + K_k R_k K_k^T

This formula (sometimes known as the Joseph form of the covariance update equation) is valid for any value of K_k. It turns out that if K_k is the optimal Kalman gain, this can be simplified further as shown below.

8.2 Kalman gain derivation

The Kalman filter is a minimum mean-square error estimator. The error in the a posteriori state estimation is

x_k - \hat{x}_{k|k}

We seek to minimize the expected value of the square of the magnitude of this vector, E[\|x_k - \hat{x}_{k|k}\|^2]. This is equivalent to minimizing the trace of the a posteriori estimate covariance matrix P_{k|k}. By expanding out the terms in the equation above and collecting, we get:

P_{k|k} = P_{k|k-1} - K_k H_k P_{k|k-1} - P_{k|k-1} H_k^T K_k^T + K_k (H_k P_{k|k-1} H_k^T + R_k) K_k^T
        = P_{k|k-1} - K_k H_k P_{k|k-1} - P_{k|k-1} H_k^T K_k^T + K_k S_k K_k^T

The trace is minimized when its matrix derivative with respect to the gain matrix is zero. Using the gradient matrix rules and the symmetry of the matrices involved, we find that

\frac{\partial \, tr(P_{k|k})}{\partial K_k} = -2 (H_k P_{k|k-1})^T + 2 K_k S_k = 0 .

Solving this for K_k yields the Kalman gain:

K_k S_k = (H_k P_{k|k-1})^T = P_{k|k-1} H_k^T
K_k = P_{k|k-1} H_k^T S_k^{-1}

This gain, which is known as the optimal Kalman gain, is the one that yields MMSE estimates when used.

8.3 Simplification of the a posteriori error covariance formula

The formula used to calculate the a posteriori error covariance can be simplified when the Kalman gain equals the optimal value derived above. Multiplying both sides of our Kalman gain formula on the right by S_k K_k^T, it follows that
K_k S_k K_k^T = P_{k|k-1} H_k^T K_k^T

Referring back to our expanded formula for the a posteriori error covariance,

P_{k|k} = P_{k|k-1} - K_k H_k P_{k|k-1} - P_{k|k-1} H_k^T K_k^T + K_k S_k K_k^T

we find the last two terms cancel out, giving

P_{k|k} = P_{k|k-1} - K_k H_k P_{k|k-1} = (I - K_k H_k) P_{k|k-1} .

This formula is computationally cheaper and thus nearly always used in practice, but is only correct for the optimal gain. If arithmetic precision is unusually low, causing problems with numerical stability, or if a non-optimal Kalman gain is deliberately used, this simplification cannot be applied; the a posteriori error covariance formula as derived above (Joseph form) must be used.

9 Sensitivity analysis

The Kalman filtering equations provide an estimate of the state x̂_{k|k} and its error covariance P_{k|k} recursively. The estimate and its quality depend on the system parameters and the noise statistics fed as inputs to the estimator. This section analyzes the effect of uncertainties in the statistical inputs to the filter.[26] In the absence of reliable statistics or the true values of the noise covariance matrices Q_k and R_k, the expression

P_{k|k} = (I - K_k H_k) P_{k|k-1} (I - K_k H_k)^T + K_k R_k K_k^T

no longer provides the actual error covariance. In other words, P_{k|k} ≠ E[(x_k - \hat{x}_{k|k})(x_k - \hat{x}_{k|k})^T]. In most real-time applications, the covariance matrices that are used in designing the Kalman filter are different from the actual (true) noise covariance matrices. This sensitivity analysis describes the behavior of the estimation error covariance when the noise covariances as well as the system matrices F_k and H_k that are fed as inputs to the filter are incorrect. Thus, the sensitivity analysis describes the robustness (or sensitivity) of the estimator to misspecified statistical and parametric inputs to the estimator.

This discussion is limited to the error sensitivity analysis for the case of statistical uncertainties. Here the actual noise covariances are denoted by Q_k^a and R_k^a respectively, whereas the design values used in the estimator are Q_k and R_k respectively. The actual error covariance is denoted by P_{k|k}^a, and P_{k|k} as computed by the Kalman filter is referred to as the Riccati variable. When Q_k ≡ Q_k^a and R_k ≡ R_k^a, this means that P_{k|k} = P_{k|k}^a. While computing the actual error covariance using P_{k|k}^a = E[(x_k - \hat{x}_{k|k})(x_k - \hat{x}_{k|k})^T], substituting for x̂_{k|k} and using the fact that E[w_k w_k^T] = Q_k^a and E[v_k v_k^T] = R_k^a, results in the following recursive equations for P_{k|k}^a:

P_{k|k-1}^a = F_k P_{k-1|k-1}^a F_k^T + Q_k^a

and

P_{k|k}^a = (I - K_k H_k) P_{k|k-1}^a (I - K_k H_k)^T + K_k R_k^a K_k^T

While computing P_{k|k}, by design the filter implicitly assumes that E[w_k w_k^T] = Q_k and E[v_k v_k^T] = R_k. Note that the recursive expressions for P_{k|k}^a and P_{k|k} are identical except for the presence of Q_k^a and R_k^a in place of the design values Q_k and R_k respectively. Research has been done to analyze the robustness of Kalman filter systems.[27]
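The recursions above can be exercised numerically. The following sketch (design and actual covariances are assumed values for illustration) propagates the Riccati variable alongside the actual error covariance when the design noise statistics are too optimistic:

```python
import numpy as np

# Illustrative sensitivity check: design noise covariances (Q, R) differ from
# the actual ones (Qa, Ra). P follows the filter's Riccati recursion, while
# Pa follows the recursion for the actual error covariance given above.
F = np.array([[1.0, 1.0], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q,  R  = 0.01 * np.eye(2), np.array([[1.0]])   # design values (assumed)
Qa, Ra = 0.10 * np.eye(2), np.array([[2.0]])   # actual values (assumed)

P  = np.eye(2)   # Riccati variable computed by the filter
Pa = np.eye(2)   # actual error covariance
I = np.eye(2)

for k in range(50):
    # Prediction step for both recursions.
    P  = F @ P  @ F.T + Q
    Pa = F @ Pa @ F.T + Qa
    # Gain is computed from the *design* quantities only.
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    # Update: Joseph-form recursions with design vs. actual covariances.
    P  = (I - K @ H) @ P  @ (I - K @ H).T + K @ R  @ K.T
    Pa = (I - K @ H) @ Pa @ (I - K @ H).T + K @ Ra @ K.T

print("filter-reported covariance:\n", P)
print("actual error covariance:\n", Pa)   # larger, since the design Q, R are optimistic
```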
10 Square root form

One problem with the Kalman filter is its numerical stability. If the process noise covariance Q_k is small, round-off error often causes a small positive eigenvalue to be computed as a negative number. This renders the numerical representation of the state covariance matrix P indefinite, while its true form is positive-definite.

Positive definite matrices have the property that they have a triangular matrix square root P = S S^T. This can be computed efficiently using the Cholesky factorization algorithm, but more importantly, if the covariance is kept in this form, it can never have a negative diagonal or become asymmetric. An equivalent form, which avoids many of the square root operations required by the matrix square root yet preserves the desirable numerical properties, is the U-D decomposition form, P = U D U^T, where U is a unit triangular matrix (with unit diagonal), and D is a diagonal matrix.

Between the two, the U-D factorization uses the same amount of storage, and somewhat less computation, and is the most commonly used square root form. (Early literature on the relative efficiency is somewhat misleading, as it assumed that square roots were much more time-consuming than divisions,[28]:69 while on 21st-century computers they are only slightly more expensive.)

Efficient algorithms for the Kalman prediction and update steps in the square root form were developed by G. J. Bierman and C. L. Thornton.[28][29]

The L·D·L^T decomposition of the innovation covariance matrix S_k is the basis for another type of numerically efficient and robust square root filter.[30] The algorithm starts with the LU decomposition as implemented in the Linear Algebra PACKage (LAPACK). These results are further factored into the L·D·L^T structure with methods given by Golub and Van Loan (algorithm 4.1.2) for a symmetric nonsingular matrix.[31] Any singular covariance matrix is pivoted so that the first diagonal partition is nonsingular and well-conditioned. The pivoting algorithm must retain any portion of the innovation covariance matrix directly corresponding to observed state-variables H_k x_{k|k-1} that are associated with auxiliary observations in y_k. The L·D·L^T square-root filter requires orthogonalization of the observation vector.[29][30] This may be done with the inverse square-root of the covariance matrix for the auxiliary variables using Method 2 in Higham (2002, p. 263).[32]

11 Relationship to recursive Bayesian estimation

The Kalman filter can be presented as one of the simplest dynamic Bayesian networks. The Kalman filter calculates estimates of the true values of states recursively over time using incoming measurements and a mathematical process model. Similarly, recursive Bayesian estimation calculates estimates of an unknown probability density function (PDF) recursively over time using incoming measurements and a mathematical process model.[33]

In recursive Bayesian estimation, the true state is assumed to be an unobserved Markov process, and the measurements are the observed states of a hidden Markov model (HMM).

[Figure: hidden Markov model.]

Because of the Markov assumption, the true state is conditionally independent of all earlier states given the immediately previous state.

p(x_k \mid x_0, \dots, x_{k-1}) = p(x_k \mid x_{k-1})

Similarly, the measurement at the k-th timestep is dependent only upon the current state and is conditionally independent of all other states given the current state.

p(z_k \mid x_0, \dots, x_k) = p(z_k \mid x_k)

Using these assumptions the probability distribution over all states of the hidden Markov model can be written simply as:

p(x_0, \dots, x_k, z_1, \dots, z_k) = p(x_0) \prod_{i=1}^{k} p(z_i \mid x_i) \, p(x_i \mid x_{i-1})

However, when the Kalman filter is used to estimate the state x, the probability distribution of interest is that associated with the current states conditioned on the measurements up to the current timestep. This is achieved by marginalizing out the previous states and dividing by the probability of the measurement set.

This leads to the predict and update steps of the Kalman filter written probabilistically. The probability distribution associated with the predicted state is the sum (integral) of the products of the probability distribution associated with the transition from the (k − 1)-th timestep to the k-th and the probability distribution associated with the previous state, over all possible x_{k-1}.

p(x_k \mid Z_{k-1}) = \int p(x_k \mid x_{k-1}) \, p(x_{k-1} \mid Z_{k-1}) \, dx_{k-1}

The measurement set up to time t is

Z_t = \{ z_1, \dots, z_t \}

The probability distribution of the update is proportional to the product of the measurement likelihood and the predicted state.

p(x_k \mid Z_k) = \frac{p(z_k \mid x_k) \, p(x_k \mid Z_{k-1})}{p(z_k \mid Z_{k-1})}

The denominator

p(z_k \mid Z_{k-1}) = \int p(z_k \mid x_k) \, p(x_k \mid Z_{k-1}) \, dx_k

is a normalization term.

The remaining probability density functions are

p(x_k \mid x_{k-1}) = N(F_k x_{k-1}, Q_k)
p(z_k \mid x_k) = N(H_k x_k, R_k)
p(x_{k-1} \mid Z_{k-1}) = N(\hat{x}_{k-1}, P_{k-1})

Note that the PDF at the previous timestep is inductively assumed to be the estimated state and covariance. This is justified because, as an optimal estimator, the Kalman filter makes best use of the measurements, therefore the PDF for x_k given the measurements Z_k is the Kalman filter estimate.
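For a scalar model these probabilistic steps can be checked against the closed form. The sketch below (illustrative values assumed throughout) evaluates the Bayesian update on a grid and compares the posterior mean with the Kalman estimate:

```python
import numpy as np

# Scalar check (illustrative values assumed) that the probabilistic
# predict/update above reproduces the closed-form Kalman step.
def gauss(x, mean, var):
    return np.exp(-(x - mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

F, H, Q, R = 0.9, 1.0, 0.3, 0.5
x_prev, P_prev = 1.0, 0.8          # p(x_{k-1} | Z_{k-1}) = N(x_prev, P_prev)
z = 1.7                            # measurement z_k

# Grid-based Bayesian recursion: predicted density, then likelihood times prior.
xs = np.linspace(-10.0, 10.0, 4001)
dx = xs[1] - xs[0]
predicted = gauss(xs, F * x_prev, F ** 2 * P_prev + Q)   # p(x_k | Z_{k-1})
posterior = gauss(z, H * xs, R) * predicted              # proportional to p(x_k | Z_k)
posterior /= posterior.sum() * dx                        # normalization term
mean_bayes = (xs * posterior).sum() * dx

# Closed-form Kalman update for comparison.
x_pred, P_pred = F * x_prev, F ** 2 * P_prev + Q
K = P_pred * H / (H ** 2 * P_pred + R)
mean_kalman = x_pred + K * (z - H * x_pred)

print(mean_bayes, mean_kalman)     # the two agree to grid accuracy
```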
12 Marginal likelihood

Related to the recursive Bayesian interpretation described above, the Kalman filter can be viewed as a generative model, i.e., a process for generating a stream of random observations z = (z_0, z_1, z_2, ...). Specifically, the process is

1. Sample a hidden state x_0 from the Gaussian prior distribution p(x_0) = N(\hat{x}_{0|0}, P_{0|0}).
2. Sample an observation z_0 from the observation model p(z_0 | x_0) = N(H_0 x_0, R_0).
3. For k = 1, 2, 3, ..., do
   (a) Sample the next hidden state x_k from the transition model p(x_k | x_{k-1}) = N(F_k x_{k-1} + B_k u_k, Q_k).
   (b) Sample an observation z_k from the observation model p(z_k | x_k) = N(H_k x_k, R_k).

Note that this process has identical structure to the hidden Markov model, except that the discrete state and observations are replaced with continuous variables sampled from Gaussian distributions.

In some applications, it is useful to compute the probability that a Kalman filter with a given set of parameters (prior distribution, transition and observation models, and control inputs) would generate a particular observed signal. This probability is known as the marginal likelihood because it integrates over ("marginalizes out") the values of the hidden state variables, so it can be computed using only the observed signal. The marginal likelihood can be useful to evaluate different parameter choices, or to compare the Kalman filter against other models using Bayesian model comparison.

It is straightforward to compute the marginal likelihood as a side effect of the recursive filtering computation. By the chain rule, the likelihood can be factored as the product of the probability of each observation given previous observations,

p(z) = \prod_{k=0}^{T} p(z_k \mid z_{k-1}, \dots, z_0)

and because the Kalman filter describes a Markov process, all relevant information from previous observations is contained in the current state estimate x̂_{k|k-1}, P_{k|k-1}. Thus the marginal likelihood is given by

p(z) = \prod_{k=0}^{T} \int p(z_k \mid x_k) \, p(x_k \mid z_{k-1}, \dots, z_0) \, dx_k
     = \prod_{k=0}^{T} \int N(z_k; H_k x_k, R_k) \, N(x_k; \hat{x}_{k|k-1}, P_{k|k-1}) \, dx_k
     = \prod_{k=0}^{T} N(z_k; H_k \hat{x}_{k|k-1}, R_k + H_k P_{k|k-1} H_k^T)
     = \prod_{k=0}^{T} N(z_k; H_k \hat{x}_{k|k-1}, S_k),

i.e., a product of Gaussian densities, each corresponding to the density of one observation z_k under the current filtering distribution N(H_k x̂_{k|k-1}, S_k). This can easily be computed as a simple recursive update; however, to avoid numeric underflow, in a practical implementation it is usually desirable to compute the log marginal likelihood ℓ = log p(z) instead. Adopting the convention ℓ^{(-1)} = 0, this can be done via the recursive update rule

\ell^{(k)} = \ell^{(k-1)} - \frac{1}{2} \left( \tilde{y}_k^T S_k^{-1} \tilde{y}_k + \log |S_k| + d_y \log 2\pi \right),

where d_y is the dimension of the measurement vector.[34]

An important application where such a (log) likelihood of the observations (given the filter parameters) is used is multi-target tracking. For example, consider an object tracking scenario where a stream of observations is the input, however, it is unknown how many objects are in the scene (or, the number of objects is known but is greater than one). In such a scenario, it can be unknown a priori which observations/measurements were generated by which object. A multiple hypothesis tracker (MHT) typically will form different track association hypotheses, where each hypothesis can be viewed as a Kalman filter (in the linear Gaussian case) with a specific set of parameters associated with the hypothesized object. Thus, it is important to compute the likelihood of the observations for the different hypotheses under consideration, such that the most-likely one can be found.
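The log-likelihood recursion above is cheap to fold into the filtering loop. A minimal sketch (illustrative constant model and made-up observations assumed):

```python
import numpy as np

# Sketch: accumulate the log marginal likelihood l = log p(z) during filtering,
# using the recursive rule above (illustrative constant model assumed).
F = np.array([[1.0, 1.0], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 0.05 * np.eye(2)
R = np.array([[1.0]])

x, P = np.zeros(2), np.eye(2)
loglik = 0.0                       # l^(-1) = 0 by convention
d_y = 1                            # dimension of the measurement vector

for z in [0.3, 1.1, 2.2, 2.8]:     # made-up observations
    # Predict.
    x, P = F @ x, F @ P @ F.T + Q
    # Innovation and its covariance.
    y = np.atleast_1d(z) - H @ x
    S = H @ P @ H.T + R
    # l^(k) = l^(k-1) - (y^T S^-1 y + log|S| + d_y log 2 pi) / 2
    loglik -= 0.5 * (y @ np.linalg.solve(S, y)
                     + np.log(np.linalg.det(S)) + d_y * np.log(2 * np.pi))
    # Update.
    K = P @ H.T @ np.linalg.inv(S)
    x, P = x + K @ y, (np.eye(2) - K @ H) @ P

print(loglik)
```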
13 Information filter

In the information filter, or inverse covariance filter, the estimated covariance and estimated state are replaced by the information matrix and information vector respectively. These are defined as:

Y_{k|k} = P_{k|k}^{-1}
\hat{y}_{k|k} = P_{k|k}^{-1} \hat{x}_{k|k}

Similarly the predicted covariance and state have equivalent information forms, defined as:

Y_{k|k-1} = P_{k|k-1}^{-1}
\hat{y}_{k|k-1} = P_{k|k-1}^{-1} \hat{x}_{k|k-1}

as have the measurement covariance and measurement vector, which are defined as:

I_k = H_k^T R_k^{-1} H_k
i_k = H_k^T R_k^{-1} z_k

The information update now becomes a trivial sum.[35]

Y_{k|k} = Y_{k|k-1} + I_k
\hat{y}_{k|k} = \hat{y}_{k|k-1} + i_k

The main advantage of the information filter is that N measurements can be filtered at each timestep simply by summing their information matrices and vectors.

Y_{k|k} = Y_{k|k-1} + \sum_{j=1}^{N} I_{k,j}
\hat{y}_{k|k} = \hat{y}_{k|k-1} + \sum_{j=1}^{N} i_{k,j}

To predict the information filter the information matrix and vector can be converted back to their state space equivalents, or alternatively the information space prediction can be used.[35]

M_k = [F_k^{-1}]^T Y_{k-1|k-1} F_k^{-1}
C_k = M_k [M_k + Q_k^{-1}]^{-1}
L_k = I - C_k
Y_{k|k-1} = L_k M_k L_k^T + C_k Q_k^{-1} C_k^T
\hat{y}_{k|k-1} = L_k [F_k^{-1}]^T \hat{y}_{k-1|k-1}

Note that if F and Q are time invariant these values can be cached. Note also that F and Q need to be invertible.

14 Fixed-lag smoother

The optimal fixed-lag smoother provides the optimal estimate of x̂_{k-N|k} for a given fixed lag N, using the measurements from z_1 to z_k.[36] It can be derived using the previous theory via an augmented state, and the main equation of the filter is the following:

\begin{bmatrix} \hat{x}_{t|t} \\ \hat{x}_{t-1|t} \\ \vdots \\ \hat{x}_{t-N+1|t} \end{bmatrix} =
\begin{bmatrix} I \\ 0 \\ \vdots \\ 0 \end{bmatrix} \hat{x}_{t|t-1} +
\begin{bmatrix} 0 & \dots & 0 \\ I & 0 & \vdots \\ \vdots & \ddots & \vdots \\ 0 & \dots & I \end{bmatrix}
\begin{bmatrix} \hat{x}_{t-1|t-1} \\ \hat{x}_{t-2|t-1} \\ \vdots \\ \hat{x}_{t-N+1|t-1} \end{bmatrix} +
\begin{bmatrix} K^{(0)} \\ K^{(1)} \\ \vdots \\ K^{(N-1)} \end{bmatrix} \tilde{y}_{t|t-1}

where:

\hat{x}_{t|t-1} is estimated via a standard Kalman filter;
\tilde{y}_{t|t-1} = z_t - H \hat{x}_{t|t-1} is the innovation produced considering the estimate of the standard Kalman filter;
the various \hat{x}_{t-i|t} with i = 1, ..., N − 1 are new variables; i.e., they do not appear in the standard Kalman filter;
the gains are computed via the following scheme:

K^{(i)} = P^{(i)} H^T \left[ H P H^T + R \right]^{-1}

and

P^{(i)} = P \left[ (F - K H)^T \right]^{i}

where P and K are the prediction error covariance and the gains of the standard Kalman filter (i.e., P_{t|t-1}).

If the estimation error covariance is defined so that

P_i := E\left[ \left( x_{t-i} - \hat{x}_{t-i|t} \right) \left( x_{t-i} - \hat{x}_{t-i|t} \right)^T \mid z_1 \dots z_t \right],

then we have that the improvement on the estimation of x_{t-i} is given by:

P - P_i = \sum_{j=0}^{i} \left[ P^{(j)} H^T \left( H P H^T + R \right)^{-1} H \left( P^{(i)} \right)^T \right]
15 Fixed-interval smoothers

The optimal fixed-interval smoother provides the optimal estimate of x̂_{k|n} (k < n) using the measurements from a fixed interval z_1 to z_n. This is also called "Kalman smoothing". There are several smoothing algorithms in common use.

15.1 Rauch–Tung–Striebel

The Rauch–Tung–Striebel (RTS) smoother is an efficient two-pass algorithm for fixed-interval smoothing.[37]

The forward pass is the same as the regular Kalman filter algorithm. These filtered a priori and a posteriori state estimates x̂_{k|k-1}, x̂_{k|k} and covariances P_{k|k-1}, P_{k|k} are saved for use in the backward pass.

In the backward pass, we compute the smoothed state estimates x̂_{k|n} and covariances P_{k|n}. We start at the last time step and proceed backward in time using the following recursive equations:

\hat{x}_{k|n} = \hat{x}_{k|k} + C_k (\hat{x}_{k+1|n} - \hat{x}_{k+1|k})
P_{k|n} = P_{k|k} + C_k (P_{k+1|n} - P_{k+1|k}) C_k^T

where

C_k = P_{k|k} F_{k+1}^T P_{k+1|k}^{-1} .

Note that x̂_{k|k} is the a posteriori state estimate of timestep k and x̂_{k+1|k} is the a priori state estimate of timestep k + 1. The same notation applies to the covariance. A numerical sketch of these two passes is given at the end of this section.

15.2 Modified Bryson–Frazier smoother

An alternative to the RTS algorithm is the modified Bryson–Frazier (MBF) fixed-interval smoother developed by Bierman.[29] This also uses a backward pass that processes data saved from the Kalman filter forward pass. The equations for the backward pass involve the recursive computation of data which are used at each observation time to compute the smoothed state and covariance.

The recursive equations are

\tilde{\Lambda}_k = H_k^T S_k^{-1} H_k + \hat{C}_k^T \hat{\Lambda}_k \hat{C}_k
\hat{\Lambda}_{k-1} = F_k^T \tilde{\Lambda}_k F_k
\hat{\Lambda}_n = 0
\tilde{\lambda}_k = -H_k^T S_k^{-1} \tilde{y}_k + \hat{C}_k^T \hat{\lambda}_k
\hat{\lambda}_{k-1} = F_k^T \tilde{\lambda}_k
\hat{\lambda}_n = 0

where S_k is the residual covariance and \hat{C}_k = I - K_k H_k. The smoothed state and covariance can then be found by substitution in the equations

P_{k|n} = P_{k|k} - P_{k|k} \hat{\Lambda}_k P_{k|k}
x_{k|n} = x_{k|k} - P_{k|k} \hat{\lambda}_k

or

P_{k|n} = P_{k|k-1} - P_{k|k-1} \tilde{\Lambda}_k P_{k|k-1}
x_{k|n} = x_{k|k-1} - P_{k|k-1} \tilde{\lambda}_k .

An important advantage of the MBF is that it does not require finding the inverse of the covariance matrix.

15.3 Minimum-variance smoother

The minimum-variance smoother can attain the best-possible error performance, provided that the models are linear, their parameters and the noise statistics are known precisely.[38] This smoother is a time-varying state-space generalization of the optimal non-causal Wiener filter.

The smoother calculations are done in two passes. The forward calculations involve a one-step-ahead predictor and are given by

\hat{x}_{k+1|k} = (F_k - K_k H_k) \hat{x}_{k|k-1} + K_k z_k
\alpha_k = -S_k^{-1/2} H_k \hat{x}_{k|k-1} + S_k^{-1/2} z_k

The above system is known as the inverse Wiener–Hopf factor. The backward recursion is the adjoint of the above forward system. The result of the backward pass \beta_k may be calculated by operating the forward equations on the time-reversed \alpha_k and time reversing the result. In the case of output estimation, the smoothed estimate is given by

\hat{y}_{k|N} = z_k - R_k \beta_k

Taking the causal part of this minimum-variance smoother yields

\hat{y}_{k|k} = z_k - R_k S_k^{-1/2} \alpha_k

which is identical to the minimum-variance Kalman filter. The above solutions minimize the variance of the output estimation error. Note that the Rauch–Tung–Striebel smoother derivation assumes that the underlying distributions are Gaussian, whereas the minimum-variance solutions do not. Optimal smoothers for state estimation and input estimation can be constructed similarly.

A continuous-time version of the above smoother is described in.[39][40]

Expectation-maximization algorithms may be employed to calculate approximate maximum likelihood estimates of unknown state-space parameters within minimum-variance filters and smoothers. Often uncertainties remain within problem assumptions. A smoother that accommodates uncertainties can be designed by adding a positive definite term to the Riccati equation.[41]

In cases where the models are nonlinear, step-wise linearizations may be within the minimum-variance filter and smoother recursions (extended Kalman filtering).
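As a sketch of the two passes of the RTS smoother in section 15.1 (illustrative model and noise values assumed), a forward Kalman pass followed by the backward smoothing recursion might look like:

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative model (assumed values); forward Kalman pass, then RTS backward pass.
F = np.array([[1.0, 1.0], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 0.05 * np.eye(2)
R = np.array([[1.0]])
n = 30

# Simulate and run the forward filter, saving a priori and a posteriori quantities.
x_true = np.zeros(2)
x, P = np.zeros(2), np.eye(2)
xp, Pp, xf, Pf = [], [], [], []        # predicted and filtered estimates
for k in range(n):
    x_true = F @ x_true + rng.multivariate_normal(np.zeros(2), Q)
    z = H @ x_true + rng.multivariate_normal(np.zeros(1), R)
    x, P = F @ x, F @ P @ F.T + Q      # predict
    xp.append(x); Pp.append(P)
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x, P = x + K @ (z - H @ x), (np.eye(2) - K @ H) @ P   # update
    xf.append(x); Pf.append(P)

# Backward pass: x_s[k] = xf[k] + C (x_s[k+1] - xp[k+1]), C = Pf[k] F^T Pp[k+1]^-1.
xs, Ps = xf[-1], Pf[-1]
smoothed = [xs]
for k in range(n - 2, -1, -1):
    C = Pf[k] @ F.T @ np.linalg.inv(Pp[k + 1])
    xs = xf[k] + C @ (xs - xp[k + 1])
    Ps = Pf[k] + C @ (Ps - Pp[k + 1]) @ C.T
    smoothed.append(xs)
smoothed.reverse()
print(smoothed[0], smoothed[-1])
```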
16 Frequency-weighted Kalman filters

Pioneering research on the perception of sounds at different frequencies was conducted by Fletcher and Munson in the 1930s. Their work led to a standard way of weighting measured sound levels within investigations of industrial noise and hearing loss. Frequency weightings have since been used within filter and controller designs to manage performance within bands of interest.

Typically, a frequency shaping function is used to weight the average power of the error spectral density in a specified frequency band. Let y − ŷ denote the output estimation error exhibited by a conventional Kalman filter. Also, let W denote a causal frequency weighting transfer function. The optimum solution which minimizes the variance of W(y − ŷ) arises by simply constructing W^{-1} ŷ.

The design of W remains an open question. One way of proceeding is to identify a system which generates the estimation error and setting W equal to the inverse of that system.[42] This procedure may be iterated to obtain mean-square error improvement at the cost of increased filter order. The same technique can be applied to smoothers.

17 Non-linear filters

The basic Kalman filter is limited to a linear assumption. More complex systems, however, can be nonlinear. The non-linearity can be associated either with the process model or with the observation model or with both.

17.1 Extended Kalman filter

Main article: Extended Kalman filter

In the extended Kalman filter (EKF), the state transition and observation models need not be linear functions of the state but may instead be non-linear functions. These functions are of differentiable type.

x_k = f(x_{k-1}, u_k) + w_k
z_k = h(x_k) + v_k

The function f can be used to compute the predicted state from the previous estimate and similarly the function h can be used to compute the predicted measurement from the predicted state. However, f and h cannot be applied to the covariance directly. Instead a matrix of partial derivatives (the Jacobian) is computed.

At each timestep the Jacobian is evaluated with current predicted states. These matrices can be used in the Kalman filter equations. This process essentially linearizes the non-linear function around the current estimate.
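A minimal sketch of one EKF step for a hypothetical nonlinear system follows; the transition and observation functions, their Jacobians, and the noise levels are all illustrative assumptions:

```python
import numpy as np

# Minimal EKF step for an assumed nonlinear system:
#   x_k = f(x_{k-1}) + w_k,  z_k = h(x_k) + v_k
# with hand-coded Jacobians F_jac and H_jac evaluated at the current estimate.
Q = 0.05 * np.eye(2)
R = np.array([[0.5]])

def f(x):                       # nonlinear transition (illustrative)
    return np.array([x[0] + np.sin(x[1]), 0.9 * x[1]])

def F_jac(x):                   # Jacobian of f at x
    return np.array([[1.0, np.cos(x[1])], [0.0, 0.9]])

def h(x):                       # nonlinear observation (illustrative): a range
    return np.array([np.sqrt(x[0] ** 2 + 1.0)])

def H_jac(x):                   # Jacobian of h at x
    return np.array([[x[0] / np.sqrt(x[0] ** 2 + 1.0), 0.0]])

def ekf_step(x, P, z):
    # Predict: propagate the estimate through f; linearize for the covariance.
    x_pred = f(x)
    F = F_jac(x)
    P_pred = F @ P @ F.T + Q
    # Update: linearize h around the predicted state.
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

x, P = np.array([0.5, 0.1]), np.eye(2)
x, P = ekf_step(x, P, np.array([1.3]))
print(x)
```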
17.2 Unscented Kalman filter

When the state transition and observation models, that is, the predict and update functions f and h, are highly non-linear, the extended Kalman filter can give particularly poor performance.[43] This is because the covariance is propagated through linearization of the underlying non-linear model. The unscented Kalman filter (UKF)[43] uses a deterministic sampling technique known as the unscented transform to pick a minimal set of sample points (called sigma points) around the mean. These sigma points are then propagated through the non-linear functions, from which a new mean and covariance estimate are then formed. The result is a filter which, for certain systems, more accurately estimates the true mean and covariance.[44] This can be verified with Monte Carlo sampling or Taylor series expansion of the posterior statistics. In addition, this technique removes the requirement to explicitly calculate Jacobians, which for complex functions can be a difficult task in itself (i.e., requiring complicated derivatives if done analytically or being computationally costly if done numerically), if not impossible (if those functions are not differentiable).

Predict

As with the EKF, the UKF prediction can be used independently from the UKF update, in combination with a linear (or indeed EKF) update, or vice versa.

The estimated state and covariance are augmented with the mean and covariance of the process noise.

x_{k-1|k-1}^a = \left[ \hat{x}_{k-1|k-1}^T \quad E[w_k^T] \right]^T

P_{k-1|k-1}^a = \begin{bmatrix} P_{k-1|k-1} & 0 \\ 0 & Q_k \end{bmatrix}

A set of 2L + 1 sigma points is derived from the augmented state and covariance, where L is the dimension of the augmented state.

\chi_{k-1|k-1}^0 = x_{k-1|k-1}^a
\chi_{k-1|k-1}^i = x_{k-1|k-1}^a + \left( \sqrt{(L+\lambda) P_{k-1|k-1}^a} \right)_i, \quad i = 1, \dots, L
\chi_{k-1|k-1}^i = x_{k-1|k-1}^a - \left( \sqrt{(L+\lambda) P_{k-1|k-1}^a} \right)_{i-L}, \quad i = L+1, \dots, 2L

where

\left( \sqrt{(L+\lambda) P_{k-1|k-1}^a} \right)_i

is the i-th column of the matrix square root of

(L+\lambda) P_{k-1|k-1}^a

using the definition: the square root A of matrix B satisfies B ≜ A A^T.

The matrix square root should be calculated using numerically efficient and stable methods such as the Cholesky decomposition.

The sigma points are propagated through the transition function f.

\chi_{k|k-1}^i = f\left( \chi_{k-1|k-1}^i \right), \quad i = 0, \dots, 2L

where f : R^L → R^{|x|}. The weighted sigma points are recombined to produce the predicted state and covariance.

\hat{x}_{k|k-1} = \sum_{i=0}^{2L} W_s^i \chi_{k|k-1}^i

P_{k|k-1} = \sum_{i=0}^{2L} W_c^i \left[ \chi_{k|k-1}^i - \hat{x}_{k|k-1} \right] \left[ \chi_{k|k-1}^i - \hat{x}_{k|k-1} \right]^T

where the weights for the state and covariance are given by:

W_s^0 = \frac{\lambda}{L+\lambda}
W_c^0 = \frac{\lambda}{L+\lambda} + (1 - \alpha^2 + \beta)
W_s^i = W_c^i = \frac{1}{2(L+\lambda)}
\lambda = \alpha^2 (L+\kappa) - L

α and κ control the spread of the sigma points. β is related to the distribution of x.

Normal values are α = 10^{-3}, κ = 0 and β = 2. If the true distribution of x is Gaussian, β = 2 is optimal.[45]
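The core of this prediction step is the unscented transform itself. A compact sketch follows; the values of α, β, κ are the "normal values" quoted above, while the nonlinearity and inputs are illustrative assumptions:

```python
import numpy as np

# Sketch of the unscented transform used in the UKF prediction step above.
def unscented_transform(x, P, f, alpha=1e-3, beta=2.0, kappa=0.0):
    L = x.size
    lam = alpha**2 * (L + kappa) - L
    S = np.linalg.cholesky((L + lam) * P)      # matrix square root, B = A A^T
    # 2L + 1 sigma points: the mean, then +/- each column of the square root.
    sigmas = np.vstack([x, x + S.T, x - S.T])
    # Weights for the mean (Ws) and covariance (Wc).
    Ws = np.full(2 * L + 1, 1.0 / (2 * (L + lam)))
    Wc = Ws.copy()
    Ws[0] = lam / (L + lam)
    Wc[0] = lam / (L + lam) + (1 - alpha**2 + beta)
    # Propagate through the nonlinearity and recombine.
    ys = np.array([f(s) for s in sigmas])
    y_mean = Ws @ ys
    d = ys - y_mean
    y_cov = (Wc[:, None] * d).T @ d
    return y_mean, y_cov

f = lambda s: np.array([np.sin(s[0]), s[1] ** 2])   # illustrative nonlinearity
m, C = unscented_transform(np.array([0.3, 1.0]), 0.1 * np.eye(2), f)
print(m, C)
```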
Update

The predicted state and covariance are augmented as before, except now with the mean and covariance of the measurement noise.

x_{k|k-1}^a = \left[ \hat{x}_{k|k-1}^T \quad E[v_k^T] \right]^T

P_{k|k-1}^a = \begin{bmatrix} P_{k|k-1} & 0 \\ 0 & R_k \end{bmatrix}

As before, a set of 2L + 1 sigma points is derived from the augmented state and covariance, where L is the dimension of the augmented state.

\chi_{k|k-1}^0 = x_{k|k-1}^a
\chi_{k|k-1}^i = x_{k|k-1}^a + \left( \sqrt{(L+\lambda) P_{k|k-1}^a} \right)_i, \quad i = 1, \dots, L
\chi_{k|k-1}^i = x_{k|k-1}^a - \left( \sqrt{(L+\lambda) P_{k|k-1}^a} \right)_{i-L}, \quad i = L+1, \dots, 2L

Alternatively, if the UKF prediction has been used, the sigma points themselves can be augmented along the following lines

\chi_{k|k-1} := \left[ \chi_{k|k-1}^T \quad E[v_k^T] \right]^T \pm \sqrt{(L+\lambda) R_k^a}

where

R_k^a = \begin{bmatrix} 0 & 0 \\ 0 & R_k \end{bmatrix}

The sigma points are projected through the observation function h.

\gamma_k^i = h\left( \chi_{k|k-1}^i \right), \quad i = 0, \dots, 2L

The weighted sigma points are recombined to produce the predicted measurement and predicted measurement covariance.

\hat{z}_k = \sum_{i=0}^{2L} W_s^i \gamma_k^i

P_{z_k z_k} = \sum_{i=0}^{2L} W_c^i \left[ \gamma_k^i - \hat{z}_k \right] \left[ \gamma_k^i - \hat{z}_k \right]^T

The state-measurement cross-covariance matrix,

P_{x_k z_k} = \sum_{i=0}^{2L} W_c^i \left[ \chi_{k|k-1}^i - \hat{x}_{k|k-1} \right] \left[ \gamma_k^i - \hat{z}_k \right]^T

is used to compute the UKF Kalman gain.

K_k = P_{x_k z_k} P_{z_k z_k}^{-1}

As with the Kalman filter, the updated state is the predicted state plus the innovation weighted by the Kalman gain,

\hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k (z_k - \hat{z}_k)

And the updated covariance is the predicted covariance, minus the predicted measurement covariance, weighted by the Kalman gain.

P_{k|k} = P_{k|k-1} - K_k P_{z_k z_k} K_k^T

18 Kalman–Bucy filter

The Kalman–Bucy filter (named after Richard Snowden Bucy) is a continuous time version of the Kalman filter.[46][47]

It is based on the state space model

\frac{d}{dt} x(t) = F(t) x(t) + B(t) u(t) + w(t)
z(t) = H(t) x(t) + v(t)

where Q(t) and R(t) represent the intensities (or, more accurately: the Power Spectral Density, PSD, matrices) of the two white noise terms w(t) and v(t), respectively.

The filter consists of two differential equations, one for the state estimate and one for the covariance:

\frac{d}{dt} \hat{x}(t) = F(t) \hat{x}(t) + B(t) u(t) + K(t) \left( z(t) - H(t) \hat{x}(t) \right)
\frac{d}{dt} P(t) = F(t) P(t) + P(t) F^T(t) + Q(t) - K(t) R(t) K^T(t)

where the Kalman gain is given by

K(t) = P(t) H^T(t) R^{-1}(t)

Note that in this expression for K(t) the covariance of the observation noise R(t) represents at the same time the covariance of the prediction error (or innovation) \tilde{y}(t) = z(t) - H(t) \hat{x}(t); these covariances are equal only in the case of continuous time.[48]

The distinction between the prediction and update steps of discrete-time Kalman filtering does not exist in continuous time.

The second differential equation, for the covariance, is an example of a Riccati equation.

19 Hybrid Kalman filter

Most physical systems are represented as continuous-time models while discrete-time measurements are frequently taken for state estimation via a digital processor. Therefore, the system model and measurement model are given by

\dot{x}(t) = F(t) x(t) + B(t) u(t) + w(t), \quad w(t) \sim N(0, Q(t))
z_k = H_k x_k + v_k, \quad v_k \sim N(0, R_k)

where

x_k = x(t_k) .

Initialize:

\hat{x}_{0|0} = E[x(t_0)], \quad P_{0|0} = Var[x(t_0)]

Predict:

\dot{\hat{x}}(t) = F(t) \hat{x}(t) + B(t) u(t), \text{ with } \hat{x}(t_{k-1}) = \hat{x}_{k-1|k-1}, \text{ giving } \hat{x}_{k|k-1} = \hat{x}(t_k)
\dot{P}(t) = F(t) P(t) + P(t) F(t)^T + Q(t), \text{ with } P(t_{k-1}) = P_{k-1|k-1}, \text{ giving } P_{k|k-1} = P(t_k)

The prediction equations are derived from those of the continuous-time Kalman filter without update from measurements, i.e., K(t) = 0. The predicted state and covariance are calculated respectively by solving a set of differential equations with the initial value equal to the estimate at the previous step.

Update:

K_k = P_{k|k-1} H_k^T \left( H_k P_{k|k-1} H_k^T + R_k \right)^{-1}
\hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k (z_k - H_k \hat{x}_{k|k-1})
P_{k|k} = (I - K_k H_k) P_{k|k-1}

The update equations are identical to those of the discrete-time Kalman filter. A numerical sketch of this predict/update alternation is given after the following section.

20 Variants for the recovery of sparse signals

The traditional Kalman filter has also been employed for the recovery of sparse, possibly dynamic, signals from noisy observations. Recent works[49][50][51] utilize notions from the theory of compressed sensing/sampling, such as the restricted isometry property and related probabilistic recovery arguments, for sequentially estimating the sparse state in intrinsically low-dimensional systems.
21 Applications

Attitude and heading reference systems
Autopilot
Battery state of charge (SoC) estimation[52][53]
Brain-computer interface
Chaotic signals
Tracking and vertex fitting of charged particles in particle detectors[54]
Tracking of objects in computer vision
Dynamic positioning
Economics, in particular macroeconomics, time series analysis, and econometrics[55]
Inertial guidance system
Nuclear medicine single photon emission computed tomography image restoration[56]
Orbit determination
Power system state estimation
Radar tracker
Satellite navigation systems
Seismology[57]
Sensorless control of AC motor variable-frequency drives
Simultaneous localization and mapping
Speech enhancement
Visual odometry
Weather forecasting
Navigation system
3D modeling
Structural health monitoring
Human sensorimotor processing[58]

22 See also

Alpha beta filter
Bayesian MMSE estimator
Covariance intersection
Data assimilation
Ensemble Kalman filter
Extended Kalman filter
Fast Kalman filter
Filtering problem (stochastic processes)
Generalized filtering
Invariant extended Kalman filter
Kernel adaptive filter
Linear-quadratic-Gaussian control
Masreliez's theorem
Moving horizon estimation
Non-linear filter
Particle filter estimator
PID controller
Predictor corrector
Recursive least squares
Schmidt–Kalman filter
Separation principle
Sliding mode control
Stochastic differential equations
Switching Kalman filter
Volterra series
Wiener filter
Zakai equation

23 References

[1] Paul Zarchan; Howard Musoff (2000). Fundamentals of Kalman Filtering: A Practical Approach. American Institute of Aeronautics and Astronautics, Incorporated. ISBN 978-1-56347-455-2.
[2] Wolpert, Daniel; Ghahramani, Zoubin (2000). "Computational principles of movement neuroscience". Nature Neuroscience. 3: 1212–1217. doi:10.1038/81497. PMID 11127840.
[3] Kalman, R. E. (1960). "A New Approach to Linear Filtering and Prediction Problems". Journal of Basic Engineering. 82: 35. doi:10.1115/1.3662552.
[4] Steffen L. Lauritzen. "Time series analysis in 1880. A discussion of contributions made by T.N. Thiele". International Statistical Review 49, 1981, 319–333. JSTOR 1402616
[5] Steffen L. Lauritzen, Thiele: Pioneer in Statistics, Oxford University Press, 2002. ISBN 0-19-850972-3.
[6] Mohinder S. Grewal and Angus P. Andrews
[7] Gaylor, David; Lightsey, E. Glenn (2003). "GPS/INS Kalman Filter Design for Spacecraft Operating in the Proximity of International Space Station". AIAA Guidance, Navigation, and Control Conference and Exhibit. doi:10.2514/6.2003-5445. ISBN 978-1-62410-090-1.
[8] Stratonovich, R. L. (1959). Optimum nonlinear systems which bring about a separation of a signal with constant parameters from noise. Radiofizika, 2:6, pp. 892–901.
[9] Stratonovich, R. L. (1959). On the theory of optimal non-linear filtering of random functions. Theory of Probability and its Applications, 4, pp. 223–225.
[10] Stratonovich, R. L. (1960) Application of the Markov processes theory to optimal filtering. Radio Engineering and Electronic Physics, 5:11, pp. 1–19.
[11] Stratonovich, R. L. (1960). Conditional Markov Processes. Theory of Probability and its Applications, 5, pp. 156–178.
[12] Ingvar Strid; Karl Walentin (April 2009). "Block Kalman Filtering for Large-Scale DSGE Models". Computational Economics. Springer. 33 (3): 277–304. doi:10.1007/s10614-008-9160-4.
[13] Martin Møller Andreasen (2008). "Non-linear DSGE Models, The Central Difference Kalman Filter, and The Mean Shifted Particle Filter" (PDF).
[14] Roweis, S; Ghahramani, Z (1999). "A unifying review of linear gaussian models". Neural Computation. 11 (2): 305–45. doi:10.1162/089976699300016674. PMID 9950734.
[15] Hamilton, J. (1994), Time Series Analysis, Princeton University Press. Chapter 13, 'The Kalman Filter'
[16] Ishihara, J.Y.; Terra, M.H.; Campos, J.C.T. (2006). "Robust Kalman Filter for Descriptor Systems". IEEE Transactions on Automatic Control. 51 (8): 1354. doi:10.1109/TAC.2006.878741.
[17] Terra, Marco H.; Cerri, Joao P.; Ishihara, Joao Y. (2014). "Optimal Robust Linear Quadratic Regulator for Systems Subject to Uncertainties". IEEE Transactions on Automatic Control. 59 (9): 2586–2591. doi:10.1109/TAC.2014.2309282.
[18] Kelly, Alonzo (1994). "A 3D state space formulation of a navigation Kalman filter for autonomous vehicles" (PDF). DTIC Document: 13. 2006 Corrected Version
[19] Reid, Ian; Term, Hilary. "Estimation II" (PDF). www.robots.ox.ac.uk. Oxford University. Retrieved 6 August 2014.
[20] Rajamani, Murali (October 2007). Data-based Techniques to Improve State Estimation in Model Predictive Control (PDF) (PhD Thesis). University of Wisconsin–Madison.
[21] Rajamani, Murali R.; Rawlings, James B. (2009). "Estimation of the disturbance structure from data using semidefinite programming and optimal weighting". Automatica. 45 (1): 142–148. doi:10.1016/j.automatica.2008.05.032.
[22] "Autocovariance Least-Squares Toolbox". Jbrwww.che.wisc.edu. Retrieved 2014-06-02.
[23] Three optimality tests with numerical examples are described in Peter, Matisko, (2012). "Optimality Tests and Adaptive Kalman Filter". 16th IFAC Symposium on System Identification. p. 1523. doi:10.3182/20120711-3-BE-2027.00011. ISBN 978-3-902823-06-9.
[24] Spall, James C. (1995). "The Kantorovich inequality for error analysis of the Kalman filter with unknown noise distributions". Automatica. 31 (10): 1513. doi:10.1016/0005-1098(95)00069-9.
[25] Maryak, J.L.; Spall, J.C.; Heydon, B.D. (2004). "Use of the Kalman Filter for Inference in State-Space Models with Unknown Noise Distributions". IEEE Transactions on Automatic Control. 49: 87. doi:10.1109/TAC.2003.821415.
[26] Anderson, Brian D. O.; Moore, John B. (1979). Optimal Filtering. New York: Prentice Hall. pp. 129–133. ISBN 0-13-638122-7.
[27] Jingyang Lu. "False information injection attack on dynamic state estimation in multi-sensor systems", Fusion 2014
[28] Thornton, Catherine L. (15 October 1976). Triangular Covariance Factorizations for Kalman Filtering (PDF) (PhD thesis). NASA. NASA Technical Memorandum 33-798.
[29] Bierman, G.J. (1977). Factorization Methods for Discrete Sequential Estimation. Academic Press. Bibcode:1977fmds.book.....B.
[30] Bar-Shalom, Yaakov; Li, X. Rong; Kirubarajan, Thiagalingam (July 2001). Estimation with Applications to Tracking and Navigation. New York: John Wiley & Sons. pp. 308–317. ISBN 978-0-471-41655-5.
[31] Golub, Gene H.; Van Loan, Charles F. (1996). Matrix Computations. Johns Hopkins Studies in the Mathematical Sciences (Third ed.). Baltimore, Maryland: Johns Hopkins University. p. 139. ISBN 978-0-8018-5414-9.
[32] Higham, Nicholas J. (2002). Accuracy and Stability of Numerical Algorithms (Second ed.). Philadelphia, PA: Society for Industrial and Applied Mathematics. p. 680. ISBN 978-0-89871-521-7.
[33] Masreliez, C. Johan; Martin, R D (1977). "Robust Bayesian estimation for the linear model and robustifying the Kalman filter". IEEE Transactions on Automatic Control. 22 (3): 361–371. doi:10.1109/TAC.1977.1101538.
[34] Lütkepohl, Helmut (1991). Introduction to Multiple Time Series Analysis. Heidelberg: Springer-Verlag Berlin. p. 435.
[35] Gabriel T. Terejanu (2012-08-04). "Discrete Kalman Filter Tutorial" (PDF). Retrieved 2016-04-13.
[36] Anderson, Brian D. O.; Moore, John B. (1979). Optimal Filtering. Englewood Cliffs, NJ: Prentice Hall, Inc. pp. 176–190. ISBN 0-13-638122-7.
[37] Rauch, H.E.; Tung, F.; Striebel, C. T. (August 1965). "Maximum likelihood estimates of linear dynamic systems". AIAA Journal. 3 (8): 1445–1450. Bibcode:1965AIAAJ...3.1445.. doi:10.2514/3.3166.
[38] Einicke, G.A. (March 2006). "Optimal and Robust Noncausal Filter Formulations". IEEE Trans. Signal Processing. 54 (3): 1069–1077. Bibcode:2006ITSP...54.1069E. doi:10.1109/TSP.2005.863042.
[39] Einicke, G.A. (April 2007). "Asymptotic Optimality of the Minimum-Variance Fixed-Interval Smoother". IEEE Trans. Signal Processing. 55 (4): 1543–1547. Bibcode:2007ITSP...55.1543E. doi:10.1109/TSP.2006.889402.

[40] Einicke, G.A.; Ralston, J.C.; Hargrave, C.O.; Reid, D.C.; Hainsworth, D.W. (December 2008). "Longwall Mining Automation. An Application of Minimum-Variance Smoothing". IEEE Control Systems Magazine. 28 (6): 28–37. doi:10.1109/MCS.2008.929281.

[41] Einicke, G.A. (December 2009). "Asymptotic Optimality of the Minimum-Variance Fixed-Interval Smoother". IEEE Trans. Automatic Control. 54 (12): 2904–2908. Bibcode:2007ITSP...55.1543E. doi:10.1109/TSP.2006.889402.

[42] Einicke, G.A. (December 2014). "Iterative Frequency-Weighted Filtering and Smoothing Procedures". IEEE Signal Processing Letters. 21 (12): 1467–1470. Bibcode:2014ISPL...21.1467E. doi:10.1109/LSP.2014.2341641.

[43] Julier, Simon J.; Uhlmann, Jeffrey K. (1997). "A new extension of the Kalman filter to nonlinear systems" (PDF). Int. Symp. Aerospace/Defense Sensing, Simul. and Controls. Signal Processing, Sensor Fusion, and Target Recognition VI. 3: 182. Bibcode:1997SPIE.3068..182J. doi:10.1117/12.280797. Retrieved 2008-05-03.

[44] Gustafsson, Fredrik; Hendeby, Gustaf (2012). "Some Relations Between Extended and Unscented Kalman Filters". IEEE Transactions on Signal Processing. 2: 545–555.

[45] Wan, E.A.; Van Der Merwe, R. (2000). "The unscented Kalman filter for nonlinear estimation". Proceedings of the IEEE 2000 Adaptive Systems for Signal Processing, Communications, and Control Symposium (Cat. No.00EX373) (PDF). p. 153. doi:10.1109/ASSPCC.2000.882463. ISBN 0-7803-5800-7.

[46] Bucy, R.S. and Joseph, P.D., Filtering for Stochastic Processes with Applications to Guidance, John Wiley & Sons, 1968; 2nd Edition, AMS Chelsea Publ., 2005. ISBN 0-8218-3782-6.

[47] Jazwinski, Andrew H., Stochastic Processes and Filtering Theory, Academic Press, New York, 1970. ISBN 0-12-381550-9.

[48] Kailath, T. (1968). "An innovations approach to least-squares estimation--Part I: Linear filtering in additive white noise". IEEE Transactions on Automatic Control. 13 (6): 646–655. doi:10.1109/TAC.1968.1099025.

[49] Vaswani, Namrata (2008). "Kalman filtered Compressed Sensing". 2008 15th IEEE International Conference on Image Processing. p. 893. doi:10.1109/ICIP.2008.4711899. ISBN 978-1-4244-1765-0.

[50] Carmi, Avishy; Gurfil, Pini; Kanevsky, Dimitri (2010). "Methods for sparse signal recovery using Kalman filtering with embedded pseudo-measurement norms and quasi-norms". IEEE Transactions on Signal Processing. 58 (4): 2405–2409. Bibcode:2010ITSP...58.2405C. doi:10.1109/TSP.2009.2038959.

[51] Zachariah, Dave; Chatterjee, Saikat; Jansson, Magnus (2012). "Dynamic Iterative Pursuit". IEEE Transactions on Signal Processing. 60 (9): 4967–4972. doi:10.1109/TSP.2012.2203813.

[52] Vasebi, Amir; Partovibakhsh, Maral; Bathaee, S. Mohammad Taghi (2007). "A novel combined battery model for state-of-charge estimation in lead-acid batteries based on extended Kalman filter for hybrid electric vehicle applications". Journal of Power Sources. 174: 30–40. doi:10.1016/j.jpowsour.2007.04.011.

[53] Vasebi, A.; Bathaee, S.M.T.; Partovibakhsh, M. (2008). "Predicting state of charge of lead-acid batteries for hybrid electric vehicles by extended Kalman filter". Energy Conversion and Management. 49: 75–82. doi:10.1016/j.enconman.2007.05.017.

[54] Frühwirth, R. (1987). "Application of Kalman filtering to track and vertex fitting". Nucl. Instrum. Meth. A262 (2–3): 444–450. Bibcode:1987NIMPA.262..444F. doi:10.1016/0168-9002(87)90887-4.

[55] Harvey, Andrew C. (1994). "Applications of the Kalman filter in econometrics". In Bewley, Truman. Advances in Econometrics. New York: Cambridge University Press. pp. 285f. ISBN 0-521-46726-8.

[56] Boulfelfel, D.; Rangayyan, R.M.; Hahn, L.J.; Kloiber, R.; Kuduvalli, G.R. (1994). "Two-dimensional restoration of single photon emission computed tomography images using the Kalman filter". IEEE Transactions on Medical Imaging. 13 (1): 102–109. doi:10.1109/42.276148. PMID 18218487.

[57] Bock, Y.; Crowell, B.; Webb, F.; Kedar, S.; Clayton, R.; Miyahara, B. (2008). "Fusion of High-Rate GPS and Seismic Data: Applications to Early Warning Systems for Mitigation of Geological Hazards". American Geophysical Union. 43: 01. Bibcode:2008AGUFM.G43B..01B.

[58] Wolpert, D. M.; Miall, R. C. (1996). "Forward Models for Physiological Motor Control". Neural Netw. 9 (8): 1265–1279. doi:10.1016/S0893-6080(96)00035-4. PMID 12662535.

24 Further reading

Einicke, G.A. (2012). Smoothing, Filtering and Prediction: Estimating the Past, Present and Future. Rijeka, Croatia: Intech. ISBN 978-953-307-752-9.

Jinya Su; Baibing Li; Wen-Hua Chen (2015). "On existence, optimality and asymptotic stability of the Kalman filter with partially observed inputs". Automatica. 53: 149–154. doi:10.1016/j.automatica.2014.12.044.

Gelb, A. (1974). Applied Optimal Estimation. MIT Press.
Kalman, R.E. (1960). "A new approach to linear filtering and prediction problems" (PDF). Journal of Basic Engineering. 82 (1): 35–45. doi:10.1115/1.3662552. Archived from the original (PDF) on 2008-05-29. Retrieved 2008-05-03.

Kalman, R.E.; Bucy, R.S. (1961). "New Results in Linear Filtering and Prediction Theory".

Harvey, A.C. (1990). Forecasting, Structural Time Series Models and the Kalman Filter. Cambridge University Press.

Roweis, S.; Ghahramani, Z. (1999). "A Unifying Review of Linear Gaussian Models". Neural Computation. 11 (2): 305–345. doi:10.1162/089976699300016674. PMID 9950734.

Simon, D. (2006). Optimal State Estimation: Kalman, H Infinity, and Nonlinear Approaches. Wiley-Interscience.

Stengel, R.F. (1994). Optimal Control and Estimation. Dover Publications. ISBN 0-486-68200-5.

Warwick, K. (1987). "Optimal observers for ARMA models" (PDF). International Journal of Control. 46 (5): 1493–1503. doi:10.1080/00207178708933989. Retrieved 2008-05-03.

Bierman, G.J. (1977). Factorization Methods for Discrete Sequential Estimation. Mathematics in Science and Engineering. 128. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-44981-4.

Bozic, S.M. (1994). Digital and Kalman Filtering. Butterworth-Heinemann.

Haykin, S. (2002). Adaptive Filter Theory. Prentice Hall.

Liu, W.; Principe, J.C.; Haykin, S. (2010). Kernel Adaptive Filtering: A Comprehensive Introduction. John Wiley.

Manolakis, D.G. (1999). Statistical and Adaptive Signal Processing. Artech House.

Welch, Greg; Bishop, Gary (1997). "SCAAT: incremental tracking with incomplete information" (PDF). SIGGRAPH '97 Proceedings of the 24th annual conference on Computer graphics and interactive techniques. ACM Press/Addison-Wesley Publishing Co. pp. 333–344. doi:10.1145/258734.258876. ISBN 0-89791-896-7.

Jazwinski, Andrew H. (1970). Stochastic Processes and Filtering. Mathematics in Science and Engineering. New York: Academic Press. p. 376. ISBN 0-12-381550-9.

Maybeck, Peter S. (1979). "Chapter 1". Stochastic Models, Estimation, and Control (PDF). Mathematics in Science and Engineering. 141-1. New York: Academic Press. ISBN 0-12-480701-1.

Moriya, N. (2011). Primer to Kalman Filtering: A Physicist Perspective. New York: Nova Science Publishers, Inc. ISBN 978-1-61668-311-5.

Dunik, J.; Simandl, M.; Straka, O. (2009). "Methods for Estimating State and Measurement Noise Covariance Matrices: Aspects and Comparison". 15th IFAC Symposium on System Identification, 2009. France. pp. 372–377. doi:10.3182/20090706-3-FR-2004.00061. ISBN 978-3-902661-47-0.

Chui, Charles K.; Chen, Guanrong (2009). Kalman Filtering with Real-Time Applications. Springer Series in Information Sciences. 17 (4th ed.). New York: Springer. p. 229. ISBN 978-3-540-87848-3.

Spivey, Ben; Hedengren, J. D.; Edgar, T. F. (2010). "Constrained Nonlinear Estimation for Industrial Process Fouling". Industrial & Engineering Chemistry Research. 49 (17): 7824–7831. doi:10.1021/ie9018116.

Thomas Kailath; Ali H. Sayed; Babak Hassibi (2000). Linear Estimation. NJ: Prentice-Hall. ISBN 978-0-13-022464-4.

Ali H. Sayed (2008). Adaptive Filters. NJ: Wiley. ISBN 978-0-470-25388-5.

25 External links

"A New Approach to Linear Filtering and Prediction Problems", by R. E. Kalman, 1960

How a Kalman filter works, in pictures. Illuminates the Kalman filter with pictures and colors.

Kalman–Bucy Filter, a derivation of the Kalman–Bucy Filter.

MIT Video Lecture on the Kalman filter on YouTube.

An Introduction to the Kalman Filter, SIGGRAPH 2001 Course, Greg Welch and Gary Bishop.

Kalman Filter webpage, with lots of links.

"Kalman filters used in Weather models" (PDF). SIAM News. 36 (8). October 2003.

Haseltine, Eric L.; Rawlings, James B. (2005). "Critical Evaluation of Extended Kalman Filtering and Moving-Horizon Estimation". Industrial & Engineering Chemistry Research. 44 (8): 2451. doi:10.1021/ie034308l.
Gerald J. Bierman's Estimation Subroutine Library: Corresponds to the code in the research monograph Factorization Methods for Discrete Sequential Estimation, originally published by Academic Press in 1977. Republished by Dover.

Matlab Toolbox implementing parts of Gerald J. Bierman's Estimation Subroutine Library: UD / UDU' and LD / LDL' factorization with associated time and measurement updates making up the Kalman filter.

Matlab Toolbox of Kalman Filtering applied to Simultaneous Localization and Mapping: Vehicle moving in 1D, 2D and 3D.

The Kalman Filter in Reproducing Kernel Hilbert Spaces: a comprehensive introduction.

Matlab code to estimate Cox–Ingersoll–Ross interest rate model with Kalman Filter: Corresponds to the paper "Estimating and testing exponential-affine term structure models by Kalman filter", published by Review of Quantitative Finance and Accounting in 1999.

Online demo of the Kalman Filter. Demonstration of Kalman Filter (and other data assimilation methods) using twin experiments.

Botella, Guillermo; Martín H., José Antonio; Santos, Matilde; Meyer-Baese, Uwe (2011). "FPGA-Based Multimodal Embedded Sensor System Integrating Low- and Mid-Level Vision". Sensors. 11 (12): 1251–1259. doi:10.3390/s110808164.

Examples and how-to on using Kalman Filters with MATLAB: A Tutorial on Filtering and Estimation.

Explaining Filtering (Estimation) in One Hour, Ten Minutes, One Minute, and One Sentence by Yu-Chi Ho.