
DEPARTMENT OF ELECTRICAL AND ELECTRONICS
AJAY KUMAR GARG ENGINEERING COLLEGE

Report topic:
KALMAN FILTER FOR STATE ESTIMATION

Project supervisor:
Arun Kumar Sharma

Team members:
Vipul Verma
Vishal Maurya
Vivek Kumar

ACKNOWLEDGMENT

We express our sincere gratitude to our supervisor, Arun Kumar Sharma, for his constant support, suggestions, patience and unswerving guidance during our project work, without which this project would not be in its present form. He has been a constant source of inspiration and support for us throughout the semester. We are grateful to all the faculty members of the department for their consistent effort to provide the best available resources and for sharing their technical expertise and experience to inculcate professionalism and confidence.
We wish to thank all our classmates, roommates, friends and well-wishers who helped us directly or indirectly in the completion of this work.

Our warmest thanks go to our families for their support, love, encouragement and patience.

Vipul Verma
Vishal Maurya
Vivek Kumar

ABSTRACT

This report presents an adaptive square root filter for Networked Control Systems (NCS) under various uncertainties such as random missing measurements, sensor delays and packet dropouts. The probabilities of occurrence of all these uncertainties are assumed to be known a priori. The results reported for state estimation of unstable NCSs appear to be erroneous due to the presence of the state covariance matrix in the filtering algorithm. A modified adaptive Kalman filter in square root form is proposed and, subsequently, a memoryless state feedback controller for asymptotic stability of the NCS in the linear matrix inequality (LMI) framework has been considered. For NCSs whose system matrices depend on an unknown parameter vector with known bounds of parameter variation, a simultaneous parameter and state estimation algorithm has been proposed in a bootstrap manner. The unknown parameters are estimated from the innovation sequence in the measurement information and the gradient of the innovation sequence. A design of an estimator-based controller for stabilization of NCSs under structured parameter perturbation has been proposed. The controller design is based on known bounds of the structured parameter associated with the system matrices. The designed controller uses the estimated state to generate the control signal, which stabilizes the composite system.

Contents
Introduction
Observability of discrete-time systems
Kalman filter variables
Derivation of the Kalman filter
- Statement of problem
- Propagation step
- Update step
Kalman Filter (Algorithmic Steps)
Covariance matrices
Features of the Kalman Filter
Kalman filter Applications
Kalman filter Program
Simulation output Plots

INTRODUCTION
The Kalman filter was introduced in the 1960s. It is an algorithm that uses a series of measurements observed over time, containing statistical noise and other inaccuracies, and produces estimates of unknown
variables that tend to be more precise than those based on a single measurement alone. The filter is named
after Rudolf E. Kálmán, one of the primary developers of its theory.
The Kalman filter has numerous applications in technology. A common application is for guidance, navigation and
control of vehicles, particularly aircraft and spacecraft. Furthermore, the Kalman filter is a widely applied concept
in time series analysis used in fields such as signal processing and econometrics. Kalman filters also are one of the
main topics in the field of robotic motion planning and control, and they are sometimes included in trajectory
optimization. The multi-fractional order estimator is a simple and practical alternative to the Kalman filter for tracking
targets.
The algorithm works in a two-step process. In the prediction step, the Kalman filter produces estimates of the
current state variables, along with their uncertainties. Once the outcome of the next measurement (necessarily
corrupted with some amount of error, including random noise) is observed, these estimates are updated using
a weighted average, with more weight being given to estimates with higher certainty. The algorithm is recursive. It
can run in real time, using only the present input measurements and the previously calculated state and its
uncertainty matrix; no additional past information is required.
The Kalman filter does not require any assumption that the errors are Gaussian. However, the filter yields the exact
conditional probability estimate in the special case that all errors are Gaussian-distributed.
Extensions and generalizations to the method have also been developed, such as the extended Kalman filter and
the unscented Kalman filter which work on non-linear systems. The underlying model is a Bayesian model similar to
a hidden Markov model but where the state space of the latent variables is continuous and where all latent and
observed variables have Gaussian distributions.

The KALMAN FILTER is a mathematical algorithm that sequentially processes measurements to estimate the states of a system.

Observability of discrete-time systems


A necessary condition for the Kalman Filter to work correctly is that the
system whose states are to be estimated is observable. Therefore,
you should check for observability before applying the Kalman Filter.
(There may still be other problems that prevent the Kalman Filter from
producing accurate state estimates, such as a faulty or inaccurate
mathematical model.)
Consider the discrete linear time-varying system
x(k+1) = A(k)x(k) + B(k)w(k) + E(k)u(k)
z(k) = C(k)x(k) + v(k)
where x(k) ∈ R^n is the state vector with x(k0) = x(0), z(k) ∈ R^p is the
measured output, u(k) ∈ R^m is the control input, and w(k) ∈ R^m and v(k) ∈ R^p
are stationary, zero-mean, uncorrelated, white Gaussian noise sequences
with covariance matrices Σ_w and Σ_v respectively:
E[w(k)w(k)^T] = Σ_w
E[v(k)v(k)^T] = Σ_v
E[w(k)v(j)^T] = 0 for all k, j;  E[w(k)w(j)^T] = E[v(k)v(j)^T] = 0 for j ≠ k
The initial conditions satisfy the mean and covariance conditions:
E[x(0)] = x̄_0,  E[(x(0) − x̄_0)(x(0) − x̄_0)^T] = Σ_0
Let x_s(k|k) be the estimated state at time k and Σ(k|k) = E[e(k)e(k)^T] the error
covariance, where
e(k) = x(k) − x_s(k).
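
As an illustration of the observability check mentioned above, here is a minimal MATLAB/Octave sketch that builds the observability matrix for a constant (A, C) pair and tests its rank. The numerical values of A and C are taken from the simulation program later in this report; the check itself is our own illustration. (With the Control System Toolbox, rank(obsv(A,C)) gives the same result.)

% Observability check for a time-invariant (A, C) pair
A = [1 0.0787; 0 0.6065];   % state transition matrix (from the example later in this report)
C = [1 0];                  % output matrix
n = size(A,1);
Ob = [];                    % observability matrix [C; C*A; ...; C*A^(n-1)]
for i = 0:n-1
    Ob = [Ob; C*A^i];
end
if rank(Ob) == n
    disp('System is observable');
else
    disp('System is NOT observable');
end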

Kalman filter variables


x: state vector
y: observation (measurement) vector
u: control vector
A: state transition matrix (system dynamics)
B: input error matrix (maps process noise onto state changes)
E: input matrix (maps control commands onto state changes)
C: output matrix
V: output error (measurement noise) matrix

Derivation of the Kalman filter


Statement of problem
Let there be a system which is represented in the state-space domain as follows:
x(k+1) = Φ(k+1, k) x(k) + G(k) u(k)
y(k) = H(k) x(k) + v(k)
The driving process u(k) and the measurement noise v(k) are both assumed to be zero mean, so that
E[u(k)] = 0
E[v(k)] = 0
Moreover, u(k) and v(k) are independent of each other, white, and have covariance matrix Q(k) and variance R(k) respectively:
E[u(k) v^T(l)] = E[u(k)] E[v^T(l)] = 0  for all k, l
E[u(k) u^T(k)] = Q(k)
E[v(k) v(k)] = R(k)
In addition, both processes satisfy the following conditions:
E[x(0) u^T(k)] = 0  for all k ≥ 0
E[x(0) v(k)] = 0  for all k ≥ 0
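
The zero-mean, white and mutually independent assumptions above can be illustrated numerically. The following minimal MATLAB/Octave sketch is our own illustration; the scalar values Q = 1 and R = 5 are borrowed from the simulation example later in this report.

% Generate zero-mean white noise sequences with covariance Q and variance R
N = 10000;           % number of samples (arbitrary)
Q = 1;               % process noise covariance (scalar example)
R = 5;               % measurement noise variance (scalar example)
u = sqrt(Q)*randn(N,1);
v = sqrt(R)*randn(N,1);
fprintf('mean(u) = %.3f, mean(v) = %.3f\n', mean(u), mean(v));  % both near 0
fprintf('var(u)  = %.3f (Q = %g)\n', var(u), Q);
fprintf('var(v)  = %.3f (R = %g)\n', var(v), R);
fprintf('cross-covariance of u and v = %.3f (should be near 0)\n', mean(u.*v));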

Propagation step: relationship between x̂(k+1|k) and x̂(k|k)

Taking into consideration the linearity of the state equation, the state vector prediction x̂(k+1|k) is also characterized by the transition matrix Φ(k+1, k):
x̂(k+1|k) = E[x(k+1) | y(1), ..., y(k)] = Φ(k+1, k) E[x(k) | y(1), ..., y(k)] + G(k) E[u(k) | y(1), ..., y(k)]
As E[u(k) | y(1), ..., y(k)] = 0, we obtain
x̂(k+1|k) = Φ(k+1, k) x̂(k|k)
The a posteriori error correlation matrix at time k is defined by:
P(k|k) = E{[x(k) − x̂(k|k)][x(k) − x̂(k|k)]^T}
The a priori error correlation matrix at time k+1 is then:
P(k+1|k) = Φ(k+1, k) E{[x(k) − x̂(k|k)][x(k) − x̂(k|k)]^T} Φ^T(k+1, k) + G(k) E[u(k) u^T(k)] G^T(k)
         = Φ(k+1, k) P(k|k) Φ^T(k+1, k) + G(k) Q(k) G^T(k)
In the (A, B, Σ) notation used in the algorithmic steps below, this prediction reads:
Σ(k+1|k) = A(k) Σ(k|k) A(k)^T + B(k) Σ_w B(k)^T

Update step

Let us consider the recursive estimation of the state vector. Hereafter, we will adopt the following linear form of the state vector estimation, inspired by the linear recursive algorithm derived analytically for recursive least squares estimation.
The a posteriori estimate x̂(k|k) of the state vector corresponds to the prediction x̂(k|k−1) updated by a corrective term, i.e. the weighted difference between the actual measurement and its prediction:
x_s(k+1|k+1) = x_s(k+1|k) + K(k+1)[y(k+1) − C(k+1) x_s(k+1|k)]

Kalman Filter (Algorithmic Steps)

STEP 1: Predicted state
x_s(k+1|k) = A(k) x_s(k|k) + E(k) u(k)
STEP 2: Predicted error covariance matrix
Σ(k+1|k) = A(k) Σ(k|k) A(k)^T + B(k) Σ_w B(k)^T
STEP 3: Kalman gain matrix
K(k+1) = Σ(k+1|k) C(k+1)^T [C(k+1) Σ(k+1|k) C(k+1)^T + Σ_v]^(-1)
STEP 4: Measurement update of state
x_s(k+1|k+1) = x_s(k+1|k) + K(k+1)[y(k+1) − C(k+1) x_s(k+1|k)]
STEP 5: Measurement update of error covariance (Joseph form)
Σ(k+1|k+1) = [I − K(k+1) C(k+1)] Σ(k+1|k) [I − K(k+1) C(k+1)]^T + K(k+1) Σ_v K(k+1)^T
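
To make the five steps concrete, here is a minimal MATLAB/Octave sketch of a single predict/update iteration written as a function (e.g. saved as kf_step.m). The function name and argument order are our own choices; the matrices correspond to the quantities defined above (A, B, E, C, Σ_w, Σ_v).

function [xs, S] = kf_step(xs, S, u, y, A, B, E, C, Sw, Sv)
% One Kalman filter iteration following STEPs 1-5 above.
xp = A*xs + E*u;                        % STEP 1: predicted state
Sp = A*S*A' + B*Sw*B';                  % STEP 2: predicted error covariance
K  = Sp*C' / (C*Sp*C' + Sv);            % STEP 3: Kalman gain
xs = xp + K*(y - C*xp);                 % STEP 4: measurement update of state
I  = eye(size(S));
S  = (I - K*C)*Sp*(I - K*C)' + K*Sv*K'; % STEP 5: covariance update (Joseph form)
end

Called in a loop with the system matrices of the simulation example later in this report, this sketch performs the same time and measurement updates as the program there.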

Covariance Matrices
The covariance of two random variables x1 and x2 is
Cov(x1, x2) = σ²_{x1 x2} = ∫∫ (x1 − x̄1)(x2 − x̄2) p(x1, x2) dx1 dx2
where p is the joint probability density function of x1 and x2.
The correlation coefficient is the normalised quantity
r12 = σ²_{x1 x2} / (σ_{x1} σ_{x2}),   −1 ≤ r12 ≤ +1
The covariance of a column vector x = [x1 ... xn]^T is defined as
cov(x) = P_xx = E[(x − x̄)(x − x̄)^T] = ∫ (x − x̄)(x − x̄)^T p(x) dx1 ... dxn
and is a symmetric n by n matrix; it is positive definite unless there is a linear dependence among the components of x.
The (i, j)th element of P_xx is σ²_{xi xj}.
Interpreting a covariance matrix: the diagonal elements are the variances, while the off-diagonal elements encode the correlations.
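
As a quick numerical illustration (our own, with made-up data), the MATLAB/Octave sketch below estimates a 2-by-2 covariance matrix from samples and reads off the variances and the correlation coefficient.

% Sample covariance of two correlated variables (illustrative data)
N  = 5000;
x1 = randn(N,1);
x2 = 0.8*x1 + 0.6*randn(N,1);    % x2 is correlated with x1
X  = [x1 x2];
P  = cov(X);                     % 2x2 sample covariance matrix
var1 = P(1,1);                   % diagonal elements: variances
var2 = P(2,2);
r12  = P(1,2) / sqrt(var1*var2); % normalised off-diagonal: correlation coefficient
fprintf('var(x1)=%.2f  var(x2)=%.2f  r12=%.2f\n', var1, var2, r12);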

Features of the Kalman Filter

1. Model errors: There are always model errors since no model can give a
perfect description of a real (practical) system. It is possible to analyze the
implications of the model errors on the state estimates calculated by the
Kalman Filter, but this will not be done here. However, in general, the
estimation errors are smaller with the Kalman Filter than with a so-called
ballistic state estimator, which is the state estimator that you get if the Kalman
Filter gain K is set to zero. In the latter case there is no correction of the
estimates; it is only the predictions that make up the estimates. The Kalman
Filter then just runs as a simulator. Note that you can try to estimate model
errors by augmenting the states with states representing the model errors.
2. The error-model: Assuming that the system model is linear and correct
(i.e. it gives a correct representation of the real system), it can be
shown that the behaviour of the error of the corrected state estimate,
e(k|k) = x(k) − x_s(k|k), is governed by a linear dynamic system whose
system matrix is (I − KC)A.
3. The dynamics of the Kalman Filter: The error-model of the Kalman
Filter represents a dynamic system. The dynamics of the Kalman Filter can be
analyzed by calculating the eigenvalues of the system matrix of this error-model. These
eigenvalues are
{λ1, λ2, ..., λn} = eig[(I − KC)A]

4. The stability of the Kalman Filter: It can be shown that the Kalman
Filter is always an asymptotically stable dynamic system (otherwise it could not
give an optimal estimate). In other words, the eigenvalues defined above always lie
inside the unit circle. (A numerical check of this is sketched after this list.)

5. Predictor type Kalman Filter: In the predictor type of the Kalman Filter
there is only one formula for the calculation of the state estimate:
x_est(k+1) = A x_est(k) + K[y(k) − C x_est(k)]
Thus, there is no distinction between the predicted state estimate and the
corrected estimate (it is the same variable). (Actually, this is the original version of
the Kalman Filter.) The predictor type Kalman Filter has the drawback that there is a
time delay of one time step between the measurement y(k) and the calculated
state estimate x_est(k+1).
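
The stability property in point 4 can be verified numerically for a given steady-state gain. The following minimal MATLAB/Octave sketch is our own illustration; the gain value K used here is hypothetical and is not taken from this report's simulation.

% Check Kalman filter error-model stability: eigenvalues of (I - K*C)*A
A = [1 0.0787; 0 0.6065];   % state transition matrix (from the example below)
C = [1 0];                  % output matrix
K = [0.4; 0.1];             % hypothetical steady-state Kalman gain
lam = eig((eye(2) - K*C)*A);
if all(abs(lam) < 1)
    disp('Error model is asymptotically stable (all |eig| < 1)');
else
    disp('Error model is unstable');
end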

Kalman filter Applications

a. Estimation of various states and parameters of spacecraft based on sensor outputs.
b. Signal processing in guidance, navigation and control of vehicles, particularly aircraft.
c. The Kalman filter also has one of its main applications in the field of robotic motion planning and control. It is sometimes also included in trajectory optimization.
d. Estimation of the parameters of an AR model.
e. Many computer vision applications
- Stabilizing depth measurements
- Feature tracking
- Cluster tracking
- Fusing data from radar, laser scanner

- Many more

References:
- Modelling, Estimation and Optimal Filtering in Signal Processing
- Wikipedia

Kalman filter Program
Simulation using an example with numerical values:
x(k+1) = A x(k) + G u(k) + B w(k)
y(k) = C x(k) + v(k)
A = [1    0.07857
     0    0.6065]
B = [0.0043
     0.00008771]
G = [0.0043
     0.000787]
C = [1 0]
cov[w(k)] = Σ_w = 1
Σ_v = 5
Assume the following initial conditions for the system:
Σ(0|0) = [1 0
          0 0]
x(0) = [1
        0]
x_e(0) = [1
          0]

k=0:2*pi/1000:2*pi-2*pi/1000;
%u=sin(k);
u(1:1000)=1;                % constant input
f=[1 .0787;0 .6065];        % state transition matrix A
g=[.0043;.000787];          % input matrix G
b=[.00043;.000787];         % process noise input matrix B
c=[1 0];                    % output matrix C
q=1; r=5;                   % process and measurement noise covariances
x=[1;0]; xe=[1;0];          % true state and state estimate
p=[1 0;0 0];                % initial error covariance
xs=[];                      % storage for true states
xes=[];                     % storage for estimated states
ks=[];                      % storage for Kalman gains
w=sqrt(q)*randn(1000,1);    % process noise sequence
v=sqrt(r)*randn(1000,1);    % measurement noise sequence
for i=1:1000
xs=[xs x];
xes=[xes xe];
x=f*x+g*u(i)+b*w(i);        % original signal
y=c*x+v(i);                 % measurement
xe=f*xe+g*u(i);             % time update: predicted state
p=f*p*f'+b*q*b';            % time update: predicted error covariance
kk=p*c'/(c*p*c'+r);         % Kalman gain
xe=xe+kk*(y-c*xe);          % measurement update of the state estimate
p=(eye(2)-kk*c)*p;          % measurement update of the error covariance
ks=[ks kk];                 % store gain
end
figure
hold on
plot(k,xs(1,:),'b-')
plot (k,xes(1,:),'r-')
grid
xlabel ('time k')
ylabel('x1')
title ('x1 signal')
legend ('original x1','estimated x1')
hold off
figure
hold on
plot(k,xs(2,:),'b-')
plot(k,xes(2,:),'r-')
grid
xlabel('time k')
ylabel('x2')
title ('x2 signal')
legend ('original x2','estimated x2')
hold off
figure
plot(k,ks)
title('gain')
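
As a small addition to the script above (our own, not part of the original program), the root-mean-square estimation error of each state can be computed from the stored trajectories to quantify how closely the filter tracks the true states.

% Root-mean-square estimation error for each state (appended after the script)
err  = xs - xes;                 % estimation error, 2 x 1000
rmse = sqrt(mean(err.^2, 2));    % RMSE per state
fprintf('RMSE x1 = %.4f, RMSE x2 = %.4f\n', rmse(1), rmse(2));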

Simulation output Plots
[Figures 1-6: simulation output plots showing the original and estimated x1 signals, the original and estimated x2 signals, and the Kalman gain versus time.]
