ELECTRONICS
AJAY KUMAR GARG ENGINEERING COLLEGE
Report topic :-
Project supervisor :
Team members :-
Vipul Verma
Vishal Maurya
Vivek Kumar
ACKNOWLEDGMENT
My warmest thanks go to my family for their support, love, encouragement and patience.
ABSTRACT
This thesis presents an adaptive square root filter for Networked Control Systems (NCS) under various uncertainties such as random missing measurements, sensor delays and packet dropouts. The probabilities of occurrence of all these uncertainties are assumed to be known a priori. The results presented in earlier work for state estimation of unstable NCSs appear to be erroneous due to the presence of the state covariance matrix in the filtering algorithm. A modified adaptive Kalman filter in square root form is proposed and, subsequently, a memoryless state feedback controller for asymptotic stability of the NCS in the linear matrix inequality (LMI) framework has been considered. For NCSs where the system matrices depend on an unknown parameter vector with known bounds of parameter variations, a simultaneous parameter and state estimation algorithm has been proposed in a bootstrap manner. Unknown parameters are estimated based on the innovation sequence in the measurement information and the gradient of the innovation sequence. A design of an estimator-based controller for stabilization of NCSs under structured parameter perturbation has been proposed. The controller design is based on known bounds of the structured parameter associated with the system matrices. The designed controller uses the estimated state to generate the control signal, which stabilizes the composite system.
Contents
Introduction
Observability of discrete-time systems
Kalman filter variables
Derivation of the Kalman filter
- Statement of problem
- Propagation step
- Update step
Kalman filter (algorithmic steps)
Covariance matrices
Features of the Kalman filter
Kalman filter applications
Kalman filter program
Simulation output plots
INTRODUCTION
The Kalman filter was introduced in the 1960s. It is an algorithm that uses a series of measurements observed over time, containing statistical noise and other inaccuracies, and produces estimates of unknown variables that tend to be more precise than those based on a single measurement alone. The filter is named after Rudolf E. Kálmán, one of the primary developers of its theory.
The Kalman filter has numerous applications in technology. A common application is for guidance, navigation and control of vehicles, particularly aircraft and spacecraft. Furthermore, the Kalman filter is a widely applied concept in time series analysis, used in fields such as signal processing and econometrics. Kalman filters are also one of the main topics in the field of robotic motion planning and control, and they are sometimes included in trajectory optimization. The multi-fractional order estimator is a simple and practical alternative to the Kalman filter for tracking targets.
The algorithm works in a two-step process. In the prediction step, the Kalman filter produces estimates of the
current state variables, along with their uncertainties. Once the outcome of the next measurement (necessarily
corrupted with some amount of error, including random noise) is observed, these estimates are updated using
a weighted average, with more weight being given to estimates with higher certainty. The algorithm is recursive. It
can run in real time, using only the present input measurements and the previously calculated state and its
uncertainty matrix; no additional past information is required.
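To make this two-step recursion concrete, here is a minimal scalar sketch in MATLAB. It estimates a constant value from noisy readings; the constant-state model and all numbers here are illustrative assumptions, not taken from this report (the full program for the report's own model appears later).

q = 0.01; r = 1;                % assumed process and measurement noise variances
xe = 0; p = 1;                  % initial estimate and its error variance
y = 5 + sqrt(r)*randn(100,1);   % noisy measurements of the true value 5
for i = 1:100
    p = p + q;                  % prediction: uncertainty grows by the process noise
    kk = p/(p + r);             % Kalman gain: weight given to the new measurement
    xe = xe + kk*(y(i) - xe);   % update: weighted average of prediction and measurement
    p = (1 - kk)*p;             % uncertainty shrinks after the update
end
disp(xe)                        % close to 5, more precise than any single reading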
The Kalman filter does not require any assumption that the errors are Gaussian. However, the filter yields the exact
conditional probability estimate in the special case that all errors are Gaussian-distributed.
Extensions and generalizations to the method have also been developed, such as the extended Kalman filter and
the unscented Kalman filter which work on non-linear systems. The underlying model is a Bayesian model similar to
a hidden Markov model but where the state space of the latent variables is continuous and where all latent and
observed variables have Gaussian distributions.
Statement of problem: consider the linear system x(k+1) = A(k) x(k) + u(k), y(k) = C(k) x(k) + v(k), where u(k) and v(k) are the process and measurement noise, so that
E[u(k)] = 0 and E[v(k)] = 0.
Moreover, u(k) and v(k) are mutually uncorrelated white noise sequences with covariances Q(k) and R(k) respectively, and both are uncorrelated with the initial state:
E[u(k) v^T(l)] = E[u(k)] E[v^T(l)] = 0 for all k, l
E[u(k) u^T(k)] = Q(k)
E[v(k) v^T(k)] = R(k)
E[x(0) v^T(k)] = 0 for all k >= 0
Propagation step
The predicted estimate x^(k+1|k) is obtained from the filtered estimate x^(k|k). Since E[u(k) | y(1), ..., y(k)] = 0, we obtain
x^(k+1|k) = A(k) x^(k|k)
Update step
The filtered estimate x^(k+1|k+1) corrects the prediction with the new measurement:
x^(k+1|k+1) = x^(k+1|k) + K(k+1) [y(k+1) - C(k+1) x^(k+1|k)]
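For completeness, the covariance recursions and the gain formula that accompany these two steps are the standard Kalman filter relations, stated here in the same notation (assuming, as above, that the process noise u(k) enters the state equation directly):

P(k+1|k) = A(k) P(k|k) A^T(k) + Q(k)
K(k+1) = P(k+1|k) C^T(k+1) [C(k+1) P(k+1|k) C^T(k+1) + R(k+1)]^(-1)
P(k+1|k+1) = [I - K(k+1) C(k+1)] P(k+1|k)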
Covariance Matrices
The covariance of two random variables x1 and x2 is
Cov(x1, x2) = r12 Sx1 Sx2, with -1 <= r12 <= +1,
where Sx1 and Sx2 are the standard deviations and r12 is the correlation coefficient. The corresponding 2x2 covariance matrix is
P = [ S^2x1         r12 Sx1 Sx2
      r12 Sx1 Sx2   S^2x2 ]
More generally, for a random vector x with mean x̄ and density p(x), the covariance matrix is
P = E[(x - x̄)(x - x̄)^T] = ∫ (x - x̄)(x - x̄)^T p(x) dx1 ... dxn
The diagonal entries S^2xi are the variances of the individual components, while the off-diagonal entries encode the correlations between them.
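As a quick numerical illustration (the sequences and numbers here are made up for the example), the sample covariance matrix of two correlated signals can be inspected in MATLAB:

x1 = randn(1000,1);
x2 = 0.8*x1 + 0.6*randn(1000,1);   % correlated with x1 by construction
P = cov(x1, x2);                   % 2x2 sample covariance matrix
disp(P)                            % diagonal: variances (about 1 each);
                                   % off-diagonal: r12*Sx1*Sx2 (about 0.8)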
4. The stability of the Kalman Filter: It can be shown that the Kalman
Filter is always an asymptotically stable dynamic system (otherwise it could not
give an optimal estimate). In other words, the eigenvalues of the estimator error
dynamics always lie inside the unit circle (a numerical check is sketched after this list).
5. Predictor type Kalman Filter: In the predictor type of the Kalman Filter
there is only one formula for the calculation of the state estimate:
xest(k + 1) = A xest(k) + K [y(k) - C xest(k)]
Thus, there is no distinction between the predicted state estimate and the
corrected estimate (it is the same variable). (Actually, this is the original version of
the Kalman Filter.) The predictor type Kalman Filter has the drawback that there is a
time delay of one time step between the measurement y(k) and the calculated
state estimate xest(k + 1).
- Many more
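Below is a minimal MATLAB sketch of the stability check from item 4, using the model matrices of the program later in this report. Finding the steady-state gain by simply iterating the Riccati recursion is an assumption of this sketch, not a method described in the report:

f = [1 .0787; 0 .6065]; b = [.00043; .000787]; c = [1 0];
q = 1; r = 5;
p = [1 0; 0 0];
for i = 1:200                       % iterate the Riccati recursion to steady state
    p = f*p*f' + b*q*b';            % time update of the covariance
    kk = p*c'/(c*p*c' + r);         % Kalman gain (converges to its steady value)
    p = (eye(2) - kk*c)*p;          % measurement update of the covariance
end
disp(abs(eig((eye(2) - kk*c)*f)))   % filter error dynamics: all magnitudes < 1
Kp = f*kk;                          % gain of the predictor-type form in item 5 (Kp = A*K)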
References:
- Modelling, Estimation and Optimal Filtering in Signal Processing
- Wikipedia
Kalman filter program
Simulation values
The simulated system is
x(k+1) = A x(k) + G u(k) + B w(k)
y(k) = C x(k) + v(k)
with
A = [1  0.0787
     0  0.6065]
B = [0.00043
     0.000787]
G = [0.0043
     0.0717]
C = [1  0]
cov[w(k)] = Q = 1, cov[v(k)] = R = 5
x(0) = [1; 0], x^(0|0) = [1; 0], P(0) = [1 0; 0 0]
k=0:2*pi/1000:2*pi-2*pi/1000;     % time axis for plotting
%u=sin(k);                        % alternative sinusoidal input
u(1:1000)=1;                      % unit step input
f=[1 .0787;0 .6065];              % system matrix A
g=[.0043;.0717];                  % input matrix G
b=[.00043;.000787];               % process noise matrix B
c=[1 0];                          % output matrix C
q=1; r=5;                         % noise variances Q and R
x=[1;0]; xe=[1;0];                % initial true state and initial estimate
p=[1 0;0 0];                      % initial error covariance P(0)
xs=[]; xes=[]; ks=[];             % histories of states, estimates and gains
w=sqrt(q)*randn(1000,1);          % process noise sequence
v=sqrt(r)*randn(1000,1);          % measurement noise sequence
for i=1:1000
    xs=[xs x];                    % store original signal
    xes=[xes xe];                 % store estimated signal
    x=f*x+g*u(i)+b*w(i);          % original signal: propagate the true state
    y=c*x+v(i);                   % noisy measurement
    xe=f*xe+g*u(i);               % time update: predicted state
    p=f*p*f'+b*q*b';              % time update of the error covariance
    kk=p*c'/(c*p*c'+r);           % Kalman gain
    xe=xe+kk*(y-c*xe);            % measurement update of the estimate
    p=(eye(2)-kk*c)*p;            % measurement update of the error covariance
    ks=[ks kk];                   % store the gain history
end
figure
hold on
plot(k,xs(1,:),'b-')
plot (k,xes(1,:),'r-')
grid
xlabel ('time k')
ylabel('x1')
title ('x1 signal')
legend ('original x1','estimated x1')
hold off
figure
hold on
plot(k,xs(2,:),'b-')
plot(k,xes(2,:),'r-')
grid
xlabel('time k')
ylabel('x2')
title ('x2 signal')
legend ('original x2','estimated x2')
hold off
figure
plot(k,ks)
title('gain')
[Figures 1-6: simulation output plots (original vs. estimated x1 and x2 signals, and the Kalman gain)]