NOMENCLATURE

E{x}        = Expected value of x
Cov{x}      = Covariance of x
Var(x)      = Variance of x
A⁻¹         = Inverse of matrix A
y(t)        = Process output
y_M(t)      = Model output
w(t)        = Plant noise
v(t)        = Measurement noise
S_z         = Spectrum of z
φ_gz        = Cross-correlation of g and z
h(t,τ)      = Weighting function
H(s)        = Transfer function
V_w         = Cov{w(t), w(s)}
ĝ(t)        = Estimate of g(t)
δ(t−s)      = Dirac delta function
x̄(t₀)       = Expectation of x(t₀)
V_x(t₀)     = Covariance of x(t₀)
‖·‖         = Norm of a matrix
M           = Model
S           = True system
A           = Plant matrix (continuous-time)
B           = Control matrix (continuous-time)
C           = Observation matrix (continuous-time)
A_D         = Discrete analogue of A
B_D         = Discrete analogue of B
C_D         = Discrete analogue of C
V_vw, R_vw  = Cov{v(t), w(t)}
ABSTRACT

The most popular real-time filtering algorithm for non-linear systems is perhaps the Extended Kalman Filter (EKF). The EKF is an approximate filter for non-linear systems based on first-order linearization. Continuous-time estimation of joint states and parameters has not achieved the same status as the other methods because of the divergence associated with it, or because of huge computational requirements, unless the gain matrix (weighting function) is modified. So far as the continuous-time approach is concerned, the latest approach reported involves the sampling time in the formulation. The algorithm presented here proved to be a very good identifier: convergence of the parameters was obtained without modifying the gain matrix. It is believed that the algorithm reported in this dissertation offers an accurate and computationally attractive alternative.
CHAPTER-1

INTRODUCTION

Modern vehicles such as submarines, aircraft and spacecraft must perform their functions with high reliability. The state of such a system is determined through measuring devices (e.g. sensors), and the estimation of states and parameters from such measurements is the subject of this dissertation.
State estimation is of immense importance in engineering, since state estimates are required in many applications and can facilitate the design of fault-diagnostic schemes; the approach is attractive because of its inherent generality. Theories of identification have been developed greatly, aided in practical terms by the availability of computers.
In the literature, the well-known standard methods of system parameter identification would include recursive Least Squares [1,2,3], Extended Least Squares (ELS), Maximum Likelihood (ML) and Instrumental Variables [4] (Ljung, 1987; Soderstrom and Stoica, 1989; Ljung and Soderstrom, 1983), together with their extensions to multi-input multi-output identification. Usually these methods work well; in practical situations, however, their assumptions may not be satisfied, and hence the standard identification methods may give erroneous results. This has led to a resurging interest in such problems.
Since the Kalman filter structure represents an appropriate noise and signal representation, it has great intuitive appeal for identification. Identification in the presence of input noise was discussed by Kalman (1983) [5], and Deistler and Soderstrom (1989) investigated some of the methods; such approaches, however, tend to be computation intensive.
However, it leads to accurate estimation. Chapter-2 is devoted to Identification, Chapter-3 to estimation through the Kalman Filter (continuous time), and Chapter-4 is dedicated to the estimation of joint states and parameters. The continuous-time approach is preferred owing to the following facts:

1) Since all the physical models are represented by differential equations, parameter estimation can be used to monitor the system, i.e. the parameter estimate can be used as a diagnostic observer [7].

2) It is independent of sampling rate, and the physical meaning of the process is retained in the description [5].
A key issue in continuous-time stochastic identification is that of sampling: the choice of sampling rate/interval is critical to the identification procedure, i.e. how to sample the process [6,10].
Furthermore, not only does the discrete-time model change drastically as the sampling interval is reduced, but it also does not bear any direct resemblance to the corresponding continuous-time model obtained using the differential operator, or to the transfer function in the s-domain. Reducing the sampling interval also causes computational difficulties. As a guideline, the sampling frequency must not be less than twice, and not more than five times, the largest undamped natural frequency of the system to be identified.
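This rule of thumb is easy to apply numerically. The sketch below is ours (the function name is an illustrative assumption; the example natural frequency is taken from the swing-equation model of Chapter 4):

```python
import math

def sampling_band(omega_n, lower_factor=2.0, upper_factor=5.0):
    """Return (min, max) sampling frequency in Hz for a system whose
    largest undamped natural frequency is omega_n (rad/s), following
    the two-to-five-times rule quoted in the text."""
    f_n = omega_n / (2.0 * math.pi)          # natural frequency in Hz
    return lower_factor * f_n, upper_factor * f_n

# Example: the swing-equation model of Chapter 4 has A(2,1) = -44.4792,
# so omega_n = sqrt(44.4792) ~ 6.67 rad/s.
f_lo, f_hi = sampling_band(math.sqrt(44.4792))
```

For this example the admissible sampling frequencies lie roughly between 2.1 Hz and 5.3 Hz.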
We have concentrated on the EKF approach to estimate both the parameters and the states of the system, which is posed as a non-linear estimation problem, due to its inherent robustness to model uncertainties.
3) Representation of the Kalman filter structure in state space offers a generic structural approach which is both flexible and portable. All this can be readily implemented; however, successful application relies heavily on the design of the filter. Chapter-6 concludes on the efficacy of the joint estimation scheme.
The condition E[X̂_t − X_t]² = min, or E[X̂_t − X_{t+τ}]² = min, represents a criterion of optimality. Requiring such a condition with respect to a set of values of t is in fact the requirement that the filter be optimal with respect to the corresponding set of criteria simultaneously.
The state of a linear system at every t > t₀ represents an exception: the absolutely optimal filter giving

x̂_t = E{X_t | observations up to t}     (1.1)

is linear only in such cases, where the estimate can be expressed as a linear function of the observations.
In order to obtain the optimal solution for nonlinear equations, it is desirable to linearise them. A non-linear stochastic system, in which one or more dependent variables are functions of time, may be described by a state vector obtained by formal integration of the state equation, i.e.

x(t) − x(0) = ∫_{t₀}^{t} f[x(τ), τ] dτ + ∫_{t₀}^{t} G[x(τ), τ] du(τ)     (1.3)
where u(t) is the Wiener process, which is defined as the integral of the Gaussian white-noise process w(t), i.e.

u(t) = ∫_{t₀}^{t} w(τ) dτ ,  u(0) = 0     (1.4)

where

cov{w(t), w(τ)} = Q·δ(t − τ)

and the random variables u(t₁)−u(t₀), u(t₂)−u(t₁), ....., u(t_k)−u(t_{k−1}) are independent for t_k > t_{k−1} > ..... > t₀ ≥ 0. Given the observed value u(t_{k−1}), we thus have

E{u(t_k) | u(t_{k−1}), u(t_{k−2}), ....., u(t₀)} = u(t_{k−1})     (1.5)
Moreover, the quadratic variation satisfies

lim_{max(t_k − t_{k−1}) → 0} Σ_k [u(t_k) − u(t_{k−1})]² = T·Q_w     (1.6)

The second integral of Eqn.(1.3) is a stochastic integral, defined as

∫_{t₀}^{t} G[x(τ), τ] du(τ) = lim_{Δ→0} Σ_{i=1}^{k−1} G(t, t_i)·[u(t_{i+1}) − u(t_i)]     (1.7)
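A minimal simulation illustrates the quadratic-variation property of Eqn.(1.6); this is a sketch under the assumption of a scalar process with Q_w = q, and the function and parameter names are ours:

```python
import math
import random

def wiener_path(T=1.0, n=10_000, q=1.0, seed=7):
    """Simulate a scalar Wiener process u(t) on [0, T] with
    cov{w(t), w(s)} = q*delta(t - s), i.e. increments
    du ~ N(0, q*dt).  Returns the sampled values u(t_i)."""
    rng = random.Random(seed)
    dt = T / n
    u = [0.0]
    for _ in range(n):
        u.append(u[-1] + rng.gauss(0.0, math.sqrt(q * dt)))
    return u

u = wiener_path()
# Quadratic variation: the sum of squared increments approaches T*q
# (Eqn.(1.6)) as the partition is refined.
qv = sum((b - a) ** 2 for a, b in zip(u, u[1:]))
```

The sum of squared increments approaches T·q regardless of the particular sample path.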
7. Random Noise: In general, noise may be defined as any unwanted signal contaminating the input signal. Techniques can, of course, be developed to handle deterministic disturbances; the more insidious enemy is random noise. Random noise [14] is any noise whose present value cannot be predicted exactly; it may be regarded as a continuous wave generated by an infinity of interconnected sine-wave oscillations. If the oscillations were timed to occur at frequencies that are random, but within certain bands, the description of this random function cannot be given deterministically and a statistical description is required. In control systems it is this random noise that is of primary concern.
The importance of ordinary differential equations (ODEs) stems in large part from certain nice mathematical properties that they enjoy.
The differential equation

dx(t)/dt = f{x(t)}     (1.8)

and its stochastic counterparts, Eqns.(1.9) and (1.10), define discrete- or continuous-parameter stochastic processes. For the continuous-parameter process of Eqn.(1.10), the intermediate relations (1.11)-(1.15) follow. Therefore, the conditional probability law of the Markov process can be characterised by the densities of Eqn.(1.16); these densities are called the transition probability densities of the Markov process.
9. White Noise [15]: In modelling any physical process or guidance and control system, the variables cannot be predicted exactly; predictions are subject to error. This may be because randomness actually exists, or because there are random errors in the measurements. The probability law of the noise can be determined by repeated experimentation with the process. For a white sequence,

E{x_k | x_l} = E{x_k} ,  k > l     (1.17)
As a result, knowing the realization of x_l in no way helps in predicting x_k: the sequence is totally unpredictable. If the x_k's are all normally distributed, the sequence is white Gaussian; the prevalence of the Gaussian law is a statement of the central limit theorem, and the wide applicability of such models stems from this fact. The motivation for calling the sequence white will become apparent from the discussion of the continuous parameter process.
Let {x_n, n = 1, 2, ...} be a white Gaussian random vector sequence. Because the sequence is Gaussian, its statistics are completely specified by the mean E{x_n} for all n ≥ 1 and the covariance matrix

E{(x_n − E{x_n})(x_m − E{x_m})ᵀ} = Q_n·δ_nm     (1.18)

where Q_n is a positive semidefinite matrix.
Let us now consider the continuous parameter case. We may define the white process {x_t, t ∈ T} as a Markov process for which

E{x_t | x_τ} = E{x_t} ,  t > τ     (1.19)

If the x_t's are normally distributed for each t ∈ T, then the process is a white Gaussian process.
This formal definition, though convenient, hides a difficulty: a white process is not physically realisable, and in practice it is approximated by a wide-band process. Consider, for example, the exponentially correlated process with

ψ(t + τ, t) = (σ²β/2)·e^(−β|τ|)     (1.20)

Now for large β this process approximates the white process. We note that

lim_{β→∞} ψ(t + τ, t) = σ²·δ(τ) = Q(t)·δ(τ)     (1.21)-(1.22)

where Q(t) is the positive semi-definite covariance matrix; for obvious reasons this process is often called a delta-correlated process.
The spectral density is

Φ(ω) = ∫_{−∞}^{∞} e^(−jωτ)·ψ(t + τ, t) dτ     (1.23)

Φ(ω) = σ² / (1 + (ω/β)²)     (1.24)

and with β → ∞, Φ(ω) = σ², a positive constant. A flat spectrum over all frequencies implies infinite power, which is why the white process is not physically realizable.
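The limit quoted in Eqn.(1.21) can be checked directly from the autocorrelation (1.20): the area under ψ is independent of β, while its width shrinks like 1/β.

```latex
\psi(\tau) = \frac{\sigma^{2}\beta}{2}\, e^{-\beta|\tau|},
\qquad
\int_{-\infty}^{\infty} \psi(\tau)\, d\tau
  = \sigma^{2}\beta \int_{0}^{\infty} e^{-\beta\tau}\, d\tau
  = \sigma^{2}
```

so ψ(τ) → σ²·δ(τ) as β → ∞, which is the delta-correlated idealisation used above.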
A Gaussian signal has the following properties [14]:
i)If a Gaussian signal is passed through a linear network,
the output signal is again Gaussian; this invariance
property belongs only to Gaussian signals.
ii) In fact, as a consequence of the central limit theorem, even a non-Gaussian signal tends to become Gaussian when passed through a filter whose pass band is narrow with respect to the spectrum of the input signal.
iii) Gaussian processes occur widely in nature, and it is indeed fortunate that they are at the same time the most tractable analytically.
CHAPTER-2

IDENTIFICATION

Most of the problems in system engineering, and more specifically system identification(#1), are concerned with knowledge of the current and future behaviour of the system and the consequent establishment of a system model, together with the identification function, decision function and control function. The problem of system identification is to determine a model of the system from records of system input-output data, i.e. to determine
the system described by

ẋ(t) = f(x(t), u(t), θ(t)) + w(t)     (2.1)

where x(t) is the state vector, u(t) is the input signal, w(t) is the input disturbance, θ(t) is the vector of unknown parameters of the system, and v(t) is the observation noise. The state vector x(t) is observed through

z(t) = h(x(t), θ(t)) + v(t)     (2.2)

The general identification problem is thus one of determining the parameters of the system differential equation, and the mean and variance coefficients of the system noise w(t) and observation noise v(t) as well. The noises w(t) and v(t) are assumed to satisfy, among others, the following conditions:
iii) No observation of the input noise w(t) is possible.

iv) The system model and/or observation model are non-linear.

In summary, identification means the diagnosis of a black box.
The problem is known under different guises such as 'identification'; 'estimation'; 'characterization'; 'evaluation'; or 'modelling'. As per Zadeh (1971) [ ], identification is the determination, on the basis of input and output, of a system within a specified class of systems to which the system under test is equivalent. In reality, we can't expect exact equivalence; thus it will be approximate. Though identification and estimation are two separate terms, they are still used interchangeably.
For identification, we collect normal input and output records, and determine a weighting function or model by some suitable scheme. Identified models range from the simplest form of system, using ON-OFF or auxiliary control variables, to complicated multi-loop systems designed with Kalman filtering, Bellman's Dynamic Programming and Pontryagin's Maximum Principle.
Models are needed to facilitate the application of conventional PID controllers, computer-aided design and any theoretical concept one would work with, and the model was usually required to be of a specified type. However, it is this aspect of inferring models from observations and studying their properties that motivated the adoption of identification procedures. Availability of computers gave further impetus to this field.
The interest in this field has widely differing roots [18]:

i) the desire of the practising engineer to obtain specific knowledge of the plants and of their control, in order to improve operating efficiency or to lower the cost;
ii) the task of studying the high performance of aero and space vehicles, as well as the dynamics of down-to-earth objects, and other types of control;

iv) research into biological functions, e.g. the neuro-muscular system (eye pupil response, arm and leg control, heart rate control etc.).
The need and possibility of estimation has undergone
substantial changes with the development of computer hardware
and software.
2.2 PURPOSE OF IDENTIFICATION [2]
While contemplating the solution of an identification
problem, we must have the purpose of the identification in
mind.
The purpose may vary from one field to the other and from one problem to the other. In control problems the final aim of identification is usually to design a control system, even if the immediate purposes differ; typical purposes include:

1) Determination of the transfer coefficients of an industrial process.

2) Design of a control to transfer the process from one state to another.

3) Design of a process regulator which minimises the variations in the output.
In the first case even a crude model of the process may serve the purpose. In most practical problems, sufficient a-priori knowledge is not available, and hence experiments are required to be performed, during which the normal operation of the process is affected/disturbed.
To keep the disturbances within acceptable limits, it might be necessary to constrain the experiment, which may have several influences on the estimated results. During the experimentation the following questions arise:

1) How should the experiment be planned? 2) Should one use the information gathered so far to plan a new experiment, perform that experiment, and iterate until satisfactory results are obtained?
3)What type of perturbation signal should be used to get as
good results as possible within the limits of experimental
conditions?
4)If a digital computer is used what is the suitable choice of
the sampling interval ?
In spite of the large amount of work carried out in this area, the choice of experimental conditions remains problem-dependent; in practice most of the decisions are guided by experience.
Interaction with a system needs some concept of how its variables relate to each other. Such an assumed relationship is expressed in terms of a mathematical expression, qualified by adjectives such as time-continuous or time-discrete, lumped or distributed. Where the system is of complex nature and can't be described by simple regularities, several model variants may be tried. After studying and validating a number of variants of the system model one may choose the most suitable one. Since no mathematical model can truly represent a physical system, we can only obtain an equivalent model.
To obtain a model [2], specification of a class of systems within which two systems are considered "equivalent" is necessitated. Equivalence may be defined through a loss function

V = V(y, y_M)

two models being equivalent for the purposes of the identification when

V(y₁, y_M1) = V(y₁, y_M2)
Classes of models characterised by impulse responses, transfer functions, covariance functions, spectral densities or Volterra series are termed NON-PARAMETRIC models, and those characterised by state-space representations such as (2.5) are termed PARAMETRIC models.
Within the parametric class, equivalence is defined by means of a loss function, and the work is done by defining the model set {S(θ)}, where θ is the parameter. The estimation problem is then treated with the tools of estimation and decision theory. In particular, the following notions of identifiability are useful [19]:
An experiment (i.e. the set of experimental conditions) is informative enough if it allows one to distinguish between different models. A model structure is globally identifiable if it is identifiable for all admissible parameter values θ*, and locally identifiable if it is identifiable within B(θ*, ε), where B(θ*, ε) denotes an ε-neighbourhood of θ*.
The identifiability concept will provide useful guidance in finding a model M such that it is a true representation of the unknown system S.
2.5 IDENTIFICATION PROCEDURES [17]:
2.5.1 LINEAR SYSTEM:
The mathematical model of a time-invariant linear system may be written in the state-space form (2.6)-(2.7) or in the input-output form (2.8), where

u        - Input variable
x        - State vector
y        - Output variable
v, w     - Noise
A,B,C,D  - Matrices of parameters
a_i, b_i - Parameters
N_a, N_b - Upper bounds of the past history considered
The input noise w(t) and output noise v(t) may be assumed to be white, mutually independent and Gaussian distributed.
2.5.1.1 DETERMINISTIC APPROACH
The identification schemes can be classified as deterministic or stochastic. The deterministic schemes admit zero-mean noise, but they can't express the uncertainty of the estimates caused by the noise. Some examples illustrating the main idea may be found in [17].
Some identification procedures apply an error cost function assuming the existence of noise. To this category belong the stochastic methods of identification, which are of great practical importance. The noise acting on the system is assumed to be of a known type of distribution, with a convenient first moment, i.e. mean value, but the actual noise acting on the system at the sampling instant is unknown.
The noise is unmeasurable, whereas the input and output signals can be measured. However, measured data never give the exact values of the parameters; an estimator should therefore satisfy some conditions like unbiasedness, consistency and convergence, so that the estimates lie in the neighbourhood of the true value, i.e. the variation from the true value will be very small.
Stochastic approaches to identification are based on a probabilistic concept, e.g. minimisation of

V(y, y_M) = ∫₀^∞ e²(t) dt     (2.9)

For non-linear system identification, it is convenient to pose the problem in the state-space form of Eqn.(2.10).
Furthermore, it is customary to estimate both states and parameters simultaneously, if needed. The various approaches include:

- Non-linear filtering
- Invariant imbedding
2.6 DESIRABLE FEATURES OF AN IDENTIFICATION PROCEDURE [20]:
a) Class of the models: Because of the versatility of the state-space model, it is the obvious choice for representation. In fact, an identified state-space model can be directly used, since almost all modern control theory is built on it, even when the structure does not correspond to the physical process.
Test signals such as pulse functions or step functions may be used. It is only with sufficiently rich inputs that complete identification is possible; with a degenerate input (e.g. a constant input) and, apart from that, an arbitrary initial state (zero initial states are very unrealistic), different models can exhibit the same external behaviour, i.e. the same input-output response for every initial condition.
CHAPTER-3
The work of Wiener and Kolmogorov introduced the statistical point of view into communication and control theory. The subject of estimation is a vast one.
The method of least squares arose from certain specific problems of interpreting astronomical observations and making estimates for prediction; beginnings were also made on a theory of time series. The minimisation of squared errors was first used by Gauss in 1795; however, it was first published later, and, unaware of these developments, others independently developed the method around 1808.
A systematic theory of stochastic processes was developed by Kolmogorov, and the necessity of extracting signals from additive noise led to the theory of estimation of waveforms.
3.2 ESTIMATION
Estimation problems are of assorted types: we can have linear or non-linear ones, and they may be problems of prediction, filtering or smoothing of a process.
We observe a process over the interval [t₀, t_f]     (3.1)

Depending on the location of the estimation instant relative to this interval, the problem is one of prediction, filtering or interpolation. Of particular interest is the smoothing problem, fixed-interval or fixed-lag     (3.2)

The risk is

C = E‖ĝ(t) − g(t)‖²     (3.3)
Let us now consider the problem of estimating g(t) from the observations

z(τ) : t₀ ≤ τ ≤ t_f     (3.4)

of z(t), so that the observation interval consists of the closed interval [t₀, t_f]. Assume that g(t) and z(t) have zero mean.
The linear estimate is

ĝ(t) = lim_{Δ→0} Σ_{i=1}^{N} H(t, τ_i)·z(τ_i)·Δ = ∫_{t₀}^{t_f} h(t, τ)·z(τ) dτ     (3.5)

and the orthogonality principle requires

E{[g(t) − ∫_{t₀}^{t_f} h(t, τ)·z(τ) dτ]·z(σ)} = 0 ,  t₀ < σ < t_f     (3.6)
Equivalently, in terms of correlation functions (3.7),

φ_gz(t, σ) = ∫_{t₀}^{t_f} h(t, τ)·φ_z(τ, σ) dτ     (3.8)
In case the processes are stationary, the observation extends over (−∞, t] and the observed process is white, Eqn.(3.8) reduces to

φ_gz(t − σ) = ∫_{−∞}^{t} h(t − τ)·δ(τ − σ) dτ = h(t − σ) ,  −∞ < σ ≤ t     (3.9)

and in discrete time

φ_gz(l) = Σ_{j=−∞}^{k} h(k − j)·φ_z(j − l) ,  l ∈ (−∞, k]     (3.10)
For stationary processes over a doubly infinite interval, transforming yields the Wiener filter

H(s) = Φ_gz(s) / Φ_z(s)     (3.11)

In the realizable case the spectrum is factored as

Φ_z(s) = Φ_z⁺(s)·Φ_z⁻(s)     (3.12)

H(s) = (1/Φ_z⁺(s))·[Φ_gz(s)/Φ_z⁻(s)]₊     (3.13)
The Wiener theory dominated the period, but there were reasons for being dissatisfied [21]:
i) They were rather complicated, often requiring the solution of auxiliary differential and algebraic equations.

ii) They were not easily updated with increases in the observation interval.

iii) They could not be conveniently adopted to the vector case (dimension > 1).

iv) Moreover, the Wiener filter and its extensions were limited inherently to linear systems, because the results were expressed in terms of the transfer function or the impulse response.
In the late fifties, interest in the problem of determining satellite orbits led Swerling to tackle the problem, and he developed some useful recursive ideas; related work was done by a group at the Bell Space Laboratories, who added useful ideas on deterministic problems but developed
somewhat more restricted algorithms than Swerling's. Groups at other laboratories also worked with such data. The classical approach suffered from several drawbacks:

- Numerical determination of the optimal impulse response is often quite involved and poorly suited to machine computation. The situation gets rapidly worse with generalisations (e.g. prediction of non-stationary processes, growing-memory filters), which frequently require new derivations, considered difficult by non-specialists.

- The mathematics of the derivations are not transparent, so the physical content of the problem is obscured.
Kalman [24] changed the conventional formulation of the problem. Instead of seeking h(t, τ) from Eqns.(3.14)-(3.15), he worked with

φ_gz(t, σ) = ∫_{t₀}^{t_f} H(t, τ)·φ_z(τ, σ) dτ ,  t₀ ≤ σ ≤ t_f     (3.16)
Any process may be decomposed into two components, one of which is white and has zero mean. In this particular case also we may decompose z(t) into two components, one of which can be determined from the past values of z(·), and the other a white process ν(t) containing the new information in z(t). The term "innovation process" is due to Kailath.
The idea is to estimate the state in terms of the innovation process: since all the information of the residual is contained in the innovation, it is also appropriate to express the estimate in terms of the innovation. Thus the estimate ĝ(t) can be written as

ĝ(t) = ∫_{t₀}^{t_f} H(t, τ)·ν(τ) dτ     (3.17)

which satisfies

E{[g(t) − ∫_{t₀}^{t_f} H(t, τ)·ν(τ) dτ]·νᵀ(σ)} = 0 ,  t₀ ≤ σ ≤ t_f     (3.18)

or

E{g(t)·νᵀ(σ)} = ∫_{t₀}^{t_f} H(t, τ)·E{ν(τ)·νᵀ(σ)} dτ ,  t₀ ≤ σ ≤ t_f     (3.19)
Let the covariance matrix of ν(t) be given as

Cov ν(t) = E{ν(t)·νᵀ(σ)} = V_v(t)·δ(t − σ)     (3.20)

so that, from Eqn.(3.19),

H(t, σ) = E{g(t)·νᵀ(σ)}·V_v⁻¹(σ)     (3.21)

For the signal model with additive white noise,

y(t) = C(t)·x(t)     (3.22)
z(t) = y(t) + v(t)     (3.23)
Kalman also assumed that x₀ and w(·) are uncorrelated; this assumption is physically reasonable and has important consequences (3.24). We restrict the discussion to the filtered estimate. From Eqn.(3.21) for t_f = t, we have

x̂(t/t) = ∫_{t₀}^{t} E{x(t)·νᵀ(σ)}·V_v⁻¹(σ)·ν(σ) dσ     (3.25)
On differentiating both sides of Eqn.(3.25) and using the signal model, one obtains the Kalman filter equations (3.26)-(3.27), where the gain is given by Eqn.(3.28).
The resulting variance equation is of the Riccati type (an equation first studied in 1724) and is also encountered in the calculus of variations. Non-linear two-point boundary-value problems are often encountered in the classical approaches, whereas the Kalman equations are comparatively easy to solve on a digital or analog computer, because they involve only an initial condition and linear operations on the data.
3.3 LEAST-SQUARE ESTIMATION

The basic idea of least-square estimation is traceable to Gauss, who used it to help astronomers determine planetary orbits.
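The idea can be stated in one formula: with a model linear in the parameters, minimising the sum of squared residuals leads to the normal equations θ = (AᵀA)⁻¹Aᵀy. A minimal sketch for a straight-line fit (the function name and data are illustrative only):

```python
# Ordinary least squares via the normal equations, specialised to a
# straight-line fit y = m*t + c.
def lstsq_line(ts, ys):
    n = len(ts)
    s_t = sum(ts); s_y = sum(ys)
    s_tt = sum(t * t for t in ts)
    s_ty = sum(t * y for t, y in zip(ts, ys))
    det = n * s_tt - s_t * s_t        # determinant of the 2x2 normal matrix
    m = (n * s_ty - s_t * s_y) / det
    c = (s_tt * s_y - s_t * s_ty) / det
    return m, c

# Noise-free data on the line y = 2t + 1 is recovered exactly.
m, c = lstsq_line([0, 1, 2, 3], [1, 3, 5, 7])
```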
A complete and rigorous treatment of innovations was given by Whittle, and of stochastic processes in the book "Stochastic Processes" by Doob, Wiley (1953). The elegant solution given by Kalman also applies to problems encountered in optimal control.
Thus, it paved the way for studies on quadratic minimisation and established the "duality" between the filtering and optimal control problems. Kalman also established that, under steady-state conditions, if the conditions of "observability" and "controllability" hold, the filter is asymptotically stable; related conditions arise in the classical Wiener problem, to which they roughly correspond.
It is a striking fact that the continuous-time version, called the Kalman-Bucy or Bucy-Kalman filter, was developed almost simultaneously. Bucy, then at the Applied Physics Laboratory of Johns Hopkins University, recognised the analogies with optimal control (1960). Later Kalman obtained the continuous-time results as limits of the discrete-time formulas.
The classical route went through the Wiener-Hopf equation. However, Siegert, in 1953-55, had already shown that the Wiener-Hopf equation on a finite time interval could be solved by reduction to a Riccati differential equation. Naturally, the question of the linearised problem arose as well, but the early treatments did not anywhere employ the Riccati equation.
Swerling gave very useful and interesting ideas for linear and non-linear filtering through his papers published during 1958, 1959 and 1971, many of which have been widely overlooked.
CHAPTER-4
Early identification relied on transient-response or frequency-response data, obtained either as reference data or during planned experiments. With the advent of electronic computers, the field of parameter estimation and identification of dynamic systems was revolutionised; indeed, this was the most important development in the field. The availability of digital computers meant that it made sense to 'go digital', formulating the problem in discrete-time terms so that the mathematical machinery matched the implementation.
Discrete-time identification is well documented (Balakrishnan and Rao, 1976; Goodwin and Payne, 1977; Ljung and Soderstrom, 1983). Contrary to this, continuous-time identification using digital computers has seemingly not obtained the same status, despite steady developments in continuous-time model estimation and identification (Young, 1981).
Unbehauen and Rao (1987, 1988) gave a comprehensive and self-contained review of recent developments in continuous-time identification, and of self-tuning and adaptive control. The indirect approach, via discrete-time model identification, has the advantage of using the well-developed discrete parameter-estimation machinery, making it feasible for on-line real-time applications (Isermann, 1984). In contrast to the indirect approach,
the direct continuous-time approach has received more attention in recent years. However, the problem of initial conditions arises with the integration involved: the initial conditions have to be estimated or eliminated, and linear integral filters (LIF) have been used for this purpose. However, the LIF is an approximation, and the present method may include some numerical errors.
(SS)
(4.1)
(4.2)
OBSERVATION EQUATION
Where
x(t)
is
an
dimensional
vector
of
state
variables
60
characterising the system dynamics behaviour;
u(t) is an m-dimensional vector of measured input variables, assumed to be measured exactly; w(t) is an 'l'-dimensional vector of unmeasurable input disturbances; and θ is a parameter vector that is nominally constant, but whose elements may often be assumed to be slowly time-varying (when compared with the state) to reflect non-stationarity. The linear approximation to Eqns.(4.1) & (4.2) is given by Eqn.(4.3).
where Φ(t) and Γ(t) are suitably dimensioned matrices. An alternative is the polynomial matrix description (PMD) of the system and observation in the general form

A(s)·x(t) = B(s)·u(t)     (4.4)

together with an observation equation (4.5),
or an equivalent expanded form (4.6), where A_i, B_i, i = 1, 2, ....., n are appropriately dimensioned matrices, some of which may be null.
ξ(t) is a 'p'-dimensional vector of stochastic disturbances,

ξ(t) = G_N(s)·e(t)     (4.7)

where G_N(s) = C⁻¹(s)·D(s) is the noise transfer function.
If we combine the deterministic and stochastic parts, we obtain

y(t) = G(s)·u(t) + G_N(s)·e(t)     (4.8)

There are other forms of representation of models, such as the AUTOREGRESSIVE MOVING AVERAGE with EXOGENEOUS input (ARMAX) model, defined with

η(t) = (sI − Φ)⁻¹·w(t) + v(t)     (4.9)
Alternately, we can consider the special 'innovations' or Kalman filter form, i.e.

dx̂(t)/dt = Φ(t)·x̂(t) + Γ(t)·u(t) + K(t)·ν(t)     (4.10)
In transfer-function form the deterministic part is

y(t) = (B/A)·u(t)     (4.11)-(4.12)

Direct minimisation of the output error would require the differentiation of a possibly noisy signal (4.13); instead, the current estimates of the parameters are used to form a prediction of y(t), which leads to the generalised error. In other words,

ε(t) = (C/D)·[y(t) − (B/A)·u(t)]     (4.14)
Prior information on the probability distribution results in the ML formulation, and in the Bayesian approach the probability concept is most important in this context. Many of these methods can be cast in recursive form; this dissertation concentrates on the Extended Kalman Filter.
The Kalman-filter-based methods involve some form of linearisation about the current estimates, to allow for the definition and solution of the resulting linear filtering problem. It has been shown that the Discrete EKF is an approximate PE (prediction error) algorithm.
4.4 MODEL

In order to obtain the continuous-time model for the EKF, we have simply augmented the state vector x(t) with the unknown parameter vector θ and applied the filter to the augmented model (4.15).

4.5 LINEARISATION [27]
The idea of obtaining an approximate solution of a non-linear model by linearization is classical. The process of linearization proceeds from the non-linear model

ẋ = f(x, θ, t)     (4.16)
The purpose of studying the solution of the model (Eqn.(4.16)) is to understand the motion of the physical system; when we ignore modelling error, we assign a one-to-one correspondence between these two concepts. Generally, infinitely many solutions exist, one for each set of initial conditions. Hence, we must select a nominal solution about which to linearise.
We know that any differentiable function f(x, θ, t) admits a Taylor series expansion about the nominal values (x̄, θ̄), Eqns.(4.17)-(4.18); componentwise,

ẋ_i = f_i(x̄, θ̄, t) + (∂f_i/∂x)|₀·(x − x̄) + (∂f_i/∂θ)|₀·(θ − θ̄)
        + (1/2)·[(x − x̄)ᵀ (θ − θ̄)ᵀ]·F_i·[(x − x̄); (θ − θ̄)] + H     (4.19)

where |₀ denotes evaluation at x = x̄, θ = θ̄ and H denotes higher-order terms.
The observation matrix is

C(t) = (∂g/∂x)|_{x̄,θ̄}     (4.20)

with δy = y − ȳ. Some compactness in notation is achieved by defining

F_i = [ ∂²f_i/∂x²    ∂²f_i/∂x∂θ
        ∂²f_i/∂θ∂x   ∂²f_i/∂θ²  ] |_{x̄,θ̄}     (4.21)

Φ(t) = [ ∂f/∂x , ∂f/∂θ ] |_{x̄,θ̄}     (4.22)
With δz = [δxᵀ, δθᵀ]ᵀ, the expansion gives the linearised model

δẋ(t) = Φ(t)·δz(t) + (1/2)·[δzᵀF₁δz, ....., δzᵀF_nδz]ᵀ + higher-order terms     (4.23)
δy(t) = C(t)·δz(t) + higher-order terms     (4.24)
To obtain the linearised model, one is required to expand each non-linear scalar function in this way; terms that are already linear are left untouched.
The brute-force application of the recipe (Eqn.(4.19)) simplifies considerably when the second partials

∂²f_i/∂x²  and  ∂²f_i/∂x∂θ     (4.25)

are zero for i = 1, ....., n. For this special case of non-linear system, the first three terms in the Taylor series expansion, Eqns.(4.23) & (4.24), give
δẋ(t) = Φ(t)·δz(t) + (1/2)·[δθᵀ(∂²f₁/∂θ²)δθ, ....., δθᵀ(∂²f_n/∂θ²)δθ]ᵀ     (4.26)
δy(t) = C(t)·δz(t)     (4.27)
Such systems are called bilinear systems and have been studied extensively in the literature.
The non-linear system of Eqn.(4.18) has a unique solution provided f satisfies a Lipschitz condition

‖f(x₁, θ₁, t) − f(x₂, θ₂, t)‖ ≤ κ·‖[x₁ − x₂; θ₁ − θ₂]‖     (4.28)
in the region of interest, with κ determined by the linear terms. This means that the linearization is only valid when x(t) remains close to the nominal solution, i.e. when the remainder h(x, θ, t) satisfies

lim_{‖δx‖→0} ‖h(x, θ, t)‖ / ‖δx‖ = 0     (4.29)
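As a concrete check of the first-order expansion, a central-difference Jacobian of the non-linear state function of Numerical Problem 1 later in this chapter, evaluated at the origin, reproduces the linearised plant matrix of Eqn.(4.70). The helper names below are ours:

```python
def f(x):
    """Nonlinear state function from Numerical Problem 1:
    x1' = x2,  x2' = -0.5*x2 - 0.166*x2**3 - x1."""
    return [x[1], -0.5 * x[1] - 0.166 * x[1] ** 3 - x[0]]

def jacobian(f, x, eps=1e-6):
    """Central-difference Jacobian of f at x (first-order linearisation)."""
    n = len(x)
    J = [[0.0] * n for _ in range(n)]
    for j in range(n):
        xp, xm = list(x), list(x)
        xp[j] += eps
        xm[j] -= eps
        fp, fm = f(xp), f(xm)
        for i in range(n):
            J[i][j] = (fp[i] - fm[i]) / (2 * eps)
    return J

A = jacobian(f, [0.0, 0.0])   # expected [[0, 1], [-1, -0.5]]
```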
Early applications of filtering to non-linear systems were based on linearization about a nominal trajectory. In both cases (linearised and extended filters) the order of the filter is p(p+3)/2, instead of p in the case of linear equations. This is due to the fact that the equation for V_x, i.e. the error variance, depends on the results of observation in these cases, in consequence of which this equation cannot be integrated separately beforehand and must be integrated together with the equation for x̂.
Let the system and observation models be

ẋ(t) = φ(x(t), t) + w(t)     (4.30)   SYSTEM MODEL
y(t) = h(x(t), t)     (4.31)   OBSERVATION MODEL

Linearising about a nominal trajectory x_n(t), the estimate of the deviation δx(t) = x(t) − x_n(t) satisfies

dδx̂/dt = (∂φ(x_n(t), t)/∂x_n)·δx̂ + K(t)·[z(t) − y_n(t) − (∂h(x_n(t), t)/∂x_n)·δx̂]     (4.32)-(4.33)
and the error variance satisfies

dV_x(t/t)/dt = (∂φ(x_n(t), t)/∂x_n(t))·V_x + V_x·(∂φ(x_n(t), t)/∂x_n(t))ᵀ + V_w
      − V_x·(∂h(x_n(t), t)/∂x_n(t))ᵀ·V_v⁻¹·(∂h(x_n(t), t)/∂x_n(t))·V_x     (4.34)-(4.35)

where

V_x(t/t) = cov[x̃(t/t), x̃(t/t)]     (4.36)
The estimate x̂(t/t) can be determined as in Eqn.(4.37).

4.7 EXTENDED KALMAN FILTER

We can write an expression for x̂(t/t) by adding Eqns.(4.30a) and (4.32). With x̂ = x_n + δx̂(t/t), we then have

dx̂/dt = φ(x_n(t), t) + (∂φ(x_n(t), t)/∂x_n(t))·δx̂(t/t)
         + K(t)·[z(t) − y_n(t) − (∂h(x_n(t), t)/∂x_n(t))·δx̂(t/t)]     (4.38)
By noting that

φ(x̂(t/t), t) = φ(x_n(t), t) + (∂φ(x_n(t), t)/∂x_n(t))·δx̂(t/t)     (4.39)

and

h(x̂(t/t), t) = h(x_n(t), t) + (∂h(x_n(t), t)/∂x_n(t))·δx̂(t/t)     (4.40)

Eqn.(4.38) can be written as

dx̂(t/t)/dt = φ(x̂(t), t) + K(t)·[z(t) − h(x̂(t/t), t)]     (4.41)
Since

x̃(t/t) = x(t) − x̂(t/t) = [x_n(t) + δx(t)] − [x_n(t) + δx̂(t/t)] = δx̃(t/t)     (4.42)-(4.43)

the variance equation retains its form,

dV_x(t/t)/dt = (∂φ(x̂(t), t)/∂x̂(t))·V_x + V_x·(∂φ(x̂(t), t)/∂x̂(t))ᵀ + V_w
      − V_x·(∂h(x̂(t), t)/∂x̂(t))ᵀ·V_v⁻¹·(∂h(x̂(t), t)/∂x̂(t))·V_x     (4.44)

where now

∂φ(x_n(t), t)/∂x_n(t) → ∂φ(x̂(t/t), t)/∂x̂(t/t)     (4.45)
∂h(x_n(t), t)/∂x_n(t) → ∂h(x̂(t/t), t)/∂x̂(t/t)     (4.46)

i.e. the Jacobians are evaluated at the current estimate rather than at the nominal trajectory.
These algorithms are needed because in some cases the differential equations of the mathematical model of a studied system may contain unknown parameters whose values can be known only approximately.
For example, the equations of motion of an aircraft contain the aerodynamic coefficients, determined by testing of the aircraft; naturally they are known only approximately. Thus φ and h depend on the parameter vector θ:

ẋ(t) = φ(x(t), θ, t) + w(t)     (4.47)
y(t) = h(x(t), θ, t)
where φ(x, θ, t) and h(x, θ, t) are completely known functions of their arguments, and the constant parameter vector θ is determined by the differential equation θ̇ = 0. Thus the state vector may be augmented with θ, Eqns.(4.48)-(4.49), and the EKF applied to the augmented system.
4.9 EXTENDED KALMAN FILTER AS PARAMETER ESTIMATOR [28,29,19]
The problem of parameter estimation of a stochastic linear dynamic system has received considerable attention, and EKF-based estimation has been applied successfully. An augmented state estimator is used to obtain joint estimates of states and parameters. To make the algorithm applicable, the parameter vector, governed by the trivial state equation θ̇ = 0, is appended to the state equation.
The above EKF approach to parameter estimation has a strong intuitive appeal and offers the possibility of using standard Kalman filter programmes to solve the parameter estimation problem. It has, however, the following drawbacks:
i) The computational burden of estimating the augmented state may be prohibitive. Further, since this is based on linearization, the conditions required for applicability of the method are quite vague, and the performance of the solution has not been completely satisfactory.

ii) The EKF approach has divergence associated with it unless the filter is modified; the modification requires additional computing time.
Observability is the necessary condition for on-line application. Magill (1965), and later Hilborn, proposed banks of parallel Kalman filters, which is totally impractical for many applications.
Saridis and Stein (1968) proposed stochastic approximation for single-output systems. Mehra (1971) used correlation techniques and also treated the problem for the multivariable case, as did Budin (1972). The above observations motivated the development of computationally economical and robust joint parameter and state estimators.
Sinha and Nagraja in 1990 [30] presented a continuous-time Extended Kalman Filter algorithm. An algorithm involving the sampling interval in its equations can't be regarded as a true continuous analogue of the Discrete EKF; however, if we formulate the continuous-time model directly, this dependence can be avoided. A modification of this method is presented here. It has been observed that the modified algorithm gives very high accuracy of parameter and state estimation without going in for any modification of the gain matrix (weighting function).
In all that follows, the measured input and output data are assumed to be related by the model

dx(t)/dt = A·x(t) + B·u(t) + w(t)     (4.50)
z(t) = C·x(t) + v(t)     (4.51)

where u(t), z(t) and x(t) are vectors of dimension nu, nz and nx respectively, and

E{w(t)·vᵀ(s)} = V_vw·δ(t − s)

It is assumed that the noises are white and Gaussian.
Let us consider the EKF approach to estimate the unknown parameters. The augmented state model is

d/dt [x(t); θ(t)] = [φ(θ(t), x(t), u(t)); 0] + [w(t); 0]     (4.52)

where

φ(θ(t), x(t), u(t)) = A(θ)·x(t) + B(θ)·u(t)     (4.53)
The filter equations, with initial conditions x̂(0) = x̄(0), θ̂(0) = θ̄(0), P(0) = P₀, are Eqns.(4.54)-(4.58), where

M(θ, x(t), u(t)) = ∂(A(θ)·x + B(θ)·u)/∂θ |_{θ=θ̂}     (4.59)

D(θ, x(t)) = ∂(C(θ)·x)/∂θ |_{θ=θ̂}     (4.60)
The function M(θ, x̂, u) is an [nx × nθ] matrix and D(θ, x̂) is an [ny × nθ] matrix. These functions are of course linear in x and u, and depend in an essential way on the parameterization. θ̄ and V_θ(0) represent some a-priori information about the parameter vector θ; large initial variances are used when no such information is available.
Let us define [32]

ψ(t) = [ [xᵀ uᵀ]       0
            0       [xᵀ uᵀ] ]  and  θ = vec[A B]     (4.61)

so that

ẋ = ψ(t)·θ     (4.62)
The filter equations then take the form

dx̂/dt = ψ(t)·θ̂ + K_x·[y(t) − C·x̂(t)]     (4.63)
dθ̂/dt = K_θ·[y(t) − C·x̂(t)]     (4.64)
K_x = V_x·Cᵀ·R⁻¹     (4.65)
K_θ = V_θx·Cᵀ·R⁻¹     (4.66)-(4.67)

where

V_x  = E{(x − x̂)(x − x̂)ᵀ}
V_θx = E{(θ − θ̂)(x − x̂)ᵀ}
V_θ  = E{(θ − θ̂)(θ − θ̂)ᵀ}     (4.68)
To preserve symmetry, the covariances are symmetrised at each step:

V_x = [V_x(t_i) + V_xᵀ(t_i)] / 2 ,  V_θ = [V_θ(t_i) + V_θᵀ(t_i)] / 2     (4.69)
The overall algorithm is:

Call Randu                       (generate noise sequences)
Call Runge                       (integrate the system: x_i -> x_{i+1})
while T <= T_S do
    while T <= T_F do
        Solve estimator and variance equations
        Call Runge               (x_i -> x_{i+1}, theta_i -> theta_{i+1},
                                  V_x^i -> V_x^{i+1}, V_thetax^i -> V_thetax^{i+1},
                                  V_theta^i -> V_theta^{i+1})
        T = T + H
Output x_hat, theta_hat
NUMERICAL PROBLEM 1

Consider the well-known Rayleigh equation [34], of the form

d²x/dt² + 2ζ·(1 − ẋ²)·ẋ + x = 0

the particular case studied being

ẋ₁ = x₂
ẋ₂ = −0.5·x₂ − 0.166·x₂³ − x₁

We can linearize these equations about the origin to obtain a linearised model as under:

ẋ(t) = [  0     1  ] x(t) + [ 0 ] U(t) + w(t)     (4.70)-(4.71)
       [ −1   −0.5 ]        [ 0 ]
The simulation used the initial conditions

x(0) = x̂(0) = [2 0]ᵀ ,  θ(0) = θ̂(0) = 0
V_x(0) = diag[10 10]
V_θ(0) = diag[10 10 10 10 10 10]
V_xθ(0) = 0
R_v = diag[0.25 0.25]
Q_w = diag[0.01 0.01]

and a sampling interval of 1 second.
Integration was carried out with the Runge-Kutta-Gill method:

y_{i+1} = y_i + (h/6)·[k₁ + 2(1 − 1/√2)·k₂ + 2(1 + 1/√2)·k₃ + k₄]

k₁ = f(x_i, y_i)
k₂ = f(x_i + h/2, y_i + (h/2)·k₁)
k₃ = f(x_i + h/2, y_i + (−1/2 + 1/√2)·h·k₁ + (1 − 1/√2)·h·k₂)
k₄ = f(x_i + h, y_i − (1/√2)·h·k₂ + (1 + 1/√2)·h·k₃)
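The Runge-Kutta-Gill step above is straightforward to implement; a minimal scalar version (function names ours), checked on y' = y over one unit interval:

```python
import math

SQ2 = math.sqrt(2.0)

def rk_gill_step(f, x, y, h):
    """One step of the Runge-Kutta-Gill method for y' = f(x, y)."""
    k1 = f(x, y)
    k2 = f(x + h / 2, y + (h / 2) * k1)
    k3 = f(x + h / 2, y + (-0.5 + 1 / SQ2) * h * k1 + (1 - 1 / SQ2) * h * k2)
    k4 = f(x + h, y - (1 / SQ2) * h * k2 + (1 + 1 / SQ2) * h * k3)
    return y + (h / 6) * (k1 + 2 * (1 - 1 / SQ2) * k2
                          + 2 * (1 + 1 / SQ2) * k3 + k4)

# Integrate y' = y from y(0) = 1 up to x = 1 and compare with e.
y, x, h = 1.0, 0.0, 0.01
for _ in range(100):
    y = rk_gill_step(lambda x, y: y, x, y, h)
    x += h
```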
Random numbers were generated for simulating the noise sequences. Uniformly distributed numbers x_j in (0, 1) were produced recursively from a seed, and Gaussian distributed numbers were then obtained by the Box-Muller transformation

y_j = √(2σ²·ln(1/x_j))
W_i = y_j·cos(2π·x_{j+1})

where x_j, x_{j+1} are successive uniform samples.
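A sketch of the Box-Muller step described above, using only the standard library (the 1 − random() guard, which avoids log(0), is our own defensive detail):

```python
import math
import random

def gaussian_samples(n, sigma=1.0, seed=3):
    """Box-Muller transform: two independent uniforms x1, x2 on (0, 1]
    give a N(0, sigma^2) sample w = sigma*sqrt(-2*ln x1)*cos(2*pi*x2)."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        x1 = 1.0 - rng.random()           # in (0, 1], avoids log(0)
        x2 = rng.random()
        out.append(sigma * math.sqrt(-2.0 * math.log(x1))
                   * math.cos(2.0 * math.pi * x2))
    return out

w = gaussian_samples(10_000)
mean = sum(w) / len(w)
var = sum((v - mean) ** 2 for v in w) / len(w)
```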
In order to study the effect of initial conditions, the estimation was repeated for various initial states; the results are shown in Table 4.2.
TABLE 4.2
ESTIMATED PARAMETERS FOR VARIOUS INITIAL CONDITIONS [SNR=40]

INITIAL STATES   FINAL TIME   PARAMETERS
[8 0]T           30           0.998  0.0102  1.0179  0.4911  0.0079  0.0044
[5 0]T           30           0.996  0.0063  0.0255  0.9847  0.4781  0.0028
[4 0]T           30           0.996  0.0337  0.9751  0.4721  0.0021  0.0077
[3 0]T           30           0.995  0.0480  0.9589  0.4633  0.0013  0.0095
TABLE 4.3
ESTIMATED PARAMETERS FOR DIFFERENT SIGNAL-TO-NOISE RATIO
INITIAL STATES [3 0]T

SNR   FINAL TIME   PARAMETERS
40    30           0.990  0.0199  0.9769  0.4845  0.0036  0.0059
20    30           0.989  0.0201  0.9779  0.4842  0.0035  0.0009
10    30           0.992  0.0087  0.9920  0.4920  0.0014  0.0004
1     30           0.982  0.0679  0.9309  0.4621  0.0107  0.0016
The swing equation of the machine is

    M d²δ/dt² = P_a = P_i - P_u

where

    M = 0.01432 units power sec² per elect. radian

Formulation of the above swing equation in the state-space
representation will be

    Ẋ = |    0      1 | X + |    0    | U + w(t)                  (4.72)
        | -44.4792  0 |     | 69.8324 |
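The nomenclature defines AD and BD as the discrete analogues of A and B; for the swing-equation matrices of (4.72) they can be sketched with a truncated matrix-exponential series (the step h = 0.01 is assumed here purely for illustration):

```python
import numpy as np

# Continuous-time swing-equation matrices from eq. (4.72)
A = np.array([[0.0, 1.0], [-44.4792, 0.0]])
B = np.array([[0.0], [69.8324]])
h = 0.01  # assumed illustrative step

def discretize(A, B, h, terms=20):
    # AD = exp(A h) and BD = (integral_0^h exp(A t) dt) B,
    # both built from the truncated power series of exp(A h).
    n = A.shape[0]
    AD = np.eye(n)
    S = np.eye(n) * h            # running integral of exp(A t)
    term = np.eye(n)
    for k in range(1, terms):
        term = term @ A * h / k  # (A h)^k / k!
        AD = AD + term
        S = S + term * h / (k + 1)
    return AD, S @ B

AD, BD = discretize(A, B, h)
```

Since A² = -44.4792 I here, AD equals cos(ωh) I + (sin(ωh)/ω) A with ω = √44.4792, which the series reproduces to round-off.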
This problem was simulated with the initial conditions

    x(0) = x̂(0) = [2.5  0]^T,    θ(0) = θ̂(0) = 0

    Vθ = diag[10000  10000  10000  10000  10000  10000]
    Vx = diag[1  1],    Vxθ = 0

    Vv = diag[0.001  0.001],    Vw = diag[0.1  0.1]
The impact of various initial conditions on the estimated
parameters is shown below.

    FINAL   INITIAL
    TIME    STATES     PARAMETERS
    0.9     [1.29]T    -0.1123  0.9834  -43.8752  -0.0165  -0.1382  -69.7893
    0.9     [2 0]T     -0.1635  1.0307  -44.2381  -0.0429  -0.2327  -69.4937
    0.9     [2.5 0]T   -0.0797  1.0099  -44.4418  -0.0106  -0.1031  -69.7477
    0.9     [2.5 0]T   -0.0293  1.0045  -44.4653  -0.0084  -0.0417  -69.8711
TABLE 4.6
ESTIMATED PARAMETERS FOR DIFFERENT SIGNAL-TO-NOISE RATIOS
INITIAL STATE [3 0]T

    SNR   FINAL
          TIME   PARAMETERS
    50    0.9    0.0023  1.008  44.1818  0.027  0.012  69.074
    10    0.9    0.0687  1.010  44.2043  0.025  0.015  69.076
    5     0.9    0.0012  1.010  44.36    0.035  0.104  69.021
Consider next the system

    Ẋ = A x(t) + B U(t) + w(t)                                    (4.74)

where U(t) = 5.  The observation model was considered as          (4.75)

with

    Vv = diag[0.25  0.25],    Vw = diag[0.1  0.1]

Integration time step was taken as 0.05 sec.  Estimated states
and parameters are given in the tables below.
TABLE 4.8
ESTIMATED PARAMETERS FOR VARIOUS INITIAL STATES [SNR=16]

    INITIAL   FINAL
    STATES    TIME   PARAMETERS
    [2 5]T           0.933  0.011  0.0338  2.918  4.126  1.2313
    [3 5]T           0.919  0.013  0.0443  2.948  4.157  1.2502
    [4 5]T           0.877  0.024  0.0777  3.010  4.250  1.2754
    [5 5]T           0.727  0.064  0.1939  3.135  4.420  1.3202
TABLE 4.9
ESTIMATED PARAMETERS FOR VARIOUS SIGNAL-TO-NOISE RATIOS
INITIAL STATE [1 5]T

    SNR   FINAL
          TIME   PARAMETERS
    40    9      0.904  0.0549  2.933  4.221  0.020  1.2249
    20    9      0.929  0.0424  2.929  4.269  0.015  1.2161
    10    9      0.942  0.0421  2.768  4.010  0.021  1.1302
          9      0.814  0.1218  2.222  4.165  0.092  1.1914
Consider the system

    Ẋ = |  0   1 | x(t) + | 0 | U(t) + w(t)                       (4.76)
        | -6  -5 |        | 0 |

with

    Vv = diag[0.01  0.01],    Vw = diag[0.01  0.01]

For solving the differential equations of the model, an
integration time step of 0.075 sec was used.  Estimates of states
and parameters are given below.
TABLE 4.11
ESTIMATED PARAMETERS WITH VARIOUS INITIAL STATES [SNR=40]

    INITIAL   FINAL
    STATES    TIME   PARAMETERS
    [0 2]T    15     0.1465  1.052  5.946  5.292  0.004  0.1060
    [1 2]T    15     0.0275  1.046  6.027  4.883  0.005  0.1297
    [2 2]T    15     0.0100  1.039  5.955  4.540  0.004  0.3060
    [2 0]T    15     0.0078  0.991  5.883  4.534  0.005  0.4228
TABLE 4.12
ESTIMATED PARAMETERS FOR VARIOUS SIGNAL-TO-NOISE RATIOS
INITIAL STATE [0 1]T

    SNR   FINAL
          TIME   PARAMETERS
    50    15     0.2583  0.839  5.657  4.898  0.038  0.0121
    40    2.25   0.0133  1.014  5.306  4.495  0.000  0.0144
    10    4.5    0.1752  0.905  6.408  4.679  0.001  0.1848
    10    4.5    0.1218  0.814  2.222  4.165  0.092  0.1249
Consider the system described by the differential equation

    d²x/dt² + 2 dx/dt + 5x = 5,    dx/dt = 0,  x = -1

which gives the state-space model

    Ẋ = |  0   1 | X(t) + | 0 | U(t) + w(t)                       (4.78)
        | -5  -2 |        | 5 |

with the observation model                                        (4.79)
INITIAL CONDITIONS:

    x(0) = x̂(0) = [-1  0]^T,    θ(0) = θ̂(0) = 0

    Vv = diag[0.25  0.25],    Vw = diag[0.1  0.1]

Time step for integration H = 0.05 seconds.  Estimated parameters
for various initial states and signal-to-noise ratios are given in
the following tables respectively.
TABLE 4.14
ESTIMATED PARAMETERS WITH VARIOUS INITIAL STATES [SNR=4]

    INITIAL    FINAL
    STATES     TIME   PARAMETERS
    [-1 1]T    4.5    0.1613  1.194  4.679  1.840  0.207  4.9571
    [-2 1]T    4.5    0.1048  1.120  4.965  1.984  0.157  5.1855
    [-2 2]T    4.5    0.1190  1.115  5.004  1.950  0.171  5.1881
    [-2 -2]T   4.5    0.0346  1.101  4.995  2.116  0.068  5.2892
TABLE 4.15
ESTIMATED PARAMETERS WITH VARIOUS SIGNAL-TO-NOISE RATIOS
INITIAL STATE [-1 0]T

    SNR   FINAL
          TIME   PARAMETERS
    40    4.5    0.1237  1.221  4.401  1.867  0.199  4.5589
    20    4.5    0.1073  1.214  4.442  1.874  0.178  4.6160
    10    4.5    0.0975  1.206  4.499  1.899  0.163  4.6971
    8     4.5    0.0915  1.187  4.731  1.966  0.145  4.9887
Consider the system

    Ẋ = |  0   1 | X(t) + |  0 | u(t) + w(t)                      (4.80)
        | -1   0 |        | 10 |

INITIAL CONDITIONS:

    x(0) = x̂(0) = [0  0]^T,    θ(0) = θ̂(0) = 0

    Vv = diag[0.5  0.5],    Vw = diag[0.1  0.1]

Integration time step H = 0.05 second, observation
sampling time S = 0.30 second.
Estimates of States and Parameters are depicted in
Table 4.16 & Fig. 4.12 whereas estimates of parameters with
various initial states and SNR's are appended in Table 4.17 &
4.18 respectively.
TABLE 4.17
ESTIMATED PARAMETERS WITH VARIOUS INITIAL STATES [SNR=10]

    INITIAL   FINAL
    STATES    TIME   PARAMETERS
    [0 1]T    9.0    1.001  0.025   10.100  0.0007  1.001  0.000
    [1 0]T    9.0    1.006  0.0061  10.044  0.999   0.007  0.048
    [1 1]T    9.0    1.005  0.0035  10.043  0.998   0.005  0.021
    [1 2]T    9.0    1.003  0.0011  10.079  1.001   0.002  0.001
TABLE 4.18
ESTIMATED PARAMETERS WITH VARIOUS SIGNAL-TO-NOISE RATIOS
INITIAL STATE [0 0]T

    SNR   FINAL
          TIME   PARAMETERS
    50    9.0    1.000  0.012  10.046  0.0005  1.001  0.000
    25    9.0    1.000  0.014  10.048  0.0006  1.001  0.000
    20    9.0    1.003  0.012  10.078  0.0008  1.000  0.001
Consider the system described by the differential equation

    d²x/dt² + 14 dx/dt + 40x = 5,    dx/dt = 1,  x = 1

which gives the state-space model

    Ẋ = |   0    1  | X + | 0 | U + w(t)                          (4.82)
        | -40  -14  |     | 5 |

with the observation model                                        (4.83)

INITIAL CONDITIONS:

    x(0) = x̂(0) = [2  1]^T,    θ(0) = θ̂(0) = 0

    Vv = diag[0.001  0.001],    Vw = diag[0.01  0.01]
Estimated parameters for various initial states and
signal-to-noise ratios are given in the following tables
respectively.
TABLE 4.20
ESTIMATED PARAMETERS WITH VARIOUS INITIAL STATES [SNR=1000]

    INITIAL   FINAL
    STATES    TIME   PARAMETERS
    [2 2]T    0.75   0.0458  1.037  0.052  39.5780  13.7774  5.1405
    [1 2]T    0.75   0.0266  1.023  0.015  39.8778  13.9134  5.0135
    [2 0]T    0.75   0.0294  1.016  0.014  39.8939  13.9424  5.0343
TABLE 4.21
ESTIMATED PARAMETERS FOR VARIOUS SIGNAL-TO-NOISE RATIOS
INITIAL STATE [2 1]T

    SNR   FINAL
          TIME   PARAMETERS
    100   0.75   0.0051  0.993  40.2956  14.1444  0.020  4.9161
    50    0.75   0.0248  1.012  39.9543  13.9782  0.008  5.0074
    10    0.75   0.0031  0.967  41.2053  14.5759  0.085  4.6820
    10    0.75   0.0981  0.995  41.6298  14.7853  0.102  4.5517
Consider the system

    Ẋ = | -15   3 | X(t) + w(t)                                   (4.84)
        |   3  -7 |

Observation model was considered as                               (4.85)

INITIAL CONDITIONS:

    x(0) = x̂(0) = [-16  -6]^T,    θ(0) = θ̂(0) = 0

    Vv = diag[0.25  0.25],    Vw = diag[0.1  0.1]

Estimates of states and parameters obtained with the above initial
conditions are depicted in Table 4.22 & Fig. 4.14.
TABLE 4.23
ESTIMATED PARAMETERS FOR VARIOUS INITIAL STATES [SNR=8]

    INITIAL     FINAL
    STATES      TIME   PARAMETERS
    [-16 -8]T   0.75   15.020  2.802  2.706  6.810  0.140  0.0472
    [-16 -10]T  0.60   14.838  2.497  2.732  6.752  0.140  0.0977
    [-20 -6]T   0.60   15.132  2.725  2.906  6.858  0.113  0.0064
TABLE 4.24
ESTIMATED PARAMETERS WITH VARIOUS SIGNAL-TO-NOISE RATIOS
INITIAL STATE [-16 -6]T

    SNR   FINAL
          TIME   PARAMETERS
    50    0.60   2.936  3.008  15.1518  6.9770  0.0313  0.0205
    10    0.60   3.149  2.975  15.3606  6.9498  0.0027  0.1213
          0.60   2.456  2.795  14.9564  6.7398  0.3069  0.0725
Consider the system

    Ẋ = |   0   1 | X(t) + |  0 | U(t) + w(t),   X(0) = [1  1]^T  (4.86)
        | -10   0 |        | 10 |

with the observation model                                        (4.87)

INITIAL CONDITIONS:

    x(0) = x̂(0) = [1  1]^T,    θ(0) = θ̂(0) = 0

    Vv = diag[0.001  0.001],    Vw = diag[0.01  0.01]

Observation sampling time S = 0.05.

Estimates of states and parameters obtained with the above initial
conditions are depicted in Table 4.25 and Fig. 4.15.  Estimates of
parameters for various initial states and SNR's are appended in
Tables 4.26 & 4.27 respectively.
TABLE 4.26
ESTIMATED PARAMETERS WITH VARIOUS INITIAL STATES [SNR=100]

    INITIAL   FINAL
    STATES    TIME   PARAMETERS
    [2 1]T    1.5    1.002  0.0001  9.9702  0.0022  0.000  9.9702
    [1 2]T    1.5    1.003  0.0137  9.8666  0.0023  0.023  9.8108
    [2 2]T    1.5    1.002  0.0075  9.9736  0.0002  0.017  9.9561
    [3 2]T    1.5    1.001  0.0045  9.9957  0.0000  0.012  9.9786
TABLE 4.27
ESTIMATES OF PARAMETERS WITH VARIOUS SIGNAL-TO-NOISE RATIOS
INITIAL STATE [2 1]T

    SNR   FINAL
          TIME   PARAMETERS
    100   1.5    0.0411  1.007  9.46246  0.0236  0.0487  9.3569
    50    1.5    0.0408  1.006  9.46243  0.0153  0.0487  9.3786
    10    1.5    0.0407  1.005  9.4768   0.0089  0.0490  9.3958
Consider the system described by

    J ω̇ + S ω = T U,   where T ≜ | 0  1 |,  U^T ≜ [u1  u2]
                                   | 0  0 |

S is the skew-symmetric matrix, i.e.

    S = |       0         (J11 - J33)ω3   0 |
        | (J11 - J33)ω3        0          0 |,
        |       0              0          0 |

J = inertia, ω = angular velocity.  Let (J11 - J33)/J22 = -1 and
u2 = 0.  Thus we get

    Ẋ = | 0  -1 | x(t) + | 1 | U(t) + w(t)                        (4.88)
        | 1   0 |        | 0 |

INITIAL CONDITIONS:

    Vθ = diag[10  10  10  10  10  10]
    Vx = diag[10  10],    Vxθ = 0
    Vv = diag[0.5  0.5],    Vw = diag[0.01  0.01]

Estimates of states and parameters are given below.
Estimated parameters with various initial states:

    INITIAL   FINAL
    STATES    TIME   PARAMETERS
    [2 1]T    1.5    0.998  1.507  0.0074  1.0042  0.0039  0.0658
    [1 2]T    1.5    0.992  1.097  0.0207  1.0246  0.0011  0.0511
    [2 2]T    1.5    0.997  1.064  0.0061  1.0079  0.0017  0.0620
    [2 3]T    1.5    0.996  1.059  0.0004  1.0080  0.0026  0.0589
TABLE 4.30
ESTIMATED PARAMETERS FOR VARIOUS SIGNAL-TO-NOISE RATIOS
INITIAL STATE [1 1]T

    SNR   FINAL
          TIME   PARAMETERS
    50    1.5    0.974  1.070  1.0326  0.0131  0.0088  0.0658
    20    1.5    1.001  1.084  1.0068  0.0017  0.0323  0.0776
          1.5    1.009  1.074  0.9985  0.0320  0.0067  0.0816
Consider the third-order system described by the differential
equation

    T d³x/dt³ + d²x/dt² + K2(1 - K3 x²) dx/dt + K1 x = 0,
    T = 1,  K1 = K3 = 0.915,  K2 = 1

which on linearisation gives

    Ẋ = |    0        1        0 | X(t) + | 0 | u(t) + w(t)       (4.90)
        |    0        0        1 |        | 0 |
        | -0.915  -0.77125   -1 |         | 0 |

with the observation model                                        (4.91)
INITIAL CONDITIONS:

    x(0) = x̂(0) = [3  0  0]^T,    θ(0) = θ̂(0) = 0

    Vθ = diag[10 10 10 10 10 10 10 10 10 10 10 10]
    Vx = diag[2  2  2],    Vxθ = 0
    Vv = diag[0.1  0.1  0.1],    Vw = diag[0.1  0.1  0.1]

Integration time step H = 0.05 second.
Consider next the system

    Ẋ = | 0   1   0 | X + | 0 | U + w(t),   X(0) = 0              (4.93)
        | 0   0   1 |     | 0 |
        | 0  -3  -2 |     | 1 |

INITIAL CONDITIONS:

    x(0) = x̂(0) = [1  1  1]^T,    θ(0) = θ̂(0) = 0

    Vθ = diag[100 100 100 100 100 100 100 100 100 100 100 100]
    Vx = diag[5  5  5],    Vxθ = 0
    Vv = diag[0.1  0.1  0.1],    Vw = diag[0.01  0.01  0.01]
Consider also the system

    Ẋ = |  0    1    0 | X + | 0 | U + w(t),  X(0) = [1  0  0]^T  (4.94)
        |  0    0    1 |     | 0 |
        | -6  -11   -6 |     | 0 |

INITIAL CONDITIONS:

    x(0) = x̂(0) = [8  0  0]^T,    θ(0) = θ̂(0) = 0

    Vθ = diag[100 100 100 100 100 100 100 100 100 100 100 100]
    Vx = diag[3  3  3],    Vxθ = 0
    Vv = diag[0.05  0.05  0.05],    Vw = diag[0.01  0.01  0.01]

Observation sampling interval 0.25.  Estimated states and
parameters are appended in the table below.

Finally, consider

    Ẋ = |  0    0    -0.08 | X + | 20 | U + w(t)                  (4.96)
        | 30.3  0.3  -0.11 |     |  0 |

with the observation model                                        (4.97)

INITIAL CONDITIONS:
CHAPTER-5

5.1 INTRODUCTION:

The discrete version of the Extended Kalman Filter has been widely
discussed, and its numerous applications in various fields have
been reported in the literature.  Accounts of these applications
can be found in [15,40-57].
The estimate of the state x̂(t+1) is achieved from the recursion

    x̂(t+1) = f(t, x̂(t)) + N(t)[y(t) - h(t, x̂(t))]              (5.1)

with the gain

    N(t) = F(t, x̂(t)) P(t) H^T(t, x̂(t))
           [Rv(t) + H(t, x̂(t)) P(t) H^T(t, x̂(t))]^-1      (5.2)-(5.3)

and the covariance recursion

    P(t+1) = F(t, x̂(t)) P(t) F^T(t, x̂(t)) + Rw(t)
             - N(t)[Rv(t) + H(t, x̂(t)) P(t) H^T(t, x̂(t))] N^T(t) (5.4)

where

    F(t, x̂) = (∂f/∂x)(t, x)|_{x = x̂},
    H(t, x̂) = (∂h/∂x)(t, x)|_{x = x̂}                            (5.5)

    Rv(t) = E(v(t) v^T(t));   Rw(t) = E(w(t) w^T(t))
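The predict-correct cycle of (5.1)-(5.5) can be sketched generically; a minimal numpy illustration (the scalar system, noise levels, and measurement used below are assumptions for demonstration, not the thesis's problems):

```python
import numpy as np

def ekf_step(x, P, y, f, h, F, H, Rw, Rv):
    # Time update through the nonlinear model f, then measurement
    # update with y, relinearising via the Jacobians F and H (eq. 5.5).
    x_pred = f(x)
    Fx = F(x)
    P_pred = Fx @ P @ Fx.T + Rw
    Hx = H(x_pred)
    S = Hx @ P_pred @ Hx.T + Rv
    K = P_pred @ Hx.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (y - h(x_pred))
    P_new = P_pred - K @ Hx @ P_pred
    return x_new, P_new

# illustrative scalar system x(t+1) = 0.9 x(t), observed directly
f = lambda x: 0.9 * x
F = lambda x: np.array([[0.9]])
h = lambda x: x
H = lambda x: np.eye(1)
x, P = ekf_step(np.array([0.0]), 10.0 * np.eye(1), np.array([1.0]),
                f, h, F, H, 0.01 * np.eye(1), 0.25 * np.eye(1))
```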
The filter thus alternates between the time update and the
measurement update, and the relinearisation in (5.5) is carried
out about the latest estimate.  Our problem is a non-linear one
and we will have an approach similar to the above to determine the
unknown parameters.
    x(t+1) = A_D(θ) x(t) + B_D(θ) u(t) + v(t)                     (5.6)

where

    E[v(t) v^T(s)] = Qv δ_ts,    E[e(t) e^T(s)] = Qe δ_ts
    E[v(t) e^T(s)] = Qc δ_ts,    E[x(0)] = x0                     (5.7)
    E[x(0) x^T(0)] = Π(θ)

Augmenting the state with the parameter vector,

    Z(t) = | x(t) |
           | θ(t) |

    z(t+1) = f(z(t), u(t)) + | v(t) |
                             |  0   |                             (5.8)

    y(t) = h(z(t)) + e(t)

with

    f(z(t), u(t)) = | A(θ) x(t) + B(θ) u(t) |
                    |         θ(t)          |                     (5.9)

    h(z(t)) = C(θ) x(t)

and the initial conditions

    ẑ(0) = | x0 |,   P(0) = P0                            (5.10)-(5.14)
           | θ0 |
The estimator for the linearised model is given by

    x̂(t+1) = A_t x̂(t) + B_t u(t) + K(t)[y(t) - C_t x̂(t)]       (5.15)

    θ̂(t+1) = θ̂(t) + L(t)[y(t) - C_t x̂(t)]                      (5.16)

    K(t) = [A_t P1(t) C_t^T + M_t P2^T(t) C_t^T] S_t^-1           (5.17)

    L(t) = [P2^T(t) C_t^T + P3(t) D_t^T] S_t^-1                   (5.18)

    P1(t+1) = A_t P1(t) A_t^T + M_t P3(t) M_t^T
              + A_t P2(t) M_t^T + M_t P2^T(t) A_t^T
              - K(t) S_t K^T(t) + Qv                              (5.19)

    P2(t+1) = A_t P2(t) + M_t P3(t) - K(t) S_t L^T(t)             (5.20)

    P3(t+1) = P3(t) - L(t) S_t L^T(t)                             (5.21)

    P1(0) = Π0(θ0)

where

    N(t) = | K(t) |,   P(t) = | P1(t)    P2(t) |
           | L(t) |           | P2^T(t)  P3(t) |                  (5.22)

    M_t = M(θ̂(t), x̂(t), u(t)),   D_t = D(θ̂(t), x̂(t))          (5.23)

    A_t = A(θ̂(t)),  B_t = B(θ̂(t)),  C_t = C(θ̂(t))             (5.24)

    S_t = (C_t  D_t) P(t) (C_t  D_t)^T + Qe                       (5.25)

    M(θ̂(t), x̂(t), u(t)) = | x^T   0    u^T   0  |
                            | 0    x^T   0    u^T |               (5.26)

It is also assumed that θ is the column matrix such that
θ = [A  B]^T.
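The block structure of M in (5.26) can be built with a Kronecker product; a minimal numpy sketch (the numeric A, B, x, u are illustrative assumptions, and theta stacks the rows of [A B] into a column):

```python
import numpy as np

def regressor(x, u):
    # M(x, u) such that M @ theta = A x + B u, where theta stacks
    # the rows of [A B], following the block layout of eq. (5.26).
    row = np.concatenate([x, u])            # [x^T  u^T]
    return np.kron(np.eye(x.shape[0]), row)

A = np.array([[0.0, 1.0], [-1.0, -0.5]])
B = np.array([[0.0], [1.0]])
x = np.array([2.0, 0.5])
u = np.array([1.0])
theta = np.hstack([A, B]).ravel()           # row-wise stack of [A B]
M = regressor(x, u)
```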
In time-update/measurement-update form,

    x̂(t+1) = x̂(t+1/t) + K(t+1)[y(t+1) - C_t x̂(t+1/t)]

    θ̂(t+1) = θ̂(t) + L(t)[y(t+1) - C_t x̂(t+1/t)]

    P1(t+1) = P1(t+1/t) - K(t+1) Σ1(t+1/t)
    P2(t+1) = P2(t+1/t) - K(t+1) Σ2(t+1/t)

    P1(t+1/t) = A_t(t) P1(t) A_t^T + M_t(t) P3(t) M_t^T(t)
                + A_t(t) P2(t) M_t^T(t) + M_t(t) P2^T(t) A_t^T    (5.27)

    P2(t+1/t) = A_t(t) P2(t) + M_t(t) P3(t)               (5.28)-(5.29)

    P3(t+1/t) = P3(t)                                             (5.30)
where the added gain term, equations (5.31)-(5.32), acts as a
suitable compensator.
where Gx(t+1) is given as

    Gx(t+1) = [P1(t+1/t) - M_t(t) P3(t) M_t^T(t)] Qe^-1           (5.33)

which makes the structure of the algorithm identical to the model
reference adaptive scheme.
    | x1(k+1) |   | 1  2 | | x1(k) |   | 5 |
    |         | = |      | |       | + |   | u(k)                 (5.34)
    | x2(k+1) |   | 3  4 | | x2(k) |   | 6 |

    | z1(k+1) |   | x1(k+1) |   | v1(k+1) |
    |         | = |         | + |         |                       (5.35)
    | z2(k+1) |   | x2(k+1) |   | v2(k+1) |

Initial conditions:
    x(0) = x̂(0) = [0  0]^T
    θ̂(0) = [0.5  0.1  0.1  0.5  1.0  1.5]

    P3 = diag[10  10  10  10  10  10]
    P1 = diag[10  10],    P2 = 0

    Vv = diag[1.0  1.0]
Estimates are shown in Fig. 5.2.  The input was simulated with a
Pseudo-Random Binary Sequence, mapped to bipolar levels:

    while i <= N do
        If BS_i = 1 then BS_i <- +1
        Else             BS_i <- -1

Both uniformly distributed and Gaussian distributed noises were
generated as per the algorithm given in Chapter 4.
for checking the identifiability.  Estimated parameters for
uniformly distributed noise and Gaussian noise are tabulated.
Consider next the system

    X(k+1) = |  1.8  1.0 | x(k) + |  1.0 | u(k)                   (5.36)
             | -0.8  0.0 |        | -0.4 |
This problem was simulated with the initial conditions given
above.  Consider also the system

    X(k+1) = |  1.0  1.0 | x(k) + |  1.0 | u(k)                   (5.37)
             | -2.0  0.0 |        | -2.0 |

simulated with the same initial conditions as that of Problem
No. 1; the estimated states and parameters are shown in Table 5.5
and Fig. 5.4.
CHAPTER 6

CONCLUSION

Joint state and parameter estimation is presently under intensive
research.  The estimation of state variables in the case of known
parameters does not diverge.  The analysis given by Ljung, 1979
[31] provides insight into the convergence mechanism of the
Extended Kalman Filter as a parameter estimator, and a
modification of the algorithm was proposed there.  Continuous-time
estimation is less prevalent than that of discrete-time systems,
and the approaches reported so far (mostly discrete-time) for
joint estimation of states and parameters are computationally
demanding.  In order to make the parameter estimates converge, we
express high uncertainty in the parameters, i.e. high initial
variance; this permits rapid changes in the estimates in minimum
time.  However, in most (if not all) engineering problems little
a priori information is available, and the high initial variance
keeps the parameter estimator gain Kk high for a longer period.
Considered as an estimator, this algorithm can be regarded as
robust, though more insight into its convergence analysis is
needed.  The algorithm developed in this dissertation is very
simple and computationally inexpensive.
REFERENCES

[1] Åström,K.J. & Eykhoff,P., "System Identification", Automatica, Vol.7, pp.123-162, 1971.
[3] Nieman,R.E., Fisher,D.G. & Seborg,D.E., "A Review of Process Identification and Parameter Estimation Techniques", Int. Journal of Control, Vol.13, No.2, pp.209-264, 1971.
[4] Allen,R.M. & Young,B.R., "Gain-Scheduled Lumped Parameter MIMO ... of a Pilot-Plant Climbing Film Evaporator", IFAC Control Engg. Practice, Vol.2, pp.219-225, 1994.
[5] Yang,Z.J., Sagara,S. & Wada,K., "Identification of Continuous-time Systems from Sampled Input-Output Data Using Bias Eliminating Techniques", Control Theory and Advanced Technology, Vol.9, No.1, pp.53-75, 3/93.
[6] Wahlberg,B., Ljung,L. & Söderström,T., "On Sampling of Continuous-time Stochastic Processes", Control Theory and Advanced Technology, Vol.9, No.1, pp.99-112, 3/93.
[7] Isermann,R., "Process Fault Detection Based on Modelling and Estimation Methods - A Survey", Automatica, Vol.20, No.4, pp.387-404, 1984.
[8] Unbehauen,H. & Rao,G.P., "Continuous-time Approaches to System Identification - A Survey", Automatica, Vol.26, No.1, pp.23-35, 1990.
[9] Sagara,S. & Zhao,Zhen-Yu, "Numerical Integration Approach to On-Line Identification of Continuous-time Systems", Automatica, Vol.26, No.1, pp.63-74, 1990.
[10] Stericker,D.L. & Sinha,N.K., "Identification of Continuous-time Systems from Samples of Input-Output Data Using the δ-Operator", Control Theory and Advanced Technology, Vol.9, No.1, pp.113-125, 3/93.
[11] Crisafulli,S. & Medhurst,T.P., "Robust On-Line ... Mining", IFAC Control Engg. Practice, Vol.2, No.2, pp.201-209, 4/94.
[12] Pugachev,V.S. & Sinitsyn,I.N., "Stochastic Differential ..."
[14] Gibson,John E., "Non-Linear Automatic Control", New York: McGraw-Hill Inc., 1963.
[15] Jazwinski,Andrew H., "Stochastic Processes and Filtering Theory", New York: Academic Press, 1970.
[16] Sage,A.P. & Melsa,J.L., "System Identification", New York: Academic Press, 1971.
[17] Strejc,V., "Trends in Identification", Automatica, Vol.17, No.1, pp.7-21, 1981.
[18] Eykhoff,P., "System Identification", London: Wiley, 1974.
[19] Ljung,L., "System Identification: Theory for the User", New Jersey: PTR Prentice Hall Inc., 1987.
[20] Guidorzi,R.P., "Invariants and Canonical Forms for Systems Structural and Parametric Identification", Automatica, Vol.17, No.1, pp.117-133, 1981.
[21] Kailath,T., "A View of Three Decades of Linear Filtering Theory", IEEE Trans. on Information Theory, Vol.IT-20, No.2, March 1974.
[22] Srinath,M.D. & Rajasekaran, "An Introduction to Statistical Signal Processing with Applications", London: John Wiley Inc., 1979.
[23] Kalman,R.E., "A New Approach to Linear Filtering and Prediction Problems", Trans. ASME, Journal of Basic Engineering, March 1960.
[24] Kalman,R.E. & Bucy,R.S., "New Results in Linear Filtering and Prediction Theory", Trans. ASME, Journal of Basic Engg., March 1961.
[25] Young,P., "Parameter Estimation for Continuous-time ..."
[26] Ljung,L., "Analysis of Recursive Stochastic Algorithms", ...
[27] Skelton,R.E., "Dynamic Systems Control - Linear System ..."
[28] ..., "... Estimation in Linear Systems with Correlated Noise", IEEE Trans. on AC, Vol.25, No.2, 4/1990.
[29] Nelson,L.W., "The Simultaneous On-Line Estimation of Parameters and States in Linear Systems", IEEE Trans. on AC, Feb. 1976.
[30] ..., "... Continuous System Parameter Identification", Computer and Elect. Engg., Vol.16, No.1, pp.51-64, 1990.
[31] Ljung,L., "Asymptotic Behaviour of the Extended Kalman Filter as a Parameter Estimator for Linear Systems", IEEE Trans. on AC, Vol.24, No.1, pp.36-50, 2/1979.
[32] Yoshimura,T., Konishi,K. & Soeda,T., "A Modified Extended Kalman Filter for Linear Discrete-time Systems with Unknown Parameters", Automatica, Vol.17, No.4, pp.657-669, 1981.
[33] Gopal,M., "Modern Control Theory", New Delhi: Wiley Eastern Ltd., 1984.
[34] Chen,Chih-Fan & Haas,I.John, "Elements of Control System Analysis - Classical and Modern Approaches", NJ: Prentice Hall Inc., 1968.
[35] Graham,N., "Programming and Problem Solving with ..."
[36] Kimbark, "Power System Stability", New York: Wiley Eastern Inc., 1967.
[37] Gupta,Sushil Das, "Control System Theory", New Delhi: Khanna Publishers, 1973.
[38] Del Toro,V., "Principles of Electrical Engineering", New Delhi: Prentice Hall of India Ltd., 1987.
[39] Rao,A.Subha & Desai,R.Parag, "A Course in Control ..."
[40] Lettenmaier,D.P. & Burges,S.J., "Use of State Estimation Techniques in Water Resources System Modelling", Water Resources Bulletin, American Water Resources Association, Vol.12, No.1, pp.83-99, 2/1976.
[41] Safonov,M.G. et al., "Robustness and Computational Aspects of Non-Linear Stochastic Estimators and Regulators", ...
[42] ..., "... Filter Problem of State Estimation in the Automatic Steering of a Tanker in a Seaway", IEEE Trans. on AC, Vol.29, No.7, pp.577-584, 7/1984.
[43] Grewal,M.S., Henderson,V.D. & Miyasako,R.S., "Application of Kalman Filtering to the Calibration and Alignment of Inertial Navigation Systems", IEEE Trans. on AC, Vol.36, No.1, pp.4-13, 1/1991.
[44] Sage,A.P. & Wakefield,C.D., "Maximum Likelihood Identification of Time Varying and Random System Parameters", Int. J. Control, Vol.16, No.1, pp.81-100, 1972.
[45] Pugachev,V.S., "Estimation of State and Parameters of Continuous Non-Linear Systems", Avtomatika i Telemekhanika, No.6, pp.63-79, 6/1979.
[46] Shin,V.I., "A Sub-Optimal Algorithm of Estimation of the State and the Parameters of Multi-Dimensional Continuous Non-Linear Systems", Avtomatika i Telemekhanika, No.1, pp.101-106, 1/1984.
[47] Chui,C.K. et al., "Modified Extended Kalman Filtering and a Real-time Parallel Algorithm for System Identification", IEEE Trans. on AC, Vol.35, No.1, pp.100-104, 1/1990.
[48] Tsay,Y.T. & Shieh,L.S., "State-Space Approach for Self-Tuning Feedback Control with Pole Assignment", IEE Proc., Vol.128, Pt.D, No.3, pp.93-101, 5/1981.
[49] Grimble,M.J., "Adaptive Kalman Filter for Control of Systems with Unknown Disturbances", IEE Proc., Vol.128, Pt.D, No.6, pp.263-267, 1981.
[50] Tyssø,A., "Modelling & Parameter Estimation of a Ship Boiler", Automatica, Vol.17, No.1, pp.157-166, 1981.
[51] Brovko,O. et al., "The Extended Kalman Filter as a ..."
[53] ..., IEEE Trans. on AC, Vol.28, No.3, pp.416-427, 3/1983.
[54] Landau,I.D., "Unbiased Recursive Identification Using Model Reference Adaptive Techniques", IEEE Trans. on AC, Vol.21, No.2, pp.194-202, 4/1976.
[55] Landau,I.D., "An Addendum to 'Unbiased Recursive Identification Using Model Reference Adaptive Techniques'", ...
[56] ..., "... Non-Linear State Estimation", Automatica, Vol.6, pp.477-480, 1970.
[57] Maisel,H. & Gnugnoli,G., "Simulation of Discrete Stochastic Systems", Toronto: Science Research Associates, 1972.
[58] Rabiner,L.R. & Gold,B., "Theory & Application of Digital Signal Processing", New Jersey: Prentice Hall Inc., Englewood Cliffs, 1975.
[59] Simpson,M.A. et al., "Statistical Properties of a Class of Pseudorandom Sequences", Proc. IEE, Vol.113, No.12, 12/1966.
[60] Roberts,P.D. et al., "Statistical Properties of Smoothed Maximal-Length Linear Binary Sequences", Proc. IEE, Vol.113, No.1, 1/1966.
[61] Briggs,P.A.N. et al., "Pseudorandom Signals for the ..."
[62] Macleod,C.J., "System Identification Using Time-Weighted Pseudorandom Sequences", Int. J. Control, Vol.14, No.1, pp.97-109, 1971.
[63] Barker,H.A. & Raeside,D., "Linear Modelling of Multivariable Systems with Pseudo-random Binary Input Signals", Automatica, Vol.4, pp.393-416, 1968.
ANNEXURE-A

White-noise processes were used since they were shown to imply
uncorrelated (δ-correlated) disturbances; the integral may then
approximate a delta function.  However, every simulation utilizes
a random number generator based on a completely determined set of
rigid rules used for calculation, to generate a unique sequence of
numbers known as Pseudo-Random Numbers.
Before we go in for simulation of white noise, uniformly
distributed random numbers are generated by the recurrence (A.1).
A floating point number Zi, uniformly distributed between 0 and 1,
is then given by Zi = xi/m.  Gaussian distributed variables are
encountered in the simulation of the stochastic system.
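A minimal sketch of the uniform generator, assuming a multiplicative congruential recurrence x_{i+1} = a·x_i mod m with the common Lewis-Goodman-Miller constants (the specific constants of (A.1) are an assumption here):

```python
def lcg(seed, n, a=16807, m=2 ** 31 - 1):
    # Z_i = x_i / m with x_{i+1} = a * x_i mod m, uniform on (0, 1).
    x = seed
    out = []
    for _ in range(n):
        x = (a * x) % m
        out.append(x / m)
    return out

z = lcg(seed=1, n=5)
```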
Two methods are in common use.  The direct transformation of a
pair of uniform numbers r1, r2 gives

    x1 = sqrt(-2 ln r1) cos(2π r2)
    x2 = sqrt(-2 ln r1) sin(2π r2)                                (A.2)

Alternatively, by the Central Limit Theorem, Gaussian distributed
numbers can be obtained from uniformly distributed numbers:

    R = (Σ_{i=1}^{12} r_i - 6) / 4                                (A.3)

    x = ((((C1 R² + C2) R² + C3) R² + C4) R² + C5) R              (A.4)
Given the various methods available in the literature for
generation of random numbers, the procedure adopted in this thesis
for this purpose is appended below:

    y(n) = sqrt(2σ² ln(1/x(n)))                             (A.5)-(A.6)

which follows from the Gaussian density

    p(y0) = (1/(sqrt(2π)σ)) exp(-y0²/2σ²)                         (A.7)

Thus we can form two random variables W(n) and W(n+1) with zero
mean and variance σ² as under:

    W(n)   = y(n) cos(2π x(n+1))
    W(n+1) = y(n) sin(2π x(n+1))                                  (A.8)
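The transformation (A.5)-(A.8) can be sketched directly; the sample count and the use of Python's stdlib uniform generator are assumptions for illustration:

```python
import math, random

def gaussian_pair(u1, u2, sigma=1.0):
    # Eqs. (A.5)-(A.8): map two uniform(0,1) numbers to two
    # independent N(0, sigma^2) samples.
    y = math.sqrt(2.0 * sigma ** 2 * math.log(1.0 / u1))
    return y * math.cos(2 * math.pi * u2), y * math.sin(2 * math.pi * u2)

random.seed(0)
samples = []
for _ in range(5000):
    # 1 - random() keeps u1 strictly positive for the logarithm
    w1, w2 = gaussian_pair(1.0 - random.random(), random.random())
    samples.extend([w1, w2])
mean = sum(samples) / len(samples)
var = sum(s * s for s in samples) / len(samples) - mean ** 2
```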
with the observed response of the system to a PRBS perturbing
signal.  Also, PRBS as a test signal has been used to estimate the
parameters of pulse transfer functions.  The statistical
properties of a pseudorandom sequence are obtained through its
correlation functions.  Pseudorandom sequences are basically
deterministic, but their statistical properties are equivalent to
those of a sampled white-noise signal.
1 (n) =
1 r=0
C r C r +n
L L-1
(A.9)
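For a bipolar maximal-length sequence, (A.9) takes only the two values 1 and -1/L; a short check (the period-7 sequence is listed directly and is illustrative):

```python
def autocorr(c, n):
    # Periodic autocorrelation phi(n) = (1/L) * sum_r c[r]*c[(r+n) % L],
    # as in eq. (A.9).
    L = len(c)
    return sum(c[r] * c[(r + n) % L] for r in range(L)) / L

# +/-1 maximal-length sequence of period 7 (bits 1110100 mapped to +/-1)
c = [1, 1, 1, -1, 1, -1, -1]
phi0 = autocorr(c, 0)
phi1 = autocorr(c, 1)
```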
PRBS is probably the most convenient input for the purpose of
identification, since their autocorrelation functions approximate
that of white noise.  The sequences used are detailed below:

a) Maximal-length pseudo-random sequence:

It satisfies a linear difference equation (modulo 2), as follows:
    D^m x ⊕ D^{m-1} x ⊕ ......... ⊕ D x ⊕ x = y          (A.10)-(A.11)

The resulting sequence is called a maximum-length null sequence
(MLNS).  A polynomial equation of the form

    (D^m ⊕ ......... ⊕ D ⊕ I) x = 0                               (A.12)

that yields an MLNS must be irreducible (i.e. it should not be the
product of two or more lower-order polynomials) and it should not
be a factor, modulo 2, of D^n ⊕ 1 for any n < 2^m - 1 (i.e. it
should not divide, modulo 2, the expression D^(2^m - 1) ⊕ 1).

We considered an 11th-order register:

    Generate MLNS:
        For j = 1 to N do  x_{i+1, j+1} <- x_{i j}
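The modulo-2 recurrence of (A.10) can be sketched as a shift-register sequence generator; the trinomials x³ + x + 1 and x¹¹ + x² + 1 used below are known primitive polynomials assumed here for illustration (the thesis's actual 11th-order feedback taps are not specified):

```python
def mlns(m, q, seed):
    # One period of the maximal-length null sequence generated by the
    # primitive polynomial x^m + x^q + 1, via a_k = a_{k-(m-q)} XOR a_{k-m}.
    a = list(seed)                  # first m bits, not all zero
    period = 2 ** m - 1
    while len(a) < period:
        a.append(a[-(m - q)] ^ a[-m])
    return a

# x^3 + x + 1: period 7, with 2^(m-1) = 4 ones per period
seq = mlns(3, 1, [1, 0, 0])

# 11th-order register as mentioned above, assuming x^11 + x^2 + 1
seq11 = mlns(11, 2, [1] + [0] * 10)
```

A maximal-length sequence of period 2^m - 1 contains exactly 2^(m-1) ones, which gives a quick check of maximality.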
2. QUADRATIC-RESIDUE CODES FOR PSEUDO-RANDOM SEQUENCES            (A.13)

Whereas the number of elements in MLNS codes can take values of
2^n - 1 only (say 3, 7, 15, 31, 63), the number of elements in
quadratic-residue sequences can take values that are much closer
together (3, 7, 11, 19, 23).  Consequently, sequences of almost
any desired length can be chosen:

    q <- 1
    while i <= N do
        For i = 1, N do
            If i is a quadratic residue mod N then BS_i <- +1
            Else BS_i <- -1                                       (A.14)
Most systems encountered in the real world are non-linear.  In
many cases, however, the non-linearities are mild.  In such cases,
of course, the approximate linearised model represents the real
system over the range of operation under consideration.  There are
also systems which cannot be linearised easily and require
different techniques to cope with the problem.  Whereas some of
the non-linearities are understandable and unavoidable, others are
introduced into the system purposely to achieve certain
objectives.  Non-linear phenomena and characteristics, such as
limit cycles, cannot be predicted unless a non-linear model of the
system is obtained; otherwise a linearised version of the real
system is adequate.
STABILITY:

So long as any minute disturbance of the system does not drive it
away from equilibrium, that equilibrium position is stable.  An
example is a mechanical 'conservative system', that is, one in
which energy is conserved.  It is known that an equilibrium
position corresponding to a minimum of the potential energy is
stable.  This is schematically represented in Fig.____, where the
frictionless motion of a particle is illustrated.  The point A (a
relative minimum of the potential energy) corresponds to a stable
equilibrium position, whereas a relative maximum corresponds to an
unstable equilibrium position.  Lagrange (1788) developed the
theory; a rigorous proof was given later by Dirichlet.  This
notion of stability plays an important role in the theory of
ordinary differential equations.  The system considered is
appended below:

    ẋ = f(x, t)

where x^T = (x1, x2, ......, xn) and f = (f1, f2, ......, fn).
It is assumed that f satisfies conditions sufficient to guarantee
the existence and uniqueness of solutions.  More precisely: the
solution x(x0, t0, t) thus remains in a neighbourhood of the
equilibrium.  A solution x(a, t0, t) is attractive iff there
exists a neighbourhood of initial states from which all solutions
converge to it.  A solution which is both stable and attractive is
called asymptotically stable.
2.2.3

Even in linear systems, stability need not hold for all bounded
inputs.  For non-linear systems, numerous definitions of stability
are in use.  The particular ones are:

1) Local stability
2) Global stability
3) Finite stability

The converse usually may not be inferred from local stability.
If, under consideration close to the singularity point, the state
approaches the singularity arbitrarily closely as time approaches
infinity, the stability is asymptotic.  Monotonic stability is