
Digital Control Module 7 Lecture 3

Module 7: Discrete State Space Models


Lecture Note 3
1 Characteristic Equation, Eigenvalues and Eigenvectors
For a discrete state space model, the characteristic equation is defined as
\[
|zI - A| = 0
\]
The roots of the characteristic equation are the eigenvalues of matrix $A$.
1. If $\det(A) \neq 0$, i.e., $A$ is nonsingular, and $\lambda_1, \lambda_2, \ldots, \lambda_n$ are the eigenvalues of $A$, then $\dfrac{1}{\lambda_1}, \dfrac{1}{\lambda_2}, \ldots, \dfrac{1}{\lambda_n}$ will be the eigenvalues of $A^{-1}$.
2. The eigenvalues of $A$ and $A^T$ are the same when $A$ is a real matrix.
3. If A is a real symmetric matrix then all its eigenvalues are real.
The $n \times 1$ vector $v_i$ which satisfies the matrix equation
\[
A v_i = \lambda_i v_i \qquad \qquad (1)
\]
where $\lambda_i$, $i = 1, 2, \ldots, n$, denotes the $i$th eigenvalue, is called the eigenvector of $A$ associated with the eigenvalue $\lambda_i$. If the eigenvalues are distinct, the eigenvectors can be obtained directly from equation (1).
Properties of eigenvectors
1. An eigenvector cannot be a null vector.
2. If $v_i$ is an eigenvector of $A$, then $m v_i$ is also an eigenvector of $A$, where $m$ is a nonzero scalar.
3. If $A$ has $n$ distinct eigenvalues, then the $n$ eigenvectors are linearly independent.
Eigenvectors of multiple order eigenvalues
When the matrix $A$ has an eigenvalue $\lambda$ of multiplicity $m$, a full set of linearly independent eigenvectors may not exist. The number of linearly independent eigenvectors is equal to the degeneracy $d$ of $\lambda I - A$. The degeneracy is defined as
\[
d = n - r
\]
where $n$ is the dimension of $A$ and $r$ is the rank of $\lambda I - A$. Furthermore,
\[
1 \le d \le m
\]
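As a quick numerical illustration of these properties, the following sketch (assuming NumPy is available; the matrix is chosen arbitrarily and is not from these notes) computes the eigenvalues of a matrix, of its inverse and of its transpose, and the degeneracy $d$ for one eigenvalue.

```python
import numpy as np

# Arbitrary illustrative matrix (not from the notes); assumed nonsingular.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])

eigvals, eigvecs = np.linalg.eig(A)            # eigenvalues and eigenvectors of A
print(eigvals)                                 # roots of |zI - A| = 0
print(np.linalg.eigvals(np.linalg.inv(A)))     # reciprocals 1/lambda_i, since det(A) != 0
print(np.linalg.eigvals(A.T))                  # same eigenvalues as A (A is real)

# Degeneracy d = n - rank(lambda*I - A) for one eigenvalue lambda.
lam = eigvals[0]
n = A.shape[0]
d = n - np.linalg.matrix_rank(lam * np.eye(n) - A)
print(d)                                       # number of independent eigenvectors for lam
```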
2 Similarity Transformation and Diagonalization
Square matrices $A$ and $\bar{A}$ are similar if
\[
AP = P\bar{A}, \qquad \text{or} \qquad \bar{A} = P^{-1}AP, \qquad \text{and} \qquad A = P\bar{A}P^{-1}
\]
The non-singular matrix $P$ is called the similarity transformation matrix. It should be noted that the eigenvalues of a square matrix $A$ are not altered by a similarity transformation.
Diagonalization:
If the system matrix $A$ of a state variable model is diagonal, then the state dynamics are decoupled from each other and solving the state equations becomes much simpler.
In general, if $A$ has distinct eigenvalues, it can be diagonalized using a similarity transformation. Consider a square matrix $A$ which has distinct eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$. It is required to find a transformation matrix $P$ which will convert $A$ into a diagonal form
\[
\Lambda =
\begin{bmatrix}
\lambda_1 & 0 & \ldots & 0 \\
0 & \lambda_2 & \ldots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \ldots & \lambda_n
\end{bmatrix}
\]
through the similarity transformation $AP = P\Lambda$. If $v_1, v_2, \ldots, v_n$ are the eigenvectors of the matrix $A$ corresponding to the eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$, then we know $A v_i = \lambda_i v_i$. This gives
\[
A \begin{bmatrix} v_1 & v_2 & \ldots & v_n \end{bmatrix}
=
\begin{bmatrix} v_1 & v_2 & \ldots & v_n \end{bmatrix}
\begin{bmatrix}
\lambda_1 & 0 & \ldots & 0 \\
0 & \lambda_2 & \ldots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \ldots & \lambda_n
\end{bmatrix}
\]
Thus $P = \begin{bmatrix} v_1 & v_2 & \ldots & v_n \end{bmatrix}$. Consider the following state model:
\[
x(k+1) = A x(k) + B u(k)
\]
If $P$ transforms the state vector $x(k)$ to $z(k)$ through the relation
\[
x(k) = P z(k), \qquad \text{or} \qquad z(k) = P^{-1} x(k)
\]
then the modified state space model becomes
\[
z(k+1) = P^{-1}AP\, z(k) + P^{-1}B\, u(k)
\]
where $P^{-1}AP = \Lambda$.
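A minimal numerical sketch of this diagonalizing transformation is given below (assuming NumPy; the matrices $A$ and $B$ are made up for illustration and have distinct eigenvalues). It builds $P$ from the eigenvectors and forms the transformed model matrices $\Lambda = P^{-1}AP$ and $P^{-1}B$.

```python
import numpy as np

# Illustrative discrete-time model with distinct eigenvalues (values assumed for this sketch).
A = np.array([[0.0, 1.0],
              [-0.5, 1.5]])
B = np.array([[0.0],
              [1.0]])

eigvals, P = np.linalg.eig(A)      # columns of P are the eigenvectors v_1, ..., v_n
P_inv = np.linalg.inv(P)

Lam = P_inv @ A @ P                # numerically diagonal, with the eigenvalues on the diagonal
B_new = P_inv @ B                  # input matrix of z(k+1) = Lam z(k) + (P^{-1} B) u(k)

print(np.round(Lam, 10))
print(B_new)
```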
3 Computation of $\phi(t)$
We have seen that to derive the state space model of a sampled data system, we need to know the continuous time state transition matrix $\phi(t) = e^{At}$.
3.1 Using Inverse Laplace Transform
For the system $\dot{x}(t) = A x(t) + B u(t)$, the state transition matrix $e^{At}$ can be computed as
\[
e^{At} = \mathcal{L}^{-1}\left[ (sI - A)^{-1} \right]
\]
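A symbolic sketch of this formula is shown below (assuming SymPy; the matrix is an arbitrary example, not one from these notes). It inverts the resolvent $(sI - A)^{-1}$ and applies the inverse Laplace transform entry by entry.

```python
import sympy as sp

s, t = sp.symbols('s t', positive=True)

# Arbitrary example matrix, assumed here only to illustrate e^{At} = L^{-1}[(sI - A)^{-1}].
A = sp.Matrix([[0, 1],
               [-2, -3]])

Phi_s = (s * sp.eye(2) - A).inv()       # resolvent (sI - A)^{-1}
phi_t = Phi_s.applyfunc(lambda F: sp.inverse_laplace_transform(F, s, t))
print(sp.simplify(phi_t))               # e^{At}; any Heaviside(t) factor equals 1 for t > 0
```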
3.2 Using Similarity Transformation
If $\Lambda$ is the diagonal representation of the matrix $A$, then $\Lambda = P^{-1}AP$. When a matrix is in diagonal form, computation of the state transition matrix is straightforward:
\[
e^{\Lambda t} =
\begin{bmatrix}
e^{\lambda_1 t} & 0 & \ldots & 0 \\
0 & e^{\lambda_2 t} & \ldots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \ldots & e^{\lambda_n t}
\end{bmatrix}
\]
Given $e^{\Lambda t}$, we can show that $e^{At} = P e^{\Lambda t} P^{-1}$:
\begin{align*}
e^{At} &= I + At + \frac{1}{2!}A^2 t^2 + \ldots \\
P^{-1} e^{At} P &= P^{-1}\left[ I + At + \frac{1}{2!}A^2 t^2 + \ldots \right] P \\
&= I + P^{-1}AP\, t + \frac{1}{2!} P^{-1}AP\,P^{-1}AP\, t^2 + \ldots \\
&= I + \Lambda t + \frac{1}{2!}\Lambda^2 t^2 + \ldots \\
&= e^{\Lambda t}
\end{align*}
Therefore,
\[
e^{At} = P e^{\Lambda t} P^{-1}
\]
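The following sketch (assuming NumPy and SciPy; it uses the matrix of Example 2 below and an arbitrarily chosen time instant) checks numerically that $P e^{\Lambda t} P^{-1}$ agrees with a direct matrix exponential.

```python
import numpy as np
from scipy.linalg import expm

# Matrix of Example 2 below; t is an arbitrary time instant for the check.
A = np.array([[1.0, 1.0],
              [-1.0, 1.0]])
t = 0.7

eigvals, P = np.linalg.eig(A)               # eigenvectors as columns of P
exp_Lam_t = np.diag(np.exp(eigvals * t))    # e^{Lambda t} is diagonal
eAt = P @ exp_Lam_t @ np.linalg.inv(P)

print(np.allclose(eAt, expm(A * t)))        # True: both routes give the same e^{At}
```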
3.3 Using Cayley-Hamilton Theorem
Every square matrix $A$ satisfies its own characteristic equation. If the characteristic equation is
\[
\Delta(\lambda) = |\lambda I - A| = \lambda^n + \alpha_1 \lambda^{n-1} + \cdots + \alpha_n = 0
\]
then
\[
\Delta(A) = A^n + \alpha_1 A^{n-1} + \cdots + \alpha_n I = 0
\]
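A brief numerical check of the theorem is sketched below (assuming NumPy; the matrix is the one used in Example 1 later in this lecture). It evaluates the characteristic polynomial at $A$ and confirms the result is the zero matrix.

```python
import numpy as np

# Matrix of Example 1 below, used to verify Delta(A) = 0 numerically.
A = np.array([[0.0, 0.0, -2.0],
              [0.0, 1.0,  0.0],
              [1.0, 0.0,  3.0]])

coeffs = np.poly(A)                 # [1, alpha_1, ..., alpha_n] of |lambda I - A|
n = A.shape[0]
Delta_A = sum(c * np.linalg.matrix_power(A, n - k) for k, c in enumerate(coeffs))
print(np.allclose(Delta_A, np.zeros((n, n))))   # True
```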
Application: Evaluation of any function $f(\lambda)$ and $f(A)$
\[
f(\lambda) = a_0 + a_1 \lambda + a_2 \lambda^2 + \cdots + a_n \lambda^n + \text{higher order terms}
\]
Dividing $f(\lambda)$ by the characteristic polynomial $\Delta(\lambda)$,
\[
\frac{f(\lambda)}{\Delta(\lambda)} = q(\lambda) + \frac{g(\lambda)}{\Delta(\lambda)}
\qquad \Rightarrow \qquad
f(\lambda) = q(\lambda)\,\Delta(\lambda) + g(\lambda)
\]
where the remainder $g(\lambda)$ is of order $n-1$:
\[
g(\lambda) = \beta_0 + \beta_1 \lambda + \cdots + \beta_{n-1}\lambda^{n-1}
\]
Since $\Delta(\lambda_i) = 0$ at the eigenvalues, $f(\lambda_i) = g(\lambda_i)$.
If $A$ has distinct eigenvalues $\lambda_1, \ldots, \lambda_n$, then
\[
f(\lambda_i) = g(\lambda_i), \qquad i = 1, \ldots, n
\]
Solving these $n$ equations gives $\beta_0, \beta_1, \ldots, \beta_{n-1}$, and then
\[
f(A) = \beta_0 I + \beta_1 A + \cdots + \beta_{n-1} A^{n-1}
\]
If there are multiple roots (say $\lambda_i$ with multiplicity 2), then
\[
f(\lambda_i) = g(\lambda_i) \qquad \qquad (2)
\]
\[
\left. \frac{d}{d\lambda} f(\lambda) \right|_{\lambda = \lambda_i} = \left. \frac{d}{d\lambda} g(\lambda) \right|_{\lambda = \lambda_i} \qquad \qquad (3)
\]
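For the distinct-eigenvalue case, the interpolation step can be carried out symbolically. A minimal sketch is shown below (assuming SymPy; it uses the 2x2 matrix of the discretization example at the end of these notes, whose eigenvalues are 1 and 2). It solves for $\beta_0, \beta_1$ and assembles $f(A) = e^{At}$.

```python
import sympy as sp

t = sp.symbols('t', positive=True)
b0, b1 = sp.symbols('beta0 beta1')

# Distinct eigenvalues of the 2x2 example at the end of the notes; g(lambda) = b0 + b1*lambda.
lam1, lam2 = 1, 2
sol = sp.solve([sp.Eq(sp.exp(lam1 * t), b0 + b1 * lam1),
                sp.Eq(sp.exp(lam2 * t), b0 + b1 * lam2)], [b0, b1])

A = sp.Matrix([[1, 1],
               [0, 2]])
eAt = sp.simplify(sol[b0] * sp.eye(2) + sol[b1] * A)   # f(A) = beta0*I + beta1*A
print(eAt)   # Matrix([[exp(t), exp(2*t) - exp(t)], [0, exp(2*t)]])
```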
Example 1:
If
\[
A =
\begin{bmatrix}
0 & 0 & -2 \\
0 & 1 & 0 \\
1 & 0 & 3
\end{bmatrix}
\]
then compute the state transition matrix using the Cayley-Hamilton Theorem.
\[
\Delta(\lambda) = |\lambda I - A| =
\begin{vmatrix}
\lambda & 0 & 2 \\
0 & \lambda - 1 & 0 \\
-1 & 0 & \lambda - 3
\end{vmatrix}
= (\lambda - 1)^2(\lambda - 2) = 0
\]
\[
\lambda_1 = 1 \ \text{(with multiplicity 2)}, \qquad \lambda_2 = 2
\]
Let $f(\lambda) = e^{\lambda t}$ and $g(\lambda) = \beta_0 + \beta_1 \lambda + \beta_2 \lambda^2$.
Then using (2) and (3), we can write
\[
f(\lambda_1) = g(\lambda_1)
\]
\[
\left. \frac{d}{d\lambda} f(\lambda) \right|_{\lambda = \lambda_1} = \left. \frac{d}{d\lambda} g(\lambda) \right|_{\lambda = \lambda_1}
\]
\[
f(\lambda_2) = g(\lambda_2)
\]
This implies
\begin{align*}
e^{t} &= \beta_0 + \beta_1 + \beta_2 && (\lambda_1 = 1) \\
t e^{t} &= \beta_1 + 2\beta_2 && (\lambda_1 = 1) \\
e^{2t} &= \beta_0 + 2\beta_1 + 4\beta_2 && (\lambda_2 = 2)
\end{align*}
Solving the above equations,
\[
\beta_0 = e^{2t} - 2te^{t}, \qquad \beta_1 = 3te^{t} + 2e^{t} - 2e^{2t}, \qquad \beta_2 = e^{2t} - e^{t} - te^{t}
\]
Then
\[
e^{At} = \beta_0 I + \beta_1 A + \beta_2 A^2 =
\begin{bmatrix}
2e^{t} - e^{2t} & 0 & 2e^{t} - 2e^{2t} \\
0 & e^{t} & 0 \\
e^{2t} - e^{t} & 0 & 2e^{2t} - e^{t}
\end{bmatrix}
\]
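As a sanity check of this result, the closed-form matrix can be compared with a direct matrix exponential at one arbitrary time instant (a sketch assuming NumPy and SciPy):

```python
import numpy as np
from scipy.linalg import expm

# Closed form of Example 1 evaluated at t = 1.0 (arbitrary), compared with expm(A t).
A = np.array([[0.0, 0.0, -2.0],
              [0.0, 1.0,  0.0],
              [1.0, 0.0,  3.0]])
t = 1.0
e1, e2 = np.exp(t), np.exp(2 * t)

eAt_formula = np.array([[2*e1 - e2, 0.0, 2*e1 - 2*e2],
                        [0.0,       e1,  0.0        ],
                        [e2 - e1,   0.0, 2*e2 - e1  ]])

print(np.allclose(eAt_formula, expm(A * t)))   # True
```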
Example 2: For the system $\dot{x}(t) = A x(t) + B u(t)$, where
\[
A =
\begin{bmatrix}
1 & 1 \\
-1 & 1
\end{bmatrix}
\]
compute $e^{At}$ using 3 different techniques.
Solution: Eigenvalues of matrix $A$ are $1 \pm j1$.
Method 1
\begin{align*}
e^{At} &= \mathcal{L}^{-1}\left[ (sI - A)^{-1} \right]
= \mathcal{L}^{-1}
\begin{bmatrix}
s-1 & -1 \\
1 & s-1
\end{bmatrix}^{-1} \\
&= \mathcal{L}^{-1} \left\{ \frac{1}{s^2 - 2s + 2}
\begin{bmatrix}
s-1 & 1 \\
-1 & s-1
\end{bmatrix} \right\} \\
&= \mathcal{L}^{-1}
\begin{bmatrix}
\dfrac{s-1}{(s-1)^2 + 1} & \dfrac{1}{(s-1)^2 + 1} \\[2mm]
\dfrac{-1}{(s-1)^2 + 1} & \dfrac{s-1}{(s-1)^2 + 1}
\end{bmatrix} \\
&=
\begin{bmatrix}
e^{t} \cos t & e^{t} \sin t \\
-e^{t} \sin t & e^{t} \cos t
\end{bmatrix}
\end{align*}
Method 2
\[
e^{At} = P e^{\Lambda t} P^{-1}, \qquad \text{where} \quad
e^{\Lambda t} =
\begin{bmatrix}
e^{(1+j)t} & 0 \\
0 & e^{(1-j)t}
\end{bmatrix}
\]
Eigenvalues are $1 \pm j$. The corresponding eigenvectors are found by using the equation $A v_i = \lambda_i v_i$ as follows:
\[
\begin{bmatrix}
1 & 1 \\
-1 & 1
\end{bmatrix}
\begin{bmatrix}
v_1 \\ v_2
\end{bmatrix}
= (1 + j)
\begin{bmatrix}
v_1 \\ v_2
\end{bmatrix}
\]
Taking $v_1 = 1$, we get $v_2 = j$. So, the eigenvector corresponding to $1 + j$ is $\begin{bmatrix} 1 \\ j \end{bmatrix}$ and the one corresponding to $1 - j$ is $\begin{bmatrix} 1 \\ -j \end{bmatrix}$. The transformation matrix is given by
\[
P = \begin{bmatrix} v_1 & v_2 \end{bmatrix} =
\begin{bmatrix}
1 & 1 \\
j & -j
\end{bmatrix}
\qquad \Rightarrow \qquad
P^{-1} = \frac{1}{2}
\begin{bmatrix}
1 & -j \\
1 & j
\end{bmatrix}
\]
Now,
\begin{align*}
e^{At} = P e^{\Lambda t} P^{-1}
&= \frac{1}{2}
\begin{bmatrix}
1 & 1 \\
j & -j
\end{bmatrix}
\begin{bmatrix}
e^{(1+j)t} & 0 \\
0 & e^{(1-j)t}
\end{bmatrix}
\begin{bmatrix}
1 & -j \\
1 & j
\end{bmatrix} \\
&= \frac{1}{2}
\begin{bmatrix}
e^{(1+j)t} & e^{(1-j)t} \\
j e^{(1+j)t} & -j e^{(1-j)t}
\end{bmatrix}
\begin{bmatrix}
1 & -j \\
1 & j
\end{bmatrix} \\
&= \frac{1}{2}
\begin{bmatrix}
e^{(1+j)t} + e^{(1-j)t} & -j \left( e^{(1+j)t} - e^{(1-j)t} \right) \\
j \left( e^{(1+j)t} - e^{(1-j)t} \right) & e^{(1+j)t} + e^{(1-j)t}
\end{bmatrix} \\
&= \frac{1}{2}
\begin{bmatrix}
2 e^{t} \cos t & -j(2j)\, e^{t} \sin t \\
j(2j)\, e^{t} \sin t & 2 e^{t} \cos t
\end{bmatrix}
=
\begin{bmatrix}
e^{t} \cos t & e^{t} \sin t \\
-e^{t} \sin t & e^{t} \cos t
\end{bmatrix}
\end{align*}
Method 3: Cayley-Hamilton Theorem
The eigenvalues are $\lambda_{1,2} = 1 \pm j$.
\begin{align*}
e^{\lambda_1 t} &= \beta_0 + \beta_1 \lambda_1 \\
e^{\lambda_2 t} &= \beta_0 + \beta_1 \lambda_2
\end{align*}
Solving,
\begin{align*}
\beta_0 &= \frac{1}{2}(1+j)\, e^{(1+j)t} + \frac{1}{2}(1-j)\, e^{(1-j)t} \\
\beta_1 &= \frac{1}{2j} \left[ e^{(1+j)t} - e^{(1-j)t} \right]
\end{align*}
Hence,
\[
e^{At} = \beta_0 I + \beta_1 A =
\begin{bmatrix}
e^{t} \cos t & e^{t} \sin t \\
-e^{t} \sin t & e^{t} \cos t
\end{bmatrix}
\]
We will now show through an example how to derive the discrete state equation from a continuous one.
Example: Consider the following state model of a continuous time system.
\[
\dot{x}(t) =
\begin{bmatrix}
1 & 1 \\
0 & 2
\end{bmatrix}
x(t) +
\begin{bmatrix}
0 \\ 1
\end{bmatrix}
u(t), \qquad
y(t) = x_1(t)
\]
If the system is under a sampling process with period $T$, derive the discrete state model of the system.
To derive the discrete state space model, let us first compute the state transition matrix of the continuous time system using the Cayley-Hamilton Theorem.
\[
\Delta(\lambda) = |\lambda I - A| =
\begin{vmatrix}
\lambda - 1 & -1 \\
0 & \lambda - 2
\end{vmatrix}
= (\lambda - 1)(\lambda - 2) = 0
\qquad \Rightarrow \qquad
\lambda_1 = 1, \quad \lambda_2 = 2
\]
Let $f(\lambda) = e^{\lambda t}$ and $g(\lambda) = \beta_0 + \beta_1 \lambda$.
This implies
\begin{align*}
e^{t} &= \beta_0 + \beta_1 && (\lambda_1 = 1) \\
e^{2t} &= \beta_0 + 2\beta_1 && (\lambda_2 = 2)
\end{align*}
Solving the above equations,
\[
\beta_1 = e^{2t} - e^{t}, \qquad \beta_0 = 2e^{t} - e^{2t}
\]
Then
\[
e^{At} = \beta_0 I + \beta_1 A =
\begin{bmatrix}
e^{t} & e^{2t} - e^{t} \\
0 & e^{2t}
\end{bmatrix}
\]
Thus the discrete state matrix $\bar{A}$ is given as
\[
\bar{A} = \phi(T) =
\begin{bmatrix}
e^{T} & e^{2T} - e^{T} \\
0 & e^{2T}
\end{bmatrix}
\]
The discrete input matrix $\bar{B}$ can be computed as
\begin{align*}
\bar{B} = \Gamma(T) &= \int_0^T \phi(T - \tau)
\begin{bmatrix} 0 \\ 1 \end{bmatrix} d\tau
= \int_0^T
\begin{bmatrix}
e^{T}e^{-\tau} & e^{2T}e^{-2\tau} - e^{T}e^{-\tau} \\
0 & e^{2T}e^{-2\tau}
\end{bmatrix}
\begin{bmatrix} 0 \\ 1 \end{bmatrix} d\tau \\
&=
\begin{bmatrix}
e^{T} - 1 & 0.5e^{2T} - e^{T} + 0.5 \\
0 & 0.5e^{2T} - 0.5
\end{bmatrix}
\begin{bmatrix} 0 \\ 1 \end{bmatrix}
=
\begin{bmatrix}
0.5e^{2T} - e^{T} + 0.5 \\
0.5e^{2T} - 0.5
\end{bmatrix}
\end{align*}
The discrete state equation is thus described by
\[
x((k+1)T) =
\begin{bmatrix}
e^{T} & e^{2T} - e^{T} \\
0 & e^{2T}
\end{bmatrix}
x(kT) +
\begin{bmatrix}
0.5e^{2T} - e^{T} + 0.5 \\
0.5e^{2T} - 0.5
\end{bmatrix}
u(kT)
\]
\[
y(kT) =
\begin{bmatrix}
1 & 0
\end{bmatrix}
x(kT)
\]
When T = 1, the state equations become
\[
x(k+1) =
\begin{bmatrix}
2.72 & 4.67 \\
0 & 7.39
\end{bmatrix}
x(k) +
\begin{bmatrix}
1.48 \\
3.19
\end{bmatrix}
u(k)
\]
\[
y(k) =
\begin{bmatrix}
1 & 0
\end{bmatrix}
x(k)
\]
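The same discretization can be reproduced numerically. A short sketch (assuming NumPy and SciPy; scipy.signal.cont2discrete performs the zero-order-hold discretization that matches the $\int_0^T \phi(T-\tau)B\,d\tau$ construction above for a piecewise-constant input) recovers the $T = 1$ matrices:

```python
import numpy as np
from scipy.signal import cont2discrete

# Continuous-time model of the example, discretized with a zero-order hold and T = 1.
A = np.array([[1.0, 1.0],
              [0.0, 2.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
T = 1.0

Ad, Bd, Cd, Dd, _ = cont2discrete((A, B, C, D), T, method='zoh')
print(np.round(Ad, 2))   # [[2.72 4.67], [0. 7.39]]
print(np.round(Bd, 2))   # [[1.48], [3.19]]
```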