SA01010048
LING QING
2.1 Consider the memoryless system with characteristics shown in Fig 2.19, in which u denotes
the input and y the output. Which of them is a linear system? Is it possible to introduce a new
output so that the system in Fig 2.19(b) is linear?
Figure 2.19
Answer: The input-output relation in Fig 2.19(a) can be described as
y = a * u
Here a is a constant. It is a memoryless system, and it is easy to verify that it is a linear system.
The input-output relation in Fig 2.19(b) can be described as
y = a * u + b
Here a and b are both constants. Check whether it has the additivity property. Let
y1 = a * u1 + b
y2 = a * u2 + b
then
y1 + y2 = a * (u1 + u2) + 2b
while the output excited by u1 + u2 is a * (u1 + u2) + b. So the system does not have the additivity property, and therefore is not a linear system.
But we can introduce a new output so that the system becomes linear. Let
z = y - b
then
z = a * u
With z as the new output, it is easy to verify that the system is linear.
The input-output relation in Fig 2.19(c) can be described as
y = a(u) * u
where the slope a(u) is a function of the input u. Choose two different inputs u1 and u2, with slopes a1 = a(u1) and a2 = a(u2), giving the outputs
y1 = a1 * u1
y2 = a2 * u2
Assume a1 ≠ a2. Then in general
y1 + y2 = a1 * u1 + a2 * u2 ≠ a(u1 + u2) * (u1 + u2)
So the system does not have the additivity property, and therefore is not a linear system.
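The additivity test for Fig 2.19(b) can be sketched numerically. This is an illustrative check only; the constants a = 2 and b = 1 are hypothetical values, not taken from the figure.

```python
# Numerical sketch of the additivity argument above (a = 2, b = 1 assumed).
def system_b(u, a=2.0, b=1.0):
    """Memoryless system of Fig 2.19(b): y = a*u + b."""
    return a * u + b

def system_z(u, a=2.0, b=1.0):
    """New output z = y - b, which removes the offset."""
    return system_b(u, a, b) - b

u1, u2 = 0.5, -1.5
y1, y2 = system_b(u1), system_b(u2)

# Additivity fails: the response to u1+u2 differs from y1+y2 by exactly b.
assert system_b(u1 + u2) != y1 + y2
assert abs((y1 + y2) - (system_b(u1 + u2) + 1.0)) < 1e-12

# The shifted output z = y - b satisfies both additivity and homogeneity.
assert system_z(u1 + u2) == system_z(u1) + system_z(u2)
assert system_z(3.0 * u1) == 3.0 * system_z(u1)
```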
2.2 The impulse response of an ideal lowpass filter is given by
g(t) = 2ω * sin 2ω(t - t0) / (2ω(t - t0))
for all t, where ω and t0 are constants. Is the ideal lowpass filter causal? Is it possible to build
the filter in the real world?
Answer: Consider two different times ts and tr with ts < tr. The value g(ts - tr) is the output at
time ts excited by an impulse applied at time tr, and it is in general nonzero. This means the
system output at time ts depends on the future input at time tr; in other words, the system is
not causal. Since every physical system must be causal, it is impossible to build this filter in
the real world.
2.3 Consider a system whose input u and output y are related by
y(t) = (Pa u)(t) := u(t) for t ≤ a, and 0 for t > a
where a is a fixed constant. The system is called a truncation operator, which chops off the
input after time a. Is the system linear? Is it time-invariant? Is it causal?
Answer: It is easy to verify that the truncation operator satisfies both additivity and homogeneity, so the system is linear.
Consider whether it is time-invariant. Let the input start at time t0 with t0 < a, so the input is
u(t), t ≥ t0. The corresponding output is
y(t) = u(t) for t0 ≤ t ≤ a, and y(t) = 0 for other t
Shift the initial time to t0 + T with t0 + T > a; the input is then u(t - T), t ≥ t0 + T, and the output is
y'(t) = 0
If u(t) is not identically zero, then y'(t) ≠ y(t - T). According to the definition, this system is
not time-invariant.
For any time t, the output y(t) is determined exclusively by the current input u(t). So it is a
causal system.
2.4 The input and output of an initially relaxed system can be denoted by y=Hu, where H is some
mathematical operator. Show that if the system is causal, then
Pa y = Pa Hu = Pa HPa u
where Pa is the truncation operator defined in Problem 2.3. Is it true PaHu=HPau?
Answer: Notice y = Hu, so
Pa y = Pa Hu
Define the initial time as 0. Since the system is causal and initially relaxed, both the input u
and the output y are zero before time 0. If a ≤ 0, then Pa u = 0 and Pa y = 0, so
Pa Hu = Pa H Pa u
holds trivially (both sides are zero).
If a > 0, we can divide u into two parts:
p(t) = u(t) for 0 ≤ t ≤ a, and 0 for other t
q(t) = u(t) for t > a, and 0 for other t
so that u(t) = p(t) + q(t). Pay attention that the system is causal, so the part q(t), which is
nonzero only after time a, cannot affect the output on [0, a]. That is, the system output from
0 to a is determined only by p(t). Since Pa Hu chops off Hu after time a, we can conclude
Pa Hu = Pa Hp. Noticing that p = Pa u, we then have
Pa Hu = Pa H Pa u
It means that under any condition the following equation is correct:
Pa y = Pa Hu = Pa H Pa u
However, Pa Hu = H Pa u is false in general. Consider a delay operator H with Hu(t) = u(t - 2),
let a = 1, and let u(t) be a unit step beginning at time 0. Then Hu begins at time 2, so Pa Hu is
identically zero, while H Pa u is a pulse covering the interval [2, 3]; the two are different.
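The counterexample above can be sketched in discrete time. This is an illustrative check only; the sample grid and the helper names `delay2` and `trunc` are assumptions, not part of the original solution.

```python
import numpy as np

# Discrete-time sketch of the counterexample: H delays by 2 time units,
# Pa truncates after a = 1, and u is a unit step starting at 0.
t = np.arange(0, 6)                 # sample times 0, 1, ..., 5
u = np.ones_like(t, dtype=float)    # unit step

def delay2(x):
    """Hu(t) = u(t - 2): shift right by two samples, zero-fill."""
    y = np.zeros_like(x)
    y[2:] = x[:-2]
    return y

def trunc(x, a=1):
    """(Pa x)(t) = x(t) for t <= a, 0 for t > a."""
    y = x.copy()
    y[t > a] = 0.0
    return y

PaHu = trunc(delay2(u))   # identically zero: Hu starts at t = 2 > a
HPau = delay2(trunc(u))   # pulse occupying t = 2, 3

assert np.all(PaHu == 0)
assert list(HPau) == [0, 0, 1, 1, 0, 0]
```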
2.5 Consider a system with input u and output y. Three experiments are performed on the system
using the inputs u1(t), u2(t) and u3(t) for t>=0. In each case, the initial state x(0) at time t=0 is
the same. The corresponding outputs are denoted by y1,y2 and y3. Which of the following
statements are correct if x(0) ≠ 0?
1. If u3=u1+u2, then y3=y1+y2.
2. If u3=0.5(u1+u2), then y3=0.5(y1+y2).
3. If u3=u1-u2, then y3=y1-y2.
Answer: A linear system has the superposition property: the initial state α1 x1(t0) + α2 x2(t0)
together with the input α1 u1(t) + α2 u2(t), t ≥ t0, excites the output α1 y1(t) + α2 y2(t), t ≥ t0.
In case 1: α1 = 1, α2 = 1, and
α1 x1(t0) + α2 x2(t0) = 2x(0) ≠ x(0)
so the superposition property does not apply, and y3 ≠ y1 + y2 in general.
In case 2: α1 = 0.5, α2 = 0.5, and
α1 x1(t0) + α2 x2(t0) = x(0)
So y3 = 0.5(y1 + y2); statement 2 is correct.
In case 3: α1 = 1, α2 = -1, and
α1 x1(t0) + α2 x2(t0) = 0 ≠ x(0)
So y3 ≠ y1 - y2 in general.
2.6 Consider a system whose input and output are related by
y(t) = u²(t)/u(t-1) if u(t-1) ≠ 0, and y(t) = 0 if u(t-1) = 0
for all t.
Show that the system satisfies the homogeneity property but not the additivity property.
Answer: Suppose the system is initially relaxed, and apply the scaled input
p(t) = a * u(t)
where a is any real constant. The output q(t) is
q(t) = p²(t)/p(t-1) = a²u²(t)/(a u(t-1)) = a * u²(t)/u(t-1) = a * y(t) if u(t-1) ≠ 0
q(t) = 0 = a * y(t) if u(t-1) = 0
so the system satisfies the homogeneity property. Additivity fails: choose u1 and u2 with
u1(t-1) = -u2(t-1) ≠ 0; then (u1 + u2)(t-1) = 0, so the output excited by u1 + u2 is 0 at time t,
while y1(t) + y2(t) is in general nonzero.
2.7 Show that if the additivity property holds, then the homogeneity property holds for all rational numbers a.
Answer: Let
a = m/n
where m and n are both integers. First, if the input-output pair x → y holds, then by additivity
x + x → y + y, and by induction
mx → my
Next, write x = n * (x/n) and suppose
x/n → u
Using additivity,
n * (x/n) = x → n * u
so y = n * u, that is, u = y/n. This says
x/n → y/n
Combining the two steps,
x * m/n → y * m/n
that is, ax → ay. This is the homogeneity property for rational a.
2.8 Let g(t,T)=g(t+a,T+a) for all t,T and a. Show that g(t,T) depends only on t-T.
Answer: Define
y = t - T, x = t + T
so that
t = (x + y)/2, T = (x - y)/2
Then, choosing a = -(x - y)/2,
g(t, T) = g((x+y)/2, (x-y)/2) = g((x+y)/2 + a, (x-y)/2 + a) = g(y, 0)
So
∂g(t, T)/∂x = ∂g(y, 0)/∂x = 0
that is, g(t, T) is independent of x = t + T and depends only on y = t - T.
2.9 Consider a system with the impulse response shown in Fig 2.20(a). What is the zero-state
response excited by the input u(t) shown in Fig 2.20(b)?
Figure 2.20
Answer: From Fig 2.20,
g(t) = t for 0 ≤ t ≤ 1, and g(t) = 2 - t for 1 ≤ t ≤ 2
u(t) = 1 for 0 ≤ t ≤ 1, and u(t) = -1 for 1 ≤ t ≤ 2
and y(t) equals the convolution integral
y(t) = ∫₀ᵗ g(r) u(t - r) dr
If 0 ≤ t ≤ 1, then u(t - r) = 1 and g(r) = r on [0, t], so
y(t) = ∫₀ᵗ r dr = t²/2
If 1 ≤ t ≤ 2, split the integral:
y(t) = ∫₀^(t-1) g(r)u(t-r)dr + ∫_(t-1)^1 g(r)u(t-r)dr + ∫₁ᵗ g(r)u(t-r)dr = y1(t) + y2(t) + y3(t)
Calculate the integrals separately:
y1(t) = ∫₀^(t-1) g(r)u(t-r)dr = -∫₀^(t-1) r dr = -(t-1)²/2   (here t - r ∈ [1, t], so u(t-r) = -1)
y2(t) = ∫_(t-1)^1 g(r)u(t-r)dr = ∫_(t-1)^1 r dr = (1 - (t-1)²)/2   (here t - r ∈ [t-1, 1], so u(t-r) = 1)
y3(t) = ∫₁ᵗ g(r)u(t-r)dr = ∫₁ᵗ (2 - r) dr = 2(t-1) - (t² - 1)/2
So, for 1 ≤ t ≤ 2,
y(t) = y1(t) + y2(t) + y3(t) = -(3/2)t² + 4t - 2
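The closed-form result can be sketched against a direct numerical convolution. This is an illustrative check only; the step size `dt` is an arbitrary choice.

```python
import numpy as np

# Numerical check of y(t) = t^2/2 on [0,1] and y(t) = -(3/2)t^2 + 4t - 2 on [1,2].
dt = 1e-3

def g(x):
    """Triangular impulse response of Fig 2.20(a)."""
    return np.where(x <= 1, x, 2 - x) * ((x >= 0) & (x <= 2))

def u(x):
    """Input of Fig 2.20(b): +1 on [0,1], -1 on (1,2]."""
    return 1.0 * ((x >= 0) & (x <= 1)) - 1.0 * ((x > 1) & (x <= 2))

def y(tval):
    rr = np.arange(0, tval, dt)
    return np.sum(g(rr) * u(tval - rr)) * dt   # Riemann-sum convolution

assert abs(y(0.8) - 0.8**2 / 2) < 1e-2
assert abs(y(1.5) - (-1.5 * 1.5**2 + 4 * 1.5 - 2)) < 1e-2
```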
2.10 Consider a system described by
ÿ(t) + 2ẏ(t) - 3y(t) = u̇(t) - u(t)
What are the transfer function and the impulse response of the system?
Answer: Applying the Laplace transform to the system input-output equation, supposing that the
system is initially relaxed:
s²Y(s) + 2sY(s) - 3Y(s) = sU(s) - U(s)
The system transfer function is
G(s) = Y(s)/U(s) = (s - 1)/(s² + 2s - 3) = 1/(s + 3)
Impulse response:
g(t) = L⁻¹[G(s)] = L⁻¹[1/(s + 3)] = e^(-3t)
2.11 Let y(t) be the unit-step response of a linear time-invariant system. Show that the impulse
response of the system equals dy(t)/dt.
Answer: Let m(t) be the impulse response and G(s) the system transfer function. The unit-step
input has Laplace transform 1/s, so
Y(s) = G(s) * (1/s)
The impulse response has transform M(s) = G(s), hence
M(s) = Y(s) * s
Since y(0) = 0, multiplication by s corresponds to differentiation in time:
m(t) = dy(t)/dt
2.12 Consider a two-input and two-output system described by
[D11(s) D12(s); D21(s) D22(s)] [Y1(s); Y2(s)] = [N11(s) N12(s); N21(s) N22(s)] [U1(s); U2(s)]
Find the transfer matrix of the system.
Answer: Solving for the outputs,
[Y1(s); Y2(s)] = [D11(s) D12(s); D21(s) D22(s)]⁻¹ [N11(s) N12(s); N21(s) N22(s)] [U1(s); U2(s)]
so the transfer matrix is
G(s) = [D11(s) D12(s); D21(s) D22(s)]⁻¹ [N11(s) N12(s); N21(s) N22(s)]
which is well defined provided the inverse of [D11(s) D12(s); D21(s) D22(s)] exists.
2.13 Consider the feedback systems shown in Fig 2.5. Show that the unit-step responses of the
positive-feedback system are as shown in Fig 2.21(a) for a = 1 and in Fig 2.21(b) for a = 0.5.
Show also that the unit-step responses of the negative-feedback system are as shown in Fig
2.21(c) and 2.21(d), respectively, for a = 1 and a = 0.5.
Fig 2.21
Answer: First, consider the positive-feedback system. Its impulse response is
g(t) = Σ_{i=1}^∞ aⁱ δ(t - i)
so the unit-step response is
y(t) = Σ_{i=1}^∞ aⁱ r(t - i)
where r(t) denotes the unit step. Hence
y(n) = Σ_{i=1}^n aⁱ, and y(t) = y(n) for n ≤ t < n + 1
It is easy to draw the staircase response curves for a = 1 and a = 0.5, as shown in Fig 2.21(a)
and Fig 2.21(b).
Second, consider the negative-feedback system. Its impulse response is
g(t) = Σ_{i=1}^∞ (-a)ⁱ δ(t - i)
so
y(t) = Σ_{i=1}^∞ (-a)ⁱ r(t - i), y(n) = Σ_{i=1}^n (-a)ⁱ, and y(t) = y(n) for n ≤ t < n + 1
The response curves for a = 1 and a = 0.5 are as shown in Fig 2.21(c) and Fig 2.21(d).
2.14 Draw an op-amp circuit diagram for
ẋ = [2 4; 0 5] x + [2; 4] u
y = [3 10] x - 2u
2.15 Find state equations to describe the pendulum systems in Fig 2.22. The systems are useful to
model one- or two-link robotic manipulators. If θ, θ1 and θ2 are very small, can you
consider the two systems as linear?
Answer: For Fig 2.22(a), the application of Newton's law to the vertical and horizontal movements yields
f cos θ - mg = m d²(l cos θ)/dt² = -ml(θ̈ sin θ + θ̇² cos θ)
u - f sin θ = m d²(l sin θ)/dt² = ml(θ̈ cos θ - θ̇² sin θ)
Assuming θ and θ̇ to be small, we can use the approximations sin θ = θ, cos θ = 1 and drop the θ̇² terms. The first equation then gives f = mg, and substituting this into the second equation yields
θ̈ = -(g/l) θ + (1/(ml)) u
Defining the state variables x1 = θ, x2 = θ̇, we obtain
ẋ = [0 1; -g/l 0] x + [0; 1/(ml)] u
y = [1 0] x
which is a linear equation.
For Fig 2.22(b), the application of Newton's law to the vertical movements of the two masses yields
f1 cos θ1 - f2 cos θ2 - m1 g = m1 d²(l1 cos θ1)/dt²
f2 cos θ2 - m2 g = m2 d²(l1 cos θ1 + l2 cos θ2)/dt²
together with the corresponding horizontal equations. Assuming sin θi = θi, cos θi = 1 and retaining only the linear terms in θ1, θ2 and their derivatives, we obtain
θ̈1 = -((m1+m2)g/(m1 l1)) θ1 + (m2 g/(m1 l1)) θ2
θ̈2 = ((m1+m2)g/(m1 l2)) θ1 - ((m1+m2)g/(m1 l2)) θ2 + (1/(m2 l2)) u
Define the state variables x1 = θ1, x2 = θ̇1, x3 = θ2, x4 = θ̇2 and the output y = [θ1; θ2]. Then
ẋ = [0 1 0 0;
     -(m1+m2)g/(m1 l1) 0 m2 g/(m1 l1) 0;
     0 0 0 1;
     (m1+m2)g/(m1 l2) 0 -(m1+m2)g/(m1 l2) 0] x + [0; 0; 0; 1/(m2 l2)] u
y = [1 0 0 0; 0 0 1 0] x
Under the small-angle assumption both systems can be considered linear.
2.17 The soft landing phase of a lunar module descending on the moon can be modeled as shown
in Fig 2.24. The thrust generated is assumed to be proportional to the derivative of m, where
m is the mass of the module. Then the system can be described by
m ÿ = -k ṁ - m g
where g is the gravity constant on the lunar surface. Define state variables of the system as
x1 = y, x2 = ẏ, x3 = m, u = ṁ
Find a state-space equation to describe the system.
Answer: Linearize about the free-fall trajectory by writing
y = -gt²/2 + h, ẏ = -gt + ḣ, ÿ = -g + ḧ, m = m0 + p
where h and p denote small deviations of the position and the mass. Substituting into the system equation:
(m0 + p)(-g + ḧ) = -k ṗ - (m0 + p) g
-m0 g - p g + m0 ḧ ≈ -k ṗ - m0 g - p g
so, dropping the second-order term p ḧ,
m0 ḧ = -k ṗ
Define the state variables x1 = h, x2 = ḣ, x3 = p and the input u = ṗ. Then
ẋ = [0 1 0; 0 0 0; 0 0 0] x + [0; -k/m0; 1] u
y = [1 0 0] x
2.19 Find a state equation to describe the network shown in Fig 2.26. Find also its transfer function.
Answer: Select the two capacitor voltages x1, x2 and the inductor current x3 as state variables.
With the unit element values of Fig 2.26 (R = 1, C = 1, L = 1), the circuit equations are
u = C ẋ1 + x1/R
u = C ẋ2 + x3
x3 = (x2 - L ẋ3)/R
y = x2
From the upper equations, we get
ẋ1 = -x1/(CR) + u/C = -x1 + u
ẋ2 = -x3/C + u/C = -x3 + u
ẋ3 = x2/L - R x3/L = x2 - x3
y = x2
They can be combined in matrix form as
ẋ = [-1 0 0; 0 0 -1; 0 1 -1] x + [1; 1; 0] u
y = [0 1 0] x
Use MATLAB to compute transfer function. We type:
A=[-1,0,0;0,0,-1;0,1,-1];
B=[1;1;0];
C=[0,1,0];
D=[0];
[N1,D1]=ss2tf(A,B,C,D,1)
which yields
N1 = [0 1.0000 2.0000 1.0000]
D1 = [1.0000 2.0000 2.0000 1.0000]
So the transfer function is
Ĝ(s) = (s² + 2s + 1)/(s³ + 2s² + 2s + 1) = (s + 1)/(s² + s + 1)
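The MATLAB computation can be cross-checked with SciPy's `ss2tf`, which mirrors the MATLAB call. This is an illustrative check only, using the state matrices derived above.

```python
import numpy as np
from scipy import signal

# State-space model of the network from the solution above.
A = np.array([[-1, 0, 0], [0, 0, -1], [0, 1, -1]])
B = np.array([[1], [1], [0]])
C = np.array([[0, 1, 0]])
D = np.array([[0]])

num, den = signal.ss2tf(A, B, C, D)

# Expect N1 = [0 1 2 1], D1 = [1 2 2 1], i.e. G(s) = (s^2+2s+1)/(s^3+2s^2+2s+1).
assert np.allclose(num.ravel(), [0, 1, 2, 1], atol=1e-8)
assert np.allclose(den, [1, 2, 2, 1], atol=1e-8)
```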
2.20 Find a state equation to describe the network shown in Fig 2.2. Compute also its transfer
matrix.
Answer: Select the state variables as shown in Fig 2.2 (x1, x2 the capacitor voltages, x3 the
inductor current). Applying Kirchhoff's laws:
u1 = R1 C1 ẋ1 + x1 + x2 + L1 ẋ3 + R2 x3
C1 ẋ1 + u2 = C2 ẋ2
x3 = C2 ẋ2
y = L1 ẋ3 + R2 x3
Substituting x3 = C2 ẋ2 into the second equation gives C1 ẋ1 = x3 - u2, and the first equation
becomes u1 = R1(x3 - u2) + x1 + x2 + L1 ẋ3 + R2 x3. From the upper equations, we get
ẋ1 = x3/C1 - u2/C1
ẋ2 = x3/C2
ẋ3 = -x1/L1 - x2/L1 - (R1 + R2) x3/L1 + u1/L1 + R1 u2/L1
y = -x1 - x2 - R1 x3 + u1 + R1 u2
They can be combined in matrix form as
ẋ = [0 0 1/C1; 0 0 1/C2; -1/L1 -1/L1 -(R1+R2)/L1] x + [0 -1/C1; 0 0; 1/L1 R1/L1] [u1; u2]
y = [-1 -1 -R1] x + [1 R1] [u1; u2]
Applying the Laplace transform to the upper equations (zero initial conditions):
ŷ(s) = [(L1 s + R2)/Δ(s)] û1(s) + [(L1 s + R2)(R1 + 1/(C2 s))/Δ(s)] û2(s)
where Δ(s) = L1 s + R1 + R2 + 1/(C1 s) + 1/(C2 s). The transfer matrix is therefore
G(s) = [(L1 s + R2)/Δ(s)   (L1 s + R2)(R1 + 1/(C2 s))/Δ(s)]
2.18 Find the transfer functions from u to y1 and from y1 to y of the hydraulic tank system
shown in Fig 2.25. Does the transfer function from u to y equal the product of the two
transfer functions? Is this also true for the system shown in Fig 2.14?
Answer: The equations relating u, y1 and y are
y1 = x1/R1
y = x2/R2
A1 dx1/dt = u - y1
A2 dx2/dt = y1 - y
Applying the Laplace transform:
ŷ1(s)/û(s) = 1/(1 + A1 R1 s)
ŷ(s)/ŷ1(s) = 1/(1 + A2 R2 s)
ŷ(s)/û(s) = 1/[(1 + A1 R1 s)(1 + A2 R2 s)]
So
ŷ/û = (ŷ1/û) * (ŷ/ŷ1)
and the transfer function from u to y equals the product of the two transfer functions.
But it is not true for Fig 2.14, because of the loading problem between the two tanks.
3.1 Consider Fig 3.1. What is the representation of the vector x with respect to the basis {q1, i2}?
What is the representation of q1 with respect to {i2, q2}?
Answer: From Fig 3.1, x = [1 3]', q1 = [3 1]', i2 = [0 1]' and q2 = [2 2]'.
The representation of x with respect to {q1, i2} is [1/3 8/3]'; this can be verified from
x = [1; 3] = [3 0; 1 1] [1/3; 8/3]
The representation of q1 with respect to {i2, q2} is [-2 3/2]'; this can be verified from
q1 = [3; 1] = [0 2; 1 2] [-2; 3/2]
3.2 What are the 1-norm, 2-norm, and infinite-norm of the vectors x1 = [2 -3 1]' and x2 = [1 1 1]'?
Answer:
‖x1‖₁ = Σ|x1i| = 2 + 3 + 1 = 6
‖x1‖₂ = (x1' x1)^(1/2) = (2² + (-3)² + 1²)^(1/2) = √14
‖x1‖∞ = max|x1i| = 3
‖x2‖₁ = 1 + 1 + 1 = 3
‖x2‖₂ = (1² + 1² + 1²)^(1/2) = √3
‖x2‖∞ = max|x2i| = 1
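The norms can be sketched with NumPy as a quick cross-check of the arithmetic above.

```python
import numpy as np

x1 = np.array([2, -3, 1])
x2 = np.array([1, 1, 1])

# 1-norm, 2-norm and infinity-norm of x1 and x2, as computed above.
assert np.linalg.norm(x1, 1) == 6
assert np.isclose(np.linalg.norm(x1, 2), np.sqrt(14))
assert np.linalg.norm(x1, np.inf) == 3
assert np.linalg.norm(x2, 1) == 3
assert np.isclose(np.linalg.norm(x2, 2), np.sqrt(3))
assert np.linalg.norm(x2, np.inf) == 1
```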
3.3 Find two orthonormal vectors that span the same space as the two vectors in Problem 3.2.
Answer: Apply the Schmidt orthonormalization procedure:
u1 = x1 = [2 -3 1]'
q1 = u1/‖u1‖ = (1/√14) [2 -3 1]'
u2 = x2 - (q1' x2) q1 = x2 - 0 * q1 = [1 1 1]'
q2 = u2/‖u2‖ = (1/√3) [1 1 1]'
The two orthonormal vectors are
q1 = x1/‖x1‖ = (1/√14) [2 -3 1]', q2 = x2/‖x2‖ = (1/√3) [1 1 1]'
(x1 and x2 happen to be orthogonal already, since x1' x2 = 2 - 3 + 1 = 0.)
3.4 Let A be an n × m matrix (n ≥ m) whose columns are orthonormal. Show that A'A = Im. What can you say about AA'?
Answer: Write A = [a1 a2 ⋯ am] = [aij]. Orthonormality of the columns means
ai' aj = Σ_{l=1}^n a_{li} a_{lj} = 0 if i ≠ j, and 1 if i = j
so
A'A = [a1'; a2'; ⋮; am'] [a1 a2 ⋯ am] = [ai' aj] = Im
On the other hand,
AA' = Σ_{i=1}^m ai ai' = (Σ_{i=1}^m a_{il} a_{jl})_{n×n}
and in general Σ_i a_{il} a_{jl} is not 0 for l ≠ j nor 1 for l = j, so AA' ≠ In in general.
If A is square (n = m), then A is an orthogonal matrix and AA' = A'A = Im.
3.6 Find bases of the range spaces of the following matrices:
A1 = [0 1 0; 0 0 0; 0 0 1], A2 = [4 1 -1; 3 2 0; 1 1 0], A3 = [1 2 3 4; 0 -1 -2 2; 0 0 0 1]
Answer: The last two columns of A1 are linearly independent, so the set {[1; 0; 0], [0; 0; 1]} can be
used as a basis of the range space of A1. All columns of A2 are linearly independent, so the set
{[4; 3; 1], [1; 2; 1], [-1; 0; 0]} can be used as a basis of the range space of A2.
Let A3 = [a1 a2 a3 a4]; the columns a1, a2 and a4 are linearly independent while a3 = 2a2 - a1,
so {a1, a2, a4} can be used as a basis of the range space of A3.
3.7 Consider the linear algebraic equation
[2 -1; -3 3; -1 2] x = [1; 0; 1] = y
It has three equations and two unknowns. Does a solution x exist in the equation? Is the
solution unique? Does a solution exist if y = [1 1 1]'?
Answer: Let A = [2 -1; -3 3; -1 2] = [a1 a2]. Since y is the sum of the two columns, y = a1 + a2,
y lies in the range space of A and x = [1 1]' is a solution. Since
Nullity(A) = 2 - rank(A) = 0
the solution is unique. For y = [1 1 1]', rank([A y]) = 3 ≠ rank(A) = 2, so y does not lie in the
range space of A and no solution exists.
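The existence and uniqueness conclusions can be sketched with NumPy rank computations, using the matrices from the solution above.

```python
import numpy as np

A = np.array([[2, -1], [-3, 3], [-1, 2]])
y = np.array([1, 0, 1])

# y = a1 + a2, so x = [1, 1]' solves Ax = y; nullity = 2 - rank = 0 makes it unique.
x = np.array([1, 1])
assert np.allclose(A @ x, y)
assert np.linalg.matrix_rank(A) == 2

# For y = [1, 1, 1]' the augmented matrix gains rank: no solution exists.
y2 = np.array([1, 1, 1])
assert np.linalg.matrix_rank(np.column_stack([A, y2])) == 3
```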
3.8 Find the general solution of
[1 2 3 4; 0 -1 -2 2; 0 0 0 1] x = [3; 2; 1]
How many parameters do you have?
Answer: Let A = [1 2 3 4; 0 -1 -2 2; 0 0 0 1] and y = [3; 2; 1]. We can readily obtain
rank(A) = rank([A y]) = 3, so solutions exist. Nullity(A) = 4 - rank(A) = 1, which means the
dimension of the null space of A is 1, so the number of parameters in the general solution is 1.
The general solution of Ax = y can be expressed as
x = xp + αn = [-1; 0; 0; 1] + α [1; -2; 1; 0] for any real α
3.9 Find the solution of the equation in Example 3.3 that has the smallest Euclidean norm.
Answer: The general solution in Example 3.3 is x = xp + α1 n1 + α2 n2 for any real α1 and α2,
and its Euclidean norm satisfies
‖x‖² = α1² + (α1 + 2α2 - 4)² + α1² + α2² = 3α1² + 5α2² + 4α1α2 - 8α1 - 16α2 + 16
Setting the partial derivatives to zero:
∂‖x‖²/∂α1 = 0 ⇒ 3α1 + 2α2 - 4 = 0 ⇒ α1 = 4/11
∂‖x‖²/∂α2 = 0 ⇒ 5α2 + 2α1 - 8 = 0 ⇒ α2 = 16/11
Substituting α1 = 4/11 and α2 = 16/11 into the general solution gives the solution with the
smallest Euclidean norm.
3.10 Find the solution in Problem 3.8 that has the smallest Euclidean norm.
Answer: From Problem 3.8, the general solution is x = [-1; 0; 0; 1] + α [1; -2; 1; 0]. Its norm satisfies
‖x‖² = (α - 1)² + 4α² + α² + 1 = 6α² - 2α + 2
which is minimized at α = 1/6, giving
x = [-5/6 -1/3 1/6 1]'
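The minimum-norm solution can be cross-checked with the pseudoinverse, which returns exactly the minimum-norm solution of a consistent underdetermined system. The matrices come from Problem 3.8 above.

```python
import numpy as np

A = np.array([[1, 2, 3, 4], [0, -1, -2, 2], [0, 0, 0, 1]])
y = np.array([3, 2, 1])

# pinv(A) @ y gives the minimum-norm solution of the consistent system Ax = y.
x_min = np.linalg.pinv(A) @ y
assert np.allclose(A @ x_min, y)
assert np.allclose(x_min, [-5/6, -1/3, 1/6, 1])

# Any other solution x_p + a*n has larger Euclidean norm.
xp = np.array([-1.0, 0, 0, 1])
n = np.array([1.0, -2, 1, 0])
for a in (0.0, 0.5, 1.0):
    assert np.linalg.norm(xp + a * n) > np.linalg.norm(x_min)
```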
3.11 Consider the equation
x[n] = Aⁿ x[0] + [b Ab ⋯ A^(n-1) b] [u[n-1]; u[n-2]; ⋯; u[0]]
where A is an n × n matrix and b is an n × 1 column vector. Under what conditions on A and b will there
exist u[0], u[1], …, u[n-1] to meet the equation for any x[n] and x[0]?
Answer: Rewrite the equation as
[b Ab ⋯ A^(n-1) b] [u[n-1]; ⋯; u[0]] = x[n] - Aⁿ x[0]
where [b, Ab, …, A^(n-1) b] is an n × n matrix and x[n] - Aⁿ x[0] is an n × 1 column vector. From the
equation we can see that u[0], u[1], …, u[n-1] exist to meet the equation for any x[n] and x[0] if and
only if
rank [b, Ab, …, A^(n-1) b] = n
3.12 Given
A = [2 1 0 0; 0 2 1 0; 0 0 2 0; 0 0 0 1], b = [0; 0; 1; 1], b̄ = [1; 2; 3; 1]
what are the representations of A with respect to the basis {b, Ab, A²b, A³b} and the basis
{b̄, Ab̄, A²b̄, A³b̄}, respectively?
Answer: For the basis {b, Ab, A²b, A³b}:
Ab = [0; 1; 2; 1], A²b = A(Ab) = [1; 4; 4; 1], A³b = A(A²b) = [6; 12; 8; 1], A⁴b = A(A³b) = [24; 32; 16; 1]
and one can verify that
A⁴b = -8b + 20Ab - 18A²b + 7A³b
so we have
A [b Ab A²b A³b] = [b Ab A²b A³b] [0 0 0 -8; 1 0 0 20; 0 1 0 -18; 0 0 1 7]
Thus the representation of A with respect to {b, Ab, A²b, A³b} is
Ā = [0 0 0 -8; 1 0 0 20; 0 1 0 -18; 0 0 1 7]
For the basis {b̄, Ab̄, A²b̄, A³b̄}:
Ab̄ = [4; 7; 6; 1], A²b̄ = [15; 20; 12; 1], A³b̄ = [50; 52; 24; 1], A⁴b̄ = [152; 128; 48; 1]
and again
A⁴b̄ = -8b̄ + 20Ab̄ - 18A²b̄ + 7A³b̄
so the representation of A with respect to {b̄, Ab̄, A²b̄, A³b̄} is the same companion matrix Ā.
(The coefficients -8, 20, -18, 7 come from the characteristic polynomial of A,
Δ(λ) = (λ - 2)³(λ - 1) = λ⁴ - 7λ³ + 18λ² - 20λ + 8.)
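The companion-form representation can be sketched numerically: form Q from the basis vectors and compute Q⁻¹AQ. The matrices come from the solution above.

```python
import numpy as np

A = np.array([[2, 1, 0, 0], [0, 2, 1, 0], [0, 0, 2, 0], [0, 0, 0, 1]], dtype=float)
b = np.array([0.0, 0, 1, 1])

# Basis {b, Ab, A^2 b, A^3 b} assembled as columns of Q.
Q = np.column_stack([b, A @ b, A @ A @ b, A @ A @ A @ b])
A_bar = np.linalg.inv(Q) @ A @ Q

companion = np.array([[0, 0, 0, -8.0],
                      [1, 0, 0, 20],
                      [0, 1, 0, -18],
                      [0, 0, 1, 7]])
assert np.allclose(A_bar, companion)
```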
3.13 Find Jordan-form representations of the following matrices:
A1 = [1 4 10; 0 2 0; 0 0 3], A2 = [0 1 0; 0 0 1; -2 -4 -3], A3 = [1 0 1; 0 1 0; 0 0 2], A4 = [0 4 3; 0 20 16; 0 -25 -20]
Answer: The characteristic polynomial of A1 is Δ1(λ) = det(λI - A1) = (λ-1)(λ-2)(λ-3); the
eigenvalues 1, 2, 3 are all distinct, so the Jordan-form representation of A1 is diagonal. The
eigenvectors associated with λ = 1, 2, 3 are any nonzero solutions of
A1 q1 = q1 ⇒ q1 = [1 0 0]'
A1 q2 = 2q2 ⇒ q2 = [4 1 0]'
A1 q3 = 3q3 ⇒ q3 = [5 0 1]'
Thus the representation of A1 with respect to {q1, q2, q3} is Â1 = diag(1, 2, 3).
For A2: Δ2(λ) = det(λI - A2) = λ³ + 3λ² + 4λ + 2 = (λ + 1)(λ + 1 + j)(λ + 1 - j), so A2 has
the distinct eigenvalues -1, -1 + j and -1 - j. Since A2 is in companion form, the associated
eigenvectors are respectively [1 -1 1]', [1 -1+j -2j]' and [1 -1-j 2j]'. With
Q = [1 1 1; -1 -1+j -1-j; 1 -2j 2j]
we obtain Â2 = diag(-1, -1+j, -1-j) = Q⁻¹ A2 Q.
For A3: the eigenvalues are 1, 1, 2. Associated with the repeated eigenvalue 1 there are two
independent eigenvectors:
(A3 - I)q = 0 ⇒ q1 = [1 0 0]', q2 = [0 1 0]'
(A3 - 2I)q3 = 0 ⇒ q3 = [1 0 1]'
Thus with Q = [1 0 1; 0 1 0; 0 0 1] we have Â3 = diag(1, 1, 2) = Q⁻¹ A3 Q.
For A4: the eigenvalue 0 has multiplicity 3 with only one independent eigenvector, so the
Jordan block has order 3. A chain of generalized eigenvectors is obtained from
A4 v1 = 0 ⇒ v1 = [1 0 0]'
A4 v2 = v1 ⇒ v2 = [0 4 -5]'
A4 v3 = v2 ⇒ v3 = [0 -3 4]'
Thus the representation of A4 with respect to {v1, v2, v3} is
Â4 = [0 1 0; 0 0 1; 0 0 0] = Q⁻¹ A4 Q, where Q = [1 0 0; 0 4 -3; 0 -5 4]
3.14 Consider the companion-form matrix
A = [-α1 -α2 -α3 -α4; 1 0 0 0; 0 1 0 0; 0 0 1 0]
Show that its characteristic polynomial is given by
Δ(λ) = λ⁴ + α1λ³ + α2λ² + α3λ + α4
Show also that if λi is an eigenvalue of A, or a solution of Δ(λ) = 0, then [λi³ λi² λi 1]' is an
eigenvector of A associated with λi.
Proof: Expanding along the first row,
Δ(λ) = det(λI - A) = det [λ+α1 α2 α3 α4; -1 λ 0 0; 0 -1 λ 0; 0 0 -1 λ]
= (λ + α1) λ³ + α2 λ² + α3 λ + α4
= λ⁴ + α1λ³ + α2λ² + α3λ + α4
If λi is an eigenvalue, then Δ(λi) = 0, that is, λi⁴ = -α1λi³ - α2λi² - α3λi - α4. Direct
multiplication gives
A [λi³; λi²; λi; 1] = [-α1λi³ - α2λi² - α3λi - α4; λi³; λi²; λi] = [λi⁴; λi³; λi²; λi] = λi [λi³; λi²; λi; 1]
so [λi³ λi² λi 1]' is an eigenvector of A associated with λi.
3.15 Show that the Vandermonde determinant
det [λ1³ λ2³ λ3³ λ4³; λ1² λ2² λ3² λ4²; λ1 λ2 λ3 λ4; 1 1 1 1]
equals Π_{1≤i<j≤4} (λj - λi). Thus we conclude that the matrix is nonsingular, or equivalently
the eigenvectors in Problem 3.14 are linearly independent, if all eigenvalues are distinct.
Proof: Perform the row operations row1 → row1 - λ1·row2, row2 → row2 - λ1·row3,
row3 → row3 - λ1·row4:
det M = det [0 λ2²(λ2-λ1) λ3²(λ3-λ1) λ4²(λ4-λ1); 0 λ2(λ2-λ1) λ3(λ3-λ1) λ4(λ4-λ1); 0 (λ2-λ1) (λ3-λ1) (λ4-λ1); 1 1 1 1]
Expanding along the first column (the only nonzero entry is the 1 in position (4,1), with
cofactor sign (-1)^(4+1)) and factoring (λj - λ1) out of each column:
det M = -(λ2-λ1)(λ3-λ1)(λ4-λ1) det [λ2² λ3² λ4²; λ2 λ3 λ4; 1 1 1]
Repeating the same procedure on the 3 × 3 determinant gives
det [λ2² λ3² λ4²; λ2 λ3 λ4; 1 1 1] = (λ3-λ2)(λ4-λ2) det [λ3 λ4; 1 1] = -(λ3-λ2)(λ4-λ2)(λ4-λ3)
so
det M = (λ2-λ1)(λ3-λ1)(λ4-λ1)(λ3-λ2)(λ4-λ2)(λ4-λ3) = Π_{1≤i<j≤4} (λj - λi)
Now let a, b, c and d be the distinct eigenvalues of a matrix A. Even if A is singular, that is,
abcd = 0 with, say, a = 0, the Vandermonde matrix remains nonsingular:
det [0 b³ c³ d³; 0 b² c² d²; 0 b c d; 1 1 1 1] = bcd (d - c)(d - b)(c - b)
which is the value of Π_{1≤i<j≤4} (λj - λi) at a = 0 and is nonzero, because b, c, d are distinct
and different from 0. So singularity of A does not affect the conclusion.
Finally, let qi be the eigenvectors of A:
Aq1 = a q1, Aq2 = b q2, Aq3 = c q3, Aq4 = d q4
Suppose
α1 q1 + α2 q2 + α3 q3 + α4 q4 = 0
Applying A, A² and A³ to this equation gives
a α1 q1 + b α2 q2 + c α3 q3 + d α4 q4 = 0
a² α1 q1 + b² α2 q2 + c² α3 q3 + d² α4 q4 = 0
a³ α1 q1 + b³ α2 q2 + c³ α3 q3 + d³ α4 q4 = 0
Writing X = [α1 q1 α2 q2 α3 q3 α4 q4], the four equations read X V = 0, where V is the 4 × 4
Vandermonde matrix built from a, b, c, d. Since the eigenvalues are distinct, V is nonsingular,
so X = 0, that is, αi qi = 0 for every i. Since qi ≠ 0, all αi = 0, and the eigenvectors qi are
linearly independent.
3.16 Show that the companion-form matrix A in Problem 3.14 is nonsingular if and only if
α4 ≠ 0. Under this assumption, find its inverse.
Answer: Expanding det A along the first row,
det A = (-α4)(-1)^(1+4) det I3 = α4
so A is nonsingular if and only if α4 ≠ 0. When α4 ≠ 0, the inverse is
A⁻¹ = [0 1 0 0; 0 0 1 0; 0 0 0 1; -1/α4 -α1/α4 -α2/α4 -α3/α4]
which can be verified by direct multiplication:
A A⁻¹ = [-α1 -α2 -α3 -α4; 1 0 0 0; 0 1 0 0; 0 0 1 0] [0 1 0 0; 0 0 1 0; 0 0 0 1; -1/α4 -α1/α4 -α2/α4 -α3/α4] = I4
3.17 Consider
A = [λ λT λT²/2; 0 λ λT; 0 0 λ]
with λ ≠ 0 and T > 0. Show that [0 0 1]' is a generalized eigenvector of grade 3, and that the
three columns of
Q = [λ²T² λT²/2 0; 0 λT 0; 0 0 1]
constitute a chain of generalized eigenvectors of length 3. Verify
Q⁻¹ A Q = [λ 1 0; 0 λ 1; 0 0 λ]
Answer: We have
A - λI = [0 λT λT²/2; 0 0 λT; 0 0 0]
so
(A - λI) [0; 0; 1] = [λT²/2; λT; 0] ≠ 0
(A - λI)² [0; 0; 1] = (A - λI) [λT²/2; λT; 0] = [λ²T²; 0; 0] ≠ 0
(A - λI)³ [0; 0; 1] = (A - λI) [λ²T²; 0; 0] = 0
so v3 := [0 0 1]' is a generalized eigenvector of grade 3, and
v2 := (A - λI) v3 = [λT²/2 λT 0]', v1 := (A - λI) v2 = [λ²T² 0 0]'
form a chain of generalized eigenvectors. These are precisely the columns of Q = [v1 v2 v3],
which is nonsingular since λ ≠ 0 and T > 0. Finally,
A v1 = λ v1, A v2 = λ v2 + v1, A v3 = λ v3 + v2
so A Q = Q [λ 1 0; 0 λ 1; 0 0 λ], that is,
Q⁻¹ A Q = [λ 1 0; 0 λ 1; 0 0 λ]
3.18 Find the characteristic polynomials and the minimal polynomials of the following matrices:
(a) [λ1 1 0 0; 0 λ1 1 0; 0 0 λ1 0; 0 0 0 λ2]
(b) [λ1 1 0 0; 0 λ1 1 0; 0 0 λ1 0; 0 0 0 λ1]
(c) [λ1 1 0 0; 0 λ1 0 0; 0 0 λ1 1; 0 0 0 λ1]
(d) diag(λ1, λ1, λ1, λ1)
Answer: The characteristic polynomial is the product over all Jordan blocks, while the minimal
polynomial is determined by the largest Jordan block of each eigenvalue:
(a) Δ1(λ) = (λ - λ1)³(λ - λ2), ψ1(λ) = (λ - λ1)³(λ - λ2)
(b) Δ2(λ) = (λ - λ1)⁴, ψ2(λ) = (λ - λ1)³
(c) Δ3(λ) = (λ - λ1)⁴, ψ3(λ) = (λ - λ1)²
(d) Δ4(λ) = (λ - λ1)⁴, ψ4(λ) = (λ - λ1)
3.19 Show that if λ is an eigenvalue of A with eigenvector x, then f(λ) is an eigenvalue of f(A)
with the same eigenvector x.
Proof: Let A be an n × n matrix. By Theorem 3.5, for any function f(λ) there is a polynomial
h(λ) = β0 + β1 λ + ⋯ + β_{n-1} λ^{n-1} such that f(A) = h(A) and h agrees with f on the
spectrum of A. Since Ax = λx implies A²x = λ²x, A³x = λ³x, and generally Aᵏx = λᵏx,
f(A) x = h(A) x = (β0 I + β1 A + ⋯ + β_{n-1} A^{n-1}) x = β0 x + β1 λ x + ⋯ + β_{n-1} λ^{n-1} x = h(λ) x = f(λ) x
so f(λ) is an eigenvalue of f(A) with eigenvector x.
3.20 Show that an n × n matrix A has the property Aᵏ = 0 for k ≥ m if and only if A has
eigenvalue 0 with multiplicity n and index m or less. Such a matrix is called a nilpotent matrix.
Proof: If A has eigenvalue 0 with multiplicity n and index m or less, then the Jordan-form
representation of A is Â = diag(J1, J2, …, Jl), where each Ji is a Jordan block of order ni with
0 on the diagonal and 1 on the superdiagonal, Σ_{i=1}^l ni = n and ni ≤ m. Each block satisfies
Ji^{ni} = 0, hence Jiᵏ = 0 for k ≥ ni. So for k ≥ m,
Âᵏ = diag(J1ᵏ, …, Jlᵏ) = 0 and Aᵏ = Q Âᵏ Q⁻¹ = 0
Conversely, suppose Aᵏ = 0 for k ≥ m. If λ is an eigenvalue of A with eigenvector x ≠ 0, then
Aᵏ x = λᵏ x = 0, so λ = 0; hence all n eigenvalues of A are 0. Moreover, if some Jordan block Ji
had order ni > m, then Ji^m ≠ 0 and consequently A^m ≠ 0, a contradiction. Hence every
block has order ni ≤ m, that is, the eigenvalue 0 has index m or less.
3.21 Given
A = [1 1 0; 0 0 1; 0 0 1]
compute A¹⁰, A¹⁰³ and e^(At).
Answer: The characteristic polynomial of A is Δ(λ) = λ(λ - 1)², so the eigenvalues are 0, 1, 1.
Let h(λ) = β0 + β1 λ + β2 λ².
For f(λ) = λ¹⁰:
f(0) = h(0): 0 = β0
f(1) = h(1): 1 = β1 + β2
f'(1) = h'(1): 10 = β1 + 2β2
so β2 = 9, β1 = -8, and
A¹⁰ = -8A + 9A² = -8 [1 1 0; 0 0 1; 0 0 1] + 9 [1 1 1; 0 0 1; 0 0 1] = [1 1 9; 0 0 1; 0 0 1]
For f(λ) = λ¹⁰³: β0 = 0, β1 + β2 = 1, β1 + 2β2 = 103, so β2 = 102, β1 = -101, and
A¹⁰³ = -101A + 102A² = [1 1 102; 0 0 1; 0 0 1]
For f(λ) = e^(λt):
f(0) = h(0): 1 = β0
f(1) = h(1): eᵗ = β0 + β1 + β2
f'(1) = h'(1): t eᵗ = β1 + 2β2
so β2 = t eᵗ - eᵗ + 1, β1 = 2eᵗ - t eᵗ - 2, and
e^(At) = β0 I + β1 A + β2 A² = [eᵗ eᵗ-1 t eᵗ-eᵗ+1; 0 1 eᵗ-1; 0 0 eᵗ]
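The three results can be cross-checked numerically with NumPy/SciPy, using the matrix from the problem above.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[1, 1, 0], [0, 0, 1], [0, 0, 1]])

# Powers computed via the polynomial-of-A method above.
assert np.array_equal(np.linalg.matrix_power(A, 10),
                      np.array([[1, 1, 9], [0, 0, 1], [0, 0, 1]]))
assert np.array_equal(np.linalg.matrix_power(A, 103),
                      np.array([[1, 1, 102], [0, 0, 1], [0, 0, 1]]))

# Matrix exponential compared against the closed form at t = 0.5.
t = 0.5
e = np.exp(t)
expected = np.array([[e, e - 1, t * e - e + 1],
                     [0, 1, e - 1],
                     [0, 0, e]])
assert np.allclose(expm(A * t), expected)
```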
3.22 Use two different methods to compute e^(A1 t) and e^(A4 t) for the matrices A1 and A4 in
Problem 3.13.
Answer: Method 1 (via the Jordan form). From Problem 3.13, the representation of A1 with
respect to {[1 0 0]', [4 1 0]', [5 0 1]'} is Â1 = diag(1, 2, 3), so
e^(A1 t) = Q e^(Â1 t) Q⁻¹, where Q = [1 4 5; 0 1 0; 0 0 1]
= [1 4 5; 0 1 0; 0 0 1] diag(eᵗ, e^(2t), e^(3t)) [1 -4 -5; 0 1 0; 0 0 1]
= [eᵗ 4(e^(2t) - eᵗ) 5(e^(3t) - eᵗ); 0 e^(2t) 0; 0 0 e^(3t)]
Similarly, from Problem 3.13, Â4 = [0 1 0; 0 0 1; 0 0 0] = Q⁻¹ A4 Q with Q = [1 0 0; 0 4 -3; 0 -5 4], so
e^(A4 t) = Q e^(Â4 t) Q⁻¹ = Q [1 t t²/2; 0 1 t; 0 0 1] Q⁻¹ = [1 4t + 5t²/2 3t + 2t²; 0 1 + 20t 16t; 0 -25t 1 - 20t]
Method 2 (via a polynomial of A). For A1, with eigenvalues 1, 2, 3, let h(λ) = β0 + β1 λ + β2 λ²
and f(λ) = e^(λt):
f(1) = h(1): eᵗ = β0 + β1 + β2
f(2) = h(2): e^(2t) = β0 + 2β1 + 4β2
f(3) = h(3): e^(3t) = β0 + 3β1 + 9β2
which gives
β0 = 3eᵗ - 3e^(2t) + e^(3t), β1 = -(5/2)eᵗ + 4e^(2t) - (3/2)e^(3t), β2 = (1/2)(eᵗ - 2e^(2t) + e^(3t))
Then, with A1² = [1 12 40; 0 4 0; 0 0 9],
e^(A1 t) = β0 I + β1 A1 + β2 A1² = [eᵗ 4(e^(2t) - eᵗ) 5(e^(3t) - eᵗ); 0 e^(2t) 0; 0 0 e^(3t)]
which agrees with Method 1. For A4, with eigenvalue 0 of multiplicity 3:
f(0) = h(0): 1 = β0
f'(0) = h'(0): t = β1
f''(0) = h''(0): t² = 2β2
so, with A4² = [0 5 4; 0 0 0; 0 0 0],
e^(A4 t) = I + t A4 + (t²/2) A4² = [1 4t + 5t²/2 3t + 2t²; 0 1 + 20t 16t; 0 -25t 1 - 20t]
3.23 Show that functions of the same matrix commute, that is, f(A) g(A) = g(A) f(A).
Consequently we have A e^(At) = e^(At) A.
Proof: By Theorem 3.5, f(A) and g(A) can both be expressed as polynomials of A (n is the order of A):
f(A) = α0 I + α1 A + ⋯ + α_{n-1} A^{n-1}
g(A) = β0 I + β1 A + ⋯ + β_{n-1} A^{n-1}
Multiplying out,
f(A) g(A) = Σ_{k=0}^{2n-2} (Σ_i αi β_{k-i}) Aᵏ = g(A) f(A)
because the powers of A commute with each other, so the coefficient of each Aᵏ is the same
in both orders. Letting f(A) = A and g(A) = e^(At), we have A e^(At) = e^(At) A.
3.24 Let
C = [λ1 0 0; 0 λ2 0; 0 0 λ3]
Find a B such that e^B = C. Show that if λi = 0 for some i, then B does not exist. Let
C = [λ 1 0; 0 λ 1; 0 0 λ]
with λ ≠ 0; find a B such that e^B = C. Is it true that, for any nonsingular C, there exists a B
such that e^B = C?
Answer: Let f(λ) = ln λ, so that e^(f(λ)) = λ. For the diagonal C,
B = ln C = [ln λ1 0 0; 0 ln λ2 0; 0 0 ln λ3]
satisfies e^B = C. Since e^B is always nonsingular (det e^B = e^(tr B) ≠ 0), no B exists if some
λi = 0, that is, if C is singular. For the Jordan-block C with λ ≠ 0, applying f(λ) = ln λ to the
Jordan block gives
B = ln C = [ln λ 1/λ -1/(2λ²); 0 ln λ 1/λ; 0 0 ln λ]
For a real B, however, ln λ is defined only for λ > 0. So over the real field it is not true that for
every nonsingular C there exists a B such that e^B = C.
3.25 Let
(sI - A)⁻¹ = Adj(sI - A)/Δ(s)
and let m(s) be the monic greatest common divisor of all entries of Adj(sI - A). Verify for the
matrix A3 in Problem 3.13 that the minimal polynomial of A3 equals Δ(s)/m(s).
Verification: For
A3 = [1 0 1; 0 1 0; 0 0 2]
we have Δ(s) = (s - 1)²(s - 2) and
sI - A3 = [s-1 0 -1; 0 s-1 0; 0 0 s-2]
Adj(sI - A3) = [(s-1)(s-2) 0 (s-1); 0 (s-1)(s-2) 0; 0 0 (s-1)²]
so we can easily obtain m(s) = s - 1, and
Δ(s)/m(s) = (s - 1)(s - 2) = ψ(s)
which is indeed the minimal polynomial of A3 (the largest Jordan block of the eigenvalue 1
has order 1).
3.26 Define
(sI - A)⁻¹ = [R0 s^(n-1) + R1 s^(n-2) + ⋯ + R_{n-2} s + R_{n-1}]/Δ(s)
where
Δ(s) = det(sI - A) := sⁿ + α1 s^(n-1) + α2 s^(n-2) + ⋯ + αn
This definition is valid because the degree in s of the adjoint of (sI - A) is at most n - 1. Verify
α1 = -tr(A R0)/1,  R0 = I
α2 = -tr(A R1)/2,  R1 = A R0 + α1 I = A + α1 I
α3 = -tr(A R2)/3,  R2 = A R1 + α2 I = A² + α1 A + α2 I
⋮
αn = -tr(A R_{n-1})/n,  R_{n-1} = A R_{n-2} + α_{n-1} I = A^(n-1) + α1 A^(n-2) + ⋯ + α_{n-1} I
0 = A R_{n-1} + αn I
where tr stands for the trace of a matrix and is defined as the sum of all its diagonal entries.
This process of computing αi and Ri is called the Leverrier algorithm.
Verification: Multiplying out,
(sI - A)[R0 s^(n-1) + R1 s^(n-2) + ⋯ + R_{n-2} s + R_{n-1}]
= R0 sⁿ + (R1 - A R0) s^(n-1) + (R2 - A R1) s^(n-2) + ⋯ + (R_{n-1} - A R_{n-2}) s - A R_{n-1}
= I (sⁿ + α1 s^(n-1) + α2 s^(n-2) + ⋯ + α_{n-1} s + αn) = I Δ(s)
Matching coefficients gives the recursion
R0 = I, R1 - A R0 = α1 I, R2 - A R1 = α2 I, …, R_{n-1} - A R_{n-2} = α_{n-1} I, -A R_{n-1} = αn I
and substituting successively yields Ri = Aⁱ + α1 A^(i-1) + ⋯ + αi I; the trace formulas for αi
then follow from Newton's identities relating tr(Aᵏ) to the coefficients of Δ(s). Finally,
multiplying the ith equation of the recursion by A^(n-i) and summing, all the Ri terms cancel
and we obtain
Aⁿ + α1 A^(n-1) + α2 A^(n-2) + ⋯ + α_{n-1} A + αn I = 0
that is
Δ(A) = 0
which is the Cayley-Hamilton theorem.
3.27 Use the results of Problem 3.26 to show
(sI - A)⁻¹ = (1/Δ(s)) [A^(n-1) + (s + α1) A^(n-2) + (s² + α1 s + α2) A^(n-3) + ⋯ + (s^(n-1) + α1 s^(n-2) + ⋯ + α_{n-1}) I]
Proof: From Problem 3.26,
(sI - A)⁻¹ = (1/Δ(s)) [R0 s^(n-1) + R1 s^(n-2) + ⋯ + R_{n-2} s + R_{n-1}]
with Ri = Aⁱ + α1 A^(i-1) + ⋯ + αi I. Substituting,
(sI - A)⁻¹ = (1/Δ(s)) [I s^(n-1) + (A + α1 I) s^(n-2) + (A² + α1 A + α2 I) s^(n-3) + ⋯ + (A^(n-1) + α1 A^(n-2) + ⋯ + α_{n-1} I)]
Collecting the coefficient of each power of A,
(sI - A)⁻¹ = (1/Δ(s)) [A^(n-1) + (s + α1) A^(n-2) + (s² + α1 s + α2) A^(n-3) + ⋯ + (s^(n-1) + α1 s^(n-2) + ⋯ + α_{n-1}) I]
3.29 Let all eigenvalues of A be distinct and let qi be a right eigenvector of A associated with
λi, that is, A qi = λi qi. Define Q = [q1 q2 ⋯ qn] and define
P := Q⁻¹ = [p1; p2; ⋮; pn]
where pi is the ith row of P. Show that pi is a left eigenvector of A associated with λi, that is,
pi A = λi pi.
Proof: Since all eigenvalues of A are distinct and qi is a right eigenvector associated with λi,
Â = Q⁻¹ A Q = diag(λ1, λ2, …, λn)
Multiplying Â = Q⁻¹ A Q on the right by Q⁻¹ = P gives P A = Â P, that is,
[p1 A; p2 A; ⋮; pn A] = [λ1 p1; λ2 p2; ⋮; λn pn]
so pi A = λi pi; that is, pi is a left eigenvector of A associated with λi.
3.30 Show that if all eigenvalues of A are distinct, then (sI - A)⁻¹ can be expressed as
(sI - A)⁻¹ = Σᵢ (1/(s - λi)) qi pi
where qi and pi are the right and left eigenvectors of A associated with λi.
Proof: If all eigenvalues of A are distinct, then Q = [q1 q2 ⋯ qn] is nonsingular, and the rows
pi of Q⁻¹ satisfy pi A = λi pi, i.e., A qi = λi qi and Σᵢ qi pi = Q Q⁻¹ = I. Then
(sI - A) Σᵢ (1/(s - λi)) qi pi = Σᵢ (1/(s - λi)) (s qi pi - A qi pi) = Σᵢ (1/(s - λi)) (s - λi) qi pi = Σᵢ qi pi = I
That is,
(sI - A)⁻¹ = Σᵢ (1/(s - λi)) qi pi
3.31 Find the M to meet the Lyapunov equation in (3.59) with
A = [0 1; -2 -2], B = 3, C = [3; 3]
What are the eigenvalues of the Lyapunov equation? Is the Lyapunov equation singular? Is
the solution unique?
Answer: The equation AM + MB = C becomes (A + 3I) M = C, so
M = (A + 3I)⁻¹ C = [3 1; -2 1]⁻¹ [3; 3] = [0; 3]
The eigenvalues of A and B are obtained from
ΔA(λ) = det(λI - A) = (λ + 1 - j)(λ + 1 + j), ΔB(λ) = λ + 3
so the eigenvalues of the Lyapunov equation are
η1 = -1 + j + 3 = 2 + j, η2 = -1 - j + 3 = 2 - j
Since no eigenvalue is zero, the Lyapunov equation is nonsingular and the M satisfying the
equation is unique.
3.32 Repeat Problem 3.31 for
A = [0 1; -1 -2], B = 1, C1 = [3; 3], C2 = [3; -3]
with the two different C.
Answer: Here AM + MB = C becomes (A + I) M = C with
A + I = [1 1; -1 -1]
ΔA(λ) = (λ + 1)², ΔB(λ) = λ - 1, so the eigenvalues of the Lyapunov equation are
-1 + 1 = 0 (twice); the Lyapunov equation is singular because it has a zero eigenvalue.
For C1 = [3; 3], the equations m1 + m2 = 3 and -(m1 + m2) = 3 are inconsistent: no solution exists.
For C2 = [3; -3], C2 lies in the range space of the Lyapunov equation, and
M = [3; 0] + α [-1; 1] for any real α
so solutions exist but are not unique. In general, if C lies in the range space of a singular
Lyapunov equation, then solutions exist and are not unique.
3.33 Check to see whether the following matrices are positive definite or semidefinite:
[2 3 2; 3 1 0; 2 0 2], [0 0 -1; 0 0 0; -1 0 2], [a1a1 a1a2 a1a3; a2a1 a2a2 a2a3; a3a1 a3a2 a3a3]
Answer: For the first matrix, the leading principal minor
det [2 3; 3 1] = 2 - 9 = -7 < 0
so the matrix is not positive definite, nor is it positive semidefinite.
For the second matrix, the principal minor formed from rows and columns 1 and 3 is
det [0 -1; -1 2] = -1 < 0
so the second matrix is not positive definite, nor positive semidefinite.
The third matrix is aa' with a = [a1 a2 a3]'. All of its 2 × 2 principal minors vanish, e.g.
det [a1a1 a1a2; a2a1 a2a2] = 0
as does det(aa') = 0, while the diagonal entries ai² are nonnegative. That is, all the principal
minors of the third matrix are zero or positive, so the matrix is positive semidefinite (indeed
x'(aa')x = (a'x)² ≥ 0 for every x).
3.34 Compute the singular values of the following matrices:
[1 0 1; 2 1 0], [-1 2; 2 4]
Answer: For the first matrix,
[1 0 1; 2 1 0] [1 2; 0 1; 1 0] = [2 2; 2 5]
Δ(λ) = (λ - 2)(λ - 5) - 4 = λ² - 7λ + 6 = (λ - 1)(λ - 6)
The eigenvalues of [2 2; 2 5] are 6 and 1, thus the singular values of [1 0 1; 2 1 0] are √6 and 1.
For the second matrix,
[-1 2; 2 4]' [-1 2; 2 4] = [5 6; 6 20]
Δ(λ) = (λ - 5)(λ - 20) - 36 = λ² - 25λ + 64
whose eigenvalues are (25 ± 3√41)/2. Thus the singular values of [-1 2; 2 4] are
((25 + 3√41)/2)^(1/2) = 4.70 and ((25 - 3√41)/2)^(1/2) = 1.70
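The singular values can be cross-checked with NumPy's SVD, using the matrices from the problem above.

```python
import numpy as np

# Singular values of the first matrix: expect sqrt(6) and 1.
s1 = np.linalg.svd(np.array([[1, 0, 1], [2, 1, 0]]), compute_uv=False)
assert np.allclose(s1, [np.sqrt(6), 1])

# Singular values of the second matrix: expect sqrt((25 +/- 3*sqrt(41))/2).
s2 = np.linalg.svd(np.array([[-1, 2], [2, 4]]), compute_uv=False)
lam = np.array([(25 + 3 * np.sqrt(41)) / 2, (25 - 3 * np.sqrt(41)) / 2])
assert np.allclose(s2, np.sqrt(lam))
assert np.allclose(s2, [4.70, 1.70], atol=5e-3)
```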
3.35 If A is symmetric, what is the relationship between its eigenvalues and singular values?

Answer: if A is symmetric, then A'A = AA = A^2. Let qi be an eigenvector of A associated with the eigenvalue λi, that is, Aqi = λi qi (i = 1, 2, ..., n). Thus we have
  A^2 qi = A λi qi = λi A qi = λi^2 qi   (i = 1, 2, ..., n)
which implies that λi^2 is an eigenvalue of A'A. Therefore the singular values of a symmetric A are the absolute values of its eigenvalues: σi = |λi|.
3.36 Show
  det(In + [a1; a2; ...; an][b1 b2 ... bn]) = 1 + Σ_{m=1}^{n} am bm

Proof: let A = [a1; a2; ...; an] and B = [b1 b2 ... bn]; A is n×1 and B is 1×n. Using the identity det(In + AB) = det(I1 + BA),
  det(In + [a1; ...; an][b1 ... bn]) = det(I1 + [b1 ... bn][a1; ...; an]) = 1 + Σ_{m=1}^{n} am bm
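A spot check of the determinant identity (the vectors below are arbitrary test data I chose, not from the problem):

```python
import numpy as np

# det(I_n + a b) = 1 + sum a_m b_m for a column a and a row b.
a = np.array([[1.0], [-2.0], [0.5]])
b = np.array([[3.0, 1.0, 4.0]])

lhs = np.linalg.det(np.eye(3) + a @ b)
rhs = 1.0 + float(b @ a)   # 1 + (3 - 2 + 2) = 4
```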
Next, to show s^n det(sIm − AB) = s^m det(sIn − BA) for an m×n matrix A and an n×m matrix B, define
  N = [Im A; 0 sIn]   Q = [Im 0; B sIn]   P = [sIm -A; -B In]
Then
  NP = [sIm − AB  0; -sB  sIn]   QP = [sIm  -A; 0  sIn − BA]
Because det N = det Q = s^n, we have det(NP) = det N det P = det Q det P = det(QP), and the block-triangular forms give
  det(NP) = s^n det(sIm − AB)   det(QP) = s^m det(sIn − BA)
hence s^n det(sIm − AB) = s^m det(sIn − BA).

Consider next Ax = y, where the m×n matrix A has rank m. Is x = A'(AA')^{-1} y a solution of Ax = y? Since rank(AA') = rank(A) = m, AA' is nonsingular and (AA')^{-1} exists, so
  A (A'(AA')^{-1} y) = (AA')(AA')^{-1} y = y
that is, A'(AA')^{-1} y is a solution. In particular, if m = n and rank A = m, then A itself is nonsingular and A'(AA')^{-1} = A^{-1}.
PROBLEMS OF CHAPTER 4
4.1 An oscillation can be generated by
  x' = [0 1; -1 0] x
Show that its solution is
  x(t) = [cos t  sin t; -sin t  cos t] x(0)

Proof: x(t) = e^{At} x(0). The eigenvalues of A = [0 1; -1 0] are ±j. Let h(λ) = β0 + β1 λ. If h equals e^{λt} on the spectrum of A,
  h(j) = β0 + β1 j = e^{jt} = cos t + j sin t
  h(-j) = β0 − β1 j = e^{-jt} = cos t − j sin t
then β0 = cos t and β1 = sin t, so
  e^{At} = h(A) = β0 I + β1 A = cos t [1 0; 0 1] + sin t [0 1; -1 0] = [cos t  sin t; -sin t  cos t]
and x(t) = e^{At} x(0) = [cos t  sin t; -sin t  cos t] x(0).
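The rotation-matrix formula for e^{At} can be verified numerically via the eigen-decomposition A = V diag(j, −j) V^{-1} (a NumPy sketch, not part of the original proof):

```python
import numpy as np

# e^{At} for A = [0 1; -1 0] computed from its eigen-decomposition, compared
# with the rotation matrix of Problem 4.1.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
w, V = np.linalg.eig(A)

def expAt(t):
    # A is diagonalizable, so e^{At} = V diag(e^{lambda t}) V^{-1}.
    return (V @ np.diag(np.exp(w * t)) @ np.linalg.inv(V)).real

t = 0.7
R = np.array([[np.cos(t), np.sin(t)], [-np.sin(t), np.cos(t)]])
```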
4.2 Use two different methods to find the unit-step response of
  x' = [0 1; -2 -2] x + [1; 1] u
  y = [2 3] x

Answer: assuming the initial state is zero.
Method 1: we use (3.20) to compute
  (sI − A)^{-1} = [s -1; 2 s+2]^{-1} = (1/(s^2 + 2s + 2)) [s+2 1; -2 s]
then
  e^{At} = L^{-1}((sI − A)^{-1}) = e^{-t} [cos t + sin t  sin t; -2 sin t  cos t − sin t]
  Y(s) = C(sI − A)^{-1} B U(s) = (5s/(s^2 + 2s + 2)) (1/s) = 5/(s^2 + 2s + 2)
then y(t) = 5 e^{-t} sin t for t >= 0.
Method 2: since
  C e^{As} B = [2 3] e^{-s} [cos s + 2 sin s; cos s − 3 sin s] = 5 e^{-s}(cos s − sin s)
we have, for t >= 0,
  y(t) = ∫_0^t C e^{A(t−τ)} B u(τ) dτ = ∫_0^t 5 e^{-s}(cos s − sin s) ds = 5 e^{-t} sin t
4.3 Discretize the state equation in Problem 4.2 for T = 1 and T = π.

Answer:
  x[k+1] = e^{AT} x[k] + (∫_0^T e^{Aτ} dτ) B u[k]
  y[k] = C x[k] + D u[k]
For T = 1:
  x[k+1] = [0.5083 0.3096; -0.6191 -0.1108] x[k] + [1.0471; -0.1821] u[k]
  y[k] = [2 3] x[k]
For T = π, use MATLAB:
  [ad, bd] = c2d(a, b, 3.1415926)
  ad = [-0.0432 0.0000; -0.0000 -0.0432]   bd = [1.5648; -1.0432]
so
  x[k+1] = [-0.0432 0; 0 -0.0432] x[k] + [1.5648; -1.0432] u[k]
  y[k] = [2 3] x[k]
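The T = 1 matrices can be reproduced without c2d. Since A is invertible, the input matrix of the discretized equation is Bd = A^{-1}(e^{AT} − I)B; the following NumPy sketch uses this closed form:

```python
import numpy as np

# Discretization of Problem 4.2 at T = 1: Ad = e^{AT} via eigen-decomposition,
# Bd = A^{-1}(Ad - I)B (valid because A is nonsingular).
A = np.array([[0.0, 1.0], [-2.0, -2.0]])
B = np.array([[1.0], [1.0]])
T = 1.0

w, V = np.linalg.eig(A)
Ad = (V @ np.diag(np.exp(w * T)) @ np.linalg.inv(V)).real
Bd = np.linalg.solve(A, (Ad - np.eye(2)) @ B)
```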
4.4 Find the companion-form and modal-form equivalent equations of
  x' = [-2 0 0; 1 0 1; 0 -2 -2] x + [1; 0; 1] u
  y = [1 -1 0] x

Answer: the characteristic polynomial is Δ(λ) = (λ + 2)(λ^2 + 2λ + 2) = λ^3 + 4λ^2 + 6λ + 4, so a companion-form equivalent equation is
  x' = [0 0 -4; 1 0 -6; 0 1 -4] x + [1; 0; 0] u
  y = [1 -4 8] x
Using [ab, bb, cb, db, p] = canon(a, b, c, d) we get the modal form
  ab = [-1 1 0; -1 -1 0; 0 0 -2]   bb = [-3.4641; 0; 1.4142]
  cb = [0 -0.5774 0.7071]   db = 0
that is
  x' = [-1 1 0; -1 -1 0; 0 0 -2] x + [-3.4641; 0; 1.4142] u
  y = [0 -0.5774 0.7071] x
4.5 Find an equivalent state equation of the equation in Problem 4.4 so that all state variables have their largest magnitudes roughly equal to the largest magnitude of the output. If all signals are required to lie inside ±10 volts and if the input is a step function with magnitude a, what is the permissible largest a?

Answer: first we use MATLAB to find the unit-step response and plot x1, x2, x3, and y. The largest magnitude of the output is about 0.55, and x2 in particular exceeds it. Scaling the state (x̄2 = x2/2, the others unchanged) gives the equivalent equation
  x' = [-2 0 0; 0.5 0 0.5; 0 -4 -2] x + [1; 0; 1] u
  y = [1 -2 0] x
whose state variables all have largest magnitudes roughly equal to that of the output. The largest permissible a is 10/0.55 = 18.2.
4.6 Consider
  x' = [λ 0; 0 λ*] x + [b1; b1*] u   y = [c1 c1*] x
where * denotes the complex conjugate (the overbar in the text). Verify that the equation can be transformed into
  x̄' = Ā x̄ + B̄ u   y = C̄ x̄
with
  Ā = [0 1; -λλ* λ+λ*]   B̄ = [0; 1]   C̄ = [-2 Re(λ* b1 c1)  2 Re(b1 c1)]
by using the transformation x = Q x̄ with
  Q = [-λ* b1  b1; -λ b1*  b1*]

Verification: det Q = (λ − λ*) b1 b1* ≠ 0 and
  Q^{-1} = (1/((λ − λ*) b1 b1*)) [b1*  -b1; λ b1*  -λ* b1]
Substituting x = Q x̄ gives x̄' = Q^{-1}AQ x̄ + Q^{-1}[b1; b1*] u and y = [c1 c1*]Q x̄. Direct computation:
  Q^{-1}[b1; b1*] = (1/((λ − λ*) b1 b1*)) [b1* b1 − b1 b1*; λ b1* b1 − λ* b1 b1*] = [0; 1] = B̄
  [c1 c1*]Q = [-λ* b1 c1 − λ b1* c1*   b1 c1 + b1* c1*] = [-2 Re(λ* b1 c1)  2 Re(b1 c1)] = C̄
and, using λ^2 = (λ + λ*)λ − λλ* on each diagonal entry, Q^{-1}AQ = [0 1; -λλ* λ+λ*] = Ā. Note that λ + λ* = 2 Re λ and λλ* = |λ|^2 are real, so Ā, B̄, and C̄ are all real.
4.7 Verify that the Jordan-form equation
  x' = [λ 1 0 0 0 0; 0 λ 1 0 0 0; 0 0 λ 0 0 0; 0 0 0 λ* 1 0; 0 0 0 0 λ* 1; 0 0 0 0 0 λ*] x + [b1; b2; b3; b1*; b2*; b3*] u
  y = [c1 c2 c3 c1* c2* c3*] x
can be transformed into
  x̄' = [Ā I2 0; 0 Ā I2; 0 0 Ā] x̄ + [B̄1; B̄2; B̄3] u   y = [C̄1 C̄2 C̄3] x̄
where Ā, B̄i, and C̄i are defined in Problem 4.6 and I2 is the unit matrix of order 2.

Proof: change the order of the state variables from [x1 x2 x3 x4 x5 x6] to [x1 x4 x2 x5 x3 x6]. In the new ordering the equation becomes
  d/dt [x1; x4; x2; x5; x3; x6] = [Λ I2 0; 0 Λ I2; 0 0 Λ] [x1; x4; x2; x5; x3; x6] + [b1; b1*; b2; b2*; b3; b3*] u
  y = [c1 c1* c2 c2* c3 c3*] [x1; x4; x2; x5; x3; x6]
with Λ = diag(λ, λ*). Now apply the transformation of Problem 4.6 blockwise, i.e., x = diag(Q, Q, Q) x̄ with the same 2×2 matrix Q in every block. Each diagonal block Λ becomes Ā, each pair [bi; bi*] becomes B̄i, and each pair [ci ci*] becomes C̄i, while Q^{-1} I2 Q = I2 leaves the superdiagonal identity blocks unchanged. This gives the stated real block form.
4.8 Are the two state equations
  x' = [2 1 -2; 0 2 2; 0 0 1] x + [1; 1; 0] u   y = [1 -1 0] x
and
  x' = [2 1 1; 0 2 1; 0 0 -1] x + [1; 1; 0] u   y = [1 -1 0] x
equivalent? Are they zero-state equivalent?

Answer: for the first equation,
  G1(s) = C(sI − A)^{-1}B = [1 -1 0] (1/((s−2)^2(s−1))) adj(sI − A) [1; 1; 0]
    = ((s−2)(s−1) + (3−s)(s−1)) / ((s−2)^2(s−1)) = (s−1)/((s−2)^2(s−1)) = 1/(s−2)^2
and similarly for the second,
  G2(s) = (s+1)/((s−2)^2(s+1)) = 1/(s−2)^2
The two transfer functions are equal, so the two equations are zero-state equivalent. They are not equivalent: their A matrices have different eigenvalues (1 versus −1), so no equivalence transformation can relate them.
4.9 Verify that the rq-dimensional state equation
  x' = [-α1 Iq  Iq  0 ... 0; -α2 Iq  0  Iq ... 0; ... ; -α_{r-1} Iq  0  0 ... Iq; -αr Iq  0  0 ... 0] x + [N1; N2; ...; Nr] u
  y = [Iq 0 0 ... 0] x
is a realization of the transfer matrix in (4.33). This is called the observable canonical form realization and has dimension rq. It is dual to (4.34).

Answer: define
  C(sI − A)^{-1} = [Z1 Z2 ... Zr]   so that   C = [Z1 Z2 ... Zr](sI − A)
Equating block columns on both sides: from the columns j = 2, ..., r we get
  s Zj − Z_{j-1} = 0   i.e.   Zj = Z_{j-1}/s
and from the first block column
  s Z1 + Σ_{i=1}^{r} Zi αi = Iq
Together these give
  Z1 = (s^{r-1}/d(s)) Iq   Z2 = (s^{r-2}/d(s)) Iq   ...   Zr = (1/d(s)) Iq
with d(s) = s^r + α1 s^{r-1} + ... + αr. Then
  C(sI − A)^{-1} B = Σ_{i=1}^{r} Zi Ni = (1/d(s)) (s^{r-1} N1 + s^{r-2} N2 + ... + Nr)
which satisfies (4.33).
4.10 Consider the 1×2 proper rational matrix
  G(s) = [d1 d2] + (1/(s^4 + α1 s^3 + α2 s^2 + α3 s + α4)) [β11 s^3 + β21 s^2 + β31 s + β41   β12 s^3 + β22 s^2 + β32 s + β42]
Show that its observable canonical form realization can be reduced from Problem 4.9 as
  x' = [-α1 1 0 0; -α2 0 1 0; -α3 0 0 1; -α4 0 0 0] x + [β11 β12; β21 β22; β31 β32; β41 β42] u
  y = [1 0 0 0] x + [d1 d2] u

Answer: here q = 1, r = 4, and
  N1 = [β11 β12]   N2 = [β21 β22]   N3 = [β31 β32]   N4 = [β41 β42]
Substituting q = 1 and these Ni into the observable canonical form of Problem 4.9 gives exactly the state equation above, so its observable canonical form realization can be reduced from Problem 4.9 as claimed.
4.11 Find a realization for the proper rational matrix
  G(s) = [2/(s+1)   (2s-3)/((s+1)(s+2)); (s-2)/(s+1)   s/(s+2)]

Answer: decompose G(s) into its direct-feedthrough part and strictly proper part over the common denominator d(s) = (s+1)(s+2) = s^2 + 3s + 2:
  G(s) = [0 0; 1 1] + (1/(s^2 + 3s + 2)) (s [2 2; -3 -2] + [4 -3; -6 -2])
so a (controllable canonical form) realization is
  x' = [-3 0 -2 0; 0 -3 0 -2; 1 0 0 0; 0 1 0 0] x + [1 0; 0 1; 0 0; 0 0] u
  y = [2 2 4 -3; -3 -2 -6 -2] x + [0 0; 1 1] u
4.12 Find a realization for each column of G(s) in Problem 4.11 and then connect them, as shown in Fig. 4.4(a), to obtain a realization of G(s). What is the dimension of this realization? Compare this dimension with the one in Problem 4.11.

Answer: first column:
  G1(s) = [2/(s+1); (s-2)/(s+1)] = (1/(s+1)) [2; -3] + [0; 1]
  x1' = -x1 + u1   yC1 = [2; -3] x1 + [0; 1] u1
second column:
  G2(s) = [(2s-3)/((s+1)(s+2)); s/(s+2)] = (1/((s+1)(s+2))) (s [2; -2] + [-3; -2]) + [0; 1]
  x2' = [-3 -2; 1 0] x2 + [1; 0] u2   yC2 = [2 -3; -2 -2] x2 + [0; 1] u2
These two realizations can be combined as
  x' = [-1 0 0; 0 -3 -2; 0 1 0] x + [1 0; 0 1; 0 0] u
  y = [2 2 -3; -3 -2 -2] x + [0 0; 1 1] u
The dimension of the realization of Problem 4.12 is 3, and that of Problem 4.11 is 4.
4.13 Find a realization for each row of G(s) in Problem 4.11 and then connect them, as shown in Fig. 4.4(b), to obtain a realization of G(s). What is the dimension of this realization? Compare this dimension with the ones in Problems 4.11 and 4.12.

Answer: first row:
  G1(s) = (1/((s+1)(s+2))) [2s+4  2s-3] = (1/(s^2+3s+2)) (s [2 2] + [4 -3])
  x1' = [-3 1; -2 0] x1 + [2 2; 4 -3] u   yR1 = [1 0] x1 + [0 0] u
second row:
  G2(s) = (1/((s+1)(s+2))) [-3(s+2)  -2(s+1)] + [1 1] = (1/((s+1)(s+2))) (s [-3 -2] + [-6 -2]) + [1 1]
  x2' = [-3 1; -2 0] x2 + [-3 -2; -6 -2] u   yR2 = [1 0] x2 + [1 1] u
These two realizations can be combined as
  x' = [-3 1 0 0; -2 0 0 0; 0 0 -3 1; 0 0 -2 0] x + [2 2; 4 -3; -3 -2; -6 -2] u
  y = [1 0 0 0; 0 0 1 0] x + [0 0; 1 1] u
The dimension of this realization is 4, equal to that of Problem 4.11, so the smallest dimension is that of Problem 4.12.
4.14 Find a realization for
  G(s) = [-(12s+6)/(3s+34)   (22s+23)/(3s+34)]

Answer:
  G(s) = [-4  22/3] + (1/(s + 34/3)) [130/3  -679/9]
so a one-dimensional realization is
  x' = -(34/3) x + [130/3  -679/9] u
  y = x + [-4  22/3] u
4.16 Find fundamental matrices and state transition matrices for
  x' = [0 1; 0 t] x
and
  x' = [-1 e^{2t}; 0 -1] x

Answer: for the first equation, x2' = t x2 gives x2(t) = e^{t^2/2} x2(0), and x1' = x2 gives x1(t) = x1(0) + ∫_0^t x2(τ) dτ. With x(0) = [1; 0] we get x(t) = [1; 0]; with x(0) = [0; 1] we get x(t) = [∫_0^t e^{τ^2/2} dτ; e^{t^2/2}]. Thus a fundamental matrix is
  X(t) = [1  ∫_0^t e^{τ^2/2} dτ; 0  e^{t^2/2}]
and the state transition matrix is
  Φ(t, t0) = X(t)X^{-1}(t0) = [1  ∫_{t0}^{t} e^{(τ^2 − t0^2)/2} dτ; 0  e^{(t^2 − t0^2)/2}]

For the second equation, x2' = −x2 gives x2(t) = e^{-t} x2(0), and then x1' = −x1 + e^{2t}x2 = −x1 + e^{t}x2(0) gives
  x1(t) = e^{-t} x1(0) + 0.5 x2(0)(e^{t} − e^{-t})
Thus a fundamental matrix is
  X(t) = [e^{-t}  0.5(e^{t} − e^{-t}); 0  e^{-t}]
and the state transition matrix is
  Φ(t, t0) = X(t)X^{-1}(t0) = [e^{-(t-t0)}  0.5(e^{t+t0} − e^{3t0-t}); 0  e^{-(t-t0)}]
4.17 Show ∂Φ(t0, t)/∂t = −Φ(t0, t)A(t).

Proof: we know
  ∂Φ(t, t0)/∂t = A(t)Φ(t, t0)   and   Φ(t0, t)Φ(t, t0) = I
Differentiating the identity with respect to t gives
  0 = ∂/∂t [Φ(t0, t)Φ(t, t0)] = [∂Φ(t0, t)/∂t] Φ(t, t0) + Φ(t0, t) A(t) Φ(t, t0)
Postmultiplying by Φ(t, t0)^{-1} = Φ(t0, t) yields
  ∂Φ(t0, t)/∂t = −Φ(t0, t)A(t)
4.18 For the 2×2 matrix A(t) = [a11(t) a12(t); a21(t) a22(t)], show
  det Φ(t, t0) = exp(∫_{t0}^{t} (a11(τ) + a22(τ)) dτ)

Proof: write Φ = [φ11 φ12; φ21 φ22]. From Φ' = A(t)Φ,
  d/dt det Φ = φ11'φ22 − φ12'φ21 + φ11φ22' − φ12φ21' = (a11 + a22) det Φ
(the terms involving a12 and a21 cancel). Hence det Φ(t, t0) = c·exp(∫_{t0}^{t} (a11 + a22) dτ), and Φ(t0, t0) = I gives c = 1.
4.20 Find the fundamental matrix and the state transition matrix of
  x' = [-sin t  0; 0  -cos t] x

Answer: with x(0) = [1; 0] we get x(t) = [e^{cos t − 1}; 0]; with x(0) = [0; 1] we get x(t) = [0; e^{-sin t}]. Thus a fundamental matrix is
  X(t) = [e^{cos t − 1}  0; 0  e^{-sin t}]
and the state transition matrix is
  Φ(t, t0) = X(t)X^{-1}(t0) = [e^{cos t − cos t0}  0; 0  e^{-sin t + sin t0}]
Verify that X(t) = e^{At} C e^{Bt} is the solution of
  X' = AX + XB   X(0) = C

Verify: first we know (d/dt)e^{At} = A e^{At} = e^{At} A, then
  X' = (d/dt)(e^{At} C e^{Bt}) = (A e^{At}) C e^{Bt} + e^{At} C (e^{Bt} B) = AX + XB
and X(0) = e^{0} C e^{0} = C. Q.E.D.
4.23 Find an equivalent time-invariant state equation of the equation in Problem 4.20.

Answer: let P(t) = X^{-1}(t) = [e^{1 − cos t}  0; 0  e^{sin t}] and x̄(t) = P(t)x(t). Then
  Ā(t) = [P(t)A(t) + P'(t)]P^{-1}(t) = 0
so the equivalent time-invariant equation is x̄' = 0. In general, choosing P(t) = X^{-1}(t) with X(t) = e^{At} gives
  B̄(t) = X^{-1}(t)B = e^{-At}B   C̄(t) = CX(t) = Ce^{At}   Φ(t, t0) = X(t)X^{-1}(t0) = e^{A(t−t0)}
4.25 Find a time-varying realization and a time-invariant realization of the impulse response g(t) = t^2 e^{λt}.

Answer:
  g(t, τ) = g(t − τ) = (t − τ)^2 e^{λ(t−τ)} = t^2 e^{λt}·e^{-λτ} − 2t e^{λt}·τ e^{-λτ} + e^{λt}·τ^2 e^{-λτ}
so a three-dimensional time-varying realization is
  x'(t) = 0·x(t) + [e^{-λt}; t e^{-λt}; t^2 e^{-λt}] u(t)
  y(t) = [t^2 e^{λt}  -2t e^{λt}  e^{λt}] x(t)
For the time-invariant realization,
  ĝ(s) = L[t^2 e^{λt}] = 2/(s − λ)^3 = 2/(s^3 − 3λ s^2 + 3λ^2 s − λ^3)
Using (4.41), we get the time-invariant realization
  x' = [3λ  -3λ^2  λ^3; 1 0 0; 0 1 0] x + [1; 0; 0] u
  y = [0 0 2] x

For an impulse response of the form g(t, τ) = sin t · e^{-(t−τ)} · cos τ, a one-dimensional time-varying realization is
  x'(t) = 0·x(t) + e^{t} cos t u(t)   y(t) = sin t e^{-t} x(t)
but we cannot get a g(t) from it, because g(t, τ) ≠ g(t − τ); so it is impossible to find a time-invariant state equation realization.
5.1 Is the network shown in Fig. 5.2 BIBO stable? If not, find a bounded input that will excite an unbounded output.

From Fig. 5.2 we can obtain
  x' = u − y   y' = x
so
  ĝ(s) = ŷ(s)/û(s) = s/(s^2 + 1)
  g(t) = L^{-1}[ĝ(s)] = cos t
Because
  ∫_0^∞ |cos t| dt = Σ_{k=0}^{∞} ∫ over each half-period of |cos t| = Σ_{k=0}^{∞} 2 = ∞
is not bounded, the network shown in Fig. 5.2 is not BIBO stable. If u(t) = sin t, then
  ŷ(s) = s/(s^2 + 1)^2   and   y(t) = 0.5 t sin t
We can see u(t) is bounded, while the output excited by this input is not bounded.
5.2 Consider a system with an irrational transfer function ĝ(s). Show that a necessary condition for the system to be BIBO stable is that |ĝ(s)| is finite for all Re s ≥ 0.

Proof: let s = σ + jω with σ = Re s ≥ 0. If the system is BIBO stable, then ∫_0^∞ |g(t)| dt ≤ M < ∞, and since |e^{-σt}| ≤ 1 for σ ≥ 0,
  |ĝ(s)| = |∫_0^∞ g(t) e^{-σt} e^{-jωt} dt| ≤ ∫_0^∞ |g(t)| e^{-σt} dt ≤ ∫_0^∞ |g(t)| dt ≤ M < ∞
Thus |ĝ(s)| is finite for all Re s ≥ 0.
5.3 Is a system with impulse response g(t) = 1/(t+1) BIBO stable? How about g(t) = t e^{-t} for t ≥ 0?

Answer: because
  ∫_0^∞ (1/(t+1)) dt = ln(t+1) |_0^∞ = ∞
g(t) = 1/(t+1) is not absolutely integrable, so that system is not BIBO stable; while
  ∫_0^∞ |t e^{-t}| dt = ∫_0^∞ t e^{-t} dt = (−t e^{-t} − e^{-t}) |_0^∞ = 1 < ∞
so the system with impulse response g(t) = t e^{-t} is BIBO stable.
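The two integrals can be illustrated numerically (a NumPy sketch with a hand-rolled trapezoid rule; truncation limits are my choice):

```python
import numpy as np

# Problem 5.3 numerically: the integral of t*e^{-t} converges to 1, while the
# integral of 1/(t+1) grows like log(t+1) without bound.
def trap(y, x):
    # composite trapezoid rule
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

t = np.linspace(0.0, 50.0, 500_001)
int_te = trap(t * np.exp(-t), t)          # close to 1

t2 = np.linspace(0.0, 1.0e4, 1_000_001)
int_hyp = trap(1.0 / (t2 + 1.0), t2)      # close to ln(1e4 + 1) and still growing
```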
5.4 Is a system with transfer function ĝ(s) = e^{-2s}/(s+1) BIBO stable?

Answer: since L^{-1}[1/(s+1)] = e^{-t} for t ≥ 0 and the factor e^{-2s} is a pure delay of 2,
  g(t) = e^{-(t-2)} for t ≥ 2   and   g(t) = 0 for t < 2
Thus
  ∫_0^∞ |g(t)| dt = ∫_2^∞ e^{-(t-2)} dt = 1 < ∞
so ĝ(s) = e^{-2s}/(s+1) is BIBO stable.
5.5 Show that the negative-feedback system shown in Fig. 2.5(b) is BIBO stable if and only if the gain a has a magnitude less than 1. For a = 1, find a bounded input r(t) that will excite an unbounded output.

Proof: if r(t) = δ(t), then the output is the impulse response of the system:
  g(t) = a δ(t−1) − a^2 δ(t−2) + a^3 δ(t−3) − ... = Σ_{i=1}^{∞} (−1)^{i+1} a^i δ(t−i)
The impulse is defined as the limit of the pulse in Fig. 3.2 and can be considered positive; thus
  ∫_0^∞ |g(t)| dt = Σ_{i=1}^{∞} |a|^i = |a|/(1 − |a|) < ∞   if |a| < 1
and the sum diverges if |a| ≥ 1. So the system is BIBO stable if and only if |a| < 1.
For a = 1, choose the bounded input r(t) = sin(πt). Then
  y(t) = ∫_0^t g(τ) r(t−τ) dτ = Σ_{i=1}^{⌊t⌋} (−1)^{i+1} sin(π(t−i)) = Σ_{i=1}^{⌊t⌋} (−1)^{i+1}(−1)^{i} sin(πt) = −⌊t⌋ sin(πt)
whose amplitude grows without bound.
5.6 Consider a system with transfer function ĝ(s) = (s−2)/(s+1). What are the steady-state responses excited by u(t) = 3 for t ≥ 0 and by u(t) = sin 2t for t ≥ 0?

Answer:
  g(t) = L^{-1}[(s−2)/(s+1)] = δ(t) − 3e^{-t}   for t ≥ 0
  ∫_0^∞ |g(t)| dt ≤ ∫_0^∞ (δ(t) + 3e^{-t}) dt = 1 + 3 = 4 < ∞
so the system is BIBO stable and the steady-state responses exist. For u(t) = 3:
  y_ss = ĝ(0)·3 = (−2)·3 = −6
For u(t) = sin 2t:
  ĝ(j2) = (j2 − 2)/(j2 + 1)   |ĝ(j2)| = (8/5)^{1/2} = 2 sqrt(10)/5   ∠ĝ(j2) = arctan 3
  y_ss = (2 sqrt(10)/5) sin(2t + arctan 3) = 1.26 sin(2t + 1.25)
5.7 Consider
  x' = [-1 10; 0 1] x + [-2; 0] u
  y = [-2 3] x − 2u
Is it BIBO stable?

Answer: the transfer function of the system is
  ĝ(s) = [-2 3](sI − A)^{-1}[-2; 0] − 2 = [-2 3] [1/(s+1)  10/((s+1)(s−1)); 0  1/(s−1)] [-2; 0] − 2
    = 4/(s+1) − 2 = (−2s + 2)/(s+1)
The only pole of ĝ(s) is −1, which lies inside the left-half s-plane (the unstable eigenvalue 1 is cancelled), so the system is BIBO stable.
5.8 Is a discrete-time system with impulse response sequence g[k] = k(0.8)^k, k ≥ 0, BIBO stable?

Answer: because
  Σ_{k=0}^{∞} |g[k]| = Σ_{k=0}^{∞} k(0.8)^k = (1/0.2) Σ_{k=0}^{∞} [k(0.8)^k − k(0.8)^{k+1}]
    = (1/0.2) Σ_{k=1}^{∞} (0.8)^k = (1/0.2)·(0.8/(1 − 0.8)) = 20 < ∞
g[k] is absolutely summable in [0, ∞), so the discrete-time system is BIBO stable.
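A partial-sum check of the closed form (the truncation point 2000 is arbitrary; the tail is negligible):

```python
# Problem 5.8: sum_{k>=0} k (0.8)^k = 0.8/(1 - 0.8)^2 = 20.
s = sum(k * 0.8 ** k for k in range(2000))
```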
5.9 Is the state equation in Problem 5.7 marginally stable? Asymptotically stable?

Answer:
  Δ(λ) = det(λI − A) = det[λ+1  -10; 0  λ−1] = (λ + 1)(λ − 1)
The eigenvalue 1 has positive real part; thus the equation is not marginally stable, neither is it asymptotically stable.
5.10 Is the homogeneous state equation
  x' = [-1 0 1; 0 0 0; 0 0 0] x
marginally stable? Asymptotically stable?

Answer: the characteristic polynomial is
  Δ(λ) = det(λI − A) = λ^2 (λ + 1)
and the minimal polynomial is ψ(λ) = λ(λ + 1). The matrix has eigenvalues −1, 0, and 0. The eigenvalue 0 is a simple root of the minimal polynomial, so the equation is marginally stable. The equation is not asymptotically stable because the matrix has zero eigenvalues.
5.11 Is the homogeneous state equation
  x' = [-1 0 1; 0 0 1; 0 0 0] x
marginally stable? Asymptotically stable?

Answer: the characteristic polynomial is Δ(λ) = λ^2(λ + 1), and the minimal polynomial is ψ(λ) = λ^2(λ + 1). The matrix has eigenvalues −1, 0, and 0. The eigenvalue 0 is a repeated root of the minimal polynomial ψ(λ), so the equation is not marginally stable, neither is it asymptotically stable.
5.12 Is the discrete-time homogeneous state equation
  x[k+1] = [0.9 0 1; 0 1 0; 0 0 1] x[k]
marginally stable? Asymptotically stable?

Answer: the characteristic polynomial of the system matrix is Δ(λ) = (λ − 0.9)(λ − 1)^2, and its minimal polynomial is ψ(λ) = (λ − 0.9)(λ − 1) (the eigenvalue 1 appears in two Jordan blocks of order 1). All eigenvalues have magnitudes ≤ 1 and the eigenvalue 1, of magnitude 1, is a simple root of the minimal polynomial, so the equation is marginally stable. It is not asymptotically stable because the eigenvalue 1 has magnitude equal to 1.

5.13 Is the discrete-time homogeneous state equation
  x[k+1] = [0.9 0 1; 0 1 1; 0 0 1] x[k]
marginally stable? Asymptotically stable?

Its characteristic polynomial is again Δ(λ) = (λ − 0.9)(λ − 1)^2, but here the minimal polynomial is ψ(λ) = (λ − 0.9)(λ − 1)^2. The eigenvalue 1, of magnitude 1, is a repeated root of the minimal polynomial, so the equation is not marginally stable, neither is it asymptotically stable.
5.14 Use Theorem 5.5 to show that all eigenvalues of
  A = [0 1; -0.5 -1]
have negative real parts.

Proof: for any given positive definite symmetric matrix
  N = [a b; b c]   with a > 0, c > 0, ac − b^2 > 0
we solve the Lyapunov equation A'M + MA = −N for the symmetric matrix M = [m11 m12; m12 m22]. Writing the equation out,
  [0 -0.5; 1 -1][m11 m12; m12 m22] + [m11 m12; m12 m22][0 1; -0.5 -1] = [-a -b; -b -c]
gives the unique solution
  m12 = a   m22 = a + 0.5c   m11 = 1.5a + 0.25c − b
M is positive definite. First,
  (1.5a + 0.25c)^2 − ac = (1/16)[(6a + c)^2 − 16ac] = (1/16)(36a^2 + c^2 − 4ac) = (1/16)[(2a − c)^2 + 32a^2] > 0
so (1.5a + 0.25c)^2 > ac > b^2, hence m11 = 1.5a + 0.25c − b > 0. Second,
  det M = m11 m22 − m12^2 = (1.5a + 0.25c − b)(a + 0.5c) − a^2
    = (1/8)(4a^2 + 8ac + c^2 − 8ab − 4bc)
    = (1/8)[(2a − 2b)^2 + (c − 2b)^2 + 8(ac − b^2)] > 0
So M is positive definite for every positive definite N; by Theorem 5.5 all eigenvalues of A have negative real parts.
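The solution formulas can be checked for one particular N by solving the Lyapunov equation via vectorization (a NumPy sketch; the Kronecker identity vec(A'M + MA) = (A'⊗I + I⊗A')vec(M) is used with row-major vec):

```python
import numpy as np

# Problem 5.14 numerically: solve A'M + MA = -N and check positive definiteness.
A = np.array([[0.0, 1.0], [-0.5, -1.0]])
N = np.array([[2.0, 1.0], [1.0, 3.0]])   # a = 2, b = 1, c = 3: ac - b^2 = 5 > 0

I = np.eye(2)
K = np.kron(A.T, I) + np.kron(I, A.T)    # maps vec(M) to vec(A'M + MA)
M = np.linalg.solve(K, -N.reshape(-1)).reshape(2, 2)
```

With a = 2, b = 1, c = 3 the formulas give m11 = 2.75, m12 = 2, m22 = 3.5.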
5.15 Use Theorem 5.D5 to show that all eigenvalues of the A in Problem 5.14 have magnitudes less than 1.

Proof: for any given positive definite symmetric matrix N = [a b; b c] (a > 0, c > 0, ac − b^2 > 0), we solve the discrete Lyapunov equation
  M − A'MA = N
for the symmetric M = [m11 m12; m12 m22]. Writing the equation out gives the unique solution
  m11 = (8a + 3c − 4b)/5   m12 = m21 = (4a + 4c − 2b)/5   m22 = (12a + 12c − 16b)/5
M is positive definite. First,
  4b ≤ 4 sqrt(ac) = 2 sqrt(4ac) ≤ 4a + c < 8a + 3c
so m11 = (8a + 3c − 4b)/5 > 0. Second,
  det M = (1/25)[(8a + 3c − 4b)(12a + 12c − 16b) − (4a + 4c − 2b)^2]
    = (4/5)(4a^2 + c^2 + 3b^2 + 5ac − 8ab − 4bc)
    = (4/5)[(2a − 2b)^2 + (c − 2b)^2 + 5(ac − b^2)] > 0
We see M is positive definite. Using Theorem 5.D5 we conclude that all eigenvalues of A have magnitudes less than 1.
5.16 Let λi (i = 1, 2, 3) be distinct negative real numbers and let ai be nonzero real numbers. Show that the matrix
  M = [ -aiaj/(λi + λj) ]   (i, j = 1, 2, 3)
is positive definite.

Proof: check the leading principal minors.
(1) Order 1:
  -a1^2/(2λ1) > 0
since λ1 < 0.
(2) Order 2:
  det[-a1^2/(2λ1)  -a1a2/(λ1+λ2); -a2a1/(λ2+λ1)  -a2^2/(2λ2)] = a1^2 a2^2 (1/(4λ1λ2) − 1/(λ1+λ2)^2)
    = a1^2 a2^2 (λ1 − λ2)^2 / (4λ1λ2(λ1+λ2)^2) > 0
because λ1λ2 > 0, (λ1+λ2)^2 > 0, and λ1 ≠ λ2.
(3) Order 3: factoring ai out of the i-th row and the i-th column,
  det M = (−1)^3 a1^2 a2^2 a3^2 det[1/(λi + λj)]
and by the Cauchy determinant formula
  det[1/(λi + λj)] = Π_{i<j}(λi − λj)^2 / Π_{i,j}(λi + λj)
The numerator is positive and the denominator is a product of nine negative factors, hence negative, so det[1/(λi+λj)] < 0 and therefore det M > 0.
All three leading principal minors are positive, so M is positive definite.
5.17 A real matrix M (not necessarily symmetric) is defined to be positive definite if x'Mx > 0 for any nonzero x. Is it true that the matrix M is positive definite if all eigenvalues of M are real and positive or if all its leading principal minors are positive? If not, how do you check its positive definiteness?

Answer: if all eigenvalues of M are real and positive or if all its leading principal minors are positive, the matrix M may not be positive definite. The following example shows it. Let
  M = [1 -8; 0 1]
Its eigenvalues 1, 1 are real and positive and all its leading principal minors are positive, but for the nonzero x = [1; 2] we have x'Mx = −11 < 0. So M is not positive definite.
Because x'Mx is a scalar, (x'Mx)' = x'M'x = x'Mx, and
  x'Mx = (1/2)x'Mx + (1/2)x'M'x = x' [(1/2)(M + M')] x
Thus x'Mx > 0 for all nonzero x if and only if x'[(1/2)(M + M')]x > 0; the matrix (1/2)(M + M') is symmetric, so we can use Theorem 3.7 to check its positive definiteness.
5.18 Show that all eigenvalues of A have real parts less than −μ < 0 if and only if, for any given positive definite symmetric matrix N, the equation
  A'M + MA + 2μM = −N
has a unique symmetric solution M and M is positive definite.

Proof: the equation can be written as (A + μI)'M + M(A + μI) = −N. Let B = A + μI; then the equation becomes B'M + MB = −N, and all eigenvalues of B have negative real parts if and only if, for any given positive definite symmetric matrix N, this equation has a unique symmetric solution M and M is positive definite. Since B = A + μI, det(λI − B) = det((λ − μ)I − A); that is, the eigenvalues of A are the eigenvalues of B shifted by −μ. So all eigenvalues of A have real parts less than −μ if and only if all eigenvalues of B have negative real parts, equivalently, if and only if, for any given positive definite symmetric matrix N, the equation A'M + MA + 2μM = −N has a unique symmetric solution M and M is positive definite.
5.19 Show that all eigenvalues of A have magnitudes less than ρ if and only if, for any given positive definite symmetric matrix N, the equation
  ρ^2 M − A'MA = ρ^2 N
has a unique symmetric solution M and M is positive definite.

Proof: the equation can be written as M − (A/ρ)'M(A/ρ) = N. Let B = A/ρ; then det(λI − B) = ρ^{-n} det(ρλI − A); that is, the eigenvalues of A are the eigenvalues of B multiplied by ρ. So all eigenvalues of A have magnitudes less than ρ if and only if all eigenvalues of B have magnitudes less than 1, equivalently, if and only if, for any given positive definite symmetric matrix N, the equation ρ^2 M − A'MA = ρ^2 N has a unique symmetric solution M and M is positive definite.
5.20 Is a system with impulse response g(t, τ) = e^{-2|t| - |τ|} BIBO stable? How about g(t, τ) = sin t · e^{-(t−τ)} · cos τ?

Answer: for the first impulse response,
  ∫_{t0}^{t} |e^{-2|t| - |τ|}| dτ = e^{-2|t|} ∫_{t0}^{t} e^{-|τ|} dτ
which equals
  e^{-2t}(e^{-t0} − e^{-t})   if t0 ≥ 0
  e^{-2t}(2 − e^{t0} − e^{-t})   if t0 < 0 and t > 0
  e^{2t}(e^{t} − e^{t0})   if t < 0
In every case the value is bounded by 2 < ∞, so the system is BIBO stable. For the second impulse response,
  ∫_{t0}^{t} |sin t · e^{-(t−τ)} · cos τ| dτ ≤ |sin t| ∫_{t0}^{t} e^{-(t−τ)} dτ = |sin t| (1 − e^{-(t−t0)}) ≤ 1 < ∞
so this system is also BIBO stable.

5.21 Consider
  x' = 2t x + u   y = e^{-t^2} x
Is the equation BIBO stable? Marginally stable? Asymptotically stable?

Answer: x' = 2tx gives x(t) = x(t0) e^{t^2 − t0^2}, so X(t) = e^{t^2} is a fundamental matrix and Φ(t, t0) = e^{t^2 − t0^2}. The impulse response is
  g(t, τ) = C(t)Φ(t, τ)B(τ) = e^{-t^2} e^{t^2 − τ^2} · 1 = e^{-τ^2}
and
  ∫_{t0}^{t} |g(t, τ)| dτ ≤ ∫_{-∞}^{∞} e^{-τ^2} dτ = sqrt(π) < ∞
so the equation is BIBO stable. But
  |Φ(t, t0)| = e^{t^2 − t0^2}   for t ≥ t0
is not bounded; that is, there does not exist a finite constant M such that |Φ(t, t0)| ≤ M < ∞, so the equation is not marginally stable and is not asymptotically stable.
5.22 Show that the equation in Problem 5.21 can be transformed, by using x̄ = p(t)x with p(t) = e^{-t^2}, into
  x̄' = 0·x̄ + e^{-t^2} u   y = x̄
Is the equation BIBO stable? Marginally stable? Asymptotically stable? Is the transformation a Lyapunov transformation?

Proof: we have found a fundamental matrix of the equation in Problem 5.21: X(t) = e^{t^2}. Choose P(t) = X^{-1}(t) = e^{-t^2} = p(t). Then
  Ā(t) = [P(t)A(t) + P'(t)]P^{-1}(t) = [2t e^{-t^2} − 2t e^{-t^2}] e^{t^2} = 0
  B̄(t) = P(t)B(t) = e^{-t^2}   C̄(t) = C(t)P^{-1}(t) = e^{-t^2} e^{t^2} = 1   D̄(t) = D(t) = 0
and the equation is transformed into x̄' = 0·x̄ + e^{-t^2}u, y = x̄. The impulse response is invariant under an equivalence transformation, and so is BIBO stability: the equation is BIBO stable. From x̄' = 0·x̄, x̄(t) = x̄(0), so X̄(t) = 1 is a fundamental matrix and Φ̄(t, t0) = 1 is bounded; the transformed equation is marginally stable but not asymptotically stable. The transformation is not a Lyapunov transformation: P(t) = e^{-t^2} is bounded, but P^{-1}(t) = e^{t^2} is not. This is why marginal stability is not preserved between Problems 5.21 and 5.22.
5.23 Is the homogeneous equation
  x' = [-1 0; e^{-3t} 0] x
for t0 ≥ 0 marginally stable? Asymptotically stable?

Answer: x1' = −x1 gives x1(t) = x1(0)e^{-t}; then x2' = e^{-3t}x1 = x1(0)e^{-4t} gives
  x2(t) = −(1/4)x1(0)e^{-4t} + x2(0) + (1/4)x1(0)
So
  X(t) = [e^{-t}  0; -(1/4)e^{-4t}  1]
is a fundamental matrix of the equation, and
  Φ(t, t0) = X(t)X^{-1}(t0) = [e^{-(t−t0)}  0; (1/4)(e^{-3t0} − e^{-4t+t0})  1]
is the state transition matrix. For all t ≥ t0 ≥ 0 we have 0 < e^{-(t−t0)} ≤ 1 and, since 0 < e^{-3t0} ≤ 1 and 0 < e^{-4t+t0} ≤ 1, every entry of Φ(t, t0) is bounded, for example ||Φ(t, t0)|| ≤ 5/4. Thus the equation is marginally stable. However, Φ(t, t0) does not approach zero as t → ∞ (its (2,2) entry equals 1 for all t), so the equation is not asymptotically stable.
6.1 Is the state equation
  x' = [0 1 0; 0 0 1; -1 -3 -3] x + [0; 0; 1] u
  y = [1 2 1] x
controllable? Observable?

Answer:
  ρ([B AB A^2B]) = ρ([0 0 1; 0 1 -3; 1 -3 6]) = 3
thus the state equation is controllable. But CA = [-1 -2 -1] = −C, so
  ρ([C; CA; CA^2]) = ρ([1 2 1; -1 -2 -1; 1 2 1]) = 1 < 3
and the state equation is not observable.
6.2 Is the state equation
  x' = [0 1 0; 0 0 1; -1 -2 -1] x + [0 1; 1 0; 0 0] u
  y = [1 0 1] x
controllable? Observable?

Answer:
  ρ([B AB]) = ρ([0 1 1 0; 1 0 0 0; 0 0 -2 -1]) = 3
thus the state equation is controllable. Also
  ρ([C; CA; CA^2]) = ρ([1 0 1; -1 -1 0; 0 -1 -1]) = 3
so the state equation is observable.
A 2 B A n B] ? If
( [B A [ AB
A 2 B A n B] B A n1 B] ]= ()
If A is nonsingular {( ( [ AB
A 2 B A n B] .
A 2 B A n B] )= (A [B AB A n1 B] ]= ( [B
A11
A21
A12
B1
+
x
0 u is controllable if and only if the pair
A22
6.5 Find a state equation to describe the network shown in Fig. 6.1 and then check its controllability and observability.

Answer: the state variables x1 and x2 are chosen as shown; then we have
  x1' + x1 = u   x2' + x2 = 0   y = x2 + 2u
thus a state equation describing the network can be expressed as
  x' = [-1 0; 0 -1] x + [1; 0] u
  y = [0 1] x + 2u
Checking controllability and observability:
  ρ([B AB]) = ρ([1 -1; 0 0]) = 1 < 2
  ρ([C; CA]) = ρ([0 1; 0 -1]) = 1 < 2
The equation is neither controllable nor observable.
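The two rank tests can be reproduced in NumPy (a quick check of the hand computation):

```python
import numpy as np

# Controllability and observability ranks for the network of Problem 6.5.
A = np.array([[-1.0, 0.0], [0.0, -1.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[0.0, 1.0]])

ctrb = np.hstack([B, A @ B])     # [B AB]
obsv = np.vstack([C, C @ A])     # [C; CA]
rc = np.linalg.matrix_rank(ctrb)
ro = np.linalg.matrix_rank(obsv)
```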
6.6 Find the controllability index and observability index of the state equations in Problems 6.1 and 6.2.

Answer: the state equation in Problem 6.1 is controllable with p = ρ(B) = 1, and the bounds n/p ≤ μ ≤ n − p + 1 give μ = 3: the controllability index is 3. Its observability matrix has rank 1, which is already attained by C alone, so the observability index is 1 (the equation is not observable).
The state equation in Problem 6.2 is both controllable and observable. With p = ρ(B) = 2 and n = 3, the bounds n/p ≤ μ ≤ n − p + 1 give μ = 2: the controllability index is 2. With q = ρ(C) = 1, the full rank 3 of the observability matrix is reached only when CA^2 is included, so the observability index is ν = 3.

6.7 What is the controllability index of the state equation x' = Ax + Iu, where I is the unit matrix?

Solution:
  C = [B AB A^2B ... A^{n-1}B] = [I A A^2 ... A^{n-1}]
C is an n × n^2 matrix whose first block I alone has rank n, so the controllability index is 1.
6.8 Reduce the state equation
  x' = [-1 4; 4 -1] x + [1; 1] u
  y = [1 1] x
to a controllable one.

Answer:
  ρ([B AB]) = ρ([1 3; 1 3]) = 1 < 2
so the equation is not controllable. Choosing Q = P^{-1} = [1 1; 1 0] (first column B, second column arbitrary) and letting x̄ = Px, we have
  Ā = PAP^{-1} = [0 1; 1 -1][-1 4; 4 -1][1 1; 1 0] = [3 4; 0 -5]
  B̄ = PB = [0 1; 1 -1][1; 1] = [1; 0]   C̄ = CP^{-1} = [1 1][1 1; 1 0] = [2 1]
thus the state equation can be reduced to the controllable subequation
  x̄c' = 3 x̄c + u   y = 2 x̄c
6.9 Reduce the state equation in Problem 6.5 to a controllable and observable equation.

Solution: the state equation in Problem 6.5 is
  x' = [-1 0; 0 -1] x + [1; 0] u   y = [0 1] x + 2u
From the form of the state equation we can readily see that x1 is controllable but does not appear in the output, while x2 appears in the output but is not controllable: the output is independent of the controllable state. Thus the reduced controllable and observable equation has dimension zero and is simply
  y = 2u
6.10 Reduce the state equation
  x' = [λ1 1 0 0 0; 0 λ1 0 0 0; 0 0 λ1 0 0; 0 0 0 λ2 1; 0 0 0 0 λ2] x + [0; 1; 0; 0; 1] u
  y = [0 1 1 0 1] x
to a controllable and observable equation.

Solution: the state equation is in Jordan form, and the state variables associated with λ1 are independent of the state variables associated with λ2; thus we can decompose the state equation into two independent subequations x = [x1; x2], reduce each to a controllable and observable equation, and then combine the results with y = y1 + y2.
The λ1-subequation has transfer function 1/(s − λ1), so its controllable and observable reduction is
  x̄1' = λ1 x̄1 + u   y1 = x̄1
The λ2-subequation is controllable but not observable (the entry of c corresponding to the first column of the λ2 Jordan block is zero); its transfer function is 1/(s − λ2) and its reduction is
  x̄2' = λ2 x̄2 + u   y2 = x̄2
Combining the two controllable and observable equations, we obtain the reduced controllable and observable state equation of the original equation:
  x̄co' = [λ1 0; 0 λ2] x̄co + [1; 1] u
  y = [1 1] x̄co
6.11 Consider the n-dimensional state equation
  x' = Ax + Bu   y = Cx + Du
The rank of its controllability matrix is assumed to be n1 < n. Let Q1 be an n×n1 matrix whose columns are any n1 linearly independent columns of the controllability matrix, and let P1 be an n1×n matrix such that P1Q1 = I_{n1}, where I_{n1} is the unit matrix of order n1. Show that the following n1-dimensional state equation
  x̄1' = P1AQ1 x̄1 + P1B u   y = CQ1 x̄1 + Du
is controllable and has the same transfer matrix as the original state equation.

Proof: extend Q1 to a nonsingular matrix Q = [Q1 Q2] and let
  P = Q^{-1} = [P1; P2]
where P1 is an n1×n matrix and P2 is an (n − n1)×n one. From
  QP = Q1P1 + Q2P2 = I_n
  PQ = [P1Q1 P1Q2; P2Q1 P2Q2] = [I_{n1} 0; 0 I_{n-n1}]
we get P1Q1 = I_{n1} and P2Q2 = I_{n-n1}. The equivalence transformation x̄ = Px transforms the original equation into
  [x̄c'; x̄c̄'] = [P1AQ1 P1AQ2; P2AQ1 P2AQ2][x̄c; x̄c̄] + [P1B; P2B] u
Because the columns of B and of A times the controllability matrix all lie in the range of Q1, we have P2AQ1 = 0 and P2B = 0, so the transformed equation has the block-triangular form
  [x̄c'; x̄c̄'] = [Āc Ā12; 0 Āc̄][x̄c; x̄c̄] + [B̄c; 0] u
  y = [CQ1 CQ2][x̄c; x̄c̄] + Du
with Āc = P1AQ1 and B̄c = P1B. This is the controllable/uncontrollable decomposition: the n1-dimensional subequation (P1AQ1, P1B, CQ1, D) is controllable and has the same transfer matrix as the original state equation.

6.12 In Problem 6.11 the reduction procedure reduces to solving for P1 in P1Q1 = I. How do you solve P1?

Solution: from the proof of Problem 6.11, if ρ([B AB ... A^{n-1}B]) = n1 < n, form the nonsingular n×n matrix Q = [q1 ... q_{n1} ... q_n], where the first n1 columns are any n1 linearly independent columns of the controllability matrix and the remaining columns are chosen arbitrarily so that Q is nonsingular. Let P = Q^{-1} = [P1; P2]; then P1, the first n1 rows of Q^{-1}, satisfies P1Q1 = I_{n1}.
6.13 Develop a similar statement as in Problem 6.11 for an unobservable state equation.

Solution: consider the n-dimensional state equation
  x' = Ax + Bu   y = Cx + Du
The rank of its observability matrix is assumed to be n1 < n. Let P1 be an n1×n matrix whose rows are any n1 linearly independent rows of the observability matrix, and let Q1 be an n×n1 matrix such that P1Q1 = I_{n1}, where I_{n1} is the unit matrix of order n1. Then the following n1-dimensional state equation
  x̄1' = P1AQ1 x̄1 + P1B u   y = CQ1 x̄1 + Du
is observable and has the same transfer matrix as the original state equation.
6.14 Check the controllability and observability of the Jordan-form state equation given in the problem, in which the eigenvalue 2 appears in three Jordan blocks and the eigenvalue 1 in two Jordan blocks, with the B and C shown.

Solution: for a Jordan-form equation, controllability is decided by the rows of B corresponding to the last row of each Jordan block of the same eigenvalue, and observability by the columns of C corresponding to the first column of each Jordan block. For the eigenvalue 2,
  ρ([2 1 1; 1 1 1; 3 2 1]) = 3
and for the eigenvalue 1,
  ρ([1 0 1; 1 0 0]) = 2
so the state equation is controllable. However, it is not observable, because for the eigenvalue 2
  ρ([2 1 3; 1 1 2; 0 1 1]) = 2 < 3
6.15 Is it possible to find a set of bij and a set of cij such that the state equation
  x' = [1 1 0 0 0; 0 1 0 0 0; 0 0 1 1 0; 0 0 0 1 0; 0 0 0 0 1] x + [b11 b12; b21 b22; b31 b32; b41 b42; b51 b52] u
  y = [c11 c12 c13 c14 c15; c21 c22 c23 c24 c25; c31 c32 c33 c34 c35] x
is controllable? Observable?

Solution: the matrix is in Jordan form with the single eigenvalue 1 in three Jordan blocks (rows {1,2}, {3,4}, {5}). It is impossible to find a set of bij such that the state equation is controllable, because controllability requires the rows of B corresponding to the last row of each Jordan block,
  [b21 b22; b41 b42; b51 b52]
to be linearly independent, and a 3×2 matrix has rank at most 2 < 3. It is possible to find a set of cij such that the state equation is observable, because observability requires the columns of C corresponding to the first column of each Jordan block,
  [c11 c13 c15; c21 c23 c25; c31 c33 c35]
to have rank 3, and this 3×3 matrix can be made nonsingular, for example by choosing it to be the unit matrix I3.
6.16 Consider the state equation
  x' = [λ1 0 0 0 0; 0 α1 β1 0 0; 0 -β1 α1 0 0; 0 0 0 α2 β2; 0 0 0 -β2 α2] x + [b1; b11; b12; b21; b22] u
  y = [c1 c11 c12 c21 c22] x
It is the modal form discussed in (4.28). It has one real eigenvalue and two pairs of complex conjugate eigenvalues; it is assumed that they are distinct. Show that the state equation is controllable if and only if b1 ≠ 0 and (bi1 ≠ 0 or bi2 ≠ 0) for i = 1, 2; it is observable if and only if c1 ≠ 0 and (ci1 ≠ 0 or ci2 ≠ 0) for i = 1, 2.

Proof: apply the equivalence transformation x̄ = Px with
  P = [1 0 0 0 0; 0 0.5 -0.5j 0 0; 0 0.5 0.5j 0 0; 0 0 0 0.5 -0.5j; 0 0 0 0.5 0.5j]
which transforms each 2×2 block [αi βi; -βi αi] into diag(αi + jβi, αi − jβi):
  Ā = PAP^{-1} = diag(λ1, α1+jβ1, α1-jβ1, α2+jβ2, α2-jβ2)
  B̄ = PB = [b1; 0.5(b11 − jb12); 0.5(b11 + jb12); 0.5(b21 − jb22); 0.5(b21 + jb22)]
  C̄ = CP^{-1} = [c1  c11+jc12  c11-jc12  c21+jc22  c21-jc22]
Since the eigenvalues of Ā are distinct, the Jordan-form criterion applies: the equation is controllable if and only if every entry of B̄ is nonzero, i.e., b1 ≠ 0 and 0.5(bi1 ∓ jbi2) ≠ 0, which for real bi1, bi2 is equivalent to bi1 ≠ 0 or bi2 ≠ 0, for i = 1, 2. Similarly, it is observable if and only if every entry of C̄ is nonzero, i.e., c1 ≠ 0 and ci1 ≠ 0 or ci2 ≠ 0 for i = 1, 2. Equivalence transformations preserve controllability and observability, so the same conditions hold for the modal form.
6.17 Find two- and three-dimensional state equations to describe the network shown in Fig. 6.12, and then discuss their controllability and observability.

Solution: with the state variables chosen as shown, eliminating the nondynamic variables leads to the two-dimensional equation
  x' = [-2/11 0; 3/22 0] x + [2/11; -3/22] u   y = [1 -1] x
It is not controllable, because AB = −(2/11)B, so
  ρ([B AB]) = 1 < 2
but it is observable, because CA = [-7/22 0] and
  ρ([C; CA]) = ρ([1 -1; -7/22 0]) = 2
Introducing y itself as a third state variable x3 = x1 − x2 gives the three-dimensional equation
  x' = [-2/11 0 0; 3/22 0 0; -7/22 0 0] x + [2/11; -3/22; 7/22] u   y = [0 0 1] x
which is neither controllable (ρ of the controllability matrix is 1 < 3, again because AB is proportional to B) nor observable (ρ([C; CA; CA^2]) = 2 < 3, since CA^2 is proportional to CA).
6.18 Check the controllability and observability of the state equation obtained in Problem 2.19. Can you give a physical interpretation directly from the network?

Solution: the state equation obtained in Problem 2.19 is
  x' = [-1 0 0; 0 0 1; 0 -1 -1] x + [1; 1; 0] u
  y = [0 1 0] x
  ρ([B AB A^2B]) = ρ([1 -1 1; 1 0 -1; 0 -1 1]) = 3
so the equation is controllable; however,
  ρ([C; CA; CA^2]) = ρ([0 1 0; 0 0 1; 0 -1 -1]) = 2 < 3
so it is not observable. A physical interpretation from the network: if u = 0, x1(0) = a ≠ 0, and x2(0) = x3(0) = 0, then y ≡ 0. That is, any x(0) = [a 0 0]' with u(t) ≡ 0 yields the same output y(t) ≡ 0, so there is no way to determine the initial state uniquely, and the state equation describing the network is not observable. On the other hand, the input has an effect on all three state variables, so the state equation describing the network is controllable.
6.19 Consider the continuous-time state equation in Problem 4.2 and its discretized equations in Problem 4.3 with sampling periods T = 1 and T = pi. Discuss the controllability and observability of the discretized equations.

Solution: The continuous-time equation is
x' = [0 1; -2 -2] x + [0; 1] u, y = [2 3] x
Its eigenvalues are -1 ± j, and
e^{At} = e^{-t} [cos t + sin t, sin t; -2 sin t, cos t - sin t]
so the discretized equation is
x[k+1] = e^{-T} [cos T + sin T, sin T; -2 sin T, cos T - sin T] x[k] + Bd u[k], y[k] = [2 3] x[k]
with Bd = A^-1 (Ad - I) b. For T = 1:
Ad = [0.5083 0.3096; -0.6191 -0.1108], Bd = [0.2458; 0.3096]
For T = pi:
Ad = [-0.0432 0; 0 -0.0432], Bd = [0.5216; 0]
The continuous-time state equation is controllable and observable, and the system it describes is single-input, so the discretized equation is controllable and observable if and only if
|Im(lambda_i - lambda_j)| ≠ 2*pi*m/T for every positive integer m, whenever Re(lambda_i - lambda_j) = 0
Here lambda_1 - lambda_2 = 2j. For T = 1 the condition 2 ≠ 2*pi*m holds for every m, so the discretized equation is controllable and observable. For T = pi the condition fails at m = 1 (2*pi*m/T = 2m = 2), so the discretized equation is neither controllable nor observable; indeed Ad = -e^{-pi} I and Bd = [0.5216; 0], so rank([Bd AdBd]) = 1 < 2.
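A short numerical check of the two sampling periods, using the closed-form matrix exponential derived above (a sketch assuming NumPy):

```python
import numpy as np

A = np.array([[0.0, 1], [-2, -2]])
b = np.array([[0.0], [1]])

def Ad(T):
    # closed-form exp(A*T) for A = [[0,1],[-2,-2]], eigenvalues -1 +/- j
    e = np.exp(-T)
    return e * np.array([[np.cos(T) + np.sin(T), np.sin(T)],
                         [-2 * np.sin(T), np.cos(T) - np.sin(T)]])

def Bd(T):
    # Bd = A^{-1} (Ad - I) b, valid because A is nonsingular
    return np.linalg.solve(A, (Ad(T) - np.eye(2)) @ b)

ranks = {}
for T in (1.0, np.pi):
    ad, bd = Ad(T), Bd(T)
    ranks[T] = np.linalg.matrix_rank(np.hstack([bd, ad @ bd]), tol=1e-8)
print(ranks)   # rank 2 for T = 1, rank 1 for T = pi
```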
6.20 Check the controllability and observability of
x' = [0 1; 0 t] x + [0; 1] u, y = [0 1] x

Solution: With M0 = b = [0; 1],
M1 = -A(t)M0 + (d/dt)M0 = [-1; -t]
and the determinant of the matrix
[M0 M1] = [0 -1; 1 -t]
is 1, which is nonzero for all t, so the state equation is controllable at every t0.
For observability, the state transition matrix is
Phi(t, t0) = [1, integral_{t0}^{t} e^{(tau^2 - t0^2)/2} d tau; 0, e^{(t^2 - t0^2)/2}]
so that
C(t) Phi(t, t0) = [0, e^{(t^2 - t0^2)/2}]
and
W0(t0, t1) = integral_{t0}^{t1} [0 0; 0 e^{(t^2 - t0^2)}] dt
We see that W0(t0, t1) is singular for all t0 and t1; thus the state equation is not observable at any t0.
6.21 Check the controllability and observability of
x' = [0 0; 0 1] x + [1; e^{-t}] u, y = [0 e^{-t}] x

Solution: Because x1' = 0 when u ≡ 0, we have x1(t) = x1(0). The state transition matrix is
Phi(t, t0) = X(t)X^-1(t0) = [1 0; 0 e^{t}] [1 0; 0 e^{-t0}] = [1 0; 0 e^{t - t0}]
Then
Phi(t1, tau)B(tau) = [1 0; 0 e^{t1 - tau}] [1; e^{-tau}] = [1; e^{t1 - 2tau}]
The two entries 1 and e^{t1 - 2tau} are linearly independent functions of tau, so
Wc(t0, t1) = integral_{t0}^{t1} [1, e^{t1 - 2tau}; e^{t1 - 2tau}, e^{2t1 - 4tau}] d tau
is nonsingular for every t1 > t0, and the equation is controllable at every t0. For observability,
C(t) Phi(t, t0) = [0, e^{-t}] [1 0; 0 e^{t - t0}] = [0, e^{-t0}]
so
W0(t0, t1) = integral_{t0}^{t1} [0 0; 0 e^{-2t0}] dt = [0 0; 0 (t1 - t0) e^{-2t0}]
Its determinant is identically zero for all t0 and t1, so the equation is not observable at any t0.
6.22 Show that (d/dt) Phi(t0, t) = -Phi(t0, t) A(t), and use it to show that (A(t), B(t)) is controllable at t0 if and only if there exists a finite t1 > t0 such that
Wc(t0, t1) = integral_{t0}^{t1} Phi(t0, tau) B(tau) B'(tau) Phi'(t0, tau) d tau
is nonsingular.

Solution: Differentiating Phi(t, t0) Phi(t0, t) = I with respect to t gives
0 = ((d/dt)Phi(t, t0)) Phi(t0, t) + Phi(t, t0) ((d/dt)Phi(t0, t)) = A(t) Phi(t, t0) Phi(t0, t) + Phi(t, t0) ((d/dt)Phi(t0, t)) = A(t) + Phi(t, t0) ((d/dt)Phi(t0, t))
Premultiplying by Phi(t0, t) yields
(d/dt) Phi(t0, t) = -Phi(t0, t) A(t)
The standard controllability Gramian uses Phi(t1, tau) instead of Phi(t0, tau). Because Phi(t0, tau) = Phi(t0, t1) Phi(t1, tau), the two Gramians are related by
Wc(t0, t1) = Phi(t0, t1) [ integral_{t0}^{t1} Phi(t1, tau) B(tau) B'(tau) Phi'(t1, tau) d tau ] Phi'(t0, t1)
and Phi(t0, t1) is nonsingular, so one Gramian is nonsingular if and only if the other is. Hence (A(t), B(t)) is controllable at t0 if and only if Wc(t0, t1) is nonsingular for some finite t1 > t0.
For time-invariant systems, show that (A, B) is controllable if and only if (-A, B) is controllable. Is this true for time-varying systems?

Proof: For a time-invariant system with A of order n x n, (A, B) is controllable if and only if
rank(C1) = rank([B AB A^2B ... A^{n-1}B]) = n
and (-A, B) is controllable if and only if
rank(C2) = rank([B -AB A^2B ... (-1)^{n-1} A^{n-1}B]) = n
We know that multiplying any column of a matrix by a nonzero constant does not change the rank of the matrix, so rank(C1) = rank(C2) and the two conditions are identical; thus we conclude that (A, B) is controllable if and only if (-A, B) is controllable. For time-varying systems, this is not true.
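The time-invariant claim is easy to confirm numerically; this sketch assumes NumPy and uses a randomly drawn (A, B), which is controllable with probability one.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 1))

def ctrb_rank(A, B):
    # rank of [B AB ... A^{n-1}B]
    n = A.shape[0]
    C = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])
    return np.linalg.matrix_rank(C)

r_plus, r_minus = ctrb_rank(A, B), ctrb_rank(-A, B)
print(r_plus, r_minus)   # the two ranks are always equal
```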
7.1 Given g(s) = (s - 1)/((s^2 - 1)(s + 2)), find a three-dimensional controllable realization and check its observability.

Solution: Write
g(s) = (s - 1)/((s^2 - 1)(s + 2)) = (s - 1)/(s^3 + 2s^2 - s - 2)
Using (7.9), a three-dimensional controllable-form realization is
x' = [-2 1 2; 1 0 0; 0 1 0] x + [1; 0; 0] u
y = [0 1 -1] x
Because g(s) reduces to 1/((s + 1)(s + 2)), of degree 2 < 3, this realization is not minimal; since it is controllable, it cannot be observable, and indeed rank([c; cA; cA^2]) = 2 < 3.

7.2 Find a three-dimensional observable realization of the transfer function in Problem 7.1.

Solution: Using (7.14), a three-dimensional observable-form realization is
x' = [-2 1 0; 1 0 1; 2 0 0] x + [0; 1; -1] u
y = [1 0 0] x
By the same degree argument it is not controllable.

7.3 Find a minimal realization of the transfer function in Problem 7.1.

Solution:
g(s) = (s - 1)/((s^2 - 1)(s + 2)) = 1/((s + 1)(s + 2)) = 1/(s^2 + 3s + 2)
so a minimal realization is
x' = [-3 -2; 1 0] x + [1; 0] u
y = [0 1] x

7.4 Use the Sylvester resultant to find the degree of the transfer function in Problem 7.1.

Solution: With D(s) = s^3 + 2s^2 - s - 2 and N(s) = s - 1, and coefficients arranged in ascending powers of s, the Sylvester resultant is
S = [-2 -1 0 0 0 0; -1 1 -2 -1 0 0; 2 0 -1 1 -2 -1; 1 0 2 0 -1 1; 0 0 1 0 2 0; 0 0 0 0 1 0]
rank(S) = 5. Because all three D-columns of S are linearly independent of their LHS columns, we conclude that S has only two linearly independent N-columns; thus deg g(s) = 2.
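The rank computation above can be checked mechanically; this sketch assumes NumPy and builds the same resultant from the ascending coefficient lists.

```python
import numpy as np

# D(s) = s^3 + 2s^2 - s - 2 and N(s) = s - 1, ascending coefficients.
D = [-2, -1, 2, 1]
N = [-1, 1, 0, 0]

S = np.zeros((6, 6))
for k in range(3):              # three shifted D/N column pairs
    S[k:k + 4, 2 * k] = D
    S[k:k + 4, 2 * k + 1] = N

rank_S = np.linalg.matrix_rank(S)
print(rank_S)   # 5 = 6 - deg gcd(D, N), so deg g = 2
```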
7.5 Use the Sylvester resultant to reduce g(s) = (2s - 1)/(4s^2 - 1) to a coprime fraction.

Solution: With D(s) = 4s^2 - 1 and N(s) = 2s - 1, the Sylvester resultant is
S = [-1 -1 0 0; 0 2 -1 -1; 4 0 0 2; 0 0 4 0]
rank(S) = 3 < 4, so D(s) and N(s) are not coprime. The monic null vector of S is
[1/2; -1/2; 0; -1] = [N0; -D0; N1; -D1]
which gives
N(s) = 1/2, D(s) = s + 1/2
Thus
g(s) = (2s - 1)/(4s^2 - 1) = (1/2)/(s + 1/2) = 1/(2s + 1)
7.6 Consider g(s) = (s + 2)/(s^2 + 2s) =: N(s)/D(s). Form the Sylvester resultant by arranging the coefficients of D(s) and N(s) in descending powers of s, and search its linearly independent columns in order from left to right. Is it true that all D-columns are linearly independent of their LHS columns? Is it true that the degree of g(s) equals the number of linearly independent N-columns?

Solution: Since g(s) = (s + 2)/(s(s + 2)) = 1/s, we have deg g(s) = 1. With D(s) = D2 s^2 + D1 s + D0 = s^2 + 2s and N(s) = N2 s^2 + N1 s + N0 = s + 2, the resultant arranged in descending powers is
S = [1 0 0 0; 2 1 1 0; 0 2 2 1; 0 0 0 2]
Searching from left to right: the first D-column and the first N-column are linearly independent, but the second D-column equals the first N-column and is therefore linearly dependent on its LHS columns; the second N-column is linearly independent. Hence it is not true that all D-columns are linearly independent of their LHS columns, and the number of linearly independent N-columns is 2 ≠ 1 = deg g(s).
7.7 Consider g(s) = (beta1 s + beta2)/(s^2 + alpha1 s + alpha2) =: N(s)/D(s) and its controllable-form realization
x' = [-alpha1 -alpha2; 1 0] x + [1; 0] u
y = [beta1 beta2] x
Show that the state equation is observable if and only if the Sylvester resultant of D(s) and N(s) is nonsingular.

Proof: From Theorem 7.1, we know the state equation is observable if and only if D(s) and N(s) are coprime. From the formation of the Sylvester resultant we can conclude that D(s) and N(s) are coprime if and only if the Sylvester resultant is nonsingular. Thus the state equation is observable if and only if the Sylvester resultant of D(s) and N(s) is nonsingular.
7.8 Repeat Problem 7.7 for a transfer function of degree 3 and its controllable-form realization.

Solution: Consider
g(s) = (beta0 s^3 + beta1 s^2 + beta2 s + beta3)/(s^3 + alpha1 s^2 + alpha2 s + alpha3) =: N(s)/D(s)
= beta0 + ((beta1 - beta0*alpha1) s^2 + (beta2 - beta0*alpha2) s + (beta3 - beta0*alpha3))/(s^3 + alpha1 s^2 + alpha2 s + alpha3)
The controllable canonical-form realization of g(s) is
x' = [-alpha1 -alpha2 -alpha3; 1 0 0; 0 1 0] x + [1; 0; 0] u
y = [beta1 - beta0*alpha1, beta2 - beta0*alpha2, beta3 - beta0*alpha3] x + beta0 u
As in Problem 7.7, this realization is observable if and only if D(s) and N(s) are coprime, which holds if and only if the Sylvester resultant of D(s) and N(s) is nonsingular.
Use the Markov parameters of g(s) = 1/(s + 1)^2 to find an irreducible companion-form realization.

Solution: g(s) = 1/(s + 1)^2 is a strictly proper rational function of degree 2. Its Markov parameters follow from the expansion
g(s) = 0*s^-1 + s^-2 - 2s^-3 + 3s^-4 - 4s^-5 + 5s^-6 - ...
Form the Hankel matrix and its shifted version:
T(2,2) = [0 1; 1 -2], T~(2,2) = [1 -2; -2 3]
rank(T(2,2)) = 2 = deg g(s), so a two-dimensional realization is irreducible, and
A = T~(2,2) T^-1(2,2) = [1 -2; -2 3] [2 1; 1 0] = [0 1; -1 -2]
b = [0; 1], c = [1 0]
which is in companion form and realizes 1/(s^2 + 2s + 1).
Use the Markov parameters of g(s) = 1/(s + 1)^2 to find an irreducible balanced-form realization.

Solution: With T(2,2) = [0 1; 1 -2] and T~(2,2) = [1 -2; -2 3], using Matlab I type:
t = [0 1; 1 -2]; tt = [1 -2; -2 3];
[k, s, l] = svd(t); sl = sqrt(s);
o = k * sl; co = sl * l';
A = inv(o) * tt * inv(co); b = co(:, 1); c = o(1, :);
This yields the following balanced realization:
x' = [-1.7071 0.7071; -0.7071 -0.2929] x + [-0.5946; 0.5946] u
y = [0.5946 0.5946] x
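The Hankel construction of the companion form can be replayed in a few lines; this sketch assumes NumPy and the Markov parameters listed above.

```python
import numpy as np

# Markov parameters h(1), h(2), ... of 1/(s+1)^2.
h = [0.0, 1, -2, 3]
T = np.array([[h[0], h[1]], [h[1], h[2]]])    # Hankel T(2,2)
Tt = np.array([[h[1], h[2]], [h[2], h[3]]])   # shifted Hankel

A = Tt @ np.linalg.inv(T)
b = np.array([[h[0]], [h[1]]])
c = np.array([[1.0, 0]])

# spot-check the transfer function at s = 1: should equal 1/(1+1)^2 = 0.25
s = 1.0
g1 = (c @ np.linalg.solve(s * np.eye(2) - A, b))[0, 0]
print(A, g1)
```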
7.12 Show that the two state equations
x' = [2 1; 0 -1] x + [1; 0] u, y = [2 2] x
and
x' = [2 0; 1 -1] x + [1; 2] u, y = [2 0] x
are realizations of (2s + 2)/(s^2 - s - 2). Are they minimal realizations? Are they algebraically equivalent?

Proof: First,
(2s + 2)/(s^2 - s - 2) = 2(s + 1)/((s + 1)(s - 2)) = 2/(s - 2)
For the first equation,
c(sI - A)^-1 b = [2 2] [(s + 1)/((s - 2)(s + 1)); 0] = 2/(s - 2)
For the second equation,
c(sI - A)^-1 b = [2 0] [1/(s - 2); (2s - 3)/((s - 2)(s + 1))] = 2/(s - 2)
so both equations are realizations of (2s + 2)/(s^2 - s - 2). The degree of (2s + 2)/(s^2 - s - 2) is 1, and the two state equations are both two-dimensional, so they are not minimal realizations.
They are not algebraically equivalent, because there does not exist a nonsingular matrix P such that
[2 1; 0 -1] P = P [2 0; 1 -1], [1; 0] = P [1; 2], [2 2] P = [2 0]
The last two conditions restrict P to the one-parameter family P = [1-2a a; 2a -a], and the first condition then forces a = 0, giving P = [1 0; 0 0], which is singular.
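Both realizations can be compared numerically at a few test points away from the poles; a sketch assuming NumPy:

```python
import numpy as np

A1 = np.array([[2.0, 1], [0, -1]]); b1 = np.array([[1.0], [0]]); c1 = np.array([[2.0, 2]])
A2 = np.array([[2.0, 0], [1, -1]]); b2 = np.array([[1.0], [2]]); c2 = np.array([[2.0, 0]])

def g(A, b, c, s):
    # transfer function c (sI - A)^{-1} b of a 2-dimensional realization
    return (c @ np.linalg.solve(s * np.eye(2) - A, b))[0, 0]

# both realizations should match the reduced fraction 2/(s-2)
pts = [1.0, 3.0, 1j]
ok = all(np.isclose(g(A1, b1, c1, s), 2 / (s - 2)) and
         np.isclose(g(A2, b2, c2, s), 2 / (s - 2)) for s in pts)
print(ok)
```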
7.13 Find the characteristic polynomials and degrees of the following proper rational matrices:
G1(s) = [s/(s+1), 1/(s+3); s/(s+1), 1/(s+3)]
G2(s) = [1/(s+1)^2, 1/((s+1)(s+2)); 1/(s+2), 1/((s+1)(s+2))]
G3(s) = [1/s, 1/(s+1)^2, 1/(s+2); 1/(s+3)^2, 1/(s+4), 1/(s+5)]
Note that each entry of G3(s) has different poles from the other entries.

Solution: The matrix G1(s) has s/(s+1), 1/(s+3), s/(s+1), 1/(s+3) and det G1(s) = 0 as its minors, so the characteristic polynomial of G1(s) is (s+1)(s+3) and deg G1(s) = 2.
G2(s) has
1/(s+1)^2, 1/((s+1)(s+2)), 1/(s+2), 1/((s+1)(s+2)), det G2(s) = (-s^2 - s + 1)/((s+1)^3 (s+2)^2)
as its minors; thus the characteristic polynomial of G2(s) is (s+1)^3 (s+2)^2 and deg G2(s) = 5.
Every entry of G3(s) has poles that differ from those of all other entries, so the characteristic polynomial of G3(s) is the product of all entry denominators, s (s+1)^2 (s+2) (s+3)^2 (s+4) (s+5), and deg G3(s) = 8.
7.14 Use the left fraction
G(s) = [s -1; -s s]^-1 [1; -1]
to form a generalized resultant as in (7.83), and then search its linearly independent columns in order from left to right. What is the number of linearly independent N-columns? What is the degree of G(s)? Find a right coprime fraction of G(s). Is the given left fraction coprime?

Solution: G(s) = D^-1(s) N(s) with
D(s) = [s -1; -s s] = [0 -1; 0 0] + [1 0; -1 1] s =: D0 + D1 s
N(s) = [1; -1] =: N0 + N1 s, N1 = 0
The generalized resultant, with each column block consisting of the two D-columns followed by the N-column, is
S = [0 -1 1 0 0 0; 0 0 -1 0 0 0; 1 0 0 0 -1 1; -1 1 0 0 0 -1; 0 0 0 1 0 0; 0 0 0 -1 1 0]
Searching from left to right, the first five columns are linearly independent, and the last column (the second N-column) equals the first column, so
rank(S) = 5, null(S) = [-1 0 0 0 0 1]'
Hence S has only one linearly independent N-column, and deg G(s) = mu = 1. The null vector gives the right coprime fraction
D(s) = 0 + 1*s = s, N(s) = [1; 0] + [0; 0] s = [1; 0]
that is,
G(s) = N(s) D^-1(s) = [1; 0] s^-1 = [1/s; 0]
Since deg det D(s) = deg s(s - 1) = 2 > 1 = deg G(s), the given left fraction is not coprime.
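The resultant, its rank, and the null vector can all be checked numerically; a sketch assuming NumPy and the same column ordering (two D-columns, then the N-column, per block):

```python
import numpy as np

# left fraction: D(s) = D0 + D1 s, N(s) = N0 + N1 s
D0 = np.array([[0.0, -1], [0, 0]]); D1 = np.array([[1.0, 0], [-1, 1]])
N0 = np.array([[1.0], [-1]]);       N1 = np.zeros((2, 1))

top = np.hstack([D0, N0])   # coefficient block of s^0
mid = np.hstack([D1, N1])   # coefficient block of s^1
S = np.block([[top, np.zeros((2, 3))],
              [mid, top],
              [np.zeros((2, 3)), mid]])

rank_S = np.linalg.matrix_rank(S)
z = np.array([-1.0, 0, 0, 0, 0, 1])    # null vector found above
print(rank_S, np.allclose(S @ z, 0))
```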
7.15 Are all D-columns in the generalized resultant in Problem 7.14 linearly independent of their LHS columns? Now form the generalized resultant with the coefficient matrices of D(s) and N(s) arranged in descending powers of s, instead of ascending powers as in Problem 7.14. Is it true that all D-columns are linearly independent of their LHS columns? Does the degree of G(s) equal the number of linearly independent N-columns? Does Theorem 7.M4 hold?

Solution: Because D1 is nonsingular, all the D-columns in the generalized resultant in Problem 7.14 are linearly independent of their LHS columns.
Now forming the generalized resultant by arranging the coefficient matrices of D(s) and N(s) in descending powers of s:
S = [D1 N1 0; D0 N0 [D1 N1]; 0 0 [D0 N0]] = [1 0 0 0 0 0; -1 1 0 0 0 0; 0 -1 1 1 0 0; 0 0 -1 -1 1 0; 0 0 0 0 -1 1; 0 0 0 0 0 -1]
rank(S) = 5. Searching from left to right, we see that the first D-column of the second column block equals the N-column of the first block and is therefore linearly dependent on its LHS columns; so it is not true that all D-columns are linearly independent of their LHS columns.
The number of linearly independent N-columns is 2, while deg G(s) = 1 as found in Problem 7.14; so the degree of G(s) does not equal the number of linearly independent N-columns, and Theorem 7.M4 does not hold for the descending arrangement.
7.16 Use the right coprime fraction of G(s) obtained in Problem 7.14 to form a generalized resultant as in (7.89), search its linearly independent rows in order from top to bottom, and then find a left coprime fraction of G(s).

Solution: From Problem 7.14, G(s) = [1/s; 0] = N(s) D^-1(s) with
D(s) = s, N(s) = [1; 0]
Arranging the coefficients row-wise as in (7.89) gives the generalized resultant
T = [0 1 0; 1 0 0; 0 0 0; 0 0 1; 0 1 0; 0 0 0]
Searching the rows from top to bottom gives rank(T) = 3 with row indices nu1 = 1 and nu2 = 0. The monic left null vectors of the submatrices formed by the primary dependent rows and their preceding rows are
null([0 1 0; 1 0 0; 0 0 0]) = [0 0 1]
null([0 1 0; 1 0 0; 0 0 1; 0 1 0]) = [-1 0 0 1]
and they yield
D(s) = [0 0; 0 1] + [1 0; 0 0] s = [s 0; 0 1], N(s) = [1; 0]
Thus a left coprime fraction is
G(s) = [s 0; 0 1]^-1 [1; 0]
and indeed deg det([s 0; 0 1]) = 1 = deg G(s).
7.17 Find a right coprime fraction of
G(s) = [(s^2 + 1)/s^3, (2s + 1)/s^2; (s + 2)/s^2, 2/s]

Solution: G(s) can be written as the left fraction
G(s) = [s^3 0; 0 s^2]^-1 [s^2 + 1, s(2s + 1); s + 2, 2s] =: D^-1(s) N(s)
where
D(s) = [0 0; 0 0] + [0 0; 0 0] s + [0 0; 0 1] s^2 + [1 0; 0 0] s^3
N(s) = [1 0; 2 0] + [0 1; 1 2] s + [1 2; 0 0] s^2
Forming the generalized resultant S from these coefficient matrices and searching its linearly independent columns in order from left to right gives
rank(S) = 9, mu1 = 2, mu2 = 1
so deg G(s) = 3. The monic null vectors of the submatrices that consist of the primary dependent N-columns and their LHS columns yield the right coprime fraction G(s) = N(s) D^-1(s) with
D(s) = [0 0; -0.5 0.5] + [0.5 -0.5; 0 -1] s + [1 0; 0 0] s^2 = [s^2 + 0.5s, -0.5s; -0.5, -s + 0.5]
N(s) = [0.5 -2.5; 2.5 -2.5] + [1 0; 1 0] s = [s + 0.5, -2.5; s + 2.5, -2.5]
One may verify N(s)D(s) = D(s)N(s) for the two fractions, and det D(s) = -s^3, so deg det D = 3 = deg G(s) and the fraction is coprime.
A minimal (controller-form) realization then follows. We define
H(s) = [s^2 0; 0 s], L(s) = [s 0; 1 0; 0 1]
then we have
D(s) = Dhc H(s) + Dlc L(s), N(s) = Nlc L(s)
with
Dhc = [1 -0.5; 0 -1], Dlc = [0.5 0 0; 0 -0.5 0.5], Nlc = [1 0.5 -2.5; 1 2.5 -2.5]
Dhc^-1 = [1 -0.5; 0 -1], Dhc^-1 Dlc = [0.5 0.25 -0.25; 0 0.5 -0.5]
and the controller-form realization is
x' = [-0.5 -0.25 0.25; 1 0 0; 0 -0.5 0.5] x + [1 -0.5; 0 0; 0 -1] u
y = [1 0.5 -2.5; 1 2.5 -2.5] x
which is controllable by construction and, because the fraction is coprime, also observable, hence minimal.
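The equality of the left and right fractions can be verified at a few test points; a sketch assuming NumPy and the matrices obtained above:

```python
import numpy as np

Dbar = lambda s: np.array([[s**3, 0], [0, s**2]], dtype=complex)
Nbar = lambda s: np.array([[s**2 + 1, s * (2 * s + 1)], [s + 2, 2 * s]], dtype=complex)
D = lambda s: np.array([[s**2 + 0.5 * s, -0.5 * s], [-0.5, -s + 0.5]], dtype=complex)
N = lambda s: np.array([[s + 0.5, -2.5], [s + 2.5, -2.5]], dtype=complex)

# check Dbar^{-1} Nbar == N D^{-1} away from s = 0
ok = all(np.allclose(np.linalg.solve(Dbar(s), Nbar(s)),
                     N(s) @ np.linalg.inv(D(s)))
         for s in (1 + 1j, 2.0, -3.0))
print(ok)
```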
8.1 Given
x' = [2 1; -1 1] x + [1; 2] u
y = [1 1] x
find the state feedback gain k so that the feedback system has -1 and -2 as its eigenvalues. Compute k directly without using any equivalence transformation.

Solution: Introducing the state feedback u = r - [k1 k2] x, we obtain
x' = ([2 1; -1 1] - [1; 2][k1 k2]) x + [1; 2] r
with characteristic polynomial
Delta_f(s) = (s - 2 + k1)(s - 1 + 2k2) + (1 + 2k1)(1 - k2)
= s^2 + (k1 + 2k2 - 3)s + (k1 - 5k2 + 3)
The desired characteristic polynomial is (s + 1)(s + 2) = s^2 + 3s + 2. Equating
k1 + 2k2 - 3 = 3 and k1 - 5k2 + 3 = 2
yields k1 = 4 and k2 = 1, that is, k = [4 1].

8.2 Repeat Problem 8.1 by using (8.13).

Solution: The characteristic polynomials are
Delta(s) = det(sI - A) = (s - 2)(s - 1) + 1 = s^2 - 3s + 3
Delta_f(s) = (s + 1)(s + 2) = s^2 + 3s + 2
so
kbar = [3 - (-3), 2 - 3] = [6 -1]
The controllability matrices are
C = [b Ab] = [1 4; 2 1], Cbar = [1 3; 0 1], C^-1 = [-1/7 4/7; 2/7 -1/7]
Using (8.13),
k = kbar Cbar C^-1 = [6 -1][1 3; 0 1][-1/7 4/7; 2/7 -1/7] = [4 1]
8.3 Repeat Problem 8.1 by solving a Lyapunov equation.

Solution: (A, b) is controllable. Selecting
F = [-1 0; 0 -2] and kbar = [1 1]
we solve AT - TF = b kbar for T:
T = [0 1/13; 1 9/13], T^-1 = [-9 1; 13 0]
and then
k = kbar T^-1 = [1 1][-9 1; 13 0] = [4 1]
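Both designs can be confirmed by checking the closed-loop eigenvalues; a sketch assuming NumPy:

```python
import numpy as np

A = np.array([[2.0, 1], [-1, 1]])
b = np.array([[1.0], [2]])
k = np.array([[4.0, 1]])   # gain found in Problems 8.1-8.3

eigs = np.linalg.eigvals(A - b @ k)
print(sorted(eigs.real))   # [-2.0, -1.0]
```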
8.4 Find the state feedback gain for the state equation
x' = [1 1 -2; 0 1 1; 0 0 1] x + [1; 0; 1] u
so that the resulting system has eigenvalues -2 and -1 ± j1. Use the method you think is the simplest by hand to carry out the design.

Solution: (A, b) is controllable. The characteristic polynomials are
Delta(s) = (s - 1)^3 = s^3 - 3s^2 + 3s - 1
Delta_f(s) = (s + 2)(s + 1 + j1)(s + 1 - j1) = s^3 + 4s^2 + 6s + 4
so
kbar = [4 - (-3), 6 - 3, 4 - (-1)] = [7 3 5]
The controllability matrices are
C = [b Ab A^2b] = [1 -1 -2; 0 1 2; 1 1 1], Cbar = [1 3 6; 0 1 3; 0 0 1]
C^-1 = [1 1 0; -2 -3 2; 1 2 -1]
and
k = kbar Cbar C^-1 = [7 24 56] C^-1 = [15 47 -8]
8.5 Consider g(s) = ((s - 1)(s + 2))/((s + 1)(s - 2)(s + 3)). Is it possible to change it to g_f(s) = (s - 1)/((s + 2)(s + 3)) by state feedback? Is the resulting system BIBO stable? Asymptotically stable?

Solution: State feedback can shift the poles but cannot change the zeros, and
g_f(s) = (s - 1)/((s + 2)(s + 3)) = ((s - 1)(s + 2))/((s + 2)^2 (s + 3))
has the same numerator as g(s). So it is possible to change g(s) to g_f(s) by state feedback, by assigning the eigenvalues -2, -2, -3. The resulting system is asymptotically stable, because all three closed-loop eigenvalues lie in the open left half plane, and it is therefore also BIBO stable.
8.6 Consider g(s) = ((s - 1)(s + 2))/((s + 1)(s - 2)(s + 3)). Is it possible to change it to g_f(s) = 1/(s + 3) by state feedback? Is the resulting system BIBO stable? Asymptotically stable?

Solution:
g_f(s) = 1/(s + 3) = ((s - 1)(s + 2))/((s - 1)(s + 2)(s + 3))
has the same numerator as g(s), so it is possible: assign the eigenvalues 1, -2, -3, so that the poles at 1 and -2 are canceled by the zeros. The resulting system is BIBO stable, since g_f(s) has its only pole at -3, but it is not asymptotically stable, because the closed-loop equation retains the canceled, and therefore hidden, eigenvalue at 1.
8.7 Consider
x' = [1 1 -2; 0 1 1; 0 0 1] x + [1; 0; 1] u
y = [2 0 0] x
Let u = p r - k x. Find the feedforward gain p and state feedback gain k so that the resulting system has eigenvalues -2 and -1 ± j1 and will track asymptotically any step reference input.

Solution: The transfer function is
g(s) = c(sI - A)^-1 b = (2s^2 - 8s + 8)/(s^3 - 3s^2 + 3s - 1) = 2(s - 2)^2/(s - 1)^3
From Problem 8.4, the gain k = [15 47 -8] places the eigenvalues at -2 and -1 ± j1, giving Delta_f(s) = s^3 + 4s^2 + 6s + 4. State feedback does not change the numerator, so
g_f(s) = p (2s^2 - 8s + 8)/(s^3 + 4s^2 + 6s + 4)
For asymptotic tracking of any step reference input we need g_f(0) = 1:
p = Delta_f(0)/N(0) = 4/8 = 0.5
Thus p = 0.5 and k = [15 47 -8].
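The eigenvalue placement and the unity DC gain can both be checked numerically; a sketch assuming NumPy:

```python
import numpy as np

A = np.array([[1.0, 1, -2], [0, 1, 1], [0, 0, 1]])
b = np.array([[1.0], [0], [1]])
c = np.array([[2.0, 0, 0]])
k = np.array([[15.0, 47, -8]])
p = 0.5

Af = A - b @ k
charpoly = np.poly(Af)                        # s^3 + 4s^2 + 6s + 4
dc = p * (c @ np.linalg.solve(-Af, b))[0, 0]  # DC gain from r to y
print(charpoly, dc)
```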
8.8 Consider the discrete-time state equation
x[k+1] = [1 1 -2; 0 1 1; 0 0 1] x[k] + [1; 0; 1] u[k]
y[k] = [2 0 0] x[k]
Find the state feedback gain so that the resulting system has all eigenvalues at z = 0. Show that for any initial state the zero-input response of the feedback system becomes identically zero for k >= 3.

Solution: (A, b) is controllable, with
Delta(z) = (z - 1)^3 = z^3 - 3z^2 + 3z - 1, Delta_f(z) = z^3
so kbar = [3 -3 1] and
k = kbar Cbar C^-1 = [3 -3 1][1 3 6; 0 1 3; 0 0 1][1 1 0; -2 -3 2; 1 2 -1] = [1 5 2]
Then
A - bk = [0 -4 -4; 0 1 1; -1 -5 -1]
All eigenvalues of A - bk are at z = 0, so we can write A - bk = Q [0 1 0; 0 0 1; 0 0 0] Q^-1 and, using the nilpotent property
[0 1 0; 0 0 1; 0 0 0]^k = 0 for k >= 3
we get (A - bk)^k = 0 for k >= 3. The zero-input response is
y_zi[k] = c (A - bk)^k x[0] = 0 for k >= 3
for any initial state x[0].
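The nilpotency that underlies the dead-beat behaviour is easy to confirm; a sketch assuming NumPy:

```python
import numpy as np

A = np.array([[1.0, 1, -2], [0, 1, 1], [0, 0, 1]])
b = np.array([[1.0], [0], [1]])
k = np.array([[1.0, 5, 2]])

Af = A - b @ k
Af3 = np.linalg.matrix_power(Af, 3)
print(Af3)   # the zero matrix: the zero-input response dies in 3 steps
```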
8.9 Consider the discrete-time state equation in Problem 8.8. Let u[k] = p r[k] - k x[k], where p is a feedforward gain and k is the gain in Problem 8.8. Find a gain p so that the output will track any step reference input. Show also that y[k] = r[k] for k >= 3; thus exact tracking is achieved in a finite number of sampling periods instead of only asymptotically. This is possible when all poles of the resulting system are placed at z = 0, and is called dead-beat design.

Solution:
g(z) = (2z^2 - 8z + 8)/(z^3 - 3z^2 + 3z - 1), g_f(z) = p (2z^2 - 8z + 8)/z^3
(A, b) is controllable and all poles of g_f(z) lie inside the unit circle, so if the reference input is a step function with magnitude a, the output y[k] approaches g_f(1)*a as k -> infinity. Thus in order for y[k] to track any step reference input we need g_f(1) = 1:
g_f(1) = 2p = 1, so p = 0.5
The resulting system can be described as
x[k+1] = [0 -4 -4; 0 1 1; -1 -5 -1] x[k] + [0.5; 0; 0.5] r[k] =: Abar x[k] + bbar r[k]
y[k] = [2 0 0] x[k]
Moreover
g_f(z) = (z^2 - 4z + 4)/z^3 = z^-1 - 4z^-2 + 4z^-3
so y[k] = r[k-1] - 4r[k-2] + 4r[k-3]; for a step r[k] ≡ a and k >= 3,
y[k] = a(1 - 4 + 4) = a = r[k]
As we know, Abar^k = 0 for k >= 3, so the zero-input part of the response also vanishes, and exact tracking is achieved for k >= 3.
8.10 consider the uncontrollable state equation x =
0
1 0 0
0
1
2 0 0
x + u is it possible
1
0 1 0
0 0 1
1
to find a gain k so that the equation with state feedback u = r k x has eigenvalues 2, -2 ,
-1 ,-1 ? is it possible to have eigenvalues 2, -2 ,2 ,-1 ? how about 2 ,-2, 2 ,-2 ? is the equation
stabilizable?, k u = r k x
-2. 2, -1, -1, ?-2, -2, -2,-1?-2,-2,-2,-2??
Solution: the uncontrollable state equation can be transformed into
0
x c 1
=
xc 0
0 4 0
1
0 0
0 xc 0
u
+
1 3
0 x c 0
0 0 1
0
A
0 xc bc
= c
+ u
0 Ac xc 0
8.11 Design a full-dimensional and a reduced-dimensional state estimator for the state equation in Problem 8.1. Select the eigenvalues -2 ± j2 for the full-dimensional estimator and -3 for the reduced-dimensional one.

Solution: For
A = [2 1; -1 1], b = [1; 2], c = [1 1]
let l = [l1; l2]. The characteristic polynomial of A - lc is
Delta(s) = (s - 2 + l1)(s - 1 + l2) + (1 + l2)(1 - l1)
= s^2 + (l1 + l2 - 3)s + (3 - 2l1 - l2)
Matching (s + 2)^2 + 4 = s^2 + 4s + 8 gives
l1 = -12, l2 = 19
Thus a full-dimensional state estimator with eigenvalues -2 ± j2 is
x^' = (A - lc)x^ + bu + ly = [14 13; -20 -18] x^ + [1; 2] u + [-12; 19] y
For the reduced-dimensional state estimator with eigenvalue -3, solve TA - FT = Lc with F = -3 and L = 1, that is, T(A + 3I) = c:
T = c(A + 3I)^-1 = [1 1][5 1; -1 4]^-1 = [5/21 4/21]
The estimator is
z' = -3z + Tb u + L y = -3z + (13/21)u + y
and the state estimate is obtained from
x^ = [c; T]^-1 [y; z] = [1 1; 5/21 4/21]^-1 [y; z] = [-4y + 21z; 5y - 21z]
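The full-dimensional estimator gain can be checked by computing the eigenvalues of A - lc; a sketch assuming NumPy:

```python
import numpy as np

A = np.array([[2.0, 1], [-1, 1]])
c = np.array([[1.0, 1]])
l = np.array([[-12.0], [19]])

eigs = np.linalg.eigvals(A - l @ c)
print(eigs)   # -2 + 2j and -2 - 2j
```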
8.12 Consider the state equation in Problem 8.1. Compute the transfer function from r to y of the state feedback system. Compute the transfer function from r to y if the feedback gain is applied to the estimated state of the full-dimensional estimator designed in Problem 8.11. Compute the transfer function from r to y if the feedback gain is applied to the estimated state of the reduced-dimensional state estimator also designed in Problem 8.11. Are the three overall transfer functions the same?

Solution: Throughout, A = [2 1; -1 1], b = [1; 2], c = [1 1], and k = [4 1].

(1) Direct state feedback u = r - kx gives
A - bk = [-2 0; -9 -1]
g_ry(s) = c(sI - A + bk)^-1 b = [1 1] [s + 1, 0; -9, s + 2] [1; 2] / ((s + 1)(s + 2)) = (3s - 4)/((s + 1)(s + 2))

(2) With u = r - k x^ and the full-dimensional estimator of Problem 8.11, the composite system is
x' = A x - bk x^ + b r
x^' = lc x + (A - lc - bk) x^ + b r = [-12 -12; 19 19] x + [10 12; -28 -20] x^ + [1; 2] r
y = [1 1] x
Its transfer function from r to y is
g_ry(s) = (3s^3 + 8s^2 + 8s - 32)/(s^4 + 7s^3 + 22s^2 + 32s + 16) = ((3s - 4)(s^2 + 4s + 8))/((s + 1)(s + 2)(s^2 + 4s + 8)) = (3s - 4)/((s + 1)(s + 2))

(3) With the reduced-dimensional estimator, x^ = [-4y + 21z; 5y - 21z], so
k x^ = [4 1][-4y + 21z; 5y - 21z] = -11y + 63z
and u = r - k x^ = r + 11y - 63z. The composite system is
x' = (A + 11bc) x - 63 b z + b r = [13 12; 21 23] x + [-63; -126] z + [1; 2] r
z' = -3z + (13/21)u + y = [164/21 164/21] x - 42 z + (13/21) r
y = [1 1 0] [x; z]
Its transfer function from r to y is
g_ry(s) = (3s^2 + 5s - 12)/(s^3 + 6s^2 + 11s + 6) = ((3s - 4)(s + 3))/((s + 1)(s + 2)(s + 3)) = (3s - 4)/((s + 1)(s + 2))

These three overall transfer functions are the same, which verifies the result discussed in Section 8.5: the estimator modes are canceled in the transfer function from r to y.
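The cancellation of the estimator modes can be verified numerically by evaluating the direct-feedback and full-order-estimator transfer functions at a test point; a sketch assuming NumPy:

```python
import numpy as np

A = np.array([[2.0, 1], [-1, 1]]); b = np.array([[1.0], [2]])
cv = np.array([[1.0, 1]]); k = np.array([[4.0, 1]]); l = np.array([[-12.0], [19]])

def tf(Ao, bo, co, s):
    n = Ao.shape[0]
    return (co @ np.linalg.solve(s * np.eye(n) - Ao, bo))[0, 0]

# (1) direct state feedback
A1 = A - b @ k
# (2) feedback from the full-order estimate, composite state (x, xhat)
A2 = np.block([[A, -b @ k], [l @ cv, A - l @ cv - b @ k]])
b2 = np.vstack([b, b]); c2 = np.hstack([cv, np.zeros((1, 2))])

s = 1.0 + 0.5j
g1 = tf(A1, b, cv, s)
g2 = tf(A2, b2, c2, s)
g_ref = (3 * s - 4) / ((s + 1) * (s + 2))
print(g1, g2, g_ref)
```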
8.13 Let A and B be the given 4 x 4 and 4 x 2 constant matrices. Find two different constant matrices K such that the eigenvalues of A - BK are -4 ± 3j and -5 ± 4j.

Solution: Select
F = [-4 3 0 0; -3 -4 0 0; 0 0 -5 4; 0 0 -4 -5]
The eigenvalues of F are -4 ± 3j and -5 ± 4j.
(1) If K1bar = [1 0 0 0; 0 0 1 0], then (F, K1bar) is observable. Solving the Lyapunov equation A T1 - T1 F = B K1bar numerically gives a nonsingular T1, and
K1 = K1bar T1^-1
places the eigenvalues of A - BK1 at -4 ± 3j and -5 ± 4j.
(2) Choosing a different K2bar with (F, K2bar) observable and solving A T2 - T2 F = B K2bar gives T2 and a second gain
K2 = K2bar T2^-1 = [252.9824 -55.2145 0.0893 3.2509; -59.8815 7.4593 2.5853 185.5527]
Because K1 ≠ K2, the eigenvalue assignment for this multi-input system is not unique.