
Syllabus

The 'Information' section of the course syllabus contains a list of questions each student was asked to answer about
their background. The information provided would have been used to create a class e-mail list and to adjust the pace
of the course.

Instructor

- Professor Jonathan P. How


Feedback Control

- Introduction to the state-space approach of analysis and control synthesis.
- State-space representation of dynamic systems; controllability and observability.
- State-space realizations of transfer functions and canonical forms.
- Design of state-space controllers, including pole-placement and optimal control.
- Introduction to the Kalman filter.
- Limitations on performance of control systems.
- Introduction to the robustness of multivariable systems.






Homework

- Weekly problem sets will be handed out on Fridays (due the following week).
- There will be approximately two labs that will be graded as part of the homework.
- Midterm: between Lectures 27 and 28, in class.
- Final Exam: after Lecture 37.
- Course Grades: Homework 30%, Midterm 20%, Final 50%.


Textbooks

Required:
- Pierre Belanger. Control Engineering: A Modern Approach. Oxford University, 1995.

NUMBER OF LECTURES   TOPICS
5                    Review of Classical Synthesis Techniques
8                    State Space - Linear Systems
7                    Full State Feedback
3                    State Estimation
4                    Output Feedback
7                    Robustness
Page 1 of 3 MIT OpenCourseWare | Aeronautics and Astronautics | 16.31 Feedback Control Systems,...
8/12/2005 http://ocw.mit.edu/OcwWeb/Aeronautics-and-Astronautics/16-31Feedback-Control-Syste...


Goal

To teach the fundamentals of control design and analysis using state-space methods. This includes both the
practical and theoretical aspects of the topic. By the end of the course, you should be able to design
controllers using state-space methods and evaluate whether these controllers are robust.
- Review classical control design techniques.
- Explore new control design techniques using state-space tools.
- Investigate analysis of robustness.


Prerequisites

- Basic understanding of how to develop system equations of motion.
- Classical analysis and synthesis techniques (root locus, Bode). There will be a brief review during the first two weeks.
- The course assumes a working knowledge of MATLAB.



Policies

- You are encouraged to discuss the homework and problem sets. However, your submitted work must be your own.
- Homework will be handed out on Fridays and is due back the following Friday at 5 PM. Late homework will not be accepted unless prior approval is obtained from Professor How. The grade on all late homework will be reduced 25% per day. No homework will be accepted for credit after the solutions have been handed out.
- There will hopefully be two labs later in the semester that will be blended in with the homework. This will be group work, depending on the class size.
- For feedback and/or questions, e-mail is the best way to contact me.


Supplemental Textbooks

There are many others, but these are the ones on my shelf. All pretty much have the same content, but each
author focuses on different aspects.


Modeling and Control
- Palm. Modeling, Analysis, and Control of Dynamic Systems. Wiley.



Basic Control
- Franklin and Powell. Feedback Control of Dynamic Systems. Addison-Wesley.
- Van de Vegte. Feedback Control Systems. Prentice Hall.
- Di Stefano. Feedback Control Systems. Schaum's Outline.
- Luenberger. Introduction to Dynamic Systems. Wiley.
- Ogata. Modern Control Engineering. Prentice Hall.



Linear Systems
- Chen. Linear Systems Theory and Design. Prentice Hall.
- Aplevich. The Essentials of Linear State-Space Systems. Wiley.



State Space Control
- Brogan. Modern Control Theory. Quantum Books.
- Burl. Linear Optimal Control. Prentice Hall.
- Kwakernaak and Sivan. Linear Optimal Control Systems. Wiley Interscience.

This course calendar incorporates the lecture schedule and assignment schedule for the semester. Some lecture topics
were taught over several class sessions.


LEC # TOPICS HOMEWORK OUT HOMEWORK IN
1 Introduction HW1
2 Introduction
3 Root Locus Analysis
4 Root Locus Synthesis
5 Bode-Very Brief Discussion HW2 HW1
6 State Space (SS) Introduction
7 State Space (SS) to Transfer Function (TF) HW3 HW2
8 Transfer Function (TF) to State Space (SS)
9 Time Domain
10 Observability HW4 HW3
11 Controllability
12 Pole/Zero (P/Z) Cancellation
13 Observability, Residues HW5 HW4
14 Pole Placement
15 Pole Placement HW6 HW5
16 Performance
17 Performance
18 Pole Locations HW7 HW6
19 Pole Locations
20 Pole Locations
21 State Estimators HW8 HW7
22 State Estimators
23 State Estimators
24 Dynamic Output Feedback HW8
25 Dynamic Output Feedback
26 Dynamic Output Feedback
27 Sensitivity of LQG HW9
Quiz
28 Sensitivity of LQG HW9
29 Bounded Gain Theorem HW10
30 Error Models
31 MIMO Systems
32 MIMO Systems HW10
33 MIMO Systems
34 LQR Optimal
35 LQR Optimal
36 Reference Cmds II
37 LDOC-Review
Final Exam
Lecture #1
16.31 Feedback Control
Copyright 2001 by Jonathan How.

Fall 2001 16.31 1-1
Introduction
[Block diagram: standard negative-feedback loop with controller K(s) and plant G(s); signals r(t), e(t), u(t), d(t), y(t).]

Goal: Design a controller K(s) so that the system has some desired characteristics. Typical objectives:
- Stabilize the system (Stabilization)
- Regulate the system about some design point (Regulation)
- Follow a given class of command signals (Tracking)
- Reduce the response to disturbances (Disturbance Rejection)

Typically we think of closed-loop control, so we analyze the closed-loop dynamics.
- Open-loop control is also possible (called feedforward), but it is more prone to modeling errors, since the inputs are not changed as a result of measured error.

Note that a typical control system includes the sensors, actuators, and the control law.
- The sensors and actuators need not always be physical devices (e.g., economic systems).
- A good selection of the sensor and actuator can greatly simplify the control design process.

The course concentrates on the design of the control law given the rest of the system (although we will need to model the system).
Why Control?

Easy question to answer for aerospace, because many vehicles (spacecraft, aircraft, rockets) and aerospace processes (propulsion) need to be controlled just to function.
- Example: the F-117 does not even fly without computer control, and the X-29 is unstable.

[Photo: OPERATION IRAQI FREEDOM -- An F-117 from the 8th Expeditionary Fighter Squadron out of Holloman Air Force Base, N.M., flies over the Persian Gulf on April 14, 2003. The 8th EFS has begun returning to Holloman A.F.B. after having been deployed to the Middle East in support of Operation Iraqi Freedom. (U.S. Air Force photo by Staff Sgt. Derrick C. Goode). http://www.af.mil/photos.html]
Feedback Control Approach

- Establish control objectives
  - Qualitative: "don't use too much fuel"
  - Quantitative: settling time of the step response < 3 sec
  - Typically requires that you understand the process (expected commands and disturbances) and the overall goals (bandwidths).
  - Often requires a strong understanding of the physical dynamics of the system so that you do not fight them in inappropriate (i.e., inefficient) ways.
- Select sensors & actuators
  - What aspects of the system are to be sensed and controlled?
  - Consider sensor noise and linearity as key discriminators.
  - Cost, reliability, size, . . .
- Obtain model
  - Analytic (FEM) or from measured data (system ID)
  - Evaluation model (reduce size/complexity) vs. design model
  - Accuracy? Error model?
- Design controller
  - Select technique (SISO, MIMO), (classical, state-space)
  - Choose parameters (ROT, optimization)
- Analyze closed-loop performance. Meet objectives?
  - Analysis, simulation, experimentation, . . .
  - Yes: done. No: iterate . . .
Example: Blimp Control

- Control objective
  - Stabilization
  - Red blimp tracks the motion of the green blimp
- Sensors
  - GPS for positioning
  - Compass for heading
  - Gyros/GPS for roll attitude
- Actuators: electric motors (propellers), which are very nonlinear.
- Dynamics
  - Rigid body with a strong apparent-mass effect.
  - Roll modes.
- Modeling
  - Analytic models with parameter identification to determine mass.
- Disturbances: wind
State-Space Approach

Basic questions that we will address about the state-space approach:
- What are state-space models?
- Why should we use them?
- How are they related to the transfer functions used in classical control design?
- How do we develop a state-space model?
- How do we design a controller using a state-space model?

Bottom line:

1. What: a representation of the dynamics of an n-th-order system using n first-order differential equations:
\[ m\ddot{q} + c\dot{q} + kq = u \]
\[ \frac{d}{dt}\begin{bmatrix} q \\ \dot q \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -k/m & -c/m \end{bmatrix}\begin{bmatrix} q \\ \dot q \end{bmatrix} + \begin{bmatrix} 0 \\ 1/m \end{bmatrix}u \quad\Rightarrow\quad \dot x = Ax + Bu \]

2. Why:
- State-variable form is a convenient way to work with complex dynamics. The matrix format is easy to use on computers.
- Transfer functions only deal with input/output behavior, but the state-space form provides easy access to the internal features/response of the system.
- Allows us to explore new analysis and synthesis tools.
- Great for multiple-input multiple-output (MIMO) systems, which are very hard to work with using transfer functions.

3. How: There are a variety of ways to develop these state-space models. We will explore this process in detail.
- Linear systems theory

4. Control design: split into 3 main parts
- Full-state feedback: fictitious, since it requires more information than is typically (ever?) available.
- Observer/estimator design: the process of estimating the system state from the measurements that are available.
- Dynamic output feedback: combines these two parts, with provable guarantees on stability (and performance).
- Fortunately there are very simple numerical tools available to perform each of these steps.
- Removes much of the "art" and/or "magic" required in classical control design; the design process becomes more systematic.

Word of caution: linear systems theory involves extensive use of linear algebra.
- We will not focus on the theorems/proofs in class; details will be handed out as necessary, but these are in the textbooks.
- We will focus on using the linear algebra to understand the behavior of the system dynamics so that we can modify them using control. Linear algebra in action!
- Even so, this will require more algebra than most math courses that you have taken . . . .
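The "matrix format is easy to use on computers" claim is easy to demonstrate: once the dynamics are in the form x' = Ax + Bu, any numerical integrator applies regardless of the physical system. A minimal sketch (Python in place of the course's MATLAB; the mass/damping/stiffness numbers are arbitrary, not from the notes) using forward Euler on the mass-spring example:

```python
# Forward-Euler integration of x' = Ax + Bu for m*q'' + c*q' + k*q = u.
# Illustrative sketch only; parameter values are made up for the demo.
m, c, k = 1.0, 0.5, 4.0
A = [[0.0, 1.0], [-k/m, -c/m]]   # dynamics matrix from the derivation above
B = [0.0, 1.0/m]                 # input matrix

def step_response(t_end, dt=1e-4, u=1.0):
    """Integrate the state [q, qdot] from rest under a constant input u."""
    x = [0.0, 0.0]
    t = 0.0
    while t < t_end:
        dx = [A[0][0]*x[0] + A[0][1]*x[1] + B[0]*u,
              A[1][0]*x[0] + A[1][1]*x[1] + B[1]*u]
        x = [x[0] + dt*dx[0], x[1] + dt*dx[1]]
        t += dt
    return x

q_final, _ = step_response(30.0)
# The steady state of a unit step is u/k = 0.25 (set q'' = q' = 0 above).
print(round(q_final, 3))
```

The same loop works unchanged for any (A, B) pair, which is exactly the point of the state-variable form.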
My reasons for the review of classical design:
- State-space techniques are just another way to design a controller.
- But it is essential that you understand the basics of the control design process.
  - Otherwise these are just a bunch of numerical tools.
- To truly understand the output of the state-space control design process, I think it is important that you be able to analyze it from a classical perspective.
  - Try to answer "why did it do that?"
  - Not always possible, but always a good goal.

Feedback: muddy cards and office hours.
- Help me to know whether my assumptions about your backgrounds are correct and whether there are any questions about the material.

MATLAB will be used extensively. If you have not used it before, then start practicing.
System Modeling

Investigate the model of a simple system to explore the basics of system dynamics.
- Provide insight on the connection between the system response and the pole locations.

Consider the simple two-mass-spring system (2MSS) and derive the system model:

1. Start with a free-body diagram.
2. Develop the 2 equations of motion:
\[ m_1\ddot{x}_1 = k(x_2 - x_1) \]
\[ m_2\ddot{x}_2 = k(x_1 - x_2) + F \]
3. How do we determine the relationships between x_1, x_2, and F?
- Numerical integration: good for simulation, but not analysis.
- Use the Laplace transform to get transfer functions:
  - Fast/easy/lots of tables.
  - Provides lots of information (poles and zeros).
Laplace transform:
\[ \mathcal{L}\{f(t)\} \triangleq \int_0^\infty f(t)\,e^{-st}\,dt \]

Key point: if \(\mathcal{L}\{x(t)\} = X(s)\), then \(\mathcal{L}\{\dot x(t)\} = sX(s)\), assuming that the initial conditions are zero.

Apply to the model:
\[ \mathcal{L}\{m_1\ddot{x}_1 - k(x_2 - x_1)\} = (m_1 s^2 + k)X_1(s) - kX_2(s) = 0 \]
\[ \mathcal{L}\{m_2\ddot{x}_2 - k(x_1 - x_2) - F\} = (m_2 s^2 + k)X_2(s) - kX_1(s) - F(s) = 0 \]

\[ \begin{bmatrix} m_1 s^2 + k & -k \\ -k & m_2 s^2 + k \end{bmatrix} \begin{bmatrix} X_1(s) \\ X_2(s) \end{bmatrix} = \begin{bmatrix} 0 \\ F(s) \end{bmatrix} \]

Perform some algebra to get
\[ \frac{X_2(s)}{F(s)} = \frac{m_1 s^2 + k}{m_1 m_2 s^2\left(s^2 + k(1/m_1 + 1/m_2)\right)} \triangleq G_2(s) \]

- \(G_2(s)\) is the transfer function between the input F and the system response x_2.
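The algebra can be sanity-checked numerically: invert the 2x2 impedance matrix at a test value of s and compare X2/F against the closed-form G2(s). This is an illustrative sketch (Python in place of the course's MATLAB; the masses, spring rate, and test frequency are arbitrary):

```python
# Verify the 2MSS transfer-function algebra at one test point s.
m1, m2, k = 1.0, 2.0, 3.0     # arbitrary demo values
s = 0.7j                       # arbitrary test point on the jw-axis

# Matrix form: [[m1 s^2 + k, -k], [-k, m2 s^2 + k]] [X1; X2] = [0; F]
a, b = m1*s**2 + k, -k
c, d = -k, m2*s**2 + k
det = a*d - b*c
F = 1.0
X2 = a*F/det                   # Cramer's rule: second component of the solution

# Closed-form result from the algebra above
G2 = (m1*s**2 + k) / (m1*m2*s**2 * (s**2 + k*(1/m1 + 1/m2)))
print(abs(X2 - G2*F) < 1e-12)
```

Expanding the determinant, (m1 s^2 + k)(m2 s^2 + k) - k^2 = m1 m2 s^2 (s^2 + k(1/m1 + 1/m2)), which is exactly the denominator of G2(s), so the two expressions agree to machine precision.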
Given that \(F \to G_2(s) \to x_2\): if F(t) is known, how do we find \(x_2(t)\)?
1. Find \(G_2(s)\).
2. Let \(F(s) = \mathcal{L}\{F(t)\}\).
3. Set \(X_2(s) = G_2(s)F(s)\).
4. Compute \(x_2(t) = \mathcal{L}^{-1}\{X_2(s)\}\).

Step 4 involves an inverse Laplace transform, which requires an ugly contour integral that is hardly ever used:
\[ x_2(t) = \frac{1}{2\pi j}\int_{c-j\infty}^{c+j\infty} X_2(s)\,e^{st}\,ds \]
where c is a value selected to be to the right of all singularities of F(s) in the s-plane.
- Partial fraction expansion and inversion using tables is much easier for the problems that we will be dealing with.

Example with \(F(t) = 1(t) \Rightarrow F(s) = 1/s\):
\[ X_2(s) = \frac{m_1 s^2 + k}{m_1 m_2 s^3\left(s^2 + k(1/m_1 + 1/m_2)\right)} = \frac{c_1}{s} + \frac{c_2}{s^2} + \frac{c_3}{s^3} + \frac{c_4 s + c_5}{s^2 + k(1/m_1 + 1/m_2)} \]

- Solve for the coefficients \(c_i\).
- Then inverse transform each term to get \(x_2(t)\).
Note that there are 2 special entries in the tables:

1. \( \dfrac{1}{s+a} \leftrightarrow e^{-at} \), which corresponds to a pole at \(s + a = 0\), or \(s = -a\).

2. \( \dfrac{\omega_n^2}{s^2 + 2\zeta\omega_n s + \omega_n^2} \leftrightarrow \dfrac{\omega_n}{\sqrt{1-\zeta^2}}\, e^{-\zeta\omega_n t}\sin\!\left(\omega_n\sqrt{1-\zeta^2}\;t\right) \)
   - Corresponds to a damped sinusoidal response.
   - \(\zeta\) is the damping ratio.
   - \(\omega_n\) is the natural frequency.
   - \(\omega_d = \omega_n\sqrt{1-\zeta^2}\) is the damped frequency.

These results point out that there is a very strong connection between the pole locations and the time response of the system.
- But there are other factors that come into play, as we shall see.
For a second-order system, we can be more explicit and relate the main features of the step response (time domain) and the pole locations (frequency domain).

\[ G(s) = \frac{\omega_n^2}{s^2 + 2\zeta\omega_n s + \omega_n^2} \]
with u(t) a step, so that \(u(s) = 1/s\).

Then
\[ y(s) = G(s)u(s) = \frac{\omega_n^2}{s\left(s^2 + 2\zeta\omega_n s + \omega_n^2\right)} \]
which gives (with \(\sigma = \zeta\omega_n\))
\[ y(t) = 1 - e^{-\sigma t}\left[\cos(\omega_d t) + \frac{\sigma}{\omega_d}\sin(\omega_d t)\right] \]

Several key time-domain features:
- Rise time \(t_r\) (how long to get close to the final value?)
- Settling time \(t_s\) (how long for the transients to decay?)
- Peak overshoot \(M_p\), \(t_p\) (how far beyond the final value does the system respond, and when?)

Can analyze the system response to determine that:
1. \( t_r \approx 2.2/\omega_h \), where \( \omega_h = \omega_n\left(1 - 2\zeta^2 + \sqrt{2 - 4\zeta^2 + 4\zeta^4}\right)^{1/2} \); or can use \( t_r \approx 1.8/\omega_n \).
2. \( t_s(1\%) = 4.6/(\zeta\omega_n) \)
3. \( M_p = e^{-\pi\zeta/\sqrt{1-\zeta^2}} \) and \( t_p = \pi/\omega_d \)

These formulas relate the time response to the pole locations, so we can easily evaluate whether the closed-loop system will respond as desired.
- Use them to determine acceptable locations for the closed-loop poles.
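The overshoot and peak-time formulas can be checked directly against the closed-form step response y(t) = 1 - e^{-sigma t}[cos(wd t) + (sigma/wd) sin(wd t)]. A small sketch (Python in place of the course's MATLAB; zeta and wn are arbitrary demo values):

```python
import math

# Check Mp = exp(-pi*z/sqrt(1-z^2)) and tp = pi/wd against the closed form.
z, wn = 0.3, 2.0                      # arbitrary damping ratio and natural freq
sig = z*wn
wd = wn*math.sqrt(1 - z**2)

def y(t):
    """Closed-form unit-step response of the standard second-order system."""
    return 1 - math.exp(-sig*t)*(math.cos(wd*t) + (sig/wd)*math.sin(wd*t))

Mp = math.exp(-math.pi*z/math.sqrt(1 - z**2))   # predicted overshoot
tp = math.pi/wd                                  # predicted peak time

# Numerical peak from a fine time grid
grid = [i*1e-4 for i in range(int(10/1e-4))]
y_max = max(y(t) for t in grid)
print(abs(y_max - (1 + Mp)) < 1e-4)              # peak value is 1 + Mp at tp
```

Differentiating y(t) and setting the result to zero gives the first maximum at t = pi/wd, where y = 1 + Mp, which is what the grid search confirms.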
Examples:
- A maximum rise time sets a minimum \(\omega_n\).
- A maximum settling time sets a minimum \(\sigma = \zeta\omega_n\).
- A maximum overshoot sets a minimum \(\zeta\).

Usually we assume that the response of more complex systems (i.e., ones that have more than 2 poles) is dominated by the lowest-frequency pole pair.
- Then the response is approximately second order, but we must check this.

These give us a good idea of where we would like the closed-loop poles to be so that we can meet the design goals.

Feedback control is all about changing the location of the system poles from the open-loop locations to the closed-loop ones.
- This course is about a new way to do these control designs.

Please refer to the "Design Aids" section of: Franklin, Gene F., Powell, J. David, and Abbas Emami-Naeini. 1994. Feedback Control of Dynamic Systems, 3rd Ed. Addison-Wesley.
Slow dominant pole:

num=5;den=conv([1 1],[1 5]);step(num,den,6)

[Figure: step response amplitude vs. time (0-6 sec), rising slowly from 0 toward 1.]

Fast dominant pole:

num=5.5*[1 0.91];den=conv([1 1],[1 5]);step(num,den,6)

[Figure: step response amplitude vs. time (0-6 sec), rising much more quickly toward 1.]

Similar example, but with second-order dynamics combined with a simple real pole:

z=.15;wn=1;plist=[wn/2:1:10*wn];
nd=wn^2;dd=[1 2*z*wn wn^2];t=[0:.25:20]';
sys=tf(nd,dd);[y]=step(sys,t);
for p=plist;
  num=nd;den=conv([1/p 1],dd);
  sys=tf(num,den);[ytemp]=step(sys,t);
  y=[y ytemp];
end
plot(t,y(:,1),'d',t,y(:,2),'+',t,y(:,4),'+',t,y(:,8),'v');
ylabel('step response');xlabel('time (sec)')
legend('2nd',num2str(plist(1)),num2str(plist(3)),num2str(plist(7)))

[Figure: step responses vs. time (0-20 sec) for the pure second-order system and for added real poles at p = 0.5, 2.5, and 6.5.]
For values of p = 2.5 and 6.5, the response is very similar to that of the second-order system. The response with p = 0.5 is clearly no longer dominated by the second-order dynamics.
Example: G(s) = 1/s^2

Design Gc(s) to put the closed-loop poles at -1 +/- 2j.

z=roots([-20 49 -10]);z=max(z),k=25/(5-2*z),alpha=5*z/(5-2*z),
num=1;den=[1 0 0];
knum=k*[1 z];kden=[1 10*z];
rlocus(conv(num,knum),conv(den,kden));
hold;plot(-alpha+eps*j,'d');plot([-1+2*j,-1-2*j],'d');hold off
r=rlocus(conv(num,knum),conv(den,kden),1)'

z = 2.2253
k = 45.5062
alpha = 20.2531

These are the actual roots that I found from the locus using a gain of 1 (recall that the K gain is already in the compensator):

r =
  -20.2531
  -1.0000 - 2.0000i
  -1.0000 + 2.0000i

[Figure: root locus in the s-plane (real axis -20 to 5, imaginary axis -20 to 20), with the design poles at -1 +/- 2j and the point -alpha marked.]
MATLAB is a trademark of The MathWorks, Inc.
Topic #3
16.31 Feedback Control

Frequency response methods
- Analysis
- Synthesis
- Performance
- Stability

Copyright 2001 by Jonathan How.

Introduction

Root locus methods have:
- Advantages:
  - Good indicator of the transient response;
  - Explicitly shows the location of all closed-loop poles;
  - Trade-offs in the design are fairly clear.
- Disadvantages:
  - Requires a transfer function model (poles and zeros);
  - Difficult to infer all performance metrics;
  - Hard to determine the steady-state response to sinusoids.

Frequency response methods are a good complement to the root locus techniques:
- Can infer performance and stability from the same plot.
- Can use measured data rather than a transfer function model.
- The design process can be independent of the system order.
- Time delays are handled correctly.
- Graphical techniques (analysis and synthesis) are quite simple.
Frequency Response Function

Given a system with a transfer function G(s), we call \(G(j\omega)\), \(\omega \in [0, \infty)\), the frequency response function (FRF):
\[ G(j\omega) = |G(j\omega)|\;\angle\arg G(j\omega) \]

- The FRF can be used to find the steady-state response of a system to a sinusoidal input. If
  e(t) -> G(s) -> y(t)
  and \(e(t) = \sin 2t\), \(|G(2j)| = 0.3\), \(\arg G(2j) = -80^\circ\), then the steady-state output is
  \[ y(t) = 0.3\sin(2t - 80^\circ) \]
- The FRF clearly shows the magnitude (and phase) of the response of a system to sinusoidal input.

A variety of ways to display this:
1. Polar (Nyquist) plot: Re vs. Im of \(G(j\omega)\) in the complex plane.
   - Hard to visualize and not useful for synthesis, but gives definitive tests for stability and is the basis of the robustness analysis.
2. Nichols plot: \(|G(j\omega)|\) vs. \(\arg G(j\omega)\), which is very handy for systems with lightly damped poles.
3. Bode plot: \(\log|G(j\omega)|\) and \(\arg G(j\omega)\) vs. log frequency.
   - Simplest tool for visualization and synthesis.
   - Typically plot \(20\log|G|\), which is given the symbol dB.
We use logarithms since, for example,
\[ \log\left|\frac{(s+1)(s+2)}{(s+3)(s+4)}\right| = \log|s+1| + \log|s+2| - \log|s+3| - \log|s+4| \]
and each of these factors can be calculated separately and then added to get the total FRF.

Can also split the phase plot, since
\[ \arg\frac{(s+1)(s+2)}{(s+3)(s+4)} = \arg(s+1) + \arg(s+2) - \arg(s+3) - \arg(s+4) \]

The key point in sketching the plots is that good straight-line approximations exist and can be used to obtain a good prediction of the system response.
Example

Draw the Bode plot for
\[ G(s) = \frac{s+1}{s/10 + 1} \]
\[ |G(j\omega)| = \frac{|j\omega + 1|}{|j\omega/10 + 1|} \]
\[ \log|G(j\omega)| = \log\left[1 + (\omega/1)^2\right]^{1/2} - \log\left[1 + (\omega/10)^2\right]^{1/2} \]

Approximation:
\[ \log\left[1 + (\omega/\omega_i)^2\right]^{1/2} \approx \begin{cases} 0 & \omega \ll \omega_i \\ \log[\omega/\omega_i] & \omega \gg \omega_i \end{cases} \]

- Two straight-line approximations that intersect at \(\omega_i\).
- The error at \(\omega_i\) is obvious, but not huge, and the straight-line approximations are very easy to work with.

[Figure: |G| vs. frequency (log-log), exact curve and straight-line asymptotes.]
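The size of that breakpoint error is easy to quantify: at the corner frequency the factor contributes 20*log10(sqrt(2)), about 3 dB, above the straight-line value. A quick numerical check (Python in place of the course's MATLAB; the asymptote function below is my own sketch of the straight-line rules, not code from the notes):

```python
import math

def G(w):
    """Exact FRF of G(s) = (s+1)/(s/10+1) at s = jw."""
    s = 1j*w
    return (s + 1)/(s/10 + 1)

def asymptote_db(w):
    """Straight-line Bode magnitude: zero at w=1 (+1 slope), pole at w=10 (-1 slope)."""
    db = 0.0
    if w > 1:
        db += 20*math.log10(w/1.0)
    if w > 10:
        db -= 20*math.log10(w/10.0)
    return db

# At the zero's breakpoint w = 1 the sketch reads 0 dB; the exact value is
# 20*log10(sqrt(2)) from the zero, minus a small contribution from the pole.
err_db = 20*math.log10(abs(G(1.0))) - asymptote_db(1.0)
print(round(err_db, 2))   # close to 3 dB
```

So the sketch is off by roughly 3 dB at each breakpoint, which is why the straight-line plots are trusted everywhere except near the corners.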
To form the composite sketch:
- Arrange the representation of the transfer function so that the DC gain of each element is unity (except for parts that have poles or zeros at the origin); absorb the gain into the overall plant gain.
- Draw all component sketches.
- Start at low frequency (DC) with the component that has the lowest-frequency pole or zero (i.e., s = 0).
- Use this component to draw the sketch up to the frequency of the next pole/zero.
- Change the slope of the sketch at this point to account for the new dynamics: -1 for a pole, +1 for a zero, -2 for double poles, . . .
- Scale by the overall DC gain.

[Figure 1: magnitude plot of G(s) = 10(s + 1)/(s + 10), which is a lead.]
Since \(\arg G(j\omega) = \arg(1 + j\omega) - \arg(1 + j\omega/10)\), we can construct the phase plot for the complete system in a similar fashion.
- Know that \(\arg(1 + j\omega/\omega_i) = \tan^{-1}(\omega/\omega_i)\).

Can use straight-line approximations:
\[ \arg(1 + j\omega/\omega_i) \approx \begin{cases} 0 & \omega/\omega_i \le 0.1 \\ 90^\circ & \omega/\omega_i \ge 10 \\ 45^\circ & \omega/\omega_i = 1 \end{cases} \]

- Draw the components using breakpoints that are at \(\omega_i/10\) and \(10\,\omega_i\).

[Figure 2: phase plot for (s + 1).]
Then add the components up, starting from zero frequency and changing the slope at each breakpoint.

[Figure 3: phase plot for G(s) = 10(s + 1)/(s + 10), which is a lead.]
[Figure: Bode magnitude and phase for the G(s) below, comparing the actual curves against low- (LF), mid- (MF), and high-frequency (HF) straight-line asymptotes.]

Bode plot for
\[ G(s) = \frac{4.54\,s}{s^3 + 0.1818\,s^2 - 31.1818\,s - 4.4545} \]
The poles are at (-0.892, 0.886, -0.0227).
Non-minimum Phase Systems

Bode plots are particularly complicated when we have non-minimum phase systems.
- A system that has a pole/zero in the RHP is called non-minimum phase.
- The reason is clearer once you have studied the Bode gain-phase relationship.
- Key point: we can construct two (and many more) systems that have identical magnitude plots, but very different phase diagrams.

Consider
\[ G_1(s) = \frac{s+1}{s+2} \quad\text{and}\quad G_2(s) = \frac{s-1}{s+2} \]

[Figure 4: magnitude plots are identical, but the phase plots are dramatically different. The NMP system has a 180 deg phase loss over this frequency range.]
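The "identical magnitude, different phase" claim follows from |jw + 1| = |jw - 1| = sqrt(1 + w^2), and can be verified directly. A small sketch (Python in place of the course's MATLAB; the helper function is my own, not from the notes):

```python
import math

def mag_phase(zero_loc, w):
    """Magnitude and phase of (s - zero_loc)/(s + 2) at s = jw."""
    s = 1j*w
    G = (s - zero_loc)/(s + 2)
    return abs(G), math.atan2(G.imag, G.real)

for w in (0.1, 1.0, 10.0):
    m1, p1 = mag_phase(-1.0, w)   # G1: zero at s = -1 (minimum phase)
    m2, p2 = mag_phase(+1.0, w)   # G2: zero at s = +1 (non-minimum phase)
    assert abs(m1 - m2) < 1e-12   # |jw + 1| = |jw - 1|, so magnitudes match
    assert p1 != p2               # but the phases differ markedly
print("magnitudes match; phases differ")
```

Any RHP zero behaves this way: it leaves the magnitude untouched (relative to its LHP mirror image) while adding phase lag, which is exactly what makes NMP systems hard to control.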
Topic #7
16.31 Feedback Control

State-Space Systems
- What are state-space models?
- Why should we use them?
- How are they related to the transfer functions used in classical control design, and how do we develop a state-space model?
- What are the basic properties of a state-space model, and how do we analyze these?

Copyright 2001 by Jonathan How.
Introduction

State-space model: a representation of the dynamics of an N-th-order system as a first-order differential equation in an N-vector, which is called the state.
- Convert the N-th-order differential equation that governs the dynamics into N first-order differential equations.

Classic example: second-order mass-spring system
\[ m\ddot p + c\dot p + kp = F \]
- Let \(x_1 = p\); then \(x_2 = \dot p = \dot x_1\), and
\[ \dot x_2 = \ddot p = (F - c\dot p - kp)/m = (F - cx_2 - kx_1)/m \]
\[ \frac{d}{dt}\begin{bmatrix} p \\ \dot p \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -k/m & -c/m \end{bmatrix}\begin{bmatrix} p \\ \dot p \end{bmatrix} + \begin{bmatrix} 0 \\ 1/m \end{bmatrix}u \]
- Let \(u = F\) and introduce the state
\[ x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} p \\ \dot p \end{bmatrix} \quad\Rightarrow\quad \dot x = Ax + Bu \]
- If the measured output of the system is the position, then we have that
\[ y = p = \begin{bmatrix} 1 & 0 \end{bmatrix}\begin{bmatrix} p \\ \dot p \end{bmatrix} = \begin{bmatrix} 1 & 0 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = Cx \]
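The state-space model and the classical transfer function describe the same input/output behavior: evaluating C(sI - A)^{-1}B should reproduce P(s) = 1/(m s^2 + c s + k). A numerical sketch of that check (Python in place of the course's MATLAB; parameter values and the test point s are arbitrary):

```python
# Check that C(sI - A)^{-1} B equals 1/(m s^2 + c s + k) at a test point.
# Illustrative sketch; the 2x2 complex inverse is done by hand.
m, c, k = 2.0, 0.4, 5.0
B = [0.0, 1.0/m]
C = [1.0, 0.0]

def tf_from_ss(s):
    # (sI - A) = [[s, -1], [k/m, s + c/m]] for A = [[0, 1], [-k/m, -c/m]]
    a11, a12 = s, -1.0
    a21, a22 = k/m, s + c/m
    det = a11*a22 - a12*a21
    # v = (sI - A)^{-1} B via the 2x2 adjugate formula
    v0 = (a22*B[0] - a12*B[1]) / det
    v1 = (-a21*B[0] + a11*B[1]) / det
    return C[0]*v0 + C[1]*v1

s = 0.3 + 1.1j                    # arbitrary complex test point
lhs = tf_from_ss(s)
rhs = 1.0/(m*s**2 + c*s + k)
print(abs(lhs - rhs) < 1e-12)
```

Symbolically, det(sI - A) = s^2 + (c/m)s + k/m and the (1,1) entry of the product is (1/m)/det, i.e. exactly 1/(m s^2 + c s + k), so the agreement is not a numerical coincidence.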
The most general continuous-time linear dynamical system has the form
\[ \dot x(t) = A(t)x(t) + B(t)u(t) \]
\[ y(t) = C(t)x(t) + D(t)u(t) \]
where:
- \(t \in \mathbb{R}\) denotes time
- \(x(t) \in \mathbb{R}^n\) is the state (vector)
- \(u(t) \in \mathbb{R}^m\) is the input or control
- \(y(t) \in \mathbb{R}^p\) is the output
- \(A(t) \in \mathbb{R}^{n\times n}\) is the dynamics matrix
- \(B(t) \in \mathbb{R}^{n\times m}\) is the input matrix
- \(C(t) \in \mathbb{R}^{p\times n}\) is the output or sensor matrix
- \(D(t) \in \mathbb{R}^{p\times m}\) is the feedthrough matrix

Note that the plant dynamics can be time-varying. Also note that this is a MIMO system.

We will typically deal with the time-invariant case: Linear Time-Invariant (LTI) state dynamics
\[ \dot x(t) = Ax(t) + Bu(t) \]
\[ y(t) = Cx(t) + Du(t) \]
so that now A, B, C, D are constant and do not depend on t.
Basic Definitions

Linearity: what is a linear dynamical system? A system G is linear with respect to its inputs and outputs

  u(t) -> G(s) -> y(t)

if superposition holds:
\[ G(\alpha_1 u_1 + \alpha_2 u_2) = \alpha_1 G u_1 + \alpha_2 G u_2 \]
- So if \(y_1\) is the response of G to \(u_1\) (\(y_1 = Gu_1\)), and \(y_2\) is the response of G to \(u_2\) (\(y_2 = Gu_2\)), then the response to \(\alpha_1 u_1 + \alpha_2 u_2\) is \(\alpha_1 y_1 + \alpha_2 y_2\).

A system is said to be time-invariant if the relationship between the input and output is independent of time. So if the response to u(t) is y(t), then the response to \(u(t - t_0)\) is \(y(t - t_0)\).

x(t) is called the state of the system at t because:
- Future output depends only on the current state and future input.
- Future output depends on past input only through the current state.
- The state summarizes the effect of past inputs on future output, like the "memory" of the system.

Example: rechargeable flashlight. The state is the current state of charge of the battery. If you know that state, then you do not need to know how that level of charge was achieved (assuming a perfect battery) to predict the future performance of the flashlight.
Creating Linear State-Space Models

- Most easily created from the N-th-order differential equations that describe the dynamics.
  - This was the case done before.
  - The only issue is which set of states to use; there are many choices.
- Can be developed from a transfer function model as well.
  - Much more on this later.
- The problem is that we have restricted ourselves here to linear state-space models, and almost all systems are nonlinear in real life.
  - We can, however, develop linear models from nonlinear system dynamics.
Linearization

Often we have a nonlinear set of dynamics given by
\[ \dot x = f(x, u) \]
where x is once again the state vector, u is the vector of inputs, and \(f(\cdot,\cdot)\) is a nonlinear vector function that describes the dynamics.

Example: simple spring. With a mass at the end of a linear spring (rate k) we have the dynamics
\[ m\ddot x = -kx \]
but with a leaf spring, as is used on car suspensions, we have a nonlinear spring: the more it deflects, the stiffer it gets. A good model now is
\[ m\ddot x = -(k_1 x + k_2 x^3) \]
which is a cubic spring.
- The restoring force depends on the deflection x in a nonlinear way.

[Figure 1: position X and velocity V vs. time for the linear (k) and nonlinear (k1 = 0, k2 = k) springs (code at the end).]
Typically we assume that the system is operating about some nominal state solution \(x_0(t)\) (possibly requiring a nominal input \(u_0(t)\)).
- Then write the actual state as \(x(t) = x_0(t) + \delta x(t)\) and the actual inputs as \(u(t) = u_0(t) + \delta u(t)\).
- The \(\delta\) is included to denote the fact that we expect the variations about the nominal to be small.

Can then develop the linearized equations by using the Taylor series expansion of \(f(\cdot,\cdot)\) about \(x_0(t)\) and \(u_0(t)\).

Recall the vector equation \(\dot x = f(x, u)\), each equation of which
\[ \dot x_i = f_i(x, u) \]
can be expanded as
\[ \frac{d}{dt}(x_{0i} + \delta x_i) = f_i(x_0 + \delta x,\; u_0 + \delta u) \approx f_i(x_0, u_0) + \left.\frac{\partial f_i}{\partial x}\right|_0 \delta x + \left.\frac{\partial f_i}{\partial u}\right|_0 \delta u \]
where
\[ \frac{\partial f_i}{\partial x} = \left[\; \frac{\partial f_i}{\partial x_1} \;\cdots\; \frac{\partial f_i}{\partial x_n} \;\right] \]
and \(|_0\) means that we should evaluate the function at the nominal values of \(x_0\) and \(u_0\).

The meaning of "small" deviations is now clear: the variations in \(\delta x\) and \(\delta u\) must be small enough that we can ignore the higher-order terms in the Taylor expansion of f(x, u).
Since \(\frac{d}{dt}x_{0i} = f_i(x_0, u_0)\), we thus have that
\[ \frac{d}{dt}(\delta x_i) \approx \left.\frac{\partial f_i}{\partial x}\right|_0 \delta x + \left.\frac{\partial f_i}{\partial u}\right|_0 \delta u \]

Combining all n state equations gives (note that we also replace \(\approx\) with \(=\))
\[ \frac{d}{dt}\delta x = \begin{bmatrix} \left.\frac{\partial f_1}{\partial x}\right|_0 \\ \left.\frac{\partial f_2}{\partial x}\right|_0 \\ \vdots \\ \left.\frac{\partial f_n}{\partial x}\right|_0 \end{bmatrix}\delta x + \begin{bmatrix} \left.\frac{\partial f_1}{\partial u}\right|_0 \\ \left.\frac{\partial f_2}{\partial u}\right|_0 \\ \vdots \\ \left.\frac{\partial f_n}{\partial u}\right|_0 \end{bmatrix}\delta u = A(t)\,\delta x + B(t)\,\delta u \]
where
\[ A(t) \triangleq \left.\begin{bmatrix} \frac{\partial f_1}{\partial x_1} & \frac{\partial f_1}{\partial x_2} & \cdots & \frac{\partial f_1}{\partial x_n} \\ \frac{\partial f_2}{\partial x_1} & \frac{\partial f_2}{\partial x_2} & \cdots & \frac{\partial f_2}{\partial x_n} \\ \vdots & & & \vdots \\ \frac{\partial f_n}{\partial x_1} & \frac{\partial f_n}{\partial x_2} & \cdots & \frac{\partial f_n}{\partial x_n} \end{bmatrix}\right|_0 \]
and
\[ B(t) \triangleq \left.\begin{bmatrix} \frac{\partial f_1}{\partial u_1} & \frac{\partial f_1}{\partial u_2} & \cdots & \frac{\partial f_1}{\partial u_m} \\ \frac{\partial f_2}{\partial u_1} & \frac{\partial f_2}{\partial u_2} & \cdots & \frac{\partial f_2}{\partial u_m} \\ \vdots & & & \vdots \\ \frac{\partial f_n}{\partial u_1} & \frac{\partial f_n}{\partial u_2} & \cdots & \frac{\partial f_n}{\partial u_m} \end{bmatrix}\right|_0 \]
Fall 2001 16.31 78
Similarly, if the nonlinear measurement equation is y = g(x, u), can
show that, if y(t) = y0 + δy, then
	δy = [ ∂g1/∂x |0 ; ∂g2/∂x |0 ; ... ; ∂gp/∂x |0 ] δx + [ ∂g1/∂u |0 ; ∂g2/∂u |0 ; ... ; ∂gp/∂u |0 ] δu
	   = C(t) δx + D(t) δu
Typically think of these nominal conditions x0, u0 as "set points"
or "operating points" for the nonlinear system. The equations
	d/dt δx = A(t) δx + B(t) δu
	     δy = C(t) δx + D(t) δu
then give us a linearized model of the system dynamic behavior
about these operating/set points.
Note that if x0, u0 are constants, then the partial derivatives in the
expressions for A–D are all constant → LTI linearized model.
One particularly important set of operating points are the equilibrium
points of the system. Defined as the state & control input
combinations for which
	ẋ = f(x0, u0) ≡ 0
provides n algebraic equations to find the equilibrium points.
Fall 2001 16.31 79
Example
Consider the nonlinear spring with (set m = 1)
	ÿ = -(k1 y + k2 y^3)
which gives us the nonlinear model (x1 = y and x2 = ẏ)
	d/dt [ y ; ẏ ] = [ ẏ ; -(k1 y + k2 y^3) ]   →   ẋ = f(x)
Find the equilibrium points and then make a state-space model
For the equilibrium points, we must solve
	f(x) = [ ẏ ; -(k1 y + k2 y^3) ] = 0
which gives
	ẏ0 = 0   and   k1 y0 + k2 (y0)^3 = 0
Second condition corresponds to y0 = 0 or y0 = ±√(-k1/k2),
which is only real if k1 and k2 have opposite signs.
For the state-space model,
	A = [ ∂f1/∂x1  ∂f1/∂x2 ; ∂f2/∂x1  ∂f2/∂x2 ] |0
	  = [ 0  1 ; -(k1 + 3 k2 y^2)  0 ] |0
	  = [ 0  1 ; -(k1 + 3 k2 (y0)^2)  0 ]
and the linearized model is δẋ = A δx
Fall 2001 16.31 710
For the equilibrium point y = 0, ẏ = 0
	A0 = [ 0  1 ; -k1  0 ]
which are the standard dynamics of a system with just a linear
spring of stiffness k1
	Stable motion about y = 0 if k1 > 0
Assume that k1 = -1, k2 = 1/2; then we should get an equilibrium
point at ẏ = 0, y = ±√2, and since k1 + k2 (y0)^2 = 0
	A1 = [ 0  1 ; 2k1  0 ] = [ 0  1 ; -2  0 ]
which are the dynamics of a stable oscillator about the equilibrium point
	Will explore this in detail later
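As a sanity check on this linearization, here is a minimal Python sketch (my substitution; the notes use MATLAB) that builds the Jacobian of the nonlinear spring dynamics by central differences at the equilibrium y0 = √2 and compares it with the analytic A1 = [0 1; -2 0]:

```python
import numpy as np

k1, k2 = -1.0, 0.5

def f(x):
    # nonlinear spring state dynamics, x = [y, ydot]
    return np.array([x[1], -(k1 * x[0] + k2 * x[0] ** 3)])

x0 = np.array([np.sqrt(2.0), 0.0])   # equilibrium point: f(x0) = 0

# central-difference Jacobian A = df/dx evaluated at the equilibrium
eps = 1e-6
A = np.zeros((2, 2))
for j in range(2):
    dx = np.zeros(2)
    dx[j] = eps
    A[:, j] = (f(x0 + dx) - f(x0 - dx)) / (2 * eps)

print(np.round(A, 6))   # close to [[0, 1], [-2, 0]]
```

The same finite-difference recipe applies to any f(x, u); here the analytic answer is available, so it doubles as a check on the algebra above.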
Figure 2: Nonlinear response (k1 = -1, k2 = 0.5): y and dy/dt vs. time, and the phase plot of dy/dt vs. y. The figure on the right shows the oscillation about the equilibrium point.
Fall 2001 16.31 711
Linearized Nonlinear Dynamics
Usually in practice we drop the δ's as they are rather cumbersome,
and (abusing notation) we write the state equations as:
	ẋ(t) = A(t) x(t) + B(t) u(t)
	y(t) = C(t) x(t) + D(t) u(t)
which is of the same form as the previous linear models
Fall 2001 16.31 712
Example: Aircraft Dynamics
Assumptions:
1. Earth is an inertial reference frame
2. A/C is a rigid body
3. Body frame B fixed to the aircraft (i, j, k)
The basic dynamics are:
	F = m d(vc)/dt |_I   and   T = d(H)/dt |_I
Applying the Transport Theorem:
	(1/m) F = d(vc)/dt |_B + ω_BI × vc
	      T = d(H)/dt |_B + ω_BI × H
Instantaneous mapping of vc and ω_BI into the body frame is given by
	ω_BI = P i + Q j + R k        vc = U i + V j + W k
	(ω_BI)_B = [ P ; Q ; R ]      (vc)_B = [ U ; V ; W ]
Fall 2001 16.31 713
The overall equations of motion are then:
	(1/m) F = d(vc)/dt |_B + ω_BI × vc
	⇒ (1/m) [ Fx ; Fy ; Fz ] = [ U̇ ; V̇ ; Ẇ ] + [ 0  -R  Q ; R  0  -P ; -Q  P  0 ] [ U ; V ; W ]
	                         = [ U̇ + QW - RV ; V̇ + RU - PW ; Ẇ + PV - QU ]
These are clearly nonlinear: need to linearize about the equilibrium states.
To find suitable equilibrium conditions, must solve
	(1/m) [ Fx ; Fy ; Fz ] - [ QW - RV ; RU - PW ; PV - QU ] = 0
Assume steady-state flight conditions with Ṗ = Q̇ = Ṙ = 0
Fall 2001 16.31 714
Define the trim angular rates, velocities, and forces
	(ω_BI)ᵒ_B = [ Pᵒ ; Qᵒ ; Rᵒ ]    (vc)ᵒ_B = [ Uo ; 0 ; 0 ]    Fᵒ_B = [ Fᵒx ; Fᵒy ; Fᵒz ]
that are associated with the flight condition (they define the type
of equilibrium motion that we analyze about).
Note:
	W0 = 0 since we are using the stability axes, and
	V0 = 0 because we are assuming symmetric flight
Can now linearize the equations about this flight mode. To proceed, define
	Velocities:     U0,      U = U0 + u  ⇒  U̇ = u̇
	                W0 = 0,  W = w       ⇒  Ẇ = ẇ
	                V0 = 0,  V = v       ⇒  V̇ = v̇
	Angular rates:  P0 = 0,  P = p       ⇒  Ṗ = ṗ
	                Q0 = 0,  Q = q       ⇒  Q̇ = q̇
	                R0 = 0,  R = r       ⇒  Ṙ = ṙ
	Angles:         Θ0,      Θ = Θ0 + θ  ⇒  Θ̇ = θ̇
	                Φ0 = 0,  Φ = φ       ⇒  Φ̇ = φ̇
	                Ψ0 = 0,  Ψ = ψ       ⇒  Ψ̇ = ψ̇
Fall 2001 16.31 715
Linearization for symmetric flight:
	U = U0 + u,  V0 = W0 = 0,  P0 = Q0 = R0 = 0.
Note that the forces and moments are also perturbed.
	(1/m)(Fᵒx + ΔFx) = U̇ + QW - RV ≈ u̇ + qw - rv ≈ u̇
	(1/m)(Fᵒy + ΔFy) = V̇ + RU - PW ≈ v̇ + r(U0 + u) - pw ≈ v̇ + r U0
	(1/m)(Fᵒz + ΔFz) = Ẇ + PV - QU ≈ ẇ + pv - q(U0 + u) ≈ ẇ - q U0
	⇒ (1/m) [ ΔFx ; ΔFy ; ΔFz ] ≈ [ u̇ ; v̇ + r U0 ; ẇ - q U0 ]
Which gives the linearized dynamics for the aircraft motion about
the steady-state flight condition.
Need to analyze the perturbations to the forces and moments to
fully understand the linearized dynamics (take 16.61)
Can do the same thing for the rotational dynamics.
Fall 2001 16.31 716
% save this entire code as plant.m
%
function [xdot] = plant(t,x)
global n
xdot(1) = x(2);
xdot(2) = -3*(x(1))^n;
xdot = xdot';   % return a column vector for the ODE solver
return
% then use this part of the code in Matlab
% to call plant.m
global n
n=3; % nonlinear
x0 = [-1 2]; % initial condition
[T,x]=ode23('plant', [0 12], x0); % simulate NL equations for 12 sec
n=1; % linear
[T1,x1]=ode23('plant', [0 12], x0);
subplot(211)
plot(T,x(:,1),T1,x1(:,1),'--');
legend('Nonlinear','Linear')
ylabel('X')
xlabel('Time')
subplot(212)
plot(T,x(:,2),T1,x1(:,2),'--');
legend('Nonlinear','Linear')
ylabel('V')
xlabel('Time')
MATLAB is a trademark of The MathWorks, Inc.
Topic #8
16.31 Feedback Control
State-Space Systems
What are state-space models?
Why should we use them?
How are they related to the transfer functions used in
classical control design and how do we develop a state-
space model?
What are the basic properties of a state-space model, and how do
we analyze these?
Copyright 2001 by Jonathan How.
Fall 2001 16.31 81
TFs to State-Space Models
The goal is to develop a state-space model given a transfer function
for a system G(s).
There are many, many ways to do this.
But there are three primary cases to consider:
1. "Simple" numerator
	y/u = G(s) = 1 / (s^3 + a1 s^2 + a2 s + a3)
2. Numerator order less than denominator order
	y/u = G(s) = (b1 s^2 + b2 s + b3) / (s^3 + a1 s^2 + a2 s + a3) = N(s)/D(s)
3. Numerator equal to denominator order
	y/u = G(s) = (b0 s^3 + b1 s^2 + b2 s + b3) / (s^3 + a1 s^2 + a2 s + a3)
These 3 cover all cases of interest
Fall 2001 16.31 82
Consider case 1 (specific example of third order, but the extension
to n-th order follows easily)
	y/u = G(s) = 1 / (s^3 + a1 s^2 + a2 s + a3)
can be rewritten as the differential equation
	y''' + a1 ÿ + a2 ẏ + a3 y = u
choose the output y and its derivatives as the state vector
	x = [ ÿ ; ẏ ; y ]
then the state equations are
	ẋ = [ y''' ; ÿ ; ẏ ] = [ -a1  -a2  -a3 ; 1  0  0 ; 0  1  0 ] [ ÿ ; ẏ ; y ] + [ 1 ; 0 ; 0 ] u
	y = [ 0  0  1 ] [ ÿ ; ẏ ; y ] + [0] u
This is typically called the controller form for reasons that will
become obvious later on.
	There are four classic (called "canonical") forms: observer, controller,
	controllability, and observability. They are all useful in
	their own way.
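The controller form above can be checked numerically by converting the state-space model back to a transfer function; this Python/scipy sketch (Python substituted for the course's MATLAB, and the coefficient values a1 = 6, a2 = 11, a3 = 6 are arbitrary choices for the check) should recover G(s) = 1/(s^3 + a1 s^2 + a2 s + a3):

```python
import numpy as np
from scipy import signal

a1, a2, a3 = 6.0, 11.0, 6.0   # assumed example coefficients

# controller form for G(s) = 1/(s^3 + a1 s^2 + a2 s + a3)
A = np.array([[-a1, -a2, -a3],
              [1.0,  0.0,  0.0],
              [0.0,  1.0,  0.0]])
B = np.array([[1.0], [0.0], [0.0]])
C = np.array([[0.0, 0.0, 1.0]])
D = np.array([[0.0]])

num, den = signal.ss2tf(A, B, C, D)
print(np.round(num, 8), den)   # num -> [0 0 0 1], den -> [1 6 11 6]
```

Note how the denominator coefficients appear directly in the top row of A, which is what makes this realization convenient.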
Fall 2001 16.31 83
Consider case 2
	y/u = G(s) = (b1 s^2 + b2 s + b3) / (s^3 + a1 s^2 + a2 s + a3) = N(s)/D(s)
Let
	y/u = (y/v) · (v/u)
where y/v = N(s) and v/u = 1/D(s)
Then the representation of v/u = 1/D(s) is the same as case 1
	v''' + a1 v̈ + a2 v̇ + a3 v = u
use the state vector
	x = [ v̈ ; v̇ ; v ]
to get
	ẋ = A2 x + B2 u
where
	A2 = [ -a1  -a2  -a3 ; 1  0  0 ; 0  1  0 ]   and   B2 = [ 1 ; 0 ; 0 ]
Then consider y/v = N(s), which implies that
	y = b1 v̈ + b2 v̇ + b3 v
	  = [ b1  b2  b3 ] [ v̈ ; v̇ ; v ]
	  = C2 x + [0] u
Fall 2001 16.31 84
Consider case 3 with
	y/u = G(s) = (b0 s^3 + b1 s^2 + b2 s + b3) / (s^3 + a1 s^2 + a2 s + a3)
	           = (β1 s^2 + β2 s + β3) / (s^3 + a1 s^2 + a2 s + a3) + D
	           = G1(s) + D
where
	D (s^3 + a1 s^2 + a2 s + a3) + (β1 s^2 + β2 s + β3) = b0 s^3 + b1 s^2 + b2 s + b3
so that, given the bi, we can easily find the βi
	D = b0
	β1 = b1 - D a1
	...
Given the βi, can find G1(s)
	Can make a state-space model for G1(s) as described in case 2
	Then we just add the feed-through term Du to the output equation
	from the model for G1(s)
Will see that there is a lot of freedom in making a state-space model
because we are free to pick the x as we want
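The bookkeeping for case 3 is just one step of polynomial division; a short Python sketch (my substitution for MATLAB, with arbitrary assumed coefficients) makes the D and βi extraction explicit:

```python
import numpy as np

# case 3: G(s) = (b0 s^3 + b1 s^2 + b2 s + b3)/(s^3 + a1 s^2 + a2 s + a3)
a = np.array([1.0, 6.0, 11.0, 6.0])   # denominator [1, a1, a2, a3] (assumed values)
b = np.array([2.0, 3.0, 5.0, 7.0])    # numerator  [b0, b1, b2, b3] (assumed values)

D = b[0] / a[0]            # feed-through term: D = b0
beta = (b - D * a)[1:]     # beta_i = b_i - D*a_i, the strictly proper remainder
print(D, beta)             # 2.0 [ -9. -17.  -5.]
```

The strictly proper part (beta over a) is then realized exactly as in case 2, and Du is added to the output equation.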
Fall 2001 16.31 85
Modal Form
One particularly useful canonical form is called the Modal Form
	It is a diagonal representation of the state-space model.
Assume for now that the transfer function has distinct real poles pi
(but this easily extends to the case with complex poles)
	G(s) = N(s)/D(s) = N(s) / [(s - p1)(s - p2) · · · (s - pn)]
	     = r1/(s - p1) + r2/(s - p2) + · · · + rn/(s - pn)
Now define a collection of first-order systems, each with state xi
	X1(s)/U(s) = r1/(s - p1)  ⇒  ẋ1 = p1 x1 + r1 u
	X2(s)/U(s) = r2/(s - p2)  ⇒  ẋ2 = p2 x2 + r2 u
	...
	Xn(s)/U(s) = rn/(s - pn)  ⇒  ẋn = pn xn + rn u
Which can be written as
	ẋ(t) = A x(t) + B u(t)
	y(t) = C x(t) + D u(t)
with
	A = diag(p1, ..., pn)    B = [ r1 ; ... ; rn ]    C = [ 1  · · ·  1 ]
Good representation to use for numerical robustness reasons.
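The modal form drops straight out of a partial-fraction expansion; this Python sketch (scipy substituted for MATLAB, with an assumed example G(s) that has distinct real poles at -1, -2, -3) builds the diagonal model and spot-checks it against the transfer function at one frequency:

```python
import numpy as np
from scipy import signal

num = [1.0, 4.0, 5.0]          # assumed numerator b1 s^2 + b2 s + b3
den = [1.0, 6.0, 11.0, 6.0]    # poles at -1, -2, -3

r, p, k = signal.residue(num, den)   # partial-fraction expansion: residues r, poles p

# modal form: diagonal A of poles, residues in B, ones in C
A = np.diag(p)
B = r.reshape(-1, 1)
C = np.ones((1, len(p)))
D = np.zeros((1, 1))

# spot-check: the modal model matches G(s) at an arbitrary s = 2j
s = 2j
G_tf = np.polyval(num, s) / np.polyval(den, s)
G_ss = (C @ np.linalg.inv(s * np.eye(len(p)) - A) @ B + D)[0, 0]
print(abs(G_tf - G_ss))   # ~0
```

Splitting the residue between B and C is arbitrary (e.g. B of ones and C of residues works equally well); only the products ri matter.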
Fall 2001 16.31 86
State-Space Models to TFs
Given the Linear Time-Invariant (LTI) state dynamics
	ẋ(t) = A x(t) + B u(t)
	y(t) = C x(t) + D u(t)
what is the corresponding transfer function?
Start by taking the Laplace transform of these equations
	L{ẋ(t) = A x(t) + B u(t)}  ⇒  s X(s) - x(0-) = A X(s) + B U(s)
	L{y(t) = C x(t) + D u(t)}  ⇒  Y(s) = C X(s) + D U(s)
which gives
	(sI - A) X(s) = B U(s) + x(0-)
	X(s) = (sI - A)^-1 B U(s) + (sI - A)^-1 x(0-)
and
	Y(s) = [ C (sI - A)^-1 B + D ] U(s) + C (sI - A)^-1 x(0-)
By definition, G(s) = C (sI - A)^-1 B + D is called the
Transfer Function of the system.
And C (sI - A)^-1 x(0-) is the initial condition response. It is part
of the response, but not part of the transfer function.
Fall 2001 16.31 87
State-Space Transformations
State-space representations are not unique because we have a lot of
freedom in choosing the state vector.
	Selection of the state is quite arbitrary, and not that important.
In fact, given one model, we can transform it to another model that
is equivalent in terms of its input-output properties.
To see this, define Model 1 of G(s) as
	ẋ(t) = A x(t) + B u(t)
	y(t) = C x(t) + D u(t)
Now introduce the new state vector z related to the first state x
through the transformation x = T z
	T is an invertible (similarity) transform matrix
	ż = T^-1 ẋ = T^-1 (A x + B u)
	  = T^-1 (A T z + B u)
	  = (T^-1 A T) z + T^-1 B u = Ā z + B̄ u
and
	y = C x + D u = C T z + D u = C̄ z + D̄ u
So the new model is
	ż = Ā z + B̄ u
	y = C̄ z + D̄ u
Are these going to give the same transfer function? They must if
these really are equivalent models.
Fall 2001 16.31 88
Consider the two transfer functions:
	G1(s) = C (sI - A)^-1 B + D
	G2(s) = C̄ (sI - Ā)^-1 B̄ + D̄
Does G1(s) ≡ G2(s)?
	G1(s) = C (sI - A)^-1 B + D
	      = C (T T^-1) (sI - A)^-1 (T T^-1) B + D
	      = (C T) [ T^-1 (sI - A)^-1 T ] (T^-1 B) + D̄
	      = (C̄) [ T^-1 (sI - A) T ]^-1 (B̄) + D̄
	      = C̄ (sI - Ā)^-1 B̄ + D̄ = G2(s)
So the transfer function is not changed by putting the state-space
model through a similarity transformation.
Note that in the transfer function
	G(s) = (b1 s^2 + b2 s + b3) / (s^3 + a1 s^2 + a2 s + a3)
we have 6 parameters to choose
But in the related state-space model, we have A (3 × 3), B (3 × 1),
C (1 × 3), for a total of 15 parameters.
Is there a contradiction here because we have more degrees of freedom
in the state-space model?
	No. In choosing a representation of the model, we are effectively
	choosing a T, which is also 3 × 3, and thus has the remaining 9
	degrees of freedom in the state-space model.
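The invariance argument above can be exercised numerically; this Python sketch (my substitution for MATLAB, using a random model and a random similarity transform as assumptions) compares the two transfer functions at one test frequency:

```python
import numpy as np

rng = np.random.default_rng(0)

# random 3-state SISO model and a random (generically invertible) T
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 1))
C = rng.standard_normal((1, 3))
D = np.zeros((1, 1))
T = rng.standard_normal((3, 3))

Ti = np.linalg.inv(T)
Ab, Bb, Cb, Db = Ti @ A @ T, Ti @ B, C @ T, D   # transformed model

def G(A, B, C, D, s):
    # evaluate C(sI - A)^-1 B + D at a complex frequency s
    return (C @ np.linalg.inv(s * np.eye(A.shape[0]) - A) @ B + D)[0, 0]

s = 1.5 + 0.7j   # arbitrary test frequency
print(abs(G(A, B, C, D, s) - G(Ab, Bb, Cb, Db, s)))   # ~0
```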
Fall 2001 16.31 91
Topic #9
16.31 Feedback Control
State-Space Systems
What are the basic properties of a state-space model,
and how do we analyze these?
SS to TF
Copyright 2001 by Jonathan How.
SS to TF
In going from the state space model
	ẋ(t) = A x(t) + B u(t)
	y(t) = C x(t) + D u(t)
to the transfer function G(s) = C (sI - A)^-1 B + D we need to form the
inverse of the matrix (sI - A), a symbolic inverse, which is not easy at all.
For simple cases, we can use the following:
	[ a1  a2 ; a3  a4 ]^-1 = 1/(a1 a4 - a2 a3) [ a4  -a2 ; -a3  a1 ]
For larger problems, we can also use Cramer's Rule
Turns out that an equivalent method is to form:
	G(s) = C (sI - A)^-1 B + D = det [ sI - A   B ; -C   D ] / det(sI - A)
Reason for this will become more apparent later when we talk about
how to compute the zeros of a state-space model (which are the roots
of the numerator)
Example from before:
	A = [ -a1  -a2  -a3 ; 1  0  0 ; 0  1  0 ],   B = [ 1 ; 0 ; 0 ],   C = [ b1  b2  b3 ]
then
	G(s) = det [ s+a1  a2  a3  1 ; -1  s  0  0 ; 0  -1  s  0 ; -b1  -b2  -b3  0 ] / det(sI - A)
	     = (b1 s^2 + b2 s + b3) / det(sI - A)
and det(sI - A) = s^3 + a1 s^2 + a2 s + a3
Key point: the characteristic equation of this system is given by det(sI - A) = 0
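The determinant identity avoids the symbolic inverse entirely; this Python sketch (my substitution for MATLAB, with arbitrary assumed coefficients) checks it against the direct C(sI - A)^-1 B + D evaluation at a random frequency:

```python
import numpy as np

a1, a2, a3 = 6.0, 11.0, 6.0   # assumed denominator coefficients
b1, b2, b3 = 1.0, 5.0, 6.0    # assumed numerator coefficients

A = np.array([[-a1, -a2, -a3], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
B = np.array([[1.0], [0.0], [0.0]])
C = np.array([[b1, b2, b3]])
D = np.array([[0.0]])

def G_direct(s):
    return (C @ np.linalg.inv(s * np.eye(3) - A) @ B + D)[0, 0]

def G_det(s):
    # G(s) = det([sI-A, B; -C, D]) / det(sI-A)
    M = np.block([[s * np.eye(3) - A, B], [-C, D]])
    return np.linalg.det(M) / np.linalg.det(s * np.eye(3) - A)

s = 0.3 + 1.1j
print(abs(G_direct(s) - G_det(s)))   # ~0
```

The identity follows from the Schur-complement formula for block determinants, which is why the same bordered matrix reappears when computing zeros.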
Fall 2001 16.31 92
Time Response
Can develop a lot of insight into the system response and how it is modeled
by computing the time response x(t)
	Homogeneous part
	Forced solution
Homogeneous Part
	ẋ = A x, x(0) known
	Take the Laplace transform
		X(s) = (sI - A)^-1 x(0)
	so that
		x(t) = L^-1 { (sI - A)^-1 } x(0)
	But can show
		(sI - A)^-1 = I/s + A/s^2 + A^2/s^3 + . . .
	so
		L^-1 { (sI - A)^-1 } = I + A t + (1/2!)(A t)^2 + . . . = e^(At)
	So
		x(t) = e^(At) x(0)
e^(At) is a special matrix that we will use many times in this course
	Transition matrix
	Matrix Exponential
	Calculate in MATLAB using expm.m and not exp.m
	Note that e^((A+B)t) = e^(At) e^(Bt) iff AB = BA
We will say more about e^(At) when we have said more about A (eigenvalues
and eigenvectors)
Computation of e^(At) = L^-1 {(sI - A)^-1} is straightforward for a 2-state system
Fall 2001 16.31 93
Example: ẋ = A x, with
	A = [ 0  1 ; -2  -3 ]
	(sI - A)^-1 = [ s  -1 ; 2  s+3 ]^-1 = [ s+3  1 ; -2  s ] · 1/((s + 2)(s + 1))
	            = [ 2/(s+1) - 1/(s+2)      1/(s+1) - 1/(s+2) ;
	               -2/(s+1) + 2/(s+2)     -1/(s+1) + 2/(s+2) ]
	e^(At) = [ 2e^(-t) - e^(-2t)        e^(-t) - e^(-2t) ;
	          -2e^(-t) + 2e^(-2t)      -e^(-t) + 2e^(-2t) ]
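This closed form can be checked against a numerical matrix exponential; a minimal Python sketch (scipy's expm playing the role of MATLAB's expm.m):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])

def eAt_closed(t):
    # closed form from the partial-fraction expansion above
    e1, e2 = np.exp(-t), np.exp(-2.0 * t)
    return np.array([[2 * e1 - e2, e1 - e2],
                     [-2 * e1 + 2 * e2, -e1 + 2 * e2]])

t = 0.7
print(np.max(np.abs(expm(A * t) - eAt_closed(t))))   # ~0
```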
Fall 2001 16.31 101
Topic #10
16.31 Feedback Control
State-Space Systems
What are the basic properties of a state-space model, and how do
we analyze these?
Time Domain Interpretations
System Modes
Copyright 2001 by Jonathan How.
Fall 2001 16.31 101
Forced Solution
Consider a scalar case:
	ẋ = a x + b u,  x(0) given
	⇒ x(t) = e^(at) x(0) + ∫₀ᵗ e^(a(t-τ)) b u(τ) dτ
where did this come from?
1. ẋ - a x = b u
2. e^(-at) [ẋ - a x] = d/dt (e^(-at) x(t)) = e^(-at) b u(t)
3. ∫₀ᵗ d/dτ (e^(-aτ) x(τ)) dτ = e^(-at) x(t) - x(0) = ∫₀ᵗ e^(-aτ) b u(τ) dτ
Forced solution, matrix case:
	ẋ = A x + B u
where x is an n-vector and u is an m-vector
Just follow the same steps as above to get
	x(t) = e^(At) x(0) + ∫₀ᵗ e^(A(t-τ)) B u(τ) dτ
and if y = C x + D u, then
	y(t) = C e^(At) x(0) + ∫₀ᵗ C e^(A(t-τ)) B u(τ) dτ + D u(t)
	C e^(At) x(0) is the initial response
	C e^(A(t-τ)) B is the impulse response of the system.
Fall 2001 16.31 102
Have seen the key role of e^(At) in the solution for x(t)
	Determines the system time response
	But would like to get more insight!
Consider what happens if the matrix A is diagonalizable, i.e. there exists a
T such that
	T^-1 A T = Λ   which is diagonal:   Λ = diag(λ1, ..., λn)
Then
	e^(At) = T e^(Λt) T^-1
where
	e^(Λt) = diag(e^(λ1 t), ..., e^(λn t))
Follows since e^(At) = I + At + (1/2!)(At)^2 + . . . and A = T Λ T^-1, so we can
show that
	e^(At) = I + At + (1/2!)(At)^2 + . . .
	       = I + T Λ T^-1 t + (1/2!)(T Λ T^-1 t)^2 + . . .
	       = T e^(Λt) T^-1
This is a simpler way to get the matrix exponential, but how do we find T and Λ?
	Eigenvalues and Eigenvectors
Eigenvalues and Eigenvectors
Fall 2001 16.31 103
Eigenvalues and Eigenvectors
Recall that the eigenvalues of A are the same as the roots of the characteristic
equation (page 91)
	λ is an eigenvalue of A if
		det(λI - A) = 0
	which is true iff there exists a nonzero v (the eigenvector) for which
		(λI - A) v = 0   ⇒   A v = λ v
Repeat the process to find all of the eigenvectors. Assuming that the n
eigenvectors are linearly independent
	A vi = λi vi,  i = 1, . . . , n
	A [ v1  · · ·  vn ] = [ v1  · · ·  vn ] diag(λ1, ..., λn)
	A T = T Λ   ⇒   T^-1 A T = Λ
One word of caution: not all matrices are diagonalizable
	A = [ 0  1 ; 0  0 ]   det(sI - A) = s^2
	only one eigenvalue s = 0 (repeated twice). The eigenvectors solve
		[ 0  1 ; 0  0 ] [ r1 ; r2 ] = 0
	so the eigenvectors are of the form [ r1 ; 0 ] with r1 ≠ 0: there would only be one.
	Need the Jordan Normal Form to handle this case (section 3.7.3)
Fall 2001 16.31 104
Mechanics
Consider A = [ -1  1 ; -8  5 ]
	(sI - A) = [ s+1  -1 ; 8  s-5 ]
	det(sI - A) = (s + 1)(s - 5) + 8 = s^2 - 4s + 3 = 0
so the eigenvalues are s1 = 1 and s2 = 3
Eigenvectors: solve (sI - A) v = 0
	(s1 I - A) v1 = [ s+1  -1 ; 8  s-5 ] |_(s=1) [ v11 ; v21 ] = 0
	⇒ [ 2  -1 ; 8  -4 ] [ v11 ; v21 ] = 0   ⇒   2 v11 - v21 = 0,  v21 = 2 v11
	v11 is then arbitrary (≠ 0), so set v11 = 1
	⇒ v1 = [ 1 ; 2 ]
	(s2 I - A) v2 = [ 4  -1 ; 8  -2 ] [ v12 ; v22 ] = 0   ⇒   4 v12 - v22 = 0,  v22 = 4 v12
	⇒ v2 = [ 1 ; 4 ]
Confirm that A vi = λi vi
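The same mechanics in Python (numpy substituted for hand computation; the rescaling step is needed because numpy normalizes eigenvectors to unit length):

```python
import numpy as np

A = np.array([[-1.0, 1.0], [-8.0, 5.0]])
evals, evecs = np.linalg.eig(A)

idx = np.argsort(evals)              # order the eigenvalues as 1, then 3
evals, evecs = evals[idx], evecs[:, idx]
print(evals)                         # [1. 3.]

# rescale each eigenvector so its first entry is 1 (matching the hand solution)
v1 = evecs[:, 0] / evecs[0, 0]
v2 = evecs[:, 1] / evecs[0, 1]
print(v1, v2)                        # [1. 2.] and [1. 4.]
```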
Fall 2001 16.31 105
Dynamic Interpretation
Since A = T Λ T^-1, then
	e^(At) = T e^(Λt) T^-1 = [ v1  · · ·  vn ] diag(e^(λ1 t), ..., e^(λn t)) [ w1^T ; ... ; wn^T ]
where we have written
	T^-1 = [ w1^T ; ... ; wn^T ]
which is a column of rows.
Multiply this expression out and we get that
	e^(At) = Σ_{i=1..n} e^(λi t) vi wi^T
Assume A is diagonalizable; then ẋ = A x, x(0) given, has solution
	x(t) = e^(At) x(0) = T e^(Λt) T^-1 x(0)
	     = Σ_{i=1..n} e^(λi t) vi {wi^T x(0)}
	     = Σ_{i=1..n} e^(λi t) vi βi
State solution is a linear combination of the system modes vi e^(λi t)
	e^(λi t): determines the nature of the time response
	vi: determines the extent to which each state contributes to that mode
	βi: determines the extent to which the initial condition excites the mode
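The mode-sum expression can be verified directly; a minimal Python sketch (using the diagonalizable example A from the previous page as an assumption) rebuilds e^(At) from its modes and compares with scipy's expm:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-1.0, 1.0], [-8.0, 5.0]])   # diagonalizable example from above
lam, T = np.linalg.eig(A)                  # columns of T are eigenvectors v_i
W = np.linalg.inv(T)                       # rows of W are the w_i^T

t = 0.5
# e^{At} as a sum of modes: sum_i e^{lam_i t} v_i w_i^T
eAt_modes = sum(np.exp(lam[i] * t) * np.outer(T[:, i], W[i, :])
                for i in range(len(lam)))
print(np.max(np.abs(eAt_modes - expm(A * t))))   # ~0
```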
Fall 2001 16.31 106
Note that the vi give the relative sizing of the response of each part of the
state vector to the response.
	v1(t) = [ 1 ; 0 ] e^(-t)          mode 1
	v2(t) = [ 0.5 ; 0.5 ] e^(-3t)     mode 2
Clearly e^(λi t) gives the time modulation
	λi real: growing/decaying exponential response
	λi complex: growing/decaying exponential damped sinusoidal response
Bottom line: the locations of the eigenvalues determine the pole locations
for the system, thus:
	They determine the stability and/or performance & transient behavior
	of the system.
	It is their locations that we will want to modify when we start
	the control work
Fall 2001 16.31 107
Zeros in State Space Models
Roots of the transfer function numerator are called the system zeros.
	Need to develop a similar way of defining/computing them using a state
	space model.
Zero: a generalized frequency s0 for which the system can have a non-zero
input u(t) = u0 e^(s0 t), but exactly zero output y(t) ≡ 0 ∀t
	Note that there is a specific initial condition x0 associated with this
	response, so the state response is of the form x(t) = x0 e^(s0 t)
	u(t) = u0 e^(s0 t)  →  x(t) = x0 e^(s0 t)  →  y(t) ≡ 0
Given ẋ = A x + B u, substitute the above to get:
	x0 s0 e^(s0 t) = A x0 e^(s0 t) + B u0 e^(s0 t)
	⇒ [ s0 I - A   -B ] [ x0 ; u0 ] = 0
Also have that y = C x + D u ≡ 0, which gives:
	C x0 e^(s0 t) + D u0 e^(s0 t) = 0
	⇒ [ C   D ] [ x0 ; u0 ] = 0
So we must solve for the s0 that solves:
	[ s0 I - A   -B ; C   D ] [ x0 ; u0 ] = 0
This is a generalized eigenvalue problem that can be solved in
MATLAB using eig.m or tzero.m
Fall 2001 16.31 108
There is a zero at the frequency s0 if there exists a non-trivial solution of
	det [ s0 I - A   -B ; C   D ] = 0
	Compare with the equation for the poles on page 91
Key Point: zeros have both a direction [ x0 ; u0 ] and a frequency s0
	Just as we would associate a direction (eigenvector) with each pole
	(frequency λi)
Example: G(s) = (s + 2) / (s^2 + 7s + 12)
	A = [ -7  -12 ; 1  0 ],   B = [ 1 ; 0 ],   C = [ 1  2 ],   D = 0
	det [ s0 I - A   -B ; C   D ] = det [ s0+7  12  -1 ; -1  s0  0 ; 1  2  0 ]
	= (s0 + 7)(0) + 1(2) + 1(s0) = s0 + 2 = 0
so there is clearly a zero at s0 = -2, as we expected. For the directions, solve:
	[ s0+7  12  -1 ; -1  s0  0 ; 1  2  0 ] |_(s0=-2) [ x01 ; x02 ; u0 ]
	= [ 5  12  -1 ; -1  -2  0 ; 1  2  0 ] [ x01 ; x02 ; u0 ] = 0
gives x01 = -2 x02 and u0 = 2 x02, so that with x02 = 1
	x0 = [ -2 ; 1 ]   and   u = 2 e^(-2t)
Fall 2001 16.31 109
Further observations: apply the specified control input in the frequency
domain, so that
	Y1(s) = G(s) U(s)
where u = 2 e^(-2t), so that U(s) = 2/(s + 2)
	Y1(s) = (s + 2)/(s^2 + 7s + 12) · 2/(s + 2) = 2/(s^2 + 7s + 12)
Say that s = -2 is a blocking zero or a transmission zero.
The response Y1(s) is clearly non-zero, but it does not contain a component
at the input frequency s = -2. That input has been blocked.
Note that the output response left in Y1(s) is of a very special form: it
corresponds to the (negative of the) response you would see from the system
with u(t) = 0 and x0 = [ -2  1 ]^T
	Y2(s) = C (sI - A)^-1 x0 = [ 1  2 ] [ s+7  12 ; -1  s ]^-1 [ -2 ; 1 ]
	      = [ 1  2 ] [ s  -12 ; 1  s+7 ] [ -2 ; 1 ] · 1/(s^2 + 7s + 12)
	      = -2/(s^2 + 7s + 12)
So then the total output is Y(s) = Y1(s) + Y2(s) = 0, showing that Y(s) = 0 →
y(t) = 0, as expected.
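The blocking behavior is easy to see in simulation; this Python sketch (scipy's lsim substituted for a MATLAB simulation) drives the system with u = 2e^(-2t) from the zero-direction initial condition and confirms the output stays at zero:

```python
import numpy as np
from scipy.signal import lsim

A = np.array([[-7.0, -12.0], [1.0, 0.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[1.0, 2.0]])
D = np.array([[0.0]])

t = np.linspace(0.0, 5.0, 2001)
u = 2.0 * np.exp(-2.0 * t)        # input at the zero frequency s0 = -2
x0 = [-2.0, 1.0]                  # initial condition in the zero direction

tout, y, x = lsim((A, B, C, D), u, t, X0=x0)
print(float(np.max(np.abs(y))))   # ~0: the input is completely blocked
```

Starting from x0 = 0 instead would produce the non-zero Y1(s) response; it is the combination of input and matching initial condition that gives y(t) ≡ 0.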
Fall 2001 16.31 1010
Summary of zeros: a great feature of solving for the zeros using the generalized
eigenvalue matrix condition is that it can be used to find the MIMO zeros of
a system with multiple inputs/outputs.
	det [ s0 I - A   -B ; C   D ] = 0
Need to be very careful when we find MIMO zeros that have the same frequency
as the poles of the system, because it is not obvious that a pole/zero
cancellation will occur (for MIMO systems).
	The zeros have a directionality associated with them, and that must
	agree as well, or else you do not get cancellation
	More on this topic later.
Relationship to the transfer function matrix:
	If z is a zero with (right) direction [ ξ^T , u^T ]^T, then
		[ zI - A   -B ; C   D ] [ ξ ; u ] = 0
	If z is not an eigenvalue of A, then ξ = (zI - A)^-1 B u, which gives
		[ C (zI - A)^-1 B + D ] u = G(z) u = 0
	Which implies that G(s) loses rank at s = z
	If G(s) is square, can test: det G(s) = 0
	If any of the resulting roots are also eigenvalues of A, need to re-check
	the generalized eigenvalue matrix condition.
Fall 2001 16.31 1011
Note that the transfer function matrix (TFM) notion is a MIMO generalization
of the SISO transfer function
	It is a matrix of transfer functions
	G(s) = [ g11(s)  · · ·  g1m(s) ; ... ; gp1(s)  · · ·  gpm(s) ]
	where gij(s) relates the input of actuator j to the output of sensor i.
Example:
	G(s) = [ 1/(s+1)   0 ; 1/(s-2)   (s-2)/(s+2) ]
It is relatively easy to go from a state-space model to a TFM, but not
obvious how to go back the other way.
Note: we have to be careful how to analyze these TFMs.
	Just looking at the individual transfer functions is not useful.
	Need to look at the system as a whole; will develop a new tool based
	on the singular values of G(s)
Topic #11
16.31 Feedback Control
State-Space Systems
State-space model features
Observability
Controllability
Minimal Realizations
Copyright 2001 by Jonathan How.
Fall 2001 16.31 111
State-Space Model Features
There are some key characteristics of a state-space model that we
need to identify.
	Will see that these are very closely associated with the concepts
	of pole/zero cancellation in transfer functions.
Example: Consider a simple system
	G(s) = 6/(s + 2)
for which we develop the state-space model
	Model #1:  ẋ = -2x + 2u
	           y = 3x
But now consider the new state-space model x = [ x  x2 ]^T
	Model #2:  ẋ = [ -2  0 ; 0  -1 ] x + [ 2 ; 1 ] u
	           y = [ 3  0 ] x
which is clearly different than the first model.
But let's look at the transfer function of the new model:
	G̃(s) = C (sI - A)^-1 B + D
Fall 2001 16.31 112
This is a bit strange, because previously our figure of merit when
comparing one state-space model to another (page 8-8) was whether
they reproduced the same transfer function
	Now we have two very different models that result in the same
	transfer function
	Note that I showed the second model as having 1 extra state,
	but I could easily have done it with 99 extra states!!
So what is going on?
	The clue is that the dynamics associated with the second state
	of the model x2 were eliminated when we formed the product
		G̃(s) = [ 3  0 ] [ 2/(s+2) ; 1/(s+1) ]
	because the A is decoupled and there is a zero in the C matrix
	Which is exactly the same as saying that there is a pole-zero
	cancellation in the transfer function G̃(s)
		6/(s + 2) = 6(s + 1)/((s + 2)(s + 1)) ≜ G̃(s)
	Note that model #2 is one possible state-space model of G̃(s)
	(has 2 poles)
For this system we say that the dynamics associated with the second
state are unobservable using this sensor (defines the C matrix).
	There could be a lot of motion associated with x2, but we would
	be unaware of it using this sensor.
Fall 2001 16.31 113
There is an analogous problem on the input side as well. Consider:
	Model #1:  ẋ = -2x + 2u
	           y = 3x
with x = [ x  x2 ]^T
	Model #3:  ẋ = [ -2  0 ; 0  -1 ] x + [ 2 ; 0 ] u
	           y = [ 3  2 ] x
which is also clearly different than model #1, and has a different
form from the second model.
	G̃(s) = [ 3  2 ] ( sI - [ -2  0 ; 0  -1 ] )^-1 [ 2 ; 0 ]
	      = [ 3/(s+2)   2/(s+1) ] [ 2 ; 0 ] = 6/(s + 2)  !!
Once again the dynamics associated with the pole at s = -1 are
cancelled out of the transfer function.
	But in this case it occurred because there is a 0 in the B matrix
So in this case we can see the state x2 in the output since C = [ 3  2 ],
but we cannot influence that state with the input since B = [ 2 ; 0 ]
So we say that the dynamics associated with the second state are
uncontrollable using this actuator (defines the B matrix).
Fall 2001 16.31 114
Of course it can get even worse, because we could have
	ẋ = [ -2  0 ; 0  -1 ] x + [ 2 ; 0 ] u
	y = [ 3  0 ] x
So now we have
	G̃(s) = [ 3  0 ] ( sI - [ -2  0 ; 0  -1 ] )^-1 [ 2 ; 0 ]
	      = [ 3/(s+2)   0/(s+1) ] [ 2 ; 0 ] = 6/(s + 2)  !!
Get the same result for the transfer function, but now the dynamics
associated with x2 are both unobservable and uncontrollable.
Summary:
	Dynamics in the state-space model that are uncontrollable, unobservable,
	or both do not show up in the transfer function.
Would like to develop models that only have dynamics that are
both controllable and observable
	→ called a minimal realization
	It has the lowest possible order for the given transfer function.
But first we need to develop tests to determine if the models are
observable and/or controllable
Fall 2001 16.31 115
Observability
Definition: An LTI system is observable if the initial state
x(0) can be uniquely deduced from the knowledge of the input u(t)
and output y(t) for all t between 0 and any T > 0.
	If x(0) can be deduced, then we can reconstruct x(t) exactly
	because we know u(t) → we can find x(t) ∀t.
	Thus we need only consider the zero-input (homogeneous) solution
	to study observability.
		y(t) = C e^(At) x(0)
This definition of observability is consistent with the notion we used
before of being able to "see" all the states in the output of the
decoupled examples
	ROT: For those decoupled examples, if part of the state cannot
	be "seen" in y(t), then it would be impossible to deduce that
	part of x(0) from the outputs y(t).
Fall 2001 16.31 116
Definition: A state x* ≠ 0 is said to be unobservable if the
zero-input solution y(t), with x(0) = x*, is zero for all t ≥ 0
	Equivalent to saying that x* is an unobservable state if
		C e^(At) x* = 0   ∀t ≥ 0
For the problem we were just looking at, consider Model #2 with
x* = [ 0  1 ]^T ≠ 0, then
	ẋ = [ -2  0 ; 0  -1 ] x + [ 2 ; 1 ] u
	y = [ 3  0 ] x
so
	C e^(At) x* = [ 3  0 ] [ e^(-2t)  0 ; 0  e^(-t) ] [ 0 ; 1 ]
	            = [ 3e^(-2t)   0 ] [ 0 ; 1 ] = 0   ∀t
So x* = [ 0  1 ]^T is an unobservable state for this system.
But that is as expected, because we knew there was a problem with
the state x2 from the previous analysis
Fall 2001 16.31 117
Theorem: An LTI system is observable iff it has no
unobservable states.
	We normally just say that the pair (A,C) is observable.
Pseudo-proof: Let x* ≠ 0 be an unobservable state and compute
the outputs from the initial conditions x1(0) and x2(0) = x1(0) + x*
Then
	y1(t) = C e^(At) x1(0)   and   y2(t) = C e^(At) x2(0)
but
	y2(t) = C e^(At) (x1(0) + x*) = C e^(At) x1(0) = y1(t)
Thus two different initial conditions give the same output y(t), so it
would be impossible for us to deduce the actual initial condition
of the system, x1(0) or x2(0), given y(t)
Testing system observability by searching for a vector x(0) such that
C e^(At) x(0) = 0 ∀t is feasible, but very hard in general.
	Better tests are available.
Fall 2001 16.31 118
Theorem: The vector x* is an unobservable state iff
	[ C ; CA ; CA^2 ; ... ; CA^(n-1) ] x* = 0
Pseudo-proof: If x* is an unobservable state, then by definition,
	C e^(At) x* = 0   ∀t ≥ 0
But all the derivatives of C e^(At) exist, and for this condition to hold,
all derivatives must be zero at t = 0. Then
	C e^(At) x* |_(t=0) = 0   ⇒   C x* = 0
	d/dt C e^(At) x* |_(t=0) = 0   ⇒   C A x* = 0
	d^2/dt^2 C e^(At) x* |_(t=0) = 0   ⇒   C A^2 x* = 0
	...
	d^k/dt^k C e^(At) x* |_(t=0) = 0   ⇒   C A^k x* = 0
We only need to retain up to the (n-1)-th derivative because of the
Cayley-Hamilton theorem.
Fall 2001 16.31 119
Simple test: A necessary and sufficient condition for observability
is that
	rank Mo ≜ rank [ C ; CA ; CA^2 ; ... ; CA^(n-1) ] = n
Why does this make sense?
	The requirement for an unobservable state is that for x* ≠ 0
		Mo x* = 0
	Which is equivalent to saying that x* is orthogonal to each row of Mo.
	But if the rows of Mo are considered to be vectors and these
	span the full n-dimensional space, then it is not possible
	to find an n-vector x* that is orthogonal to each of these.
	To determine if the n rows of Mo span the full n-dimensional
	space, we need to test their linear independence, which is
	equivalent to the rank test.
Note: let M be an m × p matrix; then the rank of M satisfies:
	1. rank M = number of linearly independent columns of M
	2. rank M = number of linearly independent rows of M
	3. rank M ≤ min{m, p}
Topic #12
16.31 Feedback Control
State-Space Systems
State-space model features
Controllability
Copyright 2001 by Jonathan How.
Fall 2001 16.31 121
Controllability
Definition: An LTI system is controllable if, for every x*
and every T > 0, there exists an input function u(t), 0 < t ≤ T,
such that the system state goes from x(0) = 0 to x(T) = x*.
	Starting at 0 is not a special case: if we can get to any state
	in finite time from the origin, then we can get from any initial
	condition to that state in finite time as well.
	Need only consider the forced solution to study controllability.
		x(t) = ∫₀ᵗ e^(A(t-τ)) B u(τ) dτ
	Change of variables τ2 = t - τ, dτ = -dτ2 gives
		x(t) = ∫₀ᵗ e^(A τ2) B u(t - τ2) dτ2
This definition of controllability is consistent with the notion we used
before of being able to "influence" all the states in the system in the
decoupled examples we looked at before.
	ROT: For those decoupled examples, if part of the state cannot
	be "influenced" by u(t), then it would be impossible to move
	that part of the state from 0 to x*
Fall 2001 16.31 122
Definition: A state x* ≠ 0 is said to be uncontrollable if the
forced state response x(t) is orthogonal to x* ∀t > 0 and all input
functions.
	"You cannot get there from here"
This is equivalent to saying that x* is an uncontrollable state if
	(x*)^T ∫₀ᵗ e^(A τ2) B u(t - τ2) dτ2 = ∫₀ᵗ (x*)^T e^(A τ2) B u(t - τ2) dτ2 = 0
Since this identity must hold for all input functions u(t - τ2), this
can only be true if
	(x*)^T e^(At) B ≡ 0   ∀t ≥ 0
Fall 2001 16.31 123
For the problem we were just looking at, consider Model #3 with
x* = [ 0  1 ]^T ≠ 0, then
	Model #3:  ẋ = [ -2  0 ; 0  -1 ] x + [ 2 ; 0 ] u
	           y = [ 3  2 ] x
so
	(x*)^T e^(At) B = [ 0  1 ] [ e^(-2t)  0 ; 0  e^(-t) ] [ 2 ; 0 ]
	                = [ 0   e^(-t) ] [ 2 ; 0 ] = 0   ∀t
So x* = [ 0  1 ]^T is an uncontrollable state for this system.
But that is as expected, because we knew there was a problem with
the state x2 from the previous analysis
Fall 2001 16.31 124
Theorem: An LTI system is controllable iff it has no
uncontrollable states.
	We normally just say that the pair (A,B) is controllable.
Pseudo-proof: The theorem essentially follows from the definition
of an uncontrollable state.
	If you had an uncontrollable state x*, then it is orthogonal to the
	forced response state x(t), which means that the system cannot
	reach it in finite time → the system would be uncontrollable.
Theorem: The vector x* is an uncontrollable state iff
	(x*)^T [ B  AB  A^2 B  · · ·  A^(n-1) B ] = 0
	See page 81.
Simple test: A necessary and sufficient condition for controllability
is that
	rank Mc ≜ rank [ B  AB  A^2 B  · · ·  A^(n-1) B ] = n
Fall 2001 16.31 125
Examples
With Model #2:
	ẋ = [ -2  0 ; 0  -1 ] x + [ 2 ; 1 ] u
	y = [ 3  0 ] x
	Mo = [ C ; CA ] = [ 3  0 ; -6  0 ]
	Mc = [ B  AB ] = [ 2  -4 ; 1  -1 ]
	rank Mo = 1 and rank Mc = 2
So this model of the system is controllable, but not observable.
With Model #3:
	ẋ = [ -2  0 ; 0  -1 ] x + [ 2 ; 0 ] u
	y = [ 3  2 ] x
	Mo = [ C ; CA ] = [ 3  2 ; -6  -2 ]
	Mc = [ B  AB ] = [ 2  -4 ; 0  0 ]
	rank Mo = 2 and rank Mc = 1
So this model of the system is observable, but not controllable.
Note that controllability/observability are not intrinsic properties
of a system. Whether the model has them or not depends on the
representation that you choose.
But they indicate that something else more fundamental is wrong...
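The rank tests for both models are easy to mechanize; this Python sketch (numpy substituted for MATLAB's obsv/ctrb) builds Mo and Mc for Models 2 and 3 and checks their ranks:

```python
import numpy as np

def obsv(A, C):
    # observability matrix [C; CA; ...; CA^(n-1)]
    n = A.shape[0]
    return np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

def ctrb(A, B):
    # controllability matrix [B, AB, ..., A^(n-1)B]
    n = A.shape[0]
    return np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])

A = np.array([[-2.0, 0.0], [0.0, -1.0]])

# Model 2: controllable but not observable
B2, C2 = np.array([[2.0], [1.0]]), np.array([[3.0, 0.0]])
r_o2 = np.linalg.matrix_rank(obsv(A, C2))
r_c2 = np.linalg.matrix_rank(ctrb(A, B2))
print(r_o2, r_c2)   # 1 2

# Model 3: observable but not controllable
B3, C3 = np.array([[2.0], [0.0]]), np.array([[3.0, 2.0]])
r_o3 = np.linalg.matrix_rank(obsv(A, C3))
r_c3 = np.linalg.matrix_rank(ctrb(A, B3))
print(r_o3, r_c3)   # 2 1
```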
Fall 2001 16.31 126
Example: Loss of Observability
Typical scenario: consider a system G(s) of the form
	u → [ 1/(s+a) ] → x1 → [ (s+a)/(s+1) ] → y     (x2 is the state of the second block)
so that
	G(s) = (s+a)/(s+1) · 1/(s+a)
Clearly there is a pole-zero cancellation in this system (pole s = -a)
The state-space model for the system is:
	ẋ1 = -a x1 + u
	ẋ2 = -x2 + (a - 1) x1
	y = x1 + x2
	A = [ -a  0 ; a-1  -1 ],   B = [ 1 ; 0 ],   C = [ 1  1 ],   D = 0
The observability/controllability tests are (a = 2):
	rank [ C ; CA ] = rank [ 1  1 ; -1  -1 ] = 1 < n = 2
	rank [ B  AB ] = rank [ 1  -2 ; 0  1 ] = 2
System controllable, but unobservable. Consistent with the picture:
	Both states can be influenced by u
	But the e^(-at) mode dynamics are canceled out of the output by the zero.
Fall 2001 16.31 127
Example: Loss of Controllability
Repeat the process, but now use the system G(s) of the form
s + a
s + 1
x
2
1
s + a
x
1
u y
so that
1
s + a

s + a
s + 1
G(s) =
Still a pole-zero cancelation in this system (pole s = a)
The state space model for the system is:
x
1
= ax
1
+ x
2
+ u
x
2
= x
2
+ (a 1)u
y = x
1

a 1
0 1

, B
2
=
1
a 1

, C
2
=

1 0

, D
2
= 0 A
2
=
The Observability/Controllability tests are (a = 2):

C
2
C
2
A
2

= rank
1 0
2 1

= 2 rank

1 1
1

= 1 < n = 2
1

B
2
A
2
B
2

= rank rank
System observable, but uncontrollable. Consistent with the picture:
u can inuence state x
2
, but eect on x
1
canceled by zero
Both states can be seen in the output (x
1
directly, and x
2
because
it drives the dynamics associated with x
1
)

Fall 2001 16.31 128


Modal Tests
Earlier examples showed the relative simplicity of testing observ
ability/controllability for system with a decoupled A matrix.
There is, of course, a very special decoupled form for the state-space
model: the Modal Form (85)
Assuming that we are given the model
x = Ax + Bu
y = Cx + Du
and the A is diagonalizable (A = T T
1
) using the transformation
T =

| |

v
1
v
n
| |

based on the eigenvalues of A. Note that we wrote:


T
1
=

w
T

1
.
.
.
w
T

which is a column of rows.


Then dene a new state so that x = Tz, then
z = T
1
x = T
1
(Ax + Bu) = (T
1
AT )z + T
1
Bu
= z + T
1
Bu
y = Cx + Du = CTz + Du
Fall 2001 16.31 129
The new model in the state z is diagonal. There is no coupling in
the dynamics matrix .
But by denition,

w
T
1
.
T
1
B =

.
.
B
w
T
n
and

CT = C v
1
v
n
Thus if it turned out that
T
w
i
B 0
then that element of the state vector z
i
would be uncontrollable
by the input u.
Also, if
Cv
j
0
then that element of the state vector z
j
would be unobservable
with this sensor.
Thus, all modes of the system are controllable and ob-
servable if it can be shown that
w
i
T
B 6= 0 i
and
Cv
j
6= 0 j
Fall 2001 16.31 1210
Cancelation
Examples show the close connection between pole-zero cancelation
and loss of observability and controllability. Can be strengthened.
Theorem: The mode (
i
, v
i
) of a system (A, B, C, D) is unob

. servable i the system has a zero at
i
with direction
v
i
0
Proof: If the system is unobservable at
i
, then we know
(
i
I A)v
i
= 0 It is a mode
Cv
i
= 0 That mode is unobservable
Combine to get:

(
i
I A)
C

v
i
= 0
Or

(
i
I A) B
C D

v
i
0

= 0
which implies that the system has a zero at that frequency as well,

with direction
v
i
0

.
Can repeat the process looking for los

s of controllability, but now

using zeros with left direction w


T
0
i
.
Fall 2001 16.31 1211
Combined Denition: when a MIMO zero causes loss of ei
ther observability or controllability we say that there is a pole/zero
cancelation.
MIMO pole-zero (right direction generalized eigenvector) cance
lation mode is unobservable
MIMO pole-zero (left direction generalized eigenvector) cancela
tion mode is uncontrollable
Note: This cancelation requires an agreement of both the fre
quency and the directionality of the system mode (eigenvector) and

zero
v
i
0

or

w
T
0
i

.
Fall 2001 16.31 1212
Weaker Conditions
Often it is too much to assume that we will have full observability
and controllability. Often have to make do with the following:
A system is called detectable if all unstable modes
are observable
A system is called stabilizable if all unstable modes
are controllable
So if you had a stabilizable and detectable system, there could be
dynamics that you are not aware of and cannot inuence, but you
know that they are at least stable.
Topic #12
16.31 Feedback Control (Fall 2001)
State-Space Systems
- State-space model features
- Controllability
Copyright 2001 by Jonathan How.
Controllability

Definition: An LTI system is controllable if, for every state x^* and every T > 0, there exists an input function u(t), 0 < t <= T, such that the system state goes from x(0) = 0 to x(T) = x^*.

- Starting at 0 is not a special case: if we can get to any state in finite time from the origin, then we can get from any initial condition to that state in finite time as well.
- Need only consider the forced solution to study controllability:
  x(t) = \int_0^t e^{A(t-\tau)} B u(\tau) d\tau
  The change of variables \tau_2 = t - \tau, d\tau = -d\tau_2, gives
  x(t) = \int_0^t e^{A\tau_2} B u(t - \tau_2) d\tau_2
- This definition of controllability is consistent with the notion we used before of being able to influence all the states in the decoupled examples we looked at.
- Rule of thumb (ROT): for those decoupled examples, if part of the state cannot be influenced by u(t), then it would be impossible to move that part of the state from 0 to x^*.
Definition: A state x^* != 0 is said to be uncontrollable if the forced state response x(t) is orthogonal to x^* for all t > 0 and all input functions. ("You cannot get there from here.")

- This is equivalent to saying that x^* is an uncontrollable state if
  (x^*)^T \int_0^t e^{A\tau_2} B u(t-\tau_2) d\tau_2 = \int_0^t (x^*)^T e^{A\tau_2} B u(t-\tau_2) d\tau_2 = 0
- Since this identity must hold for all input functions u(t-\tau_2), it can only be true if
  (x^*)^T e^{At} B \equiv 0,  \forall t >= 0
For the problem we were just looking at, consider Model #3 with x^* = [ 0 1 ]^T != 0:

Model #3:  \dot{x} = [ -2 0 ; 0 -1 ] x + [ 2 ; 0 ] u,   y = [ 3 2 ] x

so

(x^*)^T e^{At} B = [ 0 1 ] [ e^{-2t} 0 ; 0 e^{-t} ] [ 2 ; 0 ] = [ 0 1 ] [ 2e^{-2t} ; 0 ] = 0,  \forall t

So x^* = [ 0 1 ]^T is an uncontrollable state for this system. But that is as expected, because we knew there was a problem with the state x_2 from the previous analysis.
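This check is easy to reproduce numerically. A minimal sketch in plain Python (no control toolbox; the diagonal A of Model #3 lets us write e^{At} entrywise, and the function name `response` is our choice, not from the notes):

```python
import math

# Model #3: A = diag(-2, -1), B = [2, 0]^T, candidate x_star = [0, 1]^T
A_diag = [-2.0, -1.0]          # eigenvalues on the diagonal of A
B = [2.0, 0.0]
x_star = [0.0, 1.0]

def response(t):
    # (x*)^T e^{At} B for diagonal A: sum_i x*_i * exp(a_i * t) * b_i
    return sum(x_star[i] * math.exp(A_diag[i] * t) * B[i] for i in range(2))

# The product vanishes for every t, so x* is an uncontrollable state.
assert all(abs(response(t)) < 1e-12 for t in [0.0, 0.5, 1.0, 5.0])
```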
Theorem: An LTI system is controllable iff it has no uncontrollable states.
- We normally just say that the pair (A, B) is controllable.
- Pseudo-proof: the theorem essentially follows from the definition of an uncontrollable state. If you had an uncontrollable state x^*, then it is orthogonal to the forced response state x(t), which means that the system cannot reach it in finite time; the system would be uncontrollable.

Theorem: The vector x^* is an uncontrollable state iff
(x^*)^T [ B  AB  A^2 B  ...  A^{n-1} B ] = 0
(see page 8-1).

Simple test: a necessary and sufficient condition for controllability is that
rank M_c := rank [ B  AB  A^2 B  ...  A^{n-1} B ] = n
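For small systems this rank test can be coded directly. A minimal sketch for the n = 2, single-input case (plain Python; the helper name `controllability_rank_2x2` and the 1e-12 tolerance are our choices, not from the notes):

```python
def controllability_rank_2x2(A, B):
    """Rank of M_c = [B, AB] for a 2-state, single-input system."""
    AB = [A[0][0]*B[0] + A[0][1]*B[1], A[1][0]*B[0] + A[1][1]*B[1]]
    det = B[0]*AB[1] - B[1]*AB[0]      # 2x2 determinant of [B, AB]
    if abs(det) > 1e-12:
        return 2
    return 1 if any(abs(v) > 1e-12 for v in B + AB) else 0

# Model #3 from the notes: A = diag(-2, -1), B = [2, 0]^T
assert controllability_rank_2x2([[-2, 0], [0, -1]], [2, 0]) == 1  # < n: uncontrollable
# Model #2: same A, B = [2, 1]^T
assert controllability_rank_2x2([[-2, 0], [0, -1]], [2, 1]) == 2  # = n: controllable
```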
Examples

With Model #2:  \dot{x} = [ -2 0 ; 0 -1 ] x + [ 2 ; 1 ] u,   y = [ 3 0 ] x

M_o = [ C ; CA ] = [ 3 0 ; -6 0 ],   M_c = [ B  AB ] = [ 2 -4 ; 1 -1 ]

rank M_o = 1 and rank M_c = 2, so this model of the system is controllable, but not observable.

With Model #3:  \dot{x} = [ -2 0 ; 0 -1 ] x + [ 2 ; 0 ] u,   y = [ 3 2 ] x

M_o = [ C ; CA ] = [ 3 2 ; -6 -2 ],   M_c = [ B  AB ] = [ 2 -4 ; 0 0 ]

rank M_o = 2 and rank M_c = 1, so this model of the system is observable, but not controllable.

Note that controllability/observability are not intrinsic properties of a system: whether the model has them or not depends on the representation that you choose. But they indicate that something else more fundamental is wrong...
Example: Loss of Observability

Typical scenario: consider a system G(s) built as the cascade u -> [ 1/(s+a) ] -> x_1 -> [ (s+a)/(s+1) ] -> y (with internal state x_2), so that

G(s) = (s+a)/(s+1) * 1/(s+a)

Clearly there is a pole-zero cancelation in this system (the pole at s = -a).

The state-space model for the system is:
\dot{x}_1 = -a x_1 + u
\dot{x}_2 = -x_2 + (a-1) x_1
y = x_1 + x_2

A = [ -a 0 ; a-1 -1 ],  B = [ 1 ; 0 ],  C = [ 1 1 ],  D = 0

The observability/controllability tests are (with a = 2):
rank [ C ; CA ] = rank [ 1 1 ; -1 -1 ] = 1 < n = 2
rank [ B  AB ] = rank [ 1 -2 ; 0 1 ] = 2

System controllable, but unobservable. Consistent with the picture:
- Both states can be influenced by u.
- But the e^{-at} mode dynamics are canceled out of the output by the zero.
Example: Loss of Controllability

Repeat the process, but now use the system G(s) of the form

G(s) = 1/(s+a) * (s+a)/(s+1)

i.e., the same two blocks in the reverse order: u -> [ (s+a)/(s+1) ] -> x_2 -> [ 1/(s+a) ] -> x_1 = y. Still a pole-zero cancelation in this system (pole s = -a).

The state-space model for the system is:
\dot{x}_1 = -a x_1 + x_2 + u
\dot{x}_2 = -x_2 + (a-1) u
y = x_1

A_2 = [ -a 1 ; 0 -1 ],  B_2 = [ 1 ; a-1 ],  C_2 = [ 1 0 ],  D_2 = 0

The observability/controllability tests are (with a = 2):
rank [ C_2 ; C_2 A_2 ] = rank [ 1 0 ; -2 1 ] = 2
rank [ B_2  A_2 B_2 ] = rank [ 1 -1 ; 1 -1 ] = 1 < n = 2

System observable, but uncontrollable. Consistent with the picture:
- u can influence state x_2, but its effect on x_1 is canceled by the zero.
- Both states can be seen in the output (x_1 directly, and x_2 because it drives the dynamics associated with x_1).
Modal Tests

Earlier examples showed the relative simplicity of testing observability/controllability for a system with a decoupled A matrix. There is, of course, a very special decoupled form for the state-space model: the Modal Form (see 8-5).

Assume that we are given the model
\dot{x} = Ax + Bu
y = Cx + Du
and that A is diagonalizable (A = T \Lambda T^{-1}) using the transformation
T = [ v_1 ... v_n ]
whose columns are the eigenvectors of A. Note that we can write
T^{-1} = [ w_1^T ; ... ; w_n^T ]
which is a column of rows (the left eigenvectors of A).

Then define a new state z so that x = Tz; then
\dot{z} = T^{-1}\dot{x} = T^{-1}(Ax + Bu) = (T^{-1}AT)z + T^{-1}Bu = \Lambda z + T^{-1}Bu
y = Cx + Du = CTz + Du
The new model in the state z is diagonal; there is no coupling in the dynamics matrix \Lambda. But by definition,
T^{-1}B = [ w_1^T B ; ... ; w_n^T B ]
and
CT = [ C v_1  ...  C v_n ]

Thus if it turned out that
w_i^T B \equiv 0
then that element z_i of the state vector would be uncontrollable by the input u. Also, if
C v_j \equiv 0
then that element z_j of the state vector would be unobservable with this sensor.

Thus, all modes of the system are controllable and observable if it can be shown that
w_i^T B != 0  \forall i   and   C v_j != 0  \forall j
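For Model #3 this modal test is trivial to run, since A is already diagonal: T = I, so the v_i and w_i are just the standard basis vectors. A small sketch (plain Python; the variable names are ours):

```python
# Model #3: A = diag(-2, -1) is already diagonal, so T = I and the
# right/left eigenvectors v_i, w_i are the standard basis vectors.
B = [2.0, 0.0]
C = [3.0, 2.0]

wTB = [B[0], B[1]]     # w_i^T B is just the i-th entry of B here
Cv  = [C[0], C[1]]     # C v_j is just the j-th entry of C here

uncontrollable = [i for i, v in enumerate(wTB) if abs(v) < 1e-12]
unobservable   = [j for j, v in enumerate(Cv)  if abs(v) < 1e-12]

assert uncontrollable == [1]   # the e^{-t} mode cannot be reached by u
assert unobservable == []      # both modes appear in the output
```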
Cancelation

The examples show the close connection between pole-zero cancelation and loss of observability and controllability. This connection can be strengthened.

Theorem: The mode (\lambda_i, v_i) of a system (A, B, C, D) is unobservable iff the system has a zero at \lambda_i with direction [ v_i ; 0 ].

Proof: If the system is unobservable at \lambda_i, then we know
(\lambda_i I - A) v_i = 0   (it is a mode)
C v_i = 0   (that mode is unobservable)
Combine to get
[ (\lambda_i I - A) ; C ] v_i = 0
or
[ (\lambda_i I - A)  B ; C  D ] [ v_i ; 0 ] = 0
which implies that the system has a zero at that frequency as well, with direction [ v_i ; 0 ].

Can repeat the process looking for loss of controllability, but now using zeros with left direction [ w_i^T  0 ].
Combined definition: when a MIMO zero causes loss of either observability or controllability, we say that there is a pole/zero cancelation.
- MIMO pole-zero cancelation in the right direction (generalized eigenvector v_i): the mode is unobservable.
- MIMO pole-zero cancelation in the left direction (generalized eigenvector w_i): the mode is uncontrollable.

Note: this cancelation requires agreement of both the frequency and the directionality of the system mode (eigenvector) and the zero ([ v_i ; 0 ] or [ w_i^T  0 ]).
Weaker Conditions

Often it is too much to assume that we will have full observability and controllability. Often we have to make do with the following:
- A system is called detectable if all unstable modes are observable.
- A system is called stabilizable if all unstable modes are controllable.

So if you had a stabilizable and detectable system, there could be dynamics that you are not aware of and cannot influence, but you know that they are at least stable.
Topic #13
16.31 Feedback Control (Fall 2001)
State-Space Systems
- Full-state Feedback Control
- How do we change the poles of the state-space system?
- Or, even if we can change the pole locations, where do we change the pole locations to?
- How well does this approach work?
Copyright 2001 by Jonathan How.
Full-state Feedback Controller

Assume that the single-input system dynamics are given by
\dot{x} = Ax + Bu
y = Cx
so that D = 0. (The multi-actuator case is quite a bit more complicated, as we would have many extra degrees of freedom.)

Recall that the system poles are given by the eigenvalues of A. We want to use the input u(t) to modify the eigenvalues of A to change the system dynamics.

Assume a full-state feedback of the form
u = r - Kx
where r is some reference input and the gain K is in R^{1 x n}. If r = 0, we call this controller a regulator.

Find the closed-loop dynamics:
\dot{x} = Ax + B(r - Kx) = (A - BK)x + Br = A_{cl} x + Br
y = Cx
Objective: pick K so that A_{cl} has the desired properties, e.g.
- A unstable, want A_{cl} stable
- Put 2 poles at -2 +/- 2j

Note that there are n parameters in K and n eigenvalues in A, so it looks promising, but what can we achieve?

Example #1: Consider
\dot{x} = [ 1 1 ; 1 2 ] x + [ 1 ; 0 ] u
Then
det(sI - A) = (s-1)(s-2) - 1 = s^2 - 3s + 1 = 0
so the system is unstable.

Define u = -[ k_1  k_2 ] x = -Kx; then
A_{cl} = A - BK = [ 1 1 ; 1 2 ] - [ 1 ; 0 ][ k_1  k_2 ] = [ 1-k_1  1-k_2 ; 1  2 ]
So then we have that
det(sI - A_{cl}) = s^2 + (k_1 - 3)s + (1 - 2k_1 + k_2) = 0
Thus, by choosing k_1 and k_2, we can put \lambda_i(A_{cl}) anywhere in the complex plane (assuming complex conjugate pairs of poles).
To put the poles at s = -5, -6, compare the desired characteristic equation
(s+5)(s+6) = s^2 + 11s + 30 = 0
with the closed-loop one
s^2 + (k_1 - 3)s + (1 - 2k_1 + k_2) = 0
to conclude that
k_1 - 3 = 11,  1 - 2k_1 + k_2 = 30   =>   k_1 = 14,  k_2 = 57
so that K = [ 14  57 ]. This is called Pole Placement.
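The coefficient matching above is mechanical and easy to script. A sketch for this 2-state example (plain Python; the numbers come from the notes, the variable names are ours):

```python
# Example #1: A = [[1,1],[1,2]], B = [1,0]^T; the closed-loop polynomial
# is s^2 + (k1 - 3)s + (1 - 2*k1 + k2).  Match it against the desired
# s^2 + 11s + 30 from poles at s = -5, -6.
d1, d0 = 11.0, 30.0            # desired coefficients
k1 = d1 + 3.0                  # from k1 - 3 = 11
k2 = d0 - 1.0 + 2.0 * k1       # from 1 - 2*k1 + k2 = 30
assert (k1, k2) == (14.0, 57.0)

# Verify: the characteristic polynomial of A_cl = A - B*K is
# s^2 - trace(A_cl)*s + det(A_cl), which should match (d1, d0).
Acl = [[1 - k1, 1 - k2], [1.0, 2.0]]
trace = Acl[0][0] + Acl[1][1]
det = Acl[0][0]*Acl[1][1] - Acl[0][1]*Acl[1][0]
assert (-trace, det) == (d1, d0)
```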
Of course, it is not always this easy, as the issue of controllability must be addressed.

Example #2: Consider this system:
\dot{x} = [ 1 1 ; 0 2 ] x + [ 1 ; 0 ] u
with the same control approach:
A_{cl} = A - BK = [ 1 1 ; 0 2 ] - [ 1 ; 0 ][ k_1  k_2 ] = [ 1-k_1  1-k_2 ; 0  2 ]
so that
det(sI - A_{cl}) = (s - 1 + k_1)(s - 2) = 0
So the feedback control can modify the pole at s = 1, but it cannot move the pole at s = 2. This system cannot be stabilized with full-state feedback control.
What is the reason for this problem? It is associated with loss of controllability of the e^{2t} mode.

Consider the basic controllability test:
M_c = [ B  AB ] = [ 1 1 ; 0 0 ]
so that rank M_c = 1 < 2.

Consider the modal test to develop a little more insight. With A = [ 1 1 ; 0 2 ], decompose as AV = V\Lambda, i.e. \Lambda = V^{-1}AV, where
\Lambda = [ 1 0 ; 0 2 ],  V = [ 1 1 ; 0 1 ],  V^{-1} = [ 1 -1 ; 0 1 ]
Convert
\dot{x} = Ax + Bu  --(z = V^{-1}x)-->  \dot{z} = \Lambda z + V^{-1}Bu
where z = [ z_1  z_2 ]^T. But
V^{-1}B = [ 1 -1 ; 0 1 ][ 1 ; 0 ] = [ 1 ; 0 ]
so the dynamics in modal form are
\dot{z} = [ 1 0 ; 0 2 ] z + [ 1 ; 0 ] u
With this zero in the modal B-matrix, we can easily see that the mode associated with the z_2 state is uncontrollable. We must assume that the pair (A, B) is controllable.
Ackermann's Formula

The previous pages outlined a design procedure and showed how to do it by hand for second-order systems. It extends to higher-order (controllable) systems, but is tedious. Ackermann's Formula gives us a method of doing this entire design process in one easy step:
K = [ 0 ... 0 1 ] M_c^{-1} \Phi_d(A)
where
M_c = [ B  AB  ...  A^{n-1}B ]
and \Phi_d(s) is the characteristic polynomial for the desired closed-loop poles, which we then evaluate at s = A. It is explicit that the system must be controllable, because we are inverting the controllability matrix.

Revisit Example #1: \Phi_d(s) = s^2 + 11s + 30, and
M_c = [ B  AB ] = [ 1 1 ; 0 1 ]
So
K = [ 0 1 ] [ 1 1 ; 0 1 ]^{-1} ( [ 1 1 ; 1 2 ]^2 + 11 [ 1 1 ; 1 2 ] + 30 I )
  = [ 0 1 ] [ 1 -1 ; 0 1 ] [ 43 14 ; 14 57 ]
  = [ 14  57 ]
This is automated in Matlab: place.m & acker.m (see polyvalm.m too).
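Ackermann's formula can be reproduced step by step for Example #1. A sketch with hand-coded 2x2 matrix operations (plain Python rather than the Matlab acker.m mentioned above; the helper name `matmul2` is ours):

```python
def matmul2(X, Y):
    # product of two 2x2 matrices
    return [[sum(X[i][k]*Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1.0, 1.0], [1.0, 2.0]]
B = [1.0, 0.0]

# M_c = [B, AB] and its inverse (2x2 closed form)
AB = [A[0][0]*B[0] + A[0][1]*B[1], A[1][0]*B[0] + A[1][1]*B[1]]
Mc = [[B[0], AB[0]], [B[1], AB[1]]]
d = Mc[0][0]*Mc[1][1] - Mc[0][1]*Mc[1][0]
Mc_inv = [[Mc[1][1]/d, -Mc[0][1]/d], [-Mc[1][0]/d, Mc[0][0]/d]]

# phi_d(A) = A^2 + 11A + 30I for desired poles at -5, -6
A2 = matmul2(A, A)
phi = [[A2[i][j] + 11.0*A[i][j] + 30.0*(1.0 if i == j else 0.0)
        for j in range(2)] for i in range(2)]

# K = [0 1] * Mc^{-1} * phi_d(A)
row = [Mc_inv[1][0], Mc_inv[1][1]]          # [0 1] * Mc^{-1}
K = [row[0]*phi[0][j] + row[1]*phi[1][j] for j in range(2)]
assert K == [14.0, 57.0]                    # matches the by-hand design
```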
Where did this formula come from? For simplicity, consider a third-order system (case #2), but this extends to any order. In control canonical form:
A = [ -a_1 -a_2 -a_3 ; 1 0 0 ; 0 1 0 ],  B = [ 1 ; 0 ; 0 ],  C = [ b_1  b_2  b_3 ]

Here we see the key benefit of using the control canonical state-space model: this form is useful because the characteristic equation for the system is obvious:
det(sI - A) = s^3 + a_1 s^2 + a_2 s + a_3 = 0

Can show that
A_{cl} = A - BK = [ -a_1-k_1  -a_2-k_2  -a_3-k_3 ; 1 0 0 ; 0 1 0 ]
so that the characteristic equation for the closed-loop system is still obvious:
\Phi_{cl}(s) = det(sI - A_{cl}) = s^3 + (a_1 + k_1)s^2 + (a_2 + k_2)s + (a_3 + k_3) = 0
We then compare this with the desired characteristic equation developed from the desired closed-loop pole locations:
\Phi_d(s) = s^3 + \alpha_1 s^2 + \alpha_2 s + \alpha_3 = 0
to get that
a_1 + k_1 = \alpha_1, ..., a_n + k_n = \alpha_n   =>   k_1 = \alpha_1 - a_1, ..., k_n = \alpha_n - a_n

To get the specifics of the Ackermann formula, we then:
- Take an arbitrary (A, B) and transform it to the control canonical form (x -> z = T^{-1}x).
- Solve for the gains \bar{K} using the formulas above for the state z (u = -\bar{K}z).
- Then switch back to the gains needed for the state x, so that K = \bar{K}T^{-1} (u = -\bar{K}z = -Kx).

Pole placement is a very powerful tool and we will be using it for most of this course.

MATLAB is a trademark of The MathWorks, Inc.
Topic #14
16.31 Feedback Control (Fall 2001)
State-Space Systems
- Full-state Feedback Control
- How do we change the poles of the state-space system?
- Or, even if we can change the pole locations, where do we change the pole locations to?
- How well does this approach work?
Copyright 2001 by Jonathan How.
Reference Inputs

So far we have looked at how to pick K to get the dynamics to have some nice properties (i.e., stabilize A). The question remains as to how well this controller allows us to track a reference command: a performance issue rather than just stability.

Started with
\dot{x} = Ax + Bu,  y = Cx,  u = r - Kx
For good tracking performance we want y(t) -> r(t) as t -> infinity.

Consider this performance issue in the frequency domain, using the final value theorem:
lim_{t->inf} y(t) = lim_{s->0} sY(s)
Thus, for good performance, we want sY(s) -> sR(s) as s -> 0, i.e.
Y(s)/R(s) |_{s=0} = 1
So, for good performance, the transfer function from R(s) to Y(s) should be approximately 1 at DC.
Example #1: Consider
\dot{x} = [ 1 1 ; 1 2 ] x + [ 1 ; 0 ] u,   y = [ 1 0 ] x
We already designed K = [ 14  57 ]. Then the closed-loop system is
\dot{x} = (A - BK)x + Br,   y = Cx
which gives the transfer function
Y(s)/R(s) = C (sI - (A - BK))^{-1} B = [ 1 0 ] [ s+13  56 ; -1  s-2 ]^{-1} [ 1 ; 0 ] = (s-2)/(s^2 + 11s + 30)
Assume that r(t) is a step; then by the FVT
Y(s)/R(s) |_{s=0} = -2/30 = -1/15 != 1 !!
So our step response is quite poor.
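The poor DC gain can be verified directly: at s = 0 the transfer function reduces to C (-A_cl)^{-1} B. A sketch with a hand-coded 2x2 inverse (plain Python; the variable names are ours):

```python
# Closed-loop DC gain C (sI - Acl)^{-1} B at s = 0, i.e. C (-Acl)^{-1} B,
# for Example #1 with K = [14, 57].
Acl = [[-13.0, -56.0], [1.0, 2.0]]     # A - B*K
B = [1.0, 0.0]
C = [1.0, 0.0]

M = [[-Acl[0][0], -Acl[0][1]], [-Acl[1][0], -Acl[1][1]]]   # M = -Acl
det = M[0][0]*M[1][1] - M[0][1]*M[1][0]
Minv_B = [(M[1][1]*B[0] - M[0][1]*B[1]) / det,             # M^{-1} B
          (-M[1][0]*B[0] + M[0][0]*B[1]) / det]
dc_gain = C[0]*Minv_B[0] + C[1]*Minv_B[1]

assert abs(dc_gain - (-1.0/15.0)) < 1e-12   # far from the desired value of 1
```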
One obvious solution is to scale the reference input r(t) so that
u = \bar{N} r - Kx
where \bar{N} is an extra gain used to scale the closed-loop transfer function. Now we have
\dot{x} = (A - BK)x + B\bar{N}r,   y = Cx
so that
Y(s)/R(s) = C (sI - (A - BK))^{-1} B \bar{N}
If we had made \bar{N} = -15, then
Y(s)/R(s) = -15(s-2)/(s^2 + 11s + 30)
so with a step input, y(t) -> 1 as t -> infinity. So the steady-state step error is now zero, but is this OK? See the plots: a big improvement in the response, but the transient is a bit weird.
[Figure 1: Step responses y(t) for u = r - Kx and u = \bar{N}r - Kx. Response to step input with and without the \bar{N} correction.]
A formal way to compute \bar{N} is to change the form of the control input. Consider the analysis for the response to a step input r = r_{ss} 1(t). At steady state \dot{x} = 0, so from
\dot{x} = Ax + Bu,  y = Cx
we have
0 = A x_{ss} + B u_{ss},   y_{ss} = C x_{ss}
and if things are going well, then y_{ss} = r_{ss}. Stack these as
[ A  B ; C  0 ] [ x_{ss} ; u_{ss} ] = [ 0 ; r_{ss} ]
which can be easily solved for x_{ss} and u_{ss}.
For purposes of scaling, define
x_{ss} = N_x r_{ss},   u_{ss} = N_u r_{ss}
We would then implement the control in the new form
u = \bar{N}r - Kx = (N_u + K N_x)r - Kx = N_u r + K(N_x r - x) = u_{ss} + K(x_{ss} - x)
which can be visualized as:
- Use N_x to modify the reference command r to generate a feedforward state command x_{ss} to the system.
- Use N_u to modify the reference command r to generate a feedforward control input u_{ss}.

Note that this development assumed that r was constant, but it could also be used if r is a slowly time-varying command. But as we have seen, the architecture is designed to give good steady-state behavior, and it might give weird transient responses.
For our example,

x
ss
u
ss

A B
C 0

0
1

1
0.5
0.5

so
x
ss
=

1
0.5

, u
ss
=

0.5

and

N = N
u
+ KN
x
= 0.5 +

14 57

1
0.5

= 15
as we had before.
Fall 2001 16.31 147
0 0.5 1 1.5 2 2.5 3 3.5 4
0.08
0.06
0.04
0.02
0
0.02
0.04
0.06
time (sec)
X

s
t
a
t
e
Step Response
x
1
x
2
0 0.5 1 1.5 2 2.5 3 3.5 4
0.4
0.2
0
0.2
0.4
0.6
0.8
1
time (sec)
U

c
o
n
t
r
o
l
Step Response: u=rKx
u=rKx
Figure 2: Response to step input without the

N correction. The steady state x and u values are non-zero but
they are not the values that give the desired y
ss
.
0 0.5 1 1.5 2 2.5 3 3.5 4
1
0.5
0
0.5
1
time (sec)
X

s
t
a
t
e
Step Response
x
1
x
2
0 0.5 1 1.5 2 2.5 3 3.5 4
15
10
5
0
5
time (sec)
U

c
o
n
t
r
o
l
Step Response: u=Nbar rKx
u=Nbar rKx
Figure 3: Response to step input with the

N correction. Gives the desired steady-state behavior, but note the
higher u(0).
Fall 2001 16.31 148
Examples
Example from Problem set
G(s) =
8 14 20
(s + 8)(s + 14)(s + 20)
Target pole locations 12 12j, 20
% system
[a,b,c,d]=tf2ss(8*14*20,conv([1 8],conv([1 14],[1 20])))
% controller gains to place poles at specified locations
k=place(a,b,[-12+12*j;-12-12*j;-20])
TT=[a b;c d];
% find the feedforward gains
N=inv(TT)*[zeros(1,length(a)) 1];
Nx=N(1:end-1);Nu=N(end);Nbar=Nu+k*Nx;
sys1=ss(a-b*k,b,c,d);
sys2=ss(a-b*k,b*Nbar,c,d);
t=[0:.01:1];
[y,t,x]=step(sys1,t);
[y2,t2,x2]=step(sys2,t);
0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1
0
0.2
0.4
0.6
0.8
1
1.2
time (sec)
Y

o
u
t
p
u
t
Step Response
2 216 3520
2.5714
u=rKx
u=Nbar rKx
Figure 4: Response to step input with and without the

N correction. Gives the desired steady-state behavior,
with little diculty!
Fall 2001 16.31 149
10
1
10
0
10
1
10
2
10
3
10
2
10
1
10
0
Freq (rad/sec)
G
c
l
Closedloop Freq Response
u=rKx
u=Nbar rKx
Figure 5: Closed-loop frequency response. Clearly shows that the DC gain is unity
Fall 2001 16.31 1410
Example from second problem set
G(s) =
0.94
s
2
0.0297
Target pole locations 0.25 0.25j
[a,b,c,d]=tf2ss(.94,[1 0 -0.0297])
k=place(a,b,[-1+j;-1-j]/4)
TT=[a b;c d];
N=inv(TT)*[zeros(1,length(a)) 1];
Nx=N(1:end-1);Nu=N(end);Nbar=Nu+k*Nx;
sys1=ss(a-b*k,b,c,d);
sys2=ss(a-b*k,b*Nbar,c,d);
t=[0:.1:30];
[y,t,x]=step(sys1,t);
[y2,t2,x2]=step(sys2,t);
0 5 10 15 20 25 30
0
0.2
0.4
0.6
0.8
1
1.2
time (sec)
Y

o
u
t
p
u
t
Step Response
0.5 0.1547
0.13298
u=rKx
u=Nbar rKx
Figure 6: Response to step input with and without the

N correction. Gives the desired steady-state behavior,
with little diculty!
Fall 2001 16.31 1411
10
1
10
0
10
1
10
2
10
3
10
2
10
1
10
0
Freq (rad/sec)
G
c
l
Closedloop Freq Response
u=rKx
u=Nbar rKx
Figure 7: Closed-loop frequency response. Clearly shows that the DC gain is unity
Fall 2001 16.31 1412
Ok, so lets try something very challenging...
G(s) =
8 14 20
(s 8)(s 14)(s 20)
Target pole locations 12 12j, 20
[a,b,c,d]=tf2ss(8*14*20,conv([1 -8],conv([1 -14],[1 -20])))
k=place(a,b,[-12+12*j;-12-12*j;-20])
TT=[a b;c d];
N=inv(TT)*[zeros(1,length(a)) 1];
Nx=N(1:end-1);Nu=N(end);Nbar=Nu+k*Nx;
sys1=ss(a-b*k,b,c,d);
sys2=ss(a-b*k,b*Nbar,c,d);
t=[0:.01:1];
[y,t,x]=step(sys1,t);
[y2,t2,x2]=step(sys2,t);
0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1
0
0.2
0.4
0.6
0.8
1
1.2
time (sec)
Y

o
u
t
p
u
t
Step Response
86 216 8000
2.5714
u=rKx
u=Nbar rKx
Figure 8: Response to step input with and without the

N correction. Gives the desired steady-state behavior,
with little diculty!
Fall 2001 16.31 1413
10
3
10
2
10
1
10
0
10
1
10
3
10
2
10
1
10
0
10
1
Freq (rad/sec)
G
c
l
Closedloop Freq Response
u=rKx
u=Nbar rKx
Figure 9: Closed-loop frequency response. Clearly shows that the DC gain is unity
Fall 2001 16.31 1414
The worst possible...
G(s) =
(s 1)
(s + 1)(s 3)
Unstable, NMP!!
Target pole locations 1 j
[a,b,c,d]=tf2ss([1 -1],conv([1 1],[1 -3]))
k=place(a,b,[[-1+j;-1-j]])
TT=[a b;c d];
N=inv(TT)*[zeros(1,length(a)) 1];Nx=N(1:end-1);Nu=N(end);Nbar=Nu+k*Nx;
sys1=ss(a-b*k,b,c,d);
sys2=ss(a-b*k,b*Nbar,c,d);
t=[0:.01:10];[y,t,x]=step(sys1,t);[y2,t2,x2]=step(sys2,t);
0 1 2 3 4 5 6 7 8 9 10
1
0.8
0.6
0.4
0.2
0
0.2
0.4
0.6
0.8
1
time (sec)
Y

o
u
t
p
u
t
Unstable, NMP system Step Response
4 5
2
u=rKx
u=Nbar rKx
Figure 10: Response to step input with and without the

N correction. Gives the desired steady-state behavior,
with little diculty!
Fall 2001 16.31 1415
10
2
10
1
10
0
10
1
10
2
10
2
10
1
10
0
10
1
Freq (rad/sec)
G
c
l
Closedloop Freq Response
u=rKx
u=Nbar rKx
Figure 11: Closed-loop frequency response. Clearly shows that the DC gain is unity
Full state feedback process is quite simple as it can be automated
in Matlab using acker and/or place
With more than 1 actuator, we have more than n degrees of freedom
in the control we can change the eigenvectors as desired, as well
as the poles.
The real issue now is where to put the poles...
And to correct the fact that we cannot usually measure the state
develop an estimator.

MATLAB is a trademark of The MathWorks, Inc.

Topic #14
16.31 Feedback Control

State-Space Systems: Full-State Feedback Control
- How do we change the poles of the state-space system?
- Even if we can change the pole locations, where should we put them?
- How well does this approach work?

Copyright 2001 by Jonathan How.
Fall 2001 16.31 14-1

Reference Inputs

- So far we have looked at how to pick K to give the closed-loop dynamics some nice properties (i.e., stabilize A).
- The question remains: how well does this controller allow us to track a reference command? This is a performance issue rather than just a stability issue.
- Started with

    \dot{x} = Ax + Bu, \qquad y = Cx
    u = r - Kx

- For good tracking performance we want y(t) \approx r(t) as t \to \infty.
- Consider this performance issue in the frequency domain, using the final value theorem:

    \lim_{t \to \infty} y(t) = \lim_{s \to 0} sY(s)

- Thus, for good performance, we want sY(s) \approx sR(s) as s \to 0, i.e.

    \left. \frac{Y(s)}{R(s)} \right|_{s=0} = 1

- So, for good performance, the transfer function from R(s) to Y(s) should be approximately 1 at DC.
Example #1: Consider

    \dot{x} = \begin{bmatrix} 1 & 1 \\ 1 & 2 \end{bmatrix} x + \begin{bmatrix} 1 \\ 0 \end{bmatrix} u,
    \qquad
    y = \begin{bmatrix} 1 & 0 \end{bmatrix} x

with the gain K = \begin{bmatrix} 14 & 57 \end{bmatrix} already designed. Then the closed-loop system is

    \dot{x} = (A - BK)x + Br, \qquad y = Cx

which gives the transfer function

    \frac{Y(s)}{R(s)} = C\,(sI - (A - BK))^{-1} B
    = \begin{bmatrix} 1 & 0 \end{bmatrix}
      \begin{bmatrix} s + 13 & 56 \\ -1 & s - 2 \end{bmatrix}^{-1}
      \begin{bmatrix} 1 \\ 0 \end{bmatrix}
    = \frac{s - 2}{s^2 + 11s + 30}

Assume that r(t) is a step; then by the FVT

    \left. \frac{Y(s)}{R(s)} \right|_{s=0} = \frac{-2}{30} \neq 1 \;!!

So our step response is quite poor.
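As a quick numerical check (not part of the original notes), the DC gain C(-(A - BK))^{-1}B for Example #1 can be computed in a few lines of pure Python, with the 2x2 inverse written out in closed form:

```python
# Closed-loop DC gain C (-(A - BK))^(-1) B for Example #1.
A = [[1.0, 1.0], [1.0, 2.0]]
B = [1.0, 0.0]
C = [1.0, 0.0]
K = [14.0, 57.0]

# Acl = A - B K  (B is a column, K a row)
Acl = [[A[i][j] - B[i] * K[j] for j in range(2)] for i in range(2)]

def dc_gain(Acl, B, C):
    """Return C (-Acl)^(-1) B using the closed-form 2x2 inverse."""
    a, b = -Acl[0][0], -Acl[0][1]
    c, d = -Acl[1][0], -Acl[1][1]
    det = a * d - b * c
    # inverse = (1/det) [[d, -b], [-c, a]], then multiply by B and C
    x0 = (d * B[0] - b * B[1]) / det
    x1 = (-c * B[0] + a * B[1]) / det
    return C[0] * x0 + C[1] * x1

print(dc_gain(Acl, B, C))  # -2/30, i.e. about -0.0667, not 1
```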
- One obvious solution is to scale the reference input r(t), so that

    u = \bar{N} r - Kx

  where \bar{N} is an extra gain used to scale the closed-loop transfer function. Now we have

    \dot{x} = (A - BK)x + B\bar{N} r, \qquad y = Cx

  so that

    \frac{Y(s)}{R(s)} = C\,(sI - (A - BK))^{-1} B \bar{N}

- If we had made \bar{N} = -15, then

    \frac{Y(s)}{R(s)} = \frac{-15(s - 2)}{s^2 + 11s + 30}

  so with a step input, y(t) \to 1 as t \to \infty.
- So the steady-state step error is now zero, but is this OK? See the plots: a big improvement in the response, but the transient is a bit weird.
[Figure 1: Response to step input with and without the \bar{N} correction (u = r - Kx vs. u = \bar{N}r - Kx).]
- The formal way to compute \bar{N} is to change the form of the control input. Consider the response to a step input r = r_{ss} 1(t). At steady state \dot{x} = 0, so the system

    \dot{x} = Ax + Bu, \qquad y = Cx

  gives

    0 = A x_{ss} + B u_{ss}, \qquad y_{ss} = C x_{ss}

  and if things are going well, then y_{ss} = r_{ss}. Collecting these conditions:

    \begin{bmatrix} A & B \\ C & 0 \end{bmatrix}
    \begin{bmatrix} x_{ss} \\ u_{ss} \end{bmatrix}
    = \begin{bmatrix} 0 \\ r_{ss} \end{bmatrix}

  which can be easily solved for x_{ss} and u_{ss}.
- For purposes of scaling, define

    x_{ss} = N_x r_{ss}, \qquad u_{ss} = N_u r_{ss}

- We would then implement the control in the new form

    u = \bar{N} r - Kx = (N_u + K N_x) r - Kx
      = N_u r + K (N_x r - x)
      = u_{ss} + K (x_{ss} - x)

  which can be visualized as a feedforward/feedback architecture:
  - Use N_x to modify the reference command r into a feedforward state command x_{ss} for the system.
  - Use N_u to modify the reference command r into a feedforward control input u_{ss}.
- Note that this development assumed that r was constant, but it could also be used if r is a slowly time-varying command.
- But, as we have seen, the architecture is designed to give good steady-state behavior, and it might give weird transient responses.
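To see this architecture work numerically, here is a small sketch (my addition, not part of the original notes): a forward-Euler simulation of Example #1 under u = \bar{N}r - Kx, which should settle at y = 1 for a unit step. The step size and horizon are arbitrary choices:

```python
# Forward-Euler simulation of Example #1 with u = Nbar*r - K*x.
A = [[1.0, 1.0], [1.0, 2.0]]
B = [1.0, 0.0]
C = [1.0, 0.0]
K = [14.0, 57.0]
Nbar = -15.0

dt, T = 1e-4, 4.0          # integration step and horizon (sec), arbitrary
x = [0.0, 0.0]             # initial state
r = 1.0                    # unit step reference
for _ in range(int(T / dt)):
    u = Nbar * r - (K[0] * x[0] + K[1] * x[1])
    dx0 = A[0][0] * x[0] + A[0][1] * x[1] + B[0] * u
    dx1 = A[1][0] * x[0] + A[1][1] * x[1] + B[1] * u
    x = [x[0] + dt * dx0, x[1] + dt * dx1]

y = C[0] * x[0] + C[1] * x[1]
print(y)  # settles near 1
```

Note that the initial control is u(0) = -15, consistent with the large initial control effort visible in the plots.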
- For our example,

    \begin{bmatrix} x_{ss} \\ u_{ss} \end{bmatrix}
    = \begin{bmatrix} A & B \\ C & 0 \end{bmatrix}^{-1}
      \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}
    = \begin{bmatrix} 1 \\ -0.5 \\ -0.5 \end{bmatrix}

  so

    x_{ss} = \begin{bmatrix} 1 \\ -0.5 \end{bmatrix}, \qquad u_{ss} = -0.5

  and

    \bar{N} = N_u + K N_x
            = -0.5 + \begin{bmatrix} 14 & 57 \end{bmatrix}
                     \begin{bmatrix} 1 \\ -0.5 \end{bmatrix}
            = -15

  as we had before.
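The steady-state solve above is easy to automate. This pure-Python sketch (my addition, using a generic Gaussian-elimination helper instead of the MATLAB inv used in the code later in the notes) reproduces N_x, N_u, and \bar{N} for this example:

```python
# Solve [A B; C 0] [xss; uss] = [0; rss] for Example #1 and form Nbar.
def solve(M, b):
    """Gaussian elimination with partial pivoting, for small dense systems."""
    n = len(M)
    M = [row[:] + [b[i]] for i, row in enumerate(M)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

A = [[1.0, 1.0], [1.0, 2.0]]
B = [1.0, 0.0]
C = [1.0, 0.0]
K = [14.0, 57.0]

big = [A[0] + [B[0]], A[1] + [B[1]], C + [0.0]]   # [A B; C 0]
Nx0, Nx1, Nu = solve(big, [0.0, 0.0, 1.0])        # right-hand side with rss = 1
Nbar = Nu + K[0] * Nx0 + K[1] * Nx1
print(Nx0, Nx1, Nu, Nbar)  # 1.0 -0.5 -0.5 -15.0
```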
[Figure 2: Response to step input (states x1, x2 and control u) without the \bar{N} correction, u = r - Kx. The steady-state x and u values are nonzero, but they are not the values that give the desired y_{ss}.]

[Figure 3: Response to step input (states and control) with the \bar{N} correction, u = \bar{N}r - Kx. Gives the desired steady-state behavior, but note the higher u(0).]
Examples

Example from the problem set:

    G(s) = \frac{8 \cdot 14 \cdot 20}{(s + 8)(s + 14)(s + 20)}

Target pole locations: -12 \pm 12j, -20.

% system
[a,b,c,d]=tf2ss(8*14*20,conv([1 8],conv([1 14],[1 20])))
% controller gains to place poles at specified locations
k=place(a,b,[-12+12*j;-12-12*j;-20])
TT=[a b;c d];
% find the feedforward gains (note the transpose: the RHS must be a column)
N=inv(TT)*[zeros(1,length(a)) 1]';
Nx=N(1:end-1);Nu=N(end);Nbar=Nu+k*Nx;
sys1=ss(a-b*k,b,c,d);
sys2=ss(a-b*k,b*Nbar,c,d);
t=[0:.01:1];
[y,t,x]=step(sys1,t);
[y2,t2,x2]=step(sys2,t);
[Figure 4: Response to step input with and without the \bar{N} correction (k = [2 216 3520], \bar{N} = 2.5714). Gives the desired steady-state behavior, with little difficulty!]

[Figure 5: Closed-loop frequency response. Clearly shows that the DC gain is unity with the \bar{N} correction.]
Example from the second problem set:

    G(s) = \frac{0.94}{s^2 - 0.0297}

Target pole locations: -0.25 \pm 0.25j.

[a,b,c,d]=tf2ss(.94,[1 0 -0.0297])
k=place(a,b,[-1+j;-1-j]/4)
TT=[a b;c d];
N=inv(TT)*[zeros(1,length(a)) 1]';
Nx=N(1:end-1);Nu=N(end);Nbar=Nu+k*Nx;
sys1=ss(a-b*k,b,c,d);
sys2=ss(a-b*k,b*Nbar,c,d);
t=[0:.1:30];
[y,t,x]=step(sys1,t);
[y2,t2,x2]=step(sys2,t);
[Figure 6: Response to step input with and without the \bar{N} correction (k = [0.5 0.1547], \bar{N} = 0.13298). Gives the desired steady-state behavior, with little difficulty!]

[Figure 7: Closed-loop frequency response. Clearly shows that the DC gain is unity with the \bar{N} correction.]
OK, so let's try something very challenging...

    G(s) = \frac{8 \cdot 14 \cdot 20}{(s - 8)(s - 14)(s - 20)}

Target pole locations: -12 \pm 12j, -20.

[a,b,c,d]=tf2ss(8*14*20,conv([1 -8],conv([1 -14],[1 -20])))
k=place(a,b,[-12+12*j;-12-12*j;-20])
TT=[a b;c d];
N=inv(TT)*[zeros(1,length(a)) 1]';
Nx=N(1:end-1);Nu=N(end);Nbar=Nu+k*Nx;
sys1=ss(a-b*k,b,c,d);
sys2=ss(a-b*k,b*Nbar,c,d);
t=[0:.01:1];
[y,t,x]=step(sys1,t);
[y2,t2,x2]=step(sys2,t);
[Figure 8: Response to step input with and without the \bar{N} correction (k = [86 216 8000], \bar{N} = 2.5714). Gives the desired steady-state behavior, with little difficulty!]

[Figure 9: Closed-loop frequency response. Clearly shows that the DC gain is unity with the \bar{N} correction.]
The worst possible: unstable and NMP!!

    G(s) = \frac{s - 1}{(s + 1)(s - 3)}

Target pole locations: -1 \pm j.

[a,b,c,d]=tf2ss([1 -1],conv([1 1],[1 -3]))
k=place(a,b,[-1+j;-1-j])
TT=[a b;c d];
N=inv(TT)*[zeros(1,length(a)) 1]';
Nx=N(1:end-1);Nu=N(end);Nbar=Nu+k*Nx;
sys1=ss(a-b*k,b,c,d);
sys2=ss(a-b*k,b*Nbar,c,d);
t=[0:.01:10];
[y,t,x]=step(sys1,t);
[y2,t2,x2]=step(sys2,t);
[Figure 10: Unstable, NMP system: response to step input with and without the \bar{N} correction (k = [4 5], \bar{N} = -2). Gives the desired steady-state behavior, with little difficulty!]

[Figure 11: Closed-loop frequency response. Clearly shows that the DC gain is unity with the \bar{N} correction.]

- The full-state feedback process is quite simple, as it can be automated in MATLAB using acker and/or place.
- With more than 1 actuator, we have more than n degrees of freedom in the control: we can change the eigenvectors as desired, as well as the poles.
- The real issue now is where to put the poles...
- And, to correct the fact that we cannot usually measure the state, we must develop an estimator.

MATLAB is a trademark of The MathWorks, Inc.
Topic #15
16.31 Feedback Control

State-Space Systems: Full-State Feedback Control
- How do we change the poles of the state-space system?
- Even if we can change the pole locations, where do we put the poles?
  - Linear Quadratic Regulator
  - Symmetric Root Locus
- How well does this approach work?

Copyright 2001 by Jonathan How.
Pole Placement

- So far we have looked at how to pick K to give the dynamics some nice properties (i.e., stabilize A):

    \lambda_i(A) \;\Rightarrow\; \lambda_i(A - BK)

- Classic question: where should we put these closed-loop poles?
- Of course, we can use the time-domain specifications to locate the dominant poles, the roots of

    s^2 + 2\zeta\omega_n s + \omega_n^2 = 0

  and then place the rest of the poles so that they are much faster than the dominant behavior. For example, we could keep the same damped frequency \omega_d and move the real part to be 2-3 times faster than the real part \zeta\omega_n of the dominant poles.
- Just be careful about moving the poles too far to the left, because it takes a lot of control effort.
- We could also choose the closed-loop poles to mimic a prototype system that has performance similar to what we would like to achieve: just set the pole locations equal to those of the prototype system. Various options exist.
- Bessel polynomial systems of order k:

    G_p(s) = \frac{1}{B_k(s)}

  The tabulated pole locations are all scaled to give settling times of 1 second, which you can change to t_s by dividing the poles by t_s.
- Procedure for an n-th order system:
  - Determine the desired settling time t_s.
  - Find the k = n polynomial from the table.
  - Divide the tabulated pole locations by t_s.
  - Form the desired characteristic polynomial \Phi_d(s) and use acker/place to determine the feedback gains.
  - Simulate to check the performance and control effort.
- Example:

    G(s) = \frac{1}{s(s + 4)(s + 1)}

  with

    A = \begin{bmatrix} -5 & -4 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix},
    \qquad
    B = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}

  so that n = k = 3. Want t_s = 2 sec, so there are 3 poles at

    -5.0093/2 = -2.5047 \quad\text{and}\quad (-3.9668 \pm 3.7845i)/2 = -1.9834 \pm 1.8922i

  Use these to form \Phi_d(s) and find the gains using acker.
- The Bessel approach is fine, but the step response is a bit slow.
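Since this A, B pair is in controller canonical form, the acker step can be sketched in a few lines of pure Python: the feedback gains are just the differences between the desired and open-loop characteristic polynomial coefficients. This is my illustration of the procedure, not the notes' code:

```python
# Bessel pole placement for G(s) = 1/(s(s+4)(s+1)) in controller
# canonical form, where A = [[-a1,-a2,-a3],[1,0,0],[0,1,0]], B = [1,0,0]'.
# With u = -Kx the characteristic polynomial coefficients simply shift,
# so K = (desired coefficients) - (open-loop coefficients).

def poly_from_roots(roots):
    """Coefficients [1, c1, ..., cn] of prod_i (s - r_i)."""
    coeffs = [complex(1.0)]
    for r in roots:
        new = [complex(0.0)] * (len(coeffs) + 1)
        for i, c in enumerate(coeffs):
            new[i] += c          # contribution from multiplying by s
            new[i + 1] -= c * r  # contribution from multiplying by -r
        coeffs = new
    return coeffs

ts = 2.0
bessel_poles = [-5.0093, -3.9668 + 3.7845j, -3.9668 - 3.7845j]  # 1-sec table
poles = [p / ts for p in bessel_poles]                          # scale by ts

phi_d = [c.real for c in poly_from_roots(poles)]  # desired char. polynomial
a_ol = [1.0, 5.0, 4.0, 0.0]                       # s(s+4)(s+1)
K = [phi_d[i] - a_ol[i] for i in range(1, 4)]
print(K)  # roughly [1.47, 13.45, 18.82]
```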
- Another approach is to select the poles to match the n-th order polynomial that was designed to minimize the ITAE cost, the integral of the time multiplied by the absolute value of the error,

    J_{ITAE} = \int_0^\infty t\,|e(t)|\,dt

  in response to a step function.
- Both the Bessel and ITAE pole locations are tabulated in FPE-508.
- Comparison for k = 3 (given for \omega_0 = 1 rad/sec, so slightly different from the numbers given on the previous page):

    \Phi_d^{B} = (s + 0.9420)(s + 0.7465 \pm 0.7112i)
    \Phi_d^{ITAE} = (s + 0.7081)(s + 0.5210 \pm 1.068i)

- So the ITAE poles are not as heavily damped: some overshoot, but faster rise times.
- The problem with both of these approaches is that they completely ignore the control effort required, so the designer must iterate.
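The damping comparison is easy to quantify (my addition, not in the original notes): the damping ratio of a complex pole is \zeta = -Re(s)/|s|:

```python
import math

def damping(pole):
    """Damping ratio of a complex pole s = -zeta*wn +/- j*wd."""
    return -pole.real / abs(pole)

bessel = complex(-0.7465, 0.7112)  # complex pole of the k = 3 Bessel set
itae = complex(-0.5210, 1.068)     # complex pole of the k = 3 ITAE set
print(damping(bessel), damping(itae))  # ~0.72 vs ~0.44
```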
Linear Quadratic Regulator

- An alternative approach is to place the pole locations so that the closed-loop (SISO) system optimizes the cost function

    J_{LQR} = \int_0^\infty \left[ x^T(t)\,(C^T C)\,x(t) + r\,u(t)^2 \right] dt

  where:
  - y^T y = x^T (C^T C) x (assuming D = 0) is called the state cost,
  - u^2 is called the control cost, and
  - r is the control penalty.
- This is a simple form of the Linear Quadratic Regulator problem.
- Can show that the optimal control is a linear state feedback

    u(t) = -K_{lqr} x(t)

  with K_{lqr} found by solving an Algebraic Riccati Equation (ARE).
- We will look at the details of this solution procedure later. For now, let's just look at the optimal closed-loop pole locations.
- Consider a SISO system with a minimal model

    \dot{x} = Ax + Bu, \qquad y = Cx

  where

    a(s) = \det(sI - A) \quad\text{and}\quad C(sI - A)^{-1}B = \frac{b(s)}{a(s)}

- Then(1), with u(t) = -K_{lqr} x(t), the closed-loop dynamics are

    \det(sI - A + BK_{lqr}) = \prod_{i=1}^{n} (s - p_i)

  where the p_i are the left-half-plane roots of \Phi(s), with

    \Phi(s) = a(s)a(-s) + r^{-1} b(s)b(-s)

  Use this to find the optimal pole locations, and then use those to find the required feedback gains using acker.
- The pole locations can be found using standard root-locus tools:

    \Phi(s) = a(s)a(-s) + r^{-1} b(s)b(-s) = 0
    \quad\Leftrightarrow\quad
    1 + r^{-1} G(s)G(-s) = 0

- The plot is symmetric about both the real and imaginary axes: a Symmetric Root Locus (SRL). 2n poles are plotted as a function of r, and the poles we pick are always the n in the LHP.

(1) Several leaps are made here for now. We will come back to this LQR problem later.
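For the second-order example G(s) = 0.94/(s^2 - 0.0297), \Phi(s) is a quadratic in s^2, so the SRL poles can be computed in closed form. A pure-Python sketch (my addition, not the notes' method):

```python
import cmath

# SRL for G(s) = 0.94/(s^2 - 0.0297): Phi(s) = a(s)a(-s) + (1/r) b(s)b(-s)
# with a(s) = s^2 - 0.0297 (even, so a(-s) = a(s)) and b(s) = 0.94.
# Phi is a quadratic in w = s^2: (w - 0.0297)^2 + 0.94^2/r = 0.
def srl_lhp_poles(r):
    a0, b0 = -0.0297, 0.94
    ws = (-a0 + 1j * b0 / r**0.5, -a0 - 1j * b0 / r**0.5)
    poles = []
    for w in ws:
        s = cmath.sqrt(w)
        poles.append(s if s.real < 0 else -s)  # keep the LHP root
    return poles

for r in (100.0, 1.0, 0.01):
    print(r, srl_lhp_poles(r))  # poles move deeper into the LHP as r shrinks
```

As r grows large, the LHP poles approach -sqrt(0.0297), the reflection of the unstable open-loop pole, consistent with the high-control-cost limit discussed below.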
LQR Notes

1. The state cost was written using the output as y^T y, but that does not need to be the case. We are free to define a new system output z = C_z x that is not based on a physical sensor measurement:

    J_{LQR} = \int_0^\infty \left[ x^T(t)\,(C_z^T C_z)\,x(t) + r\,u(t)^2 \right] dt

   The selection of z is used to isolate the system states you are most concerned about, and thus would like to have regulated to zero.

2. Note what happens as r \to \infty, the high control cost case:

    a(s)a(-s) + r^{-1} b(s)b(-s) = 0 \;\longrightarrow\; a(s)a(-s) = 0

   So the n closed-loop poles are:
   - the stable roots of the open-loop system (already in the LHP), and
   - the reflections about the j\omega-axis of the unstable open-loop poles.

3. Note what happens as r \to 0, the low control cost case:

    a(s)a(-s) + r^{-1} b(s)b(-s) = 0 \;\longrightarrow\; b(s)b(-s) = 0

   Assume the order of b(s)b(-s) is 2m < 2n. Then the n closed-loop poles go to:
   - the m finite zeros of the system that are in the LHP (or the reflections of the system's RHP zeros), and
   - the system zeros at infinity (there are n - m of these).
- Note that the poles tending to infinity do so along very specific paths, so that they form a Butterworth pattern.
- At high frequency we can ignore all but the highest powers of s in the expression for \Phi(s) = 0 (b_o is the leading coefficient of b(s)):

    \Phi(s) = 0 \;\Rightarrow\; (-1)^n s^{2n} + r^{-1}(-1)^m (b_o s^m)^2 = 0

    \Rightarrow\; s^{2(n-m)} = (-1)^{n-m+1}\,\frac{b_o^2}{r}

- The 2(n - m) solutions of this expression lie on a circle of radius

    (b_o^2/r)^{1/(2(n-m))}

  at the intersections with radial lines whose phase, measured from the negative real axis, is

    \frac{\pm l\pi}{n - m}, \quad l = 0, 1, \ldots, \frac{n - m - 1}{2}, \quad (n - m) \text{ odd}

    \frac{\pm (l + 1/2)\pi}{n - m}, \quad l = 0, 1, \ldots, \frac{n - m}{2} - 1, \quad (n - m) \text{ even}

- Examples:

    n - m    Phase
    1        0
    2        +/- pi/4
    3        0, +/- pi/3
    4        +/- pi/8, +/- 3*pi/8

- Note: plot the SRL using the 180 degree rules (normal) if n - m is even and the 0 degree rules if n - m is odd.
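The asymptote formula can be checked numerically. This sketch (my addition) enumerates the 2(n - m) solutions of s^{2(n-m)} = (-1)^{n-m+1} b_o^2 / r:

```python
import cmath
import math

def butterworth_srl_asymptotes(n, m, b0, r):
    """All solutions of s^(2(n-m)) = (-1)^(n-m+1) * b0^2 / r."""
    k = n - m
    radius = (b0**2 / r) ** (1.0 / (2 * k))
    # phase of the right-hand side: 0 when (-1)^(k+1) = +1, pi when it is -1
    rhs_phase = 0.0 if (k + 1) % 2 == 0 else math.pi
    return [radius * cmath.exp(1j * (rhs_phase + 2 * math.pi * q) / (2 * k))
            for q in range(2 * k)]

# n - m = 2: the LHP solutions sit at +/- pi/4 from the negative real axis
sols = butterworth_srl_asymptotes(n=3, m=1, b0=1.0, r=1.0)
for s in sols:
    print(s, cmath.phase(s))
```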
[Figure 1: Example #1, G(s) = 8*14*20/((s+8)(s+14)(s+20)): symmetric root locus, step response with and without the \bar{N} correction, and closed-loop frequency response.]

[Figure 2: Example #2, G(s) = 0.94/(s^2 - 0.0297): symmetric root locus, step response with and without the \bar{N} correction, and closed-loop frequency response.]

[Figure 3: Example #3, G(s) = 8*14*20/((s-8)(s-14)(s-20)): symmetric root locus, step response with and without the \bar{N} correction, and closed-loop frequency response.]

[Figure 4: Example #4, G(s) = (s-1)/((s+1)(s-3)): symmetric root locus, step response (unstable, NMP system) with and without the \bar{N} correction, and closed-loop frequency response.]

[Figure 5: Example #5, G(s) = (s-2)(s-4)/((s-1)(s-3)(s^2+0.8s+4)s^2): symmetric root locus, step response (unstable, NMP system) with and without the \bar{N} correction, and closed-loop frequency response.]
- As noted previously, we are free to pick the state weighting matrix C_z to penalize the parts of the motion we are most concerned with.
- Simple example: an oscillator with state x = [p, v]^T,

    A = \begin{bmatrix} 0 & 1 \\ -2 & -0.5 \end{bmatrix},
    \qquad
    B = \begin{bmatrix} 0 \\ 1 \end{bmatrix}

  but we choose two cases for z:

    z = p = \begin{bmatrix} 1 & 0 \end{bmatrix} x
    \quad\text{and}\quad
    z = v = \begin{bmatrix} 0 & 1 \end{bmatrix} x

[Figure 6: SRL with position penalty (left) and velocity penalty (right).]

- Clearly, choosing a different C_z impacts the SRL, because it completely changes the zero structure of the system.
Summary

- The dominant second-order and prototype design approaches (Bessel and ITAE) place the closed-loop poles with no regard for the amount of control effort required.
  - The designer must iterate on the selected bandwidth (\omega_n) to ensure that the control effort is reasonable.
- The LQR/SRL approach selects closed-loop poles that balance system errors against control effort.
  - Easy design iteration using r: the poles move along the SRL.
  - It is sometimes difficult to relate the desired transient response to the LQR cost function.
- The nice thing about the LQR approach is that the designer stays focused on system performance issues; the pole locations are then supplied by the SRL.
Topic #15
16.31 Feedback Control
State-Space Systems
Full-state Feedback Control
How do we change the poles of the state-space system?
Or, even if we can change the pole locations.
Where do we put the poles?
Linear Quadratic Regulator
Symmetric Root Locus
How well does this approach work?
Copyright2001byJonathanHow.
1
Fall 2001 16.31 151
Pole Placement
So far we have looked at how to pick K to get the dynamics to have
some nice properties (i.e. stabilize A)

i
(A) ;
i
(ABK)
Classic Question: where should we put these closed-loop poles?
Of course we can use the time-domain specications to locate the
dominant poles roots of:
s
2
+ 2
n
s +
2
n
= 0
Then place rest of the poles so they are much faster than the
dominant behavior. For example:
Could keep the same damped frequency w
d
and then move the
real part to be 23 times faster than real part of dominant poles

n
Just be careful moving the poles too far to the left because it takes
a lot of control eort
Fall 2001 16.31 152
Could also choose the closed-loop poles to mimic a system that has
similar performance to what you would like to achieve:
Just set pole locations equal to those of the prototype system.
Various options exist
Bessel Polynomial Systems of order k G
p
(s) =
1
B
k
(s)
All scaled to give settling times of 1 second, which you can change
to t
s
by dividing the poles by t
s
.
Fall 2001 16.31 153
Procedure for an n
th
order system:
Determine the desired settling time t
s
Find the k = n polynomial from the table.
Divide pole locations by t
s
Form desired characteristic polynomial
d
(s) and use acker/place
to determine the feedback gains.
Simulate to check performance and control eort.
Example:
G(s) =
1
s(s + 4)(s + 1)
with
A =

5 4 0
1 0 0
0 1 0

B =

1
0
0

so that n = k = 3.
Want t
s
= 2 sec. So there are 3 poles at:
5.0093/2 = 2.5047 and
(3.9668 3.7845i)/2 = 1.9834 1.8922i
Use these to form
d
(s) and nd the gains using acker
The Bessel approach is ne, but the step response is a bit slow.
Fall 2001 16.31 154
Another approach is to select the poles to match the n
th
polyno-
mial that was designed to minimize the ITAE integral of the time
multiplied by the absolute value of the error
J
ITAE
=
Z

0
t |e(t)| dt
in response to a step function.
Both Bessel and ITAE are tabulated in FPE-508.
Comparison for k = 3 (Given for
0
= 1 rad/sec, so slightly
dierent than numbers given on previous page)

B
d
= (s + 0.9420)(s + 0.7465 0.7112i)

ITAE
d
= (s + 0.7081)(s + 0.5210 1.068i)
So the ITAE poles are not as heavily damped.
Some overshoot
Faster rise-times.
Problem with both of these approaches is that they completely ig-
nore the control eort required the designer must iterate.
Fall 2001 16.31 155
Linear Quadratic Regulator
An alternative approach is to place the pole locations so that the
closed-loop (SISO) system optimizes the cost function:
J
LQR
=
Z

0

x
T
(t)(C
T
C)x(t) + r u(t)
2

dt
Where:
y
T
y = x
T
(C
T
C)x {assuming D = 0} is called the State Cost
u
2
is called the Control Cost, and
r is the Control Penalty
Simple form of the Linear Quadratic Regulator Problem.
Can show that the optimal control is a linear state feedback:
u(t) = K
lqr
x(t)
K
lqr
found by solving an Algebraic Riccati Equation (ARE).
We will look at the details of this solution procedure later. For now,
lets just look at the optimal closed-loop pole locations.
Fall 2001 16.31 156
Consider a SISO system with a minimal model
x = Ax + Bu , y = Cx
where
a(s) = det(sI A) and C(sI A)
1
B
b(s)
a(s)
Then
1
with u(t) = K
lqr
x(t), closed-loop dynamics are:
det(sI A + BK
lqr
) =
n
Y
i=1
(s p
i
)
where the p
i
={ the left-hand-plane roots of (s)}, with
(s) = a(s)a(s) + r
1
b(s)b(s)
Use this to nd the optimal pole locations, and then use
those to nd the feedback gains required using acker.
The pole locations can be found using standard root-locus tools.
(s) = a(s)a(s) + r
1
b(s)b(s) = 0
1 + r
1
G(s)G(s) = 0
The plot is symmetric about the real and imaginary axes.
Symmetric Root Locus
2n poles are plotted as a function of r
The poles we pick are always the n in the LHP.
1
Several leaps made here for now. We will come back to this LQR problem later.
Fall 2001 16.31 157
LQR Notes
1. The state cost was written using the output y
T
y, but that does not
need to be the case.
We are free to dene a new system output z = C
z
x that is not
based on a physical sensor measurement.
J
LQR
=
Z

0

x
T
(t)(C
T
z
C
z
)x(t) + r u(t)
2

dt
Selection of z used to isolate the system states you are most
concerned about, and thus would like to be regulated to zero.
2. Note what happens as r ; high control cost case
a(s)a(s) + r
1
b(s)b(s) = 0 a(s)a(s) = 0
So the n closed-loop poles are:
Stable roots of the open-loop system (already in the LHP.)
Reection about the j-axis of the unstable open-loop poles.
3. Note what happens as r ;0 low control cost case
a(s)a(s) + r
1
b(s)b(s) = 0 b(s)b(s) = 0
Assume order of b(s)b(s) is 2m < 2n
So the n closed-loop poles go to:
The m nite zeros of the system that are in the LHP (or the
reections of the systems zeros in the RHP).
The system zeros at innity (there are n m of these).
Fall 2001 16.31 158
Note that the poles tending to innity do so along very specic paths
so that they form a Butterworth Pattern:
At high frequency we can ignore all but the highest powers of s
in the expression for (s) = 0
(s) = 0 ; (1)
n
s
2n
+ r
1
(1)
m
(b
o
s
m
)
2
= 0
s
2(nm)
= (1)
nm+1
b
2
o
r
The 2(n m) solutions of this expression lie on a circle of radius
(b
2
0
/r)
1/2(nm)
at the intersection of the radial lines with phase from the neg-
ative real axis:

l
n m
, l = 0, 1, . . . ,
n m1
2
, (n m) odd

(l + 1/2)
n m
, l = 0, 1, . . . ,
n m
2
1 , (n m) even
Examples:
n m Phase
1 0
2 /4
3 0, /3
4 /8, 3/8
Note: Plot the SRL using the 180
o
rules (normal) if nm is even
and the 0
o
rules if n m is odd.
Fall 2001 16.31 159
Figure 1: Example #1: G(s) =
81420
(s+8)(s+14)(s+20)
30 20 10 0 10 20 30
50
40
30
20
10
0
10
20
30
40
50
Real Axis
I
m
a
g

A
x
i
s
Symmetric root locus
0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1
0
0.2
0.4
0.6
0.8
1
1.2
time (sec)
Y

o
u
t
p
u
t
Step Response
2.042 2.9408 5.0676
3.3166
u=rKx
u=Nbar rKx
10
1
10
0
10
1
10
2
10
3
10
2
10
1
10
0
Freq (rad/sec)
G
c
l
Closedloop Freq Response
u=rKx
u=Nbar rKx
Fall 2001 16.31 1510
Figure 2: Example #2: G(s) =
0.94
s
2
0.0297
1 0.8 0.6 0.4 0.2 0 0.2 0.4 0.6 0.8 1
1
0.8
0.6
0.4
0.2
0
0.2
0.4
0.6
0.8
1
Real Axis
I
m
a
g

A
x
i
s
Symmetric root locus
0 5 10 15 20 25 30
0
0.2
0.4
0.6
0.8
1
1.2
time (sec)
Y

o
u
t
p
u
t
Step Response
1.6489 1.3594
1.4146
u=rKx
u=Nbar rKx
10
3
10
2
10
1
10
0
10
1
10
3
10
2
10
1
10
0
Freq (rad/sec)
G
c
l
Closedloop Freq Response
u=rKx
u=Nbar rKx
Fall 2001 16.31 1511
Figure 3: Example #3: G(s) =
81420
(s8)(s14)(s20)
30 20 10 0 10 20 30
50
40
30
20
10
0
10
20
30
40
50
Real Axis
I
m
a
g

A
x
i
s
Symmetric root locus
0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1
0
0.2
0.4
0.6
0.8
1
1.2
time (sec)
Y

o
u
t
p
u
t
Step Response
23.042 2.94079 9.44262
3.3166
u=rKx
u=Nbar rKx
10
1
10
0
10
1
10
2
10
3
10
2
10
1
10
0
Freq (rad/sec)
G
c
l
Closedloop Freq Response
u=rKx
u=Nbar rKx
Fall 2001 16.31 1512
Figure 4: Example #4: G(s) =
(s1)
(s+1)(s3)
5 4 3 2 1 0 1 2 3 4 5
1
0.8
0.6
0.4
0.2
0
0.2
0.4
0.6
0.8
1
Real Axis
I
m
a
g

A
x
i
s
Symmetric root locus
0 1 2 3 4 5 6 7 8 9 10
1
0.8
0.6
0.4
0.2
0
0.2
0.4
0.6
0.8
1
time (sec)
Y

o
u
t
p
u
t
Unstable, NMP system Step Response
7.3589 3.6794
4.3589
u=rKx
u=Nbar rKx
10
2
10
1
10
0
10
1
10
2
10
2
10
1
10
0
Freq (rad/sec)
G
c
l
Closedloop Freq Response
u=rKx
u=Nbar rKx
Fall 2001 16.31 1513
Figure 5: Example #5: G(s) =
(s2)(s4)
(s1)(s3)(s
2
+0.8s+4)s
2
6 4 2 0 2 4 6
8
6
4
2
0
2
4
6
8
Real Axis
I
m
a
g

A
x
i
s
Symmetric root locus
0 1 2 3 4 5 6 7 8 9 10
1
0.8
0.6
0.4
0.2
0
0.2
0.4
0.6
0.8
1
time (sec)
Y

o
u
t
p
u
t
Unstable, NMP system Step Response
10.0869 4.64874 8.01734 3.81
3.1623
u=rKx
u=Nbar rKx
10
2
10
1
10
0
10
1
10
2
10
8
10
7
10
6
10
5
10
4
10
3
10
2
10
1
10
0
Freq (rad/sec)
G
c
l
Closedloop Freq Response
u=rKx
u=Nbar rKx
Fall 2001 16.31 1514
As noted previously, we are free to pick the state weighting matrices
C
z
to penalize the parts of the motion we are most concerned with.
Simple example oscillator with x = [ p , v ]
T
A =

0 1
2 0.5

, B =

0
1

but we choose two cases for z


z = p =

1 0

x and z = v =

0 1

x
1 0.8 0.6 0.4 0.2 0 0.2 0.4 0.6 0.8 1
2
1.5
1
0.5
0
0.5
1
1.5
2
Real Axis
I
m
a
g

A
x
is
SRL with Position Penalty
2 1.5 1 0.5 0 0.5 1 1.5 2
1.5
1
0.5
0
0.5
1
1.5
Real Axis
I
m
a
g

A
x
is
SRL with Velocity Penalty
Figure 6: SRL with position (left) and velocity penalties (right)
Clearly, choosing a dierent C
z
impacts the SRL because it com-
pletely changes the zero-structure for the system.
Fall 2001 16.31 1515
Summary
Dominant second and prototype design approaches (Bessel and ITAE)
place the closed-loop pole locations with no regard to the
amount of control eort required.
Designer must iterate on the selected bandwidth (
n
) to ensure
that the control eort is reasonable.
LQR/SRL approach selects closed-loop poles that balance between
system errors and the control eort.
Easy design iteration using r poles move along the SRL.
Sometimes dicult to relate the desired transient response to the
LQR cost function.
Nice thing about the LQR approach is that the designer is focused
on system performance issues
The pole locations are then supplied using the SRL.
Topic #15
16.31 Feedback Control
State-Space Systems
Full-state Feedback Control
How do we change the poles of the state-space system?
Or, even if we can change the pole locations, where do we put the poles?
Linear Quadratic Regulator
Symmetric Root Locus
How well does this approach work?
Copyright 2001 by Jonathan How.
Fall 2001 16.31 15-1
Pole Placement
So far we have looked at how to pick K to get the dynamics to have
some nice properties (i.e., stabilize A):

λ_i(A) → λ_i(A − BK)
Classic Question: where should we put these closed-loop poles?
Of course we can use the time-domain specifications to locate the
dominant poles, the roots of:

s^2 + 2ζω_n s + ω_n^2 = 0
Then place rest of the poles so they are much faster than the
dominant behavior. For example:
Could keep the same damped frequency ω_d and then move the
real part to be 2-3 times faster than the real part of the dominant
poles −ζω_n
Just be careful moving the poles too far to the left, because it takes
a lot of control effort
Could also choose the closed-loop poles to mimic a system that has
similar performance to what you would like to achieve:
Just set pole locations equal to those of the prototype system.
Various options exist
Bessel Polynomial Systems of order k: G_p(s) = 1 / B_k(s)
All scaled to give settling times of 1 second, which you can change
to t_s by dividing the poles by t_s.
Procedure for an n-th order system:
Determine the desired settling time t_s
Find the k = n polynomial from the table.
Divide the pole locations by t_s
Form the desired characteristic polynomial Φ_d(s) and use acker/place
to determine the feedback gains.
Simulate to check performance and control effort.
Example: G(s) = 1 / ( s(s + 4)(s + 1) )
with
A = [ −5 −4 0 ; 1 0 0 ; 0 1 0 ] , B = [ 1 ; 0 ; 0 ]
so that n = k = 3.
Want t_s = 2 sec. So there are 3 poles at:
−5.0093/2 = −2.5047 and
(−3.9668 ± 3.7845i)/2 = −1.9834 ± 1.8922i
Use these to form Φ_d(s) and find the gains using acker
The Bessel approach is fine, but the step response is a bit slow.
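The design step above is easy to script. Below is a numpy sketch (not from the notes) of Ackermann's formula applied to this example; `acker` here is a hypothetical reimplementation of the MATLAB routine the notes reference:

```python
import numpy as np

def acker(A, B, poles):
    """Ackermann's formula: gain K such that eig(A - B K) = poles."""
    n = A.shape[0]
    # Controllability matrix Mc = [B, AB, ..., A^(n-1) B]
    Mc = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])
    # Desired characteristic polynomial phi_d, evaluated at A
    coeffs = np.poly(poles)  # [1, c1, ..., cn]
    phi_d = sum(c * np.linalg.matrix_power(A, n - i) for i, c in enumerate(coeffs))
    e_n = np.zeros((1, n)); e_n[0, -1] = 1.0
    return e_n @ np.linalg.inv(Mc) @ phi_d  # K = [0 ... 0 1] Mc^{-1} phi_d(A)

# Example above: G(s) = 1/(s(s+4)(s+1)), Bessel poles scaled to t_s = 2 sec
A = np.array([[-5.0, -4.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
B = np.array([[1.0], [0.0], [0.0]])
poles = np.array([-5.0093, -3.9668 + 3.7845j, -3.9668 - 3.7845j]) / 2.0
K = acker(A, B, poles)
print(np.sort_complex(np.linalg.eig(A - B @ K)[0]))  # the three scaled Bessel poles
```

Checking the eigenvalues of A − BK against the requested poles is the same "simulate to check" sanity step the procedure calls for.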
Another approach is to select the poles to match the n-th polynomial
that was designed to minimize the ITAE (integral of the time
multiplied by the absolute value of the error)

J_ITAE = ∫_0^∞ t |e(t)| dt

in response to a step function.
Both Bessel and ITAE are tabulated in FPE-508.
Comparison for k = 3 (given for ω_0 = 1 rad/sec, so slightly
different than the numbers given on the previous page):

Φ_d^Bessel = (s + 0.9420)(s + 0.7465 ± 0.7112i)
Φ_d^ITAE = (s + 0.7081)(s + 0.5210 ± 1.068i)
So the ITAE poles are not as heavily damped.
Some overshoot
Faster rise-times.
Problem with both of these approaches is that they completely ignore
the control effort required; the designer must iterate.
Linear Quadratic Regulator
An alternative approach is to place the pole locations so that the
closed-loop (SISO) system optimizes the cost function:
J_LQR = ∫_0^∞ [ x^T(t) (C^T C) x(t) + r u(t)^2 ] dt
Where:
y^T y = x^T (C^T C) x {assuming D = 0} is called the State Cost,
u^2 is called the Control Cost, and
r is the Control Penalty.
Simple form of the Linear Quadratic Regulator Problem.
Can show that the optimal control is a linear state feedback:
u(t) = −K_lqr x(t)
K_lqr found by solving an Algebraic Riccati Equation (ARE).
We will look at the details of this solution procedure later. For now,
let's just look at the optimal closed-loop pole locations.
Consider a SISO system with a minimal model
ẋ = Ax + Bu , y = Cx
where
a(s) = det(sI − A) and C(sI − A)^{-1} B = b(s) / a(s)
Then¹ with u(t) = −K_lqr x(t), the closed-loop dynamics are:

det(sI − A + B K_lqr) = ∏_{i=1}^{n} (s − p_i)

where the p_i = { the left-half-plane roots of Δ(s) }, with

Δ(s) = a(s)a(−s) + r^{-1} b(s)b(−s)
Use this to find the optimal pole locations, and then use
those to find the feedback gains required, using acker.
The pole locations can be found using standard root-locus tools:

Δ(s) = a(s)a(−s) + r^{-1} b(s)b(−s) = 0
⇔ 1 + r^{-1} G(s)G(−s) = 0
The plot is symmetric about the real and imaginary axes.
Symmetric Root Locus
2n poles are plotted as a function of r
The poles we pick are always the n in the LHP.
¹ Several leaps made here for now. We will come back to this LQR problem later.
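The SRL is easy to evaluate numerically. Below is a numpy sketch (not from the notes; `neg_s` and `srl_poles` are hypothetical helper names) for Example #1 further down, G(s) = 8·14·20 / ((s+8)(s+14)(s+20)):

```python
import numpy as np

# SRL: roots of Delta(s) = a(s)a(-s) + (1/r) b(s)b(-s) = 0
a = np.array([1.0, 42.0, 552.0, 2240.0])  # (s+8)(s+14)(s+20)
b = np.array([2240.0])                    # 8*14*20

def neg_s(p):
    """Coefficients of p(-s), given those of p(s) (highest power first)."""
    n = len(p) - 1
    return np.array([c * (-1.0) ** (n - i) for i, c in enumerate(p)])

def srl_poles(a, b, r):
    delta = np.polyadd(np.polymul(a, neg_s(a)),
                       (1.0 / r) * np.polymul(b, neg_s(b)))
    roots = np.roots(delta)
    return np.sort_complex(roots[roots.real < 0])  # pick the n LHP roots

print(srl_poles(a, b, 1.0))    # closed-loop pole locations for r = 1
print(srl_poles(a, b, 1e12))   # large r: approaches the open-loop poles
```

The second call illustrates the high-control-cost limit discussed on the next page: as r grows, the SRL poles return to the (stable) open-loop poles at −8, −14, −20.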
LQR Notes
1. The state cost was written using the output y^T y, but that does not
need to be the case.
We are free to define a new system output z = C_z x that is not
based on a physical sensor measurement.
J_LQR = ∫_0^∞ [ x^T(t) (C_z^T C_z) x(t) + r u(t)^2 ] dt
Selection of z used to isolate the system states you are most
concerned about, and thus would like to be regulated to zero.
2. Note what happens as r → ∞ (the high control cost case):
a(s)a(−s) + r^{-1} b(s)b(−s) = 0  →  a(s)a(−s) = 0
So the n closed-loop poles are:
Stable roots of the open-loop system (already in the LHP).
Reflection about the jω-axis of the unstable open-loop poles.
3. Note what happens as r → 0 (the low control cost case):
a(s)a(−s) + r^{-1} b(s)b(−s) = 0  →  b(s)b(−s) = 0
Assume the order of b(s)b(−s) is 2m < 2n.
So the n closed-loop poles go to:
The m finite zeros of the system that are in the LHP (or the
reflections of the system's zeros in the RHP).
The system zeros at infinity (there are n − m of these).
Note that the poles tending to infinity do so along very specific paths,
so that they form a Butterworth Pattern:
At high frequency we can ignore all but the highest powers of s
in the expression for Δ(s) = 0:

Δ(s) = 0  →  (−1)^n s^{2n} + r^{-1} (−1)^m (b_0 s^m)^2 = 0

⇒  s^{2(n−m)} = (−1)^{n−m+1} b_0^2 / r

The 2(n − m) solutions of this expression lie on a circle of radius
(b_0^2 / r)^{1/(2(n−m))}
at the intersections with the radial lines whose phases from the
negative real axis are:

± lπ/(n − m) , l = 0, 1, . . . , (n − m − 1)/2 , for (n − m) odd
± (l + 1/2)π/(n − m) , l = 0, 1, . . . , (n − m)/2 − 1 , for (n − m) even
Examples:
n − m : Phase
1 : 0
2 : ±π/4
3 : 0, ±π/3
4 : ±π/8, ±3π/8
Note: Plot the SRL using the 180° rules (normal) if n − m is even
and the 0° rules if n − m is odd.
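A small script (not from the notes) can confirm the phase table directly from the asymptote formula:

```python
import cmath
import math

def butterworth_phases(n_minus_m, b0=1.0, r=1.0):
    """Phases (measured from the negative real axis) of the LHP solutions of
    s^(2k) = (-1)^(k+1) * b0^2 / r, with k = n - m."""
    k = n_minus_m
    rhs = (-1.0) ** (k + 1) * b0 ** 2 / r
    radius = abs(rhs) ** (1.0 / (2 * k))
    base = cmath.phase(complex(rhs))  # 0 or pi
    roots = [radius * cmath.exp(1j * (base + 2 * math.pi * i) / (2 * k))
             for i in range(2 * k)]
    lhp = [s for s in roots if s.real < 0]
    # convert each phase to an angle from the negative real axis
    return sorted(abs(math.pi - abs(cmath.phase(s))) for s in lhp)

for k in (1, 2, 3, 4):
    print(k, [round(p / math.pi, 3) for p in butterworth_phases(k)])
```

The printed fractions of π reproduce the table: 0 for k = 1; ±1/4 for k = 2; 0, ±1/3 for k = 3; ±1/8, ±3/8 for k = 4.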
Figure 1: Example #1: G(s) = 8·14·20 / ((s+8)(s+14)(s+20))
[Plots: symmetric root locus; step response; closed-loop frequency response, for u = r − Kx and u = Nbar·r − Kx]
Figure 2: Example #2: G(s) = 0.94 / (s^2 − 0.0297)
[Plots: symmetric root locus; step response; closed-loop frequency response, for u = r − Kx and u = Nbar·r − Kx]
Figure 3: Example #3: G(s) = 8·14·20 / ((s−8)(s−14)(s−20))
[Plots: symmetric root locus; step response; closed-loop frequency response, for u = r − Kx and u = Nbar·r − Kx]
Figure 4: Example #4: G(s) = (s−1) / ((s+1)(s−3))
[Plots: symmetric root locus; unstable, NMP system step response; closed-loop frequency response, for u = r − Kx and u = Nbar·r − Kx]
Figure 5: Example #5: G(s) = (s−2)(s−4) / ((s−1)(s−3)(s^2 + 0.8s + 4) s^2)
[Plots: symmetric root locus; unstable, NMP system step response; closed-loop frequency response, for u = r − Kx and u = Nbar·r − Kx]
As noted previously, we are free to pick the state weighting matrix
C_z to penalize the parts of the motion we are most concerned with.
Simple example: an oscillator with x = [ p , v ]^T

A = [ 0 1 ; −2 −0.5 ] , B = [ 0 ; 1 ]

but we choose two cases for z:

z = p = [ 1 0 ] x  and  z = v = [ 0 1 ] x
Figure 6: SRL with position (left) and velocity penalties (right)
Clearly, choosing a different C_z impacts the SRL because it
completely changes the zero-structure for the system.
Summary
Dominant second-order and prototype design approaches (Bessel and ITAE)
place the closed-loop pole locations with no regard to the
amount of control effort required.
Designer must iterate on the selected bandwidth (ω_n) to ensure
that the control effort is reasonable.
LQR/SRL approach selects closed-loop poles that balance between
system errors and the control effort.
Easy design iteration using r; the poles move along the SRL.
Sometimes difficult to relate the desired transient response to the
LQR cost function.
Nice thing about the LQR approach is that the designer is focused
on system performance issues.
The pole locations are then supplied by the SRL.
Topic #16
16.31 Feedback Control
State-Space Systems
Open-loop Estimators
Closed-loop Estimators
Observer Theory (no noise): Luenberger
IEEE TAC, Vol. 16, No. 6, pp. 596-602, December 1971.
Estimation Theory (with noise): Kalman
Copyright 2001 by Jonathan How.
Estimators/Observers
Problem: So far we have assumed that we have full access to the
state x(t) when we designed our controllers.
Most often all of this information is not available.
Usually we can only feed back information that is developed from the
sensor measurements.
Could try output feedback
u = −Kx  ⇒  u = −K̄y
Same as the proportional feedback we looked at at the beginning
of the root locus work.
This type of control is very difficult to design in general.
Alternative approach: Develop a replica of the dynamic
system that provides an estimate of the system states based on the
measured output of the system.
New plan:
1. Develop an estimate of x(t) that will be called x̂(t).
2. Then switch from u = −Kx(t) to u = −K x̂(t).
Two key questions:
How do we find x̂(t)?
Will this new plan work?
Estimation Schemes
Assume that the system model is of the form:
ẋ = Ax + Bu , x(0) unknown
y = Cx
where
1. A, B, and C are known.
2. u(t) is known.
3. Measurable outputs are y(t), from C ≠ I.
Goal: Develop a dynamic system whose state
x̂(t) = x(t)
for all time t ≥ 0. Two primary approaches:
Open-loop.
Closed-loop.
Open-loop Estimator
Given that we know the plant matrices and the inputs, we can just
perform a simulation that runs in parallel with the system:
d/dt x̂(t) = A x̂(t) + Bu(t)
Then x̂(t) ≡ x(t) ∀ t, provided that x̂(0) = x(0)
Major Problem: We do not know x(0)
Analysis of this case:
ẋ(t) = Ax + Bu(t)
d/dt x̂(t) = A x̂ + Bu(t)
Define the estimation error x̃(t) = x(t) − x̂(t).
Now want x̃(t) = 0 ∀ t. (But is this realistic?)
Subtract to get:
d/dt (x − x̂) = A(x − x̂)  →  d/dt x̃(t) = A x̃
which has the solution
x̃(t) = e^{At} x̃(0)
Gives the estimation error in terms of the initial error.
Does this guarantee that x̃ = 0 ∀ t?
Or even that x̃ → 0 as t → ∞? (which is a more realistic goal)
Response is fine if x̃(0) = 0. But what if x̃(0) ≠ 0?
If A is stable, then x̃ → 0 as t → ∞, but the dynamics of the
estimation error are completely determined by the open-loop dynamics
of the system (eigenvalues of A).
Could be very slow.
No obvious way to modify the estimation error dynamics.
Open-loop estimation does not seem to be a very good idea.
Closed-loop Estimator
An obvious way to fix this problem is to use the additional
information available:
How well does the estimated output match the measured output?
Compare: y = Cx with ŷ = C x̂
Then form ỹ ≜ y − ŷ = C x̃
Approach: Feed back ỹ to improve our estimate of the state. Basic
form of the estimator is:
d/dt x̂(t) = A x̂(t) + Bu(t) + L ỹ(t)
ŷ(t) = C x̂(t)
where L is the user-selectable gain matrix.
Analysis:
d/dt x̃ = ẋ − d/dt x̂ = [Ax + Bu] − [A x̂ + Bu + L(y − ŷ)]
= A(x − x̂) − L(Cx − C x̂) = A x̃ − LC x̃ = (A − LC) x̃
So the closed-loop estimation error dynamics are now
d/dt x̃ = (A − LC) x̃  with solution  x̃(t) = e^{(A−LC)t} x̃(0)
Bottom line: Can select the gain L to attempt to improve the
convergence of the estimation error (and/or speed it up).
But now must worry about observability of the system model.
Note the similarity:
Regulator Problem: pick K for A − BK
Choose K ∈ R^{1×n} (SISO) such that the closed-loop poles
det(sI − A + BK) = Φ_c(s)
are in the desired locations.
Estimator Problem: pick L for A − LC
Choose L ∈ R^{n×1} (SISO) such that the closed-loop poles
det(sI − A + LC) = Φ_o(s)
are in the desired locations.
These problems are obviously very similar; in fact they are called
dual problems.
Estimation Gain Selection
For regulation, we were concerned with controllability of (A, B).
For a controllable system we can place the eigenvalues
of A − BK arbitrarily.
For estimation, we are concerned with observability of the pair (A, C).
For an observable system we can place the eigenvalues
of A − LC arbitrarily.
Test using the observability matrix:

rank M_o ≜ rank [ C ; CA ; CA^2 ; ... ; CA^{n−1} ] = n
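This rank test takes a few lines of numpy; a sketch (not from the notes), using the two-state system from the example later in these notes:

```python
import numpy as np

def obsv(A, C):
    """Observability matrix M_o = [C; CA; ...; CA^(n-1)]."""
    n = A.shape[0]
    return np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(n)])

# Two-state example system from these notes (minus signs restored)
A = np.array([[-1.0, 1.5], [1.0, -2.0]])
C = np.array([[1.0, 0.0]])
Mo = obsv(A, C)
print(Mo)
print(np.linalg.matrix_rank(Mo) == A.shape[0])  # full rank -> observable
```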
The procedure for selecting L is very similar to that used for the
regulator design process.
Write the system model in observer canonical form:

d/dt [ x_1 ; x_2 ; x_3 ] = [ −a_1 1 0 ; −a_2 0 1 ; −a_3 0 0 ] [ x_1 ; x_2 ; x_3 ] + [ b_1 ; b_2 ; b_3 ] u

y = [ 1 0 0 ] [ x_1 ; x_2 ; x_3 ]
Now very simple to form

A − LC = [ −a_1 1 0 ; −a_2 0 1 ; −a_3 0 0 ] − [ l_1 ; l_2 ; l_3 ] [ 1 0 0 ]
       = [ −a_1−l_1 1 0 ; −a_2−l_2 0 1 ; −a_3−l_3 0 0 ]

The closed-loop poles of the estimator are at the roots of

det(sI − A + LC) = s^3 + (a_1 + l_1)s^2 + (a_2 + l_2)s + (a_3 + l_3) = 0
So we have the freedom to place the closed-loop poles as desired.
Task greatly simplified by the selection of the state-space model
used for the design/analysis.
Another approach:
Note that the poles of (A − LC) and (A − LC)^T are identical.
Also we have that (A − LC)^T = A^T − C^T L^T
So designing L^T for this transposed system looks like a standard
regulator problem (A − BK), where
A → A^T , B → C^T , K → L^T
So we can use
K_e = acker(A^T, C^T, P) , L ≜ K_e^T
Note that the estimator equivalent of Ackermann's formula is that

L = Φ_e(A) M_o^{-1} [ 0 ; ... ; 0 ; 1 ]
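This estimator form of Ackermann's formula is also easy to code. A numpy sketch (not from the notes; the desired poles at −5 and −6 are assumed values for illustration):

```python
import numpy as np

def acker_estimator(A, C, poles):
    """L = phi_e(A) Mo^{-1} [0 ... 0 1]^T, placing eig(A - L C) at `poles`."""
    n = A.shape[0]
    Mo = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(n)])
    coeffs = np.poly(poles)  # desired characteristic polynomial phi_e
    phi_e = sum(c * np.linalg.matrix_power(A, n - i) for i, c in enumerate(coeffs))
    e_n = np.zeros((n, 1)); e_n[-1, 0] = 1.0
    return phi_e @ np.linalg.inv(Mo) @ e_n

A = np.array([[-1.0, 1.5], [1.0, -2.0]])
C = np.array([[1.0, 0.0]])
L = acker_estimator(A, C, [-5.0, -6.0])
print(np.sort(np.linalg.eig(A - L @ C)[0].real))  # approximately [-6, -5]
```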
Estimators Example
Simple system:

A = [ −1 1.5 ; 1 −2 ] , B = [ 1 ; 0 ] , x(0) = [ 0.5 ; 1 ]
C = [ 1 0 ] , D = 0

Assume that the initial conditions are not well known.
System stable, but λ_max(A) = −0.18
Test observability:

rank [ C ; CA ] = rank [ 1 0 ; −1 1.5 ] = 2

Use open- and closed-loop estimators.
Since the initial conditions are not well known, use x̂(0) = [ 0 ; 0 ]
Open-loop estimator:
d/dt x̂ = A x̂ + Bu
ŷ = C x̂
Closed-loop estimator:
d/dt x̂ = A x̂ + Bu + L ỹ = A x̂ + Bu + L(y − ŷ)
= (A − LC) x̂ + Bu + Ly
ŷ = C x̂
which is a dynamic system with poles given by λ_i(A − LC),
and which takes the measured plant outputs as an input and
generates an estimate of x.
Typically simulate both systems together for simplicity.
Open-loop case:

ẋ = Ax + Bu , y = Cx
d/dt x̂ = A x̂ + Bu , ŷ = C x̂

⇒ d/dt [ x ; x̂ ] = [ A 0 ; 0 A ] [ x ; x̂ ] + [ B ; B ] u , [ x(0) ; x̂(0) ] = [ 0.5 ; 1 ; 0 ; 0 ]

[ y ; ŷ ] = [ C 0 ; 0 C ] [ x ; x̂ ]

Closed-loop case:

ẋ = Ax + Bu
d/dt x̂ = (A − LC) x̂ + Bu + LCx

⇒ d/dt [ x ; x̂ ] = [ A 0 ; LC A−LC ] [ x ; x̂ ] + [ B ; B ] u

Example uses a strong u(t) to shake things up
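The comparison is easy to reproduce. A minimal numpy simulation (not from the notes; forward-Euler integration, and both the gain L and the input u(t) are assumed values for illustration):

```python
import numpy as np

A = np.array([[-1.0, 1.5], [1.0, -2.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[1.0, 0.0]])
L = np.array([[8.0], [5.0]])   # assumed estimator gain, not from the notes

dt, T = 1e-3, 4.0
x = np.array([[0.5], [1.0]])   # true state
xh_ol = np.zeros((2, 1))       # open-loop estimate, xhat(0) = 0
xh_cl = np.zeros((2, 1))       # closed-loop estimate, xhat(0) = 0

for k in range(int(T / dt)):
    u = np.array([[np.sin(5 * k * dt)]])  # "strong" input to shake things up
    y = C @ x
    x     = x     + dt * (A @ x + B @ u)
    xh_ol = xh_ol + dt * (A @ xh_ol + B @ u)
    xh_cl = xh_cl + dt * (A @ xh_cl + B @ u + L @ (y - C @ xh_cl))

err_ol = np.linalg.norm(x - xh_ol)
err_cl = np.linalg.norm(x - xh_cl)
print(err_ol, err_cl)  # closed-loop error is orders of magnitude smaller
```

With this assumed L, eig(A − LC) = {−3, −8}, much faster than the open-loop eigenvalues {−0.18, −2.82}, so the closed-loop estimation error at t = 4 sec is essentially gone while the open-loop error has barely decayed.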
Figure 1: Open-loop estimator. Estimation error converges to zero, but very
slowly.
Figure 2: Closed-loop estimator. Convergence looks much better.
Where to put the Estimator Poles?
Location heuristics for poles still apply: use Bessel, ITAE, ...
Main difference: probably want to make the estimator faster than
you intend to make the regulator; it should enhance the control,
which is based on x̂(t).
Rule of thumb: factor of 2-3 in the time constant ζω_n associated with
the regulator poles.
Note: When designing a regulator, we were concerned with the
bandwidth of the control; getting too high often results in control
commands that saturate the actuators and/or change rapidly.
Different concerns for the estimator:
Loop closed inside the computer, so saturation is not a problem.
However, the measurements y are often noisy, and we need to
be careful how we use them to develop our state estimates.
High-bandwidth estimators tend to accentuate the effect of
sensing noise in the estimate.
State estimates tend to track the measurements, which are
fluctuating randomly due to the noise.
Low-bandwidth estimators have lower gains and tend to rely
more heavily on the plant model.
Essentially an open-loop estimator: tends to ignore the
measurements and just use the plant model.
Can also develop an optimal estimator for this type of system.
Which is apparently what Kalman did one evening in 1958 while
taking the train from Princeton to Baltimore...
Balances the effect of the various types of random noise in the
system on the estimator:

ẋ = Ax + Bu + B_w w
y = Cx + v

where:
w is called process noise: models the uncertainty in the
system model.
v is called sensor noise: models the uncertainty in the
measurements.
A symmetric root locus exists for the optimal estimator.
Define G_yw(s) = C(sI − A)^{-1} B_w ≜ N(s)/D(s)
SRL for the closed-loop poles λ_i(A − LC) of the estimator, which
are the LHP roots of:

D(s)D(−s) ± (R_w / R_v) N(s)N(−s) = 0

where R_w and R_v are, in some sense, associated with the sizes of
the process/sensor noise (spectral density).
Pick the sign to ensure that there are no poles on the jω-axis.
The relative size of the noises determines where the poles will be located.
Similar to the role of the control cost in the LQR problem.
As R_w/R_v → 0, the n poles go to the:
1. LHP poles of the system.
2. Reflection of the RHP poles of the system about the jω-axis.
The relatively noisy sensor case:
Closed-loop estimator essentially reverts back to
the open-loop case (but must be stable).
Low-bandwidth estimator.
As R_w/R_v → ∞, the n poles go to:
1. LHP zeros (and reflections of the RHP zeros) of G_yw(s).
2. Along the Butterworth patterns, same as the regulator case.
The relatively clean sensor case:
Closed-loop estimator poles go to very high bandwidth
to take full advantage of the information in y.
High-bandwidth estimator.
If you know R_w and R_v, then use them in the SRL, but more
often than not we just use them as tuning parameters to
develop low/high bandwidth estimators.
Typically fix R_w and tune the estimator bandwidth using R_v
Final Thoughts
Note that the feedback gain L in the estimator only stabilizes the
estimation error.
If the system is unstable, then the state estimates will also go to
∞, with zero error from the actual states.
Estimation is an important concept in its own right.
Not always just part of the control system.
Critical issue for guidance and navigation systems.
More complete discussion requires that we study stochastic
processes and optimization theory.
Estimation is all about which you trust more: your
measurements or your model.
Interpretations
With noise in the system, the model is of the form:
ẋ = Ax + Bu + B_w w , y = Cx + v
And the estimator is of the form:
d/dt x̂ = A x̂ + Bu + L(y − ŷ) , ŷ = C x̂
Analysis: in this case

d/dt x̃ = ẋ − d/dt x̂ = [Ax + Bu + B_w w] − [A x̂ + Bu + L(y − ŷ)]
= A(x − x̂) − L(Cx − C x̂) + B_w w − Lv
= A x̃ − LC x̃ + B_w w − Lv
= (A − LC) x̃ + B_w w − Lv
This equation of the estimation error explicitly shows the conflict
in the estimator design process. Must balance between:
Speed of the estimator decay rate, which is governed by λ_i(A − LC)
Impact of the sensing noise v through the gain L
Fast state reconstruction requires a rapid decay rate (typically
requires a large L), but that tends to magnify the effect of v on the
estimation process.
The effect of the process noise is always there, but the choice of
L will tend to mitigate/accentuate the effect of v on x̂(t).
Kalman Filter provides an optimal balance between the two
conflicting problems for a given size of the process and sensing noises.
Filter Interpretation: Recall that
d/dt x̂ = (A − LC) x̂ + Ly
Consider a scalar system, and take the Laplace transform of both
sides to get:

X̂(s) / Y(s) = L / ( s − (A − LC) )

This is the transfer function from the measurement to the
estimated state.
It looks like a low-pass filter.
Clearly, by lowering r, and thus increasing L, we are pushing out
the pole.
DC gain asymptotes to 1/C as L → ∞
[Figure: scalar transfer function |X̂/Y| vs. frequency for increasing L; the low-pass corner moves out as L increases]
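A tiny numeric check of the scalar filter (not from the notes; the scalar plant values a = −1 and c = 2 are assumed for illustration):

```python
# Scalar estimator filter: Xhat(s)/Y(s) = L / (s - (a - L*c))
# DC gain (s = 0): L / (L*c - a), which tends to 1/c as L -> infinity
a, c = -1.0, 2.0  # assumed scalar plant values, not from the notes

def dc_gain(L):
    return L / (L * c - a)

for L in (1.0, 10.0, 1000.0):
    print(L, dc_gain(L))  # approaches 1/c = 0.5 as L grows
```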
Second example: Lightly Damped Harmonic Oscillator

d/dt [ x_1 ; x_2 ] = [ 0 1 ; −ω_0^2 0 ] [ x_1 ; x_2 ] + [ 0 ; 1 ] w
y = x_1 + v

where R_w = 1 and R_v = r.
Can sense the position state of the oscillator, but want
to develop an estimator to reconstruct the velocity state.
Find the location of the optimal poles:

G_yw(s) = [ 1 0 ] [ s −1 ; ω_0^2 s ]^{-1} [ 0 ; 1 ] = 1 / (s^2 + ω_0^2) ≜ b(s)/a(s)

So we must find the LHP roots of

(s^2 + ω_0^2)((−s)^2 + ω_0^2) + 1/r = (s^2 + ω_0^2)^2 + 1/r = 0
[Figure: symmetric root locus for the harmonic-oscillator estimator]
Note that as r → 0 (the clean sensor case), the estimator poles tend to ∞
along the ±45° asymptotes, so the poles are approximately

s ≈ (−1 ± j) ω_r / √2 , where ω_r ≜ (1/r)^{1/4}

⇒ Φ_e(s) = s^2 + √2 ω_r s + ω_r^2 = 0
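A numpy check (not from the notes; ω_0 = 1 is assumed) that the exact SRL roots approach this 45° asymptote as r → 0:

```python
import numpy as np

w0, r = 1.0, 1e-8
# (s^2 + w0^2)^2 + 1/r = s^4 + 2 w0^2 s^2 + w0^4 + 1/r
coeffs = [1.0, 0.0, 2.0 * w0 ** 2, 0.0, w0 ** 4 + 1.0 / r]
roots = np.roots(coeffs)
lhp = np.sort_complex(roots[roots.real < 0])  # the two estimator poles

wr = (1.0 / r) ** 0.25                        # asymptotic radius
asymptote = np.sort_complex(np.array([wr * (-1 - 1j), wr * (-1 + 1j)]) / np.sqrt(2))
print(lhp)
print(asymptote)  # nearly identical for small r
```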
Can use these estimated pole locations in acker, to get that

L = Φ_e(A) M_o^{-1} [ 0 ; 1 ]
  = ( A^2 + √2 ω_r A + ω_r^2 I ) [ C ; CA ]^{-1} [ 0 ; 1 ]

and since M_o = [ C ; CA ] = [ 1 0 ; 0 1 ] here,

L = [ √2 ω_r ; ω_r^2 − ω_0^2 ]
Given L, A, and C, we can develop the estimator transfer function
from the measurement y to the velocity estimate x̂_2:

x̂_2 / y = [ 0 1 ] ( sI − (A − LC) )^{-1} L

with

sI − (A − LC) = [ s + √2 ω_r , −1 ; ω_r^2 , s ]

so that

x̂_2 / y = ( −√2 ω_r^3 + (s + √2 ω_r)(ω_r^2 − ω_0^2) ) / ( s^2 + √2 ω_r s + ω_r^2 )
        = ( (ω_r^2 − ω_0^2) s − √2 ω_r ω_0^2 ) / ( s^2 + √2 ω_r s + ω_r^2 )
        ≈ ω_r^2 s / ( s^2 + √2 ω_r s + ω_r^2 ) for ω_r ≫ ω_0
The filter zero asymptotes to s = 0 as r → 0, and the two poles move out to ∞.
The resulting estimator looks like a band-limited differentiator.
This was expected because we measure position and want to estimate velocity.
The frequency band over which we are willing to perform the differentiation is determined by the relative cleanliness of the measurements.
[Figure: Bode plots (magnitude and phase vs frequency, rad/sec) of the transfer function from the position measurement to the velocity estimate for sensor noise intensities r = 0.01, 0.0001, 1e-06, and 1e-08 — the differentiation band widens as r decreases]
Topic #17
16.31 Feedback Control
State-Space Systems
Closed-loop control using estimators and regulators.
Dynamic output feedback
Back to reality
Copyright 2001 by Jonathan How.
Combined Estimators and Regulators
Can now evaluate the stability and/or performance of a controller when we design K assuming that u = −Kx, but we implement u = −Kx̂.
Assume that we have designed a closed-loop estimator with gain L:

  x̂̇(t) = Ax̂(t) + Bu(t) + L(y − ŷ)
  ŷ(t) = Cx̂(t)
Then we have that the closed-loop system dynamics are given by:

  ẋ(t) = Ax(t) + Bu(t)
  x̂̇(t) = Ax̂(t) + Bu(t) + L(y − ŷ)
  y(t) = Cx(t)
  ŷ(t) = Cx̂(t)
  u = −Kx̂

Which can be compactly written as:

  [ẋ; x̂̇] = [A  −BK; LC  A − BK − LC] [x; x̂]   ⇒   ẋ_cl = A_cl x_cl
This does not look too good at this point: it is not even obvious that the closed-loop system is stable.

  λ_i(A_cl) = ??
Can fix this problem by introducing a new variable x̃ = x − x̂ and then converting the closed-loop system dynamics using the similarity transformation T:

  x̃_cl ≜ [x; x̃] = [I  0; I  −I] [x; x̂] = T x_cl

Note that T = T⁻¹.
Now rewrite the system dynamics in terms of the state x̃_cl:

  Ā_cl ≜ T A_cl T⁻¹

Note that similarity transformations preserve the eigenvalues, so we are guaranteed that

  λ_i(A_cl) ≡ λ_i(Ā_cl)
Work through the math:

  Ā_cl = [I  0; I  −I] [A  −BK; LC  A − BK − LC] [I  0; I  −I]
       = [A  −BK; A − LC  −A + LC] [I  0; I  −I]
       = [A − BK  BK; 0  A − LC]

Because Ā_cl is block upper triangular, we know that the closed-loop poles of the system are given by

  det(sI − Ā_cl) ≜ det(sI − (A − BK)) · det(sI − (A − LC)) = 0
Observation: The closed-loop poles for this system consist of the union of the regulator poles and estimator poles.
So we can just design the estimator/regulator separately and combine them at the end.
Called the Separation Principle.
Just keep in mind that the pole locations you are picking for these
two sub-problems will also be the closed-loop pole locations.
Note: the separation principle means that there will be no ambiguity or uncertainty about the stability and/or performance of the closed-loop system.
The closed-loop poles will be exactly where you put them!!
And we have not even said what compensator does this amazing
accomplishment!!!
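The separation-principle claim (closed-loop poles = regulator poles ∪ estimator poles) is easy to check numerically; a minimal sketch where A, B, C, K, and L are illustrative assumptions chosen so that A − BK and A − LC have distinct stable eigenvalues:

```python
import numpy as np

# Check that eig(A_cl) is the union of eig(A - BK) and eig(A - LC).
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
K = np.array([[32.0, 8.0]])      # regulator poles at -4 +/- 4j
L = np.array([[22.0], [120.0]])  # estimator poles at -10, -12

A_cl = np.block([[A, -B @ K],
                 [L @ C, A - B @ K - L @ C]])
got = sorted(np.linalg.eigvals(A_cl), key=lambda z: (round(z.real, 6), z.imag))
want = sorted(np.concatenate([np.linalg.eigvals(A - B @ K),
                              np.linalg.eigvals(A - L @ C)]),
              key=lambda z: (round(z.real, 6), z.imag))
print(np.allclose(got, want))  # True
```

The 4x4 closed-loop matrix has exactly the two regulator poles and the two estimator poles, as the block-triangular argument guarantees.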
The Compensator

Dynamic Output Feedback Compensator is the combination of the regulator and estimator using u = −Kx̂:

  x̂̇(t) = Ax̂(t) + Bu(t) + L(y − ŷ)
        = Ax̂(t) − BKx̂ + L(y − Cx̂)
  ⇒ x̂̇(t) = (A − BK − LC)x̂(t) + Ly
     u = −Kx̂

Rewrite with new state x_c ≜ x̂:

  ẋ_c = A_c x_c + B_c y
  u = −C_c x_c

where the compensator dynamics are given by:

  A_c ≜ A − BK − LC,  B_c ≜ L,  C_c ≜ K
Note that the compensator maps sensor measurements to actuator commands, as expected.
The closed-loop system is stable if the regulator/estimator poles are placed in the LHP, but the compensator dynamics themselves do not need to be stable:

  λ_i(A − BK − LC) = ??
For consistency in the implementation with the classical approaches, define the compensator transfer function so that

  u = −G_c(s)y

From the state-space model of the compensator:

  U(s)/Y(s) ≜ −G_c(s) = −C_c(sI − A_c)⁻¹B_c = −K(sI − (A − BK − LC))⁻¹L

  ⇒ G_c(s) = C_c(sI − A_c)⁻¹B_c
Note that it is often very easy to provide classical interpretations (such as lead/lag) for the compensator G_c(s).
One way to implement this compensator with a reference command r(t) is to change the feedback to be on e(t) = r(t) − y(t) rather than just −y(t):

  r → (+) summer → e → [G_c(s)] → u → [G(s)] → y   (y fed back to the (−) input of the summer)

  u = G_c(s)e = G_c(s)(r − y)

So we still have u = −G_c(s)y if r = 0.
Intuitively appealing because it is the same approach used for
the classical control, but it turns out not to be the best approach.
More on this later.
Mechanics

Basics: e = r − y, u = G_c e, y = Gu

  G_c(s):  ẋ_c = A_c x_c + B_c e,  u = C_c x_c
  G(s):    ẋ = Ax + Bu,  y = Cx

Loop dynamics L = G_c(s)G(s) ⇒ y = L(s)e:

  ẋ = Ax + BC_c x_c
  ẋ_c = A_c x_c + B_c e

  L(s):  [ẋ; ẋ_c] = [A  BC_c; 0  A_c] [x; x_c] + [0; B_c] e
         y = [C  0] [x; x_c]
To close the loop, note that e = r − y, then

  [ẋ; ẋ_c] = [A  BC_c; 0  A_c] [x; x_c] + [0; B_c] ( r − [C  0] [x; x_c] )
            = [A  BC_c; −B_c C  A_c] [x; x_c] + [0; B_c] r
  y = [C  0] [x; x_c]
A_cl is not exactly the same as on page 17-1 because we have rearranged where the negative sign enters into the problem. Same result though.
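The closed-loop matrices above can be assembled directly; a small numpy sketch assuming the double-integrator plant and the compensator values worked out in the Simple Example of these notes:

```python
import numpy as np

# Closed loop: [x'; xc'] = [A  B*Cc; -Bc*C  Ac][x; xc] + [0; Bc] r.
# Plant G(s) = 1/s^2 with the compensator from the double-integrator example.
A  = np.array([[0.0, 1.0], [0.0, 0.0]])
B  = np.array([[0.0], [1.0]])
C  = np.array([[1.0, 0.0]])
Ac = np.array([[-20.0, 1.0], [-132.0, -8.0]])
Bc = np.array([[20.0], [100.0]])
Cc = np.array([[32.0, 8.0]])

A_cl = np.block([[A, B @ Cc],
                 [-Bc @ C, Ac]])
# Characteristic polynomial should factor as (s^2 + 8s + 32)(s + 10)^2.
print(np.real(np.poly(A_cl)))  # ~ [1, 28, 292, 1440, 3200]
```

Despite the rearranged sign convention, the closed-loop characteristic polynomial is still the product of the regulator and estimator polynomials, consistent with the separation principle.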
Simple Example

Let G(s) = 1/s² with state-space model given by:

  A = [0 1; 0 0],  B = [0; 1],  C = [1 0],  D = 0
Design the regulator to place the poles at s = −4 ± 4j:

  λ_i(A − BK) = −4 ± 4j  ⇒  K = [32 8]

Time constant of regulator poles: τ_c = 1/(ζω_n) ≈ 1/4 = 0.25 sec.
Put estimator poles so that the time constant is faster, τ_e ≈ 1/10. Use real poles, so

  φ_e(s) = (s + 10)²
  L = φ_e(A) [C; CA]⁻¹ [0; 1]
    = ( [0 1; 0 0]² + 20 [0 1; 0 0] + [100 0; 0 100] ) [1 0; 0 1]⁻¹ [0; 1]
    = [100 20; 0 100] [0; 1]
    = [20; 100]
Compensator:

  A_c = A − BK − LC
      = [0 1; 0 0] − [0; 1][32 8] − [20; 100][1 0]
      = [−20 1; −132 −8]

  B_c = L = [20; 100],  C_c = K = [32 8]
Compensator transfer function:

  G_c(s) = C_c(sI − A_c)⁻¹B_c ≜ U/E = 1440 (s + 2.222) / (s² + 28s + 292)
Note that the compensator has a low frequency real zero and two
higher frequency poles.
Thus it looks like a lead compensator.
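The whole design above can be reproduced in a few lines; a sketch using numpy/scipy (scipy.signal.place_poles for K, the φ_e(A) formula for L, and scipy.signal.ss2tf for the compensator transfer function):

```python
import numpy as np
from scipy import signal

# Double-integrator design: K places regulator poles at -4 +/- 4j,
# L places estimator poles via phi_e(s) = (s + 10)^2, and then
# Gc(s) = Cc (sI - Ac)^{-1} Bc.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

K = signal.place_poles(A, B, [-4 + 4j, -4 - 4j]).gain_matrix  # ~ [[32, 8]]

phi_e = A @ A + 20 * A + 100 * np.eye(2)  # phi_e(A) for (s + 10)^2
O = np.vstack([C, C @ A])                 # observability matrix
L = phi_e @ np.linalg.inv(O) @ np.array([[0.0], [1.0]])  # [[20], [100]]

Ac = A - B @ K - L @ C
num, den = signal.ss2tf(Ac, L, K, np.zeros((1, 1)))
print(num)  # ~ [[0, 1440, 3200]], i.e. 1440 (s + 2.222)
print(den)  # ~ [1, 28, 292]
```

The recovered numerator and denominator match the hand-computed G_c(s) = 1440(s + 2.222)/(s² + 28s + 292).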
[Figure 1: Bode magnitude and phase of the plant G and compensator G_c. The plant is pretty simple and the compensator looks like a lead over roughly 2-10 rad/sec.]
[Figure 2: Loop transfer function L(s) (magnitude and phase) shows the slope change near ω_c = 5 rad/sec. Note that we have a large PM and GM.]
[Figure 3: Root locus of the closed-loop poles versus an additional loop gain (nominally = 1), with the compensator poles and zeros frozen.]
[Figure 4: Closed-loop transfer function magnitude compared with the plant G.]
[Figure 5: Example #1, G(s) = 8·14·20 / ((s+8)(s+14)(s+20)). Bode plots of the plant G, compensator G_c, loop L, and closed-loop G_cl. Gm = 10.978 dB (at 40.456 rad/sec), Pm = 69.724 deg (at 15.063 rad/sec).]
[Figure 6: Example #1 root locus, showing the closed-loop poles, open-loop poles, and compensator poles and zeros in the s-plane.]
Two compensator zeros at −21.54 ± 6.63j draw the two lower frequency plant poles further into the LHP.
Compensator poles are at much higher frequency.
Looks like a lead compensator.
[Figure 7: Example #2, G(s) = 0.94 / (s² − 0.0297). Bode plots of the plant G, compensator G_c, loop L, and closed-loop G_cl. Gm = 11.784 dB (at 7.4093 rad/sec), Pm = 36.595 deg (at 2.7612 rad/sec).]
[Figure 8: Example #2 root locus, showing the closed-loop poles, open-loop poles, and compensator poles and zeros in the s-plane.]
Compensator zero at -1.21 draws the two lower frequency plant poles
further into the LHP.
Compensator poles are at much higher frequency.
Looks like a lead compensator.
[Figure 9: Example #3, G(s) = 8·14·20 / ((s−8)(s−14)(s−20)). Bode plots of the plant G, compensator G_c, loop L, and closed-loop G_cl. Gm = 0.90042 dB (at 24.221 rad/sec), Pm = 6.6704 deg (at 35.813 rad/sec).]
[Figure 10: Example #3 root locus, showing the closed-loop poles, open-loop poles, and compensator poles and zeros in the s-plane.]
Compensator zeros at 3.72 ± 8.03j draw the two higher frequency plant poles further into the LHP. The lowest frequency one heads into the LHP on its own.
Compensator poles are at much higher frequency.
Not sure what this looks like.
[Figure 11: Example #4, G(s) = (s−1) / ((s+1)(s−3)). Bode plots of the plant G, compensator G_c, loop L, and closed-loop G_cl. Gm = 3.3976 dB (at 4.5695 rad/sec), Pm = 22.448 deg (at 1.4064 rad/sec).]
[Figure 12: Example #4 root locus, showing the closed-loop poles, open-loop poles, and compensator poles and zeros in the s-plane.]
Compensator zero at −1 cancels the plant pole. Note the very unstable compensator pole at s = 9!!
It is needed to get the RHP plant pole to branch off the real line and head into the LHP.
The other compensator pole is at much higher frequency.
Not sure what this looks like.
The separation principle gives a very powerful and simple way to develop a dynamic output feedback controller.
Note that the designer now focuses on selecting the appropriate regulator and estimator pole locations. Once those are set, the closed-loop response is specified.
Can almost consider the compensator to be a by-product.
These examples show that the design process is extremely simple.
Topic #17
16.31 Feedback Control
State-Space Systems
Closed-loop control using estimators and regulators.
Dynamics output feedback
Back to reality
Copyright2001byJonathanHow.
1
Fall 2001 16.31 171
Combined Estimators and Regulators
Can now evaluate the stability and/or performance of a controller
when we design K assuming that u = Kx, but we implement
u = K x
Assume that we have designed a closed-loop estimator with gain L

x(t) = A x(t) + Bu(t) + L(y y)


y(t) = C x(t)
Then we have that the closed-loop system dynamics are given by:
x(t) = Ax(t) + Bu(t)

x(t) = A x(t) + Bu(t) + L(y y)


y(t) = Cx(t)
y(t) = C x(t)
u = K x
Which can be compactly written as:

=

A BK
LC ABK LC

x
x

x
cl
= A
cl
x
cl
This does not look too good at this point not even obvious that
the closed-system is stable.

i
(A
cl
) =??
Fall 2001 16.31 172
Can x this problem by introducing a new variable x = x x and
then converting the closed-loop system dynamics using the
similarity transformation T
x
cl
,

x
x

=

I 0
I I

x
x

= Tx
cl
Note that T = T
1
Now rewrite the system dynamics in terms of the state x
cl
A
cl
TA
cl
T
1
,

A
cl
Note that similarity transformations preserve the eigenvalues, so
we are guaranteed that

i
(A
cl
)
i
(

A
cl
)
Work through the math:

A
cl
=

I 0
I I

A BK
LC A BK LC

I 0
I I

=

A BK
ALC A+ LC

I 0
I I

=

ABK BK
0 ALC

Because

A
cl
is block upper triangular, we know that the closed-loop
poles of the system are given by
det(sI

A
cl
) , det(sI (ABK)) det(sI (ALC)) = 0
Fall 2001 16.31 173
Observation: The closed-loop poles for this system con-
sist of the union of the regulator poles and estimator poles.
So we can just design the estimator/regulator separately and com-
bine them at the end.
Called the Separation Principle.
Just keep in mind that the pole locations you are picking for these
two sub-problems will also be the closed-loop pole locations.
Note: the separation principle means that there will be no ambi-
guity or uncertainty about the stability and/or performance of the
closed-loop system.
The closed-loop poles will be exactly where you put them!!
And we have not even said what compensator does this amazing
accomplishment!!!
Fall 2001 16.31 174
The Compensator
Dynamic Output Feedback Compensator is the combina-
tion of the regulator and estimator using u = K x

x(t) = A x(t) + Bu(t) + L(y y)


= A x(t) BK x + L(y C x)


x(t) = (ABK LC) x(t) + Ly
u = K x
Rewrite with new state x
c
x
x
c
= A
c
x
c
+ B
c
y
u = C
c
x
c
where the compensator dynamics are given by:
A
c
, A BK LC , B
c
, L , C
c
, K
Note that the compensator maps sensor measurements to ac-
tuator commands, as expected.
Closed-loop system stable if regulator/estimator poles placed in the
LHP, but compensator dynamics do not need to be stable.

i
(ABK LC) =??
Fall 2001 16.31 175
For consistency in the implementation with the classical approaches,
dene the compensator transfer function so that
u = G
c
(s)y
From the state-space model of the compensator:
U(s)
Y (s)
, G
c
(s)
= C
c
(sI A
c
)
1
B
c
= K(sI (ABK LC))
1
L
G
c
(s) = C
c
(sI A
c
)
1
B
c
Note that it is often very easy to provide classical interpretations
(such as lead/lag) for the compensator G
c
(s).
One way to implement this compensator with a reference command
r(t) is to change the feedback to be on e(t) = r(t) y(t) rather
than just y(t)
G
c
(s) G(s)
- -
6

r e y u
u = G
c
(s)e = G
c
(s)(r y)
So we still have u = G
c
(s)y if r = 0.
Intuitively appealing because it is the same approach used for
the classical control, but it turns out not to be the best approach.
More on this later.
Fall 2001 16.31 176
Mechanics
Basics:
e = r y, u = G
c
e, y = Gu
G
c
(s) : x
c
= A
c
x
c
+ B
c
e , u = C
c
x
c
G(s) : x = Ax + Bu , y = Cx
Loop dynamics L = G
c
(s)G(s) y = L(s)e
x = Ax +BC
c
x
c
x
c
= +A
c
x
c
+B
c
e
L(s)

x
x
c

=

A BC
c
0 A
c

x
x
c

+

0
B
c

e
y =

C 0

x
x
c

To close the loop, note that e = r y, then

x
x
c

=

A BC
c
0 A
c

x
x
c

+

0
B
c

r

C 0

x
x
c

=

A BC
c
B
c
C A
c

x
x
c

+

0
B
c

r
y =

C 0

x
x
c

A
cl
is not exactly the same as on page 17-1 because we have re-
arranged where the negative sign enters into the problem. Same
result though.
Fall 2001 16.31 177
Simple Example
Let G(s) = 1/s
2
with state-space model given by:
A =

0 1
0 0

, B =

0
1

, C =

1 0

, D = 0
Design the regulator to place the poles at s = 4 4j

i
(ABK) = 4 4j K =

32 8

Time constant of regulator poles


c
= 1/
n
1/4 = 0.25 sec
Put estimator poles so that the time constant is faster
e
1/10
Use real poles, so
e
(s) = (s + 10)
2
L =
e
(A)

C
CA

0
1

0 1
0 0

2
+ 20

0 1
0 0

+

100 0
0 100

1 0
0 1

0
1

=

100 20
0 100

0
1

=

20
100

Fall 2001 16.31 178


Compensator:
A
c
= ABK LC
=

0 1
0 0

0
1

32 8

20
100

1 0

=

20 1
132 8

B
c
= L =

20
100

C
c
= K =

32 8

Compensator transfer function:


G
c
(s) = C
c
(sI A
c
)
1
B
c
,
U
E
= 1440
s + 2.222
s
2
+ 28s + 292
Note that the compensator has a low frequency real zero and two
higher frequency poles.
Thus it looks like a lead compensator.
Fall 2001 16.31 179
10
1
10
0
10
1
10
2
10
3
10
0
10
1
10
2
Freq (rad/sec)
M
a
g
Plant G
Compensator Gc
10
1
10
0
10
1
10
2
10
3
200
150
100
50
0
50
Freq (rad/sec)
P
h
a
s
e

(
d
e
g
)
Plant G
Compensator Gc
Figure 1: Plant is pretty simple and the compensator looks like a lead
210 rads/sec.
10
1
10
0
10
1
10
2
10
3
10
0
10
1
10
2
Freq (rad/sec)
M
a
g
Loop L
10
1
10
0
10
1
10
2
10
3
280
260
240
220
200
180
160
140
120
Freq (rad/sec)
P
h
a
s
e

(
d
e
g
)
Figure 2: Loop transfer function L(s) shows the slope change near

c
= 5 rad/sec. Note that we have a large PM and GM.
Fall 2001 16.31 1710
15 10 5 0 5
15
10
5
0
5
10
15
Real Axis
I
m
a
g

A
x
i
s
Figure 3: Freeze the compensator poles and zeros and look at the root
locus of closed-loop poles versus an additional loop gain (nominally
= 1.)
10
1
10
0
10
1
10
2
10
3
10
2
10
1
10
0
10
1
10
2
Freq (rad/sec)
M
a
g
Plant G
closedloop Gcl
Figure 4: Closed-loop transfer function.
Fall 2001 16.31 1711
Figure 5: Example #1: G(s) =
81420
(s+8)(s+14)(s+20)
10
1
10
0
10
1
10
2
10
3
10
2
10
1
10
0
10
1
Freq (rad/sec)
M
a
g
Plant G
Compensator Gc
10
1
10
0
10
1
10
2
10
3
200
150
100
50
0
50
Freq (rad/sec)
P
h
a
s
e

(
d
e
g
)
Plant G
Compensator Gc
10
1
10
0
10
1
10
2
10
3
10
2
10
1
10
0
10
1
10
2
Freq (rad/sec)
M
a
g
Plant G
closedloop Gcl
10
1
10
0
10
1
10
2
10
3
10
2
10
1
10
0
10
1
Freq (rad/sec)
M
a
g
Loop L
10
1
10
0
10
1
10
2
10
3
250
200
150
100
50
0
Freq (rad/sec)
P
h
a
s
e

(
d
e
g
)
Frequency (rad/sec)
P
h
a
s
e

(
d
e
g
)
;

M
a
g
n
i
t
u
d
e

(
d
B
)
Bode Diagrams
150
100
50
0
50
Gm=10.978 dB (at 40.456 rad/sec), Pm=69.724 deg. (at 15.063 rad/sec)
10
0
10
1
10
2
10
3
400
350
300
250
200
150
100
50
0
Fall 2001 16.31 1712
Figure 6: Example #1: G(s) =
81420
(s+8)(s+14)(s+20)
80 70 60 50 40 30 20 10 0 10 20
50
40
30
20
10
0
10
20
30
40
50
Real Axis
I
m
a
g

A
x
i
s
50 45 40 35 30 25 20 15 10 5 0
25
20
15
10
5
0
5
10
15
20
25
Real Axis
I
m
a
g

A
x
i
s
3 closed-loop poles, 5 open-loop poles, 2 Compensator poles, Compensator zeros
Fall 2001 16.31 1713
Two compensator zeros at -21.546.63j draw the two lower fre-
quency plant poles further into the LHP.
Compensator poles are at much higher frequency.
Looks like a lead compensator.
Fall 2001 16.31 1714
Figure 7: Example #2: G(s) =
0.94
s
2
0.0297
10
1
10
0
10
1
10
2
10
3
10
2
10
1
10
0
10
1
Freq (rad/sec)
M
a
g
Plant G
Compensator Gc
10
1
10
0
10
1
10
2
10
3
200
150
100
50
0
50
Freq (rad/sec)
P
h
a
s
e

(
d
e
g
)
Plant G
Compensator Gc
10
1
10
0
10
1
10
2
10
3
10
2
10
1
10
0
10
1
10
2
Freq (rad/sec)
M
a
g
Plant G
closedloop Gcl
10
1
10
0
10
1
10
2
10
3
10
2
10
1
10
0
10
1
Freq (rad/sec)
M
a
g
Loop L
10
1
10
0
10
1
10
2
10
3
250
200
150
100
50
0
Freq (rad/sec)
P
h
a
s
e

(
d
e
g
)
Frequency (rad/sec)
P
h
a
s
e

(
d
e
g
)
;

M
a
g
n
i
t
u
d
e

(
d
B
)
Bode Diagrams
100
50
0
50
Gm=11.784 dB (at 7.4093 rad/sec), Pm=36.595 deg. (at 2.7612 rad/sec)
10
2
10
1
10
0
10
1
10
2
300
250
200
150
100
Fall 2001 16.31 1715
Figure 8: Example #2: G(s) =
0.94
s
2
0.0297
8 6 4 2 0 2
6
4
2
0
2
4
6
Real Axis
I
m
a
g

A
x
i
s
2 1.5 1 0.5 0 0.5 1 1.5 2
2
1.5
1
0.5
0
0.5
1
1.5
2
Real Axis
I
m
a
g

A
x
i
s
3 closed-loop poles, 5 open-loop poles, 2 Compensator poles, Compensator zeros
Fall 2001 16.31 1716
Compensator zero at -1.21 draws the two lower frequency plant poles
further into the LHP.
Compensator poles are at much higher frequency.
Looks like a lead compensator.
Fall 2001 16.31 1717
Figure 9: Example #3: G(s) =
81420
(s8)(s14)(s20)
10
1
10
0
10
1
10
2
10
3
10
2
10
1
10
0
10
1
Freq (rad/sec)
M
a
g
Plant G
Compensator Gc
10
1
10
0
10
1
10
2
10
3
200
150
100
50
0
50
Freq (rad/sec)
P
h
a
s
e

(
d
e
g
)
Plant G
Compensator Gc
10
1
10
0
10
1
10
2
10
3
10
2
10
1
10
0
10
1
10
2
Freq (rad/sec)
M
a
g
Plant G
closedloop Gcl
10
1
10
0
10
1
10
2
10
3
10
2
10
1
10
0
10
1
Freq (rad/sec)
M
a
g
Loop L
10
1
10
0
10
1
10
2
10
3
250
200
150
100
50
0
Freq (rad/sec)
P
h
a
s
e

(
d
e
g
)
Frequency (rad/sec)
P
h
a
s
e

(
d
e
g
)
;

M
a
g
n
i
t
u
d
e

(
d
B
)
Bode Diagrams
100
50
0
50
Gm=0.90042 dB (at 24.221 rad/sec), Pm=6.6704 deg. (at 35.813 rad/sec)
10
0
10
1
10
2
10
3
350
300
250
200
150
Fall 2001 16.31 1718
Figure 10: Example #3: G(s) =
81420
(s8)(s14)(s20)
140 120 100 80 60 40 20 0 20 40 60
100
80
60
40
20
0
20
40
60
80
100
Real Axis
I
m
a
g

A
x
i
s
25 20 15 10 5 0 5 10 15 20 25
25
20
15
10
5
0
5
10
15
20
25
Real Axis
I
m
a
g

A
x
i
s
3 closed-loop poles, 5 open-loop poles, 2 Compensator poles, Compensator zeros
Fall 2001 16.31 1719
Compensator zeros at 3.728.03j draw the two higher frequency
plant poles further into the LHP. Lowest frequency one heads into
the LHP on its own.
Compensator poles are at much higher frequency.
Note sure what this looks like.
Fall 2001 16.31 1720
Figure 11: Example #4: G(s) =
(s1)
(s+1)(s3)
10
1
10
0
10
1
10
2
10
3
10
2
10
1
10
0
10
1
Freq (rad/sec)
M
a
g
Plant G
Compensator Gc
10
1
10
0
10
1
10
2
10
3
200
150
100
50
0
50
Freq (rad/sec)
P
h
a
s
e

(
d
e
g
)
Plant G
Compensator Gc
10
1
10
0
10
1
10
2
10
3
10
2
10
1
10
0
10
1
10
2
Freq (rad/sec)
M
a
g
Plant G
closedloop Gcl
10
1
10
0
10
1
10
2
10
3
10
2
10
1
10
0
10
1
Freq (rad/sec)
M
a
g
Loop L
10
1
10
0
10
1
10
2
10
3
250
200
150
100
50
0
Freq (rad/sec)
P
h
a
s
e

(
d
e
g
)
Frequency (rad/sec)
P
h
a
s
e

(
d
e
g
)
;

M
a
g
n
i
t
u
d
e

(
d
B
)
Bode Diagrams
80
60
40
20
0
20
Gm=3.3976 dB (at 4.5695 rad/sec), Pm=22.448 deg. (at 1.4064 rad/sec)
10
1
10
0
10
1
10
2
10
3
140
150
160
170
180
190
200
210
220
Fall 2001 16.31 1721
Figure 12: Example #4: G(s) =
(s1)
(s+1)(s3)
40 30 20 10 0 10 20
40
30
20
10
0
10
20
30
40
Real Axis
I
m
a
g

A
x
i
s
10 8 6 4 2 0 2 4 6 8 10
10
8
6
4
2
0
2
4
6
8
10
Real Axis
I
m
a
g

A
x
i
s
3 closed-loop poles, 5 open-loop poles, 2 Compensator poles, Compensator zeros
Fall 2001 16.31 1722
Compensator zero at -1 cancels the plant pole. Note the very un-
stable compensator pole at s = 9!!
Needed to get the RHP plant pole to branch o the real line and
head into the LHP.
Other compensator pole is at much higher frequency.
Note sure what this looks like.
Separation principle gives a very powerful and simple way to develop
a dynamic output feedback controller
Note that the designer now focuses on selecting the appropriate
regulator and estimator pole locations. Once those are set, the
closed-loop response is specied.
Can almost consider the compensator to be a by-product.
These examples show that the design process is extremely simple.
Topic #17
16.31 Feedback Control
State-Space Systems
Closed-loop control using estimators and regulators.
Dynamics output feedback
Back to reality
Copyright2001byJonathanHow.
1
Fall 2001 16.31 171
Combined Estimators and Regulators
Can now evaluate the stability and/or performance of a controller
when we design K assuming that u = Kx, but we implement
u = K x
Assume that we have designed a closed-loop estimator with gain L

x(t) = A x(t) + Bu(t) + L(y y)


y(t) = C x(t)
Then we have that the closed-loop system dynamics are given by:
x(t) = Ax(t) + Bu(t)

x(t) = A x(t) + Bu(t) + L(y y)


y(t) = Cx(t)
y(t) = C x(t)
u = K x
Which can be compactly written as:

=

A BK
LC ABK LC

x
x

x
cl
= A
cl
x
cl
This does not look too good at this point not even obvious that
the closed-system is stable.

i
(A
cl
) =??
Fall 2001 16.31 172
Can x this problem by introducing a new variable x = x x and
then converting the closed-loop system dynamics using the
similarity transformation T
x
cl
,

x
x

=

I 0
I I

x
x

= Tx
cl
Note that T = T
1
Now rewrite the system dynamics in terms of the state x
cl
A
cl
TA
cl
T
1
,

A
cl
Note that similarity transformations preserve the eigenvalues, so
we are guaranteed that

i
(A
cl
)
i
(

A
cl
)
Work through the math:

A
cl
=

I 0
I I

A BK
LC A BK LC

I 0
I I

=

A BK
ALC A+ LC

I 0
I I

=

ABK BK
0 ALC

Because

A
cl
is block upper triangular, we know that the closed-loop
poles of the system are given by
det(sI

A
cl
) , det(sI (ABK)) det(sI (ALC)) = 0
Fall 2001 16.31 173
Observation: The closed-loop poles for this system con-
sist of the union of the regulator poles and estimator poles.
So we can just design the estimator/regulator separately and com-
bine them at the end.
Called the Separation Principle.
Just keep in mind that the pole locations you are picking for these
two sub-problems will also be the closed-loop pole locations.
Note: the separation principle means that there will be no ambi-
guity or uncertainty about the stability and/or performance of the
closed-loop system.
The closed-loop poles will be exactly where you put them!!
And we have not even said what compensator does this amazing
accomplishment!!!
Fall 2001 16.31 174
The Compensator
Dynamic Output Feedback Compensator is the combina-
tion of the regulator and estimator using u = K x

x(t) = A x(t) + Bu(t) + L(y y)


= A x(t) BK x + L(y C x)


x(t) = (ABK LC) x(t) + Ly
u = K x
Rewrite with new state x
c
x
x
c
= A
c
x
c
+ B
c
y
u = C
c
x
c
where the compensator dynamics are given by:
A
c
, A BK LC , B
c
, L , C
c
, K
Note that the compensator maps sensor measurements to ac-
tuator commands, as expected.
Closed-loop system stable if regulator/estimator poles placed in the
LHP, but compensator dynamics do not need to be stable.

i
(ABK LC) =??
Fall 2001 16.31 175
For consistency in the implementation with the classical approaches,
dene the compensator transfer function so that
u = G
c
(s)y
From the state-space model of the compensator:
U(s)
Y (s)
, G
c
(s)
= C
c
(sI A
c
)
1
B
c
= K(sI (ABK LC))
1
L
G
c
(s) = C
c
(sI A
c
)
1
B
c
Note that it is often very easy to provide classical interpretations
(such as lead/lag) for the compensator G
c
(s).
One way to implement this compensator with a reference command
r(t) is to change the feedback to be on e(t) = r(t) y(t) rather
than just y(t)
G
c
(s) G(s)
- -
6

r e y u
u = G
c
(s)e = G
c
(s)(r y)
So we still have u = G
c
(s)y if r = 0.
Intuitively appealing because it is the same approach used for
the classical control, but it turns out not to be the best approach.
More on this later.
Fall 2001 16.31 176
Mechanics
Basics:
e = r y, u = G
c
e, y = Gu
G
c
(s) : x
c
= A
c
x
c
+ B
c
e , u = C
c
x
c
G(s) : x = Ax + Bu , y = Cx
Loop dynamics L = G
c
(s)G(s) y = L(s)e
x = Ax +BC
c
x
c
x
c
= +A
c
x
c
+B
c
e
L(s)

x
x
c

=

A BC
c
0 A
c

x
x
c

+

0
B
c

e
y =

C 0

x
x
c

To close the loop, note that e = r y, then

x
x
c

=

A BC
c
0 A
c

x
x
c

+

0
B
c

r

C 0

x
x
c

=

A BC
c
B
c
C A
c

x
x
c

+

0
B
c

r
y =

C 0

x
x
c

A
cl
is not exactly the same as on page 17-1 because we have re-
arranged where the negative sign enters into the problem. Same
result though.
Fall 2001 16.31 177
Simple Example
Let G(s) = 1/s
2
with state-space model given by:
A =

0 1
0 0

, B =

0
1

, C =

1 0

, D = 0
Design the regulator to place the poles at s = 4 4j

i
(ABK) = 4 4j K =

32 8

Time constant of regulator poles


c
= 1/
n
1/4 = 0.25 sec
Put estimator poles so that the time constant is faster
e
1/10
Use real poles, so
e
(s) = (s + 10)
2
L =
e
(A)

C
CA

0
1

0 1
0 0

2
+ 20

0 1
0 0

+

100 0
0 100

1 0
0 1

0
1

=

100 20
0 100

0
1

=

20
100

Fall 2001 16.31 178


Compensator:
A
c
= ABK LC
=

0 1
0 0

0
1

32 8

20
100

1 0

=

20 1
132 8

B
c
= L =

20
100

C
c
= K =

32 8

Compensator transfer function:


G
c
(s) = C
c
(sI A
c
)
1
B
c
,
U
E
= 1440
s + 2.222
s
2
+ 28s + 292
Note that the compensator has a low frequency real zero and two
higher frequency poles.
Thus it looks like a lead compensator.
Fall 2001 16.31 179
10
1
10
0
10
1
10
2
10
3
10
0
10
1
10
2
Freq (rad/sec)
M
a
g
Plant G
Compensator Gc
10
1
10
0
10
1
10
2
10
3
200
150
100
50
0
50
Freq (rad/sec)
P
h
a
s
e

(
d
e
g
)
Plant G
Compensator Gc
Figure 1: Plant is pretty simple and the compensator looks like a lead
210 rads/sec.
10
1
10
0
10
1
10
2
10
3
10
0
10
1
10
2
Freq (rad/sec)
M
a
g
Loop L
10
1
10
0
10
1
10
2
10
3
280
260
240
220
200
180
160
140
120
Freq (rad/sec)
P
h
a
s
e

(
d
e
g
)
Figure 2: Loop transfer function L(s) shows the slope change near

c
= 5 rad/sec. Note that we have a large PM and GM.
Fall 2001 16.31 1710
15 10 5 0 5
15
10
5
0
5
10
15
Real Axis
I
m
a
g

A
x
i
s
Figure 3: Freeze the compensator poles and zeros and look at the root
locus of closed-loop poles versus an additional loop gain (nominally
= 1.)
10
1
10
0
10
1
10
2
10
3
10
2
10
1
10
0
10
1
10
2
Freq (rad/sec)
M
a
g
Plant G
closedloop Gcl
Figure 4: Closed-loop transfer function.
Fall 2001 16.31 1711
Figure 5: Example #1: G(s) =
81420
(s+8)(s+14)(s+20)
10
1
10
0
10
1
10
2
10
3
10
2
10
1
10
0
10
1
Freq (rad/sec)
M
a
g
Plant G
Compensator Gc
10
1
10
0
10
1
10
2
10
3
200
150
100
50
0
50
Freq (rad/sec)
P
h
a
s
e

(
d
e
g
)
Plant G
Compensator Gc
10
1
10
0
10
1
10
2
10
3
10
2
10
1
10
0
10
1
10
2
Freq (rad/sec)
M
a
g
Plant G
closedloop Gcl
10
1
10
0
10
1
10
2
10
3
10
2
10
1
10
0
10
1
Freq (rad/sec)
M
a
g
Loop L
10
1
10
0
10
1
10
2
10
3
250
200
150
100
50
0
Freq (rad/sec)
P
h
a
s
e

(
d
e
g
)
Frequency (rad/sec)
P
h
a
s
e

(
d
e
g
)
;

M
a
g
n
i
t
u
d
e

(
d
B
)
Bode Diagrams
150
100
50
0
50
Gm=10.978 dB (at 40.456 rad/sec), Pm=69.724 deg. (at 15.063 rad/sec)
10
0
10
1
10
2
10
3
400
350
300
250
200
150
100
50
0
Fall 2001 16.31 1712
Figure 6: Example #1: G(s) =
81420
(s+8)(s+14)(s+20)
80 70 60 50 40 30 20 10 0 10 20
50
40
30
20
10
0
10
20
30
40
50
Real Axis
I
m
a
g

A
x
i
s
50 45 40 35 30 25 20 15 10 5 0
25
20
15
10
5
0
5
10
15
20
25
Real Axis
I
m
a
g

A
x
i
s
3 closed-loop poles, 5 open-loop poles, 2 Compensator poles, Compensator zeros
Fall 2001 16.31 1713
Two compensator zeros at -21.546.63j draw the two lower fre-
quency plant poles further into the LHP.
Compensator poles are at much higher frequency.
Looks like a lead compensator.
Fall 2001 16.31 1714
Figure 7: Example #2: G(s) =
0.94
s
2
0.0297
[Figure 7 panels: magnitude and phase vs. frequency (rad/sec) for Plant G and Compensator Gc, closed-loop Gcl, and Loop L; Bode Diagrams with Gm = 11.784 dB (at 7.4093 rad/sec), Pm = 36.595 deg (at 2.7612 rad/sec).]
Figure 8: Example #2: G(s) = 0.94 / (s² − 0.0297)
[Figure 8 panels: root locus (Real Axis vs. Imag Axis), full view and zoom. Legend: closed-loop poles, open-loop poles, compensator poles, compensator zeros.]
Compensator zero at -1.21 draws the two lower frequency plant poles
further into the LHP.
Compensator poles are at much higher frequency.
Looks like a lead compensator.
Figure 9: Example #3: G(s) = 8·14·20 / ((s−8)(s−14)(s−20))
[Figure 9 panels: magnitude and phase vs. frequency (rad/sec) for Plant G and Compensator Gc, closed-loop Gcl, and Loop L; Bode Diagrams with Gm = 0.90042 dB (at 24.221 rad/sec), Pm = 6.6704 deg (at 35.813 rad/sec).]
Figure 10: Example #3: G(s) = 8·14·20 / ((s−8)(s−14)(s−20))
[Figure 10 panels: root locus (Real Axis vs. Imag Axis), full view and zoom. Legend: closed-loop poles, open-loop poles, compensator poles, compensator zeros.]
Compensator zeros at 3.72 ± 8.03j draw the two higher frequency plant poles further into the LHP. Lowest frequency one heads into the LHP on its own.
Compensator poles are at much higher frequency.
Not sure what this looks like.
Figure 11: Example #4: G(s) = (s − 1) / ((s + 1)(s − 3))
[Figure 11 panels: magnitude and phase vs. frequency (rad/sec) for Plant G and Compensator Gc, closed-loop Gcl, and Loop L; Bode Diagrams with Gm = 3.3976 dB (at 4.5695 rad/sec), Pm = 22.448 deg (at 1.4064 rad/sec).]
Figure 12: Example #4: G(s) = (s − 1) / ((s + 1)(s − 3))
[Figure 12 panels: root locus (Real Axis vs. Imag Axis), full view and zoom. Legend: closed-loop poles, open-loop poles, compensator poles, compensator zeros.]
Compensator zero at −1 cancels the plant pole. Note the very unstable compensator pole at s = 9!!
Needed to get the RHP plant pole to branch off the real line and head into the LHP.
Other compensator pole is at much higher frequency.
Not sure what this looks like.
Separation principle gives a very powerful and simple way to develop
a dynamic output feedback controller
Note that the designer now focuses on selecting the appropriate
regulator and estimator pole locations. Once those are set, the
closed-loop response is specified.
Can almost consider the compensator to be a by-product.
These examples show that the design process is extremely simple.
Topic #18
16.31 Feedback Control
Closed-loop system analysis
Robustness
State-space eigenvalue analysis
Frequency domain Nyquist theorem.
Sensitivity
Copyright 2001 by Jonathan How.
Combined Estimators and Regulators
When we use the combination of an optimal estimator and an optimal regulator to design the controller, the compensator is called
Linear Quadratic Gaussian (LQG)
Special case of the controllers that can be designed using the separation principle.
The great news about an LQG design is that stability of the closed-loop system is guaranteed.
The designer is freed from having to perform any detailed mechanics - the entire process is fast and can be automated.
Now the designer just focuses on:
How to specify the state cost function (i.e. selecting z = C_z x) and what value of r to use.
Determine how the process and sensor noise enter into the system and what their relative sizes are (i.e. select R_w and R_v).
So the designer can focus on the performance related issues, being confident that the LQG design will produce a controller that stabilizes the system.
This sounds great so what is the catch??
The remaining issue is that sometimes the controllers designed using these state-space tools are very sensitive to errors in the knowledge of the model.
i.e., might work very well if the plant gain is 1, but be unstable if it is 0.9 or 1.1.
LQG is also prone to plant-pole/compensator-zero cancellation, which tends to be sensitive to modeling errors.
The good news is that the state-space techniques will give you a
controller very easily.
You should use the time saved to verify that the one
you designed is a good controller.
There are, of course, different definitions of what makes a controller good, but one important criterion is whether there is a reasonable chance that it would work on the real system as well as it does in Matlab: Robustness.
The controller must be able to tolerate some modeling error, because our models in Matlab are typically inaccurate.
- Linearized model
- Some parameters poorly known
- Ignores some higher frequency dynamics
Need to develop tools that will give us some insight on how well a controller can tolerate modeling errors.
Example
Consider the cart on a stick system, with the dynamics as given in the notes on the web. Define
    q = [x θ]ᵀ,   x = [q q̇]ᵀ
Then with y = x:
    ẋ = Ax + Bu
    y = Cx
For the parameters given in the notes, the system has an unstable pole at +5.6 and one at s = 0. There are plant zeros at ±5.
The target locations for the poles were determined using the SRL for both the regulator and estimator.
Assumes that the process noise enters through the actuators: B_w ≡ B, which is a useful approximation.
Regulator and estimator have the same SRL.
Choose the process/sensor ratio to be r/10 so that the estimator poles are faster than the regulator ones.
The resulting compensator is unstable (+16!!)
But this was expected. (why?)
[Symmetric root locus plot, Real Axis vs. Imag Axis.]
Figure 1: SRL for the regulator and estimator.
[Bode magnitude and phase vs. frequency (rad/sec) for Plant G and Compensator Gc.]
Figure 2: Plant and Controller
[Loop L magnitude and phase vs. frequency (rad/sec).]
Figure 3: Loop and Margins
Looking at both the Loop plots and the root locus, this system is stable with a gain of 1, but
Unstable for a slightly different gain and/or a slight change in the system phase (possibly due to some unmodeled delays)
Very limited chance that this would work on the real system.
Of course, this is an extreme example and not all systems are like this, but you must analyze to determine what robustness margins your controller really has.
Question: what analysis tools should we use?
Figure 4: Root Locus with frozen compensator dynamics. Shows sensitivity to overall gain: symbols are a gain of [0.995:.0001:1.005].
[Zoomed view of the root locus near the origin, Real Axis vs. Imag Axis.]
Analysis Tools to Use?
Eigenvalues give a definite answer on the stability (or not) of the closed-loop system.
Problem is that it is very hard to predict where the closed-loop poles will go as a function of errors in the plant model.
Consider the case where the model of the system is
    ẋ = A0 x + Bu
Controller also based on A0, so nominal closed-loop dynamics:
    [ A0    −BK          ]      [ A0 − BK    BK      ]
    [ LC    A0 − BK − LC ]  ∼  [ 0          A0 − LC ]
But what if the actual system has dynamics
    ẋ = (A0 + ΔA)x + Bu
Then perturbed closed-loop system dynamics are:
    [ A0 + ΔA    −BK          ]      [ A0 + ΔA − BK    BK      ]
    [ LC         A0 − BK − LC ]  ∼  [ ΔA              A0 − LC ]
Transformed Ā_cl not upper-block triangular, so perturbed closed-loop eigenvalues are NOT the union of regulator & estimator poles.
Can find the closed-loop poles for a specific ΔA, but
Hard to predict the change in location of the closed-loop poles for a range of possible modeling errors.
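A quick numerical sketch of this point (all matrices below are made-up 2-state values, not the cart example; `closed_loop`, `A0`, `K`, and `L` are hypothetical names for this illustration): with ΔA = 0 the transformed closed-loop matrix is block upper-triangular and its eigenvalues are exactly the designed regulator and estimator poles; any nonzero ΔA destroys that structure.

```python
import numpy as np

# Illustrative 2-state system (made-up numbers, not the cart example).
A0 = np.array([[0.0, 1.0], [2.0, 0.0]])   # open-loop unstable model
B  = np.array([[0.0], [1.0]])
C  = np.array([[1.0, 0.0]])
K  = np.array([[12.0, 7.0]])              # regulator gain (assumed given)
L  = np.array([[14.0], [51.0]])           # estimator gain (assumed given)

def closed_loop(dA):
    """Transformed closed-loop matrix [[A0+dA-BK, BK], [dA, A0-LC]]."""
    return np.block([[A0 + dA - B @ K, B @ K],
                     [dA,              A0 - L @ C]])

# Nominal case (dA = 0): block upper-triangular, so the eigenvalues are
# exactly eig(A0 - BK) union eig(A0 - LC) -- the separation principle.
nom = np.linalg.eigvals(closed_loop(np.zeros((2, 2))))
sep = np.concatenate([np.linalg.eigvals(A0 - B @ K),
                      np.linalg.eigvals(A0 - L @ C)])

# Perturbed case (dA != 0): the zero block is destroyed, so the
# closed-loop poles are no longer the union of the two designed sets.
pert = np.linalg.eigvals(closed_loop(np.array([[0.0, 0.0], [0.5, 0.0]])))
print(np.sort(nom.real), np.sort(pert.real))
```

With these numbers the nominal poles are {−2, −5, −7, −7}; the perturbed set must be recomputed for every candidate ΔA, which is exactly why a frequency-domain robustness measure is preferable.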
Frequency Domain Tests
Frequency domain stability tests provide further insights on the
stability margins.
Recall from the Nyquist Stability Theorem:
P = # poles of L(s) = G(s)Gc(s) in the RHP
Z = # closed-loop poles in the RHP
N = # clockwise encirclements of the Nyquist Diagram about the critical point −1.
Can show that Z = N + P (see notes on the web).
So for the closed-loop system to be stable, need Z = 0, i.e. N = −P.
If the loop transfer function L(s) has P poles in the RHP s-plane (and lim_{s→∞} L(s) is a constant), then for closed-loop stability, the locus of L(jω) for ω ∈ (−∞, ∞) must encircle the critical point (−1, 0) P times in the counterclockwise direction [Ogata 528].
This provides a binary measure of stability, or not.
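A numeric sanity check of Z = N + P (a toy example of mine, not from the notes): for L(s) = 2/(s − 1) there is P = 1 RHP pole, and counting encirclements of the critical point as the winding number of 1 + L(jω) about the origin recovers Z = 0.

```python
import numpy as np

# L(s) = 2/(s - 1): one open-loop RHP pole, so P = 1.
w = np.linspace(-1e4, 1e4, 200_001)
L = 2.0 / (1j * w - 1.0)

# Winding number of 1 + L(jw) about the origin = net phase change / 2*pi.
# Counterclockwise is positive; N in the theorem counts clockwise.
phase = np.unwrap(np.angle(1.0 + L))
winding_ccw = round((phase[-1] - phase[0]) / (2 * np.pi))

P = 1
N_cw = -winding_ccw
Z = N_cw + P            # number of closed-loop RHP poles
print(winding_ccw, Z)   # one CCW encirclement of -1, so Z = 0: stable
```

Here 1 + L = (s + 1)/(s − 1), so the closed loop indeed has its only pole at s = −1, consistent with Z = 0.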
Can use closeness of L(s) to the critical point as a measure of closeness to changing the number of encirclements.
Premise is that the system is stable for the nominal system, i.e. it has the right number of encirclements.
Goal of the robustness test is to see if the possible perturbations to our system model (due to modeling errors) can change the number of encirclements.
In this case, say that the perturbations can destabilize the system.
[Figure 5 panels: Nichols plots ('Nichols: Unstable Openloop System'), Mag vs. Phase (deg); full view and zoom near the critical point.]
Figure 5: Nichols Plot for the cart example which clearly shows the
sensitivity to the overall gain and/or phase lag.
[Figure 6: Nyquist-plane plot (Real Part vs. Imag Part) of the stable-OL nominal loop L_N(jω) and the perturbed loop L_A(jω), with the perturbation active between ω1 and ω2.]
Figure 6: Plot of Loop TF L_N(jω) = G_N(jω)Gc(jω) and a perturbation (over ω1 ≤ ω ≤ ω2) that changes the number of encirclements.
Model error in the frequency range ω1 ≤ ω ≤ ω2 causes a change in the number of encirclements of the critical point (−1, 0)
Nominal closed-loop system stable: L_N(s) = G_N(s)Gc(s)
Actual closed-loop system unstable: L_A(s) = G_A(s)Gc(s)
Bottom line: Large model errors when L_N ≈ −1 are very dangerous.
Frequency Domain Test
[Figure 7: Nyquist plot (Real Part vs. Imag Part) of the stable-OL loop, showing |L_N(jω)| and the distance |d(jω)| to the critical point.]
Figure 7: Geometric interpretation from Nyquist Plot of Loop TF.
|d(j)| measures distance of nominal Nyquist locus to critical point.
By vector addition: −1 + d(jω) = L_N(jω), so
    d(jω) = 1 + L_N(jω)
Actually more convenient to plot
    1/|d(jω)| = 1/|1 + L_N(jω)| ≜ |S(jω)|
the magnitude of the sensitivity transfer function S(s).
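The relation |S(jω)| = 1/|1 + L(jω)| is easy to evaluate on a frequency grid; the loop below is a made-up example (L = 10/(s(s+1)), not the cart system):

```python
import numpy as np

w = np.logspace(-2, 2, 4001)
s = 1j * w
L = 10.0 / (s * (s + 1.0))     # example loop transfer function (made up)

d = 1.0 + L                    # vector from the critical point -1 to L(jw)
S_mag = 1.0 / np.abs(d)        # |S(jw)| = 1/|d(jw)|

k = int(np.argmax(S_mag))
print(S_mag[k], w[k])          # peak |S| marks where L(jw) comes closest to -1
```

For this loop the peak is about 3.3 near ω ≈ 3.2, i.e. the Nyquist locus passes within roughly 0.3 of the critical point there.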
So high sensitivity corresponds to L_N(jω) being very close to the critical point.
[Sensitivity Plot: |S| and |L| vs. frequency (rad/sec).]
Figure 8: Sensitivity plot of the cart problem.
Ideally you would want the sensitivity to be much lower than this.
Same as saying that you want L(jω) to be far from the critical point.
Difficulty in this example is that the open-loop system is unstable, so L(jω) must encircle the critical point, making it hard for L(jω) to get too far away from the critical point.
Figure 9: Sensitivity for Example 1 G(s) = 8·14·20/((s+8)(s+14)(s+20)) with a low bandwidth controller
[Figure 9 panels: Nichols plots ('Nichols: Stable Openloop System', full view and zoom near the critical point) and Sensitivity Plot with |S| and |L| vs. frequency (rad/sec).]
Figure 10: Sensitivity for Example 3 G(s) = 8·14·20/((s−8)(s−14)(s−20))
[Figure 10 panels: Nichols plots ('Nichols: Unstable Openloop System', full view and zoom near the critical point) and Sensitivity Plot with |S| and |L| vs. frequency (rad/sec).]
Figure 11: Sensitivity for Example 1 G(s) = 8·14·20/((s+8)(s+14)(s+20)) with a high bandwidth controller
[Figure 11 panels: Nichols plots ('Nichols: Stable Openloop System', full view and zoom near the critical point) and Sensitivity Plot with |S| and |L| vs. frequency (rad/sec).]
Shows that as the controller bandwidth increases, can expect L(jω) to get much closer to the critical point. Push-pop.
Summary
LQG gives you a great way to design a controller for the nominal
system.
But there are no guarantees about the stability/performance if the actual system is slightly different.
Basic analysis tool is the Sensitivity Plot
No obvious ways to tailor the specification of the LQG controller to improve any lack of robustness
Apart from the obvious 'lower the controller bandwidth' approach.
And sometimes you need the bandwidth just to stabilize the
system.
Very hard to include additional robustness constraints into LQG
See my Ph.D. thesis in 1992.
Other tools have been developed that allow you to directly shape the sensitivity plot |S(jω)|
Called H∞ and μ.
Good news: Lack of robustness is something you should look for,
but it is not always an issue.
MATLAB is a trademark of The MathWorks, Inc.
Topic #19
16.31 Feedback Control
Closed-loop system analysis
Bounded Gain Theorem
Copyright 2001 by Jonathan How.
Bounded Gain
There exist very easy ways of testing (analytically) whether
    |S(jω)| < γ,   ∀ω
SISO Bounded Gain Theorem: The gain of the generic stable system
    ẋ = Ax + Bu
    y = Cx + Du
is bounded in the sense that
    G_max = sup_ω |G(jω)| = sup_ω |C(jωI − A)⁻¹B + D| < γ
if and only if:
1. |D| < γ
2. The Hamiltonian matrix
    H = [ A + B(γ²I − DᵀD)⁻¹DᵀC         B(γ²I − DᵀD)⁻¹Bᵀ            ]
        [ −Cᵀ(I + D(γ²I − DᵀD)⁻¹Dᵀ)C    −Aᵀ − CᵀD(γ²I − DᵀD)⁻¹Bᵀ ]
has no eigenvalues on the imaginary axis.
Note that with D = 0, the Hamiltonian matrix is
    H = [ A        (1/γ²)BBᵀ ]
        [ −CᵀC     −Aᵀ       ]
Eigenvalues of this matrix are symmetric about the real and imaginary axis (related to the SRL).
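For D = 0 the test is a few lines of numerics. The sketch below checks the gain of G(s) = 1/(s + 1), whose true peak gain is 1 at ω = 0 (`gain_below` is a hypothetical helper name, not from the notes):

```python
import numpy as np

A = np.array([[-1.0]])
B = np.array([[1.0]])
C = np.array([[1.0]])   # G(s) = 1/(s+1), sup |G(jw)| = 1 at w = 0

def gain_below(gamma, tol=1e-9):
    """True iff sup_w |G(jw)| < gamma: H has no jw-axis eigenvalues."""
    H = np.block([[A, (B @ B.T) / gamma**2],
                  [-C.T @ C, -A.T]])
    return not np.any(np.abs(np.linalg.eigvals(H).real) < tol)

# gamma = 1.1: H eigenvalues are real (about +-0.417), so the gain is < 1.1.
# gamma = 0.9: H eigenvalues are about +-0.484j, on the axis: not < 0.9.
print(gain_below(1.1), gain_below(0.9))   # True False
```

Note the test is binary in γ, which is why a search over γ (below) is needed to actually locate G_max.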
So sup_ω |G(jω)| < γ iff H has no eigenvalues on the jω-axis.
An equivalent test is if there exists an X ≥ 0 such that
    AᵀX + XA + CᵀC + (1/γ²)XBBᵀX = 0
and A + (1/γ²)BBᵀX is stable.
This is an Algebraic Riccati Equation (ARE)
Typical application: since e = y − r, then for perfect tracking, we want e ≈ 0
⇒ want S ≈ 0 since e = −Sr + . . .
Sufficient to discuss the magnitude of S because the only requirement is that it be small.
Direct approach is to develop an upper bound for |S| and then test whether
    |S(jω)| < 1/|W_s(jω)|   ∀ω?
or equivalently, whether |W_s(jω)S(jω)| < 1, ∀ω
Note: The state-space tests can also be used for MIMO systems, but in that case we need different frequency domain tests.
Typically pick simple forms for weighting functions (first or second order), and then cascade them as necessary. Basic one:
    W_s(s) = (s/M + ωB) / (s + ωB·A)
[Plot: |1/W_s| vs. frequency (rad/sec), with low-frequency floor A, crossover near ωB, and high-frequency limit M.]
Figure 1: Example of a standard performance weighting filter. Typically have A ≪ 1, M > 1, and |1/W_s| = 1 at ω ≈ ωB
Thus we can test whether |W_s(jω)S(jω)| < 1 ∀ω by:
Forming a state space model of the combined system W_s(s)S(s)
Use the bounded gain theorem with γ = 1
Typically use a bisection search on γ to find |W_s(jω)S(jω)|_max
Example: Simple system
    G(s) = 150 / ((10s + 1)(0.05s + 1)²)   with Gc = 1
Require ωB ≈ 5, a slope of 1, low frequency value less than A = 0.01, and a high frequency peak less than M = 5.
    W_s = (s/M + ωB) / (s + ωB·A)
[Plot: 'Sensitivity and Inverse of Performance Weight' showing S, W_s·S, and 1/W_s vs. frequency; peak |W_s·S| = 1.0219.]
Figure 2: Want |W_s S| < 1 ∀ω, so we just fail the test
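The reported peak can be sanity-checked on a dense frequency grid (a rough sketch; the rigorous route is the state-space bounded gain test above):

```python
import numpy as np

w = np.logspace(-3, 4, 20001)
s = 1j * w

G = 150.0 / ((10*s + 1) * (0.05*s + 1)**2)
S = 1.0 / (1.0 + G)               # sensitivity with Gc = 1

M, A, wB = 5.0, 0.01, 5.0         # weight parameters from the example
Ws = (s/M + wB) / (s + wB*A)

peak = float(np.max(np.abs(Ws * S)))
print(peak)                       # just above 1, so the test is failed
```

The grid maximum lands slightly above 1 (the notes report 1.0219), around ω ≈ 13 rad/sec where the loop crosses over.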
Sketch of Proof
Sufficiency: consider γ = 1, and assume D = 0 for simplicity.
[Block diagram: r and the feedback signal y₂ sum to give u; u drives G(s) to produce y, which drives Gᵀ(−s) to produce y₂.]
    G(s) := [ A  B ; C  0 ]   and   Gᵀ(−s) := [ −Aᵀ  −Cᵀ ; Bᵀ  0 ]
Note that
    u/r = S(s) = [1 − Gᵀ(−s)G(s)]⁻¹
Now find the state space representation of S(s):
    ẋ₁ = Ax₁ + B(r + y₂) = Ax₁ + BBᵀx₂ + Br
    ẋ₂ = −Aᵀx₂ − Cᵀy = −Aᵀx₂ − CᵀCx₁
    u = r + Bᵀx₂
⇒
    [ ẋ₁ ; ẋ₂ ] = [ A  BBᵀ ; −CᵀC  −Aᵀ ] [ x₁ ; x₂ ] + [ B ; 0 ] r
    u = [ 0  Bᵀ ] [ x₁ ; x₂ ] + r
⇒ poles of S(s) are contained in the eigenvalues of the matrix H.
Now assume that H has no eigenvalues on the jω-axis
⇒ S = [I − Gᵀ(−s)G(s)]⁻¹ has no poles there
⇒ I − Gᵀ(−s)G(s) has no zeros there
So I − Gᵀ(−s)G(s) has no zeros on the jω-axis, and we also know that I − Gᵀ(−s)G(s) → I > 0 as ω → ∞ (since D = 0).
Together, these imply that
    I − Gᵀ(−jω)G(jω) > 0   ∀ω
For a SISO system, this condition is equivalent to
    |G(jω)| < 1   ∀ω
which is true iff
    G_max = max_ω |G(jω)| < 1
Can use state-space tools to test if a generic system has a gain less than 1, and can easily re-do this analysis to include the bound γ.
Issues
Note that it is actually not easy to find G_max directly using the state space techniques
It is easy to check if G_max < γ
So we just keep changing γ to find the smallest value for which we can show that G_max < γ (called γ_min)
⇒ Bisection search algorithm (see web):
1. Select γ_u, γ_l so that γ_l ≤ G_max ≤ γ_u
2. Test (γ_u − γ_l)/γ_l < TOL.
   Yes ⇒ Stop (G_max ≈ (γ_u + γ_l)/2)
   No ⇒ go to step 3.
3. With γ = (γ_l + γ_u)/2, test if G_max < γ using λ_i(H)
4. If λ_i(H) ∈ jR, then set γ_l = γ (test value too low); otherwise set γ_u = γ and go to step 2.
This is the basis of H∞ control theory.
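The four steps above can be sketched directly in code, combining the bisection with the D = 0 Hamiltonian test; this example (hypothetical helper names) finds G_max = 1 for G(s) = 1/(s + 1):

```python
import numpy as np

A = np.array([[-1.0]])
B = np.array([[1.0]])
C = np.array([[1.0]])   # G(s) = 1/(s+1), true G_max = 1

def gain_below(gamma, tol=1e-9):
    # Step 3 test: gamma is an upper bound iff H has no jw-axis eigenvalues.
    H = np.block([[A, (B @ B.T) / gamma**2],
                  [-C.T @ C, -A.T]])
    return not np.any(np.abs(np.linalg.eigvals(H).real) < tol)

g_l, g_u = 1e-3, 1e3             # step 1: bracket G_max
while (g_u - g_l) / g_l > 1e-6:  # step 2: relative tolerance
    g = 0.5 * (g_l + g_u)        # step 3: test the midpoint
    if gain_below(g):
        g_u = g                  # step 4: test value was an upper bound
    else:
        g_l = g                  # step 4: test value too low

print(0.5 * (g_l + g_u))         # approximately 1.0
```

About 30 iterations narrow the bracket from six decades to the requested relative tolerance.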
Topic #20
16.31 Feedback Control
Robustness Analysis
Model Uncertainty
Robust Stability (RS) tests
RS visualizations
Copyright 2001 by Jonathan How.
Model Uncertainty
Prior analysis assumed a perfect model. What if the model is incorrect ⇒ actual system dynamics G_A(s) are in one of the sets
Multiplicative model: G_p(s) = G_N(s)(1 + E(s))
Additive model: G_p(s) = G_N(s) + E(s)
where
1. G_N(s) is the nominal dynamics (known)
2. E(s) is the modeling error, not known directly, but a bound E_0(s) is known (assumed stable) where
    |E(jω)| ≤ |E_0(jω)|   ∀ω
If E_0(jω) small, our confidence in the model is high ⇒ nominal model is a good representation of the actual dynamics
If E_0(jω) large, our confidence in the model is low ⇒ nominal model is not a good representation of the actual dynamics
[Figure 1 panel: |G| vs. frequency (rad/sec) for G_N with a multiplicative uncertainty band.]
Figure 1: Typical system TF with multiplicative uncertainty
Simple example: Assume we know that the actual dynamics are
    G_A(s) = ωn² / ( s²(s² + 2ζωn s + ωn²) )
but we take the nominal model to be G_N = 1/s².
Can explicitly calculate the error E(s), and it is shown in the plot.
Can also calculate an LTI overbound E_0(s) of the error. Since E(s) is not normally known, it is the bound E_0(s) that is used in our analysis tests.
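The error for this example works out to E = G_A/G_N − 1 = −(s² + 2ζωn s)/(s² + 2ζωn s + ωn²), which can be checked numerically. The values of ωn and ζ below are illustrative only (the notes do not give numbers at this point):

```python
import numpy as np

# E(s) = G_A/G_N - 1 for the example; wn and zeta are assumed values.
wn, zeta = 10.0, 0.05
w = np.logspace(-2, 4, 4001)
s = 1j * w

GA = wn**2 / (s**2 * (s**2 + 2*zeta*wn*s + wn**2))
GN = 1.0 / s**2
E  = GA / GN - 1.0   # = -(s^2 + 2*zeta*wn*s)/(s^2 + 2*zeta*wn*s + wn^2)

# Small error well below the resonance, ~100% well above it, and a
# large peak near w = wn that any overbound E0 must cover.
print(abs(E[0]), abs(E[-1]), np.max(np.abs(E)))
```

This shows why the overbound E_0 must be small at low frequency, near 1 at high frequency, and large enough to cover the resonant peak.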
[Figure 2 panels: |G| vs. frequency (rad/sec) for G_N, G_A, E = G_A/G_N − 1, and the overbound E_0.]
Figure 2: Various TFs for the example system
[Figure 3 panel: |G| vs. frequency (rad/sec) showing G_N, G_A, and the possible Gs given E_0.]
Figure 3: G_N with one partial bound. Can add many others to develop the overall bound that would completely include G_A.
Usually E_0(jω) not known, so we would have to develop it from our approximate knowledge of the system dynamics.
Want to demonstrate that the system is stable for any possible perturbed dynamics in the set G_p(s) ⇒ Robust Stability
Unstructured Uncertainty Model
Standard error model lumps all errors in the system into the actuator dynamics.
Could just as easily use the sensor dynamics, and for MIMO systems we typically use both.
    G_p(s) = G_N(s)(1 + E(s))
E(s) is any stable TF that satisfies the magnitude bound
    |E(jω)| ≤ |E_0(jω)|   ∀ω
[Block diagram: the input u passes through (1 + E) before G, giving y.]
Called an unstructured modeling error and/or uncertainty.
With a controller Gc(s), we have that
    G_p Gc = G_N Gc (1 + E)  ⇒  L_p = L_N(1 + E)
which is a set of possible perturbed loop transfer functions.
Can use |E_0(jω)| to accentuate the model uncertainty in certain frequency ranges (percentage error)
Typically use
    E_0(s) = (τs + r_0) / ((τ/r_∞)s + 1)
where
    r_0 = relative weight at low freq (≪ 1)
    r_∞ = relative weight at high freq (≈ 2)
    1/τ = approx freq at which the relative uncertainty is 100%.
[Figure 4 panel: |E_0| vs. frequency (rad/sec), rising from r_0 at low frequency toward r_∞ at high frequency, crossing 100% near ω = 1/τ.]
Figure 4: Typical input uncertainty weighting. Low error at low frequency and larger error at high frequency.
Note that L_p = L_N(1 + E)  ⇒  L_p − L_N = L_N E
So we have that
    |L_p(jω) − L_N(jω)| = |L_N(jω)| |E(jω)| ≤ |L_N(jω)| |E_0(jω)|
At each frequency point, we must test the possible |L_p(jω) − L_N(jω)|, which is equivalent to saying that the actual LTF is anywhere within a circle (radius |L_N(jω)E_0(jω)|) centered at the point L_N(jω).
Example: Consider a simple system with
8s + 64 0.18s + 0.09
G(s) =
s
2
+ 12s + 20
with E
0
(s) =
0.5s + 1
Weight E
0
(s)
Figure 5: Uncertainty weighting E_0(s) = 0.09(2s + 1)/(0.5s + 1), magnitude vs. frequency.
Possible Perturbations to the LTF: Multiplicative

Figure 6: Nominal loop TF and possible multiplicative errors (Nyquist plane, Real vs. Imag axes).
Figure 7: Consider 4 possible multiplicative perturbations of L_N: L_N(1 + E_0), L_N(1 − E_0), L_N(1 − jE_0), L_N(1 + jE_0) (Nyquist plane, Real vs. Imag axes).

- L_p(s) = L_N(s)(1 + E(s)), and can have

      E(s) = E_0(s)      E(s) = −E_0(s)
      E(s) = jE_0(s)     E(s) = −jE_0(s)
Robust Stability Tests

- From the Nyquist Plot, we developed a measure of the closeness of the loop transfer function (LTF) to the critical point:

      1/|d(jω)| = 1/|1 + L_N(jω)| ≜ |S_N(jω)|

  the magnitude of the nominal sensitivity transfer function S(s).
- Based on this result, the test for robust stability is whether:

      |T_N(jω)| = | L_N(jω) / (1 + L_N(jω)) | < 1/|E_0(jω)|   ∀ω

  a magnitude bound on the nominal complementary sensitivity transfer function T(s).
  - Recall that S(s) + T(s) ≡ 1.
- Proof: With d(jω) = 1 + L_N(jω), the criterion of interest for robust stability is whether the possible changes to the LTF

      |L_p(jω) − L_N(jω)|

  exceed the distance from the LTF to the critical point

      |d(jω)| = |1 + L_N(jω)|

  - Because if they do, then it is possible that the modeling error could change the number of encirclements.
  - The actual system could be unstable.
- By geometry, we need to test if:

      |L_p(jω) − L_N(jω)| < |d(jω)| = |1 + L_N(jω)|

- But L_p = L_N(1 + E)  =>  L_p − L_N = L_N E.
- So we must test whether

      |L_N E| < |1 + L_N|    or    | L_N E / (1 + L_N) | < 1

- Recall that T(s) ≜ L(s)/(1 + L(s)), so

      |T_N(jω) E(jω)| = |T_N(jω)| |E(jω)| <= |T_N(jω)| |E_0(jω)|

- So the test for robust stability is to determine whether

      |T_N(jω)| |E_0(jω)| < 1   ∀ω
Visualization of Robustness Tests

- Stability robustness test with multiplicative uncertainty given by:

      |T_N(jω)| < 1/|E_0(jω)|   ∀ω

- Consider the typical case of a system with poorly known high-frequency dynamics, so that

      |E_0(jω)| ≪ 1  ∀ω < ω_l      |E_0(jω)| ≫ 1  ∀ω > ω_h

Figure 8: Visualization of the robustness test: |T_N| must stay below the robustness boundary 1/|E_0| (good performance region at low frequency; magnitude vs. frequency, rad/sec).

- Bottom line: with high-frequency uncertainty in the system dynamics, we must limit the bandwidth of the nominal system control if we want to achieve robust stability.
Summary

- Robust Stability Analysis:
  - Use G_N(s) to design G_c(s)
  - Develop a bound for the uncertainty model E_0(s) (stable, min phase)
  - Check that |T_N(jω)| < 1/|E_0(jω)|  ∀ω
- State-space tools for testing this condition are imperative. Can use the bounded gain theorem to determine if

      max_ω |T_N(jω) E_0(jω)| < 1

- Robust Stability Synthesis:
  - Explicitly design the controller G_c(s) to ensure that |T_N(jω)| < 1/|E_0(jω)|  ∀ω
  - Harder, but can do this using H_∞ techniques.
- Primary difference between additive and multiplicative uncertainties is at high frequency. The additive approach still allows large errors there, but the multiplicative errors are washed out by the roll-off in G.
- Potential problem with this approach is that the test only considers the magnitude of the error. All phases are allowed, since we only restrict |E| <= |E_0|.
  - The actual error could be very large, but with a phase that takes it away from the critical point.
  - Tests exist to add the phase information, but these are harder to compute.
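The analysis test |T_N(jω)| |E_0(jω)| < 1 is easy to check numerically on a frequency grid. A minimal sketch for the example G(s) and E_0(s) above; the unity-feedback assumption L_N = G is mine (the notes do not fix G_c for this example):

```python
import numpy as np

# Robust stability test: max over frequency of |T_N(jw)| * |E0(jw)| < 1.
# G(s), E0(s) are the example from the notes; unity feedback (L_N = G)
# is an illustrative assumption.
w = np.logspace(-2, 3, 500)
s = 1j * w
G = np.polyval([8.0, 64.0], s) / np.polyval([1.0, 12.0, 20.0], s)
E0 = np.polyval([0.18, 0.09], s) / np.polyval([0.5, 1.0], s)
T = G / (1.0 + G)                 # nominal complementary sensitivity
margin = np.max(np.abs(T * E0))
print(f"max |T_N E0| = {margin:.3f}")   # < 1 -> robustly stable
```

With this loop the peak of |T_N E_0| stays well below 1, so the unity-feedback nominal design tolerates the modeled uncertainty.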
Topic #21
16.31 Feedback Control

MIMO Systems
- Singular Value Decomposition
- Multivariable Frequency Response Plots

Copyright 2001 by Jonathan How.
Multivariable Frequency Response

- In the MIMO case, the system G(s) is described by a p × m transfer function matrix (TFM).
  - Still have that G(s) = C(sI − A)^{-1} B + D.
  - But G(s) -> A, B, C, D is MUCH less obvious than in the SISO case.
  - Discussion of poles and zeros of MIMO systems is also much more complicated.
- In the SISO case we use the Bode plot to develop a measure of the system "size".
  - Given z = Gw, where G(jω) = |G(jω)| e^{jφ(ω)}.
  - Then w = |w| e^{j(ω_1 t + ψ)} applied to |G(jω)| e^{jφ(ω)} yields

        |w| |G(jω_1)| e^{j(ω_1 t + ψ + φ(ω_1))} ≜ |z| e^{j(ω_1 t + ψ_o)}  =>  z

  - Amplification and phase shift of the input signal are obvious in the SISO case.
- MIMO extension?
  - Is the response of the system large or small?

        G(s) = [ 10^3/s    0
                 0         10^{-3}/s ]
- For MIMO systems, cannot just plot all of the G_ij elements of G.
  - Ignores the coupling that might exist between them.
  - So not enlightening.
- Basic MIMO frequency response:
  - Restrict all inputs to be at the same frequency.
  - Determine how the system responds at that frequency.
  - See how this response changes with frequency.
- So inputs are w = w_c e^{jωt}, where w_c ∈ C^m.
- Then we get z = G(s)|_{s=jω} w, with z = z_c e^{jωt} and z_c ∈ C^p.
- We need only analyze z_c = G(jω) w_c.
- As in the SISO case, we need a way to establish if the system response is large or small.
  - How much amplification can we get with a bounded input?
- Consider z_c = G(jω) w_c and set ||w_c||_2 = sqrt(w_c^H w_c) ≡ 1. What can we say about ||z_c||_2?
  - Answer depends on ω and on the direction of the input w_c.
  - Best found using singular values.
Singular Value Decomposition

- Must perform the SVD of the matrix G(s) at each frequency s = jω:

      G(jω) ∈ C^{p×m},   U ∈ C^{p×p},   Σ ∈ R^{p×m},   V ∈ C^{m×m}

      G = U Σ V^H

- U^H U = I, U U^H = I, V^H V = I, V V^H = I, and Σ is diagonal.
- Diagonal elements σ_k >= 0 of Σ are the singular values of G:

      σ_i = sqrt( λ_i(G^H G) )   or   σ_i = sqrt( λ_i(G G^H) )

  the positive ones are the same from both formulas.
- The columns of the matrices U and V (u_i and v_j) are the associated eigenvectors:

      G^H G v_j = σ_j² v_j,    G G^H u_i = σ_i² u_i,    G v_i = σ_i u_i

- If rank(G) = r <= min(p, m), then
  - σ_k > 0, k = 1, ..., r
  - σ_k = 0, k = r + 1, ..., min(p, m)
- An SVD gives a very detailed description of how a matrix (the system G) acts on a vector (the input w) at a particular frequency.
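These relations can be verified numerically; a quick sketch with an arbitrary complex 2×3 matrix standing in for G(jω) at one frequency (the matrix itself is not from the notes):

```python
import numpy as np

# Check sigma_i = sqrt(lambda_i(G^H G)) and G^H G v_i = sigma_i^2 v_i
# for an arbitrary complex 2x3 matrix standing in for G(jw).
rng = np.random.default_rng(0)
G = rng.standard_normal((2, 3)) + 1j * rng.standard_normal((2, 3))

U, sig, Vh = np.linalg.svd(G)                  # G = U Sigma V^H
lam = np.linalg.eigvalsh(G.conj().T @ G)       # eigenvalues of G^H G (ascending)
sig_from_eig = np.sqrt(np.maximum(lam, 0.0))[::-1][: sig.size]

print(np.allclose(sig, sig_from_eig))          # positive sigmas agree
v1 = Vh.conj().T[:, 0]                         # first right singular vector
print(np.allclose(G.conj().T @ (G @ v1), sig[0] ** 2 * v1))
```

Both checks print True: the nonzero singular values match the square roots of the eigenvalues of G^H G, and the columns of V are its eigenvectors.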
- So how can we use this result?
  - Fix the size ||w_c||_2 = 1 of the input, and see how large we can make the output.
  - Since we are working at a single frequency, we just analyze the relation

        z_c = G_w w_c,   G_w ≜ G(s = jω)

- Define the maximum and minimum amplifications as:

      σ̄ ≜ max_{||w_c||_2 = 1} ||z_c||_2      σ̲ ≜ min_{||w_c||_2 = 1} ||z_c||_2

- Then we have that (let q = min(p, m)):

      σ̄ = σ_1
      σ̲ = σ_q   if p >= m ("tall")
      σ̲ = 0     if p < m  ("wide")

- Can use σ̄ and σ̲ to determine the possible amplification and attenuation of the input signals.
- Since G(s) changes with frequency, so will σ̄ and σ̲.
SVD Example

- Consider (wide case)

      G_w = [ 5  0    0
              0  0.5  0 ]
          = [ 1  0 ] [ 5  0    0 ] [ 1  0  0
              0  1 ] [ 0  0.5  0 ] [ 0  1  0
                                     0  0  1 ]^H = U Σ V^H

  so that σ_1 = 5 and σ_2 = 0.5:

      max_{||w_c||_2 = 1} ||G_w w_c||_2 = 5 = σ_1
      min_{||w_c||_2 = 1} ||G_w w_c||_2 = 0 ≠ σ_2

- But now consider (tall case)

      G_w = [ 5  0
              0  0.5
              0  0   ]
          = [ 1  0  0 ] [ 5  0   ] [ 1  0
              0  1  0 ] [ 0  0.5 ] [ 0  1 ]^H = U Σ V^H
              0  0  1 ] [ 0  0   ]

  so that σ_1 = 5 and σ_2 = 0.5 still:

      max_{||w_c||_2 = 1} ||G_w w_c||_2 = 5 = σ_1
      min_{||w_c||_2 = 1} ||G_w w_c||_2 = 0.5 = σ_2
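The two cases can be checked directly; a minimal numpy sketch:

```python
import numpy as np

# Wide and tall G_w share sigma_1 = 5, sigma_2 = 0.5, but the minimum
# amplification over unit inputs differs: 0 (wide) vs 0.5 (tall).
Gw_wide = np.array([[5.0, 0.0, 0.0],
                    [0.0, 0.5, 0.0]])
Gw_tall = np.array([[5.0, 0.0],
                    [0.0, 0.5],
                    [0.0, 0.0]])

s_wide = np.linalg.svd(Gw_wide, compute_uv=False)   # [5.0, 0.5]
s_tall = np.linalg.svd(Gw_tall, compute_uv=False)   # [5.0, 0.5]

# Wide case: the unit input e3 lies in the null space, so the output is
# zero even though sigma_2 = 0.5.
e3 = np.array([0.0, 0.0, 1.0])
print(np.linalg.norm(Gw_wide @ e3))                 # 0.0

# Tall case: the worst unit input e2 still produces sigma_2 of output.
e2 = np.array([0.0, 1.0])
print(np.linalg.norm(Gw_tall @ e2))                 # 0.5
```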
- For MIMO systems, the gains (the σ's) are only part of the story, as we must also consider the input direction.
- To analyze this point further, note that we can rewrite

      G_w = U Σ V^H = [ u_1 ... u_p ] [ diag(σ_1, ..., σ_m)
                                        0                  ] [ v_1^H ; ... ; v_m^H ]
          = Σ_{i=1}^{m} σ_i u_i v_i^H

  - Assumed the tall case for simplicity, so p > m and q = m.
- Can now analyze the impact of various alternatives for the input.
  - Only looking at one frequency, so the basic signal is harmonic.
  - But we are free to pick the relative sizes and phases of each of the components of the input vector w_c.
    - These define the input direction.
- For example, we could pick w_c = v_1; then

      z_c = G_w w_c = ( Σ_{i=1}^{m} σ_i u_i v_i^H ) v_1 = σ_1 u_1

  since v_i^H v_j = δ_ij.
  - Output amplified by σ_1. The relative sizes and phases of each of the components of the output are given by the vector z_c.
- By selecting other input directions (at the same frequency), we can get quite different amplifications of the input signal:

      σ̲ <= ||G_w w_c||_2 / ||w_c||_2 <= σ̄

- Thus we say that
  - G_w is large if σ̲(G_w) ≫ 1
  - G_w is small if σ̄(G_w) ≪ 1
- MIMO frequency response: plots of σ̄(jω) and σ̲(jω).
  - Then use the singular value vectors to analyze the response at a particular frequency.
- Example: just picked a random 3×3 system

      a =  -0.7500  -2.0000   0        0
            2.0000  -0.7500   0        0
            0        0       -1.0000  -4.0000
            0        0        4.0000  -1.0000
      b =  -1.9994   6.4512  -0.0989
            3.4500   3.3430  -0.7836
            4.0781   5.9542  -8.0204
            3.5595  -6.0123   1.2865
      c =  -1.0565   0.5287  -2.1707   0.6145
            1.4151   0.2193  -0.0592   0.5077
           -0.8051  -0.9219  -1.0106   1.6924
      d =   0.0044   0.0092   0.0041
            0.0062   0.0074   0.0094
            0.0079   0.0018   0.0092

- The singular value plot for this state-space system is shown below:

Figure 1: SVD of a typical 3×3 TFM (singular values vs. frequency, rad/sec).
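The frequency response at one point can be formed directly from the state-space data; a sketch that evaluates G(jω) = C(jωI − A)^{-1}B + D at ω = 3 rad/sec (the frequency used below) and reads off the singular values and the best input direction v_1:

```python
import numpy as np

# Evaluate the 3x3 TFM of the random state-space system above at
# w = 3 rad/sec and take its SVD; v_1 is the best input direction.
A = np.array([[-0.75, -2.00,  0.00,  0.00],
              [ 2.00, -0.75,  0.00,  0.00],
              [ 0.00,  0.00, -1.00, -4.00],
              [ 0.00,  0.00,  4.00, -1.00]])
B = np.array([[-1.9994,  6.4512, -0.0989],
              [ 3.4500,  3.3430, -0.7836],
              [ 4.0781,  5.9542, -8.0204],
              [ 3.5595, -6.0123,  1.2865]])
C = np.array([[-1.0565,  0.5287, -2.1707,  0.6145],
              [ 1.4151,  0.2193, -0.0592,  0.5077],
              [-0.8051, -0.9219, -1.0106,  1.6924]])
D = np.array([[0.0044, 0.0092, 0.0041],
              [0.0062, 0.0074, 0.0094],
              [0.0079, 0.0018, 0.0092]])

w = 3.0
Gjw = C @ np.linalg.solve(1j * w * np.eye(4) - A, B) + D
U, sig, Vh = np.linalg.svd(Gjw)
v1 = Vh.conj().T[:, 0]           # best input direction at this frequency
print("singular values:", sig)
print("|v1| =", np.abs(v1), "  arg v1 =", np.angle(v1))
```

Sweeping w over a log-spaced grid and plotting sig reproduces the SV plot in Figure 1.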
- Then applied a sinusoid input at about 3 rad/sec using the v_1 direction (see code on how this is done):

      w_c                   |w_c|     arg w_c
      0.4715                0.4715     0
      -0.2684 + 0.7121i     0.7610     1.9313
      -0.2411 - 0.3746i     0.4455    -2.1427

  so the three input sinusoids are scaled quite differently and phase shifted quite substantially.

Figure 2: MIMO system response (lsim vs. pure sinusoids) using the v_1 input direction. Best input: gain 16.0751. Outputs converge to the expected output directions.
- Also picked another random input direction at this same frequency and got this response:

Figure 3: MIMO system response using a random input direction. Random input: gain 4.6334. Outputs quite different from the maximum output directions.

Summary

- G_w is said to be large if σ̲(G_w) ≫ 1
- G_w is said to be small if σ̄(G_w) ≪ 1
- MIMO frequency response: plots of σ̄(jω) and σ̲(jω).
  - Then use the singular value vectors to analyze the response at a particular frequency.
MIMO Performance

- Not much changes from the SISO case, but all elements in the block diagram are now TFMs, so we must pay attention to the order of the multiplication since G G_c ≠ G_c G in general.

  [Block diagram: reference r and feedback form the error e, which drives G_c(s); the control u plus input disturbance d_i drives G(s); output disturbance d_o adds at the plant output y; sensor noise n enters the feedback path.]

- Now use input (·)_i and output (·)_o loops, depending on where we break the loop:

      L_i = G_c G                 L_o = G G_c
      S_i = (I + L_i)^{-1}        S_o = (I + L_o)^{-1}
      S_i + T_i = I               S_o + T_o = I

- Now have that

      y = T_o (r − n) + S_o (G d_i + d_o)
      u = G_c S_o (r − n) − G_c S_o d_o − G_c S_o G d_i

- So the primary objectives are still the same:
  - S_o and S_i should be small at low frequency.
  - T_o should be small at high frequency.
- Loop conditions for S_o to be small are:¹

      σ̄(S_o) = σ̄( (I + L_o)^{-1} ) = 1 / σ̲(I + L_o)

  Since σ̲(I + L_o) >= σ̲(L_o) − 1, then if σ̲(L_o) ≫ 1, we have

      σ̲(I + L_o) >= σ̲(L_o) − 1 ≫ 1

  So, if we make σ̲(L_o) ≫ 1, then we get σ̄(S_o) ≪ 1.
- Conditions for T_o to be small are:²

      σ̄(T_o) = σ̄( [I + L_o]^{-1} L_o ) <= σ̄( [I + L_o]^{-1} ) σ̄(L_o) = σ̄(L_o) / σ̲(I + L_o)

  So if we make σ̄(L_o) ≪ 1, then we will get σ̄(T_o) ≪ 1.

  ¹ Useful identities are σ̲(L_o) − 1 <= σ̲(I + L_o) <= σ̲(L_o) + 1 and σ̄(G^{-1}) = 1/σ̲(G).
  ² σ̄(AB) <= σ̄(A) σ̄(B).

- Similar to SISO loop shaping, but we have redefined "large" and "small" to use the singular values of L_o.
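The footnoted identities are easy to spot-check numerically; a sketch with a random complex matrix standing in for L_o (purely illustrative, not from the notes):

```python
import numpy as np

# Spot-check: sigma_bar((I+L)^{-1}) = 1/sigma_min(I+L), and
# sigma_min(L) - 1 <= sigma_min(I+L) <= sigma_min(L) + 1.
rng = np.random.default_rng(2)
L = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
I = np.eye(3)

s_IL = np.linalg.svd(I + L, compute_uv=False)               # descending
s_inv = np.linalg.svd(np.linalg.inv(I + L), compute_uv=False)
sL = np.linalg.svd(L, compute_uv=False)

print(np.isclose(s_inv[0], 1.0 / s_IL[-1]))                 # True
print(sL[-1] - 1.0 <= s_IL[-1] <= sL[-1] + 1.0)             # True
```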
Topic #22
16.31 Feedback Control

Deterministic LQR
- Optimal control and the Riccati equation
- Lagrange multipliers
- The Hamiltonian matrix and the symmetric root locus

Factoids: for symmetric R,

    ∂(u^T R u)/∂u = 2 u^T R      ∂(R u)/∂u = R

Copyright 2001 by Jonathan How.
Linear Quadratic Regulator (LQR)

- We have seen the solutions to the LQR problem using the symmetric root locus, which defines the location of the closed-loop poles.
  - Linear full-state feedback control.
  - Would like to demonstrate from first principles that this is the optimal form of the control.
- Deterministic Linear Quadratic Regulator.
  Plant:

      ẋ(t) = A(t) x(t) + B_u(t) u(t),   x(t_0) = x_0
      z(t) = C_z(t) x(t)

  Cost:

      J_LQR = ∫_{t_0}^{t_f} [ z^T(t) R_zz(t) z(t) + u^T(t) R_uu(t) u(t) ] dt + x^T(t_f) P_{t_f} x(t_f)

  where P_{t_f} >= 0, R_zz(t) > 0 and R_uu(t) > 0.
  - Define R_xx = C_z^T R_zz C_z >= 0.
  - A(t) is a continuous function of time.
  - B_u(t), C_z(t), R_zz(t), R_uu(t) are piecewise continuous functions of time, and all are bounded.
- Problem Statement: Find the input u(t) ∀t ∈ [t_0, t_f] to minimize J_LQR.
- Note that this is the most general form of the LQR problem; we rarely need this level of generality, and often suppress the time dependence of the matrices.
  - Aircraft landing problem.
- To optimize the cost, we follow the procedure of augmenting the constraints in the problem (the system dynamics) to the cost (integrand) to form the Hamiltonian:

      H = (1/2) [ x^T(t) R_xx x(t) + u^T(t) R_uu u(t) ] + λ^T(t) ( A x(t) + B_u u(t) )

  - λ(t) ∈ R^{n×1} is called the adjoint variable or costate.
  - It is the Lagrange multiplier in the problem.
- From Stengel (pg. 427), the necessary and sufficient conditions for optimality are that:

  1. λ̇(t) = −(∂H/∂x)^T = −R_xx x(t) − A^T λ(t)
  2. λ(t_f) = P_{t_f} x(t_f)
  3. ∂H/∂u = 0  =>  R_uu u + B_u^T λ(t) = 0, so u_opt = −R_uu^{-1} B_u^T λ(t)
  4. ∂²H/∂u² >= 0 (need to check that R_uu >= 0)
- This control design problem is a constrained optimization, with the constraints being the dynamics of the system.
- The standard way of handling the constraints in an optimization is to add them to the cost using a Lagrange multiplier.
  - Results in an unconstrained optimization.
- Example: min f(x, y) = x² + y² subject to the constraint that c(x, y) = x + y + 2 = 0.

Figure 1: Optimization results: contours of f(x, y) in the (x, y) plane with the constraint line x + y + 2 = 0.

- Clearly the unconstrained minimum is at x = y = 0.
- To find the constrained minimum, form the augmented cost function

      L ≜ f(x, y) + λ c(x, y) = x² + y² + λ(x + y + 2)

  where λ is the Lagrange multiplier.
  - Note that if the constraint is satisfied, then L ≡ f.
- The solution approach without constraints is to find the stationary point of f(x, y) (∂f/∂x = ∂f/∂y = 0). With constraints we find the stationary points of L:

      ∂L/∂x = ∂L/∂y = ∂L/∂λ = 0

  which gives

      ∂L/∂x = 2x + λ = 0
      ∂L/∂y = 2y + λ = 0
      ∂L/∂λ = x + y + 2 = 0

  This gives 3 equations in 3 unknowns; solve to find x* = y* = −1.
- The key point here is that, due to the constraint, the selections of x and y during the minimization are not independent.
  - The Lagrange multiplier captures this dependency.
- The LQR optimization follows the same path as this, but it is complicated by the fact that the cost involves an integration over time.
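The three stationarity equations are linear in (x, y, λ), so the constrained minimum is one linear solve; a minimal sketch:

```python
import numpy as np

# Stationarity conditions of L = x^2 + y^2 + lam*(x + y + 2):
#   2x + lam = 0,  2y + lam = 0,  x + y + 2 = 0
A = np.array([[2.0, 0.0, 1.0],
              [0.0, 2.0, 1.0],
              [1.0, 1.0, 0.0]])
b = np.array([0.0, 0.0, -2.0])
x, y, lam = np.linalg.solve(A, b)
print(x, y, lam)   # approximately -1.0 -1.0 2.0
```

This recovers x* = y* = −1 (with λ = 2), matching the solution above.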
- Note that we now have:

      ẋ(t) = A x(t) + B u_opt(t) = A x(t) − B_u R_uu^{-1} B_u^T λ(t),   with x(t_0) = x_0

- So combine with the equation for the adjoint variable

      λ̇(t) = −R_xx x(t) − A^T λ(t) = −C_z^T R_zz C_z x(t) − A^T λ(t)

  to get:

      [ ẋ(t) ]   [ A                 −B_u R_uu^{-1} B_u^T ] [ x(t) ]
      [ λ̇(t) ] = [ −C_z^T R_zz C_z   −A^T                 ] [ λ(t) ]

  which of course is the Hamiltonian matrix again.
- Note that the dynamics of x(t) and λ(t) are coupled, but x(t) is known initially and λ(t) is known at the terminal time, since λ(t_f) = P_{t_f} x(t_f).
  - This is a two-point boundary value problem that is very hard to solve in general.
- However, in this case, we can introduce a new matrix variable P(t) and show that:
  1. λ(t) = P(t) x(t)
  2. It is relatively easy to find P(t).
- How to proceed?

  1. For the 2n system

      [ ẋ(t) ]   [ A                 −B_u R_uu^{-1} B_u^T ] [ x(t) ]
      [ λ̇(t) ] = [ −C_z^T R_zz C_z   −A^T                 ] [ λ(t) ]

  define a transition matrix

      F(t_1, t_0) = [ F_11(t_1, t_0)  F_12(t_1, t_0)
                      F_21(t_1, t_0)  F_22(t_1, t_0) ]

  and use this to relate x(t) and λ(t) to x(t_f) and λ(t_f):

      [ x(t) ]   [ F_11(t, t_f)  F_12(t, t_f) ] [ x(t_f) ]
      [ λ(t) ] = [ F_21(t, t_f)  F_22(t, t_f) ] [ λ(t_f) ]

  so

      x(t) = F_11(t, t_f) x(t_f) + F_12(t, t_f) λ(t_f)
           = [ F_11(t, t_f) + F_12(t, t_f) P_{t_f} ] x(t_f)

  2. Now find λ(t) in terms of x(t_f):

      λ(t) = [ F_21(t, t_f) + F_22(t, t_f) P_{t_f} ] x(t_f)

  3. Eliminate x(t_f) to get:

      λ(t) = [ F_21(t, t_f) + F_22(t, t_f) P_{t_f} ] [ F_11(t, t_f) + F_12(t, t_f) P_{t_f} ]^{-1} x(t)
           ≜ P(t) x(t)
  4. Now, since λ(t) = P(t) x(t), then λ̇(t) = Ṗ(t) x(t) + P(t) ẋ(t), and λ̇(t) = −C_z^T R_zz C_z x(t) − A^T λ(t), so

      −Ṗ(t) x(t) = C_z^T R_zz C_z x(t) + A^T λ(t) + P(t) ẋ(t)
                 = C_z^T R_zz C_z x(t) + A^T λ(t) + P(t) ( A x(t) − B_u R_uu^{-1} B_u^T λ(t) )
                 = ( C_z^T R_zz C_z + P(t) A ) x(t) + ( A^T − P(t) B_u R_uu^{-1} B_u^T ) λ(t)
                 = ( C_z^T R_zz C_z + P(t) A ) x(t) + ( A^T − P(t) B_u R_uu^{-1} B_u^T ) P(t) x(t)
                 = [ A^T P(t) + P(t) A + C_z^T R_zz C_z − P(t) B_u R_uu^{-1} B_u^T P(t) ] x(t)

- This must be true for arbitrary x(t), so P(t) must satisfy

      −Ṗ(t) = A^T P(t) + P(t) A + C_z^T R_zz C_z − P(t) B_u R_uu^{-1} B_u^T P(t)

  which is a matrix differential Riccati equation.
- The optimal value of P(t) is found by solving this equation backwards in time from t_f with P(t_f) = P_{t_f}.
The control gains are then
u
opt
= R
1
uu
B
u
T
(t)
= R
1
uu
B
u
T
P (t)x(t) = K(t)x(t)
Where K(t) , R
1
uu
B
u
T
P (t)
Note that x(t) and (t) together dene the closed-loop dynamics
for the system (and its adjoint), but we can eliminate (t) from the
solution by introducing P (t) which solves a Riccati Equation.
The optimal control inputs are in fact a linear full-state
feedback control
Note that normally we are interested in problems with t_0 = 0 and t_f = ∞, in which case we can just use the steady-state value of P that solves (assuming [A, B_u] is stabilizable)

$$A^T P + PA + C_z^T R_{zz} C_z - P B_u R_{uu}^{-1} B_u^T P = 0$$
which is the Algebraic Riccati Equation.
If we use the steady-state value of P , then K is constant.
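If SciPy is available, the steady-state gain can be computed directly from this equation. A minimal sketch (the plant and weights below are illustrative placeholders, not a specific example from the notes):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Placeholder plant and LQR weights
A = np.array([[0.0, 1.0], [0.0, -1.0]])
Bu = np.array([[0.0], [1.0]])
Q = np.diag([1.0, 0.0])        # plays the role of Cz' Rzz Cz
Ruu = np.array([[1.0]])

# Solve A'P + PA + Q - P Bu Ruu^{-1} Bu' P = 0
P = solve_continuous_are(A, Bu, Q, Ruu)
K = np.linalg.solve(Ruu, Bu.T @ P)          # K = Ruu^{-1} Bu' P

# The ARE residual should vanish and A - Bu K should be stable
res = A.T @ P + P @ A + Q - P @ Bu @ np.linalg.solve(Ruu, Bu.T @ P)
assert np.allclose(res, 0, atol=1e-8)
assert np.all(np.linalg.eigvals(A - Bu @ K).real < 0)
```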
Example: simple system with t_0 = 0 and t_f = 10 sec:

$$\dot{x} = \begin{bmatrix} 0 & 1 \\ 0 & -1 \end{bmatrix} x + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u$$

$$J = x^T(10)\begin{bmatrix} h & 0 \\ 0 & 0 \end{bmatrix}x(10) + \int_0^{10}\left( x^T(t)\begin{bmatrix} q & 0 \\ 0 & 0 \end{bmatrix}x(t) + r\,u^2(t) \right) dt$$

Compute the gains using both the time-varying P(t) and the steady-state value.

Find the state solution with x(0) = [1 1]^T using both sets of gains, for q = 1, r = 1, h = 5.
[Figure: top-left panel compares the time-varying gains K_1(t), K_2(t) with the static gains K_1, K_2 over 0-10 sec; the remaining panels show the state responses x_1, x_2 versus time using the dynamic gains and the static gains.]

Figure 2: Set q = 1, r = 1, h = 10, K_lqr = [1  0.73]
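The time-varying gains in the figure can be reproduced by marching the differential Riccati equation backwards from t_f, as described above. A hedged sketch in Python with SciPy (using the q = 1, r = 1, h = 5 values from the text; a simple Euler step stands in for a proper ODE solver):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, -1.0]])
Bu = np.array([[0.0], [1.0]])
Q = np.diag([1.0, 0.0])                      # q = 1
Ruu = np.array([[1.0]])                      # r = 1
P = np.diag([5.0, 0.0])                      # P(t_f), h = 5
dt, tf = 1e-4, 10.0
for _ in range(int(tf / dt)):
    # backward Euler step: P(t - dt) = P(t) + dt (A'P + PA + Q - P Bu Ruu^{-1} Bu' P)
    S = P @ Bu @ np.linalg.solve(Ruu, Bu.T @ P)
    P = P + dt * (A.T @ P + P @ A + Q - S)

K0 = np.linalg.solve(Ruu, Bu.T @ P)          # time-varying gain evaluated at t = 0
Kss = np.linalg.solve(Ruu, Bu.T @ solve_continuous_are(A, Bu, Q, Ruu))
# Far from t_f the time-varying gain has settled to the static LQR gain
assert np.allclose(K0, Kss, atol=1e-3)
```

This matches the figure's qualitative behavior: the gains are essentially constant except near the terminal time, where they transition to satisfy P(t_f) = P_tf.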
As noted, the closed-loop dynamics couple x(t) and λ(t) and are given by

$$\begin{bmatrix} \dot{x}(t) \\ \dot{\lambda}(t) \end{bmatrix} = \begin{bmatrix} A & -B_u R_{uu}^{-1} B_u^T \\ -C_z^T R_{zz} C_z & -A^T \end{bmatrix} \begin{bmatrix} x(t) \\ \lambda(t) \end{bmatrix}$$

with the appropriate boundary conditions.

OK, so where are the closed-loop poles of the system?

They must be the eigenvalues of

$$H \triangleq \begin{bmatrix} A & -B_u R_{uu}^{-1} B_u^T \\ -C_z^T R_{zz} C_z & -A^T \end{bmatrix}$$

When we analyzed this before for a SISO system, we found that the closed-loop poles could be related to a symmetric root locus (SRL) for the transfer function

$$G_{zu}(s) = C_z(sI - A)^{-1}B_u = \frac{b(s)}{a(s)}$$

and, in fact, the closed-loop poles were given by the LHP roots of

$$a(-s)a(s) + \frac{R_{zz}}{R_{uu}}\,b(-s)b(s) = 0$$

where we previously had R_{zz}/R_{uu} \equiv 1/r.

We now know enough to show that this is true.
Derivation of the SRL

The closed-loop poles are given by the eigenvalues of

$$H \triangleq \begin{bmatrix} A & -B_u R_{uu}^{-1} B_u^T \\ -C_z^T R_{zz} C_z & -A^T \end{bmatrix}$$

so solve det(sI - H) = 0. If A is invertible:

$$\det\begin{bmatrix} A & B \\ C & D \end{bmatrix} = \det(A)\det(D - CA^{-1}B)$$

Applying this to sI - H:

$$\begin{aligned}
\det(sI - H) &= \det(sI - A)\det\left( (sI + A^T) - C_z^T R_{zz} C_z (sI - A)^{-1} B_u R_{uu}^{-1} B_u^T \right) \\
&= \det(sI - A)\det(sI + A^T)\det\left( I - C_z^T R_{zz} C_z (sI - A)^{-1} B_u R_{uu}^{-1} B_u^T (sI + A^T)^{-1} \right)
\end{aligned}$$

Note that det(I + ABC) = det(I + CAB), and if a(s) = det(sI - A), then a(-s) = det(-sI - A^T) = (-1)^n det(sI + A^T), so

$$\det(sI - H) = (-1)^n a(s)a(-s)\det\left( I + R_{uu}^{-1} B_u^T(-sI - A^T)^{-1} C_z^T R_{zz} C_z (sI - A)^{-1} B_u \right)$$

If G_{zu}(s) = C_z(sI - A)^{-1}B_u, then G_{zu}^T(-s) = B_u^T(-sI - A^T)^{-1}C_z^T, so for SISO systems

$$\begin{aligned}
\det(sI - H) &= (-1)^n a(s)a(-s)\det\left( I + R_{uu}^{-1} G_{zu}^T(-s) R_{zz} G_{zu}(s) \right) \\
&= (-1)^n a(s)a(-s)\left( 1 + \frac{R_{zz}}{R_{uu}} G_{zu}(-s)G_{zu}(s) \right) \\
&= (-1)^n \left( a(s)a(-s) + \frac{R_{zz}}{R_{uu}}\,b(s)b(-s) \right) = 0
\end{aligned}$$
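This identity is easy to confirm numerically: the eigenvalues of H must equal the roots of a(s)a(-s) + (R_zz/R_uu) b(s)b(-s). A sketch in Python (the SISO plant below is illustrative, with a(s) = s(s+1) and b(s) = 1):

```python
import numpy as np

A = np.array([[0.0, 1.0], [0.0, -1.0]])
Bu = np.array([[0.0], [1.0]])
Cz = np.array([[1.0, 0.0]])
Rzz, Ruu = 1.0, 1.0

# Hamiltonian matrix; np.outer gives Bu Bu' and Cz' Cz for these vectors
H = np.block([[A, -np.outer(Bu, Bu) * (1.0 / Ruu)],
              [-np.outer(Cz, Cz) * Rzz, -A.T]])
eigH = np.sort_complex(np.linalg.eigvals(H))

# a(s) = s^2 + s, b(s) = 1 for this plant
a = np.array([1.0, 1.0, 0.0])
a_neg = a * np.array([1.0, -1.0, 1.0])      # a(-s): flip signs of odd powers
srl = np.polymul(a, a_neg)                  # a(s) a(-s)
srl[-1] += (Rzz / Ruu) * 1.0                # + (Rzz/Ruu) b(s) b(-s), b = 1
roots = np.sort_complex(np.roots(srl))
assert np.allclose(eigH, roots, atol=1e-6)
```

The LHP half of these roots are the closed-loop LQR poles; the RHP half are their mirror images.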
Simple example from before: a scalar system with

$$\dot{x} = ax + bu$$

with cost (R_xx > 0 and R_uu > 0)

$$J = \int_0^\infty \left( R_{xx}\,x^2(t) + R_{uu}\,u^2(t) \right) dt$$

Then the steady-state P solves

$$2aP + R_{xx} - P^2 b^2 / R_{uu} = 0$$

which gives

$$P = \frac{a + \sqrt{a^2 + b^2 R_{xx}/R_{uu}}}{R_{uu}^{-1} b^2} > 0$$

Then u(t) = -Kx(t), where

$$K = R_{uu}^{-1}\,b\,P = \frac{a + \sqrt{a^2 + b^2 R_{xx}/R_{uu}}}{b}$$

The closed-loop dynamics are

$$\dot{x} = (a - bK)x = \left( a - \frac{b}{b}\left( a + \sqrt{a^2 + b^2 R_{xx}/R_{uu}} \right) \right) x = -\sqrt{a^2 + b^2 R_{xx}/R_{uu}}\; x = A_{cl}\,x(t)$$

Note that as R_{xx}/R_{uu} \to \infty, A_{cl} \approx -|b|\sqrt{R_{xx}/R_{uu}}.

And as R_{xx}/R_{uu} \to 0, K \to (a + |a|)/b:

If a < 0 (open-loop stable), K \to 0 and A_{cl} = a - bK \to a.

If a > 0 (OL unstable), K \to 2a/b and A_{cl} = a - bK \to -a.
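The closed-form scalar results are easy to sanity-check against a numerical ARE solver. A hedged sketch (the numbers are illustrative, with a > 0 so the open-loop plant is unstable):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

a, b = 1.0, 2.0                 # illustrative scalar plant, open-loop unstable
Rxx, Ruu = 3.0, 0.5             # illustrative weights

# Numerical solution of the scalar ARE
P = solve_continuous_are(np.array([[a]]), np.array([[b]]),
                         np.array([[Rxx]]), np.array([[Ruu]]))[0, 0]
# Closed-form solution from the quadratic 2aP + Rxx - P^2 b^2 / Ruu = 0
P_formula = (a + np.sqrt(a**2 + b**2 * Rxx / Ruu)) / (b**2 / Ruu)
K = b * P / Ruu
Acl = a - b * K

assert np.isclose(P, P_formula)
# Closed-loop pole is -sqrt(a^2 + b^2 Rxx/Ruu) = -5 for these numbers
assert np.isclose(Acl, -np.sqrt(a**2 + b**2 * Rxx / Ruu))
```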
Summary

We can find the optimal feedback gains u = -Kx using the Matlab command

    K = lqr(A, B, Rxx, Ruu)

There is a similar derivation for the optimal estimation problem (the Linear Quadratic Estimator). A full treatment requires details of advanced topics (e.g. stochastic processes and Ito calculus) better left to a second course.

But, by duality, we can compute the optimal Kalman filter gains from

    Ke = lqr(A^T, Cy^T, Bw Rw Bw^T, Rv),    L = Ke^T
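A sketch of this duality trick in Python with SciPy (for readers without Matlab): the LQR/ARE solver applied to the transposed data (A^T, Cy^T, Bw Rw Bw^T, Rv) yields the estimator gain. All system matrices and noise intensities below are made-up placeholders:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Placeholder plant: xdot = A x + Bw w, y = Cy x + v
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
Bw = np.array([[0.0], [1.0]])
Cy = np.array([[1.0, 0.0]])
Rw = np.array([[1.0]])          # process-noise intensity
Rv = np.array([[0.1]])          # sensor-noise intensity

# Dual "LQR" problem: solve_continuous_are plays the role of lqr()
Pe = solve_continuous_are(A.T, Cy.T, Bw @ Rw @ Bw.T, Rv)
Ke = np.linalg.solve(Rv, Cy @ Pe)      # dual gain, Ke = Rv^{-1} Cy Pe
L = Ke.T                               # estimator (Kalman filter) gain

# The estimation-error dynamics A - L Cy must be stable
assert np.all(np.linalg.eigvals(A - L @ Cy).real < 0)
```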
MATLAB is a trademark of The MathWorks, Inc.
Weighting Matrix Selection

A good rule of thumb when selecting the weighting matrices R_xx (or R_zz) and R_uu is to normalize the signals:

$$R_{xx} = \begin{bmatrix} \dfrac{\alpha_1^2}{(x_1)_{max}^2} & & & \\ & \dfrac{\alpha_2^2}{(x_2)_{max}^2} & & \\ & & \ddots & \\ & & & \dfrac{\alpha_n^2}{(x_n)_{max}^2} \end{bmatrix}$$

$$R_{uu} = \begin{bmatrix} \dfrac{\beta_1^2}{(u_1)_{max}^2} & & & \\ & \dfrac{\beta_2^2}{(u_2)_{max}^2} & & \\ & & \ddots & \\ & & & \dfrac{\beta_m^2}{(u_m)_{max}^2} \end{bmatrix}$$

The (x_i)_max and (u_i)_max represent the largest desired response/control input for that component of the state/actuator signal.

The α_i and β_i are used to add an additional relative weighting on the various components of the state/control.
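A minimal sketch of building these diagonal weights (all limits and relative weights below are illustrative placeholders):

```python
import numpy as np

x_max = np.array([1.0, 0.5, 10.0])      # largest desired state excursions
u_max = np.array([0.1, 2.0])            # largest allowed control deflections
alpha = np.ones(3)                      # extra relative state weights
beta = np.ones(2)                       # extra relative control weights

Rxx = np.diag((alpha / x_max) ** 2)     # alpha_i^2 / (x_i)_max^2 on the diagonal
Ruu = np.diag((beta / u_max) ** 2)      # beta_i^2 / (u_i)_max^2 on the diagonal
assert Rxx.shape == (3, 3) and Ruu.shape == (2, 2)
```

The resulting Rxx and Ruu can be passed directly to an LQR solver; scaling all of one matrix up or down then trades state regulation against control effort.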
Reference Input - II

On page 17-5, the compensator was implemented with a reference command by changing to feedback on e(t) = r(t) - y(t) rather than -y(t):

    r --> e --> G_c(s) --> u --> G(s) --> y

So u = G_c(s)e = G_c(s)(r - y), and we have u = -G_c(s)y if r = 0.

Intuitively appealing because it is the same approach used for classical control, but it turns out not to be the best approach.

We can improve the implementation by using a more general form:

$$\dot{x}_c = A_c x_c + Ly + Gr$$
$$u = -Kx_c + \bar{N}r$$

Now we explicitly have two inputs to the controller (y and r).

\bar{N} performs the same role that we used it for previously.

Introduce G as an extra degree of freedom in the problem.

First: if \bar{N} = 0 and G = -L, then we recover the same implementation used previously, since the controller reduces to:

$$\dot{x}_c = A_c x_c + L(y - r) = A_c x_c + B_c(-e)$$
$$u = -Kx_c = -C_c x_c$$

So if G_c(s) = C_c(sI - A_c)^{-1}B_c, then the controller can be written as u = G_c(s)e (the negative signs cancel).
Second: this generalization does not change the closed-loop poles of the system, regardless of the selection of G and \bar{N}, since

$$\dot{x} = Ax + Bu, \quad y = Cx$$
$$\dot{x}_c = A_c x_c + Ly + Gr$$
$$u = -Kx_c + \bar{N}r$$

$$\Rightarrow\quad \begin{bmatrix} \dot{x} \\ \dot{x}_c \end{bmatrix} = \begin{bmatrix} A & -BK \\ LC & A_c \end{bmatrix}\begin{bmatrix} x \\ x_c \end{bmatrix} + \begin{bmatrix} B\bar{N} \\ G \end{bmatrix} r, \qquad y = \begin{bmatrix} C & 0 \end{bmatrix}\begin{bmatrix} x \\ x_c \end{bmatrix}$$

So the closed-loop poles are the eigenvalues of

$$\begin{bmatrix} A & -BK \\ LC & A_c \end{bmatrix}$$

regardless of the choice of G and \bar{N}.

G and \bar{N} impact the forward path, not the feedback path.

Third: given this extra freedom, what is the best way to use it?

One good objective is to select G and \bar{N} so that the state estimation error is independent of r.

With this choice, changes in r do not tend to cause such large transients in \tilde{x}.

Note that for this analysis we take \tilde{x} = x - x_c, since x_c \equiv \hat{x}:

$$\begin{aligned}
\dot{\tilde{x}} = \dot{x} - \dot{x}_c &= Ax + Bu - (A_c x_c + Ly + Gr) \\
&= Ax + B(-Kx_c + \bar{N}r) - (\{A - BK - LC\}x_c + LCx + Gr)
\end{aligned}$$
$$\begin{aligned}
\dot{\tilde{x}} &= Ax + B\bar{N}r - (\{A - LC\}x_c + LCx + Gr) \\
&= (A - LC)x + B\bar{N}r - (\{A - LC\}x_c + Gr) \\
&= (A - LC)\tilde{x} + B\bar{N}r - Gr \\
&= (A - LC)\tilde{x} + (B\bar{N} - G)r
\end{aligned}$$

Thus we can eliminate the effect of r on \tilde{x} by setting G \equiv B\bar{N}.
Fourth: if this generalization does not change the closed-loop poles of the system, then what does it change?

The zeros of the y/r transfer function, which are given by:

general:
$$\det\begin{bmatrix} sI - A & BK & B\bar{N} \\ -LC & sI - A_c & G \\ C & 0 & 0 \end{bmatrix} = 0$$

previous (\bar{N} = 0, G = -L):
$$\det\begin{bmatrix} sI - A & BK & 0 \\ -LC & sI - A_c & -L \\ C & 0 & 0 \end{bmatrix} = 0$$

new (G = B\bar{N}):
$$\det\begin{bmatrix} sI - A & BK & B\bar{N} \\ -LC & sI - A_c & B\bar{N} \\ C & 0 & 0 \end{bmatrix} = 0$$
Hard to see how this helps, but consider the scalar case:

$$\det\begin{bmatrix} sI - A & BK & B\bar{N} \\ -LC & sI - A_c & B\bar{N} \\ C & 0 & 0 \end{bmatrix} = 0$$

$$\Rightarrow\quad C\left( BK\,B\bar{N} + (sI - A_c)B\bar{N} \right) = 0$$
$$CB\bar{N}\left( BK + sI - [A - BK - LC] \right) = 0$$
$$CB\bar{N}\left( sI - [A - LC] \right) = 0$$

So the zero of the y/r path is the root of sI - [A - LC] = 0, which is the pole of the estimator.

With this selection of G = B\bar{N}, the estimator dynamics are canceled out of the response of the system to a reference command.

No such cancellation occurs with the previous implementation.
Fifth: select \bar{N} to ensure that the steady-state error is zero.

As before, this can be done by selecting \bar{N} so that the DC gain of the closed-loop y/r transfer function is 1:

$$\left.\frac{y}{r}\right|_{DC} = -\begin{bmatrix} C & 0 \end{bmatrix}\begin{bmatrix} A & -BK \\ LC & A_c \end{bmatrix}^{-1}\begin{bmatrix} B \\ B \end{bmatrix}\bar{N} = 1$$

The new implementation of the controller is

$$\dot{x}_c = A_c x_c + Ly + B\bar{N}r$$
$$u = -Kx_c + \bar{N}r$$

which has two separate inputs, y and r.

Selection of \bar{N} ensures that the steady-state performance is good.

The new implementation gives better transient performance.
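Putting these choices together: a sketch that computes \bar{N} from the DC-gain condition and forms G = B\bar{N}. The K and L gains below are illustrative placeholders for a double-integrator plant, not a design from the notes:

```python
import numpy as np

# Placeholder plant (double integrator) and pre-designed gains
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
K = np.array([[2.0, 3.0]])              # placeholder state-feedback gain
L = np.array([[4.0], [4.0]])            # placeholder estimator gain
Ac = A - B @ K - L @ C

# Closed-loop system with the input column written as [B; B] * Nbar
Acl = np.block([[A, -B @ K], [L @ C, Ac]])
Bcl = np.vstack([B, B])
Ccl = np.hstack([C, np.zeros_like(C)])
dc = (Ccl @ np.linalg.solve(-Acl, Bcl))[0, 0]   # DC gain with Nbar = 1
Nbar = 1.0 / dc
G = B * Nbar                                    # the extra degree of freedom

# With this Nbar the closed-loop y/r DC gain is exactly 1
assert np.isclose((Ccl @ np.linalg.solve(-Acl, Bcl * Nbar))[0, 0], 1.0)
```

Because G = B\bar{N} decouples the estimation error from r, the same \bar{N} formula reduces to the full-state-feedback value \bar{N} = 1 / [C(BK - A)^{-1}B].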
Figure 13: Example #1: G(s) = (8)(14)(20) / ((s+8)(s+14)(s+20)).

Method #1: previous implementation.
Method #2: previous, with the reference input scaled to ensure that the DC gain of y/r|_DC = 1.
Method #3: new implementation with both G = B N̄ and N̄ selected.

[Figure: step responses over 0-1 sec for meth1, meth2, and meth3.]
Figure 14: Example #2: G(s) = 0.94 / (s^2 - 0.0297).

Method #1: previous implementation.
Method #2: previous, with the reference input scaled to ensure that the DC gain of y/r|_DC = 1.
Method #3: new implementation with both G = B N̄ and N̄ selected.

[Figure: step responses over 0-5 sec for meth1, meth2, and meth3.]
Figure 15: Example #3: G(s) = (8)(14)(20) / ((s-8)(s-14)(s-20)).

Method #1: previous implementation.
Method #2: previous, with the reference input scaled to ensure that the DC gain of y/r|_DC = 1.
Method #3: new implementation with both G = B N̄ and N̄ selected.

[Figure: step responses over 0-1 sec for meth1, meth2, and meth3.]
16.31 Handout #3
Prof. J. P. How September 5, 2001
T.A. TBD Due: September 14, 2001
16.31 Homework Assignment #1
1. Plot the root locus diagram for positive values of K for the solutions of the equation

$$s^3 + (5 + K)s^2 + (6 + K)s + 2K = 0$$

2. The open-loop transfer function of a closed-loop control system with unity negative gain feedback is

$$G(s) = \frac{K}{s(s + 3)(s^2 + 6s + 64)}$$

Plot the root locus for this system, and then determine the closed-loop gain that gives an effective damping ratio of 0.707.

3. A unity gain negative feedback system has an open-loop transfer function given by

$$G(s) = \frac{K(1 + 5s)}{s(1 + 10s)(1 + s)^2}$$

Draw a Bode diagram for this system and determine the loop gain K required for a phase margin of 20 degs. What is the gain margin?

(a) A lag compensator

$$G_c(s) = \frac{1 + 10s}{1 + 50s}$$

is added to this system. Use Bode diagrams to find the reduction in steady-state error following a ramp change to the reference input, assuming that the 20 deg phase margin is maintained.

4. Plot the Nyquist diagram for the plant with the unstable open-loop transfer function

$$G(s) = \frac{K(s + 0.4)}{s(s^2 + 2s - 1)}$$

Determine the range of K for which the closed-loop system with unity negative gain feedback which incorporates this plant would be stable.
16.31 Handout #4
Prof. J. P. How                  September 14, 2001
T.A. TBD                         Due: September 21, 2001

16.31 Homework Assignment #2

1. (Root Locus Analysis) [FPE 3.32, page 159].

2. (Dominant Pole Locations) [FPE 3.36 (a), (c), (d), page 161].

(a) State the steps that you would follow to show that this is the step response. Which inverse transforms would you use from the Tables?
3. (Basic Root Locus Plotting) Sketch the root locus for the following systems. As we did in class, concentrate on the real axis and the asymptotes/centroids.

(a) $$G_c G(s) = \frac{K}{s(s^2 + 2s + 10)}$$

(b) $$G_c G(s) = \frac{K(s + 2)}{s^4}$$

(c) $$G_c G(s) = \frac{K(s + 1)(s - 0.2)}{s(s + 1)(s + 3)(s^2 + 5)}$$

(d) Once you have completed the three sketches, verify the results using Matlab. How closely do your sketches resemble the actual plots?

4. The attitude-control system of a space booster is shown in Figure 2. The attitude angle θ is controlled by commanding the engine angle δ, which is then the angle of the applied thrust, F_T. The vehicle velocity is denoted by v. These control systems are sometimes open-loop unstable, which occurs if the center of aerodynamic pressure is forward of the booster center of gravity. For example, the rigid-body transfer function of the Saturn V booster was

$$G_p(s) = \frac{0.9407}{s^2 - 0.0297}$$

This transfer function does not include vehicle bending dynamics, liquid fuel slosh dynamics, and the dynamics of the hydraulic motor that positioned the engine. These dynamics added 25 orders to the transfer function!! The rigid-body vehicle was stabilized by the addition of rate feedback, as shown in Figure 2b. (Rate feedback, in addition to other types of compensation, was used on the actual vehicle.)

(a) With K_D = 0 (the rate feedback removed), plot the root locus and state the different types of (nonstable) responses possible (relate the response with the possible pole locations).

Figure 2: Booster control system

(b) Design the compensator shown (which is PD) to place a closed-loop pole at s = -0.25 + j0.25. Note that the time constant of the pole is 4 sec, which is not unreasonable for a large space booster.

(c) Plot the root locus of the compensated system, with K_p variable and K_D set to the value found in (b).

(d) Use Matlab to compute the closed-loop response to an impulse for θ_c.
16.31 Handout #5
Prof. J. P. How                  September 21, 2001
T.A. TBD                         Due: September 28, 2001

16.31 Homework Assignment #3

1. Given the plant G(s) = 1/s^2, design a lead compensator so that the dominant poles are located at -2 ± 2j.

2. Determine the required compensation for the system

$$G(s) = \frac{K}{(s + 8)(s + 14)(s + 20)}$$

to meet the following specifications:

Overshoot ≤ 5%
10-90% rise time t_r ≤ 150 msec

Simulate the response of this closed-loop system to a step input. Comment on the steady-state error. You should find that it is quite large.

Determine what modifications you would need to make to this controller so that the system also has K_p > 6, thereby reducing the steady-state error. Simulate the response of this new closed-loop system and confirm that all the specifications are met.

3. Develop a state-space model for the transfer function (not in modal/diagonal form). Discuss what state vector you chose and why.

$$G_1(s) = \frac{(s + 1)(s + 2)}{(s + 3)(s + 4)} \qquad (1)$$

(a) Develop a modal state-space model for this transfer function as well.

(b) Confirm that both models yield the same transfer function when you compute

$$G(s) = C(sI - A)^{-1}B + D$$

4. A set of state-space equations is given by:

$$\dot{x}_1 = x_1(u - x_2)$$
$$\dot{x}_2 = x_2(-\beta + \alpha x_1)$$

where u is the input and α and β are positive constants.

(a) Is this system linear or nonlinear, time-varying or time-invariant?

(b) Determine the equilibrium points for this system (constant operating points), assuming a constant input u = 1.

(c) Near the positive equilibrium point from (b), find a linearized state-space model of the system.
16.31 Handout #5
Prof. J. P. How                  September 28, 2001
T.A. TBD                         Due: October 5, 2001

16.31 Homework Assignment #4

1. State-space response:

(a) Assume that the input vector u(t) is a weighted series of impulse functions applied to the m system inputs, so that

$$u(t) = K\delta(t), \qquad K = \begin{bmatrix} k_1 \\ \vdots \\ k_m \end{bmatrix}$$

where the k_i give the weightings of the various inputs. Use the convolution integral to show that the output response can be written as:

$$y_{imp}(t) = Ce^{At}\left[ x(0) + BK \right] + DK\delta(t)$$

(b) Repeat the process, but now assume that the inputs are step functions

$$u(t) = Ku_s(t), \qquad K = \begin{bmatrix} k_1 \\ \vdots \\ k_m \end{bmatrix}$$

where u_s(t) is the unit step function at time zero. In this case show that the output can be written as:

$$y_{step}(t) = Ce^{At}x(0) + CA^{-1}(e^{At} - I)BK + DKu_s(t)$$

(c) If A^{-1} exists, find the steady-state value for y_{step}(t).

(d) Use the functions in (a) and (b) to find an analytic expression for the response of a system with zero initial conditions to an input

$$u(t) = \begin{cases} 0 & t < 0 \\ 1 & 0 \le t < T \\ 0 & t \ge T \end{cases}$$

(e) Confirm the result in (d) using the system dynamics given in question #2.
2. For the system

$$\dot{x} = \begin{bmatrix} 2 & 2 \\ 2 & 3 \end{bmatrix} x + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u$$

compute the matrix exponential the following ways:

(a) Using the series expansion.
(b) Using the inverse Laplace transform of (sI - A)^{-1}.
(c) Using the eigenvalues and eigenvectors.

Which approach seems easiest for this system?

3. For the two-input, one-output system

$$\dot{x} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & -2 & -3 \end{bmatrix} x + \begin{bmatrix} 3 & 0 \\ 0 & 0 \\ 0 & 1 \end{bmatrix} u, \qquad y = \begin{bmatrix} 1 & 1 & 4 \end{bmatrix} x$$

find the corresponding transfer function matrix (it will be a row of 2 transfer functions).

4. Prove that:

(a) e^{(A+B)t} = e^{At}e^{Bt} if AB = BA. Confirm this result in Matlab with a simple 2×2 example.

(b) e^{(A+B)t} ≠ e^{At}e^{Bt} if AB ≠ BA.

5. Use state-space techniques to find the zeros of the system

$$A = \begin{bmatrix} -2 & 3 \\ 0 & -1 \end{bmatrix}, \quad B = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \quad C^T = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \quad D = 1$$

(a) Compute the result by hand, and then confirm using tzero in Matlab.
(b) Convert this state-space model to a transfer function and confirm the result that way too.

MATLAB is a trademark of The MathWorks, Inc.

16.31
Prof. J. P. How
T.A. TBD
Handout #5
October 5, 2001
Due: October 12, 2001
16.31 Homework Assignment #5
1. For the system

$$\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} = \begin{bmatrix} -1 & 0 \\ 1 & -2 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \begin{bmatrix} 1 \\ 1 \end{bmatrix} u, \qquad y = \begin{bmatrix} 1 & 1 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$$

(a) Find the modes of this system (it is OK to use Matlab).

(b) Check the observability and controllability of the system using the rank tests of the observability and controllability matrices.

(c) Check the observability and controllability of each mode of the system (compare the result with (b)).

(d) Form the transfer function for the system. Is your result consistent with parts (a)-(c)?
2. Consider the system with three states, and the state-space model matrices given by:

$$A = \begin{bmatrix} -6 & 1 & 0 \\ -11 & 0 & 1 \\ -6 & 0 & 0 \end{bmatrix}, \quad B = \begin{bmatrix} 0 \\ 1 \\ K \end{bmatrix}, \quad C = \begin{bmatrix} 1 & 0 & 0 \end{bmatrix}$$

(a) Find the transfer function for the system G(s). Discuss the structure of G(s) for various values of K.

(b) Form the observability matrix for the system. Is the system observable for all values of K?

(c) Form the controllability matrix for the system. Is the system controllable for all values of K?

(d) Compare your observations in (b) and (c) with those in (a).
3. The differential equations for the system with 2 inputs and 2 outputs are:

$$\ddot{y}_1 + 5\dot{y}_1 + 6y_1 = \dot{u}_1 + 3u_1 + 4\dot{u}_2 + 8u_2$$
$$\ddot{y}_2 + \dot{y}_2 = u_1 + 2\dot{u}_2 + 2u_2$$

Find the transfer function matrix for the system.
4. Consider the system

$$\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} = \begin{bmatrix} -19 & 42 \\ -9 & 20 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \begin{bmatrix} 5 \\ 2 \end{bmatrix} u, \qquad y = \begin{bmatrix} 3 & -7 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$$

(a) Use the state-space tools to find the zeros (and their right directions) of the system.

(b) Using these directions and the conditions given in the notes, check the modes of the system for possible pole/zero cancellation. Form the transfer function of the system and confirm that the results agree.

5. Consider a simple system with three identical masses m_1 = m_2 = m_3 = 1 connected together with five identical springs in the following way:

Figure 1: Problem #5. m_i = 1 and k_i = 1

(a) Form a state-space model of this system.

(b) Find the modes of this model.

(c) Describe the motions of the masses associated with each mode.

(d) Confirm with a Matlab simulation that you can initialize the system to excite each mode independently.
16.31
Prof. J. P. How
T.A. TBD
Handout #6
October 12, 2001
Due: October 19, 2001
16.31 Homework Assignment #6
1. (FPE 6.47) The open-loop transfer function for a unity gain feedback system is

$$G(s) = \frac{K}{s(1 + s/5)(1 + s/60)}$$

Design a compensator for G(s) so that the closed-loop system satisfies the following specifications:

The steady-state error to a unit ramp reference input is less than 0.01.
The phase margin is at least 40 deg.

Verify your design in Matlab.

2. State the Cayley-Hamilton theorem. Verify that it is true on a random 5×5 matrix in Matlab using polyvalm.

3. The goal is to prove that the condition for controllability is that

$$\operatorname{rank} M_c = \operatorname{rank}\begin{bmatrix} B & AB & A^2B & \cdots & A^{n-1}B \end{bmatrix} = n$$

To proceed, start with the orthogonality condition:

$$(x^\star)^T e^{At} B = 0$$

and expand the matrix exponential as a power series. Then show that the orthogonality condition can be rewritten as the product of three matrices

$$(x^\star)^T M_1 M_2(t) = 0$$

where the entire time dependence of the matrix exponential is embedded into M_2(t).

The basic test is derived by noting that this orthogonality condition must hold for all time t, and confirming that (since x^\star ≠ 0) this condition can then only be true if M_1 is rank deficient.

At this point, M_1 and M_2 will have an infinite number of columns and rows, respectively. Show how to use the Cayley-Hamilton theorem to convert the infinite matrices to finite ones, thereby recovering the controllability condition given above.
4. Do Problem 3.23 on page 111 in Belanger on observability.

5. Do Problem 3.25 on page 111 in Belanger on controllability.

6. Do Problem 3.36 on page 114 in Belanger on the two-pendula problem. Discuss whether the losses of controllability/observability analyzed in the problem make physical sense to you.
16.31
Prof. J. P. How
T.A. TBD
Handout #7
October 17, 2001
Due: October 26, 2001
16.31 Homework Assignment #7
1. Describe a physical system that you commonly use that exhibits either a loss of observability and/or controllability.
2. For the simple system:

$$\dot{x} = \begin{bmatrix} 0 & 1 \\ -2 & -2 \end{bmatrix} x + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u$$

(a) Find the poles of the system.

(b) Use the rank test to check controllability of the system.

(c) Repeat the controllability check using the modal test.

(d) Use the techniques on page 13-2 of the notes to design a full-state feedback controller for the system.

(e) Confirm the result in Matlab using acker.
3. Develop a state-space model of the following system:

$$G_1(s) = \frac{s^2 - 4.4s + 5}{(s^2 - 3s + 4)(s + 2)}$$

(a) Find the target pole locations using a symmetric root locus for r = 1.

(b) Design a full-state feedback controller that places the poles in these locations.

(c) Simulate the state response to a step command. Comment on the system response and control effort required.

(d) Repeat (a)-(c) for the system (you should be able to develop a very similar looking state-space model as the one for G_1(s))

$$G_2(s) = \frac{s^2 + 4.4s + 5}{(s^2 + 3s + 4)(s + 2)}$$

(e) Now, for both G_1(s) and G_2(s), find the target pole locations using a symmetric root locus as r → 0 and as r → ∞.

(f) Find the four sets of full-state feedback controller gains to put the poles of these two systems in these two sets of locations. Compare the results. Are there any surprises here?
4. The modified longitudinal dynamics of an old F-8 aircraft are given as (see the following discussion):

$$\dot{x} = Ax + Bu, \qquad y = Cx + Du$$

$$A = \begin{bmatrix} -0.80 & -0.0344 & -12.0 & 0 \\ 0 & -0.0140 & -0.2904 & -0.5620 \\ 1.0 & -0.0057 & -1.50 & 0 \\ 1.0 & 0 & 0 & 0 \end{bmatrix}, \quad B = \begin{bmatrix} -19.0 & -2.50 \\ -0.0115 & -0.0087 \\ -0.160 & -0.60 \\ 0 & 0 \end{bmatrix}$$

$$C = \begin{bmatrix} 0 & 0 & 0 & 1 \\ 0 & 0 & -1 & 1 \end{bmatrix}, \quad D = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}$$

(a) Find the eigenvalues and eigenvectors for these dynamics. Describe the mode shapes for this system. Clearly discuss to what extent each state participates in each mode.

(b) Use Matlab to check the controllability of the aircraft dynamics using the two actuators separately.

(c) Using the elevator input δ_e(t), design a full state-feedback controller that doubles the magnitude of the real part of each pole (leaving the complex parts the same as open-loop).

(d) Use the techniques shown in class to modify the reference input so that the control system would track a change in the perturbed pitch angle from trim with zero steady-state error.

(e) Simulate the response in Matlab to a commanded 2 deg increase in the pitch angle and discuss the results. Show the actual elevator inputs required.
Modified F-8 Aircraft Longitudinal Dynamics

1 Introduction

We shall present here the dynamics of an aircraft. We shall present a brief discussion of the aircraft dynamics, and present the nominal differential equations.

The F-8 is an old-fashioned aircraft that has been used by NASA as part of their digital fly-by-wire research program. We have modified the equations of motion by including a large flaperon on the wing so as to obtain two control variables in the longitudinal dynamics of the F-8. This flaperon does not exist in the F-8 aircraft. However, such surfaces exist in other recent aircraft, e.g. the X-29, and provide some additional flexibility for precision maneuvers.
2 Modeling Discussion

In Figure 1 we show an aircraft with a Cartesian coordinate system fixed to its center of gravity (cg). This coordinate system, which is the so-called stability-axes coordinate frame, has the x-axis pointed toward the nose of the aircraft, the y-axis out the right side, and the z-axis pointed down.

We shall assume that the aircraft is flying in the vertical plane with its wings level (i.e. without banking or turning) so that we can study its motion in the vertical plane, i.e. its longitudinal dynamics. The important variables that characterize the aircraft in this motion are the horizontal velocity of the airplane; the pitch angle θ, which is the angle of the x-axis with respect to the horizontal; the pitch rate, which is the rate of change of the pitch angle, q = dθ/dt; and the angle of attack α. The angle of attack, α, is the angle of the nose with respect to the velocity vector of the aircraft. Holding the wings at the angle of attack with respect to the incoming wind, which necessitates a tail, is what provides the lift force needed to fly, i.e. to just balance the force of gravity, but it also produces drag; both lift and drag are approximately proportional to the angle of attack for small α. The flight path angle, γ, defined by

γ = θ - α

is the angle between the aircraft velocity vector and the horizontal. As its name implies, γ describes the trajectory of the aircraft in the vertical plane.

The longitudinal motion of the aircraft is controlled by two hinged aerodynamic control surfaces as shown in Figure 1: the elevator is located on the horizontal tail, and we shall use the elevator angle δ_e(t) as a control variable; the flaperons are located on the wing, and we shall use the flaperon angle δ_f(t) as another control variable. Most people are familiar with elevators; the flaperons are just like the ailerons except that they move in the same direction. Deflection of either of these two surfaces downward causes the airflow to be deflected downward; this produces a force that induces a nose-down moment about the cg.

Figure 1: Variables for F-8 problem

As the aircraft rotates, it changes its angle of attack, which results in additional forces and moments (we shall show the equations below).

The longitudinal motion of the aircraft is also influenced by the thrust generated by the engines. However, in this problem we shall fix the thrust to be a constant and we shall not use it as a dynamic control variable. (Actually, the dynamic coordination of the thrust, elevator, and flaperons becomes important and significant when the aircraft is in its landing configuration.)

In general, the differential equations that model the aircraft longitudinal motions are nonlinear. These nonlinear differential equations can be linearized at a particular equilibrium (steady-state) flight condition, which is characterized by constant airspeed, altitude, cg location, trimmed angle of attack, trimmed pitch angle so that γ = 0, and trimmed elevator angle δ_e to maintain zero pitch rate. One can then obtain a system of linear time-invariant differential equations that describe the deviations of the relevant quantities from their constant equilibrium (trimmed) values.
3 Linearized F-8 Dynamics

The following linear differential equations model the longitudinal motions of the F-8 aircraft about the following equilibrium flight condition:

Altitude: 20,000 ft (= 6095 meters)
Speed: Mach 0.9 = 281.58 m/sec = 916.6 ft/sec
Dynamic Pressure: 550 lbs/ft^2 = 26,429 N/m^2
Trim Pitch Angle: 2.25 deg
Trim Angle of Attack: 2.25 deg
Trim Elevator Angle: -2.65 deg

The following four state variables represent perturbations from the equilibrium values:
x_1 = q(t) = pitch rate (deg/sec)
x_2 = u(t) = perturbation from horizontal velocity (ft/sec)
x_3 = α(t) = perturbed angle of attack from trim (deg)
x_4 = θ(t) = perturbed pitch angle from trim (deg)

The two control variables are:

δ_e(t) = elevator deflection from trim (deg)
δ_f(t) = flaperon deflection (deg)

The dynamics are given as

$$\dot{x} = Ax + Bu, \qquad y = Cx + Du$$

$$A = \begin{bmatrix} -0.80 & -0.0344 & -12.0 & 0 \\ 0 & -0.0140 & -0.2904 & -0.5620 \\ 1.0 & -0.0057 & -1.50 & 0 \\ 1.0 & 0 & 0 & 0 \end{bmatrix}, \quad B = \begin{bmatrix} -19.0 & -2.50 \\ -0.0115 & -0.0087 \\ -0.160 & -0.60 \\ 0 & 0 \end{bmatrix}$$

$$C = \begin{bmatrix} 0 & 0 & 0 & 1 \\ 0 & 0 & -1 & 1 \end{bmatrix}, \quad D = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}$$
16.31 Handout #8
Prof. J. P. How October 26, 2001
T.A. TBD Due: November 2, 2001
16.31 Homework Assignment #8
1. Consider the simple system

$$G(s) = \frac{s - z_0}{(s + 3)(s + 4)}$$

(a) Confirm that one possible state-space model is given by:

$$A = \begin{bmatrix} -7 & 1 \\ -12 & 0 \end{bmatrix}, \quad B = \begin{bmatrix} 1 \\ -z_0 \end{bmatrix}, \quad C = \begin{bmatrix} 1 & 0 \end{bmatrix}, \quad D = 0$$

(b) Evaluate the controllability of this system as a function of the value of z_0.

(c) Use pole-placement techniques to derive the full-state feedback gains

$$u = -\begin{bmatrix} k_1 & k_2 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$$

necessary to place the closed-loop poles at the roots of:

$$s^2 + 2\zeta\omega_n s + \omega_n^2 = 0$$

(d) Use your analytic expressions for k_1 and k_2 to support the following two claims:

The system has to work harder and harder to achieve control as the controllability is reduced.
To move the poles a long way takes large gains.
2. We have talked at length about selecting the feedback gains to change the pole locations of the system, but we have not mentioned anything about how the open-loop zeros are changed. Part of the reason for this is that it can be shown that:

When full state feedback is used (u = N̄r - Kx) to control a system, the zeros remain unchanged by the feedback.

Confirm that this statement is true by analyzing the zero locations for the closed-loop system, which are given by the roots of the polynomial:

$$\det\begin{bmatrix} sI - (A - BK) & B\bar{N} \\ C & 0 \end{bmatrix} = 0$$

The best way to proceed is to show that, through a series of column and row operations that do not change the value of the determinant, you can get the following reduction:

$$\det\begin{bmatrix} sI - (A - BK) & B\bar{N} \\ C & 0 \end{bmatrix} = 0 \quad\Rightarrow\quad \det\begin{bmatrix} sI - A & B \\ C & 0 \end{bmatrix} = 0$$

which, of course, is the same polynomial used to find the open-loop zeros of the system.
3. Consider the stick-on-the-cart problem (dynamics are given on the web page).

(a) Develop a state-space model for the transfer function dynamics given on page 2-8 of that handout.

(b) Confirm that the system is controllable using the force actuator.

(c) The objective is to design a full-state feedback controller for this system using pole-placement techniques. The overall goals [1] are to obtain a settling time of 4-5 sec with a maximum overshoot of 15%. Generate four controllers using the placement locations from:

Dominant second-order (discuss where you put all the poles)
Bessel
ITAE (locations will be posted)
LQR (show the SRL in this case using y = x)

(d) Simulate the response of the system to a 3-unit step change in the position of the cart x. Use u = N̄r - Kx to implement the control.

Discuss any aspects of the response of the system that seem particularly interesting. Is the system minimum phase or NMP?
Compute (and plot on the same graph) the control effort required using each of the four placement methods. Compare the amount of effort required in each case [2].
4. Consider the simple system

$$G(s) = \frac{1}{s + a}$$

(a) Draw the SRL plot for this system.

(b) The SRL equation is only second order in this case, so use it to explicitly solve for the closed-loop pole location as a function of r, the control weight.

(c) Use this solution to find the feedback gain u = -Kx as a function of r. Discuss what happens when r → 0 and r → ∞. Be sure to consider the two separate cases: a > 0 and a < 0. Any surprises here?

[1] Note that these specifications on the closed-loop performance are not very tight; they are given only as targets for you to use in roughly selecting where to place the dominant poles.
[2] Note that this is a dangerous process in general, because we should really only be concerned if a technique gives the same performance as the rest, but takes a lot more control effort.
16.31 Handout #9
Prof. J. P. How November 2, 2001
T.A. TBD Due: November 9, 2001
16.31 Homework Assignment #9
1. Suppose that \dot{x} = Ax and that

$$x(0) = \begin{bmatrix} 1 \\ 2 \end{bmatrix} \text{ gives trajectory } x(t) = \begin{bmatrix} 1 \\ 2 \end{bmatrix} e^{-2t}, \text{ and}$$

$$x(0) = \begin{bmatrix} 0 \\ 1 \end{bmatrix} \text{ gives trajectory } x(t) = \begin{bmatrix} 0 \\ 1 \end{bmatrix} e^{-t}$$

(a) Find x(t) when x(0) = [1 3]^T.

(b) What are the eigenvalues and eigenvectors of A?

(c) Is there enough information to find A and e^{At}? Either do so, or explain why not.
2. Suppose that \dot{x} = Ax where

$$\dot{x}(t) = \begin{bmatrix} -1 & 1 \\ 0 & -2 \end{bmatrix} x(t), \qquad y(t) = \begin{bmatrix} 1 & 1 \end{bmatrix} x(t)$$

where we take u = 0 for simplicity.

(a) Suppose that we know y(3) = 1; what does this tell us about x(3)? Plot the region in the x_1(3)-x_2(3) plane consistent with this measurement.

(b) Now suppose that we also know that \dot{y}(3) = 0; what does this tell us about x(3)? Plot this region on the same graph, and from this combined plot show whether or not you can determine x(3).

(c) Repeat parts (a) and (b) using the same A matrix but with C = [0 1]. What happens in this case? What causes the difference?
3. (Estimator of a non-minimum phase system) Consider the system dynamics

$$\dot{x}_1 = w$$
$$\dot{x}_2 = x_1 - 2x_2 - w \qquad (1)$$

with continuous measurements y(t) = x_2(t) + v(t). The measurement and process noise sizes (spectral densities) are R_v and R_w, respectively.

(a) Find the transfer function G_{yw}(s) from w(s) to y(s) and identify the poles and zeros. Recall that right-half-plane zeros are called non-minimum phase.

(b) Use this transfer function to sketch the symmetric root locus for the estimator poles versus R_w/R_v.

(c) Use this plot to clearly demonstrate that one of the estimator poles tends to s = -1 as R_v/R_w → 0, even though there is no zero there (only a reflected one).

(d) Find the gains to place the estimator poles in the locations associated with R_v/R_w = ε (pick a small ε and use acker as in the notes). Simulate the response of the resulting closed-loop estimator to confirm that this implies that part of the estimation error will never attenuate faster than e^{-t}, even with noise-free measurements.

(e) Comment on the impact of the non-minimum-phase zero on our ability to do good estimation for this system.
4. Consider the F-8 problem in HW7, #4. Assume that the process noise disturbance (size 1) enters through the flaperon δ_f and the sensor is the perturbed flight path angle:

       y₂(t) = [ 0  0  1  −1 ] x + v
   (a) Draw a symmetric root locus for this system to find the closed-loop estimator poles as a function of the size R_v of the sensor noise v. Clearly show the pole locations associated with the limiting cases R_v → 0 and R_v → ∞.
   (b) Select the estimator pole locations that maximize the decay rate of the highest-frequency set of estimator poles. Find the estimator gains to place the estimator poles at these locations.
   (c) Simulate the combined system/estimator response to determine the estimation error as a function of time when x(0) = [ 1  0  1  0 ]^T but x̂(0) = [ 0  1  1  0 ]^T.
(d) Discuss the performance of this estimator.
16.31 Handout #10
Prof. J. P. How November 21, 2001
T.A. TBD Due: November 30, 2001
16.31 Homework Assignment #10
1. Consider the control of

       G(s) = 10 / (s(s + 1))

   using the state-space model with y = x₁ and ẋ₁ = x₂.

   (a) Design a full-state feedback controller that yields closed-loop poles with ω_n = 3 and ζ = 0.5.
   (b) Design a state estimator that has estimator error poles with ω_n = 15 and ζ = 0.5.
   (c) Combine these two to obtain the compensator G_c(s).
       - Plot the Bode diagram of the compensator.
       - Can you give a classical interpretation of how this compensator works?
       - What are the gain and phase margins of this system?
   (d) Plot the closed-loop root locus as a function of the plant gain (nominally 10). How sensitive does this controller appear to be?
2. The linearized longitudinal equations of a helicopter near hover can be modeled by the normalized third-order system

       d  [ q ]   [ −0.4   0    −0.01 ] [ q ]   [ 6.3 ]
       -- [ θ ] = [  1     0     0    ] [ θ ] + [  0  ] δ
       dt [ u ]   [ −1.4   9.8  −0.02 ] [ u ]   [ 9.8 ]

   where q is the pitch rate, θ is the pitch angle of the fuselage, u is the horizontal velocity, and δ is the rotor tilt angle. Suppose that our sensor measures the horizontal velocity, so y = u.

   (a) Find the open-loop pole locations.
   (b) Sketch out how you would design a classical controller that stabilizes the system (i.e., discuss the number of compensator poles and zeros and show roughly where they need to be).
   (c) Is this system controllable and observable?
   (d) Plot the regulator SRL for this system and choose the value of the control weighting r that places the real pole near s = −2. Design the regulator gains K that place the poles at the resulting locations.
   (e) Plot the estimator SRL for this system and choose the value of the noise ratio that places the real pole near s = −8. Design the estimator gains L that place the poles at the resulting locations.
   (f) Compute the compensator transfer function using the K and L gains obtained.
       - Plot the Bode diagram of the compensator.
       - How does this design compare with yours in part (b)?
   (g) Plot the closed-loop root locus as a function of the plant gain (as done in class). How sensitive does this controller appear to be?
   (h) Compute and plot the sensitivity function S(s) for this closed-loop system. Does this design look feasible for implementation?
3. Consider the classic example

       ẋ = [ 1  1 ] x + [ 0 ] u + [ 1 ] w
           [ 0  1 ]     [ 1 ]     [ 1 ]

       y = [ 1  0 ] x + v
       z = [ 1  1 ] x

   (a) Use SRL arguments to show that, as the control weight ρ → 0, the regulator gains are given by K_r = k[ 1  1 ] (and find an approximate expression for k).
   (b) Use similar arguments to show that, as the relative sensor noise μ → 0, the estimator gains are given by L = l[ 1  1 ]^T (find an approximate expression for l).
   (c) Form the compensator using these gains and describe how the compensator stabilizes the system. (Try to do this analytically if possible, but if not, consider three designs with ρ = μ = {1, 10^{-2}, 10^{-4}}.)
   (d) Compute and plot the Nyquist plot for the system. How large a circle can you put around the critical point −1 without changing the number of encirclements? Again consider ρ = μ = {1, 10^{-2}, 10^{-4}}.
   (e) Compute and plot the sensitivity plot for the system. Is this plot consistent with your findings in the Nyquist diagram?
   (f) What can you conclude about the stability margins for this controller as ρ = μ → 0?
Fall 2001
Topic #6
16.31 Feedback Control
Control Design using Bode Plots
Performance Issues
Synthesis
Lead/Lag examples
Copyright 2001 by Jonathan How.
Topic #3
16.31 Feedback Control
Frequency response methods
Analysis
Synthesis
Performance
Stability
Copyright 2001 by Jonathan How.
Fall 2001 16.31 3–1
Introduction
Root locus methods have:

Advantages:
- Good indicator of the transient response;
- Explicitly shows the location of all closed-loop poles;
- Trade-offs in the design are fairly clear.

Disadvantages:
- Requires a transfer function model (poles and zeros);
- Difficult to infer all performance metrics;
- Hard to determine the steady-state response (to sinusoids).

Frequency response methods are a good complement to the root locus techniques:
- Can infer performance and stability from the same plot.
- Can use measured data rather than a transfer function model.
- The design process can be independent of the system order.
- Time delays are handled correctly.
- Graphical techniques (analysis and synthesis) are quite simple.
Frequency Response Function

Given a system with a transfer function G(s), we call G(jω), ω ∈ [0, ∞), the frequency response function (FRF):

    G(jω) = |G(jω)| e^{j arg G(jω)}

The FRF can be used to find the steady-state response of a system to a sinusoidal input. If

    e(t) → [ G(s) ] → y(t)

and e(t) = sin 2t, |G(2j)| = 0.3, arg G(2j) = −80°, then the steady-state output is

    y(t) = 0.3 sin(2t − 80°)

The FRF clearly shows the magnitude (and phase) of the response of a system to a sinusoidal input.

There are a variety of ways to display this:

1. Polar (Nyquist) plot: Re vs. Im of G(jω) in the complex plane. Hard to visualize, and not useful for synthesis, but gives definitive tests for stability and is the basis of the robustness analysis.
2. Nichols plot: |G(jω)| vs. arg G(jω), which is very handy for systems with lightly damped poles.
3. Bode plot: log |G(jω)| and arg G(jω) vs. log frequency. The simplest tool for visualization and synthesis. Typically we plot 20 log10 |G|, which is given the symbol dB.
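This is easy to check numerically. The sketch below (plain Python with only the standard library; the helper name freq_response is mine, not from the notes) evaluates the magnitude and phase of a transfer function on the jω-axis:

```python
import cmath
import math

def freq_response(G, w):
    """Evaluate the FRF of G at frequency w (rad/s).

    Returns (magnitude, phase in degrees) of G(jw).
    """
    z = G(1j * w)
    return abs(z), math.degrees(cmath.phase(z))

# A lead network (the example sketched later in these notes)
G = lambda s: 10 * (s + 1) / (s + 10)

mag, phase = freq_response(G, 1.0)
# Steady-state response to sin(t) is then mag * sin(t + phase)
```

For e(t) = sin(ωt), the steady-state output is mag·sin(ωt + phase), exactly as in the y(t) = 0.3 sin(2t − 80°) example above.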
We use a logarithmic scale since, if

    G(s) = (s + 1)(s + 2) / ((s + 3)(s + 4))

then

    log |G(s)| = log |s + 1| + log |s + 2| − log |s + 3| − log |s + 4|

and each of these factors can be calculated separately and then added to get the total FRF.

We can also split the phase plot, since

    arg [ (s + 1)(s + 2) / ((s + 3)(s + 4)) ] = arg(s + 1) + arg(s + 2) − arg(s + 3) − arg(s + 4)

The key point in sketching the plots is that good straight-line approximations exist and can be used to obtain a good prediction of the system response.
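A one-line numerical confirmation of the additive decomposition (a sketch in plain Python; the test frequency ω = 2 rad/s is arbitrary):

```python
import math

s = 2j  # a test point on the jw-axis
# Left side: log-magnitude of the full transfer function
lhs = math.log10(abs((s + 1) * (s + 2) / ((s + 3) * (s + 4))))
# Right side: sum/difference of the log-magnitudes of the factors
rhs = (math.log10(abs(s + 1)) + math.log10(abs(s + 2))
       - math.log10(abs(s + 3)) - math.log10(abs(s + 4)))
# lhs and rhs agree to rounding error
```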
Example

Draw the Bode plot for

    G(s) = (s + 1) / (s/10 + 1)

so that

    |G(jω)| = |jω + 1| / |jω/10 + 1|

    log |G(jω)| = log [1 + (ω/1)²]^{1/2} − log [1 + (ω/10)²]^{1/2}

Approximation:

    log [1 + (ω/ω_i)²]^{1/2} ≈  0             for ω ≪ ω_i
                                log (ω/ω_i)   for ω ≫ ω_i

These are two straight-line approximations that intersect at ω = ω_i.

The error at ω = ω_i is obvious, but not huge, and the straight-line approximations are very easy to work with.
[Figure: |G(jω)| vs. frequency (log-log axes), showing the exact FRF and its straight-line approximation.]
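The quality of the straight-line approximation can be quantified. A minimal sketch (plain Python; the function names are mine) compares the exact term with its asymptotes; the worst error occurs at the breakpoint ω = ω_i:

```python
import math

def exact(w, wi):
    # log10 [1 + (w/wi)^2]^(1/2)
    return 0.5 * math.log10(1 + (w / wi) ** 2)

def asymptote(w, wi):
    # 0 below the breakpoint, log10(w/wi) above it
    return 0.0 if w <= wi else math.log10(w / wi)

# Error at the breakpoint w = wi, in dB (i.e., on the 20*log10 scale)
err_db = 20 * (exact(1.0, 1.0) - asymptote(1.0, 1.0))
# err_db = 10*log10(2), about 3 dB: "obvious, but not huge"
```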
To form the composite sketch:

- Arrange the representation of the transfer function so that the DC gain of each element is unity (except for parts that have poles or zeros at the origin), and absorb the gain into the overall plant gain.
- Draw all of the component sketches.
- Start at low frequency (DC) with the component that has the lowest-frequency pole or zero (i.e., s = 0).
- Use this component to draw the sketch up to the frequency of the next pole/zero.
- Change the slope of the sketch at this point to account for the new dynamics: −1 for a pole, +1 for a zero, −2 for double poles, . . .
- Scale by the overall DC gain.
Figure 1: G(s) = 10(s + 1)/(s + 10), which is a lead. [|G| vs. frequency, log-log axes]
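Applying the composite-sketch rules to the lead of Figure 1 gives a simple piecewise description (a sketch; lead_asymptote and lead_exact are my own names, and the exact FRF is included for comparison):

```python
def lead_asymptote(w):
    """Straight-line |G| for G(s) = 10(s + 1)/(s + 10).

    DC gain is 10 * 1/10 = 1; the zero at w = 1 adds a +1 slope,
    and the pole at w = 10 removes it, leaving the HF gain of 10.
    """
    if w <= 1.0:
        return 1.0
    if w <= 10.0:
        return w       # +1 slope segment between the zero and the pole
    return 10.0

def lead_exact(w):
    return abs(10 * (1j * w + 1) / (1j * w + 10))

# Away from the breakpoints the approximation is very accurate
ratios = [lead_exact(w) / lead_asymptote(w) for w in (0.01, 0.1, 100.0)]
```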
Since arg G(jω) = arg(1 + jω) − arg(1 + jω/10), we can construct the phase plot for the complete system in a similar fashion.

We know that arg(1 + jω/ω_i) = arctan(ω/ω_i).

We can use the straight-line approximations

    arg(1 + jω/ω_i) ≈  0°    for ω/ω_i ≤ 0.1
                       90°   for ω/ω_i ≥ 10
                       45°   at  ω/ω_i = 1

Draw the components using breakpoints that are at ω_i/10 and 10ω_i.
Figure 2: Phase plot for (s + 1). [arg G (deg) vs. frequency, log scale]
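The three-segment phase rule can be written out directly (a sketch; I use linear interpolation in log10(ω) between the two breakpoints, which is the standard straight-line construction):

```python
import math

def phase_asymptote(w, wi):
    """Straight-line phase (deg) of (1 + jw/wi): 0 below wi/10,
    90 above 10*wi, linear in log10(w) in between (45 deg at w = wi).
    """
    r = w / wi
    if r <= 0.1:
        return 0.0
    if r >= 10.0:
        return 90.0
    return 45.0 * (1.0 + math.log10(r))

def phase_exact(w, wi):
    # arg(1 + jw/wi) in degrees
    return math.degrees(math.atan2(w / wi, 1.0))

corner = phase_asymptote(1.0, 1.0)  # 45 deg, matching the exact value
```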
Then add them up, starting from zero frequency and changing the slope at each breakpoint.

Figure 3: Phase plot of G(s) = 10(s + 1)/(s + 10), which is a lead. [arg G (deg) vs. frequency, log scale]
[Figure: Bode plot (magnitude and phase vs. frequency in Hz) showing the actual FRF and its low- (LF), mid- (MF), and high-frequency (HF) straight-line asymptotes, with slope annotations, for]

    G(s) = 4.54s / (s³ + 0.1818s² − 31.1818s − 4.4545)

The poles are at (−0.892, 0.886, −0.0227) Hz.
Non-minimum Phase Systems
Bode plots are particularly complicated when we have non-minimum phase systems.

- A system that has a pole or zero in the RHP is called non-minimum phase.
- The reason is clearer once you have studied the Bode gain-phase relationship.

Key point: We can construct two (and many more) systems that have identical magnitude plots, but very different phase diagrams.

Consider G₁(s) = (s + 1)/(s + 2) and G₂(s) = (s − 1)/(s + 2).
Figure 4: The magnitude plots are identical, but the phase plots are dramatically different. The NMP system has a 180° phase loss over this frequency range. [|G| and arg G vs. frequency for the MP and NMP systems]
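The claim in Figure 4 is easy to verify numerically, since |jω + 1| = |jω − 1| for every real ω. A minimal sketch (plain Python):

```python
import cmath
import math

G1 = lambda s: (s + 1) / (s + 2)   # minimum phase
G2 = lambda s: (s - 1) / (s + 2)   # non-minimum phase (RHP zero at s = 1)

w = 0.5  # an arbitrary test frequency (rad/s)
m1, m2 = abs(G1(1j * w)), abs(G2(1j * w))
p1 = math.degrees(cmath.phase(G1(1j * w)))
p2 = math.degrees(cmath.phase(G2(1j * w)))
# m1 == m2 at every frequency, but the phases differ substantially
```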
- Belanger. Control Engineering: A Modern Approach. Chapter 1.
- Van de Vegte. Feedback Control Systems. Chapters 5 and 6.
- Bode (PDF)
- Stability in the Frequency Domain (PDF - 2.4 MB)
- Bode Synthesis - Lead/Lag (PDF - 1.1 MB)
Fall 2001
Topic #5
16.31 Feedback Control
Stability in the Frequency Domain
Nyquist Stability Theorem
Examples
Appendix (details)
Remember that this is the basis of future robustness tests.
Copyright 2001 by Jonathan How.