
Lecture #1

16.30/31 Feedback Control Systems

Motivation
Fall 2010 16.30/31 1-2

16.30/31: Introduction

[Block diagram: reference r, error e, controller Gc(s), control u, disturbance d, plant Gp(s), output y]

Goal: Design a controller Gc(s) so that the system has some desired
characteristics. Typical objectives:
Stabilize the system (Stabilization)
Regulate the system about some design point (Regulation)
Follow a given class of command signals (Tracking)
Reduce the response to disturbances (Disturbance Rejection)

Typically think of closed-loop control, so we would analyze the closed-loop dynamics.
Open-loop control is also possible (called feedforward) - more prone to modeling errors since the inputs are not changed as a result of the measured error.

Note that a typical control system includes the sensors, actuators, and the control law.
The sensors and actuators need not always be physical devices
(e.g., economic systems).
A good selection of the sensor and actuator can greatly simplify
the control design process.
Course concentrates on the design of the control law given the rest of
the system (although we will need to model the system).

September 9, 2010

Why Use Control

Typically an easy question to answer for aerospace because many vehicles (spacecraft, aircraft, rockets) and aerospace processes (propulsion) need to be controlled just to function
Example: aircraft doing aggressive maneuvers - difficult to fly by hand since the dynamics are unstable and nonlinear (aerodynamic and geometric)

Fig. 1: Vertical hover Fig. 2: MIT Urban Challenge LR3.

Example: The dynamics of MIT's DARPA Urban Challenge vehicle (an LR3) are nonlinear, unstable, and constrained by physical limitations.
Car will not track the desired path without feedback control.

But there are also many stable systems that simply require better
performance in some sense (e.g., faster, less oscillatory), and we
can use control to modify/improve this behavior.


Feedback Control Approach

Establish control objectives


Qualitative: don't use too much fuel
Quantitative: settling time of step response < 3 sec
Typically requires that you understand the process (expected commands and disturbances) and the overall goals (bandwidths).
Often requires that you have a strong understanding of the physical dynamics of the system so that you do not fight them in inappropriate (i.e., inefficient) ways.

Select sensors & actuators


What aspects of the system are to be sensed and controlled?
Consider sensor noise and linearity as key discriminators.
Cost, reliability, size, . . .

Obtain model
Analytic (FEM) or from measured data (system ID)
Evaluation model (full) → reduce size/complexity → design model
Accuracy? Error model?

Design controller
Select technique (SISO, MIMO), (classical, state-space)
Choose parameters (rules of thumb (ROT), optimization)

Analyze closed-loop performance. Meet objectives?
Analysis, simulation, experimentation, . . .
Yes: done; No: iterate . . .


Assumptions and Philosophy

I assume that, given a relatively simple (first or second order) system and a set of stability/performance requirements:
Given a controller, you can analyze the stability & expected performance
You have a reasonably good idea how to design a controller using classical techniques (Bode and/or root locus)

Will not focus too much on classical design in this course, but I personally think it is important to know the design process for classical controllers, and you should be able to provide a classical interpretation of any advanced control technique as a sanity check
Try to answer "why did it do that?"
Not always possible, but always a good goal.
Texts provide the full design process - we will only review it

Observation: in classical control the process tends to be a bit cryptic, is designer intensive (lots of simple but detailed calculations), and is typically very iterative
So our focus in this class is on state-space methods, for several reasons:
More systematic design tools exist - can be easily codified and implemented numerically, and it is easy to integrate optimization
Easily handle larger-scale systems (many modes) and many sensors and actuators (MIMO).


State-Space Approach

Basic questions that we will address about the state-space approach:


What are state-space models?
Why should we use them?
How are they related to the transfer functions used in classical
control design?
How do we develop a state-space model?
How do we design a controller using a state-space model?

Bottom line:

1. What: representation of the dynamics of an nth-order system using n first-order differential equations. For example, with
m q̈ + c q̇ + k q = F
define the state x = [q q̇]ᵀ and the input u = F, so that
d/dt [q ; q̇] = [0 1 ; −k/m −c/m][q ; q̇] + [0 ; 1/m] u
ẋ = A x + B u
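As a quick numerical sanity check (a sketch in Python rather than the course's Matlab; the mass, damping, and stiffness values are assumed for illustration), the eigenvalues of A match the poles of the transfer function 1/(ms² + cs + k):

```python
import cmath

# m*q'' + c*q' + k*q = F with illustrative (assumed) values
m, c, k = 1.0, 0.5, 2.0

# State x = [q, qdot]: xdot = A x + B u, with u = F
A = [[0.0, 1.0],
     [-k / m, -c / m]]
B = [0.0, 1.0 / m]

# Eigenvalues of the 2x2 A matrix (closed form via trace/determinant)
tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
disc = cmath.sqrt(tr * tr - 4.0 * det)
eigs = sorted([(tr - disc) / 2.0, (tr + disc) / 2.0], key=lambda s: s.imag)

# Poles of the transfer function 1/(m s^2 + c s + k)
disc2 = cmath.sqrt(c * c - 4.0 * m * k)
poles = sorted([(-c - disc2) / (2.0 * m), (-c + disc2) / (2.0 * m)],
               key=lambda s: s.imag)
```

The state-space eigenvalues and the transfer-function poles coincide, which is the "related to transfer functions" point made above.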

2. Why:
State variable form - a convenient way to work with complex dynamics. Matrix format is easy to use on computers.
Transfer functions only deal with input/output behavior, but state-space form provides easy access to the internal features/response of the system.
Allows us to explore new analysis and synthesis tools.
Great for multiple-input multiple-output systems (MIMO), which are very hard to work with using transfer functions.


3. How: There are a variety of ways to develop these state-space models. We will explore this process in detail.

Linearization of nonlinear models

Derivation from simple linear dynamics

4. Control design: Split into 3 main parts


Full-state feedback - fictitious since it requires more information than is typically (ever?) available
Observer/estimator design - the process of estimating the system state from the measurements that are available.
Dynamic output feedback - combines these two parts with provable guarantees on stability (and performance).
Fortunately there are very simple numerical tools available to
perform each of these steps

Removes much of the art and/or magic required in classical control design - makes the design process more systematic.

Observation: Linear systems theory involves extensive use of linear algebra.
Will not focus on the theorems/proofs in class - details will be handed out as necessary, but these are in the textbooks.
Will focus on using the linear algebra to understand the behavior of the system dynamics so that we can modify them using control.
Linear algebra in action
Even so, this will require more algebra than most math courses that you have taken . . . .


Nonlinearities and Robustness


Significant difference from previous years is that we will spend more time on the analysis of system stability assuming various types of basic nonlinearities
All systems are nonlinear to some extent
How well will a controller based on linear model assumptions work on the full nonlinear system?

Goal is to develop tools to work with a nonlinear model: linearize it, design a controller, and then analyze the performance on the original nonlinear system.
Also interested in understanding how the addition of certain types of nonlinearities (saturation, rate limits, stiction) might influence stability/performance.

Will also explicitly consider the effects of uncertainty in the dynamic models
Basic issues such as gain and timing errors
Parameter errors
Unmodeled dynamics


Implementation Issues

Many universities used to teach an entire class on digital control analysis and design
I did at Stanford for many years and have a good set of notes on the subject if you are interested

But with the increase in processor speeds and the ability to develop code right from Simulink (e.g., RTW), there is not much point discussing all of the intricacies of digital design
But sometimes you have to use a small computer and/or you don't get all the cycles

So we will discuss the process enough to capture the main issues and the overall implementation approach
Bottom line is that as long as you sample fast enough the effects are not catastrophic - it just adds a little bit more delay to the feedback loop.
Easily predicted and relatively easy to account for.
Provides feedback on how fast you need to go - going faster costs more money and can create numerical implementation issues

Much of the implementation done for the labs will use code automatically developed straight from Simulink
You are probably familiar with Matlab, but should practice with Simulink as well, as that will be used for the HW and the labs.


Feedback
Muddy cards and office hours.
Help me to know whether my assumptions about your backgrounds are correct and whether there are any questions about the material.

MIT OpenCourseWare
http://ocw.mit.edu

16.30 / 16.31 Feedback Control Systems


Fall 2010

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
Topic #2

16.30/31 Feedback Control Systems

Basic Root Locus

Basic aircraft control concepts


Basic control approaches

Aircraft Longitudinal Control

Consider the short period approximate model of a 747 aircraft:
ẋ_sp = A_sp x_sp + B_sp δe
where δe is the elevator input, and
x_sp = [w ; q]
A_sp = [ Z_w/m                        U_0
         I_yy⁻¹(M_w + M_ẇ Z_w/m)      I_yy⁻¹(M_q + M_ẇ U_0) ]
B_sp = [ Z_δe/m
         I_yy⁻¹(M_δe + M_ẇ Z_δe/m) ]
Add that θ̇ = q, so sθ = q

Take the output as θ and the input as δe, then form the transfer function¹
θ(s)/δe(s) = (1/s) · q(s)/δe(s) = (1/s) · [0 1](sI − A_sp)⁻¹ B_sp
For the 747 (40 Kft, M = 0.8) this reduces to:
θ(s)/δe(s) = (−1.1569 s − 0.3435) / (s(s² + 0.7410 s + 0.9272)) ≡ G_θδe(s)
so that the dominant roots have a frequency of approximately 1
rad/sec and damping of about 0.4
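The quoted frequency and damping follow directly from the quadratic factor of the denominator; a minimal check in Python (the course itself uses Matlab):

```python
import math

# Dominant quadratic from the short-period denominator:
# s^2 + 2*zeta*wn*s + wn^2 = s^2 + 0.7410*s + 0.9272
wn = math.sqrt(0.9272)      # undamped natural frequency, rad/sec (~0.96)
zeta = 0.7410 / (2.0 * wn)  # damping ratio (~0.38)
```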
[Pole-zero map of the short-period dynamics; axis residue removed]

Fig. 1: Note - this is the pole-zero map for G_qδe

1 Much more on how to do this part later


Fall 2010 16.30/31 23

Basic problem is that there are vast quantities of empirical data showing that pilots do not like the flying qualities of an aircraft with this combination of frequency and damping
What is preferred?

[Figure: handling-qualities regions labeled POOR, ACCEPTABLE, SATISFACTORY, and UNACCEPTABLE, plotted on axes of undamped natural frequency (rad/sec) vs. damping ratio]

Image by MIT OpenCourseWare.

Fig. 2: Thumb Print criterion

This criterion was developed in the 1950s, and more recent data is provided in MIL-SPEC-8785C
Based on this plot, a good target: frequency of about 3 rad/sec and damping of about 0.6

Problem is that the short period dynamics are nowhere near these numbers, so we must modify them.
Could do it by redesigning the aircraft, but it is a bit late for that. . .
Fall 2010 16.30/31 24

First Short Period Autopilot

First attempt to control the vehicle response: measure θ and feed it back to the elevator command δe.
Unfortunately the actuator is slow, so there is an apparent lag in the response that we must model

[Block diagram: θc → gain kθ → δec → actuator 4/(s+4) → δea → G_θδe(s) → θ, with θ fed back to the summing junction]

Dynamics: δea is the actual elevator deflection, δec is the actuator command created by our controller
θ = G_θδe(s) δea;   δea = H(s) δec;   H(s) = 4/(s + 4)
The control is just basic proportional feedback
δec = −kθ(θ − θc)

which gives that
θ = −G_θδe(s)H(s)kθ(θ − θc)
or that
θ(s)/θc(s) = kθ G_θδe(s)H(s) / (1 + kθ G_θδe(s)H(s))

Looks good, but how do we analyze what is going on?
Need to be able to predict where the poles go as a function of kθ → Root Locus
Fall 2010 16.30/31 25

Root Locus Basics

[Block diagram: reference r, error e, controller Gc(s), control u, plant Gp(s), output y]

Assume that the plant transfer function is of the form
Gp = Kp Np/Dp = Kp ∏ᵢ₌₁ⁿᵖᶻ (s − z_pi) / ∏ᵢ₌₁ⁿᵖᵖ (s − p_pi)
and the controller transfer function is
Gc(s) = Kc Nc/Dc = Kc ∏ᵢ₌₁ⁿᶜᶻ (s − z_ci) / ∏ᵢ₌₁ⁿᶜᵖ (s − p_ci)
Assume that n_pp > n_pz and n_cp > n_cz ²

Signals are:
u control commands
y output/measurements
r reference input
e response error

Unity feedback form. We could add the controller Gc in the feedback path without changing the pole locations.
Will discuss performance and add disturbances later, but for now just focus on the pole locations

2 Errata: Added n values for the number of poles and zeros



Basic questions:
Analysis: Given Nc and Dc, where do the closed-loop poles go as a function of Kc?

Synthesis: Given Kp, Np and Dp, how should we choose Kc, Nc, Dc to put the closed-loop poles in the desired locations?

Block diagram analysis: Since y = GpGc e and e = r − y, it is easy to show that
y/r = GcGp/(1 + GcGp) ≡ Gcl(s)
where
Gcl(s) = KcKp NcNp / (DcDp + KcKp NcNp)
is the closed-loop transfer function
Denominator called the characteristic equation Φc(s), and the roots of Φc(s) = 0 are called the closed-loop poles (CLP).
The CLP are clearly functions of Kc for a given Kp, Np, Dp, Nc, Dc → a locus of roots [Evans, 1948]
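The closed-loop denominator DcDp + KcKpNcNp is just polynomial arithmetic on coefficient lists; a sketch in Python, with the loop (Gp = 1/(s(s+1)), Gc = Kc) assumed purely for illustration:

```python
# Polynomials as coefficient lists, highest power first.
def polymul(a, b):
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def polyadd(a, b):
    n = max(len(a), len(b))
    a = [0.0] * (n - len(a)) + list(a)
    b = [0.0] * (n - len(b)) + list(b)
    return [x + y for x, y in zip(a, b)]

# Assumed example loop: Gp = 1/(s(s+1)), Gc = Kc = 4, Kp = 1
Np, Dp = [1.0], [1.0, 1.0, 0.0]
Nc, Dc = [1.0], [1.0]
Kc, Kp = 4.0, 1.0

# Characteristic polynomial Dc*Dp + Kc*Kp*Nc*Np = s^2 + s + 4
phi = polyadd(polymul(Dc, Dp), [Kc * Kp * x for x in polymul(Nc, Np)])
```

Rooting phi for a sweep of Kc values is exactly what a root locus plot does.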

Root Locus Analysis

General root locus is hard to determine by hand and requires Matlab tools such as rlocus(num,den) to obtain the full result, but we can get some important insights by developing a short set of plotting rules.
Full rules in FPE, page 279 (4th edition).
Basic questions:
1. What points are on the root locus?
2. Where does the root locus start?
3. Where does the root locus end?
4. When/where is the locus on the real line?
5. Given that s0 is found to be on the locus, what gain is needed for that to become the closed-loop pole location?
6. What are the departure and arrival angles?
7. Where are the multiple points on the locus?

Question #1: is point s0 on the root locus? Assume that Nc and Dc are known; let
Ld = NcNp/(DcDp)  and  K = KcKp
Φc(s) = 1 + K Ld(s) = 0
So values of s for which Ld(s) = −1/K, with K real, are on the RL.
For K positive, s0 is on the root locus if
∠Ld(s0) = 180° ± l·360°,  l = 0, 1, . . .
If K negative, s0 is on the root locus if [0° locus]
∠Ld(s0) = 0° ± l·360°,  l = 0, 1, . . .
These are known as the phase conditions.
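The phase condition is easy to test numerically; a sketch in Python, with Ld(s) = 1/(s(s+2)) assumed for illustration (its positive-K locus contains the real segment between the two poles and the vertical line through s = −1):

```python
import cmath
import math

# Assumed example loop (not from the notes): Ld(s) = 1/(s(s+2))
def Ld(s):
    return 1.0 / (s * (s + 2.0))

def on_positive_locus(s0, tol_deg=1e-6):
    # s0 is on the positive-K locus iff angle(Ld(s0)) is an odd
    # multiple of 180 degrees; cmath.phase returns (-180, 180]
    phase = math.degrees(cmath.phase(Ld(s0)))
    return min(abs(phase - 180.0), abs(phase + 180.0)) < tol_deg
```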

Question #2: Where does the root locus start?
Φc = 1 + K NcNp/(DcDp) = 0
⇒ DcDp + K NcNp = 0
So if K → 0, then the locus starts at the solutions of DcDp = 0, which are the poles of the plant and compensator.

Question #3: Where does the root locus end?
Already shown that for s0 to be on the locus, must have
Ld(s0) = −1/K
So if K → ∞, the poles must satisfy:
Ld = NcNp/(DcDp) → 0
There are several possibilities:
1. Poles are located at values of s for which NcNp = 0, which are the zeros of the plant and the compensator
2. If the loop Ld(s) has more poles than zeros, then as |s| → ∞, |Ld(s)| → 0, but we must ensure that the phase condition is still satisfied.

More details as K → ∞:
Assume there are n zeros and p poles of Ld(s)
Then for large |s|,
Ld(s) ≈ 1/(s − α)^(p−n)
So the root locus degenerates to:
1 + K/(s − α)^(p−n) = 0
So n poles head to the zeros of Ld(s)
Remaining p − n poles head to |s| = ∞ along asymptotes defined by the radial lines
φ_l = (180° + 360°(l − 1))/(p − n),  l = 1, 2, . . .
so that the number of asymptotes is governed by the number of poles compared to the number of zeros (relative degree).
If z_i are the zeros of Ld and p_j are the poles, then the centroid of the asymptotes is given by:
α = (Σⱼ₌₁ᵖ p_j − Σᵢ₌₁ⁿ z_i)/(p − n)
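The asymptote angles and centroid can be computed directly from these formulas; a sketch in Python (the three-pole, one-zero case is the G(s) = (s+1)/(s²(s+4)) example below):

```python
# Asymptote angles and centroid for a loop with given poles and zeros.
def asymptotes(poles, zeros):
    p, n = len(poles), len(zeros)
    rel = p - n  # relative degree = number of asymptotes
    angles = [(180.0 + 360.0 * (l - 1)) / rel for l in range(1, rel + 1)]
    centroid = (sum(poles) - sum(zeros)).real / rel
    return angles, centroid

# Example: G(s) = (s+1)/(s^2 (s+4)): poles {0, 0, -4}, zero {-1}
angles, alpha = asymptotes([0.0, 0.0, -4.0], [-1.0])
# relative degree 2 -> angles 90 and 270 deg; centroid (-4 - (-1))/2 = -1.5
```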

Example: L(s) = 1/s⁴
[Sketch: Im/Re axes]
Number of asymptotes and α?

Example: G(s) = (s + 1)/(s²(s + 4))
[Sketch: Im/Re axes]
Number of asymptotes and α?

Example: G(s) = (s − 1)/(s²(s − 4))
[Sketch: Im/Re axes]
Number of asymptotes and α?

Question #4: When/where is the locus on the real line?
Locus points on the real line are to the left of an odd number of real axis poles and zeros [K positive].
Follows from the phase condition and the fact that the phase contribution of the complex poles/zeros cancels out

Question #5: Given that s0 is found to be on the locus, what gain is needed for that to become the closed-loop pole location?
Need
K = 1/|Ld(s0)| = |Dp(s0)Dc(s0)| / |Np(s0)Nc(s0)|

Since K = KpKc, the sign of Kc depends on the sign of Kp
e.g., assume that ∠Ld(s0) = 180°, then need Kc and Kp to be the same sign so that K > 0

Root Locus Examples

[Sketch: Im/Re axes]
Fig. 3: Basic

[Sketch: Im/Re axes]
Fig. 4: Two poles

[Sketch: Im/Re axes]
Fig. 5: Add zero

Examples similar to the control design process: add compensator dynamics to modify the root locus and then choose the gain to place the CLP at the desired location on the locus.

[Sketch: Im/Re axes]
Fig. 6: Three poles

[Sketch: Im/Re axes]
Fig. 7: Add a zero again

[Sketch: Im/Re axes]
Fig. 8: Complex Case

[Sketch: Im/Re axes]
Fig. 9: Very Complex Case


Performance Issues
Interested in knowing how well our closed-loop system can track various inputs
Steps, ramps, parabolas
Both transient and steady state

For perfect steady-state tracking want the error to approach zero
lim_{t→∞} e(t) = 0

Can determine this using the closed-loop transfer function and the final value theorem
lim_{t→∞} e(t) = lim_{s→0} s e(s)

So for a step input r(t) = 1(t) ⇒ r(s) = 1/s
y(s)/r(s) = Gc(s)Gp(s)/(1 + Gc(s)Gp(s)),   y(s)/e(s) = Gc(s)Gp(s)
e(s)/r(s) = 1/(1 + Gc(s)Gp(s))
so in the case of a step input, we have
e(s) = r(s)/(1 + Gc(s)Gp(s)) = (1/s)/(1 + Gc(s)Gp(s))
lim_{s→0} s e(s) = lim_{s→0} s · (1/s)/(1 + Gc(s)Gp(s)) = 1/(1 + Gc(0)Gp(0)) = e(∞)

So the steady-state error to a step is given by
e_ss = 1/(1 + Gc(0)Gp(0))
To make the error small, we need to make one (or both) of Gc(0), Gp(0) very large

Clearly if Gp(s) has a free integrator (or two) so that it resembles
1/(sⁿ(s + a)ᵐ) with n ≥ 1, then
lim_{s→0} Gp(s) → ∞  ⇒  e_ss → 0

Can continue this discussion by looking at various input types (step, ramp, parabola) with systems that have a different number of free integrators (type), but the summary is this:

          step        ramp    parabola
type 0    1/(1+Kp)    ∞       ∞
type 1    0           1/Kv    ∞
type 2    0           0       1/Ka

where
Kp = lim_{s→0} Gc(s)Gp(s)     Position Error Constant
Kv = lim_{s→0} s Gc(s)Gp(s)    Velocity Error Constant
Ka = lim_{s→0} s² Gc(s)Gp(s)   Acceleration Error Constant

which are a good simple way to keep track of how well your system is doing in terms of steady-state tracking performance.
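The error constants can be evaluated numerically by taking the limits at a small s; a sketch in Python with an assumed type-1 loop Gc(s)Gp(s) = 10/(s(s+2)):

```python
# Assumed type-1 loop (one free integrator), not from the notes
def L(s):
    return 10.0 / (s * (s + 2.0))

s_small = 1e-9  # proxy for the s -> 0 limit

Kv = s_small * L(s_small)       # velocity constant: lim s*L(s) = 5
ess_ramp = 1.0 / Kv             # steady-state error to a unit ramp = 0.2

Kp = L(s_small)                 # position constant diverges for type 1 ...
ess_step = 1.0 / (1.0 + Kp)     # ... so the step error goes to 0
```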

Dynamic Compensation

For a given plant, can draw a root locus versus K. But if the desired pole locations are not on that locus, then need to modify it using dynamic compensation.
Basic root locus plots give us an indication of the effect of adding compensator dynamics. But need to know what to add to place the poles where we want them.
New questions:
What type of compensation is required?
How do we determine where to put the additional dynamics?
There are three classic types of controllers u = Gc(s)e

1. Proportional feedback: Gc ≡ Kg, a gain, so that Nc = Dc = 1
Same case we have been looking at.

2. Integral feedback:
u(t) = Ki ∫₀ᵗ e(τ)dτ  ⇒  Gc(s) = Ki/s
Used to reduce/eliminate steady-state error
If e(τ) is approximately constant, then u(t) will grow to be very large and thus hopefully correct the error.

Consider the error response of Gp(s) = 1/((s + a)(s + b)) (a > 0, b > 0) to a step,
r(t) = 1(t) ⇒ r(s) = 1/s
where
e/r = 1/(1 + GcGp) ≡ S(s)  ⇒  e(s) = r(s)/(1 + GcGp)
where S(s) is the Sensitivity Transfer Function for the closed-loop system

To analyze the error, use the FVT: lim_{t→∞} e(t) = lim_{s→0} s e(s)
so that with proportional control,
e_ss = lim_{s→0} s · (1/s) · 1/(1 + Kg Gp(s)) = 1/(1 + Kg/(ab))
so can make e_ss small, but only with a very large Kg

With integral control, lim_{s→0} Gc(s) = ∞, so e_ss → 0
Integral control improves the steady state, but this is at the expense of the transient response
Typically gets worse because the system is less well damped

Example #1: G(s) = 1/((s + a)(s + b)); add integral feedback to improve the steady-state response.

[Sketch: Im/Re axes]

Fig. 10: RL after adding integral FB

Increasing Ki to increase the speed of the response pushes the poles towards the imaginary axis → more oscillatory response.

Now combine proportional and integral (PI) feedback:
Gc = K1 + K2/s = (K1 s + K2)/s
which introduces a pole at the origin and a zero at s = −K2/K1
PI solves many of the problems with just integral control

[Sketch: Im/Re axes]

Fig. 11: RL with proportional and integral FB


3. Derivative Feedback: u = Kd ė so that Gc(s) = Kd s
Does not help with the steady state
Provides feedback on the rate of change of e(t) so that the control can anticipate future errors.

Example #2: G(s) = 1/((s − a)(s − b)), (a > 0, b > 0), with Gc(s) = Kd s

[Sketch: Im/Re axes]

Fig. 12: RL with derivative FB

Derivative feedback is very useful for pulling the root locus into the LHP - increases damping and gives a more stable response.

Typically used in combination with proportional feedback to form proportional-derivative feedback (PD)
Gc(s) = K1 + K2 s
which moves the zero from the origin.

Unfortunately pure PD is not realizable in the lab, as pure differentiation of a measured signal is typically a bad idea
Typically use band-limited differentiation instead, by rolling off the PD control with a high-frequency pole (or two).
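The effect of rolling off the derivative shows up in the frequency response; a sketch in Python comparing a pure derivative Kd·s with a band-limited version Kd·s/(τs + 1) (the gain and time constant are assumed for illustration):

```python
# Assumed values for illustration
Kd, tau = 2.0, 0.01

def pure(w):
    # |Kd * jw|: grows without bound -> amplifies sensor noise
    return abs(Kd * 1j * w)

def limited(w):
    # |Kd * jw / (tau*jw + 1)|: matches at low frequency,
    # flattens out near Kd/tau at high frequency
    return abs(Kd * 1j * w / (tau * 1j * w + 1.0))
```

At low frequency the two act the same; at high frequency the band-limited version saturates near Kd/τ instead of blowing up.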

Controller Synthesis

First determine where the poles should be located
Will proportional feedback do the job?
What types of dynamics need to be added? Use the main building block
G_B(s) = Kc (s + z)/(s + p)

Looks like various controllers, depending how Kc, p, and z are picked
If pick z > p, with p small, then
[Sketch: Im/Re axes]
G_B(s) ≈ Kc (s + z)/s
which is essentially a PI compensator, called a lag.
If pick p ≫ z, then at low frequency the impact of p/(s + p) is small, so
[Sketch: Im/Re axes]
G_B(s) ≈ Kc(s + z)
which is essentially a PD compensator, called a lead.

Various algorithms exist to design the components of the lead and lag compensators

Classic Root Locus Approach

Consider a simple system Gp = 1/s² for which we want the closed-loop poles to be at −1 ± 2j
Will proportional control be sufficient? No
So use a compensator with 1 pole:
Gc = K (s + z)/(s + p)
So there are 3 CLP.
To determine how to pick p, z, and K, we must use the phase and magnitude conditions of the RL
To proceed, evaluate the phase of the loop
Ld(s) = (s + z)/((s + p)s²)
at s0 = −1 + 2j. Since we want s0 to be on the new locus, we know that ∠Ld(s0) = 180° ± 360°l
[Sketch: s0 = (−1, 2j) with vectors from the zero at −z, the pole at −p, and the double pole at the origin]

Fig. 13: Phase Condition

As shown in the figure, there are four terms in ∠Ld(s0) - the two poles at the origin contribute 117° each
Given the assumed location of the compensator pole/zero, can work out their contribution as well

Geometry for the real zero: tan φz = 2/(z − 1), and for the real pole: tan φp = 2/(p − 1)

Since we expect the zero to be closer to the origin, put it first on the negative real line, and then assume that p = 10z (a pole/zero ratio of 5-10 is typically good).

So the phase condition gives:
2(117°) + φp − φz = 180°

⇒ arctan(2/(z − 1)) − arctan(2/(10z − 1)) = 53°
but recall that
tan(A − B) = (tan A − tan B)/(1 + tan A tan B)
so
( 2/(z − 1) − 2/(10z − 1) ) / ( 1 + (2/(z − 1))·(2/(10z − 1)) ) = 1.33
which gives z = 2.2253, p = 22.2531, kc = 45.5062

% RL design using angles
clear all
target = -1+2*j;
phi_origin = 180 + atan(imag(target)/real(target))*180/pi;
syms z M; ratio = 10;
phi_z = imag(target)/(z + real(target));       % tan of zero angle
phi_p = imag(target)/(ratio*z + real(target)); % tan of pole angle
M = (phi_z - phi_p)/(1 + phi_z*phi_p);
test = solve(M - tan(pi/180*(2*phi_origin - 180)));
Z = eval(test(1));
P = ratio*Z;
K = 1/abs((target + Z)/(target^2*(target + P)));
[Z P K]

Pole Placement
Another option for simple systems is called pole placement.

Know that the desired characteristic equation is
Φd(s) = (s² + 2s + 5)(s + α) = 0

Actual closed-loop poles solve:
Φc(s) = 1 + GpGc = 0
⇒ s²(s + p) + K(s + z) = 0
⇒ s³ + s²p + Ks + Kz = 0

Clearly need to pull the poles at the origin into the LHP, so need a lead compensator. Rule of thumb:³ take p = (5 to 10)z.

Compare the characteristic equations (with p = 10z):
Φc(s) = s³ + 10z s² + Ks + Kz = 0
Φd(s) = (s² + 2s + 5)(s + α) = s³ + s²(α + 2) + s(2α + 5) + 5α = 0
gives
s²:  α + 2 = 10z
s¹:  2α + 5 = K
s⁰:  5α = zK
solve for α, z, K:
K = 25/(5 − 2z),   α = 5z/(5 − 2z)
⇒ z = 2.23, α = 20.25, K = 45.5
3 Errata: changed the rule of the pole zero ratio.

Fig. 14: CLP with pole placement

Code: Pole Placement


% Pole placement design
% Fall 2009
close all
figure(1);clf
set(gcf,'DefaultLineLineWidth',2)
set(gcf,'DefaultlineMarkerSize',10)
set(gcf,'DefaultlineMarkerFace','b')
clear all;%close all;
set(0, 'DefaultAxesFontSize', 14, 'DefaultAxesFontWeight','demi')
set(0, 'DefaultTextFontSize', 14, 'DefaultTextFontWeight','demi')

% Example: G(s) = 1/s^2
% Design Gc(s) to put the CLP at -1 +/- 2j
z=roots([20 -49 10]);z=max(z),k=25/(5-2*z),alpha=5*z/(5-2*z),
num=1;den=[1 0 0];
knum=k*[1 z];kden=[1 10*z];
rlocus(conv(num,knum),conv(den,kden));
hold;plot(-alpha+eps*j,'d');plot([-1+2*j,-1-2*j],'d');hold off
r=rlocus(conv(num,knum),conv(den,kden),1)'
axis([-25 5 -15 15])
print -dpng -r300 rl_pp.png


Observations

In a root locus design it is easy to see the pole locations, and thus we can relatively easily identify the dominant time response
Caveat is that a near pole/zero cancellation complicates the process of determining which set of poles will dominate
Some of the performance specifications are given in the frequency response, and it is difficult to determine those (and the corresponding system error gains) in the RL plot
Easy for low-order systems, very difficult / time consuming for higher-order ones
As we will see, extremely difficult to identify the robustness margins using a RL plot
A good approach for a fast/rough initial design

Matlab tool called sisotool provides a great interface for designing and analyzing controllers
Topic #3

16.30/31 Feedback Control Systems

Frequency response methods


Analysis
Synthesis
Performance
Stability in the Frequency Domain
Nyquist Stability Theorem

FR: Introduction

Root locus methods have:
Advantages:
Good indicator of transient response;
Explicitly shows location of all closed-loop poles;
Trade-offs in the design are fairly clear.
Disadvantages:
Requires a transfer function model (poles and zeros);
Difficult to infer all performance metrics;
Hard to determine the steady-state response (sinusoids);
Hard to infer stability margins

Frequency response methods are a good complement to the root locus techniques:
Can infer performance and stability from the same plot
Can use measured data rather than a transfer function model
Design process can be independent of the system order
Time delays are handled correctly
Graphical techniques (analysis and synthesis) are quite simple.

September 15, 2010



Frequency Response Function

Given a system with a transfer function G(s), we call G(jω), ω ∈ [0, ∞), the frequency response function (FRF)
G(jω) = |G(jω)| ∠G(jω)
The FRF can be used to find the steady-state response of a system to a sinusoidal input: if
e(t) → G(s) → y(t)
and e(t) = sin 2t, |G(2j)| = 0.3, ∠G(2j) = 80°, then the steady-state output is
y(t) = 0.3 sin(2t + 80°)
The FRF clearly shows the magnitude (and phase) of the response of a system to sinusoidal input
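The same computation for an assumed first-order example G(s) = 1/(s+1) (not the system above), sketched in Python:

```python
import cmath
import math

# Assumed example system, for illustration only
def G(s):
    return 1.0 / (s + 1.0)

# Steady-state response to e(t) = sin(w t) is
# |G(jw)| * sin(w t + angle G(jw))
w = 2.0
mag = abs(G(1j * w))                          # ~0.447
phase = math.degrees(cmath.phase(G(1j * w)))  # ~-63.4 degrees
```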

A variety of ways to display this:

1. Polar (Nyquist) plot - Re vs. Im of G(jω) in the complex plane.
Hard to visualize, not useful for synthesis, but gives definitive tests for stability and is the basis of the robustness analysis.

2. Nichols Plot - |G(jω)| vs. ∠G(jω), which is very handy for systems with lightly damped poles.

3. Bode Plot - log|G(jω)| and ∠G(jω) vs. log frequency.
Simplest tool for visualization and synthesis
Typically plot 20 log|G|, which is given the symbol dB


Use a logarithmic scale since if
G(s) = (s + 1)(s + 2)/((s + 3)(s + 4))
then
log|G(s)| = log|s + 1| + log|s + 2| − log|s + 3| − log|s + 4|
and each of these factors can be calculated separately and then added to get the total FRF.

Can also split the phase plot since
∠[(s + 1)(s + 2)/((s + 3)(s + 4))] = ∠(s + 1) + ∠(s + 2) − ∠(s + 3) − ∠(s + 4)

The key point in the sketching of the plots is that good straight-line approximations exist and can be used to obtain a good prediction of the system response.


Bode Example

Draw Bode for
G(s) = (s + 1)/(s/10 + 1)
|G(jω)| = |jω + 1| / |jω/10 + 1|
log|G(jω)| = log[1 + (ω/1)²]^(1/2) − log[1 + (ω/10)²]^(1/2)

Approximation:
log[1 + (ω/ωi)²]^(1/2) ≈ 0 for ω ≪ ωi,  and ≈ log[ω/ωi] for ω ≫ ωi

Two straight-line approximations that intersect at ω ≈ ωi

Error at ωi is obvious, but not huge, and the straight-line approximations are very easy to work with.

Fig. 1: Frequency response basic approximation
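The worst-case error of the straight-line approximation sits at the breakpoint and is about 3 dB; a quick check in Python:

```python
import math

# One zero factor: log[1 + (w/wi)^2]^(1/2)
def exact(w, wi):
    return 0.5 * math.log10(1.0 + (w / wi) ** 2)

# Straight-line approximation: 0 below the break, log10(w/wi) above
def approx(w, wi):
    return 0.0 if w <= wi else math.log10(w / wi)

wi = 10.0
# Error at the breakpoint, expressed in 20*log10 (dB) units: ~3 dB
err_db = 20.0 * abs(exact(wi, wi) - approx(wi, wi))
```
Well away from the break the two agree closely, which is why the sketches work.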


To form the composite sketch,
Arrange the representation of the transfer function so that the DC gain of each element is unity (except for parts that have poles or zeros at the origin) - absorb the gain into the overall plant gain.
Draw all component sketches
Start at low frequency (DC) with the component that has the lowest frequency pole or zero (i.e., s = 0)
Use this component to draw the sketch up to the frequency of the next pole/zero.
Change the slope of the sketch at this point to account for the new dynamics: -1 for a pole, +1 for a zero, -2 for double poles, . . .
Scale by the overall DC gain

Fig. 2: G(s) = 10(s + 1)/(s + 10) which is a lead.


Since ∠G(jω) = ∠(1 + jω) − ∠(1 + jω/10), we can construct the phase plot for the complete system in a similar fashion
Know that ∠(1 + jω/ωi) = tan⁻¹(ω/ωi)
Can use straight-line approximations
∠(1 + jω/ωi) ≈ 0° for ω/ωi ≤ 0.1,  ≈ 90° for ω/ωi ≥ 10,  and = 45° at ω/ωi = 1

Draw components using breakpoints that are at ωi/10 and 10ωi

Fig. 3: Phase plot for (s + 1)


Then add them up starting from zero frequency, changing the slope as each breakpoint is passed

Fig. 4: Phase plot G(s) = 10(s + 1)/(s + 10) which is a lead.


Frequency Stability Tests

Want tests on the loop transfer function L(s) = Gc(s)G(s) that can be performed to establish stability of the closed-loop system
Gcl(s) = Gc(s)G(s)/(1 + Gc(s)G(s))
Easy to determine using a root locus.
How to do this in the frequency domain? i.e., what is the simple equivalent of the statement "does the root locus go into the RHP?"

Intuition: All points on the root locus have the properties that ∠L(s) = ±180° and |L(s)| = 1
So at the point of neutral stability (i.e., imaginary axis crossing), we know that these conditions must hold for s = jω
So for neutral stability in the Bode plot (assume stable plant), must have that ∠L(jω) = ±180° and |L(jω)| = 1
So for most systems we would expect to see |L(jω)| < 1 at the frequencies for which ∠L(jω) = ±180°

Note that ∠L(jω) = ±180° and |L(jω)| = 1 corresponds to L(jω) = −1 + 0j


Gain and Phase Margins

Gain Margin: factor by which the gain is less than 1 at the frequen
cies for which L(j ) = 180
GM = 20 log |L(j )|

Phase Margin: angle by which the system phase diers from 180
when the loop gain is 1.
Let c be the frequency at which |L(jc)| = 1, and = L(jc)
(typically less than zero), then
P M = 180 +

Typical stable system needs both GM > 0 and P M > 0
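Both margins can be read off a numerical frequency sweep; a sketch in Python with an assumed loop L(s) = 4/(s+1)³ (not from the notes):

```python
import cmath
import math

# Assumed example loop for illustration
def L(s):
    return 4.0 / (s + 1.0) ** 3

# Logarithmic frequency grid, 1e-2 to 1e2 rad/sec
ws = [10.0 ** (k / 2000.0) for k in range(-4000, 4001)]

# Phase crossover: angle(L) = -180 deg  ->  gain margin
w180 = min(ws, key=lambda w: abs(math.degrees(cmath.phase(L(1j * w))) + 180.0))
GM_db = -20.0 * math.log10(abs(L(1j * w180)))   # ~6 dB here

# Gain crossover: |L| = 1  ->  phase margin
wc = min(ws, key=lambda w: abs(abs(L(1j * w)) - 1.0))
PM = 180.0 + math.degrees(cmath.phase(L(1j * wc)))  # ~27 deg here
```

For this loop the phase crossover is at ω = √3 with |L| = 0.5, so GM ≈ 6 dB, and the gain crossover gives PM ≈ 27°: both positive, consistent with a stable closed loop.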

[Polar plot: unit circle with the critical point −1, the loop L(jω), the gain margin marked as 1/GM, and a positive phase margin angle]

Image by MIT OpenCourseWare.

Fig. 5: Gain and Phase Margin for stable system in a polar plot


[Polar plots: a stable system showing positive gain margin (crossing at 1/GM inside the unit circle) and positive phase margin, and an unstable system showing negative gain and phase margins]

Image by MIT OpenCourseWare.

Fig. 6: Gain and Phase Margin in Polar plots

[Bode plots: stable system with positive gain margin at the −180° phase crossover and positive phase margin at the gain crossover ωc; unstable system with negative gain and phase margins]

Image by MIT OpenCourseWare.

Fig. 7: Gain and Phase Margin in Bode plots

Can often predict closed-loop stability by looking at the GM and PM


So the test for neutral stability is whether, at some frequency, the plot of L(jω) in the complex plane passes through the critical point s = −1

[Polar plot: GcG(jω) passing through −1]

Fig. 8: Polar plot of a neutrally stable case

This is good intuition, but we need to be careful because the previous statements are only valid if we assume that:
Increasing gain leads to instability
|L(jω)| = 1 at only 1 frequency
which are reasonable assumptions, but not always valid.

In particular, if L(s) is unstable, this prediction is a little more complicated, and it can be hard to do in a Bode diagram - need a more precise test.

A more precise version must not only consider whether L(s) passes through −1, but how many times it encircles it.
In the process, we must take into account the stability of L(s)


Nyquist Stability

Key piece: an encirclement is an accumulation of ±360° of phase by a vector (tail at s0) as the tip traverses the contour c; then c encircles s0

(Figure: a closed contour c in the complex plane encircling a point s0. Image by MIT OpenCourseWare.)

We are interested in the plot of L(s) for a very specific set of values of s, called the Nyquist Path.

(Figure: the Nyquist path C2, which runs up the imaginary axis and closes around the right half-plane with a semicircle of radius R → ∞. Image by MIT OpenCourseWare.)

Case shown assumes that L(s) has no imaginary-axis poles, which is where much of the complexity of plotting occurs.
Also note that if lim_{s→∞} L(s) = 0, then much of the plot of L(s) for values of s on the Nyquist Path is at the origin.

Nyquist Diagram: plot of L(s) as s moves around the Nyquist path C2


Steps:
Construct Nyquist Path for particular L(s)
Draw Nyquist Diagram
Count # of encirclements of the critical point −1
Why do we care about the # of encirclements?
Turns out (see appendix) that if L(s) has any poles in the RHP, then the Nyquist diagram/plot must encircle the critical point −1 for the closed-loop system to be stable.
It is our job to ensure that we have enough encirclements; how many do we need?

Nyquist Stability Theorem:

P = # poles of L(s) = G(s)Gc(s) in the RHP
Z = # closed-loop poles in the RHP
N = # clockwise encirclements of the Nyquist Diagram about the critical point −1.
Can show that Z = N + P
So for the closed-loop system to be stable (i.e., no closed-loop poles in the RHP), need
Z = 0  ⇒  N = −P
Note that since P ≥ 0, we would expect CCW encirclements
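The encirclement count N can be checked numerically: shift the plot by +1 so the critical point moves to the origin, accumulate the wrapped phase of L(jω) + 1 along the jω-axis, and round to an integer winding number. A minimal sketch; both example loops are assumptions chosen for illustration, not systems from the notes:

```python
import cmath

def winding_number(points):
    # net CCW turns of a closed curve (list of complex points) about the
    # origin, computed by summing wrapped phase increments
    total = 0.0
    for a, b in zip(points, points[1:]):
        total += cmath.phase(b / a)   # wrapped angle increment in (-pi, pi]
    return round(total / (2 * cmath.pi))

def cw_encirclements_of_minus1(L, n=100001, wmax=1000.0):
    # traverse the jw-axis part of the Nyquist path from -wmax to +wmax;
    # assumes L -> 0 at infinity, so the large semicircle adds nothing
    ws = [-wmax + 2 * wmax * k / (n - 1) for k in range(n)]
    pts = [L(1j * w) + 1 for w in ws]   # shift so -1 maps to the origin
    return -winding_number(pts)         # CW count = minus the CCW count

# Stable L with P = 0: L = 1/(s+1)^2, so we need N = 0
N1 = cw_encirclements_of_minus1(lambda s: 1 / (s + 1) ** 2)

# Unstable L with P = 1: L = 2/(s-1), so we need N = -1 (one CCW)
N2 = cw_encirclements_of_minus1(lambda s: 2 / (s - 1))

print(N1, N2)
```

For the first loop N = 0 = −P, and for the second N = −1 = −P, so Z = N + P = 0 in both cases: both closed-loop systems are stable, even though the second open-loop L(s) is not.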


The whole issue with the Nyquist test boils down to developing a robust way to make accurate plots and count N.
Good approach to find the # of crossings from a point s0 is:
Draw a line from s0
Count # of times that line and the Nyquist plot cross
N = #CW crossings − #CCW crossings

(Figure: Nyquist plot of G(s) with a ray drawn from the critical point −1 to count crossings. Image by MIT OpenCourseWare.)

Observation: If the stability of the system is unclear from the Bode diagram, then always revert to the Nyquist plot.


FR: Summary

Bode diagrams are easy to draw

Will see that control design is relatively straightforward as well
Can be a bit complicated to determine stability, but this is a relatively minor problem and it is easily handled using Nyquist plots
Usually only necessary to do one of Bode/Root Locus analysis, but they do provide different perspectives, so I tend to look at both in sisotool.

Nyquist test gives us the desired frequency domain stability test

Corresponds to a test on the number of encirclements of the critical point
For most systems that can be interpreted as needing the GM > 0 and PM > 0
Typically design to GM ≥ 6 dB and 30° ≤ PM ≤ 60°
Introduced S(s) as a basic measure of system robustness.



MIT OpenCourseWare
http://ocw.mit.edu

16.30 / 16.31 Feedback Control Systems


Fall 2010

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
Topic #4

16.30/31 Feedback Control Systems

Control Design using Bode Plots

Performance Issues

Synthesis
Lead/Lag examples

Bode's Gain Phase Relationship

Control synthesis by classical means would be very hard if we had to consider both the magnitude and phase plots of the loop, but that is not the case.

Theorem: For any stable, minimum phase system with transfer function G(s), ∠G(jω) is uniquely related to the slope of |G(jω)|.
Relationship is that, on a log-log plot, if the slope of the magnitude plot is constant over a decade in frequency, with slope n, then
∠G(jω) ≅ 90n°

So in the crossover region, where L(jω) ≈ 1, if the magnitude plot is (locally):
Slope 0: no crossover possible
Slope -1: about 90° PM
Slope -2: PM very small

Basic rule of classical control design:

Select Gc(s) so that the LTF crosses over with a slope of -1.
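The gain-phase relationship is easy to spot-check numerically, here on G(s) = 1/(s(s+1)) (the plant used again in the design example later): well below ω = 1 the local log-log slope is −1 and the phase is near −90°, well above it the slope is −2 and the phase is near −180°, matching ∠G ≅ 90n°:

```python
import cmath
import math

def G(w):
    s = 1j * w
    return 1 / (s * (s + 1))

def loglog_slope(w):
    # local slope n of |G| on a log-log plot, via a centered difference
    h = 1e-3
    num = math.log10(abs(G(w * (1 + h)))) - math.log10(abs(G(w * (1 - h))))
    den = math.log10(w * (1 + h)) - math.log10(w * (1 - h))
    return num / den

def phase_deg(w):
    return math.degrees(cmath.phase(G(w)))

n_low, ph_low = loglog_slope(0.01), phase_deg(0.01)      # slope -1 region
n_high, ph_high = loglog_slope(100.0), phase_deg(100.0)  # slope -2 region
print(n_low, ph_low, n_high, ph_high)
```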

September 19, 2010

Performance Issues
Step error response
ess = 1 / (1 + Gc(0)Gp(0))
and we can determine Gc(0)Gp(0) from the low frequency Bode plot for a type 0 system.
For a type 1 system, the DC gain is infinite, so instead define
Kv = lim_{s→0} s Gc(s)Gp(s)  ⇒  ess = 1/Kv (for a ramp input)

So can easily determine this from the low frequency slope of the Bode plot.
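A quick numerical check of the error constant; the type-1 loop below is an illustrative assumption, not a system from the notes:

```python
# For the type-1 loop L(s) = 10/(s(s+1)) (an assumed example),
# Kv = lim_{s->0} s L(s) = 10, so the steady-state ramp-tracking error
# is ess = 1/Kv = 0.1.
def L(s):
    return 10.0 / (s * (s + 1))

s_small = 1e-9
Kv = abs(s_small * L(s_small))   # s*L(s) evaluated as s -> 0
ess = 1.0 / Kv
print(Kv, ess)
```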


Performance Issues II
Classic question: how much phase margin do we need? Time response of a second order system gives:
1. Closed-loop pole damping ratio ζ ≈ PM/100, PM < 70°

2. Closed-loop resonant peak Mr = 1/(2ζ√(1 − ζ²)) ≈ 1/(2 sin(PM/2)), near
ωr = ωn√(1 − 2ζ²)

3. Closed-loop bandwidth
ωBW = ωn √(1 − 2ζ² + √(2 − 4ζ² + 4ζ⁴))
and ωc = ωn √(√(1 + 4ζ⁴) − 2ζ²)

(Figure: closed-loop amplitude ratio |y|/|r| in dB vs ω, showing the resonant peak Mr, the crossover ωc, and the bandwidth ωBW at the -3 dB point. Image by MIT OpenCourseWare.)

Fig. 1: Frequency domain performance specifications.

So typically specify ωc, PM, and error constant as design goals


Fig. 2: Crossover frequency for second order system

figure(1)
z=[0:.01:1]'; zs=[0:.01:1/sqrt(2)-eps]';
wbw=sqrt(1-2*z.^2+sqrt(2-4*z.^2+4*z.^4));
wc=sqrt(sqrt(1+4*z.^4)-2*z.^2);
wr=sqrt(1-2*zs.^2);
set(gcf,'DefaultLineLineWidth',2)
plot(z,wbw,z,wc,zs,wr)
legend('\omega_{BW}/\omega_n','\omega_c/\omega_n','\omega_r/\omega_n')
xlabel('\zeta');ylabel('\omega')
print -dpng -r300 fresp.png

Other rules of thumb come from approximating the system as having a 2nd order dominant response:
10-90% rise time tr = (1 + 1.1ζ + 1.4ζ²)/ωn
Settling time (5%) ts = 3/(ζωn)
Time to peak amplitude tp = π/(ωn√(1 − ζ²))
Peak overshoot Mp = e^(−ζωn tp)
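These rules of thumb can be verified by simulating the standard second-order system directly. A minimal RK4 sketch; the values ωn = 2, ζ = 0.5 are illustrative choices, not from the notes:

```python
import math

# Simulate xdd + 2*zeta*wn*xd + wn^2*x = wn^2 * u for a unit step u,
# then compare the measured peak against the rules of thumb above.
wn, zeta = 2.0, 0.5

def f(x, v):
    return v, wn ** 2 * (1.0 - x) - 2 * zeta * wn * v

x, v, t, dt = 0.0, 0.0, 0.0, 1e-3
peak, t_peak = 0.0, 0.0
while t < 10.0:
    # classical 4th-order Runge-Kutta step
    k1x, k1v = f(x, v)
    k2x, k2v = f(x + 0.5 * dt * k1x, v + 0.5 * dt * k1v)
    k3x, k3v = f(x + 0.5 * dt * k2x, v + 0.5 * dt * k2v)
    k4x, k4v = f(x + dt * k3x, v + dt * k3v)
    x += dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6
    v += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
    t += dt
    if x > peak:
        peak, t_peak = x, t

tp_rule = math.pi / (wn * math.sqrt(1 - zeta ** 2))   # time to peak
Mp_rule = math.exp(-zeta * wn * tp_rule)              # peak overshoot
print(t_peak, tp_rule, peak - 1.0, Mp_rule)
```

The simulated peak time and overshoot land on the formulas (tp ≈ 1.81 s, Mp ≈ 0.163 for these values), as expected since the rules are exact for a pure second-order step response.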


Frequency Domain Design

Looked at the building block
Gc(s) = Kc (s + z)/(s + p)
Question: how to choose Gc(s) to modify the LTF L(s) = Gc(s)Gp(s) to get the desired bandwidth, phase margin, and error constants?
Lead Controller (|z| < |p|)
Zero at a lower frequency than the pole
Gain increases with frequency (slope +1)
Phase positive (i.e., this adds phase lead)

(Figure: lead compensator Bode plot; the magnitude rises by 20 log|p/z| between z and p, and the phase is positive, peaking between z and p. Image by MIT OpenCourseWare.)

Fig. 3: Lead: frequency domain.


Lead Mechanics
Maximum phase added
sin φmax = (1 − α)/(1 + α)
where α = |z|/|p|, which implies that
α = (1 − sin φmax)/(1 + sin φmax)

Frequency of the maximum phase addition is ωmax = √(|z| |p|)
Usually try to place this near ωc
High frequency gain increase is by 1/α

So there is a compromise between wanting to add a large amount of phase (α small) and the tendency to generate large gains at high frequency
So try to keep 1/α ≤ 10, which means |p| ≤ 10|z| and φmax ≤ 60°
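The lead mechanics above can be sketched numerically; the numbers below (φmax = 40° placed at ωc = 10 rad/s) anticipate the design example that follows:

```python
import cmath
import math

# For a desired phi_max = 40 deg placed at wc = 10 rad/s, compute
# alpha, z, p and confirm the lead really adds 40 deg at wc.
phi_max = math.radians(40.0)
alpha = (1 - math.sin(phi_max)) / (1 + math.sin(phi_max))   # = |z|/|p|
wc = 10.0
z = wc * math.sqrt(alpha)   # since wmax = sqrt(z*p) and z = alpha*p
p = z / alpha

# phase of (jw + z)/(jw + p) at w = wc
lead_phase = math.degrees(cmath.phase((1j * wc + z) / (1j * wc + p)))
print(alpha, z, p, lead_phase)
```

This reproduces z ≈ 4.7, p ≈ 21.4, and the lead contributes exactly the requested 40° at the crossover, by construction of ωmax = √(zp).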

If more phase lead is needed, use multiple lead controllers
Gc(s) = kc (s + z1)(s + z2) / ((s + p1)(s + p2))

Selection of the overall gain is problem specific.
One approach is to force the desired crossover frequency so that |L(jωc)| = 1

Use Lead to add phase, which increases the PM and improves the transient response.


Lead Mechanics II
Adding a lead to the LTF changes both the magnitude and phase, so it is difficult to predict the new crossover point (which is where we should be adding the extra phase).

(Figure: Bode magnitude and phase plots showing how the lead (zero at z, pole at p) shifts the crossover and increases the phase margin from the uncompensated to the compensated value. Image by MIT OpenCourseWare.)

Fig. 4: Lead example

The process is slightly simpler if we target the lead compensator design at a particular desired ωc
1. Find the φmax required
Note that φrequired = PM − (180° + ∠G(jωc))
2. Put φmax at the crossover frequency, so that
ωc² = |p| |z|
3. Select Kc so that crossover is at ωc


Design Example
Design a compensator for the system G(s) = 1/(s(s+1))

Want ωc = 10 rad/sec and PM ≥ 40°

Note that at 10 rad/sec, the slope of |G| is -2, corresponding to a plant phase of approximately -180°
So need to add a lead compensator, which adds a slope of +1 (locally), changing the LTF slope to -1, and thus increasing the phase margin

(Figure: log-log magnitude sketch; |G| has slope -2 near ω = 10, and with the lead zero z and pole p bracketing crossover, the compensated |GGc| crosses |.| = 1 at ωc = 10 with slope -1. Image by MIT OpenCourseWare.)

Fig. 5: Lead example

Design steps:
∠G(jωc) ≈ -180°, PM = 40°
z/p = (1 − sin φm)/(1 + sin φm), φm = 40°  ⇒  z/p ≈ 0.22
ωc² = z p = 10²  ⇒  z = 4.7, p = 21.4

Pick kc so that |GcG(jωc)| = 1


Fig. 6: Lead example


Code: Lead Example

%
% Lead Example 2009
%
close all
set(0,'DefaultLineLineWidth',2)
set(0,'DefaultlineMarkerSize',10)
set(0,'DefaultlineMarkerFace','b')
set(0, 'DefaultAxesFontSize', 14, 'DefaultAxesFontWeight','demi')
set(0, 'DefaultTextFontSize', 14, 'DefaultTextFontWeight','demi')
% g=1/s/(s+1)
figure(1);clf;
wc=10;
PM=40*pi/180;

G=tf(1,conv([1 0],[1 1]));
phi_G=phase(evalfr(G,j*wc))*180/pi;
phi_m=PM-(180+phi_G)*pi/180;
zdp=(1-sin(phi_m))/(1+sin(phi_m));
z=sqrt(wc^2*zdp);
p=z/zdp;
Gc=tf([1 z],[1 p]);
k_c=1/abs(evalfr(G*Gc,j*wc));
Gc=Gc*k_c;
w=logspace(-2,2,300);
f_G=freqresp(G,w);f_Gc=freqresp(Gc,w);f_L=freqresp(G*Gc,w);
f_G=squeeze(f_G);f_Gc=squeeze(f_Gc);f_L=squeeze(f_L);
loglog(w,abs(f_G),w,abs(f_Gc),w,abs(f_L))
xlabel('Freq');ylabel('Magnitude');grid on
legend('Plant G','Comp G_c','LTF L','Location','SouthWest')
print -dpng -r300 lead_examp2.png


Lag Mechanics

If pole at a lower frequency than zero:
Gain decreases at high frequency; typically scale the lag up so that the HF gain is 1, and thus the low frequency gain is higher.
Add negative phase (i.e., adds lag)

(Figure: lag compensator Bode plot; the magnitude drops by 20 log|p/z| between p and z, with negative phase in between. Image by MIT OpenCourseWare.)

Fig. 7: Lag: frequency domain, Glag = kc (s/z + 1)/(s/p + 1)

Typically use a lag to add 20 log α to the low frequency gain with (hopefully) a small impact to the PM (at crossover)
Pick the desired gain reduction at high frequency 20 log(1/α), where α = |z|/|p|
Pick |Glag|_{s=0} = kc to give the desired low frequency gain increase for the LTF (shift the magnitude plot up)
Heuristic: want to limit the frequency of the zero (and pole) so that there is a minimal impact of the phase lag at ωc: z = ωc/10
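A minimal numerical sketch of this heuristic, using the values from the lag example below (ωc = 1, z = ωc/10, p = z/10, scaled so the HF gain is 1):

```python
import cmath
import math

wc = 1.0
z, p = wc / 10, wc / 100

def Glag(s):
    # (z/p)(s/z + 1)/(s/p + 1): DC gain z/p = 10, HF gain -> 1
    return (z / p) * (s / z + 1) / (s / p + 1)

dc_gain = abs(Glag(0))                               # low-frequency boost
gain_at_wc = abs(Glag(1j * wc))                      # ~1, crossover untouched
lag_at_wc = math.degrees(cmath.phase(Glag(1j * wc))) # small phase penalty
print(dc_gain, gain_at_wc, lag_at_wc)
```

With the zero a decade below crossover, the lag buys 20 dB of low-frequency gain while costing only about 5° of phase at ωc, which is the point of the heuristic.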

Lag Compensation

Assumption is that we need to modify (increase) the DC gain of the LTF to reduce the tracking error.
Two ways to get desired low frequency gain:
Using just a gain increase, which increases ωc and decreases PM
Add lag dynamics that increase the gain at low frequency and leave the gain near and above ωc unchanged

(Figure: Bode plots comparing |KcGp| (pure gain increase, with shifted crossover) against |GcGp| (lag); the lag raises the low-frequency gain by 20 log|z/p| while keeping the original ωc and a larger compensated phase margin. Image by MIT OpenCourseWare.)


Lag Example

Lag example with G(s) = 3/((s + 1)²(s/2 + 1)): on the right Gc(s) = 1, on the left Gc(s) = (s + 0.1)/(s + 0.01), which should reduce the steady state step error from 0.25 to 0.032
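The quoted steady-state errors can be checked by evaluating the loop at s = 0 (plant and compensator taken from the example above):

```python
# ess = 1/(1 + L(0)) for a unit step, with L = Gc*G
def G(s):
    return 3.0 / ((s + 1) ** 2 * (s / 2 + 1))

def Gc(s):
    return (s + 0.1) / (s + 0.01)   # DC gain 10

ess_unity = 1.0 / (1.0 + G(0))           # Gc = 1: 1/(1+3) = 0.25
ess_lag = 1.0 / (1.0 + Gc(0) * G(0))     # with the lag: 1/(1+30) = 0.032
print(ess_unity, ess_lag)
```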

Code: Lag Example

%
% Lag Example 2009
%
close all
set(0,'DefaultLineLineWidth',2)
set(0,'DefaultlineMarkerSize',10)
set(0,'DefaultlineMarkerFace','b')
set(0, 'DefaultAxesFontSize', 14, 'DefaultAxesFontWeight','demi')
set(0, 'DefaultTextFontSize', 14, 'DefaultTextFontWeight','demi')

G=tf([3],conv(conv([1/1 1],[1/2 1]),[1 1]));
wc=1;zl=wc/10; pl=zl/10;
close all;

Gc=zl/pl*tf([1/(zl) 1],[1/(pl) 1]);

figure(1);clf;
set(gcf,'DefaultLineLineWidth',2)
L=Gc*G;Gcl=(L)/(1+L);
[y,t]=step(Gcl,20);plot(t,y);grid on; axis([0 20 0 1.3])
hold on;plot([min(t) max(t)],[y(end) y(end)],'r');
xlabel('Time (sec)');ylabel('y')
ess=1/(1+evalfr(L,0*j))
print -dpng -r300 lag_examp11.png

w=logspace(-2,2,300)';
figure(2);clf;
set(gcf,'DefaultLineLineWidth',2)
[mag,ph]=bode(L,w);grid on;
subplot(211);loglog(w,squeeze(mag));grid on;
axis([1e-2 1e2 .001 100])
xlabel('Freq');ylabel('Mag')
subplot(212);semilogx(w,squeeze(ph));
xlabel('Freq');ylabel('Phase');grid on
print -dpng -r300 lag_examp12.png

figure(3);rlocus(L);rr=rlocus(L,1);hold on;
plot(rr+j*eps,'rs','MarkerSize',12,'MarkerFace','r');hold off
print -dpng -r300 lag_examp13.png

Gc=1;
Gc2=2;

figure(4); set(gcf,'DefaultLineLineWidth',2)
L=Gc*G;Gcl=(L)/(1+L);L2=Gc2*G;Gcl2=(L2)/(1+L2);
[y,t]=step(Gcl,20);
[y2,t]=step(Gcl2,t);
hold on;
plot(t,y,'b',t,y2,'m.');grid on; axis([0 20 0 1.2])
plot([min(t) max(t)],[y(end) y(end)],'r');
plot([min(t) max(t)],[y2(end) y2(end)],'g');
xlabel('Time (sec)');ylabel('y')
ess=1/(1+evalfr(L,0*j))
hold off
print -dpng -r300 lag_examp14.png

w=logspace(-2,2,300)';
figure(5);clf;set(gcf,'DefaultLineLineWidth',2)
[mag,ph]=bode(L,w);
[mag2,ph2]=bode(L2,w);
subplot(211);loglog(w,squeeze(mag),'b',w,squeeze(mag2),'m.');grid on;
axis([1e-2 1e2 .001 100])
xlabel('Freq');ylabel('Mag')
subplot(212);semilogx(w,squeeze(ph),'b',w,squeeze(ph2),'m.');
xlabel('Freq');ylabel('Phase');grid on
print -dpng -r300 lag_examp15.png

figure(6);rlocus(L);rr=rlocus(L,Gc);rr2=rlocus(L,Gc2);hold on;
plot(rr+j*eps,'rs','MarkerSize',12,'MarkerFace','r');
plot(rr2+j*eps,'md','MarkerSize',12,'MarkerFace','m');hold off
print -dpng -r300 lag_examp16.png


Summary

The design summary online is old, but gives an excellent summary of the steps involved.
Typically find that this process is highly iterative because the final performance doesn't match the specifications (second order assumptions)
Practice is the only way to absorb these approaches.



Topic #5

16.30/31 Feedback Control Systems

State-Space Systems
What are state-space models?
Why should we use them?
How are they related to the transfer functions used in classical control design and how do we develop a state-space model?
What are the basic properties of a state-space model, and
how do we analyze these?

SS Introduction

State space model: a representation of the dynamics of an Nth order system as a first order differential equation in an N-vector, which is called the state.
Convert the Nth order differential equation that governs the dynamics into N first-order differential equations

Classic example: second order mass-spring system
m p̈ + c ṗ + k p = F
Let x1 = p, then x2 = ṗ = ẋ1, and
ẋ2 = p̈ = (F − c ṗ − k p)/m = (F − c x2 − k x1)/m

Let u = F and introduce the state
x = [x1; x2] = [p; ṗ]  ⇒  ẋ = Ax + Bu
with
A = [ 0      1   ]     B = [ 0   ]
    [ −k/m  −c/m ]         [ 1/m ]
If the measured output of the system is the position, then we have that
y = p = [1 0] [p; ṗ] = [1 0] x = Cx
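The construction above can be sketched numerically; the m, c, k values are illustrative. The 2×2 A matrix must reproduce the characteristic polynomial of m p̈ + c ṗ + k p = F, namely s² + (c/m)s + (k/m), through its trace and determinant:

```python
# Build (A, B, C) for the mass-spring system and check its characteristic
# polynomial: for a 2x2 matrix it is s^2 - trace(A) s + det(A).
m, c, k = 2.0, 0.4, 8.0   # illustrative values

A = [[0.0, 1.0],
     [-k / m, -c / m]]
B = [0.0, 1.0 / m]
C = [1.0, 0.0]

trace_A = A[0][0] + A[1][1]
det_A = A[0][0] * A[1][1] - A[0][1] * A[1][0]
print(-trace_A, det_A)   # should equal c/m and k/m
```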

September 21, 2010



Most general continuous-time linear dynamical system has form
ẋ(t) = A(t)x(t) + B(t)u(t)
y(t) = C(t)x(t) + D(t)u(t)
where:
t ∈ R denotes time
x(t) ∈ R^n is the state (vector)
u(t) ∈ R^m is the input or control
y(t) ∈ R^p is the output

A(t) ∈ R^(n×n) is the dynamics matrix
B(t) ∈ R^(n×m) is the input matrix
C(t) ∈ R^(p×n) is the output or sensor matrix
D(t) ∈ R^(p×m) is the feedthrough matrix

Note that the plant dynamics can be time-varying.
Also note that this is a multi-input / multi-output (MIMO) system.

We will typically deal with the time-invariant case
Linear Time-Invariant (LTI) state dynamics
ẋ(t) = Ax(t) + Bu(t)
y(t) = Cx(t) + Du(t)
so that now A, B, C, D are constant and do not depend on t.


Basic Definitions

Linearity: What is a linear dynamical system? A system G is linear with respect to its inputs and output
u(t) → G(s) → y(t)
iff superposition holds:
G(α1u1 + α2u2) = α1Gu1 + α2Gu2
So if y1 is the response of G to u1 (y1 = Gu1), and y2 is the response of G to u2 (y2 = Gu2), then the response to α1u1 + α2u2 is α1y1 + α2y2
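Superposition is easy to test numerically. The two discrete-time systems below are illustrative assumptions (a first-order LTI filter and a memoryless square), not systems from the notes:

```python
# Check G(a1*u1 + a2*u2) == a1*G(u1) + a2*G(u2) on sample inputs.
def sim_linear(u):
    # y[k] = 0.5 y[k-1] + u[k], zero initial state (linear)
    y, out = 0.0, []
    for uk in u:
        y = 0.5 * y + uk
        out.append(y)
    return out

def sim_square(u):
    # y[k] = u[k]^2 (not linear)
    return [uk ** 2 for uk in u]

u1 = [1.0, 0.0, -2.0, 3.0]
u2 = [0.5, 1.0, 1.0, -1.0]
a1, a2 = 2.0, -3.0
u_mix = [a1 * x + a2 * y for x, y in zip(u1, u2)]

def superposes(sim):
    lhs = sim(u_mix)
    rhs = [a1 * x + a2 * y for x, y in zip(sim(u1), sim(u2))]
    return all(abs(p - q) < 1e-12 for p, q in zip(lhs, rhs))

print(superposes(sim_linear), superposes(sim_square))
```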

A system is said to be time-invariant if the relationship between the input and output is independent of time. So if the response to u(t) is y(t), then the response to u(t − t0) is y(t − t0)

Example: the system
ẋ(t) = 3x(t) + u(t)
y(t) = x(t)
is LTI, but
ẋ(t) = 3t x(t) + u(t)
y(t) = x(t)
is not.
The A matrix of the second system is a function of absolute time, so the response to u(t) will differ from the response to u(t − 1).


x(t) is called the state of the system at t because:
Future output depends only on current state and future input
Future output depends on past input only through current state
State summarizes effect of past inputs on future output, like the memory of the system

Example: Rechargeable flashlight: the state is the current state of charge of the battery. If you know that state, then you do not need to know how that level of charge was achieved (assuming a perfect battery) to predict the future performance of the flashlight.
But to consider all nonlinear effects, you might also need to know how many cycles the battery has gone through
Key point is that you might expect a given linear model to accurately model the charge depletion behavior for a given number of cycles, but that model would typically change with the number of cycles


Creating State-Space Models

Most easily created from Nth order differential equations that describe the dynamics
This was the case done before.
Only issue is which set of states to use; there are many choices.

Can be developed from transfer function model as well.
Much more on this later

Problem is that we have restricted ourselves here to linear state space models, and almost all systems are nonlinear in real-life.
Can develop linear models from nonlinear system dynamics


Equilibrium Points

Often have a nonlinear set of dynamics given by
ẋ = f(x, u)
where x is once again the state vector, u is the vector of inputs, and f(·, ·) is a nonlinear vector function that describes the dynamics
First step is to define the point about which the linearization will be performed.
Typically about equilibrium points: a point for which, if the system starts there, it will remain there for all future time.

Characterized by setting the state derivative to zero:
ẋ = f(x, u) = 0
Result is an algebraic set of equations that must be solved for both xe and ue
Note that ẋe = 0 and u̇e = 0 by definition
Typically think of these nominal conditions xe, ue as set points or operating points for the nonlinear system.

Example, pendulum dynamics: θ̈ + r θ̇ + (g/l) sin θ = 0 can be written in state space form (x1 = θ, x2 = θ̇) as
ẋ1 = x2
ẋ2 = −r x2 − (g/l) sin x1
Setting f(x, u) = 0 yields x2 = 0 and x2 = −(g/(r l)) sin x1, which together imply that x1 = θ = {0, π}


Linearization

Typically assume that the system is operating about some nominal state solution xe (possibly requires a nominal input ue)
Then write the actual state as x(t) = xe + δx(t)
and the actual inputs as u(t) = ue + δu(t)

The δ is included to denote the fact that we expect the variations about the nominal to be small

Can then develop the linearized equations by using the Taylor series expansion of f(·, ·) about xe and ue.

Recall the vector equation ẋ = f(x, u), each equation of which
ẋi = fi(x, u)
can be expanded as
d/dt (xei + δxi) = fi(xe + δx, ue + δu)
                 ≈ fi(xe, ue) + ∂fi/∂x|₀ δx + ∂fi/∂u|₀ δu
where
∂fi/∂x = [∂fi/∂x1 · · · ∂fi/∂xn]
and |₀ means that we should evaluate the function at the nominal values of xe and ue.

The meaning of small deviations is now clear: the variations in δx and δu must be small enough that we can ignore the higher order terms in the Taylor expansion of f(x, u).


Since (d/dt) xei = fi(xe, ue), we thus have that
d/dt (δxi) ≈ ∂fi/∂x|₀ δx + ∂fi/∂u|₀ δu

Combining for all n state equations gives (note that we now set ≈ to =)
d/dt δx = A(t) δx + B(t) δu
where A(t) and B(t) are the Jacobian matrices evaluated at the nominal condition:

A(t) = [ ∂f1/∂x1  ∂f1/∂x2  · · ·  ∂f1/∂xn ]
       [ ∂f2/∂x1  ∂f2/∂x2  · · ·  ∂f2/∂xn ]
       [   ...                            ]
       [ ∂fn/∂x1  ∂fn/∂x2  · · ·  ∂fn/∂xn ]₀

B(t) = [ ∂f1/∂u1  ∂f1/∂u2  · · ·  ∂f1/∂um ]
       [ ∂f2/∂u1  ∂f2/∂u2  · · ·  ∂f2/∂um ]
       [   ...                            ]
       [ ∂fn/∂u1  ∂fn/∂u2  · · ·  ∂fn/∂um ]₀


Similarly, if the nonlinear measurement equation is y = g(x, u) and y(t) = ye + δy, then

δy = [ ∂g1/∂x|₀ ]        [ ∂g1/∂u|₀ ]
     [   ...    ] δx  +  [   ...    ] δu
     [ ∂gp/∂x|₀ ]        [ ∂gp/∂u|₀ ]
   = C(t) δx + D(t) δu

Typically drop the δ as it is rather cumbersome, and (abusing notation) we write the state equations as:
ẋ(t) = A(t)x(t) + B(t)u(t)
y(t) = C(t)x(t) + D(t)u(t)
which is of the same form as the previous linear models

If the system is operating around just one set point, then the partial derivatives in the expressions for A-D are all constant, giving an LTI linearized model.
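Since the entries of A and B are just partial derivatives evaluated at the nominal condition, they can also be approximated numerically with finite differences. A sketch applied to the pendulum example from earlier; the r and g/l values are illustrative assumptions:

```python
import math

# Pendulum: x1 = theta, x2 = thetadot; xdot = f(x)
g_over_l, r = 9.81, 0.1

def f(x):
    return [x[1], -r * x[1] - g_over_l * math.sin(x[0])]

def jacobian(f, xe, eps=1e-6):
    # central finite-difference approximation of A = df/dx at xe
    n = len(xe)
    A = [[0.0] * n for _ in range(n)]
    for j in range(n):
        xp = list(xe); xp[j] += eps
        xm = list(xe); xm[j] -= eps
        fp, fm = f(xp), f(xm)
        for i in range(n):
            A[i][j] = (fp[i] - fm[i]) / (2 * eps)
    return A

A_down = jacobian(f, [0.0, 0.0])       # hanging equilibrium theta = 0
A_up = jacobian(f, [math.pi, 0.0])     # inverted equilibrium theta = pi
print(A_down, A_up)
```

At θ = 0 the (2,1) entry is −g/l (stable oscillation), while at θ = π it flips sign to +g/l (the inverted pendulum is unstable), matching the analytic linearization.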


Stability of LTI Systems

Consider a solution xs(t) to a differential equation for a given initial condition xs(t0).
Solution is stable if other solutions xb(t0) that start near xs(t0) stay close to xs(t) ∀t: stable in the sense of Lyapunov (SSL).
If other solutions are SSL, but the xb(t) do not converge to xs(t), the solution is neutrally stable.
If other solutions are SSL and xb(t) → xs(t) as t → ∞, the solution is asymptotically stable.
A solution xs(t) is unstable if it is not stable.

Note that a linear (autonomous) system ẋ = Ax has an equilibrium point at xe = 0
This equilibrium point is stable if and only if all of the eigenvalues of A satisfy Re λi(A) ≤ 0 and every eigenvalue with Re λi(A) = 0 has a Jordan block of order one.1
Thus the stability test for a linear system is the familiar one of determining if Re λi(A) < 0
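A minimal sketch of this eigenvalue test for a 2×2 A, using the quadratic formula so no linear-algebra library is needed; the example matrices are illustrative:

```python
import cmath

def eigs_2x2(A):
    # eigenvalues of a 2x2 matrix from trace and determinant
    tr = A[0][0] + A[1][1]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

def asymptotically_stable(A):
    # asymptotically stable iff Re(lambda_i) < 0 for all eigenvalues
    return all(lam.real < 0 for lam in eigs_2x2(A))

stable = asymptotically_stable([[0, 1], [-2, -3]])    # eigenvalues -1, -2
unstable = asymptotically_stable([[0, 1], [2, 0]])    # eigenvalues +/- sqrt(2)
print(stable, unstable)
```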

Somewhat surprisingly perhaps, we can also infer stability of the original nonlinear system from the analysis of the linearized system model

1 more on Jordan blocks on 6??, but this basically means that these eigenvalues are not repeated.


Lyapunov's indirect method2: Let xe = 0 be an equilibrium point for the nonlinear autonomous system
ẋ(t) = f(x(t))
where f is continuously differentiable in a neighborhood of xe. Assume
A = ∂f/∂x |_{xe}
Then:
The origin is an asymptotically stable equilibrium point for the nonlinear system if Re λi(A) < 0 ∀i
The origin is unstable if Re λi(A) > 0 for any i

Note that this doesn't say anything about the stability of the nonlinear system if the linear system is neutrally stable.

A very powerful result that is the basis of all linear control theory.

2 Much more on Lyapunov methods later too.


Linearization Example

Example: simple spring. With a mass at the end of a linear spring (rate k) we have the dynamics
m ẍ = −k x
but with a leaf spring as is used on car suspensions, we have a nonlinear spring: the more it deflects, the stiffer it gets. A good model now is
m ẍ = −k1 x − k2 x³
which is a cubic spring.

Fig. 1: Leaf spring from http://en.wikipedia.org/wiki/Image:Leafs1.jpg

Restoring force depends on deflection x in a nonlinear way.

Fig. 2: Response to linear k = 1 and nonlinear (k1 = k, k2 = 2) springs (code at the end)


Consider the nonlinear spring with (set m = 1)
ÿ = −k1 y − k2 y³
gives us the nonlinear model (x1 = y and x2 = ẏ)
d/dt [y; ẏ] = [ẏ; −k1 y − k2 y³]  ⇒  ẋ = f(x)

Find the equilibrium points and then make a state space model

For the equilibrium points, we must solve
f(x) = [ẏ; −k1 y − k2 y³] = 0
which gives
ẏe = 0 and k1 ye + k2 (ye)³ = 0

Second condition corresponds to ye = 0 or ye = ±√(−k1/k2), which is only real if k1 and k2 are opposite signs.

For the state space model,
A = [ ∂f1/∂x1  ∂f1/∂x2 ]    = [      0          1 ]
    [ ∂f2/∂x1  ∂f2/∂x2 ]₀     [ −k1 − 3k2(y)²   0 ]₀

  = [      0           1 ]
    [ −k1 − 3k2(ye)²   0 ]
and the linearized model is δẋ = A δx
September 21, 2010
Fall 2010 16.30/31 515

For the equilibrium point ye = 0, ẏe = 0
A0 = [ 0   1 ]
     [ −k1 0 ]
which are the standard dynamics of a system with just a linear spring of stiffness k1
Stable motion about y = 0 if k1 > 0

Assume that k1 = −1, k2 = 1/2; then we should get an equilibrium point at ẏe = 0, ye = ±√2, and since k1 + 3k2(ye)² = 2, then
A1 = [ 0  1 ]
     [ −2 0 ]
which are the dynamics of a stable oscillator about the equilibrium point

Fig. 3: Nonlinear response (k1 = −1, k2 = 0.5). The figure on the right shows the oscillation about the equilibrium point.
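The predicted oscillation about the equilibrium can be reproduced with a short RK4 simulation of the nonlinear model (same k1 = −1, k2 = 0.5 as above; the perturbation size, step size, and horizon are illustrative choices):

```python
import math

# ydd = -k1*y - k2*y^3, started near the equilibrium ye = sqrt(2):
# the response should stay a small oscillation about ye, as the
# linearized model A1 predicts.
k1, k2 = -1.0, 0.5
ye = math.sqrt(2.0)

def f(y, v):
    return v, -k1 * y - k2 * y ** 3

y, v, dt = ye + 0.1, 0.0, 0.005
ys = []
for _ in range(4000):   # 20 seconds
    k1y, k1v = f(y, v)
    k2y, k2v = f(y + 0.5 * dt * k1y, v + 0.5 * dt * k1v)
    k3y, k3v = f(y + 0.5 * dt * k2y, v + 0.5 * dt * k2v)
    k4y, k4v = f(y + dt * k3y, v + dt * k3v)
    y += dt * (k1y + 2 * k2y + 2 * k3y + k4y) / 6
    v += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
    ys.append(y)

print(min(ys), max(ys))   # small oscillation bracketing ye
```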

Code: Nonlinear System Sim

function test=nlplant(ft);
global k1 k2
x0 = [1 2]/10;

k1=1;k2=0;
[T,x]=ode23('plant', [0 20], x0); % linear

k2=2;
[T1,x1]=ode23('plant', [0 20], x0); % nonlinear

figure(1);clf;
subplot(211)
plot(T,x(:,1),T1,x1(:,1),'--');
legend('Linear','Nonlinear')
ylabel('X','FontSize',ft)
xlabel('Time','FontSize',ft)
subplot(212)
plot(T,x(:,2),T1,x1(:,2),'--');
legend('Linear','Nonlinear')
ylabel('V','FontSize',ft)
xlabel('Time','FontSize',ft)
text(4,0.3,['k_2=',num2str(k2)],'FontSize',ft)
return

% use the following to call the function above
close all
set(0, 'DefaultAxesFontSize', 12, 'DefaultAxesFontWeight','demi')
set(0, 'DefaultTextFontSize', 12, 'DefaultTextFontWeight','demi')
set(0,'DefaultAxesFontName','arial')
set(0,'DefaultAxesFontSize',12)
set(0,'DefaultTextFontName','arial')
set(gcf,'DefaultLineLineWidth',2);
set(gcf,'DefaultlineMarkerSize',10)
global k1 k2
nlplant(14)
print -f1 -dpng -r300 nlplant.png
k1=-1;k2=0.5;
% call plant.m
x0 = [sqrt(-k1/k2) .25];
[T,x]=ode23('plant', [0:.001:32], x0);
figure(1);subplot(211);plot(T,x(:,1));ylabel('y');xlabel('Time');grid
subplot(212);plot(T,x(:,2));ylabel('dy/dt');xlabel('Time');grid
figure(2);plot(x(:,1),x(:,2));grid
hold on;plot(x0(1),0,'rx','MarkerSize',20);hold off;
xlabel('y');ylabel('dy/dt')
axis([1.2 1.7 -.25 .25]);axis('square')

print -f1 -dpng -r300 nlplant2.png
print -f2 -dpng -r300 nlplant3.png

function [xdot] = plant(t,x);
% plant.m
global k1 k2
xdot(1) = x(2);
xdot(2) = -k1*x(1)-k2*(x(1))^3;
xdot = xdot';


Linearization Example: Aircraft Dynamics

The basic dynamics are:
F = m (v̇c)I and T = (Ḣ)I
⇒ (1/m) F = (v̇c)B + ω(BI) × vc   (Transport Thm.)
⇒ T = (Ḣ)B + ω(BI) × H

(Figure: aircraft body axes X, Y, Z with velocity components u, v, w and angular rates p, q, r. Image by MIT OpenCourseWare.)

Basic assumptions are:
1. Earth is an inertial reference frame
2. A/C is a rigid body
3. Body frame B fixed to the aircraft (i, j, k)

Instantaneous mapping of vc and ω(BI) into the body frame:
ω(BI) = P i + Q j + R k   vc = U i + V j + W k
(ω(BI))B = [P; Q; R]   (vc)B = [U; V; W]

If the x and z axes are in the plane of symmetry, can show that Ixy = Iyz = 0, but the value of Ixz depends on the specific body frame selected.
Instantaneous mapping of the angular momentum
H = Hx i + Hy j + Hz k
into the body frame given by

HB = [Hx]   [Ixx  0   Ixz] [P]
     [Hy] = [0   Iyy  0  ] [Q]
     [Hz]   [Ixz  0   Izz] [R]


The overall equations of motion are then:

(1/m) F = (v̇c)B + ω(BI) × vc
⇒ (1/m)[X; Y; Z] = [U̇; V̇; Ẇ] + [0 −R Q; R 0 −P; −Q P 0][U; V; W]
                 = [U̇ + QW − RV; V̇ + RU − PW; Ẇ + PV − QU]

T = (Ḣ)B + ω(BI) × H
⇒ [L; M; N] = [Ixx Ṗ + Ixz Ṙ; Iyy Q̇; Izz Ṙ + Ixz Ṗ] + [0 −R Q; R 0 −P; −Q P 0][Ixx 0 Ixz; 0 Iyy 0; Ixz 0 Izz][P; Q; R]

            = [Ixx Ṗ + Ixz Ṙ + QR(Izz − Iyy) + PQ Ixz;
               Iyy Q̇ + PR(Ixx − Izz) + (R² − P²) Ixz;
               Izz Ṙ + Ixz Ṗ + PQ(Iyy − Ixx) − QR Ixz]

Equations are very nonlinear and complicated, and we have not even said where F and T come from; need to linearize to develop analytic results
Assume that the aircraft is flying in an equilibrium condition and we will linearize the equations about this nominal flight condition.


Linearization
Can linearize about various steady state conditions of flight.
For steady state flight conditions must have
F = Faero + Fgravity + Fthrust = 0 and T = 0
So for equilibrium condition, forces balance on the aircraft
L = W and T = D

Also assume that Ṗ = Q̇ = Ṙ = U̇ = V̇ = Ẇ = 0

Impose additional constraints that depend on flight condition:
Steady wings-level flight ⇒ Φ = Φ̇ = Θ̇ = Ψ̇ = 0

Define the trim angular rates and velocities
(ω(BI))B⁰ = [P₀; Q₀; R₀]   (vc)B⁰ = [U₀; 0; 0]
which are associated with the flight condition. In fact, these define the type of equilibrium motion that we linearize about. Note:
W₀ = 0 since we are using the stability axes, and
V₀ = 0 because we are assuming symmetric flight


Proceed with linearization of the dynamics for various flight conditions

            Nominal        Perturbed     Perturbed
            Velocity       Velocity      Acceleration
Velocities  U₀,            U = U₀ + u    U̇ = u̇
            W₀ = 0,        W = w         Ẇ = ẇ
            V₀ = 0,        V = v         V̇ = v̇

Angular     P₀ = 0,        P = p         Ṗ = ṗ
Rates       Q₀ = 0,        Q = q         Q̇ = q̇
            R₀ = 0,        R = r         Ṙ = ṙ

Angles      Θ₀,            Θ = Θ₀ + θ    Θ̇ = θ̇
            Φ₀ = 0,        Φ = φ         Φ̇ = φ̇
            Ψ₀ = 0,        Ψ = ψ         Ψ̇ = ψ̇

Linearization for symmetric flight
U = U₀ + u, V₀ = W₀ = 0, P₀ = Q₀ = R₀ = 0.

Note that the forces and moments are also perturbed.
(1/m)[X₀ + ΔX] = U̇ + QW − RV ≈ u̇ + qw − rv ≈ u̇
(1/m)[Y₀ + ΔY] = V̇ + RU − PW ≈ v̇ + r(U₀ + u) − pw ≈ v̇ + rU₀
(1/m)[Z₀ + ΔZ] = Ẇ + PV − QU ≈ ẇ + pv − q(U₀ + u) ≈ ẇ − qU₀

⇒ (1/m)[ΔX; ΔY; ΔZ] = [u̇; v̇ + rU₀; ẇ − qU₀]   (1), (2), (3)

Attitude motion:
[L; M; N] = [Ixx Ṗ + Ixz Ṙ + QR(Izz − Iyy) + PQ Ixz;
             Iyy Q̇ + PR(Ixx − Izz) + (R² − P²) Ixz;
             Izz Ṙ + Ixz Ṗ + PQ(Iyy − Ixx) − QR Ixz]

⇒ [ΔL; ΔM; ΔN] = [Ixx ṗ + Ixz ṙ; Iyy q̇; Izz ṙ + Ixz ṗ]   (4), (5), (6)


To understand the equations in detail, and the resulting impact on the vehicle dynamics, we must investigate the terms ΔX . . . ΔN.
We must also address the left-hand side (F, T)
Net forces and moments must be zero in the equilibrium condition.
Aerodynamic and gravity forces are a function of the equilibrium condition AND the perturbations about this equilibrium.

Predict the changes to the aerodynamic forces and moments using a first order expansion in the key flight parameters
ΔX = (∂X/∂U) ΔU + (∂X/∂W) ΔW + (∂X/∂Ẇ) ΔẆ + (∂X/∂Θ) ΔΘ + . . . + ΔXᵍ + ΔXᶜ
   = (∂X/∂U) u + (∂X/∂W) w + (∂X/∂Ẇ) ẇ + (∂X/∂Θ) θ + . . . + ΔXᵍ + ΔXᶜ
(∂X/∂U) is called a stability derivative, evaluated at the eq. condition.
Clearly an approximation since it ignores lags in the aerodynamic forces (assumes that the forces are only a function of the instantaneous values)


Stability Derivatives

First proposed by Bryan (1911); has proven to be a very effective way to analyze the aircraft flight mechanics, well supported by numerous flight test comparisons.
The forces and torques acting on the aircraft are very complex nonlinear functions of the flight equilibrium condition and the perturbations from equilibrium.
Linearized expansion can involve many terms u, u̇, ü, . . . , w, ẇ, ẅ, . . .

Typically only retain a few terms to capture the dominant effects.

Dominant behavior most easily discussed in terms of the:
Symm. variables: U, W, Q & forces/torques: X, Z, and M
Asymm. variables: V, P, R & forces/torques: Y, L, and N

Observation: for truly symmetric flight, Y, L, and N will be exactly zero for any value of U, W, Q
Derivatives of asymmetric forces/torques with respect to the symmetric motion variables are zero.

Further (convenient) assumptions:
1. Derivatives of symmetric forces/torques with respect to the asymmetric motion variables are small and can be neglected.

2. We can neglect derivatives with respect to the derivatives of the motion variables, but keep Zẇ = ∂Z/∂ẇ and Mẇ = ∂M/∂ẇ (aerodynamic lag involved in forming the new pressure distribution on the wing in response to the perturbed angle of attack)

3. ∂X/∂q is negligibly small.


∂()/∂()   X   Y   Z   L   M   N
u         ·   0   ·   0   ·   0
v         0   ·   0   ·   0   ·
w         ·   0   ·   0   ·   0
p         0   ·   0   ·   0   ·
q         0   0   ·   0   ·   0
r         0   ·   0   ·   0   ·

Note that we must also find the perturbation gravity and thrust forces and moments
ΔXᵍ = −mg cos Θ₀ · θ   ΔZᵍ = −mg sin Θ₀ · θ

Aerodynamic summary:
1A: ΔX = (∂X/∂U)₀ u + (∂X/∂W)₀ w  ⇒  ΔX(u, α ≈ w/U₀)
2A: ΔY(β ≈ v/U₀, p, r)
3A: ΔZ(u, α ≈ w/U₀, α̇ ≈ ẇ/U₀, q)
4A: ΔL(β ≈ v/U₀, p, r)
5A: ΔM(u, α ≈ w/U₀, α̇ ≈ ẇ/U₀, q)
6A: ΔN(β ≈ v/U₀, p, r)


Result is that, with these force and torque approximations, equations (1), (3), (5) decouple from (2), (4), (6)

(1), (3), (5) are the longitudinal dynamics in u, w, and q
[ΔX; ΔZ; ΔM] = [m u̇; m(ẇ − qU₀); Iyy q̇]
with
ΔX ≈ (∂X/∂U)₀ u + (∂X/∂W)₀ w + ΔXᵍ + ΔXᶜ
ΔZ ≈ (∂Z/∂U)₀ u + (∂Z/∂W)₀ w + (∂Z/∂Ẇ)₀ ẇ + (∂Z/∂Q)₀ q + ΔZᵍ + ΔZᶜ
ΔM ≈ (∂M/∂U)₀ u + (∂M/∂W)₀ w + (∂M/∂Ẇ)₀ ẇ + (∂M/∂Q)₀ q + ΔMᶜ

(2), (4), (6) are the lateral dynamics in v, p, and r
[ΔY; ΔL; ΔN] = [m(v̇ + rU₀); Ixx ṗ + Ixz ṙ; Izz ṙ + Ixz ṗ]
with
ΔY ≈ (∂Y/∂V)₀ v + (∂Y/∂P)₀ p + (∂Y/∂R)₀ r + ΔYᶜ
ΔL ≈ (∂L/∂V)₀ v + (∂L/∂P)₀ p + (∂L/∂R)₀ r + ΔLᶜ
ΔN ≈ (∂N/∂V)₀ v + (∂N/∂P)₀ p + (∂N/∂R)₀ r + ΔNᶜ



MIT OpenCourseWare
http://ocw.mit.edu

16.30 / 16.31 Feedback Control Systems


Fall 2010

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
16.30
Estimation and Control of Aerospace Systems
Topic 5 addendum: Signals and Systems

Aeronautics and Astronautics

Massachusetts Institute of Technology

Fall 2010

(MIT) Topic 5 addendum: Signals, Systems Fall 2010 1 / 27


Outline

1 Continuous- and discrete-time signals and systems

2 Causality

3 Time Invariance

4 State-space models

5 Linear Systems



Continuous- and discrete-time signals

Continuous-time signal
A (scalar) continuous-time signal is a function that associates to each time
$t \in \mathbb{R}$ a real number $y(t)$, i.e., $y : t \mapsto y(t)$. Note: We will use the
standard (round) parentheses to indicate continuous-time signals.

Discrete-time signal
A (scalar) discrete-time signal is a function that associates to each integer
$k \in \mathbb{Z}$ a real number $y[k]$, i.e., $y : k \mapsto y[k]$. Note: We will use the square
brackets to indicate discrete-time signals.

(Figure: a continuous-time signal $y(t)$ plotted vs. $t$, and a discrete-time signal $y[k]$ plotted vs. $k$.)



Signals are vectors

Multiplication by a scalar
Let $\alpha \in \mathbb{R}$. The signal $\alpha y$ can be obtained as:

$(\alpha y)(t) = \alpha\, y(t)$, and $(\alpha y)[k] = \alpha\, y[k]$.

Notice that $0y$ is always the zero signal, where $0(t) = 0$ for all $t \in \mathbb{R}$,
and $0[k] = 0$ for all $k \in \mathbb{Z}$, and $1y = y$.

Addition of two signals

Let $u$ and $v$ be two signals of the same kind (i.e., both in continuous
or discrete time).
The signal $u + v$ is defined as:

$(u + v)(t) = u(t) + v(t)$, and $(u + v)[k] = u[k] + v[k]$.

Notice that $u - u = u + (-1)u = 0$.


Systems

Definition (system)
A system is an operator that transforms an input signal $u$ into a unique
output signal $y$.

(Figure: block diagram of a system mapping an input signal, $u(t)$ or $u[k]$, to an output signal, $y(t)$ or $y[k]$.)

(MIT) Topic 5 addendum: Signals, Systems Fall 2010 5 / 27


A classication

Continuous-Time System: CT $\to$ CT
This is the kind of system you studied in 16.06.

Discrete-Time System: DT $\to$ DT
We will study this kind of system in this class.

Sampler: CT $\to$ DT
This class includes sensors, and A/D (Analog $\to$ Digital) converters.
Let us call a sampler with sampling time $T$ a system such that
$$y[k] = u(kT).$$

Hold: DT $\to$ CT
This class includes actuators, and D/A (Digital $\to$ Analog) converters.
A Zero-Order Hold (ZOH) with holding time $T$ is such that
$$y(t) = u\!\left[\left\lfloor \frac{t}{T} \right\rfloor\right].$$
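As a concrete illustration of the two definitions above, here is a minimal Python sketch of a sampler and a zero-order hold; the ramp input and the value $T = 0.5$ are arbitrary choices for the example, not from the slides.

```python
import math

def sample(u, T, K):
    """Sampler (CT -> DT): y[k] = u(kT) for k = 0..K-1."""
    return [u(k * T) for k in range(K)]

def zoh(y, T):
    """Zero-order hold (DT -> CT): returns u with u(t) = y[floor(t/T)]."""
    def u(t):
        return y[int(math.floor(t / T))]
    return u

# Sampling a ramp with T = 0.5 and holding it again
T = 0.5
yk = sample(lambda t: 2.0 * t, T, 4)   # [0.0, 1.0, 2.0, 3.0]
u = zoh(yk, T)
# Between samples the held signal is piecewise constant
assert u(0.0) == 0.0 and u(0.6) == 1.0 and u(1.2) == 2.0
```

Note that composing the two (hold after sample) does not recover the original ramp: information between the sample instants is lost.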
Outline

1 Continuous- and discrete-time signals and systems

2 Causality

3 Time Invariance

4 State-space models

5 Linear Systems



Static/Memoryless systems

Definition (Memoryless system)

A system is said to be memoryless (or static) if, for any $t_0 \in \mathbb{R}$ (resp.
$k_0 \in \mathbb{Z}$), the output at time $t_0$ (resp. at step $k_0$) depends only on the
input at time $t_0$ (resp. at step $k_0$).

This is the most basic kind of system. Essentially the output can be
written as a simple function of the input, e.g.,

$$y(t) = f(u(t)), \qquad y[k] = f(u[k]).$$

Examples:
A proportional compensator;
A spring;
An electrical circuit with resistors only.

In general, systems behave in a more complicated way.



Causality

Definition (Causal system)

A system $G$ is said to be causal if, for any $t_0 \in \mathbb{R}$ (resp. $k_0 \in \mathbb{Z}$) the
output at time $t_0$ (resp. at step $k_0$) depends only on the input up to, and
including, time $t_0$ (resp. up to, and including, step $k_0$).

Definition (Strictly causal system)

A system is said to be strictly causal if the dependency is only on the input
preceding $t_0$ (resp. $k_0$).

A system is causal if it is non-anticipatory, i.e., it cannot respond to inputs
that will be applied in the future, but depends only on past and present inputs.
(Strictly causal systems depend only on past inputs.)
Note that a static system is causal, but not strictly causal.



Some remarks on causality

Most control systems that are implementable in practice are, in fact,
causal. In general, it is not possible to predict the inputs that will be
applied in the future.

However, there are some cases in which non-causal systems can actually
be interesting to study:

On-board sensors may provide look-ahead information.
For example, in a nap-of-the-Earth flying mission, the path to the
next waypoint, and the altitude profile, may be known in advance.

Commands may belong to a pre-defined class.
For example, an autopilot that is programmed to execute an acrobatic
maneuver (e.g., a loop).

Off-line processing.
For example, a system that is used as a filter to smooth a certain
signal, or possibly to rip a song from a CD to an MP3 file.



Outline

1 Continuous- and discrete-time signals and systems

2 Causality

3 Time Invariance

4 State-space models

5 Linear Systems



Time delays

Definition (Time-delay system)

A time-delay system, or more simply a time delay, is a system $S_T$ such that

$y(t) = S_T(u)(t) = u(t - T)$ in continuous time;

$y[k] = S_T(u)[k] = u[k - T]$ in discrete time.

(Figure: a signal $u$ and its delayed copy $S_T u$, shifted right by $T$.)



Time invariance

Definition (Time invariant system)

A system is time invariant if it commutes with time delays. In other words,
the output to a time-delayed input is the same as the time-delayed output
to the original input.

$u(t) \to$ [TI System] $\to y(t) \to$ [Time delay] $\to y(t - T)$

$u(t) \to$ [Time delay] $\to u(t - T) \to$ [TI System] $\to y(t - T)$

More simply, the response of a time-invariant system does not depend on
where you put the origin on the time axis (but it does depend on the direction
in which time flows).
Outline

1 Continuous- and discrete-time signals and systems

2 Causality

3 Time Invariance

4 State-space models

5 Linear Systems



State of a system
We know that, if a system is causal, in order to compute its output at a
given time $t_0$, we need to know only the input signal over $(-\infty, t_0]$.
(Similarly for DT systems.)
This is a lot of information. Can we summarize it with something more
manageable?

Definition (state)
The state $x(t_1)$ of a causal system at time $t_1$ (resp. at step $k_1$) is the
information needed, together with the input $u$ between times $t_1$ and $t_2$
(resp. between steps $k_1$ and $k_2$), to uniquely predict the output at time $t_2$
(resp. at step $k_2$), for all $t_2 \ge t_1$ (resp. for all steps $k_2 \ge k_1$).

In other words, the state of the system at a given time summarizes the
whole history of the past inputs, for the purpose of predicting the
output at future times.
Usually, the state of a system is a vector in some $n$-dimensional space $\mathbb{R}^n$.
Dimension of a system

The choice of a state for a system is not unique (in fact, there are infinitely
many choices, or realizations).
However, there are some choices of state which are preferable to others; in
particular, we can look at minimal realizations.

Definition (Dimension of a system)

We define the dimension of a causal system as the minimal number of
variables sufficient to describe the system's state (i.e., the dimension of
the smallest state vector).

We will deal mostly with finite-dimensional systems, i.e., systems which
can be described with a finite number of variables.



Some remarks on infinite-dimensional systems

Even though we will not address infinite-dimensional systems in detail,
some examples are very useful:

(CT) Time-delay systems: Consider the very simple time delay $S_T$,
defined as a continuous-time system such that its input and output
are related by
$$y(t) = u(t - T).$$
In order to predict the output at times after $t$, knowledge of the
input for times in $(t - T, t]$ is necessary.

PDE-driven systems: Many systems in aerospace, arising, e.g., in
structural control and flow control applications, can only be described
exactly using a continuum of state variables (stress, displacement,
pressure, temperature, etc.). These are infinite-dimensional systems.

In order to deal with infinite-dimensional systems, approximate discrete
models are often used to reduce the dimension of the state.



Outline

1 Continuous- and discrete-time signals and systems

2 Causality

3 Time Invariance

4 State-space models

5 Linear Systems



Linear Systems

Definition (Linear system)

A system is said to be a Linear System if, for any two inputs $u_1$ and $u_2$, and any
two real numbers $\alpha$, $\beta$, the following are satisfied:
$$u_1 \to y_1, \qquad u_2 \to y_2, \qquad \alpha u_1 + \beta u_2 \to \alpha y_1 + \beta y_2.$$

Superposition of effects
This property is very important: it tells us that if we can decompose a
complicated input into a sum of simple signals, we can obtain the output
as the sum of the individual outputs corresponding to the simple inputs.
Examples (in CT, same holds in DT):
Taylor series: $u(t) = \sum_{i=0}^{\infty} c_i t^i$.
Fourier series: $u(t) = \sum_{i=0}^{\infty} \left( a_i \cos(\omega_i t) + b_i \sin(\omega_i t) \right)$.
State-space model
Finite-dimensional linear systems can always be modeled using a set of differential (or
difference) equations as follows:

Definition (Continuous-time systems)

$$\frac{d}{dt}x(t) = A(t)x(t) + B(t)u(t);$$
$$y(t) = C(t)x(t) + D(t)u(t);$$

Definition (Discrete-time systems)

$$x[k+1] = A[k]x[k] + B[k]u[k];$$
$$y[k] = C[k]x[k] + D[k]u[k];$$

The matrices appearing in the above formulas are in general functions of


time, and have the correct dimensions to make the equations meaningful.



LTI State-space model
If the system is Linear Time-Invariant (LTI), the equations simplify to:

Definition (Continuous-time systems)

$$\frac{d}{dt}x(t) = Ax(t) + Bu(t);$$
$$y(t) = Cx(t) + Du(t);$$

Definition (Discrete-time systems)

$$x[k+1] = Ax[k] + Bu[k];$$
$$y[k] = Cx[k] + Du[k];$$

In the above formulas, $A \in \mathbb{R}^{n \times n}$, $B \in \mathbb{R}^{n \times 1}$, $C \in \mathbb{R}^{1 \times n}$, $D \in \mathbb{R}$, and $n$ is the
dimension of the state vector.



Example of DT system: accumulator

Consider a system such that
$$y[k] = \sum_{i=-\infty}^{k-1} u[i].$$
Notice that we can rewrite the above as
$$y[k] = \sum_{i=-\infty}^{k-2} u[i] + u[k-1] = y[k-1] + u[k-1].$$
In other words, we can set $x[k] = y[k]$ as a state, and get the following state-space model:
$$x[k+1] = x[k] + u[k], \qquad y[k] = x[k].$$
Let $x[0] = y[0] = 0$, and $u[k] = 1$; we can solve by repeated substitution:
$$x[1] = x[0] + u[0] = 0 + 1 = 1, \quad y[1] = x[1] = 1;$$
$$x[2] = x[1] + u[1] = 1 + 1 = 2, \quad y[2] = x[2] = 2;$$
$$\ldots$$
$$x[k] = x[k-1] + u[k-1] = (k-1) + 1 = k, \quad y[k] = x[k] = k.$$
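The accumulator recursion above is easy to check numerically; here is a small Python sketch (the input sequences are arbitrary examples):

```python
def accumulator(u_seq):
    """DT accumulator: x[k+1] = x[k] + u[k], y[k] = x[k], x[0] = 0."""
    x = 0
    y = []
    for u in u_seq:
        y.append(x)  # y[k] = x[k]
        x = x + u    # x[k+1] = x[k] + u[k]
    return y

# With u[k] = 1 and x[0] = 0, repeated substitution gives x[k] = k
y = accumulator([1] * 6)
assert y == [0, 1, 2, 3, 4, 5]
```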



LTI State-space models in Matlab

Example (Continuous-time LTI system)


>> A= [1, 0.1; 0, 1]; B= [1; 2]; C = [3, 4]; D=0;
>> P = ss(A,B,C,D)

a =
x1 x2
x1 1 0.1
x2 0 1

b =
u1
x1 1
x2 2

c =
x1 x2
y1 3 4

d =
u1
y1 0

Continuous-time model.



LTI State-space models in Matlab

Example (Discrete-time LTI system)


>> A= [1, 0.1; 0, 1]; B= [1; 2]; C = [3, 4]; D=0; Ts = 0.2;
>> P = ss(A,B,C,D,Ts)

a =
x1 x2
x1 1 0.1
x2 0 1

b =
u1
x1 1
x2 2

c =
x1 x2
y1 3 4

d =
u1
y1 0

Sampling time: 0.2


Discrete-time model.



Finite-dimensional Linear Systems 1/2

Recall the definition of a linear system. Essentially, a system is linear
if the linear combination of two inputs generates an output that is the
linear combination of the outputs generated by the two individual
inputs.

The definition of a state allows us to summarize the past inputs into
the state, i.e., to replace knowledge of $\{u(t),\ t \in (-\infty, +\infty)\}$
with knowledge of $x(t_0)$ and $\{u(t),\ t \ge t_0\}$
(similar formulas hold for the DT case).

We can extend the definition of linear systems to this new notion as well.



Finite-dimensional Linear Systems 2/2

Definition (Linear system (again))

A system is said to be a Linear System if, for any $u_1$, $u_2$, $t_0$, $x_{0,1}$, $x_{0,2}$, and any
two real numbers $\alpha$, $\beta$, the following are satisfied:
$$\left\{ x(t_0) = x_{0,1},\ u(t) = u_1(t),\ t \ge t_0 \right\} \;\to\; y_1,$$
$$\left\{ x(t_0) = x_{0,2},\ u(t) = u_2(t),\ t \ge t_0 \right\} \;\to\; y_2,$$
$$\left\{ x(t_0) = \alpha x_{0,1} + \beta x_{0,2},\ u(t) = \alpha u_1(t) + \beta u_2(t),\ t \ge t_0 \right\} \;\to\; \alpha y_1 + \beta y_2.$$

Similar formulas hold for the discrete-time case.



Forced response and initial-conditions response
Assume we want to study the output of a system starting at time $t_0$, knowing the
initial state $x(t_0) = x_0$, and the present and future input $u(t)$, $t \ge t_0$. Let us study
the following two cases instead:

Initial-conditions response:
$$x_{IC}(t_0) = x_0, \quad u_{IC}(t) = 0,\ t \ge t_0 \quad \to \quad y_{IC};$$

Forced response:
$$x_F(t_0) = 0, \quad u_F(t) = u(t),\ t \ge t_0 \quad \to \quad y_F.$$

Clearly, $x_0 = x_{IC}(t_0) + x_F(t_0)$, and $u = u_{IC} + u_F$, hence
$$y = y_{IC} + y_F,$$
that is, we can always compute the output of a linear system by adding the output
corresponding to zero input and the original initial conditions, and the output
corresponding to a zero initial condition and the original input.
In other words, we can study separately the effects of non-zero inputs and of
non-zero initial conditions. The complete case can be recovered from these two.
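This decomposition can be verified numerically. The following Python sketch simulates a discrete-time LTI model (the matrices and input are arbitrary illustrative values) and checks that the full response equals the initial-conditions response plus the forced response:

```python
# Check y = yIC + yF for a DT LTI model x[k+1] = A x[k] + B u[k], y[k] = C x[k]

def simulate(A, B, C, x0, u_seq):
    x = list(x0)
    y = []
    for u in u_seq:
        y.append(C[0] * x[0] + C[1] * x[1])
        x = [A[0][0] * x[0] + A[0][1] * x[1] + B[0] * u,
             A[1][0] * x[0] + A[1][1] * x[1] + B[1] * u]
    return y

A = [[0.5, 0.1], [0.0, 0.8]]
B = [1.0, 2.0]
C = [3.0, 4.0]
x0 = [1.0, -1.0]
u = [1.0, -2.0, 0.5, 3.0]

y_full = simulate(A, B, C, x0, u)               # full response
y_ic = simulate(A, B, C, x0, [0.0] * len(u))    # initial-conditions response
y_f = simulate(A, B, C, [0.0, 0.0], u)          # forced response
assert all(abs(yf + yi - yt) < 1e-12 for yf, yi, yt in zip(y_f, y_ic, y_full))
```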



Topic #6

16.30/31 Feedback Control Systems

State-Space Systems
What are state-space models?
Why should we use them?
How are they related to the transfer functions
used in classical control design and how do we
develop a state-space model?
What are the basic properties of a state-space model, and
how do we analyze these?
Fall 2010 16.30/31 62

TFs to State-Space Models

The goal is to develop a state-space model given a transfer function


for a system G(s).
There are many, many ways to do this.

But there are three primary cases to consider:


1. Simple numerator (strictly proper)
$$\frac{y}{u} = G(s) = \frac{1}{s^3 + a_1 s^2 + a_2 s + a_3}$$

2. Numerator order less than denominator order (strictly proper)
$$\frac{y}{u} = G(s) = \frac{b_1 s^2 + b_2 s + b_3}{s^3 + a_1 s^2 + a_2 s + a_3} = \frac{N(s)}{D(s)}$$

3. Numerator order equal to denominator order (proper)
$$\frac{y}{u} = G(s) = \frac{b_0 s^3 + b_1 s^2 + b_2 s + b_3}{s^3 + a_1 s^2 + a_2 s + a_3}$$

These 3 cover all cases of interest.



Fall 2010 16.30/31 63

Consider case 1 (specific example of third order, but the extension
to $n$th order follows easily)
$$\frac{y}{u} = G(s) = \frac{1}{s^3 + a_1 s^2 + a_2 s + a_3}$$
can be rewritten as the differential equation
$$\dddot y + a_1 \ddot y + a_2 \dot y + a_3 y = u$$
choose the output $y$ and its derivatives as the state vector
$$x = \begin{bmatrix} \ddot y \\ \dot y \\ y \end{bmatrix}$$
then the state equations are
$$\dot x = \begin{bmatrix} \dddot y \\ \ddot y \\ \dot y \end{bmatrix} = \begin{bmatrix} -a_1 & -a_2 & -a_3 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} \ddot y \\ \dot y \\ y \end{bmatrix} + \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} u$$
$$y = \begin{bmatrix} 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \ddot y \\ \dot y \\ y \end{bmatrix} + [0]\, u$$

This is typically called the controller form for reasons that will become
obvious later on.
There are four classic (called canonical) forms observer, con
troller, controllability, and observability. They are all useful in
their own way.
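A quick numerical way to sanity-check the controller form is to evaluate $C(sI - A)^{-1}B$ at a test frequency and compare it with $1/(s^3 + a_1 s^2 + a_2 s + a_3)$. A Python sketch, where the coefficient values are an arbitrary choice for illustration:

```python
# Hypothetical coefficients a1, a2, a3 chosen for illustration
a1, a2, a3 = 6.0, 11.0, 6.0   # D(s) = (s+1)(s+2)(s+3)

A = [[-a1, -a2, -a3],
     [1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0]]
B = [1.0, 0.0, 0.0]
C = [0.0, 0.0, 1.0]

def solve3(M, b):
    """Solve a 3x3 complex system M x = b by Gaussian elimination."""
    M = [row[:] + [bi] for row, bi in zip(M, b)]
    n = 3
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0j] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

def G(s):
    """G(s) = C (sI - A)^{-1} B for the controller-form model."""
    sIA = [[s * (i == j) - A[i][j] for j in range(3)] for i in range(3)]
    x = solve3(sIA, B)          # x = (sI - A)^{-1} B
    return sum(ci * xi for ci, xi in zip(C, x))

s = 2.0 + 1.0j
expected = 1.0 / (s**3 + a1 * s**2 + a2 * s + a3)
assert abs(G(s) - expected) < 1e-12
```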



Fall 2010 16.30/31 64

Consider case 2
$$\frac{y}{u} = G(s) = \frac{b_1 s^2 + b_2 s + b_3}{s^3 + a_1 s^2 + a_2 s + a_3} = \frac{N(s)}{D(s)}$$

Let
$$\frac{y}{u} = \frac{y}{v} \cdot \frac{v}{u}$$
where $y/v = N(s)$ and $v/u = 1/D(s)$

Then the representation of $v/u = 1/D(s)$ is the same as case 1:
$$\dddot v + a_1 \ddot v + a_2 \dot v + a_3 v = u$$
use the state vector
$$x = \begin{bmatrix} \ddot v \\ \dot v \\ v \end{bmatrix}$$
to get
$$\dot x = A_2 x + B_2 u$$
where
$$A_2 = \begin{bmatrix} -a_1 & -a_2 & -a_3 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix} \quad \text{and} \quad B_2 = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}$$

Then consider $y/v = N(s)$, which implies that
$$y = b_1 \ddot v + b_2 \dot v + b_3 v = \begin{bmatrix} b_1 & b_2 & b_3 \end{bmatrix} \begin{bmatrix} \ddot v \\ \dot v \\ v \end{bmatrix} = C_2 x + [0]\, u$$



Fall 2010 16.30/31 65

Consider case 3 with
$$\frac{y}{u} = G(s) = \frac{b_0 s^3 + b_1 s^2 + b_2 s + b_3}{s^3 + a_1 s^2 + a_2 s + a_3} = \frac{\beta_1 s^2 + \beta_2 s + \beta_3}{s^3 + a_1 s^2 + a_2 s + a_3} + D = G_1(s) + D$$
where
$$D\left( s^3 + a_1 s^2 + a_2 s + a_3 \right) + \left( \beta_1 s^2 + \beta_2 s + \beta_3 \right) = b_0 s^3 + b_1 s^2 + b_2 s + b_3$$
so that, given the $b_i$, we can easily find the $\beta_i$:
$$D = b_0, \quad \beta_1 = b_1 - D a_1, \quad \ldots$$

Given the $\beta_i$, can find $G_1(s)$

Can make a state-space model for $G_1(s)$ as in case 2

Then we just add the feed-through term $Du$ to the output equation
from the model for $G_1(s)$

Will see that there is a lot of freedom in making a state-space model
because we are free to pick the $x$ as we want



Fall 2010 16.30/31 66

Modal Form

One particularly useful canonical form is called the Modal Form

It is a diagonal representation of the state-space model.
Assume for now that the transfer function has distinct real poles $p_i$
(easily extends to the case with complex poles, see 7??)
$$G(s) = \frac{N(s)}{D(s)} = \frac{N(s)}{(s - p_1)(s - p_2)\cdots(s - p_n)} = \frac{r_1}{s - p_1} + \frac{r_2}{s - p_2} + \cdots + \frac{r_n}{s - p_n}$$
Now define a collection of first order systems, each with state $x_i$:
$$\frac{X_1}{U(s)} = \frac{r_1}{s - p_1} \;\Rightarrow\; \dot x_1 = p_1 x_1 + r_1 u$$
$$\frac{X_2}{U(s)} = \frac{r_2}{s - p_2} \;\Rightarrow\; \dot x_2 = p_2 x_2 + r_2 u$$
$$\vdots$$
$$\frac{X_n}{U(s)} = \frac{r_n}{s - p_n} \;\Rightarrow\; \dot x_n = p_n x_n + r_n u$$
Which can be written as
$$\dot x(t) = Ax(t) + Bu(t), \qquad y(t) = Cx(t) + Du(t)$$
with
$$A = \begin{bmatrix} p_1 & & \\ & \ddots & \\ & & p_n \end{bmatrix} \quad B = \begin{bmatrix} r_1 \\ \vdots \\ r_n \end{bmatrix} \quad C = \begin{bmatrix} 1 & \cdots & 1 \end{bmatrix}$$
Good representation to use for numerical robustness reasons.
Avoids some of the large coefficients in the other 4 canonical forms.
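The modal construction can be checked numerically: compute the residues $r_i$, build the modal sum $\sum_i r_i/(s - p_i)$, and compare it with $N(s)/D(s)$. A Python sketch with an arbitrary illustrative choice of poles and numerator:

```python
# Hypothetical example: distinct real poles p = (-1, -2, -4), N(s) = s + 3
p = (-1.0, -2.0, -4.0)

def N(s):
    return s + 3.0

# Residues r_i = N(p_i) / prod_{j != i} (p_i - p_j)
r = []
for i, pi in enumerate(p):
    denom = 1.0
    for j, pj in enumerate(p):
        if j != i:
            denom *= (pi - pj)
    r.append(N(pi) / denom)

def G_modal(s):
    """Modal form: A = diag(p_i), B = [r_i], C = [1 ... 1] gives sum r_i/(s - p_i)."""
    return sum(ri / (s - pi) for ri, pi in zip(r, p))

def G_tf(s):
    return N(s) / ((s - p[0]) * (s - p[1]) * (s - p[2]))

for s in (1.0 + 2.0j, -0.5 + 0.0j, 3.0 + 0.0j):
    assert abs(G_modal(s) - G_tf(s)) < 1e-12
```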



Fall 2010 16.30/31 67

State-Space Models to TFs

Given the Linear Time-Invariant (LTI) state dynamics


x (t) = Ax(t) + Bu(t)
y(t) = Cx(t) + Du(t)
what is the corresponding transfer function?

Start by taking the Laplace transform of these equations:
$$\mathcal{L}\{\dot x(t) = Ax(t) + Bu(t)\} \;\Rightarrow\; sX(s) - x(0) = AX(s) + BU(s)$$
$$\mathcal{L}\{y(t) = Cx(t) + Du(t)\} \;\Rightarrow\; Y(s) = CX(s) + DU(s)$$
which gives
$$(sI - A)X(s) = BU(s) + x(0)$$
$$X(s) = (sI - A)^{-1}BU(s) + (sI - A)^{-1}x(0)$$
and
$$Y(s) = \left[ C(sI - A)^{-1}B + D \right] U(s) + C(sI - A)^{-1}x(0)$$

By definition, $G(s) = C(sI - A)^{-1}B + D$ is called the Transfer
Function of the system.

And $C(sI - A)^{-1}x(0)$ is the initial condition response.

It is part of the response, but not part of the transfer function.



Fall 2010 16.30/31 68

SS to TF

In going from the state-space model
$$\dot x(t) = Ax(t) + Bu(t), \qquad y(t) = Cx(t) + Du(t)$$
to the transfer function $G(s) = C(sI - A)^{-1}B + D$ we need to form the
inverse of the matrix $(sI - A)$
A symbolic inverse is not very easy.

For simple cases, we can use the following:
$$\begin{bmatrix} a_1 & a_2 \\ a_3 & a_4 \end{bmatrix}^{-1} = \frac{1}{a_1 a_4 - a_2 a_3} \begin{bmatrix} a_4 & -a_2 \\ -a_3 & a_1 \end{bmatrix}$$
For larger problems, we can also use Cramer's Rule

Turns out that an equivalent method is to form:1
$$G(s) = C(sI - A)^{-1}B + D = \frac{\det \begin{bmatrix} sI - A & B \\ -C & D \end{bmatrix}}{\det(sI - A)}$$
Reason for this will become more apparent later (see 8??) when
we talk about how to compute the zeros of a state-space model
(which are the roots of the numerator)

Key point: the system characteristic equation is given by
$$\phi(s) = \det(sI - A) = 0$$
It is the roots of $\phi(s) = 0$ that determine the poles of the system.
Will show that these determine the time response of the system.

1 see here



Fall 2010 16.30/31 69

Example from Case 2, page 64
$$A = \begin{bmatrix} -a_1 & -a_2 & -a_3 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}, \quad B = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \quad C = \begin{bmatrix} b_1 & b_2 & b_3 \end{bmatrix}$$
then
$$G(s) = \frac{1}{\det(sI - A)} \det \begin{bmatrix} s + a_1 & a_2 & a_3 & 1 \\ -1 & s & 0 & 0 \\ 0 & -1 & s & 0 \\ -b_1 & -b_2 & -b_3 & 0 \end{bmatrix} = \frac{b_1 s^2 + b_2 s + b_3}{\det(sI - A)}$$
and $\det(sI - A) = s^3 + a_1 s^2 + a_2 s + a_3$
Which is obviously the same as before.
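The determinant formula can also be confirmed numerically for this example. A Python sketch, where the specific $a_i$, $b_i$ values are illustrative assumptions:

```python
# Check G(s) = det([[sI-A, B], [-C, D]]) / det(sI-A) on the case-2 example,
# with illustrative values a = (6, 11, 6), b = (1, 5, 6)
a1, a2, a3 = 6.0, 11.0, 6.0
b1, b2, b3 = 1.0, 5.0, 6.0

def det(M):
    """Determinant by cofactor expansion (fine for these small matrices)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def G(s):
    top = [[s + a1, a2, a3, 1.0],
           [-1.0, s, 0.0, 0.0],
           [0.0, -1.0, s, 0.0],
           [-b1, -b2, -b3, 0.0]]          # [[sI - A, B], [-C, D]]
    charpoly = det([[s + a1, a2, a3], [-1.0, s, 0.0], [0.0, -1.0, s]])
    return det(top) / charpoly

s = 1.0 + 1.0j
expected = (b1 * s**2 + b2 * s + b3) / (s**3 + a1 * s**2 + a2 * s + a3)
assert abs(G(s) - expected) < 1e-12
```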



Fall 2010 16.30/31 610

State-Space Transformations

State-space representations are not unique because we have a lot of
freedom in choosing the state vector.
Selection of the state is quite arbitrary, and not that important.
In fact, given one model, we can transform it to another model that
is equivalent in terms of its input-output properties.
To see this, define Model 1 of $G(s)$ as
$$\dot x(t) = Ax(t) + Bu(t)$$
$$y(t) = Cx(t) + Du(t)$$

Now introduce the new state vector $z$ related to the first state $x$
through the transformation $x = Tz$
$T$ is an invertible (similarity) transform matrix
$$\dot z = T^{-1}\dot x = T^{-1}(Ax + Bu) = T^{-1}(ATz + Bu) = (T^{-1}AT)z + T^{-1}Bu = \bar A z + \bar B u$$
and
$$y = Cx + Du = CTz + Du = \bar C z + Du$$

So the new model is
$$\dot z = \bar A z + \bar B u, \qquad y = \bar C z + Du$$

Are these going to give the same transfer function? They must if
these really are equivalent models.
Fall 2010 16.30/31 611

Consider the two transfer functions:
$$G_1(s) = C(sI - A)^{-1}B + D$$
$$G_2(s) = \bar C(sI - \bar A)^{-1}\bar B + D$$

Does $G_1(s) \equiv G_2(s)$?
$$G_1(s) = C(sI - A)^{-1}B + D$$
$$= C(TT^{-1})(sI - A)^{-1}(TT^{-1})B + D$$
$$= (CT)\left[ T^{-1}(sI - A)^{-1}T \right](T^{-1}B) + D$$
$$= \bar C \left[ T^{-1}(sI - A)T \right]^{-1} \bar B + D$$
$$= \bar C (sI - \bar A)^{-1} \bar B + D = G_2(s)$$

So the transfer function is not changed by putting the state-space
model through a similarity transformation.

Note that in the transfer function
$$G(s) = \frac{b_1 s^2 + b_2 s + b_3}{s^3 + a_1 s^2 + a_2 s + a_3}$$
we have 6 parameters to choose
But in the related state-space model, we have $A$ ($3 \times 3$), $B$ ($3 \times 1$),
$C$ ($1 \times 3$) for a total of 15 parameters.
Is there a contradiction here because we have more degrees of freedom
in the state-space model?
No. In choosing a representation of the model, we are effectively
choosing a $T$, which is also $3 \times 3$, and thus accounts for the remaining 9
degrees of freedom in the state-space model.
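The invariance under similarity transformation is easy to verify numerically for a small example; here is a Python sketch using a 2-state model and an arbitrary invertible $T$ (all numbers are illustrative):

```python
# 2-state sketch: G1(s) = C(sI-A)^{-1}B + D is unchanged by x = T z
A = [[0.0, 1.0], [-2.0, -3.0]]
B = [1.0, 0.0]
C = [1.0, 2.0]
D = 0.5
T = [[1.0, 1.0], [1.0, 2.0]]            # any invertible T (illustrative)

def inv2(M):
    d = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[M[1][1] / d, -M[0][1] / d], [-M[1][0] / d, M[0][0] / d]]

def mat2(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def tf(A, B, C, D, s):
    """C (sI - A)^{-1} B + D evaluated at complex s."""
    R = inv2([[s - A[0][0], -A[0][1]], [-A[1][0], s - A[1][1]]])
    x = [R[0][0] * B[0] + R[0][1] * B[1], R[1][0] * B[0] + R[1][1] * B[1]]
    return C[0] * x[0] + C[1] * x[1] + D

Ti = inv2(T)
Abar = mat2(mat2(Ti, A), T)             # T^{-1} A T
Bbar = [Ti[0][0] * B[0] + Ti[0][1] * B[1], Ti[1][0] * B[0] + Ti[1][1] * B[1]]
Cbar = [C[0] * T[0][0] + C[1] * T[1][0], C[0] * T[0][1] + C[1] * T[1][1]]  # C T

s = 0.7 + 1.3j
assert abs(tf(A, B, C, D, s) - tf(Abar, Bbar, Cbar, D, s)) < 1e-10
```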



Topic #7

16.30/31 Feedback Control Systems

State-Space Systems
What are the basic properties of a state-space
model, and how do we analyze these?
Time Domain Interpretations
System Modes
Fall 2010 16.30/31 71

Time Response

Can develop a lot of insight into the system response and how it is
modeled by computing the time response x(t)
Homogeneous part

Forced solution

Homogeneous Part
$$\dot x = Ax, \quad x(0) \text{ known}$$
Take the Laplace transform
$$X(s) = (sI - A)^{-1}x(0)$$
so that
$$x(t) = \mathcal{L}^{-1}\left[ (sI - A)^{-1} \right] x(0)$$
But can show
$$(sI - A)^{-1} = \frac{I}{s} + \frac{A}{s^2} + \frac{A^2}{s^3} + \ldots$$
so
$$\mathcal{L}^{-1}\left[ (sI - A)^{-1} \right] = I + At + \frac{1}{2!}(At)^2 + \ldots = e^{At}$$
$$\Rightarrow\; x(t) = e^{At}x(0)$$

$e^{At}$ is a special matrix that we will use many times in this course
Transition matrix or Matrix Exponential
Calculate in MATLAB1 using expm.m and not exp.m

Note that $e^{(A+B)t} = e^{At}e^{Bt}$ iff $AB = BA$

1 MATLAB is a trademark of the Mathworks Inc.

September 23, 2010


Fall 2010 16.30/31 72

Example: $\dot x = Ax$, with
$$A = \begin{bmatrix} 0 & 1 \\ -2 & -3 \end{bmatrix}$$
$$(sI - A)^{-1} = \begin{bmatrix} s & -1 \\ 2 & s + 3 \end{bmatrix}^{-1} = \frac{1}{(s+2)(s+1)} \begin{bmatrix} s + 3 & 1 \\ -2 & s \end{bmatrix}$$
$$= \begin{bmatrix} \dfrac{2}{s+1} - \dfrac{1}{s+2} & \dfrac{1}{s+1} - \dfrac{1}{s+2} \\[2mm] \dfrac{-2}{s+1} + \dfrac{2}{s+2} & \dfrac{-1}{s+1} + \dfrac{2}{s+2} \end{bmatrix}$$
$$e^{At} = \begin{bmatrix} 2e^{-t} - e^{-2t} & e^{-t} - e^{-2t} \\ -2e^{-t} + 2e^{-2t} & -e^{-t} + 2e^{-2t} \end{bmatrix}$$

We will say more about $e^{At}$ when we have said more about $A$ (eigenvalues
and eigenvectors)
Computation of $e^{At} = \mathcal{L}^{-1}\{(sI - A)^{-1}\}$ is straightforward for a 2-state
system
More complex for a larger system, see this paper
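The closed form above can be checked against a direct Taylor-series evaluation of $e^{At}$; a Python sketch (the time $t = 0.8$ is an arbitrary test point):

```python
import math

# Series evaluation of e^{At} compared with the closed form found above
A = [[0.0, 1.0], [-2.0, -3.0]]
t = 0.8

def expm_series(A, t, terms=60):
    """e^{At} = I + At + (At)^2/2! + ...  (truncated Taylor series)."""
    M = [[A[0][0] * t, A[0][1] * t], [A[1][0] * t, A[1][1] * t]]
    S = [[1.0, 0.0], [0.0, 1.0]]        # running sum, starts at I
    P = [[1.0, 0.0], [0.0, 1.0]]        # running power (At)^k / k!
    for k in range(1, terms):
        P = [[sum(P[i][m] * M[m][j] for m in range(2)) / k for j in range(2)]
             for i in range(2)]
        S = [[S[i][j] + P[i][j] for j in range(2)] for i in range(2)]
    return S

e1, e2 = math.exp(-t), math.exp(-2 * t)
closed = [[2 * e1 - e2, e1 - e2],
          [-2 * e1 + 2 * e2, -e1 + 2 * e2]]

E = expm_series(A, t)
assert all(abs(E[i][j] - closed[i][j]) < 1e-10 for i in range(2) for j in range(2))
```

In MATLAB the same check is `expm(A*t)`, per the note above.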



Fall 2010 16.30/31 73

SS: Forced Solution

Forced Solution
Consider a scalar case:
$$\dot x = ax + bu, \quad x(0) \text{ given}$$
$$\Rightarrow\; x(t) = e^{at}x(0) + \int_0^t e^{a(t-\tau)} b\, u(\tau)\, d\tau$$
where did this come from?
1. $\dot x - ax = bu$
2. $e^{-at}\left[ \dot x - ax \right] = \frac{d}{dt}\left( e^{-at}x(t) \right) = e^{-at}bu(t)$
3. $\int_0^t \frac{d}{d\tau}\left( e^{-a\tau}x(\tau) \right) d\tau = e^{-at}x(t) - x(0) = \int_0^t e^{-a\tau}bu(\tau)\, d\tau$

Forced Solution, Matrix case:
$$\dot x = Ax + Bu$$
where $x$ is an $n$-vector and $u$ is an $m$-vector

Just follow the same steps as above to get
$$x(t) = e^{At}x(0) + \int_0^t e^{A(t-\tau)}Bu(\tau)\, d\tau$$
and if $y = Cx + Du$, then
$$y(t) = Ce^{At}x(0) + \int_0^t Ce^{A(t-\tau)}Bu(\tau)\, d\tau + Du(t)$$

$Ce^{At}x(0)$ is the initial response

$Ce^{At}B$ is the impulse response of the system.



Fall 2010 16.30/31 74

Have seen the key role of $e^{At}$ in the solution for $x(t)$
Determines the system time response
But would like to get more insight!
Consider what happens if the matrix $A$ is diagonalizable, i.e., there
exists a $T$ such that
$$T^{-1}AT = \Lambda \quad \text{which is diagonal,} \quad \Lambda = \begin{bmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_n \end{bmatrix}$$
Then
$$e^{At} = T e^{\Lambda t} T^{-1}$$
where
$$e^{\Lambda t} = \begin{bmatrix} e^{\lambda_1 t} & & \\ & \ddots & \\ & & e^{\lambda_n t} \end{bmatrix}$$

Follows since $e^{At} = I + At + \frac{1}{2!}(At)^2 + \ldots$ and $A = T\Lambda T^{-1}$, so
we can show that
$$e^{At} = I + At + \frac{1}{2!}(At)^2 + \ldots = I + T\Lambda T^{-1}t + \frac{1}{2!}(T\Lambda T^{-1}t)^2 + \ldots = T e^{\Lambda t} T^{-1}$$

This is a simpler way to get the matrix exponential, but how to find $T$
and $\Lambda$?
Eigenvalues and Eigenvectors
Eigenvalues and Eigenvectors



Fall 2010 16.30/31 75

Eigenvalues and Eigenvectors

Recall that the eigenvalues of $A$ are the same as the roots of the
characteristic equation (page 67)
$\lambda$ is an eigenvalue of $A$ if
$$\det(\lambda I - A) = 0$$
which is true iff there exists a nonzero $v$ (eigenvector) for which
$$(\lambda I - A)v = 0 \quad \Leftrightarrow \quad Av = \lambda v$$

Repeat the process to find all of the eigenvectors. Assuming that the
$n$ eigenvectors are linearly independent,
$$Av_i = \lambda_i v_i, \quad i = 1, \ldots, n$$
$$A \begin{bmatrix} v_1 & \cdots & v_n \end{bmatrix} = \begin{bmatrix} v_1 & \cdots & v_n \end{bmatrix} \begin{bmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_n \end{bmatrix}$$
$$AT = T\Lambda \quad \Rightarrow \quad T^{-1}AT = \Lambda$$



Fall 2010 16.30/31 76

Jordan Form
One word of caution: Not all matrices are diagonalizable
$$A = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \qquad \det(sI - A) = s^2$$
only one eigenvalue $s = 0$ (repeated twice). The eigenvectors solve
$$\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} r_1 \\ r_2 \end{bmatrix} = 0$$
so the eigenvectors are of the form $\begin{bmatrix} r_1 \\ 0 \end{bmatrix}$, $r_1 \ne 0$ — there is only one.

Need the Jordan Form to handle the case with repeated roots2
The Jordan form of a matrix $A \in \mathbb{R}^{n \times n}$ is block diagonal:
$$A = \begin{bmatrix} A_1 & 0 & \cdots & 0 \\ 0 & A_2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & A_k \end{bmatrix} \quad \text{with} \quad A_j = \begin{bmatrix} \lambda_j & 1 & 0 & \cdots & 0 \\ 0 & \lambda_j & 1 & \cdots & 0 \\ \vdots & & \ddots & \ddots & \vdots \\ \vdots & & & \ddots & 1 \\ 0 & 0 & \cdots & 0 & \lambda_j \end{bmatrix}$$
Observation: any matrix can be transformed into Jordan form, with
the eigenvalues of $A$ determining the blocks $A_j$.

The matrix exponential of a Jordan form matrix is then given by
$$e^{At} = \begin{bmatrix} e^{A_1 t} & 0 & \cdots & 0 \\ 0 & e^{A_2 t} & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & e^{A_k t} \end{bmatrix} \quad \text{with} \quad e^{A_j t} = e^{\lambda_j t} \begin{bmatrix} 1 & t & \dfrac{t^2}{2!} & \cdots & \dfrac{t^{n-1}}{(n-1)!} \\ 0 & 1 & t & \cdots & \dfrac{t^{n-2}}{(n-2)!} \\ \vdots & & \ddots & \ddots & \vdots \\ \vdots & & & \ddots & t \\ 0 & 0 & \cdots & 0 & 1 \end{bmatrix}$$

2 see book by Strang, Matlab command Jordan, or here
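For a $2 \times 2$ Jordan block the formula above reduces to $e^{At} = e^{\lambda t}(I + Nt)$ with $N = A - \lambda I$ nilpotent. The Python sketch below checks the nilpotency and verifies, by a finite-difference test, that $x(t) = e^{At}x(0)$ satisfies $\dot x = Ax$ ($\lambda$, $x(0)$, and the test time are arbitrary choices):

```python
import math

# For the Jordan block A = [[lam, 1], [0, lam]], N = A - lam*I is nilpotent
# (N^2 = 0), so e^{At} = e^{lam t} (I + N t).
lam = -0.5
A = [[lam, 1.0], [0.0, lam]]
N = [[0.0, 1.0], [0.0, 0.0]]             # A - lam*I
N2 = [[sum(N[i][k] * N[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]
assert N2 == [[0.0, 0.0], [0.0, 0.0]]    # nilpotent, so the series terminates

def x(t, x0):
    f = math.exp(lam * t)
    return [f * (x0[0] + t * x0[1]), f * x0[1]]   # e^{At} x0 = e^{lam t}(I + Nt) x0

# Central-difference check of xdot = A x at t = 1.0
x0, t, h = [1.0, 2.0], 1.0, 1e-5
xd = [(a - b) / (2 * h) for a, b in zip(x(t + h, x0), x(t - h, x0))]
Ax = [A[0][0] * x(t, x0)[0] + A[0][1] * x(t, x0)[1],
      A[1][0] * x(t, x0)[0] + A[1][1] * x(t, x0)[1]]
assert all(abs(d - e) < 1e-6 for d, e in zip(xd, Ax))
```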



Fall 2010 16.30/31 77

EV Mechanics

Consider $A = \begin{bmatrix} -1 & 1 \\ -8 & 5 \end{bmatrix}$
$$(sI - A) = \begin{bmatrix} s + 1 & -1 \\ 8 & s - 5 \end{bmatrix}$$
$$\det(sI - A) = (s + 1)(s - 5) + 8 = s^2 - 4s + 3 = 0$$
so the eigenvalues are $s_1 = 1$ and $s_2 = 3$
Eigenvectors: $(sI - A)v = 0$
$$(s_1 I - A)v_1 = \left. \begin{bmatrix} s + 1 & -1 \\ 8 & s - 5 \end{bmatrix} \right|_{s=1} \begin{bmatrix} v_{11} \\ v_{21} \end{bmatrix} = 0$$
$$\begin{bmatrix} 2 & -1 \\ 8 & -4 \end{bmatrix} \begin{bmatrix} v_{11} \\ v_{21} \end{bmatrix} = 0 \quad \Rightarrow \quad 2v_{11} - v_{21} = 0, \quad v_{21} = 2v_{11}$$
$v_{11}$ is then arbitrary ($\ne 0$), so set $v_{11} = 1$
$$v_1 = \begin{bmatrix} 1 \\ 2 \end{bmatrix}$$
$$(s_2 I - A)v_2 = \begin{bmatrix} 4 & -1 \\ 8 & -2 \end{bmatrix} \begin{bmatrix} v_{12} \\ v_{22} \end{bmatrix} = 0 \quad \Rightarrow \quad 4v_{12} - v_{22} = 0, \quad v_{22} = 4v_{12}$$
$$v_2 = \begin{bmatrix} 1 \\ 4 \end{bmatrix}$$
Confirm that $Av_i = \lambda_i v_i$
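Confirming $Av_i = \lambda_i v_i$ numerically for this example takes only a few lines of Python:

```python
# Confirm A v_i = lam_i v_i for the example A = [[-1, 1], [-8, 5]]
A = [[-1.0, 1.0], [-8.0, 5.0]]
eigs = [(1.0, [1.0, 2.0]), (3.0, [1.0, 4.0])]

for lam, v in eigs:
    Av = [A[0][0] * v[0] + A[0][1] * v[1],
          A[1][0] * v[0] + A[1][1] * v[1]]
    assert Av == [lam * v[0], lam * v[1]]

# Characteristic polynomial check: (s+1)(s-5) + 8 = s^2 - 4s + 3 vanishes at s = 1, 3
for s in (1.0, 3.0):
    assert (s + 1.0) * (s - 5.0) + 8.0 == 0.0
```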



Fall 2010 16.30/31 78

Dynamic Interpretation

Since $A = T\Lambda T^{-1}$, then
$$e^{At} = T e^{\Lambda t} T^{-1} = \begin{bmatrix} | & & | \\ v_1 & \cdots & v_n \\ | & & | \end{bmatrix} \begin{bmatrix} e^{\lambda_1 t} & & \\ & \ddots & \\ & & e^{\lambda_n t} \end{bmatrix} \begin{bmatrix} w_1^T \\ \vdots \\ w_n^T \end{bmatrix}$$
where we have written
$$T^{-1} = \begin{bmatrix} w_1^T \\ \vdots \\ w_n^T \end{bmatrix}$$
which is a column of rows.
Multiply this expression out and we get that
$$e^{At} = \sum_{i=1}^{n} e^{\lambda_i t} v_i w_i^T$$

Assume $A$ diagonalizable; then $\dot x = Ax$, $x(0)$ given, has solution
$$x(t) = e^{At}x(0) = T e^{\Lambda t} T^{-1} x(0) = \sum_{i=1}^{n} e^{\lambda_i t} v_i \{ w_i^T x(0) \} = \sum_{i=1}^{n} e^{\lambda_i t} v_i \beta_i$$

State solution is a linear combination of the system modes $v_i e^{\lambda_i t}$

$e^{\lambda_i t}$ — determines the nature of the time response

$v_i$ — determines how each state contributes to that mode
$\beta_i$ — determines the extent to which the initial condition excites the mode



Fall 2010 16.30/31 79

Note that the $v_i$ give the relative sizing of the response of each part
of the state vector to the response.
$$x_1(t) = \begin{bmatrix} 1 \\ 0 \end{bmatrix} e^{-t} \quad \text{mode 1}$$
$$x_2(t) = \begin{bmatrix} 0.5 \\ 0.5 \end{bmatrix} e^{-3t} \quad \text{mode 2}$$
Clearly $e^{\lambda_i t}$ gives the time modulation
$\lambda_i$ real — growing/decaying exponential response
$\lambda_i$ complex — growing/decaying damped sinusoidal response

Bottom line: The locations of the eigenvalues determine the pole
locations for the system, thus:
They determine the stability and/or performance & transient behavior
of the system.

It is their locations that we will want to modify when we start the
control work



Fall 2010 16.30/31 710

Diagonalization with Complex Roots

If $A$ has complex conjugate eigenvalues, the process is similar but a
little more complicated.
Consider a 2x2 case with $A$ having eigenvalues $a \pm bi$ and associated
eigenvectors $e_1$, $e_2$, with $e_2 = \bar e_1$. Then
$$A = \begin{bmatrix} e_1 & e_2 \end{bmatrix} \begin{bmatrix} a + bi & 0 \\ 0 & a - bi \end{bmatrix} \begin{bmatrix} e_1 & e_2 \end{bmatrix}^{-1} = \begin{bmatrix} e_1 & \bar e_1 \end{bmatrix} \begin{bmatrix} a + bi & 0 \\ 0 & a - bi \end{bmatrix} \begin{bmatrix} e_1 & \bar e_1 \end{bmatrix}^{-1} \equiv TDT^{-1}$$

Now use the transformation matrix
$$M = 0.5 \begin{bmatrix} 1 & -i \\ 1 & i \end{bmatrix} \qquad M^{-1} = \begin{bmatrix} 1 & 1 \\ i & -i \end{bmatrix}$$
Then it follows that
$$A = TDT^{-1} = (TM)(M^{-1}DM)(M^{-1}T^{-1}) = (TM)(M^{-1}DM)(TM)^{-1}$$
which has the nice structure:
$$A = \begin{bmatrix} \operatorname{Re}(e_1) & \operatorname{Im}(e_1) \end{bmatrix} \begin{bmatrix} a & b \\ -b & a \end{bmatrix} \begin{bmatrix} \operatorname{Re}(e_1) & \operatorname{Im}(e_1) \end{bmatrix}^{-1}$$
where all the matrices are real.

With complex roots, the diagonalization is to a block diagonal form.



Fall 2010 16.30/31 711

For this case we have that
$$e^{At} = \begin{bmatrix} \operatorname{Re}(e_1) & \operatorname{Im}(e_1) \end{bmatrix} e^{at} \begin{bmatrix} \cos(bt) & \sin(bt) \\ -\sin(bt) & \cos(bt) \end{bmatrix} \begin{bmatrix} \operatorname{Re}(e_1) & \operatorname{Im}(e_1) \end{bmatrix}^{-1}$$

Note that
$$\begin{bmatrix} \operatorname{Re}(e_1) & \operatorname{Im}(e_1) \end{bmatrix}^{-1} \begin{bmatrix} \operatorname{Re}(e_1) & \operatorname{Im}(e_1) \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$$

So for an initial condition to excite just this mode, can pick $x(0) =
[\operatorname{Re}(e_1)]$, or $x(0) = [\operatorname{Im}(e_1)]$, or a linear combination.

Example: $x(0) = [\operatorname{Re}(e_1)]$
$$x(t) = e^{At}x(0) = \begin{bmatrix} \operatorname{Re}(e_1) & \operatorname{Im}(e_1) \end{bmatrix} e^{at} \begin{bmatrix} \cos(bt) & \sin(bt) \\ -\sin(bt) & \cos(bt) \end{bmatrix} \begin{bmatrix} \operatorname{Re}(e_1) & \operatorname{Im}(e_1) \end{bmatrix}^{-1} [\operatorname{Re}(e_1)]$$
$$= \begin{bmatrix} \operatorname{Re}(e_1) & \operatorname{Im}(e_1) \end{bmatrix} e^{at} \begin{bmatrix} \cos(bt) & \sin(bt) \\ -\sin(bt) & \cos(bt) \end{bmatrix} \begin{bmatrix} 1 \\ 0 \end{bmatrix}$$
$$= e^{at} \begin{bmatrix} \operatorname{Re}(e_1) & \operatorname{Im}(e_1) \end{bmatrix} \begin{bmatrix} \cos(bt) \\ -\sin(bt) \end{bmatrix}$$
$$= e^{at} \left( \operatorname{Re}(e_1)\cos(bt) - \operatorname{Im}(e_1)\sin(bt) \right)$$

which would ensure that only this mode is excited in the response
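In the coordinates where $A$ is exactly the block $\begin{bmatrix} a & b \\ -b & a \end{bmatrix}$ (so $\operatorname{Re}(e_1) = [1, 0]$ and $\operatorname{Im}(e_1) = [0, 1]$), the excited mode is $x(t) = e^{at}[\cos(bt), -\sin(bt)]$. The Python sketch below checks that this satisfies $\dot x = Ax$ by a central-difference test ($a$, $b$, and the test time are arbitrary choices):

```python
import math

# Block-diagonal case: A = [[a, b], [-b, a]]; mode excited by x(0) = [1, 0]
# is x(t) = e^{at} [cos(bt), -sin(bt)]
a, b = -0.3, 2.0
A = [[a, b], [-b, a]]

def x(t):
    f = math.exp(a * t)
    return [f * math.cos(b * t), -f * math.sin(b * t)]

# Central-difference check of xdot = A x at t = 0.7
t, h = 0.7, 1e-5
xd = [(p - q) / (2 * h) for p, q in zip(x(t + h), x(t - h))]
Ax = [A[0][0] * x(t)[0] + A[0][1] * x(t)[1],
      A[1][0] * x(t)[0] + A[1][1] * x(t)[1]]
assert all(abs(d - e) < 1e-6 for d, e in zip(xd, Ax))
assert x(0.0) == [1.0, 0.0]
```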



Fall 2010 16.30/31 712

Example: Spring Mass System

Classic example: spring mass system; consider the simple case first:
$m_i = 1$ and $k_i = 1$

(Figure: three masses $m_1$, $m_3$, $m_2$ between two walls, coupled by springs $k_1, \ldots, k_5$.)

$$x = \begin{bmatrix} z_1 & z_2 & z_3 & \dot z_1 & \dot z_2 & \dot z_3 \end{bmatrix}^T$$
$$A = \begin{bmatrix} 0 & I \\ -M^{-1}K & 0 \end{bmatrix} \qquad M = \operatorname{diag}(m_i)$$
$$K = \begin{bmatrix} k_1 + k_2 + k_5 & -k_5 & -k_2 \\ -k_5 & k_3 + k_4 + k_5 & -k_3 \\ -k_2 & -k_3 & k_2 + k_3 \end{bmatrix}$$
Eigenvalues and eigenvectors of the undamped system:
$$\lambda_1 = 0.77i \qquad \lambda_2 = 1.85i \qquad \lambda_3 = 2.00i$$

      v1        v2        v3
    1.00      1.00      1.00
    1.00      1.00     -1.00
    1.41     -1.41      0.00
    0.77i     1.85i     2.00i
    0.77i     1.85i    -2.00i
    1.08i    -2.61i     0.00



Fall 2010 16.30/31 713

Initial conditions to excite just the three modes:
$$x_i(0) = \alpha_1 \operatorname{Re}(v_i) + \alpha_2 \operatorname{Im}(v_i), \quad \alpha_j \in \mathbb{R}$$
Simulation using $\alpha_1 = 1$, $\alpha_2 = 0$
Visualization important for correct physical interpretation
Mode 1: $\lambda_1 = 0.77i$

m1 m3 m2

Lowest frequency mode, all masses move in same direction


Middle mass has higher amplitude motions z3, motions all in phase



Fall 2010 16.30/31 714

Mode 2: $\lambda_2 = 1.85i$

m1 m3 m2

Middle frequency mode has middle mass moving in opposition to


two end masses
Again middle mass has higher amplitude motions z3



Fall 2010 16.30/31 715

Mode 3: $\lambda_3 = 2.00i$

m1 m3 m2

Highest frequency mode, has middle mass stationary, and other


two masses in opposition

Eigenvectors that correspond to more constrained motion of the
system are associated with higher frequency eigenvalues



Fall 2010 16.30/31 716

Result if we use a random initial condition is a combination of all three modes



Fall 2010 16.30/31 717
Code: Simulation of Spring Mass System Modes

% Simulate modal response for a spring mass system
% Jonathan How, MIT
% Fall 2009
alp1=1; % weighting choice on the IC
m=eye(3); % mass
k=[3 -1 -1;-1 3 -1;-1 -1 2]; % stiffness
a=[m*0 eye(3);-inv(m)*k m*0];
[v,d]=eig(a);
t=[0:.01:20]; l1=1:25:length(t);
G=ss(a,zeros(6,1),zeros(1,6),0);

% use the following to call the function above
close all
set(0, 'DefaultAxesFontSize', 14, 'DefaultAxesFontWeight','demi')
set(0, 'DefaultTextFontSize', 14, 'DefaultTextFontWeight','demi')
set(0,'DefaultAxesFontName','arial');
set(0,'DefaultTextFontName','arial');set(0,'DefaultlineMarkerSize',10)

figure(1);clf
x0=alp1*real(v(:,1))+(1-alp1)*imag(v(:,1))
[y,t,x]=lsim(G,0*t,t,x0);
plot(t,x(:,1),'-','LineWidth',2);hold on
plot(t(l1),x(l1,2),'rx','LineWidth',2)
plot(t,x(:,3),'m','LineWidth',2);hold off
xlabel('time');ylabel('displacement')
title(['Mode with \lambda=', num2str(imag(d(1,1))),' i'])
legend('z1','z2','z3')
print -dpng -r300 v1.png

figure(2);clf
x0=alp1*real(v(:,5))+(1-alp1)*imag(v(:,5))
[y,t,x]=lsim(G,0*t,t,x0);
plot(t,x(:,1),'-','LineWidth',2);hold on
plot(t(l1),x(l1,2),'rx','LineWidth',2)
plot(t,x(:,3),'m','LineWidth',2);hold off
xlabel('time');ylabel('displacement')
title(['Mode with \lambda=', num2str(imag(d(5,5))),' i'])
legend('z1','z2','z3')
print -dpng -r300 v3.png

figure(3);clf
x0=alp1*real(v(:,3))+(1-alp1)*imag(v(:,3))
[y,t,x]=lsim(G,0*t,t,x0);
plot(t,x(:,1),'-','LineWidth',2);hold on
plot(t,x(:,2),'r.','LineWidth',2)
plot(t,x(:,3),'m','LineWidth',2);hold off
xlabel('time');ylabel('displacement')
title(['Mode with \lambda=', num2str(imag(d(3,3))),' i'])
legend('z1','z2','z3')
print -dpng -r300 v2.png

figure(4);clf
x0=[1 0 1 0 0 0]';
[y,t,x]=lsim(G,0*t,t,x0);
plot(t,x(:,1),'-','LineWidth',2)
hold on
plot(t,x(:,2),'r.','LineWidth',2)
plot(t,x(:,3),'m','LineWidth',2)
hold off
xlabel('time');ylabel('displacement')
title(['Random Initial Condition'])
legend('z1','z2','z3')
print -dpng -r300 v4.png



Topic #8

16.30/31 Feedback Control Systems

State-Space Systems
System Zeros
Transfer Function Matrices for MIMO systems

Fall 2010 16.30/31 81

Zeros in State Space Models

Roots of the transfer function numerator are called the system zeros.

Need to develop a similar way of defining/computing them using a state space model.

Zero: a generalized frequency s_0 for which the system can have a non-zero input u(t) = u_0 e^{s_0 t}, but exactly zero output y(t) ≡ 0 ∀t
Note that there is a specific initial condition x_0 associated with this response, so the state response is of the form x(t) = x_0 e^{s_0 t}

u(t) = u_0 e^{s_0 t}  →  x(t) = x_0 e^{s_0 t}  →  y(t) ≡ 0

Given ẋ = Ax + Bu, substitute the above to get:

x_0 s_0 e^{s_0 t} = A x_0 e^{s_0 t} + B u_0 e^{s_0 t}  ⇒  [ s_0 I - A   -B ] [ x_0 ; u_0 ] = 0

Also have that y = Cx + Du ≡ 0, which gives:

C x_0 e^{s_0 t} + D u_0 e^{s_0 t} = 0  ⇒  [ C   D ] [ x_0 ; u_0 ] = 0

So we must find the s_0 that solves:

[ s_0 I - A   -B ] [ x_0 ]
[     C        D ] [ u_0 ]  =  0

This is a generalized eigenvalue problem that can be solved in MATLAB using eig.m or tzero.m 1
1 MATLAB is a trademark of the Mathworks Inc.

October 17, 2010


Fall 2010 16.30/31 82

There is a zero at the frequency s_0 if there exists a non-trivial solution of

det [ s_0 I - A   -B ] = 0
    [     C        D ]

Compare with equation on page 6??

Key Point: Zeros have both direction [ x_0 ; u_0 ] and frequency s_0
Just as we would associate a direction (eigenvector) with each pole (frequency λ_i)

Example: G(s) = (s + 2)/(s^2 + 7s + 12)

A = [ -7  -12 ]   B = [ 1 ]   C = [ 1  2 ]   D = 0
    [  1    0 ]       [ 0 ]

det [ s_0 I - A   -B ] = det [ s_0 + 7   12   -1 ]
    [     C        D ]       [   -1     s_0    0 ]
                             [    1      2     0 ]

Expanding along the third column (only the -1 entry survives):

= (-1)·det [ -1  s_0 ] = (-1)(-2 - s_0) = s_0 + 2 = 0
           [  1   2  ]

so there is clearly a zero at s_0 = -2, as we expected. For the directions, solve:

[ s_0 + 7   12   -1 ] [ x_01 ]              [  5   12   -1 ] [ x_01 ]
[   -1     s_0    0 ] [ x_02 ]          =   [ -1   -2    0 ] [ x_02 ]  =  0
[    1      2     0 ] [ u_0  ] s_0 = -2     [  1    2    0 ] [ u_0  ]

which gives x_01 = -2x_02 and u_0 = 2x_02, so that with x_02 = 1:

x_0 = [ -2 ]   and   u = 2e^{-2t}
      [  1 ]
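This zero computation is easy to check numerically. Below is a small NumPy sketch (an illustrative check, not part of the original MATLAB handouts) that forms the generalized eigenvalue matrix for this example and recovers both the zero frequency and its direction:

```python
import numpy as np

# System from the example: G(s) = (s + 2) / (s^2 + 7s + 12)
A = np.array([[-7.0, -12.0], [1.0, 0.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[1.0, 2.0]])
D = np.array([[0.0]])

def zero_matrix(s0):
    # [ s0*I - A   -B ]
    # [    C        D ]
    top = np.hstack([s0 * np.eye(2) - A, -B])
    bot = np.hstack([C, D])
    return np.vstack([top, bot])

# Determinant vanishes at the zero s0 = -2 but not elsewhere
print(np.linalg.det(zero_matrix(-2.0)))   # ~0
print(np.linalg.det(zero_matrix(-1.0)))   # nonzero (equals s0 + 2 = 1 here)

# Null-space direction [x01, x02, u0] from the SVD at s0 = -2
_, _, Vt = np.linalg.svd(zero_matrix(-2.0))
direction = Vt[-1]
direction = direction / direction[1]      # scale so x02 = 1
print(direction)                          # ~[-2, 1, 2]
```

The determinant vanishes only at s_0 = -2, and the null-space direction matches [x_01, x_02, u_0] = [-2, 1, 2] found by hand.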



Fall 2010 16.30/31 83

Further observations: apply the specified control input in the frequency domain, so that

Y_1(s) = G(s)U(s)

where u = 2e^{-2t}, so that U(s) = 2/(s + 2)

Y_1(s) = (s + 2)/(s^2 + 7s + 12) · 2/(s + 2) = 2/(s^2 + 7s + 12)

Say that s = -2 is a blocking zero or a transmission zero.

The response Y_1(s) is clearly non-zero, but it does not contain a component at the input frequency s = -2.
That input has been blocked.

Note that the output response left in Y_1(s) is of a very special form: it corresponds to the (negative of the) response you would see from the system with u(t) = 0 and x_0 = [ -2  1 ]^T

Y_2(s) = C(sI - A)^{-1}x_0
       = [ 1  2 ] [ s + 7   12 ]^{-1} [ -2 ]
                  [  -1      s ]      [  1 ]
       = [ 1  2 ] [ s   -12   ] [ -2 ] · 1/(s^2 + 7s + 12)
                  [ 1   s + 7 ] [  1 ]
       = -2/(s^2 + 7s + 12)

So then the total output is Y(s) = Y_1(s) + Y_2(s), showing that Y(s) = 0 → y(t) = 0, as expected.
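The blocking property can be confirmed numerically: the forced response Y_1(s) and the initial-condition response Y_2(s) cancel at every frequency. A NumPy sketch (illustrative, not part of the original handouts):

```python
import numpy as np

A = np.array([[-7.0, -12.0], [1.0, 0.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[1.0, 2.0]])
x0 = np.array([[-2.0], [1.0]])

def G(s):
    return (C @ np.linalg.inv(s * np.eye(2) - A) @ B)[0, 0]

for s in [0.5, 1.0 + 2.0j, -0.7]:
    Y1 = G(s) * 2.0 / (s + 2.0)                              # forced response, U(s) = 2/(s+2)
    Y2 = (C @ np.linalg.inv(s * np.eye(2) - A) @ x0)[0, 0]   # initial-condition response
    print(Y1 + Y2)   # ~0 at every s: the total output is identically zero
```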



Fall 2010 16.30/31 84

Simpler Test

Simpler test using the transfer function matrix:

If z is a zero with (right) direction [ ξ^T, u^T ]^T, then

[ zI - A   -B ] [ ξ ]
[    C      D ] [ u ]  =  0

If z is not an eigenvalue of A, then ξ = (zI - A)^{-1}Bu, which gives

[ C(zI - A)^{-1}B + D ] u = G(z)u = 0

which implies that G(s) loses rank at s = z
If G(s) is square, can find the zero frequencies by solving det G(s) = 0
If any of the resulting roots are also eigenvalues of A, need to re-check the generalized eigenvalue matrix condition.

Need to be very careful when we find MIMO zeros that have the same frequency as the poles of the system, because it is not obvious that a pole/zero cancelation will occur (for MIMO systems).
The zeros have a directionality associated with them, and that
must agree as well, or else you do not get cancelation
More on this topic later when we talk about controllability and
observability



Fall 2010 16.30/31 85

Transfer Function Matrix


Note that the transfer function matrix (TFM) notion is a MIMO generalization of the SISO transfer function
It is a matrix of transfer functions

G(s) = [ g_11(s)  ···  g_1m(s) ]
       [    ⋮      ⋱      ⋮    ]
       [ g_p1(s)  ···  g_pm(s) ]

g_ij(s) relates the input of actuator j to the output of sensor i.
It is relatively easy to go from a state-space model to a TFM, but not obvious how to go back the other way.

Simplest approach is to develop a state space model for each element g_ij(s) in the form A_ij, B_ij, C_ij, D_ij, and then assemble (if the TFM is p × m):

A = [ A_11              ]      B = [ B_11 ]
    [      ⋱            ]          [  ⋮   ]
    [        A_1m       ]          [ B_1m ]
    [           A_21    ]          [ B_21 ]
    [              ⋱    ]          [  ⋮   ]
    [               A_pm ]         [ B_pm ]

C = [ C_11  ···  C_1m ]
    [ C_21  ···  C_2m ]          D = [ D_ij ]
    [        ⋮        ]
    [ C_p1  ···  C_pm ]

where each B_ij is driven by input j only (zeros fill the other input columns), and row i of C collects the C_ij for output i.



Fall 2010 16.30/31 86

One issue is how many poles are needed - this realization might be inefficient (larger than necessary).
Related to the McMillan degree, which for a proper system is the degree of the characteristic polynomial obtained as the least common denominator of all minors of G(s).2
Subtle point: consider an m × m matrix A; then the standard minors formed by deleting 1 row and column and taking the determinant of the resulting matrix are called the (m - 1)th order minors of A.
To consider all minors of A, must consider all possible orders, i.e. by selecting j ≤ m subsets of the rows and columns and taking the resulting determinant.
Given an n × m matrix A with entries a_ij, a minor of A is the determinant of a smaller matrix formed from its entries by selecting only some of the rows and columns.
Let K = { k_1, k_2, . . . , k_p } and L = { l_1, l_2, . . . , l_p } be subsets of {1, 2, . . . , n} and {1, 2, . . . , m}, respectively.
Indices are chosen so k_1 < k_2 < · · · < k_p and l_1 < l_2 < · · · < l_p.3
The pth order minor defined by K and L is the determinant

[A]_{K,L} = det [ a_{k_1 l_1}   a_{k_1 l_2}   . . .   a_{k_1 l_p} ]
                [ a_{k_2 l_1}   a_{k_2 l_2}   . . .   a_{k_2 l_p} ]
                [      ⋮             ⋮                     ⋮      ]
                [ a_{k_p l_1}   a_{k_p l_2}   . . .   a_{k_p l_p} ]

If p = m = n then the minor is simply the determinant of the matrix.

In a nutshell what this means is that a 2 × 2 matrix has 4 order-1 minors and 1 order-2 minor to consider.
2 Lowest order polynomial that can be divided cleanly by all denominators of the minors of G(s).
3 See here for details



Fall 2010 16.30/31 87

Gilbert's Realization
One approach: rewrite the TFM as

G(s) = H(s)/d(s)

where d(s) is the least common multiple of the denominators of the entries of G(s).
Note the difference from the discussion about the McMillan degree.
d(s) looks like a characteristic equation for this system, but it is not: it does not accurately reflect the number of poles needed.

For proper systems for which d(s) has distinct roots, can use Gilbert's realization.
Apply a partial fraction expansion to each of the elements of the TFM G(s) and collect residues for each distinct pole4:

G(s) = Σ_{i=1}^{N_m} R_i/(s - p_i)   where   R_i = lim_{s→p_i} (s - p_i)G(s)

Then the sum of the ranks of the matrices R_i gives the McMillan degree

4 Generalizations of this Gilbert realization approach exist if the g_ij have repeated roots.



Fall 2010 16.30/31 88

Can develop a state space realization by analyzing each element of the partial fraction expansion
Set R_i = C_iB_i, and find appropriate B_i and C_i
Form A_i by placing the poles on the diagonal as many times as needed (determined by the rank of R_i)
Form state space model:

ẋ = [ A_1           ]     [ B_1    ]
    [      ⋱        ] x + [  ⋮     ] u
    [        A_{N_m} ]    [ B_{N_m} ]

y = [ C_1  ···  C_{N_m} ] x



Fall 2010 16.30/31 89

Zero Example 1

TFM G(s) = [ 1/(s + 2)      1/(s + 2)                ]
           [ 1/(s - 2)      (s - 2)/((s + 1)(s + 2)) ]

To compute the McMillan degree for this system, form all minors (4 order-1 and 1 order-2):

1/(s + 2),   1/(s + 2),   1/(s - 2),   (s - 2)/((s + 1)(s + 2)),   (2 - 7s)/((s - 2)(s + 1)(s + 2)^2)

To find the LCD (least common multiple of the denominators), pull out the smallest polynomial that leaves all terms with no denominator:

1/((s - 2)(s + 1)(s + 2)^2) · { (s - 2)(s + 1)(s + 2),  (s - 2)(s + 1)(s + 2),  (s + 1)(s + 2)^2,  (s - 2)^2(s + 2),  2 - 7s }

So we expect a fourth order system with poles at s = 2, s = -2 (two), and s = -1

Compare with the Gilbert realization, find d(s):

G(s) = 1/((s + 1)(s + 2)(s - 2)) [ (s + 1)(s - 2)    (s + 1)(s - 2) ]
                                 [ (s + 1)(s + 2)    (s - 2)^2      ]

     = 1/(s + 1) [ 0   0 ]  +  1/(s - 2) [ 0  0 ]  +  1/(s + 2) [ 1  1 ]
                 [ 0  -3 ]               [ 1  0 ]               [ 0  4 ]

Note that the rank of the last 2 × 2 matrix is 2
So the system order is 4: we need to have two poles at s = -2.
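The residue-rank computation can be sketched numerically as well. The following NumPy fragment (an illustrative check; the handouts use MATLAB) approximates each residue matrix near its pole and sums the ranks to get the McMillan degree:

```python
import numpy as np

# G(s) for Zero Example 1, evaluated entrywise
def G(s):
    return np.array([[1.0 / (s + 2), 1.0 / (s + 2)],
                     [1.0 / (s - 2), (s - 2) / ((s + 1) * (s + 2))]])

# Residue matrices R_i = lim_{s->p_i} (s - p_i) G(s), approximated near each pole
def residue(p, eps=1e-7):
    s = p + eps
    return (s - p) * G(s)

ranks = [np.linalg.matrix_rank(residue(p), tol=1e-3) for p in (-1.0, 2.0, -2.0)]
print(ranks)        # [1, 1, 2]
print(sum(ranks))   # McMillan degree 4
```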



Fall 2010 16.30/31 810

So the system model for the example is

A_1 = [-1]                B_1 = [ 0  -3 ]       C_1 = [ 0 ]
                                                      [ 1 ]

A_2 = [ -2   0 ]          B_2 = [ 1  1 ]        C_2 = I_2
      [  0  -2 ]                [ 0  4 ]

A_3 = [2]                 B_3 = [ 1  0 ]        C_3 = [ 0 ]
                                                      [ 1 ]

Note, the element-by-element realization approach on page 8-5 would be 5th order, not 4th.

Code: MIMO Models


%
% basic MIMO TFM to SS
%
G=tf({1 1;1 [1 -2]},{[1 2] [1 2];[1 -2] [1 3 2]});

% find residue matrices of the 3 poles
R1=tf([1 1],1)*G;R1=minreal(R1);R1=evalfr(R1,-1)
R2=tf([1 2],1)*G;R2=minreal(R2);R2=evalfr(R2,-2)
R3=tf([1 -2],1)*G;R3=minreal(R3);R3=evalfr(R3,2)

% form SS model for 3 poles using the residue matrices
A1=[-1];B1=R1(2,:);C1=[0 1]';
A2=[-2 0;0 -2];B2=R2;C2=eye(2);
A3=[2];B3=R3(2,:);C3=[0 1]';

% combine submodels
A=zeros(4);A(1:1,1:1)=A1;A(2:3,2:3)=A2;A(4,4)=A3;
B=[B1;B2;B3];
C=[C1 C2 C3];

syms s
Gn=simple(C*inv(s*eye(4)-A)*B);

% alternative is to make a SS model of each g_{ij}
A11=-2;B11=1;C11=1;
A12=-2;B12=1;C12=1;
A21=2;B21=1;C21=1;
A22=[-3 -2;1 0];B22=[2 0]';C22=[0.5 -1];

% and then combine
AA=zeros(5);AA(1,1)=A11;AA(2,2)=A12;AA(3,3)=A21;AA(4:5,4:5)=A22;
BB=[B11 B11*0;B12*0 B12;B21 B21*0;B22*0 B22];
CC=[C11 C12 zeros(1,3);zeros(1,2) C21 C22];
GGn=simple(CC*inv(s*eye(5)-AA)*BB);

Gn,GGn
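As a sanity check on the 4-state realization derived above, the following NumPy sketch (illustrative, not part of the original code) assembles A, B, C from the residue factorizations and compares C(sI - A)^{-1}B against the TFM entries at a few test frequencies:

```python
import numpy as np

# Gilbert realization blocks: poles at -1, -2 (twice, since rank R2 = 2), +2
A = np.diag([-1.0, -2.0, -2.0, 2.0])
B = np.array([[0.0, -3.0],    # B1: residue row for the pole at -1
              [1.0,  1.0],    # B2 = R2
              [0.0,  4.0],
              [1.0,  0.0]])   # B3: residue row for the pole at +2
C = np.array([[0.0, 1.0, 0.0, 0.0],
              [1.0, 0.0, 1.0, 1.0]])

def G_ss(s):
    return C @ np.linalg.inv(s * np.eye(4) - A) @ B

def G_tf(s):
    return np.array([[1.0 / (s + 2), 1.0 / (s + 2)],
                     [1.0 / (s - 2), (s - 2) / ((s + 1) * (s + 2))]])

for s in [0.5, 1.0 + 1.0j, -3.5]:
    print(np.max(np.abs(G_ss(s) - G_tf(s))))   # ~0 at each test frequency
```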



Fall 2010 16.30/31 811

Zero Example 2

TFM G(s) = [ 1/(s + 1)      1/(s + 1)^2 ]
           [ 1/(s + 1)^3    1/(s + 1)^4 ]

McMillan Degree: find all minors of G(s)

1/(s + 1),   1/(s + 1)^2,   1/(s + 1)^3,   1/(s + 1)^4,   0

To find the LCD (least common multiple of the denominators), pull out the smallest polynomial that leaves all terms with no denominator:

1/(s + 1)^4 · { (s + 1)^3,  (s + 1)^2,  (s + 1),  1 }

So the LCD is (s + 1)^4 and the McMillan degree is 4: we expect the minimal state space model to have 4 poles at s = -1.
The Gilbert approach as given cannot be applied directly since d(s) = (s + 1)^4 has repeated roots
See the Matlab code for model development
See Matlab code for model development

(The direct element-by-element realization returned by Matlab has A ∈ R^{10×10}, B ∈ R^{10×2}, C ∈ R^{2×10}, and D = 0; the full numerical matrices are omitted here.)

Note that λ_i(A) = -1 for all i: there are 10 poles there. So this is clearly not minimal, since the order is 10, not the 4 we expected.



Fall 2010 16.30/31 812

The Matlab command minreal can be used to convert to a minimal realization.

(The resulting 4-state A, B, C matrices are omitted here; D = 0.)

The new model has 6 states removed, so the minimal degree is 4, as expected.
Code: Zeros (zero_example1.m)

G1=ss(tf({1 1;1 1},{[1 1] conv([1 1],[1 1]);conv([1 1],conv([1 1],[1 1])) ...
    conv([1 1],conv([1 1],conv([1 1],[1 1])))}));
[a,b,c,d]=ssdata(G1);
latex(a,'%.2f','nomath')
latex(b,'%.2f','nomath')
latex(c,'%.2f','nomath')
latex(d,'%.2f','nomath')
G2=minreal(G1);[a2,b2,c2,d2]=ssdata(G2);
latex(a2,'%.2f','nomath')
latex(b2,'%.2f','nomath')
latex(c2,'%.2f','nomath')
latex(d2,'%.2f','nomath')



Fall 2010 16.30/31 813

Zero Example 3

TFM G(s) = [ (2s + 3)/(s^2 + 3s + 2)    (3s + 5)/(s^2 + 3s + 2) ]
           [ 1/(s + 1)                  0                        ]

McMillan Degree: find all minors of G(s)

(2s + 3)/(s^2 + 3s + 2),   (3s + 5)/(s^2 + 3s + 2),   1/(s + 1),   -(3s + 5)/((s + 1)(s^2 + 3s + 2))

To find the LCD, pull out the smallest polynomial that leaves all terms with no denominator:

1/((s^2 + 3s + 2)(s + 1)) · { (2s + 3)(s + 1),  (3s + 5)(s + 1),  (s^2 + 3s + 2),  -(3s + 5) }

So the LCD is (s^2 + 3s + 2)(s + 1) = (s + 1)^2(s + 2)
The McMillan degree is 3: we expect the minimal state space model to have 3 poles.

For the Gilbert approach, we rewrite

G(s) = 1/((s + 1)(s + 2)) [ 2s + 3   3s + 5 ]  =  R_1/(s + 1) + R_2/(s + 2)
                          [ s + 2    0      ]

where

R_1 = lim_{s→-1} (s + 1)G(s) = lim_{s→-1} [ (2s + 3)/(s + 2)   (3s + 5)/(s + 2) ]  =  [ 1  2 ]
                                          [ 1                  0                ]     [ 1  0 ]

R_2 = lim_{s→-2} (s + 2)G(s) = lim_{s→-2} [ (2s + 3)/(s + 1)   (3s + 5)/(s + 1) ]  =  [ 1  1 ]
                                          [ (s + 2)/(s + 1)    0                ]     [ 0  0 ]

which also indicates that we will have a third order system with 2 poles at s = -1 and 1 at s = -2.



Fall 2010 16.30/31 814

For the state space model, note that

R_1 = [ 1  2 ]  =  C_1B_1   with   C_1 = [ 1  2 ],   B_1 = [ 1  0 ]
      [ 1  0 ]                           [ 1  0 ]          [ 0  1 ]

R_2 = [ 1 ] [ 1  1 ]  =  C_2B_2
      [ 0 ]

giving

A = [ -1   0   0 ]       B = [ 1  0 ]
    [  0  -1   0 ]           [ 0  1 ]
    [  0   0  -2 ]           [ 1  1 ]

C = [ 1  2  1 ]
    [ 1  0  0 ]

From Matlab you get:

A = [ -1.00   0.00   0.00 ]      B = [ 0.56  1.12 ]
    [  0.00  -2.00   0.00 ]          [ 0.35  0.35 ]
    [  0.00   0.00  -1.00 ]          [ 0.50  0.00 ]

C = [ 1.79  2.83  0.00 ]         D = [ 0.00  0.00 ]
    [ 0.00  0.00  2.00 ]             [ 0.00  0.00 ]
Code: Zeros (zero_example2.m)

G1=ss(tf({[2 3] [3 5];1 0},{[1 3 2] [1 3 2];[1 1] 1}));
G1=canon(G1,'modal')
[a,b,c,d]=ssdata(G1);
latex(a,'%.2f','nomath')
latex(b,'%.2f','nomath')
latex(c,'%.2f','nomath')
latex(d,'%.2f','nomath')
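The 3-state realization derived above can be verified the same way; a NumPy sketch (illustrative, not part of the original code) comparing the realization against the TFM entries at a few frequencies:

```python
import numpy as np

# Gilbert realization for Zero Example 3: two poles at -1, one at -2
A = np.diag([-1.0, -1.0, -2.0])
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
C = np.array([[1.0, 2.0, 1.0],
              [1.0, 0.0, 0.0]])

def G_ss(s):
    return C @ np.linalg.inv(s * np.eye(3) - A) @ B

def G_tf(s):
    den = s * s + 3 * s + 2
    return np.array([[(2 * s + 3) / den, (3 * s + 5) / den],
                     [1.0 / (s + 1), 0.0]])

for s in [0.0, 1.0, 2.0 + 1.0j]:
    print(np.max(np.abs(G_ss(s) - G_tf(s))))   # ~0
```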



Fall 2010 16.30/31 815

Summary of Zeros and TFMs

A great feature of solving for zeros using the generalized eigenvalue matrix condition is that it can be used to find the MIMO zeros of a system with multiple inputs/outputs.

det [ s_0 I - A   -B ] = 0
    [     C        D ]

Note: we have to be careful how we analyze these TFMs.
Just looking at individual transfer functions is not useful.
Need to look at the system as a whole: use the singular values of G(s)

Will see later the conditions to determine if the order of a state space model is minimal.



Topic #9

16.30/31 Feedback Control Systems

State-Space Systems
State-space model features
Observability
Controllability
Minimal Realizations
Fall 2010 16.30/31 92

State-Space Model Features

There are some key characteristics of a state-space model that we


need to identify.
Will see that these are very closely associated with the concepts
of pole/zero cancelation in transfer functions.

Example: Consider a simple system

G(s) = 6/(s + 2)

for which we develop the state-space model

Model #1:   ẋ = -2x + 2u
            y = 3x

But now consider the new state space model, with x̄ = [ x  x_2 ]^T:

Model #2:   x̄˙ = [ -2   0 ] x̄ + [ 2 ] u
                 [  0  -1 ]      [ 1 ]
            y = [ 3  0 ] x̄

which is clearly different than the first model, and larger.

But let's look at the transfer function of the new model:

Ḡ(s) = C(sI - A)^{-1}B + D
     = [ 3  0 ] ( sI - [ -2   0 ] )^{-1} [ 2 ]
                       [  0  -1 ]        [ 1 ]
     = [ 3  0 ] [ 2/(s + 2) ]
                [ 1/(s + 1) ]
     = 6/(s + 2)  !!
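The claim that both models produce the same transfer function is easy to verify numerically; a NumPy sketch (illustrative, not part of the original handouts):

```python
import numpy as np

# Model 1: minimal;  Model 2: one extra (unobservable) state
A1, B1, C1 = np.array([[-2.0]]), np.array([[2.0]]), np.array([[3.0]])
A2 = np.array([[-2.0, 0.0], [0.0, -1.0]])
B2 = np.array([[2.0], [1.0]])
C2 = np.array([[3.0, 0.0]])

def tf(A, B, C, s):
    n = A.shape[0]
    return (C @ np.linalg.inv(s * np.eye(n) - A) @ B)[0, 0]

for s in [0.0, 1.0, 5.0j]:
    print(tf(A1, B1, C1, s), tf(A2, B2, C2, s), 6.0 / (s + 2.0))  # all three agree
```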

September 30, 2010


Fall 2010 16.30/31 93

This is a bit strange, because previously our figure of merit when comparing one state-space model to another (page 6??) was whether they reproduced the same transfer function
Now we have two very different models that result in the same transfer function
Note that I showed the second model as having 1 extra state, but I could easily have done it with 99 extra states!!

So what is going on?

A clue is that the dynamics associated with the second state of the model (x_2) were eliminated when we formed the product

Ḡ(s) = [ 3  0 ] [ 2/(s + 2) ]
                [ 1/(s + 1) ]

because the A is decoupled and there is a zero in the C matrix
Which is exactly the same as saying that there is a pole-zero cancelation in the transfer function Ḡ(s)

6/(s + 2) = 6(s + 1)/((s + 2)(s + 1)) ≜ Ḡ(s)

Note that model #2 is one possible state-space model of Ḡ(s) (has 2 poles)

For this system we say that the dynamics associated with the second state are unobservable using this sensor (defines the C matrix).
There could be a lot of motion associated with x_2, but we would be unaware of it using this sensor.



Fall 2010 16.30/31 94

There is an analogous problem on the input side as well. Consider:

Model #1:   ẋ = -2x + 2u
            y = 3x

with x̄ = [ x  x_2 ]^T

Model #3:   x̄˙ = [ -2   0 ] x̄ + [ 2 ] u
                 [  0  -1 ]      [ 0 ]
            y = [ 3  2 ] x̄

which is also clearly different than model #1, and has a different form from the second model.

Ḡ(s) = [ 3  2 ] ( sI - [ -2   0 ] )^{-1} [ 2 ]
                       [  0  -1 ]        [ 0 ]
     = [ 3/(s + 2)   2/(s + 1) ] [ 2 ]
                                 [ 0 ]
     = 6/(s + 2)  !!

Once again the dynamics associated with the pole at s = -1 are canceled out of the transfer function.
But in this case it occurred because there is a 0 in the B matrix

So in this case we can see the state x_2 in the output ( C = [ 3  2 ] ), but we cannot influence that state with the input since

B = [ 2 ]
    [ 0 ]

So we say that the dynamics associated with the second state are uncontrollable using this actuator (defines the B matrix).



Fall 2010 16.30/31 95

Of course it can get even worse, because we could have

x̄˙ = [ -2   0 ] x̄ + [ 2 ] u
     [  0  -1 ]      [ 0 ]
y = [ 3  0 ] x̄

So now we have

Ḡ(s) = [ 3  0 ] ( sI - [ -2   0 ] )^{-1} [ 2 ]
                       [  0  -1 ]        [ 0 ]
     = [ 3/(s + 2)   0 ] [ 2 ]
                         [ 0 ]
     = 6/(s + 2)  !!

We get the same result for the transfer function, but now the dynamics associated with x_2 are both unobservable and uncontrollable.

Summary: Dynamics in the state-space model that are uncontrollable, unobservable, or both do not show up in the transfer function.

Would like to develop models that only have dynamics that are both controllable and observable
Called a minimal realization
A state space model that has the lowest possible order for the given transfer function.

But first need to develop tests to determine if the models are observable and/or controllable



Fall 2010 16.30/31 96

Observability

Definition: An LTI system is observable if the initial state x(0) can be uniquely deduced from the knowledge of the input u(t) and output y(t) for all t between 0 and any finite T > 0.
If x(0) can be deduced, then we can reconstruct x(t) exactly, because we know u(t), so we can find x(t) ∀t.
Thus we need only consider the zero-input (homogeneous) solution to study observability:

y(t) = Ce^{At}x(0)

This definition of observability is consistent with the notion we used before of being able to see all the states in the output of the decoupled examples
ROT: For those decoupled examples, if part of the state cannot be seen in y(t), then it would be impossible to deduce that part of x(0) from the outputs y(t).



Fall 2010 16.30/31 97

Definition: A state x* ≠ 0 is said to be unobservable if the zero-input solution y(t), with x(0) = x*, is zero for all t ≥ 0
Equivalent to saying that x* is an unobservable state if

Ce^{At}x* = 0   ∀t ≥ 0

For the problem we were just looking at, consider Model #2 with x* = [ 0  1 ]^T ≠ 0, then

x̄˙ = [ -2   0 ] x̄ + [ 2 ] u
     [  0  -1 ]      [ 1 ]
y = [ 3  0 ] x̄

so

Ce^{At}x* = [ 3  0 ] [ e^{-2t}     0    ] [ 0 ]
                     [    0     e^{-t}  ] [ 1 ]
          = [ 3e^{-2t}   0 ] [ 0 ]  =  0   ∀t
                             [ 1 ]

So, x* = [ 0  1 ]^T is an unobservable state for this system.

But that is as expected, because we knew there was a problem with the state x_2 from the previous analysis



Fall 2010 16.30/31 98

Theorem: An LTI system is observable iff it has no unobservable states.
We normally just say that the pair (A, C) is observable.

Pseudo-Proof: Let x* ≠ 0 be an unobservable state and compute the outputs from the initial conditions x_1(0) and x_2(0) = x_1(0) + x*
Then

y_1(t) = Ce^{At}x_1(0)   and   y_2(t) = Ce^{At}x_2(0)

but

y_2(t) = Ce^{At}(x_1(0) + x*) = Ce^{At}x_1(0) + Ce^{At}x* = Ce^{At}x_1(0) = y_1(t)

Thus 2 different initial conditions give the same output y(t), so it would be impossible for us to deduce the actual initial condition of the system (x_1(t) or x_2(t)) given y_1(t)

Testing system observability by searching for a vector x(0) such that Ce^{At}x(0) = 0 ∀t is feasible, but very hard in general.
Better tests are available.



Fall 2010 16.30/31 99

Theorem: The vector x* is an unobservable state iff

[ C         ]
[ CA        ]
[ CA^2      ] x* = 0
[  ⋮        ]
[ CA^{n-1}  ]

Pseudo-Proof: If x* is an unobservable state, then by definition,

Ce^{At}x* = 0   ∀t ≥ 0

But all the derivatives of Ce^{At} exist, and for this condition to hold, all derivatives must be zero at t = 0. Then

Ce^{At}x* |_{t=0} = 0              ⇒  Cx* = 0
d/dt Ce^{At}x* |_{t=0} = CAe^{At}x* |_{t=0} = 0        ⇒  CAx* = 0
d^2/dt^2 Ce^{At}x* |_{t=0} = CA^2 e^{At}x* |_{t=0} = 0  ⇒  CA^2 x* = 0
⋮
d^k/dt^k Ce^{At}x* |_{t=0} = CA^k e^{At}x* |_{t=0} = 0  ⇒  CA^k x* = 0

We only need retain up to the (n - 1)th derivative because of the Cayley-Hamilton theorem.



Fall 2010 16.30/31 910

Simple test: A necessary and sufficient condition for observability is that

                 [ C         ]
                 [ CA        ]
rank M_o ≜ rank  [ CA^2      ]  =  n
                 [  ⋮        ]
                 [ CA^{n-1}  ]

Why does this make sense?

The requirement for an unobservable state is that for x* ≠ 0

M_o x* = 0

Which is equivalent to saying that x* is orthogonal to each row of M_o.
But if the rows of M_o are considered to be vectors and these span the full n-dimensional space, then it is not possible to find an n-vector x* that is orthogonal to each of these.
To determine if the rows of M_o span the full n-dimensional space, we need to test their linear independence, which is equivalent to the rank test1

1 Let M be an m × p matrix. Then the rank of M satisfies:
1. rank M ≜ number of linearly independent columns of M
2. rank M ≜ number of linearly independent rows of M
3. rank M ≤ min{m, p}



Topic #10

16.30/31 Feedback Control Systems

State-Space Systems
State-space model features
Controllability
Fall 2010 16.30/31 101

Controllability

Definition: An LTI system is controllable if, for every x* and every finite T > 0, there exists an input function u(t), 0 < t ≤ T, such that the system state goes from x(0) = 0 to x(T) = x*.

Starting at 0 is not a special case: if we can get to any state in finite time from the origin, then we can get from any initial condition to that state in finite time as well.1

This definition of controllability is consistent with the notion we used before of being able to influence all the states in the system in the decoupled examples (page 9??).

ROT: For those decoupled examples, if part of the state cannot be influenced by u(t), then it would be impossible to move that part of the state from 0 to x*

Need only consider the forced solution to study controllability:

x_f(t) = ∫_0^t e^{A(t-τ)}Bu(τ)dτ

Change of variables τ_2 = t - τ, dτ = -dτ_2 gives a form that is a little easier to work with:

x_f(t) = ∫_0^t e^{Aτ_2}Bu(t - τ_2)dτ_2

Assume the system has m inputs.

1 This controllability from the origin is often called reachability.

October 13, 2010


Fall 2010 16.30/31 102

Note that, regardless of the eigenstructure of A, the Cayley-Hamilton theorem gives

e^{At} = Σ_{i=0}^{n-1} A^i α_i(t)

for some computable scalars α_i(t), so that

x_f(t) = Σ_{i=0}^{n-1} (A^i B) ∫_0^t α_i(τ_2)u(t - τ_2)dτ_2 = Σ_{i=0}^{n-1} (A^i B) β_i(t)

for coefficients β_i(t) that depend on the input u(τ), 0 < τ ≤ t.

Result can be interpreted as meaning that the state x_f(t) is a linear combination of the nm vectors A^i B (with m inputs).
All linear combinations of these nm vectors is the range space of the matrix formed from the A^i B column vectors:

M_c = [ B   AB   A^2B   ···   A^{n-1}B ]

Definition: The range space of M_c is the controllable subspace of the system
If a state x_c(t) is not in the range space of M_c, it is not a linear combination of these columns, so it is impossible for x_f(t) to ever equal x_c(t): called an uncontrollable state.

Theorem: An LTI system is controllable iff it has no uncontrollable states.
Necessary and sufficient condition for controllability is that

rank M_c ≜ rank [ B   AB   A^2B   ···   A^{n-1}B ] = n



Fall 2010 16.30/31 103

Further Examples

With Model #2:

x̄˙ = [ -2   0 ] x̄ + [ 2 ] u
     [  0  -1 ]      [ 1 ]
y = [ 3  0 ] x̄

M_o = [ C  ] = [  3   0 ]
      [ CA ]   [ -6   0 ]

M_c = [ B   AB ] = [ 2  -4 ]
                   [ 1  -1 ]

rank M_o = 1 and rank M_c = 2
So this model of the system is controllable, but not observable.

With Model #3:

x̄˙ = [ -2   0 ] x̄ + [ 2 ] u
     [  0  -1 ]      [ 0 ]
y = [ 3  2 ] x̄

M_o = [ C  ] = [  3   2 ]
      [ CA ]   [ -6  -2 ]

M_c = [ B   AB ] = [ 2  -4 ]
                   [ 0   0 ]

rank M_o = 2 and rank M_c = 1
So this model of the system is observable, but not controllable.

Note that controllability/observability are not intrinsic properties of a system. Whether the model has them or not depends on the representation that you choose.
But they indicate that something else more fundamental is wrong. . .
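The rank tests above can be reproduced in a few lines; a NumPy sketch (illustrative, not part of the original handouts) for Models #2 and #3:

```python
import numpy as np

A = np.array([[-2.0, 0.0], [0.0, -1.0]])

# Model 2: B = [2; 1], C = [3 0]
B = np.array([[2.0], [1.0]])
C = np.array([[3.0, 0.0]])
Mo = np.vstack([C, C @ A])
Mc = np.hstack([B, A @ B])
print(np.linalg.matrix_rank(Mo), np.linalg.matrix_rank(Mc))   # 1 2

# Model 3: same A, but B = [2; 0], C = [3 2]
B3 = np.array([[2.0], [0.0]])
C3 = np.array([[3.0, 2.0]])
print(np.linalg.matrix_rank(np.vstack([C3, C3 @ A])),
      np.linalg.matrix_rank(np.hstack([B3, A @ B3])))          # 2 1
```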



Fall 2010 16.30/31 104

Modal Tests

Earlier examples showed the relative simplicity of testing observability/controllability for a system with a decoupled A matrix.

There is, of course, a very special decoupled form for the state-space model: the Modal Form (6??)

Assuming that we are given the model

ẋ = Ax + Bu
y = Cx + Du

and the A is diagonalizable (A = TΛT^{-1}) using the transformation

T = [ v_1  ···  v_n ]

based on the eigenvectors of A. Note that we wrote:

T^{-1} = [ w_1^T ]
         [   ⋮   ]
         [ w_n^T ]

which is a column of rows.

Then define a new state so that x = Tz; then

ż = T^{-1}ẋ = T^{-1}(Ax + Bu)
  = (T^{-1}AT)z + T^{-1}Bu
  = Λz + T^{-1}Bu

y = Cx + Du = CTz + Du



Fall 2010 16.30/31 105

The new model in the state z is diagonal. There is no coupling in the dynamics matrix Λ.

But by definition,

T^{-1}B = [ w_1^T ]
          [   ⋮   ] B
          [ w_n^T ]

and

CT = C [ v_1  ···  v_n ]

Thus if it turned out that

w_i^T B ≡ 0

then that element of the state vector (z_i) would be uncontrollable by the input u.

Also, if

Cv_j ≡ 0

then that element of the state vector (z_j) would be unobservable with this sensor.

Thus, all modes of the system are controllable and observable if it can be shown that

w_i^T B ≠ 0   ∀i

and

Cv_j ≠ 0   ∀j
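The modal test can be sketched numerically. The pair below is an illustrative assumption (not one of the examples in these notes): A has modes at 1 and 2, and the left eigenvector of the mode at 2 turns out to be orthogonal to B, so that mode is uncontrollable; the sensor C is also assumed, for the observability half of the test:

```python
import numpy as np

A = np.array([[1.0, 1.0], [0.0, 2.0]])   # assumed system, modes at 1 and 2
B = np.array([[1.0], [0.0]])
C = np.array([[1.0, 0.0]])               # assumed sensor

lam, T = np.linalg.eig(A)   # columns of T are the right eigenvectors v_i
W = np.linalg.inv(T)        # rows of T^{-1} are the left eigenvectors w_i^T

for i in range(2):
    # w_i^T B == 0  =>  mode i is uncontrollable;  C v_i == 0  =>  unobservable
    print(lam[i], (W[i] @ B)[0], (C @ T[:, i])[0])
```

Here the mode at 2 has w_i^T B = 0, which matches the uncontrollable e^{2t} mode seen later via the rank test.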



Fall 2010 16.30/31 106

Cancelation
Examples show the close connection between pole-zero cancelation and the loss of observability and controllability. Can be strengthened.

Theorem: The mode (λ_i, v_i) of a system (A, B, C, D) is unobservable iff the system has a zero at λ_i with direction [ v_i ; 0 ].

Proof: If the system is unobservable at λ_i, then we know

(λ_i I - A)v_i = 0    (it is a mode)
Cv_i = 0              (that mode is unobservable)

Combine to get:

[ (λ_i I - A) ] v_i = 0
[      C      ]

Or

[ (λ_i I - A)   -B ] [ v_i ]
[      C         D ] [  0  ]  =  0

which implies that the system has a zero at that frequency as well, with direction [ v_i ; 0 ].

Can repeat the process looking for the loss of controllability, but now using zeros with left direction [ w_i^T   0 ].



Fall 2010 16.30/31 107

Combined Definition: when a MIMO zero causes the loss of either observability or controllability, we say that there is a pole/zero cancelation.

MIMO pole-zero (right direction generalized eigenvector) cancelation → mode is unobservable

MIMO pole-zero (left direction generalized eigenvector) cancelation → mode is uncontrollable

Note: This cancelation requires an agreement of both the frequency and the directionality of the system mode (eigenvector) and zero: [ v_i ; 0 ] or [ w_i^T   0 ].



Fall 2010 16.30/31 108

Connection to Residue
Recall that in modal form, the state-space model (assumes a diagonalizable A) is given by the matrices

A = [ p_1         ]     B = [ w_1^T B ]     C = [ Cv_1  ···  Cv_n ]
    [     ⋱       ]         [    ⋮    ]
    [         p_n ]         [ w_n^T B ]

for which case it can easily be shown that

G(s) = C(sI - A)^{-1}B
     = [ Cv_1  ···  Cv_n ] [ 1/(s - p_1)                ] [ w_1^T B ]
                           [        ⋱                   ] [    ⋮    ]
                           [              1/(s - p_n)   ] [ w_n^T B ]
     = Σ_{i=1}^{n} (Cv_i)(w_i^T B)/(s - p_i)

Thus the residue of each pole is a direct function of the product of the degree of controllability and observability for that mode.
Loss of observability or controllability → residue is zero → that pole does not show up in the transfer function.
If modes have equal observability (‖Cv_i‖ ≈ ‖Cv_j‖), but

‖w_i^T B‖ ≫ ‖w_j^T B‖

then the residue of the ith mode will be much larger.

Great way to approach model reduction if needed.
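This residue formula can be checked on Model #3 from Topic 9 (A = diag(-2, -1), B = [2; 0], C = [3 2]); a NumPy sketch (illustrative, not part of the original handouts):

```python
import numpy as np

# Model 3 from Topic 9
A = np.array([[-2.0, 0.0], [0.0, -1.0]])
B = np.array([[2.0], [0.0]])
C = np.array([[3.0, 2.0]])

lam, T = np.linalg.eig(A)   # for diagonal A, T is the identity
W = np.linalg.inv(T)

def G(s):
    return (C @ np.linalg.inv(s * np.eye(2) - A) @ B)[0, 0]

# Sum of (C v_i)(w_i^T B)/(s - p_i) must reproduce G(s) = 6/(s + 2);
# the mode at -1 has w_i^T B = 0, so it contributes no residue at all.
for s in [0.0, 1.0, 1.0j]:
    modal = sum((C @ T[:, [i]])[0, 0] * (W[[i]] @ B)[0, 0] / (s - lam[i])
                for i in range(2))
    print(modal, G(s), 6.0 / (s + 2.0))
```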



Fall 2010 16.30/31 109

Weaker Conditions
Often it is too much to assume that we will have full observability and controllability. Often have to make do with the following. System called:
Detectable if all unstable modes are observable
Stabilizable if all unstable modes are controllable

So if you had a stabilizable and detectable system, there could be dynamics that you are not aware of and cannot influence, but you know that they are at least stable.

That is enough information on the system model for now; we will assume minimal models from here on and start looking at the control issues.



Topic #11

16.31 Feedback Control Systems

State-Space Systems

Full-state Feedback Control

How do we change the poles of the state-space system?

Or even, can we change the pole locations at all?

Where do we change the pole locations to?
How well does this approach work?

Reading: FPE 7.3


Fall 2010 16.30/31 111

Full-state Feedback Controller

Assume that the single-input system dynamics are given by

ẋ(t) = Ax(t) + Bu(t)
y(t) = Cx(t)

so that D = 0.
The multi-actuator case is quite a bit more complicated, as we would have many extra degrees of freedom.

Recall that the system poles are given by the eigenvalues of A.
Want to use the input u(t) to modify the eigenvalues of A to change the system dynamics.

r → u(t) → [ A, B, C ] → y(t), with x(t) fed back to form u(t)

Assume a full-state feedback of the form:

u(t) = r - Kx(t)

where r is some reference input and the gain K is R^{1×n}
If r = 0, we call this controller a regulator

Find the closed-loop dynamics:

ẋ(t) = Ax(t) + B(r - Kx(t))
     = (A - BK)x(t) + Br
     = A_cl x(t) + Br
y(t) = Cx(t)



Fall 2010 16.30/31 112

Objective: Pick K so that A_cl has the desired properties, e.g.,
A unstable, want A_cl stable
Put 2 poles at -2 ± 2i
Note that there are n parameters in K and n eigenvalues in A, so it looks promising, but what can we achieve?

Example #1: Consider:

ẋ(t) = [ 1  1 ] x(t) + [ 1 ] u
       [ 1  2 ]        [ 0 ]

Then det(sI - A) = (s - 1)(s - 2) - 1 = s^2 - 3s + 1 = 0, so the system is unstable.

Define u = -[ k_1  k_2 ]x(t) = -Kx(t); then

A_cl = A - BK = [ 1  1 ] - [ 1 ] [ k_1  k_2 ]  =  [ 1 - k_1   1 - k_2 ]
                [ 1  2 ]   [ 0 ]                  [   1          2    ]

which gives

det(sI - A_cl) = s^2 + (k_1 - 3)s + (1 - 2k_1 + k_2) = 0

Thus, by choosing k_1 and k_2, we can put λ_i(A_cl) anywhere in the complex plane (assuming complex conjugate pairs of poles).



Fall 2010 16.30/31 113

To put the poles at s = -5, -6, compare the desired characteristic equation

(s + 5)(s + 6) = s^2 + 11s + 30 = 0

with the closed-loop one

s^2 + (k_1 - 3)s + (1 - 2k_1 + k_2) = 0

to conclude that

k_1 - 3 = 11  ⇒  k_1 = 14
1 - 2k_1 + k_2 = 30  ⇒  k_2 = 57

so that K = [ 14  57 ], which is called Pole Placement.
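A quick numerical check of the result (a NumPy sketch, not part of the original handouts): the closed-loop matrix A - BK should have eigenvalues at exactly -5 and -6:

```python
import numpy as np

# Example 1 plant and the pole-placement gain found by hand
A = np.array([[1.0, 1.0], [1.0, 2.0]])
B = np.array([[1.0], [0.0]])
K = np.array([[14.0, 57.0]])

Acl = A - B @ K
print(sorted(np.linalg.eigvals(Acl).real))   # [-6.0, -5.0]
```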

Of course, it is not always this easy, as lack of controllability might


be an issue.

Example #2: Consider this system:

ẋ(t) = [ 1  1 ] x(t) + [ 1 ] u
       [ 0  2 ]        [ 0 ]

with the same control approach

A_cl = A - BK = [ 1  1 ] - [ 1 ] [ k_1  k_2 ]  =  [ 1 - k_1   1 - k_2 ]
                [ 0  2 ]   [ 0 ]                  [   0          2    ]

so that

det(sI - A_cl) = (s - 1 + k_1)(s - 2) = 0

So the feedback control can modify the pole at s = 1, but it cannot move the pole at s = 2.

System cannot be stabilized with full-state feedback.
Problem caused by a lack of controllability of the e^{2t} mode.



Fall 2010 16.30/31 114

Consider the basic controllability test:

M_c = [ B   AB ] = [ 1  1 ]
                   [ 0  0 ]

So that rank M_c = 1 < 2.

Modal analysis of controllability to develop a little more insight:

A = [ 1  1 ],   decompose as AV = VΛ  ⇒  Λ = V^{-1}AV
    [ 0  2 ]

where

Λ = [ 1  0 ]    V = [ 1  1 ]    V^{-1} = [ 1  -1 ]
    [ 0  2 ]        [ 0  1 ]             [ 0   1 ]

Convert, with z = V^{-1}x(t):

ẋ(t) = Ax(t) + Bu   →   ż = Λz + V^{-1}Bu

where z = [ z_1  z_2 ]^T. But:

V^{-1}B = [ 1  -1 ] [ 1 ] = [ 1 ]
          [ 0   1 ] [ 0 ]   [ 0 ]

so that the dynamics in modal form are:

ż = [ 1  0 ] z + [ 1 ] u
    [ 0  2 ]     [ 0 ]

With this zero in the modal B-matrix, can easily see that the mode associated with the z_2 state is uncontrollable.
Must assume that the pair (A, B) is controllable.



Fall 2010 16.30/31 115

Ackermann's Formula
The previous outlined a design procedure and showed how to do it by hand for second-order systems.
Extends to higher order (controllable) systems, but tedious.

Ackermann's Formula gives us a method of doing this entire design process in one easy step:

K = [ 0  . . .  0  1 ] M_c^{-1} Φ_d(A)

M_c = [ B  AB  . . .  A^{n-1}B ] as before
Φ_d(s) is the characteristic equation for the desired closed-loop poles, which we then evaluate for s = A.
Note: it is explicit that the system must be controllable because we are inverting the controllability matrix.

Revisit Example #1: Φ_d(s) = s^2 + 11s + 30

M_c = [ B   AB ] = [ 1  1 ]
                   [ 0  1 ]

So

K = [ 0  1 ] [ 1  1 ]^{-1} ( [ 1  1 ]^2 + 11 [ 1  1 ] + 30I )
             [ 0  1 ]        [ 1  2 ]        [ 1  2 ]

  = [ 0  1 ] [ 43  14 ]  =  [ 14  57 ]
             [ 14  57 ]

Automated in Matlab: place.m & acker.m (see polyvalm.m too)
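Ackermann's formula is only a few lines to implement directly; a NumPy sketch (illustrative, mirroring the hand computation above):

```python
import numpy as np

# Ackermann's formula: K = [0 ... 0 1] Mc^{-1} Phi_d(A)
A = np.array([[1.0, 1.0], [1.0, 2.0]])
B = np.array([[1.0], [0.0]])

Mc = np.hstack([B, A @ B])                      # controllability matrix
Phi_d = A @ A + 11.0 * A + 30.0 * np.eye(2)     # desired poly s^2 + 11s + 30 at s = A
K = np.array([[0.0, 1.0]]) @ np.linalg.inv(Mc) @ Phi_d
print(K)   # [[14. 57.]]
```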



Fall 2010 16.30/31 116

Origins of Ackermanns Formula

For simplicity, consider a third-order system (case #2 on 6??), but this extends to any order.

A = [ -a_1  -a_2  -a_3 ]    B = [ 1 ]    C = [ b_1  b_2  b_3 ]
    [   1     0     0  ]        [ 0 ]
    [   0     1     0  ]        [ 0 ]

See the key benefit of using the control canonical state-space model
This form is useful because the characteristic equation for the system is obvious: det(sI - A) = s^3 + a_1 s^2 + a_2 s + a_3 = 0

Can show that

A_cl = A - BK = [ -a_1  -a_2  -a_3 ] - [ 1 ] [ k_1  k_2  k_3 ]
                [   1     0     0  ]   [ 0 ]
                [   0     1     0  ]   [ 0 ]

     = [ -a_1 - k_1   -a_2 - k_2   -a_3 - k_3 ]
       [     1             0            0     ]
       [     0             1            0     ]

so that the characteristic equation for the system is still obvious:

Φ_cl(s) = det(sI - A_cl) = s^3 + (a_1 + k_1)s^2 + (a_2 + k_2)s + (a_3 + k_3) = 0


Compare with the characteristic equation developed from the desired closed-loop pole locations:

Φd(s) = s^3 + α1·s^2 + α2·s + α3 = 0

to get that

a1 + k1 = α1 ⇒ k1 = α1 - a1
  ...
an + kn = αn ⇒ kn = αn - an

To get the specifics of the Ackermann formula, we then:
Take an arbitrary A, B and transform it to the control canonical form (x(t) → z(t) = T⁻¹x(t))
Not obvious, but Mc can be used to form this T.
Solve for the gains K̄ using the formulas at the top of the page for the state z(t):

u(t) = -K̄ z(t)

Then switch back to gains needed for the state x(t), so that

K = K̄T⁻¹  (u = -K̄z(t) = -Kx(t))

Pole placement is a very powerful tool and we will be using it for most
of this course.


Reference Inputs

So far we have looked at how to pick K to get the dynamics to have some nice properties (i.e. stabilize A).

The question remains as to how well this controller allows us to track a reference command.
Performance issue rather than just stability.

Started with
ẋ(t) = Ax(t) + Bu,  y = Cx(t)
u = r - Kx(t)

For good tracking performance we want

y(t) ≈ r(t) as t → ∞

Consider this performance issue in the frequency domain. Use the final value theorem:

lim_{t→∞} y(t) = lim_{s→0} sY(s)

Thus, for good performance, we want sY(s) ≈ sR(s) as s → 0, i.e.

Y(s)/R(s) |_{s=0} = 1

So, for good performance, the transfer function from R(s) to Y(s) should be approximately 1 at DC.

Example #1 continued: For the system

ẋ(t) = [1 -1; -1 2] x(t) + [1; 0] u

y = [1 0] x(t)

Already designed K = [14 -57], so the closed-loop system is
ẋ(t) = (A - BK)x(t) + Br
y = Cx(t)
which gives the transfer function

Y(s)/R(s) = C (sI - (A - BK))⁻¹ B

          = [1 0] [s+13 -56; 1 s-2]⁻¹ [1; 0]

          = (s - 2) / (s^2 + 11s + 30)

Assume that r(t) is a step; then by the FVT:

Y(s)/R(s) |_{s=0} = -2/30 ≠ 1 !!

So our step response is quite poor!
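The poor DC gain is easy to reproduce numerically; a short NumPy sketch under the sign conventions reconstructed for this example:

```python
import numpy as np

# Example #1 closed loop: u = r - Kx
A = np.array([[1.0, -1.0], [-1.0, 2.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[1.0, 0.0]])
K = np.array([[14.0, -57.0]])

Acl = A - B @ K
# DC gain Gcl(0) = C (0*I - Acl)^{-1} B = -C Acl^{-1} B
dc = (-C @ np.linalg.inv(Acl) @ B).item()   # -1/15, far from the desired 1
```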


One solution is to scale the reference input r(t) so that

u = N̄r - Kx(t)

N̄ is an extra gain used to scale the closed-loop transfer function.

Now we have
ẋ(t) = (A - BK)x(t) + BN̄r
y = Cx(t)
so that

Y(s)/R(s) = C (sI - (A - BK))⁻¹ BN̄ = Gcl(s)N̄

If we had made N̄ = -15, then

Y(s)/R(s) = -15(s - 2) / (s^2 + 11s + 30)

so with a step input, y(t) → 1 as t → ∞.

Clearly can compute

N̄ = Gcl(0)⁻¹ = -[C(A - BK)⁻¹B]⁻¹

Note that this development assumed that r was constant, but it could also be used if r is a slowly time-varying command.
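A NumPy sketch of the N̄ computation for this example (signs as reconstructed above):

```python
import numpy as np

A = np.array([[1.0, -1.0], [-1.0, 2.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[1.0, 0.0]])
K = np.array([[14.0, -57.0]])
Acl = A - B @ K

# Nbar = Gcl(0)^{-1} = -[C (A - BK)^{-1} B]^{-1}
Nbar = -1.0 / (C @ np.linalg.inv(Acl) @ B).item()          # -15
# DC gain of the scaled loop: Gcl(0) * Nbar, should be exactly 1
dc_scaled = (-C @ np.linalg.inv(Acl) @ B).item() * Nbar
```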


So the steady-state step error is now zero, but is this OK?
See plots: big improvement in the response, but the transient is a bit weird.

Fig. 1: Response to step input with and without the N̄ correction.

Code: Step Response (step1.m)


% full state feedback for topic 13
% reference input issues
%
a=[1 -1;-1 2];b=[1 0]';c=[1 0];d=0;
k=[14 -57];
Nbar=-15;
sys1=ss(a-b*k,b,c,d);
sys2=ss(a-b*k,b*Nbar,c,d);
t=[0:.025:4];
[y,t,x]=step(sys1,t);
[y2,t2,x2]=step(sys2,t);

plot(t,y,'--',t2,y2,'LineWidth',2);axis([0 4 -1 1.2]);grid;
legend('u=r-Kx','u=Nbar r-Kx','Location','SouthEast')
xlabel('time (sec)');ylabel('Y output');title('Step Response')
print -dpng -r300 step1.png


Pole Placement Examples

Simple example:

G(s) = 8·14·20 / ((s + 8)(s + 14)(s + 20))

Target pole locations: -12 ± 12i, -20

Fig. 2: Response to step input with and without the N̄ correction. Gives the desired steady-state behavior, with little difficulty!

Fig. 3: Closed-loop frequency response. Clearly shows unity DC gain.


Example system with 1 unstable pole:

G(s) = 0.94 / (s^2 - 0.0297)

Target pole locations: -0.25 ± 0.25i

Fig. 4: Response to step input with and without the N̄ correction. Gives the desired steady-state behavior, with little difficulty!

Fig. 5: Closed-loop frequency response. Clearly shows unity DC gain.


OK, so let's try something challenging...

G(s) = 8·14·20 / ((s - 8)(s - 14)(s - 20))

Target pole locations: -12 ± 12i, -20

Fig. 6: Response to step input with and without the N̄ correction. Gives the desired steady-state behavior, with little difficulty!

Fig. 7: Closed-loop frequency response. Clearly shows unity DC gain.


The worst possible... Unstable, NMP!!

G(s) = (s - 1) / ((s + 1)(s - 3))

Target pole locations: -1 ± i

Fig. 8: Response to step input with and without the N̄ correction. Gives the desired steady-state behavior, with little difficulty!

Fig. 9: Closed-loop frequency response. Clearly shows unity DC gain.


FSFB Summary

Full state feedback process is quite simple as it can be automated in Matlab using acker and/or place.

With more than 1 actuator, we have more than n degrees of freedom in the control; we can change the eigenvectors as desired, as well as the poles.

The real issue now is where to put the poles...

And to correct the fact that we cannot usually measure the state: develop an estimator.


Code: Step Response (step3.m)


% Examples of pole placement with FSFB
% demonstrating the Nbar modification to the reference command
%
% Jonathan How
% Sept, 2010
%
close all;clear all
set(0,'DefaultLineLineWidth',2)
set(0,'DefaultlineMarkerSize',10);set(0,'DefaultlineMarkerFace','b')
set(0,'DefaultAxesFontSize',14);set(0,'DefaultTextFontSize',14);

% system
[a,b,c,d]=tf2ss(8*14*20,conv([1 8],conv([1 14],[1 20])));
% controller gains to place poles at specified locations
k=place(a,b,[-12+12*j;-12-12*j;-20]);

% find the feedforward gains
Nbar=-inv(c*inv(a-b*k)*b);

sys1=ss(a-b*k,b,c,d);
sys2=ss(a-b*k,b*Nbar,c,d);

t=[0:.01:1];
[y,t,x]=step(sys1,t);
[y2,t2,x2]=step(sys2,t);

figure(1);clf
plot(t,y,'--',t2,y2,'LineWidth',2);axis([0 1 0 1.2]);grid;
legend('u=r-Kx','u=Nbar r-Kx','Location','SouthEast')
xlabel('time (sec)');ylabel('Y output');title('Step Response')
hold on
plot(t2([1 end]),[1.1 1.1]*y2(end),'r--');
plot(t2([1 end]),[0.9 0.9]*y2(end),'r--');
hold off

text(.4,.6,['k= [ ',num2str(round(k*1000)/1000),' ]'],'FontSize',14)
text(.4,.8,['Nbar= ',num2str(round(Nbar*1000)/1000)],'FontSize',14)
export_fig triple1 -pdf

figure(1);clf
f=logspace(-1,2,400);
gcl1=freqresp(sys1,f);
gcl2=freqresp(sys2,f);
loglog(f,abs(squeeze(gcl1)),f,abs(squeeze(gcl2)),'LineWidth',2);grid
xlabel('Freq (rad/sec)')
ylabel('G_{cl}')
title('Closed-loop Freq Response')
legend('u=r-Kx','u=Nbar r-Kx')
export_fig triple11 -pdf

50
51 %%%%%%%%%

52 % example 2

53 %

54 clear all

55
56 [a,b,c,d]=tf2ss(8*14*20,conv([1 8],conv([1 14],[1 20])))

57 k=place(a,b,[12+12*j;1212*j;20])

58 % find the feedforward gains

59 Nbar=inv(c*inv(ab*k)*b);

60
61 sys1=ss(ab*k,b,c,d);

62 sys2=ss(ab*k,b*Nbar,c,d);

63
64 t=[0:.01:1];

65 [y,t,x]=step(sys1,t);

66 [y2,t2,x2]=step(sys2,t);

67
68 figure(2);clf

69 plot(t,y,'',t2,y2,'LineWidth',2);axis([0 1 0 1.2])

70 grid;

71 legend('u=rKx','u=Nbar rKx')

72 xlabel('time (sec)');ylabel('Y output');title('Step Response')

73 hold on

74 plot(t2([1 end]),[.1 .1]*y2(end),'r');


%%%%%%%%%
% example 2
%
clear all

[a,b,c,d]=tf2ss(8*14*20,conv([1 -8],conv([1 -14],[1 -20])))
k=place(a,b,[-12+12*j;-12-12*j;-20])
% find the feedforward gains
Nbar=-inv(c*inv(a-b*k)*b);

sys1=ss(a-b*k,b,c,d);
sys2=ss(a-b*k,b*Nbar,c,d);

t=[0:.01:1];
[y,t,x]=step(sys1,t);
[y2,t2,x2]=step(sys2,t);

figure(2);clf
plot(t,y,'--',t2,y2,'LineWidth',2);axis([0 1 0 1.2])
grid;
legend('u=r-Kx','u=Nbar r-Kx')
xlabel('time (sec)');ylabel('Y output');title('Step Response')
hold on
plot(t2([1 end]),[1.1 1.1]*y2(end),'r--');
plot(t2([1 end]),[0.9 0.9]*y2(end),'r--');
hold off

text(.4,.6,['k= [ ',num2str(round(k*1000)/1000),' ]'],'FontSize',14)
text(.4,.8,['Nbar= ',num2str(round(Nbar*1000)/1000)],'FontSize',14)
export_fig triple2 -pdf

figure(2);clf
f=logspace(-1,2,400);
gcl1=freqresp(sys1,f);
gcl2=freqresp(sys2,f);
loglog(f,abs(squeeze(gcl1)),f,abs(squeeze(gcl2)),'LineWidth',2);grid
xlabel('Freq (rad/sec)')
ylabel('G_{cl}')
title('Closed-loop Freq Response')
legend('u=r-Kx','u=Nbar r-Kx')
export_fig triple21 -pdf

%%%%%%%%%%%%%%
% example 3
clear all

[a,b,c,d]=tf2ss(.94,[1 0 -0.0297])
k=place(a,b,[-1+j;-1-j]/4)
% find the feedforward gains
Nbar=-inv(c*inv(a-b*k)*b);

sys1=ss(a-b*k,b,c,d);
sys2=ss(a-b*k,b*Nbar,c,d);

t=[0:.1:30];
[y,t,x]=step(sys1,t);
[y2,t2,x2]=step(sys2,t);

figure(3);clf
plot(t,y,'--',t2,y2,'LineWidth',2);axis([0 30 0 2])
grid;
legend('u=r-Kx','u=Nbar r-Kx')
xlabel('time (sec)');ylabel('Y output');title('Step Response')
hold on
plot(t2([1 end]),[1.1 1.1]*y2(end),'r--');
plot(t2([1 end]),[0.9 0.9]*y2(end),'r--');
hold off

text(15,.6,['k= [ ',num2str(round(k*1000)/1000),' ]'],'FontSize',14)
text(15,.8,['Nbar= ',num2str(round(Nbar*1000)/1000)],'FontSize',14)
export_fig triple3 -pdf

figure(3);clf
f=logspace(-3,1,400);
gcl1=freqresp(sys1,f);
gcl2=freqresp(sys2,f);
loglog(f,abs(squeeze(gcl1)),f,abs(squeeze(gcl2)),'LineWidth',2);grid
xlabel('Freq (rad/sec)')
ylabel('G_{cl}')
title('Closed-loop Freq Response')
legend('u=r-Kx','u=Nbar r-Kx')
export_fig triple31 -pdf

%%%%%%%%%%%%
% example 4
clear all

[a,b,c,d]=tf2ss([1 -1],conv([1 1],[1 -3]))
k=place(a,b,[-1+j;-1-j])
% find the feedforward gains
Nbar=-inv(c*inv(a-b*k)*b);

sys1=ss(a-b*k,b,c,d);
sys2=ss(a-b*k,b*Nbar,c,d);

t=[0:.1:10];
[y,t,x]=step(sys1,t);
[y2,t2,x2]=step(sys2,t);

figure(3);clf
plot(t,y,'--',t2,y2,'LineWidth',2);axis([0 10 -1 1.2])


grid;
legend('u=r-Kx','u=Nbar r-Kx')
xlabel('time (sec)');ylabel('Y output')
title('Unstable, NMP system Step Response')
hold on
plot(t2([1 end]),[1.1 1.1]*y2(end),'r--');
plot(t2([1 end]),[0.9 0.9]*y2(end),'r--');
hold off

text(5,.6,['k= [ ',num2str(round(k*1000)/1000),' ]'],'FontSize',14)
text(5,.8,['Nbar= ',num2str(round(Nbar*1000)/1000)],'FontSize',14)
export_fig triple4 -pdf

figure(4);clf
f=logspace(-2,2,400);
gcl1=freqresp(sys1,f);
gcl2=freqresp(sys2,f);
loglog(f,abs(squeeze(gcl1)),f,abs(squeeze(gcl2)),'LineWidth',2);grid
xlabel('Freq (rad/sec)')
ylabel('G_{cl}')
title('Closed-loop Freq Response')
legend('u=r-Kx','u=Nbar r-Kx')
export_fig triple41 -pdf



MIT OpenCourseWare
http://ocw.mit.edu

16.30 / 16.31 Feedback Control Systems


Fall 2010

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
Topic #12

16.30/31 Feedback Control Systems

State-Space Systems

Full-state Feedback Control

How do we change location of state-space eigenvalues/poles?

Or, if we can change the pole locations where do we


put the poles?
Heuristics
Linear Quadratic Regulator
How well does this approach work?

Reading: FPE 7.4


Pole Placement Approach

So far we have looked at how to pick K to get the dynamics to have some nice properties (i.e. stabilize A):

λi(A) → λi(A - BK)

Question: where should we put the closed-loop poles?

Approach #1: use time-domain specifications to locate dominant poles, i.e. roots of:

s^2 + 2ζωn·s + ωn^2 = 0

Then place the rest of the poles so they are much faster than the dominant 2nd-order behavior.
Example: could keep the same damped frequency ωd and then move the real part to be 2-3 times faster than the real part of the dominant poles ζωn.
Just be careful moving the poles too far to the left, because it takes a lot of control effort.

Recall ROT for 2nd-order response (4??):
10-90% rise time: tr = (1 + 1.1ζ + 1.4ζ^2)/ωn
Settling time (5%): ts = 3/(ζωn)
Time to peak amplitude: tp = π/(ωn·sqrt(1 - ζ^2))
Peak overshoot: Mp = e^(-ζωn·tp)

Key difference in this case: since all poles are being placed, the assumption of dominant 2nd-order behavior is pretty much guaranteed to be valid.
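A quick numerical check of these rules of thumb; the ζ and ωn values below are illustrative placeholders, not from the notes:

```python
import numpy as np

# Second-order rules of thumb, evaluated for illustrative zeta = 0.7, wn = 3 rad/s
zeta, wn = 0.7, 3.0
tr = (1 + 1.1*zeta + 1.4*zeta**2) / wn           # 10-90% rise time
ts = 3 / (zeta * wn)                             # 5% settling time
tp = np.pi / (wn * np.sqrt(1 - zeta**2))         # time to peak amplitude
Mp = np.exp(-zeta * wn * tp)                     # peak overshoot (~4.6% here)
```

Note Mp = e^(-ζωn·tp) reduces to the familiar e^(-πζ/sqrt(1-ζ^2)) once tp is substituted in.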


Linear Quadratic Regulator

Approach #2: place the pole locations so that the closed-loop system optimizes the cost function

J_LQR = ∫_0^∞ [ x(t)^T Q x(t) + u(t)^T R u(t) ] dt

where:
x^T Q x is the State Cost with weight Q
u^T R u is called the Control Cost with weight R
Basic form of Linear Quadratic Regulator problem.

MIMO optimal control is a time-invariant linear state feedback

u(t) = -K_lqr x(t)

and K_lqr is found by solving the Algebraic Riccati Equation (ARE)

0 = A^T P + P A + Q - P B R⁻¹ B^T P
K_lqr = R⁻¹ B^T P

Some details to follow, but discussed at length in 16.323.

Note: state cost written using output x^T Q x, but could define a system output of interest z = Cz x that is not based on a physical sensor measurement and use cost function:

J_LQR = ∫_0^∞ [ x(t)^T Cz^T Q̃ Cz x(t) + ρ u(t)^T u(t) ] dt

Then effectively have state penalty Q = Cz^T Q̃ Cz.
Selection of z used to isolate system states of particular interest that you would like to be regulated to zero.
R = ρI effectively sets the controller bandwidth.
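Outside Matlab, the same gain can be found with SciPy's ARE solver; a sketch on the earlier two-state example, with Q = I and R = 1 chosen arbitrarily for illustration:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# ARE: 0 = A'P + PA + Q - P B R^{-1} B' P, then Klqr = R^{-1} B' P
A = np.array([[1.0, -1.0], [-1.0, 2.0]])   # unstable example plant from these notes
B = np.array([[1.0], [0.0]])
Q = np.eye(2)                               # illustrative weights
R = np.array([[1.0]])

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)
cl_poles = np.linalg.eigvals(A - B @ K)     # guaranteed to be in the LHP
```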


Fig. 1: Example #1: G(s) = 8·14·20/((s+8)(s+14)(s+20)) with control penalty ρ and 10ρ

Fig. 2: Example #2: G(s) = 0.94/(s^2 - 0.0297) with control penalty ρ and 10ρ

Fig. 3: Example #3: G(s) = 8·14·20/((s-8)(s-14)(s-20)) with control penalty ρ and 10ρ

Fig. 4: Example #4: G(s) = (s-1)/((s+1)(s-3)) with control penalty ρ and 10ρ

Fig. 5: Example #5: G(s) = (s-2)(s-4)/((s-1)(s-3)(s^2+0.8s+4)s^2) with control penalty ρ and 10ρ

LQR Weight Matrix Selection

Good ROT (typically called Bryson's Rules) when selecting the weighting matrices Q and R is to normalize the signals:

Q = diag( α1^2/(x1)max^2 , α2^2/(x2)max^2 , ... , αn^2/(xn)max^2 )

R = ρ · diag( β1^2/(u1)max^2 , β2^2/(u2)max^2 , ... , βm^2/(um)max^2 )

The (xi)max and (ui)max represent the largest desired response or control input for that component of the state/actuator signal.

The Σi αi^2 = 1 and Σi βi^2 = 1 are used to add an additional relative weighting on the various components of the state/control.

ρ is used as the last relative weighting between the control and state penalties; it gives a relatively concrete way to discuss the relative size of Q and R and their ratio Q/R.
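A minimal sketch of Bryson's rule in NumPy; the excursion limits and relative weights below are made-up placeholders, not values from the notes:

```python
import numpy as np

# Bryson's rule: normalize each penalty by the largest acceptable excursion
x_max = np.array([0.1, 1.0])      # hypothetical max state excursions
u_max = np.array([5.0])           # hypothetical max control input
alpha2 = np.array([0.5, 0.5])     # relative state weights, sum to 1
beta2 = np.array([1.0])           # relative control weights, sum to 1
rho = 1.0                         # overall control-vs-state trade-off

Q = np.diag(alpha2 / x_max**2)
R = rho * np.diag(beta2 / u_max**2)
```

These Q, R then feed straight into lqr (or an ARE solver), with ρ as the single design knob to iterate on.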


Regulator Summary

Dominant second-order approach places the closed-loop pole locations with no regard to the amount of control effort required.
Designer must iterate on the selected bandwidth (ωn) to ensure that the control effort is reasonable.

LQR selects closed-loop poles that balance between state errors and control effort.
Easy design iteration using ρ.
Sometimes difficult to relate the desired transient response to the LQR cost function.
Key thing is that the designer is focused on system performance issues rather than the mechanics of the design process.


Code: LQR Examples - (step4.m)


% LQR examples for 16.31
% Fall 2010
% Jonathan How, MIT
%
close all;clear all
set(0,'DefaultLineLineWidth',2);
set(0,'DefaultlineMarkerSize',8);set(0,'DefaultlineMarkerFace','b')
set(0,'DefaultAxesFontSize',12);set(0,'DefaultTextFontSize',12)
set(0,'DefaultAxesFontName','arial');set(0,'DefaultTextFontName','arial')
set(0,'DefaultFigureColor','w','DefaultAxesColor','w',...
'DefaultAxesXColor','k','DefaultAxesYColor','k',...
'DefaultAxesZColor','k','DefaultTextColor','k')

if 1
clear all;fig=1;
% system
G=tf(8*14*20,conv([1 8],conv([1 14],[1 20])));
[a,b,c,d]=ssdata(G);
R=1e-3;
k=lqr(a,b,c'*c,R); % nominal controller
k2=lqr(a,b,c'*c,10*R); % slower control bec of higher control penalty

% find the feedforward gains
Nbar=-inv(c*inv(a-b*k)*b);
Nbar2=-inv(c*inv(a-b*k2)*b);

sys1=ss(a-b*k,b*Nbar,c,d);
sys2=ss(a-b*k2,b*Nbar2,c,d);

t=[0:.005:1];
[y,t,x]=step(sys1,t);
[y2,t2,x2]=step(sys2,t);

figure(fig);clf;fig=fig+1;
plot(t,y,'--',t2,y2,'LineWidth',2);axis([0 1 0 1.2])
grid;
legend(['\rho = ',num2str(R)],['\rho = ',num2str(10*R)],'Location','SouthEast')
xlabel('time (sec)');ylabel('Y output');
title('Step Response')
hold on
plot(t2([1 end]),[1.1 1.1]*y2(end),'r--');
plot(t2([1 end]),[0.9 0.9]*y2(end),'r--');
hold off

text(.4,.6,['k= [ ',num2str(round(k*1000)/1000),' ]'])
text(.4,.8,['Nbar= ',num2str(Nbar)])
%!rm srl12.pdf
export_fig srl12 -pdf

if 1
figure(fig);clf;
f=logspace(-1,2,400);
gcl1=freqresp(sys1,f);
gcl2=freqresp(sys2,f);
loglog(f,abs(squeeze(gcl1)),f,abs(squeeze(gcl2)),'LineWidth',2);%grid
axis([.1 100 .01 2])
xlabel('Freq (rad/sec)');ylabel('G_{cl}')
title('Closed-loop Freq Response')
legend(['\rho = ',num2str(R)],['\rho = ',num2str(10*R)],'Location','SouthWest')
%!rm srl13.pdf
export_fig srl13 -pdf
end

end % if 0

%%%%%%%%%
if 1
clear all
fig=4;

G=tf(8*14*20,conv([1 -8],conv([1 -14],[1 -20])));
[a,b,c,d]=ssdata(G);
R=.1
k=lqr(a,b,c'*c,R);


k2=lqr(a,b,c'*c,10*R);

% find the feedforward gains
Nbar=-inv(c*inv(a-b*k)*b);
Nbar2=-inv(c*inv(a-b*k2)*b);

sys1=ss(a-b*k,b*Nbar,c,d);
sys2=ss(a-b*k2,b*Nbar2,c,d);

t=[0:.005:1];
[y,t,x]=step(sys1,t);
[y2,t2,x2]=step(sys2,t);

figure(fig);clf;fig=fig+1;
plot(t,y,'--',t2,y2,'LineWidth',2);axis([0 1 0 1.2])
grid;
legend(['\rho = ',num2str(R)],['\rho = ',num2str(10*R)],'Location','SouthEast')
xlabel('time (sec)');ylabel('Y output');
title('Step Response')
hold on
plot(t2([1 end]),[1.1 1.1]*y2(end),'r--');
plot(t2([1 end]),[0.9 0.9]*y2(end),'r--');
hold off

text(.4,.6,['k= [ ',num2str(round(k*1000)/1000),' ]'])
text(.4,.8,['Nbar= ',num2str(Nbar)])
%!rm srl22.pdf;
export_fig srl22 -pdf

if 1
figure(fig);clf;fig=fig+1;
f=logspace(-1,2,400);
gcl1=freqresp(sys1,f);
gcl2=freqresp(sys2,f);
loglog(f,abs(squeeze(gcl1)),f,abs(squeeze(gcl2)),'LineWidth',2);%grid
axis([.1 100 .01 2])
xlabel('Freq (rad/sec)');ylabel('G_{cl}')
title('Closed-loop Freq Response')
legend(['\rho = ',num2str(R)],['\rho = ',num2str(10*R)],'Location','SouthWest')
%!rm srl23.pdf;
export_fig srl23 -pdf
end

end % if 0
%%%%%%%%%%%%%%

if 1
clear all
fig=7;

G=tf(.94,[1 0 -0.0297]);
[a,b,c,d]=ssdata(G);
R=.1
k=lqr(a,b,c'*c,R);
k2=lqr(a,b,c'*c,10*R);

% find the feedforward gains
Nbar=-inv(c*inv(a-b*k)*b);
Nbar2=-inv(c*inv(a-b*k2)*b);

sys1=ss(a-b*k,b*Nbar,c,d);
sys2=ss(a-b*k2,b*Nbar2,c,d);

t=[0:.01:30];
[y,t,x]=step(sys1,t);
[y2,t2,x2]=step(sys2,t);

figure(fig);clf;fig=fig+1;
plot(t,y,'--',t2,y2,'LineWidth',2);axis([0 30 0 1.2])
grid;
legend(['\rho = ',num2str(R)],['\rho = ',num2str(10*R)],'Location','SouthEast')
xlabel('time (sec)');ylabel('Y output');
title('Step Response')
hold on
plot(t2([1 end]),[1.1 1.1]*y2(end),'r--');
plot(t2([1 end]),[0.9 0.9]*y2(end),'r--');
hold off


text(15,.6,['k= [ ',num2str(round(k*1000)/1000),' ]'])
text(15,.8,['Nbar= ',num2str(Nbar)])
%!rm srl32.pdf;
export_fig srl32 -pdf

if 1
figure(fig);clf;fig=fig+1;
f=logspace(-3,3,400);
gcl1=freqresp(sys1,f);
gcl2=freqresp(sys2,f);
loglog(f,abs(squeeze(gcl1)),f,abs(squeeze(gcl2)),'LineWidth',2);%grid
axis([.1 100 .01 2])
xlabel('Freq (rad/sec)');ylabel('G_{cl}')
title('Closed-loop Freq Response')
legend(['\rho = ',num2str(R)],['\rho = ',num2str(10*R)],'Location','SouthWest')
%!rm srl33.pdf;
export_fig srl33 -pdf
end

end % if 0

%%%%%%%%%%%%
if 1
clear all
fig=10;

G=tf([1 -1],conv([1 1],[1 -3]));
[a,b,c,d]=ssdata(G);
R=.1
k=lqr(a,b,c'*c,R);
k2=lqr(a,b,c'*c,10*R);

% find the feedforward gains
Nbar=-inv(c*inv(a-b*k)*b);
Nbar2=-inv(c*inv(a-b*k2)*b);

sys1=ss(a-b*k,b*Nbar,c,d);
sys2=ss(a-b*k2,b*Nbar2,c,d);

t=[0:.01:10];
[y,t,x]=step(sys1,t);
[y2,t2,x2]=step(sys2,t);

figure(fig);clf;fig=fig+1;
plot(t,y,'--',t2,y2,'LineWidth',2);axis([0 10 -1 1.2])
grid;
legend(['\rho = ',num2str(R)],['\rho = ',num2str(10*R)],'Location','SouthEast')
xlabel('time (sec)');ylabel('Y output');
title('Unstable, NMP system Step Response')
hold on
plot(t2([1 end]),[1.1 1.1]*y2(end),'r--');
plot(t2([1 end]),[0.9 0.9]*y2(end),'r--');
hold off

text(5,.6,['k= [ ',num2str(round(k*1000)/1000),' ]'])
text(5,.8,['Nbar= ',num2str(Nbar)])
%!rm srl42.pdf;
export_fig srl42 -pdf

if 1
figure(fig);clf;fig=fig+1;
f=logspace(-2,2,400);
gcl1=freqresp(sys1,f);
gcl2=freqresp(sys2,f);
loglog(f,abs(squeeze(gcl1)),f,abs(squeeze(gcl2)),'LineWidth',2);%grid
axis([.1 100 .01 2])
xlabel('Freq (rad/sec)');ylabel('G_{cl}')
title('Closed-loop Freq Response')
legend(['\rho = ',num2str(R)],['\rho = ',num2str(10*R)],'Location','SouthWest')
%!rm srl43.pdf;
export_fig srl43 -pdf
end

end % if 0

%%%%%%%%%%%%


clear all
fig=13;

G=tf(conv([1 -2],[1 -4]),conv(conv([1 -1],[1 -3]),[1 2*.2*2 2^2 0 0]));
[a,b,c,d]=ssdata(G);
R=.1
k=lqr(a,b,c'*c,R);
k2=lqr(a,b,c'*c,10*R);

% find the feedforward gains
Nbar=-inv(c*inv(a-b*k)*b);
Nbar2=-inv(c*inv(a-b*k2)*b);

sys1=ss(a-b*k,b*Nbar,c,d);
sys2=ss(a-b*k2,b*Nbar2,c,d);

t=[0:.01:10];
[y,t,x]=step(sys1,t);
[y2,t2,x2]=step(sys2,t);

figure(fig);clf;fig=fig+1;
plot(t,y,'--',t2,y2,'LineWidth',2);axis([0 10 -1 1.2])
grid;
legend(['\rho = ',num2str(R)],['\rho = ',num2str(10*R)],'Location','SouthEast')
xlabel('time (sec)');ylabel('Y output');
title('Unstable, NMP system Step Response')
hold on
plot(t2([1 end]),[1.1 1.1]*y2(end),'r--');
plot(t2([1 end]),[0.9 0.9]*y2(end),'r--');
hold off

text(1,.6,['k= [ ',num2str(round(k*1000)/1000),' ]'])
text(1,.8,['Nbar= ',num2str(Nbar)])
%!rm srl52.pdf;
export_fig srl52 -pdf

if 1
figure(fig);clf;fig=fig+1;
f=logspace(-2,2,400);
gcl1=freqresp(sys1,f);
gcl2=freqresp(sys2,f);
loglog(f,abs(squeeze(gcl1)),f,abs(squeeze(gcl2)),'LineWidth',2);%grid
axis([.1 100 .01 2])
xlabel('Freq (rad/sec)')
ylabel('G_{cl}')
title('Closed-loop Freq Response')
legend(['\rho = ',num2str(R)],['\rho = ',num2str(10*R)],'Location','SouthWest')
%!rm srl53.pdf;
export_fig srl53 -pdf
end


Example: B747

Consider the lateral dynamics of a B747 at cruise (40,000 ft, M = 0.8).

Variables of interest now include lateral velocity (side slip, β), yaw ψ and yaw rate r, roll φ and roll rate p.
Actuators are aileron δa and rudder δr (Figure 10.30 from FPE).

x, y, z = Position coordinates      φ = Roll angle
u, v, w = Velocity coordinates      θ = Pitch angle
p = Roll rate                       ψ = Yaw angle
q = Pitch rate                      β = Side-slip angle
r = Yaw rate                        α = Angle of attack

(Diagram of aircraft body axes, velocity vector, rudder δr, aileron δa, and elevators δe. Image by MIT OpenCourseWare.)

Form nonlinear dynamics as before and linearize about forward cruise flight condition to get equations of motion for lateral dynamics:

ẋ = Ax + Bu,  x = [β r p φ]^T,  u = [δr δa]^T

A = [ -0.0558  -0.9968   0.0802   0.0415
       0.598   -0.115   -0.0318   0
      -3.05     0.388   -0.4650   0
       0        0.0805   1        0 ]        (1)

B = [  0.00729   0
      -0.475     0.00775
       0.153     0.143
       0         0 ]                         (2)

and we can sense the yaw rate r and bank angle φ.

Goal: Investigate OL behavior and correct parts that we don't like.



Open-Loop Dynamics

Code gives the numerical values for all of the stability derivatives.
Solve for the eigenvalues of the matrix A to find the lateral modes:

-0.0329 ± 0.9467i, -0.5627, -0.0073

Stable, but there is one very slow pole.

There are 3 modes, but they are a lot more complicated than the longitudinal case:
Slow mode -0.0073: Spiral Mode
Fast real -0.5627: Roll Damping
Oscillatory -0.0329 ± 0.9467i: Dutch Roll

Can look at normalized eigenvectors (magnitude, with phase for the Dutch roll mode):

                     Spiral    Roll      Dutch Roll
β = v/U0             0.0067   -0.0197    0.3085 ∠ 0°
p̂ = p/(2U0/b)       -0.0009   -0.0712    0.1131 ∠ 120°
r̂ = r/(2U0/b)        0.0052    0.0040    0.0348 ∠ -84°
φ                    1.0000    1.0000    0.9438 ∠ 28°

Not as enlightening as the longitudinal case: the lateral dynamics tightly couple all of the states in a complex motion.

Note that the Dutch roll is too lightly damped, and we would typically
like to increase that damping using closed-loop control
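The quoted modes can be reproduced from the A matrix (signs restored from the Matlab listing at the end of this topic); a NumPy sketch:

```python
import numpy as np

# B747 lateral dynamics A matrix (cruise condition), as in the Matlab code
A = np.array([[-0.0558, -0.9968,  0.0802, 0.0415],
              [ 0.598,  -0.115,  -0.0318, 0.0],
              [-3.05,    0.388,  -0.4650, 0.0],
              [ 0.0,     0.0805,  1.0,    0.0]])

evs = np.linalg.eigvals(A)
# Expected: Dutch roll ~ -0.0329 +/- 0.9467i, roll damping ~ -0.5627,
# spiral ~ -0.0073 (all stable, one very slow)
```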


Washout Filter

A hidden complexity in the design of an autopilot for the lateral dynamics is that there are various flight modes that require a steady yaw rate (r_ss ≠ 0). For example, steady turning flight.
So the control that we implement must not be concerned if the steady-state value of r is non-zero.

Typically achieved using a washout filter Hw(s): a high-pass version of the r signal.
High pass cuts out the low-frequency content in the signal.

For now we will penalize that filtered state in the LQR cost function.
When we get to output feedback, it is only the filtered r that is available to the controller.
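The high-pass behavior is easy to verify numerically; a sketch evaluating Hw(jω) = jω/(jω + 1/τ), with τ = 4.2 as in the figure below:

```python
import numpy as np

# Washout (high-pass) filter Hw(s) = s / (s + 1/tau)
tau = 4.2
w = np.array([0.01, 1/tau, 10.0])       # frequencies in rad/s
Hw = (1j*w) / (1j*w + 1/tau)
# |Hw| -> 0 at low frequency (blocks a steady yaw rate),
# |Hw| = 1/sqrt(2) at the corner w = 1/tau, and -> 1 at high frequency
```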
Fig. 6: Washout filter with τ = 4.2 (Bode magnitude of |Hw(jω)| vs. frequency)


Add the rudder dynamics Hr(s) = 10/(s + 10), i.e. effectively a lag.
New control picture (shows a nominal yaw damper; we will replace that with a state feedback controller):

(Block diagram: yaw-rate command rc and filtered yaw rate e form the error er; gain K drives the rudder through Hr(s); the aircraft's yaw rate r passes through the yaw gyro and washout Hw(s) to give e; the aileron input is δa; integrators 1/s give ψ from r and φ from p.)

Fig. 7: Typical implementation of the LQ servo


To proceed, must augment the filter and actuator dynamics to the state model.
Algebra of this will be discussed later, but Matlab has easy ways of doing this augmentation because we can multiply systems together (note the code uses an I/O order different than the picture above):

sys = ss(A,B,C,D);
set(sys, 'inputname', {'rudder' 'aileron'},...
    'outputname', {'yaw rate' 'bank angle'});
set(sys, 'statename', {'beta' 'yaw rate' 'roll rate' 'phi'});
% actuator dynamics are a lag at -10
actn=10;actd=[1 10]; % H(s) in notes
H=tf({actn 0;0 1},{actd 1;1 1});
%
% Washout filter
tau=3;
washn=[1 0];washd=[1 1/tau]; % washout filter on yaw rate
WashFilt=tf({washn 0;0 1},{washd 1;1 1});
%
Gp=WashFilt*sys*H;

Inputs are now er and δa, and outputs are e and φ; from this MIMO system, pull out the SISO system from er to e.
Sixth order because of the augmented states.
Easiest control is of course to try gain feedback from e to er; will compare that with LQR.

B747 Lateral Gain Feedback


Try gain feedback - see root locus

(Root locus plot: Imaginary Axis vs. Real Axis, with damping-ratio grid lines.)

Presence of zero on imaginary axis traps the Dutch roll pole so that
it cannot be moved far into the LHP

Net result is that we are limited to about 30% damping in that pole


B747 Lateral LQR

Using an LQR state cost based on e^2, can design an LQR controller for various R = ρ.
But where do the poles go as a function of ρ?
Follow a pattern defined by a symmetric root locus (SRL): form the pole/zero map from input er to output e, put these on the s-plane, introduce the mirror image of these dynamics about the imaginary axis, and plot the root locus using standard root-locus tools.
(Symmetric root locus plot: Imaginary Axis vs. Real Axis, with damping-ratio grid lines.)

Dutch roll mode is strongly influenced by this control approach, but the effect of the zero is still apparent.
Suggests ρ ≈ 0.1, which gives about 40% damping in that mode.

Gp=WashFilt*sys*H;
set(Gp, 'statename', {'xwo' 'beta' 'yaw rate' 'roll' 'phi' 'xa'});
set(Gp, 'inputname', {'rudder inputs' 'aileron'},...
    'outputname', {'filtered yaw rate' 'bank angle'});
[Ap,Bp,Cp,Dp]=ssdata(Gp);
[Klqr,S,Elqr]=lqr(Ap,Bp(:,1),Cp(1,:)'*Cp(1,:),0.1)


Klqr = [1.0105 0.1968 2.3864 0.1005 0.0374 0.2721], giving

λ(Ap - Bp(:,1)·Klqr) = { -9.8852, -1.1477, -0.2750 ± 0.6200i, -0.4696, -0.0050 }

Compare the initial condition response to a perturbation in β:

(Plot: β vs. time over 0-30 sec for the OL, LQR, and Gain designs.)

Closed-loop response looks pretty good.

But note that the original gain feedback design wasn't that much worse; is the additional complexity of the full state feedback justified?

Code: B747 LQR, Estimator and DOFB


% B747 lateral dynamics
% Jonathan How, Fall 2010
% working from Matlab Demo called jetdemo
clear all;%close all
%
A=[-.0558 -.9968 .0802 .0415;
    .598 -.115 -.0318 0;
   -3.05 .388 -.4650 0;
    0 0.0805 1 0];
B=[ .00729 0;
   -0.475 0.00775;
    0.153 0.143;
    0 0];
C=[0 1 0 0; 0 0 0 1];D=[0 0; 0 0];
sys = ss(A,B,C,D);
set(sys, 'inputname', {'rudder' 'aileron'},...
    'outputname', {'yaw rate' 'bank angle'});
set(sys, 'statename', {'beta' 'yaw rate' 'roll rate' 'phi'});
[Yol,Tol]=initial(ss(A,B,[1 0 0 0],zeros(1,2)),[1 0 0 0]',[0:.1:30]);
[V,E]=eig(A)
%
% CONTROL actuator dynamics are a lag at -10
actn=10;actd=[1 10]; % H(s) in notes
H=tf({actn 0;0 1},{actd 1;1 1});
%
tau=3;washn=[1 0];washd=[1 1/tau]; % washout filter on yaw rate
WashFilt=tf({washn 0;0 1},{washd 1;1 1});
%
Gp=WashFilt*sys*H;
set(Gp, 'statename', {'xwo' 'beta' 'yaw rate' 'roll' 'phi' 'xa'});
set(Gp, 'inputname', {'rudder inputs' 'aileron'},...
    'outputname', {'filtered yaw rate' 'bank angle'});
[Ap,Bp,Cp,Dp]=ssdata(Gp);
[Klqr,S,Elqr]=lqr(Ap,Bp(:,1),Cp(1,:)'*Cp(1,:),0.1);rifd(Elqr)
%
% gain feedback on the filtered yaw rate
figure(1);clf
rlocus(Ap,Bp(:,1),Cp(1,:),0);
sgrid([.1 .2 .3 .4],[.7 .8 .9 1]);grid on;axis([-1.4 .1 -1.2 1.2])
Kgain=2;
Egain=eig(Ap-Bp(:,1)*Kgain*Cp(1,:));rifd(Egain)
hold on;plot(Egain+eps*sqrt(-1),'bd');hold off
export_fig b747_lqr1 -pdf

Cpbeta=[0 1 0 0 0 0]; % performance output variable
Ggain=ss(Ap-Bp(:,1)*Kgain*Cp(1,:),Bp,Cpbeta,0);
xp0=[0 1 0 0 0 0]';
[Ygain,Tgain]=initial(Ggain,xp0,Tol); %'

figure(2);clf
srl(ss(Ap,Bp(:,1),Cp(1,:),0));
sgrid([.1 .2 .3 .4],[.7 .8 .9 1])
axis([-1.25 1.25 -1.25 1.25]);grid on
hold on;plot(Elqr+eps*sqrt(-1),'bd');hold off
export_fig b747_lqr2 -pdf

% close the LQR loop
Acl=Ap-Bp(:,1)*Klqr;
Bcl=Bp(:,1);
% choose output to be just state beta and control u
Ccl=[Cpbeta;-Klqr];
Dcl=[0;0];
Glqr=ss(Acl,Bcl,Ccl,Dcl);
[Y,T]=initial(Glqr,xp0,Tol); %'

figure(3);clf
plot(Tol,Yol,T,Y(:,1),Tgain,Ygain);axis([0 30 -1 1]);
setlines(2)
legend('OL','LQR','Gain')
ylabel('\beta');xlabel('Time')
grid on
export_fig b747_lqr3 -pdf

% estimator poles made faster by jfactor


jfactor=2.05;
Eest=jfactor*Elqr;
Kest=place(Ap',Cp(1,:)',Eest);Kest=Kest';

na=size(Ap,1);
% Form compensator and closed-loop
% see 15-5
ac=Ap-Bp(:,1)*Klqr-Kest*Cp(1,:);bc=Kest;cc=Klqr;dc=0;
Gc=ss(ac,bc,cc,dc);
% see 15-7
acl=[Ap -Bp(:,1)*cc;bc*Cp(1,:) ac];
bcl=[zeros(na,1);bc];
% choose output to be state beta and control u
ccl=[Cpbeta,zeros(1,na);zeros(1,na) -cc];
dcl=[0;0];
Gcl=ss(acl,bcl,ccl,dcl);

figure(5);clf
[p,z]=pzmap(Gc(1,1));
plot(z+sqrt(-1)*eps,'mo','MarkerFace','m')
hold on
plot(p+sqrt(-1)*eps,'bs','MarkerFace','b')
hold off
legend('Gc zeros','Gc poles','Location','NorthWest')
grid on;axis([-5.5 .5 -3 3])
export_fig b747_lqr5 -pdf

figure(4);clf
rlocus(Gp(1,1)*Gc);axis([-5.5 .5 -3 3])
hold on
[p,z]=pzmap(Gp(1,1));
plot(p+sqrt(-1)*eps,'rx','MarkerFace','r')
plot(z+sqrt(-1)*eps,'ro','MarkerFace','r')
[p,z]=pzmap(Gc(1,1));
plot(p+sqrt(-1)*eps,'mx','MarkerFace','m')
plot(z+sqrt(-1)*eps,'mo','MarkerFace','m')
p=eig(acl);
plot(p+sqrt(-1)*eps,'bd')
hold off
export_fig b747_lqr4 -pdf

% initial comp at zero
[Ycl,Tcl]=initial(Gcl,[xp0;xp0*0],Tol); %'

figure(3);clf
plot(Tol,Yol,T,Y(:,1),Tgain,Ygain,Tcl,Ycl(:,1));axis([0 30 -1 1]);
setlines(2)
legend('OL','LQR','Gain','DOFB')
ylabel('\beta');xlabel('Time')
grid on
export_fig b747_lqr3a -pdf

figure(6);clf
plot(T,Y(:,1).^2,Tcl,Ycl(:,1).^2,T,Y(:,2).^2,Tcl,Ycl(:,2).^2);
axis([0 10 0 1]);
hold off
setlines(2)
legend('LQR x^2','DOFB x^2','LQR u^2','DOFB u^2')
ylabel('x^2 and u^2');xlabel('Time')
grid on
export_fig b747_lqr3b -pdf

October 17, 2010


MIT OpenCourseWare
http://ocw.mit.edu

16.30 / 16.31 Feedback Control Systems


Fall 2010

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
Topic #13

16.30/31 Feedback Control Systems

State-Space Systems
Full-state Feedback Control Performance/Robustness

Reading: FPE 7.9.1


Fall 2010 16.30/31 132

LQ Servo Introduction

Can use the scaling N to achieve zero steady-state error, but the
approach is sensitive to accurate knowledge of all plant parameters

Can modify the LQ formulation to ensure that zero steady-state error is
robustly achieved in response to constant reference commands.
Done by augmenting integrators to the system output and then
including a penalty on the integrated output in the LQ cost.

Approach: If the relevant system output is y(t) = Cy x(t) and the
reference is r(t), add extra states xI(t), where
    d/dt xI(t) = e(t) = r(t) − y(t)
Then penalize both x(t) and xI(t) in the cost

If the state of the original system is x(t), then the dynamics are modified
to be
    [ ẋ(t) ; d/dt xI(t) ] = [ A 0 ; −Cy 0 ][ x(t) ; xI(t) ] + [ Bu ; 0 ] u(t) + [ 0 ; I ] r(t)
and define x̄(t) = [ x^T(t)  xI^T(t) ]^T

The optimal feedback for the cost
    J = ∫_0^∞ ( x̄^T Rxx x̄ + u^T Ruu u ) dt
is full-state feedback (found using LQR approach in Topic 12)
    u(t) = −[ K  KI ][ x ; xI ] = −K̄ x̄(t)
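The integrator augmentation above can be sketched numerically. The following is an assumed Python/NumPy translation (the course code is MATLAB) for the example plant G(s) = (s+2)/((s+1)(s²+s+2)) used in the code below; the controllable-canonical realization and the weights are illustrative choices, and SciPy's `solve_continuous_are` stands in for MATLAB's `lqr`:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Assumed controllable-canonical realization of G(s) = (s+2)/(s^3+2s^2+3s+2)
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-2.0, -3.0, -2.0]])
B = np.array([[0.0], [0.0], [1.0]])
Cy = np.array([[2.0, 1.0, 0.0]])   # numerator (s+2) -> output row [2 1 0]

# Augment an integrator on the tracking error: d/dt xI = r - y
Abar = np.block([[A, np.zeros((3, 1))],
                 [-Cy, np.zeros((1, 1))]])
Bbar = np.vstack([B, [[0.0]]])

# LQR on the augmented state penalizes both x and the integral state xI
Rxx = np.diag([1.0, 1.0, 1.0, 30.0])   # heavy weight on xI, as in the code below
Ruu = np.array([[1.0]])
P = solve_continuous_are(Abar, Bbar, Rxx, Ruu)
Kbar = np.linalg.solve(Ruu, Bbar.T @ P)    # u = -Kbar @ [x; xI]

# Augmented closed loop must be stable, which forces e -> 0 for constant r
eigs = np.linalg.eigvals(Abar - Bbar @ Kbar)
print(np.all(eigs.real < 0))   # True
```

Since the integrator state can only settle when its input e(t) is zero, any stable equilibrium of this loop has zero steady-state error, regardless of plant-parameter errors.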

October 17, 2010


Fall 2010 16.30/31 133

Once we have used LQR to design the control gains K and KI, we
typically implement the controller using the architecture:

[Block diagram: the error r − y drives an integrator 1/s to produce xI,
which is fed back through KI; the state x is fed back through K; the
combined input u drives the plant G(s) to produce y.]

Fig. 1: Typical implementation of the LQ servo

Example: compare the integrator and N (see 11-??) approaches
when the model of the system dynamics is wrong.

Case 1: nominal, so the C used to design the K's and N's is
the same used in the simulations

Case 2: off-nominal, so the C used to design the K's and N's
differs from the one used in the simulations

Obvious that the N approach is inaccurate at steady state, but the
integrator approaches still do OK: robustness to modeling errors.

October 17, 2010


Fall 2010 16.30/31 134

Code: LQR Servo


% LQ servo
% Fall 2010
% Jonathan How, MIT
close all;clear all;j=sqrt(-1);
G=tf([1 2],conv([1 1],[1 1 2]));
[a,b,c,d]=ssdata(G);
% consider possibility that c is wrong in the design
% use c in design of K's and N's, but simulate with cpert
nom=2;
if nom
cpert=c; % nominal case
else
cpert=1.5*c; % off-nominal case
end
ns=size(a,1);
z=zeros(ns,1);
Rxx=c'*c;
Ruu=1;

klqr=lqr(a,b,Rxx,Ruu); % basic LQR gains
Nbar=inv(-c*inv(a-b*klqr)*b); % design with c
syslqr=ss(a-b*klqr,b*Nbar,cpert,0); % simulate with cpert

Rxx1=[Rxx z;z' 30];
Rxx2=[Rxx z;z' 1];

abar=[a z;-c 0];bbar=[b;0]; % design with c
kbar1=lqr(abar,bbar,Rxx1,Ruu);
kbar2=lqr(abar,bbar,Rxx2,Ruu);
abarpert=[a z;-cpert 0]; % simulate with cpert
sys1=ss(abarpert-bbar*kbar1,[z;1],[cpert 0],0);
sys2=ss(abarpert-bbar*kbar2,[z;1],[cpert 0],0);

figure(1);
t=[0:.1:10]';
[ylqr,tlqr]=step(ss(syslqr),t);
[y1,t1]=step(ss(sys1),t);
[y2,t2]=step(ss(sys2),t);
plot(t,ylqr,t,y1,t,y2,'LineWidth',2)
xlabel('Time')
legend('LQR with N','Int Penalty 30','Int Penalty 1','Location','SouthEast')

if nom
export_fig lqrservo1 -pdf
else
export_fig lqrservo2 -pdf
end

October 17, 2010


Fall 2010 16.30/31 135

Fig. 2: Comparison of use of integrator and N weighting - nominal case

Fig. 3: Comparison of use of integrator and N weighting - off-nominal case

October 17, 2010


Topic #14

16.30/31 Feedback Control Systems

State-Space Systems
Open-loop Estimators
Closed-loop Estimators

Observer Theory (no noise): Luenberger,
IEEE TAC, Vol. 16, No. 6, pp. 596-602, Dec. 1971.

Estimation Theory (with noise): Kalman

Reading: FPE 7.5

Fall 2010 16.30/31 142

Estimators/Observers

Problem: So far we have assumed that we have full access to the
state x(t) when we designed our controllers.

Most often all of this information is not available.

Usually can only feed back information that is developed from the
sensor measurements.
Could try output feedback
    u = −Kx  becomes  u = −K̂y
Same as the proportional feedback we looked at at the beginning
of the root locus work.
This type of control is very difficult to design in general.

Alternative approach: Develop a replica of the dynamic system
that provides an estimate of the system states based on the measured
output of the system.

New plan:
1. Develop an estimate of x(t) that will be called x̂(t).
2. Then switch from u(t) = −Kx(t) to u(t) = −Kx̂(t).

Two key questions:
How do we find x̂(t)?
Will this new plan work?

October 17, 2010


Fall 2010 16.30/31 143

Estimation Schemes
Assume that the system model is of the form:
    ẋ(t) = Ax(t) + Bu(t) ,  x(0) unknown
    y(t) = Cx(t)
where
1. A, B, and C are known.
2. u(t) is known.
3. Measurable outputs are y(t), from C ≠ I.

Goal: Develop a dynamic system whose state
    x̂(t) = x(t)
for all time t ≥ 0. Two primary approaches:
Open-loop.
Closed-loop.

October 17, 2010


Fall 2010 16.30/31 144

Open-loop Estimator

Given that we know the plant matrices and the inputs, we can just
perform a simulation that runs in parallel with the system
    d/dt x̂(t) = Ax̂(t) + Bu(t)

Then x̂(t) ≡ x(t) ∀ t, provided that x̂(0) = x(0)

[Block diagram: the input u drives both the actual system (A, B, C;
state x, output y) and the estimator (A, B, C; state x̂, output ŷ).]

Analysis of this case:
    ẋ(t) = Ax(t) + Bu(t)
    d/dt x̂(t) = Ax̂(t) + Bu(t)

Define the estimation error x̃(t) = x(t) − x̂(t).
Now want x̃(t) = 0 ∀ t. (But is this realistic?)

Major Problem: We do not know x(0)

October 17, 2010


Fall 2010 16.30/31 145

Subtract to get:
    d/dt (x(t) − x̂(t)) = A(x(t) − x̂(t))  ⇒  d/dt x̃(t) = Ax̃(t)
which has the solution
    x̃(t) = e^{At} x̃(0)
Gives the estimation error in terms of the initial error.

Does this guarantee that x̃(t) = 0 ∀ t?
Or even that x̃(t) → 0 as t → ∞? (which is a more realistic goal).

Response is fine if x̃(0) = 0. But what if x̃(0) ≠ 0?

If A stable, then x̃(t) → 0 as t → ∞, but the dynamics of the
estimation error are completely determined by the open-loop dynamics
of the system (eigenvalues of A).
Could be very slow.
No obvious way to modify the estimation error dynamics.

Open-loop estimation does not seem to be a very good idea.
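The slow decay is easy to see numerically. A minimal sketch (an assumed Python/NumPy translation; the course uses MATLAB) with the stable-but-slow example plant from later in this topic:

```python
import numpy as np
from scipy.linalg import expm

# Open-loop error: xtilde(t) = e^{A t} xtilde(0); decay set entirely by eig(A)
A = np.array([[-1.0, 1.5],
              [1.0, -2.0]])
x0 = np.array([-0.5, -1.0])              # initial estimation error x(0) - xhat(0)

slow = max(np.linalg.eigvals(A).real)    # slowest mode, about -0.18
err_t4 = np.linalg.norm(expm(A * 4.0) @ x0)
print(round(slow, 2), err_t4 > 0.2)      # error still sizeable after 4 sec
```

Even though A is stable, the dominant eigenvalue near −0.18 means the initial error takes tens of seconds to die out, with no knob to change it.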

October 17, 2010


Fall 2010 16.30/31 146

Closed-loop Estimator

An obvious way to fix this problem is to use the additional information
available:
How well does the estimated output match the measured output?
Compare: y(t) = Cx(t) with ŷ(t) = Cx̂(t)
Then form ỹ(t) = y(t) − ŷ(t) ≡ Cx̃(t)

[Block diagram: u drives both the actual system (output y) and the
estimator (output ŷ); the residual ỹ = y − ŷ is fed back into the
estimator through the gain L.]

Approach: Feed back ỹ(t) to improve our estimate of the state.

Basic form of the estimator is:
    d/dt x̂(t) = Ax̂(t) + Bu(t) + Lỹ(t)
    ŷ(t) = Cx̂(t)
where L is the user-selectable gain matrix.

Analysis:
    d/dt x̃(t) = ẋ(t) − d/dt x̂(t)
    = [Ax(t) + Bu(t)] − [Ax̂(t) + Bu(t) + L(y(t) − ŷ(t))]
    = A(x(t) − x̂(t)) − L(Cx(t) − Cx̂(t))
    = Ax̃(t) − LCx̃(t)
    = (A − LC)x̃(t)

October 17, 2010


Fall 2010 16.30/31 147

So the closed-loop estimation error dynamics are now
    d/dt x̃(t) = (A − LC)x̃(t)
with solution
    x̃(t) = e^{(A−LC)t} x̃(0)

Bottom line: Can select the gain L to attempt to improve the
convergence of the estimation error (and/or speed it up).
But now must worry about observability of the system model.

Closed-loop estimator:
    d/dt x̂(t) = Ax̂(t) + Bu(t) + Lỹ(t)
    = Ax̂(t) + Bu(t) + L(y(t) − ŷ(t))
    = (A − LC)x̂(t) + Bu(t) + Ly(t)
    ŷ(t) = Cx̂(t)
Which is a dynamic system with poles given by λi(A − LC) and
which takes the measured plant outputs as an input and generates
an estimate of x(t).

October 17, 2010


Fall 2010 16.30/31 148

Regulator/Estimator Comparison

Regulator Problem:
Concerned with controllability of (A, B)

For a controllable system we can place the eigenvalues
of A − BK arbitrarily.
Choose K ∈ R^{1×n} (SISO) such that the closed-loop poles
    det(sI − A + BK) = Φc(s)
are in the desired locations.

Estimator Problem:
For estimation, we're concerned with observability of the pair (A, C).

For an observable system we can place the eigenvalues of
A − LC arbitrarily.
Choose L ∈ R^{n×1} (SISO) such that the closed-loop poles
    det(sI − A + LC) = Φo(s)
are in the desired locations.

These problems are obviously very similar; in fact they are called
dual problems.

October 17, 2010


Fall 2010 16.30/31 149

Estimation Gain Selection

The procedure for selecting L is very similar to that used for the
regulator design process.

Write the system model in observer canonical form
    ẋ = [ −a1 1 0 ; −a2 0 1 ; −a3 0 0 ] x + [ b1 ; b2 ; b3 ] u
    y = [ 1 0 0 ] x

Now very simple to form
    A − LC = [ −a1 1 0 ; −a2 0 1 ; −a3 0 0 ] − [ l1 ; l2 ; l3 ][ 1 0 0 ]
           = [ −a1−l1 1 0 ; −a2−l2 0 1 ; −a3−l3 0 0 ]
The closed-loop poles of the estimator are at the roots of
    det(sI − A + LC) = s^3 + (a1 + l1)s^2 + (a2 + l2)s + (a3 + l3) = 0
Use the Pole Placement algorithm with this characteristic equation.

Note: the estimator equivalent of Ackermann's formula is that
    L = Φe(A) Mo^{−1} [ 0 ; ... ; 0 ; 1 ]
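In observer canonical form each gain falls out by matching coefficients, l_i = α_i − a_i, where α_i are the desired characteristic-polynomial coefficients. A hedged NumPy sketch with assumed example coefficients (not from the notes):

```python
import numpy as np

# Observer canonical form: estimator char. poly is
#   s^3 + (a1+l1)s^2 + (a2+l2)s + (a3+l3), so l_i = alpha_i - a_i
a1, a2, a3 = 2.0, 3.0, 1.0                      # assumed example coefficients
A = np.array([[-a1, 1.0, 0.0],
              [-a2, 0.0, 1.0],
              [-a3, 0.0, 0.0]])
C = np.array([[1.0, 0.0, 0.0]])

desired = np.poly([-2.0, -3.0, -4.0])           # s^3 + 9s^2 + 26s + 24
L = (desired[1:] - np.array([a1, a2, a3])).reshape(3, 1)   # [7, 23, 23]^T
print(np.allclose(np.sort(np.linalg.eigvals(A - L @ C).real), [-4, -3, -2]))
```

The first column of A − LC becomes [−α1; −α2; −α3], so the estimator poles land exactly where requested.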

October 17, 2010


Fall 2010 16.30/31 1410

Dual Design Approach

Note that the poles of (A − LC) and (A − LC)^T are identical.
Also we have that (A − LC)^T = A^T − C^T L^T
So designing L^T for this transposed system looks like a standard
regulator problem (A − BK) where
    A ⇒ A^T
    B ⇒ C^T
    K ⇒ L^T
So we can use
    Ke = acker(A^T, C^T, P) ,  L ≡ Ke^T

In fact, just as k=lqr(A,B,Q,R) returns a good set of control gains,
can use
    Ke = lqr(A^T, C^T, Q, R) ,  L ≡ Ke^T
to design a good set of optimal estimator gains
Roles of Q and R explained in 16.322
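The same duality works outside MATLAB. A sketch (assumed Python translation) using SciPy's `place_poles` on the transposed pair, with the plant and estimator poles from the example that follows:

```python
import numpy as np
from scipy.signal import place_poles

# Design L for (A, C) by running a *regulator* pole-placement on the
# dual pair (A^T, C^T), then transposing: mirrors Ke = place(A', C', P)
A = np.array([[-1.0, 1.5],
              [1.0, -2.0]])
C = np.array([[1.0, 0.0]])

desired = np.array([-3.0, -4.0])                  # estimator poles
Ke = place_poles(A.T, C.T, desired).gain_matrix   # regulator gain for dual system
L = Ke.T                                          # estimator gain

achieved = np.sort(np.linalg.eigvals(A - L @ C).real)
print(np.allclose(achieved, [-4.0, -3.0]))        # True
```

Because eig(A − LC) = eig(A^T − C^T L^T), the gain returned for the dual "regulator" places the estimator poles exactly.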

October 17, 2010


Fall 2010 16.30/31 1411

Estimators Example

Simple system
    A = [ −1 1.5 ; 1 −2 ] ,  B = [ 1 ; 0 ] ,  x(0) = [ −0.5 ; −1 ]
    C = [ 1 0 ] ,  D = 0
Assume that the initial conditions are not well known.
System stable, but λmax(A) = −0.18
Test observability:
    rank [ C ; CA ] = rank [ 1 0 ; −1 1.5 ] = 2

Use open- and closed-loop estimators

Since the initial conditions are not well known, use
    x̂(0) = [ 0 ; 0 ]

Open-loop estimator:
    d/dt x̂(t) = Ax̂(t) + Bu(t)
    ŷ(t) = Cx̂(t)

Typically simulate both systems together for simplicity

October 17, 2010


Fall 2010 16.30/31 1412

Open-loop case:
    ẋ(t) = Ax(t) + Bu(t)
    y(t) = Cx(t)
    d/dt x̂(t) = Ax̂(t) + Bu(t)
    ŷ(t) = Cx̂(t)

    [ ẋ(t) ; d/dt x̂(t) ] = [ A 0 ; 0 A ][ x(t) ; x̂(t) ] + [ B ; B ] u(t)

    [ x(0) ; x̂(0) ] = [ −0.5 ; −1 ; 0 ; 0 ]

    [ y(t) ; ŷ(t) ] = [ C 0 ; 0 C ][ x(t) ; x̂(t) ]

Closed-loop case:
    ẋ(t) = Ax(t) + Bu(t)
    d/dt x̂(t) = (A − LC)x̂(t) + Bu(t) + LCx(t)

    [ ẋ(t) ; d/dt x̂(t) ] = [ A 0 ; LC A − LC ][ x(t) ; x̂(t) ] + [ B ; B ] u(t)

Example uses a strong u(t) to shake things up
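Since u(t) cancels out of both error systems, the comparison can be made directly on the error dynamics. A hedged NumPy sketch (the L below was computed by hand to place eig(A − LC) at −3 and −4, matching the `place` call in the code at the end of this topic):

```python
import numpy as np
from scipy.linalg import expm

# Open loop:   d/dt xtilde = A xtilde
# Closed loop: d/dt xtilde = (A - LC) xtilde
A = np.array([[-1.0, 1.5], [1.0, -2.0]])
C = np.array([[1.0, 0.0]])
L = np.array([[4.0], [7.0 / 3.0]])   # hand-computed: eig(A - LC) = {-3, -4}
x0 = np.array([-0.5, -1.0])          # initial error x(0) - xhat(0)

assert np.allclose(np.sort(np.linalg.eigvals(A - L @ C).real), [-4.0, -3.0])

t = 2.0
err_ol = np.linalg.norm(expm(A * t) @ x0)
err_cl = np.linalg.norm(expm((A - L @ C) * t) @ x0)
print(err_cl < 0.1 * err_ol)   # True: feedback makes the error decay much faster
```

After only 2 seconds the closed-loop error is orders of magnitude below the open-loop error, which is still dominated by the slow −0.18 mode.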

October 17, 2010


Fall 2010 16.30/31 1413

[Figure: states x1, x2 and their estimates (top) and the estimation
error (bottom) vs. time over 0-4 sec, for the open-loop estimator.]

Fig. 1: Open-loop estimator error converges to zero, but very slowly.

[Figure: the same two plots for the closed-loop estimator.]

Fig. 2: Closed-loop estimator. Convergence looks much better.

October 17, 2010


Fall 2010 16.30/31 1414

Where Put Estimator Poles?

Location heuristics for poles still apply
Main difference: probably want to make the estimator faster than
you intend to make the regulator, since it should enhance the control,
which is based on x̂(t).
Crude ROT: Factor of 2 in the time constant 1/(ζωn) associated
with the regulator poles.

Note: When designing a regulator, we're concerned with bandwidth
of the control: getting too high often results in control commands
that saturate the actuators and/or change rapidly.

Different concerns for the estimator:
Loop closed inside computer, so saturation not a problem.
However, measurements y are often noisy, and must be careful
how we use them to develop state estimates.

High-bandwidth estimators tend to accentuate the effect of sensing
noise in the estimate.
State estimates tend to track the data in the measurements,
which could be fluctuating randomly due to the noise.

Low-bandwidth estimators have lower gains and tend to rely more
heavily on the plant model
Essentially an open-loop estimator: tends to ignore the measurements
and just uses the plant model.

October 17, 2010


Fall 2010 16.30/31 1415

Final Thoughts

Note that the feedback gain L in the estimator only stabilizes the
estimation error.
If the system is unstable, then the state estimates will also go to
∞, with zero error from the actual states.

Estimation is an important concept of its own.
Not always just part of the control system
Critical issue for guidance and navigation systems

Can develop an optimal estimate as well
More complete discussion requires that we study stochastic
processes and optimization theory.
More in 16.322 (take in Spring, or see 2004 OCW notes)

Estimation is all about which you trust more: your measurements
or your model.

October 17, 2010


Fall 2010 16.30/31 1416

Code: Estimator

% Examples of estimator performance
%
% Jonathan How
% Oct 2010
%
% plant dynamics
%
close all;clear all
set(0,'DefaultLineLineWidth',2);
set(0,'DefaultlineMarkerSize',10);set(0,'DefaultlineMarkerFace','b')
set(0, 'DefaultAxesFontSize', 12);set(0, 'DefaultTextFontSize', 12)

set(0,'DefaultFigureColor','w',...
'DefaultAxesColor','w',...
'DefaultAxesXColor','k',...
'DefaultAxesYColor','k',...
'DefaultAxesZColor','k',...
'DefaultTextColor','k')

a=[-1 1.5;1 -2];b=[1 0]';c=[1 0];d=0;
%
% estimator gain calc
l=place(a',c',[-3 -4]);l=l'
%
% plant initial cond
xo=[-.5;-1];
% estimator initial cond
xe=[0 0]';
%
t=[0:.1:10];
%
% inputs
u=0;u=[ones(15,1);-ones(15,1);ones(15,1)/2;-ones(15,1)/2;zeros(41,1)];
%
% open-loop estimator
A_ol=[a zeros(size(a));zeros(size(a)) a];
B_ol=[b;b];
C_ol=[c zeros(size(c));zeros(size(c)) c];
D_ol=zeros(2,1);
%
% closed-loop estimator
A_cl=[a zeros(size(a));l*c a-l*c];
B_cl=[b;b];
C_cl=[c zeros(size(c));zeros(size(c)) c];
D_cl=zeros(2,1);

[y_cl,x_cl]=lsim(A_cl,B_cl,C_cl,D_cl,u,t,[xo;xe]);
[y_ol,x_ol]=lsim(A_ol,B_ol,C_ol,D_ol,u,t,[xo;xe]);

figure(1);clf;subplot(211)
plot(t,x_cl(:,[1 2]),t,x_cl(:,[3 4]),'--','LineWidth',2);axis([0 4 -1 1]);
title('Closed-loop estimator');ylabel('states');xlabel('time')
text(.25,.4,'x_1');text(.5,.55,'x_2');subplot(212)
plot(t,x_cl(:,[1 2])-x_cl(:,[3 4]))
setlines(2);axis([0 4 -1 1]);grid on
ylabel('estimation error');xlabel('time')

figure(2);clf;subplot(211)
plot(t,x_ol(:,[1 2]),t,x_ol(:,[3 4]),'--','LineWidth',2);axis([0 4 -1 1])
title('Open-loop estimator');ylabel('states');xlabel('time')
text(.25,.4,'x_1');text(.5,.55,'x_2');subplot(212)
plot(t,x_ol(:,[1 2])-x_ol(:,[3 4]))
setlines(2);axis([0 4 -1 1]);grid on
ylabel('estimation error');xlabel('time')

figure(1);export_fig est11 -pdf
figure(2);export_fig est12 -pdf

October 17, 2010


Topic #15

16.30/31 Feedback Control Systems

State-Space Systems
Closed-loop control using estimators and regulators.

Dynamic output feedback

Back to reality

Reading: FPE 7.6


Fall 2010 16.30/31 152

Combined Estimators and Regulators

Now evaluate stability and/or performance of a controller when K
designed assuming
    u(t) = −Kx(t)
but implemented as
    u(t) = −Kx̂(t)

Assume we have designed a closed-loop estimator with gain L
    d/dt x̂(t) = (A − LC)x̂(t) + Bu(t) + Ly(t)
    ŷ(t) = Cx̂(t)

The closed-loop system dynamics are given by:
    ẋ(t) = Ax(t) + Bu(t)
    d/dt x̂(t) = (A − LC)x̂(t) + Bu(t) + Ly(t)
    y(t) = Cx(t)
    ŷ(t) = Cx̂(t)
    u(t) = −Kx̂(t)

Which can be compactly written as:
    [ ẋ(t) ; d/dt x̂(t) ] = [ A  −BK ; LC  A − BK − LC ][ x(t) ; x̂(t) ]  ⇒  ẋcl(t) = Acl xcl(t)

This does not look too good at this point: not even obvious that
the closed-loop system is stable.
    λi(Acl) = ??

November 5, 2010
Fall 2010 16.30/31 153

Can fix this problem by introducing a new variable x̃ = x − x̂ and
then converting the closed-loop system dynamics using the
similarity transformation T
    x̄cl(t) = [ x(t) ; x̃(t) ] = [ I 0 ; I −I ] xcl(t) ≡ T xcl(t)

Note that T = T^{−1}

Now rewrite the system dynamics in terms of the state x̄cl(t)
    Acl ⇒ T Acl T^{−1} ≡ Ācl
Since similarity transformations preserve the eigenvalues we are
guaranteed that
    λi(Acl) ≡ λi(Ācl)

Work through the math:
    Ācl = [ I 0 ; I −I ][ A  −BK ; LC  A − BK − LC ][ I 0 ; I −I ]
        = [ A  −BK ; A − LC  −A + LC ][ I 0 ; I −I ]
        = [ A − BK  BK ; 0  A − LC ]

Because Ācl is block upper triangular, we know that the closed-loop
poles of the system are given by
    det(sI − Ācl) ≡ det(sI − (A − BK)) · det(sI − (A − LC)) = 0

November 5, 2010
Fall 2010 16.30/31 154

Observation: Closed-loop poles for this system consist of
the union of the regulator poles and estimator poles.

So we can just design the estimator/regulator separately and combine
them at the end.
Called the Separation Principle.
Keep in mind that the pole locations you are picking for these two
sub-problems will also be the closed-loop pole locations.

Note: Separation principle means that there will be no ambiguity or
uncertainty about the stability and/or performance of the closed-loop
system.
The closed-loop poles will be exactly where you put them!!
And we have not even said which compensator achieves this amazing
accomplishment!!!
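The separation principle is easy to confirm numerically. A NumPy sketch (assumed Python translation of the MATLAB workflow) using the gains from the simple example later in this topic:

```python
import numpy as np

# A=[[0,1],[-1,-1]], B=[[0],[1]], C=[[1,0]]; K places regulator poles at
# -4±4j, Lg places estimator poles at -10,-10 (values from the example)
A = np.array([[0.0, 1.0], [-1.0, -1.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
K = np.array([[31.0, 7.0]])
Lg = np.array([[19.0], [80.0]])

# Closed loop in the (x, xhat) coordinates from the previous page
Acl = np.block([[A, -B @ K],
                [Lg @ C, A - B @ K - Lg @ C]])
cl = np.sort_complex(np.linalg.eigvals(Acl))
expected = np.sort_complex(np.concatenate(
    [np.linalg.eigvals(A - B @ K), np.linalg.eigvals(A - Lg @ C)]))
print(np.allclose(cl, expected))   # True: regulator poles ∪ estimator poles
```

The 4 closed-loop eigenvalues come out as {−4 ± 4j} ∪ {−10, −10}, exactly the two sub-designs' poles.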

November 5, 2010
Fall 2010 16.30/31 155

The Compensator

Dynamic Output Feedback Compensator is the combination of
the regulator and estimator using u(t) = −Kx̂(t)
    d/dt x̂(t) = (A − LC)x̂(t) + Bu(t) + Ly(t)
    d/dt x̂(t) = (A − BK − LC)x̂(t) + Ly(t)
    u(t) = −Kx̂(t)

Rewrite with new state xc(t) ≡ x̂(t)
    ẋc(t) = Ac xc(t) + Bc y(t)
    u(t) = Cc xc(t)
where the compensator dynamics are given by:
    Ac ≡ A − BK − LC ,  Bc ≡ L ,  Cc ≡ −K
Note that the compensator maps sensor measurements to actuator
commands, as expected.

Closed-loop system stable if regulator/estimator poles placed in the
LHP, but compensator dynamics do not need to be stable.
    λi(A − BK − LC) = ??

November 5, 2010
Fall 2010 16.30/31 156

For consistency with the implementation of classical controllers, define
the compensator transfer function as
    Gc(s) = −Cc(sI − Ac)^{−1}Bc
so that
    u(t) = −Gc(s)y(t)

Note that it is often very easy to provide classical interpretations (such
as lead/lag) for the compensator Gc(s).

Can implement this compensator with a reference command r(t) using
feedback on e(t) = r(t) − y(t) rather than just y(t)

[Block diagram: r → Σ (minus y) → e → Gc(s) → u → Gp(s) → y]

    u(t) = Gc(s)e(t) = Gc(s)(r(t) − y(t))
Still have u(t) = −Gc(s)y(t) if r(t) = 0.
So now write state-space model of the compensator as
    ẋc(t) = Ac xc(t) + Bc e(t)
    u(t) = Cc xc(t)

Intuitively appealing setup because it is the same approach used
for the classical control, but it turns out not to be the best approach
More on this later.

November 5, 2010
Fall 2010 16.30/31 157

Mechanics
Basics:
    e(t) = r(t) − y(t) ,  u(t) = Gc e(t) ,  y(t) = G u(t)

    Gc(s):  ẋc(t) = Ac xc(t) + Bc e(t) ,  u(t) = Cc xc(t)
    G(s):   ẋ(t) = Ax(t) + Bu(t) ,  y(t) = Cx(t)

Loop dynamics L(s) = G(s)Gc(s)  ⇒  y(t) = L(s)e(t)
    ẋ(t) = Ax(t) + BCc xc(t)
    ẋc(t) =          Ac xc(t) + Bc e(t)

    L(s):  [ ẋ(t) ; ẋc(t) ] = [ A  BCc ; 0  Ac ][ x(t) ; xc(t) ] + [ 0 ; Bc ] e(t)
           y(t) = [ C  0 ][ x(t) ; xc(t) ]

To close the loop, note that e(t) = r(t) − y(t), then
    [ ẋ ; ẋc ] = [ A  BCc ; 0  Ac ][ x ; xc ] + [ 0 ; Bc ] ( r(t) − [ C 0 ][ x ; xc ] )
              = [ A  BCc ; −BcC  Ac ][ x ; xc ] + [ 0 ; Bc ] r(t)
    y(t) = [ C  0 ][ x ; xc ]

Acl is not exactly the same as on page 15-2 because we have
rearranged where the negative sign enters into the problem. Same
result though.

November 5, 2010
Fall 2010 16.30/31 158

Scaling DOFB Compensator

DOFB = Dynamic Output Feedback

Assume that the closed-loop system has been written for the system
on 15-6; then it is still possible that the DC gain of the response from
r to y is not unity
Scalar input/output assumed for now

Need to scale the closed-loop to fix this using r → N r

Closed-loop dynamics are
    ẋcl(t) = Acl xcl(t) + Bcl N r(t)
    y(t) = Ccl xcl(t)

So to have the transfer function from r to y have unity gain at DC, must have
    N = −[ Ccl (Acl)^{−1} Bcl ]^{−1}
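A NumPy sketch of this scaling (an assumed Python translation of the MATLAB mechanics), using the plant and compensator numbers from the simple example that follows:

```python
import numpy as np

# Stack plant with compensator, close unity feedback on e = r - y,
# then pick N so the DC gain from r to y is exactly one
A = np.array([[0.0, 1.0], [-1.0, -1.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
Ac = np.array([[-19.0, 1.0], [-112.0, -8.0]])   # A - BK - LC
Bc = np.array([[19.0], [80.0]])                 # L
Cc = np.array([[31.0, 7.0]])                    # output row of Gc = K(sI-Ac)^{-1}L

Acl = np.block([[A, B @ Cc], [-Bc @ C, Ac]])    # closed loop from page 15-7
Bcl = np.vstack([np.zeros((2, 1)), Bc])
Ccl = np.hstack([C, np.zeros((1, 2))])

dc = (Ccl @ np.linalg.solve(-Acl, Bcl)).item()  # unscaled DC gain from r to y
N = 1.0 / dc
scaled = (Ccl @ np.linalg.solve(-Acl, Bcl * N)).item()
print(round(scaled, 6))   # 1.0
```

The unscaled DC gain is L(0)/(1 + L(0)) ≈ 0.92, so N ≈ 1.09 restores unity steady-state gain; note this is an open-loop fix and inherits the sensitivity discussed for N on 13-2.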

November 5, 2010
Fall 2010 16.30/31 159

Simple Compensator Example

Let G(s) = 1/(s^2 + s + 1) with state-space model given by:
    A = [ 0 1 ; −1 −1 ] ,  B = [ 0 ; 1 ] ,  C = [ 1 0 ] ,  D = 0

Design the regulator to place the poles at s = −4 ± 4j
    λi(A − BK) = −4 ± 4j  ⇒  K = [ 31  7 ]
Regulator pole time constant τc = 1/(ζωn) ≈ 1/4 = 0.25 sec

Put estimator poles with faster time constant τe ≈ 1/10
Use real poles, so Φe(s) = (s + 10)^2

    L = Φe(A) [ C ; CA ]^{−1} [ 0 ; 1 ]
      = ( [ 0 1 ; −1 −1 ]^2 + 20 [ 0 1 ; −1 −1 ] + 100 [ 1 0 ; 0 1 ] ) [ 1 0 ; 0 1 ]^{−1} [ 0 ; 1 ]
      = [ 99 19 ; −19 80 ][ 0 ; 1 ] = [ 19 ; 80 ]

November 5, 2010
Fall 2010 16.30/31 1510

Compensator:
    Ac = A − BK − LC
       = [ 0 1 ; −1 −1 ] − [ 0 ; 1 ][ 31 7 ] − [ 19 ; 80 ][ 1 0 ]
       = [ −19 1 ; −112 −8 ]

    Bc = L = [ 19 ; 80 ]

    Cc = −K = −[ 31  7 ]

a=[0 1;-1 -1];b=[0 1]';c=[1 0];d=0;
k=acker(a,b,[-4+4*j;-4-4*j]);
l=acker(a',c',[-10 -10]')';
%
% For state space for G_c(s)
%
ac=a-b*k-l*c;bc=l;cc=k;dc=0;

Compensator transfer function:
    Gc(s) = −Cc(sI − Ac)^{−1}Bc = U/E
          = 1149 (s + 2.553) / (s^2 + 27s + 264)

Note that the compensator has a low-frequency real zero and two
higher-frequency poles.

Thus it looks like a lead compensator.
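The quoted transfer function can be cross-checked without symbolic algebra. A small NumPy sketch (assumed Python translation; the course files are MATLAB) using the SISO identity det(sI − Ac + Bc K) = den(s) + num(s):

```python
import numpy as np

# num(s)/den(s) of K (sI - Ac)^{-1} L, i.e. of the quoted Gc(s)
Ac = np.array([[-19.0, 1.0], [-112.0, -8.0]])
Bc = np.array([[19.0], [80.0]])
K = np.array([[31.0, 7.0]])

den = np.poly(Ac)                    # characteristic polynomial of Ac
num = np.poly(Ac - Bc @ K) - den     # numerator via the determinant identity

gain = num[1]
zero = num[2] / num[1]               # near the quoted 2.553 (rounding)
print(int(round(gain)), round(zero, 3), [round(c) for c in den])
```

This reproduces den(s) = s² + 27s + 264 and a numerator 1149s + 2936, matching the lead-like Gc(s) above up to rounding of the zero location.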

November 5, 2010
Fall 2010 16.30/31 1511

Fig. 1: Simple plant compensator: lead over roughly 2-10 rad/sec.

Fig. 2: Loop transfer function L(s) shows the slope change near ωc = 5
rad/sec. Note that we have a large PM and GM.
November 5, 2010
Fall 2010 16.30/31 1512

Fig. 3: Freeze compensator poles and zeros; look at root locus of closed-
loop poles versus an additional loop gain (nominally 1).

Fig. 4: Closed-loop transfer function.

November 5, 2010
Fall 2010 16.30/31 1513
Code: Dynamic Output Feedback Example
% Combined estimator/regulator design for a simple system
% G = 1/(s^2+s+1)
% Jonathan How
% Fall 2010
%
%close all;clear all
set(0,'DefaultLineLineWidth',2)
set(0,'DefaultFigureColor','None')
set(0,'DefaultlineMarkerSize',8);set(0,'DefaultlineMarkerFace','b')
set(0, 'DefaultAxesFontSize', 12);set(0, 'DefaultTextFontSize', 12);
%
a=[0 1;-1 -1];b=[0 1]';c=[1 0];d=0;
k=acker(a,b,[-4+4*j;-4-4*j]);
l=acker(a',c',[-10 -11]')';
%
% For state space for G_c(s)
%
ac=a-b*k-l*c;bc=l;cc=k;dc=0;

G=ss(a,b,c,d);
Gc=ss(ac,bc,cc,dc);

f=logspace(-1,2,400);
g=freqresp(G,f*j);g=squeeze(g);
gc=freqresp(Gc,f*j);gc=squeeze(gc);

figure(1);clf;orient tall
subplot(211)
loglog(f,abs(g),f,abs(gc),'--');axis([.1 1e2 .02 1e2])
xlabel('Freq (rad/sec)');ylabel('Mag')
legend('Plant G','Compensator Gc','Location','East');
grid
subplot(212)
semilogx(f,180/pi*angle(g),f,180/pi*angle(gc),'--');
axis([.1 1e2 -200 50])
xlabel('Freq (rad/sec)');ylabel('Phase (deg)');grid

L=g.*gc;

figure(2);clf;orient tall
subplot(211)
loglog(f,abs(L),[.1 1e2],[1 1]);axis([.1 1e2 .2 1e2])
xlabel('Freq (rad/sec)');ylabel('Mag')
legend('Loop L');
grid
subplot(212)
semilogx(f,180/pi*phase(L.'),[.1 1e2],-180*[1 1]);
axis([.1 1e2 -290 0])
xlabel('Freq (rad/sec)');ylabel('Phase (deg)');grid
%
% loop dynamics L = G Gc
%
al=[a b*cc;zeros(2) ac];
bl=[zeros(2,1);bc];
cl=[c zeros(1,2)];
dl=0;
figure(3)
rlocus(al,bl,cl,dl)
%
% closed-loop dynamics
% unity gain wrapped around loop L
%
acl=al-bl*cl;bcl=bl;ccl=cl;dcl=d;
%
% scale closed-loop to get zero SS error
N=inv(-ccl*inv(acl)*bcl)

hold on;plot(eig(acl),'d','MarkerFaceColor','b');hold off
grid on
axis([-25 5 -15 15])
%
% closed-loop freq response
%
Gcl=ss(acl,bcl*N,ccl,dcl*N);
gcl=freqresp(Gcl,f*j);gcl=squeeze(gcl);

figure(4);clf
loglog(f,abs(g),f,abs(gcl),'--');
axis([.1 1e2 .01 1e2])
xlabel('Freq (rad/sec)');ylabel('Mag')
legend('Plant G','closed-loop G_{cl}');grid
title(['Factor of N=',num2str(round(1000*N)/1000),' applied to Closed loop'])

figure(5);clf
margin(al,bl,cl,dl)

figure(1);export_fig reg_est1 -pdf;
figure(2);export_fig reg_est2 -pdf;
figure(3);export_fig reg_est3 -pdf;
figure(4);export_fig reg_est4 -pdf;
figure(5);export_fig reg_est5 -pdf;

figure(6);clf
% new form on 16-7
bcl=[b;b];
N=inv(-ccl*inv(acl)*bcl)
Gnew=ss(acl,bcl*N,ccl,dcl*N);
[yorig,t]=step(Gcl,2.5);
[ynew]=step(Gnew,t);
plot(t,yorig,t,ynew);setlines(2);
legend('Original','New');grid on
ylabel('Y');xlabel('Time (s)')
figure(6);export_fig reg_est6 -pdf;

November 5, 2010
Fall 2010 16.30/31 1515

Fig. 5: Example #1: G(s) = 8·14·20 / ((s+8)(s+14)(s+20)).
[Frequency responses: magnitude and phase of the plant G and compensator
Gc, and of the loop L, vs. frequency (rad/sec).]

November 5, 2010
Fall 2010 16.30/31 1516

Fig. 6: Example #1: G(s) = 8·14·20 / ((s+8)(s+14)(s+20)).
[Bode diagram of the loop: Gm = 11 dB (at 40.4 rad/sec), Pm = 69.7 deg
(at 15.1 rad/sec); plus closed-loop magnitude comparison of the plant G,
Gcl unscaled, and Gcl scaled.]

November 5, 2010
Fall 2010 16.30/31 1517

Fig. 7: Example #1: G(s) = 8·14·20 / ((s+8)(s+14)(s+20)).
[Root locus plots showing closed-loop poles, plant poles and zeros, and
compensator poles and zeros.]

Two compensator zeros at −21.54 ± 6.63j draw the two lower-frequency
plant poles further into the LHP.
Compensator poles are at much higher frequency.
Looks like a lead compensator.

November 5, 2010
Fall 2010 16.30/31 1518

Fig. 8: Example #2: G(s) = 0.94 / (s^2 − 0.0297).
[Frequency responses of G, Gc, and the loop L.]

November 5, 2010
Fall 2010 16.30/31 1519

Fig. 9: Example #2: G(s) = 0.94 / (s^2 − 0.0297).
[Bode diagram: Gm = 11.8 dB (at 7.41 rad/sec), Pm = 36.6 deg (at 2.76
rad/sec); closed-loop magnitude comparison of G, Gcl unscaled, Gcl scaled.]

November 5, 2010
Fall 2010 16.30/31 1520

Fig. 10: Example #2: G(s) = 0.94 / (s^2 − 0.0297).
[Root locus plots showing closed-loop poles, plant poles and zeros, and
compensator poles and zeros.]

Compensator zero at −1.21 draws the two lower-frequency plant poles
further into the LHP.
Compensator poles are at much higher frequency.
Looks like a lead compensator.

November 5, 2010
Fall 2010 16.30/31 1521

Fig. 11: Example #3: G(s) = 8·14·20 / ((s−8)(s−14)(s−20)).
[Frequency responses of G, Gc, and the loop L.]

November 5, 2010
Fall 2010 16.30/31 1522

Fig. 12: Example #3: G(s) = 8·14·20 / ((s−8)(s−14)(s−20)).
[Bode diagram: Gm = 0.9 dB (at 24.2 rad/sec), Pm = 6.67 deg (at 35.8
rad/sec); closed-loop magnitude comparison of G, Gcl unscaled, Gcl scaled.]

November 5, 2010
Fall 2010 16.30/31 1523

Fig. 13: Example #3: G(s) = 8·14·20 / ((s−8)(s−14)(s−20)).
[Root locus plots showing closed-loop poles, plant poles and zeros, and
compensator poles and zeros.]

Compensator zeros at −3.72 ± 8.03j draw the two higher-frequency plant
poles further into the LHP. The lowest-frequency one heads into the LHP
on its own.
Compensator poles are at much higher frequency.

Not sure what this looks like.

November 5, 2010
Fall 2010 16.30/31 1524

Fig. 14: Example #4: G(s) = (s−1) / ((s+1)(s−3)).
[Frequency responses of G, Gc, and the loop L.]

November 5, 2010
Fall 2010 16.30/31 1525

Fig. 15: Example #4: G(s) = (s−1) / ((s+1)(s−3)).
[Bode diagram: Gm = 3.4 dB (at 4.57 rad/sec), Pm = 22.4 deg (at 1.41
rad/sec); closed-loop magnitude comparison of G, Gcl unscaled, Gcl scaled.]

November 5, 2010
Fall 2010 16.30/31 1526

Fig. 16: Example #4: G(s) = (s−1) / ((s+1)(s−3)).
[Root locus plots showing closed-loop poles, plant poles and zeros, and
compensator poles and zeros.]

Compensator zero at −1 cancels the plant pole. Note the very unstable
compensator pole at s = 9!!
Needed to get the RHP plant pole to branch off the real line and head
into the LHP.
Other compensator pole is at much higher frequency.

Not sure what this looks like.

November 5, 2010
Fall 2010 16.30/31 1527

B747 DOFB

The previous B747 controller assumed full-state feedback; would like
to extend that now to consider output feedback using the output of
the washout-filtered yaw rate e

To place the estimator poles, take the approach of just increasing the
frequency of the LQR closed-loop poles by a factor of 2:
    λd(Aest) = 2 λ(Alqr)

Then use pole placement to determine the estimator gains: place.m

Gp=WashFilt*sys*H;
set(Gp, 'statename', {'xwo' 'beta' 'yaw rate' 'roll' 'phi' 'xa'});
set(Gp, 'inputname', {'rudder inputs' 'aileron'},...
'outputname', {'filtered yaw rate' 'bank angle'});
[Ap,Bp,Cp,Dp]=ssdata(Gp);
[Klqr,S,Elqr]=lqr(Ap,Bp(:,1),Cp(1,:)'*Cp(1,:),0.1)

Eest=real(Elqr)*2+2*imag(Elqr)*sqrt(-1);
Kest=place(Ap',Cp(1,:)',Eest);Kest=Kest';

Note the transposes on the A and C here in the place command

Yields the gains
    Lest = [ 31.500  5.2985  7.854  122.6895  121.7798  54.858 ]
Much larger than the LQR gains because the poles are moving further

Fall 2010 16.30/31 15-28

Now form DOFB compensator as before:


1 na=size(Ap,1);
2 % Form compensator and closedloop
3 % see 155
4 ac=ApBp(:,1)*KlqrKest*Cp(1,:);bc=Kest;cc=Klqr;dc=0;
5 Gc=ss(ac,bc,cc,dc);
6 % see 157
7 acl=[Ap Bp(:,1)*cc;bc*Cp(1,:) ac];bcl=[zeros(na,1);bc];ccl=[Cpbeta,zeros(1,na)];dcl=0;
8 Gcl=ss(acl,bcl,ccl,dcl);

Consider the DOFB compensator for this system - it is 6th order because the augmented plant is (4+1+1). Gc pole-zero map and closed-loop poles shown:
[Figure: Gc pole-zero map (Gc zeros, Gc poles) and closed-loop poles, real axis -5 to 0]

November 5, 2010
Fall 2010 16.30/31 1529

Now consider the time response again to an initial perturbation in β:

[Figure: β time response to initial perturbation - OL, LQR, Gain, and DOFB, 0 to 30 sec]

Time response looks good - perhaps even better than LQR, but note that this is not inconsistent, as the DOFB might be using more control effort, and LQR balances that effort with the state cost.
[Figure: x² and u² vs. time for LQR and DOFB, 0 to 10 sec]

Fall 2010 16.30/31 15-30

Code: B747 LQR, Estimator and DOFB


% B747 lateral dynamics
% Jonathan How, Fall 2010
% working from Matlab Demo called jetdemo
clear all;%close all
%
A=[-.0558 -.9968 .0802 .0415;
    .598 -.115 -.0318 0;
  -3.05  .388 -.4650 0;
     0   .0805  1    0];
B=[ .00729  0;
   -0.475   0.00775;
    0.153   0.143;
    0       0];
C=[0 1 0 0; 0 0 0 1];D=[0 0; 0 0];
sys = ss(A,B,C,D);
set(sys, 'inputname', {'rudder' 'aileron'},...
    'outputname', {'yaw rate' 'bank angle'});
set(sys, 'statename', {'beta' 'yaw rate' 'roll rate' 'phi'});
[Yol,Tol]=initial(ss(A,B,[1 0 0 0],zeros(1,2)),[1 0 0 0]',[0:.1:30]);
[V,E]=eig(A)
%
% CONTROL actuator dynamics are a lag at 10
actn=10;actd=[1 10]; % H(s) in notes
H=tf({actn 0;0 1},{actd 1;1 1});
%
tau=3;washn=[1 0];washd=[1 1/tau]; % washout filter on yaw rate
WashFilt=tf({washn 0;0 1},{washd 1;1 1});
%
Gp=WashFilt*sys*H;
set(Gp, 'statename', {'xwo' 'beta' 'yaw rate' 'roll' 'phi' 'xa'});
set(Gp, 'inputname', {'rudder inputs' 'aileron'},...
    'outputname', {'filtered yaw rate' 'bank angle'});
[Ap,Bp,Cp,Dp]=ssdata(Gp);
[Klqr,S,Elqr]=lqr(Ap,Bp(:,1),Cp(1,:)'*Cp(1,:),0.1);rifd(Elqr)
%
% gain feedback on the filtered yaw rate
figure(1);clf
rlocus(Ap,Bp(:,1),Cp(1,:),0);
sgrid([.1 .2 .3 .4],[.7 .8 .9 1]);grid on;axis([-1.4 .1 -1.2 1.2])
Kgain=2;
Egain=eig(Ap-Bp(:,1)*Kgain*Cp(1,:));rifd(Egain)
hold on;plot(Egain+eps*sqrt(-1),'bd');hold off
export_fig b747_lqr1 -pdf

Cpbeta=[0 1 0 0 0 0]; % performance output variable
Ggain=ss(Ap-Bp(:,1)*Kgain*Cp(1,:),Bp,Cpbeta,0);
xp0=[0 1 0 0 0 0]';
[Ygain,Tgain]=initial(Ggain,xp0,Tol); %'

figure(2);clf
srl(ss(Ap,Bp(:,1),Cp(1,:),0));
sgrid([.1 .2 .3 .4],[.7 .8 .9 1])
axis([-1.25 1.25 -1.25 1.25]);grid on
hold on;plot(Elqr+eps*sqrt(-1),'bd');hold off
export_fig b747_lqr2 -pdf

% close the LQR loop
Acl=Ap-Bp(:,1)*Klqr;
Bcl=Bp(:,1);
% choose output to be just state beta and control u
Ccl=[Cpbeta;-Klqr];
Dcl=[0;0];
Glqr=ss(Acl,Bcl,Ccl,Dcl);
[Y,T]=initial(Glqr,xp0,Tol); %'

figure(3);clf
plot(Tol,Yol,T,Y(:,1),Tgain,Ygain);axis([0 30 -1 1]);
setlines(2)
legend('OL','LQR','Gain')
ylabel('\beta');xlabel('Time')
grid on
export_fig b747_lqr3 -pdf

% estimator poles made faster by jfactor

Fall 2010 16.30/31 15-31

jfactor=2.05;
Eest=jfactor*Elqr;
Kest=place(Ap',Cp(1,:)',Eest);Kest=Kest';

na=size(Ap,1);
% Form compensator and closed-loop
% see 155
ac=Ap-Bp(:,1)*Klqr-Kest*Cp(1,:);bc=Kest;cc=Klqr;dc=0;
Gc=ss(ac,bc,cc,dc);
% see 157
acl=[Ap -Bp(:,1)*cc;bc*Cp(1,:) ac];
bcl=[zeros(na,1);bc];
% choose output to be state beta and control u
ccl=[Cpbeta,zeros(1,na);zeros(1,na) -cc];
dcl=[0;0];
Gcl=ss(acl,bcl,ccl,dcl);

figure(5);clf
[p,z]=pzmap(Gc(1,1));
plot(z+sqrt(-1)*eps,'mo','MarkerFace','m')
hold on
plot(p+sqrt(-1)*eps,'bs','MarkerFace','b')
hold off
legend('Gc zeros','Gc poles','Location','NorthWest')
grid on;axis([-5.5 .5 -3 3])
export_fig b747_lqr5 -pdf

figure(4);clf
rlocus(Gp(1,1)*Gc);axis([-5.5 .5 -3 3])
hold on
[p,z]=pzmap(Gp(1,1));
plot(p+sqrt(-1)*eps,'rx','MarkerFace','r')
plot(z+sqrt(-1)*eps,'ro','MarkerFace','r')
[p,z]=pzmap(Gc(1,1));
plot(p+sqrt(-1)*eps,'mx','MarkerFace','m')
plot(z+sqrt(-1)*eps,'mo','MarkerFace','m')
p=eig(acl);
plot(p+sqrt(-1)*eps,'bd')
hold off
export_fig b747_lqr4 -pdf

% initial comp at zero
[Ycl,Tcl]=initial(Gcl,[xp0;xp0*0],Tol); %'

figure(3);clf
plot(Tol,Yol,T,Y(:,1),Tgain,Ygain,Tcl,Ycl(:,1));axis([0 30 -1 1]);
setlines(2)
legend('OL','LQR','Gain','DOFB')
ylabel('\beta');xlabel('Time')
grid on
export_fig b747_lqr3a -pdf

figure(6);clf
plot(T,Y(:,1).^2,Tcl,Ycl(:,1).^2,T,Y(:,2).^2,Tcl,Ycl(:,2).^2);
axis([0 10 0 1]);
hold off
setlines(2)
legend('LQR x^2','DOFB x^2','LQR u^2','DOFB u^2')
ylabel('x^2 and u^2');xlabel('Time')
grid on
export_fig b747_lqr3b -pdf

Fall 2010 16.30/31 15-32

Satellite Control Design

Simple system model - goal is to point the payload device (J2 = 0.1) using torque inputs on the larger spacecraft (J1 = 1)
Sensor is a star tracker on the payload (angle), and actuator is a torque applied to the spacecraft

Can form the state-space model using the state x = [θ2 θ̇2 θ1 θ̇1]ᵀ

Plant: Gp(s) = (10bs + 10k) / (s²(s² + 11bs + 11k)),  k = 0.1 and b = 0.005
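As a quick sanity check on the flexible mode this plant implies, the denominator factor s² + 11bs + 11k gives its natural frequency and damping directly. A minimal sketch in Python (the notes work in MATLAB; the plant values are from the notes, the script is not):

```python
import math

# Flexible-mode parameters from the denominator factor s^2 + 11 b s + 11 k
# of the satellite plant above (k = 0.1, b = 0.005).
k, b = 0.1, 0.005
wn = math.sqrt(11*k)        # natural frequency, rad/sec
zeta = 11*b / (2*wn)        # damping ratio -- very lightly damped
print(round(wn, 3), round(zeta, 4))   # -> 1.049 0.0262
```

The tiny damping ratio is exactly why the departure angles of these poles dominate the root-locus behavior discussed below.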

Rigid body part easily controlled using a lead, such as

Klead(s) = 1.5 (3s + 1)/(s + 5)
[Figure: Bode plots of the loop L, plant G, and compensator Gc (magnitude and phase vs. frequency), and the root locus for the lead design]

Fall 2010 16.30/31 15-33

So the problem with this non-collocated control is apparent - the flexible modes are driven unstable by the lead controller.
The problem with the lightly damped poles is that the loop phase changes very rapidly - get the wrong departure angle for the root locus.
Can improve the design using notch filtering.
The notch has a lightly damped zero pair at a frequency just below the resonant frequency of the system, resulting in a rapid phase change, but in the opposite direction.
Result is that the departure angle is now to the left, not the right.
Fixes the instability problem - but damping of the resonant pole is quite limited, so need to find a better location for the notch zeros.
[Figure: Bode plots of the loop L, plant G, and compensator Gc with the notch filter, and the corresponding root locus]

Fall 2010 16.30/31 15-34

Let's try to do it using DOFB - nothing particularly complicated here in the weights

kk=0.1;bb=0.05*sqrt(kk/10);J1=1;J2=0.1;
a=[0 1 0 0;-kk/J2 -bb/J2 kk/J2 bb/J2;0 0 0 1;kk/J1 bb/J1 -kk/J1 -bb/J1];na=size(a,1);
b=[0 0 0 1/J1]';c=[1 0 0 0];d=0;
k=lqr(a,b,c'*c,.1);
l=lqr(a',c',b*b',.01);l=l'
%
% Form state space for Gc(s)
%
ac=a-b*k-l*c;bc=l;cc=k;dc=0;

Look at the resulting DOFB controller - the plot marks the plant poles, the compensator zeros, and the compensator poles
[Figure: root locus for the DOFB satellite design, real axis -4 to 0]

So the DOFB controller puts zeros right near the plant poles - similar to the notch filter - which ensures that the departure angle of the poles is to the left, not the right, so all the poles are stabilized.
Appears to achieve much higher levels of damping in that pole than the classical/notch filter design did.

Fall 2010 16.30/31 15-35

[Figure: Bode plots of the loop L, plant G, and compensator Gc, and the closed-loop Bode diagram - Gm = 1.87 dB (at 1.89 rad/sec), Pm = 11.6 deg (at 1.49 rad/sec)]

Fall 2010 16.30/31 15-36

DOFB Summary

Separation principle gives a very powerful and simple way to develop a dynamic output feedback controller

Note that the designer now focuses on selecting the appropriate regulator and estimator pole locations.
Once those and the implementation architecture (i.e., how the reference input is applied, as on 15-5) are set, the closed-loop response is specified.
Can almost consider the compensator to be a by-product.

The examples show that the design process is extremely simple.

MIT OpenCourseWare
http://ocw.mit.edu

16.30 / 16.31 Feedback Control Systems


Fall 2010

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
Topic #16

16.30/31 Feedback Control Systems

Add reference inputs for the DOFB case

Reading: FPE 7.8, 7.9


Fall 2010 16.30/31 16-2

Reference Input - II

On page 15-6, the compensator was implemented with a reference command by changing to feedback on

e(t) = r(t) - y(t)

rather than -y(t)

r e u y
Gc(s) Gp(s)

So u(t) = Gc(s)e(t) = Gc(s)(r(t) - y(t))


Intuitively appealing because same approach used for classical
control, but it turns out not to be the best.

Can improve the implementation by using a more general form:

ẋc(t) = Ac xc(t) + Bc y(t) + G r(t)
u(t) = Cc xc(t) + N̄ r(t)

Now explicitly have two inputs to the controller (y(t) and r(t))
N̄ performs the same role that we used it for previously.
Introduce G as an extra degree of freedom in the problem.

Turns out that setting G = B N̄ is a particularly good choice.

Following presents some observations on the impact of G

November 2, 2010
Fall 2010 16.30/31 16-3

First: this generalization does not change the closed-loop poles of the system, regardless of the selection of G and N̄, since

ẋ(t) = Ax(t) + Bu(t),  y(t) = Cx(t)
ẋc(t) = Ac xc(t) + Bc y(t) + G r(t)
u(t) = Cc xc(t) + N̄ r(t)

[ẋ(t); ẋc(t)] = [A BCc; BcC Ac] [x(t); xc(t)] + [BN̄; G] r(t)

y(t) = [C 0] [x(t); xc(t)]

So the closed-loop poles are the eigenvalues of

[A BCc; BcC Ac]

(same as on 15-7, except the minus sign shows up in a different place - gives the same closed-loop eigenvalues) regardless of the choice of G and N̄
G and N̄ impact the forward path, not the feedback path

Second: if N̄ = 0 and G = -L = -Bc, then we recover the original implementation on 15-6, since the controller reduces to:

ẋc(t) = Ac xc(t) + Bc(y(t) - r(t)) = Ac xc(t) + Bc(-e(t))
u(t) = Cc xc(t)

With Gc(s) = Cc(sI - Ac)⁻¹Bc, this compensator can be written as u(t) = Gc(s)e(t) as before (since the negative signs cancel).

Fall 2010 16.30/31 16-4

Third: Given this extra freedom, what is the best way to use it?
One good objective is to select G and N̄ so that the state estimation error is independent of r(t).
With this choice, changes in r(t) do not tend to cause such large transients in x̃(t)
For this analysis, take x̃(t) = x(t) - xc(t), since xc(t) ≡ x̂(t)

x̃̇(t) = ẋ(t) - ẋc(t)
 = Ax(t) + Bu(t) - (Ac xc(t) + Bc y(t) + G r(t))
 = Ax(t) + B(Cc xc(t) + N̄ r(t)) - ({A + BCc - BcC} xc(t) + BcC x(t) + G r(t))

(using Ac = A + BCc - BcC). So

x̃̇(t) = (A - BcC)x(t) + BN̄ r(t) - ({A - BcC} xc(t) + G r(t))
 = (A - BcC)x̃(t) + BN̄ r(t) - G r(t)
 = (A - BcC)x̃(t) + (BN̄ - G) r(t)

Thus we can eliminate the effect of r(t) on x̃(t) by setting

G ≡ BN̄

Fall 2010 16.30/31 16-5

With this choice, the controller:

ẋc(t) = (A - BK - LC)xc(t) + Ly(t) + BN̄ r(t)
u(t) = -Kxc(t) + N̄ r(t)

can be rewritten as:

ẋc(t) = (A - LC)xc(t) + Ly(t) + Bu(t)
u(t) = -Kxc(t) + N̄ r(t)

So the control is computed using the reference before it is applied, and that control is applied to both system and estimator.

Fourth: if this generalization does not change the closed-loop poles of the system, then what does it change?
Recall that the zeros of the SISO y/r transfer function solve:

general:  det [sI - A  -BCc  -BN̄; -BcC  sI - Ac  -G; C  0  0] = 0

original: det [sI - A  -BCc  0; -BcC  sI - Ac  Bc; C  0  0] = 0

new:      det [sI - A  -BCc  -BN̄; -BcC  sI - Ac  -BN̄; C  0  0] = 0

Fall 2010 16.30/31 16-6

Hard to see how this helps, but consider the scalar new case¹⁴:

C(BCc BN̄ + (sI - Ac)BN̄) = 0
CBN̄ (BCc + sI - [A + BCc - BcC]) = 0
CBN̄ (sI - [A - BcC]) = 0

So the zero of the y/r path is the root of sI - [A - BcC] = 0, which is the pole of the estimator.
So by setting G = BN̄ as in the new case, the estimator dynamics are canceled out of the response of the system to a reference command.
Cancelation does not occur with the original implementation.
SISO, multi-dimensional state case handled in the appendix.

So in summary, if the SISO system is G(s) = b(s)/a(s), then with DOFB control, the closed-loop transfer function will be of the form:

T(s) = Y(s)/R(s) = Kg γ(s)b(s) / (Φe(s)Φc(s))

Designer determines Φc(s) and Φe(s)
Plant zeros b(s) remain, unless canceled out
Many choices for γ(s) - the one called new sets it to

γ(s) ≡ Φe(s)

to cancel out the estimator dynamics from the response.

¹⁴ no plant zeros in this case

Fall 2010 16.30/31 16-7

Fifth: select N̄ to ensure that the steady-state error is zero.
As before, this can be done by selecting N̄ so that the DC gain of the closed-loop y/r transfer function is 1:

(y/r)|DC = -[C 0] [A BCc; BcC Ac]⁻¹ [B; B] N̄ ≡ I

The new implementation of the compensator is

ẋc(t) = Ac xc(t) + Bc y(t) + BN̄ r(t)
u(t) = Cc xc(t) + N̄ r(t)

which has two separate inputs, y(t) and r(t)
Selection of N̄ ensures good steady-state performance
new implementation gives better transient performance.
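The DC-gain condition above is easy to evaluate numerically. A minimal sketch in Python (the notes use MATLAB; the scalar plant and gains here are made up for illustration):

```python
# Pick Nbar so the DC gain of the closed-loop y/r transfer function equals 1.
a, b, c = 1.0, 1.0, 1.0
K, L = 3.0, 5.0                    # regulator pole at -2, estimator pole at -4

Ac, Bc, Cc = a - b*K - L*c, L, -K  # compensator: u = Cc*xc + Nbar*r
Acl = [[a, b*Cc], [Bc*c, Ac]]      # closed-loop A = [A B*Cc; Bc*C Ac]

# 2x2 inverse by hand, then DC gain per unit Nbar: -[C 0]*inv(Acl)*[B; B]
det = Acl[0][0]*Acl[1][1] - Acl[0][1]*Acl[1][0]
inv0 = [Acl[1][1]/det, -Acl[0][1]/det]     # first row of inv(Acl)
g0 = -(c*(inv0[0]*b + inv0[1]*b))
Nbar = 1.0/g0
print(Nbar)   # -> 2.0
```

With this N̄ the closed-loop step response settles at exactly y = r, which is what the "Fifth" condition demands.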

Fall 2010 16.30/31 16-8

Appendix: Zero Calculation

Calculation on 16-5 requires we find (assume r and y scalars):

det [sI - Acl  Bcl; Ccl  0] = det(sI - Acl) det(0 - Ccl(sI - Acl)⁻¹Bcl)
 = Φe(s)Φc(s) det(-Ccl(sI - Acl)⁻¹Bcl)
 = -Φe(s)Φc(s) Ccl(sI - Acl)⁻¹Bcl

But finding (sI - Acl)⁻¹ can be tricky - it is simplified by the following steps.

First note that if T = [I 0; I -I], so that T = T⁻¹, then

(sI - Acl)⁻¹ = T T⁻¹(sI - Acl)⁻¹T T⁻¹ = T (sI - T Acl T)⁻¹ T

where

T Acl T ≜ Ābar_cl = [A - BK  BK; 0  A - LC]

(sI - Acl)⁻¹ = T [sI - (A - BK)  -BK; 0  sI - (A - LC)]⁻¹ T

Now use the fact that

[F H; 0 G]⁻¹ = [F⁻¹  -F⁻¹HG⁻¹; 0  G⁻¹]

To get that

(sI - Acl)⁻¹ = T [(sI - (A - BK))⁻¹  (sI - (A - BK))⁻¹BK(sI - (A - LC))⁻¹; 0  (sI - (A - LC))⁻¹] T

Fall 2010 16.30/31 16-9

Now consider the new case:

[C 0] (sI - Acl)⁻¹ [BN̄; BN̄]
 = [C 0] T [(sI - (A - BK))⁻¹  (sI - (A - BK))⁻¹BK(sI - (A - LC))⁻¹; 0  (sI - (A - LC))⁻¹] T [BN̄; BN̄]
 = [C 0] [(sI - (A - BK))⁻¹  (sI - (A - BK))⁻¹BK(sI - (A - LC))⁻¹; 0  (sI - (A - LC))⁻¹] [BN̄; 0]
 = C(sI - (A - BK))⁻¹ BN̄ = C adj[sI - (A - BK)] BN̄ Φc(s)⁻¹

thus in the new case,

det [sI - Acl  Bcl; Ccl  0] = -Φe(s)Φc(s) det(Ccl(sI - Acl)⁻¹ Bcl)
 = -Φe(s)Φc(s) C adj[sI - (A - BK)] BN̄ Φc(s)⁻¹
 = -Φe(s) C adj[sI - (A - BK)] BN̄

Whereas in the original case, we get:

[C 0] (sI - Acl)⁻¹ [0; L]
 = [C 0] T [(sI - (A - BK))⁻¹  (sI - (A - BK))⁻¹BK(sI - (A - LC))⁻¹; 0  (sI - (A - LC))⁻¹] T [0; L]
 = [C 0] [(sI - (A - BK))⁻¹  (sI - (A - BK))⁻¹BK(sI - (A - LC))⁻¹; 0  (sI - (A - LC))⁻¹] [0; -L]
 = -C(sI - (A - BK))⁻¹ BK(sI - (A - LC))⁻¹ L
 = -C adj[sI - (A - BK)] BK adj[sI - (A - LC)] L Φe(s)⁻¹ Φc(s)⁻¹

thus in the original case,

det [sI - Acl  Bcl; Ccl  0] = -Φe(s)Φc(s) det(Ccl(sI - Acl)⁻¹ Bcl)
 = Φe(s)Φc(s) C adj[sI - (A - BK)] BK adj[sI - (A - LC)] L Φe(s)⁻¹ Φc(s)⁻¹
 = C adj[sI - (A - BK)] BK adj[sI - (A - LC)] L

So the new case has Φe(s) as a factor of the numerator, which cancels the estimator dynamics out of the closed-loop response, but the original case does not.

Fall 2010 16.30/31 16-10

FF Example 1

G(s) = (8 · 14 · 20) / ((s + 8)(s + 14)(s + 20))

Method #1: original implementation.
Method #2: original, with the reference input scaled to ensure that the DC gain of y/r|DC = 1.
Method #3: new implementation with both G = BN̄ and N̄ selected.

[Figure: step responses of methods #1, #2, and #3, 0 to 1 sec]

New method (#3) shows less transient impact of applying r(t).

Fall 2010 16.30/31 16-11

FF Example 2

G(s) = 0.94 / (s² - 0.0297)

Method #1: original implementation.
Method #2: original, with the reference input scaled to ensure that the DC gain of y/r|DC = 1.
Method #3: new implementation with both G = BN̄ and N̄ selected.

[Figure: step responses of methods #1, #2, and #3, 0 to 5 sec]

Fall 2010 16.30/31 16-12

FF Example 3

G(s) = (8 · 14 · 20) / ((s - 8)(s - 14)(s - 20))

Method #1: original implementation.
Method #2: original, with the reference input scaled to ensure that the DC gain of y/r|DC = 1.
Method #3: new implementation with both G = BN̄ and N̄ selected.

[Figure: step responses of methods #1, #2, and #3, 0 to 1 sec]

Fall 2010 16.30/31 16-13

FF Example 4

Revisit example on 15-9, with G(s) = 1/(s² + s + 1), regulator target poles at -4 ± 4j, and estimator target poles at -10.
So dominant poles have ωn ≈ 5.5 and ζ = 0.707
From these closed-loop dynamics, would expect to see:

10-90% rise time tr = (1 + 1.1ζ + 1.4ζ²)/ωn = 0.44 s

Settling time (5%) ts = 3/(ζωn) = 0.75 s

Time to peak amplitude tp = π/(ωn √(1 - ζ²)) = 0.79 s

Peak overshoot Mp = e^(-ζωn tp) = 0.04
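These second-order metrics follow directly from ζ and ωn; a minimal check in Python (the formulas are from the notes, the script is not):

```python
import math

# Second-order step-response metrics for dominant poles at -4 +/- 4j
# (zeta = 0.707, wn = sqrt(32) rad/sec).
zeta = 1/math.sqrt(2)
wn = math.sqrt(32)

tr = (1 + 1.1*zeta + 1.4*zeta**2)/wn        # 10-90% rise time
ts = 3/(zeta*wn)                            # 5% settling time
tp = math.pi/(wn*math.sqrt(1 - zeta**2))    # time to peak amplitude
Mp = math.exp(-zeta*wn*tp)                  # peak overshoot
print(round(tr, 2), round(ts, 2), round(tp, 2), round(Mp, 2))
# -> 0.44 0.75 0.79 0.04
```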


Now compare step responses of new and original implementations
[Figure: step responses of the original and new implementations, 0 to 2.5 sec]

Hopefully it is clear that the new implementation meets the expected criteria quite well - and certainly much better than the original.

Fall 2010 16.30/31 16-14
Code: Dynamic Output Feedback FF Examples
% Code for Topic 16 in 2010
% Examples of dynamic output feedback
% Jonathan How
%
close all;
% clear all;for model=1:5;dofb_examp2;end
fig=0;
% system
switch model
 case 1
  G=tf(8*14*20,conv([1 8],conv([1 14],[1 20])));name='nexamp1';
  tt=[-20 0 -10 10]*2.5;
  t=[0:.0025:1];
 case 2
  G=tf(8*14*20,conv([1 -8],conv([1 -14],[1 -20])));name='nexamp2';
  tt=[-10 10 -10 10]*2.5;
  t=[0:.0025:1];
 case 3
  G=tf(.94,[1 0 -0.0297]);name='nexamp3';
  tt=[-10 10 -10 10]/5;
  t=[0:.01:5];
 case 4
  G=tf([1 -1],conv([1 1],[1 -3]));name='nexamp4';
  tt=[-10 10 -10 10];
  t=[0:.01:10];
 case 5
  G=tf(conv([1 2],[1 4]),conv(conv([1 1],[1 3]),[1 2*.2*2 2^2 0 0]));
  name='examp5';
 case 6
  G=tf(8*14*20,conv([1 8],conv([1 14],[1 20])));name='nexamp1h';
  tt=[-20 0 -10 10]*2.5;
  t=[0:.0001:1];
 otherwise
  return
end

%%%
[a,b,c,d]=ssdata(G);na=length(a);
%
if model == 6
 R=.00001;
else
 R=.01;
end
% choose the regulator poles using LQR 122
[k,P,reg_poles]=lqr(a,b,c'*c,R);
%hold on
%plot(reg_poles+j*eps,'md','MarkerSize',12,'MarkerFaceColor','m')
%hold off

% design the estimator by doubling the real part of the regulator poles
PP=2*real(reg_poles)+imag(reg_poles)*j;
% see 1413
ke=place(a',c',PP);l=ke';
% now form compensator see 144
ac=a-b*k-l*c;bc=l;cc=k;dc=0;
% see 156
acl=[a -b*cc;bc*c ac];ccl=[c,zeros(1,na)];dcl=0;
% standard case
bcl1=[zeros(na,1);bc];
% scale up the ref path so that at least the Gcl(s) has unity DC gain
bcl2=[zeros(na,1);bc]; % original implementation on 1624
scale=-ccl*inv(acl)*bcl2;
bcl2=bcl2/scale;
%
bcl3=[b;b]; % new implementation on 1423
scale=-ccl*inv(acl)*bcl3;
bcl3=bcl3/scale;
%
Gcl1=ss(acl,bcl1,ccl,dcl); % original implementation (145) unscaled
Gcl2=ss(acl,bcl2,ccl,dcl); % original implementation (145) scaled
Gcl3=ss(acl,bcl3,ccl,dcl); % new implementation on (1423) scaled
Gc=ss(ac,bc,cc,dc);

fig=fig+1;figure(fig);clf;
f=logspace(-2,3,400);j=sqrt(-1);

Fall 2010 16.30/31 16-15

g=freqresp(G,f*j);g=squeeze(g);
gc=freqresp(Gc,f*j);gc=squeeze(gc);
gcl1=freqresp(Gcl1,f*j);gcl1=squeeze(gcl1);
gcl2=freqresp(Gcl2,f*j);gcl2=squeeze(gcl2);
gcl3=freqresp(Gcl3,f*j);gcl3=squeeze(gcl3);

figure(fig);fig=fig+1;clf
orient tall
subplot(211)
loglog(f,abs(g),f,abs(gc),'--','LineWidth',2);axis([.1 1e3 1e-2 500])
xlabel('Freq (rad/sec)');ylabel('Mag')
legend('G','G_c','Location','Southeast');grid
subplot(212)
semilogx(f,180/pi*unwrap(angle(g)),f,180/pi*unwrap(angle(gc)),'--','LineWidth',2);
axis([.1 1e3 -200 50])
xlabel('Freq (rad/sec)');ylabel('Phase (deg)');grid
legend('G','G_c','Location','SouthWest')

L=g.*gc;

figure(fig);fig=fig+1;clf
orient tall
subplot(211)
loglog(f,abs(L),[.1 1e3],[1 1],'LineWidth',2);axis([.1 1e3 1e-2 10])
xlabel('Freq (rad/sec)');ylabel('Mag')
legend('Loop L');
grid
subplot(212)
semilogx(f,180/pi*phase(L.'),[.1 1e3],-180*[1 1],'LineWidth',2);
axis([.1 1e3 -290 0])
xlabel('Freq (rad/sec)');ylabel('Phase (deg)');grid
%
% loop dynamics L = G Gc
% see 156
al=[a b*cc;zeros(na) ac];bl=[zeros(na,1);bc];cl=[c zeros(1,na)];dl=0;

figure(fig);fig=fig+1;clf
margin(al,bl,cl,dl);
figure(fig);fig=fig+1;clf
rlocus(al,bl,cl,dl)
hold on;
plot(eig(acl)+eps*j,'bd','MarkerFaceColor','b')
plot(eig(a)+eps*j,'mv','MarkerFaceColor','m')
plot(eig(ac)+eps*j,'rs','MarkerSize',9,'MarkerFaceColor','r')
plot(tzero(ac,bc,cc,dc)+eps*j,'ro','MarkerFaceColor','r')
hold off;grid on
%
% closed-loop freq response
%
figure(15);clf
loglog(f,abs(g),f,abs(gcl1),f,abs(gcl2),'--','LineWidth',2);
axis([.1 1e3 .01 1e2])
xlabel('Freq (rad/sec)');ylabel('Mag')
legend('Plant G','Gcl unscaled','Gcl scaled');grid

figure(fig);fig=fig+1;clf
loglog(f,abs(g),f,abs(gcl1),f,abs(gcl2),f,abs(gcl3),'--','LineWidth',2);
axis([.1 1e3 .01 1e2])
xlabel('Freq (rad/sec)');ylabel('Mag')
legend('Plant G','Gcl1','Gcl2','Gcl3');grid

ZZ=-1+.1*exp(j*[0:.01:1]*2*pi);
figure(fig);fig=fig+1;clf
semilogy(unwrap(angle(L))*180/pi,abs(L))
hold on;
semilogy(unwrap(angle(ZZ))*180/pi-360,abs(ZZ),'g')
hold off
axis([-270 -100 .1 10])
hold on;plot(-180,1,'rx');hold off
if max(real(eig(a))) > 0
 title('Nichols: Unstable Openloop System')
else
 title('Nichols: Stable Openloop System')
end
ylabel('Mag');xlabel('Phase (deg)')

figure(fig);fig=fig+1;clf
Fall 2010 16.30/31 16-16

plot(unwrap(angle(L))*180/pi,abs(L),'LineWidth',1.5)
hold on
plot(unwrap(angle(L))*180/pi,.95*abs(L),'r.','LineWidth',1.5)
plot(unwrap(angle(L))*180/pi,1.05*abs(L),'m:','LineWidth',1.5)
plot(unwrap(angle(ZZ))*180/pi,abs(ZZ),'g')
plot(-180,1,'rx');hold off
hold on;
semilogy(unwrap(angle(ZZ))*180/pi-360,abs(ZZ),'g')
hold off
legend('1','0.95','1.05')
axis([-195 -165 .5 1.5])
if max(real(eig(a))) > 0
 title('Nichols: Unstable Openloop System')
else
 title('Nichols: Stable Openloop System')
end
ylabel('Mag');xlabel('Phase (deg)')

figure(fig);fig=fig+1;clf
loglog(f,abs(1./(1+L)),f,abs(L),'--','LineWidth',1.5);grid
title('Sensitivity Plot')
legend(' | S | ',' | L | ')
xlabel('Freq (rad/sec)');ylabel('Mag')
axis([.1 1e3 1e-2 100])

figure(fig);fig=fig+1;clf
[y1,t]=step(Gcl1,t);
[y2,t]=step(Gcl2,t);
[y3,t]=step(Gcl3,t);
plot(t,y1,t,y2,t,y3,t,ones(size(t)),'--','LineWidth',1.5)
setlines(1.5)
legend('meth1','meth2','meth3')
title('Step Response')
xlabel('Time (sec)');ylabel('Y response')

for ii=[1:gcf 15]
 eval(['figure(',num2str(ii),'); export_fig ',name,'_',num2str(ii),' -pdf -a1'])
 if ii==4;
  figure(ii);axis(tt)
  eval(['export_fig ',name,'_',num2str(ii),'a -pdf'])
 end
end

eval(['save ',name,' R G Gc Gcl1 Gcl2 Gcl3 k l P PP'])

return

Topic #17

16.30/31 Feedback Control Systems

Improving the transient performance of the LQ Servo

Feedforward Control Architecture Design


DOFB Servo
Handling saturations in DOFB
Fall 2010 16.30/31 17-1

LQ Servo Revisited

Earlier (13-??) we described how to use an integrator to achieve zero steady-state error that is a bit more robust to modeling errors than the N̄ formulation.
An issue with this architecture is that the response to the reference is only driven by the integrated error - there is no direct path from the reference input to the system, so the transient might be slow.

If the relevant system output is y = Cy x and reference r, previously added extra states xI, where
ẋI = e
Then penalize both x and xI in the cost.
Caveat: note we are free to define e = r - y or e = y - r, but the implementation must be consistent.

As before, if the state of the original system is x, then the dynamics are modified to be

[ẋ; ẋI] = [A 0; -Cy 0] [x; xI] + [Bu; 0] u + [0; I] r

and define x̄ = [xᵀ xIᵀ]ᵀ

The optimal feedback for the cost

J = ∫₀∞ (x̄ᵀ Rxx x̄ + uᵀ Ruu u) dt

is of the form:

u = -[K KI] [x; xI] = -K̄ x̄
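In code, the augmentation is just stacking blocks, using the e = r - y convention so the Cy block enters with a minus sign. A minimal sketch in Python (the notes do this in MATLAB; the 2-state plant here is made up for illustration):

```python
# Build Abar = [A 0; -Cy 0] and Bbar = [Bu; 0] for the integrator-augmented
# system above. Plant values are hypothetical.
A  = [[0.0, 1.0],
      [-2.0, -3.0]]
Bu = [[0.0], [1.0]]
Cy = [[1.0, 0.0]]          # y = Cy x

n = len(A)
Abar = [A[i] + [0.0] for i in range(n)] \
     + [[0.0 - Cy[0][j] for j in range(n)] + [0.0]]   # integrator row: -Cy, 0
Bbar = [Bu[i][:] for i in range(n)] + [[0.0]]
print(Abar[2], Bbar)   # -> [-1.0, 0.0, 0.0] [[0.0], [1.0], [0.0]]
```

An LQR design on (Abar, Bbar) then returns the stacked gain [K KI] in one call.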

October 17, 2010


Fall 2010 16.30/31 17-2

Once we have used LQR to design the control gains K and KI, we have the freedom to choose how we implement it in the control system.
Provided that we don't change the feedback path and thus modify the closed-loop pole locations.

The first controller architecture on (13-??) was of the form:

[Block diagram: e = r - y drives the integrator (KI/s produces xI); u = -Kx - KI xI is applied to G(s) to produce y]

And the suggestion is that we modify it to this revised form:

[Block diagram: as above, but with e also fed forward through R̄ into the state-feedback term]

Fig. 1: Revised implementation of the LQ servo

Note the key difference from the above architecture is the feedforward of e through R̄.
This actually adds another feedback loop, so we need to clarify how this is done.

Fall 2010 16.30/31 17-3

Design Approach

Control law for the revised implementation can be written as:

u = -K(x̄ + R̄e) - KI xI

Note that the tracking error is in both the output of the integrator and the state feedback components.
So this approach has the potential to correct the slow response problems identified in the original implementation.

But how do we determine R̄?

Assume that the state can be partitioned into parts we care about for tracking (Tx), which we assume are directly available from y, and parts we don't (x̄ = T̄x)
Can think of T and T̄ as selector matrices with a diagonal of 1s and 0s - but not always this form
But must have T + T̄ = I.

Example: Consider the position control case, where location x is part of the state vector x = [x, v, ...]ᵀ, and y = x = Cx
Scalar position reference r, and e = r - y, ẋI = e
In this example it is clear that we have

Tx = [x; 0; ...],  with  T = diag(1, 0, ...)  and  T̄ = diag(0, 1, ...)

Fall 2010 16.30/31 17-4

So the control is

u = -K(x̄ + R̄e) - KI xI
  = -K(x̄ + R̄[r - y]) - KI xI
  = -K(x̄ - R̄Cx + R̄r) - KI xI    (key)

To proceed, note what happens if we define R̄C = -T:

x̄ - R̄Cx = T̄x + Tx = x

Then the control signal becomes:

u = -K(x + R̄r) - KI xI

This choice of R̄C = -T ensures that we avoid double counting in the feedback - the elements of x are not fed back more than once
Appears to be useful to choose R̄ = -αCᵀ, with α < 1

Closed-loop dynamics
Two control laws being compared are:
u1 = -Kx - KI xI    (1)
u2 = -K(x + R̄r) - KI xI    (2)
with dynamics
ẋ = Ax + Bu,  y = Cx    (3)
augmented with an integrator ẋI = r - y

Fall 2010 16.30/31 17-5

Closed-loop:

[ẋ; ẋI] = [A - BK  -BKI; -C  0] [x; xI] + [0; I] r    (4)

and

[ẋ; ẋI] = [A - BK  -BKI; -C  0] [x; xI] + [-BKR̄; I] r    (5)

Example: G(s) = (s + 2)/((s - 1)(s² - s + 2))
[Figure: step responses y(t) and closed-loop Bode plots (|Gcl| magnitude and phase) comparing the LQ servo without R̄ and with R̄]

Fig. 2: LQ servo implementation comparison

Fall 2010 16.30/31 17-6

Code: LQservo Comparison


% LQR servo
% Fall 2010
% Jonathan How, MIT
close all;clear all;j=sqrt(-1);

G=tf([1 2],conv([1 -1],[1 -1 2]));
[a,b,c,d]=ssdata(G);
% consider possibility that c is wrong in the design
% use c in design of K's and N's, but simulate with cpert
ns=size(a,1);
z=zeros(ns,1);
Rxx=c'*c;
Ruu=.001;
Rxx1=[Rxx z;z' 10];
R=-c'/1.75;

abar=[a z;-c 0];
bbar=[b;0];
% for Nbar calc
kbar1=lqr(abar,bbar,Rxx1,Ruu);
K1=kbar1(1,1:ns);Ki1=kbar1(ns+1);
% for R calc
kbar2=lqr(abar,bbar,Rxx1,1*Ruu);
K2=kbar2(1,1:ns);Ki2=kbar2(ns+1);

acl1=abar-bbar*kbar1;
acl2=abar-bbar*kbar2;
% implementation as on 175
bcl1=[zeros(ns,1);1];
bcl2=[-b*K2*R;1];
ccl=[c 0];
dcl=0;

Gcl1=ss(acl1,bcl1,ccl,dcl);
Gcl2=ss(acl2,bcl2,ccl,dcl);

figure(1);
t=[0:.01:5]';
[y1,t1]=step(Gcl1,t);
[y2,t2]=step(Gcl2,t);
plot(t,y1,t,y2,'--')
xlabel('Time')
ylabel('y')
legend('LQ Servo without R','With R','Location','SouthEast')
export_fig lqrservo2_1 -pdf

w=logspace(-2,2,300);
figure(2)
[mag1,ph1]=bode(Gcl1,w);mag1=squeeze(mag1);ph1=squeeze(ph1);
[mag2,ph2]=bode(Gcl2,w);mag2=squeeze(mag2);ph2=squeeze(ph2);
subplot(211);
loglog(w,mag1,w,mag2,'--');grid on
legend('LQ Servo without R','With R','Location','SouthWest')
ylabel('|G_{cl}|');xlabel('w');axis([1e-2 1e2 1e-2 5])
subplot(212);
semilogx(w,ph1,w,ph2,'--');grid on
axis([1e-2 1e2 -300 60]);ylabel('phase G_{cl}');xlabel('w')
export_fig lqrservo2_2 -pdf

Fall 2010 16.30/31 17-7

DOFB Servo

Now back up a bit and address how to add an integrator into the DOFB compensator to obtain robust tracking performance. For the DOFB problem,

x̂̇ = Ax̂ + Bu + L(y - ŷ)
  = (A - LC)x̂ + Bu + Ly    (6)
ẋI = r - y

u = -[K KI] [x̂; xI]

Augment the integrator state used in the LQR design, since it must be part of the compensator (it does not need to be estimated):

x̄̂ = [x̂ᵀ xIᵀ]ᵀ

Which gives the DOFB servo compensator as:

x̄̂̇ = [A - LC - BK  -BKI; 0  0] x̄̂ + [L; -I] y + [0; I] r
u = -K̄ x̄̂    (7)

Resulting closed-loop system (apply (7) to (3)):

[ẋ; x̂̇; ẋI] = [A  -BK  -BKI; LC  A - LC - BK  -BKI; -C  0  0] [x; x̂; xI] + [0; 0; I] r

y = [C 0 0] [x; x̂; xI]

Fall 2010 16.30/31 17-8

DOFB Servo Example


System:

G(s) = (s + 2)/((s - 1)(s² - s + 2))

Use LQR and its dual to find K and L using the dynamics augmented with an integrator.
Implement as in Eq. 7 to get the following results.
Note root locus as a function of scalar loop gain multiplier α, where nominally α = 1
[Figure: root locus vs. α - closed-loop poles (CLP), open-loop poles (OLP), open-loop zeros (OLZ), compensator zeros (GcZ), compensator poles (GcP) - integrator penalty 10]
Fig. 3: DOFB servo root locus

Obviously a pretty complex structure for the controller - often see that the compensator zeros are close to the plant poles.
Fall 2010 16.30/31 17-9

[Figure: root locus vs. α - same legend as Fig. 3 - integrator penalty 1]
Fig. 4: DOFB servo root locus

[Figure: step responses - integrator penalty 10, integrator penalty 1, and the N̄ form]

Fig. 5: Step response of the DOFB servo - interplay now between the various weightings; all give zero SS error, but transient response quite different.

Fall 2010 16.30/31 17-10

DOFB Servo II

Now, as before, consider changing the implementation to improve the transient response.
Consider the same modification as on 17-2 and add a more direct feedback of parts of e other than through the integrator state.

As before, assume that we can partition the state into parts we care about for tracking, Tx, and parts we don't, T̄x
Require that T + T̄ = I

For the DOFB problem, basic form of the estimator is

x̂̇ = (A - LC)x̂ + Bu + Ly
ẋI = e = r - y

Note: can also define an error based on the state estimate:
ê = r - ŷ = r - Cx̂

Let us define the control signal to be (note similarities to the u equation on 17-4):

u = -K(T̄x̂ + R̄ê) - KI xI
  = -K(T̄x̂ + R̄(r - Cx̂)) - KI xI
  = -K((T̄ - R̄C)x̂ + R̄r) - KI xI

Again set R̄C = -T, so T̄ - R̄C = I and

u = -K(x̂ + R̄r) - KI xI
  = -K̄x̄̂ - KR̄r    (8)

October 17, 2010


Fall 2010 16.30/31 1711

Fig. 6: DOFB servo step response with and without the R formulation, compared to the $\bar{N}$ formulation (see Figure 5)

Closed-loop system (apply (8) and first line of (7) to (3))

$$\begin{bmatrix} \dot{x} \\ \dot{\hat{x}} \\ \dot{x}_I \end{bmatrix} = \begin{bmatrix} A & -BK & -BK_I \\ LC & A - LC - BK & -BK_I \\ -C & 0 & 0 \end{bmatrix} \begin{bmatrix} x \\ \hat{x} \\ x_I \end{bmatrix} + \begin{bmatrix} -BKR \\ -BKR \\ I \end{bmatrix} r$$
$$y = \begin{bmatrix} C & 0 & 0 \end{bmatrix} \begin{bmatrix} x \\ \hat{x} \\ x_I \end{bmatrix}$$

Example for same system as 17-5: $G(s) = \frac{s+2}{(s-1)(s^2-s+2)}$

Adding extra reference input like this increases the bandwidth of the
controller and speeds up the response.
But perhaps too much - so further tuning required.

October 17, 2010


Fall 2010 16.30/31 1712

DOFB Implementation with Saturations

Closed-loop implementation of DOFB assumes that the system and compensator both see $u = -\bar{K}\bar{x}$.
If the actuator saturates, then it is possible that the system sees

$$\mathrm{sat}(u) = \begin{cases} u_m & \text{if } u > u_m \\ u & \text{if } -u_m \le u \le u_m \\ -u_m & \text{if } u < -u_m \end{cases}$$

whereas the input to the estimator is u.

Can improve the response to some extent if the compensator is modified so that it also sees sat(u) rather than u.
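A quick way to see the benefit is simulation. The sketch below is my own Python illustration (the double-integrator plant, pole locations, and saturation level are assumptions, not the lecture's example): when the estimator is fed sat(u), plant and estimator see the identical input, so the estimation-error dynamics stay exactly (A − LC)e.

```python
import numpy as np
from scipy.signal import place_poles

A = np.array([[0., 1.], [0., 0.]])     # double integrator (assumed example)
B = np.array([[0.], [1.]])
C = np.array([[1., 0.]])
K = place_poles(A, B, [-1., -2.]).gain_matrix        # state-feedback gain
L = place_poles(A.T, C.T, [-4., -5.]).gain_matrix.T  # estimator gain (dual)
umax = 0.5
sat = lambda u: min(max(u, -umax), umax)

def final_error(estimator_sees_sat, dt=1e-3, T=20.0):
    x = np.array([2., 0.])             # plant state
    xh = np.zeros(2)                   # estimator state
    for _ in range(int(T / dt)):
        u = -(K @ xh).item()
        y = (C @ x).item()
        u_est = sat(u) if estimator_sees_sat else u  # what the estimator believes
        x = x + dt * (A @ x + B[:, 0] * sat(u))      # the plant always sees sat(u)
        xh = xh + dt * ((A - L @ C) @ xh + B[:, 0] * u_est + L[:, 0] * y)
    return np.abs(x).max()

err_sat, err_raw = final_error(True), final_error(False)
print(err_sat, err_raw)   # feeding sat(u) keeps the estimator consistent
```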

Compensator for the saturated DOFB problem

$$\dot{\hat{x}} = (A - LC)\hat{x} + B\,\mathrm{sat}(u) + Ly, \qquad u = -K\hat{x}$$

(Block diagram: the saturation sat(·) sits between the compensator output u and the plant Gp(s); both the plant and the estimator above are driven by sat(u), and the estimator is also driven by y.)

October 17, 2010


Fall 2010 16.30/31 1713

Code: DOFB servo (lqservo.m)


% DOFB servo
% Jonathan How, MIT, Fall 10
clear all;close all;j=sqrt(-1);
%
G=tf([1 2],conv([1 -1],[1 -1 2]));
[N,D]=tfdata(G,'v');roots(N);
[a,b,c,d]=ssdata(G);ns=size(a,1);
Ruu=.001;
Rxx=c'*c;
[klqr,P,E]=lqr(a,b,Rxx,Ruu);
z=zeros(ns,1);
Rxx1=[Rxx z;z' 10];
Rxx2=[Rxx z;z' 1];

abar=[a z;-c 0];bbar=[b;0];
kbar1=lqr(abar,bbar,Rxx1,Ruu);
kbar2=lqr(abar,bbar,Rxx2,Ruu);

% use dual of LQR to design estimator
[l,QQ,EE]=lqr(a',c',b*b',.01);l=l';

% form compensators
ac1=[a-l*c-b*kbar1(1:ns) -b*kbar1(end);z' 0];
ac2=[a-l*c-b*kbar2(1:ns) -b*kbar2(end);z' 0];
by=[l;-1];br=[l*0;1];
cc1=-[kbar1];cc2=-[kbar2];

% form closed-loop dynamics
acl1=[a -b*kbar1;[l;-1]*c ac1];
acl2=[a -b*kbar2;[l;-1]*c ac2];
bcl=[b*0;br]; % r input matrix for basic form, changed below
ccl=[c c*0 0]; % not changed below

% using the Nbar form of closed loop
acl3=[a -b*klqr;l*c a-b*klqr-l*c];
bcl3=[b;b];
ccl3=[c c*0];
Nbar=inv(ccl3*inv(-acl3)*bcl3);
bcl3=bcl3*Nbar;

figure(1);t=[0:.01:5]';
[y2,t2]=step(ss(acl2,bcl,ccl,0),t);
[y1,t1]=step(ss(acl1,bcl,ccl,0),t);
[y3,t3]=step(ss(acl3,bcl3,ccl3,0),t);
plot(t1,y1,t2,y2,'r',t3,y3,'m:','LineWidth',2)
xlabel('time');ylabel('Response')
legend('Int Penalty 10','Int Penalty 1','Nbar Form','Location','SouthEast')
export_fig lqservo1 -pdf

Gc_uy=tf(ss(ac1,by,cc1,0));
[nn,dd]=tfdata(Gc_uy,'v');
figure(2);
rlocus(-G*Gc_uy) % is a positive feedback loop now
hold on;plot(eig(acl1)+j*eps,'bd','MarkerFaceColor','b');hold off
hold on;plot(eig(a)+j*eps,'mv','MarkerFaceColor','m');hold off
hold on;plot(roots(N)+j*eps,'ro');hold off
hold on;plot(roots(nn)+j*eps,'ro','MarkerFaceColor','r');hold off
hold on;plot(roots(dd)+j*eps,'gs','MarkerFaceColor','g','MarkerSize',5);hold off
legend('vs \alpha','CLP','OLP','OLZ','GcZ','GcP','Location','NorthEast')
title('Int Penalty 10')
axis([-4 2 -5 5])
export_fig lqservo2 -pdf

Gc_uy=tf(ss(ac2,by,cc2,0));
[nn,dd]=tfdata(Gc_uy,'v');
figure(3);
rlocus(-G*Gc_uy) % is a positive feedback loop now
hold on;plot(eig(acl2)+j*eps,'bd','MarkerFaceColor','b');hold off
hold on;plot(eig(a)+j*eps,'mv','MarkerFaceColor','m');hold off
hold on;plot(roots(N)+j*eps,'ro');hold off
hold on;plot(roots(nn)+j*eps,'ro','MarkerFaceColor','r');hold off
hold on;plot(roots(dd)+j*eps,'gs','MarkerFaceColor','g','MarkerSize',5);hold off
legend('vs \alpha','CLP','OLP','OLZ','GcZ','GcP','Location','NorthEast')
title('Int Penalty 1')

October 17, 2010


Fall 2010 16.30/31 1714

axis([-4 2 -5 5])
export_fig lqservo3 -pdf

% Use R ~= 0 formulation
R=c'/1.75;
figure(4);t=[0:.01:5]';
[y4,t4]=step(ss(acl1,[-b*kbar1(1:ns)*R;-b*kbar1(1:ns)*R;1],ccl,0),t);
%[y4,t4]=step(ss(acl2,[-b*kbar2(1:ns)*R;-b*kbar2(1:ns)*R;1],ccl,0),t);
plot(t1,y1,t4,y4,t3,y3,'m:','LineWidth',2)
%plot(t2,y2,t4,y4,t3,y3,'m:','LineWidth',2)
xlabel('time');ylabel('Response')
legend('Int Penalty 10 no R','Int Penalty 10 with R','Nbar Form','Location','SouthEast')
export_fig lqservo4 -pdf

October 17, 2010


MIT OpenCourseWare
http://ocw.mit.edu

16.30 / 16.31 Feedback Control Systems


Fall 2010

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
Topic #18

16.31 Feedback Control Systems

Deterministic LQR
Optimal control and the Riccati equation

Weight Selection
Fall 2010 16.30/31 182

Linear Quadratic Regulator (LQR)

Have seen the solutions to the LQR problem, which results in linear
full-state feedback control.
Would like to get some more insight on where this came from.

Deterministic Linear Quadratic Regulator


Plant:
x = Ax + Buu, x(t0) = x0

z = Cz x

Cost:

$$J_{LQR} = \frac{1}{2}\int_0^{t_f} \left[ z^T R_{zz} z + u^T R_{uu} u \right] dt + \frac{1}{2} x^T(t_f) P_{t_f} x(t_f)$$

Where $R_{zz} > 0$ and $R_{uu} > 0$
Define $R_{xx} = C_z^T R_{zz} C_z \ge 0$

Problem Statement: Find the input $u(t)$, $t \in [t_0, t_f]$, to minimize $J_{LQR}$

This is not necessarily specified to be a feedback controller.

Control design problem is a constrained optimization, with the con


straints being the dynamics of the system.

November 5, 2010
Fall 2010 16.30/31 183

Constrained Optimization

The standard way of handling the constraints in an optimization is to


add them to the cost using a Lagrange multiplier
Results in an unconstrained optimization.

Example: min f (x, y) = x2 + y 2 subject to the constraint that


c(x, y) = x + y + 2 = 0

Fig. 1: Optimization results

Clearly the unconstrained minimum is at x = y = 0

November 5, 2010
Fall 2010 16.30/31 184

To find the constrained minimum, form the augmented cost function

$$L \triangleq f(x, y) + \lambda c(x, y) = x^2 + y^2 + \lambda(x + y + 2)$$

Where $\lambda$ is the Lagrange multiplier
Note that if the constraint is satisfied, then $L \equiv f$

The solution approach without constraints is to find the stationary points of $f(x, y)$ ($\partial f/\partial x = \partial f/\partial y = 0$)
With constraints we find the stationary points of $L$:

$$\frac{\partial L}{\partial x} = \frac{\partial L}{\partial y} = \frac{\partial L}{\partial \lambda} = 0$$

which gives

$$\frac{\partial L}{\partial x} = 2x + \lambda = 0, \qquad \frac{\partial L}{\partial y} = 2y + \lambda = 0, \qquad \frac{\partial L}{\partial \lambda} = x + y + 2 = 0$$

This gives 3 equations in 3 unknowns; solve to find

$$x = y = -1$$

Key point here is that due to the constraint, the selection of x and y
during the minimization are not independent
Lagrange multiplier captures this dependency.
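The worked example is easy to verify numerically. The following Python sketch (my addition, not part of the notes) solves the three stationarity equations directly and cross-checks the answer with a generic constrained optimizer:

```python
import numpy as np
from scipy.optimize import minimize

# Verify: min x^2 + y^2  subject to  x + y + 2 = 0
res = minimize(lambda v: v[0] ** 2 + v[1] ** 2, x0=[0.0, 0.0],
               constraints=[{'type': 'eq', 'fun': lambda v: v[0] + v[1] + 2}])

# Stationarity of L = f + lam*c gives a linear system:
#   2x + lam = 0,  2y + lam = 0,  x + y + 2 = 0
Aeq = np.array([[2., 0., 1.],
                [0., 2., 1.],
                [1., 1., 0.]])
beq = np.array([0., 0., -2.])
x, y, lam = np.linalg.solve(Aeq, beq)
print(res.x, (x, y, lam))   # both approaches give x = y = -1 (and lam = 2)
```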

November 5, 2010
Fall 2010 16.30/31 185

LQR Optimization

LQR optimization follows the same path, but it is complicated by the


fact that the cost involves an integration over time
See 16.323 OCW notes for details

To optimize the cost, follow the same procedure of augmenting the constraints in the problem (the system dynamics) to the cost (integrand, then integrate by parts) to form the Hamiltonian:

$$H = \frac{1}{2}\left[ x^T R_{xx} x + u^T R_{uu} u \right] + p^T (Ax + B_u u)$$

$p \in \mathbb{R}^{n \times 1}$ is called the Adjoint variable or Costate
It is the Lagrange multiplier in the problem.

The necessary conditions for optimality are then that:

1. $\dot{x} = \left(\frac{\partial H}{\partial p}\right)^T = Ax + B_u u$, with $x(t_0) = x_0$

2. $\dot{p} = -\left(\frac{\partial H}{\partial x}\right)^T = -R_{xx} x - A^T p$, with $p(t_f) = P_{t_f} x(t_f)$

3. $\frac{\partial H}{\partial u} = 0 \ \Rightarrow\ R_{uu} u + B_u^T p = 0$, so $u = -R_{uu}^{-1} B_u^T p$

Can check for a minimum by looking at $\frac{\partial^2 H}{\partial u^2} \ge 0$ (need to check that $R_{uu} \ge 0$)

November 5, 2010
Fall 2010 16.30/31 186

Key point is that we now have that

$$\dot{x} = Ax + B_u u = Ax - B_u R_{uu}^{-1} B_u^T p$$

which can be combined with the equation for the adjoint variable

$$\dot{p} = -R_{xx} x - A^T p = -C_z^T R_{zz} C_z x - A^T p$$

so that

$$\begin{bmatrix} \dot{x} \\ \dot{p} \end{bmatrix} = \begin{bmatrix} A & -B_u R_{uu}^{-1} B_u^T \\ -C_z^T R_{zz} C_z & -A^T \end{bmatrix} \begin{bmatrix} x \\ p \end{bmatrix}$$

which is called the Hamiltonian Matrix.
Matrix describes closed loop dynamics for both x and p.
Dynamics of x and p are coupled, but x known initially and p known at terminal time, since $p(t_f) = P_{t_f} x(t_f)$
Two point boundary value problem: typically hard to solve.

However, in this case, we can introduce a new matrix variable P and


it is relatively easy to show that:
1. p = P x
2. It is relatively easy to find P.

In fact, P must satisfy

$$0 = A^T P + P A + C_z^T R_{zz} C_z - P B_u R_{uu}^{-1} B_u^T P$$

which is the matrix algebraic Riccati Equation.

The control gains are then

$$u_{opt} = -R_{uu}^{-1} B_u^T p = -R_{uu}^{-1} B_u^T P x = -K x$$

So the optimal control inputs can in fact be computed using


linear feedback on the full system state
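In practice the ARE and gain are computed numerically. A Python sketch with an assumed second-order example, where scipy's `solve_continuous_are` plays the role the Riccati equation plays above:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# LQR gains from the algebraic Riccati equation (example system assumed)
A = np.array([[0., 1.], [-2., -3.]])
Bu = np.array([[0.], [1.]])
Cz = np.array([[1., 0.]])
Rzz = np.array([[1.]])
Ruu = np.array([[0.1]])

Rxx = Cz.T @ Rzz @ Cz
# 0 = A'P + PA + Rxx - P Bu Ruu^{-1} Bu' P
P = solve_continuous_are(A, Bu, Rxx, Ruu)
K = np.linalg.solve(Ruu, Bu.T @ P)          # u = -K x

residual = A.T @ P + P @ A + Rxx - P @ Bu @ np.linalg.solve(Ruu, Bu.T @ P)
print(np.abs(residual).max())                     # ~0: P solves the ARE
print(np.linalg.eigvals(A - Bu @ K).real.max())   # < 0: closed loop is stable
```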

November 5, 2010
Fall 2010 16.30/31 187

LQR Stability Margins

LQR approach selects closed-loop poles that balance between system


errors and the control effort.
Easy design iteration using $R_{uu}$
Sometimes difficult to relate the desired transient response to the LQR cost function.
Particularly nice thing about the LQR approach is that the designer
is focused on system performance issues

Turns out that the news is even better than that, because LQR exhibits very good stability margins

Consider the LQR stability robustness.


$$J = \frac{1}{2}\int_0^\infty \left( z^T z + \rho\, u^T u \right) dt$$
$$\dot{x} = Ax + B_u u, \qquad z = C_z x, \qquad R_{xx} = C_z^T C_z$$

(Block diagram: r enters through $B_u$, the state is $x = (sI - A)^{-1} B_u u$, fed back through K; the cost output is $z = C_z x$.)

Study robustness in the frequency domain.


Loop transfer function $L(s) = K(sI - A)^{-1} B_u$
Cost transfer function $C(s) = C_z(sI - A)^{-1} B_u$

November 5, 2010
Fall 2010 16.30/31 188

Can develop a relationship between the open-loop cost C(s) and the closed-loop return difference I + L(s), called the Kalman Frequency Domain Equality:

$$[I + L(-s)]^T [I + L(s)] = 1 + \frac{1}{\rho} C^T(-s) C(s)$$

Written for the MIMO case, but look at the SISO case to develop further insights ($s = j\omega$):

$$[I + L(-j\omega)][I + L(j\omega)] = (1 + L_r(\omega) - jL_i(\omega))(1 + L_r(\omega) + jL_i(\omega)) = |1 + L(j\omega)|^2$$

and

$$C^T(-j\omega)C(j\omega) = C_r^2 + C_i^2 = |C(j\omega)|^2 \ge 0$$

Thus the KFE becomes

$$|1 + L(j\omega)|^2 = 1 + \frac{1}{\rho}|C(j\omega)|^2 \ge 1$$
November 5, 2010
Fall 2010 16.30/31 189

Implications: The Nyquist plot of $L(j\omega)$ will always be outside the unit circle centered at $(-1, 0)$.

(Figure: Nyquist plot showing $|L_N(j\omega)|$ and $|1 + L_N(j\omega)|$; the locus stays outside the unit circle about the critical point $(-1, 0)$.)

Great, but why is this so significant? Recall the SISO form of the Nyquist Stability Theorem:
If the loop transfer function L(s) has P poles in the RHP of the s-plane (and $\lim_{s\to\infty} L(s)$ is a constant), then for closed-loop stability, the locus of $L(j\omega)$ for $\omega \in (-\infty, \infty)$ must encircle the critical point $(-1, 0)$ P times in the counterclockwise direction (Ogata 528)

November 5, 2010
Fall 2010 16.30/31 1810

So we can directly prove stability from the Nyquist plot of L(s).

But what if the model is wrong and it turns out that the actual loop transfer function $L_A(s)$ is given by:

$$L_A(s) = L_N(s)[1 + \Delta(s)], \qquad |\Delta(j\omega)| \le 1$$

We need to determine whether these perturbations to the loop TF will change the decision about closed-loop stability
Can do this directly by determining if it is possible to change the number of encirclements of the critical point

Fig. 2: Perturbation to the LTF causing a change in the number of encirclements ($L_N(j\omega)$ vs. $L_A(j\omega)$, for a stable OL system)

November 5, 2010
Fall 2010 16.30/31 1811

Claim is that since the LTF $L(j\omega)$ is guaranteed to be far from the critical point for all frequencies, then LQR is VERY robust.
Can study this by introducing a modification $\alpha$ into the loop, $\alpha\,K(sI - A)^{-1}B_u$, where nominally $\alpha = 1$, but we would like to consider:
The gain $\alpha \in \mathbb{R}$
The phase $\alpha = e^{j\phi}$

In fact, can be shown that:

If the open-loop system is stable, then any $\alpha \in (0, \infty)$ yields a stable closed-loop system. For an unstable system, any $\alpha \in (1/2, \infty)$ yields a stable closed-loop system
gain margins are $(1/2, \infty)$
Phase margins of at least $\pm 60^\circ$
Both of these robustness margins are very large on the scale of what is normally possible for classical control systems.

Fig. 3: Example of LTF for an open-loop stable system
Fig. 4: Example loop transfer functions for open-loop unstable system.

November 5, 2010
Topic #19

16.31 Feedback Control Systems

Stengel Chapter 6
Question: how well do the large gain and phase margins
discussed for LQR map over to DOFB using LQR and LQE
(called LQG)?
Fall 2010 16.30/31 192

Linear Quadratic Gaussian (LQG)

When we use the combination of an optimal estimator (not discussed


in this course) and an optimal regulator to design the controller, the
compensator is called
Linear Quadratic Gaussian (LQG)
Special case of the controllers that can be designed using the sep
aration principle.

Great news about an LQG design is that stability of the closed-loop


system is guaranteed.
The designer is freed from having to perform any detailed mechanics
- the entire process is fast and automated.
Designer can focus on the performance related issues, being confident that the LQG design will produce a controller that stabilizes the system.
Selecting values of Rzz , Ruu and relative sizes of Rww & Rvv

This sounds great so what is the catch??

Remaining issue is that sometimes the controllers designed using these


state space tools are very sensitive to errors in the knowledge of the
model.
i.e., the compensator might work very well if the plant gain $\alpha = 1$, but be unstable if $\alpha = 0.9$ or $\alpha = 1.1$.
LQG is also prone to plant-pole/compensator-zero cancellation, which tends to be sensitive to modeling errors.

J. Doyle, "Guaranteed Margins for LQG Regulators," IEEE Transactions on Automatic Control, Vol. 23, No. 4, pp. 756-757, 1978.

November 5, 2010
Fall 2010 16.30/31 193

The good news is that the state-space techniques will give you a
controller very easily.
You should use the time saved to verify that the one you
designed is a good controller.

There are, of course, different definitions of what makes a controller good, but one important criterion is whether there is a reasonable chance that it would work on the real system as well as it does in Matlab. Robustness.
The controller must be able to tolerate some modeling error, because our models in Matlab are typically inaccurate.
Linearized model

Some parameters poorly known


Ignores some higher frequency dynamics

Need to develop tools that will give us some insight on how well a
controller can tolerate modeling errors.

November 5, 2010
Fall 2010 16.30/31 194

LQG Robustness Example


Cart with an inverted pendulum on top.

m, l

L
F

Image by MIT OpenCourseWare.

Force actuator and angle sensor


Can develop the nonlinear equations for large angle motions
Linearize for small $\theta$:

$$(I + mL^2)\ddot{\theta} - mgL\theta = mL\ddot{x}$$
$$(M + m)\ddot{x} + b\dot{x} - mL\ddot{\theta} = F$$

$$\begin{bmatrix} (I + mL^2)s^2 - mgL & -mLs^2 \\ -mLs^2 & (M + m)s^2 + bs \end{bmatrix} \begin{bmatrix} \theta(s) \\ x(s) \end{bmatrix} = \begin{bmatrix} 0 \\ F \end{bmatrix}$$

which gives

$$\frac{\theta(s)}{F} = \frac{mLs^2}{[(I + mL^2)s^2 - mgL][(M + m)s^2 + bs] - (mLs^2)^2}$$
$$\frac{x(s)}{F} = \frac{(I + mL^2)s^2 - mgL}{[(I + mL^2)s^2 - mgL][(M + m)s^2 + bs] - (mLs^2)^2}$$

Set $M = 0.5$, $m = 0.2$, $b = 0.1$, $I = 0.006$, $L = 0.3$ to get:

$$\frac{x}{F} = \frac{1.818s^2 - 44.55}{s^4 + 0.1818s^3 - 31.18s^2 - 4.45s}$$

which has poles at $s = \pm 5.6$, $s = 0$, and $s = -0.14$, and plant zeros at $\pm 5$.
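The quoted coefficients follow directly from the parameters, as a quick Python check confirms (g = 9.8 is assumed, since the notes do not state it):

```python
import numpy as np

# Rebuild x(s)/F(s) from the cart-pendulum parameters
M, m, b, I, L, g = 0.5, 0.2, 0.1, 0.006, 0.3, 9.8
d0 = (I + m * L**2) * (M + m) - (m * L)**2      # s^4 coefficient before scaling

num = np.array([I + m * L**2, 0.0, -m * g * L]) / d0
den = np.array([d0, b * (I + m * L**2), -m * g * L * (M + m),
                -m * g * L * b, 0.0]) / d0

print(num)               # ~ [1.818, 0, -44.55]
print(den)               # ~ [1, 0.1818, -31.18, -4.45, 0]
print(np.roots(den))     # poles near +5.6, -5.6, -0.14, 0
print(np.roots(num))     # zeros near +-4.95
```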

November 5, 2010
Fall 2010 16.30/31 195

Define

$$q = \begin{bmatrix} \theta \\ x \end{bmatrix}, \qquad x = \begin{bmatrix} q \\ \dot{q} \end{bmatrix}$$

Then with $y = x$:

$$\dot{x} = Ax + B_u u, \qquad y = C_y x$$

Very simple LQG design - main result is fairly independent of the


choice of the weighting matrices.

The resulting compensator is unstable (pole at +23!!)

This is somewhat expected. (why?)

Fig. 1: Plant G and Controller Gc (magnitude and phase vs. frequency)

November 5, 2010
Fall 2010 16.30/31 196

Fig. 2: Loop L and Margins. The Bode diagram reports Gm = 0.0712 dB (at 5.28 rad/sec), Pm = 0.355 deg (at 3.58 rad/sec).

November 5, 2010
Fall 2010 16.30/31 197

Fig. 3: Root Locus with frozen compensator dynamics. Shows sensitivity to overall gain; symbols are a gain of [0.995:.0001:1.005].

November 5, 2010
Fall 2010 16.30/31 198

Looking at both the Loop TF plots and the root locus, it is clear this system is stable with a gain of 1, but
Unstable for a gain of $1 \pm \epsilon$ and/or a slight change in the system phase (possibly due to some unmodeled delays)
Very limited chance that this would work on the real system.

Of course, this is an extreme example and not all systems are like this,
but you must analyze to determine what robustness margins your
controller really has.

Question: what analysis tools should we use?

November 5, 2010
Fall 2010 16.30/31 199

Frequency Domain Tests

Frequency domain stability tests provide further insights on the stability margins.

Recall that the Nyquist Stability Theorem provides a binary measure of stability, or not.

But already discussed that we can use closeness of L(s) to the


critical point as a measure of closeness to changing the number of
encirclements.
Closeness translates to high sensitivity, which corresponds to $L_N(j\omega)$ being very close to the critical point.
Ideally you would want the sensitivity to be low. Same as saying that you want $L(j\omega)$ to be far from the critical point.

Premise is that the nominal system is stable, i.e., has the right number of encirclements.
Goal of the robustness test is to see if the possible perturbations
to our system model (due to modeling errors) can change the
number of encirclements
In this case, say that the perturbations can destabilize the system.

November 5, 2010
Fall 2010 16.30/31 1910

Fig. 4: Nichols Plot ($|L(j\omega)|$ vs. $\arg L(j\omega)$) for the cart example, which clearly shows sensitivity to overall gain and/or phase lag.
Fig. 5: Sensitivity plot of the cart problem ($|S|$ and $|L|$ vs. frequency).

Difficulty in this example is that the open-loop system is unstable, so $L(j\omega)$ must encircle the critical point, making it hard for $L(j\omega)$ to get too far away from the critical point.

November 5, 2010
Fall 2010 16.30/31 1911

Summary

LQG gives you a great way to design a controller for the nominal
system.

But there are no guarantees about the stability/performance if the actual system is slightly different.
Basic analysis tool is the Sensitivity Plot

No obvious ways to tailor the specification of the LQG controller to improve any lack of robustness
Apart from the obvious "lower the controller bandwidth" approach.
And sometimes you need the bandwidth just to stabilize the system.

Very hard to include additional robustness constraints into LQG


See my Ph.D. thesis in 1992.

Other tools have been developed that allow you to directly shape the sensitivity plot $|S(j\omega)|$
Called $H_\infty$ and $\mu$ synthesis

Good news: Lack of robustness is something you should look for,


but it is not always an issue.

November 5, 2010
Topic #20

16.30/31 Feedback Control Systems

Digital Control Basics


Eective Delay
Emulation
Fall 2010 16.30/31 201

Digital Control

Control picture so far

d
r e u y
Gc(s) Gp(s)

Can implement this using analog circuits, but as you have seen, there are many advantages to implementing these using a computer: much more flexible

In this case the new picture is:

r(t) ek uk u(t) y(t)


A2D Gc(s) D2A Gp(s)

yk A2D

Digital Computer

November 5, 2010
Fall 2010 16.30/31 202

Digital Control Mechanics


Digital/discrete control runs on a clock
Only uses the input signals at discrete instants in time
So continuous e(t) is sampled at fixed periods in time: e(kTs)
Where Ts is the sampling period and k is an integer

Must also get information into and out of the computer


Requires A/D and D/A operations

The A/D consists of 2 steps:


1. Convert a physical signal (voltage) to a binary number, which is an approximation since we will only have 12-16 bits to cover a ±10V range.
2. Sample a continuous signal e(t) every Ts seconds, so that y(t) → y(k)

(Image by MIT OpenCourseWare: sampler converting y(t) into the sequence y(k) with period Ts.)

3. Sampler clearly ignores a large part of the continuous signal.

November 5, 2010
Fall 2010 16.30/31 203

The D/A consists of 2 steps as well


1. Binary to analog
2. Convert discrete signal (at kTs) to a continuous one.

(Image by MIT OpenCourseWare: discrete samples u(k), k = 1 ... 7, and the held continuous signal u(t); easiest way is to just hold u(k) for period Ts.)

Basic approach is to just hold the latest value of u(k) for the entire period Ts
Called a zero-order hold (ZOH)

Need to determine what impact this sample and hold operation


might have on the loop transfer function
Approximate the A/D as sample
Approximate D/A as ZOH
Set control law to 1, so u(k) = e(k)

y(t) A/D CONTROL D/A u(t)

Ts
u(k) = y(k)
y(t) ZOH u(t)

Image by MIT OpenCourseWare.

November 5, 2010
Fall 2010 16.30/31 204

Sample and Hold Analysis


Can actually analyze the transfer function U(s)/E(s) analytically
Can also gain some insight by looking at basic signals

u(k) has a standard box car shape


Smoothed u(k) by connecting mid-points u(t)

So sampled and held e(t) looks like input e(t), but delay is obvious.

Analytic study of U(s)/E(s) shows that the effective delay of the sample and hold is Ts/2 on average
Can be a problem if Ts is too large

November 5, 2010
Fall 2010 16.30/31 205

So why not just make Ts small?


Key point is that Ts is how long we have to compute the control
command given the measurements received

Calc #1 Calc #2 Wait

Read: y Write: u Read: y

Time

Image by MIT OpenCourseWare.

Usually wait period is short, but length of calc 1 and calc 2,


A/D and D/A operations depend on the complexity of the algorithm
and the quality of the equipment
But quality ⇒ cost
Typically would set the sampling frequency $\omega_s = \frac{2\pi}{T_s} \approx 15\,\omega_{BW}$

November 5, 2010
Fall 2010 16.30/31 206

Control Law
Basic compensator

$$G_c(s) = K_c \frac{s + z}{s + p} = \frac{U(s)}{E(s)}$$

Equivalent differential equation form

$$\dot{u} + pu = K_c(\dot{e} + ze)$$

Differential equation form not useful for a computer implementation; need to approximate the differentials

$$\dot{u}\,\big|_{t=kT_s} \approx \frac{1}{T_s}\left[ u((k+1)T_s) - u(kT_s) \right] = \frac{u_{k+1} - u_k}{T_s}$$

This uses the forward approximation, but others exist

Then $\dot{u} + pu = K_c(\dot{e} + ze)$ approximately becomes

$$\frac{u_{k+1} - u_k}{T_s} + p u_k = K_c\left( \frac{e_{k+1} - e_k}{T_s} + z e_k \right)$$

or

$$u_{k+1} = (1 - pT_s)u_k + K_c e_{k+1} - K_c(1 - zT_s)e_k$$

which is a recursive difference equation that can easily be implemented on a computer.

Similar case for state space controllers

$$\dot{x}_c = A_c x_c + B_c e, \qquad u = C_c x_c + D_c e$$

and $\dot{x}_c \approx \frac{x_c(k+1) - x_c(k)}{T_s}$, so that

$$\frac{x_c(k+1) - x_c(k)}{T_s} = A_c x_c + B_c e$$
$$\Rightarrow \quad x_c(k+1) = (I + T_s A_c)x_c(k) + T_s B_c e(k)$$
$$u(k) = C_c x_c(k) + D_c e(k)$$
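This forward-Euler update is a few lines of code. A Python sketch (the compensator numbers Kc = 2, z = 1, p = 10 and Ts = 0.01 are my assumptions) realizes Gc(s) = Kc(s+z)/(s+p) in state-space form, discretizes it as above, and checks that a unit step in e settles at the compensator's DC gain:

```python
import numpy as np

# Forward-Euler discretization of a state-space compensator (assumed values)
Kc, zz, pp = 2.0, 1.0, 10.0
Ac = np.array([[-pp]])             # xc' = -p xc + e
Bc = np.array([[1.0]])
Cc = np.array([[Kc * (zz - pp)]])  # u = Kc(z - p) xc + Kc e
Dc = np.array([[Kc]])
Ts = 0.01

Ad = np.eye(1) + Ts * Ac           # xc(k+1) = (I + Ts Ac) xc(k) + Ts Bc e(k)
Bd = Ts * Bc

xc = np.zeros((1, 1))
u = 0.0
for k in range(2000):              # unit step in e(k)
    u = (Cc @ xc + Dc).item()      # u(k) = Cc xc(k) + Dc e(k), with e = 1
    xc = Ad @ xc + Bd * 1.0
print(u)                           # settles near Gc(0) = Kc*z/p = 0.2
```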

November 5, 2010
Fall 2010 16.30/31 207

Computer Code Layout

Given previous information $u_k$ and $e_k$ and new information $y_{k+1}$ and $r_{k+1}$, form $e_{k+1}$

Need to use the difference equation to find $u_{k+1}$:

$$u_{k+1} = (1 - pT_s)u_k + K_c e_{k+1} - K_c(1 - zT_s)e_k$$

Then let $u_{old} = (1 - pT_s)u_k - K_c(1 - zT_s)e_k$, so that $u_{k+1} = K_c e_{k+1} + u_{old}$
Also define constants $\alpha_1 = (1 - pT_s)$ and $\alpha_2 = -K_c(1 - zT_s)$

Then the code layout is as follows:

initialize uold

while k < 1000 do

k = k + 1

sample A/Ds (read yk+1, rk+1)

compute ek+1 = rk+1 yk+1

update uk+1 = Kcek+1 + uold

output to D/A (write uk+1)

update uold = α1·uk+1 + α2·ek+1

wait

end while

Note that this outputs the control as soon after the data read as possible to reduce the delay
result is that the write time might be unknown
Could write uk+1 at end of wait: delay is longer, but fixed
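The loop above translates almost line-for-line into code. A Python sketch with stubbed A/D and D/A operations (the compensator values Kc = 2, z = 1, p = 10 and Ts = 0.01 are assumptions, and the "plant" is just a constant step reference with y = 0):

```python
# Python translation of the pseudocode loop; A/D read and D/A write are stubs.
Kc, z, p, Ts = 2.0, 1.0, 10.0, 0.01
alpha1 = 1 - p * Ts
alpha2 = -Kc * (1 - z * Ts)

def sample_adc(k):             # stand-in for "sample A/Ds": step reference, y = 0
    return 0.0, 1.0            # y_{k+1}, r_{k+1}

u_history = []
u_old = 0.0                    # initialize uold
for k in range(1000):
    y, r = sample_adc(k)
    e = r - y                  # compute e_{k+1}
    u = Kc * e + u_old         # update u_{k+1}; "write" it as early as possible
    u_history.append(u)        # stand-in for the D/A write
    u_old = alpha1 * u + alpha2 * e   # bookkeeping before the wait
print(u_history[0], u_history[-1])    # initial kick, then settles to Kc*z/p = 0.2
```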

November 5, 2010
Fall 2010 16.30/31 208

Summary
Using a digital computer introduces some extra delay
Sample and hold: $\approx T_s/2$ delay
Holding u(k) to end of loop: $\approx T_s$ delay
So the delay is between $T_s/2$ and $3T_s/2$

Emulation process: design the continuous control accounting for an extra phase lag of

$$\phi = \frac{\omega_c T_s}{2}\cdot\frac{180^\circ}{\pi} = \frac{\omega_c}{2}\cdot\frac{2\pi}{\omega_s}\cdot\frac{180^\circ}{\pi} = 180^\circ\,\frac{\omega_c}{\omega_s}$$

added to the PM to account for the delay.

With $\omega_s \approx 15\,\omega_{BW}$, delay effects are small, and the cts and discrete controllers are similar

c2dm.m provides simple ways to discretize the continuous controllers


Lots of different conversion approaches depending on what properties of the continuous controller you want to preserve.

November 5, 2010
Topic #21

16.30/31 Feedback Control Systems

Systems with Nonlinear Functions


Describing Function Analysis
Fall 2010 16.30/31 212

NL Example

Another classic example, the Van der Pol equation¹:

$$\ddot{x} + \epsilon(x^2 - 1)\dot{x} + x = 0$$

which can be written as the linear system

$$G(s) = \frac{\epsilon}{s^2 - \epsilon s + 1}$$

in negative feedback with a nonlinear function $f(x, \dot{x}) = x^2\dot{x}$

(Block diagram: 0 → summer → $f(x, \dot{x})$ → q(t) → G(s) → x(t), with x(t) fed back to both the summer and the nonlinearity.)

Would expect to see different behaviors from the system depending on the value of $\epsilon$

(Figure: phase-plane plots of $\dot{x}$ vs. x for $\epsilon = 1$, $\epsilon = 2$, and $\epsilon = 0.2$, all converging to the same closed orbit.)

Of particular concern is the existence of a limit cycle response


Sustained oscillation for a nonlinear system, of the type above

1 Slotine and Li, page 158

November 23, 2010


Fall 2010 16.30/31 213

In this case the signal x(t) would be of the form of an oscillation $x(t) = A\sin(\omega t)$, so that $\dot{x}(t) = A\omega\cos(\omega t)$
Note that A and $\omega$ are not known, and may not actually exist.

Given the form of x(t), we have that

$$q(t) = x^2\dot{x} = A^2\sin^2(\omega t)\cdot A\omega\cos(\omega t) = \frac{A^3\omega}{4}\left(\cos(\omega t) - \cos(3\omega t)\right)$$

Thus the output of the nonlinearity (input of the linear part) contains the third harmonic of the input
Key point: since the system G(s) is low pass, expect that this third harmonic will be sufficiently attenuated by the linear system that we can approximate

$$q(t) = x^2\dot{x} \approx \frac{A^3\omega}{4}\cos(\omega t) = \frac{A^2}{4}\cdot\frac{d}{dt}\left[A\sin(\omega t)\right]$$

Note that we can now create an effective transfer function of this nonlinearity by defining:

$$q = N(A, \omega)(x), \qquad N(A, \omega) = \frac{A^2 j\omega}{4}$$

which approximates the effect of the nonlinearity as a frequency response function.

(Block diagram: 0 → summer → $N(A, \omega)$ → q(t) → G(s) → x(t), fed back.)

November 23, 2010


Fall 2010 16.30/31 214

What are the implications of adding this nonlinearity into the feedback loop?
Can approximately answer that question by looking at the stability of G(s) in feedback with N:

$$x = A\sin(\omega t) = -G(j\omega)q = -G(j\omega)N(A, \omega)(x)$$

which is equivalent to:

$$(1 + G(j\omega)N(A, \omega))x = 0$$

that we can rewrite as:

$$1 + \frac{A^2(j\omega)}{4}\cdot\frac{\epsilon}{(j\omega)^2 - \epsilon(j\omega) + 1} = 0$$

which is only true if $A = 2$ and $\omega = 1$

These results suggest that we could get sustained oscillations in this case (i.e., a limit cycle) of amplitude 2 and frequency 1.
This is consistent with the response seen in the plots: independent of $\epsilon$ we get sustained oscillations in which the x(t) value settles down to an amplitude of 2.
Note that $\epsilon$ does impact the response and changes the shape/features in the response.
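The amplitude prediction is easy to confirm by direct simulation; a Python sketch (the value ε = 0.2 and the initial condition are assumptions):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Simulate the Van der Pol equation and check the predicted limit-cycle amplitude
eps = 0.2
vdp = lambda t, s: [s[1], -eps * (s[0] ** 2 - 1) * s[1] - s[0]]
sol = solve_ivp(vdp, [0, 200], [0.5, 0.0], max_step=0.05)
x_late = sol.y[0][sol.t > 150]       # keep only the settled portion
print(x_late.max())                  # amplitude settles near 2
```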

Approach (called Describing Functions) is generalizable....

November 23, 2010


Fall 2010 16.30/31 215

Describing Function Analysis

Now consider a more general analysis of the describing function approach.

In this case consider the input to the nonlinearity to be $x(t) = A\sin\omega t$.
Would expect that the output y = f(x) is a complex waveform, which we represent using a Fourier series of the form:

$$y(t) = b_0 + \sum_{n=1}^{\infty}\left( a_n \sin n\omega t + b_n \cos n\omega t \right)$$

So it is explicit that the output of the nonlinearity contains multiple harmonics of the incoming signal.
In general we would expect these harmonics to pass through the
system G(s) and show up in the input to the nonlinearity
Would then have a much more complicated input for x(t), leading to a more complex output y(t) ⇒ a non-feasible analysis path

Need approximate approach, so assume
The fundamental $y_f = a_1\sin\omega t + b_1\cos\omega t$ is significantly larger in amplitude than the harmonics
The linear system G(s) acts as a low-pass filter that attenuates the harmonics more strongly than the fundamental.

November 23, 2010


Fall 2010 16.30/31 216

As a result, can approximate y(t) as $y_f$, and then the describing function of the nonlinearity becomes

$$N = \frac{y_f}{x}$$

Using Fourier analysis, can show that

$$a_1 = \frac{\omega}{\pi}\int_0^{2\pi/\omega} y(t)\sin\omega t\; dt, \qquad b_1 = \frac{\omega}{\pi}\int_0^{2\pi/\omega} y(t)\cos\omega t\; dt$$

Note that will often find that N is a function of the amplitude A and the frequency $\omega$.

Simple example: ideal relay, y = T if $x \ge 0$, otherwise y = −T.
Then (setting $\omega = 1$ for simplicity, since the solution isn't a function of $\omega$)

$$a_1 = \frac{1}{\pi}\int_0^{2\pi} y(t)\sin t\; dt = \frac{2}{\pi}\int_0^{\pi} T\sin t\; dt = \frac{4T}{\pi}$$

Nonlinearity is odd (i.e., $y(-t) = -y(t)$), so $b_i = 0\ \forall i$

So we have

$$N = \frac{4T}{\pi A}$$

so the equivalent gain decreases as the input amplitude goes up.
Makes sense since the output is limited, so the effective gain of the nonlinearity must decrease as the amplitude of the input goes up
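The relay result is easy to check by numerical Fourier analysis. A Python sketch (the values of T and A are assumptions):

```python
import numpy as np
from scipy.integrate import trapezoid

# Numerically recover the ideal-relay describing function N = 4T/(pi A)
T, A = 1.0, 2.5
t = np.linspace(0.0, 2 * np.pi, 200001)   # one period with omega = 1
x = A * np.sin(t)
y = np.where(x >= 0, T, -T)               # relay output

a1 = trapezoid(y * np.sin(t), t) / np.pi  # fundamental sine coefficient
b1 = trapezoid(y * np.cos(t), t) / np.pi  # ~0 for this odd nonlinearity
print(a1 / A, 4 * T / (np.pi * A))        # the two should agree
```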

November 23, 2010


Fall 2010 16.30/31 217

Saturation Nonlinearity

Classic nonlinearity is the saturation function

$$u = f(e) = \begin{cases} T & \text{if } e > T \\ e & \text{if } -T \le e \le T \\ -T & \text{if } e < -T \end{cases}$$

Outputs the signal, but only up to some limited magnitude, then caps the output at a value T.
Saturation is an odd function

Describing function calculation is (as before, $b_i = 0$):
Assume $e(t) = A\sin\omega t$ with $A > T$, and find $t_T$ so that

$$e(t_T) = A\sin\omega t_T = T \quad\Rightarrow\quad \beta \equiv \omega t_T = \arcsin\frac{T}{A}$$

Set $\theta = \omega t$, so that $d\theta = \omega\, dt$

$$a_1 = \frac{4}{\pi}\int_0^{\pi/2} y\sin\theta\; d\theta = \frac{4A}{\pi}\int_0^{\beta} \sin^2\theta\; d\theta + \frac{4T}{\pi}\int_{\beta}^{\pi/2} \sin\theta\; d\theta = \frac{2A}{\pi}\arcsin\frac{T}{A} + \frac{2T}{\pi}\sqrt{1 - \left(\frac{T}{A}\right)^2}$$

So if A > T the DF is given by

$$N(A) = \frac{2}{\pi}\left[ \arcsin\frac{T}{A} + \frac{T}{A}\sqrt{1 - \left(\frac{T}{A}\right)^2} \right]$$

and if $A \le T$, then N(A) = 1.

November 23, 2010


Fall 2010 16.30/31 218

Odd Nonlinearities with Memory

Many of the DFs are real, but for an NL with delay or hysteresis, can get very complex forms for N that involve phase lags.
N has both real and imaginary parts

Example: relay with hysteresis (also known as a Schmitt trigger)

(Figure: f(e) switches between −T and +T, with switching thresholds at $e = \pm\Delta$.)

Converts the input sine wave to a square wave, and also introduces a phase shift, as the change from −T to +T occurs after the input signal has changed sign.

If $\beta = \arcsin(\Delta/A)$, then

$$b_1 = -\frac{4T}{\pi}\frac{\Delta}{A}, \qquad a_1 = \frac{4T}{\pi}\sqrt{1 - \left(\frac{\Delta}{A}\right)^2}$$

Thus we have

$$N(A) = \frac{4T}{\pi A}\left[ \sqrt{1 - \left(\frac{\Delta}{A}\right)^2} - j\,\frac{\Delta}{A} \right]$$

Where the complex term arises from the phase shift from a sin input to a cos output.

November 23, 2010


Fall 2010 16.30/31 2113

Limit Cycle Analysis

Since N is an equivalent linear gain, the stability of the loop involving both N and G is given by the condition that there be a nonzero solution to the equation

$$GN + 1 = 0 \quad\Rightarrow\quad G = -\frac{1}{N}$$

Graphically what this will look like is an intersection between the Nyquist plot of G(s) and the DF locus $-1/N$
If N is real, then $-1/N$ is along the negative real line

The intersection point gives two values:
$\omega$ from $G(j\omega)$ at the intersection point gives the frequency of the oscillation
A from the expression for N for the particular value associated with the intersection point.

November 23, 2010


Fall 2010 16.30/31 2114

DF Analysis Example

Plant: $G(s) = \frac{K}{s(T_1 s + 1)(T_2 s + 1)}$ with relay nonlinearity:

$$f(e) = \begin{cases} T & \text{if } e \ge 0 \\ -T & \text{if } e < 0 \end{cases}$$

Describing function for f given by $N = 4T/(\pi A)$, and thus

$$-\frac{1}{N} = -\frac{\pi A}{4T}$$

which is on the negative real line, moving to the left as A increases.

Nyquist plot of G(s) will cross the real line at

$$s = -\frac{KT_1T_2}{T_1 + T_2}$$

with corresponding $\omega = 1/\sqrt{T_1 T_2}$
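These crossing formulas can be checked numerically. A Python sketch (the values K, T, T₁, T₂ below are my assumptions, not necessarily those behind the figures):

```python
import numpy as np

# Check the real-axis crossing of G(s) = K/(s(T1 s + 1)(T2 s + 1))
K, T, T1, T2 = 1.5, 1.0, 1.0, 0.5
w = 1 / np.sqrt(T1 * T2)                  # predicted crossing frequency
s = 1j * w
G = K / (s * (T1 * s + 1) * (T2 * s + 1))
print(G)                                  # purely real and negative

# Predicted limit-cycle amplitude where -1/N = -pi A/(4T) meets G
A = 4 * T / np.pi * K * T1 * T2 / (T1 + T2)
print(w, A)
```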

Graphical test:

Fig. 1: DF graphical test (with a relay, K = 1.5, A = 2.2918). Note that the DF is on the negative real line, parameterized by A, whereas the transfer function of G is parameterized by $\omega$

November 23, 2010


Fall 2010 16.30/31 2115

Fig. 2: Typical simulation setup (step input, relay, state-space plant, XY graph, and a To Workspace log)

Fig. 3: System initially forced a little (green) and a lot (blue), and then both responses converge to the same limit cycle

Can show that the expected amplitude of the limit cycle is:

$$A = \frac{4T}{\pi}\cdot\frac{KT_1T_2}{T_1 + T_2}$$

Compare with nonlinear simulation result:
Can we prove that this limit cycle is stable or unstable?

November 23, 2010


Fall 2010 16.30/31 2116

Now consider the same system, but a saturation nonlinearity instead.
For the graphical test, note that $-1/N$ is real, and very similar to the result for a relay

Fig. 4: Comparison of N for relay and saturation (N vs. A)

The slight dierence being in resulting amplitude of limit cycle

Graphical test with a saturation, K = 1.5: the intersection of the Nyquist plot of G
with the -1/N locus gives A = 2.2111


Fall 2010 16.30/31 2117

Now consider the system with a relay with hysteresis, with Δ = T/3

First note that if N(A) = F - jH, then

-1/N = -(F + jH) / (F^2 + H^2)

and in this case

F^2 + H^2 = (4T/(πA))^2 [ 1 - (Δ/A)^2 ] + (4T/(πA))^2 (Δ/A)^2 = (4T/(πA))^2

so we have

-1/N = -(πA/(4T)) [ sqrt(1 - (Δ/A)^2) + j Δ/A ]
     = -(πA/(4T)) sqrt(1 - (Δ/A)^2) - j πΔ/(4T)

So now use this to find ωc such that Im G(jωc) = -πΔ/(4T), and then use
Re G(jωc) to find A.
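A short numerical check (not in the original notes) that this -1/N expression is consistent with the hysteresis describing function N(A) = (4T/(πA))[sqrt(1 - (Δ/A)^2) - jΔ/A]:

```python
import math

# Relay-with-hysteresis DF and the -1/N expression derived above
T, Delta = 1.0, 1.0 / 3.0          # Delta = T/3 as in the example
for A in (0.5, 1.0, 2.0, 5.0):     # any amplitude A > Delta
    N = (4 * T / (math.pi * A)) * (math.sqrt(1 - (Delta / A) ** 2) - 1j * Delta / A)
    lhs = -1 / N
    rhs = (-(math.pi * A / (4 * T)) * math.sqrt(1 - (Delta / A) ** 2)
           - 1j * math.pi * Delta / (4 * T))
    assert abs(lhs - rhs) < 1e-12
```

Note that the imaginary part of -1/N is the constant -πΔ/(4T), independent of A, which is why the frequency ωc follows directly from Im G(jωc).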
Fig. 5: Test for limit cycle for a relay with hysteresis, K = 1.5, Δ = T/3 (the
intersection of G with the -1/N locus gives A = 2.921)


Fall 2010 16.30/31 2118

Fig. 6: Simulation comparison of all 3 types of nonlinearities (relay, saturation,
hysteresis) with K = 1.5, shown in the (x, ẋ) phase plane. Simulation results agree
well with the predictions.


Fall 2010 16.30/31 2119

Repeat the analysis with K = 1 to get the following plots

Note greater separation in the amplitudes
- Predicted amplitudes: relay A = 1.5279, saturation A = 1.3812, relay with
hysteresis A = 2.0808

Fig. 7: Repeat all simulations with K = 1


Fall 2010 16.30/31 2120

Repeat the analysis with K = 0.7 to get the following plots

Now note that the saturation nonlinearity does not lead to a limit
cycle (A = Inf for the saturation)
- Predicted amplitudes: relay A = 1.0695, relay with hysteresis A = 1.6013

Fig. 8: Repeat all simulations with K = 0.7


Fall 2010 16.30/31 2121

Limit Cycle Stability

Stability analysis is similar to that used for linear systems, where the
concern is about encirclements of the critical point s = -1.
Difference here: use the -1/N(A) point as the critical point.

Need to consider what the impact might be of a perturbation to
amplitude A if a limit cycle is initiated.

In cases considered, an increase in A would correspond to a shift to
the left of the -1/N(A) point in the s-plane
- With that change, G(s) would not encircle the critical point, the
response would be stable, and the amplitude of the signal (A)
would decrease
- Since the perturbation increases A, and the response decreases it, the
limit cycle is stable.

Similarly, if a decrease in A corresponds to a shift to the right of the
-1/N(A) point in the s-plane
- G(s) would now encircle the critical point, the response would be
unstable, and the signal amplitude (A) would increase
- Since the perturbation decreases A, and the response increases it, the limit
cycle is stable.

So limit cycle stability hinges on -1/N(A) intersecting the Nyquist plot
with increasing A pointing to the left of G(s)


Fall 2010 16.30/31 2122

Example

(Sketch: Nyquist plot of G crossed twice by the -1/N(A) locus; at one
intersection increasing A points to the left of G, a stable limit cycle, and at the
other it points to the right, an unstable limit cycle. U/S mark unstable/stable
regions along the locus.)
[Figure from E. Frazzoli (MIT), Lecture 23: Describing Functions, April 14, 2008]


Fall 2010 16.30/31 2123

Code: Describing Function Analysis


% Examples of describing ftns
% Jonathan How
% Oct 2009
%
set(0,'DefaultLineLineWidth',1.5)
set(0,'DefaultAxesFontName','arial');set(0,'DefaultTextFontName','arial')
set(0,'DefaultlineMarkerSize',8);set(0,'DefaultlineMarkerFace','r')
set(0,'DefaultAxesFontSize',12);set(0,'DefaultTextFontSize',12);
set(0,'DefaultFigureColor','w','DefaultAxesColor','w',...
    'DefaultAxesXColor','k','DefaultAxesYColor','k',...
    'DefaultAxesZColor','k','DefaultTextColor','k')
%
%clear all
global T GG2 GG3 Delta

if 0
  T_1=3;T_2=2;K=1.5;T=1;
elseif 0
  T_1=3;T_2=2;K=1;T=1;
else
  T_1=3;T_2=2;K=0.7;T=1;
end

SS=-(K*T_1*T_2)/(T_1+T_2);          % real-axis crossing of the Nyquist plot
G=tf(K,conv([T_1 1 0],[T_2 1]));
omega=logspace(-3,3,400);GG=freqresp(G,omega);GG=squeeze(GG);
omega2=1/sqrt(T_1*T_2);GG2=freqresp(G,omega2);GG2=squeeze(GG2);

A1=logspace(-2,log10(10),50);
N1=4*T./(pi*A1);                    % relay DF
figure(1);clf
plot(real(GG),imag(GG));
axis([-10 1 -10 10]);
axis([-8 .1 -4 4]);
grid on;hold on;
plot(real(-1./N1),imag(-1./N1),'r');  % -1/N locus
plot(real(GG2),imag(GG2),'ro');
hold off;
xlabel('Real');ylabel('Imag');
h=legend({'G','$-\frac{1}{N}$'},'Location','NorthEast','interpreter','latex');
title(['With a Relay, K=',num2str(K),', A=',num2str(-real(GG2)*4*T/pi)])

A2=logspace(log10(T),log10(10),50);
N2=(2/pi)*(asin(T./A2)+(T./A2).*sqrt(1-(T./A2).^2));  % saturation DF
figure(5);clf
plot(A2,-1./N2,[0 5]',[real(GG2) real(GG2)],'k');grid on
axis([1 1.75 -2 -0.9]);
if real(GG2) < -1
  Asat=fsolve('Nsat',[1]);
else
  Asat=inf;   % no intersection => no limit cycle predicted
end

figure(2);clf
plot(real(GG),imag(GG));axis([-8 .1 -4 4]);
grid on;hold on;
plot(real(-1./N2),imag(-1./N2),'r');
if real(GG2) < -1
  plot(real(GG2),imag(GG2),'ro');
end
hold off;
xlabel('Real');ylabel('Imag');
h=legend({'G','$-\frac{1}{N}$'},'Location','NorthEast','interpreter','latex');
title(['With a Saturation , K=',num2str(K),' A=',num2str(Asat)])

Delta=T/3;
A3=logspace(log10(Delta),1,200);
N3=(4*T./(A3*pi)).*(sqrt(1-(Delta./A3).^2)-sqrt(-1)*Delta./A3);  % relay w/ hysteresis DF
figure(3);clf
plot(real(GG),imag(GG));
axis([-5 .1 -1 1]);
grid on;hold on;
plot(real(-1./N3),imag(-1./N3),'r');
ii=find(abs(imag(GG) + (Delta*pi)/(4*T)) < .05)
GG3=GG(ii);Ahyst=fsolve('Nhyst',[2])
plot(real(GG3),imag(GG3),'ro');
hold off;
xlabel('Real');ylabel('Imag');
h=legend({'G','$-\frac{1}{N}$'},'Location','NorthEast','interpreter','latex');
title(['With a Relay with Hysteresis, K=',num2str(K),' A=',num2str(Ahyst)])

figure(4);clf
A2=[0 A2];N2=[1 N2];                % saturation DF is 1 for A <= T
semilogy(A1,N1,A2,N2,'k');grid on
xlabel('A','Interpreter','latex');ylabel('N','Interpreter','latex');
legend('Relay','Saturation','Location','NorthEast');
axis([0 5 1e-1 20]);

sim('RL1');sim('RL2');sim('RL3');

figure(6)
plot(RL(:,1),RL(:,2))
hold on
plot(RL2(:,1),RL2(:,2),'g')
plot(RL3(:,1),RL3(:,2),'r:')
hold off
legend('Relay','Saturation','Hyst','Location','NorthEast');
grid on
xlabel('x','Interpreter','latex');
ylabel('$\dot{x}$','Interpreter','latex');
title(['Sim response with different NLs, K=',num2str(K)])

if K==1.5
  figure(1);export_fig G_examp1 -pdf
  figure(2);export_fig G_examp2 -pdf
  figure(3);export_fig G_examp3 -pdf
  figure(4);export_fig G_examp4 -pdf
  figure(6);export_fig G_examp6 -pdf
elseif K==1
  figure(1);export_fig G_examp1a -pdf
  figure(2);export_fig G_examp2a -pdf
  figure(3);export_fig G_examp3a -pdf
  figure(6);export_fig G_examp6a -pdf
else
  figure(1);export_fig G_examp1b -pdf
  figure(2);export_fig G_examp2b -pdf
  figure(3);export_fig G_examp3b -pdf
  figure(6);export_fig G_examp6b -pdf
end

function y=Nsat(A)
global T GG2
N2=(2/pi)*(asin(T/A)+(T/A)*sqrt(1-(T/A)^2));
y=-1/N2-real(GG2);
end

function y=Nhyst(A)
global T GG3 Delta
neginvN3=-(A*pi/(4*T))*sqrt(1-(Delta/A)^2)-sqrt(-1)*(Delta*pi)/(4*T);
y=real(neginvN3)-real(GG3);
end


MIT OpenCourseWare
http://ocw.mit.edu

16.30 / 16.31 Feedback Control Systems


Fall 2010

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
Topic #22

16.30/31 Feedback Control Systems

Analysis of Nonlinear Systems


Lyapunov Stability Analysis
Fall 2010 16.30/31 222

Lyapunov Stability Analysis

Very general method to prove (or disprove) stability of nonlinear
systems.
- Formalizes the idea that all systems tend to a minimum-energy
state.
- Lyapunov's stability theory is the single most powerful method
in stability analysis of nonlinear systems.

Consider a nonlinear system ẋ = f(x)
- A point x0 is an equilibrium point if f(x0) = 0
- Can always assume x0 = 0

In general, an equilibrium point is said to be
- Stable in the sense of Lyapunov if (arbitrarily) small deviations
from the equilibrium result in trajectories that stay (arbitrarily)
close to the equilibrium for all time.
- Asymptotically stable if small deviations from the equilibrium
are eventually "forgotten," and the system returns asymptotically
to the equilibrium point.
- Exponentially stable if it is asymptotically stable, and the
convergence to the equilibrium point is "fast."

November 27, 2010


Fall 2010 16.30/31 223

Stability
Let x = 0 ∈ D be an equilibrium point of the system
ẋ = f(x),
where f : D → R^n is locally Lipschitz in D ⊆ R^n

f(x) is locally Lipschitz in D if for every x ∈ D there is a neighborhood
I(x) such that |f(y) - f(z)| ≤ L |y - z| for all y, z ∈ I(x).
- A smoothness condition on functions that is stronger than regular
continuity; intuitively, a Lipschitz continuous function is limited
in how fast it can change.
- A sufficient condition for a function to be Lipschitz is that the
Jacobian ∂f/∂x is uniformly bounded for all x.
The equilibrium point is
- Stable in the sense of Lyapunov (ISL) if, for each ε > 0,
there is δ = δ(ε) > 0 such that

‖x(0)‖ < δ  =>  ‖x(t)‖ < ε,  ∀t ≥ 0;

- Asymptotically stable if stable, and there exists δ > 0 s.t.

‖x(0)‖ < δ  =>  lim_{t→+∞} x(t) = 0

- Exponentially stable if there exist α, λ, δ > 0 s.t.

‖x(0)‖ < δ  =>  ‖x(t)‖ < α e^{-λt},  ∀t ≥ 0;

- Unstable if not stable.

(Sketch: trajectories starting at x(0) near the equilibrium xe either remain close
for all time, ISL or marginally stable, or diverge, unstable.)


Fall 2010 16.30/31 224

How do we analyze the stability of an equilibrium point?

Already talked about how to linearize the dynamics about the equilibrium
point and use the conclusion from the linear analysis to develop
a local conclusion
- Often called Lyapunov's first method

How about a more global conclusion?
- Powerful method based on the concept of a Lyapunov function
- Lyapunov's second method
- A LF is a scalar function of the state that is always non-negative,
is zero only at the equilibrium point, and is such that its value is
non-increasing along the system's trajectories.

Generalization of a result from classical mechanics: a vibratory system is
stable if the total energy is continually decreasing.


Fall 2010 16.30/31 225

Lyapunov Stability Theorem

Let D be a compact subset¹ of the state space, containing the
equilibrium point (i.e., {x0} ⊂ D ⊂ R^n), and let there be a function
V : D → R.

Theorem: The equilibrium point x0 is stable (in the sense of
Lyapunov) if V satisfies the following conditions (and if it does, it is
called a Lyapunov function):
1. V(x) ≥ 0, for all x ∈ D.
2. V(x) = 0 if and only if x = x0.
3. For all x(t) ∈ D,

V̇(x(t)) = (d/dt) V(x(t)) = (∂V/∂x) (dx(t)/dt) = (∂V/∂x) f(x) ≤ 0

Furthermore,
1. If V̇(x(t)) = 0 only when x(t) = x0, then the equilibrium is
asymptotically stable.
2. If V̇(x(t)) ≤ -γ V(x(t)), for some γ > 0, then the equilibrium
is exponentially stable.

Finally, to ensure global stability, need to impose the extra condition
that as ‖x‖ → +∞, V(x) → +∞.
- Such a function V is said to be radially unbounded

¹A compact set is a set that is closed and bounded, e.g., the set {(x, y) : 0 ≤ x ≤ 1, -x^2 ≤ y ≤ x^2}.


Fall 2010 16.30/31 226

Note that condition (1) in the Theorem corresponds to V(x) being
positive definite (V(x) > 0 for all x ≠ 0 and V(0) = 0).
- V(x) being positive semi-definite means V(x) ≥ 0 for all x, but
V(x) can be zero at points other than x = 0.

Classify each candidate below as PD, PSD, ND, NSD, or ID (indefinite):

i) V(x) = x1^2 + x2^2

ia) V(x) = x1^2

ii) V(x) = (x1 + x2)^2

iii) V(x) = -x1^2 - (3x1 + 2x2)^2

iv) V(x) = x1 x2 + x2^2

v) V(x) = x1^2 + 2 x2^2 / (1 + x2^2)
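A numerical spot-check of two of the items above (my own sketch; grid sampling can falsify a definiteness claim but cannot prove one):

```python
import itertools

# Sample candidate functions on a grid around the origin (excluding x = 0)
grid = [i / 10 for i in range(-20, 21)]
pts = [(x1, x2) for x1, x2 in itertools.product(grid, grid) if (x1, x2) != (0.0, 0.0)]

V_ii = [(x1 + x2) ** 2 for x1, x2 in pts]
assert all(v >= 0 for v in V_ii)   # never negative ...
assert min(V_ii) == 0.0            # ... but zero off the origin (e.g. (1,-1)): PSD, not PD

V_iv = [x1 * x2 + x2 ** 2 for x1, x2 in pts]
assert min(V_iv) < 0 < max(V_iv)   # takes both signs: indefinite
```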


Fall 2010 16.30/31 227

Example 1: Pendulum

Typical method for finding candidate Lyapunov functions is based on
the mechanical energy in the system

Consider a pendulum:
θ̈ = -(g/l) sin(θ) - c θ̇,
Setting x1 = θ, x2 = θ̇:
ẋ1 = x2
ẋ2 = -(g/l) sin(x1) - c x2

Can use the mechanical energy as a Lyapunov function candidate:

V = (1/2) m l^2 x2^2 + m g l (1 - cos(x1))

Analysis:
- V(0) = 0
- V(x1, x2) ≥ 0
- V̇(x1, x2) = (m l^2 x2) ẋ2 + m g l sin(x1) ẋ1
            = -c m l^2 x2^2 ≤ 0

Thus the equilibrium point (x1, x2) = 0 is stable in the sense of
Lyapunov.
- But note that V̇ is only NSD
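The energy argument above can be checked in simulation. A minimal sketch, assuming the illustrative values g = 9.81, l = 1, m = 1, c = 0.5 (not specified in the notes):

```python
import math

# Damped pendulum xdot1 = x2, xdot2 = -(g/l) sin(x1) - c*x2, with V = energy
g_, l, m, c = 9.81, 1.0, 1.0, 0.5

def f(x1, x2):
    return x2, -(g_ / l) * math.sin(x1) - c * x2

def V(x1, x2):
    return 0.5 * m * l**2 * x2**2 + m * g_ * l * (1 - math.cos(x1))

x1, x2, dt = 1.0, 0.0, 1e-3
vals = [V(x1, x2)]
for _ in range(10000):                      # simulate 10 s with RK4
    a1, b1 = f(x1, x2)
    a2, b2 = f(x1 + dt/2*a1, x2 + dt/2*b1)
    a3, b3 = f(x1 + dt/2*a2, x2 + dt/2*b2)
    a4, b4 = f(x1 + dt*a3, x2 + dt*b3)
    x1 += dt/6*(a1 + 2*a2 + 2*a3 + a4)
    x2 += dt/6*(b1 + 2*b2 + 2*b3 + b4)
    vals.append(V(x1, x2))

assert all(b <= a + 1e-9 for a, b in zip(vals, vals[1:]))  # V never increases
assert vals[-1] < 0.05 * vals[0]                           # energy has decayed
```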


Fall 2010 16.30/31 228

Example 2: Linear System

Consider a system ẋ = Ax.

Another common choice: quadratic Lyapunov functions,

V(x) = ‖Mx‖^2 = x^T M^T M x = x^T P x

with P = M^T M, a symmetric and positive definite matrix.
- Easy to check that V(0) = 0, and V(x) ≥ 0

To find the derivative along trajectories, note that

V̇(x) = ẋ^T P x + x^T P ẋ
     = x^T A^T P x + x^T P A x
     = x^T (A^T P + P A) x

Next step: make this derivative equal to a given negative-definite
function

V̇(x) = x^T (A^T P + P A) x = -x^T Q x,  (Q > 0)

Then an appropriate matrix P can be found by solving:

A^T P + P A = -Q

- Not surprisingly, this is called a Lyapunov equation
- Note that it happens to be the linear part of a Riccati equation
- It always has a (positive definite) solution if all the eigenvalues of A are in the left
half plane (i.e., A is Hurwitz, and defines a stable linear system)
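A minimal sketch of solving the Lyapunov equation numerically (assuming NumPy; the example A and Q matrices are my own, not from the notes). The vectorization identity vec(AᵀP) = (I ⊗ Aᵀ)vec(P), vec(PA) = (Aᵀ ⊗ I)vec(P) turns the matrix equation into a linear system:

```python
import numpy as np

# Solve A'P + PA = -Q via vectorization: (I (x) A' + A' (x) I) vec(P) = -vec(Q)
A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # Hurwitz: eigenvalues -1, -2
Q = np.eye(2)
n = A.shape[0]

M = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
P = np.linalg.solve(M, -Q.reshape(-1, order="F")).reshape(n, n, order="F")

assert np.allclose(A.T @ P + P @ A, -Q)    # Lyapunov equation holds
assert np.allclose(P, P.T)                 # P is symmetric ...
assert np.all(np.linalg.eigvalsh(P) > 0)   # ... and positive definite, as expected
```

For this A and Q the solution works out to P = [[1.25, 0.25], [0.25, 0.25]].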


Fall 2010 16.30/31 229

Example 3: Controlled Linear System

Consider a possibly unstable, but controllable, linear system

ẋ = Ax + Bu

We know that if we solve the Riccati equation

A^T P + P A - P B R^-1 B^T P + Q = 0

and set u = -Kx with K = R^-1 B^T P, the closed-loop system
ẋ = (A - BK) x
is stable.
- Can confirm this fact using the Lyapunov Thm.

In particular, note that the solution P of the Riccati equation has the
interpretation of a Lyapunov function, i.e., for this closed-loop system
we can use
V(x) = x^T P x
Check:
V̇(x) = ẋ^T P x + x^T P ẋ
     = x^T (A - BK)^T P x + x^T P (A - BK) x
     = x^T (A^T P + P A - K^T B^T P - P B K) x
     = x^T (A^T P + P A - 2 P B R^-1 B^T P) x
     = -x^T (Q + P B R^-1 B^T P) x ≤ 0


Fall 2010 16.30/31 2210

Example 4: Local Region

Consider the system

dx/dt = -x + 2/(1 + x)

which has equilibrium points at x = 1 and x = -2.

Around the eq point x = 1, let z = x - 1; then

dz/dt = -(z + 1) + 2/(2 + z)

which has an eq point at z = 0.

Consider the LF V = (1/2) z^2, which is globally PD
- Then can show

V̇ = z ż = -z^2 - z + 2z/(2 + z)

Now restrict attention to an interval B_r, where r < 2 and thus z < 2
and -2 < z, which can be rewritten as 2 + z > 0; then have

V̇ (2 + z) = 2z - (z^2 + z)(2 + z)
          = -z^3 - 3z^2
          = -z^2 (z + 3) < 0  ∀ z ∈ B_r (r < 2), z ≠ 0

Thus it follows that V̇ < 0 for all z ∈ B_r, z ≠ 0, and hence the eq
point xe = 1 is locally asymptotically stable.
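A numerical sanity check of this example (pure Python, my own sketch): verify the equilibria and that V̇ = zż is strictly negative on the interval (-2, 2) away from z = 0:

```python
# Equilibria of dx/dt = -x + 2/(1+x)
f = lambda x: -x + 2 / (1 + x)
assert abs(f(1.0)) < 1e-12 and abs(f(-2.0)) < 1e-12   # x = 1 and x = -2

# Shifted dynamics around x = 1: zdot = -(z+1) + 2/(2+z), V = z^2/2
zdot = lambda z: -(z + 1) + 2 / (2 + z)
for i in range(1, 200):
    z = -2 + 0.02 * i            # z sweeps (-1.98, 1.98)
    if abs(z) < 1e-9:            # skip the equilibrium z = 0
        continue
    assert z * zdot(z) < 0       # Vdot < 0 throughout B_r, r < 2
```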


Fall 2010 16.30/31 2211

Example 5: Saturation

(Block diagram: r → e → f(e) → u → 1/(Ts + 1) → x2 → 1/s → x1, with x1 fed
back to form e.)

System dynamics are

ė = -x2
ẋ2 = -(1/T) x2 + (1/T) f(e)

where it is known that:
- u = f(e) and f(·) lies in the first and third quadrants
- f(e) = 0 means e = 0, and ∫_0^e f(σ) dσ > 0
- Assume that T > 0, so open loop stable

Candidate Lyapunov function

V = (T/2) x2^2 + ∫_0^e f(σ) dσ

Clearly:
- V = 0 if e = x2 = 0, and V > 0 for x2^2 + e^2 ≠ 0
What about the derivative?

V̇ = T x2 ẋ2 + f(e) ė
  = T x2 [ -(1/T) x2 + (1/T) f(e) ] + f(e) [ -x2 ]
  = -x2^2

Since V is PD and V̇ is NSD, the origin is stable ISL.


Fall 2010 16.30/31 2212

Invariance Principle

Lyapunov's theorem ensures asymptotic stability if we can find a
Lyapunov function that is strictly decreasing away from the equilibrium.
- Unfortunately, in many cases (e.g., in aerospace, robotics, etc.),
there may be situations in which V̇ = 0 for states other than at
the equilibrium (i.e., V̇ is NSD, not ND).
- Need a further analysis tool for these types of systems, since stable
ISL is typically insufficient

LaSalle's invariance principle: Consider a system

ẋ = f(x)

- Let Ω ⊂ D be a (compact) positively invariant set, i.e., a set such
that if x(t0) ∈ Ω, then x(t) ∈ Ω for all t ≥ t0.
- Let V : D → R, such that V̇(x) ≤ 0 for all x ∈ Ω.
Then, x(t) will eventually approach the largest positively invariant set
in which V̇ = 0.

Note that positively invariant sets include equilibrium points and limit
cycles.


Fall 2010 16.30/31 2213

Invariance Example 1

Pendulum revisited: consider again the mechanical energy as the
Lyapunov function
- Showed that V̇(x) = -c m l^2 x2^2
- Thus previously could only show that V̇(x) ≤ 0, and the system
is stable ISL

But we know that V̇(x) = 0 whenever θ̇ = 0, i.e., the system is
on the x2 = θ̇ = 0 axis
- However, the only part of the x2 = 0 axis that is invariant is the
origin!
- LaSalle's invariance principle allows us to conclude that the pendulum
system response must tend to this invariant set
- Hence the system is in fact asymptotically stable.

Revisit Example 5:
- V̇ < 0 if x2 ≠ 0, and the only invariant point with x2 = 0 is x2 = e = 0,
so the origin is asymptotically stable


Fall 2010 16.30/31 2214

Invariance Example 2

Limit cycle:

ẋ1 = x2 - x1^7 [ x1^4 + 2 x2^2 - 10 ]
ẋ2 = -x1^3 - 3 x2^5 [ x1^4 + 2 x2^2 - 10 ]

Note that x1^4 + 2 x2^2 = 10 is invariant, since

d/dt [ x1^4 + 2 x2^2 - 10 ] = -(4 x1^10 + 12 x2^6)(x1^4 + 2 x2^2 - 10)

which is zero if x1^4 + 2 x2^2 = 10.

Dynamics on this set are governed by ẋ1 = x2 and ẋ2 = -x1^3, which
corresponds to a limit cycle with clockwise state motion in the
phase plane

Is the limit cycle attractive? To determine, pick

V = (x1^4 + 2 x2^2 - 10)^2

which is a measure of the distance to the LC.
- In a region about the LC, can show that

V̇ = -8 (x1^10 + 3 x2^6)(x1^4 + 2 x2^2 - 10)^2

so V̇ < 0 except if x1^4 + 2 x2^2 = 10 (the LC) or x1^10 + 3 x2^6 = 0 (at
the origin).

Conclusion: the origin and the LC are the invariant sets for this
system; thus all trajectories starting in a neighborhood of the LC
converge to this invariant set
- Actually turns out the origin is unstable.
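A simulation sketch of this example (my own; RK4 with a small step, since the x1^7 and x2^5 terms make the dynamics stiff). A trajectory started off the set x1^4 + 2x2^2 = 10 collapses onto it:

```python
# xdot1 = x2 - x1^7 * S, xdot2 = -x1^3 - 3 x2^5 * S, with S = x1^4 + 2 x2^2 - 10
def f(x1, x2):
    S = x1**4 + 2 * x2**2 - 10
    return x2 - x1**7 * S, -x1**3 - 3 * x2**5 * S

x1, x2, dt = 1.0, 2.4, 1e-4          # start outside the limit cycle (S0 = 2.52)
S0 = x1**4 + 2 * x2**2 - 10
for _ in range(20000):               # 2 s of simulated time, RK4
    a1, b1 = f(x1, x2)
    a2, b2 = f(x1 + dt/2*a1, x2 + dt/2*b1)
    a3, b3 = f(x1 + dt/2*a2, x2 + dt/2*b2)
    a4, b4 = f(x1 + dt*a3, x2 + dt*b3)
    x1 += dt/6*(a1 + 2*a2 + 2*a3 + a4)
    x2 += dt/6*(b1 + 2*b2 + 2*b3 + b4)

S = x1**4 + 2 * x2**2 - 10
assert abs(S) < 1e-3 < abs(S0)       # distance to the invariant set has collapsed
```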


Fall 2010 16.30/31 2215

Summary

Lyapunov functions are a very powerful tool to study the stability of a
system.

Lyapunov's theorem only gives us a sufficient condition for stability
- If we can find a Lyapunov function, then we know the equilibrium
is stable.
- However, if a candidate Lyapunov function does not satisfy the
conditions in the theorem, this does not prove that the
equilibrium is unstable.

Unfortunately, there is no general way of constructing Lyapunov
functions; however,
- Often energy can be used as a Lyapunov function.
- Quadratic Lyapunov functions are commonly used; these can be
derived from linearization of the system near equilibrium points.

A very recent development: sum-of-squares methods can be
used to construct polynomial Lyapunov functions.

LaSalle's invariance principle is very useful in resolving cases when V̇ is
negative semi-definite.


16.30/31 Feedback Control Systems
Overview of Nonlinear Control Synthesis

Emilio Frazzoli

Aeronautics and Astronautics

Massachusetts Institute of Technology

December 3, 2010

E. Frazzoli (MIT) Nonlinear Control December 3, 2010 1 / 14


An overview of nonlinear control design methods
Extend applicability of linear design methods:
Gain scheduling

Integrator anti-windup schemes

Geometric control
Feedback linearization

Dynamics inversion

Differential flatness

Adaptive control
Neural network augmentation
Lyapunov-based methods/Contraction theory
Control Lyapunov Functions

Sliding mode control

Backstepping

Computational/logic approaches
Hybrid systems

Model Predictive Control

E. Frazzoli (MIT) Nonlinear Control December 3, 2010 2 / 14


Gain scheduling

Nonlinear system: ẋ = f(x, u).

Choose n equilibrium points, i.e., (x̄i, ūi), such that f(x̄i, ūi) = 0,
i = 1, . . . , n.

For each of these equilibria, linearize the system and design a local control
law ui(x) = ūi - Ki (x - x̄i) for the linearization.

A global control law consists of:

Choose the right control law, as a function of the state: i = σ(x)

Use that control law: u(x) = u_σ(x)(x)

E. Frazzoli (MIT) Nonlinear Control December 3, 2010 3 / 14
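The recipe above can be sketched on a toy scalar plant (my own example, not from the slides): ẋ = -sin(x) + u, with scheduled equilibria ū_i = sin(x̄_i), local linearizations A_i = -cos(x̄_i), and gains K_i = 2 - cos(x̄_i) placing each local closed-loop pole at -2:

```python
import math

# Gain-scheduling sketch for xdot = -sin(x) + u (illustrative plant)
xbars = [0.0, 2.5]                     # scheduled operating points

def local_law(i, x):
    xbar = xbars[i]
    ubar = math.sin(xbar)              # equilibrium input: f(xbar, ubar) = 0
    K = 2.0 - math.cos(xbar)           # local closed-loop pole at -2
    return ubar - K * (x - xbar)

sigma = lambda x: min(range(len(xbars)), key=lambda i: abs(x - xbars[i]))

x, dt = 1.8, 0.01                      # start in the basin of the x = 2.5 point
for _ in range(1000):                  # 10 s, forward Euler
    u = local_law(sigma(x), x)
    x += dt * (-math.sin(x) + u)

assert abs(x - 2.5) < 1e-3             # regulated to the nearest equilibrium
```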


Control Lyapunov functions

Nonlinear system: ẋ = f(x) + g(x)u, with equilibrium at x = 0

A function V : x → V(x) is a Control Lyapunov Function if

It is positive definite

V(0) = 0.

It is always possible to find u such that

V̇ = (∂V/∂x) f(x) + (∂V/∂x) g(x) u ≤ 0.

If V is a CLF, it is always possible to design a control law ensuring V̇ ≤ 0!

E. Frazzoli (MIT) Nonlinear Control December 3, 2010 4 / 14
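A minimal CLF sketch (my own example, not from the slides): for the scalar system ẋ = x² + u with V = x²/2, the choice u = -x² - x gives V̇ = x(x² + u) = -x² ≤ 0, so this V certifies the control law:

```python
# CLF example: f(x) = x^2, g(x) = 1, V(x) = x^2/2, u(x) = -x^2 - x
f = lambda x: x * x
g = lambda x: 1.0
u = lambda x: -x * x - x

for i in range(-100, 101):
    x = i / 10                          # sweep x in [-10, 10]
    Vdot = x * (f(x) + g(x) * u(x))     # dV/dt along trajectories
    assert Vdot <= 0.0                  # Vdot = -x^2 <= 0 everywhere
```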


Differential Flatness

A dynamical system
dx/dt = f(x, u),
z = h(x, u)
is said to be differentially flat, with flat output z, if one can compute the
state and input trajectories as a function of the flat output and a finite
number of its derivatives, i.e., if one can find a map Φ such that

(x, u) = Φ(z, ż, . . . , z^(l)).

Differential flatness can be shown to be equivalent to (dynamic) feedback
linearizability. (But more intuitive.)

E. Frazzoli (MIT) Nonlinear Control December 3, 2010 5 / 14


A differentially-flat model of aircraft dynamics 1/4

Given a reference trajectory, pd, we can compute the reference velocity ṗd,
and the reference acceleration p̈d.
Based on these, and assuming coordinated flight (and hence ṗd ≠ 0), we can
get a set of reference wind axes:
The xw axis is aligned with the velocity vector ṗd, i.e., xw := ṗd/‖ṗd‖.
The acceleration can be written as

p̈d = g + fI/m,

where fI is the aerodynamic/propulsive force in the inertial frame, and g is the
gravity vector.
The main sources of forces for an airplane are the engine thrust and lift. Both
are approximately contained within the symmetry plane of the aircraft. Hence,
the zw axis is chosen such that the (xw, zw) plane contains fI.
The yw axis is chosen to complete a right-handed orthonormal triad.

E. Frazzoli (MIT) Nonlinear Control December 3, 2010 6 / 14


A differentially-flat model of aircraft dynamics 2/4

Assume that we can control independently:

The tangential acceleration¹ at along the wind velocity vector (xw axis).
The normal acceleration an (along the zw axis).
The roll rate ω1 around the wind velocity vector (xw axis).

Recall p̈d = g + aI = g + R aw, where aI is the acceleration in the inertial
frame, aw is the acceleration in the wind frame defined on the previous slide,
and R is a rotation matrix, computed as a function of ṗd, p̈d.
Differentiating,

pd^(3) = Ṙ aw + R ȧw = R (ω × aw) + R ȧw.

In addition, since ṗd = V R e1 (coordinated flight), we also have

p̈d = g + R aw = V̇ R e1 + V R (ω × e1)

¹Here and below, acceleration is understood as acceleration due to aerodynamic/propulsive
forces.
E. Frazzoli (MIT) Nonlinear Control December 3, 2010 7 / 14
A differentially-flat model of aircraft dynamics 3/4

In coordinates, the second equation reads:

p̈d = R [ V̇, V ω3, -V ω2 ]^T,

i.e., ω2 and ω3 can be computed from ṗ and p̈.

Also,

d/dt [ p̈d,1, p̈d,2, p̈d,3 ]^T = R [ ω2 an + ȧt,  ω3 at - ω1 an,  -ω2 at + ȧn ]^T
E. Frazzoli (MIT) Nonlinear Control December 3, 2010 8 / 14


A differentially-flat model of aircraft dynamics 4/4

Finally, we have

[ ȧt, ω1, ȧn ]^T = [ -ω2 an,  ω3 at/an,  ω2 at ]^T + diag(1, -1/an, 1) R^T pd^(3)

I.e., the system is differentially flat, with flat output pd, as long as

V = ‖ṗd‖ ≠ 0: if the velocity is zero, then coordinated flight is not well
defined.

an = ‖ (I - (1/V^2) ṗd ṗd^T)(p̈d - g) ‖ ≠ 0: if the normal acceleration is zero, the roll
attitude is not well defined.

E. Frazzoli (MIT) Nonlinear Control December 3, 2010 9 / 14


Incorporating aerodynamic and propulsive models

We have derived a differentially flat system for aircraft dynamics, with flat
output pd (position trajectory) and inputs (ȧt, ω1, ȧn).
How can we control at, an (or their derivatives)?
The wing lift is

L = (1/(2m)) ρ V^2 S CL(α),

with the lift coefficient CL approximately affine in the angle of attack α;
the drag is similarly computed.
The engine thrust T is a function of the throttle setting δT and other variables.

The acceleration components in wind axes are given by

at = T(δT) cos α - D(α),
an = -T(δT) sin α - L(α).

Compute, e.g., α and δT from the desired at, an.

Rely on a CAS that tracks the commanded α, δT, and ω1.

E. Frazzoli (MIT) Nonlinear Control December 3, 2010 10 / 14


Adding feedback

So far, we have shown that, given a reference trajectory, we can compute
uniquely (modulo a 180° roll rotation) the corresponding state and control
input trajectories.

What if the initial condition is not on the reference trajectory? What if there
are disturbances that make the aircraft deviate from the trajectory? We need
feedback.

Let p : t → p(t) be the actual position of the vehicle, and consider a system
in which p^(3) = u, e.g.,

d/dt [ p, ṗ, p̈ ]^T = [ 0 I 0 ; 0 0 I ; 0 0 0 ] [ p, ṗ, p̈ ]^T + [ 0 ; 0 ; I ] u

E. Frazzoli (MIT) Nonlinear Control December 3, 2010 11 / 14


Tracking control law

Define the error e = p - pd. The error dynamics are given by

d/dt [ e, ė, ë ]^T = [ 0 I 0 ; 0 0 I ; 0 0 0 ] [ e, ė, ë ]^T + [ 0 ; 0 ; I ] (u - pd^(3)).

If we set

u = pd^(3) - K [ e, ė, ë ]^T,

where K is a stabilizing control gain for the error dynamics, and
compute at, an, ω1 from (p, ṗ, p̈, u) (vs. pd and its derivatives),

then
lim_{t→+∞} e(t) = lim_{t→+∞} (p(t) - pd(t)) = 0,

as desired.

E. Frazzoli (MIT) Nonlinear Control December 3, 2010 12 / 14


Some remarks

Convergence is assured almost globally; the control law breaks down if at any
point ẋ = 0 (zero velocity) or an = 0.

A modification of this control law can ensure path tracking (vs. trajectory
tracking), requiring less thrust control effort. See Hauser & Hindman '97.

Some limitations:

Simplified aerodynamic/propulsive models.

Saturations are not taken into account (unbounded an, at, ω1).

Coordinated flight is an additional constraint, and there is no control over roll: this model
cannot account for, e.g., a split-S maneuver.

E. Frazzoli (MIT) Nonlinear Control December 3, 2010 14 / 14


Topic #23

16.30/31 Feedback Control Systems

Analysis of Nonlinear Systems


Anti-windup

Notes developed in part from 16.30 Estimation and Control of Aerospace
Systems, Lecture 21: Lyapunov Stability Theory by Prof. Frazzoli
Fall 2010 16.30/31 232

Nonlinear System Analysis

Example: Car cruise control1

(Free-body sketch: engine force F, gravity component Fg along the road, and
weight mg.)

Equation of motion in the direction parallel to the road surface:

m dv/dt = Feng + Faero + Ffrict + Fg,

where
Faero = -(1/2) ρ Cd A v |v|,
Fg = -m g sin(θ),
Ffrict = -m g Cr cos(θ) sgn(v).

Engine model

(Figure: engine torque T [Nm] versus engine angular velocity ω [rad/s] at full
throttle, and the same torque curves mapped to vehicle velocity v [m/s] for
gears n = 1, ..., 5.)

Engine torque (at full throttle): T = Tm [ 1 - β (ω/ωm - 1)^2 ],
where ω = (n/r) v = αn v, n is the gear ratio, and r the wheel radius.
The engine driving force can hence be written as
Feng = αn T(αn v) u,  0 ≤ u ≤ 1.

¹The example is taken from Åström and Murray: Feedback Systems, 2008

November 21, 2010


Fall 2010 16.30/31 233

Jacobian Linearization
Any (feasible) speed corresponds to an equilibrium point.
Choose a reference speed vref > 0, and solve for dv/dt = 0 with
respect to u, assuming a horizontal road (θ = 0):

0 = αn T(αn v̄) ū - (1/2) ρ Cd A v̄^2 - m g Cr

i.e.,

ū = [ (1/2) ρ Cd A v̄^2 + m g Cr ] / ( αn T(αn v̄) ).

Linearized system (δv = v - v̄, δu = u - ū):

d(δv)/dt = (1/m) [ αn^2 T'(αn v̄) ū - ρ Cd A v̄ ] δv + (1/m) αn T(αn v̄) δu
         =: Adyn δv + Bdyn δu

Example: numerical values

Let us use the following numerical values (all units in SI):
Tm = 190, β = 0.4, ωm = 420, α5 = 10, Cr = 0.01,
m = 1500, g = 9.81, ρ = 1.2, Cd A = 0.79.
For v̄ = 25 (90 km/h, or 55 mph), we get ū = 0.2497.
The linearization yields:
Adyn = -0.0134, Bdyn = 1.1837

G(s) = 1.1837 / (s + 0.0134)
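The linearization numbers on this slide can be reproduced directly (a sketch using the slide's values, with g = 9.81 as stated there):

```python
# Cruise-control equilibrium and Jacobian linearization, reproducing the slide
Tm, beta, wm, alpha_n = 190.0, 0.4, 420.0, 10.0
Cr, m, g, rho, CdA = 0.01, 1500.0, 9.81, 1.2, 0.79
vbar = 25.0

T  = lambda w: Tm * (1 - beta * (w / wm - 1) ** 2)   # engine torque map
dT = lambda w: Tm * (-2 * beta * (w / wm - 1) / wm)  # dT/d(omega)

# Equilibrium throttle: engine force balances drag + rolling friction
ubar = (0.5 * rho * CdA * vbar**2 + m * g * Cr) / (alpha_n * T(alpha_n * vbar))
Adyn = (alpha_n**2 * dT(alpha_n * vbar) * ubar - rho * CdA * vbar) / m
Bdyn = alpha_n * T(alpha_n * vbar) / m

assert abs(ubar - 0.2497) < 1e-3
assert abs(Adyn + 0.0134) < 1e-3
assert abs(Bdyn - 1.1837) < 1e-3
```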


Fall 2010 16.30/31 234

Cruise control design

A proportional controller would stabilize the closed-loop system.


Assume we want to maintain the commanded speed (cruise control): we need to add an integrator.
A PI controller will work, e.g.,

C(s) = 1.5 (s + 1)/s

Linear control/model response to hill at specified angle between 5 and
25 sec (figure omitted)


Fall 2010 16.30/31 235

Nonlinear Simulation
Check with BOTH linear AND nonlinear simulation

(Simulink diagram: the nonlinear car model, with engine map, Fg, Faero, and
Ffrict blocks feeding 1/m and an integrator for v, driven by the PI controller
1.5(s+1)/s through a saturation; simulated side-by-side with the linear model
Bdyn/(s - Adyn) under the same PI compensator, with both responses logged
to the workspace.)


Fall 2010 16.30/31 236

Review
(Jacobian) linearization:
- Find the desired equilibrium condition (state and control).
- Linearize the nonlinear model around the equilibrium.

Control design:
- Design a linear compensator for the linear model.
- If the linear system is closed-loop stable, so will be the nonlinear
system, in a neighborhood of the equilibrium.
- Check in a (nonlinear) simulation the robustness of your design
with respect to typical deviations.


Fall 2010 16.30/31 237

Effects of the saturation

What if the slope is a little steeper (say 4.85 degrees)?

What is wrong?

Systems experiencing integrator wind-up:
- Once the input saturates, the integral of the error keeps increasing.
- When the error decreases, the large integral value prevents the
controller from resuming normal operations quickly (the integral
error must decrease first!), so the response is delayed.

Idea: once the input saturates, stop integrating the error (can't do
much about it anyway!)


Fall 2010 16.30/31 238

Anti-windup Logic

One option is the following logic for the integral gain:

KI = KI if the input does not saturate;
KI = 0 if the input saturates

Another option is the following:
- Compare the actual input and the commanded input.
- If they are the same, the saturation is not in effect.
- Otherwise, reduce the integral error by a constant times the
difference:

uint = Ki ∫ ( e + Kaw ( sat(u) - u ) ) dt

With this choice, under saturation, the actuator/commanded difference
is fed back to the integrator so that eaw = sat(u) - u tends
to zero
- Implies that the controller output is kept close to the saturation limit.
- The controller output will then change as soon as the error changes
sign, thus avoiding the integral windup.
- If there is no saturation, the anti-windup scheme has no effect.
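The back-calculation scheme above can be sketched on a toy first-order plant (illustrative numbers of my own, not the car model from these notes). With Kaw = 0 the same function reduces to the plain PI loop, so one routine shows both behaviors:

```python
# PI with back-calculation anti-windup on ydot = -0.1*y + sat(u)
def simulate(Kaw, Kp=2.0, Ki=1.0, r=5.0, umax=1.0, dt=0.01, T=100.0):
    y, I, peak = 0.0, 0.0, 0.0
    for _ in range(int(T / dt)):
        e = r - y
        u = Kp * e + I                          # commanded input
        usat = max(-umax, min(umax, u))         # actual (saturated) input
        I += dt * Ki * (e + Kaw * (usat - u))   # Kaw = 0 -> plain PI integrator
        y += dt * (-0.1 * y + usat)
        peak = max(peak, y)
    return peak

overshoot_plain = simulate(Kaw=0.0) - 5.0
overshoot_aw    = simulate(Kaw=5.0) - 5.0
assert overshoot_plain > 0.5            # windup causes a large overshoot
assert overshoot_aw < overshoot_plain   # back-calculation reduces it
```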


Fall 2010 16.30/31 239

Anti-windup Scheme

(Simulink diagram: the same nonlinear and linear car models as before, with
the PI compensator split so that the difference sat(u) - u is fed back, through
the anti-windup gain, into the integrator path.)

Response to a 4.85 degree slope (figure omitted)

The anti-windup compensator avoids the velocity overshoot by preventing
the error buildup in the integral term


Fall 2010 16.30/31 2310

Anti-windup Summary

Anti-windup schemes:
- Guarantee the stability of the compensator when the (original)
feedback loop is effectively opened by the saturation.
- Prevent divergence of the integral error when the control cannot keep
up with the reference.
- Maintain the integral errors small.


Fall 2010 16.30/31 2311

Code: Car Setup


set(0,'DefaultAxesFontSize',12,'DefaultAxesFontWeight','demi')
set(0,'DefaultTextFontSize',12,'DefaultTextFontWeight','demi')
set(0,'DefaultAxesFontName','arial')
set(0,'DefaultAxesFontSize',12)
set(0,'DefaultTextFontName','arial')

clear all
%close all

% Parameters for defining the system dynamics
theta = 0;
gear = 5;
alpha = [40, 25, 16, 12, 10]; % gear ratios
Tm = 190;      % engine torque constant, Nm
wm = 420;      % peak torque rate, rad/sec
beta = 0.4;    % torque coefficient
Cr = 0.01;     % coefficient of rolling friction
rho = 1.2;     % density of air, kg/m^3
A = 2.4;       % car area, m^2
Cd = 0.79/A;   % drag coefficient
g = 9.8;       % gravitational constant
m = 1500;      % mass
v = 25;

% Compute the torque produced by the engine, Tm
omega = alpha(gear) * v; % engine speed
torque = Tm * ( 1 - beta * (omega/wm - 1)^2 );
F = alpha(gear) * torque;

% Compute the external forces on the vehicle
Fr = m * g * Cr;               % rolling friction
Fa = 0.5 * rho * Cd * A * v^2; % aerodynamic drag
Fg = m * g * sin(theta);       % road slope force
Fd = Fr + Fa + Fg;             % total deceleration force

ubar = (Fa+Fr)/F
vbar = v;

dTdv = Tm*(-2*beta*(alpha(gear)*vbar/wm - 1)*(alpha(gear)/wm))
Adyn = (alpha(gear)*dTdv*ubar - rho*Cd*A*vbar)/m;
Bdyn = F/m;

Hillangle = 4; % non-saturating angle
sim('cruise_control')
figure(4);
subplot(211);
plot(Car1(:,5),Car1(:,[2]),'LineWidth',2);xlabel('Time');ylabel('Speed')
title(['Hill response, angle=',num2str(Hillangle)])
subplot(212);
plot(Car1(:,5),Car1(:,[4]),'LineWidth',2);xlabel('Time');ylabel('Throttle')

figure(1);
subplot(211);
plot(Car1(:,5),Car1(:,[1 2]),'LineWidth',2);xlabel('Time');ylabel('Speed')
legend('NL','Lin','Location','SouthEast');
title(['Hill response, angle=',num2str(Hillangle)])
subplot(212);
plot(Car1(:,5),Car1(:,[3 4]),'LineWidth',2);xlabel('Time');ylabel('Throttle')

Antiwindup_gain = 0; % use same code with and without anti-windup
Hillangle = 4.85;
sim('cruise_control_awup')
figure(2);
subplot(211);
plot(Car2(:,5),Car2(:,[1 2]),'LineWidth',2);xlabel('Time');ylabel('Speed')
legend('NL','Lin','Location','SouthEast');
title(['Hill response, angle=',num2str(Hillangle)])
subplot(212);
plot(Car2(:,5),Car2(:,[3 4]),'LineWidth',2);xlabel('Time');ylabel('Throttle')

Car2_no_AW = Car2;

Antiwindup_gain = 5;
sim('cruise_control_awup')
figure(3);


subplot(211);
plot(Car2(:,5),Car2(:,[1 2]),'LineWidth',2);xlabel('Time');ylabel('Speed')
legend('NL','Lin','Location','SouthEast');
title(['Hill response, angle=',num2str(Hillangle),' With Antiwindup gain=',num2str(Antiwindup_gain)])
subplot(212);
plot(Car2(:,5),Car2(:,[3 4]),'LineWidth',2);xlabel('Time');ylabel('Throttle')

figure(5);
subplot(211);
plot(Car2_no_AW(:,5),Car2_no_AW(:,[6 7]),Car2_no_AW(:,5),10*Car2_no_AW(:,[8]),'LineWidth',2);xlabel('Time');ylabel('Windup error')
legend('sat(u)','u','10*e_{vel}','Location','NorthEast');
axis([4 20 -0.5 5])
title(['Hill response no AW'])
subplot(212);
plot(Car2(:,5),Car2(:,[6 7]),Car2(:,5),10*Car2(:,[8]),'LineWidth',2);xlabel('Time');ylabel('Windup error')
legend('sat(u)','u','10*e_{vel}','Location','NorthEast');
title(['Hill response with AW'])
axis([4 20 -0.5 5])

print -dpng -r300 -f1 AW1.png
print -dpng -r300 -f2 AW2.png
print -dpng -r300 -f3 AW3.png
print -dpng -r300 -f4 AW4.png
print -dpng -r300 -f5 AW5.png

MIT OpenCourseWare
http://ocw.mit.edu

16.30 / 16.31 Feedback Control Systems


Fall 2010

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
Topic #24

16.30/31 Feedback Control Systems

Closed-loop system analysis
Bounded Gain Theorem
Robust Stability
SISO Performance Objectives

Basic setup:

(Block diagram: reference r minus the measurement forms error e; controller Gc(s) produces u; input disturbance di enters before the plant G(s) and output disturbance do after it; sensor noise n corrupts the measured output y.)

Where the tracking error can be written as:

e = r − (y + n)
  = r − (do + G(di + Gc e) + n)
  = S(r − do − n) − SG di

y = T(r − n) + S do + SG di

with L = GGc, S = (I + L)^-1, T = L(I + L)^-1

For good tracking performance of r (typically low frequency), require e small:

|S(jω)| ≪ 1,  0 ≤ ω ≤ ω1

To reduce the impact of sensor noise n (typically high frequency), require:

|T(jω)| ≪ 1,  ω ≥ ω2

Since
T(s) + S(s) = I  ∀s
we cannot make both |S(jω)| and |T(jω)| small at the same frequencies ⇒ a fundamental design constraint.
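These relationships are easy to verify numerically. A small sketch (the loop L(s) = 10/(s(s + 2)) is an assumed example, not from the notes) that evaluates S and T on a frequency grid and confirms S + T = 1 pointwise:

```python
# Evaluate S = 1/(1+L) and T = L/(1+L) for an illustrative loop
# L(s) = G(s)Gc(s), and confirm the constraint S + T = 1.
def L(s):
    # assumed example loop: Gc = 10, G = 1/(s(s+2))
    return 10.0 / (s * (s + 2.0))

for w in [0.01, 0.1, 1.0, 10.0, 100.0]:
    s = 1j * w
    S = 1.0 / (1.0 + L(s))
    T = L(s) / (1.0 + L(s))
    assert abs(S + T - 1.0) < 1e-12       # fundamental constraint
    print(f"w={w:7.2f}  |S|={abs(S):.4f}  |T|={abs(T):.4f}")
```

At low frequency |S| ≪ 1 (good tracking) while |T| ≈ 1; at high frequency the roles reverse, illustrating why both cannot be small at the same frequencies.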

December 29, 2011


SISO Design Approaches

There are two basic approaches to design:

Indirect: works on L itself (classical control does this)
Direct: works on S and T

For the Indirect approach, note that:

If |L(jω)| ≫ 1: S = (1 + L)^-1 ≈ L^-1, so |S| ≪ 1 and |T| ≈ 1
If |L(jω)| ≪ 1: S = (1 + L)^-1 ≈ 1 and T ≈ L, so |T| ≪ 1

So we can convert the performance requirements on S, T into specifications on L:

(A) High loop gain ⇒ good command following & disturbance rejection
(B) Low loop gain ⇒ attenuation of sensor noise.

Of course, we must be careful when |L(jω)| ≈ 1: require that arg L ≠ ±180° to maintain stability.
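The two approximations above can be checked directly with a couple of assumed loop-gain values:

```python
# Check the loop-gain approximations S ~ 1/L (|L| >> 1) and T ~ L (|L| << 1).
for Lval in [100 + 50j, 0.01 - 0.005j]:     # illustrative loop-gain values
    S = 1.0 / (1.0 + Lval)
    T = Lval / (1.0 + Lval)
    if abs(Lval) > 1:
        # high loop gain: S ~ 1/L (small), T ~ 1
        assert abs(S - 1.0 / Lval) < 0.1 * abs(1.0 / Lval)
        assert abs(T - 1.0) < 0.1
    else:
        # low loop gain: S ~ 1, T ~ L (small)
        assert abs(S - 1.0) < 0.1
        assert abs(T - Lval) < 0.1 * abs(Lval)
print("loop-gain approximations verified")
```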


The Direct approach works with S and T. Since e = r − y, for perfect tracking we need e ≈ 0:

want S ≈ 0, since e = Sr + . . .

It is sufficient to discuss the magnitude of S because the only requirement is that it be small.

The Direct approach is to develop an upper bound for |S| and then test whether |S| is below this bound:

|S(jω)| < 1/|Ws(jω)| ?

or equivalently, whether |Ws(jω)S(jω)| < 1, ∀ω

Typically pick simple forms for the weighting functions (first or second order), and then cascade them as necessary. A basic one:

Ws(s) = (s/M + ωB) / (s + ωB A)

Fig. 1: Example of a standard performance weighting filter. Typically have A ≪ 1, M > 1, and |1/Ws| ≈ 1 at ωB.
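The properties in the caption can be verified numerically; a sketch using M = 5, A = 0.01, ωB = 5 (the values used in the example that follows):

```python
# Check the asymptotes of the performance weight Ws(s) = (s/M + wB)/(s + wB*A):
# |1/Ws| -> A at low frequency, -> M at high frequency, and ~1 near w = wB.
M, A, wB = 5.0, 0.01, 5.0      # example values from the notes

def Ws(s):
    return (s / M + wB) / (s + wB * A)

lo = 1.0 / abs(Ws(1j * 1e-4))   # low-frequency bound on |S|
mid = 1.0 / abs(Ws(1j * wB))    # bound near the crossover wB
hi = 1.0 / abs(Ws(1j * 1e4))    # high-frequency bound on |S|

assert abs(lo - A) < 1e-3       # tight tracking requirement at DC
assert 0.9 < mid < 1.1          # |1/Ws| ~ 1 at wB
assert abs(hi - M) < 1e-2       # peak |S| allowed at high frequency
print(lo, mid, hi)
```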


Example: Simple system with Gc = 1

G(s) = 150 / ((10s + 1)(0.05s + 1)^2)

Require ωB ≥ 5, a slope of 1, a low-frequency value less than A = 0.01, and a high-frequency peak less than M = 5:

Ws = (s/M + ωB) / (s + ωB A)

Fig. 2: Want |Ws S| < 1, so we just fail the test.
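A numerical version of this graphical test: sweep a log-spaced frequency grid and record the peak of |Ws(jω)S(jω)| (the grid range and spacing are arbitrary choices here):

```python
# Frequency-sweep check of |Ws(jw) S(jw)| for the example
# G = 150/((10s+1)(0.05s+1)^2), Gc = 1, with M = 5, A = 0.01, wB = 5.
M, A, wB = 5.0, 0.01, 5.0

def G(s):
    return 150.0 / ((10.0 * s + 1.0) * (0.05 * s + 1.0) ** 2)

def Ws(s):
    return (s / M + wB) / (s + wB * A)

peak, wpeak = 0.0, 0.0
w = 1e-2
while w < 1e3:
    s = 1j * w
    S = 1.0 / (1.0 + G(s))           # sensitivity with Gc = 1
    mag = abs(Ws(s) * S)
    if mag > peak:
        peak, wpeak = mag, w
    w *= 1.02                        # logarithmic grid
print(f"max |Ws S| ~ {peak:.3f} near w = {wpeak:.1f} rad/s")
```

The sweep peaks slightly above 1 near crossover, consistent with the marginal failure (γ = 1.02) quoted later in the notes.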

Graph testing is OK, but what we need is an analytical way of determining whether

|Ws(jω)S(jω)| < 1, ∀ω

This avoids a graphical/plotting test, which might be OK for analysis (a bit cumbersome) but is very hard to use for synthesis.


Bounded Gain

There exist very easy ways of testing (analytically) whether

|S(jω)| < γ, ∀ω

This is a critically important test for robustness.

SISO Bounded Gain Theorem: The gain of the generic stable system¹

ẋ(t) = Ax(t) + Bu(t)
y(t) = Cx(t) + Du(t)

is bounded in the sense that

Gmax = sup_ω |G(jω)| = sup_ω |C(jωI − A)^-1 B + D| < γ

if and only if (iff):

1. |D| < γ

2. The Hamiltonian matrix

H = [ A + B(γ²I − D^T D)^-1 D^T C           B(γ²I − D^T D)^-1 B^T            ]
    [ −C^T (I + D(γ²I − D^T D)^-1 D^T) C    −A^T − C^T D(γ²I − D^T D)^-1 B^T ]

has no eigenvalues on the imaginary axis.

¹ The MIMO result is very similar, but needs a different norm on the TFM of G(s)


Note that with D = 0, the Hamiltonian matrix is

H = [ A         γ^-2 BB^T ]
    [ −C^T C    −A^T      ]

Eigenvalues of this matrix are symmetric about the real and imaginary axes (related to the SRL).

So sup_ω |G(jω)| < γ iff H has no eigenvalues on the jω-axis.
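This eigenvalue test is easy to implement for the D = 0 case. A sketch using the assumed first-order example G(s) = 1/(s + 1), whose gain is known to be sup|G(jω)| = |G(0)| = 1:

```python
# Bounded gain test with D = 0: sup|G(jw)| < gamma iff the Hamiltonian
# H = [[A, BB'/gamma^2], [-C'C, -A']] has no imaginary-axis eigenvalues.
import numpy as np

def gain_less_than(A, B, C, gamma, tol=1e-9):
    H = np.block([[A, (B @ B.T) / gamma**2],
                  [-(C.T @ C), -A.T]])
    eigs = np.linalg.eigvals(H)
    return not np.any(np.abs(eigs.real) < tol)   # no jw-axis eigenvalues

# G(s) = 1/(s+1): A = -1, B = C = 1, so sup|G(jw)| = |G(0)| = 1
A = np.array([[-1.0]]); B = np.array([[1.0]]); C = np.array([[1.0]])

print(gain_less_than(A, B, C, gamma=1.1))   # True:  gain 1 < 1.1
print(gain_less_than(A, B, C, gamma=0.9))   # False: gain 1 > 0.9
```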

An equivalent test is whether there exists an X ≥ 0 such that

A^T X + XA + C^T C + γ^-2 XBB^T X = 0

and A + γ^-2 BB^T X is stable.

This is another Algebraic Riccati Equation (ARE), but there are some key differences from the LQR/LQE AREs.


Typical Application

The direct approach provides an upper bound for |S|, so we must test whether |S| is below this bound:

|S(jω)| < 1/|Ws(jω)| ?

or equivalently, whether |Ws(jω)S(jω)| < 1, ∀ω

Pick simple forms for the weighting functions (first or second order), and then cascade them as necessary. A basic one:

Ws(s) = (s/M + ωB) / (s + ωB A)

Thus we can test whether |Ws(jω)S(jω)| < 1, ∀ω by:

1. Forming a state-space model of the combined system Ws(s)S(s)
2. Using the bounded gain theorem with γ = 1
3. Typically using a bisection search on γ to find |Ws(jω)S(jω)|max

For our simple example system

G(s) = 150 / ((10s + 1)(0.05s + 1)^2),  Gc = 1

require ωB ≥ 5, a slope of 1, a low-frequency value less than A = 0.01, and a high-frequency peak less than M = 5.

In this case γ = 1.02, so we just fail the test, consistent with the graphical test.


Issues

Note that it is actually not easy to find Gmax directly using the state-space techniques:

It is easy to check if Gmax < γ
So we just keep changing γ to find the smallest value for which we can show that Gmax < γ (called γmin)
⇒ Bisection search algorithm.

Bisection search algorithm (see web):


1. Select γu, γl so that γl ≤ Gmax ≤ γu

2. Test (γu − γl)/γl < TOL?
   Yes ⇒ Stop (Gmax ≈ ½(γu + γl))
   No ⇒ go to step 3.

3. With γ = ½(γl + γu), test whether Gmax < γ using λi(H)

4. If λi(H) ∈ jR, then set γl = γ (test value too low); otherwise set γu = γ and go to step 2.

This is the basis of H∞ control theory.
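The four steps above can be sketched directly, reusing the D = 0 Hamiltonian test; applied to the assumed example G(s) = 1/(s + 1), the bisection converges to the known gain of 1.

```python
# Bisection search for Gmax = sup|G(jw)| using the Hamiltonian test
# (D = 0 case): Gmax < gamma iff H has no imaginary-axis eigenvalues.
import numpy as np

def gain_less_than(A, B, C, gamma, tol=1e-9):
    H = np.block([[A, (B @ B.T) / gamma**2],
                  [-(C.T @ C), -A.T]])
    return not np.any(np.abs(np.linalg.eigvals(H).real) < tol)

def hinf_norm(A, B, C, gl=1e-4, gu=1e4, TOL=1e-4):
    # 1. assume gl <= Gmax <= gu; shrink the bracket by bisection
    while (gu - gl) / gl > TOL:           # 2. relative tolerance test
        gamma = 0.5 * (gl + gu)           # 3. test the midpoint
        if gain_less_than(A, B, C, gamma):
            gu = gamma                    # test value high enough
        else:
            gl = gamma                    # 4. test value too low
    return 0.5 * (gl + gu)

# G(s) = 1/(s+1) has sup|G(jw)| = 1
A = np.array([[-1.0]]); B = np.array([[1.0]]); C = np.array([[1.0]])
print(hinf_norm(A, B, C))   # prints a value close to 1.0
```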


Appendix: Sketch of Proof

Sufficiency: consider γ = 1, and assume D = 0 for simplicity.

Now analyze the properties of this special SISO closed-loop system:

(Block diagram: r enters a summing junction whose output u drives G(s); the output y drives the adjoint system G~(s) = G^T(−s), whose output y2 is fed back positively to the summing junction.)

G(s) := [ A  B ]      G~(s) = G^T(−s) := [ −A^T  −C^T ]
        [ C  0 ]                         [  B^T    0  ]

Note that
u/r = S(s) = [1 − G~G]^-1

Now find the state-space representation of S(s):

ẋ1 = Ax1 + B(r + y2) = Ax1 + BB^T x2 + Br
ẋ2 = −A^T x2 − C^T y = −A^T x2 − C^T C x1
u = r + B^T x2

[ ẋ1 ]   [  A       BB^T ] [ x1 ]   [ B ]
[ ẋ2 ] = [ −C^T C   −A^T ] [ x2 ] + [ 0 ] r

u = [ 0  B^T ] [ x1 ; x2 ] + r

⇒ the poles of S(s) are contained in the eigenvalues of the matrix H.


Now assume that H has no eigenvalues on the jω-axis. Then:

S = [I − G~G]^-1 has no poles there
⇒ I − G~G has no zeros there

So I − G~G has no zeros on the jω-axis, and we also know that I − G~G → I > 0 as ω → ∞ (since D = 0).

Together, these imply that

I − G*G = I − G^T(−jω)G(jω) > 0  ∀ω

For a SISO system, the condition I − G*G > 0 is equivalent to

|G(jω)| < 1  ∀ω

which is true iff

Gmax = max_ω |G(jω)| < 1

We can use state-space tools to test whether a generic system has gain less than 1, and can easily re-do this analysis to include the bound γ.
