
ME 132, Dynamic Systems and Feedback

Class Notes

Andrew Packard, Roberto Horowitz, Kameshwar Poolla, Francesco Borrelli

Fall 2017

Instructor:

Andy Packard

Department of Mechanical Engineering


University of California
Berkeley CA, 94720-1740

copyright 1995-2017 Packard, Horowitz, Poolla, Borrelli


ME 132, Fall 2017, UC Berkeley, A. Packard i

Contents

1 Introduction
  1.1 Structure of a closed-loop control system
  1.2 Example: Temperature Control in Shower
  1.3 Problems

2 Block Diagrams
  2.1 Common blocks, continuous-time
  2.2 Example
  2.3 Summary
  2.4 Problems

3 Mathematical Modeling and Simulation
  3.1 Systems of 1st order, coupled differential equations
  3.2 Remarks about Integration Options in Simulink
  3.3 Problems

4 State Variables
  4.1 Definition of State
  4.2 State-variables: from first order evolution equations
  4.3 State-variables: from a block diagram
  4.4 Problems

5 First Order, Linear, Time-Invariant (LTI) ODE
  5.1 The Big Picture
  5.2 Solution of a First Order LTI ODE
    5.2.1 Free response
    5.2.2 Forced response, constant inputs
    5.2.3 Forced response, bounded inputs
    5.2.4 Stable system, Forced response, input approaching 0
    5.2.5 Linearity
    5.2.6 Forced response, input approaching a constant limit
  5.3 Forced response, Sinusoidal inputs
    5.3.1 Forced response, input approaching a Sinusoid
  5.4 First-order delay-differential equation: Stability
  5.5 Summary
  5.6 Problems

1 Introduction

In this course we will learn how to analyze, simulate and design automatic control
strategies (called control systems) for various engineering systems.

The system whose behavior is to be controlled is called the plant. This term has its origins
in chemical engineering where the control of chemical plants or factories is of concern. On
occasion, we will also use the terms plant, process and system interchangeably. As a simple
example which we will study soon, consider an automobile as the plant, where the speed of
the vehicle is to be controlled, using a control system called a cruise control system.

The plant is subjected to external influences, called inputs, which through a cause/effect
relationship influence the plant's behavior. The plant's behavior is quantified by the values
of several internal quantities, often observable, called plant outputs.

Initially, we divide these external influences into two groups: those that we, the owner/operator
of the system, can manipulate, called control inputs; and those that some other ex-
ternality (nature, another operator, normally thought of as antagonistic to us) manipulates,
called disturbance inputs.

By control, we mean the manipulation of the control inputs in a way to make the plant
outputs respond in a desirable manner.

The strategy and/or rule by which the control input is adjusted is known as the control law,
or control strategy, and the physical manner in which this is implemented (computer with a
real-time operating system; analog circuitry, human intervention, etc.) is called the control
system.

The most basic objective of a control system is the automatic regulation (referred to as
tracking) of certain variables in the controlled plant to desired values (or trajectories), in
the presence of typical, but unforeseen, disturbances.

An important distinguishing characteristic of a control strategy is whether it is open-loop
or closed-loop. This course is mostly (completely) about closed-loop control systems, which
are often called feedback control systems.

Open-loop control systems: In an open-loop strategy, the values of the control
input (as a function of time) are decided ahead-of-time, and then this input is applied
to the system. For example, "bake the cookies for 8 minutes, at 350°F." The design
of the open-loop controller is based on inversion and/or calibration. While open-loop
systems are simple, they generally rely totally on calibration, and cannot effectively
deal with exogenous disturbances. Moreover, they cannot effectively deal with changes
in the plant's behavior, due to various effects such as aging components; they require
re-calibration. Essentially, they cannot deal with uncertainty. Another disadvantage
of open-loop control systems is that they cannot stabilize an unstable system, such as
balancing a rocket in the early stages of liftoff (in control terminology, this is referred
to as an inverted pendulum).

Closed-loop control systems: In order to make the plant's output/behavior more
robust to uncertainty and disturbances, we design control systems which continuously
sense (measure) the output of the plant (note that this is an added complexity that
is not present in an open-loop system), and adjust the control input using rules
which are based on how the current (and past) values of the plant output deviate
from its desired value. These feedback rules (or strategy) are usually based on a
model (i.e., a mathematical description) of how the plant behaves. If the plant behaves
slightly differently than the model predicts, it is often the case that the feedback helps
compensate for these differences. However, if the plant actually behaves significantly
differently than the model, then the feedback strategy might be unsuitable, and may
cause instability. This is a drawback of feedback systems.

In reality, most control systems are a combination of open and closed-loop strategies. In
the trivial cookie example above, the instructions look predominantly open-loop, though
something must stop the baking after 8 minutes, and temperature in the oven should be
maintained at (or near) 350°F. Of course, the instructions for cooking would be even more
closed-loop in practice, for example, "bake the cookies at 350°F for 8 minutes, or until
golden-brown." Here, the "until golden-brown" indicates that you, the baker, must act as
a feedback system, continuously monitoring the color of the dough, and removing it from
heat when the color reaches a prescribed value.

In any case, ME 132 focuses most of its attention on the issues that arise in closed-loop
systems, and the benefits and drawbacks of systems that deliberately use feedback to alter
the behavior/characteristics of the process being controlled.

Some examples of systems which benefit from well-designed control systems are:

- Airplanes, helicopters, rockets, missiles: flight control systems, including autopilot and
  pilot augmentation
- Cruise control for automobiles; lateral/steering control systems for future automated
  highway systems
- Position and speed control of mechanical systems:
  1. AC and/or DC motors, for machines, including disk drives/CD players, robotic
     manipulators, assembly lines
  2. Elevators
  3. Magnetic bearings, MAGLEV vehicles, etc.
- Pointing control (telescopes)
- Chemical and manufacturing process control: temperature; pressure; flow rate; con-
  centration of a chemical; moisture content; thickness

Of course, these are just examples of systems that we have built, and that are usually thought
of as physical systems. There are other systems we have built which are not physical in
the same way, but still use (and benefit from) control, and additional examples occur
naturally (living things). Some examples are:

- The internet, whereby the routing of packets through communication links is controlled
  using a congestion control algorithm.
- The air traffic control system, where the real-time trajectories of aircraft are controlled
  by a large network of computers and human operators. Of course, if you look in-
  side, you see that the actual trajectories of the aircraft are controlled by pilots and
  autopilots, receiving instructions from the Air Traffic Control System. And if you
  look inside again, you see that the actual trajectories of the aircraft are affected by
  forces/moments from the engine(s), and the deflections of movable surfaces (ailerons,
  rudders, elevators, etc.) on the airframe, which are receiving instructions from the
  pilot and/or autopilot.
- An economic system of a society, which may be controlled by the federal reserve (setting
  the prime interest rate) and by regulators, who set up rules by which all agents in
  the system must abide. The general goal of the rules is to promote growth and wealth-
  building.
- All the numerous regulatory systems within your body, both at organ level and cellular
  level, and in between.

A key realization is that most of the systems (i.e., the plants) that we will attempt
to model and control are dynamic. We will later develop a formal definition of a dynamic
system. However, for the moment it suffices to say that dynamic systems have memory, i.e.,
the current values of all variables of the system are generally functions of previous inputs, as
well as the current input to the system. For example, the velocity of a mass particle at time
$t$ depends on the forces applied to the particle for all times before $t$. In the general case, this
means that current control actions have impact both at the current time (i.e., when they
are applied) and in the future as well, so that actions taken now have later consequences.
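To make the memory idea concrete, here is a small sketch (my own illustration, with hypothetical numbers, not from the notes): a discretized (Euler) version of Newton's law for a mass, in which the current velocity depends on the entire past history of applied forces, not just the current force.

```python
def velocity(forces, dt=0.1, m=1.0, v0=0.0):
    """Final velocity of a mass after a sequence of force samples.

    Discrete (Euler) version of Newton's law m*dv/dt = F: every past
    force sample is accumulated into v, so the system has memory.
    """
    v = v0
    for f in forces:
        v += (dt / m) * f
    return v

# Two force histories that end with the same force value...
v_a = velocity([0.0, 0.0, 1.0])
v_b = velocity([5.0, 5.0, 1.0])
# ...leave the mass at different velocities: the past matters.
```

Here `velocity` behaves as a dynamic system in the sense above; a static block (say, a pure gain) would depend only on the current input value.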

1.1 Structure of a closed-loop control system

The general structure of a closed-loop system, including the plant and control law (and other
components) is shown in Figure 1.

A sensor is a device that measures a physical quantity like pressure, acceleration, humidity,
or chemical concentration. Very often, in modern engineering systems, sensors produce an
electrical signal whose voltage is proportional to the physical quantity being measured. This
is very convenient, because these signals can be readily processed with electronics, or can be
stored on a computer for analysis or for real-time processing.

An actuator is a device that has the capacity to affect the behavior of the plant. In many
common examples in aerospace/mechanical systems, an electrical signal is applied to the
actuator, which results in some mechanical motion such as the opening of a valve, or the
motion of a motor, which in turn induces changes in the plant dynamics. Sometimes (for
example, electrical heating coils in a furnace) the applied voltage directly affects the plant
behavior without mechanical motion being involved.

The controlled variables are the physical quantities we are interested in controlling and/or
regulating.

The reference or command is a signal that represents how we would like the regulated
variable to behave.

Disturbances are phenomena that affect the behavior of the plant being controlled. Distur-
bances are often induced by the environment, and often cannot be predicted in advance or
measured directly.

The controller is a device that processes the measured signals from the sensors and the
reference signals and generates the actuated signals which in turn, affects the behavior of
the plant. Controllers are essentially strategies that prescribe how to process sensed signals
and reference signals in order to generate the actuator inputs.

Finally, noises are present at various points in the overall system. We will have some amount
of measurement noise (which captures the inaccuracies of sensor readings), actuator noise
(due for example to the power electronics that drives the actuators), and even noise affecting
the controller itself (due to quantization errors in a digital implementation of the control
algorithm). Note that the sensor is a physical device in its own right, and also subject to
external disturbances from the environment. This causes its output, the sensor reading, to
generally be different from the actual value of the physical variable the sensor is sensing.
While this difference is usually referred to as noise, it is really just an additional disturbance
that acts on the overall plant.

Throughout these notes, we will attempt to consistently use the following symbols (never-
theless, you need to be flexible and open-minded about ever-changing notation):

P  plant         K  controller
u  input         y  output
d  disturbance   n  noise
r  reference

Arrows in our block diagrams always indicate cause/effect relationships, and not necessarily
the flow of material (fluid, electrons, etc.). Power supplies and material supplies may not
be shown, so normal conservation laws do not necessarily hold for the block diagram.
Based on our discussion above, we can draw the block diagram of Figure 1 that reveals the
structure of many control systems. Again, the essential idea is that the controller processes
measurements together with the reference signal to produce the actuator input u(t). In this
way, the plant dynamics are continually adjusted so as to meet the objective of having the
plant outputs y(t) track the reference signal r(t).

[Figure 1: a loop of blocks, Controller → Actuators → Plant → Sensors, with the sensor
output fed back to the Controller. Disturbances enter the Plant, Actuator noise enters at
the actuators, Measurement noise enters at the sensors, and Commands and Controller
noise enter the Controller.]

Figure 1: Basic structure of a control system.

1.2 Example: Temperature Control in Shower

A simple, slightly unrealistic example of some important issues in control systems is the
problem of temperature control in a shower. As Professor Poolla tells it: "Every morning
I wake up and have a shower. I live in North Berkeley, where the housing is somewhat
run-down, but I suspect the situation is the same everywhere. My shower is very basic. It
has hot and cold water taps that are not calibrated. So I can't exactly preset the shower
temperature that I desire, and then just step in. Instead, I am forced to use feedback control.
I stick my hand in the shower to measure the temperature. In my brain, I have an idea of
what shower temperature I would like. I then adjust the hot and cold water taps based on
the discrepancy between what I measure and what I want. In fact, it is possible to set the
shower temperature to within 0.5°F this way using feedback control. Moreover, using
feedback, I (being the sensor and the compensatory strategy) can compensate for all sorts
of changes: environmental changes, toilets flushing, etc."

This is the power of feedback: it allows us, with accurate sensors, to make a precision device
out of a crude one that works well even in changing environments.

Let's analyze this situation in more detail. The components which make up the plant in the
shower are:

- Hot water supply (constant temperature, $T_H$)
- Cold water supply (constant temperature, $T_C$)
- An adjustable valve that mixes the two; use $\alpha$ to denote the angle of the valve, with
  $\alpha = 0$ meaning equal amounts of hot and cold water mixing. In the units chosen,
  assume that $-1 \le \alpha \le 1$ always holds.
- 1 meter (or so) of piping from valve to shower head

If we assume perfect mixing, then the temperature of the water just past the valve is

$$T_v(t) := \frac{T_H + T_C}{2} + \frac{T_H - T_C}{2}\,\alpha(t) = c_1 + c_2\,\alpha(t)$$

The temperature of the water hitting your skin is the same (roughly) as at the valve, but
there is a time-delay based on the fact that the fluid has to traverse the piping, hence

$$T(t) = T_v(t - \tau) = c_1 + c_2\,\alpha(t - \tau)$$

where $\tau$ is the time delay, about 1 second.

Let's assume that the valve position only gets adjusted at regular increments, every $\tau$
seconds. Similarly, let's assume that we are only interested in the temperature at those
instants as well. Hence, we can use a discrete notion of time, indexed by a subscript $k$, so
that for any signal $v(t)$, write

$$v_k := v(t)|_{t = k\tau}$$

In this notation, the model for the Temperature/Valve relationship is

$$T_k = c_1 + c_2\,\alpha_{k-1} \qquad (1.1)$$

Now, taking a shower, you have (in mind) a desired temperature, $T_{des}$, which may even be
a function of time, $T_{des,k}$. How can the valve be adjusted so that the shower temperature
approaches this?

Open-loop control: pre-solve for what the valve position should be, giving

$$\alpha_k = \frac{T_{des,k} - c_1}{c_2} \qquad (1.2)$$

and use this to, in effect, calibrate the valve position for desired temperature. This gives

$$T_k = T_{des,k-1}$$

which seems good, as you achieve the desired temperature one time-step after specifying
it. However, if $c_1$ and/or $c_2$ change (hot or cold water supply temperature changes, or the
valve gets a bit clogged), there is no way for the calibration to change. If the plant behavior
changes to

$$T_k = \tilde{c}_1 + \tilde{c}_2\,\alpha_{k-1} \qquad (1.3)$$

but the control behavior remains as (1.2), the overall behavior is

$$T_{k+1} = \tilde{c}_1 + \frac{\tilde{c}_2}{c_2}\left(T_{des,k} - c_1\right)$$

which isn't so good. Any percentage variation in $c_2$ is translated into a similar percentage
error in the achieved temperature.
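This open-loop sensitivity is easy to check numerically. The sketch below uses illustrative constants of my own choosing (not values from the notes) for the calibration (1.2) applied to the perturbed plant (1.3):

```python
def open_loop_temp(T_des, c1=35.0, c2=15.0, c1_true=None, c2_true=None):
    """Achieved temperature when the valve is calibrated using the
    nominal (c1, c2) but the true plant is T = c1_true + c2_true*alpha.
    The constants are illustrative, not values from the notes."""
    c1_true = c1 if c1_true is None else c1_true
    c2_true = c2 if c2_true is None else c2_true
    alpha = (T_des - c1) / c2           # open-loop calibration, eq. (1.2)
    return c1_true + c2_true * alpha    # perturbed plant, eq. (1.3)

# Nominal plant: the calibration is exact.
exact = open_loop_temp(40.0)
# 10% drop in c2: the commanded 5-degree rise above c1 is 10% short.
off = open_loop_temp(40.0, c2_true=13.5)
```

With the nominal plant, `exact` is the requested 40 degrees; with `c2_true` 10% low, the 5-degree rise above `c1` comes out 0.5 degrees (10%) short, exactly the percentage-error behavior described above.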

How do you actually control the temperature when you take a shower? Again, the behavior
of the shower system is

$$T_{k+1} = c_1 + c_2\,\alpha_k$$

Closed-loop strategy: If at time $k$, there is a deviation in desired/actual temperature
of $T_{des,k} - T_k$, then since the temperature changes $c_2$ units for every unit change in $\alpha$, the
valve angle should be increased by an amount $\frac{1}{c_2}(T_{des,k} - T_k)$. That might be too aggressive,
trying to completely correct the discrepancy in one step, so choose a number $\gamma$, $0 < \gamma < 1$,
and try

$$\alpha_k = \alpha_{k-1} + \frac{\gamma}{c_2}\left(T_{des,k} - T_k\right) \qquad (1.4)$$

(of course, $\alpha$ is limited to lie between $-1$ and $1$, so the strategy should be written in a more
complicated manner to account for that; for simplicity we ignore this issue here, and return
to it later in the course). Substituting for $\alpha_k$ gives

$$\frac{1}{c_2}\left(T_{k+1} - c_1\right) = \frac{1}{c_2}\left(T_k - c_1\right) + \frac{\gamma}{c_2}\left(T_{des,k} - T_k\right)$$

which simplifies down to

$$T_{k+1} = (1 - \gamma)\,T_k + \gamma\,T_{des,k}$$

Starting from some initial temperature $T_0$, we have

$$T_1 = (1-\gamma)T_0 + \gamma T_{des,0}$$
$$T_2 = (1-\gamma)T_1 + \gamma T_{des,1} = (1-\gamma)^2 T_0 + (1-\gamma)\gamma T_{des,0} + \gamma T_{des,1}$$
$$\vdots$$
$$T_k = (1-\gamma)^k T_0 + \gamma \sum_{n=0}^{k-1} (1-\gamma)^n\, T_{des,k-1-n}$$

If $T_{des,n}$ is a constant, $\bar{T}$, then the summation simplifies to

$$T_k = (1-\gamma)^k T_0 + \left[1 - (1-\gamma)^k\right]\bar{T} = \bar{T} + (1-\gamma)^k\left(T_0 - \bar{T}\right)$$

which shows that, in fact, as long as $0 < \gamma < 2$, the temperature converges (with convergence
rate determined by $\gamma$) to the desired temperature.
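This closed-form behavior can be checked numerically. The sketch below (with illustrative temperatures of my own choosing) iterates the closed-loop recursion and compares it with the formula:

```python
def simulate(T0, T_bar, gamma, steps):
    """Iterate the closed-loop recursion T_{k+1} = (1-gamma)*T_k + gamma*T_bar."""
    T = T0
    for _ in range(steps):
        T = (1.0 - gamma) * T + gamma * T_bar
    return T

def closed_form(T0, T_bar, gamma, k):
    """Closed-form solution T_k = T_bar + (1-gamma)**k * (T0 - T_bar)."""
    return T_bar + (1.0 - gamma) ** k * (T0 - T_bar)

# Start cold at 15, ask for 40, with gamma = 0.3: the recursion and the
# closed form agree, and T_k is essentially 40 after 20 steps.
sim = simulate(15.0, 40.0, 0.3, 20)
cf = closed_form(15.0, 40.0, 0.3, 20)
```

Any $\gamma$ with $0 < \gamma < 2$ converges, since $|1-\gamma| < 1$; values of $\gamma$ above 1 still converge, but overshoot on alternate steps.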

Assuming your strategy remains fixed, how do unknown variations in $T_H$ and $T_C$ affect the
performance of the system? The shower model changes to (1.3), giving

$$T_{k+1} = \left(1 - \tilde{\gamma}\right) T_k + \tilde{\gamma}\, T_{des,k}$$

where $\tilde{\gamma} := \gamma\,\tilde{c}_2/c_2$. Hence, the deviation in $c_1$ has no effect on the closed-loop system, and the
deviation in $c_2$ only causes a similar percentage variation in the effective value of $\gamma$. As long
as $0 < \tilde{\gamma} < 2$, the overall behavior of the system is acceptable. This is good, and shows
that small unknown variations in the plant are essentially completely compensated for by
the feedback system.

On the other hand, large, unexpected deviations in the behavior of the plant can cause
problems for a feedback system. Suppose that you maintain the strategy in equation (1.4),
but there is a longer time-delay than you realize? Specifically, suppose that there is extra
piping, so that the time delay is not just $\tau$, but $m\tau$. Then, the shower model is

$$T_{k+m-1} = c_1 + c_2\,\alpha_{k-1} \qquad (1.5)$$

and the strategy (from equation (1.4)) is $\alpha_k = \alpha_{k-1} + \frac{\gamma}{c_2}(T_{des,k} - T_k)$. Combining gives

$$T_{k+m} = T_{k+m-1} + \gamma\left(T_{des,k} - T_k\right)$$

This has some very undesirable behavior, which is explored in problem 5 at the end of the
section.
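As a preview of that undesirable behavior, the sketch below (illustrative values of my own choosing) iterates the delayed recursion; a gain that is comfortably stable for $m = 1$ can destabilize the loop when $m = 2$:

```python
def shower_with_delay(gamma, m, steps, T_init, T_des):
    """Iterate T_{k+m} = T_{k+m-1} + gamma*(T_des - T_k).

    T_init supplies the first m temperature values T_0 .. T_{m-1}.
    """
    T = list(T_init)
    for k in range(steps):
        T.append(T[k + m - 1] + gamma * (T_des - T[k]))
    return T

# gamma = 1.5 is fine for m = 1 (inside 0 < gamma < 2) ...
ok = shower_with_delay(1.5, 1, 60, [15.0], 40.0)
# ... but with one extra step of delay (m = 2) the temperature
# oscillates with growing amplitude.
bad = shower_with_delay(1.5, 2, 60, [15.0, 15.0], 40.0)
```

The same unmodified strategy, applied to a plant with more delay than the model assumed, turns a well-behaved loop into an unstable one.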

1.3 Problems

1. For any $\beta \in \mathbb{C}$ and integer $N$, consider the summation

$$S := \sum_{k=0}^{N} \beta^k$$

If $\beta = 1$, show that $S = N + 1$. If $\beta \neq 1$, show that

$$S = \frac{1 - \beta^{N+1}}{1 - \beta}$$

If $|\beta| < 1$, show that

$$\sum_{k=0}^{\infty} \beta^k = \frac{1}{1 - \beta}$$

2. Consider the equations relating variables $r$, $e$, $y$, $n$, $u$ and $d$. Assume $P$ and $C$ are given
numbers.

$$e = r - (y + n), \qquad u = C e, \qquad y = P(u + d)$$

So, this represents 3 linear equations in 6 unknowns. Solve these equations, expressing
$e$, $u$ and $y$ as linear functions of $r$, $d$ and $n$. The linear relationships will involve the
numbers $P$ and $C$.

3. For a function $F$ of many variables (say two, for this problem, labeled $x$ and $y$), the
sensitivity of $F$ to $x$ is defined as the ratio of the percentage change in $F$ to the
percentage change in $x$. Denote this by $S_x^F$.

(a) Suppose $x$ changes by $\delta$, to $x + \delta$. The percentage change in $x$ is then

$$\%\text{ change in } x = \frac{(x+\delta) - x}{x} = \frac{\delta}{x}$$

Likewise, the subsequent percentage change in $F$ is

$$\%\text{ change in } F = \frac{F(x+\delta, y) - F(x, y)}{F(x, y)}$$

Show that for infinitesimal changes in $x$, the sensitivity is

$$S_x^F = \frac{x}{F(x,y)}\,\frac{\partial F}{\partial x}$$

(b) Let $F(x,y) = \frac{xy}{1+xy}$. What is $S_x^F$?

(c) If $x = 5$ and $y = 6$, then $\frac{xy}{1+xy} \approx 0.968$. If $x$ changes by 10%, using the quantity
$S_x^F$ derived in part (b), approximately what percentage change will the quantity
$\frac{xy}{1+xy}$ undergo?

(d) Let $F(x,y) = \frac{1}{xy}$. What is $S_x^F$?

(e) Let $F(x,y) = xy$. What is $S_x^F$?

4. Consider the difference equation

$$p_{k+1} = a\,p_k + b\,u_k \qquad (1.6)$$

with the following parameter values, initial condition and terminal condition:

$$a = 1 + \frac{R}{12}, \quad b = -1, \quad u_k = M \text{ for all } k, \quad p_0 = L, \quad p_{360} = 0 \qquad (1.7)$$

where $R$, $M$ and $L$ are constants.

(a) In order for the terminal condition to be satisfied ($p_{360} = 0$), the quantities $R$, $M$
and $L$ must be related. Find that relation. Express $M$ as a function of $R$ and $L$,
$M = f(R, L)$.

(b) Is $M$ a linear function of $L$ (with $R$ fixed)? If so, express the relation as $M =
g(R)L$, where $g$ is a function you can calculate.

(c) Note that the function $g$ is not a linear function of $R$. Calculate

$$\left.\frac{dg}{dR}\right|_{R=0.065}$$

(d) Plot $g(R)$ and a linear approximation, defined below

$$g_l(R) := g(0.065) + \left.\frac{dg}{dR}\right|_{R=0.065}\left[R - 0.065\right]$$

for $R$ in the range 0.01 to 0.2. Is the linear approximation relatively accurate in
the range 0.055 to 0.075?

(e) On a 30-year home loan of \$400,000, what is the monthly payment, assuming
an annual interest rate of 3.75%? Hint: The amount owed on a fixed-interest-
rate mortgage from month to month is represented by the difference equation in
equation (1.6). The interest is compounded monthly. The parameters in (1.7) all
have appropriate interpretations.

5. Consider the shower example. Suppose that there is extra delay in the shower's re-
sponse, but that your strategy is not modified to take this into account. We derived
that the equation governing the closed-loop system is

$$T_{k+m} = T_{k+m-1} + \gamma\left(T_{des,k} - T_k\right)$$

where the time-delay from the water passing through the mixing valve to the water
touching your skin is $m\tau$. Using calculators, spreadsheets, computers (and/or graphs)
or analytic formulas you can derive, determine the values of $\gamma$ for which the system
is stable for the following cases: (a) $m = 2$, (b) $m = 3$, (c) $m = 5$. Remark
1: Remember, for $m = 1$, the allowable range for $\gamma$ is $0 < \gamma < 2$. Hint: For
a first attempt, assume that the water in the piping at $k = 0$ is all cold, so that
$T_0, T_1, \ldots, T_{m-1} = T_C$, and that $T_{des,k} = \frac{1}{2}(T_H + T_C)$. Compute, via the formula, $T_k$
for $k = 0, 1, \ldots, 100$ (say), and plot the result.

6. In this class, we will deal with differential equations having real coefficients, and real
initial conditions, and hence, real solutions. Nevertheless, it will be useful to use
complex numbers in certain calculations, simplifying notation, and allowing us to write
only 1 equation when there are actually two. Let $j$ denote $\sqrt{-1}$. Recall that if $\xi$ is a
complex number, then $|\xi| = \sqrt{\xi_R^2 + \xi_I^2}$, where $\xi_R := \mathrm{Real}(\xi)$ and $\xi_I := \mathrm{Imag}(\xi)$, and

$$\xi = \xi_R + j\,\xi_I$$

and $\xi_R$ and $\xi_I$ are real numbers. If $\xi \neq 0$, then the angle of $\xi$, denoted $\angle\xi$, satisfies

$$\cos\angle\xi = \frac{\xi_R}{|\xi|}, \qquad \sin\angle\xi = \frac{\xi_I}{|\xi|}$$

and is uniquely determinable from $\xi$ (only to within additive factors of $2\pi$).

(a) Draw a 2-d picture (horizontal axis for Real part, vertical axis for Imaginary part).

(b) Suppose $A$ and $B$ are complex numbers. Using the numerical definitions above,
carefully derive that

$$|AB| = |A|\,|B|, \qquad \angle(AB) = \angle A + \angle B$$

7. (a) Given real numbers $\theta_1$ and $\theta_2$, using basic trigonometry, show that

$$\left[\cos\theta_1 + j\sin\theta_1\right]\left[\cos\theta_2 + j\sin\theta_2\right] = \cos(\theta_1 + \theta_2) + j\sin(\theta_1 + \theta_2)$$

(b) How is this related to the identity

$$e^{j\theta} = \cos\theta + j\sin\theta \; ?$$

8. Given a complex number $G$, and a real number $\theta$, show that (here, $j := \sqrt{-1}$)

$$\mathrm{Re}\left(G e^{j\theta}\right) = |G| \cos\left(\theta + \angle G\right)$$

9. Given a real number $\omega$, and real numbers $A$ and $B$, show that

$$A\sin\omega t + B\cos\omega t = \left(A^2 + B^2\right)^{1/2}\sin\left(\omega t + \phi\right)$$

for all $t$, where $\phi$ is an angle that satisfies

$$\cos\phi = \frac{A}{\left(A^2 + B^2\right)^{1/2}}, \qquad \sin\phi = \frac{B}{\left(A^2 + B^2\right)^{1/2}}$$

Note: you can only determine $\phi$ to within an additive factor of $2\pi$. How are these
conditions different from saying just

$$\tan\phi = \frac{B}{A} \; ?$$

10. Draw the block diagram for temperature control in a refrigerator. What disturbances
are present in this problem?

11. Take a look at the four journals:

- IEEE Control Systems Magazine
- IEEE Transactions on Control System Technology
- ASME Journal of Dynamic Systems, Measurement and Control
- AIAA Journal on Guidance, Navigation and Control

All are in the Engineering Library. They are available on the web, and if you are
working from a UC Berkeley computer, you can access the articles for free (see the
UCB library webpage for instructions to configure your web browser at home with the
correct proxy so that you can access the articles from home as well, as needed).

(a) Find 3 articles that have titles which interest you. Make a list of the title, first
author, and journal/vol/date/page information.

(b) Look at the articles informally. Based on that, pick one article, and attempt to
read it more carefully, skipping over the mathematics that we have not covered
(which may be a lot/most of the paper). Focus on understanding the problem
being formulated, and try to connect it to what we have discussed.

(c) Regarding the paper's Introduction section, describe the aspects that interest you.
Highlight or mark these sentences.

(d) In the body of the paper, mark/highlight figures, paragraphs, or parts of para-
graphs that make sense. Look specifically for graphs of signal responses, and/or
block diagrams.

(e) Write a 1-paragraph summary of the paper.

Turn in the paper with marks/highlights, as well as the title information of the other
papers, and your short summary.

2 Block Diagrams

In this section, we introduce some block-diagram notation that is used throughout the course,
and common to control system grammar.

2.1 Common blocks, continuous-time

The names, appearance and mathematical meaning of a handful of blocks that we will use
are shown below. Each block maps an input signal (or multiple input signals) into an output
signal via a prescribed, well-defined mathematical relationship.
- Gain (a real constant, e.g. $K \in \mathbb{R}$): $y(t) = K\,u(t)$ for all $t$. Example: $K = 7.2$
  gives $y(t) = 7.2\,u(t)$ for all $t$.
- Sum (two inputs $w$ and $z$): $y(t) = w(t) + z(t)$ for all $t$.
- Difference (two inputs $w$ and $z$): $y(t) = w(t) - z(t)$ for all $t$.
- Integrator ($y_0$, $t_0$ given): $y(t) = y_0 + \int_{t_0}^{t} u(\tau)\,d\tau$ for $t \ge t_0$;
  equivalently, $y(t_0) = y_0$ and $\dot{y}(t) = u(t)$ for $t \ge t_0$.
- Static nonlinearity ($\phi : \mathbb{R} \to \mathbb{R}$): $y(t) = \phi(u(t))$ for all $t$. Example:
  $\phi(\cdot) = \sin(\cdot)$ gives $y(t) = \sin(u(t))$ for all $t$.
- Delay ($T \ge 0$): $y(t) = u(t - T)$ for all $t$.
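Each of these blocks can also be realized numerically on sampled signals. The sketch below is my own Python illustration (the course itself uses Matlab/Simulink); the integrator becomes a running Euler sum of its input:

```python
def gain(K, u):
    """Gain block: y(t) = K*u(t), applied samplewise."""
    return [K * ui for ui in u]

def difference(w, z):
    """Difference block: y(t) = w(t) - z(t)."""
    return [wi - zi for wi, zi in zip(w, z)]

def integrator(u, dt, y0=0.0):
    """Integrator block: y(t) = y0 + integral of u, via Euler sums."""
    y, out = y0, []
    for ui in u:
        out.append(y)
        y += dt * ui
    return out

def delay(u, n, pad=0.0):
    """Delay block: y(t) = u(t - T), with T equal to n samples."""
    return [pad] * n + list(u[:-n]) if n > 0 else list(u)

# Integrating a constant input of 1 recovers a ramp.
ramp = integrator([1.0] * 5, dt=0.5)   # [0.0, 0.5, 1.0, 1.5, 2.0]
```

The sum and static-nonlinearity blocks follow the same samplewise pattern; chaining these functions mimics wiring blocks together in a diagram.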

2.2 Example

Consider a toy model of a stick/rocket balancing problem, as shown below.

[Figure: a stick balanced upright at its base; a sideways force $u$ acts at the base, and a
sideways force $d$ acts at the top.]

Imagine the stick is supported at its base, by a force approximately equal to its weight.
This force is not shown. A sideways force can act at the base as well, this is denoted by u,
and a sideways force can act at the top, denoted by d.

For the purposes of this example, the differential equation governing the angle-of-orientation
is taken to be

$$\dot{\theta}(t) = \theta(t) + u(t) - d(t)$$

where $\theta$ is the angle-of-orientation, $u$ is the horizontal control force applied at the base, and
$d$ is the horizontal disturbance force applied at the tip. Remark: These are not the
correct equations, as Newton's laws would correctly involve 2nd derivatives, and
terms with $\cos\theta$, $\sin\theta$, $\dot{\theta}^2$ and so on. However, they do yield an interesting unstable system
(positive $\theta$ contributes to a positive $\dot{\theta}$; negative $\theta$ contributes to a negative $\dot{\theta}$) which can be
controlled with proportional control. This is similar to the dynamic instabilities of a rocket,
just after launch, when the velocity is quite slow, and the only dominant forces/moments
are from gravity and the engine thrust:

1. The large thrust of the rocket engines essentially cancels the gravitational force, and
the rocket is effectively balanced in a vertical position;

2. If the rocket rotates away from vertical (for example, a positive $\theta$), then the mo-
ment/torque about the bottom end causes $\theta$ to increase;

3. The vertical force of the rocket engines can be steered from side-to-side by powerful
motors which move the rocket nozzles a small amount, generating a horizontal force
(represented by $u$) which induces a torque, and causes $\theta$ to change;

4. Winds (and slight imbalances in the rocket structure itself) act as disturbance torques
(represented by $d$) which must be compensated for;

So, without a control system to use $u$ to balance the rocket, it would tip over. As an exercise,
try balancing a stick or ruler in your hand (or better yet, on the tip of your finger).

Here, using a simple Matlab code, we will see the effect of a simple proportional feedback
control strategy

$$u(t) = 5\left[\theta_{des}(t) - \theta(t)\right].$$

This will result in a stable system, with a steerable rocket trajectory (the actual rocket
inclination angle $\theta(t)$ will generally track the reference inclination angle $\theta_{des}(t)$). Interestingly,
although the strategy for $u$ is very simple, the actual signal $u(t)$ as a function of $t$ is somewhat
complex, for instance, when the conditions are: $\theta(0) = 0$; $\theta_{des}(t) = 0$ for $0 \le t \le 2$, $\theta_{des}(t) = 1$
for $2 < t$; and $d(t) = 0$ for $0 \le t \le 6$, $d(t) = 0.6$ for $6 < t$. The Matlab files and associated
plots are shown at the end of this section, after Section 2.4.

Depending on your point-of-view, the resulting $u(t)$, and its effect on $\theta$, might seem almost
magical to have arisen from such a simple proportional control strategy. This is a great
illustration of the power of feedback.
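Outside of Matlab, the same experiment can be reproduced with a minimal forward-Euler sketch (the step size and implementation details below are my own choices, not from the notes):

```python
def simulate_stick(t_end=10.0, dt=0.001):
    """Forward-Euler simulation of the toy model
         theta_dot(t) = theta(t) + u(t) - d(t),
    under proportional control u(t) = 5*(theta_des(t) - theta(t)),
    with the reference and disturbance steps given in the text."""
    theta = 0.0
    traj = []
    for i in range(int(t_end / dt)):
        t = i * dt
        theta_des = 0.0 if t <= 2.0 else 1.0   # reference step at t = 2
        d = 0.0 if t <= 6.0 else 0.6           # disturbance step at t = 6
        u = 5.0 * (theta_des - theta)          # proportional control
        theta += dt * (theta + u - d)          # Euler step of theta_dot
        traj.append((t + dt, theta, u))
    return traj

traj = simulate_stick()
# theta rises toward 5/4 after the reference step (pure proportional
# control leaves a steady tracking error on this unstable plant), is
# knocked away by the disturbance at t = 6, then settles again.
```

Plotting `theta` and `u` from `traj` against time shows the qualitative behavior described above; the steady offsets are a consequence of using only proportional action.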

Nevertheless, let's return to the main point of this section: block diagrams. How can this
composite system be represented in block diagram form? Use an integrator to transform $\dot{\theta}$
into $\theta$, independent of the governing equation.

[Diagram: $\dot{\theta}$ feeds an integrator block, whose output is $\theta$.]

Then, create $\dot{\theta}$ in terms of $\theta$, $u$ and $d$, as the governing equation dictates. This requires
summing junctions, and simple connections, resulting in

[Diagram: summing junctions combine $u$, $-d$, and the fed-back $\theta$ to form $\dot{\theta}$, which
feeds the integrator producing $\theta$.]

We discussed a proportional control strategy, namely

$$u(t) = 5\left[\theta_{des}(t) - \theta(t)\right]$$

which looks like

[Diagram: $\theta_{des}$ enters a difference junction, $\theta$ is subtracted, and the result is multiplied
by the gain 5 to produce $u$.]
Putting these together yields the closed-loop system. See problem 1 in Section 2.4 for an
extension to a proportional-integral control strategy.

2.3 Summary

It is important to remember that while the governing equations are almost always written as
differential equations, the detailed block diagrams almost always are drawn with integrators
(and not differentiators). This is because of the mathematical equivalence shown in the
Integrator entry in the table in section 2.1. Using integrators to represent the relationship,
the figure conveys how the derivative of some variable, say x, is a consequence of the values
of other variables. Then, the values of x evolve simply through the running integration of
this quantity.

2.4 Problems

1. This question extends the example we discussed in class. Recall that the process was
governed by the differential equation
θ̇(t) = θ(t) + u(t) + d(t).

The proportional control strategy u(t) = 5 [θ_des(t) − θ(t)] did a good job, but there was
room for improvement. Consider the following strategy

    u(t) = p(t) + a(t)
    p(t) = K e(t)
    ȧ(t) = L e(t)   (with a(0) = 0)
    e(t) = θ_des(t) − θ_meas(t)

where K and L are constants. Note that the control action is made up of two terms,
p and a. The term p is proportional to the error, while the term a's rate-of-change is
proportional to the error.

(a) Convince yourself (and me) that a block diagram for this strategy is as below (all
missing signs on summing junctions are + signs). Note: There is one minor issue
you need to consider - exchanging the order of differentiation with multiplication
by a constant...

[Diagram: θ_des and (negatively) θ_meas enter a summing junction producing e; e feeds a gain K directly, and also a gain L followed by an integrator ∫; the two paths are summed to form u]

(b) Create a Simulink model of the closed-loop system (i.e., process and controller,
hooked up) using this new strategy. The step functions for θ_des and d should be

    θ_des(t) = 0 for t ≤ 1;  1 for 1 < t ≤ 7;  1.4 for t > 7
    d(t) = 0 for t ≤ 6;  0.4 for 6 < t ≤ 11;  0 for t > 11

Make the measurement perfect, so that θ_meas = θ.
(c) Simulate the closed-loop system for K = 5, L = 9. The initial condition for θ
should be θ(0) = 0. On three separate axes (using subplot, stacked vertically,
all with identical time-axes so they can be lined up for clarity) plot θ_des and d
versus t; θ versus t; and u versus t.

(d) Comment on the performance of this control strategy with regards to the goal
of making θ follow θ_des, even in the presence of nonzero d. What aspect of the
system response/behavior is insensitive to d? What signals are sensitive to d,
even in the steady-state?
(e) Suppose the process has up to 30% variability due to unknown effects. By that,
suppose that the process ODE is

    θ̇(t) = α θ(t) + β u(t) + d(t)

where α and β are unknown numbers, known to satisfy 0.7 ≤ α ≤ 1.3 and
0.7 ≤ β ≤ 1.3. Use for loops, and rand (this generates random numbers uniformly
distributed between 0 and 1; hence 0.7 + 0.6*rand generates a random number
uniformly distributed between 0.7 and 1.3). Simulate the system 50 times (using
different random numbers for both α and β), and plot the results on a single 3-
axis (using subplot) graph (as in part 1c above). What aspect of the closed-loop
system's response/behavior is sensitive to the process variability? What aspects
are insensitive to the process variability?
(f) Return to the original process model. Simulate the closed-loop system for K = 5,
and five values of L, {1, 3.16, 10, 31.6, 100}. On two separate axes (using subplot
and hold on), plot θ versus t and u versus t, with 5 plots (the different values of
L) on each axis.
(g) Discuss how the value of the controller parameter L appears to affect the perfor-
mance.
(h) Return to the case K = 5, L = 9. Now, use the transport delay block (found
in the Continuous Library in Simulink) so that θ_meas is a delayed (in time)
version of θ. Simulate the system for 3 different values of time-delay, namely
T = {0.001, 0.01, 0.1}. On one figure, superimpose all plots of θ versus t for the
three cases.
(i) Comment on the effect of time-delay in the measurement in terms of affecting the
regulation (i.e., θ behaving like θ_des).
(j) Return to the case of no measurement delay, and K = 5, L = 9. Now, use the
quantizer block (found in the Nonlinear Library in Simulink) so that θ_meas is
the output of the quantizer block (with θ as the input). This captures the effect
of measuring θ with an angle encoder. Simulate the system for 3 different levels of
quantization, namely {0.001, 0.005, 0.025}. On one figure, make 3 subplots (one
for each quantization level), and on each axis, graph both θ and θ_meas versus t.
On a separate figure, make 3 subplots (one for each quantization level), graphing
u versus t.
(k) Comment on the effect of measurement quantization in terms of limiting the
accuracy of regulation (i.e., θ behaving like θ_des), and on the jumpiness of the
control action u.

NOTE: All of the computer work (parts 1c, 1e, 1f, 1h and 1j) should be automated
in a single, modestly documented script file. Turn in a printout of your Simulink
diagrams (3 of them), and a printout of the script file. Also include nicely formatted
figure printouts, and any derivations/comments that are requested in the problem
statement.

3 Mathematical Modeling and Simulation

3.1 Systems of 1st order, coupled differential equations

Consider an input/output system, with m inputs (denoted by d), q outputs (denoted by e),
and governed by a set of n, 1st order, coupled differential equations, of the form

    ẋ1(t) = f1(x1(t), x2(t), ..., xn(t), d1(t), d2(t), ..., dm(t), t)
    ẋ2(t) = f2(x1(t), x2(t), ..., xn(t), d1(t), d2(t), ..., dm(t), t)
       ...
    ẋn(t) = fn(x1(t), x2(t), ..., xn(t), d1(t), d2(t), ..., dm(t), t)
                                                                        (3.1)
    e1(t) = h1(x1(t), x2(t), ..., xn(t), d1(t), d2(t), ..., dm(t), t)
    e2(t) = h2(x1(t), x2(t), ..., xn(t), d1(t), d2(t), ..., dm(t), t)
       ...
    eq(t) = hq(x1(t), x2(t), ..., xn(t), d1(t), d2(t), ..., dm(t), t)

where the functions fi , hi are given functions of the n variables x1 , x2 , . . . , xn , the m variables
d1 , d2 , . . . , dm and also explicit functions of t.

For shorthand, we write (3.1) as

    ẋ(t) = f(x(t), d(t), t)
    e(t) = h(x(t), d(t), t)                                             (3.2)

Given an initial condition vector x0, and a forcing function d(t) for t ≥ t0, we wish to solve
for the solutions

    x(t) = [x1(t); x2(t); ...; xn(t)],    e(t) = [e1(t); e2(t); ...; eq(t)]

on the interval [t0, tF], given the initial condition

    x(t0) = x0,

and the input forcing function d(·).

ode45 solves for this using numerical integration techniques, such as 4th and 5th order Runge-
Kutta formulae. You should have learned about this in E7, and used ode45 extensively. You
can learn more about numerical integration by taking Math 128. We will not discuss this
important topic in detail in this class - please review your E7 material.

Remark: The GSI will give 2 interactive discussion sections on how to use Simulink, a
graphical-based tool to easily build and numerically solve ODE models by interconnecting
individual components, each of which is governed by an ODE model. Simulink is part of
Matlab, and is available on the computers in Etcheverry. The Student Version of Matlab
also comes with Simulink (and the Control System Toolbox). The Matlab license from the
UC Berkeley software licensing arrangement has both Simulink and the Control System
Toolbox included.

However, to quickly recap (in an elementary manner) how ODE solvers work, consider the
Euler method of solution. If the functions f and d are reasonably well behaved in x and t,
then the solution x(·) exists, is a continuous function of t, and in fact is differentiable at all
points. Hence, it is reasonable that a Taylor series for x at a given time t will be predictive
of the values of x(τ) for values of τ close to t.

If we do a Taylor expansion on a function x, and ignore the higher order terms, we get an
approximation formula

    x(t + δ) ≈ x(t) + δ ẋ(t)
             = x(t) + δ f(x(t), d(t), t)

Roughly, the smaller δ is, the closer the left-hand-side is to the actual value of x(t + δ).
Euler's method propagates a solution to (3.1) by using this approximation repeatedly for a
fixed δ, called the stepsize. Hence, Euler's method gives that for any integer k ≥ 0, the
solution to (3.1) approximately satisfies

    x((k + 1)δ) ≈ x(kδ) + δ f(x(kδ), d(kδ), kδ)

where each of x((k + 1)δ), x(kδ) and f(x(kδ), d(kδ), kδ) is an n × 1 vector.

Writing out the first 4 time steps (i.e., t = 0, δ, 2δ, 3δ, 4δ) gives

    x(δ)  ≈ x(0)  + δ f(x(0), d(0), 0)
    x(2δ) ≈ x(δ)  + δ f(x(δ), d(δ), δ)
    x(3δ) ≈ x(2δ) + δ f(x(2δ), d(2δ), 2δ)                               (3.3)
    x(4δ) ≈ x(3δ) + δ f(x(3δ), d(3δ), 3δ)

and so on. So, as long as you have a subroutine that can evaluate f(x, d, t), given values
of x, d and t, you can quickly propagate an approximate solution simply by calling the
subroutine once for every timestep.

Computing the output, e(t) simply involves evaluating the function h(x(t), d(t), t) at the
solution points.
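In a language with first-class functions this recipe takes only a few lines. The sketch below is in Python rather than Matlab, and the scalar test problem (the particular f, h and constants) is an illustrative assumption, not from the notes:

```python
def euler(f, h, x0, d, t0, tf, delta):
    """Propagate x'(t) = f(x, d(t), t), e(t) = h(x, d(t), t) by Euler steps."""
    t, x = t0, x0
    ts, xs, es = [t], [x], [h(x, d(t), t)]
    while t < tf - 1e-12:
        x = x + delta * f(x, d(t), t)   # x((k+1)*delta) ~ x(k*delta) + delta*f
        t = t + delta
        ts.append(t)
        xs.append(x)
        es.append(h(x, d(t), t))        # output: evaluate h at the solution
    return ts, xs, es

# Illustrative scalar example (n = m = q = 1): x' = -2x + d(t), e = 3x
ts, xs, es = euler(f=lambda x, d, t: -2.0 * x + d,
                   h=lambda x, d, t: 3.0 * x,
                   x0=1.0, d=lambda t: 0.0, t0=0.0, tf=3.0, delta=1e-3)
```

With d = 0 the exact solution is exp(-2t), and with this small stepsize the Euler result tracks it closely.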

In the Runge-Kutta method, a more sophisticated approximation is made, which results


in more computations (4 function evaluations of f for every time step), but much greater

accuracy. In effect, more terms of the Taylor series are used, involving matrices of partial
derivatives, and even their derivatives,

    ∂f/∂x,  ∂²f/∂x²,  ∂³f/∂x³
but without actually requiring explicit knowledge of these derivatives of the function f .
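One classical fourth-order Runge-Kutta step can be sketched as below (Python; the scalar test problem and its constants are illustrative assumptions, not from the notes):

```python
def rk4_step(f, x, t, delta, d):
    """One classical Runge-Kutta step for x' = f(x, d(t), t):
    four evaluations of f per step, no derivatives of f needed."""
    k1 = f(x, d(t), t)
    k2 = f(x + 0.5 * delta * k1, d(t + 0.5 * delta), t + 0.5 * delta)
    k3 = f(x + 0.5 * delta * k2, d(t + 0.5 * delta), t + 0.5 * delta)
    k4 = f(x + delta * k3, d(t + delta), t + delta)
    return x + (delta / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Illustrative scalar problem: x' = -2x, x(0) = 1, exact solution exp(-2t)
f = lambda x, d, t: -2.0 * x
x, t, delta = 1.0, 0.0, 0.1          # a much coarser step than Euler tolerates
for _ in range(30):                   # integrate out to t = 3
    x = rk4_step(f, x, t, delta, lambda s: 0.0)
    t += delta
```

Even with delta = 0.1 the result matches exp(-6) to roughly four significant figures, which is the accuracy payoff the text describes.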

3.2 Remarks about Integration Options in simulink

The Simulation → Simulation Parameters → Solver page is used to set additional optional
properties, including integration step-size options, and may need to be used to obtain
smooth plots. Additional options are

    Term     Meaning
    RelTol   Relative error tolerance, default 1e-3. Probably leave it alone,
             though if the instructions below don't work, try making it a bit
             smaller.
    AbsTol   Absolute error tolerance, default 1e-6. Probably leave it alone,
             though if the instructions below don't work, try making it a bit
             smaller.
    MaxStep  Maximum step size. I believe the default is
             (StopTime-StartTime)/50. In general, make it smaller than
             (StopTime-StartTime)/50 if your plots are jagged.

3.3 Problems
1. A sealed-box loudspeaker is shown below.
[Figure: cutaway drawing of a sealed-box loudspeaker, labeling the magnet, top plate, back plate, pole piece, voice coil and former, spider, surround, woofer cone, dust cap, basket, vent, subwoofer electrical leads, the sealed enclosure, and the signal from the amplifier; a wire from an accelerometer mounted on the cone is also shown.]
Ignore the wire marked "accelerometer mounted on cone." We will develop a model
for acoustic radiation from a sealed-box loudspeaker.
The equations are
Speaker Cone: force balance,

    m z̈(t) = Fvc(t) + Fk(t) + Fd(t) + Fb(t) + Fenv(t)

Voice-coil motor (a DC motor):

    Vin(t) − L (dI(t)/dt) − R I(t) − Bl(t) ż(t) = 0
    Fvc(t) = Bl(t) I(t)

Magnetic flux/Length Factor:

    Bl(t) = BL0 / (1 + BL1 z(t)^4)

Suspension/Surround:

    Fk(t) = −K0 z(t) − K1 z²(t) − K2 z³(t)
    Fd(t) = −RS ż(t)

Sealed Enclosure:

    Fb(t) = P0 A ( 1/(1 + A z(t)/V0) − 1 )
Environment (Baffled Half-Space):

    ẋ(t) = Ae x(t) + Be ż(t)
    Fenv(t) = A P0 (Ce x(t) − De ż(t))

where x(t) is 6 × 1, and

    Ae := [ 474.4   4880      0       0       0       0
            4880    9376      0       0       0       0
              0       0     8125    7472      0       0
              0       0     7472    5.717     0       0
              0       0       0       0     3515   11124
              0       0       0       0    11124    2596 ]

    Be := [ 203.4
            594.2
            601.6
            15.51
            213.6
            140.4 ]

    Ce := [ 203.4  594.2  601.6  15.51  213.6  140.4 ]

    De := 46.62
This is an approximate model of the impedance seen by the face of the loud-
speaker as it radiates into an infinite half-space.

The values for the various constants are

    Symbol   Value
    A        0.1134 meters^2
    V0       0.17 meters^3
    P0       1.0133 × 10^5 Pa
    m        0.117 kg
    L        7 × 10^-4 H
    R        3 Ohms
    BL0      30.7 Tesla·meters
    BL1      10^7 meters^-4
    K0       5380 N/meter
    K1       0
    K2       2 × 10^8 N/meter^3
    RS       12.8 N·sec/meter

(a) Build a Simulink model for the system. Use the Subsystem grouping capability
to manage the complexity of the diagram. Each of the equations above should
represent a different subsystem. Make sure you think through the question what
is the input(s) and what is the output(s)? for each subsystem.

(b) Write a function that has two input arguments (denoted V̄ and ω̄) and two output
arguments, z_max,pos and z_max,neg. The functional relationship is defined as follows:
Suppose Vin(t) = V̄ sin(2π ω̄ t), where t is in seconds, and V̄ is in volts. In other
words, the input is a sine wave at a frequency of ω̄ Hertz. Simulate the loudspeaker
behavior for about 25/ω̄ seconds. The displacement z of the cone will become
nearly sinusoidal by the end of the simulation. Let z_max,pos and z_max,neg be the
maximum positive and negative values of the displacement in the last full cycle
(i.e., the last 1/ω̄ seconds of the simulation, which we will approximately decree
as the steady-state response to the input).
(c) Using bisection, determine the value (approximately) of V̄ so that the steady-
state maximum (positive) excursion of z is 0.007 meters if the frequency of the
excitation is ω̄ = 25. What is the steady-state minimum (negative) excursion of
z?
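Part (c) relies on nothing more than scalar bisection on the (monotone) amplitude-versus-voltage map. A generic sketch follows, in Python, with a hypothetical linear stand-in for the loudspeaker simulation; in the real problem, g(V) would run the Simulink model at the given excitation frequency and return z_max,pos.

```python
def bisect_for_target(g, target, lo, hi, tol=1e-6):
    """Find V with g(V) = target by bisection, assuming g is increasing
    on [lo, hi] and target lies between g(lo) and g(hi)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical stand-in for the simulation-based amplitude map
g = lambda V: 0.002 * V
V = bisect_for_target(g, target=0.007, lo=0.0, hi=10.0)
```

Bisection halves the bracket each pass, so about 24 iterations (here, 24 simulations) pin V down to the 1e-6 tolerance.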

4 State Variables

See the appendix for additional examples on mathematical modeling of systems.

4.1 Definition of State

For any system (mechanical, electrical, electromechanical, economic, biological, acoustic,


thermodynamic, etc.) a collection of variables q1 , q2 , . . . , qn are called state variables, if the
knowledge of

- the values of these variables at time t0, and
- the external inputs acting on the system for all time t ≥ t0, and
- all equations describing relationships between the variables qi and the external inputs

is enough to determine the value of the variables q1, q2, ..., qn for all t ≥ t0.

In other words, past history (before t0) of the system's evolution is not important in
determining its evolution beyond t0; all of the relevant past information is embedded in the
variables' values at t0.

Example: The system is a point mass, mass m. The point mass is acted on by an external
force f (t). The position of the mass is measured relative to an inertial frame, with coordinate
w, velocity v, as shown below in Fig. 2.

v(t) : velocity
w(t) : position

f(t)

Figure 2: Ideal force acting on a point mass



Claim #1: The collection {w} is not a suitable choice for state variables. Why? Note that
for t ≥ t0, we have

    w(t) = w(t0) + ∫_{t0}^{t} [ v(t0) + ∫_{t0}^{τ} (1/m) f(σ) dσ ] dτ

Hence, in order to determine w(t) for all t ≥ t0, it is not sufficient to know w(t0) and the
entire function f(t) for all t ≥ t0. You also need to know the value of v(t0).

Claim #2: The collection {v} is a legitimate choice for state variables. Why? Note that
for t ≥ t0, we have

    v(t) = v(t0) + ∫_{t0}^{t} (1/m) f(τ) dτ

Hence, in order to determine v(t) for all t ≥ t0, it is sufficient to know v(t0) and the entire
function f(t) for all t ≥ t0.

Claim #3: The collection {w, v} is a legitimate choice for state variables. Why? Note that
for t ≥ t0, we have

    w(t) = w(t0) + ∫_{t0}^{t} v(τ) dτ
    v(t) = v(t0) + ∫_{t0}^{t} (1/m) f(τ) dτ

Hence, in order to determine w(t) and v(t) for all t ≥ t0, it is sufficient to know w(t0) and
v(t0), together with the entire function f(t) for all t ≥ t0.
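Claim #3 can be confirmed numerically: a simulation loop for the point mass needs exactly (w(t0), v(t0)) and the force history, since the state update is ẇ = v, v̇ = f/m. The numbers below are arbitrary illustrative values.

```python
# Euler propagation of the point-mass state (w, v): w' = v, v' = f(t)/m.
m, dt = 2.0, 1e-3
force = lambda t: 4.0              # arbitrary constant force
w, v, t = 1.0, 0.5, 0.0            # state at t0 = 0
for _ in range(2000):              # integrate to t = 2
    w, v = w + dt * v, v + dt * force(t) / m
    t += dt
# exact solution here: v(t) = 0.5 + 2t, w(t) = 1 + 0.5t + t**2
```

Dropping v from the state and keeping only w makes the same loop impossible to write, which is the content of Claim #1.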

In general, it is not too hard to pick a set of state-variables for a system. The next few
sections explain some rule-of-thumb procedures for making such a choice.

4.2 State-variables: from first order evolution equations

Suppose for a system we choose some variables (x1, x2, ..., xn) as a possible choice of state
variables. Let d1, d2, ..., df denote all of the external influences (i.e., forcing functions) acting
on the system. Suppose we can derive the relationship between the x and d variables in the
form

    ẋ1(t) = f1(x1(t), x2(t), ..., xn(t), d1(t), ..., df(t))
    ẋ2(t) = f2(x1(t), x2(t), ..., xn(t), d1(t), ..., df(t))
       ...                                                              (4.1)
    ẋn(t) = fn(x1(t), x2(t), ..., xn(t), d1(t), ..., df(t))
Then, the set {x1 , x2 , . . . , xn } is a suitable choice of state variables. Why? Ordinary differ-
ential equation (ODE) theory tells us that given

- an initial condition x(t0) := x0, and
- the forcing function d(t) for t ≥ t0,

there is a unique solution x(t) which satisfies the initial condition at t = t0, and satisfies the
differential equations for t ≥ t0. Hence, the set {x1, x2, ..., xn} constitutes a state-variable
description of the system.

The equations in (4.1) are called the state equations for the system.

4.3 State-variables: from a block diagram

Given a block diagram, consisting of an interconnection of

integrators,
gains, and
static-nonlinear functions,

driven by external inputs d1 , d2 , . . . , df , a suitable choice for the states is

the output of each and every integrator

Why? Note that

- if the outputs of all of the integrators are labeled x1, x2, ..., xn, then the inputs to the
  integrators are actually ẋ1, ẋ2, ..., ẋn.
- The interconnection of all of the base components (integrators, gains, static nonlinear
  functions) implies that each ẋk(t) will be a function of the values of x1(t), x2(t), ..., xn(t)
  along with d1(t), d2(t), ..., df(t).
- This puts the equations in the form of (4.1).

We have already determined that this form implies that the variables are state variables.

4.4 Problems
1. Shown below is a block diagram of a DC motor connected to a load inertia via a
flexible shaft. The flexible shaft is modeled as a rigid shaft (inertia J1) inside the
motor, plus a massless torsional spring (torsional spring constant Ks) which connects to
the load inertia J2. θ is the angular position of the shaft inside the motor, and φ is
the angular position of the load inertia.

[Block diagram: u(t) enters a summing junction, then a gain 1/J1 followed by two integrators producing θ(t); the spring torque Ks(θ(t) − φ(t)) is fed back with a minus sign to that summing junction, and also drives a gain 1/J2 followed by two integrators producing φ(t) = y(t).]

Choose state variables (use the rule given in class for block diagrams that do not contain
differentiators). Find matrices A, B and C such that the variables x(t), y(t) and u(t)
are related by

    ẋ(t) = A x(t) + B u(t)
    y(t) = C x(t)

Hint: There are 4 state variables.

2. Using op-amps, resistors, and capacitors, design a circuit to implement a PI controller,


with KP = 4, KI = 10. Use small capacitors (these are more readily available) and
large resistors (to keep currents low).

5 First Order, Linear, Time-Invariant (LTI) ODE

5.1 The Big Picture

We shall study mathematical models described by linear time-invariant input-output differ-


ential equations. There are two general forms of such a model. One is a single, high-order
differential equation to describe the relationship between the input u and output y, of the
form

    y^{[n]}(t) + a_1 y^{[n-1]}(t) + ··· + a_{n-1} ẏ(t) + a_n y(t)
        = b_0 u^{[m]}(t) + b_1 u^{[m-1]}(t) + ··· + b_{m-1} u̇(t) + b_m u(t)      (5.1)

where y^{[k]} denotes the k-th derivative of the signal y(t): y^{[k]} = d^k y(t)/dt^k. These notes refer to equation
(5.1) as a HLODE (High-order, Linear Ordinary Differential Equation).

An alternate form involves many first-order equations, and many inputs. The general case
of this situation has n dependent variables, x1 , x2 , . . . , xn , and m inputs, d1 , d2 , . . . , dm . The
differential equations governing the evolution of the xi variables is

    ẋ1(t) = a11 x1(t) + a12 x2(t) + ··· + a1n xn(t) + b11 d1(t) + b12 d2(t) + ··· + b1m dm(t)
    ẋ2(t) = a21 x1(t) + a22 x2(t) + ··· + a2n xn(t) + b21 d1(t) + b22 d2(t) + ··· + b2m dm(t)
       ...
    ẋn(t) = an1 x1(t) + an2 x2(t) + ··· + ann xn(t) + bn1 d1(t) + bn2 d2(t) + ··· + bnm dm(t)

We will learn how to solve these differential equations, and more importantly, we will discover
how to make broad qualitative statements about these solutions. Much of our intuition
about control system design and its limitations will be drawn from our understanding of the
behavior of these types of equations.

A great many models of physical processes that we may be interested in controlling are not
linear, as above. Nevertheless, as we shall see much later, the study of linear systems is a
vital tool in learning how to control even nonlinear systems. Essentially, feedback control
algorithms make small adjustments to the inputs based on measured outputs. For small
deviations of the input about some nominal input trajectory, the output of a nonlinear
system looks like a small deviation around some nominal output. The effects of the small
input deviations on the output is well approximated by a linear (possibly time-varying)
system. It is therefore essential to undertake a study of linear systems.

In this section, we review the solutions of linear, first-order differential equations with con-
stant coefficients and time-dependent forcing functions. The concepts of

- stability
- time-constant
- sinusoidal steady-state
- frequency response functions

are introduced. A significant portion of the remainder of the course generalizes these to
higher order, linear ODEs, with emphasis on applying these concepts to the analysis and
design of feedback systems.

5.2 Solution of a First Order LTI ODE

Consider the following system

    ẋ(t) = a x(t) + b u(t)                                              (5.2)

where u is the input, x is the dependent variable, and a and b are constant coefficients
(i.e., numbers). For example, the equation 6ẋ(t) + 3x(t) = u(t) can be manipulated into
ẋ(t) = −(1/2) x(t) + (1/6) u(t). Given the initial condition x(0) = x0 and an arbitrary input function
u(t) defined for t ∈ [0, ∞), a solution of Eq. (5.2) must satisfy

    x_s(t) = e^{at} x0 + ∫_0^t e^{a(t−τ)} b u(τ) dτ .                    (5.3)
             (free resp.)  (forced resp.)

You should derive this with the integrating factor method (problem 1). Also note that
the derivation makes it evident that given the initial condition and forcing function,
there is at most one solution to the ODE. In other words, if solutions to the ODE
exist, they are unique, once the initial condition and forcing function are specified.

We can also easily just check that (5.3) is indeed the solution of (5.2) by verifying two facts:
the function xs satisfies the differential equation for all t 0; and xs satisfies the given
initial condition at t = 0. In fact, the theory of differential equations tells us that there is
one and only one function that satisfies both (existence and uniqueness of solutions). For
this ODE, we can prove this directly, for completeness sake. Above, we showed that if x
satisfies (5.2), then it must be of the form in (5.3). Next we show that the function in (5.3)
does indeed satisfy the differential equation (and initial condition). First check the value of
x_s(t) at t = 0:

    x_s(0) = e^{a·0} x0 + ∫_0^0 e^{a(0−τ)} b u(τ) dτ = x0 .

Taking the time derivative of (5.3) we obtain

    ẋ_s(t) = a e^{at} x0 + d/dt [ e^{at} ∫_0^t e^{−aτ} b u(τ) dτ ]
           = a e^{at} x0 + a e^{at} ∫_0^t e^{−aτ} b u(τ) dτ + e^{at} e^{−at} b u(t)
             \__________________________________________/
                              a x_s(t)
           = a x_s(t) + b u(t) ,
as desired.
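The same two facts can be checked numerically; the sketch below compares the closed-form solution (5.3) against a brute-force Euler integration of (5.2) for one arbitrarily chosen stable example with a constant input (all constants are illustrative assumptions):

```python
import math

# a = -1, b = 2, x0 = 1, constant u(t) = 3; for a constant input the
# integral in (5.3) evaluates to -(b/a) * (1 - exp(a*t)) * u.
a, b, x0, u = -1.0, 2.0, 1.0, 3.0
t_final, dt = 4.0, 1e-4

closed_form = (math.exp(a * t_final) * x0
               - (b / a) * (1.0 - math.exp(a * t_final)) * u)

x = x0                                   # Euler integration of (5.2)
for _ in range(int(round(t_final / dt))):
    x += dt * (a * x + b * u)
```

The two answers agree to within the Euler method's discretization error, which illustrates the uniqueness claim above.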

5.2.1 Free response

Fig. 3 shows the normalized free response (i.e., u(t) = 0) of the solution Eq. (5.3) of ODE
(5.2) when a < 0. Since a < 0, the free response decays to 0 as t → ∞, regardless of the
initial condition.

[Figure 3: Normalized free response of first order system (a < 0): normalized state x/x0, from 1 down to 0, versus normalized time t/|a|, from 0 to 5.]

Because of this (and a few more properties that will be derived in upcoming
sections), if a < 0, the system in eq. (5.2) is called stable (or sometimes, to be more precise,
asymptotically stable). Notice that the slope at time t = 0 is ẋ(0) = a x0, and T = 1/|a| is
the time at which x(t) would cross 0 if the initial slope were continued, as shown in the figure. The
time

    T := 1/|a|

is called the time constant of a first order asymptotically stable system (a < 0). T is
expressed in the units of time, and is an indication of how fast the system responds. The
larger |a|, the smaller T and the faster the response of the system.

Notice that

    x_free(T)/x0  = 1/e    ≈ .37  = 37%
    x_free(2T)/x0 = 1/e^2  ≈ .13  = 13%
    x_free(3T)/x0 = 1/e^3  ≈ .05  = 5%
    x_free(4T)/x0 = 1/e^4  ≈ .018 ≈ 2%

This calculation is often summarized informally as "in a relative sense, the free-response of
a stable, first-order system decays to zero in approximately 3 time-constants." Of course, in
that usage, the notion of "decays to zero" is quite vague, and actually means that 95% of
the initial condition has decayed away after 3 time-constants. Obviously, a different notion
of "decays to zero" yields a different rule-of-thumb for the time to decay.
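The percentages in the table come straight from e^{-t/T}; computing them directly is a trivial but handy check when the rule of thumb is forgotten:

```python
import math

# Fraction of the initial condition remaining after k time-constants,
# from the free response x_free(t) = x0 * exp(-t/T):
remaining = {k: math.exp(-k) for k in (1, 2, 3, 4)}
# roughly 37%, 13%, 5% and 2%
```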

If a > 0 the free response of ODE (5.2) is unstable, i.e. lim_{t→∞} |x(t)| = ∞. When a = 0,
x(t) = x0 for all t ≥ 0, and we can say that this system is limitedly stable or limitedly
unstable.

5.2.2 Forced response, constant inputs

We first consider the system response to a step input. In this case, the input u(t) is given by

    u(t) = 0 for t < 0;   u(t) = u_m for t ≥ 0

where u_m is a constant and x(0) = 0. The solution (5.3) yields

    x(t) = −(b/a) (1 − e^{at}) u_m .

If a < 0, the steady state output x_ss is x_ss = −(b/a) u_m. Note that for any t,

    |x(t + 1/|a|) − x_ss| = (1/e) |x(t) − x_ss| ≈ 0.37 |x(t) − x_ss|

So, when the input is constant, every 1/|a| time-units the solution moves 63% closer to its
final, limiting value. Exercise 5 gives another interesting, and useful, interpretation of the
time-constant.
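The 63% statement can be verified directly from the step-response formula (the constants below are arbitrary illustrative values):

```python
import math

# Step response x(t) = -(b/a) * (1 - exp(a*t)) * u_m with a < 0.
a, b, um = -2.0, 1.0, 5.0
xss = -(b / a) * um                # steady-state value
T = 1.0 / abs(a)                   # time constant

def x(t):
    return -(b / a) * (1.0 - math.exp(a * t)) * um

# the gap to the final value shrinks by the factor 1/e every T time-units
ratio = abs(x(1.0 + T) - xss) / abs(x(1.0) - xss)
```

The ratio is exactly e^{-1} ≈ 0.37, independent of which starting time t is chosen, since the gap decays as e^{at}.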

5.2.3 Forced response, bounded inputs

Rather than consider constant inputs, we can also consider inputs that are bounded by a
constant, and prove, under the assumption of stability, that the response remains bounded
as well (and the bound is a linear function of the input bound). Specifically, if a < 0, and
if u(t) is uniformly (in time) bounded by a positive number M , then the resulting solution
x(t) will be uniformly bounded by bM
|a|
. To derive this, suppose that |u( )| M for all 0.
Then for any t 0, we have
Z t
a(t )

|x(t)| = e b u( ) d
0
Z t
a(t )
e b u( ) d
Z0 t
ea(t ) b M d
0
bM
1 eat


a
bM
.
a
Thus, if a < 0, x(0) = 0 and u(t) M , the output is bounded by x(t) |bM/a|. This is
called a bounded-input, bounded-output (BIBO) system. If the initial condition is non-zero,
the output x(t) will still be bounded since the magnitude of the free response monotonically
converges to zero, and the response x(t) is simply the sum of the free and forced responses.
Note: Assuming b 6= 0, the system is not bounded-input/bounded-output when a 0. In
that context, from now on, we will refer to the a = 0 case (termed limitedly stable or limitedly
unstable before) as unstable. See problem 6 in Section 5.6.

5.2.4 Stable system, Forced response, input approaching 0

Assume the system is stable, so a < 0. Now, suppose the input signal u is bounded and
approaches 0 as t . It seems natural that the response, x(t) should also approach 0
as t , and deriving this fact is the purpose of this section. While the derivation
is interesting, the most important is the result: For a stable system, specifically
(5.2), with a < 0, if the input u satisfies limt u(t) = 0, then the solution x satisfies
limt x(t) = 0.

Onto the derivation: First, recall what lim_{t→∞} z(t) = 0 means: for any ε > 0, there is a
T > 0 such that for all t > T, |z(t)| < ε.

Next, note that for any t > 0 and a ≠ 0,

    ∫_0^t e^{a(t−τ)} dτ = (1/a) (e^{at} − 1)

If a < 0, then for all t ≥ 0,

    ∫_0^t e^{a(t−τ)} dτ = ∫_0^t e^{aσ} dσ ≤ ∫_0^∞ e^{aσ} dσ = −1/a = 1/|a|

Assume x0 is given, and consider such an input. Since u is bounded, there is a positive
constant B such that |u(t)| ≤ B for all t ≥ 0. Also, for every ε > 0, there is

- a T_{ε,1} > 0 such that |u(t)| < ε|a|/(3|b|) for all t ≥ T_{ε,1}/2
- a T_{ε,2} > 0 such that e^{at/2} < ε|a|/(3B|b|) for all t ≥ T_{ε,2}
- a T_{ε,3} > 0 such that e^{at} < ε/(3|x0|) for all t ≥ T_{ε,3}
Let T be the maximum of {T_{ε,1}, T_{ε,2}, T_{ε,3}}. For t > T, the response x(t) satisfies

    x(t) = e^{at} x0 + ∫_0^t e^{a(t−τ)} b u(τ) dτ
         = e^{at} x0 + ∫_0^{t/2} e^{a(t−τ)} b u(τ) dτ + ∫_{t/2}^t e^{a(t−τ)} b u(τ) dτ

We can use the information to bound each term individually, namely

1. Since t ≥ T_{ε,3},

       |e^{at} x0| ≤ (ε/(3|x0|)) |x0| = ε/3

2. Since |u(t)| ≤ B for all t,

       |∫_0^{t/2} e^{a(t−τ)} b u(τ) dτ| ≤ ∫_0^{t/2} e^{a(t−τ)} |b u(τ)| dτ
                                        ≤ B|b| ∫_0^{t/2} e^{a(t−τ)} dτ
                                        = B|b| (−1/a) e^{at/2} (1 − e^{at/2})
                                        ≤ B|b| (−1/a) e^{at/2}

   Since t ≥ T_{ε,2}, e^{at/2} < ε|a|/(3B|b|), which implies

       |∫_0^{t/2} e^{a(t−τ)} b u(τ) dτ| < ε/3
3. Finally, since t ≥ T_{ε,1}, the input satisfies |u(τ)| < ε|a|/(3|b|) for all τ ≥ t/2. This means

       |∫_{t/2}^t e^{a(t−τ)} b u(τ) dτ| < |b| (ε|a|/(3|b|)) ∫_{t/2}^t e^{a(t−τ)} dτ
                                        ≤ |b| (ε|a|/(3|b|)) ∫_0^t e^{a(t−τ)} dτ
                                        ≤ |b| (ε|a|/(3|b|)) (1/|a|)
                                        = ε/3

Combining implies that for any t > T, |x(t)| < ε. Since ε was an arbitrary positive number,
this completes the proof that lim_{t→∞} x(t) = 0.

5.2.5 Linearity

Why is the differential equation in (5.2) called a linear differential equation? Suppose x1 is
the solution to the differential equation with the initial condition x0,1 and forcing function u1,
and x2 is the solution to the differential equation with the initial condition x0,2 and forcing
function u2. In other words, the function x1 satisfies x1(0) = x0,1 and, for all t > 0,

    ẋ1(t) = a x1(t) + b u1(t).

Likewise, the function x2 satisfies x2(0) = x0,2 and, for all t > 0,

    ẋ2(t) = a x2(t) + b u2(t).

Now take two constants α and β. What is the solution to the differential equation with
initial condition x(0) = α x0,1 + β x0,2, and forcing function u(t) = α u1(t) + β u2(t)? It is easy
(just plug into the differential equation, or the integral form of the solution) to see that the
solution is

    x(t) = α x1(t) + β x2(t)
This is often called the superposition property. In this class, we will more typically use the
term linear, indicating that the solution of the differential equation is a linear function of
the (initial condition, forcing function) pair.
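Superposition is easy to confirm numerically from the closed-form solution; the constants, initial conditions and (constant) inputs below are arbitrary illustrative values:

```python
import math

# Response of x' = a*x + b*u to a constant input um from initial state x0,
# using the closed-form solution (free response plus forced response).
a, b = -3.0, 1.5

def solution(x0, um, t):
    return math.exp(a * t) * x0 - (b / a) * (1.0 - math.exp(a * t)) * um

alpha, beta = 2.0, -0.5
t = 1.3
x1 = solution(1.0, 4.0, t)                       # (x0_1, u1)
x2 = solution(-2.0, 7.0, t)                      # (x0_2, u2)
x_combo = solution(alpha * 1.0 + beta * (-2.0),  # combined initial condition
                   alpha * 4.0 + beta * 7.0, t)  # combined input
```

The combined response equals alpha*x1 + beta*x2 to within floating-point roundoff, exactly as the superposition property predicts.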

5.2.6 Forced response, input approaching a constant limit

If the system is stable (a < 0), and the input u(t) has a limit, lim_{t→∞} u(t) = ū, then by
combining the results of Sections 5.2.4, 5.2.2 and 5.2.5, it is easy to conclude that x(t)
approaches a limit as well, namely

    lim_{t→∞} x(t) = −(b/a) ū

An alternate manner to say this is that

    x(t) = −(b/a) ū + q(t)

where lim_{t→∞} q(t) = 0.

5.3 Forced response, Sinusoidal inputs

Consider the linear dynamical system

    ẋ(t) = A x(t) + B u(t)
    y(t) = C x(t) + D u(t)                                              (5.4)

We assume that A, B, C and D are scalars (1 × 1).

If the system is stable (i.e., A < 0), it might be intuitively clear that if u is a sinusoid, then
y will approach a steady-state behavior that is sinusoidal, at the same frequency, but with
different amplitude and shifted in time. In this section, we make this idea precise.

Take ω ≥ 0 as the input frequency, and (although not physically relevant) let ū be a fixed
complex number and take the input function u(t) to be

    u(t) = ū e^{jωt}

for t ≥ 0. Note that this is a complex-valued function of t. Then, the response is

    x(t) = e^{At} x0 + ∫_0^t e^{A(t−τ)} B u(τ) dτ
         = e^{At} x0 + e^{At} ∫_0^t e^{−Aτ} B ū e^{jωτ} dτ               (5.5)
         = e^{At} x0 + e^{At} ∫_0^t e^{(jω−A)τ} dτ B ū

Now, since A < 0, regardless of ω, (jω − A) ≠ 0, and we can solve the integral as

    x(t) = e^{At} ( x0 − (B ū)/(jω − A) ) + e^{jωt} (B ū)/(jω − A)       (5.6)

Hence, the output y(t) = C x(t) + D u(t) would satisfy

    y(t) = C e^{At} ( x0 − (B ū)/(jω − A) ) + ( D + (C B)/(jω − A) ) ū e^{jωt}

In the limit as t → ∞, the first term decays to 0 exponentially, leaving the steady-state
response

    y_ss(t) = ( D + (C B)/(jω − A) ) ū e^{jωt}

Hence, we have verified our initial claim: if the input is a complex sinusoid, then the steady-
state output is a complex sinusoid at the same exact frequency, but amplified by a complex
gain of D + (C B)/(jω − A).

The function G(ω),

    G(ω) := D + (C B)/(jω − A)                                          (5.7)

is called the frequency response function of the system in (5.4). Hence, for stable, first-order
systems, we have proven

    u(t) := ū e^{jωt}   ⇒   y_ss(t) = G(ω) ū e^{jωt}

G can be calculated rather easily using a computer, simply by evaluating the expression in
(5.7) at a large number of frequency points ω ∈ R. The dependence on ω is often graphed
using two plots, namely

- log10 |G(ω)| versus log10(ω), and
- ∠G(ω) versus log10(ω).

This plotting arrangement is called a Bode plot, named after one of the modern-day giants
of control system theory, Hendrik Bode.
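For a stable first-order example, evaluating (5.7) on a grid of frequencies takes only a couple of lines; the numbers A, B, C, D below are arbitrary illustrative values:

```python
import cmath

# Frequency response G(w) = D + C*B/(jw - A) of a stable first-order system.
A, B, C, D = -2.0, 1.0, 3.0, 0.0

def G(w):
    return D + (C * B) / (1j * w - A)

dc_gain = G(0.0)                       # G(0) = D - C*B/A
mags = [abs(G(w)) for w in (0.1, 1.0, 10.0, 100.0)]
phase_at_A = cmath.phase(G(abs(A)))    # at w = |A| the phase is -45 degrees
```

A Bode plot is then just these magnitudes plotted against frequency on log10-log10 scales, and the phases against log10(ω).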

What is the meaning of a complex solution to the differential equation (5.4)? Suppose that
functions u, x and y are complex, and solve the ODE. Denote the real part of the function
u as uR , and the imaginary part as uI (similar for x and y). Then, for example, xR and xI
are real-valued functions, and for all t, x(t) = xR(t) + j xI(t). Differentiating gives

    dx/dt = dxR/dt + j dxI/dt
Hence, if x, u and y satisfy the ODE, we have (dropping the (t) argument for clarity)

    dxR/dt + j dxI/dt = A (xR + j xI) + B (uR + j uI)
            yR + j yI = C (xR + j xI) + D (uR + j uI)

But the real and imaginary parts must be equal individually, so exploiting the fact that the
coefficients A, B, C and D are real numbers, we get

    dxR/dt = A xR + B uR
        yR = C xR + D uR

and

    dxI/dt = A xI + B uI
        yI = C xI + D uI

Hence, if (u, x, y) are functions which satisfy the ODE, then both (uR , xR , yR ) and (uI , xI , yI )
also satisfy the ODE.

Finally, we need to do some trig/complex number calculations. Suppose that H ∈ C is not
equal to zero. Recall that ∠H is the real number (unique to within an additive factor of 2π)
which has the properties

    cos ∠H = Re H / |H| ,    sin ∠H = Im H / |H|

Then,

    Re[H e^{jθ}] = Re[(HR + j HI)(cos θ + j sin θ)]
                 = HR cos θ − HI sin θ
                 = |H| [ (HR/|H|) cos θ − (HI/|H|) sin θ ]
                 = |H| [ cos ∠H cos θ − sin ∠H sin θ ]
                 = |H| cos(θ + ∠H)

    Im[H e^{jθ}] = Im[(HR + j HI)(cos θ + j sin θ)]
                 = HR sin θ + HI cos θ
                 = |H| [ (HR/|H|) sin θ + (HI/|H|) cos θ ]
                 = |H| [ cos ∠H sin θ + sin ∠H cos θ ]
                 = |H| sin(θ + ∠H)
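These identities are easy to spot-check numerically; in the Python sketch below, H and the sampled angles are arbitrary values chosen only for illustration:

```python
import cmath
import math

# Spot-check Re[H e^{j theta}] = |H| cos(theta + angle(H)) and the Im analogue.
H = 2.0 - 3.0j                      # arbitrary nonzero complex number
for theta in (0.3, 1.7, -2.5):
    z = H * cmath.exp(1j * theta)
    assert abs(z.real - abs(H) * math.cos(theta + cmath.phase(H))) < 1e-12
    assert abs(z.imag - abs(H) * math.sin(theta + cmath.phase(H))) < 1e-12
print("identities confirmed at sampled angles")
```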

Now consider the differential equation/frequency response case. Let G(ω) denote the fre-
quency response function. If the input u(t) = cos ωt = Re(e^{jωt}), then the steady-state
output y will satisfy

    y(t) = |G(ω)| cos(ωt + ∠G(ω))

A similar calculation holds for sin, and these are summarized below.

    Input      Steady-State Output
    1          G(0) = D − CB/A
    cos ωt     |G(ω)| cos(ωt + ∠G(ω))
    sin ωt     |G(ω)| sin(ωt + ∠G(ω))
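The table can be spot-checked by simulation. The following sketch (Python, forward-Euler integration; the values of A, B, C, D and the forcing frequency are arbitrary illustrative choices) compares a simulated cosine response against the predicted steady-state sinusoid:

```python
import numpy as np

# Compare a simulated response to the predicted steady-state sinusoid
# |G(w)| cos(w t + angle(G(w))). All parameter values here are illustrative.
A, B, C, D = -2.0, 1.0, 3.0, 0.5          # stable first-order system
w = 1.5                                   # forcing frequency

G = D + C * B / (1j * w - A)              # frequency response at this frequency

# Forward-Euler integration of x' = A x + B cos(w t), x(0) = 0.
dt, T = 1e-4, 20.0
t = np.arange(0.0, T, dt)
x = np.zeros_like(t)
for k in range(len(t) - 1):
    x[k + 1] = x[k] + dt * (A * x[k] + B * np.cos(w * t[k]))
y = C * x + D * np.cos(w * t)

# After many time-constants the transient is gone; compare the tail.
y_pred = np.abs(G) * np.cos(w * t + np.angle(G))
err = np.max(np.abs(y[-5000:] - y_pred[-5000:]))
print(err < 1e-2)                          # True: only Euler discretization error remains
```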

5.3.1 Forced response, input approaching a Sinusoid

If the system in (5.4) is stable (A < 0), combine the results of Sections 5.2.4, 5.3 and 5.2.5
to obtain the following result: Suppose A < 0, ω ≥ 0, and ū is a constant complex number.
If the input u is of the form

    u(t) = ū e^{jωt} + z(t)

and lim_{t→∞} z(t) = 0, then the response y(t) is of the form

    y(t) = [ D + CB/(jω − A) ] ū e^{jωt} + q(t)

where lim_{t→∞} q(t) = 0. Informally, we conclude that eventually sinusoidal inputs lead to
eventually sinusoidal outputs, and say that the system has the steady-state, sinusoidal gain
(SStG) property. Note that the relationship between the sinusoidal terms is the frequency-
response function.

5.4 First-order delay-differential equation: Stability

In actual feedback systems, measurements from sensors are used to make decisions on what
corrective action needs to be taken. Often in analysis, we will assume that the time from
when the measurement occurs to when the corresponding action takes place is negligible (i.e.,
zero), since this is often performed with modern, high-speed electronics. However, in reality,
there is a time-delay, so that describing the system's behavior involves relationships among
variables at different times. For instance, a simple first-order delay-differential equation is

    ẋ(t) = a1 x(t) + a2 x(t − T)                                                  (5.8)

where T ≥ 0 is a fixed number. We assume that for T = 0, the system is stable, so a1 + a2 < 0.
Since we are studying the effect of delay, we also assume that a2 ≠ 0. When T = 0, the
homogeneous solutions are of the form x(t) = e^{(a1+a2)t} x(0), which decay exponentially to
zero. As the constant number T increases from 0, the homogeneous solutions change, becoming
complicated expressions that are challenging to derive. It is a fact that there is a critical
value of T, called Tc, such that

    • for all T satisfying 0 ≤ T < Tc, the homogeneous solutions of (5.8) decay to zero as
      t → ∞;

    • for some ω ≥ 0 (denoted ωc), the function e^{jωt} is a homogeneous solution when T = Tc.

Hence, we can determine Tc (and ωc) by checking the conditions for e^{jωt} to be a homogeneous
solution of (5.8).

Plugging in gives

    jω e^{jωt} = a1 e^{jωt} + a2 e^{jω(t−T)}

for all t. Since e^{jωt} ≠ 0, divide, leaving

    jω = a1 + a2 e^{−jωT}.

Since this equality relates complex numbers (which have a real and imaginary part), it can
be thought of as 2 equations in 2 unknowns (ω and T). We know that regardless of ω and
T, it always holds that |e^{−jωT}| = 1, so it must be that

    |jω − a1| = |a2|

which implies ωc = √(a2² − a1²). Then Tc is the smallest positive number such that

    jωc = a1 + a2 e^{−jωc Tc}.
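The two conditions above pin down ωc and Tc. A quick numerical sketch (Python; the values of a1 and a2 are illustrative only, chosen so that a1 + a2 < 0 and |a2| > |a1| so a real crossing frequency exists):

```python
import numpy as np

# Critical delay for x'(t) = a1 x(t) + a2 x(t - T). Assumes a1 + a2 < 0 (stable
# at T = 0) and |a2| > |a1|; the values below are illustrative only.
a1, a2 = -1.0, -2.0

wc = np.sqrt(a2**2 - a1**2)                  # from |j*wc - a1| = |a2|

# Solve j*wc = a1 + a2 * exp(-j*wc*Tc) for the smallest Tc > 0:
# exp(-j*wc*Tc) = (j*wc - a1)/a2, so wc*Tc = -angle((j*wc - a1)/a2) modulo 2*pi.
Tc = (-np.angle((1j * wc - a1) / a2)) % (2 * np.pi) / wc
print(wc, Tc)                                 # crossing frequency and critical delay
```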

5.5 Summary

In this section, we studied the free and forced response of linear, first-order differential
equations. Several concepts and properties were established, including

    • linearity of solution to initial condition and forcing;

    • stability;

    • time-constant;

    • response to step inputs;

    • response to sinusoidal inputs;

    • response to inputs which go to 0;

    • effect of additional terms from delays.

These concepts will be investigated for higher-order differential equations in later sections.
Many of the principles learned here carry over to those as well. For that reason, it is
important that you develop a mastery of the behavior of forced, first order systems.

In upcoming sections, we study simple feedback systems that can be analyzed using only
1st-order differential equations, using all of the facts about 1st-order systems that have been
derived.

5.6 Problems

1. Use the integrating factor method to derive the solution given in equation (5.3) to the
differential equation (5.2).

2. Suppose f is a piecewise continuous function. Assume A < B. Explain (with pictures,
   or equations, etc.) why

       | ∫_A^B f(x) dx |  ≤  ∫_A^B |f(x)| dx

   This simple idea is used repeatedly when bounding the output response in terms of
   bounds on the input forcing function.

3. Work out the integral in the last line of equation (5.5), deriving equation (5.6).

4. A stable first-order system (input u, output y) has differential equation model

       ẋ(t) = a x(t) + b u(t)
            = a [ x(t) + (b/a) u(t) ]

   where a < 0 and b are some fixed numbers.

   (a) Let τ denote the time-constant, and γ denote the steady-state gain from u → x.
       Solve for τ and γ in terms of a and b. Also, invert these solutions, expressing a
       and b in terms of the time-constant and steady-state gain.
   (b) Suppose τ > 0. Consider a first-order differential equation of the form
       ẋ(t) = (1/τ) [ −x(t) + γ u(t) ]. Is this system stable? What is the time-constant?
       What is the steady-state gain from u → x? Note that this is a useful manner to
       write a first-order equation, since the time-constant and steady-state gain appear
       in a simple manner.
   (c) Suppose τ = 1, γ = 2. Given the initial condition x(0) = 4, and the input signal
       u
           u(t) = 1   for 0 ≤ t < 5
           u(t) = 2   for 5 ≤ t < 10
           u(t) = 6   for 10 ≤ t < 10.1
           u(t) = 3   for 10.1 ≤ t < ∞
       sketch a reasonably accurate graph of x(t) for t ranging from 0 to 20. The sketch
       should be based on your understanding of a first-order system's response (using
       its time-constant and steady-state gain), not by doing any particular inte-
       gration.

   (d) Now suppose τ = 0.001, γ = 2. Given the initial condition x(0) = 4, and the
       input signal u
           u(t) = 1   for 0 ≤ t < 0.005
           u(t) = 2   for 0.005 ≤ t < 0.01
           u(t) = 6   for 0.01 ≤ t < 0.0101
           u(t) = 3   for 0.0101 ≤ t < ∞
       sketch a reasonably accurate graph of x(t) for t ranging from 0 to 0.02. The
       sketch should be based on your understanding of a first-order system's response,
       not by doing any particular integration. In what manner is this "the same" as
       the response in part 4c?

5. Consider the first-order linear system

       ẋ(t) = A [ x(t) − u(t) ]

   with A < 0. Note that the system is stable, the time constant is τc = −1/A, and the
   steady-state gain from u to x is 1. Suppose the input (for t ≥ 0) is a ramp input, that
   is
       u(t) = λ t
   (where λ is a known constant) starting with an initial condition x(0) = x0. Show that
   the solution for t ≥ 0 is

       x(t) = λ (t − τc)  +  (x0 + λ τc) e^{At}

   where the first term is a shifted ramp and the second a decaying exponential.
   Hint: you can do this in 2 different manners - carry out the convolution integral, or
   verify that the proposed solution satisfies the initial condition (at t = 0) and satisfies
   the differential equation for all t > 0. Both are useful exercises, but you only need to
   do one for the assignment. Note that if we ignore the decaying exponential part of the
   solution, then the steady-state solution is also a ramp (with the same slope λ, since the
   steady-state gain of the system is 1), but it is delayed from the input by τc time-units.
   This gives us another interpretation of the time-constant of a first-order linear system
   (i.e., ramp-input leads to ramp-output, delayed by τc).
   Make an accurate sketch of u(t) and x(t) (versus t) on the same graph, for x0 = 0,
   λ = 3 and A = −0.5.

6. In the notes and lecture, we established that if a < 0, then the system ẋ(t) = a x(t) +
   b u(t) is bounded-input/bounded-output (BIBO) stable. In this problem, we show that
   the linear system ẋ(t) = a x(t) + b u(t) is not BIBO stable if a ≥ 0. Suppose b ≠ 0.

   (a) (a = 0 case): Show that there is an input u such that |u(t)| ≤ 1 for all t ≥ 0, but
       the response x(t) satisfying ẋ(t) = 0 · x(t) + b u(t), with initial condition x(0) = 0,

       is not bounded as a function of t. Hint: Try the constant input u(t) = 1 for
       all t ≥ 0. What is the response x? Is there a finite number which bounds |x(t)|
       uniformly over all t?
   (b) Take a > 0. Show that there is an input u such that |u(t)| ≤ 1 for all t ≥ 0, but
       the response x(t) satisfying ẋ(t) = a x(t) + b u(t), with initial condition x(0) = 0,
       grows exponentially (without bound) with t.

7. Consider the system

       ẋ(t) = a x(t) + b u(t)
       y(t) = c x(t) + d u(t)

   Suppose a < 0, so the system is stable.

   (a) Starting from initial condition x(0) = 0, what is the response for t ≥ 0 due to the
       unit-step input
           u(t) = 0   for t ≤ 0
           u(t) = 1   for t > 0
       Hint: Since x(0) = 0 and u(0) = 0, it is clear from the definition of y that
       y(0) = 0. For t > 0, x converges exponentially to its limit, and y differs from x
       only by scaling (c) and the addition of d u(t), which for t > 0 is just d.
   (b) Compute and sketch the response for a = −1; b = 1; c = 1; d = 1
   (c) Compute and sketch the response for a = −1; b = 1; c = 2; d = 1
   (d) Explain/justify the following terminology:
       • the steady-state-gain from u → y is d − cb/a
       • the instantaneous-gain from u → y is d

8. (a) Suppose ω > 0 and φ > 0. Let y1(t) := sin ωt, and y2(t) = sin(ωt − φ). Explain
       what is meant by the statement that the signal y2 lags the signal y1 by φ/(2π) of
       a period.
   (b) Let φ := π/3. On 3 separate graphs, plot 4 periods of the sine-signals listed below.
       i.   sin 0.1t and sin(0.1t − φ)
       ii.  sin t and sin(t − φ)
       iii. sin 10t and sin(10t − φ)
   (c) Explain how the graphs in part 8b confirm the claim in part 8a.

9. For the first-order linear system (constant coefficients a, b, and c)

       ẋ(t) = a x(t) + b u(t)
       y(t) = c x(t)

   (a) Using the convolution integral for the forced response, find the output y(t) for
       t ≥ 0 starting from the initial condition x(0) = 0, subject to input
       • u(t) = 1 for t ≥ 0
       • u(t) = sin(ωt) for t ≥ 0 (you will probably need to do two steps of integration-
         by-parts).
   (b) For this first-order system, the frequency-response function G(ω) is

           G(ω) = cb/(jω − a)

       Make a plot of
           log10 |G(ω)| versus log10 ω
       for 0.001 ≤ ω ≤ 1000 for two sets of values: system S1 with parameters (a =
       −1, b = 1, c = 1) and system S2 with parameters (a = −10, b = 10, c = 1). Put
       both magnitude plots on the same axis. Also make a plot of

           ∠G(ω) versus log10 ω

       for 0.001 ≤ ω ≤ 1000 for both systems. You can use the Matlab command angle
       which returns (in radians) the angle of a nonzero complex number. Put both
       angle plots on the same axis.
   (c) What is the time-constant and steady-state gain (from u → y) of each system?
       How is the steady-state gain related to G(0)?
   (d) For each of the following cases, compute and plot y(t) versus t:
       i.    S1 with x(0) = 0, u(t) = 1 for t ≥ 0
       ii.   S2 with x(0) = 0, u(t) = 1 for t ≥ 0
       iii.  S1 with x(0) = 0, u(t) = sin(0.1 t) for t ≥ 0
       iv.   S2 with x(0) = 0, u(t) = sin(0.1 t) for t ≥ 0
       v.    S1 with x(0) = 0, u(t) = sin(t) for t ≥ 0
       vi.   S2 with x(0) = 0, u(t) = sin(t) for t ≥ 0
       vii.  S1 with x(0) = 0, u(t) = sin(10 t) for t ≥ 0
       viii. S2 with x(0) = 0, u(t) = sin(10 t) for t ≥ 0
       Put cases (i), (ii) on the same graph, cases (iii), (iv) on the same graph, cases (v),
       (vi) on the same graph and cases (vii), (viii) on the same graph. Also, on each
       graph, plot u(t). In each case, pick the overall duration so that the limiting
       behavior is clear, but not so large that the graph is cluttered. Be sure to get
       the steady-state magnitude and phasing of the response y (relative to u) correct.

   (e) Compare the steady-state sinusoidal responses that you computed and plotted
       in 9d with the frequency-response functions plotted in part 9b. Illustrate how
       the frequency-response function gives, as a function of frequency, the steady-state
       response of the system to a sine-wave input of any frequency. Mark the relevant
       points on the frequency-response curves.

10. With regards to your answers in problem 9,

    (a) Comment on the effect parameters a and b have on the step responses in cases
        (a)-(b).
    (b) Comment on the amplification (or attenuation) of sinusoidal inputs, and its rela-
        tion to the forcing frequency ω.
    (c) Based on the speed of the response in (a)-(b) (note the degree to which y follows
        u, even though u has an abrupt change), are the sinusoidal responses in (c)-(h)
        consistent?

11. Consider a first-order system, where for all t,

        ẋ(t) = a x(t) + b u(t)                                                    (5.9)
        y(t) = c x(t)

    under the action of delayed feedback

        u(t) = −K y(t − T)

    where T ≥ 0 is a fixed number, representing a delay in the feedback path.

    (a) Eliminate u and y from the equations to obtain a delay-differential equation for
        x of the form
            ẋ(t) = A1 x(t) + A2 x(t − T)
        The parameters A1 and A2 will be functions of a, b, c and K.
    (b) Assume T = 0 (i.e., no delay). Under what condition is the closed-loop system
        stable?
    (c) Following the derivation in section 5.4 (and the slides), derive the value of the
        smallest delay that causes instability for the five cases
        i.   a = 0, b = 1, c = 1, K = 5
        ii.  a = 1, b = 1, c = 1, K = 4
        iii. a = 1, b = 1, c = 1, K = 6
        iv.  a = 3, b = 1, c = 1, K = 2
        v.   a = −3, b = 1, c = 1, K = 2
        Also determine the frequency at which instability will occur.

    (d) Confirm your findings using Simulink, implementing an interconnection of the
        first-order system in (5.9), a Transport Delay block, and a Gain block for the
        feedback. Show relevant plots.

12. Consider a system with input u, and output y governed by the differential equation

        ẏ(t) + a1 y(t) = b0 u̇(t) + b1 u(t)                                        (5.10)

    This is different than what we have covered so far, because the derivative of the input
    shows up in the right-hand-side (the overall function forcing y, from the ODE point
    of view). Note that setting b0 = 0 gives an ODE more similar to what we considered
    earlier in the class.

    (a) Let q(t) := y(t) − b0 u(t). By substitution, find the differential equation governing
        the relationship between u and q. This should look familiar.
    (b) Assume that the system is at rest (i.e., y ≡ 0, u ≡ 0, and hence q ≡ 0 too), and
        at some time, say t = 0, the input u changes from 0 to ū (e.g., a step-function
        input), specifically
            u(t) = 0   for t ≤ 0
            u(t) = ū   for t > 0
        Solve for q, using the differential equation found in part 12a, using initial condition
        q(0) = 0.
    (c) Since y = q + b0 u, show that the step-response of (5.10), starting from y(0) = 0, is

            y(t) = (b1/a1) ū + ( b0 ū − (b1/a1) ū ) e^{−a1 t}   for t > 0

        This can be written equivalently as

            [ (b1 − a1 b0)/a1 ] ū ( 1 − e^{−a1 t} ) + b0 ū

    (d) Take a1 = 1. Draw the response for b0 = 2 and five different values of b1, namely
        b1 = 0, 0.2, 1, 2, 4.
    (e) Take a1 = 1. Draw the response for b0 = 1 and five different values of b1, namely
        b1 = 0, 0.2, 1, 2, 4.
    (f) Take a1 = 1. Draw the response for b0 = 0 and five different values of b1, namely
        b1 = 0, 0.2, 1, 2, 4.
    (g) Suppose that a1 > 0 and b0 = 1. Draw the step response for two cases: b1 = 0.9 a1
        and b1 = 1.1 a1. Comment on the step response for 0 < a1 ≈ b1. What happens if
        a1 < 0 (even if b1 ≈ a1, but not exactly equal)?

13. (a) Consider the cascade connection of two first-order, stable systems

            ẋ1(t) = A1 x1(t) + B1 u(t)
            y1(t) = C1 x1(t) + D1 u(t)

            ẋ2(t) = A2 x2(t) + B2 y1(t)
            y(t)  = C2 x2(t) + D2 y1(t)

        By stable, we mean both A1 < 0 and A2 < 0. The cascade connection is shown
        pictorially below.

            u --> [ S1 ] --y1--> [ S2 ] --> y

        Suppose that the frequency response of System 1 is M1(ω), φ1(ω) (or just the
        complex G1(ω)), and the frequency response of System 2 is M2(ω), φ2(ω) (i.e., the
        complex G2(ω)). Now, suppose that ω is a fixed real number, and u(t) = sin ωt.
        Show that the steady-state behavior of y(t) is simply

            y_ss(t) = [ M1(ω) M2(ω) ] sin( ωt + φ1(ω) + φ2(ω) )

    (b) Let G denote the complex function representing the frequency response (forcing-
        frequency-dependent amplitude magnification M and phase shift φ, combined into
        a complex number) of the cascaded system. How is G related to G1 and G2?
        Hint: Remember that for complex numbers G and H,

            |GH| = |G| |H| ,    ∠(GH) = ∠G + ∠H

14. Re-read "Leibnitz's rule" in your calculus book, and consider the time-varying dif-
    ferential equation
        ẋ(t) = a(t) x(t) + b(t) u(t)
    with x(0) = x_o. Show, by substitution, or integrating factor, that the solution to this
    is
        x(t) = e^{∫_0^t a(σ) dσ} x_o + ∫_0^t e^{∫_τ^t a(σ) dσ} b(τ) u(τ) dτ

15. Design Problem (stable plant/unstable plant): Consider a first-order system P,
    with inputs (d, u) and output y, governed by

        ẋ(t) = a x(t) + b1 d(t) + b2 u(t)
        y(t) = c x(t)

    A proportional control,
        u(t) = K1 r(t) − K2 ym(t)

    is used, and the measurement is assumed to be the actual value, plus measurement
    noise,
        ym(t) = y(t) + n(t)
    As usual, y is the process output, and is the variable we want to regulate, r is the
    reference signal (the desired value of y), d is a process disturbance, u is the control
    variable, n is the measurement noise (so that y + n is the measurement of y), K1 and
    K2 are gains to be chosen. For simplicity, we will choose some nice numbers for the
    values, specifically b1 = b2 = c = 1. There will be two cases studied: stable plant,
    with a = −1, and unstable plant, with a = 1. You will design the feedback gains
    as described below, and look at closed-loop properties. The goal of the problem is to
    start to see that unstable plants are intrinsically harder to control than stable plants.
    This problem is an illustration of this fact (but not a proof).
    (a) Keeping a, K1 and K2 as variables, substitute for u, write the differential equation
        for x in the form

            ẋ(t) = A x(t) + B1 r(t) + B2 d(t) + B3 n(t)

        Also, express the output y and control input u as functions of x and the external
        inputs (r, d, n) as
            y(t) = C1 x(t) + D11 r(t) + D12 d(t) + D13 n(t)
            u(t) = C2 x(t) + D21 r(t) + D22 d(t) + D23 n(t)
        Together, these are the closed-loop governing equations. Note that all of the
        symbols (A, B1, ..., D23) will be functions of a and the controller gains, K1 and
        K2. Below, we will design K1 and K2 two different ways, and assess the per-
        formance of the overall system.
    (b) Under what conditions is the closed-loop system stable? Under those conditions,
        i.   What is the time-constant of the closed-loop system?
        ii.  What is the steady-state gain from r to y (assuming d ≡ 0 and n ≡ 0)?
        iii. What is the steady-state gain from d to y (assuming r ≡ 0 and n ≡ 0)?
    (c) First we will consider the stable plant case, so a = −1. If we simply look at
        the plant (no controller), u and d are independent inputs, and the steady-state
        gain from d to y is −cb1/a, which in this particular instance happens to be 1. Find
        the value of K2 so that: the closed-loop system is stable; and the magnitude of
        the closed-loop steady-state gain from d to y is 1/5 of the magnitude of the open-
        loop steady-state gain from d to y. That will be our design of K2, based on the
        requirement of closed-loop stability and this disturbance rejection specification.
    (d) With K2 chosen, choose K1 so that the closed-loop steady-state gain from r to y
        is equal to 1 (recall, the goal is that y should track r, as r represents the desired
        value of y).

    (e) Temporarily, assume the feedback is delayed, so u(t) = K1 r(t) − K2 ym(t − T)
        for some T ≥ 0. Using the numerical values, determine the smallest T > 0
        such that the closed-loop system is unstable. You have to rewrite the equations,
        accounting for the delay in the feedback path. Since stability is a property of the
        system independent of the external inputs, you can ignore (r, d, n) and obtain an
        equation of the form ẋ(t) = A1 x(t) + A2 x(t − T). Then look back to problem 11
        for reference.
    (f) Next, we will assess the design in terms of the closed-loop effect that the external
        inputs (r, d, n) have on two main variables of interest (y, u). For a change-of-pace,
        we will look at the frequency-response functions, not time-domain responses. For
        notational purposes, let Hvq denote the frequency-response function from an input
        signal v to an output signal q (for example, from r to y). We will be making plots
        of
            |Hvq(ω)| vs ω    and    ∠Hvq(ω) vs ω
        You will need to mimic code in the lab exercises, using the commands abs and
        angle. As mentioned, the plots will be arranged in a 2 × 3 array, organized as

            |Hry| , ∠Hry    |Hdy|    |Hny|
            |Hru|           |Hdu|    |Hnu|

        These are often referred to as the "gang of six." The plot shows all the important
        cause/effects, in the context of sinusoidal steady-state response, within the closed-
        loop system, namely how (references, disturbances, measurement noise) affect the
        (regulated variable, control variable). Note that because one of the entries actually
        has both the magnitude and angle plotted, there will be 7 axes. If you forget how
        to do this, try the following commands and see what happens in Matlab.

a11T = subplot(4,3,1);
a11B = subplot(4,3,4);
a12 = subplot(2,3,2);
a13 = subplot(2,3,3);
a21 = subplot(2,3,4);
a22 = subplot(2,3,5);
a23 = subplot(2,3,6);

        Plot the frequency-response functions. Use (for example) 100 logarithmically
        spaced points for ω, varying from 0.01 to 100. Make the linetype (solid, black).
    (g) Next we move to the unstable plant case, a = 1. In order to make a fair com-
        parison, we need to make some closed-loop property identical to the previous
        case's closed-loop property, and then compare other closed-loop properties. For
        the unstable plant design, choose K1 and K2 so that

i. the closed-loop system is stable


ii. the closed-loop time-constant is the same as the closed-loop time constant in
the stable plant case,
iii. the closed-loop steady-state gain from r to y is 1.
With those choices, plot the closed-loop frequency response functions on the ex-
isting plots, using (dashed/red) linetypes for comparison.
    (h) Note that several curves are the same as in the stable plant case. However, in all
        the other cases (d → u, n → y, n → u) the unstable plant case has higher gains.
        This means that in order to get the same r → y tracking and closed-loop time-
        constant, the system with the unstable plant uses more control input u, and is
        more sensitive to noise at all frequencies.
    (i) Again, temporarily, assume the feedback is delayed, so u(t) = K1 r(t) − K2 ym(t − T)
        for some T ≥ 0. Using the numerical values, determine the smallest T > 0 such
        that the closed-loop system is unstable. How does this compare to the earlier
        calculation of the time-delay margin in the open-loop stable case?
    (j) Look at the paper entitled "Respect the Unstable" by Gunter Stein. It is in
        the IEEE Control Systems Magazine, August 2003, pp. 12-25. You will need to
        ignore most of the math at this point, but there are some good paragraphs that
        reiterate what I am saying here, and good lessons to be learned from the accidents
        he describes. Please write a short paragraph (a few sentences) about one of the
        accidents he describes.

16. Open-loop versus Closed-loop control: Consider a first-order system P, with
    inputs (d, u) and output y, governed by

        ẋ(t) = a x(t) + b1 d(t) + b2 u(t)
        y(t) = c x(t)

    (a) Assume P is stable (i.e., a < 0). For P itself, what is the steady-state gain from
        u to y (assuming d ≡ 0)? Call this gain G. What is the steady-state gain from d
        to y (assuming u ≡ 0)? Call this gain H.
    (b) P is controlled by a proportional controller of the form

            u(t) = K1 r(t) + K2 [ r(t) − (y(t) + n(t)) ]

        Here, r is the reference signal (the desired value of y), n is the measurement noise
        (so that y + n is the measurement of y), K1 and K2 are gains to be chosen. By
        substituting for u, write the differential equation for x in the form

            ẋ(t) = A x(t) + B1 r(t) + B2 d(t) + B3 n(t)



Also, express the output y and control input u as functions of x and the external
inputs (r, d, n) as

y(t) = C1 x(t) + D11 r(t) + D12 d(t) + D13 n(t)


u(t) = C2 x(t) + D21 r(t) + D22 d(t) + D23 n(t)

All of the symbols (A, B1 , . . . , D23 ) will be functions of the lower-case given sym-
bols and the controller gains. Below, we will design K1 and K2 two different
ways, and assess the performance of the overall system.
    (c) Under what conditions is the closed-loop system stable? What is the steady-
        state gain from r to y (assuming d ≡ 0 and n ≡ 0)? What is the steady-state
        gain from d to y (assuming r ≡ 0 and n ≡ 0)?
    (d) Design #1: In this part, we design a "feedback" control system that actually has
        no feedback (K2 = 0). The control system is called open-loop or feed-forward,
        and will be based on the steady-state gain G (from u → y) of the plant. The
        open-loop controller is simple - simply invert the gain of the plant, and use that
        for K1. Hence, we pick K1 := 1/G, and K2 := 0. Call this Design #1.
        i.    For Design #1, compute the steady-state gains from all external inputs
              (r, d, n) to the two outputs (y, u).
        ii.   Comment on the steady-state gain from r → y.
        iii.  (See problem 24 for the definition of sensitivity). What is the sensitivity
              of the steady-state gain from r → y to the parameter b2? What about the
              sensitivity to a? Here you should treat K1 as a fixed number.
        iv.   Comment on the relationship between the steady-state gain from d → y
              without any control (i.e., H computed above) and the steady-state gain from
              d → y in Design #1, as computed in part 16(d)i.
        v.    Comment on the steady-state gain from d to u in Design #1. Based on d's
              eventual effect on u, is the answer in part 16(d)iv surprising?
        vi.   Comment on the steady-state gain from n to both y and u in Design #1.
              Remember that Design #1 actually does not use feedback...
        vii.  What is the time-constant of the system with Design #1?
        viii. In this part we have considered a control system that actually had no feedback
              (K2 = 0). Consequently, this is called open-loop control, or feedforward control
              (since the control input is just a function of the reference signal r, fed-forward
              to the process), or control-by-calibration, since the reciprocal of the value
              of G is used in the control law.
              Write a short, concise (4 bullet points) quantitative summary of the effect
              of this strategy. Include a comparison of the process time-constant, and the
              resulting time-constant with the controller in place, as well as the tracking

capabilities (how y follows r), the sensitivity of the tracking capabilities to


parameter changes, and the disturbance rejection properties.
    (e) Now design a true feedback control system. This is Design #2. Pick K2 so that
        the closed-loop steady-state gain from d → y is at least 5 times less than the
        uncontrolled steady-state gain from d → y (which we called H). Constrain your
        choice of K2 so that the closed-loop system is stable. Since we are working fairly
        generally, for simplicity, you may assume a < 0 and b1 > 0, b2 > 0 and c > 0.
        i.   With K2 chosen, pick K1 so that the closed-loop steady-state gain from r → y
             is 1.
        ii.  With K1 and K2 both chosen as above, what is the sensitivity of the steady-
             state gain from r → y to the parameter b2?
        iii. What is the time-constant of the closed-loop system?
        iv.  What is the steady-state gain from d → u? How does this compare to the
             previous case (feedforward)?
        v.   With K2 ≠ 0, does the noise n now affect y?
    (f) Let's use specific numbers: a = −1, b1 = 1, b2 = 1, c = 1. Summarize all com-
        putations above in a table - one table for the feedforward case (Design #1),
        and one table for the true feedback case (Design #2). Include in the table all
        steady-state gains, time constant, and sensitivity of r → y to b2.
    (g) Plot the frequency responses from all external inputs to both outputs. Do this in
        a 2 × 3 matrix of plots that I delineate in class. Use Matlab, and the subplot
        command. Use a frequency range of 0.01 ≤ ω ≤ 100. There should be two lines
        on each graph.
    (h) Mark your graphs to indicate how Design #2 accomplishes tracking, disturbance
        rejection, and lower time-constant, but has increased sensitivity to noise.
    (i) Keeping K1 and K2 fixed, change b2 from 1 to 0.8. Redraw the frequency re-
        sponses, now including all 4 lines. Indicate on the graph the evidence that De-
        sign #2 accomplishes good r → y tracking that is more insensitive to process
        parameter changes than Design #1.
17. At this point, we can analyze (stability, steady-state gain, sinusoidal steady-state gains
    (FRFs), time-constant, etc.) of first-order, linear dynamical systems. In a previous
    problem, we analyzed a 1st-order process model, and a proportional-control strategy.
    In this problem, we try a different situation, where the process is simply proportional,
    but the controller is a 1st-order, linear dynamical system. Specifically, suppose the
    process model is nondynamic ("static"), simply

        y(t) = α u(t) + β d(t)

    where α and β are constants. Depending on the situation (various, below) α may be
    considered known to the control designer, or unknown. In the case it is unknown, we

    will still assume that the sign of α is known. The control strategy is dynamic

        ẋ(t) = a x(t) + b1 r(t) + b2 ym(t)
        u(t) = c x(t) + d1 r(t)

    where ym(t) = y(t) + n(t), and the various gains (a, b1, ..., d1) constitute the de-
    sign choices in the control strategy. Be careful, notation-wise, since (for example)
    d1 is a constant parameter, and d(t) is a signal (the disturbance). There are a lot of
    letters/parameters/signals to keep track of.

    (a) Eliminate u and ym from the equations to obtain a differential equation for x of
        the form
            ẋ(t) = A x(t) + B1 r(t) + B2 d(t) + B3 n(t)
        which governs the closed-loop behavior of x. Note that A, B1, B2, B3 are functions
        of the parameters a, b1, ... in the control strategy, as well as the process parameters
        α and β.
    (b) What relations on (a, b1, ..., d1, α, β) are equivalent to closed-loop system stabil-
        ity?
    (c) As usual, we are interested in the effect (with feedback in place) of (r, d, n) on
        (y, u), the regulated variable and the control variable, respectively. Find the
        coefficients (in terms of (a, b1, ..., d1, α, β)) so that

            y(t) = C1 x(t) + D11 r(t) + D12 d(t) + D13 n(t)
            u(t) = C2 x(t) + D21 r(t) + D22 d(t) + D23 n(t)

    (d) Suppose that Tc > 0 is a desired closed-loop time constant. Show that the follow-
        ing design objectives can be met with one design, assuming that the value of α is
        known to the designer:
        • closed-loop is stable
        • closed-loop time constant is Tc
        • steady-state gain from d → y is 0
        • steady-state gain from r → y is 1
        A few things to look out for: the conditions above do not uniquely determine
        all of the parameters: indeed, only the product b2 c can be determined; and any
        arbitrary value for d1 is acceptable (although its particular value does affect other
        properties, like r → u, for instance).
    (e) Assuming the choices above have been satisfied, what is the steady-state gain
        from d → u? Given that the steady-state gain from d → y is 0, does this make
        sense, in retrospect?

(f) Show that

a = 0, b1 = 1, b2 = −1, c = 1/(αTc), d1 = arbitrary

is one acceptable choice. Note that to achieve the desired time constant, the value
of α must be known to the control designer. Write the controller equations with
all these simplifications.
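Problem 17 refers to controller equations stated earlier in the text. Assuming (as in problem 18) a static plant y(t) = αu(t) + βd(t) and a controller of the form ẋ = a x + b1 r + b2 ym, u = c x + d1 r, the following sketch numerically checks that the part-(f) choice places the closed-loop pole at −1/Tc and makes y settle to r regardless of a constant disturbance. All numeric values (α, β, Tc, d1) are made up for illustration.

```python
# Assumed model: static plant y = alpha*u + beta*d, controller
# xdot = a*x + b1*r + b2*ym, u = c*x + d1*r, with ym = y (n = 0).
alpha, beta = 2.0, 1.0       # made-up plant parameters
Tc = 0.5                     # desired closed-loop time constant
a, b1, b2 = 0.0, 1.0, -1.0   # the choice from part (f)
c, d1 = 1.0 / (alpha * Tc), 0.7   # d1 is arbitrary

# Closed loop: xdot = (a + b2*alpha*c) x + (b1 + b2*alpha*d1) r + b2*beta*d
A = a + b2 * alpha * c
assert abs(A + 1.0 / Tc) < 1e-12   # pole at -1/Tc, i.e. time constant Tc

# Forward-Euler simulation with constant r and d; y should settle to r
r, d = 1.0, 0.5
x, dt = 0.0, 1e-4
for _ in range(int(10 * Tc / dt)):           # simulate 10 time constants
    y = alpha * (c * x + d1 * r) + beta * d
    x += dt * (a * x + b1 * r + b2 * y)
y = alpha * (c * x + d1 * r) + beta * d
print(round(y, 3))   # → 1.0  (gain r->y is 1 and d->y is 0, so y settles to r)
```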
(g) Assume that α is not known, but it is known that 0 < αL ≤ α ≤ αU, where αL
and αU are known bounds (and both are positive, as indicated). Suppose that
Tc > 0 is a desired closed-loop time constant. Show that the following design
objectives can be met with one design.
• closed-loop is stable
• actual closed-loop time constant is guaranteed ≤ Tc
• steady-state gain from d → y is 0
• steady-state gain from r → y is 1
(h) Again assume that α is not known, but it is known that 0 < αL ≤ α ≤ αU, where
αL and αU are known bounds (and both are positive, say). Suppose that Tc > 0
is a desired closed-loop time constant. Show that the following design objectives
can be met with one design.
• closed-loop is stable
• actual closed-loop time constant is guaranteed ≥ Tc
• steady-state gain from d → y is 0
• steady-state gain from r → y is 1
18. Let the plant (process to be controlled) be governed by the static (no differential
equations) model

y(t) = αu(t) + βd(t)

as in the lectures. Suppose nominal values of α and β are known. Consider an integral
controller, of the form

ẋ(t) = r(t) − ym(t)
u(t) = KI x(t) + D1 r(t)

where KI and D1 are two design parameters to be chosen, and ym = y + n, where n is
an additive sensor noise.

(a) Derive the closed-loop differential equation governing x, with inputs r, d, n. Under
what conditions is the closed-loop stable? What is the time constant of the system?
(b) Suppose the nominal values of the plant parameters are ᾱ = 2.1, β̄ = 0.9. Design
KI such that the closed-loop system is stable, and the nominal closed-loop time
constant is 0.25 time units.
(c) Simulate (using ode45) the system subject to the following conditions

• Use KI as designed in part 18b, and set D1 = 0.
• Initial condition x(0) = 0.
• Reference input is a series of steps: r(t) = 0 for 0 ≤ t < 1; r(t) = 2 for
1 ≤ t < 2; r(t) = 3 for 2 ≤ t < 6; r(t) = 0 for 6 ≤ t < 10.
• Disturbance input is a series of steps: d(t) = 0 for 0 ≤ t < 3; d(t) = 1 for
3 ≤ t < 4; d(t) = 2 for 4 ≤ t < 5; d(t) = 3 for 5 ≤ t < 6; d(t) = 4 for
6 ≤ t < 7; d(t) = 0 for 7 ≤ t < 10.
• Noise n(t) = 0 for all t.
Plot y versus t and u versus t in separate, vertically stacked axes. We refer to
these as the nominal, closed-loop responses.
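A Python stand-in for the ode45 simulation is sketched below (forward Euler rather than a MATLAB solver; plotting omitted). The closed-loop ODE used here is my own part-(a) derivation, so check it against your answer.

```python
# Forward-Euler integration of the closed-loop ODE derived in part (a):
#   xdot = -alpha*KI*x + (1 - alpha*D1)*r - beta*d - n,
# with y = alpha*(KI*x + D1*r) + beta*d.
alpha, beta = 2.1, 0.9          # nominal plant parameters
KI = 4.0 / alpha                # from part (b): time constant 1/(alpha*KI) = 0.25
D1 = 0.0

def r_of(t):                    # reference steps from part (c)
    return 0.0 if t < 1 else 2.0 if t < 2 else 3.0 if t < 6 else 0.0

def d_of(t):                    # disturbance steps from part (c)
    if t < 3: return 0.0
    if t < 4: return 1.0
    if t < 5: return 2.0
    if t < 6: return 3.0
    if t < 7: return 4.0
    return 0.0

x, dt, T = 0.0, 1e-4, 10.0
ys = []
for k in range(int(T / dt)):
    t = k * dt
    r, d = r_of(t), d_of(t)
    ys.append(alpha * (KI * x + D1 * r) + beta * d)
    x += dt * (-alpha * KI * x + (1 - alpha * D1) * r - beta * d)   # n = 0
# Integral action: y re-converges to r about 0.25 time units after each step
print(round(ys[int(5.9 / dt)], 2), round(ys[-1], 2))
```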
(d) Assume that the true value of α (unknown to the control designer) is different
from ᾱ (= 2.1). Write an expression for the actual closed-loop time constant.
Your answer should depend on α, ᾱ, and the desired time constant (in this case,
0.25).
(e) Repeat the simulation above, with all parameters the same, except that α, in the
plant itself, should take on some off-nominal values, in order to study the
robustness of the closed-loop system to variations in the process (plant) behavior.
Keep the value of KI fixed from the design step in part 18b. Do 5 simulations,
for α taking on values from 1.5 to 2.5. Plot these with dashed lines, and include
the nominal closed-loop responses (single, thick solid line) for comparison.
(f) For all 5 of the systems (from the collection of perturbed plant models) make
a "gang-of-six" frequency-response function plot, using linear (not log) scales,
arranged as

|Hr→y| |Hd→y| |Hn→y|
|Hr→u| |Hd→u| |Hn→u|

Choose your frequency range appropriately (probably [0, 20] should be adequate),
so that the plots look good.
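As a sanity check on the gang-of-six plot, the magnitudes for the nominal design (D1 = 0) can be evaluated on a linear frequency grid. The six transfer functions below are my derivation from the part-(a) closed-loop equations, so treat them as an assumption to verify against your own answer:

```python
import numpy as np

# Frequency-response magnitudes for the nominal closed loop (D1 = 0).
# Derived (as an assumption) from xdot = -alpha*KI*x + r - beta*d - n,
# y = alpha*KI*x + beta*d, u = KI*x:
#   H_{r->y} =  alpha*KI/(s + alpha*KI)   H_{d->y} = beta*s/(s + alpha*KI)
#   H_{n->y} = -alpha*KI/(s + alpha*KI)   H_{r->u} = KI/(s + alpha*KI)
#   H_{d->u} = -beta*KI/(s + alpha*KI)    H_{n->u} = -KI/(s + alpha*KI)
alpha, beta = 2.1, 0.9
KI = 4.0 / alpha
w = np.linspace(0.0, 20.0, 401)      # linear (not log) frequency axis
s = 1j * w
den = s + alpha * KI
H = {"ry": alpha * KI / den, "dy": beta * s / den, "ny": -alpha * KI / den,
     "ru": KI / den, "du": -beta * KI / den, "nu": -KI / den}
mags = {k: np.abs(v) for k, v in H.items()}
# DC values match the steady-state analysis: gain r->y is 1, d->y is 0
print(round(mags["ry"][0], 3), round(mags["dy"][0], 3), round(mags["ru"][0], 3))
```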
(g) One informal goal of this class is to learn to make connections between the
frequency-domain analysis (e.g., the frequency-response function plots) and the
time-domain analysis (ODE simulations with specific, often non-sinusoidal, in-
puts). Make a list of 5 connections between the frequency-response plots (which
precisely give sinusoidal, steady-state response information) and the time-response
plots (which were non-steady-state, non-sinusoidal input responses).
(h) Finally, repeat the nominal simulation from part 18c, with D1 = 0.25 and sepa-
rately D1 = −0.25 (just two simulations). Again, plot these with dashed lines,
and include the nominal closed-loop responses (single, thick solid line) for com-
parison.

i. How does the value of D1 affect the response (instantaneous, steady-state,
time constant) of y and u due to the reference input r?
ii. How does the value of D1 affect the response (instantaneous, steady-state,
time constant) of y and u due to the disturbance input d?

19. Suppose an input/output relationship is given by

y(t) = φ(u(t)) + d(t)          (5.11)

where φ is a monotonically increasing, differentiable function. This will be used to
model a plant, with control input u, disturbance input d, and output (to be regulated)
y. Regarding φ, specifically, assume there exist positive γ and δ such that

0 < γ ≤ φ′(v) ≤ δ

for all v ∈ R. One such φ is given below in part 19e. Note that the model in (5.11)
generalizes the linear plant model in problem 18, to include nonlinear dependence of y
on u.
We'll ignore sensor noise in this problem. An integral controller is used, of the form

ẋ(t) = r(t) − y(t)
u(t) = KI x(t)

As usual, we want to understand how e(t) := r(t) − y(t) behaves, even in the presence
of nonzero disturbances d(t).

(a) Show that e(t) = r(t) − φ(KI x(t)) − d(t) for all t.
(b) If r(t) = r̄, a constant, and d(t) = d̄, a constant, show that

ė(t) = −φ′(KI x(t)) KI (r̄ − y(t)),

which simplifies to ė(t) = −KI φ′(KI x(t)) e(t).


(c) By the chain rule, it is always true that

d(e²)/dt = 2e(t)ė(t).

Substitute in the expression for ė to show that

d(e²)/dt = −2KI φ′(KI x(t)) e²(t)

(d) Assume KI > 0. Define z(t) := e²(t). Note that z(t) ≥ 0 for all t, and show that

ż(t) ≤ −2γKI z(t)

for all t. Hence z evolves similarly to the stable, 1st-order system ẇ(t) =
−2γKI w(t), but may approach 0 even faster at times. Hence, we conclude
that z approaches 0 at least as fast as w would, and hence can be thought of as
having a maximum time constant of 1/(2γKI).
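The part-(d) bound can be checked numerically, using the example φ from part 19e (for which φ′(v) ≥ 2 everywhere, so γ = 2 works) and made-up constant values for r and d:

```python
import math

# Numerical check of |e(t)| <= |e(0)| * exp(-gamma*KI*t), using the example
# phi from part 19e; rbar, dbar are made-up constants.
def phi(v):
    return 2*v + 0.1*v**3 if v >= 0 else 2*v - 0.2*v**2

KI, gamma = 0.25, 2.0
rbar, dbar = 3.0, 1.0
x, dt = 0.0, 1e-4
e0 = rbar - (phi(KI * x) + dbar)      # e(0) = 2 with these numbers
ok = True
for k in range(int(20.0 / dt)):
    t = k * dt
    e = rbar - (phi(KI * x) + dbar)
    bound = abs(e0) * math.exp(-gamma * KI * t)
    ok = ok and abs(e) <= bound + 1e-6    # e may decay faster, never slower
    x += dt * (rbar - (phi(KI * x) + dbar))   # integral control: xdot = r - y
print(ok)   # → True
```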
(e) Take, for example, φ to be

φ(v) := 2v + 0.1v³  for v ≥ 0,
        2v − 0.2v²  for v < 0.

Plot this function on the domain −5 ≤ v ≤ 5.


(f) Use ode45 or Simulink to simulate the closed-loop system, with KI = 0.25.
Initialize the system at x(0) = 0, and consider the reference and disturbance
inputs as defined below: r(t) = 3 for 0 ≤ t < 10; r(t) = 6 for 10 ≤ t < 20;
r(t) = 6 + 0.25(t − 20) for 20 ≤ t < 40; r(t) = 11 − 0.4(t − 40) for 40 ≤ t ≤ 60, and
d(t) = sin(πt/15) for 0 ≤ t < 60. On one graph, plot y and r versus t. On another
axes, plot the control input u versus t.
(g) Consider an exercise machine that can be programmed to control a person's heart
rate by measuring the heart rate, and adjusting the power input (from the person)
that must be delivered (by the person) to continually operate the machine (e.g., an
elliptical trainer or stairmaster, where the "resistance" setting dictates the rate
at which a person must take steps). Assume people using the machine are in
decent enough shape to be exercising (as all machines have such warnings printed
on the front panel). Explain, in a point-by-point list, how the problem you have
just solved above can be related to a simple integral-control strategy for this type
of workout machine. Make a short list of the issues that would come up in a
preliminary design discussion about such a product.

20. A feedback system is shown below. All unmarked summing junctions are "plus"
(+).

                                      d
                                      |
                                      v
    r --->O--- e --->[ K ]--- u --->O--- v --->[ P ]---+---> y
        + ^ -                     +                    |
          |                                            |
          +-----------------O<-------------------------+
                          + ^
                            n

Figure 4: Closed-loop system



(a) The plant P is governed by the ODE ẏ(t) = y(t) + v(t). Note that the plant is
unstable. The controller is a simple proportional control, so u(t) = Ke(t), where
K is a constant gain. Determine the range of values of the proportional gain K for
which the closed-loop system is stable.
(b) Temporarily, suppose K = 4. Confirm that the closed-loop system is stable.
What is the time constant of the closed-loop system?
(c) The control must be implemented with a sampled-data system (sampler, discrete
control logic, zero-order hold) running at a fixed sample rate, with sample time
TS. The proportional feedback uk = K ek is implemented, as shown below.

                                                                    d
                                                                    |
                                                                    v
    r --->O--- e --->[ Sample, TS ]--- ek --->[ K ]--- uk --->[ z.o.h., TS ]--- u --->O--- v --->[ P ]---+---> y
        + ^ -                                                                       +                    |
          |                                                                                              |
          +-----------------------------------O<---------------------------------------------------------+
                                            + ^
                                              n

Figure 5: Closed-loop, sampled-data system

The plant ODE is as before, ẏ(t) = y(t) + v(t). Determine a relationship between
TS and K (sample time and proportional gain) such that the closed-loop system
is stable.
(d) Return to the situation where K = 4. Recall the rule of thumb described in class
that the sample time TS should be about 1/10 of the closed-loop time constant.
Using this sample time, determine the allowable range of K, and show that the
choice K = 4 is safely in that range.
(e) Simulate the overall system (Lab on Wednesday/Thursday will describe exactly
how to do this, and it will only take a few minutes) and confirm that the behavior
with the sampled-data implementation is approximately the same as the ideal
continuous-time implementation.
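A minimal Python sketch of this comparison (using the exact zero-order-hold discretization rather than a Simulink model; K = 4, TS = (1/10) of the closed-loop time constant, r = 1, d = n = 0 assumed):

```python
import math

# Sampled-data vs. continuous-time proportional control of ydot = y + v.
K, TS = 4.0, 1.0 / 30.0              # TS = (1/10)*(1/(K-1)) for K = 4
r = 1.0
# Exact ZOH discretization over one sample: y+ = e^T y + (e^T - 1) u
a, b = math.exp(TS), math.exp(TS) - 1.0
pole = a - K * b                     # discrete closed-loop pole
assert abs(pole) < 1.0               # stable: the part-(c) condition holds

y, worst = 0.0, 0.0
for k in range(int(3.0 / TS)):       # 3 time units = 9 closed-loop time constants
    t = k * TS
    y_cont = (K / (K - 1.0)) * (1.0 - math.exp(-(K - 1.0) * t))  # exact continuous response
    worst = max(worst, abs(y - y_cont))
    y = a * y + b * K * (r - y)      # hold u_k = K(r - y_k) over the sample period
print(round(y, 3), worst < 0.06)     # both responses settle near K/(K-1) = 4/3
```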

21. Suppose two systems are interconnected, with individual equations given as

S1 : ẏ(t) = [u(t) − y(t)]
S2 : u(t) = 2 [y(t) − r(t)]          (5.12)

(a) Consider first S1 (input u, output y): Show that for any initial condition y0, if
u(t) ≡ ū (a constant), then y(t) approaches a constant ȳ that depends only on
the value of ū. What is the steady-state gain of S1?
(b) Next consider S2 (input (r, y), output u): Show that if r(t) ≡ r̄ and y(t) ≡ ȳ
(constants), then u(t) approaches a constant ū that depends only on the values
(r̄, ȳ).

(c) Now, assume that the closed-loop system also has the steady-state behavior;
that is, if r(t) ≡ r̄, then both u(t) and y(t) will approach limiting values, ū and
ȳ, dependent only on r̄. Draw a block diagram showing how the limiting values
are related, and solve for ū and ȳ in terms of r̄.
(d) Now check your answer in part 21c. Suppose y(0) = 0, and r(t) = 1 =: r̄ for all
t ≥ 0. Eliminate u from equations (5.12), and determine y(t) for all t. Make a
simple graph. Does the result agree with your answer in part 21c?

Lesson: since the assumption we made in part 21c was actually not valid, the analysis
in part 21c is incorrect. That is why, for a closed-loop steady-state analysis to be based
on the separate components' steady-state properties, we must know from other means
that the closed-loop system also has steady-state behavior.
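A quick numerical version of the part-21d check (with r̄ = 1, so the invalid steady-state analysis of part 21c predicts ȳ = 2):

```python
import math

# Eliminating u from (5.12) gives ydot = y - 2*rbar, which is unstable:
# y(t) = 2*rbar*(1 - e^t) for y(0) = 0, diverging to -infinity.
rbar = 1.0
y, dt = 0.0, 1e-4
for _ in range(int(3.0 / dt)):
    u = 2.0 * (y - rbar)        # S2
    y += dt * (u - y)           # S1: ydot = u - y
y_exact = 2.0 * rbar * (1.0 - math.exp(3.0))   # closed form at t = 3
print(y, y_exact)   # both ≈ -38.2, nowhere near the predicted steady state of +2
```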

22. Suppose two systems are interconnected, with individual equations given as

S1 : ẏ(t) = [u(t) + y(t)]
S2 : u(t) = 2 [r(t) − y(t)]          (5.13)

(a) Consider first S1 (input u, output y): If u(t) ≡ ū (a constant), then does y(t)
approach a constant ȳ, dependent only on the value of ū?
(b) Next consider S2 (input (r, y), output u): If r(t) ≡ r̄ and y(t) ≡ ȳ (constants),
then does u(t) approach a constant ū, dependent only on the values (r̄, ȳ)?
(c) Suppose y(0) = y0 is given, and r(t) ≡ r̄ for all t ≥ 0. Eliminate u from the
equations (5.13), and determine y(t) for all t. Also, plugging back in, determine
u(t) for all t. Show that y and u both have limiting values that only depend on
the value r̄, and determine the simple relationship between r̄ and (ȳ, ū).

Lesson: Even though S1 does not have steady-state behavior on its own, in feedback
with S2 , the overall closed-loop system does.
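The part-22c conclusion can be confirmed numerically (made-up initial condition y0; r̄ = 1):

```python
# Eliminating u from (5.13) gives ydot = -y + 2*rbar (stable, time constant 1),
# so y -> 2*rbar and u = 2*(rbar - y) -> -2*rbar, regardless of y0.
rbar, y0 = 1.0, 5.0
y, dt = y0, 1e-4
for _ in range(int(15.0 / dt)):     # 15 time units >> time constant 1
    u = 2.0 * (rbar - y)            # S2
    y += dt * (u + y)               # S1: ydot = u + y
u = 2.0 * (rbar - y)
print(round(y, 3), round(u, 3))     # → 2.0 -2.0
```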

23. Consider the equations relating variables r, e, y, n, u and d. Assume P and C are given
numbers.

e = r − (y + n)
u = Ce
y = P (u + d)

So, this represents 3 linear equations in 6 unknowns. Solve these equations, expressing
e, u and y as linear functions of r, d and n. The linear relationships will involve the
numbers P and C.
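The answer can be checked numerically by writing the three equations as a 3×3 linear system in (e, u, y). The closed-form expressions in the asserts below are my own solution, so verify them against yours; the P, C, r, d, n values are made up.

```python
import numpy as np

# Rearranged equations, with unknowns z = (e, u, y):
#   e + y = r - n;   -C e + u = 0;   -P u + y = P d
P, C = 3.0, 10.0
r, d, n = 1.0, 0.5, 0.2
A = np.array([[1.0,  0.0, 1.0],
              [-C,   1.0, 0.0],
              [0.0, -P,   1.0]])
b = np.array([r - n, 0.0, P * d])
e, u, y = np.linalg.solve(A, b)
# Compare with the closed-form solution (my derivation):
assert np.isclose(e, (r - P * d - n) / (1 + P * C))
assert np.isclose(u, C * (r - P * d - n) / (1 + P * C))
assert np.isclose(y, (P * C * r + P * d - P * C * n) / (1 + P * C))
print(round(e, 4), round(u, 4), round(y, 4))   # → -0.0226 -0.2258 0.8226
```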

24. For a function F of many variables (say two, for this problem, labeled x and y), the
sensitivity of F to x is defined as the ratio of the percentage change in F to the
percentage change in x that caused it. Denote this by SxF.

(a) Suppose x changes by δ, to x + δ. The percentage change in x is then

% change in x = ((x + δ) − x)/x = δ/x

Likewise, the subsequent percentage change in F is

% change in F = (F(x + δ, y) − F(x, y))/F(x, y)

Show that for infinitesimal changes in x, the sensitivity is

SxF = (x / F(x, y)) · ∂F/∂x
(b) Let F(x, y) = xy/(1 + xy). What is SxF?
(c) If x = 5 and y = 6, then xy/(1 + xy) ≈ 0.968. If x changes by 10%, using the
quantity SxF derived in part (24b), approximately what percentage change will
the quantity xy/(1 + xy) undergo?
(d) Let F(x, y) = 1/(xy). What is SxF?
(e) Let F(x, y) = xy. What is SxF?
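Parts (b)-(c) can be sanity-checked with a finite-difference approximation of the sensitivity (the value 1/(1 + xy) in the code is my own part-(b) answer, so verify it against yours):

```python
# Finite-difference check of S_x^F = (x/F) dF/dx for F(x,y) = xy/(1+xy),
# for which the formula gives S_x^F = 1/(1+xy).
def F(x, y):
    return x * y / (1.0 + x * y)

x, y = 5.0, 6.0
delta = 1e-6 * x                     # a tiny change, to approximate "infinitesimal"
fd_sens = ((F(x + delta, y) - F(x, y)) / F(x, y)) / (delta / x)
S = 1.0 / (1.0 + x * y)              # = 1/31, the claimed part-(b) answer
print(round(fd_sens, 4), round(S, 4))   # → 0.0323 0.0323
# Part (c): a 10% change in x changes xy/(1+xy) by about 10% * (1/31) ≈ 0.32%
```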