Class Notes
Fall 2017
Instructor:
Andy Packard
Contents

1 Introduction
  1.3 Problems
2 Block Diagrams
  2.2 Example
  2.3 Summary
  2.4 Problems
  3.3 Problems
4 State Variables
  4.4 Problems
  5.2.5 Linearity
  5.5 Summary
  5.6 Problems
ME 132, Fall 2017, UC Berkeley, A. Packard 1
1 Introduction
In this course we will learn how to analyze, simulate and design automatic control
strategies (called control systems) for various engineering systems.
The system whose behavior is to be controlled is called the plant. This term has its origins
in chemical engineering where the control of chemical plants or factories is of concern. On
occasion, we will also use the terms plant, process and system interchangeably. As a simple
example which we will study soon, consider an automobile as the plant, where the speed of
the vehicle is to be controlled, using a control system called a cruise control system.
The plant is subjected to external influences, called inputs, which through a cause/effect relationship influence the plant's behavior. The plant's behavior is quantified by the values of several internal quantities, often observable, called plant outputs.
Initially, we divide these external influences into two groups: those that we, the owner/operator of the system, can manipulate, called control inputs; and those that some other externality (nature, another operator, normally thought of as antagonistic to us) manipulates, called disturbance inputs.
By control, we mean the manipulation of the control inputs in a way to make the plant
outputs respond in a desirable manner.
The strategy and/or rule by which the control input is adjusted is known as the control law,
or control strategy, and the physical manner in which this is implemented (computer with a
real-time operating system; analog circuitry, human intervention, etc.) is called the control
system.
Open-loop control systems are simple: they generally rely totally on calibration, and cannot effectively deal with exogenous disturbances. Moreover, they cannot effectively deal with changes in the plant's behavior due to various effects, such as aging components; they require re-calibration. Essentially, they cannot deal with uncertainty. Another disadvantage of open-loop control systems is that they cannot stabilize an unstable system, such as balancing a rocket in the early stages of liftoff (in control terminology, this is referred to as an inverted pendulum).
In reality, most control systems are a combination of open and closed-loop strategies. In the trivial cookie example above, the instructions look predominantly open-loop, though something must stop the baking after 8 minutes, and the temperature in the oven should be maintained at (or near) 350°F. Of course, in practice the instructions for cooking would be even more closed-loop, for example "bake the cookies at 350°F for 8 minutes, or until golden-brown." Here, the "until golden-brown" indicates that you, the baker, must act as a feedback system, continuously monitoring the color of the dough, and removing it from heat when the color reaches a prescribed value.
In any case, ME 132 focuses most of its attention on the issues that arise in closed-loop systems, and the benefits and drawbacks of systems that deliberately use feedback to alter the behavior/characteristics of the process being controlled.
Some examples of systems which benefit from well-designed control systems are

- Airplanes, helicopters, rockets, missiles: flight control systems including autopilot, pilot augmentation.
- Cruise control for automobiles; lateral/steering control systems for future automated highway systems.
Of course, these are just examples of systems that we have built, and that are usually thought of as physical systems. There are other systems we have built which are not physical in the same way, but still use (and benefit from) control, and additional examples that occur naturally (living things). Some examples are

- The internet, whereby the routing of packets through communication links is controlled using a congestion control algorithm.
- The air traffic control system, where the real-time trajectories of aircraft are controlled by a large network of computers and human operators. Of course, if you look inside, you see that the actual trajectories of the aircraft are controlled by pilots and autopilots, receiving instructions from the Air Traffic Control System. And if you look inside again, you see that the actual trajectories of the aircraft are affected by forces/moments from the engine(s), and the deflections of movable surfaces (ailerons, rudders, elevators, etc.) on the airframe, which are receiving instructions from the pilot and/or autopilot.
- An economic system of a society, which may be controlled by the federal reserve (setting the prime interest rate) and by regulators, who set up rules by which all agents in the system must abide. The general goal of the rules is to promote growth and wealth-building.
- All the numerous regulatory systems within your body, both at the organ level and cellular level, and in between.
A key realization is the fact that most of the systems (i.e., the plants) that we will attempt to model and control are dynamic. We will later develop a formal definition of a dynamic system. However, for the moment it suffices to say that dynamic systems have memory: the current values of all variables of the system are generally functions of previous inputs, as well as the current input to the system. For example, the velocity of a mass particle at time t depends on the forces applied to the particle for all times before t. In the general case, this means that the current control actions have impact both at the current time (i.e., when they are applied) and in the future as well, so that actions taken now have later consequences.
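Since the course's tools are Matlab, but a quick sketch of this memory property is language-agnostic, here is an illustrative Python fragment (the function name, mass, step size, and force histories are all made up): the current velocity depends on the entire force history, not just the current force.

```python
# Illustrative sketch (not from the notes): a mass particle's velocity
# "remembers" all past forces. Forward-Euler update of m * dv/dt = F.
def simulate_velocity(forces, m=2.0, dt=0.1, v0=0.0):
    """Return the velocity after applying the force sequence `forces`."""
    v = v0
    for F in forces:
        v = v + (dt / m) * F   # current velocity accumulates every past F
    return v

# Two force histories that END with the same force value...
hist_a = [1.0, 1.0, 1.0, 0.5]
hist_b = [0.0, 0.0, 0.0, 0.5]

# ...still produce different current velocities: the system has memory.
print(simulate_velocity(hist_a))   # 0.175
print(simulate_velocity(hist_b))   # 0.025
```

A static (memoryless) system, by contrast, would produce the same output for both histories, since their final inputs agree.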
The general structure of a closed-loop system, including the plant and control law (and other
components) is shown in Figure 1.
A sensor is a device that measures a physical quantity like pressure, acceleration, humidity,
or chemical concentration. Very often, in modern engineering systems, sensors produce an
electrical signal whose voltage is proportional to the physical quantity being measured. This
is very convenient, because these signals can be readily processed with electronics, or can be
stored on a computer for analysis or for real-time processing.
An actuator is a device that has the capacity to affect the behavior of the plant. In many
common examples in aerospace/mechanical systems, an electrical signal is applied to the
actuator, which results in some mechanical motion such as the opening of a valve, or the
motion of a motor, which in turn induces changes in the plant dynamics. Sometimes (for
example, electrical heating coils in a furnace) the applied voltage directly affects the plant
behavior without mechanical motion being involved.
The controlled variables are the physical quantities we are interested in controlling and/or
regulating.
The reference or command is an electrical signal that represents what we would like the
regulated variable to behave like.
Disturbances are phenomena that affect the behavior of the plant being controlled. Distur-
bances are often induced by the environment, and often cannot be predicted in advance or
measured directly.
The controller is a device that processes the measured signals from the sensors and the
reference signals and generates the actuated signals which in turn, affects the behavior of
the plant. Controllers are essentially strategies that prescribe how to process sensed signals
and reference signals in order to generate the actuator inputs.
Finally, noises are present at various points in the overall system. We will have some amount
of measurement noise (which captures the inaccuracies of sensor readings), actuator noise
(due for example to the power electronics that drives the actuators), and even noise affecting
the controller itself (due to quantization errors in a digital implementation of the control
algorithm). Note that the sensor is a physical device in its own right, and is also subject to external disturbances from the environment. This causes its output, the sensor reading, to generally differ from the actual value of the physical variable the sensor is sensing. While this difference is usually referred to as noise, it is really just an additional disturbance that acts on the overall plant.
Throughout these notes, we will attempt to consistently use the following symbols:

  P  plant          K  controller
  u  input          y  output
  d  disturbance    n  noise
  r  reference
Arrows in our block diagrams always indicate cause/effect relationships, and not necessarily the flow of material (fluid, electrons, etc.). Power supplies and material supplies may not be shown, so normal conservation laws do not necessarily hold for the block diagram.
Based on our discussion above, we can draw the block diagram of Figure 1 that reveals the
structure of many control systems. Again, the essential idea is that the controller processes
measurements together with the reference signal to produce the actuator input u(t). In this
way, the plant dynamics are continually adjusted so as to meet the objective of having the
plant outputs y(t) track the reference signal r(t).
[Figure 1: general closed-loop system, showing the plant and controller, acted on by disturbances, actuator noise, and measurement noise.]
A simple, slightly unrealistic example of some important issues in control systems is the problem of temperature control in a shower. As Professor Poolla tells it: "Every morning I wake up and have a shower. I live in North Berkeley, where the housing is somewhat run-down, but I suspect the situation is the same everywhere. My shower is very basic. It has hot and cold water taps that are not calibrated. So I can't exactly preset the shower temperature that I desire, and then just step in. Instead, I am forced to use feedback control. I stick my hand in the shower to measure the temperature. In my brain, I have an idea of what shower temperature I would like. I then adjust the hot and cold water taps based on the discrepancy between what I measure and what I want. In fact, it is possible to set the shower temperature to within 0.5°F this way using feedback control. Moreover, using feedback, I (being the sensor and the compensatory strategy) can compensate for all sorts of changes: environmental changes, toilets flushing, etc."
This is the power of feedback: it allows us to, with accurate sensors, make a precision device
out of a crude one that works well even in changing environments.
Let's analyze this situation in more detail. The components which make up the plant in the shower are

- hot and cold water supplies, at temperatures T_H and T_C respectively;
- an adjustable valve that mixes the two; use α to denote the angle of the valve, with α = 0 meaning equal amounts of hot and cold water mixing. In the units chosen, assume that −1 ≤ α ≤ 1 always holds.

If we assume perfect mixing, then the temperature of the water just past the valve is

  T_v(t) := (T_H + T_C)/2 + ((T_H − T_C)/2) α(t) = c_1 + c_2 α(t)
The temperature of the water hitting your skin is the same (roughly) as at the valve, but there is a time-delay Δ based on the fact that the fluid has to traverse the piping, hence

  T(t) = T_v(t − Δ) = c_1 + c_2 α(t − Δ)

Let's assume that the valve position only gets adjusted at regular increments, every Δ seconds. Similarly, let's assume that we are only interested in the temperature at those instants as well. Hence, we can use a discrete notion of time, indexed by a subscript k, so that for any signal v(t) we write

  v_k := v(t)|_{t=kΔ}

In this notation, the model for the temperature/valve relationship is

  T_k = c_1 + c_2 α_{k−1}     (1.1)
Now, taking a shower, you have (in mind) a desired temperature, T_des, which may even be a function of time, T_{des,k}. How can the valve be adjusted so that the shower temperature approaches this?

Open-loop control: pre-solve for what the valve position should be, giving

  α_k = (T_{des,k} − c_1) / c_2     (1.2)

and use this to basically calibrate the valve position for desired temperature. This gives

  T_k = T_{des,k−1}

which seems good, as you achieve the desired temperature one time-step after specifying it. However, if c_1 and/or c_2 change (hot or cold water supply temperature changes, or the valve gets a bit clogged) there is no way for the calibration to change. If the plant behavior changes to

  T_k = c̄_1 + c̄_2 α_{k−1}     (1.3)

but the control behavior remains as (1.2), the overall behavior is

  T_{k+1} = c̄_1 + (c̄_2/c_2)(T_{des,k} − c_1)

which isn't so good. Any percentage variation in c_2 is translated into a similar percentage error in the achieved temperature.
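A quick numeric check of this claim (all constant values below are made up for illustration; the course's Matlab would serve just as well):

```python
# Open-loop calibration alpha = (Tdes - c1)/c2 is built for nominal c1, c2,
# then applied to a plant whose true gain differs by 10%.
c1, c2 = 70.0, 30.0          # nominal calibration constants (made up)
Tdes = 100.0
alpha = (Tdes - c1) / c2     # open-loop valve setting, as in eq. (1.2)

c2_bar = 1.10 * c2           # the true gain is 10% higher than calibrated
T = c1 + c2_bar * alpha      # achieved temperature: T = c1 + c2_bar * alpha

# The 10% gain error shows up as a 10% error in the rise above c1.
error_pct = 100.0 * (T - Tdes) / (Tdes - c1)
print(T, error_pct)          # 103.0, 10.0
```

The calibration has no way to observe, and hence no way to correct, this error.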
How do you actually control the temperature when you take a shower? Again, the behavior of the shower system is

  T_{k+1} = c_1 + c_2 α_k

Closed-loop strategy: If at time k, there is a deviation in desired/actual temperature of T_{des,k} − T_k, then since the temperature changes c_2 units for every unit change in α, the valve angle should be increased by an amount (1/c_2)(T_{des,k} − T_k). That might be too aggressive, trying to completely correct the discrepancy in one step, so choose a number γ, 0 < γ < 1, and try

  α_k = α_{k−1} + (γ/c_2)(T_{des,k} − T_k)     (1.4)

(of course, α is limited to lie between −1 and 1, so the strategy should be written in a more complicated manner to account for that; for simplicity we ignore this issue here, and return to it later in the course). Substituting for α_k gives

  (1/c_2)(T_{k+1} − c_1) = (1/c_2)(T_k − c_1) + (γ/c_2)(T_{des,k} − T_k)

which simplifies down to

  T_{k+1} = (1 − γ) T_k + γ T_{des,k}

which shows that, in fact, as long as 0 < γ < 2, the temperature converges (convergence rate determined by γ) to the desired temperature.
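This recursion is easy to simulate; the sketch below (Python standing in for the course's Matlab, with made-up temperatures) illustrates the convergence for several values of γ in (0, 2):

```python
# Sketch of the closed-loop shower recursion T[k+1] = (1-g)*T[k] + g*Tdes.
# The initial temperature, desired temperature, and gammas are made up.
def simulate_shower(gamma, T0=60.0, Tdes=100.0, steps=40):
    T = T0
    history = [T]
    for _ in range(steps):
        T = (1.0 - gamma) * T + gamma * Tdes
        history.append(T)
    return history

for g in (0.3, 1.0, 1.7):          # all satisfy 0 < gamma < 2
    print(f"gamma={g}: final T = {simulate_shower(g)[-1]:.4f}")
# The error T[k] - Tdes shrinks by a factor |1 - gamma| each step, so any
# 0 < gamma < 2 converges; gamma = 1 corrects the error in a single step,
# and gamma > 1 overshoots on alternate steps while still converging.
```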
Assuming your strategy remains fixed, how do unknown variations in T_H and T_C affect the performance of the system? The shower model changes to (1.3), giving

  T_{k+1} = (1 − γ̄) T_k + γ̄ T_{des,k}

where γ̄ := γ c̄_2/c_2. Hence, the deviation in c_1 has no effect on the closed-loop system, and the deviation in c_2 only causes a similar percentage variation in the effective value of γ. As long as 0 < γ̄ < 2, the overall behavior of the system is acceptable. This is good, and shows that small unknown variations in the plant are essentially completely compensated for by the feedback system.
On the other hand, large, unexpected deviations in the behavior of the plant can cause problems for a feedback system. Suppose that you maintain the strategy in equation (1.4), but there is a longer time-delay than you realize. Specifically, suppose that there is extra piping, so that the time delay is not just Δ, but mΔ. Then, the shower model is

  T_{k+m−1} = c_1 + c_2 α_{k−1}     (1.5)

and the strategy (from equation (1.4)) is α_k = α_{k−1} + (γ/c_2)(T_{des,k} − T_k). Combining gives

  T_{k+m} = T_{k+m−1} + γ(T_{des,k} − T_k)

This has some very undesirable behavior, which is explored in problem 5 at the end of the section.
1.3 Problems

1. Show that, for λ ≠ 1,

     ∑_{k=0}^{N} λ^k = (1 − λ^{N+1}) / (1 − λ)

   If |λ| < 1, show that

     ∑_{k=0}^{∞} λ^k = 1 / (1 − λ)
2. Consider the equations relating variables r, e, y, n, u and d. Assume P and C are given numbers.

     e = r − (y + n)
     u = C e
     y = P (u + d)

   So, this represents 3 linear equations in 6 unknowns. Solve these equations, expressing e, u and y as linear functions of r, d and n. The linear relationships will involve the numbers P and C.
3. For a function F of many variables (say two, for this problem, labeled x and y), the sensitivity of F to x is defined as the ratio of the percentage change in F due to a percentage change in x. Denote this by S_x^F.

   (a) The percentage change in x is

         % change in x = ((x + δ) − x) / x = δ / x

       Likewise, the subsequent percentage change in F is

         % change in F = (F(x + δ, y) − F(x, y)) / F(x, y)

       Show that for infinitesimal changes in x, the sensitivity is

         S_x^F = (x / F(x, y)) · (∂F/∂x)

   (b) Let F(x, y) = xy / (1 + xy). What is S_x^F?

   (c) If x = 5 and y = 6, then xy/(1 + xy) ≈ 0.968. If x changes by 10%, using the quantity S_x^F derived in part (b), approximately what percentage change will the quantity xy/(1 + xy) undergo?

   (d) Let F(x, y) = 1/(xy). What is S_x^F?

   (e) Let F(x, y) = xy. What is S_x^F?
4. Consider the difference equation

     p_{k+1} = λ p_k + μ u_k     (1.6)

   with the following parameter values, initial condition and terminal condition:

     λ = 1 + R/12,  μ = −1,  u_k = M for all k,  p_0 = L,  p_360 = 0     (1.7)

   where R, M and L are constants.

   (a) In order for the terminal condition to be satisfied (p_360 = 0), the quantities R, M and L must be related. Find that relation. Express M as a function of R and L, M = f(R, L).

   (b) Is M a linear function of L (with R fixed)? If so, express the relation as M = g(R)L, where g is a function you can calculate.

   (c) Note that the function g is not a linear function of R. Calculate

         dg/dR |_{R=0.065}

   (d) For R in the range 0.01 to 0.2, is the linear approximation (about R = 0.065) relatively accurate in the range 0.055 to 0.075?

   (e) On a 30-year home loan of $400,000, what is the monthly payment, assuming an annual interest rate of 3.75%? Hint: The amount owed on a fixed-interest-rate mortgage from month to month is represented by the difference equation in equation (1.6). The interest is compounded monthly. The parameters in (1.7) all have appropriate interpretations.
5. Consider the shower example. Suppose that there is extra delay in the shower's response, but that your strategy is not modified to take this into account. We derived that the equation governing the closed-loop system is

     T_{k+m} = T_{k+m−1} + γ(T_{des,k} − T_k)

   where the time-delay from the water passing through the mixing valve to the water touching your skin is mΔ. Using calculators, spreadsheets, computers (and/or graphs) or analytic formulae you can derive, determine the values of γ for which the system is stable for the following cases: (a) m = 2, (b) m = 3, (c) m = 5. Remark 1: Remember, for m = 1, the allowable range for γ is 0 < γ < 2. Hint: For a first attempt, assume that the water in the piping at k = 0 is all cold, so that T_0, T_1, ..., T_{m−1} = T_C, and that T_{des,k} = (T_H + T_C)/2. Compute, via the formula, T_k for k = 0, 1, ..., 100 (say), and plot the result.
6. In this class, we will deal with differential equations having real coefficients, and real initial conditions, and hence, real solutions. Nevertheless, it will be useful to use complex numbers in certain calculations, simplifying notation, and allowing us to write only 1 equation when there are actually two. Let j denote √−1. Recall that if α is a complex number, then |α| = √(α_R² + α_I²), where α_R := Real(α) and α_I := Imag(α), and

     α = α_R + j α_I

   where α_R and α_I are real numbers. If α ≠ 0, then the angle of α, denoted ∠α, satisfies

     cos ∠α = α_R / |α|,  sin ∠α = α_I / |α|

   and is uniquely determinable from α (only to within additive factors of 2π).

   (a) Draw a 2-d picture (horizontal axis for the real part, vertical axis for the imaginary part) illustrating these quantities.

   (b) Suppose A and B are complex numbers. Using the numerical definitions above, carefully derive that

         |AB| = |A| |B|,  ∠(AB) = ∠A + ∠B

7. (a) Given real numbers θ_1 and θ_2, using basic trigonometry, show that

         e^{jθ_1} e^{jθ_2} = e^{j(θ_1 + θ_2)}

       where e^{jθ} := cos θ + j sin θ.

8. Given a complex number G, and a real number θ, show that (here, j := √−1)

   Note: you can only determine θ to within an additive factor of 2π. How are these conditions different from saying just

     tan θ = B / A ?
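This distinction is easy to see numerically. The illustrative Python sketch below (not part of the notes) contrasts recovering an angle from both the cosine and sine conditions, via atan2, with using the tangent alone:

```python
import math

# alpha = -1 - 1j lies in the third quadrant.
a = complex(-1.0, -1.0)

# The two conditions cos(angle) = Re/|a|, sin(angle) = Im/|a| use both
# signs; math.atan2 does exactly this, and returns -3*pi/4 here.
angle_both = math.atan2(a.imag, a.real)

# tan(angle) = Im/Re alone loses the individual signs: atan returns +pi/4,
# which is the angle of the DIFFERENT complex number +1 + 1j.
angle_tan_only = math.atan(a.imag / a.real)

print(angle_both, angle_tan_only)   # the two answers differ by pi
```

So the pair (cos ∠α, sin ∠α) pins down the quadrant, while the single ratio tan ∠α does not; this is precisely the point of the question above.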
10. Draw the block diagram for temperature control in a refrigerator. What disturbances
are present in this problem?
All are in the Engineering Library. They are available on the web, and if you are
working from a UC Berkeley computer, you can access the articles for free (see the
UCB library webpage for instructions to configure your web browser at home with the
correct proxy so that you can access the articles from home as well, as needed).
(a) Find 3 articles that have titles which interest you. Make a list of the title, first
author, journal/vol/date/page information.
(b) Look at the articles informally. Based on that, pick one article, and attempt to read it more carefully, skipping over the mathematics that we have not covered (which may be a lot/most of the paper). Focus on understanding the problem being formulated, and try to connect it to what we have discussed.
(c) Regarding the paper's Introduction section, describe the aspects that interest you. Highlight or mark these sentences.
(d) In the body of the paper, mark/highlight figures, paragraphs, or parts of para-
graphs that make sense. Look specifically for graphs of signal responses, and/or
block diagrams.
(e) Write a 1 paragraph summary of the paper.
Turn in the paper with marks/highlights, as well as the title information of the other
papers, and your short summary.
2 Block Diagrams

In this section, we introduce some block-diagram notation that is used throughout the course, and is part of the common grammar of control systems.
The names, appearance and mathematical meaning of a handful of blocks that we will use are shown below. Each block maps an input signal (or multiple input signals) into an output signal via a prescribed, well-defined mathematical relationship.
Name                 Info              Continuous-time relationship
Gain                 γ ∈ R             y(t) = γ u(t), ∀t
  (example)          γ = 7.2           y(t) = 7.2 u(t), ∀t
Sum                                    y(t) = w(t) + z(t), ∀t
Difference                             y(t) = w(t) − z(t), ∀t
Integrator           y_0, t_0 given    y(t) = y_0 + ∫_{t_0}^{t} u(τ) dτ, ∀t ≥ t_0
Integrator           y_0, t_0 given    y(t_0) = y_0;  ẏ(t) = u(t), ∀t ≥ t_0
Static Nonlinearity  ψ : R → R         y(t) = ψ(u(t)), ∀t
  (example)          ψ(·) = sin(·)     y(t) = sin(u(t)), ∀t
Delay                T ≥ 0             y(t) = u(t − T), ∀t
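As a rough sketch of what these relationships mean computationally (an illustrative discretization, not part of the notes: signals become lists of samples taken with a fixed step dt), the blocks can be coded as:

```python
import math

# Sketch: primitive blocks acting on sampled signals (lists of samples).
# The fixed step size dt is an assumption of this discretization.
def gain(u, g):                       # y(t) = g * u(t)
    return [g * uk for uk in u]

def sum_block(w, z):                  # y(t) = w(t) + z(t)
    return [wk + zk for wk, zk in zip(w, z)]

def difference(w, z):                 # y(t) = w(t) - z(t)
    return [wk - zk for wk, zk in zip(w, z)]

def integrator(u, y0, dt):            # y(t) = y0 + integral of u (Euler sum)
    y, out = y0, []
    for uk in u:
        out.append(y)                 # output the running integral so far
        y = y + dt * uk
    return out

def static_nonlinearity(u, psi=math.sin):   # y(t) = psi(u(t))
    return [psi(uk) for uk in u]

# e.g. integrating a constant input u(t) = 1 from y0 = 0 gives a ramp:
print(integrator([1.0] * 5, y0=0.0, dt=0.5))   # [0.0, 0.5, 1.0, 1.5, 2.0]
```

Note that only the integrator has memory (internal state y); the gain, sum, difference, and static nonlinearity blocks are memoryless maps applied sample by sample.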
2.2 Example

[Figure: a stick balanced upright at its base; a sideways force u acts at the base, and a sideways force d acts at the top.]

Imagine the stick is supported at its base, by a force approximately equal to its weight. This force is not shown. A sideways force can act at the base as well; this is denoted by u, and a sideways force can act at the top, denoted by d.
For the purposes of this example, the differential equation governing the angle-of-orientation θ is taken to be

  θ̇(t) = θ(t) + u(t) − d(t)

where θ is the angle-of-orientation, u is the horizontal control force applied at the base, and d is the horizontal disturbance force applied at the tip. Remark: These are not the correct equations, as Newton's laws would correctly involve 2nd derivatives, and terms with cos θ, sin θ, θ̇² and so on. However they do yield an interesting unstable system (positive θ contributes to a positive θ̇; negative θ contributes to a negative θ̇) which can be controlled with proportional control. This is similar to the dynamic instabilities of a rocket, just after launch, when the velocity is quite slow, and the only dominant forces/moments are from gravity and the engine thrust:
1. The large thrust of the rocket engines essentially cancels the gravitational force, and the rocket is effectively balanced in a vertical position;

2. If the rocket rotates away from vertical (for example, a positive θ), then the moment/torque about the bottom end causes θ to increase;

3. The vertical force of the rocket engines can be steered from side-to-side by powerful motors which move the rocket nozzles a small amount, generating a horizontal force (represented by u) which induces a torque, and causes θ to change;

4. Winds (and slight imbalances in the rocket structure itself) act as disturbance torques (represented by d) which must be compensated for.
So, without a control system to use u to balance the rocket, it would tip over. As an exercise, try balancing a stick or ruler in your hand (or better yet, on the tip of your finger).
Here, using simple Matlab code, we will see the effect of a simple proportional feedback control strategy

  u(t) = 5 [θ_des(t) − θ(t)].

This will result in a stable system, with a steerable rocket trajectory (the actual rocket inclination angle θ(t) will generally track the reference inclination angle θ_des(t)). Interestingly, although the strategy for u is very simple, the actual signal u(t) as a function of t is somewhat complex, for instance, when the conditions are: θ(0) = 0; θ_des(t) = 0 for 0 ≤ t ≤ 2, θ_des(t) = 1 for 2 < t; and d(t) = 0 for 0 ≤ t ≤ 6, d(t) = 0.6 for 6 < t. The Matlab files, and associated plots, are shown at the end of this section, after Section 2.4.
Depending on your point-of-view, the resulting u(t), and its effect on θ, might seem almost magical to have arisen from such a simple proportional control strategy. This is a great illustration of the power of feedback.
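The Matlab files are not reproduced here, but the experiment is easy to re-create. The sketch below uses Python with forward-Euler integration standing in for Matlab/Simulink, and assumes the sign convention θ̇ = θ + u − d for the governing equation:

```python
# Sketch: proportional control of theta' = theta + u - d, with the law
# u = 5*(theta_des - theta).  Forward-Euler integration; the step size
# dt is a made-up choice for illustration.
dt, t_end = 0.001, 12.0
theta, t = 0.0, 0.0                        # theta(0) = 0
trace = []
while t < t_end:
    theta_des = 0.0 if t <= 2 else 1.0     # reference step at t = 2
    d = 0.0 if t <= 6 else 0.6             # disturbance step at t = 6
    u = 5.0 * (theta_des - theta)          # proportional control law
    theta += dt * (theta + u - d)          # theta' = theta + u - d
    t += dt
    trace.append((t, theta, u))

print(f"theta at t = {t:.1f}: {trace[-1][1]:.3f}")
# The closed loop is theta' = -4*theta + 5*theta_des - d, which is stable;
# with theta_des = 1 and d = 0.6 the steady state is (5 - 0.6)/4 = 1.1,
# so simple proportional control leaves a small steady-state error.
```

Plotting the trace shows the unstable plant being steered by the reference and recovering from the disturbance, which is the behavior described above.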
Nevertheless, let's return to the main point of this section, block diagrams. How can this composite system be represented in block diagram form? Use an integrator to transform θ̇ into θ, independent of the governing equation.

[Diagram: θ̇ enters an integrator block, whose output is θ.]

Then, create θ̇ in terms of θ, u and d, as the governing equation dictates. This requires summing junctions, and simple connections, resulting in

[Diagram: u and d enter a summing junction (d with a minus sign); its output is summed with θ to produce θ̇, which feeds the integrator whose output θ is fed back.]

Putting these together yields the closed-loop system. See problem 1 in Section 2.4 for an extension to a proportional-integral control strategy.
2.3 Summary

It is important to remember that while the governing equations are almost always written as differential equations, the detailed block diagrams are almost always drawn with integrators (and not differentiators). This is because of the mathematical equivalence shown in the Integrator entries in the table in Section 2.1. Using integrators to represent the relationship, the figure conveys how the derivative of some variable, say x, is a consequence of the values of other variables. Then, the values of x evolve simply through the running integration of this quantity.
2.4 Problems

1. This question extends the example we discussed in class. Recall that the process was governed by the differential equation

     θ̇(t) = θ(t) + u(t) − d(t).

   The proportional control strategy u(t) = 5 [θ_des(t) − θ(t)] did a good job, but there was room for improvement. Consider the following strategy

     u(t) = p(t) + a(t)
     p(t) = K e(t)
     ȧ(t) = L e(t)  (with a(0) = 0)
     e(t) = θ_des(t) − θ_meas(t)

   where K and L are constants. Note that the control action is made up of two terms, p and a. The term p is proportional to the error, while term a's rate-of-change is proportional to the error.
   (a) Convince yourself (and me) that a block diagram for this strategy is as below (all missing signs on summing junctions are + signs). Note: There is one minor issue you need to consider: exchanging the order of differentiation with multiplication by a constant...

   [Block diagram: θ_meas is subtracted from θ_des to form e; e feeds a gain K on one path, and a gain L followed by an integrator on another; the two paths are summed to produce u.]
   (b) Create a Simulink model of the closed-loop system (i.e., process and controller, hooked up) using this new strategy. The step functions for θ_des and d should be, namely,

         θ_des(t) = 0 for t ≤ 1;  1 for 1 < t ≤ 7;  1.4 for t > 7
         d(t) = 0 for t ≤ 6;  0.4 for 6 < t ≤ 11;  0 for t > 11

       Make the measurement perfect, so that θ_meas = θ.

   (c) Simulate the closed-loop system for K = 5, L = 9. The initial condition for θ should be θ(0) = 0. On three separate axes (using subplot, stacked vertically, all with identical time-axes so they can be lined up for clarity) plot θ_des and d versus t; θ versus t; and u versus t.
   (d) Comment on the performance of this control strategy with regards to the goal of making θ follow θ_des, even in the presence of nonzero d. What aspect of the system response/behavior is insensitive to d? What signals are sensitive to d, even in the steady-state?

   (e) Suppose the process has up to 30% variability due to unknown effects. By that, suppose that the process ODE is

         θ̇(t) = α θ(t) + β u(t) − d(t)

       where α and β are unknown numbers, known to satisfy 0.7 ≤ α ≤ 1.3 and 0.7 ≤ β ≤ 1.3. Using for loops, and rand (this generates random numbers uniformly distributed between 0 and 1; hence 0.7 + 0.6*rand generates a random number uniformly distributed between 0.7 and 1.3), simulate the system 50 times (using different random numbers for both α and β), and plot the results on a single 3-axis (using subplot) graph (as in part 1c above). What aspect of the closed-loop system's response/behavior is sensitive to the process variability? What aspects are insensitive to the process variability?
   (f) Return to the original process model. Simulate the closed-loop system for K = 5, and five values of L, {1, 3.16, 10, 31.6, 100}. On two separate axes (using subplot and hold on), plot θ versus t and u versus t, with 5 plots (the different values of L) on each axis.

   (g) Discuss how the value of the controller parameter L appears to affect the performance.

   (h) Return to the case K = 5, L = 9. Now, use the transport delay block (found in the Continuous library in Simulink) so that θ_meas is a delayed (in time) version of θ. Simulate the system for 3 different values of time-delay, namely T = {0.001, 0.01, 0.1}. On one figure, superimpose all plots of θ versus t for the three cases.

   (i) Comment on the effect of time-delay in the measurement in terms of affecting the regulation (i.e., θ behaving like θ_des).

   (j) Return to the case of no measurement delay, and K = 5, L = 9. Now, use the quantizer block (found in the Nonlinear library in Simulink) so that θ_meas is the output of the quantizer block (with θ as the input). This captures the effect of measuring θ with an angle encoder. Simulate the system for 3 different levels of quantization, namely {0.001, 0.005, 0.025}. On one figure, make 3 subplots (one for each quantization level), and on each axis, graph both θ and θ_meas versus t. On a separate figure, make 3 subplots (one for each quantization level), graphing u versus t.

   (k) Comment on the effect of measurement quantization in terms of limiting the accuracy of regulation (i.e., θ behaving like θ_des), and on the jumpiness of the control action u.
NOTE: All of the computer work (parts 1c, 1e, 1f, 1h and 1j) should be automated
in a single, modestly documented script file. Turn in a printout of your Simulink
diagrams (3 of them), and a printout of the script file. Also include nicely formatted
figure printouts, and any derivations/comments that are requested in the problem
statement.
Consider an input/output system, with m inputs (denoted by d), q outputs (denoted by e), and governed by a set of n 1st-order, coupled differential equations, of the form

  ẋ_i(t) = f_i(x_1(t), ..., x_n(t), d_1(t), ..., d_m(t), t),  i = 1, ..., n
  e_i(t) = h_i(x_1(t), ..., x_n(t), d_1(t), ..., d_m(t), t),  i = 1, ..., q     (3.1)

where the functions f_i, h_i are given functions of the n variables x_1, x_2, ..., x_n, the m variables d_1, d_2, ..., d_m, and also explicit functions of t.
Given an initial condition vector x_0, and a forcing function d(t) for t ≥ t_0, we wish to solve for the solutions

  x(t) = [x_1(t); x_2(t); ...; x_n(t)],  e(t) = [e_1(t); e_2(t); ...; e_q(t)]

on the interval [t_0, t_F], given the initial condition

  x(t_0) = x_0.
ode45 solves for this using numerical integration techniques, such as 4th and 5th order Runge-
Kutta formulae. You should have learned about this in E7, and used ode45 extensively. You
can learn more about numerical integration by taking Math 128. We will not discuss this
important topic in detail in this class - please review your E7 material.
Remark: The GSI will give 2 interactive discussion sections on how to use Simulink, a
graphical-based tool to easily build and numerically solve ODE models by interconnecting
individual components, each of which are governed by ODE models. Simulink is part of
Matlab, and is available on the computers in Etcheverry. The Student Version of Matlab
also comes with Simulink (and the Control System Toolbox). The Matlab license from the
UC Berkeley software licensing arrangement has both Simulink (and the Control System
Toolbox included.
However, to quickly recap (in an elementary manner) how ODE solvers work, consider the
Euler method of solution. If the functions f and d are reasonably well behaved in x and t,
then the solution x(·) exists, is a continuous function of t, and in fact is differentiable at all
points. Hence, it is reasonable that a Taylor series for x at a given time t will be predictive
of the values of x(τ) for values of τ close to t.

If we do a Taylor expansion on a function x, and ignore the higher-order terms, we get an
approximation formula

x(t) + δ ẋ(t) ≈ x(t + δ)

Roughly, the smaller δ is, the closer the left-hand side is to the actual value of x(t + δ).
Euler's method propagates a solution to (3.1) by using this approximation repeatedly for a
fixed δ, called the stepsize. Hence, Euler's method gives that for any integer k ≥ 0, the
solution to (3.1) approximately satisfies

x((k + 1)δ) ≈ x(kδ) + δ f(x(kδ), d(kδ), kδ)

and so on. So, as long as you have a subroutine that can evaluate f(x, d, t), given values
of x, d and t, you can quickly propagate an approximate solution simply by calling the
subroutine once for every timestep. Computing the output e(t) simply involves evaluating
the function h(x(t), d(t), t) at the solution points.
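The Euler recursion is only a few lines of code. The sketch below is written in Python rather than Matlab, purely for illustration; the stepsize and the test system (ẋ = −x + d, with d(t) = 1 and x(0) = 0, whose exact solution is x(t) = 1 − e^{−t}) are arbitrary choices, not examples from the course.

```python
import math

def euler(f, d, x0, t0, tF, delta):
    """Propagate xdot = f(x, d, t) with Euler's method: x(t + delta) ~ x(t) + delta * f."""
    n = int(round((tF - t0) / delta))
    t, x = t0, x0
    ts, xs = [t], [x]
    for k in range(n):
        x = x + delta * f(x, d(t), t)   # one Euler step: evaluate f once per timestep
        t = t0 + (k + 1) * delta
        ts.append(t)
        xs.append(x)
    return ts, xs

# Test system: xdot = -x + d, with d(t) = 1 and x(0) = 0; exact solution x(t) = 1 - exp(-t)
ts, xs = euler(lambda x, d, t: -x + d, lambda t: 1.0, 0.0, 0.0, 5.0, 0.001)
print(abs(xs[-1] - (1 - math.exp(-5.0))))   # small discretization error
```

The one-subroutine-call-per-step structure is exactly the recursion described above; ode45 differs only in how it forms the update and in its adaptive stepsize.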
Higher-order methods achieve greater accuracy. In effect, more terms of the Taylor series
are used, involving matrices of partial derivatives, and even their derivatives,

df/dx,  d²f/dx²,  d³f/dx³,

but without actually requiring explicit knowledge of these derivatives of the function f.
3.3 Problems
1. A sealed-box loudspeaker is shown below.
[Figure: cutaway view of a sealed-box subwoofer, labeling the magnet, top plate, back plate,
pole piece, voice coil and former, dust cap, spider, woofer cone, surround, basket, vent,
electrical leads, the sealed enclosure, the signal from the amplifier, and the accelerometer
wire.]
Ignore the wire marked "accelerometer" mounted on the cone. We will develop a model
for acoustic radiation from a sealed-box loudspeaker.
The equations are
Speaker Cone: force balance,
Suspension/Surround:
Sealed Enclosure:
Fb(t) = P0 A (1 + A z(t) / V0)
Environment (Baffled Half-Space)
De = 46.62
This is an approximate model of the impedance seen by the face of the loudspeaker
as it radiates into an infinite half-space.
(a) Build a Simulink model for the system. Use the Subsystem grouping capability
to manage the complexity of the diagram. Each of the equations above should
represent a different subsystem. Make sure you think through the question "what
are the input(s) and what are the output(s)?" for each subsystem.
(b) Write a function that has two input arguments (denoted V and ω) and two output
arguments, zmax,pos and zmax,neg. The functional relationship is defined as follows:
Suppose Vin(t) = V sin(2πωt), where t is in seconds, and V is in volts. In other
words, the input is a sine wave at a frequency of ω Hertz. Simulate the loudspeaker
behavior for about 25/ω seconds. The displacement z of the cone will become
nearly sinusoidal by the end of the simulation. Let zmax,pos and zmax,neg be the
maximum positive and negative values of the displacement in the last full cycle
(i.e., the last 1/ω seconds of the simulation, which we will approximately decree
as the steady-state response to the input).
(c) Using bisection, determine (approximately) the value of V so that the steady-
state maximum (positive) excursion of z is 0.007 meters if the frequency of the
excitation is ω = 25. What is the steady-state minimum (negative) excursion of
z?
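The bisection itself is easy to automate. The sketch below is in Python for illustration only; in the assignment, the function being bisected on would wrap your Simulink simulation. Here zmax_pos is a made-up, monotonically increasing stand-in for the simulated steady-state excursion, not the loudspeaker model.

```python
def bisect(g, target, lo, hi, tol=1e-6):
    """Find V in [lo, hi] with g(V) = target, assuming g is increasing."""
    assert g(lo) <= target <= g(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) < target:
            lo = mid          # target lies in the upper half-interval
        else:
            hi = mid          # target lies in the lower half-interval
    return 0.5 * (lo + hi)

# Hypothetical stand-in for the simulation: a made-up monotone amplitude map,
# NOT the loudspeaker model; in the assignment this would run the Simulink model.
def zmax_pos(V):
    return 0.001 * V / (1.0 + 0.01 * V)

V = bisect(zmax_pos, 0.007, 0.0, 100.0)
print(V, zmax_pos(V))
```

Each iteration halves the interval, so about 27 simulations pin V down to one part in 10^8 of the initial bracket.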
4 State Variables
A collection of variables q1, q2, ..., qn is a suitable set of state variables if their values at
any time t0, together with

the external inputs acting on the system for all time t ≥ t0, and
all equations describing relationships between the variables qi and the external inputs,

uniquely determine the qi(t) for all t ≥ t0. In other words, the past history (before t0) of the
system's evolution is not important in determining its evolution beyond t0; all of the relevant
past information is embedded in the variables' values at t0.
Example: The system is a point mass, mass m. The point mass is acted on by an external
force f (t). The position of the mass is measured relative to an inertial frame, with coordinate
w, velocity v, as shown below in Fig. 2.
[Figure 2: a point mass m acted on by an external force f(t); its position w(t) and velocity
v(t) are measured relative to an inertial frame.]
Claim #1: The collection {w} is not a suitable choice for state variables. Why? Note that
for t ≥ t0, we have

w(t) = w(t0) + (t − t0) v(t0) + ∫_{t0}^{t} ∫_{t0}^{σ} (1/m) f(τ) dτ dσ

Hence, in order to determine w(t) for all t ≥ t0, it is not sufficient to know w(t0) and the
entire function f(t) for all t ≥ t0. You also need to know the value of v(t0).
Claim #2: The collection {v} is a legitimate choice for state variables. Why? Note that
for t ≥ t0, we have

v(t) = v(t0) + ∫_{t0}^{t} (1/m) f(τ) dτ

Hence, in order to determine v(t) for all t ≥ t0, it is sufficient to know v(t0) and the entire
function f(t) for all t ≥ t0.
Claim #3: The collection {w, v} is a legitimate choice for state variables. Why? Note that
for t ≥ t0, we have

w(t) = w(t0) + ∫_{t0}^{t} v(σ) dσ
v(t) = v(t0) + ∫_{t0}^{t} (1/m) f(τ) dτ

Hence, in order to determine w(t) and v(t) for all t ≥ t0, it is sufficient to know w(t0), v(t0),
and the entire function f(t) for all t ≥ t0.
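Claim #3 is easy to check numerically. The sketch below (Python for illustration; m = 2, f(t) ≡ 1 and the initial state are arbitrary values) propagates the two state equations ẇ = v, v̇ = f/m forward and compares against the closed-form w(t) = w(t0) + v(t0) t + f t²/(2m) for a constant force:

```python
m, f = 2.0, 1.0      # mass and constant applied force (arbitrary values)
w, v = 1.0, 0.5      # state at t0 = 0: position w(t0) and velocity v(t0)
dt, T = 1e-4, 3.0
for k in range(int(round(T / dt))):
    w_next = w + dt * v          # wdot = v
    v_next = v + dt * f / m      # vdot = f/m
    w, v = w_next, v_next        # update both states simultaneously

# Closed-form position under constant force, from the double integral above
exact = 1.0 + 0.5 * T + f * T ** 2 / (2 * m)
print(w, exact)
```

Knowing (w, v) at t0 and f(t) thereafter is enough to march the solution forward; knowing w alone would not be.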
In general, it is not too hard to pick a set of state variables for a system. The next few
sections explain some rule-of-thumb procedures for making such a choice.
Suppose for a system we choose some variables (x1, x2, ..., xn) as a possible choice of state
variables. Let d1, d2, ..., df denote all of the external influences (i.e., forcing functions) acting
on the system. Suppose we can derive the relationship between the x and d variables in the
form

ẋ1(t) = f1(x1(t), x2(t), ..., xn(t), d1(t), ..., df(t))
ẋ2(t) = f2(x1(t), x2(t), ..., xn(t), d1(t), ..., df(t))
⋮                                                         (4.1)
ẋn(t) = fn(x1(t), x2(t), ..., xn(t), d1(t), ..., df(t))
Then, the set {x1, x2, ..., xn} is a suitable choice of state variables. Why? Ordinary differ-
ential equation (ODE) theory tells us that, given an initial condition x(t0) = x0 and the
forcing functions di(t) for t ≥ t0, there is a unique solution x(t) which satisfies the initial
condition at t = t0, and satisfies the differential equations for t ≥ t0. Hence, the set
{x1, x2, ..., xn} constitutes a state-variable description of the system.
The equations in (4.1) are called the state equations for the system.
Consider a block diagram made up only of

integrators,
gains, and
static-nonlinear functions.

If the outputs of all of the integrators are labeled x1, x2, ..., xn, then the inputs to the
integrators are actually ẋ1, ẋ2, ..., ẋn.
The interconnection of all of the base components (integrators, gains, static nonlinear
functions) implies that each ẋk(t) will be a function of the values of x1(t), x2(t), ..., xn(t)
along with d1(t), d2(t), ..., df(t).
This puts the equations in the form of (4.1).
We have already determined that that form implies that the variables are state variables.
4.4 Problems
1. Shown below is a block diagram of a DC motor connected to an load inertia via a
flexible shaft. The flexible shaft is modeled as a rigid shaft (inertia J1 ) inside the
motor, a massless torsional spring (torsional spring constant Ks ) which connects to
the load inertia J2 . is the angular position of the shaft inside the motor, and is
the angular position of the load inertia.
[Block diagram: input u(t), summing junctions, gains 1/J1, Ks and 1/J2, and four
integrators, with output y(t).]
Choose state variables (use the rule given in class for block diagrams that do not contain
differentiators). Find matrices A, B and C such that the variables x(t), y(t) and u(t)
are related by

ẋ(t) = Ax(t) + Bu(t)
y(t) = Cx(t)

Hint: There are 4 state variables.
y[n](t) + a1 y[n−1](t) + ⋯ + an−1 y[1](t) + an y(t) = b0 u[m](t) + b1 u[m−1](t) + ⋯ + bm−1 u[1](t) + bm u(t)
                                                                                              (5.1)

where y[k] denotes the k'th derivative of the signal y(t), y[k] = dᵏy(t)/dtᵏ. These notes refer to equation
(5.1) as a HLODE (High-order, Linear Ordinary Differential Equation).
An alternate form involves many first-order equations, and many inputs. The general case
of this situation has n dependent variables, x1, x2, ..., xn, and m inputs, d1, d2, ..., dm. The
differential equations governing the evolution of the xi variables are

ẋ1(t) = a11 x1(t) + a12 x2(t) + ⋯ + a1n xn(t) + b11 d1(t) + b12 d2(t) + ⋯ + b1m dm(t)
ẋ2(t) = a21 x1(t) + a22 x2(t) + ⋯ + a2n xn(t) + b21 d1(t) + b22 d2(t) + ⋯ + b2m dm(t)
⋮
ẋn(t) = an1 x1(t) + an2 x2(t) + ⋯ + ann xn(t) + bn1 d1(t) + bn2 d2(t) + ⋯ + bnm dm(t)
We will learn how to solve these differential equations, and more importantly, we will discover
how to make broad qualitative statements about these solutions. Much of our intuition
about control system design and its limitations will be drawn from our understanding of the
behavior of these types of equations.
A great many models of physical processes that we may be interested in controlling are not
linear, as above. Nevertheless, as we shall see much later, the study of linear systems is a
vital tool in learning how to control even nonlinear systems. Essentially, feedback control
algorithms make small adjustments to the inputs based on measured outputs. For small
deviations of the input about some nominal input trajectory, the output of a nonlinear
system looks like a small deviation around some nominal output. The effects of the small
input deviations on the output are well approximated by a linear (possibly time-varying)
system. It is therefore essential to undertake a study of linear systems.
In this section, we review the solutions of linear, first-order differential equations with
constant coefficients and time-dependent forcing functions. The concepts of

stability,
time-constant, and
sinusoidal steady-state

are introduced. A significant portion of the remainder of the course generalizes these to
higher-order, linear ODEs, with emphasis on applying these concepts to the analysis and
design of feedback systems.
Consider the first-order differential equation

ẋ(t) = a x(t) + b u(t)                                    (5.2)

where u is the input, x is the dependent variable, and a and b are constant coefficients
(i.e., numbers). For example, the equation 6ẋ(t) + 3x(t) = u(t) can be manipulated into
ẋ(t) = −(1/2) x(t) + (1/6) u(t). Given the initial condition x(0) = x0 and an arbitrary input
function u(t) defined for t ∈ [0, ∞), a solution of Eq. (5.2) must satisfy

xs(t) = e^{at} x0  +  ∫_0^t e^{a(t−τ)} b u(τ) dτ          (5.3)
        (free resp.)    (forced resp.)
You should derive this with the integrating factor method (problem 1). Also note that
the derivation makes it evident that given the initial condition and forcing function,
there is at most one solution to the ODE. In other words, if solutions to the ODE
exist, they are unique, once the initial condition and forcing function are specified.
We can also easily check that (5.3) is indeed the solution of (5.2) by verifying two facts:
the function xs satisfies the differential equation for all t ≥ 0, and xs satisfies the given
initial condition at t = 0. In fact, the theory of differential equations tells us that there is
one and only one function that satisfies both (existence and uniqueness of solutions). For
this ODE, we can prove this directly, for completeness' sake. Above, we showed that if x
satisfies (5.2), then it must be of the form in (5.3). Next we show that the function in (5.3)
does indeed satisfy the differential equation (and initial condition). First check the value of
xs(t) at t = 0:

xs(0) = e^{a·0} x0 + ∫_0^0 e^{a(0−τ)} b u(τ) dτ = x0.
Next, differentiating xs(t) (using Leibnitz's rule to differentiate the integral) gives

ẋs(t) = a e^{at} x0 + b u(t) + ∫_0^t a e^{a(t−τ)} b u(τ) dτ = a xs(t) + b u(t),

as desired.
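The closed-form solution (5.3) can also be checked against a direct numerical integration. The sketch below is Python for illustration; the values a = −2, b = 3, x0 = 4 and the unit-step input are arbitrary choices.

```python
import math

a, b, x0 = -2.0, 3.0, 4.0

def x_closed_form(t):
    # Eq. (5.3) for the constant input u = 1: free response + forced response
    return math.exp(a * t) * x0 + (b / -a) * (1 - math.exp(a * t))

# Direct Euler integration of xdot = a x + b u with u(t) = 1
dt, T = 1e-4, 2.0
x = x0
for k in range(int(round(T / dt))):
    x += dt * (a * x + b * 1.0)

print(x, x_closed_form(T))   # agree to within the discretization error
```

The two numbers match to a few decimal places, consistent with uniqueness: the integral formula and the simulated trajectory are the same solution.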
Fig. 3 shows the normalized free response (i.e., u(t) = 0) of the solution Eq. (5.3) of ODE
(5.2) when a < 0. Since a < 0, the free response decays to 0 as t → ∞, regardless of the
initial condition.

[Figure 3: the normalized free response, Normalized State x/x0 versus Normalized Time,
decaying exponentially from 1 toward 0; the tangent at t = 0 reaches zero at one time
constant.]

Because of this (and a few more properties that will be derived in upcoming
sections), if a < 0, the system in eq. (5.2) is called stable (or sometimes, to be more precise,
asymptotically stable). Notice that the slope at time t = 0 is ẋ(0) = a x0, and T = 1/|a| is
the time at which x(t) would cross 0 if the initial slope were continued, as shown in the figure.
The time

T := 1/|a|
is called the time constant of a first order asymptotically stable system (a < 0). T is
expressed in the units of time, and is an indication of how fast the system responds. The
larger |a|, the smaller T and the faster the response of the system.
Notice that

xfree(T)/x0 = 1/e ≈ .37 = 37%
xfree(2T)/x0 = 1/e² ≈ .13 = 13%
xfree(3T)/x0 = 1/e³ ≈ .05 = 5%
xfree(4T)/x0 = 1/e⁴ ≈ .018 ≈ 2%
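These percentages are just e^{−k}, independent of the particular value of a, as a two-line check confirms (a = −0.5 is an arbitrary choice):

```python
import math

a = -0.5                 # arbitrary stable pole
T = 1.0 / abs(a)         # time constant
# x_free(kT)/x0 = exp(a*k*T) = exp(-k), for k = 1, 2, 3, 4
fracs = [math.exp(a * k * T) for k in range(1, 5)]
print([round(fr, 3) for fr in fracs])   # [0.368, 0.135, 0.05, 0.018]
```

Since aT = −1 for any a < 0, the free response always sheds about 63% of its remaining amplitude every time constant.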
If a > 0, the free response of ODE (5.2) is unstable, i.e., lim_{t→∞} |x(t)| = ∞. When a = 0,
x(t) = x0 for all t ≥ 0, and we can say that this system is limitedly stable or limitedly
unstable.
We first consider the system response to a step input. In this case, the input u(t) is given by

u(t) = 0 for t < 0,   u(t) = um for t ≥ 0

where um is a constant, and x(0) = 0. The solution (5.3) yields

x(t) = −(b/a) (1 − e^{at}) um.

If a < 0, the steady-state output xss is xss = −(b/a) um. Note that for any t,

|x(t + 1/|a|) − xss| = (1/e) |x(t) − xss| ≈ 0.37 |x(t) − xss|
Rather than consider constant inputs, we can also consider inputs that are bounded by a
constant, and prove, under the assumption of stability, that the response remains bounded
as well (and the bound is a linear function of the input bound). Specifically, if a < 0, and
if u(t) is uniformly (in time) bounded by a positive number M, then the resulting solution
x(t) will be uniformly bounded by |b|M/|a|. To derive this, suppose that |u(τ)| ≤ M for all τ ≥ 0.
Then for any t ≥ 0, we have

|x(t)| = |∫_0^t e^{a(t−τ)} b u(τ) dτ|
       ≤ ∫_0^t e^{a(t−τ)} |b| |u(τ)| dτ
       ≤ ∫_0^t e^{a(t−τ)} |b| M dτ
       = (|b|M/|a|) (1 − e^{at})
       ≤ |b|M/|a|

Thus, if a < 0, x(0) = 0 and |u(t)| ≤ M, the output is bounded by |x(t)| ≤ |bM/a|. This is
called a bounded-input, bounded-output (BIBO) system. If the initial condition is non-zero,
the output x(t) will still be bounded, since the magnitude of the free response monotonically
converges to zero, and the response x(t) is simply the sum of the free and forced responses.
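The bound is approached by the constant input u(t) ≡ M, which drives x toward |b|M/|a| without ever exceeding it. A quick Euler check (Python sketch; a = −1.5, b = 2, M = 1 are arbitrary values):

```python
a, b, M = -1.5, 2.0, 1.0
bound = abs(b) * M / abs(a)          # the BIBO bound |b|M/|a|

# Worst-case input u(t) = M, starting from x(0) = 0
dt = 1e-3
x, worst = 0.0, 0.0
for k in range(int(round(20.0 / dt))):
    x += dt * (a * x + b * M)
    worst = max(worst, abs(x))       # track the largest excursion

print(worst, bound)                  # worst approaches, but never exceeds, the bound
```

The simulated peak sits just below |b|M/|a| = 4/3, confirming both that the bound holds and that it cannot be improved.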
Note: Assuming b ≠ 0, the system is not bounded-input/bounded-output when a ≥ 0. In
that context, from now on, we will refer to the a = 0 case (termed limitedly stable or limitedly
unstable before) as unstable. See problem 6 in Section 5.6.
Assume the system is stable, so a < 0. Now, suppose the input signal u is bounded and
approaches 0 as t → ∞. It seems natural that the response x(t) should also approach 0
as t → ∞, and deriving this fact is the purpose of this section. While the derivation
is interesting, the most important thing is the result: for a stable system, specifically
(5.2) with a < 0, if the input u satisfies lim_{t→∞} u(t) = 0, then the solution x satisfies
lim_{t→∞} x(t) = 0.

On to the derivation: first, recall what lim_{t→∞} z(t) = 0 means: for any ε > 0, there is a
T > 0 such that for all t > T, |z(t)| < ε.
Assume x0 is given, and consider such an input. Since u is bounded, there is a positive
constant B such that |u(t)| ≤ B for all t ≥ 0. Also, for every ε > 0, there is

a Tε,1 > 0 such that |u(t)| < ε|a|/(3|b|) for all t ≥ Tε,1/2,
a Tε,2 > 0 such that e^{at/2} < ε|a|/(3B|b|) for all t ≥ Tε,2,
a Tε,3 > 0 such that e^{at} < ε/(3|x0|) for all t ≥ Tε,3.

Let Tε be the maximum of {Tε,1, Tε,2, Tε,3}. For t > Tε, the response x(t) satisfies

x(t) = e^{at} x0 + ∫_0^t e^{a(t−τ)} b u(τ) dτ
     = e^{at} x0 + ∫_0^{t/2} e^{a(t−τ)} b u(τ) dτ + ∫_{t/2}^t e^{a(t−τ)} b u(τ) dτ

1. Since t ≥ Tε,3,

|e^{at} x0| = e^{at} |x0| < (ε/(3|x0|)) |x0| = ε/3

2. Since |u(t)| ≤ B for all t,

|∫_0^{t/2} e^{a(t−τ)} b u(τ) dτ| ≤ ∫_0^{t/2} e^{a(t−τ)} |b u(τ)| dτ
                                 ≤ B|b| ∫_0^{t/2} e^{a(t−τ)} dτ
                                 = B|b| (1/|a|) e^{at/2} (1 − e^{at/2})
                                 ≤ B|b| (1/|a|) e^{at/2}

Since t ≥ Tε,2, e^{at/2} < ε|a|/(3B|b|), which implies

|∫_0^{t/2} e^{a(t−τ)} b u(τ) dτ| < ε/3

3. Since t ≥ Tε,1, every τ in [t/2, t] satisfies τ ≥ Tε,1/2, so |u(τ)| < ε|a|/(3|b|), and

|∫_{t/2}^t e^{a(t−τ)} b u(τ) dτ| ≤ (ε|a|/(3|b|)) |b| ∫_{t/2}^t e^{a(t−τ)} dτ ≤ (ε|a|/3)(1/|a|) = ε/3
Combining these bounds implies that for any t > Tε, |x(t)| < ε. Since ε was an arbitrary positive
number, this completes the proof that lim_{t→∞} x(t) = 0.
5.2.5 Linearity
Why is the differential equation in (5.2) called a linear differential equation? Suppose x1 is
the solution to the differential equation with the initial condition x0,1 and forcing function u1,
and x2 is the solution to the differential equation with the initial condition x0,2 and forcing
function u2. In other words, the function x1 satisfies x1(0) = x0,1 and, for all t > 0,

ẋ1(t) = a x1(t) + b u1(t).

Likewise, the function x2 satisfies x2(0) = x0,2 and, for all t > 0,

ẋ2(t) = a x2(t) + b u2(t).
Now take two constants α and β. What is the solution to the differential equation with
initial condition x(0) = α x0,1 + β x0,2, and forcing function u(t) = α u1(t) + β u2(t)? It is easy
(just plug into the differential equation, or the integral form of the solution) to see that the
solution is

x(t) = α x1(t) + β x2(t)

This is often called the superposition property. In this class, we will more typically use the
term linear, indicating that the solution of the differential equation is a linear function of
the (initial condition, forcing function) pair.
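For constant inputs, superposition can be checked directly against the closed-form solution of (5.2). The sketch below is Python for illustration; a, b, the constants α, β, and the two (initial condition, input) pairs are all arbitrary values.

```python
import math

a, b = -1.0, 2.0

def sol(t, x0, u_const):
    # closed-form solution of xdot = a x + b u for a constant input
    return math.exp(a * t) * x0 + (b / -a) * (1 - math.exp(a * t)) * u_const

alpha, beta = 3.0, -0.5
x01, u1 = 1.0, 2.0        # first (initial condition, input) pair
x02, u2 = -2.0, 0.5       # second (initial condition, input) pair

t = 1.7
combined = sol(t, alpha * x01 + beta * x02, alpha * u1 + beta * u2)
superposed = alpha * sol(t, x01, u1) + beta * sol(t, x02, u2)
print(combined, superposed)   # equal, by linearity
```

The two values agree to machine precision because (5.3) is linear in the pair (x0, u): scaling and adding the data scales and adds the solution.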
If the system is stable (a < 0), and the input u(t) has a limit, lim_{t→∞} u(t) = ū, then by
combining the results of Sections 5.2.4, 5.2.2 and 5.2.5, it is easy to conclude that x(t)
approaches a limit as well, namely

lim_{t→∞} x(t) = −(b/a) ū
If the system is stable (i.e., A < 0), it might be intuitively clear that if u is a sinusoid, then
y will approach a steady-state behavior that is sinusoidal, at the same frequency, but with a
different amplitude and shifted in time. In this section, we make this idea precise.

Take ω ≥ 0 as the input frequency, and (although not physically relevant) let ū be a fixed
complex number and take the input function u(t) to be

u(t) = ū e^{jωt}
Hence, we have verified our initial claim: if the input is a complex sinusoid, then the steady-
state output is a complex sinusoid at the same exact frequency, but amplified by a complex
gain of

D + CB/(jω − A).
G can be calculated rather easily using a computer, simply by evaluating the expression in
(5.7) at a large number of frequency points ω ∈ R. The dependence on ω is often graphed
using two plots, namely the magnitude |G(ω)| versus ω, and the angle ∠G(ω) versus ω.
This plotting arrangement is called a Bode plot, named after one of the modern-day giants
of control system theory, Hendrik Bode.
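A sketch of the computation (Python here, though in the course you would use Matlab; A = −2, B = 1, C = 4, D = 0 are arbitrary values). The magnitude and phase arrays are exactly what a Bode plot would display on logarithmic axes:

```python
import cmath

A, B, C, D = -2.0, 1.0, 4.0, 0.0   # arbitrary stable first-order example

def G(w):
    # frequency response D + CB/(jw - A)
    return D + C * B / (1j * w - A)

# log-spaced frequency grid from 0.01 to 100 rad/s
ws = [10 ** (-2 + 4 * k / 200) for k in range(201)]
mags = [abs(G(w)) for w in ws]
phases = [cmath.phase(G(w)) for w in ws]

print(mags[0], abs(G(0)))   # low-frequency magnitude approaches the DC gain CB/(-A) = 2
```

Plotting log10(mags) and phases against log10 of the grid gives the two Bode curves: the magnitude rolls off past |A| rad/s and the phase heads toward −90 degrees.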
What is the meaning of a complex solution to the differential equation (5.4)? Suppose that
the functions u, x and y are complex, and solve the ODE. Denote the real part of the function
u as uR, and the imaginary part as uI (similarly for x and y). Then, for example, xR and xI
are real-valued functions, and for all t, x(t) = xR(t) + j xI(t). Differentiating gives

dx/dt = dxR/dt + j dxI/dt
Hence, if x, u and y satisfy the ODE, we have (dropping the (t) argument for clarity)

dxR/dt + j dxI/dt = A (xR + j xI) + B (uR + j uI)
yR + j yI = C (xR + j xI) + D (uR + j uI)

But the real and imaginary parts must be equal individually, so exploiting the fact that the
coefficients A, B, C and D are real numbers, we get

dxR/dt = A xR + B uR
yR = C xR + D uR

and

dxI/dt = A xI + B uI
yI = C xI + D uI
Hence, if (u, x, y) are functions which satisfy the ODE, then both (uR , xR , yR ) and (uI , xI , yI )
also satisfy the ODE.
Let H = HR + j HI be a complex number, with magnitude |H| and angle ∠H, so that
cos ∠H = HR/|H| and sin ∠H = HI/|H|, and let θ be any real number. Then,

Re[H e^{jθ}] = Re[(HR + j HI)(cos θ + j sin θ)]
             = HR cos θ − HI sin θ
             = |H| [(HR/|H|) cos θ − (HI/|H|) sin θ]
             = |H| [cos ∠H cos θ − sin ∠H sin θ]
             = |H| cos(θ + ∠H)

Im[H e^{jθ}] = Im[(HR + j HI)(cos θ + j sin θ)]
             = HR sin θ + HI cos θ
             = |H| [(HR/|H|) sin θ + (HI/|H|) cos θ]
             = |H| [cos ∠H sin θ + sin ∠H cos θ]
             = |H| sin(θ + ∠H)
Now consider the differential equation/frequency response case. Let G(ω) denote the fre-
quency-response function. If the input is u(t) = cos ωt = Re(e^{jωt}), then the steady-state
output y will satisfy

y(t) = |G(ω)| cos(ωt + ∠G(ω))

A similar calculation holds for sin, and these are summarized below.
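This steady-state formula can be checked by brute-force simulation. The sketch below is Python for illustration; A = −1, B = C = 1, D = 0 and ω = 2 are arbitrary values. It integrates the ODE with u(t) = cos ωt, discards the transient, and compares against |G(ω)| cos(ωt + ∠G(ω)):

```python
import math, cmath

A, B, C, D = -1.0, 1.0, 1.0, 0.0   # arbitrary stable system
w = 2.0
G = D + C * B / (1j * w - A)       # frequency response at the input frequency

dt, T = 1e-4, 20.0
x, err = 0.0, 0.0
for k in range(int(round(T / dt))):
    t = k * dt
    u = math.cos(w * t)
    if t > 10.0:                   # transients are negligible after ~10 time constants
        y = C * x + D * u
        pred = abs(G) * math.cos(w * t + cmath.phase(G))
        err = max(err, abs(y - pred))
    x += dt * (A * x + B * u)      # Euler step of xdot = A x + B u

print(err)   # small: simulated steady state matches |G| cos(wt + angle(G))
```

After the free response dies out, the simulated output is indistinguishable (to within the integration error) from the predicted amplitude-scaled, phase-shifted cosine.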
If the system in (5.4) is stable (A < 0), combine the results of Sections 5.2.4, 5.3 and 5.2.5
to obtain the following result: Suppose A < 0, ω ≥ 0, and ū is a constant. If the input u is
of the form

u(t) = ū e^{jωt} + z(t)

where lim_{t→∞} z(t) = 0, then the output y is eventually sinusoidal as well. Informally, we
conclude that eventually sinusoidal inputs lead to eventually sinusoidal outputs, and say that
the system has the steady-state, sinusoidal gain (SStG) property. Note that the relationship
between the sinusoidal terms is the frequency-response function.
In actual feedback systems, measurements from sensors are used to make decisions on what
corrective action needs to be taken. Often in analysis, we will assume that the time from
when the measurement occurs to when the corresponding action takes place is negligible (i.e.,
zero), since this is often performed with modern, high-speed electronics. However, in reality,
there is a time delay, so that describing the system's behavior involves relationships among
variables at different times. For instance, a simple first-order delay-differential equation is

ẋ(t) = a1 x(t) + a2 x(t − T)                              (5.8)

where T ≥ 0 is a fixed number. We assume that for T = 0, the system is stable, so a1 + a2 < 0.
Since we are studying the effect of delay, we also assume that a2 ≠ 0. When T = 0, the
homogeneous solutions are of the form x(t) = e^{(a1+a2)t} x(0), which decay exponentially to zero.
As the constant number T increases from 0, the homogeneous solutions change, becoming
complicated expressions that are challenging to derive. It is a fact that there is a critical
value of T, called Tc, such that

for all T satisfying 0 ≤ T < Tc, the homogeneous solutions of (5.8) decay to zero as
t → ∞, and
at T = Tc, (5.8) has a purely sinusoidal homogeneous solution, at some frequency ωc.

Hence, we can determine Tc (and ωc) by checking the conditions for e^{jωt} to be a homogeneous
solution of (5.8).
Plugging in gives

jω e^{jωt} = a1 e^{jωt} + a2 e^{jω(t−T)}

for all t. Since e^{jωt} ≠ 0, divide, leaving

jω = a1 + a2 e^{−jωT}.
Since this equality relates complex numbers (which have a real and an imaginary part), it can
be thought of as 2 equations in 2 unknowns (ω and T). We know that regardless of ω and
T, it always holds that |e^{−jωT}| = 1, so it must be that

|jω − a1| = |a2|

which implies ωc = √(a2² − a1²). Then Tc is the smallest positive number such that jωc =
a1 + a2 e^{−jωc Tc}.
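The two steps (solve for ωc, then for the smallest positive Tc) are easy to automate. A Python sketch, exercised on the classic example ẋ(t) = −x(t − T) (a1 = 0, a2 = −1), for which the critical values ωc = 1 and Tc = π/2 are well known:

```python
import math, cmath

def delay_margin(a1, a2):
    """Critical delay for xdot(t) = a1 x(t) + a2 x(t - T); assumes a1 + a2 < 0, |a2| > |a1|."""
    wc = math.sqrt(a2 ** 2 - a1 ** 2)
    # need exp(-j wc Tc) = (j wc - a1) / a2
    theta = cmath.phase((1j * wc - a1) / a2)   # angle in (-pi, pi]
    Tc = -theta / wc
    while Tc <= 0:
        Tc += 2 * math.pi / wc                 # wrap to the smallest positive solution
    return wc, Tc

wc, Tc = delay_margin(0.0, -1.0)   # classic example: xdot(t) = -x(t - T)
print(wc, Tc)                      # wc = 1, Tc = pi/2
```

The same function can be reused in problem 11 once A1 and A2 are expressed in terms of a, b, c and K.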
5.5 Summary

In this section, we studied the free and forced response of linear, first-order differential
equations. Several concepts and properties were established, including

stability;
time-constant;
sinusoidal steady-state.

These concepts will be investigated for higher-order differential equations in later sections.
Many of the principles learned here carry over to those as well. For that reason, it is
important that you develop a mastery of the behavior of forced, first-order systems.
In upcoming sections, we study simple feedback systems that can be analyzed using only
1st-order differential equations, using all of the facts about 1st-order systems that have been
derived.
5.6 Problems
1. Use the integrating factor method to derive the solution given in equation (5.3) to the
differential equation (5.2).
This simple idea is used repeatedly when bounding the output response in terms of
bounds on the input forcing function.
3. Work out the integral in the last line of equation (5.5), deriving equation (5.6).
4. Consider the first-order differential equation

ẋ(t) = a x(t) + b u(t)

with a < 0.
(a) Let τ denote the time-constant, and γ denote the steady-state gain from u → x.
Solve for τ and γ in terms of a and b. Also, invert these solutions, expressing a
and b in terms of the time-constant and steady-state gain.
(b) Suppose τ > 0. Consider a first-order differential equation of the form ẋ(t) =
−(1/τ) x(t) + (γ/τ) u(t). Is this system stable? What is the time-constant? What is the
steady-state gain from u → x? Note that this is a useful manner to write a first-
order equation, since the time-constant and steady-state gain appear in a simple
manner.
(c) Suppose τ = 1, γ = 2. Given the initial condition x(0) = 4, and the input signal
u

u(t) = 1 for 0 ≤ t < 5
u(t) = 2 for 5 ≤ t < 10
u(t) = 6 for 10 ≤ t < 10.1
u(t) = 3 for 10.1 ≤ t < ∞,

sketch a reasonably accurate graph of x(t) for t ranging from 0 to 20. The sketch
should be based on your understanding of a first-order system's response (using
its time-constant and steady-state gain), not by doing any particular inte-
gration.
(d) Now suppose τ = 0.001, γ = 2. Given the initial condition x(0) = 4, and the
input signal u

u(t) = 1 for 0 ≤ t < 0.005
u(t) = 2 for 0.005 ≤ t < 0.01
u(t) = 6 for 0.01 ≤ t < 0.0101
u(t) = 3 for 0.0101 ≤ t < ∞,

sketch a reasonably accurate graph of x(t) for t ranging from 0 to 0.02. The
sketch should be based on your understanding of a first-order system's response,
not by doing any particular integration. In what manner is this the same as
the response in part 4c?
Hint: you can do this in 2 different manners: carry out the convolution integral, or
verify that the proposed solution satisfies the initial condition (at t = 0) and satisfies
the differential equation for all t > 0. Both are useful exercises, but you only need to
do one for the assignment. Note that if we ignore the decaying exponential part of the
solution, then the steady-state solution is also a ramp (with the same slope, since the
steady-state gain of the system is 1), but it is delayed from the input by τ time-units.
This gives us another interpretation of the time-constant of a first-order linear system
(i.e., ramp-input leads to ramp-output, delayed by τ).
Make an accurate sketch of u(t) and x(t) (versus t) on the same graph, for x0 = 0,
= 3 and A = 0.5.
6. In the notes and lecture, we established that if a < 0, then the system ẋ(t) = ax(t) +
bu(t) is bounded-input/bounded-output (BIBO) stable. In this problem, we show that
the linear system ẋ(t) = ax(t) + bu(t) is not BIBO stable if a ≥ 0. Suppose b ≠ 0.
(a) (a = 0 case): Show that there is an input u such that |u(t)| ≤ 1 for all t ≥ 0, but
the response x(t) satisfying ẋ(t) = 0 · x(t) + bu(t), with initial condition x(0) = 0,
is not bounded as a function of t. Hint: Try the constant input u(t) = 1 for
all t ≥ 0. What is the response x? Is there a finite number which bounds |x(t)|
uniformly over all t?
(b) Take a > 0. Show that there is an input u such that |u(t)| ≤ 1 for all t ≥ 0, but
the response x(t) satisfying ẋ(t) = ax(t) + bu(t), with initial condition x(0) = 0,
grows exponentially (without bound) with t.
(a) Starting from initial condition x(0) = 0, what is the response for t ≥ 0 due to the
unit-step input

u(t) = 0 for t ≤ 0,   u(t) = 1 for t > 0

Hint: Since x(0) = 0 and u(0) = 0, it is clear from the definition of y that
y(0) = 0. For t > 0, x converges exponentially to its limit, and y differs from x
only by scaling (c) and the addition of du(t), which for t > 0 is just d.
(b) Compute, and sketch, the response for a = −1; b = 1; c = 1; d = 1
(c) Compute and sketch the response for a = −1; b = 1; c = 2; d = 1
(d) Explain/justify the following terminology:

the steady-state-gain from u → y is d − cb/a
the instantaneous-gain from u → y is d
8. (a) Suppose ω > 0 and φ > 0. Let y1(t) := sin ωt, and y2(t) = sin(ωt − φ). Explain
what is meant by the statement that the signal y2 lags the signal y1 by φ/2π of a
period.
(b) Let φ := π/3. On 3 separate graphs, plot 4 periods of the sine-signals listed below.
i. sin 0.1t and sin(0.1t − φ)
ii. sin t and sin(t − φ)
iii. sin 10t and sin(10t − φ)
(c) Explain how the graphs in part 8b confirm the claim in part 8a.
(a) Using the convolution integral for the forced response, find the output y(t) for
t ≥ 0, starting from the initial condition x(0) = 0, subject to the inputs

u(t) = 1 for t ≥ 0
u(t) = sin(t) for t ≥ 0 (you will probably need to do two steps of integration-
by-parts).

(b) For this first-order system, the frequency-response function G(ω) is

G(ω) = cb / (jω − a)

Make a plot of

log10 |G(ω)| versus log10 ω

for 0.001 ≤ ω ≤ 1000 for two sets of values: system S1 with parameters (a =
−1, b = 1, c = 1) and system S2 with parameters (a = −10, b = 10, c = 1). Put
both magnitude plots on the same axis. Also make a plot of ∠G(ω) versus log10 ω
for 0.001 ≤ ω ≤ 1000 for both systems. You can use the Matlab command angle,
which returns (in radians) the angle of a nonzero complex number. Put both
angle plots on the same axis.
(c) What is the time-constant and steady-state gain (from u → y) of each system?
How is the steady-state gain related to G(0)?
(d) For each of the following cases, compute and plot y(t) versus t:
i. S1 with x(0) = 0, u(t) = 1 for t ≥ 0
ii. S2 with x(0) = 0, u(t) = 1 for t ≥ 0
iii. S1 with x(0) = 0, u(t) = sin(0.1t) for t ≥ 0
iv. S2 with x(0) = 0, u(t) = sin(0.1t) for t ≥ 0
v. S1 with x(0) = 0, u(t) = sin(t) for t ≥ 0
vi. S2 with x(0) = 0, u(t) = sin(t) for t ≥ 0
vii. S1 with x(0) = 0, u(t) = sin(10t) for t ≥ 0
viii. S2 with x(0) = 0, u(t) = sin(10t) for t ≥ 0
Put cases (i), (ii) on the same graph, cases (iii), (iv) on the same graph, cases (v),
(vi) on the same graph, and cases (vii), (viii) on the same graph. On each
graph, also plot u(t). In each case, pick the overall duration so that the limiting
behavior is clear, but not so large that the graph is cluttered. Be sure to get
the steady-state magnitude and phasing of the response y (relative to u) correct.
(e) Compare the steady-state sinusoidal responses that you computed and
plotted in 9d with the frequency-response functions that are plotted in part 9b. Il-
lustrate how the frequency-response function gives, as a function of frequency,
the steady-state response of the system to a sine-wave input of any frequency.
Mark the relevant points of the frequency-response curves.
(a) Comment on the effect parameters a and b have on the step responses in cases
(a)-(b).
(b) Comment on the amplification (or attenuation) of sinusoidal inputs, and its rela-
tion to the frequency ω.
(c) Based on the speed of the response in (a)-(b) (note the degree to which y follows
u, even though u has an abrupt change), are the sinusoidal responses in (c)-(h)
consistent?
u(t) = K y(t − T)
(a) Eliminate u and y from the equations to obtain a delay-differential equation for
x of the form

ẋ(t) = A1 x(t) + A2 x(t − T)

The parameters A1 and A2 will be functions of a, b, c and K.
(b) Assume T = 0 (i.e., no delay). Under what condition is the closed-loop system
stable?
(c) Following the derivation in section 5.4 (and the slides), derive the value of the
smallest delay that causes instability for the five cases
i. a = 0, b = 1, c = 1, K = 5
ii. a = 1, b = 1, c = 1, K = 4
iii. a = 1, b = 1, c = 1, K = 6
iv. a = 3, b = 1, c = 1, K = 2
v. a = 3, b = 1, c = 1, K = 2
Also determine the frequency at which instability will occur.
12. Consider a system with input u, and output y governed by the differential equation

ẏ(t) + a1 y(t) = b0 u̇(t) + b1 u(t)                        (5.10)

This is different than what we have covered so far, because the derivative of the input
shows up in the right-hand side (the overall function forcing y, from the ODE point
of view). Note that setting b0 = 0 gives an ODE more similar to what we considered
earlier in the class.
(a) Let q(t) := y(t) − b0 u(t). By substitution, find the differential equation governing
the relationship between u and q. This should look familiar.
(b) Assume that the system is at rest (i.e., y ≡ 0, u ≡ 0, and hence q ≡ 0 too), and
at some time, say t = 0, the input u changes from 0 to ū (e.g., a step-function
input), specifically

u(t) = 0 for t ≤ 0,   u(t) = ū for t > 0

Solve for q, using the differential equation found in part 12a, using initial condition
q(0) = 0.
(c) Since y = q + b0 u, show that the step-response of (5.10), starting from y(0) = 0,
is

y(t) = (b1/a1) ū + (b0 ū − (b1/a1) ū) e^{−a1 t}   for t > 0
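The claimed step response is easy to sanity-check numerically, using the part-12a substitution q = y − b0 u, whose ODE works out to q̇ = −a1 q + (b1 − a1 b0) u. The sketch below is Python for illustration; a1 = 1, b0 = 2, b1 = 0.5 and ū = 1 are arbitrary values.

```python
import math

a1, b0, b1, ubar = 1.0, 2.0, 0.5, 1.0   # arbitrary values

def y_formula(t):
    # claimed step response of (5.10) for t > 0
    return (b1 / a1) * ubar + (b0 * ubar - (b1 / a1) * ubar) * math.exp(-a1 * t)

# Simulate qdot = -a1 q + (b1 - a1*b0) u with q(0) = 0 and u = ubar; then y = q + b0 u
dt, T = 1e-4, 3.0
q = 0.0
for k in range(int(round(T / dt))):
    q += dt * (-a1 * q + (b1 - a1 * b0) * ubar)
y_sim = q + b0 * ubar

print(y_sim, y_formula(T))   # agree to within the discretization error
```

Note the instantaneous jump: y_formula(0) = b0 ū, consistent with b0 being the instantaneous gain from u to y.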
(d) Take a1 = 1. Draw the response for b0 = 2 and five different values of b1 , namely
b1 = 0, 0.2, 1, 2, 4.
(e) Take a1 = 1. Draw the response for b0 = 1 and five different values of b1 , namely
b1 = 0, 0.2, 1, 2, 4.
(f) Take a1 = 1. Draw the response for b0 = 0 and five different values of b1 , namely
b1 = 0, 0.2, 1, 2, 4.
(g) Suppose that a1 > 0 and b0 = 1. Draw the step response for two cases: b1 = 0.9a1
and b1 = 1.1a1. Comment on the step response for 0 < a1 ≈ b1. What happens if
a1 < 0 (even if b1 ≈ a1, but not exactly equal)?
13. (a) Consider the cascade connection of two first-order, stable systems.
By stable, we mean both A1 < 0 and A2 < 0. The cascade connection is shown
pictorially below.

u → [S1] → y1 → [S2] → y

Suppose that the frequency response of System 1 is M1(ω), φ1(ω) (or just the
complex G1(ω)), and the frequency response of System 2 is M2(ω), φ2(ω) (i.e., the
complex G2(ω)). Now, suppose that ω is a fixed real number, and u(t) = sin ωt.
Show that the steady-state behavior of y(t) is simply

y(t) = M1(ω) M2(ω) sin(ωt + φ1(ω) + φ2(ω))
(b) Let G denote the complex function representing the frequency response (forcing-
frequency-dependent amplitude magnification and phase shift, combined into
a complex number) of the cascaded system. How is G related to G1 and G2?
Hint: Remember that for complex numbers G and H, |GH| = |G||H| and
∠(GH) = ∠G + ∠H.
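The hint can be verified numerically. The sketch below (Python; the first-order form ẋ = Ax + Bu, y = x, and all numeric values are assumptions for illustration) checks that magnitudes multiply and phases add for a cascade:

```python
import cmath

def first_order_freqresp(A, B, w):
    """Frequency response G(w) = B / (jw - A) of xdot = A*x + B*u, y = x."""
    return B / complex(-A, w)   # complex(-A, w) is jw - A

A1, A2, B1, B2 = -1.0, -3.0, 2.0, 1.0   # both systems stable (A < 0)
w = 0.7                                  # a fixed forcing frequency
G1 = first_order_freqresp(A1, B1, w)
G2 = first_order_freqresp(A2, B2, w)
G = G1 * G2                              # cascade: responses multiply

# magnitudes multiply, phases add
assert abs(abs(G) - abs(G1) * abs(G2)) < 1e-12
assert abs(cmath.phase(G) - (cmath.phase(G1) + cmath.phase(G2))) < 1e-9
```

Note that the phase-addition check is valid here because neither individual phase wraps past ±π; for large total phase lag the angles still add, but `cmath.phase` reports the wrapped value.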
14. Re-read Leibnitz's rule in your calculus book, and consider the time-varying differential equation
ẋ(t) = a(t)x(t) + b(t)u(t)
with x(0) = x0. Show, by substitution, or integrating factor, that the solution to this is
x(t) = e^(∫_0^t a(σ)dσ) x0 + ∫_0^t e^(∫_τ^t a(σ)dσ) b(τ)u(τ) dτ
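A numerical sanity check of this solution formula (Python sketch; the particular a(t), b(t), u(t) below are arbitrary choices): Euler-integrate the ODE directly, and separately evaluate the formula with Riemann sums.

```python
import math

def a(t): return -1.0 - 0.5 * math.sin(t)
def b(t): return 1.0
def u(t): return math.cos(t)

def ode_solution(t_end, x0, dt=1e-4):
    """Euler-integrate xdot = a(t)*x + b(t)*u(t) from x(0) = x0."""
    x, t = x0, 0.0
    while t < t_end - dt / 2:
        x += dt * (a(t) * x + b(t) * u(t))
        t += dt
    return x

def formula_solution(t_end, x0, n=20000):
    """Evaluate x(t) = exp(int_0^t a) x0 + int_0^t exp(int_tau^t a) b u dtau
    with left Riemann sums on a uniform grid."""
    dt = t_end / n
    A = [0.0]                       # A[k] = cumulative integral of a up to k*dt
    for k in range(n):
        A.append(A[-1] + dt * a(k * dt))
    At = A[-1]
    integral = 0.0
    for k in range(n):
        tau = k * dt
        integral += dt * math.exp(At - A[k]) * b(tau) * u(tau)
    return math.exp(At) * x0 + integral

assert abs(ode_solution(2.0, 1.0) - formula_solution(2.0, 1.0)) < 1e-2
```

Both computations are first-order accurate in the step size, so they agree to within a small tolerance; refining dt tightens the match.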
A proportional control,
u(t) = K1 r(t) − K2 ym(t)
is used, and the measurement is assumed to be the actual value, plus measurement
noise,
ym(t) = y(t) + n(t)
As usual, y is the process output, and is the variable we want to regulate, r is the
reference signal (the desired value of y), d is a process disturbance, u is the control
variable, n is the measurement noise (so that y + n is the measurement of y), K1 and
K2 are gains to be chosen. For simplicity, we will choose some nice numbers for the
values, specifically b1 = b2 = c = 1. There will be two cases studied: stable plant,
with a = −1, and unstable plant, with a = 1. You will design the feedback gains
as described below, and look at closed-loop properties. The goal of the problem is to
start to see that unstable plants are intrinsically harder to control than stable plants.
This problem is an illustration of this fact (but not a proof).
(a) Keeping a, K1 and K2 as variables, substitute for u, and write the differential equation
for x in the form
ẋ(t) = Ax(t) + B1 r(t) + B2 d(t) + B3 n(t)
Also, express the output y and control input u as functions of x and the external
inputs (r, d, n) as
y(t) = C1 x(t) + D11 r(t) + D12 d(t) + D13 n(t)
u(t) = C2 x(t) + D21 r(t) + D22 d(t) + D23 n(t)
Together, these are the closed-loop governing equations. Note that all of the
symbols (A, B1 , . . . , D23 ) will be functions of a and the controller gains, K1 and
K2 . Below, we will design K1 and K2 two different ways, and assess the per-
formance of the overall system.
(b) Under what conditions is the closed-loop system stable? Under those conditions,
i. What is the time-constant of the closed-loop system?
ii. What is the steady-state gain from r to y (assuming d ≡ 0 and n ≡ 0)?
iii. What is the steady-state gain from d to y (assuming r ≡ 0 and n ≡ 0)?
(c) First we will consider the stable plant case, so a = −1. If we simply look at
the plant (no controller), u and d are independent inputs, and the steady-state
gain from d to y is −cb1/a, which in this particular instance happens to be 1. Find
the value of K2 so that: the closed-loop system is stable; and the magnitude of
the closed-loop steady-state gain from d to y is 1/5 of the magnitude of the open-
loop steady-state gain from d to y. That will be our design of K2, based on the
requirement of closed-loop stability and this disturbance rejection specification.
(d) With K2 chosen, choose K1 so that the closed-loop steady-state gain from r to y
is equal to 1 (recall, the goal is that y should track r, as r represents the desired
value of y).
[Plot layout: a 2-by-3 grid of closed-loop frequency-response magnitude plots.
Top row: |Hry| (with its phase ∠Hry below it), |Hdy|, |Hny|.
Bottom row: |Hru|, |Hdu|, |Hnu|.]
These are often referred to as the gang of six. The plot shows all the important
cause/effects, in the context of sinusoidal steady-state response, within the closed-
loop system, namely how (references, disturbances, measurement noise) affect the
(regulated variable, control variable). Note that because one of the entries actually
has both the magnitude and angle plotted, there will be 7 axes. If you forget how
to do this, try the following commands and see what happens in Matlab.
a11T = subplot(4,3,1);   % top half of the first grid cell
a11B = subplot(4,3,4);   % bottom half of the first grid cell
a12 = subplot(2,3,2);    % top row, middle
a13 = subplot(2,3,3);    % top row, right
a21 = subplot(2,3,4);    % bottom row, left
a22 = subplot(2,3,5);    % bottom row, middle
a23 = subplot(2,3,6);    % bottom row, right
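For the specific numbers in parts (c) and (d), the two gain choices can be checked with a few lines of arithmetic. This Python sketch assumes the plant has the form ẋ = a x + b1 d + b2 u, y = c x (my reading of the problem's coefficient roles), with the proportional control u = K1 r − K2 ym:

```python
def closed_loop_gains(a, b1, b2, c, K1, K2):
    """Closed-loop steady-state gains, assuming plant xdot = a*x + b1*d + b2*u,
    y = c*x, with control u = K1*r - K2*(y + n).  Substituting gives
    xdot = (a - b2*K2*c)*x + b2*K1*r + b1*d - b2*K2*n."""
    A = a - b2 * K2 * c
    assert A < 0, "closed loop must be stable"
    gain_r_to_y = -c * b2 * K1 / A
    gain_d_to_y = -c * b1 / A
    return gain_r_to_y, gain_d_to_y

a, b1, b2, c = -1.0, 1.0, 1.0, 1.0      # stable-plant case
open_loop_d_to_y = -c * b1 / a           # = 1 here
K2 = 4.0   # closed-loop d -> y gain is 1/(1+K2) = 1/5 of the open-loop gain
K1 = 5.0   # makes the closed-loop r -> y steady-state gain equal to 1
gry, gdy = closed_loop_gains(a, b1, b2, c, K1, K2)
assert abs(gry - 1.0) < 1e-12
assert abs(gdy - open_loop_d_to_y / 5.0) < 1e-12
```

The pattern (gain reduction by a factor 1 + loop gain) is the quantitative benefit of feedback that this problem is driving at.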
(a) Assume P is stable (i.e., a < 0). For P itself, what is the steady-state gain from
u to y (assuming d ≡ 0)? Call this gain G. What is the steady-state gain from d
to y (assuming u ≡ 0)? Call this gain H.
(b) P is controlled by a proportional controller of the form
Here, r is the reference signal (the desired value of y), n is the measurement noise
(so that y + n is the measurement of y), K1 and K2 are gains to be chosen. By
substituting for u, write the differential equation for x in the form
ẋ(t) = Ax(t) + B1 r(t) + B2 d(t) + B3 n(t)
Also, express the output y and control input u as functions of x and the external
inputs (r, d, n) as
y(t) = C1 x(t) + D11 r(t) + D12 d(t) + D13 n(t)
u(t) = C2 x(t) + D21 r(t) + D22 d(t) + D23 n(t)
All of the symbols (A, B1 , . . . , D23 ) will be functions of the lower-case given sym-
bols and the controller gains. Below, we will design K1 and K2 two different
ways, and assess the performance of the overall system.
(c) Under what conditions is the closed-loop system stable? What is the steady-
state gain from r to y (assuming d ≡ 0 and n ≡ 0)? What is the steady-state
gain from d to y (assuming r ≡ 0 and n ≡ 0)?
(d) Design #1: In this part, we design a control system that actually has
no feedback (K2 = 0). The control system is called open-loop or feed-forward,
and will be based on the steady-state gain G (from u → y) of the plant. The
open-loop controller is simple - simply invert the gain of the plant, and use that
for K1. Hence, we pick K1 := 1/G, and K2 := 0. Call this Design #1.
i. For Design #1, compute the steady-state gains from all external inputs
(r, d, n) to the two outputs (y, u).
ii. Comment on the steady-state gain from r → y.
iii. (See problem 24 for the definition of sensitivity). What is the sensitivity
of the steady-state gain from r → y to the parameter b2? What about the
sensitivity to a? Here you should treat K1 as a fixed number.
iv. Comment on the relationship between the steady-state gain from d → y
without any control (i.e., H computed above) and the steady-state gain from
d → y in Design #1, as computed in part 16(d)i.
v. Comment on the steady-state gain from d to u in Design #1. Based on d's
eventual effect on u, is the answer in part 16(d)iv surprising?
vi. Comment on the steady-state gain from n to both y and u in Design #1.
Remember that Design #1 actually does not use feedback...
vii. What is the time-constant of the system with Design #1?
viii. In this part we have considered a control system that actually had no feedback
(K2 = 0). Consequently, this is called open-loop control, or feedforward control
(since the control input is just a function of the reference signal r, fed forward
to the process), or control-by-calibration, since the reciprocal of the value
of G is used in the control law.
Write a short, concise (4 bullet points) quantitative summary of the effect
of this strategy. Include a comparison of the process time-constant, and the
resulting time-constant with the controller in place, as well as the tracking
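The main point of Design #1 can be seen with a few lines of arithmetic (Python sketch; the gain values are arbitrary examples): with K1 = 1/G and no feedback, plant-gain errors pass straight through to tracking errors, i.e. the sensitivity of the r → y steady-state gain to G is 1.

```python
def feedforward_tracking(G_nominal, G_actual, rbar=1.0):
    """Design #1: K1 = 1/G_nominal, K2 = 0 (pure feedforward).
    Steady-state output is G_actual * K1 * rbar; there is no feedback
    path to correct any modeling error."""
    K1 = 1.0 / G_nominal
    return G_actual * K1 * rbar

# perfect model: tracking is exact
assert abs(feedforward_tracking(2.0, 2.0) - 1.0) < 1e-12
# a 10% plant-gain error becomes a 10% tracking error
assert abs(feedforward_tracking(2.0, 2.2) - 1.1) < 1e-12
# measurement noise n never affects y or u: nothing is fed back
```

Compare this with the feedback design, where the same 10% gain error is attenuated by the loop.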
will still assume that the sign (i.e., the sign of β) is known. The control strategy is dynamic,
where ym(t) = y(t) + n(t) and the various gains (a, b1, . . . , d1) constitute the design
choices in the control strategy. Be careful, notation-wise, since (for example)
d1 is a constant parameter, and d(t) is a signal (the disturbance). There are a lot of
letters/parameters/signals to keep track of.
(a) Eliminate u and ym from the equations to obtain a differential equation for x of
the form
ẋ(t) = Ax(t) + B1 r(t) + B2 d(t) + B3 n(t)
which governs the closed-loop behavior of x. Note that A, B1, B2, B3 are functions
of the parameters a, b1, . . . in the control strategy, as well as the process parameters
α and β.
(b) What relations on (a, b1, . . . , d1, α, β) are equivalent to closed-loop system stability?
(c) As usual, we are interested in the effect (with feedback in place) of (r, d, n) on
(y, u), the regulated variable and the control variable, respectively. Find the
coefficients (in terms of (a, b1, . . . , d1, α, β)) so that
(d) Suppose that Tc > 0 is a desired closed-loop time constant. Show that the follow-
ing design objectives can be met with one design, assuming that the value of β is
known to the designer.
- the closed-loop system is stable
- the closed-loop time constant is Tc
- the steady-state gain from d → y is 0
- the steady-state gain from r → y is 1
A few things to look out for: the conditions above do not uniquely determine
all of the parameters: indeed, only the product b2 c can be determined; and any
arbitrary value for d1 is acceptable (although its particular value does affect other
properties, like r → u, for instance).
(e) Assuming the choices above have been satisfied, what is the steady-state gain
from d → u? Given that the steady-state gain from d → y is 0, does this make
sense, in retrospect?
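The zero steady-state gain from d to y can be seen in simulation. This Python sketch uses an assumed first-order plant and a pure integral control law (the plant form and all numeric values are illustrative, not the problem's actual parameterization): the integrator state keeps moving until y = r exactly, regardless of the constant disturbance.

```python
def simulate_integral_control(kI, rbar, dbar, t_end=40.0, dt=1e-3):
    """Euler simulation of an assumed first-order plant xpdot = -xp + u + d,
    y = xp, under integral control xcdot = r - y, u = kI * xc."""
    xp, xc = 0.0, 0.0
    steps = int(t_end / dt)
    for _ in range(steps):
        y = xp
        u = kI * xc
        xp += dt * (-xp + u + dbar)   # plant
        xc += dt * (rbar - y)         # integrator of the tracking error
    return xp

y_final = simulate_integral_control(kI=1.0, rbar=1.0, dbar=0.7)
# steady-state gain r -> y is 1 and d -> y is 0: y settles at rbar
assert abs(y_final - 1.0) < 1e-3
```

In equilibrium the integrator equation forces r − y = 0; the disturbance only shifts where u settles, which is exactly the point of part (e).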
(a) Derive the closed-loop differential equation governing x, with inputs r, d, n. Under
what conditions is the closed-loop stable? What is the time constant of the system?
(b) Suppose the nominal values of the plant parameters are α = 2.1, β = 0.9. Design
KI such that the closed-loop system is stable, and the nominal closed-loop time
constant is 0.25 time units.
(c) Simulate (using ode45) the system subject to the following conditions
σ′(v) ≥ 1
for all v ∈ R. One such σ is given below in part 19e. Note that the model in (5.11)
generalizes the linear plant model in problem 18, to include nonlinear dependence of y
on u.
We'll ignore sensor noise in this problem. An integral controller is used, of the form
As usual, we want to understand how e(t) := r(t) y(t) behaves, even in the presence
of nonzero disturbances d(t).
(a) Show that e(t) = r(t) − σ(KI x(t)) − d(t) for all t.
(b) If r(t) = r̄, a constant, and d(t) = d̄, a constant, show that
d(e^2)/dt = 2 e(t) ė(t).
(c) Substitute in the expression for e to show that
d(e^2)/dt = −2 KI σ′(KI x(t)) e^2(t)
(d) Assume KI > 0. Define z(t) := e^2(t). Note that z(t) ≥ 0 for all t, and show that
ż(t) ≤ −2 KI z(t)
for all t. Hence z evolves similarly to the stable, 1st-order system ẇ(t) =
−2 KI w(t), but may approach 0 even faster at times. Hence, we conclude
that z approaches 0 at least as fast as w would, and hence can be thought of as
having a maximum time-constant of 1/(2 KI).
(e) Take, for example, σ to be
σ(v) := 2v + 0.1v^3 for v ≥ 0,  2v − 0.2v^2 for v < 0.
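A numerical illustration of parts (b) through (e) (Python sketch; σ is the symbol used here for the nonlinearity, and the KI, r̄, d̄ values are arbitrary): the slope condition holds for the example nonlinearity, and e(t)^2 indeed decays at least as fast as e^(−2 KI t).

```python
import math

def sigma(v):
    """The example nonlinearity from part (e)."""
    return 2 * v + 0.1 * v**3 if v >= 0 else 2 * v - 0.2 * v**2

def sigma_prime(v):
    return 2 + 0.3 * v**2 if v >= 0 else 2 - 0.4 * v

# slope condition: sigma'(v) >= 1 everywhere (here, in fact, >= 2)
assert all(sigma_prime(v) >= 1 for v in [k * 0.1 for k in range(-100, 101)])

def simulate(kI, rbar, dbar, t_end=5.0, dt=1e-4):
    """Integral control of the static nonlinear plant y = sigma(u) + d:
    xdot = e = rbar - sigma(kI*x) - dbar.  Returns (e(0), e(t_end))."""
    x, t = 0.0, 0.0
    e0 = rbar - sigma(kI * x) - dbar
    e = e0
    while t < t_end:
        x += dt * e
        t += dt
        e = rbar - sigma(kI * x) - dbar
    return e0, e

kI = 0.5
e0, eT = simulate(kI, rbar=1.0, dbar=0.2)
# e(t)^2 is bounded by the comparison system w(t) = e(0)^2 * exp(-2*kI*t)
assert eT**2 <= e0**2 * math.exp(-2 * kI * 5.0) * 1.01
```

Since σ′ ≥ 2 for this particular σ, the actual decay is roughly twice as fast as the guaranteed bound, which is exactly the "may approach 0 even faster" remark in part (d).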
20. A feedback system is shown below. All unmarked summing junctions are plus
(+).
[Block diagram: r enters a summing junction together with the negative of the
fed-back measurement, producing e; e → K → u; the disturbance d adds to u,
producing v; v → P → y; measurement noise n adds to y to form the fed-back signal.]
(a) The plant P is governed by the ODE ẏ(t) = y(t) + v(t). Note that the plant is
unstable. The controller is a simple proportional control, so u(t) = Ke(t), where
K is a constant gain. Determine the range of values of proportional gain K for
which the closed-loop system is stable.
(b) Temporarily, suppose K = 4. Confirm that the closed-loop system is stable.
What is the time-constant of the closed-loop system?
(c) The control must be implemented with a sampled-data system (sampler, discrete
control logic, zero-order hold) running at a fixed sample-rate, with sample time
TS . The proportional feedback uk = Kek is implemented, as shown below.
[Block diagram: r enters a summing junction with the fed-back measurement,
producing e; e is sampled (period TS) giving ek; ek → K → uk; uk passes through
a zero-order hold to give u; d adds to u, producing v; v → P → y; noise n adds
to y to form the fed-back signal.]
The plant ODE is as before, ẏ(t) = y(t) + v(t). Determine a relationship between
TS and K (sample-time and proportional gain) such that the closed-loop system
is stable.
(d) Return to the situation where K = 4. Recall the rule-of-thumb described in class
that the sample time TS should be about 1/10 of the closed-loop time constant.
Using this sample time, determine the allowable range of K, and show that the
choice K = 4 is safely in that range.
(e) Simulate the overall system (Lab on Wednesday/Thursday will describe exactly
how to do this, and it will only take a few minutes) and confirm that the behavior
with the sampled-data implementation is approximately the same as the ideal
continuous-time implementation.
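For parts (c) and (d), the sampled-data loop can be analyzed exactly: over one sample period, the plant ẏ = y + v with v held constant by the ZOH has an exact discrete map, and stability reduces to a condition on that map's pole. A Python sketch (with r, d, n set to zero, since they do not affect stability):

```python
import math

def discrete_pole(K, Ts):
    """Exact one-sample map for ydot = y + v with v = -K*y[k] held by the ZOH:
    y[k+1] = pole * y[k], where pole = e^Ts - K*(e^Ts - 1)."""
    eT = math.exp(Ts)
    return eT - K * (eT - 1.0)

def K_range(Ts):
    """Stable iff |pole| < 1, i.e. 1 < K < (e^Ts + 1)/(e^Ts - 1)."""
    eT = math.exp(Ts)
    return 1.0, (eT + 1.0) / (eT - 1.0)

K = 4.0
tau = 1.0 / (K - 1.0)   # continuous-time closed-loop time constant (= 1/3)
Ts = tau / 10.0         # rule of thumb: sample at 1/10 of the time constant
Kmin, Kmax = K_range(Ts)
assert Kmin < K < Kmax           # K = 4 is safely inside the range
assert abs(discrete_pole(K, Ts)) < 1.0
```

Note that as Ts → 0 the upper limit (e^Ts + 1)/(e^Ts − 1) ≈ 2/Ts grows without bound, recovering the continuous-time condition K > 1; with Ts = 1/30 the upper limit is roughly 60, so K = 4 has a large margin.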
21. Suppose two systems are interconnected, with individual equations given as
(a) Consider first S1 (input u, output y): Show that for any initial condition y0, if
u(t) ≡ ū (a constant), then y(t) approaches a constant ȳ, that only depends on
the value of ū. What is the steady-state gain of S1?
(b) Next consider S2 (input (r, y), output u): Show that if r(t) ≡ r̄ and y(t) ≡ ȳ
(constants), then u(t) approaches a constant ū, that only depends on the values
(r̄, ȳ).
(c) Now, assume that the closed-loop system also has the steady-state behavior:
that is, if r(t) ≡ r̄, then both u(t) and y(t) will approach limiting values, ū and
ȳ, only dependent on r̄. Draw a block-diagram showing how the limiting values
are related, and solve for ū and ȳ in terms of r̄.
(d) Now check your answer in part 21c. Suppose y(0) = 0, and r(t) = 1 =: r̄ for all
t ≥ 0. Eliminate u from the equations (5.12), and determine y(t) for all t. Make a
simple graph. Does the result agree with your answer in part 21c?
Lesson: since the assumption we made in part 21c was actually not valid, the analysis
in part 21c is incorrect. That is why, for a closed-loop steady-state analysis to be based
on the separate components' steady-state properties, we must know from other means
that the closed-loop system also has steady-state behavior.
22. Suppose two systems are interconnected, with individual equations given as
(a) Consider first S1 (input u, output y): If u(t) ≡ ū (a constant), then does y(t)
approach a constant ȳ, dependent only on the value of ū?
(b) Next consider S2 (input (r, y), output u): If r(t) ≡ r̄ and y(t) ≡ ȳ (constants),
then does u(t) approach a constant ū, dependent only on the values (r̄, ȳ)?
(c) Suppose y(0) = y0 is given, and r(t) ≡ r̄ for all t ≥ 0. Eliminate u from the
equations (5.13), and determine y(t) for all t. Also, plugging back in, determine
u(t) for all t. Show that y and u both have limiting values that only depend on
the value r̄, and determine the simple relationship between r̄ and (ȳ, ū).
Lesson: Even though S1 does not have steady-state behavior on its own, in feedback
with S2 , the overall closed-loop system does.
23. Consider the equations relating variables r, e, y, n, u and d. Assume P and C are given
numbers.
e = r − (y + n)
u = Ce
y = P (u + d)
So, this represents 3 linear equations in 6 unknowns. Solve these equations, expressing
e, u and y as linear functions of r, d and n. The linear relationships will involve the
numbers P and C.
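One way to check the algebra is numerically: solve by substitution and verify the solved expressions satisfy all three original equations (Python sketch; the P, C, r, d, n values are arbitrary).

```python
def closed_loop_signals(P, C, r, d, n):
    """Solve e = r - (y + n), u = C*e, y = P*(u + d) by substitution:
    eliminating y gives e = (r - P*d - n) / (1 + P*C)."""
    e = (r - P * d - n) / (1.0 + P * C)
    u = C * e
    y = P * (u + d)
    return e, u, y

P, C, r, d, n = 2.0, 3.0, 1.0, 0.5, 0.1
e, u, y = closed_loop_signals(P, C, r, d, n)
# all three original equations hold
assert abs(e - (r - (y + n))) < 1e-12
assert abs(u - C * e) < 1e-12
assert abs(y - P * (u + d)) < 1e-12
```

The common denominator 1 + PC that appears in every solved expression is the hallmark of a feedback loop.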
24. For a function F of many variables (say two, for this problem, labeled x and y), the
sensitivity of F to x is defined as the ratio of the percentage change in F to the
percentage change in x. Denote this by S_x^F.
% change in x = ((x + δ) − x)/x = δ/x
Likewise, the subsequent percentage change in F is
% change in F = (F(x + δ, y) − F(x, y)) / F(x, y)
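This definition can be approximated numerically with a small perturbation δ; a Python sketch (the example F is my own choice):

```python
def sensitivity(F, x, y, delta=1e-6):
    """Numerical sensitivity S_x^F = (percent change in F)/(percent change in x),
    which converges to (x / F(x, y)) * dF/dx as delta -> 0."""
    pct_x = delta / x
    pct_F = (F(x + delta, y) - F(x, y)) / F(x, y)
    return pct_F / pct_x

# example: F(x, y) = x^2 * y has sensitivity 2 with respect to x
F = lambda x, y: x**2 * y
assert abs(sensitivity(F, 3.0, 5.0) - 2.0) < 1e-4
```

For power-law dependence F = x^p g(y), the sensitivity to x is exactly the exponent p, which is why the answers in problem 16(d)iii come out as clean constants.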