Chapter 5: Uncertainty
Chapter 1 showed how unmodeled dynamics might be treated as an external disturbance. The system shown in figure 3 of chapter 1 had a feedback loop through the remainder term, $R_1$. We were able to think of this fed back remainder as a disturbance provided the gain through that loop was not greater than one. This chapter studies the inclusion of modeling uncertainty in the controller synthesis problem. The approach will be very similar in spirit to what we discussed earlier. This chapter obtains bounds on the modeling uncertainty over which we can ensure closed loop stability and performance. These problems are often referred to as the robust stability and robust performance problems, respectively.
1. Modeling Uncertainty
Consider the scalar plant

\[ G(s) = \frac{1}{s+0.1} + \frac{\delta (s+1)^2}{s^2 + \delta s + 1} = G_o(s) + \Delta(s) \]

where $\delta \in [0.1, 0.2]$. The first term $G_o(s) = \frac{1}{s+0.1}$ is called the nominal plant and $\Delta(s) = \frac{\delta(s+1)^2}{s^2+\delta s+1}$ is called the modeling uncertainty. Note that $\Delta(s)$ refers to a family of transfer functions that have been parameterized by the unknown (though bounded) parameter, $\delta$.
One way of modeling this uncertainty is to over bound the gain-magnitude of $\Delta(s)$ with a real-valued function of frequency, $w_\Delta(\omega)$:

\[ \bar{\sigma}(\Delta(j\omega)) \le w_\Delta(\omega) \tag{18} \]

Alternatively, we can view $w_\Delta(\omega)$ as the magnitude of a stable, minimum phase transfer function $W_\Delta(s)$, so that $w_\Delta(\omega) = \bar{\sigma}(W_\Delta(j\omega))$. In particular, we propose rewriting the bound in equation 18 as

\[ \bar{\sigma}(\Delta(j\omega)) \le \bar{\sigma}(W_\Delta(j\omega)) \]

Dividing both sides by $\bar{\sigma}(W_\Delta(j\omega))$ yields

\[ 1 \ge \frac{\bar{\sigma}(\Delta(j\omega))}{\bar{\sigma}(W_\Delta(j\omega))} \]

Because the maximum singular value satisfies the submultiplicative identity, we can readily conclude that the bound in equation 18 is satisfied if

\[ 1 \ge \bar{\sigma}\left( \Delta(j\omega) W_\Delta^{-1}(j\omega) \right) \]

which must hold for all $\omega$. Therefore our bound on the uncertainty may be expressed as the $H_\infty$ norm bound

\[ \left\| \Delta W_\Delta^{-1} \right\|_\infty \le 1 \]
The uncertainty model shown above is called an unstructured additive uncertainty because $\Delta$ perturbs the nominal plant $G_o(s)$ in an additive manner and because no structure has been assumed on the uncertainty $\Delta$.

The preceding additive model is still rather loose, because we are not directly bounding the uncertain parameter $\delta$. A large part of our bound covers the known part of $\Delta$, so it should be possible to obtain a tighter bound than what we obtained above using the unstructured additive uncertainty model. In particular, note that
\[ \Delta(s) = \frac{\delta(s+1)^2}{s^2+\delta s+1} = \delta(s+1)^2 \left( s^2 + 1 + \delta s \right)^{-1} = \frac{\delta(s+1)^2}{s^2+1} \left[ 1 + \delta\,\frac{s}{s^2+1} \right]^{-1} \]
This almost looks like an LFT. With a little more effort we can get it to look exactly like an LFT of the form

\[ \Delta(s) = \frac{(s+1)^2}{s^2+1}\,\delta \left[ 1 - \frac{s}{s^2+1}\,\delta \left( 1 + \frac{s}{s^2+1}\,\delta \right)^{-1} \right] = P_{11} + P_{12} K \left( I - K P_{22} \right)^{-1} P_{21} \]

where $K$ is equivalent to the uncertain parameter $\delta$. So $\Delta(s)$ is a linear fractional transformation in which the uncertain parameter has been pulled out of the plant as shown in figure 1. The augmented plant in this case has the form

\[ P = \begin{bmatrix} P_{11} & P_{12} \\ P_{21} & P_{22} \end{bmatrix} = \begin{bmatrix} 0 & \dfrac{(s+1)^2}{s^2+1} \\[1ex] 1 & -\dfrac{s}{s^2+1} \end{bmatrix}, \qquad \delta \in [0.1, 0.2] \]
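The pulled-out form is easy to sanity check numerically. The sketch below (plain Python, using the block entries as reconstructed above; this is an illustration, not part of the original notes) evaluates the LFT and compares it against $\Delta(s)$ directly:

```python
# Numerical check that the pulled-out LFT reproduces Delta(s).
# Block entries as reconstructed in the text:
# P11 = 0, P12 = (s+1)^2/(s^2+1), P21 = 1, P22 = -s/(s^2+1), K = delta.

def delta_direct(s, d):
    """Delta(s) = d*(s+1)^2 / (s^2 + d*s + 1) evaluated directly."""
    return d * (s + 1)**2 / (s**2 + d*s + 1)

def delta_lft(s, d):
    """Same transfer function via P11 + P12*K*(1 - K*P22)^(-1)*P21."""
    p11 = 0.0
    p12 = (s + 1)**2 / (s**2 + 1)
    p21 = 1.0
    p22 = -s / (s**2 + 1)
    return p11 + p12 * d / (1 - d * p22) * p21

# compare on a grid of frequencies for several admissible parameter values
pts = [(1j * w, d) for w in (0.1, 0.5, 2.0, 10.0) for d in (0.1, 0.15, 0.2)]
errs = [abs(delta_direct(s, d) - delta_lft(s, d)) for s, d in pts]
print(max(errs))  # should be at machine precision
```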
In some cases, there may be several parametric uncertainties, in which case the uncertainty model is represented as a family of transfer function matrices. Consider the following example,

\[ G(s) = \frac{10\left( (2 + 0.2\delta)s^2 + (2 + 0.3\delta + 0.4\epsilon)s + (1 + 0.2\epsilon) \right)}{(s^2+0.5s+1)(s^2+2s+3)(s^2+3s+6)} \]
Figure 1. LFT model of uncertain plant
where $\delta, \epsilon \in [-1, 1]$. We'll again use an additive uncertainty model of the form

\[ G(s) = G_o(s) + \Delta(s) \]

We want to specify $G_o(s)$ and a weighting system $W_\Delta$ such that

\[ \bar{\sigma}\left( \Delta(j\omega) W_\Delta^{-1}(j\omega) \right) < 1 \]

for all $\omega$. This forms a weighted uncertainty model.
The obvious choice for $G_o(s)$ occurs when $\delta = \epsilon = 0$. In this case,

\[ G_o(s) = \frac{20s^2 + 20s + 20}{(s^2+0.5s+1)(s^2+2s+3)(s^2+3s+6)} \]

and

\[ \Delta(s) = \delta \Delta_1(s) + \epsilon \Delta_2(s) = \begin{bmatrix} \Delta_1(s) & \Delta_2(s) \end{bmatrix} \begin{bmatrix} \delta \\ \epsilon \end{bmatrix} \]

where

\[ \Delta_1(s) = \frac{2s^2 + 3s}{(s^2+0.5s+1)(s^2+2s+3)(s^2+3s+6)}, \qquad \Delta_2(s) = \frac{4s + 2}{(s^2+0.5s+1)(s^2+2s+3)(s^2+3s+6)} \]
By the submultiplicative identity we know

\[ \bar{\sigma}(\Delta(j\omega)) = \bar{\sigma}\left( \begin{bmatrix} \Delta_1(j\omega) & \Delta_2(j\omega) \end{bmatrix} \begin{bmatrix} \delta \\ \epsilon \end{bmatrix} \right) \le \bar{\sigma}\left( \begin{bmatrix} \Delta_1(j\omega) & \Delta_2(j\omega) \end{bmatrix} \right) \bar{\sigma}\left( \begin{bmatrix} \delta \\ \epsilon \end{bmatrix} \right) \]

so that, up to the size of the bounded parameter vector, the uncertainty is over bounded by

\[ \max_\omega \bar{\sigma}\left( \begin{bmatrix} \Delta_1(j\omega) & \Delta_2(j\omega) \end{bmatrix} \right) \]
The maximum singular value for this particular example is plotted below in figure 2. The MATLAB code generating this plot is
d = conv([1 .5 1],conv([1 2 3],[1 3 6]));
D = tf({[2 3 0] [4 2]},{d d});
sigma(D)
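The same curve can be evaluated without the Control Toolbox. A plain Python sketch (my own illustration, assuming the transfer functions above) computes $\bar{\sigma}\left(\begin{bmatrix}\Delta_1(j\omega) & \Delta_2(j\omega)\end{bmatrix}\right) = \sqrt{|\Delta_1(j\omega)|^2 + |\Delta_2(j\omega)|^2}$ on a grid; the peak lands near the 0.6964 value annotated in figure 2:

```python
import math

def den(s):
    """Common denominator (s^2+0.5s+1)(s^2+2s+3)(s^2+3s+6)."""
    return (s*s + 0.5*s + 1) * (s*s + 2*s + 3) * (s*s + 3*s + 6)

def sigma_max(w):
    """For a 1x2 row vector the max singular value is its Euclidean norm."""
    s = 1j * w
    d1 = (2*s*s + 3*s) / den(s)   # Delta_1(jw)
    d2 = (4*s + 2) / den(s)       # Delta_2(jw)
    return math.hypot(abs(d1), abs(d2))

# sweep a logarithmic grid from 0.01 to 10 rad/sec
grid = [0.01 * 10**(3 * k / 600) for k in range(601)]
peak = max(sigma_max(w) for w in grid)
print(round(peak, 4))
```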
Figure 2. Maximum singular value plot (the annotated peak of the perturbation's maximum singular value is 0.6964)
The μ-tools toolbox from MATLAB provides a way of transforming the singular value plot in figure 2 into a specific rational and stable W(s). The MATLAB function ginput allows the user to pick points off of the above magnitude plot and store the points in a data structure that is then used by the MATLAB routine fitmag to fit a stable minimum-phase system to the plotted data. In our case, the following MATLAB code was used to do this.
d=conv([1 .5 1],conv([1 2 3],[1 3 6]));
D=tf({[2 3 0] [4 2]},{d d});
sigma(D);
mf=ginput(12);
magg=vpck(10.^(mf(:,2)./20),mf(:,1));
Wa=fitmag(magg);
[A,B,C,D]=unpck(Wa);
[num,den]=ss2tf(A,B,C,D)
The first part of the code generates the singular value plot. The command ginput(12) places a crosshair on the plot and the user then selects 12 points on the singular value plot. The mf structure returned by MATLAB needs to be reformatted using the vpck function, and then the function fitmag is used to obtain the desired weighting function W. The actual W obtained will vary depending on how the points were chosen, but one possible answer would be
\[ W(s) = \frac{0.0034s^4 + 0.1145s^3 + 0.3059s^2 + 1.3518s + 0.4026}{s^4 + 2.3969s^3 + 5.8215s^2 + 4.0148s + 3.7632} \]
The Bode plot for this particular weighting function is shown below in figure 3.

Figure 3. Bode Plot for Weighting Function
The MATLAB function fitmag uses the Signal Processing Toolbox procedure invfreqs to find a real rational transfer function for the data. The function invfreqs uses an equation error method to identify the model minimizing

\[ \sum_{k=1}^{n} \left| h[k]\, A(j\omega(k)) - B(j\omega(k)) \right|^2 \]

where $h[k]$ is the gain data and $B(j\omega(k))$ and $A(j\omega(k))$ are the frequency responses of the numerator and denominator polynomials, respectively.
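The equation error method reduces to linear least squares, because the unknown polynomial coefficients enter the residual $h[k]A(j\omega_k) - B(j\omega_k)$ linearly. A minimal Python sketch of the idea (my own illustration, not the invfreqs source; it fits complex frequency-response data with a first-order model, whereas fitmag additionally handles magnitude-only data):

```python
import numpy as np

def eqerror_fit(w, h):
    """Fit h ~ B(s)/A(s) with B(s) = b0 and A(s) = s + a0 by minimizing
    sum_k |h[k]*A(jw_k) - B(jw_k)|^2 over the real coefficients (a0, b0)."""
    s = 1j * np.asarray(w)
    h = np.asarray(h)
    # residual: h*(s + a0) - b0 = 0  ->  a0*h - b0 = -h*s  (linear in a0, b0)
    M = np.column_stack([h, -np.ones_like(h)])
    rhs = -h * s
    # stack real and imaginary parts so the unknowns stay real
    A = np.vstack([M.real, M.imag])
    b = np.concatenate([rhs.real, rhs.imag])
    (a0, b0), *_ = np.linalg.lstsq(A, b, rcond=None)
    return a0, b0

# synthetic data from the known system 1/(s+2)
w = np.logspace(-1, 2, 40)
h = 1.0 / (1j * w + 2.0)
a0, b0 = eqerror_fit(w, h)
print(a0, b0)  # recovers a0 = 2, b0 = 1 up to numerical precision
```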
2. Experimental Determination of Uncertainty Models
The determination of uncertainty models may also be done experimentally. The following discussion is drawn from [6]. Consider a known field-controlled DC motor that has been fitted with a flexible shaft coupling. Frequency domain measurements have been taken on this servo system using the input voltage as input and angular velocity of the motor as output. The results are given in the following table.
    data(:,1)       data(:,2)-data(:,3)    data(:,4)-data(:,5)
    ω (rad/sec)     Magnitude              Phase (deg)
    2π(0.03)        6.6-6.7                -3 to -5
    2π(0.1)         6.3-6.5                -10 to -11
    2π(0.3)         5.4-5.6                -23 to -25
    2π(1)           2.6-2.65               -50 to -60
    2π(3)           1.12-1.15              -50 to -70
    2π(10)          0.8-0.85               -55 to -75
    2π(30)          0.32-0.5               -70 to -113
    2π(100)         0.06-0.15              -90 to -130
    2π(300)         0.037-0.12             -89 to -214
    2π(1000)        0.01-0.08              -214 to -326
The following plot in figure 4 shows the central data points used in identifying the nominal system model. The nominal data points were obtained by taking the average of the gain and phase ranges, as computed by the following MATLAB formulae
nmag = (data(:,2)+data(:,3))./2;
nphase = (data(:,4)+data(:,5))./2;
These average values are plotted as crosses in figure 4. We can then use a trial and error fit of the magnitude and phase data to obtain the nominal plant

\[ G_0(s) = \frac{\frac{20}{3}\left(\frac{s}{5}+1\right)(s+100)}{\left(\frac{s}{0.5}+1\right)\left(\frac{s}{30}+1\right)(s+100)} \]
The Bode plot for this nominal model is shown in figure 4 alongside of the nominal data points.

We now need to determine a rational stable transfer function that contains the data uncertainty. To determine the uncertainty bound, we plot the Nyquist plot of the nominal data. The ranges on the data points sweep out a sector as shown in figure 5. We construct a circle that is centered at the nominal model and whose radius is large enough to enclose the sector formed from the gain and phase uncertainties at that frequency. A MATLAB program was written to compute these circles and the right hand plot in figure 5 shows the circles and sectors formed by the first two data points in our table.

Repeating this for all other data points in the table produces the plot shown in figure 6. This plot shows both the Nyquist plot for our identified nominal model and the uncertainty circles. Having identified the uncertainty radius at each frequency, we see that the circle centered at the nominal transfer function with computed radius covers the uncertainty region at each frequency. Thus each radius over bounds the additive perturbation needed at each frequency to include the measurement data.
Figure 4. Nominal Data and Model
3. Robust Stability

Suppose, as above, that the modeling uncertainty has been captured by an additive perturbation $\Delta(s)$ satisfying

\[ \left\| \Delta W_\Delta^{-1} \right\|_\infty < 1 \]

where $W_\Delta(s)$ is a proper stable minimum phase system characterizing our frequency dependent knowledge of the modeling uncertainty. Let $G_o(s)$ denote the nominal open loop plant. We ask what $W_\Delta(s)$ we might choose to guarantee the stability of the true system under all $\Delta(s)$ satisfying the above bound.
We assume that the nominal loop (when $\Delta = 0$) is internally stable, which means that $\det(I + G_o(j\omega)K(j\omega))$ is not equal to zero for any $\omega$. We now ask how large of a perturbation can be tolerated before this stable nominal closed loop system becomes unstable. So consider a perturbation that renders the control loop marginally stable. This means there is some $\omega_0$ such that

\[ 0 = \det\left( I + G_o(j\omega_0)K(j\omega_0) + \Delta(j\omega_0)K(j\omega_0) \right) = \det\left( I + \Delta K (I + G_o K)^{-1}(j\omega_0) \right) \det\left( I + G_o K(j\omega_0) \right) \]
Figure 9. Closed Loop Map with Unstructured Additive Uncertainty
By assumption the nominal loop is stable, so that the second term in the above equation cannot be zero. This means that the first determinant is zero,

\[ 0 = \det\left( I + \Delta K (I + G_o K)^{-1}(j\omega_0) \right) \]
Now recall that singular values measure how close a matrix is to being singular. In particular, recall that if $Q$ is nonsingular then

\[ \min_{R :\, \det(Q+R)=0} \bar{\sigma}(R) = \underline{\sigma}(Q) \]

Let $R = \Delta K(I + G_o K)^{-1}$ and let $Q = I$. The preceding result implies that any perturbation rendering the loop marginally stable must satisfy

\[ \bar{\sigma}\left( \Delta K (I + G_o K)^{-1} \right) \ge \underline{\sigma}(I) = 1 \]

We can therefore conclude that a sufficient condition for the perturbed system to be stable is that

\[ \bar{\sigma}\left( \Delta K (I + G_o K)^{-1} \right) < \underline{\sigma}(I) = 1 \]
The preceding bound can be used to bound the maximum singular value of $\Delta$ by using the submultiplicative property of the maximum singular value. In particular, we know that

\[ \bar{\sigma}\left( \Delta K (I + G_o K)^{-1} \right) \le \bar{\sigma}(\Delta)\,\bar{\sigma}\left( K (I + G_o K)^{-1} \right) \]

If we can therefore guarantee that

\[ \bar{\sigma}(\Delta)\,\bar{\sigma}\left( K (I + G_o K)^{-1} \right) < 1 \]

for all $\omega$ then we know we can ensure stability for the perturbed system. Finally, this preceding inequality is usually written as a bound on $\bar{\sigma}(\Delta)$, so that if the uncertainty satisfies this bound, then we can ensure the robust stability of the system. In other words, if

\[ \bar{\sigma}(\Delta) < \frac{1}{\bar{\sigma}\left( K (I + G_o K)^{-1} \right)} \]

for all $\omega$ then the system in figure 9 has robust stability.
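This bound is straightforward to evaluate numerically. The sketch below (plain Python; the scalar plant $G_o = 1/(s+0.1)$ is borrowed from section 1, and the unity controller is my own hypothetical choice) computes the admissible uncertainty size $1/|K(1+G_oK)^{-1}|$ on a frequency grid:

```python
def go(s):
    """Nominal scalar plant from section 1."""
    return 1.0 / (s + 0.1)

def additive_margin(w, k=1.0):
    """Largest admissible |Delta(jw)| for robust stability with an
    additive perturbation: 1 / |K (1 + Go K)^(-1)| evaluated at s = jw."""
    s = 1j * w
    return abs(1.0 + go(s) * k) / abs(k)

# the margin is smallest where the loop is most sensitive to Delta
grid = [10**(k / 50 - 2) for k in range(201)]   # 0.01 ... 100 rad/sec
margins = [additive_margin(w) for w in grid]
print(min(margins))
```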
A similar bound may be obtained if the perturbed plant has a multiplicative uncertainty. Figure 10 shows the closed loop system with this class of uncertainty. In this case the perturbed plant has the form $G(s) = G_o(s)(I + \Delta(s))$ where the uncertainty generally satisfies a bound of some sort. A commonly used bound is the frequency weighted bound we introduced earlier,

\[ \left\| \Delta W_\Delta^{-1} \right\|_\infty < 1. \]

For this particular class of uncertainty, the associated robust stability condition has the form

\[ \bar{\sigma}(\Delta(j\omega)) < \frac{1}{\bar{\sigma}\left( G_o K (I + G_o K)^{-1} \right)} \tag{19} \]

for all $\omega$. The informal derivation of this bound is very similar to what we did for the additive uncertainty.
Figure 10. Closed loop system with unstructured multiplicative uncertainty
Example: Consider the closed loop system shown in figure 10 and let $G = G_o(I + \Delta)$ where

\[ G_o(s) = \frac{1}{s} \]

The unstructured uncertainty $\Delta$ satisfies the bound

\[ \left\| \Delta W_\Delta^{-1} \right\|_\infty < 1 \]

where

\[ W_\Delta(s) = \frac{s+1}{10} \]

Let's now introduce a proportional controller, $K(s) = k$ where $k$ is a real constant. This implies, therefore, that the nominal loop function is

\[ L_o(s) = G_o(s)K(s) = \frac{k}{s} \]
Since this is an unstructured multiplicative uncertainty, the closed loop system has robust stability if the inequality in equation 19 is satisfied. However, we also know that

\[ \bar{\sigma}\left( W_\Delta^{-1}(j\omega) \right) < \frac{1}{\bar{\sigma}(\Delta(j\omega))} \]

for all $\omega$. So by requiring that

\[ \bar{\sigma}\left( G_o K (I + G_o K)^{-1}(j\omega) \right) < \bar{\sigma}\left( W_\Delta^{-1}(j\omega) \right), \]
we know that the robust stability condition in equation 19 will be satisfied. The above condition for robust stability is easily checked for this particular system. In particular, it implies that robust stability will be guaranteed provided

\[ \left| \frac{k}{j\omega + k} \right| < \left| \frac{10}{j\omega + 1} \right| \]
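A quick numerical check of this inequality (a plain Python sketch; the two gains tried are my own choices) agrees with the $k \le 10$ conclusion discussed next:

```python
def robustly_stable(k, freqs):
    """Check |k/(jw+k)| < |10/(jw+1)| at every frequency in freqs."""
    return all(
        abs(k / (1j*w + k)) < abs(10 / (1j*w + 1))
        for w in freqs
    )

freqs = [10**(n / 100 - 3) for n in range(601)]  # 0.001 ... 1000 rad/sec
print(robustly_stable(5.0, freqs))    # a gain below 10 passes
print(robustly_stable(50.0, freqs))   # a gain well above 10 fails
```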
Figure 11 graphically illustrates the implications of this bound. In this figure, we've plotted the gain magnitude for the nominal closed loop transfer function with various values of $k$. Due to the form of $L_o(s)$, we know that this gain magnitude plot is zero dB until it reaches a frequency $k$ and then it rolls off at 20 dB/decade. The overbound $|10/(j\omega + 1)|$ is also plotted here. The gain magnitude for this system starts at 20 dB and begins rolling off at 20 dB/decade after 1 rad/sec. We therefore see that as long as $k \le 10$, the nominal closed loop transfer function has a gain-magnitude that is over bounded by $\bar{\sigma}(W_\Delta^{-1}(j\omega))$.

Now consider a multivariate version of this example with the weighting function

\[ W_\Delta(s) = \frac{s+1}{10} \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \]
We again have a multiplicative uncertainty, $G = G_o(I + \Delta)$, where $\| \Delta W_\Delta^{-1} \|_\infty < 1$. Figure 12 plots $\bar{\sigma}(W_\Delta^{-1}(j\omega))$ against $\bar{\sigma}(G_o K (I + G_o K)^{-1}(j\omega))$ for two ranges of the controller gain.

Figure 12. Singular values of multivariate robust stability problem (left) $0 < k < 0.1$; (right) $0.1 < k < 3$
4. Small Gain Theorem
The arguments that we used in the preceding section to justify the robust stability condition were incomplete. In the first place, our arguments relied on the fact that

\[ \det\left( I + \Delta K (I + G_o K)^{-1} \right) \det\left( I + G_o K \right) = 0 \]

if and only if

\[ \det\left( I + \Delta K (I + G_o K)^{-1} \right) = 0 \]

and the nominal closed loop system is stable. This argument, however, requires that $\Delta$ is a rational function, which means we are restricting ourselves to perturbations that are linear time-invariant systems. Since uncertainty often results from neglecting non-linearities in the original system, there is no reason why this condition should be satisfied in practice.
In addition to this, we recall that our original result for internal stability was somewhat more complicated than simply asserting that the determinant of the return difference matrix is not zero. In fact, our condition was actually that $\det(I + G_o K)$ has no zeros in the right half of the complex plane and has $n_g + n_k$ RHP poles. Clearly the first condition for no RHP zeros implies that the determinant is nonzero. But the other condition cannot be neglected. In fact, even when $\Delta$ is rational, one might introduce perturbations that actually change the number of RHP poles in the perturbed return difference matrix, in which case our stability condition would not be satisfied. In other words, in addition to requiring that $\Delta$ is rational, we must assume that the perturbation adds no additional RHP poles or zeros to the open loop plant. Clearly both of these requirements are overly restrictive and we need to provide a more rigorous proof of the robust stability condition if we expect it to hold in situations that are of practical interest.
To develop a more rigorous proof of the robust stability condition, we need to allow $\Delta$ to be a nonlinear and time-varying map over $L_2$. We then define the incremental gain of $\Delta$ as

\[ \gamma(\Delta) = \inf \left\{ \gamma : \| \Delta w - \Delta w' \|_2 \le \gamma \| w - w' \|_2 \ \text{for all } w, w' \in L_2 \right\} \]

We say that $\Delta : L_2 \to L_2$ is a contraction mapping if and only if its incremental gain is less than one. In other words, $\Delta$ is a contraction mapping if and only if $\gamma(\Delta) < 1$. A fundamental property of contraction mappings defined over complete normed linear spaces is that they have unique fixed points, i.e., points $w^*$ such that $w^* = \Delta w^*$.
A formal proof of the contraction mapping principle is not difficult, but it requires some background in real analysis. Below we review some of the relevant background. A more complete overview of real analysis will be found in appendix A.

Given a normed linear space, $X$, we say that a sequence $\{x_n\}_{n=1}^{\infty}$ is convergent if and only if there exists an $x^*$ in $X$ such that for all $\epsilon > 0$, there exists an integer $N > 0$ such that $\| x^* - x_k \| < \epsilon$ for all $k \ge N$. We denote $x^* = \lim_{k \to \infty} x_k$ as the limit of the sequence $\{x_k\}$. This is the standard $\epsilon$-definition of a limit that many students confront in their first calculus course. Essentially this definition says that any neighborhood around the limit point $x^*$ contains all but a finite number of the sequence's elements. A normed linear space is said to be complete if every Cauchy sequence in the space converges to a point in the space; the familiar function spaces such as $L_2$ are complete. Complete normed linear spaces are important enough to deserve their own special name. We often refer to a complete normed linear space as a Banach space.
Proof (contraction mapping principle): By assumption $\Delta$ is a contraction mapping so that $\gamma = \gamma(\Delta) < 1$. Now let's define a sequence $\{x_k\}$ where $x_{k+1} = \Delta x_k$. The norm of a single iteration is given by

\[ \| x_{k+1} - x_k \| = \| \Delta x_k - \Delta x_{k-1} \| \le \gamma \| x_k - x_{k-1} \| \le \cdots \le \gamma^{k-1} \| x_2 - x_1 \| \]

By the triangle inequality, for any integer $r > 0$,

\[ \| x_{k+r} - x_k \| \le \| x_{k+r} - x_{k+r-1} \| + \| x_{k+r-1} - x_{k+r-2} \| + \cdots + \| x_{k+1} - x_k \| \le \left( \gamma^{k+r-2} + \gamma^{k+r-3} + \cdots + \gamma^{k-1} \right) \| x_2 - x_1 \| \le \frac{\gamma^{k-1}}{1-\gamma} \| x_2 - x_1 \| \]

Note that as $k \to \infty$, the fraction in the last equation gets arbitrarily small. Since this is true for any value of $r$, we can immediately conclude that $\{x_k\}$ is Cauchy. By assumption the space $L$ is complete, therefore there exists a point $x^*$ to which the sequence converges.
We now prove that this limit point is the fixed point. This is easily accomplished by noting that for any $k$,

\[ \| x^* - \Delta x^* \| \le \| x^* - x_k \| + \| x_k - \Delta x^* \| = \| x^* - x_k \| + \| \Delta x_{k-1} - \Delta x^* \| \le \| x^* - x_k \| + \gamma(\Delta) \| x_{k-1} - x^* \| \]

Because $x^*$ is a limit point of $\{x_k\}$ we know that for any $\epsilon > 0$, we can find an $N$ such that for $k > N$ the right hand side of the above inequality is less than $\epsilon$. We can therefore conclude that $\| x^* - \Delta x^* \| = 0$, so that $x^*$ is a fixed point of $\Delta$.
We may establish that $x^*$ is unique using a similar approach. In this case, let's assume that there are two fixed points $x^*$ and $y^*$. Then

\[ \| x^* - y^* \| = \| \Delta x^* - \Delta y^* \| \le \gamma(\Delta) \| x^* - y^* \| \]

where the inequality follows from the fact that $\Delta$ is a contraction mapping. Since $\gamma(\Delta) < 1$, this can hold only if $\| x^* - y^* \| = 0$, so that $x^* = y^*$.
Small Gain Theorem: Consider the feedback loop of figure 13, formed from the finite-gain $L_2$ stable systems $G_1$ and $G_2$ through the loop equations $e_1 = w_1 + G_2 e_2$ and $e_2 = w_2 + G_1 e_1$, and suppose that $\gamma(G_1)\gamma(G_2) < 1$. The loop signal $e_2$ is a fixed point of the map $S(e_2) = w_2 + G_1(w_1 + G_2 e_2)$, and for any $e_2$ and $\tilde{e}_2$ in $L_2$,

\[ \| S(e_2) - S(\tilde{e}_2) \|_2 = \| G_1(w_1 + G_2 e_2) - G_1(w_1 + G_2 \tilde{e}_2) \|_2 \le \gamma(G_1) \| w_1 + G_2 e_2 - w_1 - G_2 \tilde{e}_2 \|_2 = \gamma(G_1) \| G_2 e_2 - G_2 \tilde{e}_2 \|_2 \le \gamma(G_1)\gamma(G_2) \| e_2 - \tilde{e}_2 \|_2 \]

which under the theorem's assumption implies that $S$ is a contraction mapping over $L_2$. By the contraction mapping principle, we may therefore conclude that there exists a unique $e_2$ in $L_2$ for any $w_1$ and $w_2$ in $L_2$.
Note that a similar argument can be used to show the existence of a unique $e_1$ in $L_2$ for any $w_1, w_2$ in $L_2$.
Remark: Note that $G_1$ and $G_2$ need not be linear. Nowhere in the above proof did we need to use the principle of superposition. Also note that this theorem assumes that both $G_1$ and $G_2$ are finite-gain $L_2$ stable. So the small gain theorem only applies to the interconnection of stable systems.
We now use the small gain theorem to formally prove our earlier robust stability bound. The value of this proof is that the perturbation will no longer be constrained to being linear and time-invariant. All we need to require is that the uncertainty $\Delta$ is $L_2$ stable.

The lefthand picture in figure 14 is a closed loop system with an unstructured additive uncertainty. We pull the $\Delta$ out of this diagram to obtain the linear fractional transformation shown on the righthand side of figure 14. For the LFT, let's assume that $\Delta : L_2 \to L_2$ and let

\[ P = \begin{bmatrix} P_{11} & P_{12} \\ P_{21} & P_{22} \end{bmatrix} \]

be causal and linearly connected so that
be causal and linearly connected so that
z
1
= P
11
w
1
+P
12
w
2
z
2
= P
21
w
1
+P
22
w
2
Let's form the loop equations,

\[ z_1 = P_{11} w_1 + P_{12} w_2, \qquad w_1 = \Delta z_1 \]
We further assume that $P_{12} : L_2 \to L_2$ so that $P_{12} w_2 \in L_2$. We may now apply the small gain theorem to the above loop equations to conclude that if

\[ \gamma(\Delta)\,\gamma(P_{11}) < 1 \]
then for any $w_2$ in $L_2$, there exists a unique $w_1$ and $z_1$ in $L_2$.
Figure 14. LFT for feedback loop with additive uncertainty (small gain theorem)
For the specific system in figure 14, however, we have an explicit expression for the transfer function matrix. In other words, taking the external input as $w_2 = (r_1, r_2)$, we know that

\[ P = \begin{bmatrix} P_{11} & P_{12} \\ P_{21} & P_{22} \end{bmatrix} = \begin{bmatrix} -K(I + G_o K)^{-1} & K(I + G_o K)^{-1} & (I + K G_o)^{-1} \\ (I + G_o K)^{-1} & G_o K (I + G_o K)^{-1} & (I + G_o K)^{-1} G_o \end{bmatrix} \]
This means that our condition for robust stability now becomes

\[ \gamma(\Delta)\,\gamma\left( K (I + G_o K)^{-1} \right) < 1 \]

which we can restate as

\[ \gamma(\Delta) < \frac{1}{\gamma\left( K (I + G_o K)^{-1} \right)} \]
This last equation is nearly identical to our earlier result for robust stability. The difference is that our earlier result only applied to linear systems, which is why the incremental gains were expressed in terms of maximum singular values. In this particular condition, we no longer have the requirement that the underlying systems are linear. We only require that they are stable with a sufficiently bounded incremental gain.
5. Multipliers
The successful application of the small gain theorem requires that the product of the incremental gains of the two systems, $G_1$ and $G_2$, is less than unity. In other words, their composition has to be a contraction mapping. If this is not the case, then we cannot infer that the feedback loop in figure 13 is not internally stable, for the small gain theorem is only a sufficient condition. Since the condition is only sufficient, it suggests that it might be possible to tighten up the condition somehow. This section explores that idea by using multipliers to rescale the systems $G_1$ and $G_2$ to obtain a tighter condition. This particular approach has important consequences with regard to defining a notion of gain margin that is relevant to multivariate systems. This notion of gain margin will lead in a natural way to a new type of system gain known as the $\mu$-norm.
Our first result shows that the original loop in figure 13 is internally stable if the rescaled loop shown in figure 15 is internally stable. In particular, we can state this result as follows:

Small Gain Theorem with Multipliers: For the control loop in figure 15, assume that $G_1$ and $G_2$ are stable maps over $L_2$ and let $W$ be a stable linear system that is invertible in $RH_\infty$. Define $\tilde{G}_1 = W G_1$ and $\tilde{G}_2 = G_2 W^{-1}$. If

\[ \gamma(\tilde{G}_1)\,\gamma(\tilde{G}_2) < 1 \]

then the closed loop system in figure 13 is internally stable.
Figure 15. Small gain theorem in which the loop has been rescaled using multipliers ($\tilde{w}_2 = W w_2$, $\tilde{e}_2 = W e_2$, $\tilde{G}_1 = W G_1$, $\tilde{G}_2 = G_2 W^{-1}$)
Proof: The proof of this assertion has two parts. We must first show that the signal $e_2 = W^{-1}\tilde{e}_2$ within the loop of figure 15 is identical to the signal $e_2$ in figure 13. With this equivalence in hand, we can then easily establish the assertion through a straightforward application of the small gain theorem. To establish the equivalence of the two loops, let's first note that $\tilde{G}_1$ and $\tilde{G}_2$ are clearly stable maps over $L_2$. Since $W^{-1}$ is linear, we can readily see that the signal $e_2$ in figure 13 may be written as

\[ e_2 = w_2 + G_1 e_1 = W^{-1}\left( W w_2 + W G_1 e_1 \right) = W^{-1}\left( \tilde{w}_2 + \tilde{G}_1 e_1 \right) \]

This last expression is, of course, the equation for $W^{-1}\tilde{e}_2$ in the control loop of figure 15. The two signals are clearly equal, so we can infer that $e_2$ is the same in both loops.
Now note that if $w_2 \in L_2$, then $\tilde{w}_2 = W w_2$ is also in $L_2$ because $W$ is stable. With the assumption on the incremental gains of $\tilde{G}_1$ and $\tilde{G}_2$, we can use the small gain theorem to infer that there exists a unique $e_1$ and $\tilde{e}_2$ in $L_2$. However, since $W^{-1}$ is stable and $e_2 = W^{-1}\tilde{e}_2$, we can immediately infer that $e_2$ is also in $L_2$, thereby establishing the theorem's assertion.
An important application of the multiplier version of the small gain theorem is discussed below. Let's consider the closed loop system shown in the lefthand side of figure 16. In this figure, the loop is formed from the uncertainty system $\Delta$ and the known nominal system, $G_o$. We assume that $G_o : L_2 \to L_2$ is a stable map. The uncertainty $\Delta$ is taken to be a structured uncertainty of the form

\[ \Delta = \begin{bmatrix} \Delta_1 & 0 & \cdots & 0 \\ 0 & \Delta_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \Delta_n \end{bmatrix} \]

where each $\Delta_i$ on the diagonal is in $RH_\infty$ and the uncertainty satisfies the weighted bound

\[ \left\| \Delta W_\Delta^{-1} \right\|_\infty < 1 \tag{21} \]

The system $W_\Delta$ is assumed to be invertible in $RH_\infty$ (i.e. both $W_\Delta$ and $W_\Delta^{-1}$ are in $RH_\infty$).
We're interested in the performance level of this uncertain system. As stated earlier, performance is generally measured by the induced gain of a weighted sensitivity function. In this particular section, we denote this specification as

\[ \left\| W_p S \right\|_\infty < 1 \tag{22} \]

where $W_p$ is a linear system that is invertible in $RH_\infty$. As usual, $W_p$ is a frequency weighted specification determined by the control system's designer. This weighted specification is placed on $S$, the output sensitivity $(I + GK)^{-1}$. As usual we assume that the nominal closed loop system is internally stable under the controller $K$, so that $S_o = (I + G_o K)^{-1}$ is in $RH_\infty$. Our problem is to identify sufficient conditions under which the weighted sensitivity $S$ satisfies the constraint in equation 22 for all perturbations $\Delta$ that satisfy the constraint in equation 21. Closed loop systems that satisfy this specification will be said to exhibit robust performance.
We're looking for a sufficient condition on the nominal (rather than perturbed) system's sensitivity function, for we assume $G_o$ is known whereas $G$ is uncertain. The relationship between $S$ and $S_o$ may be obtained by noting that

\[ S = \left( I + (I + \Delta) G_o K \right)^{-1} = \left( I + G_o K + \Delta G_o K \right)^{-1} \]

Factoring out the return difference function, $I + G_o K$, from the above expression yields

\[ S = \left[ \left( I + \Delta G_o K (I + G_o K)^{-1} \right) (I + G_o K) \right]^{-1} = (I + G_o K)^{-1} \left( I + \Delta G_o K (I + G_o K)^{-1} \right)^{-1} = S_o \left( I + \Delta G_o K S_o \right)^{-1} \tag{23} \]

Equation 23 characterizes the perturbed system's sensitivity in terms of the nominal sensitivity $S_o$ and the uncertainty $\Delta$.
We use the relationship in equation 23 to find bounds on the nominal system ensuring robust performance. We start with a candidate bound on $S_o$ and $T_o$ and demonstrate that this bound enforces the robust performance of the perturbed plant. As usual, $S_o = (I + G_o K)^{-1}$ is the nominal sensitivity and $T_o$ is the nominal complementary sensitivity $T_o = G_o K (I + G_o K)^{-1}$. Our candidate bound requires that

\[ 1 > \bar{\sigma}(W_p(j\omega))\,\bar{\sigma}(S_o(j\omega)) + \bar{\sigma}(W_\Delta(j\omega))\,\bar{\sigma}(T_o(j\omega)) = \bar{\sigma}(W_p(j\omega))\,\bar{\sigma}(S_o(j\omega)) + \bar{\sigma}(W_\Delta(j\omega))\,\bar{\sigma}(G_o K S_o(j\omega)) \tag{24} \]

for all $\omega$.
The bound in equation 24 may be rewritten as

\[ \bar{\sigma}(W_p(j\omega))\,\bar{\sigma}(S_o(j\omega)) < 1 - \bar{\sigma}(W_\Delta(j\omega))\,\bar{\sigma}(T_o(j\omega)) \]

Because the maximum singular value is submultiplicative, we also know that

\[ \bar{\sigma}(W_\Delta(j\omega))\,\bar{\sigma}(T_o(j\omega)) \ge \bar{\sigma}\left( W_\Delta T_o(j\omega) \right) \]

for all $\omega$. These bounds, of course, imply that

\[ 1 > \left\| W_p S_o \right\|_\infty, \qquad 1 > \left\| W_\Delta T_o \right\|_\infty \]

The first inequality means that the nominal system satisfies the performance specification. We refer to this as nominal performance. The second inequality is the robust stability condition we derived in the preceding two sections. In other words, we've just shown that if we enforce the robust performance through the condition in equation 24, we can also ensure the performance of the nominal system (not surprising) and we can ensure the robust stability of the perturbed system (also not surprising).
In view of the above discussion, it should be apparent that our robust performance condition is that

\[ \bar{\sigma}(W_p(j\omega))\,\bar{\sigma}(S_o(j\omega)) + \bar{\sigma}(W_\Delta(j\omega))\,\bar{\sigma}(T_o(j\omega)) < 1 \]

for all $\omega$. Alternatively, we may pose this in terms of the $H_\infty$ norms as

\[ \left\| W_p S_o \right\|_\infty + \left\| W_\Delta T_o \right\|_\infty < 1 \]
Example: Consider the closed loop system in figure 10 whose nominal loop function, $L_o = G_o K$, is formed from the series combination of the nominal plant

\[ G_o(s) = \begin{bmatrix} \dfrac{200}{s+4} & 0 \\[1.5ex] \dfrac{200s(3s+16)}{(s+4)(s+8)} & \dfrac{400(s+200)}{s+8} \end{bmatrix} \]

and the controller

\[ K = \begin{bmatrix} \dfrac{1}{s} & 0 \\[1.5ex] 0 & \dfrac{1}{s} \end{bmatrix} \]

The true plant, $G = (I + \Delta)G_o$, is multiplicatively perturbed by an $RH_\infty$ uncertainty $\Delta$ satisfying $\| \Delta W_\Delta^{-1} \|_\infty < 1$ in which

\[ W_\Delta(s) = \frac{10^{-3}s + 1}{2\left(10^{-6}s + 1\right)} I \]

and $I$ is a 2 by 2 identity matrix. We assume a performance bound on the perturbed sensitivity function, $S = (I + GK)^{-1}$, of the form $\| W_p S \|_\infty \le 1$ where

\[ W_p(s) = \frac{s + 100}{2s} I \]

Once again, $I$ is a 2 by 2 identity matrix.
The maximum singular values of $W_\Delta$ and $W_p$ are plotted below in the lefthand plot of figure 18. From this plot, we can see that this is a well-posed set of specifications because $\bar{\sigma}(W_\Delta(j\omega))$ and $\bar{\sigma}(W_p(j\omega))$ are not both greater than one at the same frequency. From the figure we can identify three different frequency intervals over which different aspects of our robust performance specification are emphasized. For instance, in the region where $\bar{\sigma}(W_p(j\omega)) > 1$, we are emphasizing the nominal performance part of the specification. This occurs for frequencies below 50 rad/sec. In the region where $\bar{\sigma}(W_\Delta(j\omega)) > 1$, we are assuming there is a large modeling error in our plant and this is where the robust stability condition is emphasized. In this particular example, this occurs for frequencies above 2000 rad/sec. Note that there is about a decade of frequencies between 50 and 2000 rad/sec in which both weights are less than one. We refer to this as the transition region because neither the nominal performance nor the robust stability part of the constraint is dominant.

The significance of having a well-posed set of weights cannot be overstated. Poorly chosen weights invariably lead to problems that cannot be solved because no controller will satisfy the constraints. The physical interpretation is that aggressive performance specifications (large $\bar{\sigma}(W_p(j\omega))$) cannot
Figure 18. (left) Maximum singular values of weighting functions; (right) Robust performance condition $\bar{\sigma}(W_p)\bar{\sigma}(S_o) + \bar{\sigma}(W_\Delta)\bar{\sigma}(T_o)$ in dB
be satisfied in the presence of large modeling uncertainty (large $\bar{\sigma}(W_\Delta(j\omega))$). To check robust performance, we plot $\bar{\sigma}(W_p(j\omega))\bar{\sigma}(S_o(j\omega)) + \bar{\sigma}(W_\Delta(j\omega))\bar{\sigma}(T_o(j\omega))$ to see if it is less than 0 dB for all frequencies. This plot is shown in the righthand plot of figure 18. What is apparent from this plot is that the robust performance condition is only satisfied for frequencies greater than 1000 rad/sec. Outside of this interval, the robust performance bound is violated, thereby implying that the system does not have robust performance. What is important about the plot is that it indicates the range of frequencies over which the condition is violated, thereby providing the designer with important information about the reason the closed loop system failed.
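The pointwise check just described can be reproduced numerically. A Python sketch (numpy standing in for the original MATLAB computation; the two probe frequencies are my own choices):

```python
import numpy as np

def Go(s):
    """Nominal 2x2 plant from the example."""
    return np.array([
        [200/(s+4),                      0],
        [200*s*(3*s+16)/((s+4)*(s+8)),   400*(s+200)/(s+8)],
    ])

def rp_level(w):
    """sigma_max(Wp)*sigma_max(So) + sigma_max(Wd)*sigma_max(To) at s = jw.
    Robust performance requires this quantity to be < 1 (0 dB)."""
    s = 1j * w
    K = np.eye(2) / s
    L = Go(s) @ K
    So = np.linalg.inv(np.eye(2) + L)
    To = L @ So
    wp = abs((s + 100) / (2*s))                 # |Wp(jw)| (scalar weight)
    wd = abs((1e-3*s + 1) / (2*(1e-6*s + 1)))   # |Wd(jw)|
    sv = lambda M: np.linalg.svd(M, compute_uv=False)[0]
    return wp * sv(So) + wd * sv(To)

print(rp_level(1.0))   # low frequency: condition violated (> 1)
print(rp_level(1e5))   # high frequency: condition satisfied (< 1)
```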
7. Robust Performance and the Loop Function
The robust performance condition given in the preceding section is of limited use because it only uses the nominal closed loop maps. In practice, one often only has knowledge of the nominal loop function $L_o$. It is therefore of considerable practical interest to determine sufficient bounds on the gain of $L_o$ that assure robust performance. This section derives such conditions.
We first confine our attention to frequencies where $\underline{\sigma}(L_o(j\omega)) > 1$ and find a bound on $S_o$ alone that assures robust performance. By the complementary nature of $S_o$ and $T_o$, we know that

\[ S_o + G_o K S_o = I \]
which implies that

\[ 1 = \bar{\sigma}(I) = \bar{\sigma}\left( S_o(j\omega) + G_o K S_o(j\omega) \right) \ge \left| \bar{\sigma}(S_o(j\omega)) - \bar{\sigma}(G_o K S_o(j\omega)) \right| \]
Remember that we're restricting our attention to frequencies for which $\underline{\sigma}(L_o(j\omega)) > 1$. This means that

\[ \bar{\sigma}(G_o K S_o(j\omega)) \ge \underline{\sigma}(L_o(j\omega))\,\bar{\sigma}(S_o(j\omega)) > \bar{\sigma}(S_o(j\omega)) \]
We may therefore conclude that

\[ 1 \ge \bar{\sigma}(G_o K S_o(j\omega)) - \bar{\sigma}(S_o(j\omega)) \]

or rather that for frequencies in which $\underline{\sigma}(L_o(j\omega)) > 1$,

\[ \bar{\sigma}(T_o(j\omega)) \le 1 + \bar{\sigma}(S_o(j\omega)) \tag{25} \]
We now insert this last relationship between the maximum singular values of the sensitivity functions into our robust performance bound to obtain the desired result. Let's first consider the condition

\[ 1 > \bar{\sigma}(W_p(j\omega))\,\bar{\sigma}(S_o(j\omega)) + \bar{\sigma}(W_\Delta(j\omega))\left( 1 + \bar{\sigma}(S_o(j\omega)) \right) \tag{26} \]

By virtue of the preceding bound in equation 25, this implies

\[ 1 > \bar{\sigma}(W_p(j\omega))\,\bar{\sigma}(S_o(j\omega)) + \bar{\sigma}(W_\Delta(j\omega))\,\bar{\sigma}(T_o(j\omega)) \]

which implies robust performance. We can now take the sufficient condition in equation 26 and solve for $\bar{\sigma}(S_o(j\omega))$ to find that

\[ \bar{\sigma}(S_o(j\omega)) < \frac{1 - \bar{\sigma}(W_\Delta(j\omega))}{\bar{\sigma}(W_p(j\omega)) + \bar{\sigma}(W_\Delta(j\omega))} \]
This last condition assures that the robust performance constraint is satisfied at frequencies where the minimum singular value of the loop function is greater than one.
The preceding condition on $S_o$ can now be recast as a bound on the loop gain itself. We do this by noting that

$$\underline{\sigma}(L_o(j\omega)) - 1 \leq \underline{\sigma}(I + L_o(j\omega)) \leq \underline{\sigma}(L_o(j\omega)) + 1$$

which, since $\bar{\sigma}(S_o(j\omega)) = 1/\underline{\sigma}(I + L_o(j\omega))$, may be rewritten as a bound on the maximum singular value of $S_o$:

$$\frac{1}{\underline{\sigma}(L_o(j\omega)) + 1} \leq \bar{\sigma}(S_o(j\omega)) \leq \frac{1}{\underline{\sigma}(L_o(j\omega)) - 1} \qquad (27)$$
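A numeric spot check of the sandwich bound in equation 27, on a hypothetical loop function value (the same style of illustrative diagonally dominant matrix as before):

```python
import numpy as np

sv = lambda A: np.linalg.svd(A, compute_uv=False)

# hypothetical loop function value with sigma_min(Lo) > 1 (illustrative)
Lo = np.array([[4.0, 0.3, 0.0],
               [0.1, 3.0, 0.2],
               [0.0, 0.4, 5.0]])
s_min = sv(Lo)[-1]
assert s_min > 1.0

s_max_So = sv(np.linalg.inv(np.eye(3) + Lo))[0]  # sigma_max(S_o)

# equation 27: 1/(s_min + 1) <= sigma_max(S_o) <= 1/(s_min - 1)
print(1 / (s_min + 1) <= s_max_So <= 1 / (s_min - 1))  # True
```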
Now constrain the minimum singular value of the loop function so that

$$\frac{1}{\underline{\sigma}(L_o(j\omega)) - 1} < \frac{1 - \bar{\sigma}(W_\Delta(j\omega))}{\bar{\sigma}(W_p(j\omega)) + \bar{\sigma}(W_\Delta(j\omega))}$$
at frequencies where $\underline{\sigma}(L_o(j\omega)) > 1$. Combining this condition with the constraint on $S_o$ assuring robust performance shows that it is sufficient to guarantee robust performance of the system over these frequencies. The preceding bound is more conveniently stated as

$$\underline{\sigma}(L_o(j\omega)) > \frac{1 + \bar{\sigma}(W_p(j\omega))}{1 - \bar{\sigma}(W_\Delta(j\omega))} \qquad (28)$$

So if the above inequality is satisfied for $\omega$ where $\underline{\sigma}(L_o(j\omega)) > 1$, then we know the robust performance condition in equation 24 is satisfied at those frequencies.
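The restatement is pure algebra: $\frac{1}{x-1} < \frac{1-d}{p+d}$ with $x > 1$ and $d < 1$ rearranges to $x > \frac{1+p}{1-d}$. A quick numeric check of the equivalence at a handful of illustrative values (the values are arbitrary, chosen only to exercise both sides of the inequality):

```python
# illustrative values: x = sigma_min(Lo), p = sigma_max(Wp), d = sigma_max(Wd)
def cond_27_form(x, p, d):
    return 1 / (x - 1) < (1 - d) / (p + d)

def cond_28_form(x, p, d):
    return x > (1 + p) / (1 - d)

# the two forms agree wherever x > 1 and d < 1
checks = [(x, p, d)
          for x in (1.5, 2.0, 5.0, 50.0)
          for p in (0.1, 1.0, 10.0)
          for d in (0.05, 0.5, 0.9)]
print(all(cond_27_form(x, p, d) == cond_28_form(x, p, d)
          for (x, p, d) in checks))  # True
```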
A similar derivation may be used to obtain a robust performance bound on $L_o$ at other frequencies not covered in the conditions given above. In this case, if we can guarantee that

$$\bar{\sigma}(L_o(j\omega)) < \frac{1 - \bar{\sigma}(W_p(j\omega))}{1 + \bar{\sigma}(W_\Delta(j\omega))} \qquad (29)$$

at $\omega$ where $\bar{\sigma}(L_o(j\omega)) < 1$, then the robust performance condition will be satisfied for this range of frequencies.
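In practice, conditions 28 and 29 are checked pointwise over a frequency grid. The sketch below does this for a hypothetical SISO system; all weights and the loop gain are invented first-order shapes, not the example from the text. The low-frequency bound is tested where $|L_o| > 1$ (and $|W_\Delta| < 1$, so the bound is meaningful) and the high-frequency bound where $|L_o| < 1$ (and $|W_p| < 1$).

```python
import numpy as np

w = np.logspace(-2, 7, 400)            # frequency grid, rad/sec

# hypothetical weights and loop gain (illustrative first-order shapes)
Wp = np.abs(100 / (1j * w + 1))        # performance weight: large at low freq
Wd = np.abs(0.01 * (1j * w + 1))       # uncertainty weight: large at high freq
Lo = 1000 / w**2                       # loop gain rolling off at -40 dB/decade

# equation 28, applicable where |Lo| > 1 and the bound is meaningful (Wd < 1)
low = (Lo > 1) & (Wd < 1)
ok_low = Lo[low] > (1 + Wp[low]) / (1 - Wd[low])

# equation 29, applicable where |Lo| < 1 and the bound is meaningful (Wp < 1)
high = (Lo < 1) & (Wp < 1)
ok_high = Lo[high] < (1 - Wp[high]) / (1 + Wd[high])

# report the fraction of tested frequencies at which each bound holds
print(ok_low.mean(), ok_high.mean())
```

Frequencies excluded by both masks fall in the crossover gap where, as noted below, these sufficient conditions say nothing.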
The preceding discussion showed that we can use bounds on the maximum and minimum singular values of the loop function $L_o$ to check the robust performance conditions. These bounds, however, only apply at frequencies where $\bar{\sigma}(L_o(j\omega)) < 1$ or $\underline{\sigma}(L_o(j\omega)) > 1$. At frequencies where neither condition is satisfied, we can say nothing definitive about robust performance. Nonetheless, these particular bounds are of immense use in the practical design of robust control systems using the loopshaping method discussed in the next chapter. The following example demonstrates how these bounds might be applied.
Example: We now return to the example in the preceding section. Figure 19 plots the minimum and maximum singular values of the loop function $L_o$. From this plot we see that the region in which $\underline{\sigma}(L_o(j\omega)) > 1$ is $\omega < 10$ rad/sec. Over this range of frequencies, our system is enforcing a tracking constraint on the nominal system. Because the plant uncertainty is small over this region, enforcement of the nominal performance constraint will satisfy, in large part, our robust performance requirement. The region for which $\bar{\sigma}(L_o(j\omega)) < 1$ is $\omega > 1000$ rad/sec. Over this range of frequencies, our system is enforcing a robust stability condition because the plant uncertainty (as measured by $W_\Delta$) is large there. Figure 19 plots the loop function bound $\frac{1 + \bar{\sigma}(W_p)}{1 - \bar{\sigma}(W_\Delta)}$ for $\omega$ such that $\underline{\sigma}(L_o(j\omega)) > 1$. We'll refer to this as the nominal performance bound. It also plots the loop function bound $\frac{1 - \bar{\sigma}(W_p)}{1 + \bar{\sigma}(W_\Delta)}$ for $\omega$ such that $\bar{\sigma}(L_o(j\omega)) < 1$. We'll refer to this as the robust stability bound. Because of our choice for $W_p$ and $W_\Delta$, $\bar{\sigma}(W_\Delta)$ is small at low frequencies, so the nominal performance bound is approximately $1 + \bar{\sigma}(W_p)$, while $\bar{\sigma}(W_p)$ is small at high frequencies, so the robust stability bound is approximately $\frac{1}{\bar{\sigma}(W_\Delta)}$.
[Figure 19 here: singular value plot, frequency axis $10^{-2}$ to $10^7$ rad/sec, magnitude axis $-250$ to $150$ dB, showing $\bar{\sigma}(L_o(j\omega))$ and $\underline{\sigma}(L_o(j\omega))$ together with the bounds $\frac{1 + \bar{\sigma}(W_p)}{1 - \bar{\sigma}(W_\Delta)}$ and $\frac{1 - \bar{\sigma}(W_p)}{1 + \bar{\sigma}(W_\Delta)}$, and the interval over which the robust performance condition is satisfied.]
Figure 19. Singular value plot of Nominal Loop Function and Bounds for Robust Performance
The nominal performance bound, therefore, is determined primarily by $W_p$, our performance weighting function, and the robust stability bound is determined primarily by $1/\bar{\sigma}(W_\Delta)$.
These weighting functions provide a systematic way of accounting for modeling uncertainty in control system design. Control systems that are capable of assuring closed loop stability over a set of bounded uncertainty are said to have robust stability. In practice, however, we are usually more concerned with assuring a specified level of performance over a bounded uncertainty set. Such control systems are said to have robust performance. This section derived a variety of sufficient conditions for both robust stability and robust performance of closed loop systems with unstructured multiplicative uncertainty. These conditions were expressed as bounds on the nominal closed loop sensitivity function as well as the open loop transfer function. The resulting sufficient conditions can be verified graphically, and this provides the theoretical foundation for the loopshaping design method to be presented in the next chapter.
The examples of uncertain systems shown in section 1 were taken from exercises and examples in [3]. The example in section 2 showing the empirical determination of weighting functions was taken from a problem in [6]. The proofs for the robust stability condition are based on a similar approach taken in [1], where we first establish a Nyquist test and then use the small gain theorem to re-establish the bounds for time-varying and nonlinear uncertainties. This provides a good example of how a researcher often takes an initial limited result and systematically expands its scope. The notation used here, however, follows [2] more than [1]. The examples in this section are drawn from an example in [5] (SISO example) and from [4] for the MIMO example. Multipliers play an important role in the application of the small gain theorem. The treatment here follows that of [1], and adds the discussion by [2] to cover $\mu$-synthesis. Finally, the robust performance discussion is based on extending the SISO discussion in [5] to the MIMO case.