
1/54

Linear Matrix Inequalities in Control


Carsten Scherer
Delft Center for Systems and Control (DCSC)
Delft University of Technology
The Netherlands
Siep Weiland
Department of Electrical Engineering
Eindhoven University of Technology
The Netherlands
Optimization and Control
2/54
Classically optimization and control are highly intertwined:
- Optimal control (Pontryagin/Bellman)
- LQG-control or H2-control
- H∞-synthesis and robust control
- Model Predictive Control

Main Theme
View the control input signal and/or the feedback controller as the decision variable of an optimization problem.
Desired specs are imposed as constraints on the controlled system.
Sketch of Issues
3/54
How to distinguish easy from difficult optimization problems?
What are the consequences of convexity in optimization?
What is robust optimization?
Which performance measures can be incorporated?
How can controller synthesis be convexified?
How can we check robustness by convex optimization?
What are the limits for the synthesis of robust controllers?
How can we perform systematic gain-scheduling?
Outline
3/54
From Optimization to Convex Semi-Definite Programming
Convex Sets and Convex Functions
Linear Matrix Inequalities (LMIs)
Truss-Topology Design
LMIs and Stability
A First Glimpse at Robustness
Optimization
4/54
The ingredients of any optimization problem are:
- A universe of decisions x ∈ 𝒳
- A subset 𝒮 ⊆ 𝒳 of feasible decisions
- A cost function or objective function f : 𝒮 → R

Optimization Problem/Programming Problem
Find a feasible decision x ∈ 𝒮 with the least possible cost f(x).

In short such a problem is denoted as

  minimize   f(x)
  subject to x ∈ 𝒮
Questions in Optimization
5/54
What is the least possible cost? Compute the optimal value

  f_opt := inf_{x∈𝒮} f(x).

Convention: If 𝒮 = ∅ then f_opt = +∞.
If f_opt = −∞ the problem is said to be unbounded from below.

How can we compute almost optimal solutions? For any chosen positive absolute error ε, determine

  x_ε ∈ 𝒮 with f_opt ≤ f(x_ε) ≤ f_opt + ε.

By definition of the infimum such an x_ε does exist for all ε > 0.
Questions in Optimization
6/54
Is there an optimal solution? Does there exist

  x_opt ∈ 𝒮 with f(x_opt) = f_opt?

If yes, the minimum is attained and we write

  f_opt = min_{x∈𝒮} f(x).

The set of all optimal solutions is denoted as

  arg min_{x∈𝒮} f(x) = {x ∈ 𝒮 : f(x) = f_opt}.

Is the optimal solution unique?
Recap: Inmum and Minimum of Functions
7/54
Any f : 𝒮 → R has an infimum l ∈ R ∪ {−∞} denoted as inf_{x∈𝒮} f(x).
The infimum is uniquely defined by the following properties:
- l ≤ f(x) for all x ∈ 𝒮.
- l finite: For all ε > 0 there exists x ∈ 𝒮 with f(x) ≤ l + ε.
- l = −∞: For all γ > 0 there exists x ∈ 𝒮 with f(x) ≤ −γ.

If there exists x_0 ∈ 𝒮 with f(x_0) = inf_{x∈𝒮} f(x) we say that f attains its minimum on 𝒮 and write l = min_{x∈𝒮} f(x).

If it exists, the minimum is uniquely defined through the properties:
- l ≤ f(x) for all x ∈ 𝒮.
- There exists some x_0 ∈ 𝒮 with f(x_0) = l.
Nonlinear Programming (NLP)
8/54
With decision universe 𝒳 = R^n, the feasible set 𝒮 is often defined by constraint functions g_1 : 𝒳 → R, ..., g_m : 𝒳 → R as

  𝒮 = {x ∈ 𝒳 : g_1(x) ≤ 0, ..., g_m(x) ≤ 0}.

The general nonlinear optimization problem is formulated as

  minimize   f(x)
  subject to x ∈ 𝒳, g_1(x) ≤ 0, ..., g_m(x) ≤ 0.

By exploiting particular properties of f, g_1, ..., g_m (e.g. smoothness), optimization algorithms typically allow one to iteratively compute locally optimal solutions x_0: There exists an ε > 0 such that x_0 is optimal on

  𝒮 ∩ {x ∈ 𝒳 : ‖x − x_0‖ ≤ ε}.
Example: Quadratic Program
9/54
f : R^n → R is quadratic iff there exists a symmetric matrix P with

  f(x) = col(1, x)^T P col(1, x).

Quadratically constrained quadratic program:

  minimize   col(1, x)^T P_0 col(1, x)
  subject to x ∈ R^n, col(1, x)^T P_k col(1, x) ≤ 0, k = 1, ..., m

where P_0, P_1, ..., P_m ∈ S^{n+1}, the set of symmetric (n+1)×(n+1) matrices.
Linear Programming (LP)
10/54
With the decision vector x = (x_1, ..., x_n)^T ∈ R^n consider the problem

  minimize   c_1 x_1 + ... + c_n x_n
  subject to a_11 x_1 + ... + a_1n x_n ≤ b_1
                          ...
             a_m1 x_1 + ... + a_mn x_n ≤ b_m

The cost is linear and the set of feasible decisions is defined by finitely many affine inequality constraints.

Simplex algorithms or interior point methods allow one to efficiently
1. decide whether the constraint set is feasible (value < +∞),
2. decide whether the problem is bounded from below (value > −∞),
3. compute an (always existing) optimal solution if 1. & 2. are true.
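The geometry behind these algorithms is that the optimum of a feasible, bounded LP is attained at a vertex of the feasible polyhedron. A minimal NumPy sketch (hypothetical 2-variable data; brute-force enumeration of constraint intersections rather than a real simplex or interior point method, so only sensible for tiny problems):

```python
import itertools

import numpy as np

# Toy LP (hypothetical data):
#   minimize  -x1 - 2*x2
#   s.t.      x1 <= 3, x2 <= 2, x1 + x2 <= 4, x1 >= 0, x2 >= 0
c = np.array([-1.0, -2.0])
A = np.array([[ 1.0,  0.0],
              [ 0.0,  1.0],
              [ 1.0,  1.0],
              [-1.0,  0.0],
              [ 0.0, -1.0]])
b = np.array([3.0, 2.0, 4.0, 0.0, 0.0])

# Any vertex of {x : Ax <= b} is the intersection of two active constraints,
# so enumerating all pairs visits every vertex.
best = (np.inf, None)
for i, j in itertools.combinations(range(len(b)), 2):
    Aij = A[[i, j]]
    if abs(np.linalg.det(Aij)) < 1e-12:
        continue                                  # parallel constraints
    x = np.linalg.solve(Aij, b[[i, j]])
    if np.all(A @ x <= b + 1e-9):                 # keep feasible vertices only
        best = min(best, (float(c @ x), tuple(x)))

print(best)  # (-6.0, (2.0, 2.0)): optimum attained at the vertex (2, 2)
```

Real solvers of course never enumerate the (exponentially many) vertices; the sketch only illustrates where the optimum lives.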
Linear Programming (LP)
11/54
Major early contributions in the 1940s by
- George Dantzig (simplex method)
- John von Neumann (duality)
- Leonid Kantorovich (economics applications)

Leonid Khachiyan proved polynomial-time solvability in 1979.
Narendra Karmarkar introduced an interior point method in 1984.

Has numerous applications, for example in economic planning problems (business management, flight scheduling, resource allocation, finance).

LPs have spurred the development of optimization theory and appear as subproblems in many optimization algorithms.
Recap
12/54
For a real or complex matrix A the inequality A ⪯ 0 means that A is Hermitian and negative semi-definite.

A is defined to be Hermitian if A = A* = (Ā)^T. If A is real then this amounts to A = A^T, and A is then called symmetric.

All eigenvalues of Hermitian matrices are real.

By definition A is negative semi-definite if A = A* and

  x* A x ≤ 0 for all complex vectors x ≠ 0.

A is negative semi-definite iff all its eigenvalues are non-positive.

A ⪯ B means by definition: A, B are Hermitian and A − B ⪯ 0.
A ≺ B, A ⪰ B, A ≻ B are defined/characterized analogously.
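The eigenvalue characterization turns the definition into a finite test. A minimal NumPy sketch (the example matrices are hypothetical):

```python
import numpy as np

# A <= 0 in the matrix sense: Hermitian with no eigenvalue above zero.
def is_neg_semidef(A, tol=1e-10):
    A = np.asarray(A)
    if not np.allclose(A, A.conj().T):   # must be Hermitian to begin with
        return False
    return bool(np.all(np.linalg.eigvalsh(A) <= tol))

A = np.array([[-2.0,  1.0],
              [ 1.0, -2.0]])   # eigenvalues -1 and -3
B = np.array([[ 0.0,  1.0],
              [ 1.0,  0.0]])   # eigenvalues -1 and +1: indefinite

print(is_neg_semidef(A), is_neg_semidef(B))  # True False
```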
Recap
13/54
Let A be partitioned with square diagonal blocks as

  A = [ A_11 ... A_1m ]
      [  .         .  ]
      [ A_m1 ... A_mm ].

Then

  A ≺ 0 implies A_11 ≺ 0, ..., A_mm ≺ 0.

Prototypical Proof. Choose any vector z_i ≠ 0 of length compatible with the size of A_ii. Define z = (0, ..., 0, z_i^T, 0, ..., 0)^T ≠ 0 with the zero blocks compatible in size with A_11, ..., A_mm. This construction implies z^T A z = z_i^T A_ii z_i. Since A ≺ 0, we infer z^T A z < 0. Therefore z_i^T A_ii z_i < 0. Since z_i ≠ 0 was arbitrary we infer A_ii ≺ 0.
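The statement is easy to confirm numerically. In the sketch below the matrix A is negative definite by construction (an assumption of this example), and two diagonal blocks are checked with NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = -(M @ M.T) - np.eye(5)          # A = -(M M^T + I) is negative definite

# Diagonal blocks inherit negative definiteness:
A11, A22 = A[:2, :2], A[2:, 2:]
print(np.linalg.eigvalsh(A11).max() < 0,
      np.linalg.eigvalsh(A22).max() < 0)   # True True
```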
Semi-Denite Programming (SDP)
14/54
Let us now assume that the constraint functions G_1, ..., G_m map 𝒳 into the set of symmetric matrices, and define the feasible set 𝒮 as

  𝒮 = {x ∈ 𝒳 : G_1(x) ⪯ 0, ..., G_m(x) ⪯ 0}.

The general semi-definite program (SDP) is formulated as

  minimize   f(x)
  subject to x ∈ 𝒳, G_1(x) ⪯ 0, ..., G_m(x) ⪯ 0.

- This includes NLPs as a special case.
- It is called convex if f and G_1, ..., G_m are convex.
- It is called a linear matrix inequality (LMI) optimization problem or linear SDP if f and G_1, ..., G_m are affine.
Outline
14/54
From Optimization to Convex Semi-Definite Programming
Convex Sets and Convex Functions
Linear Matrix Inequalities (LMIs)
Truss-Topology Design
LMIs and Stability
A First Glimpse at Robustness
Recap: Ane Sets and Functions
15/54
The set 𝒮 in the vector space 𝒳 is affine if

  λx_1 + (1 − λ)x_2 ∈ 𝒮 for all x_1, x_2 ∈ 𝒮, λ ∈ R.

The matrix-valued function F defined on 𝒮 is affine if the domain of definition 𝒮 is affine and if

  F(λx_1 + (1 − λ)x_2) = λF(x_1) + (1 − λ)F(x_2)

for all x_1, x_2 ∈ 𝒮, λ ∈ R.

For points x_1, x_2 ∈ 𝒮 recall that
- {λx_1 + (1 − λ)x_2 : λ ∈ R} is the line through x_1, x_2,
- {λx_1 + (1 − λ)x_2 : λ ∈ [0, 1]} is the line segment between x_1, x_2.
Recap: Convex Sets and Functions
16/54
The set 𝒮 in the vector space 𝒳 is convex if

  λx_1 + (1 − λ)x_2 ∈ 𝒮 for all x_1, x_2 ∈ 𝒮, λ ∈ (0, 1).

The symmetric-valued function F defined on 𝒮 is convex if the domain of definition 𝒮 is convex and if

  F(λx_1 + (1 − λ)x_2) ⪯ λF(x_1) + (1 − λ)F(x_2)

for all x_1, x_2 ∈ 𝒮, λ ∈ (0, 1).

F is strictly convex if ⪯ can be replaced by ≺ for all x_1 ≠ x_2.

If F is real-valued, the inequalities ⪯ and ≺ are the same as the usual inequalities ≤ and < between real numbers. Therefore our definition captures the usual one for real-valued functions!
Examples of Convex and Non-Convex Sets
17/54
Examples of Convex Sets
18/54
- The intersection of any family of convex sets is convex.
- With a ∈ R^n \ {0} and b ∈ R, the hyperplane {x ∈ R^n : a^T x = b} is affine while the half-space {x ∈ R^n : a^T x ≤ b} is convex.
- The intersection of finitely many hyperplanes and half-spaces defines a polyhedron. Any polyhedron is convex and can be described as

    {x ∈ R^n : Ax ≤ b, Dx = e}

  with suitable matrices A, D and vectors b, e. A compact polyhedron is said to be a polytope.
- The set of negative semi-definite/negative definite matrices is convex.
Convex Combination and Convex Hull
19/54
x ∈ 𝒳 is a convex combination of x_1, ..., x_l ∈ 𝒳 if

  x = Σ_{k=1}^{l} λ_k x_k with λ_k ≥ 0 and Σ_{k=1}^{l} λ_k = 1.

A convex combination of convex combinations is a convex combination.

The convex hull co(𝒮) of any subset 𝒮 ⊆ 𝒳 is defined in one of the following equivalent fashions:
1. The set of all convex combinations of points in 𝒮.
2. The intersection of all convex sets that contain 𝒮.

For arbitrary 𝒮 the convex hull co(𝒮) is convex.
Explicit Description of Polytopes
20/54
Any point in the convex hull of the finite set {x_1, ..., x_l} is given by a (not necessarily unique) convex combination of the generators x_1, ..., x_l.

The convex hull co{x_1, ..., x_l} of finitely many points is a polytope. Any polytope can be represented in this way.

In this fashion one can explicitly describe polytopes, in contrast to an implicit description such as {x ∈ R^n : Ax ≤ b}. Note that the implicit description is often preferable for reasons of computational complexity.

Example: {x ∈ R^n : a ≤ x ≤ b} is defined by 2n inequalities but requires 2^n generators for its description as a convex hull.
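The gap between the two descriptions is easy to make concrete for the unit cube. A NumPy sketch (n = 4 is an arbitrary choice for this example):

```python
import itertools

import numpy as np

n = 4
# Implicit description of the cube [-1, 1]^n: 2n affine inequalities.
A = np.vstack([np.eye(n), -np.eye(n)])
b = np.ones(2 * n)

# Explicit description: all 2^n sign patterns as generators of the hull.
vertices = list(itertools.product([-1.0, 1.0], repeat=n))
print(len(b), len(vertices))   # 8 inequalities versus 16 generators

# Every generator satisfies the implicit description:
print(all(np.all(A @ np.array(v) <= b + 1e-12) for v in vertices))  # True
```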
Examples of Convex and Non-Convex Functions
21/54
On Checking Convexity of Functions
22/54
All real-valued or Hermitian-valued affine functions are convex.

It is often not simple to verify whether a non-affine function is convex. The following fact (for 𝒮 ⊆ R^n with interior points) might help.

  The C²-map f : 𝒮 → R is convex iff ∇²f(x) ⪰ 0 for all x ∈ 𝒮.

It is not easy to find convexity tests for Hermitian-valued functions in the literature. The following reduction to real-valued functions often helps.

  The Hermitian-valued map F defined on 𝒮 is convex iff

    𝒮 ∋ x ↦ z* F(x) z ∈ R

  is convex for all complex vectors z ∈ C^m.
Convex Constraints
23/54
Suppose that F defined on 𝒮 is convex. For all Hermitian H the strict and non-strict sub-level sets

  {x ∈ 𝒮 : F(x) ≺ H} and {x ∈ 𝒮 : F(x) ⪯ H}

of level H are convex.

Note that the converse is in general not true!

The feasible set of the convex SDP on slide 14 is convex.

Convex sets are typically described as the finite intersection of such sub-level sets. If the involved functions are even affine this is often called an LMI representation.

Convexity is necessary for a set to have an LMI representation.
Example: Quadratic Functions
24/54
The quadratic function

  f(x) = col(1, x)^T [ q  s^T ] col(1, x) = q + 2s^T x + x^T R x
                     [ s  R   ]

is convex iff R is positive semi-definite.

The zero sub-level set of a convex quadratic function is a half-space (R = 0) or an ellipsoid. The ellipsoid is compact if R ≻ 0.

For later use: f(x) = q + 2s^T x + x^T R x with R = R^T is nonnegative for all x ∈ R^n iff its defining matrix satisfies

  [ q  s^T ] ⪰ 0.
  [ s  R   ]
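The role of R can be probed through the midpoint version of the convexity inequality. A NumPy sketch (q, s, the two R matrices, and the test points are all hypothetical):

```python
import numpy as np

def quad(q, s, R, x):
    x = np.asarray(x)
    return q + 2 * s @ x + x @ R @ x

def midpoint_gap(q, s, R, x1, x2):
    # f(midpoint) minus the average of f at the endpoints; convexity forces <= 0.
    x1, x2 = np.asarray(x1), np.asarray(x2)
    xm = 0.5 * (x1 + x2)
    return quad(q, s, R, xm) - 0.5 * (quad(q, s, R, x1) + quad(q, s, R, x2))

q, s = 1.0, np.array([1.0, -2.0])
Rcvx = np.diag([2.0, 1.0])     # R >= 0: f is convex
Rncv = np.diag([1.0, -1.0])    # R indefinite: f is not convex

x1, x2 = [0.0, 1.0], [0.0, -1.0]
print(midpoint_gap(q, s, Rcvx, x1, x2))  # -1.0: below the chord, as convexity requires
print(midpoint_gap(q, s, Rncv, x1, x2))  #  1.0: midpoint above the chord, not convex
```

For a quadratic f the gap equals −¼ (x_1 − x_2)^T R (x_1 − x_2), so its sign is governed entirely by R.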
Important Properties
25/54
Property 1: Jensen's Inequality.
If F defined on 𝒮 is convex then for all x_1, ..., x_l ∈ 𝒮 and λ_1, ..., λ_l ≥ 0 with λ_1 + ... + λ_l = 1 we infer λ_1 x_1 + ... + λ_l x_l ∈ 𝒮 and

  F(λ_1 x_1 + ... + λ_l x_l) ⪯ λ_1 F(x_1) + ... + λ_l F(x_l).

This is the source of many inequalities in mathematics! Proof: a convex combination of convex combinations is a convex combination!

Property 2. If F and G defined on 𝒮 are convex then F + G and αF for α ≥ 0 as well as

  𝒮 ∋ x ↦ λ_max(F(x)) ∈ R

are all convex. If F and G are scalar-valued then max{F, G} is convex.

There are many other operations on functions that preserve convexity.
A Key Consequence
26/54
Let F be convex. Then

  F(x) ≺ 0 for all x ∈ co{x_1, ..., x_l}

if and only if

  F(x_k) ≺ 0 for all k = 1, ..., l.

Proof. We only need to show "if". Choose x ∈ co{x_1, ..., x_l}. Then there exist λ_1 ≥ 0, ..., λ_l ≥ 0 that sum up to one with

  x = λ_1 x_1 + ... + λ_l x_l.

By convexity and Jensen's inequality we infer

  F(x) ⪯ λ_1 F(x_1) + ... + λ_l F(x_l) ≺ 0

since the set of negative definite matrices is convex.
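Since affine functions are convex, the result applies in particular to affine matrix maps, which is the LMI case used later. A NumPy sketch with hypothetical data matrices:

```python
import numpy as np

# Affine (hence convex) matrix map F(x) = A0 + x1*A1 + x2*A2, hypothetical data.
A0 = -3 * np.eye(2)
A1 = np.array([[1.0, 0.5], [0.5, 0.0]])
A2 = np.array([[0.0, -1.0], [-1.0, 1.0]])
F = lambda x: A0 + x[0] * A1 + x[1] * A2

neg_def = lambda X: np.linalg.eigvalsh(X).max() < 0

generators = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([-1.0, -1.0])]
print([neg_def(F(v)) for v in generators])   # [True, True, True]

# F(x) < 0 then holds at every convex combination of the generators:
lam = np.array([0.2, 0.3, 0.5])
x = sum(l * v for l, v in zip(lam, generators))
print(neg_def(F(x)))   # True
```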
General Remarks on Convex Programs
27/54
Solvers for general nonlinear programs typically determine locally optimal solutions. There are no guarantees for global optimality.

Main feature of convex programs:

  Locally optimal solutions are globally optimal.

Convexity alone neither guarantees that the optimal value is finite, nor that there exists an optimal solution/efficient solution algorithms.

In general convex programs can be solved with guarantees on accuracy if one can compute (sub)gradients of the objective/constraint functions.

Strictly feasible linear semi-definite programs are convex and can be solved very efficiently, with guarantees on accuracy at termination.
Outline
27/54
From Optimization to Convex Semi-Definite Programming
Convex Sets and Convex Functions
Linear Matrix Inequalities (LMIs)
Truss-Topology Design
LMIs and Stability
A First Glimpse at Robustness
Linear Matrix Inequalities (LMIs)
28/54
With the decision vector x = (x_1, ..., x_n)^T ∈ R^n a system of LMIs is

  A_0^1 + x_1 A_1^1 + ... + x_n A_n^1 ⪯ 0
                  ...
  A_0^m + x_1 A_1^m + ... + x_n A_n^m ⪯ 0

where A_0^i, A_1^i, ..., A_n^i, i = 1, ..., m, are real symmetric data matrices.

LMI feasibility problem: Test whether there exist x_1, ..., x_n that render the LMIs satisfied.

LMI optimization problem: Minimize c_1 x_1 + ... + c_n x_n over all x_1, ..., x_n that satisfy the LMIs.

Only simple cases can be treated analytically → numerical techniques.
LMI Optimization Problems
29/54
  minimize   c_1 x_1 + ... + c_n x_n
  subject to A_0^i + x_1 A_1^i + ... + x_n A_n^i ⪯ 0, i = 1, ..., m

A natural generalization of LPs, with inequalities defined by the cone of positive semi-definite matrices. A considerably richer class than LPs.

The i-th constraint can be equivalently expressed as

  λ_max(A_0^i + x_1 A_1^i + ... + x_n A_n^i) ≤ 0.

Interior-point or bundle methods allow one to effectively decide about feasibility/boundedness and to determine almost optimal solutions.

The problem must be strictly feasible: There exists some decision x for which the constraint inequalities are strictly satisfied.
Testing Strict Feasibility
30/54
Introduce the auxiliary variable t ∈ R and consider

  A_0^1 + x_1 A_1^1 + ... + x_n A_n^1 ⪯ tI
                  ...
  A_0^m + x_1 A_1^m + ... + x_n A_n^m ⪯ tI.

Find the infimal value t* of t over these LMI constraints.

This problem is strictly feasible. We can hence compute t* efficiently.
If t* is negative then the original problem is strictly feasible.
If t* is non-negative then the original problem is not strictly feasible.
LMI Optimization Problems
31/54
Developments
- Bellman/Fan initiated the derivation of optimality conditions (1963)
- Jan Willems coined the terminology LMI and revealed the relation to dissipative dynamical systems (1971/72)
- Nesterov/Nemirovski exhibited the essential feature (self-concordance) for the existence of polynomial-time solution algorithms (1988)
- Interior-point methods: Alizadeh (1992), Kamath/Karmarkar (1992)

Suggested books: Boyd/El Ghaoui (1994), El Ghaoui/Niculescu (2000), Ben-Tal/Nemirovski (2001), Boyd/Vandenberghe (2004)
What are LMIs good for?
32/54
Many engineering optimization problems can be (often but not always easily) translated into LMI problems.

Various computationally difficult optimization problems can be effectively approximated by LMI problems.

In practice the description of the data is affected by uncertainty. Robust optimization problems can be handled/approximated by standard LMI problems.

In this course
  How can we solve (robust) control problems with LMIs?
Outline
32/54
From Optimization to Convex Semi-Definite Programming
Convex Sets and Convex Functions
Linear Matrix Inequalities (LMIs)
Truss-Topology Design
LMIs and Stability
A First Glimpse at Robustness
Truss Topology Design
33/54
(figure)
Example: Truss Topology Design
34/54
Connect nodes with N bars of lengths l = col(l_1, ..., l_N) (fixed) and cross-sections x = col(x_1, ..., x_N) (to-be-designed).

Impose bounds a_k ≤ x_k ≤ b_k on the cross-sections and l^T x ≤ v on the total volume (weight). Abbreviate a = col(a_1, ..., a_N), b = col(b_1, ..., b_N).

If external forces f = col(f_1, ..., f_M) (fixed) are applied to the nodes, the construction reacts with the node displacements d = col(d_1, ..., d_M).

Mechanical model: A(x)d = f where A(x) is the stiffness matrix, which depends linearly on x and has to be positive semi-definite.

The goal is to maximize stiffness, for example by minimizing the elastic stored energy f^T d.
Example: Truss Topology Design
35/54
Find x ∈ R^N which minimizes f^T d subject to the constraints

  A(x) ⪰ 0, A(x)d = f, l^T x ≤ v, a ≤ x ≤ b.

Features
- Data: Scalar v, vectors f, a, b, l, and symmetric matrices A_1, ..., A_N which define the linear mapping A(x) = A_1 x_1 + ... + A_N x_N.
- Decision variables: Vectors x and d.
- Objective function: d ↦ f^T d which happens to be linear.
- Constraints: Semi-definite constraint A(x) ⪰ 0, nonlinear equality constraint A(x)d = f, and linear inequality constraints l^T x ≤ v, a ≤ x ≤ b. The latter are interpreted elementwise!
From Truss Topology Design to LMIs
36/54
Render the LMI inequality strict. The equality constraint A(x)d = f then allows us to eliminate d, which results in

  minimize   f^T A(x)^{-1} f
  subject to A(x) ≻ 0, l^T x ≤ v, a ≤ x ≤ b.

Push the objective to the constraints with an auxiliary variable γ:

  minimize   γ
  subject to γ > f^T A(x)^{-1} f, A(x) ≻ 0, l^T x ≤ v, a ≤ x ≤ b.

Trouble: Nonlinear inequality constraint γ > f^T A(x)^{-1} f.
Recap: Congruence Transformations
37/54
Given a Hermitian matrix A and a square non-singular matrix T,

  A ↦ T* A T

is called a congruence transformation of A.

If T is square and non-singular then

  A ≺ 0 if and only if T* A T ≺ 0.

The following more general statement is also easy to remember.

  If A is Hermitian and T is nonsingular, the matrices A and T* A T have the same number of negative, zero, and positive eigenvalues.

What is true if T is not square? ... if T has full column rank?
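This eigenvalue-count statement (Sylvester's law of inertia) can be confirmed numerically. A NumPy sketch (the diagonal A and the random T are hypothetical choices):

```python
import numpy as np

def inertia(A, tol=1e-8):
    # (number of negative, zero, positive eigenvalues) of a Hermitian matrix
    lam = np.linalg.eigvalsh(A)
    return (int((lam < -tol).sum()),
            int((np.abs(lam) <= tol).sum()),
            int((lam > tol).sum()))

A = np.diag([-2.0, 0.0, 3.0, 5.0])        # inertia (1, 1, 2) by inspection
rng = np.random.default_rng(2)
T = rng.standard_normal((4, 4))
while np.linalg.cond(T) > 1e3:            # ensure T is safely nonsingular
    T = rng.standard_normal((4, 4))

print(inertia(A), inertia(T.T @ A @ T))   # identical triples
```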
Recap: Schur-Complement Lemma
38/54
The Hermitian block matrix

  [ Q    S ]
  [ S^T  R ]

is negative definite

if and only if

  Q ≺ 0 and R − S^T Q^{-1} S ≺ 0

if and only if

  R ≺ 0 and Q − S R^{-1} S^T ≺ 0.

Proof. The first equivalence follows from the congruence transformation

  [ I            0 ] [ Q    S ] [ I  −Q^{-1}S ]   [ Q  0                ]
  [ −S^T Q^{-1}  I ] [ S^T  R ] [ 0  I        ] = [ 0  R − S^T Q^{-1} S ].

The proof reveals a more general relation between the numbers of negative, zero, and positive eigenvalues of the three matrices.
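The equivalence is easy to spot-check numerically. A NumPy sketch with hypothetical blocks (Q is 2×2, R is 1×1):

```python
import numpy as np

Q = np.array([[-4.0, 1.0], [1.0, -3.0]])
S = np.array([[1.0], [0.5]])
R = np.array([[-2.0]])
M = np.block([[Q, S], [S.T, R]])

neg_def = lambda X: np.linalg.eigvalsh(X).max() < 0

lhs = neg_def(M)                                        # block matrix test
rhs = neg_def(Q) and neg_def(R - S.T @ np.linalg.inv(Q) @ S)  # Schur test
print(lhs, rhs)  # True True: the two conditions agree
```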
From Truss Topology Design to LMIs
39/54
Render the LMI inequality strict. The equality constraint A(x)d = f then allows us to eliminate d, which results in

  minimize   f^T A(x)^{-1} f
  subject to A(x) ≻ 0, l^T x ≤ v, a ≤ x ≤ b.

Push the objective to the constraints with an auxiliary variable γ:

  minimize   γ
  subject to γ > f^T A(x)^{-1} f, A(x) ≻ 0, l^T x ≤ v, a ≤ x ≤ b.

Linearize with the Schur lemma to obtain the equivalent LMI problem

  minimize   γ
  subject to [ γ  f^T  ] ≻ 0, l^T x ≤ v, a ≤ x ≤ b.
             [ f  A(x) ]
Yalmip-Coding: Truss Topology Design
40/54
  minimize   γ
  subject to [ γ  f^T  ] ≻ 0, l^T x ≤ v, a ≤ x ≤ b.
             [ f  A(x) ]

Suppose A(x) = Σ_{k=1}^{N} x_k m_k m_k^T with the vectors m_k collected in the matrix M.

The following code with Yalmip commands solves the LMI problem:

  gamma=sdpvar(1,1); x=sdpvar(N,1,'full');
  lmi=set([gamma f';f M*diag(x)*M']>0);
  lmi=lmi+set(l'*x<=v);
  lmi=lmi+set(a<=x<=b);
  options=sdpsettings('solver','sedumi');
  solvesdp(lmi,gamma,options); s=double(x);
Result: Truss Topology Design
41/54
(figure)
Quickly Accessible Software
42/54
General purpose Matlab interface Yalmip:
- Free code developed by J. Lofberg and accessible at
  http://control.ee.ethz.ch/~joloef/yalmip.msql
- Can use the usual Matlab syntax to define optimization problems.
- Is extremely easy to use and very versatile. Highly recommended!
- Provides access to a whole suite of public and commercial optimization solvers, including the fastest available dedicated LMI solvers.

Matlab LMI Toolbox for dedicated control applications. Has recently been integrated into the new Robust Control Toolbox.
Outline
42/54
From Optimization to Convex Semi-Definite Programming
Convex Sets and Convex Functions
Linear Matrix Inequalities (LMIs)
Truss-Topology Design
LMIs and Stability
A First Glimpse at Robustness
General Formulation of LMI Problems
43/54
Let 𝒳 be a finite-dimensional real vector space. Suppose the mappings f : 𝒳 → R and F : 𝒳 → {symmetric matrices of fixed size} are affine.

LMI feasibility problem: Test the existence of X ∈ 𝒳 with F(X) ≺ 0.

LMI optimization problem: Minimize f(X) over all X ∈ 𝒳 that satisfy the LMI F(X) ≺ 0.

Translation to standard form: Choose a basis X_1, ..., X_n of 𝒳 and parameterize X = x_1 X_1 + ... + x_n X_n. For any affine f infer

  f(Σ_{k=1}^{n} x_k X_k) = f(0) + Σ_{k=1}^{n} x_k [f(X_k) − f(0)].
Diverse Remarks
44/54
The standard basis of R^{p×q} is X^{(k,l)}, k = 1, ..., p, l = 1, ..., q, where the only nonzero element of X^{(k,l)} is a one at position (k, l).

General affine equation constraints can be routinely eliminated - just recall how to parameterize the solution set of general affine equations. This might be cumbersome and is not required in Yalmip.

Multiple LMI constraints can be collected into one single constraint.

If F(X) is linear in X, then

  F(X) ≺ 0 implies F(αX) ≺ 0 for all α > 0.

With some solvers this might cause numerical trouble. It is avoided by normalization or extra constraints (e.g. by bounding the variables).
Example: Spectral Norm Approximation
45/54
For real data matrices A, B, C and an unknown X consider

  minimize   ‖AXB − C‖
  subject to X ∈ 𝒮

where 𝒮 is a matrix subspace reflecting structural constraints.

Key equivalence with Schur:

  ‖M‖ < γ  ⟺  M^T M ≺ γ²I  ⟺  [ γI   M  ] ≻ 0.
                                [ M^T  γI ]

Norm minimization is hence equivalent to the following LMI problem:

  minimize   γ
  subject to X ∈ 𝒮, [ γI           AXB − C ] ≻ 0
                    [ (AXB − C)^T  γI      ]
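The Schur equivalence behind this reformulation can be checked numerically on a random matrix. A NumPy sketch (random data; γ is swept just below and just above the true spectral norm):

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.standard_normal((3, 4))
norm = np.linalg.norm(M, 2)       # spectral norm = largest singular value

for gamma in (0.9 * norm, 1.1 * norm):
    # ||M|| < gamma  iff  [[gamma*I, M], [M^T, gamma*I]] > 0
    blk = np.block([[gamma * np.eye(3), M],
                    [M.T, gamma * np.eye(4)]])
    print(norm < gamma, np.linalg.eigvalsh(blk).min() > 0)  # the two tests agree
```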
Stability of Dynamical Systems
46/54
For dynamical systems one can distinguish many notions of stability. We will mainly rely on definitions related to the state-space descriptions

  ẋ(t) = Ax(t),  ẋ(t) = A(t)x(t),  ẋ(t) = f(t, x(t)),  x(t_0) = x_0

which capture the behavior of x(t) for t → ∞ depending on x_0.

Exponential stability means that there exist real constants a > 0 (decay rate) and K (peaking constant) such that

  ‖x(t)‖ ≤ ‖x(t_0)‖ K e^{−a(t−t_0)}

for all trajectories and t ≥ t_0.

K and a are assumed not to depend on t_0 or x(t_0) (uniformity).

Lyapunov theory provides the background for testing stability.
Stability of LTI Systems
47/54
The linear time-invariant dynamical system

  ẋ(t) = Ax(t)

is exponentially stable if and only if there exists K with

  K ≻ 0 and A^T K + KA ≺ 0.

The two inequalities can be combined as

  [ −K  0          ] ≺ 0.
  [ 0   A^T K + KA ]

Since the left-hand side depends affinely on the matrix variable K, this is indeed a standard strict feasibility test!

Matrix variables are fully supported by Yalmip and the LMI Toolbox!
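For a fixed Hurwitz A, one particular K satisfying these LMIs can be obtained by solving the Lyapunov equation A^T K + KA = −I, which is linear in K. The NumPy sketch below does this by vectorization (the example A is hypothetical; in practice an SDP solver treats the inequality directly):

```python
import numpy as np

A = np.array([[ 0.0,  1.0],
              [-2.0, -3.0]])     # eigenvalues -1 and -2: Hurwitz

# Solve A^T K + K A = -I via vectorization:
# (I kron A^T + A^T kron I) vec(K) = vec(-I)
n = A.shape[0]
L = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
K = np.linalg.solve(L, -np.eye(n).flatten()).reshape(n, n)

print(np.linalg.eigvalsh(K).min() > 0)                # True: K > 0
print(np.linalg.eigvalsh(A.T @ K + K @ A).max() < 0)  # True: A^T K + K A < 0
```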
Trajectory-Based Proof of Suciency
48/54
Choose α > 0 such that A^T K + KA + αK ≺ 0. Let x(·) be any state trajectory of the system. Then

  x(t)^T (A^T K + KA) x(t) + α x(t)^T K x(t) ≤ 0 for all t ∈ R

and hence (using ẋ(t) = Ax(t))

  (d/dt) x(t)^T K x(t) + α x(t)^T K x(t) ≤ 0 for all t ∈ R

and hence (integrating factor e^{αt})

  x(t)^T K x(t) ≤ x(t_0)^T K x(t_0) e^{−α(t−t_0)} for all t ≥ t_0.

Since λ_min(K)‖x‖² ≤ x^T K x ≤ λ_max(K)‖x‖² we can conclude that

  ‖x(t)‖ ≤ ‖x(t_0)‖ √(λ_max(K)/λ_min(K)) e^{−(α/2)(t−t_0)} for t ≥ t_0.
Algebraic Proof
49/54
Sufficiency. Let λ ∈ σ(A). Choose a complex eigenvector x ≠ 0 with Ax = λx. Then the LMIs imply x*Kx > 0 and

  0 > x*(A^T K + KA)x = λ̄ x*Kx + λ x*Kx = 2 Re(λ) x*Kx.

This guarantees Re(λ) < 0. Therefore all eigenvalues of A are in C⁻.

Necessity if A is diagonalizable. Suppose all eigenvalues of A are in C⁻. Since A is diagonalizable there exists a complex nonsingular T with TAT^{-1} = Λ = diag(λ_1, ..., λ_n). Since Re(λ_k) < 0 for k = 1, ..., n we infer

  Λ* + Λ ≺ 0 and hence (T*)^{-1} A^T T* + TAT^{-1} ≺ 0.

If we left-multiply with T* and right-multiply with T (congruence) we infer

  A^T (T*T) + (T*T)A ≺ 0.

Hence K = T*T ≻ 0 satisfies the LMIs.
Algebraic Proof
50/54
Necessity if A is not diagonalizable. If A is not diagonalizable it can be transformed by similarity into its Jordan form: There exists a nonsingular T with TAT^{-1} = Λ + J where Λ is diagonal and J has either ones or zeros on the first upper diagonal.

For any ε > 0 one can even choose T_ε with T_ε A T_ε^{-1} = Λ + εJ. Since Λ has the eigenvalues of A on its diagonal we still infer Λ* + Λ ≺ 0. Therefore it is possible to fix a sufficiently small ε > 0 with

  0 ≻ Λ* + Λ + ε(J^T + J) = (Λ + εJ)* + (Λ + εJ).

As before we can conclude that K = T_ε* T_ε satisfies the LMIs.
Stability of Discrete-Time LTI Systems
51/54
The linear time-invariant dynamical system

  x(t + 1) = Ax(t), t = 0, 1, 2, ...

is exponentially stable if and only if there exists K with

  K ≻ 0 and A^T K A − K ≺ 0.

Recall how negativity of (d/dt) x(t)^T K x(t) in continuous time leads to

  A^T K + KA = [ I ]^T [ 0  K ] [ I ] ≺ 0.
               [ A ]   [ K  0 ] [ A ]

Now negativity of x(t+1)^T K x(t+1) − x(t)^T K x(t) leads to

  A^T K A − K = [ I ]^T [ −K  0 ] [ I ] ≺ 0.
                [ A ]   [ 0   K ] [ A ]
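As in the continuous-time case, a certificate K can be computed for a fixed Schur-stable A from the discrete Lyapunov (Stein) equation A^T K A − K = −I. A NumPy sketch with a hypothetical A:

```python
import numpy as np

A = np.array([[0.5, 0.3],
              [0.0, -0.4]])      # eigenvalues 0.5 and -0.4: spectral radius < 1

# Solve A^T K A - K = -I via vectorization:
# (A^T kron A^T - I) vec(K) = vec(-I)
n = A.shape[0]
L = np.kron(A.T, A.T) - np.eye(n * n)
K = np.linalg.solve(L, -np.eye(n).flatten()).reshape(n, n)

print(np.linalg.eigvalsh(K).min() > 0)                # True: K > 0
print(np.linalg.eigvalsh(A.T @ K @ A - K).max() < 0)  # True: A^T K A - K < 0
```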
Outline
51/54
From Optimization to Convex Semi-Definite Programming
Convex Sets and Convex Functions
Linear Matrix Inequalities (LMIs)
Truss-Topology Design
LMIs and Stability
A First Glimpse at Robustness
A First Glimpse at Robustness
52/54
With some compact set 𝒜 ⊆ R^{n×n} consider the family of LTI systems

  ẋ(t) = Ax(t) with A ∈ 𝒜.

𝒜 is said to be quadratically stable if there exists K such that

  K ≻ 0 and A^T K + KA ≺ 0 for all A ∈ 𝒜.

Why this name? V(x) = x^T K x is a quadratic Lyapunov function.
Why relevant? It implies that all A ∈ 𝒜 are Hurwitz.

Even stronger: It implies, for any piecewise continuous A : R → 𝒜, exponential stability of the time-varying system

  ẋ(t) = A(t)x(t).
Computational Verication
53/54
If 𝒜 has infinitely many elements, testing quadratic stability amounts to verifying the feasibility of an infinite number of LMIs.

Key question: How to reduce this to a standard LMI problem?

Let 𝒜 be the convex hull of {A_1, ..., A_N}: For each A ∈ 𝒜 there exist coefficients λ_1 ≥ 0, ..., λ_N ≥ 0 with λ_1 + ... + λ_N = 1 such that

  A = λ_1 A_1 + ... + λ_N A_N.

If 𝒜 is the convex hull of {A_1, ..., A_N} then 𝒜 is quadratically stable iff there exists some K with

  K ≻ 0 and A_i^T K + K A_i ≺ 0 for all i = 1, ..., N.

Proof. Slide 26.
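A quick numerical illustration of the vertex result (the two vertex matrices and the candidate K = I are hypothetical choices for which the LMIs happen to hold):

```python
import numpy as np

A1 = np.array([[-1.0, 1.0], [0.0, -1.0]])
A2 = np.array([[-1.0, 0.0], [1.0, -1.0]])
K = np.eye(2)                       # candidate common Lyapunov matrix

vertex_ok = all(np.linalg.eigvalsh(Ai.T @ K + K @ Ai).max() < 0
                for Ai in (A1, A2))
print(vertex_ok)                    # True: the LMIs hold at both generators

# Hence the Lyapunov inequality holds on the whole convex hull:
hull_ok = all(
    np.linalg.eigvalsh((lam * A1 + (1 - lam) * A2).T @ K
                       + K @ (lam * A1 + (1 - lam) * A2)).max() < 0
    for lam in np.linspace(0.0, 1.0, 11)
)
print(hull_ok)                      # True
```

In a real robustness test one would of course let an LMI solver search for K over the N vertex inequalities rather than guess it.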
Lessons to be Learnt
54/54
- Many interesting engineering problems are LMI problems.
- Variables can live in an arbitrary vector space. In control: variables are typically matrices.
- Problems can involve equation and inequality constraints. Just check whether the cost function and constraints are affine & verify strict feasibility.
- Translation to input for a solution algorithm is done by a parser (e.g. Yalmip).
- One can choose among many efficient LMI solvers (e.g. SeDuMi).
- Main trick in removing nonlinearities so far: the Schur Lemma.