Introduction to static optimisation. Dynamic optimisation. Discrete time systems. Continuous time systems. Open final time. Pontryagin's minimum principle. Tracking optimal control. LQG optimal control. Kalman filtering. Predictive control. Numerical methods.
http://www.rdg.ac.uk/~shs99vmb/notes
Recommended books
Lewis F.L. and Syrmos V.L. (1995) Optimal Control. Second Edition. Wiley.
Assessment
Exam (100%)
Introduction
Optimal control is the process of finding control and state histories for a dynamic system over a period of time so as to minimise a performance index.
Optimal control is used in many fields.
In order to learn the principles of optimal control, it is important to understand the theory
of static optimisation.
Unconstrained optimisation

Consider a scalar performance index L(x) of the n decision variables collected in the vector

    x = [x1 x2 x3 ... xn]^T                                       (1)

A stationary point satisfies the first-order necessary condition

    Lx = ∂L/∂x = 0                                                (2)

Expanding L in a Taylor series about a stationary point, an increment dx changes L by

    dL = Lx dx + (1/2) dx^T Lxx dx + higher-order terms           (3)

where Lxx = ∂²L/∂x² is the Hessian matrix. Since Lx = 0 at a stationary point, a sufficient condition for a local minimum is

    Lxx > 0  (positive definite)                                  (5)
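These two conditions can be checked numerically. Below is a minimal sketch using central finite differences in NumPy; the index L and the candidate point are illustrative choices, not from the notes:

```python
import numpy as np

# Illustrative performance index of x = [x1, x2] (an assumption for this sketch).
def L(x):
    return (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 3.0) ** 2

def gradient(L, x, h=1e-6):
    """Central-difference approximation to Lx."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (L(x + e) - L(x - e)) / (2 * h)
    return g

def hessian(L, x, h=1e-4):
    """Central-difference approximation to Lxx."""
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = h
            ej = np.zeros(n); ej[j] = h
            H[i, j] = (L(x + ei + ej) - L(x + ei - ej)
                       - L(x - ei + ej) + L(x - ei - ej)) / (4 * h * h)
    return H

x_star = np.array([1.0, -3.0])                  # candidate stationary point
print(gradient(L, x_star))                      # ~ [0, 0]: condition (2) holds
print(np.linalg.eigvalsh(hessian(L, x_star)))   # all positive: condition (5) holds
```

For this L the Hessian is constant, diag(2, 4), so both eigenvalues are positive and the point is a minimum.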
Example 1.1

Consider the quadratic performance index

    L(x) = (1/2) [x1 x2] [ Q11 Q12 ] [x1]  +  [b1 b2] [x1]
                         [ Q21 Q22 ] [x2]             [x2]        (7)

i.e. L(x) = (1/2) x^T Q x + b^T x with Q symmetric. The first-order condition gives

    Lx = x^T Q + b^T = 0   =>   x* = -Q^{-1} b                    (8)

and the second derivative is

    Lxx = Q                                                       (9)

so x* is a minimum if Q is positive definite and a maximum if Q is negative definite.
If Lxx has both positive and negative eigenvalues, then we have a saddle point.
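The closed-form solution of Example 1.1 and the eigenvalue classification can be sketched as follows; the numerical values of Q and b are illustrative assumptions:

```python
import numpy as np

# Quadratic index L(x) = 1/2 x^T Q x + b^T x, as in Example 1.1.
# Q and b are illustrative values, not taken from the notes.
Q = np.array([[2.0, 0.5],
              [0.5, 1.0]])
b = np.array([1.0, -1.0])

# Stationarity Lx = 0, i.e. Q x + b = 0, gives x* = -Q^{-1} b.
x_star = -np.linalg.solve(Q, b)

# Classify the stationary point by the eigenvalues of Lxx = Q.
eigs = np.linalg.eigvalsh(Q)
if np.all(eigs > 0):
    kind = "minimum"
elif np.all(eigs < 0):
    kind = "maximum"
else:
    kind = "saddle point"
print(x_star, kind)
```

Here both eigenvalues of Q are positive, so the stationary point is a minimum.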
A more general class of static optimisation problem involves a set of p constraint relations:
    f^(i)(x) = 0,   i = 1, ..., p                                 (10)
where p < n.
Using vector notation, let

    f = [f^(1) f^(2) ... f^(p)]^T                                 (11)

The constrained problem is then

    min L(x)                                                      (12)

subject to

    f(x) = 0                                                      (13)

At a constrained stationary point, dL = Lx dx must vanish for every admissible variation dx, i.e. for every dx satisfying

    fx dx = 0                                                     (14)

This requires Lx to be a linear combination of the constraint gradients:

    Lx = -(λ1 fx^(1) + ... + λp fx^(p))                           (15)

or, in matrix form,

    Lx = -λ^T fx                                                  (16)

where

    λ^T = [λ1 λ2 ... λp]                                          (17)

is the vector of Lagrange multipliers. Equivalently,

    Lx + λ^T fx = 0                                               (19)

Defining the Hamiltonian

    H = L + λ^T f                                                 (20)

the necessary conditions become

    Hx = 0,    Hλ = f(x) = 0                                      (21)
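For a quadratic index with linear constraints f(x) = Ax + c = 0, the conditions Hx = 0 and f(x) = 0 form a linear system in x and λ that can be solved directly. A minimal sketch, with illustrative values of Q, A and c (not from the notes):

```python
import numpy as np

# Index L = 1/2 x^T Q x with constraints f(x) = A x + c = 0.
# Hx = 0 and f(x) = 0 give the linear (KKT) system:
#   [ Q  A^T ] [x]   [ 0 ]
#   [ A   0  ] [λ] = [-c ]
# Q, A, c below are illustrative assumptions.
Q = np.array([[4.0, 1.0],
              [1.0, 2.0]])
A = np.array([[1.0, 1.0]])   # single constraint: x1 + x2 - 1 = 0
c = np.array([-1.0])

n, p = Q.shape[0], A.shape[0]
KKT = np.block([[Q, A.T],
                [A, np.zeros((p, p))]])
rhs = np.concatenate([np.zeros(n), -c])
sol = np.linalg.solve(KKT, rhs)
x, lam = sol[:n], sol[n:]
print(x, lam)   # stationary point and multiplier
```

The same system structure appears whenever the index is quadratic and the constraints are linear, which is why this case is solvable in one linear solve.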
Example 1.2

    min L = x1^2 + x2^2

subject to:

    2x1 + x2 + 4 = 0

Solution

The Hamiltonian is H = x1^2 + x2^2 + λ(2x1 + x2 + 4).

Optimality conditions:

    f(x) = 0   =>   2x1 + x2 + 4 = 0
    Hx = 0     =>   2x1 + 2λ = 0,   2x2 + λ = 0

From Hx = 0, x1 = -λ and x2 = -λ/2. Substituting into the constraint, -2λ - λ/2 + 4 = 0, so λ = 8/5 and

    x1* = -8/5,   x2* = -4/5,   L* = 16/5
[Figure: contour plot of L(x) = x1^2 + x2^2 with the constraint line 2x1 + x2 + 4 = 0; the constrained minimum is the point where the line meets the lowest contour.]
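As a numerical cross-check of Example 1.2, the constraint can be eliminated by substituting x2 = -2x1 - 4 and minimising along the line; a sketch assuming NumPy is available:

```python
import numpy as np

# Parameterise the constraint line 2 x1 + x2 + 4 = 0 as x2 = -2 x1 - 4
# and evaluate L = x1^2 + x2^2 along it.
def L_on_line(x1):
    x2 = -2.0 * x1 - 4.0
    return x1 ** 2 + x2 ** 2

# Brute-force search on a fine grid (step 1e-4).
x1_grid = np.linspace(-4.0, 2.0, 60001)
best = x1_grid[np.argmin(L_on_line(x1_grid))]
print(best, -2.0 * best - 4.0)   # close to (-8/5, -4/5)
```

The grid minimum agrees with the analytical solution x1* = -8/5, x2* = -4/5.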