
DISC Course, Lecture Notes

Model Predictive Control


Ton J.J. van den Boom & Anton A. Stoorvogel
January 16, 2010
Model Predictive Control
dr.ir. Ton J.J. van den Boom
Delft Center for Systems and Control
Delft University of Technology
Mekelweg 2, NL-2628 CD Delft, The Netherlands
Tel. (+31) 15 2784052 Fax. (+31) 15 2786679
Email: T.J.J.vandenBoom@dcsc.tudelft.nl
prof.dr. Anton A. Stoorvogel
Dept of Electrical Engineering,
Mathematics and Computer Science,
Citadel H-222,
University of Twente,
P.O. Box 217, NL-7500 AE Enschede, The Netherlands
Tel. (+31) 53 4893449 Fax. (+31) 53 4893800
Email: A.A.Stoorvogel@utwente.nl
Contents

1 Introduction 9
  1.1 Model predictive control 9
  1.2 History of Model Predictive Control 10
2 The basics of model predictive control 13
  2.1 Introduction 13
  2.2 Process model and disturbance model 13
    2.2.1 Input-output (IO) models 15
    2.2.2 Other model descriptions 20
  2.3 Performance index 21
  2.4 Constraints 25
  2.5 Optimization 28
  2.6 Receding horizon principle 29
3 Prediction 31
  3.1 Noiseless case 32
  3.2 Noisy case 33
  3.3 The use of a priori knowledge in making predictions 36
4 Standard formulation 39
  4.1 The performance index 39
  4.2 Handling constraints 44
    4.2.1 Equality constraints 46
    4.2.2 Inequality constraints 48
  4.3 Structuring the input with orthogonal basis functions 50
  4.4 The standard predictive control problem 51
  4.5 Examples 52
5 Solving the standard predictive control problem 55
  5.1 The finite horizon SPCP 56
    5.1.1 Unconstrained standard predictive control problem 56
    5.1.2 Equality constrained standard predictive control problem 58
    5.1.3 Full standard predictive control problem 62
  5.2 Infinite horizon SPCP 65
    5.2.1 Steady-state behavior 66
    5.2.2 Structuring the input signal for infinite horizon MPC 67
    5.2.3 Unconstrained infinite horizon SPCP 69
    5.2.4 The infinite horizon standard predictive control problem with control horizon constraint 72
    5.2.5 The infinite horizon standard predictive control problem with structured input signals 76
  5.3 Implementation 89
    5.3.1 Implementation and computation in LTI case 89
    5.3.2 Implementation and computation in full SPCP case 91
  5.4 Feasibility 96
  5.5 Examples 97
6 Stability 101
  6.1 Stability for the LTI case 101
  6.2 Stability for the inequality constrained case 105
  6.3 Modifications for guaranteed stability 105
  6.4 Relation to IMC scheme and Youla parametrization 116
    6.4.1 The IMC scheme 116
    6.4.2 The Youla parametrization 121
  6.5 Robustness 124
7 MPC using a feedback law, based on linear matrix inequalities 131
  7.1 Linear matrix inequalities 132
  7.2 Unconstrained MPC using linear matrix inequalities 134
  7.3 Constrained MPC using linear matrix inequalities 137
  7.4 Robustness in LMI-based MPC 140
  7.5 Extensions and further research 146
8 Tuning 151
  8.1 Initial settings for the parameters 152
    8.1.1 Initial settings for the summation parameters 152
    8.1.2 Initial settings for the signal weighting parameters 154
  8.2 Tuning as an optimization problem 158
  8.3 Some particular parameter settings 163
Appendix A: Quadratic Programming 167
Appendix B: Basic State Space Operations 177
Appendix C: Some results from linear algebra 181
Appendix D: Model descriptions 183
  D.1 Impulse and step response models 183
  D.2 Polynomial models 191
Appendix E: Performance indices 201
Appendix F: Manual MATLAB Toolbox 205
  Contents 206
  Introduction 207
  Reference 209
Index 257
References 259
Abbreviations
CARIMA model controlled autoregressive integrated moving average
CARMA model controlled autoregressive moving average
DMC Dynamic Matrix Control
EPSAC Extended Prediction Self-Adaptive Control
GPC Generalized Predictive Control
IDCOM Identification Command
IIO model Increment-Input-Output model
IO model Input-Output model
LQ control Linear-Quadratic control
LQPC Linear-Quadratic Predictive Control
MAC Model Algorithmic Control
MBPC Model Based Predictive Control
MHC Moving Horizon Control
MIMO Multiple-Input Multiple-Output
MPC Model Predictive Control
PFC Predictive Functional Control
QDMC Quadratic Dynamic Matrix Control
RHC Receding Horizon Control
SISO Single-Input Single-Output
SPC Standard Predictive Control
UPC Unified Predictive Control
Chapter 1
Introduction
1.1 Model predictive control
The process industry is characterized by product quality specifications which become more
and more tight, increasing productivity demands, new environmental regulations and fast
changes in the economic market. In the last decades Model Predictive Control (MPC),
also referred to as Model Based Predictive Control (MBPC), Receding Horizon Control
(RHC) and Moving Horizon Control (MHC), has shown that it can respond in an effective
way to these demands in many practical process control applications and is therefore widely
accepted in the process industry. MPC is more of a methodology than a single technique.
The difference between the various methods is mainly the way the problem is translated into
a mathematical formulation, so that the problem becomes solvable in a limited amount
of time. However, in all methods five important items are recognizable in the design
procedure:
Process and disturbance model: The process and disturbance models are mostly chosen
to be linear. Extensions to nonlinear models can be made, but will not be considered
in this course. On the basis of the model a prediction of the process signals over a
specified horizon is made. A good performance in the presence of disturbances can
be obtained by using a prediction of the effect of the disturbance signal, given its
characteristics in the form of a disturbance model.
Performance index: A quadratic (2-norm) performance index or cost criterion is formulated,
reflecting the reference tracking error and the control action.
Constraints: In practical situations, there will be constraints on control, state and output
signals, motivated by safety and environmental demands, economic requirements,
such as cost, and equipment limitations.
Optimization: An optimization algorithm will be applied to compute a sequence of future
control signals that minimizes the performance index subject to the given constraints.
For linear models with linear constraints and a quadratic performance index the
solution can be found using quadratic programming algorithms.
Receding horizon principle: Predictive control uses the so-called receding horizon principle.
This means that after computation of the optimal control sequence, only the
first control sample is implemented; subsequently the horizon is shifted one
sample and the optimization is restarted with new information from the measurements.
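The receding horizon principle above can be sketched in a few lines of code. The scalar plant, horizon, weight and reference below are illustrative assumptions, and the per-step optimization is the unconstrained least-squares special case (no constraints, so no quadratic programming solver is needed): at every sample the whole input sequence over the horizon is recomputed, but only its first element is applied.

```python
import numpy as np

# Toy receding-horizon loop for a scalar plant x(k+1) = a*x(k) + b*u(k).
a, b = 0.9, 0.5            # plant parameters (illustrative)
N, lam, r = 10, 0.1, 1.0   # prediction horizon, input weight, constant reference

# Prediction over the horizon: x_pred = Phi*x0 + Gamma*u_seq,
# with x(k+i) = a^i x(k) + sum_{j=0}^{i-1} a^(i-1-j) b u(k+j).
Phi = np.array([a**j for j in range(1, N + 1)])
Gamma = np.array([[a**(i - j - 1) * b if j < i else 0.0
                   for j in range(N)] for i in range(1, N + 1)])

x = 0.0
for k in range(30):
    # minimize ||Phi*x + Gamma*u - r||^2 + lam*||u||^2 (unconstrained least squares)
    H = Gamma.T @ Gamma + lam * np.eye(N)
    g = Gamma.T @ (r - Phi * x)
    u_seq = np.linalg.solve(H, g)
    # receding horizon: apply only the first input, then shift and re-optimize
    x = a * x + b * u_seq[0]
```

After a few dozen samples the state has settled close to the reference; because the input itself (not its increment) is penalized, a small steady-state offset remains, which is exactly the IO-versus-IIO issue discussed in chapter 2.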
1.2 History of Model Predictive Control
Since 1970 various techniques have been developed for the design of model based control
systems for robust multivariable control of industrial processes ([10], [24], [25], [29], [43],
[46]). Predictive control was pioneered simultaneously by Richalet et al. [53], [54] and
Cutler & Ramaker [18]. The first implemented algorithms and successful applications were
reported in the referenced papers. Model Predictive Control technology has evolved from
a basic multivariable process control technology to a technology that enables operation
of processes within well defined operating constraints ([2], [6], [50]). The main reasons for
the increasing acceptance of MPC technology by the process industry since 1985 are clear:
MPC is a model based controller design procedure, which can easily handle processes
with large time delays, non-minimum phase behavior and unstable dynamics.
It is an easy-to-tune method; in principle there are only three basic parameters to be
tuned.
Industrial processes have their limitations in, for instance, valve capacity and other
technological requirements, and are supposed to deliver output products with some
pre-specified quality specifications. MPC can handle these constraints in a systematic
way during the design and implementation of the controller.
Finally MPC can handle structural changes, such as sensor and actuator failures,
changes in system parameters and system structure by adapting the control strategy
on a sample-by-sample basis.
However, the main reasons for its popularity are the constraint-handling capabilities, the
easy extension to multivariable processes and, most of all, the increased profit as discussed in the
first section. From the academic side the interest in MPC mainly came from the field of self-tuning
control. The problem of Minimum Variance control (Åström & Wittenmark [3]) was
studied, minimizing the performance index

   J(u, k) = E{ ( r(k+d) - y(k+d) )^2 }

at time k, where y(k) is the process output signal, u(k) is the control signal, r(k) is the
reference signal, E{·} stands for expectation and d is the process dead-time. To overcome
stability problems with non-minimum phase plants, the performance index was modified
by adding a penalty on the control signal u(k). Later this u(k) in the performance index
was replaced by the increment of the control signal Δu(k) = u(k) - u(k-1) to guarantee
a zero steady-state error. To handle a wider class of unstable and non-minimum phase
systems and systems with poorly known delay the Generalized Predictive Control (GPC)
scheme (Clarke et al. [13],[14]) was introduced with a quadratic performance index.
In GPC mostly polynomial based models are used. For instance, Controlled AutoRegressive
Moving Average (CARMA) models or Controlled AutoRegressive Integrated Moving
Average (CARIMA) models are popular. These models describe the process using a minimum
number of parameters and therefore lead to effective and compact algorithms. Most
GPC literature in this area is based on Single-Input Single-Output (SISO) models. However,
the extension to Multiple-Input Multiple-Output (MIMO) systems is straightforward,
as was shown by De Vries & Verbruggen [21] using a MIMO polynomial model, and by
Kinnaert [34] using a state-space model.
This text covers state-of-the-art technologies for model predictive process control that are
good candidates for future generations of industrial model predictive control systems. Like
all other controller design methodologies, MPC also has its drawbacks:
A detailed process model is required. This means that either one must have a good
insight in the physical behavior of the plant or system identication methods have
to be applied to obtain a good model.
The methodology is open, and many variations have led to a large number of MPC-
methods. We mention IDCOM (Richalet et al. [54]), DMC (Cutler & Ramaker [17]),
EPSAC (De Keyser and van Cauwenberghe [20]), MAC (Rouhani & Mehra [57]),
QDMC (García & Morshedi [27]), GPC (Clarke et al. [14],[15]), PFC (Richalet et al.
[52]), UPC (Soeterboek [61]).
Although, in practice, stability and robustness are easily obtained by accurate tuning,
a theoretical analysis of stability and robustness properties is difficult to derive.
Still, in industry, for supervisory optimizing control of multivariable processes MPC is often
preferred over other controller design methods, such as PID, LQ and H∞. A PID controller
is also easily tuned but can only be straightforwardly applied to SISO systems. LQ and
H∞ control can be applied to MIMO systems, but cannot handle signal constraints in an adequate
way. These techniques also exhibit difficulties in realizing robust performance for varying
operating conditions. Essential in model predictive control is the explicit use of a model
that can simulate the dynamic behavior of the process at a certain operating point. In this
respect, model predictive control differs from most of the model based control technologies
that have been studied in academia in the sixties, seventies and eighties. Academic research
has been focusing on the use of models for controller design and robustness analysis of
control systems for quite some time. With their initial work on internal model based
control, García and Morari [26] made a first step towards bridging academic research in
the area of process control and industrial developments in this area. Significant progress
has been made in understanding the behavior of model predictive control systems, and many
results have been obtained on stability, robustness and performance of MPC (Soeterboek
[61], Camacho and Bordons [11], Maciejowski [44], Rossiter [56]).
Since the pioneering work at the end of the seventies and early eighties, MPC has become
the most widely applied supervisory control technique in the process industry. Many papers
report successful applications (see Richalet [52], and Qin and Badgewell [50]).
Chapter 2

The basics of model predictive control
2.1 Introduction
MPC is more of a methodology than a single technique. The difference between the various
methods is mainly the way the problem is translated into a mathematical formulation, so
that the problem becomes solvable in the limited time interval available for calculating
adequate process manipulations in response to external influences (disturbances) on the process
behavior. However, in all methods five important items are part of the design
procedure:
1. Process model and disturbance model
2. Performance index
3. Constraints
4. Optimization
5. Receding horizon principle
In the following sections these ve ingredients of MPC will be discussed.
2.2 Process model and disturbance model
The models applied in MPC serve two purposes:
Prediction of the behavior of the future output of the process on the basis of inputs
and known disturbances applied to the process in the past
Calculation of the input signal to the process that minimizes the given objective
function
13
14 CHAPTER 2. THE BASICS OF MODEL PREDICTIVE CONTROL
The models required for these tasks do not necessarily have to be the same. The model
applied for prediction may dier from the model applied for calculation of the next control
action. In practice though both models are almost always chosen to be the same.
As the models play such an important role in model predictive control the models are
discussed in this section. The models applied are so called Input-Output (IO) models.
These models describe the input-output behavior of the process. The MPC controller
discussed in the sequel explicitly assumes that the superposition theorem holds. This
requires that all models used in the controller design are chosen to be linear. Two types
of IO models are applied:
Direct Input-Output models (IO models) in which the input signal is directly applied
to the model.
Increment Input-Output models (IIO models) in which the increments of the input
signal are applied to the model instead of the input directly.
The following assumptions are made with respect to the models that will be used:
1. Linear
2. Time invariant
3. Discrete time
4. Causal
5. Finite order
Linearity is assumed to allow the use of the superposition theorem. The second assumption,
time invariance, enables the use of a model made on the basis of observations of
process behavior at a certain time for simulation of process behavior at another arbitrary
time instant. The discrete time representation of process dynamics supports the sampling
mechanisms required for calculating control actions with an MPC algorithm implemented
in a sequential operating system like a computer. Causality implies that the model does
not anticipate future changes of its inputs. This aligns quite well with the behavior of
physical, chemical and biological systems. The assumption that the model order is
finite implies that models and model predictions can always be described by a set of
explicit equations. Of course, these assumptions have to be validated against the actual
process behavior as part of the process modeling or process identication phase.
In this chapter only discrete time models will be considered. The control algorithm will
always be implemented on a digital computer, so the design of a discrete time controller is
the obvious choice. Furthermore, a linear time-invariant continuous time model can always be transformed
into a linear time-invariant discrete time model using a zero-order hold discretization.
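The zero-order hold conversion mentioned above can be sketched as follows. The key identity is that the matrix exponential of the augmented matrix [[A, B], [0, 0]] times the sample time T contains both discrete matrices at once. The first-order example system is an illustrative assumption, and the matrix exponential is evaluated here by a plain Taylor series, which is adequate only because ||M·T|| is small; a production implementation would use a scaling-and-squaring routine such as `scipy.linalg.expm`.

```python
import numpy as np

def expm_series(M, terms=30):
    """Matrix exponential by truncated Taylor series (fine for small ||M||)."""
    E = np.eye(M.shape[0])
    P = np.eye(M.shape[0])
    for k in range(1, terms):
        P = P @ M / k      # P = M^k / k!
        E = E + P
    return E

def c2d_zoh(A, B, T):
    """Zero-order-hold discretization of dx/dt = A x + B u with sample time T.

    Uses expm([[A, B], [0, 0]] * T) = [[A_d, B_d], [0, I]].
    """
    n, m = A.shape[0], B.shape[1]
    M = np.zeros((n + m, n + m))
    M[:n, :n] = A
    M[:n, n:] = B
    Md = expm_series(M * T)
    return Md[:n, :n], Md[:n, n:]

# First-order example dx/dt = -x + u, sampled at T = 0.1.
# Closed forms for comparison: A_d = exp(-T), B_d = 1 - exp(-T).
Ad, Bd = c2d_zoh(np.array([[-1.0]]), np.array([[1.0]]), 0.1)
```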
The most general description of linear time-invariant systems is the state space description.
Models given in an impulse/step response or transfer function structure can easily be
converted into state space models. Computations are numerically more reliable if they are
based on (balanced) state space models instead of impulse/step response or transfer function
models, especially for multivariable systems (Kailath [33]). Also system identification,
using the input-output data of a system, can be done using reliable and fast algorithms
(Ljung [41]; Verhaegen & Dewilde [71]; Van Overschee [70]).
2.2.1 Input-output (IO) models
In this course we consider causal, discrete time, linear, finite-dimensional, time-invariant
systems given by

   y(k) = G_o(q) u(k) + F_o(q) d_o(k) + H_o(q) e_o(k)                (2.1)

in which G_o(q) is the process model, F_o(q) is the disturbance model, H_o(q) is the noise
model, y(k) is the output signal, u(k) is the input signal, d_o(k) is a known disturbance
signal, e_o(k) is zero-mean white noise (ZMWN) and q is the shift operator, q^{-1} y(k) =
y(k-1). We assume G_o(q) to be strictly proper, which means that y(k) does not depend
on the present value of u(k), but only on past values u(k-j), j > 0.
Figure 2.1: Input-Output (IO) model (block diagram: u(k) through G_o, d_o(k) through F_o
and e_o(k) through H_o are summed to give y(k))
A state space representation for this system can be given as

   x_o(k+1) = A_o x_o(k) + K_o e_o(k) + L_o d_o(k) + B_o u(k)        (2.2)
   y(k)     = C_o x_o(k) + D_H e_o(k) + D_F d_o(k)                   (2.3)

and so the transfer functions G_o(q), F_o(q) and H_o(q) can be given as

   G_o(q) = C_o ( qI - A_o )^{-1} B_o                                (2.4)
   F_o(q) = C_o ( qI - A_o )^{-1} L_o + D_F                          (2.5)
   H_o(q) = C_o ( qI - A_o )^{-1} K_o + D_H                          (2.6)
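Evaluating a transfer function such as (2.4) at a point q = z amounts to one linear solve per evaluation. The matrices below are illustrative assumptions; D is zero because G_o(q) is assumed strictly proper, and the evaluation at z = 1 gives the steady-state (DC) gain used later in the steady-state analysis.

```python
import numpy as np

def tf_eval(A, B, C, D, z):
    """Evaluate G(z) = C (zI - A)^{-1} B + D for a state-space model."""
    n = A.shape[0]
    return C @ np.linalg.solve(z * np.eye(n) - A, B) + D

# Illustrative second-order model (not from the course text).
A = np.array([[0.5, 0.1],
              [0.0, 0.8]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
D = np.zeros((1, 1))       # strictly proper: no direct feedthrough

G1 = tf_eval(A, B, C, D, 1.0)   # DC gain G(1) = C (I - A)^{-1} B
```

For this particular choice of matrices, (I - A)^{-1} B works out to [1, 5]^T, so the DC gain is 1.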
Figure 2.2: State space representation of the model (block diagram: u(k) through B_o,
e_o(k) through K_o and d_o(k) through L_o feed the state update x_o(k+1); the delayed
state x_o(k) through C_o, plus D_H e_o(k) and D_F d_o(k), gives y(k))
Increment-input-output (IIO) models

Sometimes it is useful not to work with the input signal u(k) itself, but with the input
increment instead, defined as

   Δu(k) = u(k) - u(k-1) = (1 - q^{-1}) u(k) = Δ(q) u(k)

where Δ(q) = (1 - q^{-1}) is denoted as the increment operator. Using the increment of the
input signal implies that the model keeps track of the actual value of the input signal. The
model needs to integrate the increments of the inputs to calculate the output corresponding
to the input signals actually applied to the process. We obtain an increment-input-output
(IIO) model:

   IIO model:  y(k) = G_i(q) Δu(k) + F_i(q) d_i(k) + H_i(q) e_i(k)   (2.7)

where d_i(k) is a known disturbance signal and e_i(k) is ZMWN. A state space representation
for this system is given by

   x_i(k+1) = A_i x_i(k) + K_i e_i(k) + L_i d_i(k) + B_i Δu(k)       (2.8)
   y(k)     = C_i x_i(k) + D_H e_i(k) + D_F d_i(k)                   (2.9)

The transfer functions G_i(q), F_i(q) and H_i(q) become

   G_i(q) = C_i ( qI - A_i )^{-1} B_i                                (2.10)
   F_i(q) = C_i ( qI - A_i )^{-1} L_i + D_F                          (2.11)
   H_i(q) = C_i ( qI - A_i )^{-1} K_i + D_H                          (2.12)
2.2. PROCESS MODEL AND DISTURBANCE MODEL 17
Relation between IO and IIO model

Given an IO model with state space realization

   x_o(k+1) = A_o x_o(k) + K_o e_o(k) + L_o d_o(k) + B_o u(k)
   y(k)     = C_o x_o(k) + D_H e_o(k) + D_F d_o(k)

Define the system matrices

   A_i = [ I   C_o ]     B_i = [ 0   ]     K_i = [ D_H ]
         [ 0   A_o ]           [ B_o ]           [ K_o ]

   C_i = [ I   C_o ]     L_i = [ D_F ]
                               [ L_o ]

the disturbance and noise signals

   d_i(k) = Δd_o(k) = d_o(k) - d_o(k-1)
   e_i(k) = Δe_o(k) = e_o(k) - e_o(k-1)

and the new state

   x_i(k) = [ y(k-1)   ]
            [ Δx_o(k)  ]

where Δx_o(k) = x_o(k) - x_o(k-1) is the increment of the original state. Then the state
space realization, given by

   x_i(k+1) = A_i x_i(k) + K_i e_i(k) + L_i d_i(k) + B_i Δu(k)
   y(k)     = C_i x_i(k) + D_H e_i(k) + D_F d_i(k)

looks like an IIO model of the original IO model. However, there is one crucial difference.
In the IO model the noise e_o is ZMWN, so the signal e_i = Δe_o as defined above is not
white. For the IIO model, however, it is customary to assume that e_i itself is ZMWN,
which corresponds to an integrated ZMWN noise e_o in the original model. It is common
to use the above model and assume that e_i is ZMWN. This changes the model compared
to the original IO model, but the advantage is that the resulting IIO model is better able
to handle drift terms in the noise.

An interesting observation in comparing the IO model with the corresponding IIO model
is that the number of states increases by the number of outputs of the system. As can
be seen from the state matrix A_i of the IIO model, this increase in the number of states
compared to the state matrix A_o of the IO model is related to the integrators required for
calculating the actual process outputs on the basis of input increments. The additional
eigenvalues of A_i are all integrators: eigenvalues λ_i = 1.
Proof:
From (2.2) we derive

   Δx_o(k+1) = A_o Δx_o(k) + K_o Δe_o(k) + L_o Δd_o(k) + B_o Δu(k)
   Δy(k)     = C_o Δx_o(k) + D_H Δe_o(k) + D_F Δd_o(k)

and so for the output signal we derive

   y(k) = y(k-1) + Δy(k)
        = y(k-1) + C_o Δx_o(k) + D_H Δe_o(k) + D_F Δd_o(k)

Based on the above equations we get:

   [ y(k)      ]   [ I   C_o ] [ y(k-1)  ]   [ D_H ]            [ D_F ]            [ 0   ]
   [ Δx_o(k+1) ] = [ 0   A_o ] [ Δx_o(k) ] + [ K_o ] Δe_o(k) +  [ L_o ] Δd_o(k) +  [ B_o ] Δu(k)

   y(k) = [ I   C_o ] [ y(k-1)  ] + D_H Δe_o(k) + D_F Δd_o(k)
                      [ Δx_o(k) ]

By substitution of the variables x_i(k), d_i(k) and e_i(k) and the matrices A_i, B_i, L_i, K_i
and C_i, the state space IIO model is found.                                            ∎
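The IO-to-IIO augmentation and the claim about the additional integrator eigenvalues can be checked numerically. The IO matrices below are illustrative assumptions; A_i is built exactly as in the construction above, and since A_i is block upper triangular its spectrum is the spectrum of A_o plus one eigenvalue at 1 per output.

```python
import numpy as np

# Illustrative IO model matrices (single output, two states).
Ao = np.array([[0.5, 0.1],
               [0.0, 0.8]])     # eigenvalues 0.5 and 0.8 (upper triangular)
Co = np.array([[1.0, 0.0]])

p = Co.shape[0]                 # number of outputs
n = Ao.shape[0]                 # number of IO states

# IIO state matrix A_i = [[I, C_o], [0, A_o]]
Ai = np.block([[np.eye(p),        Co],
               [np.zeros((n, p)), Ao]])

# The added eigenvalues are integrators at 1, next to the eigenvalues of A_o.
eigs = np.sort(np.linalg.eigvals(Ai).real)
```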
The transfer functions of the IO model and IIO model are related by

   G_i(q) = G_o(q) Δ(q)^{-1}
   F_i(q) = F_o(q) Δ(q)^{-1}                                         (2.13)
   H_i(q) = H_o(q) Δ(q)^{-1}

where Δ(q) = (1 - q^{-1}). In appendix D we will show how the other models, like impulse
response models, step response models and polynomial models, relate to state space models.
The advantage of using an IIO model
The main reason for using an IIO model is the good steady-state behavior of the controller
designed on the basis of this IIO model. As indicated above the IIO model simulates
process outputs on the basis of input increments. This implies that the model must have
integrating action to describe the behavior of processes that have steady state gains unequal
to zero. The integrating behavior of the model also implies that the model output can take
any value with inputs being equal to zero. This property of the IIO model appears to be
very attractive for good steady state behavior of the control system as we will see.
Steady-state behavior of an IO model: Consider an IO model G_o(q) without a pole
in q = 1, so G_o(1) < ∞. Further assume e_o(k) = 0 and d_o(k) = 0 for all k. Moreover,
assume that our controller stabilizes the system. In that case, if u → u_ss for k → ∞ then
y → y_ss = G_o(1) u_ss for k → ∞. This process is controlled by a predictive controller,
minimizing the IO performance index

   J(k) = Σ_{j=1}^{N_2} ||y(k+j) - r(k+j)||^2 + λ^2 ||u(k+j-1)||^2

Let the reference be constant, r(k) = r_ss ≠ 0 for large k, and let k → ∞. Then u and y
will reach their steady state and so J(k) = N_2 J_ss for k → ∞, where

   J_ss = ||y_ss - r_ss||^2 + λ^2 ||u_ss||^2
        = ||G_o(1) u_ss - r_ss||^2 + λ^2 ||u_ss||^2
        = u_ss^T ( G_o^T(1) G_o(1) + λ^2 I ) u_ss - 2 u_ss^T G_o^T(1) r_ss + r_ss^T r_ss

Minimizing J_ss over u_ss means that

   ∂J_ss/∂u_ss = 2 ( G_o^T(1) G_o(1) + λ^2 I ) u_ss - 2 G_o^T(1) r_ss = 0

so

   u_ss = ( G_o^T(1) G_o(1) + λ^2 I )^{-1} G_o^T(1) r_ss

The steady-state output becomes

   y_ss = G_o(1) ( G_o^T(1) G_o(1) + λ^2 I )^{-1} G_o^T(1) r_ss

It is clear that y_ss ≠ r_ss for λ > 0, and so for this IO model there will always be a steady-state
error for r_ss ≠ 0 and λ > 0.
Steady-state behavior of an IIO model:
Consider the above model in IIO form, so G_i(q) = Δ^{-1}(q) G_o(q), under the same assumptions
as before, i.e. G_o(1) is well-defined. Moreover, e_o = 0 and d_o = 0, which implies that e_i = 0
and d_i = 0. Let it be controlled by a stabilizing predictive controller, minimizing the IIO
performance index

   J(k) = Σ_{j=1}^{N_2} ||y(k+j) - r(k+j)||^2 + λ^2 ||Δu(k+j-1)||^2

In steady state the output is given by y_ss = G_o(1) u_ss and the increment input becomes
Δu_ss = 0, because the input signal u = u_ss has become constant. In steady state we will
reach the situation that the performance index becomes J(k) = N_2 J_ss for k → ∞, where

   J_ss = ||y_ss - r_ss||^2

The optimum J_ss = 0 is obtained for y_ss = r_ss, which means that no steady-state error
occurs. It is clear that for a good steady-state behavior in the presence of a reference
signal, it is necessary to design the predictive controller on the basis of an IIO model.
The standard model

In this course we consider the following standard model:

   x(k+1) = A x(k) + B_1 e(k) + B_2 w(k) + B_3 v(k)                  (2.14)
   y(k)   = C_1 x(k) + D_11 e(k) + D_12 w(k)                         (2.15)

where x(k) is the state, e(k) is zero-mean white noise (ZMWN), v(k) is the control signal,
w(k) is a vector containing all known external signals, such as the reference signal and
known (or measurable) disturbances, and y(k) is the measurement. The control signal
v(k) can be either u(k) or Δu(k).
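One simulation step of the standard model (2.14)-(2.15) can be written out directly. All numerical values below are illustrative assumptions, not from the course text; the noise e(k) is set to zero so the step is reproducible, and w(k) stacks the known external signals.

```python
import numpy as np

# Illustrative matrices for the standard model (2.14)-(2.15), scalar signals.
A   = np.array([[0.9]]);  B1  = np.array([[0.1]])
B2  = np.array([[0.2]]);  B3  = np.array([[0.5]])
C1  = np.array([[1.0]]);  D11 = np.array([[1.0]]);  D12 = np.array([[0.0]])

x = np.array([0.0])       # state
e = np.array([0.0])       # ZMWN, zeroed here for reproducibility
w = np.array([1.0])       # known external signal (e.g. the reference)
v = np.array([0.3])       # control signal: either u(k) or Delta u(k)

x_next = A @ x + B1 @ e + B2 @ w + B3 @ v    # state update (2.14)
y      = C1 @ x + D11 @ e + D12 @ w          # measurement (2.15)
```

With these numbers the state update gives 0.2·1 + 0.5·0.3 = 0.35, while the measurement at the current (zero) state is 0.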
2.2.2 Other model descriptions
In this course we will consider the state space description (2.14)-(2.15) as the model for the
system. In the literature three other model descriptions are also often used, namely impulse
response models, step response models and polynomial models. In this section we
will briefly describe their features and discuss the choice of model description.
Impulse and step response models
MPC has part of its roots in the process industry, where the use of detailed dynamical
models is less common. Obtaining reliable dynamical models of these processes on the basis
of physical laws is difficult, and it is therefore not surprising that the first models used in
MPC were impulse and step response models (Richalet [54]; Cutler & Ramaker [17]).
These models are easily obtained by rather simple experiments and give good models for a
sufficiently large length of the impulse/step response.
In appendix D.1 more detailed information is given on impulse and step response models.
Polynomial models
Sometimes good models of the process can be obtained on the basis of physical laws or
by parametric system identification. In that case a polynomial description of the model
is preferred (Clarke et al. [14],[15]). Fewer parameters are used than in impulse or step
response models, and in the case of parametric system identification the parameters can
be estimated more reliably (Ljung [41]).
In appendix D.2 more detailed information is given on polynomial models.
Choice of model description
Whatever model is used to describe the process behavior, the resulting predictive control
law will (at least approximately) give the same performance. Differences in the use of a
model lie in the effort of modeling, the model accuracy that can be obtained, and the
computational power that is needed to realize the predictive control algorithm.

Finite step response and finite impulse response models (non-parametric models) need
more parameters than polynomial or state space models (parametric models). The larger
the number of parameters of the model, the lower the average number of data points
per parameter that needs to be estimated during identification. As a consequence, the
variance of the estimated parameters will be larger for non-parametric models than for
semi-parametric and parametric models. In turn, the variance of the estimated parameters
of semi-parametric models will be larger than that of parametric models. This does not
necessarily imply that the quality of the predictions with parametric models outperforms
the predictions of non-parametric models! If the test signals applied for process identification
have excited all relevant dynamics of the process persistently, the qualities of the
predictions are generally the same, despite the significantly larger variance of the estimated
model parameters. On the other hand, step response models are intuitive, need less a priori
information for identification, and the predictions are constructed in a natural way. These
models can be understood without any knowledge of dynamical systems. Step response
models are often still preferred in industry, because advanced process knowledge is usually
scarce and additional experiments are expensive.

The polynomial description is very compact, gives good (physical) insight into the system's
properties, and the resulting controller is compact as well. However, as we will see later,
making predictions may be cumbersome and the models are far from practical for multivariable
systems.

State space models are especially suited for multivariable systems, still providing a compact
model description and controller. The computations are usually well conditioned and the
algorithms easy to implement.
2.3 Performance index
A performance index or cost criterion is formulated, measuring the reference tracking error
and the control action. Let z_1(k) be a signal reflecting the reference tracking error and let
z_2(k) be a signal reflecting the control action. The following 2-norm performance index is
introduced:

   J(v, k) = Σ_{j=N_m}^{N} ẑ_1^T(k+j-1|k) ẑ_1(k+j-1|k) + Σ_{j=1}^{N} ẑ_2^T(k+j-1|k) ẑ_2(k+j-1|k)   (2.16)

where ẑ_i(k+j-1|k), i = 1, 2, is the prediction of z_i(k+j-1) at time k. The variable N
denotes the prediction horizon, the variable N_m denotes the minimum cost horizon.

In the MPC literature and in industrial applications of MPC three performance indices
often appear:

   Generalized Predictive Control (GPC) performance index
   Linear Quadratic Predictive Control (LQPC) performance index
   Zone performance index
22 CHAPTER 2. THE BASICS OF MODEL PREDICTIVE CONTROL
These three performance indices can all be rewritten into the standard form of (2.16).
Generalized Predictive Control (GPC)
In the Generalized Predictive Control (GPC) method (Clarke et al., [14][15]) the performance index is based on control and output signals:

J(u, k) = \sum_{j=N_m}^{N} \big( \hat y_p(k+j|k) - r(k+j) \big)^T \big( \hat y_p(k+j|k) - r(k+j) \big) + \lambda^2 \sum_{j=1}^{N} \Delta u^T(k+j-1|k) \, \Delta u(k+j-1|k)    (2.17)

where
  y_p(k) = P(q) y(k) is the weighted process output signal,
  r(k) is the reference trajectory,
  y(k) is the process output signal,
  \Delta u(k) is the process control increment signal,
  N_m is the minimum cost-horizon,
  N is the prediction horizon,
  N_c is the control horizon,
  \lambda is the weighting on the control signal,
  P(q) = 1 + p_1 q^{-1} + \ldots + p_{n_p} q^{-n_p} is a polynomial with desired closed-loop poles,

and \hat y_p(k+j|k) is the prediction of y_p(k+j), based on knowledge up to time k, the increment input signal is \Delta u(k) = u(k) - u(k-1), and \Delta u(k+j) = 0 for j \ge N_c.
\lambda determines the trade-off between tracking accuracy (first term) and control effort (second term). The polynomial P(q) can be chosen by the designer and broadens the class of control objectives. It can be shown that n_p of the closed-loop poles will be placed at the locations of the roots of the polynomial P(q).
Note that the GPC performance index can be translated into the form of (2.16) by choosing

z(k) = \begin{bmatrix} z_1(k) \\ z_2(k) \end{bmatrix} = \begin{bmatrix} y_p(k+1) - r(k+1) \\ \lambda \, \Delta u(k) \end{bmatrix}
Linear Quadratic Predictive Control (LQPC)
In the Linear Quadratic Predictive Control (LQPC) method (García et al., [28]) the performance index is based on control and state signals:

J(u, k) = \sum_{j=N_m}^{N} \hat x^T(k+j|k) \, Q \, \hat x(k+j|k) + \sum_{j=1}^{N} u^T(k+j-1|k) \, R \, u(k+j-1|k)    (2.18)

where
  x(k) is the state signal vector,
  u(k) is the process control signal,
  Q is the state weighting matrix,
  R is the control weighting matrix,

and \hat x(k+j|k) is the prediction of x(k+j), based on knowledge until time k. Q and R are positive semi-definite matrices. In some papers the control increment signal \Delta u(k) is used instead of the control signal u(k). Note that the LQPC performance index can be translated into the form of (2.16) by choosing

z(k) = \begin{bmatrix} z_1(k) \\ z_2(k) \end{bmatrix} = \begin{bmatrix} Q^{1/2} x(k+1) \\ R^{1/2} u(k) \end{bmatrix}.
Zone performance index
A performance index that is popular in industry is the zone performance index:

J(u, k) = \sum_{j=N_m}^{N} \hat\varepsilon^T(k+j|k) \, \hat\varepsilon(k+j|k) + \lambda^2 \sum_{j=1}^{N} u^T(k+j-1|k) \, u(k+j-1|k)    (2.19)

where \varepsilon_i (i = 1, \ldots, m) contributes to the performance index only if |y_i(k) - r_i(k)| > \delta_{max,i}. To be more specific:

\varepsilon_i(k) = \begin{cases} 0 & \text{for } |y_i(k) - r_i(k)| \le \delta_{max,i} \\ y_i(k) - r_i(k) - \delta_{max,i} & \text{for } y_i(k) - r_i(k) \ge \delta_{max,i} \\ y_i(k) - r_i(k) + \delta_{max,i} & \text{for } y_i(k) - r_i(k) \le -\delta_{max,i} \end{cases}

so

|\varepsilon_i(k)| = \min_{|\xi_i(k)| \le \delta_{max,i}} |y_i(k) - r_i(k) + \xi_i(k)|

The relation between the tracking error y(k) - r(k), the zone bounds and the zone performance signal \varepsilon(k) is visualized in figure 2.3. Note that the zone performance index can be translated into the form of (2.16) by choosing

z(k) = \begin{bmatrix} z_1(k) \\ z_2(k) \end{bmatrix} = \begin{bmatrix} r(k+1) - y(k+1) + \xi(k+1) \\ \lambda \, u(k) \end{bmatrix}

and

v(k) = \begin{bmatrix} u(k) \\ \xi(k+1) \end{bmatrix}

with an additional constraint that |\xi(k+j)| \le \delta_{max}.
24 CHAPTER 2. THE BASICS OF MODEL PREDICTIVE CONTROL
0 2 4 6 8 10 12 14 16 18 20
3
2
1
0
1
2
3
0 2 4 6 8 10 12 14 16 18 20
1.5
1
0.5
0
0.5
1
1.5
y
(
k
)

r
(
k
)

(
k
)

k
+

Figure 2.3: Tracking error y(k) r(k) and zone performance signal (k)
Standard performance index
Most papers on predictive control deal with either the GPC or the LQPC performance index, which are clearly weighted squared 2-norms and can be rewritten in the form of (2.16). Now introduce the diagonal selection matrix

\Gamma(j) = \begin{cases} \begin{bmatrix} 0 & 0 \\ 0 & I \end{bmatrix} & \text{for } 0 \le j < N_m - 1 \\[1ex] \begin{bmatrix} I & 0 \\ 0 & I \end{bmatrix} & \text{for } N_m - 1 \le j \le N - 1 \end{cases}    (2.20)

and define

z(k) = \begin{bmatrix} z_1(k) \\ z_2(k) \end{bmatrix}.

The performance index can then be rewritten in the standard form:

J(v, k) = \sum_{j=0}^{N-1} \hat z^T(k+j|k) \, \Gamma(j) \, \hat z(k+j|k)    (2.21)

In this text we will mainly deal with this 2-norm standard performance index (2.21). Other performance indices can also be used; these are often based on other norms, for example the 1-norm or the \infty-norm (Genceli & Nikolaou [30], Zheng & Morari [80]).
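As an illustration, the standard index (2.21) with the selection matrix \Gamma(j) of (2.20) can be evaluated directly from a sequence of predicted z-vectors. A small numpy sketch (function names and numbers are ours):

```python
import numpy as np

def gamma(j, n1, n2, N_m):
    """Selection matrix Gamma(j) of (2.20): the z1-block is switched on
    only from j = N_m - 1 onwards; the z2-block is always on."""
    g1 = np.eye(n1) if j >= N_m - 1 else np.zeros((n1, n1))
    return np.block([[g1, np.zeros((n1, n2))],
                     [np.zeros((n2, n1)), np.eye(n2)]])

def cost(z_pred, n1, N_m):
    """J(v,k) = sum_j z^T(k+j|k) Gamma(j) z(k+j|k); z_pred is a list of
    stacked vectors z = [z1; z2]."""
    n2 = len(z_pred[0]) - n1
    return sum(z @ gamma(j, n1, n2, N_m) @ z for j, z in enumerate(z_pred))

# example: scalar z1 and z2, N = 3, N_m = 2 -> z1 is counted for j = 1, 2 only
z_pred = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
print(cost(z_pred, n1=1, N_m=2))  # 4 + 25 + 61 = 90.0
```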
2.4 Constraints
In practice, industrial processes are subject to constraints. Specific signals must not violate specified bounds due to safety limitations, environmental regulations, consumer specifications and physical restrictions, such as minimum and/or maximum temperatures, pressures, levels in reactor tanks, flows in pipes and slew rates of valves.
Careful tuning of the controller parameters may keep these signals away from their bounds. However, for economic reasons the control system should drive the process as close to the constraints as possible without violating them: operating closer to the limits generally means operating closer to maximum profit.
Predictive control therefore employs a more direct approach by modifying the optimal unconstrained solution in such a way that constraints are not violated. This can be done using optimization techniques such as linear programming (LP) or quadratic programming (QP).
In most cases the constraints can be translated into bounds on control, state or output signals:

u_{min} \le u(k) \le u_{max},  \forall k        \Delta u_{min} \le \Delta u(k) \le \Delta u_{max},  \forall k
y_{min} \le y(k) \le y_{max},  \forall k        x_{min} \le x(k) \le x_{max},  \forall k

Under no circumstances may the constraints be violated, and the control action has to be chosen such that the constraints are satisfied. However, when we implement the controller we do not know the future state, and we replace the original constraints by constraints on the predicted state.
Besides inequality constraints we can also use equality constraints. Equality constraints are usually motivated by the control algorithm itself. An example is the control horizon, which forces the control signal to become constant:

\Delta u(k+j|k) = 0   for j \ge N_c

This makes the control signal smooth and the controller more robust. A second example is the state end-point constraint

x(k+N|k) = x_{ss}

which is related to stability and forces the state at the end of the prediction horizon to its steady-state value x_{ss}.
To summarize, we see that there are two types of constraints (inequality and equality constraints). In this course we look at the two types in a generalized framework:

Inequality constraints:

\psi(k) \le \Psi(k)   \forall k    (2.22)

Since we generally do not know future values of \psi, in the optimization at time k we often use constraints of the form

\hat\psi(k+j|k) \le \Psi(k+j)    (2.23)

for j = 1, \ldots, N, where \hat\psi(k+j|k) is the prediction of \psi(k+j) at time k. Note that \Psi(k) does not depend on future inputs of the system.

Equality constraints:

\phi(k) = 0   \forall k    (2.24)

Again we generally do not know future values of \phi, but we often want to use equality constraints, as noted before, to force the control signal to become constant or to enforce an endpoint penalty. Hence we use:

A_j \hat\phi(k+j|k) = 0    (2.25)

for j = 1, \ldots, N, where \hat\phi(k+j|k) is the prediction of \phi(k+j) at time k. The matrix A_j is used to indicate that certain equality constraints are only active for specific j. For instance, in case of an endpoint penalty, we would choose A_N = I and A_j = 0 for j = 1, \ldots, N-1.
In the above formulas \phi(k) and \psi(k) are vectors, and the equalities and inequalities are meant element-wise. If there are multiple constraints, for example \phi_1(k) = 0 and \phi_2(k) = 0, \psi_1(k) \le \Psi_1(k), \ldots, \psi_4(k) \le \Psi_4(k), we can combine them by stacking them into one equality constraint and one inequality constraint:

\phi(k) = \begin{bmatrix} \phi_1(k) \\ \phi_2(k) \end{bmatrix} = 0,   \psi(k) = \begin{bmatrix} \psi_1(k) \\ \vdots \\ \psi_4(k) \end{bmatrix} \le \begin{bmatrix} \Psi_1(k) \\ \vdots \\ \Psi_4(k) \end{bmatrix} = \Psi(k)

Note that if a variable is bounded two-sided,

\Psi_{1,min} \le \psi_1(k) \le \Psi_{1,max},

we can always translate that into two one-sided constraints by setting

\psi(k) = \begin{bmatrix} \psi_1(k) \\ -\psi_1(k) \end{bmatrix} \le \begin{bmatrix} \Psi_{1,max} \\ -\Psi_{1,min} \end{bmatrix} = \Psi(k)

which corresponds to the general form of equation (2.22).
The capability of predictive control to cope with signal constraints is probably the main reason for its popularity. It allows operation of the process within well-defined operating limits. As constraints are respected, the permitted operating range can be exploited better. In many applications of control, signal constraints are present, caused by the limited capacity of liquid buffers, valves, saturation of actuators, and so on. By minimizing performance index (2.21) subject to these constraints, we obtain the best possible control signal within the set of admissible control signals.
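The stacking of two-sided bounds into the one-sided form \psi(k) \le \Psi(k) of (2.22) is mechanical. A numpy sketch (our own helper, with hypothetical bound values):

```python
import numpy as np

def two_sided_to_one_sided(psi, lo, hi):
    """Rewrite lo <= psi <= hi in the one-sided form of (2.22):
    stack [psi; -psi] <= [hi; -lo]."""
    return np.concatenate([psi, -psi]), np.concatenate([hi, -lo])

u = np.array([0.4, -1.2])
u_min, u_max = np.full(2, -1.0), np.full(2, 1.0)
psi, Psi = two_sided_to_one_sided(u, u_min, u_max)
print(np.all(psi <= Psi))  # False: u[1] violates its lower bound
```

All constraints of a problem can be stacked this way into a single inequality that a QP solver accepts directly.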
Structuring the input signal
To obtain a tractable optimization problem, the input signal should be structured. In other words, the degrees of freedom in the future input sequence [u(k|k), u(k+1|k), \ldots, u(k+N-1|k)] must be reduced. This can be done in various ways. The most frequently used methods are using a control horizon, introducing input blocking, or parametrizing the input with (orthogonal) basis functions.

Control horizon:
When we use a control horizon, the input signal is assumed to be constant from a certain moment in the future, the control horizon N_c, i.e.

u(k+j|k) = u(k+N_c-1|k)   for j \ge N_c
Blocking:
Instead of making the input constant beyond the control horizon, we can force the input to remain constant during some predefined (non-uniform) intervals. In that way, there is some freedom left beyond the control horizon. Define n_{bl} intervals with range (m_\ell, m_{\ell+1}). The first m_1 control variables (u(k), \ldots, u(k+m_1-1)) are still free. The input beyond m_1 is restricted by:

u(k+j|k) = u(k+m_\ell|k)   for m_\ell \le j < m_{\ell+1},  \ell = 1, \ldots, n_{bl}
Basis functions:
Parametrization of the input signal can also be done using a set of basis functions:

u(k+j|k) = \sum_{i=0}^{M} S_i(j) \, \theta_i(k)

where the S_i(j) are the basis functions, M+1 is the number of degrees of freedom left, and the scalars \theta_i(k), i = 0, \ldots, M, are the parameters to be optimized at time k.
The structuring of the input signal mentioned above, with either a control horizon or blocking, can be seen as a parametrization of the input signal with a set of basis functions. For example, by introducing a control horizon, we choose M = N_c - 1 and the basis functions become:

S_i(j) = \delta(j-i)   for i = 0, \ldots, N_c-2
S_{N_c-1}(j) = E(j-N_c+1)

where \delta(j-i) is the discrete-time impulse function, being 1 for j = i and zero elsewhere, and E is the step function, i.e.

E(v) = \begin{cases} 0 & v < 0 \\ 1 & v \ge 0 \end{cases}
The optimization parameters are now equal to

\theta_i(k) = u(k+i|k)   for i = 0, \ldots, N_c-1

which indeed corresponds to the N_c free parameters.
In the case of blocking, we choose M = m_1 + n_{bl} and the basis functions become:

S_i(j) = \delta(j-i)   for i = 0, \ldots, m_1-1
S_{m_1+\ell}(j) = E(j-m_\ell) - E(j-m_{\ell+1})   for \ell = 1, \ldots, n_{bl}

The optimization parameters are now equal to

\theta_i(k) = u(k+i|k)   for i = 0, \ldots, m_1-1

and

\theta_{m_1+\ell}(k) = u(k+m_\ell|k)   for \ell = 1, \ldots, n_{bl}

which corresponds to the m_1 + n_{bl} free parameters. Figure 2.4 gives the basis functions in the case of a control horizon approach and a blocking approach, respectively.
[Figure 2.4: Basis functions in case of a control horizon approach (a) and a blocking approach (b).]
To obtain a tractable optimization problem, we can choose a set of basis functions S_i(j) that are orthogonal. These orthogonal basis functions will be discussed in section 4.3.
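The control-horizon parametrization above can be collected in a single basis matrix S with entries S_i(j), so that the whole future input sequence is S times the parameter vector. A numpy sketch (our own construction, using the impulse/step basis just given):

```python
import numpy as np

def control_horizon_basis(N, Nc):
    """S[j, i] = S_i(j) for the control-horizon parametrization:
    impulses delta(j - i) for i = 0..Nc-2 and a step E(j - Nc + 1) for i = Nc-1."""
    S = np.zeros((N, Nc))
    for i in range(Nc - 1):
        S[i, i] = 1.0                 # delta(j - i)
    S[Nc - 1:, Nc - 1] = 1.0          # step from j = Nc - 1 onwards
    return S

theta = np.array([1.0, -2.0, 0.5])    # the Nc free parameters
u_future = control_horizon_basis(N=6, Nc=3) @ theta
print(u_future)  # [ 1.  -2.   0.5  0.5  0.5  0.5]
```

The input is frozen at the last free value beyond the control horizon, exactly as the constraint u(k+j|k) = u(k+N_c-1|k) demands.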
2.5 Optimization
An optimization algorithm is applied to compute the sequence of future control signals that minimizes the performance index subject to the given constraints. For linear models with linear constraints and a quadratic (2-norm) performance index, the solution can be found using quadratic programming algorithms. If a 1-norm or \infty-norm performance index is used, linear programming algorithms provide the solution (Genceli & Nikolaou [30], Zheng & Morari [80]). Both types of optimization problems are convex, and the algorithms show fast convergence.
In some cases the optimization problem has an empty solution set, so the problem is infeasible. In that case we have to relax one or more of the constraints to find a solution leading to an acceptable control signal. A prioritization among the constraints is an elegant way to tackle this problem.
Procedures for different types of MPC problems will be presented in chapter 5.
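In practice one would call a dedicated QP solver; purely to illustrate the kind of problem solved at every sample, here is a toy projected-gradient method for a box-constrained QP (our own code, not the algorithm used in industrial MPC packages):

```python
import numpy as np

def box_qp(H, f, lo, hi, steps=2000):
    """Minimize 0.5 v'Hv + f'v subject to lo <= v <= hi by projected
    gradient descent (a toy stand-in for a real QP solver)."""
    v = np.clip(np.zeros_like(f), lo, hi)
    step = 1.0 / np.linalg.eigvalsh(H).max()   # 1/L step size
    for _ in range(steps):
        v = np.clip(v - step * (H @ v + f), lo, hi)
    return v

# diagonal H: the constrained optimum is just the clipped unconstrained one
H = np.diag([1.0, 2.0])
f = np.array([-3.0, 1.0])
v = box_qp(H, f, lo=-np.ones(2), hi=np.ones(2))
print(v)  # close to [1.0, -0.5]
```

The unconstrained minimizer is -H^{-1}f = [3, -0.5]; the box constraint activates on the first component only.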
2.6 Receding horizon principle
Predictive control uses the receding horizon principle. This means that after computation of the optimal control sequence, only the first control sample is implemented; subsequently the horizon is shifted one sample and the optimization is restarted with new measurement information. Figure 2.5 explains the idea of the receding horizon.

[Figure 2.5: The moving horizon in predictive control]

At time k the future control sequence {u(k|k), \ldots, u(k+N_c-1|k)} is optimized such that the performance index J(u, k) is minimized subject to the constraints. At time k the first element of the optimal sequence (u(k) = u(k|k)) is applied to the real process. At the next time instant the horizon is shifted and a new optimization at time k+1 is solved.
The predictions \hat y_p(k+j|k) in (2.17) and \hat x(k+j|k) in (2.18) are based on dynamical models. Common model representations in MPC are polynomial models, step response models, impulse response models and state space models. In this course we will consider a state space representation of the model. In appendix D it is shown how other realizations can be transformed into a state space model.
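The receding horizon loop can be sketched numerically. The example below (our own toy first-order system, unconstrained, with a least-squares solution of the horizon problem) applies only the first optimal input at every sample and then re-optimizes:

```python
import numpy as np

# Toy receding-horizon controller for x(k+1) = a x + b u, y = x. Over a
# horizon N the predicted outputs are y_hat = Ct x(k) + Dt v, and we
# minimize ||y_hat - r||^2 + lam^2 ||v||^2 in closed form.
a, b, N, lam = 0.9, 0.5, 5, 0.05
Ct = np.array([a ** j for j in range(1, N + 1)])            # a^j, j = 1..N
Dt = np.array([[a ** (j - i) * b if i <= j else 0.0
                for i in range(1, N + 1)] for j in range(1, N + 1)])

x, r = 0.0, 1.0
for k in range(40):
    free = Ct * x                                           # free response
    v = np.linalg.solve(Dt.T @ Dt + lam ** 2 * np.eye(N),
                        Dt.T @ (r - free))                  # horizon optimum
    x = a * x + b * v[0]                                    # apply first input only
print(round(x, 3))  # the output has settled near the reference r = 1
```

At every sample the whole horizon problem is solved again, but only v[0] ever reaches the plant; this is the receding horizon principle in its simplest form.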
Chapter 3
Prediction
In this chapter we consider the concept of prediction. We consider the following standard model:

x(k+1) = A x(k) + B_1 e(k) + B_2 w(k) + B_3 v(k)
y(k) = C_1 x(k) + D_{11} e(k) + D_{12} w(k)    (3.1)
p(k) = C_p x(k) + D_{p1} e(k) + D_{p2} w(k) + D_{p3} v(k)
The prediction signal p(k) can be any signal that can be formulated as in expression (3.1) (so the state x(k), the output signal y(k) or the tracking error r(k) - y(k)). We will come back to the particular choices of the signal p in Chapter 4.
The control law in predictive control is, no surprise, based on prediction. At each time instant k, we consider all signals over the horizon N. To do so we make predictions at time k of the signal p(k+j) for j = 1, 2, \ldots, N. These predictions, denoted by \hat p(k+j|k), are based on model (3.1), using the information given at time k and the future control signals v(k|k), v(k+1|k), \ldots, v(k+N-1|k).
At time instant k we define the signal vector \tilde p(k) with the predicted signals \hat p(k+j|k), the signal vector \tilde v(k) with the future control signals v(k+j|k), and the signal vector \tilde w(k) with the future external signals w(k+j|k), all in the interval 0 \le j \le N-1, as follows:

\tilde p(k) = \begin{bmatrix} \hat p(k|k) \\ \hat p(k+1|k) \\ \vdots \\ \hat p(k+N-1|k) \end{bmatrix}   \tilde v(k) = \begin{bmatrix} v(k|k) \\ v(k+1|k) \\ \vdots \\ v(k+N-1|k) \end{bmatrix}   \tilde w(k) = \begin{bmatrix} w(k|k) \\ w(k+1|k) \\ \vdots \\ w(k+N-1|k) \end{bmatrix}    (3.2)
The goal of the prediction is to find an estimate of the prediction signal \tilde p(k), composed of a free-response signal \tilde p_0(k) and a matrix \tilde D_{p3}, such that

\tilde p(k) = \tilde p_0(k) + \tilde D_{p3} \tilde v(k)

where the term \tilde D_{p3} \tilde v(k) is the part of \tilde p(k) which is due to the selected control signal v(k+j|k) for j \ge 0, and \tilde p_0(k) is the so-called free response. This free response \tilde p_0(k) is the predicted output signal when the future input signal is put to zero (v(k+j|k) = 0 for j \ge 0) and depends on the present state x(k), the future values of the known signal w, and the characteristics of the noise signal e.
3.1 Noiseless case
The prediction mainly consists of two parts: computation of the output signal as a result of the selected control signal v(k+j|k) and the known signal w(k+j), and a prediction of the response due to the noise signal e(k+j). In this section we assume the noiseless case (so e(k+j) = 0), and we only have to concentrate on the response due to the present and future values v(k+j|k) of the input signal and the known signal w(k+j).
Consider the model from (3.1) for the noiseless case (e(k) = 0):

x(k+1) = A x(k) + B_2 w(k) + B_3 v(k)
p(k) = C_p x(k) + D_{p2} w(k) + D_{p3} v(k)

For this state space model the prediction of the state is computed using successive substitution (Kinnaert [34]):

\hat x(k+j|k) = A \hat x(k+j-1|k) + B_2 w(k+j-1|k) + B_3 v(k+j-1|k)
            = A^2 \hat x(k+j-2|k) + A B_2 w(k+j-2|k) + B_2 w(k+j-1|k) + A B_3 v(k+j-2|k) + B_3 v(k+j-1|k)
            \vdots
            = A^j x(k) + \sum_{i=1}^{j} A^{i-1} B_2 w(k+j-i|k) + \sum_{i=1}^{j} A^{i-1} B_3 v(k+j-i|k)

\hat p(k+j|k) = C_p \hat x(k+j|k) + D_{p2} w(k+j|k) + D_{p3} v(k+j|k)
            = C_p A^j x(k) + \sum_{i=1}^{j} C_p A^{i-1} B_2 w(k+j-i|k) + D_{p2} w(k+j|k) + \sum_{i=1}^{j} C_p A^{i-1} B_3 v(k+j-i|k) + D_{p3} v(k+j|k)
Now define

\tilde C_p = \begin{bmatrix} C_p \\ C_p A \\ C_p A^2 \\ \vdots \\ C_p A^{N-1} \end{bmatrix}   \tilde D_{p2} = \begin{bmatrix} D_{p2} & 0 & \cdots & 0 & 0 \\ C_p B_2 & D_{p2} & & & 0 \\ C_p A B_2 & C_p B_2 & \ddots & & \vdots \\ \vdots & & \ddots & D_{p2} & 0 \\ C_p A^{N-2} B_2 & \cdots & & C_p B_2 & D_{p2} \end{bmatrix}    (3.3)

\tilde D_{p3} = \begin{bmatrix} D_{p3} & 0 & \cdots & 0 & 0 \\ C_p B_3 & D_{p3} & & & 0 \\ C_p A B_3 & C_p B_3 & \ddots & & \vdots \\ \vdots & & \ddots & D_{p3} & 0 \\ C_p A^{N-2} B_3 & \cdots & & C_p B_3 & D_{p3} \end{bmatrix}    (3.4)
then the vector \tilde p(k) with the predicted output values can be written as

\tilde p(k) = \tilde C_p x(k) + \tilde D_{p2} \tilde w(k) + \tilde D_{p3} \tilde v(k) = \tilde p_0(k) + \tilde D_{p3} \tilde v(k)

In this equation \tilde p_0(k) = \tilde C_p x(k) + \tilde D_{p2} \tilde w(k) is the prediction of the output if \tilde v(k) is chosen equal to zero, denoted as the free-response signal, and \tilde D_{p3} is the predictor matrix, describing the relation between the future control vector \tilde v(k) and the predicted output \tilde p(k).
The predictor matrix \tilde D_{p3} is a Toeplitz matrix. It relates future inputs to future output signals, and contains the elements D_{p3} and C_p A^j B_3, j \ge 0, which are exactly equal to the impulse response parameters

g_i = \begin{cases} D_{p3} & \text{for } i = 0 \\ C_p A^{i-1} B_3 & \text{for } i > 0 \end{cases}

of the transfer function P_{p3}(q) = C_p (qI - A)^{-1} B_3 + D_{p3}, which describes the input-output relation between the signals v(k) and p(k).
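The matrices (3.3)-(3.4) are easy to assemble numerically. The sketch below (our own code, with arbitrary example data) builds \tilde C_p and \tilde D_{p3} and checks the prediction \tilde p(k) = \tilde C_p x(k) + \tilde D_{p3} \tilde v(k) against a direct simulation of the noiseless model with w = 0:

```python
import numpy as np

def prediction_matrices(A, B3, Cp, Dp3, N):
    """Ct stacks Cp A^j (j = 0..N-1); Dt is the lower block-triangular
    Toeplitz predictor with Dp3 on the diagonal and Cp A^(j-i-1) B3 below."""
    m, p = B3.shape[1], Cp.shape[0]
    Ct = np.vstack([Cp @ np.linalg.matrix_power(A, j) for j in range(N)])
    Dt = np.zeros((N * p, N * m))
    for j in range(N):
        for i in range(j + 1):
            blk = Dp3 if i == j else Cp @ np.linalg.matrix_power(A, j - i - 1) @ B3
            Dt[j * p:(j + 1) * p, i * m:(i + 1) * m] = blk
    return Ct, Dt

A = np.array([[0.7, 1.0], [-0.1, 0.0]])
B3 = np.array([[1.0], [0.0]])
Cp, Dp3 = np.array([[1.0, 0.0]]), np.zeros((1, 1))
N = 4
Ct, Dt = prediction_matrices(A, B3, Cp, Dp3, N)

x0 = np.array([0.5, -0.3])
v = np.array([1.0, 0.0, -1.0, 2.0])          # future inputs v(k)..v(k+3)
p_pred = Ct @ x0 + Dt @ v                    # stacked predictions

xs, p_sim = x0, []
for j in range(N):                           # direct simulation, w = e = 0
    p_sim.append(float(Cp @ xs + Dp3 @ v[j:j+1]))
    xs = A @ xs + B3 @ v[j:j+1]
print(np.allclose(p_pred, p_sim))  # True
```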
3.2 Noisy case
As was stated in the previous section, the second part of the prediction scheme is making a prediction of the response due to the noise signal e. If we know the characteristics of the noise signal, this information can be used to predict the expected noise signal over the future horizon. If we do not know anything about the characteristics of the disturbances, the best assumption we can make is that e is zero-mean white noise. In these lecture notes we use this assumption of zero-mean white noise e, so the best prediction of future values of the signal e is zero (\hat e(k+j|k) = 0 for j > 0).
Consider the model given in equation (3.1):

x(k+1) = A x(k) + B_1 e(k) + B_2 w(k) + B_3 v(k)
y(k) = C_1 x(k) + D_{11} e(k) + D_{12} w(k)
p(k) = C_p x(k) + D_{p1} e(k) + D_{p2} w(k) + D_{p3} v(k)

The prediction of the state using successive substitution of the state equation is given by (Kinnaert [34]):

\hat x(k+j|k) = A^j x(k) + \sum_{i=1}^{j} A^{i-1} B_1 \hat e(k+j-i|k) + \sum_{i=1}^{j} A^{i-1} B_2 w(k+j-i|k) + \sum_{i=1}^{j} A^{i-1} B_3 v(k+j-i|k)

\hat p(k+j|k) = C_p \hat x(k+j|k) + D_{p1} \hat e(k+j|k) + D_{p2} w(k+j|k) + D_{p3} v(k+j|k)

The term \hat e(k+m|k) for m > 0 is a prediction of e(k+m), based on measurements at time k. If the signal e is assumed to be zero-mean white noise, the best prediction of e(k+m), made at time k, is

\hat e(k+m|k) = 0   for m > 0

Substitution then results in (for j > 0):

\hat x(k+j|k) = A^j x(k) + A^{j-1} B_1 \hat e(k|k) + \sum_{i=1}^{j} A^{i-1} B_2 w(k+j-i|k) + \sum_{i=1}^{j} A^{i-1} B_3 v(k+j-i|k)

\hat p(k+j|k) = C_p \hat x(k+j|k) + D_{p2} w(k+j|k) + D_{p3} v(k+j|k)
            = C_p A^j x(k) + C_p A^{j-1} B_1 \hat e(k|k) + \sum_{i=1}^{j} C_p A^{i-1} B_2 w(k+j-i|k) + D_{p2} w(k+j|k) + \sum_{i=1}^{j} C_p A^{i-1} B_3 v(k+j-i|k) + D_{p3} v(k+j|k)

\hat p(k|k) = C_p x(k) + D_{p1} \hat e(k|k) + D_{p2} w(k) + D_{p3} v(k)
Define

\tilde D_{p1} = \begin{bmatrix} D_{p1} \\ C_p B_1 \\ C_p A B_1 \\ \vdots \\ C_p A^{N-2} B_1 \end{bmatrix}    (3.5)

then the vector \tilde p(k) with the predicted output values can be written as

\tilde p(k) = \tilde C_p x(k) + \tilde D_{p1} \hat e(k|k) + \tilde D_{p2} \tilde w(k) + \tilde D_{p3} \tilde v(k) = \tilde p_0(k) + \tilde D_{p3} \tilde v(k)    (3.6)
In this equation the free-response signal is equal to

\tilde p_0(k) = \tilde C_p x(k) + \tilde D_{p1} \hat e(k|k) + \tilde D_{p2} \tilde w(k)    (3.7)

Now, assuming D_{11} to be invertible, we derive an expression for \hat e(k|k) from (3.1):

\hat e(k|k) = D_{11}^{-1} \big( y(k) - C_1 \hat x(k) - D_{12} w(k) \big)

in which y(k) is the measured process output and \hat x(k) is the estimate of the state x(k) made at the previous time instant k-1. Then

\tilde p_0(k) = \tilde C_p \hat x(k) + \tilde D_{p1} \hat e(k|k) + \tilde D_{p2} \tilde w(k)    (3.8)
            = \tilde C_p \hat x(k) + \tilde D_{p1} D_{11}^{-1} \big( y(k) - C_1 \hat x(k) - D_{12} w(k) \big) + \tilde D_{p2} \tilde w(k)    (3.9)
            = \big( \tilde C_p - \tilde D_{p1} D_{11}^{-1} C_1 \big) \hat x(k) + \tilde D_{p1} D_{11}^{-1} y(k) + \big( \tilde D_{p2} - \tilde D_{p1} D_{11}^{-1} D_{12} E_w \big) \tilde w(k)    (3.10)

where E_w = [ I \;\; 0 \;\; \cdots \;\; 0 ] is a selection matrix such that w(k) = E_w \tilde w(k).
Example 1 : Prediction of the output signal of a system
In this example we consider the prediction of the output signal p(k) = y(k) of a second
order system. We have given
x_{o1}(k+1) = 0.7 x_{o1}(k) + x_{o2}(k) + e_o(k) + 0.1 d_o(k) + u(k)
x_{o2}(k+1) = -0.1 x_{o1}(k) - 0.2 e_o(k) - 0.05 d_o(k)
y(k) = x_{o1}(k) + e_o(k)

where y(k) is the process output, v(k) = u(k) is the process input, w(k) = d_o(k) is a known disturbance signal and e(k) = e_o(k) is assumed to be ZMWN. We obtain the model of (3.1) by setting

A = \begin{bmatrix} 0.7 & 1 \\ -0.1 & 0 \end{bmatrix}   B_1 = \begin{bmatrix} 1 \\ -0.2 \end{bmatrix}   B_2 = \begin{bmatrix} 0.1 \\ -0.05 \end{bmatrix}   B_3 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}

C_1 = C_p = \begin{bmatrix} 1 & 0 \end{bmatrix}   D_{11} = D_{p1} = 1   D_{12} = D_{p2} = 0   D_{13} = D_{p3} = 0

For N = 4 we find:

\tilde C_1 = \begin{bmatrix} C_1 \\ C_1 A \\ C_1 A^2 \\ C_1 A^3 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0.7 & 1 \\ 0.39 & 0.7 \\ 0.203 & 0.39 \end{bmatrix}   \tilde D_{11} = \begin{bmatrix} D_{11} \\ C_1 B_1 \\ C_1 A B_1 \\ C_1 A^2 B_1 \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \\ 0.5 \\ 0.25 \end{bmatrix}

\tilde D_{12} = \begin{bmatrix} D_{12} & 0 & 0 & 0 \\ C_1 B_2 & D_{12} & 0 & 0 \\ C_1 A B_2 & C_1 B_2 & D_{12} & 0 \\ C_1 A^2 B_2 & C_1 A B_2 & C_1 B_2 & D_{12} \end{bmatrix} = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0.1 & 0 & 0 & 0 \\ 0.02 & 0.1 & 0 & 0 \\ 0.004 & 0.02 & 0.1 & 0 \end{bmatrix}

\tilde D_{13} = \begin{bmatrix} D_{13} & 0 & 0 & 0 \\ C_1 B_3 & D_{13} & 0 & 0 \\ C_1 A B_3 & C_1 B_3 & D_{13} & 0 \\ C_1 A^2 B_3 & C_1 A B_3 & C_1 B_3 & D_{13} \end{bmatrix} = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0.7 & 1 & 0 & 0 \\ 0.39 & 0.7 & 1 & 0 \end{bmatrix}
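The matrices of this example are easy to verify numerically; a small check script (our own, following the constructions of Sections 3.1-3.2):

```python
import numpy as np

# system data of Example 1
A = np.array([[0.7, 1.0], [-0.1, 0.0]])
B1 = np.array([[1.0], [-0.2]])
B3 = np.array([[1.0], [0.0]])
C1 = np.array([[1.0, 0.0]])
N = 4

Ct = np.vstack([C1 @ np.linalg.matrix_power(A, j) for j in range(N)])
Dt11 = np.vstack([np.eye(1)] + [C1 @ np.linalg.matrix_power(A, j) @ B1
                                for j in range(N - 1)])
col13 = np.vstack([np.zeros((1, 1))] + [C1 @ np.linalg.matrix_power(A, j) @ B3
                                        for j in range(N - 1)])

print(Ct)               # rows (1, 0), (0.7, 1), (0.39, 0.7), (0.203, 0.39)
print(Dt11.ravel())     # 1, 1, 0.5, 0.25
print(col13.ravel())    # first column of the Toeplitz matrix: 0, 1, 0.7, 0.39
```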
3.3 The use of a priori knowledge in making predic-
tions
In this chapter we have discussed the prediction of the signal p(k), based on the vector

\tilde w(k) = \begin{bmatrix} w(k) \\ w(k+1) \\ \vdots \\ w(k+N-1) \end{bmatrix}

This means that we assume future values of the reference signal or the disturbance signals to be known at time k. In practice this is often not the case, and estimates of these signals have to be made:

\tilde w(k) = \begin{bmatrix} \hat w(k|k) \\ \hat w(k+1|k) \\ \vdots \\ \hat w(k+N-1|k) \end{bmatrix}

The estimation can be done in various ways. One can use heuristics to compute the estimates \hat w(k+j|k), or one can use an extrapolation technique. With extrapolation we can construct the estimates of w beyond the present time instant k. It is similar to the process of interpolation, but its results are often subject to greater uncertainty. We will discuss three types of extrapolation: zero-th order, first order and polynomial extrapolation. We assume the past values of w to be known, and compute the future values of w on the basis of those past values.

Zero-th order extrapolation:
In a zero-th order extrapolation we assume w(k+j) to be constant for j \ge 0, and so \hat w(k+j|k) = w(k-1) for j \ge 0. This results in

\tilde w(k) = \begin{bmatrix} w(k-1) \\ w(k-1) \\ \vdots \\ w(k-1) \end{bmatrix}

Linear (first order) extrapolation:
Linear (or first order) extrapolation means creating a tangent line through the last known values of w and extending it beyond them. Linear extrapolation will only provide good results when used to extend the graph of an approximately linear function, and not too far beyond the known data. We define \hat w(k+j|k) = w(k-1) + (j+1)\big(w(k-1) - w(k-2)\big) = \big((j+2) - (j+1) q^{-1}\big) w(k-1) for j \ge 0. This results in

\tilde w(k) = \begin{bmatrix} 2 w(k-1) - w(k-2) \\ 3 w(k-1) - 2 w(k-2) \\ \vdots \\ (N+1) w(k-1) - N w(k-2) \end{bmatrix}

Polynomial extrapolation:
A polynomial curve can be created through more than one or two past values of w. For an n-th order extrapolation we use n+1 past values of w. Note that high order polynomial extrapolation must be used with due care: the error estimate of the extrapolated value will grow with the degree of the polynomial extrapolation. Note that the zero-th and first order extrapolations are special cases of polynomial extrapolation.
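All three extrapolation schemes fit in one small routine: fit a polynomial of the desired order through the last order+1 known samples and evaluate it over the horizon. A numpy sketch (our own helper):

```python
import numpy as np

def extrapolate(w_past, N, order):
    """Fit a polynomial of the given order through the last order+1 known
    samples of w and evaluate it over the horizon j = 0..N-1."""
    n = order + 1
    t_past = np.arange(-n, 0)                  # times k-n, ..., k-1
    coeff = np.polyfit(t_past, w_past[-n:], order)
    return np.polyval(coeff, np.arange(N))     # times k, ..., k+N-1

w_past = np.array([0.0, 1.0, 3.0])             # ..., w(k-2) = 1, w(k-1) = 3
print(extrapolate(w_past, N=3, order=0))       # [3. 3. 3.]
print(extrapolate(w_past, N=3, order=1))       # [5. 7. 9.]
```

The first-order result reproduces the stacked vector above: 2w(k-1) - w(k-2) = 5, 3w(k-1) - 2w(k-2) = 7, and so on.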
The quality of extrapolation
Typically, the quality of a particular method of extrapolation is limited by the assumptions about the signal w made by the method. If the method assumes the signal is smooth, a non-smooth signal will be poorly extrapolated. Even for proper assumptions about the signal, the extrapolation can diverge exponentially from the real signal. For particular problems, additional information about w may be available which may improve the convergence.
In practice, the zero-th order extrapolation is used most, mainly because it never diverges, but also due to lack of information about the future behavior of the signal w.
Chapter 4
Standard formulation
In this chapter we will standardize the predictive control setting and formulate the standard predictive control problem. First we consider the performance index, then the constraints.
4.1 The performance index
In predictive control various types of performance indices are used, in which either the input signal is weighted together with the state (LQPC), or the input increment signal is weighted together with the tracking error (GPC). This leads to two different types of performance indices. We will show that both can be combined into one standard performance index.

The standard predictive control performance index:
In this course we will use the state space setting of LQPC as well as a reference signal as is done in GPC. Instead of the state x or the output y we use a general signal z in the performance index. We therefore adopt the standard predictive control performance index

J(v, k) = \sum_{j=0}^{N-1} \hat z^T(k+j|k) \, \Gamma(j) \, \hat z(k+j|k)    (4.1)

where \hat z(k+j|k) is the prediction of z(k+j) at time k and \Gamma(j) is a diagonal selection matrix with ones and zeros on the diagonal. Finally, we use a generalized input signal v(k), which can be either the input signal u(k) or the input increment signal \Delta u(k), and a signal w(k), which contains all known external signals, such as the reference signal r(k) and the known disturbance signal d(k). In most cases the noise signal e is ZMWN, although in some cases we actually have integrated ZMWN. The state signal x, input signal v, output signal y, noise signal e, external signal w and performance signal z are related by the following equations:

x(k+1) = A x(k) + B_1 e(k) + B_2 w(k) + B_3 v(k)    (4.2)
y(k) = C_1 x(k) + D_{11} e(k) + D_{12} w(k)    (4.3)
z(k) = C_2 x(k) + D_{21} e(k) + D_{22} w(k) + D_{23} v(k)    (4.4)

We will now show how both the LQPC and the GPC performance index can be rewritten as a standard performance index.
The LQPC performance index:
In the LQPC performance index (see Section 2.3) the state is weighted together with the input signal. The fact that we use the input in the performance index means that we will use an IO model. The LQPC performance index is now given by

J(u, k) = \sum_{j=N_m}^{N} \hat x_o^T(k+j|k) \, Q \, \hat x_o(k+j|k) + \sum_{j=1}^{N} u^T(k+j-1|k) \, R \, u(k+j-1|k)    (4.5)

where x_o is the state of the IO model

x_o(k+1) = A_o x_o(k) + K_o e_o(k) + L_o d_o(k) + B_o u(k)    (4.6)
y(k) = C_o x_o(k) + D_H e_o(k) + D_F d_o(k)    (4.7)

and N \ge N_m \ge 1, while Q and R are positive semi-definite.
Theorem 2 Transformation of LQPC into standard form
Given the MPC problem with LQPC performance index (4.5) for the IO model (4.6),(4.7). This LQPC problem can be translated into a standard predictive control problem with performance index (4.1) and model (4.2),(4.3),(4.4) by the following substitutions:

x(k) = x_o(k),  v(k) = u(k),  w(k) = d_o(k),  e(k) = e_o(k)

A = A_o,  B_1 = K_o,  B_2 = L_o,  B_3 = B_o,  C_1 = C_o,  D_{11} = D_H,  D_{12} = D_F

C_2 = \begin{bmatrix} Q^{1/2} A_o \\ 0 \end{bmatrix}   D_{21} = \begin{bmatrix} Q^{1/2} K_o \\ 0 \end{bmatrix}   D_{22} = \begin{bmatrix} Q^{1/2} L_o \\ 0 \end{bmatrix}   D_{23} = \begin{bmatrix} Q^{1/2} B_o \\ R^{1/2} \end{bmatrix}

\Gamma(j) = \begin{bmatrix} \Gamma_1(j) & 0 \\ 0 & I \end{bmatrix}   \Gamma_1(j) = \begin{cases} 0 & \text{for } 0 \le j < N_m - 1 \\ I & \text{for } N_m - 1 \le j \le N - 1 \end{cases}

The proof of this theorem is given in Appendix E.
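Theorem 2 can also be checked numerically on a random example: simulate the IO model (noiseless, d_o = 0) and compare the LQPC cost (4.5) with the standard cost (4.1) under the substitutions above. A numpy sketch (our own verification script):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, N, Nm = 2, 1, 4, 2
Ao = rng.normal(size=(n, n))
Bo = rng.normal(size=(n, m))
Q, R = np.eye(n), 0.1 * np.eye(m)
x0, u = rng.normal(size=n), rng.normal(size=(N, m))

xs = [x0]                                   # simulate (e = 0, d_o = 0)
for j in range(N):
    xs.append(Ao @ xs[-1] + Bo @ u[j])

# LQPC cost (4.5)
J_lqpc = sum(xs[j] @ Q @ xs[j] for j in range(Nm, N + 1)) \
       + sum(u[j] @ R @ u[j] for j in range(N))

# standard cost (4.1): z(k+j) = [Q^(1/2) x(k+j+1); R^(1/2) u(k+j)],
# with Gamma_1(j) switching the state term on only from j = Nm - 1
J_std = sum((xs[j + 1] @ Q @ xs[j + 1] if j >= Nm - 1 else 0.0)
            + u[j] @ R @ u[j] for j in range(N))
print(np.isclose(J_lqpc, J_std))  # True
```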
The GPC performance index:
In the GPC performance index (see Section 2.3) the tracking error signal is weighted together with the increment input signal. The fact that we use the increment input in the performance index means that we will use an IIO model. The GPC performance index is now given by

J(u, k) = \sum_{j=N_m}^{N} \big( \hat y_p(k+j|k) - r(k+j) \big)^T \big( \hat y_p(k+j|k) - r(k+j) \big) + \lambda^2 \sum_{j=1}^{N} \Delta u^T(k+j-1|k) \, \Delta u(k+j-1|k)    (4.8)

where N \ge N_m \ge 1 and \lambda \in \mathbb{R}. The output y(k) and increment input \Delta u(k) are related by the IIO model

x_i(k+1) = A_i x_i(k) + K_i e_i(k) + L_i d_i(k) + B_i \Delta u(k)    (4.9)
y(k) = C_i x_i(k) + D_H e_i(k)    (4.10)

and the weighted output y_p(k) = P(q) y(k) is given in state space representation by

x_p(k+1) = A_p x_p(k) + B_p y(k)    (4.11)
y_p(k) = C_p x_p(k) + D_p y(k)    (4.12)

(Note that for GPC we choose D_F = 0.)
Theorem 3 Transformation of GPC into standard form
Given the MPC problem with GPC performance index (4.8) for the IIO model (4.9),(4.10) and the weighting filter P(q), given by (4.11),(4.12). This GPC problem can be translated into a standard predictive control problem with performance index (4.1) and model (4.2),(4.3),(4.4) by the following substitutions:

x(k) = \begin{bmatrix} x_p(k) \\ x_i(k) \end{bmatrix},  v(k) = \Delta u(k),  w(k) = \begin{bmatrix} d_i(k) \\ r(k+1) \end{bmatrix},  e(k) = e_i(k)

A = \begin{bmatrix} A_p & B_p C_i \\ 0 & A_i \end{bmatrix}   B_1 = \begin{bmatrix} B_p D_H \\ K_i \end{bmatrix}   B_2 = \begin{bmatrix} 0 & 0 \\ L_i & 0 \end{bmatrix}   B_3 = \begin{bmatrix} 0 \\ B_i \end{bmatrix}

C_1 = \begin{bmatrix} 0 & C_i \end{bmatrix}   D_{11} = D_H   D_{12} = \begin{bmatrix} 0 & 0 \end{bmatrix}

C_2 = \begin{bmatrix} C_p A_p & C_p B_p C_i + D_p C_i A_i \\ 0 & 0 \end{bmatrix}   D_{21} = \begin{bmatrix} C_p B_p D_H + D_p C_i K_i \\ 0 \end{bmatrix}

D_{22} = \begin{bmatrix} D_p C_i L_i & -I \\ 0 & 0 \end{bmatrix}   D_{23} = \begin{bmatrix} D_p C_i B_i \\ \lambda I \end{bmatrix}

\Gamma(j) = \begin{bmatrix} \Gamma_1(j) & 0 \\ 0 & I \end{bmatrix}   \Gamma_1(j) = \begin{cases} 0 & \text{for } 0 \le j < N_m - 1 \\ I & \text{for } N_m - 1 \le j \le N - 1 \end{cases}

The proof of this theorem is given in Appendix E.
The zone performance index:
The zone performance index (see Section 2.3) uses the IO model, and is given by

J(u, k) = \sum_{j=N_m}^{N} \hat\varepsilon^T(k+j|k) \, \hat\varepsilon(k+j|k) + \lambda^2 \sum_{j=1}^{N} u^T(k+j-1|k) \, u(k+j-1|k)    (4.13)

where \varepsilon_i(k), i = 1, \ldots, m, contributes to the performance index only if |y_i(k) - r_i(k)| > \delta_{max,i}:

\varepsilon_i(k) = \begin{cases} 0 & \text{for } |y_i(k) - r_i(k)| \le \delta_{max,i} \\ y_i(k) - r_i(k) - \delta_{max,i} & \text{for } y_i(k) - r_i(k) \ge \delta_{max,i} \\ y_i(k) - r_i(k) + \delta_{max,i} & \text{for } y_i(k) - r_i(k) \le -\delta_{max,i} \end{cases}

so

|\varepsilon_i(k)| = \min_{|\xi_i(k)| \le \delta_{max,i}} |y_i(k) - r_i(k) + \xi_i(k)|

There holds N \ge N_m \ge 1 and \lambda \in \mathbb{R}. The output y(k) and input u(k) are related by the IO model

x_o(k+1) = A_o x_o(k) + K_o e_o(k) + L_o d_o(k) + B_o u(k)    (4.14)
y(k) = C_o x_o(k) + D_H e_o(k) + D_F d_o(k)    (4.15)
Theorem 4 Transformation of zone control into standard form
Given the MPC problem with zone performance index (4.13) for the IO model (4.14),(4.15). This zone control problem can be translated into a standard predictive control problem with performance index (4.1) and model (4.2),(4.3),(4.4) by the following substitutions:

z(k) = \begin{bmatrix} y(k+1|k) - r(k+1) + \xi(k+1) \\ \lambda \, u(k) \end{bmatrix}   v(k) = \begin{bmatrix} u(k) \\ \xi(k+1) \end{bmatrix}   w(k) = \begin{bmatrix} d_o(k) \\ r(k+1) \end{bmatrix}   e(k) = e_o(k)

A = A_o   B_1 = K_o   B_2 = \begin{bmatrix} L_o & 0 \end{bmatrix}   B_3 = \begin{bmatrix} B_o & 0 \end{bmatrix}

C_1 = C_o   D_{11} = D_H   D_{12} = \begin{bmatrix} D_F & 0 \end{bmatrix}

C_2 = \begin{bmatrix} C_o A_o \\ 0 \end{bmatrix}   D_{21} = \begin{bmatrix} C_o K_o \\ 0 \end{bmatrix}   D_{22} = \begin{bmatrix} C_o L_o & -I \\ 0 & 0 \end{bmatrix}   D_{23} = \begin{bmatrix} C_o B_o & I \\ \lambda I & 0 \end{bmatrix}

C_4 = \begin{bmatrix} 0 \\ 0 \end{bmatrix}   D_{41} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}   D_{42} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}   D_{43} = \begin{bmatrix} 0 & I \\ 0 & -I \end{bmatrix}   \Psi = \begin{bmatrix} \delta_{max} \\ \delta_{max} \end{bmatrix}

Then:

x(k+1) = A x(k) + B_1 e(k) + B_2 w(k) + B_3 v(k)
y(k) = C_1 x(k) + D_{11} e(k) + D_{12} w(k)    (4.16)
z(k) = C_2 x(k) + D_{21} e(k) + D_{22} w(k) + D_{23} v(k)
\psi(k) = C_4 x(k) + D_{41} e(k) + D_{42} w(k) + D_{43} v(k)

The proof is left to the reader. Note that we need the constraint \psi(k) \le \Psi on \xi. This can be incorporated through the techniques presented in the next section.
Both the LQPC performance index and the GPC performance index are frequently used in practice and in the literature. The LQPC performance index resembles the LQ optimal control performance index, and the matrices Q and R can be chosen in a similar way as in LQ optimal control. The GPC performance index originates from the adaptive control community and is based on SISO polynomial models.
The standard predictive control performance index:
We have found that the GPC, LQPC and the zone performance indices can be rewritten into the standard form:

J(v, k) = \sum_{j=0}^{N-1} \hat{z}^T(k+j|k)\,\Gamma(j)\,\hat{z}(k+j|k)   (4.17)
If we define the signal vector \tilde{z}(k) with the predicted performance signals \hat{z}(k+j|k) in the interval 0 \le j \le N-1 and the block diagonal matrix \bar{\Gamma} as follows

\tilde{z}(k) = \begin{bmatrix} \hat{z}(k|k) \\ \hat{z}(k+1|k) \\ \vdots \\ \hat{z}(k+N-1|k) \end{bmatrix} \qquad
\bar{\Gamma} = \begin{bmatrix} \Gamma(0) & 0 & \ldots & 0 \\ 0 & \Gamma(1) & & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \ldots & \Gamma(N-1) \end{bmatrix}   (4.18)
then the standard predictive control performance index (4.17) can be rewritten in the following way:

J(v, k) = \sum_{j=0}^{N-1} \hat{z}^T(k+j|k)\,\Gamma(j)\,\hat{z}(k+j|k) = \tilde{z}^T(k)\,\bar{\Gamma}\,\tilde{z}(k)
where \tilde{z}(k) can be computed using the formulas from Chapter 3:

\tilde{z}(k) = \tilde{C}_2 x(k) + \tilde{D}_{21} e(k) + \tilde{D}_{22} \tilde{w}(k) + \tilde{D}_{23} \tilde{v}(k)   (4.19)
with

\tilde{C}_2 = \begin{bmatrix} C_2 \\ C_2 A \\ C_2 A^2 \\ \vdots \\ C_2 A^{N-1} \end{bmatrix} \qquad
\tilde{D}_{21} = \begin{bmatrix} D_{21} \\ C_2 B_1 \\ C_2 A B_1 \\ \vdots \\ C_2 A^{N-2} B_1 \end{bmatrix}

\tilde{D}_{22} = \begin{bmatrix} D_{22} & 0 & \ldots & 0 & 0 \\ C_2 B_2 & D_{22} & & & 0 \\ C_2 A B_2 & C_2 B_2 & \ddots & & \vdots \\ \vdots & & \ddots & D_{22} & 0 \\ C_2 A^{N-2} B_2 & \ldots & & C_2 B_2 & D_{22} \end{bmatrix}

\tilde{D}_{23} = \begin{bmatrix} D_{23} & 0 & \ldots & 0 & 0 \\ C_2 B_3 & D_{23} & & & 0 \\ C_2 A B_3 & C_2 B_3 & \ddots & & \vdots \\ \vdots & & \ddots & D_{23} & 0 \\ C_2 A^{N-2} B_3 & \ldots & & C_2 B_3 & D_{23} \end{bmatrix}
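The block-Toeplitz structure of these prediction matrices can be built programmatically. A minimal numpy sketch (function name is mine), valid for any of the block triples appearing here, e.g. (C_2, B_3, D_23) or (C_4, B_2, D_42):

```python
import numpy as np

def prediction_matrices(A, B, C, D, N):
    """Build Ct = [C; C A; ...; C A^{N-1}] and the lower block-Toeplitz
    matrix Dt with D on the diagonal and C A^{i-j-1} B below it."""
    Ct = np.vstack([C @ np.linalg.matrix_power(A, j) for j in range(N)])
    p, m = D.shape
    Dt = np.zeros((N * p, N * m))
    for i in range(N):            # block row
        for j in range(N):        # block column
            if i == j:
                Dt[i*p:(i+1)*p, j*m:(j+1)*m] = D
            elif i > j:
                Dt[i*p:(i+1)*p, j*m:(j+1)*m] = C @ np.linalg.matrix_power(A, i - j - 1) @ B
    return Ct, Dt
```

With (A, B_3, C_2, D_23) this returns the pair (C̃_2, D̃_23); the column [D_21; C_2 B_1; ...] for e(k) is the first block column of the Toeplitz matrix built from (A, B_1, C_2, D_21).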
4.2 Handling constraints
As was already discussed in Chapter 1, in practical situations there will be signal constraints. These constraints on control, state and output signals are motivated by safety and environmental demands, economical perspectives and equipment limitations. There are two types of constraints:

Equality constraints:
In general, we impose constraints on the actual inputs and states. Clearly, for future constraints we often replace the actual input and state constraints by constraints on the predicted state and input. However, in particular for equality constraints it often occurs that we actually want to impose constraints directly on the predicted state and input. This is due to the fact that equality constraints are usually motivated by the control algorithm. For example, to decrease the degrees of freedom we introduce the control horizon N_c, and to guarantee stability we can add a state end-point constraint to the problem.
Hence we use:

A_j \hat{\xi}(k+j|k) = 0

for j = 1, \ldots, N, where \hat{\xi}(k+j|k) is the prediction of \xi(k+j) at time k. The matrix A_j is used to indicate that certain equality constraints are only active for specific j. For instance, in case of an end-point penalty, we would choose A_N = I and A_j = 0 for j = 1, \ldots, N-1.
In this course we look at equality constraints in a generalized form. We can stack all constraints at time k on predicted inputs or states in one vector:

\bar{\phi}(k) = \begin{bmatrix} A_1 \hat{\xi}(k+1|k) \\ A_2 \hat{\xi}(k+2|k) \\ \vdots \\ A_N \hat{\xi}(k+N|k) \end{bmatrix}

Clearly, if some of the A_i are zero or do not have full rank we can reduce the size of this vector by only looking at the active constraints.
Using this framework, we can express the equality constraints in the form:

\bar{\phi}(k) = \tilde{C}_3 x(k) + \tilde{D}_{31} e(k) + \tilde{D}_{32} \tilde{w}(k) + \tilde{D}_{33} \tilde{v}(k) = 0   (4.20)

where \tilde{w}(k) contains all known future external signals, i.e. \tilde{w}(k) is a vector consisting of w(k), w(k+1), \ldots, w(k+N). Similarly, \tilde{v}(k) contains the predicted future input signals, i.e. \tilde{v}(k) is a vector consisting of v(k|k), v(k+1|k), \ldots, v(k+N|k).
Inequality constraints:
Inequality constraints are due to physical, economical or safety limitations. We actually prefer to work with constraints on a signal \psi(k) which consists of actual inputs and states:

\psi(k) \le \Psi(k) \quad \forall k

Since these are unknown we have to work with predictions, i.e. we have constraints of the form:

\hat{\psi}(k+j|k) \le \Psi(k+j)

for j = 0, \ldots, N. Again, we stack all our inequality constraints:

\bar{\psi}(k) = \begin{bmatrix} \hat{\psi}(k|k) \\ \hat{\psi}(k+1|k) \\ \vdots \\ \hat{\psi}(k+N|k) \end{bmatrix}

and similarly \bar{\Psi}(k) is a stacked vector consisting of \Psi(k), \Psi(k+1), \ldots, \Psi(k+N).
In this course we then look at inequality constraints in a generalized form:

\bar{\psi}(k) = \tilde{C}_4 x(k) + \tilde{D}_{41} e(k) + \tilde{D}_{42} \tilde{w}(k) + \tilde{D}_{43} \tilde{v}(k) \le \bar{\Psi}(k)   (4.21)

where \bar{\Psi}(k) does not depend on future input signals.
In the above formulas \bar{\phi}(k) and \bar{\psi}(k) are vectors and the equalities and inequalities are meant element-wise. If there are multiple constraints, for example \bar{\phi}_1(k) = 0 and \bar{\phi}_2(k) = 0, \bar{\psi}_1(k) \le \bar{\Psi}_1(k), \ldots, \bar{\psi}_4(k) \le \bar{\Psi}_4(k), we can combine them by stacking them into one equality constraint and one inequality constraint:

\bar{\phi}(k) = \begin{bmatrix} \bar{\phi}_1(k) \\ \bar{\phi}_2(k) \end{bmatrix} = 0 \qquad
\bar{\psi}(k) = \begin{bmatrix} \bar{\psi}_1(k) \\ \vdots \\ \bar{\psi}_4(k) \end{bmatrix} \le \begin{bmatrix} \bar{\Psi}_1(k) \\ \vdots \\ \bar{\Psi}_4(k) \end{bmatrix}
We will show how the transformation is done for some common equality and inequality
constraints.
4.2.1 Equality constraints

Control horizon constraint for IO systems
The control horizon constraint for an IO system (with v(k) = u(k)) is given by

v(k+N_c+j|k) = v(k+N_c-1|k) \quad \text{for } j = 0, \ldots, N-N_c-1

Then the control horizon constraint is equivalent with

\begin{bmatrix} v(k+N_c|k) - v(k+N_c-1|k) \\ v(k+N_c+1|k) - v(k+N_c-1|k) \\ \vdots \\ v(k+N-1|k) - v(k+N_c-1|k) \end{bmatrix} =
\begin{bmatrix} 0 & \ldots & 0 & -I & I & 0 & \ldots & 0 \\ 0 & \ldots & 0 & -I & 0 & I & & \vdots \\ \vdots & & \vdots & \vdots & & & \ddots & 0 \\ 0 & \ldots & 0 & -I & 0 & \ldots & 0 & I \end{bmatrix}
\begin{bmatrix} v(k|k) \\ \vdots \\ v(k+N_c-2|k) \\ v(k+N_c-1|k) \\ v(k+N_c|k) \\ v(k+N_c+1|k) \\ \vdots \\ v(k+N-1|k) \end{bmatrix} = \tilde{D}_{33}\,\tilde{v}(k) = 0

By defining \tilde{C}_3 = 0, \tilde{D}_{31} = 0 and \tilde{D}_{32} = 0, the control horizon constraint can be translated into the standard form of equation (4.20).
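The selection matrix D̃_33 for this constraint is easy to generate. A numpy sketch (function name is mine; scalar-input blocks shown via m = 1):

```python
import numpy as np

def control_horizon_D33(N, Nc, m=1):
    """Rows enforcing v(k+Nc+j|k) = v(k+Nc-1|k) for j = 0..N-Nc-1
    on a stacked input vector of N blocks of size m (IO model)."""
    rows = []
    for j in range(N - Nc):
        row = np.zeros((m, N * m))
        row[:, (Nc - 1) * m:Nc * m] = -np.eye(m)          # -I at block Nc-1
        row[:, (Nc + j) * m:(Nc + j + 1) * m] = np.eye(m)  # +I at block Nc+j
        rows.append(row)
    return np.vstack(rows)

D33 = control_horizon_D33(N=4, Nc=2)
print(D33)   # rows [0 -1 1 0] and [0 -1 0 1]
```

Any input sequence that is constant from sample N_c-1 onwards lies in the null space of this matrix, as required.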
Control horizon constraint for IIO systems
The control horizon constraint for an IIO system (with v(k) = \Delta u(k)) is given by

v(k+N_c+j|k) = 0 \quad \text{for } j = 0, \ldots, N-N_c-1

Then the control horizon constraint is equivalent with

\begin{bmatrix} v(k+N_c|k) \\ v(k+N_c+1|k) \\ \vdots \\ v(k+N-1|k) \end{bmatrix} =
\begin{bmatrix} 0 & \ldots & 0 & I & 0 & \ldots & 0 \\ 0 & \ldots & 0 & 0 & I & & \vdots \\ \vdots & & \vdots & \vdots & & \ddots & 0 \\ 0 & \ldots & 0 & 0 & \ldots & 0 & I \end{bmatrix}
\begin{bmatrix} v(k|k) \\ \vdots \\ v(k+N_c-1|k) \\ v(k+N_c|k) \\ v(k+N_c+1|k) \\ \vdots \\ v(k+N-1|k) \end{bmatrix} = \tilde{D}_{33}\,\tilde{v}(k) = 0

By defining \tilde{C}_3 = 0, \tilde{D}_{31} = 0, \tilde{D}_{32} = 0, the control horizon constraint can be translated into the standard form of equation (4.20).
State end-point constraint
The state end-point constraint is defined as

\hat{x}(k+N|k) = x_{ss}

where x_{ss} is the steady state. The state end-point constraint guarantees stability of the closed loop (see Chapter 6). In Section 5.2.1 we will discuss steady-state behavior, and we derive that x_{ss} can be given by

x_{ss} = D_{ssx}\,\tilde{w}(k)

where the matrix D_{ssx} is related to the steady-state properties of the system. The state end-point constraint now becomes

\hat{x}(k+N|k) - D_{ssx}\tilde{w}(k) = A^N x(k) + A^{N-1} B_1 e(k) + \begin{bmatrix} A^{N-1} B_2 & A^{N-2} B_2 & \ldots & B_2 \end{bmatrix} \tilde{w}(k) + \begin{bmatrix} A^{N-1} B_3 & A^{N-2} B_3 & \ldots & B_3 \end{bmatrix} \tilde{v}(k) - D_{ssx}\tilde{w}(k) = 0

By defining

\tilde{C}_3 = A^N \qquad \tilde{D}_{31} = A^{N-1} B_1
\tilde{D}_{32} = \begin{bmatrix} A^{N-1} B_2 & A^{N-2} B_2 & \ldots & B_2 \end{bmatrix} - D_{ssx} \qquad \tilde{D}_{33} = \begin{bmatrix} A^{N-1} B_3 & A^{N-2} B_3 & \ldots & B_3 \end{bmatrix}

the state end-point constraint can be translated into the standard form of equation (4.20).
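The end-point matrices can be checked against a direct simulation of the state recursion. A numpy sketch (function name is mine; D_ssx is omitted here, i.e. taken as zero, just to exercise the prediction part of the formula):

```python
import numpy as np

def endpoint_matrices(A, B1, B2, B3, N):
    """C3 = A^N, D31 = A^{N-1} B1, and the block rows
    [A^{N-1}B ... B] for the stacked w and v sequences."""
    Ap = lambda j: np.linalg.matrix_power(A, j)
    C3 = Ap(N)
    D31 = Ap(N - 1) @ B1
    D32 = np.hstack([Ap(N - 1 - i) @ B2 for i in range(N)])
    D33 = np.hstack([Ap(N - 1 - i) @ B3 for i in range(N)])
    return C3, D31, D32, D33

# scalar example: x(k+1) = 0.9 x(k) + e(k) + 0.5 w(k) + 2 v(k)
C3, D31, D32, D33 = endpoint_matrices(np.array([[0.9]]), np.array([[1.0]]),
                                      np.array([[0.5]]), np.array([[2.0]]), 3)
```

The prediction \hat{x}(k+N|k) = C3 x(k) + D31 e(k) + D32 w̃ + D33 ṽ then matches stepping the recursion forward with future noise set to zero.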
4.2.2 Inequality constraints
For simplicity we only consider constraints of the form

p(k) \le p_{max}

i.e. we only consider upper bounds. Lower bounds such as a constraint p(k) \ge p_{min} are easily translated into upper bounds by considering

-p(k) \le -p_{min}

Consider a constraint

\hat{\psi}(k+j) \le \Psi(k+j), \quad \text{for } j = 0, \ldots, N   (4.22)

where \psi(k) is given by

\psi(k) = C_4 x(k) + D_{41} e(k) + D_{42} w(k) + D_{43} v(k)   (4.23)
Linear inequality constraints on output y(k) and state x(k) can easily be translated into linear constraints on the input vector \tilde{v}(k). By using the results of Chapter 3, values of \psi(k+j) can be predicted and we obtain the inequality constraint:

\bar{\psi}(k) = \tilde{C}_4 x(k) + \tilde{D}_{41} e(k) + \tilde{D}_{42} \tilde{w}(k) + \tilde{D}_{43} \tilde{v}(k) \le \bar{\Psi}(k)   (4.24)

where

\bar{\psi}(k) = \begin{bmatrix} \hat{\psi}(k) \\ \hat{\psi}(k+1) \\ \vdots \\ \hat{\psi}(k+N-1) \end{bmatrix} \qquad
\bar{\Psi}(k) = \begin{bmatrix} \Psi(k) \\ \Psi(k+1) \\ \vdots \\ \Psi(k+N-1) \end{bmatrix}

\tilde{C}_4 = \begin{bmatrix} C_4 \\ C_4 A \\ C_4 A^2 \\ \vdots \\ C_4 A^{N-1} \end{bmatrix} \qquad
\tilde{D}_{41} = \begin{bmatrix} D_{41} \\ C_4 B_1 \\ C_4 A B_1 \\ \vdots \\ C_4 A^{N-2} B_1 \end{bmatrix}

\tilde{D}_{42} = \begin{bmatrix} D_{42} & 0 & \ldots & 0 & 0 \\ C_4 B_2 & D_{42} & & & 0 \\ C_4 A B_2 & C_4 B_2 & \ddots & & \vdots \\ \vdots & & \ddots & D_{42} & 0 \\ C_4 A^{N-2} B_2 & \ldots & & C_4 B_2 & D_{42} \end{bmatrix}

\tilde{D}_{43} = \begin{bmatrix} D_{43} & 0 & \ldots & 0 & 0 \\ C_4 B_3 & D_{43} & & & 0 \\ C_4 A B_3 & C_4 B_3 & \ddots & & \vdots \\ \vdots & & \ddots & D_{43} & 0 \\ C_4 A^{N-2} B_3 & \ldots & & C_4 B_3 & D_{43} \end{bmatrix}
In this way we can formulate:
Inequality constraints on the output (y(k+j) \le y_{max}):

\psi(k) = y(k) = C_4 x(k) + D_{41} e(k) + D_{42} w(k) + D_{43} v(k)

for C_4 = C_1, D_{41} = D_{11}, D_{42} = D_{12}, D_{43} = D_{13} and \Psi(k) = y_{max}.

Inequality constraints on the input (v(k+j) \le v_{max}):

\psi(k) = v(k) = C_4 x(k) + D_{41} e(k) + D_{42} w(k) + D_{43} v(k)

for C_4 = 0, D_{41} = 0, D_{42} = 0, D_{43} = I and \Psi(k) = v_{max}.

Inequality constraints on the state (x(k+j) \le x_{max}):

\psi(k) = x(k) = C_4 x(k) + D_{41} e(k) + D_{42} w(k) + D_{43} v(k)

for C_4 = I, D_{41} = 0, D_{42} = 0, D_{43} = 0 and \Psi(k) = x_{max}.
Problems occur when \psi(k) in equation (4.22) depends on past values of the input. In that case the prediction of \psi(k+j) may depend on future values of the input when we use the derived prediction formulas. Two clear cases where this happens will be discussed next:

Constraint on the increment of the input signal for an IO model:

\Delta u(k+j|k) \le \Delta u_{max} \quad \text{for } j = 0, \ldots, N-1

Consider the case of an IO model, so v(k) = u(k). Then by defining

\bar{\Psi}(k) = \begin{bmatrix} u(k-1) + \Delta u_{max} \\ \Delta u_{max} \\ \vdots \\ \Delta u_{max} \end{bmatrix} \qquad
\tilde{D}_{43} = \begin{bmatrix} I & 0 & \ldots & & 0 \\ -I & I & \ddots & & \vdots \\ 0 & -I & \ddots & \ddots & \\ \vdots & \ddots & \ddots & \ddots & 0 \\ 0 & \ldots & 0 & -I & I \end{bmatrix}

\tilde{C}_4 = 0, \tilde{D}_{41} = 0, \tilde{D}_{42} = 0, the input increment constraint can be translated into the standard form of equation (4.21).
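The differencing matrix and the bound vector of this constraint can be built in a few lines. A numpy sketch (function name is mine; scalar input for simplicity):

```python
import numpy as np

def delta_u_constraint(N, u_prev, du_max):
    """D43 and Psi for Delta u(k+j|k) <= du_max with an IO model (v = u).
    Row 0 compares u(k|k) against the known previous input u(k-1)."""
    D43 = np.eye(N) - np.eye(N, k=-1)   # first-difference matrix
    Psi = np.full(N, du_max)
    Psi[0] += u_prev                     # u(k) - u(k-1) <= du_max
    return D43, Psi

D43, Psi = delta_u_constraint(4, u_prev=0.2, du_max=1.0)
```

D43 ṽ(k) ≤ Ψ̄(k) then bounds every increment of the predicted input sequence by du_max.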
Constraint on the input signal for an IIO model:
Assume that we have the following constraint:

u(k+j|k) \le u_{max} \quad \text{for } j = 0, \ldots, N-1

Consider the case of an IIO model, so v(k) = \Delta u(k). We note:

u(k+j) = u(k-1) + \sum_{i=0}^{j} \Delta u(k+i)

Then by defining

\bar{\Psi}(k) = \begin{bmatrix} -u(k-1) + u_{max} \\ -u(k-1) + u_{max} \\ \vdots \\ -u(k-1) + u_{max} \end{bmatrix} \qquad
\tilde{D}_{43} = \begin{bmatrix} I & 0 & \ldots & 0 \\ I & I & \ddots & \vdots \\ \vdots & & \ddots & 0 \\ I & I & \ldots & I \end{bmatrix}

the input constraint can be translated into the standard form of equation (4.21).
4.3 Structuring the input with orthogonal basis functions
On page 27 we introduced a parametrization of the input signal using a set of basis functions:

v(k+j|k) = \sum_{i=0}^{M} S_i(j)\,\alpha_i(k)

where the S_i(j) are the basis functions. A special set of basis functions are orthogonal basis functions [69], which have the property:

\sum_{k=0}^{\infty} S_i(k)\,S_j(k) = \begin{cases} 0 & \text{for } i \ne j \\ 1 & \text{for } i = j \end{cases}
Lemma 5 [68] Let A_b, B_b, C_b and D_b be matrices such that

\begin{bmatrix} A_b & B_b \\ C_b & D_b \end{bmatrix}^T \begin{bmatrix} A_b & B_b \\ C_b & D_b \end{bmatrix} = I

(this means that A_b, B_b, C_b and D_b are the balanced system matrices of a unitary function). Define

A_v = \begin{bmatrix} A_b & B_b C_b & B_b D_b C_b & \ldots & B_b D_b^{n_v-2} C_b \\ 0 & A_b & B_b C_b & \ldots & B_b D_b^{n_v-3} C_b \\ \vdots & & A_b & \ddots & \vdots \\ & & & \ddots & B_b C_b \\ 0 & \ldots & & 0 & A_b \end{bmatrix}

C_v = \begin{bmatrix} C_b & D_b C_b & D_b^2 C_b & \ldots & D_b^{n_v-1} C_b \end{bmatrix}

and let

S_i(k) = C_v A_v^k e_{i+1}

where

e_i = \begin{bmatrix} 0 & \ldots & 0 & 1 & 0 & \ldots & 0 \end{bmatrix}^T

with the 1 at the i-th position. Then the functions S_i(k), i = 0, \ldots, M, form an orthogonal basis.
When we define the parameter vector \alpha = \begin{bmatrix} \alpha_1 & \alpha_2 & \ldots & \alpha_M \end{bmatrix}^T, the input vector is given by the expression

v(k+j|k) = C_v A_v^j \alpha   (4.25)

The number of degrees of freedom in the input signal is now equal to \dim(\alpha) = M, the dimension of the parameter vector. One of the main reasons to apply orthogonal basis functions is to obtain a reduction of the optimization problem by reducing the degrees of freedom, so usually M will be much smaller than the horizon N.
Consider an input sequence of N future values:

\tilde{v}(k) = \begin{bmatrix} v(k|k) \\ v(k+1|k) \\ \vdots \\ v(k+N|k) \end{bmatrix} = \begin{bmatrix} C_v \\ C_v A_v \\ \vdots \\ C_v A_v^N \end{bmatrix} \alpha = S_v\,\alpha

Suppose N is such that S_v has full column rank, and a left-complement is given by S_v^{\perp} (so S_v^{\perp} S_v = 0, see Appendix C for a definition). Now we find that

S_v^{\perp}\,\tilde{v}(k) = 0   (4.26)

We observe that the orthogonal basis function parametrization can be described by equality constraint (4.26).
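For scalar first-order blocks, a balanced unitary realization is [[a, b], [b, -a]] with b = sqrt(1 - a^2) (this choice yields the discrete Laguerre functions; it is one possible instantiation of Lemma 5, not the only one). The orthonormality can then be checked numerically:

```python
import numpy as np

a = 0.5
b = np.sqrt(1 - a**2)
Ab, Bb, Cb, Db = a, b, b, -a      # [[Ab, Bb], [Cb, Db]] is orthogonal
nv = 4

# A_v upper triangular per Lemma 5; C_v = [Cb, Db Cb, Db^2 Cb, ...]
Av = np.zeros((nv, nv))
for i in range(nv):
    Av[i, i] = Ab
    for j in range(i + 1, nv):
        Av[i, j] = Bb * Db**(j - i - 1) * Cb
Cv = np.array([Db**i * Cb for i in range(nv)])

def S(i, k):
    """Basis function S_i(k) = C_v A_v^k e_{i+1}."""
    return Cv @ np.linalg.matrix_power(Av, k)[:, i]

# Gram matrix: sum_k S_i(k) S_j(k) should approach the identity
G = np.zeros((nv, nv))
for k in range(400):
    s = np.array([S(i, k) for i in range(nv)])
    G += np.outer(s, s)
print(np.round(G, 6))   # approximately the identity matrix
```

The truncated sum converges quickly here because |a| < 1, so the basis functions decay geometrically.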
4.4 The standard predictive control problem
From the previous sections we learned that the GPC and LQPC performance indices can be formulated in a standard form (equation (4.1)). In fact most common quadratic performance indices can be given in this standard form. The same holds for many kinds of linear equality and inequality constraints, which can be formulated as (4.20) or (4.21). Summarizing, we come to the formulation of the standard predictive control problem:
Definition 6 Standard Predictive Control Problem (SPCP) Consider a system given by the state-space realization

x(k+1) = A x(k) + B_1 e(k) + B_2 w(k) + B_3 v(k)   (4.27)
y(k) = C_1 x(k) + D_{11} e(k) + D_{12} w(k)   (4.28)
z(k) = C_2 x(k) + D_{21} e(k) + D_{22} w(k) + D_{23} v(k)   (4.29)
The goal is to find a controller v(k) = K(\tilde{w}, y, k) such that the performance index

J(v, k) = \sum_{j=0}^{N-1} \hat{z}^T(k+j|k)\,\Gamma(j)\,\hat{z}(k+j|k) = \tilde{z}^T(k)\,\bar{\Gamma}\,\tilde{z}(k)

is minimized subject to the constraints

\bar{\phi}(k) = \tilde{C}_3 x(k) + \tilde{D}_{31} e(k) + \tilde{D}_{32} \tilde{w}(k) + \tilde{D}_{33} \tilde{v}(k) = 0   (4.30)
\bar{\psi}(k) = \tilde{C}_4 x(k) + \tilde{D}_{41} e(k) + \tilde{D}_{42} \tilde{w}(k) + \tilde{D}_{43} \tilde{v}(k) \le \bar{\Psi}(k)   (4.31)

where

\tilde{z}(k) = \begin{bmatrix} \hat{z}(k|k) \\ \hat{z}(k+1|k) \\ \vdots \\ \hat{z}(k+N-1|k) \end{bmatrix} \qquad
\tilde{v}(k) = \begin{bmatrix} v(k|k) \\ v(k+1|k) \\ \vdots \\ v(k+N-1|k) \end{bmatrix} \qquad
\tilde{w}(k) = \begin{bmatrix} w(k|k) \\ w(k+1|k) \\ \vdots \\ w(k+N-1|k) \end{bmatrix}
How the SPCP problem is solved will be discussed in the next chapter.
4.5 Examples

Example 7 : GPC as a standard predictive control problem
Consider the GPC problem on the IIO model of Example 53 in Appendix D. The purpose is to minimize the GPC performance index with

P(q) = 1 - 1.4\,q^{-1} + 0.48\,q^{-2}

In state-space form this becomes:

A_p = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \qquad B_p = \begin{bmatrix} -1.4 \\ 0.48 \end{bmatrix} \qquad C_p = \begin{bmatrix} 1 & 0 \end{bmatrix} \qquad D_p = 1

Further we choose N = N_c = 4 and \lambda = 0.001.
Following the formulas in Section 4.1, the GPC performance index is transformed into the standard performance index by choosing

A = \begin{bmatrix} A_p & B_p C_i \\ 0 & A_i \end{bmatrix} = \begin{bmatrix} 0 & 1 & -1.4 & -1.4 & 0 \\ 0 & 0 & 0.48 & 0.48 & 0 \\ 0 & 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 0.7 & 1 \\ 0 & 0 & 0 & -0.1 & 0 \end{bmatrix} \qquad
B_1 = \begin{bmatrix} B_p D_H \\ K_i \end{bmatrix} = \begin{bmatrix} -1.4 \\ 0.48 \\ 1 \\ 1 \\ -0.2 \end{bmatrix}

B_2 = \begin{bmatrix} 0 & 0 \\ L_i & 0 \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \\ 0 & 0 \\ 0.1 & 0 \\ -0.05 & 0 \end{bmatrix} \qquad
B_3 = \begin{bmatrix} 0 \\ B_i \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \\ 0 \end{bmatrix}

C_1 = \begin{bmatrix} 0 & C_i \end{bmatrix} = \begin{bmatrix} 0 & 0 & 1 & 1 & 0 \end{bmatrix}

C_2 = \begin{bmatrix} C_p A_p & C_p B_p C_i + D_p C_i A_i \\ 0 & 0 \end{bmatrix} = \begin{bmatrix} 0 & 1 & -0.4 & 0.3 & 1 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}

D_{21} = \begin{bmatrix} C_p B_p + D_p C_i K_i \\ 0 \end{bmatrix} = \begin{bmatrix} 0.6 \\ 0 \end{bmatrix} \qquad
D_{22} = \begin{bmatrix} D_p C_i L_i & -I \\ 0 & 0 \end{bmatrix} = \begin{bmatrix} 0.1 & -1 \\ 0 & 0 \end{bmatrix} \qquad
D_{23} = \begin{bmatrix} D_p C_i B_i \\ \lambda I \end{bmatrix} = \begin{bmatrix} 1 \\ 0.001 \end{bmatrix}

The prediction matrices are constructed according to Chapter 3. We obtain:

\tilde{C}_2 = \begin{bmatrix} 0 & 1 & -0.4 & 0.3 & 1 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0.08 & 0.19 & 0.3 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0.08 & 0.183 & 0.19 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0.08 & 0.1891 & 0.183 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix} \qquad
\tilde{D}_{21} = \begin{bmatrix} 0.6 \\ 0 \\ 0.18 \\ 0 \\ 0.21 \\ 0 \\ 0.225 \\ 0 \end{bmatrix}

\tilde{D}_{22} = \begin{bmatrix} 0.1 & -1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ -0.02 & 0 & 0.1 & -1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0.004 & 0 & -0.02 & 0 & 0.1 & -1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0.0088 & 0 & 0.004 & 0 & -0.02 & 0 & 0.1 & -1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}

\tilde{D}_{23} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0.001 & 0 & 0 & 0 \\ 0.3 & 1 & 0 & 0 \\ 0 & 0.001 & 0 & 0 \\ 0.19 & 0.3 & 1 & 0 \\ 0 & 0 & 0.001 & 0 \\ 0.183 & 0.19 & 0.3 & 1 \\ 0 & 0 & 0 & 0.001 \end{bmatrix}
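The numerical prediction matrices of this example can be verified directly from A, B_1, B_3 and the first row of C_2; the numpy check below reproduces the entries of D̃_21 and the first column of D̃_23:

```python
import numpy as np

A = np.array([[0, 1, -1.4, -1.4, 0],
              [0, 0, 0.48, 0.48, 0],
              [0, 0, 1,    1,    0],
              [0, 0, 0,    0.7,  1],
              [0, 0, 0,   -0.1,  0]])
B1 = np.array([-1.4, 0.48, 1, 1, -0.2])
B3 = np.array([0, 0, 0, 1, 0])
C2 = np.array([0, 1, -0.4, 0.3, 1])   # first row of C_2 (second row is zero)

# nonzero rows of Dt21 = [D21; C2 B1; C2 A B1; C2 A^2 B1]
Dt21 = [0.6] + [C2 @ np.linalg.matrix_power(A, j) @ B1 for j in range(3)]
print(np.round(Dt21, 4))   # [0.6, 0.18, 0.21, 0.225]

# first column of Dt23 = [D23; C2 B3; C2 A B3; C2 A^2 B3]
Dt23_col = [1.0] + [C2 @ np.linalg.matrix_power(A, j) @ B3 for j in range(3)]
print(np.round(Dt23_col, 4))   # [1.0, 0.3, 0.19, 0.183]
```

This kind of spot check is a useful habit whenever the prediction matrices are assembled by hand.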
Example 8 : Constrained GPC as a standard predictive control problem
Now consider Example 7 for the case where we have a constraint on the input signal:

u(k) \le 1

and a control horizon constraint

\Delta u(k+N_c+j|k) = 0 \quad \text{for } j = 0, \ldots, N-N_c-1

We will compute the matrices \tilde{C}_3, \tilde{C}_4, \tilde{D}_{31}, \tilde{D}_{32}, \tilde{D}_{33}, \tilde{D}_{41}, \tilde{D}_{42}, \tilde{D}_{43} and the vector \bar{\Psi}(k) for N = 4 and N_c = 2.

Following the formulas of Section 4.2, the vector \bar{\Psi}(k) and the matrix \tilde{D}_{43} become:

\bar{\Psi}(k) = \begin{bmatrix} -u(k-1) + u_{max} \\ -u(k-1) + u_{max} \\ -u(k-1) + u_{max} \\ -u(k-1) + u_{max} \end{bmatrix} = \begin{bmatrix} 1 - u(k-1) \\ 1 - u(k-1) \\ 1 - u(k-1) \\ 1 - u(k-1) \end{bmatrix} \qquad
\tilde{D}_{43} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 \\ 1 & 1 & 1 & 0 \\ 1 & 1 & 1 & 1 \end{bmatrix}

Further \tilde{C}_4 = 0, \tilde{D}_{41} = 0 and \tilde{D}_{42} = 0.

Next we compute the matrices \tilde{C}_3, \tilde{D}_{31}, \tilde{D}_{32} and \tilde{D}_{33} corresponding to the control horizon constraint (N = 4, N_c = 2):

\tilde{C}_3 = \begin{bmatrix} 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix} \qquad
\tilde{D}_{31} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \qquad
\tilde{D}_{32} = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} \qquad
\tilde{D}_{33} = \begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
Chapter 5
Solving the standard predictive control problem

In this chapter we solve the standard predictive control problem as defined in Section 4.4. We consider the system given by the state-space realization

x(k+1) = A x(k) + B_1 e(k) + B_2 w(k) + B_3 v(k)   (5.1)
y(k) = C_1 x(k) + D_{11} e(k) + D_{12} w(k)   (5.2)
z(k) = C_2 x(k) + D_{21} e(k) + D_{22} w(k) + D_{23} v(k)   (5.3)

Here y contains the measurements, so that at time k we have available all past inputs v(0), \ldots, v(k-1) and all past and current measurements y(0), \ldots, y(k). z contains the variables that we want to control and that hence appear in the performance index.
The performance index and the constraints are given by

J(v, k) = \sum_{j=0}^{N-1} \hat{z}^T(k+j|k)\,\Gamma(j)\,\hat{z}(k+j|k)   (5.4)
        = \tilde{z}^T(k)\,\bar{\Gamma}\,\tilde{z}(k)   (5.5)

\bar{\phi}(k) = \tilde{C}_3 x(k|k) + \tilde{D}_{31} e(k|k) + \tilde{D}_{32} \tilde{w}(k) + \tilde{D}_{33} \tilde{v}(k) = 0   (5.6)
\bar{\psi}(k) = \tilde{C}_4 x(k|k) + \tilde{D}_{41} e(k|k) + \tilde{D}_{42} \tilde{w}(k) + \tilde{D}_{43} \tilde{v}(k) \le \bar{\Psi}(k)   (5.7)
Note that the performance index at time k contains estimates of z(k+j) based on all inputs v(0), \ldots, v(k+j) as well as past and current measurements y(0), \ldots, y(k). Clearly v(k), \ldots, v(k+j) are not yet available and need to be chosen to minimize the performance criterion subject to the constraints. As described before, \bar{\phi}(k) and \bar{\psi}(k) are stacked vectors containing predicted states or future inputs on which we want to impose either equality or inequality constraints. Predictions of future states and inputs are clearly a function of all past measurements, but it can be shown that they can be expressed in terms of the estimates x(k|k) and e(k|k) of the state x(k) and the noise e(k) respectively. Implementation issues concerning x(k|k) and e(k|k) will be discussed in Section 5.3.
The standard predictive control problem is to minimize (5.5) given (5.1), (5.2), (5.3) subject to (5.6) and (5.7).
In this chapter we consider 3 subproblems:
1. The unconstrained SPCP: Minimize (5.5).
2. The equality constrained SPCP: Minimize (5.5) subject to (5.6).
3. The (full) SPCP: Minimize (5.5) subject to (5.6) and (5.7).

Assumptions
To be able to solve the standard predictive control problem, the following assumptions are made:
1. The matrix \tilde{D}_{33} has full row rank.
2. The matrix \tilde{D}_{23}^T \bar{\Gamma} \tilde{D}_{23} has full rank, where \bar{\Gamma} = diag(\Gamma(0), \Gamma(1), \ldots, \Gamma(N-1)) \in \mathbb{R}^{n_z N \times n_z N}.
3. The pair (A, B_3) is stabilizable.

Assumption 1 basically guarantees that we have only equality constraints on the input. If the system is subject to noise it is only in rare cases possible to guarantee that equality constraints on the state are satisfied. Assumption 2 makes sure that the optimization problem is strictly convex, which guarantees a unique solution to the optimization problem. Assumption 3 is necessary to be able to control all unstable modes of the system.
5.1 The finite horizon SPCP

5.1.1 Unconstrained standard predictive control problem
In this section we consider the problem of minimizing the performance index

J(v, k) = \sum_{j=0}^{N-1} \hat{z}^T(k+j|k)\,\Gamma(j)\,\hat{z}(k+j|k)   (5.8)
        = \tilde{z}^T(k)\,\bar{\Gamma}\,\tilde{z}(k)   (5.9)

for the system as given in (5.1), (5.2) and (5.3), and without equality or inequality constraints. In this section we consider the case of finite N. The infinite horizon case will be treated in Section 5.2.

The performance index, given in (5.9), is minimized for each time instant k, and the optimal future control sequence \tilde{v}(k) is computed. We use the prediction law

\tilde{z}(k) = \tilde{C}_2 x(k|k) + \tilde{D}_{21} e(k|k) + \tilde{D}_{22} \tilde{w}(k) + \tilde{D}_{23} \tilde{v}(k)   (5.10)
            = \tilde{z}_0(k) + \tilde{D}_{23} \tilde{v}(k)   (5.11)

as derived in Chapter 3, where

\tilde{z}_0(k) = \tilde{C}_2 x(k|k) + \tilde{D}_{21} e(k|k) + \tilde{D}_{22} \tilde{w}(k)

is the free-response signal given in equation (3.7). Without constraints, the problem becomes a standard least squares problem, and the solution of the predictive control problem can be computed analytically. We assume the reference trajectory w(k+j) to be known over the whole prediction horizon j = 0, \ldots, N. Consider the performance index (5.9).
over the whole prediction horizon j = 0, . . . , N. Consider the performance index (5.9).
Now substitute (5.11) and dene matrix H, vector f(k) and scalar c(k) as
H = 2

D
T
23


D
23
, f(k) = 2

D
T
23

z
0
(k) and c(k) = z
T
0
(k)

z
0
(k)
We obtain
J(v, k) = v
T
(k)

D
T
23


D
23
v(k) + 2 v
T
(k)

D
T
23

z
0
(k) + z
T
0
(k)

z
0
(k) =
=
1
2
v
T
(k)H v(k) + v
T
(k)f(k) + c(k)
The minimization of J(v, k) has become a linear algebra problem. It can be solved, setting
the gradient of J to zero:
J
v
= H v(k) + f(k) = 0
By assumption 2, H is invertible and hence the solution v(k) is given by
v(k) = H
1
f(k)
=
_

D
T
23


D
23
_
1

D
T
23

z
0
(k)
=
_

D
T
23


D
23
_
1

D
T
23

C
2
x(k|k) +

D
21
e(k|k) +

D
22
w(k)
_
Because we use the receding horizon principle, only the first computed change in the control signal is implemented, and at time k+1 the computation is repeated with the horizon moved one time interval. This means that at time k the control signal is given by

v(k|k) = \begin{bmatrix} I & 0 & \ldots & 0 \end{bmatrix} \tilde{v}(k) = E_v\,\tilde{v}(k)

We obtain

v(k|k) = E_v\,\tilde{v}(k)
 = -E_v \left( \tilde{D}_{23}^T \bar{\Gamma} \tilde{D}_{23} \right)^{-1} \tilde{D}_{23}^T \bar{\Gamma} \left( \tilde{C}_2 x(k|k) + \tilde{D}_{21} e(k|k) + \tilde{D}_{22} \tilde{w}(k) \right)
 = F\,x(k|k) + D_e\,e(k|k) + D_w\,\tilde{w}(k)
where

F = -E_v (\tilde{D}_{23}^T \bar{\Gamma} \tilde{D}_{23})^{-1} \tilde{D}_{23}^T \bar{\Gamma}\, \tilde{C}_2
D_e = -E_v (\tilde{D}_{23}^T \bar{\Gamma} \tilde{D}_{23})^{-1} \tilde{D}_{23}^T \bar{\Gamma}\, \tilde{D}_{21}
D_w = -E_v (\tilde{D}_{23}^T \bar{\Gamma} \tilde{D}_{23})^{-1} \tilde{D}_{23}^T \bar{\Gamma}\, \tilde{D}_{22}
E_v = \begin{bmatrix} I & 0 & \ldots & 0 \end{bmatrix}

with E_v such that v(k|k) = E_v\,\tilde{v}(k).
The results are summarized in the following theorem (Kinnaert [34], Lee et al. [38], Van den Boom & de Vries [67]):

Theorem 9
Consider system (5.1) - (5.3). The unconstrained (finite) horizon standard predictive control problem of minimizing the performance index

J(v, k) = \sum_{j=0}^{N-1} \hat{z}^T(k+j|k)\,\Gamma(j)\,\hat{z}(k+j|k)   (5.12)

is solved by the control law

v(k) = F\,x(k|k) + D_e\,e(k|k) + D_w\,\tilde{w}(k)   (5.13)

where

F = -E_v (\tilde{D}_{23}^T \bar{\Gamma} \tilde{D}_{23})^{-1} \tilde{D}_{23}^T \bar{\Gamma}\, \tilde{C}_2   (5.14)
D_e = -E_v (\tilde{D}_{23}^T \bar{\Gamma} \tilde{D}_{23})^{-1} \tilde{D}_{23}^T \bar{\Gamma}\, \tilde{D}_{21}   (5.15)
D_w = -E_v (\tilde{D}_{23}^T \bar{\Gamma} \tilde{D}_{23})^{-1} \tilde{D}_{23}^T \bar{\Gamma}\, \tilde{D}_{22}   (5.16)
E_v = \begin{bmatrix} I & 0 & \ldots & 0 \end{bmatrix}   (5.17)
Note that for the computation of the optimal input signal v(k), estimates x(k|k) and e(k|k) of the state x(k) and the noise signal e(k) respectively have to be available. This can easily be done with an observer and will be discussed in Section 5.3.
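Theorem 9 is a one-line computation once the prediction matrices are available. A numpy sketch on an arbitrary small instance (matrix names follow the theorem; the random data is purely illustrative):

```python
import numpy as np

# small random instance: stacked z of length 5, 3 free input moves
rng = np.random.default_rng(0)
Dt23 = rng.standard_normal((5, 3))
Ct2  = rng.standard_normal((5, 2))
Dt21 = rng.standard_normal((5, 1))
Dt22 = rng.standard_normal((5, 5))
Gamma = np.eye(5)

G  = np.linalg.inv(Dt23.T @ Gamma @ Dt23) @ Dt23.T @ Gamma
Ev = np.array([[1.0, 0.0, 0.0]])        # picks the first input move

F  = -Ev @ G @ Ct2      # (5.14)
De = -Ev @ G @ Dt21     # (5.15)
Dw = -Ev @ G @ Dt22     # (5.16)

# sanity check: vt = -G z0 satisfies the normal equations of min ||z0 + Dt23 vt||
x = rng.standard_normal(2); e = rng.standard_normal(1); w = rng.standard_normal(5)
z0 = Ct2 @ x + Dt21 @ e + Dt22 @ w
vt = -G @ z0
print(np.allclose(Dt23.T @ Gamma @ (z0 + Dt23 @ vt), 0))  # True
```

The normal-equation residual being zero confirms that the receding-horizon gain applies the least-squares optimum at every sample.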
5.1.2 Equality constrained standard predictive control problem
In this section we consider the problem of minimizing the performance index

J(v, k) = \sum_{j=0}^{N-1} \hat{z}^T(k+j|k)\,\Gamma(j)\,\hat{z}(k+j|k)   (5.18)

for the system as given in (5.1), (5.2) and (5.3), with equality constraints (but without inequality constraints). In this section we consider the case of finite N. In Section 5.2.4 we discuss the case where N = \infty and a finite control horizon acts as the equality constraint.

Consider the equality constrained standard problem

\min_{\tilde{v}}\; \tilde{z}^T(k)\,\bar{\Gamma}\,\tilde{z}(k)

subject to the constraint:

\bar{\phi}(k) = \tilde{C}_3 x(k|k) + \tilde{D}_{31} e(k|k) + \tilde{D}_{32} \tilde{w}(k) + \tilde{D}_{33} \tilde{v}(k) = 0

We will solve this problem by elimination of the equality constraint. We therefore give the following lemma:
Lemma 10
Consider the equality constraint

C \eta = \gamma   (5.19)

for a full-row rank matrix C \in \mathbb{R}^{m \times n}, \gamma \in \mathbb{R}^{m \times 1}, \eta \in \mathbb{R}^{n \times 1}, and m \le n. Choose C^r to be a right-inverse of C and C^{r\perp} to be a right-complement of C. Then all \eta satisfying (5.19) are given by

\eta = C^r \gamma + C^{r\perp} \beta

where \beta \in \mathbb{R}^{(n-m) \times 1} is arbitrary.
2 End Lemma

Proof of lemma 10:
From Appendix C we know that for

C = U \begin{bmatrix} \Sigma & 0 \end{bmatrix} \begin{bmatrix} V_1^T \\ V_2^T \end{bmatrix}

the right-inverse C^r and right-complement C^{r\perp} are defined as

C^r = V_1 \Sigma^{-1} U^T \qquad C^{r\perp} = V_2

Choose vectors \xi and \beta as follows:

\begin{bmatrix} \xi \\ \beta \end{bmatrix} = \begin{bmatrix} V_1^T \\ V_2^T \end{bmatrix} \eta \qquad \text{so} \qquad \eta = \begin{bmatrix} V_1 & V_2 \end{bmatrix} \begin{bmatrix} \xi \\ \beta \end{bmatrix}

then there must hold

C \eta = U \begin{bmatrix} \Sigma & 0 \end{bmatrix} \begin{bmatrix} \xi \\ \beta \end{bmatrix} = U \Sigma \xi = \gamma

and so we have to choose \xi = \Sigma^{-1} U^T \gamma, while \beta can be any vector (with the proper dimension). Therefore, all \eta satisfying (5.19) are given by

\eta = V_1 \Sigma^{-1} U^T \gamma + V_2 \beta = C^r \gamma + C^{r\perp} \beta

2 End Proof
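The right-inverse and right-complement of Lemma 10 follow directly from numpy's SVD (a sketch; `full_matrices=True` is needed so that V_2 is returned):

```python
import numpy as np

def right_inverse_and_complement(C):
    """For full-row-rank C = U [Sigma 0] [V1^T; V2^T] with m <= n,
    return C^r = V1 Sigma^{-1} U^T and C^{r_perp} = V2."""
    m, n = C.shape
    U, s, Vt = np.linalg.svd(C, full_matrices=True)
    V1, V2 = Vt[:m].T, Vt[m:].T
    Cr = V1 @ np.diag(1.0 / s) @ U.T
    return Cr, V2

C = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])
Cr, Crp = right_inverse_and_complement(C)
print(np.round(C @ Cr, 6))    # identity
print(np.round(C @ Crp, 6))   # zero
```

Every solution of C eta = gamma is then Cr @ gamma plus an arbitrary combination of the columns of Crp, exactly as the lemma states.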
Define \tilde{D}_{33}^r and \tilde{D}_{33}^{r\perp} as in the above lemma. Then all \tilde{v}(k) that satisfy the equality constraint \bar{\phi}(k) = 0 are given by:

\tilde{v}(k) = -\tilde{D}_{33}^r \left( \tilde{C}_3 x(k|k) + \tilde{D}_{31} e(k|k) + \tilde{D}_{32} \tilde{w}(k) \right) + \tilde{D}_{33}^{r\perp}\,\tilde{\mu}(k) = \tilde{v}_E(k) + \tilde{D}_{33}^{r\perp}\,\tilde{\mu}(k)

Note that by choosing \tilde{v}(k) = \tilde{v}_E(k) + \tilde{D}_{33}^{r\perp}\,\tilde{\mu}(k) the equality constraint is always satisfied, while all remaining freedom is still in the vector \tilde{\mu}(k), which is of lower dimension. So by this choice the problem is reduced in order while the equality constraint is satisfied for all \tilde{\mu}(k).

Substitution of \tilde{v}(k) = \tilde{v}_E(k) + \tilde{D}_{33}^{r\perp}\,\tilde{\mu}(k) in (5.10) gives:

\tilde{z}(k) = \tilde{C}_2 x(k|k) + \tilde{D}_{21} e(k|k) + \tilde{D}_{22} \tilde{w}(k) + \tilde{D}_{23} \tilde{v}(k)
 = (\tilde{C}_2 - \tilde{D}_{23}\tilde{D}_{33}^r\tilde{C}_3)\, x(k|k) + (\tilde{D}_{21} - \tilde{D}_{23}\tilde{D}_{33}^r\tilde{D}_{31})\, e(k|k) + (\tilde{D}_{22} - \tilde{D}_{23}\tilde{D}_{33}^r\tilde{D}_{32})\, \tilde{w}(k) + \tilde{D}_{23}\tilde{D}_{33}^{r\perp}\,\tilde{\mu}(k)
 = \tilde{z}_E(k) + \tilde{D}_{23}\tilde{D}_{33}^{r\perp}\,\tilde{\mu}(k)

where \tilde{z}_E is given by:

\tilde{z}_E(k) = (\tilde{C}_2 - \tilde{D}_{23}\tilde{D}_{33}^r\tilde{C}_3)\, x(k|k) + (\tilde{D}_{21} - \tilde{D}_{23}\tilde{D}_{33}^r\tilde{D}_{31})\, e(k|k) + (\tilde{D}_{22} - \tilde{D}_{23}\tilde{D}_{33}^r\tilde{D}_{32})\, \tilde{w}(k)
The performance index is given by

J(k) = \tilde{z}^T(k)\,\bar{\Gamma}\,\tilde{z}(k)   (5.20)
 = (\tilde{z}_E(k) + \tilde{D}_{23}\tilde{D}_{33}^{r\perp}\tilde{\mu}(k))^T\,\bar{\Gamma}\,(\tilde{z}_E(k) + \tilde{D}_{23}\tilde{D}_{33}^{r\perp}\tilde{\mu}(k))   (5.21)
 = \tilde{\mu}^T(k) \left( (\tilde{D}_{33}^{r\perp})^T \tilde{D}_{23}^T \bar{\Gamma} \tilde{D}_{23} \tilde{D}_{33}^{r\perp} \right) \tilde{\mu}(k) + 2\,\tilde{z}_E^T(k)\,\bar{\Gamma}\,\tilde{D}_{23}\tilde{D}_{33}^{r\perp}\tilde{\mu}(k) + \tilde{z}_E^T(k)\,\bar{\Gamma}\,\tilde{z}_E(k)   (5.22)
 = \tfrac{1}{2}\,\tilde{\mu}^T(k)\,H\,\tilde{\mu}(k) + f^T(k)\,\tilde{\mu}(k) + c(k)   (5.23)

where the matrices H, f and c are given by

H = 2\,(\tilde{D}_{33}^{r\perp})^T \tilde{D}_{23}^T \bar{\Gamma} \tilde{D}_{23} \tilde{D}_{33}^{r\perp} \qquad f(k) = 2\,(\tilde{D}_{33}^{r\perp})^T \tilde{D}_{23}^T \bar{\Gamma}\, \tilde{z}_E(k) \qquad c(k) = \tilde{z}_E^T(k)\,\bar{\Gamma}\,\tilde{z}_E(k)

Note that by Assumption 2, the matrix \tilde{D}_{23}^T \bar{\Gamma} \tilde{D}_{23} is invertible. By construction, the right-complement \tilde{D}_{33}^{r\perp} is injective and hence H is invertible.
When inequality constraints are absent, we are left with an unconstrained optimization

\min_{\tilde{\mu}}\; \tfrac{1}{2}\,\tilde{\mu}^T H \tilde{\mu} + f^T \tilde{\mu}

which has an analytical solution. In the same way as in Section 5.1, we derive:

\frac{\partial J}{\partial \tilde{\mu}} = H\,\tilde{\mu}(k) + f(k) = 0

and so

\tilde{\mu}(k) = -H^{-1} f(k)
 = -\left( (\tilde{D}_{33}^{r\perp})^T \tilde{D}_{23}^T \bar{\Gamma} \tilde{D}_{23} \tilde{D}_{33}^{r\perp} \right)^{-1} (\tilde{D}_{33}^{r\perp})^T \tilde{D}_{23}^T \bar{\Gamma}\, \tilde{z}_E(k)
 = -\Lambda\, \tilde{z}_E(k)   (5.24)
 = \Lambda (\tilde{D}_{23}\tilde{D}_{33}^r\tilde{C}_3 - \tilde{C}_2)\, x(k|k) + \Lambda (\tilde{D}_{23}\tilde{D}_{33}^r\tilde{D}_{31} - \tilde{D}_{21})\, e(k|k) + \Lambda (\tilde{D}_{23}\tilde{D}_{33}^r\tilde{D}_{32} - \tilde{D}_{22})\, \tilde{w}(k)   (5.25)

where

\Lambda = \left( (\tilde{D}_{33}^{r\perp})^T \tilde{D}_{23}^T \bar{\Gamma} \tilde{D}_{23} \tilde{D}_{33}^{r\perp} \right)^{-1} (\tilde{D}_{33}^{r\perp})^T \tilde{D}_{23}^T \bar{\Gamma}
Substitution gives:

\tilde{v}(k) = -\tilde{D}_{33}^r \left( \tilde{C}_3 x(k|k) + \tilde{D}_{31} e(k|k) + \tilde{D}_{32} \tilde{w}(k) \right) + \tilde{D}_{33}^{r\perp}\,\tilde{\mu}(k)
 = -\tilde{D}_{33}^r \left( \tilde{C}_3 x(k|k) + \tilde{D}_{31} e(k|k) + \tilde{D}_{32} \tilde{w}(k) \right) + \tilde{D}_{33}^{r\perp}\Lambda(\tilde{D}_{23}\tilde{D}_{33}^r\tilde{C}_3 - \tilde{C}_2)\, x(k|k)
   + \tilde{D}_{33}^{r\perp}\Lambda(\tilde{D}_{23}\tilde{D}_{33}^r\tilde{D}_{31} - \tilde{D}_{21})\, e(k|k) + \tilde{D}_{33}^{r\perp}\Lambda(\tilde{D}_{23}\tilde{D}_{33}^r\tilde{D}_{32} - \tilde{D}_{22})\, \tilde{w}(k)
 = (\tilde{D}_{33}^{r\perp}\Lambda\tilde{D}_{23}\tilde{D}_{33}^r\tilde{C}_3 - \tilde{D}_{33}^{r\perp}\Lambda\tilde{C}_2 - \tilde{D}_{33}^r\tilde{C}_3)\, x(k|k)
   + (\tilde{D}_{33}^{r\perp}\Lambda\tilde{D}_{23}\tilde{D}_{33}^r\tilde{D}_{31} - \tilde{D}_{33}^{r\perp}\Lambda\tilde{D}_{21} - \tilde{D}_{33}^r\tilde{D}_{31})\, e(k|k)
   + (\tilde{D}_{33}^{r\perp}\Lambda\tilde{D}_{23}\tilde{D}_{33}^r\tilde{D}_{32} - \tilde{D}_{33}^{r\perp}\Lambda\tilde{D}_{22} - \tilde{D}_{33}^r\tilde{D}_{32})\, \tilde{w}(k)
The control signal can now be written as:

v(k|k) = E_v\,\tilde{v}(k) = F\,x(k|k) + D_e\,e(k|k) + D_w\,\tilde{w}(k)

where

F = E_v \left( \tilde{D}_{33}^{r\perp}\Lambda\tilde{D}_{23}\tilde{D}_{33}^r\tilde{C}_3 - \tilde{D}_{33}^{r\perp}\Lambda\tilde{C}_2 - \tilde{D}_{33}^r\tilde{C}_3 \right)
D_e = E_v \left( \tilde{D}_{33}^{r\perp}\Lambda\tilde{D}_{23}\tilde{D}_{33}^r\tilde{D}_{31} - \tilde{D}_{33}^{r\perp}\Lambda\tilde{D}_{21} - \tilde{D}_{33}^r\tilde{D}_{31} \right)
D_w = E_v \left( \tilde{D}_{33}^{r\perp}\Lambda\tilde{D}_{23}\tilde{D}_{33}^r\tilde{D}_{32} - \tilde{D}_{33}^{r\perp}\Lambda\tilde{D}_{22} - \tilde{D}_{33}^r\tilde{D}_{32} \right)
This means that also for the equality constrained case, the optimal predictive controller is linear and time-invariant. The control law has the same form as for the unconstrained case in the previous section (of course with the alternative choices of F, D_w and D_e).
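The elimination procedure can be exercised on a small random instance: the resulting ṽ must satisfy the constraint exactly and be optimal among all feasible perturbations, which by Lemma 10 are spanned by the columns of D̃_33^{r⊥}. A numpy sketch (dimensions and data are illustrative; the free-response terms are lumped into single vectors):

```python
import numpy as np

rng = np.random.default_rng(1)
Dt23 = rng.standard_normal((6, 4))     # stacked performance map
Dt33 = rng.standard_normal((2, 4))     # two equality constraints (full row rank a.s.)
Gamma = np.eye(6)
z_free = rng.standard_normal(6)        # Ct2 x + Dt21 e + Dt22 w, lumped together
phi0   = rng.standard_normal(2)        # Ct3 x + Dt31 e + Dt32 w, lumped together

# right-inverse and right-complement of Dt33 via the SVD (Lemma 10)
U, s, Vt = np.linalg.svd(Dt33, full_matrices=True)
Dr  = Vt[:2].T @ np.diag(1.0 / s) @ U.T
Drp = Vt[2:].T

vE  = -Dr @ phi0                       # particular solution: Dt33 vE = -phi0
zE  = z_free + Dt23 @ vE
Lam = np.linalg.inv(Drp.T @ Dt23.T @ Gamma @ Dt23 @ Drp) @ Drp.T @ Dt23.T @ Gamma
v   = vE + Drp @ (-Lam @ zE)           # optimal input sequence (Theorem 11)

print(np.allclose(phi0 + Dt33 @ v, 0))  # constraint satisfied: True
```

Perturbing v along the columns of Drp keeps the constraint satisfied but can only increase the cost, which is the optimality statement of the theorem.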
The results are summarized in the following theorem:

Theorem 11
Consider system (5.1) - (5.3). The equality constrained (finite) horizon standard predictive control problem of minimizing the performance index

J(v, k) = \sum_{j=0}^{N-1} \hat{z}^T(k+j|k)\,\Gamma(j)\,\hat{z}(k+j|k)   (5.26)

subject to the equality constraint

\bar{\phi}(k) = \tilde{C}_3 x(k|k) + \tilde{D}_{31} e(k|k) + \tilde{D}_{32} \tilde{w}(k) + \tilde{D}_{33} \tilde{v}(k) = 0

is solved by the control law

v(k) = F\,x(k|k) + D_e\,e(k|k) + D_w\,\tilde{w}(k)   (5.27)

where

F = E_v \left( \tilde{D}_{33}^{r\perp}\Lambda\tilde{D}_{23}\tilde{D}_{33}^r\tilde{C}_3 - \tilde{D}_{33}^{r\perp}\Lambda\tilde{C}_2 - \tilde{D}_{33}^r\tilde{C}_3 \right)
D_e = E_v \left( \tilde{D}_{33}^{r\perp}\Lambda\tilde{D}_{23}\tilde{D}_{33}^r\tilde{D}_{31} - \tilde{D}_{33}^{r\perp}\Lambda\tilde{D}_{21} - \tilde{D}_{33}^r\tilde{D}_{31} \right)
D_w = E_v \left( \tilde{D}_{33}^{r\perp}\Lambda\tilde{D}_{23}\tilde{D}_{33}^r\tilde{D}_{32} - \tilde{D}_{33}^{r\perp}\Lambda\tilde{D}_{22} - \tilde{D}_{33}^r\tilde{D}_{32} \right)
E_v = \begin{bmatrix} I & 0 & \ldots & 0 \end{bmatrix}
\Lambda = \left( (\tilde{D}_{33}^{r\perp})^T \tilde{D}_{23}^T \bar{\Gamma} \tilde{D}_{23} \tilde{D}_{33}^{r\perp} \right)^{-1} (\tilde{D}_{33}^{r\perp})^T \tilde{D}_{23}^T \bar{\Gamma}

Again, implementation issues are discussed in Section 5.3.
5.1.3 Full standard predictive control problem
In this section we consider the problem of minimizing the performance index

J(v, k) = \sum_{j=0}^{N-1} \hat{z}^T(k+j|k)\,\Gamma(j)\,\hat{z}(k+j|k)   (5.28)

for the system as given in (5.1), (5.2) and (5.3), with both equality and inequality constraints. In this section we consider the case of finite N. The infinite horizon case is treated in Sections 5.2.4 and 5.2.5.
Consider the constrained standard problem

\min_{\tilde{v}}\; \tilde{z}^T(k)\,\bar{\Gamma}\,\tilde{z}(k)

subject to the constraints:

\bar{\phi}(k) = \tilde{C}_3 x(k|k) + \tilde{D}_{31} e(k|k) + \tilde{D}_{32} \tilde{w}(k) + \tilde{D}_{33} \tilde{v}(k) = 0   (5.29)
\bar{\psi}(k) = \tilde{C}_4 x(k|k) + \tilde{D}_{41} e(k|k) + \tilde{D}_{42} \tilde{w}(k) + \tilde{D}_{43} \tilde{v}(k) \le \bar{\Psi}(k)   (5.30)

Now \tilde{v}(k) is chosen as in the previous section:

\tilde{v}(k) = -\tilde{D}_{33}^r \left( \tilde{C}_3 x(k|k) + \tilde{D}_{31} e(k|k) + \tilde{D}_{32} \tilde{w}(k) \right) + \tilde{D}_{33}^{r\perp}\,\tilde{\mu}(k) = \tilde{v}_E(k) + \tilde{D}_{33}^{r\perp}\,\tilde{\mu}(k)
This results in

\bar{\phi}(k) = \tilde{C}_3 x(k|k) + \tilde{D}_{31} e(k|k) + \tilde{D}_{32} \tilde{w}(k) + \tilde{D}_{33} \tilde{v}_E(k) + \tilde{D}_{33}\tilde{D}_{33}^{r\perp}\,\tilde{\mu}(k) = 0

and so the equality constraint is eliminated. The optimization vector \tilde{\mu}(k) can now be written as

\tilde{\mu}(k) = \tilde{\mu}_E(k) + \tilde{\mu}_I(k)   (5.31)
             = -H^{-1} f(k) + \tilde{\mu}_I(k)   (5.32)

where \tilde{\mu}_E(k) = -H^{-1} f(k) is the equality constrained solution given in the previous section, and \tilde{\mu}_I(k) is an additional term to take the inequality constraints into account. Now consider performance index (5.23) from the previous section. Substitution of \tilde{\mu}(k) = -H^{-1} f(k) + \tilde{\mu}_I(k) gives us:
\tfrac{1}{2}\,\tilde{\mu}^T(k)\,H\,\tilde{\mu}(k) + f^T(k)\,\tilde{\mu}(k) + c(k)
 = \tfrac{1}{2}\left( -H^{-1}f(k) + \tilde{\mu}_I(k) \right)^T H \left( -H^{-1}f(k) + \tilde{\mu}_I(k) \right) + f^T(k)\left( -H^{-1}f(k) + \tilde{\mu}_I(k) \right) + c(k)
 = \tfrac{1}{2}\,\tilde{\mu}_I^T(k)\,H\,\tilde{\mu}_I(k) - f^T(k)H^{-1}H\,\tilde{\mu}_I(k) + \tfrac{1}{2}\,f^T(k)H^{-1}HH^{-1}f(k) - f^T(k)H^{-1}f(k) + f^T(k)\,\tilde{\mu}_I(k) + c(k)
 = \tfrac{1}{2}\,\tilde{\mu}_I^T(k)\,H\,\tilde{\mu}_I(k) - \tfrac{1}{2}\,f^T(k)H^{-1}f(k) + c(k)
Using equations (5.24)-(5.25), the vector \tilde{\mu}(k) can be written as

\tilde{\mu}(k) = \tilde{\mu}_E(k) + \tilde{\mu}_I(k) = -\Lambda\,\tilde{z}_E(k) + \tilde{\mu}_I(k)
 = \Lambda (\tilde{D}_{23}\tilde{D}_{33}^r\tilde{C}_3 - \tilde{C}_2)\, x(k|k) + \Lambda (\tilde{D}_{23}\tilde{D}_{33}^r\tilde{D}_{31} - \tilde{D}_{21})\, e(k|k) + \Lambda (\tilde{D}_{23}\tilde{D}_{33}^r\tilde{D}_{32} - \tilde{D}_{22})\, \tilde{w}(k) + \tilde{\mu}_I(k)   (5.34)
Substitution of \tilde{v}(k) = \tilde{v}_E(k) + \tilde{D}_{33}^{r\perp}\,\tilde{\mu}(k) = \tilde{v}_E(k) + \tilde{D}_{33}^{r\perp}\,\tilde{\mu}_E(k) + \tilde{D}_{33}^{r\perp}\,\tilde{\mu}_I(k) in inequality constraint (5.30) gives:

\bar{\psi}(k) = (\tilde{C}_4 - \tilde{D}_{43}\tilde{D}_{33}^r\tilde{C}_3)\, x(k|k) + (\tilde{D}_{41} - \tilde{D}_{43}\tilde{D}_{33}^r\tilde{D}_{31})\, e(k|k)
   + (\tilde{D}_{42} - \tilde{D}_{43}\tilde{D}_{33}^r\tilde{D}_{32})\, \tilde{w}(k) + \tilde{D}_{43}\tilde{D}_{33}^{r\perp}\,\tilde{\mu}_E(k) + \tilde{D}_{43}\tilde{D}_{33}^{r\perp}\,\tilde{\mu}_I(k)
 = (\tilde{C}_4 - \tilde{D}_{43}\tilde{D}_{33}^r\tilde{C}_3 + \tilde{D}_{43}\tilde{D}_{33}^{r\perp}\Lambda\tilde{D}_{23}\tilde{D}_{33}^r\tilde{C}_3 - \tilde{D}_{43}\tilde{D}_{33}^{r\perp}\Lambda\tilde{C}_2)\, x(k|k)
   + (\tilde{D}_{41} - \tilde{D}_{43}\tilde{D}_{33}^r\tilde{D}_{31} + \tilde{D}_{43}\tilde{D}_{33}^{r\perp}\Lambda\tilde{D}_{23}\tilde{D}_{33}^r\tilde{D}_{31} - \tilde{D}_{43}\tilde{D}_{33}^{r\perp}\Lambda\tilde{D}_{21})\, e(k|k)
   + (\tilde{D}_{42} - \tilde{D}_{43}\tilde{D}_{33}^r\tilde{D}_{32} + \tilde{D}_{43}\tilde{D}_{33}^{r\perp}\Lambda\tilde{D}_{23}\tilde{D}_{33}^r\tilde{D}_{32} - \tilde{D}_{43}\tilde{D}_{33}^{r\perp}\Lambda\tilde{D}_{22})\, \tilde{w}(k)
   + \tilde{D}_{43}\tilde{D}_{33}^{r\perp}\,\tilde{\mu}_I(k)
 = \bar{\psi}_E(k) + \tilde{D}_{43}\tilde{D}_{33}^{r\perp}\,\tilde{\mu}_I(k)
Now let

A_{\psi} = \tilde{D}_{43}\tilde{D}_{33}^{r\perp} \qquad b_{\psi}(k) = \bar{\psi}_E(k) - \bar{\Psi}(k)

then we obtain the quadratic programming problem of minimizing

\min_{\tilde{\mu}_I(k)}\; \tfrac{1}{2}\,\tilde{\mu}_I^T(k)\,H\,\tilde{\mu}_I(k)

subject to:

A_{\psi}\,\tilde{\mu}_I(k) + b_{\psi}(k) \le 0

where H is as in the previous section and we dropped the term -\tfrac{1}{2} f^T(k)H^{-1}f(k) + c(k), because it does not depend on \tilde{\mu}_I(k) and so it has no influence on the minimization. The above optimization problem, which has to be solved numerically and on-line, is a quadratic programming problem and can be solved in a finite number of iterative steps using reliable and fast algorithms (see Appendix A).
The control law of the full SPCP is nonlinear, and cannot be expressed in a linear form as for the unconstrained SPCP or the equality constrained SPCP.
Theorem 12
Consider system (5.1) - (5.3) and the constrained (finite horizon) standard predictive control problem (CSPCP) of minimizing the performance index

J(v, k) = \sum_{j=0}^{N-1} \hat{z}^T(k+j|k)\,\Gamma(j)\,\hat{z}(k+j|k)   (5.35)

subject to the equality and inequality constraints

\bar{\phi}(k) = \tilde{C}_3 x(k|k) + \tilde{D}_{31} e(k|k) + \tilde{D}_{32} \tilde{w}(k) + \tilde{D}_{33} \tilde{v}(k) = 0
\bar{\psi}(k) = \tilde{C}_4 x(k|k) + \tilde{D}_{41} e(k|k) + \tilde{D}_{42} \tilde{w}(k) + \tilde{D}_{43} \tilde{v}(k) \le \bar{\Psi}(k)

Define:

A_{\psi} = \tilde{D}_{43}\tilde{D}_{33}^{r\perp}

b_{\psi}(k) = \bar{\psi}_E(k) - \bar{\Psi}(k)
 = (\tilde{C}_4 - \tilde{D}_{43}\tilde{D}_{33}^r\tilde{C}_3 + \tilde{D}_{43}\tilde{D}_{33}^{r\perp}\Lambda\tilde{D}_{23}\tilde{D}_{33}^r\tilde{C}_3 - \tilde{D}_{43}\tilde{D}_{33}^{r\perp}\Lambda\tilde{C}_2)\, x(k|k)
 + (\tilde{D}_{41} - \tilde{D}_{43}\tilde{D}_{33}^r\tilde{D}_{31} + \tilde{D}_{43}\tilde{D}_{33}^{r\perp}\Lambda\tilde{D}_{23}\tilde{D}_{33}^r\tilde{D}_{31} - \tilde{D}_{43}\tilde{D}_{33}^{r\perp}\Lambda\tilde{D}_{21})\, e(k|k)
 + (\tilde{D}_{42} - \tilde{D}_{43}\tilde{D}_{33}^r\tilde{D}_{32} + \tilde{D}_{43}\tilde{D}_{33}^{r\perp}\Lambda\tilde{D}_{23}\tilde{D}_{33}^r\tilde{D}_{32} - \tilde{D}_{43}\tilde{D}_{33}^{r\perp}\Lambda\tilde{D}_{22})\, \tilde{w}(k)
 - \bar{\Psi}(k)

H = 2\,(\tilde{D}_{33}^{r\perp})^T \tilde{D}_{23}^T \bar{\Gamma} \tilde{D}_{23} \tilde{D}_{33}^{r\perp}
F = E_v \left( \tilde{D}_{33}^{r\perp}\Lambda\tilde{D}_{23}\tilde{D}_{33}^r\tilde{C}_3 - \tilde{D}_{33}^{r\perp}\Lambda\tilde{C}_2 - \tilde{D}_{33}^r\tilde{C}_3 \right)
D_e = E_v \left( \tilde{D}_{33}^{r\perp}\Lambda\tilde{D}_{23}\tilde{D}_{33}^r\tilde{D}_{31} - \tilde{D}_{33}^{r\perp}\Lambda\tilde{D}_{21} - \tilde{D}_{33}^r\tilde{D}_{31} \right)
D_w = E_v \left( \tilde{D}_{33}^{r\perp}\Lambda\tilde{D}_{23}\tilde{D}_{33}^r\tilde{D}_{32} - \tilde{D}_{33}^{r\perp}\Lambda\tilde{D}_{22} - \tilde{D}_{33}^r\tilde{D}_{32} \right)
D_{\mu} = E_v\,\tilde{D}_{33}^{r\perp}
\Lambda = \left( (\tilde{D}_{33}^{r\perp})^T \tilde{D}_{23}^T \bar{\Gamma} \tilde{D}_{23} \tilde{D}_{33}^{r\perp} \right)^{-1} (\tilde{D}_{33}^{r\perp})^T \tilde{D}_{23}^T \bar{\Gamma}
E_v = \begin{bmatrix} I & 0 & \ldots & 0 \end{bmatrix}

The optimal control law that solves this CSPCP problem is given by:

v(k) = F\,x(k|k) + D_e\,e(k|k) + D_w\,\tilde{w}(k) + D_{\mu}\,\tilde{\mu}_I(k)

where \tilde{\mu}_I(k) is the solution of the quadratic programming problem

\min_{\tilde{\mu}_I(k)}\; \tfrac{1}{2}\,\tilde{\mu}_I^T(k)\,H\,\tilde{\mu}_I(k)

subject to:

A_{\psi}\,\tilde{\mu}_I(k) + b_{\psi}(k) \le 0
5.2 Infinite horizon SPCP

In this section we consider the standard predictive control problem for the case where the prediction horizon is infinite (N = \infty) [59].

5.2.1 Steady-state behavior

In infinite horizon predictive control it is important to know how the signals behave for k \to \infty. We therefore define the steady state of a system in the standard predictive control formulation:

Definition 13
The quadruple (v_{ss}, x_{ss}, w_{ss}, z_{ss}) is called a steady state if the following equations are satisfied:

x_{ss} = A x_{ss} + B_2 w_{ss} + B_3 v_{ss}   (5.36)

z_{ss} = C_2 x_{ss} + D_{22} w_{ss} + D_{23} v_{ss}   (5.37)

To be able to solve the predictive control problem it is important that for every possible w_{ss} the quadruple (v_{ss}, x_{ss}, w_{ss}, z_{ss}) exists. For steady-state error-free control the performance signal z_{ss} should be equal to zero for every possible w_{ss} (this is also necessary to be able to solve the infinite horizon predictive control problem). We therefore look at the existence of a steady state

(v_{ss}, x_{ss}, w_{ss}, z_{ss}) = (v_{ss}, x_{ss}, w_{ss}, 0)

for every possible w_{ss}.
Consider the matrix

M_{ss} = \begin{bmatrix} I - A & -B_3 \\ C_2 & D_{23} \end{bmatrix}   (5.38)

then (5.36) and (5.37), with z_{ss} = 0, can be rewritten as:

M_{ss} \begin{bmatrix} x_{ss} \\ v_{ss} \end{bmatrix} = \begin{bmatrix} B_2 \\ -D_{22} \end{bmatrix} w_{ss}

Let the singular value decomposition of M_{ss} be given by

M_{ss} = \begin{bmatrix} U_{M1} & U_{M2} \end{bmatrix} \begin{bmatrix} \Sigma_M & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} V_{M1}^T \\ V_{M2}^T \end{bmatrix}

where \Sigma_M = \mathrm{diag}(\sigma_1, \ldots, \sigma_m) with \sigma_i > 0.
A necessary and sufficient condition for the existence of x_{ss} and v_{ss} is that

U_{M2}^T \begin{bmatrix} B_2 \\ -D_{22} \end{bmatrix} = 0   (5.39)

The solution is given by:

\begin{bmatrix} x_{ss} \\ v_{ss} \end{bmatrix} = V_{M1} \Sigma_M^{-1} U_{M1}^T \begin{bmatrix} B_2 \\ -D_{22} \end{bmatrix} w_{ss} + V_{M2}\, \zeta_M   (5.40)
where \zeta_M is an arbitrary vector with the appropriate dimension. Substitution of (5.40) in (5.36) and (5.37) shows that the quadruple (v_{ss}, x_{ss}, w_{ss}, z_{ss}) = (v_{ss}, x_{ss}, w_{ss}, 0) is a steady state. The solution (x_{ss}, v_{ss}) for a given w_{ss} is unique if V_{M2} is empty or, equivalently, if M_{ss} has full column rank.
In the section on infinite horizon MPC we will assume that z_{ss} = 0 and that x_{ss} and v_{ss} exist, are unique, and are related to w_{ss} by

\begin{bmatrix} x_{ss} \\ v_{ss} \end{bmatrix} = V_{M1} \Sigma_M^{-1} U_{M1}^T \begin{bmatrix} B_2 \\ -D_{22} \end{bmatrix} w_{ss}   (5.41)

Usually, we assume a linear relation between w_{ss} and \tilde{w}(k):

w_{ss} = D_{ssw}\, \tilde{w}(k)

Sometimes w_{ss} is found by some extrapolation, but often we choose w to be constant beyond the prediction horizon, so w_{ss} = w(k+j) = w(k+N-1) for j \ge N. The matrix D_{ssw} is then given by D_{ssw} = \begin{bmatrix} 0 & \cdots & 0 & I \end{bmatrix}, such that w_{ss} = w(k+N-1) = D_{ssw}\tilde{w}(k).
Now define the matrices

\begin{bmatrix} D_{ssx} \\ D_{ssv} \end{bmatrix} = V_{M1} \Sigma_M^{-1} U_{M1}^T \begin{bmatrix} B_2 \\ -D_{22} \end{bmatrix} D_{ssw}   (5.42)

and

D_{ssy} = C_1 D_{ssx} + D_{12} D_{ssw} + D_{13} D_{ssv}   (5.43)

then a direct relation between (x_{ss}, v_{ss}, w_{ss}, y_{ss}) and \tilde{w}(k) is given by

\begin{bmatrix} x_{ss} \\ v_{ss} \\ w_{ss} \\ y_{ss} \end{bmatrix} = \begin{bmatrix} D_{ssx} \\ D_{ssv} \\ D_{ssw} \\ D_{ssy} \end{bmatrix} \tilde{w}(k)   (5.44)

Remark 14: Sometimes it is interesting to consider cyclic or periodic steady-state behavior. This means that the signals x, w and v do not become constant for k \to \infty, but converge to some cyclic behavior. For a cyclic steady state we assume there is a cycle period P such that (v_{ss}(k+P), x_{ss}(k+P), w_{ss}(k+P), z_{ss}(k+P)) = (v_{ss}(k), x_{ss}(k), w_{ss}(k), z_{ss}(k)).
For infinite horizon predictive control, we again make the assumption that z_{ss} = 0. In this way, we can isolate the steady-state behavior from the predictive control problem without affecting the performance index.
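The steady-state computation of (5.38)-(5.41) can be sketched numerically as follows; this is a minimal illustration with our own helper names, assuming the SVD-based formulas above with z_{ss} = 0.

```python
# Sketch of the steady-state solution (5.40)/(5.41):
#   M_ss [x_ss; v_ss] = [B2; -D22] w_ss,   M_ss = [[I - A, -B3], [C2, D23]]
# solved through the SVD of M_ss (minimum-norm solution, zeta_M = 0).
import numpy as np

def steady_state(A, B2, B3, C2, D22, D23, w_ss):
    n = A.shape[0]
    M_ss = np.block([[np.eye(n) - A, -B3], [C2, D23]])
    rhs = np.concatenate([B2 @ w_ss, -D22 @ w_ss])
    U, s, Vt = np.linalg.svd(M_ss)
    m = int(np.sum(s > 1e-10))                 # numerical rank of M_ss
    U1, V1 = U[:, :m], Vt[:m, :].T
    # existence condition (5.39): U_M2' [B2; -D22] w_ss = 0
    assert np.allclose(U[:, m:].T @ rhs, 0), "no error-free steady state exists"
    xv = V1 @ (np.diag(1.0 / s[:m]) @ (U1.T @ rhs))
    return xv[:n], xv[n:]
```

For the scalar example A = 0.5, B_2 = B_3 = C_2 = D_{23} = 1, D_{22} = 0 and w_{ss} = 1, the equations 0.5 x_{ss} - v_{ss} = 1 and x_{ss} + v_{ss} = 0 give (x_{ss}, v_{ss}) = (2/3, -2/3).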
5.2.2 Structuring the input signal for infinite horizon MPC

For infinite horizon predictive control we encounter an optimization problem with an infinite number of degrees of freedom, parameterized by v(k+j|k), j = 1, \ldots, \infty. For the unconstrained case this is not a problem, and we can find the optimal predictive controller using the theory of Linear Quadratic Gaussian (LQG) control (Strejc, [63]). This is given in section 5.2.3.
In the constrained case, we can tackle the problem by giving the input signal only a limited number of degrees of freedom. We do that by introducing the so-called switching horizon, denoted by N_s. This N_s has to be chosen such that all degrees of freedom are in the first N_s samples of the input signal, i.e. in v(k+j|k), j = 0, \ldots, N_s-1. In this section we therefore adopt the input sequence vector

\tilde{v}(k) = \begin{bmatrix} v(k|k) \\ v(k+1|k) \\ \vdots \\ v(k+N_s-1|k) \end{bmatrix}

which contains all degrees of freedom at sample time k.
We will consider two ways to extend the input signal beyond the switching horizon:

Control horizon: By taking N_s = N_c, we choose v(k+j|k) = v_{ss} for j \ge N_s. (Note that v_{ss} = 0 for IIO models.)

Basis functions: We can choose v(k+j|k) for j \ge N_s as a finite sum of (orthogonal) basis functions. The future input v(k+j|k) can be formulated using equation (4.25):

v(k+j|k) = C_v A_v^j x_v(k|k) + v_{ss}

where we have added an extra term v_{ss} to take the steady-state value into account. Following the definitions from section 4.3, we derive the relation

\tilde{v}(k) = \begin{bmatrix} v(k|k) \\ v(k+1|k) \\ \vdots \\ v(k+N_s-1|k) \end{bmatrix} = \begin{bmatrix} C_v \\ C_v A_v \\ \vdots \\ C_v A_v^{N_s-1} \end{bmatrix} x_v(k|k) + \begin{bmatrix} v_{ss} \\ v_{ss} \\ \vdots \\ v_{ss} \end{bmatrix} = S_v\, x_v(k|k) + \tilde{v}_{ss}

Suppose N_s is such that S_v has full column rank, and a left-complement is given by S_v^{\perp} (so S_v^{\perp} S_v = 0, see appendix C for a definition). Now we find that

S_v^{\perp} \big( \tilde{v}(k) - \tilde{v}_{ss} \big) = 0   (5.45)

or, by defining

\tilde{v}_{ss} = \begin{bmatrix} v_{ss} \\ v_{ss} \\ \vdots \\ v_{ss} \end{bmatrix} = \begin{bmatrix} D_{ssv} \\ D_{ssv} \\ \vdots \\ D_{ssv} \end{bmatrix} \tilde{w}(k) = \bar{D}_{ssv}\, \tilde{w}(k)

we obtain

S_v^{\perp}\, \tilde{v}(k) - S_v^{\perp} \bar{D}_{ssv}\, \tilde{w}(k) = 0   (5.46)

We observe that the (orthogonal) basis functions can be described by the equality constraint (5.46).
By defining B_v = A_v^{N_s} S_v^{l} (where S_v^{l} is the left inverse of S_v, see appendix C for a definition), we can express

A_v^{N_s} x_v(k|k) = B_v\, \tilde{v}(k)

and the future input signal beyond N_s can be expressed in terms of \tilde{v}(k):

v(k+j|k) = C_v A_v^j x_v(k|k) + v_{ss} = C_v A_v^{j-N_s} B_v \tilde{v}(k) + v_{ss} ,\quad j \ge N_s   (5.47)

The description with (orthogonal) basis functions gives us that, for the constrained case, the input signal beyond N_s can be described as follows:

v(k+j|k) = C_v A_v^{j-N_s} B_v \tilde{v}(k) + F_v \big( x(k+j|k) - x_{ss} \big) + v_{ss} ,\quad j \ge N_s   (5.48)

or

x_v(k+N_s|k) = B_v\, \tilde{v}(k)   (5.49)

x_v(k+j+1|k) = A_v\, x_v(k+j|k) ,\quad j \ge N_s   (5.50)

v(k+j|k) = C_v\, x_v(k+j|k) + F_v \big( x(k+j|k) - x_{ss} \big) + v_{ss} ,\quad j \ge N_s   (5.51)

where x_v is an additional state, describing the dynamics of the (orthogonal) basis functions beyond the switching horizon. Finally, we should note that \tilde{v}(k) cannot be chosen arbitrarily but is constrained by the equality constraint (5.46).
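The basis-function machinery above can be sketched numerically. The snippet below (our own construction, assuming S_v has full column rank so that the SVD-based left-complement and pseudo-inverse left-inverse are valid) builds S_v, S_v^{l}, S_v^{\perp} and B_v for given A_v, C_v and N_s.

```python
# Sketch of section 5.2.2: S_v = [C_v; C_v A_v; ...; C_v A_v^(Ns-1)],
# a left-inverse S_v^l (S_l @ S_v = I), a left-complement S_v^perp
# (S_perp @ S_v = 0), and B_v = A_v^Ns S_v^l.  Assumes full column rank.
import numpy as np

def basis_matrices(A_v, C_v, N_s):
    rows = [C_v @ np.linalg.matrix_power(A_v, j) for j in range(N_s)]
    S_v = np.vstack(rows)
    S_l = np.linalg.pinv(S_v)                  # left-inverse
    U, s, Vt = np.linalg.svd(S_v)
    r = S_v.shape[1]                           # column rank (assumed full)
    S_perp = U[:, r:].T                        # rows span the left null space
    B_v = np.linalg.matrix_power(A_v, N_s) @ S_l
    return S_v, S_l, S_perp, B_v
```

With the scalar basis A_v = 0.5, C_v = 1 and N_s = 3, this gives S_v = [1; 0.5; 0.25], and the identities S_v^{l} S_v = I and S_v^{\perp} S_v = 0 can be checked directly.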
5.2.3 Unconstrained infinite horizon SPCP

In this section we consider the unconstrained infinite horizon standard predictive control problem. To be able to tackle this problem, we consider a constant external signal w, a zero steady-state performance signal, and a weighting matrix \Gamma(j) equal to identity, so

w(k+j) = w_{ss} for all j \ge 0
z_{ss} = 0
\Gamma(j) = I for all j \ge 0

Define

x_\Delta(k+j|k) = x(k+j|k) - x_{ss} for all j \ge 0
v_\Delta(k+j|k) = v(k+j|k) - v_{ss} for all j \ge 0

then it follows that

x_\Delta(k+j+1|k) = x(k+j+1|k) - x_{ss}
 = A x(k+j|k) + B_1 e(k+j|k) + B_2 w(k+j) + B_3 v(k+j|k) - x_{ss}
 = A x_\Delta(k+j|k) + B_1 e(k+j|k) + B_3 v_\Delta(k+j|k)

z(k+j|k) = C_2 x(k+j|k) + D_{21} e(k+j|k) + D_{22} w(k+j) + D_{23} v(k+j|k)
 = C_2 x_\Delta(k+j|k) + D_{21} e(k+j|k) + D_{23} v_\Delta(k+j|k)

where we used equations (5.36) and (5.37). So we obtain:

x_\Delta(k+j+1|k) = A x_\Delta(k+j|k) + B_1 e(k+j|k) + B_3 v_\Delta(k+j|k)
z(k+j|k) = C_2 x_\Delta(k+j|k) + D_{21} e(k+j|k) + D_{23} v_\Delta(k+j|k)

Substitution in the performance index leads to:

J(v,k) = \sum_{j=0}^{\infty} z^T(k+j|k)\,\Gamma(j)\, z(k+j|k)
 = \sum_{j=0}^{\infty} \Big[ \big( x_\Delta^T(k+j|k) C_2^T + e^T(k+j|k) D_{21}^T \big)\big( C_2 x_\Delta(k+j|k) + D_{21} e(k+j|k) \big)
 + 2 \big( x_\Delta^T(k+j|k) C_2^T + e^T(k+j|k) D_{21}^T \big) D_{23}\, v_\Delta(k+j|k)
 + v_\Delta^T(k+j|k) D_{23}^T D_{23}\, v_\Delta(k+j|k) \Big]

Now define

\breve{v}(k+j|k) = v_\Delta(k+j|k) + (D_{23}^T D_{23})^{-1} D_{23}^T \big( C_2 x_\Delta(k+j|k) + D_{21} e(k+j|k) \big)   (5.52)

Then the state equation is given by:

x_\Delta(k+j+1|k) = A x_\Delta(k+j|k) + B_1 e(k+j|k) + B_3 v_\Delta(k+j|k)
 = \big( A - B_3 (D_{23}^T D_{23})^{-1} D_{23}^T C_2 \big) x_\Delta(k+j|k)
 + \big( B_1 - B_3 (D_{23}^T D_{23})^{-1} D_{23}^T D_{21} \big) e(k+j|k) + B_3\, \breve{v}(k+j|k)

Now we define a new state

\breve{x}(k+j|k) = \begin{bmatrix} x_\Delta(k+j|k) \\ e(k+j|k) \end{bmatrix}

and define

\breve{A} = \begin{bmatrix} A - B_3 (D_{23}^T D_{23})^{-1} D_{23}^T C_2 & B_1 - B_3 (D_{23}^T D_{23})^{-1} D_{23}^T D_{21} \\ 0 & 0 \end{bmatrix}

\breve{B} = \begin{bmatrix} B_3 \\ 0 \end{bmatrix}

\breve{C} = \big( I - D_{23} (D_{23}^T D_{23})^{-1} D_{23}^T \big) \begin{bmatrix} C_2 & D_{21} \end{bmatrix}

\breve{D} = D_{23}   (5.53)

We obtain

\breve{x}(k+j+1|k) = \breve{A}\, \breve{x}(k+j|k) + \breve{B}\, \breve{v}(k+j|k)
z(k+j|k) = \breve{C}\, \breve{x}(k+j|k) + \breve{D}\, \breve{v}(k+j|k)

and the performance index becomes

J(v,k) = \sum_{j=0}^{\infty} z^T(k+j|k)\, z(k+j|k)
 = \sum_{j=0}^{\infty} \Big[ \breve{x}^T(k+j|k) \breve{C}^T \breve{C}\, \breve{x}(k+j|k) + 2\, \breve{x}^T(k+j|k) \breve{C}^T \breve{D}\, \breve{v}(k+j|k) + \breve{v}^T(k+j|k) \breve{D}^T \breve{D}\, \breve{v}(k+j|k) \Big]
 = \sum_{j=0}^{\infty} \Big[ \breve{x}^T(k+j|k)\, \breve{Q}\, \breve{x}(k+j|k) + \breve{v}^T(k+j|k)\, \breve{R}\, \breve{v}(k+j|k) \Big]

where

\breve{Q} = \breve{C}^T \breve{C} = \begin{bmatrix} C_2 & D_{21} \end{bmatrix}^T \big( I - D_{23} (D_{23}^T D_{23})^{-1} D_{23}^T \big) \begin{bmatrix} C_2 & D_{21} \end{bmatrix}
\breve{R} = \breve{D}^T \breve{D} = D_{23}^T D_{23}.   (5.54)

and we used the fact that

\breve{D}^T \breve{C} = D_{23}^T \big( I - D_{23} (D_{23}^T D_{23})^{-1} D_{23}^T \big) \begin{bmatrix} C_2 & D_{21} \end{bmatrix} = ( D_{23}^T - D_{23}^T ) \begin{bmatrix} C_2 & D_{21} \end{bmatrix} = 0.
Minimizing the performance index has now become a standard LQG problem (Strejc, [63]), and the optimal control signal \breve{v} is given by

\breve{v}(k) = -(\breve{B}^T P \breve{B} + \breve{R})^{-1} \breve{B}^T P \breve{A}\, \breve{x}(k|k)   (5.55)

where P \ge 0 is the smallest positive semi-definite solution of the discrete-time Riccati equation

P = \breve{A}^T P \breve{A} - \breve{A}^T P \breve{B} (\breve{B}^T P \breve{B} + \breve{R})^{-1} \breve{B}^T P \breve{A} + \breve{Q}

which exists due to stabilizability of (\breve{A}, \breve{B}) and invertibility of \breve{R}.
The closed-loop equations can now be derived by substitution of (5.55) and (5.44) into (5.52) to obtain

v(k) = F x(k|k) + D_w \tilde{w}(k) + D_e e(k|k)

where

F = -\Big( (\breve{B}^T P \breve{B} + \breve{R})^{-1} \breve{B}^T P \begin{bmatrix} A - B_3 (D_{23}^T D_{23})^{-1} D_{23}^T C_2 \\ 0 \end{bmatrix} + (D_{23}^T D_{23})^{-1} D_{23}^T C_2 \Big)

D_w = \Big( (\breve{B}^T P \breve{B} + \breve{R})^{-1} \breve{B}^T P \begin{bmatrix} A - B_3 (D_{23}^T D_{23})^{-1} D_{23}^T C_2 \\ 0 \end{bmatrix} + (D_{23}^T D_{23})^{-1} D_{23}^T C_2 \Big) D_{ssx} + D_{ssv}

D_e = -\Big( (\breve{B}^T P \breve{B} + \breve{R})^{-1} \breve{B}^T P \begin{bmatrix} B_1 - B_3 (D_{23}^T D_{23})^{-1} D_{23}^T D_{21} \\ 0 \end{bmatrix} + (D_{23}^T D_{23})^{-1} D_{23}^T D_{21} \Big)

and w_{ss} = w(k) = \tilde{w}(k).
Note that in the above we have used that \breve{R} = D_{23}^T D_{23} is invertible. This is a strengthened version of assumption 2 as formulated in the beginning of this chapter. With some effort, the above can be derived solely on the basis of assumptions 2 and 3, without relying on invertibility of \breve{R}.
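As a numerical aside (our own sketch, not the notes' implementation), the Riccati equation above can be solved by simple fixed-point iteration, after which the LQ gain of (5.55) follows directly. We assume a stabilizable pair and invertible \breve{R}, as in the text.

```python
# Minimal sketch: iterate  P <- A'PA - A'PB (B'PB + R)^-1 B'PA + Q  to a fixed
# point, then return P and the gain K with  v(k) = -K x(k)  as in (5.55).
import numpy as np

def dare_iterate(A, B, Q, R, iters=500):
    P = np.zeros_like(Q)
    for _ in range(iters):
        K = np.linalg.solve(B.T @ P @ B + R, B.T @ P @ A)
        P = A.T @ P @ A - A.T @ P @ B @ K + Q
    K = np.linalg.solve(B.T @ P @ B + R, B.T @ P @ A)
    return P, K
```

For the scalar case A = 0.9, B = Q = R = 1 the fixed point satisfies P^2 - 0.81 P - 1 = 0, so P = (0.81 + \sqrt{0.81^2 + 4})/2 \approx 1.4839, and the closed loop A - BK is stable.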
The results are summarized in the following theorem:
Theorem 15
Consider system (5.1) - (5.3) with a steady state (v_{ss}, x_{ss}, w_{ss}, z_{ss}) = (D_{ssv} w_{ss}, D_{ssx} w_{ss}, w_{ss}, 0). Further let w(k+j) = w_{ss} for j > 0 and let \breve{A}, \breve{B}, \breve{C}, \breve{D}, \breve{Q} and \breve{R} be defined as in equations (5.53) and (5.54). The unconstrained infinite horizon standard predictive control problem of minimizing the performance index

J(v,k) = \sum_{j=0}^{\infty} z^T(k+j|k)\, z(k+j|k)   (5.56)

is solved by the control law

v(k) = F x(k|k) + D_w \tilde{w}(k) + D_e e(k)   (5.57)

where

F = -\Big( (\breve{B}^T P \breve{B} + \breve{R})^{-1} \breve{B}^T P \begin{bmatrix} A - B_3 (D_{23}^T D_{23})^{-1} D_{23}^T C_2 \\ 0 \end{bmatrix} + (D_{23}^T D_{23})^{-1} D_{23}^T C_2 \Big)

D_w = \Big( (\breve{B}^T P \breve{B} + \breve{R})^{-1} \breve{B}^T P \begin{bmatrix} A - B_3 (D_{23}^T D_{23})^{-1} D_{23}^T C_2 \\ 0 \end{bmatrix} + (D_{23}^T D_{23})^{-1} D_{23}^T C_2 \Big) D_{ssx} + D_{ssv}

D_e = -\Big( (\breve{B}^T P \breve{B} + \breve{R})^{-1} \breve{B}^T P \begin{bmatrix} B_1 - B_3 (D_{23}^T D_{23})^{-1} D_{23}^T D_{21} \\ 0 \end{bmatrix} + (D_{23}^T D_{23})^{-1} D_{23}^T D_{21} \Big)

and P is the solution of the discrete-time Riccati equation

P = \breve{A}^T P \breve{A} - \breve{A}^T P \breve{B} (\breve{B}^T P \breve{B} + \breve{R})^{-1} \breve{B}^T P \breve{A} + \breve{Q}
5.2.4 The infinite horizon standard predictive control problem with control horizon constraint

The infinite horizon standard predictive control problem with control horizon constraint is defined as follows:

Definition 16 Consider a system given by the state-space realization

x(k+1) = A x(k) + B_1 e(k) + B_2 w(k) + B_3 v(k)   (5.58)
y(k) = C_1 x(k) + D_{11} e(k) + D_{12} w(k)   (5.59)
z(k) = C_2 x(k) + D_{21} e(k) + D_{22} w(k) + D_{23} v(k)   (5.60)

The goal is to find a controller v(k) = K(\tilde{w}, y, k) such that the performance index

J(v,k) = \sum_{j=0}^{\infty} z^T(k+j|k)\,\Gamma(j)\, z(k+j|k)   (5.61)

is minimized subject to the control horizon constraint

v(k+j|k) = 0 , for j \ge N_c   (5.62)

and the inequality constraints

\bar{\varphi}(k) = \bar{C}_{N_c,4}\, x(k|k) + \bar{D}_{N_c,41}\, e(k|k) + \bar{D}_{N_c,42}\, \tilde{w}(k) + \bar{D}_{N_c,43}\, \tilde{v}(k) \le \bar{\Phi}(k)   (5.63)

and, for all j \ge N_c:

\varphi(k+j) = C_5 x(k+j) + D_{51} e(k+j) + D_{52} w(k+j) + D_{53} v(k+j) \le \Phi^{\infty}   (5.64)

where

\tilde{w}(k) = \begin{bmatrix} w(k|k) \\ w(k+1|k) \\ \vdots \\ w(k+N_c-1|k) \end{bmatrix} \qquad \tilde{v}(k) = \begin{bmatrix} v(k|k) \\ v(k+1|k) \\ \vdots \\ v(k+N_c-1|k) \end{bmatrix}

The above problem is denoted as the infinite horizon standard predictive control problem with control horizon constraint.
Theorem 17
Consider system (5.58) - (5.60) and let

N_c < \infty
\tilde{\Gamma}_{N_c} = \mathrm{diag}\big( \Gamma(0), \Gamma(1), \ldots, \Gamma(N_c-1) \big)
\Gamma(j) = \Gamma_{ss} for j \ge N_c
w(k+j) = w_{ss} for j \ge N_c

\begin{bmatrix} v_{ss} \\ x_{ss} \end{bmatrix} = \begin{bmatrix} D_{ssv} \\ D_{ssx} \end{bmatrix} \tilde{w}(k)

w(k) = \begin{bmatrix} I & 0 & \cdots & 0 \end{bmatrix} \tilde{w}(k) = E_w \tilde{w}(k)

Let A be partitioned into a stable part \bar{A}_s and an unstable part \bar{A}_u as follows:

A = \begin{bmatrix} T_s & T_u \end{bmatrix} \begin{bmatrix} \bar{A}_s & 0 \\ 0 & \bar{A}_u \end{bmatrix} \begin{bmatrix} \bar{T}_s \\ \bar{T}_u \end{bmatrix}   (5.65)

where

\begin{bmatrix} \bar{T}_s \\ \bar{T}_u \end{bmatrix} \begin{bmatrix} T_s & T_u \end{bmatrix} = \begin{bmatrix} I & 0 \\ 0 & I \end{bmatrix}.

Define the vector

\Psi = \Phi^{\infty} - ( C_5 D_{ssx} + D_{52} E_w + D_{53} D_{ssv} )\, \tilde{w}(k) \in \mathbb{R}^{n_{c5} \times 1}   (5.66)

the diagonal matrix

E = \mathrm{diag}( \Psi_1, \Psi_2, \ldots, \Psi_{n_{c5}} ) > 0,   (5.67)

and, for any integer m > 0, the matrix

W_m = \begin{bmatrix} E^{-1} C_5 T_s \bar{A}_s \\ E^{-1} C_5 T_s \bar{A}_s^2 \\ \vdots \\ E^{-1} C_5 T_s \bar{A}_s^m \end{bmatrix}   (5.68)

Let n_\Psi be such that for any x

E^{-1} C_5 T_s \bar{A}_s^k\, x \le 1 for all k \le n_\Psi

implies that

E^{-1} C_5 T_s \bar{A}_s^k\, x \le 1 for all k > n_\Psi

Let \bar{C}_{N_c,2}, \bar{D}_{N_c,21}, \bar{D}_{N_c,22}, \bar{D}_{N_c,23}, \bar{C}_{N_c,3}, \bar{D}_{N_c,31}, \bar{D}_{N_c,32} and \bar{D}_{N_c,33} be given by

\bar{C}_{N_c,2} = \begin{bmatrix} C_2 \\ C_2 A \\ \vdots \\ C_2 A^{N_c-1} \end{bmatrix} \qquad \bar{D}_{N_c,21} = \begin{bmatrix} D_{21} \\ C_2 B_1 \\ \vdots \\ C_2 A^{N_c-2} B_1 \end{bmatrix}   (5.69)

\bar{D}_{N_c,22} = \begin{bmatrix} D_{22} & 0 & \cdots & 0 \\ C_2 B_2 & D_{22} & \cdots & 0 \\ \vdots & \ddots & \ddots & \vdots \\ C_2 A^{N_c-2} B_2 & C_2 A^{N_c-3} B_2 & \cdots & D_{22} \end{bmatrix}   (5.70)

\bar{D}_{N_c,23} = \begin{bmatrix} D_{23} & 0 & \cdots & 0 \\ C_2 B_3 & D_{23} & \cdots & 0 \\ \vdots & \ddots & \ddots & \vdots \\ C_2 A^{N_c-2} B_3 & C_2 A^{N_c-3} B_3 & \cdots & D_{23} \end{bmatrix}   (5.71)

\bar{C}_{N_c,3} = A^{N_c} \qquad \bar{D}_{N_c,31} = A^{N_c-1} B_1   (5.72)

\bar{D}_{N_c,32} = \begin{bmatrix} A^{N_c-1} B_2 & \cdots & B_2 \end{bmatrix} - D_{ssx}   (5.73)

\bar{D}_{N_c,33} = \begin{bmatrix} A^{N_c-1} B_3 & \cdots & B_3 \end{bmatrix}   (5.74)

Let Y be given by

Y = \bar{T}_u\, \bar{D}_{N_c,33},   (5.75)

Let Y have full row rank with a right-complement Y^{r\perp} and a right-inverse Y^{r}. Further let \bar{M} be the solution of the discrete-time Lyapunov equation

\bar{A}_s^T \bar{M} \bar{A}_s - \bar{M} + T_s^T C_2^T \Gamma_{ss} C_2 T_s = 0

Consider the infinite-horizon model predictive control problem, defined as minimizing (5.61) subject to an input signal v(k+j|k) with v(k+j|k) = 0 for j \ge N_c and the inequality constraints (5.63) and (5.64).
Define:

A_\psi = \begin{bmatrix} \bar{D}_{N_c,43} \\ W_{n_\Psi} \bar{D}_{N_c,33} \end{bmatrix} Y^{r\perp}

H = 2 (Y^{r\perp})^T \Big( \bar{D}_{N_c,33}^T \bar{T}_s^T \bar{M} \bar{T}_s \bar{D}_{N_c,33} + \bar{D}_{N_c,23}^T \tilde{\Gamma}_{N_c} \bar{D}_{N_c,23} \Big) Y^{r\perp}

b_\psi(k) = \begin{bmatrix} \bar{C}_{N_c,4} + \bar{D}_{N_c,43} \tilde{F} \\ W_{n_\Psi} ( \bar{C}_{N_c,3} + \bar{D}_{N_c,33} \tilde{F} ) \end{bmatrix} x(k|k)
 + \begin{bmatrix} \bar{D}_{N_c,41} + \bar{D}_{N_c,43} \tilde{D}_e \\ W_{n_\Psi} ( \bar{D}_{N_c,31} + \bar{D}_{N_c,33} \tilde{D}_e ) \end{bmatrix} e(k|k)
 + \begin{bmatrix} \bar{D}_{N_c,42} + \bar{D}_{N_c,43} \tilde{D}_w \\ W_{n_\Psi} ( \bar{D}_{N_c,32} + \bar{D}_{N_c,33} \tilde{D}_w ) \end{bmatrix} \tilde{w}(k)
 - \begin{bmatrix} \bar{\Phi}(k) \\ \mathbb{1} \end{bmatrix}   (5.76)

where

\tilde{F} = Z_1 \bar{C}_{N_c,3} + Z_2 \bar{C}_{N_c,2} + Z_3 Y^{r} \bar{T}_u \bar{C}_{N_c,3}
\tilde{D}_e = Z_1 \bar{D}_{N_c,31} + Z_2 \bar{D}_{N_c,21} + Z_3 Y^{r} \bar{T}_u \bar{D}_{N_c,31}
\tilde{D}_w = Z_1 \bar{D}_{N_c,32} + Z_2 \bar{D}_{N_c,22} + Z_3 Y^{r} \bar{T}_u \bar{D}_{N_c,32}
Z_1 = -Y^{r} \bar{T}_u - 2 Y^{r\perp} H^{-1} (Y^{r\perp})^T \bar{D}_{N_c,33}^T \bar{T}_s^T \bar{M} \bar{T}_s
Z_2 = -2 Y^{r\perp} H^{-1} (Y^{r\perp})^T \bar{D}_{N_c,23}^T \tilde{\Gamma}_{N_c}
Z_3 = 2 Y^{r\perp} H^{-1} (Y^{r\perp})^T \bar{D}_{N_c,33}^T \bar{T}_s^T \bar{M} \bar{T}_s \bar{D}_{N_c,33} + 2 Y^{r\perp} H^{-1} (Y^{r\perp})^T \bar{D}_{N_c,23}^T \tilde{\Gamma}_{N_c} \bar{D}_{N_c,23}

and \mathbb{1} = \begin{bmatrix} 1 & 1 & \cdots & 1 \end{bmatrix}^T.
The optimal control law that optimizes the inequality constrained infinite-horizon MPC problem is given by:

v(k) = F x(k|k) + D_e e(k|k) + D_w \tilde{w}(k) + D_\psi\, \mu_I(k)   (5.77)

where

F = E_v \tilde{F} , \quad D_e = E_v \tilde{D}_e , \quad D_w = E_v \tilde{D}_w , \quad D_\psi = E_v Y^{r\perp}

and where \mu_I is the solution of the quadratic programming problem

\min_{\mu_I} \tfrac{1}{2}\, \mu_I^T H \mu_I

subject to:

A_\psi\, \mu_I + b_\psi(k) \le 0

and E_v = \begin{bmatrix} I & 0 & \ldots & 0 \end{bmatrix} such that v(k) = E_v \tilde{v}(k).
In the absence of the inequality constraints (5.63) and (5.64), the optimum \mu_I = 0 is obtained and so the control law is given analytically by:

v(k) = F x(k|k) + D_e e(k|k) + D_w \tilde{w}(k)   (5.78)

Theorem 17 is a special case of theorem 21 in the next subsection.
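The stable/unstable partitioning (5.65) used throughout these theorems can be sketched numerically. The helper below (our own illustration, restricted for simplicity to diagonalizable A with no eigenvalues on the unit circle) splits A via an eigendecomposition into T_s, T_u, \bar{T}_s, \bar{T}_u, \bar{A}_s and \bar{A}_u.

```python
# Sketch of the partitioning A = [T_s T_u] blkdiag(A_s, A_u) [Tbar_s; Tbar_u]
# for a diagonalizable A.  A real Schur form would be used in practice.
import numpy as np

def partition(A, tol=1.0):
    lam, V = np.linalg.eig(A)
    stable = np.abs(lam) < tol
    T = np.hstack([V[:, stable], V[:, ~stable]])   # columns: [T_s  T_u]
    Tbar = np.linalg.inv(T)                        # rows:    [Tbar_s; Tbar_u]
    ns = int(stable.sum())
    T_s, T_u = T[:, :ns], T[:, ns:]
    Tb_s, Tb_u = Tbar[:ns], Tbar[ns:]
    A_s = Tb_s @ A @ T_s                           # stable block
    A_u = Tb_u @ A @ T_u                           # unstable block
    return T_s, T_u, Tb_s, Tb_u, A_s, A_u
```

By construction \bar{T}\,T = I and A = T_s \bar{A}_s \bar{T}_s + T_u \bar{A}_u \bar{T}_u, which is exactly the form needed to isolate the unstable modes in the terminal constraint.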
5.2.5 The infinite horizon standard predictive control problem with structured input signals

The infinite horizon standard predictive control problem with structured input signals is defined as follows:

Definition 18 Consider a system given by the state-space realization

x(k+1) = A x(k) + B_1 e(k) + B_2 w(k) + B_3 v(k)   (5.79)
y(k) = C_1 x(k) + D_{11} e(k) + D_{12} w(k)   (5.80)
z(k) = C_2 x(k) + D_{21} e(k) + D_{22} w(k) + D_{23} v(k)   (5.81)

Consider a finite switching horizon N_s; then the goal is to find a controller such that the performance index

J(v,k) = \sum_{j=0}^{\infty} z^T(k+j|k)\,\Gamma(j)\, z(k+j|k)   (5.82)

is minimized subject to the constraints

\bar{\psi}(k) = -S_v^{\perp} \bar{D}_{ssv}\, \tilde{w}(k) + S_v^{\perp}\, \tilde{v}(k) = 0   (5.83)

\bar{\varphi}(k) = \bar{C}_{N_s,4}\, x(k|k) + \bar{D}_{N_s,41}\, e(k|k) + \bar{D}_{N_s,42}\, \tilde{w}(k) + \bar{D}_{N_s,43}\, \tilde{v}(k) \le \bar{\Phi}(k)   (5.84)

where S_v is defined on page 68, and for all j \ge N_s:

\varphi(k+j|k) \le \Phi^{\infty}   (5.85)

where

\varphi(k+j|k) = C_5 x(k+j|k) + D_{51} e(k+j|k) + D_{52} w(k+j|k) + D_{53} v(k+j|k) ,

\tilde{w}(k) = \begin{bmatrix} w(k|k) \\ w(k+1|k) \\ \vdots \\ w(k+N_s-1|k) \end{bmatrix} \qquad \tilde{v}(k) = \begin{bmatrix} v(k|k) \\ v(k+1|k) \\ \vdots \\ v(k+N_s-1|k) \end{bmatrix}

and v(k+j|k) beyond the switching horizon (j \ge N_s) is given by

v(k+j|k) = C_v A_v^{j-N_s} B_v \tilde{v}(k) + v_{ss} ,\quad j \ge N_s   (5.86)

The above problem is denoted as the infinite horizon standard predictive control problem with structured input signals.

Remark 19: The equality constraint in the above definition is due to the constraint imposed on the structure of the input signal in equations (5.86) and (5.83), as discussed in section 5.2.2.

Remark 20: Equation (5.84) covers the inequality constraints up to the switching horizon; equation (5.85) covers the inequality constraints beyond the switching horizon.
Theorem 21
Consider system (5.79) - (5.81) and let

N_s < \infty
\tilde{\Gamma}_{N_s} = \mathrm{diag}\big( \Gamma(0), \Gamma(1), \ldots, \Gamma(N_s-1) \big)
\Gamma(j) = \Gamma_{ss} for j \ge N_s
w(k+j|k) = w_{ss} for j \ge N_s

\begin{bmatrix} v_{ss} \\ x_{ss} \end{bmatrix} = \begin{bmatrix} D_{ssv} \\ D_{ssx} \end{bmatrix} \tilde{w}(k)

w(k) = \begin{bmatrix} I & 0 & \cdots & 0 \end{bmatrix} \tilde{w}(k) = E_w \tilde{w}(k)

Let A_T, C_T be given by

A_T = \begin{bmatrix} A & B_3 C_v \\ 0 & A_v \end{bmatrix} \qquad C_T = \begin{bmatrix} C_2 & D_{23} C_v \end{bmatrix}   (5.87)

with a partitioning of A_T into a stable part \bar{A}_s and an unstable part \bar{A}_u as follows:

A_T = \begin{bmatrix} T_s & T_u \end{bmatrix} \begin{bmatrix} \bar{A}_s & 0 \\ 0 & \bar{A}_u \end{bmatrix} \begin{bmatrix} \bar{T}_s \\ \bar{T}_u \end{bmatrix}   (5.88)

where

\begin{bmatrix} \bar{T}_s \\ \bar{T}_u \end{bmatrix} \begin{bmatrix} T_s & T_u \end{bmatrix} = \begin{bmatrix} I & 0 \\ 0 & I \end{bmatrix}

Define the vector

\Psi = \Phi^{\infty} - ( C_5 D_{ssx} + D_{52} E_w + D_{53} D_{ssv} )\, \tilde{w}(k) \in \mathbb{R}^{n_{c5} \times 1}   (5.89)

the diagonal matrix

E = \mathrm{diag}( \Psi_1, \Psi_2, \ldots, \Psi_{n_{c5}} ) > 0,   (5.90)

and

C_\Phi = \begin{bmatrix} C_5 & D_{53} C_v \end{bmatrix}   (5.91)

Define, for any integer m > 0, the matrix

W_m = \begin{bmatrix} E^{-1} C_\Phi T_s \bar{A}_s \\ E^{-1} C_\Phi T_s \bar{A}_s^2 \\ \vdots \\ E^{-1} C_\Phi T_s \bar{A}_s^m \end{bmatrix}   (5.92)

Let n_\Psi be such that for any x

E^{-1} C_\Phi T_s \bar{A}_s^k\, x \le 1 for all k \le n_\Psi

implies that

E^{-1} C_\Phi T_s \bar{A}_s^k\, x \le 1 for all k > n_\Psi

Let \bar{C}_{N_s,2}, \bar{D}_{N_s,21}, \bar{D}_{N_s,22}, \bar{D}_{N_s,23}, \bar{C}_{N_s,3}, \bar{D}_{N_s,31}, \bar{D}_{N_s,32} and \bar{D}_{N_s,33} be given by

\bar{C}_{N_s,2} = \begin{bmatrix} C_2 \\ C_2 A \\ \vdots \\ C_2 A^{N_s-1} \end{bmatrix} \qquad \bar{D}_{N_s,21} = \begin{bmatrix} D_{21} \\ C_2 B_1 \\ \vdots \\ C_2 A^{N_s-2} B_1 \end{bmatrix}   (5.93)

\bar{D}_{N_s,22} = \begin{bmatrix} D_{22} & 0 & \cdots & 0 \\ C_2 B_2 & D_{22} & \cdots & 0 \\ \vdots & \ddots & \ddots & \vdots \\ C_2 A^{N_s-2} B_2 & C_2 A^{N_s-3} B_2 & \cdots & D_{22} \end{bmatrix}   (5.94)

\bar{D}_{N_s,23} = \begin{bmatrix} D_{23} & 0 & \cdots & 0 \\ C_2 B_3 & D_{23} & \cdots & 0 \\ \vdots & \ddots & \ddots & \vdots \\ C_2 A^{N_s-2} B_3 & C_2 A^{N_s-3} B_3 & \cdots & D_{23} \end{bmatrix}   (5.95)

\bar{C}_{N_s,3} = \begin{bmatrix} A^{N_s} \\ 0 \end{bmatrix} \qquad \bar{D}_{N_s,31} = \begin{bmatrix} A^{N_s-1} B_1 \\ 0 \end{bmatrix}   (5.96)

\bar{D}_{N_s,32} = \begin{bmatrix} [\, A^{N_s-1} B_2 \; \cdots \; B_2 \,] - D_{ssx} \\ 0 \end{bmatrix}   (5.97)

\bar{D}_{N_s,33} = \begin{bmatrix} [\, A^{N_s-1} B_3 \; \cdots \; B_3 \,] \\ B_v \end{bmatrix}   (5.98)

Let Y be given by

Y = \begin{bmatrix} \bar{T}_u \bar{D}_{N_s,33} \\ S_v^{\perp} \end{bmatrix},   (5.99)

Let Y have full row rank with a right-complement Y^{r\perp} and a right-inverse Y^{r}, partitioned as

Y^{r} = \begin{bmatrix} Y_1^{r} & Y_2^{r} \end{bmatrix}   (5.100)

such that \bar{T}_u \bar{D}_{N_s,33} Y_1^{r} = I, \bar{T}_u \bar{D}_{N_s,33} Y_2^{r} = 0, S_v^{\perp} Y_1^{r} = 0 and S_v^{\perp} Y_2^{r} = I. Further let \bar{M} be the solution of the discrete-time Lyapunov equation

\bar{A}_s^T \bar{M} \bar{A}_s - \bar{M} + T_s^T C_T^T \Gamma_{ss} C_T T_s = 0

Consider the infinite-horizon model predictive control problem, defined as minimizing (5.82) subject to an input signal v(k+j|k) given by (5.46) and (5.49)-(5.51), and the inequality constraints (5.84) and (5.85).
Define:

A_\psi = \begin{bmatrix} \bar{D}_{N_s,43} \\ W_{n_\Psi} \bar{D}_{N_s,33} \end{bmatrix} Y^{r\perp}

H = 2 (Y^{r\perp})^T \Big( \bar{D}_{N_s,33}^T \bar{T}_s^T \bar{M} \bar{T}_s \bar{D}_{N_s,33} + \bar{D}_{N_s,23}^T \tilde{\Gamma}_{N_s} \bar{D}_{N_s,23} \Big) Y^{r\perp}

b_\psi(k) = \begin{bmatrix} \bar{C}_{N_s,4} + \bar{D}_{N_s,43} \tilde{F} \\ W_{n_\Psi} ( \bar{C}_{N_s,3} + \bar{D}_{N_s,33} \tilde{F} ) \end{bmatrix} x(k|k)
 + \begin{bmatrix} \bar{D}_{N_s,41} + \bar{D}_{N_s,43} \tilde{D}_e \\ W_{n_\Psi} ( \bar{D}_{N_s,31} + \bar{D}_{N_s,33} \tilde{D}_e ) \end{bmatrix} e(k|k)
 + \begin{bmatrix} \bar{D}_{N_s,42} + \bar{D}_{N_s,43} \tilde{D}_w \\ W_{n_\Psi} ( \bar{D}_{N_s,32} + \bar{D}_{N_s,33} \tilde{D}_w ) \end{bmatrix} \tilde{w}(k)
 - \begin{bmatrix} \bar{\Phi}(k) \\ \mathbb{1} \end{bmatrix}   (5.101)

where

\tilde{F} = Z_1 \bar{C}_{N_s,3} + Z_2 \bar{C}_{N_s,2} + Z_3 Y_1^{r} \bar{T}_u \bar{C}_{N_s,3}
\tilde{D}_e = Z_1 \bar{D}_{N_s,31} + Z_2 \bar{D}_{N_s,21} + Z_3 Y_1^{r} \bar{T}_u \bar{D}_{N_s,31}
\tilde{D}_w = Z_1 \bar{D}_{N_s,32} + Z_2 \bar{D}_{N_s,22} + Z_3 Y_1^{r} \bar{T}_u \bar{D}_{N_s,32} - (Z_3 - I) Y_2^{r} S_v^{\perp} \bar{D}_{ssv}
Z_1 = -Y_1^{r} \bar{T}_u - 2 Y^{r\perp} H^{-1} (Y^{r\perp})^T \bar{D}_{N_s,33}^T \bar{T}_s^T \bar{M} \bar{T}_s
Z_2 = -2 Y^{r\perp} H^{-1} (Y^{r\perp})^T \bar{D}_{N_s,23}^T \tilde{\Gamma}_{N_s}
Z_3 = 2 Y^{r\perp} H^{-1} (Y^{r\perp})^T \bar{D}_{N_s,33}^T \bar{T}_s^T \bar{M} \bar{T}_s \bar{D}_{N_s,33} + 2 Y^{r\perp} H^{-1} (Y^{r\perp})^T \bar{D}_{N_s,23}^T \tilde{\Gamma}_{N_s} \bar{D}_{N_s,23}

and \mathbb{1} = \begin{bmatrix} 1 & 1 & \cdots & 1 \end{bmatrix}^T.
The optimal control law that optimizes the inequality constrained infinite-horizon MPC problem is given by:

v(k) = F x(k|k) + D_e e(k|k) + D_w \tilde{w}(k) + D_\psi\, \mu_I(k)   (5.102)

where

F = E_v \tilde{F} , \quad D_e = E_v \tilde{D}_e , \quad D_w = E_v \tilde{D}_w , \quad D_\psi = E_v Y^{r\perp}

and where \mu_I is the solution of the quadratic programming problem

\min_{\mu_I} \tfrac{1}{2}\, \mu_I^T H \mu_I

subject to:

A_\psi\, \mu_I + b_\psi(k) \le 0

and E_v = \begin{bmatrix} I & 0 & \ldots & 0 \end{bmatrix} such that v(k) = E_v \tilde{v}(k).
In the absence of the inequality constraints (5.84) and (5.85), the optimum \mu_I = 0 is obtained and so the control law is given analytically by:

v(k) = F x(k|k) + D_e e(k|k) + D_w \tilde{w}(k)   (5.103)

Remark 22: Note that if for any i = 1, \ldots, n_{c5} there holds

[\Psi]_i \le 0

the i-th constraint

[\varphi(k+j|k)]_i \le [\Phi^{\infty}]_i

cannot be satisfied for j \to \infty and the infinite horizon predictive control problem is infeasible. A necessary condition for feasibility is therefore

E = \mathrm{diag}( \Psi_1, \Psi_2, \ldots, \Psi_{n_{c5}} ) > 0
Proof of theorem 21:
The performance index can be split into two parts:

J(v,k) = \sum_{j=0}^{\infty} z^T(k+j|k)\,\Gamma(j)\, z(k+j|k) = J_1(v,k) + J_2(v,k)

where

J_1(v,k) = \sum_{j=0}^{N_s-1} z^T(k+j|k)\,\Gamma(j)\, z(k+j|k)

J_2(v,k) = \sum_{j=N_s}^{\infty} z^T(k+j|k)\,\Gamma_{ss}\, z(k+j|k)

For technical reasons we will consider the derivation of criterion J_2 before we derive criterion J_1.

Derivation of J_2:
Consider system (5.79) - (5.81) with structured input signal (5.49), (5.50) and (5.51) and constraint (5.46). Define for j \ge N_s:

x_\Delta(k+j|k) = x(k+j|k) - x_{ss}
v_\Delta(k+j|k) = v(k+j|k) - v_{ss}

Then, using the fact that w(k+j|k) = w_{ss} and e(k+j|k) = 0 for j \ge N_s, it follows for j \ge N_s:

x_\Delta(k+j+1|k) = x(k+j+1|k) - x_{ss}
 = A x(k+j|k) + B_1 e(k+j|k) + B_2 w(k+j|k) + B_3 v(k+j|k) - x_{ss}
 = A x(k+j|k) + B_1 \cdot 0 + B_2 w_{ss} + B_3 v_{ss} + B_3 v_\Delta(k+j|k) - x_{ss}
 = A x(k+j|k) + (I - A) x_{ss} - x_{ss} + B_3 C_v x_v(k+j|k)
 = A x_\Delta(k+j|k) + B_3 C_v x_v(k+j|k)

z(k+j|k) = C_2 x(k+j|k) + D_{21} e(k+j|k) + D_{22} w(k+j|k) + D_{23} v(k+j|k)
 = C_2 x_\Delta(k+j|k) + C_2 x_{ss} + D_{22} w_{ss} + D_{23} v_{ss} + D_{23} v_\Delta(k+j|k)
 = C_2 x_\Delta(k+j|k) + 0 + D_{23} C_v x_v(k+j|k)
 = C_2 x_\Delta(k+j|k) + D_{23} C_v x_v(k+j|k)

Define

x_T(k+j|k) = \begin{bmatrix} x_\Delta(k+j|k) \\ x_v(k+j|k) \end{bmatrix}

Then, using A_T and C_T from (5.87), we find for j \ge N_s:

x_T(k+j+1|k) = A_T\, x_T(k+j|k)
z(k+j|k) = C_T\, x_T(k+j|k)

which is an autonomous system. This means that z(k+j|k) can be computed for all j \ge N_s if the initial state x_T(k+N_s|k) is given, so if x(k+N_s|k) and x_v(k+N_s|k) are known. The state
x_v(k+N_s|k) is given by equation (5.49); the state x(k+N_s|k) can be found by successive substitution:

x(k+N_s|k) = A^{N_s} x(k|k) + A^{N_s-1} B_1 e(k|k) + \begin{bmatrix} A^{N_s-1} B_2 & \cdots & B_2 \end{bmatrix} \tilde{w}(k) + \begin{bmatrix} A^{N_s-1} B_3 & \cdots & B_3 \end{bmatrix} \tilde{v}(k)

Together with x_v(k+N_s|k) = B_v \tilde{v}(k) we obtain

x_T(k+N_s|k) = \begin{bmatrix} x(k+N_s|k) - x_{ss} \\ x_v(k+N_s|k) \end{bmatrix}
 = \bar{C}_{N_s,3}\, x(k|k) + \bar{D}_{N_s,31}\, e(k|k) + \bar{D}_{N_s,32}\, \tilde{w}(k) + \bar{D}_{N_s,33}\, \tilde{v}(k)

with \bar{C}_{N_s,3}, \bar{D}_{N_s,31}, \bar{D}_{N_s,32} and \bar{D}_{N_s,33} as defined in (5.96)-(5.98), and x_{ss} = D_{ssx} \tilde{w}(k).
The prediction z(k+j|k) for j \ge N_s is obtained by successive substitution, resulting in:

z(k+j|k) = C_T A_T^{j-N_s} x_T(k+N_s|k)   (5.104)

Now a problem occurs when the matrix A_T contains unstable eigenvalues (|\lambda_i| \ge 1). For j \to \infty, the matrix A_T^{j-N_s} will become unbounded, and so will z. The only way to solve this problem is to make sure that the part of the state x_T(k+N_s|k) related to the unstable poles is zero (Rawlings & Muske, [51]). Make a partitioning of A_T into a stable part \bar{A}_s and an unstable part \bar{A}_u as in (5.88); then (5.104) becomes:

z(k+j|k) = C_T \begin{bmatrix} T_s & T_u \end{bmatrix} \begin{bmatrix} \bar{A}_s^{j-N_s} & 0 \\ 0 & \bar{A}_u^{j-N_s} \end{bmatrix} \begin{bmatrix} \bar{T}_s \\ \bar{T}_u \end{bmatrix} x_T(k+N_s|k)
 = C_T T_s \bar{A}_s^{j-N_s} \bar{T}_s x_T(k+N_s|k) + C_T T_u \bar{A}_u^{j-N_s} \bar{T}_u x_T(k+N_s|k)   (5.105)

The matrix \bar{A}_u^{j-N_s} will grow unbounded for j \to \infty. The prediction z(k+j|k) for j \to \infty will only remain bounded if

\bar{T}_u\, x_T(k+N_s|k) = 0.

In order to guarantee closed-loop stability for systems with unstable modes in A_T, we need to satisfy the equality constraint

\bar{T}_u x_T(k+N_s|k) = \bar{T}_u \Big( \bar{C}_{N_s,3} x(k|k) + \bar{D}_{N_s,31} e(k|k) + \bar{D}_{N_s,32} \tilde{w}(k) + \bar{D}_{N_s,33} \tilde{v}(k) \Big) = 0   (5.106)

Together with the equality constraint (5.83) we obtain the final constraint:

\begin{bmatrix} \bar{T}_u \bar{C}_{N_s,3} \\ 0 \end{bmatrix} x(k|k) + \begin{bmatrix} \bar{T}_u \bar{D}_{N_s,31} \\ 0 \end{bmatrix} e(k|k) + \begin{bmatrix} \bar{T}_u \bar{D}_{N_s,32} \\ -S_v^{\perp} \bar{D}_{ssv} \end{bmatrix} \tilde{w}(k) + \begin{bmatrix} \bar{T}_u \bar{D}_{N_s,33} \\ S_v^{\perp} \end{bmatrix} \tilde{v}(k) = 0

Consider Y as given in (5.99) with right-inverse Y^{r} = \begin{bmatrix} Y_1^{r} & Y_2^{r} \end{bmatrix} and right-complement Y^{r\perp}; then the set of all signals \tilde{v}(k) satisfying the equality constraints is given by

\tilde{v}(k) = \tilde{v}_E(k) + Y^{r\perp} \mu(k)   (5.107)

where \tilde{v}_E is given by

\tilde{v}_E(k) = -Y^{r} \Big( \begin{bmatrix} \bar{T}_u \bar{C}_{N_s,3} \\ 0 \end{bmatrix} x(k|k) + \begin{bmatrix} \bar{T}_u \bar{D}_{N_s,31} \\ 0 \end{bmatrix} e(k|k) + \begin{bmatrix} \bar{T}_u \bar{D}_{N_s,32} \\ -S_v^{\perp} \bar{D}_{ssv} \end{bmatrix} \tilde{w}(k) \Big)
 = -Y_1^{r} \bar{T}_u \Big( \bar{C}_{N_s,3} x(k|k) + \bar{D}_{N_s,31} e(k|k) + \bar{D}_{N_s,32} \tilde{w}(k) \Big) + Y_2^{r} S_v^{\perp} \bar{D}_{ssv} \tilde{w}(k)

and \mu(k) is a free vector with the appropriate dimensions.
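The parameterization (5.107) of all solutions of the equality constraints can be sketched numerically. The helper below (our own SVD-based construction, assuming Y has full row rank) returns a right-inverse Y^{r} and a right-complement Y^{r\perp}, so that every solution of Y \tilde{v} = c takes the form \tilde{v} = Y^{r} c + Y^{r\perp} \mu.

```python
# Sketch: right-inverse Yr (Y @ Yr = I) and right-complement Yperp
# (Y @ Yperp = 0, columns span the null space of Y), for full-row-rank Y.
import numpy as np

def right_factors(Y):
    Yr = Y.T @ np.linalg.inv(Y @ Y.T)          # Moore-Penrose right-inverse
    U, s, Vt = np.linalg.svd(Y)
    Yperp = Vt[Y.shape[0]:, :].T               # null-space basis of Y
    return Yr, Yperp
```

With these factors, \tilde{v} = -Y^{r} c + Y^{r\perp} \mu sweeps the entire affine solution set of Y\tilde{v} + c = 0 as \mu varies freely, which is exactly how \mu(k) enters the QP.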
If the equality \bar{T}_u x_T(k+N_s|k) = 0 is satisfied, the prediction becomes:

z(k+j|k) = C_T \begin{bmatrix} T_s & T_u \end{bmatrix} \begin{bmatrix} \bar{A}_s^{j-N_s} & 0 \\ 0 & \bar{A}_u^{j-N_s} \end{bmatrix} \begin{bmatrix} \bar{T}_s x_T(k+N_s|k) \\ 0 \end{bmatrix}   (5.108)
 = C_T T_s \bar{A}_s^{j-N_s} \bar{T}_s x_T(k+N_s|k)   (5.109)

Define

\bar{M} = \sum_{j=N_s}^{\infty} (\bar{A}_s^T)^{j-N_s}\, T_s^T C_T^T \Gamma_{ss} C_T T_s\, \bar{A}_s^{j-N_s}   (5.110)

then

J_2(v,k) = \sum_{j=N_s}^{\infty} z^T(k+j|k)\,\Gamma_{ss}\, z(k+j|k)
 = \sum_{j=N_s}^{\infty} x_T^T(k+N_s|k)\, \bar{T}_s^T (\bar{A}_s^T)^{j-N_s} T_s^T C_T^T \Gamma_{ss} C_T T_s \bar{A}_s^{j-N_s} \bar{T}_s\, x_T(k+N_s|k)
 = x_T^T(k+N_s|k)\, \bar{T}_s^T \bar{M} \bar{T}_s\, x_T(k+N_s|k)
 = \tfrac{1}{2} \mu^T(k) H_2 \mu(k) + \mu^T(k) f_2(k) + c_2(k)

where

H_2 = 2 (Y^{r\perp})^T \bar{D}_{N_s,33}^T \bar{T}_s^T \bar{M} \bar{T}_s \bar{D}_{N_s,33} Y^{r\perp}
f_2(k) = 2 (Y^{r\perp})^T \bar{D}_{N_s,33}^T \bar{T}_s^T \bar{M} \bar{T}_s\, x_{T,E}(k+N_s|k)
c_2(k) = x_{T,E}^T(k+N_s|k)\, \bar{T}_s^T \bar{M} \bar{T}_s\, x_{T,E}(k+N_s|k)
x_{T,E}(k+N_s|k) = \bar{C}_{N_s,3} x(k|k) + \bar{D}_{N_s,31} e(k|k) + \bar{D}_{N_s,32} \tilde{w}(k) + \bar{D}_{N_s,33} \tilde{v}_E(k)

The matrix \bar{M} can be computed as the solution of the discrete-time Lyapunov equation

\bar{A}_s^T \bar{M} \bar{A}_s - \bar{M} + T_s^T C_T^T \Gamma_{ss} C_T T_s = 0
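Since \bar{A}_s is stable, the series (5.110) converges and the Lyapunov solution can be obtained by straightforward summation; the sketch below (our own helper, truncating the series) illustrates this.

```python
# Sketch: solve  As' M As - M + Q = 0  by summing the convergent series
# M = sum_{j>=0} (As')^j Q As^j, valid because As is stable.
import numpy as np

def dlyap(As, Q, iters=2000):
    M, term = np.zeros_like(Q), Q.copy()
    for _ in range(iters):
        M += term
        term = As.T @ term @ As
    return M
```

For the scalar case \bar{A}_s = 0.5, Q = 1 this gives \bar{M} = 1/(1 - 0.25) = 4/3, and one can verify the residual \bar{A}_s^T \bar{M} \bar{A}_s - \bar{M} + Q = 0 directly.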
Derivation of J_1:
The expression for J_1(v,k) is equivalent to the expression for the performance index J(v,k) with a finite prediction horizon. The performance signal vector \tilde{z}(k) is defined for switching horizon N_s:

\tilde{z}(k) = \begin{bmatrix} z(k|k) \\ z(k+1|k) \\ \vdots \\ z(k+N_s-1|k) \end{bmatrix}

Using the results of chapter 3 we derive:

\tilde{z}(k) = \bar{C}_{N_s,2} x(k|k) + \bar{D}_{N_s,21} e(k|k) + \bar{D}_{N_s,22} \tilde{w}(k) + \bar{D}_{N_s,23} \tilde{v}(k)
 = \bar{C}_{N_s,2} x(k|k) + \bar{D}_{N_s,21} e(k|k) + \bar{D}_{N_s,22} \tilde{w}(k) + \bar{D}_{N_s,23} \big( \tilde{v}_E(k) + Y^{r\perp} \mu(k) \big)

where \bar{C}_{N_s,2}, \bar{D}_{N_s,21}, \bar{D}_{N_s,22} and \bar{D}_{N_s,23} are given in (5.93)-(5.95). Now J_1 becomes

J_1(v,k) = \sum_{j=0}^{N_s-1} z^T(k+j|k)\,\Gamma(j)\, z(k+j|k)
 = \tilde{z}^T(k)\, \tilde{\Gamma}_{N_s}\, \tilde{z}(k)
 = \tfrac{1}{2} \mu^T(k) H_1 \mu(k) + \mu^T(k) f_1(k) + c_1(k)

where

H_1 = 2 (Y^{r\perp})^T \bar{D}_{N_s,23}^T \tilde{\Gamma}_{N_s} \bar{D}_{N_s,23} Y^{r\perp}
f_1(k) = 2 (Y^{r\perp})^T \bar{D}_{N_s,23}^T \tilde{\Gamma}_{N_s}\, \tilde{z}_E(k)
c_1(k) = \tilde{z}_E^T(k)\, \tilde{\Gamma}_{N_s}\, \tilde{z}_E(k)
\tilde{z}_E(k) = \bar{C}_{N_s,2} x(k|k) + \bar{D}_{N_s,21} e(k|k) + \bar{D}_{N_s,22} \tilde{w}(k) + \bar{D}_{N_s,23} \tilde{v}_E(k)
Minimization of J_1 + J_2:
Combining the results we obtain the problem of minimizing

J_1 + J_2 = \tfrac{1}{2} \mu^T(k) (H_1 + H_2) \mu(k) + \mu^T(k) (f_1(k) + f_2(k)) + c_1(k) + c_2(k)   (5.111)
 = \tfrac{1}{2} \mu^T(k)\, H\, \mu(k) + \mu^T(k)\, f(k) + c(k)   (5.112)

Now define:

x_{T,0}(k+N_s|k) = \bar{C}_{N_s,3} x(k|k) + \bar{D}_{N_s,31} e(k|k) + \bar{D}_{N_s,32} \tilde{w}(k)   (5.113)
\tilde{z}_0(k) = \bar{C}_{N_s,2} x(k|k) + \bar{D}_{N_s,21} e(k|k) + \bar{D}_{N_s,22} \tilde{w}(k)   (5.114)
\bar{\psi}_0(k) = -S_v^{\perp} \bar{D}_{ssv} \tilde{w}(k)   (5.115)

or

\begin{bmatrix} x_{T,0}(k+N_s|k) \\ \tilde{z}_0(k) \\ \bar{\psi}_0(k) \end{bmatrix} = \begin{bmatrix} \bar{C}_{N_s,3} \\ \bar{C}_{N_s,2} \\ 0 \end{bmatrix} x(k|k) + \begin{bmatrix} \bar{D}_{N_s,31} \\ \bar{D}_{N_s,21} \\ 0 \end{bmatrix} e(k|k) + \begin{bmatrix} \bar{D}_{N_s,32} \\ \bar{D}_{N_s,22} \\ -S_v^{\perp} \bar{D}_{ssv} \end{bmatrix} \tilde{w}(k)   (5.116)

Then:

\tilde{v}_E(k) = -Y_1^{r} \bar{T}_u\, x_{T,0}(k+N_s|k) - Y_2^{r}\, \bar{\psi}_0(k)

x_{T,E}(k+N_s|k) = x_{T,0}(k+N_s|k) + \bar{D}_{N_s,33} \tilde{v}_E(k)
 = ( I - \bar{D}_{N_s,33} Y_1^{r} \bar{T}_u )\, x_{T,0}(k+N_s|k) - \bar{D}_{N_s,33} Y_2^{r}\, \bar{\psi}_0(k)

\tilde{z}_E(k) = \tilde{z}_0(k) + \bar{D}_{N_s,23} \tilde{v}_E(k)
 = \tilde{z}_0(k) - \bar{D}_{N_s,23} Y_1^{r} \bar{T}_u\, x_{T,0}(k+N_s|k) - \bar{D}_{N_s,23} Y_2^{r}\, \bar{\psi}_0(k)

and thus:

\begin{bmatrix} \tilde{v}_E(k) \\ x_{T,E}(k+N_s|k) \\ \tilde{z}_E(k) \end{bmatrix} = \begin{bmatrix} -Y_1^{r} \bar{T}_u & 0 & -Y_2^{r} \\ ( I - \bar{D}_{N_s,33} Y_1^{r} \bar{T}_u ) & 0 & -\bar{D}_{N_s,33} Y_2^{r} \\ -\bar{D}_{N_s,23} Y_1^{r} \bar{T}_u & I & -\bar{D}_{N_s,23} Y_2^{r} \end{bmatrix} \begin{bmatrix} x_{T,0}(k+N_s|k) \\ \tilde{z}_0(k) \\ \bar{\psi}_0(k) \end{bmatrix}   (5.117)
Collecting the results from equations (5.113)-(5.117) we obtain:

f(k) = f_1(k) + f_2(k)
 = 2 (Y^{r\perp})^T \bar{D}_{N_s,23}^T \tilde{\Gamma}_{N_s}\, \tilde{z}_E(k) + 2 (Y^{r\perp})^T \bar{D}_{N_s,33}^T \bar{T}_s^T \bar{M} \bar{T}_s\, x_{T,E}(k+N_s|k)
 = \begin{bmatrix} 0 & 2 (Y^{r\perp})^T \bar{D}_{N_s,33}^T \bar{T}_s^T \bar{M} \bar{T}_s & 2 (Y^{r\perp})^T \bar{D}_{N_s,23}^T \tilde{\Gamma}_{N_s} \end{bmatrix} \begin{bmatrix} \tilde{v}_E(k) \\ x_{T,E}(k+N_s|k) \\ \tilde{z}_E(k) \end{bmatrix}

In the absence of inequality constraints we obtain the unconstrained minimization of (5.112). The minimum \mu = \mu_E is found for

\mu_E(k) = -(H_1 + H_2)^{-1} (f_1(k) + f_2(k)) = -H^{-1} f(k)

and so

\tilde{v}(k) = \tilde{v}_E(k) + Y^{r\perp} \mu_E(k)
 = \tilde{v}_E(k) - Y^{r\perp} H^{-1} f(k)
 = \tilde{v}_E(k) - 2 Y^{r\perp} H^{-1} (Y^{r\perp})^T \bar{D}_{N_s,23}^T \tilde{\Gamma}_{N_s}\, \tilde{z}_E(k) - 2 Y^{r\perp} H^{-1} (Y^{r\perp})^T \bar{D}_{N_s,33}^T \bar{T}_s^T \bar{M} \bar{T}_s\, x_{T,E}(k+N_s|k)
 = \begin{bmatrix} I & -2 Y^{r\perp} H^{-1} (Y^{r\perp})^T \bar{D}_{N_s,33}^T \bar{T}_s^T \bar{M} \bar{T}_s & -2 Y^{r\perp} H^{-1} (Y^{r\perp})^T \bar{D}_{N_s,23}^T \tilde{\Gamma}_{N_s} \end{bmatrix} \begin{bmatrix} \tilde{v}_E(k) \\ x_{T,E}(k+N_s|k) \\ \tilde{z}_E(k) \end{bmatrix}

 = \begin{bmatrix} I & -2 Y^{r\perp} H^{-1} (Y^{r\perp})^T \bar{D}_{N_s,33}^T \bar{T}_s^T \bar{M} \bar{T}_s & -2 Y^{r\perp} H^{-1} (Y^{r\perp})^T \bar{D}_{N_s,23}^T \tilde{\Gamma}_{N_s} \end{bmatrix}
 \begin{bmatrix} -Y_1^{r} \bar{T}_u & 0 & -Y_2^{r} \\ ( I - \bar{D}_{N_s,33} Y_1^{r} \bar{T}_u ) & 0 & -\bar{D}_{N_s,33} Y_2^{r} \\ -\bar{D}_{N_s,23} Y_1^{r} \bar{T}_u & I & -\bar{D}_{N_s,23} Y_2^{r} \end{bmatrix}
 \Bigg( \begin{bmatrix} \bar{C}_{N_s,3} \\ \bar{C}_{N_s,2} \\ 0 \end{bmatrix} x(k|k) + \begin{bmatrix} \bar{D}_{N_s,31} \\ \bar{D}_{N_s,21} \\ 0 \end{bmatrix} e(k|k) + \begin{bmatrix} \bar{D}_{N_s,32} \\ \bar{D}_{N_s,22} \\ -S_v^{\perp} \bar{D}_{ssv} \end{bmatrix} \tilde{w}(k) \Bigg)

 = \begin{bmatrix} Z_1 + Z_3 Y_1^{r} \bar{T}_u & Z_2 & (Z_3 - I) Y_2^{r} \end{bmatrix}
 \Bigg( \begin{bmatrix} \bar{C}_{N_s,3} \\ \bar{C}_{N_s,2} \\ 0 \end{bmatrix} x(k|k) + \begin{bmatrix} \bar{D}_{N_s,31} \\ \bar{D}_{N_s,21} \\ 0 \end{bmatrix} e(k|k) + \begin{bmatrix} \bar{D}_{N_s,32} \\ \bar{D}_{N_s,22} \\ -S_v^{\perp} \bar{D}_{ssv} \end{bmatrix} \tilde{w}(k) \Bigg)

 = \tilde{F} x(k|k) + \tilde{D}_e e(k|k) + \tilde{D}_w \tilde{w}(k)

The optimal input becomes

v(k) = E_v \tilde{v}(k)
 = E_v \tilde{F} x(k|k) + E_v \tilde{D}_e e(k|k) + E_v \tilde{D}_w \tilde{w}(k)
 = F x(k|k) + D_e e(k|k) + D_w \tilde{w}(k)

which is equal to control law (5.103).
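Before turning to the inequality constraints, it is instructive to check numerically the completing-the-square step that the next part of the proof relies on: shifting the QP variable by the unconstrained optimum, \mu = -H^{-1} f + \mu_I, leaves only a pure quadratic in \mu_I plus a constant. This toy script (random data, our own construction) verifies the identity.

```python
# Numerical check of (5.120)-(5.123): with mu = -H^-1 f + mu_I,
#   0.5 mu'H mu + f'mu + c  =  0.5 mu_I'H mu_I + (c - 0.5 f'H^-1 f).
import numpy as np

rng = np.random.default_rng(0)
H = np.array([[2.0, 0.3], [0.3, 1.0]])      # positive definite Hessian
f, c = np.array([1.0, -2.0]), 0.7
mu_I = rng.standard_normal(2)
mu = -np.linalg.solve(H, f) + mu_I
lhs = 0.5 * mu @ H @ mu + f @ mu + c
rhs = 0.5 * mu_I @ H @ mu_I + c - 0.5 * f @ np.linalg.solve(H, f)
assert np.allclose(lhs, rhs)
```

Because the constant term does not depend on \mu_I, only the quadratic \tfrac12 \mu_I^T H \mu_I needs to be handed to the QP solver, exactly as in (5.123).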
Inequality constraints:
Now we will concentrate on the inequality constraints. The following derivation is an
extension of the result, given in Rawlings & Muske [51]. Consider the inequality constraints:

(k) =

C
Ns,4
x(k|k)+

D
Ns,41
e(k|k)+

D
Ns,42
w(k)+

D
Ns,43
v(k)

(k)
where we assume that

(k) has nite dimension. To allow inequality constraints beyond
the switching horizon N
s
we have introduced an extra inequality constraint

(k + j|k)

, j N
s
where

R
n
c5
1
is a constant vector and

(k + j|k) is a prediction of

(k + j|k)
given by

(k + j|k) = C
5
x(k + j|k) + D
51
e(k + j|k) + D
52
w(k + j|k) + D
53
v(k + j|k)
Dene , E, C

and W
m
according to (5.89), (5.90), (5.91) and (5.92). Further let n

be
dened as in theorem 21 and let

(k + N
s
+ j|k)

for j = 1, . . . , n

(5.118)
We derive for j > 0:

(k+N
s
+j|k) = C
5
x(k + N
s
+ j|k) + D
52
w(k + N
s
+ j|k) + D
53
v(k + N
s
+ j|k)
= C
5
x
T
(k + N
s
+ j|k) + D
53
C
v
x
v
(k + N
s
+ j|k)) +
+C
5
x
ss
+ D
52
w
ss
+ D
53
v
ss
= C
5
x

(k + N
s
+ j|k) + D
53
C
v
x
v
(k + N
s
+ j) +
+(C
5
D
ssx
+ D
52
+ D
53
D
ssv
)w
ss
= C

x
T
(k + N
s
+ j|k) + (C
5
D
ssx
+ D
52
+ D
53
D
ssv
)w
ss
From equation (5.118) we know that for j = 1, . . . , n

(k +N
s
+j|k) (C
5
D
ssx
+D
52
+D
53
D
ssv
)w
ss
<

(C
5
D
ssx
+D
52
+D
53
D
ssv
)w
ss
and so
C

x
T
(k + N
s
+ j|k)
5.2. INFINITE HORIZON SPCP 87
Multiplied by E
1
we obtain for j = 1, . . . , n

:
E
1

(k + N
s
+ j|k) = E
1
C

x
T
(k + N
s
+ j|k) 1
or, equivalently,
E
1
C

A
j
T
x
T
(k + N
s
|k) 1
We know

T
u
x
T
(k + N
s
|k) = 0. We nd that:
E
1
C

T
s

A
j
s

T
s
x
T
(k + N
s
|k) 1.
We know that n

is chosen such that the above for j = 1, . . . , n

implies that
E
1
C

T
s

A
j
s

T
s
x
T
(k + N
s
|k) 1.
for all j > n

. Combined with

T
u
x
T
(k + N
s
|k) = 0, this implies that
E
1
C

A
j
T
x
T
(k + N
s
|k) 1.
for all j > n

. Therefore, starting from (5.118) we derived

(k + N
s
+ j|k)

for j = n

+ 1, . . . , (5.119)
This means that (5.118), together with the extra equality constraint (5.106) implies (5.119).
Equation (5.118) can be rewritten as
W
n

x
T
(k + N
s
|k) 1
Together with constraint (5.84) this results in the overall inequality constraint
_

(k)

(k)
W
n

x
T
(k + N
s
|k) 1
_
0
Consider performance index (5.112) in which the optimization vector μ(k) is written as

μ(k) = μ_E(k) + μ_I(k)                                        (5.120)
     = −(H_1 + H_2)^{-1} (f_1(k) + f_2(k)) + μ_I(k)           (5.121)
     = −H^{-1} f(k) + μ_I(k)                                  (5.122)

where μ_E(k) is the equality constrained solution given in the previous section, and μ_I(k) is an additional term to take the inequality constraints into account. The performance index now becomes

(1/2) μ^T(k) H μ(k) + f^T(k) μ(k) + c(k) =
 = (1/2) ( −H^{-1} f(k) + μ_I(k) )^T H ( −H^{-1} f(k) + μ_I(k) ) + f^T(k) ( −H^{-1} f(k) + μ_I(k) ) + c(k)
 = (1/2) μ_I^T(k) H μ_I(k) − f^T(k) H^{-1} H μ_I(k) + (1/2) f^T(k) H^{-1} H H^{-1} f(k) − f^T(k) H^{-1} f(k) + f^T(k) μ_I(k) + c(k)
 = (1/2) μ_I^T(k) H μ_I(k) − (1/2) f^T(k) H^{-1} f(k) + c(k)
 = (1/2) μ_I^T(k) H μ_I(k) + c̄(k)                            (5.123)
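The completing-the-square step (5.120)-(5.123) can be checked numerically for any positive definite H: the cross terms cancel and only (1/2) μ_I^T H μ_I plus a μ_I-independent constant remains. A small sketch with random placeholder data (none of the numbers come from the notes):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
M = rng.standard_normal((n, n))
H = M @ M.T + n * np.eye(n)           # positive definite Hessian
f = rng.standard_normal(n)
c = 1.7                               # arbitrary constant term
mu_I = rng.standard_normal(n)         # inequality-constraint correction term

mu_E = -np.linalg.solve(H, f)         # equality-constrained solution mu_E = -H^{-1} f
mu = mu_E + mu_I                      # split (5.120)

J = 0.5 * mu @ H @ mu + f @ mu + c    # original performance index
c_bar = c - 0.5 * f @ np.linalg.solve(H, f)
J_reduced = 0.5 * mu_I @ H @ mu_I + c_bar   # reduced form (5.123)

assert np.isclose(J, J_reduced)
```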
Now consider the input signal

v(k) = v_E(k) + Y_r μ(k)
     = v_E(k) − Y_r H^{-1} f(k) + Y_r μ_I(k)
     = v_I(k) + Y_r μ_I(k)

where

v_I(k) = v_E(k) + Y_r μ_E(k) = −F̃ x(k|k) + D̃_e e(k|k) + D̃_w w̃(k)

and define the signals

ψ̃_I(k) = ψ̃_0(k) + D̃_{Ns,43} v_I(k)
 = (C̃_{Ns,4} − D̃_{Ns,43} F̃) x(k|k) + (D̃_{Ns,41} + D̃_{Ns,43} D̃_e) e(k|k) + (D̃_{Ns,42} + D̃_{Ns,43} D̃_w) w̃(k)     (5.124)

x_{T,I}(k+N_s|k) = x_{T,0}(k+N_s|k) + D̃_{Ns,33} v_I(k)
 = (C̃_{Ns,3} − D̃_{Ns,33} F̃) x(k|k) + (D̃_{Ns,31} + D̃_{Ns,33} D̃_e) e(k|k) + (D̃_{Ns,32} + D̃_{Ns,33} D̃_w) w̃(k)     (5.125)

then

ψ̃(k) = ψ̃_I(k) + D̃_{Ns,43} Y_r μ_I(k)
x_T(k+N_s|k) = x_{T,I}(k+N_s|k) + D̃_{Ns,33} Y_r μ_I(k)

and we can rewrite the constraint as:

[ ψ̃(k) − Ψ̄(k) ; W_{n_ψ} x_T(k+N_s|k) − 1 ] = A_ψ μ_I(k) + b_ψ(k) ≤ 0     (5.126)
where

b_ψ(k) = [ ψ̃_I(k) − Ψ̄(k) ; W_{n_ψ} x_{T,I}(k+N_s|k) − 1 ]     (5.127)

A_ψ = [ D̃_{Ns,43} ; W_{n_ψ} D̃_{Ns,33} ] Y_r                   (5.128)
Substitution of (5.124) and (5.125) in (5.127) gives

b_ψ(k) = [ C̃_{Ns,4} − D̃_{Ns,43} F̃ ; W_{n_ψ} (C̃_{Ns,3} − D̃_{Ns,33} F̃) ] x(k|k)
 + [ D̃_{Ns,41} + D̃_{Ns,43} D̃_e ; W_{n_ψ} (D̃_{Ns,31} + D̃_{Ns,33} D̃_e) ] e(k|k)
 + [ D̃_{Ns,42} + D̃_{Ns,43} D̃_w ; W_{n_ψ} (D̃_{Ns,32} + D̃_{Ns,33} D̃_w) ] w̃(k) − [ Ψ̄(k) ; 1 ]

The constrained infinite horizon MPC problem is now equivalent to a Quadratic Programming problem of minimizing (5.123) subject to (5.126). Note that the term c̄(k) in (5.123) does not play any role in the optimization and can therefore be skipped.

□ End Proof
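In practice the resulting QP (minimize (5.123) subject to (5.126)) is handed to a numerical QP solver. A minimal sketch with a generic solver and made-up matrices standing in for H, A_ψ and b_ψ(k):

```python
import numpy as np
from scipy.optimize import minimize

# Toy instance of the QP: min 1/2 mu^T H mu  s.t.  A mu + b <= 0.
# The matrices are placeholders, not taken from the notes.
H = np.array([[2.0, 0.3], [0.3, 1.5]])
A_psi = np.array([[1.0, 1.0], [-1.0, 0.5]])
b_psi = np.array([1.0, -0.2])

cost = lambda mu: 0.5 * mu @ H @ mu
# SLSQP expects inequality constraints in the form g(mu) >= 0.
cons = {"type": "ineq", "fun": lambda mu: -(A_psi @ mu + b_psi)}

res = minimize(cost, x0=np.zeros(2), jac=lambda mu: H @ mu,
               method="SLSQP", constraints=[cons])
mu_I = res.x
assert res.success and np.all(A_psi @ mu_I + b_psi <= 1e-6)
```

The first constraint is active at the optimum here, so μ_I is nonzero even though the unconstrained minimizer of (5.123) is μ_I = 0.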
5.3 Implementation

5.3.1 Implementation and computation in the LTI case

In this chapter the predictive control problem was solved for various settings (unconstrained, equality/inequality constrained, finite/infinite horizon). In the absence of inequality constraints, the resulting controller is given by the control law:

v(k) = −F x(k|k) + D_e e(k|k) + D_w w̃(k).                    (5.129)

Unfortunately, the state x(k|k) of the plant and the true value of e(k|k) are unknown. We therefore introduce a controller state x_c(k) and an estimate e_c(k). By substitution of x_c(k) and e_c(k) in the system equations (5.1)-(5.2) and control law (5.129) we obtain:

x_c(k+1) = A x_c(k) + B1 e_c(k) + B2 w(k) + B3 v(k)          (5.130)
e_c(k) = D11^{-1} ( y(k) − C1 x_c(k) − D12 w(k) )            (5.131)
v(k) = −F x_c(k) + D_e e_c(k) + D_w w̃(k)                     (5.132)
Elimination of e_c(k) and v(k) from the equations results in the closed loop form for the controller:

x_c(k+1) = (A − B3 F − B1 D11^{-1} C1 − B3 D_e D11^{-1} C1) x_c(k)
 + (B1 D11^{-1} + B3 D_e D11^{-1}) y(k)
 + (B3 D_w − B3 D_e D11^{-1} D12 E_w + B2 E_w − B1 D11^{-1} D12 E_w) w̃(k)     (5.133)

v(k) = −(F + D_e D11^{-1} C1) x_c(k) + D_e D11^{-1} y(k) + (D_w − D_e D11^{-1} D12 E_w) w̃(k)     (5.134)
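The elimination step can be verified numerically: driving the observer form (5.130)-(5.132) and the closed controller form with the same y and w sequences must produce identical inputs v. A sketch with random placeholder matrices, scalar signals, E_w = I (one-step disturbance vector, so w̃ = w), and the sign convention v(k) = −F x_c(k) + D_e e_c(k) + D_w w(k) stated in the code:

```python
import numpy as np

rng = np.random.default_rng(1)
n, T = 3, 20
A = 0.5 * rng.standard_normal((n, n))
B1, B2, B3 = (rng.standard_normal((n, 1)) for _ in range(3))
C1 = rng.standard_normal((1, n))
D11 = np.array([[1.0]]); D12 = np.array([[0.3]])
F = rng.standard_normal((1, n))
De = np.array([[0.4]]); Dw = np.array([[0.2]])
D11i = np.linalg.inv(D11)

y = rng.standard_normal((T, 1, 1)); w = rng.standard_normal((T, 1, 1))

# Form 1: observer equations, with v = -F x_c + De e_c + Dw w.
xc = np.zeros((n, 1)); v1 = []
for k in range(T):
    ec = D11i @ (y[k] - C1 @ xc - D12 @ w[k])
    v = -F @ xc + De @ ec + Dw @ w[k]
    v1.append(v.copy())
    xc = A @ xc + B1 @ ec + B2 @ w[k] + B3 @ v

# Form 2: closed controller form after eliminating e_c and v.
Ac = A - B3 @ F - B1 @ D11i @ C1 - B3 @ De @ D11i @ C1
By = B1 @ D11i + B3 @ De @ D11i
Bw = B3 @ Dw - B3 @ De @ D11i @ D12 + B2 - B1 @ D11i @ D12
Cc = -(F + De @ D11i @ C1)
Dy = De @ D11i
Dw2 = Dw - De @ D11i @ D12
xc = np.zeros((n, 1)); v2 = []
for k in range(T):
    v2.append(Cc @ xc + Dy @ y[k] + Dw2 @ w[k])
    xc = Ac @ xc + By @ y[k] + Bw @ w[k]

assert np.allclose(np.array(v1), np.array(v2))
```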
Figure 5.1: Realization of the LTI SPCP controller

with E_w = [ I 0 · · · 0 ] such that w(k) = E_w w̃(k).
Note that this is a linear time-invariant (LTI) controller.
Figure 5.1 visualizes the controller, where the upper block is the true process, satisfying equations (5.1) and (5.2), and the lower block is the controller, satisfying equations (5.130), (5.131) and (5.132). Note that the states of the process and controller are denoted by x and x_c, respectively. When the controller is stabilizing and there is no model error, the state x_c and the noise estimate e_c will converge to the true state x and the true noise signal e, respectively. After a transient the states of plant and controller will become the same. Stability of the closed loop is discussed in chapter 6.
Computational aspects:

Consider the expressions (5.14)-(5.16). The matrices F, D_e and D_w can also be found by solving

( D̃23^T Γ̄ D̃23 ) [ F̃ D̃_e D̃_w ] = −D̃23^T Γ̄ [ C̃2 D̃21 D̃22 ]     (5.135)
where we define F = E_v F̃, D_e = E_v D̃_e and D_w = E_v D̃_w. Inversion of D̃23^T Γ̄ D̃23 may be badly conditioned and should be avoided. As defined in chapter 4, Γ̄(j) is a diagonal selection matrix with ones and zeros on the diagonal. This means that Γ̄ = Γ̄^2. Define M = Γ̄ D̃23; then we can do a singular value decomposition of M:

M = [ U1 U2 ] [ Σ ; 0 ] V^T

and equation (5.135) becomes:

( V Σ^2 V^T ) [ F̃ D̃_e D̃_w ] = −V Σ U1^T [ C̃2 D̃21 D̃22 ]     (5.136)
and so the solution becomes:

[ F̃ D̃_e D̃_w ] = −( V Σ^2 V^T )^{-1} V Σ U1^T [ C̃2 D̃21 D̃22 ] = −V Σ^{-1} U1^T [ C̃2 D̃21 D̃22 ]

and so

[ F D_e D_w ] = −E_v V Σ^{-1} U1^T [ C̃2 D̃21 D̃22 ]

For the singular value decomposition robust and reliable algorithms are available. Note that the inversion of Σ is simple, because it is a diagonal matrix and all its elements are positive (we assume D̃23^T Γ̄ D̃23 to have full rank).
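The SVD route can be checked against the normal-equation route numerically. A sketch with random placeholder data, where Gam stands for the 0/1 diagonal selection matrix Γ̄ and G for the stacked right-hand side [ C̃2 D̃21 D̃22 ]:

```python
import numpy as np

rng = np.random.default_rng(2)
m, p, q = 8, 3, 5                       # placeholder dimensions
D23 = rng.standard_normal((m, p))
Gam = np.diag(rng.integers(0, 2, size=m).astype(float))  # selection, Gam = Gam^2
Gam[0, 0] = Gam[1, 1] = Gam[2, 2] = 1.0  # keep M full column rank
G = rng.standard_normal((m, q))

M = Gam @ D23

# Naive route: invert the (possibly ill-conditioned) normal-equation matrix.
X_naive = -np.linalg.solve(D23.T @ Gam @ D23, D23.T @ Gam @ G)

# SVD route: M = U1 S V^T (economy), solution X = -V S^{-1} U1^T G.
U, s, Vt = np.linalg.svd(M, full_matrices=False)
X_svd = -Vt.T @ np.diag(1.0 / s) @ U.T @ G

assert np.allclose(X_naive, X_svd)
```

The SVD route is preferred because it avoids squaring the condition number of M, at the price of one SVD instead of one Cholesky solve.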
5.3.2 Implementation and computation in the full SPCP case

In this section we discuss the implementation of the full SPCP controller. For both the finite and the infinite horizon the full SPCP can be solved using an optimal control law

v(k) = v_0(k) + D_μ μ_I(k)                                   (5.137)

where μ_I is the solution of the quadratic programming problem

min_{μ_I} (1/2) μ_I^T H μ_I                                  (5.138)

subject to:

A_ψ μ_I + b_ψ(k) ≤ 0                                         (5.139)
Figure 5.2: Realization of the full SPCP controller (the static feedback of figure 5.1 is replaced by a nonlinear mapping block)
This optimization problem (5.137)-(5.139) can be seen as a nonlinear mapping with input variables x(k|k), e(k|k), w̃(k) and Ψ̄(k) and output signal μ_I(k):

μ_I(k) = h( x_c(k), e_c(k), w̃(k), Ψ̄(k) )

As in the LTI case (section 5.3.1), estimates x_c(k) and e_c(k) of the state x(k) and noise signal e(k) can be obtained using an observer.
Figure 5.2 visualizes the nonlinear controller, where the upper block is the true process, satisfying equations (5.1) and (5.2), and the lower block is the controller, in which the nonlinear mapping has inputs x_c(k), e_c(k), w̃(k) and Ψ̄(k) and output v(k). Again, when the controller is stabilizing and there is no model error, the state x_c and the noise estimate e_c will converge to the true state x and the true noise signal e, respectively. After a transient the states of plant and controller will become the same.
An advantage of predictive control is that, by writing the control law as the result of a constrained optimization problem (5.138)-(5.139), it can effectively deal with constraints. An important disadvantage is that at every time step a computationally expensive optimization problem has to be solved [78]. The time required for the optimization makes model predictive control unsuitable for fast systems and/or complex problems.
In this section we discuss the work of Bemporad et al. [5], who formulate the linear model predictive control problem as a multi-parametric quadratic program. The control variables are treated as optimization variables and the state variables as parameters. The optimal control action is a continuous and piecewise affine function of the state, under the assumption that the active constraints are linearly independent. The key advantage of this approach is that the control actions are computed off-line: the on-line computation simply reduces to a function evaluation problem.
The Kuhn-Tucker conditions for the quadratic programming problem (5.138)-(5.139) are given by:

H μ_I + A_ψ^T λ = 0                                          (5.140)
λ^T ( A_ψ μ_I + b_ψ(k) ) = 0                                 (5.141)
λ ≥ 0                                                        (5.142)
A_ψ μ_I + b_ψ(k) ≤ 0                                         (5.143)

Equation (5.140) leads to the optimal value for μ_I:

μ_I = −H^{-1} A_ψ^T λ                                        (5.144)

Substitution into equation (5.141) gives the condition

λ^T ( −A_ψ H^{-1} A_ψ^T λ + b_ψ(k) ) = 0                     (5.145)

Let λ_a and λ_i denote the Lagrange multipliers corresponding to the active and inactive constraints, respectively. For an inactive constraint there holds λ_i = 0, for an active constraint there holds λ_a > 0. Let S_i and S_a be selection matrices such that

[ λ_a ; λ_i ] = [ S_a ; S_i ] λ   and   [ S_a ; S_i ]^T [ S_a ; S_i ] = S_a^T S_a + S_i^T S_i = I

then

λ = [ S_a ; S_i ]^T [ λ_a ; λ_i ] = S_a^T λ_a
Substitution in (5.144) and (5.145) gives:

μ_I = −H^{-1} A_ψ^T S_a^T λ_a                                (5.146)
λ_a^T S_a ( −A_ψ H^{-1} A_ψ^T S_a^T λ_a + b_ψ(k) ) = 0       (5.147)

Because λ_a > 0, equation (5.147) leads to the condition:

−S_a A_ψ H^{-1} A_ψ^T S_a^T λ_a + S_a b_ψ(k) = 0

If S_a A_ψ has full row rank then λ_a can be written as

λ_a = ( S_a A_ψ H^{-1} A_ψ^T S_a^T )^{-1} S_a b_ψ(k)         (5.148)

Substitution in (5.146) gives

μ_I = −H^{-1} A_ψ^T S_a^T ( S_a A_ψ H^{-1} A_ψ^T S_a^T )^{-1} S_a b_ψ(k)     (5.149)
Inequality (5.143) now becomes

A_ψ μ_I + b_ψ(k) = b_ψ(k) − A_ψ H^{-1} A_ψ^T S_a^T ( S_a A_ψ H^{-1} A_ψ^T S_a^T )^{-1} S_a b_ψ(k)
 = ( I − A_ψ H^{-1} A_ψ^T S_a^T ( S_a A_ψ H^{-1} A_ψ^T S_a^T )^{-1} S_a ) b_ψ(k) ≤ 0     (5.150)

The solution μ_I of (5.149) for a given choice of S_a is the optimal solution of the quadratic programming problem as long as (5.150) holds and λ_a ≥ 0. Combining (5.148) and (5.150) gives:

[ M_1(S_a) ; −M_2(S_a) ] b_ψ(k) ≤ 0                          (5.151)
where

M_1(S_a) = I − A_ψ H^{-1} A_ψ^T S_a^T ( S_a A_ψ H^{-1} A_ψ^T S_a^T )^{-1} S_a
M_2(S_a) = ( S_a A_ψ H^{-1} A_ψ^T S_a^T )^{-1} S_a

Let

b_ψ(k) = N [ x_c(k) ; e_c(k) ; w̃(k) ; Ψ̄(k) ]

Consider the singular value decomposition

N = [ U1 U2 ] [ Σ 0 ; 0 0 ] [ V1^T ; V2^T ]

where Σ is square and has full rank n_N, and define

θ(k) = V1^T [ x_c(k) ; e_c(k) ; w̃(k) ; Ψ̄(k) ]               (5.152)

Condition (5.151) now becomes

[ M_1(S_a) ; −M_2(S_a) ] U1 Σ θ(k) ≤ 0                       (5.153)
Inequality (5.153) describes a polyhedron P(S_a) in R^{n_N}, and represents the set of all θ that lead to the set of active constraints described by the matrix S_a, and thus correspond to the solution μ_I described by (5.149).
For all combinations of active constraints (for all S_a) one can compute the set P(S_a), and in this way a part of R^{n_N} will be covered. For some values of θ (or, in other words, for some (x_c(k), e_c(k), w̃(k), Ψ̄(k))), there is no possible combination of active constraints, and so no feasible solution to the QP problem is found. In this case we have a feasibility problem; for these values of θ we have to relax the predictive control problem in some way (this will be discussed in the next section).
All sets P(S_a) can be computed off-line. The on-line optimization has now become a search problem: at time k we compute θ(k) following equation (5.152), and determine to which set P(S_a) the vector θ(k) belongs. μ_I can then be computed using the corresponding S_a in equation (5.149).
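Formulas (5.148)-(5.150) suggest a brute-force explicit solver for small problems: enumerate candidate active sets S_a, compute λ_a and μ_I, and accept the first combination with λ_a ≥ 0 and primal feasibility. A sketch on made-up data (H, A, b are placeholders):

```python
import numpy as np
from itertools import combinations

# Tiny QP: min 1/2 mu^T H mu  s.t.  A mu + b <= 0.
H = np.array([[2.0, 0.0], [0.0, 1.0]])
Hi = np.linalg.inv(H)
A = np.array([[1.0, 1.0], [-1.0, 2.0], [1.0, -1.0]])
b = np.array([1.0, 0.5, -3.0])

best = None
for r in range(A.shape[0] + 1):
    for act in combinations(range(A.shape[0]), r):
        if r == 0:
            mu, lam = np.zeros(2), np.zeros(0)
        else:
            Sa = A[list(act)]                  # rows S_a A_psi
            K = Sa @ Hi @ Sa.T
            if abs(np.linalg.det(K)) < 1e-12:
                continue                       # active constraints not independent
            lam = np.linalg.solve(K, b[list(act)])   # (5.148)
            mu = -Hi @ Sa.T @ lam                    # (5.149)
        if np.all(lam >= -1e-9) and np.all(A @ mu + b <= 1e-9):
            best = mu                          # dual and primal feasible: optimum
            break
    if best is not None:
        break

assert best is not None and np.all(A @ best + b <= 1e-9)
```

This is exactly the enumeration that explicit MPC performs off-line; the combinatorial growth in the number of constraints is visible in the nested loops.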
Summarizing, the method of Bemporad et al. [5] proposes an algorithm that partitions the state space into polyhedral sets and computes the coefficients of the affine function for every set. The result is a search tree that determines the set to which a state belongs. The tree is computed off-line, and the classification and computation of the control signal from the state and the coefficients is done on-line. The method works very well if the number of inequality constraints is small. However, the number of polyhedral sets grows dramatically with the number of inequality constraints.
Another way to avoid the cumbersome optimization is described by Van den Boom et al. [66]. The nonlinear mapping μ_I(k) = h(x_c(k), e_c(k), w̃(k), Ψ̄(k)) may be approximated by a neural network with any desired accuracy. Contrary to [5], linear independence of the active constraints is not required, and the approach is applicable to control problems with many constraints.
5.4 Feasibility

At the end of this chapter, a remark has to be made concerning feasibility and stability. In normal operating conditions, the optimization algorithm provides an optimal solution within acceptable ranges and limits of the constraints. A drawback of using (hard) constraints is that they may lead to infeasibility: there is no possible control action without violation of the constraints. This can happen when the prediction and control horizons are chosen too small, around setpoint changes, or when noise and disturbance levels are high. When there is no feasibility guarantee, stability cannot be proven.
If a solution does not exist within the predefined ranges and limits of the constraints, the optimizer should have the means to recover from the infeasibility. Three algorithms to handle infeasibility are discussed in this section:

- the soft-constraint approach
- the minimal time approach
- constraint prioritization
Soft-constraint algorithm

In contrast to hard constraints one can define soft constraints [55],[79], given by an additional term in the performance index that penalizes only constraint violations. For example, consider the problem with the hard constraint

min_ṽ J   subject to   ψ̃ ≤ Ψ̄                                (5.154)

This output constraint can be softened by considering the following problem:

min_{ṽ,ε} J + c ε^T ε   subject to   ψ̃ ≤ Ψ̄ + R ε ,  ε ≥ 0 ,  (5.155)

where c ≫ 1 and R = diag(r_1, ..., r_M) with r_i > 0 for i = 1, ..., M. In this way, violation of the constraints is allowed, but at the cost of a higher performance index. The entries of the matrix R may be used to tune the allowed or desired violation of the constraints.
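A scalar version of the softened problem (5.155) can be sketched with a generic NLP solver; all numbers are illustrative and the slack variable eps plays the role of ε:

```python
import numpy as np
from scipy.optimize import minimize

# Hard problem: min (v-2)^2  s.t.  v <= 1.
# Softened:     min (v-2)^2 + c*eps^2  s.t.  v <= 1 + r*eps, eps >= 0.
c, r = 100.0, 1.0

def cost(x):                 # x = [v, eps]
    v, eps = x
    return (v - 2.0) ** 2 + c * eps ** 2

cons = [
    {"type": "ineq", "fun": lambda x: 1.0 + r * x[1] - x[0]},  # v <= 1 + r*eps
    {"type": "ineq", "fun": lambda x: x[1]},                   # eps >= 0
]
res = minimize(cost, x0=np.array([0.0, 0.0]), method="SLSQP", constraints=cons)
v, eps = res.x
# A large penalty c keeps the constraint violation eps small but nonzero.
assert res.success and eps >= -1e-9
```

Increasing c drives eps toward zero, recovering the hard-constrained solution whenever that problem is feasible.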
Minimal time algorithm

In the minimal time approach [51] we consider the inequality constraints ψ(k+j|k) ≤ Ψ̄(k+j|k) for j = 1, ..., N (see section 4.2.2). Now we disregard the constraints at the start of the prediction interval up to some sample number j_mt, and determine the minimal j_mt for which the problem becomes feasible:

min_{ṽ, j_mt} j_mt   subject to   ψ(k+j|k) ≤ Ψ̄(k+j)  for j = j_mt+1, ..., N     (5.156)

Now we can calculate the optimal input sequence ṽ(k) by optimizing the following problem:

min_ṽ J   subject to   ψ(k+j|k) ≤ Ψ̄(k+j)  for j = j_mt+1, ..., N     (5.157)

where j_mt is the minimum value of problem (5.156). The minimal time algorithm is used if our aim is to be back in feasible operation as fast as possible. As a result, the magnitude of the constraint violations is usually larger than with the soft-constraint approach.
Prioritization

The feasibility recovery technique we discuss now is based on the priorities of the constraints. The constraints are ordered from lowest to highest priority. If the (nominal) optimization problem becomes infeasible, we start by dropping the lowest-priority constraints and check whether the resulting reduced optimization problem becomes feasible. As long as the problem is not feasible we continue by dropping more and more constraints until the optimization is feasible again. This means that we solve a sequence of quadratic programming problems in the case of infeasibility. The algorithm minimizes the violations of the constraints which cannot be fulfilled. Note that it may take several trials of dropping constraints and trying to find an optimal solution, which is not desirable in any real-time application.
5.5 Examples

Example 23 : computation of controller matrices (unconstrained case)

Consider the system of example 7 on page 52. When we compute the optimal predictive control for the unconstrained case, with N = N_c = 4, we obtain the controller matrices:

F = [ 0  1  0.4  0.3  1 ]
D_e = [ 0.6 ]
D_w = [ 0.1  1  0  0  0  0  0  0 ]
Example 24 : computation of controller matrices (equality constrained case)

We now add a control horizon N_c = 1 and use the results for the equality constrained case to obtain the controller matrices:

F = [ 0  0.8624  0.2985  0.3677  1 ]
D_e = [ 0.6339 ]
D_w = [ 0.0831  0.8624  0.0232  0.2587  0.0132  0.1639  0.0158  0.1578 ]
Example 25 : MPC of an elevator

A predictive controller design problem with constraints will be illustrated using a triple integrator chain

d³y(t)/dt³ = u(t) + η(t)

with constraints on y, dy/dt, d²y/dt² and d³y/dt³. This is a rigid-body approximation of an elevator, used for position control subject to constraints on speed, acceleration and jerk (the time derivative of acceleration).
Define the state

x(t) = [ x1(t) ; x2(t) ; x3(t) ] = [ d²y(t)/dt² ; dy(t)/dt ; y(t) ]

The noise model is given by:

η(t) = d³e(t)/dt³ + 0.5 d²e(t)/dt² + de(t)/dt + 0.1 e(t)

where e is zero-mean white noise. Then the continuous-time state space description of this system is given by

dx1(t)/dt = u(t) + 0.1 e(t)
dx2(t)/dt = x1(t) + e(t)
dx3(t)/dt = x2(t) + 0.5 e(t)
y(t) = x3(t) + e(t)

where the noise now acts on the state equations and on the output y(t). In matrix form we get:
dx(t)/dt = A_c x(t) + K_c e(t) + B_c u(t)
y(t) = C_c x(t) + e(t)

where

A_c = [ 0 0 0 ; 1 0 0 ; 0 1 0 ]    B_c = [ 1 ; 0 ; 0 ]    K_c = [ 0.1 ; 1 ; 0.5 ]    C_c = [ 0 0 1 ]
A zero-order hold transformation with sampling time T gives a discrete-time IO model:

x_o(k+1) = A_o x_o(k) + K_o e(k) + B_o u(k)
y(k) = C_o x_o(k) + e(k)

where

A_o = [ 1 0 0 ; T 1 0 ; T²/2 T 1 ]    B_o = [ T ; T²/2 ; T³/6 ]
K_o = [ 0.1 T ; T + T²/20 ; T/2 + T²/6 + T³/60 ]    C_o = [ 0 0 1 ]
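The deterministic part of this zero-order-hold discretization can be reproduced with the augmented-matrix-exponential trick; a quick check of the A_o and B_o entries (scipy assumed available):

```python
import numpy as np
from scipy.linalg import expm

T = 0.1
Ac = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
Bc = np.array([[1.0], [0.0], [0.0]])

# ZOH discretization via the augmented matrix exponential:
# expm([[Ac, Bc], [0, 0]] * T) = [[Ao, Bo], [0, I]].
M = np.zeros((4, 4))
M[:3, :3] = Ac
M[:3, 3:] = Bc
E = expm(M * T)
Ao, Bo = E[:3, :3], E[:3, 3:]

assert np.allclose(Ao, [[1, 0, 0], [T, 1, 0], [T**2 / 2, T, 1]])
assert np.allclose(Bo, [[T], [T**2 / 2], [T**3 / 6]])
```

Because A_c is nilpotent (a pure integrator chain), the matrix exponential terminates after the quadratic term, which is why the closed forms above are exact.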
The prediction model is built up as given in chapter 3. Note that we use u(k) rather than Δu(k), because the system already contains a triple integrator and another integrator in the controller is not desired. The aim of this example is to steer the elevator from position y = 0 at time t = 0 to position y = 1 as fast as possible. For a comfortable and safe operation, control is done subject to constraints on the jerk, acceleration, speed and overshoot, given by

| d³y(t)/dt³ | ≤ 0.4    (jerk)
| d²y(t)/dt² | ≤ 0.3    (acceleration)
| dy(t)/dt | ≤ 0.4      (speed)
y(t) ≤ 1.01             (overshoot)

These constraints can be translated into linear constraints on the control signal u(k) and the prediction of the state x(k):

u(k+j−1) ≤ 0.4          (positive jerk)
−u(k+j−1) ≤ 0.4         (negative jerk)
x1(k+j|k) ≤ 0.3         (positive acceleration)
−x1(k+j|k) ≤ 0.3        (negative acceleration)
x2(k+j|k) ≤ 0.4         (positive speed)
−x2(k+j|k) ≤ 0.4        (negative speed)
x3(k+j|k) ≤ 1.01        (overshoot)

for all j = 1, ..., N.
We choose sampling time T = 0.1, prediction and control horizon N = N_c = 30, and minimum cost horizon N_m = 1. Further we consider a constant reference signal r(k) = 1 for all k, and the weighting parameters P(q) = 1, λ = 0.1. The predictive control problem is solved by minimizing the GPC performance index (for an IO model)

J(u, k) = Σ_{j=N_m}^{N} [ ŷ(k+j|k) − r(k+j|k) ]^T [ ŷ(k+j|k) − r(k+j|k) ] + λ² Σ_{j=1}^{N_c} u^T(k+j−1|k) u(k+j−1|k)

subject to the above linear constraints.
The optimal control sequence is computed and implemented. In figure 5.3 the results are given. It can be seen that the jerk (solid line), acceleration (dotted line), speed (dashed line) and position (dash-dotted line) are all bounded by the constraints.
Figure 5.3: Predictive control of an elevator (position, velocity, acceleration and jerk versus sample instant k)
Chapter 6

Stability

Predictive control design does not give an a priori guaranteed stabilizing controller. Methods like LQ, H∞ and ℓ1-control have built-in stability properties. In this chapter we analyze the stability of a predictive controller. We distinguish two different cases, namely the unconstrained and the constrained case. It is very important to notice that a predictive controller that is stable for the unconstrained case is not automatically stable for the constrained case.
In Section 6.4 the relation between MPC, Internal Model Control (IMC) and the Youla parametrization is discussed.
Section 6.5 finalizes this chapter by considering the concept of robustness.
6.1 Stability for the LTI case

In the case where inequality constraints are absent, the problem of stability is not too difficult, because the controller is linear and time-invariant (LTI).
Let the process be given by:

x(k+1) = A x(k) + B1 e(k) + B2 w(k) + B3 v(k)
y(k) = C1 x(k) + D11 e(k) + D12 w(k)                         (6.1)
and the controller by (5.133) and (5.134):

x_c(k+1) = (A − B3 F − B1 D11^{-1} C1 − B3 D_e D11^{-1} C1) x_c(k)
 + (B1 D11^{-1} + B3 D_e D11^{-1}) y(k)
 + (B3 D_w − B3 D_e D11^{-1} D12 E_w + B2 E_w − B1 D11^{-1} D12 E_w) w̃(k)

v(k) = −(F + D_e D11^{-1} C1) x_c(k) + D_e D11^{-1} y(k) + (D_w − D_e D11^{-1} D12 E_w) w̃(k)

with E_w = [ I 0 · · · 0 ] such that w(k) = E_w w̃(k). We make one closed loop state space representation by substitution:
v(k) = −(F + D_e D11^{-1} C1) x_c(k) + D_e D11^{-1} y(k) + (D_w − D_e D11^{-1} D12 E_w) w̃(k)
 = −(F + D_e D11^{-1} C1) x_c(k) + D_e D11^{-1} C1 x(k) + D_e e(k) + D_w w̃(k)

x(k+1) = A x(k) + B1 e(k) + B2 E_w w̃(k) + B3 v(k)
 = A x(k) + B1 e(k) + B2 E_w w̃(k) − (B3 F + B3 D_e D11^{-1} C1) x_c(k) + B3 D_e D11^{-1} C1 x(k) + B3 D_e e(k) + B3 D_w w̃(k)
 = (A + B3 D_e D11^{-1} C1) x(k) − (B3 F + B3 D_e D11^{-1} C1) x_c(k) + (B1 + B3 D_e) e(k) + (B3 D_w + B2 E_w) w̃(k)

x_c(k+1) = (A − B3 F − B1 D11^{-1} C1 − B3 D_e D11^{-1} C1) x_c(k)
 + (B1 D11^{-1} + B3 D_e D11^{-1}) y(k)
 + (B3 D_w − B3 D_e D11^{-1} D12 E_w + B2 E_w − B1 D11^{-1} D12 E_w) w̃(k)
 = (A − B3 F − B1 D11^{-1} C1 − B3 D_e D11^{-1} C1) x_c(k) + (B1 D11^{-1} C1 + B3 D_e D11^{-1} C1) x(k) + (B1 + B3 D_e) e(k) + (B3 D_w + B2 E_w) w̃(k)
or in matrix form:

[ x(k+1) ; x_c(k+1) ] =
 [ A + B3 D_e D11^{-1} C1 , −B3 F − B3 D_e D11^{-1} C1 ;
   B1 D11^{-1} C1 + B3 D_e D11^{-1} C1 , A − B3 F − B1 D11^{-1} C1 − B3 D_e D11^{-1} C1 ] [ x(k) ; x_c(k) ]
 + [ B1 + B3 D_e ; B1 + B3 D_e ] e(k) + [ B3 D_w + B2 E_w ; B3 D_w + B2 E_w ] w̃(k)
 = A_cl [ x(k) ; x_c(k) ] + B_{1,cl} e(k) + B_{2,cl} w̃(k)

[ v(k) ; y(k) ] = [ D_e D11^{-1} C1 , −F − D_e D11^{-1} C1 ; C1 , 0 ] [ x(k) ; x_c(k) ] + [ D_e ; D11 ] e(k) + [ D_w ; D12 E_w ] w̃(k)
 = C_cl [ x(k) ; x_c(k) ] + D_{1,cl} e(k) + D_{2,cl} w̃(k)
Next we choose a new state:

[ x(k) ; x_c(k) − x(k) ] = [ I 0 ; −I I ] [ x(k) ; x_c(k) ] = T [ x(k) ; x_c(k) ]

This state transformation with

T = [ I 0 ; −I I ]   and   T^{-1} = [ I 0 ; I I ]

leads to a new realization

[ x(k+1) ; x_c(k+1) − x(k+1) ] = Ã_cl [ x(k) ; x_c(k) − x(k) ] + B̃_{1,cl} e(k) + B̃_{2,cl} w̃(k)
[ v(k) ; y(k) ] = C̃_cl [ x(k) ; x_c(k) − x(k) ] + D̃_{1,cl} e(k) + D̃_{2,cl} w̃(k)
with system matrices (see appendix B):

Ã_cl = T A_cl T^{-1}
 = [ I 0 ; −I I ] [ A + B3 D_e D11^{-1} C1 , −B3 F − B3 D_e D11^{-1} C1 ;
   B1 D11^{-1} C1 + B3 D_e D11^{-1} C1 , A − B3 F − B1 D11^{-1} C1 − B3 D_e D11^{-1} C1 ] [ I 0 ; I I ]
 = [ A − B3 F , −B3 F − B3 D_e D11^{-1} C1 ; 0 , A − B1 D11^{-1} C1 ]

B̃_{1,cl} = T B_{1,cl} = [ I 0 ; −I I ] [ B1 + B3 D_e ; B1 + B3 D_e ] = [ B1 + B3 D_e ; 0 ]

B̃_{2,cl} = T B_{2,cl} = [ I 0 ; −I I ] [ B3 D_w + B2 E_w ; B3 D_w + B2 E_w ] = [ B3 D_w + B2 E_w ; 0 ]

C̃_cl = C_cl T^{-1} = [ D_e D11^{-1} C1 , −F − D_e D11^{-1} C1 ; C1 , 0 ] [ I 0 ; I I ] = [ −F , −F − D_e D11^{-1} C1 ; C1 , 0 ]

D̃_{1,cl} = D_{1,cl}
D̃_{2,cl} = D_{2,cl}
The eigenvalues of the matrix Ã_cl are equal to the eigenvalues of (A − B3 F) together with the eigenvalues of (A − B1 D11^{-1} C1). This means that in the unconstrained case necessary and sufficient conditions for closed-loop stability are:

1. the eigenvalues of (A − B3 F) are strictly inside the unit circle;
2. the eigenvalues of (A − B1 D11^{-1} C1) are strictly inside the unit circle.
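The spectrum separation can be checked numerically: the transformed closed-loop matrix is block upper-triangular, so its eigenvalues are those of A − B3 F together with those of A − B1 D11^{-1} C1. A sketch with random placeholder matrices:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3
A = rng.standard_normal((n, n))
B1 = rng.standard_normal((n, 1)); B3 = rng.standard_normal((n, 1))
C1 = rng.standard_normal((1, n))
D11 = np.array([[1.0]]); De = np.array([[0.4]])
F = rng.standard_normal((1, n))
D11i = np.linalg.inv(D11)

# Block upper-triangular closed-loop matrix in the transformed state [x; x_c - x].
Acl = np.block([
    [A - B3 @ F,        -B3 @ F - B3 @ De @ D11i @ C1],
    [np.zeros((n, n)),   A - B1 @ D11i @ C1],
])
ev = np.sort_complex(np.linalg.eigvals(Acl))
ev_sep = np.sort_complex(np.concatenate([
    np.linalg.eigvals(A - B3 @ F),
    np.linalg.eigvals(A - B1 @ D11i @ C1),
]))
assert np.allclose(ev, ev_sep)
```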
Condition (1) can be satisfied by choosing appropriate tuning parameters such that the feedback matrix F makes |λ_i(A − B3 F)| < 1. We can obtain that by careful tuning (chapter 8), by introducing the end-point constraint (section 6.3) or by extending the prediction horizon to infinity (section 6.3).
Condition (2) is related to the choice of the noise model H(q), given by the input-output relation (compare (6.1))

x(k+1) = A x(k) + B1 e(k)
y(k) = C1 x(k) + D11 e(k)
Figure 6.1: Closed loop configuration (the plant, with inputs v, e, w and output y, in feedback with the predictive controller)
with transfer function:

H(q) = C1 (qI − A)^{-1} B1 + D11

From appendix B we learn that for

H(q) ∼ [ A , B1 ; C1 , D11 ]

we find

H^{-1}(q) ∼ [ A − B1 D11^{-1} C1 , B1 D11^{-1} ; −D11^{-1} C1 , D11^{-1} ]

so the inverse noise model is given (for v = 0 and w = 0) by the state space equations:

x(k+1) = (A − B1 D11^{-1} C1) x(k) + B1 D11^{-1} y(k)
e(k) = −D11^{-1} C1 x(k) + D11^{-1} y(k)

with transfer function:

H^{-1}(q) = −D11^{-1} C1 (qI − A + B1 D11^{-1} C1)^{-1} B1 D11^{-1} + D11^{-1}

From these equations it is clear that the eigenvalues of (A − B1 D11^{-1} C1) are exactly equal to the poles of the system H^{-1}(q), and thus a necessary condition for closed-loop stability is that the inverse of the noise model is stable.
If condition (2) is not satisfied, it means that the inverse of the noise model is not stable. We will have to find a new observer gain B_{1,new} such that A − B_{1,new} D11^{-1} C1 is stable, without changing the stochastic properties of the noise model. This can be done by a spectral factorization

H(q) H^T(q^{-1}) = H_new(q) H_new^T(q^{-1})

where H_new and H_new^{-1} are both stable. H(q) and H_new(q) have the same stochastic properties and so replacing H by H_new will not affect the properties of the process. The new H_new is given as follows:

H_new(q) = C1 (qI − A)^{-1} B_{1,new} + D11

The new observer gain B_{1,new} is given by:

B_{1,new} = (A X C1^T + B1 D11^{-1}) (I + C1 X C1^T)^{-1} D11

where X is the positive definite solution of the discrete algebraic Riccati equation:

(A − B1 D11^{-1} C1)^T X (I + C1^T C1 X)^{-1} (A − B1 D11^{-1} C1) − X = 0
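A stabilizing observer gain can also be obtained numerically. The sketch below uses the standard Kalman-filter route (a filtering Riccati equation with cross term), which is one way to carry out the spectral factorization; it is not claimed to reproduce the exact formula above, which uses a particular normalization, and all numbers are illustrative:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Scalar noise model H(q) = C1 (qI - A)^{-1} B1 + D11 whose inverse is
# unstable: A - B1 D11^{-1} C1 = 0.9 - 2 = -1.1.
A, B1, C1, D11 = (np.array([[v]]) for v in (0.9, 2.0, 1.0, 1.0))

# Kalman-filter route with unit-variance e (assumption):
# Q = B1 B1^T, R = D11 D11^T, cross term S = B1 D11^T.
Q, R, S = B1 @ B1.T, D11 @ D11.T, B1 @ D11.T
P = solve_discrete_are(A.T, C1.T, Q, R, s=S)
K = (A @ P @ C1.T + S) @ np.linalg.inv(C1 @ P @ C1.T + R)
B1_new = K @ D11   # gain of the spectrally equivalent innovations model

# The new inverse noise model is stable.
rho = np.max(np.abs(np.linalg.eigvals(A - B1_new @ np.linalg.inv(D11) @ C1)))
assert rho < 1.0
```

For this scalar example the Riccati solution is P = 0.21 and the new inverse pole moves from −1.1 to about −0.91, i.e. just inside the unit circle.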
6.2 Stability for the inequality constrained case

The issue of stability becomes even more complicated in the case where constraints have to be satisfied. When there are only constraints on the control signal, a stability guarantee can be given if the process itself is stable (Sontag [62], Balakrishnan et al. [4]).
In the general case with constraints on input, output and states, the main problem is feasibility. The existence of a stabilizing control law is not at all trivial. For stability in the inequality constrained case we need to modify the predictive control problem (as will be discussed in Section 6.3) or we have to solve the predictive control problem with a state feedback law using Linear Matrix Inequalities, based on the work of Kothare et al. [35]. (This will be discussed in section 7.3.)
6.3 Modifications for guaranteed stability

In this section we will introduce some modifications of the MPC design method that provide guaranteed stability of the closed loop. We discuss the following stabilizing modifications ([45]):

1. Infinite prediction horizon
2. End-point constraint (or terminal constraint)
3. Terminal cost function
4. Terminal cost function with end-point constraint set

These modifications use the concept of monotonicity of the performance index to prove stability ([12, 37]). In most modifications we assume that the steady state (v_ss, x_ss, w_ss, z_ss) = (v_ss, x_ss, w_ss, 0) is unique, so the matrix M_ss as defined in (5.38) has full column rank. Furthermore we assume that the system with input v and output z is minimum-phase, so

Assumption 26  The matrix

[ zI − A , −B3 ; C2 , D23 ]

does not drop rank for all |z| ≥ 1.

Note that if assumption 26 is satisfied, M_ss will have full rank. In most of the modifications we will use the Krasovskii-LaSalle principle [72] to prove that the steady state is asymptotically stable. If a function V(k) can be found such that

V(k) > 0   for all x ≠ x_ss
ΔV(k) = V(k) − V(k−1) ≤ 0   for all x

and V(k) = ΔV(k) = 0 for x = x_ss, and if the set {ΔV(k) = 0} contains no trajectory of the system except the trivial trajectory x(k) = x_ss for k ≥ 0, then the Krasovskii-LaSalle principle states that the steady state x(k) = x_ss is asymptotically stable.
Infinite prediction horizon

A straightforward way to guarantee stability in predictive control is to choose an infinite prediction horizon (N = ∞). It is easy to show that the controller is stabilizing for the unconstrained case, which is formulated in the following theorem:

Theorem 27  Consider an LTI system given in the state space description

x(k+1) = A x(k) + B2 w(k) + B3 v(k),                         (6.2)
z(k) = C2 x(k) + D22 w(k) + D23 v(k),                        (6.3)

satisfying assumption 26. Define the vector

ṽ(k) = [ v(k|k) ; ... ; v(k+N_c−1|k) ],

where N_c ∈ Z⁺ ∪ {∞}. A performance index is defined as

min_{ṽ(k)} J(ṽ, k) = min_{ṽ(k)} Σ_{j=0}^{∞} [ z(k+j|k) ]^T Γ(j) [ z(k+j|k) ],

where 0 < Γ(i) ≤ Γ(i+j) for j ≥ 0. The predictive control law minimizing this performance index results in a stabilizing controller.
Proof:
First let us define the function

V(k) = min_{ṽ(k)} J(ṽ, k) ≥ 0,                               (6.4)

and let ṽ*(k) be the optimizer

ṽ*(k) = arg min_{ṽ(k)} J(ṽ, k),

where

ṽ*(k) = [ v*(k|k) ; v*(k+1|k) ; ... ; v*(k+N_c−1|k) ]   for N_c < ∞,

or

ṽ*(k) = [ v*(k|k) ; v*(k+1|k) ; ... ]   for N_c = ∞,

where v*(k+j|k) denotes the optimal input value to be applied at time k+j as calculated at time k. Let z*(k+j|k) be the (optimal) performance signal at time k+j when the optimal input sequence ṽ*(k) computed at time k is applied. Then

V(k) = Σ_{j=0}^{∞} [ z*(k+j|k) ]^T Γ(j) [ z*(k+j|k) ].

Now, based on ṽ*(k) and the chosen stabilizing modification method, we construct a vector ṽ_sub(k+1). This vector can be seen as a suboptimal input sequence for the (k+1)-th optimization. The idea is to construct ṽ_sub(k+1) such that

J(ṽ_sub, k+1) ≤ V(k).

If we now compute

V(k+1) = min_{ṽ(k+1)} J(ṽ, k+1),

we will find that

V(k+1) = min_{ṽ(k+1)} J(ṽ, k+1) ≤ J(ṽ_sub, k+1) ≤ V(k).
For the infinite horizon case we define the suboptimal control sequence as follows:

ṽ_sub(k+1) = [ v*(k+1|k) ; v*(k+2|k) ; ... ; v*(k+N_c−1|k) ; v_ss ]   for N_c < ∞,

or

ṽ_sub(k+1) = [ v*(k+1|k) ; v*(k+2|k) ; ... ]   for N_c = ∞,

and compute

V_sub(k+1) = J(ṽ_sub, k+1).

This value is equal to

V_sub(k+1) = Σ_{j=1}^{∞} [ z*(k+j|k) ]^T Γ(j−1) [ z*(k+j|k) ] ≤ Σ_{j=1}^{∞} [ z*(k+j|k) ]^T Γ(j) [ z*(k+j|k) ],

using Γ(j−1) ≤ Γ(j), and so

V_sub(k+1) ≤ V(k).
We proceed by observing that

V(k+1) = min_{ṽ(k+1)} J(ṽ, k+1) ≤ J(ṽ_sub, k+1),

which means that V(k+1) ≤ V(k). Using the fact that ΔV(k) = V(k) − V(k−1) ≤ 0 and that, based on assumption 26, the set {ΔV(k) = 0} contains no trajectory of the system except the trivial trajectory x(k) = x_ss for k ≥ 0, the Krasovskii-LaSalle principle [72] states that the steady state is asymptotically stable. □ End Proof

As shown above, for stability it is not important whether N_c = ∞ or N_c < ∞. For N_c = ∞ the predictive controller becomes equal to the optimal LQ solution (Bitmead, Gevers & Wertz [7], see section 5.2.3). The main reason not to choose an infinite control horizon is the fact that constraint handling on an infinite horizon is an extremely hard problem. Rawlings & Muske ([51]) studied the case where the prediction horizon is equal to infinity (N = ∞) but the control horizon is finite (N_c < ∞), see sections 5.2.4 and 5.2.5. Because of the finite number of control actions that can be applied, the constraint handling has become a finite-dimensional problem instead of the infinite-dimensional problem obtained for N_c = ∞.
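In the unconstrained case the infinite-horizon argument reduces to the classical LQ result: V(x) = x^T P x with P from the discrete-time algebraic Riccati equation is a Lyapunov function for the closed loop. A toy sketch (scipy assumed available; the stage cost x^T x + v^T v stands in for z^T Γ z, and all numbers are made up):

```python
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[1.1, 0.4], [0.0, 0.9]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.eye(1)

P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # optimal LQ feedback

# V(k) = x(k)^T P x(k) is non-increasing along the optimal closed loop.
x = np.array([[1.0], [-2.0]])
Vs = []
for _ in range(20):
    Vs.append(float(x.T @ P @ x))
    x = (A - B @ K) @ x

assert all(b <= a + 1e-12 for a, b in zip(Vs, Vs[1:]))
```

The monotonicity follows from the Riccati identity (A − BK)^T P (A − BK) = P − Q − K^T R K, which is exactly the discrete-time analogue of the V_sub argument above.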
End-point constraint (terminal constraint)

The main idea of the end-point constraint is to force the system to its steady state at the end of the prediction interval. Since the end-point constraint is an equality constraint, it still holds that the resulting controller is linear and time-invariant.

Theorem 28  Consider an LTI system given in the state space description

x(k+1) = A x(k) + B2 w(k) + B3 v(k),
z(k) = C2 x(k) + D22 w(k) + D23 v(k),

satisfying assumption 26. A performance index is defined as

min_{ṽ(k)} J(ṽ, k) = min_{ṽ(k)} Σ_{j=0}^{N−1} [ z(k+j|k) ]^T Γ(j) [ z(k+j|k) ],     (6.5)

where 0 < Γ(i) ≤ Γ(i+1) is positive definite for i = 0, ..., N−1, and an additional equality constraint is given by:

x(k+N) = x_ss.                                               (6.6)

Finally, let w(k) = w_ss for k ≥ 0. Then the predictive control law minimizing (6.5) results in a stable closed loop.
proof:
Also in this modification the performance index will be used as a Lyapunov function for
the closed-loop system. First let us define the function

   V(k) = min_{ṽ(k)} J(ṽ, k) ≥ 0,   (6.7)

and let ṽ*(k) be the optimizer

   ṽ*(k) = arg min_{ṽ(k)} J(ṽ, k) = [ v*(k|k) ; … ; v*(k+N_c−2|k) ; v*(k+N_c−1|k) ; v_ss ; … ; v_ss ],
110 CHAPTER 6. STABILITY
where v*(k+j|k) denotes the optimal input value to be applied at time k+j as calculated
at time k. Let z*(k+j|k) be the (optimal) performance signal at time k+j when the
optimal input sequence ṽ*(k) computed at time k is applied. Then

   V(k) = Σ_{j=0}^{N−1} [z*(k+j|k)]^T Γ(j) [z*(k+j|k)].

Now based on ṽ*(k) and the chosen stabilizing modification method we construct the vector
ṽ_sub(k+1) as follows:

   ṽ_sub(k+1) = [ v*(k+1|k) ; … ; v*(k+N_c−1|k) ; v_ss ; v_ss ; … ; v_ss ],

and we compute

   V_sub(k+1) = J(ṽ_sub, k+1).
Applying this input sequence ṽ_sub(k+1) to the system results in z_sub(k+j|k+1) =
z*(k+j|k) for j = 1, …, N and z_sub(k+N+j|k+1) = 0 for j > 0. Note that also
z_sub(k+N|k+1) = z*(k+N|k) = 0, because of the end-point constraint (6.6). With this we derive

   V_sub(k+1) = Σ_{j=0}^{N−1} [z_sub(k+j+1|k+1)]^T Γ(j) [z_sub(k+j+1|k+1)]
              ≤ Σ_{j=0}^{N−1} [z_sub(k+j+1|k+1)]^T Γ(j+1) [z_sub(k+j+1|k+1)]
              = Σ_{j=1}^{N} [z*(k+j|k)]^T Γ(j) [z*(k+j|k)]
              = V(k) − [z*(k|k)]^T Γ(0) [z*(k|k)]
              ≤ V(k),

where the inequality uses Γ(j) ≤ Γ(j+1), and the last equality uses z*(k+N|k) = 0.
Further, because of the receding horizon strategy, we do a new optimization

   V(k+1) = min_{ṽ(k+1)} J(ṽ, k+1) ≤ J(ṽ_sub, k+1),

and therefore ΔV(k+1) = V(k+1) − V(k) ≤ 0. Also for the end-point constraint,
the set {ΔV(k) = 0} contains no trajectory of the system except the trivial trajectory
x(k) = x_ss for k ≥ 0, and we can use the Krasovskii-LaSalle principle [72] to prove that the
steady state is asymptotically stable. □ End Proof
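The Lyapunov argument can be reproduced numerically: for a small unconstrained example, solve the finite-horizon problem with the equality constraint x(k+N) = 0 as a KKT system, apply the first input, and check that the optimal cost V(k) never increases. A minimal sketch in Python/NumPy, with illustrative matrices not taken from the notes (regulation to x_ss = 0, w = 0, constant weights, N_c = N):

```python
import numpy as np

# Assumed example system (not from the notes): x+ = A x + B v.
A = np.array([[1.2, 0.4],
              [0.0, 0.9]])
B = np.array([[0.0],
              [1.0]])
n, m, N = 2, 1, 6
Qw, Rw = np.eye(n), 0.01 * np.eye(m)   # stage cost x'x + 0.01 v^2

# Prediction: [x(0);...;x(N-1)] = Phi x0 + Gam v,  x(N) = PhiN x0 + GamN v.
Phi = np.vstack([np.linalg.matrix_power(A, j) for j in range(N)])
Gam = np.zeros((N * n, N * m))
for j in range(1, N):
    for i in range(j):
        Gam[j*n:(j+1)*n, i*m:(i+1)*m] = np.linalg.matrix_power(A, j-1-i) @ B
PhiN = np.linalg.matrix_power(A, N)
GamN = np.hstack([np.linalg.matrix_power(A, N-1-i) @ B for i in range(N)])

Qbar = np.kron(np.eye(N), Qw)
H = Gam.T @ Qbar @ Gam + np.kron(np.eye(N), Rw)

def solve_mpc(x0):
    """Equality-constrained QP via its KKT system; returns (v(0), optimal cost)."""
    f = Gam.T @ Qbar @ Phi @ x0
    KKT = np.block([[H, GamN.T], [GamN, np.zeros((n, n))]])
    sol = np.linalg.solve(KKT, np.concatenate([-f, -PhiN @ x0]))
    v = sol[:N * m]
    cost = v @ H @ v + 2 * f @ v + x0 @ (Phi.T @ Qbar @ Phi) @ x0
    return v[:m], cost

x = np.array([3.0, -2.0])
V = []
for k in range(25):
    v0, cost = solve_mpc(x)
    V.append(cost)
    x = A @ x + B @ v0

monotone = all(V[k+1] <= V[k] + 1e-9 for k in range(len(V) - 1))
print(monotone)  # True: V(k) acts as a Lyapunov function
```

With constant weights the monotonicity assumption on Γ holds with equality, so the recorded sequence of optimal costs is non-increasing, as the proof predicts.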
Remark 29: The constraint

   W x(k+N) = W x_ss,

where W is an m×n matrix with full column rank (so m ≥ n), is equivalent to the end-point
constraint (6.6), because x(k+N) = x_ss if W x(k+N) = W x_ss. Therefore, the constraint

   [ y(k+N|k) ; y(k+N+1|k) ; … ; y(k+N+n−1|k) ] − [ y_ss ; y_ss ; … ; y_ss ]
      = [ C₁ ; C₁A ; … ; C₁A^{n−1} ] x(k+N) − [ C₁ ; C₁A ; … ; C₁A^{n−1} ] x_ss = 0,

also results in a stable closed loop if the matrix

   [ C₁ ; C₁A ; … ; C₁A^{n−1} ]

has full rank. For SISO systems with the pair (C₁, A) observable, we choose n = dim(A),
so equal to the number of states. For MIMO systems, there may be an integer value
n < dim(A) for which W has full column rank.
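This rank condition is easy to check numerically: stack C₁, C₁A, … and find the smallest number of block rows that already gives full column rank. A minimal sketch with an assumed two-output example (the matrices are illustrative, not from the notes):

```python
import numpy as np

# Assumed example system: 3 states, 2 outputs.
A = np.array([[0.9, 0.1, 0.0],
              [0.0, 0.8, 0.2],
              [0.0, 0.0, 0.7]])
C1 = np.array([[1.0, 0.0, 0.0],
               [0.0, 1.0, 0.0]])

n = A.shape[0]

def stacked_rank(nu):
    """Rank of W = [C1; C1 A; ...; C1 A^(nu-1)]."""
    W = np.vstack([C1 @ np.linalg.matrix_power(A, j) for j in range(nu)])
    return np.linalg.matrix_rank(W)

# Smallest number of block rows for which W has full column rank n:
nu_min = next(nu for nu in range(1, n + 1) if stacked_rank(nu) == n)
print(nu_min)  # 2: for this MIMO example, fewer than dim(A) block rows suffice
```

Here two block rows already give rank 3, illustrating that for MIMO systems the stacking length can indeed be smaller than dim(A).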
The end-point constraint was first introduced by Clarke & Scattolini [16] and Mosca & Zhang
[47], and forces the output y(k) to match the reference signal r(k) = r_ss at the end of the
prediction horizon:

   y(k+N+j) = r_ss   for j = 1, …, n.

With this additional constraint the closed-loop system is asymptotically stable.
Although stability is guaranteed, the use of an end-point constraint has some disadvantages:
The end-point constraint is only a sufficient, not a necessary, condition for stability.
The constraint may be too restrictive and can lead to infeasibility, especially if the horizon
N is chosen too small.
The tuning rules, as will be discussed in Chapter 8, change. This has to do with the fact
that, because of the control horizon N_c, the output is already forced to its steady state at
time N_c, and so for a fixed N_c a prediction horizon N = N_c will give the same stable closed-loop
behavior as any choice N > N_c. The choice of the prediction horizon N is overruled
by the choice of the control horizon N_c, and we lose a degree of freedom.
Terminal cost function

One of the earliest proposals for modifying the performance index J to ensure closed-loop
stability was the addition of a terminal cost [7], in the context of model predictive control
of unconstrained linear systems.
Theorem 30 Consider an LTI system given in the state space description

   x(k+1) = A x(k) + B₂ w(k) + B₃ v(k),
   z(k) = C₂ x(k) + D₂₂ w(k) + D₂₃ v(k),

satisfying assumption 26, and w(k) = w_ss for k ≥ 0. A performance index is defined as

   min_{ṽ(k)} J(ṽ, k) = min_{ṽ(k)} [ (x(k+N|k) − x_ss)^T P₀ (x(k+N|k) − x_ss)
                                      + Σ_{j=0}^{N−1} [z(k+j|k)]^T [z(k+j|k)] ],   (6.8)

where P₀ is the solution to the Riccati equation

   Ā^T P₀ Ā − Ā^T P₀ B₃ (B₃^T P₀ B₃ + R̄)⁻¹ B₃^T P₀ Ā + Q̄ − P₀ = 0,   (6.9)

where

   Ā = A − B₃ (D₂₃^T D₂₃)⁻¹ D₂₃^T C₂,
   Q̄ = C₂^T ( I − D₂₃ (D₂₃^T D₂₃)⁻¹ D₂₃^T ) C₂,
   R̄ = D₂₃^T D₂₃.

Then the predictive control law, minimizing (6.8), results in a stable closed loop.
Proof:
Define the signals

   x̃(k+j|k) = x(k+j|k) − x_ss   for all j ≥ 0,
   ṽ(k+j|k) = v(k+j|k) − v_ss   for all j ≥ 0,
   v̄(k+j) = ṽ(k+j|k) + (D₂₃^T D₂₃)⁻¹ D₂₃^T C₂ x̃(k+j|k),

then, according to section 5.2.2, it follows that the system can be described by the state
space equation

   x̃(k+j+1|k) = Ā x̃(k+j|k) + B₃ v̄(k+j),

and the performance index becomes

   J(v̄, k) = x̃^T(k+N|k) P₀ x̃(k+N|k)
            + Σ_{j=0}^{N−1} [ x̃^T(k+j|k) Q̄ x̃(k+j|k) + v̄^T(k+j) R̄ v̄(k+j) ].

Now consider another (infinite horizon) performance index

   J_a(v̄, k) = Σ_{j=0}^{∞} [ x̃^T(k+j|k) Q̄ x̃(k+j|k) + v̄^T(k+j) R̄ v̄(k+j) ],

and we structure the input signal such that v̄(k+j) is free for j = 0, …, N−1 and
v̄(k+j) = −F x̃(k+j|k) for j ≥ N, where

   F = (B₃^T P₀ B₃ + R̄)⁻¹ B₃^T P₀ Ā.

The performance index can now be rewritten as

   J_a(v̄, k) = Σ_{j=0}^{N−1} [ x̃^T(k+j|k) Q̄ x̃(k+j|k) + v̄^T(k+j) R̄ v̄(k+j) ]
             + Σ_{j=N}^{∞} [ x̃^T(k+j|k) Q̄ x̃(k+j|k) + v̄^T(k+j) R̄ v̄(k+j) ].
Then for j ≥ N:

   x̃^T(k+j+1) P₀ x̃(k+j+1) − x̃^T(k+j) P₀ x̃(k+j) + x̃^T(k+j) Q̄ x̃(k+j) + v̄^T(k+j) R̄ v̄(k+j) =

   = [ x̃(k+j) ; v̄(k+j) ]^T [ Ā^T P₀ Ā − P₀ + Q̄ ,  Ā^T P₀ B₃
                               B₃^T P₀ Ā ,          B₃^T P₀ B₃ + R̄ ] [ x̃(k+j) ; v̄(k+j) ]

   = x̃^T(k+j) [ I  −F^T ] [ Ā^T P₀ Ā − P₀ + Q̄ ,  Ā^T P₀ B₃
                             B₃^T P₀ Ā ,          B₃^T P₀ B₃ + R̄ ] [ I ; −F ] x̃(k+j)

   = x̃^T(k+j) [ I  −Ā^T P₀ B₃ (B₃^T P₀ B₃ + R̄)⁻¹ ] [ Ā^T P₀ Ā − P₀ + Q̄ ,  Ā^T P₀ B₃
                                                       B₃^T P₀ Ā ,          B₃^T P₀ B₃ + R̄ ]
                [ I ; −(B₃^T P₀ B₃ + R̄)⁻¹ B₃^T P₀ Ā ] x̃(k+j)

   = x̃^T(k+j) [ Ā^T P₀ Ā − P₀ + Q̄ − Ā^T P₀ B₃ (B₃^T P₀ B₃ + R̄)⁻¹ B₃^T P₀ Ā ] x̃(k+j)

   = 0,

where the last step follows from the Riccati equation (6.9). Summing this identity over
j ≥ N, we find

   Σ_{j=N}^{∞} [ x̃^T(k+j) Q̄ x̃(k+j) + v̄^T(k+j) R̄ v̄(k+j) ] = x̃^T(k+N) P₀ x̃(k+N),

which means that J = J_a. We have already proven that the infinite horizon performance
index J_a is a Lyapunov function. Because the finite horizon predictive controller with a
terminal cost function gives the same performance index, J = J_a, the index J is also a
Lyapunov function, and so the finite horizon predictive controller with a terminal cost function
stabilizes the system.
□ End Proof
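The Riccati solution P₀ in (6.9) can be computed by simply iterating the corresponding Riccati recursion until convergence. The sketch below does this in Python/NumPy for an assumed small example (the matrices A, B₃, C₂, D₂₃ are illustrative, not taken from the notes), building Ā, Q̄ and R̄ exactly as defined above:

```python
import numpy as np

# Assumed example data (2 states, 1 input, 3 performance outputs).
A   = np.array([[1.1, 0.3],
                [0.0, 0.9]])
B3  = np.array([[0.0],
                [1.0]])
C2  = np.array([[1.0, 0.0],
                [0.0, 1.0],
                [0.0, 0.0]])
D23 = np.array([[0.0],
                [0.0],
                [0.5]])

# Abar, Qbar, Rbar as defined for the Riccati equation (6.9):
DtD  = D23.T @ D23                                   # = Rbar
Abar = A - B3 @ np.linalg.solve(DtD, D23.T @ C2)
Qbar = C2.T @ (np.eye(3) - D23 @ np.linalg.solve(DtD, D23.T)) @ C2
Rbar = DtD

# Fixed-point iteration of the Riccati recursion:
P = Qbar.copy()
for _ in range(1000):
    K = np.linalg.solve(Rbar + B3.T @ P @ B3, B3.T @ P @ Abar)
    P = Abar.T @ P @ Abar - Abar.T @ P @ B3 @ K + Qbar

# Residual of (6.9) should (numerically) vanish:
K = np.linalg.solve(Rbar + B3.T @ P @ B3, B3.T @ P @ Abar)
residual = Abar.T @ P @ Abar - Abar.T @ P @ B3 @ K + Qbar - P
print(np.max(np.abs(residual)))  # ~0
```

The same P₀ could also be obtained with a dedicated DARE solver (e.g. SciPy's `solve_discrete_are`); the plain iteration is shown because it mirrors equation (6.9) term by term.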
Terminal cost function with end-point constraint set (terminal constraint set)

The main disadvantage of the terminal cost function is that it can only handle the unconstrained
case. The end-point constraint can handle constraints, but it is very conservative
and can easily lead to infeasibility. If we replace the end-point constraint by an end-point
constraint set and also add the terminal cost function, we can ensure closed-loop stability
for the constrained case in a non-conservative way [31, 60, 64].
Theorem 31 Consider an LTI system given in the state space description

   x(k+1) = A x(k) + B₂ w(k) + B₃ v(k),
   z(k) = C₂ x(k) + D₂₂ w(k) + D₂₃ v(k),
   ψ(k) = C₄ x(k) + D₄₂ w(k) + D₄₃ v(k),

satisfying assumption 26. A performance index is defined as

   min_{ṽ(k)} J(ṽ, k) = min_{ṽ(k)} [ (x(k+N|k) − x_ss)^T P₀ (x(k+N|k) − x_ss)
                                      + Σ_{j=0}^{N−1} [z(k+j|k)]^T [z(k+j|k)] ],   (6.10)

where P₀ is the solution to the Riccati equation

   Ā^T P₀ Ā − Ā^T P₀ B₃ (B₃^T P₀ B₃ + R̄)⁻¹ B₃^T P₀ Ā + Q̄ − P₀ = 0,   (6.11)

for

   Ā = A − B₃ (D₂₃^T D₂₃)⁻¹ D₂₃^T C₂,
   Q̄ = C₂^T ( I − D₂₃ (D₂₃^T D₂₃)⁻¹ D₂₃^T ) C₂,
   R̄ = D₂₃^T D₂₃.

The signal constraint is defined by

   ψ(k) ≤ ψ̄,   (6.12)

where ψ̄ is a constant vector. Let w(k) = w_ss for k ≥ 0, and consider the linear control law

   v(k+j|k) = −[ F + (D₂₃^T D₂₃)⁻¹ D₂₃^T C₂ ] ( x(k+j|k) − x_ss ) + v_ss,   (6.13)

with F = (B₃^T P₀ B₃ + R̄)⁻¹ B₃^T P₀ Ā. Finally, let W be the set of all states for which (6.12)
holds under control law (6.13), and assume

   D ⊆ D⁺ ⊆ W,   (6.14)

where D⁺ is a set that is invariant under control law (6.13). Then the predictive control
law, minimizing (6.10), subject to (6.12) and the terminal constraint

   x(k+N|k) ∈ D,   (6.15)

results in a stable closed loop.
Proof:
Define the signals

   x̃(k+j|k) = x(k+j|k) − x_ss   for all j ≥ 0,
   ṽ(k+j|k) = v(k+j|k) − v_ss   for all j ≥ 0,
   v̄(k+j) = ṽ(k+j|k) + (D₂₃^T D₂₃)⁻¹ D₂₃^T C₂ x̃(k+j|k),

then, according to section 5.2.2, it follows that the system can be described by the state
space equation

   x̃(k+j+1|k) = Ā x̃(k+j|k) + B₃ v̄(k+j),

and the performance index becomes

   J(ṽ, k) = x̃^T(k+N|k) P₀ x̃(k+N|k)
            + Σ_{j=0}^{N−1} [ x̃^T(k+j|k) Q̄ x̃(k+j|k) + v̄^T(k+j) R̄ v̄(k+j) ].

Note that because x(k+N|k) ∈ D and D ⊆ D⁺, where D⁺ is invariant under the
feedback law (6.13), we can rewrite this performance index as

   J_∞(ṽ, k) = Σ_{j=0}^{N−1} [ x̃^T(k+j|k) Q̄ x̃(k+j|k) + v̄^T(k+j) R̄ v̄(k+j) ]
             + Σ_{j=N}^{∞} [ x̃^T(k+j|k) Q̄ x̃(k+j|k) + v̄^T(k+j) R̄ v̄(k+j) ].

Define the function

   V(k) = min_{ṽ(k)} J_∞(ṽ, k),

and the optimal vector

   ṽ*(k) = arg min_{ṽ(k)} J_∞(ṽ, k),

where

   ṽ*(k) = [ v*(k|k) ; v*(k+1|k) ; … ; v*(k+N−1|k) ; −F x̃*(k+N|k) ; −F x̃*(k+N+1|k) ; … ],
where v*(k+j|k) denotes the optimal input value to be applied at time k+j as calculated
at time k. Now define

   ṽ_sub(k+1) = [ v*(k+1|k) ; v*(k+2|k) ; … ; v*(k+N−1|k) ; −F x̃*(k+N|k) ; −F x̃*(k+N+1|k) ; … ].

Note that this control law is feasible, because x(k+N+j|k) ∈ D⁺ for all j ≥ 0. We compute

   V_sub(k+1) = J(ṽ_sub, k+1),

then this value is equal to

   V_sub(k+1) = Σ_{j=1}^{∞} [z*(k+j|k)]^T [z*(k+j|k)],

and we find

   V_sub(k+1) ≤ V(k).

Further,

   V(k+1) = min_{ṽ(k+1)} J(ṽ, k+1) ≤ J(ṽ_sub, k+1),

and therefore ΔV(k+1) = V(k+1) − V(k) ≤ 0. Also in this case the set {ΔV(k) = 0}
contains no trajectory of the system except the trivial trajectory x(k) = x_ss for k ≥ 0,
and we can use the Krasovskii-LaSalle principle [72] to prove that the steady state is
asymptotically stable. □ End Proof
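The role of the terminal feedback can be illustrated numerically: along the argument above, x̃⁺^T P₀ x̃⁺ − x̃^T P₀ x̃ = −(x̃^T Q̄ x̃ + v̄^T R̄ v̄) ≤ 0 under v̄ = −F x̃, so every level set {x̃ : x̃^T P₀ x̃ ≤ c} is invariant under the terminal control law and is a natural candidate for the set D⁺. A minimal sketch with assumed example matrices (illustrative, not from the notes):

```python
import numpy as np

# Assumed example data (illustrative, not from the notes).
Abar = np.array([[1.1, 0.3],
                 [0.0, 0.9]])
B3   = np.array([[0.0],
                 [1.0]])
Qbar = np.eye(2)
Rbar = np.array([[0.25]])

# Solve the Riccati equation (6.11) by fixed-point iteration.
P = Qbar.copy()
for _ in range(1000):
    K = np.linalg.solve(Rbar + B3.T @ P @ B3, B3.T @ P @ Abar)
    P = Abar.T @ P @ Abar - Abar.T @ P @ B3 @ K + Qbar
F = np.linalg.solve(Rbar + B3.T @ P @ B3, B3.T @ P @ Abar)

# Sample states on the boundary of the level set {x: x' P x = 1} and
# check that the terminal feedback v = -F x maps them inside again.
rng = np.random.default_rng(0)
ok = True
for _ in range(200):
    d = rng.standard_normal(2)
    x = d / np.sqrt(d @ P @ d)          # so that x' P x = 1
    xplus = (Abar - B3 @ F) @ x
    ok &= xplus @ P @ xplus <= x @ P @ x + 1e-9
print(ok)  # True: the level set is invariant under v = -F x
```

Because the cost decrease is an exact algebraic identity, the check holds for every sampled boundary state, and the terminal closed loop Ā − B₃F is stable.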
6.4 Relation to IMC scheme and Youla parametrization

6.4.1 The IMC scheme

The Internal Model Control (IMC) scheme provides a practical structure to analyze properties
of the closed-loop behaviour of control systems and is closely related to model predictive
control [26], [46]. In this section we will show that for stable processes the solution of the
SPCP (as derived in chapter 5) can be rewritten in the IMC configuration.
Figure 6.2: Internal Model Control Scheme (block diagram: the two-degree-of-freedom controller Q driven by w̃ and ê, the process G, the model G̃ in parallel, and the pre-filter H̃⁻¹ acting on y − ŷ)
The IMC structure is given in figure 6.2 and consists of the process G(q), the model G̃(q),
the (stable) inverse of the noise model H̃⁻¹(q), and the two-degree-of-freedom controller Q.
For simplicity reasons we will consider F(q) = 0 (known disturbances are not taken into
account), but the extension is straightforward. The blocks in the IMC scheme satisfy the
following relations:

   Process:        y(k) = G(q) v(k) + H(q) e(k)   (6.16)
   Process model:  ŷ(k) = G̃(q) v(k)   (6.17)
   Pre-filter:     ê(k) = H̃⁻¹(q) ( y(k) − ŷ(k) )   (6.18)
   Controller:     v(k) = Q(q) [ w̃(k) ; ê(k) ] = Q₁(q) w̃(k) + Q₂(q) ê(k)   (6.19)
Proposition 32 Consider the IMC scheme, given in figure 6.2, satisfying equations (6.16)–(6.19),
where G, H and H⁻¹ are stable. If the process and noise models are perfect, so if
G̃(q) = G(q) and H̃(q) = H(q), the closed loop description is given by:

   v(k) = Q₁(q) w̃(k) + Q₂(q) e(k)   (6.20)
   y(k) = G(q) Q₁(q) w̃(k) + [ G(q) Q₂(q) + H(q) ] e(k)   (6.21)

The closed loop is stable if and only if Q is stable.
Proof:
The description of the closed loop system is obtained by substituting (6.16)–(6.18) into (6.19):

   v(k) = [ I − Q₂(q) H̃⁻¹(q) ( G(q) − G̃(q) ) ]⁻¹ [ Q₁(q) w̃(k) + Q₂(q) H̃⁻¹(q) H(q) e(k) ]

   y(k) = G(q) [ I − Q₂(q) H̃⁻¹(q) ( G(q) − G̃(q) ) ]⁻¹ [ Q₁(q) w̃(k) + Q₂(q) H̃⁻¹(q) H(q) e(k) ]
          + H(q) e(k)

For G = G̃ and H = H̃ this simplifies to equations (6.20) and (6.21). It is clear from
equations (6.20) and (6.21) that if Q is stable, the closed loop system is stable. On the
other hand, if the closed loop is stable, we can read from equations (6.20) and (6.21) that Q₁ and Q₂
are stable.
□ End Proof
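The perfect-model case of Proposition 32 is easy to check in simulation: with G̃ = G and H̃ = H the pre-filter output ê reproduces the noise e exactly, so (6.20) holds sample by sample. A minimal sketch with an assumed first-order process and static Q₁, Q₂ (all numbers illustrative):

```python
import numpy as np

# Assumed example: G(q) = 1/(q - 0.5) (state space a=0.5, b=1, c=1), H(q) = 1.
a, b, c = 0.5, 1.0, 1.0
Q1, Q2 = 0.8, -0.4          # static two-degree-of-freedom controller

rng = np.random.default_rng(1)
T = 50
w = rng.standard_normal(T)   # reference-type input w~
e = rng.standard_normal(T)   # noise

x = 0.0    # process state
xm = 0.0   # model state (perfect model)
v_log, pred = [], []
for k in range(T):
    y  = c * x + e[k]        # process:      y = G v + H e, with H = 1
    ym = c * xm              # model output: y^ = G~ v
    ehat = y - ym            # pre-filter (6.18) with H~^{-1} = 1
    v = Q1 * w[k] + Q2 * ehat            # controller (6.19)
    v_log.append(v)
    pred.append(Q1 * w[k] + Q2 * e[k])   # closed-loop formula (6.20)
    x  = a * x + b * v
    xm = a * xm + b * v      # identical dynamics: perfect model

err = max(abs(np.array(v_log) - np.array(pred)))
print(err)  # ~0: ehat(k) = e(k), so v(k) = Q1 w(k) + Q2 e(k)
```

Since the model state tracks the process state exactly, ê(k) = e(k) at every sample and the simulated input coincides with (6.20).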
The MPC controller in an IMC configuration

The next step is to derive the function Q for our predictive controller. To do so, we need
to find the relation between the IMC controller and the common two-degree-of-freedom
controller C(q), given by:

   v(k) = C₁(q) w̃(k) + C₂(q) y(k)   (6.22)

From equations (6.17)–(6.19) we derive:

   v(k) = Q₁(q) w̃(k) + Q₂(q) ê(k)
        = Q₁(q) w̃(k) + Q₂(q) H̃⁻¹(q) ( y(k) − ŷ(k) )
        = Q₁(q) w̃(k) + Q₂(q) H̃⁻¹(q) y(k) − Q₂(q) H̃⁻¹(q) G̃(q) v(k)

   [ I + Q₂(q) H̃⁻¹(q) G̃(q) ] v(k) = Q₁(q) w̃(k) + Q₂(q) H̃⁻¹(q) y(k)

so

   v(k) = [ I + Q₂(q) H̃⁻¹(q) G̃(q) ]⁻¹ Q₁(q) w̃(k)
        + [ I + Q₂(q) H̃⁻¹(q) G̃(q) ]⁻¹ Q₂(q) H̃⁻¹(q) y(k)

This results in:

   C₁(q) = [ I + Q₂(q) H̃⁻¹(q) G̃(q) ]⁻¹ Q₁(q)
   C₂(q) = [ I + Q₂(q) H̃⁻¹(q) G̃(q) ]⁻¹ Q₂(q) H̃⁻¹(q) = Q₂(q) [ H̃(q) + G̃(q) Q₂(q) ]⁻¹
with the inverse relation:

   Q₁(q) = [ I − C₂(q) G̃(q) ]⁻¹ C₁(q)
   Q₂(q) = [ I − C₂(q) G̃(q) ]⁻¹ C₂(q) H̃(q)
In state space we can describe the systems G̃(q), H̃⁻¹(q) and Q(q) as follows:

Process model G̃(q):

   x₁(k+1) = A x₁(k) + B₃ v(k)
   ŷ(k) = C₁ x₁(k)

Pre-filter H̃⁻¹(q):

   x₂(k+1) = ( A − B₁ D₁₁⁻¹ C₁ ) x₂(k) + B₁ D₁₁⁻¹ ( y(k) − ŷ(k) )
   ê(k) = −D₁₁⁻¹ C₁ x₂(k) + D₁₁⁻¹ ( y(k) − ŷ(k) )

Controller Q(q):

   x₃(k+1) = ( A − B₃ F ) x₃(k) + B₃ D_w w̃(k) + ( B₁ + B₃ D_e ) ê(k)   (6.23)
   v(k) = −F x₃(k) + D_w w̃(k) + D_e ê(k)   (6.24)

The state equations of x₁ and x₂ follow from chapter 2. We will now show that the above
choice of Q(q) will indeed lead to the same control action as the controller derived in
chapter 5.
Combining the three state equations into one gives:

   [ x₁(k+1) ]   [ A − B₃D_eD₁₁⁻¹C₁       −B₃D_eD₁₁⁻¹C₁        −B₃F    ] [ x₁(k) ]
   [ x₂(k+1) ] = [ −B₁D₁₁⁻¹C₁             A − B₁D₁₁⁻¹C₁        0       ] [ x₂(k) ]
   [ x₃(k+1) ]   [ −(B₁+B₃D_e)D₁₁⁻¹C₁     −(B₁+B₃D_e)D₁₁⁻¹C₁   A − B₃F ] [ x₃(k) ]

                   [ B₃D_eD₁₁⁻¹      ]        [ B₃D_w ]
                 + [ B₁D₁₁⁻¹         ] y(k) + [ 0     ] w̃(k)
                   [ (B₁+B₃D_e)D₁₁⁻¹ ]        [ B₃D_w ]

   v(k) = [ −D_eD₁₁⁻¹C₁   −D_eD₁₁⁻¹C₁   −F ] [ x₁(k) ; x₂(k) ; x₃(k) ]
          + D_eD₁₁⁻¹ y(k) + D_w w̃(k)
We apply a state transformation:

   [ x̄₁ ]   [ x₁           ]   [ I 0 0  ] [ x₁ ]
   [ x̄₂ ] = [ x₁ + x₂ − x₃ ] = [ I I −I ] [ x₂ ]
   [ x̄₃ ]   [ x₃           ]   [ 0 0 I  ] [ x₃ ]
This results in

   [ x̄₁(k+1) ]   [ A   −B₃D_eD₁₁⁻¹C₁                 −B₃F − B₃D_eD₁₁⁻¹C₁                  ] [ x̄₁(k) ]
   [ x̄₂(k+1) ] = [ 0   A                             0                                    ] [ x̄₂(k) ]
   [ x̄₃(k+1) ]   [ 0   −B₁D₁₁⁻¹C₁ − B₃D_eD₁₁⁻¹C₁     A − B₃F − B₁D₁₁⁻¹C₁ − B₃D_eD₁₁⁻¹C₁  ] [ x̄₃(k) ]

                   [ B₃D_eD₁₁⁻¹             ]        [ B₃D_w ]
                 + [ 0                      ] y(k) + [ 0     ] w̃(k)
                   [ B₁D₁₁⁻¹ + B₃D_eD₁₁⁻¹   ]        [ B₃D_w ]

   v(k) = [ 0   −D_eD₁₁⁻¹C₁   −F − D_eD₁₁⁻¹C₁ ] [ x̄₁(k) ; x̄₂(k) ; x̄₃(k) ]
          + D_eD₁₁⁻¹ y(k) + D_w w̃(k)
It is clear that state x̄₁ is not observable and state x̄₂ is not controllable. Since A has all
its eigenvalues inside the unit disc, we can apply model reduction by deleting these states and
we obtain the reduced controller:

   x̄₃(k+1) = ( A − B₃F − B₁D₁₁⁻¹C₁ − B₃D_eD₁₁⁻¹C₁ ) x̄₃(k)
            + ( B₁D₁₁⁻¹ + B₃D_eD₁₁⁻¹ ) y(k) + B₃D_w w̃(k)

   v(k) = ( −F − D_eD₁₁⁻¹C₁ ) x̄₃(k) + D_eD₁₁⁻¹ y(k) + D_w w̃(k)

This is exactly equal to the optimal predictive controller derived in equations (5.133)
and (5.134).
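The model reduction step can be verified numerically: starting from zero initial conditions, x̄₂(k) = x₁(k) + x₂(k) − x₃(k) stays identically zero and x̄₁ never reaches the output, so the full three-state controller (6.17), (6.18), (6.23), (6.24) and the reduced controller produce the same input sequence v(k). A sketch using the scalar numbers of example 36 below (A = 2, B₁ = 1.6, B₃ = 0.2, C₁ = D₁₁ = 1, F = 6, D_w = 3, D_e = 2), driven by arbitrary y and w̃ sequences:

```python
import numpy as np

A, B1, B3, C1, D11 = 2.0, 1.6, 0.2, 1.0, 1.0
F, Dw, De = 6.0, 3.0, 2.0

rng = np.random.default_rng(2)
T = 20
y = rng.standard_normal(T)
w = rng.standard_normal(T)

# Full IMC controller: model state x1, pre-filter state x2, Q state x3.
x1 = x2 = x3 = 0.0
v_full = []
for k in range(T):
    yhat = C1 * x1                                   # model output
    ehat = -C1 / D11 * x2 + (y[k] - yhat) / D11      # pre-filter (6.18)
    v = -F * x3 + Dw * w[k] + De * ehat              # (6.24)
    v_full.append(v)
    x1 = A * x1 + B3 * v                             # model G~
    x2 = (A - B1 * C1 / D11) * x2 + B1 / D11 * (y[k] - yhat)
    x3 = (A - B3 * F) * x3 + B3 * Dw * w[k] + (B1 + B3 * De) * ehat  # (6.23)

# Reduced controller obtained after the state transformation.
x3r = 0.0
v_red = []
for k in range(T):
    v_red.append((-F - De * C1 / D11) * x3r + De / D11 * y[k] + Dw * w[k])
    x3r = ((A - B3 * F - B1 * C1 / D11 - B3 * De * C1 / D11) * x3r
           + (B1 / D11 + B3 * De / D11) * y[k] + B3 * Dw * w[k])

diff = max(abs(np.array(v_full) - np.array(v_red)))
print(diff)  # ~0: both controllers generate the same v(k)
```

Note that x₁ grows (A = 2 is unstable), but the growing contributions of x₁ and x₂ cancel exactly in ê, which is precisely why the unobservable/uncontrollable states can be deleted.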
Remark 33: Note that the condition of a stable Q is equivalent to the condition that all
eigenvalues of (A − B₃F) are strictly inside the unit circle. (The condition of a stable inverse
H⁻¹ is equivalent to the condition that all eigenvalues of (A − B₁D₁₁⁻¹C₁) are strictly inside
the unit circle.)
Example 34 : computation of the functions Q₁(q) and Q₂(q) for a given MPC controller

Consider the system of example 7 on page 52 with the MPC controller computed in example
24 on page 97. Using equations (6.23) and (6.24) we can give the functions Q₁(q) and Q₂(q)
as follows:

   Q₁(q) ~ [ A − B₃F , B₃D_w ; −F , D_w ]

   Q₂(q) ~ [ A − B₃F , B₁ + B₃D_e ; −F , D_e ]

Note that

   Q₁(q) = Q̄₁(q) D_w
where

   Q̄₁(q) ~ [ A − B₃F , B₃ ; −F , I ]

Now we can compute Q̄₁ and Q₂ in polynomial form:

   Q̄₁(q) = (z − 1)(z − 0.5)(z − 0.2) / (z³ − 1.3323 z² + 0.4477 z)

   Q₂(q) = 0.6339 (z² − 1.0534 z + 0.3265)(z − 0.2) / (z³ − 1.3323 z² + 0.4477 z)
6.4.2 The Youla parametrization

The IMC scheme provides a nice framework for the case where the process is stable. For
the unstable case we can use the Youla parametrization [48], [73], [65]. We consider the process
G(q) and noise filter H(q), satisfying

   y(k) = G(q) v(k) + H(q) e(k)   (6.25)

For simplicity reasons we will consider F(q) = 0 (known disturbances are not taken into
account). The extension for F(q) ≠ 0 is straightforward.
Consider square transfer matrices

   Σ(q) = [ M(q) , N(q) ; X(q) , Y(q) ],   Σ⁻¹(q) = Σ̃(q) = [ Ỹ(q) , −Ñ(q) ; −X̃(q) , M̃(q) ],

satisfying the following properties:

1. The system Σ(q) is stable.
2. The inverse system Σ⁻¹(q) is stable.
3. M(q), M̃(q), Y(q) and Ỹ(q) are square and invertible systems.
4. G(q) = M⁻¹(q) N(q) = Ñ(q) M̃⁻¹(q).
5. H(q) = M⁻¹(q) R(q), where R(q) and R⁻¹(q) are both stable.

The above factorization described in items 1 to 4 is denoted as a doubly coprime factorization
of G(q). The last item is necessary to be able to stabilize the process. Now consider a
controller given by the following parametrization, denoted as the Youla parametrization:
   v(k) = [ Y(q) + Q₂(q) N(q) ]⁻¹ [ X(q) + Q₂(q) M(q) ] y(k)
        + [ Y(q) + Q₂(q) N(q) ]⁻¹ Q₁(q) w̃(k)   (6.26)

where M(q), N(q), X(q) and Y(q) are found from a doubly coprime factorization of the plant
model G(q), and Q₁(q) and Q₂(q) are transfer functions with the appropriate dimensions.
For this controller we can make the following proposition:

Proposition 35 Consider the process (6.25) and the Youla parametrization (6.26). The
closed loop description is then given by:

   v(k) = M̃(q) Q₁(q) w̃(k) + [ X̃(q) + M̃(q) Q₂(q) ] R(q) e(k)   (6.27)
   y(k) = Ñ(q) Q₁(q) w̃(k) + [ Ỹ(q) + Ñ(q) Q₂(q) ] R(q) e(k)   (6.28)

The closed loop is stable if and only if both Q₁ and Q₂ are stable.
The closed loop of the plant model G(q) and noise model H(q) together with the Youla-based
controller is visualised in figure 6.3.

Figure 6.3: Youla parametrization (block diagram: controller built from the blocks Q₁, Q₂, X, Y⁻¹, M and N, in closed loop with the process G and noise model H)
The Youla parametrization (6.26) gives all controllers stabilizing G(q) for stable Q₁(q) and
Q₂(q). For a stabilizing MPC controller we can compute the corresponding functions Q₁(q)
and Q₂(q):
The MPC controller in a Youla configuration

The decomposition of G(q) into two stable functions M(q) and N(q) (the coprime factorization)
is not unique. We will consider two possible choices for the factorization:

Factorization, based on the inverse noise model:
A possible state-space realization of Σ(q) is the following:

   Σ_ij(q) = C_{Σ,i} (qI − A_Σ)⁻¹ B_{Σ,j} + D_{Σ,ij}
for

   Σ(q) = [ M(q) , N(q) ; X(q) , Y(q) ] = [ H⁻¹(q) , H⁻¹(q)G(q) ; X(q) , Y(q) ]

        ~ [ A − B₁D₁₁⁻¹C₁ | B₁D₁₁⁻¹ , −B₃
            −D₁₁⁻¹C₁      | D₁₁⁻¹   , 0
            −F            | 0       , I  ]

and we find

   R(q) = M(q) H(q) = I.

And we choose

   Q₁ = D_w,   Q₂ = D_e.

The poles of Σ(q) are equal to the eigenvalues of the matrix A − B₁D₁₁⁻¹C₁, so equal to the
poles of the inverse noise model H⁻¹(q). For Σ⁻¹(q) we find

   Σ⁻¹(q) = Σ̃(q) = [ Ỹ(q) , −Ñ(q) ; −X̃(q) , M̃(q) ]

        ~ [ A − B₃F | B₁  , −B₃
            C₁      | D₁₁ , 0
            F       | 0   , I  ]

and so the poles of Σ⁻¹(q) are equal to the eigenvalues of A − B₃F. So, if the eigenvalues of
(A − B₁D₁₁⁻¹C₁) and (A − B₃F) are inside the unit circle, the controller will be stabilizing
for any stable Q₁ and Q₂. In our case Q₁ = D_w and Q₂ = D_e are constant matrices and
thus stable.
Factorization, based on a stable plant model:
If G, H and H⁻¹ are stable, we can also make another choice for Σ:

   Σ(q) = [ M(q) , N(q) ; X(q) , Y(q) ] = [ H⁻¹(q) , H⁻¹(q)G(q) ; 0 , I ]

Σ(q) is stable, and so is Σ⁻¹(q), given by:

   Σ⁻¹(q) = [ H(q) , −G(q) ; 0 , I ]

By comparing figure 6.2 and figure 6.3 we find that for the above choices the Youla
parametrization boils down to the IMC scheme by choosing Q_YOULA = Q_IMC. We find
that the IMC scheme is a special case of the Youla parametrization.
Example 36 : computation of the functions Σ, Σ⁻¹, Q₁(q) and Q₂(q)

Given a system with process model G and noise model H given by

   G = 0.2 q⁻¹ / ( 1 − 2 q⁻¹ ),   H = ( 1 − 0.4 q⁻¹ ) / ( 1 − 2 q⁻¹ )
For this model we obtain the state space model

   [ A , B₁ , B₃ ; C₁ , D₁₁ , D₁₃ ] = [ 2 , 1.6 , 0.2 ; 1 , 1 , 0 ]

Now assume that the optimal state feedback for the MPC controller is given by

   F = 6,   D_w = 3,   D_e = 2,

then

   [ A_Σ     | B_{Σ,1}  , B_{Σ,2}
     C_{Σ,1} | D_{Σ,11} , D_{Σ,12}
     C_{Σ,2} | D_{Σ,21} , D_{Σ,22} ]
      = [ A − B₁D₁₁⁻¹C₁ | B₁D₁₁⁻¹ , −B₃ ; −D₁₁⁻¹C₁ | D₁₁⁻¹ , 0 ; −F | 0 , I ]
      = [ 0.4 | 1.6 , −0.2 ; −1 | 1 , 0 ; −6 | 0 , 1 ]

and for Σ⁻¹(q) we find

   [ A_{Σ⁻¹}     | B_{Σ⁻¹,1}  , B_{Σ⁻¹,2}
     C_{Σ⁻¹,1}   | D_{Σ⁻¹,11} , D_{Σ⁻¹,12}
     C_{Σ⁻¹,2}   | D_{Σ⁻¹,21} , D_{Σ⁻¹,22} ]
      = [ A − B₃F | B₁ , −B₃ ; C₁ | D₁₁ , 0 ; F | 0 , I ]
      = [ 0.8 | 1.6 , −0.2 ; 1 | 1 , 0 ; 6 | 0 , 1 ]

We see that the poles of both Σ and Σ⁻¹ are stable (0.4 and 0.8, respectively). Finally,
Q₁ = D_w = 3 and Q₂ = D_e = 2.
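The factorization in this example can be checked numerically: evaluating both state-space realizations at a few complex frequencies, the product Σ(q)Σ⁻¹(q) must be the identity matrix, and Σ₁₁ must reproduce M = H⁻¹ = (q − 2)/(q − 0.4). A minimal sketch in Python/NumPy:

```python
import numpy as np

def tf(q, A, B, C, D):
    """Evaluate C (qI - A)^{-1} B + D for a state-space realization."""
    n = A.shape[0]
    return C @ np.linalg.solve(q * np.eye(n) - A, B) + D

# Sigma: realization based on the inverse noise model (example 36 numbers).
A_s = np.array([[0.4]])
B_s = np.array([[1.6, -0.2]])
C_s = np.array([[-1.0], [-6.0]])
D_s = np.eye(2)

# Sigma^{-1}: pole at A - B3 F = 0.8.
A_i = np.array([[0.8]])
B_i = np.array([[1.6, -0.2]])
C_i = np.array([[1.0], [6.0]])
D_i = np.eye(2)

ok = True
for q in [2.0, 1.5 + 0.7j, -1.3, 0.1 + 2.0j]:
    P = tf(q, A_s, B_s, C_s, D_s) @ tf(q, A_i, B_i, C_i, D_i)
    ok &= np.allclose(P, np.eye(2), atol=1e-10)
    # Sigma_11 = M = H^{-1} = (q - 2)/(q - 0.4):
    ok &= bool(np.isclose(tf(q, A_s, B_s, C_s, D_s)[0, 0], (q - 2) / (q - 0.4)))
print(ok)  # True
```

Such a spot check is a cheap way to catch sign or ordering mistakes when setting up doubly coprime factorizations by hand.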
6.5 Robustness

In the last decades the focus of control theory has been shifting more and more towards the control of
uncertain systems. A disadvantage of common finite horizon predictive control techniques
(as presented in chapters 2 to 5) is their inability to deal explicitly with uncertainty
in the process model. In real-life engineering problems, system parameters are uncertain,
because they are difficult to estimate or because they vary in time. That means that we
do not know the exact values of the parameters, but only a set in which the system moves.
A controller is called robustly stable if a small change in the system parameters does
not destabilize the system. It is said to give robust performance if the performance does
not deteriorate significantly for small changes in the system parameters. The design and
analysis of linear time-invariant (LTI) robust controllers for linear systems has been studied
extensively in the last decade.
In practice a predictive controller can usually be tuned quite easily to give a stable closed
loop and to be robust with respect to model mismatch. However, in order to be able to
guarantee stability, feasibility and/or robustness, and to obtain better and easier tuning
rules by increased insight and better algorithms, the development of a general stability
and robustness theory for predictive control has become an important research topic.
One way to obtain robustness in predictive control is by careful tuning (Clarke & Mohtadi
[13], Soeterboek [61], Lee & Yu [39]). This method gives quite satisfactory results in the
unconstrained case.
In the constrained case robustness analysis is much more difficult, resulting in more complex
and/or conservative tuning rules (Zafiriou [77], Genceli & Nikolaou [30]). One approach to
guarantee robust stability is the use of an explicit contraction constraint (Zheng & Morari
[80], De Vries & van den Boom [74]). The two main disadvantages of this approach are the
resulting non-linear optimization problem and the large but unclear influence of the choice
of the contraction constraint on the controlled system.
An approach to guarantee robust performance is to guarantee that the criterion function
is a contraction by optimizing the maximum of the criterion function over all possible
models (Zheng & Morari [80]). The main disadvantages of this method are the need to use
polytopic model uncertainty descriptions, the use of less general criterion functions and,
especially, the difficult min-max optimization. In the mid-nineties Kothare et al. [35]
derived an LMI-based MPC method (see chapter 7) that circumvents all of these disadvantages;
however, it may become quite conservative. This method, and its extensions,
will be discussed in sections 7.4 and 7.5, respectively.
Robust stability for the IMC scheme

Next we will study what happens if there is a mismatch between the plant model G̃ and
the true plant G in the Internal Model Control (IMC) scheme of section 6.4.1. We will
elaborate on the additive model error case, so the model error is given by:

   Δ(q) = W_a⁻¹(q) ( G(q) − G̃(q) ),

where W_a(q) is a given stable weighting filter with stable inverse W_a⁻¹(q), such that
‖Δ‖∞ ≤ 1. Note that for SISO systems this means that

   |G(e^{jθ}) − G̃(e^{jθ})| ≤ |W_a(e^{jθ})|   for all θ ∈ R,

and so W_a(e^{jθ}) gives an upper bound on the magnitude of the model error.
We make the following assumptions:

1. G(q) and G̃(q) are stable.
2. H(q) and H̃(q) are stable.
3. H⁻¹(q) and H̃⁻¹(q) are stable.
4. The MPC controller for the model (G̃(q), H̃(q)) is stabilizing.

The first three assumptions guarantee that the nominal control problem fits in the IMC
framework. The fourth assumption is necessary, because it only makes sense to study
robust stability if the MPC controller is stabilizing in the nominal case.
Now consider the IMC scheme given by (6.16)–(6.19) in section 6.4.1. We assume
that the nominal MPC controller is stabilizing, so Q₁(q) and Q₂(q) are stable. The closed
loop is now given by:
   v(k) = [ I − Q₂(q) H̃⁻¹(q) ( G(q) − G̃(q) ) ]⁻¹ [ Q₁(q) w̃(k) + Q₂(q) H̃⁻¹(q) H(q) e(k) ]

   y(k) = G(q) [ I − Q₂(q) H̃⁻¹(q) ( G(q) − G̃(q) ) ]⁻¹ [ Q₁(q) w̃(k) + Q₂(q) H̃⁻¹(q) H(q) e(k) ]
          + H(q) e(k)
It is clear that the perturbed closed loop is stable if and only if

   [ I − Q₂(q) H̃⁻¹(q) ( G(q) − G̃(q) ) ]⁻¹

is stable. A sufficient condition for this is that the matrix I − Q₂(q)H̃⁻¹(q)(G(q) − G̃(q))
is non-singular for all |q| ≥ 1. This leads to the sufficient (small-gain) condition for stability:

   ‖ Q₂ H̃⁻¹ (G − G̃) ‖∞ < 1,

which is implied by the (more conservative) condition

   ‖ Q₂ H̃⁻¹ W_a ‖∞ · ‖ W_a⁻¹ (G − G̃) ‖∞ < 1.

We know that ‖ W_a⁻¹ (G − G̃) ‖∞ ≤ 1, and so a sufficient condition is

   ‖ Q₂ H̃⁻¹ W_a ‖∞ < 1.
Define the function T = Q₂ H̃⁻¹ W_a. This function T depends on the integer design parameters
N, N_c, N_m, but also on the choice of the performance index matrices C₂, D₂₁, D₂₂
and D₂₃. A tuning procedure for robust stability can follow the next steps: Design the
controller for some initial settings (see also chapter 8) and compute ‖T‖∞. If ‖T‖∞ < 1, we
already have robust stability and we can use the initial setting as the final setting. If ‖T‖∞ ≥ 1,
the controller has to be retuned to obtain robust stability:
we can perform a mixed-integer optimization to find a better parameter setting and terminate
as soon as ‖T‖∞ < 1, or proceed to obtain an even better robustness margin
(‖T‖∞ ≪ 1).
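For SISO models, ‖T‖∞ can be estimated by simply gridding the unit circle. The sketch below does this for the scalar model of example 36 (A = 2, B₁ = 1.6, B₃ = 0.2, C₁ = D₁₁ = 1, F = 6, D_e = 2), using the state-space expressions for Q₂ and H̃⁻¹ from section 6.4.1, and two assumed (hypothetical) error weights W_a, one small enough to pass the test and one that fails it:

```python
import numpy as np

A, B1, B3, C1, D11 = 2.0, 1.6, 0.2, 1.0, 1.0
F, De = 6.0, 2.0

def Hinv(q):
    # H~^{-1}(q) = (q - 2)/(q - 0.4) for this model
    return 1.0 / D11 - (C1 / D11) * (B1 / D11) / (q - (A - B1 * C1 / D11))

def Q2(q):
    # Q2(q) ~ (A - B3 F, B1 + B3 De; -F, De)
    return -F * (B1 + B3 * De) / (q - (A - B3 * F)) + De

def hinf_norm(Wa, n_grid=2000):
    """Grid-based estimate of ||Q2 Hinv Wa||_inf on the unit circle
    (Wa assumed constant here, for illustration only)."""
    q = np.exp(1j * np.linspace(0.0, np.pi, n_grid))
    return float(np.max(np.abs(Q2(q) * Hinv(q) * Wa)))

print(hinf_norm(0.001) < 1.0)  # True : robust stability guaranteed
print(hinf_norm(0.05) < 1.0)   # False: retuning would be required
```

In practice W_a is frequency dependent and the grid should be fine around resonances; the constant weights used here only illustrate the pass/fail mechanics of the tuning procedure.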
Tuning of the noise model in the IMC configuration

Until now we assumed that the noise model

   H(q) = C₁ (qI − A)⁻¹ B₁ + D₁₁

was correct, with a stable inverse (all eigenvalues of A − B₁D₁₁⁻¹C₁ strictly inside the unit
circle). In the case of model uncertainty, however, the state observer B₁, based on the above
noise model, may not work satisfactorily and may even destabilize the uncertain system. In
many papers the observer matrix B₁ is therefore considered as a design variable and is
tuned to obtain good robustness properties. By changing B₁, we can change the dynamics
of T and thus improve the robustness of the predictive controller.
We make the following assumption:

   D₂₁ = D₂₁,o B₁,

which means that the matrix D_e is linear in B₁, and so there exists a matrix D_{e,o} such that
D_e = D_{e,o} B₁. This will make the computations much easier.
We already had the condition

   ‖ Q₂ H̃⁻¹ W_a ‖∞ < 1.

Note that Q₂(q) is given by

   Q₂(q) = −F (qI − A + B₃F)⁻¹ (B₁ + B₃D_e) + D_e
         = [ −F (qI − A + B₃F)⁻¹ (I + B₃D_{e,o}) + D_{e,o} ] B₁
         = Q_{2,o}(q) B₁.

So a sufficient condition for stability is

   ‖ Q_{2,o} B₁ H̃⁻¹ W_a ‖∞ < 1,

where we now use B₁ as a tuning variable. Note that for B₁ → 0 we have
‖Q_{2,o} B₁ H̃⁻¹ W_a‖∞ → 0, and so by reducing the magnitude of the matrix B₁ we can
always find a value that will robustly stabilize the closed-loop system. A disadvantage of
this procedure is that the noise model is disturbed and noise rejection is no longer optimal.
To retain some disturbance rejection, we should detune the matrix B₁ no more
than necessary to obtain robustness.
Robust stability in the case of an IIO model

In this section we will consider robust stability in the case where we use an IIO model for
the MPC design, with an additive model error between the IO plant model G̃_o and the true
plant G_o. Note that we cannot use the IMC scheme in this case, because the plant is not
stable. We therefore use the Youla configuration. We make the following assumptions:

1. G_i(q) = Δ⁻¹(q) G_o(q) and G̃_i(q) = Δ⁻¹(q) G̃_o(q).
2. H_i(q) = Δ⁻¹(q) H_o(q) and H̃_i(q) = Δ⁻¹(q) H̃_o(q).
3. G_o(q), G̃_o(q), H_o⁻¹(q) and H̃_o⁻¹(q) are stable.
4. The MPC controller for the nominal model (G̃_i(q), H̃_i(q)) is stabilizing.
5. The true IIO plant is given by G_i(q) = Δ⁻¹(q) [ G̃_o(q) + W_a(q) Δ̄(q) ], where the
uncertainty bound is given by ‖Δ̄‖∞ ≤ 1, and W_a(q) is a given stable weighting filter
with stable inverse W_a⁻¹(q).

We introduce a coprime factorization of the plant model

   G̃_o(q) = M⁻¹(q) N(q) = Ñ(q) M̃⁻¹(q),

and define R(q) = M(q) H̃_i(q) and S(q) = Δ⁻¹(q) M(q). Now R(q), S(q) and R⁻¹(q) are
all stable. (If we use the coprime factorization based on the inverse noise model
on page 122, we will obtain R(q) = I and S(q) = H̃_o⁻¹(q).)
The closed loop configuration is now given in figure 6.4.

Figure 6.4: Youla parametrization (block diagram: the Youla-based controller built from Q₁, Q₂, X, Y⁻¹, M and N, in closed loop with the plant model G̃_o, the additive uncertainty channel W_a, Δ̄ entering through S, and the noise entering through R and M⁻¹)

The controller is given by
   ( Y + Q₂ N ) v(k) = ( X + Q₂ M ) y(k) + Q₁ w̃(k)

and the plant by

   M y(k) = N v(k) + R e(k) + S φ(k),

where φ = Δ̄ W_a v denotes the output of the uncertainty block. By substitution, and using
the fact that M̃X = X̃M, we obtain

   ( Y + Q₂ N ) v(k) = ( X + Q₂ M ) y(k) + Q₁ w̃(k)
                     = ( X M⁻¹ + Q₂ ) ( N v(k) + R e(k) + S φ(k) ) + Q₁ w̃(k)
                     = ( M̃⁻¹ X̃ + Q₂ ) ( N v(k) + R e(k) + S φ(k) ) + Q₁ w̃(k)

   ( Y + Q₂ N − M̃⁻¹ X̃ N − Q₂ N ) v(k) = ( M̃⁻¹ X̃ + Q₂ ) ( R e(k) + S φ(k) ) + Q₁ w̃(k)

   ( M̃ Y − X̃ N ) v(k) = ( X̃ + M̃ Q₂ ) ( R e(k) + S φ(k) ) + M̃ Q₁ w̃(k)
By using the fact that M̃Y − X̃N = I we obtain

   v(k) = ( X̃ + M̃ Q₂ ) ( R e(k) + S φ(k) ) + M̃ Q₁ w̃(k),

and hence the signal W_a v entering the uncertainty block satisfies

   W_a v(k) = W_a ( X̃ + M̃ Q₂ ) ( R e(k) + S φ(k) ) + W_a M̃ Q₁ w̃(k).

A necessary and sufficient condition for robust stability is now given by

   ‖ W_a ( X̃ + M̃ Q₂ ) S ‖∞ < 1.

Define the function T = W_a ( X̃ + M̃ Q₂ ) S. Similar to the IMC robust analysis case, this
function T depends on the integer design parameters N, N_c, N_m, but also on the choice of
the performance index matrices C₂, D₂₁, D₂₂ and D₂₃. We can use the same tuning procedure
as for the IMC case to achieve robust stability.
Chapter 7

MPC using a feedback law, based on linear matrix inequalities

In chapters 2 to 6 the basic framework of model predictive control was elaborated.
In this chapter a different approach to predictive control is presented, using linear matrix
inequalities (LMIs), based on the work of Kothare et al. [35]. The control strategy is
focussed on computing an optimal feedback law in the receding horizon framework. A
controller can be derived by solving convex optimization problems using fast and reliable
techniques. LMI problems can be solved in polynomial time, which means that they have
low computational complexity.
In fact, the field of application of linear matrix inequalities (LMIs) in systems and control
engineering is enormous, and more and more engineering problems are solved using LMIs.
Some examples of application are stability theory, model and controller reduction, robust
control, system identification and (last but not least) predictive control.
The main reasons for using LMIs in predictive control are the following:
- Stability is easily guaranteed.
- Feasibility results are easy to obtain.
- Extensions to systems with model uncertainty can be made.
- Convex properties are preserved.
Of course there are also some drawbacks:
- Although solving LMIs can be done using convex techniques, the optimization problem
  is more complex than the quadratic programming problem discussed in chapter 5.
- Feasibility in the noisy case still cannot be guaranteed.
131
132 CHAPTER 7. MPC BASED ON LINEAR MATRIX INEQUALITIES
In the section 7.1 we will discuss the main features of linear matrix inequalities, in section
7.2 we will show how LMIs can be used to solve the unconstrained MPC problem, and in
section 7.3 we will look at the inequality constrained case. Robustness issues, concerning
stability in the case of model uncertainty, are discussed in section 7.4. Finally section 7.5
will discuss some extensions of the work of Kothare et al. [35].
7.1 Linear matrix inequalities
Consider the linear matrix expression
\[
F(\theta) = F_0 + \sum_{i=1}^{m} \theta_i F_i \qquad (7.1)
\]
where \(\theta \in \mathbb{R}^{m\times 1}\), with elements \(\theta_i\), is the variable and the symmetric matrices \(F_i = F_i^T \in \mathbb{R}^{n\times n}\), \(i = 0,\ldots,m\) are given. A strict Linear Matrix Inequality (LMI) has the form
\[
F(\theta) > 0 \qquad (7.2)
\]
The inequality symbol in (7.2) means that \(F(\theta)\) is positive definite (i.e. \(x^T F(\theta)\, x > 0\) for all nonzero \(x \in \mathbb{R}^{n\times 1}\)). A nonstrict LMI has the form
\[
F(\theta) \ge 0
\]
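As a quick numerical illustration (not part of the original notes, and assuming numpy is available), the affine expression (7.1) and the definiteness test can be coded directly; `lmi_value` and `is_positive_definite` are our own helper names:

```python
import numpy as np

def lmi_value(theta, F):
    """F is a list [F0, F1, ..., Fm] of symmetric n x n matrices."""
    val = F[0].copy()
    for t, Fi in zip(theta, F[1:]):
        val += t * Fi
    return val

def is_positive_definite(M, tol=1e-9):
    # symmetric matrix => real eigenvalues; PD iff all eigenvalues > 0
    return bool(np.min(np.linalg.eigvalsh(M)) > tol)

F0 = np.array([[1.0, 0.0], [0.0, 1.0]])
F1 = np.array([[0.0, 1.0], [1.0, 0.0]])
# theta = [0.5]: F = [[1, .5], [.5, 1]] has eigenvalues 0.5 and 1.5 -> PD
assert is_positive_definite(lmi_value([0.5], [F0, F1]))
# theta = [2.0]: eigenvalues -1 and 3 -> not PD
assert not is_positive_definite(lmi_value([2.0], [F0, F1]))
```

The feasible set of theta values is convex, which is what makes LMI feasibility problems tractable.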
Properties of LMIs:

- Convexity:
  The set \(\mathcal{C} = \{\theta \mid F(\theta) > 0\}\) is convex in \(\theta\), which means that for each pair \(\theta_1, \theta_2 \in \mathcal{C}\) and for all \(\lambda \in [0,1]\) the next property holds
\[
\lambda\theta_1 + (1-\lambda)\theta_2 \in \mathcal{C}
\]

- Multiple LMIs:
  Multiple LMIs can be expressed as a single LMI, since
\[
F^{(1)}(\theta) > 0 \,,\; F^{(2)}(\theta) > 0 \,,\; \ldots \,,\; F^{(p)}(\theta) > 0
\]
  is equivalent to:
\[
\begin{bmatrix}
F^{(1)}(\theta) & 0 & \cdots & 0 \\
0 & F^{(2)}(\theta) & & 0 \\
\vdots & & \ddots & \vdots \\
0 & 0 & \cdots & F^{(p)}(\theta)
\end{bmatrix} > 0
\]
- Schur complements:
  Consider the following LMI:
\[
\begin{bmatrix} Q(\theta) & S(\theta) \\ S^T(\theta) & R(\theta) \end{bmatrix} > 0
\]
  where \(Q(\theta) = Q^T(\theta)\), \(R(\theta) = R^T(\theta)\) and \(S(\theta)\) depend affinely on \(\theta\). This is equivalent to
\[
R(\theta) > 0 \,,\qquad Q(\theta) - S(\theta)\, R^{-1}(\theta)\, S^T(\theta) > 0
\]
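The Schur-complement equivalence can be spot-checked numerically at a fixed point. The sketch below is our own code (assuming numpy), using randomly generated blocks rather than a full parametrized LMI:

```python
import numpy as np

rng = np.random.default_rng(0)

def pd(M, tol=1e-9):
    # positive definiteness of the symmetric part, via eigenvalues
    return bool(np.min(np.linalg.eigvalsh((M + M.T) / 2)) > tol)

A = rng.standard_normal((3, 3))
Q = A @ A.T + 3 * np.eye(3)          # symmetric positive definite
Bm = rng.standard_normal((2, 2))
R = Bm @ Bm.T + 3 * np.eye(2)        # symmetric positive definite
S = 0.1 * rng.standard_normal((3, 2))

block = np.block([[Q, S], [S.T, R]])
schur = Q - S @ np.linalg.inv(R) @ S.T
# block PD  <=>  R PD and Schur complement PD
assert pd(block) == (pd(R) and pd(schur))

# a failing case: make the (1,1) block indefinite
Q2 = Q - 10 * np.eye(3)
block2 = np.block([[Q2, S], [S.T, R]])
schur2 = Q2 - S @ np.linalg.inv(R) @ S.T
assert pd(block2) == (pd(R) and pd(schur2))
```

This equivalence is what turns the nonlinear matrix inequalities below (which contain terms like \(S^{-1}\)) into linear ones.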
Finally note that any symmetric matrix \(M \in \mathbb{R}^{n\times n}\) can be written as
\[
M = \sum_{i=1}^{n(n+1)/2} M_i\, m_i
\]
where \(m_i\) are the scalar entries of the upper-right triangular part of \(M\), and \(M_i\) are symmetric matrices with entries 0 or 1. For example:
\[
M = \begin{bmatrix} m_1 & m_2 \\ m_2 & m_3 \end{bmatrix}
  = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} m_1
  + \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} m_2
  + \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} m_3
\]
This means that a matrix inequality that is linear in a symmetric matrix \(M\) can easily be transformed into a linear matrix inequality with a parameter vector
\[
\theta = \begin{bmatrix} m_1 & m_2 & \ldots & m_{n(n+1)/2} \end{bmatrix}^T
\]
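The decomposition above is easy to generate mechanically; the following sketch (our own code, assuming numpy) builds the 0/1 basis matrices \(M_i\) for any \(n\) and checks the reconstruction:

```python
import numpy as np

def sym_basis(n):
    """0/1 symmetric basis matrices M_i, ordered over the upper triangle."""
    basis = []
    for i in range(n):
        for j in range(i, n):
            Mi = np.zeros((n, n))
            Mi[i, j] = Mi[j, i] = 1.0
            basis.append(Mi)
    return basis

M = np.array([[1.0, 2.0], [2.0, 3.0]])       # m1 = 1, m2 = 2, m3 = 3
theta = [1.0, 2.0, 3.0]                      # upper-triangle entries of M
M_rebuilt = sum(t * Mi for t, Mi in zip(theta, sym_basis(2)))
assert np.allclose(M, M_rebuilt)
assert len(sym_basis(3)) == 3 * 4 // 2       # n(n+1)/2 basis matrices
```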
Before we look at MPC using LMIs, we consider the simpler cases of Lyapunov stability of an autonomous system and the properties of a stabilizing state-feedback.
Lyapunov theory:

Probably the most elementary LMI is related to Lyapunov theory. The difference state equation
\[
x(k+1) = A\, x(k)
\]
is stable (i.e. all state trajectories \(x(k)\) go to zero) if and only if there exists a positive-definite matrix \(P\) such that
\[
A^T P A - P < 0
\]
For this system the Lyapunov function \(V(k) = x^T(k)\, P\, x(k)\) is positive definite for \(P > 0\), and the increment \(\Delta V(k) = V(k+1) - V(k)\) is negative definite because
\[
\begin{aligned}
V(k+1) - V(k) &= x^T(k+1)\, P\, x(k+1) - x^T(k)\, P\, x(k) \\
&= (A x(k))^T P\, (A x(k)) - x^T(k)\, P\, x(k) \\
&= x^T(k)\,(A^T P A - P)\, x(k) \\
&< 0
\end{aligned}
\]
for \(A^T P A - P < 0\).
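For a concrete check of this equivalence, \(P\) can be constructed explicitly from the convergent series \(P = \sum_k (A^T)^k Q A^k\), which solves \(A^T P A - P = -Q\) for any stable \(A\); the helper below is our own sketch (assuming numpy):

```python
import numpy as np

def discrete_lyapunov(A, Q, n_terms=200):
    """Truncated series P = sum_k (A^T)^k Q A^k; for stable A it solves
    A^T P A - P = -Q (up to truncation error)."""
    P = np.zeros_like(Q)
    Ak = np.eye(A.shape[0])
    for _ in range(n_terms):
        P += Ak.T @ Q @ Ak
        Ak = A @ Ak
    return P

A = np.array([[0.5, 0.1], [0.0, 0.3]])       # spectral radius < 1 -> stable
assert np.max(np.abs(np.linalg.eigvals(A))) < 1
P = discrete_lyapunov(A, np.eye(2))
# A^T P A - P should be negative definite (equal to -Q = -I here)
assert np.max(np.linalg.eigvalsh(A.T @ P @ A - P)) < 0
assert np.allclose(A.T @ P @ A - P, -np.eye(2), atol=1e-8)
```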
134 CHAPTER 7. MPC BASED ON LINEAR MATRIX INEQUALITIES
Stabilizing state-feedback:

Consider the difference state equation:
\[
x(k+1) = A\, x(k) + B\, v(k)
\]
where \(x(k)\) is the state and \(v(k)\) is the input. When we apply a state feedback \(v(k) = F\, x(k)\), we obtain:
\[
x(k+1) = A\, x(k) + B F\, x(k) = (A + BF)\, x(k)
\]
and so the corresponding Lyapunov inequality becomes:
\[
(A^T + F^T B^T)\, P\, (A + BF) - P < 0 \,,\qquad P > 0
\]
By choosing \(P = S^{-1}\) this is equivalent to
\[
(A^T + F^T B^T)\, S^{-1}\, (A + BF) - S^{-1} < 0 \,,\qquad S > 0
\]
Pre- and post-multiplying with \(S\) results in
\[
S\,(A^T + F^T B^T)\, S^{-1}\, (A + BF)\, S - S < 0 \,,\qquad S > 0
\]
By choosing \(F = Y S^{-1}\) we obtain the condition:
\[
(S A^T + Y^T B^T)\, S^{-1}\, (A S + B Y) - S < 0 \,,\qquad S > 0
\]
which can be rewritten (using the Schur complement transformation) as
\[
\begin{bmatrix} S & S A^T + Y^T B^T \\ A S + B Y & S \end{bmatrix} > 0 \,,\qquad S > 0 \qquad (7.3)
\]
which are LMIs in \(S\) and \(Y\). So any \(S\) and \(Y\) satisfying the above LMI result in a stabilizing state-feedback \(F = Y S^{-1}\).
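Conversely, a feasible pair \((S, Y)\) for (7.3) can be constructed from any stabilizing \(F\) by solving a Lyapunov equation and setting \(S = P^{-1}\), \(Y = F S\); the sketch below (our own code, assuming numpy) verifies the LMI numerically for such a pair:

```python
import numpy as np

def dlyap(A, Q, n=300):
    # truncated series solution of A^T P A - P = -Q for stable A
    P = np.zeros_like(Q)
    Ak = np.eye(A.shape[0])
    for _ in range(n):
        P += Ak.T @ Q @ Ak
        Ak = A @ Ak
    return P

A = np.array([[1.2, 1.0], [0.0, 0.8]])   # unstable open loop (eigenvalue 1.2)
B = np.array([[0.0], [1.0]])
F = np.array([[-1.3, -1.7]])             # A + BF has spectral radius ~0.47
Acl = A + B @ F
assert np.max(np.abs(np.linalg.eigvals(Acl))) < 1

P = dlyap(Acl, np.eye(2))                # Acl^T P Acl - P = -I
S = np.linalg.inv(P)
Y = F @ S
lmi = np.block([[S, S @ A.T + Y.T @ B.T],
                [A @ S + B @ Y, S]])
assert np.min(np.linalg.eigvalsh((lmi + lmi.T) / 2)) > 0   # LMI (7.3) holds
assert np.allclose(Y @ np.linalg.inv(S), F)                # F recovered as Y S^{-1}
```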
7.2 Unconstrained MPC using linear matrix inequalities

Consider the system
\[
\begin{aligned}
x(k+1) &= A\, x(k) + B_2\, w(k) + B_3\, v(k) \qquad &(7.4)\\
y(k) &= C_1\, x(k) + D_{12}\, w(k) &(7.5)\\
z(k) &= C_2\, x(k) + D_{22}\, w(k) + D_{23}\, v(k) &(7.6)
\end{aligned}
\]
For simplicity reasons, in this chapter, we consider \(e(k) = 0\). Like in chapter 5, we consider a constant external signal \(w\), a zero steady-state performance signal \(z_{ss}\) and a weighting matrix \(\Gamma(j)\) which is equal to identity, so
\[
\begin{aligned}
w(k+j) &= w_{ss} \quad\text{for all } j \ge 0\\
z_{ss} &= 0\\
\Gamma(j) &= I \quad\text{for all } j \ge 0
\end{aligned}
\]
Let the system have a steady-state, given by \((v_{ss}, x_{ss}, w_{ss}, z_{ss}) = (D_{ssv}\, w_{ss},\, D_{ssx}\, w_{ss},\, w_{ss},\, 0)\). We define the shifted versions of the input and state, which are zero in steady-state:
\[
\tilde v(k) = v(k) - v_{ss} \,,\qquad \tilde x(k) = x(k) - x_{ss}
\]
Now we consider the problem of finding at time instant \(k\) an optimal state feedback \(\tilde v(k+j|k) = F(k)\, \tilde x(k+j|k)\) to minimize the quadratic objective
\[
\min_{F(k)} J(k)
\]
where \(J(k)\) is given by
\[
J(k) = \sum_{j=0}^{\infty} \hat z^T(k+j|k)\, \hat z(k+j|k)
\]
in which \(\hat z(k+j|k)\) is the prediction of \(z(k+j)\), based on knowledge up to time \(k\).
\[
\begin{aligned}
J(k) &= \sum_{j=0}^{\infty} \hat z^T(k+j|k)\, \hat z(k+j|k) \\
&= \sum_{j=0}^{\infty} \big(C_2\, \hat x(k+j|k) + D_{22}\, w(k+j) + D_{23}\, v(k+j|k)\big)^T \big(C_2\, \hat x(k+j|k) + D_{22}\, w(k+j) + D_{23}\, v(k+j|k)\big) \\
&= \sum_{j=0}^{\infty} \big(C_2\, \tilde x(k+j|k) + D_{23}\, \tilde v(k+j|k)\big)^T \big(C_2\, \tilde x(k+j|k) + D_{23}\, \tilde v(k+j|k)\big) \\
&= \sum_{j=0}^{\infty} \tilde x^T(k+j|k)\,\big(C_2^T + F^T(k)\, D_{23}^T\big)\big(C_2 + D_{23}\, F(k)\big)\, \tilde x(k+j|k)
\end{aligned}
\]
This problem can be translated to a minimization problem with LMI-constraints. Consider the quadratic function \(V(k) = \tilde x^T(k)\, P\, \tilde x(k)\), \(P > 0\), that satisfies
\[
\Delta V(k) = V(k+1) - V(k) < -\,\tilde x^T(k)\big(C_2^T + F^T(k)\, D_{23}^T\big)\big(C_2 + D_{23}\, F(k)\big)\tilde x(k) \qquad (7.7)
\]
for every trajectory. Note that because \(\Delta V(k) < 0\) and \(V(k) > 0\), it follows that \(\Delta V(\infty) = 0\). This implies that \(\tilde x(\infty) = 0\) and therefore \(V(\infty) = 0\). Now there holds:
\[
\begin{aligned}
V(k) &= V(k) - V(\infty) = -\sum_{j=0}^{\infty} \Delta V(k+j) \\
&> \sum_{j=0}^{\infty} \tilde x^T(k+j|k)\,\big(C_2^T + F^T(k)\, D_{23}^T\big)\big(C_2 + D_{23}\, F(k)\big)\, \tilde x(k+j|k) \\
&= J(k)
\end{aligned}
\]
so \(\tilde x^T(k)\, P\, \tilde x(k)\) is a Lyapunov function and at the same time an upper bound on \(J(k)\). Now derive
\[
\begin{aligned}
\Delta V(k) &= V(k+1) - V(k) \\
&= \tilde x^T(k+1|k)\, P\, \tilde x(k+1|k) - \tilde x^T(k)\, P\, \tilde x(k) \\
&= \tilde x^T(k)\big((A + B_3 F)^T P\, (A + B_3 F)\big)\tilde x(k) - \tilde x^T(k)\, P\, \tilde x(k) \\
&= \tilde x^T(k)\big((A + B_3 F)^T P\, (A + B_3 F) - P\big)\tilde x(k)
\end{aligned}
\]
And so condition (7.7) is the same as
\[
(A + B_3 F)^T P\, (A + B_3 F) - P + (C_2 + D_{23} F)^T (C_2 + D_{23} F) < 0
\]
By setting \(P = \gamma\, S^{-1}\) and \(F = Y S^{-1}\) we obtain the following condition:
\[
\gamma\,(A^T + S^{-1} Y^T B_3^T)\, S^{-1}\, (A + B_3 Y S^{-1}) - \gamma\, S^{-1} + (C_2^T + S^{-1} Y^T D_{23}^T)(C_2 + D_{23} Y S^{-1}) < 0
\]
Pre- and post-multiplying with \(S\) and dividing by \(\gamma\) results in
\[
(S A^T + Y^T B_3^T)\, S^{-1}\, (A S + B_3 Y) - S + \gamma^{-1} (S C_2^T + Y^T D_{23}^T)(C_2 S + D_{23} Y) < 0 \qquad (7.8)
\]
or
\[
S - \begin{bmatrix} S A^T + Y^T B_3^T & (S C_2^T + Y^T D_{23}^T)\,\gamma^{-1/2} \end{bmatrix}
\begin{bmatrix} S^{-1} & 0 \\ 0 & I \end{bmatrix}
\begin{bmatrix} A S + B_3 Y \\ \gamma^{-1/2} (C_2 S + D_{23} Y) \end{bmatrix} > 0
\]
which can be rewritten (using the Schur complement transformation) as
\[
\begin{bmatrix}
S & S A^T + Y^T B_3^T & (S C_2^T + Y^T D_{23}^T)\,\gamma^{-1/2} \\
A S + B_3 Y & S & 0 \\
\gamma^{-1/2} (C_2 S + D_{23} Y) & 0 & I
\end{bmatrix} > 0 \,,\qquad S > 0 \qquad (7.9)
\]
So any \(\gamma\), \(S\) and \(Y\) satisfying the above LMI gives an upper bound
\[
V(k) = \gamma\, \tilde x^T(k)\, S^{-1}\, \tilde x(k) > J(k)
\]
Finally introduce an additional constraint that
\[
\tilde x^T(k)\, S^{-1}\, \tilde x(k) \le 1 \qquad (7.10)
\]
which, using the Schur complement property, is equivalent to
\[
\begin{bmatrix} 1 & \tilde x^T(k) \\ \tilde x(k) & S \end{bmatrix} \ge 0 \qquad (7.11)
\]
Now
\[
\gamma > J \qquad (7.12)
\]
and so a minimization of \(\gamma\) subject to the LMI constraints (7.9) and (7.11) is equivalent to a minimization of the upper bound of the performance index.

Note that the LMI constraint (7.9) looks more complicated than expression (7.8). However, (7.9) is linear in \(S\) and \(Y\). A convex optimization algorithm, denoted as the interior point algorithm, can solve convex optimization problems with many LMI constraints (up to a 1000 constraints) in a fast and robust way.
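The claim that \(\gamma\) upper-bounds \(J(k)\) can be checked by simulation. In the sketch below (our own code, assuming numpy; not from the notes) we construct \(P\) for a fixed stabilizing \(F\) from a truncated Lyapunov series and pick \(\gamma\) so that \(\tilde x(k)\) lies exactly on the ellipsoid boundary, which makes the bound tight:

```python
import numpy as np

def dlyap(A, Q, n=400):
    # truncated series solution of A^T P A - P = -Q for stable A
    P = np.zeros_like(Q)
    Ak = np.eye(A.shape[0])
    for _ in range(n):
        P += Ak.T @ Q @ Ak
        Ak = A @ Ak
    return P

A   = np.array([[0.9, 0.2], [0.0, 0.7]])
B3  = np.array([[0.0], [1.0]])
C2  = np.array([[1.0, 0.0]])
D23 = np.array([[0.1]])
F   = np.array([[-0.5, -0.6]])
Acl = A + B3 @ F
Ccl = C2 + D23 @ F
P = dlyap(Acl, Ccl.T @ Ccl)          # V(x) = x^T P x bounds the infinite-horizon cost

x = np.array([1.0, -1.0])
gamma = float(x @ P @ x)             # with S = gamma * P^{-1}: x^T S^{-1} x = 1
J = 0.0
for _ in range(500):                 # simulate J(k) = sum z^T z
    z = Ccl @ x
    J += float(z @ z)
    x = Acl @ x
assert J <= gamma + 1e-6             # gamma upper-bounds the cost J(k)
assert abs(J - gamma) < 1e-4         # here the bound is tight by construction
```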
7.3 Constrained MPC using linear matrix inequalities

In the previous section an unconstrained predictive controller in the LMI-setting was derived. In this section we incorporate constraints. The most important tool we use is the notion of an invariant ellipsoid:

Invariant ellipsoid:

Lemma 37 Consider the system (7.4) and (7.6). At sampling time \(k\), consider
\[
\tilde x(k|k)^T\, S^{-1}\, \tilde x(k|k) \le 1
\]
and let \(S\) also satisfy (7.9); then there will hold for all \(j > 0\):
\[
\tilde x(k+j|k)^T\, S^{-1}\, \tilde x(k+j|k) \le 1
\]

Proof:
From the previous section we know that when (7.9) is satisfied, there holds:
\[
V(k+j) < V(k) \quad\text{for } j > 0
\]
Using \(V(k) = \tilde x^T(k)\, P\, \tilde x(k) = \gamma\, \tilde x^T(k)\, S^{-1}\, \tilde x(k)\) we derive:
\[
\tilde x(k+j|k)^T\, S^{-1}\, \tilde x(k+j|k) = \tfrac{1}{\gamma} V(k+j) < \tfrac{1}{\gamma} V(k) = \tilde x(k|k)^T\, S^{-1}\, \tilde x(k|k) \le 1
\]
End Proof

The set
\[
\mathcal{E} = \big\{\,\zeta \;\big|\; \zeta^T S^{-1} \zeta \le 1\,\big\} = \big\{\,\zeta \;\big|\; \zeta^T P\, \zeta \le \gamma\,\big\}
\]
is an invariant ellipsoid for the predicted states of the system, and so
\[
\tilde x(k|k) \in \mathcal{E} \;\Longrightarrow\; \tilde x(k+j|k) \in \mathcal{E} \quad \forall j > 0
\]
The condition \(\tilde x(k|k)^T S^{-1} \tilde x(k|k) \le 1\) of the above lemma is equivalent to the LMI
\[
\begin{bmatrix} 1 & \tilde x^T(k|k) \\ \tilde x(k|k) & S \end{bmatrix} \ge 0 \qquad (7.13)
\]
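The invariance property of the lemma can be illustrated by simulating a stable closed loop and monitoring the ellipsoid level \(\tilde x^T S^{-1} \tilde x\); the code below is our own sketch (assuming numpy), with \(S^{-1}\) scaled so that the initial state starts on the boundary:

```python
import numpy as np

def dlyap(A, Q, n=400):
    P = np.zeros_like(Q)
    Ak = np.eye(A.shape[0])
    for _ in range(n):
        P += Ak.T @ Q @ Ak
        Ak = A @ Ak
    return P

Acl = np.array([[0.9, 0.2], [-0.5, 0.1]])    # a stable closed loop A + B3*F
P = dlyap(Acl, np.eye(2))                    # V(x) = x^T P x strictly decreases
x0 = np.array([2.0, 1.0])
Sinv = P / float(x0 @ P @ x0)                # scale so x0 lies on the boundary

x = x0.copy()
levels = []
for _ in range(50):
    levels.append(float(x @ Sinv @ x))
    x = Acl @ x
assert abs(levels[0] - 1.0) < 1e-9           # start on the ellipsoid boundary
assert all(l <= 1.0 + 1e-9 for l in levels)  # trajectory never leaves E
assert levels[-1] < levels[0]                # and contracts toward the origin
```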
Signal constraints

Lemma 38 Consider the scalar signal
\[
\psi(k) = C_4\, x(k) + D_{42}\, w(k) + D_{43}\, v(k) \qquad (7.14)
\]
and the signal constraint
\[
|\psi(k+j|k)| \le \psi_{\max} \,,\quad j \ge 0 \qquad (7.15)
\]
where \(\psi_{\max} > 0\). Further let condition (7.11) hold and assume that \(|\psi_{ss}| < \psi_{\max}\) where
\[
\psi_{ss} = C_4\, x_{ss} + D_{42}\, w_{ss} + D_{43}\, v_{ss}.
\]
Then condition (7.15) will be satisfied if
\[
\begin{bmatrix} S & (C_4 S + D_{43} Y)^T \\ C_4 S + D_{43} Y & (\psi_{\max} - |\psi_{ss}|)^2 \end{bmatrix} \ge 0 \qquad (7.16)
\]

Proof:
Note that we can write \(\psi(k) = \tilde\psi(k) + \psi_{ss}\) where
\[
\tilde\psi(k) = C_4\, \tilde x(k) + D_{43}\, \tilde v(k).
\]
Equation (7.16) together with lemma 37 gives that
\[
|\tilde\psi(k+j|k)|^2 \le (\psi_{\max} - |\psi_{ss}|)^2
\]
or
\[
|\tilde\psi(k+j|k)| \le \psi_{\max} - |\psi_{ss}|
\]
and so
\[
|\psi(k)| \le |\psi(k) - \psi_{ss}| + |\psi_{ss}| \le \psi_{\max} - |\psi_{ss}| + |\psi_{ss}| = \psi_{\max}
\]
End Proof
Remark 39: Note that an input constraint on the \(i\)-th input \(v_i\)
\[
|v_i(k+j)| \le v_{\max,i} \quad\text{for } j \ge 0
\]
is equivalent to condition (7.15) for \(\psi_{\max,i} = v_{\max,i}\), \(C_4 = 0\), \(D_{42} = 0\) and \(D_{43} = e_i\), where \(e_i\) is a selection vector such that \(v_i(k) = e_i\, v(k)\).

An output constraint on the \(i\)-th output \(y_i\)
\[
|y_i(k+j+1) - y_{ss,i}| \le y_{\max,i} \quad\text{for } j \ge 1
\]
where \(y_{ss} = C_1\, x_{ss} + D_{12}\, w_{ss} = (C_1 D_{ssx} + D_{12})\, w_{ss}\), can be handled in the same way. By deriving
\[
\begin{aligned}
y(k+j+1) &= C_1\, x(k+j+1) + D_{12}\, w(k+j+1) \\
&= C_1\, \tilde x(k+j+1) + y_{ss} \\
&= C_1 A\, \tilde x(k+j) + C_1 B_3\, \tilde v(k+j) + y_{ss}
\end{aligned}
\]
we find that the output condition is equivalent to condition (7.15) for \(\psi_{\max,i} = y_{\max,i}\), \(C_4 = C_1 A\), \(D_{42} = 0\) and \(D_{43} = C_1 B_3\).
Summarizing: MPC using linear matrix inequalities

Consider the system
\[
\begin{aligned}
x(k+1) &= A\, x(k) + B_3\, v(k) &(7.17)\\
y(k) &= C_1\, x(k) + D_{12}\, w(k) &(7.18)\\
z(k) &= C_2\, x(k) + D_{22}\, w(k) + D_{23}\, v(k) &(7.19)\\
\psi(k) &= C_4\, x(k) + D_{42}\, w(k) + D_{43}\, v(k) &(7.20)
\end{aligned}
\]
with a steady-state \((v_{ss}, x_{ss}, w_{ss}, z_{ss}) = (D_{ssv}\, w_{ss},\, D_{ssx}\, w_{ss},\, w_{ss},\, 0)\). The LMI-MPC control problem is to find, at each time-instant \(k\), a state-feedback law
\[
v(k) = F\,(x(k) - x_{ss}) + v_{ss}
\]
such that the criterion
\[
\min_F J(k) \quad\text{where}\quad J(k) = \sum_{j=0}^{\infty} z^T(k+j)\, z(k+j) \qquad (7.21)
\]
is optimized subject to the constraint
\[
|\psi_i(k+j)| \le \psi_{i,\max} \,,\quad\text{for } i = 1,\ldots,n_\psi \,,\; j \ge 0 \qquad (7.22)
\]
where \(\psi_{i,\max} > 0\) for \(i = 1,\ldots,n_\psi\). The optimal solution is found by solving the following LMI problem:
Theorem 40 Given a system with state space description (7.17)-(7.19), a control problem of minimizing (7.21) subject to constraint (7.22), and an external signal
\[
w(k+j) = w_{ss} \quad\text{for } j > 0
\]
The state-feedback \(F = Y S^{-1}\) minimizing the worst-case \(J(k)\) can be found by solving:
\[
\min_{\gamma, S, Y} \gamma \qquad (7.23)
\]
subject to

(LMI for Lyapunov stability:)
\[
S > 0 \qquad (7.24)
\]

(LMI for ellipsoid boundary:)
\[
\begin{bmatrix} 1 & \tilde x^T(k|k) \\ \tilde x(k|k) & S \end{bmatrix} \ge 0 \qquad (7.25)
\]

(LMI for worst case MPC performance index:)
\[
\begin{bmatrix}
S & S A^T + Y^T B_3^T & (S C_2^T + Y^T D_{23}^T)\,\gamma^{-1/2} \\
A S + B_3 Y & S & 0 \\
\gamma^{-1/2}(C_2 S + D_{23} Y) & 0 & I
\end{bmatrix} > 0 \qquad (7.26)
\]

(LMIs for constraints on \(\psi\):)
\[
\begin{bmatrix} S & (C_4 S + D_{43} Y)^T E_i^T \\ E_i (C_4 S + D_{43} Y) & (\psi_{i,\max} - |\psi_{i,ss}|)^2 \end{bmatrix} \ge 0 \,,\quad i = 1,\ldots,n_\psi \qquad (7.27)
\]
where \(E_i = \begin{bmatrix} 0 & \ldots & 0 & 1 & 0 & \ldots & 0 \end{bmatrix}\) with the 1 on the \(i\)-th position.

LMI (7.24) guarantees Lyapunov stability, LMI (7.25) describes an invariant ellipsoid for the state, (7.26) gives the LMI for the MPC-criterion and (7.27) corresponds to the state constraint on \(\psi_i\).
7.4 Robustness in LMI-based MPC

In this section we consider the problem of robust performance in LMI-based MPC. The signals \(w(k)\) and \(e(k)\) are assumed to be zero. How these signals can be incorporated in the design is described in Kothare et al. [35].

We consider both unstructured and structured model uncertainty. Given the fact that \(w\) and \(e\) are zero, we get the following model uncertainty descriptions for the LMI-based case:
Unstructured uncertainty

The uncertainty is given in the following description:
\[
\begin{aligned}
x(k+1) &= A\, x(k) + B_3\, v(k) + B_4\, \xi(k) &(7.28)\\
z(k) &= C_2\, x(k) + D_{23}\, v(k) + D_{24}\, \xi(k) &(7.29)\\
\psi(k) &= C_4\, x(k) + D_{43}\, v(k) + D_{44}\, \xi(k) &(7.30)\\
\varphi(k) &= C_5\, x(k) + D_{53}\, v(k) &(7.31)\\
\xi(k) &= \Delta(q)\, \varphi(k) &(7.32)
\end{aligned}
\]
where \(\Delta\) has a block diagonal structure:
\[
\Delta = \begin{bmatrix} \Delta_1 & & \\ & \ddots & \\ & & \Delta_r \end{bmatrix} \qquad (7.33)
\]
with \(\Delta_\ell(q) : \mathbb{R}^{n_\ell} \to \mathbb{R}^{n_\ell}\), \(\ell = 1,\ldots,r\), bounded in \(\infty\)-norm:
\[
\|\Delta_\ell\|_\infty \le 1 \,,\quad \ell = 1,\ldots,r \qquad (7.34)
\]
A nominal model \(G_{nom} \in \mathcal{G}\) is given for \(\xi(k) = 0\):
\[
x(k+1) = A\, x(k) + B_3\, v(k)
\]
Structured uncertainty

The uncertainty is given in the following description:
\[
\begin{aligned}
x(k+1) &= A\, x(k) + B_3\, v(k) &(7.35)\\
z(k) &= C_2\, x(k) + D_{23}\, v(k) &(7.36)\\
\psi(k) &= C_4\, x(k) + D_{43}\, v(k) &(7.37)
\end{aligned}
\]
where
\[
P = \begin{bmatrix} A & B_3 \\ C_2 & D_{23} \\ C_4 & D_{43} \end{bmatrix}
\in \mathrm{Co}\left\{
\begin{bmatrix} \bar A_1 & \bar B_{3,1} \\ \bar C_{2,1} & \bar D_{23,1} \\ \bar C_{4,1} & \bar D_{43,1} \end{bmatrix}
, \ldots ,
\begin{bmatrix} \bar A_L & \bar B_{3,L} \\ \bar C_{2,L} & \bar D_{23,L} \\ \bar C_{4,L} & \bar D_{43,L} \end{bmatrix}
\right\} \qquad (7.38)
\]
A nominal model \(G_{nom} \in \mathcal{G}\) is given by
\[
x(k+1) = A_{nom}\, x(k) + B_{3,nom}\, v(k)
\]
Robust performance for LMI-based MPC

The robust performance MPC control problem is to find, at each time-instant \(k\), a state-feedback law
\[
v(k) = F\, x(k)
\]
such that the criterion
\[
\min_F \max_{G \in \mathcal{G}} J(k) \quad\text{where}\quad J(k) = \sum_{j=0}^{\infty} z^T(k+j)\, z(k+j) \qquad (7.39)
\]
is optimized subject to the constraints
\[
\max_{G \in \mathcal{G}} |\psi_i(k+j)| \le \psi_{i,\max} \,,\quad\text{for } i = 1,\ldots,n_\psi \,,\; j \ge 0 \qquad (7.40)
\]
In equation (7.39) the input-signal \(v(k)\) is chosen such that the performance index \(J\) is minimized for the worst-case choice of \(G\) in \(\mathcal{G}\), subject to constraints (7.40) for the same worst-case \(G\). As was introduced in the previous section, the uncertainty in \(G\) can be either structured or unstructured, resulting in theorems 41 and 42.
Theorem 41 (Robust performance for unstructured model uncertainty)
Given a system with uncertainty description (7.28)-(7.34) and a control problem of minimizing (7.39) subject to constraints (7.40). The state-feedback \(F = Y S^{-1}\) minimizing the worst-case \(J(k)\) can be found by solving:
\[
\min_{\gamma, S, Y, \Lambda, V} \gamma \qquad (7.41)
\]
subject to

(LMI for Lyapunov stability:)
\[
S > 0 \qquad (7.42)
\]

(LMI for ellipsoid boundary:)
\[
\begin{bmatrix} 1 & x^T(k|k) \\ x(k|k) & S \end{bmatrix} \ge 0 \qquad (7.43)
\]

(LMI for worst case MPC performance index:)
\[
\begin{bmatrix}
S & (A S + B_3 Y)^T & (C_2 S + D_{23} Y)^T & (C_5 S + D_{53} Y)^T \\
A S + B_3 Y & S - B_4 \Lambda B_4^T & -B_4 \Lambda D_{24}^T & 0 \\
C_2 S + D_{23} Y & -D_{24} \Lambda B_4^T & \gamma I - D_{24} \Lambda D_{24}^T & 0 \\
C_5 S + D_{53} Y & 0 & 0 & \Lambda
\end{bmatrix} \ge 0 \qquad (7.44)
\]
\[
\Lambda > 0 \qquad (7.45)
\]
where \(\Lambda\) is partitioned as:
\[
\Lambda = \mathrm{diag}(\lambda_1 I_{n_1}, \lambda_2 I_{n_2}, \ldots, \lambda_r I_{n_r}) > 0 \qquad (7.46)
\]

(LMIs for constraints on \(\psi\):)
\[
\begin{bmatrix}
\psi_{i,\max}^2\, S & (C_5 S + D_{53} Y)^T & (C_4 S + D_{43} Y)^T E_i^T \\
C_5 S + D_{53} Y & V^{-1} & 0 \\
E_i (C_4 S + D_{43} Y) & 0 & I - E_i D_{44} V^{-1} D_{44}^T E_i^T
\end{bmatrix} \ge 0 \,,\quad i = 1,\ldots,n_\psi \qquad (7.47)
\]
\[
V^{-1} > 0 \qquad (7.48)
\]
where \(V\) is partitioned as:
\[
V = \mathrm{diag}(v_1 I_{n_1}, v_2 I_{n_2}, \ldots, v_r I_{n_r}) > 0 \qquad (7.49)
\]
and \(E_i = \begin{bmatrix} 0 & \ldots & 0 & 1 & 0 & \ldots & 0 \end{bmatrix}\) with the 1 on the \(i\)-th position.

LMI (7.42) guarantees Lyapunov stability, LMI (7.43) describes an invariant ellipsoid for the state, (7.44) and (7.45) are the LMIs for the worst-case MPC-criterion, and (7.47), (7.48) and (7.49) correspond to the state constraint on \(\psi\).
Proof of theorem 41:
The LMIs (7.43) and (7.45) for optimizing (7.41) are given in Kothare et al. [35]. In the notation of the same paper, (7.47)-(7.48) can be derived as follows:
For any admissible \(\Delta(k)\) and \(i = 1,\ldots,n_\psi\) we have
\[
\begin{aligned}
\psi_i(k) &= E_i C_4\, x(k) + E_i D_{43}\, v(k) + E_i D_{44}\, \xi(k) &(7.50)\\
&= E_i (C_4 + D_{43} F)\, x(k) + E_i D_{44}\, \xi(k) &(7.51)
\end{aligned}
\]
and so for \(i = 1,\ldots,n_\psi\) and \(j \ge 0\):
\[
\begin{aligned}
\max_{j \ge 0} |\psi_i(k+j|k)| &= \max_{j \ge 0} \big| E_i (C_4 + D_{43} F)\, x(k+j|k) + E_i D_{44}\, \xi(k+j|k) \big| \\
&\le \max_{\bar z^T \bar z \le 1} \big| E_i (C_4 + D_{43} F)\, S^{1/2} \bar z + E_i D_{44}\, \xi(k+j|k) \big|
\end{aligned}
\]
Further, the derivation of (7.47)-(7.48) is analogous to the derivation of the LMI for an output constraint in Kothare et al. [35].
In the notation of the same paper, (7.44) can be derived as follows:
\[
V(x(k+j+1|k)) =
\begin{bmatrix} x(k+j|k) \\ \xi(k+j|k) \end{bmatrix}^T
\begin{bmatrix}
(A + B_3 F)^T P (A + B_3 F) & (A + B_3 F)^T P B_4 \\
B_4^T P (A + B_3 F) & B_4^T P B_4
\end{bmatrix}
\begin{bmatrix} x(k+j|k) \\ \xi(k+j|k) \end{bmatrix}
\]
\[
V(x(k+j|k)) =
\begin{bmatrix} x(k+j|k) \\ \xi(k+j|k) \end{bmatrix}^T
\begin{bmatrix} P & 0 \\ 0 & 0 \end{bmatrix}
\begin{bmatrix} x(k+j|k) \\ \xi(k+j|k) \end{bmatrix}
\]
\[
\begin{aligned}
J(k+j) &= z(k+j)^T z(k+j) \\
&= \big((C_2 + D_{23} F)\, x(k+j) + D_{24}\, \xi(k+j)\big)^T \big((C_2 + D_{23} F)\, x(k+j) + D_{24}\, \xi(k+j)\big) \\
&= \begin{bmatrix} x(k+j|k) \\ \xi(k+j|k) \end{bmatrix}^T
\begin{bmatrix}
(C_2 + D_{23} F)^T (C_2 + D_{23} F) & (C_2 + D_{23} F)^T D_{24} \\
D_{24}^T (C_2 + D_{23} F) & D_{24}^T D_{24}
\end{bmatrix}
\begin{bmatrix} x(k+j|k) \\ \xi(k+j|k) \end{bmatrix}
\end{aligned}
\]
And so the condition
\[
V(x(k+j+1|k)) - V(x(k+j|k)) \le -J(k+j)
\]
results in the inequality
\[
\begin{bmatrix} x(k+j|k) \\ \xi(k+j|k) \end{bmatrix}^T
\begin{bmatrix}
\begin{matrix} (A+B_3F)^T P (A+B_3F) - P \\ {}+ (C_2+D_{23}F)^T(C_2+D_{23}F) \end{matrix} & \begin{matrix} (A+B_3F)^T P B_4 \\ {}+ (C_2+D_{23}F)^T D_{24} \end{matrix} \\[2mm]
B_4^T P (A+B_3F) + D_{24}^T (C_2+D_{23}F) & B_4^T P B_4 + D_{24}^T D_{24}
\end{bmatrix}
\begin{bmatrix} x(k+j|k) \\ \xi(k+j|k) \end{bmatrix}
\le 0
\]
This condition, together with the condition
\[
\xi_\ell^T(k+j|k)\, \xi_\ell(k+j|k) \le x^T(k+j|k)\,(C_{5,\ell} + D_{53,\ell} F)^T (C_{5,\ell} + D_{53,\ell} F)\, x(k+j|k) \,,\quad \ell = 1,2,\ldots,r
\]
is satisfied if there exists a matrix \(Z \ge 0\) such that
\[
\begin{bmatrix}
\begin{matrix} (A+B_3F)^T P (A+B_3F) - P \\ {}+ (C_2+D_{23}F)^T(C_2+D_{23}F) \\ {}+ (C_5+D_{53}F)^T Z (C_5+D_{53}F) \end{matrix} & \begin{matrix} (A+B_3F)^T P B_4 \\ {}+ (C_2+D_{23}F)^T D_{24} \end{matrix} \\[2mm]
B_4^T P (A+B_3F) + D_{24}^T (C_2+D_{23}F) & B_4^T P B_4 + D_{24}^T D_{24} - Z
\end{bmatrix}
\le 0
\]
Substituting \(P = \gamma S^{-1}\) and \(F = Y S^{-1}\), and pre- and post-multiplying with \(\mathrm{diag}(S, I)\), we find
\[
\begin{bmatrix}
\begin{matrix} \gamma (AS+B_3Y)^T S^{-1} (AS+B_3Y) - \gamma S \\ {}+ (C_2S+D_{23}Y)^T(C_2S+D_{23}Y) \\ {}+ (C_5S+D_{53}Y)^T Z (C_5S+D_{53}Y) \end{matrix} & \begin{matrix} \gamma (AS+B_3Y)^T S^{-1} B_4 \\ {}+ (C_2S+D_{23}Y)^T D_{24} \end{matrix} \\[2mm]
\gamma B_4^T S^{-1} (AS+B_3Y) + D_{24}^T (C_2S+D_{23}Y) & \gamma B_4^T S^{-1} B_4 + D_{24}^T D_{24} - Z
\end{bmatrix}
\le 0
\]
By substitution of \(Z = \gamma \Lambda^{-1}\) and by applying some matrix operations we obtain equation (7.44).
End Proof
Theorem 42 (Robust performance for structured model uncertainty)
Given a system with uncertainty description (7.35)-(7.38) and a control problem of minimizing (7.39) subject to constraint (7.40). The state-feedback \(F = Y S^{-1}\) minimizing the worst-case \(J(k)\) can be found by solving:
\[
\min_{\gamma, S, Y} \gamma \qquad (7.52)
\]
subject to

(LMI for Lyapunov stability:)
\[
S > 0 \qquad (7.53)
\]

(LMI for ellipsoid boundary:)
\[
\begin{bmatrix} 1 & x^T(k|k) \\ x(k|k) & S \end{bmatrix} \ge 0 \qquad (7.54)
\]

(LMIs for worst case MPC performance index:)
\[
\begin{bmatrix}
S & S \bar A_\ell^T + Y^T \bar B_{3,\ell}^T & (S \bar C_{2,\ell}^T + Y^T \bar D_{23,\ell}^T)\,\gamma^{-1/2} \\
\bar A_\ell S + \bar B_{3,\ell} Y & S & 0 \\
\gamma^{-1/2}(\bar C_{2,\ell} S + \bar D_{23,\ell} Y) & 0 & I
\end{bmatrix} \ge 0 \,,\quad \ell = 1,\ldots,L \qquad (7.55)
\]

(LMIs for constraints on \(\psi\):)
\[
\begin{bmatrix}
S & (\bar C_{4,\ell} S + \bar D_{43,\ell} Y)^T E_i^T \\
E_i (\bar C_{4,\ell} S + \bar D_{43,\ell} Y) & \psi_{i,\max}^2
\end{bmatrix} \ge 0 \,,\quad i = 1,\ldots,n_\psi \,,\; \ell = 1,\ldots,L \qquad (7.56)
\]
where \(E_i = \begin{bmatrix} 0 & \ldots & 0 & 1 & 0 & \ldots & 0 \end{bmatrix}\) with the 1 on the \(i\)-th position.

LMI (7.53) guarantees Lyapunov stability, LMI (7.54) describes an invariant ellipsoid for the state, (7.55) gives the LMIs for the worst-case MPC-criterion and (7.56) corresponds to the state constraint on \(\psi\).
Proof of theorem 42:
The LMI (7.54) for solving (7.52) is given in Kothare et al. [35]. In the notation of the same paper, (7.56) can be derived as follows:
We have, for \(i = 1,\ldots,n_\psi\):
\[
\begin{aligned}
\psi_i(k) &= E_i C_4\, x(k) + E_i D_{43}\, v(k) &(7.57)\\
&= E_i (C_4 + D_{43} F)\, x(k) &(7.58)
\end{aligned}
\]
and so for all possible matrices \(\begin{bmatrix} C_4 & D_{43} \end{bmatrix}\) and \(i = 1,\ldots,n_\psi\) there holds
\[
\begin{aligned}
\max_{j \ge 0} |\psi_i(k+j|k)| &= \max_{j \ge 0} \big| E_i (C_4 + D_{43} F)\, x(k+j|k) \big| \\
&\le \max_{\bar z^T \bar z \le 1} \big| E_i (C_4 + D_{43} F)\, S^{1/2} \bar z \big| \\
&= \big\| E_i (C_4 + D_{43} F)\, S^{1/2} \big\|
\end{aligned}
\]
Further, the derivation of (7.56) is analogous to the derivation of the LMI for an output constraint in Kothare et al. [35].

In the notation of the same paper, (7.55) can be derived as follows. The equation
\[
(A + B_3 F)^T P (A + B_3 F) - P + Q + F^T R F \le 0
\]
is replaced by:
\[
(A + B_3 F)^T P (A + B_3 F) - P + (C_2 + D_{23} F)^T (C_2 + D_{23} F) \le 0
\]
For \(P = \gamma S^{-1}\) and \(F = Y S^{-1}\) we find
\[
\begin{bmatrix}
S & S A^T + Y^T B_3^T & (S C_2^T + Y^T D_{23}^T)\,\gamma^{-1/2} \\
A S + B_3 Y & S & 0 \\
\gamma^{-1/2}(C_2 S + D_{23} Y) & 0 & I
\end{bmatrix} \ge 0 \qquad (7.59)
\]
which is affine in \(\begin{bmatrix} A & B_3 \\ C_2 & D_{23} \end{bmatrix}\). Hence the condition becomes equation (7.55).
End Proof
7.5 Extensions and further research

Although the LMI-based MPC method described in the previous sections is a useful tool for robustly controlling systems with signal constraints, there are still a few disadvantages:

- Computational complexity: Despite the fact that solving an LMI problem is convex, it is computationally more complex than a quadratic program. For larger systems the time needed to solve the problem may be too long.
- Conservatism: The use of invariant sets (see Section 7.3) may lead to very conservative approximations of the signal constraints. The assumption that constraints are symmetric around the steady-state value is very restrictive.
- Approximation of the performance index: Equation (7.12) shows that in LMI-MPC we do not minimize the performance index \(J\) itself, but only an upper bound \(\gamma\) on \(J\). This means that we may be far away from the real optimal control input.

Since Kothare et al. [35] introduced the LMI-based MPC approach, many researchers have been looking for ways to adapt and improve the method such that at least one of the above disadvantages is relaxed.
Using LMI with a Hybrid Approach

Kouvaritakis et al. [36] introduce a fixed feedback control law which is deployed by introducing extra degrees of freedom, namely an additional term \(c(k)\), in the following form
\[
v(k) = F\, x(k) + c(k) \,,\qquad c(k+N+i) = 0 \,,\; i \ge 0 \qquad (7.60)
\]
In this way only a simple optimization is needed online to calculate the \(N\) extra control moves. The closed-loop system is then derived as follows:
\[
x(k+1) = A_c(k)\, x(k) + B(k)\, c(k) \qquad (7.61)
\]
where \(A_c(k) = A(k) + B(k) F\). This equation can be expressed as the following state-space model:
\[
\zeta(k+1) = \Psi(k)\, \zeta(k)
\]
\[
\Psi(k) = \begin{bmatrix} A_c(k) & \begin{bmatrix} B(k) & 0 & \ldots & 0 \end{bmatrix} \\ 0 & M \end{bmatrix}
\,,\qquad
\zeta(k) = \begin{bmatrix} x(k) \\ f(k) \end{bmatrix}
\,,\qquad
f(k) = \begin{bmatrix} c(k) \\ c(k+1) \\ \vdots \\ c(k+N-1) \end{bmatrix} \qquad (7.62)
\]
\[
M = \begin{bmatrix}
0 & I & 0 & \ldots & 0 \\
0 & 0 & I & & 0 \\
\vdots & & & \ddots & \vdots \\
0 & 0 & 0 & & I \\
0 & 0 & \ldots & 0 & 0
\end{bmatrix} \qquad (7.63)
\]
If there exists an invariant set \(\mathcal{E}_x = \{x \mid x^T Q^{-1} x \le 1\}\) for the control law \(v(k+i|k) = F\, x(k+i|k)\), \(i \ge 0\), there must exist at least one invariant set \(\mathcal{E}_\zeta\) for system (7.62), where
\[
\mathcal{E}_\zeta = \{\zeta \mid \zeta^T Q_\zeta^{-1} \zeta \le 1\} \qquad (7.64)
\]
The matrix \(Q_\zeta^{-1}\) can be partitioned as
\[
Q_\zeta^{-1} = \begin{bmatrix} \tilde Q_{11} & \tilde Q_{21}^T \\ \tilde Q_{21} & \tilde Q_{22} \end{bmatrix}
\quad\text{with}\quad \tilde Q_{11} = Q^{-1}
\]
Then the inequality of (7.64) can be written as
\[
x^T \tilde Q_{11}\, x \le 1 - 2 f^T \tilde Q_{21}\, x - f^T \tilde Q_{22}\, f
\]
Here one can see that a nonzero \(f\) can be used to obtain an invariant ellipsoid which is larger than \(\mathcal{E}_x\), so that the applicability can be widened. In other words, this extra freedom can afford a highly tuned control law without a reduction of the invariant set. Hence, the control law \(F\) can be designed off-line, using any of the robust control methods, without considering the constraints. When the optimal \(F\) is decided, the cost function can be replaced by \(J_f = f^T f\). One can see that the computational burden involved in the online tuning of \(F\) is removed, which is the main advantage of this method.
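The shift structure of \(M\) in (7.63) is easy to verify numerically; the sketch below (our own code, assuming numpy) uses scalar moves \(c(k)\), so the identity blocks reduce to ones:

```python
import numpy as np

N = 4                                  # number of free moves c(k), ..., c(k+N-1)
# shift matrix (7.63): f(k+1) = M f(k) drops c(k) and appends a zero
M = np.zeros((N, N))
M[:-1, 1:] = np.eye(N - 1)

f = np.array([1.0, 2.0, 3.0, 4.0])     # [c(k), c(k+1), c(k+2), c(k+3)]
assert np.allclose(M @ f, [2.0, 3.0, 4.0, 0.0])
# after N shifts all future moves are zero, i.e. c(k+N+i) = 0
fN = np.linalg.matrix_power(M, N) @ f
assert np.allclose(fN, 0.0)
```

Because \(M\) is nilpotent, after \(N\) samples the extra term vanishes and the control law reduces to the fixed feedback \(v(k) = F x(k)\).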
Related to this method, Bloemen et al. [8] extend the hybrid approach by introducing a switching horizon \(N_s\). Before \(N_s\) the feedback law is designed off-line, but after \(N_s\) the control law \(F\) is a variable in the online optimization problem. Here the uncertainties are not considered. The performance index is defined as
\[
J(k) = \sum_{j=0}^{\infty} y^T(k+j+1|k)\, Q\, y(k+j+1|k) + u^T(k+j|k)\, R\, u(k+j|k) \qquad (7.65)
\]
where \(Q\) and \(R\) are positive-definite weighting matrices. If \(F\) is given, the infinite sum beyond \(N_s\) in (7.65) can be replaced by a quadratic terminal cost function \(x^T(k+N_s|k)\, P\, x(k+N_s|k)\), where the weighting matrix \(P\) can be obtained via a Lyapunov equation. Based on this terminal cost function, an upper bound of the performance index beyond \(N_s\) is given as follows:
\[
\bar J(k) = \sum_{j=0}^{N_s-1} \Big( y^T(k+j+1|k)\, Q\, y(k+j+1|k) + u^T(k+j|k)\, R\, u(k+j|k) \Big) + x^T(k+N_s|k)\, P\, x(k+N_s|k) \;\ge\; J(k) \qquad (7.66)
\]
The entire MPC optimization problem can then be given by the semi-definite programming problem for the optimization of \(\bar J(k)\), subject to the additional LMIs which express the input and output constraints. The advantage of this method is that constraints in the first part of the prediction horizon are taken into account in an appropriate way. Although the computational burden will increase exponentially if \(N_s\) increases, the automatic trade-off between feasibility and optimality will still be obtained. The algorithm presented in [8] is closely related to the algorithm presented by [42].
Combining the Hybrid Approach with Multi-Lyapunov Functions

In [23] the min-max algorithm using LMIs with the hybrid approach of [8] and the min-max algorithm using LMIs with multi-Lyapunov functions of [19] are combined with the LMI technique of Kothare et al. [35]. \(N_s\) free control moves are used without using the static feedback control law. These \(N_s\) free control moves are optimized using the min-max algorithm in order to minimize the worst-case cost function over a polytopic model uncertainty set. Then, after the switching horizon \(N_s\), a feedback control law \(F\) is implemented in each step of the infinite-horizon control. This feedback control is calculated online subject to the upper bound of the terminal cost function and robust stability constraints formulated in LMI form. Of course the computational burden will increase, but numerical examples show that better results are obtained compared to [19].
Using LMI with an off-line formulation

In order to reduce the online computational load of the method proposed by Kothare et al. [35], Wan and Kothare [75] developed an off-line formulation of robust MPC which calculates a sequence of explicit control laws corresponding to a sequence of asymptotically stable invariant ellipsoids constructed off-line. Two concepts are used in this method. The first concept is an asymptotically stable invariant ellipsoid, defined as follows:

Given a discrete dynamic system \(x(k+1) = f(x(k))\), a subset \(\mathcal{E} = \{x \in \mathbb{R}^{n_x} \mid x^T Q^{-1} x \le 1\}\) of the state space \(\mathbb{R}^{n_x}\) is said to be an asymptotically stable invariant ellipsoid if it has the property that, whenever \(x(k_1) \in \mathcal{E}\), then \(x(k) \in \mathcal{E}\) for all times \(k > k_1\) and \(x(k) \to 0\) as \(k \to \infty\).

The set \(\mathcal{E} = \{x \mid x^T Q^{-1} x \le 1\}\) calculated by the method of Kothare et al. [35] is an asymptotically stable invariant ellipsoid according to this definition. The second concept is the distance between the state \(x\) and the origin, defined as the weighted norm
\[
\|x\|_{Q^{-1}} \stackrel{\mathrm{def}}{=} \sqrt{x^T Q^{-1} x}.
\]
The main idea of this off-line formulation is that off-line a sequence of \(N\) asymptotically stable invariant ellipsoids is calculated, one inside another, by choosing \(N\) states ranging from zero to the largest feasible state that satisfies \(\|x(k+j+1)\|_{Q^{-1}} < 1\), \(j < N\), for all systems belonging to the uncertainty set. For each asymptotically stable invariant ellipsoid, a state feedback law \(F_i\) is calculated off-line by solving the optimization problem mentioned in Kothare et al. [35]. Then these control laws are put in a look-up table. The online calculation then reduces to checking which asymptotically stable invariant ellipsoid the state \(x(k)\) at time \(k\) belongs to. The control action \(u(k)\) is calculated by applying the control law corresponding to the smallest ellipsoid containing the state \(x(k)\). So the online computational burden is reduced due to the off-line calculation. In addition, when the state is located between two adjacent asymptotically stable invariant ellipsoids, an interpolation coefficient \(\alpha_i\), \(0 \le \alpha_i \le 1\), calculated by solving the equation
\[
x^T(k)\big(\alpha_i\, Q_i^{-1} + (1-\alpha_i)\, Q_{i+1}^{-1}\big)\, x(k) = 1
\]
is applied in the control law
\[
u(k) = \big(\alpha_i F_i + (1-\alpha_i) F_{i+1}\big)\, x(k)
\]
to avoid discontinuity on the borders of the ellipsoids. This method may give a small loss of performance; however, the online computational load is reduced dramatically compared to [35].
Chapter 8
Tuning
As was already discussed in the introduction, one of the main reasons for the popularity of predictive control is the easy tuning. The purpose of tuning the parameters is to acquire good signal tracking, sufficient disturbance rejection and robustness against model mismatch.

Consider the LQPC-performance index
\[
J(u,k) = \sum_{j=N_m}^{N} \hat x^T(k+j|k)\, Q\, \hat x(k+j|k) + \sum_{j=1}^{N} u^T(k+j-1)\, R\, u(k+j-1)
\]
and the GPC-performance index
\[
J(u,k) = \sum_{j=N_m}^{N} \big[\hat y_p(k+j|k) - r(k+j)\big]^T \big[\hat y_p(k+j|k) - r(k+j)\big] + \lambda^2 \sum_{j=1}^{N_c} \Delta u^T(k+j-1)\, \Delta u(k+j-1)
\]
The tuning parameters can easily be recognized. They are:

N_m = minimum-cost horizon
N = prediction horizon
N_c = control horizon
lambda = weighting factor (GPC)
P(q) = tracking filter (GPC)
Q = state weighting matrix (LQPC)
R = control weighting matrix (LQPC)
This chapter discusses the tuning of a predictive controller. In section 8.1 some rules of thumb are given for the initial parameter settings. In section 8.2 we look at the case where the initial controller does not meet the desired specifications. An advanced tuning procedure may provide some tools that lead to a better and more suitable controller. The final fine-tuning of a predictive controller is usually done on-line when the controller is already running. Section 8.3 discusses some special settings of the tuning parameters, leading to well-known controllers.

In this chapter we will assume that the model is perfect. Robustness against model mismatch was already discussed in section 6.5.
8.1 Initial settings for the parameters

For GPC the parameters \(N\), \(N_c\) and \(\lambda\) are the three basic tuning parameters, and most papers on GPC consider the tuning of these parameters. The rules of thumb for the settings of a GPC-controller were first discussed by Clarke & Mohtadi ([13]) and later reformulated for the more extended UPC-controller in Soeterboek ([61]). In LQPC the parameters \(N\) and \(N_c\) are present, but instead of \(\lambda\) and \(P(q)\) we have the weighting matrices \(R\) and \(Q\). In Lee & Yu ([39]), the tuning of the LQPC parameters is considered.

In the next sections the initial settings for the summation parameters (\(N_m\), \(N\), \(N_c\)) and the signal weighting parameters (\(P(q)\), \(\lambda\) / \(Q\), \(R\)) are discussed.
8.1.1 Initial settings for the summation parameters

In both the LQPC-performance index and the GPC-performance index, three summation parameters, \(N_m\), \(N\) and \(N_c\), are recognized, which can be used to tune the predictive controller. Often we choose \(N_m = 1\), which is the best choice most of the time, but it may be chosen larger in the case of a dead-time or an inverse response.

The parameter \(N\) is mostly related to the length of the step response of the process, and the prediction interval \(N\) should contain the crucial dynamics of \(x(k)\) due to the response on \(u(k)\) (in the LQPC-case) or the crucial dynamics of \(y(k)\) due to the response on \(u(k)\) (in the GPC-case).

The number \(N_c \le N\) is called the control horizon, which forces the control signal to become constant
\[
u(k+j) = \text{constant} \quad\text{for } j \ge N_c
\]
or, equivalently, the increment input signal to become zero
\[
\Delta u(k+j) = 0 \quad\text{for } j \ge N_c
\]
An important effect of a small control horizon (\(N_c \ll N\)) is the smoothing of the control signal (which can become very wild if \(N_c = N\)), and the control signal is forced towards its steady-state value, which is important for stability properties. Another important consequence of decreasing \(N_c\) is the reduction in computational effort. Let the control signal \(v(k)\) be a \(p \times 1\) vector. Then the control vector \(\tilde v(k)\) is a \(pN \times 1\) vector, which means we have \(pN\) degrees of freedom in the optimization. By introducing the control horizon \(N_c < N\), these degrees of freedom decrease because of the equality constraint, as was shown in chapters 4 and 5. If the control horizon is the only equality constraint, the new optimization parameter is a \(p N_c \times 1\) vector \(\mu\). Thus the degrees of freedom reduce to \(p N_c\). This can be seen by observing that the matrix \(\tilde D_{33}\) is a \(pN \times p N_c\) matrix, and so the matrix \(H = 2\, \tilde D_{33}^T \tilde D_{23}^T \tilde D_{23} \tilde D_{33}\) becomes a \(p N_c \times p N_c\) matrix. For the case without further inequality constraints, this matrix \(H\) has to be inverted, which is easier for smaller \(N_c\).

If we add inequality constraints we have to minimize
\[
\min_{\mu} \tfrac{1}{2}\, \mu^T H \mu
\]
subject to these constraints. The number of parameters in the optimization reduces from \(pN\) to \(p N_c\), which will speed up the optimization.
Consider a process where

   d   = the dead time of the process
   n   = the number of poles of the model
       = the dimension of the matrix A_o for an IO model
       = the order of polynomial a_o(q) for an IO model
       = the dimension of the matrix A_i for an IIO model
       = the order of polynomial a_i(q) for an IIO model
   t_s = the 5% settling time of the process G_o(q)
   ω_s = the sampling frequency
   ω_b = the bandwidth of the process G_c,o(s)

where G_c,o(s) is the original IO process model in continuous time (corresponding to the discrete-time IO process model G_o(q)). Now the following initial settings are recommended (Soeterboek [61]):

   N_m = 1 + d
   N_c = n
   N   = int( α_N t_s )          for well-damped systems
   N   = int( β_N ω_s / ω_b )    for badly-damped and unstable systems

where α_N ∈ [1.5, 2] and β_N ∈ [4, 25]. Summarizing, N_m is equal to 1 plus the process dead-time estimate, and N should be chosen larger than the length of the step response of the open-loop process. When we use an IIO model, the value of N should be based on the original IO model (G_o(q) or G_c,o(s)).
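These rules of thumb are easy to apply directly. The sketch below is our own illustration (the function name and the symbols α_N, β_N for the two scaling factors are not from the course text):

```python
def initial_horizons(d, n, scale, t_s=None, ws_over_wb=None):
    """Initial MPC horizon settings following the rules of thumb above.

    For well-damped systems pass t_s (the 5% settling time, scale = alpha_N
    in [1.5, 2]); for badly-damped or unstable systems pass
    ws_over_wb = omega_s / omega_b (scale = beta_N in [4, 25]).
    """
    N_m = 1 + d                         # minimum-cost horizon: 1 + dead time
    N_c = n                             # control horizon: number of model poles
    if t_s is not None:
        N = int(scale * t_s)            # well-damped systems
    else:
        N = int(scale * ws_over_wb)     # badly-damped / unstable systems
    return N_m, N_c, N

# Example 44 data: d = 0, n = 2, t_s = 30, alpha_N = 1.5  ->  N = 45
print(initial_horizons(0, 2, 1.5, t_s=30))
```

The same helper reproduces the horizons of both worked examples later in this section.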
The bandwidth ω_b of a system with transfer function G_c,o(s) is the frequency at which the magnitude ratio drops to 1/√2 of its zero-frequency level (given that the gain characteristic is flat at zero frequency). So:

   ω_b = max { ω ∈ ℝ  |  |G_c,o(jω)|² / |G_c,o(0)|² ≥ 0.5 }

For badly-damped and unstable systems, the above choice N = int( β_N ω_s / ω_b ) is a lower bound. Often increasing N may improve the stability and the performance of the closed loop.
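The bandwidth definition above can be evaluated numerically on a frequency grid. The helper below is our own sketch, exercised on a first-order lag as an assumed example (its exact bandwidth is 1 rad/s):

```python
import numpy as np

def bandwidth(G, w_grid):
    """Largest grid frequency for which |G(jw)|^2 / |G(0)|^2 >= 0.5."""
    mag2 = np.abs(G(1j * w_grid)) ** 2 / np.abs(G(0)) ** 2
    return w_grid[mag2 >= 0.5].max()

# First-order lag G(s) = 1/(s + 1): |G(jw)|^2 = 1/(1 + w^2) >= 0.5 iff w <= 1.
w = np.linspace(1e-3, 10.0, 100_000)
wb = bandwidth(lambda s: 1.0 / (s + 1.0), w)
```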
In the constrained case N_c will usually be chosen bigger, to introduce extra degrees of freedom in the optimization. A reasonable choice is

   n ≤ N_c ≤ N/2

If signal constraints get very tight, N_c can be even bigger than N/2 (see the elevator example in chapter 5).
Remark 43: Alamir and Bornard [1] show that there exists a finite value N for which the closed-loop system is asymptotically stable. Unfortunately, only the existence of such a finite prediction horizon is shown; its computation remains difficult.
8.1.2 Initial settings for the signal weighting parameters
The GPC signal weighting parameters:
The weighting λ should be chosen as small as possible, λ = 0 in most cases. In the case of non-minimum phase systems, λ = 0 will lead to stability problems, and so λ should then be chosen small but nonzero. The parameters p_i, i = 1, . . . , n_p of the filter

   P(q) = 1 + p_1 q^{-1} + p_2 q^{-2} + ⋯ + p_{n_p} q^{-n_p}

are chosen such that the roots of the polynomial P(q) are the desired poles of the closed loop. P(q) makes y(k) track the low-pass filtered reference signal P^{-1}(q) r(k).
The LQPC signal weighting parameters
In the SISO case, the matrix Q is an n × n matrix and R will be a scalar. When the purpose is to get the output signal y(k) to zero as fast as possible, we can choose

   Q = C_1^T C_1   and   R = λ² I

which makes the LQPC performance index equivalent to the GPC index for an IO model, r(k) = 0 and P(q) = 1. This can be seen by substitution:

   J(u, k) = Σ_{j=N_m}^{N} x̂^T(k+j|k) Q x̂(k+j|k) + Σ_{j=1}^{N_c} u^T(k+j-1) R u(k+j-1)
           = Σ_{j=N_m}^{N} x̂^T(k+j|k) C_1^T C_1 x̂(k+j|k) + λ² Σ_{j=1}^{N_c} u^T(k+j-1) u(k+j-1)
           = Σ_{j=N_m}^{N} ŷ^T(k+j|k) ŷ(k+j|k) + λ² Σ_{j=1}^{N_c} u^T(k+j-1) u(k+j-1)
The rules for the tuning of λ are the same as for GPC. So, λ = 0 is an obvious choice for minimum phase systems, and λ is small for non-minimum phase systems. By adding a term Q_1 to the weighting matrix

   Q = C_1^T C_1 + Q_1

we can introduce an additional weighting x̂^T(k+j|k) Q_1 x̂(k+j|k) on the states.
In the MIMO case it is important to scale the input and output signals. To make this clear, we take an example from the paper industry where we want to produce a paper roll with a specific thickness (in the order of 80 μm) and width (for A1 paper, in the order of 84 cm). Let y_1(k) represent the thickness error (defined in meters) and y_2(k) the width error (also defined in meters). If we define an error like

   J = Σ |y_1(k)|² + |y_2(k)|²

it is clear that the contribution of the thickness error to the performance index can be neglected in comparison to the contribution due to the width error. This means that the error in paper thickness will be neglected in the control procedure and all the effort will be put into controlling the width of the paper. We should not be surprised that using the above criterion could result in paper with a variation in thickness from, say, 0 to 200 μm. The introduction of scaling gives a better balance in the effort of minimizing both the width error and the thickness error. In this example, let us require a precision of 1% for both outputs; then we can introduce the performance index

   J = Σ |y_1(k) / (8 · 10^{-7})|² + |y_2(k) / (8.4 · 10^{-3})|²

Now the relative errors of both thickness and width are measured in the same way.
In general, let the required precision for the i-th output be given by d_i, so |ŷ_i| ≤ d_i, for i = 1, . . . , m. Then a scaling matrix can be given by

   S = diag(d_1, d_2, . . . , d_m)
By choosing

   Q = C_1^T S^{-2} C_1

the first term of the LQPC performance index will consist of

   x̂^T(k+j|k) Q x̂(k+j|k) = x̂^T(k+j|k) C_1^T S^{-2} C_1 x̂(k+j|k) = Σ_{i=1}^{m} |ŷ_i(k+j|k)|² / d_i²

We can do the same for the input, where we can choose a matrix

   R = λ² diag(r_1², r_2², . . . , r_p²)

The variables r_i² > 0 are a measure of how much the cost should increase if the i-th input u_i(k) increases by 1.
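Building these scaled weighting matrices is a one-liner per matrix. The sketch below uses the paper-machine precisions from the example; C_1 = I and the values of λ and r_i are our own assumed illustration:

```python
import numpy as np

# Required output precisions: 1% of 80 um thickness and 1% of 84 cm width (m).
d = np.array([8e-7, 8.4e-3])
S = np.diag(d)

C1 = np.eye(2)                            # assume outputs equal states (illustration only)
Q = C1.T @ np.linalg.inv(S @ S) @ C1      # Q = C1^T S^-2 C1

lam = 0.1                                 # example input weighting (assumed)
r = np.array([1.0, 2.0])                  # example per-input scales (assumed)
R = lam**2 * np.diag(r**2)                # R = lambda^2 diag(r_1^2, ..., r_p^2)

# A state error equal to the required precision gives unit cost per output:
x = d.copy()
cost = x @ Q @ x                          # = sum_i |y_i|^2 / d_i^2 = 2
```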
Example 44: Tuning of a GPC controller
In this example we tune a GPC controller for the following system:

   a_o(q) y(k) = b_o(q) u(k) + f_o(q) d_o(k) + c_o(q) e_o(k)

where

   a_o(q) = 1 - 0.9 q^{-1}
   b_o(q) = 0.03 q^{-1}
   c_o(q) = 1 - 0.7 q^{-1}
   f_o(q) = 0

For the design we consider e_o(k) to be integrated ZMWN (so e_i(k) = Δe_o(k) is ZMWN). First we transform the above polynomial IO model into a polynomial IIO model, and we obtain the following system:

   a_i(q) y(k) = b_i(q) Δu(k) + f_i(q) d_i(k) + c_i(q) e_i(k)

where

   a_i(q) = (1 - q^{-1}) a_o(q) = (1 - q^{-1})(1 - 0.9 q^{-1})
   b_i(q) = b_o(q) = 0.03 q^{-1}
   c_i(q) = c_o(q) = 1 - 0.7 q^{-1}
   f_i(q) = f_o(q) = 0
Now we want to find the initial values of the tuning parameters following the tuning rules in section 8.1. To start with, we plot the step response of the IO model (!) in Figure 8.1. We see in the figure that the system is well-damped. The 5% settling time is equal to t_s = 30. The order of the IIO model is n = 2 and the dead time is d = 0.
Figure 8.1: Step response s(k) of the IO model, with 5% bounds and the settling time indicated.
Now the following initial settings are recommended:

   N_m = 1 + d = 1
   N_c = n = 2
   N   = int( α_N t_s ) = int( α_N · 30 )

where α_N ∈ [1.5, 2]. Changing α_N from 1.5 to 2 does not have much influence, and we therefore choose the smallest value α_N = 1.5, leading to N = 45.
To verify the tuning, we apply a step to the system (r(k) = 1 for k ≥ 0) and evaluate the output signal y(k). The result is given in Figure 8.2.
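The settling time of the IO model in this example can be reproduced by simulating its step response, y(k) = 0.9 y(k-1) + 0.03 u(k-1); the course reads t_s ≈ 30 off a plot, and the sketch below gives the same value up to the band convention:

```python
import numpy as np

# Step response of G_o(q) = 0.03 q^-1 / (1 - 0.9 q^-1)
K = 60
s = np.zeros(K + 1)
for k in range(1, K + 1):
    s[k] = 0.9 * s[k - 1] + 0.03      # unit step input: u(k) = 1 for k >= 0
s_final = 0.3                          # steady-state gain 0.03 / (1 - 0.9)

# 5% settling time: first sample after which the response stays in the band
# (the response is monotone here, so the band is entered only once)
inside = np.abs(s - s_final) < 0.05 * s_final
t_s = int(np.argmax(inside))
```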
Example 45: Tuning of an LQPC controller
In this example we tune an LQPC controller for the following system:

   A_o = [  2.3   1.0 ]     B_o = [ 0.1  ]     C_o = [ 1  0 ]
         [ -1.2   0   ]           [ 0.09 ]

   K_o = [ 1 ]     L_o = [ 2 ]
         [ 1 ]           [ 1 ]

which is found by sampling a continuous-time system with sampling frequency ω_s = 100 rad/s. Now we want to find the initial values of the tuning parameters following the tuning rules in section 8.1. To start with, we compute the eigenvalues of A_o and find λ_1 = 0.8
Figure 8.2: Closed-loop response to a step reference signal (solid: y(k), dotted: r(k), dash-dot: e_o(k)).
and λ_2 = 1.5. We observe that the system is unstable, and so we have to determine the bandwidth of the original continuous-time system. We plot the Bode diagram of the continuous-time model in Figure 8.3. We see in the figure that ω_b = 2π · 2.89 = 18.15 rad/s. The order of the IO model is n = 2 and the dead time is d = 0 (there is no delay in the system).
Now the following initial settings are recommended:

   N_m = 1 + d = 1
   N_c = n = 2
   N   = int( β_N ω_s / ω_b ) = int( β_N · 5.51 )

where β_N ∈ [4, 25]. The smallest value β_N = 4 leads to N = 22.
To verify the influence of β_N, we start the system in the initial state x = [1 1]^T for β_N = 4, 10 and 25, and evaluate the output signal y(k). The result is given in Figure 8.4. It is clear that for β_N = 10 the best response is found. We therefore obtain the prediction horizon N = int( β_N · 5.51 ) = int(10 · 5.51) = 55.
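The numbers in this example can be checked numerically. Assuming the entry A_o(2,1) = -1.2 (a minus sign easily lost in typesetting, and the only sign pattern consistent with the stated eigenvalues), we get:

```python
import numpy as np

A_o = np.array([[ 2.3, 1.0],
                [-1.2, 0.0]])          # minus sign assumed, see text
eigs = np.sort(np.linalg.eigvals(A_o).real)   # poles of the sampled model

w_s, w_b = 100.0, 18.15                # rad/s, bandwidth read from the Bode plot
N = int(10 * w_s / w_b)                # beta_N = 10 as selected in the example
```

The characteristic polynomial is λ² - 2.3λ + 1.2, with roots 0.8 and 1.5 as stated.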
8.2 Tuning as an optimization problem
In the ideal case the performance index and constraints reflect the mathematical formulation of the specifications from the original engineering problem. However, in practice the
Figure 8.3: Bode magnitude plot of the continuous-time IO model, with the bandwidth ω_b indicated.
Figure 8.4: Closed-loop responses y(k) for β_N = 4, 10 and 25, together with e_o(k), for initial state x = [1 1]^T.
desired behaviour of the closed-loop system is usually expressed in performance specifications, such as step-response criteria (overshoot, rise time and settling time) and response criteria on noise and disturbances. If the specifications are not too tight, the initial tuning, using the rules of thumb of the previous section, may result in a feasible controller. If the desired specifications are not met, a more advanced tuning is necessary. In that case the relation between the performance specifications and the predictive control tuning parameters (N_m, N, N_c; for GPC: λ, P(q); and for LQPC: Q, R) should be understood.
In this section we will study this relation and consider various techniques to improve or optimize our predictive controller.
We start with the closed-loop system as derived in chapter 6:

   [ v(k) ]   [ M_11(q)  M_12(q) ] [ e(k) ]
   [ y(k) ] = [ M_21(q)  M_22(q) ] [ w(k) ]

where w(k) consists of the present and future values of the reference and disturbance signals. Let w(k) = w_r(k) represent a step reference signal and let w(k) = w_d(k) represent a step disturbance signal. Further define the output signal due to a step reference signal as

   s_yr(k) = M_22(q) w_r(k)

the output signal due to a step disturbance signal as

   s_yd(k) = M_22(q) w_d(k)

and the input signal due to a step reference signal as

   s_vr(k) = M_12(q) w_r(k)
We will now consider the performance specifications in more detail, and look at response criteria on step reference signals (criteria 1 to 4), response criteria on noise and disturbances (criteria 5 to 7) and finally at the stability of the closed loop (criterion 8):

1. Overshoot of output signal on step reference signal:
The overshoot on the output is defined as the peak value of the signal y(k) - 1 for r(k) a unit step, or equivalently, for w(k) = w_r(k). This value is equal to:

   φ_os = max_k ( s_yr(k) - 1 )
2. Rise time of output signal on step reference signal:
The rise time is the time required for the output to rise to 80% of the final value. This value is equal to:

   φ_rt = min_k { k | s_yr(k) > 0.8 }

The rise time k_o = φ_rt is an integer and the derivative cannot be computed directly. Therefore we introduce the interpolated rise time φ̄_rt:

   φ̄_rt = ( 0.8 + (k_o - 1) s_yr(k_o) - k_o s_yr(k_o - 1) ) / ( s_yr(k_o) - s_yr(k_o - 1) )

The interpolated rise time is the (real-valued) time instant where the first-order interpolated step response reaches the value 0.8.
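The interpolation formula can be checked with a small helper (the function name is ours):

```python
def interp_rise_time(s):
    """Interpolated rise time of a sampled step response s[0], s[1], ..."""
    k_o = next(k for k in range(len(s)) if s[k] > 0.8)   # integer rise time
    return (0.8 + (k_o - 1) * s[k_o] - k_o * s[k_o - 1]) / (s[k_o] - s[k_o - 1])

# A linear ramp s(k) = 0.2 k crosses the value 0.8 exactly at k = 4,
# so the interpolated rise time must come out as 4.0.
s = [0.2 * k for k in range(11)]
phi_rt = interp_rise_time(s)
```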
3. Settling time of output signal on step reference signal:
The settling time is defined as the time the output signal y(k) needs to settle within 5% of its final value:

   φ_st = min_{k_o} { k_o | |s_yr(k) - 1| < 0.05 for all k ≥ k_o }

The settling time k_o = φ_st is an integer and the derivative cannot be computed directly. Therefore we introduce the interpolated settling time φ̄_st, based on the tracking error s̄_yr(k) = s_yr(k) - 1:

   φ̄_st = ( 0.05 + (k_o - 1) |s̄_yr(k_o)| - k_o |s̄_yr(k_o - 1)| ) / ( |s̄_yr(k_o)| - |s̄_yr(k_o - 1)| )

The interpolated settling time is the (real-valued) time instant where the first-order interpolated tracking error on a step reference signal settles within its 5% region.
4. Peak value of input signal on step reference signal:
The peak value of the input signal v(k) for r(k) a step is given by

   φ_pvr = max_k | s_vr(k) |

5. Peak value of output signal on step disturbance signal:
The peak value of the output signal y(k) for d(k) a step, or equivalently for w(k) = w_d(k), is given by

   φ_pyd = max_k | s_yd(k) |
6. RMS mistracking on zero-mean white noise signal:
The RMS value of the mistracking of the output due to a noise signal with unit spectrum is given by

   φ_rm = ‖ M_21(e^{jω}) ‖_2 = ( (1/2π) ∫_{-π}^{π} | M_21(e^{jω}) |² dω )^{1/2}

7. Bandwidth of closed loop system:
The bandwidth is the largest frequency below which the transfer function from noise to output signal is lower than -20 dB:

   φ_bw = min { ω | | M_21(e^{jω}) | ≥ 0.1 }
8. Stability radius of closed loop system:
The stability radius is the maximum modulus of the closed-loop poles, which is equal to the spectral radius of the closed-loop system matrix A_cl:

   φ_sm = ρ(A_cl) = max_i | λ_i(A_cl) |

where λ_i are the eigenvalues of the closed-loop system matrix A_cl.
(Of course many other performance specifications can be formulated.)
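The stability radius is directly computable from the closed-loop A matrix; a sketch with an arbitrarily chosen example matrix:

```python
import numpy as np

def stability_radius(A_cl):
    """Spectral radius rho(A_cl) = max_i |lambda_i(A_cl)|."""
    return max(abs(np.linalg.eigvals(A_cl)))

A_cl = np.array([[0.5, 1.0],
                 [0.0, 0.9]])     # example closed-loop matrix (assumed)
rho = stability_radius(A_cl)      # 0.9 < 1, so this loop is asymptotically stable
```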
While tuning the predictive controller, some of the above performance specifications will be crucial in the design.
Define the tuning vectors θ and η as follows:

   θ = [ N_m  N  N_c ]^T

   for GPC:   η = [ λ  vec(A_p)^T  vec(B_p)^T  vec(C_p)^T  vec(D_p)^T ]^T
   for LQPC:  η = [ vec(Q^{1/2})^T  vec(R^{1/2})^T ]^T

where (A_p, B_p, C_p, D_p) is a state-space realization of the filter P(q). It is clear that the criteria φ depend on the chosen values of θ and η, so:

   φ = φ(θ, η)

where θ consists of integer values and η consists of real values. The tuning of θ and η can be done sequentially or simultaneously. In a sequential procedure we first choose the integer vector θ and tune the parameters η for this fixed θ. In a simultaneous procedure the vectors θ and η are tuned at the same time, using mixed-integer search algorithms.
The tuning of the integer values is not too difficult if we can limit the size of the search space S_θ. A reasonable choice for bounds on N_m, N and N_c is as follows:

   1 ≤ N_m ≤ N
   1 ≤ N ≤ γ N_{2,o}
   1 ≤ N_c ≤ N

where N_{2,o} is the prediction horizon from the initial tuning, and γ = 2 if N_{2,o} is big and γ = 3 or γ = 4 for small N_{2,o}. By varying γ ∈ ℤ we can increase or decrease the search space S_θ.
For the tuning of η it is important to know the sensitivity of the criteria to changes in η. A good measure for this sensitivity is the derivative ∂φ/∂η_i. Note that this derivative only gives the local sensitivity and is only valid for small changes in η. Furthermore, the criteria are not necessarily continuous functions of η, and so the derivatives may only give a local, one-sided measure of the sensitivity.
Parameter optimization
Tuning the parameters θ and η can be formulated in two ways:

1. Feasibility problem: Find parameters θ and η such that some selected criteria φ_1, . . . , φ_p satisfy specific pre-described bounds:

   φ_1 ≤ c_1
    ⋮
   φ_p ≤ c_p

For example: find parameters θ and η such that the overshoot is smaller than 1.05, the rise time is smaller than 10 samples and the bandwidth of the closed-loop system is larger than π/2 (this can be written as -φ_bw ≤ -π/2).

2. Optimization problem: Find parameters θ and η such that a criterion φ_o is minimized, subject to some selected criteria φ_1, . . . , φ_p satisfying specific pre-described bounds:

   φ_1 ≤ c_1
    ⋮
   φ_p ≤ c_p

For example: find parameters θ and η such that the settling time is minimized, subject to an overshoot smaller than 1.1 and an actuator effort smaller than 10.

Since θ has integer values, an integer feasibility/optimization problem arises. The criteria φ_i(θ, η) are nonlinear functions of θ and η, and a general solution technique does not exist. Methods like Mixed Integer Linear Programming (MILP), dynamic programming, Constraint Logic Propagation (CLP), genetic algorithms, randomized algorithms and heuristic search algorithms can yield a solution. In general, these methods require large amounts of computation time because of the high complexity, and the problems are mathematically classified as NP-hard. This means that the search space (the number of potential solutions) grows exponentially as a function of the problem size.
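For small search spaces, the feasibility problem over the integer vector θ can be attacked by plain enumeration. In the sketch below, the criteria function is a toy analytic stand-in (a real tuning loop would simulate the closed loop for each θ, which is beyond this illustration):

```python
import itertools

def criteria(N_m, N, N_c):
    """Toy stand-in for phi(theta): fake overshoot and rise time (assumed)."""
    overshoot = 2.0 / N + 0.01 * N_c
    rise_time = N / 10.0 + N_c
    return overshoot, rise_time

def feasible_thetas(N2o, gamma=2, os_max=0.1, rt_max=10.0):
    """Enumerate 1 <= N_m <= N <= gamma*N2o, 1 <= N_c <= N meeting the bounds."""
    out = []
    for N in range(1, gamma * N2o + 1):
        for N_m, N_c in itertools.product(range(1, N + 1), range(1, N + 1)):
            os, rt = criteria(N_m, N, N_c)
            if os <= os_max and rt <= rt_max:
                out.append((N_m, N, N_c))
    return out

sols = feasible_thetas(N2o=30)    # all integer tuning vectors meeting the bounds
```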
8.3 Some particular parameter settings
Predictive control is an open method; many well-known controllers can be recovered by solving a specific predictive control problem with special parameter settings (Soeterboek [61]). In this section we will look at some of these special cases. We consider only the SISO case in a polynomial setting, with

   y(k) = ( q^{-d} b(q) / ( a(q) Δ(q) ) ) u(k) + ( c(q) / ( a(q) Δ(q) ) ) e(k)
where

   a(q) = 1 + a_1 q^{-1} + . . . + a_{n_a} q^{-n_a}
   b(q) = b_1 q^{-1} + . . . + b_{n_b} q^{-n_b}
   c(q) = 1 + c_1 q^{-1} + . . . + c_{n_c} q^{-n_c}
   Δ(q) = 1 - q^{-1}

First we define the following parameters:

   n_a = the degree of polynomial a(q)
   n_b = the degree of polynomial b(q)
   d   = the time delay of the process

Note that d ≥ 0 is such that the first non-zero impulse response element is h(d+1) ≠ 0, or equivalently, in a state-space representation d is equal to the smallest number for which C_1 A^d B_3 ≠ 0.
Minimum variance control:
In a minimum variance control (MVC) setting, the following performance index is minimized:

   J(u, k) = | ŷ(k + d|k) - r(k + d) |²

This means that if we use the GPC performance index with the settings N_m = N = d, N_c = 1, P(q) = 1 and λ = 0, we obtain the minimum variance controller. An important remark is that an MV controller only gives a stable closed loop for minimum phase plants.
Generalized minimum variance control:
In a generalized minimum variance control (GMVC) setting, the performance index is extended with a weighting on the input:

   J(u, k) = | ŷ(k + d|k) - r(k + d) |² + λ² |Δu(k)|²

where y(k) is the output signal of an IIO model. This means that if we use the GPC performance index with the settings N_m = N = d, N_c = d, P(q) = 1 and some λ > 0, we obtain the generalized minimum variance controller. A generalized minimum variance controller may also give a stable closed loop for non-minimum phase plants, if λ is well tuned.
Dead-beat control:
In a dead-beat control setting, the following performance index is minimized:

   J(u, k) = Σ_{j=n_b+d}^{n_a+n_b+d} | ŷ(k + j|k) - r(k + j) |²

   Δu(k + j) = 0   for j ≥ n_a + 1

This means that if we use the GPC performance index with the settings N_m = n_b + d, N_c = n_a + 1, P(q) = 1, N = n_a + n_b + d and λ = 0, we obtain the dead-beat controller.
Mean-level control:
In a mean-level control (MLC) setting, the following performance index is minimized:

   J(u, k) = Σ_{j=1}^{∞} | ŷ(k + j|k) - r(k + j) |²

   Δu(k + j) = 0   for j ≥ 1

This means that if we use the GPC performance index with the settings N_m = 1, N_c = 1, P(q) = 1 and N → ∞, we obtain the mean-level controller.
Pole-placement control:
In a pole-placement control setting, the following performance index is minimized:

   J(u, k) = Σ_{j=n_b+d}^{n_a+n_b+d} | ŷ_p(k + j|k) - r(k + j) |²

   Δu(k + j) = 0   for j ≥ n_a + 1

where y_p(k) = P(q) y(k) is the weighted output signal. This means that we use the GPC performance index with the settings N_m = n_b + d, N_c = n_a + 1, P(q) = P_d(q) and N = n_a + n_b + d. In a state-space representation, the eigenvalues of (A - B_3 F) will become equal to the roots of the polynomial P_d(q).
Appendix A: Quadratic Programming

In this appendix we consider optimization problems with a quadratic objective function and linear constraints, known as quadratic programming problems.

Definition 46 The quadratic programming problem:
Minimize the objective function

   F(θ) = (1/2) θ^T H θ + f^T θ

over the variable θ, where H is a positive semi-definite matrix, subject to the linear inequality constraint

   A θ ≤ b

□ End Definition
This is the type of quadratic programming problem we encountered in chapter 5, where we solved the standard predictive control problem.

Remark 47: Throughout this appendix we assume H to be a symmetric matrix. If H is a non-symmetric matrix, we can define

   H_new = (1/2) (H + H^T)

Substitution of this H_new for H does not change the objective function.
Remark 48: The unconstrained quadratic programming problem:
If the constraints are omitted and only the quadratic objective function is considered, an analytic solution exists: an extremum of the objective function

   F(θ) = (1/2) θ^T H θ + f^T θ

is found when the gradient of F is equal to zero:

   ∇_θ F(θ) = H θ + f = 0

If H is non-singular, the extremum is reached for

   θ = -H^{-1} f

Because H is then a positive definite matrix, the extremum will be a minimum.
There are various algorithms to solve the quadratic programming problem:

1. The modified simplex method: For small and medium-sized quadratic programming problems the modified simplex method is the most efficient optimization algorithm. It will find the optimum in a finite number of steps.

2. The interior-point method: For large quadratic programming problems it is better to use the interior-point method. A disadvantage compared to the modified simplex method is that the optimum can only be approximated. In Nesterov and Nemirovsky [49] an upper bound is derived on the number of iterations that are needed to find the optimum with a pre-described precision.

3. Other convex optimization methods: Other convex optimization methods, like the cutting plane algorithm [10, 22] or the ellipsoid algorithm [10, 32], may also be used to solve the quadratic programming problem.

The modified simplex method
First we define another type of quadratic programming problem that is very common in the literature:

Definition 49 The quadratic programming problem (Type 2):
Minimize the objective function

   F(θ) = (1/2) θ^T H θ + f^T θ

over the variable θ, where H is a positive semi-definite matrix, subject to the linear equality constraint

   A θ = b

and the non-negativity constraint

   θ ≥ 0

□ End Definition
Lemma 50 The quadratic programming problem of definition 46 can be rewritten as a quadratic programming problem of the form of definition 49.

Proof: Define

   θ = θ^+ - θ^-,   with θ^+, θ^- ≥ 0

so that θ is decomposed into θ^+ and θ^-. Further introduce a slack variable y such that

   A θ + y = b   where y ≥ 0

We define

   H̃ = [ H  -H  0 ]     f̃ = [  f ]     Ã = [ A  -A  I ]     θ̃ = [ θ^+ ]
        [-H   H  0 ]          [ -f ]                              [ θ^- ]
        [ 0   0  0 ]          [  0 ]                              [ y   ]

and obtain a quadratic programming problem of the form of definition 49:

   F(θ̃) = (1/2) θ̃^T H̃ θ̃ + f̃^T θ̃
   Ã θ̃ = b
   θ̃ ≥ 0
□
Consider the quadratic programming problem of definition 49 with the objective function

   F(θ) = (1/2) θ^T H θ + f^T θ,

the linear equality constraint

   A θ = b,

and the non-negativity constraint

   θ ≥ 0

for a positive semi-definite matrix H.
For an inequality/equality constrained optimization problem, necessary conditions for an extremum of the function F(θ) in θ, satisfying h(θ) = 0 and g(θ) ≤ 0, are given by the Kuhn-Tucker conditions: there exist vectors λ and μ such that

   ∇F(θ) + ∇g(θ)^T λ + ∇h(θ)^T μ = 0
   λ^T g(θ) = 0
   λ ≥ 0
   h(θ) = 0
   g(θ) ≤ 0
The Kuhn-Tucker conditions for this quadratic programming problem are given by:

   H θ - λ + A^T μ = -f       (8.1)
   λ^T θ = 0                  (8.2)
   A θ = b                    (8.3)
   θ, λ ≥ 0                   (8.4)

We recognize the equality constraints (8.1) and (8.3) and the non-negativity constraint (8.4), two ingredients of a general linear programming problem. On the other hand, we miss an objective function, and we have an additional nonlinear equality constraint (8.2). (Note that the product λ^T θ is nonlinear in the new parameter vector (θ, μ, λ).) An objective function can be obtained by introducing two slack variables u_1 and u_2, and defining the problem:

   A θ + u_1 = b                     (8.5)
   H θ - λ + A^T μ + u_2 = -f        (8.6)
   θ, λ, u_1, u_2 ≥ 0                (8.7)
   λ^T θ = 0                         (8.8)

while minimizing

   min_{θ, μ, λ, u_1, u_2}  1^T u_1 + 1^T u_2       (8.9)
Construct the matrices

   A_0 = [ A   0    0   I  0 ]     b_0 = [  b ]     f_0 = [ 0 ]     θ_0 = [ θ   ]
         [ H  A^T  -I   0  I ]           [ -f ]           [ 0 ]           [ μ   ]
                                                          [ 0 ]           [ λ   ]
                                                          [ 1 ]           [ u_1 ]
                                                          [ 1 ]           [ u_2 ]      (8.10)

(where 0 and 1 denote zero and all-ones vectors of appropriate dimensions), and the general linear programming problem can be written as:

   min f_0^T θ_0   where   A_0 θ_0 = b_0 and θ_0 ≥ 0

with the additional nonlinear constraint λ^T θ = 0.
It may be clear from the Kuhn-Tucker conditions (8.1)-(8.4) that an optimum is only reached when u_1 and u_2 in (8.5)-(8.7) become zero. This can be achieved by using a simplex algorithm which is modified in the sense that there is an extra condition on finding feasible basic solutions, namely that λ^T θ = 0.
Note from (8.4) and (8.2) that

   λ^T θ = λ_1 θ_1 + λ_2 θ_2 + . . . + λ_n θ_n = 0   where λ_i, θ_i ≥ 0 for i = 1, . . . , n

means that the nonlinear constraint (8.2) can be rewritten as:

   λ_i θ_i = 0   for i = 1, . . . , n

and thus either λ_i = 0 or θ_i = 0. The extra feasibility condition on a basic solution now becomes λ_i θ_i = 0 for i = 1, . . . , n.

Remark 51: In the above derivations we assumed b ≥ 0 and f ≤ 0. If this is not the case, we have to change the corresponding signs of u_1 and u_2 in equations (8.5) and (8.6).
We will sketch the modified simplex algorithm on the basis of an example.

Modified simplex algorithm:
Consider the following type 2 quadratic programming problem:

   min_θ (1/2) θ^T H θ + f^T θ
   A θ = b
   θ ≥ 0

where

   H = [  2  -1   0   0 ]     f = [  0 ]     A = [ 2  1  0  0 ]     b = [ 4 ]
       [ -1   1  -1   0 ]         [ -3 ]         [ 0  2  1  2 ]         [ 4 ]
       [  0  -1   4  -2 ]         [  2 ]
       [  0   0  -2   4 ]         [  0 ]

Before we start the modified simplex algorithm, we check whether a simple solution can be found. If we minimize the unconstrained function

   min_θ (1/2) θ^T H θ + f^T θ

we obtain θ = [ 7  14  4  2 ]^T. This solution is not feasible, because the equality constraint A θ = b is not satisfied.
We construct the matrix A_0, the vectors b_0, f_0 and the parameter vector θ_0 according to equation (8.10). Note that θ_0 is a 16 × 1 vector, in which θ is a 4 × 1 vector, μ is a 2 × 1 vector, λ is a 4 × 1 vector, u_1 is a 2 × 1 vector and u_2 is a 4 × 1 vector.
The modified simplex method starts with finding a first feasible solution. We choose B to contain the six rightmost columns of A_0. We obtain the solution

   θ = [ 0  0  0  0 ]^T    μ = [ 0  0 ]^T    λ = [ 0  0  0  0 ]^T    u_1 = [ 4  4 ]^T    u_2 = [ 0  3  2  0 ]^T

The solution is feasible because θ, λ, u_1, u_2 ≥ 0 and λ_i θ_i = 0 for i = 1, 2, 3, 4. However, the optimum has not been found yet, because u_1 and u_2 are not zero.
After a finite number of iterations the optimum is found by selecting columns 1, 2, 5, 6, 9 and 10 of the matrix A_0 for the matrix B. We find the optimal solution

   θ = [ 1  2  0  0 ]^T    μ = [ 0  1 ]^T    λ = [ 0  0  1  2 ]^T    u_1 = [ 0  0 ]^T    u_2 = [ 0  0  0  0 ]^T

and the Kuhn-Tucker conditions (8.1)-(8.4) are satisfied for these values.
Algorithms that use a modified version of the simplex method are Wolfe's algorithm [76] and the pivoting algorithm of Lemke [40].
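The example data can be verified numerically. Here we take the minus signs (easily lost in typesetting) restored as f = [0 -3 2 0]^T and negative off-diagonal entries in H; this is the one reading under which the stated optimal solution satisfies the Kuhn-Tucker conditions exactly:

```python
import numpy as np

H = np.array([[ 2., -1.,  0.,  0.],
              [-1.,  1., -1.,  0.],
              [ 0., -1.,  4., -2.],
              [ 0.,  0., -2.,  4.]])
f = np.array([0., -3., 2., 0.])           # sign pattern assumed, see lead-in
A = np.array([[2., 1., 0., 0.],
              [0., 2., 1., 2.]])
b = np.array([4., 4.])

theta_u = np.linalg.solve(H, -f)          # unconstrained optimum: [7, 14, 4, 2]

theta = np.array([1., 2., 0., 0.])        # claimed constrained optimum
mu    = np.array([0., 1.])
lam   = np.array([0., 0., 1., 2.])

stationarity = H @ theta - lam + A.T @ mu + f   # Kuhn-Tucker stationarity residual
```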
The interior-point method
In this section we discuss a popular convex optimization algorithm, namely the interior-point method of Nesterov and Nemirovsky [49]. This method solves the constrained convex optimization problem

   f* = f(θ*) = min_θ f(θ)   subject to   g(θ) ≤ 0

and is based on the barrier function method. Consider the non-empty strictly feasible set

   G = { θ | g_i(θ) < 0, i = 1, . . . , m }

and define the barrier function:

   φ(θ) = -Σ_{i=1}^{m} log( -g_i(θ) )   for θ ∈ G,   and   φ(θ) = ∞ otherwise

Note that φ is convex on G and, as θ approaches the boundary of G, the function φ will go to infinity. Now consider the unconstrained optimization problem

   min_θ { t f(θ) + φ(θ) }

for some t ≥ 0. The value that minimizes this function is denoted by

   θ*(t) = arg min_θ { t f(θ) + φ(θ) } .
It is clear that the curve θ*(t), which is called the central path, is always located inside the set G. The weight t gives the relative weight between the objective function f and the barrier function φ, which is a measure for the constraint violation. Intuition suggests that θ*(t) will converge to the optimum θ* of the original problem as t → ∞. Note that in a minimum θ*(t) the gradient of ψ(θ, t) = t f(θ) + φ(θ) with respect to θ is zero, so:

   ∇_θ ψ(θ*(t), t) = t ∇f(θ*(t)) - Σ_{i=1}^{m} ( 1 / g_i(θ*(t)) ) ∇g_i(θ*(t)) = 0 .

We can rewrite this as

   ∇f(θ*(t)) + Σ_{i=1}^{m} λ_i*(t) ∇g_i(θ*(t)) = 0

where

   λ_i*(t) = -1 / ( g_i(θ*(t)) t ) .      (8.11)
Then from the above we can derive that at the optimum the following conditions hold:

   g(θ*(t)) ≤ 0                                               (8.12)
   λ_i*(t) ≥ 0                                                (8.13)
   ∇f(θ*(t)) + Σ_{i=1}^{m} λ_i*(t) ∇g_i(θ*(t)) = 0            (8.14)
   -λ_i*(t) g_i(θ*(t)) = 1/t                                  (8.15)

For t → ∞ the right-hand side of the last condition will go to zero, and so the Kuhn-Tucker conditions will be satisfied. This means that θ*(t) → θ* and λ*(t) → λ* for t → ∞.
From the above we can see that the central path can be regarded as a continuous deformation of the Kuhn-Tucker conditions.
Now define the dual function [58, 9]:

   d(λ) = min_θ { f(θ) + Σ_{i=1}^{m} λ_i g_i(θ) }   for λ ∈ ℝ^m

A property of this dual function is that

   d(λ) ≤ f*   for any λ ≥ 0
and so we have

   f* ≥ d(λ*(t))
      = min_θ { f(θ) + Σ_{i=1}^{m} λ_i*(t) g_i(θ) }
      = f(θ*(t)) + Σ_{i=1}^{m} λ_i*(t) g_i(θ*(t))
        (since θ*(t) also minimizes the function F defined by F(θ) = f(θ) + Σ_{i=1}^{m} λ_i*(t) g_i(θ),
         because ∇F(θ*(t)) = 0 by (8.14))
      = f(θ*(t)) - m/t      (by (8.11))

and thus

   f(θ*(t)) ≥ f* ≥ f(θ*(t)) - m/t
It is clear that m/t is an upper bound for the distance between the computed f(θ*(t)) and the optimum f*. If we want a desired accuracy ε > 0, defined by

   | f* - f(θ*(t)) | ≤ ε

then choosing t = m/ε gives us one unconstrained minimization problem, with a solution θ*(m/ε) that satisfies:

   | f* - f(θ*(m/ε)) | ≤ ε .

Algorithms to solve unconstrained minimization problems are discussed in [58].
The approach presented above works, but can be very slow. A better way is to increase t in steps to the desired value m/ε, and to use each intermediate solution as the starting value for the next. This algorithm is called the sequential unconstrained minimization technique (SUMT) and is described by the following steps:

Given: a strictly feasible θ ∈ G, a t > 0 and a tolerance ε
Step 1: Compute θ*(t) starting from θ:  θ*(t) = arg min_θ { t f(θ) + φ(θ) }.
Step 2: Set θ = θ*(t).
Step 3: If m/t ≤ ε, return θ and stop.
Step 4: Increase t and go to step 1.

This algorithm generates a sequence of points on the central path and solves the constrained optimization problem via a sequence of unconstrained minimizations (often by quasi-Newton methods). A simple update rule for t is used, so t(k+1) = γ t(k), where γ is typically between 10 and 100.
Steps 1-4 are called the outer iteration, and step 1 involves an inner iteration (e.g., the Newton steps). A trade-off has to be made: a smaller γ means fewer inner iterations to compute θ_{k+1} from θ_k, but more outer iterations.
In Nesterov and Nemirovsky [49] an upper bound is derived on the number of iterations that are needed for an optimization using the interior-point method. Tuning rules for the choice of γ are also given in their book.
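The SUMT loop can be sketched for a small problem with linear inequality constraints. Everything below (the test problem, the Newton inner solver and the parameter values) is our own minimal illustration, not the algorithm of [49] verbatim:

```python
import numpy as np

# Test problem: min (th1-2)^2 + (th2-2)^2  s.t.  th1+th2 <= 2, th1 >= 0, th2 >= 0,
# written as Ac @ th <= bc. The optimum is th* = (1, 1) with f* = 2.
Ac = np.array([[1., 1.], [-1., 0.], [0., -1.]])
bc = np.array([2., 0., 0.])
m = len(bc)

def grad_hess(th, t):
    """Gradient and Hessian of psi(th, t) = t f(th) + phi(th)."""
    s = bc - Ac @ th                           # slacks, positive inside G
    g = t * 2 * (th - 2.0) + Ac.T @ (1.0 / s)
    H = t * 2 * np.eye(2) + Ac.T @ np.diag(1.0 / s**2) @ Ac
    return g, H

def sumt(th, t=1.0, gamma=10.0, eps=1e-3):
    while True:
        for _ in range(50):                    # inner iteration: damped Newton
            g, H = grad_hess(th, t)
            step = np.linalg.solve(H, -g)
            alpha = 1.0
            while np.any(bc - Ac @ (th + alpha * step) <= 0):
                alpha *= 0.5                   # backtrack to stay strictly feasible
            th = th + alpha * step
            if np.linalg.norm(g) < 1e-9:
                break
        if m / t <= eps:                       # step 3: accuracy m/t reached
            return th
        t *= gamma                             # step 4: t(k+1) = gamma * t(k)

th = sumt(np.array([0.5, 0.5]))                # converges towards (1, 1)
```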
Appendix B: Basic State Space Operations

Cascade:  G(q) = G_1(q) G_2(q)

   G_1 ~ [ A_1 | B_1 ]      G_2 ~ [ A_2 | B_2 ]
         [ C_1 | D_1 ]            [ C_2 | D_2 ]

   G ~ [ A_1  B_1 C_2 | B_1 D_2 ]   [ A_2      0   | B_2     ]
       [ 0    A_2     | B_2     ] = [ B_1 C_2  A_1 | B_1 D_2 ]
       [ C_1  D_1 C_2 | D_1 D_2 ]   [ D_1 C_2  C_1 | D_1 D_2 ]
Parallel:  G(q) = G_1(q) + G_2(q)

   G_1 ~ [ A_1 | B_1 ]      G_2 ~ [ A_2 | B_2 ]
         [ C_1 | D_1 ]            [ C_2 | D_2 ]

   G ~ [ A_1  0   | B_1       ]
       [ 0    A_2 | B_2       ]
       [ C_1  C_2 | D_1 + D_2 ]
Change of variables:

   x → x̄ = T x   (T invertible)
   u → ū = P u   (P invertible)
   y → ȳ = R y

   G ~ [ A | B ]   →   Ḡ ~ [ Ā | B̄ ] = [ T A T^{-1} | T B P^{-1} ]
       [ C | D ]            [ C̄ | D̄ ]   [ R C T^{-1} | R D P^{-1} ]
State feedback:

   u → u + F x

   G ~ [ A | B ]   →   Ḡ ~ [ Ā | B̄ ] = [ A + B F | B ]
       [ C | D ]            [ C̄ | D̄ ]   [ C + D F | D ]
Output injection:

   x(k + 1) = A x(k) + B u(k)   →   x(k + 1) = A x(k) + B u(k) + H y(k)

   G ~ [ A | B ]   →   Ḡ ~ [ Ā | B̄ ] = [ A + H C | B + H D ]
       [ C | D ]            [ C̄ | D̄ ]   [ C       | D       ]
Transpose:

   Ḡ(q) = G^T(q)

   G ~ [ A | B ]   →   Ḡ ~ [ A^T | C^T ]
       [ C | D ]            [ B^T | D^T ]
Left (right) inversion:

   Ḡ(q) = G^+(q)

where G^+ is a left (right) inverse of G.

   G ~ [ A | B ]   →   G^+ ~ [ A - B D^+ C | B D^+ ]
       [ C | D ]             [ -D^+ C      | D^+   ]

where D^+ is the left (right) inverse of D. For G(q) square, we obtain G^+(q) = G^{-1}(q) and D^+ = D^{-1}.
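These realizations are easy to check numerically. Below we verify the cascade formula by comparing the impulse response of the combined realization with the convolution of the individual impulse responses; the two scalar systems are arbitrarily chosen examples:

```python
import numpy as np

def impulse(A, B, C, D, K=30):
    """First K+1 Markov parameters h(0) = D, h(k) = C A^(k-1) B."""
    h = [float(D)]
    x = np.atleast_1d(B).astype(float)
    for _ in range(K):
        h.append(float(C @ x))
        x = A @ x
    return np.array(h)

# Two scalar example systems (assumed): G1 = (A1,B1,C1,D1), G2 = (A2,B2,C2,D2)
A1, B1, C1, D1 = np.array([[0.5]]), np.array([1.0]), np.array([1.0]), 0.0
A2, B2, C2, D2 = np.array([[0.3]]), np.array([1.0]), np.array([1.0]), 1.0

# Cascade realization of G = G1 G2 from the table above
A = np.block([[A1, B1[:, None] * C2[None, :]],
              [np.zeros((1, 1)), A2]])
B = np.concatenate([B1 * D2, B2])
C = np.concatenate([C1, D1 * C2])
D = D1 * D2

h   = impulse(A, B, C, D)
h12 = np.convolve(impulse(A1, B1, C1, D1), impulse(A2, B2, C2, D2))[:len(h)]
```

The cascade impulse response equals the convolution of the two individual impulse responses, as it must.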
Appendix C: Some results from linear algebra

Left inverse and left complement:
Consider a full-column-rank matrix M ∈ ℝ^{m×n} with m ≥ n. Let the singular value decomposition of M be given by:

   M = [ U_1  U_2 ] [ Σ ] V^T
                    [ 0 ]

The left inverse is defined as

   M^ℓ = V Σ^{-1} U_1^T

and the left complement is defined as

   M^{ℓ⊥} = U_2^T

The left inverse and left complement have the following properties:

   M^ℓ M = I
   M^{ℓ⊥} M = 0
Right inverse and right complement:
Consider a full-row-rank matrix M ∈ ℝ^{m×n} with m ≤ n. Let the singular value decomposition of M be given by:

   M = U [ Σ  0 ] [ V_1^T ]
                  [ V_2^T ]

The right inverse is defined as

   M^r = V_1 Σ^{-1} U^T

and the right complement is defined as

   M^{r⊥} = V_2

The right inverse and right complement have the following properties:

   M M^r = I
   M M^{r⊥} = 0
181
182 Appendices
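These definitions and properties can be verified directly with an SVD routine; a small sketch (the example matrix is chosen arbitrarily, full column rank):

```python
import numpy as np

M = np.array([[1.0, 2.0], [0.0, 1.0], [3.0, -1.0]])  # m = 3 > n = 2, full column rank
U, s, Vt = np.linalg.svd(M)             # full U is 3x3, s holds the two singular values
U1, U2 = U[:, :2], U[:, 2:]
Ml = Vt.T @ np.diag(1.0 / s) @ U1.T     # left inverse  M^l = V Sigma^{-1} U1^T
Mlp = U2.T                              # left complement M^{l,perp} = U2^T
print(np.allclose(Ml @ M, np.eye(2)))         # True
print(np.allclose(Mlp @ M, np.zeros((1, 2))))  # True
```

The right inverse and right complement follow the same pattern for a full-row rank $M$ (transpose the roles of $U$ and $V$).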
Singular value decomposition

If $A \in \mathbb{R}^{m \times n}$ then there exist orthogonal matrices
\[ U = \begin{bmatrix} u_1 & \cdots & u_m \end{bmatrix} \in \mathbb{R}^{m \times m} \quad\text{and}\quad V = \begin{bmatrix} v_1 & \cdots & v_n \end{bmatrix} \in \mathbb{R}^{n \times n} \]
such that
\[ U^T A V = \Sigma \]
where
\[ \Sigma = \begin{bmatrix} \sigma_1 & & & 0 & \cdots & 0 \\ & \ddots & & \vdots & & \vdots \\ & & \sigma_m & 0 & \cdots & 0 \end{bmatrix} \ \text{for } m \le n, \qquad
\Sigma = \begin{bmatrix} \sigma_1 & & \\ & \ddots & \\ & & \sigma_n \\ 0 & \cdots & 0 \\ \vdots & & \vdots \\ 0 & \cdots & 0 \end{bmatrix} \ \text{for } m \ge n \]
with $\sigma_1 \ge \sigma_2 \ge \ldots \ge \sigma_p \ge 0$, $p = \min(m,n)$. This means that
\[ A = U \Sigma V^T \]
This decomposition of $A$ is known as the singular value decomposition of $A$. The $\sigma_i$ are known as the singular values of $A$ (and are usually arranged in descending order).

Property:
$U$ contains the eigenvectors of $A A^T$, and $V$ contains the eigenvectors of $A^T A$. $\Sigma$ is a diagonal matrix whose entries are the nonnegative square roots of the eigenvalues of $A^T A$ (for $m \ge n$) or $A A^T$ (for $m \le n$).

Property:
Let $\sigma_{\min}$ and $\sigma_{\max}$ be the smallest and largest singular value of a square matrix $A$, and let $\lambda_i$ denote the eigenvalues of $A$. Then there holds:
\[ \sigma_{\min} \le |\lambda_i| \le \sigma_{\max} \quad \text{for all } i \]
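The last property is easy to check numerically; a sketch (random square matrix chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
sv = np.linalg.svd(A, compute_uv=False)   # singular values, descending
eig = np.linalg.eigvals(A)                # (possibly complex) eigenvalues
ok = sv.min() <= np.abs(eig).min() + 1e-12 and np.abs(eig).max() <= sv.max() + 1e-12
print(ok)  # True
```

The bound holds for every square matrix, so any seed would do here.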
Appendix D: Model descriptions

D.1 Impulse and step response models

D.1.1. Model descriptions

Let $g_m$ and $s_m$ denote the impulse response parameters and the step response parameters of the stable system $G_o(q)$, respectively. Then
\[ s_m = \sum_{j=1}^{m} g_j \quad\text{and}\quad g_m = s_m - s_{m-1}, \qquad m \in \{1, 2, \ldots\} \]
The transfer function $G_o(q)$ is found by
\[ G_o(q) = \sum_{m=1}^{\infty} g_m q^{-m} = \sum_{m=1}^{\infty} s_m q^{-m} (1 - q^{-1}) \tag{8.16} \]
In the same way, let $f_m$ and $t_m$ denote the impulse response parameters and the step response parameters of the stable disturbance model $F_o(q)$, respectively. Then
\[ t_m = \sum_{j=0}^{m} f_j \quad\text{and}\quad f_m = t_m - t_{m-1}, \qquad m \in \{0, 1, 2, \ldots\} \]
The transfer function $F_o(q)$ is found by
\[ F_o(q) = \sum_{m=0}^{\infty} f_m q^{-m} = \sum_{m=0}^{\infty} t_m q^{-m} (1 - q^{-1}) \tag{8.17} \]
The truncated impulse response model is defined by
\[ y(k) = \sum_{m=1}^{n} g_m u(k-m) + \sum_{m=0}^{n} f_m d_o(k-m) + e_o(k) \]
where $n$ is an integer number such that $g_m \approx 0$ and $f_m \approx 0$ for all $m > n$.
This is an IO-model in which the disturbance $d_o(k)$ is known and the noise signal $e_o(k)$ is chosen as ZMWN.

The truncated step response model is defined by
\[ y(k) = \sum_{m=1}^{n-1} s_m \Delta u(k-m) + s_n u(k-n) + \sum_{m=0}^{n-1} t_m d_i(k-m) + t_n d_o(k-n) + e_o(k) \]
This is an IIO-model in which $d_i(k) = \Delta d_o(k) = d_o(k) - d_o(k-1)$ is the disturbance increment signal, and the noise increment signal $e_i(k) = \Delta e_o(k) = e_o(k) - e_o(k-1)$ is chosen as ZMWN to account for an offset at the process output, so
\[ e_o(k) = \Delta^{-1}(q)\, e_i(k) \]
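The conversion between the two parameter sets amounts to a cumulative sum and a first difference; a one-line sketch of each direction:

```python
import numpy as np

g = np.array([1.0, 0.5, 0.25, 0.125])        # impulse response parameters g_1..g_n
s = np.cumsum(g)                              # s_m = sum_{j=1}^m g_j
g_back = np.diff(np.concatenate(([0.0], s)))  # g_m = s_m - s_{m-1}, with s_0 = 0
print(np.allclose(g_back, g))  # True
```

For the disturbance model the same holds with $f_m$, $t_m$ and the sum starting at $m = 0$.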
From impulse response model to state space model

(Lee et al. [38]) Let
\[ G_o(q) = g_1 q^{-1} + g_2 q^{-2} + \ldots + g_n q^{-n} \]
\[ F_o(q) = f_0 + f_1 q^{-1} + f_2 q^{-2} + \ldots + f_n q^{-n} \]
\[ H_o(q) = 1 \]
where $g_i$ are the impulse response parameters of the system. Then the state space matrices are given by:
\[ A_o = \begin{bmatrix} 0 & I & 0 & \cdots & 0 \\ 0 & 0 & I & & 0 \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & 0 & & I \\ 0 & 0 & 0 & \cdots & 0 \end{bmatrix} \quad
K_o = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix} \quad
L_o = \begin{bmatrix} f_1 \\ f_2 \\ \vdots \\ f_n \end{bmatrix} \quad
B_o = \begin{bmatrix} g_1 \\ g_2 \\ \vdots \\ g_n \end{bmatrix} \]
\[ C_o = \begin{bmatrix} I & 0 & 0 & \cdots & 0 \end{bmatrix} \qquad D_H = I \qquad D_F = f_0 \]
This IO system can be translated into an IIO model using the formulas from section 2.1:
\[ A_i = \begin{bmatrix} I & I & 0 & \cdots & 0 & 0 \\ 0 & 0 & I & & 0 & 0 \\ \vdots & & & \ddots & & \vdots \\ 0 & 0 & 0 & & I & 0 \\ 0 & 0 & 0 & \cdots & 0 & I \\ 0 & 0 & 0 & \cdots & 0 & 0 \end{bmatrix} \quad
K_i = \begin{bmatrix} I \\ 0 \\ \vdots \\ 0 \end{bmatrix} \quad
L_i = \begin{bmatrix} f_0 \\ f_1 \\ f_2 \\ \vdots \\ f_n \end{bmatrix} \quad
B_i = \begin{bmatrix} 0 \\ g_1 \\ g_2 \\ \vdots \\ g_n \end{bmatrix} \]
\[ C_i = \begin{bmatrix} I & I & 0 & \cdots & 0 \end{bmatrix} \qquad D_H = I \qquad D_F = f_0 \]
[Figure 8.5: IO state space representation of impulse response model]

[Figure 8.6: IIO state space representation of impulse response model]
From step response model to state space model

(Lee et al. [38]) Let
\begin{align*}
G_i(q) &= s_1 q^{-1} + s_2 q^{-2} + \ldots + s_{n-1} q^{-n+1} + s_n q^{-n} + s_n q^{-n-1} + s_n q^{-n-2} + \ldots \\
&= s_1 q^{-1} + s_2 q^{-2} + \ldots + s_{n-1} q^{-n+1} + \frac{s_n q^{-n}}{1 - q^{-1}} \\
&= \frac{g_1 q^{-1}}{1 - q^{-1}} + \frac{g_2 q^{-2}}{1 - q^{-1}} + \ldots + \frac{g_n q^{-n}}{1 - q^{-1}} \\[2mm]
F_i(q) &= t_0 + t_1 q^{-1} + t_2 q^{-2} + \ldots + t_{n-1} q^{-n+1} + t_n q^{-n} + t_n q^{-n-1} + t_n q^{-n-2} + \ldots \\
&= t_0 + t_1 q^{-1} + t_2 q^{-2} + \ldots + t_{n-1} q^{-n+1} + \frac{t_n q^{-n}}{1 - q^{-1}} \\
&= \frac{f_0}{1 - q^{-1}} + \frac{f_1 q^{-1}}{1 - q^{-1}} + \frac{f_2 q^{-2}}{1 - q^{-1}} + \ldots + \frac{f_n q^{-n}}{1 - q^{-1}} \\[2mm]
H_i(q) &= \frac{I}{1 - q^{-1}}
\end{align*}
where $s_i$ are the step response parameters of the system. Then the state space matrices are given by:
\[ A_i = \begin{bmatrix} 0 & I & 0 & \cdots & 0 & 0 \\ 0 & 0 & I & & 0 & 0 \\ \vdots & & & \ddots & & \vdots \\ 0 & 0 & 0 & & I & 0 \\ 0 & 0 & 0 & \cdots & 0 & I \\ 0 & 0 & 0 & \cdots & 0 & I \end{bmatrix} \quad
K_i = \begin{bmatrix} I \\ I \\ \vdots \\ I \end{bmatrix} \quad
L_i = \begin{bmatrix} t_1 \\ t_2 \\ \vdots \\ t_n \end{bmatrix} \quad
B_i = \begin{bmatrix} s_1 \\ s_2 \\ \vdots \\ s_n \end{bmatrix} \]
\[ C_i = \begin{bmatrix} I & 0 & 0 & \cdots & 0 \end{bmatrix} \qquad D_H = I \qquad D_F = t_0 = f_0 \]
[Figure 8.7: State space representation of step response model]
Prediction using step response models

Prediction for infinite-length step response model:

The step response model has been derived in section 2.2.2 and is given by:
\[ y(k) = \sum_{m=1}^{\infty} s_m \Delta u(k-m) + \sum_{m=0}^{\infty} t_m d_i(k-m) + \sum_{m=0}^{\infty} e_i(k-m) \tag{8.18} \]
where $y$ is the output signal, $\Delta u$ is the increment of the input signal $u$, $d_i$ is the increment of the (known) disturbance signal $d_o$, and $e_i$, which is the increment of the output noise signal $e_o$, is assumed to be zero-mean white noise.

Note that the output signal $y(k)$ in equation (8.18) is built up from all past values of the input, disturbance and noise signals. We introduce a signal $y_0(k|k-\tau)$, $\tau > 0$, which gives the value of $y(k)$ based on the inputs, disturbances and noise signals up to time $k-\tau$, assuming future values to be zero, so $\Delta u(t) = 0$, $d_i(t) = 0$ and $e_i(t) = 0$ for $t > k-\tau$:
\[ y_0(k+j|k-\tau) = \sum_{m=0}^{\infty} s_{j+\tau+m}\, \Delta u(k-\tau-m) + \sum_{m=0}^{\infty} t_{j+\tau+m}\, d_i(k-\tau-m) + \sum_{m=0}^{\infty} e_i(k-\tau-m) \tag{8.19} \]
Based on the above definition it is easy to derive a recursive expression for $y_0$:
\begin{align*}
y_0(k+j|k) &= \sum_{m=0}^{\infty} s_{j+m}\Delta u(k-m) + \sum_{m=0}^{\infty} t_{j+m} d_i(k-m) + \sum_{m=0}^{\infty} e_i(k-m) \\
&= \sum_{m=1}^{\infty} s_{j+m}\Delta u(k-m) + \sum_{m=1}^{\infty} t_{j+m} d_i(k-m) + \sum_{m=1}^{\infty} e_i(k-m) + s_j \Delta u(k) + t_j d_i(k) + e_i(k) \\
&= y_0(k+j|k-1) + s_j \Delta u(k) + t_j d_i(k) + e_i(k)
\end{align*}
Future values of $y$ can be derived as follows:
\begin{align*}
y(k+j) &= \sum_{m=1}^{\infty} s_m \Delta u(k+j-m) + \sum_{m=0}^{\infty} t_m d_i(k+j-m) + \sum_{m=0}^{\infty} e_i(k+j-m) \\
&= \sum_{m=1}^{\infty} s_{j+m}\Delta u(k-m) + \sum_{m=1}^{\infty} t_{j+m} d_i(k-m) + \sum_{m=1}^{\infty} e_i(k-m) \\
&\qquad + \sum_{m=1}^{j} s_m \Delta u(k+j-m) + \sum_{m=0}^{j} t_m d_i(k+j-m) + \sum_{m=0}^{j} e_i(k+j-m) \\
&= y_0(k+j|k-1) + \sum_{m=1}^{j} s_m \Delta u(k+j-m) + \sum_{m=0}^{j} t_m d_i(k+j-m) + \sum_{m=0}^{j} e_i(k+j-m)
\end{align*}
Suppose we are at time $k$; then we only have knowledge up to time $k$, so $e_i(k+j)$, $j > 0$, is unknown. The best estimate we can make of the zero-mean white noise signal for future time instants is (see also the previous section):
\[ \hat e_i(k+j|k) = 0, \quad \text{for } j > 0 \]
If we assume $\Delta u(\cdot)$ and $d_i(\cdot)$ to be known on the interval $k \ldots k+j$, we obtain the prediction of $y(k+j)$, based on knowledge until time $k$ and denoted as $\hat y(k+j|k)$:
\[ \hat y(k+j|k) = y_0(k+j|k-1) + \sum_{m=1}^{j} s_m \Delta u(k+j-m) + \sum_{m=0}^{j} t_m d_i(k+j-m) + e_i(k) \]
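The recursive expression for $y_0$ can be checked against the direct summation (8.19); a noise- and disturbance-free sketch, with example step response values chosen for illustration:

```python
import numpy as np

s = np.array([0.5, 0.8, 0.95, 1.0])  # step response parameters s_1..s_n (n = 4)

def s_ext(m):
    # truncated step response: s_m = s_n for m > n, and s_m = 0 for m <= 0
    if m <= 0:
        return 0.0
    return s[min(m, len(s)) - 1]

def y0(du, j, tau):
    # y0(k+j | k-tau) = sum_{m>=0} s_{j+tau+m} * du(k-tau-m), noise-free case
    k = len(du) - 1
    return sum(s_ext(j + tau + m) * du[k - tau - m] for m in range(k - tau + 1))

du = [1.0, -0.3, 0.7, 0.2, -0.1]   # Delta-u history up to the current time k
j = 2
lhs = y0(du, j, tau=0)
rhs = y0(du, j, tau=1) + s_ext(j) * du[-1]   # the recursion above
print(np.isclose(lhs, rhs))  # True
```

With the $t_j d_i(k)$ and $e_i(k)$ terms added the same identity holds for the full model.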
Prediction for truncated step response model:

So far we derived the following relations:
\begin{align*}
y_0(k+j|k) &= y_0(k+j|k-1) + s_j \Delta u(k) + t_j d_i(k) + e_i(k) \\
y(k) &= y_0(k|k-1) + t_0 d_i(k) + e_i(k) \\
\hat y(k+j|k) &= y_0(k+j|k-1) + \sum_{m=1}^{j} s_m \Delta u(k+j-m) + \sum_{m=0}^{j} t_m d_i(k+j-m) + e_i(k)
\end{align*}
In section 2.2.2 the truncated step response model was introduced. For a model with step-response length $n$ there holds:
\[ s_m = s_n \quad \text{for } m > n \]
Using this property we can derive from (8.19), for $j \ge 0$:
\begin{align*}
y_0(k+n+j|k-1) &= \sum_{m=1}^{\infty} s_{n+j+m}\Delta u(k-m) + \sum_{m=1}^{\infty} t_{n+j+m} d_i(k-m) + \sum_{m=1}^{\infty} e_i(k-m) \\
&= \sum_{m=1}^{\infty} s_n \Delta u(k-m) + \sum_{m=1}^{\infty} t_n d_i(k-m) + \sum_{m=1}^{\infty} e_i(k-m) \\
&= y_0(k+n-1|k-1)
\end{align*}
and so $y_0(k+n+j|k-1) = y_0(k+n-1|k-1)$ for $j \ge 0$. This means that for the computation of $\hat y(k+n+j|k)$ with $j \ge 0$, so when the prediction is beyond the length of the step response, we can use the equation:
\[ \hat y(k+n+j|k) = y_0(k+n-1|k-1) + \sum_{m=1}^{n+j} s_m \Delta u(k+n+j-m) + \sum_{m=0}^{n+j} t_m d_i(k+n+j-m) + e_i(k) \]

Matrix notation

From the above derivation we found that computation of $y_0(k+j|k-1)$ for $j = 0, \ldots, n-1$ is sufficient to make predictions for all $j \ge 0$. We obtain the equations:
\[ \begin{aligned}
y_0(k+j|k) &= y_0(k+j|k-1) + s_j \Delta u(k) + t_j d_i(k) + e_i(k) && \text{for } 0 \le j \le n-1 \\
y_0(k+j|k) &= y_0(k+n-1|k-1) + s_n \Delta u(k) + t_n d_i(k) + e_i(k) && \text{for } j \ge n \\
y(k|k) &= y_0(k|k-1) + t_0 d_i(k) + e_i(k)
\end{aligned} \tag{8.20} \]
In matrix form:
\[ \begin{bmatrix} y_0(k{+}1|k) \\ y_0(k{+}2|k) \\ \vdots \\ y_0(k{+}n{-}1|k) \\ y_0(k{+}n|k) \end{bmatrix}
= \begin{bmatrix} y_0(k{+}1|k{-}1) \\ y_0(k{+}2|k{-}1) \\ \vdots \\ y_0(k{+}n{-}1|k{-}1) \\ y_0(k{+}n{-}1|k{-}1) \end{bmatrix}
+ \begin{bmatrix} s_1 \Delta u(k) \\ s_2 \Delta u(k) \\ \vdots \\ s_{n-1}\Delta u(k) \\ s_n \Delta u(k) \end{bmatrix}
+ \begin{bmatrix} t_1 d_i(k) \\ t_2 d_i(k) \\ \vdots \\ t_{n-1} d_i(k) \\ t_n d_i(k) \end{bmatrix}
+ \begin{bmatrix} e_i(k) \\ e_i(k) \\ \vdots \\ e_i(k) \\ e_i(k) \end{bmatrix} \]
\[ y(k|k) = y_0(k|k-1) + t_0 d_i(k) + e_i(k) \]
Define the following signal vector for $j \ge 0$, $\tau \ge 0$:
\[ \tilde y_0(k+j|k-\tau) = \begin{bmatrix} y_0(k+j+1|k-\tau) \\ y_0(k+j+2|k-\tau) \\ \vdots \\ y_0(k+j+n-1|k-\tau) \\ y_0(k+j+n|k-\tau) \end{bmatrix} \]
and matrices
\[ M = \begin{bmatrix} 0 & I & 0 & \cdots & 0 \\ 0 & 0 & I & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & I \\ 0 & 0 & 0 & \cdots & I \end{bmatrix} \quad
S = \begin{bmatrix} s_1 \\ s_2 \\ \vdots \\ s_{n-1} \\ s_n \end{bmatrix} \quad
T = \begin{bmatrix} t_1 \\ t_2 \\ \vdots \\ t_{n-1} \\ t_n \end{bmatrix} \quad
C = \begin{bmatrix} I & 0 & \cdots & 0 \end{bmatrix} \]
Then equations (8.20) can be written as:
\[ \tilde y_0(k+1|k) = M\, \tilde y_0(k|k-1) + S\, \Delta u(k) + T\, d_i(k) + \mathbb{I}_n\, e_i(k) \tag{8.21} \]
\[ y(k) = C\, \tilde y_0(k|k-1) + t_0\, d_i(k) + e_i(k) \tag{8.22} \]
(here $\mathbb{I}_n$ denotes a column of $n$ stacked identity matrices, matching the column of $e_i(k)$ terms above).
To make predictions $\hat y(k+j|k)$, we can extend (8.22) over the prediction horizon $N$. In matrix form:
\[ \begin{bmatrix} \hat y(k{+}1|k) \\ \hat y(k{+}2|k) \\ \vdots \\ \hat y(k{+}n{-}1|k) \\ \hat y(k{+}n|k) \\ \vdots \\ \hat y(k{+}N|k) \end{bmatrix}
= \begin{bmatrix} y_0(k{+}1|k{-}1) \\ y_0(k{+}2|k{-}1) \\ \vdots \\ y_0(k{+}n{-}1|k{-}1) \\ y_0(k{+}n{-}1|k{-}1) \\ \vdots \\ y_0(k{+}n{-}1|k{-}1) \end{bmatrix}
+ \begin{bmatrix} s_1\Delta u(k) \\ s_2\Delta u(k) + s_1\Delta u(k{+}1) \\ \vdots \\ s_{n-1}\Delta u(k) + \cdots + s_1\Delta u(k{+}n{-}2) \\ s_n\Delta u(k) + \cdots + s_1\Delta u(k{+}n{-}1) \\ \vdots \\ s_N\Delta u(k) + \cdots + s_1\Delta u(k{+}N{-}1) \end{bmatrix}
+ \begin{bmatrix} t_1 d_i(k) \\ t_2 d_i(k) + t_1 d_i(k{+}1) \\ \vdots \\ t_{n-1}d_i(k) + \cdots + t_1 d_i(k{+}n{-}2) \\ t_n d_i(k) + \cdots + t_1 d_i(k{+}n{-}1) \\ \vdots \\ t_N d_i(k) + \cdots + t_1 d_i(k{+}N{-}1) \end{bmatrix}
+ \begin{bmatrix} e_i(k) \\ e_i(k) \\ \vdots \\ e_i(k) \end{bmatrix} \]
Define the following signal vectors:
\[ \tilde z(k) = \begin{bmatrix} \hat y(k{+}1|k) \\ \hat y(k{+}2|k) \\ \vdots \\ \hat y(k{+}N{-}1|k) \\ \hat y(k{+}N|k) \end{bmatrix} \quad
\tilde v(k) = \begin{bmatrix} \Delta u(k) \\ \Delta u(k{+}1) \\ \vdots \\ \Delta u(k{+}N{-}2) \\ \Delta u(k{+}N{-}1) \end{bmatrix} \quad
\tilde w(k) = \begin{bmatrix} d_i(k) \\ d_i(k{+}1) \\ \vdots \\ d_i(k{+}N{-}2) \\ d_i(k{+}N{-}1) \end{bmatrix} \]
and matrices
\[ \tilde M = \begin{bmatrix} 0 & I & 0 & \cdots & 0 \\ 0 & 0 & I & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & I \\ 0 & 0 & 0 & \cdots & I \\ \vdots & & & & \vdots \\ 0 & 0 & 0 & \cdots & I \end{bmatrix} \quad
\tilde S = \begin{bmatrix} s_1 & 0 & \cdots & 0 \\ s_2 & s_1 & & \vdots \\ \vdots & & \ddots & 0 \\ s_N & s_{N-1} & \cdots & s_1 \end{bmatrix} \quad
\tilde T = \begin{bmatrix} t_1 & 0 & \cdots & 0 \\ t_2 & t_1 & & \vdots \\ \vdots & & \ddots & 0 \\ t_N & t_{N-1} & \cdots & t_1 \end{bmatrix} \]
(Note that $s_m = s_n$ and $t_m = t_n$ for $m > n$.)
Now the prediction model can be written as:
\[ \tilde z(k) = \tilde M\, \tilde y_0(k) + \tilde S\, \tilde v(k) + \tilde T\, \tilde w(k) + \mathbb{I}_N\, e(k) \tag{8.23} \]
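The lower-triangular Toeplitz structure of $\tilde S$ can be cross-checked against an impulse-response computation; a sketch for the noise- and disturbance-free case with zero initial conditions (example step response values chosen for illustration):

```python
import numpy as np

s = np.array([0.5, 0.8, 0.95, 1.0, 1.0])  # step response, settled: s_m = s_n for m >= 4
N = 5
S = np.zeros((N, N))
for j in range(N):
    for i in range(j + 1):
        S[j, i] = s[j - i]                 # row j holds s_{j+1}, s_j, ..., s_1

du = np.array([1.0, -0.2, 0.4, 0.0, 0.3])  # Delta-u(k) ... Delta-u(k+N-1)
y_pred = S @ du                             # predicted y(k+1) ... y(k+N)

# cross-check with the impulse-response form y(t) = sum_m g_m u(t-m)
g = np.diff(np.concatenate(([0.0], s)))    # g_m = s_m - s_{m-1}
u = np.cumsum(du)                           # input reconstructed from the increments
y_conv = np.convolve(g, u)[:N]
print(np.allclose(y_pred, y_conv))  # True
```

The same Toeplitz construction yields $\tilde T$ from the $t_m$ parameters.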
Relation to state space model

The model derived in equations (8.21), (8.22) and (8.23) is similar to the state space model derived in the previous section. By choosing
\[ x(k) = \tilde y_0(k) \quad z(k) = y(k) \quad e(k) = e_i(k) \quad w(k) = d_i(k) \quad v(k) = \Delta u(k) \]
\[ A = M \qquad B_1 = \mathbb{I}_n \qquad B_2 = T \qquad B_3 = S \]
\[ C_1 = C \qquad D_{11} = I \qquad D_{12} = t_0 \]
\[ \tilde C_2 = \tilde M \qquad \tilde D_{21} = \mathbb{I}_N \qquad \tilde D_{22} = \tilde T \qquad \tilde D_{23} = \tilde S \]
we obtain the following prediction model:
\begin{align*}
x(k+1) &= A x(k) + B_1 e(k) + B_2 w(k) + B_3 v(k) \\
y(k) &= C_1 x(k) + D_{11} e(k) + D_{12} w(k) \\
\tilde z(k) &= \tilde C_2 x(k) + \tilde D_{21} e(k) + \tilde D_{22} w(k) + \tilde D_{23} v(k)
\end{align*}
D.2 Polynomial models

D.2.1. Model description

Consider the SISO IO polynomial model:
\[ G_o(q) = \frac{b_o(q)}{a_o(q)} \qquad F_o(q) = \frac{f_o(q)}{a_o(q)} \qquad H_o(q) = \frac{c_o(q)}{a_o(q)} \]
where $a_o(q)$, $b_o(q)$, $c_o(q)$ and $f_o(q)$ are polynomials in the operator $q^{-1}$:
\[ \begin{aligned}
a_o(q) &= 1 + a_{o,1} q^{-1} + \ldots + a_{o,n_a} q^{-n_a} \\
b_o(q) &= b_{o,1} q^{-1} + \ldots + b_{o,n_b} q^{-n_b} \\
c_o(q) &= 1 + c_{o,1} q^{-1} + \ldots + c_{o,n_c} q^{-n_c} \\
f_o(q) &= f_{o,0} + f_{o,1} q^{-1} + \ldots + f_{o,n_f} q^{-n_f}
\end{aligned} \tag{8.24} \]
The difference equation corresponding to the IO-model is given by the controlled autoregressive moving average (CARMA) model:
\[ a_o(q)\, y(k) = b_o(q)\, u(k) + f_o(q)\, d_o(k) + c_o(q)\, e_o(k) \]
in which $e_o(k)$ is chosen as ZMWN.

Now consider the SISO IIO polynomial model (compare eq. 2.13):
\[ G_i(q) = \frac{b_i(q)}{a_i(q)} \qquad F_i(q) = \frac{f_i(q)}{a_i(q)} \qquad H_i(q) = \frac{c_i(q)}{a_i(q)} \]
where $a_i(q) = a_o(q)\Delta(q) = a_o(q)(1-q^{-1})$, $b_i(q) = b_o(q)$, $c_i(q) = c_o(q)$, and $f_i(q) = f_o(q)$, so
\[ \begin{aligned}
a_i(q) &= 1 + a_{i,1}q^{-1} + \ldots + a_{i,n+1}q^{-n-1} = (1-q^{-1})(1 + a_{o,1}q^{-1} + \ldots + a_{o,n}q^{-n}) \\
b_i(q) &= b_{i,1}q^{-1} + \ldots + b_{i,n}q^{-n} = b_{o,1}q^{-1} + \ldots + b_{o,n}q^{-n} \\
c_i(q) &= 1 + c_{i,1}q^{-1} + \ldots + c_{i,n}q^{-n} = 1 + c_{o,1}q^{-1} + \ldots + c_{o,n}q^{-n} \\
f_i(q) &= f_{i,0} + f_{i,1}q^{-1} + \ldots + f_{i,n}q^{-n} = f_{o,0} + f_{o,1}q^{-1} + \ldots + f_{o,n}q^{-n}
\end{aligned} \tag{8.25} \]
The difference equation corresponding to the IIO-model is given by the controlled autoregressive integrated moving average (CARIMA) model:
\[ a_i(q)\, y(k) = b_i(q)\, \Delta u(k) + f_i(q)\, d_i(k) + c_i(q)\, e_i(k) \]
in which $d_i(k) = \Delta(q)\, d_o(k)$ and $e_i(k)$ is chosen as ZMWN.
From IO polynomial model to state space model

(Kailath [33]) Let
\[ G_o(q) = \frac{b_o(q)}{a_o(q)} \qquad F_o(q) = \frac{f_o(q)}{a_o(q)} \qquad H_o(q) = \frac{c_o(q)}{a_o(q)} \]
where $a_o(q)$, $b_o(q)$, $f_o(q)$ and $c_o(q)$ are given in (8.24). Then the state space matrices are given by:
\[ A_o = \begin{bmatrix} -a_{o,1} & 1 & 0 & \cdots & 0 \\ -a_{o,2} & 0 & 1 & & 0 \\ \vdots & \vdots & & \ddots & \\ -a_{o,n-1} & 0 & 0 & & 1 \\ -a_{o,n} & 0 & 0 & \cdots & 0 \end{bmatrix} \qquad
B_o = \begin{bmatrix} b_{o,1} \\ b_{o,2} \\ \vdots \\ b_{o,n-1} \\ b_{o,n} \end{bmatrix} \tag{8.26} \]
\[ C_o = \begin{bmatrix} 1 & 0 & 0 & \cdots & 0 \end{bmatrix} \qquad D_H = I \qquad D_F = f_{o,0} \tag{8.27} \]
\[ L_o = \begin{bmatrix} f_{o,1} - f_{o,0}\, a_{o,1} \\ f_{o,2} - f_{o,0}\, a_{o,2} \\ \vdots \\ f_{o,n-1} - f_{o,0}\, a_{o,n-1} \\ f_{o,n} - f_{o,0}\, a_{o,n} \end{bmatrix} \qquad
K_o = \begin{bmatrix} c_{o,1} - a_{o,1} \\ c_{o,2} - a_{o,2} \\ \vdots \\ c_{o,n-1} - a_{o,n-1} \\ c_{o,n} - a_{o,n} \end{bmatrix} \tag{8.28} \]
[Figure 8.8: State space representation of IO polynomial model]
A state space realization of a polynomial model is not unique. Any (linear nonsingular)
state transformation gives a valid alternative. The above realization is called the observer
canonical form (Kailath [33]).
We can also translate the IO polynomial model into an IIO state space model using the transformation formulas. We obtain the state space matrices:
\[ A_i = \begin{bmatrix} 1 & 1 & 0 & 0 & \cdots & 0 \\ 0 & -a_{o,1} & 1 & 0 & \cdots & 0 \\ 0 & -a_{o,2} & 0 & 1 & & 0 \\ \vdots & \vdots & & & \ddots & \\ 0 & -a_{o,n-1} & 0 & 0 & & 1 \\ 0 & -a_{o,n} & 0 & 0 & \cdots & 0 \end{bmatrix} \qquad
B_i = \begin{bmatrix} 0 \\ b_{o,1} \\ b_{o,2} \\ \vdots \\ b_{o,n-1} \\ b_{o,n} \end{bmatrix} \tag{8.29} \]
\[ C_i = \begin{bmatrix} 1 & 1 & 0 & 0 & \cdots & 0 \end{bmatrix} \qquad D_H = I \qquad D_F = f_{o,0} \tag{8.30} \]
\[ L_i = \begin{bmatrix} 0 \\ f_{o,1} - f_{o,0}\, a_{o,1} \\ f_{o,2} - f_{o,0}\, a_{o,2} \\ \vdots \\ f_{o,n-1} - f_{o,0}\, a_{o,n-1} \\ f_{o,n} - f_{o,0}\, a_{o,n} \end{bmatrix} \qquad
K_i = \begin{bmatrix} 1 \\ c_{o,1} - a_{o,1} \\ c_{o,2} - a_{o,2} \\ \vdots \\ c_{o,n-1} - a_{o,n-1} \\ c_{o,n} - a_{o,n} \end{bmatrix} \tag{8.31} \]
Example 52: IO state space representation of IO polynomial model

In this example we show how to find the state space realization of an IO polynomial model. Consider the second order system
\[ y(k) = G_o(q)\, u(k) + F_o(q)\, d_o(k) + H_o(q)\, e_o(k) \]
where $y(k)$ is the process output, $u(k)$ is the process input, $d_o(k)$ is a known disturbance signal and $e_o(k)$ is assumed to be ZMWN, and
\[ G_o(q) = \frac{q^{-1}}{(1-0.5q^{-1})(1-0.2q^{-1})} \qquad F_o(q) = \frac{0.1\, q^{-1}}{1-0.2q^{-1}} \qquad H_o(q) = \frac{1+0.5q^{-1}}{1-0.5q^{-1}} \]
To use the derived formulas, we have to give $G_o(q)$, $H_o(q)$ and $F_o(q)$ the common denominator $(1-0.5q^{-1})(1-0.2q^{-1}) = (1 - 0.7q^{-1} + 0.1q^{-2})$:
\[ F_o(q) = \frac{0.1q^{-1}(1-0.5q^{-1})}{(1-0.2q^{-1})(1-0.5q^{-1})} = \frac{0.1q^{-1} - 0.05q^{-2}}{1 - 0.7q^{-1} + 0.1q^{-2}} \]
\[ H_o(q) = \frac{(1+0.5q^{-1})(1-0.2q^{-1})}{(1-0.5q^{-1})(1-0.2q^{-1})} = \frac{1 + 0.3q^{-1} - 0.1q^{-2}}{1 - 0.7q^{-1} + 0.1q^{-2}} \]
A state-space representation for this example is now given by (8.26)-(8.28):
\[ A_o = \begin{bmatrix} 0.7 & 1 \\ -0.1 & 0 \end{bmatrix} \quad B_o = \begin{bmatrix} 1 \\ 0 \end{bmatrix} \quad C_o = \begin{bmatrix} 1 & 0 \end{bmatrix} \quad L_o = \begin{bmatrix} 0.1 \\ -0.05 \end{bmatrix} \quad K_o = \begin{bmatrix} 1 \\ -0.2 \end{bmatrix} \quad D_H = 1 \quad D_F = 0 \]
and so
\begin{align*}
x_{o1}(k+1) &= 0.7\, x_{o1}(k) + x_{o2}(k) + e_o(k) + 0.1\, d_o(k) + u(k) \\
x_{o2}(k+1) &= -0.1\, x_{o1}(k) - 0.2\, e_o(k) - 0.05\, d_o(k) \\
y(k) &= x_{o1}(k) + e_o(k)
\end{align*}
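The example-52 realization can be checked numerically: the Markov parameters $C_o A_o^{m-1} B_o$ of the $u \to y$ channel must match the series expansion of $G_o(q) = q^{-1}/(1 - 0.7q^{-1} + 0.1q^{-2})$, whose coefficients satisfy $g_1 = 1$, $g_m = 0.7\,g_{m-1} - 0.1\,g_{m-2}$. A sketch:

```python
import numpy as np

# Example 52 matrices (u -> y channel only, D = 0)
A = np.array([[0.7, 1.0], [-0.1, 0.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[1.0, 0.0]])

# Markov parameters g_m = C A^{m-1} B of the realization
g_ss, x = [], B.copy()
for _ in range(8):
    g_ss.append((C @ x).item())
    x = A @ x

# series expansion of G_o(q): g_1 = 1, g_2 = 0.7, g_m = 0.7 g_{m-1} - 0.1 g_{m-2}
g_tf = [1.0, 0.7]
for _ in range(6):
    g_tf.append(0.7 * g_tf[-1] - 0.1 * g_tf[-2])
print(np.allclose(g_ss, g_tf))  # True
```

The $d_o$ and $e_o$ channels can be checked the same way using $L_o$, $K_o$ and the expansions of $F_o(q)$ and $H_o(q)$.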
Example 53: IIO state space representation of IO polynomial model

In this example we show how to find the IIO state space realization of the IO polynomial system of example 52 using (8.29)-(8.31):
\[ A_i = \begin{bmatrix} 1 & 1 & 0 \\ 0 & 0.7 & 1 \\ 0 & -0.1 & 0 \end{bmatrix} \quad B_i = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} \quad C_i = \begin{bmatrix} 1 & 1 & 0 \end{bmatrix} \quad L_i = \begin{bmatrix} 0 \\ 0.1 \\ -0.05 \end{bmatrix} \quad K_i = \begin{bmatrix} 1 \\ 1 \\ -0.2 \end{bmatrix} \quad D_H = 1 \quad D_F = 0 \]
and so
\begin{align*}
x_{i1}(k+1) &= x_{i1}(k) + x_{i2}(k) + e_i(k) \\
x_{i2}(k+1) &= 0.7\, x_{i2}(k) + x_{i3}(k) + e_i(k) + 0.1\, d_i(k) + \Delta u(k) \\
x_{i3}(k+1) &= -0.1\, x_{i2}(k) - 0.2\, e_i(k) - 0.05\, d_i(k) \\
y(k) &= x_{i1}(k) + x_{i2}(k) + e_i(k)
\end{align*}
From IIO polynomial model to state space model

(Kailath [33]) Let
\[ G_i(q) = \frac{b_i(q)}{a_i(q)} \qquad F_i(q) = \frac{f_i(q)}{a_i(q)} \qquad H_i(q) = \frac{c_i(q)}{a_i(q)} \]
where $a_i(q)$, $b_i(q)$ and $c_i(q)$ are given in (8.25). Then the state space matrices are given by:
\[ A_i = \begin{bmatrix} -a_{i,1} & 1 & 0 & \cdots & 0 \\ -a_{i,2} & 0 & 1 & & 0 \\ \vdots & & & \ddots & \\ -a_{i,n} & 0 & 0 & & 1 \\ -a_{i,n+1} & 0 & 0 & \cdots & 0 \end{bmatrix} \qquad
B_i = \begin{bmatrix} b_{i,1} \\ b_{i,2} \\ \vdots \\ b_{i,n} \\ 0 \end{bmatrix} \tag{8.32} \]
\[ C_i = \begin{bmatrix} 1 & 0 & 0 & \cdots & 0 \end{bmatrix} \tag{8.33} \]
\[ L_i = \begin{bmatrix} f_{i,1} \\ f_{i,2} \\ \vdots \\ f_{i,n} \\ 0 \end{bmatrix} \qquad
K_i = \begin{bmatrix} c_{i,1} - a_{i,1} \\ c_{i,2} - a_{i,2} \\ \vdots \\ c_{i,n} - a_{i,n} \\ -a_{i,n+1} \end{bmatrix} \tag{8.34} \]
Example 54: IIO state space representation of IIO polynomial model

In this example we show how to find the state space realization of the system of example 52, rewritten as an IIO polynomial model. We consider the system:
\[ y(k) = G_i(q)\, \Delta u(k) + F_i(q)\, d_i(k) + H_i(q)\, e_i(k) \]
where $y(k)$ is the process output, $\Delta u(k)$ is the process increment input, $d_i(k)$ is the increment disturbance signal and $e(k)$ is assumed to be integrated ZMWN, and
\[ G_i(q) = \frac{q^{-1}}{(1-q^{-1})(1-0.5q^{-1})(1-0.2q^{-1})} = \frac{q^{-1}}{1 - 1.7q^{-1} + 0.8q^{-2} - 0.1q^{-3}} \]
\[ F_i(q) = \frac{0.1q^{-1}(1-0.5q^{-1})}{(1-q^{-1})(1-0.5q^{-1})(1-0.2q^{-1})} = \frac{0.1q^{-1} - 0.05q^{-2}}{1 - 1.7q^{-1} + 0.8q^{-2} - 0.1q^{-3}} \]
\[ H_i(q) = \frac{(1+0.5q^{-1})(1-0.2q^{-1})}{(1-q^{-1})(1-0.5q^{-1})(1-0.2q^{-1})} = \frac{1 + 0.3q^{-1} - 0.1q^{-2}}{1 - 1.7q^{-1} + 0.8q^{-2} - 0.1q^{-3}} \]
A state-space representation for this example is now given by (8.32)-(8.34):
\[ A_i = \begin{bmatrix} 1.7 & 1 & 0 \\ -0.8 & 0 & 1 \\ 0.1 & 0 & 0 \end{bmatrix} \quad B_i = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} \quad C_i = \begin{bmatrix} 1 & 0 & 0 \end{bmatrix} \quad L_i = \begin{bmatrix} 0.1 \\ -0.05 \\ 0 \end{bmatrix} \quad K_i = \begin{bmatrix} 2.0 \\ -0.9 \\ 0.1 \end{bmatrix} \quad D_H = 1 \quad D_F = 0 \]
and so
\begin{align*}
x_{i1}(k+1) &= 1.7\, x_{i1}(k) + x_{i2}(k) + 2.0\, e_i(k) + 0.1\, d_i(k) + \Delta u(k) \\
x_{i2}(k+1) &= -0.8\, x_{i1}(k) + x_{i3}(k) - 0.9\, e_i(k) - 0.05\, d_i(k) \\
x_{i3}(k+1) &= 0.1\, x_{i1}(k) + 0.1\, e_i(k) \\
y(k) &= x_{i1}(k) + e_i(k)
\end{align*}
Clearly, the IIO state space representation in example 54 is different from the IIO state space representation in example 53. By denoting the state space matrices of example 53 with a hat (so $\hat A_i$, $\hat B_i$, $\hat C_i$, $\hat L_i$, $\hat K_i$, $\hat D_H$, $\hat D_F$), the relation between the two state space representations is given by:
\[ \hat A_i = T^{-1} A_i T \qquad \hat C_i = C_i T \qquad \hat D_H = D_H \qquad \hat D_F = D_F \]
\[ \hat B_i = T^{-1} B_i \qquad \hat L_i = T^{-1} L_i \qquad \hat K_i = T^{-1} K_i \]
where the state transformation matrix $T$ is given by:
\[ T = \begin{bmatrix} 1 & 1 & 0 \\ -0.7 & 0 & 1 \\ 0.1 & 0 & 0 \end{bmatrix} \]
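The claimed similarity relation can be verified numerically with the matrices of examples 53 and 54:

```python
import numpy as np

# example 53 realization (hat) and example 54 realization
A53 = np.array([[1.0, 1.0, 0.0], [0.0, 0.7, 1.0], [0.0, -0.1, 0.0]])
B53 = np.array([[0.0], [1.0], [0.0]])
C53 = np.array([[1.0, 1.0, 0.0]])
A54 = np.array([[1.7, 1.0, 0.0], [-0.8, 0.0, 1.0], [0.1, 0.0, 0.0]])
B54 = np.array([[1.0], [0.0], [0.0]])
C54 = np.array([[1.0, 0.0, 0.0]])

T = np.array([[1.0, 1.0, 0.0], [-0.7, 0.0, 1.0], [0.1, 0.0, 0.0]])
Tinv = np.linalg.inv(T)
print(np.allclose(Tinv @ A54 @ T, A53))  # True
print(np.allclose(Tinv @ B54, B53))      # True
print(np.allclose(C54 @ T, C53))         # True
```

The same $T$ also maps $K_i$ and $L_i$ of example 54 onto those of example 53.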
Prediction using polynomial models

Diophantine equation and making predictions

Consider the polynomial model
\[ u_1(k) = \frac{c(q)}{a(q)}\, u_2(k) \]
where
\[ a(q) = 1 + a_1 q^{-1} + \ldots + a_{n_a} q^{-n_a} \qquad c(q) = c_0 + c_1 q^{-1} + \ldots + c_{n_c} q^{-n_c} \]
Each set of polynomials $a(q)$, $c(q)$ together with a positive integer $j$ satisfies a set of Diophantine equations
\[ c(q) = M_j(q)\, a(q) + q^{-j} L_j(q) \tag{8.35} \]
where
\[ M_j(q) = m_{j,0} + m_{j,1} q^{-1} + \ldots + m_{j,j-1} q^{-j+1} \qquad L_j(q) = \ell_{j,0} + \ell_{j,1} q^{-1} + \ldots + \ell_{j,n_a} q^{-n_a} \]
Remark 55: Diophantine equations are standard in the theory of prediction with polynomial models. An algorithm for solving these equations is given in appendix B of Soeterboek [61].

When making a prediction of $u_1(k+j)$ for $j > 0$ we are interested in how $u_1(k+j)$ depends on future values $u_2(k+i)$, $0 < i \le j$, and present and past values $u_2(k-i)$, $i \ge 0$. Using the Diophantine equation (8.35) we derive (Clarke et al. [14][15], Bitmead et al. [6]):
\begin{align*}
u_1(k+j) &= \frac{c(q)}{a(q)}\, u_2(k+j) \\
&= M_j(q)\, u_2(k+j) + q^{-j}\frac{L_j(q)}{a(q)}\, u_2(k+j) \\
&= M_j(q)\, u_2(k+j) + \frac{L_j(q)}{a(q)}\, u_2(k)
\end{align*}
This means that $u_1(k+j)$ can be written as
\[ u_1(k+j) = u_{1,\mathrm{future}}(k+j) + u_{1,\mathrm{past}}(k+j) \]
where the term
\[ u_{1,\mathrm{future}}(k+j) = M_j(q)\, u_2(k+j) = m_{j,0}\, u_2(k+j) + \ldots + m_{j,j-1}\, u_2(k+1) \]
only consists of future values $u_2(k+i)$, $1 \le i \le j$, and the second term
\[ u_{1,\mathrm{past}}(k+j) = \frac{L_j(q)}{a(q)}\, u_2(k) \]
only of present and past values $u_2(k-i)$, $i \ge 0$.
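The Diophantine equation (8.35) can be solved by polynomial long division (the algorithm in appendix B of Soeterboek [61] is one option); a minimal sketch, illustrated with the $a$- and $c$-polynomials of example 52:

```python
import numpy as np

def diophantine(a, c, j):
    # split c(q)/a(q) as M_j(q) + q^{-j} L_j(q)/a(q):
    # M_j holds the first j quotient terms of the long division, L_j the shifted remainder
    a = np.asarray(a, float)
    c = np.asarray(c, float)
    M = np.zeros(j)
    r = np.concatenate((c, np.zeros(max(0, j + len(a) - len(c)))))
    for i in range(j):
        M[i] = r[i] / a[0]
        r[i:i + len(a)] -= M[i] * a
    return M, r[j:]

a = [1.0, -0.7, 0.1]   # a(q) = 1 - 0.7 q^-1 + 0.1 q^-2
c = [1.0, 0.3, -0.1]   # c(q) = 1 + 0.3 q^-1 - 0.1 q^-2
j = 3
M, L = diophantine(a, c, j)

# verify c(q) = M_j(q) a(q) + q^{-j} L_j(q), coefficient by coefficient
lhs = np.concatenate((c, np.zeros(j + len(a) - len(c))))
rhs = np.convolve(M, a)
rhs = np.concatenate((rhs, np.zeros(len(lhs) - len(rhs))))
rhs[j:j + len(L)] += L
print(np.allclose(lhs, rhs))  # True
```

The quotient coefficients $m_{j,i}$ are exactly the leading impulse-response terms of $c(q)/a(q)$, which is what makes $M_j$ the "future" part of the prediction.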
Prediction using polynomial models

Let model (3.1) be given in polynomial form as:
\[ a(q)\, y(k) = c(q)\, e(k) + f_1(q)\, w_1(k) + f_2(q)\, w_2(k) + b(q)\, v(k) \tag{8.36} \]
\[ n(q)\, z(k) = h(q)\, e(k) + m_1(q)\, w_1(k) + m_2(q)\, w_2(k) + g(q)\, v(k) \tag{8.37} \]
where $w_1$ and $w_2$ usually correspond to the reference signal $r(k)$ and the known disturbance signal $d(k)$, respectively. In the sequel of this section we will use
\[ f(q)\, w(k) = \begin{bmatrix} f_1(q) & f_2(q) \end{bmatrix} \begin{bmatrix} w_1(k) \\ w_2(k) \end{bmatrix} \tag{8.38} \]
\[ m(q)\, w(k) = \begin{bmatrix} m_1(q) & m_2(q) \end{bmatrix} \begin{bmatrix} w_1(k) \\ w_2(k) \end{bmatrix} \tag{8.39} \]
as an easier notation.

To make predictions of future output signals of CARIMA models one first needs to solve the following Diophantine equation
\[ h(q) = E_j(q)\, n(q) + q^{-j} F_j(q) \tag{8.40} \]
Using (8.37) and (8.40), we derive
\begin{align*}
z(k+j) &= \frac{h(q)}{n(q)}\, e(k+j) + \frac{m(q)}{n(q)}\, w(k+j) + \frac{g(q)}{n(q)}\, v(k+j) \\
&= E_j(q)\, e(k+j) + \frac{q^{-j} F_j(q)}{n(q)}\, e(k+j) + \frac{m(q)}{n(q)}\, w(k+j) + \frac{g(q)}{n(q)}\, v(k+j) \\
&= E_j(q)\, e(k+j) + \frac{F_j(q)}{n(q)}\, e(k) + \frac{m(q)}{n(q)}\, w(k+j) + \frac{g(q)}{n(q)}\, v(k+j) \tag{8.41}
\end{align*}
From (8.36) we derive
\[ e(k) = \frac{a(q)}{c(q)}\, y(k) - \frac{f(q)}{c(q)}\, w(k) - \frac{b(q)}{c(q)}\, v(k) \]
Substitution in (8.41) yields:
\begin{align*}
z(k+j) &= E_j(q)\, e(k+j) + \frac{F_j(q)}{n(q)}\, e(k) + \frac{m(q)}{n(q)}\, w(k+j) + \frac{g(q)}{n(q)}\, v(k+j) \\
&= E_j(q)\, e(k+j) + \frac{F_j(q)a(q)}{n(q)c(q)}\, y(k) - \frac{F_j(q)f(q)}{n(q)c(q)}\, w(k) - \frac{F_j(q)b(q)}{n(q)c(q)}\, v(k) + \frac{m(q)}{n(q)}\, w(k+j) + \frac{g(q)}{n(q)}\, v(k+j) \\
&= E_j(q)\, e(k+j) + \frac{F_j(q)a(q)}{n(q)c(q)}\, y(k) + \left( \frac{m(q)}{n(q)} - \frac{q^{-j}F_j(q)f(q)}{n(q)c(q)} \right) w(k+j) + \left( \frac{g(q)}{n(q)} - \frac{q^{-j}F_j(q)b(q)}{n(q)c(q)} \right) v(k+j) \\
&= E_j(q)\, e(k+j) + \frac{F_j(q)a(q)}{n(q)c(q)}\, y(k) + \frac{\bar m(q)}{n(q)c(q)}\, w(k+j) + \frac{\bar g(q)}{n(q)c(q)}\, v(k+j)
\end{align*}
where
\[ \bar m(q) = m(q)c(q) - q^{-j}F_j(q)f(q) \qquad \bar g(q) = g(q)c(q) - q^{-j}F_j(q)b(q) \]
Finally solve the Diophantine equation:
\[ q\, \bar g(q) = \Lambda_j(q)\, n(q)\, c(q) + q^{-j}\, \Gamma_j(q) \tag{8.42} \]
Then:
\begin{align*}
z(k+j) &= E_j(q)\, e(k+j) + \frac{F_j(q)a(q)}{n(q)c(q)}\, y(k) + \frac{\bar m(q)}{n(q)c(q)}\, w(k+j) + \frac{\bar g(q)}{n(q)c(q)}\, v(k+j) \\
&= E_j(q)\, e(k+j) + \frac{F_j(q)a(q)}{n(q)c(q)}\, y(k) + \frac{\bar m(q)}{n(q)c(q)}\, w(k+j) + q^{-1}\frac{\Gamma_j(q)}{n(q)c(q)}\, v(k) + q^{-1}\Lambda_j(q)\, v(k+j) \\
&= E_j(q)\, e(k+j) + \frac{F_j(q)a(q)}{n(q)c(q)}\, y(k) + \frac{\bar m(q)}{n(q)c(q)}\, w(k+j) + \frac{\Gamma_j(q)}{n(q)c(q)}\, v(k-1) + \Lambda_j(q)\, v(k+j-1)
\end{align*}
Using the fact that the prediction $\hat e(k+j|k) = 0$ for $j > 0$, the first term vanishes and the optimal prediction $\hat z(k+j)$ for $j > 0$ is given by:
\begin{align*}
\hat z(k+j) &= \frac{F_j(q)a(q)}{n(q)c(q)}\, y(k) + \frac{\bar m(q)}{n(q)c(q)}\, w(k+j) + \frac{\Gamma_j(q)}{n(q)c(q)}\, v(k-1) + \Lambda_j(q)\, v(k+j-1) \\
&= F_j(q)a(q)\, y_f(k) + \bar m(q)\, w_f(k+j) + \Gamma_j(q)\, v_f(k-1) + \Lambda_j(q)\, v(k+j-1)
\end{align*}
where $y_f$, $w_f$ and $v_f$ are the filtered signals
\[ y_f(k) = \frac{1}{n(q)c(q)}\, y(k) \qquad w_f(k) = \frac{1}{n(q)c(q)}\, w(k) \qquad v_f(k) = \frac{1}{n(q)c(q)}\, v(k) \]
The free-run response is now given by $F_j(q)a(q)\, y_f(k) + \bar m(q)\, w_f(k+j) + \Gamma_j(q)\, v_f(k-1)$, which is the output response if all future input signals are taken zero. The last term $\Lambda_j(q)\, v(k+j-1)$ accounts for the influence of the future input signals $v(k), \ldots, v(k+j-1)$.

The prediction of the performance signal can now be given as:
\[ \tilde z(k) = \begin{bmatrix}
F_0(q)a(q)\, y_f(k) + \bar m_0(q)\, w_f(k) + \Gamma_0(q)\, v_f(k-1) + \Lambda_0(q)\, v(k) \\
F_1(q)a(q)\, y_f(k) + \bar m_1(q)\, w_f(k+1) + \Gamma_1(q)\, v_f(k-1) + \Lambda_1(q)\, v(k+1) \\
\vdots \\
F_{N-1}(q)a(q)\, y_f(k) + \bar m_{N-1}(q)\, w_f(k+N-1) + \Gamma_{N-1}(q)\, v_f(k-1) + \Lambda_{N-1}(q)\, v(k+N-1)
\end{bmatrix} = \tilde z_0(k) + \tilde D_{vz}\, \tilde v(k) \]
The free response vector $\tilde z_0$ and the predictor matrix $\tilde D_{vz}$ will be equal to $\tilde z_0$ and $\tilde D_3$, respectively, in the case of a state-space model.
Relation to state-space model:

Consider the above system $a(q)\, u_1(k+j) = c(q)\, u_2(k+j)$ in state-space representation, where $A$, $B$, $C$, $D$ are given by the controller canonical form:
\[ A = \begin{bmatrix} -a_1 & -a_2 & \cdots & -a_{n-1} & -a_n \\ 1 & 0 & \cdots & 0 & 0 \\ 0 & 1 & & 0 & 0 \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & 1 & 0 \end{bmatrix} \qquad
B = \begin{bmatrix} 1 \\ 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix} \]
\[ C = \begin{bmatrix} c_1 - c_0 a_1 & c_2 - c_0 a_2 & \cdots & c_n - c_0 a_n \end{bmatrix} \qquad D = c_0 \]
Then based on the previous sections we know that the prediction of $u_1(k+j)$ is given by:
\[ \hat u_1(k+j) = C A^j x(k) + \sum_{i=1}^{j} C A^{i-1} B\, u_2(k+j-i) + D\, u_2(k+j) \]
For this description we will find that:
\[ C A^j = \begin{bmatrix} \ell_{j,0} & \ell_{j,1} & \cdots & \ell_{j,n_a} \end{bmatrix} \qquad C A^{i-1} B = m_{j,i} \ \ \text{for } 0 < i < j \qquad D = m_{j,0} \]
and so, where
\[ C A^j x(k) = \frac{L_j(q)}{a(q)}\, u_2(k) \]
reflects the past of the system, the equation
\[ D\, u_2(k+j) + \sum_{i=1}^{j-1} C A^{i-1} B\, u_2(k+j-i) = m_{j,0}\, u_2(k+j) + \ldots + m_{j,j-1}\, u_2(k+1) = M_j(q)\, u_2(k+j) \]
gives the response to the future inputs $u_2(k+i)$, $i > 0$.
Appendix E: Performance indices

In this appendix we show how the LQPC performance index, the GPC performance index and the zone performance index can be translated into a standard predictive control problem.

Proof of theorem 2:
First choose the performance signal
\[ z(k) = \begin{bmatrix} Q^{1/2}\, x_o(k+1) \\ R^{1/2}\, u(k) \end{bmatrix} \tag{8.43} \]
Then it is easy to see that
\begin{align*}
J(v,k) &= \sum_{j=0}^{N-1} z^T(k+j|k)\, \Gamma(j)\, z(k+j|k) \\
&= \sum_{j=0}^{N-1} x^T(k+1+j|k)\, Q^{1/2}\, \Gamma_1(j)\, Q^{1/2}\, x(k+1+j|k) + \sum_{j=0}^{N-1} u^T(k+j|k)\, R^{1/2}\, I\, R^{1/2}\, u(k+j|k) \\
&= \sum_{j=1}^{N} x^T(k+j|k)\, Q^{1/2}\, \Gamma_1(j-1)\, Q^{1/2}\, x(k+j|k) + \sum_{j=1}^{N} u^T(k+j-1|k)\, R\, u(k+j-1|k) \\
&= \sum_{j=N_m}^{N} x^T(k+j|k)\, Q\, x(k+j|k) + \sum_{j=1}^{N} u^T(k+j-1|k)\, R\, u(k+j-1|k)
\end{align*}
and so performance index (4.1) is equivalent to (4.5) for the above given $z$, $N$ and $\Gamma$. From the IO state space equation
\[ x_o(k+1) = A_o x_o(k) + B_o u(k) + L_o d(k) + K_o e_o(k) \]
and using the definitions of $x$, $e$, $v$ and $w$, we find:
\[ x(k+1) = A x(k) + B_1 e(k) + B_2 d(k) + B_3 v(k) \]
\begin{align*}
z(k) &= \begin{bmatrix} Q^{1/2} x_o(k+1) \\ R^{1/2} u(k) \end{bmatrix} = \begin{bmatrix} Q^{1/2}\big(A_o x_o(k) + B_o u(k) + L_o d(k) + K_o e_o(k)\big) \\ R^{1/2} u(k) \end{bmatrix} \\
&= \begin{bmatrix} Q^{1/2} A_o \\ 0 \end{bmatrix} x_o(k) + \begin{bmatrix} Q^{1/2} K_o \\ 0 \end{bmatrix} e_o(k) + \begin{bmatrix} Q^{1/2} L_o \\ 0 \end{bmatrix} d_o(k) + \begin{bmatrix} Q^{1/2} B_o \\ R^{1/2} \end{bmatrix} u(k) \\
&= C_2 x(k) + D_{21} e(k) + D_{22} w(k) + D_{23} v(k)
\end{align*}
for the given choices of $A$, $B_1$, $B_2$, $B_3$, $C_1$, $C_2$, $D_{21}$, $D_{22}$ and $D_{23}$. $\Box$ End Proof
Proof of theorem 4.1:
The GPC performance index as given in (4.8) can be transformed into the standard performance index (4.1). First choose the performance signal
\[ z(k) = \begin{bmatrix} \hat y_p(k+1|k) - r(k+1) \\ \Delta u(k) \end{bmatrix} \tag{8.44} \]
Then it is easy to see that
\begin{align*}
J(v,k) &= \sum_{j=0}^{N-1} z^T(k+j|k)\, \Gamma(j)\, z(k+j|k) \\
&= \sum_{j=0}^{N-1} \begin{bmatrix} \hat y_p^T(k+1+j|k) - r^T(k+1+j) & \Delta u^T(k+j|k) \end{bmatrix} \Gamma(j) \begin{bmatrix} \hat y_p(k+1+j|k) - r(k+1+j) \\ \Delta u(k+j|k) \end{bmatrix} \\
&= \sum_{j=0}^{N-1} \big( \hat y_p(k+1+j|k) - r(k+1+j) \big)^T\, \Gamma_1(j)\, \big( \hat y_p(k+1+j|k) - r(k+1+j) \big) + \lambda^2 \sum_{j=0}^{N-1} \Delta u^T(k+j|k)\, \Delta u(k+j|k) \\
&= \sum_{j=N_m}^{N} \big( \hat y_p(k+j|k) - r(k+j) \big)^T \big( \hat y_p(k+j|k) - r(k+j) \big) + \lambda^2 \sum_{j=1}^{N} \Delta u^T(k+j-1|k)\, \Delta u(k+j-1|k)
\end{align*}
and so performance index (4.1) is equivalent to (4.8) for the above given $z(k)$, $N$ and $\Gamma(j)$.
Substitution of the IIO output equation
\[ y(k) = C_i x_i(k) + D_H e_i(k) \]
in (4.11),(4.12) results in:
\begin{align*}
x_p(k+1) &= A_p x_p(k) + B_p C_i x_i(k) + B_p D_H e_i(k) \\
y_p(k) &= C_p x_p(k) + D_p C_i x_i(k) + D_p D_H e_i(k)
\end{align*}
and so
\begin{align*}
x(k+1) &= \begin{bmatrix} x_p(k+1) \\ x_i(k+1) \end{bmatrix}
= \begin{bmatrix} A_p & B_p C_i \\ 0 & A_i \end{bmatrix} \begin{bmatrix} x_p(k) \\ x_i(k) \end{bmatrix} + \begin{bmatrix} B_p D_H \\ K_i \end{bmatrix} e_i(k) + \begin{bmatrix} 0 & 0 \\ L_i & 0 \end{bmatrix} \begin{bmatrix} d_i(k) \\ r(k+1) \end{bmatrix} + \begin{bmatrix} 0 \\ B_i \end{bmatrix} v(k) \\
&= A x(k) + B_1 e(k) + B_2 w(k) + B_3 v(k) \\
y(k) &= C_1 x(k) + D_H e_i(k) + \begin{bmatrix} 0 & 0 \end{bmatrix} \begin{bmatrix} d_i(k) \\ r(k+1) \end{bmatrix} = C_1 x(k) + D_{11} e_i(k) + D_{12} w(k) \\
y_p(k) &= \begin{bmatrix} C_p & D_p C_i \end{bmatrix} x(k) + D_p D_H e_i(k) + \begin{bmatrix} 0 & 0 \end{bmatrix} \begin{bmatrix} d_i(k) \\ r(k+1) \end{bmatrix}
\end{align*}
for the given choices of $A$, $B_1$, $B_2$, $B_3$ and $C_1$. For $\hat y_p(k+1|k)$ we derive:
\begin{align*}
\hat y_p(k+1|k) &= \begin{bmatrix} C_p & D_p C_i \end{bmatrix} x(k+1) + D_p D_H\, \hat e_i(k+1|k) \\
&= \begin{bmatrix} C_p & D_p C_i \end{bmatrix} \begin{bmatrix} A_p & B_p C_i \\ 0 & A_i \end{bmatrix} \begin{bmatrix} x_p(k) \\ x_i(k) \end{bmatrix} + \begin{bmatrix} C_p & D_p C_i \end{bmatrix} \begin{bmatrix} B_p D_H \\ K_i \end{bmatrix} e_i(k) \\
&\qquad + \begin{bmatrix} C_p & D_p C_i \end{bmatrix} \begin{bmatrix} 0 & 0 \\ L_i & 0 \end{bmatrix} \begin{bmatrix} d_i(k) \\ r(k+1) \end{bmatrix} + \begin{bmatrix} C_p & D_p C_i \end{bmatrix} \begin{bmatrix} 0 \\ B_i \end{bmatrix} v(k) + D_p D_H\, \hat e_i(k+1|k) \\
&= \begin{bmatrix} C_p A_p & C_p B_p C_i + D_p C_i A_i \end{bmatrix} \begin{bmatrix} x_p(k) \\ x_i(k) \end{bmatrix} + \big( C_p B_p D_H + D_p C_i K_i \big) e_i(k) + \begin{bmatrix} D_p C_i L_i & 0 \end{bmatrix} \begin{bmatrix} d_i(k) \\ r(k+1) \end{bmatrix} + D_p C_i B_i\, v(k)
\end{align*}
where we used the estimate $\hat e_i(k+1|k) = 0$.
For $z(k)$ we derive:
\begin{align*}
z(k) &= \begin{bmatrix} \hat y_p(k+1|k) - r(k+1) \\ \Delta u(k) \end{bmatrix} = \begin{bmatrix} \hat y_p(k+1|k) \\ 0 \end{bmatrix} + \begin{bmatrix} -r(k+1) \\ 0 \end{bmatrix} + \begin{bmatrix} 0 \\ v(k) \end{bmatrix} \\
&= \begin{bmatrix} C_p A_p & C_p B_p C_i + D_p C_i A_i \\ 0 & 0 \end{bmatrix} \begin{bmatrix} x_p(k) \\ x_i(k) \end{bmatrix} + \begin{bmatrix} C_p B_p D_H + D_p C_i K_i \\ 0 \end{bmatrix} e_i(k) + \begin{bmatrix} D_p C_i L_i & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} d_i(k) \\ r(k+1) \end{bmatrix} \\
&\qquad + \begin{bmatrix} D_p C_i B_i \\ 0 \end{bmatrix} v(k) + \begin{bmatrix} -I \\ 0 \end{bmatrix} r(k+1) + \begin{bmatrix} 0 \\ I \end{bmatrix} v(k) \\
&= \begin{bmatrix} C_p A_p & C_p B_p C_i + D_p C_i A_i \\ 0 & 0 \end{bmatrix} \begin{bmatrix} x_p(k) \\ x_i(k) \end{bmatrix} + \begin{bmatrix} C_p B_p D_H + D_p C_i K_i \\ 0 \end{bmatrix} e_i(k) + \begin{bmatrix} D_p C_i L_i & -I \\ 0 & 0 \end{bmatrix} w(k) + \begin{bmatrix} D_p C_i B_i \\ I \end{bmatrix} v(k) \\
&= C_2 x(k) + D_{21} e(k) + D_{22} w(k) + D_{23} v(k)
\end{align*}
which, for the given choices of $C_2$, $D_{21}$, $D_{22}$ and $D_{23}$, results in a performance index (4.1) equivalent to (4.8). $\Box$ End Proof
Appendix F: Standard Predictive Control Toolbox

Version 5.6
T.J.J. van den Boom
Manual MATLAB Toolbox
January 16, 2010
Contents

Introduction
Reference
gpc
gpc ss
lqpc
tf2syst
imp2syst
ss2syst
syst2ss
io2iio
gpc2spc
lqpc2spc
add du
add u
add y
add x
add nc
add end
external
dgamma
pred
contr
lticll
lticon
simul
rhc
Introduction

This toolbox is a collection of a few Matlab functions for the design and analysis of predictive controllers. This toolbox has been developed by Ton van den Boom at the Delft Center for Systems and Control (DCSC), Delft University of Technology, the Netherlands. The Standard Predictive Control toolbox is an independent tool, which only requires the control system toolbox and the signal processing toolbox.

Models in SYSTEM format

The notation in the SPC toolbox is kept as close as possible to the one in the lecture notes. To describe systems (IO model, IIO model, standard model, prediction model) we use the SYSTEM format, where all the system matrices are stacked in a matrix G and the dimensions are given in a vector dim.

IO model in SYSTEM format:

Consider the IO-system
\begin{align*}
x_o(k+1) &= A_o x_o(k) + K_o e_o(k) + L_o d_o(k) + B_o u(k) \\
y(k) &= C_o x_o(k) + D_H e_o(k) + D_F d_o(k)
\end{align*}
for $A_o \in \mathbb{R}^{n_a \times n_a}$, $K_o \in \mathbb{R}^{n_a \times n_k}$, $L_o \in \mathbb{R}^{n_a \times n_l}$, $B_o \in \mathbb{R}^{n_a \times n_b}$ and $C_o \in \mathbb{R}^{n_c \times n_a}$.
In SYSTEM format this will be given as:
\[ G = \begin{bmatrix} A_o & K_o & L_o & B_o \\ C_o & D_H & D_F & 0 \end{bmatrix} \qquad dim = \begin{bmatrix} n_a & n_k & n_l & n_b & n_c & 0 & 0 & 0 \end{bmatrix} \]
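The packing convention can be illustrated in Python (NumPy stands in for Matlab here; the matrices are taken from example 52, and the variable names are ours, not the toolbox's):

```python
import numpy as np

# dimensions and matrices of the example-52 IO model
na, nk, nl, nb, nc = 2, 1, 1, 1, 1
Ao = np.array([[0.7, 1.0], [-0.1, 0.0]])
Ko = np.array([[1.0], [-0.2]])
Lo = np.array([[0.1], [-0.05]])
Bo = np.array([[1.0], [0.0]])
Co = np.array([[1.0, 0.0]])
DH, DF = np.array([[1.0]]), np.array([[0.0]])

# stack everything into G, collect the dimensions in dim
G = np.block([[Ao, Ko, Lo, Bo],
              [Co, DH, DF, np.zeros((nc, nb))]])
dim = [na, nk, nl, nb, nc, 0, 0, 0]

# unpacking recovers the matrices from G and dim
Ao_u = G[:na, :na]
Ko_u = G[:na, na:na + nk]
print(np.allclose(Ao_u, Ao) and np.allclose(Ko_u, Ko))  # True
```

The remaining blocks ($L_o$, $B_o$, the output row) are sliced out the same way using the offsets in `dim`.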
IIO model in SYSTEM format:

Consider the IIO-system
\begin{align*}
x_i(k+1) &= A_i x_i(k) + K_i e_i(k) + L_i d_i(k) + B_i \Delta u(k) \\
y(k) &= C_i x_i(k) + D_H e_i(k) + D_F d_i(k)
\end{align*}
for $A_i \in \mathbb{R}^{n_a \times n_a}$, $K_i \in \mathbb{R}^{n_a \times n_k}$, $L_i \in \mathbb{R}^{n_a \times n_l}$, $B_i \in \mathbb{R}^{n_a \times n_b}$ and $C_i \in \mathbb{R}^{n_c \times n_a}$.
In SYSTEM format this will be given as:
\[ G = \begin{bmatrix} A_i & K_i & L_i & B_i \\ C_i & D_H & D_F & 0 \end{bmatrix} \qquad dim = \begin{bmatrix} n_a & n_k & n_l & n_b & n_c & 0 & 0 & 0 \end{bmatrix} \]
Standard model in SYSTEM format:

Consider the standard model of the SPCP:
\begin{align*}
x(k+1) &= A x(k) + B_1 e(k) + B_2 w(k) + B_3 v(k) \\
y(k) &= C_1 x(k) + D_{11} e(k) + D_{12} w(k) \\
z(k) &= C_2 x(k) + D_{21} e(k) + D_{22} w(k) + D_{23} v(k) \\
\psi(k) &= C_3 x(k) + D_{31} e(k) + D_{32} w(k) + D_{33} v(k) \\
\phi(k) &= C_4 x(k) + D_{41} e(k) + D_{42} w(k) + D_{43} v(k)
\end{align*}
for $A \in \mathbb{R}^{n_a \times n_a}$, $B_i \in \mathbb{R}^{n_a \times n_{bi}}$, $i = 1,2,3$, and $C_j \in \mathbb{R}^{n_{cj} \times n_a}$, $j = 1,2,3,4$. In SYSTEM format this will be given as:
\[ G = \begin{bmatrix} A & B_1 & B_2 & B_3 \\ C_1 & D_{11} & D_{12} & 0 \\ C_2 & D_{21} & D_{22} & D_{23} \\ C_3 & D_{31} & D_{32} & D_{33} \\ C_4 & D_{41} & D_{42} & D_{43} \end{bmatrix} \qquad dim = \begin{bmatrix} n_a & n_{b1} & n_{b2} & n_{b3} & n_{c1} & n_{c2} & n_{c3} & n_{c4} \end{bmatrix} \]
Prediction model in SYSTEM format:

Consider the prediction model of the SPCP:

    x(k+1) = A x(k)  + B̃1 e(k)  + B̃2 w(k)  + B̃3 v(k)
      y(k) = C̃1 x(k) + D̃11 e(k) + D̃12 w(k) + D̃13 v(k)
      z(k) = C̃2 x(k) + D̃21 e(k) + D̃22 w(k) + D̃23 v(k)
      ψ̃(k) = C̃3 x(k) + D̃31 e(k) + D̃32 w(k) + D̃33 v(k)
      φ̃(k) = C̃4 x(k) + D̃41 e(k) + D̃42 w(k) + D̃43 v(k)

where A ∈ R^(na×na), B̃i ∈ R^(na×nbi), C̃j ∈ R^(ncj×na), and D̃ji ∈ R^(ncj×nbi),
i = 1, 2, 3, j = 2, 3, 4. The SYSTEM format is given as:

    G = [ A    B̃1   B̃2   B̃3
          C̃1   D̃11  D̃12  D̃13
          C̃2   D̃21  D̃22  D̃23
          C̃3   D̃31  D̃32  D̃33
          C̃4   D̃41  D̃42  D̃43 ]

    dim = [ na  nb1  nb2  nb3  nc2  nc3  nc4 ]
Reference
Common predictive control problems
gpc solves the generalized predictive control problem
gpc_ss solves the state space generalized predictive control problem
lqpc solves the linear quadratic predictive control problem
Construction of process model
tf2syst transforms a polynomial model into a state space model in SYSTEM
format
imp2syst transforms an impulse response model into a state space model in
SYSTEM format
ss2syst transforms a state space model into the SYSTEM format
syst2ss transforms the SYSTEM format into a state space model
io2iio transforms an IO system into an IIO system
Formulation of standard predictive control problem
gpc2spc transforms the GPC problem into an SPC problem
lqpc2spc transforms the LQPC problem into an SPC problem
add_du function to add an increment input constraint to the SPC problem
add_u function to add an input constraint to the SPC problem
add_v function to add an input constraint to the SPC problem
add_y function to add an output constraint to the SPC problem
add_x function to add a state constraint to the SPC problem
add_nc function to add a control horizon to the SPC problem
add_end function to add a state end-point constraint to the SPC problem
Standard predictive control problem
external function makes the external signal w from d and r
dgamma function makes the selection matrix Γ
pred function makes a prediction model
contr function computes controller matrices for the predictive controller
lticll function computes the state space matrices of the LTI closed loop
lticon function computes the state space matrices of the LTI optimal
predictive controller
simul function makes a closed loop simulation with the predictive controller
rhc function makes a closed loop simulation in receding horizon mode
Demonstrations of tuning GPC and LQPC
demogpc Tuning GPC controller
demolqpc Tuning LQPC controller
gpc
Purpose
Solves the Generalized Predictive Control (GPC) problem
Synopsis
[y,du]=gpc(ai,bi,ci,fi,P,lambda,Nm,N,Nc,lensim,...
r,di,ei,dumax,apr,rhc);
[y,du,sys,dim,vv]=gpc(ai,bi,ci,fi,P,lambda,Nm,N,Nc,lensim,...
r,di,ei,dumax,apr,rhc);
Description
The function solves the GPC problem (Clarke et al., [14],[15]) for a controlled
autoregressive integrated moving average (CARIMA) model, given by

    ai(q) y(k) = bi(q) u(k) + ci(q) ei(k) + fi(q) di(k)

where the increment input signal is given by Δu(k) = u(k) − u(k−1).
A performance index, based on control and output signals, is minimized:

    min_{Δu(k)} J(Δu, k) = min_{Δu(k)} Σ_{j=Nm}^{N} | ŷp(k+j|k) − r(k+j) |² + λ² Σ_{j=1}^{Nc} | Δu(k+j−1) |²

where

    yp(k) = P(q) y(k)  is the weighted process output signal
    r(k)    is the reference trajectory
    y(k)    is the process output signal
    di(k)   is the known disturbance increment signal
    ei(k)   is the zero-mean white noise
    Δu(k)   is the process control increment signal
    Nm      is the minimum cost-horizon
    N       is the prediction horizon
    Nc      is the control horizon
    λ       is the weighting on the control signal
    P(q) = 1 + p1 q⁻¹ + ... + p_np q⁻ⁿᵖ  is a polynomial with desired closed-loop poles

The signal ŷp(k+j|k) is the prediction of yp(k+j), based on knowledge up to
time k. The input signal u(k+j) is forced to become constant for j ≥ Nc, so

    Δu(k+j) = 0   for j ≥ Nc

Further the increment input signal is assumed to be bounded:

    | Δu(k+j) | ≤ Δu_max   for 0 ≤ j < Nc

The parameter λ determines the trade-off between tracking accuracy (first term)
and control effort (second term). The polynomial P(q) can be chosen by the
designer and broadens the class of control objectives. np of the closed-loop poles
will be placed at the locations of the roots of the polynomial P(q).
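The performance index above can be evaluated numerically for a given prediction and input sequence. A hedged sketch in Python (not toolbox code — the function `gpc_cost` and the example signals are invented for illustration; it assumes the λ² weighting of the increment term as written above):

```python
import numpy as np

def gpc_cost(yp_pred, r, du, lam, Nm):
    """GPC performance index: tracking term over j = Nm..N plus a
    lambda^2-weighted increment-input term over the control horizon Nc."""
    # tracking term: sum over j = Nm .. N of |y_p(k+j|k) - r(k+j)|^2
    track = sum(abs(yp_pred[j] - r[j])**2 for j in range(Nm, len(yp_pred)))
    # control-effort term: lambda^2 * sum over the Nc increments
    effort = lam**2 * sum(abs(d)**2 for d in du)
    return track + effort

yp_pred = np.array([0.0, 0.5, 0.9, 1.0])   # predicted outputs y_p(k+j|k), j = 0..N
r = np.ones(4)                             # reference trajectory
du = np.array([0.5, 0.4])                  # increments over Nc = 2
print(gpc_cost(yp_pred, r, du, lam=0.1, Nm=1))
```

Increasing `lam` shifts the minimizer toward smaller input increments at the cost of slower tracking, which is exactly the trade-off described above.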
The function gpc returns the output signal y and the increment input signal du.
The input and output variables are the following:

    y        the output signal y(k).
    du       the increment input signal Δu(k).
    sys      the model (in standard form).
    dim      dimensions of sys.
    vv       controller parameters.
    ai,bi,ci,fi  polynomials of the CARIMA model.
    P        the polynomial P(q).
    lambda   the trade-off parameter λ.
    Nm,N,Nc  the summation parameters.
    r        the column vector with reference signal r(k).
    di       the column vector with disturbance increment signal di(k).
    ei       the column vector with ZMWN noise signal ei(k).
    dumax    the maximum bound Δu_max on the increment input signal.
    lensim   the length of the simulation interval.
    apr      vector with flags for a priori knowledge on the external signal.
    rhc      flag for receding horizon mode.

Setting dumax=0 means no increment input constraint will be added.
The vector apr (with length 2) determines whether the external signals di(k+j) and
r(k+j) are a priori known or not. The disturbance di(k+j) is a priori known
for apr(1)=1 and unknown for apr(1)=0. The reference r(k+j) is a priori
known for apr(2)=1 and unknown for apr(2)=0.
For rhc=0 the simulation is done in closed loop mode. For rhc=1 the simulation
is done in receding horizon mode, which means that the simulation is paused
every sample and can be continued by pressing the return-key. Past signals and
predictions of future signals are given within the prediction range.
The output variables sys, dim and vv can be used as input variables for the
functions lticll and lticon.
Example
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% Example for gpc
%
randn('state',4);                     % set initial value of random generator
%%% G(q) %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
ai = conv([1 -1.787 0.798],[1 -1]);   % polynomial ai(q)
bi = 0.029*[0 1 0.928 0];             % polynomial bi(q)
ci = [1 -1.787 0.798 0];              % polynomial ci(q)
fi = [0 1 -1.787 0.798];              % polynomial fi(q)
%%% P(q) %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
P = 1;                                % no pole-placement
lambda = 0;                           % increment input is not weighted
Nm = 8;                               % minimum cost horizon
Nc = 2;                               % control horizon
N  = 25;                              % prediction horizon
lensim = 61;                          % length of simulation interval
dumax = 0;                            % dumax=0 means that no constraint is added !!
apr = [0 0];                          % no a priori knowledge about w(k)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
r  = [zeros(1,10) ones(1,100)];       % step on k=11
di = [zeros(1,20) 0.5 zeros(1,89)];   % step on k=21
ei = [zeros(1,30) -0.1*randn(1,31)];  % noise starting from k=31
[y,du] = gpc(ai,bi,ci,fi,P,lambda,Nm,N,Nc,lensim,r,di,ei,dumax,apr);
u  = cumsum(du);                      % compute u(k) from du(k)
do = cumsum(di(1:61));                % compute do(k) from di(k)
eo = cumsum(ei(1:61));                % compute eo(k) from ei(k)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% plot figures %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
t = 0:60;
[tt,rr]  = stairs(t,r(:,1:61));
[tt,uu]  = stairs(t,u);
[tt,duu] = stairs(t,du);
subplot(211);
plot(t,y,'b',tt,rr,'g',t,do,'r',t,eo,'c');
title('green: r , blue: y , red: do , cyan: eo');
grid;
subplot(212);
plot(tt,duu,'b',tt,uu,'g');
title('blue: du , green: u');
grid;
disp(' ');
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
See Also
tf2io, io2iio, gpc2spc, external, pred, add_nc, add_du, add_y, contr,
simul
gpc_ss
Purpose
Solves the Generalized Predictive Control (GPC) problem for a state space IIO
model
Synopsis
[y,du]=gpc_ss(Ai,Ki,Li,Bi,Ci,DH,DF,Ap,Bp,Cp,Dp,Wy,Wu,...
    Nm,N,Nc,lensim,r,di,ei,dumax,apr,rhc);
[y,du,sys,dim,vv]=gpc_ss(Ai,Ki,Li,Bi,Ci,DH,DF,Ap,Bp,Cp,Dp,Wy,Wu,...
    Nm,N,Nc,lensim,r,di,ei,dumax,apr,rhc);
Description
The function solves the GPC method (Clarke et al., [14],[15]) for a state space
IIO model, given by

    x(k+1) = Ai x(k) + Ki ei(k) + Li di(k) + Bi Δu(k)
      y(k) = Ci x(k) + DH ei(k) + DF di(k)

where the increment input signal is given by Δu(k) = u(k) − u(k−1).
A performance index, based on control and output signals, is minimized:

    min_{Δu(k)} J(Δu, k) = min_{Δu(k)} Σ_{j=Nm}^{N} ‖ Wy ( ŷp(k+j|k) − r(k+j) ) ‖² + Σ_{j=1}^{Nc} ‖ Wu Δu(k+j−1) ‖²

where ‖φ(k)‖ stands for the Euclidean norm of the vector φ(k), and the variables
are given by:

    yp(k) = P(q) y(k)  is the weighted process output signal
    r(k)    is the reference trajectory
    y(k)    is the process output signal
    di(k)   is the known disturbance increment signal
    ei(k)   is the zero-mean white noise
    Δu(k)   is the process control increment signal
    Nm      is the minimum cost-horizon
    N       is the prediction horizon
    Nc      is the control horizon
    Wy      is the weighting matrix on the tracking error
    Wu      is the weighting matrix on the control signal
    P(q)    is the reference system
Standard Predictive Control Toolbox 215
gpc ss
The reference system P(q) is given by the state space realization:

    xp(k+1) = Ap xp(k) + Bp y(k)
      yp(k) = Cp xp(k) + Dp y(k)

The signal ŷp(k+j|k) is the prediction of yp(k+j), based on knowledge up to
time k. The input signal u(k+j) is forced to become constant for j ≥ Nc, so

    Δu(k+j) = 0   for j ≥ Nc

Further the increment input signal is assumed to be bounded:

    | Δu(k+j) | ≤ Δu_max   for 0 ≤ j < Nc

For MIMO systems Δu_max can be chosen as a scalar value or a vector value
(with the same dimension as the vector Δu(k)). In the case of a scalar, the bound
Δu_max holds for each element of Δu(k). In the case of a vector Δu_max, the
bound is taken elementwise.
The weighting matrices Wy and Wu determine the trade-off between tracking
accuracy (first term) and control effort (second term). The reference system
P(q) can be chosen by the designer and broadens the class of control objectives.
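The scalar-or-vector bound rule used for dumax can be made concrete in a few lines. An illustrative Python/NumPy sketch (not toolbox code — the name `within_bound` is invented here):

```python
import numpy as np

def within_bound(du, dumax):
    """Check |du_i| <= dumax elementwise. A scalar dumax applies the
    same bound to every element; a vector dumax is taken elementwise."""
    dumax = np.broadcast_to(np.atleast_1d(dumax), np.shape(du))
    return bool(np.all(np.abs(du) <= np.abs(dumax)))

print(within_bound([0.2, -0.3], 0.5))          # scalar bound on both elements
print(within_bound([0.2, -0.3], [0.25, 0.25])) # elementwise bounds
```

The broadcast makes the scalar and vector cases behave identically to the rule stated above.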
The function gpc_ss returns the output signal y and the increment input signal du.
The input and output variables are the following:

    y        the output signal y(k).
    du       the increment input signal Δu(k).
    sys      the model (in standard form).
    dim      dimensions of sys.
    vv       controller parameters.
    Ai,Ki,Li,Bi,Ci,DH,DF  system matrices of the IIO model.
    Ap,Bp,Cp,Dp  system matrices of the reference system P(q).
    Wy       weighting matrix on the tracking error.
    Wu       weighting matrix on the input increment signal.
    Nm,N,Nc  the summation parameters.
    r        the column vector with reference signal r(k).
    di       the column vector with disturbance increment signal di(k).
    ei       the column vector with ZMWN noise signal ei(k).
    dumax    the vector with bound Δu_max.
    lensim   the length of the simulation interval.
    apr      vector with flags for a priori knowledge on the external signal.
    rhc      flag for receding horizon mode.

Setting dumax=0 means no increment input constraint will be added.
The vector apr (with length 2) determines whether the external signals di(k+j) and
r(k+j) are a priori known or not. The disturbance di(k+j) is a priori known
for apr(1)=1 and unknown for apr(1)=0. The reference r(k+j) is a priori
known for apr(2)=1 and unknown for apr(2)=0.
For rhc=0 the simulation is done in closed loop mode. For rhc=1 the simulation
is done in receding horizon mode, which means that the simulation is paused
every sample and can be continued by pressing the return-key. Past signals and
predictions of future signals are given within the prediction range.
The output variables sys, dim and vv can be used as input variables for the
functions lticll and lticon.
See Also
gpc, tf2io, io2iio, gpc2spc, external, pred, add_nc, add_du, add_y,
contr, simul
lqpc
Purpose
Solves the Linear Quadratic Predictive Control (LQPC) problem
Synopsis
[y,u,x]=lqpc(Ao,Ko,Lo,Bo,Co,DH,DF,Q,R,Nm,N,Nc, ...
lensim,xo,do,eo,umax,apr,rhc);
[y,u,x,sys,dim,vv]=lqpc(Ao,Ko,Lo,Bo,Co,DH,DF,Q,R,Nm,N,Nc, ...
lensim,xo,do,eo,umax,apr,rhc);
Description
The function solves the Linear Quadratic Predictive Control (LQPC) method
(García et al., [28]) for a state space model, given by

    x(k+1) = Ao x(k) + Ko eo(k) + Lo do(k) + Bo u(k)
      y(k) = Co x(k) + DH eo(k) + DF do(k)

A performance index, based on state and input signals, is minimized:

    min_{u(k)} J(u, k) = min_{u(k)} Σ_{j=Nm}^{N} x̂(k+j)ᵀ Q x̂(k+j) + Σ_{j=1}^{N} u(k+j−1)ᵀ R u(k+j−1)

where

    x(k)   is the state signal vector
    u(k)   is the process control signal
    y(k)   is the process output signal
    do(k)  is the known disturbance signal
    eo(k)  is the zero-mean white noise
    Nm     is the minimum cost-horizon
    N      is the prediction horizon
    Nc     is the control horizon
    Q      is the state weighting matrix
    R      is the control weighting matrix

The signal x̂(k+j|k) is the prediction of the state x(k+j), based on knowledge up
to time k. The input signal u(k+j) is forced to become constant for j ≥ Nc.
Further the input signal is assumed to be bounded:

    | u(k+j) | ≤ u_max   for 0 ≤ j < Nc
The matrices Q and R determine the trade-off between tracking accuracy (first
term) and control effort (second term). For MIMO systems the matrices can
scale the states and inputs.
The function lqpc returns the output signal y, the input signal u and the state
signal x. The input and output variables are the following:

    y       the output signal y(k).
    u       the input signal u(k).
    x       the state signal x(k).
    sys     the model (in standard form).
    dim     dimensions of sys.
    vv      controller parameters.
    Ao,Ko,Lo,Bo,Co,DH,DF  system matrices of the state space IO model.
    Q,R     the weighting matrices Q and R.
    Nm,N,Nc the summation parameters.
    xo      the column vector with initial state xo = x(0).
    do      the disturbance signal do(k).
    eo      the ZMWN noise signal eo(k).
    umax    the maximum bound u_max on the input signal.
    lensim  the length of the simulation interval.
    apr     flag for a priori knowledge on the disturbance signal.
    rhc     flag for receding horizon mode.

The signal eo must have as many rows as there are noise inputs. The signal do
must have as many rows as there are disturbance inputs. Each column of eo
and do corresponds to a new time point.
The signal y will have as many rows as there are outputs. The signal u will
have as many rows as there are inputs. The signal x must have as many rows
as there are states. Each column of y, u and x corresponds to a new time point.
Setting umax=0 means no input constraint will be added.
The (scalar) variable apr determines whether the disturbance signal do(k+j) is a
priori known or not. The signal do(k+j) is a priori known for apr=1 and unknown
for apr=0.
For rhc=0 the simulation is done in closed loop mode. For rhc=1 the simulation
is done in receding horizon mode, which means that the simulation is paused
every sample and can be continued by pressing the return-key. Past signals and
predictions of future signals are given within the prediction range.
The output variables sys, dim and vv can be used as input variables for the
functions lticll and lticon.
See Also
lqpc2spc, pred, add_nc, add_u, add_y, contr, simul
tf2syst
Purpose
Transforms a polynomial into a state space model in SYSTEM format
Synopsis
[G,dim]=tf2syst(a,b,c,f);
Description
The function transforms a polynomial model into a state space model in
SYSTEM format (for SISO systems only). The input system is:

    a(q) y(k) = b(q) v(k) + c(q) e(k) + f(q) d(k)

The state space equations become

    x(k+1) = A x(k)  + B1 e(k)  + B2 d(k)  + B3 v(k)
      y(k) = C1 x(k) + D11 e(k) + D12 d(k)

with A ∈ R^(na×na), B1 ∈ R^(na×nb1), B2 ∈ R^(na×nb2), B3 ∈ R^(na×nb3),
C1 ∈ R^(nc1×na), D11 ∈ R^(nc1×nb1), D12 ∈ R^(nc1×nb2).
The state space representation is given in the SYSTEM format, so

    G = [ A   B1   B2   B3
          C1  D11  D12  0  ]

with dimension vector

    dim = [ na  nb1  nb2  nb3  nc1  0  0  0 ]

The function tf2syst returns the state space model G and its dimension dim,
given in SYSTEM format. The input and output variables are the following:

    G        the state space model in SYSTEM format.
    dim      the dimension of the system.
    a,b,c,f  polynomials of the CARIMA model.

See Also
io2iio, imp2syst, ss2syst
imp2syst
Purpose
Transforms an impulse response model into a state space model in SYSTEM
format
Synopsis
[G,dim]=imp2syst(g,f,nimp);
[G,dim]=imp2syst(g,f);
Description
The function transforms an impulse response model into a state space model
in SYSTEM format (for SISO systems only). The input system is:

    y(k) = Σ_{i=1}^{ng} g(i) u(k−i) + Σ_{i=1}^{nf} f(i) d(k−i) + e(k)

where the impulse response parameters g(i) and f(i) are given in the vectors

    g = [ 0  g(1)  g(2)  ...  g(ng) ]
    f = [ 0  f(1)  f(2)  ...  f(nf) ]

The state space equations become

    x(k+1) = A x(k)  + B1 e(k) + B2 d(k) + B3 v(k)
      y(k) = C1 x(k) + e(k)

with A ∈ R^(nimp×nimp), B1 ∈ R^(nimp×nb1), B2 ∈ R^(nimp×nb2),
B3 ∈ R^(nimp×nb3), C1 ∈ R^(nc1×nimp).
The state space representation is given in the SYSTEM format, so

    G = [ A   B1  B2  B3
          C1  I   0   0  ]

with dimension vector

    dim = [ nimp  nb1  nb2  nb3  nc1  0  0  0 ]
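The impulse-response model above can be simulated directly by summing the weighted past inputs. A minimal Python sketch (illustrative only — `impulse_sim` is not a toolbox function; it follows the manual's convention that g(0) = f(0) = 0):

```python
import numpy as np

def impulse_sim(g, f, u, d, e):
    """Simulate y(k) = sum_i g(i) u(k-i) + sum_i f(i) d(k-i) + e(k),
    with g = [0, g(1), ..., g(ng)] so that g[0] is unused."""
    n = len(u)
    y = np.zeros(n)
    for k in range(n):
        # only past samples u(k-i), d(k-i) with i >= 1 contribute
        y[k] = sum(g[i] * u[k - i] for i in range(1, min(len(g), k + 1)))
        y[k] += sum(f[i] * d[k - i] for i in range(1, min(len(f), k + 1)))
        y[k] += e[k]
    return y

g = [0, 0.5, 0.25]            # impulse response of the process
f = [0, 0.1]                  # impulse response of the disturbance
u = np.array([1.0, 0, 0, 0])  # unit pulse at k = 0
d = np.zeros(4); e = np.zeros(4)
print(impulse_sim(g, f, u, d, e))  # recovers g(1) and g(2) at k = 1, 2
```

Feeding a unit pulse through the model returns the impulse-response parameters themselves, which is a quick sanity check for this model class.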
The function imp2syst returns the state space model G and its dimension dim,
given in SYSTEM format. The input and output variables are the following:

    G     the state space model in SYSTEM format.
    dim   the dimension of the system.
    g     vector with impulse response parameters of the process model.
    f     vector with impulse response parameters of the disturbance model.
    nimp  length of the impulse response in model G.

If nimp is not given, it will be set to nimp = max(ng, nf).
See Also
io2iio, tf2syst, ss2syst
ss2syst
Purpose
Transforms a state space model into the SYSTEM format
Synopsis
[G,dim]=ss2syst(A,B1,B2,B3,C1,D11,D12,D13);
[G,dim]=ss2syst(A,B1,B2,B3,C1,D11,D12,D13,C2,D21,D22,D23);
[G,dim]=ss2syst(A,B1,B2,B3,C1,D11,D12,D13,C2,D21,D22,D23,...
C3,D31,D32,D33,C4,D41,D42,D43);
Description
The function transforms a state space model into the SYSTEM format. The
state space equations are

    x(k+1) = A x(k)  + B1 e(k)  + B2 w(k)  + B3 v(k)
      y(k) = C1 x(k) + D11 e(k) + D12 w(k) + D13 v(k)
      z(k) = C2 x(k) + D21 e(k) + D22 w(k) + D23 v(k)
      ψ(k) = C3 x(k) + D31 e(k) + D32 w(k) + D33 v(k)
      φ(k) = C4 x(k) + D41 e(k) + D42 w(k) + D43 v(k)

with A ∈ R^(na×na), B1 ∈ R^(na×nb1), B2 ∈ R^(na×nb2), B3 ∈ R^(na×nb3),
C1 ∈ R^(nc1×na), C2 ∈ R^(nc2×na), C3 ∈ R^(nc3×na).
The output system is given in the SYSTEM format, so

    G = [ A   B1   B2   B3
          C1  D11  D12  D13
          C2  D21  D22  D23
          C3  D31  D32  D33
          C4  D41  D42  D43 ]

with dimension vector

    dim = [ na  nb1  nb2  nb3  nc1  nc2  nc3  nc4 ]

The function ss2syst returns the state space model G and its dimension dim,
given in SYSTEM format. The input and output variables are the following:

    G             the state space model in SYSTEM format.
    dim           the dimension of the system.
    A,B1,...,D43  system matrices of the state space model.

See Also
io2iio, imp2syst, tf2syst, syst2ss
syst2ss
Purpose
Transforms SYSTEM format into a state space model
Synopsis
[A,B1,B2,B3,C1,D11,D12,D13]=syst2ss(sys,dim);
[A,B1,B2,B3,C1,D11,D12,D13,C2,D21,D22,D23]=syst2ss(sys,dim);
[A,B1,B2,B3,C1,D11,D12,D13,C2,D21,D22,D23,C3,D31,D32,D33,...
C4,D41,D42,D43]=syst2ss(sys,dim);
Description
The function transforms a model in SYSTEM format into its state space
representation.
Consider a system in the SYSTEM format, so

    sys = [ A   B1   B2   B3
            C1  D11  D12  D13 ]

with dimension vector dim. Using

    [A,B1,B2,B3,C1,D11,D12,D13]=syst2ss(sys,dim);

the state space equations become:

    x(k+1) = A x(k)  + B1 e(k)  + B2 w(k)  + B3 v(k)
      y(k) = C1 x(k) + D11 e(k) + D12 w(k) + D13 v(k)

Consider a standard predictive control problem in the SYSTEM format, so

    sys = [ A   B1   B2   B3
            C1  D11  D12  D13
            C2  D21  D22  D23 ]

with dimension vector dim. Using

    [A,B1,B2,B3,C1,D11,D12,D13,C2,D21,D22,D23]=syst2ss(sys,dim);

the state space equations become:

    x(k+1) = A x(k)  + B1 e(k)  + B2 w(k)  + B3 v(k)
      y(k) = C1 x(k) + D11 e(k) + D12 w(k) + D13 v(k)
      z(k) = C2 x(k) + D21 e(k) + D22 w(k) + D23 v(k)

Consider a standard predictive control problem after prediction in SYSTEM
format, so

    sysp = [ A    B̃1   B̃2   B̃3
             C̃1   D̃11  D̃12  D̃13
             C̃2   D̃21  D̃22  D̃23
             C̃3   D̃31  D̃32  D̃33
             C̃4   D̃41  D̃42  D̃43 ]

with dimension vector dimp. Using

    [A,tB1,tB2,tB3,tC1,D11,D12,D13,tC2,tD21,tD22,tD23,...
        tC3,tD31,tD32,tD33,tC4,tD41,tD42,tD43]=syst2ss(sysp,dimp);

the state space equations become:

    x(k+1) = A x(k)  + B̃1 e(k)  + B̃2 w(k)  + B̃3 v(k)
      y(k) = C̃1 x(k) + D̃11 e(k) + D̃12 w(k) + D̃13 v(k)
      z(k) = C̃2 x(k) + D̃21 e(k) + D̃22 w(k) + D̃23 v(k)
      ψ̃(k) = C̃3 x(k) + D̃31 e(k) + D̃32 w(k) + D̃33 v(k)
      φ̃(k) = C̃4 x(k) + D̃41 e(k) + D̃42 w(k) + D̃43 v(k)

See Also
io2iio, imp2syst, tf2syst, ss2syst
io2iio
Purpose
Transforms an IO system (Go) into an IIO system (Gi)
Synopsis
[Gi,dimi]=io2iio(Go,dimo);
Description
The function transforms an IO system into an IIO system.
The state space representation of the input system is:

    Go = [ Ao  Ko  Lo  Bo
           Co  DH  DF  0  ]

with dimension vector

    dimo = [ na  nk  nl  nb  nc ]

The state space representation of the output system is:

    Gi = [ Ai  Ki  Li  Bi
           Ci  DH  DF  0  ]

with dimension vector

    dimi = [ na+nc  nk  nl  nb  nc ]

The function io2iio returns the state space IIO model Gi and its dimension
dimi. The input and output variables are the following:

    Gi    the increment input-output (IIO) model in SYSTEM format.
    dimi  the dimension of the IIO model.
    Go    the input-output (IO) model in SYSTEM format.
    dimo  the dimension of the IO model.

gpc2spc
Purpose
Transforms the GPC problem into an SPC problem
Synopsis
[sys,dim]=gpc2spc(Gi,dimi,P,lambda);
Description
The function transforms the GPC problem into a standard predictive control
problem. For an IIO model Gi:

    xi(k+1) = Ai xi(k) + Ki ei(k) + Li w(k) + Bi Δu(k)
      y(k)  = Ci xi(k) + DH ei(k) + DF w(k)

The weighting P(q) is given in state space (!!) representation:

    xp(k+1) = Ap xp(k) + Bp y(k)
      yp(k) = Cp xp(k) + Dp y(k)

where xp is the state of the realization describing yp(k) = P(q) y(k).
After the transformation, the system becomes

    x(k+1) = A x(k)  + B1 e(k)  + B2 w(k)  + B3 v(k)
      y(k) = C1 x(k) + D11 e(k) + D12 w(k)
      z(k) = C2 x(k) + D21 e(k) + D22 w(k) + D23 v(k)

with

    z(k) = [ yp(k+1) − r(k+1)
             λ Δu(k)          ]

    w(k) = [ d(k)
             r(k+1) ]

The IIO model Gi, the weighting filter P and the transformed system sys are
given in the SYSTEM format. The input and output variables are the following:

    sys     standard model of the SPC problem in SYSTEM format.
    dim     the dimension in the SPC problem.
    Gi      the IIO model in SYSTEM format.
    P       the weighting polynomial.
    lambda  the weighting factor λ.

See Also
lqpc2spc, gpc
lqpc2spc
Purpose
Transforms the LQPC problem into an SPC problem
Synopsis
[sys,dim]=lqpc2spc(Go,dimo,Q,R);
Description
The function transforms the LQPC problem into a standard predictive control
problem. Consider the IO model Go:

    xo(k+1) = Ao xo(k) + Ko eo(k) + Lo w(k) + Bo u(k)
      y(k)  = Co xo(k) + DH eo(k) + DF w(k)

After the transformation, the system becomes

    x(k+1) = A x(k)  + B1 e(k)  + B2 w(k)  + B3 v(k)
      y(k) = C1 x(k) + D11 e(k) + D12 w(k)
      z(k) = C2 x(k) + D21 e(k) + D22 w(k) + D23 v(k)

with

    z(k) = [ Q^(1/2) x(k+1)
             R^(1/2) u(k)   ]

    w(k) = d(k)

where

    Q  is the state weighting matrix
    R  is the control weighting matrix

The IO model Go and the transformed system sys are given in the SYSTEM
format. The input and output variables are the following:

    sys  standard model of the SPC problem in SYSTEM format.
    dim  the dimension in the SPC problem.
    Go   the IO model in SYSTEM format.
    Q    the state weighting matrix.
    R    the control weighting matrix.

See Also
gpc2spc, lqpc
add_du
Purpose
Function to add an increment input constraint to the standard problem
Synopsis
[sys,dim] = add_du(sys,dim,dumax,sgn,descr);
[sysp,dimp]= add_du(sysp,dimp,dumax,sgn,descr);
Description
The function adds an increment input constraint to the standard problem (sys)
or the prediction model (sysp). For sgn=1, the constraint is given by:

    Δu(k+j) ≤ |Δu_max|   for j = 0, ..., N−1

For sgn=-1, the constraint is given by:

    Δu(k+j) ≥ −|Δu_max|   for j = 0, ..., N−1

For sgn=0, the constraint is given by:

    −|Δu_max| ≤ Δu(k+j) ≤ |Δu_max|   for j = 0, ..., N−1

where

    Δu_max   is the maximum bound on the increment input signal
    Δu(k+j)  is the input increment signal

For MIMO systems Δu_max can be chosen as a scalar value or a vector value
(with the same dimension as the vector Δu(k)). In the case of a scalar, the
bound Δu_max holds for each element of Δu(k). In the case of a vector Δu_max,
the bound is taken elementwise.
The function add_du returns the system, subjected to the increment input
constraint. The input and output variables are the following:

    sys    standard model.
    dim    the dimension of the standard model.
    sysp   prediction model.
    dimp   the dimension of the prediction model.
    dumax  the maximum bound on the increment input.
    sgn    scalar with value +1/0/-1.
    descr  flag for the model type (descr='IO' for an input-output model,
           descr='IIO' for an increment input-output model).

Setting dumax=0 means no increment input constraint will be added.
See Also
add_u, add_v, add_y, add_x, gpc
add_u
Purpose
Function to add an input constraint to the standard problem
Synopsis
[sys,dim]=add_u(sys,dim,umax,sgn,descr);
[sysp,dimp]=add_u(sysp,dimp,umax,sgn,descr);
Description
The function adds an input constraint to the standard problem (sys) or the
prediction model (sysp). For sgn=1, the constraint is given by:

    u(k+j) ≤ |u_max|   for j = 0, ..., N−1

For sgn=-1, the constraint is given by:

    u(k+j) ≥ −|u_max|   for j = 0, ..., N−1

For sgn=0, the constraint is given by:

    −|u_max| ≤ u(k+j) ≤ |u_max|   for j = 0, ..., N−1

where

    u_max   is the maximum bound on the input signal
    u(k+j)  is the input signal

For MIMO systems u_max can be chosen as a scalar value or a vector value
(with the same dimension as the vector u(k)). In the case of a scalar, the bound
u_max holds for each element of u(k). In the case of a vector u_max, the bound
is taken elementwise.
The function add_u returns the system, subjected to the input constraint. The
input and output variables are the following:

    sys    standard model.
    dim    the dimension of the standard model.
    sysp   prediction model.
    dimp   the dimension of the prediction model.
    umax   the maximum bound on the input signal.
    sgn    scalar with value +1/0/-1.
    descr  flag for the model type (descr='IO' for an input-output model,
           descr='IIO' for an increment input-output model).

Setting umax=0 means no input constraint will be added.
See Also
add_du, add_v, add_y, add_x, lqpc
add_v
Purpose
Function to add an input constraint on v to the standard problem
Synopsis
[sys,dim] = add_v(sys,dim,vmax,sgn);
[sysp,dimp]= add_v(sysp,dimp,vmax,sgn);
Description
The function adds an input constraint on v to the standard problem (sys)
or the prediction model (sysp). For sgn=1, the constraint is given by:

    v(k+j) ≤ |v_max|   for j = 0, ..., N−1

For sgn=-1, the constraint is given by:

    v(k+j) ≥ −|v_max|   for j = 0, ..., N−1

For sgn=0, the constraint is given by:

    −|v_max| ≤ v(k+j) ≤ |v_max|   for j = 0, ..., N−1

where

    v_max   is the maximum bound on the input signal
    v(k+j)  is the input signal

For MIMO systems v_max can be chosen as a scalar value or a vector value
(with the same dimension as the vector v(k)). In the case of a scalar, the bound
v_max holds for each element of v(k). In the case of a vector v_max, the bound
is taken elementwise.
The function add_v returns the system, subjected to the input constraint on v.
The input and output variables are the following:

    sys   standard model.
    dim   the dimension of the standard model.
    sysp  prediction model.
    dimp  the dimension of the prediction model.
    vmax  the maximum bound on the input.
    sgn   scalar with value +1/0/-1.

Setting vmax=0 means no input constraint will be added.
See Also
add_u, add_du, add_y, add_x, gpc
add_y
Purpose
Function to add an output constraint to the standard problem (sys) or the
prediction model (sysp)
Synopsis
[sys,dim] =add_y(sys,dim,ymax,sgn);
[sysp,dimp]=add_y(sysp,dimp,ymax,sgn);
Description
The function adds an output constraint to the standard problem (sys) or the
prediction model (sysp). For sgn=1, the constraint is given by:

    y(k+j) ≤ |y_max|   for j = 0, ..., N−1

For sgn=-1, the constraint is given by:

    y(k+j) ≥ −|y_max|   for j = 0, ..., N−1

For sgn=0, the constraint is given by:

    −|y_max| ≤ y(k+j) ≤ |y_max|   for j = 1, ..., N−1

where

    y_max   is the maximum bound on the output signal
    y(k+j)  is the system output signal

For MIMO systems y_max can be chosen as a scalar value or a vector value
(with the same dimension as the vector y(k)). In the case of a scalar, the bound
y_max holds for each element of y(k). In the case of a vector y_max, the bound
is taken elementwise.
The function add_y returns the system subjected to the output constraint. The
input and output variables are the following:

    sys   standard model.
    dim   the dimension of the standard model.
    sysp  prediction model.
    dimp  the dimension of the prediction model.
    ymax  the maximum bound on the output signal.
    sgn   scalar with value +1/0/-1.

Setting ymax=0 means no output constraint will be added.
See Also
add_u, add_du, add_v, add_x
add_x
Purpose
Function to add a state constraint to the standard problem
Synopsis
[sys,dim]=add_x(sys,dim,E);
[sys,dim]=add_x(sys,dim,E,sgn);
Description
The function adds a state constraint to the standard problem.
NOTE: The function add_x can NOT be applied to the prediction model
(i.e. add_x must be used before pred).
For sgn=1, the constraint is given by:

    E x(k+j) ≤ 1   for j = 0, ..., N−1

For sgn=-1, the constraint is given by:

    E x(k+j) ≥ −1   for j = 0, ..., N−1

For sgn=0, the constraint is given by:

    −1 ≤ E x(k+j) ≤ 1   for j = 1, ..., N−1

where

    E       is a matrix
    x(k+j)  is the state of the system

The function add_x returns the standard problem, subjected to the state
constraint. The input and output variables are the following:

    sys  the standard model.
    dim  the dimension of the standard model.
    E    matrix to define the bound on the state.
    sgn  scalar with value +1/0/-1.

Setting E=0 means no state constraint will be added.
See Also
add_u, add_du, add_v, add_y
add_nc
Purpose
Function to add a control horizon to the prediction problem
Synopsis
[sysp,dimp]=add_nc(sysp,dimp,dim,Nc);
[sysp,dimp]=add_nc(sysp,dimp,dim,Nc,descr);
Description
The function add_nc returns the system with a control horizon in sysp and the
dimension dimp. The input and output variables are the following:

    sysp   the prediction model.
    dimp   the dimension of the prediction model.
    dim    the dimension of the original IO or IIO system.
    Nc     the control horizon.
    descr  flag for the model type (descr='IO' for an input-output model,
           descr='IIO' for an increment input-output model).

The default value of descr is 'IIO'.
See Also
gpc, lqpc
add_end
Purpose
Function to add a state end-point constraint to prediction problem.
Synopsis
[sysp,dimp] = add end(sys,dim,sysp,dimp,N);
Description
The function adds a state end-point constraint to the standard problem:
x(k + N) = x
ss
= D
ssx
w(k + N)
The function add_end returns the system with a state end-point constraint in
sysp and the dimension dimp. The input and output variables are the following:
sys The standard model.
dim The dimension of the standard model.
sysp The system of prediction problem.
dimp The dimension of the prediction problem.
N The prediction horizon.
external

Purpose
Function makes the external signal w from d and r for simulation purposes.
Synopsis
[tw]=external(apr,lensim,N,w1);
[tw]=external(apr,lensim,N,w1,w2);
[tw]=external(apr,lensim,N,w1,w2,w3);
[tw]=external(apr,lensim,N,w1,w2,w3,w4);
[tw]=external(apr,lensim,N,w1,w2,w3,w4,w5);
Description
The function computes the vector w̃ with predictions of the external signal w:

    w̃ = [ w(k)  w(k+1)  ...  w(k+N-1) ]^T

where

    w(k) = [ w_1(k)  ...  w_5(k) ]^T

For the lqpc controller, the external signal w(k) is given by

    w(k) = [ d(k) ]

and so we have to choose w_1(k) = d(k). For the gpc controller, the external
signal w(k) is given by

    w(k) = [ d(k)  r(k+1) ]^T

and so we have to choose w_1(k) = d(k) and w_2(k) = r(k+1).
tw is the row vector w̃ with predictions of the external signal w.
apr vector with flags for a priori knowledge of the external signal.
lensim is the length of the simulation interval.
N is the prediction horizon.
w1 is the row vector with external signal w_1(k).
...
w5 is the row vector with external signal w_5(k).
The vector apr determines whether the external signal w(k+j) is a priori known.
For i = 1, ..., 5 there holds:

    signal w_i(k) is a priori known      for apr(i) = 1
    signal w_i(k) is not a priori known  for apr(i) = 0
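The stacking of w̃ can be sketched in numpy as follows. This is a hypothetical helper written for these notes, not the toolbox code; in particular, the constant extrapolation used when apr(i)=0 is an assumption made here for illustration only:

```python
import numpy as np

def stack_external(signals, apr, k, N):
    """Stack predictions w~ = [w(k); w(k+1); ...; w(k+N-1)].

    signals: list of 1-D arrays, one per component w_i.
    apr[i] == 1: future samples of w_i are a priori known (read ahead);
    apr[i] == 0: assumed unknown -- here we extrapolate w_i(k) as a
    constant (this fallback is an assumption, not the toolbox rule)."""
    cols = []
    for j in range(N):
        wj = [s[k + j] if apr[i] else s[k] for i, s in enumerate(signals)]
        cols.append(np.array(wj))
    return np.concatenate(cols)

d = np.array([1.0, 1.0, 1.0, 2.0])   # disturbance samples d(k)
r = np.array([0.0, 0.5, 1.0, 1.5])   # reference samples r(k)
wt = stack_external([d, r], apr=[0, 1], k=0, N=3)
# d is extrapolated from d(0)=1, r is read ahead: [1, 0, 1, 0.5, 1, 1]
```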
See Also
gpc, lqpc
dgamma

Purpose
Function makes the selection matrix Γ̄. For the infinite-horizon case the
function also creates the steady-state matrix Γ̄_ss.
Synopsis
[dGamma] = dgamma(Nm,N,dim);
[dGamma,Gammass] = dgamma(Nm,N,dim);
Description
The selection matrix is defined as

    Γ̄ = diag( Γ(0), Γ(1), ..., Γ(N-1) )

where

    Γ(j) = [ 0  0 ]   for 0 ≤ j ≤ N_m - 1
           [ 0  I ]

    Γ(j) = [ I  0 ]   for N_m ≤ j ≤ N - 1
           [ 0  I ]
Where
N_m is the minimum-cost horizon
N is the prediction horizon
dim is the dimension of the system
The function dgamma returns the diagonal elements of the selection matrix Γ̄,
so dGamma = diag(Γ̄). The input and output variables are the following:
dGamma vector with diagonal elements of Γ̄.
Gammass vector with diagonal elements of Γ̄_ss.
Nm is the minimum-cost horizon.
N is the prediction horizon.
Nc is the control horizon.
dim the dimension of the system.
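The construction of the diagonal of Γ̄ can be sketched in numpy. This is a hypothetical helper written for these notes (the toolbox's dim vector is simplified here to two scalar block sizes n1 and n2):

```python
import numpy as np

def dgamma_diag(Nm, N, n1, n2):
    """Diagonal of the selection matrix Gamma-bar.

    Gamma(j) = blockdiag(0_{n1}, I_{n2}) for 0 <= j <= Nm-1  (first block
    suppressed before the minimum-cost horizon), and
    Gamma(j) = blockdiag(I_{n1}, I_{n2}) for Nm <= j <= N-1."""
    diag = []
    for j in range(N):
        diag += ([0.0] * n1 if j < Nm else [1.0] * n1) + [1.0] * n2
    return np.array(diag)

d = dgamma_diag(Nm=2, N=4, n1=1, n2=1)
# first block suppressed for j = 0, 1:  [0, 1, 0, 1, 1, 1, 1, 1]
```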
See Also
gpc, lqpc, blockmat
pred
Purpose
Function computes the prediction model from the standard model
Synopsis
[sysp,dimp]=pred(sys,dim,N);
Description
The function makes a prediction model for the system

    x(k+1) = A x(k) + B_1 e(k) + B_2 w(k) + B_3 v(k)
    y(k)   = C_1 x(k) + D_11 e(k) + D_12 w(k)
    z(k)   = C_2 x(k) + D_21 e(k) + D_22 w(k) + D_23 v(k)
    φ(k)   = C_3 x(k) + D_31 e(k) + D_32 w(k) + D_33 v(k)
    ψ(k)   = C_4 x(k) + D_41 e(k) + D_42 w(k) + D_43 v(k)

for

    sys = [ A    B_1   B_2   B_3
            C_1  D_11  D_12  0
            C_2  D_21  D_22  D_23
            C_3  D_31  D_32  D_33
            C_4  D_41  D_42  D_43 ]

    dim = [ n_a  n_b1  n_b2  n_b3  n_c1  n_c2  n_c3  n_c4 ]

Where
x(k) is the state of the system
e(k) is zero-mean white noise (ZMWN)
w(k) is the known external signal
v(k) is the control signal
z(k) is the performance signal
φ(k) is the equality constraint at time k
ψ(k) is the inequality constraint at time k
A, B_i, C_j, D_ij are the system matrices
For the above model, the prediction model is given by

    x(k+1) = A x(k) + B̃_1 e(k) + B̃_2 w̃(k) + B̃_3 ṽ(k)
    ỹ(k)   = C̃_1 x(k) + D̃_11 e(k) + D̃_12 w̃(k) + D̃_13 ṽ(k)
    z̃(k)   = C̃_2 x(k) + D̃_21 e(k) + D̃_22 w̃(k) + D̃_23 ṽ(k)
    φ̃(k)   = C̃_3 x(k) + D̃_31 e(k) + D̃_32 w̃(k) + D̃_33 ṽ(k)
    ψ̃(k)   = C̃_4 x(k) + D̃_41 e(k) + D̃_42 w̃(k) + D̃_43 ṽ(k)

where

    p̃ = [ p(k|k)  p(k+1|k)  ...  p(k+N|k) ]^T

stands for the predicted signal for p = y, z, φ, ψ, and for future values for
p = v, w.
The function pred returns the prediction model sysp:

    sysp = [ A     B̃_1   B̃_2   B̃_3
             C̃_1   D̃_11  D̃_12  D̃_13
             C̃_2   D̃_21  D̃_22  D̃_23
             C̃_3   D̃_31  D̃_32  D̃_33
             C̃_4   D̃_41  D̃_42  D̃_43 ]

    dimp = [ n_a  n_b1  N·n_b2  N·n_b3  N·n_c1  N·n_c2  N·n_c3  N·n_c4 ]
The input and output variables are the following:
sysp The prediction model.
dimp The dimension of the prediction model.
sys The system to be controlled.
dim The dimension of the system.
N The prediction horizon.
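The tilde matrices of the prediction model follow from successive substitution of the state equation, which yields stacked and block-Toeplitz matrices. A numpy sketch for a stripped-down case (single channel y, no noise or external signal; an illustration written for these notes, not the toolbox implementation of pred):

```python
import numpy as np

def prediction_matrices(A, B, C, N):
    """Toeplitz-style prediction y~ = Ctil x(k) + Dtil v~ for
    x(k+1) = A x + B v,  y = C x  (noise-free sketch).

    Row block j of Ctil is C A^j; block (j, i) of Dtil is
    C A^(j-1-i) B for i < j (zero on and above the diagonal)."""
    n, m, p = A.shape[0], B.shape[1], C.shape[0]
    Ctil = np.vstack([C @ np.linalg.matrix_power(A, j) for j in range(N)])
    Dtil = np.zeros((N * p, N * m))
    for j in range(N):
        for i in range(j):
            blk = C @ np.linalg.matrix_power(A, j - 1 - i) @ B
            Dtil[j * p:(j + 1) * p, i * m:(i + 1) * m] = blk
    return Ctil, Dtil

# Integrator example: y(k+j|k) = x(k) + v(k) + ... + v(k+j-1)
A = np.array([[1.0]]); B = np.array([[1.0]]); C = np.array([[1.0]])
Ct, Dt = prediction_matrices(A, B, C, 3)
```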
See Also
gpc, lqpc, blockmat
contr
Purpose
Function computes the controller matrices for the predictive controller.
Synopsis
[vv,HH,AA,bb]=contr(sysp,dimp,dim,dGam);
Description
The function computes the matrices of the predictive controller for use in the
function simul.
sysp is the prediction model, dimp is the dimension vector of the prediction
model, dim is the dimension vector of the standard model, and dGam is a row
vector with the diagonal elements of the matrix Γ̄.
If inequality constraints are present, the control problem is transformed into a
quadratic programming problem, where the objective function is chosen as

    J(ṽ_I, k) = (1/2) ṽ_I^T(k) H ṽ_I(k)

subject to

    Ā ṽ_I - b̄ ≤ 0

which can be solved at each time instant k in the function simul.
The variable vv is given by:

    vv = [ F  D_e  D_w  D_I ]

so that

    v(k) = vv [ x_c(k); e(k); w̃(k); ṽ_I(k) ]
         = F x_c(k) + D_e e(k) + D_w w̃(k) + D_I ṽ_I(k)

Further, bb is such that

    b̄(k) = bb [ 1; x_c(k); e(k); w̃(k) ]
If no inequality constraints are present, we find ṽ_I = 0 and the control
problem can be solved analytically at each time instant in the function simul,
using

    v(k) = F x_c(k) + D_e e(k) + D_w w̃(k)

The variable vv is now given by:

    vv = [ F  D_e  D_w ]

so that

    v(k) = vv [ x_c(k); e(k); w̃(k) ]
The input and output variables are the following:
sysp The prediction model.
dimp The dimension of the prediction model.
dim the dimension of the system.
dGam vector with diagonal elements of Γ̄.
vv,HH,AA,bb controller parameters.
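The role of vv in the unconstrained case can be illustrated with toy matrices: the control move is a single matrix-vector product with the stacked vector of controller state, noise estimate, and external-signal predictions. All numbers below are hypothetical, chosen only for the illustration:

```python
import numpy as np

# Toy dimensions: 2 controller states, 1 noise input, stacked w~ of length 2.
F  = np.array([[0.5, -0.2]])
De = np.array([[0.1]])
Dw = np.array([[1.0, 0.0]])
vv = np.hstack([F, De, Dw])           # vv = [ F  De  Dw ]

xc = np.array([2.0, 1.0])             # controller state x_c(k)
e  = np.array([0.5])                  # noise estimate e(k)
wt = np.array([3.0, 7.0])             # stacked external signal w~(k)

v = vv @ np.concatenate([xc, e, wt])  # v(k) = F xc + De e + Dw w~
# 0.5*2 - 0.2*1 + 0.1*0.5 + 1.0*3 = 3.85
```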
See Also
simul, gpc, lqpc
lticll
Purpose
Function computes the state space matrices of the closed loop in the LTI case
(no inequality constraints).
Synopsis
[Acl,B1cl,B2cl,Ccl,D1cl,D2cl]=lticll(sys,dim,vv,N);
Description
Function lticll computes the state space matrices of the LTI closed loop,
given by
    x_cl(k+1) = A_cl x_cl(k) + B_1cl e(k) + B_2cl w̃(k)

    [ v(k) ]  = C_cl x_cl(k) + D_1cl e(k) + D_2cl w̃(k)
    [ y(k) ]
The input and output variables are the following:
sys the model.
dim dimensions sys.
N the prediction horizon.
vv controller parameters.
Acl,B1cl,B2cl,Ccl,D1cl,D2cl system matrices of the closed loop.
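Iterating the closed loop above is a plain state-space recursion. The numpy sketch below (a toy one-state example written for these notes, not toolbox code) shows how the state decays and how the stacked output [v; y] is produced per step:

```python
import numpy as np

def simulate_cl(Acl, B1, B2, Ccl, D1, D2, x0, e, w):
    """Run the LTI closed loop for len(e) steps; return [v; y] per step."""
    x, out = x0, []
    for ek, wk in zip(e, w):
        out.append(Ccl @ x + D1 @ ek + D2 @ wk)   # current outputs
        x = Acl @ x + B1 @ ek + B2 @ wk           # state update
    return np.array(out)

# Toy closed loop: stable pole at 0.5, zero noise and external signal.
Acl = np.array([[0.5]]); B1 = np.array([[1.0]]); B2 = np.array([[0.0]])
Ccl = np.array([[1.0], [1.0]]); D1 = np.zeros((2, 1)); D2 = np.zeros((2, 1))
out = simulate_cl(Acl, B1, B2, Ccl, D1, D2,
                  np.array([1.0]), [np.array([0.0])] * 3, [np.array([0.0])] * 3)
# state decays 1 -> 0.5 -> 0.25; both outputs track it
```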
See Also
gpc, lqpc, gpc_ss, contr, simul, lticon
lticon
Purpose
Function computes the state space matrices of the optimal predictive controller
in the LTI case (no inequality constraints).
Synopsis
[Ac,B1c,B2c,Cc,D1c,D2c]=lticon(sys,dim,vv,N);
Description
Function lticon computes the state space matrices of the optimal predictive
controller, given by
    x_c(k+1) = A_c x_c(k) + B_1c y(k) + B_2c w̃(k)
    v(k)     = C_c x_c(k) + D_1c y(k) + D_2c w̃(k)
The input and output variables are the following:
sys the model.
dim dimensions sys.
N the prediction horizon.
vv controller parameters.
Ac,B1c,B2c,Cc,D1c,D2c system matrices of the LTI controller.
See Also
gpc, lqpc, gpc_ss, contr, simul, lticll
simul
Purpose
Function makes a closed-loop simulation with the predictive controller.
Synopsis
[x,xc,y,v]=simul(syst,sys,dim,N,x,xc,y,v,tw,e,...
k1,k2,vv,HH,AA,bb);
Description
The function makes a closed-loop simulation with the predictive controller. The
input and output variables are the following:
syst the true system.
sys the model.
dim dimensions sys and syst.
N the horizon.
x state of the true system.
xc state of model/controller.
y the true system output.
v the true system input.
tw vector with predictions of the external signal.
e vector with the noise signal e(k).
k1 begin simulation interval.
k2 end simulation interval.
vv,HH,AA,bb controller parameters.
See Also
contr, gpc, lqpc
rhc
Purpose
Function makes a closed-loop simulation in receding horizon mode.
Synopsis
[x,xc,y,v]=rhc(syst,sys,sysp,dim,dimp,dGam,N,x,xc,y,v,tw,e,k1,k2);
Description
The function makes a closed-loop simulation with the predictive controller in
receding horizon mode. Past signals and predictions of future signals are given
within the prediction range. The simulation is paused every sample and can be
continued by pressing the return key. The input and output variables are the
following:
syst the true system.
sys the model.
sysp the prediction model.
dim dimensions sys and syst.
dimp dimensions sysp.
dGam vector with diagonal elements of Γ̄.
N the horizon.
x state of the true system.
xc state of model/controller.
y the true system output.
v the true system input.
tw vector with predictions of the external signal.
e vector with the noise signal e(k).
k1 begin simulation interval.
k2 end simulation interval.
See Also
simul, contr, gpc, lqpc
Index
a priori knowledge, 36
add_du.m, 232
add_end.m, 243
add_nc.m, 242
add_u.m, 234
add_v.m, 236
add_x.m, 240
add_y.m, 238
basis functions, 27
blocking, 27
constraints, 25, 44
control horizon constraint, 46
equality constraints, 25, 44, 46
inequality constraints, 25, 45, 48
state-end point constraint, 47
contr.m, 250
control horizon, 27, 46
Controlled AutoRegressive Integrated Moving Average (CARIMA), 11
Controlled AutoRegressive Moving Average
(CARMA), 11
coprime factorization, 121
dead-beat control, 164
dgamma.m, 246
Diophantine equation, 196
equality constrained SPCP, 58
external.m, 244
feasibility, 96
minimal time algorithm, 97
soft-constraint algorithm, 96
finite horizon SPCP, 56
Full infinite horizon SPCP, 76
full SPCP, 62
generalized minimum variance control, 164
Generalized predictive control (GPC), 11
GPC, 154
gpc.m, 211
gpc2spc.m, 229
gpc_ss.m, 215
imp2syst.m, 222
implementation, 89
implementation, full SPCP case, 91
implementation, LTI case, 89
Increment-input-output (IIO) models, 16
Infinite horizon SPCP, 65
infinite horizon SPCP with control horizon, 72
Input-Output (IO) models, 15
internal model control (IMC) scheme, 116
invariant ellipsoid, 137
io2iio.m, 228
linear matrix inequalities (LMI), 131
LQPC, 154
lqpc.m, 218
lqpc2spc.m, 231
lticll.m, 252
lticon.m, 253
mean-level control, 165
minimum variance control, 164
model, 13
disturbance model, 13, 15
impulse response model, 20, 183
noise model, 15
polynomial model, 20, 191
process model, 13, 15
step response model, 20, 183
MPC applications, 12
MPC using LMIs, 139
optimization, 28
orthogonal basis functions, 28, 50
performance index, 21, 39
GPC performance index, 22, 41, 202
LQPC performance index, 22, 40, 201
standard predictive control, 39
standard predictive control performance index, 43
zone performance index, 23, 42
pole-placement control, 165
pred.m, 248
prediction, 31
polynomial model, 196
state space model, 31
step response model, 186
receding horizon principle, 29
rhc.m, 255
robustness, 124
LMI-based MPC, 140
Schur complement, 133
simul.m, 254
ss2syst.m, 224
stability, 101
end-point constraint, 109
end-point constraint set, 111, 114
inequality constrained case, 105
infinite prediction horizon, 106
internal model control (IMC) scheme, 116
LTI case, 101
terminal constraint, 109
terminal constraint set, 114
terminal cost function with end-point constraint set, 114
Youla parametrization, 121
standard performance index, 24
Standard predictive control problem, 55
standard predictive control problem (SPCP), 39, 51
state space model, 15, 16
steady-state behavior, 66
Structuring input signal, 67
structuring of input signal, 27
syst2ss.m, 226
tf2syst.m, 221
tuning, 151
GPC, 154
initial setting, 153
LQPC, 154
optimization, 158
Unconstrained infinite horizon SPCP, 69
unconstrained SPCP, 56
Youla parametrization, 121
Bibliography
[1] M. Alamir and G. Bornard. Stability of a truncated infinite constrained receding
horizon scheme: The general discrete nonlinear case. Automatica, 31(9):1353–1356,
1995.
[2] F. Allgöwer, T.A. Badgwell, J.S. Qin, and J.B. Rawlings. Nonlinear predictive control
and moving horizon estimation – an introductory overview. In Advances in Control,
Highlights of ECC'99, edited by F. Frank, Springer-Verlag, London, UK, 1999.
[3] K.J. Åström and B. Wittenmark. On self-tuning regulators. Automatica, 9:185–199,
1973.
[4] V. Balakrishnan, A. Zheng, and M. Morari. Constrained stabilization of discrete-time
systems. In Advances in MBPC, Oxford University Press, 1994.
[5] A. Bemporad, M. Morari, V. Dua, and E.N. Pistikopoulos. The explicit linear
quadratic regulator for constrained systems. Automatica, 38(1):3–20, 2002.
[6] B.W. Bequette. Nonlinear control of chemical processes: A review. Ind. Eng. Chem.
Res., 30(7):1391–1413, 1991.
[7] R.R. Bitmead, M. Gevers, and V. Wertz. Adaptive Optimal Control. The Thinking
Man's GPC. Prentice Hall, Upper Saddle River, New Jersey, 1990.
[8] H.H.J. Bloemen, T.J.J. van den Boom, and H.B. Verbruggen. Optimizing the end-point
state weighting in Model-based Predictive Control. Automatica, 38(6):1061–1068,
June 2002.
[9] S. Boyd and Y. Nesterov. Nonlinear convex optimization: New methods and new
applications. Minicourse on the European Control Conference, Brussels, Belgium,
July 1–4, 1997.
[10] S.P. Boyd and C.H. Barratt. Linear Controller Design: Limits of Performance.
Prentice Hall, Information and System Sciences Series, Englewood Cliffs, New Jersey,
1991.
[11] E.F. Camacho and C. Bordons. Model Predictive Control in the Process Industry,
Advances in Industrial Control. Springer, London, 1995.
[12] C.C. Chen and L. Shaw. On receding horizon feedback control. Automatica,
18:349–352, 1982.
[13] D.W. Clarke and C. Mohtadi. Properties of generalized predictive control. Automatica,
25(6):859–875, 1989.
[14] D.W. Clarke, C. Mohtadi, and P.S. Tuffs. Generalized predictive control – Part I.
The basic algorithm. Automatica, 23(2):137–148, 1987.
[15] D.W. Clarke, C. Mohtadi, and P.S. Tuffs. Generalized predictive control – Part II.
Extensions and interpretations. Automatica, 23(2):149–160, 1987.
[16] D.W. Clarke and R. Scattolini. Constrained receding horizon predictive control. IEE
Proc.-D, 138(4), 1991.
[17] C.R. Cutler and B.L. Ramaker. Dynamic matrix control – a computer control
algorithm. In AIChE Nat. Mtg, 1979.
[18] C.R. Cutler and B.L. Ramaker. Dynamic matrix control – a computer control
algorithm. In Proceedings Joint American Control Conference, San Francisco, CA,
USA, 1980.
[19] F.C. Cuzzola, J.C. Geromel, and M. Morari. An improved approach for constrained
robust model predictive control. Automatica, 38(7):1183–1189, 2002.
[20] R.M.C. De Keyser and A.R. van Cauwenberghe. Typical application possibilities for
self-tuning predictive control. In IFAC Symp. Ident. & Syst. Par. Est., 1982.
[21] R.A.J. De Vries and H.B. Verbruggen. Multivariable process and prediction models in
predictive control – a unified approach. Int. J. Adapt. Contr. & Sign. Proc., 8:261–278,
1994.
[22] V.F. Demyanov and L.V. Vasilev. Nondifferentiable Optimization, Optimization
Software. Springer-Verlag, 1985.
[23] B. Ding, Y. Xi, and S. Li. A synthesis approach of on-line constrained robust model
predictive control. Automatica, 40(1):163–167, 2004.
[24] J.C. Doyle, B.A. Francis, and A.R. Tannenbaum. Feedback Control Theory. MacMillan
Publishing Company, New York, USA, 1992.
[25] J.C. Doyle, K. Glover, P.P. Khargonekar, and B.A. Francis. State-space solutions to
standard H2 and H∞ control problems. IEEE AC, 34:831–847, 1989.
[26] C.E. Garcia and M. Morari. Internal model control. 1. A unifying review and some
new results. Ind. Eng. Chem. Proc. Des. Dev., 21(2):308–323, 1982.
[27] C.E. Garcia and A.M. Morshedi. Quadratic programming solution of dynamic matrix
control (QDMC). Chem. Eng. Comm., 46(11):73–87, 1986.
[28] C.E. Garcia, D.M. Prett, and M. Morari. Model predictive control: Theory and
practice – a survey. Automatica, 25(3):335–348, 1989.
[29] C.E. Garcia, D.M. Prett, and B.L. Ramaker. Fundamental Process Control.
Butterworths, Stoneham, MA, 1988.
[30] H. Genceli and M. Nikolaou. Robust stability analysis of constrained ℓ1-norm model
predictive control. AIChE Journ., 39(12):1954–1965, 1993.
[31] E.G. Gilbert and K.T. Tan. Linear systems with state and control constraints: the
theory and application of maximal output admissible sets. IEEE AC, 36:1008–1020,
1991.
[32] M. Grötschel, L. Lovász, and A. Schrijver. Geometric Algorithms and Combinatorial
Optimization. Springer-Verlag, Berlin, Germany, 1988.
[33] T. Kailath. Linear Systems. Prentice Hall, New York, 1980.
[34] M. Kinnaert. Adaptive generalized predictive controller for MIMO systems. Int. J.
Contr., 50(1):161–172, 1989.
[35] M.V. Kothare, V. Balakrishnan, and M. Morari. Robust constrained predictive control
using linear matrix inequalities. Automatica, 32(10):1361–1379, 1996.
[36] B. Kouvaritakis, J.A. Rossiter, and J. Schuurmans. Efficient robust predictive control.
IEEE AC, 45(8):1545–1549, 2000.
[37] W.H. Kwon and A.E. Pearson. On feedback stabilization of time-varying discrete
systems. IEEE AC, 23:479–481, 1979.
[38] J.H. Lee, M. Morari, and C.E. Garcia. State-space interpretation of model predictive
control. Automatica, 30(4):707–717, 1994.
[39] J.H. Lee and Z.H. Yu. Tuning of model predictive controllers for robust performance.
Comp. Chem. Eng., 18(1):15–37, 1994.
[40] C.E. Lemke. On complementary pivot theory. In Mathematics of the Decision
Sciences, G.B. Dantzig and A.F. Veinott (Eds.), 1968.
[41] L. Ljung. System Identification: Theory for the User. Prentice Hall, Englewood Cliffs,
NJ, 1987.
[42] Y. Lu and Y. Arkun. Quasi-min-max MPC algorithms for LPV systems. Automatica,
36(4):527–540, 2000.
[43] J.M. Maciejowski. Multivariable Feedback Design. Addison-Wesley Publishers,
Wokingham, UK, 1989.
[44] J.M. Maciejowski. Predictive Control with Constraints. Prentice Hall, Pearson
Education Limited, Harlow, UK, 2002.
[45] D.Q. Mayne, J.B. Rawlings, C.V. Rao, and P.O.M. Scokaert. Constrained model
predictive control: stability and optimality. Automatica, 36:789–814, 2000.
[46] M. Morari and E. Zafiriou. Robust Process Control. Prentice Hall, Englewood Cliffs,
New Jersey, 1989.
[47] E. Mosca and J. Zhang. Stable redesign of predictive control. Automatica,
28(6):1229–1233, 1992.
[48] A. Naganawa, G. Obinata, and H. Inooka. A design method of model predictive
control system using coprime factorization approach. In ASCC, 1994.
[49] Y. Nesterov and A. Nemirovskii. Interior-Point Polynomial Algorithms in Convex
Programming. SIAM, Philadelphia, 1994.
[50] S.J. Qin and T.A. Badgwell. An overview of industrial model predictive control
technology. In Chemical Process Control – V, AIChE Symposium Series – American
Institute of Chemical Engineers, volume 93, pages 232–256, 1997.
[51] J.B. Rawlings and K.R. Muske. The stability of constrained receding horizon control.
IEEE AC, 38:1512–1516, 1993.
[52] J. Richalet. Industrial applications of model based predictive control. Automatica,
29(5):1251–1274, 1993.
[53] J. Richalet, A. Rault, J.L. Testud, and J. Papon. Algorithmic control of industrial
processes. In Proceedings of the 4th IFAC Symposium on Identification and System
Parameter Estimation, Tbilisi, 1976.
[54] J. Richalet, A. Rault, J.L. Testud, and J. Papon. Model predictive heuristic control:
Applications to industrial processes. Automatica, 14(1):413–428, 1978.
[55] N.L. Ricker, T. Subrahmanian, and T. Sim. Case studies of model-predictive control
in pulp and paper production. In Proc. 1988 IFAC Workshop on Model Based Process
Control, T.J. McAvoy, Y. Arkun and E. Zafiriou (eds.), Pergamon Press, Oxford,
page 13, 1988.
[56] J.A. Rossiter. Model-Based Predictive Control: A Practical Approach. CRC Press,
Inc., 2003.
[57] R. Rouhani and R.K. Mehra. Model algorithmic control (MAC): Basic theoretical
properties. Automatica, 18(4):401–414, 1982.
[58] L.E. Scales. Introduction to Non-Linear Optimization. MacMillan, London, UK, 1985.
[59] P.O.M. Scokaert. Infinite horizon generalized predictive control. Int. J. Control,
66(1):161–175, 1997.
[60] P.O.M. Scokaert and J.B. Rawlings. Constrained linear quadratic regulation. IEEE
Transactions on Automatic Control, 43(8):1163–1169, 1998.
[61] A.R.M. Soeterboek. Predictive Control – A Unified Approach. Prentice Hall,
Englewood Cliffs, NJ, 1992.
[62] E.D. Sontag. An algebraic approach to bounded controllability of linear systems.
Int. J. Contr., pages 181–188, 1984.
[63] V. Strejc. State Space Theory of Discrete Linear Control. Academia, Prague, 1981.
[64] M. Sznaier and M.J. Damborg. Suboptimal control of linear systems with state and
control inequality constraints. In Proceedings of the 26th IEEE Conference on Decision
and Control, pages 761–762, Los Angeles, 1987.
[65] T.J.J. van den Boom. Model based predictive control, status and perspectives. In
Tutorial Paper on the CESA IMACS Conference, Symposium on Control, Optimization
and Supervision, volume 1, pages 1–12, Lille, France, 1996.
[66] T.J.J. van den Boom, M. Ayala Botto, and P. Hoekstra. Design of an analytic
constrained predictive controller using neural networks. International Journal of
Systems Science, 36(10):639–650, 2005.
[67] T.J.J. van den Boom and R.A.J. de Vries. Robust predictive control using a
time-varying Youla parameter. Journal of Applied Mathematics and Computer
Science, 9(1):101–128, 1999.
[68] E.T. van Donkelaar. Improvement of efficiency in system identification and model
predictive control of industrial processes. PhD Thesis, Delft University of Technology,
Delft, The Netherlands, 2000.
[69] E.T. van Donkelaar, O.H. Bosgra, and P.M.J. Van den Hof. Model predictive control
with generalized input parametrization. In ECC, 1999.
[70] P. Van Overschee and B. de Moor. Subspace algorithms for the stochastic identification
problem. Automatica, 29(3):649–660, 1993.
[71] M. Verhaegen and P. Dewilde. Subspace model identification. Part I: The output-error
state space model identification class of algorithms / Part II: Analysis of the
elementary output-error state space model identification algorithm. Int. J. Contr.,
56(5):1187–1241, 1992.
[72] M. Vidyasagar. Nonlinear Systems Analysis. SIAM Classics in Applied Mathematics,
SIAM Press, 2002.
[73] R.A.J. De Vries and T.J.J. van den Boom. Constrained predictive control with
guaranteed stability and convex optimization. In American Control Conference, pages
2842–2846, Baltimore, Maryland, 1994.
[74] R.A.J. De Vries and T.J.J. van den Boom. Robust stability constraints for predictive
control. In 4th European Control Conference, Brussels, Belgium, 1997.
[75] Z. Wan and M.V. Kothare. An efficient off-line formulation of robust model predictive
control using linear matrix inequalities. Automatica, 39(5):837–846, 2003.
[76] P. Wolfe. The simplex method for quadratic programming. Econometrica, 27:382–398,
1959.
[77] E. Zafiriou. Robust model predictive control of processes with hard constraints.
Comp. Chem. Engng., 14(4/5):359–371, 1990.
[78] A. Zheng. Reducing on-line computational demands in model predictive control by
approximating QP constraints. Journal of Process Control, 9(4):279–290, 1999.
[79] A. Zheng and M. Morari. Stability of model predictive control with mixed constraints.
IEEE AC, 40(10):1818–1823, 1995.
[80] Z.Q. Zheng and M. Morari. Robust stability of constrained model predictive control.
In ACC, 1993.