
State feedback

State feedback control law: u(t) = -K x(t)

[Block diagram: inside the plant P(s), u(s) enters through B, is summed with A x(s), and passes
through the integrator 1/s to give x(s); C maps x(s) to y(s). The state x(s) is fed back through
the gain K and subtracted to form u(s) = -K x(s).]
Salient points:
1. There is no reference input (we will add one later). We are driving the state to zero.
2. We can measure all of the states, which is often not realistic. We will remove this
assumption later by designing an estimator.
3. The feedback gain, K, is static. This controller has no states.
Roy Smith: ECE 147b 11: 2
State Feedback
State-space models allow us to use a very powerful design method: state feedback.
To begin: assume that we can measure all of the state vector.
We will address the case where this cannot be done later.
State feedback control law: u(t) = -K x(t) or, equivalently, u(s) = -K x(s)
[Block diagram (same as above): state feedback u(s) = -K x(s) wrapped around the plant P(s),
which consists of B, the integrator 1/s, A and C.]
Roy Smith: ECE 147b 11: 1
State feedback
Structure of the state feedback control gains:
Examine the feedback gain matrix in more detail...

u(k) = -K x(k)

     = - [ K_1  K_2  ...  K_n ] [ x_1(k) ]
                                [ x_2(k) ]
                                [  ...   ]
                                [ x_n(k) ]
Note that K is a static n_u × n_x matrix.
We have at least as many control gains in K as states in the system.
We can often use state feedback to place the closed-loop poles anywhere we want.
This may take a lot of gain. In other words, large values of K_i.
Roy Smith: ECE 147b 11: 4
State feedback
Discrete-time state feedback
Given a discrete-time state-space system,
x(k + 1) = Ax(k) + Bu(k)
Recall that the poles of the system are the eigenvalues of A.
State feedback law: u(k) = -K x(k).
This gives,
x(k + 1) = A x(k) - BK x(k)
         = (A - BK) x(k)        (A - BK: the closed-loop dynamics)
So state feedback changes the pole locations (which is what we expect feedback to do).
The closed-loop poles are given by the eigenvalues of A - BK.
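A quick numerical check of this (not from the slides; the A, B and K values below are made up)
in Python/NumPy:

  # Eigenvalues of A - BK are the closed-loop poles under u(k) = -K x(k).
  # The numbers here are illustrative only.
  import numpy as np

  A = np.array([[1.1, 0.2],
                [0.0, 0.9]])
  B = np.array([[0.0],
                [1.0]])
  K = np.array([[0.5, 0.8]])      # 1 x 2 static gain

  print("open-loop poles:  ", np.linalg.eigvals(A))
  print("closed-loop poles:", np.linalg.eigvals(A - B @ K))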
Roy Smith: ECE 147b 11: 3
State feedback
Controllable canonical form
One particular state-space representation (controllable canonical form) makes this
particularly easy.
For example,
P(z) = (b_1 z^2 + b_2 z + b_3) / (z^3 + a_1 z^2 + a_2 z + a_3).
The controllable canonical form for this is:
A = [ -a_1  -a_2  -a_3 ]      B = [ 1 ]      C = [ b_1  b_2  b_3 ].
    [   1     0     0  ]          [ 0 ]
    [   0     1     0  ]          [ 0 ]
Now the closed-loop dynamics, A - BK, are given by,

A - BK = [ -a_1  -a_2  -a_3 ]   [ 1 ]
         [   1     0     0  ] - [ 0 ] [ K_1  K_2  K_3 ]
         [   0     1     0  ]   [ 0 ]

       = [ -a_1 - K_1   -a_2 - K_2   -a_3 - K_3 ]
         [      1             0             0   ]
         [      0             1             0   ]

where it is easy to see the characteristic polynomial.
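As an illustration (my own sketch, with made-up coefficient values a_i, b_i), the canonical-form
matrices can be built and checked numerically:

  # Build the controllable canonical form of P(z) and confirm that the
  # characteristic polynomial of A is z^3 + a_1 z^2 + a_2 z + a_3.
  import numpy as np

  a = [0.5, -0.3, 0.1]            # a_1, a_2, a_3  (assumed values)
  b = [1.0, 2.0, 0.5]             # b_1, b_2, b_3  (assumed values)

  A = np.array([[-a[0], -a[1], -a[2]],
                [  1.0,   0.0,   0.0],
                [  0.0,   1.0,   0.0]])
  B = np.array([[1.0], [0.0], [0.0]])
  C = np.array([[b[0], b[1], b[2]]])

  print(np.poly(A))               # -> [1, a_1, a_2, a_3]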
Roy Smith: ECE 147b 11: 6
State feedback
Calculating K
Start by specifying the desired closed-loop poles. Say they have positions in the z-plane:

z = λ_1, λ_2, ..., λ_n.
Note that root-locus arguments, or a knowledge of nice pole positions, can often be used to
come up with suitable closed-loop pole locations.
The closed-loop pole locations can also be specified as the roots of the polynomial,

(z - λ_1)(z - λ_2) ... (z - λ_n) = 0.
Now, under state feedback, the closed-loop poles are the roots of,
det(zI - (A - BK)) = det(zI - A + BK) = 0.
One approach:
Match coefficients in these two expressions and solve for K_1, K_2, ..., K_n.
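As a small aside (not in the original notes), the "desired" side of this match is easy to form
numerically: np.poly converts a list of chosen poles into the polynomial coefficients that get
equated, power by power, with those of det(zI - A + BK). The pole values below are illustrative:

  import numpy as np

  desired_poles = [0.5, 0.4 + 0.2j, 0.4 - 0.2j]
  alpha = np.poly(desired_poles)     # [1, alpha_1, ..., alpha_n]
  print(alpha)                       # conjugate-pair poles -> real coefficients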
Roy Smith: ECE 147b 11: 5
State feedback
General case: Ackermann's Formula

Given the state equation (A, B), and desired closed-loop pole positions (specified by
coefficients α_1, ..., α_n), Ackermann's formula calculates K.
First form the controllability matrix,
C = [ B  AB  A^2 B  ...  A^(n-1) B ]      (an n × n matrix when n_u = 1)
If the desired closed-loop pole positions are given by,
z^n + α_1 z^(n-1) + α_2 z^(n-2) + ... + α_n = 0,
then define:

α_c(A) := A^n + α_1 A^(n-1) + ... + α_n I      (this is an n × n matrix).
Ackermann's formula:   K = [ 0  ...  0  1 ] C^-1 α_c(A)
In Matlab, K can be calculated by the command: place.
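A sketch of Ackermann's formula in Python/NumPy (my own helper, with illustrative A, B and pole
values; scipy.signal.place_poles, the general pole-placement routine analogous to Matlab's
place, is used only as a cross-check):

  import numpy as np
  from scipy.signal import place_poles

  def acker(A, B, poles):
      n = A.shape[0]
      # Controllability matrix  C = [B  AB  ...  A^(n-1)B]
      ctrb = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])
      # alpha_c(A) = A^n + alpha_1 A^(n-1) + ... + alpha_n I
      alpha = np.poly(poles)
      alpha_c = sum(alpha[i] * np.linalg.matrix_power(A, n - i) for i in range(n + 1))
      e_n = np.zeros((1, n)); e_n[0, -1] = 1.0        # [0 ... 0 1]
      return e_n @ np.linalg.inv(ctrb) @ alpha_c

  A = np.array([[1.0, 0.1], [0.0, 1.0]])
  B = np.array([[0.005], [0.1]])
  K = acker(A, B, [0.8, 0.7])
  print(K)
  print(np.linalg.eigvals(A - B @ K))                 # -> approx [0.8, 0.7]
  print(place_poles(A, B, [0.8, 0.7]).gain_matrix)    # should agree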
Roy Smith: ECE 147b 11: 8
State feedback
Controllable canonical form
The closed-loop matrix is still in controllable canonical form,
A - BK = [ -a_1 - K_1   -a_2 - K_2   -a_3 - K_3 ]
         [      1             0             0   ]
         [      0             1             0   ]
and so,
det(zI - A + BK) = z^3 + (a_1 + K_1) z^2 + (a_2 + K_2) z + (a_3 + K_3).
Equating coefficients with the desired closed-loop poles is easy:
Say, the desired closed-loop poles are given by,
det(zI - A + BK) = (z - λ_1)(z - λ_2)(z - λ_3)
                 = z^3 + α_1 z^2 + α_2 z + α_3 = 0,
Then,

K_1 = α_1 - a_1,    K_2 = α_2 - a_2,    K_3 = α_3 - a_3.
Roy Smith: ECE 147b 11: 7
Controllability
Controllability under similarity transforms
What is the effect of a similarity transform, x(k) = V ξ(k)?
This gives,
ξ(k + 1) = V^-1 A V ξ(k) + V^-1 B u(k),
and the controllability matrix is now,
C_ξ = [ (V^-1 B)   (V^-1 A V V^-1 B)   (V^-1 A V V^-1 A V V^-1 B)   ... ]

    = V^-1 [ B  AB  A^2 B  ...  A^(n-1) B ]

    = V^-1 C.
Because V^-1 is invertible, rank(C_ξ) = rank(C).
In other words, controllability is not affected by a similarity transform. C_ξ ≠ C in general,
but if one is invertible, so is the other.
We can transform (via similarity) the system into controllable canonical form iff the system is
controllable.

Note that controllability applies to the state of a system. It doesn't make sense in terms of
transfer functions (which can always be expressed as controllable canonical forms).
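A quick numerical confirmation (my own sketch; A, B and the random V below are illustrative)
that the controllability rank survives a similarity transform:

  import numpy as np

  rng = np.random.default_rng(0)

  def ctrb(A, B):
      n = A.shape[0]
      return np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])

  A = np.array([[1.0, 0.1, 0.0],
                [0.0, 1.0, 0.1],
                [0.0, 0.0, 0.9]])
  B = np.array([[0.0], [0.0], [1.0]])
  V = rng.standard_normal((3, 3))            # invertible with probability 1

  A_xi = np.linalg.inv(V) @ A @ V
  B_xi = np.linalg.inv(V) @ B

  print(np.linalg.matrix_rank(ctrb(A, B)),
        np.linalg.matrix_rank(ctrb(A_xi, B_xi)))
  # the two ranks agree (here both 3) even though the two matrices differ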
Roy Smith: ECE 147b 11: 10
Controllability
Note that Ackermann's formula,

K = [ 0  ...  0  1 ] C^-1 α_c(A),

will not work if C^-1 does not exist.
Controllability matrix:
C = [ B  AB  A^2 B  ...  A^(n-1) B ].
A system is called controllable if and only if C is invertible (equivalently, rank(C) = n).
Formal definition:

A system is controllable (or state controllable) if and only if for any given initial state,
x(t_0) = x_0, and any given final state, x_1, specified at an arbitrary future time, t_1 > t_0,
there exists a control input u(t) that will drive the system from x(t_0) = x_0 to x(t_1) = x_1.
Roy Smith: ECE 147b 11: 9
Controllability
Another example: Flexible space structures
Large space telescopes:
Flexible modes must be suppressed
Limited number of piezoelectric actuators
Many locations for placing actuators
Which locations give the most controllable structure?
Roy Smith: ECE 147b 11: 12
State feedback
Controllability as a design tool
Consider a two-input, two-output system,
[ x_1(k+1) ]   [ 1  T ] [ x_1(k) ]   [ T^2/2   0 ] [ u_1(k) ]
[ x_2(k+1) ] = [ 0  1 ] [ x_2(k) ] + [   0     T ] [ u_2(k) ]

[ y_1(k) ]   [ 1  0 ] [ x_1(k) ]
[ y_2(k) ] = [ 0  1 ] [ x_2(k) ],
where T is the sample period.
This system is unstable (where are the poles?)
Suppose that implementing u_1(k) costs $50 and implementing u_2(k) costs $100. What is the
minimum cost to stabilize the system: $50, $100 or $150?
Controllability can answer questions like this and is useful when actually building systems to
be controlled.
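A controllability check of each actuator on its own speaks to the cost question (my own sketch;
T = 0.1 is an arbitrary choice):

  import numpy as np

  T  = 0.1
  A  = np.array([[1.0,   T],
                 [0.0, 1.0]])
  B1 = np.array([[T**2 / 2], [0.0]])    # actuator u_1 only
  B2 = np.array([[0.0], [T]])           # actuator u_2 only

  def ctrb_rank(A, B):
      n = A.shape[0]
      return np.linalg.matrix_rank(
          np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)]))

  print("u_1 alone:", ctrb_rank(A, B1))   # rank 1 -> not controllable
  print("u_2 alone:", ctrb_rank(A, B2))   # rank 2 -> controllable

The rank test shows that (A, B_2) is controllable while (A, B_1) is not, so u_2 on its own is
enough to place the closed-loop poles.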
Roy Smith: ECE 147b 11: 11
State feedback
Example continued
The closed-loop state-space system with u(k) = -K x(k) is,
x(k + 1) = (A - BK) x(k)
         = (e^T - (1 - e^T) K) x(k).
And putting the only pole at zero means that A - BK = 0:
e^T - (1 - e^T) K = 0,  which implies that  K = e^T / (1 - e^T).
Controllability?
C = [ B ] = 1 - e^T ≠ 0,  so it is controllable.
Other pole positions

For λ = 0.5,  K = (e^T - 0.5) / (1 - e^T),  and for λ = 0.75,  K = (e^T - 0.75) / (1 - e^T).
Simulate these choices, with intersample behavior.
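A sketch of the requested simulation at the sample instants only (the continuous intersample
response is not reproduced here; T = 0.5 and x(0) = 1 are arbitrary choices):

  import numpy as np

  T = 0.5
  a = np.exp(T)          # A = e^T
  b = 1 - np.exp(T)      # B = 1 - e^T, as in the state-space model above

  for lam in (0.0, 0.5, 0.75):
      K = (a - lam) / b               # places the closed-loop pole at lam
      x = 1.0
      xs = []
      for k in range(10):
          xs.append(x)
          u = -K * x                  # state feedback u(k) = -K x(k)
          x = a * x + b * u           # equals lam * x
      print(f"lambda = {lam:4.2f}:", np.round(xs, 4))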
Roy Smith: ECE 147b 11: 14
State feedback
A simple pole placement example
P(s) = 1 / (s - 1),  an unstable system.
The ZOH equivalent of this system is,
P(z) = (1 - e^T) / (z - e^T),  with a pole at z = e^T (outside the unit circle).
One particular state-space representation is,
x(k + 1) = e^T x(k) + (1 - e^T) u(k)
    y(k) = 1 · x(k)
In this case we are measuring the entire state (1 pole = 1 state), and so state feedback is the
same as output feedback.
Pole placement: Let's try λ = 0 (deadbeat control).

The desired closed-loop characteristic polynomial is simply α_c(z) = z = 0.
Roy Smith: ECE 147b 11: 13
State feedback
State feedback: picking good closed-loop pole locations
Choosing the closed-loop pole locations can be difficult.
You know the general good locations (in the z-plane and the s-plane). But...
1. Look at the transient responses of the system and identify the various pole locations.
Change the ones that you don't like.
2. The further you move the poles, the greater the gain you will require. This can lead to
cost and/or saturation problems (i.e., move them as little as possible).
3. Take an optimal control course to find out the best tradeoff between fast response and
input signal size. This can be viewed as finding the K which minimizes the cost:

   Σ_{k=0}^{∞} [ x(k)^T Q x(k) + u(k)^T R u(k) ],

where Q and R are positive definite weighting matrices for the state error and the control
action respectively. This is known as a Linear Quadratic Regulator (LQR) problem (a numerical
sketch follows below).
ECE 230a and ECE 147c cover this material.
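A minimal sketch of point 3 in discrete time, using SciPy's Riccati solver (A, B, Q, R below are
illustrative; the optimal gain is K = (R + B^T P B)^-1 B^T P A, where P solves the discrete
algebraic Riccati equation):

  import numpy as np
  from scipy.linalg import solve_discrete_are

  A = np.array([[1.0, 0.1],
                [0.0, 1.0]])
  B = np.array([[0.005],
                [0.1]])
  Q = np.eye(2)                 # state weighting
  R = np.array([[1.0]])         # control weighting (increase to penalize large u)

  P = solve_discrete_are(A, B, Q, R)
  K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
  print("K =", K)
  print("closed-loop poles:", np.linalg.eigvals(A - B @ K))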
Roy Smith: ECE 147b 11: 16
State feedback
Pole position versus actuator effort:
[Figure: state x(k) (top) and input u(k) (bottom) versus sample index, 0 to 10, for
K = e^T / (1 - e^T) (λ = 0), K = (e^T - 0.5) / (1 - e^T) (λ = 0.5), and
K = (e^T - 0.75) / (1 - e^T) (λ = 0.75).]
Nice intersample behavior is due to the simplicity of the system (1 state).
Roy Smith: ECE 147b 11: 15