
ECE 486 STATE OBSERVER DESIGN Fall 08

Reading: FPE, Section 7.7, 7.7.1, 7.7.3


Consider again the plant
    ẋ = Ax + Bu
    y = Cx .
We have seen that if this realization is controllable, we can arbitrarily place the closed loop poles via state
feedback of the form u = -Kx + r. However, there is one problem with this: it assumes that we have
access to the entire state vector x. This is typically not the case in practice, since we only have access
to the output y, which represents sensor measurements of only a few of the states. Measuring all of the
states via sensors is usually not possible, since sensors can be expensive, and some states simply cannot be
measured (for example, the state might represent the temperature inside an extremely hot reactor, where
it is not possible to place a sensor without damaging it). How can we place the closed loop poles if we do
not have access to the entire state?
The commonly used method to get around this problem is to construct an estimator for the state based on
the output y. Specifically, the output measures some of the state variables, which are affected by the states
that we do not measure. So by examining how the measured states change with time, we can potentially
determine the values of the unmeasured states as well. We will do this by constructing a state observer
(also called a state estimator). We will then use the state estimate provided by the observer to control
the system. This is called observer feedback and the feedback loop will look like this:

[Figure: block diagram of the observer feedback loop]
Note that we are allowing the observer to have access to the input u since we are assuming that u is ours
to choose, and therefore known.
1 State Estimator Design
To see how we can obtain an estimate of the entire state, suppose that we construct a new system with
state z that mimics the behavior of the plant:
    ż = Az + Bu .
If we initialize this system with z(0) = x(0) and we apply the same input u to this system and the plant,
we would have z(t) = x(t) for all time. Thus, we would have a perfect estimate of the state for all time,
and we could use the state feedback control u = -Kz, where K is the control gain required to place the
poles at desired locations. In summary, if we knew the initial state x(0) of the system, we could technically
obtain an estimate of the state at any time. However, there are some problems with this:
• We may not know the initial state of the system (especially if we cannot measure some of the states
  of the system).

• The above observer does not make use of any measurements of the states, and thus it has no way of
  correcting itself if the estimated states start diverging from the actual states (e.g., due to noise or
  disturbances in the system).
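To see the second shortcoming concretely, the following MATLAB sketch simulates the plant and the
open-loop estimator ż = Az + Bu side by side. The matrices, input, and initial conditions here are
illustrative choices (they are not taken from these notes); since the error e = x - z obeys ė = Ae for this
estimator, any initial mismatch grows whenever A has right half plane eigenvalues.

    % Plant and open-loop estimator simulated together (illustrative numbers).
    A = [1 1; 0 2];                       % unstable plant matrix
    B = [1; 1];
    u = @(t) sin(t);                      % known input applied to both systems
    f = @(t,w) [A*w(1:2) + B*u(t);        % plant state x = w(1:2)
                A*w(3:4) + B*u(t)];       % estimator state z = w(3:4)
    w0 = [1; -1; 0; 0];                   % x(0) = [1; -1], z(0) = [0; 0] (mismatched)
    [t, w] = ode45(f, [0 5], w0);
    e = w(:,1:2) - w(:,3:4);              % estimation error e = x - z
    plot(t, e); xlabel('t'); ylabel('e(t)');   % error grows since A is unstable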
In order to fix these shortcomings, we will modify the observer equation as follows:

    ż = Az + Bu + L(y - Cz) .

In this modified observer, the role of the corrective term L(y - Cz) is to utilize the measurements of the
state vector in order to help the observer do a good job of tracking the state. Specifically, since y = Cx,
the term y - Cz represents the error between the measured states and the estimates of those states. If
z(t) = x(t) (i.e., the state observer is perfectly synchronized with the state), then the term y - Cz will be
zero. If the state estimate is different from the actual state, however, the hope is that the term y - Cz
will also be nonzero, and help to reduce the estimation error to zero. The gain matrix L is used to ensure
that this will happen.
To see how to choose L, let us examine the estimation error defined as e = x - z. The evolution of the
estimation error is given by

    ė = ẋ - ż
      = Ax + Bu - Az - Bu - L(y - Cz)
      = A(x - z) - L(Cx - Cz)
      = Ae - LCe
      = (A - LC)e .
Taking Laplace transforms, we get

    sE(s) - e(0) = (A - LC)E(s)   ⇒   E(s) = (sI - (A - LC))^(-1) e(0) .
The right hand side of the above equation involves a matrix of transfer functions, each of which has
denominator given by det(sI - (A - LC)) (i.e., the poles are the eigenvalues of A - LC). In order for the
error to go to zero regardless of the initial estimation error, we want all of the poles to be in the OLHP.
This means that we have to choose the matrix L so that the eigenvalues of A - LC are all in the OLHP.
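As a quick numerical illustration of this requirement (all numbers here are illustrative, not prescribed by
the notes), one can check the eigenvalues of A - LC for a candidate gain L and watch the error
e(t) = exp((A - LC)t) e(0) decay:

    % Error dynamics check for a candidate observer gain L (illustrative numbers).
    A = [1 1; 0 2];
    C = [1 0];
    L = [10; 30];                         % this choice puts eig(A - L*C) at -3 and -4
    eig(A - L*C)                          % all eigenvalues should be in the OLHP
    e0 = [1; -1];                         % some initial estimation error
    t  = 0:0.01:3;
    E  = zeros(2, numel(t));
    for k = 1:numel(t)
        E(:,k) = expm((A - L*C)*t(k)) * e0;    % e(t) = exp((A - LC)t) e(0)
    end
    plot(t, E); xlabel('t'); ylabel('e(t)');   % error decays to zero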
1.1 Condition For Placing Eigenvalues of A - LC: Observability
To determine conditions on A and C which will allow us to arbitrarily place the eigenvalues of A - LC, let
us make a connection to controllability. Recall that if the pair (A, B) is controllable, then it is possible to
place the eigenvalues of A - BK arbitrarily via a choice of matrix K. For the observer, we are dealing with
the matrix A - LC; this is different from A - BK because the gain matrix L pre-multiplies the matrix C,
whereas the gain matrix K post-multiplies the matrix B. However, note that the eigenvalues of a matrix
are the same as the eigenvalues of the transpose of the matrix. This means that the eigenvalues of A - LC
are the same as the eigenvalues of the matrix A^T - C^T L^T, and this matrix has the same form as A - BK.
Based on our discussion of controllability, we know that if the pair (A^T, C^T) is controllable, then we can
choose L to place the eigenvalues of A^T - C^T L^T (and thus A - LC) at arbitrary locations. Recall that
the pair (A^T, C^T) is controllable if and only if

    rank [ C^T   A^T C^T   ...   (A^T)^(n-1) C^T ] = n .
Since the rank of a matrix is unchanged if we take the transpose of the matrix, we can equivalently state
the above condition as

    rank O = n ,   where   O = [ C ; CA ; ... ; CA^(n-1) ]

(the block rows C, CA, ..., CA^(n-1) stacked on top of one another).
The matrix O in the above equation is called the observability matrix for the pair (A, C), and so we
can state the following result.
The eigenvalues of A - LC can be placed arbitrarily if and only if the rank of the observability
matrix

    O = [ C ; CA ; ... ; CA^(n-1) ]

is n. In this case, the pair (A, C) is said to be observable.
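In MATLAB, the observability matrix can be built directly from its definition, or (if the Control System
Toolbox is available) with the obsv command; a small sketch with illustrative matrices:

    % Observability test: rank of O = [C; C*A; ... ; C*A^(n-1)] must equal n.
    A = [1 1; 0 2];
    C = [1 0];
    n = size(A, 1);
    O = C;
    for k = 1:n-1
        O = [O; C*A^k];                   % stack the next block row
    end
    rank(O)                               % equals n exactly when (A, C) is observable
    % Equivalent, using the Control System Toolbox:
    % rank(obsv(A, C))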
Example. Is the system ẋ = [ 1 1 ; 0 2 ] x + [ 1 ; 1 ] u ,  y = [ 1 0 ] x  observable?
Solution.
Example. Is the system ẋ = [ 1 0 1 ; 0 1 1 ; 0 0 2 ] x + [ 1 ; 0 ; 2 ] u ,  y = [ 1 1 0 ; 0 0 1 ] x  observable?
Solution.
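A numerical check of the two examples above, taking the matrix entries exactly as printed (if any printed
entry lost a sign, the ranks should be recomputed) and assuming the obsv command is available:

    % First example: n = 2
    rank(obsv([1 1; 0 2], [1 0]))                          % returns 2 = n: observable
    % Second example: n = 3
    rank(obsv([1 0 1; 0 1 1; 0 0 2], [1 1 0; 0 0 1]))      % returns 2 < n: not observable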
Notes:
1. It is easy to show that observability is not affected by performing a similarity transformation on a
system (the proof is directly analogous to the proof for controllability).
2. One can also show that if a realization is not observable, then the system is not minimal (i.e., the
transfer function will have a pole/zero cancellation). A realization is minimal if and only if it is both
controllable and observable.
3. While we introduced the observability matrix as a dual to controllability, we can actually obtain it
directly by asking the question: Can we determine the state x(T1) at some time T1 by viewing the
output y(t) for some period of time T1 ≤ t ≤ T2? The answer is yes if and only if the observability
matrix has rank n. This can be readily proved for discrete-time systems in a manner similar to what
we did for the controllability proof; a small numerical sketch of this idea is given below.
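Here is the promised sketch of the discrete-time idea, for an illustrative observable pair with zero input so
that y(k) = C A^k x(0). Stacking the first n output samples gives [ y(0) ; y(1) ; ... ; y(n-1) ] = O x(0),
which can be solved for x(0) exactly when rank O = n.

    % Recover x(0) of x(k+1) = A*x(k), y(k) = C*x(k) from the first n outputs (u = 0).
    A = [1 1; 0 2];  C = [1 0];  n = size(A, 1);
    x0 = [2; -1];                         % "unknown" initial state to be recovered
    Y = zeros(n, 1);  x = x0;
    for k = 1:n
        Y(k) = C*x;                       % y(k-1) = C*A^(k-1)*x0
        x = A*x;
    end
    O = obsv(A, C);                       % O = [C; C*A] here
    x0_hat = O \ Y                        % solves O*x0 = Y; recovers [2; -1]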
1.2 Locations of Observer Eigenvalues
Once the eigenvalues of A - LC are placed at desired locations in the OLHP, the estimation error e(t)
will asymptotically go to zero, and thus the estimated state z(t) will asymptotically approach x(t):

[Figure: sketch of the estimate z(t) converging to the true state x(t)]
The speed at which the estimated state catches up to the actual state is determined by the eigenvalues of
A - LC: the further into the left half plane these eigenvalues are, the faster the estimate will catch up to
the actual state. This can be seen from the expression for the error E(s) we calculated earlier: the poles
of the transfer functions governing the dynamics of e(t) are given by the eigenvalues of A - LC, and we
have seen from the first part of the course that poles that are further in the OLHP cause the transients
to die out faster. How far we push these eigenvalues depends on where we want the state feedback poles
to lie, because the faster the estimation error goes to zero, the better the estimate of x that we are using
in our feedback, and the better our controller works. A general rule of thumb is as follows:

    The eigenvalues of A - LC should be placed about five times further into the OLHP than the
    eigenvalues of A - BK.
Once the observer is constructed, the observer feedback input to the system is given by

    u = -Kz + r ,

where K is the same gain matrix that we would use if we had access to the actual system state (i.e., if we
were using state feedback u = -Kx + r).
Note: There are multiple ways to find L to place the eigenvalues of A - LC. One way is to consider this as
a controllability problem, and find L^T to place the eigenvalues of A^T - C^T L^T by using the controllability
techniques from the last section. Another way is to directly work with the given matrices and their
eigenvalues, as we will see in the following example.
Example. Consider the system ẋ = [ 1 1 ; 0 2 ] x + [ 1 ; 1 ] u ,  y = [ 1 0 ] x . We would like to place closed
loop poles at s = -1, -2. Design an observer to estimate the state, and use observer feedback to achieve
this goal.
In order to automate the observer design process, we can use the place command in MATLAB. In particular,
K = place(A,B,p) will return a gain K so that the eigenvalues of A - BK are at the pole locations
specified in vector p. To perform observer design, we use L = place(A',C',p)', which will place the
eigenvalues of A - LC at desired locations (note the use of transposes in the command in order to convert
the problem to a controllability problem, which place can solve).
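Putting the pieces together, here is a sketch of the complete observer-feedback workflow using place. The
plant below is an illustrative double integrator (not the example worked earlier), the closed loop poles are
placed at -1 and -2, and the observer eigenvalues follow the five-times rule of thumb:

    % Observer feedback design and simulation (illustrative double-integrator plant).
    A = [0 1; 0 0];  B = [0; 1];  C = [1 0];
    K = place(A, B, [-1 -2]);             % state feedback gain: eig(A - B*K) = {-1, -2}
    L = place(A', C', [-5 -10])';         % observer gain: eig(A - L*C) = {-5, -10}
    r = @(t) 1;                           % step reference (illustrative)
    % Closed loop: xdot = A*x + B*u,  zdot = A*z + B*u + L*(y - C*z),  u = -K*z + r.
    f = @(t,w) [A*w(1:2) + B*(-K*w(3:4) + r(t));
                A*w(3:4) + B*(-K*w(3:4) + r(t)) + L*(C*w(1:2) - C*w(3:4))];
    [t, w] = ode45(f, [0 8], [1; -1; 0; 0]);    % x(0) = [1; -1], z(0) = [0; 0]
    plot(t, w(:,1:2), t, w(:,3:4), '--');       % estimates (dashed) track the states
    legend('x_1', 'x_2', 'z_1', 'z_2'); xlabel('t');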