
WP5 - 2:30

EIGENSTRUCTURE ASSIGNMENT: A TUTORIAL - PART I THEORY

K. M. Sobel
Lockheed California Company
Burbank, CA 91520

E. Y. Shapiro
HR Textron
Valencia, CA 91335

ABSTRACT

A tutorial description of the theory of eigenstructure assignment using both output feedback and constrained output feedback is presented. The computer implementation of the algorithm is discussed, including the utilization of real arithmetic for complex conjugate eigenvalues.

INTRODUCTION

We shall consider a linear time invariant system described by the equations:

    x'(t) = A x(t) + B u(t)                                   (1)
    y(t)  = C x(t)                                            (2)

where x is the state vector (n x 1), u is the control vector (m x 1), and y is the output vector (r x 1). Without loss of generality, we assume that the m inputs are independent and the r outputs are independent. If there are no reference commands, the control vector u equals a constant matrix times the output vector y,

    u = F y                                                   (3)

The output feedback problem can be stated as follows: Given a set of desired eigenvalues {λ_i^d}, i = 1, 2, ..., r, and a corresponding set of desired eigenvectors {v_i^d}, i = 1, 2, ..., r, find a real m x r matrix F such that the eigenvalues of A + BFC contain {λ_i^d} as a subset, and the corresponding eigenvectors of A + BFC are close to the respective members of the set {v_i^d}.

The constrained output feedback problem can be stated as follows: Given a set of desired eigenvalues {λ_i^d}, i = 1, 2, ..., r, and a corresponding set of desired eigenvectors {v_i^d}, i = 1, 2, ..., r, find, if possible, a real m x r matrix F, some of whose elements are fixed at zero, such that the eigenvalues of (A + BFC) contain {λ_i^d} as a subset, and the corresponding eigenvectors of (A + BFC) are close to the respective members of the set {v_i^d}.

The following theorem due to Srinathkumar [1] provides a solution to the output feedback eigenstructure problem.

Theorem: Given the controllable and observable system described by Equations (1) and (2) and the assumptions that the matrices B and C are of full rank, then max(m, r) closed-loop eigenvalues can be assigned and max(m, r) eigenvectors can be partially assigned with min(m, r) entries in each vector arbitrarily chosen using gain output feedback, i.e., with a control law u = Fy.

Here we review the implications of the theorem and some aspects that have been highlighted by Harvey and Stein [2] and by Shapiro et al. [3, 4, 5].

EIGENVECTOR ASSIGNABILITY

The purpose of this section is to consider the problems of characterizing eigenvectors which can be assigned as closed-loop eigenvectors, and of determining the best possible set of assigned closed-loop eigenvectors in case a desired eigenvector is not assignable.

We begin by considering the closed-loop system

    x'(t) = (A + BFC) x(t)                                    (4)

Assume we are given {λ_i^d}, i = 1, 2, ..., r, as the desired closed-loop eigenvalues and assume v_i is the corresponding closed-loop eigenvector. Then

    (A + BFC) v_i = λ_i v_i                                   (5)
or
    v_i = (λ_i I - A)^-1 B F C v_i                            (6)

It is assumed that none of the desired eigenvalues match the existing eigenvalues of A so that the inverse of (λ_i I - A) exists.

Analyzing Equation (6), we define the m-vector m_i as

    m_i = F C v_i                                             (7)

Then Equation (6) becomes

    v_i = (λ_i I - A)^-1 B m_i                                (8)

The implication of Equation (8) is of great importance. The eigenvector v_i must be in the subspace spanned by the columns of (λ_i I - A)^-1 B. This subspace is of dimension m, which is equal to rank B, which is equal to the number of independent control variables. Therefore, the number of control variables determines the dimension of the subspace in which the achievable eigenvectors must reside. The orientation of the subspace is determined by the open loop parameters described by A, B, and the desired closed loop eigenvalue λ_i. We conclude that if we choose an eigenvector v_i^d which lies precisely in the subspace spanned by the columns of (λ_i I - A)^-1 B, it will be achieved exactly.

In general, a desired eigenvector v_i^d will not reside in the prescribed subspace and hence cannot be achieved. Instead, a "best possible" choice for an achievable eigenvector is made. This best possible eigenvector is the projection of v_i^d onto the subspace spanned by the columns of (λ_i I - A)^-1 B, as depicted in Figure 1.

Fig. 1. Geometric interpretation of v_i^d and v_i^A, where v_i^d is the desired eigenvector and v_i^A is the achievable eigenvector. [Figure omitted: it shows v_i^d, its projection v_i^A, and the subspace spanned by the columns of (λ_i I - A)^-1 B.]

We summarize with the following:

o The matrix F will exactly assign r eigenvalues. It will also assign each of the corresponding r eigenvectors to m-dimensional subspaces which are determined by λ_i, A, and B.

o If more than m components are specified for a particular eigenvector, then an achievable eigenvector is computed by projecting the desired eigenvector onto the allowable subspace.

o If control over a larger number of eigenvalues is required, then additional independent sensors must be added.

o If improved eigenvector assignability is required, then additional independent control surfaces must be added.
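To make the projection concrete, here is a minimal numerical sketch (our illustration, not code from the paper; the system matrices and the desired eigenvector are hypothetical). It forms (λ_i I - A)^-1 B and projects a desired eigenvector onto the subspace spanned by its columns.

    import numpy as np

    # Hypothetical open-loop data (n = 4 states, m = 2 controls), for illustration only.
    A = np.array([[0.0, 1.0, 0.0, 0.0],
                  [-2.0, -0.5, 1.0, 0.0],
                  [0.0, 0.0, 0.0, 1.0],
                  [1.0, 0.0, -3.0, -0.8]])
    B = np.array([[0.0, 0.0],
                  [1.0, 0.0],
                  [0.0, 0.0],
                  [0.0, 1.0]])

    lam_des = -1.5                                  # desired (real) closed-loop eigenvalue
    v_des = np.array([1.0, -1.5, 0.2, 0.0])         # desired eigenvector (designer's choice)

    n, m = B.shape
    # Columns of L span the m-dimensional subspace in which any achievable
    # eigenvector associated with lam_des must lie (Equation (8)).
    L = np.linalg.solve(lam_des * np.eye(n) - A, B)

    # Least-squares projection of v_des onto the column space of L:
    # z minimizes ||v_des - L z||^2, and v_ach = L z is the achievable eigenvector.
    z, *_ = np.linalg.lstsq(L, v_des, rcond=None)
    v_ach = L @ z

    print("achievable eigenvector:", v_ach)
    print("projection error norm :", np.linalg.norm(v_des - v_ach))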

In many practical situations, complete specification of v_i^d is neither required nor known, but rather the designer is interested only in certain elements of the eigenvector. To handle such a situation, we again consider our desired eigenvector v_i^d and assume that it has the following structure:

    v_i^d = [ v_i1, x, v_i2, x, ..., x, v_in ]^T              (9)

where the v_ij are designer specified components and x denotes an unspecified component.

We define, as in [2], a reordering operator { }_i as follows:

    {v_i^d}_i = [ l_i ]
                [ d_i ]                                       (10)

where l_i is a vector of the specified components of v_i^d and d_i is a vector of the unspecified components of v_i^d. We also reorder the rows of the matrix (λ_i I - A)^-1 B to conform with the reordered components of v_i^d:

    {(λ_i I - A)^-1 B}_i = [ L~_i ]
                           [ D~_i ]                           (11)

where the rows of L~_i correspond to the specified components and the rows of D~_i correspond to the unspecified components.

Begin by defining L_i = (λ_i I - A)^-1 B. An achievable eigenvector must reside in the required subspace and hence

    v_i^A = L_i z_i                                           (12)

for some vector z_i. To find the projection of v_i^d onto the "achievability subspace," we choose the z_i which minimizes

    J = || v_i^d - L_i z_i ||^2                               (13)

To obtain z_i, we compute

    dJ/dz_i = -2 L_i^T (v_i^d - L_i z_i)                      (14)

Hence dJ/dz_i = 0 implies

    z_i = (L_i^T L_i)^-1 L_i^T v_i^d                          (15)

If the dimension of l_i is greater than or equal to m, we proceed as before, with l_i replacing v_i^d and L~_i replacing L_i in Equations (13)-(15), which yields

    z_i = (L~_i^T L~_i)^-1 L~_i^T l_i                         (16)

Otherwise, the problem is underdetermined and we use

    z_i = L~_i^T (L~_i L~_i^T)^-1 l_i                         (17)

Due to possible ill conditioning, care must be taken when inverting L~_i^T L~_i. A Cholesky decomposition or a singular value decomposition is advised at this stage [2].

Then, in either case, the achievable eigenvector is given by

    v_i^A = L_i z_i                                           (18)

The modifications for complex eigenvalues and eigenvectors are shown in the Appendix.
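The partial-specification computation can be sketched in the same way. The helper below is an illustration of ours (names such as achievable_eigenvector and spec_mask are not from the paper): it keeps only the rows of (λ_i I - A)^-1 B that correspond to the specified entries l_i and then applies either the least-squares or the minimum-norm solution, mirroring the two cases above.

    import numpy as np

    def achievable_eigenvector(A, B, lam, v_spec, spec_mask):
        """Best achievable eigenvector when only some entries of the desired
        eigenvector are specified.  v_spec holds the specified values and
        spec_mask marks which entries they refer to.  (Illustrative sketch.)"""
        n, m = B.shape
        L = np.linalg.solve(lam * np.eye(n) - A, B)   # achievability subspace
        L_spec = L[spec_mask, :]                      # rows matching specified entries
        l_vec = v_spec[spec_mask]                     # the specified components l_i

        if np.count_nonzero(spec_mask) >= m:
            # Overdetermined case: ordinary least squares (the pseudoinverse
            # also guards against the ill conditioning mentioned in the text).
            z = np.linalg.pinv(L_spec) @ l_vec
        else:
            # Underdetermined case: minimum-norm solution z = L~^T (L~ L~^T)^-1 l_i.
            z = L_spec.T @ np.linalg.solve(L_spec @ L_spec.T, l_vec)

        return L @ z   # achievable eigenvector, Equation (18)

    # Example with hypothetical data: specify only the 1st and 2nd entries.
    A = np.diag([-1.0, -2.0, -3.0, -4.0]) + 0.1
    B = np.vstack([np.eye(2), np.zeros((2, 2))])
    mask = np.array([True, True, False, False])
    v_d = np.array([1.0, 0.0, 0.0, 0.0])
    print(achievable_eigenvector(A, B, -2.5, v_d, mask))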

Feedback Gain Computation

We now turn our attention to computing the feedback gain matrix F. We assume in what follows that when the term eigenvector is used, we are considering an achievable eigenvector.

We shall proceed to compute the feedback control law u = Fy for the unconstrained problem. However, anticipating that the B matrix must be transformed to lead block identity for the constrained output feedback problem, we shall apply the transformation to both the constrained and unconstrained gain computations.

Thus, the matrix B is transformed to the following form:

    B  -->  [ I_m ]
            [  0  ]                                           (19)

There is no loss of generality in doing this since B is of full rank and there exists a (nonunique) transformation which does this.

In order to obtain this transformation T, consider that T is of the form [B  P], where P is any matrix such that rank [T] = n. One method for computing T is to compute the singular value decomposition of B. Then the transformation T may be obtained by choosing the columns of P equal to columns (m+1) to n of the matrix U, where U is the orthogonal matrix of left singular vectors of B.

Using T we consider the change of coordinates

    x = T x~                                                  (20)

Then the open loop system, Equations (1) and (2), is transformed to

    x~'(t) = T^-1 A T x~(t) + T^-1 B u(t)                     (21a)
    y(t)   = C T x~(t)                                        (21b)

where

    A~ = T^-1 A T                                             (22a)
    B~ = T^-1 B                                               (22b)
    C~ = C T                                                  (22c)

Under this transformation, the eigenvalues of the original system are identical to the eigenvalues of the transformed system, and the eigenvectors of the two systems are related by

    v~_i = T^-1 v_i                                           (23)

In what follows, it is assumed that all matrices and eigenvectors have been transformed to obtain the necessary structure of the B matrix. For notational convenience, we suppress the tilde notation.
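A minimal sketch of the coordinate transformation just described, assuming only that B has full column rank (illustrative code, not the paper's): the U factor of the SVD of B supplies the columns of P, and with T = [B  P] one obtains T^-1 B = [I_m; 0] exactly.

    import numpy as np

    def lead_block_identity_transform(B):
        """Build T = [B  P] so that T^-1 B = [I_m; 0].
        P is taken from columns (m+1) to n of the U factor of the SVD of B,
        which are orthogonal to the range of B.  (Sketch for illustration.)"""
        n, m = B.shape
        U, _, _ = np.linalg.svd(B)        # U is n x n orthogonal
        P = U[:, m:]                      # columns m+1 .. n
        return np.hstack([B, P])

    # Example with a hypothetical 4-state, 2-input B:
    B = np.array([[1.0, 0.0],
                  [2.0, 1.0],
                  [0.0, 3.0],
                  [1.0, 1.0]])
    T = lead_block_identity_transform(B)
    print(np.round(np.linalg.solve(T, B), 10))   # prints [I_2; 0]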

If λ_i is a closed-loop eigenvalue and v_i its associated eigenvector, then

    (A + BFC) v_i = λ_i v_i ,     i = 1, 2, ..., r            (24)
or
    B F C v_i = (λ_i I - A) v_i                               (25)

We partition Equation (25) conformally, using the special structure of B, to obtain

    [ F C v_i ]        [ z_i ]     [ A11  A12 ] [ z_i ]
    [    0    ] = λ_i  [ w_i ]  -  [ A21  A22 ] [ w_i ]       (26)

where

    v_i = [ z_i ]
          [ w_i ]                                             (27)

z_i contains the first m components of v_i, and A is partitioned appropriately. Upon considering the first matrix equation from the partitioned form, we obtain

    F C v_i = λ_i z_i - A11 z_i - A12 w_i                     (28)
or
    λ_i z_i = (A11 z_i + A12 w_i) + F C v_i                   (29)
            = A1 v_i + F C v_i                                (30)

where A1 = [ A11  A12 ]. We rewrite (30) as

    (A1 + F C) v_i = λ_i z_i                                  (31)

This last equation holds for each desired eigenvalue / achievable eigenvector pair. Thus,

    (A1 + F C) v_1 = λ_1 z_1                                  (32a)
    (A1 + F C) v_2 = λ_2 z_2                                  (32b)
        ...
    (A1 + F C) v_r = λ_r z_r                                  (32c)

or, in condensed form,

    (A1 + F C) V = Z                                          (33)

where

    V = [ v_1  v_2  ...  v_r ]              is (n x r)        (34)
    Z = [ λ_1 z_1  λ_2 z_2  ...  λ_r z_r ]  is (m x r)        (35)
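In code, V and Z of Equations (34) and (35) can be assembled directly from the achievable eigenvalue/eigenvector pairs; the snippet below is a sketch that assumes real eigenvalues and that all quantities are already in the transformed coordinates.

    import numpy as np

    def build_V_and_Z(lams, vecs, m):
        """Stack achievable eigenvectors into V (n x r) and form Z (m x r),
        whose i-th column is lambda_i times the first m entries of v_i.
        Real eigenvalues assumed here; see the Appendix for complex pairs."""
        V = np.column_stack(vecs)                                   # V = [v_1 ... v_r]
        Z = np.column_stack([lam * v[:m] for lam, v in zip(lams, vecs)])
        return V, Z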

We use Equation (33) to calculate F. To eliminate the need for complex arithmetic, we use a transformation due to Moore [6]; details are shown in the Appendix. Hence, without loss of generality, we assume Z and V to be real matrices. Thus,

    F = (Z - A1 V)(C V)^-1                                    (36)

The matrix F will exist provided the matrix CV is nonsingular. From a mathematical standpoint, the inverse of CV is guaranteed provided the nullspace of C and the subspace spanned by the columns of V intersect only at the origin. From a physical standpoint, CV will be singular (or extremely ill conditioned) when measurements taken (reflected by the C matrix) have little or no impact on the achievable eigenvectors (reflected by the V matrix). The singularity or nonsingularity of CV, therefore, provides an excellent check as to the reasonableness of our eigenvectors in relation to the outputs being measured and fed back.
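Equation (36) translates into a few lines of linear algebra. The sketch below is our own illustration (the warning threshold is an arbitrary choice, not from the paper); it also reports the conditioning of CV in the spirit of the check just described.

    import numpy as np

    def output_feedback_gain(A1, C, V, Z):
        """Unconstrained gain from Equation (36): F = (Z - A1 V)(C V)^-1.
        A1 is m x n, C is r x n, V is n x r, Z is m x r (all real)."""
        CV = C @ V
        cond = np.linalg.cond(CV)
        if cond > 1e8:                      # heuristic threshold, not from the paper
            print(f"warning: CV is ill conditioned (cond = {cond:.2e})")
        return (Z - A1 @ V) @ np.linalg.inv(CV)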

CONSTRAINED OUTPUT FEEDBACK

From the analysis of the previous section, we note that every output is fed back to each input. In this section, we wish to consider the possibility of not feeding back certain outputs to certain inputs, i.e., we wish to impose constraints on the feedback matrix F in the form of fixed zeros within the matrix to reflect physically desirable feedback combinations. This feature of constrained feedback lends practicality and flexibility to our procedure. Also, it will provide the designer, as we shall see, a spectrum of tradeoffs between dynamic performance and the structural complexity of the controller (reliability).

The potential offered by this procedure is great and will be highlighted by examples to follow. What will be seen is that a designer can now look at a spectrum of possibilities in synthesizing a feedback controller. For example, in considering a given problem, one might obtain a certain dynamic performance given unconstrained output feedback. Upon further consideration, the designer may be interested in determining performance if, for instance, certain states are not fed to certain inputs. By suppressing certain gains to zero, the designer reduces controller complexity and increases reliability. This gives the designer the ability to examine a spectrum of tradeoffs between performance and complexity/reliability.

It should be noted that the tradeoff in performance comes from the fact that once constraints are put on the feedback structure, it may not be possible to assign the closed loop eigenvalues to the exact desired locations. However, if the eigenvalues are "close," the dynamic behavior may be acceptable to the designer.

To begin, recall Equation (33),

    (A1 + F C) V = Z

and note that the equation is of this form due to the special structure of B. (Had we not imposed this structure on B, the matrix B would be premultiplying FC.) Therefore,

    A1 V + F C V = Z                                          (37)

    F C V = Z - A1 V                                          (38)

Proceeding, we let

    Ω = C V                                                   (39)
    Ψ = Z - A1 V                                              (40)

Then, to re-emphasize B's structure, Equation (38) may be rewritten as

    F Ω = Ψ                                                   (41)

We rewrite this equation in terms of the Kronecker product and a row stacking operator,

    [ I_m ⊗ Ω^T ] S(F) = S(Ψ)                                 (42)

where S represents the row stacking operator. The matrix Ω^T is repeated along the main diagonal while 0 matrices reside off the diagonal. The structure of Equation (42) is due solely to the transformation of the B matrix. If the transformation were not made, the coefficient matrix of the feedback gains would possibly have been full. The obvious advantage of this structure is that each row of feedback gains can be computed independently of all other rows, i.e., letting f_i denote the ith row of F and ψ_i denote the ith row of Ψ,

    Ω^T f_i = ψ_i                                             (43)

Let us focus on this ith row equation. Expanding, we have

    Ω^T [ f_i1, f_i2, ..., f_ir ]^T = ψ_i                     (44)

If we were to constrain f_ij to be zero, then we delete f_ij from f_i and delete the jth column of Ω^T. We now solve the reduced problem,

    Ω~_j^T f~_i = ψ_i                                         (45)

where Ω~_j^T is the matrix Ω^T with its jth column deleted and f~_i is the vector f_i with its jth element deleted. Our reduced problem is overdetermined in that we now have more equations than unknowns. Using a pseudoinverse, our "solution" for f~_i (the remaining active gains in the ith row) is

    f~_i = (Ω~_j^T)† ψ_i                                      (46)

The notation ( )† indicates the appropriate pseudoinverse. If more than one gain in a row of F is to be set to zero, then f~_i and Ω~_j are modified appropriately.
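A sketch of the reduced problem for a single row of F (our illustration; the function and argument names are hypothetical): the columns of Ω^T corresponding to gains constrained to zero are deleted, the remaining gains are recovered with a pseudoinverse as in Equation (46), and the fixed zeros are reinserted.

    import numpy as np

    def constrained_row_gains(Omega, psi_i, zero_cols):
        """Solve Omega^T f_i = psi_i for the i-th row of F while forcing the
        entries of f_i listed in zero_cols to zero (least squares via pseudoinverse).
        Omega = C V is r x r, psi_i is the i-th row of Psi = Z - A1 V."""
        r = Omega.shape[0]
        keep = [j for j in range(r) if j not in set(zero_cols)]
        Omega_T_red = Omega.T[:, keep]             # delete the constrained columns
        f_red = np.linalg.pinv(Omega_T_red) @ psi_i
        f_i = np.zeros(r)
        f_i[keep] = f_red                          # reinsert the fixed zeros
        return f_i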

APPENDIX

We shall describe how real arithmetic may be used for complex eigenvalues and eigenvectors. Consider Equation (18), which is described by

    v_i^A = L_i z_i                                           (A1)

Let

    v_i = v_R + j v_I                                         (A2)
    z_i = z_R + j z_I                                         (A3)

Then, (A1) becomes

    v_R + j v_I = L_i (z_R + j z_I)                           (A4)

Multiplying through and equating real terms yields

    v_R = Re{L_i} z_R - Im{L_i} z_I                           (A5)

Equating imaginary terms yields

    v_I = Im{L_i} z_R + Re{L_i} z_I                           (A6)

Combining (A5) and (A6) yields

    [ v_R ]   [ Re{L_i}  -Im{L_i} ] [ z_R ]
    [ v_I ] = [ Im{L_i}   Re{L_i} ] [ z_I ]                   (A7)

Thus, for complex eigenvalues we replace Equation (18) by Equation (A7). Observe that the matrix in (A7) is dimensioned (2n x 2m).

For the gain computation we define a matrix U as follows: the column corresponding to a real eigenvalue is the achievable eigenvector v_i^A, and the pair of columns corresponding to a complex conjugate pair is Re{v_i^A} and Im{v_i^A}, e.g.,

    U = [ v_1^A   v_2^A   Re{v_3^A}   Im{v_3^A}   ...   v_r^A ]      (A8)

Observe that the matrix U is (n x r). We apply the transformation T to the matrix U, which yields

    V~ = T^-1 U                                               (A9)

Define the vector z_i as the first m elements of column i of the matrix V~. Then, for real eigenvalues,

    Z = [ λ_1 z_1   λ_2 z_2   ...   λ_r z_r ]                 (A10)

and for complex eigenvalues, the pair of columns of Z corresponding to a complex conjugate pair λ_i (with z_R and z_I the corresponding columns of V~) is

    [ Re{λ_i} z_R - Im{λ_i} z_I     Re{λ_i} z_I + Im{λ_i} z_R ]      (A11)

Finally,

    F = (Z - A1 V~)(C V~)^-1                                  (A12)

Observe that Equation (A12) uses only real arithmetic.
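As a closing illustration (our own sketch, with hypothetical data), the block structure of (A7) can be used to project a desired complex eigenvector using only real arithmetic: solve the stacked real least-squares problem for [z_R; z_I] and reassemble the complex solution.

    import numpy as np

    def achievable_complex_pair(A, B, lam, v_des):
        """Project a desired complex eigenvector onto the achievability subspace
        using only real arithmetic, per the block structure of Equation (A7).
        lam is complex; v_des is the desired (complex) eigenvector."""
        n, m = B.shape
        L = np.linalg.solve(lam * np.eye(n, dtype=complex) - A, B)   # complex n x m
        M = np.block([[L.real, -L.imag],
                      [L.imag,  L.real]])                            # real (2n x 2m)
        rhs = np.concatenate([v_des.real, v_des.imag])
        z, *_ = np.linalg.lstsq(M, rhs, rcond=None)                  # [z_R; z_I]
        z_c = z[:m] + 1j * z[m:]
        return L @ z_c                                               # achievable eigenvector

    # Example with hypothetical data:
    A = np.array([[0.0, 1.0], [-4.0, -0.4]])
    B = np.array([[0.0], [1.0]])
    v = achievable_complex_pair(A, B, -1.0 + 2.0j, np.array([1.0 + 0.0j, -1.0 + 2.0j]))
    print(v)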

REFERENCES

1. Srinathkumar, S., "Eigenvalue/eigenvector assignment using output feedback," IEEE Trans. Automat. Contr., AC-23, 1 (1978), 79-81.

2. Harvey, C.A., and Stein, G., "Quadratic weights for asymptotic regulator properties," IEEE Trans. Automat. Contr., AC-23 (June 1978), 378-387.

3. Shapiro, E.Y., and Chung, J.C., "Application of eigenvalue/eigenvector assignment by constant output feedback to flight control systems design," in Proc. 15th Annu. Conf. Information Sciences and Systems (The Johns Hopkins University, Baltimore, Md., 1981), pp. 164-169.

4. Shapiro, E.Y., and Chung, J.C., "Constrained eigenvalue/eigenvector assignment - application to flight control systems," in Proc. 19th Annu. Conf. Communication Control and Computing (University of Illinois, Champaign-Urbana, Ill., 1981), 320-328.

5. Andry, A.N., Shapiro, E.Y., and Chung, J.C., "Eigenstructure assignment for linear systems," IEEE Trans. Aerospace and Electronic Systems, AES-19 (September 1983), 711-728.

6. Moore, B.C., "On the flexibility offered by state feedback in multivariable systems beyond closed loop eigenvalue assignment," IEEE Trans. Automat. Contr., AC-21 (1976), 689-692.