IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. AC-32, NO. 8, AUGUST 1987
Abstract—Iterative methods for finding the optimal constant feedback gains for parametric LQ problems, notably for optimal constant output feedback problems, are surveyed. The connections of several methods to loss function expansions are discussed with important implications to the understanding of their convergence properties. Especially, the descent Anderson-Moore method, Levine-Athans like methods, and the Newton method are considered. Convergence results are also included. The initialization problem and the output feedback stabilization problem are also discussed. Furthermore, it is shown that the concepts and methods surveyed in this paper are useful in solving many realistic generalized parametric LQ problems as well, notably so-called robust parametric LQ problems.

I. INTRODUCTION

IN this paper computational methods for one of the classic control problems in linear quadratic control theory are surveyed. The term parametric LQ (PLQ) control is used here to include such important control topics as optimal constant output feedback (see, e.g., [40], [4], [51], and [57]), optimal low-order dynamical and observer feedback (see, e.g., [38], [77], [65], and [58]), and optimal decentralized control [26], [27], [72].

Much of the recent interest in PLQ control has been motivated by applications of the theory to advanced industrial process control problems [75], [31], [49], to flight control problems [12], [23], [25], and by its potential in optimal decentralized control [63], [8] and in adaptive control [30], [45].

This interest in applying PLQ control to complex practical control problems revealed several deficiencies in the numerical algorithms then available to solve PLQ problems. The Levine-Athans method [40] was considered to be computationally expensive and its convergence properties were not well understood in the 1970's [19], [68]. The Anderson-Moore method [4] was known not to converge always [66]. This made the use of general-purpose function minimization methods popular in solving PLQ problems in the 1970's [19], [35]. The lack of efficient special-purpose numerical methods to solve PLQ problems was made all the more apparent by the many significant results obtained for solving algebraic Riccati equations and linear quadratic Gaussian control problems (see, e.g., [2] and [39]).

Recently, however, some interesting results have been obtained for modified Anderson-Moore and Levine-Athans methods. The descent Anderson-Moore method has been studied in several papers [32], [43], [45], [47], and [55]. An essential feature in these descent Anderson-Moore algorithms is the introduction of a step-length parameter to enhance convergence to a stationary point of the loss function. Some convergence results have also been obtained for Levine-Athans like methods [47] and [71]. The work of Medanic et al. (see, e.g., [34] and [52]) on a projective controls approach to output feedback design has given new insight into the conditions for existence of a solution to the nonlinear matrix equations in Levine-Athans like methods. Advanced projection techniques form also the basis of the eigenprojection method in [37].

The best rate of convergence results apply to the Newton and inexact Newton methods for solving PLQ problems [73]. These methods have the desirable superlinear (actually quadratic) terminating rate of convergence property. Bingulac, Cuk, and Calovic [11] describe a Newton-Raphson method to solve the set of nonlinear matrix equations expressing the necessary optimality conditions for continuous-time optimal constant output feedback problems.

In this survey paper the numerical methods are described for linear stochastic discrete-time systems. Many of the references consider the continuous-time and deterministic problems, too. For convenience, only the case in which all the feedback gain matrix elements are optimized is described. In some of the references it is shown that many of the methods can be used also when only some elements of the feedback gain matrix are optimized, using an elimination of variables technique (see, e.g., [19], [72], and [73]).

The output feedback stabilization problem and the initialization problem are discussed as well. In, e.g., [54] output feedback stabilization is studied based on minimizing a spectral radius functional. A sufficient condition for output feedback stabilizability is used in [67]. Some other approaches to the initialization problem presented in the literature (see, e.g., [73] and [12]) are also discussed. The perturbation [12] or the scaling approach is interesting as it allows standard methods for parametric LQ problems to be used in the initialization problem as well.

Furthermore, it is shown that the concepts and methods surveyed in this paper can be exploited to solve many realistic generalized parametric LQ problems, notably so-called robust parametric LQ problems [41], [16], and [10].

The material in the paper is organized as follows. In Section II the necessary background material on the infinite-horizon parametric LQ control problem for linear discrete-time stochastic systems is presented. The descent Anderson-Moore method is discussed in Section III, and Levine-Athans like methods are discussed in Section IV. The Newton method is considered in Section V. Other methods are discussed in Section VI, including the BFGS variable metric method as a good representative of modern general-purpose function minimization methods. The output feedback stabilization and the initialization problems are considered in Section VII. Numerical solution of some realistic generalized parametric LQ problems is discussed in Section VIII. Some topics deserving further attention are discussed in Section IX.

Manuscript received May 26, 1986; revised February 23, 1987. Paper recommended by Associate Editor, A. J. Laub. This work was supported in part by the Foundation of Åbo Akademi and by the Council of Technical Sciences, Academy of Finland.
P. M. Mäkilä is with the Department of Electrical Engineering, McGill University, Montreal, P.Q., Canada, on leave from the Department of Chemical Engineering, Swedish University of Åbo (Åbo Akademi), Turku, Finland.
H. T. Toivonen is with the Department of Chemical Engineering, Swedish University of Åbo (Åbo Akademi), Turku, Finland.
IEEE Log Number 8715365.

II. THE PARAMETRIC LQ CONTROL PROBLEM

Consider finding the optimal constant feedback gains for parametric LQ problems involving the minimization of the
where rho(.) denotes the spectral radius of a square matrix. Then a necessary optimality condition is that any minimizer F* must be a stationary point of the loss function (1). All the computational methods to be discussed in this paper generate, either explicitly or implicitly, a sequence of feedback gains {F_k} hopefully converging to a stationary point of the control criterion. To understand these methods it is essential to consider how they are suggested by various loss increment expansions. For this purpose some useful loss increment expansions are taken up next.

Assume that S_F is nonempty and let F in S_F. Then the loss function (1) can be written as

    J(F) = tr (Q + D^T F^T R F D)P + tr F^T R F R_2   (5)

where P is the stationary state covariance matrix for the system (2), when controlled with the time-invariant regulator (3), i.e.,

    P = E x(t)x(t)^T   (6)

where P is given as a positive semidefinite solution to the discrete Lyapunov equation

    P = (A + BFD)P(A + BFD)^T + R_1 + B F R_2 F^T B^T.   (7)

Introduce also the symmetric matrix S given as a positive semidefinite solution to the discrete Lyapunov equation

    S = (A + BFD)^T S(A + BFD) + Q + D^T F^T R F D.   (8)

The linear matrix equations (7), (8) define the matrix functions P(F) and S(F), respectively, for F in S_F.

Consider the loss increment

    dJ = J(F_2) - J(F_1)   (9)

where F_1, F_2 in S_F. Introduce the notation (10), where Theta(F) = [DU(F)  D Phi(F)U(F)  ...  D Phi(F)^(n-1) U(F)], Psi(F)^T = [B^T H(F)^T  B^T Phi(F)^T H(F)^T  ...  B^T (Phi(F)^(n-1))^T H(F)^T], Phi(F) = A + BFD, U(F)U(F)^T = R_1 + B F R_2 F^T B^T, and H(F)^T H(F) = Q + D^T F^T R F D. The term o(||dF||^2) in (13) is characterized by (14).

In compact notation the parametric LQ (PLQ) control problem is as follows:

    min J(F),  F in S_F.   (17)

Some of the properties of (17) as a minimization problem are now discussed, and two technical assumptions used in the sequel are introduced.

Consider then the level set

    Pi(d) = {F in S_F | J(F) <= d}   (18)

where d >= 0. Let S_F be nonempty. Often it is useful to visualize that, under certain conditions, the loss function in (17) grows without bound as the boundary of S_F is approached along any path in the open set S_F. This gives the motivation to consider (17) as, in effect, an unconstrained minimization problem. A technical consequence of this unbounded-loss-at-the-boundary assumption is that the level set Pi(J(F)) is compact for any F in S_F. A more detailed discussion of this topic in terms of the system formulation (1)-(3) is given in Appendix A.

An assumption which appears naturally, cf. especially Section IV on Levine-Athans like methods, is that P(F) and S(F) > 0 (i.e., strictly positive definite) for any F in S_F. Then the following test is insightful. For F in S_F,

    Theta(F)Theta(F)^T + R_1 > 0  ==>  P(F) > 0.   (19a)

Let D have full row rank and B full column rank, respectively. Then U(F) nonsingular, say R_1 > 0, implies P(F) > 0 for F in S_F by (19a) [an analogous result holds for S(F)]. Now if U(F) is singular, it may still be possible to have Theta(F)Theta(F)^T > 0 when the observability matrix of the pair (D, Phi(F)) has greater rank than D, as rank Theta(F) = rank D ==> Theta(F)Theta(F)^T > 0.

Returning to the compactness assumption of the loss level sets, cf. (18), it is observed that if Pi(J(F)) is compact for some F in S_F, then the loss (5) has a minimizer in S_F, due to the continuity of the loss in S_F [32]. In the following sections the condition

    Pi(J(F_0)) is compact   (20)

where F_0 in S_F is an initial stabilizing feedback gain matrix, is often used in the convergence analysis of numerical methods to find a local minimizer of J(F) on S_F. It is a mild condition, and even in cases where it is not fulfilled, one may still get convergence so that (21) holds, where {(dJ/dF)_k} is the gradient sequence generated during the iterative solution of a PLQ problem.

In this paper the term explicit gain space methods refers to computational methods for solving parametric LQ problems based on explicitly generating a sequence of feedback gains {F_k} according to

    F_{k+1} = F_k + alpha_k G_k   (22)

where G_k is a search direction, alpha_k > 0 is a step-length parameter, and {F_k} hopefully converges to a local minimizer of the loss (5). The more successful explicit gain space methods generate a sequence of monotonically decreasing loss function values {J(F_k)} so that F_k in S_F for all k. The initialization problem, i.e., the problem of finding an F_0 in S_F when the system (2) is open-loop unstable, is then of importance. This problem and the output feedback stabilizability problem are discussed in Section VII.

Remark 2.1: The use of parametric LQ techniques in rms tuning of control loops is classical. One useful interpretation of the control criterion (1) is as follows. In many stochastic control problems performance requirements are stated as upper bounds on the variances of certain process variables (see, e.g., [49], [25], and [20]). One engineering tool to address multiple variance requirements is by a penalty function method, such as linear quadratic and parametric linear quadratic theories. Then, technically, it is also possible to interpret the control criterion (1) as a linear combination of the components of a multiobjective variance criterion. Gangsaas et al. [25] discuss an advanced practical control systems design approach based on parametric LQ ideas. Robust performance and stability requirements can then also be addressed, cf. also Section VIII.

III. THE DESCENT ANDERSON-MOORE METHOD

A well-known necessary condition for optimality for the PLQ problem is that

    dJ/dF = 0, implying F = -Shat^{-1} B^T S A P D^T Pbar^{-1}   (23)

assuming that Pbar^{-1} and Shat^{-1} exist. This suggested the original Anderson-Moore algorithm [4], [58], where the sequence {F_k} is generated as, starting from F_0 in S_F,

    F_{k+1} = -Shat_k^{-1} B^T S_k A P_k D^T Pbar_k^{-1}   (24)

where P_k = P(F_k), S_k = S(F_k), etc., cf. (7), (8), and (10). An alternative way of introducing the Anderson-Moore method is as follows.

Let F_0 in S_F. Consider then a sequence of feedback gains {F_k} generated by

    F_{k+1} = F_k + alpha_k G_{AM,k}   (25)

where alpha_k > 0 is such that F_{k+1} in S_F, and G_{AM,k} is given by the unique minimizer of the positive definite quadratic form q_AM(dF) in (26), where q_AM(dF) is a quadratic approximation to the loss increment dJ, cf. (13) and (15), at F_k, and Ptilde_k and Stilde_k are positive definite matrices given by

    Ptilde_k = P_k + Gamma_k   (27a)
    Stilde_k = S_k + Delta_k.   (27b)

The positive semidefinite matrices Gamma_k and Delta_k are so chosen that the condition numbers kappa(Ptilde_k) and kappa(Stilde_k) of Ptilde_k and Stilde_k are both <= K, K > 1, cf. Remark 3.1. Thus, G_{AM,k} is given by (28), and it is easy to see that (29) holds, i.e., G_{AM,k} is a descent direction of the loss J(F) at F_k.

It should be observed that when the term tr dF^T Stilde_k dF Ptilde_k dominates the other terms in (15), it can be expected that the Anderson-Moore method performs close to the Newton method for solving parametric LQ problems. It is also interesting to note that when R_2 = 0 and D = I (of size n), the Anderson-Moore algorithm gives

    F_{k+1} = -Shat_k^{-1} B^T S_k A.   (30)

Then the Anderson-Moore algorithm is equivalent to a method for solving algebraic Riccati equations having local quadratic rate of convergence [33].

Note that (25) gives the same F_{k+1} as (24) for all k, if alpha_k = 1, Gamma_k = 0, and Delta_k = 0. Unfortunately, convergence of the sequence {F_k} to a stationary point of the loss function is then not guaranteed (see, e.g., [66]) and it can even happen that the successor F_{k+1} is not in S_F, although F_k is in S_F, causing a collapse of the algorithm. This phenomenon is in fact not too rare, rendering the original Anderson-Moore algorithm of limited use.

It seems that the importance of the descent property (29) was first observed by Halyo and Broussard [32] and independently by Mäkilä [43]. Halyo and Broussard [32] proved also a convergence result, which is repeated here in a slightly different form.

Theorem 1 (Convergence of {(dJ/dF)_k}): Let S_F be nonvoid, and let F_0 in S_F. Let the level set

    Pi(J(F_0)) = {F in S_F | J(F) <= J(F_0)}   (31)

be compact. Then there exists beta > 0 such that

    (dJ/dF)_k -> 0 as k -> infinity   (32)
MÄKILÄ AND TOIVONEN: PARAMETRIC LQ PROBLEMS 661
whenever 0 < alpha <= beta and the sequence {F_k} is defined by

    F_{k+1} = F_k + alpha G_{AM,k}   (33)

where G_{AM,k} is given by (28) with Ptilde_k and Stilde_k having condition numbers bounded above for all k.

Proof: Essentially the same as Theorem 3 in [32].

To find such a constant step-length parameter alpha > 0, [32] suggests that alpha_k in (25) is chosen to satisfy F_{k+1} in S_F and the descent condition

    J(F_k + alpha_k G_{AM,k}) < J(F_k).   (34)

The step-length parameter alpha_k is computed by finding the smallest nonnegative integer j in

    alpha_k = gamma^j alpha_{k-1},  0 < gamma < 1,  alpha_{-1} >= 1   (35)

such that F_k + gamma^j alpha_{k-1} G_{AM,k} satisfies the stabilizing and descent conditions (34). It is then hoped that there exists an integer M >= 0 such that alpha_M in (0, beta], where beta is as in Theorem 1, i.e., it is hoped that after a finite number M of iteration steps alpha_k = alpha_M in (0, beta] is accepted for all k >= M. The result (32) of Theorem 1 would then apply.

In [55] the descent condition is used in a descent Anderson-Moore algorithm for continuous-time systems.

The descent condition (34) and the boundedness property J(F) >= 0 for all F in S_F guarantee convergence of the sequence {J(F_k)} for any F_0 in S_F. This does not, however, imply convergence of {F_k} to a local minimizer of the loss J(F), nor convergence of {(dJ/dF)_k} to zero, basically because condition (34) accepts arbitrarily small reductions in the loss J(F) (see, e.g., [24]). Thus, the conclusions on the convergence to a stationary point for the Halyo-Broussard [32] and for the Moerder-Calise [55] algorithms remain unproved. It should be observed, however, that in practice the descent condition (34) is often enough to obtain the result (32) (no counterexample is known).

In [43], [44], and [46] a descent Anderson-Moore method was suggested such that the step-length parameter alpha_k is chosen to satisfy the Goldstein step-length condition [29]

    sigma <= [J(F_{k+1}) - J(F_k)] / (alpha_k tr [(dJ/dF)_k^T G_{AM,k}]) <= 1 - sigma   (36)

where 0 < sigma < 1/2, in addition to the condition F_{k+1} in S_F. A somewhat simpler step-length rule was used in [47]. Note that as the condition numbers of Ptilde_k and Stilde_k in (27) are bounded above for all k, condition (36) guarantees that the reductions in J(F) do not become arbitrarily small, as then (dJ/dF)_k and G_{AM,k} will not be arbitrarily close to orthogonality. In [44] and [47] convergence of the sequence {(dJ/dF)_k} to zero was proved when using these stronger step-length rules under mild assumptions.

Theorem 2 (Convergence of {(dJ/dF)_k} When Using Condition (36)): Let S_F be nonvoid, and let F_0 in S_F. Let the level set Pi(J(F_0)) be compact. Let the sequence {F_k} be generated by F_{k+1} = F_k + alpha_k G_{AM,k}, where G_{AM,k} is given by (28), and the step-length parameter alpha_k is chosen so that the Goldstein step-length rule (36) is satisfied, and F_{k+1} in S_F. Then (dJ/dF)_k -> 0 as k -> infinity.

This does not mean that {F_k} necessarily converges to a stationary point of the loss function. Note also that in any case the loss J(F) must attain the same value, say J_0, at all the cluster points of {F_k}, so that for any eps > 0 there exists a nonnegative integer L such that 0 <= J(F_k) - J_0 < eps for all k >= L. If J(F) has a unique stationary point on Pi(J(F_0)), then Theorem 2 guarantees that {F_k} converges to this point, which must be the global minimizer of the loss J(F) on the compact set Pi(J(F_0)).

Various implementation aspects of descent Anderson-Moore algorithms are discussed, e.g., in [32], [46], and [47]. The algorithmic structure of the descent Anderson-Moore method is very attractive. Only linear discrete Lyapunov matrix equations have to be solved at each iteration step. In numerical comparisons [32], [44] the descent Anderson-Moore method has compared favorably with popular general-purpose function minimization methods.

In [43] and [72] the descent Anderson-Moore method is applied to optimal decentralized control. In a numerical comparison [72] it is shown to be superior to the method proposed in [26]. It is also straightforward to generalize the descent Anderson-Moore method to arbitrarily constrained feedback gain matrices F by an elimination of variables technique [43], [73].

In Theorems 1 and 2 it was not assumed that the necessary optimality condition (23) has a unique solution. Then it was convenient to consider the convergence of the gradient sequence {(dJ/dF)_k}. Actually, it is known that condition (23) can have several solutions in some cases [43]. Then it is necessary to compare the different solutions to decide for the global minimizer of J(F) on S_F.

Remark 3.1: For certain parametric LQ problems, the matrix inverses in (24) can become ill-conditioned, or they may not even exist, for some stabilizing feedback gains. The device of (27), (28) is then important, cf. also the bounded condition number requirements in Theorems 1 and 2. One technique is then to choose Gamma_k = eps tr(P_k) I, if kappa(P_k) >= eps^{-1} + 1, and Gamma_k = 0 otherwise. Then kappa(Ptilde_k) < eps^{-1} + 1 for all k, where eps is a small positive number. Delta_k in (27b) can be chosen in a similar way. This simple technique bounds the condition numbers of the matrix inverses in (28) (assuming only that tr P_k and tr S_k > 0 for all k), enhancing the convergence properties of the descent Anderson-Moore method for ill-conditioned parametric LQ problems.

IV. LEVINE-ATHANS LIKE METHODS

Levine and Athans [40] suggested a method for solving the optimal constant output feedback problem for continuous-time systems based on an iterative solution of the necessary optimality conditions of the control problem. At each iteration step a system of nonlinear equations is involved. Levine and Athans [40] were able to show that under an existence assumption their method generates a sequence of monotonically decreasing loss function values. The convergence properties of the method, however, have not been well understood [40], [68]. Recently, some convergence results have been obtained for Levine-Athans like methods [47], [71]. The proofs of these results utilize an interpretation of Levine-Athans type methods with loss increment expansions.

The Levine-Athans Method

Let us insert Pbar_{k+1} = Pbar(F_{k+1}) and P_{k+1} = P(F_{k+1}) for Pbar(F_k + dF) and P(F_k + dF), respectively, in (12) and (11). Then the loss increment (12) gives a quadratic form whose minimizer is given by
(37), where P(.) is given by (7). Thus, each iteration step

    F_{k+1} = F_k + G_{LA,k}   (39)

of the Levine-Athans method requires the solution of a nonlinear matrix equation. Due to its construction, the Levine-Athans method satisfies the descent property

    J(F_k) - J(F_{k+1}) > 0,  if G_{LA,k} != 0   (40)

without any line search along G_{LA,k}, cf. the descent Anderson-Moore method. Presently, however, there seems to be no general existence proof available for a stabilizing solution to the nonlinear equations (37) and (38). Some numerical experience indicates that a stabilizing solution often exists.

Note that (39) can also be written as

    F_{k+1} = -Shat_k^{-1} B^T S_k A P_{k+1} D^T Pbar_{k+1}^{-1}.   (41)

Then either

    (dJ/dF)_k = 0  for some k,   (43a)

or

    (dJ/dF)_k -> 0.   (43b)

Proof: See Appendix B.

Thus, if the loss J(F) has a unique stationary point on Pi(J(F_0)), then Theorem 3 guarantees that {F_k} converges to this point, which is then also the global minimizer of the loss on the by-assumption-compact set Pi(J(F_0)) (see the discussion after Theorem 2).

Toivonen [71] has considered a modified Levine-Athans method for continuous-time systems. Let then the sequence {F_k} be generated as

    F_{k+1} = F_k + theta G_{LA,k},  0 < theta < 2   (44)

where G_{LA,k} is defined as in (37), and (38) for P_{k+1} is replaced with

    P_{k+1} = P(F_k + theta G_{LA,k}).   (45)

Then it is seen from (11) and (37) that

    J(F_{k+1}) - J(F_k) = theta(theta - 2) tr G_{LA,k}^T Shat_k G_{LA,k} Pbar_{k+1}.   (46)

Thus, the modified Levine-Athans method satisfies the descent property

    J(F_k) - J(F_{k+1}) > 0,  if G_{LA,k} != 0.   (47)

[Observe that Shat_k and Pbar_{k+1} are positive definite by assumption, cf. (37).] The introduction of the parameter theta allows the derivation of an existence lemma.

Lemma 1 (Existence of Solution to (37) and (45)): Let S_F be nonempty, and let F_k in S_F. Let Pbar(F) and Shat(F) be positive definite on S_F. Then there exists a real number thetabar > 0, such that for every theta in [0, thetabar) there exists a positive semidefinite matrix P_{k+1} such that P_{k+1} is a solution to (37) and (45), and F_{k+1} defined by (44) satisfies then F_{k+1} in S_F.

Proof: See Appendix C.

Consider then the sequence {F_k} generated by

    F_{k+1} = F_k + theta_k G_{LA,k}   (48)

    P_{k+1} = P(F_k + theta_k G_{LA,k})   (49)

where 0 < theta_k <= 1, and G_{LA,k} is as in (37). A convergence theorem was given for this type of modified Levine-Athans method in [71] for continuous-time systems.

Theorem 4 (Convergence of {G_{LA,k}} and {(dJ/dF)_k}): Let S_F be nonempty. Let F_0 in S_F, and let Pi(J(F_0)) be compact. Let {F_k} be generated by (48), (49), where theta_k in (0, 1], k >= 0, is chosen so as to satisfy the conditions for existence of a solution to (37), (49) for each k. Let inf theta_k > 0.

Note that in the loss increment (11) the arguments F_1 and F_2 in S(F_1), Shat(F_1), P(F_2), and Pbar(F_2) can be permuted due to a symmetry of the loss increment. Let F_{k+1} = F_k + G_{DLA,k}, where G_{DLA,k} is defined as follows. Introduce the notation S_{k+1} = S(F_{k+1}) and Shat_{k+1} = Shat(F_{k+1}). Consider then S_{k+1} and Shat_{k+1} as constant, but yet unknown, matrices. Then the loss increment gives a quadratic form whose minimizer is the dual Levine-Athans step G_{DLA,k} in (51), assuming that Shat_{k+1}^{-1} and Pbar_k^{-1} exist, cf. the Levine-Athans method. G_{DLA,k} is defined implicitly by (51), where S_{k+1} is given by (52). The matrix function S(.) is defined in (8). Thus (51) and (52) form a set of nonlinear matrix equations for G_{DLA,k} (and S_{k+1}).

Note that when D = I (of size n x n) and R_2 = 0, the PLQ control problem reduces to a steady-state LQG problem with complete state information. Then (51), (52) reduce to the algebraic Riccati matrix equation (55). In this case the dual Levine-Athans method solves the control problem in one iteration step only, corresponding to the solution of the algebraic Riccati matrix equation (55). This is a nice property for the dual Levine-Athans method.

Returning to the general case, (51), (52), it follows that

    S = A^T S A + Q - A^T S B Shat^{-1} B^T S A + C_R^T A^T S B Shat^{-1} B^T S A C_R.   (56)
The projective controls

    u(t) = L X_s (D X_s)^{-1} z(t)   (61)

retain the invariant subspace X_s of the full-state feedback u(t) = L x(t):

    (A + BL) X_s = X_s Lambda_s.   (62)

Now, if v(t) = 0, then u(t) = L_0 x(t). Medanic, Petranovic, and Gluhajic [52] consider L defined by

    S = A^T S A + Q - A^T S B Shat^{-1} B^T S A   (63)

    L = -Shat^{-1} B^T S A.   (64)

Thus, L solves the full LQG problem corresponding to the PLQ problem at hand.

Let the system (2), (3) be transformed so that D = [I 0]. It is shown in [52] that if C_R in (56) has a certain projective form, then the DEARE (56) has a solution S >= 0, provided that the corresponding projective controls give a stable closed-loop system, i.e., if rho(A + B L_0) < 1. This seems to be the best result presently available on the existence of solutions to the DEARE (56).

Consider then the solution of the nonlinear equations of the dual Levine-Athans method by a Newton-type method, with G_{DLA}(S(F)) given in terms of Shat(F)^{-1} and S(F), cf. (51), and S(.) defined by (8). The equations for the Newton step dF^i at F^i in S_F (i = 0, 1, ...), with F^0 = F_k, become then

    F^i + dF^i - F_k - G_{DLA}(S(F^i)) - delta_1 G_{DLA} = 0   (70)

where delta_1 G_{DLA} is the first-order Taylor series term of G_{DLA}(S(F)) at S(F^i), given by (71), and

    F^{i+1} = F^i + alpha_i dF^i   (72)

where the step-length parameter alpha_i in (0, 1] is chosen so that the sequence {||F^i - F_k - G_{DLA}(S(F^i))||_2, i = 0, 1, 2, ...} is (strictly) monotone decreasing. See, e.g., [69] for a discussion on how good global convergence properties are obtained with Newton methods. Close to F_k, alpha_i = 1 is accepted in (72) by the descent condition on the norm above, and thus the quadratic terminating rate of convergence property of the ordinary Newton method is obtained for {F^i}. Note that an analogous algorithm can be used to solve the nonlinear equations in the Levine-Athans method.

There are available several convergence results for the dual Levine-Athans method, which are analogous to Theorems 3 and 4.
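All of the gain-space iterations surveyed here share the same computational kernel: a pair of discrete Lyapunov solves per step plus a safeguarded step length. The following minimal Python sketch illustrates this for the descent Anderson-Moore update in the state-feedback case (D = I, R_2 = 0, cf. (30)); the system matrices are hypothetical, and scipy's Lyapunov solver stands in for a production implementation.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Hypothetical system (not from the paper); A is stable, so F = 0 is an
# admissible initial stabilizing gain.  With D = I and R_2 = 0 the
# Anderson-Moore update (24) reduces to -(B^T S_k B + R)^{-1} B^T S_k A, cf. (30).
A = np.array([[0.9, 0.5],
              [0.0, 0.7]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)            # state weight
R = np.array([[1.0]])    # control weight
R1 = np.eye(2)           # process noise covariance

def stable(F):
    return max(abs(np.linalg.eigvals(A + B @ F))) < 1.0

def loss(F):
    # J(F) = tr[(Q + F^T R F) P], with P from the Lyapunov equation (7)
    P = solve_discrete_lyapunov(A + B @ F, R1)
    return float(np.trace((Q + F.T @ R @ F) @ P))

F = np.zeros((1, 2))
for _ in range(30):
    S = solve_discrete_lyapunov((A + B @ F).T, Q + F.T @ R @ F)   # eq. (8)
    G = -np.linalg.solve(B.T @ S @ B + R, B.T @ S @ A) - F        # AM direction
    alpha, J0 = 1.0, loss(F)
    # Backtracking on the step length, cf. (25), (35): the trial gain must
    # remain stabilizing (F_{k+1} in S_F) and strictly decrease the loss.
    while alpha > 1e-12 and (not stable(F + alpha * G)
                             or loss(F + alpha * G) >= J0):
        alpha *= 0.5
    F = F + alpha * G

print(stable(F))  # the final gain is stabilizing
```

This sketch omits the conditioning devices of (27) and the Goldstein test (36); it only demonstrates the two-Lyapunov-solves-per-step structure and the descent safeguard.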
    F_{k+1} = F_k + G_{DLA,k} + G_{LA,k}.   (77)

The alternating Levine-Athans method is initialized by choosing an F_0 in S_F, and by computing P_0 = P(F_0). Each iteration step of the method consists then of first solving the nonlinear equations (73), (74) for G_{DLA,k} and S, and then solving the nonlinear equations (75), (76) using the previously computed value for G_{DLA,k} and S. A convergence result analogous to Theorems 4 and 5 for the Levine-Athans method and the dual Levine-Athans method, respectively, can be obtained for the alternating Levine-Athans method.

V. THE NEWTON METHOD

Newton-like methods for solving function minimization problems have the well-known desirable characteristic of showing second-order rate of convergence close to a minimizer (see, e.g., [24]). These methods have been successful in solving algebraic Riccati equations (see, e.g., [2]). The software implementation of an effective Newton method for parametric LQ problems is, however, not a trivial problem. The implementation should be such that the computations at each iteration step are not excessive compared to the descent Anderson-Moore method. This would make the Newton method competitive with the descent Anderson-Moore method due to the second-order rate of convergence property.

In Newton's method a sequence of feedback gains {F_k} is generated according to

    F_{k+1} = F_k + alpha_k G_{N,k}   (78)

where G_{N,k} is taken to be the minimizer of the second-order truncated Taylor series expansion of the loss increment dJ (13) at F_k in S_F, i.e., G_{N,k} is the minimizer of the quadratic form (79) subject to (80), and delta S_{1,k} is the first-order Taylor series term of dS = S(F_k + dF) - S(F_k), cf. (16). The step-length parameter alpha_k in (78) is included to improve the global convergence properties of the Newton method. Note that if the quadratic form (79) is positive definite, then it has a unique minimizer.

The quadratic program (79), (80), with the equality constraint defining delta S_{1,k} as a linear function of dF, has the necessary optimality conditions (81), (82), and

    delta P_{1,k} = (A + B F_k D) delta P_{1,k} (A + B F_k D)^T + B dF [Pbar_k F_k^T B^T + D P_k A^T] + [Pbar_k F_k^T B^T + D P_k A^T]^T dF^T B^T   (83)

where delta P_{1,k} is the first-order Taylor series expansion term (in dF) of dP = P(F_k + dF) - P(F_k).

Thus, G_{N,k} is given as a solution to the system of linear matrix equations (81)-(83). The solution exists and is unique if the quadratic program is positive definite, i.e., if the Hessian matrix of the loss function J(F) is positive definite at F_k. For effective solution of the linear matrix equations (81)-(83) it is convenient to make a similarity transformation, such that the transformed closed-loop system matrix obtains a simple form. Let T_k be a similarity transformation and let C_k denote the transformed closed-loop system matrix, i.e.,

    T_k^{-1} (A + B F_k D) T_k = C_k.   (84)

Introduce also the symmetric matrices K and L such that

    delta S_{1,k} = (T_k^{-1})^T K T_k^{-1}   (85a)
    delta P_{1,k} = T_k L T_k^T.   (85b)

Then the system of linear equations (81)-(83) can be written as (86)-(88).

In [73] the Newton step is solved in an iterative way using the conjugate gradient method. Then the system of linear equations (86)-(88) is solved for a sequence of arguments dF. This can be done effectively if the similarity transformation T_k is chosen so that C_k will be in real Schur, or quasi-triangular, form. Note that the Bartels and Stewart [6] algorithm for solving linear matrix equations is based on this transformation. An interesting alternative is to choose T_k so that the closed-loop system matrix (A + B F_k D) will be reduced to block diagonal form [48]. An algorithm for such a reduction is given in [7].

Note that it may still be computationally expensive to solve the linear equations (86)-(88) exactly at each iteration step k. This may not be justified when far from the solution, as the benefits of the Newton method are mainly local close to the solution. In [73] an inexact Newton method is considered in which the Newton equations (86)-(88) are solved only approximately in such a way that close to the solution quadratic rate of convergence is obtained.

Good global convergence properties are obtained with the Newton method when the Hessian matrix of the loss J(F) is positive definite on S_F, if the step-length parameter alpha_k in (78) is chosen so that a satisfactory reduction is obtained in the loss function value at each iteration step. The Newton step is a descent
direction of the loss, i.e., (89) holds. Thus, the Goldstein step-length rule, cf. condition (36), or, e.g., the Armijo line search process, will give effective schemes to choose the step-length parameter alpha_k in (78). Close to a minimum these schemes accept alpha_k = 1, preserving the second-order final convergence rate of the Newton method. When the Hessian matrix of the loss J(F) is not positive definite everywhere on S_F, the Newton method must be somewhat modified. One possibility is to solve a positive definite subproblem of the quadratic program (79), (80), or to consider restricted step Newton methods (see, e.g., [24]).

For details of implementation of the Newton method and numerical results, see [73]. There it is also shown that it may be advantageous to use a preconditioned form of the conjugate gradient method when solving the Newton equations. The Anderson-Moore search direction, cf. (25), offers then a natural preconditioning direction.

Remark 5.1: The Hessian matrix of the loss can also be of interest in its own right. It gives information on the convexity properties of the loss. Furthermore, it is possible to consider a set of quadratic approximations of the loss whose domains "cover" the set of stabilizing feedback gains S_F to obtain inclusions of the global minimum of the loss on S_F, cf. Taylor form methods for obtaining inclusions for the range of functions [61].

VI. OTHER METHODS

In this section some alternative methods for solving parametric LQ problems are discussed. Consider the matrix equations

    P = A P A^T + R_1 - A P D^T Pbar^{-1} D P A^T + (I - B Shat^{-1} B^T S) A P D^T Pbar^{-1} D P A^T (I - S B Shat^{-1} B^T)   (91)

where Shat = B^T S B + R and Pbar = D P D^T + R_2. The corresponding feedback gain matrix F is given as

    F = -Shat^{-1} B^T S A P D^T Pbar^{-1}.   (92)

Direct solution of the matrix equations (90), (91) has not been much discussed in the literature. In [11] a Newton-Raphson method has been presented to solve an analogous set of nonlinear equations. An eigenprojection method for solving the nonlinear matrix equations expressing the necessary optimality conditions of a parametric LQ problem, based on a dynamic compensator formulation, has been given in [37] and [9].

VII. THE INITIALIZATION PROBLEM

In this section the problem of finding an initial stabilizing feedback gain matrix F_0 in S_F and the related problem of output feedback stabilizability are discussed. However, this section does not intend to be a survey of these control topics.

It would be useful to have a computationally simple test to decide whether S_F is nonempty or empty, such that it tries to construct a stabilizing feedback gain matrix, when the system (2) is open-loop unstable. Anderson et al. [3] have suggested the use of methods to solve polynomial inequalities in such a test. Miller et al. [54] use a special method to minimize a spectral radius functional. Prakash and Fam [60] consider a geometric approach. Soh, Berger, and Dabke [67] suggest a cost-function approach based on sufficient conditions for output feedback stabilizability.

Consider the problem of finding an F in S_F, i.e., a feedback gain matrix F such that rho(A + BFD) < 1. This is actually a set of n inequalities

    |lambda_i(A + BFD)|^2 < 1,  i = 1, ..., n.   (93)

Godbout and Jordan [28] give gradient matrices for the eigenvalues lambda_i(A + BFD). Thus, it is feasible to use Newton-like algorithms for solving sets of inequalities (see, e.g., [50]).

Alternatively, as

    rho(A + BFD) = max_i |lambda_i(A + BFD)|   (94)

one may minimize a spectral radius functional [54], or consider the coefficients in the characteristic polynomial of (A + BFD)

    c(z) = z^n + t_1(F) z^{n-1} + ... + t_n(F).   (97)

Let now F* denote the global minimizer of the function H(F) (98) on R^{p x r}. If H(F*) < 1, then the system (2), (3) is (output feedback) stabilizable. Soh, Berger, and Dabke [67] suggest to use the global minimizer of the differentiable function sum_i t_i^2(F) in the
matrix equations for continuous-time systems. An interesting new dominance test H ( F ) < 1 .
666 IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. AC-32, NO. 8, AUGUST 1987
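The safeguarded Newton iteration of Section V, i.e., the Newton direction combined with an Armijo backtracking line search and a fallback descent direction when the Hessian is indefinite, can be sketched generically as follows. This is an illustration of the step-length logic only, applied to the standard Rosenbrock test function as a stand-in for the PLQ loss J(F); it is not the authors' implementation:

```python
import numpy as np

def armijo_newton(f, grad, hess, x0, c=1e-4, tol=1e-10, max_iter=100):
    """Damped Newton iteration with an Armijo backtracking line search.
    Close to the minimum the full step a_k = 1 is accepted, preserving
    the second-order (quadratic) terminal convergence rate."""
    x = np.asarray(x0, float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = np.linalg.solve(hess(x), -g)   # Newton direction
        if g @ d >= 0:                     # Hessian indefinite: not a descent
            d = -g                         # direction, fall back to -gradient
        a = 1.0
        while f(x + a * d) > f(x) + c * a * (g @ d):   # Armijo condition
            a *= 0.5
        x = x + a * d
    return x

# Rosenbrock test function (a stand-in for the PLQ loss J(F)).
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2*(1 - x[0]) - 400*x[0]*(x[1] - x[0]**2),
                           200*(x[1] - x[0]**2)])
hess = lambda x: np.array([[2 - 400*(x[1] - 3*x[0]**2), -400*x[0]],
                           [-400*x[0], 200.0]])

x_star = armijo_newton(f, grad, hess, np.array([-1.2, 1.0]))
```

The same skeleton applies with the Goldstein rule in place of the Armijo condition; what matters for the convergence results quoted above is that the rule accepts a_k = 1 near the minimum.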
Consider the scaled system

    y(t + 1) = αAy(t) + αBu(t).   (100)

For the scaled system (100), h(F, α) is a standard quadratic criterion of the form (1). Let then S_F(α) denote the set of feedback gains stabilizing the scaled system (100), cf. the definition (4) of S_F. The standard parametric LQ problem for the scaled system can then be solved with the methods surveyed above.

The concepts of the standard parametric LQ problem, cf. (1)-(3), have natural extensions to the robust parametric LQ problem, in which M linear systems are models of the same physical system representing different operating conditions (and/or various actuator/sensor failure situations, modeling errors, etc.). Here y(t) is an output vector, and q^{−1} is the backward shift operator (q^{−1}y(t) = y(t − 1), etc.). The control law is constrained to use one and the same feedback gain F for all the models, and w_i (i = 1, …, M) are the corresponding weighting functions for the M linear systems. Let {∂J_i/∂F, i = 1, …, M} denote the gradient matrices of the functions {J_i(F), i = 1, …, M} with respect to the feedback matrix F, cf. (14). Consider the control criterion

    J_R(F) = Σ_{i=1}^M w_i J_i(F).   (104)

Consider then applying the effective descent Anderson-Moore method, cf. Section III. Introduce the quadratic form

    q_R(dF) = tr [dF^T ∂J_R/∂F] + Σ_{i=1}^M w_i tr [dF^T S_i(F) dF P_i(F)].   (109)

The well-known LQG solution for globally minimizing (i.e., also with respect to controller structure) the quadratic criterion (1) often results in a fairly complex regulator. Surprisingly often a much simpler regulator will give almost the same performance. Parametric LQ methods make such a comparison attractive.

Recently it has also been shown that, e.g., the descent Anderson-Moore and the Newton methods can be readily applied to optimal decentralized control problems and to parametric LQ problems with arbitrary controller structure constraints. Furthermore, dynamic compensators can be considered. The technique of the loss increment has been the main analytical tool in these recent developments. With this tool it is possible to generalize many of the effective methods surveyed in this paper to other realistic parametric LQ control problems, including robust parametric LQ problems.
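Evaluating the robust criterion (104) only requires the per-model losses J_i(F), each available from a discrete Lyapunov equation. A minimal sketch follows (NumPy; the scalar models, the weights, and the particular loss form J_i(F) = tr[(Q_i + D_i^T F^T R_i F D_i) P_i(F)] are illustrative assumptions, not the paper's exact problem data):

```python
import numpy as np

def dlyap(Phi, W):
    """Solve P = Phi P Phi^T + W by vectorization (fine for small n)."""
    n = Phi.shape[0]
    vecP = np.linalg.solve(np.eye(n * n) - np.kron(Phi, Phi), W.reshape(-1))
    return vecP.reshape(n, n)

def plq_loss(F, A, B, D, Q, R, W):
    """Quadratic loss of one model under output feedback u = F y, y = D x:
    J(F) = tr[(Q + D^T F^T R F D) P], with P the closed-loop state covariance."""
    Phi = A + B @ F @ D
    P = dlyap(Phi, W)
    return np.trace((Q + D.T @ F.T @ R @ F @ D) @ P)

def robust_loss(F, models, w):
    """Robust criterion J_R(F) = sum_i w_i J_i(F), cf. (104)."""
    return sum(wi * plq_loss(F, *m) for wi, m in zip(w, models))

# Two hypothetical scalar models of the same plant (nominal and perturbed).
I = np.eye(1)
m1 = (0.9 * I, I, I, I, 0.1 * I, I)    # (A, B, D, Q, R, W): stable variant
m2 = (1.1 * I, I, I, I, 0.1 * I, I)    # open-loop unstable variant
F = np.array([[-0.5]])                 # one gain shared by both models

JR = robust_loss(F, [m1, m2], [0.5, 0.5])
```

For the scalar data the closed forms are easy to check by hand: with Phi = a + f, P = W/(1 − Phi²), so J_1(F) = (1 + 0.1·0.25)/(1 − 0.16) and J_2(F) = (1 + 0.1·0.25)/(1 − 0.36).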
In numerical comparisons (see, e.g., [32], [44], and [72]) the descent Anderson-Moore method has compared favorably with tested general-purpose function minimization methods (including the DFP and BFGS variable metric methods). The descent Newton method usually converges in fewer iterations than the descent Anderson-Moore method, but it is more complex to implement [73]. The Levine-Athans like methods require no line search, but at the moment only some partial existence results for a solution to the related nonlinear matrix equations seem to be available.

Projection techniques [53], [37] are providing new insight into some of the remaining open issues.
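One step of the descent Anderson-Moore iteration compared above can be sketched as follows: solve two Lyapunov equations for the current gain, form a gain of the structure of (92), and move toward it with a halving line search that maintains stability and loss reduction. The update formula and the scalar test data below are illustrative assumptions (a sketch of the scheme, not the paper's exact algorithm):

```python
import numpy as np

def dlyap(Phi, W):
    """Solve P = Phi P Phi^T + W by vectorization."""
    n = Phi.shape[0]
    return np.linalg.solve(np.eye(n * n) - np.kron(Phi, Phi),
                           W.reshape(-1)).reshape(n, n)

def loss(F, A, B, D, Q, R, W):
    Phi = A + B @ F @ D
    if np.abs(np.linalg.eigvals(Phi)).max() >= 1.0:
        return np.inf                              # outside S_F
    return np.trace((Q + D.T @ F.T @ R @ F @ D) @ dlyap(Phi, W))

def am_step(F, A, B, D, Q, R, W):
    """One descent Anderson-Moore step with a halving line search."""
    Phi = A + B @ F @ D
    P = dlyap(Phi, W)                              # closed-loop covariance
    S = dlyap(Phi.T, Q + D.T @ F.T @ R @ F @ D)    # cost-to-go matrix
    # Anderson-Moore target gain, cf. the structure of (92):
    Fbar = -np.linalg.solve(B.T @ S @ B + R, B.T @ S @ A @ P @ D.T) \
           @ np.linalg.inv(D @ P @ D.T)
    J0, a = loss(F, A, B, D, Q, R, W), 1.0
    while a > 1e-12:
        Fa = F + a * (Fbar - F)
        if loss(Fa, A, B, D, Q, R, W) < J0:        # stable and loss reduced
            return Fa
        a *= 0.5                                   # halve the step length
    return F                                       # numerically stationary

I = np.eye(1)
A, B, D, Q, R, W = 0.9 * I, I, I, I, 0.1 * I, I
F = np.zeros((1, 1))
J_before = loss(F, A, B, D, Q, R, W)
for _ in range(20):
    F = am_step(F, A, B, D, Q, R, W)
J_after = loss(F, A, B, D, Q, R, W)
```

By construction each step either strictly reduces the loss or leaves the gain unchanged, which is the monotone-descent property that distinguishes the descent Anderson-Moore variant from the plain fixed-point iteration.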
APPENDIX A

Compactness of the Level Set Ω(J(F̄)), F̄ ∈ S_F

Let ∂S_F denote the boundary of S_F, i.e., F ∈ ∂S_F if and only if the closed-loop system matrix A + BFD has at least one eigenvalue on the unit circle and the remaining eigenvalues inside the unit circle. To guarantee that S_F is bounded (and thus that ∂S_F exists in the sense defined), it is required that B and D have full column and row rank, respectively.

Consider then the unbounded-loss-at-the-boundary assumption, cf. the discussion after (18),

    lim inf_{F → F_a, F ∈ S_F} J(F) = ∞ for all F_a ∈ ∂S_F.   (A1)

The matrix function P(F) in (7) can be expressed as an infinite series

    P(F) = Σ_{i=0}^∞ Φ(F)^i U(F)U(F)^T (Φ(F)^i)^T   (A2)

where Φ(F) = A + BFD, and U(F)U(F)^T = R_1 + BFR_2F^TB^T. Thus, the loss (5) can be written as

    J(F) = Σ_{i=0}^∞ tr [H(F)Φ(F)^i U(F)][H(F)Φ(F)^i U(F)]^T + tr [F^T R_F F R_2]   (A3)

where H(F)^T H(F) = Q + D^T F^T R_F F D, and F ∈ S_F. Let F_a ∈ ∂S_F. Then … As an example of applying (A5),

    Q > 0 and R_2 > 0 => (A1)   (A6)

when B and D have full rank, cf. also [32].

Introduce the observability matrix Θ(F) of the pair (H(F), Φ(F)). Let Θ(F) have rank n_0 ≤ n, and let Θ̂(F) denote a square submatrix of Θ(F) of size n_0 and rank n_0. The loss (A3) can then be written as

    J(F) = Σ_{k=0}^∞ tr [Θ̂(F)Φ(F)^{kn_0} U(F)][Θ̂(F)Φ(F)^{kn_0} U(F)]^T + tr [F^T R_F F R_2]   (A8)

where F ∈ S_F. Let F_a ∈ ∂S_F. Then

    lim_{F → F_a, F ∈ S_F} J(F) is finite => lim_{k→∞} lim_{F → F_a, F ∈ S_F} Θ̂(F)Φ(F)^{kn_0} U(F) = 0.   (A9)

Therefore,

    (H(F_a), Φ(F_a)) observable and U(F_a) nonsingular for all F_a ∈ ∂S_F => (A1).   (A10)

Note that in (A10) observability can be replaced by the weaker assumption of detectability, as the unobservable modes then have corresponding eigenvalues strictly inside the unit circle.

To summarize, (A, B, D) output feedback stabilizable, B and D of full column and row rank, respectively, and (H(F_a), Φ(F_a)) detectable and U(F_a) nonsingular for all F_a ∈ ∂S_F =>
1) (A1);
2) Ω(J(F̄)) = {F ∈ S_F | J(F) ≤ J(F̄)} is compact for any F̄ ∈ S_F;
3) the PLQ problem (17) has a minimizer in S_F.   (A11)

Note that in (A11), 2) and 3) are implied by 1). This ends the short "intuitive" discussion on the meaning of the compactness assumption (20), and of the related unbounded-loss-at-the-boundary assumption. The compactness assumption is seen to be a most natural assumption.
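The series representation (A2) also gives a convenient numerical cross-check: truncating the series must reproduce the solution of the Lyapunov equation P = Φ P Φ^T + U U^T whenever ρ(Φ) < 1. A small sketch (NumPy; the matrices are arbitrary stable test data, not from the paper):

```python
import numpy as np

def dlyap(Phi, W):
    """P solving P = Phi P Phi^T + W, by vectorization."""
    n = Phi.shape[0]
    return np.linalg.solve(np.eye(n * n) - np.kron(Phi, Phi),
                           W.reshape(-1)).reshape(n, n)

def series_P(Phi, W, terms=200):
    """Truncation of the series (A2): sum_{i=0}^{terms-1} Phi^i W (Phi^i)^T."""
    P, M = np.zeros_like(W), np.eye(Phi.shape[0])
    for _ in range(terms):
        P += M @ W @ M.T
        M = Phi @ M                      # M becomes Phi^{i+1}
    return P

Phi = np.array([[0.0, 1.0], [-0.3, 0.5]])   # rho(Phi) = sqrt(0.3) < 1
W = np.array([[1.0, 0.0], [0.0, 2.0]])      # W = U U^T
err = np.abs(series_P(Phi, W) - dlyap(Phi, W)).max()
```

Because the truncation error decays like ρ(Φ)^(2·terms), two hundred terms are far more than enough here; near ∂S_F, where ρ(Φ) → 1, the series converges arbitrarily slowly, which is the numerical face of the unbounded-loss-at-the-boundary behavior discussed above.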
    P_{k+1} − P_k → 0   (B1)

due to (43a). Assume that (43b) is not true. Then there exists a subsequence {F_{k_j}} such that F_{k_j} ∈ Ω(J(F_0)) for all k_j and a number δ_2 > 0 such that ‖(∂J/∂F)_{k_j}‖ ≥ δ_2. Therefore, there exists a number ν_2 > 0 and an integer K_2 ≥ 0 such that …

and the criterion

    f(y, θ) = 0.   (C1)

Then F_k ∈ S_F(α_k). From (D.5), (D.6), and the fact that α_k < 1 it follows that, if S_F is nonempty,

    h(F_{k−1}, α_k) < min_{F ∈ S_F} h(F, 1)   (D.9)

where F_{k+1} is a local minimizer of (D.6). Thus, (D.10) and (D.11) apply in this case as well. If S_F(α_k) consists of disconnected sets for some k ≥ 0, then the algorithm can fail and should be reinitialized with a different F_0.
REFERENCES

[1] J. Ackermann, "Parameter space design of robust control systems," IEEE Trans. Automat. Contr., vol. AC-25, pp. 1058-1072, 1980.
[2] B. D. O. Anderson, "Second-order convergent algorithms for the steady-state Riccati equation," Int. J. Contr., vol. 28, pp. 295-306, 1978.
[3] B. D. O. Anderson, N. K. Bose, and E. I. Jury, "Output feedback stabilization and related problems - Solution via decision methods," IEEE Trans. Automat. Contr., vol. AC-20, pp. 53-56, 1975.
[4] B. D. O. Anderson and J. B. Moore, Linear Optimal Control. Englewood Cliffs, NJ: Prentice-Hall, 1971.
[5] M. J. Balas, "Trends in large space structure control theory: Fondest hopes, wildest dreams," IEEE Trans. Automat. Contr., vol. AC-27, pp. 522-535, 1982.
[6] R. H. Bartels and G. W. Stewart, "Solution of the matrix equation AX + XB = C," Commun. ACM, vol. 15, pp. 820-826, 1972.
[7] C. A. Bavely and G. W. Stewart, "An algorithm for computing reducing subspaces by block diagonalization," SIAM J. Numer. Anal., vol. 16, pp. 359-367, 1979.
[8] C. S. Berger, "An algorithm for designing suboptimal dynamic controllers," IEEE Trans. Automat. Contr., vol. AC-19, pp. 596-597, 1974.
[9] D. S. Bernstein, L. D. Davis, and S. W. Greeley, "The optimal projection equations for fixed-order sampled-data dynamic compensation with computation delay," IEEE Trans. Automat. Contr., vol. AC-31, pp. 859-862, 1986.
[10] D. S. Bernstein and S. W. Greeley, "Robust controller synthesis using the maximum entropy design equations," IEEE Trans. Automat. Contr., vol. AC-31, pp. 362-364, 1986.
[11] S. P. Bingulac, N. K. Cuk, and M. S. Calovic, "Calculation of optimum feedback gains for output-constrained regulators," IEEE Trans. Automat. Contr., vol. AC-20, pp. 164-166, 1975.
[12] J. R. Broussard and N. Halyo, "Active flutter control using discrete optimal constrained dynamic compensators," in Proc. 1983 Amer. Contr. Conf., San Francisco, CA, 1983.
[13] --, "Optimal multi-rate output feedback," in Proc. 23rd IEEE Conf. Decision Contr., Las Vegas, NV, 1984.
[14] P. E. Caines and D. Q. Mayne, "On the discrete time matrix Riccati equation of optimal control," Int. J. Contr., vol. 12, pp. 785-794, 1970.
[15] --, "On the discrete time matrix Riccati equation of optimal control," Int. J. Contr., vol. 14, pp. 205-207, 1971.
[16] A. J. Calise and D. D. Moerder, "Optimal output feedback design of systems with ill-conditioned dynamics," Automatica, vol. 21, pp. 271-276, 1985.
[17] R. M. Chamberlain, M. J. D. Powell, C. Lemarechal, and H. C. Pedersen, "The watchdog technique for forcing convergence in algorithms for constrained optimization," Math. Progr. Study, vol. 16, pp. 1-17, 1982.
[18] K. C. Cheok, N. K. Loh, and M. A. Zohdy, "Discrete-time optimal feedback controllers with time-multiplied performance indexes," IEEE Trans. Automat. Contr., vol. AC-30, pp. 494-496, 1985.
[19] S. S. Choi and H. R. Sirisena, "Computation of optimal output feedback gains for linear multivariable systems," IEEE Trans. Automat. Contr., vol. AC-19, pp. 257-258, 1974.
[20] E. G. Collins, Jr. and R. E. Skelton, "A theory of state covariance assignment for discrete systems," IEEE Trans. Automat. Contr., vol. AC-32, pp. 35-41, 1987.
[21] E. J. Davison and I. J. Ferguson, "The design of controllers for the multivariable robust servomechanism problem using parameter optimization methods," IEEE Trans. Automat. Contr., vol. AC-26, pp. 93-110, 1981.
[22] E. J. Davison and Ü. Özgüner, "Characterizations of decentralized fixed modes for interconnected systems," Automatica, vol. 19, pp. 169-182, 1983.
[23] P. Fleming, "A non-linear programming approach to the computer-aided design of regulators using a linear quadratic formulation," Int. J. Contr., vol. 42, pp. 257-268, 1985.
[24] R. Fletcher, Practical Methods of Optimization, Vol. 1. New York: Wiley, 1980.
[25] D. Gangsaas, K. R. Bruce, J. D. Blight, and U.-L. Ly, "Application of modern synthesis to aircraft control: Three case studies," IEEE Trans. Automat. Contr., vol. AC-31, pp. 995-1014, 1986.
[26] J. C. Geromel and J. Bernussou, "An algorithm for optimal decentralized control of dynamic systems," Automatica, vol. 15, pp. 489-491, 1979.
[27] --, "Optimal decentralized control of dynamic systems," Automatica, vol. 18, pp. 545-557, 1982.
[28] L. F. Godbout and D. Jordan, "Gradient matrices for output feedback systems," Int. J. Contr., vol. 32, pp. 411-433, 1980.
[29] A. A. Goldstein, "On steepest descent," J. SIAM Contr., Ser. A, vol. 3, pp. 147-151, 1965.
[30] G. C. Goodwin and P. J. Ramadge, "Design of restricted complexity adaptive controllers," IEEE Trans. Automat. Contr., vol. AC-24, pp. 584-588, 1979.
[31] G. Guardabassi, A. Locatelli, C. Maffezzoni, and N. Schiavoni, "Computer-aided design of structurally constrained multivariable regulators. Part 1: Problem statement, analysis and solution," IEE Proc., vol. 130, Pt. D, pp. 155-164, 1983; "Part 2: Applications," IEE Proc., vol. 130, Pt. D, pp. 165-172, 1983.
[32] N. Halyo and J. R. Broussard, "A convergent algorithm for the stochastic infinite-time discrete optimal output feedback problem," in Proc. Joint Automat. Contr. Conf., Charlottesville, VA, 1981.
[33] G. A. Hewer, "An iterative technique for the computation of the steady state gains for the discrete optimal regulator," IEEE Trans. Automat. Contr., vol. AC-16, pp. 382-384, 1971.
[34] W. E. Hopkins, Jr., J. Medanic, and W. R. Perkins, "Output feedback pole placement in the design of suboptimal linear quadratic regulators," Int. J. Contr., vol. 34, pp. 593-612, 1981.
[35] H. P. Horisberger and P. R. Belanger, "Solution of the optimal output feedback problem by conjugate gradients," IEEE Trans. Automat. Contr., vol. AC-19, pp. 434-435, 1974.
[36] D. C. Hyland and D. S. Bernstein, "The optimal projection equations for fixed-order dynamic compensation," IEEE Trans. Automat. Contr., vol. AC-29, pp. 1034-1037, 1984.
[37] --, "The optimal projection equations for model reduction and the relationships among the methods of Wilson, Skelton, and Moore," IEEE Trans. Automat. Contr., vol. AC-30, pp. 1201-1211, 1985.
[38] D. E. Johansen, "Optimal control of linear stochastic systems with complexity constraints," in Advances in Control Systems, Vol. 4. New York: Academic, 1966.
[39] A. J. Laub, "A Schur method for solving algebraic Riccati equations," IEEE Trans. Automat. Contr., vol. AC-24, pp. 913-921, 1979.
[40] W. S. Levine and M. Athans, "On the determination of the optimal constant output feedback gains for linear multivariable systems," IEEE Trans. Automat. Contr., vol. AC-15, pp. 44-48, 1970.
[41] D. P. Looze, "A dual optimization procedure for linear quadratic robust control problems," Automatica, vol. 19, pp. 299-302, 1983.
[42] D. P. Looze and N. R. Sandell, Jr., "Gradient calculations for linear quadratic fixed-control structure problems," IEEE Trans. Automat. Contr., vol. AC-25, pp. 285-288, 1980.
[43] P. M. Mäkilä, "Constrained linear quadratic Gaussian control for process application," Ph.D. dissertation, Åbo Akademi (the Swedish University of Åbo), Finland, 1982.
[44] --, "Linear quadratic design of structure-constrained controllers," in Proc. 1983 Amer. Contr. Conf., San Francisco, CA, 1983.
[45] --, "A self-tuning regulator based on optimal output feedback theory," Automatica, vol. 20, pp. 671-679, 1984.
[46] --, "On the Anderson-Moore method for solving the optimal output feedback problem," IEEE Trans. Automat. Contr., vol. AC-29, pp. 834-836, 1984.
[47] --, "Parametric LQ control," Int. J. Contr., vol. 41, pp. 1413-1428, 1985.
[48] P. M. Mäkilä and H. T. Toivonen, "On numerical methods for parametric LQ problems," in Proc. 1986 Amer. Contr. Conf., Seattle, WA, 1986.
[49] P. M. Mäkilä, T. Westerlund, and H. T. Toivonen, "Constrained linear quadratic Gaussian control with process applications," Automatica, vol. 20, pp. 15-29, 1984.
[50] D. Q. Mayne and M. Sahba, "An efficient algorithm for solving inequalities," J. Optimiz. Theory Appl., vol. 45, pp. 407-423, 1985.
[51] P. J. McLane, "Linear optimal stochastic control using instantaneous output feedback," Int. J. Contr., vol. 13, pp. 383-396, 1971.
[52] J. Medanic, D. Petranovic, and N. Gluhajic, "The design of output regulators for discrete-time linear systems by projective controls," Int. J. Contr., vol. 41, pp. 615-639, 1985.
[53] J. Medanic and Z. Uskokovic, "The design of optimal output regulators for linear multivariable systems with constant disturbances," Int. J. Contr., vol. 37, pp. 809-830, 1983.
[54] L. F. Miller, R. G. Cochran, and J. W. Howze, "Output feedback stabilization by minimization of a spectral radius functional," Int. J. Contr., vol. 27, pp. 455-462, 1978.
[55] D. D. Moerder and A. J. Calise, "Convergence of a numerical algorithm for calculating optimal output feedback gains," IEEE Trans. Automat. Contr., vol. AC-30, pp. 900-903, 1985.
[56] D. M. Moerder, N. Halyo, J. R. Broussard, and A. K. Caglayan, "Application of precomputed control laws in a reconfigurable aircraft flight control system," presented at the 1986 Amer. Contr. Conf., Seattle, WA, 1986.
[57] J. O'Reilly, "Optimal instantaneous output feedback controllers for linear discrete-time systems with inaccessible state," Int. J. Syst. Sci., vol. 9, pp. 9-16, 1978.
[58] --, "Optimal low-order feedback controllers for linear discrete-time systems," in Control and Dynamic Systems, Vol. 16. New York: Academic, 1980.
[59] M. J. D. Powell, "Some global convergence properties of a variable metric algorithm for minimization without exact line searches," in Proc. Symp. Nonlinear Programming, Amer. Math. Soc., 1975.
[60] M. N. Prakash and A. T. Fam, "A geometric approach to stabilization by output feedback," Int. J. Contr., vol. 37, pp. 111-125, 1983.
[61] H. Ratschek and J. Rokne, Computer Methods for the Range of Functions. Chichester: Ellis Horwood, 1984.
[62] S. Richter and R. DeCarlo, "A homotopy method for eigenvalue assignment using decentralized state feedback," IEEE Trans. Automat. Contr., vol. AC-29, pp. 148-158, 1984.
[63] N. R. Sandell, Jr., P. Varaiya, M. Athans, and M. G. Safonov, "Survey of decentralized control methods for large scale systems," IEEE Trans. Automat. Contr., vol. AC-23, pp. 108-128, 1978.
[64] D. F. Shanno and K. H. Phua, "Numerical comparison of several variable-metric algorithms," J. Optimiz. Theory Appl., vol. 25, pp. 507-518, 1978.
[65] M. Sidar and B.-Z. Kurtaran, "Optimal low-order controllers for linear stochastic systems," Int. J. Contr., vol. 22, pp. 377-387, 1975.
[66] T. Söderström, "On some algorithms for design of optimal constrained regulators," IEEE Trans. Automat. Contr., vol. AC-23, pp. 1100-1101, 1978.
[67] C. B. Soh, C. S. Berger, and K. P. Dabke, "A simple approach to stabilization of discrete-time systems by output feedback," Int. J. Contr., vol. 42, pp. 1481-1490, 1985.
[68] Y. G. Srinivasa and T. Rajagopalan, "Algorithms for the computation of optimal output feedback gains," in Proc. 18th IEEE Conf. Decision Contr., Fort Lauderdale, FL, 1979.
[69] J. Stoer and R. Bulirsch, Introduction to Numerical Analysis. New York: Springer-Verlag, 1980.
[70] H. T. Toivonen, "Multivariable controller for discrete stochastic amplitude-constrained systems," Modeling, Ident. Contr., vol. 4, pp. 83-93, 1983.
[71] --, "A globally convergent algorithm for the optimal constant output feedback problem," Int. J. Contr., vol. 41, pp. 1589-1599, 1985.
[72] H. T. Toivonen and P. M. Mäkilä, "A convergent Anderson-Moore algorithm for optimal decentralized control," Automatica, vol. 21, pp. 743-744, 1985.
[73] --, "On Newton's method for solving parametric linear quadratic control problems," Int. J. Contr., 1987.
[74] C. J. Wenk and C. H. Knapp, "Parameter optimization in linear systems with arbitrarily constrained controller structure," IEEE Trans. Automat. Contr., vol. AC-25, pp. 496-500, 1980.
[75] T. Westerlund, "A digital quality control system for an industrial dry process rotary cement kiln," IEEE Trans. Automat. Contr., vol. AC-26, pp. 885-890, 1981.
[76] Y. Xi and G. Schmidt, "A note on the location of the roots of a polynomial," IEEE Trans. Automat. Contr., vol. AC-30, pp. 78-80, 1985.
[77] T. Yahagi, "A new method of optimal digital PID feedback control," Int. J. Contr., vol. 18, pp. 849-861, 1973.

Pertti M. Mäkilä (SM'84) was born in Turku, Finland, in 1954. He received the M.Sc. and Ph.D. degrees in chemical engineering in 1978 and 1983, respectively, both from Åbo Akademi (the Swedish University of Åbo), Turku, Finland.
He has been a Senior Fulbright Scholar at the University of California, Berkeley, Gendron Fellow at the Pulp and Paper Research Institute of Canada, Montreal and Vancouver, and manager of paper mill surveys at the R&D Department of Valmet, Inc., Turku. Presently he is a Senior Research Fellow at the Academy of Finland. His research interests include process control, adaptive control, and stochastic control theory.

Hannu T. Toivonen ('80) was born in Turku (Åbo), Finland, in 1952. He received the M.Sc. and Ph.D. degrees in chemical engineering in 1976 and 1981, respectively, both from Åbo Akademi (the Swedish University of Åbo), Turku, Finland.
From 1979 to 1981 he was employed as a Research Fellow of the Academy of Finland, and from 1982 to 1983 he was a Postdoctoral Fellow at the Norwegian Institute of Technology, Trondheim, Norway. His research interests include applications of stochastic control theory, self-tuning control, and nonlinear programming.