PROCEEDINGS OF THE IEEE, VOL. 60, NO. 8, AUGUST 1972
I. INTRODUCTION
Fig. 1. Broad-band antenna array and equivalent processor for signals.

plane waves from a chosen direction (called the "look direction"). The algorithm iteratively adapts the weights of a broad-band sensor array (Fig. 1) to minimize noise power at the array output while maintaining a chosen frequency response in the look direction.

The algorithm, called the "Constrained Least Mean-Squares" or "Constrained LMS" algorithm, is a simple stochastic gradient-descent algorithm which requires only that the direction of arrival and a frequency band of interest be specified a priori. In the adaptive process, the algorithm progressively learns the statistics of noise arriving from directions other than the look direction. Noise arriving from the look direction may be filtered out by a suitable choice of the frequency response characteristic in that direction, or by external means. Subsequent processing of the array output may be done for detection or classification.

A major advantage of the constrained LMS algorithm is that it has a self-correcting feature permitting it to operate for arbitrarily long periods of time in a digital computer implementation without deviating from its constraints because of cumulative roundoff or truncation errors.

The algorithm is applicable to array processing problems in geoscience, sonar, and electromagnetic antenna arrays in which a simple method is required for adjusting an array in real time to discriminate against noises impinging on the array sidelobes.

Previous work on iterative least squares array processing was done by Griffiths [1]; his method uses an unconstrained minimum-mean-square-error optimization criterion which requires a priori knowledge of second-order signal statistics. Widrow, Mantey, Griffiths, and Goode [2] proposed a variable-criterion [3] optimization procedure involving the use of a known training signal; this was an application and extension of the original work on adaptive filters done by Widrow and Hoff [4]. Griffiths also proposed a constrained least mean-squares processor not requiring a priori knowledge of the signal statistics [5]; a new derivation of this processor, given in [6], shows that it may be considered as putting "soft" constraints on the processor via the quadratic penalty-function method.

"Hard" (i.e., exactly) constrained iterative optimization was studied by Rosen [7] for the deterministic case. Lacoss [8], Booker et al. [9], and Kobayashi [10] studied "hard"-constrained optimization in the array processing context for filtering short lengths of data. All four authors used gradient-projection techniques [11]; Rosen and Booker correctly indicated that gradient-projection methods are susceptible to cumulative roundoff errors and are not suitable for long runs without an additional error-correction procedure. The constrained LMS algorithm developed in the present work is designed to avoid error accumulation while maintaining a "hard" constraint; as a result, it is able to provide continual filtering for arbitrarily large numbers of iterations.
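As a concrete illustration (not part of the original paper), the broad-band tapped-delay-line processor of Fig. 1 forms its output as the inner product of the $KJ$-dimensional weight vector and the vector of tap voltages. A minimal sketch, with illustrative sizes and signal values:

```python
K, J = 4, 4  # K sensors, J taps per sensor (the paper's simulation sizes)

def array_output(W, X):
    """y(k) = W^T X(k): the weighted sum over all KJ tap voltages."""
    assert len(W) == len(X) == K * J
    return sum(w * x for w, x in zip(W, X))

# Illustrative values: unit total gain concentrated on the first tap column.
# Because the array is steered, look-direction components on a given tap
# column are identical (here, 2.0 on the first column).
W = [1.0 / K] * K + [0.0] * (K * (J - 1))
X = [2.0] * K + [5.0] * (K * (J - 1))
print(array_output(W, X))  # -> 2.0
```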
FROST: ALGORITHM FOR ADAPTIVE ARRAY PROCESSING
Fig. 2. Signals and noises on the array. Because the array is steered toward the look direction, all look-direction signal components on any given column of filter taps are identical.
Constraining the weight vector to satisfy the $J$ equations of (6) restricts $W$ to a $(KJ-J)$-dimensional plane. Define the constraint matrix $C$, each of whose $J$ columns contains ones on the $K$ taps of one vertical tap column and zeros elsewhere, and define $\mathcal{F}$ as the $J$-dimensional vector of weights of the look-direction equivalent filter. The cost function (11a) is minimized

subject to

$$C^T W = \mathcal{F}. \qquad (11\text{b})$$

This is the constrained LMS problem.

$W_{\mathrm{opt}}$ is found by the method of Lagrange multipliers, which is discussed in [13]. Including a factor of $\tfrac{1}{2}$ to simplify later arithmetic, the constraint function is adjoined to the cost function by a $J$-dimensional vector of undetermined Lagrange multipliers $\lambda$:

$$H(W) = \tfrac{1}{2} W^T R_{XX} W + \lambda^T (C^T W - \mathcal{F}). \qquad (12)$$

Taking the gradient of (12) with respect to $W$,

$$\nabla_W H(W) = R_{XX} W + C\lambda. \qquad (13)$$

The first term in (13) is a vector proportional to the gradient of the cost function (11a), and the second term is a vector
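The constraint structure can be made concrete with a short sketch (not part of the original paper). It assumes the standard block form of $C$ for this processor, in which the $j$th column has ones on the $K$ taps of the $j$th vertical tap column; the numeric look-direction filter $\mathcal{F}$ is borrowed from the paper's simulation section, and $W(0) = C(C^T C)^{-1}\mathcal{F}$ is the initialization used later in the paper:

```python
K, J = 4, 4
# C is KJ x J: column j flags the K taps of the j-th vertical tap column.
C = [[1.0 if idx // K == j else 0.0 for j in range(J)] for idx in range(K * J)]

def constraint_values(W):
    """C^T W: the summed weight on each of the J vertical tap columns."""
    return [sum(C[idx][j] * W[idx] for idx in range(K * J)) for j in range(J)]

F_script = [1.0, -2.0, 1.5, 2.0]  # look-direction filter F from Section VII
# W(0) = C(C^T C)^{-1} F; for this C, C^T C = K*I, so each weight is f_j / K.
W0 = [F_script[idx // K] / K for idx in range(K * J)]
print(constraint_values(W0))  # -> [1.0, -2.0, 1.5, 2.0]
```

The initialization distributes each look-direction filter weight $f_j$ equally over the $K$ taps of its column, so the constraint holds exactly at $k=0$.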
normal to the $(KJ-J)$-dimensional constraint plane defined by $C^T W - \mathcal{F} = 0$ [14]. For optimality these vectors must be antiparallel [15], which is achieved by setting the sum of the vectors (13) equal to zero:

$$\nabla_W H(W) = R_{XX} W + C\lambda = 0.$$

In terms of the Lagrange multipliers, the optimal weight vector is then

$$W_{\mathrm{opt}} = -R_{XX}^{-1} C \lambda \qquad (14)$$

where $R_{XX}^{-1}$ exists because $R_{XX}$ was assumed positive definite. Since $W_{\mathrm{opt}}$ must satisfy the constraint (11b),

$$C^T W_{\mathrm{opt}} = \mathcal{F} = -C^T R_{XX}^{-1} C \lambda$$

and the Lagrange multipliers are found to be

$$\lambda = -[C^T R_{XX}^{-1} C]^{-1} \mathcal{F} \qquad (15)$$

where the existence of $[C^T R_{XX}^{-1} C]^{-1}$ follows from the facts that $R_{XX}$ is positive definite and $C$ has full rank [6]. From (14) and (15) the optimum constrained LMS weight vector solving (11) is

$$W_{\mathrm{opt}} = R_{XX}^{-1} C [C^T R_{XX}^{-1} C]^{-1} \mathcal{F}. \qquad (16)$$

Using the set of weights $W_{\mathrm{opt}}$ in the array processor of Fig. 2 forms the optimum constrained LMS processor, which is a filter in space and frequency. Substituting $W_{\mathrm{opt}}$ in (4), the constrained least squares estimate of the look-direction waveform is given by (17).

implement and, for a given computational cost, is applicable to arrays in which the number of weights is on the order of the square of the number that could be handled by the iterative matrix inversion method and the cube of the number that could be handled by the direct substitution method.

Derivation

For motivation of the algorithm derivation, temporarily suppose that the correlation matrix $R_{XX}$ is known. In constrained gradient-descent optimization, the weight vector is initialized at a vector satisfying the constraint (11b), say $W(0) = C(C^T C)^{-1}\mathcal{F}$, and at each iteration the weight vector is moved in the negative direction of the constrained gradient (13). The length of the step is proportional to the magnitude of the constrained gradient and is scaled by a constant $\mu$. After the $k$th iteration the next weight vector is

$$W(k+1) = W(k) - \mu \nabla_W H[W(k)] = W(k) - \mu [R_{XX} W(k) + C\lambda(k)] \qquad (18)$$

where the second step is from (13). The Lagrange multipliers are chosen by requiring $W(k+1)$ to satisfy the constraint (11b):

$$\mathcal{F} = C^T W(k+1) = C^T W(k) - \mu C^T R_{XX} W(k) - \mu C^T C \lambda(k).$$

Solving for the Lagrange multipliers $\lambda(k)$ and substituting into the weight-iteration equation (18), we have

$$W(k+1) = W(k) - \mu [I - C(C^T C)^{-1} C^T] R_{XX} W(k)$$
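The deterministic iteration above can be sketched numerically (this example is not from the paper). The correlation matrix $R_{XX}$ below is a small hypothetical positive-definite choice; assuming the block form of $C$ in which each column flags one vertical column of $K$ taps, the projection $I - C(C^T C)^{-1}C^T$ simply removes each tap column's mean, so the column sums $C^T W$ stay fixed at $\mathcal{F}$ at every step:

```python
K, J = 2, 2
n, mu = K * J, 0.05
F_script = [1.0, -2.0]  # illustrative look-direction filter

# A hypothetical symmetric, diagonally dominant (hence positive-definite) R_XX.
R = [[2.0, 0.5, 0.1, 0.0],
     [0.5, 2.0, 0.0, 0.1],
     [0.1, 0.0, 1.5, 0.2],
     [0.0, 0.1, 0.2, 1.5]]

def matvec(M, v):
    return [sum(M[r][c] * v[c] for c in range(n)) for r in range(n)]

def proj(v):
    """P v with P = I - C(C^T C)^{-1} C^T: subtract each tap column's mean."""
    out = []
    for j in range(J):
        col = v[j * K:(j + 1) * K]
        m = sum(col) / K
        out.extend(x - m for x in col)
    return out

W = [F_script[idx // K] / K for idx in range(n)]  # W(0) = C(C^T C)^{-1} F
for _ in range(200):  # W(k+1) = W(k) - mu * P R_XX W(k)
    W = [w - mu * g for w, g in zip(W, proj(matvec(R, W)))]

col_sums = [sum(W[j * K:(j + 1) * K]) for j in range(J)]
print(col_sums)  # stays at [1.0, -2.0] up to roundoff
```

Because the correction applied at each step has zero sum within every tap column, the constraint is preserved exactly in real arithmetic; only floating-point roundoff perturbs it.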
Discussion

The constrained LMS algorithm (22) satisfies the constraint $C^T W(k+1) = \mathcal{F}$ at each iteration, as can be verified by premultiplying (22) by $C^T$ and using (20). At each iteration the algorithm requires only the tap voltages $X(k)$ and the array output $y(k)$; no a priori knowledge of the input correlation matrix is needed. $F$ is a constant vector that can be precomputed. One of the two most complex operations required by (22) is the multiplication of each of the $KJ$ components of the vector $X(k)$ by the scalar $\mu y(k)$. These equations can readily be implemented on a digital computer.

IV. PERFORMANCE

Convergence to the Optimum

The weight vector $W(k)$ obtained by the use of the stochastic algorithm (22) is a random vector. Convergence of the mean weight vector to the optimum is demonstrated by showing that the length of the difference vector between the mean weight vector and the optimum (16) asymptotically approaches zero.

Proof of convergence of the mean is greatly simplified by the assumption (used in [2]) that successive samples of the input vector taken $\Delta$ seconds apart are statistically independent. This condition can usually be approximated in practice by sampling the input vector at intervals large compared to the correlation time of the input process plus the length of time it takes an input waveform to propagate down the array. The assumption is more restrictive than necessary, since Daniell [19] has shown that the much weaker assumption of asymptotic independence of the input vectors is sufficient to demonstrate convergence in the related unconstrained least squares problem.

Taking the expected value of both sides of the algorithm (22), using (4), (2a), and the independence assumption yields an iterative equation in the mean value of the constrained LMS weight vector:

$$E[W(k+1)] = P\{E[W(k)] - \mu R_{XX} E[W(k)]\} + F. \qquad (23)$$

Define the vector $V(k+1)$ to be the difference between the mean adaptive weight vector at iteration $k+1$ and the optimal weight vector (16):

$$V(k+1) \triangleq E[W(k+1)] - W_{\mathrm{opt}}. \qquad (24)$$

The idempotence of $P$ (i.e., $P^2 = P$), which can be verified by carrying out the multiplication using (20b), and premultiplication of equation (24) by $P$ show that $P V(k) = V(k)$ for all $k$, so (24) can be written

$$V(k+1) = [I - \mu P R_{XX} P] V(k) = [I - \mu P R_{XX} P]^{k+1} V(0).$$

The matrix $P R_{XX} P$ determines both the rate of convergence of the mean weight vector to the optimum and the steady-state variance of the weight vector about the optimum. It is shown in [6] that $P R_{XX} P$ has precisely $J$ zero eigenvalues, corresponding to the column vectors of the constraint matrix $C$; this is a result of the fact that during adaption no movement is permitted away from the $(KJ-J)$-dimensional constraint plane. It is also shown in [6, appendix C] that $P R_{XX} P$ has $KJ-J$ nonzero eigenvalues $\sigma_i$, $i = 1, 2, \ldots, KJ-J$, with values bounded between the smallest and largest eigenvalues of $R_{XX}$:

$$\lambda_{\min} \le \sigma_{\min} \le \sigma_i \le \sigma_{\max} \le \lambda_{\max}, \qquad i = 1, 2, \ldots, KJ-J$$

where $\lambda_{\min}$ and $\lambda_{\max}$ are the smallest and largest eigenvalues of $R_{XX}$ and $\sigma_{\min}$ and $\sigma_{\max}$ are the smallest and largest nonzero eigenvalues of $P R_{XX} P$.
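The idempotence argument is easy to check numerically (an illustrative aside, not from the paper). Assuming the block form of $C$ in which each column flags one vertical column of $K$ taps, $P$ removes each tap column's mean, so applying $P$ twice gives the same result as applying it once; the sizes and test vector below are arbitrary:

```python
K, J = 3, 2

def proj(v):
    """P v = [I - C(C^T C)^{-1} C^T] v: subtract each tap column's mean."""
    out = []
    for j in range(J):
        col = v[j * K:(j + 1) * K]
        m = sum(col) / K
        out.extend(x - m for x in col)
    return out

v = [0.7, -1.2, 2.5, 0.3, 0.9, -4.0]
Pv = proj(v)
PPv = proj(Pv)  # P^2 v: each column of Pv already has zero mean
print(all(abs(a - b) < 1e-12 for a, b in zip(Pv, PPv)))  # -> True
```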
and so if the initial difference is finite the mean weight vector converges to the optimum, i.e.,

$$\lim_{k \to \infty} \| E[W(k)] - W_{\mathrm{opt}} \| = 0$$

with time constants given by (25).

Steady-State Performance: Stationary Environment

The algorithm is designed to continually adapt for coping with nonstationary noise environments. In stationary environments this adaptation causes the weight vector to have a variance about the optimum and produces an additional component of noise (above the optimum) to appear at the output of the adaptive processor.

The output power of the optimum processor with a fixed weight vector (17) is

$$E[y_{\mathrm{opt}}^2(k)] = W_{\mathrm{opt}}^T R_{XX} W_{\mathrm{opt}} = \mathcal{F}^T (C^T R_{XX}^{-1} C)^{-1} \mathcal{F}.$$

Steady-State Performance: Nonstationary Environment

A model of the effect of a nonstationary noise environment proposed by Brown [23] is that the steady-state rms change of the optimal weight vector $W_{\mathrm{opt}}(k)$ between iterations has magnitude $\delta$, i.e.,

$$\limsup_{k \to \infty} E\| W_{\mathrm{opt}}(k+1) - W_{\mathrm{opt}}(k) \|^2 = \delta^2.$$

Brown's general results may be applied to the constrained LMS algorithm by restricting the optimal weight vector to have magnitude less than some number $W_{\max}$ and again assuming the successive input vectors are independent with Gaussian-distributed components. For $\mu$ small it can be shown [23, p. 47] that the steady-state rms distance of the weight vector from the optimum is bounded by
Fig. 7. Operation of the gradient-projection algorithm (31).
Fig. 10. Power spectral density of incoming signals. See Fig. 2 and Table I for spatial position of noises.
TABLE I
SIGNALS AND NOISES IN THE SIMULATION (SEE FIG. 2)

Source                  Power   Direction          Center frequency   Bandwidth
                                (0° is normal      (1.0 is 1/τ)
                                to array)
Look-direction signal   0.1     0°                 0.3                0.1
Noise A                 1.0     45°                0.2                0.05
Noise B                 1.0     60°                0.4                0.07
White noise (per tap)   0.1
Fig. 8. Error propagation. The constrained LMS algorithm (a) corrects deviations from the constraints while the gradient-projection algorithm (b) allows them to accumulate.

because the unconstrained gradient estimate $y(k)X(k)$ is projected onto the constraint subspace and then added to the current weight vector. Its operation is shown in Fig. 7 (compare with Fig. 6).

A comparison between the effect of computational errors on the gradient-projection algorithm and on the constrained LMS algorithm is shown in Fig. 8. The weight vector is assumed to be off the constraint at the $k$th iteration because of a quantization error occurring in the previous iteration. It is shown in Fig. 8(a) that the constrained LMS algorithm makes the unconstrained step, projects onto the subspace, and then adds $F$, producing a new weight vector $W(k+1)$ that satisfies the constraint. The gradient-projection algorithm [Fig. 8(b)], however, projects the gradient estimate onto the subspace and adds the projected vector to the past weight vector, moving parallel to the constraint plane but continuing the error. Note the implicit (incorrect) assumption that $W(k)$ satisfied the constraint, corresponding to the same assumption made in the derivation of the gradient-projection algorithm.

Accumulating errors in the gradient-projection algorithm can be expected to cause the weight vector to do a random walk away from the constraint plane, with variance (expected squared distance from the plane) increasing linearly with the number of iterations. By contrast, the expected deviation of the constrained LMS algorithm from the constraint does not grow, remaining at its original value.

VII. SIMULATION

A computer simulation of the processor was made using 6-digit floating-point arithmetic on a small computer (the HP-2116). The processor had four sensors on a line spaced at $\tau$-second intervals and had four taps per sensor (thus $KJ = 16$). The environment had three point-noise sources, and white noise added to each sensor. Power of the look-direction signal was quite small in comparison to the power of interfering noises (see Table I). The tap spacing defined a frequency of 1.0 (i.e., $f = 1.0$ is a frequency of $1/\tau$ Hz). In the look direction, the foldover frequency for the processor response was $1/2\tau$, or 0.5. All signals were generated by a pseudo-Gaussian generator and filtered to give them the proper spatial and temporal correlations. All temporal correlations were arranged to be identically zero for time differences greater than $25\tau$. The time between adaptations $\Delta$ was assumed greater than $58\tau$, so successive samples of $X(k)$ were uncorrelated.

The look-direction filter was specified by the vector $\mathcal{F}^T = [1, -2, 1.5, 2]$, which resulted in the frequency characteristic shown in Fig. 9. The signal and noise spectra are shown in Fig. 10 and their spatial positions in Fig. 2.

In this problem, the eigenvalues of $R_{XX}$ ranged from 0.111 to 8.355. The upper permissible bound on the convergence constant $\mu$ calculated by (26) was 0.074; a value of $\mu = 0.01$ was selected which, by (27), would lead to a misadjustment of between 15.2 and 17.0 percent.

The processor was initialized with $W(0) = F = C(C^T C)^{-1}\mathcal{F}$, and Fig. 11 shows performance as a function of time. The upper graph has three horizontal lines. The lower line is the output power of the optimum weight vector. The closely spaced upper two lines are upper and lower bounds for the adaptive processor output power, which is the optimum output power plus misadjustment. The mean steady-state value of the processor's output power falls somewhere between the upper and lower bounds (but may, at any instant, fall above or below these bounds). The difference between the initial and steady-state power levels is the amount of undesirable noise power the processor has been able to remove from the output.

Fig. 11. The output power of the constrained LMS filter (upper graph) decreases as it adapts to discriminate against unwanted noise. Lower curve shows small deviations from the constraint due to quantization.

A simulation of the gradient-projection algorithm (31) on the array problem was made using exactly the same data as used by the constrained LMS algorithm. The results are shown in Fig. 12. The lower part of Fig. 12 shows how the gradient-projection algorithm walks away from the constraint. Note the change in scale. If the errors of the constrained LMS algorithm (Fig. 11) were plotted on the same scale they would not be discernible. The errors of the gradient-projection method are expected to continue to grow.

Fig. 12. Output power of the gradient-projection algorithm (upper graph) operated on the same data as the constrained LMS algorithm (cf. Fig. 11). Lower curve shows that deviations from the constraint tend to increase with time. Note scale.
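The error-propagation contrast can be reproduced in a small numerical experiment (an illustrative sketch, not the paper's HP-2116 setup). Everything here is hypothetical: white Gaussian tap voltages and a fixed injected per-step numerical error of $10^{-4}$ on one weight stand in for quantization. The constrained LMS update $W(k+1) = P[W(k) - \mu y(k)X(k)] + F$ re-imposes the constraint every step, while the gradient-projection update lets the injected error accumulate linearly:

```python
import random

random.seed(0)
K, J, mu, eps = 4, 4, 0.01, 1e-4
n = K * J
F_script = [1.0, -2.0, 1.5, 2.0]  # look-direction filter (Section VII)
F_vec = [F_script[idx // K] / K for idx in range(n)]  # F = C(C^T C)^{-1} F

def proj(v):
    """P v: subtract each tap column's mean (block constraint matrix)."""
    out = []
    for j in range(J):
        col = v[j * K:(j + 1) * K]
        m = sum(col) / K
        out.extend(x - m for x in col)
    return out

def deviation(W):
    """|| C^T W - F ||: distance of the column sums from the constraint."""
    return sum((sum(W[j * K:(j + 1) * K]) - F_script[j]) ** 2
               for j in range(J)) ** 0.5

W_lms, W_gp = F_vec[:], F_vec[:]
for _ in range(2000):
    X = [random.gauss(0.0, 1.0) for _ in range(n)]  # hypothetical white input
    # Constrained LMS (self-correcting): W <- P[W - mu*y*X] + F.
    y = sum(w * x for w, x in zip(W_lms, X))
    W_lms = [p + f for p, f in
             zip(proj([w - mu * y * x for w, x in zip(W_lms, X)]), F_vec)]
    W_lms[0] += eps  # injected numerical error
    # Gradient projection (uncorrected): W <- W - mu*P[y*X].
    y = sum(w * x for w, x in zip(W_gp, X))
    W_gp = [w - mu * g for w, g in zip(W_gp, proj([y * x for x in X]))]
    W_gp[0] += eps  # same injected error
print(round(deviation(W_lms), 4), round(deviation(W_gp), 4))  # -> 0.0001 0.2
```

After 2000 iterations the gradient-projection deviation is the accumulated sum of the injected errors (2000 × 1e-4 = 0.2), while the constrained LMS deviation is only the single error injected since its last self-correction.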
The fact that the output power of the gradient-projection processor (upper curve, Fig. 12) is virtually identical to the output power of the constrained LMS processor is a result of the fact that the errors have not yet accumulated to the point of moving the constraint a significant radial distance from the origin.

VIII. LIMITATIONS AND EXTENSION

Application of the constrained LMS algorithm in some array processing problems is limited by the requirement that the non-look-direction noise voltages on the taps be uncorrelated with the look-direction signal voltages. This restriction is a result of the fact that if the noise voltages are correlated with the signal then the processor may cancel out portions of the signal with them in spite of the constraints. If the source of correlated noise is known, its effect may be reduced by placing additional constraints to minimize the array response in its direction.

Implementation errors, i.e., deviations from the assumed electrical and spatial properties of the array (such as incorrect amplifier gains, incorrect sensor placements, or unpredicted mutual coupling between sensors), may also limit the effectiveness of the processor by permitting it to discriminate against look-direction signals while still satisfying the letter of the constraints. Injection of known test signals into the array may provide information about the signal paths that can be used to compensate, in part, for the errors.

The algorithm may be extended to a more general stochastic constrained least squares problem

$$\min_W E\{[d(k) - W^T X(k)]^2\} \quad \text{subject to} \quad C^T W = \mathcal{F} \qquad (32)$$

where $d(k)$ is a scalar variable related to the observation vector $X(k)$ and $C$ is a general constraint matrix. The scalar $d(k)$ may be a random variable correlated with $X(k)$ or it may be a known test signal used to compensate for array errors. This would be a classical least squares problem except that the statistics of $X(k)$ and $d(k)$ are assumed unknown a priori. The general constrained LMS algorithm solving (32) may be derived similarly to (22) and is initialized with

$$W(0) = C(C^T C)^{-1}\mathcal{F}.$$

The general algorithm is applicable to constrained modeling, prediction, estimation, and control. It is discussed in [6].

IX. CONCLUSION

Analysis and computer simulations have confirmed the ability of the constrained LMS algorithm to adjust an array of sensors in real time to respond to a desired signal while discriminating against noise. Because of a system of constraints on weights in the array, the algorithm is shown to require no prior knowledge of the signal or noise statistics.

A geometrical presentation has shown why the constrained LMS algorithm has an ability to maintain the constraints and prevent the accumulation of quantization errors in a digital implementation. The simulation tests have confirmed the effectiveness of this error-correcting feature, in contrast with the usual uncorrected gradient-projection algorithm. The error-correcting feature and the simplicity of the algorithm make it appropriate for continuous real-time signal estimation and discriminating against noises in a possibly time-varying environment where little a priori information is available about the signals or noises. Time constants, steady-state performance, and a proof of convergence are derived for operation of the algorithm in a stationary environment; convergence and steady-state performance in a nonstationary environment are also shown.

A simple extension of the algorithm may be used to solve a general constrained LMS problem, which is to minimize the expected squared difference between a multidimensional filter output and a known desired signal under a set of linear equality constraints.

REFERENCES

[1] L. J. Griffiths, "A simple adaptive algorithm for real-time processing in antenna arrays," Proc. IEEE, vol. 57, pp. 1696-1704, Oct. 1969.
[2] B. Widrow, P. E. Mantey, L. J. Griffiths, and B. B. Goode, "Adaptive antenna systems," Proc. IEEE, vol. 55, pp. 2143-2159, Dec. 1967.
[3] L. J. Griffiths, "Comments on 'A simple adaptive algorithm for real-time processing in antenna arrays' (Author's reply)," Proc. IEEE (Lett.), vol. 58, p. 798, May 1970.
[4] B. Widrow and M. E. Hoff, Jr., "Adaptive switching circuits," IRE WESCON Conv. Rec., pt. 4, pp. 96-104, 1960.
[5] L. J. Griffiths, "Signal extraction using real-time adaptation of a linear multichannel filter," Stanford Electron. Labs., Stanford, Calif., Doc. SEL-68-017, Tech. Rep. TR 6788-1, Feb. 1968.
[6] O. L. Frost, III, "Adaptive least squares optimization subject to linear equality constraints," Stanford Electron. Labs., Stanford, Calif., Doc. SEL-70-055, Tech. Rep. TR 6796-2, Aug. 1970.
[7] J. B. Rosen, "The gradient projection method for nonlinear programming, pt. I: Linear constraints," J. Soc. Indust. Appl. Math., vol. 8, p. 181, Mar. 1960.
[8] R. T. Lacoss, "Adaptive combining of wideband array data for optimal reception," IEEE Trans. Geosci. Electron., vol. GE-6, pp. 78-86, May 1968.
[9] A. H. Booker, C. Y. Ong, J. P. Burg, and G. D. Hair, "Multiple-constraint adaptive filtering," Texas Instruments, Sci. Services Div., Dallas, Tex., Apr. 1969.
[10] H. Kobayashi, "Iterative synthesis methods for a seismic array processor," IEEE Trans. Geosci. Electron., vol. GE-8, pp. 169-178, July 1970.
[11] D. G. Luenberger, Optimization by Vector Space Methods. New York: Wiley, 1969.
[12] I. J. Good and K. Koog, "A paradox concerning rate of information," Informat. Contr., vol. 1, pp. 113-116, May 1958.
[13] A. E. Bryson, Jr., and Y. C. Ho, Applied Optimal Control. Waltham, Mass.: Blaisdell, 1969.
[14] W. H. Fleming, Functions of Several Variables. Reading, Mass.: Addison-Wesley, 1965.
[15] E. J. Kelly, Jr., and M. J. Levin, "Signal parameter estimation for seismometer arrays," Mass. Inst. Technol. Lincoln Lab., Tech. Rep. 339, Jan. 1964.
[16] A. H. Nuttall and D. W. Hyde, "A unified approach to optimum and suboptimum processing for arrays," U.S. Navy Underwater Sound Lab., New London, Conn., USL Rep. 992, Apr. 1969.
[17] G. N. Saridis, Z. J. Nikolic, and K. S. Fu, "Stochastic approximation algorithms for system identification, estimation, and decomposition of mixtures," IEEE Trans. Syst. Sci. Cybern., vol. SSC-5, pp. 8-15, Jan. 1969.
[18] P. E. Mantey and L. J. Griffiths, "Iterative least-squares algorithms for signal extraction," in Proc. 2nd Hawaii Int. Conf. on Syst. Sci., pp. 767-770.
[19] T. P. Daniell, "Adaptive estimation with mutually correlated training samples," Stanford Electron. Labs., Stanford, Calif., Doc. SEL-68-083, Tech. Rep. TR 6778-4, Aug. 1968.
[20] J. L. Moschner, "Adaptive filtering with clipped input data," Stanford Electron. Labs., Stanford, Calif., Doc. SEL-70-053, Tech. Rep. TR 6796-1, June 1970.
[21] K. D. Senne, "Adaptive linear discrete-time estimation," Stanford Electron. Labs., Stanford, Calif., Doc. SEL-68-090, Tech. Rep. TR 6778-5, June 1968.
[22] ——, "New results in adaptive estimation theory," Frank J. Seiler Res. Lab., USAF Academy, Colo., Tech. Rep. SRL-TR-70-0013, Apr. 1970.
[23] J. E. Brown, III, "Adaptive estimation in nonstationary environments," Stanford Electron. Labs., Stanford, Calif., Doc. SEL-70-056, Tech. Rep. TR 6795-1, Aug. 1970.
[24] D. T. Finkbeiner, II, Introduction to Matrices and Linear Transformations. San Francisco, Calif.: Freeman, 1966.