The LMS algorithm is simple to compute and widely useful. It performs well when the adaptive system is an adaptive linear combiner and when both the n-dimensional input vector X(k) and the desired output d(k) are available at each iteration, where X(k) is

$$X(k) = \begin{bmatrix} x_1(k) \\ x_2(k) \\ \vdots \\ x_n(k) \end{bmatrix}$$
and the corresponding n-dimensional vector of adjustable weights W(k) is

$$W(k) = \begin{bmatrix} w_1(k) \\ w_2(k) \\ \vdots \\ w_n(k) \end{bmatrix}$$
Given the input vector X(k), the estimated output y(k) can be computed as a linear combination of the input vector X(k) with the weight vector W(k):

$$y(k) = X^T(k)\,W(k)$$
Thus the estimated error e(k), the difference between the desired output d(k) and the estimated output y(k), is

$$e(k) = d(k) - y(k) = d(k) - X^T(k)\,W(k)$$
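As a concrete sketch of these two equations, one forward pass can be computed with a dot product. The numerical values below are made up purely for illustration:

```python
import numpy as np

# One iteration's forward pass: y(k) = X^T(k) W(k), e(k) = d(k) - y(k).
# All values are illustrative, not from the text.
X_k = np.array([0.5, -1.2, 0.8])   # input vector X(k)
W_k = np.array([0.1, 0.4, -0.3])   # weight vector W(k)
d_k = 0.25                          # desired output d(k)

y_k = X_k @ W_k                     # estimated output y(k)
e_k = d_k - y_k                     # estimated error e(k)
```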
From a mathematical perspective, finding the optimal solution means minimizing E{e²(k)}, where E{·} denotes the expectation operator. To find the minimum of e²(k), first expand it:

$$e^2(k) = d^2(k) - 2\,d(k)\,X^T(k)\,W(k) + W^T(k)\,X(k)\,X^T(k)\,W(k)$$
Next, take the expectation of both sides and define the input correlation matrix R and the cross-correlation vector P as

$$R = E\{X(k)\,X^T(k)\}, \qquad P = E\{d(k)\,X(k)\}$$
Thus, E{e²(k)} can be rewritten as

$$E\{e^2(k)\} = E\{d^2(k)\} - 2\,P^T W(k) + W^T(k)\,R\,W(k)$$
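R and P can be estimated from data as sample averages, and minimizing the quadratic form above gives the optimal (Wiener) weights W* = R⁻¹P. A minimal sketch, assuming a made-up noiseless 3-tap plant:

```python
import numpy as np

# Sample estimates of R = E{X X^T} and P = E{d X}, and the Wiener
# solution W* = R^{-1} P. The "unknown system" w_true is hypothetical.
rng = np.random.default_rng(0)
n, N = 3, 5000
w_true = np.array([0.7, -0.2, 0.1])   # hypothetical true weights
X = rng.standard_normal((N, n))       # each row is X(k)^T
d = X @ w_true                        # desired outputs d(k), noiseless

R = (X.T @ X) / N                     # sample estimate of E{X X^T}
P = (X.T @ d) / N                     # sample estimate of E{d X}

W_opt = np.linalg.solve(R, P)         # minimizer of the quadratic E{e^2}
```

For this noiseless example the Wiener solution recovers w_true exactly; LMS, developed next, approaches the same minimum iteratively without forming R and P.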
The LMS algorithm descends the performance surface, so the weights are updated in the direction opposite to an instantaneous estimate of the gradient:

$$\hat{\nabla}(k) = \frac{\partial e^2(k)}{\partial W(k)} = -2\,e(k)\,X(k)$$

$$W(k+1) = W(k) - \mu\,\hat{\nabla}(k)$$

$$W(k+1) = W(k) + 2\,\mu\,e(k)\,X(k)$$
Substituting e(k), we get

$$W(k+1) = W(k) + 2\,\mu\,\bigl(d(k) - y(k)\bigr)\,X(k)$$
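The update rule above can be sketched as a short loop. This is a minimal illustration, not a production implementation; the 3-tap plant w_true and the step size mu are assumptions chosen for the example:

```python
import numpy as np

# Minimal LMS loop: W(k+1) = W(k) + 2*mu*e(k)*X(k).
# The plant being identified is a hypothetical 3-tap FIR filter.
rng = np.random.default_rng(1)
w_true = np.array([0.7, -0.2, 0.1])   # hypothetical unknown system
mu = 0.01                             # learning rate (assumed small)
W = np.zeros(3)                       # initial weights W(0)

for _ in range(2000):
    X_k = rng.standard_normal(3)      # input vector X(k)
    d_k = X_k @ w_true                # desired output d(k)
    y_k = X_k @ W                     # estimated output y(k)
    e_k = d_k - y_k                   # estimated error e(k)
    W = W + 2 * mu * e_k * X_k        # LMS weight update
```

With unit-variance inputs and this small mu, the weights settle close to w_true after a few hundred iterations.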
This equation is the LMS algorithm, which uses the difference between the desired signal d(k) and the estimated output y(k) to drive each weight tap toward the impulse response of the unknown system. W(k) is the present weight vector and W(k+1) is the new one, which eventually converges. The parameter μ is a positive constant, called the learning rate, that controls the rate of convergence and the stability of the algorithm. The learning rate has to be smaller than 1/(2λ_max), where λ_max is the largest eigenvalue of the correlation matrix R; alternatively, the learning rate can simply be chosen very small to ensure that the algorithm converges. The smaller the learning rate, however, the longer the training takes.
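The bound 1/(2λ_max) can be checked numerically from a sample correlation matrix. The input scaling below is illustrative:

```python
import numpy as np

# Estimating the step-size bound mu < 1/(2*lambda_max) from data.
# Column scalings (and hence eigenvalues) are illustrative.
rng = np.random.default_rng(2)
X = rng.standard_normal((10000, 3)) * np.array([1.0, 2.0, 0.5])
R = (X.T @ X) / len(X)                  # sample estimate of E{X X^T}

lam_max = np.linalg.eigvalsh(R).max()   # largest eigenvalue of R
mu_bound = 1.0 / (2.0 * lam_max)        # upper bound on the learning rate
```

Here R is approximately diag(1, 4, 0.25), so λ_max ≈ 4 and the bound is roughly μ < 0.125.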
[Figure 1: Block diagram of the adaptive linear combiner. The inputs x₁(k)…xₙ(k), weighted by w₁(k)…wₙ(k), are summed to form the output y(k), which is subtracted from the desired output d(k) to produce the error e(k).]
As the LMS algorithm diagram in figure 1 shows, the estimated output y(k) is computed from a linear combination of the input signal X(k) and the weight vector W(k). The estimated output y(k) is then compared to the desired output d(k) to find the error e(k). If there is any difference, the error signal is used as a feedback mechanism to adjust the weight vector W(k). On the other hand, if there is no difference between these signals, no adjustment is needed.