weights w(n). This generalized adaptive filter structure can be applied to a range of applications.
echo path so that, in operation, the filter can negate the effects of this path by synthesizing the echo signal and then subtracting it from the received signal.
Figure 2: System identification

Figure 3 shows inverse system identification. As in the example of Figure 2, the adaptive filter is trying to define a model; in this case, however, it models the inverse of an unknown system H, thereby negating the effects of that unknown system. One such application is channel equalization (Drewes et al. 1998), where the inter-symbol interference and noise within a transmission channel are removed by modelling the inverse characteristics of the contamination within the channel.
Figure 3: Inverse system identification

Figure 4 shows the adaptive filter used for prediction. A classic example of this application is audio compression for telephony speech. The adaptive differential pulse code modulation (ADPCM) algorithm tries to predict the next audio sample; only the difference between the prediction and the actual value is then coded and transmitted. By doing this, the data rate for speech can be halved to 32 kbit/s while maintaining toll quality. The international standard for ADPCM defines the structure as an IIR two-pole, six-zero adaptive predictor (ITU-T 1990).
Figure 4: Prediction

The structure of adaptive noise cancellation is given in Figure 5. Here the structure is slightly different: the reference signal in this case is the data s(n) corrupted by a noise signal v(n). The input to the adaptive filter is a second noise signal, v'(n), that is strongly correlated with v(n) but uncorrelated with the desired signal s(n). There are many applications in which this may be used. It can be applied to the cancellation of echoes caused by hybrids in telephone networks, as well as to acoustic echo cancellation in hands-free telephony. Within medical applications, noise cancellation can be used to remove contaminating signals from electrocardiograms (ECG); a particular example is the removal of the mother's heartbeat from the ECG trace of an unborn child.
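A minimal sketch of the cancellation principle, reduced to a single coefficient: the corrupted reference d(n) = s(n) + v(n) and a noise measurement u(n) correlated with v(n) but not with s(n). For one weight the optimum (Wiener) solution has the closed form w = E[d·u]/E[u²], and subtracting w·u(n) removes the correlated noise component. All signals and the 0.8 coupling factor below are illustrative assumptions.

```python
# Single-weight noise canceller: estimate the correlation between the
# corrupted reference and the measured noise, then subtract it out.
import math
import random

random.seed(1)
N = 4000
s = [math.sin(0.05 * n) for n in range(N)]       # desired signal
u = [random.gauss(0.0, 1.0) for _ in range(N)]   # measured noise (filter input)
v = [0.8 * u[n] for n in range(N)]               # correlated corruption
d = [s[n] + v[n] for n in range(N)]              # corrupted reference

# Closed-form single-coefficient Wiener weight.
w = sum(d[n] * u[n] for n in range(N)) / sum(u[n] ** 2 for n in range(N))
e = [d[n] - w * u[n] for n in range(N)]          # cleaned output, close to s(n)

mse = sum((e[n] - s[n]) ** 2 for n in range(N)) / N
```

With a multi-tap filter the same idea generalizes to noise that reaches the reference through an unknown filter rather than a simple gain.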
Adaptive beamforming is another key application and can be used for noise cancellation (Litva and Lo 1996; Moonen and Proudler 1998; Ward et al. 1986). The function of a typical adaptive beamformer is to suppress signals from every direction other than the desired look direction by introducing deep nulls in the beam pattern in the direction of the interference. The beamformer output is a weighted combination of signals received by a set of spatially separated antennae: one primary antenna and a number of auxiliary antennae. The primary signal constitutes the input from the main antenna, which has high directivity. The auxiliary signals contain samples of the interference threatening to swamp the desired signal. The filter eliminates this interference by removing any signals in common with the primary input signal, i.e. those that correlate strongly with the reference noise. The input data from the auxiliary and primary antennae are fed into the adaptive filter, from which the weights are calculated. These weights are then applied to the delayed input data to produce the output beam.
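The weighted-combination idea underlying any beamformer can be shown with a fixed (non-adaptive) delay-and-sum example: the weights phase-align a plane wave from the look direction so it sums coherently, while arrivals from other directions partially cancel. The 8-element array, half-wavelength spacing and angles below are illustrative assumptions, not a model of the adaptive null-steering described above.

```python
# Delay-and-sum beamformer sketch for a uniform linear array with
# half-wavelength element spacing (inter-element phase = pi * sin(theta)).
import cmath
import math

M = 8  # number of array elements

def steering(theta):
    """Per-element phases of a unit plane wave arriving from angle theta (rad)."""
    return [cmath.exp(1j * math.pi * m * math.sin(theta)) for m in range(M)]

look = 0.3                               # desired look direction (radians)
w = [sm / M for sm in steering(look)]    # weights matched to the look direction

def response(theta):
    """Magnitude of the beamformer output for a unit plane wave from theta."""
    x = steering(theta)
    return abs(sum(wk.conjugate() * xk for wk, xk in zip(w, x)))
```

An adaptive beamformer goes further by choosing the weights from the received data itself, placing nulls wherever the auxiliary channels reveal correlated interference.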
Adaptive Algorithms
The computational core of adaptive filtering is the algorithm used to calculate the updated filter weights. Two competing algorithms dominate the area: the recursive least-squares (RLS) algorithm and the least-mean-squares (LMS) algorithm. The RLS algorithm is a powerful technique derived from the method of least-squares. It offers greater convergence rates than its rival, the LMS algorithm, but this gain comes at the cost of increased computational complexity, a factor that has hindered its use in real-time applications. The LMS algorithm offers a very simple yet powerful approach, giving good performance under the right conditions (Haykin 2001). However, its limitations lie in its sensitivity to the condition number of the input data matrix as well as its slow convergence rates. Filter coefficients may be in the form of tap weights, reflection coefficients or rotation parameters, depending on the filter structure, i.e. transversal, lattice or systolic array respectively (Haykin 1986). However, both the LMS and RLS algorithms can be applied to the basic structure of a transversal filter, consisting of a linear combiner which forms a weighted sum of the system inputs, x(n), and then subtracts it from the desired signal, y(n), to produce an error signal, e(n), as given in Equation 1:

e(n) = y(n) - w^T(n)x(n)    (1)
There is no definitive technique for determining the optimum adaptive algorithm for a specific application. The choice comes down to a balance among the characteristics defining the algorithms, such as:

- rate of convergence, i.e. the rate at which the adaptive algorithm reaches within a tolerance of an optimum solution
- steady-state error, i.e. the proximity to an optimum solution
- ability to track statistical variations in the input data
- computational complexity
- ability to operate with ill-conditioned input data
- sensitivity to variations in the word lengths used in the implementation
LMS Algorithm

The LMS algorithm is a stochastic gradient algorithm which uses a fixed step-size parameter to control the updates to the tap weights of a transversal filter (Widrow and Hoff 1960), as shown in Figure 1. The algorithm aims to minimize the mean-square error, the error being the difference between y(n) and yest(n). The dependence of the mean-square error on the unknown tap weights may be viewed as a multidimensional paraboloid referred to as the error surface (depicted in Figure 6 for a two-tap example) (Haykin 2001). The surface has a uniquely defined minimum point, which defines the tap weights of the optimum Wiener solution. However, in a nonstationary environment this error surface is continuously changing, so the LMS algorithm needs to be able to track the bottom of the surface. The LMS algorithm aims to minimize a cost function, V(w(n)), at each time step n, by a suitable choice of the weight vector w(n). The strategy is to update the parameter estimate in proportion to the instantaneous gradient value, dV(w(n))/dw(n), so that:
w(n+1) = w(n) - mu * dV(w(n))/dw(n)    (2)

where mu is a small positive step size and the minus sign ensures that the parameter estimates descend the error surface. Taking the cost function V(w(n)) to be the instantaneous squared error results in the following recursive parameter update equation:

w(n+1) = w(n) + mu * x(n)[y(n) - x^T(n)w(n)]    (3)

The recursive relation for updating the tap weight vector, Equation (3), may be rewritten as:

w(n+1) = w(n) + mu * x(n)e(n)    (4)

which can be represented as filter output (Equation 5), estimation error (Equation 6) and tap weight adaptation (Equation 7):

yest(n) = w^T(n)x(n)    (5)

e(n) = y(n) - yest(n)    (6)

w(n+1) = w(n) + mu * e(n)x(n)    (7)

The LMS algorithm requires only 2N + 1 multiplications and 2N additions per iteration for an N-tap weight vector. It therefore has a relatively simple structure, and the hardware requirement is directly proportional to the number of weights.
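The filter-output / error / weight-update loop can be sketched directly in code, here applied to system identification as in Figure 2. The 3-tap plant, the step size mu = 0.05 and the white Gaussian excitation are illustrative choices, not values from the text.

```python
# Minimal LMS adaptive filter: per sample, compute the filter output,
# the estimation error, and the tap-weight update w <- w + mu * e * x.
# Used here to identify the taps of an "unknown" 3-tap FIR plant.
import random

random.seed(0)
plant = [0.5, -0.3, 0.2]        # unknown system to identify (illustrative)
n_taps = len(plant)
mu = 0.05                       # small positive step size

w = [0.0] * n_taps              # adaptive tap weights
buf = [0.0] * n_taps            # most recent inputs x(n), x(n-1), ...

for _ in range(5000):
    x = random.gauss(0.0, 1.0)
    buf = [x] + buf[:-1]
    y = sum(p * b for p, b in zip(plant, buf))      # desired signal
    y_est = sum(wk * b for wk, b in zip(w, buf))    # filter output
    e = y - y_est                                   # estimation error
    w = [wk + mu * e * b for wk, b in zip(w, buf)]  # tap-weight adaptation
```

Each iteration costs a handful of multiply-adds per tap, illustrating the O(N) complexity noted above; the weights converge toward the plant taps.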
This paper describes the general characteristics and potential capabilities of digital beamforming (DBF) ubiquitous radar, one that looks everywhere all the time. In a ubiquitous radar, the receiving antenna consists of a number of fixed contiguous high-gain beams that cover the same region as a fixed low-gain (quasi-omnidirectional) transmitting antenna. The ubiquitous radar is quite different from the mechanically rotating-antenna radar or the conventional multifunction phased-array radar in that it can carry out multiple functions simultaneously rather than sequentially. It thus has the important advantage that its various functions do not have to be performed in sequence one at a time, a serious limitation of conventional phased arrays. A radar that looks everywhere all the time can use long integration times with many pulses, which allows better shaping of Doppler filters for improved MTI or pulse-Doppler processing.
The DBF ubiquitous radar is a new method for achieving important radar capabilities not readily available with current radar architectures.