
Optimal Signal Processing: Wiener Filters

Optimal filter theory was developed to provide structure to the process of selecting the most appropriate frequency characteristics. A wide range of different approaches can be used to develop an optimal filter depending on the nature of the problem: specifically, what, and how much, is known about signal and noise features. If a representation of the desired signal is available, then a well-developed and popular class of filters known as Wiener Filters can be applied.

Wiener Filters

[Block diagram: the input x(n) is applied to a linear filter H(z), producing the filter output y(n); y(n) is subtracted from the desired response d(n) to form the estimation error e(n).]

The basic concept behind Wiener Filter theory is to minimize the difference between the filtered output and some desired output. This minimization is based on the least mean square approach which adjusts the filter coefficients to reduce the square of the difference between the desired and actual waveform after filtering.

Based on the basic configuration of the Wiener Filter, the estimation error is:

e(n) = d(n) - y(n) = d(n) - Σ_{k=0}^{L-1} h(k) x(n-k)

where L is the length of the FIR filter. In fact, it is the sum of the squared errors, e²(n), that is minimized:

ε = Σ_{n=1}^{N} e²(n) = Σ_{n=1}^{N} [ d(n) - Σ_{k=1}^{L} b(k) x(n-k) ]²

Expanding and taking the derivative with respect to b(k):

∂ε/∂b(k) = 0, which leads to:

Σ_{k=1}^{L} b(k) rxx(k-m) = rdx(m),   for 1 ≤ m ≤ L

The optimal filter can be derived from the autocorrelation function of the input and the crosscorrelation function between the input and desired waveform.

Wiener Filter Equation


To solve for the FIR coefficients in the Wiener equation, note that this equation actually represents a series of L equations that must be solved simultaneously. The matrix expression for these simultaneous equations is:

| rxx(0)   rxx(1)    ...  rxx(L)   | | b(0) |   | rdx(0) |
| rxx(1)   rxx(0)    ...  rxx(L-1) | | b(1) | = | rdx(1) |
|  ...       ...     ...   ...     | |  ... |   |  ...   |
| rxx(L)   rxx(L-1)  ...  rxx(0)   | | b(L) |   | rdx(L) |

This equation is commonly known as the Wiener-Hopf equation. Stated in matrix notation:

R b = rdx

and the solution is:  b = R⁻¹ rdx
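As a cross-check on this solution, here is a minimal NumPy sketch (Python rather than the MATLAB used in the examples below); the white-noise test input, the 3-tap example system, and the wiener_hopf helper name are illustrative assumptions:

```python
import numpy as np

def wiener_hopf(x, d, L):
    """Solve R b = rdx for an L-tap FIR Wiener filter."""
    N = len(x)
    # Autocorrelation rxx(m) and cross-correlation rdx(m) for m = 0 .. L-1
    rxx = np.array([np.dot(x[:N - m], x[m:]) / N for m in range(L)])
    rdx = np.array([np.dot(d[m:], x[:N - m]) / N for m in range(L)])
    # Symmetric Toeplitz correlation matrix: R[i, j] = rxx(|i - j|)
    R = rxx[np.abs(np.subtract.outer(np.arange(L), np.arange(L)))]
    return np.linalg.solve(R, rdx)

rng = np.random.default_rng(0)
x = rng.standard_normal(20000)                  # white-noise input
d = np.convolve(x, [0.5, 0.5, 1.2])[:len(x)]    # desired signal from a known 3-tap system
b = wiener_hopf(x, d, 4)                        # recovers ~[0.5, 0.5, 1.2, 0]
```

For a white-noise input, R is close to the identity and b simply reproduces the cross-correlation; for colored inputs, the matrix solve is what undoes the input's own correlation.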

Wiener Filter ~ System Identification


The Wiener-Hopf approach has a number of other applications in addition to standard filtering, including system identification, interference canceling, and inverse modeling or deconvolution. For system identification, the filter is placed in parallel with the unknown system.

The desired output is the output of the unknown system, and the filter coefficients are adjusted so that the filter's output best matches that of the unknown system.

[Block diagram: the input x(n) drives both the unknown system and the linear filter H(z); the unknown system's output is the desired response, and its difference from the estimated output y(n) is the estimation error e(n).]

Wiener Filter ~ MATLAB Implementation


The Wiener-Hopf equation can be solved using MATLAB's matrix left-division operator (\) as shown in the following example. The MATLAB toeplitz function is useful in setting up the correlation matrix. The function call is: Rxx = toeplitz(rxx); where rxx is the input row vector. This constructs a symmetrical matrix from a single row vector and can be used to generate the correlation matrix in the Wiener-Hopf equation from the autocorrelation function rxx.

Example 8-1 Given a sinusoidal signal in noise (SNR = -8 dB), design an optimal filter using the Wiener-Hopf equation. Assume that you have a copy of the desired signal available (usually the desired signal would have to be estimated). The program to implement this example first generates the data, then calculates the coefficients using the routine wiener_hopf, filters the data using filter, and plots the results.

fs = 1000;                         % Sampling frequency
N = 1024;                          % Number of points
L = 256;                           % Optimal filter order
%
[xn, t, x] = sig_noise(10,-8,N);   % xn is signal + noise and
                                   % x is the desired signal
%
% Determine the optimal FIR filter coefficients and apply
b = wiener_hopf(xn,x,L);           % Apply Wiener-Hopf equations
y = filter(b,1,xn);                % Filter data using optimum filter
...plot results
Example 8-1 (continued) The solution uses the routine wiener_hopf to calculate the optimum filter coefficients. This program computes the correlation matrix from the autocorrelation function using the toeplitz routine, and also computes the crosscorrelation function.

function b = wiener_hopf(x,y,maxlags)
% Function to compute LMS algorithm using Wiener-Hopf equations
% Inputs:  x = input
%          y = desired signal
%          maxlags = filter length
% Outputs: b = FIR filter coefficients
%
rxx = xcorr(x,maxlags,'coeff');    % Compute the autocorrelation vector
rxx = rxx(maxlags+1:end)';         % Use only positive half of symm. vector
rxy = xcorr(x,y,maxlags);          % Compute the crosscorrelation vector
rxy = rxy(maxlags+1:end)';         % Use only positive half
%
rxx_matrix = toeplitz(rxx);        % Construct correlation matrix
b = rxx_matrix\rxy;                % Calculate FIR coefficients using
                                   %   matrix left division

Example 8-1 ~ Results

[Figure: the sinusoid in noise before optimal filtering (top) and after filtering (middle); time axis 0-1 sec. The frequency plot of the optimal filter (bottom, 0-100 Hz) shows that the filter constructed by the Wiener-Hopf algorithm has the frequency characteristics of a bandpass filter with a peak frequency at the signal frequency of 10 Hz.]

Example 8-2 Apply the LMS algorithm to a system identification task. The unknown system will be an all-zero linear process with a digital Transfer Function of:

H(z) = 0.5 + 0.5 z⁻¹ + 1.2 z⁻²

Confirm the match by plotting the magnitude of the Transfer Function for both the unknown and matching systems.

b_unknown = [.5 .5 1.2];           % Define unknown process
xn = randn(1,N);
xd = conv(b_unknown,xn);           % Generate unknown system output
xd = xd( :N+2);                    % Truncate extra points (symmetrically)
%
% Apply Wiener filter
b = wiener_hopf(xn,xd,L);          % Compute matching filter coefficients
b = b/N;                           % Scale filter coefficients
...Calculate and plot frequency characteristics...

Example 8-2 Results

Original Coefficients:   0.5; 0.5; 1.2
Identified Coefficients: 0.44; 0.6; 1.1


[Figure: |H(z)| versus frequency (Hz) for the Unknown Process (left) and the Matching Process (right).]
The identified Transfer Function and coefficients closely matched those of the unknown system. In this example, the unknown system is an all-zero system so the match by an FIR filter was quite close. A system containing both poles and zeros would be more difficult to match.

Adaptive Filters
Classical filters (FIR and IIR) and optimal Wiener filters have fixed frequency characteristics and cannot respond to changes that might occur during the course of the signal. Adaptive filters can modify their properties based on selected features of the signal being analyzed. A typical adaptive filter paradigm is shown.

[Block diagram: a typical adaptive filter; the filter H(z) operates on the input, its output y(n) is subtracted from the desired response d(n), and the resulting error e(n) feeds an adaptation algorithm that adjusts the filter coefficients.]

Least Mean Squared (LMS) Approach


Adaptive filters are often implemented as FIR filters, but with coefficients bn(k) that change over time. The LMS algorithm uses a recursive gradient method known as the steepest-descent method to find the filter coefficients that produce the minimum sum of squared error, adjusting the coefficients so that the sum of squared error moves toward this minimum. Filter coefficients are modified based on an estimate of the negative gradient of the error function with respect to a given bn(k). This estimate is given by the partial derivative of the squared error, εn, with respect to the coefficients, bn(k):

∂εn/∂bn(k) = ∂e²(n)/∂bn(k) = 2 e(n) ∂(d(n) - y(n))/∂bn(k)

LMS Algorithm (continued)


Since d(n) is independent of the coefficients, bn(k), its partial derivative with respect to bn(k) is zero. As y(n) is a function of the input times bn(k), then its partial derivative with respect to bn(k) is just x(n-k), and the equation for the gradient estimate can be rewritten in terms of the instantaneous product of error and the input:

∇εn ≈ -2 e(n) x(n-k)
Based on this new error signal, the new gradient is determined, and the filter coefficients are updated:

bn(k) = bn-1(k) + Δ e(n) x(n-k)

where Δ is a constant that controls the descent and, hence, the rate of convergence.
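The update rule above is only a few lines of code. A NumPy sketch of the LMS loop (a Python stand-in for the MATLAB lms routine used in the examples; the noiseless 3-tap test system and the step size are illustrative assumptions):

```python
import numpy as np

def lms(x, d, delta, L):
    """Adapt an L-tap FIR filter with the LMS update b <- b + delta*e(n)*x(n-k)."""
    M = len(x)
    b = np.zeros(L)
    y = np.zeros(M)
    e = np.zeros(M)
    for n in range(L - 1, M):
        x1 = x[n - L + 1 : n + 1][::-1]   # x(n), x(n-1), ..., x(n-L+1)
        y[n] = b @ x1                     # filter output
        e[n] = d[n] - y[n]                # error
        b = b + delta * e[n] * x1         # steepest-descent coefficient update
    return b, y, e

rng = np.random.default_rng(1)
x = rng.standard_normal(5000)
d = np.convolve(x, [0.5, 0.5, 1.2])[:len(x)]  # output of a known 3-tap system
b, y, e = lms(x, d, delta=0.01, L=3)
```

Because the desired signal here is an exact FIR function of the input, the error decays toward zero and the weights converge to the unknown system's coefficients; with noise present they would instead fluctuate about the Wiener solution.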

Adaptive Noise Cancellation


Adaptive noise cancellation requires a reference signal that contains components of the noise, but not the signal. The reference channel carries a signal, N'(n), that is correlated with the noise, N(n), but not with the signal of interest, x(n). The adaptive filter produces an output, N*(n), that minimizes the overall output. Since the adaptive filter has no access to the signal, x(n), it can only reduce the overall output by minimizing the noise in that output.
[Block diagram: the signal channel carries x(n) + N(n); the reference channel carries N'(n), which drives the adaptive filter; the filter output N*(n) is subtracted from the signal channel, and the result, e(n) = x(n) + N(n) - N*(n), is both the error signal and the desired output.]

Adaptive Line Enhancement (ALE)


A reference signal is not necessary to separate narrowband from broadband signals. In Adaptive Line Enhancement, broadband and narrowband signals are separated by delay: only narrowband signals remain correlated with delayed versions of themselves. The error signal contains both broadband and narrowband components, but the filter can reduce only the narrowband components. Hence the adaptive filter output contains the filtered narrowband signal. The decorrelation delay must be chosen with care.
[Block diagram: the input, broadband signal B(n) plus narrowband signal Nb(n), passes through a decorrelation delay D before reaching the adaptive filter. The filter output, Nb*(n), is the desired output for narrowband enhancement (Adaptive Line Enhancement), while the error signal, e(n) = B(n) + Nb(n) - Nb*(n), is the desired output for broadband recovery (interference suppression).]

Example 8-3 Optimal Filtering using the LMS algorithm. Given the same sinusoidal signal in noise as used in Example 8-1, design an adaptive filter to remove the noise. Just as in Example 8-1, assume that you have a copy of the desired signal.

% Same initial lines as in Example 8-1 .....
% xn is the input signal containing noise
% x is the desired signal (as in Ex 8-1, a noise-free version of the signal)
%
% Calculate convergence parameter
PX = (1/(N+1))* sum(xn.^2);        % Calculate approx. power in xn
delta = a * (1/(10*L*PX));         % Calculate delta
%
[b,y] = lms(xn,x,delta,L);         % Apply LMS algorithm (see below)
%
% Plotting identical to Example 8-1....

The adaptive filter coefficients are determined by the routine lms.

LMS Algorithm
The LMS algorithm is implemented in the function lms. The input is x, the desired signal is d, delta is the convergence factor and L is the filter length.
function [b,y,e] = lms(x,d,delta,L)
% Simple function to adjust filter coefficients using the LMS algorithm
% Adjusts filter coefficients, b, to provide the best match between
%   the input, x(n), and a desired waveform, d(n).
% Both waveforms must be the same length.
% Uses a standard FIR filter.
%
M = length(x);
b = zeros(1,L); y = zeros(1,M);    % Initialize outputs
for n = L:M
   x1 = x(n:-1:n-L+1);             % Select input for convolution
   y(n) = b * x1';                 % Convolve (multiply) weights with input
   e(n) = d(n) - y(n);             % Calculate error
   b = b + delta*e(n)*x1;          % Adjust weights
end

Example 8-3 ~ Results
Application of an adaptive filter using the LMS recursive algorithm to data containing a single sinusoid (10 Hz) in noise (SNR = -8 dB). The filter requires the first 0.4 to 0.5 seconds to adapt (400-500 points), and the frequency characteristics after adaptation are those of a bandpass filter with a peak frequency of 10 Hz.
[Figure: top, x(t), a 10 Hz sine in noise (SNR -8 dB); middle, y(t) after adaptive filtering; bottom, the adaptive filter frequency plot |H(f)|, 0-100 Hz.]

Adaptive Line Enhancement


In Example 8-4 an ALE Filter is constructed using the LMS algorithm. The desired waveform is just the signal delayed. The best delay was found empirically to be 5 samples.
delay = 5;                         % Decorrelation delay
a = .05;                           % Convergence gain
%
% Generate data: two sequential sinusoids, 10 & 20 Hz in noise (SNR = -6)
x = [sig_noise(10,-6,N/2) sig_noise(20,-6,N/2)];
...plot original signal...
%
PX = (1/(N+1))* sum(x.^2);         % Calculate waveform power for delta
delta = (1/(10*L*PX)) * a;         % Use 10% of the max. range of delta
%
xd = [x(delay:N) zeros(1,delay-1)];  % Delay signal to decorrelate noise
[b,y] = lms(xd,x,delta,L);         % Apply LMS algorithm
...plot filtered signal...
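The same delay-and-predict idea can be sketched in NumPy (Python rather than MATLAB; the single 10 Hz sinusoid, 64-tap filter, and delay of 5 samples are illustrative assumptions, and the delayed samples are drawn from the input's past so the predictor can only exploit the narrowband component):

```python
import numpy as np

fs, N, L, delay = 1000, 4000, 64, 5
rng = np.random.default_rng(2)
t = np.arange(N) / fs
x = np.sin(2 * np.pi * 10 * t) + rng.standard_normal(N)  # 10 Hz sine in noise

xd = np.concatenate([np.zeros(delay), x[:-delay]])  # decorrelation delay: xd(n) = x(n - delay)
delta = 1.0 / (10 * L * np.mean(x ** 2))            # conservative convergence gain
b = np.zeros(L)
y = np.zeros(N)
for n in range(L - 1, N):
    x1 = xd[n - L + 1 : n + 1][::-1]   # delayed input samples, newest first
    y[n] = b @ x1                      # narrowband prediction Nb*(n)
    e = x[n] - y[n]                    # broadband error signal
    b = b + delta * e * x1             # LMS update
```

Only the sinusoid is predictable from samples at least `delay` old, so after adaptation y(n) is dominated by the 10 Hz line while the uncorrelated noise is left in the error signal.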

Example 8-4 ~ Results
Unlike a fixed Wiener Filter, an adaptive filter can track changes in a waveform, as shown in Example 8-4 where two sequential sinusoids having different frequencies (10 & 20 Hz) are adaptively filtered.
[Figure: top, x(t), the two sequential sinusoids (10 & 20 Hz) in noise (SNR -6 dB); bottom, the signal after adaptive filtering, which tracks the change in frequency.]

Example 8-5 Adaptive Noise Cancellation (ANC). The LMS algorithm is used with a reference signal to cancel a narrowband interference signal.
In this application, approximately 1000 samples (2.0 sec) are required for the filter to adapt correctly.

[Figure: top, the original signal x(t); middle, the signal plus interference, x(t) + n(t); bottom, y(t) after adaptive noise cancellation; time axis 0-4 sec.]
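The configuration of Example 8-5 can be sketched in NumPy (Python rather than MATLAB; the 2 Hz signal, 60 Hz interference, phase shift on the reference, and 10-tap filter are illustrative assumptions):

```python
import numpy as np

# Adaptive noise cancellation: the reference input carries the interference
# but not the signal, so minimizing the output power removes only the noise.
fs, N, L = 1000, 5000, 10
t = np.arange(N) / fs
x_sig = np.sin(2 * np.pi * 2 * t)                 # signal of interest x(n)
ref = np.sin(2 * np.pi * 60 * t)                  # reference channel N'(n)
interf = 0.8 * np.sin(2 * np.pi * 60 * t + 0.5)   # noise N(n) reaching the signal channel
d = x_sig + interf                                # signal channel x(n) + N(n)

delta = 0.005                                     # convergence gain
b = np.zeros(L)
e = np.zeros(N)
for n in range(L - 1, N):
    x1 = ref[n - L + 1 : n + 1][::-1]   # recent reference samples
    nstar = b @ x1                      # N*(n), the interference estimate
    e[n] = d[n] - nstar                 # output = error = signal estimate
    b = b + delta * e[n] * x1           # LMS update driven by the output
```

Because the phase-shifted interference is an exact FIR function of the reference sinusoid, the filter can cancel it almost completely, and after adaptation the output e(n) closely follows x(n).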

Phase Sensitive Detection


Phase Sensitive Detection, also known as Synchronous Detection, is a technique for demodulating amplitude modulated (AM) signals that is also very effective in reducing noise. From a frequency domain point of view, the effect of amplitude modulation is to shift the signal frequencies to another portion of the spectrum on either side of the modulating, or carrier, frequency. Amplitude modulation can be very effective in reducing noise because it can shift signal frequencies to spectral regions where noise is minimal. The application of a narrowband filter centered about the new frequency range (i.e. the carrier frequency) can then be used to remove the noise. A Phase Sensitive Detector functions as a narrowband filter that tracks the carrier frequency. The bandwidth can be quite small.

Phase Sensitive Detection

[Block diagram: the modulated signal Vm(t) and the carrier Vc(t), after passing through a phase shifter to give Vc*(t), are multiplied; the multiplier output V(t) is lowpass filtered to give Vout(t).]

If Vm(t) = A(t) cos(ωct) and Vc(t) = cos(ωct) is the carrier signal, then the phase shifter produces Vc*(t) = cos(ωct + θ), and the output of the multiplier, V(t), is:

V(t) = Vm(t) cos(ωct + θ) = A(t) cos(ωct) cos(ωct + θ)
     = A(t)/2 [ cos(2ωct + θ) + cos θ ]

After lowpass filtering, the high-frequency term, cos(2ωct + θ), is removed:

Vout(t) = A(t)/2 cos θ

and the output is the demodulated signal, A(t), times a constant (cos θ).
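The demodulation steps above can be sketched in NumPy (Python rather than MATLAB; the 100 Hz carrier, 5 Hz envelope, windowed-sinc lowpass, and θ = 0 are illustrative assumptions):

```python
import numpy as np

fs, fc, N = 1000, 100, 1000
t = np.arange(N) / fs
A = 1 + 0.5 * np.sin(2 * np.pi * 5 * t)    # slowly varying amplitude A(t)
vm = A * np.cos(2 * np.pi * fc * t)        # modulated signal Vm(t)
vc = np.cos(2 * np.pi * fc * t)            # carrier, theta = 0 so cos(theta) = 1

v = vm * vc                                # multiplier: A(t)/2 [cos(2*wc*t) + 1]
# Zero-phase FIR lowpass (windowed sinc, 20 Hz cutoff) removes the 2*fc term
M = 201
n = np.arange(M) - (M - 1) / 2
h = np.sinc(2 * 20 / fs * n) * np.hamming(M)
h /= h.sum()                               # unity gain at DC
vout = np.convolve(v, h, mode="same")      # Vout(t) ~= A(t)/2
```

Doubling vout recovers the envelope A(t); with a nonzero phase error θ the recovered amplitude would be scaled by cos θ, which is why the phase shifter matters.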

Phase Sensitive Detection (continued)


Frequency characteristics of a phase sensitive detector. The frequency response of the lowpass filter is effectively reflected about the carrier frequency, producing a bandpass filter that tracks the carrier frequency. By making the cutoff frequency small, the bandwidth of the virtual bandpass can be very narrow.
[Figure: frequency response of the phase sensitive detector, a narrow bandpass of bandwidth B centered on the carrier frequency fc.]
Example 8-6 Using Phase Sensitive Detection to demodulate a signal amplitude modulated with a 5 Hz sawtooth wave. The AM signal is buried in -10 dB noise. The filter is chosen as a second-order Butterworth lowpass filter with a cutoff frequency set for best noise rejection while still providing reasonable fidelity to the sawtooth waveform.

wn = .02;                          % Lowpass filter cutoff frequency
[b,a] = butter(2,wn);              % Design lowpass filter
%
% Phase sensitive detection
ishift = fix(.125 * fs/fc);        % Shift carrier by 1/4 period
vc = [vc(ishift:N) vc(1:ishift-1)];  % using periodic shift
v1 = vc .* vm;                     % Multiplier
vout = filter(b,a,v1);             % Apply lowpass filter

Example 8-6 ~ Results

[Figure: the modulated signal (top), the modulated signal with noise (middle), and the demodulated signal (bottom); time axis 0-1 sec.]
