1.1

Let

r_u(k) = E[u(n) u*(n-k)]   (1)
r_y(k) = E[y(n) y*(n-k)]   (2)
y(n) = u(n+a) - u(n-a)     (3)

Hence, substituting Eq. (3) into (2), and then using Eq. (1), we get

r_y(k) = E[(u(n+a) - u(n-a))(u*(n+a-k) - u*(n-a-k))]
       = 2 r_u(k) - r_u(2a+k) - r_u(-2a+k)
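As a numerical sanity check (not part of the original solution), the identity can be verified by Monte-Carlo simulation. The test signal below is an assumed unit-variance AR(1) process, for which r_u(k) = rho**|k|; the values of rho, a, and N are illustrative only:

```python
import numpy as np

# Monte-Carlo check of r_y(k) = 2 r_u(k) - r_u(2a + k) - r_u(-2a + k).
# Test signal: unit-variance AR(1) process with r_u(k) = rho**|k|
# (rho, a, N are assumed example values, not from the problem).
rng = np.random.default_rng(1)
rho, a, N = 0.8, 2, 500_000

v = rng.standard_normal(N)
u = np.empty(N)
u[0] = v[0]
for n in range(1, N):
    u[n] = rho * u[n - 1] + np.sqrt(1 - rho**2) * v[n]

r_u = lambda k: rho ** abs(k)
y = u[2 * a:] - u[:-2 * a]          # y(n) = u(n + a) - u(n - a)

for k in range(4):
    est = np.mean(y[k:] * y[: len(y) - k])       # sample estimate of r_y(k)
    pred = 2 * r_u(k) - r_u(2 * a + k) - r_u(-2 * a + k)
    print(k, round(est, 3), round(pred, 3))
```

The sample estimates track the predicted values to within the Monte-Carlo error.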
1.2

The correlation matrix R is Hermitian and nonsingular, so that

R^{-1} R = I

where I is the identity matrix. Taking the Hermitian transpose of both sides:

R^H (R^{-1})^H = I

Since R^H = R, it follows that

(R^{-1})^H = R^{-1}

That is, the inverse matrix R^{-1} is itself Hermitian.
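A quick numerical illustration (with an assumed random Hermitian test matrix, not part of the original solution):

```python
import numpy as np

# The inverse of a Hermitian (positive-definite) matrix is itself Hermitian.
rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
R = A @ A.conj().T + 4 * np.eye(4)     # Hermitian, positive definite by construction
R_inv = np.linalg.inv(R)

assert np.allclose(R, R.conj().T)          # R is Hermitian
assert np.allclose(R_inv, R_inv.conj().T)  # and so is its inverse
```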
1.3

The correlation matrix of the noisy input is

R_u = [ r_11  r_12 ] + [ σ²  0  ] = [ r_11 + σ²   r_12      ]
      [ r_21  r_22 ]   [ 0   σ² ]   [ r_21        r_22 + σ² ]

For R_u to be nonsingular, we require

det(R_u) = (r_11 + σ²)(r_22 + σ²) - r_12 r_21 > 0

With r_12 = r_21 for real data, this condition reduces to

(r_11 + σ²)(r_22 + σ²) - r_12² > 0

Since the left-hand side is quadratic in σ², we may impose the following condition on σ² for nonsingularity of R_u:

σ² > (1/2)(r_11 + r_22) [ √(1 - 4Δ_r/(r_11 + r_22)²) - 1 ]

where Δ_r = r_11 r_22 - r_12².
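The threshold can be checked numerically. The entries below (r_11 = r_22 = 1, r_12 = r_21 = 2) are assumed example values, chosen so that Δ_r < 0 and the threshold is positive:

```python
import numpy as np

# Check the nonsingularity threshold for R_u = R + sigma^2 I in the 2x2 case.
# r11, r22, r12 are assumed example values with Delta_r < 0.
r11, r22, r12 = 1.0, 1.0, 2.0
delta_r = r11 * r22 - r12**2
s = r11 + r22
sigma2_min = 0.5 * s * (np.sqrt(1 - 4 * delta_r / s**2) - 1)   # larger root of the quadratic

def det_Ru(sigma2):
    return (r11 + sigma2) * (r22 + sigma2) - r12**2

assert det_Ru(sigma2_min + 1e-6) > 0   # just above the threshold: nonsingular
assert det_Ru(sigma2_min - 1e-6) < 0   # just below: determinant still negative
```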
1.4

We are given

R = [ 1  1 ]
    [ 1  1 ]

This matrix is positive definite because

a^T R a = [a_1, a_2] [ 1  1 ] [ a_1 ]
                     [ 1  1 ] [ a_2 ]
        = a_1² + 2 a_1 a_2 + a_2²
        = (a_1 + a_2)² ≥ 0

At the same time,

det(R) = (1)(1) - (1)(1) = 0

Hence, it is possible for a matrix to be positive definite and yet singular.
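A short numerical confirmation of both properties (not part of the original solution):

```python
import numpy as np

# The 2x2 all-ones matrix: nonnegative quadratic form, yet det(R) = 0.
R = np.array([[1.0, 1.0], [1.0, 1.0]])
assert abs(np.linalg.det(R)) < 1e-12       # singular

rng = np.random.default_rng(3)
for _ in range(1000):
    a = rng.standard_normal(2)
    q = a @ R @ a
    assert abs(q - (a[0] + a[1]) ** 2) < 1e-9   # a^T R a = (a1 + a2)^2
    assert q >= -1e-9                           # never negative
```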
1.5

(a) Partition the (M+1)-by-(M+1) correlation matrix as

R_{M+1} = [ r(0)  r^H ]
          [ r     R_M ]   (1)

Let the inverse be partitioned conformably as

R_{M+1}^{-1} = [ a  b^H ]
               [ b  C   ]   (2)

where a is a scalar, b is an M-by-1 vector, and C is an M-by-M matrix. The requirement R_{M+1} R_{M+1}^{-1} = I_{M+1} yields the four conditions

r(0) a + r^H b = 1        (3)
r a + R_M b = 0           (4)
r b^H + R_M C = I_M       (5)
r(0) b^H + r^H C = 0^T    (6)

From (4):

b = -R_M^{-1} r a   (7)

Substituting (7) into (3) and solving for a:

a = 1 / (r(0) - r^H R_M^{-1} r)   (8)

Correspondingly,

b = -R_M^{-1} r / (r(0) - r^H R_M^{-1} r)   (9)

From (5):

C = R_M^{-1} - R_M^{-1} r b^H
  = R_M^{-1} + (R_M^{-1} r r^H R_M^{-1}) / (r(0) - r^H R_M^{-1} r)   (10)

As a check, the results of Eqs. (9) and (10) should satisfy Eq. (6):

r(0) b^H + r^H C = -r(0) r^H R_M^{-1} / (r(0) - r^H R_M^{-1} r) + r^H R_M^{-1}
                   + (r^H R_M^{-1} r) r^H R_M^{-1} / (r(0) - r^H R_M^{-1} r)
                 = 0^T

Collecting the results of Eqs. (8) to (10), we may write

R_{M+1}^{-1} = [ 0  0^T      ] + a [  1          ] [ 1  -r^H R_M^{-1} ]
               [ 0  R_M^{-1} ]     [ -R_M^{-1} r ]
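The partitioned-inverse expressions (8)-(10) can also be verified numerically against a direct matrix inversion. The random Hermitian test matrix below is an assumed example:

```python
import numpy as np

# Numerical check of Eqs. (8)-(10) against np.linalg.inv for a random
# Hermitian positive-definite matrix partitioned as [[r(0), r^H], [r, R_M]].
rng = np.random.default_rng(4)
M = 4
A = rng.standard_normal((M + 1, M + 1)) + 1j * rng.standard_normal((M + 1, M + 1))
R = A @ A.conj().T + (M + 1) * np.eye(M + 1)   # Hermitian positive definite

r0 = R[0, 0].real
r = R[1:, 0].reshape(-1, 1)                    # column vector r
RM = R[1:, 1:]
RMinv = np.linalg.inv(RM)

denom = r0 - (r.conj().T @ RMinv @ r).item().real
a = 1 / denom                                          # Eq. (8)
b = -RMinv @ r / denom                                 # Eq. (9)
C = RMinv + RMinv @ r @ r.conj().T @ RMinv / denom     # Eq. (10)

R_inv = np.block([[np.array([[a]]), b.conj().T], [b, C]])
assert np.allclose(R_inv, np.linalg.inv(R))
```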
(b) Alternatively, partition R_{M+1} as

R_{M+1} = [ R_M     r^{B*} ]
          [ r^{BT}  r(0)   ]   (11)

where r^B denotes the backward-arranged version of the vector r. Let

R_{M+1}^{-1} = [ D    e ]
               [ e^H  f ]   (12)

where D is an M-by-M matrix, e is an M-by-1 vector, and f is a scalar. The requirement R_{M+1} R_{M+1}^{-1} = I_{M+1} yields

R_M D + r^{B*} e^H = I_M     (13)
R_M e + r^{B*} f = 0         (14)
r^{BT} e + r(0) f = 1        (15)
r^{BT} D + r(0) e^H = 0^T    (16)

From (14):

e = -R_M^{-1} r^{B*} f   (17)

Substituting (17) into (15) and solving for f:

f = 1 / (r(0) - r^{BT} R_M^{-1} r^{B*})   (18)

Correspondingly,

e = -R_M^{-1} r^{B*} / (r(0) - r^{BT} R_M^{-1} r^{B*})   (19)

From (13):

D = R_M^{-1} - R_M^{-1} r^{B*} e^H
  = R_M^{-1} + (R_M^{-1} r^{B*} r^{BT} R_M^{-1}) / (r(0) - r^{BT} R_M^{-1} r^{B*})   (20)

As a check, the results of Eqs. (19) and (20) must satisfy Eq. (16). Thus

r^{BT} D + r(0) e^H = r^{BT} R_M^{-1} + (r^{BT} R_M^{-1} r^{B*}) r^{BT} R_M^{-1} / (r(0) - r^{BT} R_M^{-1} r^{B*})
                      - r(0) r^{BT} R_M^{-1} / (r(0) - r^{BT} R_M^{-1} r^{B*})
                    = 0^T

Collecting the results of Eqs. (18) to (20), we may write

R_{M+1}^{-1} = [ R_M^{-1}  0 ] + f [ -R_M^{-1} r^{B*} ] [ -r^{BT} R_M^{-1}  1 ]
               [ 0^T       0 ]     [  1                ]
1.6

(a) We express the difference equation describing the first-order AR process u(n) as

u(n) = v(n) + w_1 u(n-1)

where w_1 = -a_1. Solving this equation by repeated substitution, we get

u(n) = v(n) + w_1 v(n-1) + w_1² u(n-2)
     = ⋯
     = v(n) + w_1 v(n-1) + w_1² v(n-2) + ⋯ + w_1^{n-1} v(1)   (1)

With E[v(n)] = μ for all n,

E[u(n)] = μ (1 + w_1 + w_1² + ⋯ + w_1^{n-1})
        = μ (1 - w_1^n)/(1 - w_1),   w_1 ≠ 1
          μ n,                       w_1 = 1

This result shows that if μ ≠ 0, then E[u(n)] is a function of time n. Accordingly, the AR process u(n) is not stationary. If, however, the AR parameter satisfies the condition

|a_1| < 1  or  |w_1| < 1

then

E[u(n)] → μ/(1 - w_1)  as n → ∞

Under this condition, we say that the AR process is asymptotically stationary to order one.

(b) When the white noise process v(n) has zero mean, the AR process u(n) will likewise have zero mean. Then, with

var[v(n)] = σ_v²

we have

var[u(n)] = E[u²(n)]   (2)

Substituting Eq. (1) into (2), and recognizing that for the white noise process

E[v(n) v(k)] = σ_v²,  n = k
               0,     n ≠ k   (3)

we get

var[u(n)] = σ_v² (1 + w_1² + w_1⁴ + ⋯ + w_1^{2n-2})
          = σ_v² (1 - w_1^{2n})/(1 - w_1²),   |w_1| ≠ 1
            σ_v² n,                           |w_1| = 1

For |w_1| < 1,

var[u(n)] → σ_v²/(1 - w_1²) = σ_v²/(1 - a_1²)  for large n

(c) The autocorrelation function of the AR process u(n) equals E[u(n)u(n-k)]. Substituting Eq. (1) into this formula, and using Eq. (3), we get

E[u(n) u(n-k)] = σ_v² (w_1^k + w_1^{k+2} + ⋯ + w_1^{k+2n-2})
               = σ_v² w_1^k (1 - w_1^{2n})/(1 - w_1²),   |w_1| ≠ 1
                 σ_v² n,                                 |w_1| = 1

For |a_1| < 1 or |w_1| < 1, we may therefore express this autocorrelation function as

r(k) = E[u(n) u(n-k)]
     ≈ σ_v² w_1^k / (1 - w_1²)  for large n

Case 1: 0 < a_1 < 1

In this case, w_1 = -a_1 is negative, and r(k) varies with k as follows:
[Plot of r(k) versus k for k = -4, ..., +4: the sign of r(k) alternates with k.]

Case 2: -1 < a_1 < 0

In this case, w_1 = -a_1 is positive, and r(k) is positive for all k, decaying geometrically with |k|:

[Plot of r(k) versus k for k = -4, ..., +4: r(k) is positive for all k.]
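The asymptotic results of parts (b) and (c), including the alternating sign of r(k) in Case 1, can be checked by simulation. The numerical values (a_1 = 0.8, σ_v = 1) below are assumed for illustration:

```python
import numpy as np

# Monte-Carlo check of var[u(n)] -> sigma_v^2/(1 - w1^2) and
# r(k) -> sigma_v^2 w1^k/(1 - w1^2) for a first-order AR process.
# a1 = 0.8 and sigma_v = 1 are assumed example values (Case 1: w1 < 0).
rng = np.random.default_rng(5)
a1, sigma_v, N = 0.8, 1.0, 400_000
w1 = -a1

v = rng.standard_normal(N) * sigma_v
u = np.zeros(N)
for n in range(1, N):
    u[n] = w1 * u[n - 1] + v[n]

u = u[1000:]                                # discard the start-up transient
pred_var = sigma_v**2 / (1 - w1**2)
assert abs(np.var(u) - pred_var) < 0.1

for k in range(1, 4):
    r_k = np.mean(u[k:] * u[:-k])           # sample estimate of r(k)
    assert abs(r_k - pred_var * w1**k) < 0.1   # alternates in sign with k
```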
1.7
(a) The Yule-Walker equations for the second-order AR process are

[ r(0)  r(1) ] [ -a_1 ]   [ r(1) ]
[ r(1)  r(0) ] [ -a_2 ] = [ r(2) ]

(b) Solving these equations with the given AR parameters a_1 = -1 and a_2 = 0.5, we get

r(1) = (2/3) r(0)   (1)
r(2) = (1/6) r(0)   (2)

(c) Since the noise v(n) has zero mean, so will the AR process u(n). Hence,

var[u(n)] = E[u²(n)] = r(0)

We know that

σ_v² = Σ_{k=0}^{2} a_k r(k)
     = r(0) + a_1 r(1) + a_2 r(2)   (3)

Substituting (1) and (2) into (3), and solving for r(0), we get

r(0) = σ_v² / (1 + (2/3) a_1 + (1/6) a_2)
     = 0.5 / (5/12)
     = 1.2
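The value r(0) = 1.2 can be confirmed by direct simulation of the AR(2) process u(n) = u(n-1) - 0.5 u(n-2) + v(n) with σ_v² = 0.5 (the parameter values used above):

```python
import numpy as np

# Monte-Carlo check of r(0) = 1.2 for the AR(2) process
# u(n) = u(n-1) - 0.5 u(n-2) + v(n), with sigma_v^2 = 0.5.
rng = np.random.default_rng(6)
N = 1_000_000
v = rng.standard_normal(N) * np.sqrt(0.5)
u = np.zeros(N)
for n in range(2, N):
    u[n] = u[n - 1] - 0.5 * u[n - 2] + v[n]

u = u[10_000:]                  # discard the start-up transient
assert abs(np.var(u) - 1.2) < 0.05
```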
1.8

By definition,

P_0 = average power of the AR process u(n)
    = E[|u(n)|²]
    = r(0)   (1)

where r(0) is the autocorrelation function of u(n) for zero lag. We note that the normalized correlations

r(1)/r(0), r(2)/r(0), ..., r(M)/r(0)

uniquely determine the AR parameters

{a_1, a_2, ..., a_M}   (2)

and vice versa. Accordingly, the set of autocorrelation values {r(0), r(1), ..., r(M)} is equivalent to the set

{P_0, a_1, a_2, ..., a_M}
1.9

The transfer function of the MA model is

H(z) = 1 + b_1* z^{-1} + b_2* z^{-2} + ⋯ + b_K* z^{-K}
1.10

We are given

x(n) = v(n) + 0.75 v(n-1) + 0.25 v(n-2)

Taking the z-transforms of both sides:

X(z) = (1 + 0.75 z^{-1} + 0.25 z^{-2}) V(z)

so that

X(z)/V(z) = 1 / (1 + 0.75 z^{-1} + 0.25 z^{-2})^{-1}   (1)

Using long division, we may perform the following expansion of the denominator in Eq. (1):

(1 + 0.75 z^{-1} + 0.25 z^{-2})^{-1}
  = 1 - (3/4) z^{-1} + (5/16) z^{-2} - (3/64) z^{-3} - (11/256) z^{-4} + (45/1024) z^{-5}
    - (91/4096) z^{-6} + (93/16384) z^{-7} + (85/65536) z^{-8} - (627/262144) z^{-9} + (1541/1048576) z^{-10} - ⋯
  = 1 - 0.75 z^{-1} + 0.3125 z^{-2} - 0.0469 z^{-3} - 0.043 z^{-4} + 0.0439 z^{-5}
    - 0.0222 z^{-6} + 0.0057 z^{-7} + 0.0013 z^{-8} - 0.0024 z^{-9} + 0.0015 z^{-10} - ⋯   (2)

(a) M = 2

Retaining terms in Eq. (2) up to z^{-2}, we may approximate the MA model with an AR model of order two as follows:

X(z)/V(z) ≈ 1 / (1 - 0.75 z^{-1} + 0.3125 z^{-2})

(b) M = 5

Retaining terms in Eq. (2) up to z^{-5}, we obtain the following approximation in the form of an AR model of order five:

X(z)/V(z) ≈ 1 / (1 - 0.75 z^{-1} + 0.3125 z^{-2} - 0.0469 z^{-3} - 0.043 z^{-4} + 0.0439 z^{-5})

(c) M = 10

Finally, retaining terms in Eq. (2) up to z^{-10}, we obtain the following approximation in the form of an AR model of order ten:

X(z)/V(z) ≈ 1/D(z)

where D(z) is the tenth-order polynomial on the right-hand side of Eq. (2).
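The long-division coefficients in Eq. (2) follow from the recursion c_n = -(0.75 c_{n-1} + 0.25 c_{n-2}), obtained by multiplying the expansion back by the MA polynomial. A short sketch that reproduces them:

```python
# Reproduce the long-division coefficients of Eq. (2) via the recursion
# c_n = -(0.75 c_{n-1} + 0.25 c_{n-2}), c_0 = 1, which enforces
# (1 + 0.75 z^-1 + 0.25 z^-2)(c_0 + c_1 z^-1 + c_2 z^-2 + ...) = 1.
b = [1.0, 0.75, 0.25]
c = [1.0]
for n in range(1, 11):
    c.append(-sum(b[i] * c[n - i] for i in range(1, min(n, 2) + 1)))

print(c)   # c[1] = -0.75, c[2] = 0.3125, c[3] = -3/64, ...
```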
1.11

(a) The filter output is

x(n) = w^H u(n)

where u(n) is the tap-input vector. The average power of the filter output is therefore

E[|x(n)|²] = E[w^H u(n) u^H(n) w]
           = w^H E[u(n) u^H(n)] w
           = w^H R w

(b) If u(n) is extracted from a zero-mean white noise of variance σ², we have

R = σ² I

where I is the identity matrix. Hence,

E[|x(n)|²] = σ² w^H w
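A Monte-Carlo sketch of the white-noise special case (filter length and weight values are assumed examples):

```python
import numpy as np

# Monte-Carlo check of E|x(n)|^2 = sigma^2 w^H w for white-noise tap inputs.
# M, T, and the weight vector w are assumed example values; sigma^2 = 1.
rng = np.random.default_rng(7)
M, T = 3, 200_000
w = rng.standard_normal(M) + 1j * rng.standard_normal(M)
U = rng.standard_normal((M, T))        # T independent white tap-input vectors
x = w.conj() @ U                       # x(n) = w^H u(n), one output per column
power = np.mean(np.abs(x) ** 2)

assert abs(power - np.vdot(w, w).real) < 0.2   # sigma^2 w^H w with sigma^2 = 1
```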
1.12
(a) The process u(n) is a linear combination of Gaussian samples. Hence, u(n) is
Gaussian.
(b) From inverse filtering, we recognize that v(n) may also be expressed as a linear
combination of samples represented by u(n). Hence, if u(n) is Gaussian, then v(n) is
also Gaussian.
1.13

Applying the Gaussian moment-factoring theorem to the zero-mean complex Gaussian variables u_1 and u_2,

E[(u_1* u_2)^k] = E[u_1* ⋯ u_1* u_2 ⋯ u_2]
                = k! E[u_1* u_2] ⋯ E[u_1* u_2]   (k factors)
                = k! (E[u_1* u_2])^k   (1)

Putting u_1 = u_2 = u in Eq. (1), we obtain the special case

E[|u|^{2k}] = k! (E[|u|²])^k
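A Monte-Carlo sketch of the special case, using a circularly symmetric complex Gaussian with E[|u|²] = 1 (so the prediction is simply k!):

```python
import math
import numpy as np

# Monte-Carlo check of E[|u|^{2k}] = k! (E[|u|^2])^k for a zero-mean
# circularly symmetric complex Gaussian u with E[|u|^2] = 1.
rng = np.random.default_rng(8)
N = 1_000_000
u = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)

for k in (2, 3):
    moment = np.mean(np.abs(u) ** (2 * k))
    assert abs(moment - math.factorial(k)) < 0.2   # prediction: k!
```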
1.14
It is not permissible to interchange the order of expectation and limiting operations in Eq.
(1.113). The reason is that the expectation is a linear operation, whereas the limiting
operation with respect to the number of samples N is nonlinear.
1.15

The filter outputs at times n and m are

y(n) = Σ_i h(i) u(n-i)
y(m) = Σ_k h(k) u(m-k)

Hence,

r_y(n, m) = E[y(n) y*(m)]
          = E[ Σ_i h(i) u(n-i) Σ_k h*(k) u*(m-k) ]
          = Σ_i Σ_k h(i) h*(k) E[u(n-i) u*(m-k)]
          = Σ_i Σ_k h(i) h*(k) r_u(n-i, m-k)
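For a stationary white-noise input (r_u(k) = δ(k)), the double sum collapses to r_y(k) = Σ_i h(i) h*(i-k), which is easy to check numerically. The filter taps below are assumed example values:

```python
import numpy as np

# Check of the double-sum formula for a stationary white-noise input:
# r_y(k) = sum_i h(i) h(i - k) for real taps. Taps are assumed examples.
rng = np.random.default_rng(9)
h = np.array([1.0, -0.5, 0.25])
u = rng.standard_normal(600_000)
y = np.convolve(u, h, mode="valid")     # y(n) = sum_i h(i) u(n - i)

for k in range(3):
    est = np.mean(y[k:] * y[: len(y) - k])
    pred = sum(h[i] * h[i - k] for i in range(k, len(h)))
    assert abs(est - pred) < 0.05
```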
1.16

The mean-square value of the filter output in response to a white-noise input of variance σ² is

P_o = 2σ²Δω/π

The value P_o is linearly proportional to the filter bandwidth Δω. This relation holds irrespective of how small Δω is compared to the mid-band frequency of the filter.
1.17

As shown in Problem 1.16,

σ_y² = 2σ²Δω/π

We are given

σ² = 0.1 volt²
Δω = 2π × 1 rad/s

Hence,

σ_y² = (2 × 0.1 × 2π)/π = 0.4 volt²
1.18

(a) Define the normalized N-point DFT of u(n) as

U_k = (1/√N) Σ_{n=0}^{N-1} u(n) exp(-j n Ω k),   k = 0, 1, ..., N-1

where Ω = 2π/N. Then

E[U_k U_l*] = (1/N) Σ_{n=0}^{N-1} Σ_{m=0}^{N-1} E[u(n) u*(m)] exp(-j n Ω k) exp(j m Ω l)
            = (1/N) Σ_{n=0}^{N-1} Σ_{m=0}^{N-1} r(n-m) exp(-j n Ω k) exp(j m Ω l)   (1)

Let S_k denote the N-point DFT of the autocorrelation sequence:

Σ_{n=0}^{N-1} r(n) exp(-j n Ω k) = S_k

Moreover, since r(n) is periodic with period N, we may invoke the time-shifting property of the discrete Fourier transform to write

Σ_{n=0}^{N-1} r(n-m) exp(-j n Ω k) = exp(-j m Ω k) S_k

Hence, Eq. (1) reduces to

E[U_k U_l*] = (1/N) S_k Σ_{m=0}^{N-1} exp(j m Ω (l-k))
            = S_k,  l = k
              0,    otherwise

(b) Part (a) shows that the complex spectral samples U_k are uncorrelated. If they are Gaussian, then they will also be statistically independent. Hence,

f_U(U_0, U_1, ..., U_{N-1}) = 1/((2π)^N det(Λ)) exp(-(1/2) U^H Λ^{-1} U)

where

U = [U_0, U_1, ..., U_{N-1}]^T
Λ = (1/2) E[U U^H] = (1/2) diag(S_0, S_1, ..., S_{N-1})
det(Λ) = (1/2^N) Π_{k=0}^{N-1} S_k

Therefore,

f_U(U_0, U_1, ..., U_{N-1}) = 1/((2π)^N 2^{-N} Π_{k=0}^{N-1} S_k) exp(-Σ_{k=0}^{N-1} |U_k|²/S_k)
                            = 1/(π^N Π_{k=0}^{N-1} S_k) exp(-Σ_{k=0}^{N-1} |U_k|²/S_k)
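Part (a) can be checked by simulation: draw a Gaussian process whose covariance matrix is circulant (so that r(n) is periodic with period N), take normalized DFTs, and estimate E[U U^H]. All numeric values below (N, the spectrum S_k, the number of trials) are assumed:

```python
import numpy as np

# Check that the normalized DFT coefficients of a process with a circulant
# covariance matrix are uncorrelated, with E|U_k|^2 = S_k.
rng = np.random.default_rng(10)
N, trials = 8, 200_000
S = np.array([4.0, 3.0, 2.0, 1.0, 0.5, 1.0, 2.0, 3.0])    # S_k = S_{N-k}: real process
r = np.fft.ifft(S).real                                   # periodic autocorrelation r(n)
R = np.array([[r[(n - m) % N] for m in range(N)] for n in range(N)])  # circulant

L = np.linalg.cholesky(R + 1e-12 * np.eye(N))
u = L @ rng.standard_normal((N, trials))                  # Gaussian samples with cov R
U = np.fft.fft(u, axis=0) / np.sqrt(N)                    # normalized DFT

C = (U @ U.conj().T) / trials                             # estimate of E[U U^H]
assert np.allclose(np.diag(C).real, S, atol=0.1)          # E|U_k|^2 = S_k
assert np.max(np.abs(C - np.diag(np.diag(C)))) < 0.1      # off-diagonal terms vanish
```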
Equivalently, in exponential form,

f_U(U_0, U_1, ..., U_{N-1}) = exp(-Σ_{k=0}^{N-1} [ |U_k|²/S_k + ln S_k ] - N ln π)

1.19

By the Cramér spectral representation,

E[|dz(ω)|²] = S(ω) dω

Hence E[|dz(ω)|²] is measured in watts.
1.20

For a zero-mean Gaussian process, the third-order cumulant is

c_3(τ_1, τ_2) = 0

The fourth-order cumulant is

c_4(τ_1, τ_2, τ_3) = E[u(n) u(n+τ_1) u(n+τ_2) u(n+τ_3)]
                     - E[u(n) u(n+τ_1)] E[u(n+τ_2) u(n+τ_3)]
                     - E[u(n) u(n+τ_2)] E[u(n+τ_1) u(n+τ_3)]
                     - E[u(n) u(n+τ_3)] E[u(n+τ_1) u(n+τ_2)]

For the special case of τ_1 = τ_2 = τ_3 = 0, the fourth-order moment of a zero-mean Gaussian process of variance σ² is 3σ⁴, and its second-order moment is σ². Hence, the fourth-order cumulant is

c_4(0, 0, 0) = 3σ⁴ - 3σ⁴ = 0

Indeed, all cumulants of a Gaussian process higher than order two are zero.
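A Monte-Carlo sketch of the zero-lag special case (sample size is an assumed value):

```python
import numpy as np

# Monte-Carlo check that the fourth-order cumulant of a zero-mean Gaussian
# process vanishes at zero lags: E[u^4] - 3 (E[u^2])^2 = 0.
rng = np.random.default_rng(11)
u = rng.standard_normal(2_000_000)
c4 = np.mean(u**4) - 3 * np.mean(u**2) ** 2
assert abs(c4) < 0.05
```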
1.21

The trispectrum is

C_4(ω_1, ω_2, ω_3) = Σ_{τ_1=-∞}^{∞} Σ_{τ_2=-∞}^{∞} Σ_{τ_3=-∞}^{∞} c_4(τ_1, τ_2, τ_3) e^{-j(ω_1 τ_1 + ω_2 τ_2 + ω_3 τ_3)}
1.22

We are given the kth-order cumulant of the linear process,

c_k(τ_1, ..., τ_{k-1}) = γ_k Σ_{i=-∞}^{∞} h_i h_{i+τ_1} ⋯ h_{i+τ_{k-1}}

so that, in particular,

c_3(τ_1, τ_2) = γ_3 Σ_{i=-∞}^{∞} h_i h_{i+τ_1} h_{i+τ_2}

The bispectrum is therefore

C_3(ω_1, ω_2) = Σ_{τ_1=-∞}^{∞} Σ_{τ_2=-∞}^{∞} c_3(τ_1, τ_2) e^{-j(ω_1 τ_1 + ω_2 τ_2)}
              = γ_3 Σ_{i=-∞}^{∞} Σ_{τ_1=-∞}^{∞} Σ_{τ_2=-∞}^{∞} h_i h_{i+τ_1} h_{i+τ_2} e^{-j(ω_1 τ_1 + ω_2 τ_2)}

Hence, for a real impulse response,

C_3(ω_1, ω_2) = γ_3 H(e^{jω_1}) H(e^{jω_2}) H*(e^{j(ω_1+ω_2)})

and, correspondingly, the phase of the bispectrum is

arg C_3(ω_1, ω_2) = arg H(e^{jω_1}) + arg H(e^{jω_2}) - arg H(e^{j(ω_1+ω_2)})
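The factorization of the bispectrum can be verified deterministically for a short FIR impulse response by evaluating the double sum directly. The taps, γ_3, and the frequency pair below are assumed example values:

```python
import numpy as np

# Deterministic check of C3(w1, w2) = gamma3 H(w1) H(w2) H*(w1 + w2)
# for a short real FIR impulse response (assumed example taps).
h = np.array([1.0, 0.5, -0.25])
gamma3 = 2.0
H = lambda w: np.sum(h * np.exp(-1j * w * np.arange(len(h))))

w1, w2 = 0.7, 1.3
lhs = 0j
for t1 in range(-2, 3):                     # all lags with nonzero c3
    for t2 in range(-2, 3):
        c3 = gamma3 * sum(h[i] * h[i + t1] * h[i + t2]
                          for i in range(len(h))
                          if 0 <= i + t1 < len(h) and 0 <= i + t2 < len(h))
        lhs += c3 * np.exp(-1j * (w1 * t1 + w2 * t2))

rhs = gamma3 * H(w1) * H(w2) * np.conj(H(w1 + w2))
assert abs(lhs - rhs) < 1e-9
```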
1.23
The output of a filter of impulse response h_i due to an input u(i) is given by the convolution sum

y(n) = Σ_i h_i u(n-i)

The third-order cumulant of the output is therefore

c_3(τ_1, τ_2) = E[ Σ_i h_i u(n-i) Σ_k h_k u(n+τ_1-k) Σ_l h_l u(n+τ_2-l) ]
              = E[ Σ_i h_i u(n-i) Σ_k h_{k+τ_1} u(n-k) Σ_l h_{l+τ_2} u(n-l) ]
              = Σ_i Σ_k Σ_l h_i h_{k+τ_1} h_{l+τ_2} E[u(n-i) u(n-k) u(n-l)]

For an independent, identically distributed input, the third-order moment E[u(n-i) u(n-k) u(n-l)] equals γ_3 for i = k = l and zero otherwise. Hence,

c_3(τ_1, τ_2) = γ_3 Σ_{i=-∞}^{∞} h_i h_{i+τ_1} h_{i+τ_2}

Generalizing this result to order k:

c_k(τ_1, τ_2, ..., τ_{k-1}) = γ_k Σ_{i=-∞}^{∞} h_i h_{i+τ_1} ⋯ h_{i+τ_{k-1}}
1.24

By definition, the cyclic autocorrelation function is

r^α(k) = (1/N) Σ_{n=0}^{N-1} E[u(n) u*(n-k) e^{-j2παn}] e^{jπαk}

Hence, replacing k with -k,

r^α(-k) = (1/N) Σ_{n=0}^{N-1} E[u(n) u*(n+k) e^{-j2παn}] e^{-jπαk}

and, taking the complex conjugate of the definition,

r^α*(k) = (1/N) Σ_{n=0}^{N-1} E[u*(n) u(n-k) e^{j2παn}] e^{-jπαk}

We are told that the process u(n) is cyclostationary, which means that

E[u(n) u*(n+k) e^{-j2παn}] = E[u*(n) u(n-k) e^{j2παn}]

Hence,

r^α(-k) = r^α*(k)

For α = 0, the input to the time-average cross-correlator reduces to the squared amplitude of the output of a narrow-band filter with mid-band frequency ω. Correspondingly, the time-average cross-correlator reduces to an average power meter. Thus, for α = 0, the instrumentation of Fig. 1.16 reduces to that of Fig. 1.13.