
CHAPTER 1

1.1

Let
$$r_u(k) = E[u(n)u^*(n-k)] \qquad (1)$$
$$r_y(k) = E[y(n)y^*(n-k)] \qquad (2)$$

We are given that
$$y(n) = u(n+a) - u(n-a) \qquad (3)$$

Hence, substituting Eq. (3) into (2), and then using Eq. (1), we get
$$\begin{aligned}
r_y(k) &= E[(u(n+a) - u(n-a))(u^*(n+a-k) - u^*(n-a-k))] \\
&= 2r_u(k) - r_u(k+2a) - r_u(k-2a)
\end{aligned}$$
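As a quick numerical sanity check on this identity (an addition to the solution; the moving-average signal model, the lag range, and the value $a = 3$ are arbitrary choices), sample autocorrelation estimates can be compared on both sides:

```python
import numpy as np

rng = np.random.default_rng(0)
N, a = 200_000, 3
# A stationary correlated sequence (a moving average of white noise; arbitrary choice)
u = np.convolve(rng.standard_normal(N), np.ones(5) / 5, mode="same")
# y(n) = u(n + a) - u(n - a); indexing with a constant offset leaves the statistics unchanged
y = u[2 * a:] - u[:-2 * a]

def acorr(x, k):
    """Sample estimate of r_x(k) = E[x(n) x(n - k)]; for real stationary x, r(-k) = r(k)."""
    k = abs(k)
    return np.mean(x[k:] * x[:len(x) - k])

for k in range(6):
    lhs = acorr(y, k)
    rhs = 2 * acorr(u, k) - acorr(u, k + 2 * a) - acorr(u, k - 2 * a)
    print(k, round(lhs, 4), round(rhs, 4))   # the two columns agree to within sampling error
```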
1.2

We know that the correlation matrix $\mathbf{R}$ is Hermitian; that is,
$$\mathbf{R}^H = \mathbf{R}$$
Given that the inverse matrix $\mathbf{R}^{-1}$ exists, we may write
$$\mathbf{R}^{-1}\mathbf{R} = \mathbf{I}$$
where $\mathbf{I}$ is the identity matrix. Taking the Hermitian transpose of both sides:
$$\mathbf{R}^H(\mathbf{R}^{-1})^H = \mathbf{R}(\mathbf{R}^{-1})^H = \mathbf{I}$$
Hence,
$$(\mathbf{R}^{-1})^H = \mathbf{R}^{-1}$$
That is, the inverse matrix $\mathbf{R}^{-1}$ is Hermitian.
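The same fact can be illustrated numerically in a few lines (an addition; the random Hermitian positive definite matrix below is an arbitrary example):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
R = A @ A.conj().T + 4 * np.eye(4)       # Hermitian positive definite by construction
Rinv = np.linalg.inv(R)
print(np.allclose(Rinv, Rinv.conj().T))  # True: the inverse is Hermitian
```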


1.3

For the case of a two-by-two matrix, we may write
$$\mathbf{R}_u = \mathbf{R}_s + \sigma^2\mathbf{I} = \begin{bmatrix} r_{11} & r_{12} \\ r_{21} & r_{22} \end{bmatrix} + \begin{bmatrix} \sigma^2 & 0 \\ 0 & \sigma^2 \end{bmatrix} = \begin{bmatrix} r_{11}+\sigma^2 & r_{12} \\ r_{21} & r_{22}+\sigma^2 \end{bmatrix}$$

For $\mathbf{R}_u$ to be nonsingular, we require
$$\det(\mathbf{R}_u) = (r_{11}+\sigma^2)(r_{22}+\sigma^2) - r_{12}r_{21} > 0$$
With $r_{12} = r_{21}$ for real data, this condition reduces to
$$(r_{11}+\sigma^2)(r_{22}+\sigma^2) - r_{12}^2 > 0$$
Since this is quadratic in $\sigma^2$, we may impose the following condition on $\sigma^2$ for nonsingularity of $\mathbf{R}_u$:
$$\sigma^2 > \frac{1}{2}(r_{11}+r_{22})\left[\sqrt{1 - \frac{4\Delta_r}{(r_{11}+r_{22})^2}} - 1\right]$$
where $\Delta_r = r_{11}r_{22} - r_{12}^2$.
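As a purely algebraic check on this threshold (an addition; the numbers are arbitrary and are not required to come from a valid signal correlation matrix), the determinant should vanish exactly at the boundary value of $\sigma^2$:

```python
import numpy as np

r11, r22, r12 = 1.0, 2.0, 1.6          # arbitrary test values (here r11*r22 - r12**2 < 0)
dr = r11 * r22 - r12**2
s2 = 0.5 * (r11 + r22) * (np.sqrt(1 - 4 * dr / (r11 + r22) ** 2) - 1)  # boundary sigma^2
det = (r11 + s2) * (r22 + s2) - r12**2
print(s2, det)   # det is ~0 at the threshold; any larger sigma^2 makes it positive
```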
1.4

We are given
$$\mathbf{R} = \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}$$
This matrix is nonnegative definite because
$$\mathbf{a}^T\mathbf{R}\mathbf{a} = [a_1, a_2]\begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}\begin{bmatrix} a_1 \\ a_2 \end{bmatrix} = a_1^2 + 2a_1a_2 + a_2^2 = (a_1 + a_2)^2 \geq 0 \quad \text{for all } a_1, a_2$$
(Positive definiteness is stronger than nonnegative definiteness: it requires the quadratic form to be strictly positive for every nonzero $\mathbf{a}$, which fails here for $a_1 = -a_2$.)
But the matrix $\mathbf{R}$ is singular because
$$\det(\mathbf{R}) = (1)(1) - (1)(1) = 0$$
Hence, it is possible for a matrix to be nonnegative definite and yet singular.
1.5

(a) Partition the correlation matrix as
$$\mathbf{R}_{M+1} = \begin{bmatrix} r(0) & \mathbf{r}^H \\ \mathbf{r} & \mathbf{R}_M \end{bmatrix} \qquad (1)$$

Let
$$\mathbf{R}_{M+1}^{-1} = \begin{bmatrix} a & \mathbf{b}^H \\ \mathbf{b} & \mathbf{C} \end{bmatrix} \qquad (2)$$

where the scalar $a$, the vector $\mathbf{b}$, and the matrix $\mathbf{C}$ are to be determined. Multiplying (1) by (2):
$$\mathbf{I}_{M+1} = \begin{bmatrix} r(0) & \mathbf{r}^H \\ \mathbf{r} & \mathbf{R}_M \end{bmatrix}\begin{bmatrix} a & \mathbf{b}^H \\ \mathbf{b} & \mathbf{C} \end{bmatrix}$$

where $\mathbf{I}_{M+1}$ is the identity matrix. Therefore,
$$r(0)a + \mathbf{r}^H\mathbf{b} = 1 \qquad (3)$$
$$\mathbf{r}a + \mathbf{R}_M\mathbf{b} = \mathbf{0} \qquad (4)$$
$$\mathbf{r}\mathbf{b}^H + \mathbf{R}_M\mathbf{C} = \mathbf{I}_M \qquad (5)$$
$$r(0)\mathbf{b}^H + \mathbf{r}^H\mathbf{C} = \mathbf{0}^T \qquad (6)$$

From Eq. (4):
$$\mathbf{b} = -\mathbf{R}_M^{-1}\mathbf{r}a \qquad (7)$$

Hence, from (3) and (7):
$$a = \frac{1}{r(0) - \mathbf{r}^H\mathbf{R}_M^{-1}\mathbf{r}} \qquad (8)$$

Correspondingly,
$$\mathbf{b} = \frac{-\mathbf{R}_M^{-1}\mathbf{r}}{r(0) - \mathbf{r}^H\mathbf{R}_M^{-1}\mathbf{r}} \qquad (9)$$

From (5):
$$\mathbf{C} = \mathbf{R}_M^{-1} - \mathbf{R}_M^{-1}\mathbf{r}\mathbf{b}^H = \mathbf{R}_M^{-1} + \frac{\mathbf{R}_M^{-1}\mathbf{r}\mathbf{r}^H\mathbf{R}_M^{-1}}{r(0) - \mathbf{r}^H\mathbf{R}_M^{-1}\mathbf{r}} \qquad (10)$$

As a check, the results of Eqs. (9) and (10) should satisfy Eq. (6):
$$r(0)\mathbf{b}^H + \mathbf{r}^H\mathbf{C} = \frac{-r(0)\mathbf{r}^H\mathbf{R}_M^{-1}}{r(0) - \mathbf{r}^H\mathbf{R}_M^{-1}\mathbf{r}} + \mathbf{r}^H\mathbf{R}_M^{-1} + \frac{\mathbf{r}^H\mathbf{R}_M^{-1}\mathbf{r}\,\mathbf{r}^H\mathbf{R}_M^{-1}}{r(0) - \mathbf{r}^H\mathbf{R}_M^{-1}\mathbf{r}} = \mathbf{0}^T$$

We have thus shown that
$$\mathbf{R}_{M+1}^{-1} = \begin{bmatrix} 0 & \mathbf{0}^T \\ \mathbf{0} & \mathbf{R}_M^{-1} \end{bmatrix} + a\begin{bmatrix} 1 \\ -\mathbf{R}_M^{-1}\mathbf{r} \end{bmatrix}\begin{bmatrix} 1 & -\mathbf{r}^H\mathbf{R}_M^{-1} \end{bmatrix}$$

where the scalar $a$ is defined by Eq. (8).
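A numerical spot-check of this partitioned-inverse identity (an addition; any Hermitian positive definite matrix will do, so a random one is used):

```python
import numpy as np

rng = np.random.default_rng(2)
M = 4
A = rng.standard_normal((M + 1, M + 1)) + 1j * rng.standard_normal((M + 1, M + 1))
R = A @ A.conj().T + (M + 1) * np.eye(M + 1)           # Hermitian positive definite
r0, r_vec, RM = R[0, 0].real, R[1:, 0], R[1:, 1:]      # the partition of Eq. (1)

RMinv = np.linalg.inv(RM)
a = 1.0 / (r0 - (r_vec.conj() @ RMinv @ r_vec).real)   # Eq. (8)
v = np.concatenate(([1.0], -RMinv @ r_vec))            # column vector [1; -RM^{-1} r]

Rinv = np.zeros((M + 1, M + 1), dtype=complex)
Rinv[1:, 1:] = RMinv
Rinv += a * np.outer(v, v.conj())                      # the partitioned-inverse formula
print(np.allclose(Rinv, np.linalg.inv(R)))             # True
```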

(b) Alternatively, partition the correlation matrix as
$$\mathbf{R}_{M+1} = \begin{bmatrix} \mathbf{R}_M & \mathbf{r}^{B*} \\ \mathbf{r}^{BT} & r(0) \end{bmatrix} \qquad (11)$$

where $\mathbf{r}^B$ denotes the backward-arranged version of the correlation vector $\mathbf{r}$. Let
$$\mathbf{R}_{M+1}^{-1} = \begin{bmatrix} \mathbf{D} & \mathbf{e} \\ \mathbf{e}^H & f \end{bmatrix} \qquad (12)$$

where $\mathbf{D}$, $\mathbf{e}$ and $f$ are to be determined. Multiplying (11) by (12):
$$\mathbf{I}_{M+1} = \begin{bmatrix} \mathbf{R}_M & \mathbf{r}^{B*} \\ \mathbf{r}^{BT} & r(0) \end{bmatrix}\begin{bmatrix} \mathbf{D} & \mathbf{e} \\ \mathbf{e}^H & f \end{bmatrix}$$

Therefore,
$$\mathbf{R}_M\mathbf{D} + \mathbf{r}^{B*}\mathbf{e}^H = \mathbf{I}_M \qquad (13)$$
$$\mathbf{R}_M\mathbf{e} + \mathbf{r}^{B*}f = \mathbf{0} \qquad (14)$$
$$\mathbf{r}^{BT}\mathbf{e} + r(0)f = 1 \qquad (15)$$
$$\mathbf{r}^{BT}\mathbf{D} + r(0)\mathbf{e}^H = \mathbf{0}^T \qquad (16)$$

From (14):
$$\mathbf{e} = -\mathbf{R}_M^{-1}\mathbf{r}^{B*}f \qquad (17)$$

Hence, from (15) and (17):
$$f = \frac{1}{r(0) - \mathbf{r}^{BT}\mathbf{R}_M^{-1}\mathbf{r}^{B*}} \qquad (18)$$

Correspondingly,
$$\mathbf{e} = \frac{-\mathbf{R}_M^{-1}\mathbf{r}^{B*}}{r(0) - \mathbf{r}^{BT}\mathbf{R}_M^{-1}\mathbf{r}^{B*}} \qquad (19)$$

From (13):
$$\mathbf{D} = \mathbf{R}_M^{-1} - \mathbf{R}_M^{-1}\mathbf{r}^{B*}\mathbf{e}^H = \mathbf{R}_M^{-1} + \frac{\mathbf{R}_M^{-1}\mathbf{r}^{B*}\mathbf{r}^{BT}\mathbf{R}_M^{-1}}{r(0) - \mathbf{r}^{BT}\mathbf{R}_M^{-1}\mathbf{r}^{B*}} \qquad (20)$$

As a check, the results of Eqs. (19) and (20) must satisfy Eq. (16). Thus
$$\mathbf{r}^{BT}\mathbf{D} + r(0)\mathbf{e}^H = \mathbf{r}^{BT}\mathbf{R}_M^{-1} + \frac{\mathbf{r}^{BT}\mathbf{R}_M^{-1}\mathbf{r}^{B*}\,\mathbf{r}^{BT}\mathbf{R}_M^{-1}}{r(0) - \mathbf{r}^{BT}\mathbf{R}_M^{-1}\mathbf{r}^{B*}} - \frac{r(0)\mathbf{r}^{BT}\mathbf{R}_M^{-1}}{r(0) - \mathbf{r}^{BT}\mathbf{R}_M^{-1}\mathbf{r}^{B*}} = \mathbf{0}^T$$

We have thus shown that
$$\mathbf{R}_{M+1}^{-1} = \begin{bmatrix} \mathbf{R}_M^{-1} & \mathbf{0} \\ \mathbf{0}^T & 0 \end{bmatrix} + f\begin{bmatrix} -\mathbf{R}_M^{-1}\mathbf{r}^{B*} \\ 1 \end{bmatrix}\begin{bmatrix} -\mathbf{r}^{BT}\mathbf{R}_M^{-1} & 1 \end{bmatrix}$$

where the scalar $f$ is defined by Eq. (18).


1.6

(a) We express the difference equation describing the first-order AR process $u(n)$ as
$$u(n) = v(n) + w_1 u(n-1)$$
where $w_1 = -a_1$. Solving this equation by repeated substitution, we get
$$\begin{aligned}
u(n) &= v(n) + w_1 v(n-1) + w_1^2 u(n-2) \\
&\;\;\vdots \\
&= v(n) + w_1 v(n-1) + w_1^2 v(n-2) + \cdots + w_1^{n-1} v(1) \qquad (1)
\end{aligned}$$
Here we have used the initial condition
$$u(0) = 0, \quad \text{or equivalently} \quad u(1) = v(1)$$
Taking the expected value of both sides of Eq. (1) and using
$$E[v(n)] = \mu \quad \text{for all } n,$$
we get the geometric series
$$E[u(n)] = \mu(1 + w_1 + w_1^2 + \cdots + w_1^{n-1}) = \begin{cases} \mu\,\dfrac{1 - w_1^n}{1 - w_1}, & w_1 \neq 1 \\[2mm] \mu n, & w_1 = 1 \end{cases}$$
This result shows that if $\mu \neq 0$, then $E[u(n)]$ is a function of time $n$. Accordingly, the AR process $u(n)$ is not stationary. If, however, the AR parameter satisfies the condition
$$|a_1| < 1 \quad \text{or} \quad |w_1| < 1,$$
then
$$E[u(n)] \to \frac{\mu}{1 - w_1} \quad \text{as } n \to \infty$$
Under this condition, we say that the AR process is asymptotically stationary to order one.

(b) When the white noise process $v(n)$ has zero mean, the AR process $u(n)$ will likewise have zero mean. Then, with $\mathrm{var}[v(n)] = \sigma_v^2$,
$$\mathrm{var}[u(n)] = E[u^2(n)] \qquad (2)$$
Substituting Eq. (1) into (2), and recognizing that for the white noise process
$$E[v(n)v(k)] = \begin{cases} \sigma_v^2, & n = k \\ 0, & n \neq k \end{cases} \qquad (3)$$
we get the geometric series
$$\mathrm{var}[u(n)] = \sigma_v^2(1 + w_1^2 + w_1^4 + \cdots + w_1^{2n-2}) = \begin{cases} \sigma_v^2\,\dfrac{1 - w_1^{2n}}{1 - w_1^2}, & |w_1| \neq 1 \\[2mm] \sigma_v^2 n, & |w_1| = 1 \end{cases}$$
When $|a_1| < 1$ or $|w_1| < 1$, then
$$\mathrm{var}[u(n)] \to \frac{\sigma_v^2}{1 - w_1^2} = \frac{\sigma_v^2}{1 - a_1^2} \quad \text{for large } n$$

(c) The autocorrelation function of the AR process $u(n)$ equals $E[u(n)u(n-k)]$. Substituting Eq. (1) into this formula and using Eq. (3), we get
$$E[u(n)u(n-k)] = \sigma_v^2(w_1^k + w_1^{k+2} + \cdots + w_1^{k+2n-2}) = \begin{cases} \sigma_v^2 w_1^k\,\dfrac{1 - w_1^{2n}}{1 - w_1^2}, & |w_1| \neq 1 \\[2mm] \sigma_v^2 n, & |w_1| = 1 \end{cases}$$
For $|a_1| < 1$ or $|w_1| < 1$, we may therefore express this autocorrelation function as
$$r(k) = E[u(n)u(n-k)] \approx \frac{\sigma_v^2 w_1^k}{1 - w_1^2} \quad \text{for large } n$$

Case 1: $0 < a_1 < 1$. In this case, $w_1 = -a_1$ is negative, and $r(k)$ varies with $k$ as follows:

[Figure: $r(k)$ versus lag $k$ for $k = -4, \ldots, +4$; successive samples alternate in sign, with magnitudes decaying geometrically in $|k|$]

Case 2: $-1 < a_1 < 0$. In this case, $w_1 = -a_1$ is positive, and $r(k)$ varies with $k$ as follows:

[Figure: $r(k)$ versus lag $k$ for $k = -4, \ldots, +4$; all samples are positive, decaying geometrically in $|k|$]
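A short simulation (an addition; $a_1 = -0.8$ and $\sigma_v^2 = 1$ are arbitrary test values) comparing sample statistics of an AR(1) process against the asymptotic formulas above:

```python
import numpy as np

rng = np.random.default_rng(3)
a1, var_v, N = -0.8, 1.0, 200_000        # arbitrary test values; w1 = -a1 = 0.8
w1 = -a1
u = np.zeros(N)                          # initial condition u(0) = 0
for n in range(1, N):
    u[n] = w1 * u[n - 1] + rng.standard_normal()   # u(n) = w1 u(n-1) + v(n)

u = u[1000:]                             # discard the start-up transient
r0_hat = np.mean(u * u)
r1_hat = np.mean(u[1:] * u[:-1])
print(r0_hat, var_v / (1 - w1**2))       # ~2.78 = sigma_v^2 / (1 - w1^2)
print(r1_hat, w1 * var_v / (1 - w1**2))  # ~2.22 = sigma_v^2 w1 / (1 - w1^2)
```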

1.7

(a) The second-order AR process $u(n)$ is described by the difference equation
$$u(n) = u(n-1) - 0.5u(n-2) + v(n)$$
Hence
$$w_1 = 1, \qquad w_2 = -0.5$$
and the AR parameters equal
$$a_1 = -1, \qquad a_2 = 0.5$$
Accordingly, we write the Yule-Walker equations as
$$\begin{bmatrix} r(0) & r(1) \\ r(1) & r(0) \end{bmatrix}\begin{bmatrix} 1 \\ -0.5 \end{bmatrix} = \begin{bmatrix} r(1) \\ r(2) \end{bmatrix}$$

(b) Writing the Yule-Walker equations in expanded form:
$$r(0) - 0.5r(1) = r(1)$$
$$r(1) - 0.5r(0) = r(2)$$
Solving the first relation for $r(1)$:
$$r(1) = \frac{2}{3}r(0) \qquad (1)$$
Solving the second relation for $r(2)$:
$$r(2) = \frac{1}{6}r(0) \qquad (2)$$

(c) Since the noise $v(n)$ has zero mean, so will the AR process $u(n)$. Hence,
$$\mathrm{var}[u(n)] = E[u^2(n)] = r(0)$$
We know that
$$\sigma_v^2 = \sum_{k=0}^{2} a_k r(k) = r(0) + a_1 r(1) + a_2 r(2), \qquad a_0 = 1 \qquad (3)$$
Substituting (1) and (2) into (3), and solving for $r(0)$, we get
$$r(0) = \frac{\sigma_v^2}{1 + \frac{2}{3}a_1 + \frac{1}{6}a_2} = \frac{\sigma_v^2}{5/12} = 1.2$$
where $\sigma_v^2 = 0.5$ is the noise variance given in the problem statement.
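As a cross-check (an addition), simulating this AR(2) process with noise variance $\sigma_v^2 = 0.5$ and estimating the first three autocorrelation lags should reproduce $r(0) = 1.2$, $r(1)/r(0) = 2/3$, and $r(2)/r(0) = 1/6$:

```python
import numpy as np

rng = np.random.default_rng(4)
N, var_v = 500_000, 0.5                       # noise variance 0.5 gives r(0) = 1.2
v = np.sqrt(var_v) * rng.standard_normal(N)
u = np.zeros(N)
for n in range(2, N):
    u[n] = u[n - 1] - 0.5 * u[n - 2] + v[n]   # u(n) = u(n-1) - 0.5 u(n-2) + v(n)

u = u[1000:]                                  # discard the start-up transient
r0 = np.mean(u * u)
r1 = np.mean(u[1:] * u[:-1])
r2 = np.mean(u[2:] * u[:-2])
print(r0, r1 / r0, r2 / r0)                   # expect about 1.2, 2/3 = 0.667, 1/6 = 0.167
```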
1.8

By definition,
$$P_0 = \text{average power of the AR process } u(n) = E[|u(n)|^2] = r(0) \qquad (1)$$
where $r(0)$ is the autocorrelation function of $u(n)$ for zero lag. We note the one-to-one correspondence
$$\left\{\frac{r(1)}{r(0)}, \frac{r(2)}{r(0)}, \ldots, \frac{r(M)}{r(0)}\right\} \;\rightleftharpoons\; \{a_1, a_2, \ldots, a_M\}$$
Equivalently, except for the scaling factor $r(0)$,
$$\{r(1), r(2), \ldots, r(M)\} \;\rightleftharpoons\; \{a_1, a_2, \ldots, a_M\} \qquad (2)$$
Combining Eqs. (1) and (2):
$$\{r(0), r(1), r(2), \ldots, r(M)\} \;\rightleftharpoons\; \{P_0, a_1, a_2, \ldots, a_M\} \qquad (3)$$
1.9

(a) The transfer function of the MA model of Fig. 2.3 is
$$H(z) = 1 + b_1^* z^{-1} + b_2^* z^{-2} + \cdots + b_K^* z^{-K}$$

(b) The transfer function of the ARMA model of Fig. 2.4 is
$$H(z) = \frac{b_0^* + b_1^* z^{-1} + b_2^* z^{-2} + \cdots + b_K^* z^{-K}}{1 + a_1^* z^{-1} + a_2^* z^{-2} + \cdots + a_M^* z^{-M}}$$

(c) The ARMA model reduces to an AR model when
$$b_1 = b_2 = \cdots = b_K = 0 \quad (b_0 \neq 0)$$
It reduces to an MA model when
$$a_1 = a_2 = \cdots = a_M = 0$$
1.10

We are given
$$x(n) = v(n) + 0.75v(n-1) + 0.25v(n-2)$$
Taking the $z$-transforms of both sides:
$$X(z) = (1 + 0.75z^{-1} + 0.25z^{-2})V(z)$$
Hence, the transfer function of the MA model is
$$\frac{X(z)}{V(z)} = 1 + 0.75z^{-1} + 0.25z^{-2} = \frac{1}{(1 + 0.75z^{-1} + 0.25z^{-2})^{-1}} \qquad (1)$$

Using long division, we may perform the following expansion of the denominator in Eq. (1):
$$\begin{aligned}
(1 + 0.75z^{-1} + 0.25z^{-2})^{-1}
&= 1 - \tfrac{3}{4}z^{-1} + \tfrac{5}{16}z^{-2} - \tfrac{3}{64}z^{-3} - \tfrac{11}{256}z^{-4} + \tfrac{45}{1024}z^{-5} - \tfrac{91}{4096}z^{-6} \\
&\quad + \tfrac{93}{16384}z^{-7} + \tfrac{85}{65536}z^{-8} - \tfrac{627}{262144}z^{-9} + \tfrac{1541}{1048576}z^{-10} - \cdots \\
&\approx 1 - 0.75z^{-1} + 0.3125z^{-2} - 0.0469z^{-3} - 0.043z^{-4} + 0.0439z^{-5} \\
&\quad - 0.0222z^{-6} + 0.0057z^{-7} + 0.0013z^{-8} - 0.0024z^{-9} + 0.0015z^{-10} \qquad (2)
\end{aligned}$$

(a) $M = 2$. Retaining terms in Eq. (2) up to $z^{-2}$, we may approximate the MA model with an AR model of order two as follows:
$$\frac{X(z)}{V(z)} \approx \frac{1}{1 - 0.75z^{-1} + 0.3125z^{-2}}$$

(b) $M = 5$. Retaining terms in Eq. (2) up to $z^{-5}$, we obtain the following approximation in the form of an AR model of order five:
$$\frac{X(z)}{V(z)} \approx \frac{1}{1 - 0.75z^{-1} + 0.3125z^{-2} - 0.0469z^{-3} - 0.043z^{-4} + 0.0439z^{-5}}$$

(c) $M = 10$. Finally, retaining terms in Eq. (2) up to $z^{-10}$, we obtain the following approximation in the form of an AR model of order ten:
$$\frac{X(z)}{V(z)} \approx \frac{1}{D(z)}$$
where $D(z)$ is the polynomial on the right-hand side of Eq. (2).
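The long-division coefficients in Eq. (2) satisfy the recursion $c_0 = 1$, $c_n = -\tfrac{3}{4}c_{n-1} - \tfrac{1}{4}c_{n-2}$ (with $c_{-1} = 0$), which follows from multiplying the expansion by $1 + 0.75z^{-1} + 0.25z^{-2}$. A few lines of exact rational arithmetic (an addition) reproduce the series:

```python
from fractions import Fraction

# (1 + 0.75 z^-1 + 0.25 z^-2)^(-1) = sum_n c_n z^-n, with
# c_0 = 1 and c_n = -(3/4) c_{n-1} - (1/4) c_{n-2}  (taking c_{-1} = 0)
c = [Fraction(1)]
for n in range(1, 11):
    c.append(-Fraction(3, 4) * c[-1] - (Fraction(1, 4) * c[-2] if n >= 2 else 0))

for n, cn in enumerate(c):
    print(n, cn, float(cn))   # 1, -3/4, 5/16, -3/64, -11/256, ..., 1541/1048576
```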
1.11

(a) The filter output is
$$x(n) = \mathbf{w}^H\mathbf{u}(n)$$
where $\mathbf{u}(n)$ is the tap-input vector. The average power of the filter output is therefore
$$E[|x(n)|^2] = E[\mathbf{w}^H\mathbf{u}(n)\mathbf{u}^H(n)\mathbf{w}] = \mathbf{w}^H E[\mathbf{u}(n)\mathbf{u}^H(n)]\mathbf{w} = \mathbf{w}^H\mathbf{R}\mathbf{w}$$

(b) If $u(n)$ is extracted from a zero-mean white noise of variance $\sigma^2$, we have
$$\mathbf{R} = \sigma^2\mathbf{I}$$
where $\mathbf{I}$ is the identity matrix. Hence,
$$E[|x(n)|^2] = \sigma^2\mathbf{w}^H\mathbf{w}$$
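A small Monte-Carlo illustration of part (b) (an addition; the tap weights and $\sigma^2 = 2$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)
w = np.array([0.5, -0.25, 0.125])        # arbitrary tap-weight vector
sigma2, N = 2.0, 500_000
U = np.sqrt(sigma2) * rng.standard_normal((N, w.size))  # white tap inputs: R = sigma^2 I
x = U @ w                                # x(n) = w^H u(n) (real-valued case)
print(np.mean(x**2), sigma2 * (w @ w))   # sample power vs sigma^2 w^H w = 0.65625
```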
1.12

(a) The process u(n) is a linear combination of Gaussian samples. Hence, u(n) is
Gaussian.
(b) From inverse filtering, we recognize that v(n) may also be expressed as a linear combination of samples of u(n). Hence, if u(n) is Gaussian, then v(n) is also Gaussian.

1.13

(a) From the Gaussian moment-factoring theorem:
$$E[(u_1 u_2^*)^k] = E[\underbrace{u_1\cdots u_1}_{k\ \text{factors}}\ \underbrace{u_2^*\cdots u_2^*}_{k\ \text{factors}}] = k!\,\underbrace{E[u_1 u_2^*]\cdots E[u_1 u_2^*]}_{k\ \text{factors}} = k!\,(E[u_1 u_2^*])^k \qquad (1)$$

(b) Putting $u_2 = u_1 = u$, Eq. (1) reduces to
$$E[|u|^{2k}] = k!\,(E[|u|^2])^k$$
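A Monte-Carlo check of part (b) for circularly symmetric complex Gaussian samples (an addition; $k = 2$ and unit variance are arbitrary choices):

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(6)
N, k = 2_000_000, 2
u = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)  # E|u|^2 = 1
lhs = np.mean(np.abs(u) ** (2 * k))
rhs = factorial(k) * np.mean(np.abs(u) ** 2) ** k
print(lhs, rhs)   # both close to k! = 2
```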

1.14

It is not permissible to interchange the order of expectation and limiting operations in Eq.
(1.113). The reason is that the expectation is a linear operation, whereas the limiting
operation with respect to the number of samples N is nonlinear.

1.15

The filter output is
$$y(n) = \sum_i h(i)u(n-i)$$
Similarly, we may write
$$y(m) = \sum_k h(k)u(m-k)$$
Hence,
$$\begin{aligned}
r_y(n,m) &= E[y(n)y^*(m)] \\
&= E\Big[\sum_i h(i)u(n-i)\sum_k h^*(k)u^*(m-k)\Big] \\
&= \sum_i\sum_k h(i)h^*(k)\,E[u(n-i)u^*(m-k)] \\
&= \sum_i\sum_k h(i)h^*(k)\,r_u(n-i, m-k)
\end{aligned}$$

1.16

The mean-square value of the filter output in response to a white noise input is
$$P_o = \frac{2\sigma^2\Delta\omega}{\pi}$$
The value $P_o$ is linearly proportional to the filter bandwidth $\Delta\omega$. This relation holds irrespective of how small $\Delta\omega$ is compared to the mid-band frequency of the filter.
1.17

(a) The variance of the filter output is
$$\sigma_y^2 = \frac{2\sigma^2\Delta\omega}{\pi}$$
We are given
$$\sigma^2 = 0.1\ \text{volt}^2$$
$$\Delta\omega = 2\pi \times 1\ \text{radians/sec}$$
Hence,
$$\sigma_y^2 = \frac{2 \times 0.1 \times 2\pi}{\pi} = 0.4\ \text{volt}^2$$

(b) The pdf of the filter output $y$ is
$$f(y) = \frac{1}{\sqrt{2\pi}\,\sigma_y}\,e^{-y^2/2\sigma_y^2} = \frac{1}{0.63\sqrt{2\pi}}\,e^{-y^2/0.8}$$

1.18

(a) We are given
$$U_k = \sum_{n=0}^{N-1} u(n)\exp(-jn\omega_k), \qquad k = 0, 1, \ldots, N-1$$
where $u(n)$ is real-valued and
$$\omega_k = \frac{2\pi}{N}k$$
Hence,
$$\begin{aligned}
E[U_k U_l^*] &= E\Big[\sum_{n=0}^{N-1}\sum_{m=0}^{N-1} u(n)u(m)\exp(-jn\omega_k + jm\omega_l)\Big] \\
&= \sum_{n=0}^{N-1}\sum_{m=0}^{N-1}\exp(-jn\omega_k + jm\omega_l)\,E[u(n)u(m)] \\
&= \sum_{n=0}^{N-1}\sum_{m=0}^{N-1}\exp(-jn\omega_k + jm\omega_l)\,r(n-m) \\
&= \sum_{m=0}^{N-1}\exp(jm\omega_l)\sum_{n=0}^{N-1} r(n-m)\exp(-jn\omega_k) \qquad (1)
\end{aligned}$$

By definition, we also have
$$\sum_{n=0}^{N-1} r(n)\exp(-jn\omega_k) = S_k$$
Moreover, since $r(n)$ is periodic with period $N$, we may invoke the time-shifting property of the discrete Fourier transform to write
$$\sum_{n=0}^{N-1} r(n-m)\exp(-jn\omega_k) = \exp(-jm\omega_k)S_k$$
Thus, recognizing that $\omega_k = (2\pi/N)k$, Eq. (1) reduces to
$$E[U_k U_l^*] = S_k\sum_{m=0}^{N-1}\exp(jm(\omega_l - \omega_k)) = \begin{cases} NS_k, & l = k \\ 0, & \text{otherwise} \end{cases}$$

(b) Part (a) shows that the complex spectral samples Uk are uncorrelated. If they are
Gaussian, then they will also be statistically independent. Hence,

$$f_U(U_0, U_1, \ldots, U_{N-1}) = \frac{1}{(2\pi)^N\det(\boldsymbol{\Lambda})}\exp\left(-\frac{1}{2}\mathbf{U}^H\boldsymbol{\Lambda}^{-1}\mathbf{U}\right)$$
where
$$\mathbf{U} = [U_0, U_1, \ldots, U_{N-1}]^T$$
$$\boldsymbol{\Lambda} = \frac{1}{2}E[\mathbf{U}\mathbf{U}^H] = \frac{N}{2}\,\mathrm{diag}(S_0, S_1, \ldots, S_{N-1})$$
$$\det(\boldsymbol{\Lambda}) = \left(\frac{N}{2}\right)^N\prod_{k=0}^{N-1} S_k$$
Therefore,
$$f_U(U_0, U_1, \ldots, U_{N-1}) = \frac{1}{(\pi N)^N\prod_{k=0}^{N-1} S_k}\exp\left(-\sum_{k=0}^{N-1}\frac{|U_k|^2}{N S_k}\right) \propto \exp\left(-\sum_{k=0}^{N-1}\left[\frac{|U_k|^2}{N S_k} + \ln S_k\right]\right)$$
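A numerical check of part (a) (an addition): a process synthesized by circular convolution has an autocorrelation that is exactly periodic with period $N$, and its DFT coefficients come out uncorrelated with $E[|U_k|^2] = NS_k$:

```python
import numpy as np

rng = np.random.default_rng(7)
N, trials = 8, 100_000
h = np.array([1.0, 0.6, 0.3, 0.1])       # arbitrary smoothing kernel
H = np.fft.fft(h, N)
S = np.abs(H) ** 2                        # S_k: DFT of the circular autocorrelation r(n)

V = np.fft.fft(rng.standard_normal((trials, N)), axis=1)  # spectra of unit white noise
U = V * H                                 # DFT of u = (circular convolution of v with h)
G = (U.T @ U.conj()) / trials             # estimate of E[U_k U_l*]
print(np.round(np.abs(G), 1))             # ~ diag(N S_k): off-diagonal entries vanish
print(np.round(N * S, 1))
```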

1.19

The mean-square value of the increment process $dz(\nu)$ is
$$E[|dz(\nu)|^2] = S(\nu)\,d\nu$$
Since the power spectral density $S(\nu)$ is measured in watts per hertz and the frequency increment $d\nu$ in hertz, $E[|dz(\nu)|^2]$ is measured in watts.
1.20

The third-order cumulant of a zero-mean process $u(n)$ is
$$c_3(\tau_1, \tau_2) = E[u(n)u(n+\tau_1)u(n+\tau_2)]$$
which is the third-order moment. All odd-order moments of a zero-mean Gaussian process are known to be zero; hence,
$$c_3(\tau_1, \tau_2) = 0$$
The fourth-order cumulant is
$$\begin{aligned}
c_4(\tau_1, \tau_2, \tau_3) &= E[u(n)u(n+\tau_1)u(n+\tau_2)u(n+\tau_3)] \\
&\quad - E[u(n)u(n+\tau_1)]E[u(n+\tau_2)u(n+\tau_3)] \\
&\quad - E[u(n)u(n+\tau_2)]E[u(n+\tau_1)u(n+\tau_3)] \\
&\quad - E[u(n)u(n+\tau_3)]E[u(n+\tau_1)u(n+\tau_2)]
\end{aligned}$$
For the special case $\tau_1 = \tau_2 = \tau_3 = 0$, the fourth-order moment of a zero-mean Gaussian process of variance $\sigma^2$ is $3\sigma^4$ and its second-order moment is $\sigma^2$; hence the fourth-order cumulant is $3\sigma^4 - 3\sigma^4 = 0$. By the Gaussian moment-factoring theorem, the same cancellation occurs for arbitrary lags. Indeed, all cumulants higher than order two are zero for a Gaussian process.
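A quick empirical check (an addition): the sample fourth-order cumulant at zero lags, $E[u^4] - 3(E[u^2])^2$, vanishes for Gaussian data but not for, say, uniform data:

```python
import numpy as np

rng = np.random.default_rng(8)
N = 2_000_000
g = rng.standard_normal(N)           # zero-mean Gaussian samples
w = rng.uniform(-1, 1, N)            # zero-mean non-Gaussian samples for comparison

def c4_zero_lag(x):
    """Sample fourth-order cumulant at tau1 = tau2 = tau3 = 0."""
    return np.mean(x**4) - 3 * np.mean(x**2) ** 2

print(c4_zero_lag(g))   # ~0 for the Gaussian process
print(c4_zero_lag(w))   # ~-0.133 (= -2/15) for the uniform process
```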
1.21

The trispectrum is
$$C_4(\omega_1, \omega_2, \omega_3) = \sum_{\tau_1=-\infty}^{\infty}\sum_{\tau_2=-\infty}^{\infty}\sum_{\tau_3=-\infty}^{\infty} c_4(\tau_1, \tau_2, \tau_3)\,e^{-j(\omega_1\tau_1 + \omega_2\tau_2 + \omega_3\tau_3)}$$
Let the process be passed through a three-dimensional band-pass filter centered on $\omega_1$, $\omega_2$, and $\omega_3$. We assume that the bandwidth (along each dimension) is small compared to the respective center frequency. The average power of the filter output is then proportional to the trispectrum $C_4(\omega_1, \omega_2, \omega_3)$.
1.22

(a) Starting with the formula
$$c_k(\tau_1, \tau_2, \ldots, \tau_{k-1}) = \gamma_k\sum_{i=-\infty}^{\infty} h_i h_{i+\tau_1}\cdots h_{i+\tau_{k-1}}$$
the third-order cumulant of the filter output is
$$c_3(\tau_1, \tau_2) = \gamma_3\sum_{i=-\infty}^{\infty} h_i h_{i+\tau_1} h_{i+\tau_2}$$
where $\gamma_3$ is the third-order cumulant of the filter input. The bispectrum is
$$C_3(\omega_1, \omega_2) = \sum_{\tau_1=-\infty}^{\infty}\sum_{\tau_2=-\infty}^{\infty} c_3(\tau_1, \tau_2)\,e^{-j(\omega_1\tau_1 + \omega_2\tau_2)} = \gamma_3\sum_{i=-\infty}^{\infty}\sum_{\tau_1=-\infty}^{\infty}\sum_{\tau_2=-\infty}^{\infty} h_i h_{i+\tau_1} h_{i+\tau_2}\,e^{-j(\omega_1\tau_1 + \omega_2\tau_2)}$$
Hence,
$$C_3(\omega_1, \omega_2) = \gamma_3\,H(e^{j\omega_1})H(e^{j\omega_2})H^*(e^{j(\omega_1+\omega_2)})$$
(b) From this formula, we immediately deduce that
$$\arg[C_3(\omega_1, \omega_2)] = \arg[H(e^{j\omega_1})] + \arg[H(e^{j\omega_2})] - \arg[H(e^{j(\omega_1+\omega_2)})]$$
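A direct numerical check of this factored form (an addition; the FIR taps, the frequencies, and $\gamma_3$ are arbitrary): the double sum over lags is compared against $\gamma_3 H(e^{j\omega_1})H(e^{j\omega_2})H^*(e^{j(\omega_1+\omega_2)})$:

```python
import numpy as np

h = np.array([1.0, -0.5, 0.25])          # arbitrary real FIR impulse response h_0..h_2
g3, w1, w2 = 2.0, 0.7, 1.1               # arbitrary input cumulant and frequencies
L = len(h)

def hh(i):                                # h_i with zero padding outside the support
    return h[i] if 0 <= i < L else 0.0

# Double sum over lags of c3(t1, t2) e^{-j(w1 t1 + w2 t2)}
C3 = 0.0
for t1 in range(-L + 1, L):
    for t2 in range(-L + 1, L):
        c3 = g3 * sum(hh(i) * hh(i + t1) * hh(i + t2) for i in range(L))
        C3 += c3 * np.exp(-1j * (w1 * t1 + w2 * t2))

H = lambda w: np.sum(h * np.exp(-1j * w * np.arange(L)))   # H(e^{jw})
print(C3, g3 * H(w1) * H(w2) * np.conj(H(w1 + w2)))        # the two values agree
```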

1.23

The output of a filter of impulse response $h_i$ due to an input $u(i)$ is given by the convolution sum
$$y(n) = \sum_i h_i u(n-i)$$
The third-order cumulant of the filter output is
$$\begin{aligned}
c_3(\tau_1, \tau_2) &= E[y(n)y(n+\tau_1)y(n+\tau_2)] \\
&= E\Big[\sum_i h_i u(n-i)\sum_k h_k u(n+\tau_1-k)\sum_l h_l u(n+\tau_2-l)\Big] \\
&= E\Big[\sum_i h_i u(n-i)\sum_k h_{k+\tau_1} u(n-k)\sum_l h_{l+\tau_2} u(n-l)\Big] \\
&= \sum_i\sum_k\sum_l h_i h_{k+\tau_1} h_{l+\tau_2}\,E[u(n-i)u(n-k)u(n-l)]
\end{aligned}$$
For an input sequence of independent and identically distributed random variables, we note that
$$E[u(n-i)u(n-k)u(n-l)] = \begin{cases} \gamma_3, & i = k = l \\ 0, & \text{otherwise} \end{cases}$$
Hence,
$$c_3(\tau_1, \tau_2) = \gamma_3\sum_{i=-\infty}^{\infty} h_i h_{i+\tau_1} h_{i+\tau_2}$$
In general, we may thus write
$$c_k(\tau_1, \tau_2, \ldots, \tau_{k-1}) = \gamma_k\sum_{i=-\infty}^{\infty} h_i h_{i+\tau_1}\cdots h_{i+\tau_{k-1}}$$

1.24

By definition,
$$r^{(\alpha)}(k) = \frac{1}{N}\sum_{n=0}^{N-1} E[u(n)u^*(n-k)e^{-j2\pi\alpha n}]\,e^{j\pi\alpha k}$$
Hence,
$$r^{(\alpha)}(-k) = \frac{1}{N}\sum_{n=0}^{N-1} E[u(n)u^*(n+k)e^{-j2\pi\alpha n}]\,e^{-j\pi\alpha k}$$
and
$$r^{(\alpha)*}(k) = \frac{1}{N}\sum_{n=0}^{N-1} E[u^*(n)u(n-k)e^{j2\pi\alpha n}]\,e^{-j\pi\alpha k}$$
We are told that the process $u(n)$ is cyclostationary, which means that
$$E[u(n)u^*(n+k)e^{-j2\pi\alpha n}] = E[u^*(n)u(n-k)e^{j2\pi\alpha n}]$$
It follows therefore that
$$r^{(\alpha)}(-k) = r^{(\alpha)*}(k)$$

1.25

For $\alpha = 0$, the input to the time-average cross-correlator reduces to the squared amplitude of the output of a narrow-band filter with mid-band frequency $\omega$. Correspondingly, the time-average cross-correlator reduces to an average power meter. Thus, for $\alpha = 0$, the instrumentation of Fig. 1.16 reduces to that of Fig. 1.13.
