Solutions Manual

A First Course in Digital Communications

Cambridge University Press

Ha H. Nguyen
Department of Electrical & Computer Engineering
University of Saskatchewan
Saskatoon, SK, CANADA S7N 5A9

Ed Shwedyk
Department of Electrical & Computer Engineering
University of Manitoba
Winnipeg, MB, CANADA R3T 5V6

August 5, 2011
Preface

This Solutions Manual was last updated in August, 2011. We appreciate receiving comments and corrections you might have on the current version. Please send emails to ha.nguyen@usask.ca or shwedyk@ee.umanitoba.ca.
Chapter 2

Deterministic Signal Characterization and Analysis

P2.1 First let us establish (or review) the relationships for real signals.

(a) Given $\{A_k, B_k\}$, i.e., $s(t) = \sum_{k=0}^{\infty}[A_k\cos(2\pi kf_r t) + B_k\sin(2\pi kf_r t)]$, where $A_k, B_k$ are real. Then $C_k = \sqrt{A_k^2 + B_k^2}$, $\theta_k = \tan^{-1}\left(\frac{B_k}{A_k}\right)$, where we write $s(t)$ as
$$s(t) = \sum_{k=0}^{\infty} C_k\cos(2\pi kf_r t - \theta_k).$$
If we choose to write it as
$$s(t) = \sum_{k=0}^{\infty} C_k\cos(2\pi kf_r t + \theta_k),$$
then the phase becomes $\theta_k = -\tan^{-1}\left(\frac{B_k}{A_k}\right)$. Further
$$D_k = \frac{A_k - jB_k}{2}, \qquad D_{-k} = D_k^*, \ k = 1, 2, 3, \ldots, \qquad D_0 = A_0 \ (B_0 = 0 \text{ always}).$$

(b) Given $\{C_k, \theta_k\}$, then $A_k = C_k\cos\theta_k$, $B_k = C_k\sin\theta_k$. They are obtained from:
$$s(t) = \sum_{k=0}^{\infty} C_k\cos(2\pi kf_r t - \theta_k) = \sum_{k=0}^{\infty}\left[C_k\cos\theta_k\cos(2\pi kf_r t) + C_k\sin\theta_k\sin(2\pi kf_r t)\right].$$
Now $s(t)$ can be written as:
$$s(t) = \sum_{k=0}^{\infty} C_k\left\{\frac{e^{j[2\pi kf_r t - \theta_k]} + e^{-j[2\pi kf_r t - \theta_k]}}{2}\right\} = C_0\underbrace{\frac{e^{j\theta_0} + e^{-j\theta_0}}{2}}_{\cos\theta_0} + \sum_{k=1}^{\infty}\left[\frac{C_k}{2}e^{-j\theta_k}e^{j2\pi kf_r t} + \frac{C_k}{2}e^{j\theta_k}e^{-j2\pi kf_r t}\right]$$
Therefore
$$D_0 = C_0\cos\theta_0, \text{ where } \theta_0 \text{ is either } 0 \text{ or } \pi; \qquad D_k = \frac{C_k}{2}e^{-j\theta_k}, \ k = 1, 2, 3, \ldots$$
What about negative frequencies? Write the third term as $\frac{C_k}{2}e^{j\theta_k}e^{j[2\pi k(-f_r)t]}$, where $(-f_r)$ is interpreted as negative frequency. Therefore $D_{-k} = \frac{C_k}{2}e^{+j\theta_k}$, i.e., $D_{-k} = D_k^*$.

(c) Given $\{D_k\}$, then $A_k = 2\Re\{D_k\}$ and $B_k = -2\Im\{D_k\}$. Also $C_k = 2|D_k|$, $\theta_k = -\angle D_k$, where $D_k$ is in general complex and written as $D_k = |D_k|e^{j\angle D_k}$.

Remark: Even though, given any one set of the coefficients, we can find the other two sets, we can only determine $\{A_k, B_k\}$ or $\{D_k\}$ directly from the signal $s(t)$, i.e., there is no method to determine $\{C_k, \theta_k\}$ directly.

Consider now that $s(t)$ is a complex, periodic time signal with period $T = \frac{1}{f_r}$, i.e., $s(t) = s_R(t) + js_I(t)$, where the real and imaginary components, $s_R(t), s_I(t)$, are each periodic with period $T = \frac{1}{f_r}$. Again we represent $s(t)$ in terms of the orthogonal basis set $\{\cos(2\pi kf_r t), \sin(2\pi kf_r t)\}_{k=1,2,\ldots}$. That is,
$$s(t) = \sum_{k=0}^{\infty}\left[A_k\cos(2\pi kf_r t) + B_k\sin(2\pi kf_r t)\right] \tag{2.1}$$
where $A_k = \frac{2}{T}\int_{t\in T} s(t)\cos(2\pi kf_r t)dt$ and $B_k = \frac{2}{T}\int_{t\in T} s(t)\sin(2\pi kf_r t)dt$ are now complex numbers.

One approach to finding $A_k, B_k$ is to express $s_R(t), s_I(t)$ in their own individual Fourier series, and then combine to determine $A_k, B_k$. That is,
$$s_R(t) = \sum_{k=0}^{\infty}\left[A_k^{(R)}\cos(2\pi kf_r t) + B_k^{(R)}\sin(2\pi kf_r t)\right]$$
$$s_I(t) = \sum_{k=0}^{\infty}\left[A_k^{(I)}\cos(2\pi kf_r t) + B_k^{(I)}\sin(2\pi kf_r t)\right]$$
$$s(t) = \sum_{k=0}^{\infty}\Big[\underbrace{\big(A_k^{(R)} + jA_k^{(I)}\big)}_{=A_k}\cos(2\pi kf_r t) + \underbrace{\big(B_k^{(R)} + jB_k^{(I)}\big)}_{=B_k}\sin(2\pi kf_r t)\Big]$$

(d) So now suppose we are given $\{A_k, B_k\}$. How do we determine $\{C_k, \theta_k\}$ and $D_k$? Again, (2.1) can be written as
$$s(t) = \sum_{k=0}^{\infty}\left[\frac{A_k - jB_k}{2}e^{j2\pi kf_r t} + \frac{A_k + jB_k}{2}e^{-j2\pi kf_r t}\right]$$
As before, define $D_k \triangleq \frac{A_k - jB_k}{2}$ and note that the term $\sum_{k=0}^{\infty}\frac{A_k + jB_k}{2}e^{-j2\pi kf_r t}$ can be written as
$$\sum_{k=0}^{\infty}\frac{A_k + jB_k}{2}e^{-j2\pi kf_r t} = \sum_{k=-\infty}^{0}\frac{A_{-k} + jB_{-k}}{2}e^{j2\pi kf_r t} = \sum_{k=-\infty}^{0} D_ke^{j2\pi kf_r t}$$
Therefore $s(t) = \sum_{k=-\infty}^{\infty} D_ke^{j2\pi kf_r t}$, where $D_k = \frac{A_k - jB_k}{2}$, $D_{-k} = \frac{A_k + jB_k}{2}$, $k = 1, 2, \ldots$, and $D_0 = A_0$ (note that $B_0$ is not necessarily equal to zero now, but it multiplies $\sin(0) = 0$).

Equation (2.1) can also be written as:
$$s(t) = \sum_{k=0}^{\infty}\sqrt{A_k^2 + B_k^2}\,\cos\left(2\pi kf_r t - \tan^{-1}\frac{B_k}{A_k}\right)$$
from which it follows that $C_k \triangleq \sqrt{A_k^2 + B_k^2}$ and $\theta_k = \tan^{-1}\frac{B_k}{A_k}$.

Remarks:
(i) $D_{-k} \neq D_k^*$ if $s(t)$ is complex.
(ii) The amplitude, $C_k$, and $\theta_k$ are in general complex quantities and as such lose their usual physical meanings.
(iii) A complex signal $s(t)$ would arise not from physical phenomena but from our analysis procedure. This occurs, for instance, when we go to an equivalent baseband model.

Consider now that $\left\{D_k \triangleq \frac{A_k - jB_k}{2}\right\}$ are given. Then since $D_{-k} = \frac{A_k + jB_k}{2}$, we have, upon adding and subtracting $D_k, D_{-k}$: $A_k = D_k + D_{-k}$; $B_k = j(D_k - D_{-k})$.

$\{C_k, \theta_k\}$ can be determined from $\{A_k, B_k\}$, which in turn can be determined from $D_k$ as above.
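The conversions among the three coefficient sets can be checked numerically. The following is a minimal Python sketch (the function names are ours, not from the text), for real signals with the $s(t) = \sum C_k\cos(2\pi kf_r t - \theta_k)$ convention:

```python
import cmath
import math

def ab_to_d(A, B):
    """D_k = (A_k - j B_k) / 2, k >= 1, for a real signal."""
    return (A - 1j * B) / 2

def ab_to_c_theta(A, B):
    """C_k = sqrt(A_k^2 + B_k^2), theta_k = tan^-1(B_k / A_k)."""
    return math.hypot(A, B), math.atan2(B, A)

def d_to_c_theta(D):
    """C_k = 2|D_k|, theta_k = -angle(D_k)."""
    return 2 * abs(D), -cmath.phase(D)

# Example: A_1 = 3, B_1 = 4.  Both routes to (C_1, theta_1) must agree.
A, B = 3.0, 4.0
C_direct, th_direct = ab_to_c_theta(A, B)
C_via_d, th_via_d = d_to_c_theta(ab_to_d(A, B))
```

Either path gives $C_1 = 5$ and the same $\theta_1$, as the identities above require.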
P2.2 (a) $A_{-k} = \frac{2}{T}\int_{t\in T} s(t)\cos(2\pi(-k)f_r t)dt = \frac{2}{T}\int_{t\in T} s(t)\cos(2\pi kf_r t)dt = A_k$, i.e., even.

(b) $B_{-k} = \frac{2}{T}\int_{t\in T} s(t)\sin(2\pi(-k)f_r t)dt = -\frac{2}{T}\int_{t\in T} s(t)\sin(2\pi kf_r t)dt = -B_k$, i.e., odd.

(c) $C_k = \sqrt{A_k^2 + B_k^2}$; $C_{-k} = \sqrt{(A_{-k})^2 + (B_{-k})^2} = \sqrt{(A_k)^2 + (-B_k)^2} = C_k$, i.e., even (note that squaring an odd real function always gives an even function).

(d) $\theta_k = \tan^{-1}\frac{B_k}{A_k}$; $\theta_{-k} = \tan^{-1}\frac{B_{-k}}{A_{-k}} = \tan^{-1}\frac{-B_k}{A_k} = -\tan^{-1}\frac{B_k}{A_k} = -\theta_k$, i.e., odd (note that $\tan^{-1}(\cdot)$ is an odd function).

(e) $D_k = \frac{A_k - jB_k}{2}$; $D_{-k} = \frac{A_{-k} - jB_{-k}}{2} = \frac{A_k + jB_k}{2} = D_k^*$.

Remarks: Properties (a), (b) are true for complex signals as well. But not (c), (d), (e), at least not always.
P2.3 $f_r = \frac{1}{T} = \frac{1}{8}$ (Hz).
$A_1 = 2\Re(D_1) = 0$; $B_1 = -2\Im(D_1) = 2$.
$A_5 = 2\Re(D_5) = 4$; $B_5 = -2\Im(D_5) = 0$.
$C_1 = 2|D_1| = 2$; $\theta_1 = -\angle D_1 = \frac{\pi}{2}$.
$C_5 = 2|D_5| = 4$; $\theta_5 = 0$.
$$s(t) = 2\sin\left(2\pi\tfrac{1}{8}t\right) + 4\cos\left(2\pi\tfrac{5}{8}t\right) = 2\cos\left(2\pi\tfrac{1}{8}t - \tfrac{\pi}{2}\right) + 4\cos\left(2\pi\tfrac{5}{8}t\right).$$
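As a quick numerical cross-check (a stdlib-only Python sketch in place of the Matlab the manual elsewhere suggests), the coefficients $D_1 = -j$ and $D_5 = 2$ can be recovered from $s(t)$ by direct integration over one period:

```python
import math
import cmath

T = 8.0  # period in seconds, so f_r = 1/T = 1/8 Hz

def s(t):
    return 2*math.sin(2*math.pi*t/8) + 4*math.cos(2*math.pi*5*t/8)

def D(k, N=8192):
    """D_k = (1/T) * integral over one period of s(t) exp(-j 2 pi k t / T) dt,
    approximated with a midpoint Riemann sum (exact here up to rounding)."""
    dt = T / N
    total = 0j
    for n in range(N):
        t = (n + 0.5) * dt
        total += s(t) * cmath.exp(-2j*math.pi*k*t/T) * dt
    return total / T
```

`D(1)` returns approximately $-j$ (so $C_1 = 2$, $\theta_1 = \pi/2$) and `D(5)` approximately $2$; all other harmonics vanish.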
P2.4 The fundamental frequency, $f_r$, is $f_r = f_c$ (Hz).

(a) Write $s(t) = V\cos(2\pi f_c t + \theta)$ as $s(t) = \underbrace{\tfrac{V}{2}e^{j\theta}}_{D_1}\underbrace{e^{j2\pi f_c t}}_{k=1} + \underbrace{\tfrac{V}{2}e^{-j\theta}}_{D_{-1}}\underbrace{e^{-j2\pi f_c t}}_{k=-1}$.

(i) $D_1 = \frac{V}{2}e^{j\theta}$, $D_{-1} = \frac{V}{2}e^{-j\theta} = D_1^*$.
(ii) $\theta = 0 \Rightarrow D_1 = \frac{V}{2} = D_{-1}$.
(iii) $\theta = \pi \Rightarrow D_1 = -\frac{V}{2} = D_{-1}$.
(iv) $\theta = -\pi \Rightarrow$ same as (iii); not much difference between $+\pi$ and $-\pi$.
(v) $\theta = \frac{\pi}{2} \Rightarrow D_1 = j\frac{V}{2}$, $D_{-1} = -j\frac{V}{2}$.
(vi) $\theta = -\frac{\pi}{2} \Rightarrow D_1 = -j\frac{V}{2}$, $D_{-1} = j\frac{V}{2}$.

(b) The magnitude spectrum is the same for all the cases, as expected, since only the phase of the sinusoid changes: two lines of height $\frac{V}{2}$ at $f = \pm f_c$.

[Figure 2.1: Magnitude spectrum $|\cdot|$ (volts) — lines of height $\frac{V}{2}$ at $f = \pm f_c$.]

The phase spectra of the cosines change as shown in Fig. 2.2. For (iii) and (iv), which plot(s) do you prefer and why?
Note: In all cases, the magnitude and phase spectra are even and odd functions, respectively, as they should be.

(c) $s(t) = V\cos\theta\cos(2\pi f_c t) - V\sin\theta\sin(2\pi f_c t)$
(i) $A_1 = V\cos\theta$, $B_1 = -V\sin\theta$.
(ii) $A_1 = V$, $B_1 = 0$.
(iii) $A_1 = -V$, $B_1 = 0$.
(iv) Same as (iii).
(v) $A_1 = 0$, $B_1 = -V$.
(vi) $A_1 = 0$, $B_1 = V$.
For $\{C_k, \theta_k\}$ we have $C_1 = V$ always and $\theta_1 = +\theta$.
P2.5 $s(t) = 2\sin(2\pi t - 3) + \sin(6\pi t)$.

(a) The frequency of the first sinusoid is 1 Hz and that of the second one is $\frac{6\pi}{2\pi} = 3$ Hz. The fundamental frequency is $f_r = \text{GCD}\{1, 3\} = 1$ Hz. Rewrite $\sin(x)$ as $\cos(x - \frac{\pi}{2})$. Then
$$s(t) = \underbrace{2}_{C_1}\cos\Big(2\pi t - \underbrace{\big(3 + \tfrac{\pi}{2}\big)}_{\theta_1}\Big) + \underbrace{1}_{C_3}\cos\Big(2\pi(3)t - \underbrace{\tfrac{\pi}{2}}_{\theta_3}\Big).$$
Use the relationships $|D_k| = \frac{C_k}{2}$, $\angle D_k = -\theta_k$, $D_{-k} = D_k^*$ to obtain:
$$D_1 = 1e^{-j(3+\frac{\pi}{2})}, \quad D_3 = \tfrac{1}{2}e^{-j\frac{\pi}{2}}; \qquad D_{-1} = 1e^{j(3+\frac{\pi}{2})}, \quad D_{-3} = \tfrac{1}{2}e^{j\frac{\pi}{2}}.$$

(b) See Fig. 2.3.

[Figure 2.2: Phase spectra (radians vs. $f$, in Hz) for cases (i)–(vi) of P2.4, each an odd function of $f$ with values only at $\pm f_c$: (i) $\pm\theta$; (ii) zero; (iii), (iv) $\pm\pi$, which can be drawn in either of two equivalent ways; (v), (vi) $\mp\frac{\pi}{2}$ (phase spectra of $\mp\sin(\cdot)$).]
P2.6 Classification of signals is as follows:


[Figure 2.3 (for P2.5): Magnitude spectrum $|D_k|$ (volts) — lines of height 1 at $f = \pm 1$ Hz and $\frac{1}{2}$ at $f = \pm 3$ Hz. Phase spectrum $\angle D_k$ (radians) — $-(3 + \frac{\pi}{2})$ at $f = 1$, $+(3 + \frac{\pi}{2})$ at $f = -1$, $-\frac{\pi}{2}$ at $f = 3$, $+\frac{\pi}{2}$ at $f = -3$.]

(a)
(b)
(c)
(d)
It has odd (but not halfwave) symmetry.
Sh
Neither even, nor odd but show a halfwave symmetry.
Even symmetry, also halfwave symmetry, therefore even quarterwave symmetry.
(i) odd symmetry, also halfwave odd quarterwave symmetry.
&
(ii) neither even, nor odd, nor halfwave.
(iii) even but not halfwave.
(e) Neither even, nor odd, nor halfwave.
(f) Any even signal still remains even since s0 (t) = DC + s(t) and s0 (t) = DC + s(t) =
DC + s(t) = s0 (t).
en

Any odd signal is no longer odd. Note that a DC is an even signal.

Any halfwave symmetric signal is no longer halfwave symmetric. Again a DC signal is


not halfwave symmetric and this component destroys the halfwave symmetry. Note that
uy

any signal with a nonzero DC component cannot be halfwave symmetric.

Regarding the values of the Fourier series coefficients, all that changes is the D0 , C0 or
A0 value, i.e., the DC component value. All other coefficients are unchanged.
Ng

(g) In general the classification changes completely. But for certain specific time shift the
classification may remain the same. To see the effect of a time shift on the coefficients
consider first
Z Z
(shifted) 1 j2kfr t =t j2kfr 1 j2kfr
Dk = s(t )e dt = e s()e d
T tT T T
| {z }
Dk

(shifted)
Dk = ej2kfr Dk
(shifted) (shifted)
(shifted) Ak jBk Ak jBk
Dk = = [cos (2kfr ) j sin (2kfr )]
2 2

Solutions Page 26
Nguyen & Shwedyk A First Course in Digital Communications

(shifted)
Ak = Ak cos (2kfr ) Bk sin (2kfr ).
(shifted)

k
Bk = Ak sin (2kfr ) + Bk cos (2kfr ).
Now
(shifted)

dy
(shifted) Ck (shifted) Ck jk
Dk = ejk = ej2kfr e
2 2
| {z }
Dk

(shifted)
Ck = Ck ,

we
(shifted)
k = k j2kfr .

Basically a time shift leaves the amplitude of the sinusoid unchanged and results in a
linear change in the phase.
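The shift property $D_k^{\text{(shifted)}} = e^{-j2\pi kf_r\tau}D_k$ is easy to confirm numerically. A small Python sketch (the test signal and shift value are our arbitrary choices):

```python
import math
import cmath

T = 2.0
fr = 1.0 / T
tau = 0.3  # an arbitrary time shift (sec) for the check

def s(t):
    # arbitrary real periodic test signal: DC + one cosine + one sine term
    return 1.0 + 2.0*math.cos(2*math.pi*fr*t) - 0.5*math.sin(2*math.pi*3*fr*t)

def Dk(sig, k, N=4096):
    """Midpoint-rule Fourier coefficient of a T-periodic signal."""
    dt = T / N
    total = 0j
    for n in range(N):
        t = (n + 0.5) * dt
        total += sig(t) * cmath.exp(-2j*math.pi*k*fr*t) * dt
    return total / T

shifted = lambda t: s(t - tau)
# Dk(shifted, k) should equal exp(-j 2 pi k fr tau) * Dk(s, k) for every k
```

The equality holds for every harmonic, including $k = 0$ (where the exponential is 1, i.e., the DC value is unchanged).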

P2.7 (a) $s(-t) = s_1(-t) + s_2(-t) = s_1(t) + s_2(t) = s(t)$: even.

(b) $s(-t) = s_1(-t) + s_2(-t) = s_1(t) - s_2(t)$: no symmetry.

(c) $s\left(t - \frac{T}{2}\right) = s_1\left(t - \frac{T}{2}\right) + s_2\left(t - \frac{T}{2}\right) = -s_1(t) - s_2(t) = -s(t)$. Therefore $s(t)$ is halfwave symmetric.

(d) Can't say anything.

(e) Again cannot say anything.

(f) Consider $s\left(t - \frac{T}{2}\right) = s_1\left(t - \frac{T}{2}\right) + s_2\left(t - \frac{T}{2}\right) = -s_1(t) - s_2(t) = -s(t)$. Therefore $s(t)$ is halfwave symmetric.
Now if both $s_1(t), s_2(t)$ are even then $s(t)$ is even and therefore it is even quarter-wave symmetric. If both are odd, $s(t)$ is odd and $s(t)$ is odd quarter-wave symmetric. However, if one is even and the other is odd then $s(t)$ is only halfwave symmetric.

(g) $s(t)$ is even but not halfwave symmetric.

P2.8 $s(t) = s_1(t)s_2(t)$

(a) $s(-t) = s_1(-t)s_2(-t) = s_1(t)s_2(t) = s(t)$: even.

(b) $s(-t) = s_1(-t)s_2(-t) = -s_1(t)s_2(t) = -s(t)$: odd.

(c) $s\left(t - \frac{T}{2}\right) = s_1\left(t - \frac{T}{2}\right)s_2\left(t - \frac{T}{2}\right) = [-s_1(t)][-s_2(t)] = s(t)$. Therefore $s(t)$ is not halfwave symmetric. Note that the fundamental period changes to $\frac{T}{2}$.

(d) Neither even nor halfwave symmetric.

(e) Neither odd nor halfwave symmetric.

(f) Have 3 possibilities: (i) $s_1(t), s_2(t)$ are both even quarter-wave; (ii) $s_1(t), s_2(t)$ are both odd quarter-wave; (iii) one is odd quarter-wave, the other is even quarter-wave.
(i) From (a) it follows that $s(t)$ is even. From (c) it follows that it is not halfwave symmetric.
(ii) Easy to show that $s(t)$ is even. But $s\left(t - \frac{T}{2}\right) = s_1\left(t - \frac{T}{2}\right)s_2\left(t - \frac{T}{2}\right) = [-s_1(t)][-s_2(t)] = s_1(t)s_2(t) = s(t) \neq -s(t)$, therefore not halfwave symmetric.
(iii) $s(t)$ is odd (follows from (b)) but again $s\left(t - \frac{T}{2}\right) = s(t) \neq -s(t)$, therefore not halfwave symmetric.
Again note that in each case the fundamental period changes to $\frac{T}{2}$.

(g) $s(t)$ is even. Is it halfwave symmetric?
$$s\left(t - \frac{T}{2}\right) = s_1\left(t - \frac{T}{2}\right)s_2\left(t - \frac{T}{2}\right) = s_1\left(t - \frac{T}{2}\right)s_2(t) \neq -s(t)$$
No.

P2.9 (a) Check for 2 conditions: that the magnitude is even and the phase is odd. If both are satisfied then the signal is real. If one or the other or neither condition is satisfied then the signal is complex.
(i) real, (ii) real (note that a phase of $-\pi$ is the same as $\pi$), (iii) complex (phase spectrum is not odd).

(b) If a harmonic frequency has a phase of $\pi$ (or 0) then there is only a $\cos(\cdot)$ term at this frequency. Similarly, if the phase is $\frac{\pi}{2}$ (or $\frac{3\pi}{2}$) then there is only a $\sin(\cdot)$ term at that frequency. Finally, if the signal is even then it has only $\cos(\cdot)$ terms in the Fourier series; if odd then only $\sin(\cdot)$ terms.
Therefore the signal in (ii) is even (and real). No signal is odd.
P2.10 The two spectra and their product spectrum are shown in Figure 2.4.

[Figure 2.4: $D_k$ coefficients of $V_m\cos(2\pi f_m t)$ — lines of height $\frac{V_m}{2}$ at $f = \pm f_m$; $D_k$ coefficients of $V_c\cos(2\pi f_c t)$ — lines of height $\frac{V_c}{2}$ at $f = \pm f_c$; the product has lines of height $\frac{V_mV_c}{4}$ at $f = \pm(f_c - f_m)$ and $\pm(f_c + f_m)$.]
P2.11 The difference between the two modulations of P2.10 and P2.11 is that the one in P2.11 has a spectral component at $f_c$, or equivalently the modulating signal, $s_1(t) = V_{DC} + V_m\cos(2\pi f_m t)$, has a spectral component at $f = 0$ (i.e., a DC component).

[Figure 2.5: $S_1(f)$ (V/Hz) — lines $\frac{V_m}{2}$ at $\pm f_m$ and $V_{DC}$ at $f = 0$; $S_2(f)$ — lines $\frac{V_c}{2}$ at $\pm f_c$; $S(f)$ — lines $\frac{V_cV_m}{4}$ at $\pm f_c \pm f_m$ and $\frac{V_cV_{DC}}{2}$ at $\pm f_c$.]

Remarks:
(i) One can use the frequency-shift property to determine the spectrum of $s(t)$.
(ii) The phases of all the spectra are zero, a consequence of only cosines being involved, both as carrier and message signals. A good exercise is to change some (or all) of the signals to sines and redo the spectra.
(iii) The simplest way to find the spectra is to use some high-school trig, namely $\cos(x)\cos(y) = \frac{\cos(x+y) + \cos(x-y)}{2}$. No need to know convolution, the frequency-shift property, etc.
P2.12 (a) $f_i(t) = \frac{1}{2\pi}\frac{d}{dt}\left[2\pi f_c t - V_m\sin(2\pi f_m t)\right] = f_c - V_mf_m\cos(2\pi f_m t)$ (Hz).

[Figure 2.6: $f_i(t)$ (Hz) oscillates sinusoidally between $f_c - V_mf_m$ and $f_c + V_mf_m$ with period $\frac{1}{f_m}$ (sec).]

(b) Consider
$$s\left(t + \frac{k}{f_m}\right) = V_c\cos\left[2\pi f_c\left(t + \frac{k}{f_m}\right) - V_m\sin\left(2\pi f_m\left(t + \frac{k}{f_m}\right)\right)\right] = V_c\cos\Big[2\pi f_c t + 2\pi k\underbrace{\tfrac{f_c}{f_m}}_{n} - V_m\sin(2\pi f_m t + 2\pi k)\Big] = s(t)$$
Choose $k = 1$, i.e., $s\left(t + \frac{1}{f_m}\right) = s(t)$: periodic with period $T = \frac{1}{f_m}$ (sec).

(c) Consider $s_1(t) = e^{j[2\pi f_c t - V_m\sin(2\pi f_m t)]}$:
$$s_1\left(t + \frac{k}{f_m}\right) = e^{j[2\pi f_c t + 2\pi kn - V_m\sin(2\pi f_m t + 2\pi k)]} = \underbrace{e^{j2\pi kn}}_{1}\underbrace{e^{j[2\pi f_c t - V_m\sin(2\pi f_m t)]}}_{s_1(t)}$$
Therefore $s_1(t)$ is periodic with period $\frac{1}{f_m}$ (sec). Its Fourier coefficients are determined as follows:
$$D_k = \frac{1}{T}\int_{-\frac{T}{2}}^{\frac{T}{2}} e^{j[2\pi f_c t - V_m\sin(2\pi f_m t)]}e^{-j2\pi kf_m t}dt, \quad T = \frac{1}{f_m}.$$
Change variable to $\lambda = 2\pi f_m t$ (and use $f_c = nf_m$):
$$D_k = \frac{1}{2\pi}\int_{-\pi}^{\pi} e^{j[(n-k)\lambda - V_m\sin\lambda]}d\lambda = J_{n-k}(V_m),$$
where by definition $J_n(x) = \frac{1}{2\pi}\int_{-\pi}^{\pi} e^{-j[n\lambda - x\sin\lambda]}d\lambda$ is the $n$th-order Bessel function.

Remarks: The spectrum depends on $V_m$, the amplitude of the modulating signal. In fact $V_m$ determines the maximum instantaneous frequency deviation (see the plot in (a)), which implies that the larger $V_m$ is, the wider the spectrum bandwidth becomes. Of course, in theory there are spectral components out to infinity, but in practice a finite bandwidth would capture most of the power of the transmitted signal. The restriction that $f_c$ is an integer multiple of $f_m$ is easily removed, but for this the Fourier transform is needed since the signal is no longer periodic in general.
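The identity $D_k = J_{n-k}(V_m)$ can be verified numerically without any special-function library, by comparing the direct Fourier-coefficient integral of $s_1(t)$ against the Bessel integral definition quoted above (the values $n = 5$, $V_m = 2$ are our arbitrary test choices):

```python
import math
import cmath

fm, Vm = 1.0, 2.0   # arbitrary test values
n = 5               # fc = n * fm
fc = n * fm
T = 1.0 / fm

def D(k, N=8192):
    """D_k of s1(t) = exp(j[2 pi fc t - Vm sin(2 pi fm t)]) by direct integration."""
    dt = T / N
    total = 0j
    for i in range(N):
        t = (i + 0.5) * dt
        total += cmath.exp(1j*(2*math.pi*fc*t - Vm*math.sin(2*math.pi*fm*t))) \
                 * cmath.exp(-2j*math.pi*k*fm*t) * dt
    return total / T

def J(nu, x, N=8192):
    """Bessel J_nu(x) from the integral definition used in the solution."""
    d = 2*math.pi / N
    total = 0.0
    for i in range(N):
        lam = -math.pi + (i + 0.5)*d
        total += math.cos(nu*lam - x*math.sin(lam)) * d
    return total / (2*math.pi)
```

For instance, `D(3)` matches `J(2, 2.0)` and `D(7)` matches `J(-2, 2.0)`, illustrating that the spectrum is a set of Bessel-weighted lines centered on the $k = n$ harmonic.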

P2.13 (a)
$$s_1(t) = 1 + \sqrt{2}e^{j\frac{\pi}{4}}e^{j2\pi t} + \sqrt{2}e^{-j\frac{\pi}{4}}e^{-j2\pi t} + 5e^{j2.21}e^{j2\pi(2t)} + 5e^{-j2.21}e^{-j2\pi(2t)} + 5e^{-j2.21}e^{j2\pi(3t)} + 5e^{j2.21}e^{-j2\pi(3t)}$$
$$= 1 + 2\sqrt{2}\cos\left(2\pi t + \frac{\pi}{4}\right) + 10\cos(4\pi t + 2.21) + 10\cos(6\pi t - 2.21)$$
where $f_r$ is taken to be 1 Hz.

$$s_2(t) = 2\cos\left(2\pi t + \frac{\pi}{4}\right)$$
$$s_1(t)s_2(t) = 2\cos\left(2\pi t + \frac{\pi}{4}\right) + 4\sqrt{2}\cos^2\left(2\pi t + \frac{\pi}{4}\right) + 20\cos(4\pi t + 2.21)\cos\left(2\pi t + \frac{\pi}{4}\right) + 20\cos(6\pi t - 2.21)\cos\left(2\pi t + \frac{\pi}{4}\right)$$
$$= 2\cos\left(2\pi t + \frac{\pi}{4}\right) + 2\sqrt{2}\left[1 + \cos\left(4\pi t + \frac{\pi}{2}\right)\right] + 10\cos\left(2\pi t + 2.21 - \frac{\pi}{4}\right) + 10\cos\left(6\pi t + \frac{\pi}{4} + 2.21\right) + 10\cos\left(4\pi t - \frac{\pi}{4} - 2.21\right) + 10\cos\left(8\pi t - 2.21 + \frac{\pi}{4}\right)$$
$$= 2\sqrt{2} + 2\cos\left(2\pi t + \frac{\pi}{4}\right) + 10\cos\left(2\pi t + 2.21 - \frac{\pi}{4}\right) + 2\sqrt{2}\cos\left(4\pi t + \frac{\pi}{2}\right) + 10\cos\left(4\pi t - \frac{\pi}{4} - 2.21\right) + 10\cos\left(6\pi t + \frac{\pi}{4} + 2.21\right) + 10\cos\left(8\pi t - 2.21 + \frac{\pi}{4}\right)$$

$D_0 = 2\sqrt{2}$; $D_1 = e^{j\frac{\pi}{4}} + 5e^{-j(\frac{\pi}{4} - 2.21)}$; $D_2 = \sqrt{2}e^{j\frac{\pi}{2}} + 5e^{-j(\frac{\pi}{4} + 2.21)}$; $D_3 = 5e^{j(\frac{\pi}{4} + 2.21)}$; $D_4 = 5e^{j(\frac{\pi}{4} - 2.21)}$; $D_{-k} = D_k^*$.

P2.14 The signal $s_1(t)$ is shown in Figure 2.7.

[Figure 2.7: $V\cos(2\pi f_c t)$ over one period $T = \frac{1}{f_c}$, and the gating square wave $s_1(t)$ — equal to 1 for $|t| < \frac{T}{4}$ and 0 for $\frac{T}{4} < |t| < \frac{T}{2}$, period $T$.]

The Fourier coefficients of $s_1(t)$ are given by
$$D_k^{[s_1(t)]} = \frac{1}{T}\int_{-\frac{T}{4}}^{\frac{T}{4}} e^{-j2\pi\frac{k}{T}t}dt = \frac{\sin\left(\frac{k\pi}{2}\right)}{k\pi}$$
$$D_k^{[V\cos(2\pi f_c t)]} = \begin{cases}\frac{V}{2}, & k = \pm 1\\ 0, & \text{otherwise}\end{cases} = \frac{V}{2}\delta_D(k-1) + \frac{V}{2}\delta_D(k+1),$$
where $\delta_D(\cdot)$ is interpreted as a discrete delta function, i.e.,
$$\delta_D(x) = \begin{cases}1, & x = 0\\ 0, & x \neq 0\end{cases}$$
$$D_k^{[s(t)]} = D_k^{[s_1(t)]} \circledast D_k^{[V\cos(2\pi f_c t)]} = \sum_{n=-\infty}^{\infty}\frac{\sin\left(\frac{(k-n)\pi}{2}\right)}{\pi(k-n)}\left[\frac{V}{2}\delta_D(n-1) + \frac{V}{2}\delta_D(n+1)\right] = \frac{V}{2}\frac{\sin\left(\frac{(k-1)\pi}{2}\right)}{\pi(k-1)} + \frac{V}{2}\frac{\sin\left(\frac{(k+1)\pi}{2}\right)}{\pi(k+1)}.$$

[Figure 2.8: The limiter input $s(t) = V\cos(2\pi f_m t)$ and output $s_{out}(t)$, clipped at $\pm V_1$ outside $(-t_1, t_1)$, together with the decomposition into $s_1(t)$ (a 0/1 gating square wave of period $\frac{1}{2f_m}$) and $s_2(t)$ (a $\pm V_1$ square wave of period $\frac{1}{f_m}$).]

P2.15 It is obvious that we are interested in the case $V_1 < V$; otherwise there is no saturation (see Figure 2.8).

The time $t_1$ is given by: $V\cos(2\pi f_m t_1) = V_1 \Rightarrow t_1 = \frac{1}{2\pi f_m}\cos^{-1}\left(\frac{V_1}{V}\right)$.

The time $t_2$ is given by $t_2 = \frac{1}{4f_m} + \left(\frac{1}{4f_m} - t_1\right) = \frac{1}{2f_m} - \frac{1}{2\pi f_m}\cos^{-1}\left(\frac{V_1}{V}\right)$.

Lastly, $t_2 - t_1 = \frac{1}{2f_m} - \frac{1}{\pi f_m}\cos^{-1}\left(\frac{V_1}{V}\right) \triangleq \Delta t$.

We are now in a position to draw the waveforms $s_1(t), s_2(t)$ as in Figure 2.9. Note that $s_1(t)$ is periodic with period $\frac{1}{2f_m}$ and has a DC component equal to $\frac{t_2 - t_1}{T}$. On the other hand, $s_2(t)$ is periodic with period $\frac{1}{f_m}$.

Now we determine the Fourier coefficients of $s_1(t), s_2(t)$. The basic pulse shapes for $s_1(t)$ (over the time interval $-\frac{1}{4f_m}$ to $\frac{1}{4f_m}$) and $s_2(t)$ (over the time interval $-\frac{1}{2f_m}$ to $\frac{1}{2f_m}$) are shown in Fig. 2.9.

[Figure 2.9: Basic pulses — $s_1(t)$: 1 for $t_1 < |t| \le \frac{T}{2} = \frac{1}{4f_m}$ and 0 for $|t| < t_1$; $s_2(t)$: $V_1$ for $|t| < t_1$, $-V_1$ for $t_2 < |t| \le \frac{T}{2} = \frac{1}{2f_m}$, 0 elsewhere.]

Fourier coefficients of $s_1(t)$:
$$D_k^{[s_1(t)]} = \frac{1}{T}\int_{-\frac{T}{2}}^{\frac{T}{2}} s_1(t)e^{-j2\pi\frac{k}{T}t}dt, \quad \text{where } T = \frac{1}{2f_m}$$
$$= \frac{1}{T}\left[\int_{-\frac{T}{2}}^{-t_1} e^{-j2\pi\frac{k}{T}t}dt + \int_{t_1}^{\frac{T}{2}} e^{-j2\pi\frac{k}{T}t}dt\right] = \frac{\sin(k\pi) - \sin\left(2\pi k\frac{t_1}{T}\right)}{k\pi} = -\frac{\sin\left(2k\cos^{-1}\left(\frac{V_1}{V}\right)\right)}{k\pi}$$
$$\text{where } \frac{t_1}{T} = 2f_mt_1 = \frac{1}{\pi}\cos^{-1}\left(\frac{V_1}{V}\right), \ k = 1, 2, \ldots; \qquad D_0^{[s_1(t)]} = \frac{t_2 - t_1}{T}.$$

Fourier coefficients of $s_2(t)$:
$$D_k^{[s_2(t)]} = \frac{1}{T}\left[\int_{-\frac{T}{2}}^{-t_2}(-V_1)e^{-j2\pi\frac{k}{T}t}dt + \int_{-t_1}^{t_1}V_1e^{-j2\pi\frac{k}{T}t}dt + \int_{t_2}^{\frac{T}{2}}(-V_1)e^{-j2\pi\frac{k}{T}t}dt\right]$$
$$= \frac{V_1}{k\pi}\left[\sin\left(\frac{2\pi kt_2}{T}\right) + \sin\left(\frac{2\pi kt_1}{T}\right)\right], \quad \text{where } T = \frac{1}{f_m}.$$
Now $\frac{t_1}{T} = \frac{1}{2\pi}\cos^{-1}\left(\frac{V_1}{V}\right)$ and $\frac{t_2}{T} = \frac{1}{2} - \frac{1}{2\pi}\cos^{-1}\left(\frac{V_1}{V}\right)$. Therefore
$$D_k^{[s_2(t)]} = \frac{V_1}{k\pi}\left[1 - \cos(k\pi)\right]\sin\left(k\cos^{-1}\left(\frac{V_1}{V}\right)\right).$$

Consider now the coefficients of $s_1(t)s(t)$, which are the (discrete) convolution of the Fourier coefficients of $s_1(t)$ and $s(t)$. The Fourier coefficients of $s(t)$ are simply $D_1 = \frac{V}{2}$ ($k = 1$, or $f = f_m$) and $D_{-1} = \frac{V}{2}$ ($k = -1$, or $f = -f_m$). All other coefficients are zero.

[Figure 2.10: The line spectra $D_k^{[s(t)]}$ (lines of height $\frac{V}{2}$ at $\pm f_m$), $D_k^{[s_1(t)]}$ (lines at even multiples of $f_m$), and the convolution $D_k^{[s_1(t)s(t)]}$ (lines at odd multiples of $f_m$), with $D_{-k} = D_k^*$.]

Convolve the Fourier series of $s_1(t)$ and $s(t)$ graphically, namely flip $D_k^{[s(t)]}$ around the vertical axis, slide it along the horizontal axis, multiply by $D_k^{[s_1(t)]}$ and sum the products. The plot looks as in Fig. 2.10.
$$D_k^{[s_1(t)s(t)]} = \begin{cases}\frac{V}{2}\left(D_{\frac{k-1}{2}}^{[s_1(t)]} + D_{\frac{k+1}{2}}^{[s_1(t)]}\right), & k = 1, 3, 5, \ldots\\[4pt] 0, & k = 0, 2, 4, \ldots\end{cases} \qquad D_{-k}^{[s_1(t)s(t)]} = \left(D_k^{[s_1(t)s(t)]}\right)^*.$$

Finally, $D_k^{[s_{out}(t)]} = D_k^{[s_1(t)s(t)]} + D_k^{[s_2(t)]}$.
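The closed form for $D_k^{[s_2(t)]}$ can be checked numerically against a brute-force integration of the $\pm V_1$ square wave (the values $V = 1$, $V_1 = 0.6$, $f_m = 1$ are our arbitrary choices):

```python
import math
import cmath

V, V1, fm = 1.0, 0.6, 1.0  # arbitrary test values with V1 < V
T = 1.0 / fm
t1 = math.acos(V1/V) / (2*math.pi*fm)
t2 = 1.0/(2*fm) - t1

def s2(t):
    """Basic waveform of s2(t): +V1 near the peaks, -V1 near the troughs."""
    t = ((t + T/2) % T) - T/2  # reduce to one period (-T/2, T/2)
    if abs(t) < t1:
        return V1
    if abs(t) > t2:
        return -V1
    return 0.0

def Dk_num(k, N=50000):
    dt = T / N
    total = 0j
    for n in range(N):
        t = -T/2 + (n + 0.5)*dt
        total += s2(t) * cmath.exp(-2j*math.pi*k*fm*t) * dt
    return total / T

def Dk_formula(k):
    return (V1/(k*math.pi)) * (1 - math.cos(k*math.pi)) * math.sin(k*math.acos(V1/V))
```

The even harmonics vanish (the $1 - \cos(k\pi)$ factor), and the odd ones agree with the direct integral to the accuracy of the discretization.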


P2.16 (a)
$$S(f) = V\int_{-\tau}^{\tau} e^{-j2\pi ft}dt = V\frac{e^{-j2\pi f\tau} - e^{j2\pi f\tau}}{-j2\pi f} = 2V\tau\frac{\sin(2\pi f\tau)}{2\pi f\tau}$$

(b) $D_k^{[T]} = \frac{1}{T}S(f)\Big|_{f = kf_r = k/T} = \frac{2V\tau}{T}\frac{\sin\left(2\pi\frac{k\tau}{T}\right)}{2\pi\frac{k\tau}{T}}$

(c) Plots of $D_k^{[T]}$ are shown in Fig. 2.11 for $T = 1, 1.5, 2, 3, 5$ (we set $V = 1$ and $\tau = \frac{1}{2}$).

[Figure 2.11: $D_k^{[T]}$ (V/Hz) vs. $f$ (Hz) for the different values of $T$; all sample sets lie on the same sinc-shaped envelope.]

(d) As $T \to \infty$, the amplitude (magnitude) of $D_k^{[T]}$ goes to zero; however, the envelope is always a $\text{sinc}(\cdot)$, i.e., that of $S(f)$. Note also that the frequency spacing between the frequency components, $\Delta f = \frac{1}{T}$, is proportional to $\frac{1}{T}$ and it goes to zero as $T \to \infty$.
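The relationship between the pulse-train coefficients and the single-pulse transform is a one-liner to check numerically. A minimal Python sketch (with our assumed values $V = 1$, $\tau = \frac{1}{2}$ from part (c)):

```python
import math

V, tau = 1.0, 0.5  # pulse height and half-width (assumed values)

def S(f):
    """Fourier transform of the single pulse: 2*V*tau*sinc(2*f*tau)."""
    x = 2*math.pi*f*tau
    return 2*V*tau if x == 0 else 2*V*tau*math.sin(x)/x

def Dk(T, k):
    """Fourier coefficient of the periodic extension with period T: (1/T) S(k/T)."""
    return S(k/T) / T
```

Scaling `Dk(T, k)` by `T` recovers the envelope `S(k/T)` exactly, and `Dk(T, 0)` shrinks like $\frac{1}{T}$ while the samples $k/T$ pack the same sinc envelope more densely — the point made in (d).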

P2.17 The basic difference in the magnitude/phase plots of the Fourier series representation and the Fourier transform representation is that the Fourier transform magnitude plot will have impulses at the fundamental (and harmonic) frequency(ies) of strength $|D_k|$. This reflects the fact that the signal is periodic and that we have finite power concentrated in a zero-width frequency interval. The phase spectrum is the same.

P2.18 (a) (i) $S(f) = \frac{V_1}{2}\left[\delta(f-2) + \delta(f+2)\right] + \frac{V_2}{2}\left[\delta(f-\sqrt{2}) + \delta(f+\sqrt{2})\right]$ (V/Hz)
(ii) $S(f) = \frac{V_1}{2}\left[\delta(f-\sqrt{2}) + \delta(f+\sqrt{2})\right] + \frac{V_2}{2}\left[\delta(f-2\sqrt{2}) + \delta(f+2\sqrt{2})\right]$

(b) (i) Not periodic. The ratio of the two frequencies of the 2 sinusoids, 2 and $\sqrt{2}$ Hz, is not a rational number.
(ii) Periodic, since the ratio of the 2 frequencies is a rational number, $\frac{2\sqrt{2}}{\sqrt{2}} = 2$. The spectra look as follows:

[Figure 2.12: $D_k$ (volts) — lines of height $\frac{V_1}{2}$ at $f = \pm\sqrt{2}$ Hz and $\frac{V_2}{2}$ at $f = \pm 2\sqrt{2}$ Hz.]

P2.19 (a) $S(f) = \int_{-\infty}^{\infty} m(t)\frac{e^{j2\pi f_c t} + e^{-j2\pi f_c t}}{2}e^{-j2\pi ft}dt = \frac{M(f - f_c) + M(f + f_c)}{2}$

(b) See Figure 2.13.

[Figure 2.13: $|M(f)|$ (V/Hz, height $A$, bandwidth $W$) and $\angle M(f)$ (rad); $|S(f)|$ consists of copies of $|M(f)|$ of height $\frac{A}{2}$ centered at $\pm f_c$, occupying $f_c - W$ to $f_c + W$ (and its mirror image), with correspondingly shifted phase $\angle S(f)$.]

P2.20 First find the Fourier series of $e^{jV_m\sin(2\pi f_m t)}$:
$$D_k = \frac{1}{T}\int_0^T e^{jV_m\sin(2\pi f_m t)}e^{-j2\pi kf_m t}dt, \quad \text{where } T = \frac{1}{f_m}.$$
Change variable to $\lambda = 2\pi f_m t$:
$$D_k = \frac{1}{2\pi}\int_0^{2\pi} e^{-j(k\lambda - V_m\sin\lambda)}d\lambda = J_k(V_m) \quad (k\text{th-order Bessel function}). \tag{2.2}$$
Then the Fourier transform is $\sum_{k=-\infty}^{\infty} J_k(V_m)\delta(f - kf_m)$. Multiplying by $V_ce^{j2\pi f_c t}$ shifts the above spectrum by $f_c$, i.e.,
$$s(t) \leftrightarrow S(f) = V_c\sum_{k=-\infty}^{\infty} J_k(V_m)\delta(f - f_c - kf_m).$$


P2.21 Differentiate the 1st signal twice:

[Figure 2.14: $s(t)$ (a triangle of height $A$ on $(-T, T)$), $\frac{ds(t)}{dt}$ (rectangles of height $\pm\frac{A}{T}$), and $\frac{d^2s(t)}{dt^2}$ (impulses of strength $\frac{A}{T}$ at $t = \pm T$ and $-\frac{2A}{T}$ at $t = 0$).]

Then
$$\mathcal{F}\left\{\frac{d^2s(t)}{dt^2}\right\} = \frac{A}{T}e^{j2\pi fT} - \frac{2A}{T} + \frac{A}{T}e^{-j2\pi fT} = \frac{2A}{T}\left[\cos(2\pi fT) - 1\right]$$
$$S(f) = \frac{\mathcal{F}\left\{\frac{d^2s(t)}{dt^2}\right\}}{(j2\pi f)^2} = \frac{A}{T}\frac{1 - \cos(2\pi fT)}{2\pi^2f^2} \quad \text{(V/Hz)}.$$

Consider now the 2nd signal. The 1st derivative looks like:

[Figure 2.15: $\frac{ds(t)}{dt}$ — a rectangle of height $\frac{A}{T}$ on $(-T, T)$ plus an impulse of strength $-2A$ at $t = T$.]

We could differentiate the rectangular pulse again, but we have done it so many times that its transform is almost engrained in our minds:
$$\mathcal{F}\left\{\frac{ds(t)}{dt}\right\} = -2Ae^{-j2\pi fT} + \frac{A\sin(2\pi fT)}{\pi fT}$$
$$\mathcal{F}\{s(t)\} = \frac{1}{j2\pi f}\left[-2Ae^{-j2\pi fT} + \frac{A\sin(2\pi fT)}{\pi fT}\right].$$

P2.22 For reference, the Fourier transform of a rectangular pulse, centered at $t = 0$, of width $2T$ and height $A$, is $2AT\frac{\sin(2\pi fT)}{2\pi fT}$.

Now decompose $s(t)$ into the sum of rectangular pulses $s_1(t)$ and $s_2(t)$ (note that this decomposition is not unique):

[Figure 2.16: $s(t)$ — a staircase pulse equal to 1 on $[1, 2]$, 2 on $[2, 3]$ and 1 on $[3, 4]$ — decomposed as the sum of $s_1(t)$ (height 1 on $[1, 4]$) and $s_2(t)$ (height 1 on $[2, 3]$).]

$s_1(t)$ is a shifted version of a rectangular pulse of width $2T = 3$ and height $A = 1$, shifted by 2.5 seconds. Therefore,
$$\mathcal{F}\{s_1(t)\} = \underbrace{e^{-j2\pi(2.5)f}}_{\text{due to shifting}}2(1)\left(\tfrac{3}{2}\right)\frac{\sin(2\pi f(3/2))}{2\pi f(3/2)} = 3e^{-j5\pi f}\frac{\sin(3\pi f)}{3\pi f}.$$
$s_2(t)$ is a rectangular pulse of width $2T = 1$ and height $A = 1$, with the same shift of 2.5 seconds. Thus,
$$\mathcal{F}\{s_2(t)\} = e^{-j5\pi f}2(1)\left(\tfrac{1}{2}\right)\frac{\sin(\pi f)}{\pi f} = e^{-j5\pi f}\frac{\sin(\pi f)}{\pi f}.$$
Finally,
$$\mathcal{F}\{s(t)\} = \mathcal{F}\{s_1(t)\} + \mathcal{F}\{s_2(t)\} = e^{-j5\pi f}\left[3\frac{\sin(3\pi f)}{3\pi f} + \frac{\sin(\pi f)}{\pi f}\right].$$
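The closed-form transform can be sanity-checked by brute-force integration. A Python sketch, assuming the staircase pulse read off Fig. 2.16 (value 1 on $[1,2)$, 2 on $[2,3)$, 1 on $[3,4)$):

```python
import math
import cmath

def S_direct(f, N=20000):
    """Numerical Fourier transform of the staircase pulse via a midpoint sum."""
    a, b = 1.0, 4.0
    dt = (b - a) / N
    total = 0j
    for n in range(N):
        t = a + (n + 0.5) * dt
        s = 2.0 if 2.0 <= t < 3.0 else 1.0
        total += s * cmath.exp(-2j*math.pi*f*t) * dt
    return total

def S_formula(f):
    """Closed form: exp(-j5*pi*f) * [3 sinc(3f) + sinc(f)]."""
    if f == 0:
        return 4.0 + 0j  # total area 1 + 2 + 1
    return cmath.exp(-5j*math.pi*f) * (math.sin(3*math.pi*f)/(math.pi*f)
                                       + math.sin(math.pi*f)/(math.pi*f))
```

At $f = 0$ both give the pulse area, 4; at other frequencies they agree to the accuracy of the discretization.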
P2.23 Consider the 1st figure.
$$S(f) = 1e^{j\frac{\pi}{2}}\left[u(f+2) - u(f)\right] + 1e^{-j\frac{\pi}{2}}\left[u(f) - u(f-2)\right]$$
$$s(t) = \int_{-2}^0 e^{j\frac{\pi}{2}}e^{j2\pi ft}df + \int_0^2 e^{-j\frac{\pi}{2}}e^{j2\pi ft}df = \frac{1 - \cos(4\pi t)}{\pi t}$$
The spectrum in the 2nd figure is: $S(f) = 1e^{-j\frac{\pi}{4}f}\left[u(f+2) - u(f-2)\right]$.
$$s(t) = \int_{-2}^2 e^{-j\frac{\pi}{4}f}e^{j2\pi ft}df = -\frac{2\cos(4\pi t)}{2\pi t - \frac{\pi}{4}}$$
Remark: The problem is a small illustration that the phase is also important in determining the shape of a signal. An interesting question is: Is the ear more sensitive to phase or magnitude distortions in an audio signal? Is the eye more sensitive to phase or magnitude distortions in an image? For this one needs to resort to Matlab and experiment, since we are now in the realm of psychophysics.

P2.24 $s_{\cos}(t) = A\cos(2\pi f_m t)$, where $f_m = \frac{1}{2T}$ $\Rightarrow S_{\cos}(f) = \frac{A}{2}\left[\delta(f - f_m) + \delta(f + f_m)\right]$.
$s_{\text{rect}}(t) = u\left(t + \frac{T}{2}\right) - u\left(t - \frac{T}{2}\right)$ $\Rightarrow S_{\text{rect}}(f) = T\frac{\sin(\pi fT)}{\pi fT}$.
$$S(f) = S_{\cos}(f) * S_{\text{rect}}(f) = \frac{AT}{2}\int_{-\infty}^{\infty}\left[\delta(\lambda - f_m) + \delta(\lambda + f_m)\right]\frac{\sin(\pi(f-\lambda)T)}{\pi(f-\lambda)T}d\lambda = \frac{2AT\cos(\pi fT)}{\pi\left[1 - 4(fT)^2\right]}.$$
P2.25 A rectangular signal of width $W$ (sec) and amplitude $A$ (volts), i.e., $s_{\text{rect}}(t) = A\left[u\left(t + \frac{W}{2}\right) - u\left(t - \frac{W}{2}\right)\right]$, has Fourier transform $AW\frac{\sin(\pi fW)}{\pi fW}$.

(a) The signal can be decomposed into 2 shifted rectangular signals of opposite polarity, each of width $T$ seconds and amplitude $A$ volts; one shifted $T/2$ seconds to the left, the other $T/2$ seconds to the right. Now $\mathcal{F}\{s(t - \tau)\} = e^{-j2\pi f\tau}S(f)$. Here $\tau = +T/2$ (or $-T/2$); also $W = T$:
$$S(f) = ATe^{-j2\pi f(-T/2)}\frac{\sin(\pi fT)}{\pi fT} - ATe^{-j2\pi f(T/2)}\frac{\sin(\pi fT)}{\pi fT} = 2jAT\frac{\sin^2(\pi fT)}{\pi fT} \quad \text{(V/Hz)}.$$

(b) The derivative of $s(t)$ results in 3 impulses, i.e.,
$$\frac{ds(t)}{dt} = A\delta(t + T) - 2A\delta(t) + A\delta(t - T)$$
$$\mathcal{F}\left\{\frac{ds(t)}{dt}\right\} = Ae^{j2\pi fT} - 2A + Ae^{-j2\pi fT} = 2A\left[\cos(2\pi fT) - 1\right]$$
$$S(f) = \frac{\mathcal{F}\left\{\frac{ds(t)}{dt}\right\}}{j2\pi f} = 2A\frac{\cos(2\pi fT) - 1}{j2\pi f} = 2jAT\frac{\sin^2(\pi fT)}{\pi fT} \quad \text{(V/Hz)}.$$

P2.26 (a)
$$X(f) = \int_{-\infty}^{\infty} x(t)e^{-j2\pi ft}dt$$
$$X^*(f) = \left[\int_{-\infty}^{\infty} x(t)e^{-j2\pi ft}dt\right]^* = \int_{-\infty}^{\infty} x^*(t)\left(e^{-j2\pi ft}\right)^*dt = \int_{-\infty}^{\infty} x(t)e^{j2\pi ft}dt = \int_{-\infty}^{\infty} x(t)e^{-j2\pi(-f)t}dt = X(-f),$$
where the following were used: $x(t)$ is real, as is $dt$; the complex conjugate of a sum equals the sum of the complex conjugates (and integration for engineers is essentially a summation operation); the complex conjugate of a product equals the product of the complex conjugates.
(b) Now
$$\int_{-\infty}^{\infty} x(t)y(t)dt = \int_{t=-\infty}^{\infty}dt\,x(t)\int_{f=-\infty}^{\infty} Y(f)e^{j2\pi ft}df = \int_{f=-\infty}^{\infty}df\,Y(f)\underbrace{\int_{t=-\infty}^{\infty} x(t)e^{-j2\pi(-f)t}dt}_{=X(-f) = X^*(f)} = \int_{f=-\infty}^{\infty} X^*(f)Y(f)df.$$
And interchanging the roles of $x(t), y(t)$ we have
$$\int_{f=-\infty}^{\infty} X^*(f)Y(f)df = \int_{f=-\infty}^{\infty} X(f)Y^*(f)df.$$
If $x(t) = y(t)$, we get
$$\underbrace{\int_{-\infty}^{\infty} x^2(t)dt}_{\text{units of (volts)}^2\cdot\text{sec = joules}} = \underbrace{\int_{-\infty}^{\infty} |X(f)|^2df}_{\text{which means }|X(f)|^2\text{ has units of joules/Hz}}$$
or $|X(f)|^2$ is an energy density spectrum, i.e., it tells us how the energy of $x(t)$ is distributed in the frequency domain.

P2.27 (a) For $s(t) = e^{-at}u(t)$, $a > 0$:
$$E_s = \int_0^{\infty} e^{-2at}dt = \frac{1}{2a} \quad \text{(joules)}.$$

(b)
$$E_s = \int_{-\infty}^{\infty}|X(f)|^2df = \int_{-\infty}^{\infty}\frac{df}{a^2 + 4\pi^2f^2} = 2\cdot\frac{1}{2\pi a}\tan^{-1}\left(\frac{2\pi f}{a}\right)\Bigg|_{f=0}^{\infty} = \frac{1}{2a} \quad \text{(joules)}.$$

(c)
$$2\int_0^W |X(f)|^2df = 0.95\cdot\frac{1}{2a}$$
$$2\int_0^W \frac{df}{4\pi^2f^2 + a^2} = \frac{1}{\pi a}\tan^{-1}\left(\frac{2\pi f}{a}\right)\Bigg|_{f=0}^{W} = \frac{1}{\pi a}\tan^{-1}\left(\frac{2\pi W}{a}\right)$$
$$\text{Solve } \frac{1}{\pi a}\tan^{-1}\left(\frac{2\pi W}{a}\right) = \frac{0.95}{2a} \Rightarrow W = \frac{a}{2\pi}\tan\left(\frac{0.95\pi}{2}\right) = 2.02a \quad \text{(Hz)}.$$
Note that the required bandwidth is inversely proportional to the time constant $\frac{1}{a}$: the smaller the time constant, the larger the bandwidth, and vice versa. Makes intuitive sense, n'est-ce pas?
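The 95%-energy bandwidth is quick to confirm numerically (a Python sketch; $a = 1$ is our arbitrary normalization):

```python
import math

def energy_fraction(W, a):
    """Fraction of the total energy 1/(2a) of e^{-at}u(t) contained in |f| <= W."""
    total = 1.0 / (2*a)
    inside = (1.0/(math.pi*a)) * math.atan(2*math.pi*W/a)
    return inside / total

a = 1.0
W95 = (a/(2*math.pi)) * math.tan(0.95*math.pi/2)
# W95 is about 2.02*a, and by construction it captures 95% of the energy
```

Evaluating `energy_fraction(W95, a)` returns 0.95 to machine precision, and `W95` is approximately $2.02a$, matching the closed form.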
P2.28 (a) Since $s_T(t) = \int_{-\infty}^{\infty} S_T(f)e^{j2\pi ft}df$, write $s_T^2(t)$ as:
$$s_T^2(t) = \left[\int_{f=-\infty}^{\infty} S_T(f)e^{j2\pi ft}df\right]\left[\int_{\lambda=-\infty}^{\infty} S_T(\lambda)e^{j2\pi\lambda t}d\lambda\right]$$
$$P = \lim_{T\to\infty}\frac{1}{T}\int_{-\infty}^{\infty} s_T^2(t)dt = \lim_{T\to\infty}\frac{1}{T}\int_{t=-\infty}^{\infty}\left[\int_{f=-\infty}^{\infty} S_T(f)e^{j2\pi ft}df\int_{\lambda=-\infty}^{\infty} S_T(\lambda)e^{j2\pi\lambda t}d\lambda\right]dt$$
Interchange the order of integration:
$$P = \lim_{T\to\infty}\frac{1}{T}\int_{f=-\infty}^{\infty}df\,S_T(f)\int_{\lambda=-\infty}^{\infty}d\lambda\,S_T(\lambda)\int_{t=-\infty}^{\infty} e^{j2\pi(\lambda+f)t}dt = \lim_{T\to\infty}\frac{1}{T}\int_{f=-\infty}^{\infty}df\,S_T(f)\int_{\lambda=-\infty}^{\infty}d\lambda\,\delta(\lambda+f)S_T(\lambda)$$
$$= \lim_{T\to\infty}\frac{1}{T}\int_{-\infty}^{\infty}df\,S_T(f)S_T(-f) = \lim_{T\to\infty}\frac{1}{T}\int_{-\infty}^{\infty}|S_T(f)|^2df$$
In the above derivation we made use of the facts that $\int_{t=-\infty}^{\infty} e^{j2\pi(\lambda+f)t}dt = \delta(\lambda+f)$ and $\int_{\lambda=-\infty}^{\infty}\delta(\lambda+f)S_T(\lambda)d\lambda = S_T(-f) = S_T^*(f)$ (since $s_T(t)$ is real).

Note that as $T \to \infty$, the integral $\int_{-\infty}^{\infty}|S_T(f)|^2df = \int_{-\infty}^{\infty} s_T^2(t)dt \to \infty$. To overcome this difficulty, interchange the integration and limiting operations (mathematicians may worry about this; engineers shall assume the functions exhibit reasonable behaviour). Then
$$P = \int_{-\infty}^{\infty}\lim_{T\to\infty}\frac{|S_T(f)|^2}{T}df.$$

(b) From $P = \lim_{T\to\infty}\frac{1}{T}\int_{-\infty}^{\infty} s_T^2(t)dt$, $P$ has units of $\frac{\text{volts}^2\cdot\text{sec}}{\text{sec}} = \frac{\text{joules}}{\text{sec}} = \text{watts}$ $\Rightarrow \lim_{T\to\infty}\frac{|S_T(f)|^2}{T}$ has units of $\frac{\text{watts}}{\text{Hz}}$. Call the limit a power spectral density, i.e., it shows how the power of $s(t)$ is distributed in the frequency domain.

P2.29
$$\mathcal{F}\{R(\tau)\} = \int_{\tau=-\infty}^{\infty} R(\tau)e^{-j2\pi f\tau}d\tau = \int_{\tau=-\infty}^{\infty}\left[\lim_{T\to\infty}\frac{1}{T}\int_{t=-\infty}^{\infty} s_T(t)s_T(t+\tau)dt\right]e^{-j2\pi f\tau}d\tau$$
$$= \lim_{T\to\infty}\frac{1}{T}\int_{t=-\infty}^{\infty}dt\,s_T(t)\underbrace{\int_{\tau=-\infty}^{\infty} s_T(t+\tau)e^{-j2\pi f\tau}d\tau}_{\stackrel{\lambda=t+\tau}{=}\ e^{j2\pi ft}\int_{-\infty}^{\infty} s_T(\lambda)e^{-j2\pi f\lambda}d\lambda\ =\ e^{j2\pi ft}S_T(f)}$$
$$= \lim_{T\to\infty}\frac{1}{T}S_T(f)\underbrace{\int_{t=-\infty}^{\infty} s_T(t)e^{j2\pi ft}dt}_{=S_T(-f) = S_T^*(f)}$$
$$\Rightarrow \mathcal{F}\{R(\tau)\} = \lim_{T\to\infty}\frac{|S_T(f)|^2}{T}.$$
P2.30 (a)
$$R(\tau) = \lim_{T\to\infty}\frac{1}{T}\int_{-\infty}^{\infty} s_T(t)s_T(t+\tau)dt = \lim_{T\to\infty}\frac{V^2(T - \tau)}{T} = V^2 \quad \text{watts}$$
$$S(f) = V^2\delta(f) \quad \text{watts/Hz; all the power is concentrated at } f = 0 \text{ (DC only)}.$$

(b)
$$R(\tau) = \lim_{T\to\infty}\frac{1}{T}\int_{-\infty}^{\infty} s_T(t)s_T(t+\tau)dt = \lim_{T\to\infty}\frac{V^2(T/2 - \tau)}{T} = \frac{V^2}{2} \quad \text{watts}$$
$$S(f) = \frac{V^2}{2}\delta(f) \quad \text{watts/Hz (half a DC, half the power)}.$$

(c)
$$R(\tau) = \lim_{T\to\infty}\frac{1}{T}\int_{-T/2}^{T/2} V\cos(2\pi f_c t + \theta)\,V\cos\left[2\pi f_c(t+\tau) + \theta\right]dt$$
$$= \underbrace{\lim_{T\to\infty}\frac{1}{T}\int_{-T/2}^{T/2}\frac{V^2}{2}\cos\left[2\pi f_c(2t+\tau) + 2\theta\right]dt}_{=0} + \underbrace{\lim_{T\to\infty}\frac{1}{T}\frac{V^2}{2}\int_{-T/2}^{T/2}\cos(2\pi f_c\tau)dt}_{=\frac{V^2}{2}\cos(2\pi f_c\tau)}$$
$$\Rightarrow R(\tau) = \frac{V^2}{2}\cos(2\pi f_c\tau) \text{ watts and } S(f) = \frac{V^2}{4}\left[\delta(f - f_c) + \delta(f + f_c)\right] \text{ watts/Hz}.$$

[Figures 2.17 and 2.18: $s_T(t)$ and its shifted version $s_T(t+\tau)$ used in computing the correlation integrals for parts (a) and (b).]

The average power of a sinusoid is $\frac{V^2}{2}$ watts. Half of it is concentrated at $f = f_c$ and half at $f = -f_c$.
P2.31
$$R(\tau) = \lim_{T\to\infty}\frac{1}{T}\int_{-\infty}^{\infty} s_T(t)s_T(t+\tau)dt$$
$$R(\tau + T_s) = \lim_{T\to\infty}\frac{1}{T}\int_{-\infty}^{\infty} s_T(t)\underbrace{s_T(t + \tau + T_s)}_{s_T(t+\tau)}dt = R(\tau)$$
where $T_s$ is the period of $s(t)$.

P2.32 (a) Multiplication by $t$ in the time domain results in a differentiation in the frequency domain.

(b)
$$S(f) = \int_{-\infty}^{\infty} s(t)e^{-j2\pi ft}dt$$
$$\frac{dS(f)}{df} = \int_{-\infty}^{\infty}(-j2\pi t)s(t)e^{-j2\pi ft}dt = -j2\pi\int_{-\infty}^{\infty} ts(t)e^{-j2\pi ft}dt$$
which shows that $ts(t)$ and $\frac{1}{-j2\pi}\frac{dS(f)}{df}$ are a Fourier transform pair. Continuing, it is easily shown that $t^ns(t)$ and $\left(\frac{1}{-j2\pi}\right)^n\frac{d^nS(f)}{df^n}$ are a Fourier transform pair.
P2.33 Duality states that if $s(t) \leftrightarrow S(f)$ then $S(t) \leftrightarrow s(-f)$. Therefore, if
$$Ve^{-a|t|} \leftrightarrow \frac{2Va}{a^2 + 4\pi^2f^2}$$
then
$$Ve^{-a|f|} \leftrightarrow \frac{2Va}{a^2 + 4\pi^2t^2} = \frac{\frac{2Va}{4\pi^2}}{\frac{a^2}{4\pi^2} + t^2}.$$
Adjust $V, a$ to get $\frac{1}{1+t^2}$: $\frac{a^2}{4\pi^2} = 1 \Rightarrow a = 2\pi$, and $\frac{2Va}{4\pi^2} = \frac{V}{\pi} = 1 \Rightarrow V = \pi$. Therefore
$$\frac{1}{1+t^2} \leftrightarrow \pi e^{-2\pi|f|}.$$
It is easy enough to check that
$$\frac{1}{1+t^2} = \mathcal{F}^{-1}\left\{\pi e^{-2\pi|f|}\right\} = \int_{-\infty}^{\infty}\pi e^{-2\pi|f|}e^{j2\pi ft}df.$$

P2.34 P2.33 shows that $s(t) = \frac{1}{1+t^2} \leftrightarrow \pi e^{-2\pi|f|} = S(f)$ are a Fourier transform pair.
From P2.32 we have that $ts(t) = \frac{t}{1+t^2}$ and $\frac{1}{-j2\pi}\frac{dS(f)}{df}$ are a Fourier transform pair, or
$$\frac{t}{1+t^2} \leftrightarrow \frac{1}{-j2\pi}\frac{d\left(\pi e^{-2\pi|f|}\right)}{df}.$$
Now with $f < 0$ we have $\frac{d(e^{2\pi f})}{df} = 2\pi e^{2\pi f}$, and with $f > 0$ we have $\frac{d(e^{-2\pi f})}{df} = -2\pi e^{-2\pi f}$. Thus,
$$\frac{d\left(e^{-2\pi|f|}\right)}{df} = -2\pi e^{-2\pi|f|}\text{sgn}(f)$$
$$\Rightarrow \frac{t}{1+t^2} \leftrightarrow -j\pi e^{-2\pi|f|}\text{sgn}(f).$$

1 + t2
R
P2.35 Certainly [s1 (t) + s2 (t)]2 dt 0 for all or
Z Z Z
2
s1 (t)dt + 2 s1 (t)s2 (t)dt + 2
s2 (t)dt 2 0.
uy

| {z } | {z } | {z }
c b a

2
The above is a quadratic in with coefficients a, b, c and roots 1,2 = b 2a b 4ac
. For the
polynomial to be always 0 requires that the roots to be complex (at very minimum a
double root), i.e., the quadratic should never intersect the axis, at best it may only touch
Ng

it. Therefore, b2 4ac 0 or


Z 2 Z Z
4 s1 (t)s2 (t)dt 4 s21 (t)dt s22 (t)dt

Z sZ Z

or s1 (t)s2 (t)dt s21 (t)dt s22 (t)dt

sZ sZ
p p
= s21 (t)dt s22 (t)dt = E1 E2 .

Equality holds when s1 (t) = Ks2 (t), i.e., one signal is a scaled version of the other.
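The inequality, and the equality condition, can be illustrated with discretized signals (the two test signals below are our arbitrary choices):

```python
import math

# Discretized stand-ins for s1(t), s2(t) on a common time grid
dt = 0.001
ts = [n*dt for n in range(-5000, 5000)]
s1 = [math.exp(-abs(t)) for t in ts]
s2 = [math.cos(3*t)*math.exp(-t*t) for t in ts]

cross = sum(x*y for x, y in zip(s1, s2)) * dt   # inner product integral
E1 = sum(x*x for x in s1) * dt                  # energy of s1
E2 = sum(y*y for y in s2) * dt                  # energy of s2
# |cross| <= sqrt(E1 * E2); equality only when one signal scales the other
```

For any scaled copy `s3 = [K*x for x in s1]`, the bound is met with equality, as the double-root argument predicts.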

P2.36
$$\left|\int_{-\infty}^{\infty} s_1(t)s_2(t)dt\right| = \left|\int_{-\infty}^{\infty} S_1(f)S_2^*(f)df\right| \leq \sqrt{\int_{-\infty}^{\infty}|S_1(f)|^2df}\sqrt{\int_{-\infty}^{\infty}|S_2(f)|^2df}.$$

P2.37 Let $s_1(t) = s(t)$, $s_2(t) = s(t + \tau)$. Then
$$\underbrace{\int_{-\infty}^{\infty} s(t)s(t+\tau)dt}_{R_s(\tau)} \leq \sqrt{\underbrace{\int_{-\infty}^{\infty} s^2(t)dt}_{R_s(0)}\underbrace{\int_{-\infty}^{\infty} s^2(t+\tau)dt}_{R_s(0)}}$$
Therefore, $|R_s(\tau)| \leq R_s(0)$.

Remark: The above was done for energy signals but is readily shown to be true for power signals.

P2.38
$$s(0) = \int_{-\infty}^{\infty} S(f)df \leq \int_{-\infty}^{\infty}|S(f)|df; \qquad S(0) = \int_{-\infty}^{\infty} s(t)dt \leq \int_{-\infty}^{\infty}|s(t)|dt$$
$$\Rightarrow WT = \frac{\int_{-\infty}^{\infty}|S(f)|df}{2S(0)}\cdot\frac{\int_{-\infty}^{\infty}|s(t)|dt}{s(0)} \geq \frac{1}{2}.$$

P2.39 (a)
$$s(t) = Ve^{-at}u(t)$$
$$E_s = \int_{-\infty}^{\infty} s^2(t)dt = \int_{-\infty}^{\infty} V^2e^{-2at}u(t)dt = \frac{V^2}{2a} \quad \text{(joules)}.$$

(b) Let $V = 1$:
$$\frac{2}{a^2}\int_0^{\Delta f}\frac{df}{1 + \frac{4\pi^2f^2}{a^2}} = (1-\epsilon)\frac{1}{2a}.$$
Let $f_n = \frac{2\pi f}{a}$, i.e., change variables. Then
$$\frac{2}{a^2}\cdot\frac{a}{2\pi}\int_0^{\frac{2\pi\Delta f}{a}}\frac{df_n}{1 + f_n^2} = (1-\epsilon)\frac{1}{2a}$$
$$\text{or } \tan^{-1}\left(\frac{2\pi\Delta f}{a}\right) = (1-\epsilon)\frac{\pi}{2} \Rightarrow \Delta f = \frac{a}{2\pi}\tan\left((1-\epsilon)\frac{\pi}{2}\right).$$

(c) $s(0) = 1 \Rightarrow T = \int_0^{\infty} e^{-at}dt = \frac{1}{a}$. With this $T$ and the $\Delta f$ found in (b) we have
$$T\Delta f = \frac{\Delta f}{a} = \frac{1}{2\pi}\tan\left((1-\epsilon)\frac{\pi}{2}\right).$$

(d) Matlab plot.

P2.40 (a)
\[
E_s = \int_{-\infty}^{\infty} s^2(t)\,dt = \int_{-\infty}^{\infty} e^{-2a|t|}\,dt = 2\int_0^{\infty} e^{-2at}\,dt = \frac{1}{a}\ \text{(joules), where } V = 1.
\]

(b) Using Equation (2.62c), \(\widetilde{f}\) is determined from
\[
2\int_0^{\widetilde{f}} \frac{16\pi^2 f^2}{(a^2+4\pi^2 f^2)^2}\,df = \frac{2}{a^4}\int_0^{\widetilde{f}} \frac{16\pi^2 f^2}{\left(1+\frac{4\pi^2 f^2}{a^2}\right)^2}\,df = \frac{\beta}{a}.
\]
Let the integration variable be \(\lambda = \frac{2\pi f}{a}\). Then,
\[
\int_0^{\frac{2\pi\widetilde{f}}{a}} \frac{\lambda^2}{(1+\lambda^2)^2}\,d\lambda = \frac{\pi\beta}{4}.
\]
Change variable again to \(x = \lambda^2 \Rightarrow d\lambda = \frac{dx}{2\sqrt{x}}\):
\[
\int_{\lambda=0}^{\frac{2\pi\widetilde{f}}{a}} \frac{\lambda^2}{(1+\lambda^2)^2}\,d\lambda = \int_{x=0}^{\frac{4\pi^2\widetilde{f}^2}{a^2}} \frac{\sqrt{x}}{2(1+x)^2}\,dx.
\]
To integrate, use G&R, p. 71, 2.313.5, where \(z_1 = a + bx\), \(a = b = 1\). Let \(f_n^2 = \frac{4\pi^2\widetilde{f}^2}{a^2}\). The integral becomes
\[
\left.-\frac{\sqrt{x}}{2(1+x)}\right|_0^{f_n^2} + \frac{1}{4}\int_0^{f_n^2} \frac{dx}{(1+x)\sqrt{x}},
\]
and using 2.211 (same page in G&R)
\[
\int_0^{f_n^2} \frac{dx}{(1+x)\sqrt{x}} = \left. 2\tan^{-1}\sqrt{x}\,\right|_0^{f_n^2}.
\]
Putting it all together we have
\[
-\frac{f_n}{2(1+f_n^2)} + \frac{1}{2}\tan^{-1}(f_n) = \frac{\pi\beta}{4} \tag{2.3}
\]
as the equation that defines \(f_n\). Need to solve in Matlab for various values of \(\beta\).

(c) With \(s(0) = 1\), \(T = \int_{-\infty}^{\infty}\left|e^{-a|t|}\,\mathrm{sgn}(t)\right|dt = 2\int_0^{\infty} e^{-at}\,dt = \frac{2}{a}\).
In (b), after solving for \(f_n\) we get \(\widetilde{f} = c f_n\), \(c = \frac{a}{2\pi}\), so \(T\widetilde{f} = \frac{2}{a}\cdot\frac{a}{2\pi}f_n = \frac{1}{\pi}f_n\)
(where, to repeat, \(f_n\) is the solution of (2.3) for the various values of \(\beta\)).
(d) Matlab work.
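Equation (2.3) has no closed-form solution; a minimal sketch of solving it numerically (bisection works because the left-hand side is increasing in fn and approaches pi/4; the function and variable names are my own):

```python
import math

def g(fn, beta):
    # Left-hand side of equation (2.3) minus the right-hand side
    return 0.5 * math.atan(fn) - fn / (2 * (1 + fn**2)) - math.pi * beta / 4

def solve_fn(beta, lo=1e-9, hi=1e6, iters=200):
    # Bisection: g is increasing, g(0) < 0 and g(fn) -> pi/4 - pi*beta/4 > 0 for beta < 1
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if g(mid, beta) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for beta in (0.5, 0.9):
    fn = solve_fn(beta)
    print(beta, round(fn, 4), round(fn / math.pi, 4))   # fn and T*f_tilde = fn/pi
```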
uy

P2.41 Suppose \(s(t) \leftrightarrow S(f)\), and that after n differentiations an impulse appears:
\[
\frac{d^n s(t)}{dt^n} = s_1(t) + K\delta(t) \ \leftrightarrow\ S_1(f) + K,
\]
where \(S_1(f) \rightarrow 0\) as \(f \rightarrow \infty\). Now
\[
\mathcal{F}\left\{\frac{d^n s(t)}{dt^n}\right\} = (j2\pi f)^n S(f) \ \Rightarrow\ S(f) = \frac{S_1(f)}{(j2\pi f)^n} + \frac{K}{(j2\pi f)^n}.
\]
Since \(S_1(f) \rightarrow 0\) as \(f \rightarrow \infty\), the above means that \(S(f) \approx \frac{K}{(j2\pi f)^n}\) for large f, i.e., the asymptotic behaviour of the spectrum depends on how many times the signal can be differentiated before an impulse (or impulses) appears.

How many times can one differentiate a pure sinusoid (i.e., one that lasts from to +)
before an impulse appears? How concentrated is the amplitude spectrum of the sinusoid?


P2.42 (a)
\[
S_{\text{DSB-SC}}(f) = \int_{-\infty}^{\infty} m(t)\,\frac{e^{j2\pi f_c t} + e^{-j2\pi f_c t}}{2}\,e^{-j2\pi f t}\,dt
\]
\[
= \frac{1}{2}\int_{-\infty}^{\infty} m(t)e^{-j2\pi(f-f_c)t}\,dt + \frac{1}{2}\int_{-\infty}^{\infty} m(t)e^{-j2\pi(f+f_c)t}\,dt = \frac{M(f-f_c)}{2} + \frac{M(f+f_c)}{2}.
\]

Figure 2.19: magnitude spectrum \(|S_{\text{DSB-SC}}(f)|\) (V/Hz), of height A/2 and occupying \(f_c - W \leq |f| \leq f_c + W\), together with the phase spectrum \(\angle S_{\text{DSB-SC}}(f)\) (rad).

(b) Note: The magnitude spectrum is a shifted and scaled version of the original spectrum
while the phase spectrum is simply a shifted version.
&
(c) The passband bandwidth is 2W Hz.

How would the above change if the carrier is sin(2fc t) instead of cos(2fc t)?
P2.43 The difference between \(S_{\text{DSB-SC}}(f)\) of P2.42 and \(S_{\text{AM}}(f)\) is a spectral component at \(\pm f_c\) due to the carrier (or, if you wish, a DC component added to m(t)). It looks like Fig. 2.20.

Figure 2.20: the AM spectrum, i.e., the same sidebands of height A/2 as in Fig. 2.19 plus carrier impulses of strength \(V_C/2\) at \(\pm f_c\).

Note: We invariably assume that the message, m(t), has zero DC. DC is useful for powering
stuff but contains no information of interest to be transmitted. The reason to insert a DC is
to simplify the demodulator as we see in the next 2 problems.
P2.44 By now we are beginning to appreciate that multiplying a signal in the time domain by a pure sinusoid simply shifts the spectrum and scales it. It is the shift that is important; the scaling is somewhat arbitrary and in practice would eventually be controlled by a volume control.


(a) For DSB-SC, \(S_1(f) = \left[S_{\text{DSB-SC}}(f-f_c) + S_{\text{DSB-SC}}(f+f_c)\right]/2\), which looks like Fig. 2.21 (note that the impulses in the figure only occur for AM, not for DSB-SC).

Figure 2.21: spectrum of \(s_1(t)\); a component of height A/2 at baseband (the output of the lowpass filter) and components of height A/4 around \(\pm 2f_c\) (filtered out by the lowpass filter). For AM there are additionally impulses of strength \(V_C/2\) at f = 0 and \(V_C/4\) at \(\pm 2f_c\).

(b) The AM spectrum is the same as the DSB-SC except for the DC component, which results in the impulses at 0 and \(\pm 2f_c\). Normally the demodulated signal, i.e., \(s_1(t)\), would not only be passed through a lowpass filter but also through a (well designed) highpass filter to eliminate the DC component.
(c) Of course, as usual, multiplying by a sinusoid shifts the spectrum, scales it, etc. But here it is the scaling that is important; perhaps a more appropriate terminology would be scaling and phasing. Let us look at the problem in 2 ways (not completely distinct). First in the time domain:
\[
s_1(t) = s(t)\sin(2\pi f_c t) = m(t)\cos(2\pi f_c t)\sin(2\pi f_c t).
\]
Now
\[
\sin x \cos y = \frac{\sin(x+y)+\sin(x-y)}{2} \ \Rightarrow\ s_1(t) = \frac{1}{2}m(t)\sin\left[2\pi(2f_c)t\right],
\]
which shows that the spectrum of \(s_1(t)\) is that of M(f) shifted to \(\pm 2f_c\) \(\Rightarrow\) the output of the lowpass filter is ZERO.

In the frequency domain:
\[
s_1(t) = s(t)\,\frac{e^{j2\pi f_c t} - e^{-j2\pi f_c t}}{2j} \ \Rightarrow\ S_1(f) = \frac{1}{2j}\left[S(f-f_c) - S(f+f_c)\right].
\]
Considering \(\frac{1}{2j}\) to be just a scaling factor, albeit a complex one, \(S_1(f)\) is S(f) shifted up and down (more appropriately to the left and right) by \(f_c\). But note the minus sign in front of \(S(f+f_c)\). This results in the spectrum around f = 0 canceling out, but we know this already from the time domain analysis.

Finally, the \(\frac{1}{2j}\) not only scales but adds a phase of \(-\frac{\pi}{2}\). Putting this all together, the spectrum \(S_1(f)\) looks like Fig. 2.22.

To generalize one should consider \(s_1(t) = s(t)\cos(2\pi f_c t + \theta)\), where \(\theta\) would represent the phase difference between the oscillator at the transmitter (modulator) and the one at the receiver (demodulator), and see what happens to the output of the demodulator.


Figure 2.22: magnitude spectrum \(|S_1(f)|\) with components of height A/4 around \(\pm 2f_c\) (and nothing at f = 0), together with the phase spectrum \(\angle S_1(f)\), i.e., the shifted phase of S(f) offset by \(\mp\pi/2\).

The last chapter in the text considers a circuit (phase locked loop) which estimates the incoming phase of the received signal. But alas it only works for AM. The next problem looks at a demodulation technique that does not require knowledge of the phase, hence it is called noncoherent.

P2.45 (a) \(s_1(t) = |[V_c + m(t)]\cos 2\pi f_c t| = |V_c + m(t)|\,|\cos 2\pi f_c t|\). But \(V_c + m(t) \geq 0\). Therefore, \(|V_c + m(t)| = V_c + m(t)\). And \(|\cos 2\pi f_c t|\) is a fullwave rectified sinusoid of period \(\frac{1}{2f_c}\), or \(f_r = 2f_c\). It therefore has a Fourier series \(\sum_{k=-\infty}^{\infty} D_k e^{j2\pi k f_r t}\), where, most importantly, \(D_0\) (the DC value) \(\neq 0\).

The spectrum of \(s_1(t) = [V_c + m(t)]\,|\cos 2\pi f_c t| = \sum_{k=-\infty}^{\infty} D_k[V_c + m(t)]e^{j2\pi k f_r t}\) then consists of shifted versions of the spectrum of \([V_c + m(t)]\), shifted by \(kf_r = 2kf_c\) Hz, with the kth shift weighted by \(D_k\).

The output of the lowpass filter is \(s_{\text{out}} = D_0[V_c + m(t)]\) since the spectrum components at \(\pm 2f_c\), \(\pm 4f_c\), etc., are filtered out. The DC term \(D_0 V_c\) is easily filtered out by a highpass filter whose output would be \(D_0 m(t)\) (assuming m(t) does not have significant energy around f = 0, typically the case).

(b) \(s_1(t)\) now is \(s_1(t) = |m(t)|\,|\cos 2\pi f_c t| = \sum_{k=-\infty}^{\infty} D_k|m(t)|e^{j2\pi k f_r t}\). The spectrum of \(|m(t)|\) will lie in a larger bandwidth, theoretically infinite but practically in some bandwidth, say \(W_c = \alpha W\) (\(\alpha > 1\) but say less than 3). Then by adjusting the bandwidth of the lowpass filter to \(W_c\), the output would be \(D_0|m(t)|\), a distorted version of m(t).

Of course, \(V_c = 0\) is an extreme condition, but distortion shall exist as long as \(V_c < \max|m(t)|\) since then \(|V_c + m(t)| \neq V_c + m(t)\).

P2.46 (a) The double sideband suppressed carrier spectrum is shown in Fig. 2.23.

Figure 2.23: \(|S_{\text{DSB-SC}}(f)|\) (V/Hz), sidebands of height A/2 occupying \(f_c - W \leq |f| \leq f_c + W\).

And the function \(\frac{1+\mathrm{sgn}(f-f_c)}{2} + \frac{1-\mathrm{sgn}(f+f_c)}{2}\) plots as in Fig. 2.24.

Figure 2.24: the function equals 1 for \(|f| > f_c\) and 0 for \(|f| < f_c\).

Multiplying the 2 frequency functions together results in the upper single-sideband spectrum of Fig. 2.25.

Figure 2.25: \(|S_{\text{USSB}}(f)|\) (V/Hz), sidebands of height A/2 occupying \(f_c \leq |f| \leq f_c + W\), together with the phase \(\angle S_{\text{USSB}}(f)\) (rad).

Note that the magnitude function is even and the phase function is odd. This tells us that the time domain signal is real.

(b) \(s(t) = s_{\text{USSB}}(t)\,2\cos 2\pi f_c t = s_{\text{USSB}}(t)\left[e^{j2\pi f_c t} + e^{-j2\pi f_c t}\right]\). Therefore, \(S(f) = S_{\text{USSB}}(f - f_c) + S_{\text{USSB}}(f + f_c)\), i.e., the upper sideband spectrum shifted by \(\pm f_c\) Hz.
(c) Since the spectrum is exactly that of m(t) and the Fourier transform is unique.
Ng

P2.47 (a) \(S_{\text{DSB-SC}}(f) = \frac{M(f-f_c)+M(f+f_c)}{2}\) and some very simple algebra gives (P2.11).
(b) \(m(t)\cos(2\pi f_c t)\).
(c) \(M(f)\,\mathrm{sgn}(f)\) is a product in the frequency domain, which means that in the time domain we have a convolution, i.e., \(m(t) \ast h(t)\) where \(h(t) = \mathcal{F}^{-1}\{\mathrm{sgn}(f)\}\). A shift of the spectrum by \(f_c\) is accomplished by multiplying the time function by \(e^{j2\pi f_c t}\). Therefore,
\[
M(f-f_c)\,\mathrm{sgn}(f-f_c) \ \leftrightarrow\ \left[m(t) \ast \mathcal{F}^{-1}\{\mathrm{sgn}(f)\}\right]e^{j2\pi f_c t}.
\]
(d) The shift is now \(-f_c\) Hz, hence \(M(f+f_c)\,\mathrm{sgn}(f+f_c) \leftrightarrow \left[m(t) \ast \mathcal{F}^{-1}\{\mathrm{sgn}(f)\}\right]e^{-j2\pi f_c t}\).
(e) Use the fact that the inverse transform is a linear operation, i.e., superposition holds.

Figure 2.26: the two spectra, each of height A/2, are shifted by \(\pm f_c\); the baseband component (the output of the lowpass filter) is m(t) (actually m(t)/2), while the components around \(\pm 2f_c\) are filtered out.

(f) No. The frequency function \(\mathrm{sgn}(f)\) has a magnitude spectrum \(|\mathrm{sgn}(f)| = 1\), an even function, but its phase spectrum is \(\angle\mathrm{sgn}(f) = 0\) for \(f \geq 0\) and \(= \pi\) for \(f < 0\), which is not an odd function.
(g) Yes. The magnitude spectrum is \(|-j\,\mathrm{sgn}(f)| = 1\), an even function, and the phase spectrum is \(\angle[-j\,\mathrm{sgn}(f)] = -\pi/2\) for \(f \geq 0\) and \(= \pi/2\) for \(f < 0\), an odd function.

Recall that a real time function always has an even magnitude spectrum and an odd phase spectrum.
(h) \(-jh(t) \leftrightarrow -j\,\mathrm{sgn}(f)\).
(i) Note that
\[
\frac{e^{j2\pi f_c t} - e^{-j2\pi f_c t}}{2j} = \sin(2\pi f_c t).
\]
From (b) and (h), we get the block diagram of Fig. P2.42.

P2.48 In the frequency domain the lower single-sideband signal looks like Fig. 2.27.

Figure 2.27: \(|S_{\text{LSSB}}(f)|\) (V/Hz), sidebands of height A/2 occupying \(f_c - W \leq |f| \leq f_c\); the piece around \(-f_c\) is \(\frac{M(f+f_c)}{2}\,\frac{1+\mathrm{sgn}(f+f_c)}{2}\) and the piece around \(+f_c\) is \(\frac{M(f-f_c)}{2}\,\frac{1-\mathrm{sgn}(f-f_c)}{2}\).

Therefore,
\[
S_{\text{LSSB}}(f) = \frac{1}{2}\,\frac{M(f+f_c)+M(f-f_c)}{2} + \frac{1}{2}\left[\frac{M(f+f_c)\,\mathrm{sgn}(f+f_c)}{2} - \frac{M(f-f_c)\,\mathrm{sgn}(f-f_c)}{2}\right]
\]
In the time domain:
\[
\frac{m(t)}{2}\cos(2\pi f_c t) \ \leftrightarrow\ \frac{M(f+f_c)+M(f-f_c)}{4}
\]
Let \(h(t) = \mathcal{F}^{-1}\{j\,\mathrm{sgn}(f)\}\). Recognize
\[
\frac{m(t)\ast h(t)}{2}\cdot\frac{e^{-j2\pi f_c t}}{2j} \ \leftrightarrow\ \frac{M(f+f_c)}{2}\cdot\frac{j\,\mathrm{sgn}(f+f_c)}{2j}
\]
\[
\frac{m(t)\ast h(t)}{2}\cdot\frac{e^{j2\pi f_c t}}{2j} \ \leftrightarrow\ \frac{M(f-f_c)}{2}\cdot\frac{j\,\mathrm{sgn}(f-f_c)}{2j}
\]
Therefore,
\[
s_{\text{LSSB}}(t) = \frac{m(t)}{2}\cos(2\pi f_c t) + \frac{m(t)\ast h(t)}{2}\cdot\frac{e^{-j2\pi f_c t} - e^{j2\pi f_c t}}{2j}
= \frac{m(t)}{2}\cos(2\pi f_c t) - \frac{m(t)\ast h(t)}{2}\sin(2\pi f_c t).
\]
The above result means that the summer in Fig. 2.42 would now be a subtractor.

P2.49 Nothing, except that there would be a carrier at fc .

P2.50
\[
h_a(t) = -j\int_{-\infty}^{\infty} e^{-a|f|}\,\mathrm{sgn}(f)\,e^{j2\pi f t}\,df
= -j\left[-\int_{-\infty}^{0} e^{(a+j2\pi t)f}\,df + \int_{0}^{\infty} e^{(-a+j2\pi t)f}\,df\right]
= \frac{4\pi t}{a^2 + 4\pi^2 t^2} \ \xrightarrow{\ a\to 0\ }\ \frac{1}{\pi t}.
\]
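The closed form can be spot-checked by numerical integration (a sketch; the particular values of a and t are arbitrary, and the integration range/grid are chosen so the integrand has decayed and is well sampled):

```python
import numpy as np

def trap(g, dx):
    # Simple trapezoidal rule over a uniform grid
    return (g.sum() - 0.5 * (g[0] + g[-1])) * dx

a, t = 0.5, 0.8
f = np.linspace(-60.0, 60.0, 1_200_001)
integrand = -1j * np.exp(-a * np.abs(f)) * np.sign(f) * np.exp(1j * 2 * np.pi * f * t)

h_numeric = trap(integrand, f[1] - f[0]).real
h_closed = 4 * np.pi * t / (a**2 + 4 * np.pi**2 * t**2)
print(abs(h_numeric - h_closed) < 1e-3)   # True
```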

P2.51
\[
\int_{-\infty}^{\infty} s(t)\hat{s}(t)\,dt = \int_{-\infty}^{\infty} S(f)\hat{S}^*(f)\,df \quad \text{(Parseval)}
\]
\[
= \int_{-\infty}^{\infty} S(f)S^*(f)\,(-j\,\mathrm{sgn}(f))^*\,df
= j\int_{-\infty}^{\infty} \underbrace{|S(f)|^2}_{\text{even}}\,\underbrace{\mathrm{sgn}(f)}_{\text{odd}}\,df.
\]
\(|S(f)|^2\,\mathrm{sgn}(f)\) is odd \(\Rightarrow \int s(t)\hat{s}(t)\,dt = 0\), i.e., they are orthogonal.

Figure 2.28: \(s(t) \leftrightarrow S(f)\) passed through the Hilbert transform filter \(-j\,\mathrm{sgn}(f)\) gives \(\hat{s}(t) \leftrightarrow \hat{S}(f) = S(f)\,(-j\,\mathrm{sgn}(f))\).

P2.52 We have established that \(R(\tau) \leftrightarrow |S(f)|^2\) are a Fourier transform pair. \(\hat{S}(f) = -j\,\mathrm{sgn}(f)S(f)\), therefore \(|\hat{S}(f)|^2 = |-j\,\mathrm{sgn}(f)|^2|S(f)|^2 = |S(f)|^2\), i.e., the autocorrelation functions are the same.

P2.53 In general,
\[
\hat{M}(f) = -j\,\mathrm{sgn}(f)M(f) = -j\,\mathrm{sgn}(f)\underbrace{V\,\frac{\delta(f-f_c)+\delta(f+f_c)}{2}}_{\mathcal{F}\{V\cos(2\pi f_c t)\}}
\]
\[
= -jV\,\frac{\delta(f-f_c)\,\mathrm{sgn}(f) + \delta(f+f_c)\,\mathrm{sgn}(f)}{2}
= V\underbrace{\frac{\delta(f-f_c) - \delta(f+f_c)}{2j}}_{\mathcal{F}\{\sin(2\pi f_c t)\}}
\ \Rightarrow\ \hat{m}(t) = V\sin(2\pi f_c t).
\]
P2.54 (a)
\[
S(f) = S_{\text{PB}}(f) + j\hat{S}_{\text{PB}}(f) = S_{\text{PB}}(f) + j(-j\,\mathrm{sgn}(f))S_{\text{PB}}(f)
= S_{\text{PB}}(f)\,(1 + \mathrm{sgn}(f)) = \begin{cases} 2S_{\text{PB}}(f), & f \geq 0 \\ 0, & f < 0 \end{cases}
\]

Figure 2.29: \(S(f) = 2S_{\text{PB}}(f)u(f)\) (V/Hz), i.e., the positive-frequency portion of \(S_{\text{PB}}(f)\) doubled, with nothing at negative frequencies.

(b) Choose \(f_s\) as shown. Note that the choice is somewhat arbitrary.

Figure 2.30: \(S_{\text{BB}}(f) = 2S_{\text{PB}}(f+f_s)u(f+f_s)\) (V/Hz), the spectrum of (a) shifted down to baseband by \(f_s\).

Note that \(s_{\text{BB}}(t)\) is complex because the magnitude \(|S_{\text{BB}}(f)|\) is not an even function and the phase is not an odd function, at least not in general.


(c) From (b) we have that \(s_{\text{BB}}(t) = s(t)e^{-j2\pi f_s t}\)
\[
\Rightarrow\ \Re\left\{s_{\text{BB}}(t)e^{j2\pi f_s t}\right\} = \Re\left\{s(t)e^{-j2\pi f_s t}e^{j2\pi f_s t}\right\} = \Re\{s(t)\}.
\]
But \(\Re\{s(t)\} = \Re\{s_{\text{PB}}(t) + j\hat{s}_{\text{PB}}(t)\} = s_{\text{PB}}(t)\). Note that \(\hat{s}_{\text{PB}}(t)\) is a real signal when \(s_{\text{PB}}(t)\) is real.
\[
s_{\text{PB}}(t) = \Re\left\{\left[s_{\text{BB}}^{[\text{real}]}(t) + j s_{\text{BB}}^{[\text{imag}]}(t)\right]\left[\cos(2\pi f_s t) + j\sin(2\pi f_s t)\right]\right\}
= s_{\text{BB}}^{[\text{real}]}(t)\cos(2\pi f_s t) - s_{\text{BB}}^{[\text{imag}]}(t)\sin(2\pi f_s t).
\]
(d) If this axis of symmetry exists, then when \(S_{\text{PB}}(f)u(f)\) is shifted down by \(f_s\) the resultant baseband spectrum will have a magnitude spectrum that is even and a phase spectrum that is odd \(\Rightarrow\) the baseband signal is real, or the imaginary component \(s_{\text{BB}}^{[\text{imag}]}(t)\) is zero.




P2.55 The spectrum \(S(f) = S_{\text{USSB}}(f)u(f)\) looks like Fig. 2.31.

Figure 2.31: \(|S(f)|\) (V/Hz), a single sideband of height A occupying \(f_c \leq f \leq f_c + W\) (positive frequencies only).

Let \(s_{\text{BB}}(t) = s(t)e^{-j2\pi f_s t}\) and choose \(f_s\) to be \(f_c\). Then the picture looks like Fig. 2.32.

Figure 2.32: \(|S_{\text{BB}}(f)|\) (V/Hz), the sideband of height A shifted down to occupy \(0 \leq f \leq W\).

Note that \(s_{\text{BB}}(t)\) is complex, i.e., has a real and an imaginary component. The real component is modulated by \(\cos(2\pi f_c t)\), the imaginary component by \(\sin(2\pi f_c t)\), to produce the upper sideband signal, as shown in Fig. 2.42 in the textbook.


Chapter 3

Probability Theory, Random Variables and Random Processes
P3.1 A random experiment consists of tossing two fair coins. Denote H=Head and T=Tail.

(a) Obviously the sample space is \(\Omega = \{HH, HT, TH, TT\}\). Since the coins are said to be fair, the implication is that each individual outcome is equally probable, namely the probability of each outcome is \(\frac{1}{4}\).

Figure 3.1: Sample space representation for the random experiment of tossing two coins (event A = {HH, HT, TH}, event B = {HH, TT}).

(b) A = the event that at least one head shows = {HH, HT, TH}
\[
P(A) = \frac{1}{4} + \frac{1}{4} + \frac{1}{4} = \frac{3}{4}
\]
B = the event that there is a match of the two coins = {HH, TT}
\[
P(B) = \frac{1}{4} + \frac{1}{4} = \frac{2}{4} = \frac{1}{2}
\]
(c) Using conditional probability:
\[
P(A|B) = \frac{P(A \cap B)}{P(B)} = \frac{P(\{HH\})}{P(B)} = \frac{1/4}{1/2} = \frac{1}{2}
\]


\[
P(B|A) = \frac{P(A \cap B)}{P(A)} = \frac{P(\{HH\})}{P(A)} = \frac{1/4}{3/4} = \frac{1}{3}
\]

(d) Observe that \(P(A \cap B) = P(\{HH\}) = \frac{1}{4}\) and \(P(A)P(B) = \frac{3}{4}\cdot\frac{1}{2} = \frac{3}{8}\). Since \(P(A \cap B) \neq P(A)P(B)\), A and B are not statistically independent.

P3.2 We write down all the provided probabilities. The test reliability specifies the conditional
probability of y given x, where y is the test result, x the test outcome:

\[
P(y=1|x=1) = 0.95 \qquad P(y=1|x=0) = 0.05
\]
\[
P(y=0|x=1) = 0.05 \qquad P(y=0|x=0) = 0.95; \tag{3.1}
\]
and the disease prevalence tells us about the marginal probability of x:

P (x = 1) = 0.01 P (x = 0) = 0.99. (3.2)

From the marginal P (x) and the conditional probability P (y|x) we can deduce the joint

Sh
probability P (x, y) = P (x)P (y|x) and any other probabilities we are interested in. For
example, by the sum rule, the marginal probability of y = 1 the probability of getting a
positive result is

P (y = 1) = P (y = 1|x = 1)P (x = 1) + P (y = 1|x = 0)P (x = 0). (3.3)

The person has received a positive result y = 1 and is interested in how plausible it is that
&
she has the disease (i.e., that x = 1). The man in the street might be duped by the statement
the test is 95% reliable, so the persons positive result implies that there is a 95% chance
that the person has the disease. But this is incorrect. The correct solution is found using
the Bayes theorem.
P (y = 1|x = 1)P (x = 1)
P (x = 1|y = 1) = (3.4)
en

P (y = 1|x = 1)P (x = 1) + P (y = 1|x = 0)P (x = 0)


0.95 0.01
= (3.5)
0.95 0.01 + 0.05 0.99
= 0.16. (3.6)
uy

Thus, in spite of the positive result, the probability that the person has the disease is only 16%!

A follow-up question is: Suppose that the doctor recommends a medical procedure that you have a 50/50 chance of surviving. Do you accept her recommendation? The implicit assumption is that the disease is so nasty that you will not survive it. More interestingly, what should the probability of survival due to medical intervention (or without it) be to tip your decision one way or the other? How does the test reliability affect this? And so on.
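The posterior computation (3.4)–(3.6) in a few lines (the variable names are my own):

```python
# Posterior probability of disease given a positive test, per Bayes' theorem
p_pos_given_disease = 0.95      # test sensitivity
p_pos_given_healthy = 0.05      # false-positive rate
p_disease = 0.01                # prevalence

numerator = p_pos_given_disease * p_disease
posterior = numerator / (numerator + p_pos_given_healthy * (1 - p_disease))
print(round(posterior, 3))      # ≈ 0.16, matching (3.6)
```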

P3.3 An information source produces 0 and 1 with probabilities 0.6 and 0.4, respectively. The
output of the source is transmitted via a channel that has a probability of error (turning a 1
into a 0 or a 0 into a 1) equal to 0.1 (see Fig. 3.2).
Define the following events:

E1 = Output of the channel is 1 (3.7)


E2 = Output of the source is 1 (3.8)


Figure 3.2: the source outputs 0 with probability 0.6 and 1 with probability 0.4; the channel flips each bit with cross-over probability 0.1.

Obviously,
\[
E_1^c = \text{Output of the channel is } 0 \tag{3.9}
\]
\[
E_2^c = \text{Output of the source is } 0 \tag{3.10}
\]
Note that \(P(E_2) = 0.4\) and \(P(E_2^c) = 0.6\).

(a) Since what the channel does is independent from what the source does, the probability

that 1 is observed at the output of the channel can be calculated as follows:


\[
P(E_1) = P(\text{Output of the source is } 0 \cap \text{Channel makes error}) + P(\text{Output of the source is } 1 \cap \text{Channel makes no error})
\]
\[
= P(\text{Output of the source is } 0)P(\text{Channel makes error}) + P(\text{Output of the source is } 1)P(\text{Channel makes no error})
\]
\[
= 0.6 \times 0.1 + 0.4 \times (1-0.1) = 0.42 \tag{3.11}
\]

(b) The probability that a 1 is the output of the source if at the output of the channel a 1
is observed can be calculated using the conditional probabilities as follows:
\[
P(E_2|E_1) = \frac{P(E_2 \cap E_1)}{P(E_1)} = \frac{P(\text{Output of the source is } 1 \cap \text{Output of the channel is } 1)}{P(E_1)}
\]
\[
= \frac{P(\text{Output of the source is } 1 \cap \text{Channel makes no error})}{P(E_1)} = \frac{P(\text{Output of the source is } 1)P(\text{Channel makes no error})}{P(E_1)}
\]
\[
= \frac{0.4 \times 0.9}{0.42} = \frac{0.36}{0.42} = 0.857 \tag{3.12}
\]
0.42 0.42

(c) The question in Part (b) is exactly the same as Problem 3.2 if x and y are used to describe
the channels input and output values, while the reliability of the test is considered as
the probability that the channel makes no error, i.e., defines the transition probability.


Figure 3.3: the binary symmetric channel (BSC) with input x (P(x = 0) = p, P(x = 1) = 1 − p), output y, and cross-over probability \(\epsilon\).

P3.4 Consider a binary symmetric channel (BSC) with cross-over probability \(\epsilon\), i.e., \(P(y=1|x=0) = P(y=0|x=1) = \epsilon\), where x and y represent the binary input and output, respectively. The probability of a 0 transmitted is p (see Fig. 3.3).

(a)
\[
P(x=1|y=1) = \frac{P(x=1, y=1)}{P(y=1)} = \frac{P(y=1|x=1)P(x=1)}{P(y=1|x=1)P(x=1) + P(y=1|x=0)P(x=0)}
= \frac{(1-\epsilon)(1-p)}{(1-\epsilon)(1-p) + \epsilon p} \tag{3.13}
\]

(b)
\[
P[\text{error}] = P[(y=0, x=1) \text{ or } (y=1, x=0)]
= P(y=0|x=1)P(x=1) + P(y=1|x=0)P(x=0) = \epsilon(1-p) + \epsilon p = \epsilon \tag{3.14}
\]
uy


(c) There are nk = k!(nk)!
n!
possible sequences, each having k ones and (n k) zeros. Each
sequence (or event if you wish) is mutually exclusive. Therefore

n
P [k 1s are received] = P [a sequence has k 1s and (n k) 0s]
Ng

k

n
= P [there are k 1s]P [there are (n k) 0s]
k

n
= (P [1 received])k (P [0 received])nk . (3.15)
k

The last two equalities in the above follow from the fact that the transmitted symbols
are statistically independent. Now

P [1 received] = p + (1 p)(1 )
P [0 received] = 1 P [1 received] = 1 p (1 p)(1 ) = (1 p) + p(1 ).

Solutions Page 34
Nguyen & Shwedyk A First Course in Digital Communications


n
P [k 1s are received] = [p + (1 p)(1 )]k [(1 p) + p(1 )]nk .
k

(d) Since the messages are to be equally likely, it follows that p = 1/2. Note also that, in contrast to (c), the n transmitted bits here are not at all statistically independent.
Assume that n is odd. A natural decision rule would be a majority rule, i.e., if more 1s than 0s are received, decide that a 1 was transmitted and vice versa. If n is even there is the possibility that equal numbers of 1s and 0s are received. If this happens, flip an unbiased coin to make a decision; otherwise the rule is the same as when n is odd.
Let k be the number of received 1s. Then the decision rule can be written as:
\[
k \ \mathop{\gtrless}^{m_2}_{m_1}\ \frac{n}{2}
\]
where \(m_1\) is one message (represented by n 0s) and \(m_2\) is the other message (represented by n 1s). Therefore
\[
P[\text{error}] = P[\{(m_1 \text{ decided}) \text{ and } (m_2 \text{ xmitted})\} \text{ or } \{(m_2 \text{ decided}) \text{ and } (m_1 \text{ xmitted})\}]
\]
\[
= P[(m_1 \text{ decided})|(m_2 \text{ xmitted})]P[m_2 \text{ xmitted}] + P[(m_2 \text{ decided})|(m_1 \text{ xmitted})]P[m_1 \text{ xmitted}]
\]
\[
= \frac{1}{2}P\left[k < \frac{n}{2}\,\Big|\,n \text{ 1s xmitted}\right] + \frac{1}{2}P\left[k \geq \frac{n}{2}\,\Big|\,n \text{ 0s xmitted}\right]
\]
\[
= \frac{1}{2}\sum_{k=0}^{\lfloor n/2\rfloor}\binom{n}{k}(1-\epsilon)^k\epsilon^{n-k} + \frac{1}{2}\sum_{k=\lceil n/2\rceil}^{n}\binom{n}{k}\epsilon^k(1-\epsilon)^{n-k}
\]

(e) As \(n \to \infty\), \(P[\text{error}] \to 0\) (you might want to try justifying this mathematically). But as \(n \to \infty\) the rate of transmission goes to zero; one could only transmit one message, if that.
Remark: The mapping in (d) is a repetition code. Thus a repetition code, if it is long enough, can always be detected correctly, but the efficiency is too low. Of course we do not want to use n time slots to transmit just one bit. Here we can see the tradeoff between efficiency and error probability. In practice, better codes exist that achieve the same error probability but use much less time.
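The error-probability expression in (d) can be evaluated directly; a minimal sketch (function name and the example epsilon are my own choices):

```python
from math import comb, ceil, floor

def p_error(n, eps):
    # Majority-rule error probability of an n-bit repetition code over a BSC,
    # per the final expression in (d) (equally likely messages)
    first = sum(comb(n, k) * (1 - eps)**k * eps**(n - k)
                for k in range(floor(n / 2) + 1))
    second = sum(comb(n, k) * eps**k * (1 - eps)**(n - k)
                 for k in range(ceil(n / 2), n + 1))
    return 0.5 * first + 0.5 * second

for n in (1, 3, 5, 7):
    # The error probability shrinks as n grows (at the cost of rate)
    print(n, round(p_error(n, 0.1), 6))
```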

P3.5 (a) Since \(F_x(x) = B\) for all \(x > 10\), one has \(1 = P(x \leq \infty) = F_x(\infty) = B\), i.e., B = 1. Moreover, \(F_x(x)\) reaches (or should reach) a value of 1 at x = 10. Therefore \(A\cdot 10^3 = 1 \Rightarrow A = 10^{-3}\). So the cdf \(F_x(x)\) is as follows:
\[
F_x(x) = \begin{cases} 0, & x \leq 0 \\ 10^{-3}x^3, & 0 \leq x \leq 10 \\ 1, & x > 10 \end{cases} \tag{3.16}
\]
(b) The pdf of x is simply:
\[
f_x(x) = \frac{dF_x(x)}{dx} = \begin{cases} 0, & x \leq 0 \\ 3\times 10^{-3}x^2, & 0 \leq x \leq 10 \\ 0, & x > 10 \end{cases} \tag{3.17}
\]
Both the cdf and pdf are shown in Fig. 3.4.

Figure 3.4: Plots of \(F_x(x)\) (rising from 0 to 1 over \(0 \leq x \leq 10\)) and \(f_x(x)\) (rising from 0 to 0.3 over the same range).

(c) The mean value of x is:
\[
m_x = E\{x\} = \int_{-\infty}^{+\infty} x f_x(x)\,dx = \int_0^{10} 3\times 10^{-3}x^3\,dx = \frac{30}{4} = 7.5
\]
The variance of x is:
\[
\sigma_x^2 = \mathrm{var}(x) = E\{(x - m_x)^2\} = \int_0^{10} (x - 7.5)^2 f_x(x)\,dx = 3.75
\]
(d)
\[
P(3 \leq x \leq 7) = F_x(7) - F_x(3) = 0.316 \tag{3.18}
\]
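A quick numerical confirmation of (c) and (d) (a sketch; the grid size is arbitrary):

```python
import numpy as np

def trap(g, dx):
    # Simple trapezoidal rule
    return (g.sum() - 0.5 * (g[0] + g[-1])) * dx

x = np.linspace(0.0, 10.0, 100001)
dx = x[1] - x[0]
pdf = 3e-3 * x**2                      # f_x(x) on [0, 10], zero elsewhere

mean = trap(x * pdf, dx)
var = trap((x - mean)**2 * pdf, dx)
prob = 1e-3 * 7**3 - 1e-3 * 3**3       # F_x(7) - F_x(3)

print(round(mean, 3), round(var, 3), round(prob, 3))   # 7.5 3.75 0.316
```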

P3.6 To find \(f_y(y)\), we first find the cdf of y, i.e., \(F_y(y)\). Consider the following cases for y (see Fig. 3.5):

(i) For \(y < -b\). Note that y can only take values in the range \(-b \leq y \leq b\). Therefore the event \(\{y \leq y\}\) is an impossible event and for this range of y one has \(F_y(y) = P(y \leq y) = 0\).
(ii) For \(y = -b\). Then
\[
F_y(-b) = P(y \leq -b) = P(y < -b) + P(y = -b) = P(y = -b) = P(x \leq -b) = F_x(-b)
= \int_{-\infty}^{-b} f_x(x)\,dx = \int_{-\infty}^{-b} \frac{1}{\sqrt{2\pi\sigma^2}}e^{-x^2/(2\sigma^2)}\,dx \tag{3.19}
\]
(iii) For \(y \geq b\). In contrast to case (i), for the range of y in this case the event \(\{y \leq y\}\) is a certain event and therefore \(F_y(y) = P(y \leq y) = 1\).
(iv) For \(-b < y < b\). For this range of y, the random variables y and x are the same. Thus \(F_y(y) = P(y \leq y) = P(x \leq y) = F_x(y)\). This also implies that \(f_y(y) = f_x(y) = \frac{1}{\sqrt{2\pi\sigma^2}}e^{-y^2/(2\sigma^2)}\) for y in this range.


Figure 3.5: the limiter y = g(x) (output clipped at \(\pm b\)), the Gaussian \(f_x(x)\) and its cdf \(F_x(x)\), and the resulting \(F_y(y)\) and \(f_y(y)\) with jumps/impulses of strength \(F_x(-b)\) at \(y = \pm b\).
uy

In summary,
0, y < b
Fy (y) = Fx (y), b y < b (3.20)

1, yb
Now, differentiating Fy (y) gives fy (y), the pdf of y:
Ng



0, y < b


Fx (b)(y + b), y = b
dFy (y) 2 2
fy (y) = = fx (y) = 1 2 ey /(2 ) , b < y < b (3.21)
dy
2

[1 Fx (b)](y b) = Fx (b)(y b), y=b

0, y>b

Note that the two delta functions are due to the discontinuities of Fy (y) at y = b and y = b.
The strengths of these two delta functions are equal to the value of the jump of the cdf at
the discontinuities, which is Fx (b). Plots of Fx (x), fx (x), Fy (y) and fy (y) are provided in
Fig. 3.5.

P3.7 It is straightforward enough to determine the mean \(m_x\) and variance \(\sigma_x^2\) of a random variable x from the definitions:
\[
m_x = E\{x\} = \int_{-\infty}^{\infty} x f_x(x)\,dx \tag{3.22}
\]
\[
\sigma_x^2 = E\{(x - m_x)^2\} = E\{x^2\} - m_x^2 = \int_{-\infty}^{\infty} x^2 f_x(x)\,dx - m_x^2. \tag{3.23}
\]
The only challenge perhaps is doing the integrations. However, using the facts that the mean is the center of gravity (or centroid) of the probability mass density and that the variance is the variation around the mean can simplify (even eliminate sometimes) the necessity for integrating.

(a) The pdf looks as in Fig. 3.6(a). Obviously the centroid is \(m_x = \frac{1}{2}(a+b)\).
To find the variance, shift \(f_x(x)\) along the x-axis so that it lies symmetrically about x = 0, i.e., work with the pdf in Fig. 3.6(b). Note that the shift does not affect the variance. Then
\[
\sigma_x^2 = \int_{-\frac{b-a}{2}}^{\frac{b-a}{2}} x^2\,\frac{1}{b-a}\,dx = \frac{2}{b-a}\int_0^{\frac{b-a}{2}} x^2\,dx = \frac{1}{12}(b-a)^2
\]
For the special case of \(a = -b\), then \(m_x = 0\) and \(\sigma_x^2 = b^2/3\).
Figure 3.6: Uniform densities of height \(\frac{1}{b-a}\): (a) over [a, b]; (b) shifted to be symmetric about x = 0.

(b) For the Rayleigh density, we have no choice but to do the integrations. The following general results from G&R, p. 337 are useful:
\[
\int_0^{\infty} x^{2n}e^{-px^2}\,dx = \frac{(2n-1)!!}{2(2p)^n}\sqrt{\frac{\pi}{p}}, \quad p > 0 \quad \text{(Eqn. 3.461-2)}
\]
\[
\int_0^{\infty} x^{2n+1}e^{-px^2}\,dx = \frac{n!}{2p^{n+1}}, \quad p > 0 \quad \text{(Eqn. 3.461-3)}
\]
These equations allow us to compute all the moments of a Rayleigh random variable, if we so desired. But here we are interested in only the 1st and 2nd moments.
The first moment is (with \(p = \frac{1}{2\sigma^2}\), n = 1 in 3.461-2):
\[
m_x = \frac{1}{\sigma^2}\int_0^{\infty} x^2 e^{-\frac{x^2}{2\sigma^2}}\,dx = \frac{1}{\sigma^2}\cdot\frac{1!!}{2(2p)}\sqrt{\frac{\pi}{p}} = \sigma\sqrt{\frac{\pi}{2}} \approx 1.25\,\sigma.
\]
The second moment is (with n = 1 in 3.461-3):
\[
E\{x^2\} = \frac{1}{\sigma^2}\int_0^{\infty} x^3 e^{-\frac{x^2}{2\sigma^2}}\,dx = \frac{1}{\sigma^2}\cdot\frac{1!}{2p^2} = 2\sigma^2.
\]
Therefore the variance is
\[
\sigma_x^2 = E\{x^2\} - m_x^2 = 2\sigma^2 - \frac{\pi}{2}\sigma^2 = \left(2 - \frac{\pi}{2}\right)\sigma^2 \approx 0.429\,\sigma^2.
\]
Remark: The Rayleigh random variable is important in the study of fading channels (Chapter 10). As shall be seen there, the parameter \(\sigma^2\) is also a variance (as the notation suggests), but of course not of the Rayleigh random variable but of the 2 underlying Gaussian random variables that make up the Rayleigh random variable.
(c) For the Laplacian density:
\[
f_x(x) = \frac{c}{2}e^{-c|x|} \tag{3.24}
\]
By inspection \(f_x(x)\) is an even function (symmetric about x = 0), hence \(m_x = 0\). The variance of x is
\[
\sigma_x^2 = \int_{-\infty}^{\infty} \frac{c}{2}x^2 e^{-c|x|}\,dx = c\int_0^{\infty} x^2 e^{-cx}\,dx.
\]
Integrate by parts to obtain:
\[
\int x^2 e^{-cx}\,dx = -\frac{1}{c}x^2 e^{-cx} - \frac{2}{c^2}x e^{-cx} - \frac{2}{c^3}e^{-cx} \tag{3.25}
\]
Thus
\[
\int_0^{\infty} x^2 e^{-cx}\,dx = \frac{2}{c^3} \ \Rightarrow\ \sigma_x^2 = \frac{2}{c^2} \tag{3.26}
\]
en

P
(d) y is discrete random variable, given by y = ni=1 xi where xi are statistically indepen-
dent and identically distributed random variables with the pmf: P (xi = 1) = p and
P (xi = 0) = 1 p.
uy

First, the mean and variance of each random variable xi can be found as:

mxi = E(xi ) = p 1 + (1 p) 0 = p (3.27)


E{x2i } 2
= p 1 + (1 p) 0 = p 2

x2 i = E{x2i } m2xi = p p2 = p(1 p) (3.28)


Ng

Using linear property of the expectation operation E{}, one has:


( n ) n
X X
E{y} = E xi = E{xi } = np (3.29)
i=1 i=1

To compute the variance of y, first compute E{y2 }. Write y2 as


n !2 n X
n n n n
X X X X X
2
Y = xi = xi xj = x2i + xi xj
i=1 i=1 j=1 i=1 i=1 j=1,j6=i

Solutions Page 39
Nguyen & Shwedyk A First Course in Digital Communications

Since xi , i = 1, 2, . . . , n, are statistically independent, then


2
p i 6= j

k
E{xi xj } = (3.30)
p i=j

Using (3.30) and the linear property of E{}, E{y2 } can be computed as:

dy
( n )
X X n Xn
E{y2 } = E x2i + E xi xj = np + n(n 1)p2 (3.31)

i=1 i=1 j=1,j6=i

\[
\sigma_y^2 = E\{y^2\} - [E\{y\}]^2 = np + n(n-1)p^2 - (np)^2 = np(1-p) \tag{3.32}
\]
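The binomial mean np and variance np(1 − p) can be confirmed by simulation (a sketch; n, p, the seed, and the sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 20, 0.3
# y is the sum of n independent Bernoulli(p) trials, i.e., binomial
y = rng.binomial(n, p, 1_000_000)

print(round(y.mean(), 2))   # ≈ n*p = 6.0
print(round(y.var(), 2))    # ≈ n*p*(1-p) = 4.2
```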

P3.8 (a) \(P(A \cup B)\) = area A + area B − common area (which is added twice). But area A = P(A), area B = P(B) and the common area = \(P(A \cap B)\). Hence
\[
P(A \cup B) = P(A) + P(B) - P(A \cap B).
\]
(b)
\[
P(A \cup B \cup C) = P(A) + P(B \cup C) - P(A \cap (B \cup C)),
\]
where we consider \(B \cup C\) as a single event. But \(A \cap (B \cup C) = (A \cap B) \cup (A \cap C)\). Therefore
\[
P(A \cup B \cup C) = P(A) + P(B) + P(C) - P(B \cap C) - P(A \cap B) - P(A \cap C) + P(A \cap B \cap C).
\]
&
P3.9 (a) Given B, the probability of A occurring is the probability that it lies in the shaded area. Also, B is the new sample space. Therefore
\[
P(A|B) = \frac{P(A \cap B)}{P(B)}.
\]
Note that P(A) can be written as
\[
P(A) = \frac{P(A \cap \Omega)}{P(\Omega)}.
\]
(b) Statistical independence can be expressed in 2 ways.
(i) P(A|B) = P(A). In this case the interpretation is that the ratio of the common area \((P(A \cap B))\) to the new sample space area (P(B)) is the same as the ratio of the original area (P(A)) to the original sample space \((P(\Omega))\).
(ii) \(P(A \cap B) = P(A)P(B)\). Here the common area \(P(A \cap B)\) is equal to the product of the areas P(A) and P(B).
(c) No. One could say they are totally dependent. Knowledge of one occurring tells you a lot about the other occurring, i.e., P(A|B) = 0 and vice versa.
P3.10 Need to determine if the 3 axioms are satisfied. Given B, all possible events now lie in the shaded area.
Axiom 1: P(any event) is certainly \(\geq 0\) and also \(\leq 1\).
Axiom 2: P(B|B) = 1.
Axiom 3: Holds. It is inherited from \(\Omega\).

Figure 3.7: Venn diagram of events A and B (with intersection \(A \cap B\)) in the sample space \(\Omega\).

Figure 3.8: Venn diagram of events A and B in the sample space \(\Omega\).
en

P3.11 (a) \(P(A \cap B \cap C) = P(A)P(B)P(C)\); \(P(A \cap B) = P(A)P(B)\); \(P(A \cap C) = P(A)P(C)\); \(P(B \cap C) = P(B)P(C)\).
(b) (i) All conditions of (a) satisfied, therefore statistically independent.
(ii) Note that \(P(A \cap B \cap C) = 0 \neq P(A)P(B)P(C)\), therefore statistically dependent.
(iii) \(P(B \cap C) = 1/8 \neq P(B)P(C) = (1/2)(1/2) = 1/4\), therefore statistically dependent.
(c) Yes. Consider the Venn diagram in Fig. 3.9. Let \(a^2 = b\), where \(a = P(A) = P(B) = P(C)\). Then \(P(A \cap B \cap C) = 0\) while \(P(A \cap B) = P(A \cap C) = P(B \cap C) = b = a^2 = P(A)P(B) = P(A)P(C) = P(B)P(C)\).
(d) The number of conditions is \(\sum_{k=2}^{n}\binom{n}{k} = \sum_{k=2}^{n}\frac{n!}{k!(n-k)!}\).

P3.12 First calculate P(A), P(B) and P(C) as follows:
\[
P(A) = 0.08 + 0.03 + 0.02 + 0.07 = 0.20
\]
\[
P(B) = 0.08 + 0.07 + 0.03 + 0.02 = 0.20
\]
\[
P(C) = 0.245 + 0.07 + 0.07 + 0.02 = 0.405
\]
Now \(P(A \cap B \cap C) = 0.02 \neq P(A)P(B)P(C) = (0.20)(0.20)(0.405) = 0.0162\). Therefore these events are statistically dependent.


Figure 3.9: Venn diagram of three events A, B, C whose pairwise overlaps each have probability \(b = a^2\) while the triple intersection is empty.

Next,
\[
P(A|C) = \frac{P(A \cap C)}{P(C)} = \frac{0.09}{0.405} = \frac{2}{9}
\]
\[
P(B|C) = \frac{P(B \cap C)}{P(C)} = \frac{0.09}{0.405} = \frac{2}{9}
\]
\[
P(A \cap B|C) = \frac{P(A \cap B \cap C)}{P(C)} = \frac{0.02}{0.405} = \frac{4}{81}
\]
\[
P(A|C)P(B|C) = \frac{2}{9}\cdot\frac{2}{9} = \frac{4}{81} = P(A \cap B|C).
\]
Therefore the events A and B are conditionally independent.
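The conditional-independence check in a couple of lines (probabilities taken from the solution above; variable names are my own):

```python
# Conditional independence check for P3.12
p_ABC = 0.02
p_AC, p_BC, p_C = 0.09, 0.09, 0.405

p_A_given_C = p_AC / p_C
p_B_given_C = p_BC / p_C
p_AB_given_C = p_ABC / p_C

# P(A|C) P(B|C) should equal P(A and B | C) = 4/81
print(abs(p_A_given_C * p_B_given_C - p_AB_given_C) < 1e-12)   # True
```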
en

Problems 3.3, 3.4, 3.13, 3.14 and 3.15 are important special cases of discrete-input discrete-output channel models. The general model is described in Fig. 3.10.
The quantities of interest are usually \(P[y = y_j]\), \(P[x = x_i|y = y_j]\) and also \(E\{xy\}\). The general relationships for these quantities are:
\[
P[y = y_j] = \sum_{i=1}^{m} P[(y = y_j) \cap (x = x_i)] = \sum_{i=1}^{m} P[y = y_j|x = x_i]P[x = x_i] = \sum_{i=1}^{m} P_{ij}P_i
\]
\[
P[x = x_i|y = y_j] = \frac{P[(x = x_i) \cap (y = y_j)]}{P[y = y_j]} = \frac{P[y = y_j|x = x_i]P[x = x_i]}{P[y = y_j]} = \frac{P_{ij}P_i}{\sum_{k=1}^{m} P_{kj}P_k}
\]
\[
E\{xy\} = \sum_{i=1}^{m}\sum_{j=1}^{n} x_i y_j P[(x = x_i) \cap (y = y_j)] = \sum_{i=1}^{m}\sum_{j=1}^{n} x_i y_j P[y = y_j|x = x_i]P[x = x_i] = \sum_{i=1}^{m}\sum_{j=1}^{n} x_i y_j P_{ij}P_i
\]

Solutions Page 312


Nguyen & Shwedyk A First Course in Digital Communications

Figure 3.10: general discrete-input discrete-output channel with inputs \(x_1, \ldots, x_m\) having priors \(P_i = P[x = x_i]\) (\(\sum_{i=1}^{m} P_i = 1\), \(0 \leq P_i \leq 1\)), outputs \(y_1, \ldots, y_n\) (typically \(n \geq m\)), and transition probabilities \(P_{ij} = P[y = y_j|x = x_i]\). Note that typically most of the \(P_{ij}\) would be equal to 0.

P3.13 Consider the toy channel model shown in Fig. 3.25 in the textbook.

(a) From the figure, we have \(P(x = 1) = 0.5\), \(P(x = -1) = 0.5\). All the transition probabilities of y given x are:
\[
P(y=1|x=1) = 0.4 \qquad P(y=1|x=-1) = 0.4
\]
\[
P(y=0|x=1) = 0.2 \qquad P(y=0|x=-1) = 0.2 \tag{3.33}
\]
\[
P(y=-1|x=1) = 0.4 \qquad P(y=-1|x=-1) = 0.4
\]
Then
\[
P(y=1) = P(y=1|x=1)P(x=1) + P(y=1|x=-1)P(x=-1) = 0.4\times 0.5 + 0.4\times 0.5 = 0.4. \tag{3.34}
\]
Observe that \(P(y=1) = P(y=1|x=1) = P(y=1|x=-1) = 0.4\). Similarly, one can show that \(P(y=0) = P(y=0|x=1) = P(y=0|x=-1)\), and \(P(y=-1) = P(y=-1|x=1) = P(y=-1|x=-1)\). Since the distribution of the output y does not depend on which value of the input x was transmitted, the two random variables x and y are statistically independent.
(b) We have
\[
E\{x\} = x_1 P(x = x_1) + x_2 P(x = x_2) = 1\times 0.5 + (-1)\times 0.5 = 0, \tag{3.35}
\]
and
\[
E\{y\} = y_1 P(y = y_1) + y_2 P(y = y_2) + y_3 P(y = y_3) = 1\times 0.4 + 0\times 0.2 + (-1)\times 0.4 = 0. \tag{3.36}
\]

Since x and y are statistically independent, one can compute E{xy} as

k
E{xy} = E{x}E{y} = 0. (3.37)

P3.14 Consider \(P(y=1) = \frac{1}{2}(0.3) + \frac{1}{2}(0.4) = 0.35\). But \(P(y=1|x=1) = 0.3 \neq P[y=1]\) \(\Rightarrow\) statistical dependence.
The correlation is
\[
E\{xy\} = \sum_{i=1}^{2}\sum_{j=1}^{3} x_i y_j P_{ij}P_i = \frac{1}{2}\sum_{j=1}^{3} y_j P_{1j} - \frac{1}{2}\sum_{j=1}^{3} y_j P_{2j}
\]
\[
= \frac{1}{2}\left[1(0.3) + 0(0.3) - 1(0.4)\right] - \frac{1}{2}\left[1(0.4) + 0(0.1) - 1(0.5)\right] = 0. \tag{3.38}
\]
Therefore the random variables are uncorrelated.
Notes:
1) Uncorrelatedness does not mean (or imply) statistical independence. But statistical independence always implies uncorrelatedness.
2) One extremely important case where uncorrelatedness implies statistical independence is the case of jointly Gaussian random variables. Communication theory depends very crucially on this fact, as shall be seen in later chapters.
3) Statistical dependence can increase or decrease the conditional probability of an event. Two extreme cases are:
(a) Mutually exclusive events A, B: then P(A|B) = 0.
(b) An event B contained in event A, i.e., \(B \subset A\): then P(A|B) = 1, where P(A), P(B) \(\neq 0\).
uy

P3.15 Definitely statistically dependent since
$$P(y = 1) = \underbrace{\tfrac{1}{2}(1 - \epsilon)}_{\approx 1/2} \neq P(y = 1|x = 1) = \underbrace{1 - p - \epsilon}_{\approx 1}. \qquad (3.39)$$
Also correlated because
$$E\{xy\} = \frac{1}{2}\left[\sum_{j=1}^{3} y_j P_{1j} - \sum_{j=1}^{3} y_j P_{2j}\right] = \frac{1}{2}\{(1 - p - \epsilon - p) - [p - (1 - p - \epsilon)]\} = 1 - 2p - \epsilon > 0, \quad \text{for } p, \epsilon \ll 1. \qquad (3.40)$$

P3.16 Consider two random variables x and y with means $m_x, m_y$ and variances $\sigma_x^2, \sigma_y^2$ respectively. Further let the two random variables be uncorrelated, which means that $E\{xy\} = E\{x\}E\{y\} = m_x m_y$. Now rotate x and y to obtain new random variables $x_R$ and $y_R$ as follows:
$$\begin{bmatrix} x_R \\ y_R \end{bmatrix} = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} (\cos\theta)x + (\sin\theta)y \\ (-\sin\theta)x + (\cos\theta)y \end{bmatrix} \qquad (3.41)$$
where $\theta$ is some arbitrary angle.
First one has
$$x_R y_R = [(\cos\theta)x + (\sin\theta)y][(-\sin\theta)x + (\cos\theta)y] = [\cos^2\theta - \sin^2\theta]xy - \sin\theta\cos\theta\,[x^2 - y^2]. \qquad (3.42)$$
It follows that
$$E\{x_R y_R\} = [\cos^2\theta - \sin^2\theta]E\{xy\} - \sin\theta\cos\theta\,[E\{x^2\} - E\{y^2\}].$$
But $E\{xy\} = m_x m_y$, $E\{x^2\} = m_x^2 + \sigma_x^2$, $E\{y^2\} = m_y^2 + \sigma_y^2$. Therefore
$$E\{x_R y_R\} = [\cos^2\theta - \sin^2\theta]m_x m_y - \sin\theta\cos\theta\,[m_x^2 + \sigma_x^2 - m_y^2 - \sigma_y^2]. \qquad (3.43)$$
Now compute $E\{x_R\}E\{y_R\}$ as follows:
$$E\{x_R\}E\{y_R\} = [(\cos\theta)m_x + (\sin\theta)m_y][(-\sin\theta)m_x + (\cos\theta)m_y] = [\cos^2\theta - \sin^2\theta]m_x m_y - \sin\theta\cos\theta\,[m_x^2 - m_y^2]. \qquad (3.44)$$
Comparing (3.43) and (3.44) shows that the two random variables $x_R$ and $y_R$ are uncorrelated for any $\theta$, i.e., $E\{x_R y_R\} = E\{x_R\}E\{y_R\}$, if and only if $\sigma_x^2 = \sigma_y^2$.
Remarks:
1) Note that when one says uncorrelated one actually means uncovarianced. However, universally the terminology is to say uncorrelated.
2) A lot of algebra could be avoided by realizing that the mean values do not play any role as to whether the random variables are uncorrelated, and therefore $m_x$ and $m_y$ could have been set to zero without loss of generality.
3) If the two random variables x, y are correlated, then one can find a rotation angle $\theta$ that would make the random variables $x_R, y_R$ uncorrelated. However, that angle is determined by the correlation (and variances) of x and y. Here the rotation angle is arbitrary and other considerations can be used to determine the rotation. This is exploited in Chapter 5.

P3.17 (a) $f_x(x)\,\mathrm{d}x$
(b) $f_y(y)\,\mathrm{d}y$
(c) Equal
(d) Replace y by $g(x)$ and $f_y(y)\,\mathrm{d}y$ by $f_x(x)\,\mathrm{d}x$ and sweep over x, i.e., from $x = -\infty$ to $x = +\infty$, to obtain the relationship.
P3.18 Start with the definition for the characteristic function:
$$\Phi_x(f) = E\{e^{j2\pi fx}\} = \int_{-\infty}^{+\infty} e^{j2\pi fx}f_x(x)\,\mathrm{d}x. \qquad (3.45)$$
Differentiate $\Phi_x(f)$ n times:
$$\frac{\mathrm{d}^n\Phi_x(f)}{\mathrm{d}f^n} = \int_{-\infty}^{+\infty} (j2\pi x)^n e^{j2\pi fx}f_x(x)\,\mathrm{d}x = (j2\pi)^n\int_{-\infty}^{+\infty} x^n e^{j2\pi fx}f_x(x)\,\mathrm{d}x. \qquad (3.46)$$
Set $f = 0$ and divide by $(j2\pi)^n$:
$$\frac{1}{(j2\pi)^n}\left.\frac{\mathrm{d}^n\Phi_x(f)}{\mathrm{d}f^n}\right|_{f=0} = \int_{-\infty}^{+\infty} x^n f_x(x)\,\mathrm{d}x = E\{x^n\}. \qquad (3.47)$$
P3.19 Maclaurin expansion of $e^x = \sum_{n=0}^{\infty}\frac{x^n}{n!}$. Therefore
$$e^{j2\pi fx} = \sum_{n=0}^{\infty}\frac{(j2\pi f)^n x^n}{n!} \quad\text{and}\quad \Phi_x(f) = E\{e^{j2\pi fx}\} = \sum_{n=0}^{\infty}\frac{(j2\pi f)^n}{n!}E\{x^n\}.$$
The characteristic function is completely determined by the moments of the random variable.
Note that expectation is a linear operation. Thus the expectation of a sum is the sum of the expectations.

P3.20 The characteristic function is the Fourier transform of the probability density function, and everything that was discussed in Chapter 2 should carry over here. There are some differences that must be taken into account. In Chapter 2 we went from the time domain to the frequency domain and back, whereas here we are going from the probability density domain to the frequency domain and back. But this is a trivial difference. Indeed the Fourier transform could be applied to a function of any independent variable, say temperature, length, etc.
The main (and only) difference between the Fourier transform in Chapter 2 and the characteristic function is the sign convention for f, i.e.,
$$x(t) \longleftrightarrow X(f) = \int_{-\infty}^{+\infty} x(t)e^{-j2\pi ft}\,\mathrm{d}t; \qquad x(t) = \int_{-\infty}^{+\infty} X(f)e^{j2\pi ft}\,\mathrm{d}f \qquad (3.48)$$
$$f_x(x) \longleftrightarrow \Phi_x(f) = \int_{-\infty}^{+\infty} f_x(x)e^{j2\pi fx}\,\mathrm{d}x; \qquad f_x(x) = \int_{-\infty}^{+\infty} \Phi_x(f)e^{-j2\pi fx}\,\mathrm{d}f \qquad (3.49)$$
which shows that $f \to -f$.
(a) Example 2.17, Chapter 2, establishes that $Ve^{-at^2} \longleftrightarrow V\sqrt{\frac{\pi}{a}}\,e^{-\pi^2f^2/a}$ are a Fourier transform pair. Here we need the characteristic function (or Fourier transform) of $\frac{1}{\sqrt{2\pi\sigma^2}}e^{-\frac{x^2}{2\sigma^2}}$. Identify $V \leftrightarrow \frac{1}{\sqrt{2\pi\sigma^2}}$, $a \leftrightarrow \frac{1}{2\sigma^2}$, substitute and do the algebra:
$$\Phi_x(f) = \frac{1}{\sqrt{2\pi\sigma^2}}\sqrt{2\pi\sigma^2}\,e^{-\pi^2(-f)^2\cdot 2\sigma^2} = e^{-2\pi^2\sigma^2f^2}. \qquad (3.50)$$
(b) Example 2.9, Chapter 2, states $Ve^{-a|t|} \longleftrightarrow \frac{2aV}{a^2+4\pi^2f^2}$. Here we are concerned with $f_x(x) = \frac{c}{2}e^{-c|x|}$. Identify $V \leftrightarrow \frac{c}{2}$, $a \leftrightarrow c$. Then
$$\Phi_x(f) = \frac{(c/2)(2c)}{c^2+4\pi^2(-f)^2} = \frac{c^2}{c^2+4\pi^2f^2}. \qquad (3.51)$$
(c) Example 2.11, Chapter 2, shows that a rectangular pulse of width T and height V has the Fourier transform $VT\frac{\sin(\pi fT)}{\pi fT}$. Here we are dealing with a pdf of width 2A, height 1/2A, i.e., $T \leftrightarrow 2A$, $V \leftrightarrow 1/2A$:
$$\Phi_x(f) = \frac{\sin(\pi(-f)2A)}{\pi(-f)2A} = \frac{\sin(2\pi fA)}{2\pi fA}. \qquad (3.52)$$
(Note that both rectangular pulses are centered at the origin.)
Remark: Because the pdfs were even functions the $f \to -f$ convention did not matter. Is it possible to have a pdf that is odd?
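The Gaussian result (3.50) lends itself to a quick Monte Carlo check. A sketch (Python, an illustration not taken from the manual; f, sigma, sample size and seed are arbitrary choices): average $e^{j2\pi fx}$ over Gaussian samples and compare with $e^{-2\pi^2\sigma^2f^2}$.

```python
# Sketch (assumption): Monte Carlo estimate of the Gaussian characteristic function.
import cmath, math, random

def char_fn_mc(f, sigma, n=100_000, seed=7):
    rng = random.Random(seed)
    acc = sum(cmath.exp(1j * 2 * math.pi * f * rng.gauss(0, sigma))
              for _ in range(n))
    return acc / n

f, sigma = 0.2, 1.0
estimate = char_fn_mc(f, sigma)                         # complex, imag part ~0
exact = math.exp(-2 * math.pi ** 2 * sigma ** 2 * f ** 2)
```

The imaginary part should vanish (the pdf is even), and the real part should match the closed form to within Monte Carlo error.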

P3.21 Obviously the odd moments are zero since the integrand is an odd function. Consider now the relationship
$$\int_{-\infty}^{+\infty}\frac{1}{\sqrt{2\pi\sigma^2}}e^{-\frac{x^2}{2\sigma^2}}\,\mathrm{d}x = 1. \qquad (3.53)$$
Write this as
$$\int_{-\infty}^{+\infty} e^{-\alpha x^2}\,\mathrm{d}x = \frac{\sqrt{\pi}}{\alpha^{1/2}}, \quad\text{where } \alpha \text{ is defined as } \frac{1}{2\sigma^2}. \qquad (3.54)$$
Differentiate both sides with respect to $\alpha$:
$$\frac{\mathrm{d}}{\mathrm{d}\alpha}: \quad -\int_{-\infty}^{+\infty} x^2 e^{-\alpha x^2}\,\mathrm{d}x = -\frac{\sqrt{\pi}}{2}\,\frac{1}{\alpha^{3/2}} \qquad (3.55)$$
and again
$$\frac{\mathrm{d}^2}{\mathrm{d}\alpha^2}: \quad \int_{-\infty}^{+\infty} (-x^2)(-x^2)e^{-\alpha x^2}\,\mathrm{d}x = \sqrt{\pi}\,\frac{1}{2}\cdot\frac{3}{2}\,\frac{1}{\alpha^{5/2}} \qquad (3.56)$$
and again
$$\frac{\mathrm{d}^3}{\mathrm{d}\alpha^3}: \quad \int_{-\infty}^{+\infty} (-x^2)^3 e^{-\alpha x^2}\,\mathrm{d}x = -\sqrt{\pi}\,\frac{1}{2}\cdot\frac{3}{2}\cdot\frac{5}{2}\,\frac{1}{\alpha^{7/2}}. \qquad (3.57)$$
The pattern is such that after the nth differentiation
$$\frac{\mathrm{d}^n}{\mathrm{d}\alpha^n}: \quad \int_{-\infty}^{+\infty} (-x^2)^n e^{-\alpha x^2}\,\mathrm{d}x = (-1)^n\sqrt{\pi}\,\frac{1}{2}\cdot\frac{3}{2}\cdot\frac{5}{2}\cdots\frac{2n-1}{2}\,\frac{1}{\alpha^{(2n+1)/2}}. \qquad (3.58)$$
Note that the LHS has the form of $E\{x^{2n}\}$, but some algebra is needed to see this:
$$(-1)^n\int_{-\infty}^{+\infty} x^{2n}e^{-\alpha x^2}\,\mathrm{d}x = (-1)^n\sqrt{\pi}\,\frac{[1\cdot 3\cdot 5\cdots(2n-1)]}{2^n}\,\alpha^{-n-1/2}. \qquad (3.59)$$
Now bring back the definition of $\alpha$:
$$\int_{-\infty}^{+\infty} x^{2n}e^{-\frac{x^2}{2\sigma^2}}\,\mathrm{d}x = \sqrt{2\pi\sigma^2}\,\frac{[1\cdot 3\cdot 5\cdots(2n-1)]}{2^n}\,2^n(\sigma^2)^n. \qquad (3.60)$$
Rearrange:
$$E\{x^{2n}\} = \frac{1}{\sqrt{2\pi\sigma^2}}\int_{-\infty}^{+\infty} x^{2n}e^{-\frac{x^2}{2\sigma^2}}\,\mathrm{d}x = 1\cdot 3\cdot 5\cdots(2n-1)\,(\sigma^2)^n, \quad n = 0, 1, 2, \ldots \qquad (3.61)$$
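Formula (3.61) can be spot-checked by simulation. A sketch (Python, not from the manual; sample size and seed are arbitrary choices): for $\sigma = 1$ the theory gives $E\{x^2\} = 1$ and $E\{x^4\} = 1\cdot 3 = 3$.

```python
# Sketch (assumption): Monte Carlo check of the Gaussian even moments.
import random

def even_moment_mc(two_n, sigma, n_samples=200_000, seed=3):
    rng = random.Random(seed)
    return sum(rng.gauss(0, sigma) ** two_n for _ in range(n_samples)) / n_samples

sigma = 1.0
m2 = even_moment_mc(2, sigma)   # theory: 1 * sigma^2 = 1
m4 = even_moment_mc(4, sigma)   # theory: 1 * 3 * sigma^4 = 3
```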

P3.22 (a) Let $y = x - m_x$, i.e., y is a zero-mean r.v. with variance $\sigma_x^2$. Then $\Phi_y(f) = E\{e^{j2\pi fy}\} = E\{e^{j2\pi f(x-m_x)}\} = e^{-j2\pi fm_x}E\{e^{j2\pi fx}\}$, or $\Phi_y(f) = e^{-j2\pi fm_x}\Phi_x(f)$.
(b) $\Phi_x(f) = e^{j2\pi fm_x}e^{-2\pi^2f^2\sigma_x^2}$.

P3.23 (a)
$$E\{e^{j2\pi f(x_1+x_2)}\} = \int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty} e^{j2\pi f(x_1+x_2)}\underbrace{f_{x_1x_2}(x_1,x_2)}_{=f_{x_1}(x_1)f_{x_2}(x_2)}\,\mathrm{d}x_1\mathrm{d}x_2 = \underbrace{\int_{-\infty}^{+\infty} e^{j2\pi fx_1}f_{x_1}(x_1)\,\mathrm{d}x_1}_{\Phi_{x_1}(f)}\;\underbrace{\int_{-\infty}^{+\infty} e^{j2\pi fx_2}f_{x_2}(x_2)\,\mathrm{d}x_2}_{\Phi_{x_2}(f)}. \qquad (3.62)$$
Therefore $\Phi_y(f) = \Phi_{x_1}(f)\Phi_{x_2}(f)$.
(b) Convolution: $f_y(y) = f_{x_1}(x_1) \circledast f_{x_2}(x_2) = \int_{-\infty}^{+\infty} f_{x_1}(\lambda)f_{x_2}(y-\lambda)\,\mathrm{d}\lambda$. Convolving graphically gives $f_y(y)$ as shown in Fig. 3.11.

Figure 3.11: Plots of $f_{x_1}(x_1)$ (uniform of height 1/2A over $[-A, A]$) and $f_y(y)$ (triangular over $[-2A, 2A]$).

(c) $\Phi_y(f) = \prod_{k=1}^{n}\Phi_{x_k}(f)$, or $f_y(y) = f_{x_1}(x_1)\circledast f_{x_2}(x_2)\circledast\cdots\circledast f_{x_n}(x_n)$.
The above is true as long as the $x_k$'s are statistically independent, i.e., it does not depend on the random variables being Gaussian. However, if Gaussian, the characteristic function of the kth r.v. is:
$$\Phi_{x_k}(f) = e^{j2\pi fm_{x_k}}e^{-2\pi^2f^2\sigma_{x_k}^2} \qquad (3.63)$$
$$\text{and}\quad \Phi_y(f) = \prod_{k=1}^{n} e^{j2\pi fm_{x_k}}e^{-2\pi^2f^2\sigma_{x_k}^2} = e^{j2\pi f\sum_{k=1}^{n}m_{x_k}}\,e^{-2\pi^2f^2\sum_{k=1}^{n}\sigma_{x_k}^2} \qquad (3.64)$$
which is the characteristic function of a Gaussian r.v. with mean $m_y = \sum_{k=1}^{n}m_{x_k}$ and variance $\sigma_y^2 = \sum_{k=1}^{n}\sigma_{x_k}^2$.
Note that the above result is easily generalized to a weighted linear sum, i.e., $y = \sum_{k=1}^{n}a_kx_k$. Then y is Gaussian, with mean $m_y = \sum_{k=1}^{n}a_km_{x_k}$ and variance $\sigma_y^2 = \sum_{k=1}^{n}a_k^2\sigma_{x_k}^2$, where the $x_k$ are still statistically independent Gaussian random variables.
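Part (b) of P3.23, the triangular pdf obtained by convolving two uniform densities, can be checked with a discrete convolution. A sketch (Python, not from the manual; the value of A and the grid spacing are arbitrary choices): the result should peak at 1/(2A) and integrate to one.

```python
# Sketch (assumption): discrete approximation of f_y = f_x1 * f_x2 for two
# uniform pdfs on [-A, A]; the result is triangular on [-2A, 2A].
A = 1.0
dx = 0.01
n = int(2 * A / dx)                 # samples across [-A, A]
box = [1.0 / (2 * A)] * n           # uniform pdf value 1/(2A)

# Full discrete convolution, scaled by dx to approximate the integral.
tri = [0.0] * (2 * n - 1)
for i, a in enumerate(box):
    for j, b in enumerate(box):
        tri[i + j] += a * b * dx
```

The peak value of `tri` equals the theoretical 1/(2A) and its Riemann sum equals unit area.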

P3.24 (a) For y to lie in the region $(y, y + \mathrm{d}y]$ one of three mutually exclusive events has to occur: x has to fall in the region $[x_1 + \mathrm{d}x_1, x_1)$ or $(x_2, x_2 + \mathrm{d}x_2]$ or $[x_3 + \mathrm{d}x_3, x_3)$, i.e.,
$$P[y < y \leq y+\mathrm{d}y] = P[x_1 + \mathrm{d}x_1 \leq x < x_1] + P[x_2 < x \leq x_2 + \mathrm{d}x_2] + P[x_3 + \mathrm{d}x_3 \leq x < x_3]. \qquad (3.65)$$
In general $P[z < z \leq z + \mathrm{d}z] = f_z(z)\,\mathrm{d}z$. Therefore:
$$f_y(y)\,\mathrm{d}y = f_x(x_1)|\mathrm{d}x_1| + f_x(x_2)\,\mathrm{d}x_2 + f_x(x_3)|\mathrm{d}x_3|. \qquad (3.66)$$
(b) The infinitesimals $\mathrm{d}x_1, \mathrm{d}x_3$ are negative quantities, and the very least we should expect of a probability is for it to be $\geq 0$.
(c) For the specific example we have
$$f_y(y) = \sum_{i=1}^{3}\frac{f_x(x_i)|\mathrm{d}x_i|}{|\mathrm{d}y|}$$
(no harm in putting a magnitude sign on a positive quantity, $\mathrm{d}y$)
$$= \sum_{i=1}^{3}\frac{f_x(x_i)}{\left|\frac{\mathrm{d}y}{\mathrm{d}x_i}\right|} = \sum_{i=1}^{3}\frac{f_x(x_i)}{\left|\frac{\mathrm{d}y}{\mathrm{d}x}\right|_{x=x_i}}. \qquad (3.67)$$
More generally $f_y(y) = \sum_i \frac{f_x(x_i)}{\left|\frac{\mathrm{d}y}{\mathrm{d}x}\right|_{x=x_i}}$, where the $x_i$'s are the real roots of $g(x) = y$ for the specific value of y.

P3.25 Consider the graph of g(x) in Fig. 3.12.

Figure 3.12: The nonlinearity g(x), with branches $y = x$ and $y = 1 - (1-x)^2$.

Inspection shows that:
(a) For y in the range $(-\infty, -1)$ there is only one real root, namely the positive root of $1 - (1-x)^2 = y \;\Rightarrow\; x_r = 1 + \sqrt{1-y}$, $-\infty < y < -1$.
(b) At $y = -1$ there are an infinite number of roots (actually a noncountable infinity), which implies that $P(y = -1)$ is nonzero. Indeed $P(y = -1) = P(-\infty < x \leq -1) = \int_{-\infty}^{-1} f_x(x)\,\mathrm{d}x = \frac{1}{2}e^{-1}$. Therefore $f_y(y)$ has an impulse at $y = -1$ of this strength.
(c) For $-1 < y \leq 0$ there are two roots to the equation $g(x) = y$: one from the relationship $x = y$; the other from the relationship $1 - (1-x)^2 = y$ (again the positive root). The roots are therefore $x_1 = y$, $x_2 = 1 + \sqrt{1-y}$.
(d) For $0 < y < 1$ there again are two roots from $1 - (1-x)^2 = y$, which are $x_{1,2} = 1 \mp \sqrt{1-y}$.
(e) Finally, for $y > 1$ there are no roots $\Rightarrow f_y(y) = 0$.
As always
$$f_y(y) = \sum_{x_r:\,g(x_r)=y}\frac{f_x(x_r)}{\left|\frac{\mathrm{d}g(x)}{\mathrm{d}x}\right|_{x_r}}. \quad\text{Here } \frac{\mathrm{d}g(x)}{\mathrm{d}x} = 2(1-x). \qquad (3.68)$$
Therefore:
(a) $-\infty < y < -1$: $x_r = 1 + \sqrt{1-y} \;\Rightarrow\; f_y(y) = \frac{1}{2}\,\frac{e^{-|1+\sqrt{1-y}|}}{|2\sqrt{1-y}|}$.
(b) $y = -1$: $f_y(y) = \frac{1}{2}e^{-1}\,\delta(y+1)$.
(c) $-1 < y \leq 0$: $f_y(y) = \frac{1}{2}\,\frac{e^{-|1+\sqrt{1-y}|}}{|2\sqrt{1-y}|} + \frac{1}{2}e^{-|y|}$.
(d) $0 < y < 1$: $f_y(y) = \frac{1}{2}\,\frac{e^{-|1+\sqrt{1-y}|}}{|2\sqrt{1-y}|} + \frac{1}{2}\,\frac{e^{-|1-\sqrt{1-y}|}}{|2\sqrt{1-y}|}$.
(e) $y > 1$: $f_y(y) = 0$.
A plot of the pdf is shown in Fig. 3.13.

Figure 3.13: Plot of $f_y(y)$, including the impulse $\frac{1}{2}e^{-1}\delta(y+1)$ at $y = -1$.

P3.26 (a) $y = g(\Theta) = \cos(\theta + \Theta)$. Consider the graph of the nonlinearity $g(\Theta)$ in Fig. 3.14.

Figure 3.14: Plot of the nonlinearity $y = g(\Theta) = \cos(\theta + \Theta)$ over one period, showing the two roots $\Theta_1, \Theta_2$ with $|\mathrm{d}\Theta_1| = |\mathrm{d}\Theta_2|$.

The roots are $\Theta_1 = \cos^{-1}y - \theta$; $\Theta_2 = 2\pi - 2\theta - \Theta_1 = 2\pi - \theta - \cos^{-1}y$. Also $\frac{\mathrm{d}y}{\mathrm{d}\Theta} = -\sin(\theta + \Theta)$.
$$f_y(y) = \frac{f_\Theta(\Theta)|_{\Theta_1}}{\left|\frac{\mathrm{d}y}{\mathrm{d}\Theta}\right|_{\Theta_1}} + \frac{f_\Theta(\Theta)|_{\Theta_2}}{\left|\frac{\mathrm{d}y}{\mathrm{d}\Theta}\right|_{\Theta_2}}. \qquad (3.69)$$
Now $f_\Theta(\Theta)|_{\Theta_1} = f_\Theta(\Theta)|_{\Theta_2} = \frac{1}{2\pi}$ ($\Theta$ is uniform over $(0, 2\pi]$).
$$\left|\frac{\mathrm{d}y}{\mathrm{d}\Theta}\right|_{\Theta_1} = \left|-\sin(\theta + \cos^{-1}y - \theta)\right| = \sin(\cos^{-1}y) = \sqrt{1-y^2} \qquad (3.70)$$
$$\left|\frac{\mathrm{d}y}{\mathrm{d}\Theta}\right|_{\Theta_2} = \left|-\sin(\theta + 2\pi - \theta - \cos^{-1}y)\right| = \sin(\cos^{-1}y) = \sqrt{1-y^2} \qquad (3.71)$$
Therefore
$$f_y(y) = \begin{cases}\frac{1}{\pi\sqrt{1-y^2}}, & -1 \leq y \leq 1 \\ 0, & \text{elsewhere}\end{cases} \qquad (3.72)$$
which plots as in Fig. 3.15.

Figure 3.15: Plot of $f_y(y) = \frac{1}{\pi\sqrt{1-y^2}}$, with minimum value $1/\pi$ at $y = 0$.

As a check, we know
$$\int_{-\infty}^{+\infty} f_y(y)\,\mathrm{d}y = \int_{-1}^{1}\frac{1}{\pi\sqrt{1-y^2}}\,\mathrm{d}y \stackrel{?}{=} 1 \qquad (3.73)$$
$$\text{or}\quad \int_0^1 \frac{1}{\sqrt{1-y^2}}\,\mathrm{d}y \stackrel{?}{=} \frac{\pi}{2}. \qquad (3.74)$$
The LHS is $\sin^{-1}(y)\big|_0^1 = \sin^{-1}(1) - \sin^{-1}(0) = \frac{\pi}{2} - 0 = \frac{\pi}{2}$. So the question mark can be removed.
No, it does not depend on $\theta$.
(b) A sketch of the relationship $y = \cos(\theta + \Theta)$ where $0 \leq \Theta \leq \pi/9$ (see Fig. 3.16) shows that there is only one real root at $\Theta_r = \cos^{-1}y - \theta$ and that y lies in the range $\cos(\theta + \pi/9) \leq y \leq \cos\theta$.

Figure 3.16: The restricted mapping, with y ranging from $\cos\theta$ down to $\cos(\theta + \pi/9)$ ($\pi/9 = 20°$).

Again $\left|\frac{\mathrm{d}y}{\mathrm{d}\Theta}\right|_{\Theta_r} = \left|-\sin(\theta + \cos^{-1}y - \theta)\right| = \sqrt{1-y^2}$.
Now $f_\Theta(\Theta) = \frac{9}{\pi}\left[u(\Theta) - u\!\left(\Theta - \frac{\pi}{9}\right)\right]$ and therefore $f_\Theta(\Theta)\big|_{\Theta_r} = \frac{9}{\pi}\left[u(\cos^{-1}y - \theta) - u\!\left(\cos^{-1}y - \theta - \frac{\pi}{9}\right)\right]$, where $\cos(\theta + \pi/9) \leq y \leq \cos\theta$. Therefore
$$f_y(y) = \frac{9/\pi}{\sqrt{1-y^2}}\left\{u\!\left[y - \cos\!\left(\theta + \frac{\pi}{9}\right)\right] - u[y - \cos\theta]\right\}, \qquad (3.75)$$
which depends on $\theta$.
In communication models $\theta$ is invariably $2\pi ft$, i.e., $y = \cos(2\pi ft + \Theta)$. The above shows that if $\Theta$ is uniform over $[0, 2\pi)$ then the random process is stationary (at least 1st order). However, if it is restricted then in general it becomes nonstationary.
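The pdf (3.72) and its independence from $\theta$ can be verified by Monte Carlo. A sketch (Python, not from the manual; sample sizes, seed and the two test angles are arbitrary choices): the implied CDF gives $P(|y| \leq 0.5) = \frac{2}{\pi}\sin^{-1}(0.5) = \frac{1}{3}$ for any $\theta$.

```python
# Sketch (assumption): empirical check of f_y(y) = 1/(pi*sqrt(1-y^2)) for
# y = cos(theta + THETA), THETA uniform on (0, 2*pi].
import math, random

def prob_abs_y_below(c, theta, n=200_000, seed=11):
    rng = random.Random(seed)
    hits = sum(abs(math.cos(theta + rng.uniform(0, 2 * math.pi))) <= c
               for _ in range(n))
    return hits / n

p1 = prob_abs_y_below(0.5, theta=0.0)
p2 = prob_abs_y_below(0.5, theta=1.234)   # same answer for any theta
```

Both estimates should sit near 1/3, illustrating the first-order stationarity claimed above.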

P3.27 (a) $\geq 0$.
(b) Since it is always $\geq 0$, the function $g(\lambda) = E\{(x + \lambda y)^2\}$ cannot cross the axis, i.e.,
$$g(\lambda) = E\{x^2\} + 2\lambda E\{xy\} + \lambda^2 E\{y^2\} \geq 0. \qquad (3.76)$$
This means that the equation $g(\lambda) = 0$ has two complex roots at
$$\lambda_{1,2} = \frac{-2E\{xy\} \pm \sqrt{4E^2\{xy\} - 4E\{x^2\}E\{y^2\}}}{2E\{y^2\}}. \qquad (3.77)$$
(Recall from high school that the roots of $ax^2 + bx + c = 0$ are at $\frac{-b\pm\sqrt{b^2-4ac}}{2a}$.)
For the roots to be complex the expression under the square root must be $\leq 0$. This requires that $E^2\{xy\} \leq E\{x^2\}E\{y^2\}$, or $E\{xy\} \leq \sqrt{E\{x^2\}E\{y^2\}}$.
(c) Equality holds when $y = kx$, i.e., there is a linear relationship between the random variables x, y. (Statisticians are wont to call this a linear regression.)
(d) Consider the random variables $x - m_x$ and $y - m_y$. Then from (b) we have
$$E\{(x-m_x)(y-m_y)\} \leq \sqrt{E\{(x-m_x)^2\}\,E\{(y-m_y)^2\}} = \sqrt{\sigma_x^2\sigma_y^2} = \sigma_x\sigma_y$$
$$\text{or}\quad \underbrace{\frac{E\{(x-m_x)(y-m_y)\}}{\sigma_x\sigma_y}}_{=\rho_{x,y}} \leq 1.$$
But the expression in (b) should properly be written as
$$-\sqrt{E\{x^2\}}\sqrt{E\{y^2\}} \leq E\{xy\} \leq \sqrt{E\{x^2\}}\sqrt{E\{y^2\}} \qquad (3.78)$$
$$\text{or}\quad |E\{xy\}| \leq \sqrt{E\{x^2\}E\{y^2\}} \qquad (3.79)$$
from which it follows that $-1 \leq \rho \leq 1$.
(e)
$$|E\{x(t)x(t+\tau)\}| \leq \sqrt{E\{x^2(t)\}\,E\{x^2(t+\tau)\}} \qquad (3.80)$$
$$\text{or}\quad |R_x(\tau)| \leq \sqrt{R_x(0)R_x(0)} = R_x(0) \qquad (3.81)$$
where it is assumed that the process is stationary.

P3.28 (a)
$$P(y < y \leq y+\Delta y \,|\, x < x \leq x+\Delta x) = \frac{P(y < y \leq y+\Delta y,\; x < x \leq x+\Delta x)}{P(x < x \leq x+\Delta x)}. \qquad (3.82)$$
(b) Assuming continuous random variables, the expression becomes $\frac{0}{0}$.
(c) Rewrite the expression in (a) as:
$$\frac{P(y < y \leq y+\Delta y \,|\, x < x \leq x+\Delta x)}{\Delta y} = \frac{\dfrac{P(y < y \leq y+\Delta y,\; x < x \leq x+\Delta x)}{\Delta x\,\Delta y}}{\dfrac{P(x < x \leq x+\Delta x)}{\Delta x}}. \qquad (3.83)$$
As $\Delta x, \Delta y \to 0$ the LHS becomes $f_y(y|x = x)$ (more appropriately $f_{y|x}(y|x = x)$) and the RHS becomes $\frac{f_{xy}(x,y)}{f_x(x)}$.

P3.29
$$f_y(y|x) = \frac{f_{x,y}(x,y)}{f_x(x)} = \frac{e^{-(\text{a quadratic in } x,\,y)}}{e^{-(\text{a quadratic in } x)}} = e^{-(\text{a quadratic in } y)}. \qquad (3.84)$$
Therefore Gaussian. By symmetry $f_x(x|y)$ is also Gaussian.

P3.30 Consider
$$P(A|r < r \leq r+\Delta r) = \frac{P(r < r \leq r+\Delta r,\; A)}{P(r < r \leq r+\Delta r)} = \frac{P(r < r \leq r+\Delta r|A)P(A)}{P(r < r \leq r+\Delta r)} = \frac{\dfrac{P(r < r \leq r+\Delta r|A)}{\Delta r}\,P(A)}{\dfrac{P(r < r \leq r+\Delta r)}{\Delta r}}. \qquad (3.85)$$
Let $\Delta r \to 0$. Therefore $P(A|r = r) = \dfrac{f_r(r|A)P(A)}{f_r(r)}$.

P3.31 To be a valid (not necessarily useful) pdf, a function must satisfy two, and only two, conditions: (i) the area under it must be equal to one, and (ii) it must always be $\geq 0$ (nonnegative). Therefore:
(a) (i) Satisfies both conditions. It is a non-stationary Gaussian random process (or random variable for any fixed t), whose mean is sgn(t) and variance is $\sigma^2$.
(ii) No, neither condition is satisfied. $f_x(0) < 0$ for $t < 0$. Also $\int_{-\infty}^{\infty} f_x(x;t)\,\mathrm{d}x = \text{sgn}(t) \neq 1$.
(b) Answered above.
Note that the functions are considered to be functions of x; t is looked upon as a parameter.

P3.32 A random process is generated as follows: $x(t) = e^{-a|t|}$, where a is a random variable with pdf $f_a(a) = u(a) - u(a-1)$ (1/seconds).

(a) Sketches of several members of the ensemble are shown in Fig. 3.17. They are generated with the Matlab code below. Note that the Matlab function rand generates a random number according to a uniform distribution over [0, 1].

    t = [-2:0.001:2];
    for i = 1:4
        a = rand(1,1);
        x = exp(-a*abs(t));
        subplot(4,1,i); plot(t,x,'linewidth',1.5);
        tname = ['{\bfa}=',num2str(a)];
        title(tname,'FontName','Times New Roman','FontSize',16);
        grid on;
        xlabel('\itt','FontName','Times New Roman','FontSize',16);
        set(gca,'FontSize',16,'XGrid','on','YGrid','on','GridLineStyle',':',...
            'MinorGridLineStyle','none','FontName','Times New Roman');
        ylim([0,1.1]);
    end

Figure 3.17: Several member functions of the random process $x(t) = e^{-a|t|}$ (four realizations, with a = 0.821, 0.445, 0.615, 0.792).

(b) Since a ranges between 0 and 1, for a specific time t, the random variable x(t) ranges from $e^{-|t|}$ (when a = 1) to 1 (when a = 0).
(c) Again, similar to P3.31, we consider x to be a function of a with t a parameter. The mean and mean-squared values of x(t) are:
$$E\{x(t)\} = \int_{-\infty}^{\infty} x(t)f_a(a)\,\mathrm{d}a = \int_0^1 e^{-a|t|}\,\mathrm{d}a = \frac{-e^{-a|t|}}{|t|}\bigg|_0^1 = \frac{1}{|t|}\left(1 - e^{-|t|}\right). \qquad (3.86)$$
$$E\{x^2(t)\} = \int_{-\infty}^{\infty} x^2(t)f_a(a)\,\mathrm{d}a = \int_0^1 e^{-2a|t|}\,\mathrm{d}a = \frac{-e^{-2a|t|}}{2|t|}\bigg|_0^1 = \frac{1}{2|t|}\left(1 - e^{-2|t|}\right). \qquad (3.87)$$
Note that at t = 0 the mean value is of the form $\frac{0}{0}$, which by L'Hôpital's rule equals 1 (expected intuitively), and the MSV is $E\{x^2(t)\}\big|_{t=0} = 1$. Therefore the variance is 0, again expected intuitively. This is not true, of course, at other values of t (except $t = \pm\infty$).
(d) We now determine the first-order pdf of x(t).
For $x \leq e^{-|t|}$ or $x > 1$, we have $f_{x(t)}(x;t) = 0$.
For $e^{-|t|} < x \leq 1$, the equation $x = g(a) = e^{-a|t|}$ has only one solution, $a = -\frac{1}{|t|}\ln x$, $e^{-|t|} < x \leq 1$. Furthermore
$$\left|\frac{\mathrm{d}g(a)}{\mathrm{d}a}\right|_{a=-\frac{1}{|t|}\ln x} = |t|e^{-a|t|}\Big|_{a=-\frac{1}{|t|}\ln x} = |t|x. \qquad (3.88)$$
Therefore
$$f_{x(t)}(x;t) = \frac{f_a\!\left(a = -\frac{1}{|t|}\ln x\right)}{\left|\frac{\mathrm{d}g(a)}{\mathrm{d}a}\right|_{a=-\frac{1}{|t|}\ln x}} = \frac{1}{|t|x}. \qquad (3.89)$$
To conclude:
$$f_{x(t)}(x;t) = \begin{cases}\frac{1}{|t|x}, & e^{-|t|} < x \leq 1 \\ 0, & \text{otherwise}\end{cases} \qquad (3.90)$$
Are the 2 conditions for this to be a valid pdf satisfied?
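Both validity conditions for (3.90) can be confirmed numerically. A sketch (Python, not from the manual; t = 1.5, the test point c, sample size and seed are arbitrary choices): the analytic area under $1/(|t|x)$ over $(e^{-|t|}, 1]$ is exactly one, and the implied CDF matches a Monte Carlo of $x = e^{-a|t|}$ with $a \sim U(0,1)$.

```python
# Sketch (assumption): checks on the first-order pdf of P3.32(d).
import math, random

t = 1.5
lo = math.exp(-abs(t))                      # lower edge of the support

# Analytic area under the pdf: (1/|t|) * [ln(1) - ln(e^{-|t|})] = 1.
area = (1 / abs(t)) * (math.log(1.0) - math.log(lo))

# Monte Carlo CDF at x = c: P(x <= c) = P(a >= -ln(c)/|t|) = 1 + ln(c)/|t|.
rng = random.Random(5)
c = 0.5
mc = sum(math.exp(-rng.random() * abs(t)) <= c for _ in range(200_000)) / 200_000
exact = 1 + math.log(c) / abs(t)
```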
1
P3.33 (a) The transfer function of the filter is $H(f) = \frac{1}{1+j2\pi fRC}$. The 3-dB frequency is given by $f_{3\,\text{dB}} = \frac{1}{2\pi RC}$. For $f_{3\,\text{dB}} = 10$ kHz and $C_{\text{nom}} = 1.5\times 10^{-9}$ F the resistor value is $R_{\text{nom}} = 10.6\times 10^3 \approx 10$ k$\Omega$.
(b) The time constant $\tau = RC$ is given by $\tau = (R_{\text{nom}}+\Delta R)(C_{\text{nom}}+\Delta C) \approx R_{\text{nom}}C_{\text{nom}} + C_{\text{nom}}\Delta R + R_{\text{nom}}\Delta C$ (where the term $\Delta R\,\Delta C$ is ignored).
The random variables $\Delta R$ and $\Delta C$ have the pdfs shown in Fig. 3.18.

Figure 3.18: $f_{\Delta R}(\Delta R)$, uniform of height $10/R_{\text{nom}}$ over $[-0.05R_{\text{nom}}, 0.05R_{\text{nom}}]$; $f_{\Delta C}(\Delta C)$, uniform of height $5/C_{\text{nom}}$ over $[-0.1C_{\text{nom}}, 0.1C_{\text{nom}}]$.

The random variables $x = C_{\text{nom}}\Delta R$ and $y = R_{\text{nom}}\Delta C$ have the pdfs shown in Fig. 3.19.

Figure 3.19: $f_x(x)$, uniform of height $10/(R_{\text{nom}}C_{\text{nom}})$ over $\pm 0.05R_{\text{nom}}C_{\text{nom}}$; $f_y(y)$, uniform of height $5/(R_{\text{nom}}C_{\text{nom}})$ over $\pm 0.1R_{\text{nom}}C_{\text{nom}}$.

Because x and y are statistically independent, the pdf of the random variable $z = x + y$ is the convolution of $f_x(x)$ and $f_y(y)$. It looks as in Fig. 3.20.

Figure 3.20: $f_z(z)$, a trapezoid of height $5/(R_{\text{nom}}C_{\text{nom}})$ extending over $\pm 0.15R_{\text{nom}}C_{\text{nom}}$.

Finally, the pdf of RC is a shifted version of $f_z(z)$ and is shown in Fig. 3.21.

Figure 3.21: $f_{RC}(RC)$, centered at $R_{\text{nom}}C_{\text{nom}}$ and extending from $0.85R_{\text{nom}}C_{\text{nom}}$ to $1.15R_{\text{nom}}C_{\text{nom}}$.

(c) What is meant by "average" impulse response? Simplest is to let $R = R_{\text{nom}}$ and $C = C_{\text{nom}}$ and let this be the average impulse response. More appropriately it should be called the nominal impulse response.
Otherwise find the statistical average of h(t):
$$E\{h(t)\} = \iint \frac{1}{RC}e^{-\frac{t}{RC}}\underbrace{f_{RC}(R,C)}_{=f_R(R)f_C(C)}\,\mathrm{d}R\,\mathrm{d}C = \int_{C=0.9C_{\text{nom}}}^{1.1C_{\text{nom}}}\frac{5}{C_{\text{nom}}}\frac{1}{C}\,\mathrm{d}C\int_{R=0.95R_{\text{nom}}}^{1.05R_{\text{nom}}}\frac{10}{R_{\text{nom}}}\frac{1}{R}e^{-\frac{t}{RC}}\,\mathrm{d}R$$
Let $\beta \equiv \frac{1}{R}$. Then $\mathrm{d}\beta = -\frac{1}{R^2}\,\mathrm{d}R$, or $\mathrm{d}R = -\frac{1}{\beta^2}\,\mathrm{d}\beta$. The inner integral becomes:
$$\int_{\beta=\frac{1}{1.05R_{\text{nom}}}}^{\frac{1}{0.95R_{\text{nom}}}}\frac{10}{R_{\text{nom}}}\frac{e^{-a\beta}}{\beta}\,\mathrm{d}\beta = \frac{10}{R_{\text{nom}}}\left[\text{Ei}\!\left(\frac{a}{1.05R_{\text{nom}}}\right) - \text{Ei}\!\left(\frac{a}{0.95R_{\text{nom}}}\right)\right],$$
where $a \equiv \frac{t}{C}$ and the Ei($\cdot$) function is defined as $\text{Ei}(x) = \int_x^{\infty}\frac{e^{-t}}{t}\,\mathrm{d}t$.
Define the worst-case impulse responses to occur when R and C fall at either the extreme upper value or extreme lower value, i.e., $R = 1.05R_{\text{nom}}$ and $C = 1.1C_{\text{nom}}$, or $R = 0.95R_{\text{nom}}$ and $C = 0.9C_{\text{nom}}$.
(d) Let $x \equiv \frac{1}{RC}$. Then the impulse response is given by:
$$h(t) = xe^{-xt}u(t).$$
To find the first-order pdf, treat t as a parameter. Therefore we have a nonlinear mapping from random variable x to random variable h: $h = xe^{-xt}$ (of course the parameter t is restricted to be $\geq 0$).
Now given $f_x(x)$ try to determine $f_h(h)$. But the question is how to find the roots of $xe^{-xt} = h$ in an explicit form?
(e) The 3-dB bandwidth is given by $f_{3\,\text{dB}} = \frac{1}{2\pi RC}$. The mean value is
$$E\{f_{3\,\text{dB}}\} = E\left\{\frac{1}{2\pi RC}\right\} = \frac{1}{2\pi}E\left\{\frac{1}{R}\right\}E\left\{\frac{1}{C}\right\} = \frac{1}{2\pi}\underbrace{\left(\frac{10}{R_{\text{nom}}}\ln R\,\Big|_{0.95R_{\text{nom}}}^{1.05R_{\text{nom}}}\right)}_{=\frac{10}{R_{\text{nom}}}(0.1)}\underbrace{\left(\frac{5}{C_{\text{nom}}}\ln C\,\Big|_{0.9C_{\text{nom}}}^{1.1C_{\text{nom}}}\right)}_{=\frac{5}{C_{\text{nom}}}(0.201)} = \frac{1.005}{2\pi R_{\text{nom}}C_{\text{nom}}}$$
To find the variance, determine the MSV of $f_{3\,\text{dB}}$, i.e., $E\{f_{3\,\text{dB}}^2\}$:
$$E\{f_{3\,\text{dB}}^2\} = \frac{1}{4\pi^2}E\left\{\frac{1}{R^2}\right\}E\left\{\frac{1}{C^2}\right\} = \frac{1}{4\pi^2R_{\text{nom}}^2C_{\text{nom}}^2}(1.003)(1.010) = \frac{1.013}{4\pi^2R_{\text{nom}}^2C_{\text{nom}}^2}$$
The variance is
$$E\{f_{3\,\text{dB}}^2\} - \left(E\{f_{3\,\text{dB}}\}\right)^2 = \frac{0.003}{4\pi^2R_{\text{nom}}^2C_{\text{nom}}^2} = 0.003\,f_{3\text{-dB nominal}}^2,$$
or the standard deviation is $\approx 0.055\,f_{3\text{-dB nominal}}$.
Ng

P3.34 (a) Check the two conditions for it to be a valid pdf.
The first condition: Is $f_x(x;t) \geq 0$ for all x and t? The answer is YES.
The second condition: Is the area equal to one?
$$\int_{-\infty}^{\infty} f_x(x;t)\,\mathrm{d}x = \int_{-\infty}^{\infty}\frac{|t|}{2}e^{-|x|/|t|}\,\mathrm{d}x = 2\int_0^{\infty}\frac{|t|}{2}e^{-x/|t|}\,\mathrm{d}x = |t|\cdot|t| = t^2.$$
So the answer is NO.
Therefore the pdf is not valid. But $\frac{|t|}{2}e^{-|t||x|}$ is a valid pdf. As is $\frac{1}{2|t|}e^{-|x|/|t|}$.

(b) pdf is not valid.
(c) $f_x(x;t) = 0$ at $t = 0$.

P3.35 (a) For any time t > 0 (where the time origin is when you start the measurement), the pdf is a Gaussian distribution with zero mean and standard deviation $\sigma(t) = [1 - e^{-10t}]u(t) > 0$. Hence it is a valid pdf but time-varying.
(b) For t > 0, the mean is zero, while the variance is $\sigma^2(t) = [1 - e^{-10t}]^2u(t)$, which is time-varying.
(c) Fig. 3.22 plots $\sigma^2(t)$. Observe that $\sigma^2(t)$ closely approaches 1 when t = 0.5 sec (500 msec). At this point in time the first-order pdf does not change much and the process can be considered first-order stationary.

Figure 3.22: Plot of $\sigma^2(t)$.



P3.36 The mean value of x is:
$$E\{x\} = E\left\{\int_0^{T_b} w(t)g(t)\,\mathrm{d}t\right\} = \int_0^{T_b} E\{w(t)\}g(t)\,\mathrm{d}t = 0. \qquad (3.91)$$
The variance of x is (since the mean is zero, it equals the MSV):
$$\sigma_x^2 = E\{x^2\} = E\left\{\int_{t=0}^{T_b} w(t)g(t)\,\mathrm{d}t\int_{\lambda=0}^{T_b} w(\lambda)g(\lambda)\,\mathrm{d}\lambda\right\} = \int_{t=0}^{T_b}\mathrm{d}t\,g(t)\int_{\lambda=0}^{T_b}\mathrm{d}\lambda\,g(\lambda)\underbrace{E\{w(t)w(\lambda)\}}_{\frac{N_0}{2}\delta(t-\lambda)} = \frac{N_0}{2}\int_0^{T_b} g^2(t)\,\mathrm{d}t = \frac{N_0E_g}{2}. \qquad (3.92)$$
Note: Expectation is in essence an integration operation. Therefore in interchanging the expectation and integration (w.r.t. time) operations we are simply changing the order of integration.

Figure 3.23: Passing a random process x(t) through an ideal low-pass filter H(f) (unit gain, bandwidth W) to produce y(t).

P3.37 (a) The power spectral density (PSD) of y(t) is
$$S_y(f) = S_x(f)|H(f)|^2 = \begin{cases}\frac{N_0}{2}, & |f| \leq W \\ 0, & \text{otherwise}\end{cases} \qquad (3.93)$$

Figure 3.24: $S_x(f) = N_0/2$ for all f; $S_y(f) = N_0/2$ for $|f| \leq W$, zero elsewhere.

(b) The autocorrelation can be found as the inverse Fourier transform of the PSD:
$$R_y(\tau) = \mathcal{F}^{-1}\{S_y(f)\} = \int_{-\infty}^{+\infty} S_y(f)e^{j2\pi f\tau}\,\mathrm{d}f = \int_{-W}^{W}\frac{N_0}{2}e^{j2\pi f\tau}\,\mathrm{d}f \qquad (3.94)\text{--}(3.95)$$
$$\stackrel{x=2\pi f\tau}{=} \frac{N_0}{4\pi\tau}\int_{-2\pi W\tau}^{2\pi W\tau} e^{jx}\,\mathrm{d}x = \frac{N_0}{4\pi\tau}\,2\sin(2\pi W\tau) = \frac{N_0\sin(2\pi W\tau)}{2\pi\tau} = N_0W\,\text{sinc}(2W\tau) \qquad (3.96)\text{--}(3.98)$$
The (normalized) autocorrelation function $R_y(\tau)$ is sketched in Fig. 3.25.

Figure 3.25: Autocorrelation function $R_y(\tau)$ (normalized by $N_0W$), with zeros at $\tau = k/(2W)$, $k \neq 0$.

(c) The DC level of y(t) is
$$m_y = E\{y(t)\} = m_xH(0) = 0\times 1 = 0 \qquad (3.99)$$
The average power of y(t) is
$$\sigma_y^2 = E\{y^2(t)\} = R_y(0) = N_0W \qquad (3.100)$$

(d) Since the input x(t) is a Gaussian process, the output y(t) is also a Gaussian process and the samples $y(kT_s)$ are Gaussian random variables. For Gaussian random variables, statistical independence is equivalent to uncorrelatedness. Thus one needs to find the smallest value of $\tau$ (or $T_s$) so that the autocorrelation is zero. From the graph of $R_y(\tau)$, the answer is:
$$\tau_{\min} = (T_s)_{\min} = \frac{1}{2W} \qquad (3.101)$$
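The sampling conclusion (3.101) can be illustrated directly from (3.98). A sketch (Python, not from the manual; $N_0$ and W are arbitrary choices): at $T_s = 1/(2W)$, every nonzero-lag sample of $R_y(\tau)$ vanishes.

```python
# Sketch (assumption): R_y(tau) = N0*W*sinc(2*W*tau) sampled at Ts = 1/(2W).
import math

def R_y(tau, N0=1.0, W=1000.0):
    x = 2 * W * tau
    return N0 * W * (1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x))

W = 1000.0
Ts = 1 / (2 * W)
samples = [R_y(k * Ts, W=W) for k in range(5)]   # [N0*W, ~0, ~0, ~0, ~0]
```

The k = 0 sample equals $N_0W$ (the power) and all others are numerically zero, so the Gaussian samples are independent.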

Figure 3.26: Plot of power spectral density and autocorrelation of the output process in P3.38.

P3.38 (a) As usual $S_y(f) = |H(f)|^2\frac{N_0}{2}$. See Fig. 3.26 for the plot of $S_y(f)$.
(b)
$$R_y(\tau) = \mathcal{F}^{-1}\{S_y(f)\} = \frac{N_0}{2}\int_{-W}^{W}\left(1 - \frac{|f|}{W}\right)^2 e^{j2\pi f\tau}\,\mathrm{d}f = N_0\int_0^W\left(1 - \frac{f}{W}\right)^2\cos(2\pi f\tau)\,\mathrm{d}f$$
$$= \frac{2N_0}{W(2\pi\tau)^2}\left[1 - \frac{\sin(2\pi W\tau)}{2\pi W\tau}\right] = \frac{2N_0}{W(2\pi\tau)^2}\left[1 - \text{sinc}(2W\tau)\right]$$
See Fig. 3.26 for the plot of $R_y(\tau)$.
(c) DC level is zero. Average power is $N_0\int_0^W\left(1 - \frac{f}{W}\right)^2\mathrm{d}f = \frac{N_0W}{3}$ (watts), which can also be found from $R_y(0)$.
(d) Since $R_y(\tau) > 0$ for $|\tau| < \infty$, there is no finite value of sampling period to yield statistically independent noise samples.

P3.39 (a) The output noise power is $N_0W$ watts.
The input signal PSD is
$$\mathcal{F}\{R_s(\tau)\} = K\int_{-\infty}^{\infty} e^{-a|\tau|}e^{-j2\pi f\tau}\,\mathrm{d}\tau = \frac{2aK}{a^2+4\pi^2f^2} \;\text{(watts/Hz)}.$$
Output signal power is
$$\int_{-W}^{W}\frac{2aK}{a^2+4\pi^2f^2}\,\mathrm{d}f = \frac{2K}{\pi}\tan^{-1}\frac{2\pi W}{a}.$$
Therefore
$$\text{SNR}(W) = \frac{2K}{\pi}\,\frac{\tan^{-1}\frac{2\pi W}{a}}{N_0W}.$$
(b) To find the maximum,
$$\frac{\mathrm{d}\,\text{SNR}(W)}{\mathrm{d}W} = \frac{2K}{\pi N_0}\left[-\frac{\tan^{-1}\frac{2\pi W}{a}}{W^2} + \frac{\frac{2\pi}{a}}{W\left(1+\frac{4\pi^2W^2}{a^2}\right)}\right] = 0,$$
which gives
$$\left(1 + \frac{4\pi^2W^2}{a^2}\right)\tan^{-1}\frac{2\pi W}{a} = \frac{2\pi W}{a}.$$
Let $x \equiv \frac{2\pi W}{a}$. Then one needs to solve $(1+x^2)\tan^{-1}(x) = x$. The only solution is x = 0, i.e., W = 0. This solution could also have been obtained by recognizing that, since the noise PSD is flat and the signal power decays as $1/f^2$, increasing the filter bandwidth can only reduce the SNR.

Figure 3.27: System under consideration in Problem 3.40: a linear filter with impulse response $h(t) = 1/T$ for $0 \leq t \leq T$, zero elsewhere; input x(t), output y(t).

P3.40 (a) The filter is time-invariant (linearity does not matter). The input noise process x(t) is WSS. The output noise process y(t) therefore is also WSS.

(b) The frequency response of the filter in Fig. 3.27 is:
$$H(f) = \int_{-\infty}^{\infty} h(t)e^{-j2\pi ft}\,\mathrm{d}t = \int_0^T\frac{1}{T}e^{-j2\pi ft}\,\mathrm{d}t = \frac{1}{T}\,\frac{e^{-j2\pi ft}}{-j2\pi f}\bigg|_0^T = \frac{1}{j2\pi fT}\left[1 - e^{-j2\pi fT}\right]$$
$$= e^{-j\pi fT}\,\frac{e^{j\pi fT} - e^{-j\pi fT}}{j2\pi fT} = \frac{j2\sin(\pi fT)}{j2\pi fT}e^{-j\pi fT} = \frac{\sin(\pi fT)}{\pi fT}e^{-j\pi fT} = \text{sinc}(fT)e^{-j\pi fT}$$
If $m_x$ is the DC component in x(t), then the DC component in y(t) is
$$m_y = m_xH(0) = m_x\,\text{sinc}(0) = m_x$$
(c) If x(t) is a zero-mean, white noise process with power spectral density $N_0/2$, the power spectral density of y(t) is given by:
$$S_y(f) = S_x(f)|H(f)|^2 = \frac{N_0}{2}\left(\frac{\sin(\pi fT)}{\pi fT}\right)^2 = \frac{N_0}{2}\,\text{sinc}^2(fT)$$
(d) The frequency components that are not present in the output process are the components that make $S_y(f) = 0$. Obviously, these components correspond to $\pi fT = k\pi$, or $f = k/T$, $k = \pm 1, \pm 2, \ldots$.
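The closed form for H(f) in part (b) can be cross-checked against a direct numerical integration of the defining integral. A sketch (Python, not from the manual; the values of T, f and the grid size are arbitrary choices):

```python
# Sketch (assumption): numerically integrate H(f) = (1/T) * int_0^T e^{-j2pi f t} dt
# (midpoint rule) and compare with sinc(fT) * e^{-j pi f T}; nulls at f = k/T.
import cmath, math

def H_numeric(f, T=1e-3, n=20_000):
    dt = T / n
    return sum(cmath.exp(-1j * 2 * math.pi * f * (k + 0.5) * dt)
               for k in range(n)) * dt / T

def H_closed(f, T=1e-3):
    x = f * T
    s = 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)
    return s * cmath.exp(-1j * math.pi * f * T)

T = 1e-3
err = abs(H_numeric(350.0, T) - H_closed(350.0, T))  # agreement off the nulls
null = abs(H_closed(2 / T, T))                       # second null, at f = 2/T
```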

P3.41 Consider the system in Fig. 3.28.

Figure 3.28: The system under consideration in Problem 3.41: x(t) and a copy delayed by T are summed, then differentiated, to give y(t).

(a) Let the input x(t) and output y(t) be deterministic signals. Then
$$y(t) = \frac{\mathrm{d}}{\mathrm{d}t}\left[x(t) + x(t-T)\right] = \frac{\mathrm{d}}{\mathrm{d}t}x(t) + \frac{\mathrm{d}}{\mathrm{d}t}x(t-T) \qquad (3.102)$$
Obviously, the system is LTI. Since the response of an LTI system to a stationary process is also a stationary process, it follows that y(t) is a wide-sense stationary process.
(b) Taking the Fourier transform of both sides of (3.102) and using the differentiation and shifting properties of the FT gives:
$$Y(f) = (j2\pi f)\left[1 + e^{-j2\pi fT}\right]X(f)$$
Thus, the frequency response of the system is:
$$H(f) = \frac{Y(f)}{X(f)} = (j2\pi f)\left[1 + e^{-j2\pi fT}\right]$$
(c) Now consider the input x(t) being a white noise process with PSD $S_x(f) = \frac{N_0}{2}$, $-\infty \leq f \leq \infty$. The PSD of y(t) is:
$$S_y(f) = S_x(f)|H(f)|^2 = \frac{N_0}{2}|H(f)|^2$$
Since $|H(f)|^2 = 16\pi^2f^2\cos^2(\pi fT)$, it follows that
$$S_y(f) = 8N_0\pi^2f^2\cos^2(\pi fT)$$
The above (normalized) PSD is plotted in Fig. 3.29.
(d) Here we need to find frequency components such that $S_y(f) = 0$:
$$S_y(f) = 8N_0\pi^2f^2\cos^2(\pi fT) = 0 \;\Rightarrow\; f = 0 \text{ or } \cos(\pi fT) = 0 \;\Rightarrow\; f = 0 \text{ or } f = \frac{2k+1}{2T},\; k = 0, \pm 1, \pm 2, \ldots$$

P3.42 (a) The PSD of m(t) is found by taking the Fourier transform of the autocorrelation function $R_m(\tau)$:
$$S_m(f) = \mathcal{F}\{R_m(\tau)\} = \int_{-\infty}^{\infty} Ae^{-|\tau|}e^{-j2\pi f\tau}\,\mathrm{d}\tau = \int_{-\infty}^{0} Ae^{(1-j2\pi f)\tau}\,\mathrm{d}\tau + \int_0^{\infty} Ae^{-(1+j2\pi f)\tau}\,\mathrm{d}\tau$$
$$= \frac{A}{1-j2\pi f} + \frac{A}{1+j2\pi f} = \frac{2A}{1+4\pi^2f^2}. \qquad (3.103)$$

Figure 3.29: Plot of $S_y(f)$ (for Problem 3.41).

Figure 3.30: System under consideration in Problem 3.42: $r(t) = m(t) + n(t)$ passes through an ideal LPF H(f) of unit gain and bandwidth W to give $m_o(t) + n_o(t)$.

(b) Since the filter is an ideal LPF of bandwidth W, the PSD of the output message is:
$$S_{m_o}(f) = \begin{cases}S_m(f), & -W \leq f \leq W \\ 0, & \text{otherwise}\end{cases} \qquad (3.104)$$

Figure 3.31: The power spectral density $S_m(f)$.


Thus, the power of the output message is:
$$P_{m_o} = \int_{-\infty}^{\infty} S_{m_o}(f)\,\mathrm{d}f = \int_{-W}^{W} S_m(f)\,\mathrm{d}f = \int_{-W}^{W}\frac{2A}{1+4\pi^2f^2}\,\mathrm{d}f = 2\,\frac{2A}{4\pi^2}\int_0^W\frac{1}{\left(\frac{1}{2\pi}\right)^2 + f^2}\,\mathrm{d}f$$
$$= \frac{A}{\pi^2}(2\pi)\tan^{-1}(2\pi f)\Big|_0^W = \frac{2A}{\pi}\tan^{-1}(2\pi W). \qquad (3.105)$$
To find the power percentages at the filter output, note that the power of the input message is $P_m = R_m(0) = A$. Therefore

W (Hz)   P_mo      Percentage = P_mo/A
10       0.9899A   98.99%
50       0.9979A   99.79%
100      0.9989A   99.89%
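The entries in the table can be reproduced from (3.105), and the same expressions give the SNR computed in part (d) below. A sketch (Python, not from the manual):

```python
# Sketch (assumption): fraction of message power passed, P_mo/A = (2/pi)*atan(2*pi*W),
# and the resulting output SNR for the part (d) numbers.
import math

def frac_power(W):
    return (2 / math.pi) * math.atan(2 * math.pi * W)   # P_mo / A

fracs = {W: frac_power(W) for W in (10, 50, 100)}       # table entries

A, W, N0 = 4e-3, 4e3, 1e-8                              # part (d) values
snr_db = 10 * math.log10(frac_power(W) * A / (N0 * W))  # ~20 dB
```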

(c) Since the PSD of the output noise is:
$$S_{n_o}(f) = \begin{cases}\frac{N_0}{2}, & -W \leq f \leq W \\ 0, & \text{otherwise}\end{cases} \qquad (3.106)$$
the noise power at the output of the filter is:
$$P_{n_o} = \int_{-W}^{W}\frac{N_0}{2}\,\mathrm{d}f = \frac{N_0}{2}\,2W = N_0W. \qquad (3.107)$$
(d) Let $A = 4\times 10^{-3}$ watts, W = 4 kHz and $N_0 = 10^{-8}$ watts/Hz. Determine the SNR in dB at the filter output.
$$P_{m_o} = \frac{2A}{\pi}\tan^{-1}(2\pi W) \approx A.$$
This is because the bandwidth W = 4 kHz is large enough to practically pass all the message power.
$$P_{n_o} = N_0W = 10^{-8}\times 4\times 10^3 = 4\times 10^{-5}.$$
$$\text{SNR} = 10\log_{10}\frac{P_{m_o}}{P_{n_o}} = 10\log_{10}\frac{4\times 10^{-3}}{4\times 10^{-5}} = 20\;\text{dB}.$$
P3.43
$$R_{yw}(\tau) = E\{y(t)w(t+\tau)\} = E\left\{\int_{-\infty}^{\infty} h(\lambda)[s(t-\lambda) + w(t-\lambda)]\,\mathrm{d}\lambda\; w(t+\tau)\right\}$$
$$= \int_{-\infty}^{\infty} h(\lambda)\Bigg[\underbrace{E\{s(t-\lambda)w(t+\tau)\}}_{=0\;\text{(they are uncorrelated)}} + \underbrace{E\{w(t-\lambda)w(t+\tau)\}}_{=\frac{N_0}{2}\delta(\tau+\lambda)}\Bigg]\mathrm{d}\lambda = \frac{N_0}{2}h(-\tau)$$


Chapter 4

Sampling and Quantization
P4.1 The spectrum of the analog signal is given by
$$S(f) = \frac{5}{2}[\delta(f-500) + \delta(f+500)] + \frac{2}{2}[\delta(f-1800) + \delta(f+1800)] \;\text{(volts/Hz)}. \qquad (4.1)$$

Figure 4.1: S(f), impulses of weight 2.5 at $f = \pm 500$ Hz and weight 1 at $f = \pm 1800$ Hz.

The sampled spectrum $S_{\text{sampled}}(f)$ is the sum of all the versions shifted by multiples of $f_s$ (and scaled by $1/T_s = f_s$, which we will assume is understood). $S_{\text{sampled}}(f)$ plots as follows:

Figure 4.2: $S_{\text{sampled}}(f)$, with impulses at $\pm 200$, $\pm 500$, $\pm 1500$, $\pm 1800$, $\pm 2500$ Hz, etc., the pattern repeating at multiples of the sampling rate (1 kHz and 2 kHz markers shown).

(a) Output spectrum is
$$\underbrace{2.5[\delta(f-500) + \delta(f+500)]}_{\text{This is due to the analog signal}} + \underbrace{[\delta(f-200) + \delta(f+200)]}_{\text{This is an alias, due to undersampling}}. \qquad (4.2)$$
Output time signal is
$$5\cos(2\pi(500)t) + 2\cos(2\pi(200)t). \qquad (4.3)$$
(b) Output spectrum is
$$\underbrace{2.5[\delta(f-500) + \delta(f+500)] + [\delta(f-1800) + \delta(f+1800)]}_{\text{This is due to the analog signal}}$$
$$+ \underbrace{[\delta(f-200) + \delta(f+200)] + 2.5[\delta(f-1500) + \delta(f+1500)]}_{\text{Aliases}}. \qquad (4.4)$$
Output time signal is
$$5\cos(2\pi(500)t) + 2\cos(2\pi(1800)t) + 2\cos(2\pi(200)t) + 5\cos(2\pi(1500)t). \qquad (4.5)$$
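The alias frequencies quoted above follow from folding each tone about multiples of the sampling rate. A sketch (Python, not from the manual; the 1 kHz sampling rate of part (a) is inferred from the figure, so treat it as an assumption):

```python
# Sketch (assumption): apparent (folded) frequency of a sampled tone.
def folded(f, fs):
    """Frequency to which a tone at f appears after ideal sampling at fs."""
    r = f % fs
    return min(r, fs - r)

fs = 1000                     # assumed sampling rate for part (a), in Hz
a_500 = folded(500, fs)       # the 500 Hz tone is preserved
a_1800 = folded(1800, fs)     # the 1800 Hz tone aliases down to 200 Hz
```

This reproduces the 500 Hz and 200 Hz components of (4.3).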

we
P4.2 First we have the following transform pair:
$$s(t) = \int_{-W}^{W} e^{j2\pi ft}\,\mathrm{d}f = \frac{e^{j2\pi Wt} - e^{-j2\pi Wt}}{j2\pi t} = \frac{\sin 2\pi Wt}{\pi t} \;\longleftrightarrow\; S(f) = u(f+W) - u(f-W).$$

Figure 4.3: S(f), a unit-height rectangle over $-W \leq f \leq W$ (Hz).

Therefore
$$\int_{-\infty}^{\infty}\underbrace{\sqrt{T_s}\,\frac{\sin 2\pi W(t-nT_s)}{\pi(t-nT_s)}}_{s_n(t)}e^{-j2\pi ft}\,\mathrm{d}t \stackrel{\lambda=t-nT_s}{=} \sqrt{T_s}\int_{-\infty}^{\infty}\frac{\sin 2\pi W\lambda}{\pi\lambda}e^{-j2\pi f(\lambda+nT_s)}\,\mathrm{d}\lambda = \sqrt{T_s}\,S(f)e^{-j2\pi fnT_s}. \qquad (4.6)$$
$$\int_{-\infty}^{\infty} s_n(t)s_m(t)\,\mathrm{d}t = \int_{-\infty}^{\infty}\sqrt{T_s}\,S(f)e^{-j2\pi fnT_s}\underbrace{\left[\sqrt{T_s}\,S(f)e^{-j2\pi fmT_s}\right]^*}_{S^*(f)\,=\,S(f)\,=\,u(f+W)-u(f-W)}\,\mathrm{d}f = T_s\int_{-W}^{W} e^{-j2\pi f(n-m)T_s}\,\mathrm{d}f$$
$$= T_s\,\frac{e^{j2\pi(n-m)WT_s} - e^{-j2\pi(n-m)WT_s}}{j2\pi(n-m)T_s} \stackrel{2WT_s=1}{=} \frac{\sin(\pi(n-m))}{\pi(n-m)} = \begin{cases}0, & n \neq m \text{ (orthogonal)} \\ 1, & n = m \text{ (normal)}\end{cases} \qquad (4.7)$$

P4.3 (a)
Rwout(τ) = F⁻¹{Swout(f)} = N0W sin(2πWτ)/(2πWτ) (watts). (4.8)

Figure 4.4: White noise w(t) with PSD Sw(f) = N0/2 watts/Hz passed through an ideal lowpass filter H(f) of bandwidth W gives Swout(f) = |H(f)|² N0/2 = N0/2 for |f| ≤ W.

we
(b) No. It depends on the transfer function, H(f), and the input PSD. However, if the input is a Gaussian process then so is the output, since the output is obtained by linear operations; essentially the output is a weighted linear sum of the input, w(t).
(c) At τ = kTs:
Rwout(kTs) = N0W sin(2πWkTs)/(2πWkTs) = (2WTs = 1) N0W sin(πk)/(πk) = 0, k ≠ 0. (4.9)
The above means that the samples are uncorrelated. And because they are Gaussian, they are statistically independent.
(d) If the sampling rate is increased the samples are definitely correlated, except for very special cases: for example, if Ts is halved then the set of alternate samples would still be uncorrelated, etc. If the rate is decreased then, if Ts is increased by an integer multiple, the samples are uncorrelated, otherwise correlated. Statistical independence depends on the process being Gaussian and the samples being uncorrelated. If not Gaussian, nothing can be said about statistical independence even if uncorrelated.

P4.4 8-bit → 256 levels → step size = 2/256 = 7.81 mV.
12-bit → 4096 levels → step size = 2/4096 = 0.488 mV.
16-bit → 65,536 levels → step size = 2/2¹⁶ = 30.5 µV.
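The step sizes follow directly from Δ = 2/2ⁿ for a full-scale range of [−1, 1] volts; a quick Python check (a sketch, not from the manual):

```python
# Step size of an n-bit uniform quantizer over the range [-1, 1] volts
for n in (8, 12, 16):
    delta = 2 / 2**n
    print(f"{n}-bit: {2**n} levels, step = {delta * 1e3:.4f} mV")
```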

P4.5 3-bit → 8 levels; 4 on each side of zero.
Figure 4.5: fm(m), with the target levels and thresholds marked on −4 ≤ m ≤ 4.
By inspection the uniform quantizer is the optimum one.


Signal power is:
σm² = ∫ from −4 to 4 of m² fm(m) dm = 2 ∫ from 0 to 4 of m² fm(m) dm = 2[ (1/4)(m³/3)|₀¹ + (1/12)(m³/3)|₁⁴ ] = 11/3 (watts). (4.10)

Quantization noise power:
σq² = 2[ (1/4)∫₀¹ (m − 1/2)² dm + (1/12)∫₁² (m − 3/2)² dm + (1/12)∫₂³ (m − 5/2)² dm + (1/12)∫₃⁴ (m − 7/2)² dm ].
Changing variables (λ = m − 1/2, m − 3/2, m − 5/2, m − 7/2 respectively), each term becomes an integral of λ² over [−1/2, 1/2]; collecting the terms gives
σq² = 2/3 (watts). (4.11)

Therefore
SNRq = σm²/σq² = (11/3)/(2/3) = 5.5, or 10 log10 5.5 = 7.4 dB.
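The final ratio is simple arithmetic on the powers from (4.10) and (4.11); a Python sketch of the check:

```python
import math

sigma_m2 = 11 / 3   # signal power from (4.10), watts
sigma_q2 = 2 / 3    # quantization noise power from (4.11), watts
snr = sigma_m2 / sigma_q2
print(snr, 10 * math.log10(snr))  # 5.5 and about 7.4 dB
```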

Figure 4.6: fm(m) (1/volts) for P4.6, with target levels T1, T2 and decision boundary D1 marked on the positive axis.


P4.6
T1 = [∫ from 0 to D1 of m fm(m) dm] / [∫ from 0 to D1 of fm(m) dm] = (1 + 8D1²)/(8 + 16D1) (4.12)
T2 = [∫ from D1 to 1 of m fm(m) dm] / [∫ from D1 to 1 of fm(m) dm] = (1 − D1²)/(2(1 − D1)) = (1 + D1)/2 (4.13)
D1 = (T1 + T2)/2 (4.14)
⇒ 2D1 = (1 + 8D1²)/(8 + 16D1) + (1 + D1)/2 ⇒ D1 = 0.4478 (4.15)
⇒ T1 = 0.1717; T2 = 0.7239.
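The three equations can also be solved by simple fixed-point iteration instead of fsolve; a Python sketch using the same relations (4.12)-(4.14):

```python
# Fixed-point iteration on D1 = (T1 + T2)/2 with T1, T2 given by (4.12), (4.13)
D1 = 0.5  # initial guess: uniform-quantizer boundary
for _ in range(200):
    T1 = (1 + 8 * D1**2) / (8 + 16 * D1)
    T2 = 0.5 * (1 + D1)
    D1 = 0.5 * (T1 + T2)
print(D1, T1, T2)  # about 0.4478, 0.1717, 0.7239
```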

P4.7 Regardless of the quantizer used, the signal power is:
σm² = E{m²} = ∫ from −1 to 1 of m²(1 − |m|) dm = 2 ∫ from 0 to 1 of m²(1 − m) dm = 1/6 watts, (4.16)
where fm(m) = 1 − |m|, −1 ≤ m ≤ 1.

Rather than solving the 1-bit, 2-bit and 3-bit quantizers separately, we first set up general expressions for σq², Di and Ti in terms of n, the number of bits used for quantization. Before this, we observe that the pdf is symmetrical about zero and that the number of levels is even (= 2ⁿ). Therefore the decision boundaries and target levels shall be symmetrical about zero, so we need only consider the positive m axis. The picture looks as follows:
Figure 4.7: K = L/2 = 2ⁿ/2 = 2^{n−1} target levels T0, ..., T_{K−1} on the positive axis, with boundaries D0 = 0 < D1 < ... < D_{K−1} < D_K = 1 (L is the total number of target levels).
Ng

General expressions for the uniform quantizer:
The decision intervals Di+1 − Di are equal: Δ = 1/2^{n−1} = 1/K.
The target levels Ti are in the middle of each interval, i.e., Ti = (Di + Di+1)/2, i = 0, ..., K − 1; another way of expressing Ti is Ti = (i + 1/2)Δ, i = 0, ..., K − 1, with Δ = Di+1 − Di = 1/2^{n−1}.


The quantization noise power is given by:
σq² = 2 Σ_{i=0}^{K−1} ∫ from Di to Di+1 of (m − Ti)²(1 − m) dm = (λ = m − Ti) 2 Σ_{i=0}^{K−1} ∫ from −1/2ⁿ to 1/2ⁿ of λ²(1 − Ti − λ) dλ
= 2 Σ_{i=0}^{K−1} [ (1 − Ti) ∫ from −1/2ⁿ to 1/2ⁿ of λ² dλ − ∫ from −1/2ⁿ to 1/2ⁿ of λ³ dλ ]   (the λ³ integral = 0)
= [4/(3·2^{3n})] Σ_{i=0}^{K−1} (1 − Ti) = [1/(3·2^{3n−2})] Σ_{i=0}^{K−1} (1 − Ti). (4.18)
Consider
Σ_{i=0}^{K−1} (1 − Ti) = K − Δ[ (K − 1)K/2 + K/2 ] = K − ΔK²/2 = (ΔK = 1) K − K/2 = K/2 = 2^{n−2}. (4.19)
⇒ σq² = 2^{n−2}/(3·2^{3n−2}) = 1/(3·2^{2n}) watts. (4.20)
&
It is now easy to find SNRq for any number of bits:
SNRq = σm²/σq² = 3·2^{2n}/6 = 2^{2n−1}, or 10(2n − 1) log10 2 ≈ (2n − 1)·3 dB. (4.21)

n         1   2   3    4
SNRq      2   8   32   128
SNRq (dB) 3   9   15   21
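The table follows directly from SNRq = 2^{2n−1} in (4.21); a Python sketch of the check:

```python
import math

for n in range(1, 5):
    snr = 2**(2 * n - 1)   # SNRq = 3*2^(2n)/6 from (4.21)
    print(n, snr, round(10 * math.log10(snr), 1))
```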
uy

Turning our attention to the optimum quantizer:
The equations for the decision boundaries are easy to write. They are:
Di = (Ti−1 + Ti)/2, i = 1, 2, ..., K − 1 = 2^{n−1} − 1. (4.22)
For the target levels, consider the generic interval [Di, Di+1] as shown in Fig. 4.8. The centroid of this probability mass is:
Ti = [∫ from Di to Di+1 of m(1 − m) dm] / [∫ from Di to Di+1 of (1 − m) dm]
= [3(Di+1 + Di) − 2(Di+1² + Di+1 Di + Di²)] / [3(2 − (Di+1 + Di))], i = 0, 1, ..., K − 1. (4.23)

Figure 4.8: The generic interval [Di, Di+1] under the density 1 − m, with target level Ti at its centroid.

(Note that D0 = 0, DK = 1).

The above set of K + K − 1 = 2ⁿ − 1 nonlinear algebraic equations needs to be solved for the decision boundaries and target levels. For n = 1 & 2 this can be done reasonably simply.
1-bit optimum quantizer
Only the target level, T0, needs to be determined (D0 = 0, D1 = 1):
T0 = [3(D1 + D0) − 2(D1² + D1 D0 + D0²)] / [3(2 − (D1 + D0))] = (3 − 2)/3 = 1/3. (4.24)
2-bit optimum quantizer
Have 2² − 1 = 3 equations in T0, T1, D1 with D0 = 0, D2 = 1.

D1 = (T0 + T1)/2, or 2D1 = T0 + T1 (4.25)
T0 = [3(D1 + D0) − 2(D1² + D1 D0 + D0²)] / [3(2 − (D1 + D0))] = (3D1 − 2D1²)/(3(2 − D1)) (4.26)
T1 = [3(D2 + D1) − 2(D2² + D2 D1 + D1²)] / [3(2 − (D2 + D1))] = [3(1 + D1) − 2(1 + D1 + D1²)] / [3(2 − (1 + D1))] = (1 + D1 − 2D1²)/(3(1 − D1)) (4.27)
⇒ 2D1 = (3D1 − 2D1²)/(3(2 − D1)) + (1 + D1 − 2D1²)/(3(1 − D1)). (4.28)
This simplifies to D1³ − 4D1² + 4D1 − 1 = 0. Use either the roots function in Matlab or observe that D1 = 1 is a root and therefore the polynomial factors as (D1 − 1)(D1² − 3D1 + 1). The roots of the quadratic are (3 ± √5)/2.
Choosing (3 − √5)/2 (since D1 must lie in [0, 1]) we have D1 = 0.382 and T0 = 0.176, T1 = 0.588.
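The root and the resulting target levels can be checked numerically; a Python sketch using the closed-form root and equations (4.26), (4.27):

```python
import math

D1 = (3 - math.sqrt(5)) / 2                     # root of D1^2 - 3*D1 + 1 lying in [0, 1]
assert abs(D1**3 - 4 * D1**2 + 4 * D1 - 1) < 1e-12  # satisfies the cubic
T0 = (3 * D1 - 2 * D1**2) / (3 * (2 - D1))      # from (4.26)
T1 = (1 + D1 - 2 * D1**2) / (3 * (1 - D1))      # from (4.27)
print(D1, T0, T1)  # about 0.382, 0.176, 0.588
```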

3-bit optimum quantizer


There are 2³ − 1 = 7 equations in the unknowns T0, D1, T1, D2, T2, D3, T3. In particular, 3 for the decision boundaries: D1 = (T0 + T1)/2, D2 = (T1 + T2)/2, D3 = (T2 + T3)/2, and 4 for the target levels (D0 = 0, D4 = 1):

T0 = (3D1 − 2D1²)/(3(2 − D1)) (4.29)
T1 = [3(D2 + D1) − 2(D2² + D2 D1 + D1²)] / [3(2 − (D2 + D1))] (4.30)
T2 = [3(D3 + D2) − 2(D3² + D3 D2 + D2²)] / [3(2 − (D3 + D2))] (4.31)
T3 = (1 + D3 − 2D3²)/(3(1 − D3)). (4.32)

To solve the above set of nonlinear algebraic equations the function fsolve in the optimization
toolbox of Matlab was used. The obtained solution is
D1 = 0.1879; D2 = 0.3920; D3 = 0.6243;

T0 = 0.0907; T1 = 0.2851; T2 = 0.4990; T3 = 0.7495.

Turning to the quantization noise power, σq², for the optimum quantizer, we derive at first a general expression for the noise power when m falls in the region [Di, Di+1]. It is
σq²(i) = ∫ from Di to Di+1 of (m − Ti)²(1 − m) dm = (λ = m − Ti) (1 − Ti) ∫ from Di−Ti to Di+1−Ti of λ² dλ − ∫ from Di−Ti to Di+1−Ti of λ³ dλ
= [(1 − Ti)/3][(Di+1 − Ti)³ − (Di − Ti)³] + (1/4)[(Di − Ti)⁴ − (Di+1 − Ti)⁴] (4.33)
and σq² = 2 Σ_{i=0}^{K−1} σq²(i).
Programming this in Matlab gives:
2-bit quantizer: σq² = 0.0155;
3-bit quantizer: σq² = 0.0041.
The quantization signal-to-noise ratios are therefore:

           1-bit               2-bit                  3-bit
SNRq       (1/6)/(1/18) = 3    (1/6)/0.0155 = 10.75   (1/6)/0.0041 = 40.65
SNRq (dB)  4.77                10.3                   16.1

Remark: Compared to the uniform quantizer, the optimum quantizer gives the biggest performance improvement at 1 bit. As the number of bits increases the performance improvement becomes less and less; the optimum quantizer tends to a uniform quantizer for large n (number of bits).
Matlab code

(i) Thresholds and decision levels (2-bit quantizer)


function F=quantizer(x)
F=[x(1) - (1 + 8*x(3)^2)/(8 + 16*x(3)); x(2) - 0.5*(1 + x(3));
   x(3) - 0.5*(x(1) + x(2))] % x(1) = T_1; x(2) = T_2; x(3) = D_1

x0=[0.25 0.75 0.5]; % initial guess - basically uniform quantizer values
options = optimset('Display','iter');
[x, fval]=fsolve(@quantizer, x0, options)

(ii) sigmaq=0; K= ;
D_vec=[0, D_1, D_2,...,D_{K-1}, 1];
T=[T_0,...,T_{K-1}];
for i=1:1:K
    a=1-T(i); b=D_vec(i+1)-T(i); c=D_vec(i)-T(i);
    sigmaq = sigmaq + a*(b^3-c^3)/3 + 0.25*(c^4 - b^4); % accumulate over intervals
end
sigmaq=2*sigmaq;

P4.8 (a) Since fm(m) is symmetric about zero, one only needs to compute the decision and target values for the positive m axis:
Tl = [∫ from Dl to Dl+1 of m fm(m) dm] / [∫ from Dl to Dl+1 of fm(m) dm] (4.34)
= [(c/2)∫ from Dl to Dl+1 of m exp(−cm) dm] / [(c/2)∫ from Dl to Dl+1 of exp(−cm) dm] = [∫ from Dl to Dl+1 of m exp(−cm) dm] / [∫ from Dl to Dl+1 of exp(−cm) dm]. (4.35)

Applying integration by parts, one obtains:
∫ exp(−cm) dm = −(1/c) exp(−cm) (4.36)
∫ m exp(−cm) dm = −(m/c) exp(−cm) − (1/c²) exp(−cm). (4.37)
Thus
Tl = [Dl+1 exp(−cDl+1) − Dl exp(−cDl)] / [exp(−cDl+1) − exp(−cDl)] + 1/c. (4.38)
The average signal power is
σm² = ∫ from −∞ to +∞ of m² fm(m) dm = c ∫ from 0 to +∞ of m² exp(−cm) dm = 2/c². (4.39)
The following integrals are useful when computing the average quantization noise power:
I1(a, b) = ∫ from a to b of m² exp(−cm) dm = [−exp(−cm)(m²/c + 2m/c² + 2/c³)] evaluated from a to b (4.40)
I2(a, b) = ∫ from a to b of m exp(−cm) dm = [−exp(−cm)(m/c + 1/c²)] evaluated from a to b. (4.41)
c c a

(b) One-bit uniform quantizer: L = 2, D0 = 0, D1 = +, one need to find T0 . Since


mmax +, one can assume that mmax will not exceed a value p with the proba-
bility and then compute Dl , Tl corresponding to p and . Therefore,
Z p
=

Sh 0 2
1

1
p = ln(1 2).
c
Then the target value for this case will be
1
cexp(cm)dm = [1 exp(cp)]
2
(4.42)

(4.43)

1
&
T0 = ln(1 2). (4.44)
2c
The average quantization noise power is:
σq² = 2 ∫ from 0 to ∞ of (m − T0)² fm(m) dm = c ∫ from 0 to ∞ of (m − T0)² exp(−cm) dm
= c[ I1(0, +∞) − 2T0 I2(0, +∞) + T0²/c ]
= (1/c²)[ 2 + ln(1 − 2ε) + (1/4) ln²(1 − 2ε) ]. (4.45)
The quantization signal-to-noise ratio is therefore:
SNRq = σm²/σq² = 2 / [ 2 + ln(1 − 2ε) + (1/4) ln²(1 − 2ε) ]. (4.46)

(c) One-bit optimum quantizer: L = 2, D0 = 0 and D1 = +∞. Use (4.38) to find T0:
T0 = [D1 exp(−cD1) − D0 exp(−cD0)] / [exp(−cD1) − exp(−cD0)] + 1/c = [D1 exp(−cD1)] / [exp(−cD1) − 1] + 1/c → 1/c as D1 → +∞. (4.47)
Thus the quantizer maps all the positive values of the sampled message to 1/c and all the negative values to −1/c.
Similar to (4.45), the quantization noise power is
σq² = 2 ∫ from 0 to ∞ of (m − T0)² fm(m) dm = c ∫ from 0 to ∞ of (m − T0)² exp(−cm) dm = c[ I1(0, +∞) − 2T0 I2(0, +∞) + T0²/c ] = 1/c². (4.48)
The quantization signal-to-noise ratio is therefore:
SNRq = σm²/σq² = (2/c²)/(1/c²) = 2. (4.49)

Sh
(d) Two-bit uniform quantizer: Similar to the one-bit uniform quantizer, one has L = 4, D0 =
0, D2 = 1c ln(12). Thus, D1 = 2c
1 1
ln(12), T0 = 4c 3
ln(12), T1 = 4c ln(12).
Two-bit optimum quantizer: L = 4, D0 = 0, D2 = + Need to find T0 , T1 and D1 .
Using (4.38) one has:
D1 exp(cD1 ) 1
T0 = + (4.50)
&
exp(cD1 ) 1 c
1
T1 = D1 + (4.51)
c
Next, the equation to solve for D1 is:
T0 + T1
en

D1 =
2
1 D1 exp(cD1 ) 1
= D1 + + (4.52)
c exp(cD1 ) 1 c
Let x = cD1 , then after some manipulations, the above equation becomes:
uy

x = 2 2ex (4.53)
Equation (4.53) can be solved by trial and error. One method is to plot 2 2ex and x
versus x and look at the intersection point (see Fig. 4.9). The solution is x = 1.59
D1 = 1.59 T1 = D1 + 1c = 2.59 1
c and T0 = D1 c = c .
0.59
Ng

c
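Since the map x → 2 − 2e^{−x} is a contraction near the solution, the intersection can be found by fixed-point iteration instead of plotting; a Python sketch (c is just a scale factor, set to 1 here for illustration):

```python
import math

x = 1.0                        # any positive starting point works
for _ in range(100):
    x = 2 - 2 * math.exp(-x)   # iterate (4.53)
c = 1.0
D1, T0, T1 = x / c, (x - 1) / c, (x + 1) / c   # T0 = 2*D1 - T1 = (x - 1)/c
print(x, D1, T0, T1)  # x about 1.59, so D1 = 1.59/c, T0 = 0.59/c, T1 = 2.59/c
```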
For these two quantizers, the average quantization noise power can be computed as
σq² = 2[ σq²(0) + σq²(1) ], (4.54)
where
σq²(0) = (c/2)[ I1(0, D1) − 2T0 I2(0, D1) + (T0²/c)(1 − exp(−cD1)) ], (4.55)
σq²(1) = (c/2)[ I1(D1, +∞) − 2T1 I2(D1, +∞) + (T1²/c) exp(−cD1) ], (4.56)
and the quantization signal-to-noise ratio can be found as SNRq = σm²/σq².

Figure 4.9: Plots of 2 − 2e^{−x} and x versus x; the curves intersect at x ≈ 1.59.

P4.9 To be added.
P4.10 To be added.

P4.11 By inspection the optimum quantizer is a uniform quantizer with decision boundaries at kΔ, (k + 1)Δ, etc., and a target level of kΔ + Δ/2. With these values the target level is the center of gravity of the probability mass density and the decision boundary is the arithmetic mean of the two adjacent target values, i.e., the optimum equations are satisfied.
Note: This implies that as the number of quantization bits, R, increases, the approximation becomes better and better and the uniform quantizer's performance approaches that of an optimum quantizer. Given the manufacturing simplicity of a uniform quantizer, there is little or no profit in going to an optimum quantizer.

P4.12 Ignoring for the moment the range of mout, the first observation is that for fmout(mout) to be uniform the RHS of (P4.5) must be a constant, i.e., dg(m)/dm ∝ fm(m), which means g(m) is the cumulative distribution function of m(t): g(m) = Fm(m). Fm(m) is monotonic and ranges from 0 (at −∞) to 1 (at +∞). Therefore fmout(mout) is uniform over [0, 1].
Ng

Pass mout (t) through a linear (mathematically more appropriately called affine) transforma-
tion which shifts it to lie in the [1, 1] range.

1
y(t) = 2 mout (t) (4.57)
2
1
fy (y) = [u(y + 1) u(y 1)] . (4.58)
2

Refuses, because the quantizer in all likelihood would not be robust enough. It would be
sensitive to the assumed pdf model for m(t).

P4.13 Crest factor F is defined as
F = (peak value of the signal)/(RMS value of the signal) = mmax/σm. (4.59)
(a) A sinusoid with peak amplitude mmax: VRMS = mmax/√2 ⇒ F = √2.
(b) A square wave of period T and amplitude range [−mmax, mmax]: VRMS = mmax ⇒ F = 1.
(c) Uniform over the amplitude range [−mmax, mmax]: σm² = E{m²} = 2 ∫ from 0 to mmax of m²/(2mmax) dm = mmax²/3 ⇒ VRMS = mmax/√3 ⇒ F = √3 = 1.73.
(d) Zero-mean Gaussian with variance σm²: VRMS = σm. The equation to solve for mmax is
∫ from 0 to mmax of (1/√(2πσm²)) exp(−m²/(2σm²)) dm = 0.99/2. (4.60)
By changing the variable t = m/(√2 σm) the above can be rewritten as
(2/√π) ∫ from 0 to mmax/(√2σm) of e^{−t²} dt = 0.99 ⇒ erf(mmax/(√2σm)) = 0.99
⇒ mmax/(√2σm) = erf⁻¹(0.99) = 1.8214 ⇒ mmax = √2 × 1.8214 σm ⇒ F = 2.5758. (4.61)
(e) Zero-mean Laplacian: σm² = E{m²} = 2/c². Now solve for mmax:
2 ∫ from 0 to mmax of (c/2) exp(−cm) dm = 1 − exp(−cmmax) = 0.99 ⇒ mmax = −ln(0.01)/c = 4.6052/c
⇒ F = 4.6052/√2 = 3.2563. (4.62)
(f) Zero-mean Gamma. The pdf is
fm(m) = √(k/(4π|m|)) exp(−k|m|) (4.63)
σm² = 2 ∫ from 0 to ∞ of m² √(k/(4πm)) exp(−km) dm = √(k/π) ∫ from 0 to ∞ of m^{3/2} exp(−km) dm. (4.64)

Integrate by parts (or use the Gamma function) to obtain:
∫ from 0 to ∞ of m^{3/2} exp(−km) dm = (3/(4k²))√(π/k). (4.65)
Combining (4.64) and (4.65) yields σm² = √(k/π) · (3/(4k²))√(π/k) = 3/(4k²) ⇒ σm = √3/(2k).
Now mmax is found as follows:
2 ∫ from 0 to mmax of √(k/(4πm)) exp(−km) dm = 0.99 ⇒ (t = √(km)) (2/√π) ∫ from 0 to √(kmmax) of e^{−t²} dt = 0.99
⇒ erf(√(kmmax)) = 0.99 ⇒ √(kmmax) = erf⁻¹(0.99)
⇒ mmax = [erf⁻¹(0.99)]²/k = 3.3174/k (4.66)
⇒ F = mmax/σm = (3.3174/k)(2k/√3) = 3.8307. (4.67)
Therefore if one was to list the signals from least peaked to most peaked, they are:
square wave, sinusoid, uniform, Gaussian, Laplacian, Gamma,
which, from either the waveshapes or the pdfs, makes some intuitive sense.
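Several of the crest factors above reduce to closed forms that are easy to verify; a Python sketch of the sinusoid, uniform and Laplacian cases (the variable names are illustrative, not from the manual):

```python
import math

F_sine = math.sqrt(2)                           # part (a)
F_uniform = math.sqrt(3)                        # part (c)
F_laplace = -math.log(0.01) / math.sqrt(2)      # part (e): (4.6052/c) / (sqrt(2)/c)
print(F_sine, F_uniform, F_laplace)  # about 1.414, 1.732, 3.256
```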

P4.14 (a) Sinusoid: mmax cos(2πfr t), T = 1/fr. Then over half a period the average value of |m(t)| is
E{|m(t)|} = (1/(T/2)) ∫ from −T/4 to T/4 of mmax cos(2πfr t) dt = 2mmax/π (4.68)
VRMS = mmax/√2 ⇒ E{|m(t)|}/VRMS = 2√2/π = 0.9. (4.69)
(b) Square wave:
E{|m(t)|} = mmax, VRMS = mmax ⇒ E{|m(t)|}/VRMS = 1. (4.70)
(c) Uniform:
E{|m|} = ∫ from −mmax to mmax of |m|/(2mmax) dm = mmax/2 (4.71)
σm = mmax/√3 (4.72)
Therefore E{|m|}/σm = √3/2 = 0.87.
(d) Gaussian:
E{|m|} = ∫ from −∞ to ∞ of |m| (1/√(2πσm²)) exp(−m²/(2σm²)) dm = √(2/π) σm (4.73)
⇒ E{|m|}/σm = √(2/π) = 0.8. (4.74)
(e) Laplacian:
E{|m|} = ∫ from −∞ to ∞ of (c/2)|m| e^{−c|m|} dm = 1/c (4.75)
σm = √2/c (from P4.13e) ⇒ E{|m|}/σm = 1/√2 = 0.707. (4.76)
(f) Gamma:
E{|m|} = ∫ from −∞ to ∞ of √(k/(4π|m|)) |m| e^{−k|m|} dm = √(k/π) ∫ from 0 to ∞ of m^{1/2} e^{−km} dm = √(k/π) Γ(3/2)/k^{3/2} = 1/(2k) (4.77)
σm = √3/(2k) (from P4.13f) ⇒ E{|m|}/σm = 1/√3 = 0.58. (4.78)

Listing the parameters from largest to smallest results in:
square wave, sinusoid, uniform, Gaussian, Laplacian, Gamma.
The above is the same order as in P4.13, but the parameter values are now decreasing instead of increasing.
Note: The relationship can be used (and has been used) to design an RMS meter. The block diagram looks as in Fig. 4.10. The advantage of this approach is that a true RMS voltmeter requires a squaring circuit whereas here only a fullwave rectifier is needed. The disadvantage is that the voltmeter is good as an RMS voltmeter only for the specific signal for which it is calibrated.

Figure 4.10: Block diagram of an RMS meter: fullwave rectifier, then an averaging circuit (typically a (very) lowpass filter), then a calibrated scale factor σm/E{|m|}; e.g., for a sinusoid, RMS = σm = E{|m|}/0.9 = 1.11 E{|m|}.

P4.15 Eqn. (4.46) states that
SNRq(σn²) = [3L²μ²/ln²(1 + μ)] · σn² / [ 1 + 2μσn E{|m|}/σm + μ²σn² ]. (4.79)
From P4.14 the signal parameter E{|m|}/σm is as follows:
Gaussian: √(2/π); Laplacian: 1/√2; Gamma: 1/√3; Uniform: √3/2.

Matlab code for the plot:



L=256; % 8-bit quantizer


mu=255; % mu-law parameter
SPar= sqrt(2/pi); % appropriate signal parameter, entered is for Gaussian
K=3*L^2*mu^2/((log(1+mu))^2);
SPB =[-100:1:0]; % signal power from -100 to 0 dB, 1 dB increments

SP=10.^(SPB/10); % signal power


SNRq=K*SP./(1 + 2*mu*SPar*sqrt(SP) + mu^2*SP);
plot(SPB,10*log10(SNRq)); % plot SNRq in dB
hold;
%(repeat for next signal model)

From the plots it can be seen that the quantizer is quite insensitive to variations in input signal power over a wide range (about 40 dB) and also insensitive to the actual pdf model of the input signal, both desirable properties.

Figure 4.11: Plots of SNRq for different signal models with 8-bit μ-law quantizer (L = 256, μ = 255).

P4.16 Consider positive m. Note that the nonlinearity is an odd function; therefore the derivative shall be an even function.
&
mmax dy Aymax
For 0 < m : = (4.80)
A dm mmax (1 + ln A)
mmax dy ymax
For < m mmax : = (4.81)
A dm m(1 + ln A)
2 ( A2 ymax
2
en

dy m2max (1+ln A)2


, 0 < m mmaxA
= 2
ymax mmax
(4.82)
dm , < m m max
m2 (1+ln A)2 A

Nq = (ymax²/(3L²)) ∫ from −mmax to mmax of fm(m)/(dy/dm)² dm
= (ymax²/(3L²)) [ ∫ from −mmax to −mmax/A of (m²(1 + ln A)²/ymax²) fm(m) dm + ∫ from −mmax/A to mmax/A of (mmax²(1 + ln A)²/(A²ymax²)) fm(m) dm + ∫ from mmax/A to mmax of (m²(1 + ln A)²/ymax²) fm(m) dm ]
= [(1 + ln A)²/(3L²)] [ σm² − ∫ from −mmax/A to mmax/A of m² fm(m) dm + (mmax²/A²) P(−mmax/A ≤ m ≤ mmax/A) ]
⇒ SNRq = [3L²/(1 + ln A)²] σm² / ( σm² + (mmax²/A²) P(−mmax/A ≤ m ≤ mmax/A) − ∫ from −mmax/A to mmax/A of m² fm(m) dm ). (4.83)
The term (mmax²/A²) P(−mmax/A ≤ m ≤ mmax/A) − ∫ from −mmax/A to mmax/A of m² fm(m) dm depends on the ratio mmax/A and on the pdf model assumed for fm(m). In a well-engineered system mmax/A ≪ 1 and therefore for high σm² the SNRq → 3L²/(1 + ln A)² (analogous to μ-law behavior for large μ). In essence, it appears that both μ-law and A-law are equally (perhaps similarly is a better word) robust to the assumed signal model. Plots of SNRq are shown below for Laplacian and uniform pdf models since these are fairly easy to handle analytically.
For uniform and Laplacian densities the two correction factors are:
Ng

(a) Uniform:
P(−mmax/A ≤ m ≤ mmax/A) = ∫ from −mmax/A to mmax/A of 1/(2mmax) dm = 1/A (4.84)
∫ from −mmax/A to mmax/A of m² fm(m) dm = (1/mmax) ∫ from 0 to mmax/A of m² dm = mmax²/(3A³) = σm²/A³. (4.85)

(b) Laplacian:
First define mmax to be the value that captures η% (percent) of the probability, given by 2 ∫ from 0 to mmax of (c/2) e^{−c|m|} dm = η/100 ⇒ cmmax = −ln(1 − η/100). Then
P(−mmax/A ≤ m ≤ mmax/A) = 2 ∫ from 0 to mmax/A of (c/2) e^{−cm} dm = 1 − e^{−cmmax/A} (4.86)
and
∫ from −mmax/A to mmax/A of m² (c/2) e^{−c|m|} dm = c ∫ from 0 to mmax/A of m² e^{−cm} dm
= (1/c²)[ 2 − e^{−cmmax/A}( (cmmax/A)² + 2(cmmax/A) + 2 ) ]. (4.87)
But σm² = 2/c² for the Laplacian. Therefore
c ∫ from 0 to mmax/A of m² e^{−cm} dm = σm² − (σm²/2) e^{−cmmax/A}[ (cmmax/A)² + 2(cmmax/A) + 2 ]. (4.88)
Figure 4.12: Plots of SNRq for different signal models with 8-bit A-law quantizer (L = 256, A = 87.6, η = 0.9999 for Laplacian pdf).

Plots in Fig. 4.12 show that the A-law quantizer is more robust than the μ-law quantizer, both over input signal power and over pdf models.

P4.17 The μ-law nonlinearity can be visualized in 2 different ways as illustrated in Fig. 4.13. Consider (a), which is more of an engineering model (model (b) is left as a further exercise).
Figure 4.13: Visualization of μ-law nonlinearity: (a) y = g(m), saturating at ±ymax for |m| ≥ mmax; (b) an alternative view.

The pdf, fy(y), shall have two impulses, one at y = ymax, the other at y = −ymax, each of strength P(mmax ≤ m < ∞).
In the range −mmax ≤ m ≤ mmax there is only one root to the equation g(m) = y. Solve this for 0 ≤ y ≤ ymax; note that because of the symmetry in fm(m) & g(m), fy(y) is an even function.
From y = g(m): mroot = (mmax/μ)[ (1 + μ)^{y/ymax} − 1 ]. (4.89)
&

dy/dm at mroot = [ymax/ln(1 + μ)] (μ/mmax) / (1 + μ mroot/mmax) = μ ymax / [mmax ln(1 + μ)(1 + μ)^{y/ymax}] (4.90)
fy(y) = fm(m) at mroot / (dy/dm) at mroot = [mmax ln(1 + μ)/(√(2π) μ ymax)] (1 + μ)^{y/ymax} e^{−(1/2)[(mmax/μ)((1 + μ)^{y/ymax} − 1)]²} (4.91)
where 0 ≤ y ≤ ymax and fm(m) = N(0, 1).


Since fy(−y) = fy(y) we have:
fy(y) = [mmax ln(1 + μ)/(√(2π) μ ymax)] (1 + μ)^{|y|/ymax} e^{−(1/2)[(mmax/μ)((1 + μ)^{|y|/ymax} − 1)]²}, −ymax ≤ y ≤ ymax,
plus P(−∞ < m ≤ −mmax)δ(y + ymax) + P(mmax ≤ m < ∞)δ(y − ymax); fy(y) = 0 elsewhere. (4.92)
To plot, let mmax be such that 99% of the pdf is captured, which gives mmax = 2.576 and P(−∞ < m ≤ −mmax) = P(mmax ≤ m < ∞) = 0.005. Let ymax = 1. The plot is shown in Fig. 4.14, which does not seem to confirm the intuition.
P4.18 Consider x > 0, y(x) = (1 + S)x/(1 + Sx).
dy/dx = (1 + S)/(1 + Sx)² ⇒ dy/dx = (1 + S)/(1 + S|x|)², |x| < 1. (4.93)
Figure 4.14: Pdf plot of the output of the μ-law compressor when input is Gaussian.

Nq = [1/(3L²(1 + S)²)] ∫ from −1 to 1 of (1 + S|x|)⁴ fx(x) dx, where x = m/mmax. (4.94)
In terms of the unnormalized variable m, one has
dy/dx = (dy/dm)(dm/dx) = mmax (dy/dm), or dy/dm = (1/mmax)(dy/dx). (4.95)
en

Then
Zmax
m
m2max |m| 4
Nq = 1+S fm (m)dm
3L2 (1 + S)2 mmax
mmax
uy

mZmax
m2max |m| |m|2 |m|3 |m|4
= 1 + 4S + 6S 2 2 + 4S 3 3 + S 4 4 fm (m)dm
3L (1 + S)2
2 mmax mmax mmax mmax
mmax
2
2 3 4

mmax E{|m|} 2 E{|m| } 3 E{|m| } 4 E{|m| }
= 1 + 4S + 6S + 4S +S
3L2 (1 + S)2 mmax m2max m3max m4max

To write this in terms of σn² = σm²/mmax², note that
E{|m|}/mmax = (σm/mmax)(E{|m|}/σm) = √(σn²) E{|m|}/σm (as seen in P4.15, Eqn. (P4.9)) (4.96)
E{|m|²}/mmax² = σn² (4.97)
E{|m|³}/mmax³ = (σm³/mmax³)(E{|m|³}/σm³) = (σn²)^{3/2} E{|m|³}/σm³ (4.98)
E{|m|⁴}/mmax⁴ = (σm⁴/mmax⁴)(E{|m|⁴}/σm⁴) = (σn²)² E{|m|⁴}/σm⁴ (4.99)
⇒ SNRq = 3L²(1 + S)²σn² / [ 1 + 4S√(σn²) E{|m|}/σm + 6S²σn² + 4S³(σn²)^{3/2} E{|m|³}/σm³ + S⁴(σn²)² E{|m|⁴}/σm⁴ ]. (4.100)

For a zero-mean Gaussian signal with power level σm², the quantities are:
E{|m|}/σm = √(2/π); E{|m|³}/σm³ = 2√(2/π); E{|m|⁴}/σm⁴ = 3. (4.101)
Aside: If x is N(0, σ²) then
E{xⁿ} = 1·3···(n − 1)σⁿ for n even; 0 for n odd (4.102)
E{|x|ⁿ} = 1·3···(n − 1)σⁿ for n = 2k; √(2/π) 2^k k! σ^{2k+1} for n = 2k + 1. (4.103)
Plots for S = 5, 10, 50, 100 are shown in Fig. 4.15. Note that the performance is not that attractive in that the SNRq varies considerably with σn². Some reflection leads one to conclude that this is due to the higher-order moments present in the Nq expression, namely σn³, σn⁴. One wants no moments higher than σn², because then SNRq → constant as σn² becomes larger. The bottom line is that any old saturating nonlinearity is not suitable in a compander. It has to be one that results in a σn² term in Nq, and this is what the ln(·) nonlinearities of the μ-law and A-law give you.
&
Figure 4.15: Plots of SNRq for 8-bit S-law quantizer with Gaussian input signal (S = 5, 10, 50, 100).

P4.19 Since we would like the mapping to be unique, the number of possible mappings from the L = 2^R target levels to the R-bit sequences is L!, which for R = 2, 4, 8, 16 bits is 4! = 24; 16! = 2.0923 × 10¹³; 256! = Matlab says inf; 2¹⁶! = more than 256! but still countable. Though some of these mappings may be considered trivially different, there is still a humongous number of them.


P4.20 Let P[bit error] = Pe. Then E{εq²} = E{εq²|no bit error}(1 − Pe) + E{εq²|1 bit error}Pe. Assume a uniform quantizer and also a uniform pdf for the quantization error (Eqn. (4.21)). Then E{εq²|no bit error} is that of Equation (4.22) and it is Δ²/12 (watts).
With one bit error the quantization error, εq, shall still have a uniform pdf but with a nonzero mean. The mean value depends on which bit is in error and the bit sequence considered. The picture looks as shown in Fig. 4.16.
Figure 4.16: A bit error moves the demodulated level from the transmitted target level mj to ml = mj + kΔ.

E{(m − ml)²} = E{(m − mj − kΔ)²} = E{(m − mj)²} − E{2kΔ(m − mj)} + E{k²Δ²}. (4.104)
Now E{(m − mj)²} = Δ²/12 (i.e., as if no bit error occurred) (4.105)
E{(m − mj)} = 0 (4.106)
E{k²Δ²} = Δ² Σ_k k² P[k]. (4.107)
⇒ E{εq²} = σq²(1 − Pe) + ( σq² + Δ² Σ_k k² P[k] ) Pe = σq² + Pe Δ² Σ_k k² P[k]. (4.108)
uy

So now it is a matter of determining the kΔ profile along with the value of P[k]. We do this for the three mappings of Fig. 4.19 and L = 8. The values of k for each transmitted pattern are:

Bit pattern 000 001 010 011 100 101 110 111
NBC  {1,2,4} {1,2,4} {1,2,4} {1,2,4} {1,2,4} {1,2,4} {1,2,4} {1,2,4}
Gray {1,3,7} {1,1,5} {1,3,1} {1,1,3} {1,3,7} {1,1,5} {1,3,1} {1,1,3}
FBC  {1,2,1} {1,1,3} {1,2,5} {1,2,7} {1,2,1} {1,2,3} {1,2,5} {1,2,7}

Now assuming that each bit sequence is equally probable and that which bit is in error is
equally probable, the P [k] is as follows:

NBC: P [k = 1] = 1/3; P [k = 2] = 1/3; P [k = 4] = 1/3


Gray: P [k = 1] = 14/24 = 7/12; P [k = 3] = 6/24 = 3/12; P [k = 5, 7] = 2/24 = 1/12
FBC: P [k = 1] = 11/24; P [k = 2] = 7/24; P [k = 3, 5, 7] = 2/24 = 1/12


Therefore the quantization noise variance is:


NBC: E{q2 } = q2 + 13 [2 + 42 + 162 ]Pe = 2 [ 12
1
+ 7Pe ].

k
7 3 1 1 2 1
Gray: E{q2 } = q2 + [ 12 2 + 2 2 2
12 (9 ) + 12 (25 ) + 12 (25 )]Pe = [ 12 + 9Pe ].
1 7 2 2 2 1
FBC: E{q2 } = q2 +[ 24 2 + 24 (42 )+ 24 (92 )+ 24 (252 )+ 24 (492 )]Pe = 2 [ 12 +8.46Pe ].

dy
If Pe 103 the effect is negligible.
P4.21 (a) Δ = 2Vmax/256 = Vmax/128. (Note: Here the optimum quantizer is a uniform quantizer.)
σm² = Rm(τ = 0) = 1 (watt). Since σm² = Vmax²/3 ⇒ Vmax = √3.
σq² = Δ²/12 = Vmax²/(12(2⁷)²) = 1/2¹⁶.
(b) Minimizing E{[m(k) − αm(k − 1)]²} gives α = Rm(Ts)/Rm(0) = e^{−1/100}.
(c) y(k) = m(k) − αm(k − 1) = m(k) − e^{−1/100} m(k − 1).
E{y(k)} = 0; E{y²(k)} = (1 + α²)Rm(0) − 2αRm(Ts) = 1 − α² = 0.0198.
y0(kTs) has a mean of zero and a variance of 0.0198. Because it is uniform, we know that the variance is (y0max)²/12 ⇒ y0max = 0.48745.
σq² with differential quantization = (y0max)²/(3·2¹⁶) = 0.2376/(3·2¹⁶) (4.109)
σq² without differential quantization = 1/2¹⁶ (4.110)
Ratio = 0.2376/3 = 0.0792, i.e., −11 dB. (4.111)

P4.22 The USB drive can store 8 × 10⁹ bits.
(a) 4 kHz speech, 8 bits per sample. The Nyquist sampling rate is 8 kHz, giving a bit rate of 8 × 8,000 = 64,000 bits/sec.
TD = 8 × 10⁹ bits / 64,000 bits/sec = 1.25 × 10⁵ seconds = 34.7 hrs = 1.446 days.
(b) 22 kHz audio signal, stereo, 16 bits per sample in each stereo channel. Have 44,000 samples/sec × 16 bits/sample × 2 channels.
TD = 8 × 10⁹ bits / (44,000 × 16 × 2 bits/sec) = 5.682 × 10³ seconds = 94.7 minutes = 1.58 hrs.
(c) 5 MHz video signal, 12 bits per sample, combined with the audio signal above. Have 10 × 10⁶ samples/sec × 12 bits/sample + 44,000 × 16 × 2 bits/sec.
TD = 8 × 10⁹ bits / (120 × 10⁶ + 1.408 × 10⁶ bits/sec) = 8 × 10⁹ / (121.408 × 10⁶) = 65.9 seconds = 1.1 minutes.


(d) Digital surveillance video, 1024 × 768 pixels per frame, 8 bits per pixel, 1 frame per second. Have (1024 × 768) pixels/frame × 8 bits/pixel × 1 frame/sec = 6,291,456 bits/sec.
TD = 8 × 10⁹ bits / 6,291,456 bits/sec = 1.272 × 10³ seconds = 21.20 minutes.
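All four storage times follow the same pattern, TD = capacity / bit rate; a Python sketch of the arithmetic (dictionary keys are illustrative labels, not from the manual):

```python
capacity = 8e9  # bits on the USB drive

rates = {
    "speech":       8_000 * 8,                    # (a) 64 kbit/s
    "stereo":       44_000 * 16 * 2,              # (b) 1.408 Mbit/s
    "video+audio":  10e6 * 12 + 44_000 * 16 * 2,  # (c) 121.408 Mbit/s
    "surveillance": 1024 * 768 * 8,               # (d) 6,291,456 bit/s
}
for name, r in rates.items():
    print(f"{name}: {capacity / r:.1f} s")
```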

P4.23 We divide the range [−mmax, mmax] into 2^R equally spaced quantization regions, each of width Δ = 2mmax/2^R. If we place a target level in the middle of each quantization region, the maximum value of the quantization noise is εq,max = Δ/2 = mmax/2^R. The PSNRq is then given by
PSNRq = 20 log10[ mmax / (mmax/2^R) ] = 20 log10 2^R = 6.02R (dB). (4.112)
To achieve distortion D requires R of at least D/6.02, so the smallest value of R is
R = ⌈D/6.02⌉ bits. (4.113)
Note that with this definition, the 6 dB/bit rule-of-thumb holds as well.
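A small Python check of the peak-SNR rule in (4.112)-(4.113) (the helper name is an assumption for illustration):

```python
import math

def bits_for_psnr(d_db):
    """Smallest R with 20*log10(2**R) >= d_db, per (4.113)."""
    return math.ceil(d_db / (20 * math.log10(2)))

print(bits_for_psnr(40))  # 7 bits give a PSNRq of about 42.1 dB
```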
Chapter 5

Optimum Receiver for Binary Data Transmission
P5.1 First consider
∫ from 0 to Tb of [φ̂2(t)]² dt = ∫ from 0 to Tb of [ s2(t)/√E2 − ρφ1(t) ]² dt
= (1/E2) ∫ from 0 to Tb of s2²(t) dt − (2ρ/√E2) ∫ from 0 to Tb of s2(t)φ1(t) dt + ρ² ∫ from 0 to Tb of φ1²(t) dt
= 1 − 2ρ² + ρ² = 1 − ρ²,
since ∫ from 0 to Tb of s2(t)φ1(t) dt = ∫ from 0 to Tb of s2(t)s1(t)/√E1 dt = ρ√E2 and ∫ from 0 to Tb of φ1²(t) dt = 1.
Therefore
φ2(t) = φ̂2(t)/√(1 − ρ²) = (1/√(1 − ρ²))[ s2(t)/√E2 − ρ s1(t)/√E1 ].

P5.2 (a)
[φ1′(t); φ2′(t)] = [cos θ, sin θ; −sin θ, cos θ] [φ1(t); φ2(t)] (5.1)
φ1′(t) = cos θ φ1(t) + sin θ φ2(t), φ2′(t) = −sin θ φ1(t) + cos θ φ2(t). (5.2)
φ1′(t)φ2′(t) = −sin θ cos θ φ1²(t) + sin θ cos θ φ2²(t) + [cos²θ − sin²θ]φ1(t)φ2(t),
∫ from 0 to Tb of φ1′(t)φ2′(t) dt = −sin θ cos θ ∫ φ1²(t) dt + sin θ cos θ ∫ φ2²(t) dt + (cos²θ − sin²θ) ∫ φ1(t)φ2(t) dt
= −sin θ cos θ + sin θ cos θ = 0. (5.3)


Therefore {φ1′(t), φ2′(t)} are orthogonal. To see if they are normal, consider first
∫ from 0 to Tb of φ1′²(t) dt = ∫ from 0 to Tb of [cos θ φ1(t) + sin θ φ2(t)]² dt
= cos²θ ∫ φ1²(t) dt + 2 cos θ sin θ ∫ φ1(t)φ2(t) dt + sin²θ ∫ φ2²(t) dt = cos²θ + sin²θ = 1. (5.4)
A similar derivation shows that φ2′(t) also has unit energy. Therefore {φ1′(t), φ2′(t)} form an orthonormal set.
What if the minus sign is put elsewhere in the rotation matrix? Are {φ1′(t), φ2′(t)} still orthonormal? If so, does the matrix represent just a rotation?

(b) Need to find θ such that ∫ from 0 to Tb of φ1′(t)[s1(t) − s2(t)] dt = 0:
∫ from 0 to Tb of φ1′(t)[s1(t) − s2(t)] dt = ∫ from 0 to Tb of [cos θ φ1(t) + sin θ φ2(t)][s1(t) − s2(t)] dt
= cos θ (s11 − s21) + sin θ (s12 − s22) = 0. (5.5)
⇒ sin θ (s12 − s22) = cos θ (s21 − s11) ⇒ tan θ = (s21 − s11)/(s12 − s22) ⇒ θ = tan⁻¹[(s21 − s11)/(s12 − s22)]. (5.6)
Of course θ is determined by the signal set. You must know the signal set in order to rotate the original basis set as desired: φ2′(t) is perpendicular to the line joining s1(t) and s2(t).
The above result for θ can also be obtained geometrically (see the signal space diagram in Fig. 5.1). Use the following fact from geometry: given the equation of a straight line y = mx + b, the perpendicular to the line has a slope of −1/m. The slope of the line joining s1(t) and s2(t) is (s22 − s12)/(s21 − s11); therefore the slope of φ1′(t) satisfies tan θ = −(s21 − s11)/(s22 − s12) = (s21 − s11)/(s12 − s22).
Since we choose θ such that
∫ from 0 to Tb of φ1′(t)[s1(t) − s2(t)] dt = 0, (5.7)


Figure 5.1: Determining the rotation angle $\theta$.

it follows that
$$\int_0^{T_b}s_1(t)\hat\phi_1(t)dt = \int_0^{T_b}s_2(t)\hat\phi_1(t)dt \;\Rightarrow\; \hat{s}_{11} = \hat{s}_{21}, \qquad(5.8)$$
i.e., the components of $s_1(t)$ and $s_2(t)$ along $\hat\phi_1(t)$ are the same.
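The property (5.5)-(5.8) is easy to sanity-check numerically. In the sketch below the coefficient values are an arbitrary illustrative choice (not from the text); it verifies that the rotation angle from (5.6) makes the $\hat\phi_1$-components of the two signals equal:

```python
# Check of (5.5)-(5.8): theta = arctan[(s21 - s11)/(s12 - s22)] equalizes the
# components of s1 and s2 along the rotated phi1-hat. Coefficients are
# illustrative choices, not values from the text.
import math

s11, s12 = 2.0, 0.5      # s1 = s11*phi1 + s12*phi2
s21, s22 = -1.0, 3.0     # s2 = s21*phi1 + s22*phi2

theta = math.atan2(s21 - s11, s12 - s22)

# Component along phi1-hat = cos(theta)*phi1 + sin(theta)*phi2
s11_hat = math.cos(theta) * s11 + math.sin(theta) * s12
s21_hat = math.cos(theta) * s21 + math.sin(theta) * s22

print(abs(s11_hat - s21_hat) < 1e-12)    # True
```

Using `atan2` instead of `atan` keeps the formula valid even when $s_{12} - s_{22} = 0$.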

P5.3 
$$\text{Area} = \int_T^{\infty}\frac{1}{\sqrt{2\pi\sigma^2}}e^{-\frac{(x-\mu)^2}{2\sigma^2}}dx \overset{\lambda = \frac{x-\mu}{\sigma},\; d\lambda = \frac{dx}{\sigma}}{=} \int_{\frac{T-\mu}{\sigma}}^{\infty}\frac{1}{\sqrt{2\pi}}e^{-\frac{\lambda^2}{2}}d\lambda = Q\!\left(\frac{T-\mu}{\sigma}\right).$$
As shown in Fig. P5.40 of the text, $T > 0$ (i.e., $T > \mu$), but the result is true even if $T < \mu$. Of course the argument of $Q(x)$ is then negative and one would use the relationship $Q(-x) = 1 - Q(x)$.

Stated in English, the area is a Q function whose argument is the distance from the mean to the threshold divided by the RMS value (standard deviation).
(The figure shows the Gaussian density $\frac{1}{\sqrt{2\pi}\sigma}e^{-(x-\mu)^2/2\sigma^2}$ with a threshold $T < 0$ and a shaded tail area labelled "Area = ?".)
Figure 5.2: Writing the shaded area in terms of the Q function.

Of course one must pay attention to which side of the mean the threshold lies on and write the expression accordingly. For instance, in terms of the Q function, what is the shaded area in Fig. 5.2?
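The Q-function manipulations used throughout this chapter can be written directly in code via the identity $Q(x) = \frac12\,\mathrm{erfc}(x/\sqrt2)$ (a small helper sketch, names are my own):

```python
# The area formula of P5.3 as code: Q((T - mu)/sigma), with
# Q(x) = 0.5*erfc(x/sqrt(2)); Q(-x) = 1 - Q(x) handles thresholds below the mean.
import math

def Q(x: float) -> float:
    """Tail probability P[X > x] for a standard Gaussian X."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def tail_area(threshold: float, mu: float, sigma: float) -> float:
    """Area under N(mu, sigma^2) to the right of `threshold`."""
    return Q((threshold - mu) / sigma)

print(round(tail_area(0.0, 0.0, 1.0), 4))    # 0.5
print(round(Q(-1.0) + Q(1.0), 4))            # 1.0  (since Q(-x) = 1 - Q(x))
```

`math.erfc` is preferred over `1 - erf(x)` because it stays accurate deep in the tail, which matters for the $10^{-6}$ error rates computed later.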


P5.4 (a) Since $s_i(t) = s_{i1}\phi_1(t) + s_{i2}\phi_2(t)$, $i = 1, 2$, one has:
$$E_i = \int_0^{T_b}s_i^2(t)dt = \int_0^{T_b}\left[s_{i1}\phi_1(t) + s_{i2}\phi_2(t)\right]^2 dt = s_{i1}^2\underbrace{\int_0^{T_b}\phi_1^2(t)dt}_{=1} + 2s_{i1}s_{i2}\underbrace{\int_0^{T_b}\phi_1(t)\phi_2(t)dt}_{=0} + s_{i2}^2\underbrace{\int_0^{T_b}\phi_2^2(t)dt}_{=1} = s_{i1}^2 + s_{i2}^2$$
(b) Now let
$$\phi_1(t) = \begin{cases}\sqrt{\frac{2}{T_b}}\cos(2\pi f_c t), & 0 \le t \le T_b\\ 0, & \text{otherwise}\end{cases} \qquad(5.9)$$
and
$$\phi_2(t) = \begin{cases}\sqrt{\frac{2}{T_b}}\sin(2\pi f_c t), & 0 \le t \le T_b\\ 0, & \text{otherwise}\end{cases} \qquad(5.10)$$
Compute
$$\int_0^{T_b}\phi_1(t)\phi_2(t)dt = \frac{2}{T_b}\int_0^{T_b}\cos(2\pi f_c t)\sin(2\pi f_c t)dt = \frac{1}{T_b}\int_0^{T_b}\sin(4\pi f_c t)dt = -\frac{1}{4\pi T_b f_c}\cos(4\pi f_c t)\Big|_0^{T_b}$$
$$= -\frac{1}{4\pi T_b f_c}\left[\cos(4\pi f_c T_b) - 1\right] \qquad(5.11)$$
Thus, $\phi_1(t)$ and $\phi_2(t)$ are orthogonal if $\cos(4\pi f_c T_b) = 1 \Rightarrow 4\pi f_c T_b = 2k\pi \Rightarrow f_c = \frac{k}{2T_b}$, where $k$ is a positive integer ($k = 1, 2, \ldots$). There must be an integer number of half cycles of the cos and sin in the interval $[0, T_b]$ for them to be orthogonal over the interval of $T_b$ seconds. No other frequency suffices.
Finally, $(f_c)_{\min} = \frac{1}{2T_b}$ when $k = 1$. It is graphically illustrated in Fig. 5.3.
Figure 5.3: Orthogonality of sin and cos with a minimum frequency.
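The condition $f_c = k/(2T_b)$ can be confirmed by direct numerical integration (a sketch; $T_b = 1$ and the grid size are arbitrary choices):

```python
# Numerical check of P5.4(b): cos(2 pi fc t) and sin(2 pi fc t) are orthogonal
# over [0, Tb] exactly when fc = k/(2 Tb); nearby frequencies are not.
import math

def corr(fc: float, Tb: float = 1.0, n: int = 20000) -> float:
    dt = Tb / n
    return sum(math.cos(2*math.pi*fc*(i + 0.5)*dt) * math.sin(2*math.pi*fc*(i + 0.5)*dt)
               for i in range(n)) * dt

print(abs(corr(0.5)) < 1e-6)    # True: fc = 1/(2 Tb), the minimum frequency
print(abs(corr(1.0)) < 1e-6)    # True: fc = 2/(2 Tb)
print(abs(corr(0.7)) > 1e-3)    # True: not a multiple of 1/(2 Tb)
```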

P5.5 Consider the following two signals $s_1(t)$ and $s_2(t)$ (this is the same signal set considered in Example 5.5):
$$s_1(t) = V\cos(2\pi f_c t)$$
$$s_2(t) = V\cos(2\pi f_c t + \theta), \quad 0 \le t \le T_b, \; f_c = \frac{k}{2T_b}, \; k \text{ integer} \qquad(5.12)$$


(a) The energies of the two signals can be computed as:
$$E_1 = \int_0^{T_b}s_1^2(t)dt = \int_0^{T_b}V^2\cos^2(2\pi f_c t)dt = V^2\int_0^{T_b}\frac{1 + \cos(4\pi f_c t)}{2}dt = \frac{V^2}{2}\left[\int_0^{T_b}dt + \int_0^{T_b}\cos(4\pi f_c t)dt\right] = \frac{V^2T_b}{2} \qquad(5.13)$$
$$E_2 = \int_0^{T_b}s_2^2(t)dt = \int_0^{T_b}V^2\cos^2(2\pi f_c t + \theta)dt = \frac{V^2}{2}\left[\int_0^{T_b}dt + \int_0^{T_b}\cos(4\pi f_c t + 2\theta)dt\right] = \frac{V^2T_b}{2} \qquad(5.14)$$
where we have used the fact that $\cos(4\pi f_c t + 2\theta)$ is a periodic function with a fundamental period of $T_b/k$ (regardless of the value of the phase $\theta$) and an integration over multiple periods of this function equals zero.
The two signals $s_1(t)$ and $s_2(t)$ have unit energy if $E_1 = E_2 = \frac{V^2T_b}{2} = 1$. Therefore, $V = \sqrt{\frac{2}{T_b}}$.
Remark: In communications we invariably take $f_c$ to be an integer multiple of $\frac{1}{T_b}$ typically, sometimes $\frac{1}{2T_b}$, so the results for $E_1$ and $E_2$ hold (and those in P5.4). Typically $f_c$ is very large, on the order of $10^6$ (MHz) or $10^9$ (GHz), so that even if there isn't an integer number of cycles (or half cycles) in the time interval, for engineering purposes $E_1$ and $E_2$ are still closely approximated by $V^2T_b/2$.
(b) Since $E_1 = E_2 = E$, the correlation coefficient of the two signals is
$$\rho = \frac{1}{E}\int_0^{T_b}s_1(t)s_2(t)dt = \frac{V^2}{E}\int_0^{T_b}\cos(2\pi f_c t)\cos(2\pi f_c t + \theta)dt = \frac{V^2}{2E}\int_0^{T_b}\cos\theta\,dt + \frac{V^2}{2E}\underbrace{\int_0^{T_b}\cos(4\pi f_c t + \theta)dt}_{=0 \text{ since } f_c = \frac{k}{2T_b}} = \frac{V^2T_b}{2E}\cos\theta = \cos\theta \qquad(5.15)$$
Two signals are orthogonal when the correlation coefficient $\rho = 0$, which is equivalent to $\cos\theta = 0$. So $\theta = \pm\frac{\pi}{2}, \pm\frac{3\pi}{2}, \ldots$, i.e., odd multiples of $\pi/2$.
(c) The plot of $\rho$ as a function of $\theta$ is shown in Fig. 5.4.
(d) $d = \sqrt{2E(1 - \rho)} \Rightarrow d_{\max} = \sqrt{2E(1 - (-1))} = 2\sqrt{E}$ when $\rho = -1$, or $\theta = \pi$ (antipodal signals).
To relate $\theta$ of part (c) to that of P5.4(b), note that when $\theta = \pi/2$ then $s_2(t) = V\cos(2\pi f_c t + \pi/2) = -V\sin(2\pi f_c t)$, i.e., except for a scale factor it is $-\phi_2(t)$ of P5.4(b). On the other hand $\theta = -\pi/2$ results in $\phi_2(t)$ of P5.4(b). The signal space of the signal set is shown in Fig. 5.12 of the text and it is reproduced in Fig. 5.5.


Figure 5.4: Plot of $\rho = \cos\theta$ over $0 \le \theta \le 2\pi$ (values between $+1$ and $-1$).

we
(The diagram shows $\hat\phi_1(t) = \sqrt{2/T_b}\cos(2\pi f_c t)$ and $\hat\phi_2(t) = \sqrt{2/T_b}\sin(2\pi f_c t)$; $s_1(t)$ lies at distance $\sqrt{E}$ along $\hat\phi_1$, and the locus of $s_2(t)$ as $\theta$ varies from 0 to $2\pi$ is a circle of radius $\sqrt{E}$, with $\rho = -1$ at $\theta = \pi$ and $\rho = 0$ at $\theta = \pm\pi/2$.)
Figure 5.5: Signal space representation of Problem 5.5.
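The result $\rho(\theta) = \cos\theta$ of (5.15) can be reproduced by direct integration (a sketch with $V = 1$, $T_b = 1$, $f_c = 1$, all arbitrary illustrative values satisfying $f_c = k/(2T_b)$):

```python
# Numerical check of P5.5(b): the correlation of cos(2 pi fc t) and
# cos(2 pi fc t + theta) over [0, Tb] equals cos(theta) when fc = k/(2 Tb).
import math

def rho(theta: float, fc: float = 1.0, Tb: float = 1.0, n: int = 20000) -> float:
    dt = Tb / n
    E = Tb / 2.0                     # energy of cos(2 pi fc t) with V = 1
    return sum(math.cos(2*math.pi*fc*t) * math.cos(2*math.pi*fc*t + theta)
               for t in ((i + 0.5) * dt for i in range(n))) * dt / E

print(round(rho(0.0), 3))            # 1.0  (identical signals)
print(round(rho(math.pi), 3))        # -1.0 (antipodal); rho(pi/2) ~ 0
```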

P5.6 (a)
$$\rho = \frac{1}{\sqrt{E_1E_2}}\int_0^{T_b}V_1V_2\cos\!\left(2\pi(f_c - \Delta f/2)t\right)\cos\!\left(2\pi(f_c + \Delta f/2)t\right)dt$$
$$= \frac{V_1V_2}{2\sqrt{E_1E_2}}\Bigg[\underbrace{\int_0^{T_b}\cos(2\pi(2f_c)t)dt}_{=0\ \text{(integer number of cycles in } T_b)} + \underbrace{\int_0^{T_b}\cos(2\pi\Delta f\,t)dt}_{=\frac{\sin(2\pi\Delta f T_b)}{2\pi\Delta f}}\Bigg] \quad\left(\text{since } \cos x\cos y = \frac{\cos(x+y) + \cos(x-y)}{2}\right)$$
$$= \frac{V_1V_2T_b}{2\sqrt{E_1E_2}}\cdot\frac{\sin(2\pi\Delta f T_b)}{2\pi\Delta f T_b} \qquad(5.16)$$
But $\frac{V_1^2T_b}{2} = E_1 = E$ and $\frac{V_2^2T_b}{2} = E_2 = E$. Therefore
$$\rho(\Delta f) = \frac{\sin(2\pi\Delta f T_b)}{2\pi\Delta f T_b}.$$


The plot of $\rho$ versus the normalized frequency $\Delta f T_b$ is shown in Fig. 5.6.

Figure 5.6: Plot of $\rho(\Delta f)$. The first zero crossing, $\Delta f T_b = \frac12$ (i.e., $\Delta f = \frac{1}{2T_b}$), is the minimum frequency separation for orthogonality; the minimum point of $\rho$ is the maximum-distance point between the two signals.

(b) One can differentiate $\rho$ with respect to the (normalized) variable $\Delta f T_b$ to find where the minimum occurs. From the plot of $\rho$ we expect it to be between 0.5 and 0.75. Formally let $x \triangleq 2\pi\Delta f T_b$. Then $\rho(x) = \frac{\sin x}{x}$ and $\frac{d\rho(x)}{dx} = \frac{\cos x}{x} - \frac{\sin x}{x^2} = 0$ (for maximum/minimum points), or $x = \tan x$. Solve this numerically.
Another approach (still numerical) is to set up a table of values of $\frac{\sin x}{x}$ between $x = 2\pi(0.65)$ and $2\pi(0.75)$ in increments of, say, 0.01 and pick out the value of $x$ where the minimum occurs.
The simplest approach is to trust the answer and check whether $x \overset{?}{=} \tan x$ at $x = 2\pi(0.715)$ (within acceptable engineering numerical accuracy), i.e., $2\pi(0.715) \overset{?}{=} \tan(2\pi \cdot 0.715)$.
For $\Delta f T_b = 0.715$, $\rho = -0.2172$. Therefore $d = \sqrt{2E(1.2172)}$. When $\rho = 0$, $d = \sqrt{2E}$. The distance has increased by a factor of $\sqrt{1.2172} = 1.1033$.
A more relevant way to quantify this increase is to state that for $\rho = 0$ the energy $E$ needs to be increased by 1.2172 to achieve the same distance between the 2 signals as in the $\rho = -0.2172$ case. This is a $10\log_{10}(1.2172) = 0.85$ dB increase in energy (and hence power).
Nonetheless, for other reasons, namely synchronization, $\Delta f$ is chosen so that $\rho = 0$.


Remarks:
(i) When the frequency separation $\Delta f$ is chosen to be $\frac{1}{2T_b}$ the two signals are said to be coherently orthogonal; when $\Delta f$ is chosen to be a multiple of $\frac{1}{T_b}$ they are called noncoherently orthogonal. Chapter 7 discusses this further.
(ii) To obtain the signal space representation is not that straightforward, except for $\rho = 0$. All that can be said off the top of one's head is that the signal space is 2-dimensional, and that the 2 signals lie at a distance of $\sqrt{E}$ (because of how we adjust $V_1$, $V_2$) from the origin. One basis function can be chosen to be, say, $\phi_1(t) = \frac{s_1(t)}{\sqrt{E}}$. The other basis function is a function of $\Delta f$.
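The transcendental equation $x = \tan x$ from part (b) is solved below by bisecting on the sign of $d\rho/dx = (x\cos x - \sin x)/x^2$ over $(\pi, 3\pi/2)$ (a sketch; the bracket and iteration count are my own choices):

```python
# P5.6(b) numerically: the minimum of rho(x) = sin(x)/x solves x = tan(x).
import math

lo, hi = math.pi, 1.5 * math.pi - 1e-9
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if mid * math.cos(mid) - math.sin(mid) < 0:   # derivative still negative
        lo = mid
    else:
        hi = mid

x = 0.5 * (lo + hi)
print(round(x / (2 * math.pi), 4))    # 0.7151  -> Delta_f * Tb at the minimum
print(round(math.sin(x) / x, 4))      # -0.2172 -> rho_min
```

This confirms the values 0.715 and $-0.2172$ quoted in the solution.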

P5.7 Using the Gram-Schmidt procedure, construct an orthonormal basis for the space of quadratic polynomials $\{a_2t^2 + a_1t + a_0;\ a_0, a_1, a_2 \in \mathbb{R}\}$ over the interval $-1 \le t \le 1$.


It is simple to see that the equivalent problem is to find an orthonormal basis for the three signals $s_1(t) = 1$, $s_2(t) = t$ and $s_3(t) = t^2$ over the interval $-1 \le t \le 1$.
The first orthonormal basis function is simply $\phi_1(t) = \frac{1}{\sqrt2}$ since the energy of $s_1(t) = 1$ over the interval $-1 \le t \le 1$ is 2.
Next observe that $s_2(t) = t$ is an odd function, while $\phi_1(t)$ is an even function. It follows that $s_2(t)$ is already orthogonal to $\phi_1(t)$. Thus, we set:
$$\phi_2(t) = \frac{s_2(t)}{\sqrt{\int_{-1}^{1}s_2^2(t)dt}} = \frac{t}{\sqrt{\int_{-1}^{1}t^2dt}} = t\sqrt{3/2}$$
Next we project $s_3(t) = t^2$ on the subspace spanned by $\phi_1(t)$ and $\phi_2(t)$. It is clear that since $s_3(t)$ is an even function and $\phi_2(t)$ is an odd function, their product is odd, and we have $s_{32} = \int_{-1}^{1}s_3(t)\phi_2(t)dt = 0$. On the other hand,
$$s_{31} = \int_{-1}^{1}t^2\frac{1}{\sqrt2}dt = \frac{\sqrt2}{3}$$
The orthogonal signal is given by
$$\phi_3'(t) = s_3(t) - \frac{\sqrt2}{3}\phi_1(t) - 0\cdot\phi_2(t) = t^2 - \frac13$$
Finally, the third orthonormal basis function is
$$\phi_3(t) = \left(t^2 - \frac13\right)\Big/\sqrt{8/45} = \sqrt{\frac{45}{8}}\left(t^2 - \frac13\right) = \sqrt{\frac58}\,(3t^2 - 1).$$
In summary,
$$\left\{\sqrt{\frac12},\; \sqrt{\frac32}\,t,\; \sqrt{\frac58}\,(3t^2 - 1)\right\}$$
is an orthonormal basis set for the quadratic functions over $[-1, 1]$. The set is plotted in Fig. 5.7.
To find the coefficients of any quadratic polynomial, $p(t) = a_2t^2 + a_1t + a_0$, in terms of $\phi_1(t)$, $\phi_2(t)$ and $\phi_3(t)$, we find the projections of $p(t)$ onto the 3 basis functions, i.e., $p(t) = p_1\phi_1(t) + p_2\phi_2(t) + p_3\phi_3(t)$, where
$$p_1 = \int_{-1}^{1}p(t)\phi_1(t)dt = \int_{-1}^{1}\left[a_2t^2 + a_1t + a_0\right]\frac{1}{\sqrt2}dt = \frac{1}{\sqrt2}\left(\frac{2a_2}{3} + 2a_0\right) \qquad(5.17)$$
$$p_2 = \int_{-1}^{1}p(t)\phi_2(t)dt = \int_{-1}^{1}\left[a_2t^2 + a_1t + a_0\right]\sqrt{\frac32}\,t\,dt = \sqrt{\frac23}\,a_1 \qquad(5.18)$$
$$p_3 = \int_{-1}^{1}p(t)\phi_3(t)dt = \int_{-1}^{1}\left[a_2t^2 + a_1t + a_0\right]\sqrt{\frac58}\,(3t^2 - 1)\,dt = \sqrt{\frac{8}{45}}\,a_2. \qquad(5.19)$$
One, of course, does not achieve very much by approximating a quadratic polynomial by another quadratic polynomial. Of more practical interest is the situation where we are given


Figure 5.7: Orthonormal basis set for quadratic polynomials.


a more general time function, say $s(t)$ of finite energy, and wish to approximate it by a quadratic polynomial over the interval $[-1, 1]$. Then the set $\{\phi_1(t), \phi_2(t), \phi_3(t)\}$ found above can be used to approximate $s(t)$, where the coefficients in the approximation $s(t) \approx \hat{s}(t) = s_1\phi_1(t) + s_2\phi_2(t) + s_3\phi_3(t)$ are found by:
$$s_1 = \int_{-1}^{1}s(t)\phi_1(t)dt, \quad s_2 = \int_{-1}^{1}s(t)\phi_2(t)dt, \quad s_3 = \int_{-1}^{1}s(t)\phi_3(t)dt.$$
Furthermore, the mean-squared error $\int\left[s(t) - \hat{s}(t)\right]^2dt$ is minimum (see P5.9).
Remark: The 3 polynomials $\{\phi_1(t), \phi_2(t), \phi_3(t)\}$ are normalized versions of the first three members of what are referred to as Legendre polynomials. These are orthogonal polynomials over $[-1, 1]$ that are solutions of Legendre's differential equation:
$$(1 - t^2)\frac{d^2p_n(t)}{dt^2} - 2t\frac{dp_n(t)}{dt} + n(n+1)p_n(t) = 0.$$
The $n$th member is given by $p_n(t) = \frac{1}{2^n n!}\frac{d^n}{dt^n}(t^2 - 1)^n$ (Rodrigues' formula) and
$$\int_{-1}^{1}p_n(t)p_m(t)dt = \begin{cases}\frac{2}{2n+1}, & n = m\\ 0, & n \ne m\end{cases}$$
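The basis and the projection formulas (5.17)-(5.19) can be verified by quadrature (a sketch; the test polynomial coefficients are arbitrary):

```python
# Verify P5.7: {sqrt(1/2), sqrt(3/2) t, sqrt(5/8)(3t^2 - 1)} is orthonormal on
# [-1, 1], and the projections recover the closed forms (5.17)-(5.19).
import math

N = 20000
dt = 2.0 / N
ts = [-1.0 + (i + 0.5) * dt for i in range(N)]

def inner(f, g):
    return sum(f(t) * g(t) for t in ts) * dt

phi = [lambda t: math.sqrt(0.5),
       lambda t: math.sqrt(1.5) * t,
       lambda t: math.sqrt(5.0 / 8.0) * (3 * t * t - 1)]

a2, a1, a0 = 2.0, -1.0, 0.5
p = lambda t: a2 * t * t + a1 * t + a0

coeffs = [inner(p, f) for f in phi]
expected = [(2 * a2 / 3 + 2 * a0) / math.sqrt(2),     # (5.17)
            math.sqrt(2.0 / 3.0) * a1,                # (5.18)
            math.sqrt(8.0 / 45.0) * a2]               # (5.19)
print(all(abs(c - e) < 1e-6 for c, e in zip(coeffs, expected)))   # True
```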


P5.8 Orthogonal:
$$\int_0^{T}s_i(t)s_j(t)dt = 0, \quad (i \ne j)$$
Equal energy:
$$E = \int_0^{T}s_1^2(t)dt = \cdots = \int_0^{T}s_M^2(t)dt.$$
First, let us find the energy, say in $\hat{s}_m(t)$:
$$\hat{E} = \int_0^{T}\hat{s}_m^2(t)dt = \int_0^{T}\left[s_m(t) - \frac{1}{M}\sum_{k=1}^{M}s_k(t)\right]^2 dt = \int_0^{T}s_m^2(t)dt - \frac{2}{M}\int_0^{T}s_m(t)\sum_{k=1}^{M}s_k(t)dt + \frac{1}{M^2}\int_0^{T}\left[\sum_{k=1}^{M}s_k(t)\right]^2 dt$$
By orthogonality, $\int_0^{T}s_m(t)\sum_k s_k(t)dt = \int_0^{T}s_m^2(t)dt = E$ and $\int_0^{T}\left[\sum_k s_k(t)\right]^2 dt = \sum_k\int_0^{T}s_k^2(t)dt = ME$. Therefore
$$\hat{E} = E - \frac{2}{M}E + \frac{1}{M^2}(ME + 0) = E - \frac{E}{M} = \frac{E}{M}(M - 1)$$

Now let us find the correlation coefficient:
$$\rho_{mn} = \int_0^{T}\frac{\hat{s}_m(t)}{\sqrt{\hat{E}}}\cdot\frac{\hat{s}_n(t)}{\sqrt{\hat{E}}}dt = \frac{1}{\hat{E}}\int_0^{T}\left[s_m(t) - \frac{1}{M}\sum_{k=1}^{M}s_k(t)\right]\left[s_n(t) - \frac{1}{M}\sum_{k=1}^{M}s_k(t)\right]dt$$
$$= \frac{1}{\hat{E}}\Bigg[\underbrace{\int_0^{T}s_m(t)s_n(t)dt}_{=0} - \frac{1}{M}\underbrace{\int_0^{T}s_m(t)\sum_{k}s_k(t)dt}_{=E} - \frac{1}{M}\underbrace{\int_0^{T}s_n(t)\sum_{k}s_k(t)dt}_{=E} + \frac{1}{M^2}\underbrace{\int_0^{T}\left[\sum_{k}s_k(t)\right]^2 dt}_{=ME}\Bigg]$$
$$= \frac{1}{\hat{E}}\left(0 - \frac{E}{M} - \frac{E}{M} + \frac{E}{M}\right) = \frac{-E/M}{E(M-1)/M} = -\frac{1}{M-1}$$

The signal space plots for $M = 2, 3, 4$ are as follows.
$M = 2$: It is easy to show that $\hat{s}_1(t) = \frac{s_1(t) - s_2(t)}{2}$ and $\hat{s}_2(t) = \frac{s_2(t) - s_1(t)}{2} = -\hat{s}_1(t)$, i.e., the signals are antipodal with energy $\hat{E} = E/2$.


Note that
$$\sum_{m=1}^{M}\hat{s}_m(t) = \sum_{m=1}^{M}\left[s_m(t) - \frac{1}{M}\sum_{k=1}^{M}s_k(t)\right] = \sum_{m=1}^{M}s_m(t) - \frac{1}{M}\sum_{k=1}^{M}s_k(t)\underbrace{\sum_{m=1}^{M}1}_{=M} = 0.$$
Therefore the signals $\{\hat{s}_m(t)\}_{m=1}^{M}$ are linearly dependent, since any one of them is a linear combination of the remaining $M - 1$ of them. They therefore lie in an $(M-1)$-dimensional signal space at distance $\sqrt{\frac{M-1}{M}E}$ from the origin. Further, since they are equally correlated, the angle between any 2 vectors from the origin to the signal points is the same for each pair. Putting this all together we have for:
$M = 3$: A 2-dimensional signal space. Signal points are at distance $\sqrt{\frac23 E}$ from the origin, lying at an angle spacing of 120°, i.e., on the vertices of an equilateral triangle.
$M = 4$: Lying in a 3-dimensional space, at distance $\sqrt{\frac34 E}$, on the vertices of a regular tetrahedron.

Figure 5.8: Signal space diagrams of the simplex sets: $M = 2$ (antipodal points at $\pm\sqrt{E/2}$), $M = 3$ (equilateral triangle), and $M = 4$ (regular tetrahedron).
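The simplex construction above can be mirrored with finite-dimensional vectors (a sketch; M and E are arbitrary illustrative values, and the orthogonal "signals" are scaled unit vectors):

```python
# P5.8 as vectors: start from M orthogonal, equal-energy signals, subtract the
# average signal, and check E_hat = E(M-1)/M and rho_mn = -1/(M-1).
import math

M, E = 4, 2.0
s = [[math.sqrt(E) if i == j else 0.0 for j in range(M)] for i in range(M)]

avg = [sum(col) / M for col in zip(*s)]
s_hat = [[x - a for x, a in zip(row, avg)] for row in s]

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

E_hat = dot(s_hat[0], s_hat[0])
rho = dot(s_hat[0], s_hat[1]) / E_hat

print(round(E_hat, 6), round(rho, 6))   # 1.5 -0.333333
```

The printed values match $E(M-1)/M = 1.5$ and $-1/(M-1) = -1/3$ for $M = 4$, $E = 2$.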

P5.9 (a) The approximation error is simply $e(t) = s(t) - \hat{s}(t) = s(t) - \sum_{k=1}^{N}s_k\phi_k(t)$. Thus the


energy of the estimation error is:
$$E_e = \int_{-\infty}^{+\infty}\left[s(t) - \sum_{k=1}^{N}s_k\phi_k(t)\right]^2 dt = E_s - 2\int_{-\infty}^{+\infty}s(t)\hat{s}(t)dt + \int_{-\infty}^{+\infty}\hat{s}^2(t)dt$$
Expanding $\hat{s}^2(t)$ and using the orthonormality of the basis (the cross terms $\int\phi_k(t)\phi_j(t)dt$, $k \ne j$, vanish and $\int\phi_k^2(t)dt = 1$) gives
$$E_e = E_s - 2\sum_{k=1}^{N}s_k\int_{-\infty}^{+\infty}s(t)\phi_k(t)dt + \sum_{k=1}^{N}s_k^2 \qquad(5.20)$$
Obviously $E_e$ is a function of the $s_k$ and therefore the $s_k$ can be chosen to minimize $E_e$. To find the optimal $s_k$, differentiate $E_e$ with respect to each $s_k$ and set the result to zero:
$$\frac{dE_e}{ds_k} = -2\int_{-\infty}^{+\infty}s(t)\phi_k(t)dt + 2s_k = 0 \;\Rightarrow\; s_k = \int_{-\infty}^{+\infty}s(t)\phi_k(t)dt$$

(b) Substituting $\int_{-\infty}^{+\infty}s(t)\phi_k(t)dt = s_k$ into (5.20), the minimum mean-square approximation error is given by:
$$(E_e)_{\min} = E_s - 2\sum_{k=1}^{N}s_k^2 + \sum_{k=1}^{N}s_k^2 = E_s - \sum_{k=1}^{N}s_k^2 \quad\text{(joules)}$$
Basically, the energy of the approximation error is the signal energy minus the energy in the approximation signal.
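The optimality of the projection coefficients can be demonstrated numerically. The sketch below (an illustration: $s(t) = e^t$ and the quadratic basis of P5.7 are my own choices) shows that perturbing any coefficient away from $s_k = \langle s, \phi_k\rangle$ increases the error energy:

```python
# P5.9 numerically: sk = <s, phi_k> minimizes the error energy; any
# perturbation delta adds exactly delta^2 to it.
import math

N = 20000
dt = 2.0 / N
ts = [-1.0 + (i + 0.5) * dt for i in range(N)]

def inner(f, g):
    return sum(f(t) * g(t) for t in ts) * dt

phi = [lambda t: math.sqrt(0.5),
       lambda t: math.sqrt(1.5) * t,
       lambda t: math.sqrt(5.0 / 8.0) * (3 * t * t - 1)]
s = math.exp

def err_energy(c):
    diff = lambda t: s(t) - sum(ck * f(t) for ck, f in zip(c, phi))
    return inner(diff, diff)

c_opt = [inner(s, f) for f in phi]
e_opt = err_energy(c_opt)
e_pert = err_energy([c_opt[0] + 0.1, c_opt[1], c_opt[2]])
print(e_opt < e_pert)    # True: the projections minimize the error energy
```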
P5.10 (a) The coefficients $s_{ij}$, $i, j \in \{1, 2\}$, are found as follows:
$$s_{11} = \int_0^1 s_1(t)\phi_1(t)dt = 1, \quad s_{12} = \int_0^1 s_1(t)\phi_2(t)dt = 1,$$
$$s_{21} = \int_0^1 s_2(t)\phi_1(t)dt = 1, \quad s_{22} = \int_0^1 s_2(t)\phi_2(t)dt = -1.$$


Note that these results can also be found by inspecting that $s_1(t) = \phi_1(t) + \phi_2(t)$ and $s_2(t) = \phi_1(t) - \phi_2(t)$.
(b)
$$\begin{bmatrix}\phi_1^R(t)\\ \phi_2^R(t)\end{bmatrix} = \begin{bmatrix}\cos\theta & \sin\theta\\ -\sin\theta & \cos\theta\end{bmatrix}\begin{bmatrix}\phi_1(t)\\ \phi_2(t)\end{bmatrix} \qquad(5.21)$$
$$\phi_1^R(t) = (\cos\theta)\phi_1(t) + (\sin\theta)\phi_2(t) \overset{\theta = 60^\circ}{=} \frac12\phi_1(t) + \frac{\sqrt3}{2}\phi_2(t)$$
$$\phi_2^R(t) = -(\sin\theta)\phi_1(t) + (\cos\theta)\phi_2(t) \overset{\theta = 60^\circ}{=} -\frac{\sqrt3}{2}\phi_1(t) + \frac12\phi_2(t) \qquad(5.22)$$
The time waveforms of $\phi_1^R(t)$ and $\phi_2^R(t)$ for $\theta = 60^\circ$ are shown in Fig. 5.9.

Figure 5.9: Plots of $\phi_1^R(t)$ (levels $\frac{1+\sqrt3}{2}$ then $\frac{1-\sqrt3}{2}$) and $\phi_2^R(t)$ (levels $\frac{1-\sqrt3}{2}$ then $-\frac{1+\sqrt3}{2}$) on $[0, 0.5]$ and $[0.5, 1]$.

(c) Determine the new set of coefficients $s_{ij}^R$, $i, j \in \{1, 2\}$, in the representation:
$$\begin{bmatrix}s_1(t)\\ s_2(t)\end{bmatrix} = \begin{bmatrix}s_{11}^R & s_{12}^R\\ s_{21}^R & s_{22}^R\end{bmatrix}\begin{bmatrix}\phi_1^R(t)\\ \phi_2^R(t)\end{bmatrix} \qquad(5.23)$$
One method is a straightforward calculation: $s_{ij}^R = \int_0^1 s_i(t)\phi_j^R(t)dt$. Thus
$$s_{11}^R = \cos\theta + \sin\theta = \frac{1+\sqrt3}{2}, \quad s_{12}^R = -\sin\theta + \cos\theta = \frac{1-\sqrt3}{2},$$
$$s_{21}^R = \cos\theta - \sin\theta = \frac{1-\sqrt3}{2}, \quad s_{22}^R = -\sin\theta - \cos\theta = -\frac{1+\sqrt3}{2}.$$
A second method is based on the $s_{ij}$ and $\theta$ directly. Since
$$\begin{bmatrix}s_1(t)\\ s_2(t)\end{bmatrix} = \begin{bmatrix}s_{11}^R & s_{12}^R\\ s_{21}^R & s_{22}^R\end{bmatrix}\begin{bmatrix}\phi_1^R(t)\\ \phi_2^R(t)\end{bmatrix} = \begin{bmatrix}s_{11}^R & s_{12}^R\\ s_{21}^R & s_{22}^R\end{bmatrix}\begin{bmatrix}\cos\theta & \sin\theta\\ -\sin\theta & \cos\theta\end{bmatrix}\begin{bmatrix}\phi_1(t)\\ \phi_2(t)\end{bmatrix} = \begin{bmatrix}s_{11} & s_{12}\\ s_{21} & s_{22}\end{bmatrix}\begin{bmatrix}\phi_1(t)\\ \phi_2(t)\end{bmatrix},$$
and the rotation matrix is orthogonal (its inverse is its transpose), it follows that
$$\begin{bmatrix}s_{11}^R & s_{12}^R\\ s_{21}^R & s_{22}^R\end{bmatrix} = \begin{bmatrix}s_{11} & s_{12}\\ s_{21} & s_{22}\end{bmatrix}\begin{bmatrix}\cos\theta & -\sin\theta\\ \sin\theta & \cos\theta\end{bmatrix} = \begin{bmatrix}1 & 1\\ 1 & -1\end{bmatrix}\begin{bmatrix}\cos\theta & -\sin\theta\\ \sin\theta & \cos\theta\end{bmatrix} \overset{\theta = 60^\circ}{=} \begin{bmatrix}\frac{1+\sqrt3}{2} & \frac{1-\sqrt3}{2}\\ \frac{1-\sqrt3}{2} & -\frac{\sqrt3+1}{2}\end{bmatrix}$$
2 2 2 2

(d) Geometrical picture for both the basis function sets (original and rotated) and for the
two signal points is shown in Fig. 5.10. Note: As seen from the geometrical picture,

Sh
2R (t )
2 ( t )
1R (t )
s1 (t )
&
= 60 0

450 1 (t )

d
en

s2 (t )
uy

Figure 5.10: Geometrical representation of the basis function sets (original and rotated) and
the two signals.

sR R R R
11 = s22 and s12 = s21 . This is seen in (c) above and is true for any . But do
Ng

not jump to conclusions that this is true all the time. It comes about because of the
locations of s1 (t), s2 (t) in the signal space.
(e) Determine the distance d between the two signals s1 (t) and s2 (t) in two ways:
(i) Algebraically:
Z 1 Z 0.5 Z 1
2 2 2
d = [s1 (t) s2 (t)] dt = 2 dt + 22 dt = 4 d = 2 (5.24)
0 0 0.5

(ii) Geometrically: From the signal space plot of (d) one has d = 2 + 2 = 2 (Note that
both signal have energy E1 = E2 = 2).

Solutions Page 514


Nguyen & Shwedyk A First Course in Digital Communications

(f) Thought 1 (t) and 2 (t) can represent the two given signals, they are by no means
a complete basis set because they cannot represent an arbitrary, finite-energy signal

k
defined on the time interval [0, 1]. As a start to complete the basis, we wish to plot
the next two possible orthonormal functions 3 (t) and 4 (t). Obviously there are may
possible choices for 3 (t) and 4 (t). One simple choice is shown below.

dy
3 ( t ) 4 ( t )

2 2

we
1 t 1 t
0 1 1 0 1 3
4 2 2 4
2 2

Figure 5.11: An example of $\phi_3(t)$ and $\phi_4(t)$.
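The second method of P5.10(c), $S^R = S R^T$, is easy to confirm in code (a small sketch):

```python
# P5.10(c) check: S^R = S R^T for a rotation R(theta); theta = 60 degrees
# reproduces the coefficients (1 +/- sqrt(3))/2 found by direct integration.
import math

th = math.pi / 3
c, s = math.cos(th), math.sin(th)
S = [[1.0, 1.0], [1.0, -1.0]]     # rows: s1, s2 in the original basis
Rt = [[c, -s], [s, c]]            # R^T, the inverse of the rotation matrix

SR = [[sum(S[i][k] * Rt[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]

r3 = math.sqrt(3.0)
expected = [[(1 + r3) / 2, (1 - r3) / 2],
            [(1 - r3) / 2, -(1 + r3) / 2]]
print(all(abs(SR[i][j] - expected[i][j]) < 1e-12
          for i in range(2) for j in range(2)))    # True
```

Note also that $s_{11}^R = -s_{22}^R$ and $s_{12}^R = s_{21}^R$, as observed in the solution.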

P5.11 (a) The first orthonormal basis function is chosen, somewhat arbitrarily, as $\phi_1(t) = \frac{s_1(t)}{\sqrt{E_1}} = \frac{1}{\sqrt2}s_1(t)$.
Next, $s_2(t)$ is already orthogonal to $\phi_1(t)$. Thus, we set:
$$\phi_2(t) = \frac{s_2(t)}{\sqrt{\int_0^2 s_2^2(t)dt}} = \begin{cases}\frac{1}{\sqrt2}, & 0 \le t \le 1\\ -\frac{1}{\sqrt2}, & 1 \le t \le 2\\ 0, & \text{elsewhere}\end{cases} \qquad(5.25)$$
Then project $s_3(t)$ onto $\phi_1(t)$ and $\phi_2(t)$ to have
$$s_{31} = \int_0^2\frac{1}{\sqrt2}dt = \sqrt2, \qquad s_{32} = \int_0^1\frac{1}{\sqrt2}dt - \int_1^2\frac{1}{\sqrt2}dt = 0$$
The third orthogonal signal is given by
$$\phi_3'(t) = s_3(t) - \sqrt2\,\phi_1(t) - 0\cdot\phi_2(t) = \begin{cases}1, & 2 \le t \le 3\\ 0, & \text{elsewhere}\end{cases}$$
Since the energy of $\phi_3'(t)$ equals 1, one sets $\phi_3(t) = \phi_3'(t)$.
Finally, the fourth orthonormal basis function is obtained by first projecting $s_4(t)$ onto $\phi_1(t)$, $\phi_2(t)$, $\phi_3(t)$:
$$s_{41} = \int_0^2\frac{1}{\sqrt2}(-1)dt = -\sqrt2, \qquad s_{42} = \int_0^1\frac{1}{\sqrt2}(-1)dt + \int_1^2\left(-\frac{1}{\sqrt2}\right)(-1)dt = 0, \qquad s_{43} = \int_2^3 1\cdot(-1)dt = -1$$
The orthogonal signal is given by
$$\phi_4'(t) = s_4(t) + \sqrt2\,\phi_1(t) - 0\cdot\phi_2(t) + \phi_3(t) = 0$$
In conclusion, we need only 3 orthonormal basis functions to represent the signal set. They are plotted in Fig. 5.12:
$$\phi_1(t) = \begin{cases}\frac{1}{\sqrt2}, & 0 \le t \le 2\\ 0, & \text{elsewhere}\end{cases}, \quad \phi_2(t) = \begin{cases}\frac{1}{\sqrt2}, & 0 \le t \le 1\\ -\frac{1}{\sqrt2}, & 1 < t \le 2\\ 0, & \text{elsewhere}\end{cases}, \quad \phi_3(t) = \begin{cases}1, & 2 \le t \le 3\\ 0, & \text{elsewhere}\end{cases}$$

Figure 5.12: Three basis functions.

In terms of the above basis functions, the four signals are:
$$\begin{bmatrix}s_1(t)\\ s_2(t)\\ s_3(t)\\ s_4(t)\end{bmatrix} = \begin{bmatrix}\sqrt2 & 0 & 0\\ 0 & \sqrt2 & 0\\ \sqrt2 & 0 & 1\\ -\sqrt2 & 0 & -1\end{bmatrix}\begin{bmatrix}\phi_1(t)\\ \phi_2(t)\\ \phi_3(t)\end{bmatrix}$$
(b) First, the 3 functions are certainly orthogonal (they are what one calls time-orthogonal, since they do not overlap in time) and obviously each of them has unit energy. The question is whether they represent the 4 time functions exactly. Well,
$$s_1(t) = v_1(t) + v_2(t) + 0\cdot v_3(t), \quad s_2(t) = v_1(t) - v_2(t) + 0\cdot v_3(t),$$
$$s_3(t) = v_1(t) + v_2(t) + v_3(t), \quad s_4(t) = -v_1(t) - v_2(t) - v_3(t).$$

(c) Geometrical representation of the four signals is plotted in Fig. 5.13.

P5.12 The signal set is given as:
$$s_1(t) = \begin{cases}\sqrt3\,A\cos\frac{2\pi t}{T_b}, & 0 \le t \le T_b\\ 0, & \text{otherwise}\end{cases}$$
$$s_2(t) = \begin{cases}A\sin\frac{2\pi t}{T_b}, & 0 \le t \le T_b\\ 0, & \text{otherwise}\end{cases}$$


Figure 5.13: Geometrical representation of the 4 signals by {v1 (t), v2 (t), v3 (t)}.
&
Figure 5.14: One cycle of the cos and one cycle of the sin over $[0, T_b]$.

(a) There is one cycle of the cos and one cycle of the sin in the interval $[0, T_b]$ (see Fig. 5.14). Graphically, the area of the product is obviously zero, regardless of the amplitudes of the sin and cos. Therefore $s_1(t)$ and $s_2(t)$ are orthogonal.


Show mathematically:
$$\int_0^{T_b}s_1(t)s_2(t)dt = \sqrt3\,A^2\int_0^{T_b}\cos\frac{2\pi t}{T_b}\sin\frac{2\pi t}{T_b}dt = \frac{\sqrt3}{2}A^2\int_0^{T_b}\sin\frac{4\pi t}{T_b}dt = -\frac{\sqrt3}{2}A^2\frac{T_b}{4\pi}\cos\frac{4\pi t}{T_b}\Big|_0^{T_b} = 0$$
The energies of the two signals are:
$$E_1 = \int_0^{T_b}s_1^2(t)dt = \frac{(\sqrt3 A)^2}{2}T_b = \frac{3A^2T_b}{2}, \qquad E_2 = \int_0^{T_b}s_2^2(t)dt = \frac{A^2}{2}T_b = \frac{A^2T_b}{2}$$
Simply choose the basis functions $\{\phi_1(t), \phi_2(t)\}$ to be scaled versions of $s_1(t)$ and $s_2(t)$:
$$\phi_1(t) = \frac{s_1(t)}{\sqrt{E_1}} = \sqrt{\frac{2}{T_b}}\cos\frac{2\pi t}{T_b}, \quad 0 \le t \le T_b; \qquad \phi_2(t) = \frac{s_2(t)}{\sqrt{E_2}} = \sqrt{\frac{2}{T_b}}\sin\frac{2\pi t}{T_b}, \quad 0 \le t \le T_b$$
Figure 5.15: The basis functions $\phi_1(t) = \sqrt{2/T_b}\cos(2\pi t/T_b)$ and $\phi_2(t) = \sqrt{2/T_b}\sin(2\pi t/T_b)$.

(b) The optimum decision rule for the minimum distance receiver is:


Figure 5.16: Signal space diagram: $s_1(t)$ at $\sqrt{E_1} = \sqrt3\sqrt{E_2}$ along $\phi_1$ (decide 0T), $s_2(t)$ at $\sqrt{E_2}$ along $\phi_2$ (decide 1T), with the decision boundary perpendicular to the line joining them.

$$\sqrt{(r_1 - s_{21})^2 + (r_2 - s_{22})^2}\;\mathop{\gtrless}^{0_D}_{1_D}\;\sqrt{(r_1 - s_{11})^2 + (r_2 - s_{12})^2}$$
$$r_1^2 + \left(r_2 - \sqrt{E_2}\right)^2\;\mathop{\gtrless}^{0_D}_{1_D}\;\left(r_1 - \sqrt{E_1}\right)^2 + r_2^2$$
$$-2r_2\sqrt{E_2} + E_2\;\mathop{\gtrless}^{0_D}_{1_D}\;-2r_1\sqrt{E_1} + E_1$$
$$-2r_2A\sqrt{\frac{T_b}{2}} + \frac{A^2T_b}{2} + 2r_1\sqrt3\,A\sqrt{\frac{T_b}{2}} - \frac{3A^2T_b}{2}\;\mathop{\gtrless}^{0_D}_{1_D}\;0$$
$$\sqrt3\,r_1 - r_2\;\mathop{\gtrless}^{0_D}_{1_D}\;\frac{A\sqrt{T_b}}{\sqrt2}$$

(c) From the signal space diagram, the squared distance between the two signals is:
$$d_{21}^2 = E_1 + E_2 = \frac{3A^2T_b}{2} + \frac{A^2T_b}{2} = 2A^2T_b = 2T_b \quad(\text{when } A = 1\text{ volt})$$
It follows that the bit error probability is:
$$P[\text{error}] = Q\!\left(\frac{d_{21}/2}{\sqrt{N_0/2}}\right) = Q\!\left(\frac{\sqrt{2T_b}/2}{\sqrt{N_0/2}}\right) = Q\!\left(\sqrt{\frac{T_b}{N_0}}\right) = 10^{-6}$$
$$\Rightarrow\; \sqrt{\frac{T_b}{N_0}} \approx 4.75 \;\Rightarrow\; T_b = (4.75)^2N_0$$
$$r_b = \frac{1}{T_b} = \frac{1}{(4.75)^2N_0} = \frac{1}{(4.75)^2\cdot10^{-8}} = 4.43\times10^6\ \text{bits/sec} = 4.43\ \text{Mbps}$$
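The rate computation just done can be reproduced by inverting the Q function numerically (a sketch using bisection; $N_0 = 10^{-8}$ is the value used in the solution):

```python
# P5.12(c) numerically: solve Q(x) = 1e-6 for x, then rb = 1/Tb with
# Tb = x^2 * N0 and N0 = 1e-8.
import math

def Q(x: float) -> float:
    return 0.5 * math.erfc(x / math.sqrt(2.0))

lo, hi = 0.0, 10.0
for _ in range(200):                 # bisection: Q is decreasing in x
    mid = 0.5 * (lo + hi)
    if Q(mid) > 1e-6:
        lo = mid
    else:
        hi = mid
x = 0.5 * (lo + hi)

N0 = 1e-8
rb = 1.0 / (x * x * N0)
print(round(x, 2))                   # 4.75
print(round(rb / 1e6, 2))            # 4.43  (Mbps)
```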


(d) To implement the optimum receiver that uses only one matched filter, one needs to rotate the basis functions as shown in the signal space diagram to obtain $\hat\phi_1(t)$ and $\hat\phi_2(t)$: $\hat\phi_1(t)$ is perpendicular to the line joining $s_1(t)$ and $s_2(t)$. Since
$$\tan\theta = \frac{\sqrt{E_1}}{\sqrt{E_2}} = \sqrt{\frac{3A^2T_b/2}{A^2T_b/2}} = \sqrt3 \;\Rightarrow\; \theta = \frac{\pi}{3}$$
it follows that
$$\hat\phi_2(t) = -\sin\theta\,\phi_1(t) + \cos\theta\,\phi_2(t) = -\sqrt{\frac{2}{T_b}}\sin\frac{\pi}{3}\cos\frac{2\pi t}{T_b} + \sqrt{\frac{2}{T_b}}\cos\frac{\pi}{3}\sin\frac{2\pi t}{T_b} = \sqrt{\frac{2}{T_b}}\sin\!\left(\frac{2\pi t}{T_b} - \frac{\pi}{3}\right)$$

Sh
The block diagram of the optimum receiver is shown below:

Figure 5.17: The received signal $r(t)$ is passed through the matched filter $h(t) = \hat\phi_2(T_b - t)$, sampled at $t = T_b$ to give $r_2$, and compared with the threshold $T$ ("1D" if $r_2 \ge T$, "0D" otherwise).
The impulse response of the filter is found as:
$$h(t) = \hat\phi_2(T_b - t) = \sqrt{\frac{2}{T_b}}\sin\!\left(\frac{2\pi(T_b - t)}{T_b} - \frac{\pi}{3}\right) = -\sqrt{\frac{2}{T_b}}\sin\!\left(\frac{2\pi t}{T_b} + \frac{\pi}{3}\right) = \sqrt{\frac{2}{T_b}}\cos\!\left(\frac{2\pi t}{T_b} + \frac{5\pi}{6}\right)$$
The threshold can be found from the signal space diagram to be:
$$T = -\sqrt{E_1}\sin\frac{\pi}{3} + \frac{d_{21}}{2} = -\sqrt3\,A\sqrt{\frac{T_b}{2}}\cdot\frac{\sqrt3}{2} + \frac{A\sqrt{2T_b}}{2} = -\frac{A\sqrt{T_b}}{2\sqrt2}$$

(e) The average energy of the two signals $s_1(t)$ and $s_2(t)$ is:
$$E_b = \frac{E_1 + E_2}{2} = A^2T_b$$
For a given (fixed) energy per bit $E_b$, antipodal signalling achieves the smallest error probability amongst all binary signalling schemes. This is because the distance between the two signals in antipodal signalling is the largest (again, for a fixed $E_b$). Thus, the two signals can be modified as:
$$s_1(t) = -s_2(t) = \sqrt{E_b}\,\phi_1(t) = A\sqrt{T_b}\sqrt{\frac{2}{T_b}}\cos\frac{2\pi t}{T_b} = \sqrt2\,A\cos\frac{2\pi t}{T_b}, \quad 0 \le t \le T_b$$


Figure 5.18: The signals $s_1(t)$ and $s_2(t)$ of P5.13; $E_1 = \frac{V^2T_b}{3}$, $E_2 = V^2T_b$.

Figure 5.19: The product $s_1(t)s_2(t)$ (total area $= 0$) and the orthonormal functions $\phi_1(t)$, $\phi_2(t)$.

P5.13 (a) To show $s_1(t)$ is orthogonal to $s_2(t)$, one needs to show that $\int_0^{T_b}s_1(t)s_2(t)dt = 0$. This is quite obvious by plotting $s_1(t)s_2(t)$.
Since $s_1(t)$ and $s_2(t)$ are orthogonal, choose:
$$\phi_1(t) = \frac{s_1(t)}{\sqrt{E_1}} = \frac{\sqrt3}{V\sqrt{T_b}}s_1(t); \qquad \phi_2(t) = \frac{s_2(t)}{\sqrt{E_2}} = \frac{1}{V\sqrt{T_b}}s_2(t)$$
Plots of $\phi_1(t)$ and $\phi_2(t)$ are shown above.
(b) Plots of the signal space diagram and the optimum decision regions are shown below:

Figure 5.20: Signal space diagram and decision regions: $s_1(t)$ at $\sqrt{E_1} = V\sqrt{T_b/3}$ along $\phi_1$ (decide 0T), $s_2(t)$ at $\sqrt{E_2} = V\sqrt{T_b}$ along $\phi_2$ (decide 1T).

The optimum decision rule is that of the minimum distance rule:
$$\sqrt{(r_1 - s_{21})^2 + (r_2 - s_{22})^2}\;\mathop{\gtrless}^{0_D}_{1_D}\;\sqrt{(r_1 - s_{11})^2 + (r_2 - s_{12})^2}$$
$$r_1^2 + \left(r_2 - \sqrt{E_2}\right)^2\;\mathop{\gtrless}^{0_D}_{1_D}\;\left(r_1 - \sqrt{E_1}\right)^2 + r_2^2$$
$$-2r_2\sqrt{E_2} + E_2\;\mathop{\gtrless}^{0_D}_{1_D}\;-2r_1\sqrt{E_1} + E_1$$
$$-2r_2V\sqrt{T_b} + V^2T_b + 2r_1\frac{V\sqrt{T_b}}{\sqrt3} - \frac{V^2T_b}{3}\;\mathop{\gtrless}^{0_D}_{1_D}\;0$$
$$\frac{r_1}{\sqrt3} - r_2 + \frac{V\sqrt{T_b}}{3}\;\mathop{\gtrless}^{0_D}_{1_D}\;0$$

(c) The squared distance between the two signals:
$$d_{21}^2 = E_1 + E_2 = \frac{V^2T_b}{3} + V^2T_b = \frac{4V^2T_b}{3} = \frac{4T_b}{3} \quad(\text{when } V = 1\text{ volt})$$
The error probability of the minimum distance receiver is:
$$P[\text{error}] = Q\!\left(\frac{d_{21}/2}{\sqrt{N_0/2}}\right) = Q\!\left(\sqrt{\frac{2T_b}{3N_0}}\right) = 10^{-6} \;\Rightarrow\; \sqrt{\frac{2T_b}{3N_0}} \approx 4.75 \;\Rightarrow\; T_b = \frac{3}{2}(4.75)^2N_0$$
$$r_b = \frac{1}{T_b} = \frac{2}{3(4.75)^2N_0} = \frac{1}{(4.75)^2\cdot1.5\times10^{-8}} = 2.955\times10^6\ \text{bits/sec} = 2.955\ \text{Mbps}$$


(d) Need to rotate $\phi_1(t)$ and $\phi_2(t)$ to obtain $\hat\phi_1(t)$ and $\hat\phi_2(t)$ as shown in the signal space diagram: $\hat\phi_1(t)$ is perpendicular to the line joining $s_1(t)$ and $s_2(t)$. Since
$$\tan\theta = \frac{\sqrt{E_1}}{\sqrt{E_2}} = \frac{V\sqrt{T_b/3}}{V\sqrt{T_b}} = \frac{1}{\sqrt3} \;\Rightarrow\; \theta = \frac{\pi}{6} \;\Rightarrow\; \sin\theta = \frac12,\; \cos\theta = \frac{\sqrt3}{2}$$
To realize the optimum receiver that uses only one matched filter, one needs to obtain $\hat\phi_2(t)$ and the threshold $T$:
$$\hat\phi_2(t) = -\sin\theta\,\phi_1(t) + \cos\theta\,\phi_2(t) = -\frac12\phi_1(t) + \frac{\sqrt3}{2}\phi_2(t)$$
The filter's impulse response is $h(t) = \hat\phi_2(T_b - t)$. Both $\hat\phi_2(t)$ and $h(t)$ are plotted in Fig. 5.21.

Figure 5.21: $\hat\phi_2(t)$ and the matched-filter impulse response $h(t) = \hat\phi_2(T_b - t)$.

The block diagram of the optimum receiver:

Figure 5.22: $r(t)$ is passed through $h(t)$ and sampled at $t = T_b$ to give $r_2$; decide 1D if $r_2 \ge T = \frac{V\sqrt{T_b}}{2\sqrt3}$, and 0D otherwise.

where the threshold can be found from the signal space diagram to be:
Ng

d21 p 2V T
V Tb 1

V Tb
b
T = E1 sin = =
2 6 2 3 3 2 2 3
Note: One can also make use of the result obtained in Question 2 to come up with the
following implementation:

ZTb 1D
E2 E1
r(t)[s2 (t) s1 (t)]dt T
0D
2
0

Solutions Page 523


Nguyen & Shwedyk A First Course in Digital Communications

t = Tb E2 E1
r (t ) 2 : 1D
s2 (Tb t ) s1 (Tb t )

k
< E2 E1 : 0 D
2

dy
Figure 5.23

(e) Need to modify s2 (t) so that the Euclidean distance d21 is


maximized. Since the energy
of s2 (t) cannot be changed move s2 (t) to the point ( E2 , 0) so that the maximized

we
distance is !
p p 3+1 p
d21 = E2 + E1 = V Tb
3

s2 (t ) = E2 1 (t )

Sh
Tb
2 Tb
t
0

3V

Figure 5.24
&
A2 T
P5.14 (a) The two signals are (time) orthogonal with equal energy E = 2 (joules). The orthog-
onal basis is 1 (t) = s1 (t)
E
& 2 (t) = s2 (t)
E
.

1 (t ) 2 (t )
en

2 A 2
=
T E T

t t
uy

0 T 0 T T
2 2

Figure 5.25
Ng

(b)+(c) See above plots.


q q q
E A2 Tb 1
(d) P [bit error] = Q =QN0 =Q 2N0 2rb 103
= 106
q
1
12 2rb 101
3 = erfc 2 106 rb = 1
2.
(4103 )[erfc1 (2106 )]
s2 (t)s1 (t)
(e) Rotate the orthogonal basis by 450 x and look at projection along R
2 (t) = normalizing factor .

(f) We want s2 (t) as far away from s1 (t) as possible. Therefore make it antipodal, i.e.,
s2 (t) = s1 (t).

Solutions Page 524


Nguyen & Shwedyk A First Course in Digital Communications

r2 2 (t )

s2 ( t ) 1T

k
E
1D

dy
0 E
1 (t )
s1 ( t ) 0T r1

we
0D

r2

Figure 5.26

( ) dt
r (t ) 1 (t )
2 (t )
Sh
Tb

0
t = Tb
r1

+
r2 r1
Comparator
Threshold = 0

( ) dt
Tb
&
0 t = Tb r2

Figure 5.27

2R (t )
en

1
T

t
uy

0 T 2 T

1

T

Figure 5.28
Ng

( ) dt
r (t ) Tb
r2R
0
t = Tb

2R (t )

Figure 5.29

P5.15 The noise, n(t) = n, is simply a DC level but the amplitude of the level is random and is

Solutions Page 525


Nguyen & Shwedyk A First Course in Digital Communications

n (t )

k
si (t ) r (t ) = si (t ) + n
0t T

dy
Figure 5.30: A binary communication system with additive noise.
s1 (t ) s2 (t )
V
Gaussian distributed with a probability density function given by

we
t T t
0 T 1 0 n2
fn (n) = exp (5.26)
2N0 V 2N0
The received signal in the first bit interval can be written as:
t =T
r1 t 0T0b D
r ( t ) r(t) 1=T si (t) + n(t) = si (t) + n, 1 (5.27)
( )dt

Sh
T 0 r1 < 0 1D
(a) The autocorrelation function of the noise n(t) is
s1 (tR
) n ( ) = E{n(t)n(t + )} s=2 (tE{n
) n} = E{n2 } = N0 ,
Since Rn ( ) =V N0 for every , it follows that
V any two noise samples, no matter how far
they are, are correlated. In fact, because the noiseTis/ 2 modeled
T as a random DC level,
t t
knowing the value 0
0 of the noiseT at one time instant gives a full knowledge about the noise
&
at any other time instants. V
The power spectral density of the noise is
Sn (f ) = nF{R
(t ) n ( )} = F{N0 } = N0 (f )
t =T 2
Note that Sn (f ) 6= 0 only when T 2 f = 0, or Sn (f ) = 0 for f 6= 0. This is of course due to
en

the noise being modeled ( )


si (t )as 0a random
d t
DC level!
r (t ) = si (t ) + n
(b) Consider rthe
(t ) following signal set and the receiver0 t T(as usual, s+1 (t) is used for 2the
r VT 1trans-
D
mission of bit 0 and s2 (t) is for the transmission of bit 1). In essence,
r < VT 2 system
the 0
T
t =T D

s1 (t ) ( )dt s2 (t )
uy

T 2
V
T
t t
0 T 0
V
Ng

t =T
r (t ) 1
T
r1 0 0D
T ( )dt
0
r1 < 0 1D

s1 (t ) Figure 5.31 s 2 (t )
V V RT
uses antipodal signalling. The decision variable is r1 = T1 0 r(t)dt. More specifically,
T /2 T
0 T
t
0
t
r1 = V + n1 if 0 was transmitted
r1 = V + n1 ifV 1 was transmitted

Solutions Page 526


t =T 2
T 2

( )dt
i i

0t T

Nguyen & Shwedyk s1 (t ) A First


s2 (t ) Course in Digital Communications
V
where the noise n1 is Gaussian, zero-mean and has a variance
T of N0 /T . Therefore the
t t
error probability of the receiver
0 T is: 0

k
V ! s
2V /2 V 2T
P [error] = Q p = Q

dy
N 0 /Tt = T N 0
T
r (t ) 1 r1 0 0D
T ( )dt r1 < 0 1D
(c) Consider the signal set plotted
0
in Fig. 5.32.

s1 (t ) s 2 (t )

we
n (t )
V V
T /2 T
t t
0s (t )
i T r (t ) = si 0(t ) + n
0t VT

s1 (t )
V
To achieve a probability of zero,
are many possibilities. Perhaps
0
r(t2 ), where
half of the bit interval, respectively.
Sh
Figure 5.32:s2A
T 2
(t )signal set.

V The decision rule is as follows:


t =T r
VT and
< VT
t =T 2
the receiver needs to remove the noise completely. There
0 t ( )dtthe simplest is toT take 2t samples of r(t), at r(t ) and
0
r (t ) 0 < t1 < TT /2 and T /2 < t2 < T , i.e., they are in+ the firstr half
1
2 1second
2 0
D

D
T
&
If sample r(t1 ) equals r(t(2 ),
)dtthen tdecide
=T
s1 (t).
r (t )
1
T T 2
r1 0 0D
( )
If sample r(tT1 ) is not equal to r(t2 ), then decide
d t
0
r1 < 0 s12D(t) (note that r(t1 ) > r(t2 )).
When s1 (t) is transmitted, r(t) = V + n, 0 t T r(t1 ) = r(t2 ).
s1 (t ) s 2 (t )
en

When s2 (t) is transmitted, then


V V

V + n, 0 t T/ 2T /2T
r(t) = t r(t
t 1 ) 6= r(t2 ).
0 T V + n,0 T /2 t T
V
uy

Therefore it is possible to determine whether s1 (t) or s2 (t) was transmitted perfectly.


Another receiver implementation is as follows:

t =T 2
T 2

( )dt
Ng

0
r (t ) + r VT 2 1D
t =T r < VT 2 0 D
T

( )dt
T 2

Figure 5.33

The noise process in this problem is an extreme example of colored noise. Using the
signal space ideas developed in this chapter one can see that the noise lies along one

Solutions Page 527


Nguyen & Shwedyk A First Course in Digital Communications

s1 (t)
axis only, namely that given by 1 (t) = .
E
It does not have any component along
s2 (t)
2 (t) = .This is illustrated in Fig. 5.34. This allows you to determine which signal

k
E
is sent exactly. In this context, another receiver implementation is shown in Fig. 5.34.
2 (t )

dy
E s2 ( t )

we
t =T
1
( )dt
0 E r (t )
T
r2 > 0
r =00
1 (t )
D

s1 ( t ) 0 2 D

Noise projection is on 1 (t ) only 2 (t )

Figure 5.34: Noise projection and another receiver implementation.

P5.16 (Matched Filter)

Figure 5.35: s(t) + w(t) -> filter h(t), H(f) -> s_out(t) + w_out(t).

(a) The response of the filter to the input signal component can be written as

    s_out(t) = F^{-1}{S(f)H(f)} = Int S(f)H(f) e^{j2*pi*f*t} df

Sampling at t = t0,

    s_out(t0) = Int S(f)H(f) e^{j2*pi*f*t0} df  =>  [s_out(t0)]^2 = | Int S(f)H(f) e^{j2*pi*f*t0} df |^2     (5.28)

(b) Since the input noise is white with power spectral density (PSD) N0/2, it follows that the PSD of the output noise is (N0/2)|H(f)|^2. The total power of the output noise, P_wout, is thus given by

    P_wout = Int (N0/2)|H(f)|^2 df = (N0/2) Int |H(f)|^2 df     (5.29)

From (5.28) and (5.29) the SNR_out can be written as

    SNR_out = [s_out(t0)]^2 / P_wout = | Int S(f)H(f) e^{j2*pi*f*t0} df |^2 / [(N0/2) Int |H(f)|^2 df]     (5.30)
(c) Now Int [lambda*a(t) + b(t)]^2 dt >= 0 (always). Considered as a quadratic in lambda:

    alpha*lambda^2 + beta*lambda + gamma >= 0     (5.31)

with coefficients alpha, beta and gamma given by:

    alpha = Int a^2(t) dt     (5.32)

    beta = 2 Int a(t)b(t) dt     (5.33)

    gamma = Int b^2(t) dt     (5.34)

The above inequality requires that the quadratic alpha*lambda^2 + beta*lambda + gamma cannot have two distinct real roots; otherwise it would cross the lambda axis at two points and would be less than 0 between them. From the formula for the roots this means that Delta = beta^2 − 4*alpha*gamma <= 0, which, upon substituting for alpha, beta, gamma, is

    [ Int a(t)b(t) dt ]^2  <=  Int a^2(t) dt * Int b^2(t) dt     (5.35)

which is known as the Cauchy-Schwarz inequality. The equality holds when a(t) = Kb(t), where K is an arbitrary (real) scaling factor, i.e., the shapes of a(t) and b(t) are the same.

(d) Using Parseval's relationship, Int a(t)b(t) dt = Int A(f)B*(f) df (see P2.26, page 69), (5.35) becomes

    | Int A(f)B*(f) df |^2  <=  Int |A(f)|^2 df * Int |B(f)|^2 df     (5.36)

where equality holds when A(f) = KB(f), K an arbitrary (real) scaling constant.

(e) Now let A(f) = H(f) and B*(f) = S(f)e^{j2*pi*f*t0}; then (5.36) becomes:

    | Int H(f)S(f)e^{j2*pi*f*t0} df |^2  <=  Int |H(f)|^2 df * Int |S(f)|^2 df     (5.37)

Combining (5.30) and (5.37) gives:

    SNR_out <= [ Int |H(f)|^2 df * Int |S(f)|^2 df ] / [ (N0/2) Int |H(f)|^2 df ] = Int |S(f)|^2 df / (N0/2)     (5.38)
(f) The maximum value of SNR_out is achieved by choosing A(f) = KB(f), i.e., H(f) = KS*(f)e^{-j2*pi*f*t0}. Finally, the corresponding impulse response of the filter is

    h(t) = Ks(t0 − t)     (5.39)

where K is any real number. Note that to arrive at the above relation, the following two properties of the Fourier transform have been used (x(t) is a real function here):

1. Time shift: x(t − t0) <-> X(f)e^{-j2*pi*f*t0}.


2. Time reversal: x(−t) <-> X*(f).

In our situation we take t0 to be Tb, the bit duration. Therefore h(t) = Ks(Tb − t).

P5.17 (a) The signal set would be Kh(Tb − t), where K is a scaling factor that determines the transmitted energy (or average power). The signals plot as in Fig. 5.36. This makes h(t) a matched filter to the signal set.

Figure 5.36: Signal set matched to the filter's impulse response (s1(t) and s2(t) are piecewise-linear with breakpoints at Tb/3 and 2Tb/3).

(b) Assume K = 1. The transmitted energy of either signal is

    E = Int_0^{Tb} s1^2(t) dt = Tb/3 (joules)

With N0 = 10^{-9} watts/Hz and a target error probability of 10^{-3}:

    P[bit error] = Q(sqrt(2E/N0)) = Q(sqrt(2Tb/(3N0))) = Q(sqrt(2/(3*rb*10^{-9}))) = 10^{-3}

    =>  rb = 2*10^9 / (3[Q^{-1}(10^{-3})]^2) = 6.98*10^7 bits/second = 69.8 Mbps
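The bit-rate arithmetic above can be reproduced numerically; a short Python check (using `statistics.NormalDist` for the inverse Q-function; N0 = 10^{-9} watts/Hz as in the problem):

```python
from statistics import NormalDist

def Qinv(p):
    # inverse Gaussian tail function: Q(Qinv(p)) = p
    return NormalDist().inv_cdf(1 - p)

N0 = 1e-9                 # noise PSD (watts/Hz)
Pe = 1e-3                 # target bit-error probability
# Q(sqrt(2/(3*rb*N0))) = Pe  =>  rb = 2/(3*N0*[Qinv(Pe)]^2)
rb = 2 / (3 * N0 * Qinv(Pe) ** 2)
rb_Mbps = rb / 1e6        # ~69.8 Mbps
```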

P5.18 (Antipodal signalling)

Figure 5.37: s(t) = A for 0 <= t <= Tb/3 and 2Tb/3 <= t <= Tb; s(t) = 0 for Tb/3 < t < 2Tb/3.

(a) The impulse response of the filter matched to s(t) is

    h(t) = s(Tb − t) = s(t)     (5.40)

where to arrive at the last equality we recognize that s(t) is even about t = Tb/2. Therefore, the impulse response h(t) looks exactly the same as s(t).

(b) The output of the matched filter when s(t) is applied at the input is

    y(t) = s(t) * h(t) = Int_0^t s(tau)s(t − tau) dtau

         = 0                      t < 0
         = A^2 t                  0 <= t < Tb/3
         = A^2 (2Tb/3 − t)        Tb/3 <= t < 2Tb/3
         = 2A^2 (t − 2Tb/3)       2Tb/3 <= t < Tb
         = 2A^2 (4Tb/3 − t)       Tb <= t < 4Tb/3
         = A^2 (t − 4Tb/3)        4Tb/3 <= t < 5Tb/3
         = A^2 (2Tb − t)          5Tb/3 <= t < 2Tb
         = 0                      t >= 2Tb     (5.41)

The output y(t) is sketched in Fig. 5.38. Observe that the output peaks (at the value 2A^2 Tb/3) at the sampling time t = Tb. This is of course not a coincidence but is due to the property of the matched filter, namely maximizing the signal-to-noise ratio at the sampling instant (see P5.16).

Figure 5.38: Output of the matched filter; only the signal s(t) is present at the input.
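The piecewise-linear output (5.41) can be checked by discretizing s(t) and convolving it with itself; a sketch (the shape of s(t), level A on the first and last thirds of the bit and zero in between, is as read off Fig. 5.37, with A = Tb = 1 assumed for convenience):

```python
import numpy as np

A, Tb, N = 1.0, 1.0, 3000
dt = Tb / N
t = np.arange(N) * dt
# s(t) = A on [0, Tb/3) and [2Tb/3, Tb), 0 in the middle third
s = A * ((t < Tb / 3) | (t >= 2 * Tb / 3)).astype(float)
h = s[::-1]                      # matched filter h(t) = s(Tb - t) (= s(t) here)
y = np.convolve(s, h) * dt       # y(t) sampled on [0, 2Tb)

peak = int(np.argmax(y))
peak_time = peak * dt            # should be (close to) Tb
peak_value = y[peak]             # should be 2*A**2*Tb/3
dip_value = y[int(2 * N / 3)]    # y(2Tb/3) should be ~0
```

The numeric peak lands at t = Tb with value 2A^2 Tb/3, and the dip to zero at t = 2Tb/3 matches (5.41).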

(c) The output of the matched filter when −s(t) is applied at the input is the negative version of the waveform in Fig. 5.38. This is simply because [−s(t)] * h(t) = −[s(t) * h(t)].

(d) To compute the SNR, the variance (i.e., average power) of the noise at the output of the filter at t = Tb needs to be found. This can be accomplished as follows.

Approach 1: At the output of the matched filter and for t = Tb the noise is

    n = Int_0^{Tb} n(tau)h(Tb − tau) dtau = Int_0^{Tb} n(tau)s(Tb − (Tb − tau)) dtau = Int_0^{Tb} n(tau)s(tau) dtau     (5.42)

The variance of the noise is

    sigma_n^2 = E{ Int_0^{Tb} Int_0^{Tb} n(tau)n(v)s(tau)s(v) dtau dv } = Int_0^{Tb} Int_0^{Tb} E{n(tau)n(v)} s(tau)s(v) dtau dv
              = (N0/2) Int_0^{Tb} Int_0^{Tb} delta(tau − v) s(tau)s(v) dtau dv = (N0/2) Int_0^{Tb} s^2(tau) dtau = N0 A^2 Tb/3     (5.43)

Approach 2: The PSD of the noise at the output of the matched filter is (N0/2)|H(f)|^2 = (N0/2)|S(f)|^2. Then the noise power is

    sigma_n^2 = (N0/2) Int |S(f)|^2 df = (N0/2) Int_0^{Tb} s^2(tau) dtau = N0 A^2 Tb/3     (5.44)

The signal-to-noise ratio (SNR) at the output of the matched filter at the sampling instant is given by

    SNR = [y(Tb)]^2 / sigma_n^2 = (2A^2 Tb/3)^2 / (N0 A^2 Tb/3) = 4A^2 Tb/(3N0)     (5.45)
(e) For antipodal signalling, the distance between the two signals is d21 = 2*sqrt(E) = 2*sqrt(2A^2 Tb/3). Therefore the probability of bit error is

    P[error] = Q( (d21/2)/sqrt(N0/2) ) = Q( sqrt(2A^2 Tb/3)/sqrt(N0/2) ) = Q( sqrt(4A^2 Tb/(3N0)) )     (5.46)

Finally, it follows from (5.46) and (5.45) that

    P[error] = Q( sqrt(SNR) )     (5.47)

Equation (5.47) holds for general antipodal signalling, i.e., regardless of what s(t) is used. The relationship in (5.47) also clearly shows that for antipodal signalling over an AWGN channel, maximizing the SNR at the output of a receiving filter is the same as minimizing the probability of error!

(f) The plot of P[error] as a function of SNR is shown in Fig. 5.39. The minimum SNR to achieve a probability of error of 10^{-6} is

    SNR = [Q^{-1}(10^{-6})]^2 = 22.595 = 13.54 (dB)     (5.48)
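The numerical value in (5.48) is easy to reproduce; a short check using the inverse Gaussian tail function from the standard library:

```python
import math
from statistics import NormalDist

Qinv = lambda p: NormalDist().inv_cdf(1 - p)   # inverse of Q(x)

snr = Qinv(1e-6) ** 2            # minimum SNR for P[error] = 1e-6
snr_db = 10 * math.log10(snr)    # ~13.54 dB
```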

P5.19 (Bandwidth) Define W to be the bandwidth of the signal s(t) if gamma% of the total energy of s(t) is contained inside the band [−W, W]:

    Int_{-W}^{W} |S(f)|^2 df / Int_{-inf}^{inf} |S(f)|^2 df = 2 Int_0^{W} |S(f)|^2 df / E = gamma/100     (5.49)

(a) (i) Rectangular pulse: s(i)(t) = sqrt(1/Tb), 0 <= t <= Tb.

    S(i)(f) = (1/sqrt(Tb)) Int_0^{Tb} e^{-j2*pi*f*t} dt = sqrt(Tb) e^{-j*pi*f*Tb} (e^{j*pi*f*Tb} − e^{-j*pi*f*Tb})/(2j*pi*f*Tb)
            = sqrt(Tb) e^{-j*pi*f*Tb} sin(pi*f*Tb)/(pi*f*Tb).

Then

    |S(i)(f)|^2 = Tb sin^2(pi*f*Tb)/(pi*f*Tb)^2.


Figure 5.39: Plot of P[error] as a function of SNR.

(ii) Half-sine: s(ii)(t) = sqrt(2/Tb) sin(pi*t/Tb), 0 <= t <= Tb. To find S(ii)(f), write s(ii)(t) as s(ii)(t) = sqrt(2) s(i)(t) sin(2*pi*t/(2Tb)). Then S(ii)(f) = sqrt(2) S(i)(f) * F{sin(2*pi*t/(2Tb))}. But

    F{sin(2*pi*t/(2Tb))} = [delta(f − 1/(2Tb)) − delta(f + 1/(2Tb))]/(2j) = S_sin(f)

and

    S(i)(f) * S_sin(f) = Int S(i)(lambda) S_sin(f − lambda) dlambda

Using the sifting property of the impulse function we have

    S(ii)(f) = (sqrt(2Tb)/(2j)) [ e^{-j*pi*(f − 1/(2Tb))Tb} sin(pi(f − 1/(2Tb))Tb)/(pi(f − 1/(2Tb))Tb)
                 − e^{-j*pi*(f + 1/(2Tb))Tb} sin(pi(f + 1/(2Tb))Tb)/(pi(f + 1/(2Tb))Tb) ]

Now e^{-j*pi*(f − 1/(2Tb))Tb} = e^{-j*pi*f*Tb} e^{j*pi/2} = j e^{-j*pi*f*Tb}, e^{-j*pi*(f + 1/(2Tb))Tb} = −j e^{-j*pi*f*Tb}, and sin(pi(f -/+ 1/(2Tb))Tb) = sin(pi*f*Tb -/+ pi/2) = -/+ cos(pi*f*Tb). Therefore

    S(ii)(f) = (sqrt(2Tb)/(2j)) j e^{-j*pi*f*Tb} cos(pi*f*Tb) (1/(pi*Tb)) [ 1/(f + 1/(2Tb)) − 1/(f − 1/(2Tb)) ]

with

    1/(f + 1/(2Tb)) − 1/(f − 1/(2Tb)) = −(1/Tb)/(f^2 − 1/(4Tb^2)) = 4Tb/(1 − 4f^2 Tb^2)

so that

    S(ii)(f) = (2*sqrt(2)/pi) sqrt(Tb) cos(pi*f*Tb)/(1 − 4f^2 Tb^2) e^{-j*pi*f*Tb}

Then

    |S(ii)(f)|^2 = (8Tb/pi^2) cos^2(pi*f*Tb)/[4(f*Tb)^2 − 1]^2.     (5.50)
(iii) Raised cosine: s(iii)(t) = sqrt(2/(3Tb)) [1 − cos(2*pi*t/Tb)], 0 <= t <= Tb. It can be written as

    s(iii)(t) = sqrt(2/3) [ s(i)(t) − s(i)(t) cos(2*pi*t/Tb) ]

Therefore,

    S(iii)(f) = sqrt(2/3) [ S(i)(f) − S(i)(f) * F{cos(2*pi*t/Tb)} ]

But F{cos(2*pi*t/Tb)} = [delta(f − 1/Tb) + delta(f + 1/Tb)]/2. Again the impulse functions just sift (or, if you wish, shift) the spectrum S(i)(f) to S(i)(f − 1/Tb) and S(i)(f + 1/Tb). Therefore

    S(iii)(f) = sqrt(2/3) { S(i)(f) − [S(i)(f − 1/Tb) + S(i)(f + 1/Tb)]/2 }

Doing all the necessary algebra, one obtains:

    S(iii)(f) = sqrt(2Tb/3) sin(pi*f*Tb)/( pi*f*Tb [1 − (f*Tb)^2] ) e^{-j*pi*f*Tb}     (5.51)

and

    |S(iii)(f)|^2 = (2Tb/3) sin^2(pi*f*Tb)/( pi^2 (f*Tb)^2 [1 − (f*Tb)^2]^2 ).     (5.52)
Substituting the above into (5.49) and changing the variable lambda = f*Tb, one has the following equations to solve for W*Tb:

(i) Rectangular pulse:

    2 Int_0^{W} Tb sin^2(pi*f*Tb)/(pi*f*Tb)^2 df = 2 Int_0^{W*Tb} sin^2(pi*lambda)/(pi*lambda)^2 dlambda = gamma/100.     (5.53)

(ii) Half-sine:

    2 Int_0^{W} (8Tb/pi^2) cos^2(pi*f*Tb)/[1 − 4(f*Tb)^2]^2 df = (16/pi^2) Int_0^{W*Tb} cos^2(pi*lambda)/(1 − 4*lambda^2)^2 dlambda = gamma/100     (5.54)

(iii) Raised cosine:

    2 Int_0^{W} (2Tb/3) sin^2(pi*f*Tb)/( pi^2 (f*Tb)^2 [1 − (f*Tb)^2]^2 ) df = (4/(3*pi^2)) Int_0^{W*Tb} sin^2(pi*lambda)/( lambda^2 (1 − lambda^2)^2 ) dlambda = gamma/100     (5.55)

(b) The above equations can only be solved numerically. Using the numerical integration routine in MATLAB (quad or quad8 in MATLAB 5.3 and quadl in MATLAB 6.0), the following table is obtained.

Table 5.1: Values of W*Tb

    gamma%   Rectangular   Half-Sine   Raised Cosine
    90.0%    0.8487        0.7769      0.9501
    95.0%    2.0740        0.9116      1.1146
    99.0%    10.2860       1.1820      1.4093
    99.9%    31.1677       2.7355      1.7290
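The rectangular-pulse entries of Table 5.1 can be spot-checked by solving (5.53) numerically; a sketch using trapezoidal integration and bisection (the solver parameters are illustrative choices):

```python
import numpy as np

def energy_fraction(x, n=200001):
    # 2 * integral_0^x sinc^2(theta) dtheta  (np.sinc(t) = sin(pi t)/(pi t))
    theta = np.linspace(0.0, x, n)
    y = np.sinc(theta) ** 2
    h = theta[1] - theta[0]
    return 2 * (h * (y.sum() - 0.5 * (y[0] + y[-1])))   # trapezoidal rule

def solve_WTb(gamma, lo=1e-3, hi=60.0):
    # bisection: energy_fraction is monotone increasing in x
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        if energy_fraction(mid) < gamma:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

WTb_90 = solve_WTb(0.90)   # ~0.8487
WTb_95 = solve_WTb(0.95)   # ~2.0740
```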

Sh
Note that the bandwidth is normalized in terms of the signalling rate rb = 1/Tb (bits/second). Asymptotically, as f becomes very large, the energy spectral densities behave as follows: Rectangular: 1/f^2; Half-sine: 1/f^4; Raised cosine: 1/f^6.
Thus the raised cosine decays the fastest. This is related to the number of times a signal can be differentiated before impulses appear. In the above, impulses appear in the rectangular pulse after the first derivative, in the half-sine after the second derivative and in the raised cosine after the third derivative. Thus one would expect the raised cosine to be the smoothest, then the half-sine and finally the rectangular pulse. This agrees with the plot of all three signals in Figure 5.40.
In the frequency domain, this translates into the smoother signal occupying a smaller portion of the frequency axis (though this also depends on the value of gamma used for the bandwidth definition). Note from Table 5.1 that though the raised cosine's spectral density decays (asymptotically) the fastest, it does not start occupying less frequency axis until the energy bandwidth criterion is 99.9%, a value that is probably not of engineering interest. A criterion of 95% or 99% would be a more realistic value for engineering design and analysis.
Figures 5.41 and 5.42 plot the energy spectral densities of all three signals in linear and logarithmic scales, respectively. It can be seen that the faster the density decays with f, the wider the main lobe. This is reasonable because all three signals have the same energy (of 1 joule): the energy of the respective signals has to show up someplace on the frequency axis, and if not at the higher frequencies, then in the main lobe.

P5.20 (a) When the two bits are equally likely (uniform source), the error performance of a digital binary communications system depends only on the Euclidean distance between the two signals. This distance is 2*sqrt(E) for antipodal signalling, where E is the energy of s(t). It follows that, for the two communication systems to have the same error performance, the energies of the two waveforms in Figure 5.43 have to be the same. That is:

    A^2 T = B^2 T/3  =>  B = A*sqrt(3)


Figure 5.40: Plot of the three signals in the time domain.

Figure 5.41: Plot of the energy spectral densities of the three signals (linear scale).

(b) Define W to be the bandwidth of the signal s(t) if 95% of the total energy of s(t) is contained inside the band [−W, W]. The bandwidth of the rectangular signal has been found in P5.19 to be W(i) = 2.07/T.

Figure 5.42: Plot of the energy spectral densities of the three signals (logarithmic scale).

Figure 5.43: Two waveforms for s(t): (i) rectangular, amplitude A over [0, T]; (ii) triangular, peak B at t = T/2.

For the triangular signal, to find the bandwidth W(ii) we first find S(ii)(f). To this end, differentiate s(ii)(t) twice, find the Fourier transform of the resulting signal and divide by (j2*pi*f)^2. Since

    d^2 s(ii)(t)/dt^2 = (2B/T) delta(t) − (4B/T) delta(t − T/2) + (2B/T) delta(t − T)

and

    F{ d^2 s(ii)(t)/dt^2 } = (2B/T) − (4B/T) e^{-j*pi*f*T} + (2B/T) e^{-j2*pi*f*T}

it follows that

    S(ii)(f) = F{ d^2 s(ii)(t)/dt^2 } / (j2*pi*f)^2
             = [ (2B/T)(1 + e^{-j2*pi*f*T}) − (4B/T) e^{-j*pi*f*T} ] / (j2*pi*f)^2
             = (4B/T) e^{-j*pi*f*T} [ (e^{j*pi*f*T} + e^{-j*pi*f*T})/2 − 1 ] / (j2*pi*f)^2
             = (4B/T) [cos(pi*f*T) − 1]/(j2*pi*f)^2 e^{-j*pi*f*T}

Then

    |S(ii)(f)|^2 = 16B^2 [cos(pi*f*T) − 1]^2 / (T^2 (2*pi*f)^4) = B^2 (1 − cos(pi*f*T))^2 / (pi^4 f^4 T^2)

and, with E(ii) = B^2 T/3, the equation to find W(ii) is

    (6/pi^4) Int_0^{W(ii)T} (1 − cos(pi*lambda))^2 / lambda^4 dlambda = 95/100 = 0.95

which gives W(ii) = 1.00/T.
Since both systems have the same bit rate of 2 Mbps, the bit duration in both systems is T = 1/(2*10^6) = 0.5*10^{-6} seconds.
The required bandwidths of the two systems are therefore W(i) = 2.07/(0.5*10^{-6}) = 4.14*10^6 Hz = 4.14 MHz and W(ii) = 1.00/(0.5*10^{-6}) = 2.00*10^6 Hz = 2.00 MHz.
Clearly, when the two systems have the same error performance, system (ii) is preferred since it requires less transmission bandwidth.
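The value W(ii)T = 1.00 (and hence the 2.00 MHz figure) can be confirmed numerically. Since the triangle of duration T is the convolution of two rectangles of width T/2, its energy spectral density is proportional to sinc^4(fT/2), and the 95% condition reduces to 3*Int_0^{WT/2} sinc^4(u) du = 0.95 (the half-line integral of sinc^4 is 1/3). A sketch of this route, which is an alternative to the textbook's differentiation argument:

```python
import numpy as np

def frac(WT, n=200001):
    # fraction of the triangular pulse's energy inside [-W, W]:
    # 3 * integral_0^{WT/2} sinc^4(u) du   (total half-line integral is 1/3)
    u = np.linspace(0.0, WT / 2, n)
    y = np.sinc(u) ** 4
    h = u[1] - u[0]
    return 3 * (h * (y.sum() - 0.5 * (y[0] + y[-1])))   # trapezoidal rule

lo, hi = 0.1, 10.0
for _ in range(50):                # bisection; frac is monotone increasing
    mid = 0.5 * (lo + hi)
    if frac(mid) < 0.95:
        lo = mid
    else:
        hi = mid
WT = 0.5 * (lo + hi)               # ~1.00
T = 0.5e-6                         # bit duration for 2 Mbps
W_MHz = WT / T / 1e6               # ~2.00 MHz
```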
(c) Consider the system that uses s(t) in Fig. 5.43-(i). Then the error probability is

    P[error] = Q( sqrt(2E/N0) ) = Q( sqrt(2A^2 T/N0) ) = 10^{-6}

    =>  2A^2 T/N0 = [Q^{-1}(10^{-6})]^2 = (4.75)^2

    =>  A = [ (4.75)^2 N0/(2T) ]^{1/2} = [ (4.75)^2 * 2*10^{-8}/(2*0.5*10^{-6}) ]^{1/2} = 0.6718 (volts)
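A quick numeric check of the amplitude (the small difference from 0.6718 V is just the rounding of Q^{-1}(10^{-6}) to 4.75):

```python
import math
from statistics import NormalDist

Qinv = lambda p: NormalDist().inv_cdf(1 - p)

N0 = 2e-8       # watts/Hz
T = 0.5e-6      # seconds
A = math.sqrt(Qinv(1e-6) ** 2 * N0 / (2 * T))   # ~0.672 volts
```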

P5.21 Consider the diversity system in Fig. 5.44.

Figure 5.44: A diversity system. Each branch correlates its received signal with s(t) over [0, Tb], giving rA = +/-VA + wA and rB = +/-VB + wB; branch A is weighted by K, the two branch outputs are summed into r, and the decision is r >= 0 => 1D, r < 0 => 0D.

(a) The decision variable r can be written as:

    r = K*rA + rB = +/-(K*VA + VB) + (K*wA + wB) = +/-V + w

Note that the signal components always have the same sign in both branches. In the above,

    V = K*VA + VB : total signal component
    w = K*wA + wB : total noise component

Both wA and wB are zero-mean, with variances sigmaA^2 and sigmaB^2 respectively; furthermore they are uncorrelated. Thus the noise w is a zero-mean Gaussian random variable with variance sigma^2 = K^2 sigmaA^2 + sigmaB^2.

Rewrite the decision variable r as:

    r = V + w    if 1 was transmitted
    r = −V + w   if 0 was transmitted

The probability of error for the decision rule r >= 0 => 1D, r < 0 => 0D is

    Pr[error] = Q(V/sigma) = Q( (K*VA + VB)/sqrt(K^2 sigmaA^2 + sigmaB^2) )     (5.56)

(b) To minimize Pr[error] one needs to maximize

    g(K) = V^2/sigma^2 = (K*VA + VB)^2/(K^2 sigmaA^2 + sigmaB^2) = (VA^2/sigmaA^2)(K + alpha)^2/(K^2 + beta)

where alpha = VB/VA and beta = sigmaB^2/sigmaA^2. Now

    dg(K)/dK = (VA^2/sigmaA^2) [2(K + alpha)(K^2 + beta) − (K + alpha)^2 (2K)]/(K^2 + beta)^2
             = (2VA^2/sigmaA^2) (K + alpha)[K^2 + beta − (K + alpha)K]/(K^2 + beta)^2
             = (2VA^2/sigmaA^2) (K + alpha)(beta − alpha*K)/(K^2 + beta)^2 = 0

gives K = −alpha or K = beta/alpha. Since K > 0, the optimum value of K to minimize Pr[error] is

    K* = beta/alpha = sigmaB^2 VA/(sigmaA^2 VB)

(c) When K = K* = beta/alpha we have

    g(K*) = (K* VA + VB)^2 / ((K*)^2 sigmaA^2 + sigmaB^2)
          = [ (sigmaB^2 VA/(sigmaA^2 VB)) VA + VB ]^2 / [ (sigmaB^2 VA/(sigmaA^2 VB))^2 sigmaA^2 + sigmaB^2 ]
          = (sigmaB^2 VA^2 + sigmaA^2 VB^2)^2 / [ sigmaA^2 sigmaB^2 (sigmaB^2 VA^2 + sigmaA^2 VB^2) ]
          = VA^2/sigmaA^2 + VB^2/sigmaB^2     (5.57)

which means that

    V^2/sigma^2 = VA^2/sigmaA^2 + VB^2/sigmaB^2     (5.58)

Note that each ratio in the above is a signal-to-noise ratio (SNR). Thus (5.58) shows that the best possible SNR is the sum of the SNRs of the two individual branches. This is called maximal-ratio combining.

The probability of error for maximal-ratio combining is:

    Pr[error] = Q( sqrt(VA^2/sigmaA^2 + VB^2/sigmaB^2) )

Remark: The diversity system considered in this problem is a very popular model in mobile communications when multiple antennas are used at the transmitter/receiver.

(d) If K is set to 1 then the combining of the two channels A and B is called equal-gain combining:

    Pr[error] = Q(V/sigma) = Q( (VA + VB)/sqrt(sigmaA^2 + sigmaB^2) )     (5.59)

Of course, as shown above, equal-gain combining (although simpler) is inferior to maximal-ratio combining in terms of error performance.
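The optimality of K* = sigmaB^2 VA/(sigmaA^2 VB), and the gap to equal-gain combining, can be verified numerically; a sketch with illustrative (assumed) branch values:

```python
import numpy as np

VA, VB = 1.0, 0.6          # branch signal amplitudes (assumed)
sA2, sB2 = 0.5, 1.5        # branch noise variances (assumed)

def g(K):                  # squared Q-function argument: V^2 / sigma^2
    return (K * VA + VB) ** 2 / (K * K * sA2 + sB2)

K_star = sB2 * VA / (sA2 * VB)
Ks = np.linspace(0.01, 50.0, 100000)
g_best = g(K_star)
g_grid_max = float(np.max(g(Ks)))
mrc_snr = VA ** 2 / sA2 + VB ** 2 / sB2   # maximal-ratio combining SNR
egc_snr = g(1.0)                          # equal-gain combining (K = 1)
```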

P5.22 Let s1(t) <-> 0T, s2(t) <-> 1T.

(a) f(rj|1T) = (c/2)e^{-c|rj − V|}; f(rj|0T) = (c/2)e^{-c|rj|}. Since the samples are statistically independent, the conditional pdfs are:

    f(r1, ..., rm|1T) = Prod_{j=1}^{m} (c/2)e^{-c|rj − V|};   f(r1, ..., rm|0T) = Prod_{j=1}^{m} (c/2)e^{-c|rj|}.

(b) The log-likelihood ratio becomes:

    ln[ Prod_{j=1}^{m} (c/2)e^{-c|rj − V|} / Prod_{j=1}^{m} (c/2)e^{-c|rj|} ] = c Sum_{j=1}^{m} [ |rj| − |rj − V| ].

If rj < 0 then |rj| = −rj and |rj − V| = −(rj − V); with m1 such samples, these terms sum to −c m1 V.
If 0 < rj < V then |rj| = rj and |rj − V| = −(rj − V); with m2 such samples, these terms sum to 2c Sum_{0<rj<V} rj − c m2 V.
If rj > V then |rj| = rj and |rj − V| = rj − V; with m3 such samples, these terms sum to c m3 V.
Therefore the sum becomes 2c Sum_{rj in (0,V)} rj + cV(m3 − m1 − m2). But m = m1 + m2 + m3, therefore the sum can be written as 2c Sum_{rj in (0,V)} rj + cV(2m3 − m).

Note: m is known to us beforehand but m3, and of course the rj, are not. They depend on the transmitted signal and the noise samples.

Further, what if rj = 0 or rj = V: where should one put these samples? It does not matter, but let us place them with the m1 and m3 counts, respectively.
Sh
(c) The decision rule that minimizes the error probability is:

    f(r1, ..., rm|1T) / f(r1, ..., rm|0T)  >=< (1D/0D)  P1/P2.

In deriving this, no assumptions were made regarding the form of the conditional pdfs. After taking ln and simplifying, we have

    Sum_{rj in (0,V)} rj + m3 V  >=< (1D/0D)  mV/2 + (1/(2c)) ln(P1/P2).

Remark: The receiver derived is not necessarily the best there is for the given noise model, but it is the optimum one (in terms of minimizing error probability) under the given conditions.
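The algebraic reduction of the log-likelihood ratio can be checked directly: for any set of samples, the simplified statistic 2c*Sum_{rj in (0,V)} rj + cV(2m3 − m) must equal c*Sum[|rj| − |rj − V|]. A sketch (the parameter values are arbitrary; the identity holds for any real samples):

```python
import random

random.seed(7)
c, V, m = 2.0, 1.0, 9

def loglr(r):
    # direct log-likelihood ratio ln[f(r|1T)/f(r|0T)] for Laplacian noise
    return c * sum(abs(x) - abs(x - V) for x in r)

def simplified(r):
    m3 = sum(1 for x in r if x >= V)           # samples at or above V
    s_mid = sum(x for x in r if 0 < x < V)     # samples strictly inside (0, V)
    return 2 * c * s_mid + c * V * (2 * m3 - m)

max_err = 0.0
for _ in range(500):
    r = [random.uniform(-2, 3) for _ in range(m)]
    max_err = max(max_err, abs(loglr(r) - simplified(r)))
```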

P5.23 (a) The observable is the number of received photons in the time interval of Tb seconds. Therefore:

    P[k photons emitted in Tb seconds|1T] = [(lambda_s + lambda_n)Tb]^k e^{-(lambda_s + lambda_n)Tb} / k!,
    P[k photons emitted in Tb seconds|0T] = [lambda_n Tb]^k e^{-lambda_n Tb} / k!.

The likelihood ratio test becomes:

    { [(lambda_s + lambda_n)Tb]^k / [lambda_n Tb]^k } e^{-lambda_s Tb}  >=< (1D/0D)  P1/P2

Taking ln gives:

    k  >=< (1D/0D)  [lambda_s Tb + ln(P1/P2)] / ln[(lambda_s + lambda_n)/lambda_n]
       = (when P1 = P2)  lambda_s Tb / ln[(lambda_s + lambda_n)/lambda_n] = Th.
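The threshold can be verified against a direct comparison of the two Poisson pmfs; a sketch with assumed rates lambda_s, lambda_n and equal priors:

```python
import math

lam_s, lam_n, Tb = 10.0, 2.0, 1.0        # assumed photon rates and bit time
Th = lam_s * Tb / math.log((lam_s + lam_n) / lam_n)

def pmf(k, lam):
    # Poisson pmf with mean lam*Tb
    return (lam * Tb) ** k * math.exp(-lam * Tb) / math.factorial(k)

# the likelihood-ratio decision and the counting threshold must agree for all k
agree = all((pmf(k, lam_s + lam_n) >= pmf(k, lam_n)) == (k >= Th)
            for k in range(40))
```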


(b) Again assuming the bits are equally probable, one has

    P[bit error] = (1/2)P[k >= Th|0T] + (1/2)P[k < Th|1T]
                 = (1/2)[ P[k >= Th|0T] + 1 − P[k >= Th|1T] ]
                 = 1/2 + (1/2) Sum_{k = Th}^{inf} { (lambda_n Tb)^k e^{-lambda_n Tb}/k! − [(lambda_s + lambda_n)Tb]^k e^{-(lambda_s + lambda_n)Tb}/k! }

(strictly speaking, the lower limit of the sum should be the ceiling of Th).

The average transmitted energy due to signal photons in one bit interval is hf lambda_s Tb, while that due to background photons is hf lambda_n Tb. Therefore the signal-to-noise ratio is SNR = hf lambda_s Tb/(hf lambda_n Tb) = lambda_s/lambda_n. So P[bit error] depends on the SNR, but in a very complicated manner.

P5.24 (Signal design for non-uniform sources)

(a) Simple manipulations give, with A = sqrt((s22 − s12)^2/(2N0)) = (s22 − s12)/sqrt(2N0) and B = 0.5 ln[(1 − p)/p]:

    (T − s12)/sqrt(N0/2) = [ 0.5(s12 + s22) + (N0/2) ln(p/(1−p))/(s22 − s12) − s12 ] / sqrt(N0/2)
                         = (s22 − s12)/sqrt(2N0) − sqrt(N0/2) ln[(1−p)/p]/(s22 − s12) = A − B/A

It then follows that:

    P[error|s1] = Q( (T − s12)/sqrt(N0/2) ) = Q(A − B/A)

Similarly,

    (s22 − T)/sqrt(N0/2) = (s22 − s12)/sqrt(2N0) + sqrt(N0/2) ln[(1−p)/p]/(s22 − s12) = A + B/A

and since Q(−x) = 1 − Q(x),

    P[error|s2] = P[r < T|s2] = 1 − Q( (T − s22)/sqrt(N0/2) ) = Q( (s22 − T)/sqrt(N0/2) ) = Q(A + B/A)

Thus, the error probability can be written as:

    Pr[error] = pQ(A − B/A) + (1 − p)Q(A + B/A)

Since Pr[error] is a decreasing function of A, Pr[error] is minimized when A is maximized.
Using the energy constraint Eb_bar = E1 p + E2(1 − p), one can write A as a function of the single variable E1 as follows:

    A = { [E1 − 2*rho*sqrt(E1 E2) + E2]/(2N0) }^{1/2}
      = { [E1 + (Eb_bar − E1 p)/(1 − p) − 2*rho*sqrt(E1(Eb_bar − E1 p)/(1 − p))]/(2N0) }^{1/2}     (5.60)

Thus, the design problem is to find the optimal value of E1 in the range 0 <= E1 <= Eb_bar/p (E1 cannot exceed Eb_bar/p because E2 >= 0) so that A is maximum.
(b) For the case of orthogonal signalling, rho = 0 and A^2 simplifies to

    A^2 = [Eb_bar + E1(1 − 2p)] / [(1 − p)2N0]

It is straightforward to see from the above equation that the maximum value of A^2 is A^2_max = Eb_bar/(2pN0), which occurs when E1 takes on its maximum value of E1 = Eb_bar/p (and thus E2 = 0). This implies the following optimal signal set:

    s1(t) = sqrt(Eb_bar/p) phi1(t)
    s2(t) = 0

i.e., we do not use the second basis function. In fact, we have on-off keying (OOK).
If we select E1 = E2 = Eb_bar as is usually done (conventional design), then A^2 = Eb_bar/N0. Since p <= 0.5, A^2_max/A^2 = (2p)^{-1} >= 1, with equality only when p = 0.5.
Note also that, for p = 0.5, the two solutions {E1 = 2Eb_bar, E2 = 0} and {E1 = E2 = Eb_bar} are special cases of the general solution (for an orthogonal signal set) E1 = 2Eb_bar cos^2(theta), E2 = 2Eb_bar sin^2(theta), where theta is an arbitrary angle.
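That A^2 is maximized at the edge E1 = Eb_bar/p for orthogonal signals can be seen by a brute-force scan; a sketch (p = 0.1, Eb_bar = N0 = 1 assumed):

```python
p, Eb, N0 = 0.1, 1.0, 1.0

def A2(E1):
    E2 = (Eb - E1 * p) / (1 - p)       # energy constraint
    return (E1 + E2) / (2 * N0)        # rho = 0 (orthogonal signals)

grid = [i * (Eb / p) / 100000 for i in range(100001)]
E1_opt = max(grid, key=A2)
A2_max = A2(E1_opt)                    # should equal Eb/(2*p*N0)
```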
(c) Fig. 5.45 plots the error performance of the optimum design and the conventional design when p = 0.1. Observe that, at BER = 10^{-5}, the gain in SNR achieved by the optimum design is about 7.1 dB.

(d) Nonorthogonal signals. The first and second derivatives of A^2 in (5.60) with respect to E1 are, respectively:

    dA^2/dE1 = (1/(2N0)) { (1 − 2p)/(1 − p) − rho (Eb_bar − 2E1 p)/[(1 − p)^{0.5} (Eb_bar E1 − E1^2 p)^{0.5}] }     (5.61)

    d^2A^2/dE1^2 = rho [4p(Eb_bar E1 − E1^2 p) + (Eb_bar − 2E1 p)^2] / [4N0 (1 − p)^{0.5} (Eb_bar E1 − E1^2 p)^{1.5}]

It can be seen that the sign of the second derivative is determined by the sign of rho. Thus, for rho > 0, A^2 is a convex function of E1 (with a minimum), while for rho < 0, A^2 is a concave function of E1 (with a maximum). Figure 5.46 plots the normalized quantity

    An = (2N0/Eb_bar) A^2 = [E1n(1 − 2p) + 1]/(1 − p) − 2*rho*(E1n − E1n^2 p)^{0.5}/(1 − p)^{0.5}

Figure 5.45: Error performance of the optimum and conventional designs (BER vs Eb_bar/N0, p = 0.1).


Figure 5.46: Variation of An with E1n when rho varies between −1 and +1 in steps of 0.1 (p = 0.1).


Figure 5.47: Variation of BER of the optimal and conventional systems for rho >= 0 (p = 0.1).

as a function of the normalized energy E1n = E1/Eb_bar, for several values of rho and p = 0.1. We can observe the convexity for rho > 0 and the concavity for rho < 0.
Equating (5.61) to zero, the following two roots, which give a minimum for rho > 0 and a maximum for rho < 0, are obtained:

    E1 = (Eb_bar/(2p)) { 1 − (1 − 2p)/[1 − 4p(1 − p)(1 − rho^2)]^{0.5} },   rho > 0
    E1,opt = (Eb_bar/(2p)) { 1 + (1 − 2p)/[1 − 4p(1 − p)(1 − rho^2)]^{0.5} },   rho < 0     (5.62)

Thus for rho > 0, the maximum of An occurs at the edge, with E1,opt = Eb_bar/p, E2,opt = 0, A^2_max = Eb_bar/(2pN0), which is OOK again.
Fig. 5.47 plots the BER as a function of Eb_bar/N0 for the optimal E1 and E2, and also for the conventional system (with E1 = E2 = Eb_bar, hence A^2 = Eb_bar(1 − rho)/N0), for several nonnegative values of rho and p = 0.1. We can see from this figure that for nonnegative rho, the BER of the optimal system (OOK) is much better than that of the conventional system. For example, when the BER is 10^{-7}, the improvement in Eb_bar/N0 is about 7.1 dB for rho = 0 and is even greater for higher values of rho.
For rho < 0, the maximum of A^2 occurs when E1 = E1,opt of (5.62). From (5.62) and the energy constraint one has:

    E2,opt = (Eb_bar/(2(1 − p))) { 1 − (1 − 2p)/[1 − 4p(1 − p)(1 − rho^2)]^{0.5} }     (5.63)

Substituting (5.62) and (5.63) in (5.60), we obtain the maximum value

    A^2_max = (Eb_bar/(2N0)) { [1 + (1 − 2p)^2/[1 − 4p(1 − p)(1 − rho^2)]^{0.5}] / (2p(1 − p)) + 2|rho|/[1 − 4p(1 − p)(1 − rho^2)]^{0.5} }
Figure 5.48: Variation of BER of the optimal and conventional systems for rho < 0 (p = 0.1).

Fig. 5.48 plots the BER as a function of Eb_bar/N0 for the optimal E1 and E2, and for the conventional system, for several negative values of rho when p = 0.1. It can be seen from this figure that for negative values of rho there is also a significant improvement in performance relative to the conventional system. For example, when the BER is 10^{-7}, the improvement is about 4.5 dB for rho = −1 and about 6.0 dB for rho = −0.25.
Finally, if there is no constraint on the specific value of rho (which should be the case when rho != 0 is allowed), it is not hard to see that the optimum signal set corresponds to rho = −1.

Figure 5.49: The conditional pdfs f(l|m3), f(l|m2), f(l|m1), centered at −E, 0 and E respectively, with decision thresholds at −A and A.

P5.25 (a) The decision variable can be expressed as:

    l = Int_0^T r(t)s(t) dt =  E + w    if m1 was sent
                               w        if m2 was sent
                              −E + w    if m3 was sent

where E is the energy of s(t) and w = Int_0^T w(t)s(t) dt is a zero-mean Gaussian random variable with variance sigma^2 = (N0/2)E.

Thus, given m1 was sent:

    P[error|m1] = P(l < A) = 1 − Q( (A − E)/sqrt(EN0/2) ) = Q( (E − A)/sqrt(EN0/2) )

Given m3 was sent:

    P[error|m3] = P(l > −A) = Q( (−A + E)/sqrt(EN0/2) )

Given m2 was sent:

    P[error|m2] = P(l < −A or l > A) = 2Q( A/sqrt(EN0/2) )

(b) The average probability of error when the a priori probabilities of the three signals are P[m1] = P[m3] = 1/4 and P[m2] = 1/2 is given by:

    P[error] = (1/4)P[error|m1] + (1/4)P[error|m3] + (1/2)P[error|m2]
             = (1/2)Q( (E − A)/sqrt(EN0/2) ) + Q( A/sqrt(EN0/2) )

(c) Since P[error] = f(A), to minimize f(A) one needs to find A from df(A)/dA = 0. Using the hint provided, one obtains:

    d/dA Q( (E − A)/sqrt(EN0/2) ) = (1/sqrt(EN0/2)) (1/sqrt(2*pi)) exp(−(E − A)^2/(EN0)) = (1/sqrt(pi*E*N0)) exp(−(E − A)^2/(EN0))

    d/dA Q( A/sqrt(EN0/2) ) = −(1/sqrt(pi*E*N0)) exp(−A^2/(EN0))

Thus, setting df(A)/dA = 0 gives:

    (1/2) exp(−(E − A)^2/(EN0)) = exp(−A^2/(EN0))

    =>  ln(1/2) = [(E − A)^2 − A^2]/(EN0) = (E − 2A)/N0  =>  A = E/2 − (N0/2) ln(1/2) = E/2 + (N0/2) ln 2

Finally, when P[m1] = P[m3] = P[m2] = 1/3, setting df(A)/dA = 0 yields A = E/2, which is independent of the noise level N0!
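The closed-form threshold A = E/2 + (N0/2) ln 2 can be confirmed by a brute-force minimization of f(A); a sketch with assumed values E = 4, N0 = 1:

```python
import math
from statistics import NormalDist

_nd = NormalDist()
Q = lambda x: 1.0 - _nd.cdf(x)

E, N0 = 4.0, 1.0
sigma = math.sqrt(E * N0 / 2)

def f(A):
    # P[error] = (1/2) Q((E-A)/sigma) + Q(A/sigma)
    return 0.5 * Q((E - A) / sigma) + Q(A / sigma)

grid = [i * E / 40000 for i in range(1, 40000)]
A_num = min(grid, key=f)
A_closed = E / 2 + (N0 / 2) * math.log(2)
```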

P5.26

    v(t) = Sum_{k=-inf}^{inf} [P1 s1(t − kTb) + P2 s2(t − kTb)]

    v(t + Tb) = Sum_{k=-inf}^{inf} [P1 s1(t − (k − 1)Tb) + P2 s2(t − (k − 1)Tb)].

Now change the (dummy) index variable to l = k − 1. Then

    v(t + Tb) = Sum_{l=-inf}^{inf} [P1 s1(t − lTb) + P2 s2(t − lTb)] = v(t).

Therefore v(t) is periodic with period Tb.

P5.27 (Regenerative Repeaters) The number of repeaters needed is K = 2000 km/20 km = 100.

(i) If analog repeaters are used:

    Pb = Q( sqrt(2Eb/(K N0)) ) = 10^{-6}
    =>  Eb/N0 = (K/2)[Q^{-1}(Pb)]^2 = (100/2)[Q^{-1}(10^{-6})]^2 = 1130 = 30.53 dB

(ii) If regenerative repeaters are used:

    Pb = K Q( sqrt(2Eb/N0) )
    =>  Eb/N0 = (1/2)[Q^{-1}(Pb/K)]^2 = (1/2)[Q^{-1}(10^{-8})]^2 = 15.74 = 11.97 dB
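Both numbers follow from inverse-Q evaluations; a quick check:

```python
import math
from statistics import NormalDist

Qinv = lambda p: NormalDist().inv_cdf(1 - p)

K, Pb = 100, 1e-6
analog = (K / 2) * Qinv(Pb) ** 2        # Eb/N0, analog repeaters (~1130)
regen = 0.5 * Qinv(Pb / K) ** 2         # Eb/N0, regenerative (~15.74)
analog_db = 10 * math.log10(analog)     # ~30.53 dB
regen_db = 10 * math.log10(regen)       # ~11.97 dB
```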

P5.28 (Simulation of a binary communication system using antipodal signalling) The error probability of antipodal (BPSK) signalling is P[error] = Q(sqrt(2Eb/N0)). Since Eb = V^2 Tb, Tb = 1 second and N0/2 = 1 watts/Hz, one has P[error] = Q(V), or V = Q^{-1}(P[error]). The following MATLAB script can be used to perform the simulation.

Pe_vec=[10^(-1) 10^(-2) 10^(-3) 10^(-4) 10^(-5)];


Ng

L=[10^5,10^5,10^6,10^6,10^7];
% Numbers of information bits to run, corresponding to each Pe
for i=1:length(Pe_vec)
Pe=Pe_vec(i);
V(i)=Qinv(Pe);
% This is the sequence of information bits (0,1):
b=round(rand(1,L(i)));
% Convert bits to amplitudes levels of +V and -V volts
% and add to it AWGN with variance of 1:
r=V(i)*(2*b-1)+randn(1,length(b));
% Receiver:
temp=sign(r);

Figure 5.50: BER performance of BPSK signalling.

% if r=0 then arbitrarily decide that bit 1 was transmitted.


temp(find(temp==0))=1;
&
% This is the sequence of demodulated bits (0,1):
b_hat=(temp+1)/2;
% Estimation of the error probability:
BER(i)=length(find(b_hat-b))/length(b);
end
% Plot the experimental and theoretical probabilities of error (BER)
% (Q and Qinv are assumed to be user-defined functions implementing the
% Gaussian Q-function and its inverse.)
sim=semilogy(20*log10(V),BER,'marker','x','markersize',8,'color','k');
hold on;
theo=semilogy(20*log10(V),Q(V),'marker','o','markersize',8,'color','k');
legend([sim theo],'Simulation','Theory',1);
xlabel('{\itV}/{\it\sigma} (dB)','FontName','Times New Roman','FontSize',16);
ylabel('BER','FontName','Times New Roman','FontSize',16);
h=gca;
set(h,'FontSize',16,'XGrid','on','YGrid','on','GridLineStyle',':',...
    'MinorGridLineStyle','none','FontName','Times New Roman');

Fig. 5.50 plots both the theoretical and experimental BER curves. As can be seen, the two
Ng

curves are almost identical, except for P [error] = 105 . This difference is merely due to the
insufficient data for simulation of this BER value (only 107 input bits were generated). The
difference would disappear if more bits were simulated.
The numbers of bits in error that I got after running the above script are {9992 1021 986 108 124}.
Of course these numbers will differ slightly for each execution of the script.



Chapter 6

Baseband Data Transmission
P6.1 (a) See Fig. 6.1-(a).

(b) dk = bk XOR dk-1; d-1 = 0. See Fig. 6.1-(b).

    bk : 1 0 1 1 0 0 0 1 1 1
    dk : 1 1 0 1 1 1 1 0 1 0

Figure 6.1: (a) NRZI waveform for the bit sequence bk (levels +/-V, initial condition −V); (b) NRZ-L waveform with the encoded bits dk as the input to the modulator.

Note that the waveforms of (a) and (b) are the same, i.e., one can implement the modulator either as in (a) or as in (b); the end result is identical.
(c) With a polarity reversal at t = 4Tb, the transmitted waveform is as shown in Fig. 6.2.
(d) The question is a bit ambiguous since it may refer to either no polarity reversal or a polarity reversal at t = 4Tb, so we do it for both situations.


Figure 6.2: Transmitted waveform with a polarity reversal at t = 4Tb. The detected sequence is

    dk_hat : 1 1 0 1 0 0 0 1 0 1

and decoding via bk_hat = dk_hat XOR dk-1_hat (with d-1_hat = 0 assumed as the initial condition) gives

    bk_hat : 1 0 1 1 1 0 0 1 1 1

i.e., one bit error, in the interval of the polarity reversal.

(i) With polarity reversal at t = 4Tb and d-1_hat = 1 (assumed), the picture looks as in Fig. 6.3:

    bk_hat : 0 0 1 1 1 0 0 1 1 1

i.e., errors occur in the first interval (due to the wrong initial condition) and in the interval of the polarity reversal.

Figure 6.3

(ii) With no polarity reversal at t = 4Tb (see Fig. 6.4):

    bk_hat : 0 0 1 1 0 0 0 1 1 1

i.e., only the first bit is in error.

Figure 6.4

Remarks: dk sequence of (b) was used since with no polarity reversal and no noise,
uy

dk = dk . Also from (c) we might expect that only 1 error would occur since assuming
d1 = 1 is equivalent to a polarity reversal. The conclusion is that a polarity reversal
causes one bit to be in error.
(e) See Fig. 6.5.
(f) A polarity reversal leads to a bit error in the interval in which the polarity reversal takes place. The decoder, in the absence of random noise, recovers after this interval. In the presence of random noise, an error due to the noise also leads to a bit error in the next interval, as shall be seen in the next problem.
Note that if the polarity reversal takes place not at t = kTb then, at least in the absence of random noise, it is easily detected and can be corrected.
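The self-recovery after a polarity reversal can be demonstrated directly; the following Python sketch (helper names are illustrative) flips the encoded bits from 4Tb onward and decodes, showing exactly one bit error:

```python
def nrzi_encode(bits, d_init=0):
    d, prev = [], d_init
    for b in bits:
        prev = b ^ prev
        d.append(prev)
    return d

def nrzi_decode(d_hat, d_init=0):
    """b_hat_k = d_hat_k XOR d_hat_{k-1}."""
    return [d ^ p for d, p in zip(d_hat, [d_init] + d_hat[:-1])]

b = [1, 0, 1, 1, 0, 0, 0, 1, 1, 1]
d = nrzi_encode(b)
d_rev = d[:4] + [x ^ 1 for x in d[4:]]      # polarity reversal at t = 4Tb
b_hat = nrzi_decode(d_rev)
errors = [i for i, (u, v) in enumerate(zip(b_hat, b)) if u != v]
print(errors)                                # a single error, in interval 4
```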
P6.2 Look at the modulator as first a mapping (encoding) from bk to dk and then an NRZ-L
modulator.
(a) The signals of the NRZ-L modulator are antipodal, in general with energy = Eb . The
signal space looks as in Fig. 6.6


Figure 6.5: received waveform with both an inadvertent and a deliberate polarity reversal; d̂_k = 1 1 0 1 0 0 0 0 1 0 and b̂_k = 1 0 1 1 1 0 0 0 1 1 (d̂_{−1} = 1), with errors in two intervals.

Figure 6.6: signal space of the NRZ-L modulator; basis function φ1(t) = 1/√Tb for 0 ≤ t ≤ Tb; d_k = 0 maps to −√Eb and d_k = 1 maps to +√Eb (here we let Eb = 1).
(b) With the energy level set to 1 joule, the output s_k is: d_k = 1 → s_k = +1 volt; d_k = 0 → s_k = −1 volt. Using the d_k sequence determined in P6.1(b), the output sequence is shown in Fig. 6.7.

Figure 6.7: s_k = +1, +1, −1, +1, +1, +1, +1, −1, +1, −1 over the intervals 0 to 10Tb (using the d_k sequence determined in P6.1(b)).

(c) r_k = s_k + w_k (volts). See Fig. 6.8.

Figure 6.8: r_k = 0.6, −0.2, −0.8, 1.2, 0.6, 0.8, 0.2, 0.2, 1.2, −1; decisions d̂_k = 1 0 0 1 1 1 1 1 1 0, with errors in the intervals starting at Tb and 7Tb.


The demodulator is a bit-by-bit demodulator, with the decision rule

r_k ≷ 0: decide d̂_k = 1 if r_k > 0, and d̂_k = 0 if r_k < 0.

Note the d̂_k bit errors at t = Tb and t = 7Tb. See Fig. 6.9.

Figure 6.9: decoded bits b̂_k = 1 1 0 1 0 0 0 0 0 1 (d̂_{−1} = 0), with four errors over 0 to 10Tb.

we
The observation is that a bit error in d k due to random noise results in 2 bit errors in

the decoded information bits bk . This is the influence of the memory in the system. If
dk is in error then since it participates in the determination of both bk and bk+1 , i.e.,
k = d
b k d k1 and b
k+1 = d
k+1 d
k , then both b
k, b
k+1 are both affected by d
k . In
other words, the error propagates but not that severely.
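The error doubling is easy to reproduce numerically. A short Python sketch using the r_k values of part (c) (helper name nrzi_decode is illustrative): each of the two d̂_k errors produces two b̂_k errors.

```python
def nrzi_decode(d_hat, d_init=0):
    """b_hat_k = d_hat_k XOR d_hat_{k-1}."""
    return [d ^ p for d, p in zip(d_hat, [d_init] + d_hat[:-1])]

b = [1, 0, 1, 1, 0, 0, 0, 1, 1, 1]          # information bits
d = [1, 1, 0, 1, 1, 1, 1, 0, 1, 0]          # encoded bits from P6.1(b)
r = [0.6, -0.2, -0.8, 1.2, 0.6, 0.8, 0.2, 0.2, 1.2, -1.0]

d_hat = [1 if x > 0 else 0 for x in r]      # bit-by-bit decisions r_k vs 0
b_hat = nrzi_decode(d_hat)
d_err = [i for i in range(10) if d_hat[i] != d[i]]
b_err = [i for i in range(10) if b_hat[i] != b[i]]
print(d_err, b_err)                          # each d-error doubles in b
```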

(d) See Fig. 6.10.
Figure 6.10: s_k = +1 +1 −1 +1 −1 −1 −1 +1 −1 +1 (polarity reversal at 4Tb); r_k = 0.6, −0.2, −0.8, 1.2, −1.4, −1.2, −1.8, 2.2, −0.8, 1; decisions d̂_k = 1 0 0 1 0 0 0 1 0 1; decoded bits b̂_k = 1 1 0 1 1 0 0 1 1 1 (d̂_{−1} = 0). The errors at Tb and 2Tb are due to random noise; the error at 4Tb is due to the polarity reversal. Note: the polarity reversal appears to help with the noise errors! But be cautious: rarely in life do 2 wrongs make 1 right.

P6.3 We assume, as we almost invariably do throughout, that the information bits are equally
likely, i.e., P [bk = 0] = P0 = 1/2; P [bk = 1] = P1 = 1/2. In P6.6, we will discuss the more
general case.

(a) Now d_k = b_k ⊕ d_{k−1}. Note that d_k = 0 if (b_k = 0 and d_{k−1} = 0) or (b_k = 1 and d_{k−1} = 1), which are 2 mutually exclusive random events. Therefore,

P[d_k = 0] = P[b_k = 0 and d_{k−1} = 0] + P[b_k = 1 and d_{k−1} = 1].

Using Bayes' rule:

P[d_k = 0] = P[b_k = 0 | d_{k−1} = 0] P[d_{k−1} = 0] + P[b_k = 1 | d_{k−1} = 1] P[d_{k−1} = 1].


But the bits b_k are statistically independent of the d_k bits, i.e.,

P[b_k = 0 | d_{k−1} = 0] = P[b_k = 0] = 1/2 = P[b_k = 1 | d_{k−1} = 1]

⇒ P[d_k = 0] = (1/2) (P[d_{k−1} = 0] + P[d_{k−1} = 1]) = 1/2 = P[d_k = 1],

since P[d_{k−1} = 0] + P[d_{k−1} = 1] surely equals 1.

(b)

P[d̂_k bit in error] = Q(√(2Eb/N0)).

Figure 6.11: the antipodal signal points (d_k = 0 ↔ −√Eb, d_k = 1 ↔ +√Eb), the correlator with φ1(t) = 1/√Tb producing r_k, and the decision rule r_k ≷ 0 (d̂_k = 1 if r_k > 0, d̂_k = 0 otherwise).

(c) (i) No.
(ii) No. Two wrongs do make a right, as we observed in P6.2(d).
(iii) Yes.
(iv) Yes.

P[b̂_k error] = P[(d̂_k correct and d̂_{k−1} incorrect) or (d̂_k incorrect and d̂_{k−1} correct)],

where the two events are mutually exclusive. Also, the decisions d̂_k and d̂_{k−1} are statistically independent, since the noise terms w_k and w_{k−1} are statistically independent. Therefore,

P[b̂_k error] = 2 P[d̂_k correct] P[d̂_{k−1} incorrect] = 2Q(√(2Eb/N0)) [1 − Q(√(2Eb/N0))].

With NRZ-L, we have P[b̂_k error] = Q(√(2Eb/N0)). Assume that Q(√(2Eb/N0)) ≪ 1, which is the practical case. Then P[b̂_k error] for NRZI ≈ 2Q(√(2Eb/N0)), i.e., 2 times that of NRZ-L. The factor of 2 reflects the error propagation: whenever there is a bit error due to random noise, the succeeding bit is also in error.
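The factor-of-2 relationship can be checked numerically from the expressions above; a small Python sketch (function names are illustrative), using Q(x) = erfc(x/√2)/2:

```python
from math import erfc, sqrt

def Q(x):
    """Gaussian tail function Q(x) = 0.5 erfc(x / sqrt(2))."""
    return 0.5 * erfc(x / sqrt(2))

def ber_nrzl(ebn0_db):
    return Q(sqrt(2 * 10 ** (ebn0_db / 10)))

def ber_nrzi(ebn0_db):
    q = ber_nrzl(ebn0_db)
    return 2 * q * (1 - q)       # error propagation gives (almost) twice the BER

print(ber_nrzi(8.0) / ber_nrzl(8.0))   # just slightly below 2
```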

P6.4 (a) The first thing is to define the state set, i.e., what is needed from the past that, along with the present input bit b_k, allows one to determine the output.


Figure 6.12: state diagrams. For the encoder (left), the states are d_{k−1} = 0 and d_{k−1} = 1, with transitions labeled (encoded bit output d_k) / (input bit b_k). For the modulator (right), the states are "previous modulator output = −V" and "previous modulator output = +V", with transitions labeled (output level) / (input bit b_k).

Mapping from b_k to d_k: the state is the value of d_{k−1}.
Mapping from b_k to modulator level: the state is the previous value of the modulator output. The state diagrams then look as in Fig. 6.12.
(b) The trellis looks as in Fig. 6.13
Figure 6.13: trellis diagram. The states are the previous modulator output, −V and +V; each branch is labeled with the input bit b_k, the modulator output level, and the encoded bit d_k (e.g., "0, −V / 0" and "1, +V / 1"); a branch with b_k = 0 keeps the level, a branch with b_k = 1 toggles it.

(c) The branch metrics are (r_k − √Eb)² and (r_k + √Eb)², the former if the branch corresponds to +V being transmitted and the latter if the level transmitted is −V (see Fig. 6.14).

Figure 6.14: one trellis section with branch metrics (r_k + √Eb)² on the −V branch and (r_k − √Eb)² on the +V branch, where in our case Eb = 1.

Therefore the best path can be found as illustrated in Fig. 6.15.


Note that there are 4 bit errors, which is the same as in P6.2(c). Thus, in this case, the
VA does not perform better. But 1 swallow does not make a summer and more will be


Figure 6.15: Viterbi search. r_k = 0.6, −0.2, −0.8, 1.2, 0.6, 0.8, 0.2, 0.2, 1.2, −1. Branch metrics (r_k − 1)² = 0.16, 1.44, 3.24, 0.04, 0.16, 0.04, 0.64, 0.64, 0.04, 4 and (r_k + 1)² = 2.56, 0.64, 0.04, 4.84, 2.56, 3.24, 1.44, 1.44, 4.84, 0. Surviving path metrics, state −V: 2.56, 0.8, 0.84, 5.68, 3.44, 4.28, 2.52, 3.16, 7.20, 2.40; state +V: 0.16, 1.60, 4.04, 0.88, 1.04, 1.08, 1.72, 2.36, 2.40, 6.40. The best final metric is 2.40 (state −V), giving d̂_k = 1 0 0 1 1 1 1 1 1 0, i.e., b̂_k = 1 1 0 1 0 0 0 0 0 1.

said at the end of the problem. Next, we look at P6.4(d).

(d) Refer to Fig. 6.16 for the solution.


Note that there are only 3 bit errors, making it appear that the polarity reversal actually helps. But again one should not jump to conclusions. For a proper evaluation of the comparison between bit-by-bit demodulation and sequence demodulation, one should resort to simulation, say in Matlab.
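The search of Fig. 6.15 can also be reproduced programmatically. The manual suggests Matlab; the following is an equivalent minimal Python sketch (function name is illustrative). Because any level may follow any level in this trellis, the best predecessor at each step is simply the state with the smaller accumulated metric.

```python
def viterbi_nrzi(r, init_level=-1.0):
    """Viterbi search over levels ±1 with squared-distance branch metrics."""
    levels = (-1.0, 1.0)
    metrics = {init_level: 0.0}       # d_{-1} = 0 corresponds to level -V
    back = []                         # back-pointers for traceback
    for rk in r:
        best_prev = min(metrics, key=metrics.get)
        new_metrics, ptr = {}, {}
        for s in levels:              # branch metric does not depend on prev state
            new_metrics[s] = metrics[best_prev] + (rk - s) ** 2
            ptr[s] = best_prev
        back.append(ptr)
        metrics = new_metrics
    s = min(metrics, key=metrics.get)  # best final state
    path = [s]
    for ptr in reversed(back[1:]):
        s = ptr[s]
        path.append(s)
    path.reverse()
    d_hat = [1 if lv > 0 else 0 for lv in path]
    b_hat = [d ^ p for d, p in zip(d_hat, [0] + d_hat[:-1])]  # differential decode
    return metrics, d_hat, b_hat

m, d_hat, b_hat = viterbi_nrzi([0.6, -0.2, -0.8, 1.2, 0.6, 0.8, 0.2, 0.2, 1.2, -1.0])
print(m, d_hat, b_hat)   # reproduces the metrics and decisions of Fig. 6.15
```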

P6.5 (a) Consider 2 sections of the trellis as in Fig. 6.17.
The bit pattern b_{k−1} = 0, b_k = 0 corresponds to the possible paths shown in Fig. 6.18. The bit pattern b_{k−1} = 1, b_k = 0 corresponds to the possible paths shown in Fig. 6.19. Similarly, the bit patterns b_{k−1} = 0, b_k = 1 and b_{k−1} = 1, b_k = 1 correspond to the paths shown in Fig. 6.20.
So the bit b_k is represented by the signals shown in Fig. 6.21.
(b) Choose the basis functions as in Fig. 6.22 to represent bit b_k. The signal space plot looks as in Fig. 6.23.


Figure 6.16: Viterbi search with the polarity reversal at 4Tb. r_k = 0.6, −0.2, −0.8, 1.2, −1.4, −1.2, −1.8, 2.2, −0.8, 1. Branch metrics (r_k − 1)² = 0.16, 1.44, 3.24, 0.04, 5.76, 4.84, 7.84, 1.44, 3.24, 0 and (r_k + 1)² = 2.56, 0.64, 0.04, 4.84, 0.16, 0.04, 0.64, 10.24, 0.04, 4. Surviving path metrics, state −V: 2.56, 0.8, 0.84, 5.68, 1.04, 1.08, 1.72, 11.96, 3.20, 7.20; state +V: 0.16, 1.60, 4.04, 0.88, 6.64, 5.88, 8.92, 3.16, 6.40, 3.20. The best final metric is 3.20 (state +V), giving b̂_k = 1 1 0 1 1 0 0 1 1 1.

Figure 6.17: two sections of the trellis, from (k−2)Tb to kTb, covering bits b_{k−1} and b_k; the states are +V and −V, with branches labeled level/bit (−V/0, +V/0, −V/1, +V/1).

Figure 6.18: the two possible waveforms over [(k−2)Tb, kTb] for the bit pattern b_{k−1} = 0, b_k = 0: the level stays at −V over both intervals, or stays at +V over both intervals.

(c) The demodulator projects r(t) onto φ1(t) and φ2(t) to obtain the sufficient statistics r1, r2, and chooses the signal point they are closest to. The decision space is shown in Fig. 6.23.


Figure 6.19: the corresponding output signals for each path (or bit pattern) for b_{k−1} = 1, b_k = 0 over [(k−2)Tb, kTb].

Figure 6.20: the possible paths and corresponding output signals for the bit patterns b_{k−1} = 0, b_k = 1 and b_{k−1} = 1, b_k = 1; in each case the level changes polarity at (k−1)Tb.
en

+V
bk = 0 2Tb or
t t
V V 2Tb
uy

+V +V
bk = 1
t or t
V V

Figure 6.21
(d) As for the Miller scheme, one has

P[b_k is in error] = 2Q( √E cos 45° / √(N0/2) ) [ 1 − Q( √E cos 45° / √(N0/2) ) ] = 2Q(√(E/N0)) [1 − Q(√(E/N0))].

To compare with the expression in P6.3(c), note that E = 2Eb (because we are looking at the signal over 2 bit intervals). Therefore, the error probability is the same.
Remark: We shall use these ideas when the fading channel and differential phase shift keying (DPSK) are discussed in Chapter 10.


Figure 6.22: basis functions: φ1(t) = 1/√(2Tb) for 0 ≤ t ≤ 2Tb; φ2(t) = 1/√(2Tb) for 0 ≤ t ≤ Tb and −1/√(2Tb) for Tb ≤ t ≤ 2Tb.

Figure 6.23: signal space and decision regions. The points (±√E, 0) along φ1 represent b_k = 0 and the points (0, ±√E) along φ2 represent b_k = 1; the decision boundaries are r2 = r1 and r2 = −r1.
P6.6 Consider Table 6.1, in which d_k = 0 is mapped to −1 and d_k = 1 to +1.

Table 6.1
  d_{k−1} → i   d_k → j   product ij
  0 → −1        0 → −1    +1
  0 → −1        1 → +1    −1
  1 → +1        0 → −1    −1
  1 → +1        1 → +1    +1

The average or expected value of the product ij is given by a weighted sum of the 4 possible values of ij, where the weights are the probabilities that each value occurs. Therefore,

E{d_{k−1} d_k} = E{ij} = Σ ij P[d_{k−1} d_k = ij] = Σ ij P[d_k = j | d_{k−1} = i] P[d_{k−1} = i]
= P[d_k = 0|d_{k−1} = 0] P[d_{k−1} = 0] − P[d_k = 1|d_{k−1} = 0] P[d_{k−1} = 0]
  − P[d_k = 0|d_{k−1} = 1] P[d_{k−1} = 1] + P[d_k = 1|d_{k−1} = 1] P[d_{k−1} = 1].


Now d_k = d_{k−1} ⊕ b_k, which means that

P[d_k = 0|d_{k−1} = 0] = P[b_k = 0] = 1/2, since when d_{k−1} = 0, d_k = 0 iff b_k = 0,
P[d_k = 1|d_{k−1} = 0] = P[b_k = 1] = 1/2,
P[d_k = 0|d_{k−1} = 1] = P[b_k = 1] = 1/2,
P[d_k = 1|d_{k−1} = 1] = P[b_k = 0] = 1/2.

Therefore,

E{d_{k−1} d_k} = (1/2)P[d_{k−1} = 0] − (1/2)P[d_{k−1} = 0] − (1/2)P[d_{k−1} = 1] + (1/2)P[d_{k−1} = 1] = 0, i.e., uncorrelated.

Since the PSD depends only on second order statistics, we conclude that the modulator sees
an input of uncorrelated bits (strictly speaking uncorrelated impulses) just as in the NRZ-L
case. Therefore the PSD of NRZI is the same as that of NRZ-L.

The above assumes that the input bits b_k are equally probable. More generally, let P[b_k = 0] = P0 and P[b_k = 1] = P1 = 1 − P0. In this case the correlation becomes:

E{d_{k−1} d_k} = P0 P[d_{k−1} = 0] − P1 P[d_{k−1} = 0] − P1 P[d_{k−1} = 1] + P0 P[d_{k−1} = 1]
= (P0 − P1) (P[d_{k−1} = 0] + P[d_{k−1} = 1]) = P0 − P1 ≠ 0, if P0 ≠ P1.
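This correlation can be checked by a small Monte Carlo experiment. A Python sketch (seeded for reproducibility; function names are illustrative, and the ±1 mapping of the table above is used):

```python
import random

def nrzi_stream(n, p0, seed=0):
    """Generate n differentially encoded bits, mapped to ±1, with P[b_k = 0] = p0."""
    rng = random.Random(seed)
    d, prev = [], 0
    for _ in range(n):
        b = 0 if rng.random() < p0 else 1
        prev = b ^ prev
        d.append(prev)
    return [2 * x - 1 for x in d]        # map 0 -> -1, 1 -> +1

def corr(seq):
    """Sample estimate of E{d_{k-1} d_k}."""
    return sum(a * b for a, b in zip(seq, seq[1:])) / (len(seq) - 1)

print(corr(nrzi_stream(200_000, 0.5)))   # near 0 for equally likely bits
print(corr(nrzi_stream(200_000, 0.8)))   # near P0 - P1 = 0.6
```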

Consider now the issue of statistical independence. Intuitively, one feels that because of the functional relationship between d_k and d_{k−1}, they would be statistically dependent. Consider the case of equally likely {b_k} bits. In P6.3 we showed that in this case the differential bits {d_k} are equally likely, i.e., P[d_k] = P[d_{k−1}] = 1/2.
Now consider P[d_k | d_{k−1}]. Then

P[d_k = 0|d_{k−1} = 0] = P[b_k = 0] = 1/2;  P[d_k = 1|d_{k−1} = 0] = P[b_k = 1] = 1/2
P[d_k = 0|d_{k−1} = 1] = P[b_k = 1] = 1/2;  P[d_k = 1|d_{k−1} = 1] = P[b_k = 0] = 1/2,

which means that P[d_k | d_{k−1}] = P[d_k], i.e., they are statistically independent.
But this is the case only when the input bits are equally probable. Otherwise, as shown above, the differential bits are correlated and therefore definitely not statistically independent.
Finally, if P0 ≠ P1, i.e., the input bits are not equally probable, it can be shown that as k becomes large the differential bits {d_k} tend to be equally probable, i.e., P[d_k = 1] → 1/2 and P[d_k = 0] → 1/2. They remain correlated, however, since E{d_{k−1} d_k} = P0 − P1 ≠ 0, and they are statistically dependent since P[d_k | d_{k−1}] ≠ P[d_k].
P6.7 (a) See Figs. 6.24 and 6.25.


(b) Yes, it is. As in P6.1(c), assume a polarity reversal at t = 4Tb. Then, as in P6.1(c), the demodulated d̂_k would be reversed (or complemented) from this point on. The output b̂_k would be identical to that in P6.1(c): an error for the bit in the interval 4Tb to 5Tb, and then correct decisions.
(c) Nothing much would change. The error performance would be the same, and the demodulation/decoding procedure would be the same. What would change is the basis function for the signal space diagram: the signal would now be a normalized biphase signal instead of a normalized NRZ-L signal. The signal space diagram would look the same.


Figure 6.24: input bits b_k = 1 0 1 1 0 0 0 1 1 1 and encoded bits d_k = b_k ⊕ d_{k−1} = 1 1 0 1 1 1 1 0 1 0 (d_{−1} = 0).

Figure 6.25: the corresponding transmitted (biphase) waveform, levels ±V, over 0 to 10Tb.

P6.8 The error performance is independent of the particular antipodal signal set, assuming, of course, the same energy level and an AWGN channel. The basis function for the signal space would be a normalized version of the signal, and the demodulator would project the received signal onto this basis function to generate the sufficient statistic. After this, the sufficient statistic is processed the same way, regardless of the antipodal signal set.
The PSD would depend on the actual signals used and would be (1/Tb)|S(f)|², where S(f) = F{s(t)}.

P6.9 (a) See Fig. 6.26.

Figure 6.26: transmitted waveform (levels ±V) for the input bits b_k = 1 0 1 1 0 0 0 1 1 1.

(b) Yes and no; it depends on how one demodulates. See (c) for how to demodulate so that the modulation is immune to polarity reversals.
(c) Note that a 0T results in one of 2 signals, namely ±s0(t), and a 1T results in one of ±s1(t) (see Fig. 6.27). Therefore the signal space looks as shown in Fig. 6.28.
Demodulate by projecting the received signal onto φ0(t), φ1(t) to generate the sufficient statistics r0, r1, and decide according to the decision boundaries shown in the signal space diagram.


Figure 6.27: the signals s0(t) and s1(t): s0(t) = V over 0 ≤ t ≤ Tb; s1(t) = V over the first half of the bit interval and −V over the second half.

Figure 6.28: signal space with basis functions φ0(t) = s0(t)/√E and φ1(t) = s1(t)/√E. The points (±√E, 0) correspond to 0T and the points (0, ±√E) to 1T; the decision boundaries are r1 = r0 and r1 = −r0, giving two 0D regions and two 1D regions.
en

(d) By symmetry,

P[error] = P[(r0, r1) falls in a 1D region | 0T]
= 2Q( √E cos 45° / √(N0/2) ) [ 1 − Q( √E sin 45° / √(N0/2) ) ] = 2Q(√(E/N0)) [1 − Q(√(E/N0))]
≈ 2Q(√(E/N0)).

Remark: The signal space is identical to that of Miller modulation and hence the error performance is the same. The actual transmitted sequence, and the resultant PSD, are however different from Miller.

P6.10 Let 0T ↔ s0(t) and 1T ↔ s1(t), where s0(t) ⊥ s1(t), each of energy E. Polarity reversals mean that 0T ↔ −s0(t) and 1T ↔ −s1(t). The modification to the demodulator to obtain polarity-reversal immunity is to change the decision regions in the signal space from those of Fig. 6.29 to those of Fig. 6.30. Note that the price you pay is a factor of 2 in the error probability expression. However, it only applies if there are no polarity reversals; if there are, then you gain a lot.

P6.11 (a) See Fig. 6.31. Yes, immune to polarity reversal.


Figure 6.29: original decision regions; basis functions φ0(t) = s0(t)/√E and φ1(t) = s1(t)/√E, signal points (√E, 0) for 0T and (0, √E) for 1T, boundary r1 = r0; P[error] = Q(√(E/N0)).

Figure 6.30: modified decision regions, with the four points (±√E, 0) for 0T and (0, ±√E) for 1T and boundaries r1 = r0, r1 = −r0; P[error] = 2Q(√(E/N0)).

(b) See Fig. 6.32. The sufficient statistic is r = ∫ r(t)φ(t)dt, the integral taken over the bit interval.

(c) f(r|0T) ∼ N(0, N0/2); f(r|1T, −V) ∼ N(−√E, N0/2); f(r|1T, +V) ∼ N(+√E, N0/2). Therefore,

f(r|1T) ∼ (1/2)N(−√E, N0/2) + (1/2)N(+√E, N0/2).


Figure 6.31: input bits b_k = 1 0 1 1 0 0 0 1 1 1 and the transmitted waveform s_T(t).

Figure 6.32: signal points along the basis function φ(t) = 1/√Tb (0 ≤ t ≤ Tb): −√E (1T), 0 (0T), +√E (1T).

Figure 6.33: the conditional densities: the two components of f(r|1T) centered at ±√E and f(r|0T) centered at 0, each Gaussian with variance N0/2.
uy

(d) The LRT is:

f(r|1T)/f(r|0T) = [ (1/2)(1/√(2π N0/2)) e^{−(r−√E)²/(2(N0/2))} + (1/2)(1/√(2π N0/2)) e^{−(r+√E)²/(2(N0/2))} ] / [ (1/√(2π N0/2)) e^{−r²/(2(N0/2))} ] ≷ 1

⇒ (1/2) [ e^{2√E r/N0} + e^{−2√E r/N0} ] ≷ e^{E/N0}

⇒ cosh( 2√E r / N0 ) ≷ e^{E/N0},

choosing 1D when the left-hand side is the larger and 0D otherwise.
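Since cosh is even and increasing in |r|, the cosh test is equivalent to a simple magnitude threshold |r| ≷ r*, with r* = (N0/(2√E)) arccosh(e^{E/N0}). A Python sketch verifying the equivalence (parameter values E = 1, N0 = 0.5 are arbitrary examples):

```python
import math

E, N0 = 1.0, 0.5   # illustrative values

def lrt_decide(r):
    """Decide 1T (1) vs 0T (0) via the cosh form of the LRT."""
    return 1 if math.cosh(2 * math.sqrt(E) * r / N0) >= math.exp(E / N0) else 0

def threshold_decide(r):
    """Equivalent magnitude-threshold form: |r| >= r_star."""
    r_star = (N0 / (2 * math.sqrt(E))) * math.acosh(math.exp(E / N0))
    return 1 if abs(r) >= r_star else 0

for r in (-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0):
    print(r, lrt_decide(r), threshold_decide(r))   # the two rules agree
```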

(e) The decision variable ℓ ≜ 2√E r/N0 is Gaussian: f(ℓ|0) ∼ N(0, 2E/N0); f(ℓ|−V) ∼ N(−2E/N0, 2E/N0); f(ℓ|+V) ∼ N(+2E/N0, 2E/N0).


Figure 6.34: conditional densities f(ℓ|−V), f(ℓ|0), f(ℓ|+V), centered at −2E/N0, 0, +2E/N0; the decision regions are 1D for ℓ < −Th, 0D for −Th ≤ ℓ ≤ Th, and 1D for ℓ > Th.


P[error | 0T] = 2Q( Th / √(2E/N0) )

P[error | 1T] = (1/2)P[error | 1T, +V] + (1/2)P[error | 1T, −V] = P[error | 1T, +V]
= P[−Th ≤ ℓ ≤ Th | 1T, +V] = P[ℓ ≥ −Th | 1T, +V] − P[ℓ ≥ Th | 1T, +V]
= Q( (−Th + 2E/N0) / √(2E/N0) ) − Q( (Th + 2E/N0) / √(2E/N0) ).
&
P6.12 (a) Since we are only interested in what bit was transmitted, not the signal levels ±V, 0, we determine the region(s) of r where either P1 f(r|−V) or P3 f(r|+V) is ≥ P2 f(r|0). In these region(s) we decide that a 1 was transmitted; in the remaining region(s) we decide a 0 was transmitted.
Let σ² ≜ N0/2. Then

f(r|−V) = (1/√(2πσ²)) e^{−(r+√E)²/(2σ²)}
f(r|0) = (1/√(2πσ²)) e^{−r²/(2σ²)}
f(r|+V) = (1/√(2πσ²)) e^{−(r−√E)²/(2σ²)}.
Now
P3 f(r|+V) > P2 f(r|0) ⇒ (1/4)(1/√(2πσ²)) e^{−(r−√E)²/(2σ²)} > (1/2)(1/√(2πσ²)) e^{−r²/(2σ²)}
⇒ (after some algebra) r > (σ²/√E) ln 2 + √E/2.

In this region of r we choose a 1 transmitted, i.e., 1D.


Similarly,

P1 f(r|−V) > P2 f(r|0) ⇒ (1/4)(1/√(2πσ²)) e^{−(r+√E)²/(2σ²)} > (1/2)(1/√(2πσ²)) e^{−r²/(2σ²)} ⇒ r < −[ (σ²/√E) ln 2 + √E/2 ].


In this region of r we decide on a 1 being transmitted, i.e., 1D .


Note: The 2nd result we would expect from the symmetry of the problem. Also the

k
2
factor E ln 2 arises from the fact that P2 6= P1 (or P3 ).

Therefore 2N 0 ln 2 + E r
2
N
0 ln 2 + E choose 0, otherwise choose 1.
2

dy
E 2 E
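The threshold just derived is exactly where the weighted densities cross; a short Python check (parameter values are illustrative) evaluates (1/4)f(r|+V) and (1/2)f(r|0) at r* = (σ²/√E) ln 2 + √E/2 and confirms they are equal:

```python
import math

def gauss(x, mean, var):
    """Gaussian density with given mean and variance."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

E, N0 = 1.0, 0.5          # illustrative values
var = N0 / 2              # sigma^2
r_star = (var / math.sqrt(E)) * math.log(2) + math.sqrt(E) / 2

lhs = 0.25 * gauss(r_star, math.sqrt(E), var)   # P3 f(r|+V)
rhs = 0.50 * gauss(r_star, 0.0, var)            # P2 f(r|0)
print(r_star, lhs, rhs)                          # lhs == rhs at the threshold
```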

(b) Multiply the decision rule of part (a) by 2√E/N0, i.e., scale the decision space:

−(ln 2 + E/N0) ≤ (2√E/N0) r = ℓ ≤ ln 2 + E/N0 → choose 0 transmitted; otherwise choose 1 transmitted.

The new decision variable ℓ is Gaussian with a mean value equal to (2√E/N0) × (mean value of r, which depends on the signal transmitted) and a variance equal to (4E/N0²) × (variance of r, which is N0/2), i.e., 2E/N0. Therefore,

f(ℓ|−V) ∼ N(−2E/N0, 2E/N0); f(ℓ|0) ∼ N(0, 2E/N0); f(ℓ|+V) ∼ N(+2E/N0, 2E/N0).
To aid in deriving the error probabilities, let us sketch the decision regions as in Fig. 6.35.
&
Figure 6.35: conditional densities f(ℓ|−V), f(ℓ|0), f(ℓ|+V), centered at −2E/N0, 0, +2E/N0, each with variance 2E/N0; decision regions 1D (ℓ < −Th), 0D (−Th ≤ ℓ ≤ Th), 1D (ℓ > Th).


P[error | 0T] = P[(ℓ > T1) or (ℓ < −T1) | 0T] = 2P[ℓ > T1 | 0T] = 2Q( (ln 2 + 2Eb/N0) / √(4Eb/N0) )

P[error | 1T] = P[{−T1 < ℓ < T1 | −V transmitted} or {−T1 < ℓ < T1 | +V transmitted}]
= 2P[−T1 < ℓ < T1 | +V transmitted]
= 2[ Q( (2E/N0 − T1)/√var ) − Q( (2E/N0 + T1)/√var ) ]
= 2[ Q( (2Eb/N0 − ln 2)/√(4Eb/N0) ) − Q( (6Eb/N0 + ln 2)/√(4Eb/N0) ) ],

where T1 = ln 2 + E/N0 = ln 2 + 2Eb/N0 and E = 2Eb (a binary 1, of energy E, is transmitted half the time).
N0 N0


P[bit error] = P[bit 0 in error] P[0T] + P[bit 1 in error] P[1T] = (1/2) P[error | 0T] + (1/2) P[error | 1T].

P6.13 (a) The sketches look as in Fig. 6.36.

Figure 6.36: conditional densities f(r|−V), f(r|0), f(r|+V), each with variance N0/2, with thresholds at ±√E/2 and decision regions 1D (r < −√E/2), 0D (−√E/2 ≤ r ≤ √E/2), 1D (r > √E/2).

The decision rule is: if |r| ≤ √E/2, choose 0; otherwise, choose 1.
&
(b)

P[error | 0T] = 2Q( (√E/2) / √(N0/2) ) = 2Q(√(Eb/N0))

P[error | 1T] = 2[ Q(√(Eb/N0)) − Q(3√(Eb/N0)) ]

P[bit error] = (1/2) P[error | 0T] + (1/2) P[error | 1T] = 2Q(√(Eb/N0)) − Q(3√(Eb/N0)).

P6.14 (a) Yes, there should be. The first one, that of P6.11.
(b) Matlab works to be added.

P6.15 (a) What we need to know from the past is whether the previous level used to represent a 1T was +V or −V. Choose these as the states; the state diagram is shown in Fig. 6.37.
(b) Fig. 6.38 shows the trellis diagram.

P6.16 Change the mapping rule as follows (see Fig. 6.39):

A 0 is always mapped to a level of 0 volts.

A 1 is mapped to a pulse of level +V if it was previously mapped to a pulse of level −V, and vice versa.

Figure 6.37: state diagram; states "previous level was −V" and "previous level was +V". A 0T leaves the state unchanged (transition 0/0T); a 1T toggles the state (transitions +V/1T and −V/1T).

Figure 6.38: trellis diagram with states +V and −V over the intervals Tb, 2Tb, 3Tb, with branches for 0T and 1T.
2Tb 3Tb

V
&
t
0 Tb
2
Tb

0 t 1
0 Tb
Tb
en

2
Tb
2
t
0
Tb
V
uy

Figure 6.39

2
Tb
Ng

1
Tb

t t
0 Tb
0 Tb
2
Tb 2
Tb

(a) (b)

Figure 6.40

In essence nothing much changes. The main difference is the basis function used to represent the transmitted signals: here it would be the function in Fig. 6.40(a) instead of the one in Fig. 6.40(b).
If we adjust V here so that the energy level for AMI-RZ is the same as for AMI-NRZ, then all error expressions and error performances for the 3 demodulation methods (including that of P6.13) are identical. If the voltage levels are kept the same, then the energy E (and hence Eb) is halved for AMI-RZ.
Another difference would be in the PSD.

P6.17 (a) See Fig. 6.41.

Figure 6.41: on-off intensity waveform (levels +I and 0) for the input bits b_k = 1 0 1 1 0 0 0 1 1 1.

(b) See Fig. 6.42.

Figure 6.42: signal points on the energy axis: 1T at 0 and at E; 0T at E/2.
en

(c) Let the sufficient statistics be

I1 = the intensity, or some measure of it, in the 1st half of the bit interval,
I2 = the intensity, or some measure of it, in the 2nd half.

Then a possible decision rule would be as illustrated in Fig. 6.43.

Figure 6.43: decision rule comparing I1 and I2: I1 − I2 ≷ 0 (or a threshold of +I/4).

Take the polarity inversion to mean that the laser is off when it should be on and vice versa. Then the above decision rule is not insensitive to polarity inversion; indeed, the decision would always be 1D. However, it can be modified to be polarity-insensitive with the following decision rule:

|I1 − I2| ≥ I/4 → choose 0D; otherwise choose 1D.


P6.18 (a) So k, the number of counted photons in the time interval Tb, is our sufficient statistic.

P[k photons in a Tb interval | 1T, 1T represented by laser off] = (λn Tb)^k e^{−λn Tb} / k!
P[k photons in a Tb interval | 1T, 1T represented by laser on] = [(λs + λn)Tb]^k e^{−(λs+λn)Tb} / k!
P[k photons in a Tb interval | 1T] = (1/2) { [(λs+λn)Tb]^k e^{−(λs+λn)Tb} / k! + (λn Tb)^k e^{−λn Tb} / k! }
P[k photons in a Tb interval | 0T] = (1/2) { [(λs+λn)Tb/2]^k e^{−(λs+λn)Tb/2} / k! + (λn Tb/2)^k e^{−λn Tb/2} / k! }

The likelihood ratio test (LRT) is:

(1/2) [ (λs+λn)^k Tb^k e^{−(λs+λn)Tb} + λn^k Tb^k e^{−λn Tb} ] / [ (λs+λn)^k (Tb/2)^k e^{−(λs+λn)Tb/2} + λn^k (Tb/2)^k e^{−λn Tb/2} ] ≷ 1

⇒ [ (1 + λs/λn)^k e^{−λs Tb} + 1 ] / [ (1 + λs/λn)^k e^{−λs Tb/2} + 1 ] ≷ 2^{−(k−1)} e^{λn Tb/2},

choosing 1D if the left-hand side is the larger and 0D otherwise.
&
(b) Unfortunately, an analytical expression for the error performance of the receiver in (a)
appears to be intractable. Nonetheless one can resort to simulation.
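A simulation receiver simply compares the two likelihoods directly. The following Python sketch implements the conditional probabilities above (the rate values λs = 10, λn = 1 and Tb = 1 are illustrative assumptions):

```python
import math

def lik1(k, ls, ln, Tb=1.0):
    """P[k | 1T]: laser on or off for the whole bit interval, each w.p. 1/2."""
    a, b = (ls + ln) * Tb, ln * Tb
    return 0.5 * (math.exp(-a) * a ** k + math.exp(-b) * b ** k) / math.factorial(k)

def lik0(k, ls, ln, Tb=1.0):
    """P[k | 0T], with the half-interval rates used in the solution."""
    a, b = (ls + ln) * Tb / 2, ln * Tb / 2
    return 0.5 * (math.exp(-a) * a ** k + math.exp(-b) * b ** k) / math.factorial(k)

def decide(k, ls=10.0, ln=1.0):
    """1 if the count k is more likely under 1T than under 0T."""
    return 1 if lik1(k, ls, ln) >= lik0(k, ls, ln) else 0

print(decide(0), decide(12))   # very small counts favor 0T here; large counts favor 1T
```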

P6.19 (a) See Fig. 6.44.

Figure 6.44: optical intensity waveform (levels +I and 0) for the input bits b_k = 1 0 1 1 0 0 0 1 1 1.

(b) Yes, but again it depends on how the decision rule is implemented. Basically, if the demodulator decides on whether the laser is on or off for the whole bit interval OR on for only half the bit interval, then the modulation is immune to polarity reversals.
(c) Let the axis be the energy of the transmitted pulse. The signals then plot as in Fig. 6.45.
(d) Let the states be
(i) previous 1 represented by +I, i.e., laser on for the whole bit interval,
(ii) previous 1 represented by 0, i.e., laser off for the whole bit interval.
The state diagram is shown in Fig. 6.46.


Figure 6.45: signal points on the energy axis: 1T at 0 and at E; 0T at E/2.

Figure 6.46: state diagram; states "previous 1T represented by laser on" and "previous 1T represented by laser off"; transitions labeled "laser on for Tb sec / 1T", "laser off for Tb sec / 1T", "laser on for 1st half / 0T", "laser on for 2nd half / 0T", and "0 / 0T".

P6.20 (a) P[c_k = 1] = P[c_k = −1] = 1/4; P[c_k = 0] = 1/2.
(b) E{c_k²} = 0·P[c_k = 0] + 1·P[c_k = 1] + 1·P[c_k = −1] = 1/2.
Consider now E{c_k c_{k+1}}. Set up a table (see Table 6.2) of the possibilities for c_k, c_{k+1} and their product. Then E{c_k c_{k+1}} = −1·(1/4) + 0·(1/4) + 0·(1/4) + 0·(1/4) = −1/4.

Table 6.2
  b_k b_{k+1}   P[b_k, b_{k+1}]   c_k   c_{k+1}   c_k c_{k+1}   P[c_k c_{k+1}] = P[b_k b_{k+1}]
  1 1           1/4               ±1    ∓1        −1            1/4
  0 1           1/4               0     ±1        0             1/4
  1 0           1/4               ±1    0         0             1/4
  0 0           1/4               0     0         0             1/4

Now consider E{c_k c_{k+n}}, n > 1, and Table 6.3. If the bits b_k are statistically independent and equally probable, then by symmetry the products +1 and −1 occur with equal probability, so E{c_k c_{k+n}} = 0; therefore they are uncorrelated.
Now Equation (5.127) in the textbook states that:


S(f) = (|P(f)|²/Tb) Σ_{m=−∞}^{∞} R_c(m) e^{−j2πmf Tb},


Table 6.3
  c_k   c_{k+n}   c_k c_{k+n}
  −1    −1        +1
  −1    +1        −1
  −1    0         0
  +1    −1        −1
  +1    +1        +1
  +1    0         0
  0     −1        0
  0     +1        0
  0     0         0

where R_c(m) is the autocorrelation we have just determined and

P(f) = F{NRZ pulse} = ∫_0^{Tb} V e^{−j2πft} dt = (V/(πf)) e^{−jπf Tb} sin(πf Tb)

⇒ |P(f)|² = V² Tb² sin²(πf Tb) / (πf Tb)².

Finally,

S(f) = V² Tb [ sin²(πf Tb) / (πf Tb)² ] [ R_c(−1) e^{j2πf Tb} + R_c(0) + R_c(1) e^{−j2πf Tb} ],

where the bracketed correlation term equals −(1/4)[2 cos(2πf Tb)] + 1/2 = sin²(πf Tb). Hence S(f) = V² Tb sin⁴(πf Tb) / (πf Tb)².
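The closed-form PSD just derived is easy to evaluate; a Python sketch (function name is illustrative) confirms the DC null at f = 0, the nulls at multiples of 1/Tb, and the value (2/π)² V²Tb at f = 1/(2Tb):

```python
import math

def psd_ami(f, V=1.0, Tb=1.0):
    """S(f) = V^2 Tb [sin(pi f Tb)/(pi f Tb)]^2 sin^2(pi f Tb), per P6.20."""
    x = math.pi * f * Tb
    sinc2 = 1.0 if f == 0 else (math.sin(x) / x) ** 2
    return V * V * Tb * sinc2 * math.sin(x) ** 2

print(psd_ami(0.0))    # 0: AMI has no DC content
print(psd_ami(0.5))    # (2/pi)^2 at half the bit rate
```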
uy

P6.21 (a) The decision rule of (6.14) is: compute d_i = √( ∫_0^{nTb} [r(t) − S_i(t)]² dt ) for each possible transmitted sequence and choose the smallest. Since d_i ≥ 0, this decision rule can be restated as: compute d_i² for each possible transmitted sequence and choose the smallest, i.e., compute

d_i² = ∫_0^{nTb} r²(t) dt − 2 ∫_0^{nTb} r(t) S_i(t) dt + ∫_0^{nTb} S_i²(t) dt,  i = 1, ..., M = 2^n,

and choose the smallest. Now the term ∫_0^{nTb} r²(t) dt is the same for each i and can be discarded. Similarly, ∫_0^{nTb} S_i²(t) dt is the same regardless of i, i.e., for Miller modulation each transmitted signal (sequence) has the same energy; it may also be discarded. Therefore the decision rule becomes: compute

−2 ∫_0^{nTb} r(t) S_i(t) dt,  i = 1, ..., M = 2^n,

and choose the smallest. The factor 2 does not affect which i yields the smallest relative value, and multiplying through by −1 changes "smallest" into "largest". Therefore the decision rule is: compute

∫_0^{nTb} r(t) S_i(t) dt,  i = 1, ..., M = 2^n,

and choose the largest.
Writing this as Σ_{j=1}^{n} ∫_{(j−1)Tb}^{jTb} r(t) S_{ij}(t) dt = Σ_{j=1}^{n} [r_1^{(j)}, r_2^{(j)}] [s_{i1}^{(j)}; s_{i2}^{(j)}], we get the desired result.

(b) The steps are essentially the same as before, except that now one computes the cross-correlation between the signal projections and the signal components along a branch, adds it to the surviving value of the state from which the branch emanates, and compares with all the incoming path values at the state on which the branch terminates. Choose the largest and discard all others. Do this for each state.
&
Table 6.4: Cross-correlation table: r_1 s_1 + r_2 s_2
  signal (s_1, s_2)   0–Tb    Tb–2Tb   2Tb–3Tb   3Tb–4Tb
  s_1(t) (1, 0)       −0.2    0.2      −0.61     −1.1
  s_2(t) (0, 1)       0.4     0.8      0.5       0.1
  s_3(t) (−1, 0)      0.2     −0.2     0.61      1.1
  s_4(t) (0, −1)      −0.4    −0.8     −0.5      −0.1
uy

(j) (j) (j) (j)


(c) Trellis showing the branch correlations, i.e., r1 si1 + r2 si2 , is in Fig. 6.47.
The pruned trellis showing the survivors & partial path correlations is in Fig. 6.48.
To compare with the minimum distance approach remember that the minimum distance
corresponds to maximum correlation, i.e., the relationship is an inverse one.
P6.22 (a) By error probability, what is meant is bit error probability; we are not all that interested in symbol error probability. From the symmetry, we can say that

P[bit error] = P[bit error | 0T] = P[bit error | s_1(t)] (or P[bit error | s_2(t)]).

A bit error occurs, when s_1(t) is transmitted, whenever the sufficient statistics (r_1, r_2) are closer to either s_3(t) or s_4(t) than to s_1(t) (or s_2(t)). The geometrical picture looks as in Fig. 6.49.
Clearly,

P[bit error] = Q( (√E/√2) / √(N0/2) ) = Q(√(E/N0)).


Figure 6.47: trellis with the branch correlations r_1^{(j)} s_{i1}^{(j)} + r_2^{(j)} s_{i2}^{(j)} marked on each branch (branches distinguished for input bit 0 and input bit 1); the received projections are (r_1^{(j)}, r_2^{(j)}) = (−0.2, 0.4), (0.2, 0.8), (−0.61, 0.5), (−1.1, 0.1) over the intervals 0 to 4Tb.

Figure 6.48: the pruned trellis showing the survivors and the partial path correlations at each step, from 0 to 4Tb.

(b) Matlab works to be added.

P6.23 Matlab works to be added.


Figure 6.49: signal space for P6.22: s_1(t) and s_3(t) at (±√E, 0), s_2(t) and s_4(t) at (0, ±√E); the decision boundaries are r_2 = ±r_1, and an error occurs if (r_1, r_2) falls beyond a boundary, which lies at distance √E/√2 from the transmitted signal.
Chapter 7

Basic Digital Passband Modulation

P7.1 Antenna diameter: d = λ/4, where the wavelength is λ = c/f with c = 3 × 10⁸ m/s, so d = c/(4f).
(a) Direct coupling:

f = 4 kHz ⇒ d = 3×10⁸ / (4 × 4×10³) = 18.75 × 10³ m = 18.75 km.

(b) Via modulation with carrier frequency fc = 1.2 GHz:

d = c/(4fc) = 3×10⁸ / (4 × 1.2×10⁹) = 62.5 × 10⁻³ m = 6.25 cm.
The above results clearly show that carrier-wave or passband modulation is an essential step
for all systems involving radio transmission.
en

P7.2 Express the NRZ signal in terms of the NRZ-L signal as follows:

s_NRZ(t) = (1/2)[s_NRZ-L(t) + 1].   (7.1)

Note that the factor 1/2 scales the autocorrelation by (1/2)² = 1/4, the constant 1 shifts (vertically) the autocorrelation, and s_NRZ-L(t) has autocorrelation R_sNRZ-L(τ). Putting all this together, we have

R_sNRZ(τ) = (1/4) R_sNRZ-L(τ) + 1/4 = (1/4)[1 + 1 − |τ|/Tb] for |τ| ≤ Tb, and 1/4 for |τ| > Tb.
Ng

Try to generalize this: let y(t) = K[x(t) + A], where K and A are constants and x(t) has mean value m_x and autocorrelation R_x(τ). What then is R_y(τ)?
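The scale-and-shift rule of (7.1) can be checked empirically on a discrete (one sample per bit) sequence; a Python sketch (seeded for reproducibility; for a zero-mean ±1 sequence x, the NRZ sequence y = (x + 1)/2 should satisfy R_y(m) ≈ (1/4)[R_x(m) + 1]):

```python
import random

random.seed(1)
N = 200_000
x = [random.choice((-1, 1)) for _ in range(N)]   # NRZ-L samples, one per bit
y = [(v + 1) / 2 for v in x]                     # NRZ samples: y = (x + 1)/2

def acorr(seq, m):
    """Sample autocorrelation at integer lag m."""
    n = len(seq) - m
    return sum(seq[i] * seq[i + m] for i in range(n)) / n

for m in (0, 1, 5):
    print(m, acorr(y, m), 0.25 * (acorr(x, m) + 1.0))   # the two agree closely
```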
P7.3 Consider (7.41), P[m2|m1]. This probability is the volume under f(r̂2, r̂1|m1) in the 1st quadrant, as shown below. Note that, given m1, r̂2 and r̂1 are statistically independent Gaussian random variables, each of variance N0/2, with means −√(Es/2) and +√(Es/2), respectively. The volume is therefore

∫_{r̂2=0}^{∞} ∫_{r̂1=0}^{∞} f(r̂2|m1) f(r̂1|m1) dr̂2 dr̂1 = [ ∫_{r̂2=0}^{∞} f(r̂2|m1) dr̂2 ] [ ∫_{r̂1=0}^{∞} f(r̂1|m1) dr̂1 ]


P[m2|m1] = ∫_{r̂2=0}^{∞} (1/√(2π N0/2)) e^{−(r̂2 + √(Es/2))² / (2·N0/2)} dr̂2 × ∫_{r̂1=0}^{∞} (1/√(2π N0/2)) e^{−(r̂1 − √(Es/2))² / (2·N0/2)} dr̂1

= [ 1 − Q( √(Es/2) / √(N0/2) ) ] Q( √(Es/2) / √(N0/2) ) = [ 1 − Q(√(Es/N0)) ] Q(√(Es/N0)).

Graphically the 2 integrals above compute the areas illustrated in Fig. 7.1

Figure 7.1: Areas to compute P[m2|m1]: the integral over r̂1 (mean +√(Es/2)) gives 1 − Q(√(Es/N0)), and the integral over r̂2 (mean −√(Es/2)) gives Q(√(Es/N0)).
Similarly, P[m3|m1] is given by the volume under f(r̂2, r̂1|m1) in the 2nd quadrant, i.e., by the product of the areas in Fig. 7.2, each equal to Q(√(Es/2)/√(N0/2)) = Q(√(Es/N0)). Hence

P[m3|m1] = Q²(√(Es/N0)).

Figure 7.2: Areas to compute P[m3|m1].

Finally P [m4 |m1 ] = P [m2 |m1 ] as can be seen graphically.

P7.4 (a) As in the case of QPSK with Gray mapping, to determine the bit error probability for
the above mapping it is necessary to distinguish between the different message errors
(or signal errors). Again because of symmetry it is sufficient to consider only a specific


signal, say s_1(t). Then the different error probabilities are

P[s_2(t)|s_1(t)] = Q(√(2Eb/N0)) [1 − Q(√(2Eb/N0))]   (7.2)
P[s_3(t)|s_1(t)] = Q²(√(2Eb/N0))   (7.3)
P[s_4(t)|s_1(t)] = Q(√(2Eb/N0)) [1 − Q(√(2Eb/N0))]   (7.4)

Figure 7.3: QPSK modulation: s_1(t) at (√E, 0), s_2(t) at (0, √E), s_3(t) at (−√E, 0), s_4(t) at (0, −√E); bit pattern → signal transmitted: 00 → s_1(t), 11 → s_2(t), 10 → s_3(t), 01 → s_4(t).

Note that the above result does not depend on the specific mapping. Then the bit error probability is given by

P[bit error] = 1.0·P[s_2(t)|s_1(t)] + 0.5·P[s_4(t)|s_1(t)] + 0.5·P[s_3(t)|s_1(t)]
= 1.5Q(√(2Eb/N0)) − Q²(√(2Eb/N0)) ≈ 1.5Q(√(2Eb/N0)).   (7.5)
Ng

q
2Eb
(b) Recall that the bit error probability of QPSK employing Gray mapping is Q N0 .
Compared to (7.5) we see that with anti-Gray mapping the bit error probability is
increased by a factor of 3/2. The plots of both error probabilities are shown in Figure
7.4.
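The factor-of-3/2 comparison can be verified directly. A small sketch, with E_b/N_0 = 6 dB chosen as an illustrative operating point (an assumption, not from the problem):

```python
import math

def Q(x):
    # Gaussian tail probability via the complementary error function
    return 0.5 * math.erfc(x / math.sqrt(2))

EbN0 = 10 ** (6 / 10)                 # 6 dB, illustrative
arg = math.sqrt(2 * EbN0)

P_gray = Q(arg)                       # QPSK with Gray mapping
P_anti = 1.5 * Q(arg) - Q(arg) ** 2   # Eq. (7.5), anti-Gray mapping

print(P_gray, P_anti, P_anti / P_gray)
```

At practical SNRs the Q^2 term is negligible, so the ratio is essentially 1.5, approaching it from below.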

Figure 7.4: Error performance of QPSK with Gray and Anti-Gray mapping.

P7.5 (a) Since the symbols are equiprobable and the channel is AWGN, the demodulator is a minimum-distance one. The decision boundaries and decision regions are shown in Fig. 7.5.

Figure 7.5: Asymmetric QPSK. The signals s_1(t), s_2(t), s_3(t), s_4(t) carry bit patterns 11, 01, 00, 10 and sit at angle \theta = \pi/6 from the \phi_1(t) axis, at radius \sqrt{E_s}; the decision regions are the four quadrants.

(b) In Fig. 7.5, d_1 = \sqrt{E_s}\cos\theta and d_2 = \sqrt{E_s}\sin\theta, where \theta = \pi/6.

P[symbol error] = 1 - P[symbol correct]
               = 1 - P[symbol correct|s_1(t)]   (due to symmetry)
               = 1 - P[(r_1, r_2) \in \Re_1|s_1(t)] = 1 - P[r_1 \ge 0|s_1(t)]\,P[r_2 \ge 0|s_1(t)]
               = 1 - \left[1 - Q\left(\frac{d_1}{\sqrt{N_0/2}}\right)\right]\left[1 - Q\left(\frac{d_2}{\sqrt{N_0/2}}\right)\right]
               = Q\left(\frac{d_1}{\sqrt{N_0/2}}\right) + Q\left(\frac{d_2}{\sqrt{N_0/2}}\right) - Q\left(\frac{d_1}{\sqrt{N_0/2}}\right)Q\left(\frac{d_2}{\sqrt{N_0/2}}\right)
               = Q\left(\sqrt{\frac{2E_s}{N_0}}\cos\theta\right) + Q\left(\sqrt{\frac{2E_s}{N_0}}\sin\theta\right) - Q\left(\sqrt{\frac{2E_s}{N_0}}\cos\theta\right)Q\left(\sqrt{\frac{2E_s}{N_0}}\sin\theta\right).

(c)

P[b_1 in error] = P[b_1 in error|s_1(t)] = P[(r_1, r_2) \in \Re_2 \text{ or } \Re_3|s_1(t)]
               = P[r_1 < 0|s_1(t)] = Q\left(\frac{d_1}{\sqrt{N_0/2}}\right) = Q\left(\sqrt{\frac{2E_s}{N_0}}\cos\theta\right).   (7.6)

Similarly,

P[b_2 in error] = Q\left(\frac{d_2}{\sqrt{N_0/2}}\right) = Q\left(\sqrt{\frac{2E_s}{N_0}}\sin\theta\right).   (7.7)

With \theta = \pi/6, \sin\theta = 1/2 < \cos\theta = \sqrt{3}/2. Since the Q-function is a monotonically decreasing function of its argument, P[b_1 in error] < P[b_2 in error] and thus bit b_1 is more protected than bit b_2.

Note that in general, if 0 < \theta < \pi/4, then bit b_1 is more protected. If \pi/4 < \theta < \pi/2, then b_2 is more protected. And if \theta = \pi/4?

(d)

P[b_1 in error] = Q\left(\sqrt{1.5\,\frac{E_s}{N_0}}\right) \le 10^{-3} \Rightarrow \sqrt{1.5\,\frac{E_s}{N_0}} \ge Q^{-1}(10^{-3}) = 3.092
\Rightarrow E_s \ge \frac{(3.092)^2}{1.5}\,N_0 = \frac{(3.092)^2}{1.5} \times 10^{-6} = 6.37 \times 10^{-6} \text{ (joules)}.   (7.8)

Choose equality to keep E_s as small as possible.
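The arithmetic in (7.8) can be reproduced with a simple bisection for the inverse Q-function. A sketch, taking N_0 = 10^{-6} W/Hz as in the numbers above:

```python
import math

def Q(x):
    # Gaussian tail probability
    return 0.5 * math.erfc(x / math.sqrt(2))

def Qinv(p, lo=0.0, hi=10.0):
    # Bisection for the inverse of the (monotonically decreasing) Q-function
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if Q(mid) > p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

N0 = 1e-6                      # watts/Hz, as used in the solution above
x = Qinv(1e-3)                 # Q^{-1}(1e-3), about 3.09
Es = x ** 2 / 1.5 * N0         # from Q(sqrt(1.5 * Es/N0)) = 1e-3
print(x, Es)
```

The result agrees with the 6.37 microjoules quoted in (7.8).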

P7.6

Sending:   s(t) = \cos(2\pi f_c t);  f_c = 1.2 GHz
Receiving: r(t) = \cos[2\pi f_c(t + T_d)] = \cos(2\pi f_c t + 2\pi f_c T_d)

where T_d is the propagation delay corresponding to the distance d. Assuming ideal electromagnetic propagation at the speed of light, T_d = d/c, where c = 3 \times 10^8 m/s.

(a) If the mobile user moves in-line away from the base station (to point B), or in-line toward the base station (to point C), then the difference in propagation delay is

\Delta T_d = \frac{\Delta d}{c}.

The difference in phase shift is

\Delta\theta = 2\pi f_c \Delta T_d = \frac{2\pi f_c \Delta d}{c} = 2\pi
\Rightarrow \Delta d_0 = \frac{2\pi c}{2\pi f_c} = \frac{c}{f_c} = \frac{3 \times 10^8}{1.2 \times 10^9} = 0.25 \text{ m} = 25 \text{ cm}.

Thus a movement of only 25 cm causes a full phase rotation of 2\pi radians.

(b) Of course we do not care about a 2\pi phase rotation because it does not change the transmitted signal. Typically, a phase rotation of \pi/2 or \pi is a more serious problem (as will be seen in Problem 7.7). The minimum distances of user movement that cause \pi/2 and \pi phase rotations are as follows:

\Delta\theta = \pi/2 \Rightarrow \Delta d = \Delta d_0/4 = 25/4 = 6.25 \text{ cm}
\Delta\theta = \pi   \Rightarrow \Delta d = \Delta d_0/2 = 25/2 = 12.5 \text{ cm}

If BPSK is used, it would appear that moving your cell phone from one ear to the other would (depending on the size of your head) have a severe effect on the receiver performance. One should either track the incoming phase, say by a phase-locked loop, which is the subject of Chapter 12, or consider modulation/demodulation techniques that are robust in the face of phase uncertainty; this is the subject of Chapter 10.
&
of Chapter 12 or consider modulation/demodulation techniques that are present in the face
of phase uncertainty this is the subject of Chapter 10.
P7.7
sR
1 (t) + w(t) = 0 + r
w(t) : 0T
r(t) = 2
en

sR
2 (t) + w(t) = E cos(2fc t + ) + w(t) : 1T
Tb
Without the noise, the receiver sees one of the two signals
sR
1 (t) = 0 : 0T
q2
uy

R
s2 (t) = E Tb cos(2fc t + ) : 1T

Write sR
2 (t) as follows:
r 2 r 2
sR
2 (t) = E cos cos(2fc t) + E sin sin(2fc t) .
Ng

Tb Tb
| {z } | {z }
1 (t) 2 (t)

Thus, for an arbitrary two basis functions are required to represent sR R


1 (t) and s2 (t) as shown
in Fig. 7.6(a).
(a) If the receiver assumes that = 0, then the optimum decision boundary is a line per-
pendicular to 1 (t) and at E/2 distance from sR 1 (t) (see Figure 7.6-(a)). With this
receiver, the decision is based on r1 :
Z Tb
0 + w1 : 0T
r1 = r(t)1 (t)dt =
0 E cos + w 1 : 1T

Solutions Page 76
Nguyen & Shwedyk A First Course in Digital Communications

2 (t ) 2 (t )

locus of s2R (t )

k
s2R (t ) s2R (t )

dy
E

s1 (t ) s1 (t )
1 (t ) 1 (t )
0 0
D
E

we
2
decision boundary
decision boundary taking phase uncertaint y
assuming = 0 into account

(a) (b)

Sh
Figure 7.6: Signal space diagram for ASK with phase uncertainty.

where, as usual, w1 is a zero-mean Gaussian random variable with variance N0 /2. The
decision rule can be expressed as

1T
&

r1 R E/2 (7.9)
0T

The error performance of the receiver that implements the above decision rule can be
calculated as follows:
en


P [error] = P [0T ]P [r1 E/2|s1 (t)] + P [1T ]P [r1 E/2|s2 (t)]
! !
1 E/2 1 E cos E/2
= Q p + Q p
2 N0 /2 2 N0 /2
r ! r !
uy

E 2E
= 0.5Q + 0.5Q [cos 0.5] (7.10)
2N0 N0

Equation (7.10) shows that P [error] increases as increases from 0 to . This implies that
the phase uncertainty degrades the error performance of the optimum receiver designed
Ng

for ASK assuming no phase shift.



3
For = 30o cos = 2 . Then
r ! r !
E E
P [error] = 0.5Q + 0.5Q ( 3 1)
2N0 2N0

For = 60o cos = 12 . Then


r !
E 1
P [error] = 0.5Q +
2N0 4

Solutions Page 77
Nguyen & Shwedyk A First Course in Digital Communications

0
10

k
1
10

dy
Noncoherent

Pr[error]
2
10

we
3
10
=0
=/6
=/3
4
=/2
10

Sh
0 5 10 15
Eb/N0 (dB)

Figure 7.7: Error performance of ASK under phase uncertainty.

For = 90o cos = 0. One has


r ! r !
&
E E
P [error] = 0.5Q + 0.5Q
2N0 2N0
r ! " r !#
E E
= 0.5Q + 0.5 1 Q = 0.5
2N0 2N0
en

The result for = 90o is expected. Since in this case the 2 signals have the same
projection along 1 (t), i.e., they are indistinguishable. We would expect that the
best we can do is to guess, since r1 gives no information about which signal was
transmitted and the error probability should be 1/2.
uy

Fig. 7.7 plots the error performance of the conventional optimum receiver for dif-
ferent values of as calculated above. It is clear that the knowledge of the phase
is crucial for the implementation of the conventional optimum receiver. All the
receivers discussed in the text so far require this knowledge and they are called the
COHERENT receivers. For coherent receivers, a circuit known as phase-locked
loop (PLL) needs to be built to estimate the phase of the incoming signals (see
Ng

Chapter 12).
(b) Without PLL, the receiver needs to partition the signal space into two regions that are
robust to the phase uncertainty. A rather obvious choice for the decision boundary is a
circle centered at sR
1 (t) and with some diameter D as shown in Fig. 7.6-(b). In essence,
this receiver makes the decision based on the output energy, not the voltage values.

An interesting question is what is the optimum value of D to minimize the probability


of error for the above receiver?. Finding the answer to this question involves Rician and
Rayleigh probability density functions (see Chapter 10). It can be shown that the optimum

Solutions Page 78
Nguyen & Shwedyk A First Course in Digital Communications

value of D is:
E 4N0
D= 1+ . (7.11)

k
2 E
The above shows that the optimum threshold depends on the actual noise level (channel

dy
condition). Typically E/N0 1 or N0 /E 1 and D is well approximated by
r
E
D . (7.12)
2

we
t = Tb

()dt
Tb r1

0
Compute
r (t ) Decision
r12 + r2 2
1 (t ) t = Tb

2 (t )
Tb

0
Sh
()dt
r2
and compare to D 2

Figure 7.8: Implementation of the noncoherent receiver for ASK.


&
The receiver in Fig. 7.8 does not require the phase information and it is called the nonco-
herent receiver. With D given by (7.12), it can be shown that the error probability of the
noncoherent receiver is approximately:
en

1 E
P [error] e 4N0
2
The above performance is also plotted in Fig. 7.7 for comparison. Observe that there is a
loss of about 1 to 2 dB in SNR compared to coherent receiver. However, the noncoherent
uy

receiver is much simpler to implement since it does not require a PLL.


A final remark is that the notation E in this question is the energy of signal s2 (t); not the
average energy per bit. The average energy per bit for this case is Eb = E/2.
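The behavior of (7.10) is easy to tabulate. A sketch, with E/N_0 = 10 chosen purely for illustration:

```python
import math

def Q(x):
    # Gaussian tail probability
    return 0.5 * math.erfc(x / math.sqrt(2))

def ask_error(E_over_N0, theta):
    # Eq. (7.10): receiver designed for theta = 0, actual phase offset theta
    return 0.5 * Q(math.sqrt(E_over_N0 / 2)) \
         + 0.5 * Q(math.sqrt(2 * E_over_N0) * (math.cos(theta) - 0.5))

E_over_N0 = 10.0   # illustrative value, an assumption
for theta in (0, math.pi / 6, math.pi / 3, math.pi / 2):
    print(theta, ask_error(E_over_N0, theta))
```

As the text argues, the error probability grows monotonically with \theta and reaches exactly 1/2 at \theta = \pi/2, where the two signals become indistinguishable along \phi_1(t).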

P7.8 (a) See Fig. 7.9.

Figure 7.9: The transmitted signals s_1^T(t), s_2^T(t) lie at \pm\sqrt{E_b} along \phi_1(t) = \sqrt{2/T_b}\cos(2\pi f_c t); the received signals s_1^R(t), s_2^R(t) are attenuated by k and rotated by \theta, and their loci (as \theta varies) are circles of radius k\sqrt{E_b}.

(b) The receiver in this case projects onto the \phi_1(t) axis to get r_1 and compares r_1 to a threshold of 0, i.e., r_1 \gtrless_{0_D}^{1_D} 0, where it is assumed that 1_T \leftrightarrow s_1^T(t) and 0_T \leftrightarrow s_2^T(t).

The signal projection is \pm k\sqrt{E_b}\cos\theta and the noise projection has a variance of N_0/2. Therefore

P[error] = Q\left(\frac{k\sqrt{E_b}\cos\theta}{\sqrt{N_0/2}}\right) = Q\left(\sqrt{\frac{2E_b}{N_0}}\,k\cos\theta\right).

(c) See Fig. 7.10. When the rotation \theta is known, the optimum decision boundary is the line through the origin perpendicular to the line r_2 = (\tan\theta)r_1 on which the received signals lie; region Z_1 (choose 1_T) contains s_1^R(t) and region Z_2 contains s_2^R(t). Then

P[error] = Q\left(\sqrt{\frac{2E_b}{N_0}}\,k\right)

Figure 7.10: Decision regions Z_1, Z_2 when the phase rotation \theta is known.

Figure 7.11: Receiver implementation: r(t) is correlated over [0, T_b] with \phi_1(t) = \sqrt{2/T_b}\cos(2\pi f_c t) and \phi_2(t) = \sqrt{2/T_b}\sin(2\pi f_c t) to obtain r_1, r_2; if (r_1, r_2) \in Z_1 choose 1_T, otherwise choose 0_T.

Remark: Another receiver, and a simpler one, is one that needs only ONE projection. What is it?

P7.9 Signal space diagrams at the transmitter and receiver are shown below.

Figure 7.12: Transmitted QPSK signals s_0^T(t), \ldots, s_3^T(t), carrying messages m_0 = 00, m_1 = 01, m_2 = 11, m_3 = 10, at radius \sqrt{E_s}; the received signals s_0^R(t), \ldots, s_3^R(t) are rotated by \theta and scaled to radius k\sqrt{E_s}.

(b) The symbol error probability can be determined as follows:

P[error] = \sum_{i=0}^{3} P[error|s_i^T(t)]P[s_i^T(t)] = P[error|s_i^T(t)] = P[error|s_0^T(t)]

The above is obvious given the symmetry of both the transmit and receive signals and the fact that the four transmit signals are equally likely. Given s_0^T(t) was transmitted, one has

r(t) = s_0^R(t) + w(t);  0 \le t \le T_s

Therefore,

r_1 = s_{0,1}^R + w_1 = k\sqrt{E_s}\cos\left(\theta + \frac{\pi}{4}\right) + w_1
r_2 = s_{0,2}^R + w_2 = k\sqrt{E_s}\sin\left(\theta + \frac{\pi}{4}\right) + w_2

where w_1 and w_2 are i.i.d. Gaussian random variables with zero mean and variance N_0/2. Thus:

P[error|s_0^T(t)] = 1 - P[correct|s_0^T(t)] = 1 - P[(r_1, r_2) \in R_0|s_0^T(t)]
  = 1 - P[r_1 \ge 0|s_0^T(t)]\,P[r_2 \ge 0|s_0^T(t)]
  = 1 - \left[1 - Q\left(\frac{k\sqrt{E_s}\sin(\theta + \pi/4)}{\sqrt{N_0/2}}\right)\right]\left[1 - Q\left(\frac{k\sqrt{E_s}\cos(\theta + \pi/4)}{\sqrt{N_0/2}}\right)\right]
  = Q\left(k\sqrt{\frac{2E_s}{N_0}}\sin(\theta + \pi/4)\right) + Q\left(k\sqrt{\frac{2E_s}{N_0}}\cos(\theta + \pi/4)\right)
    - Q\left(k\sqrt{\frac{2E_s}{N_0}}\sin(\theta + \pi/4)\right)Q\left(k\sqrt{\frac{2E_s}{N_0}}\cos(\theta + \pi/4)\right)

Note that the third term in the above expression can be safely ignored.

(c) To compute the bit error probability, one needs the following message error probabilities given that s_0^T(t) (or message m_0 = 00) was transmitted:

P[m_1|m_0] = P[r_1 \le 0|m_0]\,P[r_2 \ge 0|m_0]
           = Q\left(k\sqrt{\frac{2E_s}{N_0}}\cos(\theta + \pi/4)\right)\left[1 - Q\left(k\sqrt{\frac{2E_s}{N_0}}\sin(\theta + \pi/4)\right)\right]

P[m_2|m_0] = P[r_1 \le 0|m_0]\,P[r_2 \le 0|m_0]
           = Q\left(k\sqrt{\frac{2E_s}{N_0}}\cos(\theta + \pi/4)\right)Q\left(k\sqrt{\frac{2E_s}{N_0}}\sin(\theta + \pi/4)\right)

P[m_3|m_0] = P[r_1 \ge 0|m_0]\,P[r_2 \le 0|m_0]
           = Q\left(k\sqrt{\frac{2E_s}{N_0}}\sin(\theta + \pi/4)\right)\left[1 - Q\left(k\sqrt{\frac{2E_s}{N_0}}\cos(\theta + \pi/4)\right)\right]

Finally:

P[bit error] = 0.5\,P[m_1|m_0] + 1.0\,P[m_2|m_0] + 0.5\,P[m_3|m_0]
             = 0.5\left[Q\left(k\sqrt{\frac{2E_s}{N_0}}\cos(\theta + \pi/4)\right) + Q\left(k\sqrt{\frac{2E_s}{N_0}}\sin(\theta + \pi/4)\right)\right]   (7.13)

The bit error probability of QPSK with no phase and attenuation uncertainty is

P[bit error] = Q\left(\sqrt{\frac{E_s}{N_0}}\right) = Q\left(\sqrt{\frac{2E_b}{N_0}}\right)   (7.14)

Thus, as long as \theta \ne 0 and/or k \ne 1, one of the arguments of the Q-functions in (7.13) is smaller than \sqrt{E_s/N_0}. Therefore the bit error probability in (7.13) is worse than that in (7.14).
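Expression (7.13) reduces to (7.14) when k = 1 and \theta = 0, which makes a convenient numerical check. A sketch with an illustrative E_s/N_0 = 10 (an assumed value):

```python
import math

def Q(x):
    # Gaussian tail probability
    return 0.5 * math.erfc(x / math.sqrt(2))

def qpsk_ber(Es_over_N0, k=1.0, theta=0.0):
    # Eq. (7.13): QPSK bit error probability with gain k and phase offset theta
    a = k * math.sqrt(2 * Es_over_N0)
    return 0.5 * (Q(a * math.cos(theta + math.pi / 4))
                  + Q(a * math.sin(theta + math.pi / 4)))

Es_over_N0 = 10.0                          # illustrative value
ideal = Q(math.sqrt(Es_over_N0))           # Eq. (7.14)
print(ideal, qpsk_ber(Es_over_N0))         # equal when k = 1, theta = 0
print(qpsk_ber(Es_over_N0, k=0.9, theta=math.pi / 16))
```

Any attenuation (k < 1) or rotation (\theta \ne 0) increases the computed bit error probability, in line with the conclusion above.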
P7.10 (a) See Fig. 7.13.

Figure 7.13: Signal set #1 (a rectangular-envelope cosine pulse of amplitude \sqrt{2E_b/T_b} on [0, T_b]) and signal set #2 (a sine pulse of the same amplitude on [0, T_b]).

Signal set #1 has a discontinuity, so its PSD decays asymptotically as 1/f^2.
Signal set #2 has a discontinuity only after one differentiation, so its PSD decays asymptotically as 1/f^4.

(b) To determine the PSDs, shift the signal set by T_b/2 (to the left) and use the modulation model (for antipodal signalling) in Fig. 7.14, where h(t) = \sqrt{E_b}\sqrt{2/T_b}\cos(2\pi f_c t) or h(t) = \sqrt{E_b}\sqrt{2/T_b}\sin(2\pi f_c t), |t| \le T_b/2.

Figure 7.14: Modulation model: input PSD S_{in}(f) = 1/T_b watts/Hz driving the filter H(f); the modulated output PSD is S_{out}(f) = |H(f)|^2 S_{in}(f) watts/Hz.

To find H(f), express h(t) as K\left[e^{j2\pi f_c t} \pm e^{-j2\pi f_c t}\right], where K = \sqrt{E_b}\sqrt{2/T_b}\,\frac{1}{2} with the + sign for signal set #1, and K = \sqrt{E_b}\sqrt{2/T_b}\,\frac{1}{2j} with the - sign for signal set #2.

Then |H(f)|^2 = |K|^2\left|F\left\{e^{j2\pi f_c t} \pm e^{-j2\pi f_c t}\right\}\right|^2. Note that |K|^2 = \frac{E_b}{2T_b} is the same for either signal set.

F\left\{e^{j2\pi f_c t} \pm e^{-j2\pi f_c t}\right\} = \int_{-T_b/2}^{T_b/2}\left[e^{-j2\pi(f - f_c)t} \pm e^{-j2\pi(f + f_c)t}\right]dt
  = \frac{\sin[\pi(f - f_c)T_b]}{\pi(f - f_c)} \pm \frac{\sin[\pi(f + f_c)T_b]}{\pi(f + f_c)}   (7.15)

Now f_c = k/T_b (the usual assumption). Therefore \sin[\pi(f - f_c)T_b] = \sin(\pi f T_b - k\pi) = \sin(\pi f T_b)\cos(k\pi), and \sin[\pi(f + f_c)T_b] = \sin(\pi f T_b + k\pi) = \sin(\pi f T_b)\cos(k\pi). The Fourier transform then becomes

\frac{\sin(\pi f T_b)\cos(k\pi)}{\pi}\left[\frac{1}{f - f_c} \pm \frac{1}{f + f_c}\right].

So for signal set #1 it is \sin(\pi f T_b)\cos(k\pi)\,\frac{2f}{\pi(f^2 - f_c^2)}, while for signal set #2 it is \sin(\pi f T_b)\cos(k\pi)\,\frac{2f_c}{\pi(f^2 - f_c^2)}. Therefore the PSD of signal set #1 is

|K|^2\sin^2(\pi f T_b)\frac{4f^2}{\pi^2(f^2 - f_c^2)^2} = \frac{E_b}{2T_b}\sin^2(\pi f T_b)\frac{4f^2}{\pi^2(f^2 - f_c^2)^2},

and that of signal set #2 is

|K|^2\sin^2(\pi f T_b)\frac{4f_c^2}{\pi^2(f^2 - f_c^2)^2} = \frac{E_b}{2T_b}\sin^2(\pi f T_b)\frac{4f_c^2}{\pi^2(f^2 - f_c^2)^2},

where \cos^2(k\pi) = 1 has been used.

Observe that the PSD of signal set #1 decays as 1/f^2 and that of signal set #2 as 1/f^4 for large f, as expected.

(c) Need to set up the equations for plotting. Note that

\frac{4f^2}{(f^2 - f_c^2)^2} = \frac{4f^2 T_b^4}{[(f T_b)^2 - (f_c T_b)^2]^2} = 4T_b^2\frac{(f T_b)^2}{[(f T_b)^2 - (f_c T_b)^2]^2},

and

\frac{4f_c^2}{(f^2 - f_c^2)^2} = 4T_b^2\frac{(f_c T_b)^2}{[(f T_b)^2 - (f_c T_b)^2]^2}.

Therefore the two PSDs are:

\frac{2E_b T_b}{\pi^2}\sin^2(\pi f T_b)\frac{(f T_b)^2}{[(f T_b)^2 - (f_c T_b)^2]^2}   (signal set #1)

\frac{2E_b T_b}{\pi^2}\sin^2(\pi f T_b)\frac{(f_c T_b)^2}{[(f T_b)^2 - (f_c T_b)^2]^2}   (signal set #2)

The normalized PSDs (normalized by 2E_b T_b/\pi^2) are plotted in Fig. 7.15 versus f T_b, with f_c T_b as a parameter. Observe that as the carrier contains more and more cycles per bit (i.e., f_c T_b is larger and larger), the difference between the two PSDs becomes negligible. The reason is that the maximum slope becomes larger and larger (ignoring the discontinuity for signal set #1).

Figure 7.15: Normalized PSDs of the two signal sets, for f_c T_b = 1, 2, 5 and 20.

(d) So how do we reconcile this? That is, how does one reconcile the statement made in Chapter 3 that the PSD ignores phase information with the observation that phase appears to play a role in the decay rate of the PSD, at least in this case?
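The asymptotic decay claims in (b) can be checked numerically from the normalized PSD expressions. In the sketch below, the common constant 2E_bT_b/\pi^2 is dropped and f_cT_b = 5 is chosen for illustration; the sample points avoid the nulls of \sin(\pi fT_b):

```python
import math

def psd1(fTb, fcTb):
    # Normalized PSD of signal set #1 (cosine pulse, has a discontinuity)
    return (math.sin(math.pi * fTb) ** 2) * fTb ** 2 / (fTb ** 2 - fcTb ** 2) ** 2

def psd2(fTb, fcTb):
    # Normalized PSD of signal set #2 (sine pulse, continuous)
    return (math.sin(math.pi * fTb) ** 2) * fcTb ** 2 / (fTb ** 2 - fcTb ** 2) ** 2

# Away from f = fc, the two PSDs differ exactly by the factor (f/fc)^2,
# which is why set #1 decays as 1/f^2 while set #2 decays as 1/f^4.
fcTb = 5.0
for fTb in (20.25, 100.25):
    print(fTb, psd1(fTb, fcTb) / psd2(fTb, fcTb))
```

The ratio grows as (fT_b)^2, confirming the extra 1/f^2 of decay enjoyed by the continuous pulse.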
P7.11 (a)

P[error] = p\,Q\left(\sqrt{\frac{2E_b}{N_0}}\right) + (1 - p)\,Q\left(\sqrt{\frac{2E_b}{N_0}}\right) = Q\left(\sqrt{\frac{2E_b}{N_0}}\right) = Q\left(\sqrt{2\,\text{SNR}}\right)   (7.16)

which is the same as in the case that p is actually 1/2. Here SNR = E_b/N_0.

Figure 7.16: Decision regions of the BPSK demodulator with the assumption of p = 1/2: signals at \pm\sqrt{E_b} along \phi(t) = \sqrt{2/T_b}\cos(2\pi f_c t), threshold at r = 0 (0_D for r < 0, 1_D for r > 0).

(b) Now the receiver knows that p \ne 1/2. It needs to adjust the threshold accordingly to minimize P[error]. The optimal threshold is found as follows:

\frac{f(r|1_T)}{f(r|0_T)} \gtrless_{0_D}^{1_D} \frac{1 - p}{p}

\frac{\frac{1}{\sqrt{2\pi N_0/2}}e^{-(r - \sqrt{E_b})^2/N_0}}{\frac{1}{\sqrt{2\pi N_0/2}}e^{-(r + \sqrt{E_b})^2/N_0}} \gtrless_{0_D}^{1_D} \frac{1 - p}{p}   (7.17)

r \gtrless_{0_D}^{1_D} \underbrace{\frac{N_0}{4\sqrt{E_b}}\ln\frac{1 - p}{p}}_{T_h} = \frac{\sqrt{E_b}}{4\,\text{SNR}}\ln\frac{1 - p}{p}.

Figure 7.17: Decision regions of the optimum BPSK demodulator when p is known at the receiver: the threshold between f(r|0_T) and f(r|1_T) is shifted from 0 to T_h = \frac{\sqrt{E_b}}{4\,\text{SNR}}\ln\frac{1 - p}{p}.

P[error] = (1 - p)\,Q\left(\frac{\sqrt{E_b} + T_h}{\sqrt{N_0/2}}\right) + p\,Q\left(\frac{\sqrt{E_b} - T_h}{\sqrt{N_0/2}}\right)
         = (1 - p)\,Q\left(\sqrt{2\,\text{SNR}} + \frac{1}{4}\sqrt{\frac{2}{\text{SNR}}}\ln\frac{1 - p}{p}\right) + p\,Q\left(\sqrt{2\,\text{SNR}} - \frac{1}{4}\sqrt{\frac{2}{\text{SNR}}}\ln\frac{1 - p}{p}\right)   (7.18)

As a small check, when p = 1/2 one has P[error] = Q(\sqrt{2\,\text{SNR}}), as expected.

(c) The error performance curves based on the demodulator of (a) and that of (b) for p = 0.6, 0.7, 0.8, 0.9 are shown in Fig. 7.18. Observe that the error performance curves are quite insensitive to the a priori probability value p.
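Expression (7.18) can be evaluated directly from the threshold T_h of (7.17). A sketch, working with N_0 = 1 and E_b = SNR so that SNR = E_b/N_0 (the 8 dB operating point is an illustrative assumption):

```python
import math

def Q(x):
    # Gaussian tail probability
    return 0.5 * math.erfc(x / math.sqrt(2))

def ber_optimum(SNR, p):
    # Minimum-error-probability BPSK receiver when p = Pr[1_T] is known.
    Eb, N0 = SNR, 1.0
    Th = N0 / (4 * math.sqrt(Eb)) * math.log((1 - p) / p)  # threshold, Eq. (7.17)
    s = math.sqrt(N0 / 2)
    return p * Q((math.sqrt(Eb) - Th) / s) + (1 - p) * Q((math.sqrt(Eb) + Th) / s)

SNR = 10 ** (8 / 10)   # 8 dB, illustrative
for p in (0.5, 0.6, 0.7, 0.8, 0.9):
    print(p, ber_optimum(SNR, p))
```

Shifting the threshold helps only modestly: the curves move little as p changes, matching the observation in (c).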
Figure 7.18: Error performance of BPSK demodulators with different values of the a priori probability p.

P7.12 The following MATLAB program counts the bit 1s and 0s in a wave file:

close all;
clear all;
[y,fs,nbits] = wavread('Filename');
% Normalize the values in y
y = 2^(nbits-1) + 2^(nbits-1)*y;
% Change each floating-point value to an unsigned integer
if nbits==8
    y = uint8(y);
elseif nbits==16
    y = uint16(y);
elseif nbits==32
    y = uint32(y);
elseif nbits==64
    y = uint64(y);
else
    fprintf('The number of bits per sample used in this wav file is out of range!');
    exit
end
% Count the number of bit 1s and 0s
count_bit1 = 0;
count_bit0 = 0;
L = numel(y);
for l = 1:L
    sample = bitget(y(l),1:nbits);
    count_bit1 = count_bit1 + sum(sample);
    count_bit0 = count_bit0 + nbits - sum(sample);
end
fprintf('\n Number of bits per sample: %d', nbits);
fprintf('\n Number of bit 1s: %d', count_bit1);
fprintf('\n Number of bit 0s: %d', count_bit0);
fprintf('\n Percentage of bit 1s: %3.2f', 100*count_bit1/(L*nbits));
fprintf('\n Percentage of bit 0s: %3.2f', 100*count_bit0/(L*nbits));

The following are the results for the wave file tested:

Number of bits per sample: 16
Number of bit 1s: 2832449
Number of bit 0s: 2132159
Percentage of bit 1s: 57.05
Percentage of bit 0s: 42.95

P7.13

mark_vec = ['^';'o';'s';'d']; color_vec = ['r';'b';'m';'g'];
L = 1000; % number of QPSK symbols
Pevec = [10^(-1),10^(-2),10^(-3),10^(-4)];
for h = 1:4
    Pe = Pevec(h);
    sigma = (Qinv(Pe)*sqrt(2))^(-1);
    text_title = ['{\itL}=',num2str(L),'; Pr[bit error]=',num2str(Pe)];
    b = round(rand(1,2*L)); % This is the binary information sequence
    for i = 1:L
        BP = b((i-1)*2+1:i*2); % Take two bits at a time to map to a QPSK symbol
        if BP==[0 0]
            phi_1(i) = 1; phi_2(i) = 0;
        elseif BP==[0 1]
            phi_1(i) = 0; phi_2(i) = 1;
        elseif BP==[1 1]
            phi_1(i) = -1; phi_2(i) = 0;
        elseif BP==[1 0]
            phi_1(i) = 0; phi_2(i) = -1;
        end
        r1 = phi_1(i) + sigma*randn(1,1);
        r2 = phi_2(i) + sigma*randn(1,1);
        figure(h); % Plot in a separate figure for each value of Pr[error]
        plot(r1,r2,'marker',mark_vec(bi2de(BP)+1,:),'markersize',6,'markerfacecolor',...
            color_vec(bi2de(BP)+1,:),'color',color_vec(bi2de(BP)+1,:));
        hold on;
        figure(5); % Plot in the same figure for all four values of Pr[error]
        subplot(2,2,h);
        plot(r1,r2,'marker',mark_vec(bi2de(BP)+1,:),'markersize',6,'markerfacecolor',...
            color_vec(bi2de(BP)+1,:),'color',color_vec(bi2de(BP)+1,:));
        hold on;
    end
    x = [-3:0.01:3];
    figure(h);
    plot(x,x,x,-x,'marker','none','color','k','linewidth',1.5);
    title(text_title,'FontName','Times New Roman','FontSize',16)
    xlabel('{\it\phi}_1({\itt})','FontName','Times New Roman','FontSize',16);
    ylabel('{\it\phi}_2({\itt})','FontName','Times New Roman','FontSize',16);
    h1 = gca;
    set(h1,'FontSize',16,'XGrid','on','YGrid','on','GridLineStyle',':',...
        'MinorGridLineStyle','none','FontName','Times New Roman');
    axis([-3 3 -3 3]); axis equal;
    figure(5);
    subplot(2,2,h);
    plot(x,x,x,-x,'marker','none','color','k','linewidth',1.5);
    title(text_title,'FontName','Times New Roman','FontSize',16)
    xlabel('{\it\phi}_1({\itt})','FontName','Times New Roman','FontSize',16);
    ylabel('{\it\phi}_2({\itt})','FontName','Times New Roman','FontSize',16);
    h1 = gca;
    set(h1,'FontSize',16,'XGrid','on','YGrid','on','GridLineStyle',':',...
        'MinorGridLineStyle','none','FontName','Times New Roman');
    axis([-3 3 -3 3]); axis equal;
end

Note that the program is by no means optimal. Your program may be different, even though it accomplishes the same task.

Figure 7.19: Scatter plots of the received QPSK signals in the signal space, for L = 500 symbols with Pr[bit error] = 0.1 and 0.01.

Fig. 7.20 plots the received QPSK signals in the signal space diagram. It is clear that with a higher signal-to-noise ratio the received signals stay closer to the corresponding transmitted signals. This makes demodulation easier, hence a smaller bit or symbol error probability.

P7.14 (a) BASK signal set is:

0_T : 0
1_T : \sqrt{E}\sqrt{2/T_b}\,[u(t) - u(t - T_b)]\cos(2\pi f_c t)        (7.19)

Figure 7.20: Scatter plots of the received QPSK signals in the signal space, for L = 500 symbols with Pr[bit error] = 0.001 and 0.0001.

which may be rewritten as:

0_T : \Re\left\{0\cdot e^{j2\pi f_c t}\right\}  \Rightarrow  s_{BB}(t) = 0
1_T : \Re\left\{\sqrt{E}\sqrt{2/T_b}\,[u(t) - u(t - T_b)]e^{j2\pi f_c t}\right\}  \Rightarrow  s_{BB}(t) = \sqrt{E}\sqrt{2/T_b}\,[u(t) - u(t - T_b)]        (7.20)

They are plotted in Fig. 7.21. In baseband the signal is NRZ, i.e., BASK is NRZ modulation shifted up by the carrier frequency, f_c Hz.

Figure 7.21: Baseband BASK signals: amplitude \sqrt{2E/T_b} on [0, T_b] for 1_T, and 0 for 0_T.

(b) BPSK signal set is:

1_T : \sqrt{E_b}\sqrt{2/T_b}\,[u(t) - u(t - T_b)]\cos(2\pi f_c t)
0_T : -\sqrt{E_b}\sqrt{2/T_b}\,[u(t) - u(t - T_b)]\cos(2\pi f_c t)

which may be written as:

1_T : \Re\left\{\sqrt{E_b}\sqrt{2/T_b}\,[u(t) - u(t - T_b)]e^{j2\pi f_c t}\right\}  \Rightarrow  s_{BB}(t) = \sqrt{E_b}\sqrt{2/T_b}\,[u(t) - u(t - T_b)]
0_T : \Re\left\{-\sqrt{E_b}\sqrt{2/T_b}\,[u(t) - u(t - T_b)]e^{j2\pi f_c t}\right\}  \Rightarrow  s_{BB}(t) = -\sqrt{E_b}\sqrt{2/T_b}\,[u(t) - u(t - T_b)]

The baseband signals are shown in Fig. 7.22. In baseband the signal set is that of NRZ-L, i.e., BPSK is NRZ-L modulation shifted up by the carrier frequency, f_c Hz.

Figure 7.22: Baseband BPSK signals: \pm\sqrt{2E_b/T_b} on [0, T_b].

(c) BFSK signal set is:

0_T : \sqrt{E_b}\sqrt{2/T_b}\cos(2\pi f_1 t)
1_T : \sqrt{E_b}\sqrt{2/T_b}\cos(2\pi f_2 t)
                                              0 \le t \le T_b  (assume f_1 < f_2).

The signal set can be rewritten as:

0_T : \Re\left\{\sqrt{E_b}\sqrt{2/T_b}\,[u(t) - u(t - T_b)]e^{j2\pi f_1 t}\right\},
1_T : \Re\left\{\sqrt{E_b}\sqrt{2/T_b}\,[u(t) - u(t - T_b)]e^{j2\pi f_2 t}\right\}

(i) Let f_s = f_1. Then immediately, for 0_T: s_{BB}(t) = \sqrt{E_b}\sqrt{2/T_b}\,[u(t) - u(t - T_b)]. For 1_T, write the passband signal as \Re\left\{\sqrt{E_b}\sqrt{2/T_b}\,[u(t) - u(t - T_b)]e^{j2\pi(f_2 - f_1)t}\,e^{j2\pi f_1 t}\right\}; the factor multiplying e^{j2\pi f_1 t} is s_{BB}(t). Thus

0_T : s_{BB}(t) = \sqrt{E_b}\sqrt{2/T_b}\,[u(t) - u(t - T_b)]                                      (7.21)
1_T : s_{BB}(t) = \sqrt{E_b}\sqrt{2/T_b}\,[u(t) - u(t - T_b)]e^{j2\pi(f_2 - f_1)t}                 (7.22)
                = \sqrt{E_b}\sqrt{2/T_b}\,[u(t) - u(t - T_b)]\{\cos 2\pi(f_2 - f_1)t + j\sin 2\pi(f_2 - f_1)t\}   (7.23)

(ii) Let f_s = f_2. Basically the roles of f_1 and f_2 are interchanged. Therefore, from (i):

0_T : s_{BB}(t) = \sqrt{E_b}\sqrt{2/T_b}\,[u(t) - u(t - T_b)]\{\cos 2\pi(f_2 - f_1)t - j\sin 2\pi(f_2 - f_1)t\}
1_T : s_{BB}(t) = \sqrt{E_b}\sqrt{2/T_b}\,[u(t) - u(t - T_b)]

(iii) Let f_s = \frac{f_1 + f_2}{2}. Then

0_T : s_{PB}(t) = \Re\left\{\sqrt{E_b}\sqrt{2/T_b}\,[u(t) - u(t - T_b)]e^{-j2\pi\frac{f_2 - f_1}{2}t}\,e^{j2\pi\frac{f_1 + f_2}{2}t}\right\}
1_T : s_{PB}(t) = \Re\left\{\sqrt{E_b}\sqrt{2/T_b}\,[u(t) - u(t - T_b)]e^{j2\pi\frac{f_2 - f_1}{2}t}\,e^{j2\pi\frac{f_1 + f_2}{2}t}\right\}

Or, if we let \Delta f \equiv f_2 - f_1 be the frequency separation, then:

0_T : s_{BB}(t) = \sqrt{E_b}\sqrt{2/T_b}\,[u(t) - u(t - T_b)]e^{-j2\pi(\Delta f/2)t}
1_T : s_{BB}(t) = \sqrt{E_b}\sqrt{2/T_b}\,[u(t) - u(t - T_b)]e^{j2\pi(\Delta f/2)t}

To plot the signals in baseband, let f_2 - f_1 = \Delta f = \frac{1}{T_b}, the minimum frequency separation for noncoherent orthogonality. The various signals are plotted in Figs. 7.23, 7.24 and 7.25.

Figure 7.23: Case (i): f_s = f_1.

Figure 7.24: Case (ii): f_s = f_2.

Figure 7.25: Case (iii): f_s = (f_1 + f_2)/2.

Remarks:

(i) In passband the transmitted energy is 0 or E (for BASK) or E_b (for BPSK or BFSK). What is the energy in the equivalent baseband signal(s)? Note that for a complex signal x(t), the energy is given by \int |x(t)|^2\,dt = \int x(t)x^*(t)\,dt. How would you explain the factor of 2?

(ii) BPSK and BFSK are antipodal and orthogonal, respectively (with the proper choice of f_2 - f_1). Are these characteristics preserved in baseband?

(iii) What would happen in (b) to the s_{BB}(t)s if, instead of f_s = f_c, the shift frequency, also commonly called the representation frequency, is chosen to be f_s = f_c + f_{offset}? Try to draw some general conclusions, or go to P7.15.
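Remark (i), the factor of 2 in the baseband energy, can be checked numerically. The sketch below assumes illustrative values E_b = T_b = 1 and f_c = 10/T_b (an integer number of cycles per bit) and integrates the BPSK passband pulse and its complex envelope with a midpoint Riemann sum:

```python
import math

# Passband pulse sqrt(2*Eb/Tb)*cos(2*pi*fc*t) has energy Eb over [0, Tb];
# the complex envelope s_BB(t) = sqrt(2*Eb/Tb) has energy 2*Eb.
Eb, Tb, fc, N = 1.0, 1.0, 10.0, 100000
dt = Tb / N
E_pass = sum((math.sqrt(2 * Eb / Tb) * math.cos(2 * math.pi * fc * (n + 0.5) * dt)) ** 2
             for n in range(N)) * dt
E_base = sum(abs(math.sqrt(2 * Eb / Tb)) ** 2 for n in range(N)) * dt
print(E_pass, E_base)   # approximately 1.0 and 2.0
```

The factor of 2 arises because the passband signal spends half its energy in the cos^2 average of 1/2, while the envelope magnitude carries the full amplitude at all times.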

P7.15 One gets a real equivalent baseband signal if the shift or representation frequency, f_s, is chosen so that the passband signal's Fourier transform has the following properties: its magnitude function, |S_{PB}(f)|, is even about f_s and its phase function, \angle S(f), is odd about f_s; that is, |S(f_s + f)| = |S(f_s - f)| and \angle S(f_s + f) = -\angle S(f_s - f). Otherwise the equivalent baseband signal is complex.

So, given the spectrum of a passband signal as in Fig. 7.26, how would you choose f_s to have a real equivalent baseband signal?

TO CONCLUDE: THE CHOICE OF THE REPRESENTATION FREQUENCY IS ARBITRARY AND IS YOUR DECISION. BUT THERE ARE GOOD, BAD & UGLY CHOICES.

Figure 7.26: Magnitude |S_{PB}(f)| and phase \angle S_{PB}(f) of a passband signal.

P7.16 Let the QPSK signals be represented by the following signal set (choosing f_s = f_c):

00 \rightarrow \sqrt{E_s}\sqrt{2/T_s}\cos(2\pi f_c t)             \Rightarrow  s_{BB}(t) = \sqrt{E_s}\sqrt{2/T_s}\,[u(t) - u(t - T_s)]
01 \rightarrow \sqrt{E_s}\sqrt{2/T_s}\cos(2\pi f_c t + \pi/2)    \Rightarrow  s_{BB}(t) = \sqrt{E_s}\sqrt{2/T_s}\,[u(t) - u(t - T_s)]e^{j\pi/2}
10 \rightarrow \sqrt{E_s}\sqrt{2/T_s}\cos(2\pi f_c t + \pi)      \Rightarrow  s_{BB}(t) = \sqrt{E_s}\sqrt{2/T_s}\,[u(t) - u(t - T_s)]e^{j\pi}
11 \rightarrow \sqrt{E_s}\sqrt{2/T_s}\cos(2\pi f_c t + 3\pi/2)   \Rightarrow  s_{BB}(t) = \sqrt{E_s}\sqrt{2/T_s}\,[u(t) - u(t - T_s)]e^{j3\pi/2}

What do the plots of the baseband signals look like?
What happens to the baseband signals if the transmitted signal set is phase-shifted by some fixed angle?
What do the plots of the baseband signals look like?
What happens to the baseband signals if the transmitted signal set is shifted by say radians?

Before going to Problems 7.17 to 7.20 we first become comfortable with Equation (7.4). The
term SB (f fs ) represents SB (f ) shifted to the right by fs Hz, while the term SB (f fs )
represents SB (f ) flipped around the vertical axis, i.e., SB (f ) and then (because of the flip
&
or f argument) shifted to the left by fs Hz. Summing the two gives Sn (f ). Here we worked
from SB (f ) to Sn (f ) but one can work the other way. As another graphical example consider
Sn (f ) as shown in Fig. 7.27.

Sn ( f )
en

f
f2 f1 0 f1 f2
uy

Figure 7.27

Whatever it is, Sn (f ), is real positive and even and therefore a valid PSD. To determine the
PSD of the equivalent baseband process the representation frequency, fs , must be chosen.
Let us choose it to be f2 +f
Ng

2 . Then the PSD of the equivalent baseband process is shown in


1

Fig. 7.28.
A word about the auto-correlation
R function. It is as usual the inverse Fourier transform of
the PSD, i.e., RB ( ) = SB (f )ej2f df .
Note that the PSD of Fig. 7.31 results in a complex auto-correlation function while the PSD
above results in a real auto-correlation function.
Real or complex depends on the shape of the passband PSD and also on the choice of fs .
A complex auto-correlation function means that the equivalent baseband process is complex.
Keep in mind that it is the passband process that is the physical (real) process, the baseband
process is first an equivalent one, i.e., a mathematical construct.

Solutions Page 724


Nguyen & Shwedyk A First Course in Digital Communications

SB ( f )
A

k
f

dy
f 2 f1 0 f 2 f1

2 2

Figure 7.28

we
Some questions you can ask yourself are: What is the equivalent baseband process PSD when
fs above is chosen to be = f1 ?, = f2 ? When would the auto-correlation function be real?
P7.17 (a) Let S_n(f) be as in Fig. 7.29 and let H(f) be as in Fig. 7.30. Certainly |H(f)|^2 = S_n(f). Choose f_s = f_1, which implies that H_B(f) is as in Fig. 7.31.

Figure 7.29: A passband PSD S_n(f), nonzero for f_1 \le |f| \le f_2.

Figure 7.30: A filter H(f) with |H(f)| = \sqrt{S_n(f)} (an even function) and phase \angle H(f) (an odd function).

Figure 7.31: The equivalent baseband response H_B(f), nonzero for 0 \le f \le f_2 - f_1.

Now show that indeed H(f) = H_B(f - f_s) + H_B^*(-f - f_s). The term H_B(f - f_s) of course gives the spectrum on the positive-f axis, as in Fig. 7.32.

Figure 7.32: H_B(f - f_s), occupying f_1 \le f \le f_2 (the part on the positive f axis).

Now H_B(-f - f_s) is again a flip of H_B(f) about the vertical axis and a shift to the left by f_s = f_1 Hz. The result is shown on the left of Fig. 7.33. But we want the overall phase function \angle H(f) to be an odd function (h(t) is real). Therefore conjugate (not as a French verb, but as complex conjugate): H_B(-f - f_s) \rightarrow H_B^*(-f - f_s). This flips the phase over, since if x = |x|e^{j\angle x}, then x^* = |x|e^{-j\angle x}, i.e., \angle x^* = -\angle x. The result is as shown on the right of Fig. 7.33.

Figure 7.33: H_B(-f - f_s) (left) and H_B^*(-f - f_s) (right), both occupying -f_2 \le f \le -f_1.

Remarks:
The reason that there was no complex conjugate in Eqn. (P7.4) is because S_n(f) is a PSD and hence the phase is zero (always).
Different f_s can be chosen. You may wish to see what the effect is.

(e) n_B(t) = \left[\int_{-\infty}^{\infty} w(t - \alpha)h_B(\alpha)e^{j2\pi f_s\alpha}\,d\alpha\right]e^{-j2\pi f_s t}. It should be a baseband process. A facetious answer would be: because of the subscript B in n_B(t). More seriously, n(t) is a passband process with its PSD lying around \pm f_s Hz. Therefore n_B(t) should be a baseband process with its PSD around 0 Hz, which the factor e^{\mp j2\pi f_s t} shifts up and down to lie around \pm f_s Hz.

(f) n_I(t) = \Re\{n_B(t)\} = \Re\left\{\left[\int_{-\infty}^{\infty} w(t - \alpha)h_B(\alpha)e^{j2\pi f_s\alpha}\,d\alpha\right]e^{-j2\pi f_s t}\right\}
    n_Q(t) = \Im\{n_B(t)\} = \Im\left\{\left[\int_{-\infty}^{\infty} w(t - \alpha)h_B(\alpha)e^{j2\pi f_s\alpha}\,d\alpha\right]e^{-j2\pi f_s t}\right\}.


Nguyen & Shwedyk A First Course in Digital Communications

P7.18 (a) They are Gaussian because they are obtained by linear operations on the Gaussian process w(t).

Consider first the correlation without conjugation:

  E{nB(t)nB(t+τ)} = E{ [∫ w(t−λ)hB(λ)e^{−j2πfsλ} dλ] e^{j2πfst} · [∫ w(t+τ−u)hB(u)e^{−j2πfsu} du] e^{j2πfs(t+τ)} }
                  = e^{j2πfst} e^{j2πfs(t+τ)} ∫∫ E{w(t−λ)w(t+τ−u)} hB(u)e^{−j2πfs(λ+u)} du hB(λ) dλ.

Since w(t) is white noise, E{w(t−λ)w(t+τ−u)} = δ(τ−u+λ), and the impulse sifts out the value at u = λ+τ:

  RnB(t,τ) = E{nB(t)nB(t+τ)} = e^{j2πfs(2t+τ)} ∫ hB(λ)hB(λ+τ) e^{−j2πfsτ} e^{−j2π(2fs)λ} dλ = e^{j4πfst} ∫ hB(λ)hB(λ+τ) e^{−j2π(2fs)λ} dλ.

Now Fourier transform with respect to τ:

  F{RnB(t,τ)} = e^{j4πfst} ∫λ hB(λ)e^{−j2π(2fs)λ} [∫τ hB(λ+τ)e^{−j2πfτ} dτ] dλ.

Changing the variable in the inner bracket to u = λ+τ, it becomes ∫ hB(u)e^{−j2πf(u−λ)} du = e^{j2πfλ}HB(f). Therefore

  F{RnB(t,τ)} = e^{j4πfst} HB(f) ∫λ hB(λ)e^{−j2π(2fs−f)λ} dλ = e^{j4πfst} HB(f)HB(2fs−f).

But HB(f) and HB(2fs−f) do not overlap (one is centered at f = 0, the other at f = 2fs), so the product is zero. Therefore RnB(t,τ) = 0.

On the other hand, nB(t) = nI(t) + jnQ(t), so

  E{nB(t)nB(t+τ)} = E{[nI(t)+jnQ(t)][nI(t+τ)+jnQ(t+τ)]}
                  = [E{nI(t)nI(t+τ)} − E{nQ(t)nQ(t+τ)}] + j[E{nI(t)nQ(t+τ)} + E{nQ(t)nI(t+τ)}]
                  = [RI(τ) − RQ(τ)] + j[RI,Q(τ) + RQ,I(τ)].

But we just showed that this expectation is equal to zero. Therefore RI(τ) = RQ(τ) and RI,Q(τ) = −RQ,I(τ).

Consider now the correlation with conjugation:

  E{nB(t)nB*(t+τ)} = e^{j2πfst} e^{−j2πfs(t+τ)} ∫∫ E{w(t−λ)w(t+τ−u)} hB(λ)hB(u)e^{−j2πfs(λ−u)} dλ du,

where E{w(t−λ)w(t+τ−u)} = δ(λ−u+τ), as the impulse occurs at u = λ+τ. Considering the integral with respect to u, the impulse sifts out the value at u = λ+τ, and the phase factors cancel. Therefore

  E{nB(t)nB*(t+τ)} = ∫ hB(λ)hB(λ+τ) dλ,

which depends only on τ. Now Fourier transform the above with respect to τ and call it SB(f):

  SB(f) = ∫τ [∫λ hB(λ)hB(λ+τ) dλ] e^{−j2πfτ} dτ = ∫λ hB(λ) [∫τ hB(λ+τ)e^{−j2πfτ} dτ] dλ.

Changing the variable in the inner integral to u = λ+τ, it becomes ∫ hB(u)e^{−j2πf(u−λ)} du = e^{j2πfλ}HB(f), so that

  SB(f) = HB(f) ∫ hB(λ)e^{j2πfλ} dλ = HB(f)HB(−f) = HB(f)HB*(f) = |HB(f)|² watts/Hz,

since hB(t) is real.

(b) With H(f) = HB(f−fs) + HB*(−f−fs):

  Sn(f) = |H(f)|² = [HB(f−fs) + HB*(−f−fs)][HB*(f−fs) + HB(−f−fs)]
        = |HB(f−fs)|² + HB(f−fs)HB(−f−fs) + HB*(−f−fs)HB*(f−fs) + |HB(−f−fs)|².

The two cross terms are zero since HB(f−fs) and HB(−f−fs) do not overlap. Therefore

  Sn(f) = SB(f−fs) + SB(−f−fs), with SB(f) = |HB(f)|².

(c) We know that H(f) = HB(f−fs) + HB*(−f−fs). Therefore we show that

  F{ 2Re{hB(t)e^{j2πfst}} } = H(f),

which means h(t) = 2Re{hB(t)e^{j2πfst}}. To show this, express the real operation as Re{x} = (x + x*)/2:

  F{h(t)} = F{ hB(t)e^{j2πfst} + hB*(t)e^{−j2πfst} } = ∫ hB(t)e^{−j2π(f−fs)t} dt + ∫ hB*(t)e^{−j2π(f+fs)t} dt
          = HB(f−fs) + [∫ hB(t)e^{−j2π(−f−fs)t} dt]* = HB(f−fs) + HB*(−f−fs),

or H(f) = HB(f−fs) + HB*(−f−fs), as we wished to show.

(d) n(t) = ∫ w(t−λ)h(λ) dλ = ∫ w(t−λ) · 2Re{hB(λ)e^{j2πfsλ}} dλ. Interchange the integral and the Re{·} operation; note that w(·) is real and can be pulled inside the Re{·} operation. Therefore

  n(t) = 2Re{ ∫ w(t−λ)hB(λ)e^{j2πfsλ} dλ }.

(e) From (a) we have that RI(τ) = RQ(τ) and RI,Q(τ) = −RQ,I(τ). We also found that E{nB(t)nB*(t+τ)} is the inverse Fourier transform of SB(f), i.e., ∫ SB(f)e^{j2πfτ} df. But it also equals

  E{[nI(t)+jnQ(t)][nI(t+τ)−jnQ(t+τ)]} = [RI(τ) + RQ(τ)] + j[RQ,I(τ) − RI,Q(τ)] = 2RI(τ) − j2RI,Q(τ),

or

  2RI(τ) − j2RI,Q(τ) = ∫ SB(f)e^{j2πfτ} df = ∫ SB(f)cos(2πfτ) df + j∫ SB(f)sin(2πfτ) df.

Therefore

  RI(τ) = RQ(τ) = (1/2)∫ SB(f)cos(2πfτ) df,
  RI,Q(τ) = −RQ,I(τ) = −(1/2)∫ SB(f)sin(2πfτ) df.

(f) SB(f) = |HB(f)|² is even. The crosscorrelation functions are 0, since SB(f)sin(2πfτ) is an odd function and the area under it is zero. Therefore the I & Q processes are uncorrelated and, being Gaussian, statistically independent.
P7.19
  Y(f) = X(f)H(f) = [XB(f−fs) + XB*(−f−fs)][HB(f−fs) + HB*(−f−fs)]
       = XB(f−fs)HB(f−fs) + XB*(−f−fs)HB*(−f−fs) + XB(f−fs)HB*(−f−fs) + XB*(−f−fs)HB(f−fs),

where the last two (cross) terms are zero because the baseband spectra centered at ±fs do not overlap. But Y(f) can also be written as Y(f) = YB(f−fs) + YB*(−f−fs), i.e.,

  YB(f−fs) + YB*(−f−fs) = XB(f−fs)HB(f−fs) + XB*(−f−fs)HB*(−f−fs)
  ⇒ YB(f−fs) = XB(f−fs)HB(f−fs) ⇒ YB(f) = XB(f)HB(f) ⇒ yB(t) = xB(t) ∗ hB(t).

P7.20 hB(t) = sB(Tb − t).

Chapter 8

M-ary Signaling Techniques

P8.1 (a) Basically the procedure is to take the previous Gray code and flip it over to create a bottom half. Insert a leading zero in each codeword of the top half and a leading one in each codeword of the bottom half. So for 3-bit sequences, one obtains:

  top half (leading zeros):    000, 001, 011, 010
  bottom half (leading ones):  110, 111, 101, 100

(b)
  a1 a2 a3 a4 a5    b1 b2 b3 b4 b5
  0  0  0  0  0     0  0  0  0  0
  0  0  0  0  1     0  0  0  0  1
  0  0  0  1  0     0  0  0  1  1
  0  0  0  1  1     0  0  0  1  0
  0  0  1  0  0     0  0  1  1  0
  0  0  1  0  1     0  0  1  1  1
  0  0  1  1  0     0  0  1  0  1
  0  0  1  1  1     0  0  1  0  0
  0  1  0  0  0     0  1  1  0  0
  0  1  0  0  1     0  1  1  0  1
  0  1  0  1  0     0  1  1  1  1
  0  1  0  1  1     0  1  1  1  0
  0  1  1  0  0     0  1  0  1  0
  0  1  1  0  1     0  1  0  1  1
  0  1  1  1  0     0  1  0  0  1
  0  1  1  1  1     0  1  0  0  0

For the bottom half: flip the top half over and set b1 = 1.

(c) The direct method appears to be more straightforward, since it appears that you do not need to generate the previous Gray codes to get the final one. But in reality there is no difference. One can adapt the inductive method to produce a Gray code by considering the pattern of zeros and ones in successive columns. Left as an exercise.
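Both constructions are easy to express in a few lines of code. The sketch below (function names are mine) builds the code by the inductive reflect-and-prefix method of part (a) and by the direct rule b = a XOR (a >> 1), and checks that they agree:

```python
def gray_reflect(n):
    """Inductive construction from part (a): flip the previous code over to
    create a bottom half, prefix the top half with 0, the bottom with 1."""
    code = ['0', '1']
    for _ in range(n - 1):
        code = ['0' + c for c in code] + ['1' + c for c in reversed(code)]
    return code

def gray_direct(n):
    """Direct construction: the codeword for index a is a XOR (a >> 1)."""
    return [format(a ^ (a >> 1), '0%db' % n) for a in range(2 ** n)]

codes = gray_reflect(3)
print(codes)  # ['000', '001', '011', '010', '110', '111', '101', '100']
assert codes == gray_direct(3)
# adjacent codewords (cyclically) differ in exactly one bit
for a, b in zip(codes, codes[1:] + codes[:1]):
    assert sum(x != y for x, y in zip(a, b)) == 1
```

The equality of the two constructions is the "no difference in reality" claim of part (c).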

P8.2 (4-ASK modulation)

(a)+(b) With M = 4,

  P[symbol error] = (3/2) Q(Δ/√(2N0))  ⇒  Q(Δ/√(2N0)) = (2/3) P[symbol error].   (8.1)

The values of Δ/√(2N0), Q(Δ/√(2N0)), Q(3Δ/√(2N0)) and Q(5Δ/√(2N0)) corresponding to different values of P[symbol error] are given in Table 8.1.

Table 8.1

  P[symbol error] | Δ/√(2N0) | Q(Δ/√(2N0)) | Q(3Δ/√(2N0)) | Q(5Δ/√(2N0))
  10^-1           | 1.5011   | 6.67×10^-2  | 3.35×10^-6   | 3.06×10^-14
  10^-2           | 2.4747   | 6.67×10^-3  | 5.67×10^-14  | 1.81×10^-35
  10^-3           | 3.2087   | 6.67×10^-4  | 3.10×10^-22  | 3.17×10^-58
  10^-4           | 3.8202   | 6.67×10^-5  | 1.04×10^-30  | 1.28×10^-81

(c) It is clear that

  P[{01}D|{00}T] = Q((Δ/2)/√(N0/2)) − Q((3Δ/2)/√(N0/2)) = Q(Δ/√(2N0)) − Q(3Δ/√(2N0)).   (8.2)

Similarly, one can find that

  P[{11}D|{00}T] = Q(3Δ/√(2N0)) − Q(5Δ/√(2N0)),   (8.3)
  P[{10}D|{00}T] = Q(5Δ/√(2N0)).   (8.4)

(d) It can be seen from the signal space that the same set of conditional symbol error probabilities applies for the case that symbol 10 is transmitted. The set will be different when either {01} or {11} is transmitted.

Compared with Q(Δ/√(2N0)), the terms Q(3Δ/√(2N0)) and Q(5Δ/√(2N0)) are negligible at any level of P[symbol error] and can be ignored. It is the error that occurs when a signal is mistaken for a nearest neighbor that dominates. Therefore, in terms of neighbors, the most important ones are the nearest ones.

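The entries of Table 8.1 can be regenerated with the standard-library error function, since Q(x) = (1/2)erfc(x/√2). A sketch (function name mine) that solves Q(arg) = (2/3)P[symbol error] by bisection and then evaluates the higher-order terms:

```python
import math

def Q(x):
    # Gaussian tail function: Q(x) = 0.5 * erfc(x / sqrt(2))
    return 0.5 * math.erfc(x / math.sqrt(2))

def ask4_table_row(p_sym):
    """For a target 4-ASK P[symbol error], solve Q(arg) = (2/3)*p_sym for
    arg = Delta/sqrt(2 N0) by bisection (Q is monotonically decreasing),
    then evaluate Q(3*arg) and Q(5*arg)."""
    target = 2.0 * p_sym / 3.0
    lo, hi = 0.0, 40.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if Q(mid) > target:
            lo = mid
        else:
            hi = mid
    arg = 0.5 * (lo + hi)
    return arg, Q(arg), Q(3 * arg), Q(5 * arg)

for p in (1e-1, 1e-2, 1e-3, 1e-4):
    arg, q1, q3, q5 = ask4_table_row(p)
    print('%.0e  %.4f  %.3g  %.3g  %.3g' % (p, arg, q1, q3, q5))
```

The printed rows reproduce Table 8.1 to the displayed precision, confirming how quickly the Q(3Δ/√(2N0)) and Q(5Δ/√(2N0)) terms become negligible.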
P8.3 (a)

  P[bit error] = P[bit error|{00}T]P[{00}T] + P[bit error|{01}T]P[{01}T] + P[bit error|{11}T]P[{11}T] + P[bit error|{10}T]P[{10}T],

with P[{00}T] = P[{01}T] = P[{11}T] = P[{10}T] = 1/4. By symmetry, P[bit error|{00}T] = P[bit error|{10}T] and P[bit error|{01}T] = P[bit error|{11}T], so

  P[bit error] = (1/2)P[bit error|{00}T] + (1/2)P[bit error|{01}T].

Counting the number of bit errors (out of 2) made by each erroneous decision:

  P[bit error|{00}T] = (1/2)[Q(Δ/√(2N0)) − Q(3Δ/√(2N0))] + [Q(3Δ/√(2N0)) − Q(5Δ/√(2N0))] + (1/2)Q(5Δ/√(2N0)),
  P[bit error|{01}T] = Q(Δ/√(2N0)) + (1/2)Q(3Δ/√(2N0)).

Therefore

  P[bit error] = (3/4)Q(Δ/√(2N0)) + (1/2)Q(3Δ/√(2N0)) − (1/4)Q(5Δ/√(2N0)).

(b) Due to symmetry, P[b1 in error] = P[b1 in error|b1 = 0]:

  P[b1 in error|b1=0] = P[rI > 0 | sT = −Δ/2, b1=0] P[sT = −Δ/2 | b1=0] + P[rI > 0 | sT = −3Δ/2, b1=0] P[sT = −3Δ/2 | b1=0]
  ⇒ P[b1 in error] = (1/2)[Q(Δ/√(2N0)) + Q(3Δ/√(2N0))].

  P[b2 in error] = P[b2 in error|b2=0]P[b2=0] + P[b2 in error|b2=1]P[b2=1].

  P[b2 in error|b2=0] = 2 P[−Δ < rI < Δ | sT = 3Δ/2, b2=0] P[sT = 3Δ/2 | b2=0]
                      = 2[Q(Δ/√(2N0)) − Q(5Δ/√(2N0))](1/2) = Q(Δ/√(2N0)) − Q(5Δ/√(2N0)),

  P[b2 in error|b2=1] = 2 P[rI < −Δ or rI > Δ | sT = Δ/2, b2=1] P[sT = Δ/2 | b2=1]
                      = 2[Q(Δ/√(2N0)) + Q(3Δ/√(2N0))](1/2) = Q(Δ/√(2N0)) + Q(3Δ/√(2N0)),

  P[b2 in error] = Q(Δ/√(2N0)) + (1/2)Q(3Δ/√(2N0)) − (1/2)Q(5Δ/√(2N0)).

The arithmetic average of the two individual bit error probabilities gives

  P[bit error] = (3/4)Q(Δ/√(2N0)) + (1/2)Q(3Δ/√(2N0)) − (1/4)Q(5Δ/√(2N0))   (same as in (a)).

The first observation is that the individual bit error probabilities are not the same: the probability of b2 being in error is higher by approximately a factor of 2. The overall bit error probability is somewhere between the two individual bit error probabilities. If one makes the reasonable approximations based on the result of P8.2(b), then

  P[bit error] ≈ (3/4)Q(Δ/√(2N0)),  P[b1 in error] ≈ (1/2)Q(Δ/√(2N0)),  P[b2 in error] ≈ Q(Δ/√(2N0)).

Using the approximate expressions we have P[bit error] = (3/4)Q(Δ/√(2N0)), which is close to the exact expression.

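The exact and nearest-neighbour-approximate bit error probabilities derived above can be compared numerically; the sketch below (function names are mine) evaluates both at the operating points of Table 8.1:

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

def p_bit_exact(a):
    # Gray-coded 4-ASK bit error probability, a = Delta/sqrt(2 N0)
    return 0.75 * Q(a) + 0.5 * Q(3 * a) - 0.25 * Q(5 * a)

def p_bit_approx(a):
    # nearest-neighbour approximation: keep only the Q(a) term
    return 0.75 * Q(a)

for a in (1.5011, 2.4747, 3.2087):  # operating points from Table 8.1
    print(a, p_bit_exact(a), p_bit_approx(a))
```

At any practical operating point the two agree to many significant digits, which is the justification for dropping the Q(3Δ/√(2N0)) and Q(5Δ/√(2N0)) terms.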
P8.4 (a) From Fig. 8.27 in the textbook, one can obtain the decision rule for bit b2 as follows:

  |rI| ≷ Δ  (b2 = 0 if |rI| > Δ, b2 = 1 otherwise).   (8.5)

(b) Denote s1(t) = {00}, s2(t) = {01}, s3(t) = {11}, s4(t) = {10}. The error probabilities can be calculated as:

  P[b1 in error] = (1/4) Σ_{i=1}^{4} P[b1 in error|si(t)]
                 = (1/4)[Q(3Δ/√(2N0)) + Q(Δ/√(2N0)) + Q(Δ/√(2N0)) + Q(3Δ/√(2N0))]
                 = (1/2)[Q(Δ/√(2N0)) + Q(3Δ/√(2N0))].   (8.6)

  P[b2 in error] = (1/4) Σ_{i=1}^{4} P[b2 in error|si(t)] = (1/2)(P[b2 in error|s1(t)] + P[b2 in error|s2(t)])
                 = (1/2)[Q(Δ/√(2N0)) − Q(5Δ/√(2N0)) + Q(Δ/√(2N0)) + Q(3Δ/√(2N0))]
                 = Q(Δ/√(2N0)) + (1/2)Q(3Δ/√(2N0)) − (1/2)Q(5Δ/√(2N0)).   (8.7)

Then the average bit error probability can be calculated as

  P[bit error] = (P[b1 in error] + P[b2 in error])/2
               = (3/4)Q(Δ/√(2N0)) + (1/2)Q(3Δ/√(2N0)) − (1/4)Q(5Δ/√(2N0)).   (8.8)

The error probabilities are identical to those of P8.3.

Figure 8.1: 4-ASK constellation with Gray mapping 00, 01, 11, 10 at rI = −3Δ/2, −Δ/2, Δ/2, 3Δ/2.

P8.5 (a) With symbol demodulation, P[b1 in error] remains unchanged from P8.3(a) since the mapping for b1 is identical. On the other hand,

  P[b2 in error] = (1/2)P[b2 in error|b2=0] + (1/2)P[b2 in error|b2=1].

With this (natural) mapping, b2 = 1 is decided in the regions (−Δ, 0) and (Δ, ∞). Conditioning on the two signals with b2 = 0 (sT = −3Δ/2 and sT = Δ/2, each occurring with probability 1/2):

  P[b2 in error|b2=0] = (1/2){P[−Δ < rI < 0 | sT = −3Δ/2] + P[rI > Δ | sT = −3Δ/2]}
                      + (1/2){P[−Δ < rI < 0 | sT = Δ/2] + P[rI > Δ | sT = Δ/2]}
                      = (3/2)Q(Δ/√(2N0)) − Q(3Δ/√(2N0)) + (1/2)Q(5Δ/√(2N0)).

By symmetry, P[b2 in error|b2=1] is the same, so

  P[b2 in error] = (3/2)Q(Δ/√(2N0)) − Q(3Δ/√(2N0)) + (1/2)Q(5Δ/√(2N0)),

and

  P[bit error] = (1/2)P[b1 in error] + (1/2)P[b2 in error]
               = Q(Δ/√(2N0)) − (1/4)Q(3Δ/√(2N0)) + (1/4)Q(5Δ/√(2N0)).

Compared with Gray coding the bit error probability is somewhat larger, basically Q(Δ/√(2N0)) as compared to (3/4)Q(Δ/√(2N0)), but not dramatically so.

(b) Now consider demodulation directly to the bits. The decision rules are:

  rI ≷ 0 (b1 = 1 if rI > 0);
  (rI < −Δ) or (0 < rI < Δ) ⇒ b2 = 0;  (−Δ < rI < 0) or (rI > Δ) ⇒ b2 = 1.

In terms of error probabilities, P[b1 in error] is the same as in P8.4(b). P[b2 in error] is determined by summing the four conditional region probabilities exactly as in (a):

  P[b2 in error|b2=0] = (3/2)Q(Δ/√(2N0)) − Q(3Δ/√(2N0)) + (1/2)Q(5Δ/√(2N0)),
  P[b2 in error|b2=1] = (3/2)Q(Δ/√(2N0)) − Q(3Δ/√(2N0)) + (1/2)Q(5Δ/√(2N0)),

so that

  P[b2 in error] = (3/2)Q(Δ/√(2N0)) − Q(3Δ/√(2N0)) + (1/2)Q(5Δ/√(2N0)),

and

  P[bit error] = (1/2)P[b1 in error] + (1/2)P[b2 in error]
               = Q(Δ/√(2N0)) − (1/4)Q(3Δ/√(2N0)) + (1/4)Q(5Δ/√(2N0)).

Again, the result is the same as demodulating to the symbol.

Figure 8.2: 8-ASK constellation with Gray mapping b1b2b3 = 000, 001, 011, 010, 110, 111, 101, 100 at rI = −7Δ/2, −5Δ/2, −3Δ/2, −Δ/2, Δ/2, 3Δ/2, 5Δ/2, 7Δ/2.

P8.6 (a)

  rI ≷ 0 (b1 = 1 if rI > 0);  |rI| ≷ 2Δ (b2 = 1 if |rI| < 2Δ);  if |rI| > 3Δ or |rI| < Δ then b3 = 0, else b3 = 1.

Graphically, the decision regions are illustrated in Fig. 8.3.

Figure 8.3: Decision regions on the rI axis. The boundaries ±3Δ, ±2Δ, ±Δ, 0 separate the symbols 000, 001, 011, 010, 110, 111, 101, 100; b1 = 0 for rI < 0 and b1 = 1 for rI > 0; b2 = 1 for |rI| < 2Δ; b3 = 1 for Δ < |rI| < 3Δ.

(b) For brevity write Qk ≜ Q(kΔ/√(2N0)). Consider first bit b1. By symmetry,

  P[b1 in error] = P[b1 in error|b1 = 0]
                 = P[rI > 0 | sT is one of −7Δ/2, −5Δ/2, −3Δ/2, −Δ/2, b1 = 0] · P[each sT | b1 = 0],

where each of the four signals occurs with probability 1/4, so

  P[b1 in error] = (1/4)[Q1 + Q3 + Q5 + Q7].

Now consider bit b2 (signals with b2 = 0 are at ±5Δ/2, ±7Δ/2; those with b2 = 1 at ±Δ/2, ±3Δ/2):

  P[b2 in error|b2=0] = (1/4)·2[(Q1 − Q9) + (Q3 − Q11)] = (1/2)[Q1 + Q3 − Q9 − Q11]   (error if |rI| < 2Δ),
  P[b2 in error|b2=1] = (1/4)·2[(Q3 + Q5) + (Q1 + Q7)] = (1/2)[Q1 + Q3 + Q5 + Q7]   (error if |rI| > 2Δ),

  P[b2 in error] = (1/4)[2Q1 + 2Q3 + Q5 + Q7 − Q9 − Q11].

Lastly consider bit b3 (signals with b3 = 0 are at ±Δ/2, ±7Δ/2; those with b3 = 1 at ±3Δ/2, ±5Δ/2; the b3 = 1 region is Δ < |rI| < 3Δ):

  P[b3 in error|b3=0] = (1/2)[2Q1 + Q3 − 2Q5 − Q7 + Q9 − Q13],
  P[b3 in error|b3=1] = (1/2)[2Q1 + 2Q3 − Q5 − Q7 + Q9 + Q11],

  P[b3 in error] = (1/4)[4Q1 + 3Q3 − 3Q5 − 2Q7 + 2Q9 + Q11 − Q13].

Using the result of P8.2 we ignore, with confidence, all terms except Q1 = Q(Δ/√(2N0)). Therefore:

  P[b1 in error] ≈ (1/4)Q(Δ/√(2N0)),
  P[b2 in error] ≈ (1/2)Q(Δ/√(2N0)),
  P[b3 in error] ≈ Q(Δ/√(2N0)),

  P[bit error] = (1/3)·(7/4)Q(Δ/√(2N0)) = (7/12)Q(Δ/√(2N0)).

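The three per-bit expressions above can be evaluated directly to confirm both the ordering P[b1] < P[b2] < P[b3] and the (7/12)Q(Δ/√(2N0)) approximation; a sketch (function name mine):

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

def ask8_bit_error_probs(a):
    """Per-bit error probabilities of Gray-coded 8-ASK, a = Delta/sqrt(2 N0),
    from the expressions derived above (q[k] stands for Q(k*a))."""
    q = [Q(k * a) for k in range(14)]
    p1 = (q[1] + q[3] + q[5] + q[7]) / 4
    p2 = (2 * q[1] + 2 * q[3] + q[5] + q[7] - q[9] - q[11]) / 4
    p3 = (4 * q[1] + 3 * q[3] - 3 * q[5] - 2 * q[7] + 2 * q[9] + q[11] - q[13]) / 4
    return p1, p2, p3

a = 3.0
p1, p2, p3 = ask8_bit_error_probs(a)
print(p1, p2, p3)
print((p1 + p2 + p3) / 3, 7 * Q(a) / 12)  # average vs the (7/12) Q(a) approximation
```

At any reasonable operating point the two printed averages agree to many decimal places.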
P8.7 Consider rectangular QAM of λ bits, with λI bits assigned to the I axis and λQ bits to the Q axis (λ = λI + λQ). Gray code the λI- and λQ-bit patterns separately, concatenate them, and assign the corresponding bit pattern to the (I, Q) signal.
Example: λI = λQ = 2. The Gray code for either axis is 00, 01, 11, 10. Therefore the result is as shown in Fig. 8.4.

P8.8 (a) See Fig. 8.4 for the bit pattern b1b2b3b4. Then the decision rules are:

  rQ ≷ 0 (b1 = 1 if rQ > 0);  |rQ| ≷ Δ (b2 = 0 if |rQ| > Δ);  rI ≷ 0 (b3 = 1 if rI > 0);  |rI| ≷ Δ (b4 = 0 if |rI| > Δ).

Note: rI, rQ are statistically independent Gaussian random variables, each of variance N0/2 and with a mean value that depends on the transmitted signal.

(b) Since rectangular 16-QAM is basically 2 independent 4-ASK modulations, the results of P8.2 apply directly.

(c) Make this bit a b1 or b3 bit.

Figure 8.4: Rectangular 16-QAM constellation with bit pattern b1b2b3b4 formed by concatenating the Q-axis Gray code b1b2 (00, 01, 11, 10 from bottom to top) with the I-axis Gray code b3b4 (00, 01, 11, 10 from left to right): top row 1000, 1001, 1011, 1010; then 1100, 1101, 1111, 1110; then 0100, 0101, 0111, 0110; bottom row 0000, 0001, 0011, 0010.

P8.9 (a) Constellation (a): The average symbol energy is equal to the squared distance of any signal from the origin. In terms of Δ this is Es = (Δ/(2 sin 22.5°))² = 1.71Δ² joules/symbol.

Constellation (b): The sum of the squared distances of the signals from the origin is 4(Δ²/4 + Δ²/4) + 4(9Δ²/4 + Δ²/4) = 12Δ². The average energy is Es = 12Δ²/8 = 1.5Δ² joules/symbol.

Constellation (c): Again, the sum of the squared distances is 4(Δ²/4 + Δ²/4) + 4·2(Δ/2 + Δ/√2)² = (8 + 4√2)Δ². Therefore Es = (1 + √2/2)Δ² = 1.71Δ² joules/symbol.

Constellation (d): The sum of the squared distances is 2(Δ²/4) + 2(9Δ²/4) + 4(Δ²/4 + Δ²) = 10Δ². Therefore Es = 10Δ²/8 = 1.25Δ² joules/symbol.

So from most efficient to least efficient the ranking is (d), (b), (c), (a).

Note: Since each symbol (in each constellation) represents 3 bits, the average bit energy is Eb = Es/3 joules/bit.

(b) See Fig. 8.5.
(c) See Fig. 8.6.
(d) The triangular constellation is most efficient at 1.125Δ² joules/symbol, which is

  10 log10(1.25/1.125) = 0.46 dB

better than the best constellation in Fig. 8.28.

Figure 8.5: Rectangular 8-QAM with Gray coding: the I-axis bits are Gray coded as 00, 01, 11, 10 and the Q-axis bit distinguishes the two rows, giving 000, 001, 011, 010 (bottom row) and 100, 101, 111, 110 (top row).

Figure 8.6: An 8-point constellation. The inner signals are most susceptible to error: they have the least space to maneuver in, at least the corresponding received signal does.

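The energy computations of P8.9 are easy to check numerically. The sketch below does so for two of the constellations, assuming (consistent with the distance sums above) that (a) is 8-PSK with dmin = Δ and (b) is the 2×4 rectangular grid:

```python
import math

d = 1.0  # the common minimum distance Delta

# (a) 8-PSK with d_min = Delta: every signal sits at radius Delta/(2 sin 22.5 deg)
Es_psk = (d / (2 * math.sin(math.radians(22.5)))) ** 2

# (b) 2x4 rectangular grid: coordinates (+-Delta/2, +-Delta/2) and (+-3Delta/2, +-Delta/2)
pts = [(x * d / 2, y * d / 2) for x in (-3, -1, 1, 3) for y in (-1, 1)]
Es_rect = sum(x * x + y * y for x, y in pts) / len(pts)

print(round(Es_psk, 3), round(Es_rect, 3))  # 1.707 1.5
```

These match the 1.71Δ² and 1.5Δ² figures above, and illustrate why, at equal dmin, the grid is the more power-efficient of the two.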
P8.10 (a) Fig. 8.7 shows the signal constellation with the signals to be Gray coded numbered. A solution is also given and explained on the figure.

(b) Assume that all 16 signal points are equally likely. The optimum decision boundaries of the minimum distance receiver are also sketched in Fig. 8.7.

Figure 8.7: V.29 constellation and Gray mapping. A graph of the signals to be Gray coded and their nearest neighbours consists of the chains 1-6-11, 2-7-12, 3-8-9 and 4-5-10. Walking down the 4-bit Gray code list, bit patterns are assigned, somewhat heuristically, to the numbered signals: 0000→1, 0001→2, 0011→3, 0010→4, 0110→5, 0111→10, 0101→7, 1101→12, 1010→11, 1001→9, 1011→8, 1000→6. The remaining 4 bit patterns (0100, 1100, 1111, 1110) can be arbitrarily assigned to the 4 remaining signals.

P8.11 (a) The two constellations have roughly the same symbol error performance since the minimum distance for both constellations is 2Δ.

To see which one is more power-efficient, one needs to compute the average (or total) energies of the two constellations:

  E(a) = (1/16)·4{(3² + 3²)Δ² + (1² + 1²)Δ² + 2(3² + 1²)Δ²} = 10.0Δ² (joules),
  E(b) = (1/16)·4[(√2Δ)² + ((√2+2)Δ)² + ((√2+4)Δ)² + ((√2+6)Δ)²] = 24.49Δ² (joules).

From the above results it can be seen that constellation (a) is much more power-efficient than constellation (b).

(b) See Fig. 8.8. Gray code by trial and error, but intelligent trial and error.

(c) See Fig. 8.8. The four outer (corner) signals are least susceptible to error since they have the fewest nearest neighbors, and hence the most space for the received signal to roam in.

Figure 8.8: Gray coding of the 16-point cross-shaped constellation: top arm (from the top) 0100, 0101, 0111, 0110; middle row 0000, 0001, 0011, 0010, 1110, 1111, 1101, 1100; bottom arm 1010, 1011, 1001, 1000.

P8.12 (a) By observation, dmin is maximized when h = v = Δ (= dmin), which means that tan θ = (Δ/2)/(3Δ/2) = 1/3 ⇒ θ = 18.4°.

(b) Gray code the vertical bits and the horizontal bits separately and then concatenate them. The result is shown in Fig. 8.9.

(c) For the present constellation: Es = (3/2)d²min + d²min = (5/2)d²min ⇒ dmin = √(2Es/5) = 0.63√Es.
For 8-PSK the geometry looks like Fig. 8.10 and dmin = 2√Es sin(22.5°) = 0.765√Es, which is better.

Figure 8.9: 2×4 rectangular 8-QAM with concatenated Gray codes: horizontal bits 00, 01, 11, 10 and vertical bit 0 (bottom) / 1 (top), giving 000, 001, 011, 010 (bottom row) and 100, 101, 111, 110 (top row); the spacing is dmin in both directions.

Figure 8.10: 8-PSK geometry: the signals are 45° apart, so dmin = 2√Es sin(22.5°) = 0.765√Es.

Figure 8.11: Geometry of the hexagonal constellation: (dmin/2)² + h² = d²min ⇒ h² = 3d²min/4, i.e., h = (√3/2)dmin.

P8.13 (a) See Fig. 8.11.

(b) There are 6 signals with energy d²min = 3Δ², 1 signal with zero energy, and 1 signal with (dmin cos 30°)² + (3dmin/2)² = (3/4 + 9/4)·3Δ² = 9Δ². Therefore

  Es = [6(3) + 9]Δ²/8 = (27/8)Δ² joules/signal,
  Eb = Es/(3 bits/signal) = (9/8)Δ² joules/bit.

(c) See Fig. 8.12.

(d) No, not possible. The signal at the origin has 6 nearest neighbors, and it is not possible to find six 3-bit patterns that each differ from its 3-bit pattern by only 1 bit: at most there are three such patterns.

Figure 8.12: Decision boundaries of the hexagonal constellation in the (rI, rQ) plane.

Figure 8.13: The 16APSK constellation used in DVB-S2: 4 inner-ring signals of radius R1 and 12 outer-ring signals of radius R2, the outer-ring signals spaced π/6 apart starting at π/12 from the I axis.

P8.14 (a) From Fig. 8.13, one can calculate R1, R2 in terms of Δ as follows:

  Δ/2 = R2 sin(π/12) ⇒ R2 = Δ/(2 sin(π/12)) = 1.932Δ.   (8.9)
  Δ/2 = R1/√2 ⇒ R1 = Δ/√2 = 0.707Δ.   (8.10)

The average symbol energy is:

  Es(1) = (1/16)[12R2² + 4R1²] = (1/16)[44.79Δ² + 2Δ²] = 2.9245Δ².   (8.11)

(b) Similarly, from Fig. 8.14, R1, R2 can be calculated as:

  R1 = Δ/(2 sin(π/8)) = 1.307Δ.   (8.12)
  R2 = Δ/(2 tan(π/8)) + Δ cos(π/6) = 2.073Δ.   (8.13)

The average symbol energy is:

  Es(2) = (1/16)[8R1² + 8R2²] = (1/2)[R1² + R2²] = (1/2)[1.708 + 4.297]Δ² = 3.003Δ².   (8.14)

Since the minimum distances of both constellations are the same, the two constellations perform approximately identically in AWGN. The modified constellation has a larger average energy (Es(2) > Es(1)). Therefore it is less power-efficient compared to the DVB-S2 16APSK.

Furthermore, the demodulator for the (8,8) signal constellation appears to be more complicated than that of the (4,12) constellation. But to back this up you should propose block diagrams of the demodulators.

Figure 8.14: The modified (8,8) 16APSK constellation: 8 signals on each of two rings of radii R1 and R2; the angles π/8 and π/6 shown define the geometry.

(c) The (8,8) constellation cannot be Gray coded. This is based on the observation that 3 signals which are at equal distance (dmin) from each other, i.e., lie on the vertices of an equilateral triangle, cannot be Gray coded. This is true regardless of the length of the bit sequence.

Proof:
1) Let a, b, c be 3 unique n-bit sequences.
2) Let a and b differ in one bit, say in the jth position.
3) Now suppose c differs from a by only one bit. This cannot be in the jth position, because then it would be identical to b. So let it differ in the ith position.
4) But this means that c differs from b in the jth and ith positions, hence violating the Gray mapping rule. Remember Gray code means that 2 binary sequences that are at minimum distance differ by only one bit.

The (4,12) constellation, however, can be Gray coded. See the Gray code table for 4-bit sequences in Fig. 8.15.

Figure 8.15: A Gray mapping for the (4,12) constellation. Walking down the 4-bit Gray code list (0000, 0001, 0011, 0010, 0110, 0111, 0101, 0100, 1100, 1101, 1111, 1110, 1010, 1011, 1001, 1000), the assignments 0000, 0100, 1100, 1000 go to the signals on the inner circle and 0001, 0011, 0010, 0110, 0111, 0101, 1101, 1111, 1110, 1010, 1011, 1001 to the signals on the outer circle.

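The radius and average-energy computations (8.9)-(8.14) for both 16APSK variants are reproduced by the following sketch:

```python
import math

d = 1.0  # the common minimum distance Delta

# (4,12) DVB-S2 16APSK, Eqns (8.9)-(8.11)
R2 = d / (2 * math.sin(math.pi / 12))
R1 = d / math.sqrt(2)
Es_4_12 = (12 * R2 ** 2 + 4 * R1 ** 2) / 16

# modified (8,8) 16APSK, Eqns (8.12)-(8.14)
R1m = d / (2 * math.sin(math.pi / 8))
R2m = d / (2 * math.tan(math.pi / 8)) + d * math.cos(math.pi / 6)
Es_8_8 = (8 * R1m ** 2 + 8 * R2m ** 2) / 16

print(round(R2, 3), round(R1, 3), round(Es_4_12, 3))   # 1.932 0.707 2.924
print(round(R1m, 3), round(R2m, 3), round(Es_8_8, 3))  # 1.307 2.073 3.002
```

The output confirms Es(2) > Es(1): at equal dmin the (8,8) arrangement needs roughly 0.1 dB more average power.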
P8.15 (a) The probability of symbol error can be written as

  P[symbol error] = P[(error on I axis) or (error on Q axis)]
                  = P[error on I axis] + P[error on Q axis] − P[(error on I axis) and (error on Q axis)].

Now P[error on I axis] is that of M-ASK with M = MI = 2^λI (the number of signal components along the I axis). Similarly, P[error on Q axis] is that of M-ASK with M = MQ = 2^λQ. Finally, P[(error on I axis) and (error on Q axis)] = P[error on I axis]·P[error on Q axis] because the two events are statistically independent; this is because the noise components wI, wQ along the respective I and Q axes are statistically independent. Using Eqn. (8.18) on page 307 we obtain

  P[symbol error] = (2(MI−1)/MI) Q(Δ/√(2N0)) + (2(MQ−1)/MQ) Q(Δ/√(2N0)) − (4(MI−1)(MQ−1)/(MI MQ)) Q²(Δ/√(2N0)).

Since it is more meaningful to express the error probability in terms of Eb, the average transmitted energy per bit, we find the relationship between Eb and Δ. First we determine the average transmitted energy per symbol (or signal). The transmitted signal can be written as

  sij(t) = (2i−1−MI)(Δ/2)√(2/Ts) cos(2πfc t) + (2j−1−MQ)(Δ/2)√(2/Ts) sin(2πfc t),
  i = 1, 2, ..., MI = 2^λI;  j = 1, 2, ..., MQ = 2^λQ.

The energy of the (i,j)th signal is Eij = (2i−1−MI)²Δ²/4 + (2j−1−MQ)²Δ²/4. Therefore

  Es = (1/(MI MQ)) Σi Σj Eij = (Δ²/(4M)) [ MQ Σi (2i−1−MI)² + MI Σj (2j−1−MQ)² ],

where M = MI MQ = 2^λ. Consider the sum Σ_{i=1}^{MI} (2i−1−MI)² (the other sum is the same with MI and MQ interchanged). It equals Σ_{i=1}^{MI} [4i² − 4(1+MI)i + (1+MI)²]. Using the identities Σ_{k=1}^n k² = n(n+1)(2n+1)/6 and Σ_{k=1}^n k = n(n+1)/2 and straightforward algebra, the sum becomes (MI²−1)MI/3. More algebra yields

  Es = (Δ²/12)(MI² + MQ² − 2) joules/symbol.

Now Eb = Es/log2 M joules/bit, which means that Δ = 2√(3 log2 M · Eb/(MI² + MQ² − 2)). Therefore

  P[symbol error] = (2(MI−1)/MI) Q(√( (6 log2 M/(MI² + MQ² − 2)) Eb/N0 ))
                  + (2(MQ−1)/MQ) Q(√( (6 log2 M/(MI² + MQ² − 2)) Eb/N0 ))
                  − (4(MI−1)(MQ−1)/M) Q²(√( (6 log2 M/(MI² + MQ² − 2)) Eb/N0 )).

It is worthwhile, because of all the algebra, to see if the above error probability expression reduces to (8.47), page 320 of the text, for square QAM, i.e., MI = MQ = √M. Plots of P[symbol error] versus Eb/N0 for non-square QAM are also of interest. To reduce the possibilities, consider the special case λI = λQ + 1, i.e., λI = (λ+1)/2, λQ = (λ−1)/2, λ odd: a special case, but the most practical one. Left as a further exercise.

It is also of interest to see how the average energy divides between the inphase and quadrature axes. Along the quadrature axis we have

  EQ = (1/MQ) Σ_{j=1}^{MQ} (2j−1−MQ)² (Δ²/4) = (Δ²/12)(MQ² − 1) joules/quadrature symbol,

and similarly EI = (Δ²/12)(MI² − 1) joules/inphase symbol. Note EQ + EI = Es (as would be expected); further, for square QAM, EI = EQ = Es/2, i.e., the energy is divided evenly. Now consider non-square QAM:

  EI/EQ = (MI² − 1)/(MQ² − 1) = (2^{2λI} − 1)/(2^{2λQ} − 1).

Now let λI = λQ + 1. Then the ratio becomes

  EI/EQ = (2^{λ+1} − 1)/(2^{λ−1} − 1), λ odd.

As λ becomes large, EI/EQ → 4; indeed it does this quite rapidly.

(b) The union upper bound is obtained by simply ignoring the overlapping region, i.e., the term P[(error on I axis) and (error on Q axis)], or the Q²(·) term.

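The closed-form expression derived in (a) is straightforward to evaluate; a sketch (function name mine):

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

def rect_qam_ser(lam_I, lam_Q, EbN0_dB):
    """Symbol error probability of rectangular 2^lam_I x 2^lam_Q QAM,
    using the exact expression derived above (not the union bound)."""
    MI, MQ = 2 ** lam_I, 2 ** lam_Q
    M = MI * MQ
    arg = math.sqrt(6 * math.log2(M) / (MI ** 2 + MQ ** 2 - 2)
                    * 10 ** (EbN0_dB / 10))
    pI = 2 * (MI - 1) / MI * Q(arg)   # I-axis (MI-ASK) error probability
    pQ = 2 * (MQ - 1) / MQ * Q(arg)   # Q-axis (MQ-ASK) error probability
    return pI + pQ - pI * pQ          # subtract the double-counted overlap

print(rect_qam_ser(2, 2, 10))  # square 16-QAM
print(rect_qam_ser(3, 2, 10))  # non-square 32-QAM (8 x 4), lam = 5 odd
```

Note that pI·pQ equals the 4(MI−1)(MQ−1)/(MI MQ)·Q²(·) term of the text, so the subtraction implements the overlap correction exactly.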
P8.16 In general,

  P[symbol error] = Σ_{i=1}^{M} P[error|si(t)] P[si(t)]
                  = (1/M) Σ_{i=1}^{M} P[error|si(t)]   (since the signals are equally probable)
                  ≤ P[worst-case conditional error].

From the geometry, the worst-case conditional error occurs when an inner signal is transmitted, and it equals the volume under the 2-dimensional pdf in the region outside the square, or 1 − (volume under the pdf inside the square). See Fig. 8.16.

Figure 8.16: An inner signal point and its square decision region of side Δ.

The volume under the pdf inside the square is [1 − 2Q((Δ/2)/√(N0/2))]². Therefore,

  P[symbol error] ≤ 1 − [1 − 2Q((Δ/2)/√(N0/2))]² = 1 − [1 − 2Q(Δ/√(2N0))]².

Now from P8.15, Δ = 2√(3Es/(MI² + MQ² − 2)). It then follows that

  P[symbol error] ≤ 1 − [1 − 2Q(√(6Es/((MI² + MQ² − 2)N0)))]².

A somewhat looser upper bound can be obtained by noting that 2M ≤ MI² + MQ² < 2M². For square QAM, MI² + MQ² = 2M and the result, Eqn. (8.48), follows immediately. For non-square QAM, where MI² + MQ² < 2M², the reasoning is as follows: replacing MI² + MQ² − 2 by 2M² − 2, the argument of the Q(·) function is smaller ⇒ Q(·) is larger ⇒ 1 − 2Q(·) is smaller ⇒ [1 − 2Q(·)]² is smaller ⇒ 1 − [1 − 2Q(·)]² is larger ⇒

  P[symbol error] ≤ 1 − [1 − 2Q(√(6Es/((2M² − 2)N0)))]².   (8.15)

Note that 6Es/((2M² − 2)N0) = 3Es/((M² − 1)N0) = 3λEb/((M² − 1)N0).

Remark: We show that indeed MI² + MQ² ≥ 2M, or 2^{2λI} + 2^{2λQ} ≥ 2·2^λ. Let λI = λ/2 + k, λQ = λ/2 − k, k ≥ 0. Then 2^{2λI} + 2^{2λQ} = 2^λ(2^{2k} + 2^{−2k}), and since 2^{2k} + 2^{−2k} ≥ 2 for any k, indeed MI² + MQ² ≥ 2M. The inequality MI² + MQ² < 2M² is quite obvious.
P8.17 The verification involves 2 changes of variable. First, in the integral with respect to r, let x = √(2/N0) r. That integral becomes

  ∫_{x=−∞}^{√(2/N0) r1} (1/√(2π)) e^{−x²/2} dx.

Now let y = √(2/N0) r1. Then

  P[correct] = ∫_{y=−∞}^{∞} (1/√(2π)) [ ∫_{x=−∞}^{y} (1/√(2π)) e^{−x²/2} dx ]^{M−1} e^{−(y−√(2Es/N0))²/2} dy.

Finally, observe that Es = λEb = log2 M · Eb and that P[error] = 1 − P[correct].

P8.18 The transmitted signal can be written as:

  si(t) = (±√Eb)φ1(t) + (±√Eb)φ2(t) + ··· + (±√Eb)φj(t) + ··· + (±√Eb)φλ(t), i = 1, 2, ..., M,

where the jth component is +√Eb if the jth bit of the transmitted bit sequence is 1 and −√Eb if the jth bit is 0; j = 1, 2, ..., λ.

The demodulator consists of projecting the received signal r(t) = si(t) + w(t) onto φ1(t), φ2(t), ..., φλ(t) to generate the sufficient statistics r1, r2, ..., rλ, and using minimum distance to carve up the decision space. Geometrically, this consists of choosing the signal point that lies in the quadrant that r1, r2, ..., rλ fall in (convince yourself of this for λ = 2 and λ = 3; perhaps λ = 4 if you are really ambitious).

Now, because of symmetry (and the fact that the signals are equally probable), one has

  P[error] = P[error|a specific signal or bit sequence] = 1 − P[correct|a specific signal].

Choose for the specific signal the bit sequence (111···11), or (√Eb, √Eb, ..., √Eb), the signal that lies in the first quadrant. Now,

  P[correct|this specific signal] = P[r1 > 0, r2 > 0, ..., rλ > 0],

where the rj's are Gaussian, mean value √Eb, variance N0/2 and statistically independent. Therefore,

  P[correct|this specific signal] = Π_{j=1}^{λ} P[rj > 0] = [1 − Q(√(2Eb/N0))]^λ.

Finally,

  P[error] = 1 − [1 − Q(√(2Eb/N0))]^λ.

Remark: The signal points also lie on the surface of a hypersphere of radius √(λEb).

P8.19 (a) Simply put, the transmission bandwidth is halved. It is directly proportional to the dimensionality of the signal space.

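The closed-form result of P8.18 can be evaluated directly. Since each bit is an independent antipodal (BPSK-like) decision, for small Q the symbol error probability is about λ times the BPSK bit error rate; the sketch below (function name mine) shows this:

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

def hypercube_pe(lam, EbN0_dB):
    # P8.18: P[error] = 1 - (1 - Q(sqrt(2 Eb/N0)))^lam for the 2^lam-point hypercube
    q = Q(math.sqrt(2 * 10 ** (EbN0_dB / 10)))
    return 1 - (1 - q) ** lam

for lam in (1, 2, 4):
    print(lam, hypercube_pe(lam, 9.6))  # roughly proportional to lam
```

The λ = 1 case is exactly BPSK, which is a useful sanity check on the formula.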
(b) The sufficient statistics are generated by projecting the received signal onto the M/2 orthonormal bases (which are the si(t) normalized to unit energy) to obtain r1, r2, ..., rj, ..., r_{M/2}. Statistically, rj is Gaussian with variance N0/2 and a mean value of 0, +√Eb, or −√Eb: zero if the transmitted signal is not ±sj(t), +√Eb if the transmitted signal is +sj(t), and −√Eb if the transmitted signal is −sj(t). Note that we assume, for convenience, that each signal sj(t) has energy Eb. To obtain the decision rule, consider first the special case of M = 4 and generalize from there.

Figure 8.17: Decision regions for M = 4: signals ±s1(t) at ±√Eb on the φ1 axis and ±s2(t) at ±√Eb on the φ2 axis; the boundaries are the lines r2 = r1 and r2 = −r1.

The decision rule is: if |r1| > |r2| and r1 > 0, choose s1(t); if |r1| > |r2| and r1 < 0, choose −s1(t). To generalize: compute |rj|, j = 1, 2, ..., M/2, and determine the maximum, say |ri|. Then the decision is si(t) if ri > 0 and −si(t) if ri < 0.

(c) Due to the symmetry of the decision space, we have

  P[symbol error] = P[symbol error|any specific transmitted signal]
                  = 1 − P[correct decision|any specific transmitted signal].

Choose the specific transmitted signal to be s1(t). Then s1(t) is chosen if

  (r1 > 0) and (−r1 ≤ r2 ≤ r1) and (−r1 ≤ r3 ≤ r1) and ··· and (−r1 ≤ r_{M/2} ≤ r1).

To determine the probability of the above happening, fix the random variable r1 at a specific value, say r1 = r̃1 ≥ 0. Note that the probability of this is f_{r1}(r̃1|s1(t)) dr̃1, where the pdf is N(√Eb, N0/2). Then the probability of the event

  [(−r̃1 ≤ r2 ≤ r̃1) and (−r̃1 ≤ r3 ≤ r̃1) and ··· and (−r̃1 ≤ r_{M/2} ≤ r̃1) | s1(t)]

is given by Π_{i=2}^{M/2} P[−r̃1 ≤ ri ≤ r̃1 | s1(t)], where the random variables are Gaussian, zero-mean (given that the transmitted signal is s1(t)), variance N0/2 and statistically independent. Therefore, the probability becomes

  Π_{i=2}^{M/2} ∫_{−r̃1}^{r̃1} (1/√(2π(N0/2))) e^{−ri²/(2(N0/2))} dri = [1 − 2Q(√(2/N0) r̃1)]^{M/2 − 1}.

Now 0 ≤ r̃1 < ∞, i.e., we find the weighted sum of the above, weighted by the probability of r1 = r̃1:

  P[symbol error] = 1 − (1/√(πN0)) ∫_{r̃1=0}^{∞} [1 − 2Q(√(2/N0) r̃1)]^{M/2 − 1} e^{−(r̃1−√Eb)²/N0} dr̃1.

It would be useful to plot and compare the above error probability with that of orthogonal modulation (such as M-FSK), either for the same number of signal points or the same dimensionality of the signal space.

P8.20 (a) See Fig. 8.18.


(b) It can be seen from Fig. 8.18 that the USSB waveform does not have the desirable
Ng

property of constant envelope of BPSK signal. This is the price one has to pay for a
reduced (half) bandwidth of the USSB signal.

P8.21 One has to establish a bandwidth definition and also the error probability at which the
signalling techniques are compared. In the text (Section 8.7 and Fig. 8.26) the bandwidth W
is defined as the reciprocal of the symbol duration. Other possible definition that could have
been used are null-to-null bandwidth, 95% power bandwidth and so on. The error probability
used to determine the SNR is 105 .
r
(a) Using Eqn. (8.33), W b
FSK
= 2 logM2 M , we have
rb 2 log2 21
2-FSK: W = 2 = 1.

Solutions Page 822


Nguyen & Shwedyk A First Course in Digital Communications

(a) BPSK signal

k
0 1 1 0 1 0
V

dy
0

0 Tb 2Tb 3Tb 4Tb 5Tb 6Tb

we
t
(b) USSBBPSK signal

0 1 1 0 1 0
V

0 Tb Sh 2Tb

Figure 8.18
3Tb
t
4Tb 5Tb 6Tb
&
rb 2 log2 22
4-FSK: W = 4 = 1.
And using Fig. 8.23 we have at P [symbol error] = 105 :
Eb
2-FSK: N0 = 12.6 dB.
en

Eb
4-FSK: N0 = 10.2 dB.
(b) The coordinates for the plot are given below.
(i) For FSK, use Fig. 8.23 to determine Eb/N0 and Eqn. (8.33) for rb/W. The results are:

    M                    =   8      16     32      64
    Eb/N0 @ 10^-2 (dB)   =   5.1    4.5    4.2     3.5
    Eb/N0 @ 10^-7 (dB)   =   10.4   9.3    8.5     7.8
    rb/W (bits/sec/Hz)   =   0.75   0.5    0.3125  0.1875

(ii) For PSK, use Fig. 8.15 and Eqn. (8.39) for M = 64 to determine Eb/N0. Use Eqn. (8.81) for rb/W. The results are:

    M                    =   2      4      8      16     32     64
    Eb/N0 @ 10^-2 (dB)   =   4.4    5.3    8.4    13     18     22.7
    Eb/N0 @ 10^-7 (dB)   =   11.25  11.6   14.8   19.5   24.5   29.7
    rb/W (bits/sec/Hz)   =   1      2      3      4      5      6

(iii) For QAM, use Fig. 8.20 to determine Eb/N0 and use Eqn. (8.81) for rb/W. The results are:

    M                    =   4      16     64
    Eb/N0 @ 10^-2 (dB)   =   5.3    9.7    14.4
    Eb/N0 @ 10^-7 (dB)   =   11.4   15.6   20
    rb/W (bits/sec/Hz)   =   2      4      6


Note for QAM: W = 1/Ts = 1/(Tb log2 M) ⇒ rb/W = log2 M.
The plots on the power-bandwidth plane are shown in Fig. 8.19. Note that the points shift toward Shannon's capacity curve for a larger value of SER. This does not mean that it is beneficial to increase the SER requirement. It merely says that one can approach Shannon's curve more easily by lowering the transmission reliability requirement. Shannon's work, however, promises to approach the curve with an arbitrarily low SER!

Figure 8.19: Power-bandwidth plane (rb/W in bits/s/Hz versus SNR per bit, Eb/N0 in dB) for FSK, PSK and QAM at SER = 10^−2 and 10^−7, together with the Shannon capacity limit at −1.6 dB.
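The PSK entries in the table above can be cross-checked numerically. The sketch below (an addition, not part of the manual) inverts the standard M-PSK symbol-error approximation P_s ≈ 2Q(√(2 log2 M · Eb/N0) sin(π/M)) by bisection; the formula, the bisection bracket and the tolerance are assumptions of this sketch.

```python
import math

def Q(x):
    # Gaussian tail probability Q(x) = 0.5*erfc(x/sqrt(2)).
    return 0.5 * math.erfc(x / math.sqrt(2))

def psk_ser(ebn0_db, M):
    # Standard M-PSK approximation (exact for M = 2):
    # P_s ~= 2 Q( sqrt(2 log2(M) Eb/N0) * sin(pi/M) ).
    g = 10 ** (ebn0_db / 10) * math.log2(M)
    if M == 2:
        return Q(math.sqrt(2 * g))
    return min(1.0, 2 * Q(math.sqrt(2 * g) * math.sin(math.pi / M)))

def required_ebn0_db(M, target_ser, lo=-5.0, hi=40.0):
    # SER decreases monotonically in Eb/N0, so bisect on the dB value.
    for _ in range(100):
        mid = (lo + hi) / 2
        if psk_ser(mid, M) > target_ser:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

for M in (2, 4, 8, 16, 32, 64):
    print(M, round(required_ebn0_db(M, 1e-2), 1), math.log2(M))
```

For M = 2 and M = 4 this reproduces the 4.4 and 5.3 dB entries at SER = 10^−2; for larger M the approximation drifts up to about 1 dB from the values read off the figures.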

P8.22 (a) Invoking the sampling theorem we have 2W samples/sec.


(b) The answer in (a) means that in T seconds there are 2WT independent time samples; independent in the sense that they are all needed to represent the time signal.
(c) Any set of orthogonal functions is linearly independent in the usual mathematical sense that there is no set of nonzero coefficients {c1, c2, . . . , cN}, N being the number of orthogonal functions, such that Σ_{k=1}^{N} ck φk(t) = 0. Each orthogonal function can provide an independent sample. Since we need 2WT independent samples it follows that one needs 2WT orthogonal functions.


(d)
S_RECT(f) = ∫_{−T/2}^{T/2} V e^{−j2πft} dt = VT sin(πfT)/(πfT)
and in passband this becomes
(VT/2) [ sin(π(f − fc)T)/(π(f − fc)T) + sin(π(f + fc)T)/(π(f + fc)T) ].


From the sketch of the above function it is (or should be) readily seen that the null-to-null bandwidth is W = 2/T Hz. This means that WT = 2 and that 2WT = 4, i.e., we can fit 4 orthogonal time functions in this bandwidth.
Remark: The above derivation is very heuristic; it however agrees well with the sophisticated mathematical analysis given by Landau, Pollak & Slepian. Their definition of bandwidth is different; remember no time-limited signal can be bandlimited - the Gordian knot of signal analysis. The achieved result however agrees very well with their conclusions. Perhaps the moral is: one should not let sophisticated mathematics stand in the way of good engineering. Always keep in mind that math for engineers is a tool, albeit a very important one.

we
P8.23 (a) To show that two
R time functions are orthogonal over an interval of T seconds, one needs
to show that tT s1 (t)s2 (t)dt = 0. Since here we are dealing with sinusoids we can
dispense with the integration by recalling that an integer number of cycles of a sinusoid
in the time interval T means the area (or the integral of it) under it is zero. Recall
further the following trig identities:

Sh
cos x cos y =

sin x sin y =

cos x sin y =
1
2
1
2
1
[cos(x + y) + cos(x y)]

[cos(x y) cos(x + y)]

[sin(x + y) sin(x y)]


2
&
Show φ01(t) ⊥ φ02(t):
φ01(t)φ02(t) = cos(πt/T) sin(πt/T) cos²(2πfc t) = (1/2) sin(2πt/T) · (1/2)[1 + cos(2π(k/T)t)]
= (1/4)[ sin(2πt/T) + (1/2) sin(2π((k+1)/T)t) − (1/2) sin(2π((k−1)/T)t) ]
The sinusoids have 1, (k + 1), (k − 1) cycles, respectively, over T sec and therefore the area under them is 0. Therefore φ01(t) ⊥ φ02(t).
uy

Show φ01(t) ⊥ φ03(t):
φ01(t)φ03(t) = cos²(πt/T) cos(2πfc t) sin(2πfc t) = (1/4)[1 + cos(2πt/T)] sin(2π(k/T)t)
= (1/4)[ sin(2π(k/T)t) + (1/2) sin(2π((k+1)/T)t) + (1/2) sin(2π((k−1)/T)t) ]
Here there are k, (k + 1), (k − 1) cycles respectively over T sec ⇒ area = 0 ⇒ φ01(t) ⊥ φ03(t).

Show φ01(t) ⊥ φ04(t):
φ01(t)φ04(t) = cos(πt/T) sin(πt/T) cos(2πfc t) sin(2πfc t) = (1/4) sin(2πt/T) sin(2π(k/T)t)
= (1/8)[ cos(2π((k−1)/T)t) − cos(2π((k+1)/T)t) ]
(k − 1), (k + 1) cycles over T ⇒ area = 0 ⇒ φ01(t) ⊥ φ04(t).

Show φ02(t) ⊥ φ03(t):
φ02(t)φ03(t) = sin(πt/T) cos(πt/T) cos(2πfc t) sin(2πfc t)
Note this product is the same as φ01(t)φ04(t). Therefore it follows that the area under φ02(t)φ03(t) is zero over T sec and that φ02(t) ⊥ φ03(t).
Show φ02(t) ⊥ φ04(t):
φ02(t)φ04(t) = sin²(πt/T) cos(2πfc t) sin(2πfc t)
= [1 − cos²(πt/T)] cos(2πfc t) sin(2πfc t)
= cos(2πfc t) sin(2πfc t) − cos²(πt/T) cos(2πfc t) sin(2πfc t)
The area under the first term is 0 over T sec because, as noted in the problem, the two carriers are orthogonal over T sec. The second term is the same as φ01(t)φ03(t) and we have shown that this area is zero. Therefore φ02(t) ⊥ φ04(t).
&
Show φ03(t) ⊥ φ04(t):
φ03(t)φ04(t) = cos(πt/T) sin(πt/T) sin²(2πfc t)
= cos(πt/T) sin(πt/T) − cos(πt/T) sin(πt/T) cos²(2πfc t)
The first term has area = 0 because the pulses p1(t) ⊥ p2(t); the second term is the same as φ01(t)φ02(t), whose area over T sec is 0. Therefore φ03(t) ⊥ φ04(t).

Remark: Given n functions, one needs to check n(n − 1)/2 pairwise conditions to see that they form an orthogonal set. Here n = 4 and n(n − 1)/2 = 6 conditions.
(b) Though one should find the normalizing factor for each basis function, well determine
it for the 1st one and trust the gods (or intuition) it is the same for the other three.
Ng

Now
Z T Z T
2 2 2 t
2
0
1 (t) dt = cos cos2 (2fc t) dt
T2 T2 T
Z T
2 1 1 2t 1 1 2(2fc )t T
= + cos + cos dt =
T 2 2
2
T 2 2 T 4
q
4 2
Therefore multiply each basis function by T = T
to normalize.
(c) Let i (t) = 2 0 (t), i = 1, 2, 3, 4, T2 t T
T i 2. See Fig. 8.20.


Figure 8.20: Q²PSK modulator. The input bit sequence (bit interval Tb) is demultiplexed into four streams b4i, b4i+1, b4i+2, b4i+3 (4Tb = Ts = T); each stream drives an NRZ-L modulator of level ±√Eb whose output multiplies the corresponding basis function φ1(t − iT), . . . , φ4(t − iT), i = . . . , −1, 0, 1, 2, . . ., and the four products are summed to form s(t).
&
(d) The transmitted signal s(t) is given by
s(t) = ±√Eb φ1(t) ± √Eb φ2(t) ± √Eb φ3(t) ± √Eb φ4(t)
where a coefficient is +√Eb if the corresponding bit is 1 and −√Eb if the bit is 0 (assumed).
The signal space is 4-dimensional, with basis functions φi(t), i = 1, 2, 3, 4. The signals lie on the vertices of a four-dimensional hypercube with coordinates ranging from (−√Eb, −√Eb, −√Eb, −√Eb) to (+√Eb, +√Eb, +√Eb, +√Eb), corresponding to bit patterns (0000) to (1111) (assuming that 0 ↔ −√Eb, 1 ↔ +√Eb).
There are 16 signals, one on each vertex. The minimum distance between two signals occurs when a pair differ in only one component and is equal to 2√Eb.
Finally the number of nearest neighbors is 4. Note that in an n-dimensional hypercube any vertex has n nearest vertices.
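As a quick check of this geometry, the following sketch (an addition, not from the manual; Eb = 1 is an arbitrary illustrative value) enumerates the 16 vertices and verifies the minimum distance and the nearest-neighbor count:

```python
import itertools, math

Eb = 1.0  # illustrative energy per bit
a = math.sqrt(Eb)

# All 16 Q2PSK signal points: vertices of a 4-D hypercube at +/- sqrt(Eb).
points = list(itertools.product((-a, +a), repeat=4))

def dist(p, q):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(p, q)))

dmin = min(dist(p, q) for p, q in itertools.combinations(points, 2))
# Count the neighbors of one vertex that sit at distance dmin.
n_nearest = sum(1 for q in points[1:] if abs(dist(points[0], q) - dmin) < 1e-9)
print(dmin, n_nearest)
```

The run confirms dmin = 2√Eb and 4 nearest neighbors per vertex.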

P8.24 (a) The generation of the sufficient statistics is the same for the symbol demodulator and for
the bit demodulator(s). See Fig. 8.21. The difference between the two is in the decision
rule.

The decision rule for the symbol demodulator is: determine which quadrant r1 , r2 , r3 , r4
fall in and choose the signal (or bit pattern it represents) that lies in that quadrant.

For example, if (r1 > 0, r2 < 0, r3 > 0, r4 > 0) choose signal (+ Eb , Eb , + Eb , + Eb )
(or bit pattern 1011).


Figure 8.21: Demodulator. The received signal r(t) is correlated with each of φ1(t), φ2(t), φ3(t), φ4(t) (integrate over a window of length T and sample at t = kTs) to produce the sufficient statistics r1, r2, r3, r4, which feed the decision rule.

For the bit demodulator the decision rule is:
r1 ≷ 0 (decide b4i = 1 if r1 ≥ 0, b4i = 0 otherwise); and similarly r2 ≷ 0 for b4i+1, r3 ≷ 0 for b4i+2, r4 ≷ 0 for b4i+3.
&
(b) Due to symmetry we have
P[symbol error] = P[symbol error | a specific signal] = 1 − P[correct | a specific signal]
= 1 − P[correct | signal (+√Eb, +√Eb, +√Eb, +√Eb)]
and
P[correct | (+√Eb, +√Eb, +√Eb, +√Eb)] = P[r1 ≥ 0, r2 ≥ 0, r3 ≥ 0, r4 ≥ 0 | (+√Eb, +√Eb, +√Eb, +√Eb)]
Under the given conditions the random variables ri are Gaussian with mean √Eb, variance N0/2, and statistically independent. The probability is therefore
[ ∫_0^∞ (1/√(2π(N0/2))) e^{−(x − √Eb)²/(2(N0/2))} dx ]^4 = [1 − Q(√(2Eb/N0))]^4
Therefore
P[symbol error] = 1 − [1 − Q(√(2Eb/N0))]^4
P[bit error for the bit demodulator] = Q(√(2Eb/N0))


Note that the above result is just a special case of the more general result derived in P8.18 for a hypercube constellation.
The bit error probability of the symbol demodulator is the same as that of the bit demodulator. To see this consider bit b4i, i.e., the one that corresponds to sufficient statistic r1. A symbol error is caused by any one of r1, r2, r3, r4 or any subset of them being of the wrong sign. However bit b4i would still be correct if r1 is of the right polarity, regardless of what r2, r3, r4 are; the probability that r1 has the wrong polarity is Q(√(2Eb/N0)).
The bit error probability of Q²PSK is the same as BPSK and QPSK/OQPSK/MSK.
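A small numerical sketch (an addition, not the manual's) of the two expressions above, which also shows that the union-bound value 4Q(·) is an accurate stand-in at moderate SNR:

```python
import math

def Q(x):
    # Gaussian tail probability.
    return 0.5 * math.erfc(x / math.sqrt(2))

def q2psk_symbol_error(ebn0_db):
    # P[symbol error] = 1 - (1 - Q(sqrt(2 Eb/N0)))^4, as derived above.
    p = Q(math.sqrt(2 * 10 ** (ebn0_db / 10)))
    return 1 - (1 - p) ** 4

for snr_db in (4, 8, 10):
    p = Q(math.sqrt(2 * 10 ** (snr_db / 10)))
    print(snr_db, q2psk_symbol_error(snr_db), 4 * p)
```

The exact value is always slightly below 4Q(√(2Eb/N0)), the gap shrinking as SNR grows.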

P8.25 (a) Substituting in for the φi(t)'s and collecting terms we have:
s(t) = √(2Eb/T) { [b1 cos(πt/T) + b2 sin(πt/T)] cos(2πfc t) + [b3 cos(πt/T) + b4 sin(πt/T)] sin(2πfc t) }
The envelope is given by (ignoring the constant √(2Eb/T)):
e(t) = { [b1 cos(πt/T) + b2 sin(πt/T)]² + [b3 cos(πt/T) + b4 sin(πt/T)]² }^(1/2)
Using the fact that cos²x + sin²x = 1 (always) and that b²i = 1, one has
e(t) = [2 + 2(b1b2 + b3b4) cos(πt/T) sin(πt/T)]^(1/2) = [2 + (b1b2 + b3b4) sin(2πt/T)]^(1/2)
     = [2 + (b1b2 + b3b4) sin(πt/(2Tb))]^(1/2).   (8.16)
Plot of e(t) for all three different combinations of b1b2 + b3b4 is shown in Fig. 8.22.
(b) Need b1 b2 + b3 b4 = 0 b1 = b3b2b4 . Price paid is a reduction in the information rate.
Instead of 4 information bits transmitted every T seconds, there are only 3 information
bits transmitted now.

The coding scheme is represented by the following table:

    Real Number Arithmetic        Boolean Algebra
    b2  b3  b4  | b1              b2  b3  b4  | b1
    -1  -1  -1  | +1              0   0   0   | 1
    -1  -1  +1  | -1              0   0   1   | 0
    -1  +1  -1  | -1              0   1   0   | 0
    -1  +1  +1  | +1              0   1   1   | 1
    +1  -1  -1  | -1              1   0   0   | 0
    +1  -1  +1  | +1              1   0   1   | 1
    +1  +1  -1  | +1              1   1   0   | 1
    +1  +1  +1  | -1              1   1   1   | 0

Note that b1 = 1 if there is an even number of ones, otherwise b1 = 0. The encoder can be implemented by the following logic:
b1 = complement of (b2 ⊕ b3 ⊕ b4)
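The parity logic can be verified exhaustively. The sketch below (an addition, using the 0 → −1, 1 → +1 level mapping of the table) checks both forms of the encoder and the constant-envelope constraint b1b2 + b3b4 = 0:

```python
from itertools import product

# Check that the Boolean encoder b1 = NOT(b2 XOR b3 XOR b4) matches the
# real-arithmetic rule b1 = -b2*b3*b4 under the mapping 0 -> -1, 1 -> +1.
to_level = lambda bit: +1 if bit else -1

for b2, b3, b4 in product((0, 1), repeat=3):
    b1_bool = 1 - (b2 ^ b3 ^ b4)          # complemented parity
    b1_real = -to_level(b2) * to_level(b3) * to_level(b4)
    assert to_level(b1_bool) == b1_real
    # The constraint b1*b2 + b3*b4 = 0 then holds -> constant envelope.
    assert b1_real * to_level(b2) + to_level(b3) * to_level(b4) == 0
print("encoder table verified")
```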


Figure 8.22: Envelope e(t) over 0 ≤ t ≤ 12Tb for the three cases b1b2 + b3b4 = −2, 0 and +2.
&
(c) Eight: one for each allowable 3-bit pattern.
dmin = [(2√Eb)² + (2√Eb)²]^(1/2) = 2√2 √Eb.
(Note that signals closest together differ in 2 components).
(d) The block diagram is the same as that of P8.24(a). One difference could be that the sufficient statistic for b1 (b4i) does not need to be generated; only bits b2, b3, b4 represent information. The bit error probability is Q(√(2Eb/N0)).

(e) Yes, it can be used for error detection. After demodulating the bits to b1, b2, b3, b4, determine whether b1 equals the complement of b2 ⊕ b3 ⊕ b4. If this is not true then an error has been made. Note that the error could be on the redundant bit b1. If the relationship is satisfied then either no errors were made or an undetectable number of errors have been made. You may wish to convince yourself that an odd number of errors (i.e., 1 or 3) is detectable while an even number (2 or 4) is not.

P8.26 (a) Consider E{si(t)sj(t + τ)}, i ≠ j. This, with a bit of creativity in notation, is:
Σ_{k=−∞}^{∞} Σ_{l=−∞}^{∞} E{bik bjl} (cos or sin)(πt/(4Tb)) · (cos or sin)(π(t + τ)/(4Tb))
But E{bik bjl} = 0 ∀i, j; i ≠ j. Therefore any two signals are uncorrelated.


(b)
Rs1(τ) = E{s1(t)s1(t + τ)} = Σ_{k=−∞}^{∞} Σ_{l=−∞}^{∞} E{b1k b1l} cos(πt/(4Tb)) cos(π(t + τ)/(4Tb))
Rs3(τ) = Σ_{k=−∞}^{∞} Σ_{l=−∞}^{∞} E{b3k b3l} cos(πt/(4Tb)) cos(π(t + τ)/(4Tb))
But E{b1k b1l} = E{b3k b3l}, since both equal 1 if k = l and 0 if k ≠ l. Therefore the two autocorrelations are the same.
Use the same argument to show that Rs2(τ) = Rs4(τ).
(c) The PSD component due to s1 (t) (or s3 (t)) is 4T1 b |H(f )|2 .
n o
t
To find H(f ), determine F cos 4T and F {u(t + 2Tb ) u(t 2Tb )} which are
h i b
1 1 1 4Tb sin (4f Tb )

Sh
2 (f 8Tb ) + (f + 8Tb ) and 4f Tb , respectively.
Now convolve and simplify to obtain:

cos (4f Tb )
H(f ) = 2Tb 2
(4f Tb )2 4

t
For s2 (t), h(t) = sin 4T [u(t + 2Tb ) u(t 2Tb )] and we now convolve
&
h b
i
1
2j (f 1
8Tb ) + (f + 1
8Tb ) and 4Tb sin (4f Tb )
4f Tb . This gives

16Tb f Tb cos (4f Tb )


H(f ) = 2
j (4f Tb )2 4
en

The overall PSD is


1 + 64(f Tb )2
2 Tb cos2 (4f Tb ) h i2 (watts/Hz).
2
(4f Tb )2 4
uy

(d) Plot of the resultant PSD is shown in Fig. 8.23.


Figure 8.23: Power spectral density (dB) of the P8.26 signal versus f Tb.
Chapter 9

Signaling Over Bandlimited Channels

P9.1 A bandwidth of 4 kHz in the passband (i.e., around fc) means that in baseband we have 2 kHz of bandwidth. Since a bit rate of 9600 bits/second means that at minimum 1/(2Tb) = rb/2 = 4800 Hz of bandwidth is needed to transmit with zero ISI, obviously an M-ary modulation is required to reduce the bandwidth requirement. Since QAM has been specified as the modulation paradigm, determine the number of bits λ a symbol needs to carry (or represent) to achieve not only zero ISI but also α ≥ 0.5 (the roll-off factor spec.). Do this by trial and error: start with λ = 3, i.e., 3 bits/symbol. Then the symbol transmission rate is rs = (9600 bits/second)/(3 bits/symbol) = 3200 symbols/sec. Using the hint this means that the required bandwidth needed for zero ISI is 1/(2Ts) = rs/2 = 1.6 kHz. Since we have 2 kHz this means that we have an excess of bandwidth, the excess being 0.4 kHz. But is this enough to achieve α ≥ 0.5? Let's check:
(1 + α)/(2Ts) = rs/2 + α rs/2 = 2 kHz ⇒ α = 0.4/1.6 = 0.25 < 0.5
Back to the drawing board. (An auxiliary question is: when the developer of the drawing board made an error in the drawing board, what did she go back to?)
Try λ = 4. Now rs = 9600/4 = 2400 symbols/sec. Since rs/2 = 1.2 kHz we have 800 Hz (or 0.8 kHz) of excess bandwidth ⇒ 1.2 kHz + 0.8 kHz = 2 kHz ⇒ α = 0.8/1.2 = 2/3 > 0.5, so the spectrum spec is satisfied, i.e., 16-QAM is the modulation to use.
Finally we need to satisfy the bit error probability of 10^−6. To do this consider the 16-QAM modulation as two 4-ASK modulations, i.e., 2 bits on the inphase axis, 2 bits on the quadrature axis. From Eqn. (8.27) we have
P[symbol error] ≈ 2P[bit error] = 2 × 10^−6
And from Fig. 8.8, using the M = 4 curve, this means Eb/N0 ≈ 14.5 dB, or 10^1.45 = 28.2, so Eb = 28.2 × 10^−8 joules. The average transmitted power is
Pav = Es/Ts = 4Eb/Ts = 4 × 28.2 × 10^−8 × 2400 = 2.707 × 10^−3 (watts) = 2.707 (mW)   (9.1)
Es 4Eb
Pav = = = 4 28.2 108 2400 = 2.707 103 (watts) = 2.707 (mW) (9.1)
Ts Ts

P9.2 First, in baseband the channel looks like in Fig. 9.1, where hC(t) = h′C(t) 2cos(2πfc t); fc = 1.8 kHz. We therefore have 1.5 kHz of bandwidth to play with.
(a) Now Ts = λTb, or λ = rb/rs = 9600/2400 = 4 bits/symbol ⇒ 16-QAM.


Figure 9.1: Baseband equivalent channel H′C(f), flat over −1.5 kHz ≤ f ≤ 1.5 kHz.

This means that 1/(2Ts) = rs/2 = 1.2 kHz is the minimum bandwidth needed to achieve zero ISI. But we have 1.5 kHz of bandwidth, or an excess of 0.3 kHz. Therefore α can be larger than 0. Utilizing the entire bandwidth, α is determined from:
(1 + α)/(2Ts) = (rs/2)(1 + α) = 1.2(1 + α) = 1.5 ⇒ α = 0.25
(b) In passband the spectrum looks as in Fig. 9.2 (showing just the positive frequency portion).
Figure 9.2: Passband raised-cosine spectrum (α = 0.25) centered at fc = 1.8 kHz: flat (value 1) from 0.9 to 2.7 kHz, passing through 0.5 at 0.6 and 3.0 kHz, and reaching zero at 0.3 and 3.3 kHz.
uy

P9.3 With 75% excess bandwidth, α = 0.75. Now,
|HR(f)| = |SR(f)|^(1/2) / |HC(f)|^(1/2);   |HT(f)| = (K2/K1) |SR(f)|^(1/2) / |HC(f)|^(1/2)
Assume K2 = K1 and that SR(f) is raised-cosine. Therefore
HR(f) = HT(f) =
  √Ts,   |f| ≤ 0.25/(2Ts)
  √Ts |(1/2)[1 + cos((π/1.5)(2Ts|f| − 0.25))]|^(1/2),   0.25/(2Ts) ≤ |f| ≤ 1.75/(2Ts)
  0,   |f| ≥ 1.75/(2Ts)
Note that 1 + cos((π/1.5)(2f Ts − 0.25)) for 0 < f < 1.75/(2Ts) = 7/(8Ts) plots as in Fig. 9.3 (where it is assumed that 0 < α < 1).


Figure 9.3: Sketch of 1 + cos((π/1.5)(2f Ts − 0.25)) over 0 < f < 7/(8Ts).

P9.4 (a)
HC(f) = 1 + (α/2)[e^{j2πf t0} + e^{−j2πf t0}];   s(t) ↔ S(f)
Therefore
Y(f) = S(f)HC(f) = S(f) + (α/2)S(f)e^{j2πf t0} + (α/2)S(f)e^{−j2πf t0}.
Now, multiplying S(f) by e^{∓j2πf t0} means a time shift of ±t0 seconds in the time domain (see Table 2.3, Property 2). Therefore
y(t) = s(t) + (α/2)s(t + t0) + (α/2)s(t − t0)
Remark: The fact that s(t) is bandlimited to W Hz is necessary. Otherwise we could not make use of the property. Why?
(b) The output of the matched filter, based on the hint, is
y0(t) = ∫_{−∞}^{∞} [s(λ) + (α/2)s(λ + t0) + (α/2)s(λ − t0)] s(λ − (t − T)) dλ.
And at t = kT:
y0(kT) = ∫ s(λ)s(λ − (k − 1)T) dλ + (α/2) ∫ s(λ − t0)s(λ − (k − 1)T) dλ + (α/2) ∫ s(λ + t0)s(λ − (k − 1)T) dλ

(c) If t0 = T, the above becomes
y0(kT) = ∫ s(λ)s(λ − (k − 1)T) dλ + (α/2) ∫ s(λ − T)s(λ − (k − 1)T) dλ + (α/2) ∫ s(λ + T)s(λ − (k − 1)T) dλ
Change variables in the last two integrals to λ − T and λ + T, respectively. Then
y0(kT) = ∫ s(λ)s(λ − (k − 1)T) dλ + (α/2) ∫ s(λ)s(λ − (k − 2)T) dλ + (α/2) ∫ s(λ)s(λ − kT) dλ


Let x(t) = s(t) ∗ s(T − t). Then the above can also be written as:
y0(kT) = x(kT) + (α/2)x((k − 1)T) + (α/2)x((k + 1)T)
where the first term is the desired signal component, while the last two terms account for ISI.
P9.5 (a) The sufficient statistic is
1T: rk = +√Eb ∓ 0.25√Eb + wk
0T: rk = −√Eb ± 0.25√Eb + wk
where the ∓0.25√Eb term is due to ISI and wk ~ N(0, N0/2). The decision space looks as shown in Fig. 9.4. By symmetry, we have
P[bit error] = (1/2)Q(0.75√Eb / √(N0/2)) + (1/2)Q(1.25√Eb / √(N0/2))
             = (1/2)Q(√((9/8)(Eb/N0))) + (1/2)Q(√((25/8)(Eb/N0)))

Figure 9.4: Decision space on rk: received levels at ±0.75√Eb and ±1.25√Eb; decide 0D for rk < 0 and 1D for rk > 0.

(b) With no ISI, one has P[bit error] = Q(√(2Eb/N0)). The two error probabilities are plotted in Fig. 9.5. Observe that at the bit error probability of 10^−5 the difference in SNR is about 2.2 dB.
about 2.2 dB.
uy

P9.6 (a) Solve questions of this type graphically (usually), i.e., slide SR(f) along the f axis until the sum results in a constant level between −1/(2Tb) and 1/(2Tb) Hz. See Fig. 9.6.
(b) No. The spectrum would not be flat in 0 to 1/(2Tb) Hz.
(c) Excess bandwidth is 1 kHz.
Ng

P9.7 (a) To transmit with zero-ISI we need to find a signalling rate rb = 1/Tb bits/sec such that
in the frequency band r2b = 2T1 b f 2T1 b = r2b (Hz) the spectrum SR (f ) and its

aliases SR f T1b and SR f + T1b add up to a constant value. For the two spectra
given, the answers are (see Figs. 9.7 and 9.8):

rb = 2 2000 = 4000 bits/sec = 4 kbps for (a)


rb = 2 1500 = 3000 bits/sec = 3 kbps for (b)
rb
Remark: One could also determine 2 from the frequency at which SR (f ) has the required
symmetry about r2b (which is?).

Solutions Page 94
Nguyen & Shwedyk A First Course in Digital Communications

1
10
no ISI

k
with ISI
2
10

dy
3
10

P[bit error]
4
10

we
5
10

6
10
0 2 4 6 8 10 12 14

Sh Eb/N0 (dB)

Figure 9.5

excess bandwidth
first alias
&
f (Hz)
3 kHz 1 kHz 0 1 kHz 3 kHz 4 kHz
en

1 1 = r = 4000 bits/sec
2Tb Tb b

Figure 9.6
Figure 9.7: Spectrum (a) of P9.7 and its first alias; 1/(2Tb) = rb/2 = 2 kHz, rb = 1/Tb = 4 kbps.


Figure 9.8: Spectrum (b) of P9.7 and its first alias; 1/(2Tb) = 1.5 kHz, rb = 1/Tb = 3 kbps.

(b) Clearly the excess bandwidths are
3000 − 2000 = 1000 Hz (or 50%) for (a)
3000 − 1500 = 1500 Hz (or 100%) for (b)
(c) See Fig. 9.9.
&
Figure 9.9: Raised-cosine spectrum with α = 0.5 (for P9.7(c)), shown over −3 to 3 kHz.

(d) The raised-cosine spectrum is preferred because its eye diagram is considerably more open. The wide opening is desirable under synchronization error and additive white Gaussian noise in order to minimize the effect of ISI. Note that the relative shapes of the two eye diagrams are expected since the RC spectrum is smoother than the other spectrum for the same bit rate and excess bandwidth.

P9.8 As mentioned on page 348, SR(f) should have a certain symmetry about the 1/(2Tb) point. Graphically, at least for SR(f) functions that have zero phase, this symmetry can be checked by seeing if SR(f) is: i) flat from 0 to some f1 ≤ 1/(2Tb); ii) the portion from f1 to 1/Tb − f1 has odd symmetry about the (1/(2Tb), 1/2) point; iii) zero for f > 1/Tb − f1. Applying this to the four spectra given:
(i) Yes. Choose 1/(2Tb) to be 2B/3 Hz ⇒ rb = (4/3)B bits/second.
(ii) Yes. Choose 1/(2Tb) to be B/2 Hz ⇒ rb = B bits/second.


(iii) Yes. Here 1/(2Tb) = 2B/3 Hz ⇒ rb = (4/3)B bits/second.
(iv) No. The required symmetry does not exist on the f axis.
P9.9 See Table 9.1.

Table 9.1

    k    =  0   1   2   3    4    5   6    7    8    9   10   11   12
    bk   =      1   1   0    0    1   0    1    0    1   1    0    1
    dk   =  0   1   0   0    0    1   1    0    0    1   0    0    1
    Vk   = -V  +V  -V  -V   -V   +V  +V   -V   -V   +V  -V   -V   +V   (ignore noise)
    yk   =      0   0  -2V  -2V   0  +2V   0   -2V   0   0   -2V   0
    b^k  =      1   1   0    0    1   0    1    0    1   1    0    1
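Table 9.1 can be regenerated with a few lines of code. The sketch below (not part of the manual; V = 1 for convenience) runs the duobinary precoder/decoder chain on the same bit sequence:

```python
# Duobinary chain of Table 9.1 (noise-free): precode d_k = b_k XOR d_{k-1},
# map to +/-V, form y_k = V_k + V_{k-1}, and decode b_k = 1 when y_k = 0,
# b_k = 0 when |y_k| = 2V.
V = 1.0
b = [1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1]

d = [0]                       # d_0 = 0 (reference bit)
for bk in b:
    d.append(bk ^ d[-1])

levels = [V if bit else -V for bit in d]
y = [levels[k] + levels[k - 1] for k in range(1, len(levels))]

b_hat = [0 if abs(yk) > V else 1 for yk in y]
assert b_hat == b
print(y)
```

With V = 1 the printed y sequence matches the yk row of the table.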

P9.10 (a)
s_R^(sampled)(t) = Σ_{k=−∞}^{∞} s(kTb) δ(t − kTb) = Σ_{k=0}^{N−1} sk δ(t − kTb)
Therefore
S_R^(sampled)(f) = Σ_{k=0}^{N−1} ∫_{−∞}^{∞} sk δ(t − kTb) e^{−j2πft} dt = Σ_{k=0}^{N−1} sk e^{−j2πf kTb}
From the sampling theorem, Section 4.11, pages 136-139, Eqn. (4.5), we know that
Σ_{k=−∞}^{∞} SR(f − k/Tb) = Tb S_R^(sampled)(f)
Ignoring the aliases, i.e., considering only the k = 0 term:
SR(f) = Tb S_R^(sampled)(f) = Σ_{k=0}^{N−1} Tb sk e^{−j2πf kTb} for −1/(2Tb) ≤ f ≤ 1/(2Tb), and 0 otherwise.
(b)
sR(t) = F^{−1}{SR(f)} = Tb Σ_{k=0}^{N−1} sk ∫_{−1/(2Tb)}^{1/(2Tb)} e^{−j2πf kTb} e^{j2πft} df
= Tb Σ_{k=0}^{N−1} sk ∫_{−1/(2Tb)}^{1/(2Tb)} e^{j2π(t − kTb)f} df
= Tb Σ_{k=0}^{N−1} sk [e^{j2π(t − kTb)/(2Tb)} − e^{−j2π(t − kTb)/(2Tb)}] / [j2π(t − kTb)]
= Σ_{k=0}^{N−1} sk sin(π(t − kTb)/Tb) / (π(t − kTb)/Tb)

P9.11 (a)
sR(t) = ∫_{−1/(2Tb)}^{1/(2Tb)} j2Tb [e^{j2πf Tb} − e^{−j2πf Tb}]/(2j) e^{j2πft} df
After integration this becomes
sR(t) = Tb [ sin(π(t + Tb)/Tb)/(π(t + Tb)) − sin(π(t − Tb)/Tb)/(π(t − Tb)) ]
Now sin(π(t + Tb)/Tb) = sin(π(t − Tb)/Tb) = −sin(πt/Tb), so
sR(t) = −(Tb/π) sin(πt/Tb)[1/(t + Tb) − 1/(t − Tb)] = (2T²b/π) sin(πt/Tb)/(t² − T²b)
sR(t) decays as 1/t², which is expected since SR(f) needs to be differentiated twice to produce an impulse(s). Use the duality concept and the result of P2.41: sR(kTb) = 0 at k = 0, ±2, ±3, ±4, . . . . At k = ±1, i.e., t = ±Tb, sR(±Tb) becomes 0/0. Using L'Hopital's rule we have sR(Tb) = −1, sR(−Tb) = +1.
(b) Ignoring any random noise: yk = Vk+1 − Vk−1. Associate the value Vk+1 with bit bk; −Vk−1 is due to bit bk−2 and is an ISI term, i.e., the interference comes not from the previous bit but from the one before it. Since Vk = ±V, there are three received levels, ±2V and 0.
(c) As a precoder form dk = bk ⊕ dk−2. See Fig. 9.10.
Figure 9.10: Precoder: dk = bk ⊕ dk−2 (modulo-2 addition with a 2-unit delay in the feedback path).
(d) 2.1 dB. Note that this is very similar to duobinary.
(d) 2.1 dB. Note that this is very similar to duobinary.

P9.12 (a) Using the result of P9.10, Eqn. (P9.3),
SR(f) = Tb[2 + e^{j2πf Tb} + e^{−j2πf Tb}] = 2Tb(1 + cos(2πf Tb)) = 2Tb(1 + cos²(πf Tb) − sin²(πf Tb)) = 4Tb cos²(πf Tb) for −1/(2Tb) ≤ f ≤ 1/(2Tb), and 0 otherwise.
(b) It decays as 1/t³. Need to differentiate SR(f) three times before an impulse(s) appears.
(c) yk = Vk−1 + 2Vk + Vk+1 (ignoring the random noise).
(d) There are five levels, ±4V, ±2V, 0.


(e) To design the encoder, one wants a unique yk to correspond to bk. Note further that there are 2 interfering terms, which implies that dk = f(bk, dk−1, dk−2). Given this, let us build a truth table of (bk, dk−1, dk−2) and the corresponding dk so that a unique yk is produced. Finally let bk = 0 correspond to the levels 0, ±4V and bk = 1 to the levels ±2V.

    bk  dk-1  dk-2 | dk |  yk-1 = Vk-2 + 2Vk-1 + Vk   (in units of V)
    0    0     0   | 0  |  -1 - 2 - 1 = -4
    0    0     1   | 1  |  +1 - 2 + 1 = 0
    0    1     0   | 0  |  -1 + 2 - 1 = 0
    0    1     1   | 1  |  +1 + 2 + 1 = +4
    1    0     0   | 1  |  -1 - 2 + 1 = -2
    1    0     1   | 0  |  +1 - 2 - 1 = -2
    1    1     0   | 1  |  -1 + 2 + 1 = +2
    1    1     1   | 0  |  +1 + 2 - 1 = +2

Note that if yk−1 = ±4V or 0 then bk = 0, and if yk−1 = ±2V then bk = 1. Therefore the decision space looks as in Fig. 9.11:

Figure 9.11: Decision regions on yk: 0D around −4V, 0 and +4V; 1D around −2V and +2V (thresholds at ±3V and ±V).

Note further that we are assuming, as usual, that the random noise sample wk is Gaussian, zero-mean and of variance σ²w (watts).
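The truth table can be checked on random data. In the sketch below (an addition, with V = 1) the precoder state machine is driven by the table and every bit is recovered from a single sample of yk:

```python
import random

# Verify the precoder truth table of (e): each bit bk is decidable from
# one received sample y_{k-1} = V_{k-2} + 2 V_{k-1} + V_k (no noise).
table = {  # (bk, dk-1, dk-2) -> dk
    (0, 0, 0): 0, (0, 0, 1): 1, (0, 1, 0): 0, (0, 1, 1): 1,
    (1, 0, 0): 1, (1, 0, 1): 0, (1, 1, 0): 1, (1, 1, 1): 0,
}
level = lambda d: 1 if d else -1  # V = 1

random.seed(1)
b = [random.randint(0, 1) for _ in range(200)]
d = [0, 0]                        # two reference bits
for bk in b:
    d.append(table[(bk, d[-1], d[-2])])

# y aligned so that y[i] decides b[i]
y = [level(d[i]) + 2 * level(d[i + 1]) + level(d[i + 2]) for i in range(len(b))]
b_hat = [1 if abs(yk) == 2 else 0 for yk in y]
assert b_hat == b
print("decoded", len(b), "bits correctly; levels seen:", sorted(set(y)))
```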

(f) To determine the SNR degradation, we need to determine (V²/σ²w)|max from Eqn. (9.37), where |SR(f)| = 4Tb cos²(πf Tb). Therefore
(V²/σ²w)|max = [ √(PT Tb/(N0/2)) / ∫_{−1/(2Tb)}^{1/(2Tb)} 4Tb cos²(πf Tb) df ]²
Note that cos²(πf Tb) = (1/2)[1 + cos(2πf Tb)]; doing the integration gives (V²/σ²w)|max = PT Tb/(2N0). Therefore the probability of error is ≈ Q(√(PT Tb/(2N0))) (ignoring constant factors before the Q(·) function(s)).
Compared with the ideal binary case, where P[error] ≈ Q(√(2PT Tb/N0)), we see that PT needs to be increased by a factor of 4, which implies an SNR degradation of 10 log10 4 ≈ 6 dB.
P9.13 (a) The Fourier transform of the sampled spectrum


sampled
SR (f ) = Tb F {(t + Tb ) + 2(t) (t Tb )}
n o
= Tb ej2f Tb + 2 ej2f Tb = 2Tb [1 cos(2f Tb )]

Solutions Page 99
Nguyen & Shwedyk A First Course in Digital Communications

The spectrum of the continuous time signal is


(

k
2Tb [1 cos2 (f Tb )] = 4Tb sin2 (f Tb ), |f | 2T1 b
SR (f ) =
0, otherwise.

dy
(b) The spectrum of SR (f ) looks as in Fig. 9.12. Since it has a discontinuity, sR (t) decays
as 1t .

4Tb

we
f
0
1 1

Sh
2Tb 2Tb

Figure 9.12

(c) Without random noise yk = Vk1 + 2Vk Vk+1 where Vj = V .


(d) The sampled output yk = (V ) + 2(V ) (V ). Looking at all combinations sk =
4V, 2V, 0, +2V, +4V , i.e., there are 5 levels.
&
(e) To establish the coders table, note that there are 2 interfering terms which implies that
a coded bit should depend on 2 previous coded bits as well as the present information
bit, i.e., dk = f (bk , dk1 , dk2 ). Further we wish bk = 0 and bk = 1 to be associated
with levels that are distinct. Let bk = 0 correspond to the levels 0, 4V and bk = 1
correspond to levels 2V . Therefore
en

bk dk1 dk2 dk yk1 = Vk2 + Vk1 + Vk


0 0 0 0 = 1 2 1 = 4
0 0 1 0 = 1 + 2 1 = 0
0 1 0 1 = +1 2 + 1 = 0
uy

0 1 1 1 = +1 + 2 + 1 = +4
1 0 0 1 = 1 2 + 1 = 2
1 0 1 1 = 1 + 2 + 1 = +2
1 1 0 0 = +1 2 1 = 2
1 1 1 0 = +1 + 2 1 = +2
Ng

(f) Using Eqn. (9.39) we have
(V²/σ²w)|max = [ √(PT Tb/(N0/2)) / ∫_{−1/(2Tb)}^{1/(2Tb)} 2Tb[1 − cos(2πf Tb)] df ]² = PT Tb/(2N0)
so P[error] ≈ Q(√(PT Tb/(2N0))) ⇒ a 6 dB degradation in SNR.

P9.14 See Fig. 9.13. The eye diagram with the duobinary signalling (left figure) is significantly
more open, reflecting the fact that its overall impulse response, sR (t), decays as 1/t2 , while
that of P9.11 decays as 1/t.


Figure 9.13: Eye diagrams with the duobinary modulation (left) and with the spectrum of
P9.11 (right). Note that for duobinary modulation the sampling times need to be delayed by
Tb /2.

Figure 9.14: Eye diagram for P9.15 with RC = 0.2Tb (normalized y(t) versus t/Tb, sampling time marked). Note that the eye is quite wide open.

P9.15 See Figs. 9.14, 9.15, 9.16.


Figure 9.15: Eye diagram for P9.15 with RC = Tb (normalized y(t) versus t/Tb, sampling time marked).
&
Figure 9.16: Eye diagram for P9.15 with RC = 2Tb (normalized y(t) versus t/Tb). Note that the eye is closed, i.e., very severe ISI.

Chapter 10

Signaling Over Fading Channels

P10.1 The two equations of interest are
P[error]_BASK = (1/2) e^{−T²h/N0} + (1/2)[1 − Q(√(2E/N0), √(2/N0) Th)]   (10.1)
where Q(a, b) is the Marcum Q-function, Th is the threshold used in BASK and E = 2Eb, and
P[error]_BFSK = (1/2) e^{−Eb/2N0}   (10.2)
When one chooses Th = √E/2 = √(2Eb)/2 then P[error]_BASK simplifies to:
P[error]_BASK = (1/2) e^{−Eb/2N0} + (1/2)[1 − Q(2√(Eb/N0), √(Eb/N0))]
Since the first term in the above expression is P[error]_BFSK, while the second term is always positive for finite Eb/N0, it follows immediately that P[error]_BASK > P[error]_BFSK.
positive for finite Eb /N0 , it follows immediately that P [error]BASK > P [error]BFSK .
(normalized)
P10.2 The plot of the normalized optimum threshold, namely Th = qTEh is shown in Fig.
b
q 2
uy


Eb
10.1. The plot shows that Th 2E = 2 . This fact can be shown mathematically as
follows.

0 I 1 eE/N0 where E = 2Eb . Therefore
Start with Th = 2N E 0

r
Ng

N0 Eb 1
Th = I01 e2Eb /N0 = I01 e2Eb /N0
2 2Eb 2 2Eb
N0

or
(normalized) Th I01 e2Eb /N0
Th =q =
Eb 2Eb
2 N0

P
Now ex cos = I0 (x) + 2 k=1 Ik (x) cos(k) and Ik (x) > 0 for all k > 1 and x > 0.
P
Therefore with = 0 we have ex = I0 (x) + 2 k=1 Ik (x) I0 (x), which also means
(normalized)
I01 (ex ) x, x > 0. Therefore Th 1.

1
Nguyen & Shwedyk A First Course in Digital Communications

1.8

k
1.7

Normalized optimum threshold value


1.6

dy
1.5

1.4

we
1.3

1.2

1.1

Sh
0 5 10 15 20
Eb/ N0 (dB)

Figure 10.1

P10.3 The transmitted signal set and signal space are as follows:
1T: s(t) = √(2E/Tb) cos(2πfc t)
0T: s(t) = 0   (10.3)
with basis function φT(t) = √(2/Tb) cos(2πfc t); the signal points lie at 0 (0T) and √E (1T).
The received signal set and signal space is
1T: r(t) = α√(2E/Tb) cos(2πfc t − θ) + w(t)
0T: r(t) = 0 + w(t)   (10.4)
with basis function φR(t) = √(2/Tb) cos(2πfc t − θ); the signal points lie at 0 (0R) and α√E (1R).
Since we assume α, θ are estimated accurately for each transmission, we use θ to set the basis function φR(t) appropriately and α to set the threshold. The decision rule is
r ≷ α√E/2 = α√(Eb/2)   (decide 1D if r ≥ α√E/2, 0D otherwise)
where Eb = E/2 and r = ∫_0^{Tb} r(t)φR(t) dt.



Note that f (r|0T ) N 0, N20 and f (r|1T ) N 2Eb , N20 . Conditioned on and , the
error probability is given by

k
p ! r !
Eb /2 Eb
P [error|, ] = Q p =Q
N0 /2 N0

dy
The error probability depends on the specific value (but not on the specific value of ) that
the random variable assumes during the a specific transmission. Now takes on values
2 2
from 0 to according to the Rayleigh pdf, f () = 22 e /F u(). To find the overall
F
average error probability, average the conditional bit error probability over f (), i.e., find

we
Z r ! Z " r !#
Eb 1 Eb
P [error] = Q f ()d = 1 erf f ()d
0 N0 2 0 N0
h i
1
where we have substituted Q(x) = 2 1 erf x2 . It follows that

1
Sh
1
P [error] = 2
2 F
Z

Perusing Gradstyn & Ryzhik we notice, page 649, Eqn. 6.287-1, that
Z
erf
r

Eb
N0
!
e F
2
2

d

2
x(x)ex dx = p [Re() > Re( 2 ), Re() > 0]
&
0 2 + 2

Rx 2
where (x) = 2 et dt (G&R, p.930, Eqn. 8.250-1), i.e., (x) = erf(x).
0
q
Eb 1
Identifying = 2N0 , = 2
F
, and substituting we get
en

q
1 1 F2 Eb /2N0
P [error] = q (10.5)
2 2 1 + 2 E /2N
F b 0
uy

Interpreting F2 Eb as the received energy per bit and comparing with coherent BFSK (Eqn.
(10.80)) we see that coherent BASKs performance is identical to that of coherent BFSK and
both are 3 dB less efficient than coherent BPSK (Eqn. (10.81)).
As aside, the following shows that F2 Eb is indeed the average received energy per bit. During
any transmission the received energy is ER () = 21 (0) + 12 (2 E) = 21 2 (2Eb ) = 2 Eb .
Ng

Therefore the average received energy per bit is


2
( ) = 2
Z 2
F Z
2 2 z}|{
2
E {ER ()} = Eb e
F d = F2 Eb e d = F2 Eb (joules/bit).
0 F2 0
| {z }
f ()

Note that this is true for both BPSK and BFSK.

Solutions Page 103


Nguyen & Shwedyk A First Course in Digital Communications

P10.4 We use the approach developed in Section 10.3.3. The main difference is that not only is the received phase, $\theta$, random but so is the received amplitude, which is scaled by $\alpha$. Eqn. (10.42), which expresses the received signals over 2 bit intervals, becomes:
\[
r(t) = \begin{cases} \alpha\sqrt{\frac{2E_b}{T_b}}\cos(2\pi f_c t - \theta)\left[u(t) - u(t-2T_b)\right] + w(t), & 0_T \\[4pt] \alpha\sqrt{\frac{2E_b}{T_b}}\cos(2\pi f_c t - \theta)\left\{[u(t)-u(t-T_b)] - [u(t-T_b)-u(t-2T_b)]\right\} + w(t), & 1_T \end{cases}
\]
Rewrite this as
\[
r(t) = \begin{cases} \sqrt{\frac{2E_b}{T_b}}\left[n_{F,I}\cos(2\pi f_c t) + n_{F,Q}\sin(2\pi f_c t)\right]\left[u(t)-u(t-2T_b)\right] + w(t), & 0_T \\[4pt] \sqrt{\frac{2E_b}{T_b}}\left[n_{F,I}\cos(2\pi f_c t) + n_{F,Q}\sin(2\pi f_c t)\right]\left\{[u(t)-u(t-T_b)] - [u(t-T_b)-u(t-2T_b)]\right\} + w(t), & 1_T \end{cases}
\]
where $n_{F,I} = \alpha\cos\theta$, $n_{F,Q} = \alpha\sin\theta$ are statistically independent, zero-mean Gaussian random variables with identical variance $\frac{\sigma_F^2}{2}$.
Again we need the 4 basis functions of Eqn. (10.43) to generate the sufficient statistics. These are as in (10.44), (10.45), the difference being that rather than $\cos\theta$, $\sin\theta$ we have $\alpha\cos\theta$, $\alpha\sin\theta$, i.e., $n_{F,I}$, $n_{F,Q}$.
The sufficient statistics are therefore
\[
0_T: \begin{cases} r_{1,I} = \pm\sqrt{2E_b}\,n_{F,I} + w_{1,I} \\ r_{1,Q} = \pm\sqrt{2E_b}\,n_{F,Q} + w_{1,Q} \\ r_{2,I} = w_{2,I} \\ r_{2,Q} = w_{2,Q} \end{cases}
\qquad
1_T: \begin{cases} r_{1,I} = w_{1,I} \\ r_{1,Q} = w_{1,Q} \\ r_{2,I} = \pm\sqrt{2E_b}\,n_{F,I} + w_{2,I} \\ r_{2,Q} = \pm\sqrt{2E_b}\,n_{F,Q} + w_{2,Q} \end{cases}
\]
Observe that, regardless of the case considered, $r_{1,I}$, $r_{1,Q}$, $r_{2,I}$, $r_{2,Q}$ are statistically independent Gaussian random variables. Consider the case of $0_T$, i.e., $f(r_{1,I}, r_{1,Q}, r_{2,I}, r_{2,Q}|0_T)$. In this situation $r_{1,I}$ is a zero-mean Gaussian random variable of variance $\sigma_2^2 = E_b\sigma_F^2 + \frac{N_0}{2}$. So is $r_{1,Q}$. This is regardless of whether $+\sqrt{2E_b}$ or $-\sqrt{2E_b}$ is considered. $r_{2,I}$, $r_{2,Q}$ are zero-mean Gaussian random variables of variance $\sigma_1^2 = \frac{N_0}{2}$.
With $1_T$ the situation is reversed, i.e., $r_{1,I}, r_{1,Q} \sim \mathcal{N}(0, \sigma_1^2)$ and $r_{2,I}, r_{2,Q} \sim \mathcal{N}(0, \sigma_2^2)$.
The likelihood ratio test
\[
\frac{f(r_{1,I}, r_{1,Q}, r_{2,I}, r_{2,Q}|1_T)}{f(r_{1,I}, r_{1,Q}, r_{2,I}, r_{2,Q}|0_T)} \mathop{\gtrless}_{0_D}^{1_D} 1
\]
becomes
\[
\frac{e^{-r_{1,I}^2/2\sigma_1^2}\, e^{-r_{1,Q}^2/2\sigma_1^2}\, e^{-r_{2,I}^2/2\sigma_2^2}\, e^{-r_{2,Q}^2/2\sigma_2^2}}{e^{-r_{1,I}^2/2\sigma_2^2}\, e^{-r_{1,Q}^2/2\sigma_2^2}\, e^{-r_{2,I}^2/2\sigma_1^2}\, e^{-r_{2,Q}^2/2\sigma_1^2}} \mathop{\gtrless}_{0_D}^{1_D} 1
\]
Taking the natural logarithm, gathering terms and rearranging, the test simplifies to
\[
\left(r_{2,I}^2 + r_{2,Q}^2\right)\left(\frac{1}{2\sigma_1^2} - \frac{1}{2\sigma_2^2}\right) \mathop{\gtrless}_{0_D}^{1_D} \left(r_{1,I}^2 + r_{1,Q}^2\right)\left(\frac{1}{2\sigma_1^2} - \frac{1}{2\sigma_2^2}\right)
\]

Solutions Page 104


Nguyen & Shwedyk A First Course in Digital Communications

Now $\sigma_2^2 > \sigma_1^2 \Rightarrow \frac{1}{2\sigma_1^2} - \frac{1}{2\sigma_2^2} > 0$, so we can cancel this factor from both sides without affecting the inequality relationship. Therefore the decision rule becomes
\[
r_{2,I}^2 + r_{2,Q}^2 \mathop{\gtrless}_{0_D}^{1_D} r_{1,I}^2 + r_{1,Q}^2
\]
The above makes intuitive sense. Since both the phase and the amplitude information are destroyed by the fading channel, the only way to distinguish whether a 1 or a 0 was transmitted is to look at whether there is more energy in the $\{\phi_{2,I}(t), \phi_{2,Q}(t)\}$ plane or in the $\{\phi_{1,I}(t), \phi_{1,Q}(t)\}$ plane, which is what the above decision rule does.
The decision rule can be rewritten as
\[
\sqrt{r_{2,I}^2 + r_{2,Q}^2} \mathop{\gtrless}_{0_D}^{1_D} \sqrt{r_{1,I}^2 + r_{1,Q}^2}
\]
which looks at the envelope of the received signal in the appropriate plane.
To obtain the error performance, the energy form of the decision rule is preferred. Also, for purposes of error performance the variables in the decision rule are considered to be random, i.e., it is written as
\[
\ell_2 \equiv \mathbf{r}_{2,I}^2 + \mathbf{r}_{2,Q}^2 \mathop{\gtrless}_{0_D}^{1_D} \mathbf{r}_{1,I}^2 + \mathbf{r}_{1,Q}^2 \equiv \ell_1
\]
Due to symmetry the error probability can be written as
\[
P[\text{error}] = P[\text{error}|0_T] = P[\ell_2 \geq \ell_1|0_T]
\]
Under the condition that a zero is transmitted, the random variable $\ell_2$ is Chi-square with 2 degrees of freedom and parameter $\sigma_1^2$, while $\ell_1$ is also Chi-square, 2 degrees of freedom, parameter $\sigma_2^2$.
Now fix $\ell_1$ at a specific value, say $\ell_1 = \lambda_1$. Then
\[
P[\text{error}|0_T, \ell_1 = \lambda_1] = \int_{\lambda_1}^\infty f_{\ell_2}(\ell_2|0_T)\,\mathrm{d}\ell_2 = \int_{\lambda_1}^\infty \frac{e^{-\ell_2/2\sigma_1^2}}{2\sigma_1^2}\,\mathrm{d}\ell_2 = e^{-\lambda_1/2\sigma_1^2}
\]
Sweep $\lambda_1$ over all possible values, weighted, of course, by the probability of that value occurring, i.e.,
\[
P[\text{error}|0_T] = \int_0^\infty P[\text{error}|0_T, \ell_1 = \lambda_1]\, f_{\ell_1}(\lambda_1|0_T)\,\mathrm{d}\lambda_1 = \int_0^\infty e^{-\lambda_1/2\sigma_1^2}\,\frac{e^{-\lambda_1/2\sigma_2^2}}{2\sigma_2^2}\,\mathrm{d}\lambda_1
= \frac{1}{1+\frac{\sigma_2^2}{\sigma_1^2}} = \frac{1}{1+\frac{E_b\sigma_F^2 + N_0/2}{N_0/2}} = \frac{1}{2\left(1+\sigma_F^2\frac{E_b}{N_0}\right)}. \tag{10.6}
\]

Comparison of (10.6) with the performances of coherent BFSK and coherent BPSK (Eqns.
(10.80) and (10.81) in Section 10.4.3) is shown in Fig. 10.2. Observe that noncoherently-
demodulated DBPSK performs basically the same as coherently demodulated BFSK.
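The closed form (10.6) can be verified numerically (this check is an addition, not from the original manual) by carrying out the conditioning-then-averaging integral directly; function names are illustrative:

```python
import math

def p_error_closed(snr):
    # Eqn. (10.6): P[error] = 1 / (2 (1 + sigma_F^2 Eb/N0)), snr = sigma_F^2 Eb/N0
    return 1.0 / (2.0 * (1.0 + snr))

def p_error_numeric(snr, steps=200000):
    # Take N0 = 2 so sigma1^2 = N0/2 = 1 and sigma2^2 = sigma_F^2*Eb + N0/2 = 2*snr + 1;
    # average the conditional error exp(-lam/2*sigma1^2) over the exponential pdf of l1
    s1, s2 = 1.0, 2.0 * snr + 1.0
    lam_max = 50.0 * s2            # integrate well into the exponential tail
    d = lam_max / steps
    total = 0.0
    for i in range(steps):
        lam = (i + 0.5) * d
        total += math.exp(-lam / (2.0 * s1)) * math.exp(-lam / (2.0 * s2)) / (2.0 * s2) * d
    return total

print(p_error_closed(10.0), p_error_numeric(10.0))
```

Both evaluate to $1/22 \approx 0.0455$ at $\sigma_F^2 E_b/N_0 = 10$, confirming the integration step leading to (10.6).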

[Plot: P[error] versus received SNR per bit (dB), comparing coherently demodulated BFSK, coherently demodulated BPSK, and noncoherently demodulated DBPSK.]

Figure 10.2

P10.5 (a) The received signal is
\[
1_T: \ r_j(t) = \alpha_j\sqrt{\frac{2E_b'}{T_b}}\cos(2\pi f_c t - \theta_j) + w(t), \qquad
0_T: \ r_j(t) = -\alpha_j\sqrt{\frac{2E_b'}{T_b}}\cos(2\pi f_c t - \theta_j) + w(t)
\]
where $(j-1)T_b \leq t \leq jT_b$, $j = 1, 2, \ldots, N$.
Though the terminology is somewhat ambiguous, indeed one might say misleading, coherent demodulation means that not only is the phase $\theta_j$ estimated perfectly (at least perfectly enough for engineering purposes) but so is the attenuation factor $\alpha_j$.
With $\theta_j$ known, a set of sufficient statistics is generated by projecting the received signal(s) onto the $N$ basis functions $\sqrt{\frac{2}{T_b}}\cos(2\pi f_c t - \theta_j)$, $(j-1)T_b \leq t \leq jT_b$, $j = 1, 2, \ldots, N$. They are:
\[
1_T: \ r_j = \sqrt{E_b'}\,\alpha_j + w_j, \qquad 0_T: \ r_j = -\sqrt{E_b'}\,\alpha_j + w_j
\]
where $j = 1, 2, \ldots, N$. The $r_j$'s are statistically independent Gaussian random variables of means $\pm\sqrt{E_b'}\,\alpha_j$ and variance $\sigma^2 = N_0/2$. The LRT
\[
\frac{f(r_1, r_2, \ldots, r_N|1_T)}{f(r_1, r_2, \ldots, r_N|0_T)} \mathop{\gtrless}_{0_D}^{1_D} 1
\]


becomes
\[
\frac{\prod_{j=1}^N \frac{1}{\sqrt{2\pi}\sigma}\, e^{-(r_j-\sqrt{E_b'}\alpha_j)^2/2\sigma^2}}{\prod_{j=1}^N \frac{1}{\sqrt{2\pi}\sigma}\, e^{-(r_j+\sqrt{E_b'}\alpha_j)^2/2\sigma^2}} \mathop{\gtrless}_{0_D}^{1_D} 1
\]
Taking the ln and simplifying gives the following decision rule:
\[
\sum_{j=1}^N \alpha_j r_j \mathop{\gtrless}_{0_D}^{1_D} 0.
\]
Remark: The above decision rule is actually known as the maximum-ratio-combining (MRC) detection rule. This is because the signal-to-noise ratio of the decision variable is maximized (the proof is left as a further problem).
(b) In developing the receiver, the $\alpha_j$'s were assumed to be known (i.e., perfectly estimated at the receiver). To determine the error performance, however, we consider many, many transmissions (typically $> 10^8$) and the $\alpha_j$'s will take on a gamut of values, i.e., they are considered to be random variables with a Rayleigh pdf.
Start the calculation of the error performance by observing that due to symmetry
\[
P[\text{error}] = P[\text{error}|1_T] = P[\text{error}|0_T]
\]
and that
\[
P[\text{error}|0_T] = P\left[\ell = \sum_{j=1}^N \alpha_j r_j \geq 0 \,\Big|\, 0_T\right].
\]
To evaluate the above, start by assuming the $\alpha_j$'s take on a specific set of values and then average over all possible values. That is, let $\vec{\alpha} = (\alpha_1, \alpha_2, \ldots, \alpha_N)$. Then
\[
P[\text{error}|0_T, \vec{\alpha}] = P\left[\ell = \sum_{j=1}^N \alpha_j r_j \geq 0 \,\Big|\, 0_T, \vec{\alpha}\right].
\]
Note that the quantities $\alpha_j r_j = -\sqrt{E_b'}\alpha_j^2 + \alpha_j w_j$ are statistically independent Gaussian variables of mean $-\sqrt{E_b'}\alpha_j^2$ and variance $\alpha_j^2 N_0/2$. Therefore $\ell$ is a Gaussian random variable whose mean is $m = -\sqrt{E_b'}\sum_{j=1}^N \alpha_j^2$ and variance $\sigma^2 = \frac{N_0}{2}\sum_{j=1}^N \alpha_j^2$. Therefore,
\[
P[\text{error}|0_T, \vec{\alpha}] = \frac{1}{\sqrt{2\pi}\sigma}\int_0^\infty e^{-(\ell-m)^2/2\sigma^2}\,\mathrm{d}\ell = Q\left(\frac{|m|}{\sigma}\right) = Q\left(\sqrt{\frac{2E_b'}{N_0}\sum_{j=1}^N \alpha_j^2}\right).
\]
But, as mentioned, the above error expression is for a specific set of $\alpha_j$'s and we view the $\alpha_j$'s as random. Therefore the error expression should be averaged over all possible values of the $\alpha_j$'s with a weighting provided by the joint pdf of the $\alpha_j$'s, i.e., $f_{\vec{\alpha}}(\vec{\alpha})$. Though this can be done, here we take a (slightly) different approach. Namely let $\beta = \sum_{j=1}^N \alpha_j^2$, where the $\alpha_j$'s, with pdf $f_{\alpha_j}(\alpha_j) = \frac{2\alpha_j}{\sigma_F^2}e^{-\alpha_j^2/\sigma_F^2}u(\alpha_j)$, are statistically independent Rayleigh random variables. Now recognize that each $\alpha_j^2$ can be represented as a sum of squares of two

i.i.d. Gaussian random variables. As such, $\beta$ is essentially a sum of squares of $2N$ i.i.d. Gaussian random variables.
The pdf of $\beta$ can be readily determined from Eqn. (10.90), i.e.,
\[
f_\beta(\beta) = \frac{1}{\sigma_F^{2N}(N-1)!}\,\beta^{N-1}e^{-\beta/\sigma_F^2}\,u(\beta).
\]
Therefore
\[
P[\text{error}] = P[\text{error}|0_T] = \int_{\beta=0}^\infty Q\left(\sqrt{\frac{2E_b'\beta}{N_0}}\right)f_\beta(\beta)\,\mathrm{d}\beta = \frac{1}{\sigma_F^{2N}}\int_{\beta=0}^\infty Q\left(\sqrt{\frac{2E_b'\beta}{N_0}}\right)\frac{\beta^{N-1}e^{-\beta/\sigma_F^2}}{(N-1)!}\,\mathrm{d}\beta
\]
By changing variables $x = \frac{E_b'\beta}{N_0}$, the above expression can be rewritten as
\[
P[\text{error}] = \int_0^\infty Q\left(\sqrt{2x}\right)\frac{x^{N-1}e^{-x/\mathsf{SNR}}}{\mathsf{SNR}^N(N-1)!}\,\mathrm{d}x
\]
where $\mathsf{SNR} = \frac{E_b'\sigma_F^2}{N_0}$. Now $Q(\sqrt{2x}) = \frac{1}{\sqrt{2\pi}}\int_{y=\sqrt{2x}}^\infty e^{-y^2/2}\,\mathrm{d}y$ and therefore
\[
P[\text{error}] = \int_{x=0}^\infty \left[\frac{1}{\sqrt{2\pi}}\int_{y=\sqrt{2x}}^\infty e^{-y^2/2}\,\mathrm{d}y\right]\frac{x^{N-1}e^{-x/\mathsf{SNR}}}{\mathsf{SNR}^N(N-1)!}\,\mathrm{d}x
\]
The above integral essentially finds the volume under the 2-dimensional function (surface) $g(x,y) = \frac{1}{\sqrt{2\pi}(N-1)!\,\mathsf{SNR}^N}\,x^{N-1}e^{-x/\mathsf{SNR}}e^{-y^2/2}$ in the region illustrated in Fig. 10.3.

[Figure 10.3: the region of integration in the $(x,y)$ plane, bounded below by the curve $y = \sqrt{2x}$, or $x = y^2/2$.]

The key to performing the integration is to change the order of integration, i.e., sweep first w.r.t. $x$ and then w.r.t. $y$ as illustrated in Fig. 10.4.

[Figure 10.4: the same region; instead of sweeping first w.r.t. $y$ from $\sqrt{2x}$ to $\infty$ and then w.r.t. $x$ from $0$ to $\infty$, sweep first w.r.t. $x$ from $0$ to $y^2/2$ and then w.r.t. $y$ from $0$ to $\infty$.]

Therefore:
\[
P[\text{error}] = \int_{y=0}^\infty \frac{1}{\sqrt{2\pi}}\left[\int_{x=0}^{y^2/2}\frac{x^{N-1}e^{-x/\mathsf{SNR}}}{\mathsf{SNR}^N(N-1)!}\,\mathrm{d}x\right]e^{-y^2/2}\,\mathrm{d}y
\]
Consider the inner integral $\int_0^{y^2/2}x^{N-1}e^{-x/\mathsf{SNR}}\,\mathrm{d}x$, which looks like an incomplete Gamma function, and from G&R, page 317, Eqn. 3.381.1 with $u = \frac{y^2}{2}$, $\nu = N$, $\mu = \frac{1}{\mathsf{SNR}}$ we get the integral to be $\mathsf{SNR}^N\,\gamma\left(N, \frac{y^2}{2\,\mathsf{SNR}}\right)$ where $\gamma(\cdot,\cdot)$ is the incomplete Gamma function. Therefore:
\[
P[\text{error}] = \frac{1}{(N-1)!}\frac{1}{\sqrt{2\pi}}\int_{y=0}^\infty e^{-y^2/2}\,\gamma\left(N, \frac{y^2}{2\,\mathsf{SNR}}\right)\mathrm{d}y
\]
Using the relationship from G&R, page 940, Eqn. 8.352.1 with $1+n \rightarrow N$, $x \rightarrow \frac{y^2}{2\,\mathsf{SNR}}$, the incomplete Gamma function becomes:
\[
\gamma\left(N, \frac{y^2}{2\,\mathsf{SNR}}\right) = (N-1)!\left\{1 - e^{-\frac{y^2}{2\,\mathsf{SNR}}}\sum_{k=0}^{N-1}\frac{y^{2k}}{2^k\,\mathsf{SNR}^k\, k!}\right\}
\]
Therefore the error probability can be written as:
\[
P[\text{error}] = \frac{1}{\sqrt{2\pi}}\int_{y=0}^\infty e^{-\frac{y^2}{2}}\,\mathrm{d}y - \frac{1}{\sqrt{2\pi}}\int_{y=0}^\infty e^{-\left(\frac{1}{2}+\frac{1}{2\,\mathsf{SNR}}\right)y^2}\sum_{k=0}^{N-1}\frac{y^{2k}}{2^k\,\mathsf{SNR}^k\, k!}\,\mathrm{d}y
\]
Now $\frac{1}{\sqrt{2\pi}}\int_{y=0}^\infty e^{-\frac{y^2}{2}}\,\mathrm{d}y = \frac{1}{2}$ (half the area of a zero-mean, unit-variance Gaussian pdf) and, changing variables in the 2nd integral, $\lambda = \frac{y^2}{2}$, simplifying, etc., we get
\[
P[\text{error}] = \frac{1}{2} - \frac{1}{2\sqrt{\pi}}\sum_{k=0}^{N-1}\frac{1}{\mathsf{SNR}^k\, k!}\int_{\lambda=0}^\infty \lambda^{k-\frac{1}{2}}\,e^{-\left(\frac{\mathsf{SNR}+1}{\mathsf{SNR}}\right)\lambda}\,\mathrm{d}\lambda
\]


The integral now looks like a complete Gamma function and from G&R, page 317, Eqn. 3.381.4 with $\nu - 1 = k - \frac{1}{2}$, i.e., $\nu = k + \frac{1}{2}$, and $\mu = \frac{\mathsf{SNR}+1}{\mathsf{SNR}}$ we see that the integral is $\frac{1}{\left(\frac{\mathsf{SNR}+1}{\mathsf{SNR}}\right)^{k+\frac{1}{2}}}\Gamma\left(k+\tfrac{1}{2}\right)$ where $\Gamma(\cdot)$ is the complete Gamma function.
Now $\Gamma\left(k+\tfrac{1}{2}\right) = \frac{\sqrt{\pi}}{2^k}(2k-1)!!$ (Notation: $(2k-1)!! = 1\cdot 3\cdot 5\cdots(2k-3)(2k-1)$). Using this relationship and after some algebra, one gets:
\[
P[\text{error}] = \frac{1}{2} - \frac{1}{2}\sum_{k=0}^{N-1}\frac{(2k-1)!!}{k!\,2^k\,\mathsf{SNR}^k}\left(\frac{\mathsf{SNR}}{\mathsf{SNR}+1}\right)^{k+\frac{1}{2}}
\]
Define the parameter $\gamma \equiv \sqrt{\frac{\mathsf{SNR}}{\mathsf{SNR}+1}}$. Then $\frac{1}{\mathsf{SNR}} = \frac{1-\gamma^2}{\gamma^2}$ and $\left(\frac{\mathsf{SNR}}{\mathsf{SNR}+1}\right)^{k+\frac{1}{2}} = \gamma^{2k+1}$. In terms of $\gamma$, the error probability is
\[
P[\text{error}] = \frac{1}{2} - \frac{\gamma}{2}\sum_{k=0}^{N-1}\frac{(2k-1)!!}{k!\,2^k}\left(1-\gamma^2\right)^k
\]
Now $\frac{(1-\gamma^2)^k}{2^k} = 2^k\left(\frac{1-\gamma}{2}\right)^k\left(\frac{1+\gamma}{2}\right)^k$. Therefore
\[
P[\text{error}] = \frac{1}{2} - \frac{\gamma}{2}\sum_{k=0}^{N-1}\frac{(2k-1)!!}{k!}\,2^k\left(\frac{1-\gamma}{2}\right)^k\left(\frac{1+\gamma}{2}\right)^k
\]
Write $\frac{(2k-1)!!}{k!} = \frac{1\cdot 3\cdot 5\cdots(2k-3)(2k-1)}{k!}\cdot\frac{2\cdot 4\cdot 6\cdots(2k-2)(2k)}{2\cdot 4\cdot 6\cdots(2k-2)(2k)}$. But $2\cdot 4\cdot 6\cdots(2k-2)(2k) = 2^k k!$, so $\frac{(2k-1)!!}{k!}\,2^k = \frac{(2k)!}{k!\,k!} = \binom{2k}{k}$. Finally,
\[
P[\text{error}] = \frac{1}{2} - \frac{\gamma}{2}\sum_{k=0}^{N-1}\binom{2k}{k}\left(\frac{1-\gamma}{2}\right)^k\left(\frac{1+\gamma}{2}\right)^k \tag{10.7}
\]
except we want to write it in the form of (P10.1)!
We next show that (10.7) is indeed the same as (P10.1) by induction. This means that we first confirm that (10.7) and (P10.1) are the same for $N = 1$ (quite obvious) and assume that they are also the same for $N = L$, i.e.,
\[
\frac{1}{2} - \frac{\gamma}{2}\sum_{k=0}^{L-1}\binom{2k}{k}\left(\frac{1-\gamma}{2}\right)^k\left(\frac{1+\gamma}{2}\right)^k = \left(\frac{1-\gamma}{2}\right)^L\left[\sum_{m=0}^{L-1}\binom{L-1+m}{m}\left(\frac{1+\gamma}{2}\right)^m\right]. \tag{10.8}
\]
Then we need to show that (10.7) and (P10.1) are indeed the same for $N = L+1$. This is accomplished with the following manipulations:
\[
\frac{1}{2} - \frac{\gamma}{2}\left[\sum_{k=0}^{L}\binom{2k}{k}\left(\frac{1-\gamma}{2}\right)^k\left(\frac{1+\gamma}{2}\right)^k\right]
= \frac{1}{2} - \frac{\gamma}{2}\left[\sum_{k=0}^{L-1}\binom{2k}{k}\left(\frac{1-\gamma}{2}\right)^k\left(\frac{1+\gamma}{2}\right)^k\right] - \frac{\gamma}{2}\binom{2L}{L}\left(\frac{1-\gamma}{2}\right)^L\left(\frac{1+\gamma}{2}\right)^L
\]
Using the assumption of (10.8) this can be written as (where the term $\left(\frac{1-\gamma}{2}\right)^L$ is factored out):
\[
\left(\frac{1-\gamma}{2}\right)^L\left[\sum_{m=0}^{L-1}\binom{L-1+m}{m}\left(\frac{1+\gamma}{2}\right)^m - \frac{\gamma}{2}\binom{2L}{L}\left(\frac{1+\gamma}{2}\right)^L\right] \tag{10.9}
\]
Now
\[
\binom{L-1+m}{m} = \frac{(L-1+m)!}{m!(L-1)!} = \frac{(L+m)!}{m!\,L!}\cdot\frac{L}{L+m} = \binom{L+m}{m}\left(1 - \frac{m}{L+m}\right)
\]
Therefore (10.9) becomes
\[
\left(\frac{1-\gamma}{2}\right)^L\left[\sum_{m=0}^{L-1}\binom{L+m}{m}\left(\frac{1+\gamma}{2}\right)^m - \sum_{m=0}^{L-2}\binom{L+m}{m}\left(\frac{1+\gamma}{2}\right)^{m+1} - \frac{\gamma}{2}\binom{2L}{L}\left(\frac{1+\gamma}{2}\right)^L\right] \tag{10.10}
\]
where
\[
\sum_{m=0}^{L-1}\frac{m}{L+m}\binom{L+m}{m}\left(\frac{1+\gamma}{2}\right)^m \stackrel{k=m-1}{=} \sum_{k=0}^{L-2}\frac{k+1}{L+k+1}\binom{L+k+1}{k+1}\left(\frac{1+\gamma}{2}\right)^{k+1}
\]
But
\[
\frac{k+1}{L+k+1}\binom{L+k+1}{k+1} = \frac{k+1}{L+k+1}\cdot\frac{(L+k+1)!}{L!\,(k+1)!} = \frac{(L+k)!}{L!\,k!} = \binom{L+k}{k}
\]
and changing the dummy index $k$ back to $m$ gives the second sum in (10.10). Next, extend that sum to $m = L-1$; since $\binom{2L-1}{L-1} = \frac{1}{2}\binom{2L}{L}$, this amounts to adding and subtracting the term $\frac{1}{2}\binom{2L}{L}\left(\frac{1+\gamma}{2}\right)^L$:
\[
\left(\frac{1-\gamma}{2}\right)^L\left[\sum_{m=0}^{L-1}\binom{L+m}{m}\left(\frac{1+\gamma}{2}\right)^m - \sum_{m=0}^{L-1}\binom{L+m}{m}\left(\frac{1+\gamma}{2}\right)^{m+1} + \frac{1}{2}\binom{2L}{L}\left(\frac{1+\gamma}{2}\right)^L - \frac{\gamma}{2}\binom{2L}{L}\left(\frac{1+\gamma}{2}\right)^L\right] \tag{10.11}
\]
Since, obviously,
\[
\left(\frac{1+\gamma}{2}\right)^{m+1} = \left(\frac{1}{2}+\frac{\gamma}{2}\right)\left(\frac{1+\gamma}{2}\right)^m = \frac{1}{2}\left(\frac{1+\gamma}{2}\right)^m + \frac{\gamma}{2}\left(\frac{1+\gamma}{2}\right)^m,
\]
(10.11) can be written as
\[
\left(\frac{1-\gamma}{2}\right)^L\left[\frac{1}{2}\sum_{m=0}^{L-1}\binom{L+m}{m}\left(\frac{1+\gamma}{2}\right)^m - \frac{\gamma}{2}\sum_{m=0}^{L-1}\binom{L+m}{m}\left(\frac{1+\gamma}{2}\right)^m + \frac{1}{2}\binom{2L}{L}\left(\frac{1+\gamma}{2}\right)^L - \frac{\gamma}{2}\binom{2L}{L}\left(\frac{1+\gamma}{2}\right)^L\right] \tag{10.12}
\]
Gathering terms appropriately, (10.12) becomes
\[
\left(\frac{1-\gamma}{2}\right)^L\left[\frac{1}{2}\sum_{m=0}^{L}\binom{L+m}{m}\left(\frac{1+\gamma}{2}\right)^m - \frac{\gamma}{2}\sum_{m=0}^{L}\binom{L+m}{m}\left(\frac{1+\gamma}{2}\right)^m\right],
\]
which (finally) equals:
\[
P[\text{error}] = \left(\frac{1-\gamma}{2}\right)^{L+1}\left[\sum_{m=0}^{L}\binom{L+m}{m}\left(\frac{1+\gamma}{2}\right)^m\right] \tag{10.13}
\]
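Rather than following the induction algebraically, one can also confirm numerically that the two closed forms agree. The sketch below (an addition, not from the original manual; the function names are illustrative) compares the alternating-sign form with the $\left(\frac{1-\gamma}{2}\right)^N$ form, with $\gamma = \sqrt{\mathsf{SNR}/(\mathsf{SNR}+1)}$:

```python
from math import comb, sqrt

def p_err_form_a(gamma, N):
    # Form (10.7): 1/2 - (gamma/2) * sum_k C(2k,k) ((1-gamma)/2)^k ((1+gamma)/2)^k
    s = sum(comb(2*k, k) * ((1 - gamma)/2)**k * ((1 + gamma)/2)**k for k in range(N))
    return 0.5 - 0.5 * gamma * s

def p_err_form_b(gamma, N):
    # Form (P10.1): ((1-gamma)/2)^N * sum_m C(N-1+m,m) ((1+gamma)/2)^m
    s = sum(comb(N - 1 + m, m) * ((1 + gamma)/2)**m for m in range(N))
    return ((1 - gamma)/2)**N * s

snr = 10.0
gamma = sqrt(snr / (snr + 1.0))
for N in range(1, 8):
    print(N, p_err_form_a(gamma, N), p_err_form_b(gamma, N))
```

The printed pairs coincide for every diversity order $N$, which is exactly what the induction establishes.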

(c) With approximations (at high SNR, $\frac{1-\gamma}{2} \approx \frac{1}{4\,\mathsf{SNR}}$ and $\frac{1+\gamma}{2} \approx 1$), (10.13) with $L = N-1$ becomes
\[
P[\text{error}] \approx \left(\frac{1}{4\,\mathsf{SNR}}\right)^N\sum_{k=0}^{N-1}\binom{N-1+k}{k} = \binom{2N-1}{N}\frac{1}{4^N\,\mathsf{SNR}^N}
\]
The error probability decreases as the $N$th power of the signal-to-noise ratio. $N$ is usually called the diversity gain (or order) of the system. Note that this performance behavior is the same as with noncoherent FSK; there is, however, a 3 dB saving in power, at the expense of a more complicated receiver since the phase and attenuation need to be estimated.
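The quality of the high-SNR approximation can be checked numerically (this check is an addition, not part of the original solution; the function names are illustrative):

```python
from math import comb, sqrt

def p_err_exact(snr, N):
    # Exact diversity-N error probability, gamma = sqrt(SNR/(SNR+1))
    gamma = sqrt(snr / (snr + 1.0))
    s = sum(comb(N - 1 + m, m) * ((1 + gamma)/2)**m for m in range(N))
    return ((1 - gamma)/2)**N * s

def p_err_approx(snr, N):
    # High-SNR approximation: C(2N-1, N) / (4*SNR)^N
    return comb(2*N - 1, N) / (4.0 * snr)**N

for N in (1, 2, 4):
    print(N, p_err_exact(1e4, N) / p_err_approx(1e4, N))
```

At $\mathsf{SNR} = 10^4$ the ratios are within a fraction of a percent of 1, and they approach 1 as the SNR grows.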

P10.6 To be added.

P10.7 (a) A constant does not vary very much, indeed not at all. Therefore its variance is equal to
0. Further the expected or average value of a constant is the constant itself1 . Therefore
uy

the amount of fading is AF = 0/constant = 0.


Z
2 2
(b) var{2 } = E{4 } E 2 {2 }. E{2 } = 22 3 e /F d and after a change of vari-
Z F 0
2 2
R 2 2
able this becomes F e d = F . E{4 } = 22 0 5 e /F d and with the
Ng

0 Z F
4
same change of variable this becomes F 2 e d = 2F4 . Therefore
0
4 4
2F
AF = (F2 )2
F
= 1.
(c) First find a general expression for E{n } and then evaluate it for n = 4 and n = 2.
Z
2mm 2 / 2
E{n } = n+2m1 em d.
(m) 2m 0
1
For an anecdote about a totally absurd proof that the expected value of a constant is the constant, contact the
second author.

Solutions Page 1012


Nguyen & Shwedyk A First Course in Digital Communications

m2
Let 2
. Then
Z

n

k
n mm n+2m2 n+2m2 n
E{ } = 2 e d = m +
(m) 2m m n+2m2
2 m 0 (m)mn/2 2

dy
For n = 2,
2 2 m(m)
E{2 } = (m + 1) = = 2
(m)m (m)m
For n = 4,
4 4 (m + 1) 4
E{4 } = (m + 2) = = 4
+

we
(m)m2 m m
1
Therefore AF = m . Note that when m = 1 the Nakagami distribution is Rayleigh and
as m , it tends to a Gaussian pdf. Plots of the Nakagami-m pdf for various values
of m are shown in Fig. 10.5.
2
Normalized Nakagamim Distribution (with =1)
3

2.5

2
Sh m=1 (Rayleigh)
m=2
m=4
m=10
&
f ()

1.5

1
en

0.5

0
0 0.5 1 1.5 2 2.5 3

uy

Figure 10.5
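The moment formula and the resulting amount of fading $\mathsf{AF} = 1/m$ can be checked by direct numerical integration of the Nakagami-$m$ pdf (an added sanity check, not part of the original solution; function names are illustrative):

```python
import math

def nakagami_moment(n, m, omega=1.0, steps=100000, amax=8.0):
    # E{alpha^n} for f(a) = 2 m^m a^(2m-1) exp(-m a^2/omega) / (Gamma(m) omega^m), a >= 0
    c = 2.0 * m**m / (math.gamma(m) * omega**m)
    d = amax / steps
    total = 0.0
    for i in range(steps):
        a = (i + 0.5) * d
        total += a**n * c * a**(2*m - 1) * math.exp(-m * a * a / omega) * d
    return total

def amount_of_fading(m):
    e2 = nakagami_moment(2, m)
    e4 = nakagami_moment(4, m)
    return (e4 - e2**2) / e2**2

print(amount_of_fading(1.0), amount_of_fading(4.0))
```

The printed values are numerically close to $1$ and $1/4$, matching $\mathsf{AF} = 1/m$ for $m = 1$ (Rayleigh) and $m = 4$.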

P10.8 (a) If the phase ambiguity is $\pi$ radians then the sign(s) of the transmitted signal is reversed, i.e., a bit zero is still represented by a signal pattern where the transmitted signal in the $(k-1)$th bit interval and the $k$th bit interval are the same: either $(-,-)$ or $(+,+)$. On the other hand, for bit one the pattern is $(+,-)$ or $(-,+)$. In essence, nothing changes.
(b) Start by observing that we want $d_k = d_{k-1}$ if the present bit $b_k = 0$ and $d_k = \bar{d}_{k-1}$ if $b_k = 1$. The truth table for $d_{k-1}$, $b_k$, $d_k$ is

    d_{k-1}      b_k    d_k
    0 (pi rad)   0      0 (pi rad)
    0 (pi rad)   1      1 (0 rad)
    1 (0 rad)    0      1 (0 rad)
    1 (0 rad)    1      0 (pi rad)

From the truth table one sees (easily) that $d_k = d_{k-1} \oplus b_k$. In block diagram form this means: $b_k$ drives a modulo-2 adder whose other input is $d_{k-1}$, obtained by feeding the adder output $d_k$ back through a delay of $T_b$ seconds.
Remark: The above is what is discussed in Chapter 6, Section 6.6, differential modulation, pages 251-252.
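The immunity of the mapping $d_k = d_{k-1} \oplus b_k$ to a $\pi$ phase ambiguity can be demonstrated in a few lines (an illustrative addition; the function names are mine, and the ambiguity is modeled as inverting every received $d_k$, reference included):

```python
def diff_encode(bits, d0=0):
    # d_k = d_{k-1} XOR b_k, starting from the reference bit d0
    d, out = d0, []
    for b in bits:
        d ^= b
        out.append(d)
    return out

def diff_decode(dbits, d0=0):
    # b_k = d_k XOR d_{k-1}: an all-bits inversion (pi ambiguity) cancels out
    out, prev = [], d0
    for d in dbits:
        out.append(d ^ prev)
        prev = d
    return out

bits = [0, 1, 1, 0, 1, 0, 0, 1]
d = diff_encode(bits)
flipped = [1 - x for x in d]        # channel inverts every differentially encoded bit
print(diff_decode(d), diff_decode(flipped, d0=1))
```

Both decoded sequences equal the original `bits`: two wrongs (both $d_k$ and $d_{k-1}$ inverted) indeed make a right.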
(c) Let $\phi^{(k-1)}(t) = \sqrt{\frac{2}{T_b}}\cos(2\pi f_c t)$ be the orthogonal basis for the $(k-1)$th bit interval and $\phi^{(k)}(t) = \sqrt{\frac{2}{T_b}}\cos(2\pi f_c t)$ for the $k$th bit interval. Then the signal space plot of the signal set of (10.40) becomes as follows.

[Signal space diagram: axes $\phi^{(k-1)}(t)$ and $\phi^{(k)}(t)$; the four signal points $(\pm\sqrt{E_b}, \pm\sqrt{E_b})$, with the equal-sign points labeled $0_T$ and the opposite-sign points labeled $1_T$.]

It is identical (perhaps one should say isomorphic) to the signal space plot for Miller modulation.
uy

(d) Based on (b), consider the following coherent demodulation: first demodulate to $\hat{d}_k$ and then to $\hat{b}_k$. In block diagram form: $r(t)$ is correlated with $\sqrt{\frac{2}{T_b}}\cos(2\pi f_c t)$ over $[(k-1)T_b, kT_b]$, the output is sampled at $t = kT_b$ and applied to a comparator that produces $\hat{d}_k$; $\hat{b}_k$ is then obtained from $\hat{d}_k$ and the delayed decision $\hat{d}_{k-1}$ ($T_b$-second delay) by the inverse of the mapping at the modulator.
Of course it is possible to demodulate by projecting $r(t)$ onto $\{\phi^{(k-1)}(t), \phi^{(k)}(t)\}$ (the signal space of bit $d_k$) and choosing the closest signal point. But this would require two matched filters (or two correlators).
Solutions Page 1014


Nguyen & Shwedyk A First Course in Digital Communications

(e) First, observe that $P[\text{error}] = P[\text{error}|0_T]$. Second, note that $\hat{b}_k$ is in error if (i) $\hat{d}_{k-1}$ is in error and $\hat{d}_k$ is correct or (ii) $\hat{d}_{k-1}$ is correct and $\hat{d}_k$ is in error; i.e., if both $\hat{d}_k$, $\hat{d}_{k-1}$ are correct then $\hat{b}_k$ is correct and if both $\hat{d}_k$, $\hat{d}_{k-1}$ are incorrect then $\hat{b}_k$ is also correct (who said two wrongs do not make a right).
Since the decision events on $\hat{d}_{k-1}$, $\hat{d}_k$ are statistically independent (remember the noise is white and Gaussian) and the 2 events (i), (ii) above are mutually exclusive, one has
\[
P[\text{error}] = 2Q\left(\frac{\sqrt{E_b}}{\sqrt{N_0/2}}\right)\left[1 - Q\left(\frac{\sqrt{E_b}}{\sqrt{N_0/2}}\right)\right] = 2Q\left(\sqrt{\frac{2E_b}{N_0}}\right)\left[1 - Q\left(\sqrt{\frac{2E_b}{N_0}}\right)\right].
\]
Plots of the error performance for this phase uncertainty model and that of (10.48) are shown in Fig. 10.6. Observe that coherent demodulation saves about 0.5 dB in $E_b/N_0$.
[Plot: P[bit error] versus $E_b/N_0$ (dB) for DBPSK and coherently demodulated DBPSK.]

Figure 10.6
uy

(f) CDDBPSK perhaps.
Remark: Because of the memory introduced by the differential mapping one can determine a state diagram and a trellis, and use Viterbi's algorithm to perform sequence demodulation. This is left as an exercise; alternatively, convince yourself that Fig. 6.19 is applicable here.
Ng

P10.9 (a) One possible relative phase change, $\Delta\theta$, mapping is as follows:
\[
00 \mapsto \Delta\theta = 0 \text{ (radians)}, \qquad 01 \mapsto \Delta\theta = \frac{\pi}{2}, \qquad 11 \mapsto \Delta\theta = \pi, \qquad 10 \mapsto \Delta\theta = \frac{3\pi}{2} \ \left(\text{or } -\frac{\pi}{2}\right)
\]
(b) Since there is memory, let us assume an initial condition of 0 radians. Parse the sequence into 2-bit subsequences and determine the transmitted phase sequence.


         01    11     11    00    01    10    10   ...
    0    pi/2  3pi/2  pi/2  pi/2  pi    pi/2  0    ... (transmitted phase)
    ^
    initial condition

As can be seen, and as expected, the absolute transmitted phase has no meaning. It is the relative (or differential) phase that conveys the information.
(c) The trellis looks as shown below, where the state is defined as the previous transmitted phase. Note that the trellis is fully developed after one $T_s$ interval.

[Trellis diagram: the four states are the previous transmitted phases $\theta_{k-1} \in \{\,0, \frac{\pi}{2}, \pi, \frac{3\pi}{2}\,\}$; from each state at $(k-1)T_s$, four branches labeled by the input 2-bit sequences 00, 01, 11, 10 lead to the output phases $\theta_{k-1} + \Delta\theta$ at $kT_s$.]

(d) Because of the memory of one symbol interval we shall look at the signal in the previous symbol interval along with the present symbol interval. A signal in a symbol interval lies in 2 dimensions, that of $\{\sqrt{\frac{2}{T_s}}\cos(2\pi f_c t), \sqrt{\frac{2}{T_s}}\sin(2\pi f_c t)\}$, which means 4 dimensions are needed to represent the signal over 2 symbol intervals. Along each dimension the signal component takes on one of 2 possible values ($\pm\sqrt{E_b}$). Therefore there are $2^4 = 16$ possible signals lying in the 4-dimensional signal space. Each signal lies in one of the 16 quadrants; indeed each quadrant has only 1 signal lying in it.
Remark: More detail as to the actual signal points is provided in the next problem.
(e) Start by considering the following truth table; perhaps it is more appropriately called a desired table.


    b_{k,Q} b_{k,I}   Phase change   d_{k,Q}            d_{k,I}
    0 0               0              d_{k-1,Q}          d_{k-1,I}       (phase stays the same as the previous phase)
    0 1               +pi/2          d_{k-1,I}          \bar{d}_{k-1,Q} (how d_{k,Q}, d_{k,I} behave depends on the bit pattern d_{k-1,Q} d_{k-1,I})
    1 0               -pi/2          \bar{d}_{k-1,I}    d_{k-1,Q}       (again the change depends on the bit pattern d_{k-1,Q} d_{k-1,I})
    1 1               +pi            \bar{d}_{k-1,Q}    \bar{d}_{k-1,I}

Remark: To see the above relationships between the $\{d_{k,Q}, d_{k,I}\}$ bits and the $\{d_{k-1,Q}, d_{k-1,I}\}$ bits, especially for the 01, 10 bit patterns of $b_{k,Q}b_{k,I}$, you may wish to sketch a few $(I, Q)$ signal space diagrams.

Now to form the Boolean expressions. Let us first consider the inphase bit $d_{k,I}$. The formation shall not proceed by formal logic design procedures but more by intuition, in other words by (hopefully) intelligent trial and error.
The first observation is that the pattern for the $d_{k,I}$ bit is similar for two subsets of the $b_{k,Q}b_{k,I}$ bit pattern, namely (00, 11) and (01, 10). So let us start with the expression $b_{k,Q} \oplus b_{k,I}$, which creates a selector bit of 0 or 1, respectively, for the two subsets.
Consider subset (00, 11). Expression $\overline{b_{k,Q} \oplus b_{k,I}}$ shall be equal to 1 in this case. Note that $b_{k,I} \oplus d_{k-1,I}$ has as its output $d_{k-1,I}$ when $b_{k,I} = 0$ and $\bar{d}_{k-1,I}$ when $b_{k,I} = 1$. So AND this with $\overline{b_{k,Q} \oplus b_{k,I}}$ and the subset (00, 11) is accounted for. Thus far we have $d_{k,I} = (\overline{b_{k,Q} \oplus b_{k,I}})$ AND $(b_{k,I} \oplus d_{k-1,I})$.
To account for the 2nd subset, OR the above expression with an expression to be determined for the 2nd subset. Since when the 2nd subset is relevant the above expression is 0, we can develop this expression independently.
To see what this expression is, consider the table below where just $d_{k,I}$ is considered and the Boolean expressions $(b_{k,Q} \oplus d_{k-1,Q})$ and $(b_{k,I} \oplus d_{k-1,Q})$ are also shown.

    b_{k,Q} b_{k,I}   d_{k-1,Q} d_{k-1,I}   d_{k,I}   (b_{k,Q} + d_{k-1,Q})   (b_{k,I} + d_{k-1,Q})
    0 1               1 1                   0         1                       0
    0 1               0 0                   1         0                       1
    0 1               1 0                   0         1                       0
    0 1               0 1                   1         0                       1
    1 0               1 1                   1         0                       1
    1 0               1 0                   1         0                       1
    1 0               0 0                   0         1                       0
    1 0               0 1                   0         1                       0

Note that both track the $d_{k,I}$ bit, albeit one in a complementary fashion. Therefore form the following Boolean expression for the 2nd subset: $(b_{k,Q} \oplus b_{k,I})$ AND $(\bar{b}_{k,Q} \oplus d_{k-1,Q})$.
The final overall expression for the inphase bit is
\[
d_{k,I} = \left[(\overline{b_{k,Q} \oplus b_{k,I}}) \text{ AND } (b_{k,I} \oplus d_{k-1,I})\right] \text{ OR } \left[(b_{k,Q} \oplus b_{k,I}) \text{ AND } (\bar{b}_{k,Q} \oplus d_{k-1,Q})\right]
\]
Remark: One could also use $(b_{k,I} \oplus d_{k-1,Q})$. Do it? Also convince yourself that $(b_{k,I} \oplus d_{k-1,I})$ or $(b_{k,Q} \oplus d_{k-1,I})$ are not appropriate to be used. For the quadrature bit reason as above, or use symmetry, or guess, to get
\[
d_{k,Q} = \left[(\overline{b_{k,Q} \oplus b_{k,I}}) \text{ AND } (b_{k,Q} \oplus d_{k-1,Q})\right] \text{ OR } \left[(b_{k,Q} \oplus b_{k,I}) \text{ AND } (\bar{b}_{k,I} \oplus d_{k-1,I})\right]
\]

P10.10 (a) There are four basis functions, namely
\[
\phi_{k,I}(t) = \sqrt{\frac{2}{T_s}}\cos(2\pi f_c t), \qquad \phi_{k,Q}(t) = \sqrt{\frac{2}{T_s}}\sin(2\pi f_c t), \qquad (k-1)T_s \leq t \leq kT_s \ \text{(present symbol interval)}
\]
and
\[
\phi_{k-1,I}(t) = \sqrt{\frac{2}{T_s}}\cos(2\pi f_c t), \qquad \phi_{k-1,Q}(t) = \sqrt{\frac{2}{T_s}}\sin(2\pi f_c t), \qquad (k-2)T_s \leq t \leq (k-1)T_s \ \text{(previous symbol interval)}
\]


(b) $2^4$, i.e., 16 possible bit patterns.
(c) $2^4$ quadrants.
(d) Using Table 10.2 of P10.9:

    b_{k-3} b_{k-2} b_{k-1} b_k = 0 0 0 0     Previous phase    Present phase
                                  i)          pi/4              pi/4
                                  ii)         3pi/4             3pi/4
                                  iii)        5pi/4             5pi/4
                                  iv)         7pi/4             7pi/4

Since the present 2 bits are 00, the present phase is the same as the previous phase, i.e., it does not change.

[Signal space plots of signals ii), iii) and iv): in each case the point in the $(\phi_{k-1,I}, \phi_{k-1,Q})$ plane and the point in the $(\phi_{k,I}, \phi_{k,Q})$ plane have components $\pm\sqrt{E_b}$ corresponding to the previous and present phases.]

Figure 10.7

To determine which quadrant(s) the signals fall in, the notation needs to be clarified further. Let the components along a specific axis be $\pm 1$ as implied and let the axes be ordered as follows: $(\phi_{k-1,I}, \phi_{k-1,Q}, \phi_{k,I}, \phi_{k,Q})$. If we let $-1 \mapsto$ bit 0 and $+1 \mapsto$ bit 1, taking the left-most bit as the most significant, then in binary and decimal the quadrants are:

    i)   (+1, +1, +1, +1)   1111   15
    ii)  (-1, +1, -1, +1)   0101   5
    iii) (-1, -1, -1, -1)   0000   0
    iv)  (+1, -1, +1, -1)   1010   10

(e) The previous phase can still be any one of the 4 values $\left(\frac{\pi}{4}, \frac{3\pi}{4}, \frac{5\pi}{4}, \frac{7\pi}{4}\right)$ and since the present 2 bits are 00, which means $\Delta\theta = 0$, the present phase is also one of $\left(\frac{\pi}{4}, \frac{3\pi}{4}, \frac{5\pi}{4}, \frac{7\pi}{4}\right)$ as in (d).


(f)
    b_{k-3} b_{k-2} b_{k-1} b_k = 0 0 0 1     Previous phase    Present phase
                                  i)          pi/4              3pi/4
                                  ii)         3pi/4             5pi/4
                                  iii)        5pi/4             7pi/4
                                  iv)         7pi/4             pi/4

[Signal space plots of signals i)-iv) for $b_{k-1}b_k = 01$: in each case the present phase is the previous phase advanced by $\pi/2$.]

Figure 10.8

The signals fall in the following quadrants:

    Quadrant               Binary   Decimal
    i)   (+1, +1, -1, +1)  1101     13
    ii)  (-1, +1, -1, -1)  0100     4
    iii) (-1, -1, +1, -1)  0010     2
    iv)  (+1, -1, +1, +1)  1011     11

If the previous bits $b_{k-3}b_{k-2}$ are 01, 10, or 11 we still have the same set of signals. As in (e), the previous phase can still be any of the 4 values and the present phase is a relative shift of $\frac{\pi}{2}$ with respect to these 4 values.
Consider now

    b_{k-3} b_{k-2} b_{k-1} b_k = 0 0 1 1     Previous phase    Present phase
                                  i)          pi/4              5pi/4
                                  ii)         3pi/4             7pi/4
                                  iii)        5pi/4             pi/4
                                  iv)         7pi/4             3pi/4
iv) 4 4
&
en
uy
Ng


[Signal space plots of signals i)-iv) for $b_{k-1}b_k = 11$: in each case the present phase is the previous phase shifted by $\pi$.]

Figure 10.9

    Quadrant               Binary   Decimal
    i)   (+1, +1, -1, -1)  1100     12
    ii)  (-1, +1, +1, -1)  0110     6
    iii) (-1, -1, +1, +1)  0011     3
    iv)  (+1, -1, -1, +1)  1001     9


When $b_{k-3}b_{k-2}$ are 01, 10, or 11, the same argument as before applies, i.e., we have the same signal set.
Finally, consider

    b_{k-3} b_{k-2} b_{k-1} b_k = 0 0 1 0     Previous phase    Present phase
                                  i)          pi/4              7pi/4
                                  ii)         3pi/4             pi/4
                                  iii)        5pi/4             3pi/4
                                  iv)         7pi/4             5pi/4

[Signal space plots of signals i)-iv) for $b_{k-1}b_k = 10$: in each case the present phase is the previous phase shifted by $-\pi/2$ (equivalently $+3\pi/2$).]

Figure 10.10

    Quadrant               Binary   Decimal
    i)   (+1, +1, +1, -1)  1110     14
    ii)  (-1, +1, +1, +1)  0111     7
    iii) (-1, -1, -1, +1)  0001     1
    iv)  (+1, -1, -1, -1)  1000     8

Again, $b_{k-3}b_{k-2} = 01, 10, 11$ results in the same set of signal points for $b_{k-1}b_k = 10$.


(g) To develop the demodulator, observe that the signal points lie as follows for the various possibilities of $b_{k-1}b_k$:

    b_{k-1}b_k:   00    01    11    10
    Quadrant:     15    13    12    14
                  5     4     6     7
                  0     2     3     1
                  10    11    9     8

The observation is that they lie in disjoint quadrants (or disjoint quadrant subsets). Therefore, to make a decision on the symbol $b_{k-1}b_k$, decide which quadrant the received signal lies in, more accurately which quadrant the sufficient statistics lie in, and choose the $b_{k-1}b_k$ symbol that belongs to this quadrant.
The sufficient statistics are the projection of $r(t)$ onto the basis functions $\phi_{k-1,I}(t)$, $\phi_{k-1,Q}(t)$, $\phi_{k,I}(t)$, $\phi_{k,Q}(t)$.

[Block diagram: $r(t)$ is correlated with $\phi_{k-1,I}(t)$ and $\phi_{k-1,Q}(t)$ over $[(k-2)T_s, (k-1)T_s]$, sampled at $t = (k-1)T_s$, and with $\phi_{k,I}(t)$ and $\phi_{k,Q}(t)$ over $[(k-1)T_s, kT_s]$, sampled at $t = kT_s$; a decision device determines which quadrant $r_{1,I}$, $r_{1,Q}$, $r_{2,I}$, $r_{2,Q}$ fall in and chooses the corresponding symbol $\hat{b}_k\hat{b}_{k-1}$.]

Figure 10.11

Remarks:
(i) The above block diagram is a minimum-distance receiver, i.e., the signal point $r(t)$ is closest to is chosen and the symbol $b_k b_{k-1}$ it corresponds to is decided on. Remember we are in white Gaussian noise.
(ii) As an example, quadrant 6 is chosen and symbol $\hat{b}_k\hat{b}_{k-1} = 11$ is decided on if $(r_{1,I} < 0)$ and $(r_{1,Q} > 0)$ and $(r_{2,I} > 0)$ and $(r_{2,Q} < 0)$.


P10.11 From Problems 10.8 and 10.9 let us make the following observations:
(i) The received signal over two symbol intervals can be written as:
\[
r(t) = \pm\sqrt{E_b}\,\phi_{k-1,I}(t) \pm \sqrt{E_b}\,\phi_{k-1,Q}(t) \pm \sqrt{E_b}\,\phi_{k,I}(t) \pm \sqrt{E_b}\,\phi_{k,Q}(t) + w(t)
\]
Therefore the sufficient statistics $r_{1,I}$, $r_{1,Q}$, $r_{2,I}$, $r_{2,Q}$ are of the form $\pm\sqrt{E_b} + w$, i.e., statistically independent Gaussian random variables of mean $\pm\sqrt{E_b}$ and variance $N_0/2$. Note that $E_b$ is readily interpreted as the transmitted energy per bit.
(ii) Because, as usual, the information bits $b_k$ are assumed to be equally probable and statistically independent, there is symmetry in the transmitted (and received) signal space. We have
\[
P[\text{symbol error}] = P[\text{symbol error}|b_k b_{k-1} = 00]
\]
Indeed we can refine this further as:
\[
P[\text{symbol error}] = P[\text{symbol error}|\text{a specific quadrant signal transmitted}]
\]
(iii) Let us choose as a specific quadrant signal the one that corresponds to quadrant 5, or $(-1, +1, -1, +1)$, i.e., the signal $-\sqrt{E_b}\,\phi_{k-1,I}(t) + \sqrt{E_b}\,\phi_{k-1,Q}(t) - \sqrt{E_b}\,\phi_{k,I}(t) + \sqrt{E_b}\,\phi_{k,Q}(t)$. A symbol error is made if $r_{1,I}$, $r_{1,Q}$, $r_{2,I}$, $r_{2,Q}$ fall in quadrants 1, 2, 3, 4, 6, 7, 8, 9, 11, 12, 13, 14.
Note that if $r_{1,I}$, $r_{1,Q}$, $r_{2,I}$, $r_{2,Q}$ fall in quadrants 0, 10, or 15, though an error is made on the transmitted signal, a correct decision is still made on the transmitted information bits $b_k b_{k-1} = 00$.
(iv) So one needs to calculate the volume under $f(r_{1,I}, r_{1,Q}, r_{2,I}, r_{2,Q}|\text{signal in quadrant 5})$ in the quadrants in which a symbol error is made. This can be done (almost) by inspection. Consider the volume in quadrant 2, i.e., quadrant $(-1, -1, +1, -1)$. Note that there is agreement in 1 of the components, in this case specifically $r_{1,I}$, and disagreement in 3 components, namely $r_{1,Q}$, $r_{2,I}$, $r_{2,Q}$. The contributions to the volume when there are agreement and disagreement are shown below.

[Sketch: a Gaussian pdf of variance $N_0/2$ centered at $\sqrt{E_b}$; the area to the right of 0 is the agreement contribution, the area to the left of 0 the disagreement contribution.]

Figure 10.12

Quantified mathematically,
\[
\text{Agreement contribution} = 1 - Q\left(\sqrt{\frac{2E_b}{N_0}}\right), \qquad \text{Disagreement contribution} = Q\left(\sqrt{\frac{2E_b}{N_0}}\right)
\]


The volume in this quadrant is then a product of the four contributions, 1 agreement and 3 disagreements in this case. The contribution of this quadrant volume to the total volume is therefore $\left[1 - Q\left(\sqrt{\frac{2E_b}{N_0}}\right)\right]Q^3\left(\sqrt{\frac{2E_b}{N_0}}\right)$. Based on this reasoning, set up the following table, where the argument $\sqrt{\frac{2E_b}{N_0}}$ of the $Q(\cdot)$ function is understood.

    Quadrant               Decimal   Error volume contribution
    (-1, -1, +1, -1)       (2)       (1-Q)Q^3
    (-1, +1, -1, -1)       (4)       (1-Q)^3 Q
    (+1, -1, +1, +1)       (11)      (1-Q)Q^3
    (+1, +1, -1, +1)       (13)      (1-Q)^3 Q
    (-1, -1, -1, +1)       (1)       (1-Q)^3 Q
    (-1, +1, +1, +1)       (7)       (1-Q)^3 Q
    (+1, -1, -1, -1)       (8)       (1-Q)Q^3
    (+1, +1, +1, -1)       (14)      (1-Q)Q^3
    (-1, -1, +1, +1)       (3)       (1-Q)^2 Q^2
    (-1, +1, +1, -1)       (6)       (1-Q)^2 Q^2
    (+1, -1, -1, +1)       (9)       (1-Q)^2 Q^2
    (+1, +1, -1, -1)       (12)      (1-Q)^2 Q^2

The probability of symbol error is the sum of the above volumes; remember the events are mutually exclusive. A little algebra yields
\[
P[\text{symbol error}] = 4\left[Q\left(\sqrt{\frac{2E_b}{N_0}}\right) - 2Q^2\left(\sqrt{\frac{2E_b}{N_0}}\right) + 2Q^3\left(\sqrt{\frac{2E_b}{N_0}}\right) - Q^4\left(\sqrt{\frac{2E_b}{N_0}}\right)\right]. \tag{10.14}
\]
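The "little algebra" can be verified by brute force (an added check, not from the original manual; names are illustrative): enumerate all 16 quadrants, count component agreements with the transmitted quadrant $(-1,+1,-1,+1)$, sum the 12 error-quadrant volumes, and compare with (10.14):

```python
def p_sym_err_table(q):
    # Sum of error-quadrant volumes; each quadrant with a agreements contributes (1-q)^a q^(4-a)
    tx = (-1, +1, -1, +1)
    total = 0.0
    for n in range(16):
        if n in (0, 5, 10, 15):   # these quadrants still give a correct decision on b_k b_{k-1}
            continue
        quad = tuple(+1 if (n >> (3 - i)) & 1 else -1 for i in range(4))
        a = sum(u == v for u, v in zip(quad, tx))
        total += (1 - q)**a * q**(4 - a)
    return total

def p_sym_err_closed(q):
    # Eqn. (10.14) with q = Q(sqrt(2*Eb/N0))
    return 4.0 * (q - 2*q**2 + 2*q**3 - q**4)

print(p_sym_err_table(0.05), p_sym_err_closed(0.05))
```

The two agree exactly for any $q$, confirming both the table of contributions and the simplification.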
en

Remark: An error expression is given in Telecommunication Systems Engineering, W. C. Lindsey & M. K. Simon, Dover, 1973, p. 246. The derivation there is quite different. Compare. Also the Viterbi algorithm can be applied here. Left as an exercise.
For comparison, the symbol error probability of coherently-demodulated QPSK is given from Eqns. (7.39) and (7.40) in the text as:
\[
P[\text{symbol error}] = 1 - \left[1 - Q\left(\sqrt{\frac{2E_b}{N_0}}\right)\right]^2 = 2Q\left(\sqrt{\frac{2E_b}{N_0}}\right) - Q^2\left(\sqrt{\frac{2E_b}{N_0}}\right) \tag{10.15}
\]
Fig. 10.13 shows plots of (10.14) and (10.15). Observe that coherently-demodulated QPSK outperforms coherently-demodulated DQPSK over the whole range of $E_b/N_0$. At high SNR (i.e., high $E_b/N_0$), the difference in $E_b/N_0$ needed to achieve the same symbol error probability is quite small, about 0.2 dB only.


[Plot: P[symbol error] versus $E_b/N_0$ (dB) for coherently demodulated QPSK and coherently demodulated DQPSK.]

Figure 10.13

P10.12 (a) 16-QAM symbols are 4-bit sequences. Consider 2 contiguous symbols
Eb/N0 (dB)
&
bk 7bk 6bk 5bk 4 bk 3bk 2bk 1bk

(k 1)Ts kTs Ts = 4Tb

Arbitrarily, let bk1 bk be the differentially encoded bits, which determine the transmitted
phase, i.e.,
en

bk1 bk
00 Transmitted phase = Previous xmitted phase
01 Transmitted phase = Previous xmitted phase +/2
uy

01 Transmitted phase = Previous xmitted phase +


01 Transmitted phase = Previous xmitted phase +3/2 (or /2)

(b) The remaining 2 bits, b_{k−3}b_{k−2}, amplitude modulate the transmitted sinusoid by ±Δ/2, ±3Δ/2. Write the transmitted sinusoid as √(1/Ts) cos(2πfc t + θk), where θk = π/4, 3π/4, 5π/4, 7π/4 radians (θk is the relative phase as determined by b_k b_{k−1}). In the symbol interval ((k−1)Ts, kTs) the transmitted signal lies in the signal space as follows:

[Figure 10.14: 16-QAM signal space, with the amplitude bits b_{k−2}b_{k−3} labelling the four points of each quadrant at the ±Δ/2, ±3Δ/2 levels; φI(t) = √(2/Ts) cos(2πfc t), φQ(t) = √(2/Ts) sin(2πfc t).]

Sh
Remark: Which quadrant the transmitted signal lies in depends on what the differential
encoding tells one.
(c) 64-QAM has 6-bit symbols b_{k−5}b_{k−4}b_{k−3}b_{k−2}b_{k−1}b_k. Again we let b_{k−1}b_k be the differentially encoded bits, while the bits b_{k−5}b_{k−4}b_{k−3}b_{k−2} amplitude modulate the transmitted carrier by ±Δ/2, ±3Δ/2, ±5Δ/2, ±7Δ/2. Showing only one quadrant of the transmitted signal space:
&
[Figure 10.15: One quadrant of the 64-QAM signal space; to map the bits b_{k−5}b_{k−4}b_{k−3}b_{k−2} to the 16 points use a Gray code mapping; φI(t) = √(2/Ts) cos(2πfc t), φQ(t) = √(2/Ts) sin(2πfc t).]

Again the transmitted signal space occupies all 4 quadrants. The signal points in the other 3 quadrants are rotated versions of the ones shown in the first quadrant. The bits b_{k−5}b_{k−4}b_{k−3}b_{k−2} would amplitude modulate the carrier √(1/Ts) cos(2πfc t + θk), where θk is the differential phase determined by b_{k−1}b_k.


(d) To generalize to any square QAM, proceed as above. Simply take the first 2 bits and differentially encode the phase. The remaining bits then amplitude modulate the carrier. Out of curiosity: which, if any, of the 16-QAM constellations in Fig. 8.17 lend themselves to differential encoding (against the kπ/2 phase ambiguity)?

(e) Modulator:

[Figure 10.16: Modulator block diagram. The bits b_{k−3}b_{k−2}b_{k−1}b_k are serial-to-parallel converted; b_{k−1}b_k drive the differential mapper, whose NRZ-L outputs d_{k,I}, d_{k,Q} ∈ {+1, −1} select the quadrant, while the remaining bits drive PAM modulators with levels ±Δ/2, ±3Δ/2; the I and Q rails multiply √(2/Ts) cos(2πfc t) and √(2/Ts) sin(2πfc t) and are summed to form sT(t).]

Demodulator: The block diagram consists of 2 parts. One of these demodulates the differential bits b_k b_{k−1}, and is that of P10.10(g), where one looks at the signal over the 2 symbol intervals from (k−2)Ts to kTs. Decide which of the 16 regions the (4) sufficient statistics fall in and then determine the differential bits b̂_k b̂_{k−1}.
The second part demodulates the amplitude bits. It carves up the signal space of (b) using a minimum-distance criterion (as we are in AWGN).

[Figure 10.17: Decision boundaries for the amplitude bits: the (r_{k,I}, r_{k,Q}) plane is carved into regions by minimum distance, with the amplitude-bit labels of Fig. 10.14 around the ±Δ/2, ±3Δ/2 levels.]



P10.13 (a) The received signal space has 4 basis functions, namely

φ1,I(t) = √(2/Tb) cos(2πf1 t);   (10.16)
φ1,Q(t) = √(2/Tb) sin(2πf1 t);   (10.17)
φ2,I(t) = √(2/Tb) cos(2πf2 t);   (10.18)
φ2,Q(t) = √(2/Tb) sin(2πf2 t).   (10.19)
In terms of these basis functions the received signal can be written as

0T: r(t) = √E1 φ1,I(t) + √E2 nF,I φ1,I(t) + √E2 nF,Q φ1,Q(t) + w(t),   (10.20)
1T: r(t) = √E1 φ2,I(t) + √E2 nF,I φ2,I(t) + √E2 nF,Q φ2,Q(t) + w(t),   (10.21)

where the fading parameters nF,I = α cos θ, nF,Q = α sin θ are statistically independent Gaussian random variables, zero mean, variance σF²/2.
Project r(t) onto the 4 basis functions to obtain a set of sufficient statistics:

0T: r1 = √E1 + √E2 nF,I + w1,I        1T: r1 = w1,I
    r2 = √E2 nF,Q + w1,Q                  r2 = w1,Q
    r3 = w2,I                             r3 = √E1 + √E2 nF,I + w2,I
    r4 = w2,Q                             r4 = √E2 nF,Q + w2,Q

where w1,I, w1,Q, w2,I, w2,Q, due to the AWGN, are the usual statistically independent, zero-mean Gaussian random variables, variance σ² ≡ N0/2. Note that whether 0T or 1T the sufficient statistics r1, r2, r3, r4 are statistically independent Gaussian random variables.
(b) The LRT is therefore (as usual, bits are assumed equally probable):

∏_{i=1}^{4} f(ri|1T) / ∏_{i=1}^{4} f(ri|0T)  ≷  1   (decide 1D if the ratio exceeds 1, else 0D)   (10.22)

Ignoring the √2π factor, which cancels out top and bottom, and letting σt² ≡ E2σF²/2 + N0/2, the LRT is



(r3 E1 )2 r2
r2 r2 42
1 12 1 22 1 2t2 1 1D
e e t e e
2 2 2t
t

(r1 E1 )2 r2
R 1 (10.23)
2 r2 r2
1 2t2 1 2t2 1 32 1 42 0D
t e t e e e
2 2

Canceling and taking ln, we get:



(r1 E1 )2 r22 r23 r24 1D r21 r22 (r3 E1 )2 r24
+ + + R + + + (10.24)
t2 t2 2 2 0D 2 2 t2 t2


Rewrite this as:

r3²/σ² − (r3 − √E1)²/σt² + r4²/σ² − r4²/σt²  ≷  r1²/σ² − (r1 − √E1)²/σt² + r2²/σ² − r2²/σt²   (10.25)

After some algebra, namely consisting of completing the square by adding and subtracting terms, the decision rule can be written as:

(r3 + √E1/(cσt²))² + r4²  ≷  (r1 + √E1/(cσt²))² + r2²   (10.26)

where c ≡ 1/σ² − 1/σt² > 0. Also note that cσt² = E2σF²/(2σ²), so √E1/(cσt²) = 2σ²√E1/(E2σF²) ≡ m. Therefore the decision rule can be written as

(r3 + m)² + r4²  ≷  (r1 + m)² + r2².

The parameter m reflects the direct LOS path, which results in a DC received component.
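The completing-the-square step can be sanity-checked numerically; the sketch below (with arbitrary, made-up parameter values) verifies that the simplified rule orders every test point exactly as the log-likelihood form it was derived from:

```python
import math
import random

# Made-up parameters for illustration only
E1, E2, sF2, N0 = 2.0, 1.0, 0.5, 0.2
s2 = N0 / 2.0                       # sigma^2
st2 = E2 * sF2 / 2.0 + N0 / 2.0     # sigma_t^2, Eq. preceding (10.23)
c = 1.0 / s2 - 1.0 / st2            # c > 0
m = math.sqrt(E1) / (c * st2)       # the DC term m

def llr_form(r1, r2, r3, r4):
    # Left minus right side of (10.24); positive decides 1D
    lhs = (r1 - math.sqrt(E1)) ** 2 / st2 + r2 ** 2 / st2 + r3 ** 2 / s2 + r4 ** 2 / s2
    rhs = r1 ** 2 / s2 + r2 ** 2 / s2 + (r3 - math.sqrt(E1)) ** 2 / st2 + r4 ** 2 / st2
    return lhs - rhs

def square_form(r1, r2, r3, r4):
    # Completed-square rule; positive decides 1D
    return ((r3 + m) ** 2 + r4 ** 2) - ((r1 + m) ** 2 + r2 ** 2)
```

The two statistics differ only by the positive factor c and constants that cancel, so their signs always agree.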

P10.14 (a-d) Though it is easy enough to find the characteristic functions of x² and y², i.e., E{e^{jνx²}} and E{e^{jνy²}} where x ~ N(m, σ²) and y ~ N(0, σ²), and then multiply the 2 characteristic functions, the difficulty is in finding the inverse transform. Specifically,

Φ_{x²}(ν) = E{e^{jνx²}} = ∫_{−∞}^{∞} e^{jνx²} (1/√(2πσ²)) e^{−(x−m)²/2σ²} dx = (1/√(1 − j2νσ²)) e^{jνm²/(1−j2νσ²)}   (10.27)

and

Φ_{y²}(ν) = 1/√(1 − j2νσ²)  ⇒  Φ_z(ν) = e^{jνm²/(1−j2νσ²)}/(1 − j2νσ²)

f_z(z) = (1/2π) ∫_{−∞}^{∞} [e^{jνm²/(1−j2νσ²)}/(1 − j2νσ²)] e^{−jνz} dν

Unfortunately, we are not able to perform this integration.

Well, if one fails in one approach, try another one. Consider the random variable z = √(x² + y²) where x, y are statistically independent Gaussian random variables, common variance σ², and means mx, my, respectively. First let us find the probability distribution function Fz(z) = P[√(x² + y²) ≤ z]. Graphically this is finding the probability that (x, y) falls in the region indicated in Fig. 10.18.

[Figure 10.18: The region Z, a disc of radius z centred at the origin of the (x, y) plane: √(x² + y²) ≤ z.]

This probability is

∫∫_Z f_{xy}(x, y) dx dy = (1/2πσ²) ∫∫_Z e^{−(x−mx)²/2σ²} e^{−(y−my)²/2σ²} dx dy   (10.28)

Now change to polar coordinates: ρ² = x² + y²; dx dy = ρ dρ dθ; x = ρ cos θ; y = ρ sin θ, and region Z is 0 ≤ ρ ≤ z; 0 ≤ θ < 2π.
&
Fz(z) = ∫_{ρ=0}^{z} ∫_{θ=0}^{2π} (1/2πσ²) e^{−ρ²/2σ²} e^{ρ(mx cos θ + my sin θ)/σ²} e^{−(mx²+my²)/2σ²} ρ dρ dθ

      = e^{−(mx²+my²)/2σ²} ∫_{ρ=0}^{z} (ρ/σ²) e^{−ρ²/2σ²} [ (1/2π) ∫_{θ=0}^{2π} e^{ρ(mx cos θ + my sin θ)/σ²} dθ ] dρ   (10.29)

Write mx cos θ + my sin θ as √(mx² + my²) cos(θ − tan⁻¹(my/mx)) and recognize the inner integral to be I0(ρ√(mx² + my²)/σ²). Therefore

Fz(z) = e^{−(mx²+my²)/2σ²} ∫_{ρ=0}^{z} (ρ/σ²) e^{−ρ²/2σ²} I0(ρ√(mx² + my²)/σ²) dρ   (10.30)
Ng

But what is desired is the probability density function fz(z) = dFz(z)/dz. Differentiating the expression for Fz(z) with respect to z (using Leibnitz's rule), we get

fz(z) = (z/σ²) e^{−(z²+mx²+my²)/2σ²} I0(z√(mx² + my²)/σ²)   (of course it is ≡ 0 for z < 0)   (10.31)

which is (P10.10) with mx = m and my = 0.


(e) We have m² = KσF²/(1 + K) and σ² = σF²/(2(1 + K)). Substituting and simplifying by letting zn ≡ z/σF, one has

σF fz(zn) = 2zn(K + 1) e^{−(zn²(1+K) + K)} I0(2zn√(K(1 + K)))

Plots of σF fz(zn) with σF = 1 and various values of K are shown in Fig. 10.19. As K increases the pdf tends to a Gaussian pdf. This is expected since the fading becomes less and less of a factor.
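Remark: the normalized pdf is easy to check numerically. A Python sketch (the power-series `i0` is our own simple evaluation of the modified Bessel function, adequate here):

```python
import math

def i0(x):
    # Modified Bessel function I0 via its power series sum (x/2)^(2k)/(k!)^2
    total, term, k = 1.0, 1.0, 0
    while term > 1e-16 * total:
        k += 1
        term *= (x / 2.0) ** 2 / k ** 2
        total += term
    return total

def rician_norm(zn, K):
    # sigma_F * f_z(z_n) of part (e), with sigma_F = 1
    return (2.0 * zn * (K + 1) * math.exp(-(zn ** 2 * (1 + K) + K))
            * i0(2.0 * zn * math.sqrt(K * (1 + K))))
```

For K = 0 the expression reduces to the Rayleigh pdf 2zn e^{−zn²}, and for any K it integrates to 1, as a pdf must.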
[Figure 10.19: Normalized Rician distribution (with σF = 1): fz(zn) versus zn for K = 0 (Rayleigh), 1, 4 and 16.]


Chapter 11

Advanced Modulation Techniques

P11.13 (a) Because in general s1(t) and s2(t) are correlated, two orthonormal basis functions φ1(t) and φ2(t) are required to completely represent them. Since the four signals y1(t), y2(t), y3(t) and y4(t) are just linear combinations of s1(t) and s2(t), they can also be represented as linear combinations of φ1(t) and φ2(t). Thus the dimensionality of the four signals is two. One possible set of orthonormal functions φ1(t) and φ2(t) is given below (using the Gram-Schmidt procedure):

φ1(t) = s1(t)
φ2(t) = (1/√(1 − ρ²)) s2(t) − (ρ/√(1 − ρ²)) s1(t)   (11.1)

(b) First the two signature waveforms s1(t) and s2(t) can be written in terms of the basis functions as follows:

s1(t) = φ1(t)
s2(t) = ρφ1(t) + √(1 − ρ²) φ2(t)   (11.2)

It follows that the four signals {y1(t), y2(t), y3(t), y4(t)} can be written as follows:

y1(t) = +s1(t) + s2(t) = (1 + ρ)φ1(t) + √(1 − ρ²) φ2(t)
y2(t) = +s1(t) − s2(t) = (1 − ρ)φ1(t) − √(1 − ρ²) φ2(t)
y3(t) = −s1(t) + s2(t) = −(1 − ρ)φ1(t) + √(1 − ρ²) φ2(t)
y4(t) = −s1(t) − s2(t) = −(1 + ρ)φ1(t) − √(1 − ρ²) φ2(t)   (11.3)

These four signals are plotted in Fig. 11.1(a).

(c) The joint optimum receiver for this CDMA system is a minimum distance receiver, which jointly demodulates the transmitted bits of both user 1 and user 2 based on the signal (y1(t), y2(t), y3(t) or y4(t)) that is closest to r(t). The decision boundaries and the decision regions for this minimum distance receiver when ρ = 0.5 are shown in Fig. 11.1(b). Note that for ρ = 0.5 one has √(1 − ρ²) = 0.866.

[Figure 11.1: Two-user CDMA: (a) signal space plot of the four possible received signals in the absence of the background noise; (b) decision regions of the jointly minimum distance receiver when ρ = 0.5.]


Chapter 12

Synchronization
P12.1 The attenuation α affects the received energy, Eb, which is now α²Eb. The error probabilities follow from equations (12.1), (12.2) with Eb → α²Eb:

BPSK:  P[bit error] = Q(√(2Eb/N0) cos θ),   (12.1)

QPSK:  P[bit error] = (1/2) Q(√(2Eb/N0)(cos θ − sin θ)) + (1/2) Q(√(2Eb/N0)(cos θ + sin θ)).   (12.2)


[Figure 12.1: BPSK P[bit error] versus Eb/N0 (dB) for θ = 0°, 10°, 20°, 30°; in each panel α = [1 : −0.02 : 0.8] (curves from left to right).]

[Figure 12.2: QPSK P[bit error] versus Eb/N0 (dB) for θ = 0°, 10°, 20°, 30°; in each panel α = [1 : −0.02 : 0.8] (curves from left to right).]
The Matlab plots allow one to make design judgements as to the margins needed for trans-
mitted power and phase locked loop performance based usually on worst-case scenarios.
Note that this is always somewhat of a subjective judgement. For instance what error
probability does one choose to make the decision(s)?
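For instance, the margins can also be read off numerically rather than from the curves. A Python sketch of (12.1), (12.2) with Eb → α²Eb (function names are ours):

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def bpsk_pe(ebn0_db, alpha=1.0, theta=0.0):
    # Eq. (12.1) with Eb replaced by alpha^2 * Eb; theta in radians
    a = math.sqrt(2.0 * alpha ** 2 * 10.0 ** (ebn0_db / 10.0))
    return Q(a * math.cos(theta))

def qpsk_pe(ebn0_db, alpha=1.0, theta=0.0):
    # Eq. (12.2) with Eb replaced by alpha^2 * Eb
    a = math.sqrt(2.0 * alpha ** 2 * 10.0 ** (ebn0_db / 10.0))
    return 0.5 * Q(a * (math.cos(theta) - math.sin(theta))) \
         + 0.5 * Q(a * (math.cos(theta) + math.sin(theta)))
```

With θ = 0 the QPSK expression collapses to the BPSK one, and any attenuation (α < 1) or phase error (θ > 0) raises the error probability, which is what the worst-case margin picks up.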
en

P12.2 (a) Refer to Figure 12.3. Note that x1 = d − (i − 1)Δ, where d = r cos(θ + φ):

x1 = (Δ/2)[(2i − 1)² + (2j − 1)²]^{1/2} cos(tan⁻¹((2j − 1)/(2i − 1)) + θ) − (i − 1)Δ

x2 + x1 = Δ  ⇒  x2 = iΔ − (Δ/2)[(2i − 1)² + (2j − 1)²]^{1/2} cos(tan⁻¹((2j − 1)/(2i − 1)) + θ)

Similarly, d1 = r sin(θ + φ) and y2 = d1 − (j − 1)Δ:

y2 = (Δ/2)[(2i − 1)² + (2j − 1)²]^{1/2} sin(tan⁻¹((2j − 1)/(2i − 1)) + θ) − (j − 1)Δ

y1 = Δ − y2 = jΔ − (Δ/2)[(2i − 1)² + (2j − 1)²]^{1/2} sin(tan⁻¹((2j − 1)/(2i − 1)) + θ)
(b) The error probabilities are determined by finding a volume under a 2-dimensional Gaussian pdf in the appropriate regions. Because the 2 Gaussian random variables (nI, nQ if you wish) are statistically independent (each of variance N0/2 and a mean determined by the signal point under consideration), the volume(s) are a product of 2 Q-functions and basically determined by inspection from the geometrical picture. Let σ² ≡ N0/2.

[Figure 12.3: Geometry of the rotated QAM point at ((2i−1)Δ/2, (2j−1)Δ/2): tan φ = (2j−1)/(2i−1), r = (Δ/2)√((2i−1)² + (2j−1)²), d = r cos(θ + φ), d1 = r sin(θ + φ).]

[Figure 12.4: Decision regions I, II, III, IV around an inner signal point, with distances x1, x2, y1, y2 to the decision boundaries.]

Inner signals (refer to Figure 12.4)

Need to find the volumes under the 2-D Gaussian pdf in the 4 regions I, II, III, IV. These are:

Volume I = Q(x1/σ)   (12.3)
Volume II = [1 − Q(x1/σ) − Q(x2/σ)] Q(y1/σ)   (12.4)
Volume III = Q(x2/σ)   (12.5)
Volume IV = [1 − Q(x1/σ) − Q(x2/σ)] Q(y2/σ)   (12.6)

The total volume is of course the sum, and therefore:

P_inner[symbol error] = Q(x1/σ) + Q(x2/σ) + [Q(y1/σ) + Q(y2/σ)][1 − Q(x1/σ) − Q(x2/σ)].   (12.7)

Outer-vertical signals (refer to Figure 12.5)

[Figure 12.5: Decision regions I, II, III around an outer-vertical signal point, with distances x1, y1, y2.]

Volume I = Q(x1/σ)   (12.8)
Volume II = [1 − Q(x1/σ)] Q(y1/σ)   (12.9)
Volume III = [1 − Q(x1/σ)] Q(y2/σ)   (12.10)

P_outer-V[symbol error] = Q(x1/σ) + [Q(y1/σ) + Q(y2/σ)][1 − Q(x1/σ)]   (12.11)

Outer-horizontal signals (refer to Figure 12.6)

[Figure 12.6: Decision regions I, II, III around an outer-horizontal signal point, with distances x1, x2, y2.]

Volume I = Q(y2/σ)   (12.12)
Volume II = [1 − Q(y2/σ)] Q(x1/σ)   (12.13)
Volume III = [1 − Q(y2/σ)] Q(x2/σ)   (12.14)

P_outer-H[symbol error] = Q(y2/σ) + [Q(x1/σ) + Q(x2/σ)][1 − Q(y2/σ)]   (12.15)


Corner signals (refer to Figure 12.7)

[Figure 12.7: Decision regions I, II around a corner signal point, with distances x1, y2.]

Volume I = Q(x1/σ)   (12.16)
Volume II = [1 − Q(x1/σ)] Q(y2/σ)   (12.17)

P_corner[symbol error] = Q(x1/σ) + Q(y2/σ)[1 − Q(x1/σ)]   (12.18)

(c) Consider the (i, j)th signal, i.e., the signal at ((2i − 1)Δ/2, (2j − 1)Δ/2). Its energy is Eij = (Δ²/4)(2i − 1)² + (Δ²/4)(2j − 1)² joules. (Note: phase rotation does not affect energy, therefore ignore θ.)
Es = 4 Σ_{i=1}^{NI} Σ_{j=1}^{NQ} Eij P[signal ij]     (4 quadrants; NI = 2^{λI}/2, NQ = 2^{λQ}/2; λ = λI + λQ, M = 2^λ; P[signal ij] = 1/M)

   = (4/M) Σ_{i=1}^{NI} Σ_{j=1}^{NQ} [(Δ²/4)(2i − 1)² + (Δ²/4)(2j − 1)²]   (joules/symbol)

   = (Δ²/M) [Σ_{i=1}^{NI} Σ_{j=1}^{NQ} (4i² − 4i + 1) + Σ_{i=1}^{NI} Σ_{j=1}^{NQ} (4j² − 4j + 1)]   (12.19)

Now

Σ_{k=1}^{n} k = n(n + 1)/2;   Σ_{k=1}^{n} k² = n(n + 1)(2n + 1)/6.

After some algebra and by realizing that Σ_{k=1}^{n} 1 = n, we get

Es = (Δ² NI NQ / 3M) [(4NI² − 1) + (4NQ² − 1)]   (12.20)

But NI NQ = 2^{λI−1} 2^{λQ−1} = 2^{λ−2} = M/4, so

Es = (Δ²/6)(2NI² + 2NQ² − 1)   and   Eb = Es/λ = (Δ²/6λ)(2NI² + 2NQ² − 1).

With square QAM, we have λI = λQ = λ/2 ⇒ NI² = NQ² = 2^{λ−2} = M/4. Therefore

Eb = (Δ²/6λ)(M − 1) = Δ²(M − 1)/(6 log₂M) joules/bit   (12.21)

or   Δ = [6 log₂M · Eb/(M − 1)]^{1/2}   (12.22)

Remark: Δ can be expressed as a function of Eb and M only for square QAM.
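The closed form for Es is easy to confirm by brute force over the constellation. A Python sketch (it assumes, as above, that the points sit at (±(2i−1)Δ/2, ±(2j−1)Δ/2)):

```python
def avg_symbol_energy(NI, NQ, delta=1.0):
    # Direct average of |point|^2 over all 4*NI*NQ equiprobable points
    energies = []
    for si in (+1, -1):
        for sj in (+1, -1):
            for i in range(1, NI + 1):
                for j in range(1, NQ + 1):
                    x = si * (2 * i - 1) * delta / 2.0
                    y = sj * (2 * j - 1) * delta / 2.0
                    energies.append(x * x + y * y)
    return sum(energies) / len(energies)

def closed_form(NI, NQ, delta=1.0):
    # Es = (Delta^2/6)(2*NI^2 + 2*NQ^2 - 1)
    return delta ** 2 / 6.0 * (2 * NI ** 2 + 2 * NQ ** 2 - 1)
```

For 16-QAM (NI = NQ = 2, Δ = 1) both give Es = 2.5, i.e., Es = (M − 1)Δ²/6 with M = 16.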


(d) The expression for P[symbol error] is quite lengthy. To simplify it, at least notationally, note that the distances x1, x2, y1, y2 are functions of i, j, which means the error probabilities in (b) are functions of i, j. So write P_inner[error] as P_inner[error, i, j], P_outer-V[error] as P_outer-V[error, i, j], and so on. Then, considering only 1 quadrant and noting that symbols are equally probable,

P[symbol error] = (4/M) { Σ_{i=1}^{NI−1} Σ_{j=1}^{NQ−1} P_inner[error, i, j] + Σ_{j=1}^{NQ−1} P_outer-V[error, i = NI, j]
                  + Σ_{i=1}^{NI−1} P_outer-H[error, i, j = NQ] + P_corner[error, i = NI, j = NQ] }   (12.23)

(e) Use the expression for Δ in terms of Eb/N0 derived in (c) and the general expression of (d) to guide you in writing the Matlab program.


P12.3 (a) See Fig. 12.8.

(b)

P[error] = Q(√(2Eb/N0) cos θ)   (12.24)

E{P[error]} = ∫_0^{2π} Q(√(2Eb/N0) cos θ) f(θ) dθ
            = ∫_0^{2π} Q(√(2Eb/N0) cos θ) (1/(2πI0(αm))) e^{αm cos θ} dθ
            = (1/(2πI0(αm))) ∫_0^{2π} Q(√(2Eb/N0) cos θ) e^{αm cos θ} dθ.   (12.25)

There is no closed-form integration available. Therefore resort to numerical integration; specifically, make use of the routine quadl in Matlab. Plots of P[bit error] for different values of αm are shown in Fig. 12.9.
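In place of Matlab's quadl, the average in (12.25) can be approximated with any quadrature. A Python sketch using the trapezoidal rule (the series-based `i0` is our own helper):

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def i0(x):
    # Modified Bessel function I0 via its power series
    total, term, k = 1.0, 1.0, 0
    while term > 1e-16 * total:
        k += 1
        term *= (x / 2.0) ** 2 / k ** 2
        total += term
    return total

def avg_pe(ebn0_db, alpha_m, n=4000):
    # Trapezoidal evaluation of Eq. (12.25) over (0, 2*pi)
    a = math.sqrt(2.0 * 10.0 ** (ebn0_db / 10.0))
    h = 2.0 * math.pi / n
    s = 0.0
    for k in range(n + 1):
        th = k * h
        w = 0.5 if k in (0, n) else 1.0
        s += w * Q(a * math.cos(th)) * math.exp(alpha_m * math.cos(th))
    return s * h / (2.0 * math.pi * i0(alpha_m))
```

For αm = 0 the phase is uniform and the average is exactly 1/2 (a coin flip), matching the observation below; as αm grows the result approaches the no-phase-error BER from above.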

[Figure 12.8: Tikhonov distribution f(θ) over (0, 2π) for αm = 0, 1, 10, 20.]

[Figure 12.9: P[bit error] versus Eb/N0 (dB) for αm = 0, 1, 10, 20, together with the no-phase-error curve.]

Note that for αm = 0, i.e., the uniform pdf case, the performance breaks down completely: it is equivalent to flipping a coin to make a decision. This is expected since the phase (which carries the information) is completely random.

P12.4 The demodulator looks like:

[Figure 12.10: Correlator demodulator: r(t) = √(2Eb/Tb) cos(2πfc t) + w(t) is multiplied by √(2/Tb) cos(2π(fc + Δf)t), integrated over (0, Tb) and sampled at t = Tb to give r.]

Performing the integration one gets:

r = √Eb [sin(2πΔf Tb)/(2πΔf Tb) + sin(2π(2fc + Δf)Tb)/(2π(2fc + Δf)Tb)] + w   (12.26)

where

w = ∫_0^{Tb} √(2/Tb) cos(2π(fc + Δf)t) w(t) dt

is a Gaussian r.v., zero-mean, with a variance that, strictly speaking, is not N0/2. We shall take it to be N0/2 as a very good approximation.
Remark: The interested reader may wish to show that the variance, E{w²}, is given by (N0/2)[1 + sin(4π(fc + Δf)Tb)/(4π(fc + Δf)Tb)]. Since fc is on the order of 10⁶ to 10⁹ while Tb is on the order of 10⁻³ to 10⁻⁴, the second term is neglected.
en

Therefore the error probability is given by:

P[bit error] = Q(√(2Eb/N0) · sin(2πΔf Tb)/(2πΔf Tb)),   (12.27)

where the double-frequency term of (12.26) is neglected for the signal part of r, since fc >> Δf.
From the plot, take Δf Tb < 0.1 (at an error rate of 10⁻⁵). Then if Tb ≈ 10⁻³ s, Δf < 0.1 × 10³ = 10² Hz, i.e., 100 Hz. So the oscillator specification would read, e.g., 10⁶ Hz ± 100 Hz. It needs to be quite accurate and stable. Note that as Tb becomes smaller (i.e., a higher bit rate rb) the allowable frequency deviation is larger.
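The spec can be checked by evaluating the degradation factor of (12.27) directly. A Python sketch (the dB-loss helper is ours):

```python
import math

def snr_factor(df_tb):
    # Amplitude factor sin(2*pi*df*Tb)/(2*pi*df*Tb) multiplying the Q argument
    x = 2.0 * math.pi * df_tb
    return 1.0 if x == 0 else math.sin(x) / x

def loss_db(df_tb):
    # The factor enters the Q argument (an amplitude), hence 20*log10
    return -20.0 * math.log10(snr_factor(df_tb))
```

At Δf Tb = 0.1 the loss is about 0.58 dB, which is the basis for the "take Δf Tb < 0.1" rule above.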

[Figure 12.11: P[bit error] versus Eb/N0 (dB) from (12.27) for Δf Tb = [0.01 : 0.02 : 0.20] (curves from left to right).]

P12.5 The demodulator's block diagram looks as shown in Fig. 12.12, where fi = f1 if 0T and fi = f2 if 1T.

[Figure 12.12: Two correlators: r(t) = √(2Eb/Tb) cos(2πfi t) + w(t) is correlated with √(2/Tb) cos(2π(f1 + Δf1)t) and with √(2/Tb) cos(2π(f2 + Δf2)t) over (0, Tb), sampled at t = Tb to give r1 and r2, which are compared to produce b̂k.]

The sufficient statistics r1, r2 are:

0T: r1 = (2√Eb/Tb) ∫_0^{Tb} cos(2πf1 t) cos(2π(f1 + Δf1)t) dt + ∫_0^{Tb} √(2/Tb) cos(2π(f1 + Δf1)t) w(t) dt

⇒  r1 = √Eb sin(2π(2f1 + Δf1)Tb)/(2π(2f1 + Δf1)Tb) + √Eb sin(2πΔf1 Tb)/(2πΔf1 Tb) + w1   (12.28)

An easy assumption to make is to ignore the 1st term since f1 Tb >> 1 (f1 is typically 10⁶ to 10⁹ Hz). Therefore,

r1 = √Eb sin(2πΔf1 Tb)/(2πΔf1 Tb) + w1.

On the other hand,

r2 = (2√Eb/Tb) ∫_0^{Tb} cos(2πf1 t) cos(2π(f2 + Δf2)t) dt + ∫_0^{Tb} √(2/Tb) cos(2π(f2 + Δf2)t) w(t) dt

   = √Eb sin(2π(f1 + f2 + Δf2)Tb)/(2π(f1 + f2 + Δf2)Tb) + √Eb sin(2π(f2 − f1 + Δf2)Tb)/(2π(f2 − f1 + Δf2)Tb) + w2   (12.29)

Again the 1st term is easily ignored. To ignore the 2nd term is more problematical. Let f2 − f1 = 1/Tb, the minimum separation for (noncoherent) orthogonality. Then the term becomes √Eb sin(2πΔf2 Tb)/(2π(1 + Δf2 Tb)). If Δf2 Tb << 1, say < 0.1, then this term is < 0.09√Eb and is ignored in the plots. Therefore r2 = w2.
uy

Similarly for 1T:

r1 = √Eb sin(2π(f2 + f1 + Δf1)Tb)/(2π(f2 + f1 + Δf1)Tb) + √Eb sin(2π(f1 − f2 + Δf1)Tb)/(2π(f1 − f2 + Δf1)Tb) + w1 ≈ w1

r2 = √Eb sin(2πΔf2 Tb)/(2πΔf2 Tb) + w2   (12.30)

where again the double-frequency term is ignored.


The received signal space looks as follows:

[Figure 12.13: Received signal space: 0T lies at (√Eb sin(2πΔf1Tb)/(2πΔf1Tb), 0) and 1T at (0, √Eb sin(2πΔf2Tb)/(2πΔf2Tb)); the decision boundary is the line r1 = r2, with distances d1, d2 from the two signal points to the boundary.]

The terms w1, w2 are zero-mean Gaussian random variables. However, because of the frequency offsets they are not of variance N0/2 (see the solution to P12.4). Further, they are correlated because cos(2π(f1 + Δf1)t) and cos(2π(f2 + Δf2)t) are not orthogonal. However, this correlation is

sin(2π(f2 + f1 + Δf2 + Δf1)Tb)/(2π(f2 + f1 + Δf2 + Δf1)Tb) + sin(2π(f2 − f1 + Δf2 − Δf1)Tb)/(2π(f2 − f1 + Δf2 − Δf1)Tb)
  ≈ sin(2π(Δf2 − Δf1)Tb)/(2π(1 + (Δf2 − Δf1)Tb))     (note that f2 − f1 = 1/Tb)   (12.31)
en

And since it is reasonable to assume that Δf2 ≈ Δf1, the correlation is negligible and ignored, i.e., w1, w2 are treated as zero-mean statistically independent Gaussian random variables of variance N0/2.
Therefore

P[bit error] = (1/2) Q(d1/√(N0/2)) + (1/2) Q(d2/√(N0/2))   (12.32)

If we assume (which we do for the plots) that Δf2 ≈ Δf1 ≡ Δf, then d1 ≈ d2 ≡ d and

P[bit error] = Q(d/√(N0/2)) = Q(√(Eb/N0) · sin(2πΔf Tb)/(2πΔf Tb))   (12.33)

where d = √Eb [sin(2πΔf Tb)/(2πΔf Tb)] cos(45°). Plots are those of P12.4 except for a 3 dB (loss) factor. See Fig. 12.14.
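The 3 dB relationship between (12.27) and (12.33) can be verified numerically (a Python sketch; function names are ours):

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def sinc_factor(df_tb):
    x = 2.0 * math.pi * df_tb
    return 1.0 if x == 0 else math.sin(x) / x

def pe_bpsk_offset(ebn0_db, df_tb):
    # Eq. (12.27): Q(sqrt(2*Eb/N0) * sinc factor)
    return Q(math.sqrt(2.0 * 10.0 ** (ebn0_db / 10.0)) * sinc_factor(df_tb))

def pe_fsk_offset(ebn0_db, df_tb):
    # Eq. (12.33): Q(sqrt(Eb/N0) * sinc factor), i.e. Eb effectively halved
    return Q(math.sqrt(10.0 ** (ebn0_db / 10.0)) * sinc_factor(df_tb))
```

Shifting the BPSK curve right by 10·log10(2) ≈ 3.01 dB reproduces the FSK curve exactly, at any offset.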

[Figure 12.14: P[bit error] versus Eb/N0 (dB) from (12.33) for Δf1Tb ≈ Δf2Tb ≈ Δf Tb = [0.01 : 0.02 : 0.20] (curves from left to right).]

P12.6 The block diagram looks as follows:

[Figure 12.15: PLL model: θ(t) enters a phase detector producing vin(t) = g(φ(t)); the loop filter h(t) gives v0(t) = h(t) ∗ g(φ(t)); the VCO produces θ̂(t) = K_VCO ∫^t v0(λ) dλ, which is fed back to the phase detector.]

Now φ(t) = θ(t) − θ̂(t), or θ̂(t) + φ(t) = θ(t). Thus

dθ̂(t)/dt + dφ(t)/dt = dθ(t)/dt.

But dθ̂(t)/dt = K_vco v0(t) = K_vco h(t) ∗ g[φ(t)]. Therefore

dφ(t)/dt + K_vco h(t) ∗ g[φ(t)] = dθ(t)/dt.

P12.7 The impulse response of the loop filter of Figure 12.7(a) is h(t) = δ(t). Therefore the differential equation becomes

dφ(t)/dt + K_vco g(φ) = dθ(t)/dt,   or, with dθ(t)/dt = Δω,   dφ/dt = Δω − K_vco g(φ).

(a) The sawtooth characteristic looks as follows:

[Figure 12.16: Sawtooth phase-detector characteristic g(φ): linear from −Vd at φ = −π to +Vd at φ = π, repeating with period 2π.]

The phase plane is

[Figure 12.17: Phase plane dφ/dt versus φ for the sawtooth characteristic: dφ/dt swings between Δω − K_VCO Vd and Δω + K_VCO Vd; stable operating points alternate with unstable operating points; steady-state error = πΔω/(K_VCO Vd).]

Remark: Vd/π can be called the gain of the phase detector.


(b) The triangular characteristic looks as follows:

[Figure 12.18: Triangular phase-detector characteristic g(φ): rising from −Vd at φ = −π/2 to +Vd at φ = π/2, then falling back to −Vd at φ = 3π/2, period 2π.]

The phase plane is

[Figure 12.19: Phase plane dφ/dt versus φ for the triangular characteristic: dφ/dt swings between Δω − K_VCO Vd and Δω + K_VCO Vd; stable and unstable operating points alternate; steady-state error = πΔω/(2K_VCO Vd).]

Remark: One feature of the above 2 characteristics is that their implementation is accomplished by digital circuits. See Gardner, Phaselock Techniques, for a discussion.



P12.8 (a)

[Figure 12.20: Loop filter: vin(t) = sin φ(t) drives the series resistor R; v0(t) is taken across the capacitor C; i(t) is the loop current.]

Using i(t) = C dv0(t)/dt and KVL, we have

vin(t) = v0(t) + RC dv0(t)/dt   (12.34)
Now (refer to Fig. 12.10b): φ(t) = θ(t) − θ̂(t) and θ̂(t) = K_loop ∫^t v0(λ) dλ. Therefore

dθ̂(t)/dt = K_loop v0(t);   dφ(t)/dt = dθ(t)/dt − dθ̂(t)/dt;   d²φ(t)/dt² = d²θ(t)/dt² − d²θ̂(t)/dt².

Therefore

RC dv0(t)/dt + v0(t) = (RC/K_loop) d²θ̂(t)/dt² + (1/K_loop) dθ̂(t)/dt = sin φ(t)

(RC/K_loop)[d²θ(t)/dt² − d²φ(t)/dt²] + (1/K_loop)[dθ(t)/dt − dφ(t)/dt] = sin φ(t)

or

d²φ(t)/dt² + (1/RC) dφ(t)/dt + (K_loop/RC) sin φ(t) = d²θ(t)/dt² + (1/RC) dθ(t)/dt

(b) The state variables are φ(t) and dφ(t)/dt. We can write the state equation(s) in two different ways. The standard approach is to define x = φ(t) and y = dφ(t)/dt as the state variables. Then the two state equations are:

dx/dt = dφ/dt = y
dy/dt = d²φ/dt² = −(1/RC) dφ(t)/dt − (K_loop/RC) sin φ(t) = −(1/RC)[y + K_loop sin x]

One can then solve (numerically) the above two 1st-order nonlinear differential equations for the state variables x, y (i.e., φ, φ̇) and plot the phase plane, namely φ̇ versus φ.
The other approach is to obtain directly a differential equation for φ̇ with φ as the independent variable and solve this numerically. Proceeding as in the text (see page 518) we get:

dφ̇/dφ = −1/RC − (K_loop sin φ)/(RC φ̇).

Remark: For the state (or phase) plane we assumed that the forcing function θ(t) is zero; we have only non-zero initial conditions.
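A minimal sketch (Python with a hand-rolled RK4 step, rather than Matlab's ode45) of integrating the state equations to trace one phase-plane trajectory:

```python
import math

def trajectory(x0, y0, k_loop, rc=1.0, dt=0.01, steps=2000):
    """Integrate x' = y, y' = -(y + k_loop*sin x)/rc with classical RK4."""
    def f(x, y):
        return y, -(y + k_loop * math.sin(x)) / rc
    xs, ys = [x0], [y0]
    x, y = x0, y0
    for _ in range(steps):
        k1 = f(x, y)
        k2 = f(x + 0.5 * dt * k1[0], y + 0.5 * dt * k1[1])
        k3 = f(x + 0.5 * dt * k2[0], y + 0.5 * dt * k2[1])
        k4 = f(x + dt * k3[0], y + dt * k3[1])
        x += dt / 6.0 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        y += dt / 6.0 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        xs.append(x)
        ys.append(y)
    return xs, ys
```

Plotting ys against xs gives one curve of the phase plane; trajectories started near a stable operating point spiral into φ = 2πn, φ̇ = 0, as the damped-pendulum form of the equation suggests.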


(c) The phase plane plots are shown in Fig. 12.21 for K_loop = 1/2 and K_loop = 2 and with RC = 1.

[Figure 12.21: Phase planes (derivative of the phase error dφ/dτ versus phase error φ, in radians) for RC = 1 with K_loop = 1/2 (left) and K_loop = 2 (right).]

P12.9 Use the Laplace transform to obtain the transfer function and then convert to the time domain using the relationship between operators s ↔ d/dt.

[Figure 12.22: Loop filter: Vin(s) drives the series resistor R1; V0(s) is taken across the series combination of R and 1/sC.]

V0/Vin = (R + 1/sC)/(R + R1 + 1/sC) = (1 + sRC)/(1 + s(R + R1)C)

(R1 + R)C s V0(s) + V0(s) = RC s Vin(s) + Vin(s)

(R1 + R)C dv0(t)/dt + v0(t) = RC dvin(t)/dt + vin(t)

As in P12.8, using the relationships vin(t) = sin φ(t); θ̂(t) = K_loop ∫^t v0(λ) dλ; φ(t) = θ(t) − θ̂(t), and observing that dvin(t)/dt = cos φ(t) · dφ(t)/dt, we get

d²φ(t)/dt² + [(1 + RC cos φ(t))/((R1 + R)C)] dφ(t)/dt + [K_loop/((R1 + R)C)] sin φ(t) = d²θ(t)/dt² + [1/((R1 + R)C)] dθ(t)/dt
dt (R1 + R)C dt (R1 + R)C dt (R1 + R)C dt

dy
d
The state equations are obtained as follows. Let x = and y = dt be the state variables.
Then the two state equations are:
dx
=y
dt

we
dy 1 + RC cos x Kloop
= y sin x (12.35)
dt (R1 + R)C (R1 + R)C

Or the differential equation for as a function of is

d 1 + RC cos Kloop sin

Sh
= . (12.36)
d (R1 + R)C (R1 + R)C

Remarks:

(i) As a quick (partial) check, let R = 0 and R1 → R. Then equations (12.35), (12.36) should agree with those of P12.8.
(ii) One can write the state equations (12.35) with a non-zero forcing function, if desired.
(iii) One can use numerical methods to solve the nonlinear 1st-order differential equations of (12.35) to obtain φ, φ̇ and plot φ̇ versus φ. It seems preferable, however, to solve (12.36) and obtain φ̇ as a function of φ. Why?
obtain as a function of . Why?
en

The phase plane plots are shown in Fig. 12.23 for K_loop = 1/2 and K_loop = 2 and with R1C = RC = 1.

[Figure 12.23: Phase planes (dφ/dτ versus φ, in radians) for R1C = RC = 1 with K_loop = 1/2 (left) and K_loop = 2 (right).]

Remark: For the interested reader, a generic Matlab code to generate the phase plane plots of P12.9 is given below.

t_range=[0:0.05:20];
d_phase_error=[-3.5:0.5:3.5, -10^(-4), 10^(-4)];
PLL=inline('[-0.5*(1+cos(y(2)))*y(1)-(2)*0.5*sin(y(2)); y(1)]','t','y');
% Set R_1C=RC=1; the parameter K_loop is entered in this expression
for i=1:length(d_phase_error);
    if d_phase_error(i)<0
        phase_error=pi;
    else
        phase_error=-pi;
    end
    ini_con=[d_phase_error(i), phase_error];
    [T,Y]=ode45(PLL, t_range, ini_con);
    x=Y(:,2); fx=Y(:,1);
    figure(1);
    if d_phase_error(i)<0
        plot(x, fx, 'k', 'linewidth', 0.5, 'linestyle', '--'); hold on;
    else
        plot(x, fx, 'k', 'linewidth', 0.5, 'linestyle', '-'); hold on;
    end
end


P12.10 The transfer function of the inverting (ideal) operational amplifier is

V0(s)/Vin(s) = −Zf(s)/Zin(s) = −(R + 1/sC)/R1 = −(R/R1)(1 + 1/(RCs)).   (12.37)

Let a ≡ 1/RC and R1 = R; we have the desired transfer function of H(s) = 1 + a/s. The negative sign is easily accounted for by inserting a unity-gain inverting amplifier.

P12.11 The question is what straight line to use. Let us use the simplest one. Choose its slope to be that of sin φ at the origin, which is (d sin φ/dφ)|_{φ=0} = 1. Over the range (−π/6, π/6) the maximum error occurs at ±π/6 and is π/6 − sin(π/6) = 0.024 ⇒ % error = 0.024/(π/6) × 100 = 4.5%.
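The 4.5% figure checks out numerically (a small Python sketch; the helper name is ours):

```python
import math

def rel_err_percent(phi):
    # Relative error of the unit-slope line phi versus sin(phi), in percent
    return abs(phi - math.sin(phi)) / abs(phi) * 100.0
```

The error grows monotonically away from the origin, so the endpoint ±π/6 is indeed the worst case over the stated range.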

P12.12 (a) θ̂(s) = [K_loop H(s)/s] φ(s);  φ(s) = θ(s) − θ̂(s). Hence

θ̂(s) = [K_loop H(s)/s][θ(s) − θ̂(s)]  ⇒  θ̂(s) = {[K_loop H(s)/s]/[1 + K_loop H(s)/s]} θ(s) ≡ T(s) θ(s).

For the reader knowledgeable in control theory, the above result follows immediately by considering θ̂(s) as the output of a (linear) unity-feedback system with K_loop H(s)/s as the forward loop transfer function.

(b) φ(s) = θ(s) − θ̂(s) = θ(s) − T(s)θ(s) = [1 − T(s)] θ(s).

P12.13

θ(s) = Δω/s² + θ0/s;   T(s) = K_loop/(s + K_loop);   1 − T(s) = s/(s + K_loop)

φ(s) = Δω/(s(s + K_loop)) + θ0/(s + K_loop) = Δω/(K_loop s) − Δω/(K_loop(s + K_loop)) + θ0/(s + K_loop)

φ(t) = (Δω/K_loop)(1 − e^{−K_loop t}) u(t) + θ0 e^{−K_loop t} u(t).

The steady-state error is Δω/K_loop, which agrees.

Matlab work to be done for φ(0) = π/4, π/10; Δωn = 0.1, 0.5. For the plot of the transient response, write the error expression in terms of Δωn ≡ Δω/K_loop and tn ≡ K_loop t. Note that θ0 = φ(0), so φ(t) = Δωn(1 − e^{−tn}) + φ(0) e^{−tn}.
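The normalized transient is simple to probe or plot (a Python sketch of the expression just derived):

```python
import math

def phase_error(tn, dwn, phi0):
    # phi(tn) = dwn*(1 - exp(-tn)) + phi(0)*exp(-tn), with tn = K_loop*t, dwn = dw/K_loop
    return dwn * (1.0 - math.exp(-tn)) + phi0 * math.exp(-tn)
```

At tn = 0 it returns the initial phase error φ(0), and for large tn it settles at the steady-state value Δωn.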
P12.14 From P12.12 we have φ(s) = [1 − T(s)] θ(s), where T(s) = [K_loop H(s)/s]/[1 + K_loop H(s)/s]. With H(s) = 1 + a/s,

1 − T(s) = s/(s + K_loop H(s)) = s²/(s² + K_loop(s + a))

φ(s) = [s²/(s² + K_loop(s + a))][Δω/s² + θ0/s] = (Δω + sθ0)/(s² + K_loop(s + a))

lim_{s→0} sφ(s) = 0 = lim_{t→∞} φ(t) = steady-state error.

Result agrees with that of the text (see discussion on page 517).

P12.15 (a) From P12.9 we have

H(s) = [R/(R1 + R)] (s + 1/RC)/(s + 1/((R1 + R)C))

K = R/(R1 + R);   a = 1/RC;   b = 1/((R1 + R)C).

(b) Absorbing the gain K into K_loop,

T(s) = K_loop H(s)/(s + K_loop H(s)) = K_loop(s + a)/(s(s + b) + K_loop(s + a)) = K_loop(s + a)/(s² + (K_loop + b)s + aK_loop)

1 − T(s) = s(s + b)/(s² + (K_loop + b)s + aK_loop)

φ(s) = [1 − T(s)] θ(s) = (s + b)(Δω + sθ0)/(s[s² + (K_loop + b)s + aK_loop])

lim_{s→0} sφ(s) = bΔω/(aK_loop) = [R/(R1 + R)] (Δω/K_loop)   (the steady-state error).

Compared to the 1st-order loop the steady-state error is reduced by the factor R/(R1 + R). However, compared to the perfect integrator it is non-zero, since the filter does not realize the perfect integrator but only approximates it. This approximation becomes better as R1 increases relative to R (perfect when R1 → ∞). But practically R1 can only be made so large, otherwise the high-gain amplifier used for compensation shall start giving problems.

P12.16 (a) The phase detector output is:

[V sin(ωc t + θ(t)) + nI(t) cos ωc t + nQ(t) sin ωc t] × K cos(ωc t + θ̂(t))

Using the trigonometric identities

cos x cos y = [cos(x + y) + cos(x − y)]/2   and   sin x cos y = [sin(x + y) + sin(x − y)]/2

and ignoring the 2ωc terms, which are fairly easily filtered out, we get

vout(t) = (KV/2) sin(θ(t) − θ̂(t)) + (K/2) nI(t) cos θ̂(t) − (K/2) nQ(t) sin θ̂(t)

(b) Define the noise term n′(t) ≡ (K/2) nI(t) cos θ̂(t) − (K/2) nQ(t) sin θ̂(t). The phase detector output is vout(t) = (KV/2) sin(φ(t)) + n′(t) and the block diagram follows immediately.

(c) The output of the phase detector is v0(t) = A sin φ(t) + n′(t). The output of the loop filter, or input to the VCO, is

vin(t) = Kh(t) ∗ A sin φ(t) + Kh(t) ∗ n′(t)

θ̂(t) = ∫_0^t vin(λ) dλ  ⇒  dθ̂(t)/dt = vin(t),

and φ(t) = θ(t) − θ̂(t)  ⇒  dθ̂(t)/dt = dθ(t)/dt − dφ(t)/dt. Therefore

dφ(t)/dt + KA ∫_0^t h(t − λ) sin φ(λ) dλ = dθ(t)/dt − K ∫_0^t h(t − λ) n′(λ) dλ

The reasoning that n′(t) can be modeled as a Gaussian process, indeed as a white Gaussian process, can be found in Principles of Coherent Communication by A. J. Viterbi (McGraw-Hill); section 2.7, pages 28-34.
P12.17 (a)

E{θ̂(t)} = E{ ∫_0^t [Kh(λ) ∗ n′(λ)] dλ } = ∫_0^t Kh(λ) ∗ E{n′(λ)} dλ = 0

since E{n′(λ)} = 0. Remember that expectation is a linear operation, as are integration and convolution, so the operations can be interchanged. Further, E{φ(t)} = E{θ(t)} − E{θ̂(t)} = E{θ(t)} ≡ 0.

(b) The block diagram relating θ̂(t) to n′(t) is

[Figure 12.24: n′(t) passes through Kh(t) and an integrator to give θ̂(t), with the loop closed back through the phase detector; in the frequency domain, n′(jω) drives KH(jω)/jω.]

The transfer function relating θ̂(jω) to n′(jω) is [KH(jω)/jω]/[1 + AKH(jω)/jω], so

S_θ̂(ω) = |[KH(jω)/jω]/[1 + AKH(jω)/jω]|² S_n′(ω)   (watts/Hz).

Obviously S_θ̂(ω) = S_θ̂(−ω).

(c) Multiply the transfer function (top and bottom) by A:

S_θ̂(ω) = |[AKH(jω)/jω]/[A(1 + AKH(jω)/jω)]|² S_n′(ω) = (1/A²) |T(jω)|² (N0/2)

where T(jω) is the closed-loop transfer function as seen in the following block diagram.

[Figure 12.25: Unity-feedback loop with forward path A·Kh(t) followed by an integrator; T(jω) is its closed-loop transfer function.]

(d)

σ_θ̂² = (1/2π) ∫_{−∞}^{∞} S_θ̂(ω) dω = (1/π) ∫_0^{∞} S_θ̂(ω) dω   (using the even property of a PSD)
     = (N0/(2πA²)) ∫_0^{∞} |T(jω)|² dω = N0 B_loop/A²   (watts),

where B_loop ≡ (1/2π) ∫_0^{∞} |T(jω)|² dω is the loop bandwidth.

P12.18 Raising the signal to the nth power (ignore noise) means

[√Es e^{j2πk/m}]^n = Es^{n/2} e^{j2πnk/m}.

Of course it is desirable to raise it to the smallest possible power to get a (non-zero) constant value, more specifically a (non-zero) constant positive real value for all k. It follows that n = m.
n = m.
en

P12.19 Again one wishes to create a (non-zero) constant positive real value for all kl . It follows that
n = LCM{kl }. Here LCM means the least common multiple. For example, for Fig. 12.26,
m = LCM{4, 8, 12} = 24. (Taking C = 3, k1 = 4, k2 = 8, k3 = 12).
uy

P12.20 Every woman for herself here.



