
real, so

Φzz(f) = Φxx(f) + jΦyx(f)                        (1.106)

indicates that Φyx(f) must be purely imaginary.

1.8 Signal Space Representations


Everyone is familiar with traditional vector spaces, the most common being the three-
dimensional vector space of the physical world. Vector spaces are very useful, since they
can provide a geometrical interpretation which may give a valuable insight into a problem.
This geometrical representation can be used in digital communication systems by abstractly
representing the signals as vectors in a vector (linear) space.
The most familiar example of a linear space is a Euclidean space. In a Euclidean space, a
vector is represented by its coordinates; an n-dimensional space requires n coordinates. The
vector X is represented as [x1, x2, · · · , xn]^T. In a linear space the two operations permitted
are addition of vectors and multiplication by a scalar. For a Euclidean space the following
properties hold: If X, Y and Z are vectors then

X + Y = Y + X commutative law
X + (Y + Z) = (X + Y ) + Z associative law

A linear space must have a zero vector 0, and every vector must have an additive inverse,
denoted −X, such that

0+X = X
X + (−X) = 0

Multiplying a vector X by a scalar, α, produces a new vector, αX, which must be in the
linear space. For vectors X and Y , and constants α and β,

α(βX) = (αβ)X          associative law (multiplication)
1 × X = X
0 × X = 0
α(X + Y ) = αX + αY    distributive law
(α + β)X = αX + βX     distributive law          (1.107)

Another important definition for the Euclidean space is the inner product (dot product)
of two vectors, given by

X · Y = Σ_{i=1}^{n} x_i y_i*                     (1.108)

The inner product can also be evaluated using

X · Y = X^T Y*                                   (1.109)
X · Y = ‖X‖ ‖Y‖ cos(θ)                           (1.110)

Figure 1.34: Vector Terminology

where θ, ‖X‖ and ‖Y‖ are defined in Figure 1.34. ‖X‖ and ‖Y‖ are the lengths of the
vectors X and Y . This length can also be determined from the inner product using

X · X = ‖X‖² = Σ_{i=1}^{n} |x_i|².               (1.111)

Note that ‖X‖ is also referred to as the norm, and

• (X · Y)/‖X‖ is the length of the component of Y in the direction of X.

• (X · Y)/‖Y‖ is the length of the component of X in the direction of Y .

The following rules apply to the inner product:

(X + Y ) · Z = X · Z + Y · Z
(αX) · Y = α(X · Y )
X · Y = (Y · X)*
X · X > 0 for X ≠ 0.
Also, two vectors are defined to be orthogonal if

X · Y = 0                                        (1.112)
All of the above definitions, generated for the Euclidean space, can also be applied to other
linear spaces.
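These Euclidean-space definitions are easy to check numerically. A minimal sketch using NumPy; the example vectors are arbitrary illustrative choices, not taken from the text:

```python
import numpy as np

# Arbitrary complex example vectors (illustrative only)
X = np.array([1.0 + 2.0j, 3.0 + 0.0j, -1.0j])
Y = np.array([2.0 + 0.0j, -1.0j, 4.0 + 0.0j])

# Inner product (1.108): X . Y = sum_i x_i y_i*
# np.vdot(a, b) conjugates its FIRST argument, so vdot(Y, X) = sum_i x_i y_i*
inner_XY = np.vdot(Y, X)

# Norm (1.111): ||X||^2 = X . X = sum_i |x_i|^2
norm_X = np.sqrt(np.vdot(X, X).real)

# Orthogonality (1.112): the standard unit vectors are orthogonal
e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([0.0, 1.0, 0.0])
orthogonal = np.vdot(e2, e1) == 0.0
```

For complex vectors the inner product is conjugate-symmetric, so `inner_XY` equals the conjugate of `np.vdot(X, Y)`.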
In communications, there are two other important linear spaces: the space of
discrete-time signals (a generalization of the Euclidean space to infinite dimensions) and the
space of continuous-time signals. These linear spaces are usually referred to as signal spaces
in communications. The interest in this work is mainly in continuous-time signal spaces. In
this context, define a vector, Y , which corresponds to a continuous-time signal y(t), where it
is assumed that y(t) has finite energy:

∫_{−∞}^{∞} |y(t)|² dt < ∞.                       (1.113)

The inner product of two continuous-time signals is defined by

X · Y = ∫_{−∞}^{∞} x(t) y*(t) dt                 (1.114)

All the other properties which apply to the Euclidean space also apply to the continuous-
time space. For example, multiplication by a scalar and vector addition:

αY is equivalent to αy(t)
X + Y is equivalent to x(t) + y(t)

Another example is the length of the vector, ‖X‖, which for continuous-time signals is given
by

‖X‖² = ∫_{−∞}^{∞} |x(t)|² dt                     (1.115)

Note that ‖X‖² is also the energy in the signal x(t); thus the norm ‖X‖ is the square
root of the energy.
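The continuous-time inner product and energy can be approximated by sampling the signals and summing. A sketch; the sinusoids below are illustrative choices, not signals from the text:

```python
import numpy as np

# Sample two signals on [0, 1] and approximate the integrals (1.114) and
# (1.115) with Riemann sums.
t = np.linspace(0.0, 1.0, 100001)
dt = t[1] - t[0]
x = np.sin(2 * np.pi * 3 * t)   # three full cycles of a sinusoid
y = np.cos(2 * np.pi * 3 * t)

inner_xy = np.sum(x * np.conj(y)) * dt   # X . Y = integral x(t) y*(t) dt
energy_x = np.sum(np.abs(x)**2) * dt     # ||X||^2 = integral |x(t)|^2 dt
# Over whole cycles, sin and cos are orthogonal, and the energy of a
# unit-amplitude sinusoid over one second is 1/2.
```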

1.8.1 Signal Space Representation for Digital Communications


The transmission of digital information over a communications channel requires a mod-
ulator, which maps the digital information into analog waveforms that match the charac-
teristics of the channel. M-ary signaling involves taking k = log2 M binary digits at a time
and selecting one of M = 2^k deterministic, finite energy waveforms, si(t), i = 1, · · · , M,
for transmission over the channel. A signal space representation for these signals provides a
geometric interpretation which is very useful for signal design and receiver design.
For example, two signals, s1(t) and s2(t), could be defined for an M = 2 system, where
s2(t) = −s1(t). These two signals are referred to as binary antipodal signals. A plot of
two possible antipodal signals is shown in Figure 1.35. If the vector representations for these

Figure 1.35: Two Binary Antipodal Signals

continuous-time signals are defined as

s1 for s1(t)                                     (1.116)
s2 for s2(t),                                    (1.117)
Figure 1.36: Binary Antipodal Signal Space

then a possible signal space for these signals is shown in Figure 1.36. In this binary signal
system, the purpose of the receiver is to determine which of the two signals was sent, given
that r(t) was received. A simple and intuitive approach is to take the difference between the
received signal, r(t), and each of the signals, s1(t) and s2(t), and choose the one with the
smaller difference. The signals s1(t) and s2(t) and a noisy r(t) are shown in Figure 1.37.
Obviously, in this case r(t) is closer to s1(t). Determining the signal closer to r(t) is usually

Figure 1.37: Binary System Showing the Received Signal

done by minimizing the mean squared error. For example, a receiver could calculate the
mean squared error for both signals,

∫_{−∞}^{∞} |r(t) − s1(t)|² dt

∫_{−∞}^{∞} |r(t) − s2(t)|² dt,                   (1.118)

and, comparing the results, choose the smallest. This is also referred to as the minimum-
energy criterion, since the above integrals calculate the error energy. Note that in terms of
the signal space representation the minimum-energy criterion is equivalent to a minimum-
distance criterion. This is because the signals can be interpreted as vectors in a vector space,
and an equivalent inner product and norm (length) of this abstract vector can be defined.
The length of the vector which represents the continuous-time signal has been defined as the
square root of the signal energy. Thus the energy of the difference of two signals, equation
(1.118), can be interpreted as the square of the distance between the two signal vectors.
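This minimum-energy decision rule can be sketched for sampled waveforms. The rectangular pulse shape, noise level, and random seed below are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)   # fixed seed so the outcome is repeatable

t = np.linspace(0.0, 1.0, 1001)
dt = t[1] - t[0]
s1 = np.ones_like(t)             # assumed antipodal pair: s2(t) = -s1(t)
s2 = -s1

# Received waveform: s1(t) plus additive noise
r = s1 + 0.3 * rng.standard_normal(t.size)

# Error energies from equation (1.118); decide on the smaller one
e1 = np.sum((r - s1)**2) * dt
e2 = np.sum((r - s2)**2) * dt
decision = 1 if e1 < e2 else 2
```

At this noise level the error energy against the transmitted signal is far smaller than against its antipode, so the receiver decides correctly.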

1.8.2 Orthogonal Signal Space
The possible number of signals, M, can be quite large for many signal designs. In general,
the dimensionality of the signal set is much smaller than M. Thus the M signals could be
represented in terms of this smaller dimensional space. Let's assume that the M signals can
be represented in an N-dimensional orthogonal space, where N ≤ M. This signal space is
characterized by a set of N linearly independent functions, fi(t), i = 1, 2, · · · , N, 0 ≤ t ≤ T,
which are called basis functions. Any arbitrary function (waveform) can be generated by
using a linear combination of these basis functions,

si(t) = Σ_{j=1}^{N} sij fj(t)    0 ≤ t ≤ T       (1.119)

These basis functions must satisfy

∫_0^T fi(t) fj(t) dt = { Ki,  if i = j
                       { 0,   if i ≠ j.          (1.120)
Note that (1.120) is just the inner product definition of orthogonality for continuous-time
signals. Using the continuous-time vector space previously defined, associate the basis
vectors Fi and Fj with the continuous-time functions fi(t) and fj(t). These two vectors are
orthogonal if

Fi · Fj = 0    i ≠ j,                            (1.121)

and the length of the vectors (square root of signal energy) is

‖Fi‖ = √(Fi · Fi) = √Ki                          (1.122)

This set of basis functions (vectors) defines an orthogonal signal space. If the basis functions
(vectors) are normalized, such that Ki = 1, then the space is called an orthonormal signal
space. The set of basis functions, fi(t), i = 1, · · · , N, can be viewed geometrically using the
continuous-time vectors, Fi, i = 1, · · · , N. For example, for N = 3, the basis vector space is
shown in Figure 1.38. As stated previously, the signal, si(t), can be represented using these
basis functions as

si(t) = Σ_{j=1}^{N} sij fj(t)    0 ≤ t ≤ T       (1.123)

The sij constants are the components of the signal vector si,

si = [si1, si2, si3, · · · , siN]^T              (1.124)

For an N = 3 signal space the signal vector is

si = [si1, si2, si3]^T                           (1.125)
Figure 1.38: N = 3 Signal Space

The corresponding signal transmitted is

si(t) = si1 f1(t) + si2 f2(t) + si3 f3(t)

      = Σ_{j=1}^{N} sij fj(t)    0 ≤ t ≤ T       (1.126)

which is shown in Figure 1.39.

Figure 1.39: Signal Representation in an N = 3 Signal Space

An important aspect in the signal space representation is the approach used for deter-
mining the coefficients, sij, in

si(t) = Σ_{j=1}^{N} sij fj(t)    0 ≤ t ≤ T       (1.127)
given a set of orthonormal basis functions. One method to determine these coefficients is
derived from the more general case of approximating a signal, s(t), with the summation

ŝ(t) = Σ_{k=1}^{K} sk fk(t).                     (1.128)

It is desired to minimize the error

e(t) = s(t) − ŝ(t).                              (1.129)

One method to minimize the error is to select the coefficients, sk, such that the mean squared
error

Ee = ∫_{−∞}^{∞} |s(t) − ŝ(t)|² dt    (error energy)    (1.130)

is minimized. This mean squared error (signal error energy) is minimized if the coefficients
are determined from the projection of s(t) onto each fk(t) (i.e. the inner product of the
functions),

sk = ∫_{−∞}^{∞} s(t) fk*(t) dt    k = 1, · · · , K.    (1.131)
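The projection rule (1.131) can be illustrated numerically. The orthonormal pair {1, √2 cos(2πt)} on [0, 1] and the expansion coefficients 2 and 3 below are illustrative assumptions; projecting the signal back onto the basis recovers the coefficients it was built from:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 200001)
dt = t[1] - t[0]

# An orthonormal pair on [0, 1]: each has unit energy and they are orthogonal
f1 = np.ones_like(t)
f2 = np.sqrt(2.0) * np.cos(2 * np.pi * t)

s = 2.0 * f1 + 3.0 * f2          # signal built from known coefficients

# Coefficients by projection, equation (1.131): s_k = integral s(t) f_k*(t) dt
c1 = np.sum(s * f1) * dt
c2 = np.sum(s * f2) * dt

s_hat = c1 * f1 + c2 * f2
err_energy = np.sum((s - s_hat)**2) * dt   # (1.130); ~0 since s is in the span
```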

A familiar example that uses a set of orthogonal basis functions to represent a signal is the
Fourier series, given by

s(t) = Σ_{n=−∞}^{∞} sn fn(t)    0 ≤ t ≤ T0       (1.132)

where

fn(t) = e^{jnω0t}    ω0 = 2π/T0                  (1.133)

giving

s(t) = Σ_{n=−∞}^{∞} sn e^{jnω0t}    0 ≤ t ≤ T0   (1.134)

In this example the basis functions are not normalized to 1, thus the energy is given by

Efn = ∫_{−∞}^{∞} fn(t) fn*(t) dt
    = ∫_0^{T0} e^{jnω0t} e^{−jnω0t} dt
    = ∫_0^{T0} dt
    = T0    (= Kn from before)                   (1.135)

For this case a normalizing factor must be included when determining the coefficients

sn = (1/Kn) ∫_{−∞}^{∞} s(t) fn*(t) dt
   = (1/T0) ∫_0^{T0} s(t) e^{−jnω0t} dt          (1.136)
This derivation results in the Fourier series pair

s(t) = Σ_{n=−∞}^{∞} sn e^{jnω0t}    0 ≤ t ≤ T0   (1.137)

sn = (1/T0) ∫_0^{T0} s(t) e^{−jnω0t} dt          (1.138)

A Fourier series example for the approximation of a square wave, shown in Figure 1.40,
is shown in Figure 1.41 for N = 1, 3, 5, 7. Figure 1.41 also shows the instantaneous error,
N (t), in the approximation.

Figure 1.40: Rectangular Function for the Fourier Series Example (f(t) = 1 on [0, 1] and −1 on [1, 2])

In the above square wave Fourier series example the mean square error will not be zero
for a finite N, since

∫_{−∞}^{∞} |s(t) − ŝ(t)|² dt ≠ 0    for finite N.    (1.139)
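The Fourier coefficients of this square wave can be checked numerically. For the waveform of Figure 1.40 (with T0 = 2), evaluating (1.138) in closed form gives sn = (1 − (−1)^n)/(jnπ), which vanishes for even n; the sampled computation agrees:

```python
import numpy as np

T0 = 2.0
w0 = 2 * np.pi / T0
t = np.linspace(0.0, T0, 400001)
dt = t[1] - t[0]

# Square wave of Figure 1.40: +1 on [0, 1), -1 on [1, 2)
s = np.where(t < 1.0, 1.0, -1.0)

def fourier_coeff(n):
    # Equation (1.138): s_n = (1/T0) integral_0^T0 s(t) e^{-j n w0 t} dt
    return np.sum(s * np.exp(-1j * n * w0 * t)) * dt / T0

s1_coeff = fourier_coeff(1)   # closed form: 2/(j*pi)
s2_coeff = fourier_coeff(2)   # closed form: 0
```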

But in systems using digital modulation there is a finite number of signals, si(t), so the basis
functions could be selected to give a mean squared error of zero. In this case the coefficients,
sij, in

si(t) = Σ_{j=1}^{N} sij fj(t)    0 ≤ t ≤ T       (1.140)

will completely represent the signal, si(t), in the sense that the approximation error has zero
energy. In this situation (zero error energy) the signal energy can be determined directly
from the coefficients using

Esi = ∫_{−∞}^{∞} |si(t)|² dt = Σ_{k=1}^{K} |sik|²    (1.141)

If the mean square error is not zero it can be determined by expanding (1.128) to give

Ee = ∫_{−∞}^{∞} |si(t)|² dt − Σ_{k=1}^{K} |sik|²     (1.142)
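Relations (1.141) and (1.142) can be verified numerically. The basis pair and the ramp signal below are illustrative assumptions; the ramp is deliberately not in the span of the basis, so the error energy is nonzero and matches the coefficient expression in (1.142):

```python
import numpy as np

t = np.linspace(0.0, 1.0, 200001)
dt = t[1] - t[0]

# Illustrative orthonormal pair on [0, 1]
f1 = np.ones_like(t)
f2 = np.sqrt(2.0) * np.cos(2 * np.pi * t)

s = t.copy()                     # a ramp, not in the span of {f1, f2}

# Projection coefficients, equation (1.131)
c = [np.sum(s * f) * dt for f in (f1, f2)]
s_hat = c[0] * f1 + c[1] * f2

# Error energy two ways: directly, and from equation (1.142)
direct = np.sum((s - s_hat)**2) * dt
via_coeffs = np.sum(s**2) * dt - sum(ck**2 for ck in c)
```

For this ramp the best approximation in the span is its mean, so the residual error energy works out to 1/12.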

Figure 1.41: Approximation of a Rectangular Function Using Orthogonal Functions and the
Instantaneous Error of the Approximation (N = 1, 3, 5, 7).

1.8.3 Representation of Finite Energy Signals by Orthonormal Expansions
Any set of signal waveforms, even if they are not orthogonal, can be transformed into a linear
combination of orthonormal waveforms (basis functions).

Define an arbitrary finite set of waveforms, si(t), i = 1, · · · , M, where each member of
the set is physically realizable and of duration T. A set of N orthonormal basis functions,
fi(t), i = 1, · · · , N, where N ≤ M, can be generated such that the signals si(t) can be
represented as

si(t) = Σ_{j=1}^{N} sij fj(t)    0 ≤ t ≤ T,  i = 1, 2, · · · , M    (1.143)

where

sij = ∫_0^T si(t) fj(t) dt    i = 1, · · · , M,  j = 1, · · · , N    (1.144)
Schematically, these two equations can be represented as shown in Figures 1.42 and 1.43,
respectively.

Figure 1.42: Signal Analysis Form

Given a finite set of finite energy signal waveforms, si(t), i = 1, 2, · · · , M, how can a set of
orthonormal basis functions be generated? One approach is to use Gram-Schmidt orthogo-
nalization.

1.8.4 Gram-Schmidt Procedure


First the Gram-Schmidt procedure will be demonstrated using Euclidean vectors, then it will
be applied to continuous-time signals. Suppose a system consists of N linearly independent

Figure 1.43: Signal Synthesis Form

vectors

X1, X2, · · · , XN    where Xn = [xn1, xn2, · · · , xnN]^T.    (1.145)

Starting with X1, an orthonormal basis vector, f1, can be generated in the direction of X1
by making X1 a unit vector:

f1 = X1 / ‖X1‖ = X1 / √(X1^T X1)                 (1.146)

Thus

X1 = √(X1^T X1) f1                               (1.147)

and the associated coefficient is

c11 = √(X1^T X1) = √(X1 · X1).                   (1.148)

The second orthonormal vector, f2, can be generated by first projecting X2 onto f1. The
coefficient, c21, in the direction of f1 is shown in Figure 1.44. The value of c21 can be
calculated using

c21 = ‖X2‖ cos θ.                                (1.149)

Since ‖f1‖ = 1, this is also given by the inner product

c21 = X2^T f1 = X2 · f1 = ‖X2‖ ‖f1‖ cos θ = ‖X2‖ cos θ.    (1.150)

The next step is to subtract c21 f1 from X2 to give d2 = X2 − c21 f1. The vector d2, which is
shown in Figure 1.45, is perpendicular to f1. The orthonormal vector, f2, is generated by making d2 a

Figure 1.44: Projection of X2 onto f1

Figure 1.45: d2 is Perpendicular to f1

unit vector:

f2 = d2 / ‖d2‖ = d2 / √(d2 · d2) = d2 / √(d2^T d2).    (1.151)

In the above derivation two orthonormal basis vectors, f1 and f2, have been generated using
the first two vectors, X1 and X2. This procedure can be extended to N vectors, where

di = Xi − Σ_{j=1}^{i−1} cij fj,                  (1.152)

where

cij = Xi · fj = Xi^T fj                          (1.153)

and

fi = di / ‖di‖ = di / √(di · di) = di / √(di^T di),    i = 1, 2, · · · , N.    (1.154)
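Equations (1.152) through (1.154) translate directly into code. A sketch for Euclidean vectors; the three input vectors are arbitrary illustrative choices:

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize linearly independent vectors per (1.152)-(1.154)."""
    basis = []
    for x in vectors:
        d = np.array(x, dtype=float)
        for f in basis:
            d = d - np.dot(x, f) * f   # subtract c_ij f_j with c_ij = X_i . f_j
        basis.append(d / np.linalg.norm(d))
    return basis

# Arbitrary linearly independent input vectors (illustrative)
X = [np.array([1.0, 1.0, 0.0]),
     np.array([1.0, 0.0, 1.0]),
     np.array([0.0, 1.0, 1.0])]
F = gram_schmidt(X)
# F is orthonormal: F[i] . F[j] is 1 for i = j and 0 otherwise
```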
Gram-Schmidt orthogonalization can also be applied to continuous-time waveforms.
Given a set of signals, si(t), i = 1, 2, · · · , M, 0 ≤ t ≤ T, select a set of N linearly in-
dependent waveforms from si(t), where N ≤ M. Starting with the first waveform, s1(t), the
energy in s1(t) is given by

Es1 = ∫_0^T |s1(t)|² dt.                         (1.155)

Es1 is the continuous-time dot product of s1(t) with itself, and since the length of a vector
is the square root of the inner product, √Es1 is the continuous-time length of s1(t). Thus
the first basis function is given by

f1(t) = s1(t) / √Es1                             (1.156)

Note that f1(t) is simply s1(t) normalized to unit energy. Rearranging (1.156) gives

s1(t) = √Es1 f1(t) = c11 f1(t)                   (1.157)

and thus

c11 = √Es1.                                      (1.158)
The second basis function is constructed from s2(t). Since f1(t) has unit energy, the
projection of s2(t) onto f1(t) is given by the inner product

c21 = ∫_0^T s2(t) f1*(t) dt    (c12 was used in Proakis)    (1.159)

As was done for the Euclidean vectors, subtract c21 f1(t) from s2(t) to give

d2(t) = s2(t) − c21 f1(t).                       (1.160)

d2(t) is orthogonal to f1(t) in the interval 0 ≤ t ≤ T. This can be normalized, by dividing by
the square root of the energy in d2(t), to give f2(t):

f2(t) = d2(t) / √(∫_0^T |d2(t)|² dt)             (1.161)
This procedure can be extended to N basis functions using

di(t) = si(t) − Σ_{j=1}^{i−1} cij fj(t)          (1.162)

where

cij = ∫_0^T si(t) fj*(t) dt    j = 1, 2, · · · , i − 1    (1.163)

and

fi(t) = di(t) / √(∫_0^T |di(t)|² dt)    i = 1, 2, · · · , N,    (1.164)

which gives a complete set of orthonormal basis functions.
The set of N orthonormal basis functions, as stated before, can be used to represent the
signals as

si(t) = Σ_{j=1}^{N} sij fj(t)    0 ≤ t ≤ T,  i = 1, 2, · · · , M    (1.165)

where

sij = ∫_0^T si(t) fj(t) dt    i = 1, · · · , M,  j = 1, · · · , N,  N ≤ M    (1.166)
Thus each signal in the set {si(t)} is essentially determined by its coefficients, which can be
put in vector format as follows:

si = [si1, si2, · · · , siN]^T    i = 1, · · · , M    (1.167)
This vector of coefficients, si , can be viewed as a vector in N dimensional Euclidean space.
Thus the M signals, si (t), i = 1, · · · , M, can be represented as M vectors, si , i = 1, · · · , M,
in an N-dimensional coordinate system. An example of a signal space for N = 2 and M = 4
is given in Figure 1.46. In this figure, the signals are represented as vectors in Euclidean

Figure 1.46: Two Dimensional Signal Space

space; thus the standard Euclidean space operations can be applied to these vectors. For
example, an inner product can be done:

s1 · s2 = s1^T s2 = ‖s1‖ ‖s2‖ cos θ    (where θ is the angle between s1 and s2)    (1.168)

The energy of the signal can be determined directly from the vector by

Esi = si · si = si^T si = Σ_{j=1}^{N} sij² = ∫_0^T |si(t)|² dt    (1.169)

The Euclidean distance between two signal points can also be determined by using

‖si − sk‖ = √((si − sk) · (si − sk))
          = √((si − sk)^T (si − sk))
          = √(∫_0^T |si(t) − sk(t)|² dt)         (1.170)

Note on Gram-Schmidt Orthogonalization - The set of basis functions {fn(t)}
produced by Gram-Schmidt orthogonalization is not unique. A different set of {fn(t)} will
result if the order in which the set of signals {sn(t)} is considered is altered. But the set of
signal vectors {sn} will retain their geometrical configuration (distance between vectors), and
their lengths will be the same for all sets of {fn(t)}.

Example
Four signals are given in Figure 1.47. Determine a set of orthonormal basis functions and

Figure 1.47: Example Signals (s1(t) = 1 on [0, T/3]; s2(t) = 1 on [0, 2T/3]; s3(t) = 1 on [T/3, T]; s4(t) = 1 on [0, T]; all zero elsewhere)

the signal vectors representing the signals.


First choose a set of linearly independent signals. For the above signals, s4(t) depends on
s1(t) and s3(t):

s4(t) = s1(t) + s3(t).                           (1.171)

This leaves s1(t), s2(t) and s3(t) as linearly independent signals. The energy in s1(t) is

Es1 = ∫_0^{T/3} (1)² dt = T/3                    (1.172)

The first basis function is

f1(t) = s1(t)/√Es1 = { √(3/T),  0 ≤ t ≤ T/3
                     { 0,       otherwise        (1.173)

The projection of s2(t) onto f1(t) is given by

c21 = ∫_{−∞}^{∞} s2(t) f1*(t) dt = ∫_0^{T/3} (1)√(3/T) dt = √(T/3)    (1.174)
d2(t) = s2(t) − c21 f1(t)                        (1.175)

where d2(t) is shown in Figure 1.48. The second basis function is obtained by normalizing
d2(t) to give

f2(t) = d2(t) / √(∫_{−∞}^{∞} |d2(t)|² dt) = (s2(t) − c21 f1(t)) / √(∫_{T/3}^{2T/3} (1)² dt)
      = (s2(t) − c21 f1(t)) / √(T/3) = { √(3/T),  T/3 ≤ t ≤ 2T/3
                                       { 0,       otherwise       (1.176)

Figure 1.48: Signal d2(t) for the Example (d2(t) = 1 on [T/3, 2T/3], zero elsewhere)

The third basis function is derived as follows:

c31 = ∫_{−∞}^{∞} s3(t) f1*(t) dt = 0             (1.177)

c32 = ∫_{−∞}^{∞} s3(t) f2*(t) dt = ∫_{T/3}^{2T/3} (1)√(3/T) dt
    = √(3/T) (2T/3 − T/3) = √(T/3)               (1.178)

d3(t) = s3(t) − c31 f1(t) − c32 f2(t)            (1.179)
The signals used to calculate d3(t) are shown in Figure 1.49. Thus

Figure 1.49: Signals Used to Determine d3(t) for the Example

d3(t) = { 1,  2T/3 ≤ t ≤ T
        { 0,  otherwise                          (1.180)

f3(t) = d3(t) / √(∫_{−∞}^{∞} |d3(t)|² dt) = d3(t)/√(T/3) = { √(3/T),  2T/3 ≤ t ≤ T
                                                           { 0,       otherwise.    (1.181)
The three basis functions are shown in Figure 1.50. Using these orthonormal basis functions,
the signal vectors are given by

s1 = [√(T/3), 0, 0]^T
s2 = [√(T/3), √(T/3), 0]^T
s3 = [0, √(T/3), √(T/3)]^T
s4 = [√(T/3), √(T/3), √(T/3)]^T                  (1.182)

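The basis functions and signal vectors of this example can be verified by sampling. A sketch with the illustrative choice T = 3, so that √(T/3) = 1 and the coefficients of (1.182) come out as 0s and 1s:

```python
import numpy as np

T = 3.0                          # illustrative choice: sqrt(T/3) = 1
t = np.linspace(0.0, T, 300001)
dt = t[1] - t[0]

# The four example signals of Figure 1.47
s1 = np.where(t < T/3, 1.0, 0.0)
s2 = np.where(t < 2*T/3, 1.0, 0.0)
s3 = np.where(t >= T/3, 1.0, 0.0)
s4 = s1 + s3

# The basis functions of (1.173), (1.176) and (1.181)
f1 = np.where(t < T/3, np.sqrt(3/T), 0.0)
f2 = np.where((t >= T/3) & (t < 2*T/3), np.sqrt(3/T), 0.0)
f3 = np.where(t >= 2*T/3, np.sqrt(3/T), 0.0)

def signal_vector(s):
    # s_ij = integral s_i(t) f_j(t) dt, equation (1.144)
    return np.array([np.sum(s * f) * dt for f in (f1, f2, f3)])

v1, v2, v3, v4 = (signal_vector(s) for s in (s1, s2, s3, s4))
```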
Figure 1.50: Set of Basis Functions for the Example

As a second example, change the order in which s1(t), s2(t) and s3(t) are used to generate
the basis functions. Starting with s1(t), the first basis function (here φ1(t) is used instead of
f1(t)) is

φ1(t) = s1(t)/√Es1 = { √(3/T),  0 ≤ t ≤ T/3
                     { 0,       otherwise        (1.183)
The second basis function will be determined using s3(t). The projection of s3(t) onto φ1(t)
is given by

s31 = ∫_{−∞}^{∞} s3(t) φ1*(t) dt = 0             (1.184)

d3(t) = s3(t) − s31 φ1(t) = s3(t).               (1.185)

The second basis function is obtained by normalizing d3(t) to give

φ3(t) = d3(t) / √(∫_{−∞}^{∞} |d3(t)|² dt) = s3(t) / √(∫_{T/3}^{T} (1)² dt)
      = s3(t)/√(2T/3) = { √(3/(2T)),  T/3 ≤ t ≤ T
                        { 0,          otherwise       (1.186)

The third basis function is derived from s2(t) as follows:

s21 = ∫_{−∞}^{∞} s2(t) φ1*(t) dt = ∫_0^{T/3} (1)√(3/T) dt = √(T/3)    (1.187)

s23 = ∫_{−∞}^{∞} s2(t) φ3*(t) dt = ∫_{T/3}^{2T/3} (1)√(3/(2T)) dt
    = √(3/(2T)) (2T/3 − T/3) = √(T/6)            (1.188)

d2(t) = s2(t) − s21 φ1(t) − s23 φ3(t)            (1.189)
The signals used to calculate d2(t) are shown in Figure 1.51. Thus

d2(t) = {  1/2,  T/3 ≤ t ≤ 2T/3
        { −1/2,  2T/3 ≤ t ≤ T
        {  0,    otherwise                       (1.190)
Figure 1.51: Signals Used to Determine d2(t) for the Example

φ2(t) = d2(t) / √(∫_{−∞}^{∞} |d2(t)|² dt) = d2(t)/√(T/6)
      = {  √(3/(2T)),  T/3 ≤ t ≤ 2T/3
        { −√(3/(2T)),  2T/3 ≤ t ≤ T
        {  0,          otherwise.                (1.191)
The three basis functions are shown in Figure 1.52. Using these orthonormal basis functions,

Figure 1.52: Set of Basis Functions for the Example

the signal vectors are given by

s1 = [√(T/3), 0, 0]^T
s2 = [√(T/3), √(T/6), √(T/6)]^T
s3 = [0, 0, √(2T/3)]^T
s4 = [√(T/3), 0, √(2T/3)]^T                      (1.192)

Note:

s22 = ∫_{−∞}^{∞} s2(t) φ2*(t) dt = ∫_{T/3}^{2T/3} (1)√(3/(2T)) dt = √(T/6)
s32 = ∫_{−∞}^{∞} s3(t) φ2*(t) dt = 0
s33 = ∫_{−∞}^{∞} s3(t) φ3*(t) dt = ∫_{T/3}^{T} (1)√(3/(2T)) dt = √(2T/3)
s41 = ∫_{−∞}^{∞} s4(t) φ1*(t) dt = ∫_0^{T/3} (1)√(3/T) dt = √(T/3)
s42 = ∫_{−∞}^{∞} s4(t) φ2*(t) dt = 0
s43 = ∫_{−∞}^{∞} s4(t) φ3*(t) dt = ∫_{T/3}^{T} (1)√(3/(2T)) dt = √(2T/3)

Changing the order of the signals used to generate the basis functions produces a different
set of basis functions, and thus a different set of signal vectors, which can be seen by comparing
equations (1.182) and (1.192). But the lengths of these vectors and the distances between
the vectors are the same for both sets of signal vectors.
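This invariance can be checked directly on the two coefficient sets. With the illustrative choice T = 3, √(T/3) = 1, √(T/6) = √(1/2), and √(2T/3) = √2; the pairwise distances and vector lengths of (1.182) and (1.192) then agree:

```python
import numpy as np

a = np.sqrt(0.5)     # sqrt(T/6) with T = 3
b = np.sqrt(2.0)     # sqrt(2T/3) with T = 3

# Signal vectors from equation (1.182) (first basis ordering)
set1 = [np.array(v, dtype=float)
        for v in ([1, 0, 0], [1, 1, 0], [0, 1, 1], [1, 1, 1])]
# Signal vectors from equation (1.192) (second basis ordering)
set2 = [np.array(v, dtype=float)
        for v in ([1, 0, 0], [1, a, a], [0, 0, b], [1, 0, b])]

def pairwise_distances(vs):
    return [np.linalg.norm(vs[i] - vs[j])
            for i in range(len(vs)) for j in range(i + 1, len(vs))]

d1 = pairwise_distances(set1)
d2 = pairwise_distances(set2)
# The two orderings give different vectors but identical geometry
```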

1.9 Representation of Digitally Modulated Signals


The digital modulator, shown in Figure 1.53, maps the digital information sequence, {an },
into analog waveforms, {sm (t)}, which match the characteristics of the channel. This map-

Figure 1.53: Digital Modulator

ping involves taking sets of k = log2 M symbols from {an} and selecting one of M = 2^k deterministic, finite
energy waveforms from {sm(t)} to transmit over the channel.

1.9.1 Modulator Characteristics


memory/memoryless - A modulator has memory if the waveform transmitted depends
on one or more previously transmitted waveforms.

linear/nonlinear - In general for analog signals, if m(t) is the modulating signal and s(t) is
the modulated signal, the modulation is linear if ds(t)/dm(t) is independent of m(t). For a digitally
modulated signal the modulation is considered linear if x(t) and y(t) are constants in the
following representation

s(t) = Re{u(t) e^{jωct}}
     = x(t) cos(ωct) − y(t) sin(ωct)             (1.193)

over any symbol interval, 0 ≤ t ≤ T, where x(t) = a(t) cos(θ(t)), y(t) = a(t) sin(θ(t)), and
a(t) and/or θ(t) is a function of {an}, the information sequence.
Thus in digital modulation, amplitude and phase modulation are considered linear, and
frequency modulation is considered nonlinear (for memoryless schemes). In linear modula-
tion systems the spectrum of the modulated signal is simply a frequency translation of the
baseband spectrum. But, since frequency modulation is a nonlinear modulation technique,
the spectral properties of the modulated signal cannot, in general, be deduced from the
baseband spectrum. Frequency modulation techniques generally alter the baseband spectral

