X + Y = Y + X   (commutative law)
X + (Y + Z) = (X + Y) + Z   (associative law)
A linear space must have a zero vector 0, and every vector must have an additive inverse,
denoted −X, such that
0+X = X
X + (−X) = 0
Multiplying a vector X by a scalar, α, produces a new vector, αX, which must also be in the
linear space. For vectors X and Y, and constants α and β, scalar multiplication must satisfy
α(X + Y) = αX + αY
(α + β)X = αX + βX
(αβ)X = α(βX).
Another important definition for the Euclidean space is the inner product (dot product)
of two vectors given by
X · Y = Σ_{i=1}^{n} x_i y_i*    (1.108)
X · Y = X^T Y*    (1.109)
X · Y = ||X|| ||Y|| cos(θ)    (1.110)
[Figure 1.34: Vectors X and Y, with lengths ||X|| and ||Y|| and the angle θ between them.]
where θ, ||X|| and ||Y|| are defined in Figure 1.34. ||X|| and ||Y|| are the lengths of the
vectors X and Y. This length can also be determined from the inner product using
X · X = ||X||² = Σ_{i=1}^{n} |x_i|².    (1.111)
• (X · Y)/||Y|| is the length of the component of X in the direction of Y.
The following rules apply to the inner product:
(X + Y) · Z = X · Z + Y · Z
(αX) · Y = α(X · Y)
X · Y = (Y · X)*
X · X > 0 for X ≠ 0.
Also, two vectors are defined as orthogonal if
X · Y = 0.    (1.112)
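As a quick numerical sketch of these definitions, the inner product, norm, angle, and orthogonality tests above can be checked with NumPy; the vectors below are illustrative values, not taken from the text.

```python
import numpy as np

# Complex vectors in C^3 (illustrative values).
X = np.array([1 + 1j, 2, 0.5j])
Y = np.array([1, 1j, -1])

# Inner product of eq. (1.108)/(1.109): sum of x_i * conj(y_i).
# np.vdot conjugates its FIRST argument, hence the order (Y, X).
ip = np.vdot(Y, X)

# Length from the inner product, eq. (1.111): ||X||^2 = X . X.
norm_X = np.sqrt(np.vdot(X, X).real)

# For real vectors, eq. (1.110): X . Y = ||X|| ||Y|| cos(theta).
U = np.array([3.0, 0.0])
V = np.array([1.0, 1.0])
cos_theta = (U @ V) / (np.linalg.norm(U) * np.linalg.norm(V))

# Orthogonality, eq. (1.112): inner product equals zero.
A = np.array([1.0, 1.0])
B = np.array([1.0, -1.0])
```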
All of the above definitions, generated for the Euclidean space, can also be applied to other
linear spaces.
In communications there are two other important linear spaces: the space of discrete-time
signals (a generalization of the Euclidean space to infinite dimensions) and the space of
continuous-time signals. These linear spaces are usually referred to as signal spaces in
communications. The interest in this work is mainly in continuous-time signal spaces. In
this context, define a vector, Y, which corresponds to a continuous-time signal y(t), where it
is assumed that y(t) has finite energy
∫_{−∞}^{∞} |y(t)|² dt < ∞.    (1.113)
The inner product of two continuous time signals is defined by
X · Y = ∫_{−∞}^{∞} x(t) y*(t) dt    (1.114)
All the other properties which apply to the Euclidean space also apply to the continuous-time
space. For example, multiplication by a scalar and vector addition:
αY is equivalent to αy(t)
X + Y is equivalent to x(t) + y(t)
Another example is the length of the vector, ||X||, which for continuous-time signals is given by
||X||² = ∫_{−∞}^{∞} |x(t)|² dt    (1.115)
Note that ||X||² is also the energy in the signal x(t); thus the norm ||X|| is the square
root of the energy.
[Figure 1.36: A possible signal space for two signals s1(t) and s2(t), represented as the vectors s1 and s2.]

For two signals, s1(t) and s2(t), a possible signal space is shown in Figure 1.36. In this binary signal
system, the purpose of the receiver is to determine which of the two signals was sent given
r(t) was received. A simple and intuitive approach is to take the difference between the
received signal, r(t), and each of the signals, s1(t) and s2(t), and choose the one with the
smaller difference. The signals, s1 (t) and s2 (t) and a noisy r(t) are shown in Figure 1.37.
Obviously, in this case r(t) is closer to s1 (t). Determining the signal closer to r(t) is usually
done by minimizing the mean squared error. For example, a receiver could calculate the
mean squared error for both signals
∫_{−∞}^{∞} |r(t) − s1(t)|² dt
∫_{−∞}^{∞} |r(t) − s2(t)|² dt,    (1.118)
and, comparing the results, choose the signal with the smaller error. This is also referred to
as the minimum-energy criterion, since the above integrals calculate the error energy. Note that in terms of
the signal space representation the minimum energy criterion is equivalent to a minimum
distance criterion. This is because the signals can be interpreted as vectors in a vector space,
and an equivalent inner product and norm (length) of this abstract vector can be defined.
The length of the vector which represents the continuous-time signal has been defined as the
square root of the signal energy. The energy of the difference of two signals, equation
(1.118), can thus be interpreted as the square of the distance between the two signal vectors.
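A minimal numerical sketch of this minimum-energy decision rule, assuming two illustrative antipodal pulses and an additive-noise level chosen for demonstration (neither is specified in the text):

```python
import numpy as np

T, fs = 1.0, 1000
t = np.arange(0, T, 1 / fs)

s1 = np.ones_like(t)    # hypothetical signal s1(t)
s2 = -np.ones_like(t)   # hypothetical signal s2(t)

rng = np.random.default_rng(0)
r = s1 + 0.3 * rng.standard_normal(t.size)  # noisy received r(t); s1 was sent

# Error energies of eq. (1.118), with the integrals approximated by sums.
e1 = np.sum(np.abs(r - s1) ** 2) / fs
e2 = np.sum(np.abs(r - s2) ** 2) / fs

# Choose the signal with the smaller error energy (minimum-distance rule).
decision = 1 if e1 < e2 else 2
```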
1.8.2 Orthogonal Signal Space
The possible number of signals, M, can be quite large for many signal designs. In general,
the dimensionality of the signal set is much smaller than M. Thus the M signals could be
represented in terms of this smaller-dimensional space. Let us assume that the M signals can
be represented in an N-dimensional orthogonal space, where N ≤ M. This signal space is
characterized by a set of N linearly independent functions, fi (t), i = 1, 2, · · · , N, 0 ≤ t ≤ T ,
which are called basis functions. Any arbitrary function (waveform) can be generated by
using a linear combination of these basis functions,
s_i(t) = Σ_{j=1}^{N} s_ij f_j(t),   0 ≤ t ≤ T    (1.119)
[Figure: An N = 3 orthogonal signal space, with basis functions f1(t), f2(t), f3(t) and signal components si1, si2, si3.]
An important aspect in the signal space representation is the approach used for determining the coefficients, s_ij, in
s_i(t) = Σ_{j=1}^{N} s_ij f_j(t),   0 ≤ t ≤ T    (1.127)
given a set of orthonormal basis functions. One method to determine these coefficients is
derived from the more general case of approximating a signal, s(t), with the summation
ŝ(t) = Σ_{k=1}^{K} s_k f_k(t).    (1.128)
One method to minimize the error is to select the coefficients, s_k, such that the mean squared
error
E_e = ∫_{−∞}^{∞} |s(t) − ŝ(t)|² dt    (error energy)    (1.130)
is minimized. This mean squared error (signal error energy) is minimized if the coefficients
are determined from the projection of s(t) onto each f_k(t) (i.e., the inner product of the
functions),
s_k = ∫_{−∞}^{∞} s(t) f_k*(t) dt,   k = 1, …, K.    (1.131)
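This projection rule can be sketched numerically. The basis below, K unit-energy rectangular pulses on subintervals of [0, T], and the signal s(t) = t are illustrative assumptions; the point is that the coefficients of eq. (1.131) give a smaller error energy than any perturbed set.

```python
import numpy as np

T, K, fs = 1.0, 4, 2000
t = np.arange(0, T, 1 / fs)
s = t  # hypothetical signal s(t) = t on [0, T]

# Orthonormal basis (assumed): f_k(t) = sqrt(K/T) on the k-th subinterval.
f = np.zeros((K, t.size))
for k in range(K):
    f[k, (t >= k * T / K) & (t < (k + 1) * T / K)] = np.sqrt(K / T)

# Projection coefficients, eq. (1.131): s_k = integral of s(t) f_k(t) dt.
coeffs = f @ s / fs

def error_energy(c):
    """Error energy of eq. (1.130) for a coefficient vector c."""
    return np.sum((s - c @ f) ** 2) / fs

E_opt = error_energy(coeffs)
E_perturbed = error_energy(coeffs + 0.05)  # any other choice does worse
```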
A familiar example that uses a set of orthonormal basis functions to represent a signal is the
Fourier series given by
s(t) = Σ_{n=−∞}^{∞} s_n f_n(t),   0 ≤ t ≤ T_0    (1.132)
where
f_n(t) = e^{jnω_0 t},   ω_0 = 2π/T_0    (1.133)
giving
s(t) = Σ_{n=−∞}^{∞} s_n e^{jnω_0 t},   0 ≤ t ≤ T_0    (1.134)
In this example the basis functions are not normalized to 1, thus the energy is given by
E_{f_n} = ∫_{−∞}^{∞} f_n(t) f_n*(t) dt
        = ∫_0^{T_0} e^{jnω_0 t} e^{−jnω_0 t} dt
        = ∫_0^{T_0} dt
        = T_0   (= K_n from before)    (1.135)
For this case a normalizing factor must be included when determining the coefficients:
s_n = (1/K_n) ∫_{−∞}^{∞} s(t) f_n*(t) dt
    = (1/T_0) ∫_0^{T_0} s(t) e^{−jnω_0 t} dt    (1.136)
This derivation results in the Fourier series pair
s(t) = Σ_{n=−∞}^{∞} s_n e^{jnω_0 t},   0 ≤ t ≤ T_0    (1.137)
s_n = (1/T_0) ∫_0^{T_0} s(t) e^{−jnω_0 t} dt    (1.138)
A Fourier series example for the approximation of a square wave, shown in Figure 1.40,
is shown in Figure 1.41 for N = 1, 3, 5, 7. Figure 1.41 also shows the instantaneous error,
ε_N(t), in the approximation.
[Figure 1.40: The square wave f(t), taking the values 1 and −1 over the interval 0 ≤ t ≤ 2.]
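The square-wave expansion can be sketched numerically with the pair (1.137)-(1.138); the period and amplitudes below are assumptions chosen only to resemble Figure 1.40. The error energy of the partial sum shrinks as more terms are kept, matching the trend in Figure 1.41.

```python
import numpy as np

T0 = 2.0                  # assumed period
w0 = 2 * np.pi / T0
fs = 4000
t = np.arange(0, T0, 1 / fs)
s = np.where(t < T0 / 2, 1.0, -1.0)   # square wave: +1 then -1

def approximate(N):
    """Partial Fourier sum over n = -N..N, coefficients from eq. (1.138)."""
    s_hat = np.zeros_like(t, dtype=complex)
    for n in range(-N, N + 1):
        sn = np.sum(s * np.exp(-1j * n * w0 * t)) / fs / T0   # eq. (1.138)
        s_hat += sn * np.exp(1j * n * w0 * t)                 # eq. (1.137)
    return s_hat.real

# Error energy of the approximation decreases as N grows.
errors = [np.sum((s - approximate(N)) ** 2) / fs for N in (1, 3, 5, 7)]
```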
In the above square wave Fourier series example the mean square error will not be zero
for a finite N since
∫_{−∞}^{∞} |s(t) − ŝ(t)|² dt ≠ 0   for finite N.    (1.139)
But in systems using digital modulation there is a finite number of signals, si (t), so the basis
functions could be selected to give a mean squared error of zero. In this case the coefficients,
sij , in
s_i(t) = Σ_{j=1}^{N} s_ij f_j(t),   0 ≤ t ≤ T    (1.140)
will completely represent the signal, s_i(t), in the sense that the approximation error has zero
energy. In this situation (zero error energy) the signal energy can be determined directly
from the coefficients using
E_{s_i} = ∫_{−∞}^{∞} |s_i(t)|² dt = Σ_{k=1}^{K} |s_ik|²    (1.141)
If the mean square error is not zero it can be determined by expanding (1.128) to give
E_e = ∫_{−∞}^{∞} |s_i(t)|² dt − Σ_{k=1}^{K} |s_ik|²    (1.142)
Figure 1.41: Approximation of a Rectangular Function Using Orthogonal Functions and the
Instantaneous Error of the Approximation.
1.8.3 Representation of Finite Energy Signals by Orthonormal Expansions
Any set of signal waveforms, even if they are not orthogonal, can be transformed into a linear
combination of orthonormal waveforms (basis functions).
Define an arbitrary finite set of waveforms, si (t), i = 1, · · · , M, where each member of
the set is physically realizable and of duration T . A set of N orthonormal basis functions,
fi (t), i = 1, · · · , N where N ≤ M, can be generated such that the signals, si (t) can be
represented as
s_i(t) = Σ_{j=1}^{N} s_ij f_j(t),   0 ≤ t ≤ T,   i = 1, 2, …, M    (1.143)
where
s_ij = ∫_0^T s_i(t) f_j(t) dt,   i = 1, …, M,   j = 1, …, N    (1.144)
Schematically, these two equations can be represented as shown in Figures 1.42 and 1.43,
respectively.
[Figure 1.42: Synthesis of s_i(t): each coefficient s_ij multiplies the basis function f_j(t), and the products are summed (Σ) to form s_i(t).]
Given a finite set of energy signal waveforms, si (t), i = 1, 2, · · · , M, how can a set of
orthonormal basis functions be generated? One approach is to use Gram-Schmidt orthogo-
nalization.
[Figure 1.43: Computation of the coefficients: s_i(t) is multiplied by each f_j(t) and integrated over [0, T] (∫_0^T dt) to produce s_ij.]
To illustrate the Gram-Schmidt procedure, first consider a set of Euclidean vectors
X_1, X_2, …, X_N   where X_n = [x_n1, x_n2, …, x_nN]^T.    (1.145)
Starting with X_1, an orthonormal basis vector, f_1, can be generated in the direction of X_1
by making X_1 a unit vector:
f_1 = X_1/||X_1|| = X_1/√(X_1^T X_1)    (1.146)
Thus
X_1 = √(X_1^T X_1) f_1    (1.147)
and the associated coefficient is
c_11 = √(X_1^T X_1) = √(X_1 · X_1).    (1.148)
The second orthonormal vector, f_2, can be generated by first projecting X_2 onto f_1. The
coefficient, c_21, in the direction of f_1 is shown in Figure 1.44. The value of c_21 can be
calculated using
c_21 = ||X_2|| cos(θ).    (1.149)
Since ||f_1|| = 1, this is also given by the inner product
c_21 = X_2 · f_1 = X_2^T f_1.    (1.150)
The next step is to subtract c_21 f_1 from X_2 to give d_2 = X_2 − c_21 f_1. The vector d_2, which is
shown in Figure 1.45, is orthogonal to f_1.

[Figure 1.44: The projection of X_2 onto f_1, giving the coefficient c_21.]

[Figure 1.45: The vector d_2 = X_2 − c_21 f_1, orthogonal to f_1.]

The orthonormal vector, f_2, is generated by making d_2 a unit vector,
f_2 = d_2/||d_2|| = d_2/√(d_2 · d_2) = d_2/√(d_2^T d_2).    (1.151)
In the above derivation two orthonormal basis vectors, f1 and f2 , have been generated using
the first two vectors, X1 and X2 . This procedure can be extended to N vectors where
d_i = X_i − Σ_{j=1}^{i−1} c_ij f_j,    (1.152)
where
cij = Xi · fj = XiT fj (1.153)
and
f_i = d_i/||d_i|| = d_i/√(d_i · d_i) = d_i/√(d_i^T d_i),   i = 1, 2, …, M.    (1.154)
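Equations (1.152)-(1.154) translate almost directly into code. The sketch below uses illustrative vectors; the tolerance test skips linearly dependent inputs, for which d_i = 0.

```python
import numpy as np

def gram_schmidt(vectors, tol=1e-12):
    """Orthonormalize a list of real vectors via eqs. (1.152)-(1.154)."""
    basis = []
    for x in vectors:
        # d_i = X_i - sum_j c_ij f_j, with c_ij = X_i . f_j  (1.152)-(1.153)
        d = x - sum((x @ f) * f for f in basis)
        if np.linalg.norm(d) > tol:             # d_i = 0 => dependent, skip
            basis.append(d / np.linalg.norm(d))  # f_i = d_i / ||d_i||  (1.154)
    return basis

X = [np.array([1.0, 1.0, 0.0]),
     np.array([1.0, 0.0, 1.0]),
     np.array([2.0, 1.0, 1.0])]   # the third is the sum of the first two

F = gram_schmidt(X)
```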
Gram-Schmidt orthogonalization can also be applied to continuous time waveforms.
Given a set of signals, s_i(t), i = 1, 2, …, M, 0 ≤ t ≤ T, select a set of N linearly
independent waveforms from s_i(t), where N ≤ M. Starting with the first waveform, s_1(t), the
energy in s1 (t) is given by
E_{s_1} = ∫_0^T |s_1(t)|² dt.    (1.155)
E_{s_1} is the continuous-time dot product of s_1(t) with itself, and since the length of a vector
is the square root of the inner product, √E_{s_1} is the continuous-time length of s_1(t). Thus
the first basis function is given by
f_1(t) = s_1(t)/√E_{s_1}    (1.156)
Note that f1 (t) is simply s1 (t) normalized to unit energy. Rearranging (1.156) gives
s_1(t) = √E_{s_1} f_1(t) = c_11 f_1(t)    (1.157)
and thus
c_11 = √E_{s_1}.    (1.158)
The second basis function is constructed from s2 (t). Since f1 (t) has unit energy the
projection of s2 (t) onto f1 (t) is given by the inner product.
c_21 = ∫_0^T s_2(t) f_1*(t) dt    (c_12 was used in Proakis)    (1.159)
As was done for the Euclidean vectors, subtract c21 f1 (t) from s2 (t) to give
d2 (t) = s2 (t) − c21 f1 (t). (1.160)
d2 (t) is orthogonal to f1 (t) in the interval 0 ≤ t ≤ T . This can be normalized by dividing by
the square root of the energy in d2 (t), to give f2 (t).
f_2(t) = d_2(t)/√(∫_0^T |d_2(t)|² dt)    (1.161)
This procedure can be extended to N basis functions using
d_i(t) = s_i(t) − Σ_{j=1}^{i−1} c_ij f_j(t)    (1.162)
where
c_ij = ∫_0^T s_i(t) f_j*(t) dt,   j = 1, 2, …, i − 1    (1.163)
and
f_i(t) = d_i(t)/√(∫_0^T |d_i(t)|² dt),   i = 1, 2, …, N,    (1.164)
which gives a complete set of orthonormal basis functions.
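The continuous-time procedure can be sketched on sampled waveforms, with the inner products of eqs. (1.162)-(1.164) approximated by Riemann sums. The three pulses below are illustrative assumptions; the third is a linear combination of the first two, so only two basis functions result.

```python
import numpy as np

T, fs = 1.0, 3000
t = np.arange(0, T, 1 / fs)

signals = [np.ones_like(t),                 # s1(t) = 1 on [0, T]
           np.where(t < T / 2, 1.0, 0.0),   # s2(t) = 1 on [0, T/2]
           np.where(t >= T / 2, 2.0, 0.0)]  # s3(t) = 2(s1(t) - s2(t)), dependent

def inner(x, y):
    """Riemann-sum approximation of the inner product (1.114)."""
    return np.sum(x * np.conj(y)) / fs

basis = []
for s in signals:
    d = s - sum(inner(s, f) * f for f in basis)   # eqs. (1.162)-(1.163)
    E_d = inner(d, d).real                        # energy of d_i(t)
    if E_d > 1e-9:
        basis.append(d / np.sqrt(E_d))            # eq. (1.164)
```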
The set of N orthonormal basis functions, as stated before, can be used to represent the
signals as
s_i(t) = Σ_{j=1}^{N} s_ij f_j(t),   0 ≤ t ≤ T,   i = 1, 2, …, M    (1.165)
where
s_ij = ∫_0^T s_i(t) f_j(t) dt,   i = 1, …, M,   j = 1, …, N,   N ≤ M    (1.166)
Thus each signal in the set {si (t)} is essentially determined by the coefficients, which can be
put in vector format as follows.
s_i = [s_i1, s_i2, …, s_iN]^T,   i = 1, …, M    (1.167)
This vector of coefficients, si , can be viewed as a vector in N dimensional Euclidean space.
Thus the M signals, si (t), i = 1, · · · , M, can be represented as M vectors, si , i = 1, · · · , M,
in an N-dimensional coordinate system. An example of a signal space for N = 2 and M = 4
is given in Figure 1.46.

[Figure 1.46: A signal space for N = 2 and M = 4, showing the vectors s_1, s_2, s_3, s_4 on the f_1(t) and f_2(t) axes.]

In this figure, the signals are represented as vectors in Euclidean
space, thus the standard Euclidean space operations can be applied to these vectors. For
example, the inner product of two signal vectors can be computed as
s_i · s_k = s_i^T s_k = Σ_{j=1}^{N} s_ij s_kj.    (1.168)
The energy of the signal can be determined directly from the vector by
E_{s_i} = s_i · s_i = s_i^T s_i = Σ_{j=1}^{N} s_ij² = ∫_0^T |s_i(t)|² dt    (1.169)
The Euclidean distance between two signal points can also be determined by using
||s_i − s_k|| = √((s_i − s_k) · (s_i − s_k))
             = √((s_i − s_k)^T (s_i − s_k))
             = √(∫_0^T |s_i(t) − s_k(t)|² dt)    (1.170)
Although different sets of basis functions are possible, the signal vectors {s_n} will retain their
geometrical configuration (distances between vectors) and their lengths will be the same for all sets of {f_n(t)}.
Example
Four signals are given in Figure 1.47. Determine a set of orthonormal basis functions and
the corresponding signal vectors.

[Figure 1.47: The four signals s_1(t), s_2(t), s_3(t), and s_4(t), each defined on the interval 0 ≤ t ≤ T.]
[Figure 1.48: The signal d_2(t), equal to 1 on T/3 ≤ t ≤ 2T/3.]
c_32 = ∫_{−∞}^{∞} s_3(t) f_2*(t) dt = ∫_{T/3}^{2T/3} (1)√(3/T) dt
     = √(3/T) (2T/3 − T/3) = √(T/3)    (1.178)
d3 (t) = s3 (t) − c31 f1 (t) − c32 f2 (t) (1.179)
The signals used to calculate d_3(t) are shown in Figure 1.49.

[Figure 1.49: The signals used to calculate d_3(t).]

Thus
d_3(t) = { 1,   2T/3 ≤ t ≤ T
         { 0,   otherwise    (1.180)

f_3(t) = d_3(t)/√(∫_{−∞}^{∞} (d_3(t))² dt) = d_3(t)/√(T/3) = { √(3/T),   2T/3 ≤ t ≤ T
                                                             { 0,         otherwise.    (1.181)
The three basis functions are shown in Figure 1.50. Using these orthonormal basis functions,
the signal vectors are given by

s_1 = [√(T/3), 0, 0]^T,   s_2 = [√(T/3), √(T/3), 0]^T,   s_3 = [0, √(T/3), √(T/3)]^T,   s_4 = [√(T/3), √(T/3), √(T/3)]^T    (1.182)
[Figure 1.50: The three basis functions f_1(t), f_2(t), and f_3(t), each of amplitude √(3/T) on the intervals [0, T/3], [T/3, 2T/3], and [2T/3, T], respectively.]
As a second example, change the order in which s1 (t), s2 (t) and s3 (t) are used to generate
the basis functions. Starting with s_1(t), the first basis function (here φ_1(t) is used instead of
f_1(t)) is
φ_1(t) = s_1(t)/√E_{s_1} = { √(3/T),   0 ≤ t ≤ T/3
                           { 0,         otherwise    (1.183)
The second basis function will be determined using s3 (t). The projection of s3 (t) onto φ1 (t)
is given by
s_31 = ∫_{−∞}^{∞} s_3(t) φ_1*(t) dt = 0    (1.184)
[Figure 1.51: The signal s_2(t) and its components s_21 φ_1(t) and s_23 φ_3(t).]
φ_2(t) = d_2(t)/√(∫_{−∞}^{∞} (d_2(t))² dt) = d_2(t)/√(T/6) = { √(3/(2T)),    T/3 ≤ t ≤ 2T/3
                                                             { −√(3/(2T)),   2T/3 ≤ t ≤ T
                                                             { 0,             otherwise.    (1.191)
The three basis functions are shown in Figure 1.52. Using these orthonormal basis functions,
the signal vectors can again be determined.

[Figure 1.52: The basis functions φ_1(t), φ_2(t), and φ_3(t); φ_2(t) takes the value −√(3/(2T)) on 2T/3 ≤ t ≤ T.]
Note that
s_22 = ∫_{−∞}^{∞} s_2(t) φ_2*(t) dt = ∫_{T/3}^{2T/3} (1)√(3/(2T)) dt = √(T/6)
s_32 = ∫_{−∞}^{∞} s_3(t) φ_2*(t) dt = 0
s_33 = ∫_{−∞}^{∞} s_3(t) φ_3*(t) dt = ∫_{T/3}^{T} (1)√(3/(2T)) dt = √(2T/3)
s_41 = ∫_{−∞}^{∞} s_4(t) φ_1*(t) dt = ∫_0^{T/3} (1)√(3/T) dt = √(T/3)
s_42 = ∫_{−∞}^{∞} s_4(t) φ_2*(t) dt = 0
s_43 = ∫_{−∞}^{∞} s_4(t) φ_3*(t) dt = ∫_{T/3}^{T} (1)√(3/(2T)) dt = √(2T/3)
Changing the order of the signals used to generate the basis functions produces a different
set of basis functions, and thus a different set of signal vectors, as can be seen by comparing
equations (1.182) and (1.192). But the lengths of these vectors and the distances between
the vectors are the same for both sets of signal vectors.
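This invariance can be sketched directly: rotating the orthonormal basis changes the coefficient vectors but not their lengths or pairwise distances. The vectors and rotation angle below are illustrative.

```python
import numpy as np

# Coefficient vectors of three signals in one orthonormal basis (illustrative).
s = np.array([[1.0, 0.0],
              [0.5, 0.5],
              [0.0, 1.0]])

# An orthonormal change of basis: rotation by an arbitrary angle.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
s_new = s @ R.T   # the same signals expressed in the rotated basis

lengths = np.linalg.norm(s, axis=1)
lengths_new = np.linalg.norm(s_new, axis=1)
d_01 = np.linalg.norm(s[0] - s[1])
d_01_new = np.linalg.norm(s_new[0] - s_new[1])
```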
The mapping involves taking sets of k = log_2 M symbols from {a_n} and selecting one of M = 2^k deterministic, finite
energy waveforms from {s_m(t)} to transmit over the channel.
s(t) = Re{u(t) e^{jω_c t}}
     = x(t) cos(ω_c t) − y(t) sin(ω_c t)    (1.193)
over any symbol interval, 0 ≤ t ≤ T , where x(t) = a(t) cos (θ(t)) , y(t) = a(t) sin (θ(t)) and
a(t) and/or θ(t) is a function of {an }, the information sequence.
Thus in digital modulation, amplitude and phase modulation are considered linear, and
frequency modulation is considered non-linear (for memoryless schemes). In linear modulation
systems the spectrum of the modulated signal is simply the frequency translation of the
baseband spectrum. But, since frequency modulation is a nonlinear modulation technique,
the spectral properties of the modulated signal cannot, in general, be deduced from the
baseband spectrum. Frequency modulation techniques generally alter the baseband spectral