
School of Computer Science and Engineering

University of New South Wales


COMP3121/3821/9101/9801
A. Ignjatovic
4/4/2013
Polynomial Multiplication and The Fast Fourier Transform (FFT)
We now continue elaborating on the methods from the previous lecture on fast multiplication of large integers, focusing our attention on the problem of efficient multiplication of polynomials. Read pages 822-838 of the second edition of the textbook (CLRS) or 776-791 of the first edition. Why are we doing the FFT? Besides being a great example of the divide-and-conquer design strategy, it is BY FAR the MOST EXECUTED algorithm today; it runs a huge number of times each second in your mobile phone, your modem, your digital camera, your MP3 player ... It is arguably the MOST important algorithm today, without any serious competition!
Multiplication of Polynomials
Let A(x) = Σ_{j=0}^{n} A_j x^j and B(x) = Σ_{j=0}^{n} B_j x^j be two polynomials of degree n (if one of the polynomials is of lower degree, we can pad it with leading zero coefficients). Let us set C(x) = A(x) B(x); then C(x) is of degree (at most) 2n. Thus, it can be written as C(x) = Σ_{j=0}^{2n} c_j x^j, and if we set A_i and B_i to zero for i > n, we have
(1)  C(x) = Σ_{j=0}^{2n} c_j x^j = A(x) B(x) = Σ_{j=0}^{2n} ( Σ_{i=0}^{j} A_i B_{j−i} ) x^j.
Thus, we have to find an efficient algorithm for finding the coefficients

    c_j = Σ_{i=0}^{j} A_i B_{j−i}

for j ≤ 2n, from A_i, B_i, i ≤ n. Prima facie, finding the coefficients of C(x) directly still involves (n + 1)^2 multiplications, because all pairs of the form A_i B_j, 0 ≤ i, j ≤ n, appear in the coefficients c_j = Σ_{i=0}^{j} A_i B_{j−i}.
Let A_n . . . A_0 and B_n . . . B_0 be an arbitrary pair of number sequences; let us pad them with zeros from the left to length 2n + 1, i.e., let us set A_i = 0 and B_i = 0 for n < i ≤ 2n. Then the sequence

    ( Σ_{i=0}^{j} A_i B_{j−i} )_{j=0}^{2n}

is called the (linear) convolution of the sequences A and B, and is denoted A * B:

    A * B = A_n B_n , . . . , A_2 B_0 + A_1 B_1 + A_0 B_2 , A_1 B_0 + A_0 B_1 , A_0 B_0
Thus, we need efficient algorithms for evaluating the linear convolution of two sequences.
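As a sanity check, the convolution above can be computed directly from its definition in quadratic time; here is a minimal Python sketch (the function name `convolve` is ours, not from the lecture):

```python
def convolve(A, B):
    """Linear convolution of coefficient sequences A and B:
    returns c with c[j] = sum over i of A[i] * B[j-i],
    i.e., the coefficients of the product polynomial."""
    c = [0] * (len(A) + len(B) - 1)
    for i in range(len(A)):
        for j in range(len(B)):
            c[i + j] += A[i] * B[j]
    return c

# (1 + 2x)(3 + 4x) = 3 + 10x + 8x^2
print(convolve([1, 2], [3, 4]))  # [3, 10, 8]
```

The two nested loops make the (n + 1)^2 multiplications explicit; the point of this lecture is to do better.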
Coefficient vs. value representation of polynomials. Every polynomial A(x) of degree n is uniquely determined by its values at n + 1 distinct input values for x:

    A(x) ↔ (x_0, A(x_0)), (x_1, A(x_1)), . . . , (x_n, A(x_n))

If A(x) = A_n x^n + A_{n−1} x^{n−1} + . . . + A_0, we can write this in matrix form:
(2)  [ 1  x_0  x_0^2  . . .  x_0^n ] [ A_0 ]   [ A(x_0) ]
     [ 1  x_1  x_1^2  . . .  x_1^n ] [ A_1 ]   [ A(x_1) ]
     [ .                         . ] [  .  ] = [   .    ]
     [ 1  x_n  x_n^2  . . .  x_n^n ] [ A_n ]   [ A(x_n) ]
The determinant of the above matrix is the Vandermonde determinant, and if all x_i are distinct, it can be shown that it is non-zero, because

(3)  det [ 1  x_0  x_0^2  . . .  x_0^n ]
         [ 1  x_1  x_1^2  . . .  x_1^n ]
         [ .                         . ]  =  Π_{i > j} (x_i − x_j) ≠ 0.
         [ 1  x_n  x_n^2  . . .  x_n^n ]
Thus, if all x_i are distinct, given any values A(x_0), A(x_1), . . . , A(x_n), the coefficients A_0, A_1, . . . , A_n are uniquely determined:
(4)  [ A_0 ]   [ 1  x_0  x_0^2  . . .  x_0^n ]^(−1)  [ A(x_0) ]
     [ A_1 ]   [ 1  x_1  x_1^2  . . .  x_1^n ]       [ A(x_1) ]
     [  .  ] = [ .                         . ]       [   .    ]
     [ A_n ]   [ 1  x_n  x_n^2  . . .  x_n^n ]       [ A(x_n) ]
Why do we consider the value representation of polynomials? Because polynomials in value representation are easy to multiply: if

    A(x) ↔ (x_0, A(x_0)), (x_1, A(x_1)), . . . , (x_n, A(x_n))

and

    B(x) ↔ (x_0, B(x_0)), (x_1, B(x_1)), . . . , (x_n, B(x_n)),

then their product C(x) = A(x)B(x) can be value represented as

    C(x) ↔ (x_0, A(x_0)B(x_0)), (x_1, A(x_1)B(x_1)), . . . , (x_n, A(x_n)B(x_n)),

which involves only n + 1 multiplications of the form A(x_i)B(x_i). Thus, unlike polynomials in coefficient form, polynomials in value form are easy to multiply in linear time. For this reason our strategy will be as follows:

We will find a fast algorithm for converting the coefficient representation of polynomials into the value representation, we will multiply the polynomials in their value form in linear time, and then we will find a fast algorithm for converting the value representation back into the standard coefficient representation.
However, if A(x) and B(x) are of degree n, the product polynomial C(x) = A(x)B(x) is of degree 2n, and to uniquely determine it, we need 2n + 1 of its values:

    C(x) = A(x) B(x) ↔ (x_0, A(x_0)B(x_0)), (x_1, A(x_1)B(x_1)), . . . , (x_{2n}, A(x_{2n})B(x_{2n}))

Thus, we must overdetermine A(x) and B(x) by starting with 2n + 1 values of these two polynomials:
    A(x) ↔ (x_0, A(x_0)), (x_1, A(x_1)), . . . , (x_{2n}, A(x_{2n}))
    B(x) ↔ (x_0, B(x_0)), (x_1, B(x_1)), . . . , (x_{2n}, B(x_{2n}))
    C(x) = A(x) B(x) ↔ (x_0, A(x_0)B(x_0)), (x_1, A(x_1)B(x_1)), . . . , (x_{2n}, A(x_{2n})B(x_{2n}))

We will then use these 2n + 1 values of C(x) to find the coefficients c_i, 0 ≤ i ≤ 2n; this is called interpolation. Finding the values at a certain set of points (knowing the coefficients of the polynomial) is called evaluation.
Thus, to find the coefficients of a polynomial of degree 2n we need only find its values at 2n + 1 points. In the case of large integer multiplication, instead of looking at values for large x like 2^k, we chose small values of x, namely

    x_i ∈ { −n, −(n − 1), . . . , 0, . . . , n − 1, n }.

However, as we saw, this produced gigantic constants in our algorithm, for example multiplications with n^{2n}, which rendered our algorithm useless in practice. Thus we need inputs for our polynomials all of whose powers are of the same size, and to achieve that we must resort to complex numbers. Besides controlling the sizes of the numbers involved, using complex roots of unity will provide another key feature which will make our divide-and-conquer algorithm fast (the cancellation lemma below).
Complex numbers z = a + ib can be represented using their modulus |z| = √(a^2 + b^2) and their argument, defined as arg z = arctan(b/a), where the arctan function (pronounced arcus tangens) is defined so that it takes values in (−π, π]:

    z = |z| e^{i arg z} = |z| (cos arg z + i sin arg z),

see the figure below.
As you can recall, z^n = |z|^n e^{i n arg z}; thus, if we take the primitive n-th root of unity, i.e., ω_n = e^{2πi/n}, since |ω_n| = 1 we have |ω_n^m| = |ω_n|^m = 1 for all m. Note that ω_n^k = e^{2πki/n}; thus, all powers of ω_n belong to the unit circle and are equally spaced, having arguments which are integer multiples of 2π/n.
[Figure: the complex number z = a + ib in the plane, with its modulus |z| and argument arg(z), and the primitive root of unity e^{2πi/n} at angle 2π/n on the unit circle.]
Besides remaining of constant size (modulus), roots of unity satisfy the following cancellation property:

    (ω_{dn})^{dk} = ω_n^k.

Thus, taking the primitive root of unity of order d·n to the power d·k is the same as taking the root of unity of order n to the power k. This is demonstrated by the following simple calculation:

    (ω_{dn})^{dk} = (e^{2πi/(dn)})^{dk} = (e^{2πi/n})^k = (ω_n)^k.
This fact has the following simple consequence, crucial for our algorithm.

Lemma 0.1 (Halving Lemma). If n > 0 is an even number, then the squares of the n complex roots of unity of order n are exactly the n/2 complex roots of unity of order n/2.

Proof. By the above cancellation property (with d = 2) we have

    (ω_n^k)^2 = (ω_{2·(n/2)})^{2k} = ω_{n/2}^k.  □

Thus, the total number of distinct squares of the roots of unity of order n is n/2. This fact is crucial for our FFT algorithm.
The Discrete Fourier Transform

Let A = (A_0, A_1, . . . , A_n) be a sequence of n + 1 real or complex numbers. We can then form the corresponding polynomial A(x) = Σ_{j=0}^{n} A_j x^j, and evaluate it at all complex roots of unity of order n + 1, i.e., we can evaluate A(ω_{n+1}^k) for all 0 ≤ k ≤ n. The sequence of values (A(1), A(ω_{n+1}), A(ω_{n+1}^2), . . . , A(ω_{n+1}^n)) is called the Discrete Fourier Transform (DFT) of the sequence A = (A_0, A_1, . . . , A_n).
To multiply two polynomials of degree (at most) n we will evaluate them at the roots of unity of order 2n + 1, thus in effect taking the DFT of the (0-padded) sequence of their coefficients (A_0, A_1, . . . , A_n, 0, . . . , 0) (with n padding zeros); we will then multiply the corresponding values at these roots of unity, and then use the inverse transformation for the DFT, namely the IDFT, to recover the coefficients of the product polynomial from its values at these roots of unity:

    A(x) = A_0 + A_1 x + . . . + A_n x^n   —DFT→   A(1), A(ω_{2n+1}), A(ω_{2n+1}^2), . . . , A(ω_{2n+1}^{2n})
    B(x) = B_0 + B_1 x + . . . + B_n x^n   —DFT→   B(1), B(ω_{2n+1}), B(ω_{2n+1}^2), . . . , B(ω_{2n+1}^{2n})
    —multiplication→   A(1)B(1), A(ω_{2n+1})B(ω_{2n+1}), . . . , A(ω_{2n+1}^{2n})B(ω_{2n+1}^{2n})
    —IDFT→   C(x) = A(x) B(x) = Σ_{j=0}^{2n} ( Σ_{i=0}^{j} A_i B_{j−i} ) x^j.

We now have to show that both the DFT and the IDFT can be computed efficiently, rather than in the time O(n^2) which the brute-force polynomial multiplication would require. That's precisely what our FFT algorithm accomplishes.
FFT
The FFT is thus a fast algorithm which, given a polynomial (or, equivalently, the sequence of its coefficients), produces its values at all the roots of unity of the appropriate order (i.e., the DFT of the sequence of its coefficients). To make our divide-and-conquer algorithm run smoothly, we will assume that we are evaluating a polynomial given by 2^k coefficients at the 2^k roots of unity of order 2^k. This adds only an inessential cost, because if the starting polynomial is of degree n, there is a power of two between n and 2n (how would you find it?). Thus, we would pad the original polynomial with 0 coefficients for the leading powers, so that it becomes a polynomial of order 2^k. We can now proceed with the divide-and-conquer method:

We break the original polynomial into two, separating even and odd degrees (recall we are assuming that n is of the form 2^k):
    A(x) = (A_0 + A_2 x^2 + A_4 x^4 + . . . + A_{n−2} x^{2(n/2−1)}) + (A_1 x + A_3 x^3 + . . . + A_{n−1} x^{n−1})
         = (A_0 + A_2 (x^2) + A_4 (x^2)^2 + . . . + A_{n−2} (x^2)^{n/2−1}) + x (A_1 + A_3 x^2 + A_5 (x^2)^2 + . . . + A_{n−1} (x^2)^{n/2−1})
         = A^[0](x^2) + x A^[1](x^2),
where the two polynomials

    A^[0](y) = A_0 + A_2 y + A_4 y^2 + . . . + A_{n−2} y^{n/2−1}

and

    A^[1](y) = A_1 + A_3 y + A_5 y^2 + . . . + A_{n−1} y^{n/2−1}

have n/2 coefficients each (they are both of degree n/2 − 1), and they have to be evaluated at all values (ω_n^k)^2, because we got A(x) = A^[0](x^2) + x A^[1](x^2).
In order to use the divide-and-conquer strategy, we have to reduce a problem of size n to two problems of size n/2. But what is a problem of size n?

    Evaluate a polynomial given by n coefficients at n input values.

Thus, a problem of size n/2 is:

    Evaluate a polynomial given by n/2 coefficients at n/2 input values.
We have reduced the evaluation of a polynomial given by n coefficients into two subproblems of evaluating two polynomials given by n/2 coefficients, but for a successful reduction we also have to make sure that these two polynomials are evaluated at only n/2 values, and this is where our Halving Lemma comes into play: we need the values of A(x) = A^[0](x^2) + x A^[1](x^2) for x_j = ω_n^k, but this involves evaluating A^[0](x) and A^[1](x) only at x_j^2 = (ω_n^k)^2. As we saw, by our Halving Lemma, there are only n/2 distinct squares of the roots ω_n^k. Thus, we have succeeded in reducing our problem to two subproblems of size n/2. To combine the solutions we need to form the sums A(ω_n^k) = A^[0]((ω_n^k)^2) + ω_n^k A^[1]((ω_n^k)^2), and this involves n multiplications (of ω_n^k with A^[1]((ω_n^k)^2)) and n subsequent additions. Thus, to combine the solutions we need O(n) operations, and we get the following recurrence:

    T(n) = 2 T(n/2) + O(n).

By the Master Theorem we get that T(n) = Θ(n log n).
We can make the above algorithm slightly faster by realizing that ω_n^{k+n/2} = −ω_n^k, so we can halve the total number of multiplications by going through only ω_n^0, ω_n^1, . . . , ω_n^{n/2−1}, and just using −ω_n^0, −ω_n^1, . . . , −ω_n^{n/2−1} instead of ω_n^{n/2}, ω_n^{n/2+1}, . . . , ω_n^{n−1}. Thus we get the following pseudocode for our FFT algorithm:
FFT(A)
 (1)  n ← length[A]
 (2)  if n = 1
 (3)      return A
 (4)  A^[0] ← (A_0, A_2, . . . , A_{n−2})
 (5)  A^[1] ← (A_1, A_3, . . . , A_{n−1})
 (6)  y^[0] ← FFT(A^[0])
 (7)  y^[1] ← FFT(A^[1])
 (8)  ω_n ← e^{2πi/n}
 (9)  ω ← 1
 (10) for k = 0 to n/2 − 1 do:
 (11)     y_k ← y^[0]_k + ω y^[1]_k
 (12)     y_{k+n/2} ← y^[0]_k − ω y^[1]_k
 (13)     ω ← ω ω_n
 (14) return y
Steps (11) and (12) form the butterfly operation, often implemented in processors with separate hardware for speed; see the diagram below.
Inverse DFT. The above evaluation of a polynomial A(x) = A_0 + A_1 x + . . . + A_{n−1} x^{n−1} at the roots of unity ω_n^k of order n can be represented in matrix form as follows:

(5)  [ 1  1          1            . . .  1                ] [ A_0     ]   [ A(1)         ]
     [ 1  ω_n        ω_n^2        . . .  ω_n^{n−1}        ] [ A_1     ]   [ A(ω_n)       ]
     [ 1  ω_n^2      ω_n^4        . . .  ω_n^{2(n−1)}     ] [ A_2     ] = [ A(ω_n^2)     ]
     [ .                                               .  ] [  .      ]   [   .          ]
     [ 1  ω_n^{n−1}  ω_n^{2(n−1)} . . .  ω_n^{(n−1)(n−1)} ] [ A_{n−1} ]   [ A(ω_n^{n−1}) ]
Thus, if we have the values A(1) = A(ω_n^0), A(ω_n), A(ω_n^2), . . . , A(ω_n^{n−1}), we can get the coefficients from

(6)  [ A_0     ]   [ 1  1          1            . . .  1                ]^(−1)  [ A(1)         ]
     [ A_1     ]   [ 1  ω_n        ω_n^2        . . .  ω_n^{n−1}        ]       [ A(ω_n)       ]
     [ A_2     ] = [ 1  ω_n^2      ω_n^4        . . .  ω_n^{2(n−1)}     ]       [ A(ω_n^2)     ]
     [  .      ]   [ .                                               .  ]       [   .          ]
     [ A_{n−1} ]   [ 1  ω_n^{n−1}  ω_n^{2(n−1)} . . .  ω_n^{(n−1)(n−1)} ]       [ A(ω_n^{n−1}) ]
This is another place where something remarkable about the roots of unity is true: to obtain the inverse of the above matrix, all we have to do is change the signs of the exponents and divide by n:

(7)  [ 1  1          1            . . .  1                ]^(−1)       [ 1  1             1              . . .  1                   ]
     [ 1  ω_n        ω_n^2        . . .  ω_n^{n−1}        ]        1   [ 1  ω_n^{−1}      ω_n^{−2}       . . .  ω_n^{−(n−1)}        ]
     [ 1  ω_n^2      ω_n^4        . . .  ω_n^{2(n−1)}     ]     = ―――  [ 1  ω_n^{−2}      ω_n^{−4}       . . .  ω_n^{−2(n−1)}       ]
     [ .                                               .  ]        n   [ .                                                       .  ]
     [ 1  ω_n^{n−1}  ω_n^{2(n−1)} . . .  ω_n^{(n−1)(n−1)} ]            [ 1  ω_n^{−(n−1)}  ω_n^{−2(n−1)}  . . .  ω_n^{−(n−1)(n−1)}   ]
To see this, note that if we evaluate the product

(8)  [ 1  1          1            . . .  1                ] [ 1  1             1              . . .  1                  ]
     [ 1  ω_n        ω_n^2        . . .  ω_n^{n−1}        ] [ 1  ω_n^{−1}      ω_n^{−2}       . . .  ω_n^{−(n−1)}       ]
     [ 1  ω_n^2      ω_n^4        . . .  ω_n^{2(n−1)}     ] [ 1  ω_n^{−2}      ω_n^{−4}       . . .  ω_n^{−2(n−1)}      ]
     [ .                                               .  ] [ .                                                      .  ]
     [ 1  ω_n^{n−1}  ω_n^{2(n−1)} . . .  ω_n^{(n−1)(n−1)} ] [ 1  ω_n^{−(n−1)}  ω_n^{−2(n−1)}  . . .  ω_n^{−(n−1)(n−1)}  ]

we get that the (i, j) entry in the product matrix is equal to

(9)  ( 1  ω_n^i  ω_n^{2i}  . . .  ω_n^{i(n−1)} ) [ 1            ]
                                                 [ ω_n^{−j}     ]
                                                 [ ω_n^{−2j}    ]   =  Σ_{k=0}^{n−1} ω_n^{ik} ω_n^{−jk}  =  Σ_{k=0}^{n−1} ω_n^{(i−j)k}
                                                 [  .           ]
                                                 [ ω_n^{−(n−1)j} ]
We now have two possibilities:

(1) i = j: then Σ_{k=0}^{n−1} ω_n^{(i−j)k} = Σ_{k=0}^{n−1} ω_n^0 = Σ_{k=0}^{n−1} 1 = n;

(2) i ≠ j: then Σ_{k=0}^{n−1} ω_n^{(i−j)k} represents a geometric series with ratio ω_n^{i−j}, and thus

(10)  Σ_{k=0}^{n−1} ω_n^{(i−j)k}  =  (1 − ω_n^{(i−j)n}) / (1 − ω_n^{i−j})  =  (1 − (ω_n^n)^{i−j}) / (1 − ω_n^{i−j})  =  (1 − 1) / (1 − ω_n^{i−j})  =  0.
This proves our claim that (7) holds. Thus, (6) implies that

(11)  [ A_0     ]        [ 1  1             1              . . .  1                  ]  [ A(1)         ]
      [ A_1     ]    1   [ 1  ω_n^{−1}      ω_n^{−2}       . . .  ω_n^{−(n−1)}       ]  [ A(ω_n)       ]
      [ A_2     ] = ―――  [ 1  ω_n^{−2}      ω_n^{−4}       . . .  ω_n^{−2(n−1)}      ]  [ A(ω_n^2)     ]
      [  .      ]    n   [ .                                                      .  ]  [   .          ]
      [ A_{n−1} ]        [ 1  ω_n^{−(n−1)}  ω_n^{−2(n−1)}  . . .  ω_n^{−(n−1)(n−1)}  ]  [ A(ω_n^{n−1}) ]
But this means that, in order to invert the DFT, all we have to do is apply our FFT algorithm with ω_n^{−1} in place of ω_n, and then divide the result by n. Consequently, we can use the same algorithm and the same hardware for computing both the DFT and the IDFT (the Inverse Discrete Fourier Transform), with the minor change mentioned above!
1. Interpretation of the DFT

So far we have followed the textbook (CLRS); however, what Cormen et al. call the DFT, namely the sequence (A(ω_n^0), A(ω_n^1), A(ω_n^2), . . . , A(ω_n^{n−1})), is usually considered the Inverse transform of the sequence of coefficients (a_0, a_1, a_2, . . . , a_{n−1}) of the polynomial A(x), while (A(ω_n^0), A(ω_n^{−1}), A(ω_n^{−2}), . . . , A(ω_n^{−(n−1)})) is considered the forward operation, i.e., the DFT. Clearly, since (1/n)(DFT ∘ IDFT) = I (I is the identity mapping), both choices are equally legitimate, but taking (A(ω_n^0), A(ω_n^{−1}), A(ω_n^{−2}), . . . , A(ω_n^{−(n−1)})) as the forward operation has an important conceptual advantage and is used more often than the textbook's choice.
To explain this, recall that the scalar product (also called the dot product) of two vectors with real coordinates, x = (x_0, x_1, . . . , x_{n−1}) and y = (y_0, y_1, . . . , y_{n−1}), x, y ∈ R^n, denoted by ⟨x, y⟩ (or x · y), is defined as

    ⟨x, y⟩ = Σ_{i=0}^{n−1} x_i y_i.

If the coordinates of our vectors are complex numbers, i.e., if x, y ∈ C^n, then the scalar product of two such vectors is defined as

    ⟨x, y⟩ = Σ_{i=0}^{n−1} x_i conj(y_i),

where conj(z) denotes the complex conjugate of z, i.e., conj(a + ib) = a − ib.
where z denotes the complex conjugate of z, i.e., a + i b = a i b.
Since e
2ik
n
= e

2ik
n
, we now see that equations (9) and (10) actually show that any
two distinct rows (or columns) of the matrix corresponding to DFT are orthogonal, or,
in other words, that for i ,= j vectors w
i
= (1
i
n
,
2i
n
, . . .
i(n1)
n
) and w
j
=
(1
j
n
,
2j
n
, . . .
j(n1)
n
) are mutually orthogonal. Thus, the set w
i
: 0 i n is an
orthogonal basis for the space C
n
. From the same equations it is also clear that the norm
| w
i
|
2
= w
i
, w
i
= n. Thus, if we set e
i
=
1

n
w
i
, then | e
i
|
2
= e
i
, e
i
=
1
n
w
i
, w
i
= 1,
which means that the set of vectors B = e
i
: 0 i n form an orthonormal base for
the vector space C
n
of complex sequences of length n.
If we accept that the forward operation involves negative powers of the roots of unity ω_n, it is easy to see what the DFT of a sequence c = (c_0, c_1, . . . , c_{n−1}) represents: it is just the sequence of the coordinates of the vector c with respect to the basis above, because for A(x) = c_0 + c_1 x + . . . + c_{n−1} x^{n−1} we have

(12)  A(ω_n^{−k}) = Σ_{i=0}^{n−1} c_i (ω_n^{−k})^i = Σ_{i=0}^{n−1} c_i ω_n^{−ki} = ⟨c, w_k⟩ = √n ⟨c, e_k⟩

Thus, A(ω_n^{−k}) is (up to the factor √n) simply the projection of the vector c onto the basis vector e_k. We can now represent c in the basis { e_i : 0 ≤ i < n }:

(13)  c = Σ_{i=0}^{n−1} ⟨c, e_i⟩ e_i = (1/√n) Σ_{i=0}^{n−1} A(ω_n^{−i}) e_i = (1/n) Σ_{i=0}^{n−1} A(ω_n^{−i}) w_i
The sequence (c_0, c_1, c_2, . . . , c_{n−1}) can be represented in the usual basis

    (1, 0, 0, 0, . . . , 0), (0, 1, 0, 0, . . . , 0), . . . , (0, 0, 0, . . . , 1)
[Figure 1: Representing a vector c as a linear combination of the basis vectors e_1, e_2, e_3, with the projections ⟨c, e_1⟩, ⟨c, e_2⟩, ⟨c, e_3⟩ as coefficients: c = ⟨c, e_1⟩ e_1 + ⟨c, e_2⟩ e_2 + ⟨c, e_3⟩ e_3.]
in the obvious way:

    (c_0, c_1, c_2, . . . , c_{n−1}) = c_0 (1, 0, 0, 0, . . . , 0) + c_1 (0, 1, 0, 0, . . . , 0) + . . . + c_{n−1} (0, 0, 0, . . . , 1)

Thus, taking the Discrete Fourier Transform of a sequence (c_0, c_1, . . . , c_{n−1}) amounts to representing that sequence in a different basis, namely the basis B = { e_i : 0 ≤ i < n }.
Both sides of equation (13) represent the same vector c; the m-th coordinate of the left side is c_m; the m-th coordinate of the right side is (1/n) Σ_{k=0}^{n−1} A(ω_n^{−k}) ω_n^{mk}; thus, changing the index of summation from i to k we get

(14)  c_m = (1/n) Σ_{k=0}^{n−1} A(ω_n^{−k}) e^{2πi mk/n}
Note that e^{2πi m(n−k)/n} = e^{2πi mn/n} e^{2πi m(−k)/n} = e^{2πi m} e^{−2πi mk/n} = e^{−2πi mk/n}. Thus, if we assume for simplicity that the sequence c is of odd length 2n + 1, then from the above equation,

(15)  c_m = (1/(2n+1)) Σ_{k=−n}^{n} A(ω_{2n+1}^{−k}) e^{2πi mk/(2n+1)}
Assume now that the elements c_m of the sequence corresponding to c are samples of a sound f(t), taken at equidistant (unit) intervals, i.e., c_m = f(m); then (15) states that

(16)  f(m) = (1/(2n+1)) Σ_{k=−n}^{n} A(ω_{2n+1}^{−k}) e^{2πi mk/(2n+1)}
i.e., that the equation

(17)  f(t) = (1/(2n+1)) Σ_{k=−n}^{n} A(ω_{2n+1}^{−k}) e^{2πi tk/(2n+1)}

holds at all integer points −n, . . . , −1, 0, 1, . . . , n.
The values A(ω_{2n+1}^{−k}) provided by the DFT are complex numbers, so we can represent them via their absolute value and their argument, i.e., A(ω_{2n+1}^{−k}) = |A(ω_{2n+1}^{−k})| e^{i arg(A(ω_{2n+1}^{−k}))}; thus, the signal has been, in a sense which can be made precise, represented as a sum of complex exponentials, or, equivalently, as a sum of sine waves (cosines are shifted sines):

    f(t) = (1/(2n+1)) Σ_{k=−n}^{n} |A(ω_{2n+1}^{−k})| e^{i arg(A(ω_{2n+1}^{−k}))} e^{2πi kt/(2n+1)}
         = (1/(2n+1)) Σ_{k=−n}^{n} |A(ω_{2n+1}^{−k})| e^{i ( (2πk/(2n+1)) t + arg(A(ω_{2n+1}^{−k})) )}
         = (1/(2n+1)) Σ_{k=−n}^{n} |A(ω_{2n+1}^{−k})| [ cos( (2πk/(2n+1)) t + arg(A(ω_{2n+1}^{−k})) ) + i sin( (2πk/(2n+1)) t + arg(A(ω_{2n+1}^{−k})) ) ]
If the signal is real valued (rather than complex valued), then it is easy to see that A(ω_{2n+1}^{k}) = conj(A(ω_{2n+1}^{−k})), so the imaginary parts of the above expression cancel out, and thus we get that the signal is represented as a sum of cosine waves, shifted by arg(A(ω_{2n+1}^{−k})), with amplitudes (2/(2n+1)) |A(ω_{2n+1}^{−k})|, so that at the integer sample points the values of the signal match the values of such a sum of cosine waves (i.e., f(t) is interpolated by such a sum). So, in a sense, the DFT roughly tells us which frequencies in the range

    −2πn/(2n+1), −2π(n−1)/(2n+1), . . . , −2π/(2n+1), 0, 2π/(2n+1), . . . , 2π(n−1)/(2n+1), 2πn/(2n+1)

are present in the signal, and with what amplitudes and phase shifts! This (partly) explains why the DFT is so useful in signal processing: it gives an insight into the approximate spectral content of the signal (what frequencies are present), and on top of that, the DFT can be very efficiently computed via the FFT! But this is where the story only begins; for example, one can show that if you apply a filter to a signal, for example if you want to attenuate or emphasize certain frequencies, then all you have to do is convolve the samples of the signal with a sequence of fixed coefficients corresponding to the filter. Just as in the case of polynomial multiplication, to obtain such a convolution you can simply compute the FFT of the signal, multiply it with the FFT of the filter coefficients, and then take the inverse FFT! But more about that from the Dolby specialists next week!
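As a small illustration of this spectral reading of the DFT, sampling a pure cosine and transforming it concentrates all the magnitude in the two bins matching its frequency. A sketch using the naive transform with negative exponents, as discussed above (all names are ours):

```python
import cmath
import math

def dft(c):
    """Forward transform with negative exponents, as in the discussion above."""
    n = len(c)
    return [sum(c[m] * cmath.exp(-2j * cmath.pi * k * m / n)
                for m in range(n)) for k in range(n)]

n = 32
freq = 5                                   # 5 full cycles over the n samples
samples = [math.cos(2 * math.pi * freq * m / n) for m in range(n)]
mags = [abs(v) for v in dft(samples)]

# the two largest magnitudes sit at bins k = freq and k = n - freq
peaks = sorted(range(n), key=lambda k: -mags[k])[:2]
print(sorted(peaks))  # [5, 27]
```

The pair of peaks at k and n − k is exactly the conjugate symmetry of the DFT of a real-valued signal described above; their magnitudes combine into the amplitude of a single cosine wave.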