Abstract—We develop a framework for analog-to-information conversion that enables sub-Nyquist acquisition and processing of wideband signals that are sparse in a local Fourier representation. The first component of the framework is a random sampling system that can be implemented in practical hardware. The second is an efficient information recovery algorithm to compute the spectrogram of the signal, which we dub the sparsogram. A simulated acquisition of a frequency hopping signal operates at a 33× sub-Nyquist average sampling rate with little degradation in signal quality.

I. INTRODUCTION

Sensors, signal processing hardware, and algorithms are under increasing pressure to accommodate ever faster sampling and processing rates. In this paper we study the acquisition and analysis of wideband signals that are locally Fourier sparse (LFS) in the sense that at each point in time they are well approximated by a few local sinusoids of constant frequency. Examples of LFS signals include frequency hopping communication signals, slowly varying chirps from radar and geophysics, and many acoustic and audio signals.

LFS signals are sparse in a time-frequency representation like the short-time Fourier transform (STFT). In discrete time, the STFT corresponds to a Fourier analysis on a sliding window of the signal

    S(\tau,\omega) = \sum_{\nu=-\infty}^{\infty} s(\nu)\, w(\nu-\tau)\, e^{-j\omega\nu};    (I.1)

that is, S(\tau,\omega) is the Fourier spectrum of the signal s localized around time sample \tau by the N-point window w. The spectrogram is the squared magnitude |S(\tau,\omega)|^2. The defining property of an LFS signal is that S(\tau,\omega) \approx 0 for most \tau and \omega.

While LFS signals are simply described in time-frequency, they are wideband when there is no a priori restriction on the frequencies of the local sinusoids. Hence, the requirements of traditional Nyquist-rate sampling at two times the bandwidth can be excessive and difficult to meet. As a practical example, consider sampling a frequency-hopping communications signal that consists of a sequence of windowed sinusoids with frequencies distributed between f1 and f2 Hz. The bandwidth of this signal is f2 − f1 Hz, which dictates sampling above the Nyquist rate of 2(f2 − f1) Hz to avoid aliasing. However, the description of the signal at any point in time is extremely simple: it consists of just a single sinusoid. Surely we should be able to acquire such a signal with fewer than 2(f2 − f1) samples per second.

Fortunately, the past several years have seen several advances in the theory of sampling and reconstruction that address these very questions. Leveraging the theory of streaming algorithms, we introduce in this paper a new framework for analog-to-information conversion that enables sub-Nyquist acquisition and processing of LFS signals. Our framework has two key components. The first is a random sampling system that can be implemented in practical hardware. The second is an efficient information recovery algorithm to compute the spectrogram of an LFS signal, which we dub the sparsogram.

This paper is organized as follows. We review the requisite theory and algorithms for random sampling in Section II and develop two implementations of random samplers in Section III. We conduct a number of experiments to validate our approach in Section IV and close with a discussion and conclusions in Section V.

II. RANDOM SAMPLING AND INFORMATION RECOVERY

To set the stage for the random sampling and information recovery algorithm, we start with a description of the problem in the discrete setting. Let s be a discrete-time signal of length N (not discrete samples of an analog signal but simply a vector of length N). We know that s is perfectly represented by its Fourier coefficients or spectrum. Suppose that we wish to use only m Fourier coefficients to represent the signal or spectrum. To minimize the MSE of our choice of m coefficients, the optimal choice is the m Fourier coefficients largest in magnitude. Denote the optimal m-term Fourier representation of a signal s of length N by r_opt. We assume that, for some M, we have

    (1/M) \le \|s - r_{opt}\|_2 \le \|s\|_2 \le M.

Gilbert et al. [1] have developed an algorithm that uses at most m \cdot (\log(1/\delta), \log N, \log M, 1/\epsilon)^{O(1)} space and time and outputs a representation r such that

    \|s - r\|_2^2 \le (1 + \epsilon)\,\|s - r_{opt}\|_2^2,

with probability at least 1 − \delta. The algorithm is randomized and the probability is over random choices made by the algorithm, not over the signal.
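The optimal m-term benchmark r_opt in the guarantee above is easy to compute directly when we are willing to read the whole signal: keep the m largest-magnitude Fourier coefficients and zero the rest. The sketch below is a minimal dense illustration of that benchmark, not the sublinear algorithm of [1]; the signal parameters (length, frequencies, noise level) are hypothetical.

```python
import numpy as np

def best_m_term(s, m):
    """Optimal m-term Fourier representation: keep the m largest-
    magnitude Fourier coefficients of s and zero out the rest."""
    S = np.fft.fft(s)
    keep = np.argsort(np.abs(S))[-m:]   # indices of the m largest coefficients
    S_m = np.zeros_like(S)
    S_m[keep] = S[keep]
    return np.fft.ifft(S_m)             # r_opt in the time domain

# A length-N signal that is exactly 2-sparse in frequency, plus noise.
rng = np.random.default_rng(0)
N = 1024
t = np.arange(N)
s = np.cos(2 * np.pi * 50 * t / N) + 0.5 * np.cos(2 * np.pi * 200 * t / N)
s_noisy = s + 0.01 * rng.standard_normal(N)

r_opt = best_m_term(s_noisy, m=4)       # 2 sinusoids -> 4 conjugate modes
err = np.linalg.norm(s_noisy - r_opt) / np.linalg.norm(s_noisy)
print(f"relative m-term approximation error: {err:.3f}")
```

Because the two sinusoids sit on integer frequency bins, the four kept coefficients capture nearly all of the energy and the residual is essentially the off-support noise.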
That is, the recovery procedure is a coin-flipping algorithm that uses its coin flips to choose the sample set upon which we observe the signal. For each signal, with high probability, the algorithm succeeds. As a bonus, the running time of the algorithm is much less than N, the length of the signal. This is exponentially faster than the running time of the FFT (which is O(N log N)). One reason the algorithm is so much faster is that it does not read all the input; we sample exponentially fewer positions than the length of the signal, and we assume that our algorithm can read s[t] for a t of its choice in constant time. The set of samples t where the algorithm observes the signal is chosen randomly (from a non-uniform distribution), and the set does not adapt to the signal.

Let us describe the distribution on sample points before we discuss how the information recovery algorithm uses these signal samples. Assume for simplicity that N is a prime number. We generate two uniformly random integers r and s from [0, . . . , N − 1] with the stipulation that s is non-zero. For input parameter m, we generate the set A = r + k · s mod N, where k = 0, . . . , 4m; the set A is a random arithmetic progression of length 4m. For each point p in the arithmetic progression, we also sample at (p + 2^l) mod N, where l = 0, . . . , log2 N. For example, we observe the signal at a point p and at p + N/2. Thus, our sample set is

    S = \{ (p + 2^l) \bmod N : p \in A,\; l = 0, \dots, \log_2 N \},

where A is a random arithmetic progression. To meet the accuracy and success probability guarantees stated above, we take multiple independent sample sets drawn from this distribution; the number of independent sample sets depends on the desired accuracy and success probability. Note that with low probability, this distribution generates sample points that are separated by distance 1. In other words, the probability that we choose consecutive sample points is low and can be kept under control.

Once we have specified the sample set, our algorithm proceeds in a greedy fashion. Given a signal s, we set the representation r to zero and consider the residual s − r. We make progress on lowering \|s − r\| by sampling from the residual, identifying a set of "significant" frequencies in the spectrum of the residual, estimating the Fourier coefficients of these "significant" frequencies, and subtracting their contribution, thus forming a new residual signal upon which we iterate. Below, we sketch each iteration of our algorithm; the details may be found in [1].

Each iteration of our algorithm proceeds as follows:
• SAMPLE from s − r in K ≈ m correlated random positions, where r has L terms, in total time (K + L) log^{O(1)}(N). We take the given samples from the signal s on the sample set and sample from the representation r by performing an unequally-spaced fast Fourier transform.
• IDENTIFY a set of "significant" frequencies in the spectrum of s − r.
  – ISOLATE one or more modes of s − r. We generate a set {F_k : k < K} of K new signals from s − r, where K ≤ O(1/η) is sufficiently large, so that each ω that is η-significant in s − r is likely to be (1 − γ)-significant in some F_k for some small constant γ. To do this, we
    ∗ PERMUTE the spectrum of s − r by a random dilation σ, getting perm_σ(s − r). Observe that we can sample from the signal perm_σ(s − r) by modulating the (time-domain) samples of this signal, thus keeping our time and memory requirements low.
    ∗ FILTER perm_σ(s − r) by a filterbank of approximately m equally-spaced frequency-domain translations of the boxcar filter with bandwidth approximately N/m and approximately m common taps. Again, this procedure can be done on the (time-domain) samples within the advertised time and memory constraints. We obtain m new signals, some of which have a single overwhelming Fourier mode, σω, corresponding to a significant mode ω in s − r.
    ∗ We undo the above permutation, thereby making ω overwhelming instead of σω.
  – GROUP-TEST each new signal to locate the one overwhelming mode, ω. We learn the bits of ω one at a time, least to most significant. For example, to learn the least significant bit, we project each F_k onto the space of even frequencies and the space of odd frequencies. Next, we estimate (via our samples) the energy of each projection and learn the least significant bit of ω. Observe that we can carry out this projection because we have sampled our signal at a random point p and at p + N/2.
• ESTIMATE the Fourier coefficients of these "significant" frequencies by computing the Fourier coefficients of the sampled residual using an unequally-spaced fast Fourier transform algorithm and normalizing appropriately.
• ITERATE in a greedy pursuit.

The above discussion assumes that the signal s is a discrete vector of length N. A naïve use of this algorithm would sample a wideband analog signal at the Nyquist rate and then input this vector of discrete samples to the random sampling algorithm. This naïve use, however, does not take advantage of the strength of the algorithm, namely that we do not need to use all of the samples to recover information about the signal. Instead, the algorithm and its analysis suggest that we can subsample the wideband analog signal at an average rate that is significantly lower than the Nyquist rate and use those samples to recover significant information about the spectrum of the signal in considerably less time than it would take to perform an FFT on all of the samples (taken at the Nyquist rate). Application of the algorithm on local (boxcar) windows of the signal yields the sparsogram coefficients.

III. IMPLEMENTATION

The random sampling algorithm of Section II suggests that we can build a Random Analog to Digital Converter (RADC).
[Figure: RADC block diagram — the input signal x(t) enters the RADC (random sampling), producing low-rate samples y(n) that are passed to DSP for reconstruction, yielding x̃(t); the sampler itself is built from an analog DEMUX, registers, a MUX, and a low-rate ADC between its I/P and O/P.]
[Figure: four panels (a)–(d) — (a) a time-domain signal; (b) its spectrogram; (c) AAFFT sparsogram (samples = 2.8687%, run time = 0.792); (d) AAFFT error in the sparsogram. Panels (b)–(d) plot frequency bin versus time window.]

Fig. 6. Run time (y-axis) of the RADC/sparsogram algorithm grows sublinearly with the number of samples (x-axis, percentage of samples used).
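The dense reference against which such a sparsogram is compared is the ordinary spectrogram computed on boxcar windows. The sketch below is a minimal full-rate illustration (using complete FFTs, not the sub-Nyquist algorithm) on a hypothetical frequency-hopping test signal with one sinusoid per window.

```python
import numpy as np

def boxcar_spectrogram(x, win_len):
    """Spectrogram on non-overlapping boxcar windows: squared magnitude
    of the FFT of each length-win_len segment (non-negative bins only)."""
    n_win = len(x) // win_len
    frames = x[: n_win * win_len].reshape(n_win, win_len)
    return np.abs(np.fft.rfft(frames, axis=1)) ** 2  # (time window, frequency bin)

# Hypothetical frequency-hopping test signal: one sinusoid per window,
# hopping among integer frequency bins.
rng = np.random.default_rng(2)
win_len, n_win = 256, 12
hops = rng.integers(10, 100, size=n_win)   # hop frequency (bin index) per window
t = np.arange(win_len)
x = np.concatenate([np.cos(2 * np.pi * f * t / win_len) for f in hops])

S = boxcar_spectrogram(x, win_len)
peaks = S.argmax(axis=1)                   # dominant frequency bin per window
print(peaks)
```

Since each window contains a single on-bin sinusoid, the dominant bin in each column of the spectrogram recovers the hop sequence exactly, which is the structure the sparsogram aims to expose from far fewer samples.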
While these sampling rates are impressive, the algorithm is also computationally efficient. Fig. 6 shows that the run time of the algorithm varies with the number of samples and not with the Nyquist rate, while the run time of any FFT algorithm must grow (essentially) linearly with the Nyquist rate.

[Figure: two panels — (a) success probability vs. percentage of samples used for N = 67108864, 33554432, 16777216, and 8388608; (b) success probability vs. percentage of samples.]

Fig. 5. (a) Success probability vs. sample rate for varying bandwidth, fixed SNR signal. (b) Success probability vs. sample rate for fixed bandwidth, fixed SNR signal.

… from Texas Instruments. ACG and MJS were supported by NSF DMS 03546000. MJS and MI were supported by NSF DMS 0510203. ACG is an Alfred P. Sloan Fellow. For more information on random sampling and compressive sensing, see the website dsp.rice.edu/cs.

REFERENCES

[1] A. C. Gilbert, S. Muthukrishnan, and M. J. Strauss, "Improved time bounds for near-optimal sparse Fourier representation via sampling," in Proc. SPIE Wavelets XI, San Diego, 2005.
[2] S. R. Duncan, V. Gregers-Hansen, and J. P. McConnell, "A stacked analog-to-digital converter providing 100 dB of dynamic range," in IEEE International Radar Conference, May 2005, pp. 31–36.
[3] N. D. Dalt, "Effect of jitter on asynchronous sampling with finite number of samples," IEEE Transactions on Circuits and Systems II, vol. 51, pp. 660–664, Dec. 2004.
[4] T. Sato, R. Sakuma, D. Miyamori, and M. Fukaset, "Performance analysis of wave-pipelined LFSR," in IEEE International Symposium on Communications and Information Technology (ISCIT), Oct. 2004, pp. 694–699.
[5] M. Frigo and S. Johnson, "The design and implementation of FFTW3," Proc. IEEE, vol. 93, no. 2, pp. 216–231, 2005.