Digital Communication
UNIT I
Digital Transmission is the transmittal of digital signals between two or more points in a
communications system. The signals can be binary or any other form of discrete-level digital pulses.
Digital pulses cannot be propagated through a wireless transmission system such as Earth's
atmosphere or free space.
Alex H. Reeves developed the first digital transmission system in 1937 at the Paris
laboratories of AT&T for the purpose of carrying digitally encoded analog signals, such as the
human voice, over metallic wire cables between telephone offices.
Disadvantages
--Requires more bandwidth
--Additional encoding (A/D) and decoding (D/A) circuitry
Pulse Modulation
--Pulse modulation consists essentially of sampling analog information signals and then converting
those samples into discrete pulses and transporting the pulses from a source to a destination over a
physical transmission medium.
--PAM is used as an intermediate form of modulation with PSK, QAM, and PCM, although it is
seldom used by itself.
--PWM and PPM are used in special-purpose communications systems mainly for the military but are
seldom used for commercial digital transmission systems.
--PCM is by far the most prevalent form of pulse modulation and will be discussed in more detail.
--The analog-to-digital converter (ADC) converts the PAM samples to parallel PCM codes, which are
converted to serial binary data in the parallel-to-serial converter and then outputted onto the
transmission line as serial digital pulses.
--The transmission line repeaters are placed at prescribed distances to regenerate the digital pulses.
--In the receiver, the serial-to-parallel converter converts serial pulses received from the transmission
line to parallel PCM codes.
--The digital-to-analog converter (DAC) converts the parallel PCM codes to multilevel PAM signals.
--The hold circuit is basically a low-pass filter that converts the PAM signals back to their original
analog form.
PCM Sampling:
--The function of a sampling circuit in a PCM transmitter is to periodically sample the continually
changing analog input voltage and convert those samples to a series of constant- amplitude pulses that
can more easily be converted to binary PCM code.
--A sample-and-hold circuit is a nonlinear device (mixer) with two inputs: the sampling pulse and the
analog input signal.
--For the ADC to accurately convert a voltage to a binary code, the voltage must be relatively constant
so that the ADC can complete the conversion before the voltage level changes. If not, the ADC would
be continually attempting to follow the changes and may never stabilize on any PCM code.
--Essentially, there are two basic techniques used to perform the sampling function
1) natural sampling
2) flat-top sampling
--Natural sampling is when the tops of the sample pulses retain their natural shape during the sample
interval, making it difficult for an ADC to convert the sample to a PCM code.
--The most common method used for sampling voice signals in PCM systems is flat- top sampling,
which is accomplished in a sample-and-hold circuit.
-- The purpose of a sample-and-hold circuit is to periodically sample the continually changing analog
input voltage and convert those samples to a series of constant-amplitude PAM voltage levels.
Sampling Rate
--The Nyquist sampling theorem establishes the minimum Nyquist sampling rate (fs) that can be used
for a given PCM system.
--For a sample to be reproduced accurately in a PCM receiver, each cycle of the analog input signal
(fa) must be sampled at least twice.
--Consequently, the minimum sampling rate is equal to twice the highest audio input frequency.
--Mathematically, the minimum Nyquist sampling rate is:
fs ≥ 2fa
--If fs is less than two times fa, an impairment called alias or foldover distortion occurs.
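As a quick illustration, the following Python sketch (a hypothetical helper, not from the source) checks the Nyquist criterion and computes the alias (foldover) frequency fs − fa that appears when the criterion is violated:

    def nyquist_check(fa, fs):
        """Check the Nyquist criterion and report any alias frequency."""
        if fs >= 2 * fa:
            return "fs = %d Hz satisfies fs >= 2fa: no aliasing" % fs
        # Undersampling: an alias (foldover) component appears at fs - fa
        return "aliasing: a %d Hz input folds over to %d Hz" % (fa, fs - fa)

    print(nyquist_check(4000, 8000))   # minimum rate for a 4 kHz input
    print(nyquist_check(5000, 8000))   # undersampled: alias at 3000 Hz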
Quantization
--Quantization is the process of converting an infinite number of possibilities to a finite number of
conditions.
--Analog signals contain an infinite number of amplitude possibilities.
--Converting an analog signal to a PCM code with a limited number of combinations requires
quantization.
--With a folded binary code, each voltage level has one code assigned to it except zero volts, which
has two codes, 100 (+0) and 000 (-0).
--The magnitude difference between adjacent steps is called the quantization interval or quantum.
--For the code shown in Table 10-2, the quantization interval is 1 V.
--If the magnitude of the sample exceeds the highest quantization interval, overload distortion (also
called peak limiting) occurs.
--Assigning PCM codes to absolute magnitudes is called quantizing.
--The magnitude of a quantum is also called the resolution.
--The resolution is equal to the voltage of the minimum step size, which is equal to the voltage of the
least significant bit (Vlsb) of the PCM code.
--The smaller the magnitude of a quantum, the better (smaller) the resolution and the more accurately
the quantized signal will resemble the original analog sample.
--For a sample, the voltage at t3 is approximately +2.6 V. The folded PCM code is +2.6 V / 1 V = 2.6.
--There is no PCM code for +2.6; therefore, the magnitude of the sample is rounded off to the nearest
valid code, which is 111, or +3 V.
--The rounding-off process results in a quantization error of 0.4 V.
--The likelihood of a sample voltage being equal to one of the eight quantization levels is remote.
--Therefore, as shown in the figure, each sample voltage is rounded off (quantized) to the closest
available level and then converted to its corresponding PCM code.
--The rounded-off error is called the quantization error (Qe).
--To determine the PCM code for a particular sample voltage, simply divide the voltage by the
resolution, convert the quotient to an n-bit binary code, and then add the sign bit.
1) For the PCM coding scheme shown in Figure 10-8, determine the quantized voltage, quantization
error (Qe) and PCM code for the analog sample voltage of + 1.07 V.
A) To determine the quantized level, simply divide the sample voltage by the resolution and then round
the answer off to the nearest quantization level:
+1.07 V / 1 V = 1.07 ≈ 1
The quantization error is the difference between the original sample voltage and the quantized level, or
Qe = 1.07 − 1 = 0.07 V
From Table 10-2, the PCM code for + 1 is 101.
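This quantizing procedure can be sketched in a few lines of Python (a minimal illustration of the folded binary scheme; the 3-bit format with one sign bit and two magnitude bits is assumed from Table 10-2):

    def pcm_code(sample, resolution=1.0, n=2):
        """Folded binary PCM: sign bit followed by n magnitude bits."""
        sign = '1' if sample >= 0 else '0'        # folded binary: 1 = positive
        level = round(abs(sample) / resolution)   # round to nearest quantization level
        level = min(level, 2**n - 1)              # clip to avoid overload distortion
        qe = abs(abs(sample) - level * resolution)  # quantization error magnitude
        return sign + format(level, '0' + str(n) + 'b'), qe

    code, qe = pcm_code(+1.07)
    print(code, round(qe, 2))   # '101' and 0.07, matching the worked example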
Dynamic Range (DR): the dynamic range required of a system determines the number of PCM bits transmitted per sample.
-- Dynamic range is the ratio of the largest possible magnitude to the smallest possible magnitude
(other than zero) that can be decoded by the digital-to-analog converter in the receiver. Mathematically,
DR = Vmax/Vmin = Vmax/resolution = 2^n − 1
DR(dB) = 20 log(2^n − 1)
where DR = dynamic range (unitless)
Vmin = the quantum value (resolution)
Vmax = the maximum voltage magnitude of the DAC
n = number of bits in a PCM code (excluding the sign bit)
For n > 4, DR = 2^n − 1 ≈ 2^n
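As a numeric check, a small Python sketch computes the dynamic range for n bits and the minimum n for a required dynamic range (the 44 dB target below is an arbitrary illustrative value):

    import math

    def dr_db(n):
        """Dynamic range in dB for n magnitude bits (sign bit excluded)."""
        return 20 * math.log10(2**n - 1)

    def bits_needed(target_db):
        """Smallest n with 20*log10(2^n - 1) >= the target dynamic range in dB."""
        n = 1
        while dr_db(n) < target_db:
            n += 1
        return n

    print(round(dr_db(8), 1))   # 8 bits -> about 48.1 dB
    print(bits_needed(44))      # a 44 dB requirement needs 8 magnitude bits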
The maximum quantization noise is half the resolution. Therefore, the worst possible signal voltage-to-quantization noise voltage ratio (SQR) occurs when the input signal is at its minimum amplitude (101 or 001). Mathematically, the worst-case voltage SQR is
SQR = resolution/Qe = Vlsb/(Vlsb/2) = 2
For an input signal at maximum amplitude,
SQR(max) = Vmax/Qe
Because SQR(min) and SQR(max) differ, the SQR is not constant; it varies with the amplitude of the input signal. In terms of power, the signal power-to-quantization noise power ratio is
SQR(dB) = 10 log [ (v^2/R) / ((q^2/12)/R) ]
where v = rms signal voltage, q = quantization interval (resolution), and R = resistance.
Companding
--Companding is the process of compressing and then expanding.
--Higher-amplitude analog signals are compressed prior to transmission and then expanded in the receiver.
--Compressing the higher-amplitude signals improves the dynamic range of the system.
--Early PCM systems used analog companding, whereas modern systems use digital companding.
Analog companding
-- There are two methods of analog companding currently being used that closely approximate a
logarithmic function and are often called log-PCM codes.
The two methods are: 1) μ-law and
2) A-law
μ-law companding
Vout = Vmax · ln(1 + μ·Vin/Vmax) / ln(1 + μ)
A-law companding
--A-law is superior to μ-law in terms of small-signal quality:
y = A·|x| / (1 + ln A)            for 0 ≤ |x| ≤ 1/A
y = (1 + ln(A·|x|)) / (1 + ln A)  for 1/A ≤ |x| ≤ 1
where y = Vout
x = Vin / Vmax
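A minimal Python sketch of μ-law compression and expansion under these definitions (μ = 255 is the standard North American value; an A-law version would follow the same pattern):

    import math

    MU = 255.0  # standard North American mu-law parameter

    def mu_compress(vin, vmax=1.0):
        """mu-law: Vout = Vmax * ln(1 + mu*|Vin|/Vmax) / ln(1 + mu), sign preserved."""
        sign = 1 if vin >= 0 else -1
        return sign * vmax * math.log(1 + MU * abs(vin) / vmax) / math.log(1 + MU)

    def mu_expand(vout, vmax=1.0):
        """Inverse of mu_compress."""
        sign = 1 if vout >= 0 else -1
        return sign * (vmax / MU) * ((1 + MU) ** (abs(vout) / vmax) - 1)

    x = 0.01                       # low-amplitude sample
    y = mu_compress(x)
    print(round(y, 3))             # boosted to about 0.228 before quantizing
    print(round(mu_expand(y), 3))  # expands back to 0.01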
--With digital companding, the analog signal is first sampled and converted to a linear PCM code, and
then the linear code is digitally compressed.
-- In the receiver, the compressed PCM code is expanded and then decoded back to analog.
-- The most recent digitally compressed PCM systems use a 12- bit linear PCM code and an 8-bit
compressed PCM code.
Delta Modulation
--Delta modulation uses a single-bit PCM code to achieve digital transmission of analog signals.
--With conventional PCM, each code is a binary representation of both the sign and the magnitude of a
particular sample. Therefore, multiple-bit codes are required to represent the many values that the
sample can be.
--With delta modulation, rather than transmit a coded representation of the sample, only a single bit is
transmitted, which simply indicates whether that sample is larger or smaller than the previous sample.
--The algorithm for a delta modulation system is quite simple.
--If the current sample is smaller than the previous sample, a logic 0 is transmitted.
--If the current sample is larger than the previous sample, a logic 1 is transmitted.
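A minimal Python sketch of this one-bit algorithm (following the Unit II description, the comparison is made against the staircase approximation of the signal; the 0.1 step size is an arbitrary choice for the example):

    import math

    def delta_modulate(samples, step=0.1):
        """Encode samples as 1-bit decisions against a staircase approximation."""
        approx, bits = 0.0, []
        for s in samples:
            bit = 1 if s > approx else 0        # 1: input above staircase, 0: below
            approx += step if bit else -step    # staircase tracks the input
            bits.append(bit)
        return bits

    def delta_demodulate(bits, step=0.1):
        """Rebuild the staircase; a low-pass filter would smooth it in practice."""
        approx, out = 0.0, []
        for bit in bits:
            approx += step if bit else -step
            out.append(approx)
        return out

    signal = [math.sin(2 * math.pi * t / 20) for t in range(40)]
    bits = delta_modulate(signal)
    print(bits[:10])   # the staircase climbs with the rising sine: [0, 1, 1, 1, ...]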
Differential PCM
--With Differential Pulse Code Modulation (DPCM), the difference in the amplitude of two successive
samples is transmitted rather than the actual sample. Because the range of sample differences is
typically less than the range of individual samples, fewer bits are required for DPCM than
conventional PCM.
UNIT II
DELTA MODULATION
Delta modulation (DM or Δ-modulation) is an analog-to-digital and digital-to-analog signal
conversion technique used for transmission of voice information where quality is not of primary
importance. DM is the simplest form of differential pulse-code modulation (DPCM), where the
differences between successive samples are encoded into n-bit data streams. In delta modulation, the
transmitted data is reduced to a 1-bit data stream.
Its main features are:
- each segment of the approximated signal is compared to the original analog wave to determine the increase or decrease in relative amplitude;
- the decision process for establishing the state of successive bits is determined by this comparison;
- only the change of information is sent, that is, only an increase or decrease of the signal amplitude from the previous sample is sent, whereas a no-change condition causes the modulated signal to remain at the same 0 or 1 state of the previous sample.
To achieve high signal-to-noise ratio, delta modulation must use oversampling techniques, that is, the
analog signal is sampled at a rate several times higher than the Nyquist rate.
Derived forms of delta modulation are continuously variable slope delta modulation, delta-sigma
modulation, and differential modulation. Differential pulse-code modulation is the superset of DM.
Principle
Rather than quantizing the absolute value of the input analog waveform, delta modulation quantizes
the difference between the current and the previous step, as shown in the below block diagram.
The modulator is realized with a quantizer that converts the difference between the input signal and the
average of the previous steps. In its simplest form, the quantizer can be realized with a comparator
referenced to 0 (a two-level quantizer), whose output is 1 or 0 according to whether the input signal is
positive or negative. It is also a bit-quantizer, as it quantizes only one bit at a time. The demodulator is
simply an integrator (like the one in the feedback loop) whose output rises or falls with each 1 or 0
received. The integrator itself constitutes a low-pass filter.
Transfer characteristics
The transfer characteristics of a delta modulated system follow a signum function, as it quantizes only
two levels and also one bit at a time.
The two sources of noise in delta modulation are "slope overload", when steps are too small to track
the original waveform, and "granularity", when steps are too large. But a 1971 study shows that slope
overload is less objectionable compared to granularity than one might expect based solely on SNR
measures.
Bit-rate
If the communication channel is of limited bandwidth, there is the possibility of interference in either
DM or PCM. Hence, DM and PCM operate at the same bit-rate.
ADM provides robust performance in the presence of bit errors, meaning error detection and correction
are not typically used in an ADM radio design; this allows for a reduction in host processor workload
(allowing a low-cost processor to be used).
Applications
A contemporary application of delta modulation includes, but is not limited to, recreating legacy
synthesizer waveforms. With the increasing availability of FPGAs and game-related ASICs, sample
rates are easily controlled so as to avoid slope overload and granularity issues. For example, the
C64DTV used a 32 MHz sample rate, providing ample dynamic range to recreate the SID output to
acceptable levels.
The original proposal in 1974 used a state-of-the-art 24 kbit/s delta modulator with a single integrator
and a Shindler compander modified for gain error recovery. This proved to have less than full phone-line
speech quality. In 1977, one engineer with two assistants in the IBM Research Triangle Park, NC
laboratory was assigned to improve the quality.
The final implementation replaced the integrator with a Predictor implemented with a two pole
complex pair low pass filter designed to approximate the long term average speech spectrum. The
theory was that ideally the integrator should be a predictor designed to match the signal spectrum. A
nearly perfect Shindler Compander replaced the modified version. It was found the modified
compander resulted in a less than perfect step size at most signal levels and the fast gain error recovery
increased the noise as determined by actual listening tests as compared to simple signal to noise
measurements. The final compander achieved a very mild gain error recovery due to the natural
truncation rounding error caused by twelve bit arithmetic.
The complete function of Delta Modulation, VAC and Echo Control for six ports was implemented in
a single digital integrated circuit chip with twelve bit arithmetic. A single DAC was shared by all six
ports providing voltage compare functions for the modulators and feeding sample and hold circuits for
the demodulator outputs. A single card held the chip, DAC and all the analog circuits for the phone
line interface including transformers.
UNIT III
DIGITAL MODULATION
Digital modulation methods
In digital modulation, an analog carrier signal is modulated by a discrete signal. Digital modulation
methods can be considered as digital-to-analog conversion, and the corresponding demodulation or
detection as analog-to-digital conversion. The changes in the carrier signal are chosen from a finite
number of M alternative symbols (the modulation alphabet).
A simple example: A telephone line is designed for transferring audible sounds, for example tones,
and not digital bits (zeros and ones). Computers may however communicate over a telephone line by
means of modems, which are representing the digital bits by tones, called symbols. If there are four
alternative symbols (corresponding to a musical instrument that can generate four different tones, one
at a time), the first symbol may represent the bit sequence 00, the second 01, the third 10 and the
fourth 11. If the modem plays a melody consisting of 1000 tones per second, the symbol rate is 1000
symbols/second, or baud. Since each tone (i.e., symbol) represents a message consisting of two digital
bits in this example, the bit rate is twice the symbol rate, i.e. 2000 bits per second. This is similar to
the technique used by dialup modems as opposed to DSL modems.
According to one definition of digital signal, the modulated signal is a digital signal, and according to
another definition, the modulation is a form of digital-to-analog conversion. Most textbooks would
consider digital modulation schemes as a form of digital transmission, synonymous to data
transmission; very few would consider it as analog transmission.
Fundamental digital modulation methods
The most fundamental digital modulation techniques are based on keying:
PSK (phase-shift keying): a finite number of phases are used.
FSK (frequency-shift keying): a finite number of frequencies are used.
ASK (amplitude-shift keying): a finite number of amplitudes are used.
QAM (quadrature amplitude modulation): a finite number of at least two phases and at least
two amplitudes are used.
In what follows:
Tb = bit duration
Ts = symbol duration
N0 = noise power spectral density (W/Hz)
Pb = probability of bit-error
Ps = probability of symbol-error
Q(x) gives the probability that a single sample taken from a random process with zero-mean and
unit-variance Gaussian probability density function will be greater than or equal to x. It is a scaled form of
the complementary Gaussian error function:
Q(x) = (1/2)·erfc(x/√2)
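In Python, Q(x) can be evaluated with the standard library's complementary error function (this small helper is reused in the error-rate examples further below):

    import math

    def qfunc(x):
        """Gaussian Q-function: P(N(0,1) >= x) = 0.5 * erfc(x / sqrt(2))."""
        return 0.5 * math.erfc(x / math.sqrt(2))

    print(round(qfunc(0.0), 3))   # 0.5: half the distribution lies above the mean
    print(round(qfunc(3.0), 5))   # 0.00135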
The error-rates quoted here are those in additive white Gaussian noise (AWGN). These error rates are
lower than those computed in fading channels, hence, are a good theoretical benchmark to compare
with.
In QAM, an inphase signal (or I, with one example being a cosine waveform) and a quadrature phase
signal (or Q, with an example being a sine wave) are amplitude modulated with a finite number of
amplitudes, and then summed. It can be seen as a two-channel system, each channel using ASK. The
resulting signal is equivalent to a combination of PSK and ASK.
In all of the above methods, each of these phases, frequencies or amplitudes are assigned a unique
pattern of binary bits. Usually, each phase, frequency or amplitude encodes an equal number of bits.
This number of bits comprises the symbol that is represented by the particular phase, frequency or
amplitude.
If the alphabet consists of M = 2^N alternative symbols, each symbol represents a message consisting
of N bits. If the symbol rate (also known as the baud rate) is fS symbols/second (or baud), the data
rate is N·fS bit/second.
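For example, the four-tone modem described above, as a trivial check of the N·fS relation:

    import math

    M = 4              # alternative symbols (tones)
    N = math.log2(M)   # bits per symbol -> 2.0
    fS = 1000          # symbol rate in symbols/second (baud)
    print(N * fS)      # data rate: 2000.0 bit/s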
In the case of PSK, ASK or QAM, where the carrier frequency of the modulated signal is constant, the
modulation alphabet is often conveniently represented on a constellation diagram, showing the
amplitude of the I signal at the x-axis, and the amplitude of the Q signal at the y-axis, for each symbol.
Phase-shift keying
Phase-shift keying (PSK) is a digital modulation scheme that conveys data by changing, or modulating,
the phase of a reference signal (the carrier wave). Any digital modulation scheme uses a finite number
of distinct signals to represent digital data. PSK uses a finite number of phases, each assigned a unique
pattern of binary digits. Usually, each phase encodes an equal number of bits. Each pattern of bits
forms the symbol that is represented by the particular phase. The demodulator, which is designed
specifically for the symbol-set used by the modulator, determines the phase of the received signal and
maps it back to the symbol it represents, thus recovering the original data. This requires the receiver to
be able to compare the phase of the received signal to a reference signal; such a system is termed
coherent (and referred to as CPSK).
Alternatively, instead of operating with respect to a constant reference wave, the broadcast can operate
with respect to itself. Changes in phase of a single broadcast waveform can be considered the
significant items. In this system, the demodulator determines the changes in the phase of the received
signal rather than the phase (relative to a reference wave) itself. Since this scheme depends on the
difference between successive phases, it is termed differential phase-shift keying (DPSK). DPSK can
be significantly simpler to implement than ordinary PSK, since there is no need for the demodulator to
have a copy of the reference signal to determine the exact phase of the received signal (it is a
non-coherent scheme). In exchange, it produces more erroneous demodulations.
BPSK (also sometimes called PRK, phase reversal keying, or 2PSK) is the simplest form of phase-shift
keying (PSK). It uses two phases which are separated by 180° and so can also be termed 2-PSK.
It does not particularly matter exactly where the constellation points are positioned, and in this figure
they are shown on the real axis, at 0° and 180°. This modulation is the most robust of all the PSKs,
since it takes the highest level of noise or distortion to make the demodulator reach an incorrect
decision. It is, however, only able to modulate at 1 bit/symbol (as seen in the figure) and so is
unsuitable for high data-rate applications.
In the presence of an arbitrary phase-shift introduced by the communications channel, the demodulator
is unable to tell which constellation point is which. As a result, the data is often differentially encoded
prior to modulation. BPSK is functionally equivalent to 2-QAM modulation.
This yields two phases, 0 and π. In the specific form, binary data is often conveyed with the following
signals:
s0(t) = √(2Eb/Tb) · cos(2πfct + π) = −√(2Eb/Tb) · cos(2πfct)   for binary "0"
s1(t) = √(2Eb/Tb) · cos(2πfct)                                  for binary "1"
where fc is the frequency of the carrier wave.
This assignment is, of course, arbitrary. The use of this basis function is shown at the end of the next section in a signal
timing diagram. The topmost signal is a BPSK-modulated cosine wave that the BPSK modulator
would produce. The bit-stream that causes this output is shown above the signal (the other parts of this
figure are relevant only to QPSK).
The bit error rate (BER) of BPSK in AWGN is
Pb = Q(√(2Eb/N0))  or equivalently  Pe = (1/2)·erfc(√(Eb/N0))
Since there is only one bit per symbol, this is also the symbol error rate.
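Evaluating this BER numerically in Python (the 9.6 dB operating point below is an illustrative value often quoted for a BER near 10^-5):

    import math

    def bpsk_ber(ebn0_db):
        """BPSK bit error rate in AWGN: Pb = 0.5 * erfc(sqrt(Eb/N0))."""
        ebn0 = 10 ** (ebn0_db / 10)              # dB -> linear ratio
        return 0.5 * math.erfc(math.sqrt(ebn0))

    print("%.1e" % bpsk_ber(9.6))   # about 1e-05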
Quadrature phase-shift keying (QPSK)
Sometimes this is known as quaternary PSK, quadriphase PSK, 4-PSK, or 4-QAM. (Although the root
concepts of QPSK and 4-QAM are different, the resulting modulated radio waves are exactly the
same.) QPSK uses four points on the constellation diagram, equispaced around a circle. With four
phases, QPSK can encode two bits per symbol, shown in the diagram with Gray coding to minimize
the bit error rate (BER), which is sometimes misperceived as twice the BER of BPSK.
The mathematical analysis shows that QPSK can be used either to double the data rate compared with
a BPSK system while maintaining the same bandwidth of the signal, or to maintain the data-rate of
BPSK but halving the bandwidth needed. In this latter case, the BER of QPSK is exactly the same as
the BER of BPSK, and believing differently is a common confusion when considering or describing
QPSK.
Given that radio communication channels are allocated by agencies such as the Federal
Communications Commission giving a prescribed (maximum) bandwidth, the advantage of QPSK over
BPSK becomes evident: QPSK transmits twice the data rate in a given bandwidth compared to BPSK
at the same BER. The engineering penalty that is paid is that QPSK transmitters and receivers are
more complicated than the ones for BPSK. However, with modern electronics technology, the penalty
in cost is very moderate.
As with BPSK, there are phase ambiguity problems at the receiving end, and differentially encoded
QPSK is often used in practice.
The implementation of QPSK is more general than that of BPSK and also indicates the
implementation of higher-order PSK. Writing the symbols in the constellation diagram in terms of the
sine and cosine waves used to transmit them:
sn(t) = √(2Es/Ts) · cos(2πfct + (2n − 1)π/4),  n = 1, 2, 3, 4
This yields the four phases π/4, 3π/4, 5π/4 and 7π/4 as needed.
The first basis function, φ1(t) = √(2/Ts) · cos(2πfct), is used as the in-phase component of the signal
and the second, φ2(t) = √(2/Ts) · sin(2πfct), as the quadrature component of the signal.
The factors of 1/2 indicate that the total power is split equally between the two carriers.
Comparing these basis functions with that for BPSK shows clearly how QPSK can be viewed as two
independent BPSK signals. Note that the signal-space points for BPSK do not need to split the symbol
(bit) energy over the two carriers in the scheme shown in the BPSK constellation diagram.
The even (or odd) bits are used to modulate the in-phase component of the carrier, while the odd (or
even) bits are used to modulate the quadrature-phase component of the carrier. BPSK is used on both
carriers and they can be independently demodulated.
As a result, the probability of bit-error for QPSK is the same as for BPSK:
Pb = Q(√(2Eb/N0))
However, in order to achieve the same bit-error probability as BPSK, QPSK uses twice the power
(since two bits are transmitted simultaneously).
If the signal-to-noise ratio is high (as is necessary for practical QPSK systems), the probability of
symbol error may be approximated:
Ps ≈ 2·Q(√(Es/N0))
Frequency-shift keying
The demodulation of a binary FSK signal can be done using the Goertzel algorithm very efficiently,
even on low-power microcontrollers.
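As an illustration, here is a minimal Goertzel sketch in Python that decides which of two FSK tones is present in a block of samples (the tone frequencies and sample rate are illustrative, borrowed from the Bell 103 originate pair):

    import math

    def goertzel_power(samples, freq, fs):
        """Signal power near one frequency via the Goertzel recurrence."""
        w = 2 * math.pi * freq / fs
        coeff = 2 * math.cos(w)
        s1 = s2 = 0.0
        for x in samples:
            s1, s2 = x + coeff * s1 - s2, s1
        return s1 * s1 + s2 * s2 - coeff * s1 * s2

    fs = 8000                      # sample rate, Hz
    f0, f1 = 1070, 1270            # Bell 103 originate space/mark tones
    tone = [math.sin(2 * math.pi * f1 * n / fs) for n in range(160)]
    bit = 1 if goertzel_power(tone, f1, fs) > goertzel_power(tone, f0, fs) else 0
    print(bit)                     # 1: the f1 (mark) tone dominates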
Claude Shannon published a landmark paper in 1948 that was the beginning of the
branch of information theory.
We are interested in communicating information from a source to a destination
In our case, the messages will be a sequence of binary digits
Does anyone know the term for a binary digit?
One detail that makes communicating difficult is noise: noise introduces uncertainty.
Suppose I wish to transmit one bit of information; what are all of the possibilities?
tx 0, rx 0 - good
tx 0, rx 1 - error
tx 1, rx 0 - error
tx 1, rx 1 - good
Two of the cases above have errors; this is where probability fits into the picture.
In the case of steganography, the noise may be due to attacks on the hiding
algorithm
Claude Shannon introduced the idea of self-information.
Suppose we have an event X, where Xj represents a particular outcome of the event with
probability Pj = P(Xj). The self-information of that outcome is
I(Xj) = lg(1/Pj) = −lg Pj
where lg denotes the base-2 logarithm.
Consider flipping a fair coin, there are two equiprobable outcomes:
say X0 = heads, P0 = 1/2, X1 = tails, P1 = 1/2
The amount of self-information for any single result is 1 bit
In other words, the number of bits required to communicate the result of the event
is 1 bit. When outcomes are equally likely, there is a lot of information in the
result. The higher the likelihood of a particular outcome, the less information that
outcome conveys. However, if the coin is biased such that it lands with heads up
99% of the time, there is not much information conveyed when we flip the coin
and it lands on heads.
Suppose we have an event X, where Xj represents a particular outcome of the
event. Consider flipping a coin; however, let's say there are 3 possible outcomes:
heads (P = 0.49), tails (P = 0.49), lands on its side (P = 0.02) (likely MUCH
higher than in reality).
Note: the total probability MUST ALWAYS add up to one
The amount of self-information I(Xj) = lg(1/Pj) for either a head or a tail is lg(1/0.49) ≈ 1.03 bits.
For landing on its side: lg(1/0.02) ≈ 5.6 bits.
Entropy is the measurement of the average uncertainty of information
We will skip the proofs and background that leads us to the formula for
entropy, but it was derived from required properties
Also, keep in mind that this is a simplified explanation
H = entropy
P = probability
X = random variable with a discrete set of possible outcomes
(X0, X1, X2, ..., Xn−1) where n is the total number of possibilities
Entropy: H(X) = − Σj Pj lg Pj = Σj Pj lg(1/Pj),  summing over j = 0 to n − 1
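A short Python sketch computing self-information and entropy for the biased three-outcome coin above:

    import math

    def self_info(p):
        """Self-information in bits: I = lg(1/p)."""
        return math.log2(1 / p)

    def entropy(probs):
        """Average uncertainty in bits: H = sum of p * lg(1/p)."""
        return sum(p * math.log2(1 / p) for p in probs)

    probs = [0.49, 0.49, 0.02]        # heads, tails, lands on its side
    print(round(self_info(0.49), 2))  # 1.03 bits
    print(round(self_info(0.02), 2))  # 5.64 bits
    print(round(entropy(probs), 2))   # about 1.12 bits on average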
What to do if we have a noisy channel and you want to send information across
reliably?
Information Capacity Theorem (Shannon Limit)
The information capacity (or channel capacity) C of a continuous channel with
bandwidth B Hertz, perturbed by additive white Gaussian noise of power
spectral density N0/2, is
C = B·log2(1 + P/(N0·B)) bits/sec
where P is the average transmitted power, P = Eb·Rb (for an ideal system, Rb = C),
Eb is the transmitted energy per bit, and Rb is the transmission rate.
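For example, evaluating this in Python for a telephone-grade channel (the 3.1 kHz bandwidth and 30 dB SNR are illustrative values, with P/(N0·B) taken as the signal-to-noise ratio):

    import math

    def capacity(bandwidth_hz, snr_db):
        """Shannon capacity: C = B * log2(1 + S/N) in bits per second."""
        snr = 10 ** (snr_db / 10)    # dB -> linear power ratio
        return bandwidth_hz * math.log2(1 + snr)

    print(round(capacity(3100, 30)))   # about 30898 bit/s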
Sources of channel errors include equipment failure and lightning interference.
Source Coding
lossy; may consider semantics of the data
depends on characteristics of the data
e.g. DCT, DPCM, ADPCM, color model transform
A code is
distinct if each code word can be distinguished from every other (mapping is
one-to-one)
uniquely decodable if every code word is identifiable when immersed in a
sequence of code words
e.g., with the previous table, the message 11 could be decoded as either ddddd or bbbbbb
Measure of Information
Consider symbols si and the probability of occurrence of each symbol p(si).
Assign a '1' to the first probability class, and a '0' to the second; repeating this split within each class yields the codes below:
Character  Probability  Code
X6         0.25         11
X3         0.2          10
X4         0.15         011
X5         0.15         010
X1         0.1          001
X7         0.1          0001
X2         0.05         0000
Huffman Encoding
Statistical encoding
To determine the Huffman code, it is useful to construct a binary tree:
Leaves are characters to be encoded
Nodes carry occurrence probabilities of the characters belonging to the subtree
Example: What does a Huffman code look like for symbols with statistical
symbol occurrence probabilities:
P(A) = 8/20, P(B) = 3/20, P(C) = 7/20, P(D) = 2/20?
Step 1: Sort all symbols according to their probabilities (left to right) from smallest
to largest; these are the leaves of the Huffman tree.
Step 2: Build a binary tree from left to right.
Policy: always connect the two smallest nodes together (e.g., P(CE) and P(DA) both had
probabilities that were smaller than P(B), hence those two connected first).
Step 3: Label left branches of the tree with 0 and right branches of the tree with 1.
Step 4: Create the Huffman code (a code sketch follows this list):
Symbol A = 011
Symbol B = 1
Symbol C = 000
Symbol D = 010
Symbol E = 001
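A compact Python sketch of this bottom-up construction using a priority queue; the probabilities below are illustrative values chosen so the result reproduces the code words listed above:

    import heapq

    def huffman(probs):
        """Build a Huffman code table from {symbol: probability}."""
        # Each heap entry: (probability, tie-breaker, {symbol: code-so-far})
        heap = [(p, i, {s: ''}) for i, (s, p) in enumerate(sorted(probs.items()))]
        heapq.heapify(heap)
        while len(heap) > 1:
            p0, _, c0 = heapq.heappop(heap)   # the two smallest nodes connect first
            p1, i, c1 = heapq.heappop(heap)
            merged = {s: '0' + c for s, c in c0.items()}
            merged.update({s: '1' + c for s, c in c1.items()})
            heapq.heappush(heap, (p0 + p1, i, merged))
        return heap[0][2]

    probs = {'A': 0.16, 'B': 0.51, 'C': 0.09, 'D': 0.13, 'E': 0.11}
    print(huffman(probs))   # A=011, B=1, C=000, D=010, E=001 as listed above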
Hamming codes
Hamming [7,4] Code
The seven is the number of digits that make the code.
E.g. 0100101
The four is the number of information digits in the code.
E.g. 0100101
Encoded with a generator matrix. All code words can be formed from row operations on the
matrix. The code generator matrix for this presentation is the following:

G = [ I(k x k) : P(k x (n-k)) ] =

1 0 0 0 0 1 1
0 1 0 0 1 0 1
0 0 1 0 1 1 0
0 0 0 1 1 1 1

Encoding every 4-bit message m as c = mG gives 2^4 = 16 code words out of the 2^7 = 128 possible
7-bit words:

0000000  0001111  0010110  0011001
0100101  0101010  0110011  0111100
1000011  1001100  1010101  1011010
1100110  1101001  1110000  1111111
The distance between two codes u and v is the number of positions in which they differ,
e.g.
u = (1,0,0,0,0,1,1)
v = (0,1,0,0,1,0,1)
dist(u,v) = 4
Note that dist(u,u) = 0.
For each generator matrix G, there exists an (n − k) x n matrix H, such that the
rows of G are orthogonal to the rows of H; i.e.,
G·H^T = 0
where H^T is the transpose of H, and 0 is a k x (n − k) all-zeros matrix.
The matrix H is called the parity-check matrix; it can be used to decode the
received code words.
    0 1 1 1 1 0 0
H = 1 0 1 1 0 1 0
    1 1 0 1 0 0 1

i.e., H = [ P^T(3 x 4) : I(3 x 3) ]
Channel Decoding
Syndrome Decoding
Consider a transmitted code word cm; if y is the received sequence, y can be expressed as
y = cm + e
where e denotes the binary error vector. The decoder calculates the product (syndrome)
s = y·H^T = (cm + e)·H^T = cm·H^T + e·H^T = e·H^T
since cm·H^T = 0 (because G·H^T = 0).
Syndrome example
With the generator matrix G and parity-check matrix H given above, suppose the received word is
y = (1 0 0 0 1 0 1)
Then
syn(y) = H·y^T = (1 1 0)^T
This syndrome equals the third column of H, so the error is in bit 3; complementing that bit yields
the corrected code word 1010101.
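The example can be reproduced with a short Python sketch using the G and H above (mod-2 arithmetic throughout):

    # Hamming [7,4]: G = [I4 : P], H = [P^T : I3]
    G = [[1,0,0,0,0,1,1],
         [0,1,0,0,1,0,1],
         [0,0,1,0,1,1,0],
         [0,0,0,1,1,1,1]]
    H = [[0,1,1,1,1,0,0],
         [1,0,1,1,0,1,0],
         [1,1,0,1,0,0,1]]

    def encode(m):
        """c = m*G (mod 2)."""
        return [sum(m[i] * G[i][j] for i in range(4)) % 2 for j in range(7)]

    def syndrome(y):
        """s = H*y^T (mod 2)."""
        return [sum(H[r][j] * y[j] for j in range(7)) % 2 for r in range(3)]

    def correct(y):
        """Flip the bit whose H-column matches the syndrome, if any."""
        s = syndrome(y)
        if any(s):
            cols = [[H[r][j] for r in range(3)] for j in range(7)]
            y[cols.index(s)] ^= 1
        return y

    print(encode([1, 0, 1, 0]))       # message 1010 -> code word 1010101
    y = [1, 0, 0, 0, 1, 0, 1]         # received word from the example
    print(syndrome(y))                # [1, 1, 0]: third column of H
    print(correct(y))                 # corrected to code word 1010101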
A code of minimum weight d is called perfect if all the vectors in V are contained in
the spheres of radius t = ⌊(d − 1)/2⌋ about the code words.
The Hamming [7,4] code has eight vectors in the sphere of radius one about each code word, times
sixteen unique code words. Therefore, the Hamming [7,4] code with minimum
weight 3 is perfect, since all 128 vectors are contained in the spheres of radius 1 (16 x 8 = 128).
Block Codes
Information is divided into blocks of length k.
r parity bits or check bits are added to each block (total length n = k + r).
Code rate R = k/n.
The decoder looks for the codeword closest to the received vector (code vector + error vector).
Tradeoffs exist between:
Efficiency
Reliability
Encoding/Decoding complexity
Block Codes: Linear Block Codes
Linear Block Code
The code word c(x) or C of the linear block code is
c(x) = m(x)·g(x)  or  C = m·G
where m(x) or m is the information block of length k, g(x) is the generator polynomial, and
G is the generator matrix,
G = [P | I]
where pi = remainder of [x^(n-k+i-1)/g(x)] for i = 1, 2, ..., k, and I is the unit matrix.
The parity check matrix is
H = [P^T | I], where P^T is the transpose of the matrix P.
Encoding and error checking (originally shown as a block diagram):
message vector m  x  generator matrix G      ->  code vector C
code vector C     x  parity check matrix H^T ->  null vector 0
The parity check matrix H is used to detect errors in the received code by using the
fact that c·H^T = 0 (the null vector).
Let x = c + e be the received message, where c is the correct code and e is the error.
Compute
S = x·H^T = (c + e)·H^T = c·H^T + e·H^T = e·H^T
If S is 0 then the message is correct; otherwise there are errors in it, and from commonly known error
patterns the correct message can be decoded.
Block Codes: Example
Example: Find the linear block code encoder G if the code generator polynomial is
g(x) = 1 + x + x^3 for a (7, 4) code.
We have n = total number of bits = 7, k = number of information bits = 4, and
pi = remainder of [x^(n-k+i-1)/g(x)] for i = 1, 2, ..., k. Dividing by g(x) (mod 2):
p1 = remainder of x^3/g(x) = 1 + x        -> (1 1 0)
p2 = remainder of x^4/g(x) = x + x^2      -> (0 1 1)
p3 = remainder of x^5/g(x) = 1 + x + x^2  -> (1 1 1)
p4 = remainder of x^6/g(x) = 1 + x^2      -> (1 0 1)
so that
G = [P | I] =
1 1 0 1 0 0 0
0 1 1 0 1 0 0
1 1 1 0 0 1 0
1 0 1 0 0 0 1
Cyclic Codes
A cyclic code is a block code which uses a shift register to perform encoding and decoding.
The code word with n bits is expressed as
c(x) = c1·x^(n-1) + c2·x^(n-2) + ... + cn
where each ci is either a 1 or a 0, and
c(x) = m(x)·x^(n-k) + cp(x)
where cp(x) = remainder from dividing m(x)·x^(n-k) by the generator g(x).
If the received signal is c(x) + e(x), where e(x) is the error, then to check whether the received
signal is error free, the remainder from dividing c(x) + e(x) by g(x) is obtained (the syndrome).
If this is 0, the received signal is considered error free; otherwise the error pattern is
detected from known error syndromes.
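A bitwise Python sketch of this division check (the generator below encodes g(x) = 1 + x + x^3 from the earlier (7, 4) example; the message bits are illustrative):

    def crc_remainder(bits, gen):
        """Remainder of bits(x) / g(x) over GF(2); lists of 0/1, MSB first."""
        data = list(bits)
        for i in range(len(data) - len(gen) + 1):
            if data[i]:                      # XOR the generator in under each leading 1
                for j in range(len(gen)):
                    data[i + j] ^= gen[j]
        return data[-(len(gen) - 1):]        # the last n-k bits are the remainder

    g = [1, 0, 1, 1]                      # g(x) = x^3 + x + 1
    m = [1, 0, 0, 1]                      # message m(x) = x^3 + 1 (illustrative)
    cp = crc_remainder(m + [0, 0, 0], g)  # remainder of m(x) * x^(n-k)
    codeword = m + cp                     # systematic code word
    print(cp)                             # [1, 1, 0]
    print(crc_remainder(codeword, g))     # [0, 0, 0]: checks as error free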
Cyclic Redundancy Check (CRC)
Using parity, some errors are masked; careful choice of bit combinations can lead to
better detection.
Binary (n, k) CRC codes can detect the following error patterns:
1. All error bursts of length n − k or less.
2. All combinations of dmin − 1 (where dmin is the minimum Hamming distance) or fewer errors.
3. All error patterns with an odd number of errors, if the generator polynomial g(x) has
an even number of nonzero coefficients.
Common CRC Codes

Code        Generator polynomial g(x)          Parity check bits
CRC-12      1 + x + x^2 + x^3 + x^11 + x^12    12
CRC-16      1 + x^2 + x^15 + x^16              16
CRC-CCITT   1 + x^5 + x^12 + x^16              16