
Turbo code

Sylvie Keroudan and Claude Berrou (2010), Scholarpedia, 5(4):6496. doi:10.4249/scholarpedia.6496, revision #91891

Curator and Contributors

Dr. Sylvie Keroudan, Associate Professor, ENST Bretagne, France
Dr. Claude Berrou, ENST Bretagne, France
Turbo codes are error-correcting codes with performance close to the Shannon theoretical limit
[SHA]. These codes were invented at ENST Bretagne (now TELECOM Bretagne), France, in the
early 1990s [BER]. The encoder is formed by the parallel concatenation of two
convolutional codes separated by an interleaver or permuter. An iterative process through the two
corresponding decoders is used to decode the data received from the channel. Each elementary
decoder passes to the other soft (probabilistic) information about each bit of the sequence to decode.
This soft information, called extrinsic information, is updated at each iteration.

Figure 1: Concatenated encoder and decoder

Contents
1 Precursor
2 The genesis of turbo codes
3 Turbo-decoding
4 Applications of turbo codes
5 References
6 Further reading
7 See also

Precursor
In the 60s, Forney [FOR] introduced the concept of concatenation to obtain coding and decoding
schemes with high error-correction capacity. Typically, the inner encoder is a convolutional code and
the inner decoder, using the Viterbi algorithm, is able to process soft information, that is,
probabilities or logarithms of probabilities in practice. The outer encoder is a block encoder, typically
a Reed-Solomon encoder, and its associated decoder works with the binary decisions supplied by the
inner decoder, as shown in Figure 1. As the inner decoder may deliver errors in bursts, the role
of the deinterleaver is to spread these errors so as to make the outer decoding more efficient.
Though the minimum Hamming distance d_min is very large, the performance of such concatenated
schemes is not optimal, for two reasons. First, some amount of information is lost due to the inability
of the inner decoder to provide the outer decoder with soft information. Second, while the outer decoder
benefits from the work of the inner one, the converse is not true. The decoder operation is
clearly asymmetric.

To allow the inner decoder to produce soft decisions instead of binary decisions, modified versions of
the Viterbi algorithm (SOVA: Soft-Output Viterbi algorithm) were proposed by Battail [BAT] and
Hagenauer & Hoeher [HAG]. But soft inputs are not easy to handle in a Reed-Solomon decoder.

The genesis of turbo codes


The invention of turbo codes originated in the desire to compensate for the asymmetry of the
concatenated decoder of Figure 1. To do this, the concept of feedback, a well-known technique in
electronics, is implemented between the two component decoders (Figure 2).

Figure 2: Decoding the concatenated code with feedback

The use of feedback requires the existence of Soft-In/Soft-Out (SISO) decoding algorithms for both
component codes. As the SOVA algorithm was already available at the time of the invention, the
adoption of convolutional codes appeared natural for both codes. For reasons of bandwidth
efficiency, serial concatenation is replaced with parallel concatenation. Indeed, parallel
concatenation combining two codes with rates R1 and R2 gives a global rate R_p equal to:
R_p = R1 R2 / (1 - (1 - R1)(1 - R2))

This rate is higher than that of a serially concatenated code, which is:

R_s = R1 R2

for the same values of R1 and R2, and the lower these rates, the larger the difference. Thus, with the

same performance of component codes, parallel concatenation offers a better global rate, but this
advantage is lost when the rates come close to unity. Furthermore, in order to ensure a sufficiently
large d_min for the concatenated code, classical non-systematic non-recursive convolutional codes
(Figure 3.a) have to be replaced with recursive systematic convolutional (RSC) codes (Figure 3.b).
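The rate comparison above can be checked numerically. A minimal sketch in plain Python, with the two component rates as free parameters:

```python
def parallel_rate(r1, r2):
    """Global rate of a parallel concatenation sharing its systematic bits:
    R_p = R1*R2 / (1 - (1 - R1)*(1 - R2))."""
    return r1 * r2 / (1 - (1 - r1) * (1 - r2))

def serial_rate(r1, r2):
    """Global rate of a serial concatenation: R_s = R1*R2."""
    return r1 * r2

# Two rate-1/2 component codes, the classical turbo-code setting:
# the parallel scheme gives R_p = 1/3 (the natural turbo-code rate),
# while the serial scheme gives R_s = 1/4.
rp = parallel_rate(0.5, 0.5)
rs = serial_rate(0.5, 0.5)
```

As stated above, the gap widens at lower rates (for instance R1 = R2 = 1/4 gives R_p of about 0.143 versus R_s = 0.0625) and vanishes as the rates approach unity.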

Figure 3: (A) Non-systematic non-recursive convolutional code with polynomials 13, 15. (B) Recursive
systematic convolutional (RSC) code with polynomials 13 (recursivity), 15 (parity)
Figure 4: A turbo code with component codes 13, 15

What distinguishes the two codes is the minimum input weight w_min. The input weight w is the
number of 1s in an input sequence. Suppose that the encoder of Figure 3.a is initialized in state 0,
then fed with an all-zero sequence, except in one place (that is, w = 1). The encoder returns to state
0 as soon as the fourth 0 following the 1 appears at the input. We then have w_min = 1. Under the same
conditions, the encoder of Figure 3.b needs a second 1 to return to state 0. Without this second 1, this
encoder acts as a pseudo-random generator with respect to its output Y. So w_min = 2, and this
property is very favourable regarding d_min when parallel concatenation is implemented. A typical
turbo code is depicted in Figure 4. The data are encoded both in the natural order and in a permuted
order by two RSC codes C1 and C2 that issue parity bits Y1 and Y2. In order to encode finite-length
blocks of data, RSC encoding is terminated by tail bits or uses tail-biting termination. The
permutation has to be devised carefully because it has a strong impact on d_min. The natural coding
rate of a turbo code is R = 1/3 (three output bits for one input bit). To achieve higher coding rates,
the parity bits are punctured. For instance, transmitting Y1 and Y2 alternately leads to R = 1/2.
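The w_min argument can be reproduced in a few lines. The sketch below assumes one common register/tap convention for the octal polynomials 13 (recursion) and 15 (parity); the exact wiring of Figure 3 may differ, but the conclusion — a weight-1 input never brings the recursive encoder back to state 0 — holds either way:

```python
def rsc_states(bits):
    """Successive register states of an RSC encoder with recursion
    polynomial 13 and parity polynomial 15 (octal), under one common
    tap convention: a_k = d_k ^ a_{k-2} ^ a_{k-3}."""
    s1 = s2 = s3 = 0
    states = []
    for d in bits:
        a = d ^ s2 ^ s3          # feedback taps from the recursion polynomial
        y = a ^ s1 ^ s3          # parity output (polynomial 15); unused here
        s1, s2, s3 = a, s1, s2
        states.append((s1, s2, s3))
    return states

n = 30
w1 = [0] * n
w1[3] = 1                        # input weight w = 1
w2 = list(w1)
w2[3 + 7] = 1                    # a second 1, one state-cycle period (7) later

# w = 1: the encoder cycles through non-zero states forever,
# producing a pseudo-random parity stream.
# w = 2 with the right spacing: the encoder returns to state 0 -> w_min = 2.
```

With this convention the non-zero states form a cycle of period 7, so a second 1 placed seven positions after the first cancels the feedback and terminates the response.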

The original turbo code [BER] uses a parallel concatenation of convolutional codes. But other
schemes like serial concatenation of convolutional codes [BEN] or algebraic turbo codes [PYN] have
since been studied. More recently, non-binary turbo codes have also been proposed [DOU].

Turbo-decoding
Decoding the code of Figure 4 by a global approach is not possible, because of the astronomical
number of states to consider. A joint probabilistic process by the decoders of C1 and C2 has to be
elaborated. Because of latency constraints, this joint process is worked out in an iterative manner in
a digital circuit. Turbo decoding relies on the following fundamental criterion:
when several probabilistic machines work together on the estimation of a
common set of symbols, all the machines have to give the same decision, with the
same probability, about each symbol, as a single (global) decoder would.
To make the composite decoder satisfy this criterion, the structure of Figure 5 is adopted. The double
loop enables both component decoders to benefit from the whole redundancy. The term turbo was
given to this feedback construction with reference to the principle of the turbo-charged engine.
Figure 5: A turbo decoder

The components are SISO decoders, permutation (Π) and inverse permutation (Π⁻¹) memories. The
node variables of the decoder are Logarithms of Likelihood Ratios (LLR). An LLR related to a
particular binary datum d is defined as:

LLR(d) = ln(Pr(d = 1) / Pr(d = 0))

The role of a SISO decoder is to process an input LLR and, thanks to local redundancy (i.e. y1 for
DEC1, y2 for DEC2), to try to improve it. The output LLR of a SISO decoder may simply be written as:

LLR_out(d) = LLR_in(d) + z(d)

where z(d) is the extrinsic information about d, provided by the decoder. If this works
properly, z(d) is most of the time negative if d = 0, and positive if d = 1. The composite decoder is

constructed in such a way that only extrinsic terms are passed by one component decoder to the
other. The input LLR to a particular decoder is formed by the sum of two terms: the information
symbols (x) stemming from the channel and the extrinsic term (z) provided by the other decoder,

which serves as a priori information. The information symbols are common inputs to both decoders,
which is why the extrinsic information must not contain them. In addition, the outgoing extrinsic
information does not include the incoming extrinsic information, in order to cut down correlation
effects in the loop. There are two families of SISO algorithms, those based on the SOVA [BAT][HAG],
the others based on the MAP (also called BCJR or APP) algorithm [BAH] or its simplified versions.
Turbo decoding is not optimal, because an iterative process obviously has to begin, during the
first half-iteration, with only a part of the redundant information available (either y1 or y2).
Fortunately, the loss due to sub-optimality is small (less than 0.5 dB).
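The LLR bookkeeping above can be illustrated with a small numerical sketch; the probabilities 0.8 and 0.7 below are arbitrary illustrative values, not figures from the article:

```python
import math

def llr(p1):
    """LLR(d) = ln(Pr(d = 1) / Pr(d = 0)) for a bit with Pr(d = 1) = p1."""
    return math.log(p1 / (1 - p1))

def prob_one(l):
    """Inverse mapping: recover Pr(d = 1) from an LLR."""
    return 1 / (1 + math.exp(-l))

# Independent soft estimates of the same bit combine by adding LLRs,
# which is exactly the update LLR_out(d) = LLR_in(d) + z(d).
llr_in = llr(0.8)       # e.g. channel observation: Pr(d = 1) = 0.8
z = llr(0.7)            # e.g. extrinsic information from the other decoder
llr_out = llr_in + z    # combined belief, stronger than either term alone
```

Here prob_one(llr_out) is about 0.90, above both inputs: the sign of an LLR gives the hard decision and its magnitude the reliability, which is why exchanging extrinsic LLRs lets each decoder sharpen the other's estimates.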
Applications of turbo codes

Figure 6: Applications of turbo codes

Table 1 summarizes standardized and proprietary applications of turbo codes known to date. Most of
these applications are detailed and commented on in [GRA].

References
[BAH] L. R. Bahl, J. Cocke, F. Jelinek and J. Raviv, Optimal
decoding of linear codes for minimizing symbol error rate,
IEEE Trans. Inform. Theory, vol. IT-20, pp. 284-287, Mar. 1974.
[BAT] G. Battail, Pondération des symboles décodés par
l'algorithme de Viterbi, Annales des Télécommunications, vol.
42, No. 1-2, pp. 31-38, Jan. 1987.
[BEN] S. Benedetto and G. Montorsi, Serial concatenation of
block and convolutional codes, Electronics Letters, vol. 32, No.
10, pp. 887-888, May 1996.
[BER] C. Berrou, A. Glavieux and P. Thitimajshima, Near
Shannon limit error-correcting coding and decoding: turbo-
codes, International Conference on Communications, ICC '93,
Geneva, Switzerland, pp. 1064-1070, May 1993.
[DOU] C. Douillard and C. Berrou, Turbo codes with rate-m/
(m+1) constituent convolutional codes, IEEE Trans.
Commun., vol. 53, No. 10, pp. 1630-1638, Oct. 2005.
[FOR] G. D. Forney, Jr., Concatenated codes, MIT Press, 1966.
[GRA] K. Gracie and M.-H. Hamon, Turbo and turbo-like
codes: principles and applications in telecommunications,
Proc. of the IEEE, vol. 95, No. 5, pp. 1228-1254, June 2007.
[HAG] J. Hagenauer and P. Hoeher, A Viterbi algorithm with
soft-decision outputs and its applications, IEEE Global
Communications Conference, Globecom '89, Dallas, Texas, pp.
47.1.1-47.1.7, Nov. 1989.
[PYN] R. Pyndiah, A. Glavieux, A. Picart and S. Jacq, Near
optimum decoding of product codes, in Proc. IEEE
GLOBECOM '94, vol. 1/3, San Francisco, pp. 339-343,
Nov.-Dec. 1994.
[SHA] C. E. Shannon, Probability of error for optimal codes in
a Gaussian channel, Bell Syst. Tech. Journal, vol. 38, pp. 611-
656, 1959.
Internal references
Andrew J. Viterbi (2009) Viterbi algorithm. Scholarpedia,
4(1):6246.

Further reading
See also
Viterbi algorithm
Sponsored by: Eugene M. Izhikevich, Editor-in-Chief of Scholarpedia, the peer-reviewed open-access
encyclopedia
Sponsored by: Prof. Francesco Vatalaro, University of Rome Tor Vergata, Italy
Reviewed by: Anonymous
Accepted on: 2008-11-18 05:52:19 GMT
Category:
Telecommunications

This page was last modified on 21 October 2011, at 04:18.



"Turbo code" by Sylvie Keroudan and Claude Berrou is licensed under

a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported

License. Permissions beyond the scope of this license are described in

the Terms of Use


Idealized system model

This section describes a simple idealized OFDM system model suitable for a time-
invariant AWGN channel.

Transmitter

An OFDM carrier signal is the sum of a number of orthogonal sub-carriers, with baseband data on
each sub-carrier being independently modulated, commonly using some type of quadrature amplitude
modulation (QAM) or phase-shift keying (PSK). This composite baseband signal is typically used to
modulate a main RF carrier.

s[n] is a serial stream of binary digits. By inverse multiplexing, these are first demultiplexed into
N parallel streams, and each one mapped to a (possibly complex) symbol stream using some
modulation constellation (QAM, PSK, etc.). Note that the constellations may be different, so some
streams may carry a higher bit-rate than others.
An inverse FFT is computed on each set of symbols, giving a set of complex time-domain samples.
These samples are then quadrature-mixed to passband in the standard way. The real and imaginary
components are first converted to the analogue domain using digital-to-analogue converters (DACs);
the analogue signals are then used to modulate cosine and sine waves at the carrier frequency, f_c,
respectively. These signals are then summed to give the transmission signal, s(t).

Receiver[edit]

The receiver picks up the signal r(t), which is then quadrature-mixed down to baseband using
cosine and sine waves at the carrier frequency. This also creates signals centered on 2f_c, so low-
pass filters are used to reject these. The baseband signals are then sampled and digitised
using analog-to-digital converters (ADCs), and a forward FFT is used to convert back to the
frequency domain.

This returns N parallel streams, each of which is converted to a binary stream using an appropriate
symbol detector. These streams are then re-combined into a serial stream, an estimate of the
original binary stream s[n] at the transmitter.
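Stripped of the analogue up/down-conversion, the transmitter/receiver chain above reduces to an inverse DFT followed by a forward DFT, which a short sketch can make concrete. Plain-Python DFTs stand in for the FFTs here, and the four QPSK symbols are arbitrary illustrative values:

```python
import cmath

def idft(X):
    """Inverse DFT: sub-carrier symbols X_k -> N time-domain samples
    (the transmitter's IFFT step)."""
    N = len(X)
    return [sum(Xk * cmath.exp(2j * cmath.pi * k * n / N)
                for k, Xk in enumerate(X)) / N
            for n in range(N)]

def dft(x):
    """Forward DFT: time-domain samples -> sub-carrier symbols
    (the receiver's FFT step)."""
    N = len(x)
    return [sum(xn * cmath.exp(-2j * cmath.pi * k * n / N)
                for n, xn in enumerate(x))
            for k in range(N)]

# Hypothetical QPSK symbols on N = 4 sub-carriers.
tx_symbols = [1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]
samples = idft(tx_symbols)      # transmitter IFFT
rx_symbols = dft(samples)       # receiver FFT (ideal, noiseless channel)
# rx_symbols matches tx_symbols up to floating-point rounding.
```

In a real system the FFT replaces these O(N^2) sums, and a channel plus noise sits between the two transforms; the noiseless roundtrip just exhibits the orthogonality that makes the per-sub-carrier symbol detector possible.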

Mathematical description

If N sub-carriers are used, and each sub-carrier is modulated using M alternative symbols, the
OFDM symbol alphabet consists of M^N combined symbols.

The low-pass equivalent OFDM signal is expressed as:

ν(t) = Σ_{k=0}^{N-1} X_k e^{j2πkt/T},  0 ≤ t < T

where X_k are the data symbols, N is the number of sub-carriers, and T is the OFDM symbol
time. The sub-carrier spacing of 1/T makes them orthogonal over each symbol period; this
property is expressed as:

(1/T) ∫_0^T (e^{j2πk₁t/T})* (e^{j2πk₂t/T}) dt = δ_{k₁k₂}

where (·)* denotes the complex conjugate operator and δ is the Kronecker delta.

To avoid intersymbol interference in multipath fading channels, a guard interval of length T_g
is inserted prior to the OFDM block. During this interval, a cyclic prefix is transmitted such
that the signal in the interval -T_g ≤ t < 0 equals the signal in the interval (T - T_g) ≤ t < T. The
OFDM signal with cyclic prefix is thus:

ν(t) = Σ_{k=0}^{N-1} X_k e^{j2πkt/T},  -T_g ≤ t < T

The low-pass signal above can be either real or complex-valued. Real-valued low-pass
equivalent signals are typically transmitted at baseband; wireline applications such as
DSL use this approach. For wireless applications, the low-pass signal is typically
complex-valued, in which case the transmitted signal is up-converted to a carrier
frequency f_c. In general, the transmitted signal can be represented as:

s(t) = Re{ν(t) e^{j2πf_c t}}
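The role of the cyclic prefix can be sketched in a few lines: if the prefix is at least as long as the channel memory, discarding it at the receiver turns the channel's linear convolution into a circular one, which the FFT then diagonalises into one complex gain per sub-carrier. The sample block and the 3-tap channel below are arbitrary illustrative choices:

```python
def add_cp(block, g):
    """Cyclic prefix: copy the last g samples to the front, so the signal on
    [-T_g, 0) equals the signal on [T - T_g, T)."""
    return block[-g:] + block

def channel(x, h):
    """Linear convolution of the transmitted samples with a short
    channel impulse response h."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

def circular(x, h):
    """Length-N circular convolution of x with the channel taps h."""
    N = len(x)
    return [sum(x[(n - j) % N] * hj for j, hj in enumerate(h))
            for n in range(N)]

block = [1.0, -1.0, 1.0, 1.0, -1.0, 1.0, -1.0, -1.0]  # hypothetical N = 8 samples
h = [1.0, 0.5, 0.25]      # channel memory (2) shorter than the prefix
g = 3                     # guard interval, in samples

rx = channel(add_cp(block, g), h)
payload = rx[g:g + len(block)]   # receiver discards the cyclic prefix
# payload now equals circular(block, h): the channel acts circularly on the
# block, which is what lets a one-tap equalizer per sub-carrier undo it
# after the FFT.
```

Without the prefix, the tail of each block would spill into the next one (intersymbol interference) and the circular-convolution property would not hold.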

Usage

OFDM is used in:

Digital Audio Broadcasting (DAB);

Digital television DVB-T/T2 (terrestrial), DVB-H (handheld), DMB-T/H, DVB-C2 (cable);

Wireless LAN IEEE 802.11a, IEEE 802.11g, IEEE 802.11n, IEEE 802.11ac,
and IEEE 802.11ad;

WiMAX;

ADSL (G.dmt/ITU G.992.1);

the LTE and LTE Advanced 4G mobile phone standards.

Orthogonal frequency-division multiplexing (OFDM) is a method of encoding digital data on
multiple carrier frequencies. OFDM has developed into a popular scheme for wideband digital
communication, whether wireless or over copper wires, used in applications such as digital television
and audio broadcasting, DSL Internet access, wireless networks, powerline networks,
and 4G mobile communications.

OFDM is essentially identical to coded OFDM (COFDM) and discrete multi-tone
modulation (DMT), and is a frequency-division multiplexing (FDM) scheme used as a digital
multi-carrier modulation method. The word "coded" comes from the use of forward error
correction (FEC).[1] A large number of closely spaced orthogonal sub-carrier signals are used to
carry data[1] on several parallel data streams or channels. Each sub-carrier is modulated with a
conventional modulation scheme (such as quadrature amplitude modulation or phase-shift keying) at
a low symbol rate, maintaining total data rates similar to conventional single-carrier modulation
schemes in the same bandwidth.

The primary advantage of OFDM over single-carrier schemes is its ability to cope with
severe channel conditions (for example, attenuation of high frequencies in a long copper wire,
narrowband interference and frequency-selective fading due to multipath) without complex
equalization filters. Channel equalization is simplified because OFDM may be viewed as using many
slowly modulated narrowband signals rather than one rapidly modulated wideband signal. The low
symbol rate makes the use of a guard interval between symbols affordable, making it possible to
eliminate intersymbol interference (ISI) and utilize echoes and time-spreading (on analogue TV
these are visible as ghosting and blurring, respectively) to achieve a diversity gain, i.e. a signal-to-
noise ratio improvement. This mechanism also facilitates the design of single frequency
networks (SFNs), where several adjacent transmitters send the same signal simultaneously at the
same frequency, as the signals from multiple distant transmitters may be combined constructively,
rather than interfering as would typically occur in a traditional single-carrier system.
What is IS-136?
IS-136 is a mobile communications interim standard which extends the functions of the dual-mode system standard
IS-54B. IS-54B is also known as NADC/TDMA (North American dual mode cellular, time division multiple access).
NADC/TDMA is a dual mode, full duplex cellular communications system in which each voice channel can be defined
by both a frequency and a time slot. In earlier cellular communications systems the voice channel was defined only
by a frequency. Thus, more calls can be transmitted by sharing one transmission frequency. The dual mode system
uses a digital traffic channel (DTC) or an analog voice channel (AVC). The dual mode system uses only an analog
control channel (ACC) to control transmissions on the voice and traffic channels. IS-136 has the DTC, AVC, ACC, and
adds a digital control channel (DCCH). IS-136 expands the capability of IS-54B to include:

Sleep mode for decreased battery usage during non-talk times.

Public, private, and semi-private cells such as picocells in office buildings and personal base stations

Short Message Service (SMS) for both point-to-point and broadcast information

Greatly improved security (using DCCH and authentication)


Basic Features of IS-136:

Time Slots per Channel: 6

Users per Channel: 3 (full rate), 6 (half rate), 9 (future)

Modulation: Digital: Pi/4 DQPSK, Nyquist filter factor = 0.35; Analog: FM

Data Structure: TDMA

Speech Coding: VSELP (vector sum excited linear predictive) 8 kbps

Modulation Data Rate: 24,300 symbols per second (1 symbol = 2 bits)

EIA/TIA Standards: IS-136.1 and IS-136.2 for the system; IS-137 for mobile stations;
IS-138 for base stations
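The figures in the list can be cross-checked with some back-of-the-envelope arithmetic. The slot allocation assumed below (2 of the 6 slots per full-rate user, 1 per half-rate user) is an interpretation consistent with the user counts quoted above, not a quotation from the standard:

```python
symbol_rate = 24_300               # symbols per second (from the list above)
bits_per_symbol = 2                # pi/4 DQPSK carries 2 bits per symbol
gross_rate = symbol_rate * bits_per_symbol   # bits per second per RF channel

slots = 6                          # time slots per channel
full_rate_users = slots // 2       # assumed 2 slots per full-rate user -> 3
half_rate_users = slots // 1       # assumed 1 slot per half-rate user  -> 6
```

The gross channel rate of 48,600 bit/s is then shared among the users, with part of each slot consumed by guard times, synchronisation and control overhead on top of the 8 kbps VSELP speech payload.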

The Control Channels:

IS-136 has both digital (DCCH) and analog (ACC) control channels.

The ACC controls the analog transmissions and guarantees backward compatibility with systems such as AMPS and
IS-54B.

The DCCH controls digital transmissions and enables the specialized features of IS-136.

A mobile station (cell phone) on an ACC has an idle mode. During this state, the mobile waits for messages from the
base station, or it can originate a call.
A mobile on a DCCH has a similar state, called camping. Refer to Transactions for transactions which can be
processed during the idle and camping states.
