
Low-density parity-check (LDPC) codes are a class of linear block codes.

Their main advantage is that they provide a performance which is very close to the capacity for many different channels, along with linear-time decoding algorithms. Furthermore, they are suited for implementations that make heavy use of parallelism. They were first introduced by Robert Gallager in his PhD thesis in 1960, but due to the computational effort of implementing encoders and decoders for such codes, and due to the introduction of Reed-Solomon codes, they were mostly ignored until about ten years ago.

Representations for LDPC codes


Basically, there are two different possibilities to represent LDPC codes. Like all linear block codes, they can be described via matrices. The second possibility is a graphical representation.

Matrix Representation
Let's look at an example of a low-density parity-check matrix first. The matrix defined in equation (1) is a parity-check matrix of dimension m × n = 4 × 8 for an (8, 4) code:

        ( 0 1 0 1 1 0 0 1 )
H  =    ( 1 1 1 0 0 1 0 0 )        (1)
        ( 0 0 1 0 0 1 1 1 )
        ( 1 0 0 1 1 0 1 0 )

We can now define two numbers describing this matrix: wr for the number of 1s in each row and wc for the number of 1s in each column. For a matrix to be called low-density, the two conditions wc << m and wr << n must be satisfied. In order to achieve this, the parity-check matrix usually has to be very large, so the example matrix can't really be called low-density.
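As a concrete illustration, here is a minimal Python sketch that computes wr and wc for the matrix in equation (1) and checks whether they are constant (anticipating the regularity definition below); the variable names are our own.

```python
import numpy as np

# The example parity-check matrix from equation (1): m = 4 checks, n = 8 bits.
H = np.array([[0, 1, 0, 1, 1, 0, 0, 1],
              [1, 1, 1, 0, 0, 1, 0, 0],
              [0, 0, 1, 0, 0, 1, 1, 1],
              [1, 0, 0, 1, 1, 0, 1, 0]])

wr = H.sum(axis=1)   # number of 1s in each row
wc = H.sum(axis=0)   # number of 1s in each column
print("row weights:   ", wr)   # [4 4 4 4]      -> wr = 4
print("column weights:", wc)   # [2 2 2 2 2 2 2 2] -> wc = 2
# Regular (see below): one constant row weight and one constant column weight.
print("regular:", len(set(wr)) == 1 and len(set(wc)) == 1)
```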

[Figure: Tanner graph for the code of equation (1), with check nodes f0 to f3 and variable nodes c0 to c7]

Graphical Representation:
Tanner introduced an effective graphical representation for LDPC codes. Not only do these graphs provide a complete representation of the code, they also help to describe the decoding algorithm. Tanner graphs are bipartite graphs: the nodes of the graph are separated into two distinct sets, and edges only connect nodes of different types. The two types of nodes in a Tanner graph are called variable nodes (v-nodes) and check nodes (c-nodes). The figure above is an example of such a Tanner graph and represents the same code as the matrix in equation (1). The creation of such a graph is rather straightforward: it consists of m check nodes (the number of parity bits) and n variable nodes (the number of bits in a codeword), and check node fi is connected to variable node cj if the element hij of H is a 1.
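A short sketch of this construction rule, assuming the example matrix from equation (1): one edge (f_i, c_j) is drawn exactly where H has a 1.

```python
import numpy as np

H = np.array([[0, 1, 0, 1, 1, 0, 0, 1],
              [1, 1, 1, 0, 0, 1, 0, 0],
              [0, 0, 1, 0, 0, 1, 1, 1],
              [1, 0, 0, 1, 1, 0, 1, 0]])

# One Tanner-graph edge (f_i, c_j) per nonzero entry h_ij of H.
edges = [(f"f{i}", f"c{j}") for i, j in zip(*np.nonzero(H))]
print(edges)   # starts with ('f0', 'c1'), ('f0', 'c3'), ... since h_01 = h_03 = 1
```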

Regular and irregular LDPC codes:


An LDPC code is called regular if wc is constant for every column and wr = wc · (n/m) is also constant for every row. The example matrix from equation (1) is regular with wc = 2 and wr = 4. The regularity of this code can also be seen in the graphical representation: there is the same number of incoming edges for every v-node, and likewise for all the c-nodes. If H is low-density but the number of 1s in each row or column isn't constant, the code is called an irregular LDPC code.

Construction of LDPC codes:


Several different algorithms exist to construct suitable LDPC codes. Gallager himself introduced one, and MacKay proposed another to semi-randomly generate sparse parity-check matrices. This is quite interesting, since it indicates that constructing well-performing LDPC codes is not a hard problem; in fact, completely randomly chosen codes are good with high probability. The problem that arises is that the encoding complexity of such codes is usually rather high. Before describing decoding algorithms, note that the ability of LDPC codes to perform near the Shannon limit of a channel exists only for large block lengths. For example, there have been simulations that perform within 0.04 dB of the Shannon limit at a bit error rate of 10^-6 with a block length of 10^7. An interesting fact is that those high-performance codes are irregular. The large block length also results in large parity-check and generator matrices. The complexity of multiplying a codeword with a matrix depends on the number of 1s in the matrix. If we put the sparse matrix H in the form [P^T I] via Gaussian elimination, the generator matrix G can be calculated as G = [I P]. The submatrix P is generally not sparse, so the encoding complexity will be quite high.
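A sketch of this elimination step over GF(2) follows. One caveat: in the toy matrix of equation (1) every column has weight 2, so its rows sum to zero and it is not full-rank; the demonstration therefore uses a small full-rank parity-check matrix (a (7, 4) Hamming code) instead. The function name and the pivoting assumption are ours.

```python
import numpy as np

def generator_from_parity(H):
    """Bring H into the form [P^T | I] by row reduction over GF(2) and
    return G = [I | P], so that G @ H.T == 0 (mod 2).

    A sketch only: it assumes the last m columns of H can be reduced to
    the identity without column permutations.
    """
    H = H % 2
    m, n = H.shape
    k = n - m
    for i in range(m):
        pivot = k + i
        if H[i, pivot] == 0:                     # swap in a row with a pivot
            r = i + 1 + np.nonzero(H[i + 1:, pivot])[0][0]
            H[[i, r]] = H[[r, i]]
        for r in range(m):                       # clear the pivot column
            if r != i and H[r, pivot] == 1:
                H[r] ^= H[i]
    P = H[:, :k].T                               # H is now [P^T | I]
    return np.hstack([np.eye(k, dtype=int), P])

# A small full-rank example: a (7, 4) Hamming parity-check matrix.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
G = generator_from_parity(H)
assert not (G @ H.T % 2).any()    # every row of G satisfies all checks
```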

Since the complexity grows in O(n^2), even sparse matrices don't result in good performance if the block length gets very high. So iterative decoding (and encoding) algorithms are used. Those algorithms perform local calculations and pass the local results on via messages; this step is typically repeated several times. The term "local calculations" already indicates that a divide-and-conquer strategy, which separates a complex problem into manageable sub-problems, is realized. A sparse parity-check matrix now helps these algorithms in several ways: it keeps the local calculations simple, and it reduces the complexity of combining the sub-problems by reducing the number of messages needed to exchange all the information. Furthermore, it was observed that iterative decoding algorithms of sparse codes perform very close to the optimal maximum-likelihood decoder.

Encoding:
Encoding of LDPC codes uses the property Hx^T = 0, where x is the codeword and H is the sparse parity-check matrix. A straightforward encoding scheme requires three steps: a) Gaussian elimination to transform the H matrix into a lower triangular form; b) splitting x into information bits and parity bits, i.e., x = (s, p1, p2), where s is the vector of information bits and p1, p2 are vectors of parity bits; c) solving the equation Hx^T = 0 by forward substitution. Since the H matrix will no longer be sparse after the Gaussian elimination, the actual encoding takes O(n^2) XOR operations.

Richardson Urbanke LDPC Encoding Model


The RU method constructs efficient encoders for LDPC codes as follows. Assume we are given an m × n parity-check matrix H over F. By definition, the associated code consists of the set of n-tuples x over F such that

Hx^T = 0

Through Gaussian elimination, bring the H matrix to a lower triangular form in which the last m columns form a lower triangular matrix with ones on the diagonal. Then split the vector x into the systematic part s of length (n − m) and the parity part p of length m, such that x = (s, p), and construct a systematic encoder as follows: i) fill s with the (n − m) desired information symbols; ii) determine the m parity-check symbols using back-substitution. More precisely, for l ∈ [m],

p_l = Σ_{j=1..n−m} H_{l,j} s_j + Σ_{j=1..l−1} H_{l,(n−m)+j} p_j   (mod 2)
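A minimal sketch of this back-substitution step, assuming H is already in the triangular form just described; the function name and the small example matrix are our own.

```python
import numpy as np

def encode_lower_triangular(H, s):
    """Systematic encoding sketch. H is m x n with its last m columns lower
    triangular (ones on the diagonal); s holds the n - m information bits.
    Each check equation then fixes exactly one new parity bit."""
    m, n = H.shape
    x = np.concatenate([s, np.zeros(m, dtype=int)])
    for l in range(m):
        # p_l = XOR of all earlier bits that participate in check l
        x[n - m + l] = H[l, :n - m + l] @ x[:n - m + l] % 2
    return x

H = np.array([[1, 1, 0, 1, 0, 0],       # last 3 columns are lower triangular
              [0, 1, 1, 1, 1, 0],
              [1, 0, 1, 0, 1, 1]])
x = encode_lower_triangular(H, np.array([1, 0, 1]))
assert not (H @ x % 2).any()            # all parity checks are satisfied
```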

More generally, we bring the H matrix to the approximate lower triangular form

H = ( A  B  T )
    ( C  D  E )

where A is of size (m − g) × (n − m), B is (m − g) × g, T is (m − g) × (m − g) and lower triangular with ones on the diagonal, C is g × (n − m), D is g × g, and finally E is g × (m − g). Multiplying the above matrix from the left by

( I        0 )
( -ET^-1   I )

we get

( A             B             T )
( -ET^-1A + C   -ET^-1B + D   0 )

Let x = (s, p1, p2), where s denotes the systematic part, p1 and p2 combined denote the parity part, p1 has length g, and p2 has length (m − g). So according to the equation Hx^T = 0, we have

As^T + Bp1^T + Tp2^T = 0

and

(-ET^-1A + C)s^T + (-ET^-1B + D)p1^T = 0

Now define φ = (-ET^-1B + D) and for the moment assume that φ is non-singular. Then

p1^T = -φ^-1(-ET^-1A + C)s^T

and once the matrix -φ^-1 = -(-ET^-1B + D)^-1 has been computed, the determination of p1 can be done.

In a similar manner, from the first equation, p2 can be determined as

p2^T = -T^-1(As^T + Bp1^T)

The encoder therefore performs two computations:

Computation of p1^T = -φ^-1(-ET^-1A + C)s^T, with φ = (-ET^-1B + D)

Computation of p2^T = -T^-1(As^T + Bp1^T)
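Putting these steps together, here is a sketch of the whole procedure over GF(2), where the minus signs vanish. The helper gf2_solve, the function names, and the toy block decomposition are our own illustration; φ is assumed non-singular as above.

```python
import numpy as np

def gf2_solve(M, b):
    """Solve M z = b over GF(2) by Gaussian elimination
    (M square and assumed non-singular)."""
    M, b = M % 2, b % 2
    n = len(b)
    for i in range(n):
        r = i + np.nonzero(M[i:, i])[0][0]     # pivot row
        M[[i, r]] = M[[r, i]]
        b[[i, r]] = b[[r, i]]
        for k in range(n):                     # clear column i elsewhere
            if k != i and M[k, i]:
                M[k] ^= M[i]
                b[k] ^= b[i]
    return b

def ru_encode(s, A, B, T, C, D, E):
    """RU encoding sketch for H = [[A, B, T], [C, D, E]], x = (s, p1, p2)."""
    TinvB = np.array([gf2_solve(T, col) for col in (B % 2).T]).T   # T^-1 B
    phi = (E @ TinvB + D) % 2                  # phi = E T^-1 B + D (mod 2)
    ETinvAs = E @ gf2_solve(T, A @ s % 2) % 2  # E T^-1 A s^T
    p1 = gf2_solve(phi, (ETinvAs + C @ s) % 2) # p1 = phi^-1 (E T^-1 A + C) s^T
    p2 = gf2_solve(T, (A @ s + B @ p1) % 2)    # p2 = T^-1 (A s^T + B p1^T)
    return np.concatenate([s, p1, p2])

# Toy decomposition with g = 1 (chosen so that phi is non-singular):
A = np.array([[1, 0], [0, 1]]); B = np.array([[1], [0]])
T = np.array([[1, 0], [1, 1]])
C = np.array([[1, 1]]); D = np.array([[0]]); E = np.array([[0, 1]])
H = np.block([[A, B, T], [C, D, E]])
x = ru_encode(np.array([1, 0]), A, B, T, C, D, E)
assert not (H @ x % 2).any()                   # H x^T = 0
```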

Label and Decide Algorithm
Identify information bits and parity bits through a labelling process on the Tanner graph. Assign numerical values to the bit nodes labelled as information bits and then calculate the missing values of the parity bits sequentially.

Algorithm:
flag <- 0
Get the values of all the bits labelled as information bits
while there are parity bits undetermined do
    if there exists one undetermined parity bit x whose value can be computed from the available information bits and the already determined parity bits then
        Compute the value of x
    else
        flag <- 1, exit the while loop
    end if
end while
if flag = 1 then
    encoding is unsuccessful
else
    output the encoded codeword
end if

Steps for encoding:
Step 1: Determine the values of all the information bits x14, x15, x16, x10, x12, x13, x5, x7, and x8 (marked in the graph above).
Step 2: Compute the parity bit x11 from the parity check equation C7: x11 = x14 ⊕ x15 ⊕ x16.
Step 3: Compute the parity bit x6 from the parity check equation C5: x6 = x10 ⊕ x15 ⊕ x11 ⊕ x13, and the parity bit x9 from the parity check equation C6 in the Tanner graph above: x9 = x11 ⊕ x12 ⊕ x16 ⊕ x13.
Step 4: Compute the parity bits x1, x2, x3, and x4 in the first tier by the parity check equations C1, C2, C3, and C4 respectively:
C1: x1 = x10 ⊕ x5 ⊕ x7
C2: x2 = x5 ⊕ x6 ⊕ x12 ⊕ x9
C3: x3 = x5 ⊕ x7 ⊕ x11 ⊕ x14 ⊕ x8
C4: x4 = x6 ⊕ x8 ⊕ x10 ⊕ x9 ⊕ x13
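A generic sketch of this loop (not the specific 16-bit code from the figure, whose parity-check matrix is not reproduced here); the function and variable names are ours.

```python
import numpy as np

def label_and_decide(H, info_idx, info_bits):
    """Label-and-decide sketch: the bits listed in info_idx are given, and a
    parity bit is computed as soon as some check equation contains it as its
    only undetermined bit (flag = 1 reports an encoding failure)."""
    m, n = H.shape
    x = np.full(n, -1)                 # -1 marks "undetermined"
    x[info_idx] = info_bits
    progress = True
    while (x == -1).any() and progress:
        progress = False
        for row in H:
            unknown = np.nonzero((row == 1) & (x == -1))[0]
            if len(unknown) == 1:      # this check fixes exactly one bit
                known = (row == 1) & (x != -1)
                x[unknown[0]] = x[known].sum() % 2
                progress = True
    if (x == -1).any():
        raise ValueError("encoding unsuccessful (flag = 1)")
    return x

H = np.array([[1, 0, 1, 0],
              [0, 1, 1, 1]])
print(label_and_decide(H, info_idx=[2, 3], info_bits=[1, 1]))   # -> [1 0 1 1]
```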

Decoding: Hard Decision Decoding
1. All v-nodes ci send a message to their c-nodes fj. (The only information a v-node ci has at this point is the corresponding received i-th bit of c, yi.)

2. Every check node fj calculates a response to every connected variable node. The response message contains the bit that fj believes to be the correct one for this v-node ci, assuming that the other v-nodes connected to fj are correct. (This might also be the point at which the decoding algorithm terminates, if all the check equations are fulfilled.)

3. The v-nodes receive the messages from the check nodes and use this additional information to decide whether their originally received bit is OK.

4. Go to step 2.

[Table: overview of the messages received and sent by the c-nodes in step 2 of the message-passing algorithm]

[Figure: step 3 of the described decoding algorithm; the v-nodes use the answer messages from the c-nodes to perform a majority vote on the bit value]
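A simplified sketch of this hard-decision loop: each check sends each neighbour the XOR of its other neighbours, and each v-node takes a majority vote over those suggestions and its received bit (for brevity every v-node uses its current estimate rather than per-edge messages). The (7, 4) Hamming matrix in the demo is our own choice.

```python
import numpy as np

def hard_decision_decode(H, y, max_iter=20):
    """Hard-decision message-passing sketch (majority vote)."""
    x = y.copy()
    for _ in range(max_iter):
        if not (H @ x % 2).any():          # all checks satisfied: done
            break
        votes = np.zeros(len(x))
        for row in H:
            idx = np.nonzero(row)[0]
            s = x[idx].sum() % 2
            for j in idx:
                suggestion = (s + x[j]) % 2    # XOR of the *other* bits
                votes[j] += 1 if suggestion == 1 else -1
        # Majority vote including the originally received bit y.
        total = votes + np.where(y == 1, 1, -1)
        x = np.where(total > 0, 1, np.where(total < 0, 0, x))
    return x

H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
y = np.array([1, 0, 0, 0, 0, 0, 0])    # all-zero codeword with bit 0 flipped
print(hard_decision_decode(H, y))      # -> [0 0 0 0 0 0 0]
```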

Soft Decision Decoding
Soft decision decoding is based on the concept of belief propagation and yields a better decoding performance.
1. All variable nodes send their qij messages. Since no other information is available at this step, qij(1) = Pi and qij(0) = 1 − Pi.

2. The check nodes calculate their response messages rji:

rji(0) = 1/2 + 1/2 Π_{i'∈Vj\i} (1 − 2qi'j(1))
rji(1) = 1 − rji(0)

where Vj\i denotes the variable nodes connected to fj, excluding ci.

3. The variable nodes update their response messages to the check nodes. This is done according to the following equations:

qij(0) = Kij (1 − Pi) Π_{j'∈Ci\j} rj'i(0)
qij(1) = Kij Pi Π_{j'∈Ci\j} rj'i(1)

where Ci\j denotes the check nodes connected to ci, excluding fj, and the constants Kij are chosen so that qij(0) + qij(1) = 1.

At this point the v-nodes also update their current estimate ĉi of their variable ci. This is done by calculating the probabilities for 0 and 1 and voting for the bigger one. The equations used,

Qi(0) = Ki (1 − Pi) Π_{j∈Ci} rji(0)
Qi(1) = Ki Pi Π_{j∈Ci} rji(1)

are quite similar to the ones used to compute qij(b), but now the information from every c-node is used. The decision is

ĉi = 1 if Qi(1) > Qi(0), and ĉi = 0 otherwise.

If the current estimated codeword now fulfils the parity check equations, the algorithm terminates. Otherwise, termination is ensured through a maximum number of iterations.

4. Go to step 2.
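As an illustration, here is a sketch that implements these probability-domain updates directly; the (7, 4) Hamming parity-check matrix and the channel probabilities P1 are our own choice, and messages are stored densely for clarity rather than efficiency.

```python
import numpy as np

def sum_product_decode(H, P1, max_iter=50):
    """Probability-domain sum-product sketch following the equations above.
    P1[i] is the channel estimate Pr(c_i = 1); returns the hard decision."""
    m, n = H.shape
    q1 = np.where(H == 1, P1, 0.0)                 # q_ij(1), per (check, var)
    for _ in range(max_iter):
        # Check-node update: r_ji(0) = 1/2 + 1/2 prod_{i' != i} (1 - 2 q_i'j(1))
        r0 = np.zeros((m, n))
        for j in range(m):
            idx = np.nonzero(H[j])[0]
            f = 1.0 - 2.0 * q1[j, idx]
            for t, i in enumerate(idx):
                r0[j, i] = 0.5 + 0.5 * np.prod(np.delete(f, t))
        r1 = np.where(H == 1, 1.0 - r0, 0.0)
        # Variable-node update (K_ij normalises each pair) and estimate Q_i.
        x = np.zeros(n, dtype=int)
        for i in range(n):
            idx = np.nonzero(H[:, i])[0]
            for j in idx:
                others = idx[idx != j]
                a0 = (1 - P1[i]) * np.prod(r0[others, i])
                a1 = P1[i] * np.prod(r1[others, i])
                q1[j, i] = a1 / (a0 + a1)
            Q0 = (1 - P1[i]) * np.prod(r0[idx, i])
            Q1 = P1[i] * np.prod(r1[idx, i])
            x[i] = 1 if Q1 > Q0 else 0
        if not (H @ x % 2).any():                  # all checks satisfied
            break
    return x

H = np.array([[1, 1, 0, 1, 1, 0, 0],               # (7, 4) Hamming checks
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
P1 = np.array([0.9, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1]) # bit 0 looks flipped
print(sum_product_decode(H, P1))                   # -> [0 0 0 0 0 0 0]
```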

Applications:
LDPC codes are used as the error-correcting code in the new DVB-S2 standard for the satellite transmission of digital television.

LDPC codes are used as the FEC scheme for the ITU-T G.hn standard.

G.hn chose LDPC over turbo codes because of its lower decoding complexity (especially when operating at data rates close to 1 Gbit/s) and because the proposed turbo codes exhibited a significant error floor at the desired range of operation.

LDPC is also used for 10GBASE-T Ethernet, which sends data at 10 gigabits per second over twisted-pair cables.

LDPC codes are also used as part of the Wi-Fi 802.11 standard, as an optional part of 802.11n in the High Throughput (HT) PHY specification.

References:
R. G. Gallager, Low-Density Parity-Check Codes, MIT Press, Cambridge, MA, 1963.
D. J. C. MacKay, "Good error-correcting codes based on very sparse matrices," IEEE Trans. Inform. Theory, vol. 45, no. 2, 1999.
C. E. Shannon, "A mathematical theory of communication," Bell System Technical Journal, vol. 27, 1948.
L. Bazzi, T. Richardson, and R. Urbanke, "Exact thresholds and optimal codes for the binary symmetric channel and Gallager's decoding algorithm A," IEEE Trans. Inform. Theory, vol. 47, 2001.
T. Richardson and R. Urbanke, "Efficient encoding of low-density parity-check codes," IEEE Trans. Inform. Theory, vol. 47, 2001.
