
INFORMATION THEORY AND CODING TECHNIQUES

EXPERIMENT NO. 1

Aim: Implementation of an algorithm for determination of various entropies and mutual information of the given channel.
Test various types of channels, such as:
1. Noise free channel
2. Error free channel
3. Binary symmetric channel
4. Noisy channel

Equipment: PC with ‘C’ programming tool.

Theory:
1. What is a discrete memoryless source and a discrete memoryless channel?
2. Write the definition, formula and units for the following:
i) Information
ii) Entropy
iii) Information rate
iv) Mutual information
3. What are the different types of entropies?

Algorithm:
Part: I
1. Input the number of inputs of the channel.
2. Input the number of outputs of the channel.
3. Input the channel matrix. Test the condition that the sum of all the entries in each row is exactly equal to 1.


4. Input the channel input probabilities, i.e. P[X]. The condition mentioned in step 3 applies here as well. Calculate the entropy of the channel input, i.e. H(X).
5. Calculate the output probability matrix P[Y] by multiplying the input probability matrix by the channel matrix. Also calculate the entropy of the channel output, i.e. H(Y).
6. Convert the input probability matrix into a diagonal matrix, i.e. P[X]d.
7. Calculate the joint probability matrix by multiplying the input probability matrix in diagonal form by the channel matrix.
8. Calculate the joint entropy with the help of the formula

H(X,Y) = Σi Σj p(xi, yj) · log2( 1 / p(xi, yj) )

(summing over all m input symbols xi and all n output symbols yj)

9. From the values of H(X), H(Y) and H(X,Y) we can calculate the conditional entropies as
H(Y/X) = H(X,Y) - H(X)
H(X/Y) = H(X,Y) - H(Y)
10. We can also calculate the mutual information as
I(X;Y) = H(X) - H(X/Y)
I(X;Y) = H(Y) - H(Y/X)
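
A minimal C sketch of Part I is given below. It assumes a fixed 2-input, 2-output channel hardcoded in place of user input; the example probabilities and array sizes are illustrative only.

#include <stdio.h>
#include <math.h>

#define M 2   /* number of channel inputs  */
#define N 2   /* number of channel outputs */

static double log2_(double x) { return log(x) / log(2.0); }

int main(void)
{
    /* illustrative P[X] and channel matrix P(Y/X); in the full program
       these are read from the user, checking that each row sums to 1 */
    double px[M]     = { 0.5, 0.5 };
    double pyx[M][N] = { { 0.9, 0.1 },
                         { 0.1, 0.9 } };
    double py[N] = { 0.0, 0.0 }, pxy[M][N];
    double hx = 0, hy = 0, hxy = 0;
    int i, j;

    /* step 5: P[Y] = P[X] * channel matrix */
    for (j = 0; j < N; j++)
        for (i = 0; i < M; i++)
            py[j] += px[i] * pyx[i][j];

    /* steps 6-7: joint matrix P(X,Y) = P[X]d * P(Y/X) */
    for (i = 0; i < M; i++)
        for (j = 0; j < N; j++)
            pxy[i][j] = px[i] * pyx[i][j];

    /* entropies H(X), H(Y), H(X,Y); zero-probability terms contribute 0 */
    for (i = 0; i < M; i++)
        if (px[i] > 0) hx -= px[i] * log2_(px[i]);
    for (j = 0; j < N; j++)
        if (py[j] > 0) hy -= py[j] * log2_(py[j]);
    for (i = 0; i < M; i++)
        for (j = 0; j < N; j++)
            if (pxy[i][j] > 0) hxy -= pxy[i][j] * log2_(pxy[i][j]);

    /* steps 9-10: conditional entropies and mutual information */
    printf("H(X)   = %f bits\n", hx);
    printf("H(Y)   = %f bits\n", hy);
    printf("H(X,Y) = %f bits\n", hxy);
    printf("H(Y/X) = %f bits\n", hxy - hx);
    printf("H(X/Y) = %f bits\n", hxy - hy);
    printf("I(X;Y) = %f bits\n", hy - (hxy - hx));
    return 0;
}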


Part-II
A) Noise free channel
1. For a noise free channel, enter only the diagonal elements of the joint probability matrix. The condition mentioned in step 3 should be satisfied.
2. Repeat steps 4 to 10.
B) Error free channel
1. A channel is said to be error free if the capacity of the channel is greater than the entropy of the channel input. So first calculate the capacity of the channel using the formula
Capacity C = log2 M bits/symbol
where M is the number of inputs of the channel.
2. Calculate the entropy of the input.
3. Compare the capacity of the channel with the channel input entropy. Test the condition for an error free channel.
C) Binary Symmetric Channel
1. A BSC is characterized by:
number of inputs = number of outputs = 2
2. Its conditional probability matrix is as follows:

P(Y/X) = | 1-P   P  |
         |  P   1-P |

3. Derive the joint probability matrix from this matrix by multiplying it by
P(X) = [ 0 1 ]
So the matrix which we take as input from the user is

P(X,Y) = | 0    0  |
         | P   1-P |

where P is the probability which the user should enter.
4. Then repeat steps 4 to 8 to calculate all the required quantities.
D) Noisy channel
1. Repeat the first two steps from Part I.
2. Enter the joint probability matrix, removing the condition that the sum of all the elements equals 1.
3. Repeat all the remaining steps to calculate the required quantities.

Conclusion:
1. Why do we calculate entropy?
2. What is the significance of conditional entropy?


EXPERIMENT NO. 2

Aim: Implementation of an algorithm for generation and evaluation of variable length source coding using
1) Shannon-Fano coding
2) Huffman coding

Equipment:
PC with C as a programming tool

PART 1) Shannon-Fano coding

Theory:

1) Need of source coding.
2) Explain the source encoder with a block diagram.
3) What is variable length source coding? Explain with an example.
4) State Shannon's first theorem.

Algorithm:
1) Accept the number of symbols and their respective probabilities. Sort the probabilities in descending order.
2) Partition these probabilities into two groups such that their sums are equal or nearly equal.
3) Assign '0' to the upper group symbols and '1' to the lower group symbols.
4) Repeat steps 2 and 3 until every group contains a single symbol.
5) For the formation of the codeword, read the assigned bits from left to right.


Example:

Symbol   Probability   Coding Steps        Codeword
                       I    II   III  IV
S1       0.4           0                    0
S2       0.3           1    0               10
S3       0.2           1    1    0          110
S4       0.05          1    1    1    0     1110
S5       0.05          1    1    1    1     1111
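
A short C sketch of the above algorithm is given below; it reproduces this example table. It assumes the probabilities are already sorted in descending order, and it chooses the split point where the two group sums come closest to each other.

#include <stdio.h>

#define NSYM 5

/* Recursively split p[lo..hi] into two groups of (nearly) equal total
   probability, appending '0' to the upper group and '1' to the lower. */
static void shannon_fano(double p[], char code[][16], int lo, int hi, int len)
{
    double total = 0.0, run = 0.0, best;
    int i, split = lo;

    if (lo >= hi) return;                /* single symbol: code complete */
    for (i = lo; i <= hi; i++) total += p[i];
    best = total;
    /* step 2: find the split where the two group sums are closest */
    for (i = lo; i < hi; i++) {
        double diff;
        run += p[i];
        diff = total - 2.0 * run;        /* lower-group sum minus upper */
        if (diff < 0) diff = -diff;
        if (diff < best) { best = diff; split = i; }
    }
    /* step 3: '0' for the upper group, '1' for the lower group */
    for (i = lo; i <= split; i++)     code[i][len] = '0';
    for (i = split + 1; i <= hi; i++) code[i][len] = '1';
    /* step 4: recurse on each group */
    shannon_fano(p, code, lo, split, len + 1);
    shannon_fano(p, code, split + 1, hi, len + 1);
}

int main(void)
{
    /* probabilities already sorted in descending order (step 1) */
    double p[NSYM] = { 0.4, 0.3, 0.2, 0.05, 0.05 };
    char code[NSYM][16] = { { 0 } };     /* zero-filled, so strings terminate */
    int i;

    shannon_fano(p, code, 0, NSYM - 1, 0);
    for (i = 0; i < NSYM; i++)
        printf("S%d  p=%.2f  code=%s\n", i + 1, p[i], code[i]);
    return 0;
}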

PART 2) Huffman Coding

Theory:
1) What are the types of source coding?
2) State advantages of Huffman coding.
3) How do we calculate the average codeword length and efficiency? Explain with formula and example.

Algorithm:

1) List the source symbols in order of decreasing probability.
2) Combine the probabilities of the two symbols having the lowest probabilities and reorder the resultant probabilities. The same procedure is repeated until two ordered probabilities remain.
3) Start encoding with the last reduction, which consists of exactly two ordered probabilities. Assign 0 as the first digit in the codewords for all the source symbols associated with the first probability; assign 1 to the second probability.
4) Now go back and assign 0 and 1 to the second digit for the two
probabilities that were combined in the previous reduction step,
retaining all assignments made in step 3.
5) Keep regressing this way until the first column is reached.

Example:

Symbol   Step 1   Step 2   Step 3   Codeword

0.4      0.4      0.4      0.6      1
0.2      0.2      0.4      0.4      01
0.2      0.2      0.2               000
0.1      0.2                        0010
0.1                                 0011
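
A compact C sketch of the Huffman procedure is given below, using simple arrays rather than an explicit tree; the array bounds and the tie-breaking rule are implementation choices. Because ties between equal probabilities may be broken differently than in the hand-worked table, the individual codewords can differ (e.g. lengths 2,2,2,3,3 instead of 1,2,3,4,4), but the average codeword length, 2.2 bits/symbol here, is the same for any valid Huffman code.

#include <stdio.h>
#include <string.h>

#define NSYM 5

int main(void)
{
    /* source probabilities, listed in order of decreasing probability */
    double p[2 * NSYM] = { 0.4, 0.2, 0.2, 0.1, 0.1 };
    int parent[2 * NSYM], bit[2 * NSYM], alive[2 * NSYM];
    int n = NSYM, nodes = NSYM, i;
    double avg = 0.0;

    for (i = 0; i < 2 * NSYM; i++) { parent[i] = -1; alive[i] = (i < NSYM); }

    /* step 2: repeatedly merge the two lowest-probability live nodes */
    while (n > 1) {
        int lo1 = -1, lo2 = -1;          /* smallest and second smallest */
        for (i = 0; i < nodes; i++) {
            if (!alive[i]) continue;
            if (lo1 < 0 || p[i] < p[lo1]) { lo2 = lo1; lo1 = i; }
            else if (lo2 < 0 || p[i] < p[lo2]) lo2 = i;
        }
        /* steps 3-4: the larger branch gets bit 0, the smaller gets 1 */
        p[nodes] = p[lo1] + p[lo2];
        parent[lo2] = nodes; bit[lo2] = 0;
        parent[lo1] = nodes; bit[lo1] = 1;
        alive[lo1] = alive[lo2] = 0;
        alive[nodes] = 1;
        nodes++; n--;
    }

    /* step 5: read each codeword from the root back to the leaf */
    for (i = 0; i < NSYM; i++) {
        char code[32]; int len = 0, j;
        for (j = i; parent[j] >= 0; j = parent[j])
            code[len++] = bit[j] ? '1' : '0';
        printf("p=%.2f  code=", p[i]);
        for (j = len - 1; j >= 0; j--) putchar(code[j]);  /* reverse */
        putchar('\n');
        avg += p[i] * len;
    }
    printf("average codeword length = %.2f bits/symbol\n", avg);
    return 0;
}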

Conclusion:
Write your own conclusion.

EXPERIMENT NO. 3


Aim: Implementation of an algorithm for generation and decoding of linear block codes.

Apparatus: PC with MATLAB programming tool.

Theory:
Channel Coding: In channel coding, we map the incoming data
sequence to a channel input sequence. This encoding procedure is done
by the channel encoder. The encoded sequence is then transmitted over
the noisy channel. The channel output sequence at the receiver is
inverse mapped to an output data sequence. This is called the decoding
procedure and is carried out by the channel decoder. Both the encoder
and decoder are under the designer's control.
The encoder introduces redundancy in a prescribed manner.
The decoder exploits the redundancy in a prescribed manner in order to
reconstruct the source sequence as accurately as possible. Channel
coding makes reliable communication possible over unreliable
channels. Channel coding is also called 'error control coding'.
Questions: 1) Explain the necessity of the channel coding.
2) Define and explain the following terms.
1) Weight of the code
2) Minimum distance
3) Syndrome
4) Hamming bound.
Encoding and decoding procedure of the linear block code:
Consider an (n,k) systematic linear block code (LBC), where n is the length of the codeword and k is the length of the message.


Encoding: Let's consider the (6,3) LBC with the following parity matrix

P = | 1 0 1 |
    | 0 1 1 |
    | 1 1 0 |

Here n = 6 and k = 3, hence the number of redundant bits is n-k = 3.

The generator matrix G is

G = [ Ik : P ]

G = | 1 0 0 1 0 1 |
    | 0 1 0 0 1 1 |
    | 0 0 1 1 1 0 |

Now from the above generator matrix and message vector we can calculate the codewords.
Number of message vectors = 2^k = 2^3 = 8
For the first message M0 = [0 0 0], the codeword C0 is

C0 = [0 0 0] | 1 0 0 1 0 1 |
             | 0 1 0 0 1 1 |
             | 0 0 1 1 1 0 |

C0 = [0 0 0 0 0 0]

Similarly, for the second message M1 = [0 0 1], the codeword C1 is

C1 = [0 0 1] | 1 0 0 1 0 1 |
             | 0 1 0 0 1 1 |
             | 0 0 1 1 1 0 |

C1 = [0 0 1 1 1 0]


In the same way, calculate the code vectors for all eight message vectors. All codewords are as follows:
Data Codeword
000 000000
001 001110
010 010011
011 011101
100 100101
101 101011
110 110110
111 111000
Decoding:
The parity matrix P is

P = | 1 0 1 |
    | 0 1 1 |
    | 1 1 0 |

Its transpose P^T is

P^T = | 1 0 1 |
      | 0 1 1 |
      | 1 1 0 |

(P is symmetric here, so P^T = P.)

The parity check matrix is given by

H = [ P^T : I(n-k) ]

H = | 1 0 1 1 0 0 |
    | 0 1 1 0 1 0 |
    | 1 1 0 0 0 1 |

The syndrome table can be calculated as

St = H^T = | 1 0 1 |
           | 0 1 1 |
           | 1 1 0 |
           | 1 0 0 |
           | 0 1 0 |
           | 0 0 1 |

Let the received codeword be

R = [1 0 1 1 1 0]

The syndrome is calculated as
S = R * H^T
By calculation, S = [1 0 1].
This matches the first row of the syndrome table. It means that an error has occurred in the first bit. Now take the all-zero vector of size n = 6 and set its first bit to 1. Call this the error vector E:
E = [1 0 0 0 0 0]
Now the corrected codeword Cc is
Cc = R ⊕ E
Cc = [0 0 1 1 1 0]

Algorithm:
1) Enter the values of n and k.
2) Enter the parity matrix.
3) Calculate the generator matrix G.
4) Enter the data vector M, i.e. any one combination out of the total possible combinations.
5) Calculate the code using C = M * G.
6) In the same way, calculate all the codewords.


7) Introduce an error in the m-th bit position by changing it either from 1 to 0 or from 0 to 1.
8) Calculate the syndrome.
9) Compare the syndrome with the corresponding error pattern.
10) Evaluate the corrected code Cc.
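
A C sketch of the complete encode/decode cycle for the worked (6,3) example is given below; the parity matrix and received word are taken from the example above, and single-error correction is done by matching the syndrome against the columns of H (equivalently, the rows of the syndrome table H^T).

#include <stdio.h>

#define N 6
#define K 3

int main(void)
{
    /* parity matrix P from the worked example above */
    int P[K][N - K] = { {1,0,1}, {0,1,1}, {1,1,0} };
    int G[K][N], H[N - K][N], i, j;

    /* G = [Ik : P] */
    for (i = 0; i < K; i++)
        for (j = 0; j < N; j++)
            G[i][j] = (j < K) ? (i == j) : P[i][j - K];

    /* H = [P^T : I(n-k)] */
    for (i = 0; i < N - K; i++)
        for (j = 0; j < N; j++)
            H[i][j] = (j < K) ? P[j][i] : (i == j - K);

    /* encode every message: C = M * G (mod 2) */
    for (int m = 0; m < (1 << K); m++) {
        int msg[K], c[N];
        for (i = 0; i < K; i++) msg[i] = (m >> (K - 1 - i)) & 1;
        for (j = 0; j < N; j++) {
            c[j] = 0;
            for (i = 0; i < K; i++) c[j] ^= msg[i] & G[i][j];
        }
        printf("data ");
        for (i = 0; i < K; i++) printf("%d", msg[i]);
        printf("  codeword ");
        for (j = 0; j < N; j++) printf("%d", c[j]);
        printf("\n");
    }

    /* decode the received word from the example: R = 101110 */
    int R[N] = { 1,0,1,1,1,0 }, S[N - K];
    for (i = 0; i < N - K; i++) {
        S[i] = 0;
        for (j = 0; j < N; j++) S[i] ^= R[j] & H[i][j];
    }
    /* a single-bit error sits at the position whose column of H equals S */
    for (j = 0; j < N; j++) {
        int match = 1;
        for (i = 0; i < N - K; i++) if (H[i][j] != S[i]) match = 0;
        if (match) { R[j] ^= 1; break; }   /* correct that bit */
    }
    printf("corrected codeword ");
    for (j = 0; j < N; j++) printf("%d", R[j]);
    printf("\n");
    return 0;
}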

Conclusion :
What are the other types of linear block codes? Also comment on the error correcting capability of linear block codes.

EXPERIMENT NO. 4

Aim: Implementation of an algorithm for generation and decoding of cyclic codes.


Equipment: PC with ‘MATLAB’ programming tool.

Theory: Cyclic codes form a subclass of linear block codes. An advantage of cyclic codes over most other types of codes is that they are easy to encode. Furthermore, they possess a well-defined mathematical structure, which has led to the development of very efficient decoding schemes for them.

A code C is cyclic if
1. C is a linear code, and
2. any cyclic shift of a codeword is also a codeword, i.e. if the codeword a0 a1 … a(n-1) is in C, then a(n-1) a0 … a(n-2) is also in C.

Que. List out the properties of the cyclic codes and explain them in
order.

Encoding and decoding procedure of the cyclic codes:


a) Encoding:
Consider the (7,4) cyclic code, i.e. n = 7 and k = 4.
The generator polynomial is
G(P) = P^3 + P + 1
The message polynomial is
M(P) = P^3 + P^2 + 1

C(P) = Q(P) * G(P)

where Q(P) is the quotient polynomial


Q(P) = quotient of [ M(P) * P^(n-k) / G(P) ]

              P^3 + P^2 + P + 1
P^3 + P + 1 ) P^6 + P^5 + P^3
              P^6 + P^4 + P^3
              ---------------
              P^5 + P^4
              P^5 + P^3 + P^2
              ---------------
              P^4 + P^3 + P^2
              P^4 + P^2 + P
              ---------------
              P^3 + P
              P^3 + P + 1
              ---------------
              1

Q(P) = P^3 + P^2 + P + 1

C(P) = G(P) * Q(P)
     = (P^3 + P + 1) * (P^3 + P^2 + P + 1)
     = P^6 + P^5 + P^3 + 1
Hence C = 1101001. This is the codeword generated.

b) Decoding:
Let the received codeword be 1111001.
Hence the corresponding polynomial is
R(P) = P^6 + P^5 + P^4 + P^3 + 1


              P^3 + P^2 + 1
P^3 + P + 1 ) P^6 + P^5 + P^4 + P^3 + 1
              P^6 + P^4 + P^3
              ---------------
              P^5 + 1
              P^5 + P^3 + P^2
              ---------------
              P^3 + P^2 + 1
              P^3 + P + 1
              ---------------
              P^2 + P

Hence the syndrome is S(P) = P^2 + P
In the next step we have to consider the error pattern E(P).
Let E = 1000000; in polynomial form, E(P) = P^6.

              P^3 + P + 1
P^3 + P + 1 ) P^6
              P^6 + P^4 + P^3
              ---------------
              P^4 + P^3
              P^4 + P^2 + P
              ---------------
              P^3 + P^2 + P
              P^3 + P + 1
              ---------------
              P^2 + 1

Hence the syndrome is S'(P) = P^2 + 1

But in this case S(P) != S'(P), so the error is not present at position 6. The same situation occurs when we consider an error at position 5.


Now consider an error at position 4, i.e. E(P) = P^4.

              P
P^3 + P + 1 ) P^4
              P^4 + P^2 + P
              -------------
              P^2 + P

Hence the syndrome is S'(P) = P^2 + P
Here S(P) = S'(P), so the error is present at position 4.
The valid codeword is
C(P) = R(P) ⊕ E(P)
C(P) = P^6 + P^5 + P^3 + 1
C = 1101001
Algorithm:
1. Start.
2. Accept the values of n, k, G(P), M(P).
3. Evaluate [ M(P) * P^(n-k) ] / G(P).
4. Determine the codeword.
5. Transmit the data with an error.
6. Determine the received data.
7. Compute the syndrome S and obtain the corresponding error pattern from the syndrome table.
8. Evaluate the corrected code.
9. Stop.
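
A C sketch of the above procedure is given below. GF(2) polynomials are stored as bitmasks (bit i is the coefficient of P^i); the encoder computes the remainder of M(P)*P^(n-k) on division by G(P), and the decoder compares the received word's syndrome with the syndrome of each single-bit error pattern, as in the worked example.

#include <stdio.h>

/* degree of a GF(2) polynomial stored as a bitmask (-1 for the zero poly) */
static int deg(unsigned p) { int d = -1; while (p) { d++; p >>= 1; } return d; }

/* remainder (and quotient) of GF(2) polynomial division */
static unsigned polydiv(unsigned dividend, unsigned g, unsigned *quot)
{
    int dg = deg(g);
    *quot = 0;
    while (deg(dividend) >= dg) {
        int shift = deg(dividend) - dg;
        *quot |= 1u << shift;
        dividend ^= g << shift;     /* subtraction is XOR in GF(2) */
    }
    return dividend;                /* remainder */
}

int main(void)
{
    unsigned g = 0xB;        /* G(P) = P^3 + P + 1   -> 1011 */
    unsigned m = 0xD;        /* M(P) = P^3 + P^2 + 1 -> 1101 */
    int n = 7, k = 4, i;
    unsigned q, c, rx, s;

    /* encoding: C(P) = M(P)*P^(n-k) + remainder, giving 1101001 */
    c = (m << (n - k)) | polydiv(m << (n - k), g, &q);
    printf("codeword  C = ");
    for (i = n - 1; i >= 0; i--) printf("%u", (c >> i) & 1);
    printf("\n");

    /* decoding: syndrome of the received word R = 1111001 */
    rx = 0x79;               /* P^6 + P^5 + P^4 + P^3 + 1 */
    s = polydiv(rx, g, &q);
    /* compare against the syndrome of each single-bit error pattern */
    for (i = n - 1; i >= 0; i--) {
        unsigned se = polydiv(1u << i, g, &q);
        if (se == s) { rx ^= 1u << i; break; }   /* correct that bit */
    }
    printf("corrected C = ");
    for (i = n - 1; i >= 0; i--) printf("%u", (rx >> i) & 1);
    printf("\n");
    return 0;
}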

Conclusion:
Que. 1. When is a linear code called a cyclic code?
2. What are the properties of the syndrome table?


EXPERIMENT NO. 5

Aim: Implementation of algorithms for generating a convolutional code using
i] Code Tree
ii] Code Trellis

Equipment: PC with MATLAB programming tool


Theory:
Requirement of convolutional codes:
In block coding, a block of k information symbols is encoded into a block of n coded symbols. There is always a one-to-one correspondence between the uncoded block of symbols and the coded block of symbols. It is used for high data rate applications. A large block length is important because:
1] Many of the good codes that have large distance properties have large block lengths.
2] A larger block length implies that the encoding overhead is small.
But large block lengths cause delays. There is another coding scheme in which much smaller blocks of uncoded data, of length k0, are used. These are called information frames. Convolutional codes have memory, which retains the previous incoming information frames. Thus decoding and encoding in this code are based on past information, i.e. memory is required.
Convolutional codes are often used to improve the performance of digital radio, mobile phones and satellite links.

Convolutional Encoding:

To convolutionally encode data, start with k memory registers, each holding 1 input bit. Unless otherwise specified, all memory registers start with a value of 0. The encoder has n modulo-2 adders, and n generator polynomials, one for each adder.


An input bit m1 is fed into the leftmost register. Using the generator polynomials and the existing values in the remaining registers, the encoder outputs n bits. Now bit-shift all register values to the right (m1 moves to m0, m0 moves to m-1) and wait for the next input bit. If there are no remaining input bits, the encoder continues to output until all registers have returned to the zero state. As an example, consider a rate 1/3 (m/n) encoder with constraint length (k) of 3. The generator polynomials are G1 = (1,1,1), G2 = (0,1,1), and G3 = (1,0,1). Therefore, the output bits are calculated (modulo 2) as follows:

n1 = m1 + m0 + m-1
n2 = m0 + m-1
n3 = m1 + m-1
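
A minimal C sketch of this rate 1/3 encoder is given below; the input sequence is illustrative, and the encoder is flushed with two zeros so the registers return to the all-zero state.

#include <stdio.h>

int main(void)
{
    /* generator taps for the rate-1/3, constraint-length-3 encoder
       described above: G1 = (1,1,1), G2 = (0,1,1), G3 = (1,0,1) */
    int msg[] = { 1, 0, 1, 1 };                 /* example input bits */
    int nmsg  = sizeof msg / sizeof msg[0];
    int m0 = 0, mm1 = 0;                        /* registers m0, m-1  */
    int i;

    /* feed each input bit, then flush with zeros (2 extra steps) */
    for (i = 0; i < nmsg + 2; i++) {
        int m1 = (i < nmsg) ? msg[i] : 0;       /* current input bit  */
        int n1 = m1 ^ m0 ^ mm1;                 /* G1 = (1,1,1)       */
        int n2 =      m0 ^ mm1;                 /* G2 = (0,1,1)       */
        int n3 = m1 ^      mm1;                 /* G3 = (1,0,1)       */
        printf("%d%d%d ", n1, n2, n3);
        mm1 = m0;                               /* shift right        */
        m0  = m1;
    }
    printf("\n");
    return 0;
}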

Questions:
1] Define constraint length and code rate.
2] Design a convolutional encoder for constraint length N = 3, code rate 1/2, and number of modulo-2 adders v = 2.

Steps For Code Tree:

1] The tree becomes repetitive after the 3rd branch. Beyond the 3rd branch, the two nodes labeled 'a' are identical.

2] The encoder has a memory of M = K - 1 = 2 message bits. Hence when the third message bit enters the encoder, the 1st message bit is shifted out of the register.

3] In the code tree, starting from the initial node, if there is a '1' in the input sequence then go downward (shown by a dotted line) and note down the code written on that line.

4] If there is a '0' in the input sequence then go upward (shown by a solid line) and note down the code written on that line.

5] Thus trace the code tree up to level = number of bits in the input sequence to get the corresponding output sequence.

Steps For Code Trellis:

1] If there is a '0' in k0, then trace the upper (solid) line and note down the code written above the line.

2] If there is a '1' in k0, then trace the lower (dotted) line and note down the code written above the line.
Thus for k0 = 1 1 0 1 0 0 0
we get n0 = 11 01 01 00 10 11 00


Conclusion:
Que. What are the disadvantages of the code tree and the advantages of the code trellis?

EXPERIMENT NO. 6

Aim: Implementation of an algorithm for decoding convolutional codes using the Viterbi algorithm.


Equipment: PC with ‘MATLAB’ programming tool.

Theory:
Convolutional decoding can be performed using the Viterbi algorithm. A Viterbi decoder logically explores in parallel every possible user data sequence. It encodes and compares each one against the received sequence and picks the closest match: it is a maximum likelihood decoder. To reduce the complexity (the number of possible data sequences doubles with each additional data bit), the decoder recognizes at each point that certain sequences cannot belong to the maximum likelihood path, and it discards them. The encoder memory is limited to K bits; a Viterbi decoder in steady-state operation keeps only 2^(K-1) survivor paths, one per trellis state. Its complexity increases exponentially with the constraint length K.

Que. Explain maximum likelihood decoding of convolutional codes.
The Viterbi Algorithm:
a) Initialization:
i) Label the leftmost state on the trellis (i.e. the all-zero state at level 0) with the metric 0, since there is no discrepancy at this point in the computation.
b) Computation step j+1:
1) Let j = 0, 1, 2, …, and suppose that at the previous step j we have done two things:
* All survivor paths are identified.
* The survivor path and its metric for each state of the trellis are stored.
2) Then at level j+1, compute the metric for all the paths entering each state of the trellis by adding the metric of the incoming branches to the metric of the connecting survivor path from level j.
3) Hence for each state, identify the path with the lowest metric as the survivor of step j+1, thereby updating the computation.
c) Final step:
1) Continue the computation until the algorithm completes its forward search through the trellis and reaches the termination node, at which time it makes a decision on the maximum likelihood path.
2) As with a block decoder, the sequence of symbols associated with that path is released to the destination as the decoded version of the received sequence.
3) It is therefore more correct to refer to the Viterbi algorithm as a maximum likelihood sequence estimator.
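
A C sketch of the algorithm is given below for the rate 1/2, constraint length 3 encoder of Experiment 5 (generators 111 and 101), using a hard-decision Hamming metric. The received sequence here is the error-free encoding of 1 1 0 1 0 0 0 from the code trellis example; flipping one bit in rx[] demonstrates error correction.

#include <stdio.h>
#include <string.h>

#define NS 4            /* 2^(K-1) trellis states for K = 3 */
#define MAXB 64

/* encoder output (2 bits packed) for input bit b in state s = (m0, m-1) */
static int output(int s, int b)
{
    int m0 = (s >> 1) & 1, mm1 = s & 1;
    int n1 = b ^ m0 ^ mm1;       /* G1 = (1,1,1) */
    int n2 = b ^ mm1;            /* G2 = (1,0,1) */
    return (n1 << 1) | n2;
}

static int next_state(int s, int b) { return (b << 1) | ((s >> 1) & 1); }

int main(void)
{
    /* received symbol pairs 11 01 01 00 10 11 00, packed 2 bits each */
    int rx[] = { 3, 1, 1, 0, 2, 3, 0 };
    int nsym = sizeof rx / sizeof rx[0];

    int metric[NS], newmetric[NS];
    int path[NS][MAXB], newpath[NS][MAXB];
    int t, s, b, i;

    memset(path, 0, sizeof path);
    for (s = 0; s < NS; s++) metric[s] = (s == 0) ? 0 : 1000;  /* start at state 0 */

    for (t = 0; t < nsym; t++) {
        for (s = 0; s < NS; s++) newmetric[s] = 1000000;
        for (s = 0; s < NS; s++)
            for (b = 0; b < 2; b++) {
                int ns = next_state(s, b);
                int diff = output(s, b) ^ rx[t];
                int m = metric[s] + ((diff >> 1) & 1) + (diff & 1); /* Hamming */
                if (m < newmetric[ns]) {           /* keep the survivor */
                    newmetric[ns] = m;
                    memcpy(newpath[ns], path[s], t * sizeof(int));
                    newpath[ns][t] = b;
                }
            }
        memcpy(metric, newmetric, sizeof metric);
        memcpy(path, newpath, sizeof path);
    }

    /* the code is terminated with zeros, so decide from state 0 */
    printf("decoded bits: ");
    for (i = 0; i < nsym; i++) printf("%d", path[0][i]);
    printf("   (path metric %d)\n", metric[0]);
    return 0;
}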

Conclusion:
1) Why is Viterbi decoding efficient?
2) What is the difficulty in its application?


EXPERIMENT NO. 7
Aim: Study of radio link design
Theory:
A) LINK BUDGET ANALYSIS:
Link budget analysis is an important issue in the design of a satellite communication system. A 'link budget' or 'link power budget' is the totaling of all the gains and losses incurred in operating a communication link. In particular, the balance sheet provides a detailed accounting of three broadly defined items:
1. Apportionment of the resources available to the
transmitter and receiver
2. Sources responsible for the loss of signal power
3. Sources of noise
Que. Explain the waterfall curve relating the probability of error to the bit energy to noise ratio (Eb/No). What is the role of the link margin?

B) THE FREE SPACE PROPAGATION MODEL:


The free space propagation model assumes the ideal propagation
condition that there is only one clear line-of-sight path between the
transmitter and receiver.
Que. Write the Friis free space equation.
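
As a quick numerical reference, the Friis free space equation Pr = Pt·Gt·Gr·λ² / ((4πd)²·L) can be evaluated as in the C sketch below; all numeric values are illustrative only.

#include <stdio.h>
#include <math.h>

#define PI 3.14159265358979

int main(void)
{
    /* illustrative link parameters */
    double pt = 10.0;          /* transmit power, W          */
    double gt = 1.0, gr = 1.0; /* antenna gains (linear)     */
    double f  = 900e6;         /* carrier frequency, Hz      */
    double d  = 1000.0;        /* T-R separation, m          */
    double L  = 1.0;           /* system loss factor (>= 1)  */
    double lambda = 3e8 / f;   /* wavelength, m              */

    /* Friis free space equation */
    double pr = pt * gt * gr * lambda * lambda / (pow(4 * PI * d, 2) * L);

    printf("received power = %g W = %.1f dBm\n", pr, 10.0 * log10(pr / 1e-3));
    return 0;
}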

C) ANTENNA:
In a communication system, the propagation of the modulated signal is accomplished by means of transmitting and receiving antennas.
Functions of transmitting antenna:
1. To convert the electrical signal to electromagnetic
signal.
2. To radiate the electromagnetic energy in desired
directions.
Functions of receiving antenna:
1. To convert the electromagnetic signal to electrical
signal.

2. To receive the electromagnetic energy from desired directions.

Que. With respect to antenna, explain the following terms in detail:
1. Directive gain
2. Directivity
3. Power gain
4. Effective aperture

D) DOWNLINK BUDGET ANALYSIS:


In a digital satellite communication system, one of the key
elements in the overall design and analysis of the system is the
downlink power budget, which is usually more critical than the uplink
power budget because of practical constraints imposed on downlink
power and satellite antenna size.
Que. In detail, explain the downlink budget analysis of a digital satellite communication system.

Conclusion:
Que. How does link budget analysis help in the design of a radio link?
