
1

IMAGE COMPRESSION
TECHNIQUES



S.Esakkirajan
Assistant Professor
I&CE Department
PSG College of Technology
Coimbatore
2
Topics of Presentation
Need for image compression
Transform-based image compression
Vector Quantization
Image Compression Standards
3
What is Compression?
Compact representation: representing the data with the minimum number of bits.
4
Need for Compression
To minimize the storage space
To enable a higher rate of data transfer
Philosophy of Compression
5
Data vs. information:
DATA = USEFUL DATA + UNWANTED DATA
Unwanted data = redundant data + irrelevant data
Useful information = Data − [Redundant + Irrelevant data]
6
Philosophy of Compression (Cont.)
Example: the chemical formula H2O is a compact representation of water.
7
Classification of Redundancy
Redundancy in an image: spatial redundancy, psychovisual redundancy, and coding redundancy.
8
Classification of Compression Techniques
Lossless: the original data is recovered exactly from the compressed data.
Lossy: only an approximation of the original data is recovered.
9
Transform Coding
Image encoding: Input image → Transform → Quantization → Entropy coding → Compressed bitstream
Image decoding: Compressed bitstream → Entropy decoding → Inverse quantization → Inverse transform → Reconstructed image
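The chain above can be sketched in a few lines of NumPy. This is a minimal illustration, not the JPEG pipeline: it assumes an orthonormal 8 × 8 DCT built by hand, a uniform quantizer with step size 8, and it omits the entropy coder (the rounded coefficients are what such a coder would receive).

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (n x n)."""
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    m[0, :] = np.sqrt(1.0 / n)
    return m

def encode(block, step=8):
    t = dct_matrix(block.shape[0])
    coeffs = t @ block @ t.T          # forward transform
    return np.round(coeffs / step)    # uniform quantization -> symbols for an entropy coder

def decode(symbols, step=8):
    t = dct_matrix(symbols.shape[0])
    coeffs = symbols * step           # inverse quantization
    return t.T @ coeffs @ t           # inverse transform

block = np.arange(64, dtype=float).reshape(8, 8)
print(np.abs(decode(encode(block)) - block).max())  # small reconstruction error
```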
10
Transform
Compact energy into a few coefficients.
Decorrelate (reduce linear dependence) among coefficients.
Examples: KL transform, DCT, wavelet transform.

KL transform
11
Input image → Partition the image into blocks → Compute the mean → Compute the covariance matrix → Eigenvectors of the covariance matrix (these form the KLT basis).
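A minimal sketch of these KLT steps, assuming a grayscale image partitioned into non-overlapping 8 × 8 blocks that are flattened into 64-dimensional vectors; the eigenvectors of their covariance matrix form the transform basis.

```python
import numpy as np

def klt_basis(image, b=8):
    """Estimate a KLT basis from the b x b blocks of a grayscale image."""
    h, w = image.shape
    blocks = (image[r:r + b, c:c + b].ravel()
              for r in range(0, h - b + 1, b)
              for c in range(0, w - b + 1, b))
    x = np.stack(list(blocks))             # one flattened block per row
    x = x - x.mean(axis=0)                 # remove the mean
    cov = np.cov(x, rowvar=False)          # 64 x 64 covariance matrix
    vals, vecs = np.linalg.eigh(cov)       # eigen-decomposition
    order = np.argsort(vals)[::-1]         # strongest components first
    return vecs[:, order]                  # columns = KLT basis vectors

img = np.random.default_rng(0).random((64, 64))
basis = klt_basis(img)
coeffs = basis.T @ img[:8, :8].ravel()     # transform one block onto the basis
```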
Discrete Cosine Transform
Fourier basis: the complex exponential $e^{jt}$
$\cos(t) = 0.5\,(e^{jt} + e^{-jt})$
The DCT corresponds to the DFT of a symmetrically extended sequence.
12
DCT (Cont.)
13
[Figure: a sequence x[n] and its symmetric extension x_e[n]]
DCT (Cont.)
14
Forward 2-D DCT of an $N_1 \times N_2$ block:
$$C_x[k_1,k_2] = 4\sum_{n_1=0}^{N_1-1}\sum_{n_2=0}^{N_2-1} x[n_1,n_2]\,\cos\!\left(\frac{\pi(2n_1+1)k_1}{2N_1}\right)\cos\!\left(\frac{\pi(2n_2+1)k_2}{2N_2}\right),\qquad 0 \le k_1 \le N_1-1,\ 0 \le k_2 \le N_2-1,$$
and $C_x[k_1,k_2] = 0$ otherwise.

Inverse 2-D DCT:
$$x[n_1,n_2] = \frac{1}{N_1 N_2}\sum_{k_1=0}^{N_1-1}\sum_{k_2=0}^{N_2-1} w_1[k_1]\,w_2[k_2]\,C_x[k_1,k_2]\,\cos\!\left(\frac{\pi(2n_1+1)k_1}{2N_1}\right)\cos\!\left(\frac{\pi(2n_2+1)k_2}{2N_2}\right),\qquad 0 \le n_1 \le N_1-1,\ 0 \le n_2 \le N_2-1,$$
and $x[n_1,n_2] = 0$ otherwise, where $w_i[0] = 1/2$ and $w_i[k] = 1$ for $1 \le k \le N_i-1$.
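A direct (and deliberately slow) NumPy implementation of this analysis/synthesis pair, using the same 4·Σ forward scaling and the w(k) weights in the inverse; it is a sketch for checking the formulas, not a practical DCT.

```python
import numpy as np

def dct2(x):
    """C_x[k1, k2] = 4 * sum_n1 sum_n2 x[n1, n2] cos(...) cos(...)."""
    n1, n2 = x.shape
    a = np.cos(np.pi * (2 * np.arange(n1)[:, None] + 1) * np.arange(n1)[None, :] / (2 * n1))
    b = np.cos(np.pi * (2 * np.arange(n2)[:, None] + 1) * np.arange(n2)[None, :] / (2 * n2))
    return 4 * a.T @ x @ b

def idct2(c):
    """Inverse with w(0) = 1/2 and w(k) = 1 for k >= 1."""
    n1, n2 = c.shape
    w1 = np.ones(n1); w1[0] = 0.5
    w2 = np.ones(n2); w2[0] = 0.5
    a = np.cos(np.pi * (2 * np.arange(n1)[:, None] + 1) * np.arange(n1)[None, :] / (2 * n1))
    b = np.cos(np.pi * (2 * np.arange(n2)[:, None] + 1) * np.arange(n2)[None, :] / (2 * n2))
    return (a @ (w1[:, None] * c * w2[None, :]) @ b.T) / (n1 * n2)

x = np.random.default_rng(1).random((8, 8))
print(np.allclose(idct2(dct2(x)), x))   # True: the pair is exactly invertible
```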
Wavelet Transform
Oscillatory function of finite duration.
CWT and DWT
Multi-resolution analysis
Wide variety of basis functions
15
DWT - Implementation
16
Filter bank: the input image is filtered along the rows with a low-pass filter (LPF) and a high-pass filter (HPF), each output is downsampled by 2, and the same LPF/HPF-plus-downsampling stage is then applied along the columns, producing the four subbands LL, LH, HL and HH.
SUBBAND DECOMPOSITION
Worked example: an 8 × 8 binary input image (a 4 × 4 block of 1s surrounded by 0s) is decomposed step by step.
Step 1a: Low-pass filtering of the input image along the rows. [Input image and row-processed result omitted]
Step 1b: Column processing of the resultant image; the low-pass filtered image gives the LL subband. [Matrices omitted]
Step 2: To find the LH subband, low-pass filter along the rows and then high-pass filter along the columns. [Matrices omitted]
Step 3: To find the HL subband. Step 3a: high-pass filtering along the rows (input image → high-pass filtered image). Step 3b: low-pass filtering of the resultant image along the columns → HL subband. [Matrices omitted]
Step 4: To find the HH subband, high-pass filter along the rows and then high-pass filter along the columns. [Matrices omitted]
Step 5: Decomposition of the input image into the four subbands LL, LH, HL, HH: the first-level wavelet decomposition. For this input the energy concentrates in the LL subband, while the detail subbands are zero. [Matrices omitted]
24
SECOND LEVEL DECOMPOSITION
The LL subband of the first-level decomposition is decomposed again into LL, LH, HL and HH, giving the second-level wavelet decomposition of the image. [Matrices omitted]
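The walkthrough above can be approximated in a few lines of NumPy. This sketch assumes the unnormalized Haar pair (low-pass (a+b)/2, high-pass (a−b)/2) applied along rows and then columns with downsampling by two; the slides do not state the exact filter coefficients, so the subband values here need not match the matrices shown, but the structure (energy in LL, zero detail subbands for this input) is the same.

```python
import numpy as np

def haar_dwt2(img):
    """One level of row/column filtering + downsampling -> LL, LH, HL, HH."""
    # row processing: low-pass and high-pass along each row, keep every 2nd column
    lo = (img[:, 0::2] + img[:, 1::2]) / 2.0
    hi = (img[:, 0::2] - img[:, 1::2]) / 2.0
    # column processing on each result, keep every 2nd row
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return ll, lh, hl, hh

# 8x8 test image with a 4x4 block of ones (similar to the walkthrough's input)
img = np.zeros((8, 8))
img[2:6, 2:6] = 1
ll, lh, hl, hh = haar_dwt2(img)
print(ll)   # energy concentrates in the LL subband; the detail subbands are 0 here
```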
25
Properties of Wavelet Filters
Compact support
Symmetric
Vanishing moments
Regularity
Quantization
26
Approximation: mapping a large set of values to a small set of values.
It is a non-linear and irreversible process.
Example (range of marks → grade):
0–49 → F
50–54 → E
55–59 → D
60–69 → C
70–79 → B
80–89 → A
90–100 → S
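The marks-to-grade table is exactly a scalar quantizer: many input values map to one output symbol, and the original value cannot be recovered. A small sketch of the same idea for image samples, assuming a uniform mid-tread quantizer with step size 4.

```python
import numpy as np

def quantize(x, step):
    """Uniform mid-tread quantizer: many-to-one, irreversible."""
    return np.round(x / step).astype(int)   # index (what gets coded)

def dequantize(q, step):
    return q * step                         # reconstruction level

x = np.array([0.3, 1.2, 3.9, 7.5, 12.0])
q = quantize(x, step=4)
print(q, dequantize(q, step=4))             # [0 0 1 2 3] [ 0  0  4  8 12]
```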
27
Quantization Techniques - Classification
Scalar quantization: uniform (mid-tread, mid-rise) and non-uniform
Vector quantization: TSVQ, MSVQ, HVQ
Embedded quantization: EZW, SPIHT, SPECK
28
Vector Quantization
The encoder: each input vector is compared against the code vectors of the codebook by a search engine, and the index of the best-matching code vector is sent over the channel.
The decoder: the received index is used to look up the corresponding code vector in an identical codebook, which becomes the output vector.
29
Consider an input image of size 4 by 4:
2  4  6  8
10 11 16 15
9  3  1  7
12 14 13  5
Choose the dimension as two.
Here maximum value = 16, minimum value = 1,
dynamic range = maximum − minimum = 16 − 1 = 15.
30
In this example, rate R = 2 and dimension L = 2, so the number of code vectors = 2^(R·L) = 16 (C0 to C15).
The code vectors C0 to C15 are placed at the centroids of a 4 × 4 partition of the range [0, 16] × [0, 16]:
(2,2), (2,6), (2,10), (2,14), (6,2), (6,6), (6,10), (6,14), (10,2), (10,6), (10,10), (10,14), (14,2), (14,6), (14,10), (14,14).
Fixing the interval: Interval = DynamicRange / (2^{RL})^{1/L}, i.e. 2^R = 4 levels per dimension; in our case the interval is approximately 4.
C0 to C15 are obtained using the centroid method.
31
Mapping of Input Image Vectors to Code Vectors [Scatter plot of input vectors and code vectors omitted]
32
Adjust the input vectors to fall onto one of the code vectors:
(2,4) → (2,2), (6,8) → (6,6), (10,11) → (10,10), (16,15) → (14,14), (9,3) → (10,2), (1,7) → (2,6), (12,14) → (14,14), (13,5) → (14,6)
33
Input image vectors: (2,4), (6,8), (10,11), (9,3), (1,7), (16,15), (12,14), (13,5)
Transmitted indices: 12, 9, 6, 3, 11, 3, 8, 14
34
Original image:
2  4  6  8
10 11 16 15
9  3  1  7
12 14 13  5
Reconstructed image (each input vector replaced by its code vector):
2  2  6  6
10 10 14 14
10  2  2  6
14 14 14  6
Transmitted indices: 12 9 6 3 14 8 11 3
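A sketch of this example, assuming squared-error nearest-neighbour search over the 16 code vectors placed at the cell centroids (2,2), (2,6), ..., (14,14); the encoder sends only the index of the best match and the decoder looks it up in the same codebook.

```python
import numpy as np

# Codebook: 16 code vectors on a 4x4 grid of centroids (R = 2, L = 2)
centres = np.array([2, 6, 10, 14])
codebook = np.array([(a, b) for a in centres for b in centres])   # C0 .. C15

def vq_encode(vectors):
    """Return, for each input vector, the index of the nearest code vector."""
    d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

def vq_decode(indices):
    return codebook[indices]

image = np.array([[2, 4, 6, 8], [10, 11, 16, 15], [9, 3, 1, 7], [12, 14, 13, 5]])
vectors = image.reshape(-1, 2)          # dimension L = 2: pairs along each row
indices = vq_encode(vectors)            # transmitted over the channel
recon = vq_decode(indices).reshape(4, 4)
print(recon)
```

Ties (for instance (12,14), which is equally close to (10,14) and (14,14)) are broken toward the lower-index code vector here, so one entry may land in a different but equally close cell than in the table above.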
35
Detachment from Attachment
Attachment: the signal samples (coefficients) are correlated.
The transform decorrelates them: detachment.
Embedded Quantization
Progressive quantization: (a) EZW, (b) SPIHT, (c) SPECK, etc.
Starting point: wavelet coefficients.
Based on the parent-child relationship in the wavelet domain.
36
Terminologies in EZW
Root node/ Parent node
Child node
(a) Significant Positive (SP)
(b) Significant Negative (SN)
(c) Zero-tree Root (ZR)
(d) Isolated Zero (IZ)
Dominant Pass and Refinement pass
37
Terminologies in EZW (Cont.)
38
[Figure: parent-child relationship in the wavelet domain: a root coefficient R with children C1, C2, C3, each with its own descendants]
Computation of the threshold: with the maximum coefficient magnitude C_max (= R in the figure),
$T_0 = 2^{\lfloor \log_2(C_{max}) \rfloor}$
Magnitude of the coefficient > threshold and the coefficient is positive: Significant Positive (SP)
Magnitude of the coefficient > threshold and the coefficient is negative: Significant Negative (SN)
Magnitude of the coefficient < threshold and ALL descendants' magnitudes < threshold: Zerotree Root (ZR)
Magnitude of the coefficient < threshold and SOME descendants' magnitudes > threshold: Isolated Zero (IZ)
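A small sketch of the threshold computation and the dominant-pass labelling, assuming a coefficient and the list of its descendant magnitudes are already available; the tree bookkeeping and the subordinate/refinement passes of a full EZW coder are omitted.

```python
import numpy as np

def initial_threshold(coeffs):
    """T0 = 2 ** floor(log2(max |coefficient|))."""
    return 2 ** int(np.floor(np.log2(np.abs(coeffs).max())))

def classify(c, descendants, t):
    """Dominant-pass symbol for one coefficient given its descendants."""
    if abs(c) > t:
        return 'SP' if c > 0 else 'SN'
    if all(abs(d) < t for d in descendants):
        return 'ZR'           # zerotree root: the whole tree is insignificant
    return 'IZ'               # isolated zero: some descendant is significant

coeffs = np.array([34, 0, 0, 0, 1, -1, 1, -1, 4, -4, -4, 4, 10, -6, 6, -10])
t0 = initial_threshold(coeffs)            # 32 for this block
# treat the rest of the block as the descendants of 34 for this sketch
print(t0, classify(34, coeffs[1:], t0))   # 32 SP
```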
EZW - Illustration
39
Example: a 4 × 4 block of wavelet coefficients with values 34, 0, 0, 0, 1, -1, 1, -1, 4, -4, -4, 4, 10, -6, 6, -10; the largest magnitude is 34.
$T_0 = 2^{\lfloor \log_2(34) \rfloor} = 2^5 = 32$
Dominant pass: 34 > 32, so it is coded SP; the remaining trees are coded ZR, ZR, ZR. Subordinate list Ls = {34}.
Data transmitted: threshold, SP, ZR, ZR, ZR.
Reconstructed value at the decoder: (3/2)·T0 = 48.
Refinement pass: the reconstruction error for the entry in Ls is 34 - 48 = -14, so the correction sign is negative.
Correction term = T0/4 = 32/4 = 8.
Corrected value = 48 - 8 = 40.
40
EZW - Illustration (second pass)
The reconstructed block now holds 40 in place of the significant coefficient.
Dominant pass: threshold T1 = T0/2 = 16; the remaining coefficients are coded ZR, ZR, ZR. Ls = {34}.
Refinement pass: the reconstruction error for the entry in Ls is 34 - 40 = -6, so the correction sign is negative.
Correction term = T1/4 = 16/4 = 4.
Corrected value = 40 - 4 = 36.
41
Runlength Coding
Example: each row of a 12 × 12 binary image is replaced by (value, run-length) pairs:
0(12)
0(12)
0(12)
0(12)
0(12)
0(5)1(2)0(5)
0(5)1(2)0(5)
0(12)
0(12)
0(12)
0(12)
0(12)
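A minimal run-length coder for one row of a bi-level image, written in the same 0(count)1(count) style as the listing above.

```python
def rle_row(row):
    """Encode one row of 0/1 pixels as (value, run length) pairs."""
    runs, count = [], 1
    for prev, cur in zip(row, row[1:]):
        if cur == prev:
            count += 1
        else:
            runs.append((prev, count))
            count = 1
    runs.append((row[-1], count))
    return runs

row = [0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0]
print(rle_row(row))   # [(0, 5), (1, 2), (0, 5)]  i.e. 0(5)1(2)0(5)
```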

43
Runlength limitation
44
HUFFMAN CODE
Based on symbol probabilities and entropy
Basic philosophy of the Huffman code:
Variable-length code
Prefix code
45
Huffman code - Example
Symbol   Probability
Spade    1/2
Heart    1/4
Diamond  1/8
Club     1/8
46
Huffman code Example (Cont.)
Step 1: merge the two least probable symbols, Diamond (1/8) and Club (1/8), assigning them bits 0 and 1; the merged node has probability 1/4.
Step 2: merge this node (1/4) with Heart (1/4), assigning bit 0 to Heart and bit 1 to the merged node; the result has probability 1/2.
Finally, merge with Spade (1/2) to reach probability 1, with Spade on branch 0.
Reading the branch bits from the root to each symbol gives the codes:
Spade: 0, Heart: 10, Diamond: 110, Club: 111
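The same construction in code: a greedy Huffman builder that repeatedly merges the two least probable nodes. With the probabilities 1/2, 1/4, 1/8, 1/8 it yields the codes in the table (the heap tie-breaking here happens to match the bit assignment above).

```python
import heapq

def huffman(probs):
    """probs: {symbol: probability} -> {symbol: code string}."""
    heap = [(p, i, {s: ''}) for i, (s, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    while len(heap) > 1:
        p0, _, c0 = heapq.heappop(heap)      # two least probable nodes
        p1, i1, c1 = heapq.heappop(heap)
        merged = {s: '0' + c for s, c in c0.items()}
        merged.update({s: '1' + c for s, c in c1.items()})
        heapq.heappush(heap, (p0 + p1, i1, merged))
    return heap[0][2]

codes = huffman({'Spade': 1/2, 'Heart': 1/4, 'Diamond': 1/8, 'Club': 1/8})
print(codes)   # {'Spade': '0', 'Heart': '10', 'Diamond': '110', 'Club': '111'}
```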
47
LOSSLESS DPCM
48
Encoder: the predictor forms a prediction from the previously coded samples; the prediction is subtracted from the current sample, and the prediction error is entropy encoded and sent over the channel.
Decoder: the received error is entropy decoded and added to the same prediction to recover the sample exactly.
Worked example (signal to be transmitted: 92, 94, 91, 97; previous-sample predictor with initial prediction 0):
Sample 92: prediction 0, transmitted difference 92, received signal 92.
Sample 94: prediction 92, transmitted difference 2, received signal 94.
Sample 91: prediction 94, transmitted difference -3, received signal 91.
Sample 97: prediction 91, transmitted difference 6, received signal 97.
The received signal 92, 94, 91, 97 is identical to the original, so the scheme is lossless.
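A sketch of this lossless DPCM loop, assuming the simplest predictor (the previously coded sample, initialised to 0); the entropy coding of the differences is omitted.

```python
def dpcm_encode(samples):
    """Previous-sample predictor: transmit the prediction errors."""
    prediction, errors = 0, []
    for s in samples:
        errors.append(s - prediction)   # entropy-code and send this difference
        prediction = s                  # predictor = previously coded sample
    return errors

def dpcm_decode(errors):
    prediction, out = 0, []
    for e in errors:
        s = prediction + e
        out.append(s)
        prediction = s
    return out

signal = [92, 94, 91, 97]
errors = dpcm_encode(signal)            # [92, 2, -3, 6]
print(dpcm_decode(errors) == signal)    # True: lossless reconstruction
```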
Performance Indices
Compression Ratio (CR):
$CR = \dfrac{\text{size of the original file}}{\text{size of the compressed file}}$
Bitrate:
$bpp = \dfrac{\text{size of the compressed file}}{\text{number of pixels in the image}}$
PSNR (for a b-bit image):
$PSNR = 10 \log_{10}\!\left(\dfrac{(2^{b}-1)^{2}}{MSE}\right)$
82
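A small helper for the three indices, assuming file sizes are given in bits and an 8-bit image (b = 8, peak value 255); the example numbers are synthetic.

```python
import numpy as np

def compression_ratio(original_bits, compressed_bits):
    return original_bits / compressed_bits

def bitrate(compressed_bits, num_pixels):
    return compressed_bits / num_pixels             # bits per pixel (bpp)

def psnr(original, reconstructed, b=8):
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    return 10 * np.log10((2 ** b - 1) ** 2 / mse)   # in dB

x = np.random.default_rng(0).integers(0, 256, (64, 64))
y = np.clip(x + np.random.default_rng(1).integers(-2, 3, x.shape), 0, 255)
print(compression_ratio(64 * 64 * 8, 8192), bitrate(8192, 64 * 64), psnr(x, y))
```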
83
JPEG
JPEG CODEC
DCT-based encoder: Input image → 8 × 8 blocks → FDCT → Quantizer → zigzag scan → Entropy encoder → Channel
DCT-based decoder: Channel → Entropy decoder → reverse zigzag → Dequantizer → IDCT → Reconstructed image
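A sketch of the zigzag step in the codec above: after the FDCT and quantization of an 8 × 8 block, the coefficients are reordered so that low frequencies come first and the many trailing zeros can be run-length coded. The scan order here is generated by sorting the positions along anti-diagonals, which reproduces the conventional JPEG zigzag.

```python
import numpy as np

def zigzag_order(n=8):
    """Indices of an n x n block in zigzag scan order."""
    def key(rc):
        r, c = rc
        s = r + c                        # anti-diagonal index
        return (s, c if s % 2 == 0 else r)
    return sorted(((r, c) for r in range(n) for c in range(n)), key=key)

def zigzag(block):
    return np.array([block[r, c] for r, c in zigzag_order(block.shape[0])])

q = np.zeros((8, 8), dtype=int)
q[0, 0], q[0, 1], q[1, 0], q[2, 0] = 50, -3, 2, 1   # a few low-frequency coefficients
print(zigzag(q)[:8])    # low frequencies first, long run of zeros afterwards
```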
84
JPEG MODES
Sequential Mode
Progressive Mode
Hierarchical Mode
Lossless Mode
85
Drawback of JPEG
Artifacts:
Blocking artifact: due to block processing.
Ringing artifact: sharp oscillations or ghost shadows near edges.
86
[Figure: example of the blocking artifact]
87
JPEG 2000
Wavelet transform
Compress once, decode many times
Supports different kinds of scalability
Supports region-of-interest coding
Error resilience
88
JPEG 2000
Encoder: Input image → 2D FDWT → Scalar quantization → Block-based arithmetic coding → Rate-distortion allocation and optimisation → j2k bitstream
Decoder: j2k bitstream → Block-based arithmetic decoding → Inverse quantization → 2D IDWT → Reconstructed image
Subband structure after a three-level DWT: 0LL, 1HL, 1LH, 1HH, 2HL, 2LH, 2HH, 3HL, 3LH, 3HH
89
Progressive Transmission by Resolution
Compressed image bitstream
90
Progressive Transmission by Position
Compressed image bitstream
91
Progressive Transmission by Components
Compressed image bitstream
Image Segmentation
Subdivide the image into its component regions.
92
Segmentation Algorithms
Discontinuity-based: Roberts, Prewitt, and Sobel operators.
Similarity-based: region growing, region splitting, and split-and-merge.
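A sketch of discontinuity-based segmentation with the Sobel masks (the Roberts and Prewitt masks drop into the same convolution); scipy.signal.convolve2d is assumed to be available, and a simple threshold on the gradient magnitude gives a crude edge map.

```python
import numpy as np
from scipy.signal import convolve2d

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
SOBEL_Y = SOBEL_X.T

def sobel_edges(image, threshold=1.0):
    """Gradient magnitude via Sobel masks, thresholded to a binary edge map."""
    gx = convolve2d(image, SOBEL_X, mode='same', boundary='symm')
    gy = convolve2d(image, SOBEL_Y, mode='same', boundary='symm')
    magnitude = np.hypot(gx, gy)
    return magnitude > threshold

img = np.zeros((8, 8)); img[:, 4:] = 1.0    # vertical step edge
print(sobel_edges(img).astype(int))          # edge pixels flagged around columns 3-4
```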
Image Segmentation - Example
93
94
THANK YOU