
A

Project Report on

Image Compression Using Wavelet Transform

In partial fulfillment of the requirements of


Bachelor of Technology (Computer Science &
Engineering)

Submitted By
Saurabh Sharma (Roll No. 10506058)
Dewakar Prasad (Roll No. 10506060)
Manish Tripathy (Roll No. 10406023)
Session: 2008-09

Under the guidance of

Prof. Baliar Singh

Department of Computer Science & Engineering
National Institute of Technology
Rourkela-769008
Orissa

National Institute of Technology


Rourkela
CERTIFICATE

This is to certify that the work in this thesis report entitled "Image Compression Using
Wavelet Transform", submitted by Saurabh Sharma, Dewakar Prasad and Manish Tripathy in
partial fulfillment of the requirements for the degree of Bachelor of Technology in Computer
Science & Engineering (Session 2005-2009) in the Department of Computer Science &
Engineering, National Institute of Technology Rourkela, is an authentic work carried out by
them under my supervision and guidance.
To the best of my knowledge, the matter embodied in the thesis has not been submitted to any
other University/Institute for the award of any degree.

Date:                                                          Prof. Baliar Singh

Department of Computer Science & Engineering

National Institute of Technology, Rourkela

ACKNOWLEDGEMENT

I owe a debt of deepest gratitude to my thesis supervisor, Dr. Baliar Singh, Professor,
Department of Computer Science & Engineering, for his guidance, support, motivation and
encouragement throughout the period this work was carried out. His readiness for consultation
at all times, his educative comments, his concern and assistance even with practical things have
been invaluable.

I am grateful to Prof. B. Majhi, Head of the Department of Computer Science &
Engineering, for providing us the necessary opportunities for the completion of our project. I also
thank the other staff members of my department for their invaluable help and guidance.

Saurabh Sharma (10506058)
Dewakar Prasad (10506060)
Manish Tripathy (10406023)

B.Tech. Final Yr. CSE.

CONTENTS

Certificate
Acknowledgement
Abstract
Chapter 1  Literature Review
1.1 Introduction
1.2 Why Compression is Needed
1.3 Fundamentals of Image Compression Techniques
1.4 Objective
1.5 Organization of Report
Chapter 2  Image Compression Methodology
2.1 Overview
2.2 Different Types of Transforms Used for Coding
2.3 Quantization
2.4 Entropy Coding
Chapter 3  Wavelet Transform
3.1 Overview
3.2 What Are Basis Functions?
3.3 Fourier Analysis
3.4 Similarities between Fourier and Wavelet Transform
3.5 Dissimilarities between Fourier and Wavelet Transform
3.6 List of Wavelet Transforms
Chapter 4  Results and Discussion
4.2 Results
4.3 Conclusion

NOMENCLATURE
REFERENCES

ABSTRACT

Data compression, which can be lossy or lossless, is required to
decrease storage requirements and to improve data transfer rates. One of
the best image compression techniques uses the wavelet transform. It is
comparatively new and has many advantages over other methods.
The wavelet transform uses a large variety of wavelets for the
decomposition of images. State-of-the-art coding techniques like EZW
(embedded zerotree wavelet), SPIHT (set partitioning in hierarchical
trees) and EBCOT (embedded block coding with optimized truncation)
use the wavelet transform as a basic and common step on which their
own further technical advantages are built. The quality of the wavelet
transform's results therefore depends on the type of wavelet used. In our
project we have used different wavelets to perform the transform of a
test image, and the results have been discussed and analyzed. The
analysis has been carried out in terms of the PSNR (peak signal-to-noise
ratio) obtained and the time taken for decomposition and reconstruction.

CHAPTER 1: LITERATURE REVIEW

1.1 INTRODUCTION

Uncompressed multimedia (graphics, audio and video) data


requires considerable storage capacity and transmission bandwidth.
Despite rapid progress in mass-storage density, processor speeds, and
digital communication system performance, demand for data storage
capacity and data-transmission bandwidth continues to outstrip the
capabilities of available technologies. The recent growth of data-
intensive multimedia-based web applications has not only sustained the
need for more efficient ways to encode signals and images but has also
made compression of such signals central to storage and communication
technology.

To enable the high bandwidth required by modern wireless data services
such as mobile multimedia, email, mobile internet access, mobile
commerce, mobile data sensing in sensor networks, home and medical
monitoring services and mobile conferencing, there is a growing demand
for rich-content cellular data communication, including voice, text,
image and video.
One of the major challenges in enabling mobile multimedia data services
will be the need to process and wirelessly transmit very large volume of
this rich content data. This will impose severe demands on the battery
resources of multimedia mobile appliances as well as the bandwidth of
the wireless network. While significant improvements in achievable
bandwidth are expected with future wireless access technology,
improvements in battery technology will lag the rapidly growing energy
requirements of the future wireless data services. One approach to
mitigate this problem is to reduce the volume of multimedia data
transmitted over the wireless channel via data compression techniques
such as JPEG, JPEG2000 and MPEG. These approaches concentrate on
achieving a higher compression ratio without sacrificing the quality of
the image. However, these multimedia data compression techniques
ignore the energy consumption during compression and RF transmission.

Here one more factor, which is often not considered, is the processing
power required at both ends, i.e. at the server/mobile and mobile/server.
Thus, in this report we have considered all of these parameters, such as
the processing power required in the mobile handset, which is limited,
and also the processing-time considerations at the server/mobile ends,
which will handle all the load.
Since images will constitute a large part of future wireless data, we focus
in this report on developing energy-efficient, computing-efficient and
adaptive image compression and communication techniques. Based on a
popular image compression algorithm, namely wavelet image
compression, we present an implementation of an advanced image
compression algorithm using the wavelet transform.

1.2 Why Compression is needed?

In the last decade, there has been a lot of technological transformation in
the way we communicate. This transformation includes the ever-present,
ever-growing internet, the explosive development in mobile
communication and the ever-increasing importance of video
communication.
Data compression is one of the enabling technologies for each aspect of
this multimedia revolution. Cellular phones would not be able to provide
communication with increasing clarity without data compression. Data
compression is the art and science of representing information in
compact form.
In a distributed environment, large image files remain a major bottleneck
within systems.
Image Compression is an important component of the solutions available
for creating image file sizes of manageable and transmittable
dimensions. Platform portability and performance are important in the
selection of the compression/decompression technique to be employed.
Four Stage model of Data Compression
Almost all data compression systems can be viewed as comprising four
successive stages of data processing arranged as a processing pipeline
(though some stages will often be combined with a neighboring stage,
performed "off-line," or otherwise made rudimentary).
The four stages are
(A) Preliminary pre-processing steps.
(B) Organization by context.
(C) Probability estimation.
(D) Length-reducing code.

The ubiquitous compression pipeline (A-B-C-D) is what is of interest.


With (A) we mean the various pre-processing steps that may be
appropriate before the final compression engine.
Lossy compression often follows the same pattern as lossless, but with
one or more quantization steps somewhere in (A). Sometimes clever
designers may defer the loss until suggested by statistics detected in (C);
an example of this would be modern zero tree image coding.
(B) Organization by context often means data reordering, for
which a simple but good example is JPEG's "Zigzag" ordering.
The purpose of this step is to improve the estimates found by the
next step.
(C) A probability estimate (or its heuristic equivalent) is formed
for each token to be encoded. Often the estimation formula will
depend on context found by (B) with separate 'bins' of state
variables maintained for each conditioned class.

(D) Finally, based on its estimated probability, each token is
represented as bits in the compressed file. Ideally, a 12.5%-probable
token should be encoded with three bits, but the details become
complicated.
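The three-bit figure can be checked directly: an ideal length-reducing code spends about -log2(p) bits on a token of probability p. A minimal sketch (illustrative, not part of the original pipeline):

```python
import math

def ideal_code_length(p):
    """Ideal number of bits for a token of probability p (Shannon)."""
    return -math.log2(p)

bits = ideal_code_length(0.125)  # the 12.5%-probable token above -> 3.0 bits
```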

Principle behind Image Compression


Images have a considerably higher storage requirement than text, and
audio and video data place still more demanding requirements on data
storage. An image stored in an uncompressed file format, such as the
popular BMP format, can be huge. An image with a pixel resolution of
640 by 480 pixels and 24-bit colour resolution will take up 640 * 480 *
24/8 = 921,600 bytes in an uncompressed format.
The huge amount of storage space is not the only consideration: the data
transmission rates for communication of continuous media are also
significantly large. An image of 1024 pixels x 1024 pixels x 24 bits,
without compression, would require 3 MB of storage and 7 minutes for
transmission, utilizing a high-speed 64 kbit/s ISDN line.
Image data compression becomes still more important because of the
fact that the transfer of uncompressed graphical data requires far more
bandwidth and data transfer rate. For example, throughput in a
multimedia system can be as high as 140 Mbit/s, which must be
transferred between systems. This kind of data transfer rate is not
realizable with today's technology, or in the near future with reasonably
priced hardware.
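The storage and transmission figures quoted above can be reproduced with straightforward arithmetic (the "7 minutes" is the rounded figure; the exact value is about 6.5 minutes):

```python
# 640 x 480 pixels at 24 bits/pixel, uncompressed
size_640 = 640 * 480 * 24 // 8        # 921,600 bytes

# 1024 x 1024 pixels at 24 bits/pixel
bits_1024 = 1024 * 1024 * 24          # total bits
megabytes = bits_1024 / 8 / 2**20     # 3.0 MB of storage

# Transmission time over a 64 kbit/s ISDN line
minutes = bits_1024 / 64000 / 60      # ~6.5 minutes, quoted as 7
```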

1.3 Fundamentals of Image Compression Techniques

A digital image, or "bitmap", consists of a grid of dots, or "pixels", with


each pixel defined by a numeric value that gives its colour. The term
data compression refers to the process of reducing the amount of data
required to represent a given quantity of information. Now, a particular
piece of information may contain some portion which is not important
and can be comfortably removed. All such data is referred to as redundant
data. Data redundancy is a central issue in digital image compression.
Image compression research aims at reducing the number of bits needed
to represent an image by removing the spatial and spectral redundancies
as much as possible.
A common characteristic of most images is that the neighboring pixels
are correlated and therefore contain redundant information. The
foremost task then is to find less correlated representation of the image.
In general, three types of redundancy can be identified:
1. Coding Redundancy
2. Inter Pixel Redundancy
3. Psychovisual Redundancy

Coding Redundancy

If the gray levels of an image are coded in a way that uses more code
symbols than absolutely necessary to represent each gray level, the
resulting image is said to contain coding redundancy. It is almost always
present when an image's gray levels are represented with a straight or
natural binary code. Let us assume that a random variable r_k lying in the
interval [0, 1] represents the gray levels of an image and that each r_k
occurs with probability P_r(r_k):
P_r(r_k) = N_k / N,  where k = 0, 1, ..., L-1
L   = number of gray levels
N_k = number of times the k-th gray level appears in the image
N   = total number of pixels in the image
If the number of bits used to represent each value of r_k is l(r_k), the
average number of bits required to represent each pixel is
L_avg = sum over k of l(r_k) * P_r(r_k)
That is, the average length of the code words assigned to the various gray
levels is found by summing the products of the number of bits used to
represent each gray level and the probability that the gray level occurs.
The total number of bits required to code an M x N image is therefore
M * N * L_avg.
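The L_avg formula can be sketched on a toy image; the gray levels and code lengths below are made up purely for illustration:

```python
from collections import Counter

def average_code_length(pixels, code_lengths):
    """L_avg = sum over gray levels k of l(r_k) * Pr(r_k)."""
    n = len(pixels)
    counts = Counter(pixels)
    return sum(code_lengths[k] * counts[k] / n for k in counts)

# Toy 2x4 "image" with three gray levels and a variable-length code
pixels = [0, 0, 0, 0, 1, 1, 2, 2]      # Pr = 4/8, 2/8, 2/8
code_lengths = {0: 1, 1: 2, 2: 2}      # shorter code for the common level
L_avg = average_code_length(pixels, code_lengths)  # 1.5 bits/pixel
total_bits = 2 * 4 * L_avg             # M * N * L_avg = 12 bits
```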

Inter Pixel Redundancy

The information carried by any given pixel can be reasonably predicted
from the values of its neighbouring pixels, so the information carried by
an individual pixel is relatively small.
In order to reduce the inter-pixel redundancies in an image, the 2-D pixel
array normally used for viewing and interpretation must be transformed
into a more efficient but usually non-visual format. For example, the
differences between adjacent pixels can be used to represent an image.
These types of transformations are referred to as mappings. They are
called reversible if the original image elements can be reconstructed from
the transformed data set.
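The difference mapping mentioned above can be sketched together with its inverse, which shows why it is reversible (the pixel values are illustrative):

```python
def to_differences(row):
    """Map a pixel row to (first pixel, successive differences)."""
    return [row[0]] + [b - a for a, b in zip(row, row[1:])]

def from_differences(diffs):
    """Invert the mapping: a running sum reconstructs the row exactly."""
    row = [diffs[0]]
    for d in diffs[1:]:
        row.append(row[-1] + d)
    return row

row = [100, 102, 103, 103, 101]
diffs = to_differences(row)        # [100, 2, 1, 0, -2] -- small values
restored = from_differences(diffs) # identical to the original row
```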

Psychovisual Redundancy

Certain information simply has less relative importance than other
information in normal visual processing. This information is said to be
psychovisually redundant; it can be eliminated without significantly
impairing the quality of image perception.

In general, an observer searches for distinguishing features such as edges


or textual regions and mentally combines them in recognizable
groupings. The brain then correlates these groupings with prior
knowledge in order to complete the image interpretation process.

The elimination of psychovisually redundant data
results in a loss of quantitative information; it is commonly referred to as
quantization. As this is an irreversible process, i.e. visual information is
lost, it results in lossy data compression. An image reconstructed
following lossy compression contains degradation relative to the
original. Often this is because the compression scheme completely
discards redundant information.

Image Compression Model

As the figure shows, a compression system consists of two distinct
structural blocks: an encoder and a decoder. An input image f(x, y) is fed
into the encoder, which creates a set of symbols from the input data.

Image Compression Techniques

There are basically two methods of Image Compression:


1. Lossless Coding Techniques
2. Lossy Coding Techniques

Lossless Coding Techniques:


In lossless compression schemes, the reconstructed image, after
compression, is numerically identical to the original image. However,
lossless compression can achieve only a modest amount of compression.
Lossless coding guarantees that the decompressed image is absolutely
identical to the image before compression. This is an important
requirement for some application domains, e.g. medical imaging, where
not only is high quality in demand, but unaltered archiving is a legal
requirement. Lossless techniques can also be used for the compression
of other data types where loss of information is not acceptable, e.g. text
documents and program executables. Lossless compression algorithms
can be used to squeeze down images and then restore them again for
viewing completely unchanged.
Lossless coding techniques include:

1. Run Length Encoding


2. Huffman Encoding
3. Entropy Encoding
4. Area Encoding

Lossy Coding Techniques:

Lossy techniques cause image quality degradation in each
compression/decompression step. Careful consideration of
human visual perception ensures that the degradation is often
unrecognizable, though this depends on the selected compression ratio.
An image reconstructed following lossy compression contains
degradation relative to the original; in exchange, such schemes are
capable of achieving much higher compression. Under normal viewing
conditions, no visible loss is perceived (visually lossless).
Lossy Image Coding Techniques normally have three Components:

1. Image Modeling:
It is aimed at the exploitation of statistical characteristics of the
image (i.e. high correlation, redundancy). It defines such things as
the transformation to be applied to the Image.

2. Parameter Quantization:
The aim of Quantization is to reduce the amount of data used to
represent the information within the new domain.

3. Encoding:
Here a code is generated by associating appropriate code words with
the raw output produced by the quantizer. Encoding is usually error-free.
It optimizes the representation of the information and may introduce
some error-detection codes.

Measurement of Image Quality

The design of an imaging system should begin with an analysis of the


physical characteristics of the originals and the means through which the
images may be generated. For example, one might examine a
representative sample of the originals and determine the level of detail

that must be preserved, the depth of field that must be captured, whether
they can be placed on a glass platen or require a custom book-edge
scanner, whether they can tolerate exposure to high light intensity, and
whether specular reflections must be captured or minimized. A detailed
examination of some of the originals, perhaps with a magnifier or
microscope, may be necessary to determine the level of detail within the
original that might be meaningful for a researcher or scholar. For
example, in drawings or paintings it may be important to preserve
stippling or other techniques characteristic


1.4 OBJECTIVE

The objective of this project is to compress an
image using the Haar wavelet transform.

CHAPTER 2: Image Compression Methodology
2.1 Overview

The storage requirements for the video of a typical
angiogram procedure are of the order of several hundred Mbytes.
* Transmission of this data over a low-bandwidth network results in very
high latency.
* Lossless compression methods can achieve compression ratios of ~2:1.
* We consider lossy techniques operating at much higher compression
ratios (~10:1).
* Key issues:
- High-quality reconstruction is required.
- Angiogram data contains considerable high-frequency spatial texture.
* The proposed method applies a texture-modelling scheme to the high-
frequency texture of some regions of the image.
* This allows more bandwidth to be allocated to important areas of the image.

2.2 Different types of Transforms used for coding are:

1. FT (Fourier Transform)
2. DCT (Discrete Cosine Transform)
3. DWT (Discrete Wavelet Transform)

2.2.2 The Discrete Cosine Transform (DCT):

The discrete cosine transform (DCT) helps separate the image into parts
(or spectral sub-bands) of differing importance (with respect to the
image's visual quality). The DCT is similar to the discrete Fourier
transform: it transforms a signal or image from the spatial domain to the
frequency domain.

2.2.3 Discrete Wavelet Transform (DWT):

The discrete wavelet transform (DWT) refers to wavelet transforms for


which the wavelets are discretely sampled. It is a transform that localizes
a function both in space and in scale, and it has some desirable properties
compared to the Fourier transform. The transform is based on a wavelet
matrix, which can be computed more quickly than the analogous Fourier
matrix. Most notably, the discrete wavelet transform is used for signal
coding, where the properties of the transform are exploited to represent a
discrete signal in a sparser form, often as a preconditioning for
data compression. The discrete wavelet transform has a huge number of
applications in science, engineering, mathematics and computer
science.
Wavelet compression is a form of data compression well suited for
image compression (sometimes also video compression and audio
compression). The goal is to store image data in as little space as
possible in a file. A certain loss of quality is accepted (lossy
compression).
Using a wavelet transform, wavelet compression methods are better
at representing transients, such as percussion sounds in audio or high-
frequency components in two-dimensional images, for example an
image of stars in a night sky. This means that the transient elements of a
data signal can be represented by a smaller amount of information than
would be the case if some other transform, such as the more widespread
discrete cosine transform, had been used.
First a wavelet transform is applied. This produces as many coefficients
as there are pixels in the image (i.e. there is no compression yet since it
is only a transform). These coefficients can then be compressed more
easily because the information is statistically concentrated in just a few
coefficients. This principle is called transform coding. After that, the
coefficients are quantized and the quantized values are entropy encoded
and/or run length encoded.
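The transform-then-code pipeline just described can be illustrated with one level of the 1-D Haar transform, the wavelet this project's objective names. This is a sketch, not the report's actual implementation: the transform produces as many coefficients as samples, but the detail coefficients are small where neighbours are correlated and therefore cheap to code.

```python
def haar_step(signal):
    """One level of the 1-D Haar transform: pairwise averages and details."""
    averages = [(a + b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    details  = [(a - b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    return averages, details

def haar_inverse(averages, details):
    """Exact inverse: the transform alone loses nothing."""
    out = []
    for s, d in zip(averages, details):
        out += [s + d, s - d]
    return out

signal = [9, 7, 3, 5]
avg, det = haar_step(signal)       # avg = [8, 4], det = [1, -1]
restored = haar_inverse(avg, det)  # identical to the input signal
```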

Examples of wavelet compression:

JPEG 2000
Ogg Tarkin
SPIHT
MrSID

2.3 Quantization:

Quantization is involved in most lossy image processing. Quantization
techniques generally compress by mapping a range of values to a single
quantum value. By reducing the number of discrete symbols in a given
stream, the stream becomes more compressible. One example is reducing
the number of colors required to represent an image. Other widely used
examples are DCT data quantization in JPEG and DWT data
quantization in JPEG 2000.

Quantization in image compression

The human eye is fairly good at seeing small differences in brightness


over a relatively large area, but not so good at distinguishing the exact
strength of a high frequency brightness variation. This fact allows one to
get away with greatly reducing the amount of information in the high
frequency components. This is done by simply dividing each component
in the frequency domain by a constant for that component, and then
rounding to the nearest integer. This is the main lossy operation in the
whole process. As a result of this, it is typically the case that many of the
higher frequency components are rounded to zero, and many of the rest
become small positive or negative numbers.
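The divide-and-round operation described above can be sketched in a few lines; the coefficient values and step sizes here are illustrative, not taken from any standard table:

```python
def quantize(coeff, step):
    """Divide a frequency-domain coefficient by a constant and round --
    the main lossy operation in the process described above."""
    return round(coeff / step)

def dequantize(q, step):
    """Approximate reconstruction; the rounding error is not recoverable."""
    return q * step

# With a coarse step, small high-frequency coefficients collapse to zero
coeffs = [310.0, 4.2, -3.9, 1.1]
steps  = [16, 24, 40, 51]
quantized = [quantize(c, s) for c, s in zip(coeffs, steps)]  # [19, 0, 0, 0]
```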

2.4 Entropy Encoding

An entropy encoding is a coding scheme that assigns codes to symbols


so as to match code lengths with the probabilities of the symbols.
Typically, entropy encoders are used to compress data by replacing
symbols represented by equal-length codes with symbols represented by
codes proportional to the negative logarithm of the probability.
Therefore, the most common symbols use the shortest codes.

According to Shannon's source coding theorem, the optimal code length
for a symbol is -log_b(P), where b is the number of symbols used to make
the output codes and P is the probability of the input symbol.
Three of the most common entropy encoding techniques are Huffman
coding, range encoding, and arithmetic coding. If the approximate
entropy characteristics of a data stream are known in advance (especially
for signal compression), a simpler static code such as unary coding,
Elias gamma coding, Fibonacci coding, Golomb coding, or Rice coding
may be useful.
There are three main techniques for achieving entropy coding:
Huffman Coding - one of the simplest variable-length coding
schemes.
Run-length Coding (RLC) - very useful for binary data
containing long runs of ones or zeros.
Arithmetic Coding - a relatively new variable-length coding
scheme that can combine the best features of Huffman and run-
length coding, and also adapt to data with non-stationary statistics.

We shall concentrate on the Huffman and RLC methods for simplicity.


Interested readers may find out more about Arithmetic Coding in
chapters 12 and 13 of the JPEG Book.
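Run-length coding of a binary stream, as described above, can be sketched in a few lines (a toy implementation, not JPEG's actual RLC):

```python
def rle_encode(bits):
    """Run-length encode a binary sequence as (value, run length) pairs."""
    runs = []
    for b in bits:
        if runs and runs[-1][0] == b:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([b, 1])       # start a new run
    return [tuple(r) for r in runs]

def rle_decode(runs):
    """Expand the (value, run length) pairs back into the bit sequence."""
    return [b for b, n in runs for _ in range(n)]

bits = [0, 0, 0, 0, 1, 1, 0, 0, 0]
runs = rle_encode(bits)               # [(0, 4), (1, 2), (0, 3)]
restored = rle_decode(runs)           # identical to the input
```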
First we consider the change in compression performance if simple
Huffman Coding is used to code the subimages of the 4-level Haar
transform.
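A minimal Huffman-style construction gives a feel for this step. The sketch below builds only the code lengths from a toy symbol stream; it is an illustration, not the coder used for the subimage experiment:

```python
import heapq
from collections import Counter

def huffman_code_lengths(symbols):
    """Build Huffman code lengths for a symbol stream (simple sketch)."""
    heap = [(count, i, {sym: 0}) for i, (sym, count) in
            enumerate(Counter(symbols).items())]
    heapq.heapify(heap)
    next_id = len(heap)                      # tie-breaker for the heap
    while len(heap) > 1:
        c1, _, a = heapq.heappop(heap)       # two least-probable subtrees
        c2, _, b = heapq.heappop(heap)
        merged = {s: d + 1 for s, d in {**a, **b}.items()}  # one level deeper
        heapq.heappush(heap, (c1 + c2, next_id, merged))
        next_id += 1
    return heap[0][2]

# Skewed histogram: the frequent symbol gets the shortest code
stream = "aaaaaaabbbccd"
lengths = huffman_code_lengths(stream)   # {'a': 1, 'b': 2, 'c': 3, 'd': 3}
```

The resulting lengths satisfy the Kraft inequality with equality, as a complete prefix code should.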
(The worked example here showed a sample DCT coefficient matrix and a
common quantization matrix, then the quantized result obtained by
dividing each coefficient, e.g. the DC coefficient 415, by the
corresponding quantization-matrix entry and rounding to the nearest
integer; the matrices themselves are not reproduced here.)
CHAPTER 3: WAVELET TRANSFORM

3.1 OVERVIEW

The fundamental idea behind wavelets is to analyze according to


scale. Indeed, some researchers in the wavelet field feel that, by using
wavelets, one is adopting a whole new mindset or perspective in
processing data.

Wavelets are functions that satisfy certain mathematical requirements


and are used in representing data or other functions. This idea is not
new. Approximation using superposition of functions has existed
since the early 1800's, when Joseph Fourier discovered that he could
superpose sines and cosines to represent other functions. However, in
wavelet analysis, the scale that we use to look at data plays a special
role. Wavelet algorithms process data at different scales or
resolutions. If we look at a signal with a large "window," we would
notice gross features. Similarly, if we look at a signal with a small
"window," we would notice small features. The result in wavelet
analysis is to see both the forest and the trees, so to speak.

This makes wavelets interesting and useful. For many decades,


scientists have wanted more appropriate functions than the sines and
cosines which comprise the bases of Fourier analysis to approximate
choppy signals. By their definition, these functions are non-local (and
stretch out to infinity). They therefore do a very poor job of
approximating sharp spikes. But with wavelet analysis, we can use
approximating functions that are contained neatly in finite domains.
Wavelets are well suited for approximating data with sharp
discontinuities.

The wavelet analysis procedure is to adopt a wavelet prototype


function, called an analyzing wavelet or mother wavelet. Temporal
analysis is performed with a contracted, high-frequency version of the
prototype wavelet, while frequency analysis is performed with a
dilated, low-frequency version of the same wavelet. Because the

original signal or function can be represented in terms of a wavelet
expansion (using coefficients in a linear combination of the wavelet
functions), data operations can be performed using just the
corresponding wavelet coefficients. And if you further choose the best
wavelets adapted to your data, or truncate the coefficients below a threshold,
your data is sparsely represented. This sparse coding makes wavelets
an excellent tool in the field of data compression.

Other applied fields that are making use of wavelets include


astronomy, acoustics, nuclear engineering, sub-band coding, signal
and image processing, neurophysiology, music, magnetic resonance
imaging, speech discrimination, optics, fractals, turbulence,
earthquake-prediction, radar, human vision, and pure mathematics
applications such as solving partial differential equations.

3.2 What are Basis Functions?

It is simpler to explain a basis function if we move out of the realm of


analog (functions) and into the realm of digital (vectors) (*). Every
two-dimensional vector (x,y) is a combination of the vector (1,0) and
(0,1). These two vectors are the basis vectors for (x,y). Why? Notice
that x multiplied by (1,0) is the vector (x,0), and y multiplied by (0,1)
is the vector (0,y). The sum is (x,y).

The best basis vectors have the valuable extra property that the
vectors are perpendicular, or orthogonal, to each other. For the basis
(1,0) and (0,1), this criterion is satisfied.

Now let's go back to the analog world, and see how to relate these
concepts to basis functions. Instead of the vector (x,y), we have a
function f(x). Imagine that f(x) is a musical tone, say the note A in a
particular octave. We can construct A by adding sines and cosines
using combinations of amplitudes and frequencies. The sines and
cosines are the basis functions in this example, and the elements of
Fourier synthesis. For the sines and cosines chosen, we can set the
additional requirement that they be orthogonal. How? By choosing
the appropriate combination of sine and cosine function terms whose
inner product adds up to zero. The particular set of functions that are
orthogonal and that construct f(x) are our orthogonal basis functions
for this problem.
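The orthogonality requirement can be checked numerically with a discrete inner product; the sampling grid and the choice of N below are arbitrary, illustrative choices:

```python
import math

N = 1024                                  # samples over one full period
t = [2 * math.pi * k / N for k in range(N)]

def inner(f, g):
    """Discrete inner product of two sampled functions over [0, 2*pi)."""
    return sum(f(x) * g(x) for x in t)

sin_dot_cos = inner(math.sin, math.cos)   # ~0: sin and cos are orthogonal
sin_dot_sin = inner(math.sin, math.sin)   # ~N/2: a function vs. itself is not
```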

What are Scale-Varying Basis Functions?

A basis function varies in scale by chopping up the same function or


data space using different scale sizes. For example, imagine we have a
signal over the domain from 0 to 1. We can divide the signal with two
step functions that range from 0 to 1/2 and 1/2 to 1. Then we can
divide the original signal again using four step functions from 0 to
1/4, 1/4 to 1/2, 1/2 to 3/4, and 3/4 to 1. And so on. Each set of
representations codes the original signal with a particular resolution or
scale.
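The two-step and four-step divisions just described can be sketched directly: each representation stores one average per step and so codes the signal at a particular scale. The sample signal is made up for illustration:

```python
def step_approximation(signal, pieces):
    """Approximate a sampled signal on [0, 1] with `pieces` equal-width
    step functions, each carrying the average of its segment."""
    n = len(signal) // pieces
    return [sum(signal[i * n:(i + 1) * n]) / n for i in range(pieces)]

signal = [1, 1, 5, 7, 2, 2, 4, 4]
coarse = step_approximation(signal, 2)   # two steps:  [3.5, 3.0]
finer  = step_approximation(signal, 4)   # four steps: [1.0, 6.0, 2.0, 4.0]
```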

3.3 Fourier analysis
FOURIER TRANSFORM
The Fourier transform's utility lies in its ability to analyze a signal in
the time domain for its frequency content. The transform works by
first translating a function in the time domain into a function in the
frequency domain. The signal can then be analyzed for its frequency
content because the Fourier coefficients of the transformed function
represent the contribution of each sine and cosine function at each
frequency. An inverse Fourier transform does just what you'd expect:
it transforms data from the frequency domain into the time domain.
DISCRETE FOURIER TRANSFORM
The discrete Fourier transform (DFT) estimates the Fourier transform
of a function from a finite number of its sampled points. The sampled
points are supposed to be typical of what the signal looks like at all
other times.

The DFT has symmetry properties almost exactly the same as the
continuous Fourier transform. In addition, the formula for the inverse
discrete Fourier transform is easily calculated using the one for the
discrete Fourier transform because the two formulas are almost
identical.

WINDOWED FOURIER TRANSFORM

If f(t) is a nonperiodic signal, the summation of the periodic functions,


sine and cosine, does not accurately represent the signal. You could
artificially extend the signal to make it periodic but it would require
additional continuity at the endpoints. The windowed Fourier
transform (WFT) is one solution to the problem of better representing
the non periodic signal. The WFT can be used to give information
about signals simultaneously in the time domain and in the frequency
domain.
With the WFT, the input signal f(t) is chopped up into sections, and
each section is analyzed for its frequency content separately. If the
signal has sharp transitions, we window the input data so that the
sections converge to zero at the endpoints. This windowing is
accomplished via a weight function that places less emphasis near the
interval's endpoints than in the middle. The effect of the window is to
localize the signal in time.

FAST FOURIER TRANSFORM

To approximate a function by samples, and to approximate the
Fourier integral by the discrete Fourier transform, requires applying a
matrix whose order is the number of sample points n. Since
multiplying an n × n matrix by a vector costs on the order of n^2
arithmetic operations, the problem quickly gets worse as the number
of sample points increases. However, if the samples are uniformly
spaced, then the Fourier matrix can be factored into a product of just
a few sparse matrices, and the resulting factors can be applied to a
vector in a total of order n log n arithmetic operations. This is the
so-called fast Fourier transform, or FFT.
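The sparse-factorization idea can be made concrete with the classic radix-2 Cooley-Tukey recursion (a standard textbook construction, sketched here for illustration): an n-point transform is split into two n/2-point transforms, which are combined with order-n work, giving the n log n total.

```python
import numpy as np

def fft_recursive(x):
    """Radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    if N == 1:
        return x
    even = fft_recursive(x[0::2])   # transform of even-indexed samples
    odd = fft_recursive(x[1::2])    # transform of odd-indexed samples
    twiddle = np.exp(-2j * np.pi * np.arange(N // 2) / N)
    # Butterfly step: O(N) work to merge the two half-size transforms.
    return np.concatenate([even + twiddle * odd, even - twiddle * odd])

x = np.random.rand(64)
print(np.allclose(fft_recursive(x), np.fft.fft(x)))  # True
```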

3.4 SIMILARITIES BETWEEN FOURIER AND WAVELET TRANSFORMS

The fast Fourier transform (FFT) and the discrete wavelet transform
(DWT) are both linear operations that generate a data structure
containing log2(n) segments of various lengths, usually filling it and
transforming it into a different data vector of length 2^n.

The mathematical properties of the matrices involved in the


transforms are similar as well. The inverse transform matrix for both
the FFT and the DWT is the transpose of the original. As a result,
both transforms can be viewed as a rotation in function space to a
different domain. For the FFT, this new domain contains basis
functions that are sines and cosines. For the wavelet transform, this
new domain contains more complicated basis functions called
wavelets, mother wavelets, or analyzing wavelets.
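The transpose-equals-inverse property is easy to verify numerically. The sketch below (illustrative, not from the report) builds the 4-point orthonormal Haar transform matrix by hand and checks both orthogonality and perfect reconstruction:

```python
import numpy as np

# 4-point orthonormal Haar transform matrix: the first row averages,
# the second takes a coarse difference, the last two take fine differences.
s = 1 / np.sqrt(2)
H = np.array([[0.5, 0.5, 0.5, 0.5],
              [0.5, 0.5, -0.5, -0.5],
              [s, -s, 0.0, 0.0],
              [0.0, 0.0, s, -s]])

# The inverse transform matrix is just the transpose: H @ H.T == I.
print(np.allclose(H @ H.T, np.eye(4)))   # True

x = np.array([4.0, 2.0, 5.0, 5.0])
coeffs = H @ x                           # forward transform (a "rotation")
print(np.allclose(H.T @ coeffs, x))      # perfect reconstruction: True
```

Because the matrix is orthogonal, applying the transform is literally a rotation in function space, as stated above.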

Both transforms have another similarity. The basis functions are
localized in frequency, making mathematical tools such as power
spectra (how much power is contained in a frequency interval) and
scalegrams useful for picking out frequencies and calculating power
distributions.

3.5 DISSIMILARITIES BETWEEN FOURIER AND WAVELET TRANSFORMS
The most interesting dissimilarity between these two kinds of
transforms is that individual wavelet functions are localized in space.
Fourier sine and cosine functions are not. This localization feature,
along with wavelets' localization of frequency, makes many functions
and operators using wavelets "sparse" when transformed into the
wavelet domain. This sparseness, in turn, results in a number of useful
applications such as data compression, detecting features in images,
and removing noise from time series.

3.6 LIST OF WAVELET-RELATED TRANSFORMS

1. Continuous wavelet transform
A continuous wavelet transform (CWT) is used to divide a
continuous-time function into wavelets. Unlike the Fourier transform,
the continuous wavelet transform can construct a time-frequency
representation of a signal that offers very good time and frequency
localization.

2. Multiresolution analysis
A multiresolution analysis (MRA), or multiscale approximation
(MSA), is the design method behind most practically relevant
discrete wavelet transforms (DWT) and the justification for the
algorithm of the fast wavelet transform (FWT).

3. Discrete wavelet transform
In numerical analysis and functional analysis, a discrete wavelet
transform (DWT) is any wavelet transform for which the wavelets are
discretely sampled. As with other wavelet transforms, a key
advantage it has over Fourier transforms is temporal resolution: it
captures both frequency and location (time) information.

4. Fast wavelet transform
The fast wavelet transform (FWT) is a mathematical algorithm
designed to turn a waveform or signal in the time domain into a
sequence of coefficients based on an orthogonal basis of small finite
waves, or wavelets. The transform is easily extended to
multidimensional signals, such as images, where the time domain is
replaced by the space domain.
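Points 3 and 4 can be illustrated together with a multi-level Haar fast wavelet transform (a sketch using the unnormalized average/difference filter; `haar_fwt` is a hypothetical helper, not code from the report):

```python
def haar_fwt(x):
    """Multi-level fast wavelet transform with the Haar filter.
    Returns [coarsest average, detail levels from coarse to fine].
    len(x) must be a power of two."""
    x = list(x)
    details = []
    while len(x) > 1:
        averages = [(a + b) / 2 for a, b in zip(x[0::2], x[1::2])]
        diffs = [(a - b) / 2 for a, b in zip(x[0::2], x[1::2])]
        details.insert(0, diffs)   # finer detail levels go later in the list
        x = averages
    return [x] + details

print(haar_fwt([9.0, 7.0, 3.0, 5.0]))  # [[6.0], [2.0], [1.0, -1.0]]
```

Each pass halves the signal length, so the total work is n + n/2 + n/4 + ... = O(n), which is why the FWT is even cheaper than the FFT.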

3.7 HAAR WAVELET

In mathematics, the Haar wavelet is a certain sequence of functions.


It is now recognised as the first known wavelet.
This sequence was proposed in 1909 by Alfred Haar. Haar used these
functions to give an example of a countable orthonormal system for
the space of square integrable functions on the real line. The study of
wavelets, and even the term "wavelet", did not come until much later.
The Haar wavelet is also the simplest possible wavelet. The technical
disadvantage of the Haar wavelet is that it is not continuous, and
therefore not differentiable.

The Haar wavelet's mother wavelet function ψ(t) can be described as

    ψ(t) =  1    for 0 <= t < 1/2,
           -1    for 1/2 <= t < 1,
            0    otherwise,

and its scaling function φ(t) can be described as

    φ(t) =  1    for 0 <= t < 1,
            0    otherwise.
37
Wavelets are mathematical functions that were developed by
scientists working in several different fields for the purpose of sorting
data by frequency. The transformed data can then be studied at a
resolution that matches its scale. Studying data at different levels
allows for the development of a more complete picture: both small
features and large features are discernible because they are studied
separately. Unlike the discrete cosine transform, the wavelet
transform is not Fourier-based, and therefore wavelets do a better job
of handling discontinuities in data.
The Haar wavelet operates on data by calculating the sums and
differences of adjacent elements, first on adjacent horizontal elements
and then on adjacent vertical elements. For each pair of adjacent
values (a, b), the Haar transform computes the average (a + b)/2 and
the difference (a - b)/2: the averages form a half-size approximation
of the data, and the differences record the detail needed to reconstruct
it exactly.
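A single 2-D Haar step, rows first and then columns as just described, can be sketched as follows (illustrative code using the average/difference convention; a real codec would add normalization, further decomposition levels, and quantization):

```python
import numpy as np

def haar_step_1d(v):
    """One level of the Haar transform: averages then differences."""
    pairs = v.reshape(-1, 2)
    avg = (pairs[:, 0] + pairs[:, 1]) / 2
    diff = (pairs[:, 0] - pairs[:, 1]) / 2
    return np.concatenate([avg, diff])

def haar_step_2d(img):
    """Apply the 1-D step to every row, then to every column."""
    rows = np.apply_along_axis(haar_step_1d, 1, img)
    return np.apply_along_axis(haar_step_1d, 0, rows)

img = np.array([[4.0, 4.0, 2.0, 2.0],
                [4.0, 4.0, 2.0, 2.0],
                [6.0, 6.0, 8.0, 8.0],
                [6.0, 6.0, 8.0, 8.0]])
out = haar_step_2d(img)
print(out[:2, :2])   # coarse approximation block: [[4. 2.] [6. 8.]]
```

Compression then comes from discarding or coarsely quantizing the small difference coefficients in the other three blocks.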

CHAPTER 4
RESULTS AND DISCUSSION


4.1 RESULTS
The figure below shows the original image on the left and the
compressed image on the right. The compressed image was produced
using the Haar wavelet method, and the loss of quality is not visible,
even though compression with the Haar wavelet is one of the simplest
approaches.

[Figure: original image (left) and compressed image (right)]

4.2 CONCLUSION

The Haar wavelet transform is the simplest and crudest of the
wavelet-based image compression algorithms, yet it is effective:
significant compression is achieved while the quality of the
compressed image is maintained.
