
Image Processing: Transforms, Filters and Applications

Under Guidance Of:


Dr. K Raj
Associate Professor
Electronics Engineering Dept.
H.B.T.I-Kanpur

Submitted By:
Ankur Singh (179/12)
Deepak Dubey (191/12)

Lecture Outline
Historical development
Introduction to 2D signals
Transforms of 2D signals

2D Fourier transform
2D z-transform
Sampling theorem in 2D
Sampling and aliasing of an image
2D filters
General introduction to image processing
Future applications
References

History and Development
Initial ideas date back to the 1920s, for cable transmission of pictures.
The first computer processing was introduced around 1964 at JPL, used on video images from Ranger 7.
Early work was limited to space projects, due to the cost of computer systems and especially of display systems.

Advancements I

Early/mid 1980s: Graphics workstations
(SUN, Apollo, VaxStation).
Start of industrial inspection, scientific
imaging, computer vision.
Advancements II

Late 1980s/early 1990s: Supercomputer graphics
workstations (Sun 10/40, HP9000, DEC Alpha, Silicon Graphics).
Start of all-digital publishing; image
processing going outside the research lab.

Advancements III

2000s: The PC comes of age. Modern Pentium
machines have the power and memory of supercomputer
graphics workstations.
Many big image processing packages written for, or ported to,
the PC.
Vast growth in digital photography, all PC based.
Digital imaging on PC systems now routine in many scientific
applications.
2003 onwards: Digital imaging reaches the mass
market.
Digital cameras in mobile phones and PDAs.
Image processing packages come free with the PC.
Self-service digital image printing and enhancement.
All video and TV going digital.

What is a transform?
Transforms are decompositions of a function
f(x) onto a set of basis functions φ(x, u), where u is
typically the frequency index.

Illustration of Decomposition
f = a1·φ1 + a2·φ2 + a3·φ3
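
A minimal statement of this idea, assuming an orthonormal basis (T(u) denotes the transform coefficients; the notation is an assumption, not the lecture's own):

```latex
f(x) = \sum_{u} T(u)\,\varphi(x,u),
\qquad
T(u) = \sum_{x} f(x)\,\varphi^{*}(x,u)
```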

Introduction to 2D signals
A 1-D signal has one independent variable: f(t).
A 2-D signal has two independent variables: f(x, y).
Concepts of linearity, spectra, filtering, etc., carry over
from 1-D. But the concept of causality is not relevant, as an image
is a function of space, not time.
2-D systems are more complex: e.g., 1-D polynomials can be factored
into a product of 1st- and 2nd-order polynomials, and stability and
system response studied from the factors, but this is not possible
in general in 2-D.
Stability of a 1-D system can be determined from its system
poles, but for a 2-D system the poles are surfaces in 4-D space.
2-D algorithms can offer more flexibility in implementation,
e.g. one can process image data in a non-causal way, in
parallel. We now extend 1-D results to 2-D where possible.

A 2-D analog signal is a function of two
continuous variables. A 2-D pulse over a region A is defined by:

p_A(x, y) = 1 if (x, y) ∈ A
            0 if (x, y) ∉ A

The 2-D impulse can be represented as the product of two 1-D impulses:

δ(x, y) = δ(x)·δ(y)

The response of a system to a 2-D impulse is termed the point-spread
function.
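
A minimal sketch of computing a point-spread function numerically, assuming MATLAB with the Image Processing Toolbox and an example Gaussian blur as the linear system:

```matlab
% Response of an example linear, shift-invariant system to a discrete 2-D impulse.
imp = zeros(64);                      % 64x64 image of zeros
imp(32, 32) = 1;                      % discrete 2-D impulse at the centre
h   = fspecial('gaussian', 9, 1.5);   % assumed example system: Gaussian blur
psf = imfilter(imp, h);               % the response is the point-spread function
figure; imshow(psf, []);              % display, scaled to the data range
```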

2D Fourier Analysis
The idea is to represent a signal as a sum of
pure sinusoids of different amplitudes and
frequencies.
In 1D the sinusoids are defined by
frequency and amplitude.
In 2D these sinusoids have a direction as
well, e.g. f(x, y) = a·cos(ω1·x + ω2·y + φ)
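
A short sketch that renders such a 2-D sinusoid (direction 45 degrees, 0.05 cycles per pel; variable names and values are illustrative assumptions):

```matlab
% A 2-D sinusoid whose direction is set by (w1, w2).
[x, y] = meshgrid(0:127, 0:127);
a = 1; theta = 45; w0 = 2*pi*0.05;            % 0.05 cycles/pel converted to radians/pel
w1 = w0*cosd(theta); w2 = w0*sind(theta); phi = 0;
f = a*cos(w1*x + w2*y + phi);                 % f(x,y) = a cos(w1 x + w2 y + phi)
figure; imshow(f, []);                        % stripes perpendicular to direction theta
```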

Spectral Analysis / Fourier Transforms
Note the Euler formula: e^{it} = cos t + i·sin t

The Fourier transform converts between the
spatial and frequency domains.
It has real and imaginary components.
The forward and reverse transforms are very similar.
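
For reference, one common form of the continuous 2-D transform pair (the normalisation is an assumption; the slides may use a different convention):

```latex
F(\omega_1,\omega_2) = \int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty}
  f(x,y)\, e^{-i(\omega_1 x + \omega_2 y)}\, dx\, dy,
\qquad
f(x,y) = \frac{1}{(2\pi)^2}\int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty}
  F(\omega_1,\omega_2)\, e^{\,i(\omega_1 x + \omega_2 y)}\, d\omega_1\, d\omega_2
```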

How do ω1, ω2 relate to direction?

Along the direction θ, with s the distance along that direction:
f(s) = a·cos(ω0·s + φ), where φ is just some phase lag.
Given any point (x, y), s = x·cos θ + y·sin θ. Therefore:
f(x, y) = a·cos(ω0[x·cos θ + y·sin θ] + φ)
Compare this with
f(x, y) = a·cos(ω1·x + ω2·y + φ)    (1)
Therefore ω1 = ω0·cos θ and ω2 = ω0·sin θ.
Here a = 1.0, φ = 20°, θ = 45°, ω0 = 0.05 cycles per pel.

The 2D Fourier Transform

The 2D Fourier Transform is separable!

We can do a 1-D transform of the rows first, then a 1-D transform of the
result along the columns; or vice versa.

The Fourier transform of a separable signal is also separable. Let
f(x, y) = fx(x)·fy(y); then F(ω1, ω2) = Fx(ω1)·Fy(ω2). [Note
that this is different from the convolution identity, because here the
signal itself is SEPARABLE!]
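
A quick numerical check of the separability claim, assuming MATLAB and the sample image cameraman.tif:

```matlab
% The 2-D DFT equals 1-D DFTs applied along one dimension and then the other.
f  = double(imread('cameraman.tif'));
F1 = fft2(f);                      % direct 2-D transform
F2 = fft(fft(f, [], 1), [], 2);    % 1-D transforms along dim 1, then along dim 2
max(abs(F1(:) - F2(:)))            % ~0, up to floating-point error
```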

The 2D z-transform
Recall the 1-D z-transform of a signal x_n:
it is just a polynomial in z (a complex number), used to solve difference
equations and to study the stability of IIR filters.
The z-transform of a sequence g(h, k) is denoted G(z1, z2).
The z1 and z2 components act on the sequence g(h, k) along the vertical and
horizontal directions of the array.
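
In symbols (standard textbook convention, assumed here rather than taken from the slide):

```latex
X(z) = \sum_{n} x_n\, z^{-n},
\qquad
G(z_1, z_2) = \sum_{h}\sum_{k} g(h,k)\, z_1^{-h}\, z_2^{-k}
```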

Sampling Theory
How many samples are required to represent
a given signal without loss of information?
What signals can be reconstructed without
loss for a given sampling rate?
A signal can be reconstructed from its
samples if the original signal has no
frequencies above 1/2 the sampling frequency
(Shannon).
The minimum sampling rate for a bandlimited
function is called the Nyquist rate.

2D Sampling Theorem

For no overlap in the spectra to occur, the sampling frequency in each
direction must exceed twice the signal bandwidth in that direction.
Thus we get Nyquist's theorem for 2D (it is similar to 1D):
sample above twice the highest spatial frequency in each direction
for no aliasing to occur.
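
Written with explicit symbols, where ω_s1, ω_s2 are the sampling frequencies and ω_B1, ω_B2 the signal bandwidths along the two axes (the symbol names are assumptions):

```latex
\omega_{s1} > 2\,\omega_{B1}
\quad\text{and}\quad
\omega_{s2} > 2\,\omega_{B2}
```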

Sampling and Reconstruction

Aliasing (in general)

In general:
Artifacts due to under-sampling or poor reconstruction.
Specifically, in graphics:
Spatial aliasing
Temporal aliasing

Sampling & Aliasing

The real world is continuous.
The computer world is discrete.
Mapping a continuous function to a discrete
one is called sampling.
Mapping a continuous variable to a discrete
one is called quantization.
To represent or render an image using a
computer, we must both sample and quantize (see the sketch below).
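
A minimal sketch of both operations on a grayscale image (the sample image, sampling factor and quantisation step are assumptions for illustration):

```matlab
f  = imread('cameraman.tif');          % 8-bit image: already sampled and quantized
fs = f(1:4:end, 1:4:end);              % coarser spatial sampling: keep every 4th pixel
fq = uint8(floor(double(f)/64)*64);    % coarser quantization: only 4 gray levels remain
figure; imshow(fs); figure; imshow(fq);
```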

Antialiasing (removing aliasing)
Sample at a higher rate
Not always possible
Doesn't always solve the problem
Pre-filter to form a bandlimited signal
Form a bandlimited function (low-pass filter)
Trades aliasing for blurring
Convolve with a sinc function in the space domain
Optimal filter - better than area sampling
But the sinc function is infinite!
Computationally expensive
Cheaper solution: take multiple samples for each pixel and
average them together (supersampling)
Can weight them towards the centre (weighted average
sampling); a short sketch follows below.
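
A short sketch of three of the strategies above (kernel sizes and the downsampling factor are illustrative assumptions):

```matlab
f  = imread('cameraman.tif');
g1 = f(1:4:end, 1:4:end);                         % naive downsampling: prone to aliasing
h  = fspecial('gaussian', 9, 2);                  % approximate low-pass (an ideal sinc is impractical)
g2 = imfilter(f, h);  g2 = g2(1:4:end, 1:4:end);  % pre-filter, then sample: aliasing traded for blur
b  = fspecial('average', 4);                      % box average = unweighted supersampling
g3 = imfilter(f, b);  g3 = g3(1:4:end, 1:4:end);  % average a 4x4 block of samples per output pixel
```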

How is antialiasing done?

We need some mathematical tools to
analyse the situation and
find an optimum solution.
Tools we will use:
Fourier transform
Convolution theory
Sampling theory
We need to understand the behaviour of the signal
in the frequency domain.

2D Filters
2D filters are used to process two-dimensional signals
such as images.
In 2D, a multidimensional transfer function cannot in general be factored
into lower-order polynomials; hence the coefficients required by a particular
network structure cannot be determined by manipulating the transfer-function
coefficients, as is the case in 1D.
In 2D, an FIR filter is implemented using a non-recursive
algorithm, whereas an IIR filter is implemented using a
recursive (feedback) algorithm structure; a sketch of the FIR case follows below.
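
A minimal sketch of the FIR case, implemented non-recursively by direct 2-D convolution (the 3x3 averaging kernel is an assumed example):

```matlab
f = double(imread('cameraman.tif'));
h = ones(3)/9;                 % 3x3 averaging FIR kernel
g = conv2(f, h, 'same');       % non-recursive: each output depends only on inputs
figure; imshow(g, []);
```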

Images and Digital Images

A digital image differs from a photo in that the values are all
discrete. Usually they take on only integer values.
A digital image can be considered as a large array of discrete
dots, each of which has a brightness associated with it. These
dots are called picture elements, or more simply pixels.
The pixels surrounding a given pixel constitute its
neighborhood. A neighborhood can be characterized by its
shape in the same way as a matrix: we can speak of a 3x3
neighborhood, or of a 5x7 neighborhood (see the sketch below).
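
A small sketch of a neighborhood as a sub-matrix (the pixel coordinates are assumed to lie in the interior of the image):

```matlab
g  = imread('cameraman.tif');
r  = 100; c = 120;            % an example pixel at row r, column c
nb = g(r-1:r+1, c-1:c+1);     % its 3x3 neighborhood
```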

Recall: a pixel is a point


It is NOT a box, disc or teeny
wee light
It has no dimension
It occupies no area
It can have a coordinate
More than a point, it is a
SAMPLE

Aspects of Image Processing
Image Enhancement: Processing an image so that the result is
more suitable for a particular application (sharpening or de-blurring an
out-of-focus image, highlighting edges, improving image contrast,
brightening an image, removing noise); a sketch follows below.
Image Restoration: This may be considered as reversing the
damage done to an image by a known cause (removing blur
caused by linear motion, removal of optical distortions).
Image Segmentation: This involves subdividing an image into
constituent parts, or isolating certain aspects of an image (finding
lines, circles, or particular shapes in an image; in an aerial
photograph, identifying cars, trees, buildings, or roads).
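
A hedged illustration of a few enhancement operations, assuming the MATLAB Image Processing Toolbox; parameter values are illustrative only:

```matlab
g  = imread('pout.tif');                          % low-contrast sample image
ge = imadjust(g);                                 % contrast stretching
gs = imfilter(g, fspecial('unsharp'));            % sharpening by unsharp masking
gn = medfilt2(imnoise(g, 'salt & pepper', 0.05)); % noise added, then removed with a median filter
figure; imshow([g, ge, gs, gn]);                  % compare side by side
```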

Types of Digital Images

Binary: Each pixel is just black or white. Since there are only two possible
values for each pixel (0, 1), we need only one bit per pixel.
Grayscale: Each pixel is a shade of gray, normally from 0 (black) to 255
(white). This range means that each pixel can be represented by eight bits, or
exactly one byte. Other greyscale ranges are used, but generally they are a
power of 2.
True Color, or RGB: Each pixel has a particular color; that color is described
by the amount of red, green and blue in it. If each of these components has a
range 0-255, this gives a total of 256^3 different possible colors. Such an
image is a stack of three matrices, representing the red, green and blue values
for each pixel. This means that for every pixel there correspond 3 values.
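
A sketch of the three types in MATLAB (sample image and thresholding method are assumptions; on older toolboxes im2bw can replace imbinarize):

```matlab
rgb  = imread('peppers.png');     % true colour: MxNx3, one byte per component
gray = rgb2gray(rgb);             % grayscale: MxN, values 0..255
bw   = imbinarize(gray);          % binary: MxN logical, just 0 or 1
figure; imshow(rgb); figure; imshow(gray); figure; imshow(bw);
```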

Binary Image

Black = 0
White = 1

Grayscale Image

Color Image

General Commands
imread: reads an image.
figure: creates a figure on the screen.
imshow(g): displays the matrix g as an image.
pixval on: turns on display of pixel values in our figure.
impixel(i,j): returns the value of the pixel (i,j).
imfinfo: information about the image.
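
A short usage sketch of these commands (MATLAB with the Image Processing Toolbox assumed; cameraman.tif is a sample image shipped with the toolbox):

```matlab
g = imread('cameraman.tif');      % read the image into matrix g
figure; imshow(g);                % create a figure and display g as an image
p = impixel(g, 120, 100)          % value of the pixel at column 120, row 100
info = imfinfo('cameraman.tif')   % metadata: size, bit depth, format, ...
```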

Spatial Resolution
Spatial resolution is the density of pixels over
the image: the greater the spatial resolution, the
more pixels are used to display the image.
Halving the size of the image: this is done by
taking out every other row and every other column,
thus leaving only those matrix elements whose row
and column indices are even.
Doubling the size of the image: all the pixels are
repeated to produce an image of the same size as
the original, but with half the resolution in each
direction.
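
A sketch of the two operations (the indexing convention and nearest-neighbour enlargement are assumptions):

```matlab
x  = imread('cameraman.tif');
x2 = x(2:2:end, 2:2:end);          % halve: keep only even rows and columns
x3 = imresize(x2, 2, 'nearest');   % double: each pixel repeated, giving the original
                                   % size but half the resolution in each direction
```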

Histograms
Given a grayscale image, its histogram consists of the
histogram of its gray levels; that is, a graph indicating the
number of times each gray level occurs in the image.
We can infer a great deal about the appearance of an image
from its histogram.
In a dark image, the gray levels would be clustered at the
lower end.
In a uniformly bright image, the gray levels would be
clustered at the upper end.
In a well-contrasted image, the gray levels would be well
spread out over much of the range.
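
A short sketch of reading a histogram (pout.tif is a low-contrast sample image shipped with the Image Processing Toolbox):

```matlab
g = imread('pout.tif');
figure; imhist(g);             % gray levels bunched in a narrow band: poor contrast
figure; imhist(imadjust(g));   % after contrast stretching the levels spread out
```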

Frequencies; Low and High Pass Filters
Frequencies are the amount by which grey values change with
distance.
High frequency components are characterized by large changes
in grey values over small distances (edges and noise).
Low frequency components are characterized by little
change in the grey values (backgrounds, skin textures).
High pass filter: passes the high frequency
components, and reduces or eliminates the low frequency
components.
Low pass filter: passes the low frequency
components, and reduces or eliminates the high frequency
components.
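
A sketch of the two filter types (the kernel choices are illustrative assumptions):

```matlab
f  = im2double(imread('cameraman.tif'));
lp = imfilter(f, fspecial('average', 5));      % low pass: keeps slowly varying regions, smooths edges
hp = imfilter(f, fspecial('laplacian', 0.2));  % high pass: keeps edges (and noise)
figure; imshow(lp); figure; imshow(hp, []);
```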

Noise
Noise is any degradation in the image signal, caused by external
disturbance.
Salt and pepper noise: caused by sharp, sudden
disturbances in the image signal; it appears as randomly scattered white
or black (or both) pixels. It can be modeled by random values
added to an image.
Gaussian noise: an idealized form of white noise,
caused by random fluctuations in the signal.
Speckle noise: a major problem in some radar applications.
It can be modeled by random values multiplied by the pixel values.
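
A sketch of the three noise models via imnoise (parameter values are illustrative assumptions):

```matlab
g  = imread('cameraman.tif');
sp = imnoise(g, 'salt & pepper', 0.05);   % randomly scattered black/white pixels
gn = imnoise(g, 'gaussian', 0, 0.01);     % additive Gaussian noise (mean 0, variance 0.01)
sk = imnoise(g, 'speckle', 0.04);         % multiplicative (speckle) noise
figure; imshow([sp, gn, sk]);
```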

Salt & Pepper Noise

Gaussian Noise

Speckle Noise

Color Images
A color model is a method for specifying colors in some standard way. It
generally consists of a 3D coordinate system and a subspace of that system
in which each color is represented by a single point.
RGB: In this model, each color is represented as 3 values R, G and B,
indicating the amounts of red, green and blue which make up the color.
HSV:
Hue: The true color attribute (red, green, blue, orange, yellow, and so on).
Saturation: The amount by which the color has been diluted with white. The
more white in the color, the lower the saturation.
Value: The degree of brightness: a well-lit color has high intensity; a dark
color has low intensity.
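
A sketch of converting between the two models (sample image assumed):

```matlab
rgb = imread('peppers.png');
hsv = rgb2hsv(rgb);                 % channels: hue, saturation, value, each in [0,1]
h = hsv(:,:,1); s = hsv(:,:,2); v = hsv(:,:,3);
figure; imshow(s);                  % saturation displayed as a grayscale image
```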

Color Image

Color Conversion

Applications of Image Processing


Remote Sensing: satellite or aircraft images for earth resources, weather, sea surface,
etc.
Inspection and Automation: robotic control, manufacture control, quality
inspection, safety monitoring.
Medical Imaging: X-ray, computer tomography, MRI, PET, gamma camera, thermal IR,
sample inspection.
Astronomical Applications: main observation tool, photon camera, radio image
formation, aperture synthesis, radio interferometry.
Scientific: microscope sample analysis, confocal imaging, X-ray analysis, surface
inspection, STM, AFM, etc.
Data Compression: document storage, data reduction, JPEG/MPEG, digital image
transmission.
Communications: video telephone, multi-media computer links, document
transmission, secure data links.
Military Applications: target tracking, surveillance, smart weapons, automated
guidance, secure data links.

References

http://www.cs.rit.edu/~jmg/cgII
http://www.nbb.cornell.edu/neurobio/land/OldStudentProjects/cs49096to97/ans/index.html
http://www.jhu.edu/~Esignals/convolve/index.html
http://ptolemy.eecs.berkeley.edu/eecs20/week13/aliasing.html
J. S. Lim, Two-Dimensional Signal and Image Processing, Prentice Hall International, 1990.
