IMAGE ENHANCEMENT FOR IMPROVING FACE DETECTION UNDER NON UNIFORM LIGHTING CONDITIONS
Submitted in partial fulfillment for the award of the degree of BACHELOR OF TECHNOLOGY IN ELECTRONICS AND COMMUNICATION ENGINEERING By N.NIKITHA (08601A0480) and V.SUKUMAR (08601A04C0) Under the esteemed guidance of Mrs. P.SHRAVANI, Asst. Professor
ACKNOWLEDGMENT
We extend our sincere thanks to Prof. C. Ashok, Principal, Sri Sai Jyothi Engineering College, V.N.Pally, Rangareddy, for providing this opportunity to carry out the present academic work. We gratefully acknowledge Mr. S. Jagadeesh, Professor and Head of the Department of ECE, for his encouragement and advice during the course of this work. We sincerely thank our internal guide, Mr. Naresh Kumar, for his encouragement toward the successful completion of the project work and for providing the necessary facilities. We are grateful to VISION KREST EMBEDDED TECHNOLOGIES for providing us an opportunity to carry out this project within the allotted time. We would also like to thank all the faculty members of ECE, who rendered valuable advice in the completion of this report. BY N.NIKITHA (08601A0480) V.SUKUMAR (08601A04C0)
ABSTRACT
A robust and efficient image enhancement technique has been developed to improve the visual quality of digital images that exhibit dark shadows due to the limited dynamic ranges of imaging and display devices, which are incapable of handling high dynamic range scenes. The proposed technique processes images in two separate steps: dynamic range compression and local contrast enhancement. Dynamic range compression is a neighborhood-dependent intensity transformation which is able to enhance the luminance in dark shadows while keeping the overall tonality consistent with that of the input image. In this manner, image visibility can be substantially improved without creating unnatural rendition. A neighborhood-dependent local contrast enhancement method is then used to enhance the image's contrast following the dynamic range compression. Experimental results on the proposed image enhancement technique demonstrate a strong capability to improve the performance of a convolution face finder, compared to histogram equalization and multiscale Retinex with color restoration, without compromising the false alarm rate.
CONTENTS
Chapters

1. INTRODUCTION
   1.1 Introduction
   1.2 Image processing
2. IMAGE PROCESSING
   2.1 Introduction
   2.2 Digital data
   2.3 Data formats for digital satellite imagery
   2.4 Image resolution
   2.5 How to improve your image
   2.6 Digital image processing
   2.7 Pre-processing of the remotely sensed image
3. AERIAL IMAGERY
   3.1 Uses of imagery
   3.2 Types of aerial photography
   3.3 Non linear image enhancement techniques
4. WAVELETS
   4.1 Continuous wavelet transform
   4.2 Discrete wavelet transform
   4.3 Algorithm
   4.4 Wavelet dynamic range compression and contrast enhancement
5. RESULT ANALYSIS
   5.1 Results
   5.2 Conclusion
   5.3 Future work
REFERENCES
APPENDIX
LIST OF FIGURES
Chapters

3. Aerial Imagery
   3.1 Aerial imagery of the test building
4. Wavelets
   4.1 Wavelets of CWT
   4.2 Same transform rotated to 250 degrees, looking from 45 degrees above
   4.3 Cosine signals corresponding to various scales
   4.4 Block diagram
   4.5 DWT waveforms
LIST OF TABLES
Chapters

2. Image Processing
   2.1 Elements of image interpretation
CHAPTER 1 INTRODUCTION
1.1 INTRODUCTION
Aerial images captured from aircraft, spacecraft, or satellites usually suffer from lack of clarity, since the atmosphere enclosing the Earth affects the images through turbidity caused by haze, fog, clouds, or heavy rain. The visibility of such aerial images may decrease drastically, and sometimes the conditions under which the images are taken may lead to near-zero visibility even for the human eye. Even though human observers may not see much more than smoke, useful information may still exist in images taken under such poor conditions. Captured images are usually not the same as what we see in a real-world scene, and are generally a poor rendition of it. The high dynamic range of real-life scenes and the limited dynamic range of imaging devices result in images with locally poor contrast. The Human Visual System (HVS) deals with high dynamic range scenes by compressing the dynamic range and adapting locally to each part of the scene. There are some exceptions, such as turbid (e.g. fog, heavy rain, or snow) imaging conditions, under which acquired images and the direct observation possess a close parity. The extremely narrow dynamic range of such scenes leads to extremely low contrast in the acquired images. To deal with the problems caused by the limited dynamic range of imaging devices, many image processing algorithms have been developed. These algorithms also provide contrast enhancement to some extent. Recently we have developed a wavelet-based dynamic range compression (WDRC) algorithm to improve the visual quality of digital images of high dynamic range scenes with non-uniform lighting conditions. The WDRC algorithm is modified by introducing a histogram adjustment and a non-linear color restoration process, so that it provides color constancy and deals with pathological scenes having very strong spectral characteristics in a single band. This fast image enhancement algorithm, which provides dynamic range compression while preserving local contrast and tonal rendition, is a very good candidate for aerial imagery applications such as image interpretation for defense and security tasks. The algorithm can further be applied to video streaming for aviation safety. In this project, the application of the WDRC algorithm to aerial imagery is presented. The results obtained from a large variety of aerial images show strong robustness and high image quality, indicating promise for aerial imagery under poor-visibility flight conditions.
An image may be captured as either a digital or analog image. In order to produce a physical picture, the image is processed using the appropriate technology based on the input source type. In digital photography, the image is stored as a computer file, which is translated using photographic software to generate an actual image. The colors, shading, and nuances are all captured at the time the photograph is taken, and the software translates this information into an image. Common digital image processing operations include:

- Euclidean geometry transformations such as enlargement, reduction, and rotation.
- Color corrections such as brightness and contrast adjustments, color mapping, color balancing, quantization, or color translation to a different color space.
- Digital compositing or optical compositing (combination of two or more images), which is used in film-making to make a "matte".
- Interpolation, demosaicing, and recovery of a full image from a raw image format using a Bayer filter pattern.
- Image registration, the alignment of two or more images.
- Image differencing and morphing.
- Image recognition, for example, extracting text from an image using optical character recognition, or checkbox and bubble values using optical mark recognition.
- Image segmentation.
- High dynamic range imaging by combining multiple images.
- Geometric hashing for 2-D object recognition with affine invariance.
Digital image processing allows a much wider range of algorithms to be applied to the input data and can avoid problems such as the build-up of noise and signal distortion during processing. Since images are defined over two dimensions (perhaps more), digital image processing may be modeled in the form of multidimensional systems. Many of the techniques of digital image processing, or digital picture processing as it was often called, were developed in the 1960s at the Jet Propulsion Laboratory, the Massachusetts Institute of Technology, Bell Laboratories, the University of Maryland, and a few other research facilities, with application to satellite imagery, wire-photo standards conversion, medical imaging, videophone, character recognition, and photograph enhancement. The cost of processing was fairly high, however, with the computing equipment of that era. That changed in the 1970s, when digital image processing proliferated as cheaper computers and dedicated hardware became available. Images could then be processed in real time, for some dedicated problems such as television standards conversion. As general-purpose computers became faster, they started to take over the role of dedicated hardware for all but the most specialized and compute-intensive operations. With the fast computers and signal processors available in the 2000s, digital image processing has become the most common form of image processing, and is generally used because it is not only the most versatile method, but also the cheapest. Digital image processing technology for medical applications was inducted into the Space Foundation Space Technology Hall of Fame in 1994. Digital image processing allows the use of much more complex algorithms for image processing, and hence can offer both more sophisticated performance at simple tasks and the implementation of methods which would be impossible by analog means. In particular, digital image processing is the only practical technology for tasks such as classification, feature extraction, pattern recognition, and multi-scale signal analysis.
Some techniques used in digital image processing include:

- Pixelization
- Linear filtering
- Principal components analysis
- Independent component analysis
- Hidden Markov models
- Anisotropic diffusion
- Partial differential equations
- Self-organizing maps
- Neural networks
- Wavelets
Each pixel holds a number, represented as a DN (Digital Number), that depicts the average radiance of a relatively small area within a scene. The size of this area affects the reproduction of details within the scene: as the pixel size is reduced, more scene detail is preserved in the digital representation.
- Band Interleaved by Pixel (BIP)
- Band Interleaved by Line (BIL)
- Band Sequential (BSQ)
Digital image analysis is usually conducted using raster data structures, in which each image is treated as an array of values. Raster structures offer advantages for the manipulation of pixel values by an image processing system, as it is easy to find and locate pixels and their values. Their disadvantages become apparent when one needs to represent the array of pixels as discrete patches or regions, whereas vector data structures use polygonal patches and their boundaries as the fundamental units for analysis and manipulation. The vector format is, however, not appropriate for the digital analysis of remotely sensed data.
Resolution can be defined as "the ability of an imaging system to record fine details in a distinguishable manner". A working knowledge of resolution is essential for understanding both practical and conceptual details of remote sensing. Along with the actual positioning of spectral bands, the various types of resolution are of paramount importance in determining the suitability of remotely sensed data for a given application. The major characteristics of an imaging remote sensing instrument operating in the visible and infrared spectral regions are described in terms of the following:
- Spatial resolution
- Spectral resolution
- Radiometric resolution
- Temporal resolution
2.4.1 Spectral Resolution refers to the width of the spectral bands. Different materials on the earth's surface exhibit different spectral reflectances and emissivities, and these spectral characteristics define the spectral position and spectral sensitivity needed to distinguish materials. There is a tradeoff between spectral resolution and signal-to-noise ratio. The use of well-chosen and sufficiently numerous spectral bands is therefore a necessity if different targets are to be successfully identified on remotely sensed images.
An instrument with a high noise level would therefore necessarily have a lower radiometric resolution compared with a high-quality, high signal-to-noise-ratio instrument.
2.4.4 Temporal Resolution refers to the frequency with which images of a given geographic location can be acquired. Satellites offer not only the best chances of frequent data coverage but also of regular coverage. The temporal resolution is determined by the orbital characteristics and the swath width, the width of the imaged area. Swath width is given by 2h·tan(FOV/2), where h is the altitude of the sensor and FOV is the angular field of view of the sensor.
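The swath-width relation above can be checked numerically. A minimal sketch follows; the altitude and field-of-view values in the example are illustrative, not taken from the text.

```python
import math

def swath_width(altitude_m: float, fov_deg: float) -> float:
    """Ground swath width = 2 * h * tan(FOV / 2), with h the sensor
    altitude and FOV its angular field of view."""
    fov_rad = math.radians(fov_deg)
    return 2.0 * altitude_m * math.tan(fov_rad / 2.0)

# Example: a hypothetical sensor at 705 km altitude with a 15-degree FOV.
print(round(swath_width(705e3, 15.0) / 1e3, 1), "km")
```

With a 90-degree field of view, the geometry gives a swath exactly twice the altitude, which makes the formula easy to sanity-check.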
Digital Image Processing is a collection of techniques for the manipulation of digital images by computers. The raw data received from the imaging sensors on satellite platforms contains flaws and deficiencies. To overcome these flaws and deficiencies and recover the originality of the data, it needs to undergo several steps of processing. These steps vary from image to image, depending on the image format, the initial condition of the image, the information of interest, and the composition of the image scene.

TABLE 2.1: ELEMENTS OF IMAGE INTERPRETATION

Primary Elements:                        Black and White Tone; Color; Stereoscopic Parallax
Spatial Arrangement of Tone & Color:     Size; Shape; Texture; Pattern
Based on Analysis of Primary Elements:   Height; Shadow
Contextual Elements:                     Site; Association
2.6.1 Pre-processing consists of those operations that prepare data for subsequent analysis, attempting to correct or compensate for systematic errors. The digital imageries are subjected to several corrections, such as geometric, radiometric, and atmospheric, though all of these corrections might not necessarily be applied in every case. These errors are systematic and can be removed before they reach the user. The investigator should decide which pre-processing techniques are relevant on the basis of the nature of the information to be extracted from the remotely sensed data. After pre-processing is complete, the analyst may use feature extraction to reduce the dimensionality of the data. Feature extraction is thus the process of isolating the most useful components of the data for further study while discarding the less useful aspects (errors, noise, etc.). It reduces the number of variables that must be examined, thereby saving time and resources.
The choice of enhancement technique depends upon:

- The digital data (i.e. its spectral bands and resolution)
- The objectives of interpretation

As an image enhancement technique often drastically alters the original numeric data, it is normally used only for visual (manual) interpretation and not for further numeric analysis. Common enhancements include image reduction, image rectification, image magnification, transect extraction, contrast adjustments, band ratioing, spatial filtering, Fourier transformations, principal component analysis, and texture transformation.
2.6.3 Information Extraction is the last step toward the final output of the
image analysis. After pre-processing and image enhancement, the remotely sensed data is subjected to quantitative analysis to assign individual pixels to specific classes. Classification of the image is based on areas of known identity, which are used to classify the remainder of the image, consisting of those pixels of unknown identity. After classification is complete, it is necessary to evaluate its accuracy by comparing the categories on the classified images with the areas of known identity on the ground. The final result of the analysis consists of maps (or images), data, and a report. Together, these three components provide the user with full information concerning the source data, the method of analysis, and the outcome and its reliability.
combinations of band values that carry information concerning systematic variation within the scene. Thus, for multispectral data, feature extraction helps in portraying the essential elements of the image. It also reduces the number of spectral bands that have to be analyzed. After feature extraction is complete, the analyst can work with the desired channels or bands alone, with the retained components carrying most of the information. Finally, such pre-processing increases the speed and reduces the cost of analysis.
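Principal component analysis, named among the common enhancement techniques above, is the classic way of forming such band combinations. The sketch below projects multispectral pixels onto their top principal components; the band count and data are made up for illustration.

```python
import numpy as np

def pca_bands(pixels: np.ndarray, n_components: int) -> np.ndarray:
    """Project multispectral pixels (n_pixels x n_bands) onto the top
    principal components, reducing the number of bands to analyze."""
    centered = pixels - pixels.mean(axis=0)
    # Eigen-decomposition of the band-to-band covariance matrix.
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]            # largest variance first
    components = eigvecs[:, order[:n_components]]
    return centered @ components

rng = np.random.default_rng(0)
data = rng.normal(size=(100, 6))                 # 100 pixels, 6 hypothetical bands
reduced = pca_bands(data, 3)
print(reduced.shape)                             # three derived channels remain
```

The first derived channel carries the largest share of the scene variance, which is what makes the reduced set of channels sufficient for most subsequent analysis.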
day, depending upon the location. Imagery can be requested and ordered by selecting an area on an interactive map, or larger areas, such as cities or counties, can be purchased in bundles. Many of the current digital datasets are available for download within a few minutes of purchase. Aerial image measurement includes non-linear, 3-dimensional, and materials effects on imaging, but excludes the processing effects of printing and etching on the wafer. Aerial image emulation has been applied successfully to CDU measurement; traditionally, aerial image metrology systems are used to evaluate defect printability and repair success.
Figure 3.1: Aerial image of the test building

Digital aerial imagery should remain in the public domain and be archived to secure its availability for future scientific, legal, and historical purposes. Aerial photography is the taking of photographs of the ground from an elevated position. The term usually refers to images in which the camera is not supported by a ground-based structure. Cameras may be hand held or mounted, and photographs may be taken by a photographer, triggered remotely, or triggered automatically. Platforms
for aerial photography include fixed-wing aircraft, helicopters, balloons, blimps and dirigibles, rockets, kites, poles, parachutes, and vehicle-mounted poles. Aerial photography should not be confused with air-to-air photography, in which aircraft serve both as the photo platform and the subject.
Recent (2006) FAA regulations grounding all commercial RC model flights have been upgraded to require formal FAA certification before permission is granted to fly at any altitude in the USA. Because anything capable of being viewed from a public space is considered outside the realm of privacy in the United States, aerial photography may legally document features and occurrences on private property.
3.2.3 Combinations
Aerial photographs are often combined. Depending on their purpose it can be done in several ways. A few are listed below.
- Several photographs can be taken with one handheld camera and later stitched together into a panorama.
- In pictometry, five rigidly mounted cameras provide one vertical and four low-oblique pictures that can be used together.
- In some digital cameras for aerial photogrammetry, photographs from several imaging elements, sometimes with separate lenses, are geometrically corrected and combined into one photograph in the camera.
3.2.4 Orthophotos
Vertical photographs are often used to create orthophotos, photographs which have been geometrically "corrected" so as to be usable as a map. In other words, an orthophoto is a simulation of a photograph taken from an infinite distance, looking straight down from nadir. Perspective must obviously be removed, but variations in terrain should also be corrected for. Multiple geometric transformations are applied to the image, depending on the perspective and terrain corrections required on a particular part of the image. Orthophotos are commonly used in geographic information systems, such as those used by mapping agencies (e.g. Ordnance Survey) to create maps. Once the images have been aligned, or 'registered', with known real-world coordinates, they can be widely deployed. Large sets of orthophotos, typically derived from multiple sources and divided into "tiles" (each typically 256 × 256 pixels in size), are widely used in online map systems such as Google Maps. OpenStreetMap offers the use of similar orthophotos for deriving new map data. Google Earth overlays orthophotos or satellite imagery onto a digital elevation model to simulate 3D landscapes.
other points of interest. Using GPS, video may be embedded with metadata and later synced with a video-mapping program. This spatial multimedia is the timely union of digital media, including still photography, motion video, stereo, panoramic imagery sets, immersive media constructs, audio, and other data, with location and date-time information from GPS and other location systems. Aerial videos are an emerging form of spatial multimedia which can be used for scene understanding and object tracking. The input video is captured by low-flying aerial platforms and typically contains strong parallax from non-ground-plane structures. The integration of digital video, global positioning systems (GPS), and automated image processing will improve the accuracy and cost-effectiveness of data collection and reduction. Several different aerial platforms are under investigation for the data collection.
3.3.1.1 Basic Strategy

The basic strategy of the proposed approach shares the same principle as existing methods: assuming that the input image is denoted by I, the enhanced image O is obtained by the following processing
O = I + NL(HP( I ))
where HP() stands for high-pass filtering and NL() is a nonlinear operator. As will become clear in subsequent sections, the non-linear processing includes a scale step and a clipping step. The HP() step is based on a set of Gabor filters. The performance of a perceptual image enhancement algorithm is typically judged through a subjective test. In most current work in the literature, this subjective test is simplified to showing an enhanced image alongside the original to a viewer. While a viewer may report that a blurry image is indeed enhanced, this approach does not allow systematic comparison between two competing methods. Furthermore, since the ideal goal of enhancement is to make up the high-frequency components that are lost in the imaging or other processes, it would be desirable to show whether an enhancement algorithm indeed generates the desired high-frequency components. Such simplified tests do not answer this question. (Note that, although showing the Fourier transform of the enhanced image may illustrate whether high-frequency components are added, this is not an accurate evaluation of a method, because the Fourier transform provides only a global measure of the signal spectrum. For example, disturbing ringing artifacts may appear as false high-frequency components in the Fourier transform.)
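The O = I + NL(HP(I)) pipeline can be sketched in a few lines. The text specifies a Gabor filter bank for HP(); as a simplified stand-in, the sketch below uses a 3x3 local-mean high-pass, with scale-and-clip as the nonlinearity. The filter choice, gain, and clip limit are all assumptions for illustration.

```python
import numpy as np

def enhance(img: np.ndarray, gain: float = 0.5, clip: float = 0.1) -> np.ndarray:
    """O = I + NL(HP(I)): high-pass the image, then apply a scale step
    followed by a clipping step, and add the result back. Expects img
    with values in [0, 1]."""
    padded = np.pad(img, 1, mode="edge")
    # 3x3 box blur as a crude low-pass; high-pass = image minus blur.
    blur = sum(padded[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0
    hp = img - blur
    nl = np.clip(gain * hp, -clip, clip)     # scale step, then clipping step
    return np.clip(img + nl, 0.0, 1.0)
```

On a perfectly flat image the high-pass response is zero and the output equals the input, which is a quick way to verify the operator only amplifies local detail.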
CHAPTER 4 WAVELETS
A wavelet is a waveform of effectively limited duration that has an average value of zero. The wavelet transform provides a time-frequency representation of a signal. (There are other transforms which give this information too, such as the short-time Fourier transform, Wigner distributions, etc.) Often, a particular spectral component occurring at a particular instant is of special interest, and in these cases it may be very beneficial to know the time intervals in which these particular spectral components occur. For example, in EEGs, the latency of an event-related potential is of particular interest (an event-related potential is the response of the brain to a specific stimulus such as a flash of light; the latency of this response is the amount of time elapsed between the onset of the stimulus and the response). The wavelet transform is capable of providing the time and frequency information simultaneously, hence giving a time-frequency representation of the signal. The wavelet transform (WT) was developed as an alternative to the short-time Fourier transform (STFT), to overcome some resolution-related problems of the STFT. To make a long story short, we pass the time-domain signal through various high-pass and low-pass filters, which filter out either the high-frequency or the low-frequency portions of the signal. This procedure is repeated, each time removing some portion of the signal corresponding to some frequencies.
Here is how this works: suppose we have a signal with frequencies up to 1000 Hz. In the first stage, we split the signal into two parts by passing it through a high-pass and a low-pass filter (the filters must satisfy certain conditions, the so-called admissibility condition), which results in two different versions of the same signal: the portion corresponding to 0-500 Hz (the low-pass portion) and 500-1000 Hz (the high-pass portion). Then we take either portion (usually the low-pass portion) or both, and do the same thing again. This operation is called decomposition. Assuming that we have taken the low-pass portion, we now have 3 sets of data, corresponding to the same signal at frequencies 0-250 Hz, 250-500 Hz, and 500-1000 Hz. Then we take the low-pass portion again and pass it through low- and high-pass filters; we now have 4 sets of signals corresponding to 0-125 Hz, 125-250 Hz, 250-500 Hz, and 500-1000 Hz. We continue like this until we have decomposed the signal to a pre-defined level. We then have a collection of signals which actually represent the same signal, but each corresponding to a different frequency band. We know which signal corresponds to which frequency band, and if we put all of them together and plot them on a 3-D graph, we will have time on one axis, frequency on the second, and amplitude on the third. This will show us which frequencies exist at which time. (There is an issue, called the "uncertainty principle", which states that we cannot know exactly what frequency exists at what time instant, but only what frequency bands exist in what time intervals; more on this later.) Higher frequencies are better resolved in time, and lower frequencies are better resolved in frequency. This means that a certain high-frequency component can be located better in time (with less relative error) than a low-frequency component.
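The bookkeeping of this repeated half-band splitting can be sketched directly; the 1000 Hz starting bandwidth and three-level depth match the walkthrough above.

```python
def decompose_bands(fmax: float, levels: int) -> list:
    """Frequency bands produced by repeatedly splitting the low-pass half.
    At each level the high-pass half is kept and the low-pass half is
    split again; the final low-pass residue is kept at the end."""
    bands = []
    high = fmax
    for _ in range(levels):
        half = high / 2.0
        bands.append((half, high))   # high-pass portion kept at this stage
        high = half                  # recurse into the low-pass portion
    bands.append((0.0, high))        # final low-pass residue
    return sorted(bands)

print(decompose_bands(1000.0, 3))
# [(0.0, 125.0), (125.0, 250.0), (250.0, 500.0), (500.0, 1000.0)]
```

The output reproduces exactly the four bands named in the text: 0-125 Hz, 125-250 Hz, 250-500 Hz, and 500-1000 Hz.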
On the contrary, a low-frequency component can be located better in frequency compared to a high-frequency component. Take a look at the following grid:
[ASCII sketches of the time-frequency sampling grids for the continuous-time and discrete-time wavelet transforms appeared here.]

Interpret the grid as follows: the top row shows that at higher frequencies we have more samples corresponding to smaller intervals of time; in other words, higher frequencies can be resolved better in time. The bottom row corresponds to low frequencies, where there are fewer points to characterize the signal; therefore, low frequencies are not resolved well in time. In the discrete-time case, the time resolution of the signal works the same as above, but now the frequency information has different resolutions at every stage too: lower frequencies are better resolved in frequency, whereas higher frequencies are not. Note how the spacing between subsequent frequency components increases as frequency increases. Below are some examples of the continuous wavelet transform. Let's take a sinusoidal signal which has two different frequency components at two different times: note the low-frequency portion first, and then the high-frequency portion.
Figure 4.1 WAVEFORM OF CWT

Note, however, that the frequency axis in these plots is labeled as scale. The concept of scale will be made clearer in the subsequent sections, but it should be noted at this time that scale is the inverse of frequency. That is, high scales correspond to low frequencies, and low scales correspond to high frequencies. Consequently, the little peak in the plot corresponds to the high-frequency components in the signal, and the large peak corresponds to the low-frequency components (which appear before the high-frequency components in time).
Figure 4.2 SAME TRANSFORM, ROTATED 250 DEGREES, LOOKING FROM 45 DEGREES ABOVE

You might be puzzled by the frequency resolution shown in the plot, since it shows good frequency resolution at high frequencies. Note, however, that it is the scale resolution that looks good at high frequencies (low scales), and good scale resolution means poor frequency resolution, and vice versa.
1. The Fourier transforms of the windowed signals are not taken; therefore a single peak will be seen corresponding to a sinusoid, i.e., negative frequencies are not computed.
2. The width of the window is changed as the transform is computed for every single spectral component, which is probably the most significant characteristic of the wavelet transform.

The continuous wavelet transform is defined as follows:

CWT_x(τ, s) = (1/√|s|) ∫ x(t) ψ*((t − τ)/s) dt
As seen in the above equation, the transformed signal is a function of two variables, τ (tau) and s, the translation and scale parameters, respectively. ψ(t) (psi) is the transforming function, called the mother wavelet. The term mother wavelet gets its name from two important properties of wavelet analysis, explained below. The term wavelet means a small wave: the smallness refers to the condition that this (window) function is of finite length (compactly supported), and the wave refers to the condition that this function is oscillatory. The term mother implies that the functions with different regions of support that are used in the transformation process are derived from one main function, the mother wavelet; in other words, the mother wavelet is a prototype for generating the other window functions. The term translation is used in the same sense as in the STFT: it is related to the location of the window, as the window is shifted through the signal, and it obviously corresponds to time information in the transform domain. However, we do not have a frequency parameter as we had for the STFT; instead, we have a scale parameter, defined as 1/frequency. The term frequency is reserved for the STFT. Scale is described in more detail in the next section.
4.1.1The Scale
The parameter scale in wavelet analysis is similar to the scale used in maps. As in the case of maps, high scales correspond to a non-detailed, global view (of the signal), and low scales correspond to a detailed view. Similarly, in terms of frequency, low frequencies (high scales) correspond to global information about a signal (which usually spans the entire signal), whereas high frequencies (low scales) correspond to detailed information about a hidden pattern in the signal (which usually lasts a relatively short time). Cosine signals corresponding to various scales are given as examples in the following figure.
Fortunately, in practical applications, low scales (high frequencies) do not last for the entire duration of the signal, unlike those shown in the figure; they usually appear from time to time as short bursts or spikes. High scales (low frequencies) usually last for the entire duration of the signal. Scaling, as a mathematical operation, either dilates or compresses a signal: larger scales correspond to dilated (stretched out) signals, and small scales correspond to compressed signals. All of the signals given in the figure are derived from the same cosine signal, i.e., they are dilated or compressed versions of the same function. In the figure, s=0.05 is the smallest scale and s=1 is the largest scale. In terms of mathematical functions, if f(t) is a given function, then f(st) corresponds to a contracted (compressed) version of f(t) if s > 1, and to an expanded (dilated) version of f(t) if s < 1. However, in the definition of the wavelet transform, the scaling term appears in the denominator, so the opposite of the above statements holds: scales s > 1 dilate the signal, whereas scales s < 1 compress the signal.
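The compression behavior of f(st) can be verified numerically with a small worked example: for f(t) = cos(t), the first positive zero sits at t = π/(2s), so it moves toward the origin as s grows. The s values here are only for illustration.

```python
import numpy as np

def first_positive_zero(s: float) -> float:
    """First t > 0 where cos(s*t) crosses zero, found on a dense grid.
    Analytically this is pi / (2*s)."""
    t = np.linspace(0.0, np.pi, 100001)
    y = np.cos(s * t)
    idx = np.argmax(y < 0.0)     # first sample just past the zero crossing
    return t[idx]

# s > 1 compresses cos(t): its first zero moves from pi/2 toward 0.
print(first_positive_zero(1.0), first_positive_zero(2.0))
```

This matches the statement above: f(2t) is a compressed copy of f(t), with all its features occurring earlier in t.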
4.2.1 Subband Coding and Multiresolution Analysis
The main idea is the same as in the CWT: a time-scale representation of a digital signal is obtained using digital filtering techniques. Recall that the CWT is a correlation between a wavelet at different scales and the signal, with the scale (or the frequency) being used as a measure of similarity. The continuous wavelet transform was computed by changing the scale of the analysis window, shifting the window in time, multiplying by the signal, and integrating over all times. In the discrete case, filters of different cutoff frequencies are used to analyze the signal at different scales. The signal is passed through a series of high-pass filters to analyze the high frequencies, and it is passed through a series of low-pass filters to analyze the low frequencies. The resolution of the signal, which is a measure of the amount of detail information in the signal, is changed by the filtering operations, and the scale is changed by upsampling and downsampling (subsampling) operations. Subsampling a signal corresponds to reducing the sampling rate, or removing some of the samples of the signal: for example, subsampling by two refers to dropping every other sample of the signal, and subsampling by a factor n reduces the number of samples in the signal n times. Upsampling a signal corresponds to increasing the sampling rate of a signal by adding new samples to the signal: for example, upsampling by two refers to adding a new sample, usually a zero or an interpolated value, between every two samples of the signal, and upsampling a signal by a factor of n increases the number of samples in the signal by a factor of n. Although it is not the only possible choice, DWT coefficients are usually sampled from the CWT on a dyadic grid, i.e., s0 = 2 and τ = k·2^j. Since the signal is a discrete-time function, the terms
function and sequence will be used interchangeably in the following discussion. This sequence will be denoted by x[n], where n is an integer. The procedure starts by passing this signal (sequence) through a half band digital low pass filter with impulse response h[n]. Filtering a signal corresponds to the mathematical operation of convolution of the signal with the impulse response of the filter. The convolution operation in discrete time is defined as follows:

x[n] * h[n] = Σk x[k] · h[n − k]
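This convolution sum, y[n] = Σk x[k]·h[n − k], can be sketched in Python; NumPy's `convolve` computes exactly this operation (the signal and filter values below are arbitrary toy numbers):

```python
import numpy as np

# Discrete convolution y[n] = sum_k x[k] * h[n - k] on toy data.
x = np.array([1.0, 2.0, 3.0])
h = np.array([1.0, 1.0])     # a crude two-tap (averaging-style) impulse response

y = np.convolve(x, h)        # full convolution, length len(x) + len(h) - 1
```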
A half band low pass filter removes all frequencies that are above half of the highest frequency in the signal. For example, if a signal has a maximum component of 1000 Hz, then half band low pass filtering removes all the frequencies above 500 Hz. The unit of frequency is of particular importance at this time. In discrete signals, frequency is expressed in terms of radians. Accordingly, the sampling frequency of the signal is equal to 2π radians in terms of radial frequency. Therefore, the highest frequency component that exists in a signal will be π radians, if the signal is sampled at the Nyquist rate (which is twice the maximum frequency that exists in the signal); that is, the Nyquist rate corresponds to π rad/s in the discrete frequency domain. Therefore, using Hz is not appropriate for discrete signals. However, Hz is used whenever it is needed to clarify a discussion, since it is very common to think of frequency in terms of Hz. It should always be remembered that the unit of frequency for discrete time signals is radians. After passing the signal through a half band low pass filter, half of the samples can be eliminated according to Nyquist's rule, since the signal now has a highest frequency of π/2 radians instead of π radians. Simply discarding every other sample will subsample the signal by two, and the signal will then have half the
number of points. The scale of the signal is now doubled. Note that the low pass filtering removes the high frequency information, but leaves the scale unchanged. Only the sub sampling process changes the scale. Resolution, on the other hand, is related to the amount of information in the signal, and therefore it is affected by the filtering operations. Half band low pass filtering removes half of the frequencies, which can be interpreted as losing half of the information. Therefore, the resolution is halved after the filtering operation. Note, however, that the sub sampling operation after filtering does not affect the resolution, since removing half of the spectral components from the signal makes half the number of samples redundant anyway. Half the samples can be discarded without any loss of information. In summary, the low pass filtering halves the resolution, but leaves the scale unchanged. The signal is then sub sampled by 2 since half of the samples are redundant. This doubles the scale. This procedure can mathematically be expressed as

y[n] = Σk h[k] · x[2n − k]
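This filter-then-subsample step can be sketched in Python; the two-tap averaging filter below is only a crude stand-in for a true half band low pass filter, and the signal is a toy ramp:

```python
import numpy as np

# Low pass filter the signal, then keep every other sample (sub sample by 2).
x = np.arange(8, dtype=float)        # toy signal [0, 1, ..., 7]
h = np.array([0.5, 0.5])             # crude stand-in for a half band low pass filter

filtered = np.convolve(x, h)         # low pass filtering
y = filtered[1::2]                   # sub sampling by 2: half the samples remain
```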
Having said that, we now look at how the DWT is actually computed: the DWT analyzes the signal at different frequency bands with different resolutions by decomposing the signal into a coarse approximation and detail information. The DWT employs two sets of functions, called scaling functions and wavelet functions, which are associated with low pass and high pass filters, respectively. The decomposition of the signal into different frequency bands is simply obtained by successive high pass and low pass filtering of the time domain signal. The original signal x[n] is first passed through a half band high pass filter g[n] and a low pass filter h[n]. After the filtering, half of the samples can be eliminated according to Nyquist's rule, since the signal now has a highest frequency of π/2 radians instead of π. The signal can therefore be sub sampled by 2, simply by discarding every other sample. This constitutes one level of decomposition and can mathematically be expressed as follows:

yhigh[k] = Σn x[n] · g[2k − n]
ylow[k] = Σn x[n] · h[2k − n]
where yhigh[k] and ylow[k] are the outputs of the high pass and low pass filters, respectively, after sub sampling by 2. This decomposition halves the time resolution since only half the number of samples now characterizes the entire signal. However, this operation doubles the frequency resolution, since the frequency band of the signal now spans only half the previous frequency band, effectively reducing the uncertainty in the frequency by half. The above procedure, which is also known as sub band coding, can be repeated for further decomposition. At every level, the filtering and sub sampling will result in half the number of samples (and hence half the time resolution) and half the frequency band spanned (and hence double the frequency resolution). Figure 4.4 illustrates this procedure, where x[n] is the original signal to be decomposed, and h[n] and g[n] are low pass and high pass filters, respectively. The bandwidth of the signal at every level is marked on the figure as "f".
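One level of this sub band split can be sketched in Python, using the two-tap Haar pair as a stand-in for the half band filters h[n] and g[n]; since these filters are orthonormal, the energy of the toy signal is preserved across the two half-length bands.

```python
import numpy as np

# One analysis level: filter with h[n] (low pass) and g[n] (high pass),
# then sub sample by 2. Haar filters stand in for true half band filters.
h = np.array([1.0, 1.0]) / np.sqrt(2.0)    # low pass
g = np.array([1.0, -1.0]) / np.sqrt(2.0)   # high pass

x = np.array([4.0, 6.0, 10.0, 12.0])       # toy signal
y_low = np.convolve(x, h)[1::2]            # approximation: half the samples
y_high = np.convolve(x, g)[1::2]           # detail: half the samples
```

Each band holds half as many samples as x, and the summed energies of the two bands equal the energy of the input.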
Figure 4.4 BLOCK DIAGRAM: The Sub band Coding Algorithm

As an example, suppose that the original signal x[n] has 512 sample points, spanning a frequency band of zero to π rad/s. At the first decomposition level, the signal is passed through the high pass and low pass filters, followed by sub sampling by 2. The output of the high pass filter has 256 points (hence half the time resolution), but it only spans the frequencies π/2 to π rad/s (hence double the frequency resolution). These 256 samples constitute the first level of DWT coefficients. The output of the low pass filter also has 256 samples, but it spans the other half of the frequency band, frequencies from 0 to π/2 rad/s. This signal is then passed through the same low pass and high pass filters for further decomposition.
The output of the second low pass filter followed by sub sampling has 128 samples spanning a frequency band of 0 to π/4 rad/s, and the output of the second high pass filter followed by sub sampling has 128 samples spanning a frequency band of π/4 to π/2 rad/s. The second high pass filtered signal constitutes the second level of DWT coefficients. This signal has half the time resolution, but twice the frequency resolution of the first level signal. In other words, time resolution has decreased by a factor of 4, and frequency resolution has increased by a factor of 4 compared to the original signal. The low pass filter output is then filtered once again for further decomposition. This process continues until two samples are left. For this specific example there would be 8 levels of decomposition, each having half the number of samples of the previous level. The DWT of the original signal is then obtained by concatenating all coefficients starting from the last level of decomposition (the remaining two samples, in this case). The DWT will then have the same number of coefficients as the original signal. The frequencies that are most prominent in the original signal will appear as high amplitudes in the region of the DWT signal that includes those particular frequencies. The difference of this transform from the Fourier transform is that the time localization of these frequencies will not be lost. However, the time localization will have a resolution that depends on the level at which they appear. If the main information of the signal lies in the high frequencies, as happens most often, the time localization of these frequencies will be more precise, since they are characterized by a larger number of samples. If the main information lies only at very low frequencies, the time localization will not be very precise, since few samples are used to express the signal at these frequencies.
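The bookkeeping in the 512-sample example above can be checked with a short Python helper (the function name is ours, not the tutorial's): each level halves the sample count until two samples remain, and the concatenated coefficients match the original length.

```python
# Sizes of the detail bands per level for an n-sample signal, plus the
# final approximation that is left when only two samples remain.
def dwt_level_sizes(n):
    details = []          # details[0] is the level 1 (highest frequency) band
    while n > 2:
        n //= 2
        details.append(n)
    return details, n     # n is now the size of the remaining approximation

details, approx = dwt_level_sizes(512)
```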
This procedure in effect offers a good time resolution at high frequencies, and good frequency resolution at low frequencies. Most practical signals encountered are of this type. The frequency bands that are not very prominent in the original signal will have very low amplitudes, and that part of the DWT signal can be discarded without any major loss of information, allowing data reduction. Figure 4.2 illustrates an
example of what DWT signals look like and how data reduction is provided. Figure 4.2a shows a typical 512-sample signal that is normalized to unit amplitude. The horizontal axis is the number of samples, whereas the vertical axis is the normalized amplitude. Figure 4.2b shows the 8 level DWT of the signal in Figure 4.2a. The last 256 samples in this signal correspond to the highest frequency band in the signal, the previous 128 samples correspond to the second highest frequency band, and so on. It should be noted that only the first 64 samples, which correspond to the lower frequencies of the analysis, carry relevant information, and the rest of this signal has virtually no information. Therefore, all but the first 64 samples can be discarded without any loss of information. This is how the DWT provides a very effective data reduction scheme.
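The data reduction idea can be sketched with a one-level Haar transform on a made-up row of samples: every detail coefficient falls below the threshold and is discarded, yet the reconstruction error stays small. (The Haar filters and the toy numbers are our stand-ins, not the tutorial's.)

```python
import numpy as np

# One-level Haar analysis of a toy row, thresholding of the detail
# coefficients, and direct reconstruction from the kept coefficients.
row = np.array([10.0, 10.1, 10.0, 9.9, 50.0, 50.2, 10.0, 10.1])
a = (row[0::2] + row[1::2]) / np.sqrt(2.0)          # approximation
d = (row[0::2] - row[1::2]) / np.sqrt(2.0)          # detail

threshold = 0.5
d_kept = np.where(np.abs(d) > threshold, d, 0.0)    # discard small details

rec = np.empty_like(row)
rec[0::2] = (a + d_kept) / np.sqrt(2.0)
rec[1::2] = (a - d_kept) / np.sqrt(2.0)
```

Here all four detail coefficients are discarded, so only half of the values need storing, while the reconstruction stays within a tenth of a gray level of the original.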
FIGURE 4.5 DWT WAVEFORMS

We will revisit this example, since it provides important insight into how the DWT should be interpreted. Before that, however, we need to conclude our mathematical analysis of the DWT.
One important property of the discrete wavelet transform is the relationship between the impulse responses of the high pass and low pass filters. The high pass and low pass filters are not independent of each other; they are related by

g[L − 1 − n] = (−1)^n · h[n]

where g[n] is the high pass filter, h[n] is the low pass filter, and L is the filter length (in number of points). Note that the two filters are odd index alternated reversed versions of each other. The conversion from low pass to high pass is provided by the (−1)^n term. Filters satisfying this condition are commonly used in signal processing, and they are known as Quadrature Mirror Filters (QMF). The two filtering and sub sampling operations can be expressed by

yhigh[k] = Σn x[n] · g[−n + 2k]
ylow[k] = Σn x[n] · h[−n + 2k]
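Both the QMF relation and these filtering and sub sampling operations can be checked in Python, with the Haar low pass filter as a toy example (the signal values are arbitrary):

```python
import numpy as np

# Build g[n] from h[n] via the QMF relation g[L-1-n] = (-1)^n * h[n],
# then apply the filtering and sub sampling operations to a toy signal.
h = np.array([1.0, 1.0]) / np.sqrt(2.0)   # Haar low pass filter
L = len(h)

g = np.empty(L)
for n in range(L):
    g[L - 1 - n] = ((-1.0) ** n) * h[n]

x = np.array([2.0, 4.0, 6.0, 8.0])
y_low = np.convolve(x, h)[1::2]           # low pass filter, then sub sample by 2
y_high = np.convolve(x, g)[1::2]          # high pass filter, then sub sample by 2
```

The constructed g[n] is orthogonal to h[n] and has unit energy, and the two sub sampled bands together preserve the signal energy.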
The reconstruction in this case is very easy since half band filters form orthonormal bases. The above procedure is followed in reverse order for the reconstruction. The signals at every level are up sampled by two, passed through the synthesis filters g[n] and h[n] (high pass and low pass, respectively), and then added. The interesting point here is that the analysis and synthesis filters are identical to each other, except for a time reversal. Therefore, the reconstruction formula becomes (for each layer)

x[n] = Σk ( yhigh[k] · g[−n + 2k] + ylow[k] · h[−n + 2k] )
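Perfect reconstruction can be verified numerically with the Haar filter bank (our stand-in for the half band filters): one analysis step followed by the matching synthesis step returns the toy signal exactly.

```python
import numpy as np

def haar_analysis(x):
    """One level: low pass / high pass filtering followed by sub sampling by 2."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def haar_synthesis(a, d):
    """Inverse step: up sample and combine with the time-reversed filters."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

x = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0])
a, d = haar_analysis(x)
x_rec = haar_synthesis(a, d)
```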
However, if the filters are not ideal half band, then perfect reconstruction cannot be achieved. Although it is not possible to realize ideal filters, under certain conditions it is possible to find filters that provide perfect reconstruction. The most
famous ones are the ones developed by Ingrid Daubechies, and they are known as Daubechies wavelets. Note that due to successive sub sampling by 2, the signal length must be a power of 2, or at least a multiple of a power of 2, in order for this scheme to be efficient. The length of the signal determines the number of levels that the signal can be decomposed to. For example, if the signal length is 1024, ten levels of decomposition are possible. Interpreting the DWT coefficients can sometimes be rather difficult because of the way DWT coefficients are presented, which is rather peculiar. To make a real long story real short, the DWT coefficients of each level are concatenated, starting with the last level. An example is in order to make this concept clear. Suppose we have a 256-sample long signal sampled at 10 MHz and we wish to obtain its DWT coefficients. Since the signal is sampled at 10 MHz, the highest frequency component that exists in the signal is 5 MHz. At the first level, the signal is passed through the low pass filter h[n] and the high pass filter g[n], the outputs of which are sub sampled by two. The high pass filter output constitutes the first level DWT coefficients. There are 128 of them, and they represent the signal in the [2.5, 5] MHz range. These 128 samples are the last 128 samples plotted. The low pass filter output, which also has 128 samples but spans the frequency band of [0, 2.5] MHz, is further decomposed by passing it through the same h[n] and g[n]. The output of the second high pass filter constitutes the level 2 DWT coefficients, and these 64 samples precede the 128 level 1 coefficients in the plot. The output of the second low pass filter is further decomposed, once again by passing it through the filters h[n] and g[n]. The output of the third high pass filter constitutes the level 3 DWT coefficients. These 32 samples precede the level 2 DWT coefficients in the plot. The procedure continues until only 1 DWT coefficient can be computed at level 9.
This one coefficient is the first to be plotted in the DWT plot. This is
followed by 2 level 8 coefficients, 4 level 7 coefficients, 8 level 6 coefficients, 16 level 5 coefficients, 32 level 4 coefficients, 64 level 3 coefficients, 128 level 2 coefficients, and finally 256 level 1 coefficients. Note that fewer and fewer samples are used at lower frequencies; therefore, the time resolution decreases as frequency decreases, but since the frequency interval also decreases at low frequencies, the frequency resolution increases. Obviously, the first few coefficients would not carry a whole lot of information, simply due to the greatly reduced time resolution. To illustrate this richly bizarre DWT representation, let us take a look at a real world signal. Our original signal is a 256-sample long ultrasonic signal, which was sampled at 25 MHz. This signal was originally generated by using a 2.25 MHz transducer, therefore the main spectral component of the signal is at 2.25 MHz. The last 128 samples correspond to the [6.25, 12.5] MHz range. As seen from the plot, no information is available here, hence these samples can be discarded without any loss of information. The preceding 64 samples represent the signal in the [3.12, 6.25] MHz range, which also does not carry any significant information. The little glitches probably correspond to the high frequency noise in the signal. The preceding 32 samples represent the signal in the [1.5, 3.1] MHz range. As you can see, the majority of the signal's energy is focused in these 32 samples, as we expected to see. The previous 16 samples correspond to [0.75, 1.5] MHz, and the peaks that are seen at this level probably represent the lower frequency envelope of the signal. The previous samples probably do not carry any other significant information. It is safe to say that we can get by with the 3rd and 4th level coefficients; that is, we can represent this 256-sample long signal with 16 + 32 = 48 samples, a significant data reduction which would make your computer quite happy.
One area that has benefited the most from this particular property of the wavelet transforms is image processing. As you may well know, images, particularly high-resolution images, claim a lot of disk space. As a matter of fact, if this tutorial is
taking a long time to download, that is mostly because of the images. The DWT can be used to reduce the image size without losing much of the resolution. Here is how: for a given image, you can compute the DWT of, say, each row, and discard all values in the DWT that are less than a certain threshold. We then save only those DWT coefficients that are above the threshold for each row, and when we need to reconstruct the original image, we simply pad each row with as many zeros as the number of discarded coefficients, and use the inverse DWT to reconstruct each row of the original image. We can also analyze the image at different frequency bands, and reconstruct the original image by using only the coefficients that are of a particular band. I will try to put sample images hopefully soon, to illustrate this point. Another issue that is receiving more and more attention is carrying out the decomposition (sub band coding) not only on the low pass side but on both sides. In other words, zooming into both the low and high frequency bands of the signal separately. This can be visualized as having both sides of the tree structure of Figure 4.1. What results is what is known as wavelet packets. We will not discuss wavelet packets here, since they are beyond the scope of this tutorial. Anyone who is interested in wavelet packets, or in more information on the DWT, can find this information in any of the numerous texts available in the market. And this concludes our mini-series of the wavelet tutorial. If I could be of any assistance to anyone struggling to understand the wavelets, I would consider the time and the effort that went into this tutorial well spent. I would like to remind the reader that this tutorial is neither a complete nor a thorough coverage of the wavelet transforms. It is merely an overview of the concept of wavelets, and it was intended to serve as a first reference for those who find the available texts on wavelets rather complicated.
There might be many structural and/or technical mistakes, and I would appreciate it if you could point those out to me. Your feedback is of utmost importance for the success of this tutorial.
4.3 ALGORITHM
The proposed enhancement algorithm consists of three stages: the first and the third stage are applied in the spatial domain and the second one in the discrete wavelet domain.
From an image processing standpoint, this assumption indicates that images of natural scenes should contain pixels having almost equal average gray levels in each spectral band. Combining Equation 2 with the gray-world assumption, we perform histogram adjustment as follows:
1. The amount of shift corresponding to illuminance is determined from the beginning of the lower tail of the histogram, such that a predefined amount (typically 0.5%) of image pixels is clipped.
2. The shift is subtracted from each pixel value.
3. This process is repeated separately for each color channel.
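The steps above can be sketched in NumPy on one synthetic channel; the toy data and the percentile-based way of finding the shift are our assumptions about the implementation.

```python
import numpy as np

# Step 1: find the shift at the lower tail so ~0.5% of the pixels get clipped.
channel = np.arange(1000, dtype=float)   # stand-in for one color channel

shift = np.percentile(channel, 0.5)      # lower-tail clipping point

# Step 2: subtract the shift from every pixel, clipping negatives to zero.
adjusted = np.clip(channel - shift, 0.0, None)
```

Step 3 is simply a repetition of this per color channel.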
where aJ,k,l are the approximation coefficients at scale J with corresponding scaling functions ΦJ,k,l(x, y), and dj,k,l are the detail coefficients at each scale with corresponding wavelet functions Ψj,k,l(x, y). A raised hyperbolic sine function given by Equation 5 maps the normalized range [0, 1] of aJ,k,l to the same range, and is used for compressing the dynamic range represented by the coefficients. The compressed coefficients at level J can be obtained by

āJ,k,l = ((sinh(4.6248 · aJ,k,l − 2.3124) + 5) / 10)^r

where r is the curvature parameter which adjusts the shape of the hyperbolic sine function. Applying the mapping operator to the coefficients and taking the inverse wavelet transform would result in a compressed dynamic range with a significant loss of contrast. Thus, a center/surround procedure that preserves and enhances the local contrast is applied to those mapped coefficients.
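Using the constants that appear in the MATLAB listing at the end of this report, the raised hyperbolic sine mapping can be written in Python; it maps [0, 1] (approximately) onto [0, 1], with the curvature parameter r adjusting its shape.

```python
import numpy as np

# Raised hyperbolic sine mapping: sinh(4.6248*x - 2.3124) spans roughly
# [-5, 5] for x in [0, 1], so (sinh(...) + 5) / 10 maps back into [0, 1].
def raised_sinh(x, r):
    return ((np.sinh(4.6248 * x - 2.3124) + 5.0) / 10.0) ** r
```

At the midpoint sinh(0) = 0, so the mapping reduces to 0.5 raised to the power r.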
where s is the surround space constant, and k is determined under the constraint that the surround function normalizes to unity, i.e., ΣxΣy G(x, y) = 1.
The local average image representing the surround is obtained by 2D convolution of the kernel in (7) with the image A, the elements of which are the normalized approximation coefficients aJ,k,l given by (6):
The contrast enhanced coefficients matrix Anew, which will replace the original approximation coefficients aJ,k,l, is given by
where R is the centre/surround ratio, d is the enhancement strength constant with a default value of 1, and Ā is the matrix whose elements are the output of the hyperbolic sine function in (5). A linear combination of three kernels with three different scales, the combined-scale Gaussian (Gc), is used for improved rendition and is given by
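A sketch of such a combined-scale Gaussian in Python; the kernel size, the three space constants, and the equal 1/3 weights are our assumptions for illustration, and each kernel is normalized so that it sums to one.

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """A size x size Gaussian surround, normalized to unit sum."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

scales = [1.0, 4.0, 16.0]   # assumed surround space constants
Gc = sum(gaussian_kernel(31, s) for s in scales) / len(scales)
```

Because each component kernel sums to one, their equal-weight combination does too, so convolving with Gc yields a proper local average.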
where A and Anew are the original and the enhanced approximation coefficient matrices at level 1; Dh, Dv, and Dd are the detail coefficient matrices for the horizontal, vertical, and diagonal details at the same level; and Dnewh, Dnewv, and Dnewd are the corresponding modified matrices, respectively. If the wavelet decomposition is carried out for more than one level, this procedure is repeated for each level.
where Ii(x, y) are the RGB values of the input color image at the corresponding pixel location and Ienh(x, y) is the resulting enhanced intensity image derived from the inverse wavelet transform of the modified coefficients. The non-linear gain factor in (15) has a canonical value and increases the color saturation, resulting in a more appealing color rendition. Since the coefficients are normalized during the enhancement process, the enhanced intensity image obtained by the inverse transform of the enhanced coefficients, along with the enhanced color image given by (15), spans almost only the lower half of the full range of the histogram. For the final display domain output, the enhanced values Iienh in (15) are stretched to represent the full dynamic range. Histogram clipping from the upper tail of the histograms in each channel gives the best results in converting the output to the display domain.
5.1 RESULTS
5.3 CONCLUSION
In this project, the application of the WDRC algorithm to aerial imagery is presented. The results obtained from a large variety of aerial images show strong robustness, high image quality, and improved visibility, indicating promise for aerial imagery captured during poor visibility flight conditions. This algorithm can further be applied to real time video streaming, and the enhanced video can be projected onto the pilot's heads-up display for aviation safety.
figure('Name','Stage 2.1: Orthogonal Wavelet Decomposition','NumberTitle','off');
subplot(2,2,1); imshow(ai,[]);
subplot(2,2,2); imshow(aih);
subplot(2,2,3); imshow(aiv);
subplot(2,2,4); imshow(aid);

% Mapping operator (raised hyperbolic sine of Equation 5)
r = 2;   % curvature parameter
j = 1;   % decomposition level
aim = ai; aivm = aiv; aidm = aid; aihm = aih;
for n = 1:s2(2)
    for m = 1:s2(1)
        aim(m,n)  = ((sinh(4.6248*((ai(m,n)/(255*(2^j))))-2.3124)+5)/10).^r;
        aivm(m,n) = ((sinh(4.6248*((aiv(m,n)/(255*(2^j))))-2.3124)+5)/10).^r;
        aihm(m,n) = ((sinh(4.6248*((aih(m,n)/(255*(2^j))))-2.3124)+5)/10).^r;
        aidm(m,n) = ((sinh(4.6248*((aid(m,n)/(255*(2^j))))-2.3124)+5)/10).^r;
    end
end
% Stages 2.2, 2.3: local contrast enhancement and detail coefficient modification
aimd = histeq(aim);
ai = ai + 0.000001;   % avoid division by zero
aivmd = imadjust(aivm);  avn = (aimd./ai).*aivm;
aidmd = imadjust(aidm);  adn = (aimd./ai).*aidm;
aihmd = imadjust(aihm);  ahn = (aimd./ai).*aihm;
figure; imshow(aimd), title('WDRC Approximation image')
a = double(a);
aId = idwt2(aimd, avn, ahn, adn, 'bior1.1');   % inverse 2D DWT of the modified coefficients
figure; imshow(aId,[]); title('WDRC Reconstructed Spatial Domain image')

% Step 3: Colour Restoration
I = double(a1);
B = 3;   % non-linear gain factor
for n = 1:s1(2)
    for m = 1:s1(1)
        Ai = ((a(m,n,:))./max(a(m,n,:))).^B;
        I(m,n,:) = Ai.*aId(m,n).*2.5;
    end
end
figure('Name','Final Result','NumberTitle','off');
subplot(2,1,1); imshow(uint8(a)); title('Original Image')
subplot(2,1,2); imshow(I); title('Enhanced Image')