
LOCAL EDGE-PRESERVING MULTISCALE DECOMPOSITION FOR

HIGH DYNAMIC RANGE IMAGE TONE MAPPING


NARENDRA REDDY K
VI Sem, Department of ECE, SVCE, Bangalore

SHASHI RANJAN
Assistant Professor, Department of ECE, DBIT

SURESHA H S
Associate Professor, Department of ECE, DBIT

SANDESH KUMAR
Assistant Professor, Department of ECE, SJBIT, Bangalore

ABSTRACT: A novel filter is proposed for edge-preserving decomposition of an image. It differs from previous filters in its locally adaptive property. The filtered image contains local means everywhere and preserves local salient edges. Comparisons are made between our filtered result and the results of three other methods, and a detailed analysis is made of the behavior of the filter. A multiscale decomposition with this filter, consisting of three detail layers and one base layer, is proposed for manipulating a high dynamic range image. The decomposition rests on three assumptions: 1) the base layer preserves local means everywhere; 2) every scale's salient edges are relatively large gradients in a local window; and 3) all of the nonzero gradient information belongs to the detail layer. An effective function is also proposed for compressing the detail layers. The reproduced image gives a good visualization. Experimental results on real images demonstrate that our algorithm is especially effective at preserving or enhancing local details.

KEYWORDS: multiscale, edges, visualization.

INTRODUCTION
A novel filter is proposed for edge-preserving decomposition of an image. It differs from previous filters in its locally adaptive property. The filtered image contains local means everywhere and preserves local salient edges. A detailed analysis is also made of the behavior of the filter. A multiscale decomposition with this filter, consisting of three detail layers and one base layer, is proposed for manipulating a high dynamic range image.

HIGH DYNAMIC RANGE IMAGE

When a scene with high light contrast is captured by a camera, either the dark area or the bright area will be saturated in the output image, as shown in Fig. 2.1. This is due to the limitation of the camera sensor and has existed since the first camera was invented. To solve this, high dynamic range (HDR) imaging recovers the real-world scene from multiple conventional low dynamic range (LDR) images.
Fig. 2.1 Problem in capturing a high dynamic range scene: (Top) the real-world scene as perceived by human eyes; (Bottom) the images captured by a camera using different exposure times (t). Both over-exposed saturation (red square) and under-exposed saturation (blue square) are present, even in the best-exposed image (Bottom Middle). The real-world scene image was synthesized through HDR imaging using the bottom three images.

Dynamic range
The dynamic range (DR) of a scene is defined as the range from the lowest light intensity to the highest light intensity in the scene; it is also called the scene contrast. In the real world, the full range of light intensities that can be perceived by the human visual system spans from as low as starlight to as high as sunshine. The human visual system is capable of perceiving light over a range of 5 orders of magnitude simultaneously.
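Concretely, the dynamic range in orders of magnitude is log10(I_max / I_min). A minimal sketch of this computation (the luminance values are illustrative, not measured data):

```python
import numpy as np

def dynamic_range(luminance):
    """Dynamic range in orders of magnitude: log10 of the ratio between
    the highest and lowest positive intensity in the scene."""
    lum = np.asarray(luminance, dtype=float)
    lum = lum[lum > 0]                 # log10 is undefined at zero
    return float(np.log10(lum.max() / lum.min()))

# Illustrative luminances from deep shadow (1) to a bright highlight (100000)
scene = [1.0, 20.0, 100000.0]
print(dynamic_range(scene))            # -> 5.0, i.e. 5 orders of magnitude
```

A ratio of 100000:1 thus corresponds to the 5 orders of magnitude the human visual system can perceive simultaneously.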

High dynamic range acquisitions


In order to recover the real-world scene, various HDR synthesis methods have been proposed with different acquisition methods, as listed in Table 2.1. It is possible to capture an HDR image in a single shot by modifying the camera hardware. Capturing image gradients rather than actual pixel intensities was shown to increase the dynamic range at the cost of a computationally expensive Poisson solver; to our knowledge, the gradient camera is still a theoretical solution. In the assorted pixels method, multiple neighboring pixels with a fixed pattern filter capture at different exposures. Each pixel in the final HDR image is reconstructed from 2-4 neighboring pixels, which results in a loss of camera resolution. To overcome this problem, an alternative design uses an aligned spatial light modulator with programmable imaging. However, such a design is difficult to implement and incurs extra hardware expense.

Workflow of software solution of HDR imaging


The modules in amber color in Fig. 2.3 are the research topics covered in this project. The modules
in green color are technologies used in our software solution of HDR imaging.

Intensity Mapping
Exposure bracketing is a photography technique to capture the same scene with multiple shots
using various exposure settings. It allows high dynamic range contents to be stored at different
exposure levels.

Image registration
Before jumping into de-ghosting, there is another important preprocessing step: image registration. It is required if the images are captured with hand-held devices; otherwise, blur will appear in the synthesized HDR image. Fortunately, when capturing multiple images in burst mode, the misalignment is usually limited to rotation and translation. Thus, in most cases, only image alignment is required in HDR imaging.

Fig. 2.3 A typical workflow of a software solution for high dynamic range imaging

Fig. 2.4 A sequence of exposure bracketing images from the highest exposure (top left) to the lowest exposure (bottom right). The high-exposure image captures the detail of the dark area with saturation in the bright area, while the low-exposure image captures the bright area with saturation in the dark area.

Fig. 2.8 gives a simple comparison between image alignment and de-ghosting; both misalignment and moving objects cause blur in the synthesized HDR image. Blur due to misalignment is a global effect, while ghosting artifacts due to moving objects are content dependent. Thus, de-ghosting is more difficult than alignment.

Fig. 2.8 Comparison of alignment (left) and de-ghosting (right)
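The alignment methods compared in Table 2.4 are not reproduced here. As an illustrative stand-in for the translation part of the problem, phase correlation recovers a global shift between two frames from the peak of the inverse FFT of their normalized cross-power spectrum. This is a generic sketch, not the method used in the paper, and it assumes the two frames have comparable brightness, which for bracketed exposures would in practice require a preprocessing step such as thresholding.

```python
import numpy as np

def estimate_translation(ref, moved):
    """Estimate the global (row, col) shift of `moved` relative to `ref`
    by phase correlation."""
    F1 = np.fft.fft2(ref)
    F2 = np.fft.fft2(moved)
    cross = np.conj(F1) * F2
    cross /= np.abs(cross) + 1e-12          # normalized cross-power spectrum
    corr = np.real(np.fft.ifft2(cross))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the midpoint correspond to negative (wrapped) shifts
    return tuple(int(p - s) if p > s // 2 else int(p)
                 for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(0)
ref = rng.random((64, 64))
moved = np.roll(ref, shift=(3, -5), axis=(0, 1))    # simulated camera shake
print(estimate_translation(ref, moved))             # -> (3, -5)
```

Once the shift is known, the frame can be translated back before HDR fusion; rotation would need an extra estimation step.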

We have studied image alignment methods for HDR imaging, as shown in Table 2.4; some of the ideas are used in the final HDR imaging application.

Table 2.4 Comparison of image alignment methods

Proposed System
Design is one of the most important phases of software development. Design is a creative process in which a system organization is established that satisfies the functional and non-functional system requirements. Large systems are always decomposed into subsystems that provide related sets of services. The output of the design process is a description of the software architecture.

Fig. 3.2 Proposed system architecture


Algorithm:
1. The input is an HDR image.
2. The HDR radiance map is transformed into a gray image ranging over [0, 255].
3. The luminance is obtained simply by averaging the 3 channels.
4. The luminance is transferred into its logarithm domain.
5. To sufficiently use the domain of the logarithm function, we arbitrarily magnify the luminance 10^6 times.
6. We arbitrarily use the mean after two iterative decompositions as the last base layer; the last detail layer found bears a high dynamic range.
7. Divide that detail layer by two to halve its range.
8. 1% of pixels are cut at the low and high values.
9. The range of Lout is stretched linearly to [0, 1].
10. Draw the stretched histogram as an output.

The algorithm comprises the following operations:
HDR generation
Input image
RGB to gray conversion
Preprocessing
Logarithm
LEP filter technique
Restore color
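The steps above can be sketched end to end as follows. The LEP filter itself is defined elsewhere in the paper, so a simple box filter stands in for it here; the radius `r` is an illustrative choice, the 10^6 magnification and the 1% cut follow the listed steps, and the color restoration step is omitted.

```python
import numpy as np

def box_blur(img, r):
    """Simple box filter; a stand-in smoother, NOT the paper's LEP filter."""
    pad = np.pad(img, r, mode='edge')
    out = np.zeros_like(img, dtype=float)
    k = 2 * r + 1
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def tone_map(hdr_rgb, r=4):
    """Sketch of the listed steps: luminance -> log domain -> two
    decompositions -> halve the last detail layer -> cut 1% -> stretch."""
    lum = hdr_rgb.mean(axis=2)                     # average the 3 channels
    log_lum = np.log(lum * 1e6 + 1e-12)            # log domain, magnified 10^6x
    base1 = box_blur(log_lum, r)                   # first decomposition
    detail1 = log_lum - base1
    base2 = box_blur(base1, 2 * r)                 # second decomposition
    detail2 = base1 - base2
    base = np.full_like(log_lum, base2.mean())     # mean as the last base layer
    detail3 = (base2 - base) / 2.0                 # halve its range
    Lout = base + detail1 + detail2 + detail3
    lo, hi = np.percentile(Lout, [1, 99])          # cut 1% at low and high
    Lout = np.clip(Lout, lo, hi)
    return (Lout - lo) / (hi - lo)                 # stretch linearly to [0, 1]

hdr = np.random.default_rng(2).random((32, 32, 3)) * 1e4   # fake radiance map
ldr = tone_map(hdr)
print(ldr.shape, ldr.min(), ldr.max())   # shape (32, 32), values in [0, 1]
```

In the actual system the detail layers would also pass through the compression function before recombination; this sketch only wires the layer bookkeeping together.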

Data Flow Diagram Level 1

The Level-1 DFD gives more information than the Level-0 DFD. Fig. 5.2 shows the Level-1 DFD.

Fig. 5.2 Data Flow Diagram Level 1: the HDR image is transformed to a gray image (averaging the 3 channels), moved to the logarithm domain, and scaled to [0, 1].

Data Flow Diagram Level 2


The Level-2 Data Flow Diagram gives more information than the Level-1 DFD. The following figure shows the Level-2 DFD for each module. An HDR image is commonly obtained by fusing multi-exposure images. The fused HDR image always exceeds the dynamic range of displays, so a mapping is needed to compress the intensity distribution of the HDR image. The compression is based on the feature of the human visual system (HVS) that it is less sensitive to low-frequency components than to high-frequency components: the low-frequency components are compressed while the high-frequency components are retained. Through this reproduction process, we can hardly discern the difference between the artificial image and the real scene. Special considerations are also noted here to avoid artifacts.

Module1
The input HDR radiance map has to be transformed into a gray image ranging over [0, 255]. We get the luminance simply by averaging the three channels, and then the luminance is transformed into its logarithm domain. This is a typical operation of most methods.
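A minimal sketch of this module (the exact [0, 255] scaling and the +1 inside the log are illustrative choices to keep the logarithm well defined; the paper does not pin down these constants here):

```python
import numpy as np

def module1(hdr_rgb):
    """Luminance by averaging the three channels, scaled to [0, 255],
    then transformed into its logarithm domain."""
    lum = np.asarray(hdr_rgb, dtype=float).mean(axis=2)
    gray = 255.0 * (lum - lum.min()) / (lum.max() - lum.min())
    return np.log(gray + 1.0)          # +1 keeps the log finite at gray == 0

hdr = np.random.default_rng(3).random((16, 16, 3))
L = module1(hdr)
print(L.min(), round(L.max(), 3))      # min is log(1) = 0.0, max is log(256)
```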

Fig. 5.3 Data Flow Diagram Level 2: two cascaded LEP filter stages followed by a mean produce the base layer; the difference at each stage gives a detail layer, which passes through the compression function.

Module 2
Another special operation follows the dynamic range compression. Since we have arbitrarily used the mean after two iterative decompositions as the last base layer, the last detail layer found bears a high dynamic range, and we divide it by two to halve its range. Lastly, 1% of pixels are cut at the low and high values, to suppress noise and increase the contrast of the remaining pixels. Subsequently, the range of Lout is stretched linearly to [0, 1]. The low and high ends of the histogram are occupied by few pixels, so cutting these pixels yields the stretched histogram.
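The cut-and-stretch step can be sketched as follows; clipping at percentiles is one natural reading of "1% of pixels are cut at the low and high values".

```python
import numpy as np

def cut_and_stretch(L, cut=1.0):
    """Cut `cut`% of pixels at the low and high ends (robustness to noise,
    more contrast for the remaining pixels), then stretch to [0, 1]."""
    lo, hi = np.percentile(L, [cut, 100.0 - cut])
    L = np.clip(L, lo, hi)
    return (L - lo) / (hi - lo)

Lout = np.random.default_rng(1).normal(size=(64, 64))
out = cut_and_stretch(Lout)
print(out.min(), out.max())            # -> 0.0 1.0
```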

Result
Figure 5.5 shows the dialog box of the output window. In this dialog box we provide buttons for: a) select an image, b) HDR generation, c) logarithm, d) LEP filter, e) color reproduction.

Figure 5.6 shows the input image selection; here we can select different input images.

Figure 5.7 shows the selected input image "office"; after selection, images of different exposure ranges are obtained: low range, mid range, and high range. These range images are shown on the left side of the dialog box.

Figure 5.8 shows the generation of the HDR image. After selecting the different exposure images, the HDR image is generated by clicking the HDR generation button, after which a dialog box reports that the HDR image has been generated, as shown in the figure below.

Figure 5.9 shows the logarithm applied to the generated HDR image by clicking the logarithm button in the dialog box.

Fig. 5.4 Data Flow Diagram Level 2: each detail layer passes through the compression function; 1% of pixels are cut, the result is stretched linearly to [0, 1], and the color is restored to produce the output.

Fig.5.5 output window

Fig.5.6 input image

Fig.5.7 input image office

Fig.5.8 HDR image

Fig.5.9 logarithm

Figure 5.10 shows the result of applying the logarithm function. In this figure, the information dialog box indicates that before applying the LEP filter the alpha and beta values must first be selected; for a more satisfactory result, select alpha = 0.1 and beta = 1.

Figure 5.11 shows the LEP filter technique applied by clicking the LEP filter button. If the LEP filter button is clicked before selecting the alpha and beta values, the filter is not applied and a dialog box states that the alpha and beta values must be selected first, as shown in the figure below.

Fig.5.10 logarithm result

Fig.5.11 LEP filter technique

Figure 5.12 shows the result of the LEP filter technique: after applying it, we obtain three detail layers, as shown in the figure below.

Fig.5.12 LEP filter result

Figure 5.13 shows color reproduction applied by clicking the color reproduction button in the dialog box after the LEP filter technique; finally we obtain the output result, i.e., the original image and the output image of the proposed method.

Figure 5.14 shows the original image after applying the color reproduction technique.

Figure 5.15 shows the final HDR image after applying the proposed method.

Fig.5.13 color reproduction

Fig.5.14 original image

Fig.5.15 proposed method

Advantages
The LEP filter gives a more pleasing view than previous methods.
It avoids the halo effect.
It emphasizes local edge preservation.
It preserves local tiny details.
It yields an appealing global view.

Disadvantages
The linear operations may cause artifacts in the results, since they may unsuitably reduce the gradients.
A drawback of our filter may be the preservation of the local shape near a salient edge: details near an edge are preserved that should instead be smoothed.
Future scope
We have presented a new method for compressing the dynamic range of high dynamic range pictures. This method enables the display of high dynamic range pictures on conventional displays. It is based on the multiscale decomposition algorithm. Modifications to the original algorithm are made in a way that retains noise removal while preserving edges and the natural impression of the resulting image.

CONCLUSION

We have presented three assumptions for our multiscale edge-preserving image decomposition. A local edge-preserving filter has been derived from the assumptions, and we have explored its connection with previous algorithms. Only two parameters (besides the window radius) are needed for our filter, and they can always be set to default values with good results. Our filter is capable of coarsening an image at multiple scales while keeping the local shape of the signal. We have also presented a process using our filter to reproduce HDR images. The results are compared with those of some recent effective algorithms. The comparisons show that our algorithm is good at compressing the high dynamic range while preserving local tiny details, and the global view is appealing. The process is very efficient, with asymptotic time complexity linear in the image size. It can be seen that details near an edge are preserved that should instead be smoothed; this may be another source of artifacts near edges. A nonlinear function may be a prospect for avoiding these disadvantages.

