
Colour Image Restoration Using Morphological Neural Network

S. Uma1, Dr. S. Annadurai2

1: Lecturer, ECE Department, Coimbatore Institute of Technology, Coimbatore 641 014, India. Email: uma_natarajans@yahoo.com
2: Principal, Govt. College of Engineering, Tirunelveli 627 007, India. Email: anna_prof@yahoo.co.in

Abstract

In this paper, we consider the problem of restoring colour images degraded by a blur function and corrupted by random noise. A new approach implemented by a multilayer morphological (MLM) neural network is presented, which uses the highly nonlinear morphological neuron to obtain a high quality restored colour image. Colour images are decomposed into the RGB distribution; each subspace can then be regarded as a gray image space and processed in the morphological way used for gray images. The method is attractive because of its low computational overhead and its improved performance in terms of signal-to-noise ratio with a smaller number of neurons. Encouraging experimental results are given.

Key words: Image restoration, multilayer morphological (MLM) neural network, morphological neuron, random noise, PSNR, RGB, colour space.

1. Introduction

Image restoration is the process of obtaining the original image from the degraded image given the knowledge of the degrading factors. A variety of causes can degrade an image, and image restoration is one of the key fields in today's digital image processing owing to its wide range of applications. Commonly occurring degradations include blurring, motion and noise [1,2]. Blurring can be caused when the object in the image is outside the camera's depth of field sometime during the exposure, whereas motion blur can be caused when an object moves relative to the camera during the exposure. The general model of the image degradation phenomenon is y = Hf + n, where y is the observed blurred and noisy image, f is the original image, n is additive random noise and H is the blurring operator. The main objective is to estimate the original image from the observed degraded image. Whatever the degradation process, image distortions fall into two categories: spatially invariant (space invariant) and spatially variant (space variant). In a space invariant distortion all pixels suffer the same form of distortion; this is generally caused by problems with the imaging system such as distortions in the optical system, global lack of focus, or camera motion. In a space variant distortion, the degradation suffered by a pixel depends upon its location in the image. This can be caused by internal factors, such as distortions in the optical system, or by external factors, such as object motion.
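To make the degradation model concrete, the following minimal C++ sketch forms y = Hf + n for one image band; the uniform averaging blur, the Gaussian noise model and the helper name degrade are assumptions of this illustration rather than details specified in the paper.

```cpp
#include <vector>
#include <random>

using Image = std::vector<std::vector<double>>;   // gray values of one colour band

// y = Hf + n : blur the original band f with a (2r+1)x(2r+1) averaging kernel H
// and corrupt the result with additive zero-mean Gaussian noise of deviation sigma.
Image degrade(const Image& f, int r, double sigma)
{
    const int rows = static_cast<int>(f.size());
    const int cols = static_cast<int>(f[0].size());
    Image y(rows, std::vector<double>(cols, 0.0));
    std::mt19937 gen(42);
    std::normal_distribution<double> noise(0.0, sigma);

    for (int i = 0; i < rows; ++i)
        for (int j = 0; j < cols; ++j) {
            double sum = 0.0;
            int count = 0;
            for (int di = -r; di <= r; ++di)
                for (int dj = -r; dj <= r; ++dj) {
                    const int ii = i + di, jj = j + dj;
                    if (ii >= 0 && jj >= 0 && ii < rows && jj < cols) {
                        sum += f[ii][jj];
                        ++count;
                    }
                }
            y[i][j] = sum / count + noise(gen);   // blurred pixel plus random noise
        }
    return y;
}
```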
The various restoration techniques used currently can be broadly viewed under two categories, namely, the transform related techniques and the algebraic restoration techniques [1]. The transform related techniques involve analyzing the degraded image after an appropriate transform has been applied. The two popular transform related techniques are inverse filtering and Kalman filtering [2]. Inverse filtering produces a perfect restoration in the absence of noise, but the presence of noise degrades its result badly. The Kalman filter approach can be applied to non-stationary images, but it is computationally very intensive.

Algebraic techniques attempt to find a direct solution to the distortion by matrix inversion, or by an iterative method that minimizes a degradation measure. The two popular algebraic techniques available are pseudo inverse filtering and constrained image restoration. The pseudo inverse spatial restoration techniques attempt to restore an image by considering the vector space model of the image degradation and restoring the image in this vector space domain. This technique does not take into account the effects of noise in the calculation of the pseudo inverse and so is sensitive to noise in the image. It involves finding an approximation to the inverse of the matrix blurring operator, which is multiplied with the column-scanned image vector to produce the degraded image; blur matrices are very large, however, and it is not computationally feasible to invert them. Constrained restoration techniques are often based on Wiener estimation and regression techniques. One of the major drawbacks of most image restoration algorithms is their computational complexity, so much so that many simplifying assumptions have been made to obtain computationally feasible algorithms.
A number of new restoration methodologies have been developed in recent years, and there is growing interest in new aspects of the image restoration problem. Motivated by the biological neural network, in which the processing power lies in a large number of neurons linked by synaptic weights, artificial neural network models attempt to achieve good performance via dense interconnection of simple computational elements. Neural network models have great potential in areas where high computation rates are required and where the current best systems are far from equalling human performance. Restoration of a high quality image from a degraded recording is a good application area for neural nets. Joon K. Paik and Aggelos K. Katsaggelos [9] proposed a modified Hopfield neural network model for solving the restoration problem which improves upon the algorithm proposed by Zhou et al. [10].

Among the various neural networks, this paper uses a new nonlinear neural network that differs from the conventional neural network, called the morphological neural network. Operations based on lattice algebra [3,7] have found widespread applications in the engineering sciences. Artificial neural networks whose computations are based on lattice algebra have become known as morphological neural networks (MNNs).

Mathematical morphology is a nonlinear filtering method applicable to binary or gray images, but for colour images the research is still at a preliminary stage. In this paper the colour image is divided into the RGB distribution and processed in the morphological way used for gray images. The organization of this paper is as follows. In section 2, the basic concept of mathematical morphology is presented. In section 3, the new MLM neural network model, the training of the MNN and the MLM neural network colour morphological filter based on the RGB distribution are presented. In section 4, the implementation of the approach is presented. In section 5, experimental results are presented, which include fast neural computation and the improvement in signal-to-noise ratio for various sizes of the structuring elements. Finally, the conclusion about the present work is discussed in section 6.

2. Mathematical Morphology

In essence, mathematical morphology is a method to extract and describe information contained within an image. Davidson employed a morphological neural network to solve template identification and target classification problems [11]. C. P. Suarez-Araujo applied morphological neural networks to compute homothetic auditory and visual invariances [12]. Another interesting network, consisting of a morphological net and a classical feedforward network used for feature extraction and classification, was designed by P. D. Gader [13]. With the help of mathematical morphology it is possible to determine qualities such as the shape, size and structure of the objects contained within an image. Furthermore, by choosing to extract only specific information, unwanted elements such as noise can be removed from the image while preserving relevant image data. This is achieved using a structuring element, which is also an image, although usually smaller and of much simpler shape. By choosing a suitable structuring element and applying it to the image, any morphological transformation [2] can be calculated.

The input gray image is represented by f(x,y) and the structuring element by b(x,y). Let Z denote the set of integers; the assumption is that (x,y) are integers from Z2 and that f and b are functions that assign a gray value from 0 to Lmax to each distinct pair of coordinates (x,y).

The gray-value 'dilation' is defined as:

(f ⊕ b)(s,t) = max { f(s−x, t−y) + b(x,y) | (s−x), (t−y) ∈ Df ; (x,y) ∈ Db }      (1)

where Df and Db are the domains of f and b, respectively. Here b is the structuring element of the morphological process and is now a function rather than a set. Dilation is based on choosing the maximum value of f+b in a neighborhood defined by the shape of the structuring element.

The gray-value 'erosion' is defined as:

(f ⊖ b)(s,t) = min { f(s+x, t+y) − b(x,y) | (s+x), (t+y) ∈ Df ; (x,y) ∈ Db }      (2)
Erosion is based on choosing the minimum value of f−b in a neighborhood defined by the shape of the structuring element.
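As an illustration of equations (1) and (2), a minimal C++ sketch of gray-value dilation and erosion is given below; the Image type, the centring of the structuring element on (0,0) and the function names grayDilate and grayErode are assumptions of this sketch, not constructs from the paper.

```cpp
#include <vector>
#include <algorithm>
#include <limits>

using Image = std::vector<std::vector<double>>;   // gray values indexed as image[row][col]

// Gray-value dilation, equation (1): (f (+) b)(s,t) = max{ f(s-x,t-y) + b(x,y) }.
// The structuring element b is a small (odd-sized) image centred on (0,0).
Image grayDilate(const Image& f, const Image& b)
{
    const int rows = (int)f.size(), cols = (int)f[0].size();
    const int br = (int)b.size() / 2, bc = (int)b[0].size() / 2;
    Image out(rows, std::vector<double>(cols));
    for (int s = 0; s < rows; ++s)
        for (int t = 0; t < cols; ++t) {
            double best = -std::numeric_limits<double>::infinity();
            for (int x = -br; x <= br; ++x)
                for (int y = -bc; y <= bc; ++y) {
                    const int fs = s - x, ft = t - y;                // (s-x, t-y) must lie in Df
                    if (fs < 0 || ft < 0 || fs >= rows || ft >= cols) continue;
                    best = std::max(best, f[fs][ft] + b[x + br][y + bc]);
                }
            out[s][t] = best;
        }
    return out;
}

// Gray-value erosion, equation (2): (f (-) b)(s,t) = min{ f(s+x,t+y) - b(x,y) }.
Image grayErode(const Image& f, const Image& b)
{
    const int rows = (int)f.size(), cols = (int)f[0].size();
    const int br = (int)b.size() / 2, bc = (int)b[0].size() / 2;
    Image out(rows, std::vector<double>(cols));
    for (int s = 0; s < rows; ++s)
        for (int t = 0; t < cols; ++t) {
            double best = std::numeric_limits<double>::infinity();
            for (int x = -br; x <= br; ++x)
                for (int y = -bc; y <= bc; ++y) {
                    const int fs = s + x, ft = t + y;                // (s+x, t+y) must lie in Df
                    if (fs < 0 || ft < 0 || fs >= rows || ft >= cols) continue;
                    best = std::min(best, f[fs][ft] - b[x + br][y + bc]);
                }
            out[s][t] = best;
        }
    return out;
}
```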

The gray-value 'open' is defined as:

f ○ b = (f ⊖ b) ⊕ b      (3)

In practical applications, opening operations are usually applied to remove small light details while leaving the overall gray levels and larger bright features relatively undisturbed. The initial erosion removes the small details, but it also darkens the image. The subsequent dilation again increases the brightness of the image without reintroducing the details removed by the erosion.

The gray-value 'close' is defined as:

f • b = (f ⊕ b) ⊖ b      (4)

Closing is generally used to remove dark details from an image while leaving bright features relatively undisturbed. The initial dilation removes the dark details and brightens the image, and the subsequent erosion darkens the image without reintroducing the details removed by the dilation.
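Continuing the sketch above, opening and closing can be written directly as the compositions in equations (3) and (4); grayDilate and grayErode are the helpers from the previous sketch and are assumed to be in scope, so this fragment is not self-contained on its own.

```cpp
// Composite operators, equations (3) and (4), built from the earlier helpers.

// Opening: erosion followed by dilation with the same structuring element.
Image grayOpen(const Image& f, const Image& b)
{
    return grayDilate(grayErode(f, b), b);   // removes small bright details
}

// Closing: dilation followed by erosion with the same structuring element.
Image grayClose(const Image& f, const Image& b)
{
    return grayErode(grayDilate(f, b), b);   // removes small dark details
}
```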
These basic operations are used in the proposed MLM neural network, which is described in the following section.

3. Multi-layer Morphological Neural Network

The concept of morphological neural networks grew out of the theory of image algebra. Lattice based matrix operations have found widespread applications in the engineering sciences. In these applications, the usual matrix operations of addition and multiplication are replaced by corresponding lattice operations. Applications to image processing were first developed by Ritter and Davidson [3,11].

3.1 Computational Basis for Morphological Neural Network

Artificial neural network models are specified by the network topology, node characteristics, and training or learning rules. The underlying algebraic system used in these models is the set of real numbers R together with the operations of addition and multiplication and the laws governing these operations. This algebraic system, known as a ring, is commonly denoted by (R, +, ×). The two basic equations governing the theory of computation in the standard neural network model are:

Ti(t+1) = Σj=1..n aj(t)·wij      (5)

and ai(t+1) = f( Ti(t+1) − θi )      (6)

where aj(t) denotes the value of the jth neuron at time t, n represents the number of neurons in the network, wij the synaptic connectivity value between the ith neuron and the jth neuron, Ti(t+1) the next total input effect on the ith neuron, θi a threshold, and f the next-state function, which usually introduces a nonlinearity into the network. Although not all current network models can be precisely described by these equations, they can nevertheless be viewed as variations of these.

The mathematical framework of the morphological neural network model is based on lattice theory and image algebra. The algebraic lattice structure is given by (R, ∧, ∨, +), where ∧ and ∨ denote the binary operations of minimum and maximum respectively, and R the set of real numbers. On this basis, the equations describing the behavior of a neuron in such neural networks are given by the following relations:

Ti(t+1) = ∧j=1..n ( aj(t) + wij )      (7)

or Ti(t+1) = ∨j=1..n ( aj(t) + wij )      (8)

and ai(t+1) = f( Ti(t+1) − θi )      (9)

The difference between the classical models and the morphological model is the computation of the next total input effect on the ith neuron, which is given by

∨j=1..n ( aj(t) + wij ) = ( a1(t) + wi1 ) ∨ ( a2(t) + wi2 ) ∨ … ∨ ( an(t) + win )      (10)

or

∧j=1..n ( aj(t) + wij ) = ( a1(t) + wi1 ) ∧ ( a2(t) + wi2 ) ∧ … ∧ ( an(t) + win )      (11)

Equations (7) and (8) represent the basic operations of erosion and dilation that form the foundation of mathematical morphology; hence the name morphological neural network for the proposed model.
3.2 Morphological Neural Network Model

The mathematical model of the morphological neuron is shown in Figure 1.

Figure 1: Morphological neuron
The topological structure of the MLM neural network is shown in Figure 2.

Figure 2: Morphological structure of an MNN

xi(t) - input of the MLM neural network
N - number of neurons in the hidden layer
uj(t) - output of the jth hidden neuron
v(t) - output of the whole network
wij - synaptic strength between the ith node of the input layer and the jth neuron of the hidden layer
wj1 - synaptic strength between the jth node of the hidden layer and the output layer

The relation between these parameters can be described as follows:

uj(t) = fj( ∨i=1..n ( xi(m) + wij ) )      (12)

or

uj(t) = fj( ∧i=1..n ( xi(m) + wij ) )      (13)

The output layer is composed of just one node and can be described as:

v(t) = f( Σj=1..N wj1·uj(t) )      (14)

where f(z) = z and fj(z) = 1/(1 + e^(−z)).
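To show how equations (12)-(14) fit together, the following C++ sketch implements the forward pass of such a network: a hidden layer of N morphological neurons with additive synapses and a max (or min) aggregation followed by the sigmoid fj, and a single linear output node. The struct name MLMNetwork and its layout are assumptions of this sketch, not the authors' implementation.

```cpp
#include <vector>
#include <algorithm>
#include <cmath>
#include <limits>
#include <cstddef>

// One hidden layer of N morphological neurons followed by a single linear output
// node, following equations (12)-(14). useMax selects the dilation form (12);
// otherwise the erosion form (13) is used.
struct MLMNetwork
{
    std::vector<std::vector<double>> wIn;   // wIn[j][i]: input i -> hidden neuron j
    std::vector<double> wOut;               // wOut[j]  : hidden neuron j -> output
    bool useMax = true;

    static double sigmoid(double z) { return 1.0 / (1.0 + std::exp(-z)); }   // f_j

    // Returns the restored gray value v(t) for one input window x (e.g. a 3x3 mask).
    double forward(const std::vector<double>& x, std::vector<double>* hidden = nullptr) const
    {
        const std::size_t N = wIn.size();
        std::vector<double> u(N);
        for (std::size_t j = 0; j < N; ++j) {
            double agg = useMax ? -std::numeric_limits<double>::infinity()
                                :  std::numeric_limits<double>::infinity();
            for (std::size_t i = 0; i < x.size(); ++i) {
                const double v = x[i] + wIn[j][i];               // additive synapse
                agg = useMax ? std::max(agg, v) : std::min(agg, v);
            }
            u[j] = sigmoid(agg);                                 // u_j(t), eq. (12)/(13)
        }
        double v = 0.0;                                          // output f(z) = z, eq. (14)
        for (std::size_t j = 0; j < N; ++j) v += wOut[j] * u[j];
        if (hidden) *hidden = u;
        return v;
    }
};
```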

3.3 Training of Morphological Neural Network

The training of a morphological neural network can be viewed as an unrestricted optimization problem [4,5]. The goal is to minimize a performance index J(Winp, Wout), for example the mean squared error (MSE), with respect to the network parameters w:

J(Winp, Wout) = (1/N) Σt Σi ( yit − dit )^2      (15)

where N refers to the number of training patterns and yit and dit are, respectively, the actual and desired network responses at the ith output for the tth training pattern. Several algorithms for training MLP networks are available; amongst them, back propagation (BP) is the most widely deployed in practice. It is a generalization of the least mean squares algorithm (thus, a gradient descent method) in which the weights are adjusted with respect to the cost function J(w) using the following learning rule:

wk+1 = wk − ηk·gk      (16)

where

gk = [ ∂J(w1k)/∂w1k   ∂J(w2k)/∂w2k   …   ∂J(wnk)/∂wnk ]      (17)

and ηk is the learning rate at the kth iteration.

Adopting the steepest descent algorithm, the synaptic values of the network are modified as follows:
wij(t+1) = wij(t) − η·∂J/∂wij(t) + α[ wij(t) − wij(t−1) ]
         = wij(t) + η·δj·xi(t) + α·Δwij(t)      (18)

and

Wj1(t+1) = Wj1(t) − η·∂J/∂Wj1(t) + α[ Wj1(t) − Wj1(t−1) ]
         = Wj1(t) + η·δ·Uj(t) + α·ΔWj1(t)      (19)

where

δ = ( yi − di ) · f′( Σj=1..N Wj1·Uj(t) )      (20)

δj = fj′( ∨ or ∧ ( Xi(t) + wij(t) ) ) · δ · Wj1      (21)

η is the learning rate, α the moving factor (momentum) rate, and fj′(·) = Uj(t)( 1 − Uj(t) ).
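A compact C++ sketch of one such steepest-descent update with momentum is given below, building on the MLMNetwork struct from the earlier sketch (assumed to be in scope). The learning rate η and moving factor α map to eta and alpha; routing the hidden-layer error through the winning (arg-max or arg-min) input is this sketch's sub-gradient treatment of the morphological aggregation, stated here as an assumption rather than the authors' exact rule.

```cpp
#include <vector>
#include <cstddef>

// One training step in the spirit of equations (15)-(19): output delta from the
// linear node (f(z)=z, so f'(z)=1), hidden delta via f_j' = u_j(1-u_j), and
// weight changes combining gradient descent with a momentum term.
struct MLMTrainer
{
    double eta = 0.5, alpha = 0.3;
    std::vector<std::vector<double>> dWIn;   // previous input-weight updates
    std::vector<double> dWOut;               // previous output-weight updates

    double step(MLMNetwork& net, const std::vector<double>& x, double target)
    {
        if (dWOut.empty()) {                                   // lazy initialisation
            dWOut.assign(net.wOut.size(), 0.0);
            dWIn.assign(net.wIn.size(), std::vector<double>(x.size(), 0.0));
        }
        std::vector<double> u;
        const double y = net.forward(x, &u);
        const double delta = y - target;                       // output error

        for (std::size_t j = 0; j < net.wOut.size(); ++j) {
            // hidden delta: delta_j = f_j'(net_j) * delta * w_j1, with f_j' = u_j(1-u_j)
            const double deltaJ = u[j] * (1.0 - u[j]) * delta * net.wOut[j];

            // winning input of the max/min aggregation for hidden neuron j
            std::size_t best = 0;
            for (std::size_t i = 1; i < x.size(); ++i) {
                const bool better = net.useMax
                    ? (x[i] + net.wIn[j][i] > x[best] + net.wIn[j][best])
                    : (x[i] + net.wIn[j][i] < x[best] + net.wIn[j][best]);
                if (better) best = i;
            }
            dWIn[j][best] = -eta * deltaJ + alpha * dWIn[j][best];
            net.wIn[j][best] += dWIn[j][best];

            dWOut[j] = -eta * delta * u[j] + alpha * dWOut[j];
            net.wOut[j] += dWOut[j];
        }
        return 0.5 * delta * delta;    // per-pattern contribution to the index J in eq. (15)
    }
};
```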
3.4 Colour Morphological Filter Based on RGB Distribution

The colour image is divided into the RGB distribution as X = [XR, XG, XB], with structuring elements S = [SR, SG, SB]. The basic colour morphological operators are defined as follows:

Erosion operator:
X ⊖ S = [ XR ⊖ SR, XG ⊖ SG, XB ⊖ SB ]

Dilation operator:
X ⊕ S = [ XR ⊕ SR, XG ⊕ SG, XB ⊕ SB ]

Open operator:
X ○ S = [ XR ○ SR, XG ○ SG, XB ○ SB ]

Close operator:
X • S = [ XR • SR, XG • SG, XB • SB ]

where ⊖ and ⊕ represent Minkowski subtraction and addition, respectively.

The four basic operators of the colour morphological algorithm can thus be defined by extension from the gray scale operators [6,8]. Let Y be the output image of the filter. The definitions of the erosion and dilation operators, and their realization based on the morphological neural network, are given as follows:

Erosion operator:
Y = X ⊖ S = { y*(i,j) = ∧ ( x*(i−k, j+l) + s*(−k,−l) ), (k,l) ∈ S }      (22)

where x*(i−k, j+l) = Xi(t), s*(k,l) = Wij and * represents R, G or B respectively.

Dilation operator:
Y = X ⊕ S = { y*(i,j) = ∨ ( x*(i−k, j+l) + s*(−k,−l) ), (k,l) ∈ S }      (23)

where x*(i−k, j+l) = Xi(t) and s*(k,l) = Wij.

The open and close operators are composites of these two:

Open operator:
X ○ S = (X ⊖ S) ⊕ S      (24)

Close operator:
X • S = (X ⊕ S) ⊖ S      (25)
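A channel-wise realization of these colour operators is sketched below; it simply applies the gray-scale grayErode and grayDilate helpers from the earlier sketch (assumed to be in scope, together with the Image alias) to each of the R, G and B planes, as the definitions above prescribe. The type aliases ColourImage and ColourSE are introduced here for illustration only.

```cpp
#include <array>
#include <vector>

// Channel-wise colour operators: the colour image X = [X_R, X_G, X_B] and the
// structuring elements S = [S_R, S_G, S_B] are processed band by band.
using ColourImage = std::array<Image, 3>;          // index 0,1,2 -> R,G,B planes
using ColourSE    = std::array<Image, 3>;

ColourImage colourErode(const ColourImage& X, const ColourSE& S)
{
    return { grayErode(X[0], S[0]), grayErode(X[1], S[1]), grayErode(X[2], S[2]) };
}

ColourImage colourDilate(const ColourImage& X, const ColourSE& S)
{
    return { grayDilate(X[0], S[0]), grayDilate(X[1], S[1]), grayDilate(X[2], S[2]) };
}

// Open and close are again the composites of the two basic operators.
ColourImage colourOpen (const ColourImage& X, const ColourSE& S) { return colourDilate(colourErode(X, S), S); }
ColourImage colourClose(const ColourImage& X, const ColourSE& S) { return colourErode(colourDilate(X, S), S); }
```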
4. Implementation

In this paper, colour images are divided into the RGB distribution. Each subspace can then be regarded as a gray image space and is processed in the morphological way used for gray images. The block diagram of the proposed method is shown in Figure 3.

Figure 3: Block diagram of the proposed method

The proposed method is tested on three kinds of artificially degraded colour images. The three degradation models are blur, random noise and a combination of both.
The testing image is the 256*256 Lena image. Three morphological sub networks are applied to process the degraded image divided into the RGB distribution, with each network working on one colour band. A morphological sub network is shown in Figure 4. A 3 x 3 structuring element is chosen, and a group of gray values is selected as the input signals, namely the 3 x 3 mask that has the pixel (i,j) as its centre. The learning rate is set to 0.5 and the moving factor (momentum) rate to 0.3. The initial synaptic values are set randomly.

Fig 4: Morphological sub network
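The per-pixel processing described here might look like the following C++ sketch: each colour band gets its own randomly initialised sub network, and for every pixel the 3 x 3 mask centred on (i,j) is fed to the network as the input vector. It reuses the Image alias and the MLMNetwork struct from the earlier sketches; the helper names restoreBand and makeSubNetwork, the border replication and the initialisation range are assumptions of this illustration.

```cpp
#include <vector>
#include <random>
#include <algorithm>
#include <cstddef>

// Restores one colour band with a morphological sub network: for every pixel the
// 3x3 mask centred on (i,j) forms the input vector, and the network output is the
// restored gray value of that pixel.
Image restoreBand(const Image& degraded, const MLMNetwork& net)
{
    const int rows = (int)degraded.size(), cols = (int)degraded[0].size();
    Image restored(rows, std::vector<double>(cols, 0.0));
    for (int i = 0; i < rows; ++i)
        for (int j = 0; j < cols; ++j) {
            std::vector<double> window;                     // 3x3 neighbourhood of (i,j)
            window.reserve(9);
            for (int di = -1; di <= 1; ++di)
                for (int dj = -1; dj <= 1; ++dj) {
                    const int ii = std::min(std::max(i + di, 0), rows - 1);   // replicate border
                    const int jj = std::min(std::max(j + dj, 0), cols - 1);
                    window.push_back(degraded[ii][jj]);
                }
            restored[i][j] = net.forward(window);
        }
    return restored;
}

// Random initialisation of the synaptic values; one such sub network per colour band.
MLMNetwork makeSubNetwork(std::size_t hidden, std::size_t inputs, unsigned seed)
{
    std::mt19937 gen(seed);
    std::uniform_real_distribution<double> dist(-0.5, 0.5);
    MLMNetwork net;
    net.wIn.assign(hidden, std::vector<double>(inputs));
    net.wOut.assign(hidden, 0.0);
    for (auto& row : net.wIn) for (double& w : row) w = dist(gen);
    for (double& w : net.wOut) w = dist(gen);
    return net;
}
```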

5. Experiments and Results

The restoration algorithm using the MNN is implemented on a Pentium IV machine in Visual C++. Different images such as Lena, mandrill, flower and boat are used to evaluate the performance of the algorithm. The various images are degraded by using i) a blur function, ii) random noise and iii) a combination of both blur and random noise.

In order to evaluate the performance of this method, several 8 bit test images have been used. Figure 5.1 shows the original Lena image. In the first experiment, we degrade the image using only a blur function (Figure 5.2); the corresponding restored image is shown in Figure 5.3. In the second experiment we degrade the image with random noise: Figure 5.4 shows the degraded image and Figure 5.5 the corresponding restored image. In the third experiment we degrade the image using a combination of both blur and noise; the resulting image is shown in Figure 5.6, and Figure 5.7 shows the corresponding restored image obtained using the morphological neural network.

Figures 5.1 to 5.7: Images with blur, noise, and both, with the corresponding MNN-based restored images.

The quality of the restored images has been assessed by the peak signal-to-noise ratio (PSNR). The restored images obtained with this approach are of good visual quality with higher SNR, and the experimental results obtained are tabulated. All three experiments described above are carried out using a structuring element of size 3*3. The experiments are also conducted with structuring elements of various sizes ranging from 3*3 to 9*9, and the results are tabulated in Table 1. The time taken in each of the three cases is also recorded and given in Table 2.
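For reference, PSNR figures of the kind reported below can be computed for an 8 bit band as PSNR = 10·log10(255^2 / MSE); a small C++ sketch is given here, with the function name psnr being an assumption of this illustration (for a colour image the squared errors of the three bands can be accumulated before taking the logarithm).

```cpp
#include <cmath>
#include <vector>
#include <cstddef>

// Peak signal-to-noise ratio in dB between an original and a restored 8-bit band.
double psnr(const std::vector<std::vector<double>>& original,
            const std::vector<std::vector<double>>& restored)
{
    double mse = 0.0;
    std::size_t count = 0;
    for (std::size_t i = 0; i < original.size(); ++i)
        for (std::size_t j = 0; j < original[i].size(); ++j) {
            const double diff = original[i][j] - restored[i][j];
            mse += diff * diff;
            ++count;
        }
    mse /= static_cast<double>(count);
    return 10.0 * std::log10(255.0 * 255.0 / mse);
}
```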
Table 1. PSNR values of the restored image for different sizes of structuring elements

Type of degradation              | PSNR of corrupted image (dB) | Restored, 3*3 SE (dB) | 5*5 SE | 7*7 SE | 9*9 SE
Image with blur                  | 17.812                       | 32.220                | 33.359 | 33.506 | 35.363
Image with random noise          | 19.497                       | 32.221                | 34.328 | 34.817 | 35.711
Image with blur and random noise | 16.078                       | 31.228                | 32.567 | 32.690 | 34.889
Table 2. Processing time in seconds for different sizes of structuring elements

Type of degradation              | 3*3 SE | 5*5 SE | 7*7 SE | 9*9 SE
Image with blur                  | 7      | 10     | 11     | 12
Image with random noise          | 10     | 13     | 14     | 15
Image with blur and random noise | 11     | 14     | 15     | 16
Figure 6: Variation of PSNR for the restored images for different structuring elements (PSNR in dB plotted against structuring element sizes 3*3 to 9*9 for the images restored from blur, from random noise, and from blur with random noise).

From Figure 6 it is found that, as the size of the structuring element is increased, the quality of the reconstructed image also improves considerably.

6. Conclusion

An MLM neural network based on the morphological neuron, together with its algorithm, is presented in this paper. The advantages of this method include fast neural computation and low computational complexity owing to the small number of neurons used, while convergence problems and lengthy training algorithms are avoided. Taking advantage of its high degree of nonlinearity, the MLM neural network serves efficiently in colour image restoration. From the experimental results it is evident that the restored image quality in terms of the signal-to-noise ratio is improved. In addition, the proposed algorithm takes less time to process and restore the image compared to other conventional algorithms.

References

[1] H. C. Andrews and B. R. Hunt, "Digital Image Restoration", Prentice Hall, Englewood Cliffs, NJ, 1977.
[2] Rafael C. Gonzalez and Richard E. Woods, "Digital Image Processing", 2nd edition, Addison-Wesley, 2004.
[3] G. X. Ritter, "An introduction to morphological neural networks", Proc. of the 13th International Conference on Pattern Recognition, Vol. IV, Track D, pp. 709-717, Austria, 1996.
[4] Peter Sussner, "Morphological perceptron learning", Proc. of the IEEE ISIC/CIRA/ISAS Joint Conference, Gaithersburg, MD, September 1998.
[5] Clodoaldo Ap. M. Lima et al., "Hybrid training of morphological neural networks: A comparative study", DCA-FEEC-UNICAMP, Aug 2000.
[6] Ling Zhang, Yun Zhang and Yi Min Yang, "Colour image restoration with multilayer morphological neural network", Proceedings of the Second International Conference on Machine Learning and Cybernetics, Xi'an, November 2003.
[7] Gerhard X. Ritter, "Lattice algebra approach to single neuron computation", IEEE Transactions on Neural Networks, Vol. 14, No. 2, pp. 282-295, March 2003.
[8] Lucio F. C. Pessoa and Petros Maragos, "Morphological/rank neural networks and their adaptive optimal design for image processing", Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, Atlanta, GA, May 1996.
[9] Joon K. Paik and Aggelos K. Katsaggelos, "Image restoration using a modified Hopfield network", IEEE Transactions on Image Processing, Vol. 1, No. 1, pp. 49-63, January 1992.
[10] Y. T. Zhou, R. Chellappa and B. K. Jenkins, "Image restoration using a neural network", IEEE Trans. Acoust., Speech, Signal Processing, Vol. ASSP-36, pp. 1141-1151, July 1988.
[11] J. L. Davidson and F. Hummer, "Morphological neural networks: An introduction with applications", IEEE Systems Signal Processing, Vol. 12, No. 2, pp. 177-210, 1993.
[12] J. L. Davidson and R. Srivatsa, "Fuzzy image algebra neural network for template identification", in 2nd Annu. Midwest Electrotechnol. Conf., Ames, IA, pp. 68-71, Apr. 1993.
[13] Y. Won and P. D. Gader, "Morphological shared weight neural network for pattern classification and automatic target detection", in Proc. 1995 IEEE Int. Conf. Neural Networks, Perth, Australia, Vol. 4, pp. 2134-2138, Nov. 1995.
