
Imperial College London

Department of Electrical and Electronic Engineering


Digital Image Processing
PART 3
IMAGE RESTORATION
Academic responsible
Dr. Tania STATHAKI
Room 811b
Ext. 46229
Email: t.stathaki@ic.ac.uk
http://www.commsp.ee.ic.ac.uk/~tania/
1. Preliminaries
1.1 What is image restoration?
Image Restoration refers to a class of methods that aim to remove or reduce the degradations that have
occurred while the digital image was being obtained. All natural images, when displayed, have gone
through some sort of degradation:
- during display mode,
- during acquisition mode, or
- during processing mode.
The degradations may be due to:
- sensor noise,
- blur due to camera misfocus,
- relative object-camera motion,
- random atmospheric turbulence,
- others.
In most of the existing image restoration methods we assume that the degradation process can be
described using a mathematical model.
1.2 How well can we do?
How well we can do depends on how much we know about
- the original image, and
- the degradations
(that is, on how accurate our models are).
1.3 Image restoration and image enhancement differences
Image restoration differs from image enhancement in that the latter is concerned more with
accentuation or extraction of image features rather than restoration of degradations.
Image restoration problems can be quantified precisely, whereas enhancement criteria are
difficult to represent mathematically.
1.4 Image observation models
Typical parts of an imaging system: an image formation system, a detector and a recorder. A general
model for such a system could be:

y(i,j) = r[w(i,j)] + n(i,j)

w(i,j) = H[f(i,j)] = ∫∫ h(i,j,i',j') f(i',j') di' dj'

n(i,j) = g( r[w(i,j)] ) n_1(i,j) + n_2(i,j)

where y(i,j) is the degraded image, f(i,j) is the original image and h(i,j,i',j') is an operator that
represents the degradation process, for example a blurring process. The functions g(·) and r(·) are
generally nonlinear, and represent the characteristics of the detector/recording mechanisms. n(i,j) is the
additive noise, which has an image-dependent random component g( r[H[f(i,j)]] ) n_1(i,j) and an
image-independent random component n_2(i,j).
1.5 Detector and recorder models
The response of image detectors and recorders is in general nonlinear. An example is the response of
image scanners

r(i,j) = α w(i,j)^β

where α and β are device-dependent constants and w(i,j) is the input blurred image.
For photofilms

r(i,j) = γ log_10 w(i,j) - r_0

where γ is called the gamma of the film, w(i,j) is the incident light intensity and r(i,j) is called the
optical density. A film is called positive if it has negative γ.
1.6 Noise models
The general noise model

n(i,j) = g( r[w(i,j)] ) n_1(i,j) + n_2(i,j)

is applicable in many situations. For example, in photoelectronic systems we may have g(x) = √x.
Therefore,

n(i,j) = √(w(i,j)) n_1(i,j) + n_2(i,j)
where n_1 and n_2 are zero-mean, mutually independent, Gaussian white noise fields. The term
n_2(i,j) may be referred to as thermal noise. In the case of films there is no thermal noise and the noise
model is

n(i,j) = ( γ log_10 w(i,j) - r_0 ) n_1(i,j)

Because of the signal-dependent term in the noise model, restoration algorithms are quite difficult.
Often w(i,j) is replaced by its spatial average, w̄, giving

n(i,j) = g( r[w̄] ) n_1(i,j) + n_2(i,j)

which makes n(i,j) a Gaussian white noise random field. A linear observation model for
photoelectronic devices is

y(i,j) = w(i,j) + √(w̄) n_1(i,j) + n_2(i,j)
For photographic films with γ = 1,

y(i,j) = log_10 w(i,j) - r_0 + a n_1(i,j)

where a and r_0 are constants and r_0 can be ignored.
The light intensity associated with the observed optical density y(i,j) is

I(i,j) = 10^( y(i,j) ) = w(i,j) 10^( a n_1(i,j) ) = w(i,j) n(i,j)

where n(i,j) = 10^( a n_1(i,j) ) now appears as multiplicative noise having a log-normal distribution.
Keep in mind that we are just referring to the most popular image observation models. In the literature
you can find quite a large number of different image observation models. Image restoration
algorithms are based on the above image formation models.
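To make the model concrete, here is a minimal Python/NumPy sketch (not part of the original notes) that simulates the linear photoelectronic observation model above; the input w(i,j) and all noise parameters are arbitrary assumptions chosen only for illustration.

import numpy as np

rng = np.random.default_rng(0)

w = rng.uniform(0.2, 1.0, size=(64, 64))      # assumed blurred input w(i,j)

sigma1, sigma2 = 0.05, 0.02                   # assumed noise standard deviations
n1 = sigma1 * rng.standard_normal(w.shape)    # signal-dependent component n_1(i,j)
n2 = sigma2 * rng.standard_normal(w.shape)    # thermal, image-independent component n_2(i,j)

w_bar = w.mean()                              # spatial average replacing w(i,j)
y = w + np.sqrt(w_bar) * n1 + n2              # linear photoelectronic observation model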
1.7 A general model of a simplified digital image degradation process
A simplified version of the image degradation model is

y(i,j) = H[f(i,j)] + n(i,j)

where
y(i,j) is the degraded image,
f(i,j) is the original image,
H is an operator that represents the degradation process, and
n(i,j) is the external noise, which is assumed to be image-independent.
1.8 Possible classification of restoration methods
Restoration methods could be classified as follows:
- deterministic: we work with sample-by-sample processing of the observed (degraded) image;
- stochastic: we work with the statistics of the images involved in the process;
- non-blind: the degradation process H is known;
- blind: the degradation process H is unknown;
- semi-blind: the degradation process H can be considered partly known.
From the viewpoint of implementation they can be:
- direct,
- iterative, or
- recursive.
2. Linear position invariant degradation models
2.1 Definition
We again consider the general degradation model

y(i,j) = H[f(i,j)] + n(i,j)

If we ignore the presence of the external noise n(i,j) we get

y(i,j) = H[f(i,j)]

H is linear if

H[ k_1 f_1(i,j) + k_2 f_2(i,j) ] = k_1 H[f_1(i,j)] + k_2 H[f_2(i,j)]

H is position (or space) invariant if

H[ f(i-a, j-b) ] = y(i-a, j-b)

From now on we will deal with linear, space-invariant degradations.
In real-life problems many types of degradation can be approximated by linear, position-invariant
processes.
Advantage: Extensive tools of linear system theory become available.
Disadvantage: In some real life problems nonlinear and space variant models would be more
appropriate for the description of the degradation phenomenon.
2.2 Typical linear position invariant degradation models
Motion blur. It occurs when there is relative motion between the object and the camera during
exposure.

h(i) = 1/L    if -L/2 ≤ i ≤ L/2
h(i) = 0      otherwise

Atmospheric turbulence. It is due to random variations in the refractive index of the medium
between the object and the imaging system, and it occurs in the imaging of astronomical objects.

h(i,j) = K exp( -(i² + j²) / (2σ²) )

Uniform out-of-focus blur.

h(i,j) = 1/(πR²)    if √(i² + j²) ≤ R
h(i,j) = 0          otherwise

Uniform 2-D blur.

h(i,j) = 1/L²    if -L/2 ≤ i, j ≤ L/2
h(i,j) = 0       otherwise
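The four models can be generated numerically. The following Python sketch is illustrative only; the grid size, blur length L, radius R and constants K, σ are assumed parameters and the discrete kernels are not renormalised.

import numpy as np

def motion_blur(L, size):
    # 1-D motion blur of length L on a 'size'-sample grid
    h = np.zeros(size)
    i = np.arange(size) - size // 2
    h[np.abs(i) <= L / 2] = 1.0 / L
    return h

def atmospheric(K, sigma, size):
    # Gaussian-shaped atmospheric turbulence PSF
    i, j = np.meshgrid(np.arange(size) - size // 2,
                       np.arange(size) - size // 2, indexing="ij")
    return K * np.exp(-(i**2 + j**2) / (2 * sigma**2))

def out_of_focus(R, size):
    # uniform disc of radius R
    i, j = np.meshgrid(np.arange(size) - size // 2,
                       np.arange(size) - size // 2, indexing="ij")
    return (np.sqrt(i**2 + j**2) <= R) / (np.pi * R**2)

def uniform_2d(L, size):
    # uniform L x L square blur
    i, j = np.meshgrid(np.arange(size) - size // 2,
                       np.arange(size) - size // 2, indexing="ij")
    return ((np.abs(i) <= L / 2) & (np.abs(j) <= L / 2)) / L**2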

2.3 Some characteristic metrics for degradation models
Blurred Signal-to-Noise Ratio (BSNR): a metric that describes the degradation model.

BSNR = 10 log_10 { [ (1/MN) Σ_i Σ_j ( g(i,j) - ḡ(i,j) )² ] / σ_n² }

where
g(i,j) = y(i,j) - n(i,j)
ḡ(i,j) = E{ g(i,j) }
σ_n² : variance of the additive noise

Improvement in SNR (ISNR): validates the performance of the image restoration algorithm.

ISNR = 10 log_10 { Σ_i Σ_j [ f(i,j) - y(i,j) ]² / Σ_i Σ_j [ f(i,j) - f̂(i,j) ]² }

where f̂(i,j) is the restored image.
Both BSNR and ISNR can only be used in simulations with artificial data.
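A minimal Python sketch of the two metrics, assuming the original image f, the noise-free blurred image g (its global mean is used here in place of ḡ), the noise variance, the observation y and a restored image f_hat are available as NumPy arrays; all names are illustrative.

import numpy as np

def bsnr(g, sigma_n2):
    # BSNR = 10 log10( mean power of (g - mean(g)) / noise variance )
    return 10 * np.log10(np.mean((g - g.mean())**2) / sigma_n2)

def isnr(f, y, f_hat):
    # ISNR = 10 log10( ||f - y||^2 / ||f - f_hat||^2 )
    return 10 * np.log10(np.sum((f - y)**2) / np.sum((f - f_hat)**2))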
2.4 One dimensional discrete degradation model. Circular convolution
Suppose we have a one-dimensional discrete signal f(i) of size A samples, f(0), f(1), ..., f(A-1),
which is subjected to a degradation process. The degradation can be modeled by a one-dimensional
discrete impulse response h(i) of size B samples. If we assume that the degradation is a causal function
we have the samples h(0), h(1), ..., h(B-1). We form the extended versions of f(i) and h(i), both of
size M ≥ A + B - 1 and periodic with period M. These can be denoted as f_e(i) and h_e(i). For a time-
invariant degradation process we obtain the discrete (circular) convolution formulation

y_e(i) = Σ_{m=0}^{M-1} f_e(m) h_e(i-m) + n_e(i)

Using matrix notation we can write

y = Hf + n

f = [ f_e(0), f_e(1), ..., f_e(M-1) ]^T

H = [ h_e(0)     h_e(-1)    ...  h_e(-M+1)
      h_e(1)     h_e(0)     ...  h_e(-M+2)
      ...        ...        ...  ...
      h_e(M-1)   h_e(M-2)   ...  h_e(0)    ]        (an M x M matrix)

At the moment we decide to ignore the external noise n. Because h_e is periodic with period M we
have that

H = [ h_e(0)     h_e(M-1)   ...  h_e(1)
      h_e(1)     h_e(0)     ...  h_e(2)
      ...        ...        ...  ...
      h_e(M-1)   h_e(M-2)   ...  h_e(0)  ]

We define λ(k) to be

λ(k) = h_e(0) + h_e(M-1) exp( j(2π/M) k ) + h_e(M-2) exp( j(2π/M) 2k ) + ... + h_e(1) exp[ j(2π/M)(M-1)k ],   k = 0, 1, ..., M-1

Because exp[ j(2π/M)(M-i)k ] = exp( -j(2π/M) ik ) we have that

λ(k) = M H(k)

where H(k) is the discrete Fourier transform of h_e(i).
We define w(k) to be

w(k) = [ 1, exp( j(2π/M) k ), ..., exp( j(2π/M)(M-1) k ) ]^T

It can be seen that

H w(k) = λ(k) w(k)

This implies that λ(k) is an eigenvalue of the matrix H and w(k) is its corresponding eigenvector.
We form a matrix W whose columns are the eigenvectors of the matrix H, that is to say

W = [ w(0)  w(1)  ...  w(M-1) ]

with elements

w(k,i) = exp( j(2π/M) ki )   and   w^{-1}(k,i) = (1/M) exp( -j(2π/M) ki )

We can then diagonalize the matrix H as follows

H = W D W^{-1}  ⇒  D = W^{-1} H W

where

D = diag( λ(0), λ(1), ..., λ(M-1) )

Obviously D is a diagonal matrix and

D(k,k) = λ(k) = M H(k)

If we go back to the degradation model we can write

y = Hf  ⇒  y = W D W^{-1} f  ⇒  W^{-1} y = D W^{-1} f  ⇒  Y(k) = M H(k) F(k),   k = 0, 1, ..., M-1

where Y(k), H(k), F(k), k = 0, 1, ..., M-1, are the M-sample discrete Fourier transforms of y(i), h(i), f(i)
respectively. So by choosing λ(k) and w(k) as above and assuming that h_e(i) is periodic, we start
with a matrix problem and end up with M scalar problems.
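The result can be checked numerically. The Python sketch below (an illustration under assumed signal sizes, not part of the notes) builds the circulant matrix H from h_e and verifies that the product Hf equals the circular convolution computed through M-sample DFTs.

import numpy as np

A, B = 6, 3
M = A + B - 1                                # extended period, M >= A + B - 1
f = np.zeros(M); f[:A] = np.random.rand(A)   # f_e(i): zero-padded original signal
h = np.zeros(M); h[:B] = np.random.rand(B)   # h_e(i): zero-padded impulse response

# circulant matrix H built from the periodic extension h_e
H = np.array([[h[(i - m) % M] for m in range(M)] for i in range(M)])

y_matrix = H @ f                             # y = Hf
# same result through the DFT: Y(k) = M H(k) F(k) with the 1/M-normalized DFT,
# i.e. simply Y(k) = DFT{h_e} * DFT{f_e} with NumPy's unnormalized FFT
y_dft = np.real(np.fft.ifft(np.fft.fft(h) * np.fft.fft(f)))

assert np.allclose(y_matrix, y_dft)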
2.5 Two dimensional discrete degradation model. Circular convolution
Suppose we have a two-dimensional discrete signal f(i,j) of size A x B samples which is subjected to a
degradation process. The degradation can now be modeled by a two-dimensional discrete impulse
response h(i,j) of size C x D samples. We form the extended versions of f(i,j) and h(i,j), both
of size M x N, where M ≥ A + C - 1 and N ≥ B + D - 1, and periodic with period M x N. These can
be denoted as f_e(i,j) and h_e(i,j). For a space-invariant degradation process we obtain

y_e(i,j) = Σ_{m=0}^{M-1} Σ_{n=0}^{N-1} f_e(m,n) h_e(i-m, j-n) + n_e(i,j)

Using matrix notation we can write

y = Hf + n

where f and y are MN-dimensional column vectors that represent the lexicographic ordering of the
images f_e(i,j) and y_e(i,j) respectively, and H is a block-circulant matrix

H = [ H_0       H_{M-1}   ...  H_1
      H_1       H_0       ...  H_2
      ...       ...       ...  ...
      H_{M-1}   H_{M-2}   ...  H_0  ]

whose blocks are the circulant matrices

H_j = [ h_e(j,0)     h_e(j,N-1)   ...  h_e(j,1)
        h_e(j,1)     h_e(j,0)     ...  h_e(j,2)
        ...          ...          ...  ...
        h_e(j,N-1)   h_e(j,N-2)   ...  h_e(j,0)  ]

The analysis of the diagonalisation of H is a straightforward extension of the one-dimensional case.
In that case we end up with the following set of M x N scalar problems:

Y(u,v) = MN H(u,v) F(u,v) + N(u,v),   u = 0, 1, ..., M-1,  v = 0, 1, ..., N-1

In the general case we may have two functions f(i), A ≤ i ≤ B, and h(i), C ≤ i ≤ D, where A and C can
also be negative (in that case the functions are non-causal). For the periodic convolution we have to
extend the functions from both sides, knowing that the convolution g(i) = h(i) * f(i) is defined for
A + C ≤ i ≤ B + D.
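As a sketch (with sizes chosen arbitrarily for illustration), the 2-D result can again be verified with FFTs: padding f and h to M x N = (A+C-1) x (B+D-1) and multiplying their 2-D DFTs reproduces the periodic convolution.

import numpy as np

A, B, C, D = 8, 8, 3, 3
M, N = A + C - 1, B + D - 1              # extended sizes avoiding wrap-around
f = np.random.rand(A, B)
h = np.random.rand(C, D)

fe = np.zeros((M, N)); fe[:A, :B] = f    # f_e(i,j)
he = np.zeros((M, N)); he[:C, :D] = h    # h_e(i,j)

# periodic (circular) convolution through the 2-D DFT
y = np.real(np.fft.ifft2(np.fft.fft2(he) * np.fft.fft2(fe)))

# direct check of the circular-convolution sum at one pixel
i0, j0 = 5, 5
s = sum(fe[m, n] * he[(i0 - m) % M, (j0 - n) % N]
        for m in range(M) for n in range(N))
assert np.isclose(y[i0, j0], s)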
3. Direct deterministic approaches to restoration
3.1 Inverse filtering
The objective is to minimize

J(f) = || n(f) ||² = || y - Hf ||²

We set the first derivative of the cost function equal to zero:

∂J(f)/∂f = 0  ⇒  -2 H^T ( y - Hf ) = 0  ⇒  f̂ = ( H^T H )^{-1} H^T y

If M = N and H^{-1} exists, then

f̂ = H^{-1} y

According to the previous analysis, if H (and therefore H^{-1}) is block circulant the above problem can
be solved as a set of M x N scalar problems as follows:

F̂(u,v) = H*(u,v) Y(u,v) / |H(u,v)|²   ⇒   f̂(i,j) = IDFT{ H*(u,v) Y(u,v) / |H(u,v)|² }
3.1.1 Computational issues concerning inverse filtering
(I) Suppose first that the additive noise n(i,j) is negligible. A problem arises if H(u,v)
becomes very small or zero for some point (u,v) or for a whole region in the (u,v) plane. In
that region inverse filtering cannot be applied. Note that in most real applications H(u,v)
drops off rapidly as a function of distance from the origin. The solution is that if these points
are known they can be neglected in the computation of F̂(u,v).
(II) In the presence of external noise we have that (writing Y(u,v) here for the noise-free part of
the observed spectrum)

F̂(u,v) = H*(u,v) [ Y(u,v) + N(u,v) ] / |H(u,v)|²
        = H*(u,v) Y(u,v) / |H(u,v)|²  +  H*(u,v) N(u,v) / |H(u,v)|²

F̂(u,v) = F(u,v) + N(u,v) / H(u,v)

If H(u,v) becomes very small, the term N(u,v)/H(u,v) dominates the result. The solution is again
to carry out the restoration process in a limited neighborhood about the origin where H(u,v)
is not very small. This procedure is called pseudoinverse filtering. In that case we set

F̂(u,v) = H*(u,v) Y(u,v) / |H(u,v)|²    if |H(u,v)| ≥ T
F̂(u,v) = 0                             if |H(u,v)| < T

The threshold T is defined by the user. In general, the noise may very well possess large
components at high frequencies (u,v), while H(u,v) and Y(u,v) normally will be
dominated by low-frequency components.
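A minimal Python sketch of pseudoinverse filtering, assuming a circular (periodic) degradation so that it is diagonalised by the 2-D DFT; the PSF is assumed to be already zero-padded to the image size and the threshold value is an arbitrary choice.

import numpy as np

def pseudo_inverse_filter(y, h, T=1e-2):
    # restore y = h (circularly convolved with) f + n by thresholded inverse filtering
    Y = np.fft.fft2(y)
    H = np.fft.fft2(h)                    # h assumed zero-padded to y.shape
    F_hat = np.zeros_like(Y)
    mask = np.abs(H) >= T                 # keep only frequencies where |H(u,v)| >= T
    F_hat[mask] = np.conj(H[mask]) * Y[mask] / (np.abs(H[mask]) ** 2)
    return np.real(np.fft.ifft2(F_hat))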
3.2 Constrained least squares (CLS) restoration
This refers to a very large family of restoration algorithms.
The problem can be formulated as follows:

minimize   J(f) = || n(f) ||² = || y - Hf ||²
subject to || Cf ||² < ε

where Cf is a high-pass filtered version of the image. The idea behind the above constraint is that
the high-pass version of the image contains a considerably large amount of noise. Algorithms of
the above type can be handled using optimization techniques. Constrained least squares (CLS)
restoration can be formulated by choosing an f to minimize the Lagrangian

min_f { || y - Hf ||² + α || Cf ||² }

A typical choice for C is the 2-D Laplacian operator given by

C = [  0.00  -0.25   0.00
      -0.25   1.00  -0.25
       0.00  -0.25   0.00 ]

α represents either a Lagrange multiplier or a fixed parameter known as the regularisation parameter,
and it controls the relative contribution between the term || y - Hf ||² and the term || Cf ||². The
minimization of the above leads to the following estimate of the original image

f̂ = ( H^T H + α C^T C )^{-1} H^T y
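When H and C are block circulant, the CLS estimate can be evaluated in the frequency domain. The sketch below is one possible Python implementation under that assumption; the PSF handling, the padding and the regularisation value are illustrative.

import numpy as np

def cls_restore(y, h, alpha=0.01):
    # frequency-domain CLS: F_hat = H* Y / (|H|^2 + alpha |C|^2)
    M, N = y.shape
    Y = np.fft.fft2(y)
    H = np.fft.fft2(h, s=(M, N))
    lap = np.array([[0.0, -0.25, 0.0],
                    [-0.25, 1.0, -0.25],
                    [0.0, -0.25, 0.0]])       # 2-D Laplacian operator C
    C = np.fft.fft2(lap, s=(M, N))
    F_hat = np.conj(H) * Y / (np.abs(H) ** 2 + alpha * np.abs(C) ** 2)
    return np.real(np.fft.ifft2(F_hat))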
3.2.1 Computational issues concerning the CLS method
(I) Choice of α
The problem of the choice of α has been attempted in a large number of studies and different
techniques have been proposed. One possible choice is based on a set theoretic approach: a
restored image is approximated by an image which lies in the intersection of the two
ellipsoids defined by

Q_{f|y} = { f : || y - Hf ||² ≤ E² }
Q_f     = { f : || Cf ||² ≤ ε² }

The center of one of the ellipsoids which bounds the intersection of Q_{f|y} and Q_f is given by
the equation

f̂ = ( H^T H + α C^T C )^{-1} H^T y

with α = ( E / ε )². Another problem is then the choice of E² and ε². One choice could be

α = 1 / BSNR

Comments
With larger values of α, and thus more regularisation, the restored image tends to have more
ringing. With smaller values of α, the restored image tends to have more amplified noise
effects. The variance and bias of the error image in the frequency domain are

Var(α) = Σ_{u=0}^{M-1} Σ_{v=0}^{N-1}  σ_n² |H(u,v)|²  /  [ |H(u,v)|² + α |C(u,v)|² ]²

Bias(α) = Σ_{u=0}^{M-1} Σ_{v=0}^{N-1}  α² |F(u,v)|² |C(u,v)|⁴  /  [ |H(u,v)|² + α |C(u,v)|² ]²

The minimum MSE is encountered close to the intersection of the above two functions.
A good choice of α is one that gives the best compromise between the variance and bias of
the error image.
4. Iterative deterministic approaches to restoration
They refer to a large class of methods that have been investigated extensively over the last decades.
They possess the following advantages:
- There is no need to explicitly implement the inverse of an operator.
- The restoration process is monitored as it progresses.
- Termination of the algorithm may take place before convergence.
- The effects of noise can be controlled in each iteration.
- The algorithms used can be spatially adaptive.
- The problem specifications are very flexible with respect to the type of degradation. Iterative
  techniques can be applied in cases of spatially varying or nonlinear degradations or in cases
  where the type of degradation is completely unknown (blind restoration).
In general, iterative restoration refers to any technique that attempts to minimize a function of the
form M(f) using an updating rule for the partially restored image.
4.1 Least squares iteration
In this case we seek a solution that minimizes the function

M(f) = || y - Hf ||²

A necessary condition for M(f) to have a minimum is that its gradient with respect to f is equal to
zero. This gradient is given by

∇_f M(f) = ∂M(f)/∂f = -2 H^T ( y - Hf )

and by using the steepest descent type of optimization we can formulate an iterative rule as follows:

f_0 = β H^T y
f_{k+1} = f_k + β H^T ( y - H f_k ) = β H^T y + ( I - β H^T H ) f_k

where β is the step-size (relaxation) parameter.
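A sketch of this iteration in Python, assuming a circular blur so that multiplication by H^T corresponds to multiplication by the conjugate transfer function in the DFT domain; the step size β and the iteration count are illustrative choices.

import numpy as np

def ls_iteration(y, h, beta=1.0, n_iter=100):
    # f_{k+1} = f_k + beta * H^T (y - H f_k), carried out per frequency
    # converges for 0 < beta < 2 / max(|H(u,v)|^2)
    Y = np.fft.fft2(y)
    H = np.fft.fft2(h, s=y.shape)
    F = beta * np.conj(H) * Y                 # f_0 = beta H^T y
    for _ in range(n_iter):
        F = F + beta * np.conj(H) * (Y - H * F)
    return np.real(np.fft.ifft2(F))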
4.2 Constrained least squares iteration
In this method we attempt to solve the problem of constrained restoration iteratively. As already
mentioned, the following functional is minimized:

M(f, α) = || y - Hf ||² + α || Cf ||²

The necessary condition for a minimum is that the gradient of M(f, α) is equal to zero. That gradient
is

∇_f M(f, α) = 2 [ ( H^T H + α C^T C ) f - H^T y ]

The initial estimate and the updating rule for obtaining the restored image are now given by

f_0 = β H^T y
f_{k+1} = f_k + β [ H^T y - ( H^T H + α C^T C ) f_k ]

It can be proved that the above iteration (known as the iterative CLS or Tikhonov-Miller method)
converges if

0 < β < 2 / λ_max

where λ_max is the maximum eigenvalue of the matrix

H^T H + α C^T C

If the matrices H and C are block-circulant the iteration can be implemented in the frequency
domain.
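For block-circulant H and C the iteration reduces to independent per-frequency updates. The following Python sketch (all parameter values assumed for illustration) also applies the normalized-change termination criterion discussed later in section 4.6.

import numpy as np

def iterative_cls(y, h, alpha=0.01, beta=1.0, n_iter=200, tol=1e-6):
    M, N = y.shape
    Y = np.fft.fft2(y)
    H = np.fft.fft2(h, s=(M, N))
    lap = np.array([[0.0, -0.25, 0.0],
                    [-0.25, 1.0, -0.25],
                    [0.0, -0.25, 0.0]])          # Laplacian regularisation operator C
    C = np.fft.fft2(lap, s=(M, N))
    A = np.abs(H) ** 2 + alpha * np.abs(C) ** 2  # spectrum of H^T H + alpha C^T C
    F = beta * np.conj(H) * Y                    # f_0 = beta H^T y
    for _ in range(n_iter):
        F_new = F + beta * (np.conj(H) * Y - A * F)
        # normalized change in energy used as termination criterion
        if np.sum(np.abs(F_new - F) ** 2) / np.sum(np.abs(F) ** 2) <= tol:
            F = F_new
            break
        F = F_new
    return np.real(np.fft.ifft2(F))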
4.3 Projection onto convex sets (POCS)
The set-based approach described previously can be generalized so that any number of prior
constraints can be imposed as long as the constraint sets are closed convex.
If the constraint sets have a non-empty intersection, then a solution that belongs to the intersection set
can be found by the method of POCS. Any solution in the intersection set is consistent with the a
priori constraints and therefore it is a feasible solution. Let Q_1, Q_2, ..., Q_m be closed convex sets in a
finite dimensional vector space, with P_1, P_2, ..., P_m their respective projectors. The iterative procedure

f_{k+1} = P_1 P_2 ... P_m f_k

converges to a vector that belongs to the intersection of the sets Q_i, i = 1, 2, ..., m, for any starting
vector f_0. An iteration of the form f_{k+1} = P_1 P_2 f_k can be applied to the problem described previously,
where we seek an image which lies in the intersection of the two ellipsoids defined by

Q_{f|y} = { f : || y - Hf ||² ≤ E² }   and   Q_f = { f : || Cf ||² ≤ ε² }

The respective projections P_1 f and P_2 f are defined by

P_1 f = f + λ_1 ( I + λ_1 H^T H )^{-1} H^T ( y - Hf )
P_2 f = [ I - λ_2 ( I + λ_2 C^T C )^{-1} C^T C ] f
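A generic Python sketch of the POCS iteration, with the projectors left as user-supplied functions; this is only a schematic illustration of alternating projections, not an implementation of the exact projection operators P_1 and P_2 given above.

import numpy as np

def pocs(f0, projectors, n_iter=50, tol=1e-6):
    # f_{k+1} = P1 P2 ... Pm f_k, applied until the iterate stops changing
    f = f0.copy()
    for _ in range(n_iter):
        f_new = f
        for P in reversed(projectors):    # apply Pm first, then ..., P2, P1
            f_new = P(f_new)
        if np.linalg.norm(f_new - f) / (np.linalg.norm(f) + 1e-12) <= tol:
            return f_new
        f = f_new
    return f

# example usage with two simple convex-set projectors (illustrative only):
# projectors = [lambda f: np.clip(f, 0.0, 1.0),        # bounded pixel values
#               lambda f: np.minimum(f, f.mean() + 1)] # another convex constraint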
4.4 Spatially adaptive iteration
The functional to be minimized takes the form

M(f, α) = || y - Hf ||²_{W_1} + α || Cf ||²_{W_2}

where

|| y - Hf ||²_{W_1} = ( y - Hf )^T W_1 ( y - Hf )
|| Cf ||²_{W_2} = ( Cf )^T W_2 ( Cf )

W_1 and W_2 are diagonal matrices, the choice of which can be justified in various ways. The entries in
both matrices are non-negative values, less than or equal to unity. In that case

∇_f M(f, α) = 2 [ ( H^T W_1 H + α C^T W_2 C ) f - H^T W_1 y ]

A more specific case is

M(f, α) = || y - Hf ||² + α || Cf ||²_W

where the weighting matrix is incorporated only in the regularization term. This method is known as
weighted regularised image restoration. The entries in the matrix W are chosen so that the high-pass
filter is effective only in areas of low activity, and very little smoothing takes place in the edge areas.
4.5 Robust functionals
Robust functionals allow for the efficient suppression of a wide variety of noise processes and permit
the reconstruction of sharper edges than their quadratic counterparts. We seek to minimize

M(f, α) = R_n( y - Hf ) + α R_x( Cf )

where R_n(·) and R_x(·) are referred to as the residual and stabilizing functionals respectively.
4.6 Computational issues concerning iterative techniques
(I) Convergence
The contraction mapping theorem usually serves as a basis for establishing convergence of
iterative algorithms. According to it, the iteration

f_0 = 0
f_{k+1} = Ψ( f_k )

converges to a unique fixed point f*, that is, a point such that Ψ( f* ) = f*, for any initial
vector, if the operator or transformation Ψ(f) is a contraction. This means that for any two
vectors f_1 and f_2 in the domain of Ψ(f) the following relation holds

|| Ψ( f_1 ) - Ψ( f_2 ) || ≤ η || f_1 - f_2 ||

with η < 1 and any norm. The above condition is norm dependent.
(II) Rate of convergence
The termination criterion most frequently used compares the normalized change in energy at
each iteration to a threshold, for example

|| f_{k+1} - f_k ||² / || f_k ||² ≤ 10^{-6}
5. Stochastic approaches to restoration
5.1 Wiener estimator (stochastic regularisation)
The image restoration problem can be viewed as a system identification problem: the original image
f(i,j) passes through the degradation system H and is corrupted by the additive noise n(i,j), producing
the observation y(i,j); a linear restoration filter W is then sought which produces the estimate
f̂ = W y.
The objective is to minimize the following function

E{ ( f - f̂ )^T ( f - f̂ ) }

To do so the following conditions should hold:
(i)  E{ f̂ } = E{ f } = W E{ y }
(ii) The error must be orthogonal to the observation about the mean:  E{ ( f - f̂ )( y - E{y} )^T } = 0
From (i) and (ii) we have that

E{ ( f - W y )( y - E{y} )^T } = 0  ⇒  E{ ( f - W E{y} + W E{y} - W y )( y - E{y} )^T } = 0
⇒  E{ [ ( f - E{f} ) - W ( y - E{y} ) ] ( y - E{y} )^T } = 0

If ỹ = y - E{y} and f̃ = f - E{f} then

E{ ( f̃ - W ỹ ) ỹ^T } = 0  ⇒  E{ f̃ ỹ^T } = W E{ ỹ ỹ^T }  ⇒  R_{f̃ỹ} = W R_{ỹỹ}

If the original and the degraded image are both zero mean then R_{ỹỹ} = R_{yy} and R_{f̃ỹ} = R_{fy}. In that
case we have that W R_{yy} = R_{fy}. If we go back to the degradation model and find the autocorrelation
matrix of the degraded image then we get that

y = Hf + n  ⇒  y^T = f^T H^T + n^T

R_{yy} = E{ y y^T } = H R_{ff} H^T + R_{nn}
R_{fy} = E{ f y^T } = R_{ff} H^T

From the above we get the following results

W = R_{fy} R_{yy}^{-1} = R_{ff} H^T ( H R_{ff} H^T + R_{nn} )^{-1}

f̂ = R_{ff} H^T ( H R_{ff} H^T + R_{nn} )^{-1} y

Note that knowledge of R_{ff} and R_{nn} is assumed. In the frequency domain

W(u,v) = S_{ff}(u,v) H*(u,v) / ( |H(u,v)|² S_{ff}(u,v) + S_{nn}(u,v) )

F̂(u,v) = [ S_{ff}(u,v) H*(u,v) / ( |H(u,v)|² S_{ff}(u,v) + S_{nn}(u,v) ) ] Y(u,v)
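A frequency-domain Wiener filter sketch in Python, assuming that estimates of the power spectra S_ff and S_nn are available as arrays (or scalars); how they are obtained in practice is discussed next.

import numpy as np

def wiener_restore(y, h, S_ff, S_nn):
    # F_hat = S_ff H* / (|H|^2 S_ff + S_nn) * Y
    Y = np.fft.fft2(y)
    H = np.fft.fft2(h, s=y.shape)
    W = S_ff * np.conj(H) / (np.abs(H) ** 2 * S_ff + S_nn)
    return np.real(np.fft.ifft2(W * Y))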

5.1.1 Computational issues


The noise variance has to be known; otherwise it is estimated from a flat region of the observed
image. In practical cases where a single copy of the degraded image is available, it is quite common to
use S_{yy}(u,v) as an estimate of S_{ff}(u,v). This is very often a poor estimate.
5.1.2 Wiener smoothing filter
In the absence of any blur, H(u,v) = 1 and

W(u,v) = S_{ff}(u,v) / ( S_{ff}(u,v) + S_{nn}(u,v) ) = (SNR) / ( (SNR) + 1 )

where (SNR) = S_{ff}(u,v) / S_{nn}(u,v).
(i)  (SNR) >> 1  ⇒  W(u,v) ≈ 1
(ii) (SNR) << 1  ⇒  W(u,v) ≈ (SNR)
(SNR) is high at low spatial frequencies and low at high spatial frequencies, so W(u,v) can be
implemented with a lowpass (smoothing) filter.
5.1.3 Relation with inverse filtering
If S_{nn}(u,v) = 0  ⇒  W(u,v) = 1 / H(u,v), which is the inverse filter.
If S_{nn}(u,v) → 0,

lim_{S_{nn} → 0} W(u,v) = 1 / H(u,v)    if H(u,v) ≠ 0
lim_{S_{nn} → 0} W(u,v) = 0             if H(u,v) = 0

which is the pseudoinverse filter.
5.1.4 Iterative Wiener filters
They refer to a class of iterative procedures that successively use the Wiener-filtered signal as an
improved prototype to update the covariance estimates of the original image, as follows.

Step 0: Initial estimate of R_{ff}:
        R_{ff}(0) = R_{yy} = E{ y y^T }
Step 1: Construct the i-th restoration filter:
        W(i+1) = R_{ff}(i) H^T ( H R_{ff}(i) H^T + R_{nn} )^{-1}
Step 2: Obtain the (i+1)-th estimate of the restored image:
        f̂(i+1) = W(i+1) y
Step 3: Use f̂(i+1) to compute an improved estimate of R_{ff}, given by
        R_{ff}(i+1) = E{ f̂(i+1) f̂^T(i+1) }
Step 4: Increase i and repeat steps 1, 2, 3, 4.
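A sketch of this procedure in the frequency domain, where the covariance updates become power-spectrum (periodogram) updates; this per-frequency approximation and the parameter values are assumptions made for illustration only.

import numpy as np

def iterative_wiener(y, h, S_nn, n_iter=10):
    Y = np.fft.fft2(y)
    H = np.fft.fft2(h, s=y.shape)
    S_ff = np.abs(Y) ** 2 / Y.size                               # Step 0: initial S_ff estimated from y
    F_hat = Y                                                    # placeholder, refined below
    for _ in range(n_iter):
        W = S_ff * np.conj(H) / (np.abs(H) ** 2 * S_ff + S_nn)   # Step 1: restoration filter
        F_hat = W * Y                                            # Step 2: restored estimate
        S_ff = np.abs(F_hat) ** 2 / Y.size                       # Step 3: improved S_ff estimate
    return np.real(np.fft.ifft2(F_hat))                          # Step 4: repetition handled by the loop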