
Technische Universiteit Delft

Faculteit Elektrotechniek, Wiskunde en Informatica

Delft Institute of Applied Mathematics

Regularisatie methodes voor Seismische Tomografie

(Regularization methods for Seismic Tomography)

Report submitted to the

Delft Institute of Applied Mathematics
in partial fulfilment of the requirements for the degree of

BACHELOR OF SCIENCE
in
APPLIED MATHEMATICS (TECHNISCHE WISKUNDE)

by

Cindy Caljouw

Delft, the Netherlands
June 2015

© 2015 by Cindy Caljouw. All rights reserved.
BSc thesis APPLIED MATHEMATICS (TECHNISCHE WISKUNDE)

“Regularisatie methodes voor Seismische Tomografie”

(“Regularization methods for Seismic Tomography”)

Cindy Caljouw

Supervisor

Dr.ir. M.B. van Gijzen

Other committee members

Prof.dr.ir. C. Vuik

Prof.dr.ir. G. Jongbloed

Dr.ir. M. Keijzer

June 2015, Delft

Abstract

In seismic tomography, parts of the Earth are reconstructed from the travel times of seismic waves
measured by seismometers. These tomographic problems can be formulated as a system of linear
equations. Unfortunately, the system is usually ill-posed, because of ray bending and noise. This
problem arises not only in geophysics but also in other sciences. Regularization methods replace
the ill-posed problem with a "nearby" well-posed problem whose solution is a good approximation
of the true solution. The main goal of this report is to understand two regularization methods,
Tikhonov regularization and another method which we will call Saunders regularization, and to
investigate when these regularization methods are equivalent and which regularization matrices
can be used for Saunders regularization.
Saunders and Tikhonov regularization are equivalent when the identity matrix is used as
regularization matrix, but when an arbitrary matrix is used the methods are not equivalent. We
also examined choices of the regularization matrix for both methods. It was not clear which matrix
one should use for Saunders regularization. For the Tikhonov regularization method we used an
operator that smooths the solution in both the horizontal and the vertical direction; for Saunders
regularization we also used a smoothing operator, but one that smooths the residuals instead of
the solution.

Preface

This thesis is the final product of my Bachelor Applied Mathematics at Delft University of Tech-
nology. In the last three years I have encountered different areas of mathematics; in this thesis
I combine two of them, linear algebra and numerical analysis.
Furthermore, I would like to thank my supervisor Martin van Gijzen for his support and for all
the time he made available for me. Every appointment with Martin could last more than an hour
and was very interesting as well as very entertaining. I also want to thank prof.dr.ir. C. Vuik,
prof.dr.ir. G. Jongbloed and dr.ir. M. Keijzer for their time and willingness to be a member of
my thesis committee.
Last of all, I wish you much pleasure in reading, criticizing and valuing my Bachelor thesis.
Cindy Caljouw,
Delft, June 2015

Contents

Abstract i

Preface iii

1 Introduction 1
1.1 Research Goal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Structure of the paper . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.3 Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2

2 Model Description 3
2.1 Test Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

3 Singular Value Decomposition Method 7

3.1 Singular Value Decomposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
3.2 Using the SVD for Least Squares Problem . . . . . . . . . . . . . . . . . . . . . . 7
3.3 Error Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
3.4 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

4 Tikhonov Regularization 17
4.1 Tikhonov I-regularization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
4.1.1 Error analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
4.1.2 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
4.2 Tikhonov M -regularization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
4.2.1 Generalized Singular Value Decomposition . . . . . . . . . . . . . . . . . . 22
4.2.2 Using GSVD for Tikhonov M -regularization . . . . . . . . . . . . . . . . . 23
4.2.3 Error analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
4.2.4 Choice of M . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
4.2.5 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26

5 Saunders Regularization 31
5.1 Saunders I-regularization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
5.1.1 Error analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
5.1.2 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
5.2 Saunders M -regularization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
5.2.1 Choice of M . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
5.2.2 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36


6 Comparison Saunders and Tikhonov Regularization 39

6.1 I-regularization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
6.1.1 Experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
6.2 M -regularization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
6.3 Summary Numerical Experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . 42

7 Conclusion 45

Bibliography 47

Appendices

A Invertible Matrices 49
A.1 AT A + λ2 I is invertible . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
A.2 AT A + λ2 M T M is invertible . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
A.3 AAT + λ2 I is invertible . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
A.4 AAT + λ2 M M T is invertible . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50

B Solutions Test Problem Narrow 51

C MATLAB Programs 59
C.1 SVD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
C.2 Tikhonov . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
C.2.1 I-regularization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
C.2.2 M1 -regularization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
C.2.3 M2 -regularization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
C.2.4 M3 -regularization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
C.3 Saunders . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
C.3.1 I-regularization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
C.3.2 M -regularization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
C.3.3 function plotsol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
C.3.4 function mmread . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
Chapter 1

Introduction

Seismologists want to know the composition of our Earth, but scientists are not the only ones
interested in the secrets that lie deep below the surface. Knowledge of the Earth's composition
is also important in the search for "liquid gold", oil and gas, and for coal. The first few kilometres
of the Earth's crust can be explored by drilling a hole, but the rest of the Earth's composition
cannot be determined by taking a peek. Moreover, drilling holes into the Earth is very expensive.
This is where the reconstruction techniques of seismic tomography come in handy.
Seismic tomography is the study of reconstructing slices of the Earth from its projections
(travel times of seismic waves). The word tomography comes from the Ancient Greek τόμος
tomos, meaning "slice", and γράφω graphō, meaning "to write". Tomography is applied in many
sciences: geophysics, medicine, astrophysics and more. For this reason, there is a constant need
for faster and better reconstruction techniques.

1.1 Research Goal

Most of the time, iterative reconstruction techniques are used to reconstruct objects. However,
some methods do not always converge in the presence of noise. Another problem is that seismic
waves do not travel in straight lines. This leads to an ill-posed system $Ax = b$, which can be
solved in the least squares sense with the normal equation $A^TAx = A^Tb$. Nonetheless, most
algorithms solve $AA^Ty = b$ with $x = A^Ty$, but this only works when the system is consistent.
However, we can replace the system with a "nearby" well-posed problem whose solution approximates
the original solution; this is known as a regularization method. There exists a little-known
regularization method that solves the normal equation $(AA^T + \lambda I)y = b$ with $x = A^Ty$, which
we call the Saunders $I$-regularization. Another, very popular regularization method is the
Tikhonov $M$-regularization, which solves the normal equation $(A^TA + \lambda^2 M^TM)x = A^Tb$.
This leads to the following research questions:

1. When are the Saunders and Tikhonov regularization methods equivalent?
2. Which regularization matrices can be used for Saunders regularization?

1.2 Structure of the paper

The structure of the paper is as follows. First, the mathematical model and the test problems are
described in Chapter 2. Second, Chapter 3 introduces the singular value decomposition, which


can be used to compute the least squares minimum norm solution, and its results. Afterwards,
we shall explain the Tikhonov regularization methods and test how well these methods perform
in Chapter 4. Moreover, we introduce the Saunders regularization method and show the test
results in Chapter 5. Chapter 6 compares the two regularization methods.
The end results are presented and suggestions for follow-up research are given in the conclusion.
Finally, the appendices contain, among other things, the MATLAB programs.

1.3 Notation
In this report mathematical notation is used; to avoid ambiguity, a list of notations can be
found in Table 1.1.

Mathematical Object        Notation                  Example
Vector                     Bold lowercase            x
Matrix                     Capital Latin or Greek    A, Σ
Euclidean norm (2-norm)    Double vertical lines     ‖x‖
Transpose                  Superscript T             A^T

Table 1.1: Notation used in this thesis

Note that every norm used in this report is the Euclidean norm.
Chapter 2

Model Description

In tomography, a 3-dimensional object is reconstructed from its 2-dimensional projected images.

By sending penetrating waves through 2-dimensional slices and measuring them on the other
side of the domain, projected images can be constructed. Combining these 2-dimensional images
gives a reconstruction of the 3-dimensional object. In seismic tomography, the penetrating waves
are seismic waves coming from earthquakes or other, artificial sources. These waves are measured
by seismometers. From the travel time of each wave, the seismic velocity can be approximated
with the following mathematical model.
First, the domain of computation is defined, after which the 2-dimensional space is divided into
a grid of pixels. The pixels are numbered from 1 to $n$, and the seismic waves, which are measured
by the seismographs, are numbered from 1 to $m$ with $m \gg n$. A schematic representation is
given in Figure 2.1.

Figure 2.1: Domain divided into n pixels

For each seismic wave $i$ along a ray, the travel time is given by a line integral:

$$b_i = \int_{\mathrm{ray}\ i} \frac{1}{v(r)}\, dr,$$

with $v(r)$ the seismic velocity at position $r$ in the domain.

Since the domain is divided into pixels, the integral can be written as a sum. Each pixel $j$ has a
constant seismic velocity $v_j$ and for every seismic wave $i$ the travel time $b_i$ is known. This gives
the following equations:

$$b_i = \sum_{j=1}^{n} a_{ij} x_j, \qquad \forall i = 1, \dots, m,$$

with $x_j = \frac{1}{v_j}$ the slowness of pixel $j$ and $a_{ij}$ the length of the $i$th ray through cell $j$. Thus, the
obtained matrix equation is:

$$Ax = b \tag{2.1}$$
(Further details can be found in [6].)
$A$ is a sparse $m \times n$ matrix determined by the structure of the problem. Most of the time the
rank of the matrix $A$ is $r$ with $r < n$, leading to an underdetermined system. Moreover, the system
can be ill-posed because of nonlinear effects, data errors, and approximations in the calculation
of the elements of $A$.
In a "perfect" world, the travel time vector $b_{model} = Ax_{model}$ is given by the model solution
$x_{model}$. Most of the time, however, there is noise, and the measured vector is $\hat{b}$ instead of
$b_{model}$. Noise can be seen as a vector $n$ added to $b_{model}$; we assume the noise is normally
distributed. So $\hat{b} = b_{model} + n$ with $\|n\| = \epsilon \|b_{model}\|$. Since in reality we do not know whether
the measured travel time vector is perturbed or not, we will call it $b$, which can be either the
model travel time vector $b_{model}$ or a perturbed travel time vector $\hat{b}$.
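The noise model above can be sketched in a few lines. A minimal NumPy illustration (the thesis's own experiments use MATLAB; the vector here is a random stand-in, not one of the test problems):

```python
import numpy as np

rng = np.random.default_rng(1)

b_model = rng.standard_normal(200)   # stand-in for a model travel time vector
eps = 0.01                           # noise level epsilon

# Draw normally distributed noise and rescale it so that ||n|| = eps * ||b_model||.
n = rng.standard_normal(b_model.size)
n *= eps * np.linalg.norm(b_model) / np.linalg.norm(n)

b_hat = b_model + n                  # the perturbed travel time vector
print(np.linalg.norm(n) / np.linalg.norm(b_model))   # 0.01 up to rounding
```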
As a result, there is usually no exact solution to equation (2.1). Therefore, the solution $x$ should
be found by solving the least squares problem:

$$\min_x \|Ax - b\| \tag{2.2}$$

A simple way to solve the least squares problem is by solving the normal equation:

$$A^T A x = A^T b \tag{2.3}$$
However, the solution of problem (2.2) is not unique, because every solution of $Ax = b$ is the
sum of a particular solution and a vector in the null space of $A$ [8]:

$$A(x_{particular} + x_{nullspace}) = Ax_{particular}$$

Hence an extra condition is needed:

$$\min \|x\|$$

All in all, the problem that needs to be solved is:

$$\min \|x\| \quad \text{subject to} \quad A^T A x = A^T b \tag{2.4}$$

When $Ax = b$ is consistent, there is also an alternative way to find the solution, by solving the
following problem:

$$AA^T y = b \quad \text{with} \quad x = A^T y \tag{2.5}$$

Note that this problem is equivalent to (2.4). Since $Ax = b$ is consistent, there exists at least one
solution $x$. The solution $x$ has minimum norm when $x$ lies in the row space of $A$, and this solution
is unique [1]. Because $x$ is in the row space of $A$, we can write $x = A^T y$. Thus (2.5) solves
exactly the same problem as (2.4).
Some algorithms solve (2.4), while others solve problem (2.5).
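The equivalence of (2.4) and (2.5) for a consistent system can be verified numerically. A NumPy sketch with a small random underdetermined system (a stand-in, not one of the test problems of this thesis):

```python
import numpy as np

rng = np.random.default_rng(0)

# m = 3 rays, n = 5 pixels: Ax = b is underdetermined but consistent by construction.
A = rng.standard_normal((3, 5))
x_model = rng.standard_normal(5)
b = A @ x_model

# Problem (2.4): the minimum norm least squares solution via the pseudoinverse.
x_pinv = np.linalg.pinv(A) @ b

# Problem (2.5): solve A A^T y = b and set x = A^T y (x lies in the row space of A).
y = np.linalg.solve(A @ A.T, b)
x_rowspace = A.T @ y

print(np.allclose(x_pinv, x_rowspace))                    # both routes agree
print(np.linalg.norm(x_pinv) <= np.linalg.norm(x_model))  # minimum norm property
```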

2.1 Test Problems

Before we can answer the research questions and solve problem (2.2), we introduce two test
problems. These test problems provide ill-posed problems that can be used to test the accuracy
of every method. We call the first test problem the Nolet test problem, which comes in two
variants, Wide and Narrow; this problem is described by Nolet in [6]. The second problem comes
from the MATLAB package AIR tools [5] and we therefore call it the test problem AIR tools.
For both test problems the solutions are known, so the global error of each method can be
calculated. The smaller the global error, the closer the estimated solution resembles the model
solution of the test problem, which can be seen in Figure 2.2.

Figure 2.2: Test problems

Test problem Wide of Nolet will be used to test every method. The global error of each method
will be displayed for test problem Wide. The results of test problem Narrow will be given in
the Appendix. With a few small differences like changing the number of singular values or the
regularization parameter, the same conclusions can be drawn as with the Wide test problem.
The same is true for test problem AIR tools.
Chapter 3

Singular Value Decomposition Method

The Singular Value Decomposition (SVD) is a common method to calculate the solution of the
least squares problem (2.4). First, we explain the SVD of a matrix.

3.1 Singular Value Decomposition

The SVD decomposes any matrix $A \in \mathbb{R}^{m \times n}$ with $m \ge n$ and $\operatorname{rank}(A) = r$ as

$$A = U \Sigma V^T \tag{3.1}$$

where $U \in \mathbb{R}^{m \times m}$ and $V \in \mathbb{R}^{n \times n}$ are orthogonal, and $\Sigma \in \mathbb{R}^{m \times n}$, such that

$$\Sigma = \begin{pmatrix} \Sigma_r & 0 \\ 0 & 0 \end{pmatrix}, \quad \text{with } \Sigma_r = \operatorname{diag}(\sigma_1, \sigma_2, \dots, \sigma_r)$$

and

$$\sigma_1 \ge \sigma_2 \ge \dots \ge \sigma_r > 0,$$

where the $\sigma_i$, $i \in \{1, 2, \dots, r\}$, are the singular values of $A$.

3.2 Using the SVD for Least Squares Problem

With the SVD of the matrix $A$, using $r$ singular values, the Moore-Penrose pseudoinverse
$A^+ = V \Sigma^+ U^T$ can be used to find the solution of the least squares problem (2.4):

$$x^+ = A^+ b = V \Sigma^+ U^T b, \quad \text{with } \Sigma^+ = \begin{pmatrix} \Sigma_r^{-1} & 0 \\ 0 & 0 \end{pmatrix} \tag{3.2}$$

In [8] the formula for $A^+$ is derived from the least squares problem (2.2); we repeat the derivation
here. Using the SVD, we minimise:

$$\min_x \|Ax - b\| = \min_x \left\| U \Sigma V^T x - b \right\| = \min_x \left\| \Sigma V^T x - U^T b \right\|$$

This is allowed since $U$ is an orthogonal matrix with the properties $\|Uz\| = \|z\|$ and $U^T U = I$.
Now define

$$y = \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} := V^T x \quad \text{and} \quad c = \begin{pmatrix} c_1 \\ c_2 \end{pmatrix} := U^T b,$$

with $y_1, c_1 \in \mathbb{R}^r$. Note that $\|y\| = \|x\|$. Thus, minimise the following:

$$\min_y \|\Sigma y - c\| = \min_{y_1, y_2} \left\| \begin{pmatrix} \Sigma_r & 0 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} - \begin{pmatrix} c_1 \\ c_2 \end{pmatrix} \right\| = \min_{y_1} \left\| \begin{pmatrix} \Sigma_r y_1 - c_1 \\ -c_2 \end{pmatrix} \right\|$$

Note that this is minimised for $y_1 = \Sigma_r^{-1} c_1$. $y_2$ can be chosen arbitrarily, but for minimum norm
we choose $y_2 = 0$. This leads to

$$x = V y = V \begin{pmatrix} \Sigma_r^{-1} c_1 \\ 0 \end{pmatrix} = V \begin{pmatrix} \Sigma_r^{-1} & 0 \\ 0 & 0 \end{pmatrix} U^T b = A^+ b$$

Thus, the SVD can be used to compute the least squares minimum norm solution.
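The truncated pseudoinverse solution can be written compactly with NumPy's SVD. A sketch under the assumption of a random test matrix with decaying singular values (the thesis's own MATLAB implementation is in Appendix C.1):

```python
import numpy as np

rng = np.random.default_rng(2)

# Random stand-in system with decaying singular values (not a Nolet problem).
m, n = 50, 20
A = rng.standard_normal((m, n)) * np.logspace(0, -5, n)   # scale columns to force decay
b = rng.standard_normal(m)

def tsvd_solve(A, b, r):
    """Minimum norm least squares solution using only the r largest singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)      # A = U diag(s) V^T
    return Vt[:r].T @ ((U[:, :r].T @ b) / s[:r])

# With all n singular values this reproduces the pseudoinverse solution x+ = A+ b.
x_full = tsvd_solve(A, b, n)
print(np.allclose(x_full, np.linalg.pinv(A) @ b))

# Truncation can only decrease the norm: ||x_r||^2 = sum_{i<=r} (u_i^T b / s_i)^2.
print(np.linalg.norm(tsvd_solve(A, b, 10)) <= np.linalg.norm(x_full))
```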

3.3 Error Analysis

Below, the error of the SVD method is determined; this analysis can also be found in [11].

The solution of the least squares problem, using the SVD method with $r$ singular values, is given
by:

$$x^+ = A^+ b = V \Sigma^+ U^T b$$

Here $b$ can be the exact right-hand side $b_{model}$ or the right-hand side with noise, $\hat{b}$. Define
$\alpha = V^T x_{model}$; then

$$b_{model} = A x_{model} = U \Sigma V^T x_{model} = U \Sigma \alpha.$$

Remember that noise can be seen as a vector added to the original $b_{model}$; in other words,
$\hat{b} = b_{model} + n$ with $\|n\| = \epsilon \|b_{model}\|$. Note that

$$n = \sum_{i=1}^{m} (u_i^T n)\, u_i$$

with $u_i$ the $i$th column of the orthogonal matrix $U$. Thus,

$$\hat{b} = A x_{model} + n = U \Sigma \alpha + \sum_{i=1}^{m} (u_i^T n)\, u_i$$

Now suppose there was noise in the measurements. Then the SVD solution of the least squares
problem can be written as

$$
\begin{aligned}
x^+ &= V \Sigma^+ U^T \hat{b} \\
&= V \Sigma^+ U^T \left( U \Sigma \alpha + \sum_{i=1}^{m} (u_i^T n)\, u_i \right) \\
&= V \Sigma^+ \left( \Sigma \alpha + U^T \sum_{i=1}^{m} (u_i^T n)\, u_i \right) \\
&= V \Sigma^+ \left( \Sigma \alpha + \begin{pmatrix} u_1^T n \\ \vdots \\ u_m^T n \end{pmatrix} \right)
\end{aligned}
$$

Since only $r$ singular values are used, the solution is:

$$x^+ = \sum_{i=1}^{r} v_i \, \frac{1}{\sigma_i} \left( \sigma_i \alpha_i + u_i^T n \right) = \sum_{i=1}^{r} \frac{\sigma_i \alpha_i + u_i^T n}{\sigma_i} \, v_i$$

With this information the global error can be determined:

$$
\begin{aligned}
\left\| x^+ - x_{model} \right\| &= \left\| \sum_{i=1}^{r} \frac{\sigma_i \alpha_i + u_i^T n}{\sigma_i} \, v_i - \sum_{i=1}^{n} \alpha_i v_i \right\| \\
&= \left\| \sum_{i=1}^{r} \frac{u_i^T n}{\sigma_i} \, v_i - \sum_{i=r+1}^{n} \alpha_i v_i \right\| \\
&= \left\| V \begin{pmatrix} \frac{u_1^T n}{\sigma_1} \\ \vdots \\ \frac{u_r^T n}{\sigma_r} \\ -\alpha_{r+1} \\ \vdots \\ -\alpha_n \end{pmatrix} \right\| \\
&= \sqrt{ \sum_{i=1}^{r} \left( \frac{u_i^T n}{\sigma_i} \right)^2 + \sum_{i=r+1}^{n} \alpha_i^2 }
\end{aligned}
$$

We see that when very small singular values $\sigma_i \ll 1$ are used, the global error becomes very
large. This can be avoided by not using the smallest singular values.
Suppose there was no noise, in other words $n = 0$; then the global error is:

$$\left\| x^+ - x_{model} \right\| = \sqrt{ \sum_{i=r+1}^{n} \alpha_i^2 }$$
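The noise-free error formula can be checked numerically. A NumPy sketch with a random full-rank stand-in matrix (assumptions: exact data, truncation to r singular values):

```python
import numpy as np

rng = np.random.default_rng(3)

m, n = 40, 15
A = rng.standard_normal((m, n))          # full column rank with probability 1
x_model = rng.standard_normal(n)
b_model = A @ x_model                    # exact data, i.e. no noise

U, s, Vt = np.linalg.svd(A, full_matrices=False)
alpha = Vt @ x_model                     # alpha = V^T x_model

r = 10                                   # truncate to r singular values
x_r = Vt[:r].T @ ((U[:, :r].T @ b_model) / s[:r])

# Predicted global error without noise: sqrt( sum_{i=r+1}^{n} alpha_i^2 ).
predicted = np.sqrt(np.sum(alpha[r:] ** 2))
actual = np.linalg.norm(x_r - x_model)
print(np.isclose(predicted, actual))
```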

3.4 Results
In this section the results of the SVD is used to solve the test problems. Since the global error
gets bigger when very small singular values are used, the singular values of each test problem
are plotted first.

Figure 3.1: All singular values of the Nolet test problem Wide

Figure 3.1 shows that the last three singular values, $\sigma_{198}$, $\sigma_{199}$ and $\sigma_{200}$, are very small. This
means that if we use the first 197 singular values, the global error should be the smallest and the
solution $x^+$ will look most like the model solution.

r     Global Error
150   2.2544·10−3
195   1.0043·10−3
197   1.5448·10−4
200   5.7700·10−3

Table 3.1: Global Error of test problem Wide without noise

The results can be found in Figure 3.2 and Table 3.1. As expected, Figure 3.2c shows more
resemblance to the model solution, Figure 2.2a, than the others. When there is noise, the SVD
solution using 197 singular values looks slightly better than the one for 195 singular values,
Figure 3.3, but 195 singular values gives the smaller global error.

r     Global Error
150   2.3833·10−3
195   2.7457·10−3
197   2.8380·10−3
200   3.5164·10^10

Table 3.2: Global Error of test problem Wide with noise ε = 0.01


Figure 3.2: Solution of test problem Wide without noise, using r singular values

The same can be done for the Narrow problem. Figure 3.4 shows that there are more small
singular values than for test problem Wide. The last 80 singular values are close to zero: $\sigma_i \ll 1$
for all $i = 121, \dots, 200$. The results of the SVD can be seen in Figure 3.5. Using 120 singular
values gives the best result; when more singular values are used, the solution becomes worse.
However, when there is noise, the best solution is obtained with fewer singular values, namely
100; see Figure 3.6. This can be explained with the analytic global error: since each noise
component is divided by the corresponding singular value, its contribution to the global error
stays small only for singular values greater than 1.
For the AIR tools problem the singular values are also plotted; see Figure 3.7. In this plot there
are 6 singular values, $\sigma_{220}$ up to $\sigma_{225}$, smaller than $10^{-10}$. When these singular values are not
used, the solution looks more like the model solution, which can be seen in Figure 3.8b. Just as
with test problem Narrow, the solution is better when using fewer than 219 singular values; see
Figure 3.9. Again, singular values smaller than 1 should not be used, since they make the global
error bigger.
In conclusion, the SVD method can be used to solve the minimum norm least squares problem.
When there is noise in the data, the smallest singular values should not be used, because they
make the global error bigger. Plotting the singular values of the matrix $A$ helps determine how
many singular values should be used to obtain the best solution.

Figure 3.3: Solution of test problem Wide with noise ε = 0.01, using r singular values

Figure 3.4: All singular values of the Nolet test problem Narrow

Figure 3.5: Solution of test problem Narrow without noise, using r singular values

Figure 3.6: Solution of test problem Narrow with noise ε = 0.01, using r singular values

Figure 3.7: All singular values of the test problem AIR tools

Figure 3.8: Solution of test problem AIR tools without noise, using r singular values

Figure 3.9: Solution of test problem AIR tools with noise ε = 0.01, using r singular values
Chapter 4

Tikhonov Regularization

Another way to solve the ill-posed problem $Ax = b$ is by replacing it with a "nearby" well-posed
problem, a technique known as a regularization method. A famous regularization method is
Tikhonov regularization, which will be explained in this chapter.

4.1 Tikhonov I-regularization

Tikhonov $I$-regularization turns the ill-posed problem $Ax = b$ into a problem of full column
rank. This leads to the following problem, also known as the damped least squares problem [4]:

$$\min_x \|Ax - b\|^2 + \|\lambda x\|^2 \equiv \min_x \left\| \begin{pmatrix} A \\ \lambda I \end{pmatrix} x - \begin{pmatrix} b \\ 0 \end{pmatrix} \right\|^2 \tag{4.1}$$

The normal equation corresponding to the damped least squares problem is:

$$(A^T A + \lambda^2 I) x = A^T b \tag{4.2}$$

Notice that $A^T A + \lambda^2 I$ is invertible for all $\lambda \neq 0$ (see Appendix A.1). It follows that the solution
of problem (4.1) for every $\lambda$ is:

$$x_\lambda = (A^T A + \lambda^2 I)^{-1} A^T b \tag{4.3}$$

This means that there is a unique solution for every $\lambda$; therefore, $\lambda$ should be chosen so as to
get the best solution. Hansen describes in [4] how to choose $\lambda$ with an L-curve.
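The equivalence of the damped least squares problem (4.1) and the normal equation (4.2) can be checked directly. A NumPy sketch with a random stand-in system:

```python
import numpy as np

rng = np.random.default_rng(4)

m, n = 60, 30
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
lam = 0.1                                # regularization parameter lambda

# Normal equation (4.2): (A^T A + lambda^2 I) x = A^T b.
x_normal = np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)

# Damped least squares (4.1): stacked system [A; lambda I] x ~ [b; 0].
A_stacked = np.vstack([A, lam * np.eye(n)])
b_stacked = np.concatenate([b, np.zeros(n)])
x_damped = np.linalg.lstsq(A_stacked, b_stacked, rcond=None)[0]

print(np.allclose(x_normal, x_damped))
```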

4.1.1 Error analysis

First of all, to analyse the global error, the SVD of the matrix $A$ is used under the assumption
$\lambda \neq 0$. The solution $x_\lambda$ (4.3) can then be rewritten:

$$
\begin{aligned}
x_\lambda &= (A^T A + \lambda^2 I)^{-1} A^T b \\
&= (V \Sigma^T U^T U \Sigma V^T + \lambda^2 I)^{-1} V \Sigma^T U^T b \\
&= (V \Sigma^T \Sigma V^T + \lambda^2 V V^T)^{-1} V \Sigma^T U^T b \\
&= (V (\Sigma^T \Sigma + \lambda^2 I) V^T)^{-1} V \Sigma^T U^T b \\
&= V (\Sigma^T \Sigma + \lambda^2 I)^{-1} V^T V \Sigma^T U^T b \\
&= V (\Sigma^T \Sigma + \lambda^2 I)^{-1} \Sigma^T U^T b
\end{aligned}
$$

Recall that $b$ can be the travel time vector with noise, $\hat{b}$, or without noise, $b_{model}$. The model
right-hand side can be written as $b_{model} = A x_{model} = U \Sigma V^T x_{model}$; using the definition
$\alpha = V^T x_{model}$:

$$b_{model} = U \Sigma \alpha$$

Remember that noise can be seen as a vector added to the original $b_{model}$; in other words,
$\hat{b} = b_{model} + n$ with $\|n\| = \epsilon \|b_{model}\|$. Note that

$$n = \sum_{i=1}^{m} (u_i^T n)\, u_i$$

with $u_i$ the $i$th column of the orthogonal matrix $U$. Thus,

$$\hat{b} = U \Sigma \alpha + \sum_{i=1}^{m} (u_i^T n)\, u_i$$

$$
\begin{aligned}
x_\lambda &= V (\Sigma^T \Sigma + \lambda^2 I)^{-1} \Sigma^T U^T \left( U \Sigma \alpha + \sum_{i=1}^{m} (u_i^T n)\, u_i \right) \\
&= V (\Sigma^T \Sigma + \lambda^2 I)^{-1} \Sigma^T \left( \Sigma \alpha + U^T \sum_{i=1}^{m} (u_i^T n)\, u_i \right) \\
&= V (\Sigma^T \Sigma + \lambda^2 I)^{-1} \Sigma^T \left( \Sigma \alpha + \begin{pmatrix} u_1^T n \\ \vdots \\ u_m^T n \end{pmatrix} \right)
\end{aligned}
$$

However, with the SVD the maximum number of singular values is $n$, because $A \in \mathbb{R}^{m \times n}$ with
$m \gg n$, so:

$$x_\lambda = \sum_{i=1}^{n} v_i \, \frac{\sigma_i \left( \sigma_i \alpha_i + u_i^T n \right)}{\sigma_i^2 + \lambda^2} \tag{4.4}$$

$$= \sum_{i=1}^{n} \left( \frac{\sigma_i^2 \alpha_i}{\sigma_i^2 + \lambda^2} \, v_i + \frac{\sigma_i (u_i^T n)}{\sigma_i^2 + \lambda^2} \, v_i \right) \tag{4.5}$$
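Formula (4.4) expresses the Tikhonov solution through SVD filter factors. A NumPy sketch comparing the filter form (with a general right-hand side b, so $u_i^T b$ in place of $\sigma_i \alpha_i + u_i^T n$) against a direct solve of the normal equation (4.2):

```python
import numpy as np

rng = np.random.default_rng(5)

m, n = 60, 30
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
lam = 0.5

# Filter form of (4.4): x_lambda = sum_i sigma_i (u_i^T b) / (sigma_i^2 + lambda^2) v_i.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
x_filter = Vt.T @ (s * (U.T @ b) / (s**2 + lam**2))

# Direct solve of the normal equation (4.2).
x_direct = np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)

print(np.allclose(x_filter, x_direct))
```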

Note that $x_{model} = \sum_{i=1}^{n} \alpha_i v_i$ with $\alpha_i = v_i^T x_{model}$. Thus the global error is

$$
\begin{aligned}
\| x_\lambda - x_{model} \| &= \left\| \sum_{i=1}^{n} \left( \frac{\sigma_i^2 \alpha_i}{\sigma_i^2 + \lambda^2} \, v_i + \frac{\sigma_i (u_i^T n)}{\sigma_i^2 + \lambda^2} \, v_i \right) - \sum_{i=1}^{n} \alpha_i v_i \right\| \\
&= \left\| \sum_{i=1}^{n} \left( \frac{\sigma_i^2 \alpha_i}{\sigma_i^2 + \lambda^2} + \frac{\sigma_i (u_i^T n)}{\sigma_i^2 + \lambda^2} - \frac{\alpha_i (\sigma_i^2 + \lambda^2)}{\sigma_i^2 + \lambda^2} \right) v_i \right\| \\
&= \left\| \sum_{i=1}^{n} \frac{-\lambda^2 \alpha_i + \sigma_i (u_i^T n)}{\sigma_i^2 + \lambda^2} \, v_i \right\| \\
&= \left\| V \begin{pmatrix} \frac{-\lambda^2 \alpha_1 + \sigma_1 (u_1^T n)}{\sigma_1^2 + \lambda^2} \\ \vdots \\ \frac{-\lambda^2 \alpha_n + \sigma_n (u_n^T n)}{\sigma_n^2 + \lambda^2} \end{pmatrix} \right\| \\
&= \sqrt{ \sum_{i=1}^{n} \left( \frac{-\lambda^2 \alpha_i + \sigma_i (u_i^T n)}{\sigma_i^2 + \lambda^2} \right)^2 }
\end{aligned}
$$

The $\lambda$ ensures that we never divide by a value close to zero, so the global error cannot grow
rapidly.
Without noise the global error is:

$$\| x_\lambda - x_{model} \| = \lambda^2 \sqrt{ \sum_{i=1}^{n} \left( \frac{\alpha_i}{\sigma_i^2 + \lambda^2} \right)^2 }$$

This suggests choosing $\lambda = 0$; however, $A^T A$ need not be invertible for $\lambda = 0$. Thus a $\lambda$ close
to zero performs better (though one should watch out for machine precision).
These results can also be found in [11].

4.1.2 Results
This section shows the results of the Tikhonov $I$-regularization.
The test problem Wide of Nolet is first tested without noise. The results for different values of $\lambda$
can be found in Figure 4.1. For $\lambda = 0.01$ and $\lambda = 0.1$ the results resemble the model solution
more than for $\lambda = 1$ and $\lambda = 10$, just as the analytically determined global error suggests. It is
hard to see whether $\lambda = 0.01$ or $\lambda = 0.1$ looks more like the model solution, but the model
solution is known, so the global error can be calculated. Table 4.1 shows that $\lambda = 0.01$ gives
the best result.
However, when there is noise in the data, the solution with $\lambda = 5$ looks the most like the
model solution; see Figure 4.2. There may exist a better solution, with a global error below
1.5983·10−3, when the regularization parameter is optimised with the L-curve.

Figure 4.1: Solution of test problem Wide without noise

λ       Global Error
0.01    1.5448·10−4
0.1     1.5454·10−4
1       3.4093·10−4
10      1.5343·10−3

Table 4.1: Global Error of test problem Wide without noise

λ Global Error
1 2.4387·10−3
5 1.5983·10−3
10 1.6895·10−3
15 1.9782·10−3

Table 4.2: Global Error of test problem Wide with ε = 0.01


Figure 4.2: Solution of test problem Wide with ε = 0.01


4.2 Tikhonov M -regularization

Another Tikhonov regularization method is Tikhonov $M$-regularization, in which an arbitrary
$p \times n$ matrix $M$ is used instead of the identity matrix $I$:

$$\min_x \|Ax - b\|^2 + \|\lambda M x\|^2 \equiv \min_x \left\| \begin{pmatrix} A \\ \lambda M \end{pmatrix} x - \begin{pmatrix} b \\ 0 \end{pmatrix} \right\|^2 \tag{4.6}$$

The normal equation of this problem is

$$(A^T A + \lambda^2 M^T M) x = A^T b \tag{4.7}$$

When $M = I$ the matrix $A^T A + \lambda^2 M^T M$ is invertible. This is also the case for an arbitrary
$p \times n$ matrix $M$ with $\operatorname{rank}(M) \neq 0$ (see Appendix A.2). To be able to do an error analysis we
need the generalized singular value decomposition (GSVD) of the matrices $A$ and $M$.
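Before turning to the GSVD, the relation between (4.6) and (4.7) can be verified numerically for any $M$. A NumPy sketch using a simple first-difference matrix as a stand-in for $M$:

```python
import numpy as np

rng = np.random.default_rng(6)

m, n = 60, 30
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
lam = 0.3

# A stand-in regularization matrix M: first differences, p = n - 1 rows.
M = np.eye(n - 1, n) - np.eye(n - 1, n, k=1)

# Normal equation (4.7): (A^T A + lambda^2 M^T M) x = A^T b.
x_normal = np.linalg.solve(A.T @ A + lam**2 * (M.T @ M), A.T @ b)

# Stacked least squares form (4.6): [A; lambda M] x ~ [b; 0].
A_stacked = np.vstack([A, lam * M])
b_stacked = np.concatenate([b, np.zeros(n - 1)])
x_stacked = np.linalg.lstsq(A_stacked, b_stacked, rcond=None)[0]

print(np.allclose(x_normal, x_stacked))
```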

4.2.1 Generalized Singular Value Decomposition

The GSVD decomposes any matrix pair $(A, M)$ with $A \in \mathbb{R}^{m \times n}$, $M \in \mathbb{R}^{p \times n}$ and $m \ge n \ge p$ as

$$A = U \Sigma X^{-1}, \qquad M = V L X^{-1}, \tag{4.8}$$

where $U \in \mathbb{R}^{m \times n}$ and $V \in \mathbb{R}^{p \times p}$ have orthonormal columns such that $U^T U = I_n$ and $V^T V = I_p$,
$X \in \mathbb{R}^{n \times n}$ is a nonsingular matrix, and $\Sigma$ and $L$ are of the form

$$\Sigma = \begin{pmatrix} \Sigma_p & 0 \\ 0 & I_{n-p} \end{pmatrix}, \qquad L = \begin{pmatrix} L_p & 0 \end{pmatrix}$$

Here $\Sigma_p = \operatorname{diag}(\sigma_i)$ and $L_p = \operatorname{diag}(\mu_i)$ are both $p \times p$ matrices, and for $i = 1, \dots, p$ it holds
that $\sigma_i^2 + \mu_i^2 = 1$, with the $\sigma_i$ and $\mu_i$ ordered such that

$$0 \le \sigma_1 \le \dots \le \sigma_p, \qquad 1 \ge \mu_1 \ge \dots \ge \mu_p > 0$$

The generalized singular values of the matrix pair $(A, M)$ are defined as $\gamma_i = \frac{\sigma_i}{\mu_i}$ for $i = 1, \dots, p$.

When $M = I_n$, then $X^{-1} = L^{-1} V^T$ and $A = U (\Sigma L^{-1}) V^T$. This shows that, after reordering,
the singular values $\psi_i$ of $A$ are the same as the generalized singular values $\gamma_i$: $\psi_i = \gamma_{n-i+1}$ for
$i = 1, \dots, n$. This can be found in [4].

4.2.2 Using GSVD for Tikhonov M -regularization

Now we can use the GSVD to find the solution of the normal equation (4.7), by substituting the
GSVD of the matrix pair $(A, M)$ into the equation:

$$
\begin{aligned}
\left( A^T A + \lambda^2 M^T M \right) x &= A^T b \\
\Longleftrightarrow \quad \left( (X^{-1})^T \Sigma^T U^T U \Sigma X^{-1} + \lambda^2 (X^{-1})^T L^T V^T V L X^{-1} \right) x &= (X^{-1})^T \Sigma^T U^T b \\
\Longleftrightarrow \quad (X^{-1})^T \left( \Sigma^T \Sigma + \lambda^2 L^T L \right) X^{-1} x &= (X^{-1})^T \Sigma^T U^T b \\
\Longleftrightarrow \quad \left( \Sigma^T \Sigma + \lambda^2 L^T L \right) X^{-1} x &= \Sigma^T U^T b \\
\Longleftrightarrow \quad X^{-1} x &= \left( \Sigma^T \Sigma + \lambda^2 L^T L \right)^{-1} \Sigma^T U^T b \\
\Longleftrightarrow \quad x &= X \left( \Sigma^T \Sigma + \lambda^2 L^T L \right)^{-1} \Sigma^T U^T b
\end{aligned}
$$

When we use the generalized singular values $\gamma_i = \frac{\sigma_i}{\mu_i}$, the solution of (4.7) can be written as:

$$x_\lambda = \sum_{i=1}^{p} \frac{\gamma_i^2}{\gamma_i^2 + \lambda^2} \frac{u_i^T b}{\sigma_i} \, x_i + \sum_{i=p+1}^{n} (u_i^T b)\, x_i$$

Here $x_i$ is the $i$th column of the nonsingular matrix $X$. This result can also be found in [4].

4.2.3 Error analysis

To determine the global error, the model solution has to be written using the GSVD of $A$:

$$x_{model} = X \Sigma^{-1} U^T b_{model} = \sum_{i=1}^{p} \frac{u_i^T b_{model}}{\sigma_i} \, x_i + \sum_{i=p+1}^{n} (u_i^T b_{model})\, x_i$$

Suppose there is noise in the data; then $b$ is the perturbed travel time vector $\hat{b} = b_{model} + n$. The
global error of the Tikhonov $M$-regularization is:

$$
\begin{aligned}
\| x_\lambda - x_{model} \| &= \left\| \sum_{i=1}^{p} \frac{\gamma_i^2}{\gamma_i^2 + \lambda^2} \frac{u_i^T \hat{b}}{\sigma_i} \, x_i + \sum_{i=p+1}^{n} (u_i^T \hat{b})\, x_i - \left( \sum_{i=1}^{p} \frac{u_i^T b_{model}}{\sigma_i} \, x_i + \sum_{i=p+1}^{n} (u_i^T b_{model})\, x_i \right) \right\| \\
&= \left\| \sum_{i=1}^{p} \left( \frac{\gamma_i^2}{\gamma_i^2 + \lambda^2} \frac{u_i^T (b_{model} + n)}{\sigma_i} - \frac{u_i^T b_{model}}{\sigma_i} \right) x_i + \sum_{i=p+1}^{n} \left( u_i^T (b_{model} + n) - u_i^T b_{model} \right) x_i \right\| \\
&= \left\| \sum_{i=1}^{p} \frac{1}{\gamma_i^2 + \lambda^2} \, \frac{\gamma_i^2 (u_i^T n) - \lambda^2 (u_i^T b_{model})}{\sigma_i} \, x_i + \sum_{i=p+1}^{n} (u_i^T n)\, x_i \right\| \\
&= \left\| X \begin{pmatrix} \frac{1}{\gamma_1^2 + \lambda^2} \frac{\gamma_1^2 (u_1^T n) - \lambda^2 (u_1^T b_{model})}{\sigma_1} \\ \vdots \\ \frac{1}{\gamma_p^2 + \lambda^2} \frac{\gamma_p^2 (u_p^T n) - \lambda^2 (u_p^T b_{model})}{\sigma_p} \\ u_{p+1}^T n \\ \vdots \\ u_n^T n \end{pmatrix} \right\|
\end{aligned}
$$

The global error suggests that we should choose a regularization matrix $M$ with $p = n$ rows
instead of $p < n$, since the noise components $u_i^T n$ for $i > p$ then no longer pass through
unfiltered. The parameter $\lambda$ also influences the global error and should be chosen with the use of
the L-curve.
Without noise, the global error is:

$$\| x_\lambda - x_{model} \| = \left\| X \begin{pmatrix} \frac{1}{\gamma_1^2 + \lambda^2} \frac{-\lambda^2 (u_1^T b_{model})}{\sigma_1} \\ \vdots \\ \frac{1}{\gamma_p^2 + \lambda^2} \frac{-\lambda^2 (u_p^T b_{model})}{\sigma_p} \\ 0 \\ \vdots \\ 0 \end{pmatrix} \right\|$$

4.2.4 Choice of M
Since $M$ can be any $p \times n$ matrix, one can be overwhelmed with options. In this section a few
logical choices are suggested.
With the matrix $M$, more information about the solution can be added to the problem. A logical
choice is a matrix that makes the solution as smooth as possible. This means that the difference
in slowness between every pixel and its neighbours should be minimised. To define the
regularization matrix, we need to know how the pixels are numbered.
Figure 4.3 sketches the numbering of the Nolet test problem. Nolet used horizontal numbering
for his test problem, with 20 pixels in each row and 200 pixels in total, so 10 rows. This knowledge
is needed to construct sensible matrices.

Figure 4.3: Numbering Nolet test problem

We suggest 3 different matrices M; because of the global error analysis, the dimensions of these matrices are m × n. These matrices are tailored to the Nolet test problem, so the numbering can be different for other problems.

The first matrix we are going to test is the following:

\[
M_1 = \begin{pmatrix}
1 & -1 & 0 & \cdots & 0 \\
0 & 1 & -1 & \ddots & \vdots \\
\vdots & \ddots & \ddots & \ddots & 0 \\
\vdots & & \ddots & 1 & -1 \\
0 & \cdots & \cdots & 0 & 1
\end{pmatrix}, \quad \text{with } M(20i,\, 20i+1) = 0 \ \forall\, i = 1, \ldots, 9 \tag{4.9}
\]

This matrix minimises the difference in slowness between each pixel j and pixel j + 1, except for the pixels on the east boundary of the domain; for those pixels the slowness itself is minimised instead.

The second matrix minimises the difference in slowness between each pixel j and its neighbour one row below, pixel j + 20. This matrix looks like this:

\[
M_2 = \begin{pmatrix}
1 & 0 & \cdots & 0 & -1 & 0 & \cdots & 0 \\
0 & 1 & \ddots & & \ddots & \ddots & \ddots & \vdots \\
\vdots & \ddots & \ddots & \ddots & & \ddots & \ddots & 0 \\
\vdots & & \ddots & \ddots & \ddots & & \ddots & -1 \\
\vdots & & & \ddots & \ddots & \ddots & & 0 \\
\vdots & & & & \ddots & \ddots & \ddots & \vdots \\
\vdots & & & & & \ddots & 1 & 0 \\
0 & \cdots & \cdots & \cdots & \cdots & \cdots & 0 & 1
\end{pmatrix} \tag{4.10}
\]

The third matrix is a combination of the first and the second matrix and is defined as:

\[
M_3 = \begin{pmatrix}
2 & -1 & 0 & \cdots & 0 & -1 & 0 & \cdots & 0 \\
0 & 2 & -1 & \ddots & & \ddots & \ddots & \ddots & \vdots \\
\vdots & \ddots & \ddots & \ddots & \ddots & & \ddots & \ddots & 0 \\
\vdots & & \ddots & \ddots & \ddots & \ddots & & \ddots & -1 \\
\vdots & & & \ddots & \ddots & \ddots & \ddots & & 0 \\
\vdots & & & & \ddots & \ddots & \ddots & \ddots & \vdots \\
\vdots & & & & & \ddots & \ddots & \ddots & 0 \\
\vdots & & & & & & \ddots & 2 & -1 \\
0 & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & 0 & 2
\end{pmatrix}, \tag{4.11}
\]

with M(20i, 20i) = 1 and M(20i, 20i+1) = 0 for all i = 1, \ldots, 9.
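For the 20 × 10 Nolet grid, the three matrices can be assembled directly from definitions (4.9)–(4.11). The sketch below is our own NumPy illustration (the thesis itself uses the MATLAB scripts of Appendix C); indices are zero-based, so the 1-based boundary fix M(20i, 20i+1) = 0 becomes M[20i−1, 20i] = 0:

```python
import numpy as np

def make_M(n=200, row_len=20):
    """Smoothing matrices: M1 horizontal, M2 vertical, M3 both (cf. (4.9)-(4.11))."""
    # M1: difference of each pixel with its right-hand neighbour;
    # no coupling from the east boundary of one row into the next row.
    M1 = np.eye(n) - np.eye(n, k=1)
    # M2: difference of each pixel with its neighbour one row below.
    M2 = np.eye(n) - np.eye(n, k=row_len)
    # M3: both differences at once, with the boundary fix of (4.11).
    M3 = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=row_len)
    for i in range(1, n // row_len):
        M1[i * row_len - 1, i * row_len] = 0.0
        M3[i * row_len - 1, i * row_len - 1] = 1.0
        M3[i * row_len - 1, i * row_len] = 0.0
    return M1, M2, M3

M1, M2, M3 = make_M()
```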

4.2.5 Results
In this section the results of Tikhonov M-regularization are given. First the matrix defined in (4.9) is tested for the Nolet test problem Wide, with and without noise.
For test problem Wide without noise (see Figure 4.4), the smallest global error is given by λ = 0.01. Looking at the images, only the solution for λ = 10 resembles the model solution less than the other values of λ do. For test problem Wide with noise, the solution for λ = 10 looks the most like the model solution; it also has the smallest global error. We can also see that the solution is quite smooth in the horizontal direction, but not smooth at all in the vertical direction.

λ Global Error
0.01 9.3057·10−5
0.1 9.3060·10−5
1 9.4495·10−5
10 9.6208·10−5

Table 4.3: Global Error of test problem Wide without noise with M1

λ Global Error
1 2.7823·10−3
5 2.2512·10−3
10 2.0467·10−3
15 2.2336·10−3

Table 4.4: Global Error of test problem Wide with ε = 0.01 and M1

Figures 4.6 and 4.7 give the results of the second matrix M2 (4.10). For test problem Wide without noise, the solutions for small λ look very much like the model solution. The best result is given by λ = 0.01, since it has the lowest global error, see Table 4.5. With noise, λ = 15 has the lowest global error, while λ = 10 looks best in the eyeball norm. The solution for λ = 10 is very smooth, not only in the vertical direction but also in the horizontal direction.
The results of the third matrix M3 (4.11) can be seen in Figures 4.8 and 4.9. For the case without noise, the best result is given by λ = 0.1, since it has the smallest global error. The solution for λ = 0.01 also

(a) λ = 0.01 (b) λ = 0.1

(c) λ = 1 (d) λ = 10

Figure 4.4: Solution of test problem Wide without noise with M1

λ Global Error
0.01 5.4568·10−5
0.1 5.5340·10−5
1 5.0677·10−4
10 2.7736·10−3

Table 4.5: Global Error of test problem Wide without noise with M2

looks very much like the model solution. The first thing that strikes the eye is that for λ = 1 the lower part of the solution is smoother in the vertical direction than the model solution. With noise, the lowest global error is given by λ = 1; this can also be seen in Figure 4.9. The smoothing in the vertical direction is less pronounced than in the horizontal direction.
All in all, when there is a bit of noise in the data, ε = 0.01, the lowest global error is obtained with matrix M1 and λ = 10, but the global error for M2 with λ = 15 was very small too. Still, the images of the solutions for M2 look better than those for M1. This can be explained by the test model, since the seismic waves travel in the vertical direction. When the rays bend, the influence of the noise is noticed in the horizontal direction; thus, smoothing in the vertical direction works better for the eyeball norm. However, when we take a look at Tikhonov I-regularization, the global error is even smaller for the Wide test problem with noise when we use λ = 5, but the solution looks worse in the eyeball norm since it is not very smooth.

(a) λ = 1 (b) λ = 5

(c) λ = 10 (d) λ = 15

Figure 4.5: Solution of test problem Wide with ε = 0.01 and M1

λ Global Error
1 2.2651·10−3
5 2.4575·10−3
10 2.8491·10−3
15 2.0853·10−3

Table 4.6: Global Error of test problem Wide with ε = 0.01 and M2

λ Global Error
0.01 5.8785·10−5
0.1 5.8760·10−5
1 5.6052·10−4
10 3.6222·10−3

Table 4.7: Global Error of test problem Wide without noise with M3

λ Global Error
1 2.2746·10−3
5 2.9295·10−3
10 3.6864·10−3
15 4.0313·10−3

Table 4.8: Global Error of test problem Wide with ε = 0.01 and M3

(a) λ = 0.01 (b) λ = 0.1

(c) λ = 1 (d) λ = 10

Figure 4.6: Solution of test problem Wide without noise with M2

(a) λ = 1 (b) λ = 5

(c) λ = 10 (d) λ = 15

Figure 4.7: Solution of test problem Wide with ε = 0.01 and M2


(a) λ = 0.01 (b) λ = 0.1

(c) λ = 1 (d) λ = 10

Figure 4.8: Solution of test problem Wide without noise with M3

(a) λ = 1 (b) λ = 5

(c) λ = 10 (d) λ = 15

Figure 4.9: Solution of test problem Wide with ε = 0.01 and M3

Chapter 5

Saunders Regularization

There is also another type of regularization, which we will call Saunders regularization in this paper. This chapter explains and tests this regularization method.

5.1 Saunders I-regularization

With Saunders I-regularization, extra columns are added to the ill-posed problem Ax = b, such that the system has full row rank. This leads to the following problem (which can also be found in [4]):

\[
\min_{x,s} \left\| \begin{pmatrix} x \\ s \end{pmatrix} \right\|^2 \ \text{subject to}\ \begin{pmatrix} A & \lambda I \end{pmatrix} \begin{pmatrix} x \\ s \end{pmatrix} = b
\;\equiv\;
\min_{x,s} \|x\|^2 + \|s\|^2 \ \text{subject to}\ Ax + \lambda s = b \tag{5.1}
\]

We want to solve \(\begin{pmatrix} A & \lambda I \end{pmatrix} \begin{pmatrix} x \\ s \end{pmatrix} = b\), which is consistent. We solve this as a regularized version of (2.5). Thus we can find the solution by solving

\[
\begin{pmatrix} A & \lambda I \end{pmatrix} \begin{pmatrix} A^T \\ \lambda I \end{pmatrix} y = b,
\]

where

\[
\begin{pmatrix} x \\ s \end{pmatrix} = \begin{pmatrix} A^T y \\ \lambda y \end{pmatrix} \tag{5.2}
\]

Thus the normal equation we want to solve is:

\[
(AA^T + \lambda^2 I)\, y = b
\]

Notice that \(AA^T + \lambda^2 I\) is invertible for all \(\lambda \neq 0\) (see Appendix A.3), so the unique y can be found by

\[
y = (AA^T + \lambda^2 I)^{-1} b,
\]

which means that x can also be found by substitution into (5.2):

\[
x_\lambda = A^T (AA^T + \lambda^2 I)^{-1} b
\]

Since (5.1) has a unique solution y for every λ, there are many candidate solutions. Therefore, λ should be chosen such that the best solution is obtained. Just as explained in Section 4.1, choosing λ is very difficult and can be done with the L-curve (see [4]).
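In code, the two-step solve y = (AA^T + λ²I)^{-1}b, x = A^T y reads as follows; a minimal NumPy sketch on a small random system of our own invention (the thesis itself uses the MATLAB scripts of Appendix C):

```python
import numpy as np

def saunders_I(A, b, lam):
    """Saunders I-regularization: y = (A A^T + lam^2 I)^{-1} b, then x = A^T y."""
    m = A.shape[0]
    y = np.linalg.solve(A @ A.T + lam**2 * np.eye(m), b)
    return A.T @ y

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)
x = saunders_I(A, b, 0.5)
```

Note that the linear system solved here is m × m, in contrast with the n × n system of Tikhonov regularization; for m ≫ n this is the more expensive of the two.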


5.1.1 Error analysis

The global error \(\|x_\lambda - x_{\mathrm{model}}\|\) can be computed with the SVD of A. First of all, the solution \(x_\lambda\) can be rewritten:

\begin{align*}
x_\lambda &= A^T (AA^T + \lambda^2 I)^{-1} b \\
&= V\Sigma^T U^T (U\Sigma V^T V \Sigma^T U^T + \lambda^2 I)^{-1} b \\
&= V\Sigma^T U^T (U\Sigma\Sigma^T U^T + \lambda^2 U U^T)^{-1} b \\
&= V\Sigma^T U^T \big(U(\Sigma\Sigma^T + \lambda^2 I)U^T\big)^{-1} b \\
&= V\Sigma^T U^T U (\Sigma\Sigma^T + \lambda^2 I)^{-1} U^T b \\
&= V\Sigma^T (\Sigma\Sigma^T + \lambda^2 I)^{-1} U^T b
\end{align*}

First the global error will be determined for the problem with noise, so \(b = \hat{b} = U\Sigma\alpha + \sum_{i=1}^m (u_i^T n) u_i\) can be plugged into the equation:

\begin{align*}
x_\lambda &= V\Sigma^T (\Sigma\Sigma^T + \lambda^2 I)^{-1} U^T \hat{b} \\
&= V\Sigma^T (\Sigma\Sigma^T + \lambda^2 I)^{-1} U^T \left( U\Sigma\alpha + \sum_{i=1}^m (u_i^T n)\, u_i \right) \\
&= V\Sigma^T (\Sigma\Sigma^T + \lambda^2 I)^{-1} \left( \Sigma\alpha + U^T \sum_{i=1}^m (u_i^T n)\, u_i \right) \\
&= V\Sigma^T (\Sigma\Sigma^T + \lambda^2 I)^{-1} \left( \Sigma\alpha + \begin{pmatrix} u_1^T n \\ \vdots \\ u_m^T n \end{pmatrix} \right)
\end{align*}

However, with the SVD the maximum number of singular values is n, because \(A \in \mathbb{R}^{m\times n}\) with \(m \gg n\), so:

\begin{align}
x_\lambda &= \sum_{i=1}^{n} v_i\, \frac{\sigma_i \left( \sigma_i \alpha_i + u_i^T n \right)}{\sigma_i^2 + \lambda^2} \tag{5.3} \\
&= \sum_{i=1}^{n} \left( \frac{\sigma_i^2 \alpha_i}{\sigma_i^2 + \lambda^2}\, v_i + \frac{\sigma_i (u_i^T n)}{\sigma_i^2 + \lambda^2}\, v_i \right) \tag{5.4}
\end{align}

Note that \(x_\lambda\) is the same as the solution of the Tikhonov regularization, see (4.5). Thus, the global error is

\[
\|x_\lambda - x_{\mathrm{model}}\| = \sqrt{ \sum_{i=1}^{n} \left( \frac{-\lambda^2 \alpha_i + \sigma_i (u_i^T n)}{\sigma_i^2 + \lambda^2} \right)^2 }
\]

Moreover, without noise the global error is also the same as the global error of the Tikhonov regularization without noise:

\begin{align*}
\|x_\lambda - x_{\mathrm{model}}\| &= \sqrt{ \sum_{i=1}^{n} \left( \frac{-\lambda^2 \alpha_i}{\sigma_i^2 + \lambda^2} \right)^2 } \\
&= \lambda^2 \sqrt{ \sum_{i=1}^{n} \left( \frac{\alpha_i}{\sigma_i^2 + \lambda^2} \right)^2 }
\end{align*}

5.1.2 Results
In this section the results of Saunders I-regularization are discussed.
For test problem Wide without noise, the best result is given by the smallest value λ = 0.01 (Figure 5.1). This is also suggested by the analytic global error, which indicates that λ = 0 would give the best result; however, the matrix AA^T + λ²I is not invertible for λ = 0. The numerical global error, see Table 5.1, is also the smallest for λ = 0.01. When noise is added to the data, the lowest global error is given by λ = 5; the corresponding solution can be seen in Figure 5.2b.
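The noisy right-hand sides in these experiments are obtained by scaling a random perturbation to a prescribed relative level, as in the MATLAB scripts of Appendix C; in NumPy the same construction reads (our own translation):

```python
import numpy as np

def add_noise(d, eps, seed=0):
    """Return b = d + perturbation, scaled so that ||b - d|| = eps * ||d||."""
    rng = np.random.default_rng(seed)
    db = rng.standard_normal(d.shape)
    pert = eps * np.linalg.norm(d) / np.linalg.norm(db)
    return d + pert * db

d = np.ones(100)          # stand-in for the exact travel-time data A @ x_model
b = add_noise(d, 0.01)    # relative noise level eps = 0.01
```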

(a) λ = 0.01 (b) λ = 0.1

(c) λ = 1 (d) λ = 10

Figure 5.1: Solution of test problem Wide without noise


λ Global Error
0.01 1.5448·10−4
0.1 1.5454·10−4
1 3.4093·10−4
10 1.5343·10−3

Table 5.1: Global Error of test problem Wide without noise

(a) λ = 1 (b) λ = 5

(c) λ = 10 (d) λ = 15

Figure 5.2: Solution of test problem Wide with ε = 0.01

λ Global Error
1 2.4387·10−3
5 1.5983·10−3
10 1.6895·10−3
15 1.9782·10−3

Table 5.2: Global Error of test problem Wide with ε = 0.01


5.2 Saunders M -regularization

Another regularization method we can use is Saunders M-regularization. Instead of the identity matrix I in Saunders I-regularization, we choose an arbitrary matrix \(M \in \mathbb{R}^{m\times p}\) and solve the following problem:

\[
\min_{x,s} \left\| \begin{pmatrix} x \\ s \end{pmatrix} \right\|^2 \ \text{subject to}\ \begin{pmatrix} A & \lambda M \end{pmatrix} \begin{pmatrix} x \\ s \end{pmatrix} = b
\;\equiv\;
\min_{x,s} \|x\|^2 + \|s\|^2 \ \text{subject to}\ Ax + \lambda M s = b \tag{5.5}
\]

Analogously to Saunders I-regularization we can write:

\begin{align*}
& \begin{pmatrix} A & \lambda M \end{pmatrix} \begin{pmatrix} x \\ s \end{pmatrix} = b \\
\Longleftrightarrow\ & \begin{pmatrix} A & \lambda M \end{pmatrix} \begin{pmatrix} A^T \\ \lambda M^T \end{pmatrix} y = b \\
\Longleftrightarrow\ & \left( AA^T + \lambda^2 M M^T \right) y = b
\end{align*}

and

\[
\begin{pmatrix} x \\ s \end{pmatrix} = \begin{pmatrix} A^T y \\ \lambda M^T y \end{pmatrix}
\]

The system \(\begin{pmatrix} A & \lambda M \end{pmatrix} \begin{pmatrix} x \\ s \end{pmatrix} = b\) is consistent, and \(AA^T + \lambda^2 MM^T\) is invertible (see A.4). The solution to the problem is \(y = (AA^T + \lambda^2 MM^T)^{-1} b\).
Unfortunately, we could not find a decomposition like the GSVD with which the solution of problem (5.5) can be written down explicitly. This also means that the error analysis cannot be done analytically in this paper.
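Even without a closed form, the solution is straightforward to compute numerically; the following NumPy sketch (our own illustration, with a hypothetical bidiagonal M) solves the m × m system and recovers both x and s:

```python
import numpy as np

def saunders_M(A, b, M, lam):
    """Saunders M-regularization: y = (A A^T + lam^2 M M^T)^{-1} b, x = A^T y, s = lam M^T y."""
    y = np.linalg.solve(A @ A.T + lam**2 * (M @ M.T), b)
    return A.T @ y, lam * (M.T @ y)

rng = np.random.default_rng(3)
A = rng.standard_normal((25, 8))
b = rng.standard_normal(25)
M = np.eye(25) - np.eye(25, k=1)   # bidiagonal smoothing matrix, cf. (5.6)
x, s = saunders_M(A, b, M, 1.0)
```

By construction the pair (x, s) satisfies the constraint Ax + λMs = b exactly.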

5.2.1 Choice of M
Another difficult point of Saunders M-regularization is the choice of M. There are many m × p matrices which can be used for the regularization, and it is more difficult to choose a natural matrix M for Saunders M-regularization than for Tikhonov M-regularization.
The regularization matrix influences the residuals, as the constraint of the minimisation problem is Ax + λMs = b ⇔ λMs = b − Ax. Not much can be said about the residuals, except that we want them to be as small as possible. Since we can only use a linear operator, there does not seem to be a natural way to define M.
We chose to use an M analogous to Tikhonov's M1; this matrix is defined as:

\[
M = \begin{pmatrix}
1 & -1 & 0 & \cdots & 0 \\
0 & 1 & -1 & \ddots & \vdots \\
\vdots & \ddots & \ddots & \ddots & 0 \\
\vdots & & \ddots & 1 & -1 \\
0 & \cdots & \cdots & 0 & 1
\end{pmatrix} \tag{5.6}
\]

This matrix smooths the residuals.

5.2.2 Results
Here the results of Saunders M-regularization are presented, with M defined as in (5.6).
Figure 5.3 shows the results for test problem Wide without noise. There does not seem to be a difference between the solutions for λ = 0.01, λ = 0.1 and λ = 1, which is also suggested by the global errors, which hardly differ. Still, the global error shows that λ = 0.01 is the better choice. Adding noise to the data, the lowest global error is given by λ = 15, see Figure 5.4 and Table 5.4. However, in the eyeball norm the solution does not look very much like the model solution. This suggests that the smoothing operator given by (5.6) does not work well for this test problem.

(a) λ = 0.01 (b) λ = 0.1

(c) λ = 1 (d) λ = 10

Figure 5.3: Solution of test problem Wide without noise

λ Global error
0.01 1.544864 · 10−4
0.1 1.544866 · 10−4
1 1.5590 · 10−4
10 9.6068 · 10−4

Table 5.3: Test problem Wide without noise

In conclusion, the best solution for test problem Wide with ε = 0.01 is given by Saunders I-regularization with λ = 5. Saunders M-regularization does not work well on the test problem when M is defined as in (5.6). Moreover, we only tried a few values of λ, and since we did not use the L-curve, there may exist a λ which gives a better solution.

(a) λ = 1 (b) λ = 5

(c) λ = 10 (d) λ = 15

Figure 5.4: Solution of test problem Wide with ε = 0.01

λ Global error
1 6.7704 · 10−3
5 4.6935 · 10−3
10 3.3341 · 10−3
15 2.8054 · 10−3

Table 5.4: Test problem Wide with ε = 0.01

Chapter 6

Comparison Saunders and Tikhonov

Regularization

6.1 I-regularization
From the analysis of the global error we already saw that the solutions \(x_\lambda\) of Tikhonov and Saunders I-regularization, see (4.5) and (5.3), are exactly the same.
Following [7], the problems (5.1) and (4.1) can be proved to be equivalent by substituting

\[
r = \lambda s = b - Ax \tag{6.1}
\]

\[
\min \|x\|^2 + \|s\|^2 \ \text{subject to}\ Ax + \lambda s = b
\]

Now substitute \(s = \frac{1}{\lambda} r\):

\[
\min \|x\|^2 + \left\| \tfrac{1}{\lambda} r \right\|^2 \ \text{subject to}\ Ax + r = b
\]

Notice that with r = b − Ax the constraint Ax + r = Ax + b − Ax = b always holds, so the problem can be reduced:

\begin{align*}
&\Longleftrightarrow\ \min \|x\|^2 + \left\| \tfrac{1}{\lambda} r \right\|^2 \\
&\Longleftrightarrow\ \min \|x\|^2 + \tfrac{1}{\lambda^2} \|r\|^2 \\
&\Longleftrightarrow\ \min \lambda^2 \|x\|^2 + \|r\|^2 \\
&\Longleftrightarrow\ \min \|\lambda x\|^2 + \|r\|^2 \\
&\Longleftrightarrow\ \min \|\lambda x\|^2 + \|b - Ax\|^2 \\
&\Longleftrightarrow\ \min \|\lambda x\|^2 + \|Ax - b\|^2
\end{align*}

As a result, problem (5.1) is equivalent to (4.1), meaning that the solution of Saunders I-regularization is the same as that of Tikhonov I-regularization.
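The equivalence is easy to confirm numerically for a moderate λ; a small NumPy check on a random system of our own:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((40, 15))
b = rng.standard_normal(40)
lam = 2.0

# Tikhonov I-regularization: x = (A^T A + lam^2 I)^{-1} A^T b
x_tik = np.linalg.solve(A.T @ A + lam**2 * np.eye(15), A.T @ b)
# Saunders I-regularization: x = A^T (A A^T + lam^2 I)^{-1} b
x_sau = A.T @ np.linalg.solve(A @ A.T + lam**2 * np.eye(40), b)
```

The two vectors agree to machine precision; this is the push-through identity A^T(AA^T + λ²I)^{-1} = (A^TA + λ²I)^{-1}A^T in action.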


6.1.1 Experiments
To show that the solutions are exactly the same for both I-regularizations, a few images are plotted.
For test problem Wide with noise (Figure 6.1) the solutions look exactly the same. The global errors are also identical, as can be seen in Table 6.1.

(c) Tikhonov, Wide, λ = 10 (d) Saunders, λ = 10

Figure 6.1: Solutions of Tikhonov and Saunders for test problem Wide with noise ε = 0.01

λ Global Error Tikhonov Global Error Saunders

5 1.5983·10−3 1.5983·10−3
10 1.6895·10−3 1.6895·10−3

Table 6.1: Global Error of Figure 6.1

However, when λ is close to zero, for example λ = 10−9 (Figure 6.2), the solutions do not look like each other at all. Saunders I-regularization does not give a good solution for this λ, while Tikhonov I-regularization does. This is a result of the ill-posedness of the problems.
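The discrepancy for tiny λ has a simple numerical explanation: with m ≫ n the m × m matrix AA^T has rank at most n and is singular, so AA^T + λ²I becomes numerically singular as λ → 0, while A^TA + λ²I stays well conditioned whenever A has full column rank. A quick check on a random matrix of our own:

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((60, 10))   # m >> n, full column rank with probability 1
lam = 1e-9

cond_tik = np.linalg.cond(A.T @ A + lam**2 * np.eye(10))  # n x n Tikhonov system
cond_sau = np.linalg.cond(A @ A.T + lam**2 * np.eye(60))  # m x m Saunders system
# cond_sau blows up as lam -> 0; cond_tik stays moderate.
```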

(a) Tikhonov, Wide, λ = 10−9 (b) Saunders, Wide, λ = 10−9

Figure 6.2: Solutions of Tikhonov and Saunders for test problem Wide with noise ε = 0.01

6.2 M -regularization
Just as with the I-regularization, we can try to prove whether the Tikhonov and Saunders M-regularization problems are equivalent. So we want to rewrite

\[
\min \|x\|^2 + \|s\|^2 \ \text{subject to}\ Ax + \lambda M s = b, \quad \text{with } M \in \mathbb{R}^{m\times p}
\]

First, we define:

\[
r = \lambda M s = b - Ax \;\Longleftrightarrow\; s = \frac{1}{\lambda} M^{-1} r
\]

This can only be done when M is an invertible matrix, thus p = m. Substituting into the problem gives:

\[
\min \|x\|^2 + \left\| \tfrac{1}{\lambda} M^{-1} r \right\|^2 \ \text{subject to}\ Ax + \lambda M s = b
\]

Since Ax + r = Ax + b − Ax = b always holds, the problem can be reduced to:

\begin{align*}
&\Longleftrightarrow\ \min \|x\|^2 + \left\| \tfrac{1}{\lambda} M^{-1} r \right\|^2 \\
&\Longleftrightarrow\ \min \|x\|^2 + \tfrac{1}{\lambda^2} \left\| M^{-1} r \right\|^2
\end{align*}

If M is orthogonal, we have:

\begin{align*}
&\Longleftrightarrow\ \min \|x\|^2 + \tfrac{1}{\lambda^2} \|r\|^2 \\
&\Longleftrightarrow\ \min \lambda^2 \|x\|^2 + \|r\|^2 \\
&\Longleftrightarrow\ \min \|\lambda x\|^2 + \|r\|^2 \\
&\Longleftrightarrow\ \min \|\lambda x\|^2 + \|b - Ax\|^2
\end{align*}

The problem we encounter is that we would like to get an M in front of x, which would be possible since M is orthogonal. However, the dimensions of M are m × m, while x has dimensions n × 1. Thus, we cannot make the last step to

\[
\min \|\lambda \hat{M} x\|^2 + \|b - Ax\|^2
\]

It seems as if there exists no matrix M such that the Tikhonov and Saunders M-regularization give the same result.
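This conclusion can be illustrated numerically: even with structurally similar bidiagonal smoothing matrices (an n × n one for Tikhonov, an m × m one for Saunders), the two M-regularized solutions do not coincide. A sketch of our own:

```python
import numpy as np

def bidiag(k):
    """Bidiagonal smoothing matrix: 1 on the diagonal, -1 just above it."""
    return np.eye(k) - np.eye(k, k=1)

rng = np.random.default_rng(6)
m, n, lam = 30, 12, 1.0
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

# Tikhonov M-regularization: x = (A^T A + lam^2 M^T M)^{-1} A^T b, with M n x n
Mt = bidiag(n)
x_tik = np.linalg.solve(A.T @ A + lam**2 * (Mt.T @ Mt), A.T @ b)
# Saunders M-regularization: x = A^T (A A^T + lam^2 M M^T)^{-1} b, with M m x m
Ms = bidiag(m)
x_sau = A.T @ np.linalg.solve(A @ A.T + lam**2 * (Ms @ Ms.T), b)
```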

6.3 Summary Numerical Experiments

Now that we have seen several regularization methods and the SVD, in this section all methods are compared with each other. The best solution of each method is listed in the tables for each test problem.

Table 6.2: Tikhonov Regularization

Matrix   Global Error   λ
I        1.5983·10−3    5
M1       2.0467·10−3    10
M2       2.0853·10−3    15
M3       2.2746·10−3    1

Table 6.3: Saunders Regularization

Matrix   Global Error   λ
I        1.5983·10−3    5
M        2.8054·10−3    15

SVD

Singular Values   Global Error
150               2.3833·10−3

Table 6.5: Best solutions to Test Problem Wide

For test problem Wide, the smallest global error is given by the Tikhonov and Saunders I-regularization, followed by Tikhonov M1-regularization. In the eyeball norm the best solution is given by Tikhonov M1-regularization: the dark blue rectangle from the model solution is clearer for Tikhonov M1-regularization than for the Tikhonov and Saunders I-regularization.
The best solution for test problem Narrow, in the eyeball norm, is given by Tikhonov M2-regularization. The model solution is more recognizable in this solution (Figure B.8b) than in the solution given by the SVD (Figure 3.6b). Nonetheless, the SVD solution has a global error that is at least 0.001 smaller than that of Tikhonov M2-regularization.
In general, the approximated solution of the AIR tools test problem is best when reconstructed with the Tikhonov M2-regularization method. Its global error is the smallest, and in the eyeball norm Tikhonov M2-regularization also works well for the AIR tools test problem. The Tikhonov and Saunders I-regularization also give very good results, both in global error and in the eyeball norm.
All in all, Tikhonov M2-regularization works well on all three problems. For test problems Narrow and AIR tools the best results in the eyeball norm are given by this regularization method. For test problem Wide, Tikhonov M2-regularization performs a bit worse than Tikhonov

Table 6.6: Tikhonov Regularization

Matrix   Global Error   λ
I        4.4737·10−3    5
M1       4.3567·10−3    10
M2       4.0471·10−3    5
M3       5.0035·10−3    5

Table 6.7: Saunders Regularization

Matrix   Global Error   λ
I        4.4737·10−3    5
M        4.4888·10−3    15

SVD

Singular Values   Global Error
120               3.0749·10−3

Best solutions to Test Problem Narrow

Table 6.10: Tikhonov Regularization

Matrix   Global Error   λ
I        1.2171    1
M1       1.3228    1
M2       1.1198    1
M3       1.6334    1

Table 6.11: Saunders Regularization

Matrix   Global Error   λ
I        1.2171    1
M        1.2266    1

SVD

Singular Values   Global Error
175               1.2982

Table 6.13: Best solutions to Test Problem AIR tools

M1-regularization. The Saunders M-regularization method does not perform well for test problems Wide and Narrow, but for test problem AIR tools its global error is among the smallest. Since the values of λ were chosen rather arbitrarily, there is a chance that the regularization methods perform better for other values of λ.
Chapter 7

Conclusion

This chapter presents the most important results and discusses them. Moreover, we make a few suggestions for future research. The goal of this bachelor project was to investigate the Tikhonov and Saunders regularization methods, most importantly by answering the following research questions:

These questions will be answered one by one.

The Tikhonov and Saunders I-regularization problems are equivalent for every problem Ax = b. However, for M-regularization the problems are not equivalent for an arbitrary matrix M. There does not seem to be an M such that the problems are equivalent, since the dimensions of M differ between the two regularization methods.
A natural M for Saunders M-regularization is quite hard to define, because the regularization matrix M influences the residuals. We want the residuals to be as small as possible, which cannot easily be achieved without knowing the residuals. Therefore, the most natural matrix to use with Saunders regularization is a matrix which makes the residuals smooth; this matrix is defined in (5.6). For Tikhonov M-regularization the matrix M influences the solution instead of the residuals. This leads to very natural options for M, which all smooth the solution. The smoothing can be done in the horizontal direction, in the vertical direction, or in both. The best results are obtained when the solution is smoothed in the vertical direction. This can be explained by the fact that the noise has more influence in the horizontal direction, since the seismic waves travel in the vertical direction.
Another important result is that when there is noise in the data, the test problems can be solved with several different methods; each method gives a reasonable solution with an acceptable global error. In general, Tikhonov M-regularization with a smoothing operator in the vertical direction gives the best result for the three test problems. Saunders M-regularization does not seem to work well for the test problems, except for test problem AIR tools.
For future research, there is still enough to find out about Saunders regularization. For example, one could use L-curves to determine a good regularization parameter, and investigate further choices of M for the Saunders regularization. There might also be an M such that the Tikhonov and Saunders regularizations are equivalent.

Bibliography

[1] Unique rows space solution. https://www.khanacademy.org/math/linear-algebra/alternate_bases/othogonal_complements/v/lin-alg-unique-rowspace-solution-to-ax-b. Accessed: 25-6-2015.

[2] T. M. Buzug. Computed tomography: from photon statistics to modern cone-beam CT.
Springer Science & Business Media, 2008.

[3] L. Dykes, S. Noschese, and L. Reichel. Rescaling the GSVD with application to ill-posed
problems. Numerical Algorithms, 68(3):531–545, 2015.

[4] P. C. Hansen. Analysis of discrete ill-posed problems by means of the L-curve. SIAM review,
34(4):561–580, 1992.

[5] P. C. Hansen and M. Saxild-Hansen. AIR tools – a MATLAB package of algebraic iterative
reconstruction methods. Journal of Computational and Applied Mathematics, 236(8):2167–
2178, 2012.

[6] G. Nolet. Solving or resolving inadequate and noisy tomographic systems. Journal of
computational physics, 61(3):463–482, 1985.

[7] M. A. Saunders. Solution of sparse rectangular systems using LSQR and CRAIG. BIT
Numerical Mathematics, 35(4):588–604, 1995.

[8] G. Strang. Linear algebra and its applications. Academic Press, 1980.

[11] M.J. Vuik. Reconstructie van de aardkorst met tomografie. 2010.

Appendix A

Invertible Matrices

A.1 A^T A + λ^2 I is invertible
To prove that A^T A + λ^2 I is invertible for all λ ≠ 0, every eigenvalue of the matrix must be non-zero. For an eigenvector x with eigenvalue τ:

\[
(A^T A + \lambda^2 I)x = \tau x
\;\Longleftrightarrow\;
A^T A x = (\tau - \lambda^2) x
\;\Longleftrightarrow\;
A^T A x = \mu x
\]

Since A^T A is symmetric and positive-semidefinite, µ ∈ R and

\[
\mu \geq 0
\;\Longleftrightarrow\;
\tau - \lambda^2 \geq 0
\;\Longleftrightarrow\;
\tau \geq \lambda^2
\]

So all eigenvalues of A^T A + λ^2 I are at least λ^2 > 0, hence positive and non-zero.
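The bound τ ≥ λ² is easy to verify numerically; a small check of our own (not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((20, 8))
lam = 0.7

# Real eigenvalues of the symmetric matrix A^T A + lam^2 I
taus = np.linalg.eigvalsh(A.T @ A + lam**2 * np.eye(8))
# Every eigenvalue is at least lam^2, so the matrix is invertible.
```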

A.2 A^T A + λ^2 M^T M is invertible
A^T A + λ^2 M^T M is invertible for all λ > 0, provided the null spaces of A and M intersect only in the zero vector. Suppose x is an eigenvector with eigenvalue τ; then:

\[
(A^T A + \lambda^2 M^T M)x = \tau x
\]

Note that A^T A is symmetric and positive-semidefinite, and M^T M is also symmetric and positive-semidefinite. For λ > 0, the matrix A^T A + λ^2 M^T M is therefore symmetric and positive-semidefinite: \(x^T (A^T A + \lambda^2 M^T M)x = \|Ax\|^2 + \lambda^2 \|Mx\|^2 \geq 0\), with equality only if Ax = 0 and Mx = 0. If x = 0 is the only such vector, every eigenvalue satisfies τ > 0, so the matrix is invertible.


A.3 AA^T + λ^2 I is invertible
To show that AA^T + λ^2 I is invertible for all λ ≠ 0, all eigenvalues of the matrix must be non-zero. For an eigenvector x with eigenvalue τ:

\[
(AA^T + \lambda^2 I)x = \tau x
\;\Longleftrightarrow\;
AA^T x = (\tau - \lambda^2) x
\;\Longleftrightarrow\;
AA^T x = \mu x
\]

Since AA^T is symmetric and positive-semidefinite, µ ∈ R and

\[
\mu \geq 0
\;\Longleftrightarrow\;
\tau - \lambda^2 \geq 0
\;\Longleftrightarrow\;
\tau \geq \lambda^2
\]

So all eigenvalues of AA^T + λ^2 I are at least λ^2 > 0, hence the matrix is invertible.

A.4 AA^T + λ^2 M M^T is invertible
AA^T + λ^2 M M^T is invertible for all λ > 0, provided the null spaces of A^T and M^T intersect only in the zero vector. Suppose x is an eigenvector with eigenvalue τ; then:

\[
(AA^T + \lambda^2 M M^T)x = \tau x
\]

Note that AA^T is symmetric and positive-semidefinite, and M M^T is also symmetric and positive-semidefinite. For λ > 0, the matrix AA^T + λ^2 M M^T is therefore symmetric and positive-semidefinite: \(x^T (AA^T + \lambda^2 M M^T)x = \|A^T x\|^2 + \lambda^2 \|M^T x\|^2 \geq 0\), with equality only if A^T x = 0 and M^T x = 0. If x = 0 is the only such vector, every eigenvalue satisfies τ > 0, so the matrix is invertible.
Appendix B

(a) λ = 0.01 (b) λ = 0.1

(c) λ = 1 (d) λ = 10

Figure B.1: Solution of Tikhonov I-regularization without noise

52 APPENDIX B. SOLUTIONS TEST PROBLEM NARROW

(a) λ = 1 (b) λ = 5

(c) λ = 10 (d) λ = 15

Figure B.2: Solution of Tikhonov I-regularization with ε = 0.01

(a) λ = 0.01 (b) λ = 0.1

(c) λ = 1 (d) λ = 10

Figure B.3: Solution of Tikhonov M -regularization without noise with M1


(a) λ = 1 (b) λ = 5

(c) λ = 10 (d) λ = 15

Figure B.4: Solution of Tikhonov M-regularization with ε = 0.01 with M1

(a) λ = 0.01 (b) λ = 0.1

(c) λ = 1 (d) λ = 10

Figure B.5: Solution of Tikhonov M -regularization without noise with M2


(a) λ = 1 (b) λ = 5

(c) λ = 10 (d) λ = 15

Figure B.6: Solution of Tikhonov M-regularization with ε = 0.01 with M2

(a) λ = 0.01 (b) λ = 0.1

(c) λ = 1 (d) λ = 10

Figure B.7: Solution of Tikhonov M -regularization without noise with M3


(a) λ = 1 (b) λ = 5

(c) λ = 10 (d) λ = 15

Figure B.8: Solution of Tikhonov M-regularization with ε = 0.01 with M3

(a) λ = 0.01 (b) λ = 0.1

(c) λ = 1 (d) λ = 10

Figure B.9: Solution of Saunders I-regularization without noise


(a) λ = 1 (b) λ = 5

(c) λ = 10 (d) λ = 15

Figure B.10: Solution of Saunders I-regularization with ε = 0.01

(a) λ = 0.01 (b) λ = 0.1

(c) λ = 1 (d) λ = 10

Figure B.11: Solution of Saunders M -regularization without noise


(a) λ = 1 (b) λ = 5

(c) λ = 10 (d) λ = 15

Figure B.12: Solution of Saunders M-regularization with ε = 0.01

Appendix C

MATLAB Programs

(All MATLAB files are the scripts for the Nolet test problem. A few alterations should be made when using the scripts for test problem AIR tools.)

C.1 SVD
clear all; close all;

scrsz = get(0,'ScreenSize');

[x_mod, n1, m1, entries, rep, field, symm] = mmread('solution.mtx');

noise = 0;
r = 0;
problem = 'w';

contin = 1;
while ( contin )

  Title = 'Parameters';
  prompt = {'Noise:', 'Singularvalues:', 'Wide/narrow'};
  defaults = {num2str(noise), num2str(r), problem};
  lineNo = [1 25];
  params = inputdlg(prompt, Title, lineNo, defaults);
  if ( isempty(params) ) contin = 0; break; end

  noise = str2num(char(params(1)));
  r = str2num(char(params(2)));
  problem = char(params(3));
  if problem == 'w'
    [A, m, n, entries, rep, field, symm] = mmread('wide20.mtx');
  else
    [A, m, n, entries, rep, field, symm] = mmread('narrow20.mtx');
  end
  d = A*x_mod;

  if ( noise > 0 )
    randn('state',0);
    db = randn(size(d));
    pert = noise*norm(d)/norm(db);
    b = d + pert*db;
  else
    b = d;
  end

  % Truncated SVD: keep only the r largest singular values
  [U,S,V] = svd(full(A));
  U = U(:,1:r);
  S = S(1:r,1:r);
  V = V(:,1:r);
  y2 = U'*b;
  y3 = S\y2;
  x = V*y3;

  figure
  plotsol( x )

  error = norm(x - x_mod)
end

C.2 Tikhonov
C.2.1 I-regularization
clear all; close all;

scrsz = get(0,'ScreenSize');

[x_mod, n1, m1, entries, rep, field, symm] = mmread('solution.mtx');

noise = 0;
lambda = 0;
problem = 'w';

contin = 1;
while ( contin )

  Title = 'Parameters';
  prompt = {'Noise:', 'Regularization:', 'Wide/narrow'};
  defaults = {num2str(noise), num2str(lambda), problem};
  lineNo = [1 25];
  params = inputdlg(prompt, Title, lineNo, defaults);
  if ( isempty(params) ) contin = 0; break; end

  noise = str2num(char(params(1)));
  lambda = str2num(char(params(2)));
  problem = char(params(3));
  if problem == 'w'
    [A, m, n, entries, rep, field, symm] = mmread('wide20.mtx');
  else
    [A, m, n, entries, rep, field, symm] = mmread('narrow20.mtx');
  end
  d = A*x_mod;

  if ( noise > 0 )
    randn('state',0);
    db = randn(size(d));
    pert = noise*norm(d)/norm(db);
    b = d + pert*db;
  else
    b = d;
  end

  % Tikhonov I-regularization: x = (A'A + lambda^2 I) \ (A'b)
  x = (A'*A + lambda^2*speye(n,n))\(A'*b);
  error = norm(x - x_mod)
  figure
  plotsol( x )
end

C.2.2 M1 -regularization
clear all; close all;

scrsz = get(0,'ScreenSize');

[x_mod, n1, m1, entries, rep, field, symm] = mmread('solution.mtx');

noise = 0;
lambda = 0;
problem = 'w';

contin = 1;
while ( contin )

  Title = 'Parameters';
  prompt = {'Noise:', 'Regularization:', 'Wide/narrow'};
  defaults = {num2str(noise), num2str(lambda), problem};
  lineNo = [1 25];
  params = inputdlg(prompt, Title, lineNo, defaults);
  if ( isempty(params) ) contin = 0; break; end

  noise = str2num(char(params(1)));
  lambda = str2num(char(params(2)));
  problem = char(params(3));
  if problem == 'w'
    [A, m, n, entries, rep, field, symm] = mmread('wide20.mtx');
  else
    [A, m, n, entries, rep, field, symm] = mmread('narrow20.mtx');
  end
  d = A*x_mod;

  if ( noise > 0 )
    randn('state',0);
    db = randn(size(d));
    pert = noise*norm(d)/norm(db);
    b = d + pert*db;
  else
    b = d;
  end

  % M1: horizontal smoothing, no coupling across the east boundary
  M = diag(ones(1,n)) + diag(-ones(1,n-1),1);
  for j = 1:9
    M(j*20, j*20+1) = 0;
  end
  % Solve the Tikhonov problem via the GSVD of (A, lambda*M)
  [U,V,X,C,S] = gsvd(full(A), lambda*M);
  y1 = C'*U'*b;
  y2 = (C'*C + S'*S)\y1;
  y3 = X'\y2;

  figure
  plotsol( y3 )

  error = norm(y3 - x_mod)
end

C.2.3 M2 -regularization
clear all; close all;

scrsz = get(0,'ScreenSize');

[x_mod, n1, m1, entries, rep, field, symm] = mmread('solution.mtx');

noise = 0;
lambda = 0;
problem = 'w';

contin = 1;
while ( contin )

  Title = 'Parameters';
  prompt = {'Noise:', 'Regularization:', 'Wide/narrow'};
  defaults = {num2str(noise), num2str(lambda), problem};
  lineNo = [1 25];
  params = inputdlg(prompt, Title, lineNo, defaults);
  if ( isempty(params) ) contin = 0; break; end

  noise = str2num(char(params(1)));
  lambda = str2num(char(params(2)));
  problem = char(params(3));
  if problem == 'w'
    [A, m, n, entries, rep, field, symm] = mmread('wide20.mtx');
  else
    [A, m, n, entries, rep, field, symm] = mmread('narrow20.mtx');
  end
  d = A*x_mod;

  if ( noise > 0 )
    randn('state',0);
    db = randn(size(d));
    pert = noise*norm(d)/norm(db);
    b = d + pert*db;
  else
    b = d;
  end

  % M2: vertical smoothing (difference with the neighbour one row below)
  M = diag(ones(1,n)) + diag(-ones(1,n-20),20);
  [U,V,X,C,S] = gsvd(full(A), lambda*M);
  y1 = C'*U'*b;
  y2 = (C'*C + S'*S)\y1;
  y3 = X'\y2;

  figure
  plotsol( y3 )

  error = norm(y3 - x_mod)
end

C.2.4 M3 -regularization
clear all; close all;

scrsz = get(0,'ScreenSize');

[x_mod, n1, m1, entries, rep, field, symm] = mmread('solution.mtx');

noise = 0;
lambda = 0;
problem = 'w';

contin = 1;
while ( contin )

  Title = 'Parameters';
  prompt = {'Noise:', 'Regularization:', 'Wide/narrow'};
  defaults = {num2str(noise), num2str(lambda), problem};
  lineNo = [1 25];
  params = inputdlg(prompt, Title, lineNo, defaults);
  if ( isempty(params) ) contin = 0; break; end

  noise = str2num(char(params(1)));
  lambda = str2num(char(params(2)));
  problem = char(params(3));
  if problem == 'w'
    [A, m, n, entries, rep, field, symm] = mmread('wide20.mtx');
  else
    [A, m, n, entries, rep, field, symm] = mmread('narrow20.mtx');
  end
  d = A*x_mod;

  if ( noise > 0 )
    randn('state',0);
    db = randn(size(d));
    pert = noise*norm(d)/norm(db);
    b = d + pert*db;
  else
    b = d;
  end

  % M3: horizontal and vertical smoothing combined
  M = diag(2*ones(1,n)) + diag(-ones(1,n-1),1) + diag(-ones(1,n-20),20);
  [U,V,X,C,S] = gsvd(full(A), lambda*M);
  y1 = C'*U'*b;
  y2 = (C'*C + S'*S)\y1;
  y3 = X'\y2;

  figure
  plotsol( y3 )

  error = norm(y3 - x_mod)
end

C.3 Saunders
C.3.1 I-regularization
clear all; close all;

scrsz = get(0,'ScreenSize');

[x_mod, n1, m1, entries, rep, field, symm] = mmread('solution.mtx');

noise = 0;
lambda = 0;
problem = 'w';

contin = 1;
while ( contin )

  Title = 'Parameters';
  prompt = {'Noise:', 'Regularization:', 'Wide/narrow'};
  defaults = {num2str(noise), num2str(lambda), problem};
  lineNo = [1 25];
  params = inputdlg(prompt, Title, lineNo, defaults);
  if ( isempty(params) ) contin = 0; break; end

  noise = str2num(char(params(1)));
  lambda = str2num(char(params(2)));
  problem = char(params(3));
  if problem == 'w'
    [A, m, n, entries, rep, field, symm] = mmread('wide20.mtx');
  else
    [A, m, n, entries, rep, field, symm] = mmread('narrow20.mtx');
  end
  d = A*x_mod;

  if ( noise > 0 )
    randn('state',0);
    db = randn(size(d));
    pert = noise*norm(d)/norm(db);
    b = d + pert*db;
  else
    b = d;
  end

  % Saunders I-regularization: y = (AA' + lambda^2 I) \ b, then x = A'y
  y = (A*A' + lambda^2*speye(m,m))\b;
  x = A'*y;
  error = norm(x - x_mod)
  figure
  plotsol( x )
end

C.3.2 M -regularization
% Script to test regularisation on Nolet’s problem
%

clear all; close all;

scrsz = get(0,’ScreenSize’);

[x_mod, n1, m1, entries, rep, field, symm] = mmread(’solution.mtx’);

66 APPENDIX C. MATLAB PROGRAMS

noise = 0;
lambda = 0;
problem = ’n’;

contin = 1;
while ( contin )

Title = 'Parameters';
prompt = {'Noise:', 'Regularization:', 'Wide/narrow'};
defaults = {num2str(noise), num2str(lambda), problem};
lineNo = [1 25];
params = inputdlg(prompt,Title,lineNo,defaults);
if ( isempty(params) ) contin = 0; break; end

noise = str2num(char(params(1)));
lambda = str2num(char(params(2)));
problem = char(params(3));
if problem == 'w'
[A, m, n, entries, rep, field, symm] = mmread('wide20.mtx');
else
[A, m, n, entries, rep, field, symm] = mmread('narrow20.mtx');
end
end
d = A*x_mod;

if ( noise > 0 )
randn('state',0);
db = randn(size(d));
pert = noise*norm(d)/norm(db);
b = d + pert*db;
else
b = d;
end

M=diag(ones(1,m))+diag(-ones(1,m-1),1);
y = (A*A' + lambda^2*M*M')\b;
x = A'*y;

figure
plotsol( x )

error=norm(x-x_mod)
end

C.3.3 function plotsol

function plotsol( x )

y = reshape(x,20,10);
s= -1./36.;

y = y'/s;
imagesc(y);
caxis([-0.06 0]);
colorbar;
drawnow;
axis off

return

C.3.4 function mmread

function [A,rows,cols,entries,rep,field,symm] = mmread(filename)
%
% function [A,rows,cols,entries,rep,field,symm] = mmread(filename)
%
% Reads the contents of the Matrix Market file ’filename’
% into the matrix ’A’. ’A’ will be either sparse or full,
% depending on the Matrix Market format indicated by
% ’coordinate’ (coordinate sparse storage), or
% ’array’ (dense array storage). The data will be duplicated
% as appropriate if symmetry is indicated in the header.
%
% Optionally, size information about the matrix can be
% obtained by using the return values rows, cols, and
% entries, where entries is the number of nonzero entries
% in the final matrix. Type information can also be retrieved
% using the optional return values rep (representation), field,
% and symm (symmetry).
%
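%
% Example call (illustrative; the file name matches the matrices
% used elsewhere in this appendix):
%
%   [A, rows, cols, entries, rep, field, symm] = mmread('wide20.mtx');
%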

mmfile = fopen(filename,'r');
if ( mmfile == -1 )
disp(filename);
error('File not found.');
end;

header = fgets(mmfile);
if ( header == -1 )
error('Empty file.')
end

% NOTE: If using a version of Matlab for which strtok is not
% defined, substitute 'gettok' for 'strtok' in the
% following lines, and download gettok.m from the
% Matrix Market site.

[head0,header] = strtok(header); % see note above
[head1,header] = strtok(header);
[rep,header] = strtok(header);
[field,header] = strtok(header);
[symm,header] = strtok(header);

rep = lower(rep);
field = lower(field);
symm = lower(symm);
if ( length(symm) == 0 )
disp(['Not enough words in header line of file ',filename])
disp('Recognized format: ')
disp('%%MatrixMarket matrix representation field symmetry')
error('Check header line.')
end
if ( ~ strcmp(head0,'%%MatrixMarket') )
error('Not a valid MatrixMarket header.')
end
if ( ~ strcmp(head1,'matrix') )
disp(['This seems to be a MatrixMarket ',head1,' file.']);
disp('This function only knows how to read MatrixMarket matrix files.');
disp(' ');
error(' ');
end

commentline = fgets(mmfile);
while length(commentline) > 0 & commentline(1) == '%',
commentline = fgets(mmfile);
end

% Read size information, then branch according to
% sparse or dense format

if ( strcmp(rep,'coordinate') ) % read matrix given in sparse coordinate format

[sizeinfo,count] = sscanf(commentline,'%d%d%d');
while ( count == 0 )
commentline = fgets(mmfile);
if (commentline == -1 )
error('End-of-file reached before size information was found.')
end
[sizeinfo,count] = sscanf(commentline,'%d%d%d');
if ( count > 0 & count ~= 3 )
error('Invalid size specification line.')
end
end
rows = sizeinfo(1);
cols = sizeinfo(2);

entries = sizeinfo(3);

if ( strcmp(field,'real') ) % real valued entries:

[T,count] = fscanf(mmfile,'%f',3);
T = [T; fscanf(mmfile,'%f')];
if ( size(T) ~= 3*entries )
message = ...
str2mat('Data file does not contain expected amount of data.',...
'Check that number of data lines matches nonzero count.');
disp(message);
error('Invalid data.');
end
T = reshape(T,3,entries)';
A = sparse(T(:,1), T(:,2), T(:,3), rows, cols);

elseif ( strcmp(field,'complex') ) % complex valued entries:

T = fscanf(mmfile,'%f',4);
T = [T; fscanf(mmfile,'%f')];
if ( size(T) ~= 4*entries )
message = ...
str2mat('Data file does not contain expected amount of data.',...
'Check that number of data lines matches nonzero count.');
disp(message);
error('Invalid data.');
end
T = reshape(T,4,entries)';
A = sparse(T(:,1), T(:,2), T(:,3) + T(:,4)*sqrt(-1), rows, cols);

elseif ( strcmp(field,'pattern') ) % pattern matrix (no values given):

T = fscanf(mmfile,'%f',2);
T = [T; fscanf(mmfile,'%f')];
if ( size(T) ~= 2*entries )
message = ...
str2mat('Data file does not contain expected amount of data.',...
'Check that number of data lines matches nonzero count.');
disp(message);
error('Invalid data.');
end
T = reshape(T,2,entries)';
A = sparse(T(:,1), T(:,2), ones(entries,1), rows, cols);

end

elseif ( strcmp(rep,'array') ) % read matrix given in dense
% array (column major) format

[sizeinfo,count] = sscanf(commentline,'%d%d');
while ( count == 0 )
commentline = fgets(mmfile);
if (commentline == -1 )
error('End-of-file reached before size information was found.')
end
[sizeinfo,count] = sscanf(commentline,'%d%d');
if ( count > 0 & count ~= 2 )
error('Invalid size specification line.')
end
end
rows = sizeinfo(1);
cols = sizeinfo(2);
entries = rows*cols;
if ( strcmp(field,'real') ) % real valued entries:
A = fscanf(mmfile,'%f',1);
A = [A; fscanf(mmfile,'%f')];
if ( strcmp(symm,'symmetric') | strcmp(symm,'hermitian') ...
| strcmp(symm,'skew-symmetric') )
for j=1:cols-1,
currenti = j*rows;
A = [A(1:currenti); zeros(j,1); A(currenti+1:length(A))];
end
elseif ( ~ strcmp(symm,'general') )
disp('Unrecognized symmetry')
disp(symm)
disp('Recognized choices:')
disp(' symmetric')
disp(' hermitian')
disp(' skew-symmetric')
disp(' general')
end
A = reshape(A,rows,cols);
elseif ( strcmp(field,'complex') ) % complex valued entries:
tmpr = fscanf(mmfile,'%f',1);
tmpi = fscanf(mmfile,'%f',1);
A = tmpr + tmpi*i;
for j=1:entries-1
tmpr = fscanf(mmfile,'%f',1);
tmpi = fscanf(mmfile,'%f',1);
A = [A; tmpr + tmpi*i];
end
if ( strcmp(symm,'symmetric') | strcmp(symm,'hermitian') ...
| strcmp(symm,'skew-symmetric') )
for j=1:cols-1,
currenti = j*rows;
A = [A(1:currenti); zeros(j,1); A(currenti+1:length(A))];
end

elseif ( ~ strcmp(symm,'general') )
disp('Unrecognized symmetry')
disp(symm)
disp('Recognized choices:')
disp(' symmetric')
disp(' hermitian')
disp(' skew-symmetric')
disp(' general')
end
A = reshape(A,rows,cols);
elseif ( strcmp(field,'pattern') ) % pattern (makes no sense for dense)
disp(['Matrix type: ',field])
error('Pattern matrix type invalid for array storage format.');
else % Unknown matrix type
disp(['Matrix type: ',field])
error('Invalid matrix type specification. Check header against MM documentation.');
end
end

%
% If symmetric, skew-symmetric or Hermitian, duplicate lower
% triangular part and modify entries as appropriate:
%

if ( strcmp(symm,'symmetric') )
A = A + A.' - diag(diag(A));
entries = nnz(A);
elseif ( strcmp(symm,'hermitian') )
A = A + A' - diag(diag(A));
entries = nnz(A);
elseif ( strcmp(symm,'skew-symmetric') )
A = A - A';
entries = nnz(A);
end

fclose(mmfile);
% Done.