Outline
General Subspace Minimization Algorithm
Review orthogonalization and projection formulas
Arbitrary Subspace Methods
Residual Minimization

Pick a k-dimensional subspace spanned by \{ w_0, \ldots, w_{k-1} \} and
form the approximate solution as a linear combination of the basis vectors:

    x^k = \sum_{i=0}^{k-1} \alpha_i w_i
Arbitrary Subspace Methods
Residual Minimization

If x^k = \sum_{i=0}^{k-1} \alpha_i w_i, then the residual is

    r^k = b - M x^k = b - \sum_{i=0}^{k-1} \alpha_i M w_i

and its squared norm is

    \| r^k \|_2^2 = \Big( b - \sum_{i=0}^{k-1} \alpha_i M w_i \Big)^T \Big( b - \sum_{i=0}^{k-1} \alpha_i M w_i \Big)
Arbitrary Subspace Methods
Residual Minimization: Computational Approach

Minimizing

    \| r^k \|_2^2 = \Big\| b - \sum_{i=0}^{k-1} \alpha_i M w_i \Big\|_2^2

is easy if (M w_i)^T (M w_j) = 0 for i \ne j, i.e., if M w_i is orthogonal
to M w_j. So create a set of vectors \{ p_0, \ldots, p_{k-1} \} spanning
the same subspace as \{ w_0, \ldots, w_{k-1} \} such that

    (M p_i)^T (M p_j) = 0, \quad i \ne j
Arbitrary Subspace Methods
Residual Minimization: Algorithm Steps

Orthogonalize the directions:

    For j = 0 to k-1:
        p_j = w_j - \sum_{i=0}^{j-1} \frac{ (M w_j)^T (M p_i) }{ (M p_i)^T (M p_i) } p_i

Then the residual-minimizing solution is

    x^k = \sum_{i=0}^{k-1} \frac{ (r^0)^T (M p_i) }{ (M p_i)^T (M p_i) } p_i
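The two formulas can be sketched directly in NumPy. This is an
illustrative transcription, not code from the lecture: the function name
is mine, and the sketch assumes x^0 = 0 so that r^0 = b.

```python
import numpy as np

def subspace_solve(M, b, W):
    """Minimize ||b - M x|| over x in the span of the columns of W:
    (1) Gram-Schmidt the directions so the M p_i are mutually orthogonal,
    (2) assemble x^k from the projection coefficients of r^0 = b."""
    n, k = W.shape
    P = np.zeros((n, k))
    for j in range(k):
        p = W[:, j].astype(float).copy()
        Mw_j = M @ W[:, j]
        for i in range(j):
            Mp_i = M @ P[:, i]
            # coefficient (M w_j)^T (M p_i) / (M p_i)^T (M p_i)
            p -= (Mw_j @ Mp_i) / (Mp_i @ Mp_i) * P[:, i]
        P[:, j] = p
    # x^k = sum_i [(r^0)^T (M p_i) / (M p_i)^T (M p_i)] p_i, with r^0 = b
    x = np.zeros(n)
    for i in range(k):
        Mp_i = M @ P[:, i]
        x += (b @ Mp_i) / (Mp_i @ Mp_i) * P[:, i]
    return x

# With a full-rank subspace (k = n) the minimum residual is zero,
# so the method recovers the exact solution of M x = b.
M = np.array([[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
x = subspace_solve(M, b, np.eye(3))
```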
Arbitrary Subspace Methods
Residual Minimization: Algorithm Steps by Picture

1) Orthogonalize the M w_i's

[Figure: the directions w_0, w_1, w_2, w_3 are orthogonalized one at a
time into p_0, p_1, p_2, p_3, starting from M p_0.]
Arbitrary Subspace Methods
Minimization Algorithm: Solution Algorithm

    r^0 = b - M x^0
    For j = 0 to k-1
        p_j = w_j                                      (search direction)
        For i = 0 to j-1
            p_j = p_j - [ (M p_j)^T (M p_i) ] p_i      (orthogonalize)
        p_j = p_j / \sqrt{ (M p_j)^T (M p_j) }          (normalize)
        x^{j+1} = x^j + [ (r^j)^T (M p_j) ] p_j         (update solution)
        r^{j+1} = r^j - [ (r^j)^T (M p_j) ] M p_j       (update residual)

(The orthogonalization coefficients need no denominator here because each
M p_i has been normalized to unit length.)
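The solution algorithm can be sketched in NumPy; the function and
variable names are illustrative choices, and the directions w_j are
supplied as the columns of a matrix W.

```python
import numpy as np

def minimization_algorithm(M, b, W, x0):
    """One pass over the search directions: M-orthogonalize each p_j
    against the previous (normalized) ones, then update x and r."""
    x = x0.astype(float).copy()
    r = b - M @ x                      # r^0 = b - M x^0
    P, MP = [], []
    for j in range(W.shape[1]):
        p = W[:, j].astype(float).copy()
        Mp = M @ p
        for pi, Mpi in zip(P, MP):     # orthogonalize search direction
            c = Mp @ Mpi               # (M p_j)^T (M p_i); M p_i unit-norm
            p -= c * pi
            Mp -= c * Mpi
        nrm = np.sqrt(Mp @ Mp)         # normalize
        p /= nrm
        Mp /= nrm
        alpha = r @ Mp                 # (r^j)^T (M p_j)
        x += alpha * p                 # update solution
        r -= alpha * Mp                # update residual
        P.append(p)
        MP.append(Mp)
    return x, r

# Using the full coordinate basis as the subspace drives the residual
# to zero (up to roundoff).
M = np.array([[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 2.0]])
b = np.array([3.0, 1.0, 2.0])
x, r = minimization_algorithm(M, b, np.eye(3), np.zeros(3))
```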
Arbitrary Subspace Methods
Subspace Selection: Criteria

Pick the w_i's so that the minimized residual

    \min_{\alpha} \Big\| b - \sum_{i=0}^{k-1} \alpha_i M w_i \Big\|_2

is small.
Arbitrary Subspace Methods
Subspace Selection: Historical Development

Consider minimizing

    f(x) = \frac{1}{2} x^T M x - x^T b

    \nabla_x f(x) = M x - b \;\Rightarrow\; x = M^{-1} b \text{ minimizes } f

A natural choice of subspace is the span of the steepest-descent
directions \{ \nabla f(x^0), \ldots, \nabla f(x^{k-1}) \}, i.e., the
residuals.
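The gradient formula can be sanity-checked with finite differences on a
small symmetric positive definite example (the matrix and evaluation
point below are illustrative choices, not from the slides):

```python
import numpy as np

M = np.array([[2.0, -1.0], [-1.0, 2.0]])   # symmetric positive definite
b = np.array([1.0, 0.0])

f = lambda x: 0.5 * x @ M @ x - x @ b

# Central-difference gradient at an arbitrary point: should match M x - b
# (f is quadratic, so central differences are exact up to roundoff).
x = np.array([0.3, -0.7])
eps = 1e-6
grad = np.array([(f(x + eps * e) - f(x - eps * e)) / (2 * eps)
                 for e in np.eye(2)])

# The stationary point x = M^{-1} b is the minimizer since M is SPD.
x_star = np.linalg.solve(M, b)
```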
Arbitrary Subspace Methods
Subspace Selection: Krylov Subspace

    span\{ w_0, \ldots, w_{k-1} \} = span\{ r^0, \ldots, r^{k-1} \}

Since r^k = r^0 - \sum_{i=0}^{k-1} \alpha_i M r^i, this span equals

    span\{ r^0, M r^0, \ldots, M^{k-1} r^0 \}

which is called the Krylov Subspace.
SMA-HPC 2003 MIT
Krylov Methods
The Generalized Conjugate Residual (GCR) Algorithm

    \alpha_k = \frac{ (r^k)^T (M p_k) }{ (M p_k)^T (M p_k) }

    x^{k+1} = x^k + \alpha_k p_k

    r^{k+1} = r^k - \alpha_k M p_k

    p_{k+1} = r^{k+1} - \sum_{j=0}^{k} \frac{ (M r^{k+1})^T (M p_j) }{ (M p_j)^T (M p_j) } p_j
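The four GCR updates can be sketched as follows; the stopping tolerance,
iteration cap, and function signature are assumptions of this sketch,
not part of the lecture.

```python
import numpy as np

def gcr(M, b, tol=1e-10, max_iters=200):
    """GCR following the four update formulas (unnormalized directions)."""
    x = np.zeros_like(b, dtype=float)
    r = b.astype(float).copy()
    P, MP = [], []
    p = r.copy()
    for _ in range(max_iters):
        Mp = M @ p
        alpha = (r @ Mp) / (Mp @ Mp)   # alpha_k
        x = x + alpha * p              # x^{k+1} = x^k + alpha_k p_k
        r = r - alpha * Mp             # r^{k+1} = r^k - alpha_k M p_k
        P.append(p)
        MP.append(Mp)
        if np.linalg.norm(r) < tol:
            break
        # p_{k+1} = r^{k+1}
        #   - sum_j [(M r^{k+1})^T (M p_j) / (M p_j)^T (M p_j)] p_j
        Mr = M @ r
        p = r.copy()
        for pj, Mpj in zip(P, MP):
            p -= (Mr @ Mpj) / (Mpj @ Mpj) * pj
    return x

# Discretized-bar test matrix: GCR converges in at most n steps.
n = 10
M = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = gcr(M, b)
```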
Krylov Methods
Symmetric Case

If M = M^T, the orthogonalization terms with j < k vanish, so only the
most recent direction is needed in the update:

    p_{k+1} = r^{k+1} - \frac{ (M r^{k+1})^T (M p_k) }{ (M p_k)^T (M p_k) } p_k
Krylov Methods
Nodal Formulation: No-leak Example, Insulated Bar and Matrix

[Figure: discretized insulated bar with incoming heat; near-end
temperature T(0), far-end temperature T(1), m interior nodes.]

Nodal equation form:

    \begin{bmatrix} 2 & -1 & & \\ -1 & 2 & \ddots & \\ & \ddots & \ddots & -1 \\ & & -1 & 2 \end{bmatrix}
Krylov Methods
Nodal Formulation: No-leak Example, Circuit and Matrix

[Figure: equivalent m-node resistor network.]

Nodal equation form: the same m \times m tridiagonal matrix

    \begin{bmatrix} 2 & -1 & & \\ -1 & 2 & \ddots & \\ & \ddots & \ddots & -1 \\ & & -1 & 2 \end{bmatrix}
Krylov Methods
Nodal Formulation: Leaky Example, Conducting Bar and Matrix

[Figure: discretized conducting bar as before, but each node also leaks
heat away; near-end temperature T(0), far-end temperature T(1).]

Nodal equation form:

    \begin{bmatrix} 2.01 & -1 & & \\ -1 & 2.01 & \ddots & \\ & \ddots & \ddots & -1 \\ & & -1 & 2.01 \end{bmatrix}
Krylov Methods
Nodal Formulation: Leaky Example, Circuit and Matrix

[Figure: m-node resistor network with an added leakage resistor at each
node.]

Nodal equation form: the same m \times m tridiagonal matrix with 2.01 on
the diagonal

    \begin{bmatrix} 2.01 & -1 & & \\ -1 & 2.01 & \ddots & \\ & \ddots & \ddots & -1 \\ & & -1 & 2.01 \end{bmatrix}
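The two example matrices are easy to build and compare. The size m = 20
and the conditioning comparison below are illustrative additions: adding
0.01 to the diagonal shifts every eigenvalue up by exactly 0.01, moving
the spectrum away from zero and improving the conditioning.

```python
import numpy as np

m = 20
# Insulated (no-leak) bar: tridiagonal with 2 on the diagonal.
A_insulated = 2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
# Leaky bar: each node also conducts away, shifting the diagonal to 2.01.
A_leaky = A_insulated + 0.01 * np.eye(m)

# The leaky matrix is better conditioned than the insulated one.
cond_insulated = np.linalg.cond(A_insulated)
cond_leaky = np.linalg.cond(A_leaky)
```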
[Plot: residual (log scale, 10^{-1} down to 10^{-4}) versus iteration
(up to 60) for the insulating and leaky examples.]

[Plot: residual (log scale, 10^{-1} down to 10^{-5}) versus iteration
(up to 50) for the insulating and leaky examples.]
Convergence Analysis
Krylov Subspace Methods: Polynomial Approach

Since x^{k+1} \in span\{ r^0, M r^0, \ldots, M^k r^0 \},

    x^{k+1} = \sum_{i=0}^{k} \alpha_i M^i r^0 = \Phi_k(M) r^0

and therefore

    r^{k+1} = r^0 - \sum_{i=0}^{k} \alpha_i M^{i+1} r^0 = ( I - M \Phi_k(M) ) r^0
Krylov Methods
Convergence Analysis: Basic Properties

2) x^{k+1} = \Phi_k(M) r^0, where \Phi_k is the k-th order polynomial
   which minimizes \| r^{k+1} \|_2^2:

       r^{k+1} = b - M x^{k+1} = r^0 - M \Phi_k(M) r^0 = ( I - M \Phi_k(M) ) r^0 = \Psi_{k+1}(M) r^0

   where \Psi_{k+1}(M) r^0 is the (k+1)-th order polynomial in M, applied
   to r^0, minimizing \| r^{k+1} \|_2 subject to \Psi_{k+1}(0) = 1.

3) \| r^{k+1} \|_2 \le \| \tilde\Psi_{k+1}(M) r^0 \|_2 for any (k+1)-th
   order polynomial \tilde\Psi_{k+1} with \tilde\Psi_{k+1}(0) = 1.
Krylov Methods
Convergence Analysis

    \| r^{k+1} \|_2^2 \le \| \tilde\Psi_{k+1}(M) r^0 \|_2^2

Therefore, any polynomial \tilde\Psi_{k+1} which satisfies the zero
constraint \tilde\Psi_{k+1}(0) = 1 can be used to get an upper bound on
\| r^{k+1} \|_2^2.
Eigenvalues and Vectors Review
Basic Definitions

    M u_i = \lambda_i u_i

with eigenvalue \lambda_i and eigenvector u_i. Equivalently, \lambda_i is
an eigenvalue of M if M - \lambda_i I is singular, and u_i is an
eigenvector of M if ( M - \lambda_i I ) u_i = 0.
Eigenvalues and Vectors Review
Basic Definitions: Examples

    \begin{bmatrix} 1.1 & 1 \\ 1 & 1.1 \end{bmatrix}, \qquad
    \begin{bmatrix} 1 & 1 & 0 & 0 \\ 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 1 & 2 \end{bmatrix}, \qquad
    \begin{bmatrix} M_{11} & & & \\ M_{21} & M_{22} & & \\ \vdots & & \ddots & \\ M_{N1} & \cdots & M_{N,N-1} & M_{NN} \end{bmatrix}

Eigenvalues? Eigenvectors?
Eigenvalues and Vectors Review
A Simplifying Assumption

Assume M has a full set of N linearly independent eigenvectors. Then

    M \begin{bmatrix} u_1 & u_2 & u_3 & \cdots & u_N \end{bmatrix}
      = \begin{bmatrix} \lambda_1 u_1 & \lambda_2 u_2 & \lambda_3 u_3 & \cdots & \lambda_N u_N \end{bmatrix}
Eigenvalues and Vectors Review
A Simplifying Assumption Continued

    M U = U \Lambda, \qquad \Lambda = \begin{bmatrix} \lambda_1 & & 0 \\ & \ddots & \\ 0 & & \lambda_N \end{bmatrix}

    U^{-1} M U = \Lambda \quad \text{or} \quad M = U \Lambda U^{-1}
Eigenvalues and Vectors Review
Spectral Radius

[Figure: eigenvalues plotted in the complex plane, Re(\lambda) versus
Im(\lambda).]
Eigenvalues and Vectors Review
Heat-Conducting Bar Example

[Figure: unit-length rod with incoming heat, discretized into nodes
T_1, ..., T_N; voltage sources fix the ends at v_s = T(0) and
v_s = T(1).]
Eigenvalues and
Vectors Review
2 1 0 0
1 2
0
1
0
0
1
2
Eigenvalues N=20
Eigenvalues and
Vectors Review
Useful Eigenproperties
Spectral Mapping Theorem

Given a polynomial

    f(x) = a_0 + a_1 x + \cdots + a_p x^p

apply the polynomial to the matrix:

    f(M) = a_0 I + a_1 M + \cdots + a_p M^p

Then

    spectrum( f(M) ) = f( spectrum( M ) )
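A quick numerical check of the theorem on a small symmetric matrix; the
polynomial f(x) = 3x^2 - 2x + 0.5 and the random 4x4 matrix are example
choices of mine, not from the slides.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
M = A + A.T                                  # symmetric, so real spectrum

# f(M) = 3 M^2 - 2 M + 0.5 I  (note: the constant term multiplies I)
fM = 3 * (M @ M) - 2 * M + 0.5 * np.eye(4)

lam = np.linalg.eigvalsh(M)                  # spectrum of M
lam_f = np.linalg.eigvalsh(fM)               # spectrum of f(M)
# Spectral mapping: as sets, spectrum(f(M)) = f(spectrum(M)).
```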
Useful Eigenproperties
Spectral Mapping Theorem: Proof

    M M = U \Lambda U^{-1} U \Lambda U^{-1} = U \Lambda^2 U^{-1}, \qquad M^p = U \Lambda^p U^{-1}

    f(M) = U ( a_0 I + a_1 \Lambda + \cdots + a_p \Lambda^p ) U^{-1}

The factor a_0 I + a_1 \Lambda + \cdots + a_p \Lambda^p is diagonal, so

    f(M) U = U ( a_0 I + a_1 \Lambda + \cdots + a_p \Lambda^p )
Useful Eigenproperties
Spectral Decomposition

Decompose x in the eigenbasis:

    x = \alpha_1 u_1 + \alpha_2 u_2 + \cdots + \alpha_N u_N

Compute the coefficients by solving U \alpha = x, i.e. \alpha = U^{-1} x.

Applying M to x yields

    M x = M ( \alpha_1 u_1 + \alpha_2 u_2 + \cdots + \alpha_N u_N )
        = \alpha_1 \lambda_1 u_1 + \alpha_2 \lambda_2 u_2 + \cdots + \alpha_N \lambda_N u_N
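A small numerical check of the decomposition; the 2x2 symmetric example
matrix and test vector are mine, chosen for illustration.

```python
import numpy as np

M = np.array([[2.0, -1.0], [-1.0, 2.0]])
lam, U = np.linalg.eigh(M)          # columns of U are the eigenvectors u_i
x = np.array([1.0, 3.0])

alpha = np.linalg.solve(U, x)       # solve U alpha = x, i.e. alpha = U^{-1} x
Mx_spectral = U @ (lam * alpha)     # sum_i alpha_i lambda_i u_i
# This should agree with applying M directly.
```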
Krylov Methods
Convergence Analysis: Important Observations

1) There is an n-th order polynomial \Psi_n with \Psi_n(0) = 1 for which
   \Psi_n(M) r^0 = 0, and therefore r^n = 0: the algorithm converges in
   at most n steps.
2) If M has only q distinct eigenvalues, the GCR Algorithm converges in
   at most q steps.
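Observation 2 can be illustrated through the polynomial bound: for a
matrix with q = 2 distinct eigenvalues, the 2nd-order polynomial
\Psi_2(t) = (1 - t/\lambda_1)(1 - t/\lambda_2) satisfies \Psi_2(0) = 1
and annihilates r^0, so the optimal residual r^2 must already be zero.
The example matrix and residual below are mine, for illustration.

```python
import numpy as np

# M with exactly two distinct eigenvalues, 1 and 3 (diagonal for clarity).
M = np.diag([1.0, 1.0, 3.0, 3.0])
r0 = np.array([1.0, 2.0, 3.0, 4.0])

# psi(t) = (1 - t/1)(1 - t/3): second order, psi(0) = 1, and it vanishes
# on both eigenvalues, so by the spectral mapping theorem psi(M) r^0 = 0.
I = np.eye(4)
psi_M = (I - M / 1.0) @ (I - M / 3.0)
residual_bound = psi_M @ r0
```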
Summary
Arbitrary Subspace Algorithm
Orthogonalization of Search Directions