Matrix Solution Methods for Systems of Linear Algebraic Equations
PET 5110

Some Background
Notation: E^N denotes N-dimensional Euclidean space; E^{N,N} denotes the N x N real matrices.

Linearly independent vectors
A set of vectors is linearly independent if the only way the following equation can be satisfied is if all of the c's are zero:

c1*v1 + c2*v2 + c3*v3 + ... + cn*vn = 0


Basis
A basis for E^N is a set of N linearly independent vectors of E^N.
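The independence test above can be sketched numerically: stack the vectors as the columns of a matrix and check whether its rank equals the number of vectors (a minimal NumPy sketch; the example vectors are made up).

```python
import numpy as np

# Hypothetical example vectors; any set can be tested the same way.
v1 = np.array([1.0, 0.0, 2.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = np.array([1.0, 1.0, 3.0])   # v3 = v1 + v2, so the set is dependent

# Stack the vectors as columns; they are linearly independent
# exactly when the matrix has full column rank.
V = np.column_stack([v1, v2, v3])
independent = np.linalg.matrix_rank(V) == V.shape[1]
print(independent)  # False: only 2 of the 3 vectors are independent
```

Because the rank here is 2, not 3, no subset containing all three vectors can serve as a basis for E^3.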

Transpose
interchange the rows and columns of a matrix

Some Background
Symmetric
A matrix is symmetric if A^T = A.

Conjugate Transpose (Hermitian or adjoint)
take the transpose and then take the complex conjugate of each entry

SPD (symmetric positive definite)
An n x n real symmetric matrix M is positive definite if z^T M z > 0 for all non-zero vectors z with real entries.
If L is nonsingular and real, L*L^T is SPD.

Trace of a matrix
the trace of an N x N square matrix is the sum of the elements of the main diagonal
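These background definitions can be checked numerically (a minimal NumPy sketch; the matrix values are arbitrary). The SPD test uses the fact that a Cholesky factorization A = L*L^T exists exactly when A is SPD:

```python
import numpy as np

# A small illustrative matrix (values chosen arbitrarily for the demo).
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

print(np.array_equal(A.T, A))      # symmetric: A^T equals A
print(np.trace(A))                 # trace = 2 + 3 = 5.0

# SPD check: a symmetric matrix is positive definite iff its
# Cholesky factorization exists (np.linalg.cholesky raises otherwise).
L = np.linalg.cholesky(A)          # succeeds, so A is SPD
print(np.allclose(L @ L.T, A))     # and L L^T reproduces A
```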

Orthogonality and Norms


Two vectors are orthogonal if they are perpendicular. To determine if two vectors are orthogonal, take the dot product: if it is zero, they are orthogonal.
All elements of an orthogonal basis are mutually orthogonal (a basis in general need not be orthogonal).
Norms: in general, a norm is a function that assigns a strictly positive length or size to a vector or other object (and must be zero for the zero vector).
The norm of a scalar is its absolute value.
For vectors and matrices, we have the p-norm and special cases (p = 2 for a vector is the Euclidean norm; p = 1 is the taxicab norm; the entrywise p = 2 norm of a matrix is called the Frobenius norm).
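These special-case norms can be sketched with NumPy (the example values are arbitrary):

```python
import numpy as np

v = np.array([3.0, -4.0])
print(np.linalg.norm(v, 2))        # Euclidean (p = 2) norm: 5.0
print(np.linalg.norm(v, 1))        # taxicab (p = 1) norm: 7.0

M = np.array([[1.0, 2.0],
              [3.0, 4.0]])
print(np.linalg.norm(M, 'fro'))    # Frobenius norm: sqrt(1 + 4 + 9 + 16)
```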

Eigenvalues and Eigenvectors


Take a scalar c and multiply it by the identity matrix to get a matrix Z (Z = c*I).
Subtract that from any matrix A, and take the determinant.
If the determinant is 0, the matrix A - Z (A - c*I) is singular.
The question becomes: can we find such a scalar to transform any matrix into a singular matrix? Such a scalar is termed an eigenvalue.
We do that by solving the characteristic equation of A, which is det(A - c*I) = 0. This equation is an Nth-degree polynomial and will have N roots (real or complex); hence an NxN matrix will have N eigenvalues.
Typically, eigenvalues are symbolized with a lambda.
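The characteristic-equation procedure can be checked against a library eigenvalue solver. In this made-up 2x2 example, det(A - c*I) = (4 - c)(3 - c) - 2 = c^2 - 7c + 10, whose roots are c = 5 and c = 2:

```python
import numpy as np

# Arbitrary 2x2 example whose characteristic polynomial
# c^2 - 7c + 10 factors as (c - 5)(c - 2).
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
eigenvalues = np.linalg.eigvals(A)
print(sorted(eigenvalues.real))    # [2.0, 5.0]
```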

Eigenvalues and Eigenvectors


Each eigenvalue has a corresponding linearly independent vector known as an eigenvector. This vector x_i satisfies the equation:

(A - lambda_i*I) x_i = 0

For a symmetric matrix, the eigenvectors are orthogonal and form a basis for E^N (in general, eigenvectors of distinct eigenvalues are guaranteed only to be linearly independent).

Computation of an Eigenvector given an Eigenvalue
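One standard way to compute an eigenvector from a known eigenvalue is to extract the null space of A - lambda*I, for example from the smallest singular vector in an SVD (a minimal sketch with a made-up matrix; not necessarily the method used in the course):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
lam = 5.0                       # a known eigenvalue of A

# An eigenvector spans the null space of (A - lam*I); the right
# singular vector for the smallest singular value spans that space.
_, _, Vt = np.linalg.svd(A - lam * np.eye(2))
x = Vt[-1]                      # unit-length eigenvector

print(np.allclose(A @ x, lam * x))   # True: A x = lam x
```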

Properties of Eigenvalues and Eigenvectors
The absolute value of the determinant (|det A|) is the product of the absolute values of the eigenvalues of matrix A.
lambda = 0 is an eigenvalue of A if and only if A is a singular (noninvertible) matrix.
If A is an n x n triangular matrix (upper or lower triangular) or a diagonal matrix, the eigenvalues of A are the diagonal entries of A.
A and its transpose have the same eigenvalues.
The eigenvalues of a symmetric matrix are all real.
The dominant or principal eigenvector of a matrix is an eigenvector corresponding to the eigenvalue of largest magnitude (for real numbers, largest absolute value) of that matrix.
For a transition matrix, the dominant eigenvalue is always 1.
The smallest eigenvalue of A is the reciprocal of the largest eigenvalue of A^-1, i.e. of the inverse of A.
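Several of these properties can be verified numerically (a minimal sketch with an arbitrary symmetric matrix):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
eig_A = np.linalg.eigvals(A)

# |det A| equals the product of the |eigenvalues|
print(np.isclose(abs(np.linalg.det(A)), np.prod(np.abs(eig_A))))

# A and A^T share the same eigenvalues
print(np.allclose(np.sort(eig_A), np.sort(np.linalg.eigvals(A.T))))

# eigenvalues of A^-1 are the reciprocals of the eigenvalues of A
eig_Ainv = np.linalg.eigvals(np.linalg.inv(A))
print(np.allclose(np.sort(eig_Ainv), np.sort(1.0 / eig_A)))
```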

What do eigenvectors do?


In general, a matrix acts on a vector by changing both
its magnitude and its direction. However, a
matrix may act on certain vectors by changing only
their magnitude, and leaving their direction
unchanged (or possibly reversing it). These vectors
are the eigenvectors of the matrix. A matrix acts
on an eigenvector by multiplying its magnitude by
a factor, which is positive if its direction is
unchanged and negative if its direction is reversed.
This factor is the eigenvalue associated with that
eigenvector.

Diagonalizing a matrix
To diagonalize a matrix, first, find the eigenvalues,
then (if you have N different eigenvalues) the
corresponding eigenvectors.
Construct a matrix with the eigenvectors as its
columns (this is usually termed P)
Then find the inverse of P
If you performed the multiplication: P-1AP, you
would get a diagonal matrix (with the
eigenvectors in the diagonal).
This is very useful when raising a matrix to a
power!
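The diagonalization steps above, and the matrix-power shortcut, can be sketched with NumPy (the example matrix is made up):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
eigvals, P = np.linalg.eig(A)        # columns of P are eigenvectors

D = np.linalg.inv(P) @ A @ P          # P^-1 A P is diagonal
print(np.allclose(D, np.diag(eigvals)))  # eigenvalues on the diagonal

# Raising A to a power via the diagonalization: A^k = P D^k P^-1
k = 5
Ak = P @ np.diag(eigvals**k) @ np.linalg.inv(P)
print(np.allclose(Ak, np.linalg.matrix_power(A, k)))
```

Only the N diagonal entries need to be raised to the k-th power, rather than performing k full matrix multiplications.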

Example:

Iterative vs. Direct Methods


Direct methods reach the solution in a fixed, finite number of operations, e.g. Gaussian elimination.
Preferable for moderate-sized problems, but not always feasible for the larger problems that arise from the numerical solution of PDEs (especially time-dependent ones).
Sensitive to roundoff error.

Iterative methods involve making an initial guess at the solution and refining it. Newton iteration from calculus is one example of this type of process.

Newton (Newton-Raphson) iteration
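A minimal sketch of Newton-Raphson iteration for a single nonlinear equation (the function and starting guess are made up for illustration):

```python
import math

def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson: refine x0 by x <- x - f(x)/f'(x)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: root of f(x) = x^2 - 2, i.e. sqrt(2)
root = newton(lambda x: x*x - 2.0, lambda x: 2.0*x, x0=1.0)
print(abs(root - math.sqrt(2)) < 1e-10)  # True
```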

Jacobi iteration for solving a system of linear equations
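A minimal sketch of Jacobi iteration: split A into its diagonal D and off-diagonal remainder R, then iterate x_{k+1} = D^-1 (b - R x_k). The example system is made up and diagonally dominant, so the iteration converges:

```python
import numpy as np

def jacobi(A, b, x0=None, tol=1e-10, max_iter=500):
    """Jacobi iteration: x_{k+1} = D^-1 (b - R x_k), with A = D + R."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float)
    D = np.diag(A)                    # diagonal entries of A
    R = A - np.diagflat(D)            # off-diagonal part
    for _ in range(max_iter):
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new
        x = x_new
    return x

# Diagonally dominant example (Jacobi converges for such systems)
A = np.array([[4.0, 1.0],
              [2.0, 5.0]])
b = np.array([1.0, 2.0])
x = jacobi(A, b)
print(np.allclose(A @ x, b))  # True
```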
