
Elimination

- Produces an upper triangular system -- which can be solved with back substitution.
- n equations need n pivots for elimination. (They lie on the diagonal of the triangular matrix after elimination.)
- {0 is never a pivot}
When Elimination doesn't work:
Case 1: No solution
Case 2: Infinitely many solutions
Case 3: Fixable by row exchange

Singular linear systems have infinitely many OR zero solutions. (For n equations, there are fewer than n valid pivots.)
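A minimal sketch of both singular outcomes for a 2x2 system, written with numpy (an assumption, since the notes name no tools):

```python
import numpy as np

# Singular 2x2 system: row 2 is twice row 1, so elimination finds only
# one valid pivot (fewer than n = 2) and A is singular.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

print(np.linalg.matrix_rank(A))   # 1, not 2

# b = (1, 2) keeps the system consistent -> infinitely many solutions.
# b = (1, 3) makes it inconsistent      -> no solution.
try:
    np.linalg.solve(A, np.array([1.0, 2.0]))
except np.linalg.LinAlgError as err:
    print("solve() refuses a singular matrix:", err)
```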

Gaussian Elimination : Seeing a 3x3 matrix in operation

2 4 -2 | 2
4 9 -3 | 8
-2 -3 7 | 10
First pivot : 2. First multiplier (to eliminate the 4 in row 2) : l21 = 4 / 2 = 2. So, multiply equation 1 by 2 and subtract it from equation 2 -- this eliminates the 4x term in the second equation. Find the multiplier l31 to eliminate -2x from equation 3.
*Ax = b turns into (an equivalent linear system) Ux = c, where U is an upper triangular matrix. (Solve with back substitution.)
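A short sketch of this procedure in Python/numpy (the notes name no language, so this is an assumption); the multipliers are computed on the fly:

```python
import numpy as np

def gaussian_elimination(A, b):
    """Forward elimination to Ux = c, then back substitution.
    Sketch only: assumes all pivots are non-zero (no row exchanges)."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for j in range(n - 1):              # pivot column j
        for i in range(j + 1, n):       # rows below the pivot
            l = A[i, j] / A[j, j]       # multiplier l_ij
            A[i, j:] -= l * A[j, j:]
            b[i] -= l * b[j]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):      # back substitution, bottom row first
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return A, b, x                      # U, c, solution

A = np.array([[2, 4, -2], [4, 9, -3], [-2, -3, 7]])
b = np.array([2, 8, 10])
U, c, x = gaussian_elimination(A, b)
print(U)    # upper triangular, pivots 2, 1, 4
print(x)    # [-1.  2.  2.]
```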

Elimination Using Matrices


- Viewing each elimination step as a matrix operation on the matrix A: E21A gets rid of the first element of the second row. The next elimination matrix E31 operates on the result E21A to give a new matrix which has 0s in indices (2,1) and (3,1). This process carries on until the resultant matrix is an upper triangular matrix U.
- Also, apply the elimination matrices to an augmented matrix that contains the vector b as its last column. Then Ax = b turns into an easily solvable linear system Ux = c.
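A numpy sketch (same assumption as above) of the elimination matrices acting on the augmented matrix of the 3x3 example:

```python
import numpy as np

A = np.array([[2., 4., -2.], [4., 9., -3.], [-2., -3., 7.]])
b = np.array([[2.], [8.], [10.]])
Ab = np.hstack([A, b])                  # augmented matrix [A | b]

I = np.eye(3)
E21 = I.copy(); E21[1, 0] = -2.0        # subtract 2 * row 1 from row 2  (l21 = 2)
E31 = I.copy(); E31[2, 0] =  1.0        # add row 1 to row 3             (l31 = -1)
E32 = I.copy(); E32[2, 1] = -1.0        # subtract 1 * row 2 from row 3  (l32 = 1)

Uc = E32 @ E31 @ E21 @ Ab               # [U | c] in one chain of multiplications
print(Uc)
```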

KEY : the elimination matrices Eij for all relevant i, j can be combined into a single matrix E -- this can then be used to transform A to U in a single matrix multiplication.
NEATEST way is to combine all the inverses (Eij)-1 INTO A SINGLE MATRIX L = E-1.
(WHY NEAT?)
All multipliers fall into their respective positions when the inverses of the elimination matrices are multiplied in the right order to get E-1.
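A small sketch (continuing the numpy example above) of why L is the neat choice: E mixes the multipliers together, while L = E-1 shows them unchanged in their positions:

```python
import numpy as np

I = np.eye(3)
E21 = I.copy(); E21[1, 0] = -2.0
E31 = I.copy(); E31[2, 0] =  1.0
E32 = I.copy(); E32[2, 1] = -1.0

E = E32 @ E31 @ E21
L = np.linalg.inv(E21) @ np.linalg.inv(E31) @ np.linalg.inv(E32)

print(E)   # entry (3,1) is 3 -- not one of the multipliers; they got mixed
print(L)   # lower triangular with the multipliers 2, -1, 1 sitting in place:
           # [[ 1.  0.  0.]
           #  [ 2.  1.  0.]
           #  [-1.  1.  1.]]
```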
Ways to Matrix Multiply : C = A*B
- Row i of A times column j of B gives element Cij.
- A times column j of B gives column j of C.
- Row i of A times B gives row i of C.
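A quick numpy check of the three views on small illustrative matrices (the matrices here are arbitrary examples, not from the notes):

```python
import numpy as np

A = np.array([[1., 2.], [3., 4.]])
B = np.array([[5., 6.], [7., 8.]])
C = A @ B

i, j = 0, 1
print(A[i, :] @ B[:, j], C[i, j])   # row i of A times column j of B -> element Cij
print(A @ B[:, j], C[:, j])         # A times column j of B          -> column j of C
print(A[i, :] @ B, C[i, :])         # row i of A times B             -> row i of C
```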

Fundamental Law of Matrix Multiplication:


(AB)C = A(BC) -- Associativity

Cost of Matrix Multiplication :

A*B = C, where A, B are nxn square matrices.

Each element Cij of C is a dot product of row i of matrix A and column j of matrix B. So, in order to obtain C, n² dot products need to be computed. For each dot product there are n distinct multiplications required. So, computing C costs n³ multiplications.
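A sketch that makes the count explicit with a textbook triple loop (numpy again assumed; the loop is for counting, not for speed):

```python
import numpy as np

def naive_matmul(A, B):
    """n^2 dot products, each costing n multiplications -> n^3 total."""
    n = A.shape[0]
    C = np.zeros((n, n))
    mults = 0
    for i in range(n):          # row i of A
        for j in range(n):      # column j of B
            for k in range(n):  # one dot product = n multiplications
                C[i, j] += A[i, k] * B[k, j]
                mults += 1
    return C, mults

n = 4
A, B = np.random.rand(n, n), np.random.rand(n, n)
C, mults = naive_matmul(A, B)
print(mults, n**3)              # 64 64
print(np.allclose(C, A @ B))    # True
```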
Inner Product : Row Vector * Column Vector
This gives a single number.

Outer Product : Column Vector * Row Vector
This gives a full matrix.
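A two-line numpy illustration (the vectors are arbitrary examples):

```python
import numpy as np

u = np.array([1., 2., 3.])   # think of u as a row vector
v = np.array([4., 5., 6.])   # think of v as a column vector

print(u @ v)                 # inner product: a single number (32.0)
print(np.outer(v, u))        # outer product: a full 3x3 matrix
```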

Other Ways:
- Column picture : Each column of the product matrix AB is a combination of the columns of the matrix A.
- Row picture : Each row of the product matrix AB is a combination of the rows of the matrix B.
- Sum of outer products : column i of A multiplies row i of B to give a full matrix (for all i). The product AB is the sum of all these matrices produced by the outer products. (See the sketch below.)
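A numpy check of the sum-of-outer-products view (random matrices used purely as an example):

```python
import numpy as np

A = np.random.rand(3, 4)
B = np.random.rand(4, 5)

# Sum over i of (column i of A) times (row i of B) -- each term is a full matrix.
outer_sum = sum(np.outer(A[:, i], B[i, :]) for i in range(A.shape[1]))

print(np.allclose(outer_sum, A @ B))   # True
```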

Block Multiplication ::: Block Elimination ::: Schur Complement
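The notes leave this heading without detail; here is a minimal numpy sketch (the 4x4 matrix and the 2x2 block split are illustrative assumptions) of multiplying by blocks and of the Schur complement that block elimination leaves in the (2,2) position:

```python
import numpy as np

# Split a 4x4 matrix M into 2x2 blocks [[A, B], [C, D]].
M = np.random.rand(4, 4) + 4 * np.eye(4)   # nudged so the blocks are invertible
A, B = M[:2, :2], M[:2, 2:]
C, D = M[2:, :2], M[2:, 2:]

# Block multiplication: the (1,1) block of M @ M is A@A + B@C, and so on.
print(np.allclose((M @ M)[:2, :2], A @ A + B @ C))   # True

# Block elimination: subtract (C A^-1) times the first block row from the second.
# What remains in the (2,2) position is the Schur complement S = D - C A^-1 B.
S = D - C @ np.linalg.inv(A) @ B
print(S)
```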
Inverse Matrices
Definition : If a square matrix has an inverse, then A-1A = AA-1 = I

Testing Invertibility ::
- With an algorithm :: Elimination must produce non-zero pivots.
- With an equation :: Ax = 0 has only one solution, i.e. x = 0.
- With algebra :: The determinant of A is not 0.

If A and B are the same size :: for their product AB :: (AB)-1 = B-1A-1

For a square matrix A of size nxn to have an inverse, elimination must produce n non-zero pivots.
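A quick sketch of the three tests on the 3x3 example matrix from above (numpy and scipy are assumptions; the notes prescribe no tools):

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[2., 4., -2.], [4., 9., -3.], [-2., -3., 7.]])

# Test 1: elimination gives n non-zero pivots (diagonal of U in A = P L U).
P, L, U = lu(A)
print(np.diag(U), np.all(np.abs(np.diag(U)) > 1e-12))

# Test 2: Ax = 0 has only the solution x = 0 (full rank, trivial null space).
print(np.linalg.matrix_rank(A) == A.shape[0])

# Test 3: the determinant is not 0.
print(np.linalg.det(A))
```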

Inverse of a PRODUCT

A product AB has an inverse only if A and B are separately invertible (and the same size).
More IMPORTANT >> A-1 and B-1 come in reverse order:
(AB)-1 = B-1A-1
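A numerical check of the reverse order (example matrices are arbitrary assumptions):

```python
import numpy as np

A = np.array([[2., 1.], [1., 1.]])
B = np.array([[1., 3.], [0., 1.]])

lhs = np.linalg.inv(A @ B)
rhs = np.linalg.inv(B) @ np.linalg.inv(A)                     # reversed order
print(np.allclose(lhs, rhs))                                  # True
print(np.allclose(lhs, np.linalg.inv(A) @ np.linalg.inv(B)))  # False -- wrong order
```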
Inverse of an Elimination Matrix : Eij subtracts lij times row j from row i; its inverse adds it back (the same matrix with the sign of the multiplier flipped).
Calculating A-1 by Gauss-Jordan Elimination

The Gauss-Jordan idea for finding the inverse is to solve the linear system AA-1 = I for each column of A-1.

A multiplies the first column of A-1 (call that x1) to give the first column of I (call that e1). This is our equation Ax1 = e1 = (1, 0, 0). There will be two more equations. Each of the columns x1, x2, x3 of A-1 is multiplied by A to produce a column of I:

To invert a 3 by 3 matrix A, we have to solve three systems of equations: Ax1 = e1, Ax2 = e2 = (0, 1, 0), and Ax3 = e3 = (0, 0, 1). Gauss-Jordan finds A-1 this way.
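A compact numpy sketch of Gauss-Jordan on the augmented matrix [A | I] (the helper name and the test matrix are illustrative assumptions):

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Row-reduce [A | I] all the way to [I | A^-1].
    Sketch only: assumes A is square with non-zero pivots (no row exchanges)."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])   # solve AX = I for all columns at once
    for j in range(n):
        M[j] /= M[j, j]                 # scale the pivot row so the pivot is 1
        for i in range(n):
            if i != j:
                M[i] -= M[i, j] * M[j]  # clear column j above and below the pivot
    return M[:, n:]                     # right half is now A^-1

A = np.array([[2., 0., 0.], [1., 1., 0.], [0., 0., 3.]])
print(gauss_jordan_inverse(A))
print(np.linalg.inv(A))                 # same answer
```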

FOR A SQUARE MATRIX ::

Right Inverse = Left Inverse
If AC = I, then CA = I and C = A-1.

A triangular matrix is invertible iff its diagonal entries are all non-zero.
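A small numpy illustration (the two triangular matrices are made-up examples); for a triangular matrix the determinant is just the product of the diagonal entries:

```python
import numpy as np

T_good = np.array([[1., 2., 3.], [0., 4., 5.], [0., 0., 6.]])   # diagonal 1, 4, 6
T_bad  = np.array([[1., 2., 3.], [0., 0., 5.], [0., 0., 6.]])   # zero on the diagonal

print(np.linalg.det(T_good))   # 1*4*6 = 24 -> invertible
print(np.linalg.det(T_bad))    # 0          -> singular
```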
Recognizing an invertible matrix.

The usual way to confirm if a matrix is invertible is to find the full set of nonzero pivots in
elimination.

But some matrices can easily be determined to be invertible ::

Diagonally dominant matrices are invertible!

What are diagonally dominant matrices?
- In every row i, the diagonal entry |aii| is larger than the sum of the absolute values of the other entries in row i. (A numeric check follows below.)
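A short numpy sketch of the dominance check on an example matrix (both the helper name and the matrix are assumptions):

```python
import numpy as np

def is_diagonally_dominant(A):
    """Each |a_ii| strictly exceeds the sum of the other |a_ij| in row i."""
    diag = np.abs(np.diag(A))
    off = np.abs(A).sum(axis=1) - diag
    return bool(np.all(diag > off))

A = np.array([[4., 1., 1.],
              [1., 5., 2.],
              [0., 2., 6.]])

print(is_diagonally_dominant(A))   # True
print(np.linalg.det(A))            # 100.0, non-zero, so A is indeed invertible
```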
Why are diagonally dominant matrices always invertible?
