
MATRICES

SOME DEFINITIONS

Matrix: A rectangular array of numbers (named with capital letters) called entries, with the size of the matrix described by the number of rows (horizontal) and columns (vertical); for example, a 3 by 4 matrix (3 x 4) has 3 rows and 4 columns:
[ 3  -2  0  6 ]
[-1   9  5  0 ]  is a 3 by 4 matrix
[ 0  -3  6  3 ]
Square matrix: Has the same number of rows and columns
Diagonal matrix: A square matrix with all entries equal to zero except the entries on the main diagonal (the diagonal from upper left to lower right); for example:
[ 3  0  0 ]
[ 0 -2  0 ]  is a diagonal matrix
[ 0  0  1 ]
Identity matrix (denoted by I): A square matrix with entries that are all zeros except the entries on the main diagonal, which must all equal the number one
Triangular matrix: A square matrix with all entries below the main diagonal equal to zero (upper triangular), or with all entries above the main diagonal equal to zero (lower triangular)
Equal matrices: Are the same size and have equal entries
Zero matrix: Every entry is the number zero
Scalar: A magnitude or a multiple
Transpose: The transpose of A with dimensions m x n is the matrix A^t of dimensions n x m whose columns are the rows of A in the same order; that is, row one becomes column one, row two becomes column two, etc.
Row equivalent matrices: Can be produced through a sequence of row operations, such as:
Row interchange: Interchanging any 2 rows
Row scaling: Multiplying a row by any nonzero number
Row addition: Replacing a row with the sum of itself and any other row or multiple of that other row
Column equivalent matrices: Can be produced through a sequence of column operations, such as:
Column interchange: Interchanging any 2 columns
Column scaling: Multiplying a column by any nonzero number
Column addition: Replacing a column with the sum of itself and any other column or multiple of that other column
Elementary matrices: Square matrices that can be obtained from an identity matrix, I, of the same dimensions through a single row operation
Orthogonal matrix: A square, invertible matrix such that A^t = A^-1; that is, A^t A = A A^t = I
Normal matrix: A square matrix that satisfies A^t A = A A^t; that is, it commutes with its transpose
The trace of a square matrix A is the sum of all of the entries on the main diagonal, and is denoted as tr(A)
The rank of matrix A, denoted rank(A), is the common dimension of the row space and column space of matrix A
The nullity of matrix A, denoted nullity(A), is the dimension of the nullspace of A

MATRIX OPERATIONS

Addition: If matrices A and B are the same size, calculate A + B by adding the entries that are in the same positions in both matrices
Subtraction: If matrices A and B are the same size, calculate A - B by subtracting the entries of B from the entries of A that are in the same positions
Multiplying by a scalar: The product kA, where k is a scalar, is obtained by multiplying every entry in matrix A by k
Multiplying matrices: If the number of columns in A equals the number of rows in B, calculate the product AB by multiplying the entries in row i of A by the entries in column j of B, adding these products, and placing the resulting sum in the ij position of the final matrix; the resulting product matrix has the same number of rows as matrix A and the same number of columns as matrix B
Multiplicative inverses: If A and B are square matrices and AB = BA = I (remember, I is the identity matrix), then A and B are inverses; the inverse of a matrix A may be denoted A^-1; therefore, B = A^-1 and A = B^-1; to find the inverse of an invertible matrix A:
First, use a sequence of row operations to change A to I, the identity matrix; then,
Use these exact same row operations on I; this will result in the inverse matrix A^-1 of matrix A

COMPLEX MATRICES

Entries are all complex numbers, a + bi
The conjugate of a complex matrix: Denoted Ā; its entries are the conjugates of the entries of the complex matrix A; remember that the conjugate of a + bi is a - bi, and conversely
Conjugate transpose: A^H = (Ā)^t; notice that A^H means matrix A was both transposed and conjugated
Hermitian complex matrix: A is Hermitian if A^H = A
Skew-Hermitian complex matrix: A is skew-Hermitian if A^H = -A
Unitary complex matrix: If A is a complex matrix and A^H = A^-1, then A is unitary
Normal complex matrix: A complex matrix is normal if A^H A = A A^H
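For a concrete check of the operations and complex-matrix definitions above, here is a minimal Python sketch (assuming NumPy is available; the matrices are arbitrary examples, not from the card):

import numpy as np

A = np.array([[3, -2, 0], [-1, 9, 5], [0, -3, 6]])   # example 3 x 3 matrix
B = np.array([[1, 0, 2], [4, -1, 3], [2, 2, -5]])

print(A + B)          # addition: entries in the same positions are added
print(A - B)          # subtraction: entries of B subtracted from entries of A
print(2 * A)          # scalar multiple: every entry of A multiplied by k = 2
print(A @ B)          # matrix product: row i of A combined with column j of B
print(A.T)            # transpose: rows of A become the columns of A.T

# Complex matrices: conjugate transpose and a Hermitian check
C = np.array([[2, 1 + 1j], [1 - 1j, 3]])
C_H = C.conj().T                     # A^H = conjugate transpose
print(np.allclose(C_H, C))           # True, so C is Hermitian (C^H = C)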
PROPERTIES AND THEOREMS OF MATRICES

When the sizes of the matrices are correct, allowing the indicated operations to be performed, the following properties are true.
Commutative:
A + B = B + A,
but AB = BA is FALSE in general
Associative:
A + (B + C) = (A + B) + C
A(BC) = (AB)C
Symmetric:
If A^t = A, then matrix A is symmetric
If A^t = -A, then matrix A is skew-symmetric
Matrix Distribution:
A(B + C) = AB + AC
A(B - C) = AB - AC
Scalar Distribution:
k(A + B) = kA + kB
k(A - B) = kA - kB
(k + l)A = kA + lA
(k - l)A = kA - lA
Scalar Products:
k(lA) = (kl)A
k(AB) = (kA)B = A(kB)
Negative of a Matrix: (-1)A = -A
Addition of a Zero Matrix: A + 0 = 0 + A = A
Addition of Opposites: A + (-A) = A - A = 0
Multiplication by a Zero Matrix: A(0) = (0)A = 0
- If AB = AC, then B does NOT necessarily equal C
- If AB = 0, then A and/or B do NOT necessarily equal zero
- Multiplicative Inverses: A(A^-1) = A^-1(A) = I
- Product of inverses: If A and B are invertible (if they have inverses), then AB is invertible and A^-1 B^-1 = (BA)^-1; notice that the order of the matrices must be reversed
Exponents: If A is a square matrix and k, m and n are nonzero integers, then
- A^0 = I
- A^n = A(A)(A)...(A), n times
- A^m A^n = A^(m+n); (A^m)^n = A^(mn)
- (kA)^-1 = (1/k)A^-1 if A is invertible
Transpose:
- (A^t)^t = A
- (A + B)^t = A^t + B^t
- (kA)^t = kA^t
- (AB)^t = B^t A^t; notice the order of the matrices is reversed
Trace: If A and B are square matrices of the same size, then
- tr(A + B) = tr(A) + tr(B)
- tr(kA) = k tr(A)
- tr(A^t) = tr(A)
- tr(AB) = tr(BA)
Square matrices: If A is an m x m square matrix and invertible, then
- AX = 0 has only the trivial solution for X
- A is row equivalent to I of the same dimensions m x m
- AX = B is consistent for every m x 1 matrix B
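A few of these identities can be spot-checked numerically; a minimal sketch with arbitrary example matrices, assuming NumPy:

import numpy as np

A = np.array([[2.0, 1.0], [0.0, 3.0]])
B = np.array([[1.0, -1.0], [4.0, 2.0]])
k = 5.0

print(np.allclose((A @ B).T, B.T @ A.T))                              # (AB)^t = B^t A^t
print(np.allclose(np.trace(A @ B), np.trace(B @ A)))                  # tr(AB) = tr(BA)
print(np.allclose(np.linalg.inv(k * A), (1 / k) * np.linalg.inv(A)))  # (kA)^-1 = (1/k)A^-1
print(np.allclose(np.linalg.inv(A @ B), np.linalg.inv(B) @ np.linalg.inv(A)))  # (AB)^-1 = B^-1 A^-1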
LINEAR EQUATIONS
Definition: Equations of the form a_1 x_1 + a_2 x_2 + a_3 x_3 + ... + a_n x_n = b, where a_1, a_2, a_3, ..., a_n and b are real-number constants and the variables x_1, x_2, x_3, ..., x_n are all to the first degree
Solutions: Numbers s_1, s_2, s_3, ..., s_n that make the linear equation true when substituted for the variables x_1, x_2, x_3, ..., x_n
Linear systems: Finite sets of linear equations with matching variables that have the same solution values for all equations in the system
Inconsistent linear systems have no solutions
Consistent linear systems have at least one solution
Coefficient matrix for a linear system: The coefficients of the variables after the variables have all been arranged in the same order in all equations
Augmented matrix for a linear system: The matrix of the linear system together with the constants for each equation; a mental record must be kept of the positions of the variables, the + signs and the = signs; example: the linear system
a_11 x_1 + a_12 x_2 + a_13 x_3 + ... + a_1n x_n = b_1
a_21 x_1 + a_22 x_2 + a_23 x_3 + ... + a_2n x_n = b_2
a_31 x_1 + a_32 x_2 + a_33 x_3 + ... + a_3n x_n = b_3
...
a_m1 x_1 + a_m2 x_2 + a_m3 x_3 + ... + a_mn x_n = b_m
can be written as the coefficient matrix
[ a_11  a_12  a_13  ...  a_1n ]
[ a_21  a_22  a_23  ...  a_2n ]
[ a_31  a_32  a_33  ...  a_3n ]
[  ...   ...   ...  ...   ... ]
[ a_m1  a_m2  a_m3  ...  a_mn ]
or can be written as the augmented matrix
[ a_11  a_12  a_13  ...  a_1n | b_1 ]
[ a_21  a_22  a_23  ...  a_2n | b_2 ]
[ a_31  a_32  a_33  ...  a_3n | b_3 ]
[  ...   ...   ...  ...   ... | ... ]
[ a_m1  a_m2  a_m3  ...  a_mn | b_m ]
where the subscripts indicate the equation and the position in the equation for each coefficient or constant; that is, a_11 is equation 1, variable 1, while a_23 is equation 2, variable 3; these are often referred to as either a_mn or a_ij, where the m and i indicate the equation or the row in the matrix and the n and j indicate the variable position or the column in the matrix
Row-echelon: A form of a matrix in which:
The first nonzero entry, if there is one, in a row is the number 1, called the leading 1;
Any rows that have all entries equal to zero are moved to the bottom of the matrix;
In any two consecutive rows that are not all zeros, the lower of the two rows has its leading 1 farther to the right than the leading 1 in the higher row
Reduced row-echelon: A form of a matrix with the same characteristics as row-echelon form, but which also has zeros everywhere in each column that contains a leading 1, except in the position of the leading 1
SOLVING LINEAR SYSTEMS USING MATRICES

Gauss-Jordan elimination
- Row operations are used to reduce the augmented matrix to a reduced row-echelon form; then,
- Each equation in the corresponding system of equations is solved for the lead-1 variable and arbitrary values are assigned for the remaining variables, yielding a general solution; or
- If the reduced row-echelon matrix has zeros everywhere except the main diagonal of the matrix (disregarding the constants on the far right), which contains the lead 1's, then the variables with the lead 1's as coefficients have the numerical values indicated at the far right ends of their rows
Gaussian elimination
- Row operations are used to reduce the augmented matrix to a row-echelon form; then,
- Back-substitution is used to solve the resulting corresponding system of equations

MATRIX INVERSION
When a system of linear equations has the same number of variables and equations, resulting in a square coefficient matrix of size m x m, if the coefficient matrix is invertible, then
- A is the m x m coefficient matrix of the system of equations
- B is an m x 1 matrix of the system constants
- X is the solution matrix; and,
- AX = B has exactly one solution, which is X = A^-1 B
When solving a sequence of n systems that have equal square coefficient matrices, the solution matrices X_1, X_2, ..., X_n can be found
- with X_1 = A^-1 B_1; X_2 = A^-1 B_2; ...; X_n = A^-1 B_n, if A is invertible
- by reducing the systems to reduced row-echelon form and applying Gauss-Jordan elimination
Cramer's Rule is used for solving systems of linear equations and is found under the determinants section
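The two approaches can be sketched in code; the gauss_jordan helper below is an illustrative implementation (not from the card), and the 3 x 3 system is an arbitrary example (assuming NumPy):

import numpy as np

def gauss_jordan(aug):
    """Reduce an augmented matrix [A | b] to reduced row-echelon form."""
    m = aug.astype(float).copy()
    rows, cols = m.shape
    r = 0
    for c in range(cols - 1):
        pivot = np.argmax(np.abs(m[r:, c])) + r        # choose a nonzero pivot
        if np.isclose(m[pivot, c], 0):
            continue
        m[[r, pivot]] = m[[pivot, r]]                  # row interchange
        m[r] = m[r] / m[r, c]                          # row scaling -> leading 1
        for i in range(rows):
            if i != r:
                m[i] -= m[i, c] * m[r]                 # row addition to clear the column
        r += 1
        if r == rows:
            break
    return m

A = np.array([[2.0, 1.0, -1.0], [1.0, 3.0, 2.0], [1.0, 0.0, 0.0]])
b = np.array([[8.0], [13.0], [2.0]])

print(gauss_jordan(np.hstack([A, b])))     # last column holds the solution
print(np.linalg.inv(A) @ b)                # X = A^-1 B, same solution
print(np.linalg.solve(A, b))               # library routine for comparison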
DETERMINANTS
Definition: A number value calculated for every square matrix; denoted by det(A) or |A| or D; if a matrix has two proportional rows, then its determinant equals zero
Determinant of order 1: The determinant of a matrix with only one entry, where the determinant is equal to that one entry; that is, if A = [a_11], then the determinant of A, or det(A) or D, is |a_11| = a_11
A determinant of order 2 is the determinant of a 2 by 2 matrix, and it is
D = | a_11  a_12 | = a_11 a_22 - a_12 a_21
    | a_21  a_22 |
A determinant of order 3 is the determinant of a 3 by 3 matrix and may be calculated as
D = | a_11  a_12  a_13 |
    | a_21  a_22  a_23 | = a_11 a_22 a_33 + a_12 a_23 a_31 + a_13 a_21 a_32 - a_13 a_22 a_31 - a_12 a_21 a_33 - a_11 a_23 a_32
    | a_31  a_32  a_33 |
this process can be displayed by writing the determinant with the first 2 columns repeated after the matrix and then using diagonals to find the products, for example:
| a_11  a_12  a_13 | a_11  a_12
| a_21  a_22  a_23 | a_21  a_22
| a_31  a_32  a_33 | a_31  a_32
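As an illustration, the order-2 and order-3 formulas can be coded directly and compared against a library routine; a minimal sketch with arbitrary entries (assuming NumPy):

import numpy as np

def det2(a):
    # a_11*a_22 - a_12*a_21
    return a[0][0] * a[1][1] - a[0][1] * a[1][0]

def det3(a):
    # the order-3 (diagonals) formula above
    return (a[0][0]*a[1][1]*a[2][2] + a[0][1]*a[1][2]*a[2][0] + a[0][2]*a[1][0]*a[2][1]
            - a[0][2]*a[1][1]*a[2][0] - a[0][1]*a[1][0]*a[2][2] - a[0][0]*a[1][2]*a[2][1])

A2 = [[3, 1], [-2, 4]]
A3 = [[3, 1, -4], [-2, 0, 5], [1, -3, 8]]
print(det2(A2), np.linalg.det(np.array(A2)))   # both give 14
print(det3(A3), np.linalg.det(np.array(A3)))   # both give the same value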
Cofactor or Laplace expansion
- Definition: The cofactor of entry a_ij in matrix A is (-1)^(i+j)(M_ij), where M_ij is the determinant of the submatrix that results from omitting the i-th row and the j-th column from the original matrix A; cofactors are denoted as C_ij; for example, the cofactor of the a_12 entry in the matrix
[ 3  1 -4 ]
[-2  0  5 ]
[ 1 -3  8 ]
is (-1)^(1+2)(M_12), where M_12 is the determinant of the submatrix
[-2  5 ]
[ 1  8 ]
and M_12 = -2(8) - 5(1) = -21, so C_12 = -1(-21) = 21
- Cofactor expansion or Laplace expansion: A method for calculating the determinant of a square matrix by adding a_ij C_ij across any full row or full column of the matrix; that is, det(A) = |A| = a_i1 C_i1 + a_i2 C_i2 + ... + a_in C_in when expanding along row i, and det(A) = |A| = a_1j C_1j + a_2j C_2j + ... + a_nj C_nj when expanding along column j
- Example of cofactor expansion: Given the matrix
A = [ a_11  a_12  a_13 ]
    [ a_21  a_22  a_23 ]
    [ a_31  a_32  a_33 ]
the determinant can be calculated using cofactor expansion along any row or column, so
det(A) = a_11 C_11 + a_12 C_12 + a_13 C_13 using the first row, and
det(A) = a_21 C_21 + a_22 C_22 + a_23 C_23 using the second row, and
det(A) = a_31 C_31 + a_32 C_32 + a_33 C_33 using the third row, and
det(A) = a_11 C_11 + a_21 C_21 + a_31 C_31 using the first column, and
det(A) = a_12 C_12 + a_22 C_22 + a_32 C_32 using the second column,
and det(A) = a_13 C_13 + a_23 C_23 + a_33 C_33 using the third column
Cramer's Rule: If there is a system of linear equations that creates an n x n coefficient matrix A with det(A) = |A| ≠ 0, then the system has a unique solution, which is x_1 = |A_1| / |A|; x_2 = |A_2| / |A|; x_3 = |A_3| / |A|; ...; x_n = |A_n| / |A|, where j = 1, 2, 3, ..., n and A_j is the matrix created by replacing all entries in the j-th column of A by the column of equation constants, B = (b_1, b_2, b_3, ..., b_n)
- Applying Cramer's Rule to solving a 2 by 2 system of linear equations using a 2 x 2 coefficient matrix: create the 2 x 2 coefficient matrix A of the system; calculate |A|; replace the entries in column 1 of A with the constants of the system, creating matrix A_1; calculate |A_1|; replace the entries in column 2 of A with the constants of the system, creating matrix A_2; calculate |A_2|; then find the values of the variables by using
x_1 = |A_1| / |A| = (b_1 a_22 - a_12 b_2) / (a_11 a_22 - a_12 a_21)  and  x_2 = |A_2| / |A| = (a_11 b_2 - b_1 a_21) / (a_11 a_22 - a_12 a_21)
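A sketch of Cramer's Rule for a general n x n system (assuming NumPy; the cramer helper and the example system are illustrative only):

import numpy as np

def cramer(A, b):
    """Solve AX = B by Cramer's Rule; requires det(A) != 0."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    d = np.linalg.det(A)
    x = np.empty(len(b))
    for j in range(len(b)):
        Aj = A.copy()
        Aj[:, j] = b                  # replace column j with the constants
        x[j] = np.linalg.det(Aj) / d  # x_j = |A_j| / |A|
    return x

A = [[2, 1, -1], [1, 3, 2], [1, 0, 0]]
b = [8, 13, 2]
print(cramer(A, b))                   # matches the library solution below
print(np.linalg.solve(A, b))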
If A is a square matrix, then
- it is invertible if and only if det(A) = |A| ≠ 0, and in that case AX = 0 has only the zero solution
- det(A) = |A| = det(A^t) = |A^t|
- det(A) = |A| = 0 when A has a row or a column of zeros
- det(A) = |A| = 0 when A has 2 identical rows or columns
- det(A) = |A| = a_11 a_22 a_33 ... a_nn when A is an n x n triangular matrix
- if A is invertible, then det(A^-1) = |A^-1| = 1 / det(A)
If A and B are square matrices with the same dimensions, then det(AB) = |AB| = |A| |B| = det(A) det(B)
If B results from A through row or column operations, then
- det(B) = |B| = -|A| = -det(A) when B resulted from the interchange of 2 rows or 2 columns of A
- det(B) = |B| = k|A| = k det(A) when B resulted from the scalar multiplication of a row or column in A by k
- det(B) = |B| = |A| = det(A) when B resulted from adding a multiple of one row to another row, or a multiple of one column to another column, in A
VECTORS

Definition: An arrow in 2- or 3-dimensional space with 2 characteristics: direction and length or magnitude; or, a list of numbers (also called a linear array) containing coordinates, entries or components; the endpoints of vectors uniquely determine every vector
Initial point: The tail end of the arrow
Terminal point: The tip of the arrow
Scalars: The numerical quantities of vectors
Notation: Vectors are named with lowercase letters, and if the initial point of vector v is A and the terminal point is B, then v = AB
Vectors are equal if they have the same direction and length (or entries) regardless of position; for example, v = u = w if v = (3, -1, 5), u = (3, -1, 5) and w = (3, -1, 5); they have the same components
The zero vector has a length of zero in any direction and is denoted as 0
The negative of vector v, written -v, is the vector having the same magnitude or length, but the opposite direction of v

OPERATIONS OF VECTORS

Addition: If u is a vector with endpoint (a_1, b_1, c_1) and v is a vector with endpoint (a_2, b_2, c_2), then u + v has the endpoint (a_1 + a_2, b_1 + b_2, c_1 + c_2); this can be seen graphically using the parallelogram law, which places the initial point of v at the terminal point of u; the vector u + v is then the vector that connects the initial point of u to the terminal point of v; the diagonal of this parallelogram is u + v
Subtraction: For vectors u and v, their difference is defined to be u - v = u + (-v); graphically, u - v is the other diagonal of the parallelogram; also, if u = (a_1, b_1) and v = (a_2, b_2), then u - v = (a_1 - a_2, b_1 - b_2); this relationship is also true in 3-space
Scalar multiplication: The product of any nonzero real-number scalar k and nonzero vector v is the product of the magnitude of v and the scalar k, retaining the same direction if k > 0 and changing to the opposite direction if k < 0
- If v has the endpoint (a_1, b_1, c_1), then kv, called the scalar multiple of v, has the endpoint (ka_1, kb_1, kc_1); if either k = 0 or v = 0, then kv = 0
Translation equations
- When the initial point is not at the origin; that is, v = PQ with P = (a_1, b_1) and Q = (a_2, b_2), then v = (a_2 - a_1, b_2 - b_1)
- In translation of axes, if v has a terminal point of (a_1, b_1) and the origin is translated to (x, y), then the translated components (a_2, b_2) are found by using the equations a_2 = a_1 - x and b_2 = b_1 - y; this can also be done in 3-space: if v has the terminal point (a_1, b_1, c_1) and the origin is translated to (x, y, z), then the translated components of v are (a_1 - x, b_1 - y, c_1 - z)

NORM OF A VECTOR

The norm or length of a vector, ||u||, in n-space is the square root (always nonnegative) of u · u; that is, if u = (a_1, a_2, a_3, ..., a_n), then ||u|| = sqrt(u · u) = sqrt(a_1^2 + a_2^2 + ... + a_n^2)
- ||u|| ≥ 0, and ||u|| = 0 if and only if u = 0
- If ||u|| = 1, or if u · u = 1, then u is a unit vector
A linear combination of vectors v_1, v_2, v_3, ..., v_n is the vector w when w = k_1 v_1 + k_2 v_2 + k_3 v_3 + ... + k_n v_n and k_1, k_2, k_3, ..., k_n are scalars
A linear space is the space spanned by a set of vectors S = (v_1, v_2, v_3, ..., v_n); denoted by lin(S)
A linearly independent nonempty set of vectors, S = {v_1, v_2, v_3, ..., v_n}, has only one solution to the vector equation k_1 v_1 + k_2 v_2 + k_3 v_3 + ... + k_n v_n = 0, and that is all scalars equal to zero; if there are other solutions to the equation, then S is a linearly dependent set
A basis for vector space V is a set of vectors S in V if S is linearly independent and S spans V
The matrix M = A - tI_n, where I_n is the n-square identity matrix, t is an indeterminate, and A = [a_ij] is an n-square matrix, has a negative matrix tI_n - A and a determinant Δ(t) = det(tI_n - A) = (-1)^n det(A - tI_n), which is the characteristic polynomial of A
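A minimal numeric sketch of these vector operations (assuming NumPy; the vectors are arbitrary examples):

import numpy as np

u = np.array([3.0, -1.0, 5.0])
v = np.array([1.0, 2.0, 2.0])
k = -2.0

print(u + v)                      # addition: componentwise sums
print(u - v)                      # subtraction: u + (-v)
print(k * v)                      # scalar multiple: direction reverses when k < 0
print(np.linalg.norm(u))          # norm ||u|| = sqrt(u . u)
print(v / np.linalg.norm(v))      # unit vector in the direction of v
print(np.linalg.norm(v / np.linalg.norm(v)))   # 1.0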
Dot or Inner Product
- If u and v are vectors in n-space and u = (a_1, a_2, a_3, ..., a_n) and v = (b_1, b_2, b_3, ..., b_n), then the dot or inner product is u · v = (u, v) = a_1 b_1 + a_2 b_2 + a_3 b_3 + ... + a_n b_n
- Orthogonal or perpendicular vectors: Vectors whose dot or inner product is zero; that is, u · v = 0
Distance between vectors
- If u = (a_1, a_2, a_3, ..., a_n) and v = (b_1, b_2, b_3, ..., b_n), then the distance between the vectors is d(u, v) = ||u - v|| = sqrt((a_1 - b_1)^2 + (a_2 - b_2)^2 + (a_3 - b_3)^2 + ... + (a_n - b_n)^2)
Projection
- The projection of a vector u onto a nonzero vector v is proj(u, v) = ((u · v) / ||v||^2) v
Angle between vectors
- If u and v are nonzero vectors, then the angle θ between them is found using the formula cos θ = (u · v) / (||u|| ||v||), where -1 ≤ (u · v) / (||u|| ||v||) ≤ 1
- The angle θ between vectors u and v is
1. Acute if and only if u · v > 0
2. Obtuse if and only if u · v < 0
3. Right if and only if u · v = 0
The Euclidean inner product or dot product for 2- or 3-space vectors u and v with the included angle θ is defined by u · v = ||u|| ||v|| cos θ if u ≠ 0 and v ≠ 0, and u · v = 0 if u = 0 or v = 0
The cross product for 3-space vectors u = (u_1, u_2, u_3) and v = (v_1, v_2, v_3) is u × v = (u_2 v_3 - u_3 v_2, u_3 v_1 - u_1 v_3, u_1 v_2 - u_2 v_1)
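Illustrating the dot product, distance, projection, angle and cross product with arbitrary vectors; a minimal sketch assuming NumPy:

import numpy as np

u = np.array([1.0, 2.0, -2.0])
v = np.array([3.0, 0.0, 4.0])

dot = np.dot(u, v)                                   # u . v
dist = np.linalg.norm(u - v)                         # d(u, v) = ||u - v||
proj = (dot / np.dot(v, v)) * v                      # projection of u onto v
cos_theta = dot / (np.linalg.norm(u) * np.linalg.norm(v))
theta = np.arccos(cos_theta)                         # angle between u and v
cross = np.cross(u, v)                               # u x v, orthogonal to both

print(dot, dist, proj, np.degrees(theta))
print(np.dot(u, cross), np.dot(v, cross))            # both 0: orthogonality check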
The Gram-Schmidt Orthogonalization Process for constructing an orthogonal basis (w_1, w_2, w_3, ..., w_n) of an inner product space V, given (v_1, v_2, v_3, ..., v_n) as a basis of V, is (taking w_1 = v_1) w_k = v_k - c_k1 w_1 - c_k2 w_2 - ... - c_k,k-1 w_(k-1), where k = 2, 3, 4, ..., n and c_kj = (v_k, w_j) / (w_j, w_j) is the component of v_k along w_j
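The process translates directly into code; a sketch assuming NumPy and the ordinary dot product as the inner product (the gram_schmidt helper and example basis are illustrative):

import numpy as np

def gram_schmidt(vectors):
    """Return an orthogonal basis (w_1, ..., w_n) from a basis (v_1, ..., v_n)."""
    ws = []
    for v in vectors:
        w = np.array(v, dtype=float)
        for wj in ws:
            c = np.dot(v, wj) / np.dot(wj, wj)   # c_kj = (v_k, w_j) / (w_j, w_j)
            w = w - c * wj                       # subtract the component along w_j
        ws.append(w)
    return ws

basis = [np.array([1.0, 1.0, 0.0]), np.array([1.0, 0.0, 1.0]), np.array([0.0, 1.0, 1.0])]
w1, w2, w3 = gram_schmidt(basis)
print(np.dot(w1, w2), np.dot(w1, w3), np.dot(w2, w3))   # all (near) zero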
Eigenvector: If A is any n-square matrix, then a real-number scalar λ is called an eigenvalue of A, and x is an eigenvector of A corresponding to λ, if Ax = λx, where x is a nonzero vector in R^n
- The following are equivalent:
1. λ is an eigenvalue of A
2. The system of equations (λI - A)x = 0 has nontrivial solutions
3. There is a nonzero vector x in R^n such that Ax = λx
4. λ is a solution of the characteristic equation det(λI - A) = 0
- Computing eigenvalues and eigenvectors for an n-square matrix A:
1. Find the characteristic polynomial Δ(t) of A
2. Find the roots of Δ(t) to obtain the eigenvalues of A
3. Repeat these 2 steps for each eigenvalue λ of A:
a. Form the matrix M = A - λI
b. Find a basis for the solution space of the homogeneous system MX = 0
4. For the set S = (v_1, v_2, v_3, ..., v_m) of all eigenvectors obtained in step 3 above:
a. If m ≠ n, then A is not diagonalizable
b. If m = n, then A is diagonalizable, and P is the matrix whose columns are the eigenvectors v_1, v_2, v_3, ..., v_n, with D = P^-1 A P = diag(λ_1, λ_2, λ_3, ..., λ_n), where λ_j is the eigenvalue corresponding to the eigenvector v_j
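A numeric sketch of the eigenvalue definition and the diagonalization D = P^-1 A P (assuming NumPy; the 2 x 2 matrix is an arbitrary example):

import numpy as np

A = np.array([[4.0, 1.0], [2.0, 3.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)       # columns of `eigenvectors` are the x's
for lam, x in zip(eigenvalues, eigenvectors.T):
    print(np.allclose(A @ x, lam * x))             # Ax = (lambda)x for each pair

P = eigenvectors                                   # P's columns are eigenvectors
D = np.linalg.inv(P) @ A @ P                       # D = P^-1 A P
print(np.round(D, 10))                             # diagonal matrix of eigenvalues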
Bilinear forms: A bilinear form on V, a vector space of finite dimension over field K, is a mapping f: V x V -> K such that for all a, b in K and all u_i, v_i in V
- f(au_1 + bu_2, v) = af(u_1, v) + bf(u_2, v); that is, f is linear in the first variable, and
- f(u, av_1 + bv_2) = af(u, v_1) + bf(u, v_2); that is, f is linear in the second variable
Hermitian forms: A Hermitian form on V, a vector space of finite dimension over the complex field C, is a mapping f: V x V -> C such that for all a, b in C and all u_i, v_i in V
- f(au_1 + bu_2, v) = af(u_1, v) + bf(u_2, v); that is, f is linear in the first variable
- f(u, v) = the complex conjugate of f(v, u); hence f(v, v) is real for every v in V
- f(u, av_1 + bv_2) = ā f(u, v_1) + b̄ f(u, v_2), where ā and b̄ are the conjugates of a and b; that is, f is conjugate-linear in the second variable
PROPERTIES AND THEOREMS OF VECTORS
If u, v, and w are nonzero vectors in Euclidean n-space and k and s are nonzero scalars, then
(u + v) + w = u + (v + w)
u + 0 = u
u + (-u) = 0
u + v = v + u
k(u + v) = ku + kv
(k + s)u = ku + su
(ks)u = k(su)
1u = u
(u + v) · w = u · w + v · w
u · v = v · u
(ku) · v = k(u · v)
u · u ≥ 0, and u · u = 0 if and only if u = 0
||u|| ≥ 0, and ||u|| = 0 if and only if u = 0
||ku|| = |k| ||u||
|u · v| ≤ ||u|| ||v||
||u + v|| ≤ ||u|| + ||v||
(1 / ||v||)v = v / ||v|| is a unit vector in the direction of v
u · (u × v) = 0, since u × v is orthogonal to u
v · (u × v) = 0, since u × v is orthogonal to v
||u × v||^2 = ||u||^2 ||v||^2 - (u · v)^2; Lagrange's identity
||u + v||^2 = ||u||^2 + ||v||^2 if u and v are orthogonal vectors in an inner product space; generalized Pythagorean Theorem
(u, v)^2 ≤ (u, u)(v, v) if u and v are vectors in a real inner product space; Cauchy-Schwarz Inequality
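A few of these vector identities can be verified numerically; a minimal sketch with arbitrary vectors, assuming NumPy:

import numpy as np

u = np.array([1.0, -2.0, 3.0])
v = np.array([4.0, 0.0, -1.0])

lhs = np.dot(np.cross(u, v), np.cross(u, v))                 # ||u x v||^2
rhs = np.dot(u, u) * np.dot(v, v) - np.dot(u, v) ** 2        # Lagrange's identity
print(np.isclose(lhs, rhs))

print(np.dot(u, v) ** 2 <= np.dot(u, u) * np.dot(v, v))      # Cauchy-Schwarz
print(np.linalg.norm(u + v) <= np.linalg.norm(u) + np.linalg.norm(v))  # triangle inequality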
FIND THE SOLUTION TO A SYSTEM OF EQUATIONS
USING ONE OF THESE METHODS:
Graph Method - Graph the equations and locate the point of intersection, if there is one. The point can be checked by substituting the x-value and the y-value into all of the equations. If it is the correct point, it should make all of the equations true. This method is weak, since an approximation of the coordinates of the point is often all that is possible.
Substitution Method for solving consistent systems of linear equations includes the following steps:
1. Solve one of the equations for one of the variables. It is easiest to solve for a variable which has a coefficient of one (if such a variable coefficient is in the system) because fractions can be avoided until the very end.
2. Substitute the resulting expression for the variable into the other equation, not the same equation which was just used.
3. Solve the resulting equation for the remaining variable. This should result in a numerical value for the variable, either x or y, if the system was originally only two equations.
4. Substitute this numerical value back into one of the original equations and solve for the other variable.
5. The solution is the point containing these x- and y-values, (x, y).
6. Check the solution in all of the original equations.
Elimination Method or the Add/Subtract Method or the Linear Combination Method - eliminate either the x or the y variable through either addition or subtraction of the two equations. These are the steps for consistent systems of two linear equations (a small code sketch follows the steps):
1. Write both equations in the same order, usually ax + by = c, where a, b, and c are real numbers.
2. Observe the coefficients of the x and y variables in both equations to determine:
a. If the x coefficients or the y coefficients are the same, subtract the equations.
b. If they are additive inverses (opposite signs, such as 3 and -3), add the equations.
c. If the coefficients of the x variables are not the same and are not additive inverses, and the same is true of the coefficients of the y variables, then multiply the equations to make one of these conditions true so the equations can be either added or subtracted to eliminate one of the variables.
3. The above steps should result in one equation with only one variable, either x or y, but not both. If the resulting equation has both x and y, an error was made in following the steps; correct the error.
4. Solve the resulting equation for the one variable (x or y).
5. Substitute this numerical value back into either of the original equations and solve for the one remaining variable.
6. The solution is the point (x, y) with the resulting x- and y-values.
7. Check the solution in all of the original equations.
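As a small sketch of the elimination idea for two equations in the form ax + by = c (plain Python; the coefficients are an arbitrary consistent example):

# System: 2x + 3y = 12  and  4x - 3y = 6   (y-coefficients are additive inverses)
a1, b1, c1 = 2, 3, 12
a2, b2, c2 = 4, -3, 6

# Step 2b: the y-coefficients are opposites, so ADD the equations to eliminate y
x = (c1 + c2) / (a1 + a2)          # (2 + 4)x = 12 + 6  ->  x = 3
# Step 5: substitute x back into the first equation and solve for y
y = (c1 - a1 * x) / b1             # 3y = 12 - 2*3      ->  y = 2

print((x, y))                      # (3.0, 2.0)
print(a1 * x + b1 * y == c1, a2 * x + b2 * y == c2)   # step 7: check both equations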
Matrix method - involves substantial matrix theory for a system of more than two equations and will not be covered here. Systems of two linear equations can be solved using Cramer's Rule, which is based on determinants.
1. For the system of equations a_1x + b_1y = c_1 and a_2x + b_2y = c_2, where all of the a, b, and c values are real numbers, the point of intersection is (x, y), where x = D_x / D and y = D_y / D.
2. The determinant D in these equations is a numerical value found in this manner: D = a_1 b_2 - a_2 b_1.
3. The determinant D_x in these equations is a numerical value found in this manner: D_x = c_1 b_2 - c_2 b_1.
4. The determinant D_y in these equations is a numerical value found in this manner: D_y = a_1 c_2 - a_2 c_1.
5. Substitute the numerical values found from applying the formulas in steps 2 through 4 into the formulas for x and y in step 1 above.
