
Math 125 Definitions

1.3
Linear equality an equation in which every variable appears to the first power; it contains an equal sign
Linear inequality the same, except with an inequality sign in place of the equal sign
Half plane the region defined by a linear inequality, i.e. the set of points (x, y) for which the inequality is true
System of linear inequalities collection of more than one linear inequality
Feasibility region the region determined simultaneously by all inequalities in a system of linear inequalities; find it by graphing each inequality
1.4
Linear function function where every variable in the expression defining the function is
raised to the first power
Lines of constancy assume that we have a linear function f(x,y)=ax+by, lines of
constancy are lines along which the value f(x,y) is constant, it is the set of values (x,y) for
which f(x,y)=c for some constant c
Linear program problem of maximizing or minimizing a linear function over a set of
constraints
Polygon a region bounded by straight lines
Theorem: Let z=ax+by be a linear function, and let P be a polygon in the plane. Then the
maximum and minimum values of z are attained at corner points of P
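The theorem reduces an optimization over infinitely many points to a finite check. A minimal sketch in Python (the objective z = 3x + 2y and the corner points are made-up numbers, not from the notes):

```python
# Corner-point theorem in practice: evaluate z at each corner and compare.
corners = [(0, 0), (4, 0), (3, 3), (0, 5)]  # hypothetical polygon corners

def z(x, y):
    return 3 * x + 2 * y  # hypothetical linear objective

# By the theorem, the max and min of z over the whole polygon
# are attained at corner points, so checking corners suffices.
values = {p: z(*p) for p in corners}
best = max(values, key=values.get)
print(best, values[best])  # (3, 3) 15
```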
2.1
Row echelon form
First non zero entry in each row is a leading 1
Each entry below a leading 1 is 0
As you move down the rows the leading 1s move to the right
The leading 1 can move more than one place to the right, but it must move right
any row of all 0's is always at the bottom (or just above a row of all 0's)
Gaussian elimination using elementary row operations to put a matrix into row echelon
form
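The elementary row operations above can be sketched directly in code. This is an illustrative NumPy version (the example matrix and the zero tolerance are arbitrary choices), not a numerically robust routine:

```python
import numpy as np

def row_echelon(A):
    """Reduce A to row echelon form with leading 1s using the three
    elementary row operations: swap, scale, and add a multiple of a row."""
    A = A.astype(float).copy()
    rows, cols = A.shape
    r = 0
    for c in range(cols):
        # find a row at or below r with a nonzero entry in column c
        pivot = next((i for i in range(r, rows) if abs(A[i, c]) > 1e-12), None)
        if pivot is None:
            continue  # leading 1s move right past this column
        A[[r, pivot]] = A[[pivot, r]]     # swap rows
        A[r] = A[r] / A[r, c]             # scale so the leading entry is 1
        for i in range(r + 1, rows):
            A[i] = A[i] - A[i, c] * A[r]  # zero out entries below the leading 1
        r += 1
        if r == rows:
            break
    return A

A = np.array([[2, 4, -2], [4, 9, -3], [-2, -3, 7]])
print(row_echelon(A))
```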
2.2
Reduced row echelon form a matrix is in reduced row echelon form if it is in row echelon form and has the additional property that each entry above a leading 1 is 0
Back addition this is a process for eliminating the matrix entries above leading 1's
Multisystem collection of linear systems, all with the same coefficient matrix, but with
different right-hand sides

2.3
Consistent system a system of linear equations that has one or more solutions
Inconsistent system linear system that does not have any solutions
Row rank number of non zero rows the matrix has after it has been put in row echelon
form
Consistency Theorem a system is consistent if and only if the row rank of the coefficient matrix equals the row rank of the augmented matrix
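The Consistency Theorem can be checked numerically with NumPy's rank function (the example system is a made-up inconsistent one):

```python
import numpy as np

# Hypothetical system:
#   x +  y = 2
#  2x + 2y = 5   (contradicts the first equation, so no solution exists)
A = np.array([[1, 1], [2, 2]])
b = np.array([[2], [5]])
aug = np.hstack([A, b])  # augmented matrix [A | b]

# matrix_rank equals the number of nonzero rows after row reduction,
# i.e. the row rank used in the Consistency Theorem.
consistent = np.linalg.matrix_rank(A) == np.linalg.matrix_rank(aug)
print(consistent)  # False
```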
Parametric solutions when a linear system has an infinite number of solutions, some variables can be expressed in terms of the others. These other variables are viewed as free variables, independent variables, or parameters; we call the solution set a parametric solution
Homogeneous system a system in which all constants (right-hand sides) are 0; in the augmented matrix the final column is all 0s
Trivial solution all variables = 0
Homogeneous System Theorem
If the row rank of the coefficient matrix equals the number of variables, then the system has only the trivial solution
If the row rank of the coefficient matrix is less than the number of variables, then there are infinitely many solutions
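The theorem is easy to test numerically (the coefficient matrix below is an arbitrary example):

```python
import numpy as np

# A is 2x3, so its row rank is at most 2, less than the 3 variables.
A = np.array([[1, 2, 3],
              [0, 1, 4]])
rank = np.linalg.matrix_rank(A)
n_vars = A.shape[1]
print(rank < n_vars)  # True -> Ax = 0 has infinitely many solutions
```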
2.5
Pivoting the procedure of choosing an entry in the matrix, turning it into a 1, and eliminating the entries above and below it
Gauss-Jordan elimination the process of using pivoting to put a matrix into reduced row echelon form (rref)
2.6
Simplex algorithm algebraic method for finding maximum in a linear program with 2 or
more variables
Slack variable extra variable, which is added to an inequality to make the constraint an
equality
Initial simplex table the augmented matrix required for the simplex algorithm, formed from the linear system that arises when:
The objective function is written in standard form
The inequality constraints are converted to equalities by introducing slack variables
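The two steps above can be sketched for a small, made-up LP (the objective and constraints are hypothetical, chosen only to show the table's layout):

```python
import numpy as np

# Hypothetical LP: maximize z = 3x + 2y subject to
#   x +  y <= 4
#   x + 3y <= 6,   x, y >= 0
#
# Adding slack variables s1, s2 turns the inequalities into equalities:
#   x +  y + s1      = 4
#   x + 3y      + s2 = 6
# and the objective in standard form is z - 3x - 2y = 0.
#
# Initial simplex table (columns: x, y, s1, s2 | rhs; last row is the
# objective with its coefficients negated):
table = np.array([
    [ 1.0,  1.0, 1.0, 0.0, 4.0],
    [ 1.0,  3.0, 0.0, 1.0, 6.0],
    [-3.0, -2.0, 0.0, 0.0, 0.0],
])
print(table.shape)  # (3, 5)
```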
4.1
Vector an ordered n-tuple of numbers (u1, u2, ..., un)
Pair n=2, triple n=3, etc
Parallel two nonzero vectors u and v are parallel if one is a scalar multiple of the other: u = kv, k ≠ 0
length or magnitude of a vector - |u| = √(u1^2 + u2^2 + u3^2)
position vector any point in Euclidean space R^n can be thought of as a vector starting at the origin (the zero vector) and ending at the point
the vector equation for a line in R^n let P0 be the position vector of any point on the line and v be any vector parallel to the line; then the line is given by x = P0 + tv, where t ranges over the real numbers
5.1
Dot Product u · v = u1v1 + u2v2 + ... + unvn

Commutative: u · v = v · u
Distributive: u · (v + w) = u · v + u · w
Associative with scalars: (cu) · v = c(u · v)
Zero vector: u · 0 = 0
Orthogonal two vectors are orthogonal (perpendicular) when their dot product equals zero
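A quick check in NumPy (the two vectors are made up, chosen so that their dot product is zero):

```python
import numpy as np

u = np.array([1, 2, -1])
v = np.array([3, 0, 3])  # hypothetical: 1*3 + 2*0 + (-1)*3 = 0

print(np.dot(u, v))                  # 0 -> u and v are orthogonal
print(np.dot(u, v) == np.dot(v, u))  # True: the dot product is commutative
```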

4.2
Linear combination v = c1v1 + c2v2 + ... + cnvn, where each ci is a real number and each vi is a vector
Vector (as matrix) a vector can be written as a matrix with a single column, with its entries listed in order from the top
Consistency Theorem a system with augmented matrix [A | b] is consistent if and only if b can be written as a linear combination of the columns of the coefficient matrix A

3.1
Scalar multiple of matrix multiplying a matrix by a number
Zero matrix entries are all zero, additive identity, denoted by O
Theorem of sums and scalar multiples of matrices
Addition is commutative: A+B=B+A
Addition is associative: A+(B+C)=(A+B)+C
Additive identity
Scalar multiplication distributes over matrix addition: r(A+B)=rA+rB
Matrix multiplication distributes over scalar addition: (r+s)A=rA+sA
Scalar multiplication is associative: r(sA)=(rs)A
Equal matrices two matrices are equal if they are same size and corresponding
entries are equivalent
Identity matrix diagonal ones and all other zeros
Theorem: Properties of matrix multiplication
Associative: A(BC)=(AB)C
Distributive over addition from left and right: A(B+C)=AB+AC
Scalar multiplication over matrix multiplication: r(AB)=(rA)B=A(rB)
Identity
Transpose of matrix given an m×n matrix A, the transpose is the n×m matrix A^t whose columns are the corresponding rows of A
Theorem: Properties of Transpose
(A^t)^t = A
(A + B)^t = A^t + B^t
(rA)^t = rA^t
(AB)^t = B^t A^t
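The transpose properties can be spot-checked numerically; the matrices below are arbitrary random choices, not from the notes:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 5, (2, 3))
B = rng.integers(-5, 5, (3, 2))

print(np.array_equal(A.T.T, A))              # (A^t)^t = A
print(np.array_equal((A @ B).T, B.T @ A.T))  # (AB)^t = B^t A^t (reversed order)
```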
3.2
Inverse B is the multiplicative inverse of A if AB = I and BA = I
Only one inverse
Only for square matrices
Not every square matrix has inverse
If A is invertible then it can be changed into the identity matrix through row
operations
Theorem: Properties of inverse
If AB = I, then BA = I
(A^-1)^-1 = A
(AB)^-1 = B^-1 A^-1

An n×n matrix A has an inverse if and only if A has row rank n
If A is invertible and AB=AC then B=C
Only if A is invertible
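The inverse properties can be verified numerically; the two invertible matrices below are arbitrary examples:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])   # det = 1, so A is invertible
B = np.array([[1.0, 2.0],
              [0.0, 1.0]])   # det = 1, so B is invertible

# (AB)^-1 = B^-1 A^-1 (note the reversed order)
print(np.allclose(np.linalg.inv(A @ B), np.linalg.inv(B) @ np.linalg.inv(A)))
# (A^-1)^-1 = A
print(np.allclose(np.linalg.inv(np.linalg.inv(A)), A))
```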
3.3
Consumption matrix matrix that represents the cost per dollar to run several
companies or industries in an economy
Productive economy an economy is productive if given any demand there is a
production schedule that meets that demand
Every column of C sums to less than 1
Every row of C sums to less than 1
(I - C)^-1 exists and has all positive entries
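A small numerical illustration (the consumption matrix and demand vector are hypothetical):

```python
import numpy as np

# Hypothetical consumption matrix: each column sums to less than 1,
# so the economy is productive.
C = np.array([[0.2, 0.3],
              [0.4, 0.1]])
I = np.eye(2)
leontief = np.linalg.inv(I - C)   # (I - C)^-1
print(np.all(leontief > 0))       # True: all entries positive

# A production schedule x that meets demand d: x = (I - C)^-1 d
d = np.array([10.0, 20.0])
x = leontief @ d
```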

3.6
Markov chain process in which the probability of a system being in a particular
state at a specific time depends only on the state of the system in the preceding time
period
Transition matrix matrix whose entries represent the probabilities of moving from
one state to another
Stable vector a stochastic vector S associated with a transition matrix T is stable if TS = S
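One way to find a stable vector is to apply T repeatedly to any starting stochastic vector; the transition matrix below is a made-up two-state example:

```python
import numpy as np

# Hypothetical transition matrix (each column sums to 1)
T = np.array([[0.9, 0.2],
              [0.1, 0.8]])

# Iterating T on a stochastic start vector converges to the stable vector S
S = np.array([0.5, 0.5])
for _ in range(200):
    S = T @ S

print(np.allclose(T @ S, S))  # True: TS = S
print(S)                      # approximately [2/3, 1/3]
```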

3.4
Determinant a real number calculated from the entries of the matrix that has the
property of determining whether the matrix is invertible
Determinant of 1x1 matrix [a]: det A = a
Determinant of 2x2 matrix with rows (a, b) and (c, d): det A = ad - bc
Matrix that can be turned into the identity I using row operations is invertible
Definition of every determinant comes from the exact conditions that the entries of
the matrix must satisfy to guarantee that the matrix can be turned into I through row
operations
If det (A) = 0, A is not invertible
Theorem: if the determinant of a nxn matrix is non zero then that matrix is invertible
(i, j)-minor the determinant of the submatrix formed by deleting the ith row and the jth column from A
Determinant (cofactor expansion defn.) if A is an n×n matrix with n greater than or equal to 2, then the cofactor expansion across any row or down any column has the value det(A)
Properties of Det
det A = det A^t
det A = 0 if there is a row or column of all 0s
If a square matrix has two rows that are proportional then the determinant is
zero
Triangular matrix square matrix where either all entries below the diagonal are zero
or all entries above diagonal are zero
Determinant of triangular matrix is product of diagonal
Diagonal matrix square matrix where every entry off diagonal is zero
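The triangular-matrix rule is easy to confirm (the upper triangular matrix below is an arbitrary example):

```python
import numpy as np

# Determinant of a triangular matrix = product of its diagonal entries
U = np.array([[2.0, 5.0,  1.0],
              [0.0, 3.0,  4.0],
              [0.0, 0.0, -1.0]])
print(np.isclose(np.linalg.det(U), 2 * 3 * -1))  # True: det = -6
```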

3.5
Any system of linear equations can be represented as matrix equation Ax=b
Theorem: for an n×n matrix A, the following are equivalent:
det(A) ≠ 0
A^-1 exists
Ax = b has a unique solution for any b
The row rank of A is n
If A is a square matrix, then the homogeneous system Ax = 0 has a nontrivial solution if and only if det(A) = 0
If A and B are nxn matrices, then det (AB) = det A det B
If A is invertible, then det(A^-1) = 1/det(A)
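The equivalence between det(A) ≠ 0 and a unique solution can be seen directly (A and b below are arbitrary example values):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
b = np.array([5.0, 6.0])

print(not np.isclose(np.linalg.det(A), 0))  # True: det = -2, so A^-1 exists
x = np.linalg.solve(A, b)                   # the unique solution of Ax = b
print(np.allclose(A @ x, b))                # True
```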

4. VS
Real vector space a set V of objects together with two operations, ⊕ and ⊙, satisfying the following properties:
V is closed under ⊕: if u and v are in V, then u ⊕ v is in V
⊕ is commutative: u ⊕ v = v ⊕ u
⊕ is associative: (u ⊕ v) ⊕ w = u ⊕ (v ⊕ w)
An additive identity exists in V
For every element in V there is an additive inverse
V is closed under ⊙: if c is a scalar and u is in V, then c ⊙ u is in V
⊙ distributes over ⊕: c ⊙ (u ⊕ v) = (c ⊙ u) ⊕ (c ⊙ v)
⊙ is associative: c ⊙ (d ⊙ u) = (cd) ⊙ u
A multiplicative identity exists: 1 ⊙ u = u
Elements of a vector space are called vectors
The operation ⊕ is called vector addition
The operation ⊙ is called scalar multiplication
The scalars c and d are real numbers, and this is the reason we have a real vector space
the zero vector
negative of a vector
Closure property V is closed under the operations of vector addition and scalar multiplication

4.3
Subspace let V be a vector space and let W be a nonempty subset of vectors from V. If W is a vector space with respect to the operations in V, then W is called a subspace of V
Subspace can be thought of as a vector space W that is part of a larger vector
space V
Elements in W are already vectors since they come from the known vector
space V. And W inherits the vector space operations from V. As a result, to
check whether a set W is a vector space, it suffices to check only the closure
properties
Span of a set of vectors if S = {v1, v2, ..., vk} is a set of vectors from a vector space V, then the span of S is the set of all linear combinations of the vectors in S
Theorem: Let S = {v1, v2, ..., vk} be a set of vectors from a vector space V. Then span S is a subspace of V
Any plane that does not pass through the origin cannot be a subspace
Theorem: any non empty subset W of a vector space V is a subspace if and only if it
is the span of a collection of vectors in V
Every subspace can be written as a span, as the set of all linear combinations
of some set of vectors in V
Null space the set of all solutions of the homogeneous system Ax = 0
Column space set of all linear combinations of the columns of A
Consistency Theorem #3 a system Ax = b is consistent if and only if b is in the column space of A
4.4
A 0-parameter solution is a point, a 1-parameter solution is a line, a 2-parameter solution is a plane, a 3-parameter solution is a hyperplane
vector equation for a plane in R^n - x = P0 + tv1 + sv2, where P0 is any point in the plane and v1 and v2 are any two vectors that are parallel to the plane but not parallel to each other
Linear dependence and linear independence
linearly dependent if there exist constants c1, c2, ..., ck, NOT ALL ZERO, such that c1v1 + c2v2 + ... + ckvk = 0
linearly independent if whenever c1v1 + c2v2 + ... + ckvk = 0, then WE MUST HAVE c1 = c2 = ... = ck = 0
Let A be an n×n matrix; then the following conditions are equivalent
the columns of A are linearly independent
A is invertible
det A does not equal 0
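The equivalence can be checked by putting the vectors in the columns of a matrix (the three vectors below are an arbitrary example):

```python
import numpy as np

# Columns of A are the vectors being tested for linear independence
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

# det A != 0, so the columns are linearly independent (and A is invertible)
print(not np.isclose(np.linalg.det(A), 0))  # True
```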
4.5
Basis for a vector space - a finite set of vectors is a basis for a vector space V if
the set spans V and is linearly independent
Dimension of vector space - number of vectors in a basis for a vector space V is
called the dimension of V
