(1) Vocabulary
(a) Linear combination
A linear combination of two vectors is the sum of the two vectors, each multiplied by
a corresponding scalar. Each scalar multiplies every entry of its vector, i.e. it applies
to the entire vector.
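As a quick sketch of the definition above (the vectors and scalars here are illustrative, not taken from the notes):

```python
import numpy as np

# A linear combination c1*v1 + c2*v2: each scalar multiplies its entire vector.
v1 = np.array([1.0, 0.0, 2.0])
v2 = np.array([0.0, 1.0, 1.0])
c1, c2 = 3.0, -2.0

combo = c1 * v1 + c2 * v2
print(combo)  # [ 3. -2.  4.]
```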
(b) Linear dependence
A set of vectors is said to be linearly dependent if one of the vectors can be
defined as a linear combination of the others.
(c) Linear independence
A set of vectors is said to be linearly independent if none of the vectors is a
linear combination of the others. Equivalently, if the vectors form the columns of A,
the system Ax = 0 has only the trivial solution x = 0.
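One way to test this numerically (an illustrative sketch, not a method from the notes) is to compare the rank of the column matrix against the number of columns:

```python
import numpy as np

# Columns are independent exactly when Ax = 0 has only the trivial solution,
# i.e. rank(A) equals the number of columns.
indep = np.column_stack([[1, 0, 0], [1, 1, 0], [1, 1, 1]])
dep   = np.column_stack([[1, 0, 0], [0, 1, 0], [1, 1, 0]])  # third = first + second

print(np.linalg.matrix_rank(indep))  # 3 -> independent
print(np.linalg.matrix_rank(dep))    # 2 -> dependent
```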
(d) Subspace (of a vector space)
A subspace is defined as a Euclidean space which is contained wholly in another
space. The characteristics of a subspace include: containing the zero element,
being closed under addition, and being closed under scalar multiplication. We
say a space U is closed under addition if a + b ∈ U whenever a ∈ U and b ∈ U.
We say that U is closed under scalar multiplication if, given a vector a ∈ U,
αa is also a vector in U for any α ∈ ℝ.
(e) Span (of a set of vectors)
The "span" of a set of vectors is dened as the subspace occupied by all of the
linear combinations of vectors within the specied set.
(f ) Basis (of a subspace)
A "basis" of a subspace is a set of linearly independent vectors which have the
same span of the original subspace.
(g) Inner Product, Dot Product
We define the inner product of vectors x, y as the sum of the products of
corresponding elements. Thus ⟨x, y⟩ = x₁y₁ + x₂y₂ + … + xₙyₙ. This definition
is synonymous with our calculation of the dot product, notated x · y.
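The two formulations above can be compared directly (illustrative vectors):

```python
import numpy as np

# <x, y> = x1*y1 + ... + xn*yn, identical to the dot product x . y.
x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])

by_sum = sum(xi * yi for xi, yi in zip(x, y))  # element-by-element definition
by_dot = np.dot(x, y)                          # built-in dot product
print(by_sum, by_dot)  # 32.0 32.0
```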
(h) Norms
Norms are functions which take vectors as input and return non-negative real
numbers as output. Norms can be considered a means to measure the length
or magnitude of a vector. Norms satisfy a few conditions:
If x ∈ ℝⁿ then ‖x‖ ≥ 0.
If x = 0 then ‖x‖ = 0.
If ‖x‖ = 0 then x = 0.
If α ∈ ℝ then ‖αx‖ = |α|‖x‖.
Lastly, for any x, y ∈ ℝⁿ, ‖x + y‖ ≤ ‖x‖ + ‖y‖, which is the triangle
inequality.
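The axioms above can be spot-checked for the Euclidean norm on a couple of illustrative vectors:

```python
import numpy as np

# Spot-check the norm axioms for the Euclidean norm.
x = np.array([3.0, 4.0])
y = np.array([1.0, -2.0])
alpha = -2.5

assert np.linalg.norm(x) >= 0                      # non-negativity
assert np.linalg.norm(np.zeros(2)) == 0            # ||0|| = 0
assert np.isclose(np.linalg.norm(alpha * x),
                  abs(alpha) * np.linalg.norm(x))  # absolute homogeneity
assert np.linalg.norm(x + y) <= np.linalg.norm(x) + np.linalg.norm(y)  # triangle
print(np.linalg.norm(x))  # 5.0
```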
(i) Linear functions
For a function f : U → V to be linear:
If u₁, u₂ ∈ U, then f(u₁ + u₂) = f(u₁) + f(u₂).
If u ∈ U and α ∈ ℝ, then f(αu) = αf(u).
Additionally, note that f(0) = 0, by taking α = 0.
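Any map of the form f(x) = Ax satisfies both conditions; a numerical spot-check (with an arbitrary random matrix, chosen here for illustration):

```python
import numpy as np

# Verify additivity, homogeneity, and f(0) = 0 for f(x) = Ax.
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))
f = lambda x: A @ x

u1, u2 = rng.standard_normal(3), rng.standard_normal(3)
alpha = 4.0

assert np.allclose(f(u1 + u2), f(u1) + f(u2))     # additivity
assert np.allclose(f(alpha * u1), alpha * f(u1))  # homogeneity
assert np.allclose(f(np.zeros(3)), 0)             # f(0) = 0
```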
(j) dimension
The number of vectors in a basis for a space is called the dimension
of the space.
(k) orthogonal (vectors)
Two vectors are orthogonal if and only if the lines drawn from the origin to the
points with coordinates defined by x and y are perpendicular to each other.
Additionally, the dot product of orthogonal vectors is 0.
(l) orthonormal (vectors)
Orthonormal vectors are both orthogonal to each other and normalized to unit
length (as produced, for example, by QR decomposition).
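Both properties can be seen in the Q factor that numpy's QR factorization returns (illustrative matrix):

```python
import numpy as np

# Q from a QR factorization has orthonormal columns: mutually orthogonal
# and each of unit length.
A = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
Q, R = np.linalg.qr(A)

assert np.allclose(Q.T @ Q, np.eye(2))            # columns are orthonormal
assert np.allclose(np.linalg.norm(Q, axis=0), 1)  # each column has length 1
```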
(m) rank (of a matrix)
The dimension of the range of A is called the rank of A. This can be written
as rank(A) = dim(range(A)).
(n) range (of a matrix)
The range of A is the span of the columns of A. This is also called the column
space of A.
(o) nullspace (of a matrix)
The nullspace of a matrix A is defined as the set of vectors x for which
Ax = 0. Additionally, we can define the nullspace of A as the set of vectors x
that are orthogonal to the range of Aᵀ.
We make the claim that Range(Aᵀ) is orthogonal to Null(A). This means that
for:
y ∈ Range(Aᵀ), meaning y = Aᵀx for some x,
z ∈ Null(A), meaning Az = 0,
we have yᵀz = (Aᵀx)ᵀz = xᵀAz = 0.
This suggests that Range(AᵀA) = Range(Aᵀ).
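The orthogonality claim can be checked on a small example; the matrix A below is an illustrative choice whose nullspace is easy to read off, not one from the notes:

```python
import numpy as np

# Verify y^T z = 0 for y in Range(A^T) and z in Null(A).
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
z = np.array([1.0, 0.0, -1.0])  # Az = 0, so z is in Null(A)
assert np.allclose(A @ z, 0)

x = np.array([2.0, -3.0])
y = A.T @ x                     # y = A^T x is in Range(A^T)
print(y @ z)  # 0.0
```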
(p) inverse (of a matrix)
We define the inverse of a matrix A as A⁻¹, the matrix which, when multiplied
by A, gives the identity matrix I.
Note that this means both AA⁻¹ = I and A⁻¹A = I.
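Both products can be confirmed numerically (illustrative 2×2 matrix):

```python
import numpy as np

# The inverse satisfies A A^{-1} = I and A^{-1} A = I.
A = np.array([[2.0, 1.0],
              [1.0, 1.0]])
A_inv = np.linalg.inv(A)

assert np.allclose(A @ A_inv, np.eye(2))
assert np.allclose(A_inv @ A, np.eye(2))
```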
(q) elementary vectors e₁, …, eₙ in ℝⁿ
The standard basis consists of the "unit vectors" e₁, e₂, …, eₙ, which form a
basis for the space ℝⁿ. Applying a linear function to the standard basis vectors
produces the columns of the matrix which represents the function.
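That is, stacking the images f(e₁), …, f(eₙ) as columns recovers the matrix of the map (illustrative A):

```python
import numpy as np

# The i-th column of the matrix of a linear map is f(e_i).
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 3.0, 1.0]])
f = lambda x: A @ x

e = np.eye(3)  # columns are e1, e2, e3
rebuilt = np.column_stack([f(e[:, i]) for i in range(3)])
assert np.allclose(rebuilt, A)
```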
(r) Echelon form and reduced Echelon form
We define a matrix to be in row echelon form when it satisfies the following
conditions:
The first non-zero element in each row, called a "pivot", is a 1.
Each pivot lies in a column strictly to the right of the pivot in the row above.
In reduced echelon form, additionally, each pivot is the only non-zero entry
in its column.
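SymPy's `rref()` computes the reduced row echelon form directly, which makes the conditions above easy to see (illustrative matrix):

```python
import sympy as sp

# rref() returns the reduced row echelon form and the pivot column indices:
# leading-1 pivots, each strictly right of the one above, zeros above and below.
M = sp.Matrix([[1, 2, 1],
               [2, 4, 0],
               [0, 0, 1]])
R, pivot_cols = M.rref()
print(R)           # Matrix([[1, 2, 0], [0, 0, 1], [0, 0, 0]])
print(pivot_cols)  # (0, 2)
```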
Practice problems
(3) Take any norm ‖·‖. Are the following norms?
(a) f(x) = 2‖x‖. Yes. This is the same as taking the norm of 2x, by absolute
homogeneity.
(b) f(x) = ‖−x‖. Yes: the norm of −x is equal to the norm of x by the definition
of a norm (take α = −1). Therefore this function is the same as taking the
norm of x and thus is clearly a norm.
(c) f(x) = ‖x‖². No. Clearly this fails scalar multiplication, as
f(2x) = 4‖x‖² ≠ 2f(x) in general.
(d) f(x) = ‖x − 2‖. No: the zero vector gets a non-zero value, f(0) = ‖−2‖ ≠ 0.
(5) Suppose I give you a linear function f : ℝ³ → ℝ² with
f(1, 0, 1)ᵀ = (1, 1)ᵀ, f(1, 1, 0)ᵀ = (3, 2)ᵀ, f(0, 0, 2)ᵀ = (1, 1)ᵀ.
Find an explicit matrix A so that f(x) = Ax.
Write the inputs as the columns of M and the outputs as the columns of B, so
that AM = B, and apply the same column operations to both until M becomes
the identity: divide the third column by 2, subtract the (new) third column
from the first, then subtract the (new) first column from the second. The input
columns become e₁, e₂, e₃, and the output columns become
A = [ 1/2  5/2  1/2 ]
    [ 1/2  3/2  1/2 ].
Check: A(1, 0, 1)ᵀ = (1, 1)ᵀ, A(1, 1, 0)ᵀ = (3, 2)ᵀ, A(0, 0, 2)ᵀ = (1, 1)ᵀ.
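Reading the three given pairs as the columns of an input matrix M and an output matrix B, the answer can also be obtained and checked directly as A = B M⁻¹ (a sketch, assuming that reading of the data):

```python
import numpy as np

# Columns of M are the given inputs; columns of B are the given outputs.
M = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0],
              [1.0, 0.0, 2.0]])
B = np.array([[1.0, 3.0, 1.0],
              [1.0, 2.0, 1.0]])

A = B @ np.linalg.inv(M)  # solves A M = B
print(A)  # [[0.5 2.5 0.5]
          #  [0.5 1.5 0.5]]
assert np.allclose(A @ M, B)
```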
p″(x) = 12ax² + 6bx + 2c. Based on the condition p″(1) = 1, this means that
12a + 6b + 2c = 1. Clearly, if we were to add two polynomials which each meet
this condition, e.g. (1/12)x⁴ and x⁴ − (11/6)x³, the resulting polynomial
(13/12)x⁴ − (11/6)x³ no longer meets the condition, as its second derivative
at x = 1 equals 2, not 1. Therefore this is NOT a subspace.
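The counterexample can be checked symbolically; the two polynomials below are illustrative choices satisfying p″(1) = 1, not fixed by the notes:

```python
import sympy as sp

# Two polynomials each satisfying p''(1) = 1 whose sum does not.
x = sp.symbols('x')
cond = lambda p: sp.diff(p, x, 2).subs(x, 1)  # evaluates p''(1)

p1 = sp.Rational(1, 12) * x**4
p2 = x**4 - sp.Rational(11, 6) * x**3
print(cond(p1), cond(p2), cond(p1 + p2))  # 1 1 2
```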
(7) Let
S = {x ∈ ℝ³ | x = (r − s, 2s, r)ᵀ, r, s ∈ ℝ}.
Verify that S is a linear space; then nd a basis for S. What is the dimension of S?
S = {p ∈ P₄ : p(0) = 0, p′(1) = 0}
(10) Find four linearly independent vectors that are all orthogonal to
(1, 3, 3, 2, 1)ᵀ in ℝ⁵.
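One systematic way to produce such vectors (a sketch, not the method the notes intend) is to take the right singular vectors of (1, 3, 3, 2, 1)ᵀ beyond the first, which span its orthogonal complement:

```python
import numpy as np

# The orthogonal complement of span{v} in R^5 is 4-dimensional; the SVD of
# the 1x5 matrix v^T yields an orthonormal basis for it.
v = np.array([[1.0, 3.0, 3.0, 2.0, 1.0]])  # 1x5
_, _, Vt = np.linalg.svd(v)
basis = Vt[1:]  # remaining four right singular vectors span Null(v^T)

assert basis.shape == (4, 5)
assert np.allclose(basis @ v.ravel(), 0)   # each is orthogonal to v
assert np.linalg.matrix_rank(basis) == 4   # and they are independent
```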
(c) Now use the previous two problems to write down all solutions to the system
[ 1 0 1 0 ] x = [ 3 ]
[ 0 1 0 0 ]     [ 2 ]
where x = (x₁, x₂, x₃, x₄)ᵀ.
(12) Let
V₁ = (1, 2, 3)ᵀ (a 3×1 column), V₂ = (1, 1, 1) (a 1×3 row).
Be careful to think of these as matrices. Compute the following.
(a) V₂V₁
(b) V₁V₂
(c) V₂V₂ᵀ
(d) V₂ᵀV₂
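Treating V₁ and V₂ strictly as matrices, the order of multiplication decides whether the result is a 1×1 (inner-product-like) or a 3×3 (outer product); the shapes can be checked directly:

```python
import numpy as np

# V1 is 3x1, V2 is 1x3; matrix multiplication order changes the result shape.
V1 = np.array([[1], [2], [3]])  # 3x1 column
V2 = np.array([[1, 1, 1]])      # 1x3 row

print((V2 @ V1).shape)    # (1, 1) -> [[6]]
print((V1 @ V2).shape)    # (3, 3) outer product
print((V2 @ V2.T).shape)  # (1, 1) -> [[3]]
print((V2.T @ V2).shape)  # (3, 3)
```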