Introduction
Many mathematical structures look different at first sight, but on closer inspection the resemblance is astonishing. The benefit of studying an abstract structure is that every property proved for it applies to every representation of that structure. The concept of a 'real vector space' is an abstract structure in that sense.
Concept
We start with a set V and the field of real numbers R. We define the concept 'vector space' by means of postulates. We say V is a vector space if and only if

1. There is an addition '+' in V such that V,+ is a commutative group.
2. Any element v in V and any r in R determine a scalar product rv in V. This scalar product has the following properties for all r,s in R and all v,w in V:
3. r(sv) = (rs)v
4. r(v + w) = rv + rw
5. (r + s)v = rv + sv
6. 1v = v

Any element of a vector space is called a vector. The identity element of the group V,+ is called the vector 0. The inverse element of v is called the opposite vector -v. The subtraction v - v' is defined by v - v' = v + (-v').

Examples of real vector spaces are:
- The ordinary vectors in the plane or in space.
- The couples of real numbers.
- The complex numbers.
- The real numbers.
- The n-tuples (a,b,c,..,l), with a,b,.. in R.
- Real 2x2 matrices.
- Polynomials in x, of third degree or lower and with real coefficients.
Calculation rules
From the postulates of a vector space one can deduce the following calculation rules. They hold for all vectors u,v in V and all r,s in R.
(-r)u = r(-u) = -(ru)
r(u - v) = ru - rv
(r - s)u = ru - su
ru = 0 <=> (r = 0 or u = 0)
(ru = rv with r not zero) => u = v
(ru = su with u not zero) => r = s
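The rules above can be spot-checked numerically in one of the example vector spaces, the couples of real numbers. A minimal sketch; the helper names add, scale and neg are illustrative, not from the text.

```python
def add(u, v):
    """Vector addition in R^2 (couples of real numbers)."""
    return (u[0] + v[0], u[1] + v[1])

def scale(r, u):
    """Scalar product r·u in R^2."""
    return (r * u[0], r * u[1])

def neg(u):
    """Opposite vector -u."""
    return scale(-1, u)

r, s = 3.0, 5.0
u, v = (2.0, -1.0), (4.0, 0.5)

# (-r)u = r(-u) = -(ru)
assert scale(-r, u) == scale(r, neg(u)) == neg(scale(r, u))
# r(u - v) = ru - rv
assert scale(r, add(u, neg(v))) == add(scale(r, u), neg(scale(r, v)))
# (r - s)u = ru - su
assert scale(r - s, u) == add(scale(r, u), neg(scale(s, u)))
```

Of course, a finite check like this is only an illustration; the rules themselves are proved from the postulates.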
Subspaces
Definition
M is a subspace of a vector space V if and only if M is a non-empty subset of V and M, with the addition and scalar multiplication of V, is itself a vector space.
Criterion
Theorem: A non-empty subset M of a vector space V is a vector space if and only if rx + sy is in M for all r,s in R and all x,y in M.

Part 1: first we prove that if M is a vector space, then rx + sy is in M for all r,s in R and all x,y in M. Indeed, if M is a vector space then, by the postulates, rx and sy are in M and therefore rx + sy is in M.

Part 2: we prove that if rx + sy is in M for all r,s in R and all x,y in M, then M is a vector space.
- Since rx + sy is in M, choose r = s = 1. So, x + y is in M.
- Since associativity holds in V, it holds in M.
- Since rx + sy is in M, choose r = 1, s = -1 and y = x. So, 0 is in M.
- Since rx + sy is in M, choose r = -1, s = 0. So, -x is in M.
- Since commutativity holds in V, it holds in M.
- Since rx + sy is in M, choose s = 0. So, rx is in M.
- The properties of scalar multiplication hold because they hold in V.
Q.E.D.

Example 1

V is the vector space of all polynomials in x. M is the set of all the polynomials in x of second degree or lower.
We investigate whether or not M is a subspace of V. To this end we choose r, s at random in R and take ax^2+bx+c and dx^2+ex+f as random elements of M.
M is a subspace of V
<=> r(ax^2+bx+c) + s(dx^2+ex+f) is in M
<=> (ra+sd)x^2 + (rb+se)x + (rc+sf) is in M
Since the last claim is true, M is a subspace of V.

Example 2

V is the vector space of all polynomials in x. M is the set of all the polynomials in x divisible by (x-2). We investigate whether or not M is a subspace of V. To this end we choose r, s at random in R and take (x-2)p(x) and (x-2)q(x) as random elements of M.
M is a subspace of V
<=> r(x-2)p(x) + s(x-2)q(x) is in M
<=> (x-2)(r p(x) + s q(x)) is in M
Since the last claim is true, M is a subspace of V.

Example 3

V is the vector space of all couples of real numbers. M = { (x,y) | x,y in R and x + y = 1 }. One can follow the method of the previous examples, but there is a shortcut: since (0,0) is not in M, M is not a vector space. So, M is not a subspace of V.

Example 4

V is the vector space of all 2x2 matrices. M is the set of all the regular (invertible) 2x2 matrices. We investigate whether or not M is a subspace of V. To this end we choose r, s at random in R and take regular 2x2 matrices A and B.
M is a subspace of V <=> rA + sB is in M

The last claim is false: for r = s = 0 we have 0A + 0B = 0, and the 0-matrix is not regular, so it is not in M. Hence M is not a subspace of V.
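The closure argument of Example 2 can be spot-checked numerically: a polynomial is divisible by (x-2) exactly when it vanishes at x = 2, so it suffices to check that r·p + s·q still vanishes there. A minimal sketch; the helper poly_eval and the sample polynomials are illustrative.

```python
def poly_eval(coeffs, x):
    """Evaluate a polynomial given as [a0, a1, a2, ...] at x (Horner's rule)."""
    result = 0.0
    for c in reversed(coeffs):
        result = result * x + c
    return result

# Two sample elements of M: (x-2)·x = x^2 - 2x and (x-2)·(x+1) = x^2 - x - 2
p = [0.0, -2.0, 1.0]
q = [-2.0, -1.0, 1.0]
assert poly_eval(p, 2.0) == 0.0 and poly_eval(q, 2.0) == 0.0

r, s = 3.0, -7.0
combo = [r * a + s * b for a, b in zip(p, q)]
assert poly_eval(combo, 2.0) == 0.0   # r·p + s·q vanishes at 2, so it is again in M
```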
Theorem: The intersection of two subspaces M and N of a vector space V is a subspace of V.

Proof: Since 0 is in M and in N, 0 is in the intersection, so the intersection is non-empty. For any r,s in R and any x,y in the intersection of M and N, we have: (rx + sy is in M) and (rx + sy is in N); so rx + sy is in the intersection. By the previous criterion, the intersection of M and N is a subspace of V.

Example

V is the vector space of all polynomials in x. M is the subspace of all the polynomials in x divisible by (x-2). N is the subspace of all the polynomials in x divisible by (x-1). The intersection I of M and N is the set of all the polynomials in x divisible by (x-1)(x-2). I is a subspace of V.
Example 2 V is the vector space of all row matrices [a,b,c] with a,b,c in R. D = { [1,1,1] }. M = span(D) = { [r,r,r] | r in R}
- If we add a vector from M to the set D, then still M = span(D).
- Suppose there is a vector in D that is a linear combination of the other vectors in D. If we remove that vector from D, then still M = span(D).
- If we multiply a vector from D by a real number (not 0), then still M = span(D).
- If we multiply a vector from D by a real number and add the result to another vector of D, then still M = span(D).
Examples: V is the vector space of all row matrices [a,b,c,d] with a,b,c,d in R.

D = { [1,0,0,0] , [0,1,0,0] , [2,3,1,0] , [2,4,1,0] }
M = span(D) = span( [1,0,0,0] , [0,1,0,0] , [2,3,1,0] , [2,4,1,0] )

3 [1,0,0,0] + 4 [0,1,0,0] = [3,4,0,0] is in M. So,
M = span( [1,0,0,0] , [0,1,0,0] , [2,3,1,0] , [2,4,1,0] , [3,4,0,0] )

[2,4,1,0] is a linear combination of the other vectors in D because [2,4,1,0] = 0 [1,0,0,0] + [0,1,0,0] + [2,3,1,0]. So,
M = span( [1,0,0,0] , [0,1,0,0] , [2,3,1,0] )

M = span( [17,0,0,0] , [0,1,0,0] , [2,3,1,0] , [2,4,1,0] )
M = span( [1,0,0,0] , [0,1,0,0] - [2,3,1,0] , [2,3,1,0] , [2,4,1,0] )
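The linear-combination claim above can be verified directly by computing the combination. A small sketch; the helper lin_comb is an illustrative name.

```python
def lin_comb(coeffs, vectors):
    """Return the linear combination sum(coeff * vector) of the given row vectors."""
    n = len(vectors[0])
    return [sum(c * v[i] for c, v in zip(coeffs, vectors)) for i in range(n)]

others = [[1, 0, 0, 0], [0, 1, 0, 0], [2, 3, 1, 0]]
# [2,4,1,0] = 0·[1,0,0,0] + 1·[0,1,0,0] + 1·[2,3,1,0]
assert lin_comb([0, 1, 1], others) == [2, 4, 1, 0]
```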
Linear dependence
Linearly dependent vectors
A set D of vectors is called dependent if there is at least one vector in D that can be written as a linear combination of the other vectors of D. A set of one vector is called dependent if and only if that vector is the vector 0.

Example: V is the vector space of all row matrices [a,b,c,d] with a,b,c,d in R. D = { [1,0,0,0] , [0,1,0,0] , [2,3,1,0] , [-2,0,-1,0] }
Since the last statement is false, the three vectors are not linearly dependent.
Corollary:
Take a set D = {a,b,c,..,l} of (more than one) vectors from a vector space V. That set D is linearly independent if and only if
ra + sb + tc + ... + zl = 0 => r = s = t = ... = z = 0
-13r = 0
We see immediately that this system has no solution other than the zero solution. The three vectors are linearly independent. It is a free set.
Properties of coordinates
It is easy to verify that
co(a + b) = co(a) + co(b)
co(ra) = r co(a)
with a,b in M and r in R.
Suppose there are two bases, B1 and B2, with a different number of elements. Assume B1 = {a,b,c,d} and B2 = {u,v,w}. Then span(B2) = V.

Now, we have V = span{d,u,v,w} and {d,u,v,w} is a linearly dependent set. It contains at least one vector, say v, which is a linear combination of the previous vectors. We can omit this vector, and then V = span{d,u,w}.

Again, V = span{c,d,u,w} and {c,d,u,w} is a linearly dependent set. It contains at least one vector, say w, which is a linear combination of the previous vectors. That vector can't be d, because c and d are independent (as part of a basis). We can omit this vector, and then V = span{c,d,u}.

Again, V = span{b,c,d,u} and {b,c,d,u} is a linearly dependent set. It contains at least one vector which is a linear combination of the previous vectors. That vector can't be b, c or d, because b, c and d are independent (as part of a basis). That vector must be u! We can omit this vector, and then V = span{b,c,d}.

Again, V = span{a,b,c,d} and {a,b,c,d} is a linearly dependent set. This is impossible because it is a basis. From all this we see that it is impossible that two bases, B1 and B2, have a different number of elements.
Corollary
If dim(V) = n, then
- every set that spans V has at least n vectors;
- every free set has at most n vectors;
- each set of n vectors that spans V is a basis;
- each free set of n vectors is a basis.
Example

Let V = the vector space R3. An obvious basis is ((1,0,0), (0,1,0), (0,0,1)), so dim(V) = 3. Each basis consists of three vectors, but three random vectors do not always constitute a basis. Take the three vectors (2+m, m, m), (n, 2, n), (2, 1, -4). We search for the necessary and sufficient condition on m and n such that these three vectors are not a basis of R3.
(2+m,m,m), (n,2,n), (2,1,-4) are not a basis
<=> (2+m,m,m), (n,2,n), (2,1,-4) are linearly dependent
<=> there are r, s and t, not all zero, such that r(2+m,m,m) + s(n,2,n) + t(2,1,-4) = 0
The following system has a solution different from (0,0,0):

(2+m)r + n s + 2t = 0
m r + 2 s + t = 0
m r + n s - 4t = 0

<=>
| 2+m  n   2 |
|  m   2   1 | = 0
|  m   n  -4 |

<=> 6mn - 12m - 2n - 16 = 0
The three vectors are not a basis of V if and only if the latter condition is fulfilled.

Example

(1,2,5) and (-1,1,3) are two vectors of R3. Choose another vector from R3 such that the three vectors form a basis of R3. We try the simple vector (1,0,0). As in the previous example we have:
(1,0,0), (1,2,5) and (-1,1,3) constitute a basis
<=> (1,0,0), (1,2,5) and (-1,1,3) are linearly independent
<=> ...
<=>
|  1  0  0 |
|  1  2  5 | is not zero
| -1  1  3 |
When we expand the determinant along the first row, we see immediately that the determinant is 1. So, (1,0,0), (1,2,5) and (-1,1,3) constitute a basis.
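The determinant computations in the two examples above can be spot-checked with a hand-coded 3x3 determinant. A sketch; the names det3 and coeff_matrix are illustrative.

```python
def det3(rows):
    """Determinant of a 3x3 matrix given as a list of three rows."""
    (a, b, c), (d, e, f), (g, h, i) = rows
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def coeff_matrix(m, n):
    """Coefficient matrix of the system in the first example."""
    return [[2 + m, n,  2],
            [m,     2,  1],
            [m,     n, -4]]

# First example: det = 0 <=> 6mn - 12m - 2n - 16 = 0
for m in range(-3, 4):
    for n in range(-3, 4):
        assert det3(coeff_matrix(m, n)) == 6 * m * n - 12 * m - 2 * n - 16

# Second example: the determinant is 1, so the three vectors form a basis.
assert det3([[1, 0, 0], [1, 2, 5], [-1, 1, 3]]) == 1
```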
- interchange two rows
- multiply a row by a real number (not zero)
- add a real multiple of a row to another row
So, such a row transformation does not change the row space of A. The dimension of the row space is the number of independent rows of A.
The rank of A is 3: there are 3 linearly independent rows. In this example the three rows of A form a basis of the row space. The row space is a three-dimensional subspace of R4 with basis ((1 0 2 3),(1 2 0 1),(1 0 1 0)). Now we simplify the matrix A by means of row transformations until we reach the canonical matrix.
[1 0 2 3]
[1 2 0 1]
[1 0 1 0]

R2 - R1:

[1 0 2 3]
[0 2 -2 -2]
[1 0 1 0]

(1/2)R2:

[1 0 2 3]
[0 1 -1 -1]
[1 0 1 0]

R3 - R1:

[1 0 2 3]
[0 1 -1 -1]
[0 0 -1 -3]

(-1)R3:

[1 0 2 3]
[0 1 -1 -1]
[0 0 1 3]

R1 - 2 R3 and R2 + R3:

[1 0 0 -3]
[0 1 0 2]
[0 0 1 3]
Now we have the unique basis of the row space: ((1 0 0 -3), (0 1 0 2), (0 0 1 3)).
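The row reduction above can be sketched in code, using exact fractions so the canonical basis comes out exactly. This is a generic Gauss-Jordan elimination, not the exact step order used in the text; the name rref is an illustrative choice.

```python
from fractions import Fraction

def rref(rows):
    """Reduced row echelon form via Gauss-Jordan elimination, with exact fractions."""
    rows = [[Fraction(x) for x in row] for row in rows]
    pivot_row = 0
    for col in range(len(rows[0])):
        # find a row at or below pivot_row with a nonzero entry in this column
        pivot = next((r for r in range(pivot_row, len(rows)) if rows[r][col] != 0), None)
        if pivot is None:
            continue
        rows[pivot_row], rows[pivot] = rows[pivot], rows[pivot_row]
        inv = rows[pivot_row][col]
        rows[pivot_row] = [x / inv for x in rows[pivot_row]]       # scale pivot row to 1
        for r in range(len(rows)):
            if r != pivot_row and rows[r][col] != 0:               # eliminate the column
                factor = rows[r][col]
                rows[r] = [x - factor * y for x, y in zip(rows[r], rows[pivot_row])]
        pivot_row += 1
    return rows

A = [[1, 0, 2, 3], [1, 2, 0, 1], [1, 0, 1, 0]]
assert rref(A) == [[1, 0, 0, -3], [0, 1, 0, 2], [0, 0, 1, 3]]
```

The nonzero rows of the result are exactly the unique basis found above.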
- interchange two columns
- multiply a column by a real number (not zero)
- add a real multiple of a column to another column
So, such a column transformation does not change the column space of A. The dimension of the column space is the number of independent columns of A.
- the rank of A is the number of linearly independent columns;
- the column space of A and the row space of A have the same dimension.
Exercise: Take the matrix A from the previous example and find the unique basis of the column space.

Example: Find the m-values such that (the dimension of the row space of A) = 3.
    [ m  1  2 ]
A = [ 3  1  0 ]
    [ 1 -2  1 ]
The dimension of the row space of A = rank of A. The rank of A is 3 if and only if the determinant of A is not zero. The determinant of A is m - 17. Conclusion: (the dimension of the row space of A) = 3 if and only if m is different from 17.
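The claim det(A) = m - 17 can be recomputed directly. A small sketch with a hand-coded 3x3 determinant; the names det3 and A are illustrative.

```python
def det3(rows):
    """Determinant of a 3x3 matrix given as a list of three rows."""
    (a, b, c), (d, e, f), (g, h, i) = rows
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def A(m):
    return [[m, 1, 2], [3, 1, 0], [1, -2, 1]]

for m in range(-5, 25):
    assert det3(A(m)) == m - 17

assert det3(A(17)) == 0   # the rank drops below 3 exactly at m = 17
```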
The columns of the transformation matrix are the coordinates of the new basis with respect to the old basis. Example: Say V is the vector space of the ordinary three dimensional space. In that space we take a standard basis e1, e2, e3. They are the unit vectors along x-axis, y-axis and z-axis. We rotate the three basis vectors, around the z-axis, by an angle of 90 degrees. Thus we get a new
basis u1, u2, u3. The link between the old and the new basis is
u1 = e2     co(u1) = (0,1,0)
u2 = -e1    co(u2) = (-1,0,0)
u3 = e3     co(u3) = (0,0,1)
(x,y,z) are the coordinates of a vector v with respect to the old basis. (x',y',z') are the coordinates of the vector v with respect to the new basis. The connection is

[x]   [0 -1 0] [x']
[y] = [1  0 0].[y']
[z]   [0  0 1] [z']
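The change-of-basis relation above can be sketched as a small matrix-vector product: the columns of the matrix are the old coordinates of the new basis vectors. The helper name mat_vec is illustrative.

```python
def mat_vec(m, v):
    """Matrix-vector product; m is a list of rows, v a coordinate tuple."""
    return tuple(sum(row[j] * v[j] for j in range(len(v))) for row in m)

P = [[0, -1, 0],
     [1,  0, 0],
     [0,  0, 1]]   # columns are co(u1), co(u2), co(u3)

# u1 has new coordinates (1,0,0); its old coordinates must be co(u1) = (0,1,0)
assert mat_vec(P, (1, 0, 0)) == (0, 1, 0)
# u2 has new coordinates (0,1,0); its old coordinates must be co(u2) = (-1,0,0)
assert mat_vec(P, (0, 1, 0)) == (-1, 0, 0)
```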
This is a system of the second kind. x and y can be taken as main unknowns. z and t are the side unknowns. The solutions are
x = -z + (2/5)t
y = z - (3/5)t

The set of solutions can be written as
( -z + (2/5)t , z - (3/5)t , z , t ) with z and t in R
<=> z(-1,1,1,0) + t(2/5,-3/5,0,1) with z and t in R
Hence, all solutions are linear combinations of the linearly independent vectors (-1,1,1,0) and (2/5,-3/5,0,1). These two vectors constitute a basis of the solution space.
has a solution (1,1,0,0) . All solutions of the last system are (1,1,0,0) + z(-1,1,1,0) + t(2/5,-3/5,0,1).
Example
In the space R3:
A = span{ (3,2,1) }
B = span{ (2,1,4) ; (0,1,3) }
Investigate whether A+B is a direct sum.
Say r,s,t are real numbers; then each vector in A is of the form r(3,2,1) and each vector in B is of the form s(2,1,4) + t(0,1,3). For each common vector there are suitable r,s,t such that

r(3,2,1) = s(2,1,4) + t(0,1,3)
<=> r(3,2,1) - s(2,1,4) - t(0,1,3) = (0,0,0)
<=>
3r - 2s      = 0
2r -  s -  t = 0
 r - 4s - 3t = 0

Since the determinant

| 3 -2  0 |
| 2 -1 -1 |
| 1 -4 -3 |

is not 0, the system has only the solution r = s = t = 0.
The vector (0,0,0) is the only common vector of A and B. Thus, A+B is a direct sum.
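The direct-sum test above reduces to checking that the 3x3 coefficient determinant is nonzero; we can recompute it (it equals -13). A sketch with a hand-coded determinant; the name det3 is illustrative.

```python
def det3(rows):
    """Determinant of a 3x3 matrix given as a list of three rows."""
    (a, b, c), (d, e, f), (g, h, i) = rows
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

M = [[3, -2,  0],
     [2, -1, -1],
     [1, -4, -3]]
assert det3(M) == -13   # nonzero, so r = s = t = 0 is the only solution
```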
Proof:
Suppose v = a1 + b1 = a2 + b2 with ai in A and bi in B. Then

a1 - a2 = b2 - b1, and a1 - a2 is in A and b2 - b1 is in B.

Therefore a1 - a2 = b2 - b1 is a common element of A and B. But the only common element is 0. So, a1 - a2 = 0 and b2 - b1 = 0, and hence a1 = a2 and b1 = b2.
Each vector v of V can be written as m + n with m in M and n in N. Then m = ra + sb + tc + ... + zl and n = r'a' + s'b' + t'c' + ... + z'l', with r,s,t,...,z,r',s',t',...,z' real coefficients. Thus each vector v = ra + sb + tc + ... + zl + r'a' + s'b' + t'c' + ... + z'l'. Therefore the set {a,b,c,..,l,a',b',c',..,l'} generates V.

If ra + sb + tc + ... + zl + r'a' + s'b' + t'c' + ... + z'l' = 0, then ra + sb + tc + ... + zl is a vector m of M and r'a' + s'b' + t'c' + ... + z'l' is a vector n of N. From a previous theorem we know that the vector 0 can be written in just one way as the sum of an element of M and an element of N, namely 0 = 0 + 0 with 0 in M and 0 in N. From this we see that necessarily m = 0 and n = 0, and thus ra + sb + tc + ... + zl = 0 and r'a' + s'b' + t'c' + ... + z'l' = 0. Since the vectors in each of these expressions are linearly independent, all coefficients are necessarily 0, and from this we conclude that the generating vectors {a,b,c,..,l,a',b',c',..,l'} are linearly independent.
Converse theorem
If {a,b,c,..,l} is a basis of M and {a',b',c',..,l'} is a basis of N, and {a,b,c,..,l,a',b',c',..,l'} are linearly independent, then M+N is a direct sum.

Proof: Each element m of M can be written as ra + sb + tc + ... + zl. Each element n of N can be written as r'a' + s'b' + t'c' + ... + z'l'. For a common element we have
ra + sb + tc + ... + zl = r'a' + s'b' + t'c' + ... + z'l'
Bringing all terms to one side, the linear independence of all these vectors forces every coefficient to be 0. The only common vector is 0.
From the two previous theorems we deduce that: If {a,b,c,..,l } is a basis of M and {a',b',c',..,l' } is a basis of N, then
M+N is a direct sum <=> {a,b,c,..,l,a',b',c',..,l'} are linearly independent
Projection, example
V is the space of all polynomials with a degree not greater than 3. We define two supplementary subspaces
M = span{ 1, x }
N = span{ x^2, x^3 }
Each vector of V is the sum of exactly one vector of M and one vector of N, e.g.
2x^3 - x^2 + 4x - 7 = (2x^3 - x^2) + (4x - 7)
Say p is the projection of V on M with respect to N; then
p(2x^3 - x^2 + 4x - 7) = 4x - 7
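This projection can be sketched on coefficient lists [c0, c1, c2, c3] standing for c0 + c1·x + c2·x^2 + c3·x^3: projecting on M keeps the components along {1, x} and drops those along {x^2, x^3}. The helper name project_on_M is illustrative.

```python
def project_on_M(coeffs):
    """Projection on M = span{1, x} with respect to N = span{x^2, x^3}."""
    return [coeffs[0], coeffs[1], 0, 0]

# p(2x^3 - x^2 + 4x - 7) = 4x - 7
assert project_on_M([-7, 4, -1, 2]) == [-7, 4, 0, 0]
```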
We say that h is a similarity transformation of V with factor r. Important special values of r are 0, 1 and -1.
We say that s is the reflection of V in M with respect to N. This definition is a generalization of the ordinary reflection in a plane. Indeed, if you take the ordinary vectors in a plane, and M and N are one-dimensional supplementary subspaces, then with the previous definition s becomes the ordinary reflection in M with respect to the direction given by N.
Example of a reflection
Take V = R4.
M = span{ (0,1,3,1) ; (1,0,-1,0) }
N = span{ (0,0,0,1) ; (3,2,1,0) }
It is easy to show that M and N have only the vector 0 in common. (This is left as an exercise.) So, M and N are supplementary subspaces. Now we'll calculate the image of the reflection of vector v = (4,3,3,1) in M with respect to N. First we write v as the sum of exactly one vector m of M and n of N.
(4,3,3,1) = x.(0,1,3,1) + y.(1,0,-1,0) + z.(0,0,0,1) + t.(3,2,1,0)

Solving the corresponding system component by component gives x = 1, y = 1, z = 0, t = 1. Hence m = (0,1,3,1) + (1,0,-1,0) = (1,1,2,1) and n = (3,2,1,0).
The image of the reflection of vector v = (4,3,3,1) in M with respect to N is vector v' =
(1,1,2,1) - (3,2,1,0) = (-2,-1,1,1)
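The reflection above can be rechecked in code: v = m + n with m in M and n in N, and the image is m - n. A small sketch; the helper names add and sub are illustrative.

```python
def add(u, v):
    """Componentwise sum of two vectors in R^4."""
    return tuple(a + b for a, b in zip(u, v))

def sub(u, v):
    """Componentwise difference of two vectors in R^4."""
    return tuple(a - b for a, b in zip(u, v))

v = (4, 3, 3, 1)
m = (1, 1, 2, 1)   # = 1·(0,1,3,1) + 1·(1,0,-1,0), in M
n = (3, 2, 1, 0)   # = 0·(0,0,0,1) + 1·(3,2,1,0), in N

assert add(m, n) == v                # v = m + n
assert sub(m, n) == (-2, -1, 1, 1)   # the reflected image v' = m - n
```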