
Concept and calculation rules

Introduction
Many mathematical structures look different at first sight, but on closer inspection the resemblance is astonishing. The benefit of studying an abstract structure is that every property proved for the abstract structure applies to all of its representations. The concept of a 'real vector space' is such an abstract structure.

Concept
We start with a set V and the field of real numbers R. We define the concept 'vector space' by means of postulates. We say V is a vector space if and only if:

1. There is an addition '+' in V such that V,+ is a commutative group.
2. Any element v in V and any r in R determine a scalar product rv in V. This scalar product has the following properties for any r,s in R and any v,w in V:
3. r(sv) = (rs)v
4. r(v + w) = rv + rw
5. (r + s)v = rv + sv
6. 1v = v

Any element of a vector space is called a vector. The identity element of the group V,+ is called the vector 0. The inverse element of v is called the opposite vector -v. The subtraction v - v' is defined by v - v' = v + (-v'). Examples of real vector spaces are:

- the ordinary vectors in the plane or in space
- the couples of real numbers
- the complex numbers
- the real numbers
- the n-tuples (a,b,c,...,l), with a,b,... in R
- real 2x2 matrices
- polynomials in x, of third degree or lower and with real coefficients

Calculation rules
From the postulates of a vector space one can prove the following calculation rules. They hold for all vectors u,v in V and for all r,s in R.

u + u + u + ... + u = n.u   (n terms on the left side)
0u = 0
r0 = 0
(-r)u = r(-u) = -(ru)
r(u - v) = ru - rv
(r - s)u = ru - su
ru = 0 <=> (r = 0 or u = 0)
(ru = rv with r not zero) => u = v
(ru = su with u not zero) => r = s

Subspaces
definition
M is a subspace of a vector space V if and only if

- M is a non-void subset of V
- M is a real vector space

Criterion
Theorem: A non-void subset M of a vector space V is a vector space if and only if rx + sy is in M for any r,s in R and any x,y in M.

Part 1: first we prove that if M is a vector space, then rx + sy is in M for any r,s in R and any x,y in M. Indeed, if M is a vector space then, from the postulates, rx and sy are in M and therefore rx + sy is in M.

Part 2: we prove that if rx + sy is in M for any r,s in R and any x,y in M, then M is a vector space.

- Since rx + sy is in M, choose r = s = 1. So, x + y is in M.
- Since the associativity holds in V, it holds in M.
- Since rx + sy is in M, choose r = 1, s = -1 and y = x. So, 0 is in M.
- Since rx + sy is in M, choose r = -1, s = 0. So, -x is in M.
- Since the commutativity holds in V, it holds in M.
- Since rx + sy is in M, choose s = 0. So, rx is in M.
- The properties of scalar multiplication hold because they hold in V.

Q.E.D.

Example 1
V is the vector space of all polynomials in x. M is the set of all the polynomials in x of second degree or lower.

We investigate whether or not M is a subspace of V. To this end we choose r, s at random in R and ax^2+bx+c and dx^2+ex+f as random elements in M.

M is a subspace of V
<=> r(ax^2+bx+c) + s(dx^2+ex+f) is in M
<=> (ra+sd)x^2 + (rb+se)x + (rc+sf) is in M

Since the last claim is true, M is a subspace of V.

Example 2
V is the vector space of all polynomials in x. M is the set of all the polynomials in x divisible by (x-2). We investigate whether or not M is a subspace of V. To this end we choose r, s at random in R and (x-2)p(x) and (x-2)q(x) as random elements in M.
M is a subspace of V
<=> r(x-2)p(x) + s(x-2)q(x) is in M
<=> (x-2)(r p(x) + s q(x)) is in M

Since the last claim is true, M is a subspace of V.

Example 3
V is the vector space of all couples of real numbers. M = { (x,y) | x,y in R and x + y = 1 }. One can follow the method of the previous examples, but it can be done more quickly. Since (0,0) is not in M, M is not a vector space. So, M is not a subspace of V.

Example 4
V is the vector space of all 2x2 matrices. M is the set of all the regular 2x2 matrices. We investigate whether or not M is a subspace of V. To this end we choose r, s at random in R and A and B as regular 2x2 matrices.

M is a subspace of V <=> rA + sB is in M

The last claim is false: for r = s = 0 we have 0.A + 0.B = 0, and the 0-matrix is not regular, so it is not in M. Hence M is not a subspace of V.
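For readers who like to experiment, here is a small Python sketch (the helper names are our own) that spot-checks the criterion rx + sy in M on randomly chosen elements. Passing such a check is numerical evidence, not a proof.

    import numpy as np

    rng = np.random.default_rng(0)

    def spot_check(random_member, is_member, trials=100):
        # Criterion: rx + sy must stay in M for random r,s in R and x,y in M.
        return all(
            is_member(r * random_member() + s * random_member())
            for r, s in rng.normal(size=(trials, 2))
        )

    # Example 2: polynomials divisible by (x - 2), stored as numpy coefficient
    # arrays (highest degree first).  A random member is (x - 2)*q(x).
    random_member = lambda: np.polymul([1.0, -2.0], rng.normal(size=3))
    is_member = lambda p: np.isclose(np.polyval(p, 2.0), 0.0)   # p(2) = 0
    print(spot_check(random_member, is_member))   # True

Example 4 fails the same check immediately: with r = s = 0 the combination of two regular matrices is the zero matrix, which is singular.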

Intersection of two spaces


Theorem: The intersection of two subspaces M and N of V is itself a subspace of V. Proof:

Since 0 is in M and in N, 0 is in the intersection. For any r,s in R and any x,y in the intersection of M and N, we can state: (rx + sy is in M) and (rx + sy is in N); so it is in the intersection. Appealing to the previous criterion, the intersection of M and N is a subspace of V.

Example
V is the vector space of all polynomials in x. M is the subspace of all the polynomials in x divisible by (x-2). N is the subspace of all the polynomials in x divisible by (x-1). The intersection I of M and N is the set of all the polynomials in x divisible by (x-1)(x-2). I is a subspace of V.

Generators of a vector space


Linear combinations of vectors
Take from a vector space V the vectors a,b,c,d,...,l. Take as many real numbers r,s,t,...,z. Then we call ra + sb + tc + ... + zl a linear combination of the vectors a,b,c,d,...,l.

Generating a vector space


Let D = { a,b,c,d,...,l } be a fixed set of vectors from V. Let M be the set of all possible linear combinations of the vectors of D. We'll show that M is a vector space. Indeed, appealing to the previous criterion, take two vectors u,v from M. For any r,s in R, ru + sv is a linear combination of two linear combinations of a,b,c,d,...,l. So ru + sv is itself a linear combination of a,b,c,d,...,l, and therefore ru + sv is in M.

Since each vector space containing the vectors a,b,c,d,...,l must contain each linear combination of these vectors, M is the 'smallest' vector space generated by D.

Conclusions and definitions: All linear combinations of vectors of D = { a,b,c,d,...,l } generate a vector space M. The elements of D are called generators of M. M is called the vector space spanned by D. The vector space spanned by the vectors a,b,c,d,...,l is denoted span(D). It is the smallest vector space containing the set D.

Example 1
V is the vector space R^3 = R x R x R. D = { (2,3,0) , (-1,4,0) }. M = span(D) = { (x,y,0) | x,y in R }.

Example 2
V is the vector space of all row matrices [a,b,c] with a,b,c in R. D = { [1,1,1] }. M = span(D) = { [r,r,r] | r in R }.
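In a concrete space such as R^3, membership of span(D) can be tested mechanically with ranks: v lies in span(D) exactly when appending v to D does not increase the rank. A sketch in Python (the helper name is ours):

    import numpy as np

    def in_span(generators, v, tol=1e-10):
        # v is in span(D) iff adding v to D does not increase the rank.
        D = np.array(generators, dtype=float)
        E = np.vstack([D, v])
        return np.linalg.matrix_rank(E, tol) == np.linalg.matrix_rank(D, tol)

    D = [(2, 3, 0), (-1, 4, 0)]        # Example 1
    print(in_span(D, (1, 7, 0)))       # True: (1,7,0) lies in the plane z = 0
    print(in_span(D, (0, 0, 1)))       # False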

Properties of a generating set


Say D is a subset of vector space V and M = span(D). It is easy to see that:

- If we add a vector from M to the set D, then still M = span(D).
- Suppose there is a vector in D that is a linear combination of the other vectors in D. If we remove that vector from D, then still M = span(D).
- If we multiply a vector from D by a real number (not 0), then still M = span(D).
- If we multiply a vector from D by a real number and add the result to another vector of D, then still M = span(D).

Examples:
V is the vector space of all row matrices [a,b,c,d] with a,b,c,d in R.
D = { [1,0,0,0] , [0,1,0,0] , [2,3,1,0] , [2,4,1,0] }
M = span(D) = span( [1,0,0,0] , [0,1,0,0] , [2,3,1,0] , [2,4,1,0] )

3 [1,0,0,0] + 4 [0,1,0,0] = [3,4,0,0] is in M.
So, M = span( [1,0,0,0] , [0,1,0,0] , [2,3,1,0] , [2,4,1,0] , [3,4,0,0] ).

[2,4,1,0] is a linear combination of the other vectors in D because [2,4,1,0] = 0 [1,0,0,0] + [0,1,0,0] + [2,3,1,0].
So, M = span( [1,0,0,0] , [0,1,0,0] , [2,3,1,0] ).

M = span( [17,0,0,0] , [0,1,0,0] , [2,3,1,0] , [2,4,1,0] ).
M = span( [1,0,0,0] , [0,1,0,0] - [2,3,1,0] , [2,3,1,0] , [2,4,1,0] ).

Linear dependence
Linearly dependent vectors
A set D of vectors is called dependent if there is at least one vector in D that can be written as a linear combination of the other vectors of D. A set of one vector is called dependent if and only if that vector is the vector 0.

Example:
V is the vector space of all row matrices [a,b,c,d] with a,b,c,d in R. D = { [1,0,0,0] , [0,1,0,0] , [2,3,1,0] , [-2,0,-1,0] }

The vectors in D are dependent because [-2,0,-1,0] = 0 [1,0,0,0] + 3 [0,1,0,0] - [2,3,1,0].

Linearly independent vectors


A set D of vectors is called independent if and only if it is not a dependent set. Such a set is called a free set of vectors.

Criterion for linearly dependent vectors


Theorem: Take a set D = {a,b,c,...,l} of (more than one) vectors from a vector space V. That set D is linearly dependent if and only if there is a suitable set of real numbers r,s,t,...,z, not all zero, such that ra + sb + tc + ... + zl = 0.

Proof:
Part 1: If the set D is dependent, there is at least one vector in D, say b, that is a linear combination of the other vectors of D. Then
b = ra + tc + ... + zl <=> ra + (-1)b + tc + ... + zl = 0
So, there is a suitable set of real numbers r, s = -1, t, ..., z, not all zero, such that ra + sb + tc + ... + zl = 0.

Part 2: If there is a suitable set of real numbers r,s,t,...,z, not all zero, such that ra + sb + tc + ... + zl = 0, then we can choose a nonzero coefficient, say s, and then ra + tc + ... + zl = -sb. Dividing both sides by (-s), we see that b is a linear combination of the other vectors of D. So, D is a dependent set.

Example:
The vectors [-12, 17, 14] , [10, -7, 8] , [-11, 3, 24] are linearly dependent
<=> there are real numbers r,s,t, not all zero, such that
r [-12, 17, 14] + s [10, -7, 8] + t [-11, 3, 24] = [0,0,0]
<=> there are real numbers r,s,t, not all zero, such that
/ -12 r + 10 s - 11 t = 0
|  17 r -  7 s +  3 t = 0
\  14 r +  8 s + 24 t = 0
<=> (relying on the theory of systems of linear equations)
| -12  10  -11 |
|  17  -7    3 |  = 0
|  14   8   24 |
<=> -3930 = 0

Since the last statement is false, the three vectors are not linearly dependent.
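A quick numerical cross-check of this determinant test (a sketch with numpy; the three vectors are placed in the columns):

    import numpy as np

    # The vectors as columns; they are dependent iff the determinant is 0.
    A = np.array([[-12, 10, -11],
                  [ 17, -7,   3],
                  [ 14,  8,  24]])
    print(np.linalg.det(A))   # approximately -3930: nonzero, so independent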

Corollary:
Take a set D = {a,b,c,...,l} of (more than one) vectors from a vector space V. That set D is linearly independent if and only if
ra + sb + tc + ... + zl = 0 => r = s = t = ... = z = 0

Second criterion for linearly dependent vectors


Take an ordered set D = {a,b,c,...,l} of (more than one) vectors from a vector space V. That set D is linearly dependent if and only if there is at least one vector that is a linear combination of the PREVIOUS vectors in D.

Proof:
Part 1: If the set D is linearly dependent, we know from the first criterion that there is a suitable set of real numbers r,s,t,...,v,w,...,z, not all zero, such that ra + sb + tc + ... + vi + wj + ... + zl = 0. Say w is the last non-zero coefficient; then
ra + sb + tc + ... + vi + wj = 0 <=> -wj = ra + sb + tc + ... + vi
Dividing both sides by (-w), we see that the vector j is a linear combination of the PREVIOUS vectors of D.

Part 2: If a vector of D is a linear combination of the PREVIOUS vectors of D, then it is a linear combination of the other vectors of D (with coefficients 0 for the following vectors). Thus, D is a linearly dependent set.

Example: We investigate whether the vectors [1, 0, -13] , [2, 17, 0] , [12, 7, 0] are independent. The second vector is not a linear combination of the previous one, so the first two vectors are independent. The third vector is not a linear combination of the previous vectors because
r [1, 0, -13] + s [2, 17, 0] = [12, 7, 0]
<=>
/    r + 2s = 12
|   17s = 7
\  -13r = 0

We see immediately that this system has no solution. The three vectors are linearly independent. It is a free set.
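The same conclusion can be checked numerically with a least-squares solve; a nonzero residual means the system above has no exact solution. A sketch:

    import numpy as np

    # Is [12, 7, 0] a linear combination of [1, 0, -13] and [2, 17, 0]?
    A = np.array([[1, 0, -13], [2, 17, 0]], dtype=float).T  # previous vectors as columns
    b = np.array([12, 7, 0], dtype=float)
    coeffs, residual, rank, _ = np.linalg.lstsq(A, b, rcond=None)
    print(residual)   # nonzero residual: no exact solution, so the set is free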

Basis and dimension of a vector space


Minimal generating set and basis
Say M = span(D). D is a generating set of M. We know that if there is a vector in D that is a linear combination of the other vectors in D, and if we remove that vector from D, then still M = span(D). Now remove, one after another, the vectors of D that are a linear combination of the others. For the remaining set D', M = span(D') still holds, but the vectors in D' are now linearly independent. D' is a free set that spans M. Such a minimal generating set of M is called a basis of M. In this introduction, we restrict the theory to vector spaces with a finite basis.

Coordinates in a vector space


Say D = (a,b,c,...,l) is an ordered basis of M. Each vector v in M can be written as a linear combination of the vectors in D. Assume v can be written in two ways as a linear combination of the vectors in D; then we have
v = ra + sb + tc + ... + zl = r'a + s'b + t'c + ... + z'l
and then
(r - r')a + (s - s')b + (t - t')c + ... + (z - z')l = 0
But, appealing to the criterion of linearly independent vectors, all the coefficients must be 0. So, r = r' ; s = s' ; ...

Conclusion: Each vector v of M is uniquely expressible as a linear combination of the vectors of the ordered basis D. The unique coefficients are called the coordinates of v with respect to D. We write co(v) = (r,s,t,...,z) or v(r,s,t,...,z). Mind the difference: v(2,4,-3) is the vector v with coordinates (2,4,-3), but v = (2,4,-3) means that the vector v is equal to the vector (2,4,-3).
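For a concrete space such as R^3, the coordinates of v with respect to an ordered basis are found by solving a linear system. A sketch with numpy (the basis below is our own choice):

    import numpy as np

    # Columns of B are the ordered basis vectors a, b, c of R^3.
    B = np.column_stack([(1, 1, 0), (0, 1, 1), (1, 0, 1)])
    v = np.array([2.0, 4.0, -3.0])

    # co(v) is the unique solution of B.co = v; uniqueness reflects the
    # independence of the basis vectors.
    co = np.linalg.solve(B, v)
    print(co)
    assert np.allclose(B @ co, v)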

Properties of coordinates
It is easy to verify that
co(a + b) = co(a) + co(b)
co(ra) = r co(a)
with a,b in M and r in R.

Two bases of V have exactly the same number of elements

Suppose there are two bases, B1 and B2, with a different number of elements. Assume B1 = {a,b,c,d} and B2 = {u,v,w}. Then span(B2) = V.

Now, V = span{d,u,v,w} and {d,u,v,w} is a linearly dependent set. It contains at least one vector, say v, that is a linear combination of the previous vectors. We can omit this vector, and then V = span{d,u,w}.

Again, V = span{c,d,u,w} and {c,d,u,w} is a linearly dependent set. It contains at least one vector, say w, that is a linear combination of the previous vectors. That vector can't be d, because c and d are independent (as part of a basis). We can omit this vector, and then V = span{c,d,u}.

Again, V = span{b,c,d,u} and {b,c,d,u} is a linearly dependent set. It contains at least one vector that is a linear combination of the previous vectors. That vector can't be b, c or d, because b, c and d are independent (as part of a basis). That vector must be u! We can omit this vector, and then V = span{b,c,d}.

Again, V = span{a,b,c,d} and {a,b,c,d} is a linearly dependent set. This is impossible because it is a basis. From all this we see that it is impossible that two bases, B1 and B2, have a different number of elements.

Dimension of a vector space


Since every basis of a vector space has the same number of vectors, that number n is characteristic for that space and is called the dimension of that space. We write dim(V) = n. Note that if D spans V, then a maximal linearly independent subset of D forms a basis of V.

Corollary
If dim(V) = n, then

- every set that spans V has at least n vectors.
- every free set has at most n vectors.
- each set of n vectors that spans V is a basis.
- each free set of n vectors is a basis.

Example
Let V be the vector space R^3. An obvious basis is ((1,0,0) , (0,1,0) , (0,0,1)), so dim(V) = 3. Each basis consists of three vectors, but three random vectors do not always constitute a basis. Take the three vectors ( (2+m,m,m) , (n,2,n) , (2,1,-4) ). We search for the necessary and sufficient condition on m and n such that these three vectors are not a basis of R^3.
(2+m,m,m) , (n,2,n) , (2,1,-4) are not a basis
<=> (2+m,m,m) , (n,2,n) , (2,1,-4) are linearly dependent
<=> there are r, s and t, not all zero, such that r(2+m,m,m) + s(n,2,n) + t(2,1,-4) = 0
<=> the following system has a solution different from (0,0,0)
/ (2+m)r + n s + 2t = 0
|    m r + 2 s +  t = 0
\    m r + n s - 4t = 0
<=>
| 2+m  n   2 |
|  m   2   1 |  = 0
|  m   n  -4 |
<=> 6mn - 12m - 2n - 16 = 0

The three vectors are not a basis of V if and only if the latter condition is fulfilled.

Example
(1,2,5) and (-1,1,3) are two vectors of R^3. Choose another vector of R^3 such that the three vectors form a basis of R^3. We try the simple vector (1,0,0). As in the previous example we have:
(1,0,0) , (1,2,5) and (-1,1,3) constitute a basis
<=> (1,0,0) , (1,2,5) and (-1,1,3) are linearly independent
<=> ...
<=>
|  1  0  0 |
|  1  2  5 |  is not zero
| -1  1  3 |

When we expand the determinant along the first row, we see immediately that the determinant is 1. So, (1,0,0) , (1,2,5) and (-1,1,3) constitute a basis.
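The condition from the first example can be reproduced symbolically; a sketch with sympy (vectors placed in the columns):

    import sympy as sp

    m, n = sp.symbols('m n')
    A = sp.Matrix([[2 + m, n,  2],
                   [m,     2,  1],
                   [m,     n, -4]])
    # The three vectors fail to be a basis iff this determinant vanishes.
    print(sp.expand(A.det()))   # 6*m*n - 12*m - 2*n - 16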

Vector spaces and matrices


Row space of a matrix
Say A is an m x n matrix. The rows of that matrix can be viewed as a set D of vectors of the vector space of all n-tuples of real numbers. The space generated by D is called the row space of A. The rows of A are a generating set of the row space. From the properties of generating sets, we have: the row space of A does not change if we

- interchange two rows
- multiply a row by a real number (not zero)
- add a real multiple of a row to another row

So, such a row transformation does not change the row space of A. The dimension of the row space is the number of independent rows of A.

Dimension of a row space


We know that it is possible to transform a matrix A, by suitable row transformations, into a row canonical matrix. Then the non-zero rows are linearly independent and form a basis of the row space. But the number of non-zero rows is the rank of A. Hence, we can say that the rank of A is the dimension of the row space of A. It can be proved that the non-zero rows of the canonical matrix form a unique basis for the row space.

Corollary: the rank of A is the number of linearly independent rows.

Example: We'll find the row space of a matrix A and the unique basis for that row space.
    [ 1  0  2  3 ]
A = [ 1  2  0  1 ]
    [ 1  0  1  0 ]

The rank of A is 3, so there are 3 linearly independent rows. In this example the three rows of A form a basis of the row space. The row space is a three dimensional space with basis ((1 0 2 3),(1 2 0 1),(1 0 1 0)). It is a subspace of R^4. Now, we simplify the matrix A by means of row transformations until we reach the canonical matrix.
R2 - R1:
[ 1  0  2  3 ]
[ 0  2 -2 -2 ]
[ 1  0  1  0 ]

(1/2)R2:
[ 1  0  2  3 ]
[ 0  1 -1 -1 ]
[ 1  0  1  0 ]

R3 - R1:
[ 1  0  2  3 ]
[ 0  1 -1 -1 ]
[ 0  0 -1 -3 ]

(-1)R3:
[ 1  0  2  3 ]
[ 0  1 -1 -1 ]
[ 0  0  1  3 ]

R1 - 2R3:
[ 1  0  0 -3 ]
[ 0  1 -1 -1 ]
[ 0  0  1  3 ]

R2 + R3:
[ 1  0  0 -3 ]
[ 0  1  0  2 ]
[ 0  0  1  3 ]

Now we have the unique basis of the row space: ((1 0 0 -3),(0 1 0 2),(0 0 1 3)).
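The same canonical matrix can be obtained mechanically; a sketch using sympy's reduced row echelon form:

    import sympy as sp

    A = sp.Matrix([[1, 0, 2, 3],
                   [1, 2, 0, 1],
                   [1, 0, 1, 0]])
    R, pivots = A.rref()   # row canonical (reduced row echelon) form
    print(R)               # rows (1,0,0,-3), (0,1,0,2), (0,0,1,3)
    print(A.rank())        # 3 = dimension of the row space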

Column space of a matrix


Say A is an m x n matrix. The columns of that matrix can be viewed as a set D of vectors of the vector space of all m-tuples of real numbers. The space generated by D is called the column space of A. The columns of A are a generating set of the column space. From the properties of generating sets, we have: the column space of A does not change if we

- interchange two columns
- multiply a column by a real number (not zero)
- add a real multiple of a column to another column

So, such a column transformation does not change the column space of A. The dimension of the column space is the number of independent columns of A.

Dimension of a column space


We know that it is possible to transform a matrix A, by suitable column transformations, into a column canonical matrix. Then the non-zero columns are linearly independent and form a basis of the column space. But the number of non-zero columns is the rank of A. Thus, we can say that the rank of A is the dimension of the column space of A. It can be proved that the non-zero columns of the canonical matrix form a unique basis for the column space. Corollary:

- the rank of A is the number of linearly independent columns.
- the column space of A and the row space of A have the same dimension.

Exercise: Take the matrix A from the previous example and find the unique basis of the column space.

Example: Find the values of m such that (the dimension of the row space of A) = 3.

    [ m   1   2 ]
A = [ 3   1   0 ]
    [ 1  -2   1 ]

The dimension of the row space of A = rank of A. The rank of A is 3 if and only if the determinant of A is not zero. The determinant of A is m - 17. Conclusion: (the dimension of the row space of A) = 3 if and only if m is different from 17.

Coordinates and changing a basis


We'll show the properties in a vector space with dimension 3, but they can be extended to vector spaces with dimension n. Take an ordered basis (u,v,w) of V. Then each vector s has coordinates (x,y,z) with respect to this basis. If we take another basis (u',v',w'), then s has other coordinates (x',y',z') with respect to that new basis. Now we'll investigate the link between these two ordered sets of coordinates. We know that s = xu + yv + zw = x'u' + y'v' + z'w'. But the vectors of the new basis (u',v',w') also have coordinates with respect to the old basis (u,v,w).
co(u') = (a,b,c) => u' = au + bv + cw
co(v') = (d,e,f) => v' = du + ev + fw
co(w') = (g,h,i) => w' = gu + hv + iw
Then
s = x'(au + bv + cw) + y'(du + ev + fw) + z'(gu + hv + iw)
  = (ax' + dy' + gz')u + (bx' + ey' + hz')v + (cx' + fy' + iz')w
but from above we also have s = xu + yv + zw.

Therefore, the relation between the coordinates is


x = ax' + dy' + gz'
y = bx' + ey' + hz'
z = cx' + fy' + iz'

These relations can be written in matrix notation.


[x]   [a  d  g] [x']
[y] = [b  e  h].[y']
[z]   [c  f  i] [z']

The matrix
[a  d  g]
[b  e  h]
[c  f  i]
is called the transformation matrix.

The columns of the transformation matrix are the coordinates of the new basis vectors with respect to the old basis.

Example: Say V is the vector space of the ordinary three dimensional space. In that space we take a standard basis e1, e2, e3. They are the unit vectors along the x-axis, y-axis and z-axis. We rotate the three basis vectors around the z-axis by an angle of 90 degrees. Thus we get a new basis u1, u2, u3. The link between the old and the new basis is
u1 =  e2    co(u1) = (0,1,0)
u2 = -e1    co(u2) = (-1,0,0)
u3 =  e3    co(u3) = (0,0,1)

The transformation matrix is
[0  -1  0]
[1   0  0]
[0   0  1]

(x,y,z) are the coordinates of a vector v with respect to the old basis. (x',y',z') are the coordinates of the vector v with respect to the new basis. The connection is
[x]   [0  -1  0] [x']
[y] = [1   0  0].[y']
[z]   [0   0  1] [z']
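A small numpy sketch of this change of basis (the test coordinates are our own):

    import numpy as np

    # Columns = coordinates of the new basis u1, u2, u3 w.r.t. the old one.
    P = np.array([[0, -1, 0],
                  [1,  0, 0],
                  [0,  0, 1]], dtype=float)

    new_coords = np.array([1.0, 2.0, 3.0])      # (x', y', z')
    old_coords = P @ new_coords                 # (x, y, z)
    print(old_coords)                           # [-2.  1.  3.]
    print(np.linalg.solve(P, old_coords))       # back to [1. 2. 3.]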

Vector spaces and systems of linear equations


Vector spaces and homogeneous systems
Take a homogeneous system of linear equations in n unknowns. Each solution of that system can be viewed as a vector from the vector space V of all the real n-tuples. Each real multiple of that solution is a solution too, and the sum of two solutions is a solution too. Therefore, all the solutions of the system form a subspace M of V. It is called the solution space of the system.

Basis of a solution space


By means of an example, we show how a basis of a solution space can be found.
/ 2x + 3y - z + t = 0
\  x -  y + 2z - t = 0

This is a system of the second kind: x and y can be taken as the main unknowns, and z and t are the side (free) unknowns. The solutions are
x = -z + (2/5)t
y =  z - (3/5)t

The set of solutions can be written as
( -z + (2/5)t , z - (3/5)t , z , t )   with z and t in R
<=> z(-1,1,1,0) + t(2/5,-3/5,0,1)   with z and t in R

Hence, all solutions are linear combinations of the linearly independent vectors (-1,1,1,0) and (2/5,-3/5,0,1). These two vectors constitute a basis of the solution space.
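A computer algebra system produces such a basis directly; a sketch with sympy, whose nullspace() returns a basis of the solution space:

    import sympy as sp

    A = sp.Matrix([[2, 3, -1, 1],
                   [1, -1, 2, -1]])
    for v in A.nullspace():   # basis of the solution space
        print(v.T)            # (-1, 1, 1, 0) and (2/5, -3/5, 0, 1)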

Solutions of a non homogeneous system


We can denote such a system shortly as AX = B, with coefficient matrix A, the column matrix B of the known terms, and the column matrix X of the unknowns. Consider also the corresponding homogeneous system AX = 0, with the same A and X as above.

If X' is a fixed solution of AX = B, then AX' = B. If X" is an arbitrary solution of AX = 0, then AX" = 0. Then AX' + AX" = B <=> A(X' + X") = B <=> X' + X" is a solution of AX = B.
Conclusion: If we add an arbitrary solution of AX = 0 to a fixed solution of AX = B, then X' + X" is a solution of AX = B.

Furthermore: If X' is a fixed solution of AX = B, then AX' = B. If X" is an arbitrary solution of AX = B, then AX" = B. Then AX" - AX' = 0 <=> A(X" - X') = 0 <=> X" - X' is a solution of AX = 0 <=> X" = X' + (a solution of AX = 0).
Conclusion: Any arbitrary solution of AX = B is the sum of a fixed solution of AX = B and a solution of AX = 0.

So, if we have a fixed solution of AX = B and we add to it all the solutions of the corresponding homogeneous system, one after another, then we get all the solutions of AX = B.

Example:
/ 2x + 3y - z + t = 0
\  x -  y + 2z - t = 0

Above we have seen that the solutions are z(-1,1,1,0) + t(2/5,-3/5,0,1).


The system
/ 2x + 3y - z + t = 5
\  x -  y + 2z - t = 0

has a solution (1,1,0,0). All solutions of the last system are (1,1,0,0) + z(-1,1,1,0) + t(2/5,-3/5,0,1).
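As a numerical sanity check (a sketch; the tested values of z and t are arbitrary), one can verify that the fixed solution plus any element of the solution space solves the non homogeneous system:

    import numpy as np

    A = np.array([[2, 3, -1, 1],
                  [1, -1, 2, -1]], dtype=float)
    B = np.array([5.0, 0.0])

    X_fixed = np.array([1.0, 1.0, 0.0, 0.0])
    for z, t in [(1.0, 0.0), (0.0, 1.0), (2.0, -3.0)]:
        X = X_fixed + z * np.array([-1.0, 1.0, 1.0, 0.0]) \
                    + t * np.array([0.4, -0.6, 0.0, 1.0])
        assert np.allclose(A @ X, B)
    print("every tested combination solves AX = B")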

Sum of two vector spaces


Say A and B are subspaces of a vector space V. We define the sum of A and B as the set { a + b | a in A and b in B }. We write this sum as A + B.

The sum as subspace


Theorem: The sum A+B, as defined above, is a subspace of V.
Proof: For all a1 and a2 in A, all b1 and b2 in B and all r, s in R we have
r(a1 + b1) + s(a2 + b2) = (r a1 + s a2) + (r b1 + s b2), which is in A + B.

Direct sum of two vector spaces


The sum A+B, as defined above, is a direct sum if and only if the vector 0 is the only vector common to A and B.

Example
In the space R^3:
A = span{ (3,2,1) }
B = span{ (2,1,4) , (0,1,3) }
Investigate whether A+B is a direct sum.

Say r,s,t are real numbers; then each vector in the space A is of the form r(3,2,1) and each vector in the space B is of the form s(2,1,4) + t(0,1,3). For each common vector, there are suitable r,s,t such that
r(3,2,1) = s(2,1,4) + t(0,1,3)
<=> r(3,2,1) - s(2,1,4) - t(0,1,3) = (0,0,0)
<=>
/ 3r - 2s      = 0
| 2r -  s -  t = 0
\  r - 4s - 3t = 0

Since
| 3  -2   0 |
| 2  -1  -1 |
| 1  -4  -3 |
is not 0, the previous system has only the solution r = s = t = 0.

The vector (0,0,0) is the only common vector of A and B. Thus, A+B is a direct sum.
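A numerical cross-check (a sketch; it uses the fact, proved below, that for a direct sum the dimensions add, so no dimension may be 'lost' when the generators are combined):

    import numpy as np

    gens_A = np.array([[3, 2, 1]], dtype=float)
    gens_B = np.array([[2, 1, 4], [0, 1, 3]], dtype=float)

    dim_A = np.linalg.matrix_rank(gens_A)                         # 1
    dim_B = np.linalg.matrix_rank(gens_B)                         # 2
    dim_sum = np.linalg.matrix_rank(np.vstack([gens_A, gens_B]))  # 3

    # The sum is direct exactly when dim(A+B) = dim(A) + dim(B).
    print(dim_sum == dim_A + dim_B)   # True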

Property of direct sum


If A + B is a direct sum, then each vector v in A+B can be written, in just one way, as the sum of an element of A and an element of B.

Proof:
Suppose v = a1 + b1 = a2 + b2, with ai in A and bi in B. Then

a1 - a2 = b2 - b1 and a1 - a2 is in A and b2 - b1 is in B

Therefore a1 - a2 = b2 - b1 is a common element of A and B. But the only common element is 0. So, a1 - a2 = 0 and b2 - b1 = 0, and hence a1 = a2 and b1 = b2.

Supplementary vector spaces


Say that vector space V is the direct sum of A and B, then
- A is the supplementary vector space of B with respect to V.
- B is the supplementary vector space of A with respect to V.
- A and B are supplementary vector spaces with respect to V.

Basis and direct sum


Theorem: Say V is the direct sum of the spaces M and N. If {a,b,c,..,l } is a basis of M and {a',b',c',..,l' } is a basis of N, then {a,b,c,..,l,a',b',c',..,l' } is a basis of M+N. Proof:

Each vector v of V can be written as m + n with m in M and n in N. Then m = ra + sb + tc + ... + zl and n = r'a' + s'b' + t'c' + ... + z'l', with real coefficients r,s,t,...,z,r',s',t',...,z'. Thus each vector v = ra + sb + tc + ... + zl + r'a' + s'b' + t'c' + ... + z'l'. Therefore the set {a,b,c,...,l,a',b',c',...,l'} generates V.

If ra + sb + tc + ... + zl + r'a' + s'b' + t'c' + ... + z'l' = 0, then ra + sb + tc + ... + zl is a vector m of M and r'a' + s'b' + t'c' + ... + z'l' is a vector n of N. From a previous theorem we know that the vector 0 can be written in just one way as the sum of an element of M and an element of N. That way is 0 = 0 + 0, with 0 in M and 0 in N. From this we see that necessarily m = 0 and n = 0, and thus ra + sb + tc + ... + zl = 0 and r'a' + s'b' + t'c' + ... + z'l' = 0. Since the vectors in each of these expressions are linearly independent, all coefficients are necessarily 0, and from this we conclude that the generating vectors {a,b,c,...,l,a',b',c',...,l'} are linearly independent.

Dimension of a direct sum


From previous theorem it follows that dim(A+B) = dim(A) + dim(B)

Converse theorem
If {a,b,c,...,l} is a basis of M, {a',b',c',...,l'} is a basis of N, and the vectors {a,b,c,...,l,a',b',c',...,l'} are linearly independent, then M+N is a direct sum.

Proof: Each element m of M can be written as ra + sb + tc + ... + zl. Each element n of N can be written as r'a' + s'b' + t'c' + ... + z'l'. For a common element we have
ra + sb + tc + ... + zl = r'a' + s'b' + t'c' + ... + z'l'

<=> ra + sb + tc + ... + zl - r'a' - s'b' - t'c' - ... - z'l' = 0

Since all these vectors are linearly independent, all coefficients must be 0. The only common vector is 0.

Direct sum criterion

From the two previous theorems we deduce that, if {a,b,c,...,l} is a basis of M and {a',b',c',...,l'} is a basis of N, then

M+N is a direct sum <=> {a,b,c,...,l,a',b',c',...,l'} is a linearly independent set

Projection in a vector space


Choose two supplementary subspaces M and N with respect to the space V. Each vector v of V can be written in exactly one way as the sum of an element m of M and an element n of N: v = m + n. Now we can define the transformation
p : V --> V : v --> m
We call this transformation the projection of V on M with respect to N.

Projection, example
V is the space of all polynomials of degree not greater than 3. We define two supplementary subspaces
M = span{ 1, x }
N = span{ x^2, x^3 }
Each vector of V is the sum of exactly one vector of M and one of N, e.g.
2x^3 - x^2 + 4x - 7 = (2x^3 - x^2) + (4x - 7)
Say p is the projection of V on M with respect to N; then
p(2x^3 - x^2 + 4x - 7) = 4x - 7

Say q is the projection of V on N with respect to M, then


q(2x^3 - x^2 + 4x - 7) = 2x^3 - x^2
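In coordinates with respect to the ordered basis (1, x, x^2, x^3), which splits into a basis of M and a basis of N, these projections are simple coordinate masks. A minimal sketch:

    import numpy as np

    # A polynomial a + bx + cx^2 + dx^3 is stored as (a, b, c, d).
    v = np.array([-7.0, 4.0, -1.0, 2.0])        # 2x^3 - x^2 + 4x - 7

    p = lambda u: u * np.array([1, 1, 0, 0])    # projection on M = span{1, x}
    q = lambda u: u * np.array([0, 0, 1, 1])    # projection on N = span{x^2, x^3}

    print(p(v))                         # [-7.  4.  0.  0.], i.e. 4x - 7
    print(q(v))                         # [ 0.  0. -1.  2.], i.e. 2x^3 - x^2
    assert np.allclose(p(v) + q(v), v)  # v = m + n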

To create the matrix of a projection see chapter: linear transformations.

Similarity transformation of a vector space


Let r be any constant real number. In a vector space V we define the transformation

h : V --> V : v --> r.v

We say that h is a similarity transformation of V with factor r. Important special values of r are 0, 1 and -1.

Reflection in a vector space


Choose two supplementary subspaces M and N with respect to the space V. Each vector v of V is the sum of exactly one vector m of M and one vector n of N. Now we define the transformation
s : V --> V : v --> m - n

We say that s is the reflection of V in M with respect to N. This definition is a generalization of the ordinary reflection in a plane. Indeed, if you take the ordinary vectors in a plane and if M and N are one dimensional supplementary subspaces, then you'll see that, with the previous definition, s becomes the ordinary reflection in M with respect to the direction given by N.

Example of a reflection
Take V = R^4.
M = span{ (0,1,3,1) , (1,0,-1,0) }
N = span{ (0,0,0,1) , (3,2,1,0) }

It is easy to show that M and N have only the vector 0 in common. (This is left as an exercise.) So, M and N are supplementary subspaces. Now we'll calculate the image of the reflection of vector v = (4,3,3,1) in M with respect to N. First we write v as the sum of exactly one vector m of M and n of N.
(4,3,3,1) = x.(0,1,3,1) + y.(1,0,-1,0) + z.(0,0,0,1) + t.(3,2,1,0)

The solution of this system gives x = 1; y = 1; z = 0; t = 1. The unique representation of v is


(4,3,3,1) = (1,1,2,1) + (3,2,1,0)

The image of the reflection of vector v = (4,3,3,1) in M with respect to N is vector v' =
(1,1,2,1) - (3,2,1,0) = (-2,-1,1,1)
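A numpy sketch of this computation:

    import numpy as np

    # Columns: the generators of M (first two) and of N (last two).
    C = np.column_stack([(0, 1, 3, 1), (1, 0, -1, 0),
                         (0, 0, 0, 1), (3, 2, 1, 0)]).astype(float)
    v = np.array([4.0, 3.0, 3.0, 1.0])

    x, y, z, t = np.linalg.solve(C, v)   # x = 1, y = 1, z = 0, t = 1
    m = x * C[:, 0] + y * C[:, 1]        # component in M: (1, 1, 2, 1)
    n = z * C[:, 2] + t * C[:, 3]        # component in N: (3, 2, 1, 0)
    print(m - n)                         # reflection image: [-2. -1.  1.  1.]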

To create the matrix of a reflection see chapter: linear transformations.
