
REAL VECTOR SPACES

Let V be an arbitrary nonempty set of objects.


Two operations are defined on V: addition and multiplication by scalars.

Addition and scalar multiplication.

Let u and v be objects in V and let k be a scalar.
• The sum of u and v is denoted by u + v
• The scalar multiple of u by k is denoted by ku

Axioms
1. If u and v are objects in V, then u + v is in V
2. u + v = v + u
3. u + (v + w) = (u + v) + w
4. There is an object 0 in V, called the zero vector for V, such that 0 + u = u + 0 = u for all u in V
5. For each u in V, there is an object –u in V, called the negative of u, such that u + (–u) = (–u) + u = 0
6. If k is a scalar and u is any object in V, then ku is in V
7. k(u + v) = ku + kv
8. (k + l)u = ku + lu
9. k(lu) = (kl)u
10. 1u = u

If all the axioms above are satisfied by all objects u, v, and w in V and all scalars k and l, then we call V a
vector space and we call the objects in V vectors.

Vector spaces in which the scalars are complex numbers are called complex vector spaces.
Vector spaces in which the scalars must be real are called real vector spaces.

Examples of vector spaces.

1. The set V = Rn with the standard operations of addition and scalar multiplication defined earlier is a
vector space (a small numerical check follows this list).
2. For n = 1, R1 is the set of real numbers; for n = 2, R2 is the set of vectors in the plane; for n = 3, R3 is the set of vectors in 3-space.
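
As a quick sanity check, the Python sketch below (my own example; the vectors, the scalars, and the use of numpy are arbitrary choices, not part of the notes) verifies several of the axioms numerically for sample vectors in R3. A numeric spot check of course does not prove the axioms in general.

```python
import numpy as np

# Spot-check a few vector space axioms for R^n with the standard operations.
# The sample vectors u, v, w and scalars k, l below are arbitrary choices.
u = np.array([1.0, -2.0, 3.0])
v = np.array([4.0, 0.5, -1.0])
w = np.array([-2.0, 2.0, 2.0])
k, l = 3.0, -0.5

assert np.allclose(u + v, v + u)                  # axiom 2: commutativity
assert np.allclose(u + (v + w), (u + v) + w)      # axiom 3: associativity
assert np.allclose(k * (u + v), k * u + k * v)    # axiom 7: distributivity
assert np.allclose((k + l) * u, k * u + l * u)    # axiom 8
assert np.allclose(k * (l * u), (k * l) * u)      # axiom 9
assert np.allclose(1.0 * u, u)                    # axiom 10
print("All sampled axioms hold for these vectors.")
```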

Subspace.

Definition. A subset W of a vector space V is called a subspace of V if W is itself a vector space under
the addition and scalar multiplication defined on V.

Axioms 2, 3, 7, 8, 9, and 10 are inherited by W from V, so to show that a set W is a subspace of a
vector space V, we need only verify axioms 1, 4, 5, and 6.

Theorem.

If W is a set of one or more vectors from a vector space V, then W is a subspace of V if and only if the
following conditions hold.
a. If u and v are vectors in W, then u + v is in W
b. If k is any scalar and u is any vector in W, then ku is in W.

A set W of one or more vectors from a vector space V is said to be closed under addition if, whenever u and v are
vectors in W, the sum u + v is also in W, and closed under scalar multiplication if, whenever k is any scalar
and u is any vector in W, the product ku is also in W.

In other words: W is a subspace of V if and only if W is closed under addition and closed under scalar
multiplication.
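
The sketch below illustrates the closure test on a concrete candidate, W = {(x, y, z) in R3 : x + y + z = 0}. The set W, the membership test, and the sample vectors are assumptions made for this illustration; a numeric check on samples cannot replace the general algebraic argument.

```python
import numpy as np

# Illustrative closure test for W = {(x, y, z) in R^3 : x + y + z = 0}.
def in_W(x):
    return np.isclose(np.sum(x), 0.0)

u = np.array([1.0, 2.0, -3.0])   # in W
v = np.array([-4.0, 1.0, 3.0])   # in W
k = 2.5

assert in_W(u) and in_W(v)
assert in_W(u + v)       # closed under addition (for this sample)
assert in_W(k * u)       # closed under scalar multiplication (for this sample)
print("Closure holds for the sampled vectors.")
```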

Examples
Solution Spaces of Homogeneous Systems.

Let Ax = b be a system of linear equations.

A vector x that satisfies the equation is called a solution vector.
The solution vectors of a homogeneous linear system Ax = 0 form a vector space, called the solution space of the system.

Theorem.
If Ax = 0 is a homogeneous linear system of m equations in n unknowns, then the set of solution vectors
is a subspace of Rn.

Proof. Let W be the set of solution vectors. There is at least one vector in W, namely 0. If x and x’ are
solution vectors and k is any scalar, then
Ax = 0 and Ax’ = 0
A(x + x’) = Ax + Ax’ = 0 + 0 = 0
A(kx) = kAx = k0 = 0

So x + x’ and kx are solution vectors, and therefore W is a subspace of Rn.
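
As an illustration of this theorem, the sketch below (an arbitrary example matrix of my own choosing; sympy is assumed to be available) computes a basis of the solution space of Ax = 0 and spot-checks that linear combinations of solutions are again solutions.

```python
import sympy as sp

# Example: the solution set of Ax = 0 is a subspace of R^n.
# The matrix A below is an arbitrary example, not one from the notes.
A = sp.Matrix([[1, 2, -1, 3],
               [2, 4, -2, 6],
               [0, 1,  1, 1]])

basis = A.nullspace()              # basis vectors of the solution space
print("nullity =", len(basis))
for b in basis:
    assert A * b == sp.zeros(A.rows, 1)   # each basis vector solves Ax = 0

# Closure spot check: a linear combination of solutions is again a solution.
x = 3 * basis[0] + (-2) * basis[1] if len(basis) > 1 else 3 * basis[0]
assert A * x == sp.zeros(A.rows, 1)
```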

Example.

Definition. A vector w is called a linear combination of the vectors v1, v2, …, vr if it can be expressed in
the form w = k1v1 + k2v2 + … + krvr, where k1, k2, …, kr are scalars.
If r = 1, the equation becomes w = k1v1; that is, w is a linear combination of a single vector v1 if it
is a scalar multiple of v1.
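
In Rn, deciding whether w is a linear combination of v1, v2, …, vr amounts to solving the linear system whose coefficient matrix has v1, …, vr as its columns. The sketch below uses arbitrary example vectors (my own choice) and numpy's least-squares solver.

```python
import numpy as np

# Is w a linear combination of v1 and v2?  Solve [v1 v2] k = w for k1, k2.
# The vectors here are arbitrary examples, not ones taken from the notes.
v1 = np.array([1.0, 2.0, 3.0])
v2 = np.array([0.0, 1.0, 1.0])
w  = np.array([2.0, 5.0, 7.0])           # equals 2*v1 + 1*v2

V = np.column_stack([v1, v2])            # coefficient matrix with v1, v2 as columns
k, residual, rank, _ = np.linalg.lstsq(V, w, rcond=None)

if np.allclose(V @ k, w):
    print("w is a linear combination with coefficients", k)   # ~ [2.0, 1.0]
else:
    print("w is not in the span of v1 and v2")
```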

Theorem.
If v1, v2, …, vr are vectors in a vector space V, then
a. The set W of all linear combinations of v1, v2, …, vr is a subspace of V.
b. W is the smallest subspace of V that contains v1, v2, …, vr, in the sense that every other subspace of V that contains v1, v2, …, vr must contain W.

Proof.
There is at least one vector in W, namely 0, since 0 = 0v1 + 0v2 + … + 0vr.
If u and v are any vectors in W and k is any scalar, then
u = c1v1 + c2v2 + … + crvr
v = k1v1 + k2v2 + … + krvr
so that
u + v = (c1 + k1)v1 + (c2 + k2)v2 + … + (cr + kr)vr
ku = kc1v1 + kc2v2 + … + kcrvr

Both are again linear combinations of v1, v2, …, vr, therefore W is closed under addition and scalar multiplication.

Definition.

If S = {v1, v2, …, vr} is a set of vectors in a vector space V, then the subspace W of V consisting of all linear
combinations of the vectors in S is called the space spanned by v1, v2, …, vr, and we say that the vectors
v1, v2, …, vr span W. We write W = span(S) or W = span(v1, v2, …, vr).

Example.

Theorem.

If S = {v1, v2, …, vr} and S’ = {w1, w2, …, wk} are two sets of vectors in a vector space V, then
span{v1, v2, …, vr} = span{w1, w2, …, wk} if and only if each vector in S is a linear combination of
those in S’ and each vector in S’ is a linear combination of those in S.
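
One practical way to test this in Rn: span(S) = span(S’) exactly when the rank of the matrix built from S, the rank of the matrix built from S’, and the rank of the two sets combined are all equal. The sketch below checks this on arbitrary example vectors of my own choosing, using numpy.

```python
import numpy as np

# Test span(S) = span(S') via ranks: equal spans  <=>  rank(S) = rank(S') = rank([S S']).
S  = np.array([[1.0, 0.0, 1.0],
               [0.0, 1.0, 1.0]]).T        # columns v1, v2
S2 = np.array([[1.0, 1.0, 2.0],
               [1.0, -1.0, 0.0]]).T       # columns w1 = v1 + v2, w2 = v1 - v2

r1  = np.linalg.matrix_rank(S)
r2  = np.linalg.matrix_rank(S2)
r12 = np.linalg.matrix_rank(np.hstack([S, S2]))

print("same span:", r1 == r2 == r12)      # -> True for this example
```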

Example

Geometric interpretation of Linear Independence.

• In R2 or R3, a set of two vectors is linearly independent if and only if the vectors do not lie on
the same line when they are placed with their initial points at the origin.
• In R3, a set of three vectors is linearly independent if and only if the vectors do not lie in the same
plane when they are placed with their initial points at the origin.

Linear independence of functions

If f1 = f1(x), f2 = f2(x), …, fn = fn(x) are functions that are (n − 1) times differentiable on the interval (−∞, ∞),
then the determinant

$$
W(x) = \begin{vmatrix}
f_1(x) & f_2(x) & \cdots & f_n(x) \\
f_1'(x) & f_2'(x) & \cdots & f_n'(x) \\
\vdots & \vdots & & \vdots \\
f_1^{(n-1)}(x) & f_2^{(n-1)}(x) & \cdots & f_n^{(n-1)}(x)
\end{vmatrix}
$$

is called the Wronskian of f1, f2, …, fn.

If the functions f1, f2, …, fn have n − 1 continuous derivatives on the interval (−∞, ∞) and the Wronskian of these
functions is not identically zero on (−∞, ∞), then these functions form a linearly independent set of
vectors in C^(n−1)(−∞, ∞).
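
Below is a sketch of the Wronskian test with sympy; the functions f1 = x, f2 = sin x, f3 = e^x are my own example choices, not ones from the notes.

```python
import sympy as sp

# Wronskian test for linear independence of the example functions x, sin(x), exp(x).
x = sp.symbols('x')
f = [x, sp.sin(x), sp.exp(x)]
n = len(f)

# Matrix whose i-th row holds the i-th derivatives of f1, ..., fn.
M = sp.Matrix([[sp.diff(fj, x, i) for fj in f] for i in range(n)])
W = sp.simplify(M.det())
print(W)   # not identically zero, so these functions are linearly independent
```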

Examples

Basis and dimension

Definition. If V is any vector space and S = {v1, v2,….vn} is a set of vectors in V, then S is called a
basis of V if the following hold.
(1) S is linearly independent
(2) S spans V

Uniqueness of Basis Representation.

If S = {v1, v2, …, vn} is a basis for a vector space V, then every vector v in V can be expressed in the
form v = c1v1 + c2v2 + … + cnvn in exactly one way.

Proof.

Coordinates relative to a basis.

If S = {v1, v2, …, vn} is a basis for a vector space V and v = c1v1 + c2v2 + … + cnvn is the expression for
a vector v in terms of the basis S, then the scalars c1, c2, …, cn are called the coordinates of v relative to
the basis S.

The vector (c1, c2, …, cn) in Rn constructed from these coordinates is called the coordinate vector of v
relative to S, denoted by (v)S = (c1, c2, …, cn).
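
In Rn, finding (v)S amounts to solving the square linear system whose coefficient matrix has the basis vectors as its columns. The sketch below uses an arbitrary example basis of R3 of my own choosing and numpy.

```python
import numpy as np

# Coordinates of v relative to the basis S = {v1, v2, v3}: solve [v1 v2 v3] c = v.
# The basis and v are arbitrary examples for this sketch.
v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([1.0, 1.0, 0.0])
v3 = np.array([1.0, 1.0, 1.0])
v  = np.array([4.0, 3.0, 1.0])

B = np.column_stack([v1, v2, v3])     # basis vectors as columns
c = np.linalg.solve(B, v)             # coordinate vector (v)_S
print(c)                              # -> [1. 2. 1.], i.e. v = 1*v1 + 2*v2 + 1*v3
```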

Examples.

Definition. A nonzero vector space V is called finite-dimensional if it contains a finite set of vectors
{v1, v2, …, vn} that forms a basis. If no such set exists, V is called infinite-dimensional. We regard
the zero vector space as finite-dimensional.

Theorem. Let V be a finite-dimensional vector space and {v1, v2, … vn} any basis.
(a) If a set has more than n vectors, then it is linearly dependent.
(b) If a set has fewer than n vectors, then it does not span V.

Theorem. All bases for a finite-dimensional vector space have the same number of vectors. This implies
that every basis for Rn has n vectors, every basis for R3 has 3 vectors, and so on.

Definition. The dimension of a finite-dimensional vector space V, denoted by dim(V), is defined to be
the number of vectors in a basis for V. The zero vector space is defined to have dimension zero.
dim(Rn) = n
dim(Pn) = n + 1
dim(Mmn) = mn

(Note: these formulas are illustrated in the examples that follow.)


The following steps summarize the procedure (a worked sketch follows the steps):
Given a set of vectors S = {v1, v2, …, vn} in Rn,
(a) find the vectors of S that form a basis for span(S), and
(b) express the vectors of S that are not in the basis as linear combinations of the basis vectors.

Step 1. Form the matrix A having v1, v2, …, vn as its column vectors.

Step 2. Reduce A to its reduced row echelon form R, and let w1, w2, …, wn be the column vectors of R.

Step 3. Identify the columns of R that contain leading 1’s; the corresponding column vectors of A form a basis for
span(S).

Step 4. Express each column of R that does not contain a leading 1 as a linear combination of the columns of R that do; the same relations hold among the corresponding columns of A.
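
Here is a sketch of the four steps with sympy; the vectors v1, …, v4 are an arbitrary example of my own choosing, and Matrix.rref() is used to obtain R together with the indices of the pivot (leading 1) columns.

```python
import sympy as sp

# Basis for span(S) via the reduced row echelon form.
v1 = sp.Matrix([1, 2, 1])
v2 = sp.Matrix([2, 4, 2])      # = 2*v1, redundant
v3 = sp.Matrix([0, 1, 1])
v4 = sp.Matrix([1, 3, 2])      # = v1 + v3, redundant

A = sp.Matrix.hstack(v1, v2, v3, v4)   # Step 1: vectors as columns of A
R, pivots = A.rref()                   # Step 2: reduced row echelon form
print("pivot columns:", pivots)        # Step 3: (0, 2) -> {v1, v3} is a basis

# Step 4: the non-pivot columns of R give the dependences among the columns of A,
# e.g. column 1 of R shows v2 = 2*v1, and column 3 shows v4 = v1 + v3.
print(R)
```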

Rank and Nullity.

Theorem. If A is any matrix, then the row space and column space of A have the same dimension.

Definition. The common dimension of the row space and column space of a matrix A is called the rank of A,
denoted rank(A); the dimension of the nullspace of A is called the nullity of A, denoted nullity(A).

Theorem. If A is any matrix, then rank(A) = rank(AT).

Proof. rank(A) = dim(row space of A) = dim(column space of AT) = rank(AT).

Theorem. If A is a matrix with n columns, then rank(A) + nullity(A) = n.

Theorem. If A is an m x n matrix, then
(a) rank(A) = the number of leading variables in the solution of Ax = 0
(b) nullity(A) = the number of parameters in the general solution of Ax = 0
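
Below is a quick numerical check of rank(A) + nullity(A) = n, where the nullity counts the free parameters in the general solution of Ax = 0. The matrix is an arbitrary example of my own choosing; numpy is assumed.

```python
import numpy as np

# Check rank(A) + nullity(A) = n on an example matrix.
A = np.array([[1.0, 2.0, -1.0, 3.0],
              [2.0, 4.0, -2.0, 6.0],
              [0.0, 1.0,  1.0, 1.0]])
m, n = A.shape

rank = np.linalg.matrix_rank(A)
nullity = n - rank                          # number of free parameters in Ax = 0
print(rank, nullity, rank + nullity == n)   # -> 2 2 True
```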

Number of parameters in a general solution.
