SUBMITTED TO: MISS:
SUBMITTED BY: HARVINDER SINGH
CLASS: B.TECH CSE B
ROLL NO.: R 714 A 21
Linear independence
In linear algebra, a family of vectors is linearly independent if none of them can
be written as a linear combination of finitely many other vectors in the collection.
A family of vectors which is not linearly independent is called linearly dependent.
For instance, in the three-dimensional real vector space R^3 we have the following
example:

    v1 = (0, 0, 1),  v2 = (0, 2, −2),  v3 = (1, −2, 1),  v4 = (4, 2, 3)
Here the first three vectors are linearly independent; but the fourth vector equals 9
times the first plus 5 times the second plus 4 times the third, so the four vectors
together are linearly dependent. Linear dependence is a property of the family, not
of any particular vector; here we could just as well write the first vector as a linear
combination of the last three.
In probability theory and statistics there is an unrelated measure of linear
dependence between random variables.
Formal definition
A subset S of a vector space V is called linearly dependent if there exists a finite
number of distinct vectors v1, v2, ..., vn in S and scalars a1, a2, ..., an, not all zero,
such that

    a1v1 + a2v2 + ... + anvn = 0.
Note that the zero on the right is the zero vector, not the number zero.
If such scalars do not exist, then the vectors are said to be linearly independent.
This condition can be reformulated as follows: Whenever a1, a2, ..., an are scalars
such that

    a1v1 + a2v2 + ... + anvn = 0,

we have ai = 0 for i = 1, 2, ..., n; i.e., only the trivial solution exists.
A set is linearly independent if and only if the only representations of the zero
vector as linear combinations of its elements are trivial solutions.
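This reformulation lends itself to a direct computation: a finite set of vectors is
linearly independent exactly when the matrix having them as columns has rank equal
to the number of vectors. The following is a minimal sketch, assuming NumPy is
available; the helper name is ours, not a standard API.

    import numpy as np

    def is_linearly_independent(vectors):
        """True when the matrix with the given vectors as columns has full column rank."""
        A = np.column_stack([np.asarray(v, dtype=float) for v in vectors])
        return np.linalg.matrix_rank(A) == len(vectors)

    # The vectors from the R^3 example above:
    print(is_linearly_independent([(0, 0, 1), (0, 2, -2), (1, -2, 1)]))  # True
    print(is_linearly_independent([(0, 0, 1), (0, 2, -2), (1, -2, 1),
                                   (4, 2, 3)]))                          # False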
More generally, let V be a vector space over a field K, and let {vi}i∈I be a family of
elements of V. The family is linearly dependent over K if there exists a family
{aj}j∈J of elements of K, not all zero, such that

    ∑j∈J ajvj = 0,

where the index set J is a nonempty, finite subset of I.
A set X of elements of V is linearly independent if the corresponding family {x}x∈X
is linearly independent.
Equivalently, a family is dependent if a member is in the linear span of the rest of
the family, i.e., a member is a linear combination of the rest of the family.
A set of vectors that is linearly independent and spans some vector space forms
a basis for that vector space. For example, the vector space of all polynomials in x
over the reals has the (infinite) basis {1, x, x^2, ...}.
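The polynomial case can be checked by equating coefficients: a + bx + cx^2 is the
zero polynomial only if every coefficient vanishes. A short sketch, assuming SymPy
is available:

    import sympy as sp

    a, b, c, x = sp.symbols("a b c x")
    # Collect the coefficients of a + b*x + c*x**2 and require them all to be zero.
    coeffs = sp.Poly(a + b*x + c*x**2, x).all_coeffs()  # [c, b, a]
    print(sp.solve(coeffs, [a, b, c]))                  # {a: 0, b: 0, c: 0}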
Geometric meaning
A geographic example may help to clarify the concept of linear independence. A
person describing the location of a certain place might say, "It is 5 miles north and
6 miles east of here." This is sufficient information to describe the location,
because the geographic coordinate system may be considered a 2-dimensional
vector space (ignoring altitude). The person might add, "The place is 7.81 miles
northeast of here." Although this last statement is true, it is not necessary.
In this example the "5 miles north" vector and the "6 miles east" vector are linearly
independent. That is to say, the north vector cannot be described in terms of the
east vector, and vice versa. The third "7.81 miles northeast" vector is a linear
combination of the other two vectors, and it makes the set of vectors linearly
dependent, that is, one of the three vectors is unnecessary.
Also note that if altitude is not ignored, it becomes necessary to add a third vector
to the linearly independent set. In general, n linearly independent vectors are
required to describe any location in n-dimensional space.
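The geographic example can also be checked numerically. A sketch assuming NumPy,
with the two map directions as plain coordinates:

    import numpy as np

    north = np.array([0.0, 5.0])   # "5 miles north"
    east = np.array([6.0, 0.0])    # "6 miles east"
    northeast = north + east       # sqrt(5^2 + 6^2) ≈ 7.81 miles to the northeast

    print(np.linalg.matrix_rank(np.column_stack([north, east])))             # 2: independent
    print(np.linalg.matrix_rank(np.column_stack([north, east, northeast])))  # still 2: dependent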
Example I
The vectors (1, 1) and (−3, 2) in R^2 are linearly independent.
Proof
Let λ1 and λ2 be two real numbers such that

    λ1(1, 1) + λ2(−3, 2) = (0, 0).

Taking each coordinate alone, this means

    λ1 − 3λ2 = 0
    λ1 + 2λ2 = 0

Solving for λ1 and λ2 (subtracting the first equation from the second gives 5λ2 = 0),
we find that λ1 = 0 and λ2 = 0.
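The same two equations can be handed to a computer algebra system. A sketch
assuming SymPy is available:

    import sympy as sp

    l1, l2 = sp.symbols("lambda1 lambda2")
    # lambda1*(1, 1) + lambda2*(-3, 2) = (0, 0), taken coordinate by coordinate:
    print(sp.solve([l1 - 3*l2, l1 + 2*l2], [l1, l2]))  # {lambda1: 0, lambda2: 0}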
Alternative method using determinants
An alternative method uses the fact that n vectors in R^n are linearly dependent if
and only if the determinant of the matrix formed by taking the vectors as its
columns is zero.
In this case, the matrix formed by the vectors is

    A = | 1  −3 |
        | 1   2 |

We may write a linear combination of the columns as

    AΛ = | 1  −3 | | λ1 |
         | 1   2 | | λ2 |

We are interested in whether AΛ = 0 for some nonzero vector Λ. This depends on
the determinant of A, which is

    det A = (1)(2) − (−3)(1) = 5.
Since the determinant is non-zero, the vectors (1, 1) and (−3, 2) are linearly
independent.
When the number of vectors equals the dimension of the vectors, the matrix is
square and hence the determinant is defined.
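In code, the determinant test is a single call. A sketch assuming NumPy:

    import numpy as np

    A = np.array([[1, -3],
                  [1, 2]])
    print(np.linalg.det(A))  # 5.0 (up to floating-point error); nonzero, so independent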
Otherwise, suppose we have m vectors of n coordinates, with m < n. Then A is an
n×m matrix and Λ is a column vector with m entries, and we are again interested in
AΛ = 0. As we saw previously, this is equivalent to a list of n equations. Consider
the first m rows of A, the first m equations; any solution of the full list of equations
must also be true of the reduced list. In fact, if 〈i1,…,im〉 is any list of m rows, then
the equation

    A〈i1,…,im〉Λ = 0

must be true for those rows, where A〈i1,…,im〉 is the m × m matrix formed from rows
i1, …, im of A.
Furthermore, the converse is true. That is, we can test whether the m vectors are
linearly dependent by testing whether

    det A〈i1,…,im〉 = 0

for all possible lists of m rows. (In case m = n, this requires only one determinant,
as above. If m > n, then it is a theorem that the vectors must be linearly dependent.)
This fact is valuable for theory; in practical calculations more efficient methods are
available.
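For completeness, the all-minors test described above can be written directly,
although, as noted, a rank computation is far more efficient in practice. A sketch
assuming NumPy; the helper name is ours:

    import itertools
    import numpy as np

    def dependent_by_minors(A, tol=1e-12):
        """For an n x m matrix A with m <= n: the columns are linearly dependent
        iff every m x m submatrix built from m of the n rows has zero determinant."""
        n, m = A.shape
        return all(abs(np.linalg.det(A[list(rows), :])) < tol
                   for rows in itertools.combinations(range(n), m))

    A = np.array([[1.0, 2.0],
                  [2.0, 4.0],
                  [3.0, 6.0]])     # the second column is twice the first
    print(dependent_by_minors(A))  # True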
Example II
Let V = R^n and consider the following elements in V, the standard basis vectors:

    e1 = (1, 0, 0, ..., 0)
    e2 = (0, 1, 0, ..., 0)
    ...
    en = (0, 0, 0, ..., 1)
Then e1, e2, ..., en are linearly independent.
Proof
Suppose that a1, a2, ..., an are elements of R such that

    a1e1 + a2e2 + ... + anen = 0.

Since

    a1e1 + a2e2 + ... + anen = (a1, a2, ..., an),

then ai = 0 for all i in {1, ..., n}.
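In matrix terms, e1, ..., en are the columns of the n × n identity matrix, so the
claim is that the identity matrix has full rank. A one-line check assuming NumPy:

    import numpy as np

    n = 4
    E = np.eye(n)                          # columns are e1, ..., en
    print(np.linalg.matrix_rank(E) == n)   # True: the standard basis is independent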
Example III
Let V be the vector space of all functions of a real variable t. Then the functions
e^t and e^(2t) in V are linearly independent.
Proof
Suppose a and b are two real numbers such that

    a e^t + b e^(2t) = 0

for all values of t. We need to show that a = 0 and b = 0. In order to do this, we
divide through by e^t (which is never zero) and subtract to obtain

    b e^t = −a.

In other words, the function b e^t must be independent of t, which occurs only when
b = 0. It follows that a is also zero.
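Independence of differentiable functions can also be established with the Wronskian
determinant, a standard test not used in the proof above: if the Wronskian is
nonzero at some point, the functions are linearly independent. A sketch assuming
SymPy:

    import sympy as sp

    t = sp.symbols("t")
    W = sp.wronskian([sp.exp(t), sp.exp(2*t)], t)
    print(sp.simplify(W))  # exp(3*t), never zero, so e^t and e^(2t) are independent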
Example IV
The following vectors in R^4 are linearly dependent:

    v1 = (1, 4, 2, −3),  v2 = (7, 10, −4, −1),  v3 = (−2, 1, 5, −4)

Proof
We need to find scalars λ1, λ2 and λ3, not all zero, such that

    λ1v1 + λ2v2 + λ3v3 = 0.

Forming the simultaneous equations:

    λ1 + 7λ2 − 2λ3 = 0
    4λ1 + 10λ2 + λ3 = 0
    2λ1 − 4λ2 + 5λ3 = 0
    −3λ1 − λ2 − 4λ3 = 0

we can solve (using, for example, Gaussian elimination) to obtain:

    λ1 = −(3/2)λ3
    λ2 = (1/2)λ3

where λ3 can be chosen arbitrarily. Since these are nontrivial solutions, the
vectors are linearly dependent.
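The same dependence can be recovered mechanically as the nullspace of the matrix
whose columns are the three vectors. A sketch assuming SymPy and the vectors as
given above:

    import sympy as sp

    A = sp.Matrix([[ 1,  7, -2],
                   [ 4, 10,  1],
                   [ 2, -4,  5],
                   [-3, -1, -4]])  # columns are v1, v2, v3
    # One basis vector spans the nullspace: lambda3 = 1 gives
    # lambda1 = -3/2 and lambda2 = 1/2, a nontrivial solution.
    print(A.nullspace())           # [Matrix([[-3/2], [1/2], [1]])]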
The projective space of linear dependences
A linear dependence among vectors v1, ..., vn is a tuple (a1, ..., an) with n scalar
components, not all zero, such that

    a1v1 + a2v2 + ... + anvn = 0.
If such a linear dependence exists, then the n vectors are linearly dependent. It
makes sense to identify two linear dependences if one arises as a non-zero multiple
of the other, because in this case the two describe the same linear relationship
among the vectors. Under this identification, the set of all linear dependences
among v1, ..., vn is a projective space.
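Concretely, the linear dependences among v1, ..., vn form the nullspace of the
matrix with those vectors as columns, and identifying nonzero scalar multiples
projectivizes that nullspace. A sketch assuming SymPy, with illustrative vectors of
our own choosing:

    import sympy as sp

    # Four vectors in R^2, written as the columns of A.
    A = sp.Matrix([[1, 0, 1, 2],
                   [0, 1, 1, 2]])
    basis = A.nullspace()  # a basis of the 2-dimensional space of dependences
    print(len(basis))      # 2, so the dependences here form a projective line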