
Math

Topic: Linear Dependence and Independence

SUBMITTED TO:
MISS:
SUBMITTED BY:
NAME: HARVINDER SINGH
CLASS: B.TECH CSE B
ROLL NO.: R 714 A 21
ACKNOWLEDGEMENT

I would like to show my gratitude for the help and guidance rendered to me by my teacher, ER. I would also like to acknowledge the support and help I received from my family and friends in completing this term paper.
CONTENTS:

• Linear Dependence and Independence
• Linear independence
    ○ Formal definition
    ○ Geometric meaning
• Alternative method using determinants
• The projective space of linear dependences
• Linear dependence between random variables
• Linear Independence & the Wronskian of Two Functions
• Linear Independence & the Wronskian for two solutions to the ODE
Linear Dependence and Independence
The idea of dimension is fairly intuitive. Consider any vector in ℝ^m, say (a1, a2, a3, ..., am). Each of the m components is independent of the others. That is, choosing a value for a single component in no way restricts the available choices of values for the other components; every choice still yields a vector in ℝ^m. For this reason we say that ℝ^m has m degrees of freedom, or equivalently has dimension m. Thus ℝ^2 has dimension 2, ℝ^3 has dimension 3, and so forth.
Consider now the “subspace” (soon to be defined) of ℝ^3 consisting of all vectors (x, y, z) with x + y + z = 0. We may now choose values freely for only 2 of the components, and these determine the value of the third component. For instance, if we choose x = 1 and y = 2, then we must have z = −3 in order to satisfy x + y + z = 0. Therefore this subspace is said to have dimension 2.
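This count can also be checked numerically. Below is a minimal sketch (assuming numpy is available; the variable names are only illustrative) that recovers the dimension 2 from the rank-nullity theorem, since the plane x + y + z = 0 is the null space of the 1×3 constraint matrix [1 1 1].

    # The plane x + y + z = 0 in R^3 is the null space of the constraint matrix [1 1 1].
    # By rank-nullity, its dimension is 3 - rank = 3 - 1 = 2.
    import numpy as np

    A = np.array([[1.0, 1.0, 1.0]])                      # one linear constraint on (x, y, z)
    print("dimension:", 3 - np.linalg.matrix_rank(A))    # prints 2

    # Picking x = 1 and y = 2 forces z = -3, as in the text.
    x, y = 1.0, 2.0
    z = -(x + y)
    print((x, y, z), "sums to", x + y + z)               # (1.0, 2.0, -3.0) sums to 0.0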
The rest of this course will be concerned with this idea of dimension and
related concepts. In this unit, we lay the foundation with the concepts of
linear dependence and independence.
Definition 20.1. A linear combination of a set of vectors v1, v2, ..., vk in ℝ^m is an expression of the form c1v1 + c2v2 + ... + ckvk, where each ci is a scalar.
Example 1. Let v1 = (1, 0, 1), v2 = (−1, 1, 0) and v3 = (1, 2, 3). Express v3 as a linear combination of v1 and v2.
Solution: We must find scalars c1 and c2 so that v3 = c1v1 + c2v2. Using our knowledge of scalar multiplication and addition of vectors, from Unit 13, we set
(1, 2, 3) = c1(1, 0, 1) + c2(−1, 1, 0)
⇒ (1, 2, 3) = (c1, 0, c1) + (−c2, c2, 0)
⇒ (1, 2, 3) = (c1 − c2, c2, c1)
Equating corresponding components, we see that what we have is a system of linear equations (SLE):
1 = c1 − c2
2 = c2
3 = c1
Since c1 = 3 and c2 = 2 also satisfy c1 − c2 = 1, this is the solution to the SLE. Thus (1, 2, 3) = 3(1, 0, 1) + 2(−1, 1, 0), and the required linear combination is v3 = 3v1 + 2v2.
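Small systems like this one can also be checked by machine. The sketch below (assuming numpy; names are illustrative) solves the overdetermined 3×2 system by least squares and confirms that the residual is zero, i.e. that v3 really lies in the span of v1 and v2.

    # Cross-check of Example 1: find c1, c2 with (1,2,3) = c1*(1,0,1) + c2*(-1,1,0).
    import numpy as np

    A = np.column_stack([(1, 0, 1), (-1, 1, 0)])    # columns are v1 and v2
    b = np.array([1, 2, 3], dtype=float)            # v3

    c, *_ = np.linalg.lstsq(A, b, rcond=None)
    print("c1, c2 =", c)                            # approximately [3. 2.]
    print("exact combination:", np.allclose(A @ c, b))   # True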
We now give two definitions for the same thing. Each definition is (of course) equivalent to the other, but sometimes one definition is better suited to a given task than the other.
Definition 20.2. A set of vectors in ℝ^m is said to be linearly dependent if there is at least one vector in the set which can be expressed as a linear combination of the others. If a set is not linearly dependent, then it is said to be linearly independent.
Definition 20.3. A set of vectors {v1, v2, ..., vn} in ℝ^m is linearly dependent if the vector equation c1v1 + c2v2 + ... + cnvn = 0 has a solution in which at least one ci ≠ 0.
Equivalently, the set is linearly independent if the above equation has only the trivial solution, namely c1 = c2 = ... = cn = 0.
From this second definition, we see that a set containing only one vector is linearly dependent if that one vector is the zero vector. That is, we define:
Definition 20.4. A set S = {v} with v ∈ ℝ^m is linearly dependent if v = 0 (the zero vector), and linearly independent otherwise.

Note: Linear dependence or independence is a property of a set of vectors, not of an individual vector. That is, it has to do with the relationship between the vectors in the set. (The definition above describes how this property is applied to a set which contains only one vector.)
Also note: Definition 20.2 says that if S is a set of linearly dependent vectors, then at least one of the vectors in S can be written as a linear combination of the others. This does not necessarily mean that each vector in the set can be written as a linear combination of the others. For instance, S = {(1, 0), (2, 0), (0, 1)} is a set of linearly dependent vectors in ℝ^2, since (2, 0) = 2(1, 0) + 0(0, 1). However, (0, 1) cannot be written as a linear combination of (1, 0) and (2, 0).
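Both claims can be verified mechanically. A small sketch (assuming numpy; in_span is an illustrative helper, not a library function) tests whether a vector lies in the span of some others by asking whether appending it to the matrix increases the rank.

    # Checking the note: (2,0) is a combination of (1,0) and (0,1), but (0,1) is not
    # a combination of (1,0) and (2,0).
    import numpy as np

    def in_span(vectors, b):
        """True if b is a linear combination of the given vectors."""
        rank_without = np.linalg.matrix_rank(np.column_stack(vectors))
        rank_with = np.linalg.matrix_rank(np.column_stack(vectors + [b]))
        return rank_with == rank_without

    v1, v2, v3 = np.array([1., 0.]), np.array([2., 0.]), np.array([0., 1.])
    print(in_span([v1, v3], v2))   # True:  (2,0) = 2*(1,0) + 0*(0,1)
    print(in_span([v1, v2], v3))   # False: (0,1) is not in the span of (1,0), (2,0)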

Linear Dependence and Independence

A linear dependence among vectors v(1), ..., v(k) is an equation
c1 v(1) + c2 v(2) + ... + ck v(k) = 0,
in which some of the ci are not 0. A set of vectors is said to be linearly independent if there is no linear dependence among them, and linearly dependent if there is one or more.
Example: suppose v(1) = i + j, v(2) = 2i, v(3) = 3j. Then v(1), v(2) and v(3) are linearly dependent because of the relation
6v(1) = 3v(2) + 2v(3), or 6v(1) − 3v(2) − 2v(3) = 0.
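The relation is easy to confirm numerically; a minimal sketch (assuming numpy, with i and j written as the standard unit vectors of the plane):

    # Verifying 6*v1 - 3*v2 - 2*v3 = 0 for v1 = i + j, v2 = 2i, v3 = 3j.
    import numpy as np

    i, j = np.array([1., 0.]), np.array([0., 1.])
    v1, v2, v3 = i + j, 2 * i, 3 * j
    print(6 * v1 - 3 * v2 - 2 * v3)   # [0. 0.]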

Linear independence
In linear algebra, a family of vectors is linearly independent if none of them can
be written as a linear combination of finitely many other vectors in the collection.
A family of vectors which is not linearly independent is called linearly dependent.
For instance, in the three-dimensional real vector space R3, consider a family of four vectors in which the first three are linearly independent, while the fourth equals 9 times the first plus 5 times the second plus 4 times the third; the four vectors together are then linearly dependent. Linear dependence is a property of the family, not of any particular vector; here we could just as well write the first vector as a linear combination of the last three.
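The concrete vectors of this example did not survive extraction. As a hedged illustration only, the sketch below (assuming numpy) builds one family consistent with the description: the standard basis of R3 together with (9, 5, 4).

    # An assumed, illustrative reconstruction of the R^3 example: the standard basis
    # vectors are independent, and (9, 5, 4) = 9*e1 + 5*e2 + 4*e3 makes the family dependent.
    import numpy as np

    e1, e2, e3 = np.eye(3)                  # (1,0,0), (0,1,0), (0,0,1)
    v4 = np.array([9., 5., 4.])

    print(np.linalg.matrix_rank(np.column_stack([e1, e2, e3])))       # 3 -> independent
    print(np.linalg.matrix_rank(np.column_stack([e1, e2, e3, v4])))   # still 3 -> dependent
    print(np.allclose(v4, 9 * e1 + 5 * e2 + 4 * e3))                  # True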
In probability theory and statistics there is an unrelated measure of linear
dependence between random variables.

Formal definition
A subset S of a vector space V is called linearly dependent if there exists a finite number of distinct vectors v1, v2, ..., vn in S and scalars a1, a2, ..., an, not all zero, such that
a1 v1 + a2 v2 + ... + an vn = 0.
Note that the zero on the right is the zero vector, not the number zero.
If such scalars do not exist, then the vectors are said to be linearly independent. This condition can be reformulated as follows: whenever a1, a2, ..., an are scalars such that
a1 v1 + a2 v2 + ... + an vn = 0,
we have ai = 0 for i = 1, 2, ..., n, i.e. only the trivial solution exists.
A set is linearly independent if and only if the only representations of the zero vector as linear combinations of its elements are trivial.
More generally, let V be a vector space over a field K, and let {vi}i∈I be a family of elements of V. The family is linearly dependent over K if there exists a family {aj}j∈J of elements of K, not all zero, such that
Σ_{j∈J} aj vj = 0,
where the index set J is a nonempty, finite subset of I.
A set X of elements of V is linearly independent if the corresponding family {x}x∈X
is linearly independent.
Equivalently, a family is dependent if a member is in the linear span of the rest of
the family, i.e., a member is a linear combination of the rest of the family.
A set of vectors which is linearly independent and spans some vector space forms a basis for that vector space. For example, the vector space of all polynomials in x over the reals has the (infinite) basis {1, x, x², ...}.
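For finite families in R^n, the "only the trivial solution" criterion is easy to test exactly: put the vectors as the columns of a matrix and compute its null space. A minimal sketch (assuming sympy is available; the matrices are taken from examples elsewhere in this paper):

    # Linear independence <=> the matrix with the vectors as columns has a trivial null space.
    from sympy import Matrix

    independent = Matrix([[1, -3],
                          [1,  2]])        # columns (1,1) and (-3,2), see Example I below
    dependent   = Matrix([[1, 2, 0],
                          [0, 0, 1]])      # columns (1,0), (2,0), (0,1) from the earlier note

    print(independent.nullspace())   # []  -> only the trivial solution exists
    print(dependent.nullspace())     # one basis vector -> a nontrivial dependence exists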
Geometric meaning
A geographic example may help to clarify the concept of linear independence. A
person describing the location of a certain place might say, "It is 5 miles north and
6 miles east of here." This is sufficient information to describe the location,
because the geographic coordinate system may be considered a 2-dimensional
vector space (ignoring altitude). The person might add, "The place is 7.81 miles
northeast of here." Although this last statement is true, it is not necessary.
In this example the "5 miles north" vector and the "6 miles east" vector are linearly
independent. That is to say, the north vector cannot be described in terms of the
east vector, and vice versa. The third "7.81 miles northeast" vector is a linear
combination of the other two vectors, and it makes the set of vectors linearly
dependent, that is, one of the three vectors is unnecessary.
Also note that if altitude is not ignored, it becomes necessary to add a third vector
to the linearly independent set. In general, n linearly independent vectors are
required to describe any location in n-dimensional space.
Example I
The vectors (1, 1) and (−3, 2) in R2 are linearly independent.
Proof
Let λ1 and λ2 be two real numbers such that
λ1 (1, 1) + λ2 (−3, 2) = (0, 0).
Taking each coordinate alone, this means
λ1 − 3λ2 = 0,
λ1 + 2λ2 = 0.
Solving for λ1 and λ2, we find that λ1 = 0 and λ2 = 0.
Alternative method using determinants
An alternative method uses the fact that n vectors in Rn are linearly dependent if
and only if the determinant of the matrix formed by taking the vectors as its
columns is zero.
In this case, the matrix formed by the vectors is
A = [ 1  −3
      1   2 ],
whose columns are the vectors (1, 1) and (−3, 2). We may write a linear combination of the columns as AΛ, where Λ = (λ1, λ2). We are interested in whether AΛ = 0 for some nonzero vector Λ. This depends on the determinant of A, which is
det A = (1)(2) − (−3)(1) = 5.
Since the determinant is non-zero, the vectors (1, 1) and (−3, 2) are linearly independent.
When the number of vectors equals the dimension of the vectors, the matrix is
square and hence the determinant is defined.
Otherwise, suppose we have m vectors of n coordinates, with m < n. Then A is an
n×m matrix and Λ is a column vector with m entries, and we are again interested in
AΛ = 0. As we saw previously, this is equivalent to a list of n equations. Consider the first m rows of A, i.e. the first m equations; any solution of the full list of equations must also satisfy this reduced list. In fact, if ⟨i1, …, im⟩ is any list of m rows, then the equation must hold for those rows.
Furthermore, the reverse is true. That is, we can test whether the m vectors are linearly dependent by testing whether
det A⟨i1, …, im⟩ = 0
for all possible lists of m rows, where A⟨i1, …, im⟩ denotes the m×m submatrix of A formed by those rows. (In the case m = n, this requires only one determinant, as above. If m > n, then it is a theorem that the vectors must be linearly dependent.)
This fact is valuable for theory; in practical calculations more efficient methods are available.
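The sketch below (assuming numpy; variable names are illustrative) carries out both versions of the test: the single 2×2 determinant from Example I, and the all-row-submatrices test for two dependent vectors in R^3.

    # Determinant test for linear dependence.
    from itertools import combinations
    import numpy as np

    # Square case: columns (1,1) and (-3,2).
    A = np.array([[1., -3.],
                  [1.,  2.]])
    print(np.linalg.det(A))                  # 5.0, nonzero -> independent

    # m = 2 vectors with n = 3 coordinates: dependent iff every 2x2 row-submatrix
    # has zero determinant. Here the second column is twice the first.
    B = np.column_stack([(1., 0., 1.), (2., 0., 2.)])
    dets = [np.linalg.det(B[list(rows), :]) for rows in combinations(range(3), 2)]
    print(dets)                              # all (numerically) zero -> dependent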
Example II
Let V = Rn and consider the standard basis elements in V:
e1 = (1, 0, 0, ..., 0), e2 = (0, 1, 0, ..., 0), ..., en = (0, 0, 0, ..., 1).
Then e1, e2, ..., en are linearly independent.
Proof
Suppose that a1, a2, ..., an are elements of R such that
a1 e1 + a2 e2 + ... + an en = 0.
Since a1 e1 + a2 e2 + ... + an en = (a1, a2, ..., an), we conclude that ai = 0 for all i in {1, ..., n}.

Example III
Let V be the vector space of all functions of a real variable t. Then the functions e^t and e^(2t) in V are linearly independent.
Proof
Suppose a and b are two real numbers such that
a e^t + b e^(2t) = 0
for all values of t. We need to show that a = 0 and b = 0. To do this, we divide through by e^t (which is never zero) and rearrange to obtain
b e^t = −a.
In other words, the function b e^t must be independent of t, which occurs only when b = 0. It follows that a is also zero.
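The same conclusion can be seen numerically: if a e^t + b e^(2t) = 0 for every t, it must hold in particular at t = 0 and t = 1, and that 2×2 system already forces a = b = 0. A minimal sketch (assuming numpy):

    # If a*e^t + b*e^(2t) = 0 for all t, it holds in particular at t = 0 and t = 1.
    # The determinant of the resulting 2x2 system is e^2 - e != 0, so a = b = 0.
    import numpy as np

    M = np.array([[np.exp(0.0), np.exp(0.0)],
                  [np.exp(1.0), np.exp(2.0)]])
    print(np.linalg.det(M))                 # about 4.67, nonzero
    print(np.linalg.solve(M, np.zeros(2)))  # [0. 0.] -> only the trivial solution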
Proof
We need to find scalars λ1, λ2 and λ3, not all zero, such that the corresponding linear combination of the vectors equals the zero vector. Forming the simultaneous equations and solving them (using, for example, Gaussian elimination), we obtain a family of solutions in which λ3 can be chosen arbitrarily. Since nontrivial solutions exist, the vectors are linearly dependent.
The projective space of linear dependences
A linear dependence among vectors v1, ..., vn is a tuple (a1, ..., an) with n scalar components, not all zero, such that
a1 v1 + a2 v2 + ... + an vn = 0.
If such a linear dependence exists, then the n vectors are linearly dependent. It makes sense to identify two linear dependences if one arises as a non-zero multiple of the other, because in this case the two describe the same linear relationship among the vectors. Under this identification, the set of all linear dependences among v1, ..., vn is a projective space.

Linear dependence between random variables


The covariance is sometimes called a measure of "linear dependence" between two
random variables. That does not mean the same thing as in the context of linear
algebra. When the covariance is normalized, one obtains the correlation matrix.
From it, one can obtain the Pearson coefficient, which gives us the goodness of the
fit for the best possible linear function describing the relation between the
variables. In this sense covariance is a linear gauge of dependence.

Linear Independence & the Wronskian of Two Functions


Recall that our derivation of the Wronskian and the idea of linearly independent functions came from considering a fundamental set of two solutions of a second order linear homogeneous ODE in standard form:
y″ + p(t)y′ + q(t)y = 0,
where p and q are both continuous on some interval I.
For our purposes, we apply the Wronskian as a test for linear independence of two solutions to the above equation. It turns out that the idea of linear independence is more general than just checking for two fundamental solutions to the above ODE. Furthermore, a nonzero Wronskian can be used to verify the linear independence of any two differentiable functions, not just two solutions of the above ODE.
Linear Independence & the Wronskian for any two functions
Recall our definition of the linear dependence of two functions f and g on an open interval I: f and g are linearly dependent if there exist constants c1 and c2, not both zero, such that
c1 f(t) + c2 g(t) = 0 for all t ∈ I.
If we must choose c1 = 0 = c2, then we say f and g are linearly independent.
Since we are considering only two functions, linear dependence is equivalent in this special case to one function being a scalar multiple of the other:
f(t) = C g(t) or g(t) = C f(t) for some constant C.
Note that C may be zero.
If two differentiable functions f and g are linearly dependent, then their Wronskian is zero for all t ∈ I, i.e.,
W[f, g](t) = f(t)g′(t) − g(t)f′(t) = 0 for all t ∈ I.
Thus, if the Wronskian is nonzero at any t ∈ I, the two functions must be linearly independent.
Examples
1. f(t) = 2t and g(t) = 3t are linearly dependent for all t ∈ R, because each of the following holds for all t ∈ R:
(a) 3f(t) + (−2)g(t) = 3(2t) + (−2)(3t) = 0, where c1 = 3 ≠ 0 and c2 = −2 ≠ 0.
(b) f(t) = 2t = (2/3)g(t), where C = 2/3, i.e., one function is a scalar multiple of the other.
It follows that the Wronskian is zero for all t ∈ R:
W[f, g](t) = f(t)g′(t) − g(t)f′(t) = 2t(3) − 3t(2) = 6t − 6t = 0.
2. On the interval I = (−2, 2), f(t) = 2t and g(t) = 3t² are linearly independent, because each of the following holds for all t ∈ (−2, 2):
(a) c1 f(t) + c2 g(t) = 0 for all t ∈ (−2, 2) implies
2c1 t + 3c2 t² = 0
t(2c1 + 3c2 t) = 0,
which implies t = 0 or 2c1 + 3c2 t = 0, neither of which holds for all t ∈ (−2, 2) unless c1 = 0 = c2.
(b) Neither function is a scalar multiple of the other (check this!).
Not surprisingly, the Wronskian is not identically zero:
W[f, g](t) = f(t)g′(t) − g(t)f′(t) = 2t(6t) − 3t²(2) = 12t² − 6t² = 6t² ≠ 0
whenever t ≠ 0.
Note: The fact that the Wronskian vanishes at a point of I = (−2, 2), namely W[f, g](0) = 0, does not imply linear dependence. In fact, one can rig up two linearly independent functions whose Wronskian is everywhere zero! Take the differentiable functions f(t) = t² and g(t) = |t|t, for example, and consider the cases t < 0 and t ≥ 0 separately.
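These Wronskians are quick to check symbolically. A minimal sketch (assuming sympy; the wronskian helper is defined inline for clarity):

    # Symbolic check of the Wronskians in the examples above.
    import sympy as sp

    t = sp.symbols('t', real=True)

    def wronskian(f, g):
        """W[f, g](t) = f*g' - g*f'."""
        return sp.simplify(f * sp.diff(g, t) - g * sp.diff(f, t))

    print(wronskian(2 * t, 3 * t))      # 0       -> consistent with dependence
    print(wronskian(2 * t, 3 * t**2))   # 6*t**2  -> nonzero for t != 0, so independent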
Linear Independence & the Wronskian for two solutions to the ODE

If we are considering f = y1 and g = y2 to be two solutions to the ODE
y″ + p(t)y′ + q(t)y = 0,
where p and q are both continuous on some interval I, then the Wronskian has some extra properties, which are given by Abel's Theorem:
W[y1, y2](t) = c e^(−∫p(t) dt), for some constant c.
This theorem essentially says that if two solutions to the ODE are linearly independent, then the Wronskian of the two solutions is never zero on the interval I, i.e., c ≠ 0. Otherwise, the Wronskian is always zero, i.e., c = 0, and the solutions are linearly dependent. This is the key result that we find useful for checking for a fundamental set of two solutions to a second order linear homogeneous differential equation.
Examples
1. y1(t) = e^(2t) and y2(t) = e^(3t) are linearly independent solutions to
y″ − 5y′ + 6y = 0
for all t ∈ R, because each of the following holds:
(a) c1 y1(t) + c2 y2(t) = 0 for all t ∈ R implies
c1 e^(2t) + c2 e^(3t) = 0
e^(2t)(c1 + c2 e^t) = 0.
Since e^(2t) is never zero, c1 + c2 e^t = 0 for all t, which forces c2 = 0 (otherwise e^t would have to be constant) and hence c1 = 0.
(b) Neither function is a scalar multiple of the other (check this!).
(c) W[y1, y2](t) = y1(t)y2′(t) − y2(t)y1′(t) = e^(2t)(3e^(3t)) − e^(3t)(2e^(2t)) = 3e^(5t) − 2e^(5t) = e^(5t) ≠ 0 for all t ∈ R. Here p(t) = −5 and c = 1 in Abel's Theorem.
2. y1(t) = e^(2t) and y2(t) = e^(ln 3 + 2t) are linearly dependent solutions to
y″ − 5y′ + 6y = 0
for all t ∈ R, because each of the following holds for all t ∈ R:
(a) −3y1(t) + y2(t) = −3e^(2t) + e^(ln 3 + 2t) = −3e^(2t) + e^(ln 3) e^(2t) = −3e^(2t) + 3e^(2t) = 0, where c1 = −3 ≠ 0 and c2 = 1 ≠ 0.
(b) y2(t) = e^(ln 3 + 2t) = e^(ln 3) e^(2t) = 3e^(2t) = 3y1(t), where C = 3, i.e., one function is a scalar multiple of the other.
(c) W[y1, y2](t) = y1(t)y2′(t) − y2(t)y1′(t) = e^(2t)(2e^(ln 3 + 2t)) − e^(ln 3 + 2t)(2e^(2t)) = 0. Here p(t) = −5 and c = 0 in Abel's Theorem.
In each of the above two examples we see that the Wronskian of the two solutions is either everywhere zero or nowhere zero on the interval I = R. This is guaranteed by Abel's Theorem.
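Abel's formula can also be checked symbolically for these two examples. A minimal sketch (assuming sympy; for y″ − 5y′ + 6y = 0 we have p(t) = −5, so e^(−∫p dt) = e^(5t)):

    # Abel's Theorem check: W[y1, y2](t) = c * exp(-∫p(t) dt) with p(t) = -5.
    import sympy as sp

    t = sp.symbols('t', real=True)

    def wronskian(y1, y2):
        return sp.simplify(y1 * sp.diff(y2, t) - y2 * sp.diff(y1, t))

    abel_factor = sp.exp(-sp.integrate(-5, t))          # exp(5t)

    W1 = wronskian(sp.exp(2 * t), sp.exp(3 * t))
    print(sp.simplify(W1 / abel_factor))                # 1 -> c = 1, independent solutions

    W2 = wronskian(sp.exp(2 * t), sp.exp(sp.log(3) + 2 * t))
    print(W2)                                           # 0 -> c = 0, dependent solutions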
Considering our earlier example, it follows that f(t) = 2t and g(t) = 3t² cannot be a fundamental set of solutions to any second order linear homogeneous ODE on the interval I = (−2, 2). Can you figure out why not?
