Martin Schmidt
Linear Independence. Linear Functions. Matrix Representation. Determinant. Linear Mappings. Eigenvalues and Eigenvectors.
Basic Concepts
Outline
Linear Independence.
Linear Functions.
Matrix Representation.
Determinant.
Linear Mappings.
Eigenvalues and Eigenvectors.
Linear Independence.
Let $\{x_s \in X;\ s \in S\}$ be a family of vectors in a vector space $X$.
A linear combination of $\{x_s;\ s \in S\}$ is a finite sum of the form
$$x = \sum_{s} \lambda_s x_s \in X.$$
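A linear combination can be computed componentwise; the following short Python sketch (not part of the lecture, vectors and coefficients chosen for illustration) forms $2x_1 - x_2$ in $\mathbb{R}^3$:

```python
# Hypothetical example: the linear combination 2*x1 + (-1)*x2 of two
# vectors in R^3, computed coordinate by coordinate.
x1 = [1.0, 0.0, 2.0]
x2 = [0.0, 3.0, 1.0]
coeffs = [2.0, -1.0]
vectors = [x1, x2]

# x = sum_s lambda_s * x_s, evaluated in each of the three coordinates
x = [sum(lam * v[i] for lam, v in zip(coeffs, vectors)) for i in range(3)]
print(x)  # [2.0, -3.0, 3.0]
```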
Linear Independence.
Basis
Definition
(Hamel basis, Def. 3.1.1) A Hamel basis for a vector space $X$ is a
linearly independent family $\{x_s;\ s \in S\}$ of vectors that spans $X$.
Every vector space has a Hamel basis.
The cardinality of a Hamel basis is called the dimension.
Vector spaces with finite Hamel bases are called finite dimensional.
A Hamel basis allows us to write every element of X in a unique
way as a linear combination of elements of the basis.
Theorem
(Thm. 3.1.2) Let $\{v_s;\ s \in S\}$ be a Hamel basis for $X$. Then every
$x \in X \setminus \{0\}$ has a unique representation as a linear combination of
finitely many vectors in $\{v_s;\ s \in S\}$ with non-zero coefficients.
Proof: If $\sum_s \lambda_s v_s = \sum_s \mu_s v_s$, then $\sum_s (\lambda_s - \mu_s) v_s = 0$, and by linear independence $\lambda_s = \mu_s$ for all $s$.
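Uniqueness can be seen concretely by solving for the coefficients; a minimal Python sketch with a hypothetical basis $\{(1,0), (1,1)\}$ of $\mathbb{R}^2$:

```python
from fractions import Fraction as F

# Hypothetical basis of R^2 and a target vector; the 2x2 system
# alpha*v1 + beta*v2 = x is solved by back-substitution, giving the
# unique coordinates of x in this basis.
v1, v2 = (F(1), F(0)), (F(1), F(1))
x = (F(3), F(5))

beta = x[1] / v2[1]                     # second coordinate: beta * 1 = 5
alpha = (x[0] - beta * v2[0]) / v1[0]   # first coordinate: alpha + beta = 3
print(alpha, beta)  # -2 5

# sanity check: the combination reproduces x
assert all(alpha * a + beta * b == c for a, b, c in zip(v1, v2, x))
```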
Linear Functions.
Definition
(Def. 3.2.1) Let $X$ and $Y$ be two vector spaces defined over the
same field $F$. We say that a function $T: X \to Y$ is linear if for all
$x_1, x_2 \in X$ and any $\alpha, \beta \in F$, we have
$$T(\alpha x_1 + \beta x_2) = \alpha T(x_1) + \beta T(x_2).$$
This implies, of course, that
$$T(x_1 + x_2) = T(x_1) + T(x_2) \quad\text{and}\quad T(\alpha x_1) = \alpha T(x_1).$$
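The defining identity can be checked numerically for a matrix map $T(x) = Ax$; a small Python sketch (matrix, vectors, and scalars chosen arbitrarily for illustration):

```python
from fractions import Fraction as F

# T(x) = A x for a sample 2x2 matrix; linearity is verified exactly
# (with rational arithmetic) on one pair of vectors and scalars.
A = [[F(1), F(2)], [F(3), F(4)]]

def T(x):
    return [sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]

x1, x2 = [F(1), F(-1)], [F(2), F(5)]
a, b = F(3), F(-2)

lhs = T([a * u + b * v for u, v in zip(x1, x2)])  # T(a*x1 + b*x2)
rhs = [a * p + b * q for p, q in zip(T(x1), T(x2))]  # a*T(x1) + b*T(x2)
print(lhs == rhs)  # True
```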
Theorem
(Thm. 3.2.3) Let $X$ and $Y$ be two vector spaces defined over the
same field. The set $L(X, Y)$ of linear functions from $X$ to $Y$ is a
vector space.
Linear Functions.
Image and Kernel
Definition
(Def. 3.2.5) Given a linear function $T: X \to Y$,
its image (im) or range is the subset of $Y$ given by
$$\mathrm{im}\, T = T(X) = \{y \in Y;\ y = T(x) \text{ for some } x \in X\}$$
and its kernel or null space is the subset of $X$ given by
$$\ker T = T^{-1}(\{0\}) = \{x \in X;\ T(x) = 0\}.$$
Definition
The rank of a linear function $T$ is $\dim \mathrm{im}\, T$.
Linear Functions.
Image and Kernel
Theorem
(Thm. 3.2.6) Given a linear function $T: X \to Y$,
$\mathrm{im}\, T$ is a vector subspace of $Y$.
If $\{x_1, \dots, x_n\}$ is a basis for $X$,
then $\{T(x_1), \dots, T(x_n)\}$ spans $\mathrm{im}\, T$.
Proof: Exercise
Theorem
(Thm. 3.2.7) Given a linear function $T: X \to Y$,
$\ker T$ is a vector subspace of $X$.
Proof: Exercise
Linear Functions.
Image and Kernel
Theorem
(Thm. 3.2.9) Let $X$ be a finite-dimensional vector space, and
$T: X \to Y$ a linear function. Then
$$\dim X = \dim(\ker T) + \dim(\mathrm{im}\, T).$$
Proof: Choose $v_1, \dots, v_r, u_1, \dots, u_k \in X$ s.th.
$\{T(v_1), \dots, T(v_r)\}$ is a basis of $\mathrm{im}\, T$ and $\{u_1, \dots, u_k\}$ is a basis of $\ker T$.
We claim that $\{v_1, \dots, v_r, u_1, \dots, u_k\}$ is a basis of $X$.
By definition of the $v$s, for any $x \in X$ there exists $y \in \mathrm{span}\{v_1, \dots, v_r\}$
with $T(x) = T(y)$. This implies $x - y \in \ker T = \mathrm{span}\{u_1, \dots, u_k\}$,
and hence $x \in \mathrm{span}\{v_1, \dots, v_r, u_1, \dots, u_k\}$.
$T$ maps $\alpha_1 v_1 + \dots + \alpha_r v_r + \beta_1 u_1 + \dots + \beta_k u_k = 0$ to
$\alpha_1 T(v_1) + \dots + \alpha_r T(v_r) = 0$. Since the $T(v_i)$ are linearly independent, the $\alpha$s are
zero; and since the $u_j$ are linearly independent, the $\beta$s are zero.
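The dimension formula can be illustrated on a hand-picked example; in this Python sketch (matrix and kernel vector are hypothetical choices, rank and kernel dimension stated by inspection of the independent rows):

```python
# Rank-nullity check for a hand-picked 2x3 matrix A: its two rows are
# independent, so rank A = 2 and the kernel has dimension 3 - 2 = 1.
# The vector v below was chosen by hand to solve A v = 0.
A = [[1, 2, 3],
     [0, 1, 1]]
v = [-1, -1, 1]

Av = [sum(A[i][j] * v[j] for j in range(3)) for i in range(2)]
print(Av)  # [0, 0]  -> v lies in ker A

rank, dim_ker, dim_X = 2, 1, 3
print(rank + dim_ker == dim_X)  # True
```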
Linear Functions.
Inverse of a Linear Function
Matrix Representation.
Matrix Multiplication
$$\begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1M} \\
a_{21} & a_{22} & \cdots & a_{2M} \\
\vdots & \vdots & & \vdots \\
a_{N1} & a_{N2} & \cdots & a_{NM}
\end{pmatrix}
\begin{pmatrix}
b_{11} & b_{12} & \cdots & b_{1L} \\
b_{21} & b_{22} & \cdots & b_{2L} \\
\vdots & \vdots & & \vdots \\
b_{M1} & b_{M2} & \cdots & b_{ML}
\end{pmatrix}
=
\begin{pmatrix}
\sum_{m=1}^{M} a_{1m} b_{m1} & \sum_{m=1}^{M} a_{1m} b_{m2} & \cdots & \sum_{m=1}^{M} a_{1m} b_{mL} \\
\sum_{m=1}^{M} a_{2m} b_{m1} & \sum_{m=1}^{M} a_{2m} b_{m2} & \cdots & \sum_{m=1}^{M} a_{2m} b_{mL} \\
\vdots & \vdots & & \vdots \\
\sum_{m=1}^{M} a_{Nm} b_{m1} & \sum_{m=1}^{M} a_{Nm} b_{m2} & \cdots & \sum_{m=1}^{M} a_{Nm} b_{mL}
\end{pmatrix}$$
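The entry formula $(AB)_{nl} = \sum_{m=1}^{M} a_{nm} b_{ml}$ translates directly into a triple loop; a minimal Python sketch with arbitrary sample matrices:

```python
# Entry-by-entry matrix product: result[n][l] = sum_m A[n][m] * B[m][l],
# exactly as in the formula above the code.
def matmul(A, B):
    N, M, L = len(A), len(B), len(B[0])
    assert all(len(row) == M for row in A)  # inner dimensions must agree
    return [[sum(A[n][m] * B[m][l] for m in range(M)) for l in range(L)]
            for n in range(N)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul(A, B))  # [[19, 22], [43, 50]]
```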
Matrix Representation.
Matrices as Linear Functions
$$\begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1M} \\
a_{21} & a_{22} & \cdots & a_{2M} \\
\vdots & \vdots & & \vdots \\
a_{N1} & a_{N2} & \cdots & a_{NM}
\end{pmatrix}
\begin{pmatrix}
x_1 \\ x_2 \\ \vdots \\ x_M
\end{pmatrix}
=
\begin{pmatrix}
a_{11} x_1 + a_{12} x_2 + \dots + a_{1M} x_M \\
a_{21} x_1 + a_{22} x_2 + \dots + a_{2M} x_M \\
\vdots \\
a_{N1} x_1 + a_{N2} x_2 + \dots + a_{NM} x_M
\end{pmatrix}$$
Matrix Representation
Gauß Algorithm
$$Ax =
\begin{pmatrix}
a_{11} x_1 + a_{12} x_2 + \dots + a_{1M} x_M \\
a_{21} x_1 + a_{22} x_2 + \dots + a_{2M} x_M \\
\vdots \\
a_{N1} x_1 + a_{N2} x_2 + \dots + a_{NM} x_M
\end{pmatrix}
=
\begin{pmatrix}
b_1 \\ b_2 \\ \vdots \\ b_N
\end{pmatrix}$$
Matrix Representation
Gauß Algorithm
After finitely many transformations (1)-(2) there exist finitely
many $0 < m_1 < m_2 < \dots < m_r$ s.th. $a_{n,m_n} \neq 0$ for $1 \le n \le r$ and
$a_{nm} = 0$ for $n > r$ as well as for $1 \le n \le r$ and $m < m_n$:
$$A = \begin{pmatrix}
0 & \cdots & 0 & a_{1,m_1} & \ast & \cdots & \cdots & \cdots & \ast \\
0 & \cdots & \cdots & 0 & 0 & a_{2,m_2} & \ast & \cdots & \ast \\
\vdots & & & & & & \ddots & & \vdots \\
0 & \cdots & & & \cdots & & 0 & a_{r,m_r} & \ast \\
\vdots & & & & & & & & \vdots \\
0 & \cdots & & & & & & \cdots & 0
\end{pmatrix}$$
For such $A$ and all $b \in F^N$ one can easily determine all solutions.
The number $r$ in the Gauß algorithm is the rank of $A$.
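The elimination itself fits in a few lines; the following Python sketch (a simplified illustration over the rationals, not the lecture's exact formulation) performs the row swaps (1) and eliminations (2) and returns the number $r$ of pivot rows:

```python
from fractions import Fraction as F

# Sketch of the Gauss algorithm: row-reduce a copy of A with exact
# rational arithmetic and return the number r of pivot rows (the rank).
def rank(A):
    A = [[F(x) for x in row] for row in A]
    n_rows, n_cols = len(A), len(A[0])
    r = 0
    for col in range(n_cols):
        # find a pivot row at or below row r in this column
        pivot = next((i for i in range(r, n_rows) if A[i][col] != 0), None)
        if pivot is None:
            continue
        A[r], A[pivot] = A[pivot], A[r]      # transformation (1): swap rows
        for i in range(r + 1, n_rows):       # transformation (2): eliminate below
            factor = A[i][col] / A[r][col]
            A[i] = [a - factor * b for a, b in zip(A[i], A[r])]
        r += 1
    return r

print(rank([[1, 2, 3], [2, 4, 6], [0, 1, 1]]))  # 2  (second row = 2 * first)
```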
Matrix Representation
Matrix Representation of a Linear Function
Matrix Representation.
Change of Basis
Matrix Representation.
Change of Basis
$$M_b = P M_a$$
Determinant.
Permutations
Permutations of 3 elements:
$(1, 2, 3) \mapsto (1, 2, 3),\ (2, 3, 1),\ (3, 1, 2)$: even
$(1, 2, 3) \mapsto (1, 3, 2),\ (2, 1, 3),\ (3, 2, 1)$: odd
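The even/odd classification can be reproduced in Python by counting inversions (a standard parity criterion, used here as an illustration):

```python
from itertools import permutations

# A permutation is even iff the number of inversions, i.e. pairs
# i < j with p[i] > p[j], is even.
def is_even(p):
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p))
              if p[i] > p[j])
    return inv % 2 == 0

even = sorted(p for p in permutations((1, 2, 3)) if is_even(p))
odd = sorted(p for p in permutations((1, 2, 3)) if not is_even(p))
print(even)  # [(1, 2, 3), (2, 3, 1), (3, 1, 2)]
print(odd)   # [(1, 3, 2), (2, 1, 3), (3, 2, 1)]
```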
Determinant.
For any $N \times N$ matrix $A$ we define the determinant of $A$
$$\det(A) = \sum_{\sigma} (-1)^{|\sigma|} a_{1,\sigma(1)} \cdots a_{N,\sigma(N)},$$
where the sum runs over all permutations $\sigma$ of $\{1, \dots, N\}$, with
$$(-1)^{|\sigma|} = \begin{cases} 1 & \text{for even } \sigma \\ -1 & \text{for odd } \sigma. \end{cases}$$
If we fix all other rows of the matrix A with the exception of one
row, then det(A) depends linearly on the remaining row.
If we interchange any two rows of $A$, then $\det(A)$ changes sign.
We say that det(A) is multilinear and totally antisymmetric with
respect to the rows of A.
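The permutation-sum definition can be evaluated directly for small matrices; a Python sketch (sample matrix chosen for illustration, with the cofactor expansion giving $\det = 25$ as a cross-check):

```python
from itertools import permutations

# Determinant via the sum over permutations: each term is the sign of
# the permutation times the product a[0][p(0)] * ... * a[N-1][p(N-1)].
def sign(p):
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p))
              if p[i] > p[j])
    return 1 if inv % 2 == 0 else -1

def det(A):
    n = len(A)
    total = 0
    for p in permutations(range(n)):
        term = sign(p)
        for i in range(n):
            term *= A[i][p[i]]
        total += term
    return total

A = [[2, 0, 1], [1, 3, 0], [0, 1, 4]]
print(det(A))  # 25
```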
Determinant.
Theorem
A linear function between finite-dimensional vector spaces is
invertible iff the vector spaces have the same dimension and the
matrix representing the function has nonzero determinant.
Proof: The determinant $\det(A)$ does not change under transformations
(2). Under transformations (1), $\det(A)$ changes sign. Therefore it
suffices to prove the statement for matrices of the form obtained
by the Gauß algorithm. The only permutation $\sigma$ of $\{1, \dots, N\}$ with
$\sigma(n) \ge n$ for $n = 1, \dots, N$ is the identity. Hence the determinant of an
upper triangular matrix is the product of the elements on the
diagonal. Therefore $\det(A)$ vanishes iff the rank is not $N$.
For two $N \times N$ matrices $A$ and $B$ we have (without proof)
$$\det(AB) = \det(A) \det(B).$$
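The product rule can be checked on a small example; a Python sketch for $2 \times 2$ matrices, where $\det = ad - bc$ (sample matrices chosen arbitrarily):

```python
# det(AB) = det(A) * det(B), verified for 2x2 matrices.
def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def mul2(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 4]]   # det = -2
B = [[0, 1], [5, 6]]   # det = -5
print(det2(mul2(A, B)), det2(A) * det2(B))  # 10 10
```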
Linear Mappings.
Bounded Linear Mappings
Definition
(Bounded linear function, Def. 3.4.2) Let $T$ be a linear function
between two normed vector spaces $X$ and $Y$. We say that $T$ is
bounded if there exists some real number $B$ such that
$$\|Tx\| \le B \|x\| \quad \forall x \in X.$$
Theorem
(Thm. 3.4.3) Let $X$ and $Y$ be normed vector spaces. A linear
function $T: X \to Y$ is continuous if and only if it is bounded.
Proof: $T$ bounded $\Rightarrow$ $\|T(x) - T(y)\| = \|T(x - y)\| \le B \|x - y\|$, so $T$ is continuous.
$T$ continuous in $0$ $\Rightarrow$ $\exists\, \delta > 0$ s.th. $\|T(x)\| \le 1$ $\forall x \in B_\delta(0)$. Then for $x \neq 0$
$$\|T(x)\| = \frac{2\|x\|}{\delta} \left\| T\!\left( \frac{\delta x}{2\|x\|} \right) \right\| \le \frac{2}{\delta} \|x\|.$$
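For a matrix map on $\mathbb{R}^2$ with the max norm, an explicit bound $B$ is the maximum absolute row sum; a Python sketch (matrix and test vectors are arbitrary illustrations):

```python
# For T(x) = A x on R^2 with the max (infinity) norm, the maximum
# absolute row sum is a valid bound B: ||Ax||_inf <= B * ||x||_inf.
A = [[1, -2], [3, 0.5]]
B = max(sum(abs(a) for a in row) for row in A)  # rows sum to 3 and 3.5

def norm_inf(v):
    return max(abs(c) for c in v)

for x in [[1, 1], [-2, 3], [0.5, -4]]:
    Ax = [sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]
    assert norm_inf(Ax) <= B * norm_inf(x)
print(B)  # 3.5
```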
Linear Mappings.
Finite dimensional Vector Spaces
Definition
(Def. 2.10.1) On a real vector space $X$ two norms $\|\cdot\|_1$ and $\|\cdot\|_2$
are called equivalent, if there exist $C_1, C_2 > 0$ s.th.
$$\|x\|_2 \le C_1 \|x\|_1 \quad\text{and}\quad \|x\|_1 \le C_2 \|x\|_2 \quad \forall x \in X.$$
Theorem
(Thm. 3.4.6) All norms on a finite-dim. vector space are equivalent.
Proof: For a basis $\{v_1, \dots, v_n\}$ of $X$ the isomorphism $\mathbb{R}^n \simeq X$ is
continuous: $\|\lambda_1 v_1 + \dots + \lambda_n v_n\| \le (\|v_1\| + \dots + \|v_n\|) \|\lambda\|_2$.
Conversely, $\min\{\|\lambda_1 v_1 + \dots + \lambda_n v_n\|;\ \|\lambda\|_2 = 1\} \le \|\lambda_1 v_1 + \dots + \lambda_n v_n\| / \|\lambda\|_2$ for $\lambda \neq 0$, and this minimum is positive by compactness of the unit sphere.
Theorem
(Thm. 3.4.5) A linear function from a finite-dimensional normed
vector space into a normed vector space is continuous.
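On $\mathbb{R}^2$ equivalence of the $1$- and $2$-norms holds with explicit constants $C_1 = 1$ and $C_2 = \sqrt{2}$ (a standard fact, checked here on a few sample vectors):

```python
import math

# Norm equivalence on R^2: ||x||_2 <= ||x||_1 <= sqrt(2) * ||x||_2,
# i.e. C1 = 1 and C2 = sqrt(2) work in Def. 2.10.1.
def norm1(v):
    return sum(abs(c) for c in v)

def norm2(v):
    return math.sqrt(sum(c * c for c in v))

for x in [[3.0, 4.0], [-1.0, 1.0], [2.0, 0.0]]:
    assert norm2(x) <= norm1(x) + 1e-12
    assert norm1(x) <= math.sqrt(2) * norm2(x) + 1e-12
print("ok")
```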
Linear Mappings.
Norm of Linear Mappings
The norm of a bounded linear function $T$ is $\|T\| = \sup\{\|Tx\|;\ x \in X,\ \|x\| \le 1\}$.
Theorem
(Thm. 3.4.8) For two normed vector spaces $X$ and $Y$ the space of
bounded linear functions $L(X, Y)$ is a normed vector space.
For an $n \times n$ matrix $A$ with eigenvalues $\lambda_1, \dots, \lambda_n$ (counted with multiplicity) we have
$$\det(A) = \prod_{i=1}^{n} \lambda_i \qquad\text{and}\qquad \operatorname{tr}(A) = \sum_{i=1}^{n} a_{ii} = \sum_{i=1}^{n} \lambda_i.$$
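For a $2 \times 2$ matrix the eigenvalues are the roots of $t^2 - \operatorname{tr}(A)\, t + \det(A)$, so both identities can be checked with the quadratic formula; a Python sketch (sample symmetric matrix with eigenvalues $3$ and $1$):

```python
import cmath

# Eigenvalues of a 2x2 matrix from its characteristic polynomial
# t^2 - tr(A) t + det(A) = 0; their sum is tr(A), their product det(A).
A = [[2, 1], [1, 2]]
tr = A[0][0] + A[1][1]                        # 4
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]   # 3

disc = cmath.sqrt(tr * tr - 4 * det)
l1, l2 = (tr + disc) / 2, (tr - disc) / 2
print(l1.real, l2.real)  # 3.0 1.0
assert abs(l1 + l2 - tr) < 1e-12 and abs(l1 * l2 - det) < 1e-12
```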
If $\sum_{n=1}^{N} \mu_n e_n = 0$, then for all polynomials $p$
$$p(A) \left( \sum_{n=1}^{N} \mu_n e_n \right) = \sum_{n=1}^{N} p(\lambda_n) \mu_n e_n = 0.$$
If we insert the polynomials $p(\lambda) = \prod_{n \neq l} (\lambda - \lambda_n)$ we see that all
$\mu_1, \dots, \mu_N$ vanish. The foregoing Theorem implies the statement.
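The polynomial trick can be made concrete on a diagonal matrix with distinct eigenvalues $2$ and $5$ (a hypothetical example, not from the slides): applying $p(A)$ with $p(t) = t - 5$ annihilates the $e_2$ component and merely rescales the $e_1$ component, which forces the coefficients of any vanishing combination to be zero.

```python
# Apply p(A) = A - 5*I to w = mu1*e1 + mu2*e2 for A = diag(2, 5):
# the e2 component is killed, the e1 component picks up factor (2 - 5).
def apply(A, v):
    return [sum(A[i][j] * v[j] for j in range(2)) for i in range(2)]

A = [[2, 0], [0, 5]]
e1, e2 = [1, 0], [0, 1]
mu1, mu2 = 3, -4
w = [mu1 * a + mu2 * b for a, b in zip(e1, e2)]

# p(A) w = A w - 5 w = (2 - 5) * mu1 * e1
pw = [a - 5 * b for a, b in zip(apply(A, w), w)]
print(pw)  # [-9, 0]  -> only the e1 component survives, scaled by -3
```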
Theorem
(Thm. 3.6.5) Let $A$ be a real $n \times n$ matrix. Then complex
eigenvalues of $A$, if they exist, come in conjugate pairs. Moreover,
the corresponding eigenvectors also come in conjugate pairs.
Proof: A real matrix $A$ can be considered as a complex matrix.
Let $\lambda$ be an eigenvalue of the complex matrix with eigenvector $e$.
Let $\bar{A}$ and $\bar{e}$ denote the matrix and vector with complex-conjugate entries.
Then $Ae = \lambda e$ implies $\bar{A}\bar{e} = \bar{\lambda}\bar{e}$. Since $A$ is real, $\bar{A} = A$, and
therefore $\bar{\lambda}$ is an eigenvalue with eigenvector $\bar{e}$.
Note: $\lambda$ is real iff there exists a real non-zero eigenvector $e$.
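The conjugate-pair phenomenon shows up already for the real $2 \times 2$ rotation-by-$90°$ matrix, whose eigenvalues are $\pm i$ with eigenvector $e = (1, -i)$ for $\lambda = i$ (a standard example used here as an illustration):

```python
import cmath

# Real 2x2 matrix with no real eigenvalues: the roots of the
# characteristic polynomial t^2 - tr(A) t + det(A) come as a conjugate
# pair, and so do the corresponding eigenvectors.
A = [[0, -1], [1, 0]]
tr = A[0][0] + A[1][1]                        # 0
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]   # 1
disc = cmath.sqrt(tr * tr - 4 * det)          # sqrt(-4) = 2j
l1, l2 = (tr + disc) / 2, (tr - disc) / 2     # i and -i
assert l2 == l1.conjugate()

# e = (1, -i) is an eigenvector for l1 = i; its conjugate works for l2.
e = [1, -1j]
Ae = [sum(A[i][j] * e[j] for j in range(2)) for i in range(2)]
assert Ae == [l1 * c for c in e]
ec = [c.conjugate() for c in e]
Aec = [sum(A[i][j] * ec[j] for j in range(2)) for i in range(2)]
assert Aec == [l2 * c for c in ec]
print("conjugate pair confirmed")
```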