

Mathematics for Economists


Vector Spaces and Linear Algebra

Martin Schmidt

Mannheim, September 2012


Basic Concepts
Outline

Linear Independence.

Linear Functions.

Matrix Representation.

Determinant.

Linear Mappings.

Eigenvalues and Eigenvectors.


Linear Independence.
Let $\{x_s \in X;\ s \in S\}$ be a family of vectors in a vector space $X$.
A linear combination of $\{x_s;\ s \in S\}$ is a finite sum of the form
$$x = \sum_{s \in S} \lambda_s x_s \in X.$$
Only finitely many of the numbers $\lambda_s$, called coefficients, are non-zero.

The set of all these linear combinations is called the span of $\{x_s;\ s \in S\}$.
Definition
A family of vectors $\{x_s;\ s \in S\}$ is called linearly independent if no
vector can be written as a linear combination of the others:
$$\sum_{s \in S} \lambda_s x_s = 0 \implies \lambda_s = 0 \quad \forall s \in S.$$

$0$ is always a linear combination $\implies$ linearly independent vectors are non-zero.
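
For vectors in $\mathbb{R}^n$ this condition can be checked numerically: a family is linearly independent iff the matrix having the vectors as columns has rank equal to the number of vectors. A minimal numpy sketch with made-up vectors:

```python
import numpy as np

# Columns are the vectors x_1, x_2, x_3 in R^3 (made-up example data).
X = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 0.0]])

# Linearly independent iff the only solution of sum_s lambda_s x_s = 0
# is lambda = 0, i.e. iff the rank equals the number of vectors.
rank = np.linalg.matrix_rank(X)
print(rank == X.shape[1])   # False here: x_3 = x_1 + x_2
```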


Linear Independence.
Basis

Definition
(Hamel basis, Def. 3.1.1) A Hamel basis for a vector space $X$ is a
linearly independent family $\{x_s;\ s \in S\}$ of vectors that spans $X$.
Every vector space has a Hamel basis.
The cardinality of a Hamel basis is called the dimension.
Vector spaces with finite Hamel bases are called finite dimensional.
A Hamel basis allows us to write every element of X in a unique
way as a linear combination of elements of the basis.
Theorem
(Thm. 3.1.2) Let $\{v_s;\ s \in S\}$ be a Hamel basis for $X$. Then every
$x \in X \setminus \{0\}$ has a unique representation as a linear combination of
finitely many vectors in $\{v_s;\ s \in S\}$ with non-zero coefficients.

Proof $\sum_s \lambda_s v_s = \sum_s \mu_s v_s \implies \lambda_s = \mu_s \quad \forall s$.
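
In $\mathbb{R}^n$ the unique coefficients of a vector with respect to a given basis can be computed by solving a linear system; a small numpy sketch with an assumed basis:

```python
import numpy as np

# Basis vectors of R^2 as the columns of V (assumed example basis).
V = np.array([[1.0, 1.0],
              [0.0, 2.0]])
x = np.array([3.0, 4.0])

# Unique coefficients alpha with alpha_1 v_1 + alpha_2 v_2 = x.
alpha = np.linalg.solve(V, x)
print(alpha)                       # [1. 2.]
print(np.allclose(V @ alpha, x))   # True
```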


Linear Independence and Bases.


Basis

In the case of finite-dimensional vector spaces, bases have a very
simple structure. If $X$ has dimension $n < \infty$, any collection of $n$
linearly independent vectors in $X$ is a basis for $X$.
Theorem
(Thm. 3.1.4) Let $\{b_1, \ldots, b_n\}$ be a basis of $X$. Then no set of
more than $n$ vectors in $X$ is linearly independent.
Proof $x = \sum_{i=1}^{n} \lambda_i b_i$ with $\lambda_1 \neq 0 \implies \{x, b_2, \ldots, b_n\}$ is a basis.
Replace $b_1, \ldots, b_n$ one by one by linearly independent vectors and get a basis.
Theorem
(Thm. 3.1.5) Let X be a vector space of dimension n. Then any
linearly independent family of n vectors in X is a basis for X .


Linear Functions.
Definition
(Def. 3.2.1) Let $X$ and $Y$ be two vector spaces defined over the
same field $F$. We say that a function $T : X \to Y$ is linear if for all
$x_1, x_2 \in X$ and any $\alpha, \beta \in F$ we have
$$T(\alpha x_1 + \beta x_2) = \alpha T(x_1) + \beta T(x_2).$$
This implies, of course, that
$$T(x_1 + x_2) = T(x_1) + T(x_2) \quad \text{and} \quad T(\alpha x_1) = \alpha T(x_1).$$
Theorem
(Thm. 3.2.3) Let X and Y be two vector spaces defined over the
same field. The set L(X , Y ) of linear functions from X to Y is a
vector space.


Linear Functions.
Image and Kernel

Definition
(Def. 3.2.5) Given a linear function $T : X \to Y$,
its image (im) or range is the subset of $Y$ given by
$$\operatorname{im} T = T(X) = \{y \in Y;\ y = T(x) \text{ for some } x \in X\}$$
and its kernel or null space is the subset of $X$ given by
$$\ker T = T^{-1}(\{0\}) = \{x \in X;\ T(x) = 0\}.$$
Definition
The rank of a linear function $T$ is $\dim \operatorname{im} T$.


Linear Functions.
Image and Kernel

Theorem
(Thm. 3.2.6) Given a linear function $T : X \to Y$,
$\operatorname{im} T$ is a vector subspace of $Y$.
If $\{x_1, \ldots, x_n\}$ is a basis for $X$,
then $\{T(x_1), \ldots, T(x_n)\}$ spans $\operatorname{im} T$.
Proof: Exercise
Theorem
(Thm. 3.2.7) Given a linear function $T : X \to Y$,
$\ker T$ is a vector subspace of $X$.
Proof: Exercise


Linear Functions.
Image and Kernel

Theorem
(Thm. 3.2.9) Let $X$ be a finite-dimensional vector space, and
$T : X \to Y$ a linear function. Then
$$\dim X = \dim(\ker T) + \dim(\operatorname{im} T).$$
Proof Choose $v_1, \ldots, v_r, u_1, \ldots, u_k \in X$ s.th.
$\{T(v_1), \ldots, T(v_r)\}$ is a basis of $\operatorname{im} T$ and $\{u_1, \ldots, u_k\}$ of $\ker T$.
We claim that $\{v_1, \ldots, v_r, u_1, \ldots, u_k\}$ is a basis of $X$.
By definition of the $v$'s, for any $x$ there exists $y \in \operatorname{span}\{v_1, \ldots, v_r\}$
with $T(x) = T(y)$. This implies $x - y \in \ker T = \operatorname{span}\{u_1, \ldots, u_k\}$
and $x \in \operatorname{span}\{v_1, \ldots, v_r, u_1, \ldots, u_k\}$.
$T$ maps $\lambda_1 v_1 + \ldots + \lambda_r v_r + \mu_1 u_1 + \ldots + \mu_k u_k = 0$ to
$\lambda_1 T(v_1) + \ldots + \lambda_r T(v_r) = 0$. By definition of the $v$'s the $\lambda$'s are
zero. And by definition of the $u$'s the $\mu$'s are zero.
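
For a matrix $A$ representing $T : \mathbb{R}^M \to \mathbb{R}^N$ the theorem can be illustrated numerically: a basis of $\ker T$ can be read off from the singular value decomposition, and its dimension plus the rank equals $M$. A sketch with made-up data:

```python
import numpy as np

# A represents a linear map T : R^4 -> R^3 (made-up example data).
A = np.array([[1.0, 2.0, 3.0, 4.0],
              [0.0, 1.0, 1.0, 1.0],
              [1.0, 3.0, 4.0, 5.0]])   # third row = first + second, so rank 2

M = A.shape[1]
dim_im = np.linalg.matrix_rank(A)                 # dim(im T)

# Basis of ker T: right-singular vectors for (numerically) zero singular values.
U, s, Vt = np.linalg.svd(A)
kernel_basis = Vt[np.count_nonzero(s > 1e-12):]
dim_ker = kernel_basis.shape[0]

print(np.allclose(A @ kernel_basis.T, 0))         # True: the vectors lie in ker T
print(dim_im, dim_ker, dim_im + dim_ker == M)     # 2 2 True
```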


Linear Functions.
Inverse of a Linear Function

The following theorem shows that the inverse of a linear function is


also a linear function.
Theorem
(Thm. 3.2.11) Let $T \in L(X, Y)$ be an invertible linear function.
Then the inverse function $T^{-1} : Y \to X$ is linear: $T^{-1} \in L(Y, X)$.
Proof: Exercise
Theorem
(Thm. 3.2.13) A linear function $T : X \to Y$ is one-to-one
iff $T(x) = 0 \implies x = 0$, i.e. iff $\ker T = \{0\}$.


Matrix Representation.
Matrix Multiplication

An $N \times M$ matrix times an $M \times L$ matrix gives an $N \times L$ matrix:
$$\begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1M} \\
a_{21} & a_{22} & \cdots & a_{2M} \\
\vdots & \vdots & & \vdots \\
a_{N1} & a_{N2} & \cdots & a_{NM}
\end{pmatrix}
\begin{pmatrix}
b_{11} & b_{12} & \cdots & b_{1L} \\
b_{21} & b_{22} & \cdots & b_{2L} \\
\vdots & \vdots & & \vdots \\
b_{M1} & b_{M2} & \cdots & b_{ML}
\end{pmatrix}
=
\begin{pmatrix}
\sum_{m=1}^{M} a_{1m} b_{m1} & \sum_{m=1}^{M} a_{1m} b_{m2} & \cdots & \sum_{m=1}^{M} a_{1m} b_{mL} \\
\sum_{m=1}^{M} a_{2m} b_{m1} & \sum_{m=1}^{M} a_{2m} b_{m2} & \cdots & \sum_{m=1}^{M} a_{2m} b_{mL} \\
\vdots & \vdots & & \vdots \\
\sum_{m=1}^{M} a_{Nm} b_{m1} & \sum_{m=1}^{M} a_{Nm} b_{m2} & \cdots & \sum_{m=1}^{M} a_{Nm} b_{mL}
\end{pmatrix}$$

This product is associative and distributive.
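
The entry-wise formula $(AB)_{nl} = \sum_{m=1}^{M} a_{nm} b_{ml}$ can be compared with numpy's built-in product; a small sketch with random matrices, including a check of associativity:

```python
import numpy as np

N, M, L = 2, 3, 4
rng = np.random.default_rng(0)
A = rng.standard_normal((N, M))
B = rng.standard_normal((M, L))

# Entry-wise definition of the matrix product.
C = np.zeros((N, L))
for n in range(N):
    for l in range(L):
        C[n, l] = sum(A[n, m] * B[m, l] for m in range(M))

print(np.allclose(C, A @ B))                    # True
# Associativity: (AB)x = A(Bx) for x in F^L.
x = rng.standard_normal(L)
print(np.allclose((A @ B) @ x, A @ (B @ x)))    # True
```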


Matrix Representation.
Matrices as Linear Functions

We identify $F^M$ with the $M \times 1$ matrices (column vectors)
and $F^N$ with the $N \times 1$ matrices (column vectors).
The matrix product of an $N \times M$ matrix $A$ with $x \in F^M$
defines a linear function from $F^M$ to $F^N$:
$$F^M \to F^N \quad \text{with} \quad x \mapsto A \cdot x =
\begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1M} \\
a_{21} & a_{22} & \cdots & a_{2M} \\
\vdots & \vdots & & \vdots \\
a_{N1} & a_{N2} & \cdots & a_{NM}
\end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_M \end{pmatrix}
=
\begin{pmatrix}
a_{11} x_1 + a_{12} x_2 + \ldots + a_{1M} x_M \\
a_{21} x_1 + a_{22} x_2 + \ldots + a_{2M} x_M \\
\vdots \\
a_{N1} x_1 + a_{N2} x_2 + \ldots + a_{NM} x_M
\end{pmatrix}$$
For an $L \times N$ matrix $B$ the product $B \cdot A$ describes the
composition of the linear functions $x \mapsto A \cdot x$ and $y \mapsto B \cdot y$.


Matrix Representation
Gauß Algorithm

Consider the following system of $N$ linear equations for $M$ variables:
$$Ax = \begin{pmatrix}
a_{11} x_1 + a_{12} x_2 + \ldots + a_{1M} x_M \\
a_{21} x_1 + a_{22} x_2 + \ldots + a_{2M} x_M \\
\vdots \\
a_{N1} x_1 + a_{N2} x_2 + \ldots + a_{NM} x_M
\end{pmatrix}
= \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_N \end{pmatrix}.$$
Solutions $x \in F^M$ are preserved by the following transformations
of the coefficients $(A, b)$, combined into an $N \times (M + 1)$ matrix:

(1) interchange two rows,

(2) add to one row a multiple of another row,

(3) multiply one row by a nonzero number.


Matrix Representation
Gauß Algorithm

Gauß Algorithm
After finitely many transformations (1)-(2) there exist finitely
many $0 < m_1 < m_2 < \ldots < m_r$ s.th. $a_{n,m_n} \neq 0$ for $1 \le n \le r$ and
$a_{n,m} = 0$ for $n > r$ as well as for $1 \le n \le r$ and $m < m_n$.

$$A = \begin{pmatrix}
0 & \cdots & 0 & a_{1,m_1} & * & \cdots & \cdots & \cdots & * \\
0 & \cdots & \cdots & 0 & a_{2,m_2} & * & \cdots & \cdots & * \\
\vdots & & & & & \ddots & & & \vdots \\
0 & \cdots & \cdots & \cdots & \cdots & 0 & a_{r,m_r} & \cdots & * \\
\vdots & & & & & & & & \vdots \\
0 & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & 0
\end{pmatrix}$$

For such $A$ and all $b \in F^N$ one can easily determine all solutions.
The number $r$ in the Gauß algorithm is the rank of $A$.
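
A minimal Python/numpy sketch of this elimination, using only transformations (1) and (2) with partial pivoting; the function name `echelon` and the tolerance are assumptions for illustration, and `np.linalg.matrix_rank` serves as a cross-check:

```python
import numpy as np

def echelon(A, tol=1e-12):
    """Reduce A to row echelon form using row swaps and row additions."""
    A = A.astype(float).copy()
    n_rows, n_cols = A.shape
    r = 0                                    # current pivot row; final value is the rank
    for m in range(n_cols):
        p = r + np.argmax(np.abs(A[r:, m]))  # pivot search (partial pivoting)
        if abs(A[p, m]) <= tol:
            continue                         # no pivot in this column
        A[[r, p]] = A[[p, r]]                # (1) interchange two rows
        for i in range(r + 1, n_rows):       # (2) add multiples of the pivot row
            A[i] -= (A[i, m] / A[r, m]) * A[r]
        r += 1
        if r == n_rows:
            break
    return A, r

A = np.array([[0.0, 2.0, 1.0],
              [1.0, 1.0, 1.0],
              [1.0, 3.0, 2.0]])              # third row = first + second
U, r = echelon(A)
print(r, np.linalg.matrix_rank(A))           # 2 2
```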


Matrix Representation
Gauß Algorithm

Let $A \in L(F^N, F^N)$ be an invertible $N \times N$ matrix.
The columns of the inverse matrix $A^{-1}$ are
the products of $A^{-1}$ with the columns of the identity matrix.
Therefore these columns are the solutions of the equations $Ax = b$,
where $b$ runs through the columns of the identity matrix.
Algorithm to calculate the inverse
Perform a combination of the transformations (1)-(3) on the
$N \times 2N$ matrix $(A, \mathbb{1})$ until it is of the form $(\mathbb{1}, A^{-1})$.
Example
$$\left(\begin{array}{rr|rr} 1 & 3 & 1 & 0 \\ 4 & 13 & 0 & 1 \end{array}\right)
\longrightarrow
\left(\begin{array}{rr|rr} 1 & 3 & 1 & 0 \\ 0 & 1 & -4 & 1 \end{array}\right)
\longrightarrow
\left(\begin{array}{rr|rr} 1 & 0 & 13 & -3 \\ 0 & 1 & -4 & 1 \end{array}\right)$$
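
The same elimination can be carried out numerically. A minimal Gauß-Jordan sketch in Python/numpy (the function name `invert` and the pivoting tolerance are illustrative assumptions); it reproduces the 2×2 example above, with `np.linalg.inv` only as a cross-check:

```python
import numpy as np

def invert(A, tol=1e-12):
    """Gauss-Jordan elimination on the augmented N x 2N matrix (A | I)."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])
    for j in range(n):
        p = j + np.argmax(np.abs(M[j:, j]))   # pivot row (partial pivoting)
        if abs(M[p, j]) <= tol:
            raise ValueError("matrix is not invertible")
        M[[j, p]] = M[[p, j]]                 # (1) interchange two rows
        M[j] /= M[j, j]                       # (3) scale the pivot row to 1
        for i in range(n):                    # (2) clear the rest of column j
            if i != j:
                M[i] -= M[i, j] * M[j]
    return M[:, n:]                           # right half is A^{-1}

A = np.array([[1.0, 3.0],
              [4.0, 13.0]])
print(invert(A))                                  # [[13. -3.] [-4.  1.]]
print(np.allclose(invert(A), np.linalg.inv(A)))   # True
```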


Matrix Representation
Matrix Representation of a Linear Function

Let $X$ be a vector space with basis $\{v_1, \ldots, v_M\}$.
Then $F^M \to X$, $\lambda \mapsto \lambda_1 v_1 + \ldots + \lambda_M v_M$ is an isomorphism.
Theorem
(Thm. 3.3.4) Every $M$-dimensional vector space is isomorphic to $F^M$.
Let $X$ and $Y$ be two finite-dimensional vector spaces with bases
$\{v_1, \ldots, v_M\}$ and $\{w_1, \ldots, w_N\}$, respectively.
With the Theorem we identify $X \simeq F^M$ and $Y \simeq F^N$.
Then all linear functions $T : X \to Y$ are uniquely determined by
the $N \times M$ matrices $A$ whose columns are the coefficients of
$T(v_1), \ldots, T(v_M)$ with respect to the basis $\{w_1, \ldots, w_N\}$:
$$T\left(\sum_{m=1}^{M} \lambda_m v_m\right) = \sum_{n=1}^{N} \mu_n w_n \quad \text{with} \quad \mu = A\lambda.$$
As a result $L(X, Y)$ is identified with the space of $N \times M$ matrices.
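
As an illustration, with the standard bases of $\mathbb{R}^2$ and $\mathbb{R}^3$ the matrix $A$ is assembled column by column from the images of the basis vectors; a small numpy sketch with an assumed map $T(x_1, x_2) = (x_1 + x_2,\ x_1,\ 2x_2)$:

```python
import numpy as np

# An assumed linear map T : R^2 -> R^3.
def T(x):
    return np.array([x[0] + x[1], x[0], 2 * x[1]])

# Columns of A are T(v_1), T(v_2) for the standard basis vectors v_1, v_2.
A = np.column_stack([T(v) for v in np.eye(2)])
print(A)
# [[1. 1.]
#  [1. 0.]
#  [0. 2.]]

x = np.array([3.0, -1.0])
print(np.allclose(A @ x, T(x)))   # True: the matrix represents T
```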


Matrix Representation.
Change of Basis

Let $\{a_1, \ldots, a_N\}$ and $\{b_1, \ldots, b_N\}$ be bases of a vector space $X$.
Both bases give rise to two different isomorphisms $X \simeq F^N$.
The coefficients $\beta = (\beta_1, \ldots, \beta_N)$ with respect to the second basis
are determined by the coefficients $\alpha$ with respect to the first basis by
the linear function
$$\alpha \mapsto \beta = P\alpha.$$
Here the columns of the matrix $P$ are the coefficients of the
vectors $a_1, \ldots, a_N$ with respect to the basis $\{b_1, \ldots, b_N\}$.
Since this function is a composition of the isomorphism
corresponding to the basis $\{a_1, \ldots, a_N\}$ with the inverse of the
isomorphism corresponding to the basis $\{b_1, \ldots, b_N\}$, it is an
isomorphism.
The columns of the inverse matrix $P^{-1}$ are the coefficients of the
vectors $b_1, \ldots, b_N$ with respect to the basis $\{a_1, \ldots, a_N\}$.


Matrix Representation.
Change of Basis

Now, let $T : X \to X$ be a linear function with matrix
representation $M_a$ in basis $\{a_1, \ldots, a_N\}$ and $M_b$ in basis
$\{b_1, \ldots, b_N\}$. Then, given an arbitrary vector $x$ in $X$ with
coordinates $\alpha$ and $\beta$ in the two bases, its image
$T(x)$ has coordinates $M_a\alpha$ in basis $\{a_1, \ldots, a_N\}$ and $M_b\beta$ in basis
$\{b_1, \ldots, b_N\}$. These two coordinate vectors are related by $\beta = P\alpha$:
$$M_b\,\beta = P\,M_a\,\alpha.$$
Because this relation holds for all vectors $\alpha$, it follows that
$$M_b = P\,M_a\,P^{-1}.$$
Definition
(Def. 3.5.2) Two matrices $A$ and $B$ are said to be similar if there
exists an invertible matrix $P$ such that $PAP^{-1} = B$.
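
A small numpy sketch of the relations $\beta = P\alpha$ and $M_b = P M_a P^{-1}$ in $\mathbb{R}^2$; the matrices $P$ and $M_a$ are assumed example data:

```python
import numpy as np

# Columns of P: coefficients of a_1, a_2 with respect to the basis {b_1, b_2}.
P = np.array([[1.0, 1.0],
              [0.0, 2.0]])
# Matrix representation of T in the basis {a_1, a_2}.
M_a = np.array([[2.0, 1.0],
                [0.0, 3.0]])

M_b = P @ M_a @ np.linalg.inv(P)    # representation of T in the basis {b_1, b_2}

alpha = np.array([1.0, 1.0])        # coordinates of some x in basis a
beta = P @ alpha                    # coordinates of the same x in basis b

# T(x) has coordinates M_a alpha in basis a and M_b beta in basis b,
# and these are again related by P.
print(np.allclose(P @ (M_a @ alpha), M_b @ beta))   # True
```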


Determinant.
Permutations

A permutation of $N$ elements is a bijective function
$$\sigma : \{1, \ldots, N\} \to \{1, \ldots, N\}.$$
These permutations form a group with $N! = 1 \cdot 2 \cdots N$ elements.
Any permutation is a composition of transpositions, which
interchange two elements.
A permutation is called even or odd if it is a composition of an
even or odd number of transpositions, respectively.
Example: Permutations of 2 elements:
$(1, 2) \mapsto (1, 2)$ even

$(1, 2) \mapsto (2, 1)$ odd

Permutations of 3 elements:
$(1, 2, 3) \mapsto (1, 2, 3),\ (2, 3, 1),\ (3, 1, 2)$ even
$(1, 2, 3) \mapsto (1, 3, 2),\ (2, 1, 3),\ (3, 2, 1)$ odd


Determinant.
For any $N \times N$ matrix $A$ we define the determinant of $A$ as
$$\det(A) = \sum_{\sigma} (-1)^{|\sigma|}\, a_{1\sigma(1)} \cdots a_{N\sigma(N)}
\quad \text{with} \quad
(-1)^{|\sigma|} = \begin{cases} 1 & \text{for even } \sigma \\ -1 & \text{for odd } \sigma. \end{cases}$$

If we fix all rows of the matrix $A$ with the exception of one
row, then $\det(A)$ depends linearly on the remaining row.
If we interchange any two rows of $A$, then $\det(A)$ changes sign.
We say that $\det(A)$ is multilinear and totally antisymmetric with
respect to the rows of $A$.
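
For small $N$ the permutation formula can be implemented directly and compared with numpy's determinant; a sketch in which `itertools` supplies the permutations and the sign is obtained from the number of inversions:

```python
import numpy as np
from itertools import permutations

def det_leibniz(A):
    """det(A) = sum over permutations sigma of sign(sigma) * a_{1 sigma(1)} ... a_{N sigma(N)}."""
    N = A.shape[0]
    total = 0.0
    for sigma in permutations(range(N)):
        # (-1)^(number of inversions): +1 for even, -1 for odd permutations.
        inversions = sum(sigma[i] > sigma[j]
                         for i in range(N) for j in range(i + 1, N))
        term = (-1) ** inversions
        for n in range(N):
            term *= A[n, sigma[n]]
        total += term
    return total

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
B = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 2.0]])
print(det_leibniz(A), np.linalg.det(A))       # both 8.0
print(np.isclose(det_leibniz(A @ B), det_leibniz(A) * det_leibniz(B)))  # True
```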


Determinant.
Theorem
A linear function between finite-dimensional vector spaces is
invertible iff the vector spaces have the same dimension and the
matrix representing the function has nonzero determinant.
Proof The determinant $\det(A)$ does not change under transformations
(2). Under transformations (1), $\det(A)$ changes sign. Therefore it
suffices to prove the statement for matrices of the form obtained
by the Gauß Algorithm. The only permutation $\sigma$ of $\{1, \ldots, N\}$ with
$\sigma(n) \ge n$ for all $n = 1, \ldots, N$ is the identity. Hence the determinant of an
upper triangular matrix is the product of the elements on the
diagonal. Therefore $\det(A)$ vanishes iff the rank is not $N$.
For two $N \times N$ matrices $A$ and $B$ we have (without proof)
$$\det(AB) = \det(A)\det(B).$$


Linear Mappings.
Bounded Linear Mappings

Definition
(Bounded linear function, Def. 3.4.2) Let $T$ be a linear function
between two normed vector spaces $X$ and $Y$. We say that $T$ is
bounded if there exists some real number $B$ such that
$$\|Tx\| \le B\|x\| \quad \forall x \in X.$$

Theorem
(Thm. 3.4.3) Let $X$ and $Y$ be normed vector spaces. A linear
function $T : X \to Y$ is continuous if and only if it is bounded.
Proof $T$ bounded $\implies \|T(x) - T(y)\| = \|T(x - y)\| \le B\|x - y\|$.
$T$ continuous in $0 \implies \exists\, \delta > 0$ s.th. $\|T(x)\| \le 1$ for all $x \in B_\delta(0)$
$$\implies \|T(x)\| = \frac{2\|x\|}{\delta}\left\|T\!\left(\frac{\delta\,x}{2\|x\|}\right)\right\| \le \frac{2}{\delta}\,\|x\| \quad \forall\, x \neq 0.$$


Linear Mappings.
Finite-dimensional Vector Spaces

Definition
(Def. 2.10.1) On a real vector space $X$ two norms $\|\cdot\|_1$ and $\|\cdot\|_2$
are called equivalent if there exist $C_1, C_2 > 0$ s.th.
$$\|x\|_2 \le C_1 \|x\|_1 \quad \text{and} \quad \|x\|_1 \le C_2 \|x\|_2 \qquad \forall x \in X.$$

Theorem
(Thm. 3.4.6) All norms on a finite-dimensional vector space are equivalent.
Proof For a basis $\{v_1, \ldots, v_n\}$ of $X$ the isomorphism $\mathbb{R}^n \simeq X$ is
continuous: $\|\alpha_1 v_1 + \ldots + \alpha_n v_n\| \le (\|v_1\| + \ldots + \|v_n\|)\,\|\alpha\|_2$.
Moreover $\min\{\|\alpha_1 v_1 + \ldots + \alpha_n v_n\|;\ \|\alpha\|_2 = 1\} \le \|\alpha_1 v_1 + \ldots + \alpha_n v_n\| / \|\alpha\|_2$ for all $\alpha \neq 0$.
Theorem
(Thm. 3.4.5) A linear function from a finite-dimensional normed
vector space into a normed vector space is continuous.


Linear Mappings.
Norm of Linear Mappings

For a bounded linear function $T : X \to Y$ between normed vector
spaces we set
$$\|T\| = \sup\left\{ \frac{\|T(x)\|}{\|x\|} \, ; \ x \in X \text{ and } x \neq 0 \right\}.$$
Theorem
(Thm. 3.4.7) For a bounded linear function $T : X \to Y$ we have
$$\|T(x)\| \le \|T\|\,\|x\| \quad \forall x \in X.$$

Theorem
(Thm. 3.4.8) For two normed vector spaces X and Y the space of
bounded linear functions L(X , Y ) is a normed vector space.
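
For a matrix acting on $\mathbb{R}^n$ with the Euclidean norms, this operator norm equals the largest singular value; a small numpy sketch comparing it with a crude estimate of the supremum over random trial vectors (sample size arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))

# Operator norm for the Euclidean norms: the largest singular value of A.
op_norm = np.linalg.norm(A, 2)

# Lower bound for sup ||Ax|| / ||x|| from random trial vectors.
xs = rng.standard_normal((10000, 3))
ratios = np.linalg.norm(xs @ A.T, axis=1) / np.linalg.norm(xs, axis=1)

print(op_norm, ratios.max())              # the estimate approaches the norm from below
print(ratios.max() <= op_norm + 1e-12)    # True: no ratio exceeds ||A||
```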


Eigenvalues and Eigenvectors.


Definition
(Def. 3.6.1) Let $X$ be a vector space and $T \in L(X, X)$ a linear
function. If there exists a number $\lambda$ (real or complex) and a
non-zero vector $e \in X$ s.th.
$$Te = \lambda e,$$
then we say that $\lambda$ is an eigenvalue of $T$,
and $e$ is called an eigenvector of the eigenvalue $\lambda$.
If $X$ is finite-dimensional with basis $\{v_1, \ldots, v_n\}$ and $T$
corresponds to the $n \times n$ matrix $A$, then the coefficients of
$e = \epsilon_1 v_1 + \ldots + \epsilon_n v_n$ obey $A\epsilon = \lambda\epsilon$.
The subspace of $X$ containing all not necessarily non-zero
eigenvectors of an eigenvalue $\lambda$ is called its eigenspace.
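
Numerically, eigenvalues and eigenvectors of the matrix representation can be computed with numpy; a small sketch verifying $Ae = \lambda e$ for an assumed 2×2 example:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Columns of `vectors` are eigenvectors e, entries of `values` the eigenvalues.
values, vectors = np.linalg.eig(A)
print(values)                              # [3. 1.] (order may vary)
for lam, e in zip(values, vectors.T):
    print(np.allclose(A @ e, lam * e))     # True, True
```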


Eigenvalues and Eigenvectors.


Characteristic Polynomial

Let $X$ be an $N$-dimensional vector space and $A$ the $N \times N$ matrix
representation of $T \in L(X, X)$ with respect to a basis.
Theorem
On a finite-dimensional vector space $X$ a number $\lambda$ is an
eigenvalue of the linear function $T \in L(X, X)$ iff the
corresponding matrix $A$ obeys $\det(\lambda\mathbb{1} - A) = 0$.
Definition
The function $\lambda \mapsto \det(\lambda\mathbb{1} - A)$ is a polynomial of degree $N$. It is
called the characteristic polynomial.
Due to the fundamental theorem of algebra it has exactly $N$
not necessarily different complex roots (for real and complex
coefficients).


Eigenvalues and Eigenvectors.


Theorem
(Thm. 3.6.6) Let $A = [a_{ij}]$ be an $n \times n$ matrix. Then
the product of the eigenvalues of $A$ is equal to its
determinant, that is,
$$\det A = \prod_{i=1}^{n} \lambda_i,$$
the sum of the eigenvalues of $A$ is equal to its trace, that is,
$$\operatorname{tr} A = \sum_{i=1}^{n} a_{ii} = \sum_{i=1}^{n} \lambda_i,$$
and if $A$ is a triangular matrix, then its eigenvalues are the
coefficients on the diagonal of the matrix (i.e., $\lambda_i = a_{ii}$).
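
A quick numerical check of all three statements (a sketch; the matrix is random and the seed arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
eigs = np.linalg.eigvals(A)                            # complex in general

print(np.isclose(np.prod(eigs), np.linalg.det(A)))     # True: product = determinant
print(np.isclose(np.sum(eigs), np.trace(A)))           # True: sum = trace

# For a triangular matrix the eigenvalues are the diagonal entries.
T = np.triu(A)
print(np.allclose(np.sort_complex(np.linalg.eigvals(T)), np.sort(np.diag(T))))  # True
```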


Eigenvalues and Eigenvectors.


A matrix $A$ is called diagonalizable if it is similar to a diagonal
matrix, i.e. there exists an invertible matrix $P$ s.th. $P^{-1}AP$ is diagonal.
Theorem
(Thm. 3.6.7) An $n \times n$ matrix $A$ is diagonalizable iff it has $n$
linearly independent eigenvectors. Moreover, the diagonalizing
matrix is the matrix $E = (e_1, \ldots, e_n)$ whose columns are the
eigenvectors of $A$, and the resulting diagonal matrix is the matrix
$\Lambda = \operatorname{diag}(\lambda_1, \ldots, \lambda_n)$, with the eigenvalues of $A$ on the principal
diagonal and zeros elsewhere. That is, $A = E\Lambda E^{-1}$.
Proof $e_1, \ldots, e_n$ are eigenvectors with eigenvalues $\lambda_1, \ldots, \lambda_n$ iff
$A(e_1, \ldots, e_n) = (e_1, \ldots, e_n)\operatorname{diag}(\lambda_1, \ldots, \lambda_n)$.
$(e_1, \ldots, e_n)$ is invertible iff $e_1, \ldots, e_n$ are linearly independent.
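
A small numpy sketch of the factorization $A = E\Lambda E^{-1}$ for an assumed 2×2 matrix with distinct eigenvalues:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])                 # eigenvalues 5 and 2

lam, E = np.linalg.eig(A)                  # columns of E are eigenvectors
Lambda = np.diag(lam)

print(np.allclose(A, E @ Lambda @ np.linalg.inv(E)))    # True: A = E Lambda E^{-1}
print(np.allclose(np.linalg.inv(E) @ A @ E, Lambda))    # True: E^{-1} A E is diagonal
```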


Eigenvalues and Eigenvectors.


Theorem
(Thm. 3.6.8) Let $A$ be an $N \times N$ matrix. If the $N$ eigenvalues of $A$
are all distinct, then its eigenvectors $e_1, \ldots, e_N$ are linearly
independent, and therefore $A$ is diagonalizable.
Proof If $\det(\lambda\mathbb{1} - A) = 0$ has $N$ pairwise different roots
$\lambda_1, \ldots, \lambda_N$, then there exist $N$ pairwise different eigenspaces and
therefore also $N$ pairwise different eigenvectors $e_1, \ldots, e_N$. If
$\sum_{n=1}^{N} \mu_n e_n = 0$, then for all polynomials $p$
$$p(A)\left(\sum_{n=1}^{N} \mu_n e_n\right) = \sum_{n=1}^{N} p(\lambda_n)\,\mu_n e_n = 0.$$
If we insert the polynomials $p(\lambda) = \prod_{n \neq l} (\lambda - \lambda_n)$ we see that all
$\mu_1, \ldots, \mu_N$ vanish. The foregoing Theorem implies the statement.


Eigenvalues and Eigenvectors.

Theorem
(Thm. 3.6.5) Let $A$ be a real $n \times n$ matrix. Then complex
eigenvalues of $A$, if they exist, come in conjugate pairs. Moreover,
the corresponding eigenvectors also come in conjugate pairs.
Proof A real matrix $A$ can be considered as a complex matrix.
Let $\lambda$ be an eigenvalue of the complex matrix with eigenvector $e$.
Let $\bar{A}$ and $\bar{e}$ denote the matrices with complex conjugate entries.
Then $Ae = \lambda e$ and $\bar{A} = A$ imply $A\bar{e} = \bar{A}\bar{e} = \bar{\lambda}\bar{e}$.
Therefore $\bar{\lambda}$ is an eigenvalue with eigenvector $\bar{e}$.
Note $\lambda$ is real iff there exists a real non-zero eigenvector $e$.
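
A plane rotation illustrates the theorem: its eigenvalues are $\cos\theta \pm i\sin\theta$, and numpy returns the eigenvalues and eigenvectors in conjugate pairs (a sketch; the angle is arbitrary):

```python
import numpy as np

theta = 0.5
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # real rotation matrix

lam, E = np.linalg.eig(A)
print(lam)                                        # cos(theta) +/- i sin(theta)
print(np.isclose(lam[0], np.conj(lam[1])))        # True: eigenvalues are conjugate
print(np.allclose(E[:, 0], np.conj(E[:, 1])))     # True: eigenvectors are conjugate
```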
