
1 Matrix Algebra
What to read:
Appendix A in Greene (7th) (or Appendix A in Greene (6th)).
Also, Appendix D in Wooldridge (5th) is useful.[1]
Key concepts:
Identity matrix, diagonal matrix, symmetric matrix, idempotent matrix, inverse of a
matrix, inverse of a product of matrices, inverse of a partitioned matrix, transpose of a
matrix, transpose of a product of matrices, trace of a matrix.
Linearly independent vectors, orthogonal vectors, characteristic roots (eigenvalues), characteristic vectors (eigenvectors), rank of a matrix.

[1] Greene refers to Econometric Analysis by Greene; Wooldridge refers to Introductory Econometrics: A Modern Approach by Wooldridge.

1.1 Definition and related concepts

1. Matrix: a rectangular array of numbers or elements arranged in rows and columns. The
   dimension of a matrix is m × n (read "m by n") where m = the number of rows and n = the
   number of columns.

   $$A = [a_{ij}] = \begin{bmatrix} a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\ a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \\ a_{31} & a_{32} & a_{33} & \cdots & a_{3n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & a_{m3} & \cdots & a_{mn} \end{bmatrix}$$

   where a_{ij} represents the element in the ith row and the jth column.
2. Scalar: m = n = 1, e.g., A = [2].

3. Vector: a column vector is m × 1; a row vector is 1 × n.

   $$A = \begin{bmatrix} 6 \\ 3 \\ 0 \end{bmatrix}, \qquad B = \begin{bmatrix} 4 & 8 & 7 & 0 \end{bmatrix}$$

4. Square matrix: m = n.

   $$B = \begin{bmatrix} 1 & 2 & 3 \\ 3 & 4 & 5 \\ 3 & 6 & 7 \end{bmatrix}$$
5. Identity matrix, I: (1) m = n, (2) diagonal elements = 1 and off-diagonal elements = 0.

   $$I_2 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \qquad I_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$

6. Diagonal matrix: (1) m = n, (2) off-diagonal elements = 0.

   $$A = \begin{bmatrix} 1 & 0 \\ 0 & 4 \end{bmatrix}, \qquad B = \begin{bmatrix} b_{11} & 0 & 0 \\ 0 & b_{22} & 0 \\ 0 & 0 & b_{33} \end{bmatrix}$$


7. Triangular matrix: a matrix with zeros either above or below the diagonal elements.

8. Symmetric matrix: a matrix in which a_{ij} is equal to a_{ji}. In other words, the transpose of A
   is equal to itself: A' = A.

9. Idempotent matrix: a matrix M is idempotent if M = MM.


10. Inverse

    (a) Definition: Let A^{-1} be the inverse of A. Then A^{-1} satisfies the following:

        A^{-1}A = I  and  AA^{-1} = I.

    (b) A matrix is nonsingular if and only if its inverse exists.

    (c) How to find the inverse?

        $$A^{-1} = \frac{1}{|A|}\left(\operatorname{Cofactor}(A)\right)',$$

        where
        |A| is the determinant of A,
        Cofactor(A) = [c_{ij}] with c_{ij} = (-1)^{i+j} |M_{ij}|,
        M_{ij} = the minor of the element a_{ij}, and

        $$|A| = (-1)^{i+1} a_{i1}|M_{i1}| + \cdots + (-1)^{i+n} a_{in}|M_{in}|,$$

        for any ith row. |A| can also be obtained by picking any column and carrying out a
        similar calculation to the above.

    (d) The inverse of a 2 × 2 matrix:

        $$A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}. \quad \text{Then, } A^{-1} = \frac{1}{ad - bc}\begin{bmatrix} d & -b \\ -c & a \end{bmatrix}.$$
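The 2 × 2 cofactor formula can be checked numerically with NumPy (a small sketch; the matrix entries here are arbitrary illustrative values, not from the notes):

```python
import numpy as np

# An arbitrary nonsingular 2x2 matrix (hypothetical example values)
a, b, c, d = 3.0, 1.0, 2.0, 4.0
A = np.array([[a, b], [c, d]])

det = a * d - b * c                       # |A| = ad - bc
A_inv = (1 / det) * np.array([[d, -b],    # transpose of the cofactor matrix,
                              [-c, a]])   # divided by the determinant

# The cofactor formula agrees with the general-purpose routine
assert np.allclose(A_inv, np.linalg.inv(A))
assert np.allclose(A @ A_inv, np.eye(2))  # A A^{-1} = I
```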


11. More on a (block-)diagonal matrix: inverse and square root.

    Inverse of a diagonal matrix:

    $$A = \begin{bmatrix} a & 0 & 0 \\ 0 & b & 0 \\ 0 & 0 & c \end{bmatrix}, \qquad A^{-1} = \begin{bmatrix} 1/a & 0 & 0 \\ 0 & 1/b & 0 \\ 0 & 0 & 1/c \end{bmatrix}$$

    Inverse of a block-diagonal matrix:

    $$A = \begin{bmatrix} a & 0 & 0 \\ 0 & b & c \\ 0 & d & e \end{bmatrix}$$

    Square root of a diagonal matrix:

    $$A = \begin{bmatrix} a & 0 & 0 \\ 0 & b & 0 \\ 0 & 0 & c \end{bmatrix}, \qquad A^{1/2} = \begin{bmatrix} \sqrt{a} & 0 & 0 \\ 0 & \sqrt{b} & 0 \\ 0 & 0 & \sqrt{c} \end{bmatrix}$$
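For a diagonal matrix, both operations reduce to element-wise operations on the diagonal, which is easy to verify (a sketch with arbitrary positive entries):

```python
import numpy as np

# Hypothetical positive diagonal entries
a, b, c = 4.0, 9.0, 25.0
A = np.diag([a, b, c])

A_inv  = np.diag([1 / a, 1 / b, 1 / c])                 # invert each diagonal entry
A_half = np.diag([np.sqrt(a), np.sqrt(b), np.sqrt(c)])  # element-wise square roots

assert np.allclose(A @ A_inv, np.eye(3))  # A A^{-1} = I
assert np.allclose(A_half @ A_half, A)    # A^{1/2} A^{1/2} = A
```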


12. Some general rules for matrix manipulation: for conformable matrices A, B, C,

    Associative law: (AB)C = A(BC)
    Distributive law: A(B + C) = AB + AC
    Transpose of a product: (AB)' = B'A', (ABC)' = C'B'A', etc.
    (AB)^{-1} = B^{-1}A^{-1}
    (A')' = A
    (A^{-1})^{-1} = A
    (A')^{-1} = (A^{-1})'
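The order-reversal rules for transposes and inverses of products can be checked on random matrices (a sketch; the seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)  # arbitrary seed for a reproducible sketch
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

# Transpose of a product reverses the order: (AB)' = B'A'
assert np.allclose((A @ B).T, B.T @ A.T)

# So does the inverse of a product: (AB)^{-1} = B^{-1} A^{-1}
assert np.allclose(np.linalg.inv(A @ B),
                   np.linalg.inv(B) @ np.linalg.inv(A))
```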


13. Linear dependence:

    Definition 1.1. A set of vectors is linearly dependent if any one of the vectors in the set
    can be written as a linear combination of the others.

    Definition 1.2. A set of vectors a_1, ..., a_k is linearly independent if and only if the only
    solution to

    $$\lambda_1 a_1 + \lambda_2 a_2 + \cdots + \lambda_k a_k = 0$$

    is λ_1 = λ_2 = ... = λ_k = 0.

    Example:

    $$a_1 = \begin{bmatrix} 1 \\ 2 \end{bmatrix}, \qquad a_2 = \begin{bmatrix} 3 \\ 4 \end{bmatrix}, \qquad a_3 = \begin{bmatrix} 4 \\ 2 \end{bmatrix}$$

    Are a_1 and a_3 independent?
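One way to check such a question numerically (a sketch, using the entries as printed above): two vectors are independent exactly when the matrix that stacks them as columns has rank 2.

```python
import numpy as np

a1 = np.array([1.0, 2.0])
a3 = np.array([4.0, 2.0])

# Stack the vectors as columns; full column rank means independence
M = np.column_stack([a1, a3])
rank = np.linalg.matrix_rank(M)

assert rank == 2  # rank 2 => a1 and a3 are linearly independent
```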


Definition 1.3. Two nonzero vectors a and b are orthogonal, written a ⊥ b, if and only if
a'b = b'a = 0.
Consider

$$a_1 = \begin{bmatrix} 1 \\ 2 \end{bmatrix}, \qquad a_2 = \begin{bmatrix} 3 \\ 4 \end{bmatrix}, \qquad a_3 = \begin{bmatrix} 4 \\ 2 \end{bmatrix}, \qquad b = \begin{bmatrix} 4 \\ 1 \end{bmatrix}.$$

Are a_1 and a_3 orthogonal?

What about a_1 and b? Are they orthogonal?
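Orthogonality is a one-line check: compute the inner product and compare it with zero. A sketch using the entries as printed above (a minus sign lost in scanning would change a number, but not the method):

```python
import numpy as np

a1 = np.array([1.0, 2.0])
a3 = np.array([4.0, 2.0])
b  = np.array([4.0, 1.0])

# Orthogonality means the inner product a'b equals zero
dot_a1_a3 = a1 @ a3   # 1*4 + 2*2 = 8
dot_a1_b  = a1 @ b    # 1*4 + 2*1 = 6

assert dot_a1_a3 != 0  # with these entries, a1 and a3 are not orthogonal
assert dot_a1_b != 0
```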


14. Trace: the trace of a K × K matrix is the sum of its diagonal elements.

    $$\operatorname{tr}(A) = \sum_{k=1}^{K} a_{kk}$$

    Some rules are, for a constant c and conformable matrices A, B, C,

    tr(cA) = c tr(A)
    tr(A') = tr(A)
    tr(A + B) = tr(A) + tr(B)
    tr(AB) = tr(BA)
    tr(ABC) = tr(BCA) = tr(CAB)
    tr(A) = the sum of the characteristic roots of A
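The cyclic rule tr(AB) = tr(BA) holds even when AB and BA have different dimensions, which a quick numerical sketch makes concrete (arbitrary random matrices):

```python
import numpy as np

rng = np.random.default_rng(1)  # arbitrary seed
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 3))

# tr(AB) = tr(BA), even though AB is 3x3 while BA is 4x4
assert np.isclose(np.trace(A @ B), np.trace(B @ A))
```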


15. Characteristic roots (eigenvalues):

    Let A be n × n. If Ac = λc where c is n × 1, c'c = 1, and λ is a scalar, then c is an
    eigenvector of A and λ is an eigenvalue of A.

    Calculation:

    $$Ac = \lambda c \;\Rightarrow\; (A - \lambda I)c = 0 \;\Rightarrow\; |A - \lambda I| = 0,$$

    which means (A - λI) is singular.

    Example: Let

    $$A = \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}$$

    (a) Find the eigenvalues of A.
    (b) Find the eigenvectors of A.
    (c) Verify that the inner product of eigenvectors from different eigenvalues is zero (that is, the
        vectors are orthogonal). Note that the eigenvectors are said to be a set of orthonormal
        vectors (orthogonal + normalized).
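The example can be verified with NumPy (a sketch; `eigh` is the standard routine for symmetric matrices and returns eigenvalues in ascending order, with eigenvectors as columns):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

lam, C = np.linalg.eigh(A)   # eigenvalues ascending; eigenvectors in columns of C

assert np.allclose(lam, [1.0, 3.0])               # roots of |A - lambda I| = 0
assert np.allclose(A @ C[:, 0], lam[0] * C[:, 0]) # Ac = lambda c
assert np.isclose(C[:, 0] @ C[:, 1], 0.0)         # distinct-eigenvalue vectors: orthogonal
assert np.isclose(C[:, 0] @ C[:, 0], 1.0)         # and normalized (c'c = 1)
```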


Facts:
(a) A symmetric matrix has real eigenvalues.
(b) |A| = the product of the eigenvalues of A.
(c) A singular matrix has at least one eigenvalue = 0.
(d) tr(A) = the sum of all the eigenvalues of A.
(e) The scale of an eigenvector is arbitrary: A(kc) = λ(kc) for any constant k. A
    normalization of c'c = 1 is required.
(f) If a matrix is psd, its eigenvalues are ≥ 0. For a pd matrix, its eigenvalues are > 0.

    Definition 1.4. A matrix A is positive definite (or positive semi-definite) if and only if
    x'Ax > 0 (or x'Ax ≥ 0) for all nonzero x, where A is p × p and x is p × 1.

(g) The eigenvectors of a symmetric matrix that correspond to distinct eigenvalues are
    orthogonal.


(h) Note that C' = C^{-1} since C'C = I.

(i) Diagonalization of a matrix: let A be symmetric with distinct eigenvalues. Let C be a
    matrix with all the eigenvectors of A such that C'C = I. Then,

    $$C'AC = \Lambda,$$

    where Λ = a diagonal matrix with the eigenvalues in the same order as in C. Also,

    $$A = C \Lambda C'.$$

(j) The eigenvalues of a symmetric idempotent matrix are 0 or 1.
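The diagonalization identities can be confirmed numerically (a sketch, reusing the symmetric example matrix from item 15):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

lam, C = np.linalg.eigh(A)   # C holds orthonormal eigenvectors as columns
Lam = np.diag(lam)

assert np.allclose(C.T @ C, np.eye(2))  # C'C = I, so C' = C^{-1}
assert np.allclose(C.T @ A @ C, Lam)    # C'AC = Lambda
assert np.allclose(C @ Lam @ C.T, A)    # A = C Lambda C'
```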


16. Factoring a matrix: in some applications, we need a matrix P such that

    $$P'P = A^{-1}.$$

    One choice for P is

    $$P = \Lambda^{-1/2} C',$$

    where C and Λ come from the diagonalization of A above.

    Any positive definite symmetric matrix A can also be written as

    $$A = LU,$$

    where L = a lower triangular matrix and U = L', an upper triangular matrix. This is the
    Cholesky decomposition of A. It is useful for finding the inverse of positive definite
    matrices.
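Both factorizations are one call in NumPy (a sketch; the positive definite matrix here is a hypothetical example):

```python
import numpy as np

# A hypothetical symmetric positive definite matrix
A = np.array([[4.0, 2.0],
              [2.0, 3.0]])

L = np.linalg.cholesky(A)   # lower triangular Cholesky factor
U = L.T                     # upper triangular, U = L'

assert np.allclose(L @ U, A)       # A = LU
assert np.allclose(L, np.tril(L))  # L really is lower triangular

# P = Lambda^{-1/2} C' satisfies P'P = A^{-1}
lam, C = np.linalg.eigh(A)
P = np.diag(lam ** -0.5) @ C.T
assert np.allclose(P.T @ P, np.linalg.inv(A))
```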

17. Rank:

    Definition 1.5. The column rank of a matrix is equal to the largest number of linearly
    independent column vectors that it contains.

    When a matrix is said to have full column rank, the column rank of the matrix is equal
    to the number of columns that the matrix contains. The row rank and full row rank are
    defined similarly.

    The rank of a matrix refers to the smaller of the number of linearly independent rows or
    columns of the matrix.

    For a matrix A, rank(A) = rank(A') ≤ min(the number of rows, the number of columns).
    rank(A) = rank(A') = rank(A'A) = rank(AA')
    The determinant of a matrix is nonzero if and only if it has full rank.
    The rank of a symmetric matrix is the number of nonzero characteristic roots it contains.
    The rank of a matrix A equals the number of nonzero characteristic roots in A'A.

    Consider

    $$C = \begin{bmatrix} 1 & 4 \\ 2 & 2 \end{bmatrix}$$

Consider the following matrices,

$$A = \begin{bmatrix} 1 & 2 & 3 \\ 3 & 6 & 9 \\ 4 & 8 & 4 \end{bmatrix}, \qquad X = \begin{bmatrix} 1 & 2 \\ 1 & 0 \\ 1 & 1 \\ 1 & 1 \end{bmatrix}.$$
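These ranks can be checked with `np.linalg.matrix_rank` (a sketch, using the entries as printed above):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [3.0, 6.0, 9.0],
              [4.0, 8.0, 4.0]])
X = np.array([[1.0, 2.0],
              [1.0, 0.0],
              [1.0, 1.0],
              [1.0, 1.0]])

# The second column of A is twice the first, so A cannot have full rank
assert np.linalg.matrix_rank(A) == 2
assert np.isclose(np.linalg.det(A), 0.0)  # rank-deficient => zero determinant

# X is 4x2 with two independent columns: full column rank
assert np.linalg.matrix_rank(X) == 2
assert np.linalg.matrix_rank(X) == np.linalg.matrix_rank(X.T @ X)  # rank(X) = rank(X'X)
```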


18. Partitioned matrices: Let's group the elements of A in submatrices.

    $$A = \left[\begin{array}{cc|c} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ \hline a_{31} & a_{32} & a_{33} \end{array}\right] = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}$$

    Then, we call A a partitioned matrix.
Let B be a partitioned matrix,

$$B = \begin{bmatrix} E & F \\ G & H \end{bmatrix}.$$

Then,

$$B^{-1} = \begin{bmatrix} D^{-1} & -D^{-1}FH^{-1} \\ -H^{-1}GD^{-1} & H^{-1} + H^{-1}GD^{-1}FH^{-1} \end{bmatrix} = \begin{bmatrix} E^{-1} + E^{-1}FJ^{-1}GE^{-1} & -E^{-1}FJ^{-1} \\ -J^{-1}GE^{-1} & J^{-1} \end{bmatrix},$$

where

$$D = E - FH^{-1}G, \qquad J = H - GE^{-1}F.$$
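The partitioned-inverse formula can be verified on random blocks (a sketch; the diagonal shifts simply keep the blocks well-conditioned):

```python
import numpy as np

rng = np.random.default_rng(2)  # arbitrary seed
E = rng.standard_normal((2, 2)) + 3 * np.eye(2)  # shift keeps blocks nonsingular
F = rng.standard_normal((2, 2))
G = rng.standard_normal((2, 2))
H = rng.standard_normal((2, 2)) + 3 * np.eye(2)

B = np.block([[E, F], [G, H]])

Hi = np.linalg.inv(H)
D = E - F @ Hi @ G              # D = E - F H^{-1} G
Di = np.linalg.inv(D)

B_inv = np.block([[Di,           -Di @ F @ Hi],
                  [-Hi @ G @ Di, Hi + Hi @ G @ Di @ F @ Hi]])

assert np.allclose(B_inv, np.linalg.inv(B))  # matches the direct inverse
```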

1.2 Practice Exercise for Matrix Algebra

1. Consider the linear regression model, for i = 1, ..., n,

   $$y_i = \beta_1 x_{i1} + \beta_2 x_{i2} + \cdots + \beta_K x_{iK} + \varepsilon_i = \sum_{j=1}^{K} \beta_j x_{ij} + \varepsilon_i,$$

   which can be written in matrix form as y = Xβ + ε where y is n × 1, X is n × K, β is K × 1,
   and ε is n × 1. For the following expressions, find their dimensions, distribute the transpose
   and do the multiplication, where appropriate:

   (a) $(\beta' X' y)'$
   (b) $(X'X\beta - X'y)'$
   (c) $(y - X\beta)'(y - X\beta)$
2. Compute A^{-1}, A^{1/2}, and A^{-1/2} where

   $$A = \begin{bmatrix} 4 & 0 & 0 & 0 \\ 0 & 16 & 0 & 0 \\ 0 & 0 & 36 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
3. Consider B in partitioned form:

   $$B = \left[\begin{array}{cc|cc} 4 & 2 & 0 & 0 \\ 2 & 3 & 0 & 0 \\ \hline 1 & 0 & 1 & 0 \\ 1 & 0 & 0 & 1 \end{array}\right] = \begin{bmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{bmatrix}$$

   And let

   $$B^{-1} = \begin{bmatrix} B^{11} & B^{12} \\ B^{21} & B^{22} \end{bmatrix}.$$

   Find B^{11} and B^{22}.
4. Consider

   $$C = \begin{bmatrix} 13 & 6 \\ 5 & 0 \end{bmatrix}$$

   (a) Find the characteristic roots (eigenvalues) of C. Is the product of the eigenvalues equal to the
       determinant of C?
   (b) Determine if C is positive definite, positive semi-definite, negative definite or negative
       semi-definite.
   (c) Find the trace of the matrix C. Is the trace equal to the sum of all the eigenvalues of
       C?


5. Let R_i be the return on asset i. Suppose that Mike holds a portfolio with three assets, C, D,
   F, and his portfolio return can be expressed as

   $$R_p = \frac{1}{3}R_C + \frac{1}{3}R_D + \frac{1}{3}R_F$$

   Assume the following information:

   $$\begin{bmatrix} E(R_C) \\ E(R_D) \\ E(R_F) \end{bmatrix} = \begin{bmatrix} .03 \\ .02 \\ .04 \end{bmatrix}, \qquad \Sigma = \begin{bmatrix} .05 & .2 & .1 \\ .2 & .3 & .1 \\ .1 & .1 & .4 \end{bmatrix}$$

   where Σ is the variance-covariance matrix of (R_C, R_D, R_F)'.

   (a) Compute the expected return for Mike's portfolio.
   (b) What is the variance for Mike's portfolio?
   (c) Let R be a return vector so that R = (R_C, R_D, R_F)'. Let x be the vector denoting the share of
       each asset in a portfolio so that, for example, in Mike's portfolio, x = (1/3, 1/3, 1/3)'.
       Express Mike's portfolio return R_p in terms of R and x. Write the variance of Mike's
       portfolio using x and Σ.
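Answers to this exercise can be checked numerically (a sketch, using the numbers as printed above): the matrix forms x'E(R) and x'Σx must agree with the element-by-element expansions.

```python
import numpy as np

mu = np.array([.03, .02, .04])      # E(R_C), E(R_D), E(R_F), as printed
Sigma = np.array([[.05, .2, .1],
                  [.2,  .3, .1],
                  [.1,  .1, .4]])
x = np.array([1/3, 1/3, 1/3])       # equal portfolio shares

exp_ret = x @ mu                    # E(R_p) = x' E(R)
var_ret = x @ Sigma @ x             # Var(R_p) = x' Sigma x

# The matrix form matches the element-by-element expansion over all covariances
manual_var = sum(x[i] * x[j] * Sigma[i, j] for i in range(3) for j in range(3))
assert np.isclose(var_ret, manual_var)
assert np.isclose(exp_ret, (mu[0] + mu[1] + mu[2]) / 3)
```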
