It is defined in the same way as |x|, satisfies the same properties as |x|, and the double bars are just for notational convenience.
Determinants
The determinant, denoted det(A), is a scalar value associated with a square matrix A ∈ R^{n×n}. For a 2×2 matrix, the determinant is computed as

det(A) = det([a11 a12; a21 a22]) = a11 a22 − a12 a21.  (5)
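As a quick numeric sanity check, the closed-form expression in (5) can be compared against a library determinant routine. A minimal sketch, assuming NumPy; the matrix is an arbitrary example:

```python
import numpy as np

# Arbitrary 2x2 example matrix.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# Closed-form determinant from (5): a11*a22 - a12*a21.
det_formula = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]

# Library routine, which works for any square matrix.
det_lib = np.linalg.det(A)
```

Both computations agree to floating-point precision.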
For a 3×3 matrix, the determinant can be computed by cofactor expansion along the first row,

det(A) = a11 det([a22 a23; a32 a33]) − a12 det([a21 a23; a31 a33]) + a13 det([a21 a22; a31 a32])
       = a11 a22 a33 + a12 a23 a31 + a13 a21 a32 − a11 a23 a32 − a12 a21 a33 − a13 a22 a31.  (6)

Scalar products
The standard inner product (or dot product or scalar product) between two vectors x ∈ R^n and y ∈ R^n is defined as

x · y = x^T y = [x1 x2 ... xn] [y1; y2; ...; yn] = Σ_{i=1}^{n} x_i y_i.  (1)
Two vectors are perpendicular to each other if and only if their inner product is zero, i.e.,

x ⊥ y  ⟺  x^T y = 0.  (2)
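A small numeric illustration of (1) and (2); a sketch assuming NumPy, with arbitrary example vectors:

```python
import numpy as np

x = np.array([1.0, 2.0, 2.0])
y = np.array([2.0, -1.0, 0.0])

# Inner product, eq. (1): x^T y = sum_i x_i y_i.
inner = x @ y

# Orthogonality test, eq. (2): x and y are perpendicular iff x^T y = 0.
perpendicular = np.isclose(inner, 0.0)
```

Here 1·2 + 2·(−1) + 2·0 = 0, so these two vectors are perpendicular.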
Matrix-vector products
The product of a matrix A ∈ R^{m×n} and a vector x ∈ R^n is a vector y = Ax ∈ R^m. Viewed row-wise, each element of y is an inner product between a row of A and x,

y_i = Σ_{j=1}^{n} a_ij x_j,  i = 1, ..., m,  (10)

and viewed column-wise, y is a linear combination of the columns a_i of A,

y = Σ_{i=1}^{n} x_i a_i.  (11)

Norms
A norm on a finite dimensional vector space (e.g. R^n) is denoted ‖x‖, where ‖·‖ is any function satisfying the following properties for all x in the vector space:
1. ‖x‖ ≥ 0
2. ‖x‖ = 0 ⟺ x = 0
3. ‖αx‖ = |α| ‖x‖
4. ‖x + y‖ ≤ ‖x‖ + ‖y‖

The Euclidean norm satisfies

‖x‖² = x^T x,  (3)

where x^T x is the inner product of x with itself. Furthermore, for all x, y ∈ R^n the so-called Cauchy-Schwarz inequality holds,

|x^T y| ≤ ‖x‖ ‖y‖.  (4)

Null space
The nullspace (or kernel), denoted N(A), of a matrix A ∈ R^{m×n} consists of all vectors x ∈ R^n such that Ax = 0, i.e.,

N(A) = {x | Ax = 0, x ∈ R^n}.  (12)
Hence, the nullspace consists of all vectors x ∈ R^n that are perpendicular to each row of A. This is clear by looking at

Ax = [a1^T; a2^T; ...; am^T] x = [a1^T x; a2^T x; ...; am^T x] = [0; 0; ...; 0],  (13)

where a_i^T, i = 1, ..., m, are the rows of A, and a_i^T x = 0 means that a_i ⊥ x.
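This row-wise picture is easy to verify numerically. A sketch assuming NumPy, with an arbitrary rank-deficient example matrix (its third column is the sum of the first two): a null-space basis is extracted from the SVD, and each row of A is checked to be perpendicular to it.

```python
import numpy as np

# Example matrix with a one-dimensional null space.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])

# Basis of N(A) from the SVD: right singular vectors belonging to
# (numerically) zero singular values.
U, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))
null_basis = Vt[rank:].T        # columns span N(A)

# A null-space vector is mapped to zero, i.e. it is perpendicular
# to every row of A, as in (13).
z = null_basis[:, 0]
```

For this A the null space is spanned by a multiple of (1, 1, −1)^T.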
Range space
The range space (or image), denoted R(A), of a matrix A ∈ R^{m×n} consists of all vectors y ∈ R^m that can be written as y = Ax for some x ∈ R^n, i.e.,

R(A) = {y | y = Ax, x ∈ R^n}.  (14)

Hence the range space of A consists of all vectors that are linear combinations of the columns of A. This is clear by looking at

y = Ax = [a1 ... an] [x1; ...; xn] = Σ_{i=1}^{n} x_i a_i,  (15)

where a_i, i = 1, ..., n, are the columns of A. Suppose that there are r linearly independent columns in A; then these are a basis for the range space, and the dimension of the range space of A is r.

Matrix inverse
A matrix A ∈ R^{n×n} is invertible or non-singular if there exists a matrix A^{-1} ∈ R^{n×n} such that

A^{-1} A = A A^{-1} = I.  (19)
For a 2×2 matrix the inverse can be computed as

[a11 a12; a21 a22]^{-1} = (1/det(A)) [a22 −a12; −a21 a11] = (1/(a11 a22 − a12 a21)) [a22 −a12; −a21 a11].  (20)
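The closed-form 2×2 inverse in (20) can be checked against a general-purpose routine. A sketch assuming NumPy; the matrix is an arbitrary non-singular example:

```python
import numpy as np

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])
det = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]

# Closed-form 2x2 inverse, eq. (20): swap the diagonal entries,
# negate the off-diagonal entries, divide by the determinant.
A_inv = np.array([[ A[1, 1], -A[0, 1]],
                  [-A[1, 0],  A[0, 0]]]) / det
```

Multiplying A_inv by A from either side recovers the identity matrix, as required by the definition of the inverse.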
A singular matrix B ∈ R^{n×n} is a matrix that is not invertible. A singular matrix B always has det(B) = 0 and at least one eigenvalue equal to zero, and hence B has a nontrivial null space.
For singular matrices and non-square matrices, the (Moore-Penrose) pseudo inverse, denoted by †, is a generalization of the matrix inverse. The pseudo inverse A† of the matrix A ∈ R^{m×n} is unique and satisfies the four properties

A A† A = A,  (21)
A† A A† = A†,  (22)
(A A†)^T = A A†,  (23)
(A† A)^T = A† A.  (24)

If A is invertible, then A† = A^{-1}. The pseudo inverse can be computed in different ways, e.g. using singular value decompositions. In Matlab it can be computed using the command pinv.

Rank
The number of linearly independent columns of a matrix A is called the column rank, whereas the number of linearly independent rows is called the row rank. The column rank is equal to the row rank, and this value is referred to as the rank of the matrix A.
The column rank (and hence the rank) of A is the same as the dimension of the range space of A, i.e.,

rank(A) = dim(R(A)) = r,  (16)

where r is the number of linearly independent columns or rows in the matrix A. A square matrix (i.e. A ∈ R^{n×n}) is invertible (or non-singular) if and only if the matrix has full rank (i.e. rank(A) = n).
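In NumPy the analogous routine is np.linalg.pinv. The sketch below, using an arbitrary non-square example matrix, verifies the four Moore-Penrose properties (21)-(24) and the rank relation (16):

```python
import numpy as np

# Arbitrary 3x2 example matrix (full column rank, non-square).
A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
A_pinv = np.linalg.pinv(A)   # computed internally via the SVD

# The four defining properties (21)-(24) of the pseudo inverse.
p21 = np.allclose(A @ A_pinv @ A, A)
p22 = np.allclose(A_pinv @ A @ A_pinv, A_pinv)
p23 = np.allclose((A @ A_pinv).T, A @ A_pinv)
p24 = np.allclose((A_pinv @ A).T, A_pinv @ A)

# rank(A) = dim(R(A)), eq. (16): here both columns are independent.
r = np.linalg.matrix_rank(A)
```

All four properties hold to floating-point precision, and the rank equals the number of linearly independent columns.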
Symmetric matrices
Let S^n denote the set of real symmetric matrices with n rows (and columns). For a symmetric matrix A ∈ S^n it holds that the quadratic form x^T A x can be bounded from below and from above as

λmin(A) x^T x ≤ x^T A x ≤ λmax(A) x^T x,  (17)

where λmin(A) denotes the smallest eigenvalue of A and λmax(A) denotes the largest one. A symmetric matrix A ∈ S^n is called positive definite if for all x ≠ 0 it holds that x^T A x > 0. This is denoted as A ≻ 0, and this is true if and only if all eigenvalues of A are strictly positive. The set of positive definite matrices is commonly denoted as S^n_{++}.
Furthermore, a matrix A ∈ S^n for which it holds that x^T A x ≥ 0 for all x ≠ 0 is called positive semidefinite. This is denoted as A ⪰ 0, and this is true if and only if all eigenvalues of A are non-negative (positive or zero). The set of positive semidefinite matrices is commonly denoted as S^n_+.
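The eigenvalue bound (17) and the eigenvalue test for positive definiteness can be checked numerically. A sketch assuming NumPy, with an arbitrary symmetric example matrix and a random test vector:

```python
import numpy as np

# Arbitrary symmetric (and, here, positive definite) example matrix.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
eigvals = np.linalg.eigvalsh(A)      # eigenvalues of a symmetric matrix, ascending
lam_min, lam_max = eigvals[0], eigvals[-1]

rng = np.random.default_rng(0)
x = rng.standard_normal(2)
q = x @ A @ x                        # quadratic form x^T A x

# Bound (17): lam_min * x^T x <= x^T A x <= lam_max * x^T x.
lower_ok = lam_min * (x @ x) <= q + 1e-12
upper_ok = q <= lam_max * (x @ x) + 1e-12

# A is positive definite iff all eigenvalues are strictly positive.
pos_def = np.all(eigvals > 0)
```

For this A the eigenvalues are (5 ± √5)/2, both strictly positive, so A ≻ 0.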
Jacobian matrix
The Jacobian matrix Jf(x) of a vector-valued function f : R^n → R^m collects all partial derivatives of the components f1(x), ..., fm(x),

Jf(x) = [ ∂f1(x)/∂x1  ∂f1(x)/∂x2  ...  ∂f1(x)/∂xn ;
          ∂f2(x)/∂x1  ∂f2(x)/∂x2  ...  ∂f2(x)/∂xn ;
          ...                                      ;
          ∂fm(x)/∂x1  ∂fm(x)/∂x2  ...  ∂fm(x)/∂xn ],  (25)

which can be used to define the first order approximation of the function around x that is given as

f(z) ≈ f(x) + Jf(x)(z − x)  (26)
for every point z that is close enough to x. Now consider a vector-valued function h : R^n → R^p that is defined as h(x) = g(f(x)), where g : R^m → R^p is another vector-valued function. The Jacobian matrix for this function can be computed using the chain rule, which states that

Jh(x) = Jg(f(x)) Jf(x).  (27)
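The chain rule (27) can be verified numerically by approximating each Jacobian with finite differences. A sketch assuming NumPy; the functions f and g below are arbitrary smooth examples, not taken from the text:

```python
import numpy as np

# Example maps: f: R^2 -> R^2 and g: R^2 -> R^2, so h = g o f.
def f(x):
    return np.array([x[0] ** 2, x[0] * x[1]])

def g(y):
    return np.array([np.sin(y[0]), y[0] + y[1]])

def jacobian(fun, x, eps=1e-6):
    """Forward-difference approximation of the Jacobian (25)."""
    fx = fun(x)
    J = np.zeros((fx.size, x.size))
    for j in range(x.size):
        step = np.zeros_like(x)
        step[j] = eps
        J[:, j] = (fun(x + step) - fx) / eps
    return J

x0 = np.array([0.5, -1.0])

# Chain rule (27): J_h(x) = J_g(f(x)) J_f(x).
J_chain = jacobian(g, f(x0)) @ jacobian(f, x0)
J_direct = jacobian(lambda x: g(f(x)), x0)
```

The product of the two Jacobians matches the directly differentiated composition up to finite-difference error.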
Singular value decomposition
Every matrix A ∈ R^{m×n} of rank r can be factorized as

A = [U Ũ] [Σ 0; 0 0] [V Ṽ]^T = U Σ V^T,  (18)

where Σ = diag(σ1, ..., σr) ∈ R^{r×r}, U ∈ R^{m×r}, Ũ ∈ R^{m×(m−r)}, V ∈ R^{n×r} and Ṽ ∈ R^{n×(n−r)}, with [U Ũ]^T [U Ũ] = Im and [V Ṽ]^T [V Ṽ] = In. The matrices Im and In denote the m×m and n×n identity matrices, respectively. The factorization in (18) is called the singular value decomposition of A, and the numbers σi are the singular values. Note that the columns of U are a basis for the range space of A and the columns of Ṽ are a basis of the null space of A. Here we assumed that A is real-valued; however, the same definitions hold even if A is complex. In this case, the matrices U and V can be complex-valued, and wherever we use the matrix transpose, i.e., ^T, it should be replaced by the matrix conjugate-transpose, i.e., ^*. However, the singular values σi are always real-valued.
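The partitioning in (18) can be illustrated numerically: the leading r left singular vectors span the range space and the trailing right singular vectors span the null space. A sketch assuming NumPy, with an arbitrary rank-1 example matrix:

```python
import numpy as np

# Rank-1 example: both columns of A are multiples of (1, 2)^T.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

U, s, Vt = np.linalg.svd(A)          # full SVD, as in (18)
r = int(np.sum(s > 1e-10))           # numerical rank

range_basis = U[:, :r]               # columns of U: basis of R(A)
null_basis = Vt[r:].T                # columns of V~: basis of N(A)

# Reconstruct A from the rank-r part of the factorization.
A_rebuilt = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]
```

Discarding the zero singular values loses nothing: the rank-r part of the factorization reproduces A exactly, and A maps the null-space basis to zero.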