CHAPTER 2: QUADRATIC FORMS AND THEIR DISTRIBUTIONS
STA 666 GENERAL LINEAR MODELS, A First Course in the Theory of Linear Statistical Models, 2nd Ed., Myers & Milton, McGraw-Hill
Winter 2012

2.1 QUADRATIC FORMS

Defn: Let A be a k x k matrix and y a k x 1 vector; then q = ______ is a Quadratic Form
(QF), and y and A are known as the vector and matrix of the QF.

Notes and Comments
1. Written out,

$$ \mathrm{QF} = y'Ay = \begin{pmatrix} y_1 & y_2 & \cdots & y_k \end{pmatrix} \begin{pmatrix} A_{11} & A_{12} & \cdots & A_{1k} \\ A_{21} & A_{22} & \cdots & A_{2k} \\ \vdots & \vdots & & \vdots \\ A_{k1} & A_{k2} & \cdots & A_{kk} \end{pmatrix} \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_k \end{pmatrix} = \sum_{i=1}^{k}\sum_{j=1}^{k} y_i A_{ij} y_j = \sum_{i=1}^{k} A_{ii} y_i^2 + \sum_{i=1}^{k}\sum_{j \ne i} y_i A_{ij} y_j $$

= Sums of Squares & Cross Products of the y_i's
2. QF dimensions? _____________
3. Even if A is not symmetric, y'Ay = y'By where B is symmetric (show this for HW; a numerical sanity check is sketched below).
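(Aside, not from the text: a quick NumPy sanity check of Note 3 with a made-up non-symmetric A; the algebra is the HW.)

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))            # deliberately non-symmetric
y = rng.standard_normal(4)

B = (A + A.T) / 2                          # symmetric candidate for B
print(y @ A @ y, y @ B @ y)                # same number twice
print(np.allclose(y @ A @ y, y @ B @ y))   # True
```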
MOTIVATION: WHY STUDY QUADRATIC FORMS?
Background
MR model: Y_i = β_0 + β_1 X_i1 + β_2 X_i2 + ... + β_k X_ik + ε_i, or Y = Xβ + ε
LS estimator of β = B = ________________
Let Sum of Squares Total = SSTO = Σ( Y_i − Ȳ )²

SSTO = Sum of Squares Regression + the Sum of Squared Errors, or SSR + SSE, where
SSR = Σ( B_0 + B_1 X_i1 + ... + B_k X_ik − Ȳ )², with ___________ df
SSE = Σ( E_i )² = Σ( Y_i − B_0 − B_1 X_i1 − ... − B_k X_ik )², with ___________ df
Mean Squares Regression = MSR = ___________
Mean Squared Error = MSE = ___________

Prove: SSE/σ² ~ χ²(n−k−1) needs: ε_i ~ NID( 0, σ² ) [ ε ~ MN_n( 0, σ²I ) ] and Cochran's Thm

Cochran's Theorem states that if Y_i ~ NID( μ, σ² ) and SSTO can be partitioned into
sums of squares with additive degrees of freedom, then the SSQs/σ² are
independent Chi-Square random variables. You most likely did NOT prove Cochran's
Theorem, but simply used the result!
w/o QFs, the proof that SSE/σ² is Chi-Squared is difficult & tedious because:

ε_i ~ NID( 0, σ² ) ⟹ Y_i ~ NID( β_0 + β_1 X_i1 + ... + β_k X_ik, σ² )
B = (X'X)⁻¹X'Y ⟹ the B's are functions of X and Y ⟹ E_i = Y_i − B_0 − B_1 X_i1 − ... − B_k X_ik = Σ k_j Y_j,
where the k_j are constants involving the X's

1. E_i is a linear combination of the Y_i's, which are Normal ⟹ the E_i are Normal
2. B is the LSE and BLUE ⟹ the E_i have zero mean
3. The Y_i's are independent & the k_j's are constants ⟹ Var( E_i ) = σ² Σ k_j²
4. E_i is a function of every Y_i ⟹ the E_i are not uncorrelated
5. E[ Y² ] = ________________________________
6. Facts 1-5 ⟹ MSE is unbiased for σ²

However, showing the shape (kind) of the distribution of MSE is difficult.


NOTE: Every SSQ can be expressed as a QF involving Y.

For example:
1. Residuals vector: Ê = Y − Ŷ = Y − XB = Y − X{ ______________________ }
   = { ______________________ }Y
   = { __________________ }Y = E
2. SSE = Σ( E_i )² = E'E b/c? ________________________________
3. Recall that ( I_n − H ) is ___________________________ and ______________________________
4. E'E = [ { I_n − X(X'X)⁻¹X' }Y ]' { I_n − X(X'X)⁻¹X' }Y = [ { I_n − H }Y ]' { I_n − H }Y
   = _________________________________________________________________ = Y'AY (a QF!)

QFs are important for two reasons:
1. Expressions for SSQs are simplified
2. They GREATLY simplify the determination of the distributions of SSQs (a numerical sketch follows below)
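(Aside, not from the text: a NumPy sketch with simulated X and Y checking items 2-4 above: I − H is idempotent and the residual SSQ equals the QF Y'(I − H)Y.)

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 20, 3
X = np.column_stack([np.ones(n), rng.standard_normal((n, k))])  # n x (k+1), with intercept
Y = X @ rng.standard_normal(k + 1) + rng.standard_normal(n)     # made-up response

H = X @ np.linalg.inv(X.T @ X) @ X.T    # hat matrix
M = np.eye(n) - H                       # I - H
E = M @ Y                               # residual vector

print(np.allclose(M @ M, M))            # idempotent
print(np.allclose(E @ E, Y @ M @ Y))    # SSE = Y'(I-H)Y, a QF in Y
```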
POSITIVE DEFINITE AND SEMI-DEFINITE QFS AND MATRICES
Defn: The QF y'Ay is Positive Definite (Pos Def) if y'Ay > 0 for every y ≠ 0
Defn: The QF y'Ay is Positive Semi-Definite (Pos Semi-Def) if y'Ay ≥ 0 for every y ≠ 0
Note
1. If y'Ay is Pos Def, then A is said to be a Positive Definite matrix
2. If y'Ay is Pos Semi-Def, then A is said to be a Positive Semi-Definite matrix
Aside: Some texts define Pos Def matrices as
Defn: A is a Pos Def matrix ⟺ A is symmetric AND y'Ay > 0 for every y ≠ 0

BUT!!!!
Positive Definite Matrix Results
1. If A is Pos Def, then A is non-singular, i.e., A⁻¹ exists
2. If A is Pos Def, then A⁻¹ is also Pos Def
3. For any n x k matrix X,
a. X'X is symmetric
b. X'X is Pos Semi-Def
c. If the rank of X = r(X) = k, then X'X is Full Rank AND is Pos Def
Note: For an n x k X with r(X) = k and k < n, X is of Full Column Rank, since r(X) ≤ min( n, k )
Alternative version of c:
c'. If X, n x k, has rank = k, then (X'X)⁻¹ exists
Note: Implications in solving the Normal Equations from regression:
(X'X)B = X'Y ⟹ B = (X'X)⁻¹X'Y
4. A symmetric and Pos Def ⟺ every eigenvalue of A > 0
5. A symmetric and Pos Semi-Def ⟺ every eigenvalue of A ≥ 0 AND at least one = 0
(A numerical sketch of Results 3-5 follows below.)
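(Aside, not from the text: a NumPy sketch of Results 3-5 using a made-up full-column-rank X.)

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((10, 4))    # n = 10 > k = 4; full column rank w.p. 1
XtX = X.T @ X

print(np.allclose(XtX, XtX.T))      # symmetric (Result 3a)
print(np.linalg.matrix_rank(X))     # 4 = k, so X'X is full rank (Result 3c)
print(np.linalg.eigvalsh(XtX))      # all eigenvalues > 0  <=>  Pos Def (Result 4)
```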
Thm 2.1.3: Let A be a symmetric Pos Def matrix, partitioned as
$$ A = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix} $$
where A_11 and A_22 are both square matrices. Let
$$ A^{-1} = B = \begin{pmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{pmatrix} $$
where B_11 and B_22 are also square, with the same dimensions as A_11 and A_22,
respectively. Then
$$ A_{11}^{-1} = B_{11} - B_{12} B_{22}^{-1} B_{21} $$

Proof: See the text
Application: Used when testing components of β in a GLM, i.e., the General
Linear F-Test (i.e., the Full and Reduced SSE procedure) in Regression. (A numerical check follows below.)
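(Aside, not from the text: a NumPy check of Thm 2.1.3 on a made-up symmetric Pos Def A, partitioned 2 + 2.)

```python
import numpy as np

rng = np.random.default_rng(3)
R = rng.standard_normal((4, 4))
A = R @ R.T + 4 * np.eye(4)            # symmetric Pos Def
B = np.linalg.inv(A)

A11 = A[:2, :2]
B11, B12 = B[:2, :2], B[:2, 2:]
B21, B22 = B[2:, :2], B[2:, 2:]

lhs = np.linalg.inv(A11)
rhs = B11 - B12 @ np.linalg.inv(B22) @ B21
print(np.allclose(lhs, rhs))           # True: A11^{-1} = B11 - B12 B22^{-1} B21
```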
2.2 DIFFERENTIATION OF QFs, EXPECTATION AND VARIANCE OF RANDOM
VECTORS AND MATRICES
Differentiation of vectors and QFs

Let y = [ y_1 y_2 ... y_k ]' and let z = f( y_1, y_2, ..., y_k ) = f( y ). Defn:
$$ \frac{\partial z}{\partial y} = \begin{pmatrix} \partial z/\partial y_1 \\ \partial z/\partial y_2 \\ \vdots \\ \partial z/\partial y_k \end{pmatrix} $$

Rules of Differentiation (like ∂(ax)/∂x = ____, ∂(x²)/∂x = ____, ∂(ax²)/∂x = ____, ...)
Let a and A be a vector and matrix of constants, respectively

∂(a'y)/∂y = ∂(y'a)/∂y = ____ ,  ∂(y'y)/∂y = ____

∂(y'Ay)/∂y = ________________________________

(A numerical check of the y'Ay rule follows below.)
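(Aside, not from the text: a central-difference check of the y'Ay rule; the last line assumes the standard answer (A + A')y for the blank above.)

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 3))
y = rng.standard_normal(3)

def q(v):                              # z = f(y) = y'Ay
    return v @ A @ v

h = 1e-6                               # step for central differences
num_grad = np.array([(q(y + h*e) - q(y - h*e)) / (2*h) for e in np.eye(3)])
print(num_grad)
print((A + A.T) @ y)                   # standard answer: d(y'Ay)/dy = (A + A')y
```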


Random Vectors and the Expectation Function
Let Y_1, Y_2, ..., Y_k be k random variables; then
$$ Y = \begin{pmatrix} Y_1 \\ Y_2 \\ \vdots \\ Y_k \end{pmatrix} $$
is a random vector.
If E[ Y_i ] = μ_i, then
E[ Y ] = ______ = ______ = ______

Rules of Expectation
Let Y be a random vector with E[ Y ] = μ, and let a and A be constants:
E[ a ] = ______ and E[ A ] = ______
E[ a'Y ] = ______ = ______ and E[ AY ] = ______ = ______
Variance/Covariance Matrix of Y
Defn: Variance of Y_i = E{ ________ } & Covariance of Y_i & Y_j = E{ ____________ }
Defn: The Variance/Covariance matrix of Y is Σ = V (symmetric):
$$ \Sigma = V[Y] = \begin{pmatrix} V(Y_1) & \mathrm{Cov}(Y_1, Y_2) & \cdots & \mathrm{Cov}(Y_1, Y_k) \\ \mathrm{Cov}(Y_2, Y_1) & V(Y_2) & \cdots & \mathrm{Cov}(Y_2, Y_k) \\ \vdots & \vdots & \ddots & \vdots \\ \mathrm{Cov}(Y_k, Y_1) & \mathrm{Cov}(Y_k, Y_2) & \cdots & V(Y_k) \end{pmatrix} = \begin{pmatrix} \sigma_1^2 & \sigma_{12} & \cdots & \sigma_{1k} \\ \sigma_{21} & \sigma_2^2 & \cdots & \sigma_{2k} \\ \vdots & \vdots & \ddots & \vdots \\ \sigma_{k1} & \sigma_{k2} & \cdots & \sigma_k^2 \end{pmatrix} $$
= _____________________

Rules for Var/Cov Matrices
V[ a'Y ] = __________ = ___________
V[ AY ] = __________ = ___________
(A simulation check of the AY rule follows below.)
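(Aside, not from the text: a simulation sketch of the V[AY] rule with made-up A and V; the sample Var/Cov of AY should be close to AVA'.)

```python
import numpy as np

rng = np.random.default_rng(5)
V = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.0, 0.3],
              [0.0, 0.3, 1.5]])       # a valid Var/Cov matrix
A = rng.standard_normal((2, 3))

Y = rng.multivariate_normal(np.zeros(3), V, size=200_000)  # rows are draws of Y
W = Y @ A.T                                                # each row is AY

print(np.cov(W, rowvar=False))        # sample V[AY]
print(A @ V @ A.T)                    # theoretical A V A'
```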
Quadratic Forms with Random Vectors
Defn: Let Y be a random vector with E[ Y ] = μ and V[ Y ] = V, and let A be a matrix of
constants; then Q = Y'AY, where now Q is a QF AND a random variable.
Distributions of Quadratic Forms with Random Vectors
Thm 2.2.1: Let Y be a random vector with E[ Y ] = μ and V[ Y ] = V, and let A be a matrix of
constants. Then
$$ E[\,Y'AY\,] = \mathrm{trace}(AV) + \mu'A\mu = \sum_{i=1}^{k}\sum_{j=1}^{k} A_{ij} V_{ji} + \sum_{i=1}^{k}\sum_{j=1}^{k} \mu_i A_{ij} \mu_j $$

Proof: First, some facts re variances and covariances:
V( Y_i ) = E[ Y_i² ] − μ_i² ⟹ E[ Y_i² ] = σ_i² + μ_i²
Cov( Y_i, Y_j ) = E[ Y_i Y_j ] − μ_i μ_j ⟹ E[ Y_i Y_j ] = σ_ij + μ_i μ_j

Now
$$ \begin{aligned} E[\,Y'AY\,] &= E\Big[ \sum_{i=1}^{k}\sum_{j=1}^{k} Y_i A_{ij} Y_j \Big] = \sum_{i=1}^{k}\sum_{j=1}^{k} A_{ij}\, E[\,Y_i Y_j\,] \\ &= \sum_{i=1}^{k}\sum_{j=1}^{k} A_{ij} ( \sigma_{ij} + \mu_i \mu_j ) = \sum_{i=1}^{k}\sum_{j=1}^{k} A_{ij} \sigma_{ij} + \sum_{i=1}^{k}\sum_{j=1}^{k} \mu_i A_{ij} \mu_j \\ &= \sum_{i=1}^{k}\sum_{j=1}^{k} A_{ij} V_{ji} + \mu'A\mu \qquad (\sigma_{ij} = \sigma_{ji} = V_{ji}, \text{ since } V \text{ is symmetric}) \\ &= \mathrm{trace}(AV) + \mu'A\mu. \end{aligned} $$
(A Monte Carlo check follows below.)
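(Aside, not from the text: a Monte Carlo check of Thm 2.2.1 with made-up μ, V, and A.)

```python
import numpy as np

rng = np.random.default_rng(6)
mu = np.array([1.0, -2.0, 0.5])
V = np.array([[1.5, 0.2, 0.0],
              [0.2, 1.0, 0.4],
              [0.0, 0.4, 2.0]])
A = rng.standard_normal((3, 3))

Y = rng.multivariate_normal(mu, V, size=500_000)
qf = np.einsum('ni,ij,nj->n', Y, A, Y)       # Y'AY for each draw

print(qf.mean())                             # Monte Carlo E[Y'AY]
print(np.trace(A @ V) + mu @ A @ mu)         # trace(AV) + mu'A mu
```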
2.3 DISTRIBUTIONS OF QUADRATIC FORMS
The Distribution of a Random Variable = _______, _________, & __________ of the RV,
where Shape is the ______________ of the RV, like Normal or Binomial. The parameters of
the distribution determine its Center (usually the mean) and Spread (the variance).

Some Familiar Distributions
Defn: Moment Generating Function (MGF): MGF(Y) = _______ & MGF(Y) = _______
FACT re MGFs: X and Y independent, then MGF_(X+Y) = ______________

1. Normal Distribution
Univariate Normal Distribution
Y ~ N( μ, σ² ) ⟹ f_Y(y) = ______________ MGF = M_Y(t) = ___________
Both f_Y(y) and M_Y(t) are UNIQUE!!!
Z ~ N( 0, 1 ) ⟹ f_Z(z) = ______________ MGF = M_Z(t) = ___________
If Y ~ N( μ, σ² ), then Z = (Y − μ)/σ
Multivariate Normal Distribution
$$ Y = \begin{pmatrix} Y_1 \\ Y_2 \\ \vdots \\ Y_k \end{pmatrix} \sim MN_k( \mu, V ) \;\Longrightarrow\; f_Y(y) = \frac{1}{(2\pi)^{k/2}\,|V|^{1/2}}\; e^{-\frac{1}{2}(y-\mu)'V^{-1}(y-\mu)}, \qquad \mathrm{MGF} = M_Y(t) = e^{\,t'\mu + \frac{1}{2}\,t'Vt} $$

Suppose the Y's ~ NID( μ, σ² ); then Y ~ MN_k( μ, V ), where NID = ___________

Res: If A and b are a matrix and vector of constants, Y ~ MN_k( μ, V ), and W = AY + b,
then W ~ MN_k( Aμ + b, AVA' ).

Proof: Since it's easy to do, show this for homework. (A simulation check, not a proof, follows below.)
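(Aside, not from the text, and no substitute for the HW algebra: a simulation check that W = AY + b has mean Aμ + b and Var/Cov AVA'.)

```python
import numpy as np

rng = np.random.default_rng(7)
mu = np.array([1.0, 2.0])
V = np.array([[1.0, 0.4],
              [0.4, 2.0]])
A = np.array([[1.0, -1.0],
              [0.5,  2.0]])
b = np.array([3.0, -1.0])

Y = rng.multivariate_normal(mu, V, size=300_000)
W = Y @ A.T + b                                  # each row is AY + b

print(W.mean(axis=0), A @ mu + b)                # means agree
print(np.cov(W, rowvar=False), A @ V @ A.T)      # Var/Cov matrices agree
```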

ASIDE: Since Y'Y = ΣY_i², any ΣY_i² (i.e., any Sum of Squares) is a QF, namely Y'Y = Y'IY.

2. Chi-Square Distribution
Central Chi-Square Distribution
Y ~ χ²(df) ⟹
$$ f_Y(y) = \frac{ y^{\frac{df}{2}-1}\, e^{-\frac{y}{2}} }{ 2^{\frac{df}{2}}\, \Gamma\!\left(\frac{df}{2}\right) }, \qquad M_Y(t) = (1-2t)^{-\frac{df}{2}} $$

Recall: If Z_1, Z_2, ..., Z_k are k independent Z's, then ΣZ_i² ~ χ²(k)

First QF Result: ΣZ_i² ~ χ²(k), but ΣZ_i² = Z'Z ⟹ the QF Z'Z is χ²(k) (simulation check below)
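(Aside, not from the text: a simulation check of the First QF Result, using SciPy, assumed available.)

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
k = 5
Z = rng.standard_normal((100_000, k))
q = (Z**2).sum(axis=1)                 # Z'Z for each draw

# Kolmogorov-Smirnov test against chi-square(k); p-value is typically not small
print(stats.kstest(q, stats.chi2(df=k).cdf))
```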


Thm 2.3.0: Let W_1, W_2, ..., W_n be n independent Central Chi-Square random
variables with degrees of freedom given by df_i. Then ΣW_i has a Chi-Square
distribution with degrees of freedom = Σdf_i.

In words, the sum of independent Chi-Squares is also Chi-Square, with
parameter equal to the sum of the degrees of freedom.
Proof: Use MGFs?
Non-Central Chi-Square Distribution
Y ~ χ²( df, λ ) ⟹ f( y; df, λ ) [see Graybill's Theory and Application of the Linear Model] and
$$ M_Y(t) = (1-2t)^{-\frac{df}{2}}\; e^{-\lambda\left[\,1-(1-2t)^{-1}\right]} $$

Note: Central Chi-Square = Non-Central Chi-Square with _______________
Defn: Let Y_1, Y_2, ..., Y_k be k independent N( μ_i, 1 ), where at least one μ_i ≠ 0. Then
ΣY_i² is said to have a Non-Central Chi-Square distribution with TWO
parameters: k = degrees of freedom and non-centrality
parameter = λ = ½ μ'μ, where μ is a k x 1 vector of the μ_i.

Res: Let Y ~ MN_k( μ, I ). Then ΣY_i² = Y'Y ~ χ²( k, λ ).
Proof: Why are these Y_i's independent? (A simulation check follows below.)
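(Aside, not from the text: a simulation check of the Res above. Careful: SciPy's nc parameter is μ'μ, i.e., 2λ in these notes' convention.)

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
mu = np.array([1.0, 0.0, -2.0, 0.5])          # at least one mu_i != 0
k = mu.size
Y = mu + rng.standard_normal((100_000, k))    # rows are draws of Y ~ MN_k(mu, I)
q = (Y**2).sum(axis=1)                        # Y'Y

nc = mu @ mu                                  # SciPy's nc = mu'mu = 2*lambda here
print(stats.kstest(q, stats.ncx2(df=k, nc=nc).cdf))
```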

Thm 2.3.1: Let W_1, W_2, ..., W_n be n independent Non-Central Chi-Square random
variables with degrees of freedom given by df_i and non-centrality parameters
given by λ_i, respectively. Then ΣW_i has a Non-Central Chi-Square distribution
with degrees of freedom = ____ and non-centrality parameter given by ____.

In words, the sum of independent Non-Central Chi-Squares is also
Non-Central Chi-Square, with parameters equal to the sum of the degrees of
freedom and the sum of the non-centrality parameters.
Proof: MGFs?
3. T Distribution
Central T Distribution
Non-Central T Distribution
4. F Distribution
Central F Distribution
Y ~ F( df_1, df_2 ) ⟹
$$ f_Y(y) = \frac{ \Gamma\!\left(\frac{df_1+df_2}{2}\right) }{ \Gamma\!\left(\frac{df_1}{2}\right) \Gamma\!\left(\frac{df_2}{2}\right) } \left(\frac{df_1}{df_2}\right)^{\frac{df_1}{2}} y^{\frac{df_1}{2}-1} \left( 1 + \frac{df_1\, y}{df_2} \right)^{-\frac{df_1+df_2}{2}}, \qquad y > 0 $$

If W_1 ~ χ²( df_1 ) independent of W_2 ~ χ²( df_2 ), then ______________________
Use? _________________________________
Non-Central F Distribution
Let W_1 = χ²( df_1, λ ) = Non-Central Chi-Square with df_1 and non-centrality parameter λ,
and let W_2 be an independent Central Chi-Square with df_2. Then
$$ Y = \frac{ W_1 / df_1 }{ W_2 / df_2 } \sim F( df_1, df_2, \lambda ) $$
= Non-Central F with parameters df_1, df_2, and λ.
See Graybill's text for the actual density function form.
Use: Used in finding the Power = Pr{ Reject H_o when H_A is true } of F tests. (A sketch of that computation follows below.)
Distributions of Quadratic Forms where Y is Multivariate Normal
Thm 2.3.2: Let Y be a MN_n( μ, I ) and let A (n x n) be symmetric. Then
Y'AY ~ χ²( k, λ ), λ = ½ μ'Aμ ⟺ A is idempotent of rank k

Proof: The proof can be found in Searle. Here's the ⟸ proof.
HOW? Y'AY = a sum of k indep N( μ_j, 1 )² terms, which we know will be a χ²( k, λ ).



1. Y is MN_n( μ, I )
2. A is symmetric
3. A is idempotent
4. rank(A) = k
5. trace(BCD) = trace(CDB) = trace(DBC)
6. A symmetric & idempotent ⟹ rank(A) = trace(A)
7. rank(A) = k ⟹ trace(A) = k
8. A idempotent ⟹ the eigenvalues of A are 0's and 1's
9. Λ = diagonal matrix of the eigenvalues of A
10. A symmetric ⟹ ∃ P (n x n, with P'P = PP' = I) such that P'AP = Λ = diagonal matrix of the eigenvalues of A
11. k = tr(A) = tr(IA) = tr(PP'A) = tr(P'AP) = tr(Λ) = Σ eigenvalues of A ⟹ exactly k eigenvalues equal 1
12. Let P* = the columns of P rearranged so that P*'AP* = Λ* = $\begin{pmatrix} I_{k \times k} & 0 \\ 0 & 0 \end{pmatrix}$
13. P*'P* = I = P*P*' ⟹ P*⁻¹ = P*'
14. A = P* Λ* P*' = $\begin{pmatrix} P^*_1 & P^*_2 \end{pmatrix} \begin{pmatrix} I_{k \times k} & 0 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} P^{*\prime}_1 \\ P^{*\prime}_2 \end{pmatrix}$ = P*_1 P*_1'   [by 13]
15. P*_1 is n x k and P*_1'P*_1 = I_k
16. W = P*'Y = $\begin{pmatrix} P^{*\prime}_1 \\ P^{*\prime}_2 \end{pmatrix} Y = \begin{pmatrix} P^{*\prime}_1 Y \\ P^{*\prime}_2 Y \end{pmatrix} = \begin{pmatrix} W_1 \\ W_2 \end{pmatrix}$
17. Y is MN_n( μ, I ) & W_1 = P*_1'Y ⟹ W_1 ~ MN_k( P*_1'μ, P*_1' I P*_1 = I )
18. The k elements of W_1 are NID( (P*_1'μ)_j, 1 )
19. W_1'W_1 = a sum of k indep N( μ_j, 1 )² ⟹ W_1'W_1 ~ χ²( k, λ = ½ (P*_1'μ)'(P*_1'μ) = ½ μ'P*_1P*_1'μ = ½ μ'Aμ )
20. W = P*'Y ⟹ Y = P*W
21. Y'AY = ( P*W )'A( P*W ) = W'P*'AP*W = W'Λ*W = W_1'W_1 ~ χ²( k, λ = ½ μ'Aμ )
22. ⟹ Y'AY ~ χ²( k, λ = ½ μ'Aμ )


QED = __________________________
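(Aside, not from the text: a simulation check of Thm 2.3.2, using the projection X(X'X)⁻¹X' of a made-up X as a convenient symmetric idempotent A of known rank.)

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
n, k = 8, 3
X = rng.standard_normal((n, k))
A = X @ np.linalg.inv(X.T @ X) @ X.T        # symmetric, idempotent, rank k
mu = rng.standard_normal(n)

Y = mu + rng.standard_normal((100_000, n))  # rows are draws of Y ~ MN_n(mu, I)
q = np.einsum('ni,ij,nj->n', Y, A, Y)       # Y'AY

lam = 0.5 * mu @ A @ mu                     # notes' lambda; SciPy's nc = 2*lambda
print(stats.kstest(q, stats.ncx2(df=k, nc=2 * lam).cdf))
```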
Cor 2.3.1: Let Y be a MN_n( 0, I ) and let A be an n x n symmetric matrix. Then Y'AY
has a Central Chi-Square distribution with k degrees of freedom ⟺ A is
idempotent of rank k.
Proof: HW exercise [this is also exercise 2.37 in the Myers & Milton text]

Cor 2.3.2: Let Y be a MN_n( μ, σ²I ) where σ² > 0. Let A be an n x n symmetric matrix.
Then ( 1/σ² ) Y'AY has a Non-Central Chi-Square distribution with k degrees of
freedom and non-centrality parameter = λ = μ'Aμ / (2σ²) ⟺ A is idempotent of rank k.
Proof: HW exercise [this is also exercise 2.38 in the Myers & Milton text]
Thm 2.3.3: Let Y be a MN_n( μ, V ) and let A be an n x n symmetric matrix. Then Y'AY
has a Non-Central Chi-Square distribution with k degrees of freedom and non-
centrality parameter = λ = ½ μ'Aμ ⟺ AV is idempotent of rank k.

Proof:

Cor 2.3.3: Let Y be a MN_n( 0, V ) and let A be an n x n symmetric matrix. Then Y'AY
has a Central Chi-Square distribution with k degrees of freedom ⟺ AV is
idempotent of rank k.

Proof: Applying Thm 2.3.3 with μ = 0, the result follows directly.

Cor 2.3.4: Let Y be a MN_n( μ, V ). Then Y'V⁻¹Y has a Non-Central Chi-Square
distribution with n degrees of freedom and non-centrality parameter = λ = ½ μ'V⁻¹μ.

Proof: Applying Thm 2.3.3 with A = V⁻¹, the result follows directly. (A simulation check follows below.)
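(Aside, not from the text: a simulation check of Cor 2.3.4 with a made-up Pos Def V; again SciPy's nc = 2λ.)

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
n = 4
R = rng.standard_normal((n, n))
V = R @ R.T + np.eye(n)                     # Pos Def Var/Cov matrix
Vinv = np.linalg.inv(V)
mu = rng.standard_normal(n)

Y = rng.multivariate_normal(mu, V, size=100_000)
q = np.einsum('ni,ij,nj->n', Y, Vinv, Y)    # Y'V^{-1}Y

nc = mu @ Vinv @ mu                         # = 2*lambda in these notes' convention
print(stats.kstest(q, stats.ncx2(df=n, nc=nc).cdf))
```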

2.4 INDEPENDENCE OF QUADRATIC FORMS
Defn: Two random variables Y_1 and Y_2 are independent ⟺
$$ f_{Y_1 Y_2}( y_1, y_2 ) = f_{Y_1}( y_1 )\, f_{Y_2}( y_2 ) $$

How to determine independence of QFs? Same way?
Lemma 2.4.1: Let A_1, A_2, ..., A_m be a collection of k x k symmetric matrices. Then:
∃ an orthogonal matrix P such that P'A_iP is diagonal for every i ⟺ A_iA_j = A_jA_i for every pair ( i, j ).
Thm 2.4.1: Let Y be a MN_n( μ, V ). Let A and B be n x n symmetric matrices of ranks r_1
and r_2, respectively. Then Y'AY and Y'BY are independent ⟺ AVB = 0.

Proof: Consider the case.

Cor 2.4.1: Let Y be a MN_n( μ, σ²I ) for σ² > 0. Let A and B be n x n symmetric matrices
of ranks r_1 and r_2, respectively. Then Y'AY and Y'BY are independent ⟺ AB = 0.

Note: our text presents this corollary only as ⟸, but it is ⟺.
Proof: Apply Thm 2.4.1 where V = σ²I. Then AVB = 0 ⟺ 0 = Aσ²IB = σ²AB ⟺ AB = 0,
since σ² > 0. (A classic example follows below.)
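(Aside, not from the text: the classic instance of Cor 2.4.1. With A = H and B = I − H, AB = 0, so the regression QFs Y'HY and Y'(I − H)Y are independent.)

```python
import numpy as np

rng = np.random.default_rng(12)
n, k = 10, 3
X = rng.standard_normal((n, k))
H = X @ np.linalg.inv(X.T @ X) @ X.T    # symmetric idempotent
M = np.eye(n) - H

print(np.allclose(H @ M, 0))            # AB = 0  =>  Y'HY and Y'MY are independent
```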
How to determine independence of a random QF (scalar) and random vector?
Thm 2.4.2: Let Y be a MN_n( μ, V ). Let A be an n x n symmetric matrix and let B be an
m x n matrix. Then Y'AY and BY are independent ⟺ BVA = 0.

Proof: See Graybill or Searle.

Cor 2.4.2: Let Y be a MN_n( μ, σ²I ) for σ² > 0. Let A be an n x n symmetric matrix of
rank m < n and let B be a q x n matrix. Then Y'AY and BY are independent ⟺ BA = 0.

Proof:

Thm 2.4.3: Let Y be a MN_n( μ, I ). Let A_1, A_2, ..., A_m be a collection of m n x n
symmetric matrices where r(A_i) = r_i. Let Y'A_1Y, Y'A_2Y, ..., Y'A_mY be a collection of
m QFs. If any two of the following three statements are true:

1. All A_i are idempotent
2. A = ΣA_i is idempotent
3. A_iA_j = 0 for i ≠ j

then

a. for each i, Y'A_iY has a Non-Central Chi-Square distribution with parameters
r_i degrees of freedom and non-centrality parameter = λ_i = ½ μ'A_iμ
b. Y'A_iY and Y'A_jY are independent for i ≠ j
c. rank of A = r = Σr_i, where A = ΣA_i

are all true.

Proof:

SUMMARY OF IMPORTANT RESULTS FROM CHAPTERS 1 - 2
Myers and Milton
MATRIX RESULTS
A non-symmetric, then B = ½(A + A') is symmetric
For A not symmetric, Y'AY equals some Y'BY where B is symmetric (see above)
For any A, (A'A) and (AA') are symmetric
r(A + B) does not necessarily = r(A) + r(B); sometimes it does, sometimes it doesn't
If X is an n x n matrix whose determinant |X| ≠ 0, then rank(X) = n
If X is an n x n matrix whose determinant |X| = 0, then rank(X) < n
A idempotent ⟺ A² = A
A symmetric and idempotent ⟹ rank(A) = trace(A)
A symmetric ⟹ ∃ an orthogonal P (P'P = I = PP') such that P'AP = Λ = diagonal matrix of the
eigenvalues of A
A idempotent ⟹ the eigenvalues of A are either 0 or 1
tr(A ± B) = tr(A) ± tr(B)
tr(ABC) = tr(BCA) = tr(CAB)
For X, an n x (k+1) (where n > k+1) full column rank matrix, (X'X)⁻¹ exists and both
H = X(X'X)⁻¹X' and I − H are symmetric and idempotent, with ranks (k+1) and (n − k − 1),
respectively
RANDOM VECTOR RESULTS
Let Y be a random vector; then E[Y] = vector of the E[Y_i] = μ, Var/Cov(Y) = V(Y) = E[(Y − μ)(Y − μ)'],
E[AY] = A E[Y], V[AY] = A V[Y] A'
Thm 2.2.1: Let Y be a random vector with E[ Y ] = μ and V[ Y ] = V, and let A be a matrix of
constants. Then
$$ E[\,Y'AY\,] = \mathrm{trace}(AV) + \mu'A\mu = \sum_{i=1}^{k}\sum_{j=1}^{k} A_{ij} V_{ji} + \sum_{i=1}^{k}\sum_{j=1}^{k} \mu_i A_{ij} \mu_j $$
If Y = MN_k( μ, V ), then W = AY + b, where A and b are a matrix, of rank k, and a vector of
constants, respectively, is MN_k( Aμ + b, AVA' ).
If Y = MN_k( μ, V ) and V is diagonal, then the elements of Y, the Y_i, are independent.
If the Y_i are NID( μ_i, 1 ), then ΣY_i² ( = Y'Y ) ~ χ²( k, λ = ½ μ'μ ),
a Non-Central Chi-Square; if μ = 0, then
it's a Central Chi-Square.
Thm 2.3.1: Let W_1, W_2, ..., W_n be n independent Non-Central Chi-Square random variables
with degrees of freedom given by df_i and non-centrality parameters given by λ_i,
respectively. Then ΣW_i has a Non-Central Chi-Square distribution with degrees of
freedom = Σdf_i and non-centrality parameter given by Σλ_i. In words, the sum of
independent Non-Central Chi-Squares is also Non-Central Chi-Square, with parameters
equal to the sum of the degrees of freedom and the sum of the non-centrality parameters.
QUADRATIC FORM RESULTS
Even if A is not symmetric, Y'AY can be written as a Y'BY where B is symmetric
Thm 2.3.3: Let Y be a MN_n( μ, V ) and let A be an n x n symmetric matrix. Then Y'AY has
a Non-Central Chi-Square distribution with k degrees of freedom and non-centrality
parameter = λ = ½ μ'Aμ ⟺ AV is idempotent of rank k.
Thm 2.4.1: Let Y be a MN_n( μ, V ). Let A and B be n x n symmetric matrices of ranks r_1
and r_2, respectively. Then Y'AY and Y'BY are independent ⟺ AVB = 0.
Thm 2.4.2: Let Y be a MN_n( μ, V ). Let A be an n x n symmetric matrix and let B be an m x
n matrix. Then Y'AY and BY are independent ⟺ BVA = 0.
Thm 2.4.3: Let Y be a MN_n( μ, I ). Let A_1, A_2, ..., A_m be a collection of n x n symmetric
matrices where r(A_i) = r_i. Let Y'A_1Y, Y'A_2Y, ..., Y'A_mY be a collection of m QFs.
If any two of the following three statements are true:
1. All A_i are idempotent
2. A = ΣA_i is idempotent
3. A_iA_j = 0 for i ≠ j
then all of the following are true:
a. for each i, Y'A_iY has a Non-Central Chi-Square distribution with parameters r_i
degrees of freedom and non-centrality parameter = λ_i = ½ μ'A_iμ
b. Y'A_iY and Y'A_jY are independent for i ≠ j
c. rank of A = r = Σr_i, where A = ΣA_i