Differentiating $\sum e_i^2$ partially with respect to $\hat\beta_1$ and $\hat\beta_2$, we obtain

$$\frac{\partial \sum e_i^2}{\partial \hat\beta_1} = -2\sum\bigl(Y_i - \hat\beta_1 - \hat\beta_2 X_i\bigr) = -2\sum e_i \qquad (1)$$

$$\frac{\partial \sum e_i^2}{\partial \hat\beta_2} = -2\sum\bigl(Y_i - \hat\beta_1 - \hat\beta_2 X_i\bigr)X_i = -2\sum e_i X_i \qquad (2)$$
3A.2 LINEARITY AND UNBIASEDNESS PROPERTIES
OF LEAST-SQUARES ESTIMATORS
From earlier calculations, we have

$$\hat\beta_2 = \frac{\sum x_i Y_i}{\sum x_i^2} = \sum k_i Y_i \qquad (3)$$

where

$$k_i = \frac{x_i}{\sum x_i^2}$$
$\hat\beta_2$ is a linear estimator because it is a linear function of $Y$; actually it is a weighted average of the $Y_i$ with the $k_i$ serving as the weights. It can similarly be shown that $\hat\beta_1$ too is a linear estimator.
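To make the linearity concrete, here is a small numerical sketch (the data values are invented for illustration): the weighted sum $\sum k_i Y_i$ reproduces the slope returned by a standard least-squares routine.

```python
import numpy as np

# Invented sample, purely for illustration
X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

x = X - X.mean()                      # deviations x_i
k = x / np.sum(x**2)                  # weights k_i = x_i / sum(x_i^2)

beta2_hat = np.sum(k * Y)             # linear in Y: a weighted sum of the Y_i
slope_check = np.polyfit(X, Y, 1)[0]  # OLS slope from a library routine
```

The two values agree because `np.polyfit` with degree 1 computes exactly the least-squares slope.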
Incidentally, note these properties of the weights $k_i$:

1. Since the $X_i$ are assumed to be nonstochastic, the $k_i$ are nonstochastic too.
2. $\sum k_i = 0$.
3. $\sum k_i^2 = 1\big/\sum x_i^2$.
4. $\sum k_i x_i = \sum k_i X_i = 1$.
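Properties 2 through 4 are easy to confirm numerically; this sketch uses an arbitrary made-up $X$ vector.

```python
import numpy as np

X = np.array([3.0, 7.0, 1.0, 9.0, 5.0, 4.0])  # any nonconstant X works
x = X - X.mean()
k = x / np.sum(x**2)

prop2 = np.sum(k)        # property 2: equals 0
prop3 = np.sum(k**2)     # property 3: equals 1 / sum(x_i^2)
prop4x = np.sum(k * x)   # property 4: equals 1
prop4X = np.sum(k * X)   # property 4: equals 1 as well
```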
To prove 2 (similarly you can prove 3 and 4),

$$\sum k_i = \sum\frac{x_i}{\sum x_i^2} = \frac{1}{\sum x_i^2}\sum x_i, \quad \text{since for a given sample } \sum x_i^2 \text{ is known,}$$

$$= 0, \quad \text{since } \sum x_i \text{, the sum of deviations from the mean value, is always zero.}$$
Now substitute the PRF $Y_i = \beta_1 + \beta_2 X_i + u_i$ into (3) to obtain

$$\begin{aligned}
\hat\beta_2 &= \sum k_i(\beta_1 + \beta_2 X_i + u_i)\\
&= \beta_1\sum k_i + \beta_2\sum k_i X_i + \sum k_i u_i \qquad (4)\\
&= \beta_2 + \sum k_i u_i
\end{aligned}$$

where use is made of the properties of $k_i$ noted earlier.
Now taking expectation of (4) on both sides and noting that the $k_i$, being nonstochastic, can be treated as constants, we obtain

$$E(\hat\beta_2) = \beta_2 + \sum k_i E(u_i) = \beta_2 \qquad (5)$$

since $E(u_i) = 0$ by assumption. Therefore, $\hat\beta_2$ is an unbiased estimator of $\beta_2$. Likewise, it can be proved that $\hat\beta_1$ is also an unbiased estimator of $\beta_1$.
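The unbiasedness result can be illustrated by simulation: averaging $\hat\beta_2 = \sum k_i Y_i$ over many samples drawn from an assumed PRF (the parameter values below are chosen arbitrarily for illustration) recovers the true slope.

```python
import numpy as np

rng = np.random.default_rng(42)
beta1, beta2, sigma = 1.0, 2.0, 1.0   # assumed true values (illustrative)
X = np.linspace(0.0, 10.0, 20)        # fixed (nonstochastic) regressors
x = X - X.mean()
k = x / np.sum(x**2)                  # weights k_i

estimates = []
for _ in range(20000):
    u = rng.normal(0.0, sigma, size=X.size)
    Y = beta1 + beta2 * X + u         # one sample from the PRF
    estimates.append(np.sum(k * Y))   # beta2_hat = sum(k_i Y_i), Eq. (3)

mean_beta2_hat = float(np.mean(estimates))  # should be close to beta2 = 2.0
```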
A few related notes:

- Setting equations (1) and (2) equal to zero gives the normal equations, whose solutions are the estimators, where $e_i = Y_i - \hat\beta_1 - \hat\beta_2 X_i$.
- Note that $\hat\beta_1 = \bar Y - \hat\beta_2\bar X$.
- Note that for any two variables $Y$ and $X$, $\sum x_i Y_i = \sum X_i y_i = \sum x_i y_i$. If $\bar X$ and $\bar Y$ are both 0, then all three also equal $\sum X_i Y_i$.
- $\sum e_i^2 = \text{RSS}$, the residual sum of squares.
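The identities in these notes can be checked on any small data set; the numbers below are invented for illustration.

```python
import numpy as np

X = np.array([2.0, 4.0, 6.0, 8.0])  # invented sample
Y = np.array([1.0, 3.0, 2.0, 5.0])
x, y = X - X.mean(), Y - Y.mean()

# The three cross-product forms coincide because deviations sum to zero
s1 = np.sum(x * Y)
s2 = np.sum(X * y)
s3 = np.sum(x * y)

beta2_hat = s3 / np.sum(x**2)
beta1_hat = Y.mean() - beta2_hat * X.mean()  # beta1_hat = Ybar - beta2_hat * Xbar
e = Y - beta1_hat - beta2_hat * X            # residuals
rss = np.sum(e**2)                           # residual sum of squares (RSS)
```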
3A.3 VARIANCES AND STANDARD ERRORS
OF LEAST-SQUARES ESTIMATORS
Now by the definition of variance, we can write

$$\begin{aligned}
\operatorname{var}(\hat\beta_2) &= E\bigl[\hat\beta_2 - E(\hat\beta_2)\bigr]^2\\
&= E(\hat\beta_2 - \beta_2)^2 \quad \text{since } E(\hat\beta_2) = \beta_2\\
&= E\Bigl(\sum k_i u_i\Bigr)^2 \quad \text{using Eq. (4) above}\\
&= E\bigl[k_1^2u_1^2 + k_2^2u_2^2 + \cdots + k_n^2u_n^2 + 2k_1k_2u_1u_2 + \cdots + 2k_{n-1}k_nu_{n-1}u_n\bigr] \qquad (6)
\end{aligned}$$
Since by assumption $E(u_i^2) = \sigma^2$ for each $i$ and $E(u_iu_j) = 0$, $i \ne j$, it follows that

$$\operatorname{var}(\hat\beta_2) = \sigma^2\sum k_i^2 = \frac{\sigma^2}{\sum x_i^2} \quad \text{(using the definition of } k_i\text{)} \qquad (7)$$
The variance of $\hat\beta_1$ can be obtained following the same line of reasoning. Once the variances are obtained, their positive square roots give the corresponding standard errors.
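Eq. (7) can also be checked by simulation: the sampling variance of $\hat\beta_2$ across repeated samples (with invented parameter values) should match $\sigma^2/\sum x_i^2$.

```python
import numpy as np

rng = np.random.default_rng(7)
beta1, beta2, sigma = 1.0, 2.0, 1.5        # assumed true values (illustrative)
X = np.linspace(0.0, 5.0, 15)              # fixed regressors
x = X - X.mean()
theoretical_var = sigma**2 / np.sum(x**2)  # Eq. (7)

ests = []
for _ in range(40000):
    u = rng.normal(0.0, sigma, size=X.size)
    Y = beta1 + beta2 * X + u
    ests.append(np.sum(x * Y) / np.sum(x**2))  # beta2_hat for this sample

empirical_var = float(np.var(ests))        # should approach theoretical_var
```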
The following supplementary note (which uses its own equation numbering and writes the intercept as $\beta_0$ and the slope as $\beta_1$) derives the deviation form of the PRF and SRF.

Our PRF is given by $Y = \beta_0 + \beta_1 X + u \qquad (1)$

We can also write our PRF above as $Y_i = \beta_0 + \beta_1 X_i + u_i \qquad (2)$ for $i = 1, 2, \ldots, n$ in a sample from this population.

We can sum over all the $i$'s in (2) and divide by $n$, and we get $\bar Y = \beta_0 + \beta_1\bar X + \bar u \qquad (3)$. Note that $\bar u \ne 0$ in general.

Now we can subtract (3) from (2) and we obtain $y_i = \beta_1 x_i + (u_i - \bar u) \qquad (4)$, where $y_i = Y_i - \bar Y$ and $x_i = X_i - \bar X$.

Now let us find expressions in deviation form for our SRF. Note that the SRF for the sample observations $i = 1, 2, \ldots, n$ is given by $Y_i = \hat\beta_0 + \hat\beta_1 X_i + e_i \qquad (5)$, where the residuals $e_i$ satisfy $\sum_{i=1}^n e_i = 0$ and $\sum_{i=1}^n x_i e_i = 0$.

We could write (5) as $Y_i = \hat Y_i + e_i \qquad (6)$, where $\hat Y_i = \hat\beta_0 + \hat\beta_1 X_i \qquad (7)$ is the estimated value of $E(Y \mid X_i)$.

Now summing (5) as well as (7) over the $i$'s and dividing by $n$, we get $\bar Y = \hat\beta_0 + \hat\beta_1\bar X = \bar{\hat Y} \qquad (8)$.

When we now subtract (8) from (5), we obtain $y_i = \hat\beta_1 x_i + e_i \qquad (9)$ [the SRF in deviation form].

Subtracting (8) from (7), we get $\hat y_i = \hat\beta_1 x_i \qquad (10)$, or equivalently, subtracting $\bar Y = \bar{\hat Y}$ from (6), we get $y_i = \hat y_i + e_i \qquad (11)$.

Now, because $\sum_{i=1}^n x_i e_i = 0$, $\sum_{i=1}^n \hat y_i e_i = 0$ follows from (10).
__________________________________________________________________________
Notice that if we now square (11) on both sides and then sum over $i$, we will have TSS on the left-hand side and the sum of ESS and RSS on the right-hand side. All we need to show is that $\sum \hat y_i e_i = 0$ (see just above).
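A quick numerical check of this decomposition (with made-up data): squaring and summing the deviation-form identity gives TSS = ESS + RSS because the cross term vanishes.

```python
import numpy as np

X = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])  # invented sample
Y = np.array([2.0, 2.5, 3.9, 4.1, 5.6, 6.0])
x, y = X - X.mean(), Y - Y.mean()

b = np.sum(x * y) / np.sum(x**2)  # slope in deviation form
yhat = b * x                      # fitted deviations
e = y - yhat                      # residuals

tss = np.sum(y**2)                # total sum of squares
ess = np.sum(yhat**2)             # explained sum of squares
rss = np.sum(e**2)                # residual sum of squares
cross = np.sum(yhat * e)          # vanishes because sum(x_i * e_i) = 0
```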
The variance of $\hat\beta_1$ is

$$\operatorname{var}(\hat\beta_1) = \frac{\sum X_i^2}{n\sum x_i^2}\,\sigma^2 \qquad (8)$$

3A.5 THE LEAST-SQUARES ESTIMATOR OF $\sigma^2$

Recall that

$$Y_i = \beta_1 + \beta_2 X_i + u_i \qquad (9)$$

Therefore,

$$\bar Y = \beta_1 + \beta_2\bar X + \bar u \qquad (10)$$

Subtracting (10) from (9) gives

$$y_i = \beta_2 x_i + (u_i - \bar u) \qquad (11)$$

Also recall that

$$e_i = y_i - \hat\beta_2 x_i \qquad (12)$$

Therefore, substituting (11) into (12) yields

$$e_i = \beta_2 x_i + (u_i - \bar u) - \hat\beta_2 x_i \qquad (13)$$

Collecting terms, squaring, and summing on both sides, we obtain

$$\sum e_i^2 = (\hat\beta_2 - \beta_2)^2\sum x_i^2 + \sum(u_i - \bar u)^2 - 2(\hat\beta_2 - \beta_2)\sum x_i(u_i - \bar u) \qquad (14)$$

Taking expectations on both sides gives

$$\begin{aligned}
E\Bigl(\sum e_i^2\Bigr) &= \sum x_i^2\,E(\hat\beta_2 - \beta_2)^2 + E\Bigl[\sum(u_i - \bar u)^2\Bigr] - 2E\Bigl[(\hat\beta_2 - \beta_2)\sum x_i(u_i - \bar u)\Bigr]\\
&= \sum x_i^2\operatorname{var}(\hat\beta_2) + (n-1)\operatorname{var}(u_i) - 2E\Bigl[\sum k_i u_i\Bigl(\sum x_i u_i\Bigr)\Bigr]\\
&= \sigma^2 + (n-1)\sigma^2 - 2E\Bigl[\sum k_i x_i u_i^2\Bigr] \qquad (15)\\
&= \sigma^2 + (n-1)\sigma^2 - 2\sigma^2\\
&= (n-2)\sigma^2
\end{aligned}$$

where, in the last but one step, use is made of the definition of $k_i$ given in Eq. (3) and the relation given in Eq. (4). Also note that

$$\begin{aligned}
E\Bigl[\sum(u_i - \bar u)^2\Bigr] &= E\Bigl[\sum u_i^2 - n\bar u^2\Bigr] = E\Bigl[\sum u_i^2 - n\Bigl(\frac{\sum u_i}{n}\Bigr)^2\Bigr]\\
&= E\Bigl[\sum u_i^2 - \frac{1}{n}\Bigl(\sum u_i\Bigr)^2\Bigr]\\
&= n\sigma^2 - \frac{1}{n}\,n\sigma^2 = (n-1)\sigma^2
\end{aligned}$$

where use is made of the fact that the $u_i$ are uncorrelated and the variance of each $u_i$ is $\sigma^2$.

Thus, we obtain

$$E\Bigl(\sum e_i^2\Bigr) = (n-2)\sigma^2 \qquad (16)$$

Therefore, if we define

$$\hat\sigma^2 = \frac{\sum e_i^2}{n-2} \qquad (17)$$

its expected value is

$$E(\hat\sigma^2) = \frac{1}{n-2}E\Bigl(\sum e_i^2\Bigr) = \sigma^2 \quad \text{using (16)} \qquad (18)$$

which shows that $\hat\sigma^2$ is an unbiased estimator of the true $\sigma^2$.
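The unbiasedness of $\hat\sigma^2 = \sum e_i^2/(n-2)$ can likewise be illustrated by simulation (the parameter values below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
beta1, beta2, sigma = 0.5, 1.0, 2.0  # assumed true values (illustrative)
X = np.linspace(1.0, 10.0, 12)       # fixed regressors
n = X.size
x = X - X.mean()

sig2_hats = []
for _ in range(30000):
    u = rng.normal(0.0, sigma, size=n)
    Y = beta1 + beta2 * X + u
    b2 = np.sum(x * (Y - Y.mean())) / np.sum(x**2)
    b1 = Y.mean() - b2 * X.mean()
    e = Y - b1 - b2 * X                         # residuals
    sig2_hats.append(np.sum(e**2) / (n - 2))    # sigma2_hat, Eq. (17)

mean_sig2_hat = float(np.mean(sig2_hats))  # should be close to sigma**2 = 4
```

Dividing by $n - 2$ rather than $n$ is exactly what removes the downward bias that estimating the two coefficients introduces.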
3A.6 MINIMUM-VARIANCE PROPERTY
OF LEAST-SQUARES ESTIMATORS
It was shown in Appendix 3A, Section 3A.2, that the least-squares estimator $\hat\beta_2$ is linear as well as unbiased (this holds true of $\hat\beta_1$ too). To show that these estimators are also minimum variance in the class of all linear unbiased estimators, consider the least-squares estimator $\hat\beta_2$:

$$\hat\beta_2 = \sum k_i Y_i$$

where

$$k_i = \frac{X_i - \bar X}{\sum(X_i - \bar X)^2} = \frac{x_i}{\sum x_i^2} \quad \text{(see Appendix 3A.2)} \qquad (19)$$

which shows that $\hat\beta_2$ is a weighted average of the $Y$s, with the $k_i$ serving as the weights.
Let us define an alternative linear estimator of $\beta_2$ as follows:

$$\hat\beta_2^* = \sum w_i Y_i \qquad (20)$$

where the $w_i$ are also weights, not necessarily equal to the $k_i$.
Now

$$\begin{aligned}
E(\hat\beta_2^*) &= \sum w_i E(Y_i)\\
&= \sum w_i(\beta_1 + \beta_2 X_i) \qquad (21)\\
&= \beta_1\sum w_i + \beta_2\sum w_i X_i
\end{aligned}$$

Therefore, for $\hat\beta_2^*$ to be unbiased, we must have

$$\sum w_i = 0 \qquad (22)$$

and

$$\sum w_i X_i = 1 \qquad (23)$$
Also, we may write

$$\begin{aligned}
\operatorname{var}(\hat\beta_2^*) &= \operatorname{var}\Bigl(\sum w_i Y_i\Bigr)\\
&= \sum w_i^2\operatorname{var}(Y_i) \quad [\text{Note: } \operatorname{var}(Y_i) = \operatorname{var}(u_i) = \sigma^2]\\
&= \sigma^2\sum w_i^2 \quad [\text{Note: } \operatorname{cov}(Y_i, Y_j) = 0\ (i \ne j)]\\
&= \sigma^2\sum\Bigl(w_i - \frac{x_i}{\sum x_i^2} + \frac{x_i}{\sum x_i^2}\Bigr)^2 \quad \text{(Note the mathematical trick)}\\
&= \sigma^2\sum\Bigl(w_i - \frac{x_i}{\sum x_i^2}\Bigr)^2 + \sigma^2\,\frac{1}{\sum x_i^2} + 2\sigma^2\sum\Bigl(w_i - \frac{x_i}{\sum x_i^2}\Bigr)\Bigl(\frac{x_i}{\sum x_i^2}\Bigr)\\
&= \sigma^2\sum\Bigl(w_i - \frac{x_i}{\sum x_i^2}\Bigr)^2 + \sigma^2\,\frac{1}{\sum x_i^2} \qquad (24)
\end{aligned}$$
because the last term in the next-to-last step drops out. (Why? By (22) and (23), $\sum w_i x_i = \sum w_i X_i - \bar X\sum w_i = 1$, so the cross term equals $\frac{2\sigma^2}{\sum x_i^2}\bigl(\sum w_i x_i - 1\bigr) = 0$.)

Since the last term in (24) is constant, the variance of $\hat\beta_2^*$ can be minimized only by manipulating the first term. If we let $w_i = k_i = x_i/\sum x_i^2$, the first term vanishes and Eq. (24) reduces to

$$\operatorname{var}(\hat\beta_2^*) = \frac{\sigma^2}{\sum x_i^2} = \operatorname{var}(\hat\beta_2) \qquad (25)$$

To put it differently, if there is a minimum-variance linear unbiased estimator of $\beta_2$, it must be the least-squares estimator. Similarly it can be shown that $\hat\beta_1$ is a minimum-variance linear unbiased estimator of $\beta_1$.
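The minimum-variance argument can be made concrete numerically: perturb the OLS weights $k_i$ in a direction that preserves the unbiasedness constraints (22) and (23), and the implied variance can only increase. The construction below is an illustrative sketch with invented numbers.

```python
import numpy as np

sigma = 1.0                          # assumed error s.d. (illustrative)
X = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
x = X - X.mean()
k = x / np.sum(x**2)                 # OLS weights k_i, Eq. (19)

# Perturb k along a direction d orthogonal to both the constant and x,
# so that sum(w_i) = 0 and sum(w_i X_i) = 1 still hold (Eqs. 22-23).
d = np.array([1.0, -1.0, -1.0, 1.0, 1.0, -1.0, -1.0, 1.0])
for basis in (np.ones_like(X), x):   # the constant and x are orthogonal here
    d = d - (d @ basis) / (basis @ basis) * basis
w = k + 0.05 * d                     # alternative unbiased weights w_i

sum_w = np.sum(w)                    # constraint (22): should be 0
sum_wX = np.sum(w * X)               # constraint (23): should be 1
var_ols = sigma**2 * np.sum(k**2)    # var of the OLS estimator, Eq. (7)
var_alt = sigma**2 * np.sum(w**2)    # var of the alternative estimator
```

Because $d$ is orthogonal to $x$, the cross term drops out exactly as in (24), and `var_alt` exceeds `var_ols` by $\sigma^2(0.05)^2\sum d_i^2 > 0$.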