Walter Sosa-Escudero
F(z) = \int_{-\infty}^{z} \frac{1}{\sqrt{2\pi}}\, e^{-s^2/2}\, ds
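As a quick numerical sanity check (not part of the original slides), the CDF above can be evaluated by integrating the standard normal density with the trapezoidal rule; the values F(0) = 0.5 and F(1.96) ≈ 0.975 are standard:

```python
from math import exp, pi, sqrt

# Numerical evaluation of the standard normal CDF F(z), integrating the
# density with the trapezoidal rule from a large negative lower limit.
def F(z, lo=-10.0, steps=200_000):
    h = (z - lo) / steps
    dens = [exp(-(lo + i * h) ** 2 / 2) / sqrt(2 * pi) for i in range(steps + 1)]
    return h * (sum(dens) - 0.5 * dens[0] - 0.5 * dens[-1])

print(round(F(0.0), 4))   # 0.5, by symmetry of the density
print(round(F(1.96), 4))  # 0.975, the familiar two-sided 5% critical value
```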
Example: Z \sim N(\mu, \sigma^2)

Here \theta = (\mu, \sigma^2)', and K = 2.

Note:

f(z; \theta): \Re \to \Re, \qquad f(z; \theta) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left[ -\frac{(z - \mu)^2}{2\sigma^2} \right]

L(\theta; z): \Re^2 \to \Re, \qquad L(\theta; z) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left[ -\frac{(z - \mu)^2}{2\sigma^2} \right]

The density f and the likelihood L share the same formula, but f is a function of z with \theta held fixed, while L is a function of \theta with z held fixed.
Maximum Likelihood
Some normalizations

\hat{\theta}_n \equiv \underset{\theta}{\text{argmax}} \; L(\theta; z)

Note

\hat{\theta}_n = \underset{\theta}{\text{argmax}} \; \ln L(\theta; z)

and, with an iid sample,

\ln L(\theta; z) = \sum_{i=1}^{n} \ln L(\theta; z_i) \equiv \sum_{i=1}^{n} l(\theta; z_i)

Also

\hat{\theta}_n = \underset{\theta}{\text{argmax}} \; \frac{1}{n} \sum_{i=1}^{n} l(\theta; z_i)

since the log is strictly increasing and 1/n is a positive constant: neither changes the maximizer.
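A small sketch of why these normalizations are harmless, using a simulated N(\mu, 1) sample and a grid search over \mu (all names here are illustrative, not from the slides):

```python
import numpy as np

# argmax L = argmax ln L = argmax (1/n) ln L: grid-search demo over mu
rng = np.random.default_rng(9)
z = rng.normal(loc=2.0, size=50)
grid = np.linspace(0.0, 4.0, 2001)

# Log-likelihood of N(mu, 1), up to constants that do not involve mu
lnL = np.array([-0.5 * np.sum((z - mu) ** 2) for mu in grid])
L = np.exp(lnL - lnL.max())  # shifted before exp to avoid underflow; argmax unchanged

i1, i2, i3 = np.argmax(L), np.argmax(lnL), np.argmax(lnL / len(z))
print(grid[i1], grid[i2], grid[i3])  # all three maximizers are identical
```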
Conditional likelihood
Three Examples

Poisson Distribution: Y \sim \text{Poisson}(\lambda) if it takes non-negative integer values and:

f(y) = Pr(Y = y) = \frac{e^{-\lambda} \lambda^{y}}{y!}

The sample likelihood is

L(\lambda; Y) = \prod_{i=1}^{n} \frac{e^{-\lambda} \lambda^{y_i}}{y_i!}

and the log-likelihood is

l(\lambda; Y) = \sum_{i=1}^{n} \left[ -\lambda + y_i \ln \lambda - \ln y_i! \right] = -n\lambda + \ln \lambda \sum_{i=1}^{n} y_i - \sum_{i=1}^{n} \ln y_i!

FOCs are

-n + \frac{1}{\hat{\lambda}} \sum_{i=1}^{n} y_i = 0 \quad \Rightarrow \quad \hat{\lambda} = \frac{1}{n} \sum_{i=1}^{n} y_i = \bar{y}
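A numerical check of the Poisson result above (a sketch with simulated data; the grid search is only for illustration, since the closed form is the sample mean):

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.poisson(lam=3.0, size=10_000)  # simulated Poisson(3) sample

# Log-likelihood up to the -sum(ln y_i!) term, which does not involve lambda
def loglik(lam):
    return -len(y) * lam + np.log(lam) * y.sum()

grid = np.linspace(0.5, 6.0, 5501)       # candidate values of lambda
lam_hat = grid[np.argmax(loglik(grid))]  # numerical maximizer
print(lam_hat, y.mean())                 # numerical argmax matches y_bar
```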
Probit Model:

Y|X \sim \text{Bernoulli}(p), \; p \equiv Pr(Y = 1|x) = F(x'\beta), and F(z) is the standard normal CDF.

The sample (conditional) likelihood function will be:

L(\beta; Y) = \prod_{i/y_i=1} p_i \prod_{i/y_i=0} (1 - p_i) = \prod_{i=1}^{n} p_i^{y_i} (1 - p_i)^{1 - y_i}

Then

l(\beta; Y) = \sum_{i=1}^{n} \left[ y_i \ln F(x_i'\beta) + (1 - y_i) \ln\left( 1 - F(x_i'\beta) \right) \right]

FOCs are

\sum_{i=1}^{n} \frac{y_i - F_i}{F_i (1 - F_i)} \, f_i \, x_i = 0, \qquad F_i \equiv F(x_i'\beta), \; f_i \equiv f(x_i'\beta)

where f is the standard normal density. This is a system of K non-linear equations with K unknowns. Moreover, it is not possible to solve for \beta and obtain an explicit solution.
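Since the FOCs have no closed form, they are solved numerically. A minimal sketch, assuming simulated data and Fisher scoring (a Newton-type iteration using the expected Hessian, which is one standard choice, not necessarily the slides' method); all variable names are illustrative:

```python
import numpy as np
from math import erf, exp, pi, sqrt

def Phi(z):  # standard normal CDF
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def phi(z):  # standard normal density
    return exp(-0.5 * z * z) / sqrt(2.0 * pi)

rng = np.random.default_rng(1)
n = 2000
x = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + one regressor
beta_true = np.array([0.5, -1.0])
probs = np.array([Phi(v) for v in x @ beta_true])
y = (rng.uniform(size=n) < probs).astype(float)

beta = np.zeros(2)
for _ in range(100):
    xb = x @ beta
    F = np.array([Phi(v) for v in xb])
    f = np.array([phi(v) for v in xb])
    w = (y - F) * f / (F * (1 - F))          # weights from the FOCs above
    score = x.T @ w                          # score vector, K x 1
    W = f * f / (F * (1 - F))                # expected-information weights
    Hess = -(x * W[:, None]).T @ x           # Fisher-scoring Hessian approximation
    step = np.linalg.solve(Hess, score)
    beta = beta - step
    if np.max(np.abs(step)) < 1e-10:
        break

print(beta)  # close to beta_true = (0.5, -1.0)
```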
or, alternatively

y|x \sim N(x'\beta_0, \sigma^2), \qquad f(y|x) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left[ -\frac{1}{2} \left( \frac{y - x'\beta_0}{\sigma} \right)^2 \right]

Then

l(\beta, \sigma^2; Y) = -n \ln \sigma - \frac{n}{2} \ln 2\pi - \frac{1}{2\sigma^2} \sum_{i=1}^{n} (y_i - x_i'\beta)^2
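Because \beta enters this log-likelihood only through the sum of squared residuals, the MLE of \beta coincides with OLS, and the MLE of \sigma^2 divides the SSR by n. A small simulated check (names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
x = np.column_stack([np.ones(n), rng.normal(size=n)])
beta0 = np.array([1.0, 2.0])
y = x @ beta0 + rng.normal(scale=0.5, size=n)

beta_ols = np.linalg.lstsq(x, y, rcond=None)[0]  # OLS = MLE of beta
resid = y - x @ beta_ols
sigma2_hat = (resid @ resid) / n                 # MLE divides by n, not n - K

def loglik(beta, sigma2):
    e = y - x @ beta
    # -n ln(sigma) written as -(n/2) ln(sigma^2)
    return -n / 2 * np.log(sigma2) - n / 2 * np.log(2 * np.pi) - (e @ e) / (2 * sigma2)

# Perturbing beta away from the OLS solution can only lower the log-likelihood
print(beta_ols, sigma2_hat)
print(loglik(beta_ols, sigma2_hat) >= loglik(beta_ols + 0.01, sigma2_hat))
```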
Asymptotic Properties

At this stage we will simply state them, and discuss them as we go along. Some are purely technical, but some of them have important intuitive meaning.

1. Z_i, i = 1, \ldots, n, iid with density f(z_i; \theta_0).
2. Identification: \theta \neq \theta_0 \Rightarrow f(z_i; \theta) \neq f(z_i; \theta_0).
3. \theta \in \Theta, a compact set.
4. \theta_0 is an interior point of \Theta.
Consistency

Our starting point is the following normalized version of the MLE:

\hat{\theta}_n = \underset{\theta}{\text{argmax}} \; Q_n(\theta), \qquad Q_n(\theta) \equiv \frac{1}{n} \sum_{i=1}^{n} l(\theta; Z_i)

The argument runs:

Q_n(\theta) \xrightarrow{\,up\,} Q_0(\theta) \quad \Rightarrow \quad \underbrace{\text{argmax} \; Q_n(\theta)}_{\hat{\theta}_n} \xrightarrow{\;p\;} \underbrace{\text{argmax} \; Q_0(\theta)}_{\theta_0}
Intuition

\hat{\theta}_n maximizes Q_n(\theta) (definition of MLE).

Q_n(\theta) \to Q_0(\theta) (the ML problem is well defined at \infty).

\theta_0 maximizes Q_0(\theta) (the true value solves the problem at \infty).

By maximizing Q_n(\theta) we end up maximizing Q_0(\theta): convergence of the sequence of functions guarantees convergence of maximizers (this is the difficult step).

If argmax Q_n(\theta) is seen as a function defined on functions, what property is implied by 3)?
By the Law of Large Numbers, pointwise in \theta:

Q_n(\theta) = \frac{1}{n} \sum_{i=1}^{n} l(\theta, Z_i) \;\xrightarrow{\;p\;}\; E[l(\theta, Z)] \equiv Q_0(\theta)
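A simulated illustration of consistency, using the earlier Poisson example, where the maximizer of Q_n is the sample mean (an assumption-free consequence of the FOCs derived above):

```python
import numpy as np

# As n grows, the maximizer of Q_n (here, y_bar) approaches the true lambda_0
rng = np.random.default_rng(3)
lam0 = 3.0
errors = []
for n in [100, 10_000, 1_000_000]:
    y = rng.poisson(lam=lam0, size=n)
    errors.append(abs(y.mean() - lam0))
print(errors)  # absolute estimation error shrinks toward 0 as n grows
```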
Asymptotic normality
The score is the derivative of the log-likelihood:

s(\theta; z) \equiv \frac{d\, l(\theta; z)}{d\theta} = \frac{d \ln f(z; \theta)}{d\theta}, \quad \text{a } K \times 1 \text{ vector}

Define J \equiv V[s(\theta_0; Z)], a K \times K matrix.

Note:

\frac{d \ln f(z; \theta)}{d\theta} = \frac{df(z; \theta)/d\theta}{f(z; \theta)}

hence

df(z; \theta)/d\theta = s(\theta; z) f(z; \theta)

Differentiating both sides of \int f(z; \theta)\, dz = 1 with respect to \theta and replacing above:

\int s(\theta; z) f(z; \theta)\, dz = 0

When \theta = \theta_0

\int s(\theta_0; z) f(z; \theta_0)\, dz = E[s(\theta_0; z)]

So

E[s(\theta_0; z)] = 0
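A numerical check of E[s(\theta_0; Z)] = 0, using the Poisson example from before, whose score is s(\lambda; y) = y/\lambda - 1 (obtained by differentiating the Poisson log-likelihood for a single observation):

```python
import numpy as np

# For Poisson, l(lambda; y) = -lambda + y ln(lambda) - ln(y!),
# so s(lambda; y) = y/lambda - 1, and E[s] = 0 only at the true lambda_0.
rng = np.random.default_rng(4)
lam0 = 3.0
y = rng.poisson(lam=lam0, size=1_000_000)
score = y / lam0 - 1.0
print(score.mean())             # close to 0: the score has mean zero at lambda_0
print((y / 2.0 - 1.0).mean())   # evaluated at the wrong lambda = 2, not close to 0
```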
Source: Bartle, R., 1966, The Elements of Integration, Wiley, New York
Differentiating \int s(\theta; z) f(z; \theta)\, dz = 0 once more and evaluating at \theta_0 gives the information equality:

\int H(\theta_0; z) f(z; \theta_0)\, dz = E[H(\theta_0; Z)], \qquad J + E[H(\theta_0; Z)] = 0

where H(\theta; z) is the Hessian of the log-likelihood.
Asymptotic normality

Under our assumptions, wpa1 the MLE satisfies the FOCs

s(\hat{\theta}_n; Z) = 0

Take a first order Taylor expansion around \theta_0

s(\hat{\theta}_n; Z) = s(\theta_0; Z) + H(\bar{\theta}; Z)(\hat{\theta}_n - \theta_0) = 0

where \bar{\theta} is a mean value located between \hat{\theta}_n and \theta_0. (Note that consistency implies \bar{\theta} \xrightarrow{\,p\,} \theta_0.)

Now solve

\sqrt{n}\,(\hat{\theta}_n - \theta_0) = \left( -\frac{H(\bar{\theta}; Z)}{n} \right)^{-1} \left( \frac{s(\theta_0; Z)}{\sqrt{n}} \right)

Now we are back in familiar territory: we will show that the first factor does not explode, and that the second is asymptotically normal.
According to our assumptions, n^{-1} H(\bar{\theta}; Z) = n^{-1} \sum_{i=1}^{n} H(\bar{\theta}; Z_i) converges uniformly in probability to E[H(\theta; Z)], which, by the previous results, is continuous in \theta.

Hence, by the previous result, since \bar{\theta} \xrightarrow{\,p\,} \theta_0

n^{-1} H(\bar{\theta}; Z) \xrightarrow{\;p\;} E[H(\theta_0; Z)] = -J < \infty

Now we show:

\frac{s(\theta_0; Z)}{\sqrt{n}} \xrightarrow{\;d\;} N(0, J)

Start with

\frac{s(\theta_0; Z)}{\sqrt{n}} = \sqrt{n}\, \frac{s(\theta_0; Z)}{n} = \sqrt{n}\, \frac{1}{n} \sum_{i=1}^{n} s(\theta_0; Z_i)

which is \sqrt{n} times the sample mean of the iid terms s(\theta_0; Z_i), each with mean zero and variance J, so the result follows from the Central Limit Theorem.
Collecting results:

\sqrt{n}\,(\hat{\theta}_n - \theta_0) = \underbrace{\left( -\frac{1}{n} H(\bar{\theta}; Z) \right)^{-1}}_{\xrightarrow{\,p\,}\; J^{-1}} \; \underbrace{\frac{s(\theta_0; Z)}{\sqrt{n}}}_{\xrightarrow{\,d\,}\; N(0,\, J)} \;\xrightarrow{\;d\;}\; N(0, J^{-1})

since J^{-1} N(0, J) has variance J^{-1} J J^{-1} = J^{-1}.
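A Monte Carlo illustration of this result for the Poisson example: the score there is y/\lambda - 1, so J = V(y)/\lambda_0^2 = 1/\lambda_0 and the asymptotic variance is J^{-1} = \lambda_0 (the simulation design is mine, not from the slides):

```python
import numpy as np

# Check sqrt(n)(lambda_hat - lambda_0) -> N(0, J^{-1}) with J^{-1} = lambda_0
rng = np.random.default_rng(5)
lam0, n, reps = 3.0, 400, 5000
draws = rng.poisson(lam=lam0, size=(reps, n))
stats = np.sqrt(n) * (draws.mean(axis=1) - lam0)
print(stats.mean(), stats.var())  # approx 0 and approx lambda_0 = 3
```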
Variance estimation

The asymptotic variance of \hat{\theta}_n is J^{-1}, with J = V(s(\theta_0, Z)). We will propose three consistent estimators; the first is based on the information equality J = -E[H(\theta_0; Z)]:

\hat{J} = -\frac{1}{n} \sum_{i=1}^{n} H(\hat{\theta}_n; Z_i)
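A sketch of this Hessian-based estimator for the Poisson example, where H(\lambda; y) = -y/\lambda^2 (differentiating the score y/\lambda - 1 once more):

```python
import numpy as np

# J_hat = -(1/n) sum H(lambda_hat; y_i) = y_bar / lambda_hat^2 = 1/lambda_hat,
# which should be close to the true J = 1/lambda_0.
rng = np.random.default_rng(6)
lam0 = 3.0
y = rng.poisson(lam=lam0, size=100_000)
lam_hat = y.mean()
J_hat = -np.mean(-y / lam_hat**2)
print(J_hat, 1.0 / lam0)  # estimator close to the true J = 1/3
```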
Invariance
If \hat{\theta} is the MLE of \theta and \gamma = g(\theta) with g one-to-one, then \hat{\gamma} = g(\hat{\theta}) is the MLE of \gamma.

Proof:

Since \hat{\theta} is the MLE

l(\hat{\theta}, z) \geq l(\theta, z),

for every \theta. Since \gamma = g(\theta) is one-to-one:

l(g^{-1}(\hat{\gamma}), z) \geq l(g^{-1}(\gamma), z)

for every \gamma, then \hat{\gamma} = g(\hat{\theta}) maximizes the reparametrized log-likelihood.
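A standard illustration of invariance (my example, assuming the normal model from earlier): the MLE of \sigma^2 for iid N(\mu, \sigma^2) data is the uncorrected sample variance, and since g(\sigma^2) = \sqrt{\sigma^2} is one-to-one on the positive reals, the MLE of \sigma is simply its square root:

```python
import numpy as np

rng = np.random.default_rng(7)
z = rng.normal(loc=1.0, scale=2.0, size=100_000)  # true sigma^2 = 4

sigma2_hat = np.mean((z - z.mean()) ** 2)  # MLE of sigma^2 (divides by n)
sigma_hat = np.sqrt(sigma2_hat)            # MLE of sigma, by invariance
print(sigma2_hat, sigma_hat)               # close to 4 and 2
```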
The Cramér-Rao bound rests on the Cauchy-Schwarz inequality: for any random variables X and Y,

V(X) \geq \frac{\text{Cov}(X, Y)^2}{V(Y)}

We will take X = \tilde{\theta}, an unbiased estimator of \theta, and Y = s(\theta_0, Z). It is immediate to check

V(s(\theta_0, Z)) = V\left( \sum_{i=1}^{n} s(\theta_0, Z_i) \right) = nJ

We have

V(\tilde{\theta}) \geq \frac{\text{Cov}\left( \tilde{\theta}, s(\theta_0, Z) \right)^2}{nJ}

so we need to show \text{Cov}\left( \tilde{\theta}, s(\theta_0, Z) \right) = 1.

Since E[s(\theta_0, Z)] = 0,

\text{Cov}\left( \tilde{\theta}, s(\theta_0, Z) \right) = E\left[ \tilde{\theta}\, s(\theta_0, Z) \right]
= \int \tilde{\theta}\, \frac{df(\tilde{z}; \theta)/d\theta}{f(\tilde{z})}\, f(\tilde{z})\, d\tilde{z}
= \int \tilde{\theta}\, \frac{df(\tilde{z}; \theta)}{d\theta}\, d\tilde{z}
= \frac{d}{d\theta} \int \tilde{\theta}\, f(\tilde{z}; \theta)\, d\tilde{z}
= \left. \frac{d}{d\theta} E(\tilde{\theta}) \right|_{\theta = \theta_0} = 1

since E(\tilde{\theta}) = \theta (unbiasedness). Hence V(\tilde{\theta}) \geq (nJ)^{-1}.
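A Monte Carlo check of the bound for the Poisson example (my illustration): \bar{y} is unbiased for \lambda, and V(\bar{y}) = \lambda/n, which equals the Cramér-Rao bound 1/(nJ) since J = 1/\lambda, so the sample mean attains the bound:

```python
import numpy as np

rng = np.random.default_rng(8)
lam0, n, reps = 3.0, 50, 200_000
ybars = rng.poisson(lam=lam0, size=(reps, n)).mean(axis=1)
print(ybars.var(), lam0 / n)  # simulated V(y_bar) matches the bound lambda_0/n
```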