Maximum Likelihood Estimation

Walter Sosa-Escudero
Econ 507. Econometric Analysis. Spring 2009
April 13, 2009

Outline
- Motivation: a non-linear model
- Likelihood and Maximum-Likelihood
- Asymptotic properties

Consider the following data set

The explained variable is binary.


Examples: admission to grad school depending on GRE score, families sending kids to school as a function of income, etc.

This case is a good candidate for a non-linear model


How to search for a model and estimate its parameters?

Consider the following non-linear model:

E(y|x) = F(β₁ + β₂ x),    F(z) = ∫_{−∞}^{z} (1/√(2π)) e^{−s²/2} ds

For example, positive values for β₁ may produce a function that looks like the one in the previous graph (we will get β̂₁ = 2.8619 and β̂₂ = 6.7612).

This is a truly non-linear model (in both parameters and variables).

This is the probit model.
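
Just to fix ideas, a minimal Python sketch of this mean function, using the coefficient values quoted above (signs as printed on the slide) and scipy's normal CDF in the role of F:

    import numpy as np
    from scipy.stats import norm

    # Probit conditional mean: E(y|x) = F(b1 + b2*x), F the standard normal CDF
    b1, b2 = 2.8619, 6.7612  # the values quoted above, signs as printed

    x = np.linspace(-1.0, 1.0, 5)
    print(norm.cdf(b1 + b2 * x))  # always inside (0, 1), unlike a linear model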


In order to search for an estimation strategy for the parameters, we will exploit the following fact.

Since y|x is a binary variable,

E(y|x) = Pr(y = 1|x)

so, by being specific about the form of E(y|x), we are being specific about Pr(y = 1|x).


Likelihood and Basic Concepts

Z ~ f(z; θ₀), θ₀ ∈ Θ ⊂ ℝᴷ. f(z; θ) is a member of a parametric class indexed by θ.

Z̃ = (Z₁, Z₂, ..., Zₙ)′ is an iid sample from f(z; θ₀).

The likelihood function for Z is

L(θ; z) : ℝᴷ → ℝ : f(z; θ)

In the density function θ is taken as given and z varies; in the likelihood function these roles are reversed.

Note that, due to the iid assumption,

L(θ; z̃) = f(z̃; θ) = ∏_{i=1}^n f(zᵢ; θ) = ∏_{i=1}^n L(θ; zᵢ)


Example: Z ~ N(μ, σ²)

Here θ = (μ, σ²)′, and K = 2. Note:

f(z; θ) : ℝ → ℝ : (1/√(2πσ²)) exp[−(z − μ)²/(2σ²)]

L(θ; z) : ℝ² → ℝ : (1/√(2πσ²)) exp[−(z − μ)²/(2σ²)]

Intuitively, L(θ; z₀) quantifies how compatible any choice of θ is with the occurrence of z₀.
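
A small sketch to make the role reversal concrete (the realization z₀ and the candidate values of μ are made up for illustration): the same formula is evaluated as a function of θ with z held fixed.

    import numpy as np

    def normal_density(z, mu, sigma2):
        """f(z; theta) for theta = (mu, sigma2): the N(mu, sigma2) density."""
        return np.exp(-(z - mu) ** 2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)

    z0 = 1.3  # one fixed realization
    # Read as a function of theta with z0 held fixed, this is L(theta; z0):
    for mu in (0.0, 1.0, 1.3):
        print(mu, normal_density(z0, mu, 1.0))
    # L is largest at mu = z0: that choice of theta is most compatible with z0.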


Maximum Likelihood

The maximum-likelihood estimator θ̂ₙ is defined as

θ̂ₙ ≡ argmax_θ L(θ; z̃)

It is kind of a reverse engineering process: to generate random numbers from a certain distribution you first set parameter values and then obtain realizations. Here we do the reverse: first take the realizations as given, and then look for the parameter values that are most likely to have generated them.


Some normalizations

θ̂ₙ ≡ argmax_θ L(θ; z̃)

Note that

θ̂ₙ = argmax_θ ln L(θ; z̃)

and

ln L(θ; z̃) = Σ_{i=1}^n ln L(θ; zᵢ) ≡ Σ_{i=1}^n l(θ; zᵢ)

Also,

θ̂ₙ = argmax_θ (1/n) Σ_{i=1}^n l(θ; zᵢ)

We will use whichever one is more convenient.



If l(θ; z̃) is differentiable and has a local maximum at an interior point, then the FOCs for the problem are

∂l(θ; z̃)/∂θ |_{θ=θ̂ₙ} = Σ_{i=1}^n ∂l(θ; zᵢ)/∂θ |_{θ=θ̂ₙ} = 0.

This is a system of K possibly non-linear equations with K unknowns, which defines θ̂ₙ implicitly.

Even if we can guarantee that a solution to this problem exists, we do not have enough information to solve for θ̂ explicitly.


The discrete case

When Y is a discrete random variable, the likelihood function is directly the probability function, that is,

L(θ; y) = f(y; θ)

where f(y; θ) is now Pr(Y = y; θ).


Conditional likelihood

Suppose f(y, x; θ) is the joint density function of two variables X and Y. Then it can be decomposed as

f(y, x; θ) = f(y|x; θ₁) f(x; θ₂)

Suppose we are interested in estimating θ₁: if θ₁ and θ₂ are functionally unrelated, then maximizing the joint likelihood is achieved by maximizing the conditional and the marginal likelihoods separately; the MLE of θ₁ also maximizes the conditional likelihood, so we can obtain ML estimates by specifying the conditional likelihood only.


Three Examples

Poisson Distribution: Y ~ Poisson(λ) if it takes non-negative integer values and

f(y) = Pr(Y = y) = e^{−λ} λ^y / y!

For an iid sample:

L(λ; Y) = ∏_{i=1}^n e^{−λ} λ^{yᵢ} / yᵢ!

Its log is:

l(λ; Y) = Σ_{i=1}^n [−λ + yᵢ ln λ − ln yᵢ!] = −nλ + ln λ Σ_{i=1}^n yᵢ − Σ_{i=1}^n ln yᵢ!


FOCs are

−n + (1/λ) Σ_{i=1}^n yᵢ = 0

so the MLE of λ is:

λ̂ = (1/n) Σ_{i=1}^n yᵢ = ȳ
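
A quick numerical check of this closed form, as a sketch with simulated data: maximizing the average log-likelihood numerically returns the sample mean.

    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(0)
    y = rng.poisson(lam=3.0, size=500)  # simulated sample, lambda_0 = 3

    # Average log-likelihood; the ln y_i! term is dropped (it does not involve lambda).
    def neg_avg_loglik(lam):
        return -np.mean(-lam + y * np.log(lam))

    res = minimize_scalar(neg_avg_loglik, bounds=(1e-6, 20), method="bounded")
    print(res.x, y.mean())  # the two numbers agree: the MLE is ybar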


Probit Model:

Y|X ~ Bernoulli(p), p ≡ Pr(Y = 1|x) = F(x′θ), and F(z) is the standard normal CDF.

The sample (conditional) likelihood function will be:

L(θ; Y) = ∏_{i: yᵢ=1} pᵢ ∏_{i: yᵢ=0} (1 − pᵢ) = ∏_{i=1}^n pᵢ^{yᵢ} (1 − pᵢ)^{1−yᵢ}


Then

l(θ; Y) = Σ_{i=1}^n [yᵢ ln F(xᵢ′θ) + (1 − yᵢ) ln(1 − F(xᵢ′θ))]

FOCs for a local maximum are:

Σ_{i=1}^n (yᵢ − Fᵢ) fᵢ xᵢ / [Fᵢ (1 − Fᵢ)] = 0

with Fᵢ ≡ F(xᵢ′θ) and fᵢ ≡ f(xᵢ′θ). This is a system of K non-linear equations with K unknowns. Moreover, it is not possible to solve for θ̂ and obtain an explicit solution.
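
Since no explicit solution exists, θ̂ is obtained numerically. A minimal sketch (simulated data and variable names of my choosing; quasi-Newton minimization of the negative log-likelihood):

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    rng = np.random.default_rng(1)
    n = 1000
    X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + one regressor
    theta0 = np.array([0.5, -1.0])                         # "true" values for the demo
    y = (rng.uniform(size=n) < norm.cdf(X @ theta0)).astype(float)

    def neg_loglik(theta):
        F = np.clip(norm.cdf(X @ theta), 1e-10, 1 - 1e-10)  # guard the logs
        return -np.sum(y * np.log(F) + (1 - y) * np.log(1 - F))

    theta_hat = minimize(neg_loglik, x0=np.zeros(2), method="BFGS").x
    print(theta_hat)  # close to theta0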


Gaussian regression model:

y = x′β₀ + u,    with u|x ~ N(0, σ²)

or, alternatively,

y|x ~ N(x′β₀, σ²) :  f(y|x) = (1/(σ√(2π))) exp[−(1/2)((y − x′β₀)/σ)²]

Then

l(β, σ²; Y) = −n ln σ − (n/2) ln 2π − (1/(2σ²)) Σ_{i=1}^n (yᵢ − xᵢ′β)²

Any idea what the MLE of β₀ will be?
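
A sketch that lets you check the hinted answer numerically: for β, maximizing l amounts to minimizing Σ (yᵢ − xᵢ′β)², so the ML estimate of β should coincide with OLS (simulated data):

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(2)
    n = 500
    X = np.column_stack([np.ones(n), rng.normal(size=n)])
    beta0, sigma = np.array([1.0, 2.0]), 0.5
    y = X @ beta0 + rng.normal(scale=sigma, size=n)

    def neg_loglik(params):
        beta, log_s = params[:-1], params[-1]  # sigma > 0 imposed via its log
        s = np.exp(log_s)
        r = y - X @ beta
        return n * np.log(s) + 0.5 * np.sum(r**2) / s**2  # constants dropped

    mle = minimize(neg_loglik, x0=np.zeros(3), method="BFGS").x
    ols = np.linalg.lstsq(X, y, rcond=None)[0]
    print(mle[:-1], ols)  # the beta estimates coincide up to solver tolerance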


Asymptotic Properties

We will follow a similar path as we did with other estimation strategies:
- Consistency
- Asymptotic normality
- Estimation of the asymptotic variance
- Asymptotic efficiency
- Invariance (this is new)

But first we need to agree on some regularity conditions.


Setup and Regularity Conditions

At this stage we will simply state them, and discuss them as we go along.
Some are purely technical, but some of them have important intuitive meaning.
1. Zᵢ, i = 1, ..., n, iid ~ f(zᵢ; θ₀).
2. θ ≠ θ₀ ⟹ f(zᵢ; θ) ≠ f(zᵢ; θ₀).
3. θ₀ ∈ Θ, a compact set.
4. ln f(zᵢ; θ) is continuous at each θ ∈ Θ w.p.1.
5. E[sup_{θ∈Θ} |ln f(z; θ)|] < ∞.
These conditions will be used for consistency.


In addition, for asymptotic normality we will add the following:


6. θ₀ is an interior point of Θ.
7. f(z; θ) is twice continuously differentiable and strictly positive in a neighborhood N of θ₀.
8. ∫ sup_{θ∈N} ‖∂f(z; θ)/∂θ‖ dz < ∞ and ∫ sup_{θ∈N} ‖∂²f(z; θ)/∂θ∂θ′‖ dz < ∞.
9. J ≡ E[s(θ₀; Z) s(θ₀; Z)′] exists and is non-singular.
10. E[sup_{θ∈N} ‖H(Z; θ)‖] < ∞.


Quick detour: on bounds.


Recall that

|E(X)| ≤ E(|X|)

so by bounding E(|X|) we are guaranteeing that −∞ < E(X) < ∞.

By considering something like E[sup_θ |ln f(z; θ)|] < ∞ we are bounding the worst-case scenario (the sup of the absolute value).


Consistency
Our starting point is the following normalized version of the MLE:

θ̂ₙ = argmax_θ Qₙ(θ),    Qₙ(θ) ≡ (1/n) Σ_{i=1}^n l(θ, Zᵢ)

For consistency we need to establish the following three results:
1. Qₙ(θ) converges uniformly in probability to Q₀(θ) ≡ E[l(θ, Z)].
2. Q₀(θ) has a unique maximum at θ₀.
3. Qₙ(θ) →ᵘᵖ Q₀(θ) implies argmax_θ Qₙ(θ) (= θ̂ₙ) →p argmax_θ Q₀(θ) (= θ₀).


Intuition
- θ̂ₙ maximizes Qₙ(θ) (definition of MLE).
- Qₙ(θ) → Q₀(θ) (the MLE problem is well defined in the limit).
- θ₀ maximizes Q₀(θ) (the true value solves the limit problem).
- By maximizing Qₙ(θ) we end up maximizing Q₀(θ): convergence of the sequence of functions guarantees convergence of their maximizers (this is the difficult step).

If argmax_θ Qₙ(θ) is seen as a function defined on functions, what property is implied by 3)?


1) Qₙ(θ) converges uniformly in probability to Q₀(θ) ≡ E[l(θ, Z)]

If we fix θ at any point θ* then

Qₙ(θ*) = (1/n) Σ_{i=1}^n l(θ*, Zᵢ) →p E[l(θ*, Z)]

since, by our assumptions (which ones?), l(θ*, Zᵢ) is a sequence of iid random variables that satisfies Kolmogorov's LLN.

This establishes pointwise convergence of Qₙ(θ) to Q₀(θ). But our strategy requires uniform convergence.


Uniform convergence: a sequence of real-valued functions fₙ defined on a set S ⊂ ℝ converges uniformly to a function f on S if for each ε > 0 there is a number N such that

n > N ⟹ |fₙ(x) − f(x)| < ε for all x ∈ S

Intuition: we are using the same ε for the whole domain; eventually we can put fₙ inside the strip f ± ε.

It can be shown that uniform convergence is equivalent to

sup_{x∈S} |f(x) − fₙ(x)| → 0

(see Ross (1980), p. 137).

Uniform convergence in probability: Qₙ(θ) converges uniformly in probability to Q₀(θ) means sup_{θ∈Θ} |Qₙ(θ) − Q₀(θ)| →p 0.
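
A tiny numerical illustration of the distinction (my example, not the slides'): fₙ(x) = xⁿ on S = [0, 1) converges pointwise to 0, but sup_S |fₙ − 0| stays at 1, so the convergence is not uniform; restricted to [0, 1/2] the sup norm does vanish.

    import numpy as np

    x = np.linspace(0, 0.999, 2000)  # grid approximating S = [0, 1)
    for n in (10, 100, 1000):
        fn = x**n
        # sup over [0,1) stays near 1 (no uniform convergence there),
        # while sup over [0, 1/2] collapses to 0 (uniform convergence).
        print(n, fn.max(), fn[x <= 0.5].max())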

Uniform LLN: if Zᵢ are iid, Θ is compact, a(Zᵢ, θ) is a continuous function at each θ ∈ Θ w.p.1, and there is d(Z) with ‖a(z, θ)‖ ≤ d(z) for all θ ∈ Θ and E[d(Z)] < ∞, then E[a(Z, θ)] is continuous and (1/n) Σ_{i=1}^n a(Zᵢ, θ) converges uniformly in probability to E[a(Z, θ)] (Newey and McFadden, 1994, p. 2129).

In our case a(Zᵢ, θ) = l(θ, Zᵢ), which is continuous on the compact set Θ; the required bound is provided by our assumption

E[sup_{θ∈Θ} |ln f(z; θ)|] < ∞

and the desired result follows from the iid assumption.


2) Q₀(θ) has a unique maximum at θ₀.

Information inequality: if θ ≠ θ₀ ⟹ f(z; θ) ≠ f(z; θ₀) and E[|l(θ; Z)|] < ∞ for all θ, then Q₀(θ) = E[l(θ, Z)] has a unique maximum at θ₀.

E[l(θ₀; Z)] − E[l(θ; Z)] = E[l(θ₀; Z) − l(θ; Z)]
  = −E[ln (f(Z; θ)/f(Z; θ₀))]
  > −ln E[f(Z; θ)/f(Z; θ₀)]        (Jensen's inequality!)
  = −ln ∫ (f(z; θ)/f(z; θ₀)) f(z; θ₀) dz
  = −ln 1
  = 0

Important: check where the argument breaks down if the assumptions do not hold.

3) 1) and 2) imply consistency.

Pick any ε > 0. Let us get three inequalities, each holding wpa1.

θ̂ₙ maximizes Qₙ(θ), so
a) Qₙ(θ̂ₙ) > Qₙ(θ₀) − ε/3

Qₙ(θ) converges uniformly to Q₀(θ), so |Qₙ(θ̂ₙ) − Q₀(θ̂ₙ)| < ε/3, hence
b) Q₀(θ̂ₙ) > Qₙ(θ̂ₙ) − ε/3

and |Q₀(θ₀) − Qₙ(θ₀)| < ε/3, hence
c) Qₙ(θ₀) > Q₀(θ₀) − ε/3


Now start with b):

Q₀(θ̂ₙ) > Qₙ(θ̂ₙ) − ε/3
        > Qₙ(θ₀) − 2ε/3      (subtract ε/3 from both sides of a))
        > Q₀(θ₀) − ε          (subtract 2ε/3 from both sides of c))

d) Q₀(θ̂ₙ) > Q₀(θ₀) − ε

Let N be an open subset of Θ containing θ₀. N open implies Θ ∩ Nᶜ closed and bounded: compact in our case.

Since Q₀(θ) is continuous (the uniform limit of a sequence of continuous functions is continuous), then

sup_{θ ∈ Θ∩Nᶜ} Q₀(θ) = Q₀(θ*)

for some θ* ∈ Θ ∩ Nᶜ (continuous functions on compact sets achieve their maximum).

Now, since θ₀ is the unique maximizer of Q₀(θ),

Q₀(θ*) < Q₀(θ₀)

Now pick ε = Q₀(θ₀) − Q₀(θ*). Then, by inequality d), wpa1

Q₀(θ̂ₙ) > Q₀(θ*)

so θ̂ₙ ∈ N wpa1 (if not, Q₀(θ*) would not be the sup over Θ ∩ Nᶜ). Then, using the definition of convergence in probability, since N was chosen arbitrarily,

θ̂ₙ →p θ₀
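
What consistency looks like numerically, as a sketch using the Poisson MLE λ̂ₙ = ȳ from the earlier example: P(|λ̂ₙ − λ₀| > ε) shrinks as n grows.

    import numpy as np

    rng = np.random.default_rng(3)
    lam0 = 3.0
    for n in (10, 100, 1000, 10000):
        # 2000 Monte Carlo replications of the MLE (the sample mean) at each n
        mles = rng.poisson(lam=lam0, size=(2000, n)).mean(axis=1)
        print(n, np.mean(np.abs(mles - lam0) > 0.1))  # -> 0 as n grows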


Asymptotic normality

Asymptotic normality is a bit easier to establish, since we will follow a strategy very similar to what we did with all previous estimators.

But first we need to establish some notation and results:
- Score and hessian
- Score equality
- Information matrix
- Information matrix equality


Score, Hessian and Information

Score: s(θ; Z) ≡ ∂l(θ; Z)/∂θ, a K × 1 vector.

Sample score: s(θ; Z̃) ≡ ∂l(θ; Z̃)/∂θ = Σ_{i=1}^n s(θ; Zᵢ), where Z̃ denotes the whole sample.

Hessian: H(θ; Z) ≡ ∂²l(θ; Z)/∂θ∂θ′, a K × K matrix.

Sample hessian: H(θ; Z̃) ≡ ∂²l(θ; Z̃)/∂θ∂θ′ = Σ_{i=1}^n H(θ; Zᵢ).

Information matrix: J ≡ E[s(θ₀; Z) s(θ₀; Z)′], a K × K matrix.

Score equality: E[s(θ₀; Z)] = 0. (It is kind of a FOC of the likelihood inequality.)

Information equality: E[H(θ₀; Z)] = −J. Note that this implies V[s(θ₀; Z)] = J.

Proof of score equality (the continuous case):

For any θ,

∫ f(z; θ) dz = 1

Taking derivatives on both sides,

d[∫ f(z; θ) dz]/dθ = 0

If it is possible to interchange differentiation and integration:

∫ [df(z; θ)/dθ] dz = 0

The score is a log-derivative, so

s(θ; z) = d ln f(z; θ)/dθ = [df(z; θ)/dθ] / f(z; θ)


hence

df(z; θ)/dθ = s(θ; z) f(z; θ)

Replacing above:

∫ s(θ; z) f(z; θ) dz = 0

When θ = θ₀,

∫ s(θ₀; z) f(z; θ₀) dz = E[s(θ₀; Z)]

So

E[s(θ₀; Z)] = 0


Aside: Interchanging differentiation and integration

Source: Bartle, R., 1966, The Elements of Integration, Wiley, New York


Proof of information equality (the continuous case):

From the previous result:

∫ s(θ; z) f(z; θ) dz = 0

Take derivatives on both sides, use the product rule, and omit arguments in functions to simplify notation (′ denotes differentiation with respect to θ′):

∫ (s f′ + s′ f) dz = 0

∫ s f′ dz + ∫ s′ f dz = 0

From the score equality, f′ = s f; replacing f′ above,

∫ s(θ; z) s(θ; z)′ f(z; θ) dz + ∫ s′(θ; z) f(z; θ) dz = 0

When θ = θ₀:

E[s(θ₀; Z) s(θ₀; Z)′] + ∫ H(θ₀; z) f(z; θ₀) dz = 0

J + E[H(θ₀; Z)] = 0

which implies the desired result.
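
Both equalities can be verified by simulation. A sketch for the Poisson example, where differentiating the log-likelihood gives s(λ; y) = y/λ − 1 and H(λ; y) = −y/λ²:

    import numpy as np

    rng = np.random.default_rng(4)
    lam0 = 3.0
    y = rng.poisson(lam=lam0, size=1_000_000)

    s = y / lam0 - 1    # score of one observation at the true value
    H = -y / lam0**2    # hessian of one observation at the true value

    print(s.mean())      # ~ 0   : score equality, E[s] = 0
    print((s**2).mean()) # ~ 1/3 : J = E[s^2] = 1/lam0
    print(-H.mean())     # ~ 1/3 : information equality, -E[H] = J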



Asymptotic normality

Under our assumptions, wpa1 the MLE satisfies the FOCs

s(θ̂ₙ; Z̃) = 0

Take a first-order Taylor expansion around θ₀:

s(θ̂ₙ; Z̃) = s(θ₀; Z̃) + H(θ̄; Z̃)(θ̂ₙ − θ₀) = 0

where θ̄ is a mean value located between θ̂ₙ and θ₀. (Note that consistency implies θ̄ →p θ₀.)

Now solve:

√n (θ̂ₙ − θ₀) = [−(1/n) H(θ̄; Z̃)]⁻¹ [s(θ₀; Z̃)/√n]

Now we are back in familiar territory: we will show that the first factor does not explode, and that the second is asymptotically normal.


First we will show: [−(1/n) H(θ̄; Z̃)]⁻¹ →p J⁻¹

Preliminary result: if gₙ(θ) is a sequence of random functions that converges uniformly in probability to g₀(θ) on a compact set Θ, g₀(θ) is continuous, and θ̄ₙ →p θ₀, then gₙ(θ̄ₙ) →p g₀(θ₀) (see Ruud (2000), p. 326).

According to our assumptions, (1/n) H(θ; Z̃) = (1/n) Σ_{i=1}^n H(θ; Zᵢ) converges uniformly in probability to E[H(θ; Z)], which, by the previous results, is continuous in θ.

Hence, by the preliminary result, since θ̄ →p θ₀,

(1/n) H(θ̄; Z̃) →p E[H(θ₀; Z)] = −J

by the information equality. Then the result follows from the continuity of matrix inversion and the existence (and non-singularity) of the information matrix.

Now we show:

s(θ₀; Z̃)/√n →d N(0, J)

Start with

s(θ₀; Z̃)/√n = √n · [s(θ₀; Z̃)/n] = √n (1/n) Σ_{i=1}^n s(θ₀, Zᵢ)

In order to apply the CLT we check:
- E[s(θ₀, Zᵢ)] = 0, by the score equality.
- V[s(θ₀, Zᵢ)] = J < ∞.

Then, using the Cramér–Wold device (please fill in the details), we get the desired result.


Collecting results:

√n (θ̂ₙ − θ₀) = [−(1/n) H(θ̄; Z̃)]⁻¹ [s(θ₀; Z̃)/√n]

where [−(1/n) H(θ̄; Z̃)]⁻¹ →p J⁻¹ and s(θ₀; Z̃)/√n →d N(0, J).

Then, by the Slutsky theorem and linearity,

√n (θ̂ₙ − θ₀) →d N(0, J⁻¹ J J⁻¹) = N(0, J⁻¹)
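
A simulation sketch of the result, again in the Poisson example: there J = 1/λ₀, so √n (λ̂ₙ − λ₀) →d N(0, λ₀).

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(5)
    lam0, n, reps = 3.0, 400, 20000
    mles = rng.poisson(lam=lam0, size=(reps, n)).mean(axis=1)  # lambda-hat = ybar
    z = np.sqrt(n) * (mles - lam0)

    print(z.var(), lam0)  # simulated variance ~ J^{-1} = lam0
    # crude normality check: a tail probability of z vs the N(0, lam0) one
    print((z > 2 * np.sqrt(lam0)).mean(), 1 - norm.cdf(2))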


Variance estimation

The asymptotic variance of θ̂ₙ is J⁻¹, with J = V[s(θ₀; Z)]. We will propose three consistent estimators:

1. Inverse of the empirical minus hessian:
[−(1/n) Σ_{i=1}^n H(θ̂ₙ; Zᵢ)]⁻¹
2. Inverse of the empirical variance of the score (OPG):
[(1/n) Σ_{i=1}^n s(θ̂ₙ; Zᵢ) s(θ̂ₙ; Zᵢ)′]⁻¹
3. Inverse of the empirical information matrix:
[J(θ̂ₙ)]⁻¹
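
In the Poisson example all the ingredients have closed forms, s(λ; y) = y/λ − 1, H(λ; y) = −y/λ² and J(λ) = 1/λ, so the three estimators can be compared directly; a sketch:

    import numpy as np

    rng = np.random.default_rng(6)
    y = rng.poisson(lam=3.0, size=2000)
    lam_hat = y.mean()  # the MLE

    s = y / lam_hat - 1
    H = -y / lam_hat**2

    v_hessian = 1 / (-H.mean())    # inverse of empirical minus hessian
    v_opg     = 1 / (s**2).mean()  # inverse of empirical variance of the score
    v_info    = lam_hat            # [J(lam_hat)]^{-1} = 1/(1/lam_hat)

    print(v_hessian, v_opg, v_info)  # all three estimate J^{-1} = lam0, here 3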

Invariance

Invariance: let γ = g(θ), where g(·) is a one-to-one function. Let θ₀ denote the true parameter, so γ₀ = g(θ₀) is the true parameter under the new reparametrization. Then, if θ̂ is the MLE of θ₀,

γ̂ = g(θ̂)

is the MLE of γ₀.

Example: if γ̂ is the MLE of γ₀ = ln(θ₀), how can we get the MLE of θ₀?
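
A quick check of the example, as a sketch in the Poisson model: the MLE of λ₀ is ȳ, so by invariance the MLE of γ₀ = ln(λ₀) is ln(ȳ), and applying g⁻¹ = exp takes us back.

    import numpy as np

    y = np.random.default_rng(7).poisson(lam=3.0, size=1000)
    lam_hat = y.mean()           # MLE of lambda_0
    gamma_hat = np.log(lam_hat)  # by invariance, MLE of ln(lambda_0)
    print(np.exp(gamma_hat), lam_hat)  # inverting g recovers the MLE of lambda_0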


Proof:
Since θ̂ is the MLE,

l(θ̂; z̃) ≥ l(θ; z̃)

for every θ. Since γ = g(θ) is one-to-one,

l(g⁻¹(γ̂); z̃) ≥ l(g⁻¹(γ); z̃)

for every γ, so γ̂ = g(θ̂) maximizes the reparametrized log-likelihood.


MLE and unbiasedness


The invariance property makes the MLE very likely to be biased in many relevant cases.

Consider the following intuition. Suppose θ̂ is the MLE for θ₀ and suppose it is unbiased, so

E(θ̂) = θ₀

By invariance, g(θ̂) is the MLE of g(θ₀). Is g(θ̂) unbiased? In general,

E[g(θ̂)] ≠ g(E(θ̂)) = g(θ₀)

so if the MLE is unbiased for one parametrization, it is very likely to be biased for most other parametrizations.

MLE and Efficiency

Let θ̃ be any unbiased estimator of θ₀. An important result is the following.

Cramér–Rao inequality: V(θ̃) − (nJ)⁻¹ is psd.

This provides a lower bound for the variance of unbiased estimators.


Proof: the single parameter case (K = 1).

For any two random variables X and Y,

Cov(X, Y)² ≤ V(X) V(Y)

since the squared correlation is at most 1. Then

V(X) ≥ Cov(X, Y)² / V(Y)

We will take X = θ̃ and Y = s(θ₀; Z̃), the sample score. It is immediate to check that

V(s(θ₀; Z̃)) = V(Σ_{i=1}^n s(θ₀; Zᵢ)) = nJ

So... what do we need to show to finish the proof?



We have

V(θ̃) ≥ Cov(θ̃, s(θ₀; Z̃))² / (nJ)

so we need to show Cov(θ̃, s(θ₀; Z̃)) = 1.

(Sketch:) Since E[s(θ₀; Z̃)] = 0, Cov(θ̃, s(θ₀; Z̃)) = E[θ̃ s(θ₀; Z̃)]. Writing s for the sample score and f(z̃; θ) for the joint density of the sample,

E(θ̃ s) = ∫ θ̃ s f(z̃; θ₀) dz̃
        = ∫ θ̃ [(∂f(z̃; θ)/∂θ)/f(z̃; θ)]|_{θ=θ₀} f(z̃; θ₀) dz̃
        = ∫ θ̃ ∂f(z̃; θ)/∂θ |_{θ=θ₀} dz̃
        = ∂[∫ θ̃ f(z̃; θ) dz̃]/∂θ |_{θ=θ₀}
        = ∂E(θ̃)/∂θ |_{θ=θ₀}
        = 1

since E(θ̃) = θ for all θ (unbiasedness).
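
A numerical sanity check, as a sketch: in the Poisson example nJ = n/λ₀, so the CR bound is λ₀/n, and ȳ (which is unbiased for λ₀) attains it exactly.

    import numpy as np

    rng = np.random.default_rng(8)
    lam0, n = 3.0, 50
    mles = rng.poisson(lam=lam0, size=(100000, n)).mean(axis=1)  # ybar, unbiased
    print(mles.var())  # simulated variance of ybar
    print(lam0 / n)    # Cramer-Rao bound (nJ)^{-1}: attained here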

MLE and efficiency


- The CR bound applies to unbiased estimators. MLE is likely to be biased.
- MLE estimators are asymptotically normal, centered around the true parameter, with normalized variance equal to the CR lower bound for unbiased estimators.
- Problem: the class of consistent, asymptotically normal estimators includes some extreme (and highly unusual) cases that can improve upon the CR bound (the so-called superefficient estimators).
- Rao (1963): the MLE is efficient (minimum variance) in the class of consistent and uniformly asymptotically normal (CUAN) estimators.
- CUAN estimators: θ̃ is CUAN for θ₀ if it is consistent and √n (θ̃ − θ₀) converges in distribution to a normal rv uniformly over compact subsets of Θ.