
Zheqi Zhang

California State University, Fullerton


Math 435

Spring 2014
Assignment 8

Section 9.7
72. Suppose that $Y_1, Y_2, \ldots, Y_n \sim \text{Pois}(\lambda)$.

a. Find the MLE for $\lambda$.

$$L(\lambda) = L(\lambda \mid y_1, y_2, \ldots, y_n) = \prod_{i=1}^{n} f(y_i \mid \lambda) = \prod_{i=1}^{n} \frac{\lambda^{y_i} e^{-\lambda}}{y_i!}\, I_{[y_i = 0,1,2,\ldots]} = \frac{\lambda^{\sum_{i=1}^{n} y_i}\, e^{-n\lambda}}{\prod_{i=1}^{n} y_i!} \prod_{i=1}^{n} I_{[y_i = 0,1,2,\ldots]}$$

$$\ln[L(\lambda)] = \sum_{i=1}^{n} y_i \ln(\lambda) - n\lambda - \ln\!\Big(\prod_{i=1}^{n} y_i!\Big)$$

$$\frac{d\ln[L(\lambda)]}{d\lambda} = \frac{1}{\lambda}\sum_{i=1}^{n} y_i - n$$

Let $\frac{d\ln[L(\lambda)]}{d\lambda} = 0$ and solve for $\lambda$; we have

$$\lambda = \frac{1}{n}\sum_{i=1}^{n} y_i.$$

The second derivative is

$$\frac{d^2\ln[L(\lambda)]}{d\lambda^2} = -\frac{1}{\lambda^2}\sum_{i=1}^{n} y_i.$$

Then, substituting $\lambda = \frac{1}{n}\sum_{i=1}^{n} y_i$, we have

$$\frac{d^2\ln[L(\lambda)]}{d\lambda^2}\bigg|_{\lambda = \frac{1}{n}\sum y_i} = -\frac{n^2}{\sum_{i=1}^{n} y_i} < 0.$$

Thus $\lambda = \frac{1}{n}\sum_{i=1}^{n} y_i$ is the maximizer. Therefore,

$$\hat{\lambda}_{MLE} = \frac{1}{n}\sum_{i=1}^{n} Y_i = \bar{Y}.$$
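As a quick numerical sanity check of part (a) (a hypothetical illustration, not part of the assignment), a grid search over the Poisson log-likelihood on simulated data should peak at the sample mean:

```python
import math
import random

random.seed(0)

# Draw from Poisson(lambda=3) by CDF inversion (hypothetical data for illustration).
def poisson_draw(lam):
    u, k, p, cdf = random.random(), 0, math.exp(-lam), math.exp(-lam)
    while cdf < u:
        k += 1
        p *= lam / k
        cdf += p
    return k

y = [poisson_draw(3.0) for _ in range(500)]

# Log-likelihood up to the constant -ln(prod y_i!), which does not affect the argmax.
def loglik(lam):
    return sum(y) * math.log(lam) - len(y) * lam

ybar = sum(y) / len(y)  # the closed-form MLE derived above

# The log-likelihood is unimodal, so the grid argmax lands next to ybar.
grid = [0.01 * k for k in range(1, 1001)]
lam_hat = max(grid, key=loglik)
assert abs(lam_hat - ybar) <= 0.01  # within one grid step
```

The derivative argument above gives the maximizer in closed form; the grid search is only a brute-force confirmation.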

b. Find $E(\hat{\lambda})$ and $\text{Var}(\hat{\lambda})$.

$$E(\hat{\lambda}_{MLE}) = E\Big(\frac{1}{n}\sum_{i=1}^{n} Y_i\Big) = \frac{n\lambda}{n} = \lambda$$

$$\text{Var}(\hat{\lambda}_{MLE}) = \text{Var}\Big(\frac{1}{n}\sum_{i=1}^{n} Y_i\Big) = \frac{1}{n^2}\,\text{Var}\Big(\sum_{i=1}^{n} Y_i\Big) = \frac{n\lambda}{n^2} = \frac{\lambda}{n}.$$

c. Show that the estimator found in (a) is consistent for $\lambda$.

From part (b), $\hat{\lambda}_{MLE}$ is unbiased for $\lambda$, and

$$\lim_{n \to \infty} \text{Var}(\hat{\lambda}_{MLE}) = \lim_{n \to \infty} \frac{\lambda}{n} = 0.$$

Thus, by Theorem 9.1, $\hat{\lambda}_{MLE}$ is consistent for $\lambda$.

d. What is the MLE for $P(Y = 0) = e^{-\lambda}$?

By the invariance property of the MLE, and since $\hat{\lambda}_{MLE} = \bar{Y}$, we have

$$\hat{P}(Y = 0) = e^{-\hat{\lambda}} = e^{-\bar{Y}}.$$
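A small simulation (hypothetical, for illustration only) shows the invariance-based estimate $e^{-\bar{Y}}$ tracking both the true zero probability and the empirical proportion of zeros:

```python
import math
import random

random.seed(1)

# Draw from Poisson(lambda) by CDF inversion (hypothetical data for illustration).
def poisson_draw(lam):
    u, k, p, cdf = random.random(), 0, math.exp(-lam), math.exp(-lam)
    while cdf < u:
        k += 1
        p *= lam / k
        cdf += p
    return k

lam = 2.0
y = [poisson_draw(lam) for _ in range(20000)]

mle_p0 = math.exp(-sum(y) / len(y))   # invariance property: exp(-ybar)
empirical_p0 = y.count(0) / len(y)    # observed fraction of zeros
true_p0 = math.exp(-lam)              # e^{-2}

assert abs(mle_p0 - true_p0) < 0.02
assert abs(empirical_p0 - true_p0) < 0.02
```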
74. Let $Y_1, Y_2, \ldots, Y_n$ denote a random sample from the density function given by

$$f(y \mid \theta) = \Big(\frac{1}{\theta}\Big) r y^{r-1} e^{-y^r/\theta}\, I_{[y > 0]}, \quad \theta > 0,$$

where $r$ is a known positive constant.

a. Find a sufficient statistic for $\theta$.

$$L(\theta) = L(\theta \mid y_1, y_2, \ldots, y_n) = \prod_{i=1}^{n} f(y_i \mid \theta) = \frac{1}{\theta^n}\, r^n \Big(\prod_{i=1}^{n} y_i^{r-1}\Big) e^{-\frac{1}{\theta}\sum_{i=1}^{n} y_i^r} \prod_{i=1}^{n} I_{[y_i > 0]}$$

Let

$$g\Big(\sum_{i=1}^{n} y_i^r,\ \theta\Big) = \frac{1}{\theta^n}\, r^n\, e^{-\frac{1}{\theta}\sum_{i=1}^{n} y_i^r} \quad \text{and} \quad h(y_1, y_2, \ldots, y_n) = \prod_{i=1}^{n} y_i^{r-1} I_{[y_i > 0]}.$$

Thus, by the factorization criterion, $\sum_{i=1}^{n} Y_i^r$ is sufficient for $\theta$.

b. Find the MLE for $\theta$.

$$\ln[L(\theta)] = -n\ln(\theta) - \frac{1}{\theta}\sum_{i=1}^{n} y_i^r + n\ln(r) + (r-1)\ln\!\Big(\prod_{i=1}^{n} y_i\Big)$$

$$\frac{d\ln[L(\theta)]}{d\theta} = -\frac{n}{\theta} + \frac{1}{\theta^2}\sum_{i=1}^{n} y_i^r$$

Let $\frac{d\ln[L(\theta)]}{d\theta} = 0$ and solve for $\theta$; we have

$$\theta = \frac{1}{n}\sum_{i=1}^{n} y_i^r.$$

The second derivative is

$$\frac{d^2\ln[L(\theta)]}{d\theta^2} = \frac{n}{\theta^2} - \frac{2}{\theta^3}\sum_{i=1}^{n} y_i^r.$$

Then, substituting $\theta = \frac{1}{n}\sum_{i=1}^{n} y_i^r$, we have

$$\frac{d^2\ln[L(\theta)]}{d\theta^2}\bigg|_{\theta = \frac{1}{n}\sum y_i^r} = -\frac{n^3}{\big(\sum_{i=1}^{n} y_i^r\big)^2} < 0.$$

Thus $\theta = \frac{1}{n}\sum_{i=1}^{n} y_i^r$ is the maximizer. Therefore,

$$\hat{\theta}_{MLE} = \frac{1}{n}\sum_{i=1}^{n} Y_i^r = \overline{Y^r}.$$
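To check part (b) numerically (a hypothetical sketch, not part of the assignment): since $Y^r \sim \text{Exp}(\theta)$ under this density (shown in part (c) below, in the solution's own derivation), we can simulate $Y = (-\theta \ln U)^{1/r}$ and confirm that the grid argmax of the log-likelihood matches the closed-form MLE $\frac{1}{n}\sum y_i^r$:

```python
import math
import random

random.seed(2)

theta, r, n = 2.0, 3.0, 5000

# If Y has the stated density, Y^r is exponential with mean theta,
# so Y = (-theta * ln U)^(1/r) for U ~ Uniform(0, 1).
y = [(-theta * math.log(random.random())) ** (1.0 / r) for _ in range(n)]

theta_hat = sum(yi ** r for yi in y) / n  # closed-form MLE derived above

# Log-likelihood, dropping terms that do not involve theta.
s = sum(yi ** r for yi in y)
def loglik(t):
    return -n * math.log(t) - s / t

# The log-likelihood is unimodal in theta, so the grid argmax lands next
# to theta_hat.
grid = [0.01 * k for k in range(50, 501)]
assert abs(max(grid, key=loglik) - theta_hat) <= 0.01
assert abs(theta_hat - theta) < 0.15  # SD of theta_hat is theta/sqrt(n)
```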

c. Is the estimator in part (b) an MVUE for $\theta$?

We need to find the distribution of $\sum_{i=1}^{n} Y_i^r$.

$$F_{Y_i^r}(y) = P(Y_i^r \le y) = P(Y_i \le y^{1/r}) = F_{Y_i}(y^{1/r})$$

$$f_{Y_i^r}(y) = \frac{d}{dy} F_{Y_i^r}(y) = \frac{d}{dy} F_{Y_i}(y^{1/r}) = f_{Y_i}(y^{1/r}) \cdot \frac{1}{r}\, y^{\frac{1}{r}-1} = \Big(\frac{1}{\theta}\Big) r \big(y^{1/r}\big)^{r-1} e^{-y/\theta} \cdot \frac{1}{r}\, y^{\frac{1}{r}-1} = \frac{1}{\theta}\, e^{-y/\theta}.$$

Thus $Y_i^r \sim \text{Exp}(\theta)$, and $\sum_{i=1}^{n} Y_i^r$ is the sum of $n$ i.i.d. exponential random variables with mean $\theta$. Therefore, we have

$$\sum_{i=1}^{n} Y_i^r \sim \text{Gamma}(\alpha = n, \beta = \theta),$$

so

$$E(\hat{\theta}_{MLE}) = E\Big(\frac{1}{n}\sum_{i=1}^{n} Y_i^r\Big) = \frac{n\theta}{n} = \theta.$$

Therefore, the estimator $\hat{\theta}_{MLE} = \frac{1}{n}\sum_{i=1}^{n} Y_i^r = \overline{Y^r}$ is unbiased and is a function of the sufficient statistic $\sum_{i=1}^{n} Y_i^r$, so it is an MVUE for $\theta$.

79. A random sample of 100 voters selected from a large population revealed 30 favoring
candidate A, 38 favoring candidate B, and 32 favoring candidate C. Find MLEs for the
proportions of voters in the population favoring candidates A, B, and C, respectively.
Estimate the difference between the fractions favoring A and B and place a 2-standard-deviation bound on the error of estimation.
We have $\sum_{i=1}^{3} p_i = 1$, $\sum_{i=1}^{3} n_i = n$, and $p_3 = 1 - p_1 - p_2$, so

$$f(p_1, p_2, p_3 \mid n_1, n_2, n_3) = \frac{n!}{n_1!\, n_2!\, n_3!}\, p_1^{n_1} p_2^{n_2} (1 - p_1 - p_2)^{n_3}$$

$$\ln[f(p_1, p_2, p_3 \mid n_1, n_2, n_3)] = \ln\!\Big(\frac{n!}{n_1!\, n_2!\, n_3!}\Big) + n_1 \ln(p_1) + n_2 \ln(p_2) + n_3 \ln(1 - p_1 - p_2)$$

$$\frac{\partial}{\partial p_1} \ln[f(p_1, p_2, p_3 \mid n_1, n_2, n_3)] = \frac{n_1}{p_1} - \frac{n_3}{1 - p_1 - p_2} \quad (1)$$

$$\frac{\partial}{\partial p_2} \ln[f(p_1, p_2, p_3 \mid n_1, n_2, n_3)] = \frac{n_2}{p_2} - \frac{n_3}{1 - p_1 - p_2} \quad (2)$$

Setting (1) and (2) equal to zero, then multiplying (1) by $p_1(1 - p_1 - p_2)$ and (2) by $p_2(1 - p_1 - p_2)$, we obtain

$$(1 - p_1 - p_2)\, n_1 - p_1 n_3 = 0 \quad (3)$$
$$(1 - p_1 - p_2)\, n_2 - p_2 n_3 = 0 \quad (4)$$

Adding (3) and (4) and simplifying gives

$$p_1 + p_2 = \frac{n_1 + n_2}{n} \quad (5)$$

Substituting (5) into (3) gives

$$p_1 = \frac{n_1}{n},$$

and similarly, we have

$$p_2 = \frac{n_2}{n}, \quad p_3 = \frac{n_3}{n}.$$

To show these are the maximum likelihood estimates, we take the second derivatives:

$$\frac{\partial^2}{\partial p_1^2} \ln[f(p_1, p_2, p_3 \mid n_1, n_2, n_3)] = -\frac{n_1}{p_1^2} - \frac{n_3}{(1 - p_1 - p_2)^2} < 0,$$

and similarly for $p_2$. Thus the critical point is a maximum. Finally, we have

$$\hat{p}_1 = \frac{N_1}{n}, \quad \hat{p}_2 = \frac{N_2}{n}, \quad \hat{p}_3 = \frac{N_3}{n}.$$

Given that $\hat{p}_1 = 0.30$, $\hat{p}_2 = 0.38$, $\hat{p}_3 = 0.32$, and since $\hat{p}_1, \hat{p}_2, \hat{p}_3$ are MLEs, we can use $\hat{p}_1 - \hat{p}_2$ to estimate the population difference $p_1 - p_2$. Using $\text{Cov}(n_1, n_2) = -n p_1 p_2$ for the multinomial,

$$\text{Var}(\hat{p}_1 - \hat{p}_2) = \text{Var}(\hat{p}_1) + \text{Var}(\hat{p}_2) - 2\,\text{Cov}(\hat{p}_1, \hat{p}_2) = \frac{1}{n^2}\text{Var}(n_1) + \frac{1}{n^2}\text{Var}(n_2) - \frac{2}{n^2}\text{Cov}(n_1, n_2)$$

$$= \frac{p_1 q_1}{n} + \frac{p_2 q_2}{n} + \frac{2 p_1 p_2}{n} \approx \frac{0.30 \times 0.70 + 0.38 \times 0.62 + 2 \times 0.30 \times 0.38}{100} \approx 0.006736.$$

Thus, the two-standard-deviation bound is

$$(\hat{p}_1 - \hat{p}_2) \pm 2\hat{\sigma} = -0.08 \pm 2\sqrt{0.006736} = (-0.2441, 0.0841).$$
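The plug-in arithmetic above can be reproduced in a few lines (the variable names are mine; the counts are from the problem statement):

```python
import math

n = 100
n1, n2 = 30, 38  # voters favoring A and B, respectively

p1_hat, p2_hat = n1 / n, n2 / n
diff = p1_hat - p2_hat  # estimate of p1 - p2

# Var(p1_hat - p2_hat) = p1*q1/n + p2*q2/n + 2*p1*p2/n, with MLEs plugged in.
var = (p1_hat * (1 - p1_hat) + p2_hat * (1 - p2_hat)
       + 2 * p1_hat * p2_hat) / n
bound = 2 * math.sqrt(var)

assert abs(diff - (-0.08)) < 1e-9
assert abs(var - 0.006736) < 1e-9
assert abs(diff - bound - (-0.2441)) < 1e-3  # lower endpoint
assert abs(diff + bound - 0.0841) < 1e-3     # upper endpoint
```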
Section 9.8
90. Suppose $Y_1, Y_2, \ldots, Y_n \sim \text{Exp}(\theta)$. Find a $100(1-\alpha)\%$ confidence interval for $t(\theta) = \theta^2$.

$$t(\theta) = \theta^2, \quad \frac{\partial t(\theta)}{\partial \theta} = 2\theta, \quad \Big[\frac{\partial t(\theta)}{\partial \theta}\Big]^2 = 4\theta^2$$

$$f(y \mid \theta) = \frac{1}{\theta}\, e^{-y/\theta}$$

$$\ln[f(y \mid \theta)] = -\ln(\theta) - \frac{y}{\theta}$$

$$\frac{\partial \ln[f(y \mid \theta)]}{\partial \theta} = -\frac{1}{\theta} + \frac{y}{\theta^2}$$

$$\frac{\partial^2 \ln[f(y \mid \theta)]}{\partial \theta^2} = \frac{1}{\theta^2} - \frac{2y}{\theta^3}$$

$$E\Big(\frac{\partial^2 \ln[f(y \mid \theta)]}{\partial \theta^2}\Big) = \frac{1}{\theta^2} - \frac{2E(Y)}{\theta^3} = \frac{1}{\theta^2} - \frac{2\theta}{\theta^3} = -\frac{1}{\theta^2}$$

In addition, we can easily find that $\hat{\theta}_{MLE} = \bar{Y}$ by maximizing the log-likelihood of the exponential distribution, as follows:

$$L(\theta) = L(\theta \mid y_1, y_2, \ldots, y_n) = \frac{1}{\theta^n}\, e^{-\frac{1}{\theta}\sum_{i=1}^{n} y_i}$$

$$\ln[L(\theta)] = -n\ln(\theta) - \frac{1}{\theta}\sum_{i=1}^{n} y_i$$

$$\frac{\partial \ln[L(\theta)]}{\partial \theta} = -\frac{n}{\theta} + \frac{1}{\theta^2}\sum_{i=1}^{n} y_i$$

Let $\frac{\partial \ln[L(\theta)]}{\partial \theta} = 0$ and solve for $\theta$; we get

$$\theta = \frac{1}{n}\sum_{i=1}^{n} y_i.$$

That is, we have $\hat{\theta}_{MLE} = \frac{1}{n}\sum_{i=1}^{n} Y_i = \bar{Y}$.

Next, we need to show

$$\frac{\hat{\theta}^2 - \theta^2}{\sqrt{\dfrac{4\theta^2}{n\,(1/\theta^2)}}} = \frac{\hat{\theta}^2 - \theta^2}{\sqrt{4\theta^4/n}} \to Z,$$

so that we can evaluate at $\hat{\theta}$ to calculate the confidence interval.

We have that $\hat{\theta}_{MLE}$ is consistent for $\theta$; then $\hat{\theta}^2$ is consistent for $\theta^2$ by the invariance property of the MLE. In addition, we know that

$$\frac{\hat{\theta}^2 - \theta^2}{\sqrt{4\theta^4/n}} \to Z.$$

By Theorem 9.3, let $U_n = \dfrac{\hat{\theta}^2 - \theta^2}{\sqrt{4\theta^4/n}} \to Z$ in distribution and $W_n = \dfrac{\theta^2}{\hat{\theta}^2}$, which converges to 1 in probability. Thus, we have

$$U_n W_n = \frac{\hat{\theta}^2 - \theta^2}{\sqrt{4\hat{\theta}^4/n}} \to Z.$$

Finally, the $100(1-\alpha)\%$ confidence interval is

$$t(\hat{\theta}) \pm z_{\alpha/2} \sqrt{\Big[\frac{\partial t(\theta)}{\partial \theta}\Big]^2 \bigg/ \Big(\!-nE\Big(\frac{\partial^2 \ln[f(y \mid \theta)]}{\partial \theta^2}\Big)\Big)}\,\Bigg|_{\theta = \hat{\theta}} = \hat{\theta}^2 \pm z_{\alpha/2}\, \frac{2\hat{\theta}^2}{\sqrt{n}} = \bar{Y}^2 \pm z_{\alpha/2}\, \frac{2\bar{Y}^2}{\sqrt{n}}.$$
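For a concrete instance of this interval (a hypothetical sketch on simulated data, with $z_{\alpha/2} = 1.96$ for 95%):

```python
import math
import random

random.seed(3)

theta, n = 2.0, 400
# Exponential(mean=theta) sample via inversion (hypothetical data).
y = [-theta * math.log(random.random()) for _ in range(n)]

theta_hat = sum(y) / n  # the MLE of theta is the sample mean
z = 1.96                # z_{alpha/2} for a 95% interval
half_width = z * 2 * theta_hat ** 2 / math.sqrt(n)
lo, hi = theta_hat ** 2 - half_width, theta_hat ** 2 + half_width

assert 1.0 < theta_hat < 3.0  # ybar has SD theta/sqrt(n) = 0.1
assert lo < theta_hat ** 2 < hi
```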

Supplementary Exercise
100. Let $Y_i \sim \text{Pois}(\lambda)$ with mean $\lambda$ and define

$$W_n = \frac{\bar{Y} - \lambda}{\sqrt{\bar{Y}/n}}.$$

a. Show that the distribution of $W_n$ converges to a standard normal distribution.

By the Central Limit Theorem, we have

$$U_n = \frac{\bar{Y} - \lambda}{\sqrt{\lambda/n}} \to Z.$$

Also, since we found that $\hat{\lambda}_{MLE} = \bar{Y}$ in Exercise 9.72, we know that $\bar{Y}$ is consistent for $\lambda$. Then, we have

$$\frac{\sqrt{\lambda/n}}{\sqrt{\bar{Y}/n}} = \sqrt{\frac{\lambda}{\bar{Y}}}$$

converges to 1 in probability. Finally, by Theorem 9.3, we have

$$W_n = U_n \sqrt{\frac{\lambda}{\bar{Y}}} = \frac{\bar{Y} - \lambda}{\sqrt{\bar{Y}/n}} \to Z.$$

b. Use $W_n$ and the result in part (a) to derive the formula for an approximate 95% confidence interval for $\lambda$.

We have $W_n \to Z$, so we can substitute $\bar{Y}$ for $\lambda$ in the standard error to calculate the confidence interval. Thus, the confidence interval is

$$\hat{\lambda}_{MLE} \pm 1.96 \sqrt{\frac{\hat{\lambda}_{MLE}}{n}}.$$
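Computing this Wald-type interval on a simulated sample (a hypothetical sketch; the variable names are mine):

```python
import math
import random

random.seed(4)

# Draw from Poisson(lambda) by CDF inversion (hypothetical data).
def poisson_draw(lam):
    u, k, p, cdf = random.random(), 0, math.exp(-lam), math.exp(-lam)
    while cdf < u:
        k += 1
        p *= lam / k
        cdf += p
    return k

lam, n = 5.0, 1000
y = [poisson_draw(lam) for _ in range(n)]

lam_hat = sum(y) / n  # MLE of lambda is the sample mean
half = 1.96 * math.sqrt(lam_hat / n)
ci = (lam_hat - half, lam_hat + half)

assert ci[0] < lam_hat < ci[1]
assert abs(lam_hat - lam) < 0.5  # ybar has SD sqrt(5/1000), about 0.07
```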

You might also like