
Statistics Examples Sheet 1

(1) State and prove the Rao-Blackwell theorem. Consider X1, . . . , Xn iid with Xi ∼ U,
where U has density function

fU(x; θ) = θ(−log θ)^x / x!   for x ∈ {0, 1, 2, . . .} and 0 < θ < 1.

Find a one-dimensional sufficient statistic T for θ and show that θ̃ = 1{X1 =0} is an
unbiased estimator of θ. Find another unbiased estimator θ̂ which is a function of the
sufficient statistic T and has a smaller variance than θ̃. You may use the fact that
Σi Xi ∼ Po(−n log θ).
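A Monte Carlo sketch in Python (an aside, not part of the sheet; numpy assumed): the model above is exactly Poisson(−log θ), so both estimators can be simulated directly. The closed form θ̂ = ((n − 1)/n)^T used below is one candidate obtained by conditioning θ̃ on T, stated here as an assumption to be verified in the exercise.

```python
# Question (1) check: the pmf theta*(-log theta)^x / x! is Poisson(mu)
# with mu = -log(theta), so P(X1 = 0) = theta.  Compare theta_tilde with
# the conjectured Rao-Blackwellised estimator ((n-1)/n)^T, T = sum_i X_i.
import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 0.3, 10, 200_000
mu = -np.log(theta)

x = rng.poisson(mu, size=(reps, n))
theta_tilde = (x[:, 0] == 0).astype(float)     # indicator estimator
T = x.sum(axis=1)                              # sufficient statistic
theta_hat = ((n - 1) / n) ** T                 # conjectured improvement

for name, est in [("theta_tilde", theta_tilde), ("theta_hat", theta_hat)]:
    print(f"{name}: mean = {est.mean():.4f}, var = {est.var():.5f}")
# Both means should sit near theta = 0.3, with theta_hat's variance smaller.
```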

(2) Consider X1, . . . , Xn iid with Xi ∼ U, where U has density function fU(x; θ) depending
on θ ∈ R. What does it mean to say T is a sufficient statistic for θ? Prove that if the joint
density of X1, . . . , Xn satisfies the factorisation criterion for a statistic T, then T is
sufficient for θ.
Let U = U[θ, 2θ]. Show that θ̃ = (2/3)X1 is an unbiased estimator of θ and that
T = (mini Xi, maxi Xi) is a minimal sufficient statistic for θ. Find an unbiased estimator
of θ which is a function of T and has smaller variance than θ̃. What happens when
U = U[−√θ, √θ]? [Hint: Show that θ̂ = 3X1² is an unbiased estimator for θ.]
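A simulation sketch for the first model, again an aside with numpy assumed: the estimator (min + max)/3 used below is one natural candidate function of T, since E[mini Xi] + E[maxi Xi] = 3θ under U[θ, 2θ]; treat it as a conjecture to check against your own answer.

```python
# Question (2) check: under U[theta, 2*theta], E[X1] = 3*theta/2, so
# theta_tilde = (2/3)*X1 is unbiased; (min + max)/3 is a candidate
# unbiased improvement depending only on T = (min, max).
import numpy as np

rng = np.random.default_rng(1)
theta, n, reps = 2.0, 8, 200_000

x = rng.uniform(theta, 2 * theta, size=(reps, n))
theta_tilde = (2 / 3) * x[:, 0]
theta_T = (x.min(axis=1) + x.max(axis=1)) / 3  # function of T only

for name, est in [("(2/3)X1", theta_tilde), ("(min+max)/3", theta_T)]:
    print(f"{name}: mean = {est.mean():.4f}, var = {est.var():.5f}")
# Both means should be near theta = 2.0; the variance of the second is
# dramatically smaller.
```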

(3) Let X1, . . . , Xn be iid with Xi ∼ Exp(θ). Find a minimal sufficient statistic T and write down its
distribution. Show that the maximum likelihood estimator θ̂ of θ is a function of T ,
and show that it is biased, but asymptotically unbiased. Find an injective function h
on (0, ∞) such that, writing ψ = h(θ), the maximum likelihood estimator ψ̂ of the new
parameter ψ is unbiased.
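A numerical aside (numpy assumed; rate parametrisation, so E[Xi] = 1/θ): since T = Σi Xi ∼ Gamma(n, θ), one has E[n/T] = nθ/(n − 1), so the MLE's bias factor n/(n − 1) should be visible by simulation, and h(θ) = 1/θ is one choice making the new MLE unbiased.

```python
# Question (3) check: the MLE theta_hat = n/T is biased by the factor
# n/(n-1); under psi = h(theta) = 1/theta the MLE becomes T/n = X_bar,
# which is unbiased for psi.
import numpy as np

rng = np.random.default_rng(2)
theta, n, reps = 1.5, 5, 400_000

x = rng.exponential(1 / theta, size=(reps, n))  # numpy's scale = 1/rate
T = x.sum(axis=1)
print("E[n/T] ~", (n / T).mean())               # about theta*n/(n-1) = 1.875
print("E[T/n] ~", (T / n).mean())               # about 1/theta = 0.6667
```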

(4) Suppose X1 , . . . , Xn are independent random variables with distribution Bin(1, p).
Show that a sufficient statistic for θ = (1 − p)² is T(X) = Σi Xi and that the MLE for θ
is (1 − T/n)². Show that the MLE is a biased estimator for θ. Let θ̃ = 1{X1+X2=0}.
Show that θ̃ is unbiased for θ. Use the Rao-Blackwell theorem to find a function of T
which is an unbiased estimator for θ.
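A simulation aside (numpy assumed): the closed form (n − T)(n − T − 1)/(n(n − 1)) below is a conjectured Rao-Blackwellisation, coming from P(X1 = X2 = 0 | T = t) being hypergeometric; it is offered as an assumption to verify against your own answer.

```python
# Question (4) check: theta = (1-p)^2; the MLE (1 - T/n)^2 is biased
# upwards by Var(T/n) = p(1-p)/n, while the conjectured Rao-Blackwellised
# estimator (n-T)(n-T-1)/(n(n-1)) should be exactly unbiased.
import numpy as np

rng = np.random.default_rng(3)
p, n, reps = 0.4, 12, 400_000
theta = (1 - p) ** 2

x = rng.binomial(1, p, size=(reps, n))
T = x.sum(axis=1)
mle = (1 - T / n) ** 2
rb = (n - T) * (n - T - 1) / (n * (n - 1))

print("theta            =", theta)
print("E[MLE]           ~", mle.mean())   # slightly above theta
print("E[Rao-Blackwell] ~", rb.mean())    # near theta
```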
(5) Let X1, . . . , Xn be iid with Xi ∼ N(µ, σ²) where µ and σ² are unknown. Write down
the log-likelihood function l(µ, σ²) and find a pair of sufficient statistics for (µ, σ²). Find
the maximum likelihood estimator (µ̂, σ̂²) and construct a 95% confidence interval for µ.
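The interval asked for is the standard Student-t one; a minimal sketch (scipy assumed, with made-up data) of how it is computed:

```python
# Question (5): with sigma^2 unknown, the 95% interval for mu is
# x_bar +/- t_{n-1}(0.975) * s / sqrt(n), s^2 the unbiased sample variance.
import numpy as np
from scipy.stats import t

rng = np.random.default_rng(4)
x = rng.normal(10.0, 2.0, size=25)       # illustrative sample

n, x_bar, s = len(x), x.mean(), x.std(ddof=1)
half = t.ppf(0.975, df=n - 1) * s / np.sqrt(n)
print(f"95% CI for mu: ({x_bar - half:.3f}, {x_bar + half:.3f})")
```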

(6) Let X1 , . . . , Xn be iid with Xi ∼ U[0, θ]. Find the maximum likelihood estimator
θ̂ of θ. Show that the distribution of R(X, θ) = θ̂/θ does not depend on θ, and use
R(X, θ) to find a 100(1 − α)% confidence interval for θ for 0 < α < 1.
The lengths (in minutes) of calls to a call centre may be modelled as iid exponentially
distributed random variables, and n such call lengths are observed. The original sample
is lost, but the data manager has noted down n and t where t is the total length of the
n calls in minutes. Derive a 95% confidence interval for the probability that a call is
longer than 2 minutes if n = 50 and t = 105.3.
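A numerical sketch of the call-centre part (scipy assumed; rate parametrisation, so P(call > 2) = e^(−2θ)): the pivot 2θT ∼ χ²_{2n} gives an exact interval for θ, which transfers to e^(−2θ) by monotonicity. The numbers printed should come out at roughly (0.29, 0.49).

```python
# Question (6), second part: 2*theta*T ~ chi^2_{2n} when X_i ~ Exp(theta)
# (rate theta), so an exact 95% interval for theta follows from chi^2
# quantiles, and P(call > 2) = exp(-2*theta) is monotone decreasing.
import numpy as np
from scipy.stats import chi2

n, t = 50, 105.3
lo, hi = chi2.ppf([0.025, 0.975], df=2 * n) / (2 * t)
print(f"theta in ({lo:.4f}, {hi:.4f})")
# endpoints swap under the decreasing map theta -> exp(-2*theta):
print(f"P(call > 2) in ({np.exp(-2 * hi):.3f}, {np.exp(-2 * lo):.3f})")
```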

(7) Suppose that X1 ∼ N(θ1 , 1) and X2 ∼ N(θ2 , 1) independently, where θ1 and θ2 are
unknown. Show that (θ1 − X1)² + (θ2 − X2)² ∼ χ²₂, and that this is the same distribution
as Exp(1/2). Show that both the square S and the circle C in R², given by
S = {(θ1, θ2) : |θ1 − X1| ≤ 2.236, |θ2 − X2| ≤ 2.236},
C = {(θ1, θ2) : (θ1 − X1)² + (θ2 − X2)² ≤ 5.991},

are 95% confidence regions for (θ1, θ2). [Hint: Φ(2.236) = (1 + √0.95)/2.] What might
be a sensible criterion for choosing between S and C?
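A quick check of the two constants, and of area as one comparison criterion (scipy assumed; area is a natural criterion, though not the only one):

```python
# Question (7): verify both regions have ~95% coverage, then compare areas.
import numpy as np
from scipy.stats import norm, chi2

c, r2 = 2.236, 5.991
cover_S = (2 * norm.cdf(c) - 1) ** 2     # product over the two coordinates
cover_C = chi2.cdf(r2, df=2)             # radial chi^2_2 probability
print(f"coverage: square {cover_S:.4f}, circle {cover_C:.4f}")

print(f"area: square {(2 * c) ** 2:.2f}, circle {np.pi * r2:.2f}")
# The circle achieves the same coverage with smaller area.
```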
(8) Let X1, . . . , Xn be iid with Xi ∼ U[0, θ] where θ ∈ (0, ∞) is unknown. Find a one-dimensional
sufficient statistic T for θ and construct a 95% confidence interval for θ based on T .
Suppose now that θ is a random variable with prior density π(θ) ∝ θ^(−k−1) 1{θ>a} where
a > 0 and k > 2. Compute the posterior density for θ and find the optimal Bayes’
estimator θ̂ under quadratic loss.
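A quadrature sketch (scipy assumed), under the Pareto-type prior above: the posterior kernel is θ^(−(n+k+1)) on θ > m with m = max(a, maxi xi), suggesting the posterior mean (n + k)m/(n + k − 1); both the kernel and the closed form are assumptions tied to the prior as reconstructed here, so check them against your own derivation.

```python
# Question (8) check: posterior kernel theta^{-(n+k+1)} on (m, infinity),
# m = max(a, max_i x_i); compare its mean (by quadrature) with the
# conjectured closed form (n+k)*m/(n+k-1).
import numpy as np
from scipy.integrate import quad

a, k, theta_true, n = 1.0, 3, 2.5, 6
rng = np.random.default_rng(5)
x = rng.uniform(0, theta_true, size=n)
m = max(a, x.max())

kernel = lambda th: th ** (-(n + k + 1))
Z, _ = quad(kernel, m, np.inf)
mean, _ = quad(lambda th: th * kernel(th), m, np.inf)
print("numerical posterior mean:   ", mean / Z)
print("closed form (n+k)m/(n+k-1):", (n + k) * m / (n + k - 1))
```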

(9) Suppose X1, . . . , Xn are iid with (conditional) probability density function f(x|θ) =
θx^(θ−1) for 0 < x < 1 (and zero otherwise), for some θ > 0. Suppose that the prior for
θ is Γ(α, β), α, β > 0. Find the posterior distribution of θ given X = (X1 , . . . , Xn ) and
the Bayesian estimator of θ under quadratic loss.
Now let f (x; θ) = e−(x−θ) for θ < x < ∞ where θ has a prior distribution N(0, 1).
Determine the posterior distribution of θ. Suppose that θ is to be estimated under
absolute loss. Determine the Bayes’ estimator and express it in terms of the function
cn (x) defined by
2Φ(cn(x) − n) = Φ(x − n) for −∞ < x < ∞,

where Φ(x) = (1/√(2π)) ∫_{−∞}^x e^{−y²/2} dy is the standard normal distribution function.
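A numerical check of the second part (scipy assumed): the posterior should be N(n, 1) truncated to θ < m with m = mini xi, whose median is exactly cn(m) from the displayed definition; the truncated-normal form is an assumption to confirm in your derivation.

```python
# Question (9), second part: the median of N(n, 1) truncated to theta < m
# solves Phi(c - n) = Phi(m - n)/2, i.e. c = c_n(m).
import numpy as np
from scipy.stats import norm

n, m = 5, 4.2                                  # illustrative values
c = n + norm.ppf(norm.cdf(m - n) / 2)          # c_n(m) from the definition
print("c_n(m) =", c)

# brute force: invert the truncated-normal CDF on a fine grid
grid = np.linspace(n - 10, m, 200_001)
cdf = norm.cdf(grid - n) / norm.cdf(m - n)
print("numerical median:", grid[np.searchsorted(cdf, 0.5)])
```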

(10) Let X1, . . . , Xn be iid with Xi ∼ N(θ, σ²) and suppose that the prior distribution for θ is N(µ, τ²)
where σ 2 , µ, τ 2 are known. Determine the posterior distribution for θ given X1 , . . . , Xn
and the Bayes’ estimator of θ under quadratic and absolute error loss.
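A sketch of the expected conjugate answer (numpy assumed): the posterior should be normal with precision 1/τ² + n/σ², so quadratic and absolute loss give the same estimator, the posterior mean; the closed form below is the standard conjugate formula, checked here against a grid posterior.

```python
# Question (10) check: posterior mean (mu/tau^2 + sum(x)/sigma^2) / prec
# with prec = 1/tau^2 + n/sigma^2, versus a brute-force grid posterior.
import numpy as np

rng = np.random.default_rng(6)
mu, tau2, sigma2, theta_true, n = 0.0, 4.0, 1.0, 1.3, 20
x = rng.normal(theta_true, np.sqrt(sigma2), size=n)

prec = 1 / tau2 + n / sigma2
print("closed-form posterior mean:", (mu / tau2 + x.sum() / sigma2) / prec)

# grid check: unnormalised log posterior over a fine grid of theta values
grid = np.linspace(-5, 5, 100_001)
logp = -(grid - mu) ** 2 / (2 * tau2) \
       - ((x[:, None] - grid) ** 2).sum(0) / (2 * sigma2)
w = np.exp(logp - logp.max())
print("grid posterior mean:       ", (grid * w).sum() / w.sum())
```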
