The Rao-Blackwell theorem provides a method for improving an unbiased estimator θ* of a parameter θ (that is, reducing its variance) when a sufficient statistic for θ is available. But the theorem says nothing about the quality of the new unbiased estimator obtained by "blackwellizing" the original estimator. In particular, it certainly does not say that this new estimator is a Uniformly Minimum Variance Unbiased Estimator (UMVUE).
Yet, it would be very useful to identify a condition that would make blackwellizing an unbiased estimator generate a UMVUE. This is sometimes possible: all it takes is for the sufficient statistic used for blackwellizing the unbiased estimator to be "complete".
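Blackwellization itself is easy to visualize numerically. The following Monte Carlo sketch (not part of the original text; the Bernoulli setting, sample sizes, and seed are arbitrary choices for illustration) takes the crude unbiased estimator X₁ of a Bernoulli parameter p and conditions it on the sufficient statistic T = ΣXᵢ:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, reps = 0.3, 5, 400_000

# reps samples of size n from a Bernoulli b(p)
x = rng.binomial(1, p, size=(reps, n))
naive = x[:, 0]           # X1: crude but unbiased estimator of p
T = x.sum(axis=1)         # T = sum of the X_i, sufficient for p

# Blackwellization: estimate E[X1 | T = t] empirically for each value t of T
rb = np.array([x[T == t, 0].mean() for t in range(n + 1)])
print(rb)                 # close to t/n for t = 0, ..., n

# The variance drops from about p(1-p) to about p(1-p)/n
print(naive.var(), (T / n).var())
```

Here E[X₁ | T = t] = t/n exactly, so blackwellizing X₁ recovers the sample mean, whose variance is n times smaller than that of X₁.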
This fundamental result is called the Lehmann-Scheffé theorem, and is stated as follows: let
* θ* be any unbiased estimator of the parameter θ,
* and T a statistic that is both sufficient and complete for θ;
then Z = E[θ* | T] is the unique Uniformly Minimum Variance Unbiased Estimator of θ.
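The role of completeness is to guarantee uniqueness. A one-line sketch of the argument (the full demonstration is in the Tutorial below): if Z and Z' are two unbiased estimators of θ that are both functions of T, then

```latex
E_\theta[Z - Z'] = 0 \quad \text{for all } \theta
\qquad\Longrightarrow\qquad
Z = Z' \ \text{almost surely (by completeness of } T\text{)}.
```

The Rao-Blackwell theorem then guarantees Var(Z) ≤ Var(θ*) for every unbiased estimator θ*, so Z is both best and unique.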
It is demonstrated in the Tutorial below.
So the Lehmann-Scheffé theorem may be regarded as a "super-blackwellization" (upper and lower image of the following illustration):
The Lehmann-Scheffé theorem is therefore the culminating point of our quest for the best unbiased estimator of a parameter θ.
This quest started with the identification of the concept of sufficient statistic, whose practical usefulness was revealed by the Rao-Blackwell theorem. It was then further pursued by the identification of the concept of minimal sufficient statistic, then by that of complete statistic. This last concept bestows on the blackwellization procedure the status of "lethal weapon".
From the Lehmann-Scheffé theorem, we'll deduce the following corollary :
If an unbiased estimator θ* is a function of a sufficient and complete statistic, then θ* is the unique UMVUE of θ.
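The Corollary can be checked numerically in the simplest case. In this sketch (the Bernoulli setting, sample sizes, and seed are illustrative assumptions), the sample mean is a function of the complete sufficient statistic ΣXᵢ, hence the UMVUE of p; here it even attains the Cramér-Rao bound:

```python
import numpy as np

rng = np.random.default_rng(1)
p, n, reps = 0.3, 20, 200_000

x = rng.binomial(1, p, size=(reps, n))
p_hat = x.mean(axis=1)    # function of the complete sufficient statistic sum(X_i)

print(p_hat.mean())       # close to p: unbiased
print(p_hat.var())        # close to p*(1-p)/n
print(p * (1 - p) / n)    # Cramér-Rao bound: attained in this case
```

This efficiency is a feature of the Bernoulli case, not of UMVUEs in general, as the exponential example below shows.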
The Lehmann-Scheffé theorem is of deep theoretical significance, but is hard to use in practice: as we already noticed, the blackwellization procedure often leads to cumbersome or even intractable calculations because it relies on conditional expectations. But it is often easy to show that a "naïve" unbiased estimator is a function of a complete statistic (and is therefore a UMVUE by the above Corollary), as done in the examples below and in several other places throughout this site.
The Lehmann-Scheffé theorem is not to be confused with "Lehmann-Scheffé condition" (see here), also sometimes called the "Lehmann-Scheffé theorem on minimal sufficient statistics".
In this Tutorial, we first demonstrate the Lehmann-Scheffé theorem, as well as its Corollary.
We then use the Corollary to identify two Uniformly Minimum Variance Unbiased Estimators:
* The first one is very simple, and bears on the parameter p of the Bernoulli distribution b(p).
* The second one is more complex and bears on the parameter λ of the exponential distribution Exp(λ). This will give us an opportunity to discover that the variance of a Uniformly Minimum Variance Unbiased Estimator may be larger than the Cramér-Rao lower bound: in short, a UMVUE is not necessarily efficient.
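The exponential case can be previewed with a short simulation (a sketch, not the Tutorial's derivation; the rate, sample size, and seed are arbitrary). For a sample of size n from Exp(λ), T = ΣXᵢ is sufficient and complete, and (n−1)/T is an unbiased estimator of λ, hence the UMVUE by the Corollary. Its variance λ²/(n−2) strictly exceeds the Cramér-Rao bound λ²/n:

```python
import numpy as np

rng = np.random.default_rng(0)
lam, n, reps = 2.0, 10, 200_000

# n i.i.d. draws from Exp(lam) per replication (rate parametrization)
samples = rng.exponential(scale=1.0 / lam, size=(reps, n))
T = samples.sum(axis=1)     # complete sufficient statistic
umvue = (n - 1) / T         # unbiased estimator of lam, function of T

print(umvue.mean())         # close to lam
print(umvue.var())          # close to lam**2 / (n - 2)
print(lam**2 / n)           # Cramér-Rao bound, strictly smaller
```

So even the best unbiased estimator of λ does not reach the Cramér-Rao bound: the UMVUE is not efficient.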
More UMVUEs obtained either by the Lehmann-Scheffé theorem or by its Corollary can be found here.
THE LEHMANN-SCHEFFÉ THEOREM
The Lehmann-Scheffé theorem
Unbiased estimator function of a sufficient and complete statistic
Blackwellizing by a sufficient and complete statistic
Corollary of the Lehmann-Scheffé theorem
Exponential distribution: example of a UMVUE that is not efficient
The "natural" estimator is biased
An unbiased estimator
The estimator is the UMVUE
The UMVUE is not efficient
Variance of the UMVUE
The Cramér-Rao lower bound
The UMVUE is not efficient
Related readings: