Neyman-Pearson lemma

We mentioned that identifying the Best Critical Region of a test (that is, the critical region that maximizes the power of the test) is usually difficult.

Yet, there is a situation in which this task is relatively easy, and entirely taken care of by the Neyman-Pearson lemma (or Theorem).

# Simple hypothesis

A hypothesis H is said to be "simple" if, when true, the distribution is completely specified. The most common type of simple hypothesis assigns a value to a parameter of the distribution :

H : θ = θ0

but we'll see an example of a simple hypothesis that does not involve parameters at all.

A hypothesis such as H : θ > θ0, which is not simple, is said to be composite.

# The Neyman-Pearson lemma

The Neyman-Pearson lemma bears on the identification of the Best Critical Region (BCR) of a test when both the null and the alternative hypotheses are simple. The test therefore reads :

* H0 :  θ = θ0
* H1 :  θ = θ1

## Likelihoods

If H0 is true, the distribution is completely determined, and so is the likelihood L(x, θ0 ) of the sample. Similarly, if H1 is true, the distribution is completely determined, and so is the likelihood L(x, θ1) of the sample.

It seems natural to favor the hypothesis conducive to a large value of the likelihood, but this is just a hunch that needs to be formalized. This is what the Neyman-Pearson lemma does.

## Likelihood ratio

The theorem (which we demonstrate below) is as follows :

* Let α be a specified significance level.

* Then there exists a number kα such that the Best Critical Region of the test is the region of the sample space such that :

 L(x, θ1) / L(x, θ0) > kα

or in words :

* The BCR is the region containing all the samples for which the ratio of the likelihoods under the two hypotheses is above a certain threshold kα determined by α only.
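To make this concrete, here is a minimal sketch (an illustration, not part of the original text) that computes the likelihood ratio for i.i.d. normal samples with known variance, and estimates kα by simulation under H0. The specific values μ0 = 0, μ1 = 1, n = 10 and α = 0.05 are assumptions chosen for the example:

```python
import math
import random

def log_likelihood(xs, mu, sigma=1.0):
    # log L(x, mu) for i.i.d. N(mu, sigma^2) observations
    return sum(-0.5 * math.log(2 * math.pi * sigma ** 2)
               - (x - mu) ** 2 / (2 * sigma ** 2) for x in xs)

def log_lr(xs, mu0, mu1):
    # log of the likelihood ratio L(x, mu1) / L(x, mu0)
    return log_likelihood(xs, mu1) - log_likelihood(xs, mu0)

# Estimate k_alpha: simulate the log-ratio under H0 : mu = mu0 and take
# its (1 - alpha)-quantile, so that P(ratio > k_alpha | H0) = alpha
random.seed(0)
mu0, mu1, n, alpha = 0.0, 1.0, 10, 0.05
sims = sorted(log_lr([random.gauss(mu0, 1.0) for _ in range(n)], mu0, mu1)
              for _ in range(20000))
log_k_alpha = sims[int((1 - alpha) * len(sims))]
```

The BCR is then the set of samples whose log-ratio exceeds `log_k_alpha`; by construction, the probability of landing in it when H0 is true is approximately α.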

# Test statistic ?

Note that the likelihood ratio is not a test statistic :

* It is not a statistic because it depends not only on the sample x but also on the two parameters θ1 and θ0.

* The test does not rely on its probability distribution (which is usually unknown anyway) in the way that, for instance, ANOVA relies on the probability distribution of the F statistic.

But it can be said that, as far as the test is concerned, all the useful information contained in the sample is concentrated in this single number.

So running a test based on the Neyman-Pearson lemma consists of two steps :

* Calculate kα from α,

* Then identify the region of the sample space that satisfies the above inequality.

This last step will usually be facilitated by the identification of a genuine statistic, whose presence in a certain region of ℝ guarantees that the sample is in the Neyman-Pearson region of the sample space (see for example "Mean of a normal distribution" below).
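For instance, in the normal-mean example just mentioned (a sketch assuming H0 : μ = μ0 against H1 : μ = μ1 > μ0 for N(μ, σ²) samples with σ known), a little algebra shows that the likelihood ratio is an increasing function of the sample mean, so the Neyman-Pearson region is simply {x̄ > c}, and the two steps reduce to computing c from α:

```python
import math
from statistics import NormalDist

def bcr_cutoff_on_mean(mu0, sigma, n, alpha):
    """Cutoff c such that the region {sample mean > c} has probability
    alpha under H0 : mu = mu0, when testing against any simple
    H1 : mu = mu1 > mu0 (the cutoff does not depend on mu1)."""
    z = NormalDist().inv_cdf(1 - alpha)   # upper-alpha normal quantile
    return mu0 + z * sigma / math.sqrt(n)

c = bcr_cutoff_on_mean(mu0=0.0, sigma=1.0, n=25, alpha=0.05)
# reject H0 whenever the observed sample mean exceeds c (about 0.329 here)
```

Note that c depends on θ0 and α only: the role of θ1 is merely to dictate the *direction* of the region.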

-----

So, overall, a Neyman-Pearson test is an exercise in algebra, and probability distributions appear nowhere except in the hypotheses.

# Power of the test

## "Unbiased" test

We will now show that, under the same assumption (both hypotheses are simple), it is always the case that :

 1 - β > α

where β is the probability of a Type II error. In other words :

The probability for the sample to be in the Best Critical Region is larger when H1 is true than when H0 is true.

a result that comforts us in the idea of rejecting H0 in favor of H1 when the sample is in the Best Critical Region.

-----

A test satisfying this relation is said to be unbiased.
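This can be checked numerically. A minimal sketch, again assuming the normal-mean test (H0 : μ = 0 against H1 : μ = 1, with σ = 1 and n = 9 as illustrative values), computes the power at several significance levels and confirms that 1 - β > α in each case:

```python
import math
from statistics import NormalDist

def power_of_np_test(alpha, mu0=0.0, mu1=1.0, sigma=1.0, n=9):
    """Power 1 - beta of the Neyman-Pearson test of H0 : mu = mu0
    against H1 : mu = mu1 (> mu0), for i.i.d. N(mu, sigma^2) samples."""
    nd = NormalDist()
    # the BCR reduces to {sample mean > c}, with c set by alpha under H0
    c = mu0 + nd.inv_cdf(1 - alpha) * sigma / math.sqrt(n)
    return 1 - nd.cdf((c - mu1) * math.sqrt(n) / sigma)

powers = {alpha: power_of_np_test(alpha) for alpha in (0.01, 0.05, 0.10)}
# each power 1 - beta comfortably exceeds the corresponding alpha
```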

## α and β vary in opposite directions

We'll show that under the same assumptions, α and β always vary in opposite directions. So, for a given sample size, if one decides to reduce the probability of a Type I error by choosing a smaller α, one will unfortunately increase the probability β of a Type II error, and vice versa.
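Continuing the same illustrative normal-mean setting (H0 : μ = 0, H1 : μ = 1, σ = 1, n = 9, all assumptions of this sketch), one can tabulate β as a function of α and observe that shrinking α inflates β:

```python
import math
from statistics import NormalDist

def beta_of_np_test(alpha, mu0=0.0, mu1=1.0, sigma=1.0, n=9):
    # beta = P(sample mean <= c | H1), with c the alpha-level cutoff under H0
    nd = NormalDist()
    c = mu0 + nd.inv_cdf(1 - alpha) * sigma / math.sqrt(n)
    return nd.cdf((c - mu1) * math.sqrt(n) / sigma)

# shrinking alpha from 0.10 down to 0.01 inflates beta
betas = [beta_of_np_test(a) for a in (0.01, 0.05, 0.10)]
```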

## Convergence of the test

Under the same assumptions, it can be shown that the power of the test (1 - β) converges to 1 when the sample size grows without limit. We do not demonstrate this difficult result, but it will clearly appear as true in the graphic interpretation of the test when the parameter admits a sufficient statistic.
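We can at least illustrate the phenomenon numerically, once more in the assumed normal-mean setting (μ1 - μ0 = 0.5, σ = 1, α = 0.05): the power climbs toward 1 as the sample size grows:

```python
import math
from statistics import NormalDist

def power_at_n(n, alpha=0.05, mu0=0.0, mu1=0.5, sigma=1.0):
    # power 1 - beta of the alpha-level Neyman-Pearson test for sample size n
    nd = NormalDist()
    c = mu0 + nd.inv_cdf(1 - alpha) * sigma / math.sqrt(n)
    return 1 - nd.cdf((c - mu1) * math.sqrt(n) / sigma)

# the power increases steadily toward 1 with the sample size
powers_by_n = [power_at_n(n) for n in (5, 20, 80, 320)]
```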

# Neyman-Pearson and sufficient statistic

We'll show that, simple as it is, the expression of the Neyman-Pearson lemma becomes even simpler when the parameter θ admits a sufficient statistic.

This will turn out to be a direct consequence of the Factorization Theorem.

-----

We'll also give particularly convincing graphic representations of :

* The significance level α,

* And the power (1 - β) of the test,

as well as of the fact that α and β vary in opposite directions.

# Likelihood Ratio Test

The Neyman-Pearson lemma is limited to testing a pair of simple hypotheses. Yet the idea of using likelihoods for testing more general hypotheses is appealing. When further developed, it leads to the Likelihood Ratio Test (LRT) procedure, a general and powerful method for building tests bearing on the values of the parameters of a distribution.

_____________________________________________________________________

# Tutorial 1

In this first Tutorial, we demonstrate the Neyman-Pearson lemma.

We then demonstrate two important consequences :

* The power 1 - β is larger than α, the level of significance of the test.

* α and β vary in opposite directions.


_______________________________________________

# Tutorial 2

We now review some applications of the Neyman-Pearson lemma.

* We first use it to identify the BCR of the test of simple hypotheses about the mean of the normal distribution.

* We then address the case of testing the value of the location parameter of the Cauchy distribution, for the purpose of showing that, although the hypotheses are both simple, the structure of the BCR varies wildly with the value of the significance level chosen for the test.

* We dispel the belief that "simple hypothesis" only means "specifying the value of a parameter" by giving an example in which both the null and the alternative hypotheses are simple, but involve no parameter value.

-----

We then move on to the important case where the parameter under test admits a sufficient statistic. The Neyman-Pearson lemma then takes a particularly simple form, which we use when revisiting the example of the mean of the normal distribution.

The concepts of "significance level" and "power" will receive a particularly instructive graphic representation, which will also make it clear that α and β vary in opposite directions.
