Interactive animation

t distribution

If X ~ N(µ, σ²) is a normal random variable, the sample mean X̄ of n-observation samples is also normally distributed

* with mean µ,

* and variance σ²/n.

X̄ ~ N(µ, σ²/n)

The standardized variable

z = (X̄ - µ) / (σ / √n)

is the departure of the sample mean from the true distribution mean, standardized by its own distribution's standard deviation used as "unit length".

z is normal with mean 0 and unit variance

z ~ N(0, 1)

Therefore, if the variance σ² of a normal distribution is known, the sample mean is transformed into a standard normal variable in a very simple way.
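This standardization can be checked numerically; a minimal numpy sketch, in which the values of µ, σ and n are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n = 5.0, 2.0, 10            # arbitrary illustrative values
samples = rng.normal(mu, sigma, size=(100_000, n))
xbar = samples.mean(axis=1)            # one sample mean per row

# z = (xbar - mu) / (sigma / sqrt(n)) should be standard normal
z = (xbar - mu) / (sigma / np.sqrt(n))
print(round(z.mean(), 2), round(z.var(), 2))   # close to 0 and 1
```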

# The T statistic

Now what if the variance σ² is unknown (as is usually the case)? In the expression of z, replace σ² by its unbiased estimate

S² = [Σᵢ (xᵢ - X̄)²] / (n - 1)

to obtain the so-called T statistic. If we denote S = √S², then

T = (X̄ - µ) / (S / √n)

very much like before.

-----

Unfortunately, S is now a random variable, which prevents the distribution of T from being standard normal. What is the distribution of T?

# Student's t distribution

This distribution is known as "Student's t distribution", or simply "t distribution".

It depends on n, which is therefore a parameter of the distribution. The distribution of T (for n-observation samples) is called the "t distribution with (n - 1) degrees of freedom", and is denoted t_{n-1}:

T ~ t_{n-1}

(The reason why the number of degrees of freedom is n - 1 and not n is given here).
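This can be illustrated by simulation; a numpy sketch (the sample size n = 6, hence df = 5, is an arbitrary choice) that builds many T statistics and checks that their spread matches t_{n-1} rather than N(0, 1):

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, n = 0.0, 3.0, 6             # arbitrary values; df = n - 1 = 5
samples = rng.normal(mu, sigma, size=(200_000, n))
xbar = samples.mean(axis=1)
s = samples.std(axis=1, ddof=1)        # square root of the unbiased variance estimate

# T = (xbar - mu) / (s / sqrt(n)); sigma has disappeared from the expression
t_stat = (xbar - mu) / (s / np.sqrt(n))

# Spread matches t_{n-1}, whose variance is df / (df - 2) = 5/3, not 1
print(round(t_stat.var(), 2))
```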

# Animation

The following interactive animation illustrates the t distribution.


Upper frame

 Displays a normal distribution, and a sample drawn from this distribution. The distribution is standard (mean 0 and unit variance), but this is not fundamental.

Middle display

Below the upper frame are displayed:

* To the left, the difference between the sample mean and the distribution mean (here, 0). This is T's numerator.

* To the right, T's denominator, S/√n.

Lower frame

* The tallest curve (blue) is the standard normal distribution. It is the distribution of z (that is, when the variance σ² is known).

* The middle curve (red) is the t distribution corresponding to the number of observations in the sample. This curve is symmetric with respect to the vertical axis T = 0. Note that the number of degrees of freedom (df) is n - 1.

* The shortest curve (black) is the t distribution for n = 2 (df = 1). We show in the Tutorial that it is the Cauchy distribution.

Below the frame is the value of the T statistic for the current sample.

_______________________________

* Change the sample size and observe how the shape of the t distribution changes. You may click on "Mask sample" to avoid being distracted by the sample. Although the t distribution always looks like the standard gaussian, it is never as tall as the gaussian. But the area under the curve is always 1, so the missing area in the central zone is to be found in the "tails" of the t distribution, which are always fatter than those of the gaussian. This is a consequence of the fact that the sum of the squares of the distances of the observations to the sample mean is always less than the sum of the squares of the distances of the observations to the true mean. Because we are forced to estimate the distribution variance, we introduce an uncertainty about the value of the ratio of the difference between the estimated mean and the true mean to the standard deviation of the mother distribution. Large values of T are therefore more probable than the same values of z (that is, when the variance σ² is known). This remark is the very basis of t-tests.

* Observe that the t distribution tends to the standard normal distribution for large values of n. This reflects the fact that the estimated variance converges in probability to the true variance when n grows without limit.

* Observe that the tails of the t distribution become more pronounced for smaller values of n.

* The T statistic needs at least two observations. When n = 2 (df = 1), the t distribution is identical to another classical distribution, the Cauchy distribution. Its tails are so fat that they prevent it from having a mean (as well as any higher order moment).

* Select a value for n, then click on "Go" and observe the progressive build-up of the histogram of the corresponding t distribution.
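The "fat tails" behavior seen in the animation can also be checked outside it; a short sketch using scipy.stats (assumed available), printing the upper-tail probability beyond 2 "standard units" for a few degrees of freedom:

```python
from scipy import stats

# Upper-tail probability beyond 2: always larger for t than for the
# standard normal, and larger for smaller df (fatter tails)
for df in (1, 2, 5, 30):
    print(df, round(stats.t.sf(2.0, df), 4))
print("normal", round(stats.norm.sf(2.0), 4))
```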

# Properties of the t distribution

## Probability density function

We'll show that the pdf of the t distribution with n degrees of freedom is:

f_n(x) = Γ((n + 1)/2) / (√(nπ) Γ(n/2)) · (1 + x²/n)^(-(n + 1)/2)
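A quick numerical check of this pdf; the sketch below implements f_n(x) = Γ((n+1)/2) / (√(nπ) Γ(n/2)) · (1 + x²/n)^(-(n+1)/2) with the standard library's math.gamma and verifies that the area under the curve is 1:

```python
import math
import numpy as np

def t_pdf(x, n):
    """pdf of the t distribution with n degrees of freedom."""
    coef = math.gamma((n + 1) / 2) / (math.sqrt(n * math.pi) * math.gamma(n / 2))
    return coef * (1 + x * x / n) ** (-(n + 1) / 2)

# The area under the curve is 1, as for any pdf (here for n = 5)
x = np.linspace(-50.0, 50.0, 200_001)
area = float(np.sum(t_pdf(x, 5)) * (x[1] - x[0]))
print(round(area, 3))   # -> 1.0

# For n = 1 the formula reduces to the Cauchy pdf, 1 / (pi * (1 + x^2))
print(abs(t_pdf(0.0, 1) - 1 / math.pi) < 1e-12)   # -> True
```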

## Extreme behaviors

* We'll show that if n = 1, the t distribution reduces to the Cauchy distribution, as illustrated in the above animation.

* We'll show that as n tends to infinity, fn(x) converges to the pdf of the standard normal distribution N(0, 1) (see animation).

This last result is also obtained here by a completely different method, as an application of Slutsky's theorem.

## Mean

As the Cauchy distribution has no mean, the t distribution has no mean for n = 1.

Note that the pdf of the t distribution is even. So the mean of the t distribution is 0 for n ≥ 2.

## Variance

We'll show that when it exists, the variance of the t distribution is:

Var(T_n) = n / (n - 2)

Note that:

1) The variance does not exist for n = 1, 2.

2) It tends to 1 from above as n grows without limit.
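A sketch of both remarks, comparing the formula n / (n - 2) with scipy's analytic variance (scipy.stats assumed available):

```python
from scipy import stats

# Var(T_n) = n / (n - 2) for n > 2, and it tends to 1 from above
for n in (3, 5, 10, 30, 100):
    formula = n / (n - 2)
    assert abs(stats.t.var(df=n) - formula) < 1e-9
    print(n, round(formula, 3))
```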

# Student's t and Fisher's F distributions

We show here that if the r.v. T is distributed as t_m, then T² is distributed as F_{1, m}.
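This relation can be seen on quantiles; a sketch with scipy.stats (the value m = 7 is an arbitrary choice): the squared two-sided t critical value equals the matching one-sided F critical value.

```python
from scipy import stats

m = 7   # arbitrary number of degrees of freedom
t_q = stats.t.ppf(0.975, m)       # two-sided 5% critical value of t_m
f_q = stats.f.ppf(0.95, 1, m)     # one-sided 5% critical value of F_{1,m}

# T ~ t_m implies T^2 ~ F_{1,m}: the squared t quantile is the F quantile
print(round(t_q ** 2, 3), round(f_q, 3))
```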

___________________________________________

# Applications of the t distribution

## General

The key point about the t distribution is that it does not depend on the variance σ² of the original normal distribution. This point is made clear below.

The T statistic is therefore a pivotal quantity, from which it is possible to devise:

• Confidence intervals around the value of the mean of a sample from a normal distribution.
• Several tests (the so-called "t tests") that pertain to how much trust can be placed in the observed value of the sample mean as an estimate of the distribution mean.
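Both uses can be sketched in a few lines with numpy and scipy.stats; the sample below is simulated, with a true mean of 10 chosen arbitrarily:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
data = rng.normal(10.0, 4.0, size=20)   # hypothetical sample, true mean 10
n = data.size
xbar, s = data.mean(), data.std(ddof=1)

# 95% confidence interval: xbar +/- t_{n-1, 0.975} * s / sqrt(n)
half = stats.t.ppf(0.975, n - 1) * s / np.sqrt(n)
print(f"95% CI = [{xbar - half:.2f}, {xbar + half:.2f}]")

# One-sample t test of H0: mu = 10 (scipy builds the same T statistic)
t_stat, p = stats.ttest_1samp(data, 10.0)
print(round(t_stat, 3), round(p, 3))
```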

## Linear regression

We show that under the standard assumptions of Simple Linear Regression, the coefficients (slope and intercept) of the Least Squares Line are both normally distributed. But, contrary to what we assumed when we defined the T statistic, not only are the variances of these normal distributions unknown, but their means are also unknown and have to be estimated. So, for either the slope or the intercept, the distribution of the standardized coefficient now involves the estimation of two parameters instead of just one.

As a consequence, it can be shown that the standardized coefficients are distributed as t_{n-2}, and this is the distribution that has to be taken into account when elaborating confidence intervals and tests for the regression coefficients.

This result is difficult, and is not demonstrated, but it should come as no surprise that estimating two parameters instead of one leads to losing two degrees of freedom instead of one.
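The t_{n-2} distribution of the standardized slope can at least be checked by simulation; a numpy sketch in which the model y = a + b·x + noise and all parameter values are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(3)
n, a, b, sigma = 8, 1.0, 2.0, 0.5       # hypothetical model y = a + b*x + noise
x = np.linspace(0.0, 1.0, n)
xc = x - x.mean()
sxx = np.sum(xc ** 2)

# Many simulated regressions, each yielding a standardized slope
y = a + b * x + rng.normal(0.0, sigma, size=(50_000, n))
bhat = (y * xc).sum(axis=1) / sxx                   # least-squares slope
ahat = y.mean(axis=1) - bhat * x.mean()             # least-squares intercept
resid = y - (ahat[:, None] + bhat[:, None] * x)
s2 = (resid ** 2).sum(axis=1) / (n - 2)             # variance estimate, df = n - 2
t_vals = (bhat - b) / np.sqrt(s2 / sxx)

# Spread should match t_{n-2}: df / (df - 2) = 6 / 4 = 1.5
print(round(t_vals.var(), 2))
```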

For just a little bit more on "losing degrees of freedom", please see here. This Tutorial bears on the Chi-square distribution, but we show below that the t distribution is intimately linked to the Chi-square distribution.

# Formal definition of the t distribution

We can now elaborate a more general formal definition of the t distribution. Take the expression for T, and divide both the numerator and the denominator by σ/√n, where σ is the true standard deviation:

1) The new numerator of T is

(X̄ - µ) / (σ / √n)

which is N(0, 1).

2) The new denominator of T can be written:

S/σ = √[ ((n - 1) S² / σ²) / (n - 1) ]

But the term

(n - 1) S² / σ²

is χ²_{n-1} (see here).

The denominator under the radical is n - 1, that is, just the number of degrees of freedom of the numerator.

3) The numerator and denominator of T are independent random variables. If we write T under its original form:

T = (X̄ - µ) / (S / √n)

* the numerator involves only the sample mean X̄,

* the denominator involves only the estimated standard deviation S,

and these quantities are known to be independent.

4) Note that T is now identified as the ratio of two independent variables, the distributions of which do not depend on the variance σ² of the original normal distribution. Therefore, we do not even need to calculate the distribution of T to assert that this distribution does not depend on σ².

-----

The formal definition of the t distribution is therefore:

By definition, the random variable T has a t_n distribution with n degrees of freedom if:

T = U / √(X / n)

where:

* U ~ N(0, 1),

* X ~ χ²_n,

* U and X are independent.

This definition makes no reference to the original problem that led to the identification of the t distribution. It can therefore be used in a more general context.
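The formal definition translates directly into a simulation; a numpy sketch (n = 5 is an arbitrary choice) building T = U / √(X/n) from independent normal and chi-square variables:

```python
import numpy as np

rng = np.random.default_rng(4)
n, N = 5, 200_000
U = rng.normal(0.0, 1.0, N)         # U ~ N(0, 1)
X = rng.chisquare(n, N)             # X ~ chi-square with n df, independent of U
T = U / np.sqrt(X / n)              # the formal definition of a t_n variable

# Variance of t_n is n / (n - 2) = 5/3
print(round(T.var(), 2))
```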

________________________________________________________________

 Tutorial 1

In this Tutorial, we calculate the probability density function of the t distribution. Note that we also calculate this function here by considering a T variable as the ratio of two independent random variables.

We then examine the two extreme cases n = 1 and n tending to infinity.

* We show that t_1 is just the Cauchy distribution.

* We then show that as n tends to infinity, the pdf of Student's t distribution tends to the pdf of the standard normal distribution (a result also established here by resorting to Slutsky's theorem).

-----

We conclude by noting that although the number of degrees of freedom of the t distribution is by nature an integer, nothing precludes its mathematical form from accommodating non-integer degrees of freedom. This turns out to be useful for calculating approximate confidence intervals for the difference of two sample means when the variances of the normal distributions are not assumed to be equal (Welch's approximation).
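A sketch of Welch's approximation with numpy and scipy.stats; the two samples below are simulated with deliberately unequal variances, and the Welch–Satterthwaite "equivalent" number of degrees of freedom is generally non-integer:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
x = rng.normal(0.0, 1.0, 12)        # hypothetical samples with
y = rng.normal(0.0, 3.0, 20)        # unequal variances

# Welch-Satterthwaite "equivalent" degrees of freedom, generally non-integer
vx, vy = x.var(ddof=1) / x.size, y.var(ddof=1) / y.size
df = (vx + vy) ** 2 / (vx ** 2 / (x.size - 1) + vy ** 2 / (y.size - 1))
print(round(df, 2))

# scipy's Welch t test uses this df internally
t_stat, p = stats.ttest_ind(x, y, equal_var=False)
print(round(p, 3))
```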

PROBABILITY DENSITY FUNCTION OF THE t DISTRIBUTION (Tutorial outline)

* General outline of the demonstration
* Definition of the t distribution
* The joint pdf of U and X
* The distribution function of the t distribution
* The pdf of T
  * The structure of F(t)
  * Differentiating F(t)
* Special cases
  * n = 1: Cauchy distribution
  * n infinite: normal distribution
    * Limit of the functional part
    * Limit of the normalization coefficient
    * The standard normal distribution
* Non-integer number of "degrees of freedom": Welch's approximation

___________________________________________________________________

 Tutorial 2

The expression of the variance of T_n, a r.v. following Student's t distribution with n degrees of freedom,

Var(T_n) = n / (n - 2)

is simple enough, but obtaining this result is no easy business and deserves a Tutorial of its own.

We'll give two demonstrations:

1) The first one uses the direct approach:

Var(T_n) = E[T_n²] = ∫ x² f_n(x) dx

where f_n(x) is the pdf of the t distribution (recall that the expectation of T_n is 0 and that its variance is therefore the same as its second order moment).

Although direct, this approach is by no means straightforward and will lead us to explore the interesting properties of a family of integrals known as Wallis integrals.

The study of Wallis integrals is further pursued here and culminates with the "Wallis formula", a key ingredient of the demonstration of Stirling's formula.

2) The second approach is less direct, but ultimately simpler. It considers a T_n random variable as a function of two r.v. (as given by the formal definition of the t distribution), and then calls on the bivariate version of the LOTUS theorem for calculating the expectation of T_n².
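The independence at the heart of this second approach can be sketched numerically with numpy: since T² = U² · (n/X) with U and X independent, E[T²] factors as E[U²] · E[n/X], and E[1/X] = 1/(n - 2) for a chi-square with n df (n = 6 below is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(6)
n, N = 6, 400_000
U = rng.normal(size=N)              # U ~ N(0, 1)
X = rng.chisquare(n, N)             # X ~ chi-square with n df, independent of U

# T^2 = U^2 * (n / X); by independence E[T^2] = E[U^2] * E[n / X].
# With E[U^2] = 1 and E[1/X] = 1/(n - 2), this gives n/(n - 2) = 1.5 here.
est = float(np.mean(U ** 2) * np.mean(n / X))
print(round(est, 2))
```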

VARIANCE OF THE t DISTRIBUTION (Tutorial outline)

* First method: direct calculation
  * Splitting the integral
  * Change of variable and Wallis integrals
  * Recursion equation of Wallis' integrals
  * Value of Wallis' integral (n even, n odd)
  * The normalisation coefficient
  * The variance
* Second method: by LOTUS theorem

______________________________________________________