Changes, transformations and functions of random variables

# Univariate transformations

When working with the probability density function (pdf) of a random variable X, one is often led to create a new variable Y defined as a function f(X) of the original variable X. For example, if X ~ N(µ, σ²), then the new variable:

Y = f(X) = (X - µ)/σ

is N(0, 1).
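As a quick numerical sketch of this standardization (plain Python; the values of µ and σ are arbitrary illustrative choices):

```python
import random
import statistics

random.seed(0)
mu, sigma = 5.0, 2.0  # arbitrary illustrative parameters

# Draw X ~ N(mu, sigma^2), then standardize: Y = (X - mu) / sigma
xs = [random.gauss(mu, sigma) for _ in range(100_000)]
ys = [(x - mu) / sigma for x in xs]

# Y should have mean close to 0 and standard deviation close to 1
print(round(statistics.mean(ys), 2), round(statistics.stdev(ys), 2))
```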

It is also often the case that the quantity of interest is a function of another (random) quantity whose distribution is known. Here are a few examples:

* Scaling: from degrees to radians, miles to kilometers, light-years to parsecs, degrees Celsius to degrees Fahrenheit, or from a linear to a logarithmic scale.

* Laws of physics: what is the distribution of the kinetic energy of the molecules of a gas if the distribution of the speeds of the molecules is known?

So the general question is:

* if Y = h(X),

* and if f(x) is the pdf of X,

then what is the pdf g(y) of Y?

# Multivariate transformations

The problem extends naturally to the case when several variables Yj are defined from several variables Xi through a transformation y = h(x). Here are some examples:

## Rotation of the reference frame

Let f(x, y) be the probability density function of the pair of r.v. {X, Y}. Let's rotate the reference frame {x, y} by an angle θ. The new axes {x', y'} define two new r.v. {X', Y'}. What is the joint probability density function of {X', Y'}?

## Polar coordinates

Let f(x, y) be the joint probability density function of the pair of r.v. {X, Y}, expressed in the Cartesian reference frame {x, y}. Any point (x, y) in the plane can also be identified by its polar coordinates (r, θ). So any realization of the pair {X, Y} produces a pair of values of r and θ, therefore defining two new r.v. R and Θ.

What is the joint probability density function of R and Θ? What are the (marginal) distributions of R and of Θ?
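These marginal distributions can be previewed numerically in the special case of a standard bivariate normal, for which R is known to follow a Rayleigh distribution and Θ a uniform distribution (a simulation sketch, not the analytic derivation developed later):

```python
import math
import random
import statistics

random.seed(1)
rs, thetas = [], []
for _ in range(100_000):
    x, y = random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)
    rs.append(math.hypot(x, y))       # R = sqrt(X^2 + Y^2)
    thetas.append(math.atan2(y, x))   # Theta in (-pi, pi]

# Rayleigh mean is sqrt(pi/2) ~ 1.2533; a uniform Theta on (-pi, pi] has mean ~ 0
print(round(statistics.mean(rs), 2), round(statistics.mean(thetas), 2))
```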

## Sampling distributions

Let f(x) be the pdf of the r.v. X. Let also Z1 = z1(x1, x2, ..., xn) be a statistic, e.g. the sample mean. What is the pdf of Z1?

Z1 is a function of the n r.v. Xi (with n the sample size), which are iid with pdf f(x). If it is possible to identify n - 1 other independent statistics Zi, i = 2, ..., n, then a transformation Z = h(X) is defined, and g(z), the joint distribution of Z = {Z1, Z2, ..., Zn}, can be calculated. The pdf of Z1 is then obtained as one of the marginal distributions of Z by integrating g(z) over zi, i = 2, ..., n.
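A simulation sketch of this idea for a concrete statistic (the sample mean of n iid exponential(1) variables, an arbitrary illustrative choice): the pdf of Z1 is not computed analytically here, but its first two moments can be checked against theory:

```python
import random
import statistics

random.seed(2)
n, reps = 30, 20_000

# Z1 = sample mean of n iid exponential(1) draws; E[Z1] = 1, Var[Z1] = 1/n
means = [statistics.mean(random.expovariate(1.0) for _ in range(n))
         for _ in range(reps)]

# both quantities should be close to 1
print(round(statistics.mean(means), 2))
print(round(statistics.variance(means) * n, 2))
```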

## Integration limits

Calculations on joint distributions often involve multiple integrals whose integration limits are themselves variables. An appropriate change of variables sometimes allows changing all these variables but one into fixed integration limits, thus making the calculation of the integrals much simpler.

## Distributions

* Some distributions are naturally defined as the ratio of two random variables (Student's t, Fisher's F, or the Cauchy distribution when regarded as the ratio of two standard normal variables).

An appropriate variable transformation then makes it possible to explicitly calculate the pdf of these variables.

* Some distributions are most easily calculated as marginal distributions of the joint distribution of an appropriately chosen set of random variables. Calculating this joint distribution in turn usually involves proceeding first with some variable transformations.

________________________________________________________________________________

 Tutorial 1

We first give a few simple examples of univariate transformations of random variables. We don't use any general theory: instead, each case is solved from basic considerations about the distribution function.

We illustrate these results with some classical examples that clearly show the usefulness of the concept of variable transformation.

SIMPLE EXAMPLES OF VARIABLE TRANSFORMATIONS

* Linear change of variable
  * Translation
  * Change of scale
  * General linear transformation
  * Example: the distribution of the sample variance from a normal distribution
* Square of a random variable
  * Theory
  * Example 1: the χ²(1) distribution
  * Example 2: the square of a Student's t(m) is a Fisher's F(1, m)
* Square root of a non-negative random variable
  * Theory
  * Example: distribution of the sample standard deviation from a standard normal distribution
* Inverse of a random variable
  * Theory
  * Example: inverse of a Cauchy variable
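As a numerical check of one result listed above (the square of a standard normal variable follows a chi-square distribution with 1 degree of freedom, hence has mean 1 and variance 2):

```python
import random
import statistics

random.seed(3)
# square of a standard normal variable: chi-square with 1 degree of freedom
sq = [random.gauss(0.0, 1.0) ** 2 for _ in range(200_000)]

# chi-square(1) has mean 1 and variance 2
print(round(statistics.mean(sq), 2), round(statistics.variance(sq), 2))
```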

_____________________________________________________________________________

 Tutorial 2

We now elaborate a general theory of univariate transformations, that we illustrate with examples. These results are a first step on the way to the concept of jacobian determinant, which is at the core of multivariate transformations (see next Tutorial).

UNIVARIATE TRANSFORMATIONS

* Presentation of the general method
* The transformation h(x) is one-to-one
  * h(x) is monotonically increasing
  * h(x) is monotonically decreasing
  * Both cases together
* The transformation h(x) is not one-to-one
* Four examples
* The Probability Integral Transformation
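The Probability Integral Transformation listed above lends itself to a short sketch (plain Python; the exponential(1) distribution is an arbitrary illustrative choice): applying the cdf F of X to X itself yields a variable that is uniform on (0, 1):

```python
import math
import random
import statistics

random.seed(4)
# X ~ exponential(1) has cdf F(x) = 1 - exp(-x); U = F(X) is uniform on (0, 1)
us = [1.0 - math.exp(-random.expovariate(1.0)) for _ in range(100_000)]

# uniform(0, 1) has mean 1/2 and variance 1/12
print(round(statistics.mean(us), 2), round(statistics.variance(us), 3))
```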

_____________________________________________

 Tutorial 3

Multivariate transformations are both more powerful and more complex than univariate transformations. They rely on the concept of jacobian determinant, a central issue in the theory of multiple integrals. This theory cannot be addressed in any detail in this Glossary, but we give here the main result, that can be used as such in any problem involving multivariate transformations.

In the particular case of bivariate transformations, it is possible to justify in a simple way the role of the jacobian determinant, and to give a geometric interpretation of the determinant, which we do here.

The next two Tutorials will be devoted to particularly important applications of multivariate transformations.

MULTIVARIATE TRANSFORMATIONS

* Bivariate transformations
  * One-to-one transformations
  * Jacobian
  * Non one-to-one transformations
  * Examples
    * Example 1
    * Example 2: creating two independent variables
    * Example 3: creating one variable with a uniform distribution
    * Example 4: polar coordinates
* Multivariate transformations
  * Transforming n variables into n new variables
  * Transforming n variables into m < n new variables
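One classical bivariate transformation that creates two independent variables is the Box-Muller construction, which turns two independent uniform variables into two independent standard normal variables (sketched here as an illustration; the Tutorial's own examples may use a different construction):

```python
import math
import random

random.seed(5)
n = 100_000
z1, z2 = [], []
for _ in range(n):
    u1 = 1.0 - random.random()  # in (0, 1], so the log below is defined
    u2 = random.random()
    r = math.sqrt(-2.0 * math.log(u1))      # radius from the first uniform
    z1.append(r * math.cos(2.0 * math.pi * u2))
    z2.append(r * math.sin(2.0 * math.pi * u2))

m1, m2 = sum(z1) / n, sum(z2) / n
cov = sum(a * b for a, b in zip(z1, z2)) / n - m1 * m2

# both coordinates have mean ~ 0 and are uncorrelated (cov ~ 0)
print(round(m1, 2), round(m2, 2), round(cov, 2))
```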

_____________________________________________________________

 Tutorial 4

Let X1 and X2 be two independent r.v. whose distributions are known. What is the distribution of X = X1 + X2?

A first approach consists in using the properties of the moment generating function (m.g.f.). But this method is limited because:

* The m.g.f.s must exist, which is not always the case (see for example the Cauchy distribution).

* Knowing a m.g.f. does not mean that the corresponding pdf can be known explicitly, as the m.g.f. cannot, in general, be inverted.

These two limitations are lifted by resorting to the more powerful characteristic function instead of the m.g.f. But the characteristic function is not (yet!) addressed in this Glossary.

The second approach consists in using a simple bivariate transformation that then leads easily to the classical result: the distribution of X is the convolution of the distributions of X1 and X2.
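The convolution result is easiest to see in the discrete case. A minimal sketch with two fair dice (an illustrative choice): the pmf of the sum is the convolution of the two individual pmfs:

```python
from itertools import product

die = {k: 1 / 6 for k in range(1, 7)}  # pmf of one fair die

# convolution: P(S = s) = sum over x of P(X1 = x) * P(X2 = s - x)
pmf_sum = {}
for a, b in product(die, die):
    pmf_sum[a + b] = pmf_sum.get(a + b, 0.0) + die[a] * die[b]

print(round(pmf_sum[7], 4))              # 6/36 ~ 0.1667
print(round(sum(pmf_sum.values()), 4))   # the pmf sums to 1
```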

This simple method can be used with few modifications in a number of problems where the distribution that is to be calculated appears as a marginal distribution of a multivariate distribution that is built artificially for the sole purpose of integrating out all the variables but the variable of interest.

-----

We illustrate this important result by some classical examples. In particular, we show that the distribution of the sample mean of a Cauchy distribution is that very same Cauchy distribution, irrespective of the sample size. This surprising result is intimately linked to the fact that the Cauchy distribution has no mean and no variance, and therefore falls outside the scope of the Central Limit Theorem.
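This Cauchy stability property can be observed numerically (a simulation sketch; the sample size n = 50 is an arbitrary choice): whatever n, the quartiles of the sample mean remain those of the standard Cauchy distribution, namely -1, 0 and +1:

```python
import math
import random
import statistics

random.seed(6)
n, reps = 50, 20_000

def std_cauchy():
    # standard Cauchy draw via inversion of its cdf
    return math.tan(math.pi * (random.random() - 0.5))

means = [statistics.mean(std_cauchy() for _ in range(n)) for _ in range(reps)]

# quartiles of the standard Cauchy: -1, 0, +1 -- unchanged by averaging
q1, q2, q3 = statistics.quantiles(means, n=4)
print(round(q1, 1), round(q2, 1), round(q3, 1))
```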

-----

In the special case of non-negative, integer-valued r.v., the distribution of the sum of independent r.v. can also be obtained by calling on the properties of the generating function.
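A sketch of that route (the choice of Poisson variables is an illustrative assumption): the generating function of a Poisson(λ) variable is exp(λ(s - 1)), so the product of two such generating functions is again of Poisson form, with parameter λ1 + λ2. The convolution of the two pmfs should therefore match a Poisson(λ1 + λ2) pmf term by term:

```python
import math

def poisson_pmf(lam, k):
    # P(X = k) for X ~ Poisson(lam)
    return math.exp(-lam) * lam ** k / math.factorial(k)

lam1, lam2 = 2.0, 3.0
k = 4

# convolution of the two pmfs at k
conv = sum(poisson_pmf(lam1, j) * poisson_pmf(lam2, k - j) for j in range(k + 1))

# the generating-function argument predicts a Poisson(lam1 + lam2) pmf
print(round(conv, 6), round(poisson_pmf(lam1 + lam2, k), 6))
```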

SUM OF INDEPENDENT RANDOM VARIABLES

* Reminder: moment generating function
* Theory
* Examples
  * Example 1: sum of two independent exponential variables
  * Example 2: sum of two uniformly distributed variables
  * Example 3: distribution of the sample mean of the Cauchy distribution

__________________________________________________________________

 Tutorial 5

We finally address the issue of calculating the pdf of the ratio Z of two independent r.v. X and Y (Z = X/Y). The general idea is the same as that of the previous Tutorial, with the additional difficulty that the denominator Y may have 0 in its range. This is for example the case when the Cauchy distribution is defined as the distribution of the ratio of two independent standard normal r.v.
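This definition of the Cauchy distribution can be previewed with a short simulation sketch: the ratio of two independent standard normal variables should exhibit the standard Cauchy quartiles -1, 0 and +1:

```python
import random
import statistics

random.seed(7)
# ratio of two independent standard normal draws
ratios = [random.gauss(0.0, 1.0) / random.gauss(0.0, 1.0)
          for _ in range(50_000)]

# quartiles of the standard Cauchy distribution: -1, 0, +1
q1, med, q3 = statistics.quantiles(ratios, n=4)
print(round(q1, 1), round(med, 1), round(q3, 1))
```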

Other important distributions are also defined as the ratio of two independent r.v.:

* Fisher's F distribution,

* Student's t distribution,

whose analytical forms we calculate explicitly.

We also calculate the t distribution here by a more direct, but also more laborious, method.

RATIO OF TWO INDEPENDENT VARIABLES

* Theory
  * The Jacobian
  * Case 1: the denominator is never 0 inside its range
  * Case 2: the denominator vanishes inside its range
* Examples
  * Student's t
  * Fisher's F
  * The Cauchy distribution

__________________________________________________