Random vector

Let {X1, X2, ..., Xn} be a set of n random variables.

It is often convenient to consider this set as a single object x = {X1, X2, ..., Xn} that will be called a random vector. This is particularly true when this set is to be subjected to linear transformations, thus making Linear Algebra the natural mathematical tool.

Just as a random variable is described by its probability distribution, a random vector is described by the joint probability distribution of the n variables that make up the vector.

[Illustration : the joint probability distribution of a random vector of dimension 2.]

-----

The moments of a random vector are defined just as for a random variable (see below). But because a random vector has an "internal structure" that a simple random variable lacks, the definitions are both a bit more complex and richer than those pertaining to random variables. In particular :

* Multiplicative constants must be replaced by constant (not random) matrices, or constant vectors.

* Variances must be replaced by covariance matrices.

# Expectation of a random vector

## Definition

By definition, the expectation of a random vector is the vector whose components are the expectations of the r.v. making up the vector :

E[x] = (E[X1], E[X2], ..., E[Xn])'

µ = E[x] is the center of gravity of the joint probability distribution of {X1, X2, ..., Xn}.

## Linearity

It is immediately verified that the expectation is linear.

* If x and y are two random vectors, and if A and B are two constant matrices :

E[Ax + By] = AE[x] + BE[y]

* If b is a constant vector :

E[Ax + b] = AE[x] + b
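These identities are easy to check numerically. Below is a minimal numpy sketch (the matrices A, B, the vector b and the distributions of x and y are arbitrary choices made for illustration); since the sample mean is itself linear, the two sides agree up to floating-point error :

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Arbitrary illustrative distributions : x is 3-dimensional, y is 2-dimensional
x = rng.normal(loc=[1.0, -2.0, 0.5], size=(N, 3))
y = rng.exponential(scale=2.0, size=(N, 2))

A = np.array([[1.0, 2.0, 0.0],
              [0.0, -1.0, 3.0]])    # 2x3 constant matrix
B = np.array([[4.0, 0.0],
              [1.0, -2.0]])         # 2x2 constant matrix
b = np.array([10.0, 20.0])          # constant vector

# E[Ax + By] = A E[x] + B E[y]
lhs = (x @ A.T + y @ B.T).mean(axis=0)
rhs = A @ x.mean(axis=0) + B @ y.mean(axis=0)
print(np.allclose(lhs, rhs))        # True

# E[Ax + b] = A E[x] + b
print(np.allclose((x @ A.T + b).mean(axis=0),
                  A @ x.mean(axis=0) + b))   # True
```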

## Expectation of a twice linearly transformed random vector

Let :

* x be a random vector,

* A and B be constant (non random) matrices,

* c be a constant vector.

We'll show that :

 E[AxB + c] = AE[x]B + c

We'll need this result for establishing the expression of the covariance matrix of a linearly transformed random vector (see below).
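Here is a small numpy check of this identity on simulated data (all dimensions and values are arbitrary; note that when B has several columns, AxB is a matrix, so the constant term c must have the same shape as AxB for the sum to make sense) :

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000
xs = rng.normal(size=(N, 3))        # samples of a 3-dimensional random vector x

A = rng.normal(size=(2, 3))         # constant 2x3 matrix
B = rng.normal(size=(1, 4))         # constant 1x4 matrix (x is a 3x1 column vector)
c = rng.normal(size=(2, 4))         # constant term, same shape as AxB

# For each sample, AxB = (Ax)B is the outer product of the 2-vector Ax
# with the single row of B, i.e. a 2x4 matrix.
AxB = np.einsum('ni,j->nij', xs @ A.T, B[0])

lhs = (AxB + c).mean(axis=0)                    # E[AxB + c]
rhs = A @ xs.mean(axis=0)[:, None] @ B + c      # A E[x] B + c
print(np.allclose(lhs, rhs))                    # True
```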

# Variance of a random vector

## Covariance matrix

The definition of the variance of a random vector is the same as that of a random variable. Denote by µ the vector E[x]. Then, by definition :

Var(x) = E[(x - µ)(x - µ)']

Var(x) is a symmetric, positive (semi-)definite square matrix that is called the covariance matrix of x, or the covariance matrix of the set of variables {X1, X2, ..., Xn}. It is usually denoted Σ.

We'll show that this expression may also be written :

 Var(x) = E[xx'] - µµ'

which is for random vectors what :

Var(X) = E[X²] - µ²

is for random variables.
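The equivalence of the two forms can be illustrated with numpy; the mean vector and covariance matrix below are arbitrary, and both forms are computed from the same sample, so they coincide up to floating-point error :

```python
import numpy as np

rng = np.random.default_rng(2)
xs = rng.multivariate_normal(mean=[1.0, -1.0],
                             cov=[[2.0, 0.8],
                                  [0.8, 1.0]],
                             size=50_000)
mu = xs.mean(axis=0)

# First form : Var(x) = E[(x - mu)(x - mu)']
form1 = (xs - mu).T @ (xs - mu) / len(xs)

# Second form : Var(x) = E[xx'] - mu mu'
form2 = xs.T @ xs / len(xs) - np.outer(mu, mu)

print(np.allclose(form1, form2))    # True : the two forms are algebraically equal
```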

## Covariance matrix of a linearly transformed random vector

Let :

* x be a random vector,

* A be a constant matrix,

* b be a constant vector.

We'll show that :

 Var(Ax + b) = AVar(x)A'

which is for random vectors what :

Var(aX + b) = a²Var(X)

is for random variables.

-----

This important result is used for calculating the covariance matrix of the linear transform of a random vector.

For example, we'll use it for establishing that Σ⁻¹ plays for the multivariate normal distribution the same role as 1/σ² plays for the univariate normal distribution.
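As a numerical illustration of this result (Σ, A and b below are arbitrary choices), the empirical covariance matrix of the transformed sample matches A Var(x) A' up to sampling noise :

```python
import numpy as np

rng = np.random.default_rng(3)
Sigma = np.array([[2.0, 0.5, 0.0],
                  [0.5, 1.0, 0.3],
                  [0.0, 0.3, 1.5]])
xs = rng.multivariate_normal(mean=np.zeros(3), cov=Sigma, size=500_000)

A = np.array([[1.0, -1.0, 0.0],
              [2.0,  0.0, 1.0]])
b = np.array([5.0, -3.0])

ys = xs @ A.T + b                              # samples of Ax + b

print(np.round(np.cov(ys, rowvar=False), 2))   # empirical Var(Ax + b)
print(np.round(A @ Sigma @ A.T, 2))            # A Var(x) A', same up to noise
```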

## Variance of an inner product

If the matrix A reduces to a single row vector a', and if the vector b reduces to a single component b, then a'x + b is just a random variable, and the "Var" on the left-hand side is the ordinary variance of a r.v., not a covariance matrix :

Var(a'x + b) = a'Var(x)a

If, in addition, b = 0, this expression gives the variance of the inner product a'x of a fixed vector and a random vector.

## Variance of the projection of a vector

In particular, if a is a unit vector, Var(a'x) is the variance of the projection P of the random vector x on the straight line D defined by the fixed unit vector a.

We therefore have :

 Var(Projection of x on the direction defined by a) = a'Σa

where Σ is the covariance matrix of x.

This result is very useful, in particular when studying the multivariate normal distribution and in Discriminant Analysis.
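A short numpy illustration (with an arbitrary Σ and direction a) :

```python
import numpy as np

Sigma = np.array([[2.0, 0.8],
                  [0.8, 1.0]])     # covariance matrix of x (arbitrary example)
a = np.array([1.0, 1.0])
a = a / np.linalg.norm(a)          # unit vector defining the direction D

print(a @ Sigma @ a)               # Var(a'x) = a' Sigma a = 2.3 here
```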

# Mahalanobis transformation, "spherization" of a random vector

The Mahalanobis transformation is the multivariate generalization of the standardization of a random variable.

We show here that any random vector x with a non-degenerate covariance matrix Σ can be transformed, by a linear transformation, into another vector z which is :

* Centered (its mean is 0),

* With identity covariance matrix. The marginal variables of z are then uncorrelated, with unit variance.

One such transformation is z = A(x - µ), where A is any constant matrix such that AΣA' = I (for instance, the inverse of a Cholesky factor of Σ).

This transformation is called the "Mahalanobis transformation", and the transformed vector is said to be "sphericized".

-----

The Mahalanobis transformation is particularly useful when studying the multivariate normal distribution : it transforms this distribution into the spherically symmetric standard multinormal distribution, which leads to simpler calculations than the original distribution. The results may then be "carried over" back to the original distribution by the inverse Mahalanobis transformation.
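Here is a sketch of the Mahalanobis transformation in numpy, using the inverse of a Cholesky factor of the sample covariance matrix as the "square root" (any matrix A with AΣA' = I would do) :

```python
import numpy as np

def sphericize(xs):
    """Mahalanobis transformation of a sample : center it, then undo the
    Cholesky factor L of its covariance matrix (Sigma = L L'), so that the
    result has mean 0 and identity covariance matrix."""
    mu = xs.mean(axis=0)
    L = np.linalg.cholesky(np.cov(xs, rowvar=False))
    # z = L^{-1}(x - mu)  =>  Var(z) = L^{-1} Sigma (L^{-1})' = I
    return np.linalg.solve(L, (xs - mu).T).T

rng = np.random.default_rng(4)
xs = rng.multivariate_normal([3.0, -1.0], [[4.0, 1.5],
                                           [1.5, 1.0]], size=100_000)
zs = sphericize(xs)
print(np.round(zs.mean(axis=0), 3))           # ~ [0, 0]
print(np.round(np.cov(zs, rowvar=False), 3))  # ~ identity matrix
```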

# Covariance of two random vectors

Let :

* x be an m×1 random vector :

x = {X1, X2, ..., Xm}

* y be an n×1 random vector :

y = {Y1, Y2, ..., Yn}

## Definition

The covariance Cov(x, y) of these two random vectors is defined exactly as is the covariance of two random variables :

Cov(x, y) = E[(x - µx)(y - µy)']

where µx and µy are the mean vectors of x and of y.

Expanding this expression shows that Cov(x, y) is a matrix with m rows and n columns whose generic term is :

[Cov(x, y)]ij = Cov(Xi, Yj)

where "Cov" on the right-hand side is the ordinary covariance between two random variables.

-----

The covariance of two random vectors will appear naturally when we partition the covariance matrix of a multivariate normal distribution (see here).

## Properties of the covariance of two random vectors

It is easily verified that :

- Transpose

Cov(y, x) = [Cov(x, y)]'

- Variance of a sum of random vectors

If x and y have the same number of components :

Var(x ± y) = Var(x) + Var(y) ± [Cov(x, y) + Cov(y, x)]

As Cov(y, x) = [Cov(x, y)]', the cross term Cov(x, y) + Cov(y, x) is a symmetric matrix; it reduces to 2Cov(x, y) when Cov(x, y) is itself symmetric.

- Orthogonality

If x and y have the same number of components, they are said to be orthogonal if :

E[xy'] = 0

If, in addition, at least one of the two vectors is centered, then :

Cov(x, y) = E[xy'] - µxµy' = 0

and therefore :

Var(x ± y) = Var(x) + Var(y)
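Both properties are easy to verify numerically. In the sketch below (an arbitrary pair of correlated random vectors of the same dimension), the identities hold exactly because sample cross-covariances obey the same algebra as their population counterparts :

```python
import numpy as np

rng = np.random.default_rng(6)
N = 100_000
xs = rng.normal(size=(N, 2))
ys = 0.5 * xs + rng.normal(size=(N, 2))      # same number of components as x

def cov(u, v):
    """Sample cross-covariance matrix E[(u - mu_u)(v - mu_v)']."""
    return (u - u.mean(axis=0)).T @ (v - v.mean(axis=0)) / len(u)

# Transpose : Cov(y, x) = [Cov(x, y)]'
print(np.allclose(cov(ys, xs), cov(xs, ys).T))        # True

# Variance of a sum : Var(x + y) = Var(x) + Var(y) + Cov(x, y) + Cov(y, x)
lhs = cov(xs + ys, xs + ys)
rhs = cov(xs, xs) + cov(ys, ys) + cov(xs, ys) + cov(ys, xs)
print(np.allclose(lhs, rhs))                          # True
```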

___________________________________________________________

 Tutorial

In this Tutorial :

* We establish the second form of the covariance matrix of a random vector.

* We then calculate the expectation of a twice linearly transformed random vector.

* We'll need this result for establishing the important result stated above about the covariance matrix of a linearly transformed random vector.
