Also spelled "Homoscedasticity".
This term arises in Regression, and more particularly in Simple or Multiple Linear Regression. It refers to the property that the variance σ² of the response (or "dependent") variable y is constant across the range of the predictor(s) x:
Var(y | x) = σ²(x) = σ²
When y can be modeled as a function f(x) with superimposed random noise ε, "homoskedasticity" means that the variance σ² of this noise does not vary from one point of the range of x to the next:
y = f(x) + ε
Var(ε) = σ² = constant
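This constant-variance property can be illustrated by a small simulation. The following sketch (with a hypothetical linear f(x) = 2 + 3x and σ = 1.5, chosen only for illustration) generates homoskedastic data and checks that the empirical noise spread is roughly the same over the left and right halves of the range of x:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative model: y = f(x) + eps with f(x) = 2 + 3x and
# homoskedastic noise (constant sigma across the whole range of x).
x = np.linspace(0.0, 10.0, 500)
sigma = 1.5                                  # constant noise standard deviation
eps = rng.normal(loc=0.0, scale=sigma, size=x.size)
y = 2.0 + 3.0 * x + eps

# Under homoskedasticity, every slice of x shows about the same spread:
spread_low = eps[x < 5].std()                # noise std on the left half
spread_high = eps[x >= 5].std()              # noise std on the right half
print(spread_low, spread_high)               # both should be close to sigma
```

Under heteroskedasticity, by contrast, these two empirical spreads would differ systematically.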
Homoskedasticity is considered a favorable circumstance because it is a necessary condition for the Ordinary Least Squares (OLS) estimator of the vector of parameters of a linear regression function to be the best linear unbiased estimator of this vector (see Gauss-Markov theorem). In addition, homoskedasticity allows a straightforward estimation of the (constant) variance of the noise (or "errors").
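The "straightforward estimation" mentioned above is the classical unbiased estimator σ̂² = RSS / (n − p), where RSS is the residual sum of squares and p the number of estimated parameters. A minimal sketch, assuming hypothetical true parameters (intercept 1.0, slope 0.5, σ = 2):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical homoskedastic data: y = 1.0 + 0.5 x + eps, Var(eps) = sigma^2.
n = 1000
x = rng.uniform(0.0, 10.0, n)
sigma = 2.0
y = 1.0 + 0.5 * x + rng.normal(0.0, sigma, n)

# OLS fit via least squares on the design matrix [1, x].
X = np.column_stack([np.ones(n), x])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# Unbiased estimate of the constant noise variance: RSS / (n - p).
residuals = y - X @ beta_hat
sigma2_hat = residuals @ residuals / (n - X.shape[1])
print(beta_hat, sigma2_hat)      # estimates near the true values used above
```

The division by n − p (rather than n) corrects for the p degrees of freedom consumed by the fitted parameters.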
When the data are not homoskedastic, one says that one is faced with heteroskedasticity. In that case:
* Identifying the best linear unbiased estimators of the parameters of a linear regression function is still possible by resorting to the Weighted Least Squares (WLS) method, provided that the way the error variance varies across the range of the predictors is known.
* But estimating the noise variance can then be done only if additional assumptions are made about how this variance is distributed across the range of the predictors.
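The WLS idea above can be sketched as follows. For illustration we assume a hypothetical known variance structure Var(εᵢ) = σ²·xᵢ², so each observation receives weight wᵢ = 1/xᵢ², and the WLS estimator solves the weighted normal equations (XᵀWX)β = XᵀWy:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical heteroskedastic data: noise standard deviation grows with x,
# i.e. Var(eps_i) = x_i^2 (true intercept 1.0, true slope 0.5).
n = 2000
x = rng.uniform(1.0, 10.0, n)
y = 1.0 + 0.5 * x + rng.normal(0.0, 1.0, n) * x

X = np.column_stack([np.ones(n), x])
w = 1.0 / x**2                     # assumed-known weights: inverse variances

# Weighted normal equations (X' W X) beta = X' W y, with W applied row-wise
# instead of building the full n-by-n diagonal matrix.
Xw = X * w[:, None]
beta_wls = np.linalg.solve(X.T @ Xw, X.T @ (w * y))
print(beta_wls)                    # estimates near the true [1.0, 0.5]
```

Equivalently, WLS is OLS applied to the transformed data (yᵢ/xᵢ, xᵢ/xᵢ), whose errors are again homoskedastic, which is why the Gauss-Markov optimality is recovered.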