A standard normal deviate is a normally distributed deviate: a realization of a standard normal random variable, defined as a random variable with expected value 0 and variance 1.[1] Where collections of such random variables are used, there is often an associated (possibly unstated) assumption that the members of the collection are statistically independent.
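In symbols (a standard fact, stated here only for reference): if $Z$ denotes a standard normal random variable, its probability density function is
$$
\varphi(z) = \frac{1}{\sqrt{2\pi}}\, e^{-z^{2}/2}, \qquad \operatorname{E}[Z] = 0, \qquad \operatorname{Var}(Z) = 1,
$$
and a standard normal deviate is a single observed value $z$ drawn from this distribution.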
Standard normal variables play a major role in theoretical statistics in the description of many types of models, particularly in regression analysis, the analysis of variance and time series analysis.
When the term "deviate" is used, rather than "variable", there is a connotation that the value concerned is treated as the no-longer-random outcome of a standard normal random variable. The terminology here is the same as that for random variable and random variate. Standard normal deviates arise in practical statistics in two ways.
- Given a model for a set of observed data, a set of manipulations of the data can result in a derived quantity which, assuming that the model is a true representation of reality, is a standard normal deviate (perhaps in an approximate sense). This enables a significance test of the validity of the model, as illustrated in the first sketch after this list.
- In the computer generation of a pseudorandom number sequence, the aim may be to generate random numbers having a normal distribution: these can be obtained from standard normal deviates (themselves the output of a pseudorandom number generator) by multiplying by the scale parameter and adding the location parameter. More generally, the generation of pseudorandom number sequences having other marginal distributions may involve manipulating sequences of standard normal deviates: an example here is the chi-squared distribution, random values of which can be obtained by adding the squares of standard normal deviates (although this would seldom be the fastest method of generating such values), as in the second sketch below.
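As an illustration of the first point, here is a minimal Python sketch of a one-sample z-test, assuming a model in which the observations have a known mean and standard deviation; the data values, `mu0`, and `sigma` below are hypothetical and chosen only for illustration.

```python
import math
from statistics import NormalDist

def z_statistic(data, mu0, sigma):
    """One-sample z-statistic: if the model (mean mu0, known sigma) is true,
    this quantity is approximately a standard normal deviate."""
    n = len(data)
    xbar = sum(data) / n
    return (xbar - mu0) / (sigma / math.sqrt(n))

# Hypothetical observations assumed to come from a model with mean 10 and sd 2.
sample = [9.8, 10.4, 10.1, 9.5, 10.9, 10.2, 9.7, 10.3]
z = z_statistic(sample, mu0=10.0, sigma=2.0)

# Two-sided p-value obtained from the standard normal CDF.
p_value = 2 * (1 - NormalDist().cdf(abs(z)))
print(z, p_value)
```

A large absolute value of `z` (equivalently, a small p-value) casts doubt on the assumed model.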
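For the second point, a minimal Python sketch using the standard library's `random.gauss` as the source of standard normal deviates; the function names and the parameter values are illustrative, not part of any particular library's API.

```python
import random

def standard_normal_deviate():
    # random.gauss(0, 1) draws a pseudorandom standard normal deviate.
    return random.gauss(0.0, 1.0)

def normal_deviate(loc, scale):
    # A general normal deviate: scale a standard normal deviate, then shift it
    # by the location parameter.
    return loc + scale * standard_normal_deviate()

def chi_squared_deviate(k):
    # A chi-squared deviate with k degrees of freedom: the sum of squares of
    # k independent standard normal deviates (seldom the fastest method).
    return sum(standard_normal_deviate() ** 2 for _ in range(k))

print(normal_deviate(loc=5.0, scale=2.0))
print(chi_squared_deviate(k=3))
```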