In statistics, Samuelson's inequality, named after the economist Paul Samuelson,[1] also called the Laguerre–Samuelson inequality,[2][3] after the mathematician Edmond Laguerre, states that every one of any collection x1, ..., xn is within √(n − 1) uncorrected sample standard deviations of their sample mean.

Statement of the inequality

If we let

    \bar{x} = \frac{x_1 + x_2 + \cdots + x_n}{n}

be the sample mean and

    s = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (x_i - \bar{x})^2}

be the uncorrected standard deviation of the sample, then

    \bar{x} - s\sqrt{n-1} \;\le\; x_j \;\le\; \bar{x} + s\sqrt{n-1} \qquad \text{for } j = 1, \dots, n.[4]

Equality holds on the left (or right) for x_j if and only if all the n − 1 values x_i other than x_j are equal to each other and greater (respectively, smaller) than x_j.[2]

If one instead uses the corrected sample standard deviation

    s = \sqrt{\frac{1}{n-1} \sum_{i=1}^{n} (x_i - \bar{x})^2},

then the inequality still applies and can be slightly tightened to

    \bar{x} - \frac{n-1}{\sqrt{n}}\, s \;\le\; x_j \;\le\; \bar{x} + \frac{n-1}{\sqrt{n}}\, s.
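A minimal numerical check (an illustrative sketch, not from the cited sources; the sample values below are hypothetical) confirms the bound for a small data set:

    import math

    # Hypothetical sample used only to illustrate Samuelson's inequality.
    x = [2.0, 3.5, 7.1, 7.2, 11.4]
    n = len(x)

    mean = sum(x) / n
    # Uncorrected (divide-by-n) standard deviation, as in the statement above.
    s = math.sqrt(sum((xi - mean) ** 2 for xi in x) / n)

    radius = s * math.sqrt(n - 1)  # Samuelson's bound on |x_j - mean|
    assert all(abs(xi - mean) <= radius + 1e-12 for xi in x)
    print(f"mean = {mean:.3f}, s = {s:.3f}, all points within ±{radius:.3f} of the mean")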

Comparison to Chebyshev's inequality

Chebyshev's inequality locates a certain fraction of the data within certain bounds, while Samuelson's inequality locates all the data points within certain bounds.

The bounds given by Chebyshev's inequality are unaffected by the number of data points, while for Samuelson's inequality the bounds loosen as the sample size increases. Thus for large enough data sets, Chebyshev's inequality is more useful.
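For example, the following illustrative sketch (the cutoff k and the sample sizes are arbitrary assumptions) contrasts the two: Samuelson's radius grows with n, while Chebyshev's guarantee depends only on the cutoff k:

    import math

    # Samuelson: *all* points lie within sqrt(n - 1) standard deviations of the mean.
    # Chebyshev: at most a fraction 1/k^2 of the points lie beyond k standard deviations.
    for n in (10, 100, 10_000):
        samuelson_radius = math.sqrt(n - 1)   # loosens as n grows
        k = 5                                 # arbitrary illustrative cutoff
        chebyshev_fraction = 1 / k ** 2       # independent of n
        print(f"n={n:>6}: Samuelson radius = {samuelson_radius:7.2f} sd; "
              f"Chebyshev: at most {chebyshev_fraction:.0%} of points beyond {k} sd")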

Applications

Samuelson's inequality may be considered a reason why studentization of residuals should be done externally.

Relationship to polynomials

Samuelson was not the first to describe this relationship: the first was probably Laguerre in 1880 while investigating the roots (zeros) of polynomials.[2][5]

Consider a polynomial with all roots real:

    a_0 x^n + a_1 x^{n-1} + \cdots + a_{n-1} x + a_n = 0

Without loss of generality let a_0 = 1 and let

    t_1 = \sum_{i=1}^{n} x_i

and

    t_2 = \sum_{i=1}^{n} x_i^2

Then

    a_1 = -\sum_{i=1}^{n} x_i = -t_1

and

    a_2 = \sum_{i<j} x_i x_j = \frac{t_1^2 - t_2}{2}

In terms of the coefficients,

    t_2 = a_1^2 - 2 a_2

Laguerre showed that the roots of this polynomial are bounded by

    -\frac{a_1}{n} \pm b \sqrt{n-1}

where

    b = \sqrt{\frac{t_2}{n} - \frac{t_1^2}{n^2}} = \frac{1}{n}\sqrt{n t_2 - t_1^2}

Inspection shows that −a_1/n is the mean of the roots and that b is the standard deviation of the roots.

Laguerre failed to notice this relationship with the means and standard deviations of the roots, being more interested in the bounds themselves. This relationship permits a rapid estimate of the bounds of the roots and may be of use in their location.
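As an illustrative sketch (assuming a monic polynomial of degree at least two with all roots real; the example cubic is hypothetical), the bound can be evaluated directly from the first two coefficients:

    import math

    def laguerre_root_bounds(coeffs):
        """Given monic coefficients [1, a1, a2, ..., an] of a polynomial with all
        roots real, return Laguerre's interval -a1/n ± b*sqrt(n - 1)."""
        n = len(coeffs) - 1                    # degree of the polynomial
        a1, a2 = coeffs[1], coeffs[2]
        t1 = -a1                               # sum of the roots
        t2 = a1 ** 2 - 2 * a2                  # sum of the squared roots
        mean = t1 / n                          # mean of the roots (= -a1/n)
        b = math.sqrt(n * t2 - t1 ** 2) / n    # standard deviation of the roots
        return mean - b * math.sqrt(n - 1), mean + b * math.sqrt(n - 1)

    # Hypothetical example: (x - 1)(x - 2)(x - 6) = x^3 - 9x^2 + 20x - 12
    print(laguerre_root_bounds([1, -9, 20, -12]))  # roots 1, 2, 6 lie inside the interval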

When the coefficients a_1 and a_2 are both zero, no information can be obtained about the location of the roots, because not all roots are real (as can be seen from Descartes' rule of signs) unless the constant term is also zero.

References

  1. Samuelson, Paul (1968). "How Deviant Can You Be?". Journal of the American Statistical Association. 63 (324): 1522–1525. doi:10.2307/2285901. JSTOR 2285901.
  2. Jensen, Shane Tyler (1999). The Laguerre–Samuelson Inequality with Extensions and Applications in Statistics and Matrix Theory (PDF) (MSc thesis). Department of Mathematics and Statistics, McGill University.
  3. Jensen, Shane T.; Styan, George P. H. (1999). "Some Comments and a Bibliography on the Laguerre-Samuelson Inequality with Extensions and Applications in Statistics and Matrix Theory". Analytic and Geometric Inequalities and Applications. pp. 151–181. doi:10.1007/978-94-011-4577-0_10. ISBN 978-94-010-5938-1.
  4. Barnett, Neil S.; Dragomir, Sever Silvestru (2008). Advances in Inequalities from Probability Theory and Statistics. Nova Publishers. p. 164. ISBN 978-1-60021-943-6.
  5. Laguerre, E. (1880). "Mémoire pour obtenir par approximation les racines d'une équation algébrique qui a toutes les racines réelles". Nouvelles Annales de Mathématiques. 2e série, 19: 161–172, 193–202.