In statistics, the Šidák correction, or Dunn–Šidák correction, is a method used to counteract the problem of multiple comparisons. It is a simple method to control the family-wise error rate. When all null hypotheses are true, the method provides familywise error control that is exact for tests that are stochastically independent, conservative for tests that are positively dependent, and liberal for tests that are negatively dependent. It is credited to a 1967 paper [1] by the statistician and probabilist Zbyněk Šidák.[2] The Šidák method can be used to determine statistical significance and to compute adjusted p-values and confidence intervals.
Usage
- Given m different null hypotheses and a familywise alpha level of α, each null hypothesis is rejected that has a p-value lower than 1 − (1 − α)^{1/m}.
- This test produces a familywise Type I error rate of exactly α when the tests are independent of each other and all null hypotheses are true. It is less stringent than the Bonferroni correction, but only slightly. For example, for α = 0.05 and m = 10, the Bonferroni-adjusted level is 0.005 and the Šidák-adjusted level is approximately 0.005116 (see the sketch after this list).
- One can also compute confidence intervals matching the test decision using the Šidák correction by using 100(1 − α)^{1/m} % confidence intervals.
- For continuous problems, one can employ Bayesian logic to compute m from the prior-to-posterior volume ratio.[3]
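A minimal Python sketch of these adjustments (illustrative only; the function names and the example p-values are ad hoc, not from the article):

```python
# Sketch of the Šidák adjustment for m independent tests (assumptions: independence, true nulls).
import numpy as np

def sidak_alpha(alpha: float, m: int) -> float:
    """Per-test significance level that gives familywise level alpha under independence."""
    return 1.0 - (1.0 - alpha) ** (1.0 / m)

def sidak_adjusted_pvalues(pvalues):
    """Šidák-adjusted p-values: reject H0_i at familywise level alpha iff adjusted p_i <= alpha."""
    p = np.asarray(pvalues, dtype=float)
    return 1.0 - (1.0 - p) ** len(p)

alpha, m = 0.05, 10
print(sidak_alpha(alpha, m))           # ~0.005116, vs. the Bonferroni level 0.05 / 10 = 0.005
print(100 * (1 - alpha) ** (1 / m))    # ~99.49% per-interval confidence matching a 95% familywise level
print(sidak_adjusted_pvalues([0.001, 0.01, 0.02, 0.2]))  # hypothetical p-values, compared to alpha directly
```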
When the number of hypotheses is large, or when the hypotheses are correlated, corrections such as Bonferroni and Šidák can be quite conservative, which motivates other approaches.
Proof
The Šidák correction is derived by assuming that the individual tests are independent. Let the significance threshold for each test be α₁; then the probability that at least one of the tests is significant under this threshold is 1 minus the probability that none of them are significant. Since the tests are assumed independent, the probability that none of them are significant is the product of the probabilities that each of them is not significant, namely (1 − α₁)^m, so the probability of at least one significant result is 1 − (1 − α₁)^m. Our intention is for this probability to equal α, the significance threshold for the entire series of tests, so we set 1 − (1 − α₁)^m = α. Solving for α₁ gives α₁ = 1 − (1 − α)^{1/m}. This shows that in order to reach a given familywise α level, we need to adapt the α₁ values used for each test.[4]
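As a numerical illustration of this argument (a hypothetical simulation, not part of the source), one can check that the Šidák threshold yields a familywise error rate close to α when the m tests are independent and all nulls are true:

```python
# Monte Carlo check of the derivation: with m independent true nulls and per-test
# threshold 1 - (1 - alpha)^(1/m), the familywise error rate should be ~alpha.
import numpy as np

rng = np.random.default_rng(0)
alpha, m, n_sim = 0.05, 10, 200_000

per_test = 1.0 - (1.0 - alpha) ** (1.0 / m)   # Šidák per-test threshold
p = rng.uniform(size=(n_sim, m))              # p-values are Uniform(0, 1) under true nulls
fwer = np.mean((p < per_test).any(axis=1))    # fraction of simulations with at least one rejection
print(fwer)                                   # close to 0.05
```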
References
- ↑ Šidák, Z. K. (1967). "Rectangular Confidence Regions for the Means of Multivariate Normal Distributions". Journal of the American Statistical Association. 62 (318): 626–633. doi:10.1080/01621459.1967.10482935.
- ↑ Seidler, J.; Vondráček, J. Í.; Saxl, I. (2000). "The life and work of Zbyněk Šidák (1933–1999)". Applications of Mathematics. 45 (5): 321. doi:10.1023/A:1022238410461. hdl:10338.dmlcz/134443.
- ↑ Bayer, Adrian E.; Seljak, Uroš (2020). "The look-elsewhere effect from a unified Bayesian and frequentist perspective". Journal of Cosmology and Astroparticle Physics. 2020 (10): 009–009. arXiv:2007.13821. doi:10.1088/1475-7516/2020/10/009.
- ↑ "Abdi-Bonferonni2007-pretty.dvi" (PDF).