Publication Bias in Psychology: A Diagnosis Based on the Correlation between Effect Size and Sample Size

Abstract

Background: The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent of sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. Methods: We investigate whether effect size is independent of sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes from all empirical papers, calculated the correlation between effect size and sample size, and investigated the distribution of p values. Results: We found a negative correlation of r = −.45 [95% CI: −.53; −.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. Conclusion: The negative correlation between effect size and sample size, and the biased distribution of p values, indicate pervasive publication bias in the entire field of psychology.
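The abstract's key statistic is a Pearson correlation between effect size and sample size, reported with a 95% confidence interval. As a minimal illustrative sketch (not the paper's actual analysis code), the correlation and a Fisher z-transform confidence interval can be computed with the Python standard library alone; the use of the Fisher transform here is an assumption, chosen because it is the conventional way to build a CI around r:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def fisher_ci(r, n, z_crit=1.96):
    """Approximate 95% CI for a correlation via the Fisher z-transform.

    atanh(r) is approximately normal with SE = 1/sqrt(n - 3);
    transform back with tanh to get the interval on the r scale.
    """
    z = math.atanh(r)
    se = 1.0 / math.sqrt(n - 3)
    return math.tanh(z - z_crit * se), math.tanh(z + z_crit * se)

# Hypothetical usage: effect sizes and sample sizes extracted per study
# (these numbers are made up, not the paper's data).
effects = [0.8, 0.6, 0.5, 0.3, 0.2]
samples = [20, 35, 50, 120, 300]
r = pearson_r(effects, samples)
lo, hi = fisher_ci(r, len(effects))
```

A strongly negative r with a CI excluding zero, as the paper reports, is the pattern consistent with publication bias: small studies only reach print when their (necessarily larger) effects cross the significance threshold.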

Link to resource: https://doi.org/10.1371/journal.pone.0105825

Type of resource(s): Primary Source, Reading

Education level(s): College / Upper Division (Undergraduates)

Primary user(s): Student

Subject area(s): Applied Science, Math & Statistics, Social Science

Language(s): English