You Cannot Step Into the Same River Twice: When Power Analyses Are Optimistic

Abstract

Statistical power depends on the size of the effect of interest. However, effect sizes are rarely fixed in psychological research: Study design choices, such as the operationalization of the dependent variable or the treatment manipulation, the social context, the subject pool, or the time of day, typically cause systematic variation in the effect size. Ignoring this between-study variation, as standard power formulae do, results in assessments of power that are too optimistic. Consequently, when researchers attempting replication set sample sizes using these formulae, their studies will be underpowered and will thus fail at a greater than expected rate. We illustrate this with both hypothetical examples and data on several well-studied phenomena in psychology. We provide formulae that account for between-study variation and suggest that researchers set sample sizes with respect to our generally more conservative formulae. Our formulae generalize to settings in which there are multiple effects of interest. We also introduce an easy-to-use website that implements our approach to setting sample sizes. Finally, we conclude with recommendations for quantifying between-study variation.
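The abstract's central point, that ignoring between-study variation in the effect size makes standard power calculations optimistic, can be illustrated with a small sketch. The code below is not the authors' formulae; it is a rough Monte Carlo illustration under assumed planning values (d = 0.5, tau = 0.2, n = 64 per group, all hypothetical) using a normal approximation to the two-sample test.

```python
import numpy as np
from scipy.stats import norm

def power_fixed(d, n, alpha=0.05):
    """Approximate power of a two-sided, two-sample test with n per group,
    assuming a single fixed standardized effect size d (normal approximation)."""
    z_crit = norm.ppf(1 - alpha / 2)
    ncp = d * np.sqrt(n / 2)  # noncentrality parameter
    return norm.sf(z_crit - ncp) + norm.cdf(-z_crit - ncp)

def power_heterogeneous(d, tau, n, alpha=0.05, n_draws=100_000, seed=0):
    """Expected power when the study-level effect varies across realizations
    as delta ~ Normal(d, tau^2); estimated by averaging over Monte Carlo draws."""
    rng = np.random.default_rng(seed)
    deltas = rng.normal(d, tau, n_draws)
    return power_fixed(deltas, n, alpha).mean()

if __name__ == "__main__":
    d, tau, n = 0.5, 0.2, 64  # hypothetical planning values, not from the paper
    print(f"Power assuming a fixed effect:     {power_fixed(d, n):.3f}")
    print(f"Expected power with heterogeneity: {power_heterogeneous(d, tau, n):.3f}")
```

Under these assumed values, the fixed-effect calculation returns roughly the textbook 80% power, while averaging over between-study variation yields a noticeably lower expected power, which is the optimism the paper addresses.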

Link to resource: https://doi.org/10.1177/1745691614548513

Type of resource(s): Primary Source, Reading, Paper

Education level(s): College / Upper Division (Undergraduates)

Primary user(s): Student

Subject area(s): Math & Statistics

Language(s): English