Because scientists tend to report only studies (publication bias) or analyses (p-hacking) that “work,” readers must ask, “Are these effects true, or do they merely reflect selective reporting?” We introduce p-curve as a way to answer this question. …
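The diagnostic idea behind p-curve — a true effect yields a right-skewed distribution of statistically significant p-values, while a null effect yields a flat one — can be illustrated with a minimal simulation. This is a sketch, not the authors' implementation; the effect size (d = 0.5) and sample sizes are illustrative assumptions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def p_curve(d, n=50, n_studies=5000):
    """Distribution of *significant* p-values across simulated two-sample studies."""
    a = rng.normal(0.0, 1.0, (n_studies, n))
    b = rng.normal(d, 1.0, (n_studies, n))
    p = stats.ttest_ind(a, b, axis=1).pvalue
    sig = p[p < 0.05]                      # only "successful" studies get reported
    counts, _ = np.histogram(sig, bins=[0, .01, .02, .03, .04, .05])
    return counts / counts.sum()           # proportion of significant p in each bin

true_effect = p_curve(d=0.5)   # right-skewed: most significant p-values fall below .01
null_effect = p_curve(d=0.0)   # roughly flat: under the null, p is uniform on (0, .05)
```

Comparing the first and last bins of each curve is the essence of the diagnostic: selective reporting cannot manufacture the pile-up of very small p-values that a genuine effect produces.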
p-Hacking, the practice of trying analytic decisions until one yields a statistically significant result, thereby distorting research findings, is widely condemned on epistemic and practical grounds. The prevalent position on this questionable research practice is that p-hacking should be avoided because …
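One common form of p-hacking — measuring several outcomes and reporting only the one with the smallest p-value — inflates the false-positive rate well above the nominal 5%. The five-outcome setup below is a hypothetical illustration, not an example from the source:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def best_of_k_pvalue(n=30, k=5):
    """Null data, but the analyst tests k outcomes and keeps the smallest p."""
    a = rng.normal(0.0, 1.0, (k, n))
    b = rng.normal(0.0, 1.0, (k, n))       # no true effect on any outcome
    return stats.ttest_ind(a, b, axis=1).pvalue.min()

n_sims = 3000
false_positive_rate = np.mean([best_of_k_pvalue() < 0.05 for _ in range(n_sims)])
# With 5 independent outcomes the rate is near 1 - 0.95**5 ≈ 0.23, not 0.05
```

The same inflation arises from other flexible choices (optional stopping, covariate selection, outlier rules); this is why p-hacked literatures cannot be taken at face value.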
Because effect-size estimates in psychology are often inaccurate, designing studies with high statistical power poses a practical challenge. This challenge can be addressed by performing sequential analyses while the data …
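A minimal sketch of one standard sequential design: look at the data twice, testing at each look against a Pocock-corrected alpha of .0294 (which holds the overall Type I error near .05 for two looks) and stopping early if the interim test is significant. The effect size and sample sizes are illustrative assumptions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

POCOCK_ALPHA = 0.0294   # per-look alpha for 2 looks at overall alpha = .05

def sequential_study(d, n_per_look=40):
    """Two-look design: test halfway through data collection, stop if significant."""
    a = rng.normal(0.0, 1.0, n_per_look)
    b = rng.normal(d, 1.0, n_per_look)
    if stats.ttest_ind(a, b).pvalue < POCOCK_ALPHA:
        return True, 2 * n_per_look               # stopped early, half the sample
    a = np.concatenate([a, rng.normal(0.0, 1.0, n_per_look)])
    b = np.concatenate([b, rng.normal(d, 1.0, n_per_look)])
    return stats.ttest_ind(a, b).pvalue < POCOCK_ALPHA, 4 * n_per_look

results = [sequential_study(d=0.5) for _ in range(2000)]
power_estimate = np.mean([hit for hit, _ in results])
average_n = np.mean([n for _, n in results])      # well below the fixed-design maximum of 160
```

The efficiency gain is the point: many studies stop at the interim look, so the expected total sample size is substantially smaller than the fixed-design maximum at comparable power.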
P. E. Meehl gave the first 10 sessions (Winter Quarter, Jan–Mar 1989). In the Spring Quarter, several other department members lectured on various topics. Then PEM gave the last two sessions (5/25/89 and 6/1/89).
In psychology, attempts to replicate published findings are less successful than expected. For properly powered studies, the replication rate should be around 80%, whereas in practice fewer than 40% of the studies selected from different areas of …
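The 80% figure follows directly from statistical power: a direct replication of a true effect succeeds at roughly the rate of the replication study's power. A minimal simulation makes the link concrete (the effect size d = 0.5 and the sample sizes are illustrative assumptions; n = 64 per group gives roughly 80% power for this effect):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def replication_rate(d, n, n_reps=4000):
    """Fraction of direct replications that reach p < .05 (two-sample t-test)."""
    a = rng.normal(0.0, 1.0, (n_reps, n))
    b = rng.normal(d, 1.0, (n_reps, n))
    p = stats.ttest_ind(a, b, axis=1).pvalue
    return np.mean(p < 0.05)

well_powered = replication_rate(d=0.5, n=64)   # near the expected ~80% success rate
underpowered = replication_rate(d=0.5, n=20)   # a typical small sample replicates far less often
```

If observed replication rates fall well below the power of the replication attempts, the shortfall points to inflated original effect sizes or selective reporting rather than to chance alone.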