[What I propose] is not a reform of significance testing as currently practiced in soft-psych. We are making a more heretical point… We are attacking the whole tradition of null-hypothesis refutation as a way of appraising theories… Most psychology uses conventional H0 refutation in appraising the weak theories of soft psychology… [is] living in a fantasy world of “testing” weak theories by feeble methods.
– Paul Meehl (1990)
A medical reversal is when an existing treatment is found to be ineffective or harmful. Psychology, for example, has been racking up reversals: in recent years, scholarship showed that only 40-65% of some sets of classic results replicated, in the weak sense of finding a statistically significant effect in the same direction as the original. Even among those that replicated, the average effect found was half the originally reported effect. We realise that replications in the social sciences are themselves intricate phenomena, with analytical and researcher dependencies, and that such failures to replicate are far less costly to society than medical ones; but they still pollute science’s goal of accumulating knowledge.
Psychology is not alone: medicine, cancer biology, and economics all have their share of irreplicable results. Still, it’d be wrong to write off psychology, or any other discipline for that matter: scientific subfields differ a lot with respect to replication rates and effect-size shrinkage, which renders field-wide generalizations uninformative; and one reason psychology’s reversals are so prominent is the field’s unusual openness in code and data sharing. A less scientific field would never have caught its own bullshit.
Box 1. Reversals in the context of COVID-19.
A counterexample from the COVID-19 pandemic: the UK’s March 2020 policy was based on the ideas of behavioural fatigue and Western resentment of restrictions; that is, that a costly prohibition would last only a few weeks before the population revolted against it, and so had to be delayed until the epidemic’s peak. Now, this policy was so politically toxic that we know it had to be based on some domain reasoning, and it is in a way heartening that the government tried to go beyond socially naive epidemiology. But it was strongly criticised by hundreds of other behavioural scientists, who noted that the evidence for these ideas was too weak to base policy on. Here’s a catalogue of bad psychological takes.
The following are empirical findings about empirical findings; they’re all open to re-reversal. Note also that it’s not that “we know these claims are false”: failed replications (or proofs of fraud) usually just undermine the evidence for a hypothesis, rather than affirm the opposite hypothesis. We’ve tried to ban ourselves from saying “successful” or “failed” replication, and to report the best-guess effect size rather than play the bad old Yes/No science game. Code for converting means to Cohen’s d and Hedges’ g here.
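As a minimal sketch of that conversion, assuming two independent groups with known means, standard deviations, and sizes (the function names are ours, not taken from the linked code):

```python
import math

def cohens_d(m1, s1, n1, m2, s2, n2):
    """Standardized mean difference between two groups, using the pooled SD."""
    pooled_sd = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

def hedges_g(m1, s1, n1, m2, s2, n2):
    """Cohen's d shrunk by the approximate small-sample correction J = 1 - 3/(4N - 9)."""
    d = cohens_d(m1, s1, n1, m2, s2, n2)
    return d * (1 - 3 / (4 * (n1 + n2) - 9))

# Treatment: mean 10, SD 2, n 20. Control: mean 8, SD 2, n 20.
print(cohens_d(10, 2, 20, 8, 2, 20))  # 1.0
print(hedges_g(10, 2, 20, 8, 2, 20))  # ~0.980
```

Hedges’ g matters for the small samples typical of this literature, where Cohen’s d is biased upward.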
Andrew Gelman and others suggest deflating all single-study effect sizes you encounter in the social sciences, without waiting for the subsequent shrinkage from publication bias, measurement error, data-analytic degrees of freedom, and so on. There is no uniform factor, but it seems sensible to divide novel effect sizes by a number between 2 and 100 (depending on its sample size, method, measurement noise, maybe its p-value if it’s really tiny).
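One crude way to operationalize that heuristic (the function and the example factor below are illustrative assumptions, not a published rule):

```python
def deflate(effect_size, factor):
    """Discount a novel single-study effect size by a judgment-call factor
    between 2 and 100: larger for small samples, noisy measures, or p just under .05."""
    assert 2 <= factor <= 100, "deflation factor should be in [2, 100]"
    return effect_size / factor

print(deflate(0.8, 4))  # a flashy d = 0.8 becomes a modest 0.2
```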
Claims are included if at least one of the following holds: several failed replications, several good meta-analyses with notably smaller d, very strong publication bias, clear fatal errors in the analysis, a formal retraction, or clear fraud. Cases like growth mindset are also included, where the eventual effect size, though positive, was a tiny fraction of the hyped original claim. To best interpret our list below, please compare each entry to the original paper’s effect size. We do not provide an average of high-quality supporting papers: thousands of potentially non-replicable papers are published every year, and filtering, reading, and listing them all would be a full-time job, so we mostly draw on claims already covered by systematic replication or reanalysis projects, the ripe fruit. The rule is that if a spurious effect is discussed, if our community or contributors see it in a book, or if it could hurt someone, it’s noteworthy.
One systematic problem with older results is that they were not pre-registered: we have no assurance that the published analysis was the only one tried, and so no assurance that the inferences presented are in fact valid.
Replication studies have very high rates of pre-registration, and higher rates of code and data sharing. For “direct” replications, the original target study has in effect pre-registered its hypotheses, methods, and analysis plan.
But don’t trust any of them, in the sense of accepting them uncritically. Look for 3+ failed replications from different labs before concluding much, as the garden of forking paths and the mystery of the lefty p-curve unfold.
The purpose of collating these reversals in social science is to encourage educators to incorporate replications of these effects into their students’ projects (e.g., third-year projects, theses, coursework), giving students the opportunity to experience the research process directly, to assess their ability to perform and report scientific research, and to help evaluate the robustness of the original study, thereby also helping them become good consumers of research. The below crowdsourced and community-curated resource aims to satisfy three of FORRT’s Goals:
and four of FORRT’s Mission:
Anyone can add reversals or replications by joining our initiative and then following the instructions in our crowdsource g-doc.
Elderly priming, that hearing about old age makes people walk slower. The p-curve alone argues against the first 20 years of studies.
No good evidence for Money priming, that “images or phrases related to money cause increased faith in capitalism, and the belief that victims deserve their fate”.
Questionable evidence for Commitment priming (recall), participants exposed to a high-commitment prime would exhibit greater forgiveness.
Hostility priming (unscrambled sentences), exposing participants to more hostility-related stimuli caused them subsequently to interpret ambiguous behaviors as more hostile.
Intelligence priming (contemplation), participants primed with a category associated with intelligence (e.g. “professor”) performed 13% better on a trivia test than participants primed with a category associated with a lack of intelligence (“soccer hooligans”).
Moral priming (contemplation), participants exposed to a moral-reminder prime would demonstrate reduced cheating.
Death priming (Mortality Salience/Terror Management Theory), participants not exposed to mortality primes would show higher fear of death.
Verbal framing (temporal tense), participants who read what a person “was doing” showed enhanced accessibility of intention-related concepts and attributed more intentionality to the person, relative to participants who read what the person “did”.
Gustatory Disgust on Moral Judgment, gustatory disgust triggers a heightened sense of moral wrongness.
No good evidence for the Macbeth effect, that moral aspersions induce literal physical hygiene.
A failed replication, with opposite results, of the effect of social class on prosocial behaviour: in the replication, people with high social class were more likely to be prosocial than those with low social class.
No good evidence of anything from the Stanford prison ‘experiment’. It was not an experiment; ‘demand characteristics’ and scripting of the abuse; constant experimenter intervention; faked reactions from participants; as Zimbardo concedes, they began with a complete “absence of specific hypotheses”.
No good evidence from the famous Milgram experiment that 65% of people will inflict pain if ordered to. The experiment was riddled with researcher degrees of freedom, going off-script, implausible agreement between very different treatments, and “only half of the people who undertook the experiment fully believed it was real and of those, 66% disobeyed the experimenter.”
No good evidence that tribalism arises spontaneously following arbitrary groupings and scarcity, within weeks, and leads to inter-group violence. The “spontaneous” conflict among children at Robbers Cave was orchestrated by experimenters; the sample was tiny (maybe 70?); an exploratory study was taken as inferential; there was no control group (really, there were three experimental groups: the experimenters had full power to set expectations and endorse deviance); and their two other studies, which found negative results, were not reported.
Screen time and wellbeing. Heavy screen time is not strongly associated with low wellbeing; it explains about as much of teen sadness as eating potatoes does, 0.35%.
No good evidence that female-named hurricanes are more deadly than male-named ones. The original effect size was a 176% increase in deaths, driven entirely by four outliers; reanalysis using a greatly expanded historical dataset found a nonsignificant decrease in deaths for female-named storms.
At most weak evidence for the use of implicit bias testing for racism. Implicit bias scores poorly predict actual bias, r = 0.15. The operationalisations used to measure that predictive power are often unrelated to actual discrimination (e.g. ambiguous brain activations). Test-retest reliability is 0.44 for race, which is usually classed as “unacceptable”. This isn’t news; the original study also found very low test-criterion correlations.
The Pygmalion effect, that a teacher’s expectations about a student affects their performance, is at most small, temporary, and inconsistent, r<0.1 with a reset after weeks. Rosenthal’s original claims about massive IQ gains, persisting for years, are straightforwardly false (“The largest gain… 24.8 IQ points in excess of the gain shown by the controls.”), and used an invalid test battery. Jussim: “90%–95% of the time, students are unaffected by teacher expectations”.
Questionable evidence for an increase in “narcissism” (leadership, vanity, entitlement) in young people over the last thirty years. The basic counterargument is that these studies misidentify an age effect as a cohort effect (the narcissism construct apparently decreases by about a standard deviation between adolescence and retirement): “every generation is Generation Me”.
All such “generational” analyses are at best needlessly noisy approximations of social change, since generations are not discrete natural kinds, and since people at the supposed boundaries are indistinguishable.
Be very suspicious of anything by Diederik Stapel. 58 retractions here.
No good evidence that taking a “power pose” lowers cortisol, raises testosterone, or increases risk tolerance.
Weak evidence for facial-feedback (that smiling causes good mood and pouting bad mood).
Reason to be cautious about mindfulness for mental health. Most studies are low quality and use inconsistent designs, there’s higher heterogeneity than other mental health treatments, and there’s strong reason to suspect reporting bias. None of the 36 meta-analyses before 2016 mentioned publication bias. The hammer may fall.
No good evidence for Blue Monday, that the third week in January is the peak of depression or low affect ‘as measured by a simple mathematical formula developed on behalf of Sky Travel’. You’d need a huge sample size, in the thousands, to detect the effect reliably and this has never been done.
Good and robust evidence against ego depletion, that willpower is limited in a muscle-like fashion.
Mixed evidence for the Dunning-Kruger effect. No evidence for the “Mount Stupid” misinterpretation.
Questionable evidence for a tiny “depressive realism” effect, of increased predictive accuracy or decreased cognitive bias among the clinically depressed.
Questionable evidence for the “hungry judge” effect, of massively reduced acquittals (d=2) just before lunch. Case order isn’t independent of acquittal probability (“unrepresented prisoners usually go last and are less likely to be granted parole”); favourable cases may take predictably longer and so are pushed until after recess; the effect size is implausible on priors; and the proposed explanation invokes ego depletion, itself questionable.
No good evidence for multiple intelligences (in the sense of statistically independent components of cognition). Gardner, the inventor: “Nor, indeed, have I carried out experiments designed to test the theory… I readily admit that the theory is no longer current. Several fields of knowledge have advanced significantly since the early 1980s.”
At most weak evidence for brain training (that is, “far transfer” from daily training games to fluid intelligence) in general, in particular from the Dual n-Back game.
Original paper: ‘Improving fluid intelligence with training on working memory’, Jaeggi 2008, n=70. (2200 citations).
Original effect size: d=0.4 over control, 1-2 days after training
Replication effect size: Melby: d=0.19 [0.03, 0.37] nonverbal; d=0.13 [-0.09, 0.34] verbal. Gwern: d=0.1397 [-0.0292, 0.3085], among studies using active controls.
Maybe some effect on non-Gf skills of the elderly.
A 2020 RCT on 572 first-graders finds an effect (d=0.2 to 0.4), but many of the apparent far-transfer effects come only 6-12 months later, i.e. well past the end of most prior studies.
In general, be highly suspicious of anything that claims a positive permanent effect on adult IQ. Even in children the absolute maximum is 4-15 points for a powerful single intervention (iodine supplementation during pregnancy in deficient populations).
See also the hydrocephaly claim under “Neuroscience”.
Good replication rate elsewhere.
Failed replications of automatic imitation claims.
Weak or no evidence for cross-domain congruency sequence effect.
Some evidence for a tiny effect of growth mindset (thinking that skill is improvable) on attainment. Really we should distinguish the correlation of the mindset with attainment vs. the effect of a 1-hour class about the importance of growth-mindset on attainment. I cover the latter but check out Sisk for evidence against both.
“Expertise attained after 10,000 hours of practice” (Gladwell). Disowned by the supposed proponents.
Links Between Personality Traits and Consequential Life Outcomes. Pretty good? One lab’s systematic replications found that effect sizes shrank by 20% though (see comments below by Oliver C. Schultheiss).
Anything by Hans Eysenck should be considered suspect, but in particular these 26 ‘unsafe’ papers (including the one which says that reading prevents cancer).
The effect of “nudges” (clever design of defaults) may be exaggerated in general. One big review found average effects were six times smaller than billed. (Not saying there are no big effects.) Here are a few cautionary pieces on whether, aside from the pure question of reproducibility, behavioural science is ready to steer policy.
Moving the signature box to the top of forms does not decrease dishonest reporting in the rest of the form.
One comment mentioned we need to consider frequently studied phenomena such as differential reinforcement, extinction bursts, functional communication training, derived relational responding, schedules of R+.
Brian Wansink accidentally admitted gross malpractice; fatal errors were found in 50 of his lab’s papers. These include flashy results about increased portion size massively reducing satiety.
No good evidence that brains contain one mind per hemisphere. The corpus callosotomy studies which purported to show “two consciousnesses” inhabiting the same brain were badly overinterpreted.
Very weak evidence for the existence of high-functioning (IQ ~ 100) hydrocephalic people. The hypothesis begins from extreme prior improbability; the effect of massive volume loss is claimed to be on average positive for cognition; and the case studies are often questionable and involve little detailed study of the brains (e.g. 1970s-era scanners were not capable of the precision claimed).
Readiness potentials seem to be actually causal, not diagnostic. So Libet’s studies also do not show what they purport to. We still don’t have free will (since random circuit noise can tip us when the evidence is weak), but in a different way.
At most extremely weak evidence that psychiatric hospitals (of the 1970s) could not detect sane patients in the absence of deception.
No good evidence for precognition, undergraduates improving memory test performance by studying after the test. This one is fun because Bem’s statistical methods were “impeccable” in the sense that they were what everyone else was using. He is Patient Zero in the replication crisis, and has done us all a great service. (Heavily reliant on a flat / frequentist prior; evidence of optional stopping; forking paths analysis.)
Questionable evidence for the menstrual cycle version of the dual-mating-strategy hypothesis (that “heterosexual women show stronger preferences for uncommitted sexual relationships [with more masculine men] during the high-fertility ovulatory phase of the menstrual cycle, while preferring long-term relationships at other points”). Studies are usually tiny (median n=34, mostly over one cycle). Funnel plot looks ok though.
No good evidence that large parents have more sons (Kanazawa); original analysis makes several errors and reanalysis shows near-zero effect. (Original effect size: 8% more likely.)
At most weak evidence that men’s strength in particular predicts opposition to egalitarianism.
At most very weak evidence that sympathetic nervous system activity predicts political ideology in a simple fashion. In particular, subjects’ skin conductance reaction to threatening or disgusting visual prompts - a noisy and questionable measure.
Be very suspicious of any such “candidate gene” finding (post-hoc data mining showing large >1% contributions from a single allele). 0/18 replications in candidate genes for depression. 73% of candidates failed to replicate in psychiatry in general. One big journal won’t publish them anymore without several accompanying replications. A huge GWAS, n=1 million: “We find no evidence of enrichment for genes previously hypothesized to relate to risk tolerance.”
Critical period hypothesis: Hartshorne, Tenenbaum and Pinker’s 2018 study of two-thirds of a million English speakers concluded that there is a single sharply defined critical age, 17.4 years, for all language learners. A reanalysis of the data showed that this conclusion rests on artefactual results (van der Slik et al., 2021): there was no evidence for any critical age for language learning.
Findings regarding mindsets (aka implicit theories) have been mixed, with increasing failures of replication that put the value of the theory and the derived interventions in question (Brez et al., 2020). According to the meta-analysis by Sisk and colleagues (2018), the relationship between mindsets and academic achievement is weak: of the 129 studies they analyzed, only 37% found a positive relationship between mindset and academic outcomes, 58% found no relationship, and 6% found a negative relationship. Evidence on the efficacy of mindset interventions is not promising either: of the 29 studies reviewed, only 12% found a positive effect of the intervention, 86% found no effect, and 2% found a negative effect. It should be noted, though, that interventions seemed to work for low-SES populations.
A review of 2500 social science papers, showing the lack of correlation between citations and replicability, between journal status and replicability, and the apparent lack of improvement since 2009.
See also the popular literature with uncritical treatments of the original studies: