
8 Replication and meta-research

7 sub-clusters · 81 references

This section aims to build a grounding in 'replication research', which takes a variety of forms, each with a different purpose and contribution. Replicable science requires replication research. When teaching, students should come to understand the purpose of and need for replication in its various forms and be able to conduct (and join) replication projects. Seven sub-clusters further parse the learning and teaching process:

Conducting replication studies: challenges, limitations, and comparisons with the original study · 21 references

A replication study seeks to repeat findings of previous research using identical or similar methods to determine if consistent results can be obtained. Limits can arise from protocol drift, differences in context or measurement, and low power.
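A back-of-the-envelope power calculation makes the last point concrete. The sketch below (all numbers hypothetical, and the original estimate is optimistically treated as the true effect) approximates the power of an exact replication:

```python
# Sketch: approximate power of a two-sided, two-sample t-test replication,
# assuming (optimistically) that the original estimate d is the true effect.
# All numbers are hypothetical illustrations, not from any particular study.
from scipy import stats

def replication_power(d: float, n_per_group: int, alpha: float = 0.05) -> float:
    """Power of a two-sided, two-sample t-test when the true effect is d."""
    df = 2 * n_per_group - 2
    ncp = d * (n_per_group / 2) ** 0.5        # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    # P(|t| > t_crit) under the noncentral t distribution
    return (1 - stats.nct.cdf(t_crit, df, ncp)) + stats.nct.cdf(-t_crit, df, ncp)

print(replication_power(0.4, 30))    # ~0.33: same-sized replication often "fails"
print(replication_power(0.4, 120))   # ~0.87: adequate power needs far more data
```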

evidence Paper
Estimating the reproducibility of psychological science
This foundational meta-research provides empirical evidence regarding the reproducibility of psychological science by attempting to replicate 100 experimental and correlational studies. The findings demonstrate a significant decline in effect sizes and statistical significance rates compared to the original publications.
overview Paper
Toward a more credible assessment of the credibility of science by many-analyst studies
This paper introduces the "many-analyst" research design as a method for assessing the robustness of scientific findings. It argues that observing the variation in results when different researchers analyze the same dataset provides a more rigorous measure of uncertainty and credibility than individual replication attempts.
evidence Paper
The (Non)Academic Community Forming around Replications: Mapping the International Open Science space via its Replication Initiatives
This study maps the international landscape of replication initiatives to illustrate how the movement has evolved into a transdisciplinary community. It provides evidence of the diverse stakeholders involved, including non-academic actors and commercial publishers, showing how replication concerns have expanded beyond specific scientific fields.
practice/tools Paper
How can we make sound replication decisions?
This perspective piece introduces a conceptual framework to guide researchers and institutions in making strategic decisions about which findings should be prioritized for replication. It provides actionable criteria for weighing scientific values against practical constraints to ensure that limited research resources are allocated effectively.
Devezer, B., & Buzbas, E. O. (2025). Minimum viable experiment to replicate. PhilSci Archive. https://philsci-archive.pitt.edu/24738/
Errington, T. M. (2024). Building reproducible bridges to cross the “valley of death.” Journal of Clinical Investigation, 134(1). https://doi.org/10.1172/JCI177383
teaching/training Paper
Teaching Replication
This resource advocates for a pedagogical model where students in laboratory courses perform direct replications of published findings as a core part of their training. It outlines how this approach addresses the 'replication crisis' by generating needed data while providing students with authentic, high-stakes experience in scientific methodology.
critique Letter
Comment on “Estimating the reproducibility of psychological science”
This paper provides a statistical critique of the landmark 2015 Open Science Collaboration study, arguing that its conclusions regarding the low reproducibility of psychology are based on flawed analysis. It offers a re-evaluation of the original data to suggest that the reproducibility of psychological science may actually be quite high.
Grahe, J. E., Brandt, M. J., Wagge, J. R., Legate, N., Wiggins, B. J., Christopherson, C. D., Weisberg, Y., Corker, K. S., Chartier, C. R., Fallon, M., Hildebrandt, L., Hurst, M. A., Lazarevic, L., Levitan, C., McFall, J., McLaughlin, H., Pazda, A., IJzerman, H., Nosek, B. A., … France, H. (2018). Collaborative Replications and Education Project (CREP). https://osf.io/wfc6u/
teaching/training Paper
Harnessing the Undiscovered Resource of Student Research Projects
This resource introduces the "question-list paradigm" as a framework for leveraging the high volume of undergraduate research projects to advance the science of psychology. It explains how these student projects can be coordinated to test the replicability of established findings across diverse geographic locations and institutional populations.
practice/tools Paper
Valid replications require valid methods: Recommendations for best methodological practices with lab experiments.
This resource provides actionable methodological recommendations for conducting lab experiments to ensure they serve as a solid foundation for valid replications. It highlights specific practices in experimental design and implementation that are essential for producing reliable and reproducible findings.
Horbach, S. P. J. M., Cole, N. L., Kopeinik, S., Leitner, B., Ross-Hellauer, T., & Tijdink, J. (2025). How to get there from here? Barriers and enablers on the road towards reproducibility in research [Manuscript]. OSF. https://osf.io/n28sg/
practice/tools Preprint
Pre-replication: anything goes, once
This commentary introduces the concept of "pre-replication," a structured preparatory exercise to be performed before conducting a replication study. It provides a framework for analyzing the epistemic and analytic goals of the replication to ensure the work is conceptually sound and its outcomes are interpretable.
King, G. (1995). Replication, replication. PS: Political Science & Politics, 28(3), 444-452. https://gking.harvard.edu/files/replication.pdf
overview Paper
Growth From Uncertainty: Understanding the Replication ‘Crisis’ in Infant Cognition
This article applies the philosophical concept of "epistemic iteration" to the field of infant cognition to reframe the replication crisis as a productive part of scientific growth. It argues that failed replications should be viewed as opportunities for theoretical refinement rather than evidence of untrustworthy research.
Lenne & Mann (2016). CREP project report. https://osf.io/sdj7e/
critique Preprint
Adversarial reanalysis and the challenge of open data in regulatory science
This paper examines the risks associated with open data mandates in the specific context of environmental regulatory science, where transparency requirements can be used as 'Trojan Horses' to undermine scientific evidence. It distinguishes between replication and reanalysis to highlight how adversarial reanalysis can be weaponized to exclude critical studies from the policy-making process.
critique Editorial
Editorial Essay: The Tumult over Transparency: Decoupling Transparency from Replication in Establishing Trustworthy Qualitative Research
This editorial warns against the uncritical transfer of transparency and replication standards from psychology to qualitative management research. It argues for decoupling transparency from replication, suggesting that while transparency is necessary for trust, replication is often a poor fit for qualitative research goals.
overview Paper
TIER2: enhancing Trust, Integrity and Efficiency in Research through next-level Reproducibility
This resource describes TIER2, a European Commission-funded project designed to investigate and improve research reproducibility across the social, life, and computer sciences. It outlines a systematic approach to developing tools and guidelines for diverse stakeholders to foster a more robust and trustworthy scientific ecosystem.
evidence Paper
Expectations for Replications
This resource uses computer simulations to demonstrate how random measurement error alone makes high replication success rates statistically unlikely, even under ideal conditions. It argues that many perceived failures in psychological science stem from mathematically unreasonable expectations regarding the consistency of experimental results. A toy version of this simulation logic is sketched at the end of this sub-cluster.
teaching/training Paper
Publishing Research With Undergraduate Students via Replication Work: The Collaborative Replications and Education Project
This resource describes the Collaborative Replications and Education Project (CREP), which provides a structured framework for incorporating high-quality replication research into undergraduate education. It outlines how the project benefits students by providing publication opportunities and practical training in open science practices.
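To close this sub-cluster, here is a toy version of the simulation logic behind "Expectations for Replications" above (our illustrative assumptions, not the paper's exact model): draw noisy estimates of one true effect, keep only the "significant" originals, and check how an identical follow-up study fares:

```python
# Toy simulation: sampling error plus a significance filter. Even with a real
# effect and identical studies, "successful replication" rates stay modest and
# published effect sizes are inflated. (Illustrative assumptions, not the
# paper's exact model.)
import numpy as np

rng = np.random.default_rng(0)
true_d, n = 0.3, 50                      # hypothetical effect, per-group n
se = np.sqrt(2 / n)                      # approximate standard error of d

originals = rng.normal(true_d, se, 100_000)
published = originals[originals / se > 1.96]      # only "positives" survive

replications = rng.normal(true_d, se, published.size)
rate = (replications / se > 1.96).mean()

print(f"Originals reaching significance: {published.size / 100_000:.0%}")
print(f"Mean published estimate: {published.mean():.2f} (true d = {true_d})")
print(f"Significant originals that also replicate: {rate:.0%}")
```

Under these assumptions roughly a third of originals reach significance, the published estimates average d ≈ 0.52 against a true 0.3, and identical replications succeed at about the same one-third rate.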
Direct vs. conceptual replications · 7 references

Direct replications use the exact same methods and materials, while conceptual replications test the same concept but with different methods, materials, or both. There is an ongoing debate as to how “direct” a replication can possibly be.

critique Paper
Kinds of Replication: Examining the Meanings of “Conceptual Replication” and “Direct Replication”
This article critiques the theoretical foundations of the replication crisis by examining how "direct" and "conceptual" replications are defined and understood. It proposes that the discipline should move beyond a discovery-oriented philosophy of science to consider the way research practices actively shape psychological phenomena.
overview Paper
Reconceptualizing replication as a sequence of different studies: A replication typology
This resource synthesizes existing literature on replication to propose a comprehensive typology that views replication as a sequential process rather than a binary outcome. It provides a structured framework for researchers to transition from simple direct replications to studies that systematically test alternative explanations and real-world applicability.
critique Paper
Approaching psychology’s current crises by exploring the vagueness of psychological concepts: Recommendations for advancing the discipline.
This resource argues that the replication, theory, and universality crises in psychology are fundamentally linked to the vagueness of psychological concepts. It suggests that advancing the discipline requires a focus on theoretical and philosophical refinement rather than just methodological or statistical changes.
evidence Paper
Internal conceptual replications do not increase independent replication success
This meta-research study evaluates whether the inclusion of internal conceptual replications in an original paper increases the likelihood of successful independent direct replications. The findings suggest that internal replications do not predict future reproducibility, calling into question their role as indicators of robust psychological effects.
advocacy Paper
The Value of Direct Replication
This commentary advocates for direct replication as the primary mechanism for verifying the reliability of scientific effects across laboratories. It critiques the assumptions of those who devalue replication, arguing that direct repetition is the only rigorous way to confirm the validity of a research finding.
evidence Paper
Contextual sensitivity in scientific reproducibility
This study presents empirical evidence on the role of "contextual sensitivity" in the reproducibility of psychological research by analyzing findings from the Reproducibility Project: Psychology. It demonstrates that findings judged to depend heavily on specific social settings or temporal contexts are significantly less likely to replicate than more generalizable findings.
Zwaan, R. A., Etz, A., Lucas, R. E., & Donnellan, M. B. (2017). Making replication mainstream. Behavioral and Brain Sciences, 41. https://doi.org/10.1017/S0140525X17001972
Meta-analyses · 15 references

Meta-analysis pools estimates to show the bigger picture. Careful work starts with a prespecified plan, aligns effect sizes, and checks bias and sensitivity. It reports heterogeneity and uses prediction intervals to show what a future study might find. Data and code are shared so the synthesis can be audited and updated.
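As a concrete illustration of the last two points, the sketch below (toy data; DerSimonian-Laird is one common tau-squared estimator among several) contrasts the confidence interval for the mean effect with the prediction interval for a future study:

```python
# Minimal random-effects meta-analysis with a 95% prediction interval.
# Toy effect sizes and sampling variances; DerSimonian-Laird tau^2.
import numpy as np
from scipy import stats

yi = np.array([0.10, 0.30, 0.35, 0.60, 0.20])   # hypothetical study effects
vi = np.array([0.04, 0.02, 0.05, 0.01, 0.03])   # their sampling variances

w = 1 / vi                                       # fixed-effect weights
q = np.sum(w * (yi - np.sum(w * yi) / w.sum()) ** 2)   # Cochran's Q
c = w.sum() - np.sum(w**2) / w.sum()
tau2 = max(0.0, (q - (len(yi) - 1)) / c)         # between-study variance

w_re = 1 / (vi + tau2)                           # random-effects weights
mu = np.sum(w_re * yi) / w_re.sum()
se_mu = np.sqrt(1 / w_re.sum())

# CI: where the MEAN effect lies. PI: where a FUTURE study may land
# (t with k-2 df, following Higgins, Thompson & Spiegelhalter, 2009).
z = stats.norm.ppf(0.975)
t = stats.t.ppf(0.975, len(yi) - 2)
pi_half = t * np.sqrt(tau2 + se_mu**2)
print(f"mu = {mu:.2f}, 95% CI [{mu - z*se_mu:.2f}, {mu + z*se_mu:.2f}]")
print(f"95% prediction interval [{mu - pi_half:.2f}, {mu + pi_half:.2f}]")
```

With these toy numbers the mean effect looks reliably positive while the prediction interval still admits null or negative results in a future study, which is exactly the distinction the blurb above draws.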

evidence Paper
Meta-analyses in psychology often overestimate evidence for and size of effects
This study provides empirical meta-research demonstrating that meta-analyses in psychology frequently overestimate effect sizes and evidence strength due to inadequate publication bias adjustments. It evaluates the effectiveness of various adjustment methods and illustrates the systematic distortion present in current psychological literature.
practice/tools Paper
Robust Bayesian meta‐analysis: Model‐averaging across complementary publication bias adjustment methods
This paper introduces Robust Bayesian Meta-Analysis (RoBMA), a methodological framework that uses model-averaging to account for publication bias. It provides a practical solution for researchers to synthesize data more reliably without having to choose a single adjustment method that may not fit their specific data conditions.
evidence Paper
Footprint of publication selection bias on meta‐analyses in medicine, environmental sciences, psychology, and economics
This large-scale meta-epidemiological study assesses the prevalence of publication selection bias across medicine, environmental sciences, psychology, and economics by analyzing over 700,000 effect size estimates. It provides a comparative analysis that reveals how bias varies by discipline, identifying economics and environmental sciences as having particularly high levels of contamination.
evidence Paper
Correcting for Bias in Psychology: A Comparison of Meta-Analytic Methods
This resource uses a comprehensive simulation study to evaluate how different meta-analytic methods designed to correct for bias perform on data patterns specifically common to psychology. It identifies which statistical techniques are most effective at recovering true effect sizes when research is affected by questionable research practices or publication bias.
evidence Paper
Meta-assessment of bias in science
This meta-assessment offers a global view of bias prevalence by analyzing a large random sample of meta-analyses drawn from all scientific disciplines. It identifies specific risk factors for effect size overestimation—such as small sample size and early publication—while demonstrating that although certain bias patterns are consistent, their magnitude varies significantly across fields.
evidence Paper
“Positive” Results Increase Down the Hierarchy of the Sciences
This study empirically tests the "Hierarchy of the Sciences" hypothesis by analyzing the proportion of positive results reported in over 2,400 papers across all disciplines. It demonstrates that the frequency of results supporting a tested hypothesis increases as one moves from the physical to the social sciences, suggesting that the "hardness" of a field influences its susceptibility to non-cognitive biases.
Hong, S., & Reed, W. R. (2020). Using Monte Carlo experiments to select meta‐analytic estimators. Research Synthesis Methods, 12(2), 192–215. https://doi.org/10.1002/jrsm.1467
evidence Paper
The Power of Bias in Economics Research
This research evaluates the credibility of empirical economics by quantifying statistical power and publication bias across 159 different research literatures. It reveals that the majority of economics studies are severely underpowered and that reported effects are frequently exaggerated, providing a stark assessment of the structural challenges facing the reliability of economic findings.
practice/tools Paper
Advice for improving the reproducibility of data extraction in meta‐analysis
This resource provides practical steps and R-based software recommendations to improve the transparency and reproducibility of data extraction in meta-analyses. It specifically addresses the lack of guidance for making this crucial phase of evidence synthesis shareable and verifiable by other researchers.
evidence Paper
Comparing meta-analyses and preregistered multiple-laboratory replication projects
This study provides empirical evidence comparing the effect sizes found in traditional meta-analyses versus those from large-scale, preregistered multi-laboratory replication projects. It highlights a significant discrepancy where traditional meta-analyses tend to report much larger effect sizes, suggesting they may be more susceptible to systematic biases.
critique Paper
The case of the misleading funnel plot
This resource critiques the reliance on funnel plots for detecting publication bias, explaining how factors like study heterogeneity and chance can produce misleading asymmetry. It warns researchers that visual evidence in these plots is often ambiguous and should not be used as a definitive diagnostic tool for bias. A toy demonstration of this point appears at the end of this sub-cluster.
evidence Paper
Assessing treatment effects and publication bias across different specialties in medicine: a meta-epidemiological study
This study examines the impact of treatment effect inflation and publication bias across various medical specialties. It demonstrates that large effects reported in small, low-powered studies contribute significantly to evidence distortion and argues for shifting institutional incentives toward research quality rather than the extremity of results.
evidence Paper
What meta-analyses reveal about the replicability of psychological research.
This study provides a large-scale empirical assessment of replicability in psychology by analyzing over 12,000 effect sizes from 200 meta-analyses. It quantifies the prevalence of low statistical power and evaluates how bias and heterogeneity contribute to the field's replication challenges.
Topor, M., Pickering, J. S., Barbosa Mendes, A., Bishop, D. V. M., Büttner, F., Elsherif, M. M., Evans, T. R., Henderson, E. L., Kalandadze, T., Nitschke, F. T., Staaks, J. P. C., Van den Akker, O. R., Yeung, S. K., Zaneva, M., Lam, A., Madan, C. R., Moreau, D., O’Mahony, A., Parker, A. J., … Westwood, S. J. (2023). An integrative framework for planning and conducting Non-Intervention, Reproducible, and Open Systematic Reviews (NIRO-SR). Meta-Psychology, 7. https://doi.org/10.15626/MP.2021.2840
evidence Paper
Publication bias examined in meta-analyses from psychology and medicine: A meta-meta-analysis
This meta-meta-analysis empirically investigates the extent of publication bias by comparing hundreds of systematic reviews across the fields of psychology and medicine. The research highlights how this bias leads to overestimated effects, providing a comparative perspective on how different disciplines struggle with the credibility of published findings.
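Before leaving this sub-cluster, the caveat flagged in "The case of the misleading funnel plot" can be demonstrated in a few lines (assumptions ours): when true effects are larger in the kinds of small studies actually run, an Egger-style test flags asymmetry even though no study was suppressed:

```python
# Toy demonstration: funnel-plot "asymmetry" from heterogeneity correlated
# with study size, with no publication bias and no suppressed studies.
# (Our illustrative assumptions.)
import numpy as np

rng = np.random.default_rng(1)
n = rng.integers(20, 400, size=200)      # per-group sample sizes
se = np.sqrt(2 / n)                      # approx. SE of a standardized effect
true = 0.2 + 0.5 * se                    # small studies pursue larger effects
est = rng.normal(true, se)               # every study is "published"

# Egger-style regression of standardized effect on precision: a nonzero
# intercept flags asymmetry, yet nothing here is missing or suppressed.
slope, intercept = np.polyfit(1 / se, est / se, 1)
print(f"Egger intercept: {intercept:.2f} (0 would indicate a symmetric funnel)")
```

Nothing was hidden in this simulated literature, yet the funnel tilts; asymmetry alone cannot separate bias from heterogeneity.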
Meta-research · 23 references

Meta-research studies how research is done. It maps power, bias, reporting quality, and the uptake of open practices. It tests which interventions improve credibility and efficiency. The aim is practical guidance that helps fields do better work and waste less effort. Findings are shared openly so policies, training, and incentives can respond.

critique Letter
Claims about scientific rigour require rigour
This resource critiques a high-profile study on rigour-enhancing practices, arguing that the study itself failed to adhere to the preregistration and transparency standards it advocated. It serves as a methodological cautionary tale, emphasizing that meta-research about scientific rigour must be held to the same rigorous standards it promotes.
critique Preprint
Revisiting the replication crisis without false positives
This paper challenges the common assumption that the replication crisis is primarily a result of false positives caused by questionable research practices. By proposing alternative meta-scientific models, the authors demonstrate that low replicability can be explained by factors other than false positives, calling for a more nuanced understanding of the crisis.
evidence Paper
The (Non)Academic Community Forming around Replications: Mapping the International Open Science space via its Replication Initiatives
This study maps the international landscape of replication initiatives to illustrate how the movement has evolved into a transdisciplinary community. It provides evidence of the diverse stakeholders involved, including non-academic actors and commercial publishers, showing how replication concerns have expanded beyond specific scientific fields.
evidence Paper
Comparing dream to reality: an assessment of adherence of the first generation of preregistered studies
This empirical study assesses the adherence of psychology researchers to their preregistration plans and the transparency of their reporting regarding deviations. It identifies a significant gap between initial research designs and final publications, emphasizing the importance of disclosing changes made during the data collection and analysis process.
practice/tools Paper
How can we make sound replication decisions?
This perspective piece introduces a conceptual framework to guide researchers and institutions in making strategic decisions about which findings should be prioritized for replication. It provides actionable criteria for weighing scientific values against practical constraints to ensure that limited research resources are allocated effectively.
Devezer, B., & Buzbas, E. O. (2025). Minimum viable experiment to replicate. PhilSci Archive. https://philsci-archive.pitt.edu/24738/
overview Paper
Open science interventions to improve reproducibility and replicability of research: a scoping review
This scoping review provides a comprehensive synthesis of existing literature regarding the effectiveness of open science interventions designed to improve research reproducibility. It systematically categorizes these practices and identifies specific gaps where empirical evidence of their actual impact is still needed.
critique Paper
Can a Good Theory Be Built Using Bad Ingredients?
This paper examines how the replication crisis impacts theory development, arguing that the consequences of replication failures depend on whether a theory aims to explain, predict, or unify. It provides a nuanced theoretical analysis of why replicability is more foundational for explanatory theories than for those focused primarily on predictive outcomes.
overview Paper
Promoting Virtue or Punishing Fraud: Mapping Contrasts in the Language of ‘Scientific Integrity’
This paper maps the diverse and often conflicting meanings of "research integrity" across different stakeholders, from narrow definitions focused on misconduct to broader ethical frameworks. It highlights the subtle linguistic and conceptual differences in how integrity is understood by researchers, policymakers, and the public.
evidence Paper
The ghosts of HeLa: How cell line misidentification contaminates the scientific literature
This resource provides an empirical quantification of the scale of scientific literature contaminated by the use of misidentified cell lines, identifying tens of thousands of affected papers. It highlights the persistence of 'ghost' data in the research record and the systemic failure of scholarly publishing to correct known errors over time.
overview Review Article
The changing forms and expectations of peer review
This resource traces the historical evolution of peer review, examining how it transitioned from a mechanism for quality assessment to a modern gatekeeper of scientific integrity. It contextualizes the current debate over scientific self-regulation by highlighting how the expectation for peer review to detect fraud is a relatively recent development.
evidence Paper
The ability of different peer review procedures to flag problematic publications
This study provides empirical evidence on the effectiveness of various peer review models by correlating specific procedures with retraction rates in the Retraction Watch database. It offers a data-driven comparison of how different review innovations perform in their primary task of flagging problematic or fraudulent research.
evidence Paper
The extent and causes of academic text recycling or ‘self-plagiarism’
This study measures how much text recycling, or 'self-plagiarism', occurs in the academic literature and examines the factors that drive it. It situates the practice between legitimate reuse and misconduct, exploring how publication pressure and editorial policies shape its prevalence.
overview Paper
Journal Peer Review and Editorial Evaluation: Cautious Innovator or Sleepy Giant?
This paper examines how much journal peer review and editorial evaluation are actually changing, asking whether scholarly publishing behaves as a cautious innovator or a sleepy giant. It surveys recent innovations in review and editorial procedures and considers the pressures that encourage or inhibit their adoption.
evidence Journal Article
No time for that now! Qualitative changes in manuscript peer review during the Covid-19 pandemic
This study evaluates the impact of the COVID-19 pandemic on the quality of scholarly peer review by analyzing changes in review reports and editorial decision letters. It specifically investigates whether the rapid acceleration of the publication process during the pandemic led to a decrease in the depth and rigor of critical evaluation.
critique Paper
Why Most Published Research Findings Are False
This landmark theoretical paper uses a mathematical framework to argue that the majority of published research findings are likely false due to factors like low statistical power, small effect sizes, and researcher bias. It serves as a fundamental critique of the prevailing incentive structures and methodological standards in modern science.
critique Paper
With Low Power Comes Low Credibility? Toward a Principled Critique of Results From Underpowered Tests
This paper develops a principled approach to critiquing published results on the grounds of low statistical power. It examines when low power genuinely undermines the credibility of a finding and how such critiques should be constructed and argued.
evidence Paper
Evaluating the Pedagogical Effectiveness of Study Preregistration in the Undergraduate Dissertation
This study uses a quasi-experimental design to empirically evaluate the effectiveness of preregistration as a pedagogical tool for undergraduate psychology students. It demonstrates how early exposure to open science practices can influence student learning outcomes and help mitigate questionable research practices during initial research training.
overview Paper
What is critical metascience and why is it important?
This article defines and establishes the scope of 'critical metascience,' a reflexive field that questions the underlying assumptions and potential biases within the metascience movement itself. It provides a conceptual framework for how critical inquiry can complement empirical meta-research to ensure scientific reforms are robust and self-correcting.
evidence Paper
Registered report: Survey on attitudes and experiences regarding preregistration in psychological research
This study provides empirical evidence regarding the attitudes and practical experiences of psychology researchers concerning preregistration. It identifies specific systemic and individual obstacles that hinder the adoption of the practice, offering insights into how to better foster open science workflows.
Syed, M. (2023). Some Data Indicating that Editors and Reviewers Do Not Check Preregistrations during the Review Process. https://osf.io/nh7qw/
critique Preprint
The social replication of replication: Moving replication through epistemic communities
This resource analyzes the "replication drive" and how the practice of replication is being moved across various epistemic communities. It warns against the indiscriminate promotion of replication as a universal standard for quality, highlighting the social and institutional pressures that shape this culture change.
evidence Paper
Preregistration in practice: A comparison of preregistered and non-preregistered studies in psychology
This research evaluates the practical impact of preregistration by comparing matched sets of preregistered and non-preregistered studies in psychology. It contributes critical data by showing that preregistration did not necessarily lead to fewer positive results or smaller effect sizes in this sample, challenging theoretical expectations about its immediate effects on the literature.
Purposes of replication attempts: what is a ‘failed’ replication? · 7 references

Replication serves diverse aims, and a “failure” is not a verdict on truth but evidence about effect size, robustness, and conditions. This cluster encourages nuanced interpretation (e.g., meta-analytic and design-aware) over binary success/failure narratives and highlights responsible communication of discrepant findings.
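As a concrete example of the meta-analytic alternative to pass/fail verdicts, the sketch below (toy numbers, not from any actual replication) pools an original and a replication estimate and tests whether they actually differ:

```python
# Toy sketch: interpret a "failed" replication continuously rather than as
# a verdict: pool the two estimates and test whether they actually differ.
import numpy as np
from scipy import stats

d_orig, se_orig = 0.60, 0.22   # hypothetical original study
d_rep, se_rep = 0.15, 0.10     # hypothetical large replication

# Fixed-effect pooled estimate across the two studies.
w = np.array([1 / se_orig**2, 1 / se_rep**2])
d = np.array([d_orig, d_rep])
pooled = np.sum(w * d) / w.sum()
pooled_se = np.sqrt(1 / w.sum())

# Do the two estimates differ by more than chance?
z_diff = (d_orig - d_rep) / np.sqrt(se_orig**2 + se_rep**2)
p_diff = 2 * stats.norm.sf(abs(z_diff))

print(f"Pooled d = {pooled:.2f} +/- {1.96 * pooled_se:.2f}")
print(f"Original vs replication: z = {z_diff:.2f}, p = {p_diff:.3f}")
```

With these toy numbers the replication estimate is a quarter of the original, yet the difference test alone is not conclusive (p ≈ .06), illustrating why a binary "failed" label can oversimplify.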

Fidler, F., & Wilcox, J. (2021). Reproducibility of scientific results. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy. https://plato.stanford.edu/archives/sum2021/entries/scientific-reproducibility
teaching/training Paper
Teaching Replication
This resource advocates for a pedagogical model where students in laboratory courses perform direct replications of published findings as a core part of their training. It outlines how this approach addresses the 'replication crisis' by generating needed data while providing students with authentic, high-stakes experience in scientific methodology.
overview Paper
Replication and the Manufacture of Scientific Inferences: A Formal Approach
This resource introduces a formal theoretical framework for replication using "replication causal diagrams" (r-dags) and Bayesian inference to categorize replication types and evaluate their evidentiary value. It moves beyond simple typologies by providing a mathematical basis for updating beliefs about natural phenomena based on specific study procedures and outcomes.
overview Paper
What is replication?
This paper proposes a conceptual shift in the definition of replication, moving away from procedural similarity toward a focus on the diagnostic evidence provided for a prior claim. It argues that a study's status as a replication is determined by whether its possible outcomes are informative regarding the validity of the original finding.
critique Preprint
A Problem in Theory and More: Measuring the Moderating Role of Culture in Many Labs 2
This article critiques the methodology of the Many Labs 2 project, specifically challenging its conclusions regarding the role of cultural variability in replication success. It identifies significant theoretical flaws in the project's design, such as the selection of effects that lacked theoretical reasons to vary by culture and the use of sample sites with insufficient cultural contrast.
evidence Paper
Contextual sensitivity in scientific reproducibility
This study presents empirical evidence on the role of "contextual sensitivity" in the reproducibility of psychological research by analyzing findings from the Reproducibility Project: Psychology. It demonstrates that findings judged to depend heavily on specific social settings or temporal contexts are significantly less likely to replicate than more generalizable findings.
Zwaan, R. A., Etz, A., Lucas, R. E., & Donnellan, M. B. (2017). Making replication mainstream. Behavioral and Brain Sciences, 41. https://doi.org/10.1017/S0140525X17001972
Registered Replication Reports · 5 references

Registered Reports are studies that are peer-reviewed prior to data collection, with an agreement between the journal and the author(s) that the study will be published regardless of outcome, as long as the preregistered methods are reasonably followed. Registered Replication Reports are a special category of these that consists solely of replications.

evidence Paper
Registered Replication Report: Schooler and Engstler-Schooler (1990)
This resource provides empirical data from a large-scale multi-lab replication effort to estimate the true effect size of the verbal overshadowing phenomenon. It addresses discrepancies between original findings and subsequent research by using a pre-registered, standardized protocol across multiple sites to ensure a high-powered and unbiased assessment.
Association for Psychological Science. (n.d.). Ongoing replication projects. https://www.psychologicalscience.org/publications/replication/ongoing-projects
overview Paper
The past, present and future of Registered Reports
This article provides a comprehensive overview of the Registered Reports publication model, tracing its historical development and evaluating its effectiveness in mitigating publication bias. It serves as both a theoretical reflection on the format's impact and a practical guide for researchers, editors, and reviewers navigating the pre-acceptance process.
evidence Letter
Registered Replication Report: Hart & Albarracín (2011)
This resource reports the results of a multi-laboratory replication attempt focused on the psychological effect of grammatical aspect on perceptions of intentionality. It contributes empirical evidence to determine the robustness and generalizability of previously reported findings in the field of psycholinguistics and social cognition.
overview Paper
An Introduction to Registered Replication Reports at Perspectives on Psychological Science
This article introduces the Registered Replication Report (RRR) as a new article format within the journal Perspectives on Psychological Science, explaining its purpose in addressing the replication crisis. It details the editorial philosophy and procedural framework intended to encourage high-quality, multi-lab replications of foundationally important psychological findings.
The politics of replicating famous studies · 3 references

Responses to replication research are sometimes negative. Failed replications of famous work, most notably power posing, ego depletion, stereotype threat, and facial feedback, have received a great deal of attention.

evidence Paper
Behavioral Priming: It's All in the Mind, but Whose Mind?
This research presents two experiments that failed to replicate a seminal social priming study while specifically investigating the role of experimenter expectancy effects. It contributes to the meta-research discussion by suggesting that unconscious experimenter cues, rather than direct priming of participants, may account for previously observed behavioral effects.
Neuliep, J. W., & Crandall, R. (1990). Editorial bias against replication research. Journal of Social Behavior & Personality, 5(4), 85-90.
Neuliep, J. W., & Crandall, R. (1993). Reviewer bias against replication research. Journal of Social Behavior & Personality, 8(6), 21-29.