1 Replication Crisis and Credibility Revolution

7 sub-clusters · 146 references

Attainment of foundational knowledge on the importance of reproducible and open research (i.e., grounding the motivations and theoretical underpinnings of Open and Reproducible Science), integrated with field-specific content (i.e., grounded in the history of replicability). Seven sub-clusters further parse the learning and teaching process:

History of the replication crisis & credibility revolution · 26 references

To understand and weigh in on current developments, we first need to understand how the Open and Reproducible Science movement started, from its origins through the replicability/reproducibility crisis to the credibility revolution.

critique Paper
Retiring Popper: Critical realism, falsificationism, and the crisis of replication
This article offers a philosophical critique of the replication crisis debate in psychology, arguing that the field's focus on Popperian falsificationism lacks necessary ontological depth. It suggests that methodological reforms alone are insufficient without addressing the underlying assumptions about the nature of psychological phenomena.
evidence Paper
Quantitative Political Science Research Is Greatly Underpowered
This large-scale meta-research study provides empirical evidence that quantitative political science research is severely underpowered, with a median power of only 10% across thousands of tests. The findings demonstrate that only a small fraction of tests in the discipline meet the standard 80% power threshold required to detect consensus effects.
evidence Paper
1,500 scientists lift the lid on reproducibility
This resource presents the results of a comprehensive survey of 1,500 scientists to quantify their experiences with and attitudes toward the reproducibility crisis. It provides empirical evidence on how researchers across various fields perceive the severity of the issue and the specific factors they believe contribute most to failed replication attempts.
overview Paper
Is there a reproducibility crisis in science?
This resource explores the fundamental question of whether science is experiencing a reproducibility crisis by summarizing the key arguments and evidence. It provides an introductory survey of the topic suitable for those looking to understand the scope and implications of the replication debate across different scientific fields.
critique Paper
Understanding the Replication Crisis as a Base Rate Fallacy
This paper presents a theoretical critique of the standard narrative that the replication crisis is primarily caused by poor scientific conduct or questionable research practices. It uses the logic of the base rate fallacy to argue that high failure rates in replications are a predictable mathematical outcome in fields that investigate a large proportion of unlikely hypotheses.
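To make the base-rate argument concrete, here is a minimal sketch in Python; the base rate, power, and alpha values below are illustrative assumptions, not figures taken from the paper.

    # Illustrative sketch of the base-rate argument; all numbers are assumed.
    base_rate = 0.10   # assumed share of tested hypotheses that are actually true
    power = 0.80       # assumed probability that a true effect reaches significance
    alpha = 0.05       # conventional false-positive rate for a single test

    true_positives = base_rate * power          # true effects declared significant
    false_positives = (1 - base_rate) * alpha   # null effects declared significant

    ppv = true_positives / (true_positives + false_positives)
    print(f"Share of significant findings that reflect true effects: {ppv:.2f}")
    # With these assumptions, roughly one in three significant findings is a false
    # positive, so many replication failures are expected from arithmetic alone.
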
advocacy Book
The Seven Deadly Sins of Psychology
This work identifies and analyzes systemic flaws in psychological science, such as publication bias and lack of transparency, that contribute to the replication crisis. It makes a strong case for institutional reform and the adoption of open science practices, such as Registered Reports, to improve the reliability of the field.
evidence Paper
Estimating the Reproducibility of Experimental Philosophy
This study reports the results of a large-scale collaboration to replicate 40 foundational findings in experimental philosophy to empirically assess the field's reproducibility. It provides specific data on replication success rates and effect sizes, offering a benchmark for the reliability of research within this philosophical sub-discipline.
overview Paper
Seven Easy Steps to Open Science
This article serves as an accessible introductory guide to open science, specifically tailored for students and researchers in psychological science. It uses a curated and annotated reading list of seven foundational papers to explain core concepts such as pre-registration, open access, and data sharing.
overview Paper
Academic Research in the 21st Century: Maintaining Scientific Integrity in a Climate of Perverse Incentives and Hypercompetition
This handbook offers a comprehensive survey of the field of academic integrity, addressing both theoretical research and practical applications in global educational settings. It serves as a foundational reference that bridges various perspectives on plagiarism, ethical conduct, and the systemic factors influencing scholarly honesty.
critique Paper
What is the Replication Crisis a Crisis Of?
This paper provides a theoretical analysis of the replication crisis in psychology, arguing that the debate must move beyond methodological fixes to address fundamental questions about how the subject matter is defined. It highlights the tension between seeking stable effects and acknowledging the inherent context sensitivity of psychological phenomena, suggesting that theoretical inadequacy is a central component of the crisis.
overview Preprint
Reproducibility Failure in Biomedical Research: Problems and Solutions
This review identifies specific failures in the research lifecycle—from reporting to peer review—that lead to irreproducibility in biomedicine. It evaluates the effectiveness of proposed reforms and emphasizes the need for evidence-based interventions to ensure scientific changes are both rigorous and beneficial.
advocacy Paper
The replication crisis has led to positive structural, procedural, and community changes
This resource reframes the replication crisis as a positive "credibility revolution" by cataloging the constructive structural and community-driven improvements it inspired. It provides a forward-looking roadmap for researchers to build upon these advancements to create a more robust and transparent scientific ecosystem.
overview Preprint
Concerns about Replicability, Theorizing, Applicability, Generalizability, and Methodology across Two Crises in Social Psychology
This paper provides a historical comparison between the two major crises of confidence in social psychology, demonstrating that many current methodological and theoretical concerns were already voiced in the 1960s and 70s. By analyzing recurring themes through direct quotes, it highlights the persistent structural challenges the field faces in achieving replicable and generalizable findings.
Leonelli, S. (2023). Philosophy of open science. Cambridge University Press. http://philsci-archive.pitt.edu/id/eprint/21986
overview Paper
The Matthew Effect in Science
This foundational paper introduces the concept of the Matthew effect to describe how disproportionate credit is awarded to eminent scientists compared to less-known researchers. It provides a sociological analysis of how the reward system in science reinforces existing advantages and influences the visibility of scientific contributions.
overview Paper
The Matthew Effect in Science, II: Cumulative Advantage and the Symbolism of Intellectual Property
This sequel to the original Matthew effect paper expands the analysis to include the concept of cumulative advantage and the symbolic value of intellectual property. It examines how institutionalized rewards and systemic structures create self-reinforcing cycles of inequality in scientific recognition and resource allocation.
advocacy Paper
A manifesto for reproducible science
This influential paper argues for a systemic overhaul of the scientific process to prioritize research reliability and efficiency over the mere volume of discovery. It proposes a comprehensive set of reforms across study design, reporting standards, and institutional incentive structures to address the structural causes of the reproducibility crisis.
overview Paper
Psychology's Renaissance
This resource provides a historical and methodological review of the transformative period in experimental psychology known as the 'Renaissance,' detailing how concerns shifted from traditional publication bias to the prevalence of p-hacking. It synthesizes the specific methodological reforms and reporting standards that emerged from this period of self-reflection to improve the field's reliability.
overview Paper
Replicability, Robustness, and Reproducibility in Psychological Science
This article clarifies the conceptual distinctions and relationships between replicability, robustness, and reproducibility within the context of psychological research. It frames the replication crisis as a productive opportunity for innovation and theory development, emphasizing how these practices help identify the boundaries of scientific knowledge.
overview Paper
How scientists fool themselves – and how they can stop
This resource explores the psychological factors that lead researchers to see patterns in noise and provides an overview of how cognitive biases affect data analysis. It introduces several preventative practices, including the use of blind analysis and the adoption of more formal pre-analysis planning to mitigate the risk of false discoveries.
Devezer, B., & Buzbas, E. O. (2025). Minimum viable experiment to replicate. PhilSci Archive. https://philsci-archive.pitt.edu/24738/
practice/tools Paper
A community-sourced glossary of open scholarship terms
This comprehensive, community-sourced glossary provides clear definitions for a wide range of terms used in the open scholarship movement. It serves as a practical reference tool designed to lower the barrier to entry for newcomers and facilitate consistent communication across diverse academic stakeholders.
overview Preprint
Renovating the Theatre of Persuasion. ManyLabs as Collaborative Prototypes for the Production of Credible Knowledge
This paper analyzes the organizational shift toward large-scale "ManyX" collaborative consortia, examining how they implement strict formalization, bureaucratization, and procedural hygiene. It explores how these collaborative prototypes redistribute scientific agency and attempt to institutionalize the production of credible knowledge through standardization.
overview Paper
Metascience as a Scientific Social Movement
This paper frames the rise of metascience as a scientific social movement aimed at reforming the institutional structures and normative practices of research. It provides a sociological perspective on how reformers organize to challenge established scientific standards and advocate for systemic cultural change across academia.
critique Paper
Positive Deviance Underlies Successful Science: Normative Methodologies Risk Throwing out the Baby With the Bathwater
This resource critiques the current movement toward "normative methodologies" in psychology, arguing that an over-emphasis on preventing research failure may inadvertently suppress the creative deviance necessary for breakthrough discoveries. It highlights the risk that rigid procedural rules like preregistration and openness badges prioritize data hygiene over the development of successful scientific theories.
overview Paper
Implications of the Credibility Revolution for Productivity, Creativity, and Progress
This article explores how the credibility revolution in psychology affects broader scientific goals like productivity, creativity, and progress. It evaluates the shift toward higher evidence standards and preregistration, framing these changes as empirical questions that must be studied to understand their long-term impact on the health of the discipline.
Scientific Misconduct: Fabrication and Falsification · 10 references

In order to understand and weigh in on how the Reproducibility Crisis started, we first need to understand scientific misconduct, especially data fabrication and falsification. These practices erode trust in science and distort the research record. Fabrication involves inventing data, participants, or outcomes; falsification involves altering materials, methods, measurements, images, or reporting so that findings are misrepresented. Because intent to mislead is central, these acts are distinct from questionable research practices and from honest mistakes. Recognizing the role of misconduct is therefore essential for understanding how unreliable or non-replicable studies entered the literature and contributed to the broader crisis.

advocacy Paper
How Many Scientists Fabricate and Falsify Research? A Systematic Review and Meta-Analysis of Survey Data
This paper articulates the fundamental necessity of research integrity for maintaining trust within the scientific community and society at large. It argues for the systematic adoption of open science practices as the primary mechanism for ensuring transparency, accountability, and the ability to verify research findings.
evidence Paper
Misconduct accounts for the majority of retracted scientific publications
This study provides empirical evidence that misconduct, rather than honest error, is the primary driver of retractions in the biomedical and life sciences. It reveals that previous estimates significantly underreported fraud due to uninformative retraction notices and highlights a historical increase in the rate of articles retracted for fraud.
evidence Paper
Explanations of Research Misconduct, and How They Hang Together
This meta-analysis provides the first standardized empirical estimate of the prevalence of fabrication and falsification among scientists based on survey data. It offers a critical quantitative baseline for understanding the frequency of research misconduct and highlights the methodological challenges in measuring self-reported ethical breaches.
advocacy Paper
Signaling the trustworthiness of science
This resource argues for the adoption of explicit signals of trustworthiness by scientists and journals to better communicate adherence to scientific norms to both peers and the public. It proposes specific article-level signals, such as transparent reporting and evidence of replication, to reinforce the credibility of the scientific enterprise.
critique Letter
Reply to Kornfeld and Titus: No distraction from misconduct
This scholarly reply defends the necessity of focusing on scientific misconduct and its signals against criticisms that such focus might be a distraction. It emphasizes the importance of maintaining rigorous standards and transparent signaling to prevent the erosion of scientific integrity.
policies Paper
Promoting Research Integrity in Africa: An African Voice of Concern on Research Misconduct and the Way Forward
This document establishes the official national code of conduct for research integrity within the Dutch academic system, outlining the principles and standards expected of all researchers. It serves as a regulatory framework for defining misconduct and promoting ethical research practices across all disciplines in the Netherlands.
advocacy Paper
Stop ignoring misconduct
This resource advocates for a shift from passive observation to active intervention regarding scientific misconduct. It emphasizes the responsibility of the scientific community to implement robust mechanisms for identifying and penalizing fabrication and falsification.
advocacy Letter
Signaling the trustworthiness of science should not be a substitute for direct action against research misconduct
This article argues that symbolic indicators of research quality, such as transparency badges, are insufficient for addressing research misconduct. It advocates for structural changes and direct investigative actions as more effective tools for ensuring scientific integrity.
evidence Paper
Scientific Misconduct and the Myth of Self-Correction in Science
This seminal study presents empirical evidence on the prevalence of questionable research practices among thousands of early- and mid-career scientists. It demonstrates that behaviors compromising research integrity are far more common than blatant fraud, shifting the focus from individual "bad apples" to systemic pressures within the scientific environment.
overview Journal Article
Publication Pressure and Scientific Misconduct in Medical Scientists
This scoping review maps and categorizes existing guidance documents and practices used by research organizations and funders to promote research integrity. It identifies common themes and gaps in current integrity promotion strategies, providing a comprehensive catalog of how research integrity is institutionalized across various organizations.
Questionable research practices & their prevalence · 13 references

Questionable research practices are actions researchers take, consciously or unconsciously, that increase the probability of obtaining their desired result. This distinguishes them from deliberate scientific misconduct, but they still compromise research integrity because they can lead to misleading conclusions. Examples of such behaviors include p-hacking, selective reporting, and HARKing (Hypothesizing After the Results are Known).

overview Paper
p-Hacking: Its Costs and When It Is Warranted
This paper provides a precise conceptual definition of p-hacking and uses decision theory to evaluate its epistemic and practical consequences. It uniquely explores the theoretical conditions under which certain analytic flexibilities might be warranted while maintaining a focus on the risks of false positives.
Gelman, A., & Loken, E. (2013). The garden of forking paths: Why multiple comparisons can be a problem, even when there is no “fishing expedition” or “p-hacking” and the research hypothesis was posited ahead of time. Unpublished manuscript. http://www.stat.columbia.edu/~gelman/research/unpublished/p_hacking.pdf
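The mechanism these two sources describe can be illustrated with a small simulation: a hypothetical researcher measures several interchangeable outcomes and reports whichever yields the smallest p-value. The number of outcomes, sample size, and seed below are illustrative assumptions.

    # Minimal simulation of one forking path: test several outcomes under a true
    # null and report the best p-value. All parameters are illustrative.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2024)
    n_simulations = 5_000
    n_per_group = 30
    n_outcomes = 5  # hypothetical number of interchangeable outcome measures

    false_positives = 0
    for _ in range(n_simulations):
        p_values = []
        for _ in range(n_outcomes):
            a = rng.normal(size=n_per_group)  # no true effect in either group
            b = rng.normal(size=n_per_group)
            p_values.append(stats.ttest_ind(a, b).pvalue)
        if min(p_values) < 0.05:  # report whichever outcome "worked"
            false_positives += 1

    print(f"False-positive rate with {n_outcomes} outcomes: "
          f"{false_positives / n_simulations:.2f}")
    # The nominal 5% error rate climbs to roughly 1 - 0.95**5 ≈ 0.23, even though
    # no single analysis step looks obviously wrong.
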
evidence Preprint
Meta-Research: How problematic citing practices distort science
This meta-research study uses case studies to demonstrate how distorted or problematic citation practices persist even within the field of research integrity itself. It illustrates the specific ways in which these habits can misrepresent epistemic foundations and undermine the reliability of scientific communication across disciplines.
evidence Paper
The extent and causes of academic text recycling or ‘self-plagiarism’
This study presents empirical findings from a survey of academic researchers to demonstrate how perceptions of departmental research climate influence the prevalence of misconduct. The results suggest that the local organizational environment and prevailing norms are significant predictors of research misbehavior, highlighting the need for culture-focused institutional interventions.
overview Paper
Measuring the Prevalence of Questionable Research Practices With Incentives for Truth Telling
This resource examines the phenomenon of academic text recycling (self-plagiarism), analyzing its prevalence and the regulatory uncertainties revealed by high-profile misconduct cases. It contributes to the field by clarifying the conditions for fair reuse of one's own work and identifying the remaining gray areas in academic policy.
evidence Paper
With Low Power Comes Low Credibility? Toward a Principled Critique of Results From Underpowered Tests
Employing a survey design with truth-telling incentives, this paper provides empirical data on the widespread prevalence of questionable research practices among psychologists. It reveals that researchers are significantly more likely to admit to behaviors they perceive as defensible, providing insight into the normalization of problematic methodologies within the discipline.
critique Paper
A brief review of research that questions the impact of questionable research practices
This review synthesizes research that challenges the prevailing consensus on the prevalence and negative impact of questionable research practices (QRPs) such as p-hacking and HARKing. It suggests that these practices may not be the primary drivers of the replication crisis and are not inherently problematic in all scientific contexts.
critique Paper
The Costs of HARKing
This resource provides a philosophical and critical evaluation of the twelve proposed costs of 'Hypothesizing After the Results are Known' (HARKing). It argues that many of these costs are either conceptually flawed or lack empirical evidence, suggesting that the negative impact of HARKing may be overestimated.
overview Paper
What is critical metascience and why is it important?
This article defines and establishes the scope of 'critical metascience,' a reflexive field that questions the underlying assumptions and potential biases within the metascience movement itself. It provides a conceptual framework for how critical inquiry can complement empirical meta-research to ensure scientific reforms are robust and self-correcting.
evidence Paper
False-Positive Psychology
This study provides empirical evidence of publication bias by comparing result outcomes in Registered Reports against a random sample of standard psychological studies. It quantifies the gap in reported positive findings between these formats, demonstrating how result-blind peer review significantly mitigates the selective reporting of statistically significant results.
critique Paper
The natural selection of bad science
This paper employs an evolutionary model to argue that scientific incentive structures, which prioritize publication volume, lead to the "natural selection" of poor research methods. It demonstrates that low-quality practices will inevitably proliferate in a system that rewards productivity over methodological rigor, regardless of individual scientists' intentions.
practice/tools Paper
Degrees of Freedom in Planning, Running, Analyzing, and Reporting Psychological Studies: A Checklist to Avoid p-Hacking
This resource provides an extensive checklist of 34 specific researcher degrees of freedom that can lead to p-hacking across various stages of the research process. It serves as a practical tool for psychologists to preemptively identify and minimize opportunistic choices during study planning, data collection, analysis, and reporting.
evidence Paper
Prevalence of Research Misconduct and Questionable Research Practices: A Systematic Review and Meta-Analysis
This resource presents data from a large-scale survey of researchers to estimate the frequency of observed but unreported scientific misconduct. It highlights the systemic failure of institutional reporting mechanisms and emphasizes the need for better protections and incentives to ensure that integrity breaches are properly surfaced and addressed.
Collection of large-scale replications · 29 references

This is a collection of large-scale replications conducted to estimate the rate of reproducibility of entire (sub)disciplines, offering a big-picture view of replication efforts and the current state of replicability across fields.

evidence Paper
Do economists replicate?
This resource provides an empirical analysis of the prevalence and nature of replication studies within the field of economics. It investigates how often economists conduct replications and identifies the factors that influence the likelihood of these studies being performed and published.
evidence Paper
Quantitative Political Science Research Is Greatly Underpowered
This large-scale meta-research study provides empirical evidence that quantitative political science research is severely underpowered, with a median power of only 10% across thousands of tests. The findings demonstrate that only a small fraction of tests in the discipline meet the standard 80% power threshold required to detect consensus effects.
evidence Paper
A Review of Multisite Replication Projects in Social Psychology: Is It Viable to Sustain Any Confidence in Social Psychology’s Knowledge Base?
This article synthesizes findings from 36 multisite replication projects in social psychology to evaluate the success rate of the field's knowledge base. It identifies a low rate of successful replications and explores the theoretical and practical implications of these outcomes for the validity of established social psychological findings.
evidence Paper
Evaluating replicability of laboratory experiments in economics
This resource presents the results of a large-scale project that conducted high-powered, pre-registered replications of 18 laboratory experiments in economics. It provides precise data on replication success rates and reveals that replicated effect sizes in economics are, on average, about two-thirds of the original reported sizes.
evidence Paper
Evaluating the replicability of social science experiments in Nature and Science between 2010 and 2015
This study systematically evaluates the replicability of social science experiments published in the high-impact journals Nature and Science. It offers critical evidence on the reliability of high-profile findings across various social science disciplines, comparing original effect sizes with those found in rigorous, high-powered replication attempts.
evidence Paper
Is Economics Research Replicable? Sixty Published Papers from Thirteen Journals Say "Usually Not"
This study empirically evaluates the replicability of economics research by attempting to reproduce results from 67 papers published in 13 reputable journals. It highlights a significant gap in the availability of data and code files between journals with and without mandatory sharing policies, ultimately finding that most results could not be successfully replicated.
evidence Preprint
The Prevalence of Direct Replication Articles in Top-Ranking Psychology Journals
This meta-research assesses the prevalence of direct replication studies within top-ranking psychology journals to determine how often the field publishes self-correcting research. The authors use keyword searches and manual verification to provide an empirical indicator of the actual value and incentives provided for replication work in the published literature.
evidence Paper
Many Labs 3: Evaluating participant pool quality across the academic semester via replication
This large-scale crowdsourced project investigates whether the timing of data collection within an academic semester impacts the quality of behavioral research and the replicability of known effects. By testing 10 effects across 20 different university participant pools, it provides evidence that time-of-semester variations have little impact on data quality or the robustness of experimental findings.
evidence Paper
Many Labs 5: Testing Pre-Data-Collection Peer Review as an Intervention to Increase Replicability
This study evaluates whether involving original authors and experts in a pre-data-collection peer review process can improve the success rates of replication attempts in psychology. It specifically examines 10 studies that previously failed to replicate, testing the hypothesis that replication failures are often due to protocol deficiencies rather than the absence of the original phenomenon.
Errington, T. M., Mathur, M., Soderberg, C. K., Denis, A., Perfito, N., Iorns, E., & Nosek, B. A. (2021). Investigating the replicability of preclinical cancer biology. eLife, 10. https://doi.org/10.7554/eLife.71601
advocacy Paper
Replications in agricultural economics
This resource outlines the necessity of replication studies within agricultural economics and proposes a disciplinary framework to overcome institutional and practical barriers. It specifically targets the disconnect between policy-relevant recommendations and the current lack of verified research, urging a cultural shift among researchers and journal editors.
Christopherson, C. D., Hildebrandt, L., Adetula, A., Wiggins, B. J., McLaughlin, H., Hurst, M. A., IJzerman, H., Levitan, C., Legate, N., Pazda, A., Hase, K., VanBenschoten, A., Fallon, M., LePine, S., Gervais, H., Lazarevic, L., Chartier, C. R., Corker, K. S., France, H., … Wagge, J. (2013). Collaborative Replications and Education Project (CREP). OSF. https://doi.org/10.17605/OSF.IO/WFC6U
evidence Paper
Estimating the Prevalence of Transparency and Reproducibility-Related Research Practices in Psychology (2014–2017)
This empirical study assesses the adoption of open science practices in psychology by manually auditing a random sample of published literature from 2014 to 2017. It establishes a quantitative baseline for tracking the impact of transparency and reproducibility initiatives within the discipline.
advocacy Paper
Replicating Empirical Research In Behavioral Ecology: How And Why It Should Be Done But Rarely Ever Is
This article highlights the historical neglect of replication in behavioral ecology and argues for its restoration as a fundamental scientific requirement. It identifies specific cultural and institutional obstacles, such as publication bias and editorial disdain, explaining why the field must adopt a more rigorous approach to verifying empirical evidence.
evidence Paper
Rate and success of study replication in ecology and evolution
This study measures the actual frequency of replication in ecology and evolutionary biology journals, identifying a strikingly low rate of published replication attempts. By providing these empirical figures, it exposes the disparity between scientific ideals and the actual publishing practices within these specific biological fields.
evidence Paper
Investigating Variation in Replicability
This foundational study provides empirical data on the replicability of 13 psychological effects by testing them across 36 independent labs and diverse participant pools. It contributes to the understanding of reproducibility by demonstrating that most of the tested effects were consistently reproducible regardless of whether the research was conducted in a lab, online, or in different geographical locations.
evidence Paper
Many Labs 2: Investigating Variation in Replicability Across Samples and Settings
This large-scale empirical study examines the replicability of 28 psychological findings across 125 diverse samples from 36 countries to investigate how variation in settings affects results. It provides critical meta-research evidence suggesting that the primary determinant of replication success is the strength of the original effect rather than the specific sample or context.
evidence Paper
Assessing the replication landscape in experimental linguistics
This research quantifies the prevalence of replication studies within the field of experimental linguistics by analyzing publication trends across nearly 100 journals. It provides a baseline assessment of the "replication gap" in the discipline, highlighting how infrequently direct replications are published compared to novel confirmatory research.
Makel, M. C., & Plucker, J. A. (2014). Facts Are More Important Than Novelty. Educational Researcher, 43(6), 304–316. https://doi.org/10.3102/0013189X14545513
evidence Paper
Replications in Psychology Research
This study provides a comprehensive historical audit of replication practices in psychology since 1900, measuring the frequency and nature of replication attempts across a century of literature. It establishes that a very small percentage of the field's output consists of replications and examines the publication outcomes of those that are attempted.
evidence Paper
Replication of Special Education Research
This study provides empirical data on the prevalence and success of replications within the field of special education by reviewing the complete publication history of 36 specialized journals. It identifies a low replication rate of 0.5% and examines authorship history to assess the status of scientific rigor in the discipline.
ManyPrimates. (n.d.). ManyPrimates. https://manyprimates.github.io/
evidence Paper
Replication in Second Language Research: Narrative and Systematic Reviews and Recommendations for the Field
This resource presents a systematic review of 67 self-labeled replication studies in second language (L2) research to estimate publication rates and analyze study characteristics. It also provides a narrative review of field-specific challenges, offering recommendations to bridge the gap between the perceived importance of replication and actual practice.
evidence Paper
Replication in criminology: A necessary practice
This study conducts a content analysis of leading criminology journals to measure how often replication studies are published compared to other social and natural sciences. The findings reveal a replication rate of approximately 2% in criminology, highlighting the need for more frequent confirmation of research results in the field.
evidence Paper
Replication studies in economics—How many and which papers are chosen for replication, and why?
This research investigates the frequency and types of replication studies published in the top 50 economics journals over a 40-year period. It distinguishes between narrow and scientific replications and identifies that higher-impact articles by authors from leading institutions are more likely to be selected for replication.
Open Science Collaboration. (2012). An Open, Large-Scale, Collaborative Effort to Estimate the Reproducibility of Psychological Science. Perspectives on Psychological Science, 7(6), 657–660. https://doi.org/10.1177/1745691612462588
evidence Paper
Estimating the reproducibility of psychological science
This foundational meta-research provides empirical evidence regarding the reproducibility of psychological science by attempting to replicate 100 experimental and correlational studies. The findings demonstrate a significant decline in effect sizes and statistical significance rates compared to the original publications.
evidence Paper
RETRACTED ARTICLE: High replicability of newly discovered social-behavioural findings is achievable
This study demonstrates that high replication rates are achievable for novel social-behavioural findings when laboratories utilize rigour-enhancing practices like preregistration, large sample sizes, and methodological transparency. It provides empirical evidence that the low reproducibility observed in other meta-research efforts may be a result of suboptimal methods rather than an inherent unreliability of the phenomena themselves.
evidence Paper
Contextual sensitivity in scientific reproducibility
This study presents empirical evidence on the role of "contextual sensitivity" in the reproducibility of psychological research by analyzing findings from the Reproducibility Project: Psychology. It demonstrates that findings judged to depend heavily on specific social settings or temporal contexts are significantly less likely to replicate than more generalizable findings.
Proposed science improvement initiatives on statistics, measurement, teaching, data sharing, code sharing, pre-registration, & replication · 22 references

Published checklists and other resources that can be used to shift researcher behavior toward improved practices.

critique Paper
Statistical Nonsignificance in Empirical Economics
This article critiques the standard practice in empirical economics of prioritizing statistically significant rejections over non-significant findings. It demonstrates that in the context of large economic datasets, the failure to reject a point null is often more scientifically informative than a rejection, challenging the traditional hierarchy of evidence.
advocacy Letter
Towards a culture of open scholarship: the role of pedagogical communities
This resource argues for the critical role of teaching and mentorship in establishing a sustainable culture of open scholarship and research integrity. It specifically calls on institutions and stakeholders to integrate open science principles into pedagogical practices to ensure that the next generation of researchers is equipped with the skills for reproducible research.
teaching/training Preprint
Introducing a Framework for Open and Reproducible Research Training (FORRT)
This publication introduces a pedagogical framework and a community-led initiative designed to help educators integrate open scholarship into their teaching. It aims to bridge the gap between the researcher-led adoption of open practices and the lack of structured training available for the next generation of scientists.
advocacy Paper
Behavioural science is unlikely to change the world without a heterogeneity revolution
This article argues that the impact of behavioral science on real-world problems is hindered by a neglect of treatment effect heterogeneity. It advocates for a shift in research priorities toward understanding how and why effects vary across different contexts and populations, proposing a framework to improve the generalizability of findings.
overview Other
Powering Reproducible Research
This resource highlights the fundamental importance of statistical power in ensuring the reproducibility of research results across scientific disciplines. It explains how low power not only reduces the chance of detecting true effects but also increases the probability that reported significant results are false positives.
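To illustrate the point about power, the sketch below estimates by simulation how often a two-group study detects an assumed small effect with a modest sample; the effect size and sample size are illustrative assumptions, not values taken from this resource.

    # Illustrative power simulation; effect size and sample size are assumed.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    effect_size = 0.3    # assumed standardized mean difference (Cohen's d)
    n_per_group = 20     # assumed sample size per group
    n_simulations = 10_000

    hits = sum(
        stats.ttest_ind(
            rng.normal(loc=effect_size, size=n_per_group),  # treatment group
            rng.normal(loc=0.0, size=n_per_group),          # control group
        ).pvalue < 0.05
        for _ in range(n_simulations)
    )

    print(f"Estimated power: {hits / n_simulations:.2f}")
    # Around 0.15 here: most such studies miss the real effect, and the few that
    # reach significance tend to overestimate it, undermining reproducibility.
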
overview Paper
Seven Easy Steps to Open Science
This article serves as an accessible introductory guide to open science, specifically tailored for students and researchers in psychological science. It uses a curated and annotated reading list of seven foundational papers to explain core concepts such as pre-registration, open access, and data sharing.
critique Paper
The case for formal methodology in scientific reform
The authors critique the current wave of scientific reform for being primarily heuristic and lacking formal mathematical rigor, which leads to over-generalized solutions. They propose a structured five-step framework for developing formal methodology to ensure that scientific reforms are theoretically sound and practically effective.
overview Paper
Open science interventions to improve reproducibility and replicability of research: a scoping review
This scoping review provides a comprehensive synthesis of existing literature regarding the effectiveness of open science interventions designed to improve research reproducibility. It systematically categorizes these practices and identifies specific gaps where empirical evidence of their actual impact is still needed.
critique Paper
Practical Methodological Reform Needs Good Theory
This article argues that the methodological reform movement in psychology currently lacks sufficient theoretical grounding, which may limit the long-term effectiveness of practical improvements. It provides established theoretical frameworks from other disciplines to help model the research process and guide more robust metascience initiatives.
practice/tools Paper
Valid replications require valid methods: Recommendations for best methodological practices with lab experiments.
This resource provides actionable methodological recommendations for conducting lab experiments to ensure they serve as a solid foundation for valid replications. It highlights specific practices in experimental design and implementation that are essential for producing reliable and reproducible findings.
evidence Paper
Publication and other reporting biases in cognitive sciences: detection, prevalence, and prevention
This publication examines the prevalence and impact of publication and reporting biases within the cognitive sciences, offering specific methods for their detection. It synthesizes evidence of how these biases distort the literature and suggests preventive strategies to improve the reliability of reported findings.
practice/tools Paper
Easing Into Open Science: A Guide for Graduate Students and Their Advisors
This guide offers a structured roadmap for graduate students and advisors to incrementally adopt open science practices by categorizing them by difficulty level. It provides practical advice on eight specific behaviors, ranging from starting journal clubs to submitting registered reports, to help researchers transition toward more transparent workflows.
Krähmer, D., Schächtele, L., & Auspurg, K. (2026). Code sharing and reproducibility in survey-based social research: evidence from a large-scale audit. Royal Society Open Science, 13(3). https://doi.org/10.1098/rsos.251997
evidence Paper
Many Labs 2: Investigating Variation in Replicability Across Samples and Settings
This large-scale empirical study examines the replicability of 28 psychological findings across 125 diverse samples from 36 countries to investigate how variation in settings affects results. It provides critical meta-research evidence suggesting that the primary determinant of replication success is the strength of the original effect rather than the specific sample or context.
practice/tools Paper
Seven steps toward transparency and replicability in psychological science.
This resource provides a practical framework consisting of seven actionable steps that psychological researchers can implement to improve the transparency and reproducibility of their work. It offers a structured guide for transitioning to open science workflows, covering specific practices like preregistration and public data sharing.
advocacy Paper
A manifesto for reproducible science
This influential paper argues for a systemic overhaul of the scientific process to prioritize research reliability and efficiency over the mere volume of discovery. It proposes a comprehensive set of reforms across study design, reporting standards, and institutional incentive structures to address the structural causes of the reproducibility crisis.
advocacy Editorial
Checklists work to improve science
This resource makes the case for the adoption of structured checklists as a fundamental tool to ensure procedural consistency and reduce human error in research. It highlights how these simple interventions can effectively lead to more rigorous methodology and higher standards of transparent reporting across various scientific domains.
advocacy Paper
The Reproducibility Crisis in Science: A Statistical Counterattack
This resource identifies the lack of advanced statistical and analytical skills as a primary driver of the reproducibility crisis in science. It calls for a systemic shift in scientific training, arguing that rigorous statistical education is the most effective tool for ensuring research findings are replicable.
teaching/training Paper
Embedding open and reproducible science into teaching: A bank of lesson plans and resources.
This resource provides a curated collection of lesson plans and pedagogical materials designed to help educators integrate open scholarship into undergraduate and postgraduate curricula. It addresses the barrier of time and resource constraints by offering ready-to-use tools and sharing existing educational resources with the wider academic community.
overview Paper
Teaching open and reproducible scholarship: a critical review of the evidence base for current pedagogical methods and their outcomes
This resource provides a critical synthesis of existing research regarding the impact of open science training on undergraduate and postgraduate student outcomes. It evaluates the effectiveness of various pedagogical methods and identifies current gaps in the evidence base for teaching reproducible research practices.
critique Paper
Beyond Statistical Ritual: Theory in Psychological Science
This article critiques the current state of psychological science by arguing that an over-reliance on statistical significance has led to a decline in theoretical rigor. It suggests that the discipline’s focus on methodological fixes is insufficient without addressing the underlying 'theory crisis' where statistical rituals have replaced meaningful theoretical development.
practice/tools Preprint
A Guide for Social Science Journal Editors on Easing into Open Science
This guide provides actionable steps for social science journal editors to implement open science policies and practices within their publications. It consolidates existing resources into a structured pathway to help editors navigate the transition toward incentivizing transparency and data sharing.
Ethical considerations for improved practices · 13 references

Engaging in Open and Reproducible Science practices comes with ethical challenges that need to be navigated sensitively (e.g., when sharing data openly).

critique Paper
Open Science and Feminist Ethics: Promises and Challenges of Open Access
This article utilizes feminist ethics to critically evaluate the promises and challenges of the open access movement, particularly regarding power dynamics and data privacy. It offers an updated ethical framework to help researchers and policymakers navigate the intersection of transparency, social justice, and participant protection.
evidence Paper
The Matthew effect in science funding
This empirical study provides evidence for the "Matthew effect" in science funding by analyzing grant proposals and review scores to show how early success disproportionately benefits certain researchers. It demonstrates how funding gaps widen between winners and losers even when their initial qualifications are nearly identical.
overview Paper
Open Science: Challenges, Possible Solutions and the Way Forward
This review explores the multifaceted concept of open science, examining the barriers to accessible scientific communication such as high publication costs. It specifically evaluates the challenges of the open-access model and suggests various pathways for maintaining the accessibility of research outputs.
teaching/training Paper
How (and Whether) to Teach Undergraduates About the Replication Crisis in Psychological Science
This resource provides a validated one-hour lecture design and evaluation for introducing undergraduates to the replication crisis in psychology. It demonstrates that teaching these concepts can maintain student trust in science while improving their understanding of methodological improvements.
overview Paper
Academic Research in the 21st Century: Maintaining Scientific Integrity in a Climate of Perverse Incentives and Hypercompetition
This handbook offers a comprehensive survey of the field of academic integrity, addressing both theoretical research and practical applications in global educational settings. It serves as a foundational reference that bridges various perspectives on plagiarism, ethical conduct, and the systemic factors influencing scholarly honesty.
evidence Paper
The Economic Impacts of Open Science: A Rapid Evidence Assessment
This study provides a systematic synthesis of research on the economic impacts of open science, identifying both positive and negative effects and the mechanisms through which they occur. It highlights the methodological challenges in tracking the usage of open outputs and offers evidence-based insights into how these economic benefits can be maximized.
Jacobs, A. M., Büthe, T., Arjona, A., Arriola, L. R., Bellin, E., Bennett, A., Björkman, L., Bleich, E., Elkins, Z., Fairfield, T., Gaikwad, N., Greitens, S. C., Hawkesworth, M., Herrera, V., Herrera, Y. M., Johnson, K. S., Karakoç, E., Koivu, K., Kreuzer, M., … Yashar, D. J. (2021). The Qualitative Transparency Deliberations: Insights and Implications. Perspectives on Politics, 19(1), 171–208. https://doi.org/10.1017/S1537592720001164
overview Paper
The concept of risk and responsible conduct of research
This resource explores how the definition of risk informs the ethical standards required for the responsible conduct of research. It provides a conceptual analysis of how researchers can navigate the balance between scientific advancement and their ethical obligations to minimize harm.
overview Paper
Fostering openness in open science: An ethical discussion of risks and benefits
This article examines the ethical tension between the benefits of open science—such as enhanced reliability and collaboration—and the potential risks associated with sharing sensitive information. It specifically highlights the 'dual-use dilemma' regarding security, confidentiality, and privacy concerns in an open research environment.
critique Paper
The challenges of open data sharing for qualitative researchers
This resource critiques the application of universal open science mandates to qualitative research, arguing that sharing full datasets for replication is epistemologically and ethically problematic. It highlights how standardized requirements fail to account for methodological differences and the specific risks involved in de-identifying complex qualitative narratives.
Lupia, A. (2020). Practical and Ethical Reasons for Pursuing a More Open Science. PS: Political Science & Politics, 54(2), 301–304. https://doi.org/10.1017/S1049096520000979
critique Editorial
Editorial Essay: The Tumult over Transparency: Decoupling Transparency from Replication in Establishing Trustworthy Qualitative Research
This editorial warns against the uncritical transfer of transparency and replication standards from psychology to qualitative management research. It argues for decoupling transparency from replication, suggesting that while transparency is necessary for trust, replication is often a poor fit for qualitative research goals.
critique Paper
Reflection over compliance: Critiquing mandatory data sharing policies for qualitative research
This resource critiques the 'Mandatory Inclusion of Raw Data' (MIRD) model, arguing that universal data-sharing mandates fail to account for the unique ethical and epistemological challenges of qualitative research. It provides a series of reflective questions to help researchers in health psychology and related fields navigate these policies while protecting participant confidentiality and methodological integrity.
Ongoing debates (e.g., incentives for and against open science practices) · 33 references

Open Science is not a monolith, and continued scrutiny of the proposed practices and reforms is valuable, both for understanding why there is resistance (and how to counter anti-open arguments) and for evaluating the potential positive and negative impacts of reforms.

critique Paper
Open Science Isn't Always Open to All Scientists
This article critiques the implementation of open science, highlighting how standard practices may inadvertently exclude researchers who lack specific resources, institutional support, or geographical advantages. It argues that for open science to be truly inclusive, the movement must address the diverse socio-economic and systemic challenges faced by scientists globally.
critique Preprint
Revisiting the replication crisis without false positives
This paper challenges the common assumption that the replication crisis is primarily a result of false positives caused by questionable research practices. By proposing alternative meta-scientific models, the authors demonstrate that low replicability can be explained by factors other than false positives, calling for a more nuanced understanding of the crisis.
overview Paper
Does Sociology Need Open Science?
This article provides a disciplinary overview of open science's relevance to sociology, connecting modern transparency standards with classical sociological theory. It serves as an introductory guide for sociologists to understand how open science principles intersect with the field's specific epistemological and methodological traditions.
critique Paper
Getting ontologically serious about the replication crisis in psychology.
This article applies analytic philosophy to argue that the replication crisis in psychology is deeply rooted in ontological confusion rather than just methodological failures. It suggests that until the field addresses the nature of psychological phenomena themselves, epistemological and procedural reforms will remain insufficient to solve the crisis.
advocacy Paper
Open is not enough
This resource uses the experiences of the high-energy physics community to argue that simple data openness is insufficient for ensuring true computational reproducibility. It advocates for the adoption of more comprehensive practices, such as preserving the entire computational environment and workflow, to make research findings genuinely verifiable by others.
advocacy Paper
An Agenda for Open Science in Communication
This paper outlines a seven-point agenda for integrating open science practices into communication research to address the discipline's replication crisis. It advocates for specific shifts in research culture, such as the publication of materials and code, to enhance the transparency and generalizability of communication studies.
Drummond, C. (2017). Reproducible research: a minority opinion. Journal of Experimental & Theoretical Artificial Intelligence, 30(1), 1–11. https://doi.org/10.1080/0952813X.2017.1413140
evidence Paper
US studies may overestimate effect sizes in softer research
This study empirically investigates the "US effect" by analyzing over a thousand primary outcomes from meta-analyses in genetics and psychiatry to demonstrate how institutional pressures can lead to overestimated effect sizes. It specifically identifies how productivity-driven career systems in the United States may incentivize more extreme research findings compared to other regions.
critique Paper
Is science really facing a reproducibility crisis, and do we need it to?
This resource disputes the prevailing "crisis" narrative in science, arguing that the focus on widespread unreliability is based on a misinterpretation of current evidence. It proposes a more constructive framework that views recent methodological shifts as a positive evolution and empowerment of the scientific community rather than a sign of systemic failure.
evidence Paper
The Economic Impacts of Open Science: A Rapid Evidence Assessment
This study provides a systematic synthesis of research on the economic impacts of open science, identifying both positive and negative effects and the mechanisms through which they occur. It highlights the methodological challenges in tracking the usage of open outputs and offers evidence-based insights into how these economic benefits can be maximized.
critique Paper
Open Science for private Interests? How the Logic of Open Science Contributes to the Commercialization of Research
This article critically examines the intersection of the Open Science movement and the commercialization of research within the private sector. It argues that while Open Science promotes transparency and accountability, its current implementation may inadvertently serve private corporate interests rather than addressing social justice or public epistemic needs.
critique Paper
Science of psychological phenomena and their testing.
This paper argues that the 'replication crisis' in psychology stems from a misunderstanding of the inherent stability and variability of psychological phenomena rather than a failure of scientific rigor. It critiques the use of replication as a primary gatekeeper for scientific truth, suggesting that the field should instead focus on identifying stable patterns within variable data.
critique Paper
Campbell’s Law Explains the Replication Crisis: Pre-Registration Badges Are History Repeating
This article critiques the implementation of preregistration badges by framing them through the lens of Campbell’s Law, suggesting that high-stakes indicators can corrupt the very research processes they are meant to improve. It warns that mandating such practices may inadvertently prioritize the appearance of rigor over the actual quality of the science.
advocacy Paper
How to Produce, Identify, and Motivate Robust Psychological Science: A Roadmap and a Response to Vize et al.
This resource proposes a three-part vision for improving the robustness of psychological science by prioritizing methodological quality—such as sample size and valid measurement—over administrative mandates. It advocates for cultivating these foundational research practices as a more effective and less harmful path to scientific progress than relying on procedural requirements like preregistration.
advocacy Preprint
Open Science as Confused: Contradictory and Conflicting Discourses in Open Science Guidance to Researchers
This formal comment advocates for the integration of gender and diversity considerations into researcher assessment frameworks to improve institutional integrity and representation. It specifically argues that reshaping assessment criteria is a necessary step in fostering an inclusive and responsible research environment.
critique Paper
Low replicability can support robust and efficient science
This publication uses computational modeling of scientific communities to argue that a single-minded focus on high replicability may actually impede scientific progress and efficiency. It suggests that a certain level of non-replicable findings is an acceptable trade-off in a system that prioritizes discovery and the exploration of novel hypotheses.
advocacy Paper
Replication is relevant to qualitative research
This paper argues for the relevance and value of replication within qualitative research, suggesting it can address issues of transparency and transferability. It seeks to promote the adoption of replication as a fundamental building block of scholarship even in methodologies where it has traditionally been ignored.
critique Paper
The quantitative paradigm and the nature of the human mind. The replication crisis as an epistemological crisis of quantitative psychology in view of the ontic nature of the psyche
This paper frames the replication crisis in psychology as a fundamental epistemological mismatch between the complex nature of the human psyche and the quantitative methods used to measure it. It moves beyond statistical explanations to argue that the crisis stems from underlying philosophical and ontological assumptions that remain largely unaddressed in the field.
critique Paper
Conflicting Results and Statistical Malleability: Embracing Pluralism of Empirical Results
This article discusses how researcher degrees of freedom and methodological malleability naturally result in conflicting empirical findings, even in the absence of malpractice. It advocates for embracing a pluralistic view of results, where divergent outcomes are understood as a function of different but plausible design choices.
critique Paper
Breaking free
This resource provides a critical perspective on preregistration, arguing that it fails to solve the problems it targets while introducing new barriers for under-resourced researchers. It specifically highlights how the practice may stifle scientific creativity and impose disproportionate costs on junior scholars without guaranteeing ethical or novel outcomes.
overview Editorial
Reality check on reproducibility
This article offers a situational assessment of the reproducibility debate, aiming to provide a realistic perspective on the scale of the challenges facing scientific research. It contributes to the literature by encouraging a grounded evaluation of research practices to determine if the perceived 'crisis' matches empirical reality.
overview Paper
Is the Replicability Crisis Overblown? Three Arguments Examined
This publication examines and rebuts three common arguments used to downplay the significance of the replicability crisis, such as the sufficiency of low alpha levels and the role of conceptual replications. It clarifies statistical misunderstandings that lead some researchers to underestimate the prevalence of false-positive results in published literature.
critique Paper
Scandal in scientific reform: the breaking and remaking of science
This perspective analyzes how the narrative of 'scandal' and the claim that 'science is broken' have been used as rhetorical tools to drive the scientific reform movement. It highlights the power of these tactics in catalyzing institutional change while cautioning against their potential to undermine public trust and damage scientific careers.
critique Paper
Is There a Reproducibility Crisis? On the Need for Evidence-based Approaches
Through a systematic analysis of a UK parliamentary report, this resource argues that the label of a 'reproducibility crisis' lacks sufficient empirical evidence. It advocates for scientific reforms to be based on documented findings and evidence-based approaches rather than reacting to unverified claims of systemic failure.
critique Paper
Preregistration Is Neither Sufficient nor Necessary for Good Science
This article identifies several limitations and potential adverse effects of implementing a systematic preregistration system, specifically within the context of consumer research. It challenges the assumption that preregistration is a necessary condition for high-quality science by outlining implementation challenges and epistemological drawbacks.
critique Editorial
Editorial Essay: The Tumult over Transparency: Decoupling Transparency from Replication in Establishing Trustworthy Qualitative Research
This editorial warns against the uncritical transfer of transparency and replication standards from psychology to qualitative management research. It argues for decoupling transparency from replication, suggesting that while transparency is necessary for trust, replication is often a poor fit for qualitative research goals.
critique Website
Rethinking open science: Towards care for equity and inclusion
Ràfols, I. (2025). Global Dialogue. https://globaldialogue.isa-sociology.org/articles/rethinking-open-science-towards-care-for-equity-and-inclusion
critique Paper
A brief review of research that questions the impact of questionable research practices
This review synthesizes research that challenges the prevailing consensus on the prevalence and negative impact of questionable research practices (QRPs) such as p-hacking and HARKing. It suggests that these practices may not be the primary drivers of the replication crisis and are not inherently problematic in all scientific contexts.
critique Paper
Preregistration does not improve the transparent evaluation of severity in Popper’s philosophy of science or when deviations are allowed
This paper offers a philosophical critique of preregistration by demonstrating its limited utility within Popper’s theory-centric philosophy of science. It specifically argues that concerns regarding Type I error rate inflation are irrelevant in Popperian contexts and that preregistration fails to enhance the evaluation of test severity when deviations from plans occur.
critique Paper
The replication crisis is less of a “crisis” in Lakatos’ philosophy of science than it is in Popper’s
This article contrasts Popper’s and Lakatos’ philosophies to argue that the perception of a "replication crisis" is largely dependent on the philosophical framework one adopts. It suggests that moving toward a Lakatosian perspective can mitigate the sense of crisis by reframing how unexpected replication failures are interpreted within scientific programs.
critique Paper
There is no theory crisis in psychological science.
This article challenges the prevailing "theory crisis" narrative in psychology by arguing that theoretical limitations are persistent and rooted in the inherent complexity of human behavior. It provides a historical perspective that suggests current issues are not a temporary crisis but a fundamental, long-standing characteristic of the social and behavioral sciences.
critique Paper
Preregistration is not a panacea, but why? A rejoinder to “infusing preregistration into tourism research”
This rejoinder critiques the push for preregistration in tourism research by identifying it as an incomplete solution that may not address the field's underlying methodological challenges. It provides a cautionary perspective on the limitations of adopting open science practices without considering discipline-specific nuances and practical barriers.
overview Website
Epistemic replicability: A primer for psychological science and beyond
This resource introduces the concept of epistemic replicability, providing a conceptual framework for understanding the theoretical foundations of scientific reproducibility. It bridges the gap between statistical replication and broader knowledge accumulation across diverse scientific fields.