Zenodo

Also available in: Arabic | German

Definition: An open science repository where researchers can deposit research papers, reports, datasets, research software, and any other research-related digital artifacts. Zenodo mints a persistent digital object identifier (DOI) for each submission, making it citable. The platform was developed under the European OpenAIRE program and is operated by CERN.
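To make the deposit-to-DOI workflow concrete, below is a minimal sketch of creating and publishing a record through Zenodo's public REST API (documented at developers.zenodo.org). The access token, file name, and metadata values are placeholders and error handling is kept to a minimum; treat it as an illustration under those assumptions, not a complete client.

```python
# Minimal sketch of a Zenodo deposit via the public REST API
# (https://developers.zenodo.org). ACCESS_TOKEN, the file name,
# and the metadata below are placeholders.
import requests

BASE = "https://zenodo.org/api"
ACCESS_TOKEN = "YOUR_PERSONAL_ACCESS_TOKEN"  # placeholder token
params = {"access_token": ACCESS_TOKEN}

# 1. Create an empty deposition.
r = requests.post(f"{BASE}/deposit/depositions", params=params, json={})
r.raise_for_status()
deposition = r.json()
dep_id = deposition["id"]

# 2. Upload a file into the deposition's file bucket.
bucket_url = deposition["links"]["bucket"]
with open("results.csv", "rb") as fp:  # placeholder file
    r = requests.put(f"{bucket_url}/results.csv", data=fp, params=params)
    r.raise_for_status()

# 3. Attach minimal metadata (title, type, creators, description).
metadata = {
    "metadata": {
        "title": "Example dataset",
        "upload_type": "dataset",
        "description": "Demonstration deposit.",
        "creators": [{"name": "Doe, Jane"}],
    }
}
r = requests.put(f"{BASE}/deposit/depositions/{dep_id}",
                 params=params, json=metadata)
r.raise_for_status()

# 4. Publish: Zenodo mints the persistent DOI at this step.
r = requests.post(f"{BASE}/deposit/depositions/{dep_id}/actions/publish",
                  params=params)
r.raise_for_status()
print("DOI:", r.json()["doi"])
```

For experimentation, Zenodo also runs a sandbox instance (sandbox.zenodo.org) that exposes the same API without minting permanent DOIs.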

Related terms: DOI (digital object identifier), figshare, Open data, Open Science Framework, Preprint

Reference: www.zenodo.org

Drafted and Reviewed by: Ali H. Al-Hoorie, Sara Middleton
 Wagenmakers, E. (2021, April 21). Guidance for Multi-Analyst Studies. https://doi.org/10.31222/osf.io/5ecnh Aczel, B., Szaszi, B., Sarafoglou, A., Kekecs, Z., KucharskĂœ, Ć ., Benjamin, D., … & Wagenmakers, E. J. (2020). A consensus-based transparency checklist. Nature Human Behaviour, 4(1), 4-6. https://doi.org/10.1038/s41562-019-0772-6 Albayrak, N. (2018a). Diversity helps but decolonisation is the key to equality in higher education. https://lsepgcertcitl.wordpress.com/2018/04/16/diversity-helps-but-decolonisation-is-the-key-to-equality-in-higher-education/ Albayrak, N. (2018b). Academics’ role on the future of higher education: Important but unrecognised. https://lsepgcertcitl.wordpress.com/2018/11/29/academics-role-on-the-future-of-higher-education-important-but-unrecognised/ Albayrak, N., & Okoroji, C. (2019). Facing the challenges of postgraduate study as a minority student. A Guide for Psychology Postgraduates, 63. Albayrak-Aydemir, N. (2020). The hidden costs of being a scholar from the global south. Higher Education Across Borders (LSE Blog). https://blogs.lse.ac.uk/highereducation/2020/02/20/the-hidden-costs-of-being-a-scholar-from-the-global-south/ ALLEA - All European Academies (2017). The European Code of Conduct for Research Integrity. Revised Edition. Available at: https://allea.org/code-of-conduct/ Allen, L., & McGonagle-O’Connell, A. (n.d.). CRediT – Contributor Roles Taxonomy. CASRAI. https://casrai.org/credit/ Anderson, M.S., Ronning, E.A., Devries, R., & Martinson, B.C. (2010). Extending the Mertonian norms: Scientists’ subscription to norms of research. Journal of Higher Education, 81(3), 366–393. https://doi.org/10.1353/jhe.0.0095. Andersson, N. (2018). Participatory research—a modernizing science for primary health care. Journal of General and Family Medicine, 19(5): 154–159. https://doi.org/10.1002/jgf2.187 Angrist, J. D., & Pischke, J. S. (2010). The credibility revolution in empirical economics: How better research design is taking the con out of econometrics. Journal of Economic Perspectives, 24, 3-30. https://doi.org/10.1257/jep.24.2.3. Anon (n.d.). About CC Licenses. Creative Commons. https://creativecommons.org/about/cclicenses/ Anon. (n.d.). Ckan. https://ckan.org/ Anon. (2006). Correction or retraction?. Nature, 444, 123–124. https://doi.org/10.1038/444123b Anon (n.d.). Datacite Metadata Schema. Datacite Schema. https://schema.datacite.org/ Anon. (n.d.). Domov | SKRN (Slovak Reproducibility network). SKRN. https://slovakrn.wixsite.com/skrn Anon (n.d.). Home | re3data.org. Registry of Research Data Repositories. Retrieved 6 June 2021, from https://www.re3data.org/ Anon (n.d.). INVOLVE – INVOLVE Supporting public involvement in NHS, public health and social care research. INVOLVE. https://www.invo.org.uk/ Anon. (n.d.). Licenses & Standards | Open Source Initiative. OpenSource.Com. https://opensource.org/licenses Anon (n.d.). Open Source in Open Science | FOSTER. Foster. https://www.fosteropenscience.eu/foster-taxonomy/open-source-open-science Anon. (2019). The DOI Handbook. DOI. https://www.doi.org/hb.html Anon (n.d.). Welcome to Sherpa Romeo - v2.sherpa. Sherpa Romeo. https://v2.sherpa.ac.uk/romeo/ Anon (n.d.). What is a codebook? ICPSR. https://www.icpsr.umich.edu/icpsrweb/content/shared/ICPSR/faqs/what-is-a-codebook.html Anon.(2009-2020). What is a digital object identifier, or DOI? American Psychological Association. h ttps://apastyle.apa.org/learn/faqs/what-is-doi Anon. (n.d.). What is a reporting guideline? Equator Network. 
https://www.equator-network.org/about-us/what-is-a-reporting-guideline/ Anon (2021). What is impact? The Economic and Social Research Council. https://esrc.ukri.org/research/impact-toolkit/what-is-impact/ Anon. (n.d.). What is open education? Opensource.Com. https://opensource.com/resources/what-open-education Arslan, R. C. (2019). How to Automatically Document Data With the codebook Package to Facilitate Data Reuse. Advances in Methods and Practices in Psychological Science, 2(2), 169–187. https://doi.org/10.1177/2515245919838783 Arts and Humanities Research Council. (n.d.). Definition of eligibility for funding. Arts and Humanities Research Council. Available at: https://ahrc.ukri.org/skills/earlycareerresearchers/definitionofeligibility/ Aspers, P., & Corte, U. (2019). What is qualitative in qualitative research. Qualitative Sociology, 42(2), 139-160. https://doi.org/10.1007/s11133-019-9413-7 AusRN. (n.d.). Australian Reproducibility Network. Retrieved 5 June 2021, from https://www.aus-rn.org/ Banks, G. C., Rogelberg, S. G., Woznyj, H. M., Landis, R. S., & Rupp, D. E. (2016). Editorial: Evidence on questionable research practices: The good, the bad, and the ugly. Journal of Business and Psychology, 31(3), 323–338. https://doi.org/10.1007/s10869-016-9456-7 Barba, L. A. (2018). Terminologies for reproducible research. arXiv preprint arXiv:1802.03311. Bardsley, N. (2018) What lessons does the “replication crisis” in psychology hold for experimental economics? In: Handbook of Psychology and Economic Behaviour. 2nd edition. Cambridge Handbooks in Psychology. Cambridge University Press. ISBN 9781107161399 Available at http://centaur.reading.ac.uk/69874/ BartoĆĄ, F., & Schimmack, U. (2020). Z-Curve 2.0: Estimating replication rates and discovery rates. https://doi.org/10.31234/osf.io/urgtn Bateman, I., Kahneman, D., Munro, A., Starmer, C., & Sugden, R. (2005). Testing competing models of loss aversion: An adversarial collaboration. Journal of Public Economics, 89(8), 1561-1580. https://doi.org/10.1016/j.jpubeco.2004.06.013 Baturay, M. H. (2015). An overview of the world of MOOCs. Procedia-Social and Behavioral Sciences, 174, 427-433. https://doi.org/10.1016/j.sbspro.2015.01.685 Bazeley, P. (2003). Defining ‘Early Career’ in Research. Higher Education 45, 257–279 https://doi.org/10.1023/A:1022698529612 Beffara Bret, B., Beffara Bret, A., & Nalborczyk, L. (2021). A fully automated, transparent, reproducible, and blind protocol for sequential analyses. Meta-Psychology, 5. https://doi.org/10.15626/MP.2018.869 Behrens, J. T. (1997). Principles and procedures of exploratory data analysis. Psychological Methods, 2(2), 131-160. https://doi.org/10.1037/1082-989X.2.2.131 Beller, S., & Bender, A. (2017). Theory, the final frontier? A corpus-based analysis of the role of theory in psychological articles. Frontiers in Psychology, 8, 951. https://doi.org/10.3389/fpsyg.2017.00951 Benoit, K., Conway, D., Lauderdale, B. E., Laver, M., & Mikhaylov, S. (2016). Crowd-sourced text analysis: Reproducible and agile production of political data. American Political Science Review, 110(2), 278–295. https://doi.org/10.1017/S0003055416000058 Bhopal, R., Rankin, J., McColl, E., Thomas, L., Kaner, E., Stacy, R., Pearson, P., Vernon, B., & Rodgers, H. (1997). The vexed question of authorship: views of researchers in a British medical faculty. BMJ, 314, 1009-1012. https://doi.org/10.1136/bmj.314.7086.1009 BIDS (n.d.). Modality agnostic files. Brain Imaging Data Structure. 
https://bids-specification.readthedocs.io/en/stable/03-modality-agnostic-files.html BIDS. (2020). About BIDS. Brain Imaging Data Structure. https://bids.neuroimaging.io Bilder, G. (2013). DOIs unambiguously and persistently identify published, trustworthy, citable online scholarly literature. Right? Crossref. https://www.crossref.org/blog/dois-unambiguously-and-persistently-identify-published-trustworthy-citable-online-scholarly-literature-right/ Bishop, D. V. (2020). The psychology of experimental psychologists: Overcoming cognitive constraints to improve research: The 47th Sir Frederic Bartlett Lecture. Quarterly Journal of Experimental Psychology, 73(1), 1-19. https://doi.org/10.1177/1747021819886519 Björneborn, L., & Ingwersen, P. (2004). Toward a basic framework for webometrics. Journal of the American society for information science and technology, 55(14), 1216-1227.https://doi.org/10.1002/asi.20077 Blohowiak, B. B., Cohoon, J., de-Wit, L., Eich, E., Farach, F. J., Hasselman, F., 
 Riss, C. (2020, July 4). Badges to Acknowledge Open Practices. Retrieved from osf.io/tvyxz BMJ. (2015). Introducing ‘How to write and publish a Study Protocol’ using BMJ’s new eLearning programme: Research to Publication. Retrieved, March 2021, from: https://blogs.bmj.com/bmjopen/2015/09/22/introducing-how-to-write-and-publish-a-study-protocol-using-bmjs-new-elearning-programme-research-to-publication/ Boivin, A., Richards, T., Forsythe, L., Gregoire, A., L’Esperance, A., Abelson, J., & Carman, K.L. (2018). Evaluating the patient and public involvement in research. British Medical Journal, 363, k5147. https://doi.org/10.1136/bmj.k5147 Bol, T., de Vaan, M., & van de Rijt, A. (2018). The Matthew effect in science funding. Proceedings of the National Academy of Sciences, 115(19), 4887-4890. https://doi.org/10.1073/pnas.1719557115 Bollen, K. A. (1989). Structural Equations with Latent Variables (pp. 179-225). John Wiley & Sons. Borenstein, M., Hedges, L. V., Higgins, J. P., & Rothstein, H. R. (2011). Introduction to meta-analysis. John Wiley & Sons. Bornmann, L., Ganser, C., Tekles, A., & Leydesdorff, L. (2019). Does the $ h_\alpha $ index reinforce the Matthew effect in science? Agent-based simulations using Stata and R. arXiv preprint arXiv:1905.11052. Borsboom, D., Mellenbergh, G. J., & Van Heerden, J. (2004). The concept of validity. Psychological review, 111(4), 1061. https://doi.org/10.1037/0033-295X.111.4.1061 Borsboom, D., van der Maas, H., Dalege, J., Kievit, R., & Haig, B. (2020, February 29). Theory Construction Methodology: A practical framework for theory formation in psychology. https://doi.org/10.31234/osf.io/w5tp8 Bourne, P. E., Polka, J. K., Vale, R. D., & Kiley, R. (2017). Ten simple rules to consider regarding preprint submission.PLoS Computational Biology, 13(5), e1005473. https://doi.org/10.1371/journal.pcbi.1005473 Box, G.E. P. (1976). Science and statistics. Journal of the American Statistical Association 71(356), 791–799. Bouvy, J. C., & Mujoomdar, M. (2019). All-Male Panels and Gender Diversity of Issue Panels and Plenary Sessions at ISPOR Europe. PharmacoEconomics-open, 3(3), 419-422. https://doi.org/10.1007/s41669-019-0153-0 BramoullĂ©, Y., & Saint-Paul, G. (2010). Research cycles. Journal of economic theory, 145(5), 1890-1920. https://doi.org/10.2139/ssrn.965816 Brand, A., Allen, L., Altman, M., Hlava, M., & Scott, J. (2015). Beyond authorship: attribution, contribution, collaboration, and credit. Learned Publishing, 28(2), 151-155. https://doi.org/10.1087/20150211 Brandt, M. J., IJzerman, H., Dijksterhuis, A., Farach, F. J., Geller, J., Giner-Sorolla, R., … & Van’t Veer, A. (2014). The replication recipe: What makes for a convincing replication?. Journal of Experimental Social Psychology, 50, 217-224. https://doi.org/10.1016/j.jesp.2013.10.005 Braun, V., & Clarke, V. (2013) Successful Qualitative Research. SAGE Publications. Brembs, B., Button, K., & MunafĂČ, M. (2013). Deep impact: unintended consequences of journal rank. Frontiers in Human Neuroscience, 7, 291. https://doi.org/10.3389/fnhum.2013.00291 Breznau, N. (2021). I saw you in the crowd: Credibility, reproducibility, and meta-utility. PS: Political Science & Politics, 54(2), 309-313. https://doi.org/10.1017/S1049096520000980 Brod, M., Tesler, L., & Christensen, T. (2009). Qualitative research and content validity: Developing best practices based on science and experience. Quality of Life Research, 18(9), 1263–1278. https://doi.org/10.1007/s11136-009-9540-9 Brooks, T. A. (1985). 
Private acts and public objects: An investigation of citer motivations. Journal of the American Society for Information Science, 36(4), 223-229. https://doi.org/10.1002/asi.4630360402 Brunner, J., & Schimmack, U. (2020). Estimating population mean power under conditions of heterogeneity and selection for significance. Meta-Psychology, 4, MP.2018.874. https://doi.org/1[0.15626/MP.2018.874](https://doi.org/10.15626/MP.2018.874) Bruns, S. B., & Ioannidis, J. P. (2016). P-curve and p-hacking in observational research. PLoS ONE, 11(2), e0149144. https://doi.org/10.1371/journal.pone.0149144 Budapest Open Access Initiative (2002) Read the Budapest open access initiative. Budapest, Hungary. Available from: https://www.budapestopenaccessinitiative.org/read Burnette, M., Williams, S., & Imker, H. (2016). From Plan to Action: Successful Data Management Plan Implementation in a Multidisciplinary Project. Journal of eScience librarianship, 5(1), e1101. https://doi.org/10.7191/jeslib.2016.1101 Button, K. S., Chambers, C. D., Lawrence, N., & MunafĂČ, M. R. (2020). Grassroots training for reproducible science: a consortium-based approach to the empirical dissertation. Psychology Learning & Teaching, 19(1), 77-90. https://doi.org/10.1177/1475725719857659 Button, K. S., Lawrence, N. S., Chambers, C. D., & MunafĂČ, M. R. (2016). Instilling scientific rigour at the grassroots. Psychologist, 29(3), 158-159. Byrne J. A. & Christopher J. (2020). Digital magic, or the dark arts of the 21st century—how can journals and peer reviewers detect manuscripts and publications from paper mills? FEBS Lett, 594(4), 583-589. https://doi.org/10.1002/1873-3468.13747 Campbell, D. T., & Stanley, J.C. (1966) Experimental and Quasi Experimental Designs. Rand McNally. Carp, J. (2012). On the plurality of (methodological) worlds: estimating the analytic flexibility of FMRI experiments. Frontiers in Neuroscience, 6, 149. https://doi.org/10.3389/fnins.2012.00149 Carsey, T. M. (2014). Making DA-RT a reality. PS: Political Science & Politics, 47(1), 72–77. https://doi.org/10.1017/S1049096513001753 Carter, A., Tilling, K., & Munafo, M. R. (2021, January 26). Considerations of sample size and power calculations given a range of analytical scenarios. https://doi.org/10.31234/osf.io/tcqrn Case, C. M. (1928). Scholarship in Sociology. Sociology and Social Research, 12, 323–340. http://www.sudoc.fr/036493414 Cassidy, S. A., Dimova, R., GiguĂšre, B., Spence, J. R., & Stanley, D. J. (2019). Failing grade: 89% of introduction-to-psychology textbooks that define or explain statistical significance do so incorrectly. Advances in Methods and Practices in Psychological Science, 2(3), 233-239. https://doi.org/10.1177/2515245919858072 Centre for Evaluation. (n.d.). Evidence Synthesis. https://www.lshtm.ac.uk/research/centres/centre-evaluation/evidence-synthesis Centre for Open Science. (2011-2021) Open Science Framework. Centre for Open Science. https://osf.io/ Centre for Open Science. (n.d.). Show Your Work. Share Your Work. Advance Science. That’s Open Science. The Centre for Open Science. https://www.cos.io/ CESSDA Training Team (2017 - 2020). CESSDA Data Management Expert Guide. Bergen, Norway: CESSDA ERIC. Retrieved from https://www.cessda.eu/DMGuide Chambers, C. D. (2013). Registered reports: a new publishing initiative at Cortex. Cortex, 49(3), 609-610. https://doi.org/10.1016/j.cortex.2012.12.016. Chambers, C. D., Dienes, Z., McIntosh, R. D., Rotshtein, P., & Willmes, K. (2015). 
Registered reports: realigning incentives in scientific publishing. Cortex, 66, A1-A2. https://doi.org/10.1016/j.cortex.2015.03.022. Chambers, C. D., & Tzavella, L. (2020, February 10). Registered Reports: Past, Present and Future. https://doi.org/10.31222/osf.io/43298 Chartier, C. R., Riegelman, A., & McCarthy, R. J. (2018). StudySwap: A platform for interlab replication, collaboration, and resource exchange. Advances in Methods and Practices in Psychological Science, 1(4), 574-579. https://doi.org/10.1177/2515245918808767 Chuard, P. J. C., Vrtilek, M., Head, M. L., & Jennions, M. D. (2019). Evidence that non-significant results are sometimes preferred: Reverse P-hacking or selective reporting? PLoS Biol 17(1), e3000127. https://doi.org/10.1371/journal.pbio.3000127 Citizen Science Association (2015). Who We Are. Citizen Science. https://www.citizenscience.org/about-3/ Claerbout, J. F., & Karrenbach, M. (1992). Electronic documents give reproducible research a new meaning. In SEG Technical Program Expanded Abstracts 1992 (pp. 601-604). Society of Exploration Geophysicists. Available at http://sepwww.stanford.edu/doku.php?id=sep:research:reproducible:seg92 Clark, H., Elsherif, M. M., & Leavens, D. A. (2019). Ontogeny vs. phylogeny in primate/canid comparisons: a meta-analysis of the object choice task. Neuroscience & Biobehavioral Reviews, 105, 178-189. https://doi.org/10.1016/j.neubiorev.2019.06.001 Cohen, J. (1962). The statistical power of abnormal-social psychological research: A review. The Journal of Abnormal and Social Psychology, 65(3), 145–153. https://doi.org/10.1037/h0045186 Cohen, J. (1969). Statistical power analysis for the behavioral sciences. Academic Press. Cohn, J. P. (2008). Citizen science: Can volunteers do real research?. BioScience, 58(3), 192-197. https://doi.org/10.1641/B580303 Coles, N. A., Tiokhin, L., Arslan, R., Forscher, P., Scheel, A., & Lakens, D. (2020, May 11). Red Team Challenge. http://daniellakens.blogspot.com/2020/05/red-team-challenge.html Committee on Reproducibility and Replicability in Science, Board on Behavioral, Cognitive, and Sensory Sciences, Committee on National Statistics, Division of Behavioral and Social Sciences and Education, Nuclear and Radiation Studies Board, Division on Earth and Life Studies, 
 National Academies of Sciences, Engineering, and Medicine. (2019). Reproducibility and Replicability in Science (p. 25303). National Academies Press. https://doi.org/10.17226/25303 Cook, T. D., & Campbell, D. T. (1979). Quasi-Experimentation. Rand McNally. Coproduction Collective (2021). Our approach. https://www.coproductioncollective.co.uk/what-is-co-production/our-approach Corley, K. G., & Gioia, D. A. (2011). Building theory about theory building: what constitutes a theoretical contribution?. Academy of management review, 36(1), 12-32. https://doi.org/10.5465/amr.2009.0486 Cornell University (2020). Measuring your research impact: i10 index. Cornell University Library. https://guides.library.cornell.edu/impact/author-impact-10 Corti, L., Van den Eynden, V., Bishop, L., & Woollard, M. (2019). Managing and sharing research data: a guide to good practice. Sage. Cowan, N., Belletier, C., Doherty, J. M., Jaroslawska, A. J., Rhodes, S., Forsberg, A., … & Logie, R. H. (2020). How do scientific views change? Notes from an extended adversarial collaboration. Perspectives on Psychological Science, 15(4), 1011-1025. https://doi.org/10.1177/1745691620906415 Crenshaw, K. W. (1989). Demarginalizing the Intersection of Race and Sex: A Black Feminist Critique of Antidiscrimination Doctrine. University of Chicago Legal Forum, 1989 (8), 139–168. Cronbach, L. J., & Meehl, P. E. (1955). Construct validity in psychological tests. Psychological Bulletin, 52(4), 281–302. https://doi.org/10.1037/h0040957 Cronin, B. (2001). Hyperauthorship: A postmodern perversion or evidence of a structural shift in scholarly communication practices? Journal of the American Society for Information Science and Technology, 52(7), 558–569. https://doi.org/10.1002/asi.1097 Crowdsourcing Week. (2021, April 29). What is Crowdsourcing? https://crowdsourcingweek.com/what-is-crowdsourcing/ Crutzen, R., Ygram Peters, G. J., & Mondschein, C. (2019). Why and how we should care about the General Data Protection Regulation. Psychology & health, 34(11), 1347-1357. https://doi.org/10.1080/08870446.2019.1606222 CrĂŒwell, S., van Doorn, J., Etz, A., Makel, M. C., Moshontz, H., Niebaum, J. C., Orben, A., Parsons, S., & Schulte-Mecklenbeck, M. (2019). Seven Easy Steps to Open Science: An Annotated Reading List. Zeitschrift FĂŒr Psychologie, 227(4), 237–248. https://doi.org/10.1027/2151-2604/a000387 Curry, S. (2012) Sick of impact factors. [blogpost] http://occamstypewriter.org/scurry/2012/08/13/sick-of-impact-factors/ d’Espagnat, B. (2008). Is science cumulative? A physicist viewpoint. In Rethinking Scientific Change and Theory Comparison (pp. 145-151). Springer, Dordrecht. https://doi.org/10.1007/978-1-4020-6279-7_10 Davies, G. M., & Gray, A. (2015). Don’t let spurious accusations of pseudoreplication limit our ability to learn from natural experiments (and other messy kinds of ecological monitoring). Ecology and Evolution, 5(22), 5295–5304. https://doi.org/10.1002/ece3.1782 Del Giudice, M., & Gangestad, S. W. (2021). A traveler’s guide to the multiverse: Promises, pitfalls, and a framework for the evaluation of analytic decisions. Advances in Methods and Practices in Psychological Science, 4(1), 2515245920954925. https://doi.org/10.1177/2515245920954925 Der Kiureghian, A., & Ditlevsen, O. (2009). Aleatory or epistemic? Does it matter?. Structural Safety, 31(2), 105-112. https://doi.org/10.1016/j.strusafe.2008.06.020 DeVellis, R. F. (2017). Scale development: Theory and applications (4th ed.). Sage. Devito, N., & Goldacre, B. (2019). 
Publication bias. Catalogue Of Bias https://catalogofbias.org/biases/publication-bias/ Dickersin, K., & Min, Y. (1993). Publication Bias: The problem that wont go away. Annals New York Academy of Sciences, 703(1), 135-148. https://doi.org/10.1111/j.1749-6632.1993.tb26343.x Dienes, Z. (2011). Bayesian versus orthodox statistics: Which side are you on?. Perspectives on Psychological Science, 6(3), 274-290.https://doi.org/10.1177/1745691611406920 Dienes, Z. (2014). Using Bayes to get the most out of non-significant results. Frontiers in psychology, 5, 781. https://doi.org/10.3389/fpsyg.2014.00781 Dienes, Z. (2016). How Bayes factors change scientific practice. Journal of Mathematical Psychology, 72, 78-89. https://doi.org/10.1016/j.jmp.2015.10.003 Doll, R., & Hill, A. B. (1954). The mortality of doctors in relation to their smoking habits, a preliminary report. British Medical Journal, 1 (4877), 1451–1455. doi:10.1136/bmj.1.4877.1451 Drost, E. A. (2011). Validity and reliability in social science research. Education Research and Perspectives, 38(1), 105-123. Du Bois, W.E.B. (1968). The souls of black folk, essays and sketches. Chicago, A.G. McClurg, 1903. New York: Johnson Reprint Corp. Duval, S., & Tweedie, R. (2000a). A nonparametric “trim and fill” method of accounting for publication bias in meta-analysis. Journal of the American Statistical Association, 95, 89–98. https://doi.org/10.2307/2669529 Duval, S., & Tweedie, R. (2000b). Trim and fill: A simple funnel-plot–based method of testing and adjusting for publication bias in meta-analysis. Biometrics, 56, 455–463. https://doi.org/10.1111/j.0006-341x.2000.00455.x. Eagly, A. H., & Riger, S. (2014). Feminism and psychology: Critiques of methods and epistemology. American Psychologist, 69(7), 685–702. https://doi.org/10.1037/a0037372 Easterbrook, S. M. (2014). Open code for open science? Nature Geoscience, 7, 779-781. https://doi.org/10.1038/ngeo2283 Ebersole, C. R., Atherton, O. E., Belanger, A. L., Skulborstad, H. M., Allen, J. M., Banks, J. B., 
 Nosek, B. A. (2016). Many Labs 3: Evaluating participant pool quality across the academic semester via replication. Journal of Experimental Social Psychology, 67, 68–82. https://doi.org/10.1016/j.jesp.2015.10.012 Edyburn, D. L. (2010). Would you recognize universal design for learning if you saw it? Ten propositions for new directions for the second decade of UDL. Learning Disability Quarterly, 33(1), 33-41. https://doi.org/10.1177/073194871003300103 Ellemers, N. (2021). Science as collaborative knowledge generation. British Journal of Social Psychology, 60 (1), 1-28.https://doi.org/10.1111/bjso.12430 Eley, A. R. (2012). Becoming a successful early career researcher. Routledge. http://www.worldcat.org/oclc/934369360 f Elliott, K. C., & Resnik, D. B. (2019). Making open science work for science and society. Environmental Health Perspectives, 127(7). https://doi.org/10.1289/EHP4808 Esterling, K., Brady, D., & Schwitzgebel, E. (2021, January 27). The Necessity of Construct and External Validity for Generalized Causal Claims. https://doi.org/10.31219/osf.io/2s8w5 European Commission (2021, January 17th). Responsible research & innovation. Horizon 2020. https://ec.europa.eu/programmes/horizon2020/en/h2020-section/responsible-research-innovation F. (2019, December 13). Introducing a Framework for Open and Reproducible Research Training (FORRT). https://doi.org/10.31219/osf.io/bnh7p Fanelli, D. (2010). Do Pressures to Publish Increase Scientists’ Bias? An Empirical Support from US States Data. PLOS ONE. 5 (4), e10271. https://doi.org/10.1371/journal.pone.0010271 Fanelli, D. (2018). Opinion: Is science really facing a reproducibility crisis, and do we need it to?. Proceedings of the National Academy of Sciences, 115(11), 2628-2631. https://doi.org/10.1073/pnas.1708272114 Farrow, R. (2017). Open education and critical pedagogy. Learning, Media and Technology, 42(2), 130-146. https://doi.org/10.1080/17439884.2016.1113991 Faul, F., Erdfelder, E., Lang, A.-G., & Buchner, A. (2007). G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods, 39, 175-191. https://doi.org/10.3758/BF03193146 Faul, F., Erdfelder, E., Buchner, A., & Lang, A.-G. (2009). Statistical power analyses using G*Power 3.1: Tests for correlation and regression analyses. Behavior Research Methods, 41, 1149-1160. https://doi.org/10.3758/BRM.41.4.1149 Ferson, S., Joslyn, C. A., Helton, J. C., Oberkampf, W. L., & Sentz, K. (2004). Summary from the epistemic uncertainty workshop: consensus amid diversity. Reliability Engineering & System Safety, 85(1-3), 355-369. https://doi.org/10.1016/j.ress.2004.03.023 Fiedler K., Kutzner F., Krueger J. I.. (2012). The long way from α-error control to validity proper: Problems with a short-sighted false-positive debate. Perspectives on Psychological Science, 7(6), 661-669. https://doi.org/ 10.1177/1745691612462587. Fiedler, K., & Schwarz, N. (2016). Questionable research practices revisited. Social Psychological and Personality Science, 7(1), 45–52. https://doi.org/10.1177/1948550615612150 Filipe, A., Renedo, A., & Marston, C. (2017). The co-production of what? Knowledge, values, and social relations in health care. PLoS biology, 15(5), e2001403. https://doi.org/10.1371/journal.pbio.2001403 Fillon, A.A., Feldman, G., Yeung, S. K., Protzko, J., Elsherif, M. M., Xiao, Q., Nanakdewa, K. & Brick, C. (2021). Correlational Meta-Analysis Registered Report Template. [Manuscript in preparation]. Findley, M. G., Jensen, N. 
M., Malesky, E. J., & Pepinsky, T. B. (2016). Can results-free review reduce publication bias? The results and implications of a pilot study. Comparative Political Studies, 49(13), 1667–1703. https://doi.org/10.1177/0010414016655539 Finlay, L., & Gough, B. (Eds.). (2008). Reflexivity: A practical guide for researchers in health and social sciences. John Wiley & Sons. Flake, J. K., & Fried, E. I. (2020). Measurement schmeasurement: Questionable measurement practices and how to avoid them. Advances in Methods and Practices in Psychological Science, 3(4), 456-465. https://doi.org/10.1177%2F2515245920952393 Fletcher-Watson, S., Adams, J., Brook, K., Charman, T., Crane, L., Cusack, J., Leekam, S., Milton, D., Parr, J. R., & Pellicano, E. (2019). Making the future together: Shaping autism research through meaningful participation. Autism, 23(4), 943–953 FORRT. (2021). Welcome to FORRT. Framework for Open and Reproducible Research Training. https://forrt.org Foster, E. D., & Deardorff, A. (2017). Open science framework (OSF). Journal of the Medical Library Association: JMLA, 105(2), 203. https://doi.org/ 10.5195/jmla.2017.88 Franco, A., Malhotra, N., & Simonovits, G. (2014). Publication bias in the social sciences: Unlocking the file drawer. Science, 345(6203), 1502-1505. https://doi.org/10.1126/science.1255484 Frank, M.C., Bergelson, E., Bergmann, C., Cristia, A., Floccia, C., Gervain, J., Hamlin, J.K., Hannon, E.E., Kline, M., Levelt, C., Lew-Williams, C., Nazzi, T., Panneton, R., Rabagliati, H., Soderstrom, M., Sullivan, J., Waxman, S. and Yurovsky, D. (2017). A Collaborative Approach to Infant Research: Promoting Reproducibility, Best Practices, and Theory-Building. Infancy, 22, 421-435. https://doi.org/10.1111/infa.12182 Franzoni, C., & Sauermann, H. (2014). Crowd science: The organization of scientific research in open collaborative projects. Research Policy, 43(1), 1–20. https://doi.org/10.1016/j.respol.2013.07.005 Fraser, H., Bush, M., Wintle, B., Mody, F., Smith, E., Hanea, A., … & Fidler, F. (2021). Predicting reliability through structured expert elicitation with repliCATS (Collaborative Assessments for Trustworthy Science). Free Our Knowledge. (n.d.). About. Free Our Knowledge. https://freeourknowledge.org/about/. Frith, U. (2020). Fast lane to slow science. Trends in Cognitive Sciences, 24(1), 1-2.https://doi.org/10.1016/j.tics.2019.10.007 Galligan, F., & Dyas-Correia, S. (2013). Altmetrics: rethinking the way we measure. Serials Review, 39(1), 56-61. https://doi.org/10.1016/j.serrev.2013.01.003 Gelman, A., & Loken, E. (2013). The garden of forking paths: Why multiple comparisons can be a problem, even when there is no “fishing expedition” or “p-hacking” and the research hypothesis was posited ahead of time. Department of Statistics, Columbia University, 348. http://www.stat.columbia.edu/~gelman/research/unpublished/p_hacking.pdf Gelman, A., & Carlin, J. (2014). Beyond Power Calculations: Assessing Type S (Sign) and Type M (Magnitude) Errors. Perspectives on Psychological Science, 9(6), 641-651. https://doi.org/ 10.1177/1745691614551642 Gentleman, R. (2005). Reproducible Research: A Bioinformatics Case Study. Statistical Applications in Genetics and Molecular Biology, 4, 1034. https://doi.org/10.2202/1544-6115.1034 German Research Foundation (2019). Guidelines for Safeguarding Good Research Practice. Code of Conduct. http://doi.org/10.5281/zenodo.3923602 Gilroy, P. (1993). The black Atlantic: Modernity and double consciousness. New York: Harvard University Press. 
Giner-Sorolla, R., Aberson, C. L., Bostyn, D. H., Carpenter, T., Conrique, B. G., Lewis, N. A., & Soderberg, C. (2019). Power to detect what? Considerations for planning and evaluating sample size [Preprint]. https://osf.io/jnmya/ Ginsparg, P. (1997). Winners and losers in the global research village, The Serials Librarian, 30(3-4), 83-95. https://doi.org/10.1300/J123v30n03_13 Ginsparg, P. (2001). Creating a global knowledge network. In Second Joint ICSU Press-UNESCO Expert Conference on Electronic Publishing in Science (pp. 19-23). Gioia, D. A., & Pitre, E. (1990). Multiparadigm perspectives on theory building. Academy of management review, 15(4), 584-602. https://doi.org/10.5465/amr.1990.4310758 Glass, D. J., & Hall, N. (2008). A brief history of the hypothesis. Cell, 134(3), 378-381. https://doi.org/10.1016/j.cell.2008.07.033 Goertzen, M.J. (2017). Introduction to Quantitative Research and Data. Library Technology Reports. 53(4), 12–18. Gollwitzer, M., Abele-Brehm, A., Fiebach, C., Ramthun, R., Scheel, A. M., Schönbrodt, F. D., & Steinberg, U. (2020, September 10). Data Management and Data Sharing in Psychological Science: Revision of the DGPs Recommendations. https://doi.org/10.31234/osf.io/24ncs Goodman, S. N., Fanelli, D., & Ioannidis, J. P. A. (2016). What does research reproducibility mean? Science Translational Medicine, 8(341), 341ps12-341ps12. https://doi.org/10.1126/scitranslmed.aaf5027 Goodman, S. W., & Pepinsky, T. B. (2019). Gender Representation and Strategies for Panel Diversity: Lessons from the APSA Annual Meeting. PS: Political Science & Politics, 52(4), 669-676. https://doi.org/10.1017/S1049096519000908 Gorgolewski, K., Auer, T., Calhoun, V. et al. (2016). The brain imaging data structure, a format for organizing and describing outputs of neuroimaging experiments. Scientific Data, 3, 160044. https://doi.org/10.1038/sdata.2016.44 Graham, I. D., McCutcheon, C., & Kothari, A. (2019). Exploring the frontiers of research co-production: the Integrated Knowledge Translation Research Network concept papers. Health Research Policy and Systems, 17, 88. https://doi.org/10.1186/s12961-019-0501-7 GRN · German Reproducibility Network. (n.d.). A German Reproducibility Network. Retrieved 5 June 2021, from https://reproducibilitynetwork.de/ Grossmann, A., & Brembs, B. (2021). Current market rates for scholarly publishing services. F1000Research, 10(20), 20. https://doi.org/10.12688/f1000research.27468.1 Grzanka, P. R. (2020). From buzzword to critical psychology: An invitation to take intersectionality seriously. Women & Therapy, 43(3-4), 244-261. Guest, O. [@o_guest]. (2017, June 5). Thanks! Hopefully this thread & many other similar discussions & blogs will help make it less Bropen Science and more Open Science. *hides\ [Tweet]. Twitter. https://twitter.com/o_guest/status/871675631062458368 Guest, O., & Martin, A. E. (2020). How computational modeling can force theory building in psychological science. Perspectives on Psychological Science. https://doi.org/10.1177/1745691620970585 Haak, L. L., Fenner, M., Paglione, L., Pentz, E., & Ratner, H. (2012). ORCID: A system to uniquely identify researchers. Learned Publishing, 25(4), 259-264. doi:10.1087/20120404 Hackett, R., & Kelly, S. (2020). Publishing ethics in the era of paper mills. Biology Open, 9(10), bio056556. https://doi.org/10.1242/bio.056556 Hardwicke, T. E., Jameel, L., Jones, M., Walczak, E. J., & Weinberg, L. M. (2014). Only human: Scientists, systems, and suspect statistics. Opticon1826, 16, 25. 
DOI:10.5334/OPT.CH Hardwicke, T. E., Bohn, M., MacDonald, K., Hembacher, E., Nuijten, M. B., Peloquin, B. N., … & Frank, M. C. (2020). Analytic reproducibility in articles receiving open data badges at the journal Psychological Science: an observational study. Royal Society Open Science, 8(1), 201494. https://doi.org/10.1098/rsos.201494 Hart, D. D., & Silka, L. (2020). Rebuilding the Ivory Tower: A Bottom-Up Experiment in Aligning Research with Societal Needs. Issues in Science and Technology, 79-85. https://issues.org/aligning-research-with-societal-needs/ Hartgerink, C. H., Wicherts, J. M., & Van Assen, M. A. L. M. (2017). Too good to be false: Nonsignificant results revisited. Collabra: Psychology, 3(1). https://doi.org/10.1525/collabra.71 Haven, T. L., & van Grootel, L. (2019). Preregistering qualitative research. Accountability in Research, 26(3), 229–244. https://doi.org/10.1080/08989621.2019.1580147 Haynes, S. N., Richard, D. C. S., & Kubany, E. S. (1995). Content validity in psychological assessment: A functional approach to concepts and methods. Psychological Assessment, 7(3), 238–247. https://doi.org/10.1037/1040-3590.7.3.238 Health Research Board (n.d.) Declaration on Research Assessment. Available from: https://www.hrb.ie/funding/funding-schemes/before-you-apply/how-we-assess-applications/declaration-on-research-assessment/ Healy, K. (2018). Data visualization: A practical introduction. Princeton University Press. Hendriks, F., Kienhues, D., & Bromme, R. (2016). Trust in science and the science of trust. In Trust and communication in a digitized world (S. 143–159). Springer. Henrich, J. (2020). The weirdest people in the world: How the west became psychologically peculiar and particularly prosperous.* Farrar, Straus and Giroux. Henrich, J., Heine, S. J., & Norenzayan, A. (2010). The weirdest people in the world?. Behavioral and brain sciences, 33(2-3), 61-83. https://doi.org/10.1017/S0140525X0999152X Herrmannova, D., & Knoth, P. (2016). Semantometrics Towards Full text-based Research Evaluation. https://arxiv.org/pdf/1605.04180.pdf Higgins, J.P.T., Thomas, J., Chandler, J., Cumpston, M., Li, T., Page, M.J., Welch, V.A. (Eds). (2019). Cochrane Handbook for Systematic Reviews of Interventions. 2nd Edition. Chichester, UK: John Wiley & Sons. Himmelstein, D. S., Rubinetti, V., Slochower, D. R., Hu, D., Malladi, V. S., Greene, C. S., & Gitter, A. (2019). Open collaborative writing with Manubot. PLOS Computational Biology, 15(6), e1007128. https://doi.org/10.1371/journal.pcbi.1007128 Hirsch, J. E. (2005). An index to quantify an individual’s scientific research output. Proceedings of the National Academy of Sciences, 102(46), 16569-16572. https://doi.org/10.1073/pnas.0507655102 Hitchcock, C., Meyer, A., Rose, D., & Jackson, R. (2002). Providing new access to the general curriculum: Universal design for learning. Teaching exceptional children, 35(2), 8-17. https://www.proquest.com/scholarly-journals/providing-new-access-general-curriculum/docview/201139970/se-2?accountid=8630 Hoekstra, R., Kiers, H., & Johnson, A. (2012). Are assumptions of well-known statistical techniques checked, and why (not)?. Frontiers in Psychology, 3(137), 1-9. https://doi.org/10.3389/fpsyg.2012.00137 Hoijtink, H., Mulder, J., van Lissa, C., & Gu, X. (2019). A tutorial on testing hypotheses using the Bayes factor. Psychological Methods, 24(5), 539–556. https://doi.org/10.1037/met0000201 Holcombe, A. O. (2019). Contributorship, not authorship: Use CRediT to indicate who did what. Publications, 7(3), 48. 
https://doi.org/10.3390/publications7030048 Holcombe, A. O., Kovacs, M., Aust, F., & Aczel, B. (2020). Documenting contributions to scholarly articles using CRediT and tenzing. Plos one, 15(12), e0244611. Homepage. (n.d.). Open Science MOOC. Retrieved 5 June 2021, from https://opensciencemooc.eu/ Houtkoop, B. L., Chambers, C., Macleod, M. Bishop, D. V. M. Nichols, T. E., & Wagenmekers, E.-J. (2018). Data sharing in psychology: A survey on barriers and preconditions. Advances in Methods and Practices in Psychological Science, 1(1), 70.85. https://doi.org/10.1177/2515245917751886 Huber, B., Barnidge, M., Gil de ZĂșñiga, H., & Liu, J. (2019). Fostering public trust in science: The role of social media. Public understanding of science, 28(7), 759-777. https://doi.org/10.1177/0963662519869097 Huelin, R., Iheanacho, I., Payne, K., & Sandman, K. (2015). What’s in a name? Systematic and non-systematic literature reviews, and why the distinction matters. The evidence Forum, 34-37. Retrieved from: https://www.evidera.com/wp-content/uploads/2015/06/Whats-in-a-Name-Systematic-and-Non-Systematic-Literature-Reviews-and-Why-the-Distinction-Matters.pdf HĂŒffmeier, J., Mazei, J., & Schultze, T. (2016). Reconceptualizing replication as a sequence of different studies: A replication typology. Journal of Experimental Social Psychology, 66, 81-92. https://doi.org/10.1016/j.jesp.2015.09.009 Hultsch, D. F., MacDonald, S. W., & Dixon, R. A. (2002). Variability in reaction time performance of younger and older adults. The Journals of Gerontology Series B: Psychological Sciences and Social Sciences, 57(2), P101-P115. https://doi.org/10.1093/geronb/57.2.P101 Hurlbert, S. H. (1984). Pseudoreplication and the Design of Ecological Field Experiments. Ecological Monographs, 54(2), 187–211. https://doi.org/10.2307/1942661 Ikeda, A., Xu, H., Fuji, N., Zhu, S., & Yamada, Y. (2019). Questionable research practices following pre-registration. Japanese Psychological Review, 62, 281–295. International Committee of Medical Journal Editors [ICMJE]. (2019). Recommendations for the conduct, reporting, eduting, and publication of scholarly work in medical journals. http://www.icmje.org/icmje-recommendations.pdf ISO. (1993). Guide to the Expression of Uncertainty in Measurement. 1st ed. Geneva: International Organization for Standardization. Ioannidis, J. P. (2005). Why most published research findings are false. PLoS medicine, 2(8), e124.https://doi.org/10.1371/journal.pmed.0020124 Ioannidis, J. P., Fanelli, D., Dunne, D. D., & Goodman, S. N. (2015). Meta-research: evaluation and improvement of research methods and practices. PLoS Biology, 13(10), e1002264. https://doi.org/10.1371/journal.pbio.1002264 JabRef Development Team (2021). JabRef - An open-source, cross-platform citation and reference management software. https://www.jabref.org Jacobson, D., & Mustafa, N. (2019). Social Identity Map: A Reflexivity Tool for Practicing Explicit Positionality in Critical Qualitative Research. International Journal of Qualitative Methods, 18, 1609406919870075. https://doi.org/10.1177/1609406919870075 Jafar, A. J. N. (2018). What is positionality and should it be expressed in quantitative studies? Emergency Medicine Journal, 35(5), 323–324. https://doi.org/10.1136/emermed-2017-207158 James, K. L., Randall, N. P., & Haddaway, N. R. (2016). A methodology for systematic mapping in environmental sciences. Environmental evidence, 5(1), 1-13. https://doi.org/ 10.1186/s13750-016-0059-6 Jannot, A. S., Agoritsas, T., Gayet-Ageron, A., & Perneger, T. V. 
(2013). Citation bias favoring statistically significant studies was present in medical research. Journal of clinical epidemiology, 66(3), 296-301. https://doi.org/10.1016/j.jclinepi.2012.09.015. JASP Team (2020). JASP (Version 0.14.1)[Computer software] John, L. K., Loewenstein, G., & Prelec, D. (2012). Measuring the prevalence of questionable research practices with incentives for truth telling. Psychological Science, 23(5), 524–532. https://doi.org/10.1177/0956797611430953 Jones, A., Dr, Duckworth, J., & Christiansen, P. (2020, June 29). May I have your attention, please? Methodological and Analytical Flexibility in the Addiction Stroop. https://doi.org/10.31234/osf.io/ws8xp Joseph, T. D., & Hirshfield, L. E. (2011). ‘Why don’t you get somebody new to do it?’Race and cultural taxation in the academy. Ethnic and Racial Studies, 34(1), 121-141. https://doi.org/10.1080/01419870.2010.496489 Kalliamvakou, E., Gousios, G., Blincoe, K., Singer, L., German, D. M., & Damian, D. (2014). The promises and perils of mining github. In Proceedings of the 11th working conference on mining software repositories (pp. 92-101). Kathawalla, U., Silverstein, P., & Syed, M. (2020). Easing into Open Science: A Guide for Graduate Students and Their Advisors*. Collabra: Psychology.* https://psyarxiv.com/vzjdp Kelley, T. L. (1927). Interpretation of educational measurements. New York: Macmillan. Kerr, N. L. (1998). HARKing: Hypothesizing after the results are known. Personality and social psychology review, 2(3), 196-217. https://doi.org/10.1207/s15327957pspr0203_4 Kerr, N. L., Ao, X., Hogg, M. A., & Zhang, J. (2018). Addressing replicability concerns via adversarial collaboration: Discovering hidden moderators of the minimal intergroup discrimination effect. Journal of Experimental Social Psychology, 78, 66-76. https://doi.org/10.1016/j.jesp.2018.05.001 Kidwell, M. C., Lazarević, L. B., Baranski, E., Hardwicke, T. E., Piechowski, S., Falkenberg, L. S., … & Nosek, B. A. (2016). Badges to acknowledge open practices: A simple, low-cost, effective method for increasing transparency. PLoS biology, 14(5), e1002456. https://doi.org/10.1371/journal.pbio.1002456 Kienzler, H., & Fontanesi, C. (2017). Learning through inquiry: A global health hackathon. Teaching in Higher Education, 22(2), 129-142. https://doi.org/10.1080/13562517.2016.1221805 Kiernan, C. (1999). Participation in research by people with learning disability: Origins and issues. British Journal of Learning Disabilities, 27(2), 43–47. https://doi.org/10.1111/j.1468-3156.1999.tb00084.x King, G. (1995). Replication, replication. PS: Political Science & Politics, 28(3), 444–452. https://doi.org/10.2307/420301 Kitzes, J., Turek, D., Deniz, F. (2017). The practice of reproducible research: Case studies and lessons from the data-intensive sciences. University of California Press. Klein, R. A., Ratliff, K. A., Vianello, M., Adams, R. B., BahnĂ­k, Ć ., Bernstein, M. J., et al. (2014). Investigating variation in replicability: A “many labs” replication project. Social Psychology, 45, 142–152. https://doi.org/10.1027/1864-9335/a000178 Klein, R. A., Vianello, M., Hasselman, F., Adams, B. G., Adams, R. B., Alper, S., 
 Nosek, B. A. (2018). Many Labs 2: Investigating Variation in Replicability Across Samples and Settings. Advances in Methods and Practices in Psychological Science, 1(4), 443–490. https://doi.org/10.1177/2515245918810225 Kleinberg, B., Mozes, M., van der Toolen, Y., & verschuere, B. (2017, June 3). NETANOS - Named entity-based Text Anonymization for Open Science. Retrieved from https://osf.io/w9nhb/ Knoth, P., & Herrmannova, D. (2014). Towards semantometrics: A new semantic similarity based measure for assessing a research publication’s contribution. D-Lib Magazine, 20(11), 8. https://doi.org/10.1045/november14-knoth Knowledge, F. O. (2020, December 3). Preregistration Pledge. Free Our Knowledge. https://freeourknowledge.org/2020-12-03-preregistration-pledge/ Koole, S. L., & Lakens, D. (2012). Rewarding replications: A sure and simple way to improve psychological science. Perspectives on Psychological Science, 7(6), 608-614 .https://doi.org/10.1177/1745691612462586 Kreuter, F. (Ed.). (2013). Improving Surveys with Paradata. doi:10.1002/9781118596869 Kruschke, J. K. (2015). Doing Bayesian data analysis: A tutorial with R, JAGS, and Stan (2nd ed.). Academic Press. Kuhn, T. (1962). The Structure of Scientific Revolutions. University of Chicago Press. ISBN 978-0226458083. Kukull, W.A. & Ganguli, M. (2012). Generalizability: The trees, the forest, and the low-hanging fruit. Neurology, 78(23), 1886-1891. https://doi.org/10.1212/WNL.0b013e318258f812 Lakens, D. (2014). Performing high-powered studies efficiently with sequential analyses. European Journal of Social Psychology, 44(7), 701–710. https://doi.org/10.1002/ejsp.2023 Lakens, D. (2020). Pandemic researchers — recruit your own best critics. Nature, 581, 121. Lakens, D. (2021a, January 4). Sample Size Justification. https://doi.org/10.31234/osf.io/9d3yf Lakens, D. (2021b). The practical alternative to the p-value is the correctly used p-value. Lakens, D., Scheel, A. M., & Isager, P. M. (2018). Equivalence testing for psychological research: A tutorial. Advances in Methods and Practices in Psychological Science, 1(2), 259-269. https://doi.org/10.1177/2515245918770963 Lakens, D., McLatchie, N., Isager, P. M., Scheel, A. M., & Dienes, Z. (2020). Improving inferences about null effects with Bayes factors and equivalence tests. The Journals of Gerontology: Series B, 75(1), 45-57. https://doi.org/10.1093/geronb/gby065 Laine, H. (2017) Afraid of scooping – Case study on researcher strategies against fear of scooping in the context of open science. Data Science Journal, 16(29), 1–14. https://doi.org/10.5334/dsj-2017-029 Largent, E. A., & Snodgrass, R. T. (2016). Blind peer review by academic journals. In C. T. Robertson and A. S. Kesselheim (Eds.) Blinding as a solution to bias: Strengthening biomedical science, forensic science, and law, (pp. 75-95). Academic Press. https://doi.org/10.1016/B978-0-12-802460-7.00005-X Lazic, S. E. (2019). Genuine replication and pseudoreplication: What’s the difference? BMJ Open Science. https://blogs.bmj.com/openscience/2019/09/16/genuine-replication-and-pseudoreplication-whats-the-difference/ Leavy, P. (2017). Research design: Quantitative, qualitative, mixed methods, arts-based, and community-based participatory research approaches. The Guilford Press. Leavens, D. A., Bard, K. A., & Hopkins, W. D. (2010). BIZARRE chimpanzees do not represent “the chimpanzee”. Behavioral and Brain Sciences, 33(2-3), 100-101. https://doi.org/10.1017/S0140525X10000166 LeBel, E. P., Vanpaemel, W., Cheung, I., & Campbell, L. (2017). 
A brief guide to evaluate replications. Meta-Psychology, 3. https://doi.org/10.15626/MP.2018.843 LeBel, E. P., McCarthy, R. J., Earp, B. D., Elson, M., & Vanpaemel, W. (2018). A unified framework to quantify the credibility of scientific findings. Advances in Methods and Practices in Psychological Science, 1(3), 389-402. https://doi.org/10.1177/2515245918787489 Ledgerwood, A., Hudson, S. T. J., Lewis, N. A., Jr., Maddox, K. B., Pickett, C., Remedios, J. D., 
 Wilkins, C. L. (2021, January 11). The Pandemic as a Portal: Reimagining Psychological Science as Truly Open and Inclusive. https://doi.org/10.31234/osf.io/gdzue Lee, R.M. (1993). Doing research on sensitive topics. London: Sage. Levitt, H. M., Motulsky, S. L., Wertz, F. J., Morrow, S. L., & Ponterotto, J. G. (2017). Recommendations for designing and reviewing qualitative research in psychology: Promoting methodological integrity. Qualitative psychology, 4(1), 2. https://doi.org/10.1037/qup0000082 Lewandowsky, S., & Bishop, D. (2016). Research integrity: Don’t let transparency damage science. Nature News, 529(7587), 459. https://doi.org/10.1038/529459a LibGuides. (n.d.). Measuring your research impact: i10-Index. https://guides.library.cornell.edu/impact/author-impact-10. Lieberman, E. (2020). Research Cycles. In C. Elman, J. Gerring, & J. Mahoney (Eds.), The Production of Knowledge: Enhancing Progress in Social Science (Strategies for Social Inquiry, pp. 42-70). Cambridge: Cambridge University Press. https://doi.org10.1017/9781108762519.003 Lind, F., Gruber, M., & Boomgaarden, H. G. (2017). Content analysis by the crowd: Assessing the usability of crowdsourcing for coding latent constructs. Communication Methods and Measures, 11(3), 191–209. https://doi.org/10.1080/19312458.2017.1317338 Lindsay, D. S. (2015). Replication in Psychological Science [Editorial]. Psychological Science, 26(12), 1827-1832. https://doi.org/10.1177/0956797615616374 Lindsay, D. S. (2020). Seven steps toward transparency and replicability in psychological science. Canadian Psychology/Psychologie canadienne., 61(4), 310–317. https://doi.org/10.1037/cap0000222 Liu, Y. et al. (2020). Replication markets: Results, lessons, challenges and opportunities in AI replication. arXiv:2005.04543 Lu, J., Qiu, Y., & Deng, A. (2018). A note on Type S/M errors in hypothesis testing. British Journal of Mathematical and Statistical Psychology, 72(1), 1-17. https://doi.org/10.1111/bmsp.12132 LĂŒdtke, O., Ulitzsch, E., & Robitzsch, A. (2020). A Comparison of Penalized Maximum Likelihood Estimation and Markov Chain Monte Carlo Techniques for Estimating Confirmatory Factor Analysis Models with Small Sample Sizes [Preprint]. PsyArXiv. https://doi.org/10.31234/osf.io/u3qag Lutz, M. (2001). Programming python. O’Reilly Media, Inc. Lyon, L. (2016) Transparency: The Emerging Third Dimension of Open Science and Open Data. LIBER Quarterly, 25(4), 153-171. http://doi.org/10.18352/lq.10113 Makowski, D., Ben-Shachar, M. S., Chen, S. H. A., & LĂŒdecke, D. (2019). Indices of Effect Existence and Significance in the Bayesian Framework. Retrieved from 10.3389/fpsyg.2019.02767 Marwick, B., Boettiger, C., & Mullen, L. (2018). Packaging data analytical work reproducibly using R (and friends). The American Statistician, 72(1), 80-88. https://doi.org/10.1080/00031305.2017.1375986 McElreath, R. (2020). Statistical rethinking: A Bayesian course with examples in R and Stan (2nd ed.). Taylor and Francis, CRC Press. McNutt, M. K., Bradford, M., Drazen, J. M., Hanson, B., Howard, B., Jamieson, K. H., Kiermer, V., Marcus, E., Pope, B. K., Schekman, R., Swaminathan, S., Stang, P. J., and Verma, I. M. (2018). Transparency in authors’ contributions and responsibilities to promote integrity in scientific publication. Proceedings of the National Academy of Sciences of the United States of America, 115(11), 2557-2560. https://doi.org/10.1073/pnas.1715374115 Medin, D. L. (2012). Rigor without rigor mortis: The APS Board discusses research integrity. 
APS Observer, 25 (5-9), 27-28. https://www.psychologicalscience.org/observer/scientific-rigor Mellers, B., Hertwig, R., & Kahneman, D. (2001). Do frequency representations eliminate conjunction effects? An exercise in adversarial collaboration. Psychological Science, 12(4), 269-275. https://doi.org/10.1111/1467-9280.00350 Mertens, G., & Krypotos, A. M. (2019). Preregistration of analyses of preexisting data. Psychologica Belgica, 59(1), 338. Merton, R.K. (1938). Science and the social order. Philosophy of Science, 5(3), 321–337 https://doi.org/10.1086/286513 Merton, R. K. (1942). A note on science and democracy. Journal of Legal and Political Sociology, 1, 115–126. https://doi.org/10.1515/9783110375008-013 Merton, R.K. (1968). The Matthew Effect in Science. Science, 159 (3810), 56–63. https:/doi.org/10.1126/science.159.3810.56 Messick, S. (1995). Standards of validity and the validity of standards in performance assessment. Educational measurement: Issues and practice, 14(4), 5-8. https://doi.org/10.1111/j.1745-3992.1995.tb00881.x Michener W.K. (2015). Ten simple rules for creating a good data management plan. PLoS Computational Biology, 11(10), e1004525. https:/doi.org/10.1371/journal.pcbi.1004525 Moher, D., Bouter, L., Kleinert, S., Glasziou, P., Sham, M. H., Barbour, V., … & Dirnagl, U. (2020). The Hong Kong Principles for assessing researchers: Fostering research integrity. PLoS Biology, 18(7), e3000737. https://doi.org/10.1371/journal.pbio.3000737 Moher, D., Liberati, A., Tetzlaff, J., & Altman, D. (2009). Preferred Reporting Items for Systematic Reviews and Meta-Analyses: The PRISMA Statement. PLoS Medicine, 6(7), e1000097. https://doi.org/10.1371/journal.pmed.1000097 Moher, D., Naudet, F., Cristea, I. A., Miedema, F., Ioannidis, J. P. A., & Goodman, S. N. (2018). Assessing scientists for hiring, promotion, and tenure. PLOS Biology, 16(3), e2004089. https://doi.org/10.1371/journal.pbio.2004089 Moshontz, H., Campbell, L., Ebersole, C. R., IJzerman, H., Urry, H. L., Forscher, P. S., … & Chartier, C. R. (2018). The Psychological Science Accelerator: Advancing psychology through a distributed collaborative network. Advances in Methods and Practices in Psychological Science, 1(4), 501-515. https://doi.org/10.1177/2515245918797607 Monroe, K. R. (2018). The rush to transparency: DA-RT and the potential dangers for qualitative research. Perspectives on Politics, 16(1), 141–148. https://doi.org/10.1017/S153759271700336X Moretti, M. (2020, August 25). Beyond Open-washing: Are Narratives the Future of Open Data Portals? Medium. https://medium.com/nightingale/beyond-open-washing-are-stories-and-narratives-the-future-of-open-data-portals-93228d8882f3 Morgan, C. (1998). The DOI (Digital Object Identifier). Serials, 11(1), pp.47–51. http://doi.org/10.1629/1147 Munn, Z., Peters, M. D., Stern, C., Tufanaru, C., McArthur, A., & Aromataris, E. (2018). Systematic review or scoping review? Guidance for authors when choosing between a systematic or scoping review approach. BMC medical research methodology, 18(1), 1-7. https://doi.org/10.1186/s12874-018-0611-x Muthukrishna, M., Bell, A. V., Henrich, J., Curtin, C. M., Gedranovich, A., McInerney, J., & Thue, B. (2020). Beyond Western, Educated, Industrial, Rich, and Democratic (WEIRD) psychology: Measuring and mapping scales of cultural and psychological distance. Psychological Science, 31, 678-701. 
https://doi.org/10.1177/0956797620916782 National Academies of Sciences, Engineering, and Medicine, Policy and Global Affairs, Committee on Science, Engineering, Medicine, and Public Policy, Board on Research Data and Information, Division on Engineering and Physical Sciences, Committee on Applied and Theoretical Statistics, Board on Mathematical Sciences and Analytics, Division on Earth and Life Studies, Nuclear and Radiation Studies Board, Division of Behavioral and Social Sciences and Education, Committee on National Statistics, Board on Behavioral, Cognitive, and Sensory Sciences, Committee on Reproducibility and Replicability in Science. (2019). Reproducibility and Replicability in Science. In Understanding Reproducibility and Replicability. Washington (DC): National Academies Press (US), Available from: https://www.ncbi.nlm.nih.gov/books/NBK547546/ Nature (n.d.). Recommended Data Repositories. Scientific Data. https://www.nature.com/sdata/policies/repositories Naudet, F., Ioannidis, J., Miedema, F., Cristea, I. A., Goodman, S. N., & Moher, D. (2018). Six principles for assessing scientists for hiring, promotion, and tenure. Impact of Social Sciences Blog. Navarro, D. (2020). Paths in strange spaces: A comment on preregistration. Nelson, L.D., Simmons, J.P. & Simonsohn, U. (2012) Let’s Publish Fewer Papers, Psychological Inquiry, 23 (3), 291-293, https://doi.org/10.1080/1047840X.2012.705245 Neuroskeptic. (2012). The nine circles of scientific hell. (2012). Perspectives on Psychological Science, 7(6), 643–644. https://doi.org/10.1177/1745691612459519 Nichols, T. E., Das, S., Eickhoff, S. B., Evans, A. C., Glatard, T., Hanke, M., … & Yeo, B. T. (2017). Best practices in data analysis and sharing in neuroimaging using MRI. Nature neuroscience, 20(3), 299-303. https://doi.org/10.1038/nn.4500 Nickerson, R. S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of general psychology, 2(2), 175-220. https://doi.org/10.1037/1089-2680.2.2.175 Nieuwenhuis, S., Forstmann, B. U., & Wagenmakers, E. J. (2011). Erroneous analyses of interactions in neuroscience: a problem of significance. Nature Neuroscience, 14(9), 1105-1107. https://doi.org/10.1038/nn.2886 NIHR (2021) https://www.learningforinvolvement.org.uk/?opportunity=nihr-guidance-on-co-producing-a-research-project Nittrouer, C., Hebl, M., Ashburn-Nardo, L., Trump-Steele, R., Lane, D., Valian, V. (2018). Gender disparities in colloquium speakers. Proceedings of the National Academy of Sciences Jan, 115 (1) 104-108, DOI: 10.1073/pnas.1708414115 Nosek, B. A., & Bar-Anan, Y. (2012). Scientific utopia: I. Opening scientific communication. Psychological Inquiry, 23(3), 217-243. https://doi.org/10.1080/1047840X.2012.692215 Nosek, B. A., Ebersole, C. R., DeHaven, A. C., & Mellor, D. T. (2018). The preregistration revolution. Proceedings of the National Academy of Sciences, 115(11), 2600-2606. https://doi.org/10.1073/pnas.1708274114 Nosek, B.A. & Errington, T.M. (2020) What is replication? PlosBiology, 18(3), e3000691. https://doi.org/10.1371/journal.pbio.3000691 Nosek, B. A., & Lakens, D. (2014). Registered Reports. Social Psychology, 45, 137-141. https://doi.org/10.1027/1864-9335/a000192. Nosek, B. A., Spies, J. R., & Motyl, M. (2012). Scientific utopia: II. Restructuring incentives and practices to promote truth over publishability. Perspectives on Psychological Science, 7(6), 615-631. https://doi.org/10.1177%2F1745691612459058 Noy, N. F., & McGuinness, D. L. (2001). 
Nüst, D., Boettiger, C., & Marwick, B. (2018). How to read a research compendium. arXiv preprint arXiv:1806.09525.
O’Dea, R. E., Parker, T. H., Chee, Y. E., Culina, A., Drobniak, S. M., Duncan, D. H., Fidler, F., Gould, E., Ihle, M., Kelly, C. D., Lagisz, M., Roche, D. G., Sánchez-Tójar, A., Wilkinson, D. P., Wintle, B. C., & Nakagawa, S. (2021). Towards open, reliable, and transparent ecology and evolutionary biology. BMC Biology, 19(1). https://doi.org/10.1186/s12915-021-01006-3
O’Grady, C. (2020). Psychology’s replication crisis inspires ecologists to push for more reliable research. Science. https://doi.org/10.1126/science.abg0894
Obels, P., Lakens, D., Coles, N. A., Gottfried, J., & Green, S. A. (2020). Analysis of open data and computational reproducibility in registered reports in psychology. Advances in Methods and Practices in Psychological Science, 3(2), 229-237.
Oberauer, K., & Lewandowsky, S. (2019). Addressing the theory crisis in psychology. Psychonomic Bulletin & Review, 26(5), 1596-1618. https://doi.org/10.3758/s13423-019-01645-2
Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716. https://doi.org/10.1126/science.aac4716
OpenAIRE. (2020). Amnesia: High accuracy data anonymisation. https://amnesia.openaire.eu/
Open Source Initiative (n.d.). The Open Source Definition. https://opensource.org/osd
Orben, A. (2019). A journal club to fix science. Nature, 573(7775), 465-466. https://doi.org/10.1038/d41586-019-02842-8
Ottmann, G., Laragy, C., Allen, J., & Feldman, P. (2011). Coproduction in practice: Participatory action research to develop a model of community aged care. Systemic Practice and Action Research, 24, 413-427. https://doi.org/10.1007/s11213-011-9192-x
Oxford Dictionaries. (2017). Bias—definition of bias in English. https://en.oxforddictionaries.com/definition/bias
Oxford Reference. (2017). Reflexivity. https://www.oxfordreference.com/view/10.1093/acref/9780199599868.001.0001/acref-9780199599868-e-1530
Padilla, A. M. (1994). Research news and comment: Ethnic minority scholars, research, and mentoring: Current and future issues. Educational Researcher, 23(4), 24-27. https://doi.org/10.3102/0013189X023004024
Page, M. J., Moher, D., Bossuyt, P. M., Boutron, I., Hoffmann, T. C., Mulrow, C. D., … & McKenzie, J. E. (2021). PRISMA 2020 explanation and elaboration: Updated guidance and exemplars for reporting systematic reviews. British Medical Journal, 372. https://doi.org/10.1136/bmj.n160
Patience, G. S., Galli, F., Patience, P. A., & Boffito, D. C. (2019). Intellectual contributions meriting authorship: Survey results from the top cited authors across all science categories. PLoS One, 14(1), e0198117. https://doi.org/10.1371/journal.pone.0198117
Pavlov, Y. G., Adamian, N., Appelhoff, S., Arvaneh, M., Benwell, C., Beste, C., … Mushtaq, F. (2020, November 27). #EEGManyLabs: Investigating the replicability of influential EEG experiments. https://doi.org/10.31234/osf.io/528nr
PCI (n.d.). PCI in a few lines. Peer Community In. https://peercommunityin.org/
Peer, E., Brandimarte, L., Samat, S., & Acquisti, A. (2017). Beyond the Turk: Alternative platforms for crowdsourcing behavioral research. Journal of Experimental Social Psychology, 70, 153-163. https://doi.org/10.1016/j.jesp.2017.01.006
Peng, R. D. (2011). Reproducible research in computational science. Science, 334(6060), 1226-1227. https://doi.org/10.1126/science.1213847
Percie du Sert, N., Hurst, V., Ahluwalia, A., Alam, S., Avey, M. T., Baker, M., … & Würbel, H. (2020). The ARRIVE guidelines 2.0: Updated guidelines for reporting animal research. Journal of Cerebral Blood Flow & Metabolism, 40(9), 1769-1777. https://doi.org/10.1177/0271678X20943823
Pernet, C. R. (2015). Null hypothesis significance testing: A short tutorial. F1000Research, 4, 621. https://doi.org/10.12688/f1000research.6963.3
Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., & Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6(1), 103. https://doi.org/10.1038/s41597-019-0104-8
Pernet, C., Garrido, M. I., Gramfort, A., Maurits, N., Michel, C. M., Pang, E., … & Puce, A. (2020). Issues and recommendations from the OHBM COBIDAS MEEG committee for reproducible EEG and MEG research. Nature Neuroscience, 23(12), 1473-1483. https://doi.org/10.1038/s41593-020-00709-0
Peterson, D., & Panofsky, A. (2020, August 4). Metascience as a scientific social movement. https://doi.org/10.31235/osf.io/4dsqa
Petre, M., & Wilson, G. (2014). Code review for and by scientists. arXiv preprint arXiv:1407.5648.
Popper, K. (1959). The logic of scientific discovery. London, United Kingdom: Routledge.
Posselt, J. R. (2020). Equity in science: Representation, culture, and the dynamics of change in graduate education. Stanford University Press. https://books.google.de/books?id=2CjwDwAAQBAJ
Pownall, M., Talbot, C. V., Henschel, A., Lautarescu, A., Lloyd, K., Hartmann, H., … Siegel, J. A. (2020, October 13). Navigating open science as early career feminist researchers. https://doi.org/10.31234/osf.io/f9m47
Psychological Science Accelerator. (n.d.). https://psysciacc.org/
R Core Team (2020). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. https://www.R-project.org/
Rakow, T., Thompson, V., Ball, L., & Markovits, H. (2014). Rationale and guidelines for empirical adversarial collaboration: A Thinking & Reasoning initiative. Thinking & Reasoning, 21(2), 167-175. https://doi.org/10.1080/13546783.2015.975405
RepliCATS project. (2020). Collaborative Assessment for Trustworthy Science. The University of Melbourne. https://replicats.research.unimelb.edu.au/
ReproducibiliTea. (n.d.). Welcome to ReproducibiliTea. https://reproducibilitea.org/
Research Data Alliance (2020). Data management plan (DMP) common standard. https://github.com/RDA-DMP-Common/RDA-DMP-Common-Standard
RIOT Science Club. (2021, May 28). RIOT Science Club. http://riotscience.co.uk/
Rodriguez, J. K., & Günther, E. A. (2020, October 14). What’s wrong with manels and what can we do about them. The Conversation. https://theconversation.com/whats-wrong-with-manels-and-what-can-we-do-about-them-148068
Rolls, L., & Relf, M. (2006). Bracketing interviews: Addressing methodological challenges in qualitative interviewing in bereavement and palliative care. Mortality, 11(3), 286-305. https://doi.org/10.1080/13576270600774893
Rose, D. (2000). Universal design for learning. Journal of Special Education Technology, 15(3), 45-49. https://doi.org/10.1177/016264340001500307
Rose, D. (2018). Participatory research: Real or imagined. Social Psychiatry and Psychiatric Epidemiology, 53, 765-771. https://doi.org/10.1007/s00127-018-1549-3
Rose, D. H., & Meyer, A. (2002). Teaching every student in the digital age: Universal design for learning. Association for Supervision and Curriculum Development.
Ross-Hellauer, T. (2017). What is open peer review? A systematic review [version 2; peer review: 4 approved]. F1000Research, 6, 588. https://doi.org/10.12688/f1000research.11369.2
Rossner, M., Van Epps, H., & Hill, E. (2008). Show me the data. https://doi.org/10.1083/jcb.200711140
Rothstein, H. R., Sutton, A. J., & Borenstein, M. (2005). Publication bias in meta-analysis. In H. R. Rothstein, A. J. Sutton, & M. Borenstein (Eds.), Publication bias in meta-analysis: Prevention, assessment and adjustments (pp. 1-7). John Wiley & Sons. https://doi.org/10.1002/0470870168.ch1
Rowhani-Farid, A., Aldcroft, A., & Barnett, A. G. (2020). Did awarding badges increase data sharing in BMJ Open? A randomized controlled trial. Royal Society Open Science, 7(3), 191818. https://doi.org/10.1098/rsos.191818
Rubin, M. (2021). Explaining the association between subjective social status and mental health among university students using an impact ratings approach. SN Social Sciences, 1(1), 1-21. https://doi.org/10.1007/s43545-020-00031-3
Rubin, M., Evans, O., & McGuffog, R. (2019). Social class differences in social integration at university: Implications for academic outcomes and mental health. In J. Jetten & K. Peters (Eds.), The social psychology of inequality (pp. 87-102). Springer. https://doi.org/10.1007/978-3-030-28856-3_6
Sagarin, B. J., Ambler, J. K., & Lee, E. M. (2014). An ethical approach to peeking at data. Perspectives on Psychological Science, 9(3), 293-304. https://doi.org/10.1177/1745691614528214
San Francisco Declaration on Research Assessment (DORA). (n.d.). Retrieved February 18, 2021, from https://sfdora.org/
Sato, T. (1996). Type I and Type II error in multiple comparisons. The Journal of Psychology, 130(3), 293-302. https://doi.org/10.1080/00223980.1996.9915010
Schafersman, S. D. (1997). An introduction to science. https://www.geo.sunysb.edu/esp/files/scientific-method.html
Schmidt, F. L., & Hunter, J. E. (2014). Methods of meta-analysis: Correcting error and bias in research findings (3rd ed.). Thousand Oaks, CA: Sage.
Schmidt, R. H. (1987). A worksheet for authorship of scientific articles. The Bulletin of the Ecological Society of America, 68, 8-10. http://www.jstor.org/stable/20166549
Schneider, J., Merk, S., & Rosman, T. (2019). (Re)Building trust? Investigating the effects of open science badges on perceived trustworthiness in journal articles. https://doi.org/10.17605/OSF.IO/VGBRS
Schönbrodt, F. D., Wagenmakers, E.-J., Zehetleitner, M., & Perugini, M. (2017). Sequential hypothesis testing with Bayes factors: Efficiently testing mean differences. Psychological Methods, 22(2), 322-339. https://doi.org/10.1037/met0000061
Schönbrodt, F. (2019). Training students for the open science future. Nature Human Behaviour, 3(10), 1031. https://doi.org/10.1038/s41562-019-0726-z
Schuirmann, D. J. (1987). A comparison of the two one-sided tests procedure and the power approach for assessing the equivalence of average bioavailability. Journal of Pharmacokinetics and Biopharmaceutics, 15, 657-680. https://doi.org/10.1007/BF01068419
Schulz, K. F., & Grimes, D. A. (2005). Multiplicity in randomised trials I: Endpoints and treatments. The Lancet, 365(9470), 1591-1595. https://doi.org/10.1016/S0140-6736(05)66461-6
Schulz, K. F., Altman, D. G., & Moher, D. (2010). CONSORT 2010 statement: Updated guidelines for reporting parallel group randomised trials. Trials, 11(1), 1-8. https://doi.org/10.1186/1745-6215-11-32
Schwarz, N., & Strack, F. (2014). Does merely going through the same moves make for a “direct” replication?: Concepts, contexts, and operationalizations. Social Psychology, 45(4), 305-306.
Center for Open Science. (n.d.). Open science badges. Retrieved February 8, 2021, from https://www.cos.io/initiatives/badges
Percie du Sert, N., Hurst, V., Ahluwalia, A., Alam, S., Avey, M. T., Baker, M., Browne, W. J., Clark, A., Cuthill, I. C., Dirnagl, U., Emerson, M., Garner, P., Holgate, S. T., Howells, D. W., Karp, N. A., Lazic, S. E., Lidster, K., MacCallum, C. J., Macleod, M., … Würbel, H. (2020). The ARRIVE guidelines 2.0: Updated guidelines for reporting animal research. PLOS Biology, 18(7), e3000410. https://doi.org/10.1371/journal.pbio.3000410
Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Houghton Mifflin and Company.
Sharma, M., Sarin, A., Gupta, P., Sachdeva, S., & Desai, A. (2014). Journal impact factor: Its use, significance and limitations. World Journal of Nuclear Medicine, 13(2), 146. https://doi.org/10.4103/1450-1147.139151
Siddaway, A. P., Wood, A. M., & Hedges, L. V. (2019). How to do a systematic review: A best practice guide for conducting and reporting narrative reviews, meta-analyses, and meta-syntheses. Annual Review of Psychology, 70, 747-770. https://doi.org/10.1146/annurev-psych-010418-102803
Sijtsma, K. (2016). Playing with data—Or how to discourage questionable research practices and stimulate researchers to do things right. Psychometrika, 81(1), 1-15. https://doi.org/10.1007/s11336-015-9446-0
Silberzahn, R., Uhlmann, E. L., Martin, D. P., Anselmi, P., Aust, F., Awtrey, E., Bahník, Š., Bai, F., Bannard, C., Bonnier, E., Carlsson, R., Cheung, F., Christensen, G., Clay, R., Craig, M. A., Dalla Rosa, A., Dam, L., Evans, M. H., Flores Cervantes, I., … Nosek, B. A. (2018). Many analysts, one data set: Making transparent how variations in analytic choices affect results. Advances in Methods and Practices in Psychological Science, 1(3), 337-356. https://doi.org/10.1177/2515245917747646
Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22(11), 1359-1366. https://doi.org/10.1177/0956797611417632
Simmons, J., Nelson, L., & Simonsohn, U. (2021). Pre-registration: Why and how. Journal of Consumer Psychology, 31(1), 151-162. https://doi.org/10.1002/jcpy.1208
Simons, D. J., Shoda, Y., & Lindsay, D. S. (2017). Constraints on generality (COG): A proposed addition to all empirical papers. Perspectives on Psychological Science, 12(6), 1123-1128. https://doi.org/10.1177/1745691617708630
Simonsohn, U., Nelson, L. D., & Simmons, J. P. (2014a). P-curve: A key to the file-drawer. Journal of Experimental Psychology: General, 143(2), 534.
Simonsohn, U., Nelson, L. D., & Simmons, J. P. (2014b). p-curve and effect size: Correcting for publication bias using only significant results. Perspectives on Psychological Science, 9(6), 666-681. https://doi.org/10.1177/1745691614553988
Simonsohn, U., Simmons, J. P., & Nelson, L. D. (2015). Specification curve: Descriptive and inferential statistics on all reasonable specifications. http://sticerd.lse.ac.uk/seminarpapers/psyc16022016.pdf
Simonsohn, U., Nelson, L. D., & Simmons, J. P. (2019). P-curve won’t do your laundry, but it will distinguish replicable from non-replicable findings in observational research: Comment on Bruns & Ioannidis (2016). PLoS ONE, 14(3), e0213454. https://doi.org/10.1371/journal.pone.0213454
Simonsohn, U., Simmons, J. P., & Nelson, L. D. (2020). Specification curve analysis. Nature Human Behaviour, 4(11), 1208-1214. https://doi.org/10.1038/s41562-020-0912-z
SIPS. (2021). The Society for the Improvement of Psychological Science. https://improvingpsych.org/
Slow Science Academy. (2010). The Slow Science Manifesto. http://slow-science.org/
Smaldino, P. E., & McElreath, R. (2016). The natural selection of bad science. Royal Society Open Science, 3(9), 160384. https://doi.org/10.1098/rsos.160384
Smith, A. C., Merz, L., Borden, J. B., Gulick, C., Kshirsagar, A. R., & Bruna, E. M. (2020, September 2). Assessing the effect of article processing charges on the geographic diversity of authors using Elsevier’s ‘Mirror Journal’ system. https://doi.org/10.31222/osf.io/s7cx4
Smith, A. J., Clutton, R. E., Lilley, E., Hansen, K. E. A., & Brattelid, T. (2018). PREPARE: Guidelines for planning animal research and testing. Laboratory Animals, 52(2), 135-141. https://doi.org/10.1177/0023677217724823
Smith, G. T. (2005). On construct validity: Issues of method and measurement. Psychological Assessment, 17(4), 396-408. https://doi.org/10.1037/1040-3590.17.4.396
Society for Open, Reliable and Transparent Ecology and Evolutionary Biology. (n.d.). SORTEE. Retrieved 5 June 2021, from https://www.sortee.org/
Sorsa, M. A., Kiikkala, I., & Åstedt-Kurki, P. (2015). Bracketing as a skill in conducting unstructured qualitative interviews. Nurse Researcher, 22(4), 8-12. https://doi.org/10.7748/nr.22.4.8.e1317
Spence, J. R., & Stanley, D. J. (2018). Concise, simple, and not wrong: In search of a short-hand interpretation of statistical significance. Frontiers in Psychology, 9, 2185. https://doi.org/10.3389/fpsyg.2018.02185
Stanford Libraries. (n.d.). Data management plans. https://library.stanford.edu/research/data-management-services/data-management-plans
Steegen, S., Tuerlinckx, F., Gelman, A., & Vanpaemel, W. (2016). Increasing transparency through a multiverse analysis. Perspectives on Psychological Science, 11, 702-712. https://doi.org/10.1177/1745691616658637
Steup, M., & Neta, R. (2020, April 11). Epistemology. Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/entries/epistemology/
Stewart, N., Chandler, J., & Paolacci, G. (2017). Crowdsourcing samples in cognitive science. Trends in Cognitive Sciences, 21(10), 736-748. https://doi.org/10.1016/j.tics.2017.06.007
Stodden, V. C. (2011). Trust your science? Open your data and code.
Strathern, M. (1997). ‘Improving ratings’: Audit in the British university system. European Review, 5(3), 305-321. https://doi.org/10.1002/(SICI)1234-981X(199707)5:3<305::AID-EURO184>3.0.CO;2-4
StudySwap. (2021, June 5). StudySwap: A platform for interlab replication, collaboration, and research resource exchange. OSF. https://osf.io/meetings/StudySwap/
Suber, P. (2004). The primacy of authors in achieving open access. Nature, June 10, 2004. (Previous, unabridged version: http://dash.harvard.edu/handle/1/4391161)
Suber, P. (2015). Open access overview. http://legacy.earlham.edu/~peters/fos/overview.htm
SwissRN. (n.d.). Swiss Reproducibility Network. Retrieved 5 June 2021, from https://www.swissrn.org/
Syed, M. (2019). The Open Science Movement is for all of us. PsyArXiv.
Syed, M., & Kathawalla, U. (2020, February 25). Cultural psychology, diversity, and representation in open science. https://doi.org/10.31234/osf.io/t7hp2
Szollosi, A., & Donkin, C. (2019). Arrested theory development: The misguided distinction between exploratory and confirmatory research. PsyArXiv.
Tennant, J., Bielczyk, N. Z., Cheplygina, V., Greshake Tzovaras, B., Hartgerink, C. H. J., Havemann, J., Masuzzo, P., & Steiner, T. (2019). Ten simple rules for researchers collaborating on Massively Open Online Papers (MOOPs) [Preprint]. MetaArXiv. https://doi.org/10.31222/osf.io/et8ak
Tenney, S., & Abdelgawad, I. (2019). Statistical significance. In StatPearls. StatPearls Publishing.
The jamovi project (2020). jamovi (Version 1.2) [Computer Software]. https://www.jamovi.org
The R Foundation (n.d.). The R Project for Statistical Computing. https://www.r-project.org/
Thombs, B. D., Levis, A. W., Razykov, I., Syamchandra, A., Leentjens, A. F., Levenson, J. L., & Lumley, M. A. (2015). Potentially coercive self-citation by peer reviewers: A cross-sectional study. Journal of Psychosomatic Research, 78(1), 1-6. https://doi.org/10.1016/j.jpsychores.2014.09.015
Tierney, W., Hardy III, J. H., Ebersole, C. R., Leavitt, K., Viganola, D., Clemente, E. G., … & Hiring Decisions Forecasting Collaboration. (2020). Creative destruction in science. Organizational Behavior and Human Decision Processes, 161, 291-309. https://doi.org/10.1016/j.obhdp.2020.07.002
Tierney, W., Hardy III, J., Ebersole, C. R., Viganola, D., Clemente, E. G., Gordon, M., … & Culture & Work Morality Forecasting Collaboration. (2021). A creative destruction approach to replication: Implicit work and sex morality across cultures. Journal of Experimental Social Psychology, 93, 104060. https://doi.org/10.1016/j.jesp.2020.104060
Tiokhin, L., Yan, M., & Morgan, T. J. H. (2021). Competition for priority harms the reliability of science, but reforms can help. Nature Human Behaviour. https://doi.org/10.1038/s41562-020-01040-1
Topor, M., Pickering, J. S., Barbosa Mendes, A., Bishop, D. V. M., Büttner, F. C., Elsherif, M. M., … Westwood, S. J. (2021, March 15). An integrative framework for planning and conducting Non-Intervention, Reproducible, and Open Systematic Reviews (NIRO-SR). https://doi.org/10.31222/osf.io/8gu5z
Tscharntke, T., Hochberg, M. E., Rand, T. A., Resh, V. H., & Krauss, J. (2007). Author sequence and credit for contributions in multiauthored publications. PLoS Biology, 5(1), e18. https://doi.org/10.1371/journal.pbio.0050018
Tufte, E. R. (1983). The visual display of quantitative information. Graphics Press.
Tukey, J. W. (1977). Exploratory data analysis. Reading, MA: Addison-Wesley.
Tvina, A., Spellecy, R., & Palatnik, A. (2019). Bias in the peer review process: Can we do better? Obstetrics & Gynecology, 133(6), 1081-1083. https://doi.org/10.1097/AOG.0000000000003260
Uhlmann, E. L., Ebersole, C. R., Chartier, C. R., Errington, T. M., Kidwell, M. C., Lai, C. K., McCarthy, R. J., Riegelman, A., Silberzahn, R., & Nosek, B. A. (2019). Scientific utopia III: Crowdsourcing science. Perspectives on Psychological Science, 14(5), 711-733. https://doi.org/10.1177/1745691619850561
van de Schoot, R., Depaoli, S., King, R., Kramer, B., Märtens, K., Tadesse, M. G., Vannucci, M., Gelman, A., Veen, D., Willemsen, J., & Yau, C. (2021). Bayesian statistics and modelling. Nature Reviews Methods Primers, 1(1), 1-26. https://doi.org/10.1038/s43586-020-00001-2
Vazire, S. (2018). Implications of the credibility revolution for productivity, creativity, and progress. Perspectives on Psychological Science, 13(4), 411-417. https://doi.org/10.1177/1745691617751884
Vazire, S., Schiavone, S. R., & Bottesini, J. G. (2020). Credibility beyond replicability: Improving the four validities in psychological science. https://doi.org/10.31234/osf.io/bu4d3
Villum, C. (2014, March 10). “Open-washing” – The difference between opening your data and simply making them available. Open Knowledge Foundation Blog. https://blog.okfn.org/2014/03/10/open-washing-the-difference-between-opening-your-data-and-simply-making-them-available/
Vlaeminck, S., & Podkrajac, F. (2017). Journals in economic sciences: Paying lip service to reproducible research? IASSIST Quarterly, 41(1-4), 16. https://doi.org/10.29173/iq6
Von Elm, E., Altman, D. G., Egger, M., Pocock, S. J., Gøtzsche, P. C., & Vandenbroucke, J. P. (2007). The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: Guidelines for reporting observational studies. Annals of Internal Medicine, 147(8), 573-577. https://doi.org/10.1136/bmj.39335.541782.AD
Voracek, M., Kossmeier, M., & Tran, U. S. (2019). Which data to meta-analyze, and how? Zeitschrift für Psychologie. https://doi.org/10.1027/2151-2604/a000357
Vuorre, M., & Curley, J. P. (2018). Curating research assets: A tutorial on the Git version control system. Advances in Methods and Practices in Psychological Science, 1(2), 219-236. https://doi.org/10.1177/2515245918754826
Wacker, J. (1998). A definition of theory: Research guidelines for different theory-building research methods in operations management. Journal of Operations Management, 16(4), 361-385. https://doi.org/10.1016/S0272-6963(98)00019-9
Wagenmakers, E. J., Wetzels, R., Borsboom, D., van der Maas, H. L., & Kievit, R. A. (2012). An agenda for purely confirmatory research. Perspectives on Psychological Science, 7(6), 632-638. https://doi.org/10.1177/1745691612463078
Wagenmakers, E.-J., Marsman, M., Jamil, T., Ly, A., Verhagen, J., Love, J., Selker, R., Gronau, Q. F., Šmíra, M., Epskamp, S., Matzke, D., Rouder, J. N., & Morey, R. D. (2018). Bayesian inference for psychology. Part I: Theoretical advantages and practical ramifications. Psychonomic Bulletin & Review, 25(1), 35-57. https://doi.org/10.3758/s13423-017-1343-3
Wagge, J. R., Baciu, C., Banas, K., Nadler, J. T., Schwarz, S., Weisberg, Y., … & McCarthy, R. (2019). A demonstration of the Collaborative Replication and Education Project: Replication attempts of the red-romance effect. Collabra: Psychology, 5(1). https://doi.org/10.1525/collabra.177
Wason, P. C. (1960). On the failure to eliminate hypotheses in a conceptual task. Quarterly Journal of Experimental Psychology, 12(3), 129-140. https://doi.org/10.1080/17470216008416717
Wasserstein, R. L., & Lazar, N. A. (2016). The ASA statement on p-values: Context, process, and purpose. The American Statistician, 70, 129-133. https://doi.org/10.1080/00031305.2016.1154108
Webster, M. M., & Rutz, C. (2020). How STRANGE are your study animals? Nature, 582, 337-340. https://doi.org/10.1038/d41586-020-01751-5
Wendl, M. C. (2007). H-index: However ranked, citations need context. Nature, 449(7161), 403. https://doi.org/10.1038/449403b
Whitaker, K., & Guest, O. (2020). #bropenscience is broken science. The Psychologist, 33, 34-37.
Wicherts, J. M., Veldkamp, C. L., Augusteijn, H. E., Bakker, M., Van Aert, R., & Van Assen, M. A. (2016). Degrees of freedom in planning, running, analyzing, and reporting psychological studies: A checklist to avoid p-hacking. Frontiers in Psychology, 7, 1832. https://doi.org/10.3389/fpsyg.2016.01832
Wilkinson, M. D., Dumontier, M., Aalbersberg, I. J., Appleton, G., Axton, M., Baak, A., … & Mons, B. (2016). The FAIR Guiding Principles for scientific data management and stewardship. Scientific Data, 3(1), 1-9. https://doi.org/10.1038/sdata.2016.18
Wilson, B., & Fenner, M. (2012). Open Researcher & Contributor ID (ORCID): Solving the name ambiguity problem. EDUCAUSE Review, 47(3), 54-55. https://er.educause.edu/articles/2012/5/open-researcher–contributor-id-orcid-solving-the-name-ambiguity-problem
Wilson, R. C., & Collins, A. G. (2019). Ten simple rules for the computational modeling of behavioral data. eLife, 8, e49547. https://doi.org/10.7554/eLife.49547
Wingen, T., Berkessel, J. B., & Englich, B. (2020). No replication, no trust? How low replicability influences trust in psychology. Social Psychological and Personality Science, 11(4). https://doi.org/10.1177/1948550619877412
Woelfle, M., Olliaro, P., & Todd, M. H. (2011). Open science is a research accelerator. Nature Chemistry, 3(10), 745-748. https://doi.org/10.1038/nchem.1149
Wren, J. D., Valencia, A., & Kelso, J. (2019). Reviewer-coerced citation: Case report, update on journal policy and suggestions for future prevention. Bioinformatics, 35(18), 3217-3218. https://doi.org/10.1093/bioinformatics/btz071
Wuchty, S., Jones, B. F., & Uzzi, B. (2007). The increasing dominance of teams in production of knowledge. Science, 316(5827), 1036-1039. https://doi.org/10.1126/science.1136099
Xia, J., Harmon, J. L., Connolly, K. G., Donnelly, R. M., Anderson, M. R., & Howard, H. A. (2015). Who publishes in “predatory” journals? Journal of the Association for Information Science and Technology, 66(7), 1406-1417. https://doi.org/10.1002/asi.23265
Yamada, Y. (2018). How to crack pre-registration: Toward transparent and open science. Frontiers in Psychology, 9, 1831. https://doi.org/10.3389/fpsyg.2018.01831
Yarkoni, T. (2020). The generalizability crisis. Behavioral and Brain Sciences, 1-37. https://doi.org/10.1017/S0140525X20001685
Yeung, S. K., Feldman, G., Fillon, A., Protzko, J., Elsherif, M. M., Xiao, Q., & Pickering, J. (2020a). Experimental studies meta-analysis registered report templates. [Manuscript in preparation].
Zurn, P., Bassett, D. S., & Rust, N. C. (2020). The citation diversity statement: A practice of transparency, a way of life. Trends in Cognitive Sciences, 24(9), 669-672. https://doi.org/10.1016/j.tics.2020.06.009

We are currently working to link the references directly. For now, the complete reference list can be viewed here.