Framework for Open and Reproducible Research Training (FORRT)


FORRT’s Clusters


In order to teach open and reproducible science effectively, educators need to make sense of almost a decade of literature across several fields and stay informed about ongoing (and often dynamic) debates. This is a tall ask for most educators. FORRT therefore sought to develop strategies and propose solutions to mitigate the effects of competing academic interests and to help scholars implement open and reproducible science tenets in their teaching and mentoring workflows. In an effort to reduce some of the burden on educators wishing to learn or teach these concepts, FORRT has drawn on the expertise of more than 50 experts from its community to provide educators with a comprehensive yet accessible didactic framework. FORRT’s clusters are the result of a comprehensive literature review guided by educational, pedagogical, and didactic considerations, aiming to provide a pathway towards the incremental adoption of Open and Reproducible Science tenets into educators’ and scholars’ teaching and mentoring. The focus lies not on simply aggregating the literature into bins, but on making sense of existing works, weaving connections where none exist, and providing a sensible, learning-oriented Open and Reproducible Science taxonomy. The FORRT taxonomy is composed of 7 clusters:

  1. Reproducibility and replicability knowledge
  2. Conceptual and statistical knowledge
  3. Reproducible analyses
  4. FAIR data and materials
  5. Preregistration
  6. Replication research
  7. Academic life and culture

We further break down each cluster into sub-categories to provide educators and scholars with useful information on the extent of open science scholarship and on how its components are connected to one another. The idea behind specifying clusters and sub-clusters is to highlight that we have drawn fuzzy boundaries between clusters while allowing for diversification and heterogeneity in how each educator integrates these clusters and sub-clusters with their respective field content. The sub-clusters within each cluster are listed below.

See below for each cluster, its description, sub-clusters, and associated works geared toward teaching. And here’s an attempt to visualize FORRT’s clusters:


FORRT’s Clusters


FORRT’s Syllabus

Building on the clusters, we created an “Open and Reproducible Science” syllabus. We hope it can serve as a starting point for your class; it is available as a .pdf download or as an editable G-doc version. Check out FORRT’s syllabus page.


Cluster 1: Reproducibility Crisis and Credibility Revolution

Description

Attainment of foundational knowledge on the emergence and importance of reproducible and open research (i.e., grounding the motivations and theoretical underpinnings of Open and Reproducible Science), and integration with field-specific content (i.e., grounded in the history of replicability). There are 6 sub-clusters which aim to further parse the learning and teaching process:

  • History of the reproducibility crisis & credibility revolution.
  • Exploratory and confirmatory analyses.
  • Questionable research practices and their prevalence.
  • Proposed improvement science initiatives on statistics, measurement, teaching, data sharing, code sharing, pre-registration, replication.
  • Ongoing debates (e.g., incentives for and against open science).
  • Ethical considerations for improved practices.


History of the reproducibility crisis & credibility revolution

  • Baker, M. (2016). 1,500 scientists lift the lid on reproducibility. Nature News, 533(7604), 452. doi: https://doi.org/10.1038/533452a

  • Baker, M. (2016). Is there a reproducibility crisis? Nature, 533(7604), 3–5.

  • Chambers, C. (2017). The seven deadly sins of psychology: A manifesto for reforming the culture of scientific practice. Princeton University Press. http://dx.doi.org/10.1515/9781400884940

  • Crüwell, S., van Doorn, J., Etz, A., Makel, M. C., Moshontz, H., Niebaum, J., … Schulte-Mecklenbeck, M. (2018, November 16). 7 easy steps to open science: An annotated reading list. https://doi.org/10.31234/osf.io/cfzyx

  • Edwards, M. A., & Roy, S. (2016). Academic research in the 21st century: Maintaining scientific integrity in a climate of perverse incentives and hypercompetition. Environmental Engineering Science, 34(1), 51-61. DOI: https://doi.org/10.1089/ees.2016.0223

  • Merton, R. K. (1968). The Matthew effect in science. Science, 159(3810), 56-63. 10.1126/science.159.3810.56

  • Merton, R. K. (1988). The Matthew Effect in Science, II: Cumulative Advantage and the Symbolism of Intellectual Property. ISIS, 79(4), 606-623. 10.1086/354848

  • Munafò, M. R., et al. (2017). A manifesto for reproducible science. Nature Human Behaviour, 1, 0021. DOI: 10.1038/s41562-016-0021

  • Vazire, S. (2018). Implications of the Credibility Revolution for Productivity, Creativity, and Progress. Perspectives on Psychological Science, 13(4), 411-417. https://doi.org/10.1177/1745691617751884



Exploratory and confirmatory analyses

Confirmatory analyses refer to tests of hypotheses that are formulated prior to data collection. Exploratory analyses refer to everything else.

  • Chambers, C. (2017). The seven deadly sins of psychology: A manifesto for reforming the culture of scientific practice. Princeton University Press. http://dx.doi.org/10.1515/9781400884940

  • Lin, W., & Green, D. P. (2016). Standard operating procedures: A safety net for pre-analysis plans. PS: Political Science & Politics, 49(3), 495-500.

  • Wagenmakers, E.-J., Wetzels, R., Borsboom, D., van der Maas, H. L. J., & Kievit, R. A. (2012). An agenda for purely confirmatory research. Perspectives on Psychological Science, 7(6), 632–638. doi:10.1177/1745691612463078

  • Wagenmakers, E.-J., Dutilh, G., & Sarafoglou, A. (2018). The Creativity-Verification Cycle in Psychological Science: New Methods to Combat Old Idols. Perspectives on Psychological Science, 13(4), 418–427. https://doi.org/10.1177/1745691618771357



Questionable research practices and their prevalence

The ways in which researchers engage in behaviors and decision-making that increase the probability of their (consciously or unconsciously) desired result.

  • Gelman, A., & Loken, E. (2013). The garden of forking paths: Why multiple comparisons can be a problem, even when there is no “fishing expedition” or “p-hacking” and the research hypothesis was posited ahead of time. Unpublished manuscript. http://www.stat.columbia.edu/~gelman/research/unpublished/p_hacking.pdf

  • John, L. K., Loewenstein, G., & Prelec, D. (2012). Measuring the prevalence of questionable research practices with incentives for truth telling. Psychological Science, 23(5), 524-532. https://doi.org/10.1177/0956797611430953

  • Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant. Psychological Science, 22(11), 1359–1366. https://doi.org/10.1177/0956797611417632

  • Smaldino, P. E., & McElreath, R. (2016). The natural selection of bad science. Royal Society Open Science, 3(9), 160384. https://doi.org/10.1098/rsos.160384

  • Wicherts, J. M., Veldkamp, C. L., Augusteijn, H. E., Bakker, M., Van Aert, R. C., & Van Assen, M. A. (2016). Degrees of freedom in planning, running, analyzing, and reporting psychological studies: A checklist to avoid p-hacking. Frontiers in Psychology, 7, 1832.
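
To make the consequences of such flexibility concrete, the toy simulation below (a minimal Python sketch written for this reading list, not code from the works above; all numbers are illustrative) shows how one common questionable practice, optional stopping, inflates the false-positive rate well beyond the nominal 5% even though the null hypothesis is true throughout.

```python
# Minimal sketch: "optional stopping" means testing, then adding more participants
# only when p is not yet significant, and stopping as soon as p < .05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_sims, alpha = 2000, 0.05
start_n, step, max_n = 10, 10, 50   # illustrative sample-size rules

significant = 0
for _ in range(n_sims):
    a = list(rng.normal(0, 1, start_n))   # both groups come from the same population,
    b = list(rng.normal(0, 1, start_n))   # so every "significant" result is a false positive
    while True:
        if stats.ttest_ind(a, b).pvalue < alpha:
            significant += 1              # stop and "publish" as soon as p < .05
            break
        if len(a) >= max_n:               # give up at the maximum sample size
            break
        a.extend(rng.normal(0, 1, step))  # otherwise collect more data and re-test
        b.extend(rng.normal(0, 1, step))

print(f"False-positive rate with optional stopping: {significant / n_sims:.3f}")
```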



Proposed improvement science initiatives on statistics, measurement, teaching, data sharing, code sharing, pre-registration, replication

Published checklists and other resources that can be used to shift behavior more toward improved practices.

  • Crüwell, S., van Doorn, J., Etz, A., Makel, M. C., Moshontz, H., Niebaum, J., … Schulte-Mecklenbeck, M. (2018, November 16). 7 easy steps to open science: An annotated reading list. https://doi.org/10.31234/osf.io/cfzyx

  • Lindsay, D. S. (2020). Seven steps toward transparency and replicability in psychological science. Canadian Psychology/Psychologie canadienne.

  • Ioannidis, J. P., Munafò, M. R., Fusar-Poli, P., Nosek, B. A., & David, S. P. (2014). Publication and other reporting biases in cognitive sciences: detection, prevalence, and prevention. Trends in Cognitive Sciences, 18(5), 235-241.

  • Klein, R. A., Vianello, M., Hasselman, F., Adams, B. G., Adams, R. B., Alper, S., … Nosek, B. A. (2018). Many Labs 2: Investigating Variation in Replicability Across Samples and Settings. Advances in Methods and Practices in Psychological Science, 1(4), 443–490. https://doi.org/10.1177/2515245918810225

  • Munafò, M. R., et al. (2017). A manifesto for reproducible science. Nature Human Behaviour, 1, 0021. DOI: 10.1038/s41562-016-0021

  • Peng, R. (2015). The reproducibility crisis in science: A statistical counterattack. Significance, 12(3). https://doi.org/10.1111/j.1740-9713.2015.00827.x



Ongoing debates (e.g., incentives for and against open science)



Ethical considerations for improved practices

  • Brabeck, M. M. (2021). Open science and feminist ethics: Promises and challenges of open access. Psychology of Women Quarterly, 45(4), 457-474. https://doi.org/10.1177/03616843211030926

  • Bol, T., de Vaan, M., & van de Rijt, A. (2018). The Matthew effect in science funding. Proceedings of the National Academy of Sciences, 115(19), 4887-4890. https://doi.org/10.1073/pnas.1719557115

  • Chopik, W. J., Bremner, R. H., Defever, A. M., & Keller, V. N. (2018). How (and whether) to teach undergraduates about the replication crisis in psychological science. Teaching of Psychology, 45(2), 158–163. https://doi.org/10.1177/0098628318762900

  • Edwards, M. A., & Roy, S. (2016). Academic research in the 21st century: Maintaining scientific integrity in a climate of perverse incentives and hypercompetition. Environmental Engineering Science, 34(1), 51-61. DOI: https://doi.org/10.1089/ees.2016.0223

  • Fell, M. J. (2019). The economic impacts of open science: A rapid evidence assessment. Publications, 7(3), 46. https://doi.org/10.3390/publications7030046

  • Jones, N. L. (2007). A code of ethics for the life sciences. Science and Engineering Ethics, 13, 25-43. DOI: https://doi.org/10.1007/s11948-006-0007-x


Cluster 2: Conceptual and Statistical Knowledge

Description

Attainment of a grounding in fundamental statistics and measurement and their implications, encompassing conceptual knowledge, application, interpretation, and communication of statistical analyses. There are 5 sub-clusters which aim to further parse the learning and teaching process:

  • The logic of null hypothesis testing, p-values, Type I and II errors (and when and why they might happen).
  • Limitations and benefits of NHST, Bayesian and Likelihood approaches.
  • Effect sizes, Statistical power, Confidence Intervals.
  • Research Design, Sampling Methods, and their implications for inferences.
  • Questionable measurement practices (QMPs), validity and reliability issues.


The logic of null hypothesis testing, p-values, Type I and II errors (and when and why they might happen).

  • Banerjee, A., Chitnis, U. B., Jadhav, S. L., Bhawalkar, J. S., & Chaudhury, S. (2009). Hypothesis testing, type I and type II errors. Industrial Psychiatry Journal, 18(2), 127-131.

  • Gelman, A., & Carlin, J. (2014). Beyond power calculations: Assessing Type S (sign) and Type M (magnitude) errors. Perspectives on Psychological Science, 9(6), 641-651. doi: 10.1177/1745691614551642

  • Lakens, D. Improving your statistical inferences. Online course. https://www.coursera.org/learn/statistical-inferences
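
As a teaching aid, the short simulation below (a minimal Python sketch; the sample sizes and the effect size are illustrative assumptions, not values taken from the readings) shows when and why the two error types happen: a true null hypothesis is rejected about 5% of the time (Type I), and a real but modest effect is frequently missed when the study is underpowered (Type II).

```python
# Minimal sketch of Type I and Type II errors via simulation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2023)
n_sims, n, alpha = 5000, 30, 0.05

# Type I errors: both groups come from the same population (the null is true).
false_positives = sum(
    stats.ttest_ind(rng.normal(0, 1, n), rng.normal(0, 1, n)).pvalue < alpha
    for _ in range(n_sims)
)
print(f"Type I error rate (should be close to {alpha}): {false_positives / n_sims:.3f}")

# Type II errors: a real but modest effect (d = 0.3) tested with a small sample.
misses = sum(
    stats.ttest_ind(rng.normal(0, 1, n), rng.normal(0.3, 1, n)).pvalue >= alpha
    for _ in range(n_sims)
)
print(f"Type II error rate with d = 0.3 and n = {n} per group: {misses / n_sims:.3f}")
```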



Limitations and benefits of NHST, Bayesian and Likelihood approaches.

  • Cumming, G. (2014). The new statistics: Why and how. Psychological Science, 25(1), 7-29. https://doi.org/10.1177/0956797613504966

  • Etz, A., Gronau, Q.F., Dablander, F. et al. (2018). How to become a Bayesian in eight easy steps: An annotated reading list. Psychonomic Bulletin Review, 25, 219–234. https://doi.org/10.3758/s13423-017-1317-5

  • Greenland, S., Senn, S. J., Rothman, K. J., Carlin, J. B., Poole, C., Goodman, S. N., & Altman, D. G. (2016). Statistical tests, p values, confidence intervals, and power: A guide to misinterpretations. European Journal of Epidemiology, 31(4), 337–50. http://doi.org/10.1007/s10654-016-0149-3

  • Nuzzo, R. (2014). Statistical errors: P values, the ‘gold standard’ of statistical validity, are not as reliable as many scientists assume. Nature, 506(7487), 150-152. doi:10.1038/506150a

  • Wagenmakers, E.-J., Dutilh, G., & Sarafoglou, A. (2018). The Creativity-Verification Cycle in Psychological Science: New Methods to Combat Old Idols. Perspectives on Psychological Science, 13(4), 418–427. https://doi.org/10.1177/1745691618771357



Effect sizes, Statistical power, Confidence Intervals.

  • Brysbaert, M. and Stevens, M. (2018). Power analysis and effect size in mixed effects models: A Tutorial. Journal of Cognition, 1(1): 9, pp. 1–20, DOI: https://doi.org/10.5334/joc.10

  • Button, K. S., Ioannidis, J. P., Mokrysz, C., Nosek, B. A., Flint, J., Robinson, E. S., & Munafò, M. R. (2013). Power failure: why small sample size undermines the reliability of neuroscience. Nature Reviews Neuroscience, 14(5), 365-376. https://doi.org/10.1038/nrn3475

  • Greenland, S., Senn, S. J., Rothman, K. J., Carlin, J. B., Poole, C., Goodman, S. N., & Altman, D. G. (2016). Statistical tests, p values, confidence intervals, and power: A guide to misinterpretations. European Journal of Epidemiology, 31(4), 337–50. http://doi.org/10.1007/s10654-016-0149-3

  • Lakens, D. (2013). Calculating and reporting effect sizes to facilitate cumulative science: a practical primer for t-tests and ANOVAs. Frontiers in Psychology, 4, 863. 10.3389/fpsyg.2013.00863

  • Pek, J., & Flora, D. B. (2018). Reporting effect sizes in original psychological research: A discussion and tutorial. Psychological Methods, 23(2), 208-225. http://doi.org/10.1037/met0000126

  • Perugini, M., Gallucci, M., & Costantini, G. (2014). Safeguard power as a protection against imprecise power estimates. Perspectives on Psychological Science, 9, 319-332.
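
The sketch below (illustrative Python written for this list; the scores, group sizes, and target power are made-up assumptions) shows how an effect size such as Cohen’s d is computed from data and how a power analysis translates it into a required sample size, here using statsmodels’ TTestIndPower as one of several possible tools.

```python
# Minimal sketch: compute Cohen's d, then solve for the sample size needed for 80% power.
import numpy as np
from statsmodels.stats.power import TTestIndPower

rng = np.random.default_rng(7)
group_a = rng.normal(100, 15, 40)   # hypothetical control scores
group_b = rng.normal(106, 15, 40)   # hypothetical treatment scores

# Cohen's d with a pooled standard deviation.
pooled_sd = np.sqrt((group_a.var(ddof=1) + group_b.var(ddof=1)) / 2)
d = (group_b.mean() - group_a.mean()) / pooled_sd
print(f"Cohen's d = {d:.2f}")

# Sample size per group needed to detect an effect of that magnitude
# with 80% power at alpha = .05 (power depends on the effect's magnitude).
n_required = TTestIndPower().solve_power(effect_size=abs(d), alpha=0.05,
                                         power=0.80, alternative='two-sided')
print(f"Required n per group for 80% power: {int(np.ceil(n_required))}")
```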



Research Design, Sampling Methods, and their implications for inferences.

  • Gervais et al. (2015). A powerful nudge? Presenting calculable consequences of underpowered research shifts incentives towards adequately powered designs. Social Psychological and Personality Science, 6, 847-854. https://doi.org/10.1177/1948550615584199

  • Perugini, M., Gallucci, M., & Costantini, G. (2014). Safeguard power as a protection against imprecise power estimates. Perspectives on Psychological Science, 9, 319-332.

  • Wicherts, J. M., Veldkamp, C. L., Augusteijn, H. E., Bakker, M., Van Aert, R., & Van Assen, M. A. (2016). Degrees of freedom in planning, running, analyzing, and reporting psychological studies: A checklist to avoid p-hacking. Frontiers in Psychology, 7, 1832. doi: 10.3389/fpsyg.2016.01832



Questionable measurement practices (QMPs), validity and reliability issues.

  • Flake, J. K., & Fried, E. I. (2019, January 17). Measurement schmeasurement: Questionable measurement practices and how to avoid them. https://doi.org/10.31234/osf.io/hs7wm

  • Flake, J. K., Pek, J., & Hehman, E. (2017). Construct validation in social and personality research: Current practice and recommendations. Social Psychological and Personality Science, 8(4), 370–378. https://doi.org/10.1177/1948550617693063

  • Hussey, I., & Hughes, S. (2018, November 19). Hidden invalidity among fifteen commonly used measures in social and personality psychology. https://doi.org/10.31234/osf.io/7rbfp

  • Rodebaugh, T. L., Scullin, R. B., Langer, J. K., Dixon, D. J., Huppert, J. D., Bernstein, A., . . . Lenze, E. J. (2016). Unreliability as a threat to understanding psychopathology: The cautionary tale of attentional bias. Journal of Abnormal Psychology, 125(6), 840-851. http://dx.doi.org/10.1037/abn0000184


Cluster 3: Reproducible analyses

Description

Attainment of the how-to basics of reproducible reports and analyses. It requires students to move towards transparent and scripted analysis practices. There are 6 sub-clusters which aim to further parse the learning and teaching process:

  • Strengths of reproducible pipelines.
  • Scripted analyses compared with GUI.
  • Data wrangling.
  • Programming reproducible data analyses.
  • Open source and free software.
  • Tools to check yourself and others.


Strengths of reproducible pipelines.

Automating data analysis to make the process easier
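
One way to illustrate this in the classroom is a single script that moves from raw data to final output in explicit, re-runnable steps. The sketch below is a minimal Python example written for this page; the file names, columns, and cleaning rules are hypothetical, and FORRT does not prescribe any particular toolchain.

```python
# Minimal pipeline sketch: raw data in, cleaned data and summary out, all in one script.
import pandas as pd

RAW_FILE = "data/raw_survey.csv"        # hypothetical input, never edited by hand
CLEAN_FILE = "data/clean_survey.csv"    # intermediate file written by the pipeline
RESULTS_FILE = "output/summary.csv"     # final output

def load_raw(path: str) -> pd.DataFrame:
    """Read the raw data exactly as collected."""
    return pd.read_csv(path)

def clean(df: pd.DataFrame) -> pd.DataFrame:
    """Apply every cleaning decision in code so it is documented and repeatable."""
    df = df.dropna(subset=["score"])
    return df[df["age"].between(18, 99)]

def summarise(df: pd.DataFrame) -> pd.DataFrame:
    """Compute the analysis outputs from the cleaned data."""
    return df.groupby("condition")["score"].agg(["mean", "std", "count"])

if __name__ == "__main__":
    cleaned = clean(load_raw(RAW_FILE))
    cleaned.to_csv(CLEAN_FILE, index=False)
    summarise(cleaned).to_csv(RESULTS_FILE)
```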



Scripted analyses compared with GUI.

Writing analyses in a programming language compared to performing them with a point-and-click menu.

  • Gandrud, C. (2016). Reproducible Research with R and RStudio. New York: CRC Press.
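
For illustration, the snippet below (an assumed Python example; the file and column names are hypothetical) runs the kind of t-test one might otherwise perform through a menu. Because every choice, which file, which variables, which test, is recorded in code, the analysis can be re-run or audited later.

```python
# Minimal sketch: a scripted t-test whose every step is documented and repeatable.
import pandas as pd
from scipy import stats

df = pd.read_csv("data/clean_survey.csv")                      # assumed dataset
treatment = df.loc[df["condition"] == "treatment", "score"]
control = df.loc[df["condition"] == "control", "score"]

result = stats.ttest_ind(treatment, control, equal_var=False)  # Welch's t-test
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
```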


Data wrangling

Processing and restructuring data so that it is more useful for analysis.

  • Nick Fox’s Writing Reproducible Scientific Papers in R

  • PsyTeachR’s Data Skills for Reproducible Science
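
A minimal sketch of such wrangling (in Python with pandas; the toy data and column names are invented for illustration) might look like the following, with each renaming and reshaping step written out explicitly rather than done by hand in a spreadsheet.

```python
# Minimal sketch: tidy column names, then reshape from wide to long format.
import pandas as pd

wide = pd.DataFrame({
    "participant": [1, 2, 3],
    "Score T1": [4.0, 5.5, 3.0],
    "Score T2": [6.0, 5.0, 4.5],
})

wide = wide.rename(columns={"Score T1": "score_t1", "Score T2": "score_t2"})

# One row per participant per time point, which most analysis functions expect.
long = wide.melt(id_vars="participant", var_name="time", value_name="score")
long["time"] = long["time"].str.replace("score_", "")

print(long)
```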



Programming reproducible data analyses

Making sure anyone can reproduce analyses through things like well-commented scripts, writing codebooks, etc.
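
As one possible illustration (a hedged Python sketch; the variables and descriptions are hypothetical), an analysis script can also generate a simple codebook alongside the data, so that variable names, types, and missingness are documented for anyone re-running the analysis.

```python
# Minimal sketch: derive a codebook from the data frame and save it next to the data.
import pandas as pd

df = pd.DataFrame({
    "participant": [1, 2, 3, 4],
    "condition": ["control", "treatment", "control", "treatment"],
    "score": [4.0, None, 5.5, 6.0],
})

descriptions = {
    "participant": "Anonymous participant identifier",
    "condition": "Experimental condition (control / treatment)",
    "score": "Outcome score on the main task (0-10)",
}

codebook = pd.DataFrame({
    "variable": df.columns,
    "dtype": [str(t) for t in df.dtypes],
    "n_missing": df.isna().sum().values,
    "description": [descriptions.get(c, "") for c in df.columns],
})

codebook.to_csv("codebook.csv", index=False)
print(codebook)
```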



Open source and free software.

  • Chao, L. (2009). Utilizing open source tools for online teaching and learning. Hershey, PA: Information Science Reference.


Tools to check yourself and others

Includes tools such as statcheck.io, GRIM, and SPRITE

  • Brown, N. J., & Heathers, J. A. (2016). The GRIM test: A simple technique detects numerous anomalies in the reporting of results in psychology. Social Psychological and Personality Science, 1948550616673876. http://journals.sagepub.com/doi/pdf/10.1177/1948550616673876

  • Nuijten, M. B., Van Assen, M. A. L. M., Hartgerink, C. H. J., Epskamp, S., & Wicherts, J. M. (2017). The validity of the tool “statcheck” in discovering statistical reporting inconsistencies. Preprint retrieved from https://psyarxiv.com/tcxaj/.

  • van der Zee, T., Anaya, J., & Brown, N. J. (2017). Statistical heartburn: An attempt to digest four pizza publications from the Cornell Food and Brand Lab. BMC Nutrition, 3(1), 54. DOI 10.1186/s40795-017-0167-x
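
To convey the intuition behind GRIM, the toy function below (a minimal Python sketch of the idea described by Brown and Heathers, 2016; it is not the authors’ own code) checks whether a reported mean of n integer-valued responses is arithmetically possible: such a mean must equal some whole-number total divided by n.

```python
# Minimal GRIM-style consistency check for a reported mean of integer responses.
def grim_consistent(reported_mean: float, n: int, decimals: int = 2) -> bool:
    """Return True if reported_mean could arise from n integer-valued responses."""
    nearest_total = round(reported_mean * n)
    # Check neighbouring totals as well, to absorb rounding at the reporting stage.
    for total in (nearest_total - 1, nearest_total, nearest_total + 1):
        if round(total / n, decimals) == round(reported_mean, decimals):
            return True
    return False

print(grim_consistent(5.18, 28))   # True: 145 / 28 rounds to 5.18
print(grim_consistent(5.19, 28))   # False: no integer total divided by 28 rounds to 5.19
```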


Cluster 4: Open (FAIR) data and materials

Description

Attainment of a grounding in open (FAIR) data and materials. It requires students to learn about the FAIR principles for data (and educational materials): findability, accessibility, interoperability, and reusability; to engage with reasons to share data and with the initiatives designed to increase scientific openness; and to consider the ethical considerations and consequences of open (FAIR) data practices. There are 6 sub-clusters which aim to further parse the learning and teaching process:

  • Publication models.
  • Reasons to share; for science, and for one’s own practices.
  • Repositories such as OSF, FigShare, GitHub, Zenodo.
  • Accessing or sharing others’ data, code, and materials.
  • Ethical considerations.
  • Examples and consequences of accessing un/open data.


Publication models

Traditional publication models, open access models, preprints, etc.

  • Hardwicke, T. E., Mathur, M. B., MacDonald, K., Nilsonne, G., Banks, G. C., Kidwell, M. C., … & Lenne, R. L. (2018). Data availability, reusability, and analytic reproducibility: Evaluating the impact of a mandatory open data policy at the journal Cognition. Royal Society Open Science, 5(8), 180448. http://dx.doi.org/10.1098/rsos.180448

  • Klein, R. A., Vianello, M., Hasselman, F., Adams, B. G., Adams, R. B., Alper, S., … Nosek, B. A. (2018). Many Labs 2: Investigating Variation in Replicability Across Samples and Settings. Advances in Methods and Practices in Psychological Science, 1(4), 443–490. https://doi.org/10.1177/2515245918810225

  • Klein, R. A., Ratliff, K. A., Vianello, M., Adams, R. B., Bahník, Š., Bernstein, M. J., et al. (2014). Investigating variation in replicability: A “many labs” replication project. Social Psychology, 45, 142–152. https://doi.org/10.1027/1864-9335/a000178

  • Rouder, J. N. (2016). The what, why, and how of born open data. Behavior Research Methods, 48, 1062–1069. doi:10.3758/s13428-015-0630-z

  • Siler, K., Haustein, S., Smith, E., Larivière, V., & Alperin, J. P. (2018). Authorial and institutional stratification in open access publishing: the case of global health research. PeerJ, 6, e4269. doi:10.7717/peerj.4269

  • Tennant, J. P., Waldner, F., Jacques, D. C., Masuzzo, P., Collister, L. B., & Hartgerink, C. H. (2016). The academic, economic and societal impacts of Open Access: an evidence-based review. F1000Research, 5, 632. doi:10.12688/f1000research.8460.3



Reasons to share; for science, and for one’s own practices

  • Colavizza, G., Hrynaszkiewicz, I., Staden, I., Whitaker, K., & McGillivray, B. (2020). The citation advantage of linking publications to research data. PloS One, 15(4), e0230416.

  • Klein, R. A., Vianello, M., Hasselman, F., Adams, B. G., Adams, R. B., Alper, S., … Nosek, B. A. (2018). Many Labs 2: Investigating Variation in Replicability Across Samples and Settings. Advances in Methods and Practices in Psychological Science, 1(4), 443–490. https://doi.org/10.1177/2515245918810225

  • Klein, R. A., Ratliff, K. A., Vianello, M., Adams, R. B., Bahník, Š., Bernstein, M. J., et al. (2014). Investigating variation in replicability: A “many labs” replication project. Social Psychology, 45, 142–152. https://doi.org/10.1027/1864-9335/a000178

  • Levenstein, M. C., & Lyle, J. A. (2018). Data: Sharing Is Caring. Advances in Methods and Practices in Psychological Science, 1(1), 95–103. https://doi.org/10.1177/2515245918758319

  • Piwowar, H.A., & Vision, T.J. (2013). Data reuse and the open data citation advantage. PeerJ, 1, e175 https://doi.org/10.7717/peerj.175

  • Rouder, J. N. (2016). The what, why, and how of born open data. Behavior Research Methods, 48, 1062–1069. doi:10.3758/s13428-015-0630-z

  • Stodden, V. C. (2011). Trust your science? Open your data and code. Amstat News, 409, 21-22.

  • Tennant, J. P., Waldner, F., Jacques, D. C., Masuzzo, P., Collister, L. B., & Hartgerink, C. H. (2016). The academic, economic and societal impacts of Open Access: an evidence-based review. F1000Research, 5, 632. doi:10.12688/f1000research.8460.3



Repositories such as OSF, FigShare, GitHub, Zenodo

  • Gilmore, R. O., Kennedy, J. L., & Adolph, K. E. (2018). Practical solutions for sharing data and materials from psychological research. Advances in Methods and Practices in Psychological Science, 1(1), 121–130. https://doi.org/10.1177/2515245917746500

  • Rouder, J. N. (2016). The what, why, and how of born open data. Behavior Research Methods, 48, 1062–1069. doi:10.3758/s13428-015-0630-z

  • Soderberg, C. K. (2018). Using OSF to Share Data: A Step-by-Step Guide. Advances in Methods and Practices in Psychological Science, 1(1), 115–120. https://doi.org/10.1177/2515245918757689

  • osf.io

  • figshare.com

  • github.com

  • zenodo.org



Accessing or sharing others’ data, code, and materials

  • Joel, S., Eastwick, P. W., & Finkel, E. J. (2018). Open sharing of data on close relationships and other sensitive social psychological topics: Challenges, tools, and future directions. Advances in Methods and Practices in Psychological Science, 1(1), 86–94. https://doi.org/10.1177/2515245917744281

  • Klein, R. A., Vianello, M., Hasselman, F., Adams, B. G., Adams, R. B., Alper, S., … Nosek, B. A. (2018). Many Labs 2: Investigating Variation in Replicability Across Samples and Settings. Advances in Methods and Practices in Psychological Science, 1(4), 443–490. https://doi.org/10.1177/2515245918810225

  • Klein, R. A., Ratliff, K. A., Vianello, M., Adams, R. B., Bahník, Š., Bernstein, M. J., et al. (2014). Investigating variation in replicability: A “many labs” replication project. Social Psychology, 45, 142–152. https://doi.org/10.1027/1864-9335/a000178

  • Piwowar, H.A., & Vision, T.J. (2013). Data reuse and the open data citation advantage. PeerJ, 1, e175 https://doi.org/10.7717/peerj.175

  • Wicherts, J. M., Borsboom, D., Kats, J., & Molenaar, D. (2006). The poor availability of psychological research data for reanalysis. American Psychologist, 61(7), 726–728. https://doi.org/10.1037/0003-066X.61.7.726

  • Wicherts, J. M., Veldkamp, C. L., Augusteijn, H. E., Bakker, M., Van Aert, R., & Van Assen, M. A. (2016). Degrees of freedom in planning, running, analyzing, and reporting psychological studies: A checklist to avoid p-hacking. Frontiers in Psychology, 7, 1832.



Ethical considerations

  • Hand, D. J. (2018). Aspects of data ethics in a changing world: Where are we now? Big Data, 6(3), 176–190. doi: https://doi.org/10.1089/big.2018.0083

  • O’Callaghan, E., & Douglas, H. M. (2021). #MeToo Online Disclosures: A Survivor-Informed Approach to Open Science Practices and Ethical Use of Social Media Data. Psychology of Women Quarterly, 45(4), 505–525. https://doi.org/10.1177/03616843211039175

  • Ross, M. W., Iguchi, M. Y., & Panicker, S. (2018). Ethical aspects of data sharing and research participant protections. American Psychologist, 73(2), 138-145. http://dx.doi.org/10.1037/amp0000240

  • Siler, K., Haustein, S., Smith, E., Larivière, V., & Alperin, J. P. (2018). Authorial and institutional stratification in open access publishing: the case of global health research. PeerJ, 6, e4269. doi:10.7717/peerj.4269

  • Walsh, C. G., Xia, W., Li, M., Denny, J. C., Harris, P. A., & Malin, B. A. (2018). Enabling open-science initiatives in clinical psychology and psychiatry without sacrificing patients’ privacy: Current practices and future challenges. Advances in Methods and Practices in Psychological Science, 1(1), 104–114. https://doi.org/10.1177/2515245917749652



Examples and consequences of accessing un/open data

  • Houtkoop, B. L., Chambers, C., Macleod, M., Bishop, D. V. M., Nichols, T. E., & Wagenmakers, E.-J. (2018). Data sharing in psychology: A survey on barriers and preconditions. Advances in Methods and Practices in Psychological Science, 1(1), 70–85. https://doi.org/10.1177/2515245917751886

  • Peng, R. (2015). The reproducibility crisis in science: A statistical counterattack. Significance, 12(3). https://doi.org/10.1111/j.1740-9713.2015.00827.x

  • Rouder, J. N. (2016). The what, why, and how of born open data. Behavior Research Methods, 48, 1062–1069. doi:10.3758/s13428-015-0630-z

  • Walsh, C. G., Xia, W., Li, M., Denny, J. C., Harris, P. A., & Malin, B. A. (2018). Enabling open-science initiatives in clinical psychology and psychiatry without sacrificing patients’ privacy: Current practices and future challenges. Advances in Methods and Practices in Psychological Science, 1(1), 104–114. https://doi.org/10.1177/2515245917749652

  • Wicherts, J. M., Borsboom, D., Kats, J., & Molenaar, D. (2006). The poor availability of psychological research data for reanalysis. American Psychologist, 61(7), 726–728. https://doi.org/10.1037/0003-066X.61.7.726

  • Wicherts, J. M., Veldkamp, C. L., Augusteijn, H. E., Bakker, M., Van Aert, R., & Van Assen, M. A. (2016). Degrees of freedom in planning, running, analyzing, and reporting psychological studies: A checklist to avoid p-hacking. Frontiers in Psychology, 7, 1832.


Cluster 5: Preregistration

Description

Preregistration entails laying out a complete methodology and analysis plan before a study has been undertaken. This facilitates transparency and removes several potential QRPs. When teaching, students should attain knowledge regarding what a preregistration entails, why it is important for removing potential QRPs, and how to address deviations from preregistered plans. There are 6 sub-clusters which aim to further parse the learning and teaching process:

  • Purpose of preregistration.
  • Preregistration and registered reports - strengths and differences.
  • When can you preregister? Can you pre-register secondary data?
  • Understanding the types of preregistration and writing one.
  • Comparing a preregistration to a final study manuscript.
  • Conducting a preregistered study.


Purpose of preregistration

Distinguishing exploratory and confirmatory analyses, transparency measures.

  • Dal-Ré, R., Ioannidis, J. P., Bracken, M. B., Buffler, P. A., Chan, A.-W., Franco, E. L., La Vecchia, C., Weiderpass, E. (2014). Making prospective registration of observational research a reality. Science Translational Medicine, 6(224), 224cm1. DOI: https://doi.org/10.1126/scitranslmed.3007513

  • Nosek, B. A., & Lakens, D. (2014). Registered reports: A method to increase the credibility of published results. Social Psychology, 45, 137–141.

  • Lin, W., & Green, D. P. (2016). Standard operating procedures: A safety net for pre-analysis plans. PS: Political Science & Politics, 49(3), 495-500.

  • Nosek, B. A., Ebersole, C. R., DeHaven, A., & Mellor, D. (2018). The Preregistration Revolution. Proceedings of the National Academy of Sciences, 115(11), 2600-2606. https://doi.org/10.1073/pnas.1708274114

  • Wicherts, J. M., Veldkamp, C. L., Augusteijn, H. E., Bakker, M., Van Aert, R., & Van Assen, M. A. (2016). Degrees of freedom in planning, running, analyzing, and reporting psychological studies: A checklist to avoid p-hacking. Frontiers in Psychology, 7, 1832. doi: 10.3389/fpsyg.2016.01832

  • Nuzzo, R. (2015). How scientists fool themselves — and how they can stop. Nature, 526, 182–185.

  • Wagenmakers, E. J., & Dutilh, G. (2016). Seven selfish reasons for preregistration. APS Observer, 29(9).



Preregistration and registered reports - strengths and differences


  • Chambers, C. D. (2013). Registered reports: A new publishing initiative at Cortex. Cortex, 49(3), 609-610. https://doi.org/10.1016/j.cortex.2012.12.016

  • Chambers, C. D., Feredoes, E., Muthukumaraswamy, S. D., & Etchells, P. (2014). Instead of “playing the game” it is time to change the rules: Registered Reports at AIMS Neuroscience and beyond. AIMS Neuroscience, 1(1), 4–17. DOI: 10.3934/Neuroscience2014.1.4

  • Chambers, C.D., Dienes, Z., McIntosh, R.D., Rotshtein, P., & Willmes, K. (2015). Registered Reports: Realigning incentives in scientific publishing. Cortex, 66, A1-2. DOI: 10.1016/j.cortex.2015.03.022



When can you preregister? Can you pre-register secondary data?



Understanding the types of preregistration and writing one.



Comparing a preregistration to a final study manuscript.



Conducting a preregistered study.


Cluster 6: Replication research

Description

Attainment of a grounding in ‘replication research’, which takes a variety of forms, each with a different purpose and contribution. Reproducible science requires replication research. When teaching, students should understand the purpose of and need for replications in their variety of forms and be able to conduct (and join) replication projects. There are 6 sub-clusters which aim to further parse the learning and teaching process:

  • Purposes of replication attempts - what is a ‘failed’ replication?
  • Large scale replication attempts.
  • Distinguishing direct and conceptual replications.
  • Conducting replication studies; challenges, limitations, and comparisons with the original study.
  • Registered Replication Reports (RRR).
  • The politics of replicating famous studies.


Purposes of replication attempts - what is a ‘failed’ replication?



Large scale replication attempts

OSC, CREP, ManyLabs, etc.

  • Klein, R. A., Ratliff, K. A., Vianello, M., Adams, R. B., Bahník, Š., Bernstein, M. J., et al. (2014). Investigating variation in replicability: A “many labs” replication project. Social Psychology, 45, 142–152. https://doi.org/10.1027/1864-9335/a000178

  • Klein, R. A., Vianello, M., Hasselman, F., Adams, B. G., Adams, R. B., Alper, S., … Nosek, B. A. (2018). Many Labs 2: Investigating Variation in Replicability Across Samples and Settings. Advances in Methods and Practices in Psychological Science, 1(4), 443–490. https://doi.org/10.1177/2515245918810225

  • Open Science Collaboration (2012). An open, large-scale, collaborative effort to estimate the reproducibility of psychological science. Perspectives on Psychological Science, 7, 657–660.

  • Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716. DOI: 10.1126/science.aac4716

  • Van Bavel, J. J., Mende-Siedlecki, P., Brady, W. J., & Reinero, D. A. (2016). Contextual sensitivity in scientific reproducibility. Proceedings of the National Academy of Sciences, 113(23), 6454-6459. https://doi.org/10.1073/pnas.1521897113

  • ManyPrimates

  • CREP



Distinguishing direct and conceptual replications

Direct replications use the exact same methods and materials, while conceptual replications test the same concept but with different methods, materials, or both.



Conducting replication studies; challenges, limitations, and comparisons with the original study

  • Grahe, J. E., Brandt, M. J., Wagge, J. R., Legate, N., Wiggins, B. J., Christopherson, C. D., . . . LePine, S. (2018). Collaborative Replications and Education Project (CREP). Retrieved from https://osf.io/wfc6u/

  • Grahe, J. E., Reifman, A., Hermann, A. D., Walker, M., Oleson, K. C., Nario-Redmond, M., & Wiebe, R. P. (2012). Harnessing the undiscovered resource of student research projects. Perspectives on Psychological Science, 7(6), 605–607. https://doi.org/10.1177/1745691612459057

  • Frank, M. C., & Saxe, R. (2012). Teaching replication. Perspectives on Psychological Science, 7(6), 600–604. https://doi.org/10.1177/1745691612460686

  • Lenne & Mann (2016). CREP project report. https://osf.io/sdj7e/

  • Stanley, D. J., & Spence, J. R. (2014). Expectations for replications: Are yours realistic? Perspectives on Psychological Science, 9(3), 305-318. https://doi.org/10.1177/1745691614528518

  • Wagge, J. R., Brandt, M. J., Lazarevic, L. B., Legate, N., Christopherson, C., Wiggins, B., & Grahe, J. E. (2019). Publishing research with undergraduate students via replication work: The collaborative replications and education project. Frontiers in psychology, 10, 247.



Registered Replication Reports

Registered Reports are studies that are peer-reviewed prior to data collection, with an agreement between the journal and the author(s) that the study will be published regardless of outcome, as long as the preregistered methods are reasonably followed. Registered Replication Reports are a special category of these that only include replications.

  • Simons, D. J., Holcombe, A. O., & Spellman, B. A. (2014). An Introduction to Registered Replication Reports at Perspectives on Psychological Science. Perspectives on Psychological Science, 9(5), 552–555. https://doi.org/10.1177/1745691614543974

  • Ongoing Replication projects https://www.psychologicalscience.org/publications/replication/ongoing-projects

  • Alogna, V. K., Attaya, M. K., Aucoin, P., Bahník, Š., Birch, S., Birt, A. R., … & Buswell, K. (2014). Registered replication report: Schooler and Engstler-Schooler (1990). Perspectives on Psychological Science, 9(5), 556-578.

  • Eerland, A., Sherrill, A. M., Magliano, J. P., Zwaan, R. A., Arnal, J. D., Aucoin, P., … & Crocker, C. (2016). Registered replication report: Hart & Albarracín (2011). Perspectives on Psychological Science, 11(1), 158-171.

  • Psychological Science Accelerator (PSA) Ongoing Replications



The politics of replicating famous studies

Sometimes responses to replication research can be negative. Failed replications of famous work, most notably power posing, ego depletion, stereotype threat, and facial feedback, have received a lot of attention.

  • Neuliep, J. W., & Crandall, R. (1990). Editorial bias against replication research. Journal of Social Behavior & Personality, 5(4), 85-90.

  • Neuliep, J. W., & Crandall, R. (1993). Reviewer bias against replication research. Journal of Social Behavior & Personality, 8(6), 21-29.


Cluster 7: Academic Life, Ethics and Culture

Description

Attainment of a grounding in topics related to academia and academics. Students should understand how individuals, teams, institutions, and academic culture work together to promote (or hinder) openness, inclusion, diversity, equity, and transparency. This includes gathering perspectives on navigating scientific and academic life and learning about the challenges and rewards of the academic setting, including the “hidden curriculum” of academic life.

There are 8 sub-clusters which aim to further parse the learning and teaching process:

  • Diversity
  • Equity
  • Inclusion
  • Citizen science
  • Team science
  • Adversarial collaboration
  • The structure of and incentives in academia
  • Types of academic, non-academic & alt-academic positions


Diversity

Diversity is the presence of difference within a specific environment, e.g. racial diversity, gender diversity, social-economic diversity, etc.



Equity

Equity means that everyone has access to the same opportunities. It recognizes that we all have different privileges and barriers, and thus we do not all start from the same position.



Inclusion

Inclusion means that individuals with different representations, identities, and feelings are respected, have influence, and are welcomed in a specific environment.

  • Bahlai, C., Bartlett, L. J., Burgio, K. R., Fournier, A., Keiser, C. N., Poisot, T., & Whitney, K. S. (2019). Open science isn’t always open to all scientists. American Scientist, 107(2), 78-82. https://doi.org/10.1511/2019.107.2.78

  • Carli, L. L., Alawa, L., Lee, Y., Zhao, B., & Kim, E. (2016). Stereotypes about gender and science: Women ≠ scientists. Psychology of Women Quarterly, 40(2), 244-260. https://doi.org/10.1177/0361684315622645

  • Cislak, A., Formanowicz, M., & Saguy, T. (2018). Bias against research on gender bias. Scientometrics, 115(1), 189-200. https://doi.org/10.1007/s11192-018-2667-0

  • Eagly, A. H., & Miller, D. I. (2016). Scientific eminence: Where are the women?. Perspectives on Psychological Science, 11(6), 899-904.

  • Flaherty, C. (2020, August 20). Something’s Got to Give. Inside Higher Ed. Retrieved from https://www.insidehighered.com/news/2020/08/20/womens-journal-submission-rates-continue-fall

  • Henrich, J., Heine, S. & Norenzayan, A. (2010) Most people are not WEIRD. Nature 466, 29. https://doi.org/10.1038/466029a

  • Larivière, V., Ni, C., Gingras, Y., Cronin, B., & Sugimoto, C. R. (2013). Bibliometrics: Global gender disparities in science. Nature News, 504(7479), 211. https://doi.org/10.1038/504211a

  • Macoun, A., & Miller, D. (2014). Surviving (thriving) in academia: Feminist support networks and women ECRs. Journal of Gender Studies, 23(3), 287-301. https://doi.org/10.1080/09589236.2014.909718

  • Myers, K. R., Tham, W. Y., Yin, Y., Cohodes, N., Thursby, J. G., Thursby, M. C., … & Wang, D. (2020). Unequal effects of the COVID-19 pandemic on scientists. Nature human behaviour, 4(9), 880-883.

  • Risner, L. E., Morin, X. K., Erenrich, E. S., Clifford, P. S., Franke, J., Hurley, I., & Schwartz, N. B. (2020). Leveraging a collaborative consortium model of mentee/mentor training to foster career progression of underrepresented postdoctoral researchers and promote institutional diversity and inclusion. PloS one, 15(9), e0238518. https://doi.org/10.1371/journal.pone.0238518

  • Roberson, M. L. (2020). On supporting early-career Black scholars. Nature Human Behaviour, 1-1. https://doi.org/10.1038/s41562-020-0926-6

  • Skitka, L. J., Melton, Z. J., Mueller, A. B., & Wei, K. Y. (2020). The Gender Gap: Who Is (and Is Not) Included on Graduate-Level Syllabi in Social/Personality Psychology. Personality and Social Psychology Bulletin, 0146167220947326. https://doi.org/10.1177/0146167220947326



Citizen science

Citizen science is scientific research conducted, in whole or in part, by amateur (or nonprofessional) scientists. Citizen science is sometimes described as “public participation in scientific research,” participatory monitoring, or participatory action research; its outcomes are often advancements in scientific research achieved by improving the scientific community’s capacity, as well as by increasing the public’s understanding of science.

  • Hart, D. D., & Silka, L. (2020). Rebuilding the ivory tower: A bottom-up experiment in aligning research with societal needs. Issues Sci Technol, 36(3), 64-70. https://issues.org/aligning-research-with-societal-needs/

  • Bonney, R., Cooper, C. B., Dickinson, J., Kelling, S., Phillips, T., Rosenberg, K. V., & Shirk, J. (2009). Citizen science: a developing tool for expanding science knowledge and scientific literacy. BioScience, 59(11), 977-984.

  • Bonney, R., Shirk, J. L., Phillips, T. B., Wiggins, A., Ballard, H. L., Miller-Rushing, A. J., & Parrish, J. K. (2014). Next steps for citizen science. Science, 343(6178), 1436-1437.

  • Cohn, J. P. (2008). Citizen science: Can volunteers do real research?. BioScience, 58(3), 192-197.



Team science

Team science institutions coordinate a large group of scientists to solve a problem; individual scientists are rewarded with a publication by the institution for their efforts and resources. Once a group signs onto a team science project, the institution takes on a coordinating role, merging the resources from all scientists and focusing them on a common project.

  • Forscher, P. S., Wagenmakers, E. J., DeBruine, L., Coles, N., Silan, M. A., & IJzerman, H. (2020). A Manifesto for Team Science. Retrieved from https://psyarxiv.com/2mdxh

  • Silberzahn, R., & Uhlmann, E. L. (2015). Crowdsourced research: Many hands make tight work. Nature News, 526(7572), 189. https://doi.org/10.1038/526189a

  • Wagge, J. R., Brandt, M. J., Lazarevic, L. B., Legate, N., Christopherson, C., Wiggins, B., & Grahe, J. E. (2019). Publishing research with undergraduate students via replication work: The collaborative replications and education project. Frontiers in psychology, 10, 247. https://doi.org/10.3389/fpsyg.2019.00247



Adversarial collaborations

  • Tijdink, J. K., Verbeke, R., & Smulders, Y. M. (2014). Publication pressure and scientific misconduct in medical scientists. Journal of Empirical Research on Human Research Ethics, 9(5), 64-71. https://doi.org/10.1177/1556264614552421

  • Bateman, I., Kahneman, D., Munro, A., Starmer, C., & Sugden, R. (2005). Testing competing models of loss aversion: An adversarial collaboration. Journal of Public Economics, 89(8), 1561-1580.



Structures and Incentives in academia


  • Bol, T., de Vaan, M., & van de Rijt, A. (2018). The Matthew effect in science funding. Proceedings of the National Academy of Sciences, 115(19), 4887-4890.

  • Corker, K. S. (2017). Why a Focus on Eminence is Misguided: A Call to Return to Basic Scientific Values. https://doi.org/10.31234/osf.io/yqfrd

  • Diener, E. (2016). Improving departments of psychology. Perspectives on Psychological Science, 11(6), 909-912.

  • Ebersole, C. R., Axt, J. R., & Nosek, B. A. (2016). Scientists’ reputations are based on getting it right, not being right. PLoS biology, 14(5), e1002460.

  • Fanelli, D. (2012). Negative results are disappearing from most disciplines and countries. Scientometrics, 90(3), 891-904. https://doi.org/10.1007/s11192-011-0494-7

  • Feist, G. J. (2016). Intrinsic and extrinsic science: A dialectic of scientific fame. Perspectives on Psychological Science, 11(6), 893-898.

  • Ferreira, F. (2017). Fame: I’m Skeptical. https://doi.org/10.31234/osf.io/6zb4f

  • Flier J. (2017) Faculty promotion must assess reproducibility. Nature, 549(7671),133. https://doi.org/10.1038/549133a

  • Foss, D. J. (2016). Eminence and omniscience: Statistical and clinical prediction of merit. Perspectives on Psychological Science, 11(6), 913-916.

  • Gernsbacher, M. A. (2018). Rewarding research transparency. Trends in cognitive sciences, 22(11), 953-956. https://doi.org/10.1016/j.tics.2018.07.002

  • Hirsch, J. E. (2010). An index to quantify an individual’s scientific research output that takes into account the effect of multiple coauthorship. Scientometrics, 85(3), 741-754.

  • Innes-Ker, Å. (2017). The Focus on Fame Distorts Science. https://doi.org/10.31234/osf.io/vyr3e

  • Ioannidis, J. P., & Thombs, B. D. (2019). A user’s guide to inflated and manipulated impact factors. European journal of clinical investigation, 49(9), e13151. https://doi.org/10.1111/eci.13151

  • Jamieson, K. H., McNutt, M., Kiermer, V., & Sever, R. (2019). Signaling the trustworthiness of science. Proceedings of the National Academy of Sciences, 116(39), 19231-19236. https://doi.org/10.1073/pnas.1913039116

  • Jamieson, K. H., McNutt, M., Kiermer, V., & Sever, R. (2020). Reply to Kornfeld and Titus: No distraction from misconduct. Proceedings of the National Academy of Sciences of the United States of America, 117(1), 42. https://doi.org/10.1073/pnas.1918001116

  • Kornfeld, D. S., & Titus, S. L. (2016). Stop ignoring misconduct. Nature, 537(7618), 29-30. https://doi.org/10.1038/537029a

  • Kornfeld, D. S., & Titus, S. L. (2020). Signaling the trustworthiness of science should not be a substitute for direct action against research misconduct. Proceedings of the National Academy of Sciences of the United States of America, 117(1), 41. https://doi.org/10.1073/pnas.1917490116

  • Li, W., Aste, T., Caccioli, F., & Livan, G. (2019). Early coauthorship with top scientists predicts success in academic careers. Nature communications, 10(1), 1-9.

  • Matosin, N., Frank, E., Engel, M., Lum, J. S., & Newell, K. A. (2014). Negativity towards negative results: a discussion of the disconnect between scientific worth and scientific culture. Disease Models & Mechanisms, 7(2), 171. https://doi.org/10.1242/dmm.015123

  • Morgan, A. C., Economou, D. J., Way, S. F., & Clauset, A. (2018). Prestige drives epistemic inequality in the diffusion of scientific ideas. EPJ Data Science, 7(1), 40. https://doi.org/10.1140/epjds/s13688-018-0166-4

  • Naudet, F., Ioannidis, J., Miedema, F., Cristea, I. A., Goodman, S. N., & Moher, D. (2018). Six principles for assessing scientists for hiring, promotion, and tenure. Impact of Social Sciences Blog. http://eprints.lse.ac.uk/90753/

  • Pickett, C. (2017). Let’s Look at the Big Picture: A System-Level Approach to Assessing Scholarly Merit. https://doi.org/10.31234/osf.io/tv6nb

  • Roediger III, H. L. (2016). Varieties of fame in psychology. Perspectives on Psychological Science, 11(6), 882-887.

  • Ruscio, J. (2016). Taking advantage of citation measures of scholarly impact: Hip Hip h Index!. Perspectives on Psychological Science, 11(6), 905-908.

  • Shiota, M. N. (2017). “Fame” is the Problem: Conflation of Visibility With Potential for Long-Term Impact in Psychological Science. https://doi.org/10.31234/osf.io/4kwuq

  • Simonton, D. K. (2016). Giving credit where credit’s due: Why it’s so hard to do in psychological science. Perspectives on Psychological Science, 11(6), 888-892.

  • Tressoldi, P. E., Giofrè, D., Sella, F., & Cumming, G. (2013). High impact = high statistical standards? Not necessarily so. PloS one, 8(2), e56180.

  • Sternberg, R. J. (2016). “Am I famous yet?” Judging scholarly merit in psychological science: An introduction. Perspectives on Psychological Science, 11(6), 877-881.

  • Vazire, S. (2017). Against eminence. https://doi.org/10.31234/osf.io/djbcw

  • Van Dijk, D., Manor, O., & Carey, L. B. (2014). Publication metrics and success on the academic job market. Current Biology, 24(11), R516-R517.



Types of academic, non-academic & alt-academic positions



Feminist Thought

It aims to understand the nature of gender inequality. Themes explored include discrimination, objectification, oppression, patriarchy, stereotyping, and aesthetics. It examines women’s and men’s social roles, experiences, interests, chores, and feminist politics in a variety of fields.