Resources


Our amazing team has curated many resources for the community.

The Invisible Workload of Open Research

It is acknowledged that conducting open research requires additional time and effort compared to conducting ‘closed’ research. However, this additional work is often discussed only in abstract terms, a discourse which ignores the practicalities of …

The ironic effect of significant results on the credibility of multiple-study articles.

Cohen (1962) pointed out the importance of statistical power for psychology as a science, but statistical power of studies has not increased, while the number of studies in a single article has increased. It has been overlooked that multiple studies …

The landscape of open science in behavioral addiction research: Current practices and future directions

Open science refers to a set of practices that aim to make scientific research more transparent, accessible, and reproducible, including pre-registration of study protocols, sharing of data and materials, the use of transparent research methods, and …

The logical structure of experiments lays the foundation for a theory of reproducibility

The scientific reform movement has proposed openness as a potential remedy to the putative reproducibility or replication crisis. However, the conceptual relationship among openness, replication experiments and results reproducibility has been …

The meaning of “significance” for different types of research

Adrianus Dingeman de Groot (1914-2006) was one of the most influential Dutch psychologists. He became famous for his work "Thought and Choice in Chess", but his main contribution was methodological: De Groot co-founded the Department of Psychological …

The Meaningfulness of Effect Sizes in Psychological Research: Differences Between Sub-Disciplines and the Impact of Potential Biases

Effect sizes are the currency of psychological research. They quantify the results of a study to answer the research question and are used to calculate statistical power. The interpretation of effect sizes—when is an effect small, medium, or …

The Missing Semester of Your CS Education

A course on the computer science skills needed for all scientific research

The N-Pact Factor: Evaluating the Quality of Empirical Journals with Respect to Sample Size and Statistical Power

The authors evaluate the quality of research reported in major journals in social-personality psychology by ranking those journals with respect to their N-pact Factors (NF)—the statistical power of the empirical studies they publish to detect typical …

The natural selection of bad science.

Poor research design and data analysis encourage false-positive findings. Such poor methods persist despite perennial calls for improvement, suggesting that they result from something more than just misunderstanding. The persistence of poor methods …

The need for public opinion and survey methodology research to embrace preregistration and replication, exemplified by a team’s failure to replicate their own findings on visual cues in grid-type questions

Survey researchers take great care to measure respondents’ answers in an unbiased way; but, how successful are we as a field at remedying unintended and intended biases in our research? The validity of inferences drawn from studies has been found to …