Statistical power analysis provides the conventional approach to assess error rates when designing a research study. However, power analysis is flawed in that a narrow emphasis on statistical significance is placed as the primary focus of study …
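For concreteness, here is a minimal sketch of the kind of conventional power calculation the excerpt refers to: an approximate power computation for a two-sided, two-sample comparison of means under a normal approximation. The function name and the numbers below are illustrative assumptions, not taken from the article.

```python
from scipy.stats import norm

def two_sample_power(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-sample test for a
    standardized effect size d, with n observations per group
    (normal approximation to the t-test)."""
    z_crit = norm.ppf(1 - alpha / 2)
    ncp = d * (n_per_group / 2) ** 0.5   # noncentrality under the alternative
    return norm.sf(z_crit - ncp) + norm.cdf(-z_crit - ncp)

# Example: a standardized effect of d = 0.5 with 64 participants per group
# gives roughly 80% power at alpha = .05.
print(f"power ≈ {two_sample_power(0.5, 64):.2f}")
```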
Vul, Harris, Winkielman, and Pashler (2009, this issue) argue that correlations in many cognitive neuroscience studies are grossly inflated due to a widespread tendency to use nonindependent analyses. In this article, I argue that Vul et al.'s …
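To see why nonindependent (circular) analyses inflate correlations, consider a toy simulation: generate pure-noise "voxel" data and a pure-noise behavioral measure, select the voxels whose correlation with behavior passes a threshold, and then report the correlation of those same voxels on the same data. The Python sketch below is illustrative only; the variable names, sample sizes, and threshold are assumptions, not Vul et al.'s analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
n_subjects, n_voxels = 20, 5000

# Pure noise: no true relationship between any "voxel" and "behavior".
voxels = rng.normal(size=(n_subjects, n_voxels))
behavior = rng.normal(size=n_subjects)

# Correlate every voxel with behavior on the same data used for selection.
voxels_c = voxels - voxels.mean(axis=0)
behavior_c = behavior - behavior.mean()
r = (voxels_c * behavior_c[:, None]).sum(axis=0) / (
    np.sqrt((voxels_c ** 2).sum(axis=0)) * np.sqrt((behavior_c ** 2).sum())
)

# Nonindependent analysis: keep only voxels that pass a threshold, then
# report their (necessarily high) mean correlation on the same data.
selected = r[np.abs(r) > 0.5]
print(f"{selected.size} 'significant' voxels, mean |r| ≈ {np.abs(selected).mean():.2f}")
```

Even though every correlation is zero in the population, the selected voxels show a substantial average correlation, because the selection step and the reported estimate use the same noisy data.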
Short and rapid publication of research findings has many advantages. However, there is another side of the coin that needs careful consideration. We argue that the most dangerous aspect of a shift toward “bite-size” publishing is the relationship …
Effect sizes are the most important outcome of empirical studies. Most articles on effect sizes highlight their importance to communicate the practical significance of results. For scientists themselves, effect sizes are most useful because they …
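As a concrete illustration of what "effect size" means here, one common standardized effect size is Cohen's d: the difference between two group means divided by their pooled standard deviation. The sketch below uses synthetic data; the function and numbers are illustrative, not from the article.

```python
import numpy as np

def cohens_d(x, y):
    """Standardized mean difference between two groups (pooled SD)."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * np.var(x, ddof=1) +
                  (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

rng = np.random.default_rng(0)
treated = rng.normal(0.5, 1.0, size=100)   # simulated treatment group
control = rng.normal(0.0, 1.0, size=100)   # simulated control group

# Should land near the simulated true difference of 0.5, up to sampling noise.
print(f"Cohen's d ≈ {cohens_d(treated, control):.2f}")
```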
I read this post over at the blog Cartesian Faith about Probability and Monte Carlo methods. The post describes how to numerically integrate using Monte Carlo methods. I thought the results looked cool, so I used the method to calculate the overlap of …
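The post itself should be consulted for the details, but the general technique it describes, Monte Carlo integration, can be sketched in a few lines of Python: draw uniform samples over the interval, average the integrand at those points, and scale by the interval length. Everything below (function name, integrand, sample size) is an illustrative assumption, not code from the post.

```python
import numpy as np
from scipy.stats import norm

def mc_integrate(f, a, b, n=100_000, rng=None):
    """Estimate the integral of f over [a, b] by averaging f at
    points drawn uniformly from [a, b]."""
    rng = np.random.default_rng() if rng is None else rng
    x = rng.uniform(a, b, size=n)
    return (b - a) * f(x).mean()

# Example: integrate the standard normal density over [-1, 1];
# the exact value is about 0.6827.
est = mc_integrate(norm.pdf, -1.0, 1.0)
print(f"Monte Carlo estimate: {est:.4f}")
```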
The purpose of this course is to acquaint students with recent developments in open science and reproducibility of the research workflow. By the end of this course students will be familiar with documenting their research workflow (e.g., idea …
Campbell’s Law explains the replication crisis. In brief, useful tools such as hypotheses, p-values, and multi-study designs came to be viewed as indicators of strong science, and thus goals in and of themselves. Consequently, their use became …
How do psychologists determine what is true and what is false about human behavior, affect, and cognition? The question encompasses more than we can know from a single study or even a single research paper, and the issues run deeper than just …