


This article sort of represents a straw-man version of the reproducibility crisis.

Any serious article on the subject does not focus on the correctness of individual studies, but instead on the absence of published replication attempts for landmark studies. It’s precisely because science is hard & false positives are common that the low rate of reproduction attempts, the tendency to avoid publishing negative results, and (on the science journalism side) the naive acceptance of shocking-sounding results are so damaging at scale.

Will teaching general audiences & science journalists about the variety of potential statistical & methodological flaws help? Sure, at least with respect to the science-journalism side. It won’t help with the acceptance of false positives due to selection bias (which led people like Kahneman to put a lot of trust in ideas like cultural priming that ended up being completely unreproducible).

Anyone with an interest in science knows that without high-quality replication & large, randomized samples, results are meaningless. This is not to say that small-scale exploratory studies are not worthwhile, but instead that such studies should be treated as one step above opinion columns with regard to how seriously they should be taken.
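To make this concrete, here is a minimal simulation sketch (not from the original article; the per-group sample size, effect size, base rate of true hypotheses, and significance threshold are all illustrative assumptions). It shows how, when small studies test mostly-false hypotheses, a large share of the “significant” results they produce are false positives.

```python
# Minimal sketch: how often a "significant" small-n result is a false positive.
# All numeric parameters below are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_studies = 20_000    # simulated exploratory studies
n_per_group = 20      # small sample per group
true_effect = 0.5     # standardized effect size when an effect really exists
base_rate = 0.10      # fraction of tested hypotheses that are actually true
alpha = 0.05

hits = false_alarms = 0
for _ in range(n_studies):
    effect_is_real = rng.random() < base_rate
    control = rng.normal(0.0, 1.0, n_per_group)
    treated = rng.normal(true_effect if effect_is_real else 0.0, 1.0, n_per_group)
    if stats.ttest_ind(control, treated).pvalue < alpha:
        hits += effect_is_real
        false_alarms += not effect_is_real

significant = hits + false_alarms
print(f"{significant} significant results, "
      f"{false_alarms / significant:.0%} of them false positives")
```

Under these assumptions, roughly half of the significant results come from null effects, which is the sense in which a single small study deserves little more weight than an opinion column until it has been replicated.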

There are also real reasons why scientists end up with research and publication habits that encourage bad science; we can’t pretend those pressures don’t exist, but they are economic and incentive-related, and new incentives are easily introduced into science if somebody has the money.

Providing funding for pre-registered replications seems likely to solve many of the problems people have with the state of experimental psychology (and if you don’t agree that those problems exist, you don’t have to participate in such programs: you can leave the money on the table and instead watch other people attempt to replicate your work). Likewise, automatic stats checkers are fairly unobtrusive: if you avoid mathematical errors, you won’t get many notes, and if you disagree with the notes you can ignore them and wait to be vindicated. These systems already exist, and are already being used to change incentive structures in experimental psychology in order to compensate for common sources of concern.
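As a sketch of what such an automated checker does (an illustrative reimplementation of the general idea, in the spirit of existing tools like statcheck but not their actual code), the snippet below parses reported t-test results, recomputes the two-tailed p-value from the test statistic and its degrees of freedom, and flags reports where the recomputed value disagrees with the reported one.

```python
# Minimal sketch of an automatic stats checker: recompute reported p-values
# from t statistics and degrees of freedom, and flag inconsistencies.
import re
from scipy import stats

def check_t_reports(text: str, tolerance: float = 0.01) -> list[str]:
    """Find reports like 't(38) = 2.20, p = .03' and verify the p-value."""
    notes = []
    pattern = r"t\((\d+)\)\s*=\s*(-?\d+\.?\d*)\s*,\s*p\s*=\s*(\d*\.\d+)"
    for df, t_value, p_reported in re.findall(pattern, text):
        p_recomputed = 2 * stats.t.sf(abs(float(t_value)), int(df))
        if abs(p_recomputed - float(p_reported)) > tolerance:
            notes.append(
                f"t({df}) = {t_value}: reported p = {p_reported}, "
                f"recomputed p = {p_recomputed:.3f}"
            )
    return notes

# The second report below is internally inconsistent and gets flagged.
print(check_t_reports("t(38) = 2.20, p = .03; t(38) = 1.10, p = .01"))
```

A researcher who reports statistics consistently never hears from a checker like this, which is what makes it a low-cost intervention.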

Individual experiments’ sample sizes were never the point of the replication crisis, except when individual experiments are taken significantly more seriously than they should be (which is not unusual).

By John Ohno on December 5, 2016.

[Canonical link](https://medium.com/@enkiv2/this-article-sort-of-represents-a-straw-man-version-of-the-reproducability-crisis-df80d2dac194)

Exported from Medium on September 18, 2020.
