On the Reproducibility Crisis in Science

In the last few months, a number of articles discussing the problem of reproducibility in science have appeared. Writing for the website phys.org in September, Fiona Fidler and Ascelin Gordon called it the “reproducibility crisis.” A piece in The Economist last October emphasized how this harms science’s ability to correct itself. In the same month, the Los Angeles Times’ Michael Hiltzik wrote on how “researchers are rewarded for splashy findings, not for double-checking accuracy.”

Reproducibility is one of the main pillars of the scientific method. Its meaning is straightforward: scientists should devise experiments and studies that can be repeated, by themselves and by other researchers working independently, and the results obtained should be the same. If a study proposes a drug for a certain disease, for instance, it is clear why the method should be tested many times before the medication appears in drugstores. But lately there have been alarming reports regarding the practice.

Michael Hiltzik cites a project in which the biotech firm Amgen tried to reproduce 53 important studies on cancer and blood biology. Only 6 of them, about 11 percent, led to similar results. In another example, a company in Germany found that only a quarter of the published research backing its R&D projects could be reproduced.

The consequences are clearly bad for the public in general, but they also harm science itself by impairing its reputation, which is already under severe attack on the topics of global warming and evolution by natural selection.

The blame is shared between scientists and the general public. Due to people’s restless desire for novelty, and journalists’ eagerness to meet that demand, science news has its own section in major newspapers, where the curiosity value of a discovery matters more than how reproducible the study is. The scientists’ fault lies in the fact that research quality is measured essentially by the number of publications, with little attention paid to reproducibility.

One initiative to solve the problem is the creation of a “reproducibility index,” to be attached to journals in a way similar to the “impact factor.” In Computer Science, researcher Jake Vanderplas proposed that the code used to produce published results should be well-documented, well-tested, and publicly available. Vanderplas argues that this “is essential not only to reproducibility in modern scientific research, but to the very progression of research itself.”
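As an illustration of what those three criteria could look like in practice, consider the following minimal Python sketch (a hypothetical example, not Vanderplas’s own code): a documented analysis function paired with an automated test, so that anyone who downloads the code can check that it behaves as claimed before trying to reproduce a paper’s numbers.

```python
import numpy as np

def effect_size(treatment, control):
    """Compute Cohen's d, a standardized difference between two samples.

    Well-documented: the docstring states exactly what is computed,
    so an independent group can compare against its own implementation.
    """
    treatment = np.asarray(treatment, dtype=float)
    control = np.asarray(control, dtype=float)
    n1, n2 = len(treatment), len(control)
    # Pooled variance, weighting each group by its degrees of freedom.
    pooled_var = ((n1 - 1) * treatment.var(ddof=1) +
                  (n2 - 1) * control.var(ddof=1)) / (n1 + n2 - 2)
    return (treatment.mean() - control.mean()) / np.sqrt(pooled_var)

def test_effect_size_is_zero_for_identical_groups():
    # Well-tested: a property that must hold, runnable with pytest.
    sample = [1.0, 2.0, 3.0, 4.0]
    assert effect_size(sample, sample) == 0.0
```

Publishing such a file alongside the manuscript, the third criterion, lets other researchers rerun the exact analysis instead of reimplementing it from a prose description.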

Similar to how a successful smartphone is accompanied by a so-called ecosystem of apps, developers, and media content, science venues should also provide tools that make reproducibility easier. Submission of data, code, and other materials supplementing the main manuscript should be required. Rather than placing an extra burden on researchers, this would facilitate validation and comparison of results, and speed up the pace at which important discoveries happen.
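One concrete, hypothetical form such a supplement could take is a small driver script archived with the paper that fixes the random seed and records the software versions used, so that a published number can be regenerated exactly. The file name and values below are illustrative assumptions, not any journal’s standard.

```python
import json
import platform

import numpy as np

SEED = 42  # fixed seed (illustrative value): reruns give identical draws

def run_experiment(n_trials=1000):
    """Toy stand-in for a paper's analysis: estimate a mean from noisy draws."""
    rng = np.random.default_rng(SEED)
    samples = rng.normal(loc=0.5, scale=1.0, size=n_trials)
    return {"estimate": float(samples.mean()), "n_trials": n_trials}

if __name__ == "__main__":
    results = run_experiment()
    # Record the environment so a reviewer can match versions when rerunning.
    results["environment"] = {
        "python": platform.python_version(),
        "numpy": np.__version__,
        "seed": SEED,
    }
    with open("results.json", "w") as fh:
        json.dump(results, fh, indent=2)
```

A reviewer or a competing lab could then compare a regenerated results.json against the published one, which is exactly the kind of validation and comparison described above.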

(A version of this article appeared in Washington Square News.)
