Science Research Needs an Overhaul

The current incentive structure often leads to dead-end studies—but there are ways to fix the problem

SA Forum is an invited essay from experts on topical issues in science and technology.

Earlier this year a series of papers in The Lancet reported that 85 percent of the $265 billion spent each year on medical research is wasted. This is not because of fraud, although it is true that retractions are on the rise. Instead, it is because too often absolutely nothing happens after initial results of a study are published. No follow-up investigations ensue to replicate or expand on a discovery. No one uses the findings to build new technologies.

The problem is not just what happens after publication—scientists often have trouble choosing the right questions and properly designing studies to answer them. Too many neuroscience studies test too few subjects to arrive at firm conclusions. Researchers publish reports on hundreds of treatments for diseases that work in animal models but not in humans. Drug companies find themselves unable to reproduce promising drug targets published by the best academic institutions. The growing recognition that something has gone awry in the laboratory has led to calls for, as one might guess, more research on research (aka, meta-research)—attempts to find protocols that ensure that peer-reviewed studies are, in fact, valid.


It will take a concerted effort by scientists and other stakeholders to fix this problem. We need to identify and correct system-level flaws that too often lead us astray. This is exactly the goal of a new center I co-founded at Stanford University earlier this year: the Meta-Research Innovation Center at Stanford (METRICS), which will study research practices and how they can be optimized. It will examine the best means of designing research protocols and agendas to ensure that results are not dead ends but rather pave a path forward.

The center will do so by exploring the best ways to make scientific investigation more reliable and efficient. For example, there is a lot of interest in collaborative team science, study registration, stronger study designs and statistical tools, and better peer review, along with making scientific data, analyses and protocols widely available so that others can replicate experiments, thereby fostering trust in the conclusions of those studies. In the past, reproducing other scientists’ analyses or replicating their results has too often been looked down on as wasteful “me-too” work, but such efforts can help avoid false leads that would have been even more wasteful.

Perhaps the biggest impediment to replication is the inaccessibility of data and protocols necessary to rerun the analyses that went into the original experiments. Searching for such information in the archives can be like embarking on an archeological expedition. Investigators die, move and change jobs; computers crash, online links malfunction. Sponsors—in particular those from industry—merge with or get bought by others. Data are sometimes lost—even, as one researcher claimed when confronted about spurious results, eaten by termites.

There has definitely been some recent progress. An increasing number of journals, including Nature and Science, have adopted measures such as checklists for study design and reporting while improving statistical review and encouraging access to data and protocols. (Scientific American is part of Nature Publishing Group.) Several funding agencies, meanwhile, have asked that researchers outline their plans for sharing data before they can receive a government grant. And many fields that process large compendia of data—from astronomy to high-energy physics to genomics—have made attempts to open their databases.

But it will take much more to achieve a lasting culture change. Investigators should be rewarded for performing good science rather than just getting splashy, statistically significant (“positive”) but nonreplicable results. Revising the prevailing incentive structure may require changes on the part of journals, funders, universities and other research institutions. METRICS will work with other researchers to identify and encourage the best research practices, ensuring that well-established statistical principles and other methods, endorsed by all researchers but too often ignored in practice, are widely adopted.

These changes—most notably the need to implement measures to ensure replicability—are essential to safeguard the legacy of the scientific enterprise. Identifying and putting the best standards in place will require the efforts of the entire research community. The payoff, however, will be great: science will offer the greatest intellectual rewards for investigators, and its replicable successes will provide the greatest possible benefit for humanity.