by Pascal-Emmanuel Gobry
Science is broken.
That’s the thesis of a must-read article in First Things magazine, in which William A. Wilson accumulates evidence that a lot of published research is false. But that’s not even the worst part.
Advocates of the existing scientific research paradigm usually smugly declare that while some published conclusions are surely false, the scientific method has “self-correcting mechanisms” that ensure that, eventually, the truth will prevail. Unfortunately for all of us, Wilson makes a convincing argument that those self-correcting mechanisms are broken.
For starters, there’s a “replication crisis” in science. This is particularly true in experimental psychology, where far too many prestigious studies simply can’t be reliably replicated. But it’s not just psychology. In 2011, the pharmaceutical company Bayer looked at 67 blockbuster drug discovery research findings published in prestigious journals, and found that three-fourths of them weren’t right. Another study of cancer research found that only 11 percent of preclinical cancer research could be reproduced. Even in physics, supposedly the hardest and most reliable of all sciences, Wilson points out that “two of the most vaunted physics results of the past few years — the announced discovery of both cosmic inflation and gravitational waves at the BICEP2 experiment in Antarctica, and the supposed discovery of superluminal neutrinos at the Swiss-Italian border — have now been retracted, with far less fanfare than when they were first published.”
What explains this? In some cases, human error. Much of the research world exploded in rage and mockery when it was discovered that a highly publicized finding by the economists Ken Rogoff and Carmen Reinhart linking higher public debt to lower growth was due to an Excel error. Steven Levitt, of Freakonomics fame, largely built his career on a paper arguing that abortion led to lower crime rates 20 years later because the aborted babies were disproportionately future criminals. Two economists went through the painstaking work of recoding Levitt’s statistical analysis, and found a basic arithmetic error.
Then there is outright fraud. In a 2011 survey of 2,000 research psychologists, over half admitted to selectively reporting those experiments that gave the result they were after. The survey also concluded that around 10 percent of research psychologists have engaged in outright falsification of data, and more than half have engaged in “less brazen but still fraudulent behavior such as reporting that a result was statistically significant when it was not, or deciding between two different data analysis techniques after looking at the results of each and choosing the more favorable.”
Then there’s everything in between human error and outright fraud: rounding numbers in whichever way looks better, checking a result less thoroughly when it comes out the way you like, and so forth.
SOURCE: The Week