Commentary
Check the numbers: From the U.S. subprime crisis to global warming, bad research is driving disastrous public policy
Empirical research in what are commonly called peer-reviewed academic journals is often used as the basis for public policy decisions, in part because people think that peer review involves checking the accuracy of the research. That might have been true in the distant past, but times have long since changed. Academic journals rarely, if ever, check data and calculations for accuracy during the review process, nor do they claim to. Journal editors claim only that, in selecting a paper for publication, they think it merits examination by the research community.
A second dirty secret of academic research is that data and computational methods are so seldom disclosed that independent examination and replication have become nearly impossible for most published research.
In a new report we wrote for the Fraser Institute, we review a series of recent efforts to replicate empirical studies published in economics journals. Over a thousand papers have now been examined. In over half the cases the data were not archived. When the authors were asked for their data, the majority reported being unable or unwilling to provide them. Where data were provided, the computer code used to generate the results was almost never released, greatly complicating the task of replicating the statistical results. Overall, the vast majority of economics papers could not be independently verified, even in some cases where the authors agreed to assist the replication efforts.
A set of interlocking problems in the peer review system has become pervasive throughout academia: authors do not release their data; journals do not ask for them; thousands of papers are published each year that nobody checks for accuracy; and independent replication has become so costly and difficult that it is rarely attempted.
In researching this issue we have noticed two contrasting reactions: some non-academics are surprised to find out that peer review does not involve checking data and calculations, while academics are surprised that anyone thought it did.
Our report also explores numerous examples from other academic disciplines, such as medicine, history, environmental science and forestry, in which prominent or policy-relevant research was shielded from independent scrutiny by withholding data, computer code, or both. In some cases the research was exposed as faulty only years later, sometimes only through government intervention to force data disclosure, and sometimes after laws had already been passed on the basis of the faulty research.
Non-disclosure of essential research materials may have deleterious scientific consequences, but our ultimate concern is its growing negative effect on the formation of public policy.
One striking example in the context of the current US housing meltdown concerns a 1992 study by economists at the Federal Reserve Bank of Boston, published in the prestigious American Economic Review, that purported to show statistically significant evidence of racial discrimination in US mortgage lending practices. Based on this study, federal regulations were rushed into place that forced banks to loosen lending standards and threatened them with severe financial penalties for failure to correct the alleged discrimination. It took nearly six years, and a Freedom of Information Act request, for independent economists to discover coding errors in the data that invalidated the original conclusions. By then the new lending rules were in place, and they ultimately contributed to the buildup of bad mortgage debt now ravaging the US financial system.
A related feature of this problem is that when a study becomes prominent in a policy debate, academics can end up forming a protective cheering squad around it, defending it from independent scrutiny. In 2006, a US-appointed expert review panel looking at a controversial global warming study noted that when the issue became politically heated, scientists working in the area formed a self-reinforcing feedback mechanism that made it effectively impossible for them to critically assess the work in question, while dismissing the efforts of outsiders who were trying to do so. It should not be assumed that the scientific process will reliably correct erroneous research: the sociological process within science is just as likely to protect false results from scrutiny.
Users of academic research must recognize that scientific findings in journal articles are not checked for accuracy and, unless proven otherwise, are likely not independently replicable. In our report we spell out a simple checklist of conditions that government policymakers should be prepared to verify before basing public policy decisions on the claims in an academic journal article. These are not complicated or contentious matters; they are things long assumed to be true: the data described in the paper were actually used in the analysis, the data are available for independent inspection, the calculations described in the paper match those in the computer code, and so on. If these things can't be shown to be true, the paper should not be relied on.
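To make the last of those conditions concrete, here is a minimal sketch, in Python, of the kind of replication check an analyst could run when a paper's data and code are archived. Everything in it is hypothetical: the dataset, the reported coefficients, and the tolerance are stand-ins, and the regression is deliberately the simplest possible case, not a reconstruction of any particular study.

```python
# Minimal replication-check sketch: refit a paper's regression from the
# archived data and compare the estimates against the published table.
# All numbers below are hypothetical stand-ins.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the archived dataset; in practice this would be loaded
# from the journal's archive (e.g., with np.loadtxt or pandas.read_csv).
n = 500
x = rng.normal(size=n)
y = 2.0 + 0.5 * x + rng.normal(scale=0.3, size=n)

# Refit the model exactly as the paper describes it (here, simple OLS
# with an intercept).
X = np.column_stack([np.ones(n), x])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Estimates copied from the published table (hypothetical values).
reported = {"intercept": 2.0, "slope": 0.5}

# Flag any coefficient that differs from the published value by more
# than a small tolerance; a mismatch is grounds for asking the authors
# for their code before relying on the result.
tolerance = 0.05
for (name, want), got in zip(reported.items(), coef):
    status = "OK" if abs(got - want) <= tolerance else "MISMATCH"
    print(f"{name}: refit={got:.3f} reported={want:.3f} {status}")
```

The point of the sketch is not the statistics but the workflow: none of these checks can even be attempted unless the data and code are disclosed in the first place.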
Academics rightly insist on the freedom to do their research without public or political interference. But when that research influences policy, the public has a right to demand independent verification. Researchers might want to influence policy, but if they plan to keep their data and computer code to themselves, they should keep their results to themselves too.
Ross McKitrick
Professor of Economics, University of Guelph
Bruce McCullough