
Widespread withholding of data by academic researchers undermines policymaking process

Release Date: February 18, 2009

TORONTO - Academic research that is cloaked in secrecy by researchers who refuse to share their data or methodology is contributing to bad public policy, according to a new study released today by the Fraser Institute, an independent research organization.

Politicians and policymakers often rely on peer-reviewed academic research to justify public policy decisions, without realizing that academic peer review rarely, if ever, involves verifying the data and calculations. Because many researchers do not release their data, results are seldom subject to independent replication. This allows flawed research to go unchallenged, to the detriment of public policymaking, concludes the study, Check the Numbers: The Case for Due Diligence in Policy Formation.

At the heart of the problem are frequent refusals by researchers to release the data and computer code upon which they base their conclusions. The problem is compounded by academic journals that are unable or unwilling to check the content they publish.

"This study arose out of our experiences attempting to replicate published empirical research and our concerns about the way unaudited research is used in public policy formation," said Ross McKitrick, study co-author and Fraser Institute senior fellow.

"When a piece of academic research takes on a public role, such as becoming the basis for public policy decisions, then practices that obstruct independent replication prevent the proper functioning of the scientific process and can lead to poor public decision making."

McKitrick, an economics professor at the University of Guelph, and co-author Bruce D. McCullough, professor of decision sciences at Drexel University in Philadelphia, summarize efforts to replicate more than 1,000 economics articles published since the 1980s. Most authors did not release their data when asked, and of those who did, only a small number of results could be reproduced. McKitrick and McCullough also examined cases from a broad range of academic disciplines, including history, forestry, environmental science, health, and finance, in which influential studies were later found to be flawed, but only after lengthy battles to obtain the underlying data and, in some cases, years after policy decisions had been made on the basis of the flawed results.

Some of the key examples McKitrick and McCullough explore are:

Federal Reserve Bank of Boston on mortgage lending

A 1992 study by economists at the Federal Reserve Bank of Boston purported to show widespread discrimination against minorities in the Boston mortgage market. The study quickly became the basis for government mandates to relax lending rules, allowing people who did not meet traditional lending requirements to obtain mortgages. This ultimately contributed to the current U.S. financial crisis.

When independent researchers attempted to replicate the study, the underlying data were inaccessible. Key information eventually obtained through Freedom of Information Act requests revealed coding errors in the original data that invalidated the results. But the replication process took six years, by which time the new lending rules had long been enacted.

U.S. Centers for Disease Control and Prevention on obesity

A 2004 study by the U.S. Centers for Disease Control and Prevention, published in the Journal of the American Medical Association, claimed that obesity kills 400,000 Americans annually. The study attracted significant media attention, and the U.S. government immediately allocated $60 million for obesity-related programs.

But other researchers soon discovered that the study's data were unreliable and that a proper peer review had not been conducted. The following year, CDC scientists estimated that the number of deaths attributable to obesity might be only 26,000, and the CDC began downplaying any numerical estimate of obesity-related deaths.

The "hockey stick" graph and climate change

A 1998 study into the climate history of the northern hemisphere, led by Michael Mann, resulted in a graph implying that the Earth's climate cooled slightly for 900 years and then warmed rapidly in the 20th century. The graph was used extensively by the United Nations' Intergovernmental Panel on Climate Change and played an influential role in convincing governments around the world to ratify the Kyoto Protocol in 2002.

But when independent researchers McKitrick and Stephen McIntyre tried to replicate Mann's results, they were stymied by his refusal to identify the data he used and to clarify key steps in his calculations. Using what little data were available, they found errors in Mann's work that invalidated his conclusions.

In 2006, the U.S. National Research Council investigated the issue and concluded that Mann's study failed key tests of statistical validity. Mann's conclusions were also deemed insupportable by an expert panel convened at the request of the U.S. Congress and led by Edward Wegman, professor of statistics at George Mason University and chairman of the National Academy of Sciences Committee on Applied and Theoretical Statistics.

"Publicly-traded companies are required by law to ensure transparency and veracity in all their financial reports. Yet the same stipulations don't apply to academic research or academic journals, even when large amounts of public money are at stake," McKitrick said.

"Disclosure of data and code for the purpose of permitting independent replication in no way intrudes on or imperils academic freedom. Instead, it should be seen as essential to good scientific practice, as well as a contribution to better public decision making."


