How to Talk about Conflict of Interest

Tamar Wilner

Scientists and the modern scientific process have taken quite a beating over the past few weeks. First, the New York Times reported (based on a Journal of the American Medical Association article and commentary) that in the 1960s, the sugar industry funded and reviewed studies that downplayed sucrose’s role in coronary heart disease.1,2 “They were able to derail the discussion about sugar for decades,” one study author said. Then a study by Stanford researcher John Ioannidis found that most medical meta-studies are unnecessary, misleading, or wrong.3 He also found that when an employee of a drug company served as an author, the paper was twenty-two times less likely to make negative statements about the drug.

Reading such reports can put the defenders of modern science in a difficult spot. We’re used to batting away accusations of money-driven conspiracies, such as climate scientists’ supposed profit motive. But it’s hard to deny that sometimes money and other powerful influences have a deleterious effect on the practice of science and, in turn, on public well-being. Tobacco industry influence over smoking studies is a prime example; others include inaccurate findings about the drugs Vioxx, Paxil, and Avandia and the bone protein Infuse. Conflict of interest, it seems, can be a signal of something deeply wrong, an irrelevant smear, or even the gateway to unscientific thinking.

How, then, should members of the public think about conflict of interest, especially when it comes to medical research—something that touches them on a day-to-day level? And how can science communicators help?

Or Is This Even a Problem?

The first stumbling block to public understanding is that the science community disagrees wildly about how much of a problem conflict of interest is and how we should think about it.

For example, Andrew Brown of the University of Alabama at Birmingham argues that to downgrade evidence because of its source amounts to an ad hominem attack, and devaluing a study because of its funding is an example of the genetic fallacy. Instead, the fair and logical approach is to judge the evidence itself, especially the study’s methodology.

There’s also the problem of establishing cause and effect. The sugar industry’s involvement in that 1960s research certainly wasn’t very palatable—but, as Brown points out, there’s actually no direct evidence that the industry influenced the study’s outcome.

Many scientists also emphasize that industry funding for medical research is simply unavoidable, given the low levels of government support. After peaking at $35.6 billion in 2004, National Institutes of Health (NIH) research funding then declined by almost 2 percent a year in real terms. Industry’s share of U.S. medical research funding grew from 46 percent in 1994 to about 48 percent in 2004 to 58 percent in 2012.4 So for-profit companies have long contributed a good chunk of funding, but medical researchers rely on them now more than ever.

With industry funding being so widespread, a lot of good research risks getting discredited. “It’s kind of damned if you do, damned if you don’t,” Brown told Vox Media site Eater. He says that critics have focused on his own funding and dismissed his work, failing to even look at his methods and results.

Evidence Cuts Both Ways

But advocates for tighter scrutiny say there’s good data to support their claim that on the whole, conflicted science is worse science.

A Cochrane meta-review of forty-eight papers—which themselves made comparisons between corporate-sponsored and non–corporate-sponsored research—found that industry funding led to more favorable results and conclusions.5 Another study of seventeen systematic reviews revealed that of those studies that disclosed a financial conflict of interest, 83 percent found no association between sugary drink consumption and weight gain or obesity. But of the studies without such disclosures, the opposite was true: 83 percent found that sugary drinks were a risk factor.6

The ubiquity of industry funding is significant, transparency advocates argue—but if anything, this reality demands more transparency rather than less.

“I think that there is so little recognition on the part of many members of the general public about the extent of conflicted research,” says Gary Schwitzer, publisher of medical journalism watchdog HealthNewsReview. His website reviews health news stories using a list of ten criteria, including, “Does the story use independent sources and identify conflicts of interest?”

Schwitzer notes that a financial relationship doesn’t necessarily imply diminished research quality. But, he says, “We need to shine the brightest of lights on disclosures that are made, and to be vigilant about those disclosures that should be made but are not. Then, if this is not too idealist, we can start to have a more informed public dialogue about where are we with the funding of research, what are we comfortable with and what are we not comfortable with.”

I’m inclined to agree. To argue that conflicts of interest are an unavoidable evil shows a paucity of imagination and ethical ambition. Most of the world’s social reforms have come about after determined people sought to challenge the “unchangeable” status quo. Reforming science funding isn’t abolishing slavery or winning women’s suffrage, but I think the same broad principle applies: throwing our hands in the air helps no one.

Principles for Science Communicators

With that background in mind, I would suggest that science communicators need a set of principles to govern how they report conflict of interest. This is only a starting point, but such principles could include the following:

Talk about the Problem
Sometimes, science communicators seem to pursue two separate and somewhat opposing conversations. One conversation is about all the problems undermining science—not only conflict of interest but reproducibility, cognitive bias, p-hacking, and so on. But the articles that are actually designed to guide the public or to bolster their belief in the scientific method often fail to acknowledge these problems.

It’s as if by talking only about the general superiority of the scientific process over other ways of knowing, we can avoid talking about some of the reasons that people might be distrustful of science in the first place. To those who know about science’s greatest scandals, such as the Tuskegee experiment, this can seem like willful obfuscation. Distrust of the medical establishment is a legacy years in the making, with particular ramifications for minorities, so we needed to start talking openly decades ago. We’ve got some catching up to do.

That said, there are responsible and irresponsible ways to talk about conflict of interest. In most cases it would be inappropriate to lead a news story with a screaming headline like “Industry-Funded Study Finds No Link between X and Y!” Such an article, taken by readers in isolation, suggests that the industry funding was unusual or that it was necessarily responsible for the finding.

At the same time, potential conflicts of interest should be reported. This might be most effective if combined with an overview of funding figures in the given area of research—so readers can see that while industry funded this study, it also funds such-and-such percent of similar studies. Whether the reader concludes from this that the funding source is therefore not a big deal or that it is a problem to be solved will depend on the reader’s own point of view.

Consider Types of Industry Involvement
Scientists taking dictation from industry before diving into their Scrooge McDuck–like pool of gold coins is an unlikely scenario. Industry involvement can take a variety of forms, ranging from unrestricted university research support to partnerships with individual researchers to establishment of research centers to fee-for-service consultation arrangements.

As Julia Belluz argues for Vox: “Often the researchers working with industry are good researchers who honestly believe their views. They may even have good reasons to work with food companies. The problem is that industry funding can elevate minority views and give them more prominence than they otherwise would have.”

In other words, industry influence is both less willful and more pernicious than strident critics might have you believe. Even if a particular study uses good methods and yields valid results, it could be part of a wider pattern of distorted emphasis on particular maladies or interventions.

Consider Wider Types of Influence
This column has focused mostly on problems of funding, but that’s far from the only form conflict of interest takes.

“Financial conflicts of interest are really important… [But] there are so many other conflicts of interest that nobody talks about or that you can’t put a finger on,” says Ivan Oransky, founder of Retraction Watch, global editorial director of MedPage Today, and himself an MD.

He outlines a likely scenario: “I’m so-and-so professor and I’ve built a whole career on the idea that x is related to y. And someone comes along and says, ‘Actually x is not related to y, q is related to y.’ Although what I should be doing if I’m thinking about knowledge and science and the greater good is saying, ‘Wow, that’s great, we know more now than we thought we did,’ instead I’m going to fight like hell to discredit that work.

“The conflicts of interest that worry me are the unspoken ones,” Oransky says.

Look at the Pattern
Are you looking at an isolated example—or has this industry funded dozens of studies over decades? Or something in between? What percentage of industry-funded findings in this field are positive compared to studies with nonprofit funding? These questions can be difficult to answer in a definitive way, but searching the relevant journals can help. There’s also an opportunity here for investigative journalists to build comprehensive databases to inform the public and guide their fellow reporters. (The medical community has discussed such databases too but without much to show for it yet.)

Look at Methods
Conflict of interest or no, one of the most powerful tools we have for evaluating scientific research is simply judging the methods used. Was the study randomized and controlled? Was the sample size large enough? Was the study period long enough to demonstrate lasting change? Were the statistical methods valid? And don’t forget the invaluable ten-point list over at HealthNewsReview.

Guidelines for the Public

Likewise, it would be useful to start developing some guidance for readers on how they might want to consider conflict of interest claims. They might want to ask themselves the following questions:

What’s the methodology?
Although people don’t often have time to dig into the methods of individual studies—and think how impossible that becomes when we talk about entire fields—we should at least encourage them to put methodology top of mind. Ask for Evidence is a great layperson’s guide.

Is this a pattern?
It’s easy to get upset about the conflict of interest in a particular study. But if there’s no evidence of a wider problem, that apparent conflict is pretty irrelevant.

What does the preponderance of evidence say?
Do you even need to read about new studies? Sometimes we get worked up about apparently flip-flopping advice—especially about nutrition—and forget that ongoing research is mostly working at the margins. The fundamentals remain the same. As Michael Pollan puts it: “Eat food, not too much, mostly plants.”

What alternative funding arrangement would satisfy you?
Some people complain about the influence of Big Pharma or Big Agra. Others say government funding influences the direction of research. If you can’t think of a single reasonable source of research funding that would satisfy you, realize that you’re advocating for no research at all.

What’s the alternative to this research? Are you trading off a conflict of interest problem for a deeper methods problem?
One of the most worrying outcomes of distrust in science is that people often turn to unproven alternative medicine. Essentially, they’re trading somewhat flawed research for either much more flimsy research or no research at all.

As Oransky neatly summarizes it: “I don’t think that just because you mistrust some aspect of modern medicine, even if it’s legitimately because there’s been fraud, that should throw you into the hands of people who don’t have any scientific credibility.”

That might be a difficult point to get across to the true believers, but it’s one where we can have a real and positive influence over the broader public. We start by being honest.


  1. Kearns, C.E., L.A. Schmidt, and S.A. Glantz. 2016. Sugar industry and coronary heart disease research: A historical analysis of internal industry documents. Journal of the American Medical Association Internal Medicine. Published online September 12.
  2. Nestle, M. 2016. Food industry funding of nutrition research: The relevance of history for current debates. Journal of the American Medical Association Internal Medicine. Published online September 12.
  3. Ioannidis, J. 2016. The mass production of redundant, misleading, and conflicted systematic reviews and meta-analyses. The Milbank Quarterly 94(3): 485–514.
  4. Moses III, H., D.H.M. Matheson, S. Cairns-Smith, et al. 2015. The anatomy of medical research: US and international comparisons. Journal of the American Medical Association 313(2): 174–189. doi:10.1001/jama.2014.15939.
  5. Lundh, A., S. Sismondo, J. Lexchin, et al. 2012. Industry sponsorship and research outcome. Cochrane Database of Systematic Reviews (12).
  6. Bes-Rastrollo, M., M.B. Schulze, M. Ruiz-Canela, et al. 2013. Financial conflicts of interest and reporting bias regarding the association between sugar-sweetened beverages and weight gain: A systematic review of systematic reviews. PLoS Med 10(12): e1001578.

Tamar Wilner

Tamar Wilner is a Dallas-based journalist, researcher, and communications graduate student specializing in the study of misinformation and science communication. She's written and consulted for the Columbia Journalism Review and the American Press Institute, and she co-created Post Facto, a game that teaches people how to fact-check stories in their social media feeds. You can find her on Twitter at @tamarwilner.