Engagement with Fake News Extremely Concentrated, New Study Finds

Kendrick Frazier

Ever since the 2016 presidential election, fake news has become an increasing concern because of how fast it spreads and how corrosive it can be to democratic institutions that depend on a properly informed citizenry.

Social scientists and others have been trying to study and better understand the phenomenon. Last year, scientists from MIT published a study in Science of how stories were disseminated on Twitter from 2006 to 2017 and found that false stories spread much faster and to far more people than true ones (“Lies and False News Spread Faster …” Skeptical Inquirer, July/August 2018).

Now six researchers at Northeastern University, Harvard, and the University at Buffalo have published a study of fake news on Twitter during the U.S. presidential election (Grinberg et al., “Fake News on Twitter During the 2016 U.S. Presidential Election,” Science 363: 374–378, January 25, 2019). They found that engagement with fake news sources is extremely concentrated. Only 1 percent of individuals accounted for 80 percent of exposures to fake news sources. And 0.1 percent accounted for nearly 80 percent of fake news sources shared.

They followed a definition of fake news outlets published earlier by one of the authors, David Lazer, who conceived the new study: fake news outlets are those that have the trappings of legitimately produced news but “lack the news media’s editorial norms and processes for ensuring the accuracy and credibility of the information.”

One class of fake news sources (“black”) were a set of websites taken from preexisting lists of fake news sources prepared by fact checkers, journalists, and academics. These sites published almost exclusively fabricated stories. A second class of sites (dubbed “red”) spread falsehoods that clearly reflected a flawed editorial process. A third class (“orange”) represented cases where checkers were less certain that the falsehoods stemmed from a systematically flawed process.

The goal was to study how ordinary citizens experience misinformation on social media platforms. The researchers collected tweets sent by 16,442 accounts that were active during the 2016 presidential election. They obtained lists of their followers and followees (accounts they followed). They determined that their panel was largely reflective of voters in age, gender, race, and political affiliation.

When totaled across all panel members, 5 percent of aggregate exposures to URLs were from fake news sources. That fraction varied by day, and it increased in all categories during the final weeks of the campaign. As for sharing of content, 6.7 percent of political URLs shared by the accounts came from fake news sources.

But those numbers obscure the major finding. What the study found was that content from fake news sources was highly concentrated, both among a small number of websites and a small number of panel members. Within each of the three categories of fake news, 5 percent of sources accounted for more than 50 percent of exposures. The top seven fake news sources—all in the red and orange categories—accounted for more than 50 percent of fake news exposures.

Content was also highly concentrated among a small fraction of panel members. A mere 1 percent of the panel members consumed 80 percent of the volume from fake news sources.

The authors say these “supersharers” and “superconsumers” of fake news—those responsible for more than 80 percent of fake news sharing or exposure—dwarfed typical users both in their affinity for fake news sources and in their overall activity.

As for political affinities, the study found that fewer than 5 percent of people on the left or center ever shared any fake news content, yet 11 percent of people on the right and 21 percent of people on the extreme right did.

All in all, the vast majority of fake news exposures and shares were attributable to tiny fractions of the population.

So the good news is that the overwhelming majority of users did not share fake news. And “for the average panel member only 1.18 percent of political exposures consisted of content from fake news sources.” This seems to fit with most previous studies, the authors say.

“As in these studies, we found that the vast majority of political exposures, across all political groups, still came from popular nonfake news sources. This is reassuring in contrast to claims of political echo chambers and fake news garnering more engagement than real news during the election.”

And they say the findings suggest that fake news could be reduced by having social media platforms “discourage users from following or sharing content from the handful of established fake news sources that are most pervasive. They could also adopt policies that disincentivize frequent posting, which would be effective against flooding techniques while affecting only a small number of accounts.” The researchers also called on Twitter and other social media platforms to engage more closely with fact-checking organizations. All this could contribute to “more resiliency to misinformation campaigns during key moments of the democratic process.”


Kendrick Frazier is editor of the Skeptical Inquirer and a fellow of the American Association for the Advancement of Science. He is editor of several anthologies, including Science Under Siege: Defending Science, Exposing Pseudoscience.
