Fake news is still making headlines, but we’re learning more about how online misinformation works
UBC Political Science, PhD Candidate
January 18, 2018
‘Fake news’ has regularly made headlines since Donald Trump won the 2016 U.S. presidential election. In recent weeks, President Trump planned a ‘Fake News Awards’ ceremony for “the most corrupt & biased of the Mainstream Media,” France and Brazil promised to crack down on ‘fake news’, the European Commission announced that it will propose an EU-wide strategy for addressing the problem, Facebook changed its News Feed algorithm that many blamed for the spread of misinformation, and several new academic studies shed new light on the prevalence and effects of ‘fake news’.
In our new report, Digital Threats to Democratic Elections, published by the Centre for the Study of Democratic Institutions, we summarize academic research and investigative reporting on how foreign actors use ‘fake news’ and other forms of misinformation to interfere in elections. While President Trump uses the term ‘fake news’ to refer to “accurate but unflattering news items,” as Marwick and Lewis put it, we follow other scholars and journalists who use the term to describe “news articles that are intentionally and verifiably false, and could mislead readers.”
Fake news is a hot topic for researchers. This post looks at some new findings and new arguments that have appeared in the last month, while the report was being prepared for publication.
Misinformation and Public Debate
There is significant concern that we are now living in a post-fact world, which threatens the very possibility of an informed citizenry and meaningful public debate. The fear is not just that falsehoods are being produced, but that they aren’t being properly corrected. A frequently cited piece of evidence is the “backfire” effect, which occurs when someone is presented with a factual correction to an inaccurate belief and responds by doubling down on the incorrect information. If the backfire effect is a real and widespread phenomenon, the growing use of fact-checking might actually make the misinformation problem worse. In a recent Slate article, Daniel Engber delves into the surprisingly long history of the backfire effect in social science research. He argues that although people may be hesitant to accept uncomfortable truths, several new studies find little evidence that factual corrections backfire and cause people to believe more strongly in incorrect information. Engber concludes: “The end of facts is not a fact.”
Yet, even if there is no backfire effect, the alleged popularity of fake news still seems to pose an obstacle to informed public debate. Our report notes widespread concerns that these effects would be exacerbated if citizens are not exposed to true information because they are in “filter bubbles,” particularly on social media, and only see information that confirms what they already believe. A new study by Andrew Guess, Brendan Nyhan, and Jason Reifler uses people’s internet browser history to estimate that 27% of voting-age Americans visited a fake news website during the final weeks of the 2016 presidential election campaign. While they find that most people visit a variety of news websites, they note that the “‘echo chamber’ is deep […] but narrow” (p.11), with 10% of Americans accounting for roughly 60% of visits to fake news websites. Their study suggests that fake news may only be a significant part of the news diet for a small audience.
How Fake News Drives the Public Agenda
The apparently narrow appeal of fake news might appear encouraging, especially if that audience doesn’t include politicians. Recent research, however, shows that politicians are susceptible to reasoning errors, and there can be serious consequences if powerful officials believe or endorse conspiracy theories and fake news. Notable examples include Donald Trump’s promotion of conspiracies about Barack Obama’s birth certificate and National Security Advisor Michael Flynn’s retweeting of fake news stories about Hillary Clinton. A recent survey experiment finds that fact-checking recent fake news stories, such as the ‘birther’ conspiracy theory, is effective at reducing misinformation even for those who initially believe them. Unfortunately, Guess et al.’s study of browser histories reveals no cases of someone looking up a fact-check of a fake news story they had visited, and fake news readers rarely seek out fact-checks at all.
Many of the new findings support our own assessment that few people are likely to seek out fake news, but that misinformation can reach broader publics if it influences elite opinion-makers or gains coverage in the mainstream media. Social media appear to have considerable capacity to drive attention to fake news, with estimates suggesting that between 22% and 67% of fake news traffic originates with Facebook. Our report also documents the shortcomings of several of Facebook’s previous reforms, such as Disputed Flags, which the company has replaced by including fact-checked pieces as Related Articles. Critics suggest that Facebook’s recently announced algorithm change might actually drive further attention to fake news. By focusing on posts that generate high levels of user engagement, such as comments or ‘likes’, the new algorithm may increase the visibility of false but sensational stories. Facebook’s previous experiments with platform designs in countries like Bolivia and Cambodia also drove traffic away from high-quality journalism. The results of Facebook’s News Feed change, like its other experiments in our information systems, need to be continually checked by independent investigations.
Citizen Trust and Fake News
One of the key findings of our report is that the “threat of digital interference is not limited to its impact on electoral outcomes.” In their recent study, Guess et al. reach a similar conclusion, calling for more research on whether or not exposure to “dubious and inflammatory content can still undermine the quality of public debate, promote misperceptions, foster greater hostility toward political opponents, and corrode trust in government and journalism” (p.12).
Despite some optimism that citizens are receptive to facts, a new RAND Report suggests that there has been a broad process of “truth decay” in which there is greater disagreement about facts, widespread preferences for personal experience and opinion over fact, and growing distrust in sources of factual information. Recent indications that governments, like those in France and Brazil, will move to regulate speech in an attempt to address the fake news problem have themselves generated considerable concern and evoked expressions of citizen distrust.
We also note in our report that the promotion and consumption of misinformation is not equally distributed across the political spectrum, in ways that might encourage divisive conflict over facts. The recent Guess et al. study reaffirms that Americans who supported Donald Trump are more likely to visit fake news websites, and that the 10% of users who consume the most fake news also tend to access the most politically conservative news sources. This finding supports existing research showing that conservative news media are more polarized than liberal news media, and that the far-right alternative media ecosystem in the U.S. is a critical part of the fake news puzzle. The recent RAND study suggests that such polarization and cognitive biases – two features detailed in our report – are key drivers of “truth decay.”
The Bottom Line
Few citizens appear to be avid readers of fake news, although this apparently limited reach can still have significant consequences. Furthermore, multiple studies suggest that Facebook is the single largest gateway to fake news, indicating that concerns about Twitter, particularly the use of bots, are perhaps overstated. Recent studies reaffirm that fact-checking can work without backfiring, but that few people seek out fact-checks. Future research on the effectiveness of initiatives like Facebook’s project of showing fact-checks in its Related Articles feature will be important given the new evidence of Facebook’s major role in facilitating the popularity of fake news.
However, research should also attend to the underlying problems of how to motivate people to seek out fact-checks and to ensure that they have the media and digital literacy skills to evaluate information and find trustworthy answers. Our report’s initial list of vulnerabilities to fake news – a lack of digital literacy, partisan polarization, the design of social media platforms, and the difficulties involved with state regulation of social media content and advertising during campaigns – appears to remain a useful mapping of some of the key dimensions of a complex problem.