Symposium | Democracy and Technology: Allies or Enemies?

Getting Ahead of Misinformation

By Claire Wardle

I sat down to write this piece almost six years to the day since Facebook announced its Third-Party Fact-Checking Program. While we now take for granted the pop-up labels we regularly see on different platforms that provide more context on content, this Facebook partnership with four U.S.-based fact-checking organizations marked the first time a platform had intervened to warn users about the veracity of content.

The launch came five weeks after Donald Trump’s unexpected election victory, and Mark Zuckerberg’s declaration that it was “a pretty crazy idea” to suggest fake news had anything to do with the result. But Zuckerberg’s claim didn’t stand up to scrutiny as journalists provided more and more evidence that election-related misinformation had been running rampant on the platform; soon, Facebook’s hastily arranged pilot partnership with fact-checkers was born.

Misinformation is not a new problem. It did not start in 2016. But as Zuckerberg learned the hard way, technology’s ability to turbocharge the creation and dissemination of information designed to fool, persuade, and confuse is certainly new.

In the wake of Trump’s win, money flowed toward beefing up institutions of civil society, helping to create new nonprofit organizations and university centers in the space. Governments set up task forces to tackle disinformation. Platforms created “civic integrity” units. Libraries, fact-checkers, and news literacy initiatives doubled down on teaching people how to be critical consumers of information. More than 80 new initiatives and tools to fight disinformation emerged in this time period, according to a database launched by the RAND Corporation in 2019.

Despite all of this, it’s hard to argue that we’ve seen an improvement in our information ecosystem—and across many metrics, the situation has worsened. A significant reason for this has been the severity of a number of crises, all of which have exacerbated the impact of online misinformation: the COVID-19 pandemic; a number of tight elections where fears existed that disinformation could impact the result (including in the United States, France, and Brazil); the murder of George Floyd and the protests that followed; the storming of the U.S. Capitol on January 6, 2021; and, most recently, the invasion of Ukraine by Russia. Each time, the platforms were on the back foot, hastily creating policies on the fly—around COVID-19 misinformation, around politicians falsely claiming victory after an election, around whether to suspend Donald Trump’s social media accounts, and around how to address users claiming they wanted to kill Vladimir Putin.

It was hard to watch content moderation policies play out this way, and frustrating that companies hadn’t already considered these types of events. Platforms famously organize “red team” events (scenario-based planning exercises), and it was difficult to understand why they hadn’t thought about what they would do in the context of an invasion of Ukraine by Russia, or a global pandemic involving a new pathogen.

But I would argue that researchers were also behind in planning for these events. Despite many years of monitoring disinformation, no one had created a consistent, shared methodology or database to allow the many disparate groups of researchers to work together to fight such threats. Groups are siloed and competitive (a product, I would argue, of the philanthropic incentives, but that’s for another piece).

Governments and nongovernmental organizations weren’t prepared either. For example, in response to the invasion of Ukraine, the EU hastily banned the Russian news channels RT and Sputnik without transparency or justification, leaving journalists and researchers in Europe unable to access the misinformation circulating in other countries, while knowing that other disinformation channels had been left untouched. In another example, the World Health Organization still has a tweet declaring that COVID-19 is not an airborne virus, demonstrating that Twitter has no policy for deleting information that is no longer scientifically sound—even when it is published by the world’s leading authority on the virus.

Despite all the conferences that have been held and all the philanthropic dollars that have flowed into the “misinformation field,” I would argue the field has seen twin failures: a failure to take the long view, fueled by a constant desire for a quick fix (“if only we could change Section 230”), and a failure to adequately prepare for crises where harmful information would wreak havoc.

Recent events have shown us what is at stake in terms of the harms of online information, but we need to stop lurching from crisis to crisis. We must build a response that means institutions, platforms, and users are ready for the next emergency.

The ad hoc ecosystem that has sprung up to tackle harmful information is not adequate for the scale of this global problem. The funding is piecemeal and in no way sufficient for tackling the challenge. The field is siloed, organized by nation state or topic (elections, health, climate), whereas those who aim to pollute and influence our information landscapes (whether that’s people simply looking to profit or dark PR firms working on behalf of state actors) seek opportunities on any number of topics and are highly organized and networked.

What we need to see is a properly resourced international body that can support the necessary building of infrastructure, coordinate scenario planning, and fund resilience programs so that no one is caught off guard when we face our next crisis-fueled onslaught of misinformation. This new entity would be global in scope and would be funded by a central pool of funds collected from governments, philanthropic organizations, and the platforms themselves.

Here are some suggestions for the types of activities a body like this could lead:

Develop widely accessible monitoring tools

With the new data access provisions arriving under the European Union’s Digital Services Act, we need independent funding to build an accessible tool that will allow the systematic monitoring of public posts on major platforms. This is made even more important as CrowdTangle (a Meta-owned platform that allows researchers to see what’s being shared on Facebook and Instagram) becomes less powerful and looks likely to be shuttered soon.

We also need to build a database for multiple research groups to collect and share examples of harmful information. While groups spend countless hours online trying to identify and verify misinformation, the examples, influence operations, and case studies they gather are stored in Slack channels, Google Docs, and other rudimentary storage systems. Working in silos like this means that we are less able to study misinformation across topics and over time.

The monitoring tool and database must be open and accessible to groups working in different countries and languages. Currently, these groups are rarely able to share their research, meaning examples that cross borders are rarely connected.

The database nearest to what we need is the one used by the global network of fact-checkers who are part of Meta’s Third-Party Fact-Checking Program (which now boasts over 80 member organizations around the world, a significant increase from the four announced six years ago). This proprietary, Meta-owned database is not open to any other platforms, let alone to the researchers and civil society groups who could learn so much from the examples stored inside.
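To make the idea concrete, here is a minimal sketch of what a shared record in such a database might contain: the claim itself, where and when it was observed, and cross-references to related records. The field names and structure are my own illustrative assumptions, not a description of Meta’s database or of any existing tool.

```python
# Purely illustrative sketch of a shared record format for a cross-border
# misinformation database. Every field name here is a hypothetical assumption,
# not a description of Meta's fact-checking database or any existing system.
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional


@dataclass
class MisinfoExample:
    claim_text: str                   # the claim or narrative as observed
    language: str                     # e.g., "en" or "pt-BR", so cross-border matches are possible
    country: str                      # where the example was seen circulating
    platform: str                     # e.g., "Facebook", "Telegram", "talk radio"
    topic: str                        # e.g., "elections", "health", "climate"
    first_seen: datetime              # when a research group first logged it
    source_url: Optional[str] = None  # link to the original post, if one exists
    related_ids: list[str] = field(default_factory=list)  # cross-references to earlier records


# A record logged by one group could then be found by another group working in a
# different country or language on the same narrative.
example = MisinfoExample(
    claim_text="Gargling lemon prevents COVID-19",
    language="en",
    country="US",
    platform="Facebook",
    topic="health",
    first_seen=datetime(2020, 3, 15),
)
```

Even a schema this simple would make it possible to ask whether the same narrative is surfacing in multiple countries, on multiple platforms, or across topics over time—questions that are nearly impossible to answer when examples sit in separate Slack channels and Google Docs.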

Build a strong foundation for research

We need consistent definitions, typologies, and methodologies so that academics, policymakers, and civil society advocates can start to build a solid foundation of what works to mitigate misinformation. The field is growing, and hundreds of experiments have been run by academics (mostly in the United States, unfortunately) to test the efficacy of different interventions to slow down misinformation. However, it is difficult to compare studies and build a strong research foundation for what works when everyone is testing different things: some experiments test an intervention on rumors that gargling lemon can prevent COVID-19 while others test the intervention against conspiracy theories that Bill Gates is inserting microchips into vaccines. Some test pre-bunking interventions against climate misinformation while others test debunking interventions on political misinformation. Every researcher comes up with their own stimuli and rarely shares what they have used, making it even harder to understand what has actually been tested or to glean broader lessons.

If there were agreement on research designs and methodologies, studies could be carried out across borders relatively easily, providing a much more sophisticated understanding of how these issues impact different countries, and whether interventions can be scaled globally.

Promote rigorous evaluation of interventions

While we all agree that we need to help people cultivate resilience against harmful misinformation, there is little agreement about the best strategies for doing so. Hundreds of initiatives have sprung up around the world. Some focus on teaching people critical skills for evaluating news upon seeing it; some simply encourage people to share less; some prepare people before they see misinformation by teaching them the tactics and techniques used by people who aim to manipulate, providing a type of mental inoculation. Some focus simply on misinformation, whereas others include lessons on the ways algorithms work, how data is tracked and how users are targeted by ads, and how the political economy of news impacts journalism.

We need rigorous evaluations of different information literacy initiatives. Many have sprung up globally, but few have been evaluated, meaning it’s impossible to know which should be scaled (potentially by embedding them directly into the major platforms).

Create an ethics and transparency framework

For all this to work, we need an ethics framework, a shared set of standards for how the research should be carried out. As a researcher, is it ethical to “lurk” in WhatsApp groups or Discord servers without announcing yourself to understand information flows? Is it ethical for a nonprofit to share examples of misinformation sent in by community members with journalists, government agencies, or platforms without those members’ consent? When does “counter-messaging” against disinformation narratives move from fact-checking to an influence operation itself?

Connect online misinformation with offline spaces

While much debate centers on online misinformation, too often with a focus on Facebook and Twitter, there is an urgent need to recognize the ways in which harmful misinformation flows across the platform ecosystem (from 4chan to BitChute to YouTube to Truth Social to Gab to Twitter to Instagram to Facebook), as well as how it flows across the wider information landscape, from politicians to talk radio hosts to partisan news channels and back again. A focus solely on online misinformation ignores these different vectors and the amplification potential of rumors and falsehoods as they travel. As mentioned above, the technology for monitoring online platforms is very poor—but our ability to monitor television and radio is almost nonexistent. If we want to understand the narratives flowing across the information ecosystem, we have to understand all of it.

Support and scale community programs

One key lesson learned from the pandemic and recent elections is that top-down responses are limited. You don’t tackle misinformation on this scale with ads on the sides of buses. You tackle it through partnerships with communities. Community leaders are able to understand the rumors that are taking hold, the impacts those rumors are having, and the most effective ways to message accurate information.

There is no one-size-fits-all solution, but we need innovative strategies for scaling up community-based resilience programs that help raise awareness about the harms caused by those who are attempting to manipulate members of the community. The disinformation ecosystem thrives through participatory dynamics: for example, calls to action such as “Send us your examples of ballot fraud at your polling location,” “Send us videos of your vaccine side effects,” or “Do your own research by googling ‘chemtrails.’” These techniques make people feel a sense of urgency, and make them feel heard. If those of us who work to ensure people have access to high-quality information also want them to believe it, we need to use similar tactics to disseminate information that will help people make decisions that serve themselves, their families, and their communities. We need to rebuild trust in institutions by listening to communities’ fears and responding to their questions.

For six years, we’ve seen the majority of funders prioritize responses to known events, most commonly elections. Civil society would see an uptick in donations two months before an election with an abrupt end point the day the ballots were counted, as if misinformation only begins and ends during campaign season. No one wanted to fund critical, longer-term infrastructure, and even calls for literacy and resilience programs were often set aside in the search for perceived quicker solutions.

Hopefully we have by now learned that there are no quick fixes. We are in this for the long haul, and we need to think about the problem in two ways. First, we must create institutions that will support misinformation research, even as we continue the slow work of building awareness and resilience. And second, we must prepare for the unexpected. Rather than simply ramping up funding around elections, how can we prepare for the worst: the next invasion, the next coup, the next major climate-related event, the next global pandemic, the next genocide? We must adequately plan for all of these and more, rather than waiting for social media platforms, governments, and civil society to be caught off guard and to design hasty content moderation policies, collaborations, or initiatives in opaque Signal groups at midnight.

We can no longer say we didn’t know it was going to happen. We’ve already seen the real-world harms of misinformation. We need to be prepared, and there’s no excuse anymore.

Claire Wardle is the co-director of the Information Futures Lab at the Brown University School of Public Health. Previously, she was the co-founder and director of the nonprofit First Draft.
