Symposium | Democracy and Technology: Allies or Enemies?

Tech’s Risks on Non-English Platforms

By Samuel Woolley

Carla is a 20-year-old Mexican American college student living in San Antonio, Texas, who participated in a study run by my research lab. Like many people in the United States who have roots—and family and friends—in Latin America, she favors WhatsApp above all other social media platforms. The end-to-end encrypted chat application, first released in 2009 and acquired by Meta in 2014, is hugely popular among the U.S. Latino community. It’s similarly favored by Indian Americans, who regularly use it to communicate both with others living stateside and with loved ones in India. Often, the conversations Carla has on the platform are in Spanish—as are many of her interactions across different social media platforms. For Carla, WhatsApp is not only the space where she talks with peers about her daily life. It’s a place for sharing and discussing nearly everything: politics (both U.S. and Mexican), news and media, the COVID-19 pandemic, class and race, and many other subjects. She regularly encounters mis- and disinformation there, and elsewhere online, specifically tailored to Spanish-speaking audiences, such as misleading messages about how to vote shared by family, friends, and online acquaintances.

Carla isn’t alone in her wide-ranging use of WhatsApp—nor in her use of a language other than English for communicating online. According to the U.S. Census, 22 percent of families, or roughly 66 million people living in the country, speak a language other than English at home. And while English is the most studied language in the world, less than 20 percent of the global population speaks it. There are over 7,000 different languages spoken in the world today—many of which people use to communicate online.

However, conversations about the myriad problems posed by social media and other online forums focus overwhelmingly on a small handful of platforms and challenges in the English language. Bluntly, this is because the vast majority of the most powerful and most used online platforms in the world are based in the United States: Facebook, YouTube, Instagram, Twitter, Messenger, Pinterest, Reddit, and—yes—WhatsApp. Facebook, YouTube, and WhatsApp all boast over two billion active users, well outstripping competitors in other countries, including China’s TikTok, WeChat, and Weibo. U.S.-based parent companies like Meta and Alphabet have invested great resources in developing products and code for the English-language web. (This, of course, hasn’t stopped them and many others from creating online spaces across the unthinkably vast non-English-language internet.)

Unsurprisingly, when it comes to internal responses to some of the most pressing problems tied to social media—disinformation, online hate and harassment, violence and bias against minority groups—firms like Meta and Alphabet have focused almost entirely on the English-language internet. Stephanie Valencia, co-founder of Equis—an organization devoted to research on and work for the Latino community—has shared research that demonstrates just how much worse the U.S. misinformation problem online is in Spanish than in English (though she readily admits it’s pretty bad in English, too). In The Washington Post, she writes that social media companies’ negligence has led to “an entire continent of Spanish-language misinformation largely unchecked by the platforms.” She points out, too, that the platforms continue to fail in maintaining trustworthy and safe online spaces in other languages and countries beyond the Americas.

For the last three and a half years, my research team at the University of Texas at Austin has been studying the latest communication challenges around the globe. Our Propaganda Research Lab, based within UT’s Center for Media Engagement, has made a special study of multilingual, multiplatform attempts to influence public opinion through disinformation (the purposeful spread of false content), coordinated hate (organized trolling campaigns), and a variety of other information-manipulation mechanisms. We’ve done hundreds of interviews with people who build, track, and experience online propaganda campaigns across numerous countries, including Brazil, Egypt, Ethiopia, Eritrea, India, Indonesia, Lebanon, Libya, Mexico, Myanmar, the Philippines, Turkey, Ukraine, and the United States. Team members have also analyzed huge swaths of social media data, from Facebook to Gab to Telegram. During the 2020 and 2022 U.S. elections, we studied the production and consumption of digital propaganda over emergent and less studied social media platforms and among diaspora communities and communities of color. Our findings reveal the consequences of social media companies’ lack of focus on these groups and the languages they speak, consequences that range from increased distrust in news to outright fear of institutions. These takeaways line up with those of Equis and the many other research entities that continue to argue that platforms, policymakers, and everyday people must do more to combat online informational problems beyond English and across the web—not just on Facebook, Twitter, and YouTube.

Disinformation has never spread at the rate it does today. It has never been so unfettered, its promulgators so unchecked by consequences (even when their actions lead to the direst of outcomes), as we are witnessing in the social media era. Platforms like WhatsApp and YouTube allow for the computational enhancement of disinformation, a practice through which bots and organized groups of influencers widely spread false content and give it the illusion of popularity or virality. These platforms enable the massive amplification of organized hate, harassment, and disenfranchisement across the political, medical, scientific, journalistic, and broader social spheres. Governments (both democratic and authoritarian) and governmental actors (politicians, military personnel, intelligence groups, etc.) are still among the most well-resourced promulgators of these various forms of propaganda. However, lobbying groups, corporations, PACs, nongovernmental organizations, and extremist groups also leverage social media to conduct their own manipulation campaigns. Increasingly, less organized individuals and groups with allegiances to a particular sociopolitical stance are able to use digital platforms effectively as a means to coerce and control.

This problem is certainly marked in English. It has been investigated most vigorously by lawmakers in the United States, the United Kingdom, and other English-speaking countries. But the digital boosting of harmful content has always existed in other languages, in other countries, and across social media and the internet writ large. Meta and Alphabet, as global technology leaders, bear a huge proportion of the blame for allowing this problem to persist. In India, for instance, governmental and nongovernmental groups continue to stoke anti-Muslim hate on WhatsApp. The ruling Bharatiya Janata Party has a stranglehold on messaging on that platform, and this vein of propaganda has been tied to political repression, murder, and rape. Social media firms, above all others, have allowed for large-scale gaming of online metrics and content, whether due to a lack of foresight or deliberate inaction. But politicians have also failed to effectively control and regulate the space. And although in the last few years there has been a great deal of fervor in informational and political quarters about the failings of social media, that fervor has been myopically focused on the issue in English and on more public platforms. As a result, particularly in the United States, policymakers, platforms, and others have often failed to protect the most vulnerable in our society.

I’ve now been studying online propaganda and disinformation for over a decade. Many of the earliest examples of what my collaborators and I call “computational propaganda”—which is defined by attempts to manipulate communication flows over social media through the use of automation (including bots) and, consequently, control of recommendation, trending, and other curatorial algorithms—occurred in non-English-speaking countries: Ecuador, Mexico, Russia, Syria, and Ukraine. It was attempts to silence activists using automated spam and hate in the Middle East and North Africa during the Arab Spring that first tipped us off to this relatively new phenomenon. Peer-reviewed scholarship on digital influence operations in both English and other languages has been publicly available since as early as 2010. The various teams I’ve worked on—and others researching the same problem—have shared our findings directly with social media platforms and governments since 2013. Why, then, has it taken so long for these institutions to react? And why does their reaction continue to be so constrained?

In some instances, employees at tech’s biggest companies were blindsided by the manipulative use of the platforms they maintain. More often, though, firms have been more concerned with the monetary gains and losses tied to changes in how they operate—particularly in relation to advertising, which is their core business. The issue of digital manipulation is also a complicated one. It is difficult to combat. It runs up against battles over free speech versus the rights to privacy and safety. It exacerbates polarization, tribalism, and division. It is most studied, understood, and discussed as a problem occurring in public spaces online, like Twitter, which, more than most platforms, made it possible for researchers to access its data. It is under-researched, misunderstood, and overlooked on platforms (like YouTube and Facebook) that generally do not allow outsiders meaningful access to data, as well as on those (like WhatsApp and Signal) that enable completely private communication. Work by my lab and several other investigative entities is making it increasingly clear, however, that propaganda, disinformation, and coordinated hate have always existed in these obscured digital spaces—and that they’ve always happened in a multitude of languages.

My colleagues and I see the fight against online disinformation—on all platforms, in all languages—as an essential part of the quest to foster what we call “connective democracy.” Connective democracy is, as my colleagues at UT’s Center for Media Engagement write, “a means of enabling the type of democratic discourse envisioned by deliberative democracy in highly polarized political climates.” As anyone in the United States could tell you—or indeed in many of the world’s other embattled leading democracies, such as Brazil, India, Indonesia, and the United Kingdom—we live in highly polarized times. And an ever-growing body of work reveals that disinformation propels and exacerbates political polarization and other forms of division. Stanford professor Larry Diamond, one of the world’s leading authorities on democracy, recently argued that global democracy has waned from a resurgent period in the 1990s to an imperiled one today. He points to polarization as one of the leading structural issues constraining democracy in the United States and around the world. He also points to “social-media manipulation and disinformation” and social media-based inflammation of diverse problems including “economic dislocation, rising inequality, immigration pressures, [and] identity divisions.” All of these issues exist and are discussed, in the United States and abroad, in a multitude of languages. They play out across the internet—often in spaces, like WhatsApp, that are completely private by design.

The health of global communication, and global democracy, hinges on taking these issues seriously. We must create prompt, aggressive, and well-thought-out solutions across the policy, technology, and educational sectors. Fortunately, there are many remedies to the lack of research and oversight beyond English and on less mainstream platforms. The first, and most obvious, is for social media firms to ramp up efforts to combat clearly harmful content in other languages and geographic spaces. According to The New York Times, “the social media companies [say] they [have] moderated content or provided fact-checks in many languages: more than 70 languages for TikTok, and more than 60 for Meta, which owns Facebook.” Meanwhile, “YouTube said it had more than 20,000 people reviewing and removing misinformation, including in languages such as Mandarin and Spanish; TikTok said it had thousands.” Perhaps more illuminating, though, is that “the companies declined to say how many employees were doing work in languages other than English.” Research from scholars like Sarah T. Roberts and Tarleton Gillespie has revealed that the folks who do this moderation work, whether in English or Spanish, on Facebook or Twitter, are often contractors who are underpaid, under-supported, and under-trained. Less is known about the goings-on at smaller platforms, which have fewer resources for moderation and are often less scrutinized by news media and researchers. Regardless, speak to most third-party researchers in the space and they will tell you that the decline in coordinated English-language manipulation touted by most platforms is nowhere near as pronounced in other languages.

Policymakers must also act. Regulatory paralysis surrounding social media in the United States must come to an end, and quickly. The European Union continues to enact legislation that attempts to get a handle on the problem of coordinated manipulation online, as well as attendant problems like digital surveillance, the fruits of which are often sold and used for microtargeting in computational propaganda campaigns. The United States, particularly at the federal level, is very far behind. Not only do we lack clear rules and regulations on content moderation and specific issues like electoral disinformation, we don’t even have a basic online privacy law—a necessary foundation for laws related to online manipulation to succeed. In countries like Brazil, India, Mali, and Turkey—once-promising democracies recently plagued by problems caused by autocratically inclined leaders—lawmakers must step up to strengthen checks and balances aimed at controlling their heads of state, who have taken marked advantage of social media propaganda. The judiciaries in each of these countries must hold accountable leaders who have leveraged digital tools to sow disenfranchisement and targeted threats.

In the meantime, there are a number of other things social media companies and regular people can do as we fight for connective democracy. In a recent discussion for the Brookings Institution covering 50 years of research on polarization, my co-author Christian Staal Bruun Overgaard and I leverage this scholarship to argue that platforms can combat polarization by: 1) surfacing more positive contact between members of opposing political parties, 2) prioritizing content that is popular among disparate user groups, 3) using platform-based corrections to misleading content to counter misrepresentations of the problem of polarization itself, 4) designing interfaces to prioritize constructive discourse, and 5) collaborating with third-party researchers and providing them with data. And obviously, each of these things must be done beyond the English language, across the internet, and with diverse partners around the globe. Meanwhile, people around the world should push for thoughtful solutions to our digital maladies: contact politicians about relevant policies, advocate for better media and information literacy programs in schools, and vote for candidates with clear plans to respond. We can also be careful about what we share (and how quickly we share it) and practice better social media hygiene. The problem we face is huge, but by correctly identifying its scope and breadth—and by generating responses using this accurate lens—we can address it holistically rather than piece by piece.

Samuel Woolley is a writer and researcher who focuses on the ways emerging technologies are used for both democracy and control. He is an assistant professor at the University of Texas at Austin and director of the Propaganda Research Lab at UT's Center for Media Engagement.
