Symposium | Democracy and Technology: Allies or Enemies?

A Democratic Digital Age?

By Alexandra Givens


A young woman in Iran sends a video to a journalist waiting safely outside the country. The grainy clip shows a student being beaten by Iranian authorities—one of countless episodes in the violent clampdown on protests sparked by the death of 22-year-old Mahsa Amini in September 2022, after she was arrested by the morality police for allegedly wearing the hijab improperly. The sender urges the reporter to keep the video safe, knowing she must delete it from her phone. She faces danger not just for filming the video, but also for storing it or sending it via private message.

Iranians are leading the longest-running anti-government protests in the country since 1979. Today, however, they face unparalleled threats from tech-enabled surveillance: So-called spyware allows government authorities to intercept private messages, track people’s past and current locations, and use metadata to monitor and analyze who is speaking with whom. After attending protests, demonstrators have received text messages from local police cautioning them not to show up at another demonstration. Organizers have described the authorities hacking into their social media accounts to read their private messages and entrap allies.

Iranian officials are also embracing new commercial surveillance capabilities to enforce their strict morality laws. In September, Iran announced it would use face recognition to detect and identify women who were not “correctly” wearing the hijab. A member of parliament explained that women who dress improperly would receive text message warnings, followed by penalties such as their bank accounts being blocked. In a country where citizens are required to use biometric national identity cards to receive pensions and food rations or to access the domestic internet, the government’s threats are all too credible—and underscore protestors’ courage in resisting.

The challenges posed by new digital technologies are not limited to Iran, nor are they unique to autocratic regimes. Around the world, democratic governments are grappling with the seismic changes digital technologies have created. In the United States, new technologies have challenged long-held assumptions about democratic governance, civic engagement, civil rights, and civil liberties. This Democracy symposium takes on these issues, examining the myriad ways that technology is transforming democracy today.

Over the past decade and a half, the rise of social media, other online platforms, and sophisticated data analytics has revolutionized core aspects of our society. For many of us, online services are the primary way we access information, communicate with others, express ourselves, and find community. Core economic activities—such as applying for a job, finding a home, applying for credit, or accessing government benefits—are now also mediated by online platforms. New technologies have opened fresh pathways for individuals to communicate, social movements to organize, and historically disenfranchised groups to make themselves known. But they have also injected new vulnerabilities into civic discourse, as platforms give space for actors to spread mis- and disinformation, and data-driven algorithms deepen filter bubbles, reward “engagement” over information value, and entrench existing social divides.

The essays in this symposium examine these issues and other areas where technology is changing how people engage with the world around them, and the resulting effects on democracy. The topics span many aspects of civic and private life: from online harassment, to mis- and disinformation in times of crisis, to the impact of social media on young people’s mental health, to data practices that deepen social and economic inequalities. Throughout, the essays confront a central question: how to address the ways in which technologies can destabilize democracy while preserving their benefits—and, crucially, staying true to democratic values. This question looms large for governments and civil society around the world.

In modern times, technology mediates access to information, self-expression, and the communities we discover and build. The growth of websites and social media platforms has created new ways for people to raise their voices and reach others. This combination of new access and the potential for wide and rapid reach has allowed organic social movements to gain powerful momentum, from #BlackLivesMatter to #MeToo. It has spawned new ways for people to discover and build community around shared interests and experiences. The openness of digital communication has been particularly important and validating for marginalized groups that historically have not been represented in mainstream media. Young people exploring their sexual identity can now find resources, advice, and community by browsing privately online, no matter their location or family circumstances. Marginalized communities can connect in spaces that foster solidarity and movement-building. The potential for expanded individual freedom and robust democracy is enormous as new communities and voices gain power, visibility, and reach.

Despite this promise, it is beyond dispute that the proliferation of online communications channels can be destabilizing and exacerbate real harms. Social media has created spaces for vulnerable communities, but it has also enabled extremists to reach massive new audiences. Algorithms optimized for engagement are largely agnostic to a speaker’s message so long as the content garners impressions, creating an environment that rewards shock value over information value. Recommender systems that show users content based on their previous views can create self-reinforcing filter bubbles that fragment users’ understanding of the world. In a social media timeline, a well-researched news article about a political candidate appears equally alongside a sensationalist screed of made-up facts. The timeline’s flattening force combines all manner of voices—soft and loud, measured and angry, human and bot—in an ever-growing cacophony.
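To make that flattening dynamic concrete, consider a minimal sketch in Python of a feed ranker that scores posts purely by predicted engagement. This is a hypothetical toy, not any platform's actual system; the `Post` fields and the numbers below are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    predicted_engagement: float  # clicks/shares a model expects the post to earn
    information_value: float     # accuracy and depth; never consulted below

def rank_feed(posts: list[Post]) -> list[Post]:
    # Engagement-only ranking: information_value plays no role in the sort,
    # so fabrication competes on equal footing with careful reporting.
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

feed = rank_feed([
    Post("Well-researched candidate profile", 0.02, 0.90),
    Post("Sensationalist screed of made-up facts", 0.31, 0.05),
])
for post in feed:
    print(post.title)
# The screed prints first: the objective function never asks what is true.
```

An engagement-based objective is not malicious by design; it is simply indifferent to truth, and that indifference is what lets shock value outrank information value.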

This environment can also impact democratic participation. Just as online platforms can connect minorities and marginalized communities with support and resources, so too can platforms subject those individuals to abuse. Recent research by my organization, the Center for Democracy & Technology, showed that women of color running for office in the 2020 U.S. general election were twice as likely as other candidates to be the subject of mis- and disinformation. These same women were more than four times more likely to be subjected to violent online abuse. Describing their experiences, candidates admitted that these attacks had caused them to rethink their political candidacies. Online harassment does not just harm individuals: It can distort public discourse and political representation too.

In recent years, digital harms have become a major focus of U.S. legislative and judicial attention. In the absence of conclusive federal action, states are now also engaging on privacy issues and beyond. Current efforts include legislation to regulate the design of digital products used by minors, proposals to restrict the use of algorithmic tools in hiring and employment relationships, and attempts to rein in digital surveillance ecosystems by regulating so-called “data brokers.” These brokers could be companies you’ve heard of, like the credit reporting company Experian, or many others you haven’t, all of which gather data from people’s activities across the web to create detailed profiles that are sold to advertisers and can even be accessed by law enforcement. For the first time in 25 years, the Supreme Court is also poised to weigh in on online speech, hearing one case this term that will consider whether social media platforms can be held responsible for aiding and abetting terrorism because their users posted terrorist content on their services, and a second looking at whether platforms can be held responsible for third-party content that they algorithmically “recommend.”

These questions are often deeply complicated. Governments across the world—including the United States—have grappled with the difficulty of balancing the need to reduce harassing and harmful speech on social media and other platforms against foundational democratic beliefs in freedom of speech. How can a platform be incentivized to take down “harmful” words without removing other speech that might appear similar, but makes an important contribution to public discourse (and who decides)? What does it mean for a platform to “recommend” content in a way that should trigger responsibility for that content, when platforms sort and display hundreds of millions of uploads in a single day? The answers to these questions can involve hard trade-offs. One can abhor the ways that platforms are used to hurl abuse at people and want them to do better—while also recognizing that the #MeToo movement would not have gained such momentum if platforms faced potential liability for defamation every time they allowed a user to post an allegation.

Questions involving the privacy of digital communications also raise hard tensions. End-to-end encryption, a set of technical safeguards that protects the content of private messages from being viewed by anyone other than the sender and recipient, is a critical tool for privacy and security. People ranging from the protestors in Iran, to whistleblowers and journalists, to everyday users messaging with their doctor or bank benefit from this technology. However, critics understandably complain that encryption inhibits efforts to detect despicable communications on messaging platforms, like the sharing of child sexual abuse material. Without secure encrypted messaging, governments and companies could more readily detect such conduct—but in the process, the freedom and safety of many others would be deeply compromised.
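The core property of end-to-end encryption is easy to demonstrate. The sketch below uses PyNaCl, a Python binding to the libsodium cryptography library, to encrypt a message so that only the intended recipient can read it; it is a minimal illustration of the principle, not the full protocol (such as Signal's) that messaging apps actually deploy.

```python
# pip install pynacl
from nacl.public import PrivateKey, Box

# Each party generates a keypair; private keys never leave their device.
sender_key = PrivateKey.generate()
recipient_key = PrivateKey.generate()

# The sender encrypts with their private key and the recipient's public key.
sender_box = Box(sender_key, recipient_key.public_key)
ciphertext = sender_box.encrypt(b"Keep this video safe.")

# Any server relaying `ciphertext` sees only opaque bytes. Without one of
# the two private keys, the message content cannot be recovered.
recipient_box = Box(recipient_key, sender_key.public_key)
print(recipient_box.decrypt(ciphertext).decode())  # Keep this video safe.
```

The same math that keeps the relay server blind to a protestor's video also keeps it blind to abusive material, which is precisely the trade-off policymakers are wrestling with.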

These examples, and others that follow in this symposium, show the complexity of the governance questions raised by many new technologies in a democratic society. As regulators and advocates seek to address these concerns, the charge is to develop policies and practices that can mitigate some of the gravest harms while preserving values that are fundamental to democracy, such as individual privacy, freedom of expression, and freedom to associate.

While proposed approaches to internet and technology regulation differ wildly, some areas of consensus have emerged in recent years. One important area focuses on increased transparency into how online platforms operate, including how content moderation decisions are made. In Congress, proposals such as the Platform Accountability and Transparency Act would allow approved researchers to access information from social media platforms so they can analyze trends and understand company responses. In the European Union, the Digital Services Act creates new obligations for large platforms to be transparent about their content moderation decisions; to monitor, report on, and mitigate systemic risks; and to obtain external audit reports, among other changes. These proposals and laws aim to provide a better understanding of how platforms operate and influence democratic engagement.

Increasingly, advocates and policymakers have also rallied around the essential need to protect user privacy and combat discriminatory uses of users’ data (for example, when online housing ads are targeted to people based on data points that approximate protected characteristics such as race, age, or gender). These protections are critical in a democracy where privacy is a prerequisite for freedom of expression and association, and safeguards against discrimination are essential to a fair and just society. In 2022, Congress made substantial progress on consumer privacy legislation: The American Data Privacy and Protection Act passed through the House Energy and Commerce Committee with an overwhelming bipartisan vote (while the bill did not go further, it is expected to be reintroduced in 2023 with broad support). At the state level, lawmakers are increasingly advancing laws to protect digital privacy, with five such laws currently enacted and similar laws under consideration in over 20 other states. Meanwhile, the White House, federal agencies, legislators in the United States and Europe, and international bodies like the OECD are all focused on ways to mitigate potential harms caused by artificial intelligence.

In addition to legislative action, there has been a notable maturation in how companies approach questions of platform governance (however much work still needs to be done). Efforts in this space exploded after it was revealed that Russian influence operations exploited social media data to target Black voters and otherwise interfere in the 2016 general election, along with other high-profile incidents like the use of social media to incite genocide in Myanmar. Congressional oversight hearings, advertisers’ threats to pull their business, and general public outrage combined to make the most well-known social media platforms take heed.

In recent years, platforms such as Meta and Google have publicized their increased investment in trust and safety teams and developed elaborate rules governing permissible uses of their services. Among other efforts, the major platforms launched dedicated programs to detect the work of troll farms and other coordinated inauthentic activity, added more formal governance to their content moderation decisions, and created programs to amplify authoritative information around civic questions such as elections or public health.

Critics rightly observe that these steps have not addressed—and realistically never can address—the sheer scale of undesirable content shared online. The volume is simply too great, and the pace too fast, for any company to keep up. In addition, Elon Musk’s abrupt gutting of Twitter’s trust and safety functions proved a devastating illustration of the risks of relying on corporate decision-makers to act. Nevertheless, platforms continue to have enormous power in the daily decisions about how content is shown and disseminated online. How companies respond to this moment remains an essential trend to watch.

This is a critical moment for digital governance and democracy. The Biden Administration has called on democratic nations to present an alternative vision to authoritarianism—in President Biden’s words, to “show that democracy still works.” As the United States and other nations pursue this goal, how countries approach the governance of technology will be a key component. There is an opportunity to rise to the occasion and articulate a human rights-centered, democratic vision for the new digital age. Doing so will matter not only for our own democracy, but as a model for those fighting for human rights and democracy around the world.


Alexandra Givens is the President and CEO of the Center for Democracy & Technology, a nonprofit, nonpartisan organization focused on protecting human rights and democratic values in the digital age.
