Bringing Truth to the Internet

Efforts to treat individual disinformation outbreaks, rather than the underlying systemic design flaws, are doomed to fail. Here’s what we need.

By Karen Kornbluh and Ellen P. Goodman

The first volume of Special Counsel Robert Mueller’s report notes that “sweeping” and “systematic” social media disinformation was a key element of Russian interference in the 2016 election. No sooner were Mueller’s findings public than Twitter suspended a host of bots that had been promoting a “Russiagate hoax.”

Since at least 2016, conspiracy theories like Pizzagate and QAnon have flourished online and bled into mainstream debate. Earlier this year, a British member of Parliament called social media companies “accessories to radicalization” for their role in hosting and amplifying radical hate groups, after the New Zealand mosque shooter cited those groups and sought to fuel further radicalization. In Myanmar, anti-Rohingya forces used Facebook to spread rumors that spurred ethnic cleansing, according to a UN special rapporteur. These platforms are vulnerable to those who aim to prey on intolerance, peer pressure, and social disaffection. Our democracies are being compromised. They work only if the information ecosystem has integrity—if it privileges truth and channels difference into nonviolent discourse. But the ecosystem is increasingly polluted.

Around the world, a growing sense of urgency about the need to address online radicalization is leading countries to embrace ever more draconian solutions: After the Easter bombings in Sri Lanka, the government shut down access to Facebook, WhatsApp, and other social media platforms. And a number of countries are considering adopting laws requiring social media companies to remove unlawful hate speech or face hefty penalties. According to Freedom House, “In the past year, at least 17 countries approved or proposed laws that would restrict online media in the name of fighting ‘fake news’ and online manipulation.”

The flaw with these censorious remedies is this: They focus on the content that the user sees—hate speech, violent videos, conspiracy theories—and not on the structural characteristics of social media design that create vulnerabilities. Content moderation requirements that cannot scale are not only doomed to be ineffective exercises in whack-a-mole, but they also create free expression concerns, by turning either governments or platforms into arbiters of acceptable speech. In some countries, such as Saudi Arabia, content moderation has become justification for shutting down dissident speech.

When countries pressure platforms to root out vaguely defined harmful content and disregard the design vulnerabilities that promote that content’s amplification, they are treating a symptom and ignoring the disease. The question isn’t “How do we moderate?” Instead, it is “How do we promote design change that optimizes for citizen control, transparency, and privacy online?”—exactly the values that the early Internet promised to embody.

This approach to combating disinformation starts with the understanding that digital platforms are not neutral, tamper-proof pipes. They are ad-delivery platforms constructed to reward engagement. Without the standards that have traditionally applied to other media, they can too easily become turnkey vehicles for disinformation campaigns—influence operations that hide their coordination with, and the provenance of, their financial and other backers; that rely on disinformation; and that otherwise work to deceive users in pursuit of the manipulator’s purposes—whether set by foreign governments, secretive domestic political actors, or financial fraudsters.

Remedies that focus on design fixes, instead of discrete content decisions, would be more effective at reducing dangerous disinformation, and would have the further benefit of enhancing free expression, instead of threatening it.

However, as became clear when Facebook CEO Mark Zuckerberg testified before the Senate, Congress lacks the tech savvy to design these remedies and could probably never act nimbly enough to fine-tune laws appropriately. Top-down bureaucratic mandates don’t make sense in a fast-paced industry where regulatory capture is a real danger. What’s needed is a new, dedicated expert agency that can work with stakeholders to craft dynamic rules, borrowing from software design’s agile development and lean enterprise concepts. This agency would work with civil society, engineers, entrepreneurs, and platforms to push back against network tendencies toward centralization, secrecy, and surveillance—and instead promote an Internet supportive of democratic values.

A System Vulnerable to Abuse…

The Internet is us. Americans now spend an average of 24 hours a week online, and 40 percent think the Internet plays an integral role in American politics, according to data from the USC Annenberg School for Communication and Journalism. Pew Research data show that almost as many people now get their news from the Internet as from television. Internet memes and threads often set the agenda for mainstream media, magnifying their importance. Internet disinformation therefore has unprecedented reach.

Of course, disinformation campaigns are not new, nor do they work only online. But today’s online environment bypasses the checks that would once have signaled caution, including sponsor and author disclosures and clear demarcations between advertising and content. Ads (known online as “inorganic” content) hide in plain sight.

Social media offers a new set of tools for propagandists to simulate “organic” grassroots viral conversation, thereby appropriating an unearned authenticity. The Oxford Internet Institute, for example, identified a firm offering “false amplifiers” for hire—fake identities to help a client mount a whisper campaign about the effectiveness of a migraine drug. Campaigns can also deploy “synthetic content,” including video and audio forgeries (“deepfakes”), along with bots, trolls, fake accounts, fake social media pages, fake groups, and click farms, allowing them to create a false impression of grassroots consensus and even to spark in-person protests half a world away. All of these tools are rendered more effective by the exploitation of personal data for precision targeting. As if browser, social media, and credit card data weren’t enough, campaigns also employ “dark patterns,” user interface designs that coax users into sharing even more personal information, thereby accelerating the transmission of targeted disinformation.

In presenting content to users, the social media algorithms and tools do not distinguish outlets that follow journalistic standards of transparency and fact-checking from conspiracy-mongering blogs. Users have few signals about the integrity of a “news” source, especially when it is a disinformation engine posing as traditional news. A recent Knight Foundation study conducted by Graphika, a social media research firm specializing in disinformation, and George Washington University researchers found that ten predominantly “fake news” and conspiracy outlets were responsible for 65 percent of tweets linking to such stories, and 50 were responsible for 89 percent.

Far from demoting low-quality information, the platforms’ designs tend to reward it. The algorithms responsible for newsfeeds, recommendations, and search results are trained on a dizzying range of personal data points to optimize engagement and time online (in order to serve users more ads). Studies find that fear, anger, and tribalism draw heavy engagement. A false headline about a Toronto terrorist attack claiming the perpetrator was “angry and Middle Eastern” generated far more clicks than an accurate headline describing him as white. To trick the algorithms, campaigns use ads, bots, and trolls to promote content. As a result, YouTube users who click on nonpartisan videos can be shown extremist content within six clicks. And repeated exposure to disinformation makes it seem more credible.

The digital platforms acknowledge the presence of these coordinated influence campaigns and periodically purge their websites and apps of what they call “coordinated inauthentic” activity, especially if it is related to election interference or terrorist content. But this is often too little, too late.

To be sure, those seeking to persuade have long tried to launder information to bypass cognitive defenses. A seemingly disinterested and credible source—a news anchor or headline writer, an academic expert, grassroots support, or an influencer—is often better able to make a marketer’s case than the marketer could in its own name. Advertising transparency regulations and standards were designed to protect consumers and citizens from this kind of legerdemain and manipulation. Back in the 1950s in the United States, for example, broadcasting’s “payola” rules required sponsorship disclosure after disc jockeys were caught taking money under the table to play songs. Political ads must carry labels and be reported to the public—even Supreme Court Justice Antonin Scalia thought transparency in political advertising was needed.

Legacy media have a rich history of public accountability—both external and self-imposed. From the 1934 Communications Act, to the Hutchins Commission on Freedom of the Press after World War II, to the 1967 Public Broadcasting Act, structural interventions and rules have been developed to make media serve democracy. In addition, newspapers and then broadcast and cable news programs have long imposed on themselves norms of disclosure and embedded credibility signals. These include the news/opinion distinction, mastheads that reveal ownership and management, editorial codes and standards, and rules on conflicts of interest. Regulators have imposed additional requirements on broadcasters, such as ownership reports. Indeed, one purpose of media ownership limits was to build firewalls against the wildfire spread of any single ideology or point of view.

There are no comparable rules and norms for digital content. Indeed, the signaling from legacy media now gets buried in a system vulnerable to disinformation and “look-alike” journalism that has none of the same traditional public service goals. To make matters worse, the online platforms have absorbed the ad revenue that once supported journalism. Google and Facebook together capture 60 percent of the digital advertising market. As a result of ad revenue losses, among other factors, the Pew Research Center found that newspaper newsroom employment fell by 45 percent across the United States between 2004 and 2017, with hundreds of newspapers folding, including many dailies. (When Facebook decided it should invest in local news, it found it was too late for about a third of its users, who now live in a “news desert” with virtually no local news.)

…With Grave Real-World Consequences

The vulnerabilities of the online information environment are not hypothetical or consequence-free. In recent years, we’ve seen a variety of actors exploit these vulnerabilities to destructive ends: lethal violence, foreign interference in democratic elections, false rumors that prompt public health crises, prolonged harassment, and more.

Robert Mueller’s report lays out the Russians’ widespread use of social media to spread disinformation during the 2016 election. Russia’s Internet Research Agency manipulated social media platforms to discourage African Americans from voting, to inflame fears of immigrants, and to spread disinformation about Hillary Clinton. Russia used its government-backed outlets, like RT and Sputnik, as well as fake and conspiratorial sites, to spread disinformation, amplifying stories with bots and trolls. It used ads to promote fake pages, such as Blacktivist, and organized groups of Americans to take to the streets to protest each other, in one case in front of a mosque.

Around the world, we’ve also seen how coordinated online hate speech against racial and ethnic minorities can lead to violence. Rumors circulating on WhatsApp have resulted in lynchings in India. Repressive governments in the Philippines and Cambodia have found Facebook to be a useful tool for controlling their populations. And the previous Mexican government itself deployed bots and trolls to amplify pro-government sentiment and suppress opposition speech.

White supremacists have been successful at organizing on smaller platforms, including 4chan, 8chan, and Gab, and then spreading disinformation and recruiting on larger platforms. The New Zealand mosque shooter participated in these networks and used his “manifesto” to attempt to amplify a hate-filled, racist narrative.

Consequences extend from political violence to financial fraud. The forces behind schemes like stock pumping or the promotion of counterfeit prescription drugs employ the same techniques the Russians used. BuzzFeed reported on a fraud ring that enticed potentially millions of consumers to download apps used to program a network of bots, perpetrating a multimillion-dollar ad fraud scheme.

Disinformation campaigns can mobilize rapidly. When a female scientist helped produce the first-ever image of a black hole in April 2019, she was quickly subjected to misogynist harassment: Fake Twitter accounts emerged, and YouTube videos questioning her credentials went viral. When Notre Dame caught fire just days later, anti-Muslim disinformation gained momentum via fake Twitter accounts masquerading as Fox News and CNN. These falsehoods were then amplified by influential figures like Rush Limbaugh and Glenn Beck.

Outmoded Internet Policies: Too Little Responsibility, Too Few Freedoms

Contrary to myth, the global Internet is itself born of policy choices, not just the wizardry of college dropouts working in converted garages. The engineers and entrepreneurs who designed and invested in the network of course benefited from government research grants. Even more fundamentally, U.S. communications policy opened existing networks and allowed new entrants to function as platforms where others could speak and share. The very principles that allowed these early movers to succeed—transparency, access, and network decentralization—have collapsed in the digital information ecosystem. And a new challenge of data exploitation has emerged. While the Internet has changed dramatically since those early days, the agencies and laws governing it have not.

In the mid-1990s, the U.S. government aggressively promoted competition with the existing telecommunications network, a choice that allowed the early Internet to flourish. And when Congress passed the 1996 Communications Decency Act, it included a provision—Section 230—that said certain Internet companies would be exempt from liability for most third-party content posted on or moving across their networks or platforms, thereby permitting relatively frictionless communication across the Internet.

This framework, combined with the decentralized Internet design, promoted competition, provided new avenues for sharing information, and allowed the Internet to become a vibrant platform for free expression and innovation.

These early Internet traits have diminished in recent years. As large digital platforms have grown, the Internet has lost much of the decentralization and transparency that characterized its early days. Instead, today’s major platforms are the few immensely powerful gatekeepers for much of the information that citizens consume across the globe.

To be sure, users can choose friends and Twitter follows or Google News settings or subscriptions to YouTube channels. But it is platform algorithms, which promote ads, prioritize shares, and queue up autoplay content, that control the flow of information. The platforms have acquired the audience and power once held by traditional media gatekeepers, but without the oversight or self-regulatory traditions. The online ecosystem lacks news standards and liability; the spending and transparency rules that govern political and television ads; and the diversity, localism, and ownership restrictions that until relatively recently shaped electronic media. As these standards, rules, and restrictions weakened in recent decades, radio shock jocks, televangelists, and opinion-oriented cable news networks (notably Fox News) emerged.

The greenfield online environment is especially hospitable to disinformation campaigns—it makes information laundering easier, faster, cheaper, and effective at scale. The vulnerability of the online information environment then allows for infections to spread to the offline media ecosystem. With so much of the audience and advertising share, a few platforms create a monoculture of media distribution, through which a virus can spread rapidly. Then topics that become “trending” or get picked up by influencers online wind up being covered by the more curated, trusted, and watched, broadcast, cable, or print news outlets, which then validate the claims and further spread these stories online and elsewhere.

To be sure, the major platforms are grappling with these challenges. Facebook changed its newsfeed to prioritize content from users’ friends and family and to promote more trustworthy news sources, moved to limit the spread of sites whose traffic comes disproportionately from Facebook, and is setting up an appeals body. YouTube updated its recommendation system to curb the spread of harmful messages and improve the news experience. Twitter is adopting new metrics meant to improve conversational health. The platforms have also cracked down on groups and coordinated networks spreading false information and sharpened their focus on white supremacy.

But the platforms are subject to sometimes competing pressures from shareholders, the public, and governments. Platforms are rightly concerned about being seen as abusing their gatekeeping role by limiting users’ access to content. Clearer public consensus on both the problem and the structural remedies could help platforms understand and carry out their duties with greater public accountability.

The Wrong Solutions: Status Quo Regulators and Bludgeons

Recent laws around the world imposing hefty fines on online platforms provide an overly strong incentive for companies to err on the side of caution and take down lawful and legitimate content. These rules also often lack safeguards, such as transparent oversight mechanisms or due process protections, under which removal of content can be challenged.

A number of scholars and human rights advocates have argued that policies should promote process fixes instead of content fixes. Unfortunately, today’s federal agencies are poorly equipped to do this. The Federal Trade Commission (FTC), with its duty to protect consumers and prevent anti-competitive practices, lacks the tools and authorities. Many of today’s online challenges—like disinformation—are not only threats to individual consumers, but also systemic threats to the economy and democracy. Even with respect to privacy and security, the FTC is hamstrung by the lack of a comprehensive privacy law in the United States, and the agency has limited fining and rulemaking authority.

Staffing is another shortcoming: FTC Chairman Joseph Simons has raised concern that the agency has only 40 full-time employees focused on privacy and data security matters compared to the U.K. Information Commissioner’s 500 or so employees, and the Irish Data Protection Commissioner’s 110 employees. This staff shortage is compounded by a lack of substantive technical know-how: The FTC currently has just five full-time technologists on staff, and no chief technologist.

FTC Chairman Simons has suggested replicating the limited but useful rulemaking authority enabled by the Children’s Online Privacy Protection Act across the FTC. But a patchwork of fixes isn’t an adequate solution. Instead, we need a vastly augmented FTC—or really, a new agency entirely—equipped for the complex challenges presented by today’s sprawling digital platforms.

The Right Way Forward: New Authority and Expertise

As became clear during Mark Zuckerberg’s Senate testimony, Congress needs expert help in confronting the new challenges posed by these technologies. But government on the industrial-era model—slow and susceptible to regulatory capture—can’t be the answer to twenty-first century big tech.

Instead, a new independent expert agency would push against the tendencies of the network toward centralized control, instead promoting transparency, decentralized citizen control, and privacy—the very values the Internet originally promised to enhance. As think tank Public Knowledge proposed, a new agency would gain and share expertise with other agencies across the government in emerging technologies such as artificial intelligence—shaping their design and use with a focus on supporting democratic values.

A new Digital Democracy Agency would emulate the process design of the Consumer Financial Protection Bureau, operating with open data and open procedures. It would use evidence-based methods developed with outside experts and through its own research; conduct citizen and user education; enforce laws; and conduct rulemakings. It would continually evaluate its rules—borrowing from software design’s agile development concepts and lean enterprise principles. Like the FTC and the Federal Communications Commission (FCC), it would be independent of the executive branch. Indeed, the agency itself might be situated within the FTC if that agency’s organic statute were changed. The main thing is that we need a new policy shop with new tools.

The new agency’s mandate would be to protect consumers, citizens, and democratic processes through three means: enhanced transparency, user control, and privacy.

To enhance transparency, it would require digital platform companies over a certain size to place information about ads—who bought them, whom they targeted, how much they cost, and what they contained—in a “public file,” just as broadcasters and cable companies are required to do, and as the bipartisan Honest Ads Act would require for political ads. This should be an easily sortable and searchable database. (Facebook has created an early version of such a tool.) Large platforms should be required to implement Know Your Customer procedures, similar to those used by banks, to ensure that advertisers are in fact giving the company accurate information, and the database should name the funders behind dark money groups rather than their opaque corporate names. As the Oxford Internet Institute’s Philip Howard has argued, this database should include all ads, not just political ones—both to ensure that disguised political ads don’t slip through the cracks (all too common, according to researchers) and because disinformation campaigns about health, scientific findings, stock prices, and a range of public policy issues also warrant public awareness.
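To make the idea of a sortable, searchable public file concrete, here is a minimal sketch of what a single disclosure record might contain. The field names and values are illustrative assumptions, not the format of any existing platform tool or of the Honest Ads Act.

```python
from dataclasses import dataclass, asdict
from datetime import date
import json

# Hypothetical schema for one entry in a platform's public ad file.
# Field names are assumptions for illustration, not an existing standard.
@dataclass
class AdDisclosure:
    ad_id: str            # platform-assigned identifier
    buyer: str            # verified purchaser (per Know Your Customer checks)
    ultimate_funder: str  # named funder, not an opaque shell entity
    spend_usd: float      # amount paid for the placement
    target_audience: dict # targeting criteria as disclosed
    content_url: str      # archived copy of the ad creative
    first_shown: date
    last_shown: date

record = AdDisclosure(
    ad_id="2019-000123",
    buyer="Example Media LLC",
    ultimate_funder="Example Advocacy Fund",
    spend_usd=12500.00,
    target_audience={"age": "45-65", "region": "US-WI", "interests": ["veterans"]},
    content_url="https://archive.example.org/ads/2019-000123",
    first_shown=date(2019, 3, 1),
    last_shown=date(2019, 3, 14),
)

# A public file could expose such records as machine-readable JSON,
# which makes sorting and searching by buyer, funder, or spend trivial.
print(json.dumps(asdict(record), default=str, indent=2))
```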

This new agency would also require these large platforms to be transparent about non-private user activity online by providing independent researchers access to anonymized data. Facebook has agreed to this in concept—announcing it would make data available through the Social Science Research Council. The agency would also develop transparency requirements for companies using AI—a safeguard that would allow users to request explanations and even human review of decisions made by machines—and, in the protected areas of housing, employment, and credit, it could audit algorithms for discrimination as needed.

Large platforms would also be required to offer more transparency about bot activity—reporting on the total number of fake accounts on their platforms—and to ensure that their real name policy (Facebook), public verification system (Twitter), or verification badge (YouTube) is not deceptive, by verifying that accounts actually reflect the identities they claim and that they comply with the platform’s terms of service. The new regulator could also require platforms of a certain size to be far more specific and transparent about rules for taking down content, to build an accountable appeals process, and to provide useful data on enforcement of their terms of service. All of these measures would help users understand who is speaking to them and for what purpose.

To promote consumer freedom, the agency could require platforms to allow users to shape their own newsfeeds and recommendations (e.g., by reverse chronological order or other variables, just as shoppers on an e-commerce site can sort shoes by price, color, or heel height). Users should be able to review the effect of the platform’s content filters and use third-party applications to tailor their experience as they choose.
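As a rough illustration of that kind of user control, the sketch below sorts a feed by a variable the user selects. The Post structure and the ordering options are assumptions made for this example, not any platform’s actual interface or API.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Callable, Dict, List

# Illustrative, simplified representation of a feed item.
@dataclass
class Post:
    author: str
    posted_at: datetime
    engagement_score: float  # stand-in for the platform's own ranking signal

# Orderings the user could pick from; hypothetical names.
ORDERINGS: Dict[str, Callable[[Post], object]] = {
    "reverse_chronological": lambda p: p.posted_at,
    "engagement": lambda p: p.engagement_score,
}

def order_feed(posts: List[Post], choice: str) -> List[Post]:
    """Return the feed sorted by the user's chosen variable, newest or highest first."""
    return sorted(posts, key=ORDERINGS[choice], reverse=True)

feed = [
    Post("local_paper", datetime(2019, 5, 1, 9, 0), engagement_score=2.1),
    Post("viral_page", datetime(2019, 4, 30, 22, 0), engagement_score=9.7),
]

# The same feed, ordered by the user's preference rather than the platform's.
for post in order_feed(feed, "reverse_chronological"):
    print(post.author, post.posted_at.isoformat())
```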

To provide users more control, the agency should begin with a study of the impact of manipulative tools and design elements, including the microtargeting of behaviorally tested content at citizens. It could also encourage, as Senators Mark Warner and Deb Fischer propose in a recent bill on “dark patterns” (deceptive user interfaces that in effect compel users to consent to sharing their personal information), the creation of a private industry standards group to develop best practices for user interface design. Just as the FTC helps enforce best practices developed by other industry associations, this new agency would act as a regulatory backstop, so that large companies that do not follow the best practices would be subject to civil enforcement. The Warner-Fischer bill would also require informed user consent before segmenting users into behavioral experiments, along with routine disclosures from large online operators about any behavioral or psychological experiments. Additionally, the bill would require large online operators to create internal Independent Review Boards to oversee these practices.

To provide users access to truthful information, there remains the issue of how best to promote and support journalism. From the earliest days of broadcasting, structural choices served to create markets for information. Local broadcast stations received spectrum allocations even though it would have been more efficient to have fewer and more powerful regional or even national stations. Those allocations, in addition to early requirements that stations produce local content, effectively created markets for local news. When concerns grew in the mid-1960s that broadcast television threatened local news and education, Congress funded the Corporation for Public Broadcasting (CPB) to address the supply of informative and educational content. To ensure it reached viewers, spectrum was made available for local PBS stations (which cable services were required to air). The cable industry followed a little more than a decade later with C-SPAN.

The threats to independent news have led to proposals for new public funding to support journalism, drawn from a tax or from digital platform revenue sharing. If there were enough political will, any such funds could be directed to CPB or to another independent nonprofit, or even to consumers as vouchers to support the news organizations of their choice. The Digital Democracy Agency could serve as facilitator. It would be messy to determine eligibility for such grants, but messy is not impossible. Eligibility could be limited to outlets that follow journalistic codes of practice (e.g., transparency, corrections, separating news from opinion), possibly relying on organizations such as the Trust Project and NewsGuard, which are developing indicators of trustworthiness. New Jersey recently committed $5 million to a nonprofit news incubator for local journalism, partnering with universities to provide expertise and an apolitical interface. Funds could also be used to highlight and make available government data, government-funded scientific research, civic information (such as election information), and data analytics tools. The agency could also oversee a media literacy fund—a form of inoculation against disinformation—modeled on successes in countries like Finland.

Whatever the new regulator attempts, coordinated and manipulative conduct will persist. Too many violent white supremacist networks remain online, though large online platforms now work with intelligence agencies to take down ISIS and Al Qaeda networks. None of the reforms suggested here will eliminate online harassment, incitement, or hate speech.

A number of critics have suggested gutting Section 230 so that platforms would be liable for all unlawful content disseminated over their networks—in some cases arguing that doing so is needed to “level the playing field” so that online platforms have to bear the same editorial costs and risk of publishing as journalistic organizations do. We believe there are better ways to make platforms internalize the costs of their businesses. Section 230 is too important to the free flow of ideas online to eliminate, and its absence would simply incentivize platforms to block content aggressively so as to avoid liability.

Instead of drastic changes to Section 230, one option the agency might study is a modest claw-back of the large platforms’ immunity—making them liable for knowingly promoting unlawful content or activity that threatens or intentionally incites physical violence (or conspiracies to commit violence), clearly constitutes online harassment, or constitutes commercial fraud. These large platforms might earn a safe harbor by developing detailed, transparent, appealable practices specifically for disrupting coordinated campaigns that conduct such unlawful acts. In the case of white supremacy, this new approach would likely hamper the ability of Gab and 8chan—the websites that allow networks of white supremacists to organize—to remain online.

A new domestic terrorism law, as recommended by former Acting Assistant Attorney General Mary McCord and others, would help get the attention of both platforms and law enforcement. Even without it, the Digital Democracy Agency could encourage law enforcement and national security agencies to work with the platforms to take down coordinated campaigns conspiring to commit violence—something platforms have already done with campaigns tied to violent Islamic extremist terrorism. To address very legitimate concerns about how the platforms and government decide what counts as terrorist content, the agency could require additional transparency about removals and appeals.

For the same reason that broadcasters were subject to ownership limits and prohibited from cross-owning stations and print newspapers, concentration of media gatekeeping power is a political and economic danger. There is a robust debate about whether antitrust laws should be changed to account for the power of large platforms, given their network effects and their repositories of data. Meanwhile, the Digital Democracy Agency could use its regulatory power to address both sources of influence. It could require platforms to let users port their data to another platform or use two platforms simultaneously, reducing barriers to entry and facilitating competition. The new agency could wade through the difficult questions of what constitutes a user’s data and how best to allow it to port to, or interoperate with, another platform while protecting privacy—just as the FCC did with the telecommunications companies. One possible way to do this would be to require large platforms to maintain APIs for third-party access under reasonable terms. Unlike the FTC, the new agency would have sufficient personnel expert in data forensics to audit companies for compliance with rules and consent decrees.
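As a loose sketch of what such third-party access could involve, the example below serializes a user’s data into a platform-neutral export that a mandated API might return. The names, fields, and format are entirely hypothetical and exist only to illustrate the portability idea.

```python
import json
from dataclasses import dataclass, asdict
from typing import List

# Hypothetical shape of a data-portability export; illustrative only,
# not any platform's real API or an agreed industry format.
@dataclass
class PortableProfile:
    user_id: str
    display_name: str
    contacts: List[str]  # connections the user chooses to carry along
    posts: List[dict]    # the user's own content, with timestamps

def export_for_user(profile: PortableProfile) -> str:
    """Serialize a user's data in a machine-readable, platform-neutral format."""
    return json.dumps(asdict(profile), indent=2)

profile = PortableProfile(
    user_id="u-1001",
    display_name="Example User",
    contacts=["u-2002", "u-3003"],
    posts=[{"created_at": "2019-05-01T09:00:00Z", "text": "Hello, new platform."}],
)

# A competing service could import this payload, lowering switching costs.
print(export_for_user(profile))
```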

Since the digital economy runs on personal data, privacy protections will be essential to any kind of structural change. States, beginning with California, are writing new privacy laws modeled in part on Europe’s General Data Protection Regulation (GDPR), and Congress will eventually follow suit. One important piece of GDPR that should be incorporated into new law, and that the new agency could be given authority to enforce, is its treatment of political, philosophical, and psychological personal data (including psychological inferences) as sensitive data requiring opt-in consent for each use. When specific opt-in consent is required to use data on individuals’ psychology and political views, it becomes difficult to microtarget them with political disinformation. Obtaining consent under false pretenses should itself be a serious violation. Given that AI, facial recognition, and the Internet of Things can all be used to manipulate and corrupt information feeds, the new agency should also study what additional safeguards beyond notice and consent might be needed to reduce vulnerability to the compiling, retention, sharing, and exploitation of personal data.

Although many of the above steps would help protect against the threat of foreign actors mounting coordinated disinformation campaigns, more action will be needed to support enhanced national security infrastructure. It will be essential for the new disinformation experts to coordinate closely with national security and intelligence agencies. This can be done by restoring the National Security Council’s cybersecurity coordinator position and incorporating foreign information operations into the mission of that office. Or, as the Alliance for Securing Democracy has proposed, “The White House should appoint a Counter Foreign Interference Coordinator—as our Australian partners have done—and establish a Hybrid Threat Center within the Office of the Director of National Intelligence.” At the international level, the new Digital Democracy Agency should coordinate with its foreign counterparts and promote its approach of addressing structural design vulnerabilities to disinformation through multilateral organizations and provide technical assistance through the World Bank.

Today, U.S. leadership is largely absent as the Internet is increasingly weaponized to undermine democratic values. Citizens have few tools to evaluate a product’s security, privacy, and transparency, leaving both them and our democracy exposed.

It’s time for Washington to break out of the false choice between the status quo and aggressive content moderation to restore user control, transparency, and privacy—in a way that strengthens democratic values, especially free expression.

Karen Kornbluh is Senior Fellow and Director, Digital Innovation and Democracy Initiative at the German Marshall Fund and a board member of the U.S. Agency for Global Media.

Ellen P. Goodman is a professor at Rutgers Law School and Non-resident Senior Fellow at the German Marshall Fund.
