Imagine a world where the information environment is a thriving hub of diverse ideas, where information flows freely but responsibly, and where digital platforms prioritize the public good, not just profit. Virtual spaces would be designed to bring people from different political, ideological, or cultural backgrounds closer together. Trusted news and information would be the standard, while false information, extreme voices, and conspiracy theories would be pushed to the fringes of the internet, where only a few would ever see them. In this world, tech platforms would be open and transparent, and they would be held accountable if their products caused real-world harm to individuals, communities, or society. Social media platforms, with the help of AI tools and thoughtful human designs, could even make our democracy better. This world is possible.
In 1989, the Berlin Wall came down, marking the beginning of the end of the Soviet Union and the Cold War. That same year, another momentous event happened that would go on to shape humanity perhaps even more than the collapse of the Soviet Union: The World Wide Web was born. A few years later, Francis Fukuyama declared that the ideological triumph of liberal democracy had brought about “The End of History.” Democratic leaders around the world believed that these political and technological events would lead to the free flow of information and usher in an unending democratic spring. At the dawn of this new era, democracy would spread like wildfire and eventually flourish everywhere.
At first, this seemed true. Many of the post-Soviet countries, even Russia itself, undertook a new course toward free markets and democracy. As the internet expanded and social media came onto the scene, hope grew even brighter. In the early 2000s, it seemed like new technologies, easier access to the internet, and the free flow of information would change everything for the better. The pinnacle moment came in 2010 with the Arab Spring. Many believed that their hopes for a freer and more democratic world had finally come to fruition in even the most authoritarian corners of the globe.
That moment was short-lived. Democracy has since receded or collapsed in every country that made democratic gains during the Arab Spring. Moreover, the leading authoritarian countries, like China and Russia, have chosen to play by their own rules entirely: They don’t permit information to flow freely to their people in the way that techno-optimists envisioned.
And as we fast-forward to the present day, we can see that technology itself, once the great hope for spreading democracy, has become an existential threat to its survival. At the center of that threat is a business model that is not conducive to democracy—a model that persists thanks to the failure, particularly in the United States, to hold the tech titans of Silicon Valley accountable. Before it is too late to change course, we need to rewrite the script and develop a new technological paradigm in the United States and around the world.
An Industry That Stands Alone
In the earliest days of the automobile industry, harm was accepted as the price of societal progress, and safety took a backseat to corporate profits. In his landmark 1965 book Unsafe at Any Speed, Ralph Nader somberly noted, “For over half a century the automobile has brought death, injury, and the most inestimable sorrow and deprivation to millions of people.” Less than a year after the book was published, Congress passed legislation to create the federal agency that became the National Highway Traffic Safety Administration (NHTSA), with a mission to save lives, prevent injuries, and reduce crashes. There has been a 70 percent reduction in deaths per motor vehicle on the road since NHTSA was established in 1970.
The real-world harms caused by social media and technology are well documented: an epidemic of teen mental illness; skyrocketing teen suicide rates due to cyberbullying; sextortion; viral challenges (e.g., the choking challenge on TikTok); pills spiked with fentanyl, which drug dealers sell online and through social media; and more. Just as with automobiles in the 1960s, this is a nationwide crisis that necessitates federal action.
Online platforms gobble up all of our data—everything from browsing history to email addresses and online purchases—and build out robust “social graphs,” or profiles, of each person and their network. This intimate knowledge of users is coupled with the most sophisticated psychological tools in human history: push notifications, the endless scroll, and the “like” button. People get stuck in the vortex of these platforms for hours on end, with the virtual reality of their online lives becoming more real and important to them than their offline families and communities.
Because they put engagement at the center of their business model, the platforms use their algorithms to promote whatever hooks our attention. Often, that’s the most extreme content. We are selectively exposed to material that makes us feel good, enrages us, or reinforces our preconceived views—all of which keeps us online. This leaves us deeper in our individual rabbit holes, with less meaningful exposure to other perspectives and much more negative views of “the other side,” which polarizes people along political and ideological lines. With more polarization among voters, elected leaders are compelled to embrace the extreme views that galvanize public opinion. They find ever less reason to compromise, and the legislative process grinds to a halt.
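To make the mechanics concrete, here is a deliberately simplified sketch, in Python, of what engagement-first ranking looks like. Every field name and weight below is hypothetical, invented for illustration rather than drawn from any platform's actual code. The structural point is what matters: nothing in the scoring function asks whether a post is true, civil, or good for the reader, only whether it is predicted to hold attention.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    predicted_clicks: float         # model-estimated probability of a click
    predicted_reshares: float       # model-estimated probability of a reshare
    predicted_dwell_seconds: float  # model-estimated time spent on the post
    outrage_score: float            # toy proxy for emotionally charged content

def engagement_score(post: Post) -> float:
    """Score a post purely by predicted engagement. Note what is absent:
    nothing here checks accuracy, civility, or the reader's well-being."""
    return (
        2.0 * post.predicted_reshares
        + 1.0 * post.predicted_clicks
        + 0.01 * post.predicted_dwell_seconds
        + 0.5 * post.outrage_score   # enraging content tends to rank well
    )

def build_feed(candidates: list[Post], k: int = 10) -> list[Post]:
    """Return the k highest-scoring posts, i.e., whatever best hooks attention."""
    return sorted(candidates, key=engagement_score, reverse=True)[:k]
```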
A second major byproduct of this business model is the spread of false information at blistering rates—largely from a few key super-spreaders in politics and the media, whose content goes viral by design. MIT researchers found in one study on false news that “[F]alsehoods were 70% more likely to be retweeted than the truth.” The rise of generative AI dramatically compounds these challenges by enabling the rapid creation and dissemination of convincing yet entirely false information—from deepfake videos to AI-generated articles. As more and more people consume their news primarily on social media, the rise of fake news and the difficulty of distinguishing truth from fiction have destabilized the foundation of democratic society: trust in the system.
Despite the demonstrable harms inflicted on society by social media, technology platforms receive special treatment like no other industry. A once arcane statute—Section 230 of the Communications Decency Act of 1996—shields platforms from liability for nearly everything that happens on their services, on the theory that they are not publishers and do not create the content themselves. Yet it is the platforms’ own product designs and failures that drive the massive spread of false information and other harms, and, because of Section 230, they have little incentive to limit harmful content.
A third byproduct is that our greatest adversaries, like China, Iran, and Russia, exploit the platforms to spread false information at scale or sow division by amplifying authentic but divisive content. While these countries tightly control their own information flows—ensuring that their populations don’t have total access to the outside world and that party lines are followed in public—they weaponize technology, particularly social media, against Americans and free peoples all around the world. In short, the tools that the techno-optimists of the 1990s thought would spread democracy across the globe are instead leading to democracy’s undoing.
Technology companies, like any other business, can’t be blamed for wanting to make a profit. But it is the government’s job to protect our citizens and our sacred values—like democracy. As Senator Lindsey Graham said last year at a Senate hearing on social media:
The American consumer is virtually unprotected from the adverse effects of social media…. How do you protect the consumer? Well, you have regulatory agencies that protect our food and our health in general. In this space there are none. You have statutory schemes to protect the consumer from abuse; in this space, there are none. You can always go to court in America if you feel like you’ve been wronged, except here.
Social media is truly an industry that stands alone. And the stakes are as high as they can possibly be—for our kids, for our national security, and for American democracy itself.
To protect both public health and democracy, the governance of the internet must be reimagined.
An Information Environment for Democracy
Change won’t be easy. The tech titans oversee the wealthiest companies in human history and are willing to use their deep war chests to fight change at every step. But with a few significant interventions, we can rein in these companies, recalibrate the power dynamics, and put control of our democracy back in the hands of the people and their elected leaders. We can build an information environment that enhances, rather than destroys, democracy.
First, American lawmakers must strike at the heart of the platforms’ damaging business model. To do that, they must empower the public to take back ownership of our personal information in what some call “digital self-determination.” Privacy and freedom are intertwined: It is impossible to live freely and have your own independent thoughts and behaviors if you are being constantly monitored and manipulated by those in power, whether that’s the government or private companies. We must pass comprehensive federal data privacy legislation that minimizes the amount of personal data collected and gives control of that data back to each individual, allowing us to move it with us to different sites, delete it entirely, or request compensation from companies for using it.
Second, after the data business model has been fundamentally altered, we must ensure that technology companies are held accountable for their product failures, just as automakers and every other industry are. For that, it is critical that Section 230 be updated. Section 230 was established to protect good Samaritans, not actors who knowingly release harmful products or who know their products do harm but take no action. In 2019, internal researchers at Facebook, Instagram’s parent company, found that “[A]spects of Instagram exacerbate each other to create a perfect storm,” contributing to an online climate that makes “body images worse for one in three teen girls.” The company did nothing. If Section 230 is reformed, it will unleash lawyers, judges, and average citizens to seek justice. This would result in a fundamental change in the business calculus of technology platforms, forcing them to take the safety of their content and algorithms much more seriously.
Third, we must create a new and powerful digital regulatory agency dedicated to overseeing the technology sector. This agency must be robustly staffed with experts who understand the inner workings of these platforms and can adequately oversee their products and behaviors. In light of the recent Supreme Court ruling overturning Chevron deference, which had required federal courts to defer to an expert agency’s reasonable interpretation of ambiguous statutes that it administers, the new agency must be given very explicit, unambiguous powers to oversee everything from social media to generative AI. Most importantly, the agency must be given sufficient teeth to sanction companies when they do not follow the law, similar to the European Union’s Digital Services Act, under which platforms can be fined up to 6 percent of annual global revenue.
The fourth critical step is tamping down on virality—the rapid spread of popular or sensational material across one or more platforms—which is the most important driver of false information and propaganda. Right now, it is very much in the interest of tech platforms to encourage virality: It drives more eyes to the platform and makes them more money. This system propels information that is, if not downright false, often an exaggerated or one-sided version of the truth. Statutory changes are necessary to penalize companies that are deliberately indifferent to the promotion of false or inflammatory information. For example, research from the Center for Countering Digital Hate published in 2021 identified a “Toxic Ten” of climate disinformation spreaders on Facebook who were responsible for 69 percent of users’ interactions with climate-denial content. Facebook failed to label 92 percent of it.
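One concrete form such a rule could encourage is a “circuit breaker” on virality: once an item spreads faster than a set threshold, algorithmic amplification pauses until the item has been reviewed or labeled. The sketch below is a hypothetical illustration of that idea in Python; the thresholds, names, and review process are assumptions made for the example, not features of any existing platform or statute.

```python
import time
from collections import defaultdict, deque

# Hypothetical thresholds: if an item is reshared more than MAX_SHARES times
# within WINDOW_SECONDS, algorithmic amplification pauses pending review.
WINDOW_SECONDS = 3600
MAX_SHARES = 5000

_share_log: defaultdict = defaultdict(deque)  # item_id -> reshare timestamps

def record_share(item_id: str) -> None:
    """Log the timestamp of one reshare of an item."""
    _share_log[item_id].append(time.time())

def amplification_allowed(item_id: str) -> bool:
    """Return False once an item crosses the velocity threshold, signaling the
    recommender to stop boosting it until it has been reviewed or labeled."""
    now = time.time()
    shares = _share_log[item_id]
    while shares and now - shares[0] > WINDOW_SECONDS:
        shares.popleft()  # discard reshares that fall outside the window
    return len(shares) <= MAX_SHARES
```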
Finally, we must have absolute transparency to see “behind the screen” of the algorithms and ensure the companies are complying with the law. At present, social media platforms are a giant black box that is opened only in rare instances, with the help of whistleblowers or lawsuits that get past Section 230 and into the discovery phase. It is essential that comprehensive transparency legislation be passed so that the government, Congress, and average citizens have access to this critical information.
Achieving all of the above reforms, with the most well-resourced companies in human history pushing with all their might against these efforts, will be incredibly difficult. However, it is clear that we have entered a unique and historic moment. The tides of public opinion have turned and the wind is blowing squarely at the backs of reformers, who are trying to make important changes to the laws governing the internet.
As noted earlier, Congress eventually created the National Highway Traffic Safety Administration, changing the game for automakers and saving thousands of lives. Congress also created the Federal Aviation Administration (FAA). Partly as a result, major airline accidents in the United States are now vanishingly rare. Earlier this year, a single door plug flew off an Alaska Airlines flight and the plane was forced to make an emergency landing. There were no major injuries. The FAA stepped in, forced the airline to cancel thousands of flights, and cost the industry billions of dollars—all in order to protect American citizens.

Meanwhile, social media is stoking a mental health epidemic in which one in five teens suffers anxiety or depression and nearly a quarter contemplate suicide; between 2010 and 2020 there was a 38 percent increase in suicide rates among youth ages 10 to 24. Dangerous “challenges” and content promoting eating disorders, sexual exploitation, and suicide circulate freely. Foreign adversaries are spreading false information and sowing discord rampantly—using tools provided by our own companies—to severely weaken democracy in America and around the world. Yet there is no U.S. agency to step in, no federal statute to keep the companies in line, no way for average citizens or the government to sue these platforms for the immense damage they are wreaking.

The tech companies have, legally, externalized the costs of online addiction, misinformation, and disinformation in order to optimize their profitability. The costs to society are already immeasurable: children’s lives; loss of public faith in institutions and in the knowability of truth; billions spent and lives unnecessarily lost in internet-intensified wars and conflicts in places like Ethiopia and Myanmar, where algorithmically curated material fueled violence against certain ethnic groups. It’s time to reset the scales in favor of our children’s health, democracy, and the restoration of public trust in elections and the news media.
Despite the uphill battle, there is tremendous reason to be hopeful. On January 31, tech CEOs from several of the major platforms came before Congress to testify and were aggressively questioned by senators on both the left and right. Mark Zuckerberg was compelled to apologize to grieving parents in the room. It was a rare glimmer of bipartisanship—a glimpse of what bipartisan policymaking used to be, and what it can be once more.
The bipartisan concern demonstrated in the hearing is reflected in the spectrum of tech reform bills currently sitting in front of Congress. Richard Blumenthal, one of the Senate’s biggest progressives, and Marsha Blackburn, one of its most conservative members, have combined forces to advocate for children’s online safety. Their bill, the Kids Online Safety Act, was coupled with Ed Markey and Bill Cassidy’s Children and Teens’ Online Privacy Protection Act along with other legislation to form the Kids Online Safety and Privacy Act, which recently passed the Senate with overwhelming bipartisan support, a historic step that could soon result in the first sweeping new regulation of the tech industry in decades. Democrats and Republicans across the political spectrum have come forward with several proposals to update Section 230, including a bill by Representatives Cathy McMorris Rodgers and Frank Pallone to sunset the provision entirely in 18 months. Senators Elizabeth Warren and Lindsey Graham, who agree on very little, even came together on a bill to create a new digital regulatory commission. The chairs of the powerful House Energy and Commerce Committee and Senate Commerce Committee—Republican McMorris Rodgers and Democrat Maria Cantwell—have come together to push comprehensive data privacy legislation. Finally, the parties joined forces to pass a bill that could ban TikTok because of the national security threat it poses, unless it is sold to an individual or entity from a non-adversarial country (that is, not China, Russia, North Korea, or Iran)—demonstrating that bipartisan tech reform is possible.
Bipartisanship still exists. Ironically, the sector that has done the most damage to our democracy also provides the biggest opportunity for restoring it.
Remaking the Information Environment
With all of these reforms, power would shift, and the information environment would no longer be controlled exclusively by Silicon Valley overlords. These changes would put power back in the hands of individuals and the government. But this is not sufficient. All of the steps outlined above are necessary to stop the bleeding and ensure our democracy doesn’t disappear in the short term. The final step is to use technology to build a stronger citizenry and actually enhance democracy both at home and abroad.
There is a world where AI tools are used to spread trusted information while flagging or weeding out falsehoods and shielding voters from foreign tampering in our politics and elections. Researchers appear to be making strides in that direction: Recently, a team at Oxford developed a new method for detecting the “arbitrary and incorrect” answers, known as confabulations, that generative AI products like ChatGPT often produce. In this world, social platforms and algorithms are designed for healthy dialogue and built in ways that bring people closer together, reducing polarization, ending information silos, and better educating citizens.
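The intuition behind that kind of detector, which the Oxford researchers describe as a measure of “semantic entropy,” can be illustrated in a few lines of code. The sketch below is a simplified illustration, not the team’s actual implementation: it assumes we have already sampled several answers to the same question from a model, and it uses an exact-match check as a stand-in for the language-model judgment a real system would use to decide whether two answers mean the same thing. When the sampled answers scatter across many different meanings, entropy is high, a warning sign that the model may be confabulating.

```python
import math
from typing import Callable

def semantic_entropy(answers: list[str],
                     same_meaning: Callable[[str, str], bool]) -> float:
    """Cluster sampled answers that share a meaning, then compute the entropy
    of the cluster distribution. High entropy means the answers scatter across
    many meanings, which suggests the model is guessing."""
    clusters: list[list[str]] = []
    for ans in answers:
        for cluster in clusters:
            if same_meaning(ans, cluster[0]):
                cluster.append(ans)
                break
        else:
            clusters.append([ans])
    probs = [len(c) / len(answers) for c in clusters]
    return -sum(p * math.log(p) for p in probs)

# Toy usage: exact matching stands in for a real entailment check, and the
# answers would come from repeatedly sampling a generative model.
samples = ["Paris", "Paris", "paris", "Lyon", "Paris"]
print(semantic_entropy(samples, lambda a, b: a.strip().lower() == b.strip().lower()))
```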
This world is not only possible but can be seen just over the horizon. We can actualize the positive vision of an unending democratic spring that so many leaders had in 1989. It will require hard work and courage on the part of our Congress. And it will eventually require tremendous ingenuity and creativity—by the private sector and government alike—to harness the powerful technological tools we have developed to build a democracy-enhancing information environment that serves the public over corporate profits.