Symposium | Bipartisanship Reinvigorated

The Urgent Task of Reforming Section 230

By Dick Gephardt and Zach Wamp

Tagged: Social Media, Technology

Law professor Jeff Kosseff calls it “the 26 words that created the internet.” Senator Ron Wyden, one of its co-authors, calls it “a sword and a shield” for online platforms. But we call it Section 230 of the 1996 Communications Decency Act, one of our most regrettable votes during our careers in Congress.

We didn’t know it then. In the 1990s, long before Snapchat, Siri, and some app our grandkids use called “Yik Yak,” the internet seemed young and promising, but simple. It was email, bare-bones websites, and basic search engines. We never could have imagined what it would become or how we would need to regulate it.

At the time, technology companies lobbied Congress, warning that restrictive legislation would stifle the growth of the internet economy and prevent U.S. businesses from competing in the global market. In exchange for protections, they promised to behave responsibly, and we believed them.

So, as part of a larger balancing act of regulating the rapidly developing internet industry, we voted for Section 230. It says two things. First, platforms are free from legal liability for nearly anything that a user posts. Second, platforms can moderate content if they would like to. Those rules, together, are Senator Wyden’s “sword and shield” argument: Platforms are shielded from lawsuits related to their third-party content, but they have the sword to intervene if they decide to.

For a few years, the tech world’s promise to behave ethically in exchange for Section 230’s protections appeared to pan out. Google even had a core motto so ingrained in the company culture that a version of it was the Wi-Fi password on the shuttles used to ferry employees around the company’s California headquarters: “Don’t be evil.” (Google erased the phrase from its code of conduct in 2018.)

But the internet has changed dramatically since the 1990s, and the tech industry’s values have changed along with it. In 1996, Section 230 was protecting personal pages or small forums where users could talk about a shared hobby. Now, tech giants like Google, Meta, and X dominate internet traffic, and both they and startups put a premium on growth. Growth is fundamental to their business model. They make money from advertising: Every new user means more profit. And to attract and retain users, platforms rely on advanced algorithms that track our every online move, collecting data and curating feeds to our interests and demographics, with little regard for the reality that the most engaging content is often the most harmful.

We’re a Democrat and a Republican who disagree on a lot of policy issues, but we agree on this: We must reform Section 230—and soon. When we voted for it, we had no idea what the internet would become. A law meant to protect users and help grow the internet is instead allowing social media companies to run wild and harm our children and our democracy.

Just like Big Tobacco, Big Tech’s profits depend on an addictive product, which is marketed to our children to their detriment. Social media is fueling a national epidemic of loneliness, depression, and anxiety among teenagers. Around three out of five teenage girls say they have felt persistently sad or hopeless within the last year. And almost two out of three young adults either feel they have been harmed by social media themselves or know someone who feels that way. Our fellow members of the Council for Responsible Social Media (CRSM) at Issue One know the harms all too well: Some of them have lost children to suicide because of social media. And as Facebook whistleblower Frances Haugen, another CRSM member, exposed, even when social media executives have hard evidence that their company’s algorithms are contributing to this tragedy, they won’t do anything about it—unless they are forced to change their behavior.

It’s not just our children. Our very democracy is at stake. Algorithms routinely promote extreme content, including disinformation, that is meant to sow distrust, create division, and undermine American democracy. And it works: An alarming 73 percent of election officials report an increase in threats in recent years, state legislatures across the country have introduced hundreds of harmful bills to restrict voting, about half of Americans believe at least one conspiracy theory, and violence linked to conspiracy theories is on the rise. We’re in danger of creating a generation of youth who are polarized, politically apathetic, and unable to tell what’s real from what’s fake online.

In short, Big Tech is putting profits over people. Throughout our careers, we have both supported businesses large and small, and we believe in their right to succeed. But they can’t be allowed to avoid responsibility by thwarting regulation of a harmful product. No other industry works like this. After a door panel flew off a Boeing plane mid-flight in January, the Federal Aviation Administration grounded all similar planes and launched an investigation into their safety. But every time someone tries to hold social media companies accountable for the dangerous design of their products, they hide behind Section 230, using it as a get-out-of-jail-free card.

That wasn’t the intent of Section 230. It was meant to protect companies acting as good Samaritans, ensuring that if a user posts harmful content and the platform makes a good-faith effort to moderate or remove it, the company can’t be held liable. We still agree with that principle, but Big Tech is far from acting like the good Samaritan. The problem isn’t that there are eating disorder videos, dangerous conspiracy theories, hate speech, and lies on the platforms—it’s that the companies don’t make a good-faith effort to remove this content, and that their products are actually designed to amplify it, often intentionally targeting minors.

People on both sides of the aisle want to reform Section 230, and there’s a range of ideas on how to do it. From narrowing its rules to sunsetting the provision entirely, dozens of bills have emerged offering different approaches. Some legislators argue that platforms should be liable for certain kinds of content—for example, health disinformation or terrorism propaganda. Others propose removing protections for advertisements or content provided by a recommendation algorithm. CRSM is currently bringing together tech, mental health, education, and policy experts to work on solutions. But the specifics are less important than the impact of the reform. We will support reform guided by commonsense priorities.

We know it won’t be easy. We have seen tech safety bills with widespread bipartisan support, like the Kids Online Safety Act, stall in Congress in recent years. One obvious reason for these delays is lobbying. Just like we saw with Big Tobacco, Big Tech companies shell out millions of dollars a year lobbying Congress. Meta alone spent $7.6 million in just the first three months of 2024 and has one lobbyist for every eight members of Congress. But legislators need to value their constituents over their campaign pocketbooks and vote for what’s right.

There is also a common claim from Silicon Valley that regulating social media is a violation of free speech. But free speech, as courts have ruled time and time again, is not unconditional. You can’t yell “fire” in a crowded theater where there is no fire because the ensuing stampede would put people in real danger. But this is essentially what social media companies are letting users do by knowingly building products that spread disinformation like wildfire.

Holding social media companies accountable for the amplification of harmful content—whether disinformation, conspiracy theories, or misogynistic messages—isn’t a violation of the First Amendment. Even the platform X, formerly known as Twitter, agrees that we have freedom of speech, but not freedom of reach, meaning posts that violate the platform’s terms of service will be made “less discoverable.” In a lawsuit brought by the mother of a young girl who died after copying a “blackout challenge” that TikTok’s algorithm allegedly recommended to her, the Third Circuit Court of Appeals recently ruled that Section 230 does not protect TikTok from liability when the platform’s own design amplifies harmful content. This game-changing decision, if allowed to stand, could lead to a significant curtailing of Section 230’s shield. Traditional media companies are already held to these standards: They are liable for what they publish, even content like letters to the editor, which are written by everyday people.

Another claim is that restructuring Section 230 would put American tech companies at a competitive disadvantage with Chinese and Russian firms. This is a well-worn scare tactic. In the 1960s, automakers claimed that the cost of installing safety features would jeopardize their business, but they didn’t stop making cars—or money—when we started requiring seat belts. Responsible safeguards for social media won’t put Big Tech out of business either.

If anything, Section 230 reforms could make platforms more pleasant for users; in the case of X, reforms could entice advertisers to return after they fled in 2022-23 amid backlash over hate speech on the platform. Getting rid of the vitriol could make space for creative and fact-based content to thrive.

But for now, these platforms are still filled with lies, extremism, and harmful content. We know what it’s like to sit at the dinner table and watch our grandchildren, even those under ten years old, scroll mindlessly on their phones. We genuinely worry, every time they pick them up, what the devices are doing to them—and to all of us.

We also know what it’s like to work in Congress, under both parties, and we know it will take hard work for today’s legislators to fix the unintended consequences of our vote. But we’re confident that it’s worth the effort. The internet may no longer be young, but it’s still promising. It can still be a tool, rather than a trap, for our brilliant kids and grandkids. We just have to treat it right. Reforming Section 230 is a necessary first step. Doing nothing isn’t an option—not if we want a healthy population and a healthy democracy.


Dick Gephardt is a former member of the U.S. House of Representatives who served as Majority Leader from 1989 to 1995. He currently co-chairs Issue One’s Council for Responsible Social Media.

Zach Wamp is a former member of the U.S. House of Representatives, serving from 1995 to 2011 as a Republican from Tennessee's 3rd congressional district. He currently co-chairs Issue One's bipartisan National Council on Election Integrity.
