The Web We Weave: Why We Must Reclaim the Internet from Moguls, Misanthropes, and Moral Panic by Jeff Jarvis • Basic Books • 2024 • 272 pages • $29
When Elon Musk brought the U.S. government to the brink of shutdown over a spending bill in December, he boasted of a victory for democracy. As Musk described the outcome on X, which he owns: “Your elected representatives have heard you and now the terrible bill is dead. The voice of the people has triumphed. VOX POPULI / VOX DEI.”
The voice was not divine, but rather Musk’s own, amplified by a social media platform that he has partly reconstructed around his own prejudices and predilections. Musk isn’t just X’s owner, but its self-appointed Main Character for Life. He sometimes posts hundreds of times a day into an algorithm rigged to maximize his influence, encouraging adulation and downranking or silencing those who cross him. Now, he plays a key role in the Trump Administration as the surviving leader of DOGE, an advisory body that is supposed to lead Trump’s efforts to “reform” (or shrink) the government. The platform economy is bleeding into government, and government into the platform.
The Web We Weave, Jeff Jarvis’s new book, rightly deplores Musk’s “malign weirdness” and pleads for a future that is not controlled by the “tech bros” and “cultish AI boys.” It does not, however, prepare us for the future we are in, where Elon Musk, Mark Zuckerberg, and Jeff Bezos pay obeisance and tribute to the newly elected President. Jarvis’s understanding of politics reflects a time when there was real distance between platform owners and elected officials.
Now, the billionaires who bought Trump stock early are in the ascendant, while others are bending the knee. Even before Trump won the election, Bezos and Zuckerberg were hedging their bets. Bezos blocked The Washington Post, which he owns, from publishing an endorsement of Kamala Harris. After Trump’s victory, Bezos’s Amazon, Zuckerberg’s Meta, and Sam Altman, the CEO of OpenAI, rushed to donate millions to Trump’s inauguration committee. Meta recently replaced Nick Clegg, its top government outreach official, with his deputy, Joel Kaplan, who spent years advocating against disinformation controls that might disadvantage conservatives. The platform monopolists who condemned Trump’s efforts to overturn the election four years ago are now embracing him.
The threat we now face is an oligopolistic fusion of the state and business. But that is not the problem that worries Jarvis. He is less concerned about billionaire power than about intrusive government regulation inspired by moral panics. (Jarvis believes that the power of platform owners will wane as new market entrants disrupt old monopolies.) The key, for Jarvis, is to help the “we” who weave the internet win the battle against the forces of control. On one side is “our conversation, our creativity, our community”: in short, the internet itself. On the other are the regulators and their allies, who either want to stifle internet freedom or might stumble into doing so by accident: Chinese government censors, European Union bureaucrats, old media journalists and editors, and the “moral entrepreneurs” who stir up fear for clicks and effect.
But what, exactly, is the “we” of Jarvis’s book title? As Marion Fourcade and Kieran Healy explain in a recent book, The Ordinal Society, this we is a collective identity shaped by both platform owners and users. All conversations beyond the purely personal are shaped by the technologies they rely on. Platforms like X and Facebook don’t simply reflect the business and personal concerns of wannabe godlings like Musk and Zuckerberg. Their users sometimes turn the technologies to their own purposes. Equally, those purposes may be reshaped by the algorithms and decisions of the networks’ owners.
For three decades, commentators like Jarvis have depicted the politics of the internet as a grand war between the advocates of freedom and the meddling forces of regulation and control. That war may have described the politics of that time. But today, our problems are different. What happens to politics when the voice of the people blurs together with the voices of self-appointed god-emperors like Elon Musk? We are about to find out.
Jarvis came to prominence as one of the earliest commentators on online media, writing books replete with anecdotes about prominent entrepreneurs and his own central role in technology debates. While Jarvis deplores the tech bros, his real targets in The Web We Weave are those in his former profession (journalism) and his new one (book-length opinionation). He argues that the mainstream media is spreading moral panic around the internet. Like others before him, Jarvis points out how new technologies invariably inspire panics: Commentators in the past have noted the destabilizing dangers of such terrifying technologies as the novel and the radio and even the bicycle. Now, their intellectual descendants are at it again.
The Web We Weave is right to highlight some of the terrible arguments that professional opinionators have made about the internet. Shoshana Zuboff has coined a lovely phrase for Google and Facebook’s business model—“surveillance capitalism”—but her notion that algorithms are turning us into puppets is wildly implausible. Jonathan Haidt deploys ambiguous evidence to support grandiose arguments about how the internet is ruining our kids. Kevin Roose’s widely read New York Times article about the desire of a large language model to escape its limits and win his love away from his spouse spawned an entire genre of shock and fear journalism that largely misunderstands how these technologies work.
In Jarvis’s view, traditional media spreads these terrifying claims because it hates and fears the platform economy. Once, publications like The New York Times were irreplaceable intermediaries between advertisers and the public that they wanted to influence. Now, they have been replaced by companies like Google and Facebook, which can serve advertisers better and more efficiently by allowing them to microtarget those they want to influence.
As traditional publications lose revenues and journalists lose jobs, they are turning against their tech adversaries, hyping up stories about the evils of Silicon Valley. That, Jarvis says, feeds the desires of regulators to make rules that will stifle the essentially human conversation of the internet. What Jarvis wants us to understand is that “The internet is not technology. The internet is us.… It is a human network. It is speech.” The internet’s faults and biases are our own. The way forward, in Jarvis’s opinion, is not to stifle this conversation through over-intrusive regulation, but to improve it, through broader and more adaptable rules, smaller forums that do not aspire to scale, and new protocol-based services like Bluesky, which allow people to connect without subjecting themselves to the whimsical tyranny of the likes of Musk. If people want to leave Bluesky, they can set up their own alternative, using the same protocol to talk to their old friends who have stayed.

There is much that is attractive about this vision. There is also much that is naïve. Jarvis wants a public sphere of conversation that is unfiltered by the press or by opinion polls, revealing what people really want and think in all its glorious cacophony. He is optimistic that with the right platforms, the voices of the few bad actors who spread division will be utterly swamped by those of ordinary people who just want to talk and make connections.
This is the opposite of Zuboff’s vision of billionaire platform owners as all-controlling puppet masters. But it is just as ridiculous. You cannot simply treat the internet as free speech in all its awesomeness, independent of all intermediating technology. The unfortunate truth is that there is no public sphere that is unfiltered by rules and technology. Even the simplest town halls rely on technologies like microphones and rules about who speaks when, which reflect, and often reinforce, the power disparities in the communities that make use of them.
All public spheres involve technology and power relations. Pointing this out is not stirring up a moral panic. Instead, it is responsibly highlighting the politics behind even the most idealistic visions of public engagement, and drawing attention to the power relations that are reshaping American democracy. When we look at a public sphere, we need to figure out how the technology that allows it to work intersects with the desires of both its designers and its users. It’s hard to see what empowers people like Musk when one is stuck in the traditional dichotomy between freedom and control.
To ask the right questions, we need to understand how social media platforms affect democracy in the first place. In principle, democracy is the means through which the public can steer its collective destiny, obliging government officials to respond to what it wants. The sordid truth is that there is no such thing as an unmediated democratic public. Political sociologists like Andrew Perrin and Katherine McFarland have documented how past technologies, like opinion polls, have shaped our common understanding of what the public is and wants.
For example, Perrin and McFarland find that when conservatives said in surveys that they believed Barack Obama was a secret Muslim, most of them probably didn’t actually believe that this was true. They write, “Further analysis of these polls strongly suggests that they reflect respondents’ self-identification as members of a public that dislikes the president, not that actually believes him to be a believing or practicing Muslim.” Still, the fact that conservatives publicly shared this apparent belief shaped how they, other citizens, and politicians saw their movement, opening the door to Trump-era policies like the “Muslim ban.” When conservatives saw what other conservatives were saying, they knew what public beliefs they needed to coordinate around.

The same is true of the new publics that are forming on social media platforms. Over the last decade, these online platforms have become a major space—perhaps even the major space—in which publics coalesce and are seen both by their members and by the politicians and officials who want to secure their support. For example: Twitter users spread myths about a secret basement beneath a D.C. pizza restaurant where Hillary Clinton and her minions imprisoned children for sexual abuse. But as the psychologist Hugo Mercier has pointed out, only one person believed this strongly enough to visit the restaurant to investigate. Even so, the myths reshaped the contours of the politically possible.
For a few years, platforms believed that their job was to squelch such myths, at least in rich democracies like the United States. But when Republicans won the House of Representatives in 2022, the platform companies quickly backed off, watering down or abandoning policies meant to stem the spread of falsehoods that might undermine democracy.
Some, like Meta’s Instagram and Threads, de-emphasized politics altogether, weighting their algorithms to prevent political commentary from spreading. Others, like Twitter after Musk purchased it, welcomed back extremists who had been banned, while promoting Musk’s own conspiracy theories. To really thrive on the new Twitter (or X, as it was now called), you not only had to pay to have your account “verified,” but also had to get the attention of Musk, or of the like-minded posters whose content he spread.
This was an ecosystem rather than a simple top-down system of control. New myths, arguments, and notions emerged from below—but they were far more likely to spread if Musk supported them. When Musk picked up an attractive political slander—for example, the claim that Haitians in Ohio were stealing and eating pets—it was likely to reverberate. The voice of the people and the voice of God blurred together, forming a single apparent public, in which some beliefs were likely to seem like common wisdom, while others were almost certain to be sidelined.
Under Trump, we’re likely to see the conspiratorial side of conservatism being strengthened by other platforms, too. Zuckerberg has already signaled that he sees Trump’s election as a “cultural tipping point,” and that Meta’s services will move away from stopping the spread of false or offensive content to privilege “free speech” instead. He wants Trump to retaliate against European authorities if they try to enforce disinformation rules against his services.
But there is a more fundamental problem: American democracy itself may be short-circuited, as Musk and those around him shape both the questions that the public is purportedly asking and the answers that government provides. Such a confusion of public, platform owner, and governing administration is profoundly troubling, not just for America but for the world. How should other countries respond when Musk uses his platform to agitate for the downfall of friendly governments and demand the release of far-right provocateurs? How do foreign regulators who reasonably see Musk’s platform and Zuckerberg’s new approach as threats to their own sovereign democracy respond without inviting ferocious retaliation from the Trump Administration?
If Musk and Trump fall out with each other, and if Zuckerberg begins losing users, this chimeric amalgam of public and policymaking might collapse under its own weight. We can’t count on that. We need new ways of creating collective voice and having it responded to, in a world where the platform economy is democratically untrustworthy at best, and at worst becoming actively malign. How do we build new publics, and new relationships between the public and the government, when technologies are leading to increasingly deranged publics and the government seems increasingly incapable of solving problems?
That is the challenge, and it is a far more complex one than simply protecting the internet from regulation. To meet it, we must understand that the “we” who weave the web are also woven by it. The technologies that allow us to talk to each other also necessarily shape the contours of debate. We can’t insulate these technologies from regulation, trusting that market-competitive forces and some notional marketplace of ideas will be sufficient to prevent abuse. Constituting ourselves as a collective “we” that can agree on what our common spaces should look like and refiguring technologies to make these spaces possible would be difficult under the very best of circumstances. Those are not the circumstances we live in. Nonetheless, that’s what we have to do if we are to find our way out of the traps that oligarchs like Musk, and our own collective tendencies, have built for us.