How Biden’s FTC Can Go After Google and Facebook

Their unfair domination of advertising—through surveillance—is bad for consumers, citizens, and democracy. Here’s what the Administration can do.

By Sandeep Vaheesan


It’s official: Facebook and Google have taken over digital advertising. Not only that, they’re now the leading players in all advertising, accounting for nearly half of total ad spending in the United States. By capturing billions of ad dollars from traditional media, this duopoly has wreaked havoc on journalism. Newspapers saw ad revenues—once the lifeblood of their business—decline more than 60 percent between 2008 and 2018. Since 2004, U.S. newspapers have laid off half their staff, and more than 2,000 newspapers have shut down. The flagship newspapers in both New Orleans and Pittsburgh stopped publishing print editions seven days a week.

The decline of journalism, here and abroad, means that many important stories go uncovered in growing “news deserts” and are sometimes replaced by disinformation spread through social media, including Facebook. The result is a less informed citizenry and a further erosion of healthy electoral democracy.

While some may hail the rise of Facebook and Google and the demise of journalism as “the market” at work, the two Internet giants won the contest in advertising, in large part, through broad and deep surveillance. Facebook and Google operate dragnets that track what we do, think, and feel and with whom we socialize and do business, both online and offline. This personal information is raw material for their core business: targeted advertising that aims to reach users they already believe will be interested in the marketed product or service. Rival publishers that host ads, whether newspapers, television stations, or billboard owners, cannot track and monitor users the way Facebook and Google do and build comparably detailed personal profiles—nor should we want them to. Surveillance advertising entails the systemic violation of laws protecting our privacy and prohibiting discrimination on the basis of race, color, gender, and other personal traits; it supercharges corporate marketing’s psychological manipulation; and it wastes human labor and resources.

Is there anything that can be done about this? In fact, yes: Federal law should prohibit this business model and prevent corporations, large and small, from tracking our online and offline activities in an effort to “target” advertising based on our non-public activities, speech, and thoughts. But even without new legislation in Congress, the Federal Trade Commission (FTC) can prohibit surveillance advertising as an unfair method of competition and force Facebook, Google, and other surveillance-driven businesses to develop new and benign methods of making money.

To understand how we got here, we need to go back to the early to mid-2000s, when Facebook and Google started remaking advertising by surveilling users online and subsequently offline through mobile phones, wearable devices, and “smart” home appliances. With their expansive and intrusive tracking, the two firms provided advertisers the seeming power to more precisely home in on receptive audiences. On Facebook and YouTube, a retailer of tailor-made suits in lower Manhattan can target 30-something professional men who live within a two-mile radius of its store, make more than $300,000 a year, and purchased Italian leather shoes in the past month. Former Google employee Tim Hwang described this system as “the bundling of a multitude of tiny moments of attention into discrete, liquid assets that can then be bought and sold frictionlessly in a global marketplace.”

In addition to their new method of targeting advertising, the two also grew by each acquiring hundreds of firms—consolidation sprees that the Department of Justice and FTC did nothing to halt. Among other acquisitions, Google bought YouTube, Android, and digital advertising platforms DoubleClick and AdMob, while Facebook purchased WhatsApp and Instagram in the 2010s. They also employed unfair practices such as the bundling of separate products and exclusionary contracts to shut out and impede the growth of would-be rivals, as recent federal and state antitrust suits allege.

As a result of their acquisitions and exclusionary practices, Facebook has more than 2.5 billion individuals on its original platform, 2 billion active users on WhatsApp, and more than a billion each on Instagram and Messenger, while Google has nine services with at least 1 billion users. In an interview with The Atlantic in 2010 (when Google was not nearly as big as it is today), former Google CEO Eric Schmidt captured the company’s scope of surveillance in concise and creepy terms:

We don’t need you to type at all. We know where you are. We know where you’ve been. We can more or less know what you’re thinking about.

Other online publishers and service providers, such as Bing, ESPN, and The New York Times, track users as well, but they cannot offer the breadth and depth of monitoring and precision that Facebook and Google present to marketers. No one comes close to matching their massive online footprints.

Diagnosing the problems with surveillance advertising requires understanding the public creation and maintenance of markets that enable economic activity in general. Despite common rhetoric around “free markets,” all markets, including advertising markets, are constructed and governed by rules. The government defines what things, whether land, goods, or intangibles, are entitled to property protection and upholds these rights through police and judicial action. The courts enforce contracts and decide what remedies to award in the event of a party’s breach of its obligations. State action establishes and structures market life.

In the competition realm, the law determines which business methods and tactics are permissible. Competition is not a free-for-all in which corporations can win by hook or by crook. For example, a manufacturer cannot gain a competitive edge by making false claims about its own products or baselessly disparaging a competitor’s goods. Antitrust rules also set norms of fair competition. They restrict firms from dominating markets through below-cost pricing, exclusive contracts with distributors, and bundling of separate products or services. In short, the law shapes and restricts business rivalry in accordance with generally accepted notions of morality.

But these existing strictures have not proven to be of much use in the cyber age. In fact, the basic business model of Facebook and Google runs afoul of at least four bodies of law or social values in the United States and many other nations.

First, surveillance advertising violates our right to, and general expectation of, privacy. Although the United States does not have a single, all-encompassing privacy law at the national level, it has a body of law that embodies a strong privacy norm. As two legal scholars described it last year, U.S. privacy law is “a complicated hodge-podge of constitutional law, piecemeal federal statutes, state laws, evidentiary privileges, contract and tort law, and industry guidelines.” Facebook and Google, through standard-form “clickwrap” contracts that few have the time to read, let alone carefully consider, compel users to renounce their privacy and assent to pervasive tracking if they want to connect with friends on Facebook, Instagram, or WhatsApp or search for things on Google or YouTube, seek directions on Google Maps, create a Gmail account, or browse the Internet using Chrome. Facebook even tracks non-users, who have not granted even the fictitious consent that users of its services give by clicking “accept” on the terms of service.

Second, surveillance enables illegal discrimination based on age, gender, national origin, race, sexual orientation, and other grounds, including in employment, housing, and lending. For example, Facebook permitted landlords to target ads, explicitly or implicitly, at whites only. Indeed, a less positive framing of targeting is discrimination on myriad grounds, legal (marketing an upcoming medical conference just to health-care professionals) and illegal (advertising a job opening only to men). Given surveillance platforms’ reliance on correlations between personal traits and assorted behaviors, beliefs, and circumstances, targeting that results in a disparate impact on marginalized groups may be impossible to purge from this model.

Third, surveillance advertising breeds psychological manipulation. The desire for eyeballs drives surveillance businesses to design their products and services in addictive ways. Addiction means more “tiny moments of attention” to sell. Users constantly on their phones or refreshing Twitter on their desktops are users (in theory) viewing ads.

Furthermore, Facebook and Google target ads and other material at individuals whose online behavior suggests anger, depression, or distress and often compound these emotions and channel them toward negative ends. In the political arena, these ads can spread groundless theories on electoral contests to the most receptive audiences and inflame existing personal attitudes such as xenophobia, misogyny, and paranoia about perceived out-groups. Two technologists wrote, “[T]he best fodder for [likes, purchases, and shares] has proved to be incendiary, controversial, and divisive material.”

This sensationalist and often false content can unleash a vicious circle. Incendiary material attracts more user engagement, permits more tracking, and allows for even more precise targeting of incendiary material. Facebook does not just allow users to post false and provocative materials—its business model promotes their broad dissemination (or “virality” in online lingo). This is why doctored images and unsubstantiated rumors about the Rohingya people spread far and wide on Facebook in Myanmar and helped unleash genocide against that group.

Fourth, surveillance advertising constitutes a waste of resources. Facebook, Google, and a network of digital marketing firms, ad exchanges, and data brokers direct tens of thousands of workers into making ever-more marginal “improvements” in advertising. Some of the best engineering minds that could go into developing electric vehicles or improving the efficiency of wind turbines are instead designing “home assistants” that listen to our desires, fears, and small talk; thermostats that track our home temperature settings; and specialized apps and devices that closely track one aspect of our behavior (what we eat for breakfast every day) and transmit this trove of information to Google and Facebook, as well as other online surveillance firms. Surveillance advertising machines are also voracious consumers of energy. Huge amounts of electricity power server farms that store detailed portfolios on every one of us and conduct auctions that decide exactly who sees what ads and where, millions of times every second.

Without the large-scale development of surveillance advertising, producers could still target marketing content by picking outlets likely to attract prospective customers: promoting basketball shoes through ads on a broadcast of an NBA game, pitching continuing legal education courses through inserts in Washington Lawyer, and publicizing a local car dealership on Google by purchasing ads tied to the keywords “buying a new car.” Such contextual advertising ensured—and continues to ensure—that content is not indiscriminately disseminated.

Federal policymakers can and should ban surveillance advertising as part of the antimonopoly attack on Facebook and Google’s dominance. A broad public interest coalition (including my employer, the Open Markets Institute) has called for a national prohibition on “the practice of extensively tracking and profiling individuals and groups, and then microtargeting ads at them based on their behavioral history, relationships, and identity.” Even if legislation proves too heavy a lift, the Biden Administration is not powerless here. The FTC has the authority to restrict unfair methods of competition and unfair or deceptive acts and practices. Congress established the FTC to develop and codify rules of fair conduct in the marketplace, enacting good morals in trade and taming “the pirates of business.” Drawing on this congressional intent, the Supreme Court stated that the FTC should function like “a court of equity” and “consider[] public values beyond simply those enshrined in the letter or encompassed in the spirit of the antitrust laws.”

Encouragingly, the FTC appears poised to put its regulatory arsenal to use. The FTC’s Acting Chairwoman Rebecca Kelly Slaughter has set up a group at the commission to write competition and consumer protection rules. In light of the serious social harms from surveillance advertising, she and her fellow commissioners should use the FTC’s ample statutory power to ban surveillance advertising.

Just as businesses cannot obtain a competitive advantage and profit through deception and sabotage, they should not be permitted to gain a leg up in the advertising contest through massive and persistent invasions of private spaces, feelings, and thoughts. Prohibiting surveillance advertising is critical to protecting our privacy, strengthening anti-discrimination laws, ending a business built on the dissemination of addictive and provocative content, and steering resources in socially beneficial directions. This reconstruction of market rules would force Facebook, Google, and countless others to develop new business models in accord with public values.


Sandeep Vaheesan is the legal director of the Open Markets Institute. He is the author of a forthcoming book entitled Democracy in Power: A History of Electrification in the United States, under contract with the University of Chicago Press, on the history and future of cooperative and public power in the United States.

