Arguments

Generative AI’s Overlooked Risk: Ads

By Zak Rogoff

Tagged: advertising, Artificial Intelligence, Economics, technology policy

The recent excitement over ChatGPT—which has the fastest-growing user base of any software application in history—quickly dimmed as a growing chorus of experts began voicing concerns. Worries about its impacts have ranged from fears it will enable cheating among students and phishing scams to concerns it could replace workers in certain jobs or even overthrow humanity. But many critics of generative AIs like ChatGPT are overlooking the most profitable avenue for their potential misuse: surveillance advertising.

Since before the first snake oil salesman, marketers have known that falsehoods can be useful in garnering attention and building interest in a product. The advertising industry has knowingly propagated numerous untruths over the years, most famously around the safety of tobacco. To this day, many companies continue to use falsehoods to promote products when they can get away with it, from businesses selling dangerous or illegal dietary supplements on Facebook to those peddling questionable cancer treatments on Instagram.

Likewise, in an effort to save money and time, marketers have sought to make sure their messages, whether true or false, reach those who are most likely to act on them. To this end, targeting and personalization have long been major goals of the advertising industry—from George Gallup’s introduction of market research in 1935 to the creation of branded characters like Tony the Tiger, designed to appeal to children, starting in the 1950s. With the advent of the internet, it is unsurprising that marketers would begin taking advantage of the reams of new personal data generated by people’s interactions with digital platforms, culminating in the rise of surveillance advertising (also known as “targeted advertising”) since the late 1990s.

Yet with generative AI, targeted advertising may be entering a frightening new era. These systems will soon be powerful enough to design and target online ads with little human involvement. This would give the advertising industry a new generation of automated systems with novel capabilities and untold impacts. At my request, ChatGPT generated a name for such a hypothetical automated system: AdOverlord. While the moniker may sound fit for a Marvel supervillain, if we are not careful, such systems could rapidly increase the number of false, discriminatory, and otherwise harmful advertisements online—and deepen the dangers that digital platforms pose to our society and democracy.

As we know, targeting can have a number of negative consequences for human and civil rights, including harmful discrimination based on sensitive categories such as gender or race. This can happen, for example, when a company excludes a minority group from viewing an ad for a home or job because it wants to save money and does not believe the group is likely to respond to the ad; personal bias is not required. Because targeting means different ads are shown to different people, the advertising system becomes less transparent, and it is therefore more difficult for civil society or government to hold companies accountable. It is, in the most literal sense, “systemic racism.”

Ranking Digital Rights (RDR), where I work as a research manager, analyzes tech platforms’ and telcos’ transparency and respect for human rights online. Most major platforms do have rules to protect against false ad content and discriminatory targeting, and these are bolstered by laws that make such practices illegal in many jurisdictions. However, the work of ad moderation is mostly outsourced to algorithmic systems. Our research shows that most platforms do not provide much explanation of how their ad-moderation systems work, and very few disclose the number of ads they block. But publicly available evidence does point to one thing: The systems don’t work well.

Buying a policy-violating ad to run on a major platform does not require the technical expertise of Russia’s Internet Research Agency (which famously did just that to disrupt the 2016 U.S. presidential election). In recent experiments, journalists have been able to buy ads on Facebook that promoted drinking bleach to prevent COVID-19, pushed false narratives about Russia’s invasion of Ukraine, and advocated acts of genocide against Rohingya Muslims. With ad revenues recently on the decline, many major platforms appear to be lowering, not raising, their standards for what ads make it onto their sites. Well-enforced ad-moderation systems ultimately mean turning away money, and so platforms are reluctant to prioritize them.

Though ad moderation is already largely automated, creating ads still requires significant human labor. While marketers have sophisticated tools for managing their work and streamlining communication with ad platforms, humans are still needed for a slew of essential tasks: conceiving of multiple versions of an ad designed to appeal to different groups of potential customers; choosing the platforms on which to run the ads; targeting and paying for the ads; reviewing the analytics provided by the ad platforms; and making the requisite modifications to the ad content or targeting for the next round of ads. The more precisely marketers want to target different demographics, the more research, ad-content development, and targeting strategy they have to commit to. The limits of human productivity have created natural constraints on the volume of advertising produced—until now.

Marketers today are already experimenting with asking generative AIs to create ad content. Meta, which has long faced criticism for allowing false and dangerous ads on its platform, is working on its own limited generative AI advertising system, slated for deployment by December. Meanwhile, generative AI technology is advancing so fast that industry leaders recently made an unprecedented call to pause the development of systems with new capabilities for six months. Generative AI will probably soon be good enough to replace the human component of surveillance advertising altogether. And this is where AdOverlords come in.

AdOverlords would be able to create far more ad content than humans in the same amount of time. And the firms deploying AdOverlord systems could connect them directly to ad platforms, allowing ad submissions to be completed automatically. In line with the race toward personalization, AdOverlords would learn to make many more distinct ads per campaign, each targeted to smaller groups of users sharing more precise interests or demographic traits. Because the audience for each ad would be smaller, the ads could be better customized, potentially increasing their persuasive impact. (Buying more ads would not necessarily require an increase in ad budgets; ads that target fewer people are cheaper.) Without safeguards in place, AdOverlords in the hands of unscrupulous marketing firms would likely also learn to make ads that were false. And they would soon learn to discriminate in their targeting too, if that improved ads’ effectiveness at converting views into clicks. There is a precedent for such discrimination by surveillance advertising algorithms: A less advanced AI system run by Facebook was caught discriminating by race and gender when doing so stood to increase ad engagement.

Ultimately, outsourcing ad creation to generative AI would likely lead to a larger number of distinct ads submitted to major platforms, each more precisely customized for, and targeted to, a small group of people. As mentioned, without safeguards, some percentage of these AI-generated ads would be false, discriminatory, or otherwise harmful. Unfortunately, we cannot count on platforms’ existing ad-moderation systems to gracefully handle such a significant increase in input. A larger number of harmful ads would likely get through, increasing their impact on individuals and further polluting the public sphere.

Even more worrisome, if AdOverlords could learn to use the full suite of user-tracking, communication, and ad-analytics tools that already exist, they might even create campaigns targeting a single person, with content customized to prey on that individual’s psychological vulnerabilities. Similar tactics have been used by some of the shadiest purchasers of targeted advertising, even though they often violate platforms’ terms of service. Automation could make these maneuvers much cheaper and therefore more common.

Incorporating generative AI into surveillance advertising could have another, less direct risk as well. For years, RDR has been ringing the alarm about the human rights risks of the surveillance advertising business model, which fuels most major platforms. These risks go beyond ads themselves: The business model encourages the design of platforms where inflammatory or controversial user-generated content (not just ads) spreads faster, because this content attracts eyeballs, which can then be redirected to ads.

There is evidence that this property of major platforms has contributed to the rise in extremism and the democratic decline that have occurred globally over the last decade, as well as to negative psychological effects, particularly among young people. Further, the business model incentivizes ever-growing data collection as marketers try to target ads more precisely than their competitors and platforms search for ways to make their products more addictive. Any integration of generative AI into this system, even if it only creates variations on existing ads (as it appears Meta’s system will do), could boost the efficiency of the surveillance advertising business model. This would further entrench the system—and its corrosive effects on our society.

Instead of supercharging the surveillance advertising system with new types of AI, we should be moving toward an internet that respects privacy and other human rights. Governments should ultimately ban surveillance advertising and incentivize the development of new business models for online platforms, such as subscription models and contextual advertising relevant to the content of the page on which it is displayed. Contextual advertising has a track record of success on the internet, and it would be a more viable option if it did not have to compete with surveillance advertising.

While the movement toward such a ban has made strides, there are intermediate steps that both policymakers and companies can take now to preempt the risks of generative AI-fueled surveillance advertising. Platforms that run ads need to beef up their ad moderation and become more forthcoming about how their current systems work. Such transparency would be facilitated by new regulatory regimes—such as Europe’s upcoming Digital Services Act—requiring large platforms to release data about ads’ purchasers and targeting parameters. Legislation could also ban targeted advertising to children, impose larger minimum audience sizes, and limit targeting options.

More fundamentally, all governments owe their citizens protections as strong as those provided by Europe’s General Data Protection Regulation (GDPR). When enforced fully, the GDPR limits data collection to only that which is needed to actually deliver the service users sign up for, preventing platforms from hoovering up everything they can in service of ad targeting. The American Data Privacy and Protection Act (ADPPA) is the best legislative proposal for this in the United States, where a disproportionate number of internet companies are based. RDR’s own comprehensive privacy and freedom of expression standards can provide guidance for policymakers and for responsible platforms that are willing to improve faster than governments require. Researchers and civil society groups, who are already speaking out on the potential risks of generative AI, should broaden their calls to action to include greater consideration of the risks from AI in advertising.

It can be very difficult to regulate a problematic practice after a major industry becomes dependent on it. That’s been made clear by the U.S. Congress’s dragging pace in passing basic privacy legislation like the ADPPA, which would curtail Big Tech’s freewheeling data collection. We need to act now to make sure that the integration of generative AI into advertising doesn’t allow supercharged ad systems to boost false and discriminatory content. Indeed, we must seize this moment to fully rethink the governance of online spaces to ensure a wholesale improvement in the protection of privacy and other human rights. New technologies provide new capabilities, but the way they are deployed is not inevitable; anyone who says it is just might be trying to sell you something.

Zak Rogoff is research manager at Ranking Digital Rights, which scores tech and telecom companies on their respect for human rights.