Symposium | Democracy and Technology: Allies or Enemies?

Repairing the Algorithmic Lens

By Suresh Venkatasubramanian

I came of age with the Internet as we see it today. I spent hours surfing the early web, visited Yahoo when it was just Jerry Yang’s page of bookmarks, and experienced the almost miraculous search results of the early incarnations of Google. By the time I graduated from Stanford with my Ph.D., the dot-com boom was in full swing, with the crash of 2001 looming just over the horizon. Many have chronicled the transition of computer science from a nerdy academic discipline to the lingua franca of our time: I’ve lived that transition as a computer scientist and have marveled at the wonders of my field, while at the same time becoming increasingly concerned about the impact of technology on our world.

Computer science—the discipline that I was schooled in—never imagined it could change the world. The AI dreamers came and went, with their Turing tests, Chinese room arguments, and Eliza, but the field I studied was more prosaic, inward facing, and insular—even narcissistic. The study of computer science was the study of computation as a phenomenon and the study of how to build computers. It was engineering, mathematics, and physics. What it never pretended to be was a discipline that spoke to society, to the law, or to the human condition.

And yet…

The unreasonable effectiveness of technology raised questions. What if we could design algorithms to see as we humans do? What if we could make devices that could understand what we say and write, and act on our orders? And what if we could use algorithms to predict how people would behave, and to make decisions that affect our lives and futures?

There were no answers to these questions, until the deluge arrived: Floods of data fueled by the web and generated by increasingly mobile and powerful devices, devices that themselves were powered by Moore’s Law and the inexorable advancement of computing power that put thousands, millions, billions of transistors on chips that got smaller and smaller and smaller. Innovations in algorithms and in learning, driven by this frightening and amazing scale. And thousands of minds, swept along by the revolution in the tech industry, with jobs and opportunities galore, and the freedom to tinker.

Data, computing power, and technological innovation seduced us with the promise of answers we had thought impossible. Computer science started to look outward to the society it had long ignored, and technologists decided that our tools could go from predicting machine behavior and performance to predicting human behavior and performance. Computer science was laying claim to being a science of society—a social science on par with traditional approaches to understanding people and systems.

A technical methodology, a way of thinking of algorithms, and a praxis designed for manipulating 0s and 1s with math and silicon were now being deployed as a sociotechnical framework. The premise was that mountains of data would tell us what we needed to know, and that clever algorithms would sift through these mountains to find the patterns that dictated our behavior. I have taken to calling this assemblage of tools and data the algorithmic lens because of the particular perspective it brings to bear on social problems.

Unfortunately, we now know what we see through the algorithmic lens. Our mountains of data had deep historical wounds—racism, sexism, and more—and our clever algorithms faithfully reproduced those wounds. These algorithms, running at large scale and at speed, turbocharged the delivery of biased outcomes. The methodologies at the core of the discipline—abstraction away from the reality of the stories the data was telling us, a relentless need to flatten human variation to make us machine-readable, and an unswerving (and unfounded) faith in the “truth” of data and the power of algorithms to find this truth—were the Achilles’ heels that brought us the greatest failures of technology, in discriminatory practices in the criminal legal system, in biases in hiring and lending, in the use of questionable surveillance technology to punish underrepresented groups, and even in the amplification of viral content that has threatened democracy across the world.

Models based on machine learning were used to predict risk of recidivism for individuals in federal prisons; these models tended to incorrectly predict a higher likelihood of recidivism for Black people compared to white people. A tool built to predict which candidates would be good hires ended up systematically eliminating female candidates based on gender cues in their résumés, even when gender was not explicitly listed.

To be sure, none of the many problems of today can be laid solely at the feet of modern AI-driven technology. But—and this is where the social and the technical collide—we deployed new technology uncritically without understanding what would happen when it collided with the people who were being subjected to decisions based on its use. We tried to make computer science a social science without incorporating an understanding of people into it.

And people took notice. As the incidents mounted, so did the clamor—from journalists, community advocates, defenders of civil rights, and researchers. It turns out that an “algocracy” that cannot see people except as 0s and 1s does not in fact democratize the benefits of technology for all of us.

Something needs to be done.

There are two different narratives that indicate how we should proceed: one in which technology and society are in conflict, and our goal is to build guardrails to protect ourselves, and another in which we need to reimagine how we design and deploy technology to reduce that conflict. These narratives are not mutually exclusive. In fact, they reinforce each other in important ways. They mobilize different groups of people along different paths toward a common goal: building a technologically aware society that informs and supports the flourishing of the human enterprise, rather than limiting us by forcing us into machine-readable boxes.

The first narrative describes the world of today as a battle between inevitable technological advancement, driven by goals of efficiency, scalability, and profit, and a frantic effort by society—policymakers, legislators, civil society, and academia—to put up guardrails that protect us from the worst consequences of unchecked tech deployment. While many argue that guardrails limit innovation and the potential for world-changing technology, I find it hard to take these arguments seriously while buckling the seatbelt in my car on the way to the store to buy headache medicine that I know will work safely and effectively, driving on a road with clearly painted lane markers so that cars don’t crash into each other, and parking two spots away from a wheelchair-accessible parking spot with convenient access to the store. Protections don’t limit innovation: They merely suggest directions in which innovation can and should go, to make sure that we can all benefit.

What do these protections look like in the algorithmic world? Well, if the harms people face arise from the collisions between technology and society, then that’s where the protections need to be—at the point of impact, where technology has a meaningful bearing on our ability to function and flourish, as well as to access the resources needed to do so.

This is the core design principle of the Blueprint for an AI Bill of Rights, a document released by the White House Office of Science and Technology Policy in October 2022 that I helped co-author. It lists protections that people need in the age of automation—protections that we’ve learned we need from a drumbeat of examples of where tech has failed. It demands technology that’s safe to use and that works, instead of promising the moon and delivering a lump of coal. It demands technology that doesn’t discriminate algorithmically in a way that would run afoul of civil rights law and basic decency. It demands that we don’t face a forced tradeoff between technology that can benefit us and a complete loss of privacy—that we don’t create incentives for predatory data collection that treats our daily activities and behavior as ways to earn dollars for others. It demands that the advancement of technology be visible and legible, not hidden from view and unaccountable. And it puts people first by requiring that we don’t rely solely on technology for important tasks that affect our lives, creating a pathway for human intervention and decision-making when automation fails—as it always does.

The Blueprint does much more than articulate rights that we must have. It lays out a set of guardrails—a technical companion—that we can use to make sure that the technology we build can be evaluated and inspected for compliance with the rights listed. And in that respect, it’s truly a blueprint—for legislation, regulatory guidelines, and even voluntary practices or new industry standards. It’s a call to arms for collective action, which is fitting since we are impacted by technology collectively.

There’s an interesting and slightly paradoxical dimension to the guardrails. Many of them are pieces of technology themselves, tools that we computer scientists have created and continue to innovate on to police the very technology we have built. We have suites of code that can examine a model and identify the kinds of biased outcomes it might produce.
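
To make that concrete, here is a minimal sketch in Python, using made-up predictions and group labels rather than any particular auditing toolkit, of the kind of check such code performs: compare the rate at which a model hands out favorable decisions across groups and flag large gaps.

```python
# Illustrative sketch of a simple bias audit: compare a model's rate of
# favorable decisions across demographic groups. Data and groups are made up.

from collections import defaultdict

def selection_rates(predictions, groups):
    """Fraction of positive (favorable) predictions for each group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical model outputs (1 = favorable decision) and group labels.
preds  = [1, 0, 1, 1, 0, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rates = selection_rates(preds, groups)
ratio = min(rates.values()) / max(rates.values())
print(rates)                                   # {'A': 0.6, 'B': 0.2}
print(f"disparate impact ratio: {ratio:.2f}")  # 0.33 -- a gap this large warrants scrutiny
```

Real auditing suites do far more than this, but the spirit is the same: surface the disparity so that it can be confronted.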

And that brings me to the second narrative of how we should address the age of tech. Technology, while not morally neutral and certainly not an unmitigated force for good, is very malleable. If the first narrative imagines technology as a flowing river that we can’t stop and can only protect ourselves from with barriers, the second narrative suggests that we can redirect the flow of the river to make it less destructive and more inclusive. We can repair the algorithmic lens to make sure it can truly serve the needs of all.

It’s a narrative that I find compelling because I have had success with it in my own research. It was in 2012 that I started thinking about the societal impact of machine learning and found myself at the front end of a new way of thinking about algorithms: one that established broader concerns like fairness, accountability, and transparency as first-order design criteria along with more traditional criteria like efficiency and accuracy. The translation was imperfect and incomplete. But the research—both technical and sociotechnical—that has come out of this area has informed policy documents like the Blueprint, legislation like the various bans on facial recognition, and standards development like the AI Risk Management Framework that the National Institute of Standards and Technology released on January 26, 2023, which offers guidance on how organizations can estimate the level of risk in their AI systems.

At the core of this effort is a realization that the basic technical questions we try to solve in computer science often have a strong normative element. When we build a predictive model in machine learning, we measure the quality of the model by comparing its predictions to “ground truth”—inputs and answers that we are training the model on. We count all the mistakes and call this the error of the model. And we treat all errors the same way. What we know now is that this can lead to deeply flawed outcomes—if there are fewer inputs representing a historically underrepresented group, say, or if false positives and false negatives are treated equally in cases where they have very different real-world outcomes. This recognition has prompted the use of different ways to measure the quality of a model, and different ways to build models of high quality.
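
A toy example, with entirely made-up numbers, shows how a single error count can hide exactly this kind of harm: overall accuracy looks respectable while one group absorbs nearly all of the false positives.

```python
# Toy illustration (hypothetical data): the aggregate error number looks fine,
# but the mistakes fall almost entirely on one group, and false positives and
# false negatives are lumped together even though their real-world costs differ.

def error_rates(y_true, y_pred):
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return fp / y_true.count(0), fn / y_true.count(1)  # (false positive rate, false negative rate)

# Hypothetical labels (1 = "high risk") and model predictions for two groups.
true_a, pred_a = [1, 0, 1, 0, 0, 1, 0, 0], [1, 0, 1, 0, 0, 1, 0, 0]  # group A: no errors
true_b, pred_b = [1, 0, 0, 1],             [1, 1, 1, 1]              # group B: every negative flagged

all_true, all_pred = true_a + true_b, pred_a + pred_b
accuracy = sum(t == p for t, p in zip(all_true, all_pred)) / len(all_true)
print(f"overall accuracy: {accuracy:.2f}")               # 0.83 -- looks acceptable
print("group A FPR/FNR:", error_rates(true_a, pred_a))   # (0.0, 0.0)
print("group B FPR/FNR:", error_rates(true_b, pred_b))   # (1.0, 0.0) -- the harm is concentrated here
```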

Sometimes it’s not the specific question, but the framing of the problem. Return again to the problem of training a predictive model: for some uses, it may not matter how the model works as long as its answers are mostly correct. But if the goal of building the model is to provide insight to the person using it, then why the model makes a prediction is perhaps as important as what prediction it makes. This realization has led to deep technical innovation in the area now known as “explainability.”
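
As a deliberately simple sketch, with hypothetical features and weights rather than any particular explainability method, consider a linear scoring model that reports not just its score but how much each feature contributed to it for one applicant.

```python
# Minimal sketch of the simplest kind of explanation: for a linear scoring
# model, report each feature's contribution to this prediction, not just the
# final answer. Feature names and weights are hypothetical.

def explain(weights, features):
    """Per-feature contribution to a linear score for a single input."""
    contributions = {name: weights[name] * value for name, value in features.items()}
    score = sum(contributions.values())
    return score, sorted(contributions.items(), key=lambda kv: -abs(kv[1]))

# Made-up loan-scoring model and applicant.
weights   = {"income": 0.4, "debt_ratio": -1.2, "years_employed": 0.3}
applicant = {"income": 2.0, "debt_ratio": 1.5, "years_employed": 4.0}

score, reasons = explain(weights, applicant)
print(f"score: {score:.2f}")
for name, contribution in reasons:      # most influential features first
    print(f"  {name}: {contribution:+.2f}")
```

Research on explainability tackles far more complex models than this, but the aim is the same: give the person relying on the prediction a reason they can interrogate.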

This story repeats over and over. Specific and “standard” ways to phrase questions in technology have unintended consequences because of blind spots in how the technology is developed and how the questions are asked. Changing these questions leads to new technology that is better at reflecting broader societal goals.

This reimagination and repair of the algorithmic lens centers the people who are subject to its outcomes. It encourages computer science—and computer scientists like me—to engage deeply with scholars and scholarship on the human side of this divide, in disciplines like philosophy, sociology, anthropology, political science, economics, history, the humanities, the law, and so many others. Computer science has always been an imperialist discipline, eager to spread its influence widely across domains. But interaction with the social sciences represents a true opportunity to build technology that fosters human flourishing.

The algorithmic society is here, and it isn’t going away. We must deal with the problems of technology now, and guardrails, laws, and incentives are all tools we can use to protect ourselves. But in the long run we need to provide an alternative—a fundamental reimagination of what computer science can be. We must repair the algorithmic lens so it can empower us—all of us—in our full humanity.

Suresh Venkatasubramanian is the director of the Center for Tech Responsibility at Brown University, the Deputy Director of the Data Science Initiative, and a professor of Computer Science and Data Science. Prior to coming to Brown, he was the Assistant Director for Science and Justice in the White House Office of Science and Technology Policy in the Biden Administration.
