One Monday in June 2009, at the start of the evening rush hour in Washington, D.C., a computer killed nine people. At least that’s one possible interpretation of the crash that occurred at a suburban Metrorail station. The train was in ATO, or “automatic train operation” mode, which means a computer was in control. Investigators later determined that the complicated automatic sensor mechanisms embedded in the trains and tracks had failed, causing one train traveling at almost 50 miles per hour to crash into the back of another stopped at the station. The human operator of the train, realizing too late what was happening, tried in the last few seconds to pull the emergency brake. She died along with eight others that day. It was the worst transportation disaster in the history of the D.C. Metro system.
No one would claim that a computer killed intentionally, of course, but the day’s events were the unforeseen, tragic consequence of something that increasingly governs many aspects of our daily lives: computer automation.
In The Glass Cage: Automation and Us, Nicholas Carr, the author of several books about technology, explores our increasing reliance on automation. “The computer is becoming our all-purpose tool for navigating, manipulating, and understanding the world, in both its physical and its social manifestations,” Carr writes. Now, often unwittingly, we allow computers to automate aspects of our lives that never used to be subject to the control of software or hardware. Carr’s astute survey of automation shows just how quickly and uncritically we have outsourced the daily experience of being human to algorithms and machines, and how crucial it is to stop and reflect on what we are doing to ourselves.
Carr’s book is not a story about the technical details of the machinery and software of automation, nor is it a paean to Silicon Valley ingenuity (far too many of those have already been written). It is a story of the human experience of living with the automated technologies we’ve created, which he defines simply as “the use of computers and software to do things we used to do ourselves.” Our contemporary experience of automation is rich and varied, encompassing everything from the questions your doctor asks you during a checkup to the design of buildings to killer robots and driverless cars. Automation has undoubtedly made our lives safer and more convenient in countless ways.
But as Carr’s sweeping survey of automation shows, there are considerable cognitive, physical, political, economic, and moral consequences to our embrace of computer automation. As he did in his previous book, The Shallows, Carr explores the unintended consequences of our relationship with our technologies. If seamlessness, convenience, and efficiency are one side of the automation coin, the other includes unintentional design flaws, misguided assumptions, and—at worst—even human carnage.
Consider cockpit automation. Carr goes into detail about the design of the Airbus A320, whose fly-by-wire system and “glass cockpit” of screens and knobs ushered airplanes into the digital era by automating and routinizing tasks, like controlling airspeed and pitch, that used to be performed by the flight crew. Automation allowed for greater efficiency in the cockpit (and reduced the number of people needed in it from four to two). But, as Carr shows, automation also “severed the tactile link between pilot and plane” and “inserted a digital computer between human command and machine response.” Pilots now spend most of their time monitoring many small machines rather than flying one big one. In fact, Carr argues, “The commercial pilot has become a computer operator.”
Yet numerous studies have shown how computers can distract pilots, making them less likely to achieve situational awareness in an emergency; research has also linked automation to a deterioration of the psychomotor skills required to keep an airplane in the air. As a result, pilots can suffer what one expert calls “skill fade” of their cognitive and motor abilities, leaving them with a reduced ability to react intuitively when an emergency occurs.
As Carr readily concedes, technological improvements to air travel have made air disasters increasingly rare. But he sees a “dark footnote” to this good news: a new kind of accident in which automation is implicated. Tellingly, the National Transportation Safety Board’s report on a Continental Connection commuter flight that crashed near Buffalo, New York, in 2009 noted that, after cockpit warnings sounded, the captain’s response “should have been automatic.” Instead, the pilots became confused and, lacking situational awareness, reacted poorly, causing the plane to crash and killing everyone on board. A similar fog of confusion enveloped the crew of Air France Flight 447, which crashed into the ocean on a flight from Rio de Janeiro to Paris in the summer of 2009, killing all on board.
On the ground, researchers have shown how automation bias influences medical workers: One study of radiologists found that when they relied on automated software to analyze patients’ scans, they were more likely to overlook certain types of cancers. Similarly, doctors who use automated electronic record-keeping for their patients end up spending more time interacting with the computer screen than with their patients, potentially missing subtle cues that could lead to better diagnoses. Patients in a study conducted in a Veterans Affairs clinic reported that their visits “feel less personal” because of the intrusion of the computer into the examining room.
And among knowledge workers in various fields, automated “decision-support systems” software increasingly substitutes data processing for old-fashioned human judgment, siphoning autonomy from humans in the name of greater efficiency. Surveying the many ways we are outsourcing cognitive tasks to automated technology, Carr cites technology historian George Dyson’s pugnacious question: “What if the cost of machines that think is people who don’t?”
We think with our bodies as much as we do with our minds. When we first learn to walk, then run, or swim or ride a bike, repeated effort and the often painful experience of failure eventually train us in the art of synchronizing our brains and limbs. Early in the book, Carr describes his youthful experience of learning to drive a standard transmission car. After many stalls and slipped clutches and grinding gears, he gained competence and, eventually, mastery. The tacit knowledge he acquired exists in a “fuzzy realm” far different from, but no less crucial than, the explicit knowledge one obtains from step-by-step instructions and well-defined processes. Tacit knowledge is the reason you can still remember how to ride a bicycle after a 20-year hiatus.
But automation undermines this process, Carr argues, making so many things so easy for us to do that it robs us of opportunities to gain tacit knowledge. This is clear even in creative fields such as architecture, where drawing by hand was until recently the pillar of training and design. Today, no architecture firm would hire someone who wasn’t fluent in computer-aided design (CAD) software, which has almost entirely replaced drawing by hand.
CAD has facilitated extraordinary architectural design, and some might argue that it promotes a new form of tacit knowledge, one that lets architect and software work together to create designs impossible to achieve by hand alone. But Carr notes that although well-known architects such as Renzo Piano and Michael Graves have made great creative use of the software, they worry about the loss of the experience of drawing by hand, and note how the physical act of sketching shapes the final form of a building. “Drawings are not just end products: they are part of the thought process of architectural design,” Graves has said.
Like other forms of automation, CAD’s emphasis on efficiency nudges its human users to perform certain actions rather than others, and discourages more open-ended design and creative exploration. Renzo Piano once likened CAD to “those pianos where you push a button and it plays the cha-cha and then a rumba.” CAD software is not intentionally pernicious, but like many things we’ve automated, it contains built-in biases that we rarely question. As Carr warns, “When automation distances us from our work, when it gets between us and the world, it erases the artistry from our lives.”
Modern automation also appears to be erasing jobs from our lives. Although technology-induced joblessness has stoked fear since angry Luddites smashed the first mechanized looms, Carr persuasively argues that this time things really are different: “Machines are replacing workers faster than economic expansion creates new manufacturing positions. As industrial robots become cheaper and more adept, the gap between lost and added jobs will almost certainly widen.”
From an employer’s perspective, this makes sense. Machines are the perfect employees. They never get sick or complain or sexually harass their colleagues (at least not yet), and the occasional software upgrade is a lot cheaper than the health insurance and pension plan demanded by a human worker. And yet despite periodic fretting by economists, we’re oddly passive about the implications of this trend, no doubt because of our nation’s longstanding enthusiasm for technology. Carr quotes cognitive scientist Donald Norman, who has observed, “[T]he machine-centered viewpoint compares people to machines and finds us wanting, incapable of precise, repetitive, accurate actions.”
Rather than humanize the machines, we seem intent on making our human institutions more machinelike. Embedded in Silicon Valley’s techno-optimist worldview is a libertarian political ethos that focuses on ends rather than means and views the political process as a problem to be overcome rather than a way to reach democratic solutions. People who create on-demand businesses like Uber and Amazon don’t have patience for sclerotic Senate subcommittee hearings and partisan political negotiation. Sometimes it seems like they don’t have patience for people at all. If you can get a drone to do what a person can, why not do it?
This cultural impatience extends beyond the boundaries of Silicon Valley. Former Obama Administration official Peter Orszag once argued for greater “automaticity” in policy-making to avoid gridlock, for example. Carr argues that we are already too eager to embrace technologies that supplant rather than merely supplement our activities, and too impatient to work through the often messy process of solving human problems. Although Carr never comes right out and says it, he’s grappling with an existential question: Does our blind faith in automated technology suggest a deepening mistrust of human judgment? Much of the enthusiasm for our increasingly algorithmic existence comes from the anxieties and responsibilities it allows us to transfer to supposedly dispassionate and unbiased machines. (I won’t give that worker a promotion because my decision-support software predicts he will do poorly in the job.) There is much false hope in this, and also much denial of individual responsibility.
Like politics, automation is fundamentally about trust—the trust we place in our machines and the technologists who build them, our trust in a system of government that can regulate them, and trust in our own ability to wisely use them. But this trust is built on the assumption that these technologies are morally neutral. Carr’s book provides further evidence of the need to discard this shibboleth. Automation, like much technology, is neither neutral nor benign. As Carr notes of the so-called “substitution myth”: “A labor-saving device doesn’t just provide a substitute for some isolated component of a job. It alters the character of the entire task, including the roles, attitudes, and skills of the people who take part in it.”
Moreover, a world of easy automation is one in which we are more likely to outsource ethical decision-making to machines. Automation at today’s level of sophistication gives machines the power to offer judgments (explicitly or implicitly), not merely to generate information, and we are increasingly accepting of machines that replicate human judgment (such as a doctor’s ability to diagnose a patient or a pilot’s ability to land a plane). As for free will, the outsourced moral life has less need of it; it isn’t hard to imagine a near future in which a criminal defendant can say, “The algorithm made me do it!”
Eventually, technologies of automation will act less like helpful bank tellers and more like predatory lenders—and some already do. Predictive algorithms recommend books you might like to buy on Amazon, but parole boards also use them to decide whether inmates should be granted their freedom. Automated technologies could easily be designed to mislead their target audience and deftly evade oversight by their own users, who wouldn’t have the first clue what is going on inside either the hardware or the software of their devices. Do we want to live in a world of machine-driven ersatz moral judgment, where “ingenuity is replacing intuition”? If it means a guarantee of unbiased efficiency and reduced risk, many of us might say yes.
But we shouldn’t. Carr’s book shows that automation is not merely a personal or business decision. It is a moral one. Uncritically embracing automation risks degrading our humanity, bit by bit—death by a thousand apps. “The labors our obliging digital deities would have us see as mere drudgery may turn out to be vital to our fitness, happiness, and well-being,” Carr reminds us.
If earlier eras celebrated self-reliance, and the twentieth century lauded self-expression, in the twenty-first century we admire self-control, perhaps because we live in a world of such convenience and ease that we need much more of it. Not for nothing are “binge-watch” and “information overload” the catchphrases of our particular cultural moment. Carr worries that our dependence on our machines and our expectations of what they can and should do for us—keep us safe, make our lives more convenient, facilitate personal connections—run the risk of creating a kind of technologically enabled learned helplessness. When we outsource personal responsibility to the technologists of Silicon Valley, we adopt a posture Carr correctly deems “submissive.” Our technologies begin to look less like our guides and more like masters holding our leashes. As Carr poignantly asks, “How far from the world do we want to retreat?”
One of Carr’s great strengths as a critic is the measured calm of his approach to his material—a rare thing in debates over technology. He is neither a bully nor a nanny (loyal readers of Carr’s blog also know he has a sharp wit, which I would like to have seen more of in this book) and he has a gift for stating problems succinctly: “The trouble with automation is that it often gives us what we don’t need at the cost of what we do.”
But Carr could be tougher on us than he is. Discussing how wearable technologies such as Google Glass make us “lose the power of presence” by constantly checking the screen in front of our eyes, for example, he stops short of scolding us, even though by choosing to wear Glass or to glance obsessively at our smartphones we willingly exchange the “power of presence” (and the feelings of other human beings) for access to information. We’ve been making that trade-off since the advent of mobile technology, and not because we suffer from false consciousness. We’ve done this willingly, enthusiastically, often foolishly, and without bothering to consider the costs. “It’s impossible to automate complex human activities without also automating moral choices,” Carr reminds us.
Carr excels at exploring these gray areas and illuminating for readers the intangible things we are losing by automating our lives. “How do you measure the expense of an erosion of effort and engagement, or a waning of agency and autonomy, or a subtle deterioration of skill? You can’t,” he writes. But you can, through meticulous research and insight, lay the groundwork for us to ask the right questions—and this is what Carr’s book does so well.
Techno-enthusiasts will dismiss Carr’s concerns about automation as hand-wringing by someone who praises the scythe as a masterful tool (he does, but in the context of a beautiful poem by Robert Frost). But as Carr notes, the true nostalgist is the one who believes that “the new thing is always better suited to our purposes and intentions than the old thing.” Recognizing the potential harm of automation doesn’t require us to toss robots out the window. But it does demand that we develop some principles to guide us.
Our apps and gadgets and their software are obscenely prescriptive. But they are also morally myopic. Their creators, after all, are people who think tacos delivered by drones will transform humanity. And they tell us precisely what we want to hear, as a tweet from Apple CEO Tim Cook shows: “You are more powerful than you think.” Are we? We expect our technologies to do so much for us, from managing our morning commute to suggesting and rating the restaurant we’ll want to eat at; Carr’s book is a reminder that we ought to expect more from ourselves. He’s not naive about the difficulty of striking a balance between using tools and allowing the tools to use us. “The value of a well-made and well-used tool lies not only in what it produces for us,” he writes, “but what it produces in us.”
Carr hopes we haven’t gone so far along the path of technological progress that we can’t pause to reflect on what a more thoughtfully automated world might look like. At the very least, he urges a more rigorous design philosophy for the hardware and software that many of us spend most of our days using and greater accountability from the companies that create them. The binge-watching cynic in me worries that there is little appetite for such an effort among consumers of technology.
But skepticism is crucial given our short memory for technology’s unintended consequences. In September, Washington, D.C.’s Metro system announced that by March all Metro trains on its Red Line would once again be automated, with a goal of fully automating the remaining train lines by fall 2017. Local officials praised the efficiency and security of a rail system built on such sophisticated technology. Asked about the return to computer-driven trains, the aunt of the youngest victim of the 2009 Metro crash told The Washington Post, “[Y]ou can’t risk human beings to some computer…. It’s not worth it.”