Symposium | Beyond Neoliberalism

A New Economic Paradigm

By Heather Boushey

Tagged: Economics, neoliberalism

In the fall of 2018, University of California, Berkeley economist Emmanuel Saez said, to an audience of economists, policymakers, and the press, “If the data don’t fit the theory, change the theory.” He was speaking about a new data set he developed to show who gains from economic growth, the rise in monopoly and monopsony power, and the resulting importance of policies such as wealth taxes and antitrust regulation. Listening from the crowd, I could tell this was an important moment in the history of economic thought. Saez had fast become one of the most respected economists in the profession—in 2009, he won the John Bates Clark Medal, an honor given by the economics profession to the top scholar under the age of 40, for his work on “critical theoretical and empirical issues within the field of public economics.” And here he was, telling us that economics needs to change.

Saez is not alone. The importance of his comments is reflected in the research of a nascent generation of scholars who are steeped in the new data and methods of modern economics, and who argue that the field should—indeed, must—change. The 2007 financial meltdown and the ensuing Great Recession brought to the forefront a crisis in macroeconomics because economists neither predicted nor were able to solve the problem. But a genuine revolution was already underway inside the field as new research methods and access to new kinds of data began to upend our understanding of the economy and what propels it forward. While the field is much more technical than ever, the increasing focus on what the discipline calls “heterogeneity”—what we can think of as inequality—is, at the same time, undermining long-held theories about the so-called natural laws of economics.

It’s not clear where the field will land. In 1962, the historian of science Thomas Kuhn laid out how scientific revolutions happen. He defined a paradigm as a consensus among a group of scientists about the way to look at problems within a field of inquiry, and he argued that the paradigm changes when that consensus shifts. As Kuhn said, “Men whose research is based on shared paradigms are committed to the same rules and standards for scientific practice. That commitment and the apparent consensus it produces are prerequisites for normal science, i.e., for the genesis and continuation of a particular research tradition.” In this essay, I’ll argue that there is a new consensus emerging in economics, one that seeks to explain how economic power translates into social and political power and, in turn, affects economic outcomes. Because of this, it is probably one of the most exciting times to follow the economics field—if you know where to look. As several of the sharpest new academics—Columbia University’s Suresh Naidu, Harvard University’s Dani Rodrik, and Gabriel Zucman, also at UC Berkeley—recently said, “Economics is in a state of creative ferment that is often invisible to outsiders.”

The Twentieth Century Paradigms

We can demarcate three epochs of economic thinking over the past century. Each began with new economic analysis and data that undermined the prevailing view, and each altered the way the profession examined the intersection between how the economy works and the role of society and policymakers in shaping economic outcomes. In each of these time periods, economists made an argument to policymakers about what actions would deliver what the profession considered the optimal outcomes for society. Thanks in no small part to the real-world successes of the first epoch, policymakers today tend to listen to economists’ advice.

The first epoch began in the early twentieth century, when Cambridge University economist John Maynard Keynes altered the course of economic thinking. He started from the assumption that markets do not always self-correct, which means that the economy can be trapped in a situation where people and capital are not being fully utilized. Unemployment—people who want to work but are unable to find a job—is due to firms not deploying sufficient investment because they do not see enough customers to eventually buy the goods and services they would produce. From this insight flowed a series of policy prescriptions, key among them the idea that when the economy is operating at less than full employment, only government has the power to get back to full employment, by filling in the gap and providing sufficient demand. Keynes’s contribution is often summarized to be that demand—people with money in their pockets ready to buy—keeps the economy moving. For economists, the methodological contribution was that policymakers could push on aggregate indicators, such as by boosting overall consumption, to change economic outcomes.

Keynes explicitly framed his analysis as a rejection of the prevailing paradigm. He begins The General Theory by decrying the “classical” perspective, writing in Chapter 1 that “[T]he characteristics of the special case assumed by the classical theory happened not to be those of the economic society in which we actually live, with the result that its teaching is misleading and disastrous if we attempt to apply it to the facts of experience.” He spends the next chapter identifying the erroneous assumptions and presumptions of the prevailing economic analysis and laying out his work as a new understanding of the economy. In short, he argues the classical economists were wrong because they assumed that the economy always reverts to full employment.

Many credit the ideas he laid out in The General Theory with pulling our economy out of the depths of the Great Depression. He advanced a set of tools policymakers could use to ensure that the economy was kept as close to full employment as possible—a measure of economic success pleasing to democratically elected politicians. Certainly, the reconceptualization of the economy brought forth by National Income and Product Accounts—an idea Keynes and others spearheaded in the years between World Wars I and II—shaped thinking about the economy. These data allowed the government, for the first time, to see the output of a nation—gross domestic product (GDP)—which quickly became policymakers’ go-to indicator to track economic progress.

By the late 1960s, Keynes’s ideas had become the prevailing view. In 1965, University of Chicago economist Milton Friedman was quoted in Time magazine as saying, “[W]e are all Keynesians now,” and, in 1971, even Richard Nixon was quoted in The New York Times saying, “I am now a Keynesian in economics.”

Nonetheless, the field was shifting toward a consensus around what has become known as “neoliberalism”—and Friedman was a key player. Keynes focused on aggregates—overall demand, overall supply—and did not have a precise theory for how the actions of individuals at the microeconomic level led to those aggregate trends. Friedman’s work on the permanent income hypothesis—the idea that people will consume today based on their assessment of what their lifetime income will be—directly contradicted Keynes’s assumption that the marginal propensity to consume would be higher among lower-income households. Whereas Keynes argued that government could increase aggregate consumption by getting money to those who would spend it, Friedman argued that people would understand this was a temporary blip and save any additional income. The microfoundations movement within economics sought to connect Keynes’s analysis, which focused on macroeconomic trends—the movement of aggregate indicators such as consumption or investment—to the behavior of individuals, families, and firms that make up those aggregates. It reached its apex in the work of Robert Lucas Jr., who argued that in order to understand how people in the economy respond to policy changes, we need to look at the micro evidence.

Together, these arguments shifted the field back toward focusing on how the economy pushes toward optimal outcomes. What we think of in the policy community as “supply-side” policy was the focus on encouraging actors to engage in the economy. In contrast, demand-side management sought to understand business cycles and mattered mainly for recessions, which the Federal Reserve could fix using interest rate policy. In other words, we were back to assuming that the economy would revert to full employment and to what economists call “optimal” outcomes, if only the government would get out of the way.

As the United States neared the end of the twentieth century, there were many indications that this was the right economic framework. The United States led the world in bringing new innovations to market and, up until the late 1970s, America’s middle class saw strong gains year to year. We had avoided another crisis like the Great Depression and our economy drew in immigrants from across the globe. If policymakers focused on improving productivity, the market would take care of the rest—or so economists thought.

The Unraveling

By the end of the twentieth century, a cadre of economists had grown up within this new paradigm. In her recent book, Leftism Reinvented, Stephanie Mudge points to the rise of what she terms the “transnational finance-oriented economists” who “specialized in how to keep markets happy and reasoned that market-driven growth was good for everyone.” But behind the scenes, there was a new set of ideas brewing. For the market fundamentalist argument to be true, the market needs to work as advertised. Yet new data and methods eventually led to profound questions about this conclusion.

My introduction to this work came in the first week of my first graduate Labor Economics course in 1993. The professor focused on a set of then-newly released research papers by David Card and Alan Krueger in which they used “natural experiments” (here, comparing employment and earnings in fast food restaurants across state lines before and after one state raised its minimum wage) to examine what happened when policymakers raised the minimum wage. This was an innovation. Prior to the 1990s, most research in economics focused on the model, not the empirical methods. Indeed, Card recently told the Washington Center for Equitable Growth’s Ben Zipperer, “In the 1970s, if you were taking a class in labor economics, you would spend a huge amount of time going through the modeling section and the econometric method. And ordinarily, you wouldn’t even talk about the tables. No one would even really think of that as the important part of the paper. The important part of the paper was laying out exactly what the method was.”

This was not only an interesting and relevant policy question; it also was a deeply theoretical one. Standard economic theory predicts that when a price rises, demand falls. Therefore, when policymakers increase the minimum wage, employers should respond by hiring fewer workers. But Card and Krueger’s analysis came to a different conclusion. Using a variety of data and methods—some relatively novel—they found that when policymakers in New Jersey raised the minimum wage, employment in fast food restaurants did not decline relative to those in the neighboring state of Pennsylvania. Their research had real-world implications and broke new ground in research methods; it also brought to the fore profoundly unsettling theoretical questions.

When Card and Krueger published their analysis, “natural experiments” were a new idea in economics research—and Card gives credit to Harvard economist Richard Freeman as “the first person” he heard use the phrase. These techniques, alongside other methods, allowed economists to estimate causality—that is, to show that one thing caused another, rather than simply being able to say that two things seem to go together or move in tandem. As Diane Whitmore Schanzenbach at Northwestern University told me, “In the last 15 or 20 years or so, economics—empirical economics—has really undergone what we call the credibility revolution. In the old days, you could get away with doing correlational research that doesn’t have a format that allows you to extract the cause and effect between a policy and an outcome.”
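The logic of these natural experiments can be sketched in a few lines of code. The numbers below are invented for illustration, not Card and Krueger’s actual data; the point is the “difference-in-differences” comparison that underlies this style of research: the change in the state that raised its wage, net of the change in the neighboring state that did not.

```python
# A minimal difference-in-differences sketch with made-up numbers
# (illustrative only, not Card and Krueger's actual estimates).

def diff_in_diff(treat_before, treat_after, control_before, control_after):
    """Estimated effect = (change in treated state) - (change in control state)."""
    return (treat_after - treat_before) - (control_after - control_before)

# Hypothetical average employment per fast food restaurant, before and
# after the minimum wage increase:
effect = diff_in_diff(
    treat_before=20.4, treat_after=21.0,     # state that raised its wage
    control_before=23.3, control_after=21.2, # neighboring state, no change
)
print(round(effect, 1))  # prints 2.7: the control state's decline nets out
```

Subtracting the control state’s change is what lets researchers claim causality: any statewide trend that hit both states—a recession, say—is differenced away, leaving the effect of the policy itself.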

The profession was not immediately comfortable with Card and Krueger’s research, balking at being told the world didn’t work the way theory predicted. Their 1995 book, Myth and Measurement, contained a set of studies that laid bare a conundrum at the core of economic theory. The reception was cold at best. At an Equitable Growth event marking the book’s twentieth anniversary edition, Krueger recalled a prominent economist in the audience at an event for the first edition saying, “Theory is evidence too.” Indeed, when Krueger passed away in March, his obituary in The Washington Post quoted University of Chicago economist James J. Heckman, who in 1999 told The New York Times, “They don’t root their findings in existing theory and research. That is where I really deviate from these people. Each day you can wake up with a brand new world, not plugged into the previous literature.”

History tells a different story. Card and Krueger would go on to become greats in their field and lead a new generation of scholars to new conclusions using innovative empirical techniques. Krueger’s passing earlier this year illustrated the extent of this transformation in economics. The discipline is now grounded in empirical evidence that focuses on proving causality, even when the findings do not conform to long-standing theoretical assumptions about how the economy works. New methods, such as natural or quasi-experiments that examine how people react to policies across time or place, are now the industry standard.

While in hindsight it might seem incredible that this discipline could have existed without these empirical techniques, their widespread adoption only came late in the twentieth century, alongside the dawn of the Information Age and advances in empirical methods, access to data, and computing power. As one journalist put it, “[N]umber crunching on PCs made noodling on chalkboards passé.” One piece of evidence for this is that the top economics journals now mostly publish empirical papers. Whereas in the 1960s about half of the papers in the top three economics journals were theoretical, about four in five now rely on empirical analysis—and of those, over a third use the researcher’s own data set, while nearly one in ten are based on an experiment.

Card and Krueger connect their findings to another game-changing development—the emergence of a set of ideas known as behavioral economics. This body of thought—along with much of feminist economics—starts from the premise that there is no such thing as the “rational economic man,” the theoretical invention required for economic theory to work. Krueger put it this way:

The standard textbook model, by contrast, views workers as atomistic. They just look at their own situation, their own self-interest, so whether someone else gets paid more or less than them doesn’t matter. The real world actually has to take into account these social comparisons and social considerations. And the field of behavioral economics recognizes this feature of human behavior and tries to model it. That thrust was going on, kind of parallel to our work, I’d say.

This is the definition of a paradigm shift. As a result of these changes, empirical research is now the route to the top echelons of economics. While this may seem like a field looking inward, it also appears to be a field on the cusp of change. Kuhn describes how, as a field matures, it becomes more specialized; as researchers dig into specific aspects of theory, they often uncover a new paradigm buried in fresh examinations of the evidence. This kind of research commonly elevates anomalies—findings that don’t fit the prevailing theory.

How we make sense of this new empirical analysis will define the new paradigm. The policy world has been quick to take note of key pieces of this new body of empirical research. Indeed, evidence-backed policymaking has become the standard in many areas. But the nature of a paradigm shift means that policymakers are in need of a new framework to make sense of all the pieces of evidence and to guide their agenda. We can see this in current policy debates; while conservatives continue to tout tax cuts as the solution to all that ails us, the public no longer believes this to be the true remedy. Whether that means the agenda being discussed in many quarters to address inequality, by taxing capital and addressing rising economic concentration, will become core to the new framework remains to be seen.

A New Vision

A new focus on empirical analysis doesn’t necessarily mean a new paradigm. The evidence must be integrated into a new story of what economics is and seeks to understand. We can see something like this happening as scholars seek to understand inequality—what economists often refer to in empirical work as “heterogeneity.” Putting inequality at the core of the analysis pushes forward questions about whether the market performs the same for everyone—rich and poor, with economic power or without—and what that means for how the economy functions. It brings to the fore questions that cannot be ignored about how economic power translates into social power. Most famously, in Capital in the Twenty-First Century, Thomas Piketty brings together hundreds of years of income data from across a number of countries, and concludes from this that powerful forces push toward an extremely high level of inequality, so much so that capital will calcify as “the past tends to devour the future.”

That rethinking is happening right now. At January’s Allied Social Science Association conference—the gathering place for economists across all the sub-fields—UC Berkeley’s Gabriel Zucman put up a very provocative slide, which said only “Good-bye representative agent.” This slide was as revolutionary as Card and Krueger’s work decades before because it implied that we should let go of the workhorse macroeconomic models. For the most part, policymakers rely on so-called “representative agent models” to inform economic policy. These models assume that the responses of economic actors, be they firms or individuals, can be represented by one (or maybe two) sets of rules. That is, if conditions change—for example, a price rises—the model assumes that everyone in the economy responds in the same way, regardless of whether they are low income or high income. Mark Zandi, the economist who leads a commonly cited economic model at Moody’s Analytics, confirms this: “Most macroeconomists, at least those focused on the economy’s prospects, have all but ignored inequality in their thinking. Their implicit, if not explicit, assumption is that inequality doesn’t matter much when gauging the macroeconomic outlook.”
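Why the representative agent matters can be seen in a toy calculation. All the numbers below are invented for illustration: if the marginal propensity to consume differs by income group—as the empirical work suggests—then who receives a dollar determines how much aggregate spending it generates, something a single-agent model cannot capture.

```python
# Illustrative sketch (invented numbers) of heterogeneous responses.
# A representative-agent model applies one marginal propensity to
# consume (MPC) to everyone; here the MPC varies by income group,
# so the same transfer produces different aggregate spending
# depending on who receives it.

mpc_by_group = {"bottom": 0.9, "middle": 0.6, "top": 0.2}  # assumed MPCs

def consumption_response(transfers):
    """Total new spending generated by transfers (a dict keyed by group)."""
    return sum(mpc_by_group[group] * amount for group, amount in transfers.items())

# The same $300 billion, targeted differently:
to_bottom = consumption_response({"bottom": 300})  # ~270 billion spent
to_top = consumption_response({"top": 300})        # ~60 billion spent
print(to_bottom, to_top)
```

In a representative-agent model both scenarios would produce identical aggregate responses; once heterogeneity enters, distribution becomes a macroeconomic variable in its own right.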

But these workhorse models of macroeconomics underperformed—to put it mildly—in the run-up to the most recent financial crisis. They neither predicted the crisis nor provided reasonable estimates of economic growth as the Great Recession hit and the slow recovery began. When Zandi integrated inequality into the Moody’s macroeconomic forecasting model for the United States, he found that adding inequality to the traditional models—ones that do not take economic inequality into account at all—did not change the short-term forecasts very much. But when he looked at the long-term picture or considered the potential for the system to spin out of control, he found that higher inequality increases the likelihood of instability in the financial system.

This research is confirmed in a growing body of empirical work. This spring, International Monetary Fund economists Jonathan D. Ostry, Prakash Loungani, and Andrew Berg released a book pulling together a series of research studies showing the link between higher inequality levels and more frequent economic downturns. They find that when growth takes place in societies with high inequality, the economic gains are more likely to be destroyed by the more severe recessions and depressions that follow—and the economic pain is all too often compounded for those at the lower end of the income spectrum. Even so, as of now, most of the macroeconomic models used by central banks and financial firms for forecasting and decision-making don’t take inequality into account.

Key to any paradigm change is a new way of seeing the field of inquiry. That’s where Saez, along with many co-authors, including Thomas Piketty and Gabriel Zucman, is making his mark. They have created what they call “Distributional National Accounts,” which combine the aggregate data on National Income so important to the early-twentieth-century paradigm with detailed data on how that income is allocated across individuals—incorporating the later-twentieth-century learning—into a measure that shows growth and its distribution. As Zucman told me, “We talk about growth, we talk about inequality; but never about the two together. And so distribution of national accounts, that’s an attempt at bridging the gap between macroeconomics on the one hand and inequality studies on the other hand.”
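The core idea can be sketched with a toy calculation—all figures invented, and far simpler than the actual Distributional National Accounts methodology: report the aggregate growth rate alongside each group’s growth rate, so growth and its distribution appear in a single measure.

```python
# Toy illustration of reporting growth together with its distribution.
# All incomes are hypothetical; the real Distributional National
# Accounts are built from tax and survey data.

def growth(before, after):
    """Percent change between two periods."""
    return (after - before) / before * 100

# Hypothetical average incomes for three groups, two years apart:
incomes_y1 = {"bottom 50%": 20_000, "middle 40%": 60_000, "top 10%": 300_000}
incomes_y2 = {"bottom 50%": 20_200, "middle 40%": 61_200, "top 10%": 321_000}

total_growth = growth(sum(incomes_y1.values()), sum(incomes_y2.values()))
by_group = {g: round(growth(incomes_y1[g], incomes_y2[g]), 1) for g in incomes_y1}
print(round(total_growth, 1), by_group)
```

In this made-up example, a healthy-looking aggregate growth rate of roughly 5.9 percent conceals growth of only 1 percent for the bottom half—exactly the gap between “growth” and “who grew” that a distributional account is designed to expose.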

Congress seems to have gotten the message. The 2019 Consolidated Appropriations Act, which opened the government after a record 35-day shutdown, included language “encourag[ing]” the Bureau of Economic Analysis to “begin reporting on income growth indicators” by decile at least annually and to incorporate that work into its GDP releases if possible. In this way, step by step, new economic paradigms become new policymaking tools.


Heather Boushey is executive director and chief economist at the Washington Center for Equitable Growth. Her research focuses on economic inequality and public policy, specifically employment, social policy, and family economic well-being.
