Book Reviews

The Dismal Art

Economic forecasting has become much more sophisticated in the decades since its invention. So why are we still so bad at it?

By James Surowiecki

Fortune Tellers: The Story of America’s First Economic Forecasters By Walter A. Friedman • Princeton University Press • 2013 • 268 pages • $29.95

We live in an age that’s drowning in economic forecasts. Banks, investment firms, government agencies: On a near-daily basis, these institutions are making public predictions about everything from the unemployment rate to GDP growth to where stock prices are headed this year. Big companies, meanwhile, employ sizable planning departments that are supposed to help them peer into the future. And the advent of what’s often called Big Data is only adding to the forecast boom, with the field of “predictive analytics” promising that it can reveal what we’ll click on and what we’ll buy.

At the dawn of the twentieth century, by contrast, none of this was true. While Wall Street has always been home to tipsters and shills, forecasting was at best a nascent art, and even the notion that you could systematically analyze the U.S. economy as a whole would have seemed strange to many. Economics, meanwhile, had only recently established a foothold in the academy (the American Economic Association, for instance, was founded in 1885), and was dominated by Progressive economists whose focus was more on reforming capitalism via smart regulation than on macroeconomic questions.

Walter Friedman’s Fortune Tellers is the story of how, over the course of two decades, this all changed. In a series of short biographical narratives of the first men to take up forecasting as a profession, Friedman shows how economic predictions became an integral part of the way businessmen and government officials made decisions, and how the foundations were laid for the kind of sophisticated economic modeling that we now rely on. Friedman, a historian at Harvard Business School, also shows how the advent of forecasting was coupled with (and fed on) a revolution in the way information about the economy was gathered and disseminated. Relative to today, of course, the forecasters Friedman writes about were operating in the dark, burdened with fragmentary data and unreliable numbers. But the work they did, flawed as it was, would eventually make it possible for decision-makers to get a much better picture of how the economy as a whole was doing. And even as it’s easy to see how the forecasts of today are much more rigorous and complex than those of Friedman’s pioneers, that only makes one question seem all the more salient: Why, if forecasting has come so far, did so many people fail to predict the crash of 2008 and the disastrous downturn that followed?

It’s fitting that Friedman’s book starts with a financial crisis, namely the Panic of 1907, which he argues in some sense gave birth to modern forecasting. That panic began with a failed attempt by two financiers, the brothers Otto and Augustus Heinze, to corner the copper market. The collapse of their scheme drove institutions that had lent money to the Heinzes into bankruptcy and created a climate of fear that led to massive runs on New York banks and a series of bank failures, even as the Dow fell by almost half. More important, the crisis on Wall Street spilled over into the real economy, with industrial production taking a major hit and economic growth falling sharply. The crisis was shocking both because major panics were thought to be a thing of the past, and because the economic consequences of the crash seemed out of all proportion to the causes. And while there had obviously always been people on Wall Street trying to predict the future, the panic fueled people’s appetite for any information that could insulate them from market turmoil.

That appetite was also growing because the capital markets were booming—stocks, for instance, went from a niche investment at the turn of the century to, by the 1920s, being a crucial part of the way companies raised money (and investors made money). And the upheaval in the real economy—which was benefiting from an explosion in innovation that brought Americans widespread electrification, the automobile, the telephone, the phonograph, the movie camera, and the airplane—gave people “an insatiable demand for information that could shed light on future economic conditions.”

The first person to really meet that demand, Friedman argues, was Roger Babson, who began putting out regular forecasts about the U.S. economy after 1907. Babson was the most obvious huckster of Friedman’s subjects. He was given to faddish beliefs. He was a serial entrepreneur who came up with a host of odd inventions, and he was peculiarly obsessed with Isaac Newton. And his view of the business cycle, which he saw as oscillating regularly between boom and bust, was both simplistic and informed by a highly moralistic notion of excess and punishment. But Babson did two important things, Friedman argues. First, he solidified the notion that there was something called the “U.S. economy” whose different parts were connected to one another in systematic ways. Second, he popularized the idea that economies were subject to business cycles, about which coherent prognostications could be made. These seem, today, like obvious insights. But at the time, Friedman argues, they were quite new. As he writes, “The economic booms and busts of the previous century were typically ascribed not to any sort of regular business cycle but to fate, the weather, political schemes, divine Providence, or unexpected shocks like new tariffs or earthquakes.”

Babson’s analysis of those cycles was dubious at best (though his emphasis on the way emotions affect economic activity anticipated, in a crude way, both Keynes and today’s behavioral economists). But Babson’s forecasts, which were built on the idea that historical patterns repeated themselves, reflected an enormous amount of data-gathering work. He collected and published statistics about industrial production, immigration, imports and exports, commodity prices, and so on, and eventually began constructing time-series charts that were meant to forecast the performance of the economy as a whole. This was both a conceptual advance and a practical one: Much of this information had never been collected in one place before.

The same can be said, only more so, of the data offered to subscribers by John Moody, founder of Moody’s Investors Service and Moody’s Analyses Publishing, a ratings agency. If Babson was ultimately interested in the macro-economy, Moody’s focus was much more on the micro-economy, because he believed that getting a real picture of what was happening required you to look in detail at what the country’s big companies were doing. The challenge was that companies at the time typically didn’t disclose all that much information about their performance, and certainly didn’t do so in any kind of systematic fashion. Most companies didn’t even issue annual reports, and investors were perennially left at sea, wondering just what was happening to their money. Moody played a key role in changing this state of affairs. He began by publishing a regular manual that contained detailed financial information about almost 2,000 industrial companies. Then he moved from statistics to prediction, starting a ratings agency in 1909 that advised investors about the creditworthiness of bond-issuing companies. One of Moody’s key ideas was that capitalism is all about future value, so that the value of an asset in the present really consists of how much income it can generate in the future (discounted by the relevant interest rate). The need to forecast is, in that sense, built into the system.
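
Moody’s idea can be put in the standard present-value terms of any finance textbook (the formula below is that textbook shorthand, not a reconstruction of Moody’s own calculations): an asset expected to pay cash flows CF_t over T periods, discounted at an interest rate r, is worth

    PV = \sum_{t=1}^{T} \frac{CF_t}{(1 + r)^t}

Everything on the right-hand side except r has to be forecast, which is precisely Moody’s point: you cannot say what a bond or a company is worth today without making predictions about its future.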

The problem, of course, is figuring out just what variables you have to take into account in order to make an accurate prediction. The crucial insight of the economist Irving Fisher, a contemporary of Babson and Moody, was that one of the most important variables was the supply of money. Fisher believed that there was a tight relation between changes in the money supply and what happened in the real economy, and while he overestimated and oversimplified the relationship between the two factors, you can see in his work the roots of what we now call monetarism (including the notion that having the Federal Reserve print money is a smart response to a recession). Fisher’s reputation as a forecaster was famously destroyed in 1929, when he said on the eve of the Great Crash that stocks had reached a “permanently high plateau.” (Babson, by contrast, called the crash in advance.) But of all the people Friedman writes about, Fisher is the most interesting thinker—his theories about the role of money and, later, his ideas about the relationship between debt and financial crises continue to seem relevant even today.
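
Fisher’s “tight relation” between money and economic activity is usually summarized by his equation of exchange (the equation itself is standard Fisher; how literally he applied it in his forecasting is the subject of Friedman’s account, so take this only as shorthand):

    MV = PT

where M is the money supply, V the velocity with which it circulates, P the price level, and T the volume of transactions. If V and T are assumed to be roughly stable, changes in M translate more or less directly into changes in prices and nominal activity, which is the link that, as noted above, Fisher overestimated.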

Fisher is also important because he learned from failure. One of the great perils of being a forecaster, particularly one who enjoys early success, is that it becomes difficult to recognize one’s blind spots and easy to stay fixed in one way of seeing the world. Philip Tetlock, a professor of psychology and management at Penn who conducted a 20-year study asking almost 300 experts to forecast political events, has shown that while experts in the political realm are not especially good at forecasting the future, those who did best were, in the terminology he borrowed from Isaiah Berlin, foxes as opposed to hedgehogs—forecasters who knew lots of little things rather than one big thing. Yet forecasters are more likely to be hedgehogs, if only because it’s easier to get famous when you’re preaching a simple gospel. And hedgehogs are not good, in general, at adapting to changed conditions—think of those bearish commentators who correctly predicted the bursting of the housing bubble but then failed to see that the stock market was going to make a healthy recovery. Fisher, by contrast, reacted to his failure to see the 1929 crash coming by looking at what he had missed, and in doing so came to focus on the importance of debt, and the way an overhang of debt can hold back an economy as it tries to get out of recession, an idea that has gained new popularity as economists try to explain the relative weakness of our current recovery. (While Fisher was discredited as a forecaster, his reputation as an economist was eventually revived.)

What Fisher didn’t give up on, though, was the notion that a couple of key variables could really explain the business cycle. But as Friedman explains, that kind of formulaic approach was gradually eclipsed by one that relied on a more complicated blend of empirical data, historical analysis, and mathematical modeling, an approach pioneered by the economist Wesley Mitchell, founder of the National Bureau of Economic Research. (C.J. Bullock and Warren Persons, who founded an institution called the Harvard Economic Service and whom Friedman also discusses, relied on similar techniques.) Mitchell’s attitude, as Friedman puts it, was more “circumspect” than that of previous forecasters, more cognizant of the limitations of forecasting, and more aware that while history may rhyme, it does not repeat itself. Mitchell’s work was, in many ways, the forerunner of today’s best forecasts. And it was also important because it marked the entrance of the government into the forecasting business, since in the 1920s Mitchell worked closely with Herbert Hoover, who was then secretary of commerce. This may seem surprising, given Hoover’s reputation, but he was a fierce advocate for the idea that the government should collect and disseminate as much economic information as possible, believing that if businesspeople had access to accurate forecasts about the economy, they would make smarter decisions that would help mitigate the excesses of the boom-bust cycle.

Of course, things didn’t work out exactly as Hoover planned, since the boom of the 1920s was followed by the Great Depression. But the broader project of dramatically improving the quantity and quality of economic information not only survived but thrived, as did forecasting. By 1938, Fortune wrote, “Business can no more do without forecasting than it can do without capital,” and in the postwar era the invention of the computer and the spread of econometric models, coupled with the explosion in the amount of economic data available, made people believe that it was possible to truly map the future. Today, in a typical week, investors and businessmen can count on hearing about durable-goods orders, new building permits, consumer confidence, home resales, and the third revision of the previous quarter’s GDP number. And they’ll also be able to read a flood of commentary from the financial media, predictions from bank analysts, and the latest rumblings from the Fed. The forecasters Friedman writes about would have been slack-jawed at just how much we now know about the present state of the economy.

So why are we not better at foreseeing the future? One answer is that we actually are better. Companies these days are less likely to get stuck with huge inventories of unsold goods, or to get caught short when demand outstrips supply. The volatility of the business cycle, meanwhile, diminished sharply beginning in the early 1980s, a relative calm that lasted until the crash of 2008. There’s plenty of disagreement about why this happened, but one plausible factor was that policy-makers and businesspeople were doing a better job of forecasting. And it’s also true that policy-makers have gotten better at responding once crises do happen. The response of the Fed to the recent financial crisis, for instance, was not perfect, but it was much better than the response of the Fed to past crises, and it was also instrumental in shortening the recession and boosting the recovery. Similarly, while the 2009 stimulus plan should have been much bigger, it was, by historical standards, a substantial response, and it too helped get the economy growing again.

Even so, it’s impossible to look at the forecasting track record of Wall Street and Washington over the last 15 years and not be dismayed. The Federal Reserve failed to see that a massive housing bubble was inflating, and did nothing to stop it, even as the banking sector was, in effect, betting hundreds of billions of dollars that the bubble would not burst. And even when things started to fall apart, people did not recognize how bad things were going to get—Fed Chairman Ben Bernanke testified to Congress in 2007 that the problems in housing would be largely confined to the subprime sector, while J.P. Morgan, the day before Lehman Brothers went under, issued a forecast saying that the U.S. economy would grow briskly in 2009. One could add to this list a host of other predictions gone wrong, ranging from the 1997-98 Asian financial crisis and the 1998 Russian default to the dot-com stock-market bubble to the perennial forecasts of skyrocketing inflation.

So what explains these failures? Ideology certainly played a role. In the case of both the stock-market and housing bubbles, there were analysts (like the economist Robert Shiller) who warned well in advance that markets had become dangerously overinflated. But the faith that markets were inherently self-correcting led policy-makers—including, above all, former Federal Reserve chairman Alan Greenspan—to discount their warnings. Yogi Berra’s pearl of wisdom—“It’s tough to make predictions, especially about the future”—may be true, but it’s especially tough to make accurate predictions when you aren’t willing to see that sometimes markets can get massively out of whack.

The failure of forecasting is also due to the limits of learning from history. The models forecasters use are all built, to one degree or another, on the notion that historical patterns recur, and that the past can be a guide to the future. The problem is that some of the most economically consequential events are precisely those that haven’t happened before. Think of the oil crisis of the 1970s, or the fall of the Soviet Union, or, most important, China’s decision to embrace (in its way) capitalism and open itself to the West. Or think of the housing bubble. Many of the forecasting models that the banks relied on assumed that housing prices could never fall, on a national basis, as steeply as they did, because they had never fallen so steeply before. But of course they had also never risen so steeply before, which made the models effectively useless.
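
A toy illustration of the trap (the numbers below are invented for the sake of the example, not drawn from any actual bank model): if a model’s “worst case” is simply the worst outcome in its historical sample, then a decline with no precedent is, by construction, something the model cannot even represent.

    # Hypothetical sketch: a "worst case" taken only from history can never be
    # worse than history. All figures are made up for illustration.
    historical_changes = [0.04, 0.06, -0.01, 0.03, 0.05, -0.02, 0.07]  # assumed annual price changes
    worst_on_record = min(historical_changes)   # -2 percent: the model's implicit floor
    crash_scenario = -0.18                      # a fall far outside anything in the sample
    print(f"Worst decline the model can imagine: {worst_on_record:.0%}")
    print(f"Crash scenario inside the model's range? {crash_scenario >= worst_on_record}")

Run as written, the second line prints False: the scenario that actually mattered lies outside the only world the model was built to describe.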

The uncertainty of the future, the limits of history—these have been perennial problems for forecasters. But there are also factors that are unique to our economy today that have actually made the task of forecasting harder than it was in, say, Wesley Mitchell’s day. The first is the outsized economic role, and increased complexity, of the financial sector. The enormous amounts of leverage that banks and other financial institutions now use have amplified the consequences of their decisions, as has the increased importance of derivatives. But the financial sector is, relatively speaking, opaque—it’s difficult for regulators and next-to-impossible for investors to get a real handle on the kinds of risks financial institutions are taking and the kinds of bets they’re making. If you want to predict how a potential crisis will play out, you have to be able to predict how it will affect the country’s big banks. But that is very hard to do from the outside (although in the case of the housing bubble, it’s hard to see how anyone thought it was going to end well).

The second problem that forecasters face today is more subtle, but perhaps no less important: that there may actually be too much information out there. This would, of course, sound absurd to Roger Babson. But the reality is that investors and businesspeople are now constantly assailed by a high-volume clamor of market information and economic data. More important, they’re bombarded by news about what other investors and businesspeople think, and the predictions they’re making. Once upon a time, executives could make decisions based primarily on their own, private information—what they sensed about local business conditions, what they were seeing from their customers. Investors could pay attention chiefly to fundamental data. These days, it’s harder, in psychological terms, to do that, because you’re constantly exposed to the opinions of others.

Now, Herbert Hoover would have said this deluge of data was a good thing. He believed that if businesspeople had access to more information about how the economy was doing, they would act in what economists call a countercyclical fashion. In other words, if they understood that the economy was on the verge of overheating, they’d be more cautious, and vice versa. But it seems just as likely that there are times when the flood of information leads businesspeople and investors to jump on the bandwagon instead—acting recklessly in boom times and too cautiously when things are slow, which ends up amplifying the trend, rather than countering it. Markets and economies work best when people are able to think for themselves. But when everyone is shouting at the same time, it can be hard to do that.

The real issue here is one that the economist Oskar Morgenstern identified back in the late 1920s—namely, that economic predictions actually end up shaping the very outcomes they’re trying to predict. And the more predictions you have, the more complex that Möbius strip becomes. In that sense, for all the challenges they faced, men like Babson and Fisher had it easy, since forecasts were few and far between. The real irony of our forecasting boom is that as fortune-tellers proliferate, fortunes become harder to read.

James Surowiecki is a journalist and the author of The Wisdom of Crowds.
