How Inflation Became a Fact of Life
We tend to associate the inflation problem with the 1970s. But it was years earlier, in the era of Sputnik and Elvis, that the world first woke up to the reality of chronically rising prices. We’re still coping with that episode’s wrongheaded “lessons” today.
Inflation might seem to be one of those afflictions, like death or taxes, that has always been with us, but until the middle of the twentieth century, the kind of inflation we’ve experienced all our lives — a steady, long-term upward slope in the price level — had never been seen.
Before World War II, the tendency of prices was to rise during economic booms but then fall in the subsequent busts. They would rise especially fast during wars, but then fall almost as quickly during postwar readjustments. The net result, in the main capitalist economies, was an inflation rate that fluctuated erratically from year to year, but that averaged out, in the long run, at around zero.
The contrast with our own era makes for some striking numbers. In the last, say, six months, there’s been more inflation in the UK than occurred, on net, over the whole sixty-three-year reign of Queen Victoria. (When she died in 1901, the level of British consumer prices was 7% lower than when she acceded to the throne in 1837.)
Or consider the era we now call the Great Moderation (1985–2007). Though it owes its name to its comparatively “low” inflation rates, it nevertheless saw more inflation over its 22 years (an 87% increase in the price level) than occurred over the entire 150 years preceding World War I (a 76% increase).
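To see just how lopsided that comparison is, the cumulative increases cited above can be converted into compound average annual inflation rates. A minimal sketch, using only the percentages and period lengths given in the text:

```python
def annualized_rate(cumulative_increase: float, years: int) -> float:
    """Convert a cumulative price-level increase (e.g. 0.87 for 87%)
    over a span of years into a compound average annual rate."""
    return (1 + cumulative_increase) ** (1 / years) - 1

# Great Moderation: an 87% price-level increase over 22 years (1985-2007)
moderation = annualized_rate(0.87, 22)

# Pre-WWI era: a 76% increase over the preceding 150 years
pre_wwi = annualized_rate(0.76, 150)

print(f"Great Moderation: {moderation:.2%} per year")   # roughly 2.9%
print(f"150 years pre-WWI: {pre_wwi:.2%} per year")     # roughly 0.4%
```

The “low-inflation” Great Moderation thus compounded prices upward at several times the pace of the entire pre-WWI gold-standard era.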
A notable by-product of that older, zero-inflation regime was that it imbued the public with a sense of what Milton Friedman, in his 1976 Nobel Prize address, called a “normal price level.” In the eighteenth and nineteenth centuries, this feeling was, in Friedman’s words, “deeply imbedded in the financial and other institutions of [the US and UK] and in the habits and attitudes of their citizens.”
A case in point: shortly after the armistice ending World War I, the renowned economist Irving Fisher warned that a sharp economic slowdown was underway in the United States because, he wrote, with the price level having risen 60% over the four years of war, “most people expect prices to drop”:
People quote the disparity between present prices and those prevailing “before the war,” and decide they will not buy much until present prices get down to “normal.” This general conviction that prices are sure to drop is putting a brake upon the entire machinery of production and distribution.
Today this whole concept of a normal price level belongs to a vanished mental universe. It’s been replaced in the public’s mind by a sense of what a normal inflation rate looks like — a 2%- or 3%-per-year increase in the price level; not too much higher; certainly not less than zero. After a burst of inflation, the expectation is no longer that prices will return to their old levels, but that the rate of change of prices will return to its old level.
The shift from the old inflation regime to the new didn’t take place gradually. The old regime collapsed, as old regimes tend to do, with surprising suddenness: essentially over the course of the second quarter of the twentieth century.
A simple way of gauging the change is to compare the number of inflationary versus deflationary years within a given period. According to the Bank of England, which has compiled annual inflation estimates going all the way back to the year 1207, over the seven centuries prior to World War I, that balance was almost even: there were 295 years of inflation and 258 of deflation, with the remaining 147 years seeing zero change in the price level.
It was even more balanced if we exclude the sixteenth century, when Spain was flooding Europe with New World precious metals, and the years of the “French Wars” (1793–1815) when Britain was off the gold standard; by that reckoning, there were 230 years of inflation and 218 years of deflation (plus 129 years of zero change in prices). And the pattern was consistent: even the century just preceding World War I saw 50 years of inflation versus 48 years of deflation.
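The year counts quoted above can be cross-checked for internal consistency. A quick sketch, using only the figures as stated in the text (the exclusions are the sixteenth century, 100 years, and the “French Wars” of 1793–1815, 23 years):

```python
# Full period: roughly seven centuries from 1207 to World War I.
full = {"inflation": 295, "deflation": 258, "no_change": 147}
assert sum(full.values()) == 700  # seven centuries, as the text says

# Adjusted tally, excluding 100 + 23 = 123 years.
adjusted = {"inflation": 230, "deflation": 218, "no_change": 129}
assert sum(adjusted.values()) == 700 - 123

print("year counts internally consistent")
```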
Then, in 1931, faced with the sharpest economic contraction in its history, Britain permanently abandoned the gold standard. Two years later, with the worst of the Slump receding and recovery incipient, the country had its last deflationary year. Since 1934 Britain’s annual inflation rate has been positive not half, or even two-thirds of the time, but 89 times out of 89. Similar figures could be adduced for most of the countries in the industrialized world.
But this shift, while easy to spot in retrospect, took decades for contemporaries to perceive in real time. Because centuries of experience had taught that inflation was mainly a wartime exception to a long-run rule of stable prices, a general recognition that something had permanently changed in the behavior of prices would have to await the return of something resembling peacetime economic conditions. With the late 1930s bringing a wave of rearmament, followed by a world war, followed by the economic chaos of postwar adjustment and Cold War remobilization, it was not until the Korean War armistice in 1953 that earnest expectations of a return to economic “normality” really took hold.
And it did seem, at first, as if the old nineteenth-century pattern was reasserting itself: in the United States, 1954 and 1955 saw inflation rates, as measured by the consumer price index, that were close to zero.
But in 1956 prices in the US rose by 1.8% and then 3.4% in 1957. In a sign of how deeply the old assumption of a “normal price level” remained imprinted in the public mind, even these quite modest inflation rates inspired intense disquiet among economists, technocrats, and politicians. In 1957 an alarmed Congress commissioned a broad investigation into the inflation situation, soliciting papers and testimony from dozens of leading economists. In the UK, a parallel inflation inquiry — the landmark Radcliffe Committee on the working of the monetary system — was established at about the same time.
An unnerved President Dwight Eisenhower conscripted the chairman of the Federal Reserve and the secretary of the Treasury to serve on a special inflation task force that was to meet with him personally, and he warned in comments to the press that, however much he despised price controls — they are “not the America we know” — he could be forced to impose them if inflation continued to climb.
What was disturbing about the “New Inflation,” as it began to be called, was not the absolute magnitude of the inflation seen in the late 1950s, which was trivial compared to the levels of just a decade earlier, when the country had been in the throes of postwar readjustment. (The inflation rate in 1947 had exceeded 14%.)
What caused alarm was that this new inflation was persisting amid economic conditions that, in the past, could always be relied upon to induce falling prices. Since the end of the Korean War, the government had been running a budget surplus. The Federal Reserve was steadily raising interest rates. Real gross domestic product actually declined in four of the eight quarters between the start of 1956 and the end of 1957. Yet through it all, the price level continued to rise.
This was the start of a protracted debate over the core issues of macroeconomics that sprawled across much of the remainder of the twentieth century and — in the conventional telling at least — culminated in the discrediting of “Keynesian economics” in the 1970s and ’80s.
While that “discrediting” can be debated, what’s indisputable is that the whole episode left a profound imprint on the mind of the economics profession and on the shibboleths of officialdom. For it left behind a litany of putative “lessons” about inflation that multiple generations of economists, journalists, and policymakers would internalize, codify, and pass down to posterity. Decades later, those “lessons” formed the intellectual framework that underpinned the response when inflation finally returned in the COVID era.