NGDP futures via blockchain: Market monetarism meets cryptocurrency (And: how to set up a prediction market on Augur)
July 22, 2018

Scott Sumner has famously proposed that the Fed stabilize monetary policy by pegging nominal GDP futures contracts in such a way as to ensure that expectations for nominal income growth remain steady. For more details, see here; the merits (and demerits) of this proposal are not the subject of this post (but will be the subject of a future post).

One major problem: markets for NGDP futures don't exist in the wild!

To be fair, Scott and the Mercatus Center have helped to fund a competition on the Hypermind website, where there is a fixed pot of prize money for correctly guessing the year-over-year NGDP growth rate. But this is not a two-sided market, and the pot of prize money is fixed – as the number of bettors increases, the expected winnings decrease.

Well, good news: an NGDP futures market is now live on the Augur blockchain. The specific contract is simply a binary option: will the growth rate in NGDP from 2018Q1 to 2019Q1 be greater than 4.5%?

The current price/probability implied by this contract can be viewed on the Augur aggregator website predictions.global: just search "NGDP", or the permalink is here.

 

Background on Augur

For those unfamiliar, Augur is a new cryptocurrency project, built on the Ethereum platform, that allows holders of its currency, "REP", to create prediction markets. It launched just last Thursday. To speculate on these markets, an investor must use the Ethereum cryptocurrency, ether (ETH).

The platform is decentralized: for everyone who wants to bet that NGDP growth will exceed 4.5%, there must be a counterparty who takes the other side of the bet. That is, the creators of Augur are not acting as market makers for the contract. The price of the contract will move to equilibrate supply and demand in a decentralized market: if the price is 0.7 ETH, that indicates that the market assigns a 70% (risk-neutral) probability that NGDP growth will exceed 4.5%.
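For concreteness, here is a minimal Python sketch of the price-to-probability conversion in the example above. It simply assumes, as in that example, that a winning share pays out 1 ETH and a losing share pays nothing, so the price is the (risk-neutral) implied probability:

    def implied_probability(share_price_eth, payout_if_yes_eth=1.0):
        # Price of a "yes" share divided by what it pays if it wins.
        return share_price_eth / payout_if_yes_eth

    print(implied_probability(0.7))   # 0.7, i.e. a 70% implied probability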

Of course, one can create prediction markets for literally anything. There were markets for who would win the World Cup; today, the most popular markets are cryptocurrency futures markets where traders can bet on, e.g., whether the price of ether will close the year above $500. A more interesting market asks, "Which stablecoin will have the largest market cap by the end of 2018?", though no one has bet on this contract yet. Another interesting contract asks whether a new cryptocurrency project will launch on schedule.

 

The NGDP prediction market

Back to the NGDP prediction market. As mentioned, there is currently only one contract on Augur related to NGDP, and it is a simple binary option on whether NGDP growth will be greater or less than 4.5%. A buy order thus indicates that you believe yes, NGDP growth will exceed 4.5%, while a sell order indicates that you think it will be less than that threshold.

Hopefully richer contracts will emerge in the future, so that speculators can bet on more specific outcomes: different time horizons, particular growth rates rather than a simple binary threshold, etc. This would provide richer information for policymakers and for academic study.

 

The importance of blockchains and decentralization

To wax unabashedly lyrical for a moment, besides the practical macroeconomic application, what's great about this is that it demonstrates how technology is liberation.

The major obstacle to creating an NGDP futures market in the past has been that prediction markets have been heavily restricted by government regulators. Intrade, a political prediction market, famously had to shut down due to a regulatory crackdown.

Augur, however, is a decentralized market, made possible only in the last decade by the development of blockchain technology. There is no central point of failure, as there was with Intrade, where the CFTC could apply regulatory pressure to force a shutdown.

(Relatedly: despite the protection provided by decentralization, the regulatory environment around cryptocurrencies is notoriously… murky. And the state of regulation is possibly even murkier when it comes to prediction markets in particular. Calibrate your risk aversion appropriately before getting involved with this stuff.)

 

Viewing current probabilities

How can you check out the current market price/probability? One way is to download the Augur app and view the state of the market in the app, including the full order book (pictured below). The faster method, mentioned above, is to check out predictions.global, a great website that is layered on top of Augur and posts the current probabilities on all Augur contracts. Just search "NGDP" and you'll see the contract (permalink here).

The Augur NGDP prediction market order book

 

Conclusion

As of this writing, the market-implied (risk-neutral) probability that NGDP growth exceeds 4.5% is 90%, but only 0.05 ETH ($23.41 at current exchange rates) has been bet on the market. The market creation fee on this market is 0%, so bettors keep essentially all of their winnings, unlike in many other Augur markets. So go and make some bets!

That's the news. You can probably stop reading here. In the rest of this post, for those interested, I want to provide some resources and give an extremely limited sketch of how – hypothetically ☺ – one would go about setting up a prediction market on Augur. Given the number of people who have asked, I'll highlight that there is no coding required. Augur has an app that makes things pretty easy!

 

 

A step-by-step guide to creating a new market on Augur

2019 update: Due to changes in the infrastructure of these various technologies, this is now deprecated.







June 5, 2018

 
Confidence level (?): High

I.
The efficient market hypothesis says that you can't pick out which stocks are undervalued versus which are overvalued. Likewise, I claim that you can't pick out which restaurants are underpriced versus which restaurants are overpriced.

Think you've found a great company, so that their stock will outperform on a risk-adjusted basis? Nope, someone else has already incorporated that information into the stock price and pushed the price up.

Think you've found a great restaurant which offers meals at a decent price? Nope, they've already raised their prices to the point where the extra cost just equals the extra utility you get from their extra delicious cuisine.

 

II.
A. But first, we need to emphasize that this is on a risk-adjusted basis. A portfolio of stocks might have higher expected returns – but only if it is riskier.

This applies to restaurants as well as to stocks – trying a new exotic cuisine could be eye-opening and awesome, or awful. Admittedly, this is quantitatively much less important for restaurants.

(This is the essence of modern asset pricing theory.)

B. Similarly to stocks, fund managers will not consistently deliver alpha to their investors: if any manager could consistently deliver alpha, that manager would simply raise their fees to capture it for themselves. (This is the essence of the rational model of active management in Berk and Green 2004.)

 

III.
Second: cheap restaurants and cheap managers might exist, but they can carry very high search costs.

Truly great cheap restaurants might exist, but you have to pay a lot in time, money, and energy spent searching and reading reviews to pinpoint them. These search costs, this time wasted digging around on Yelp, are real costs: they take time and money that you could otherwise have spent on better food or anything else which gives you utility.

This is likewise true of asset managers. Cheap asset managers that provide alpha might truly exist, but you would have to spend so much time and money searching for and evaluating candidate managers that these search costs would eat up that alpha. Otherwise, other investors would already have found the manager and grabbed that alpha.

(This is the essence of Garleanu and Pedersen's "Efficiently Inefficient" model.)

 

IV.
Third and finally: the utility of eating out at a restaurant is not just a function of tastiness and search costs. It incorporates every stream of services provided by the restaurant: convenience of location most of all, but also quality of service, ambience, and the social aspect of the other patrons. If a given restaurant scores higher on these dimensions – e.g., a restaurant full of beautiful fashion models – then you should expect the quality of the food to be lower.

Similarly, to a lesser extent, with assets or with asset managers. Assets provide more than just a stream of returns: they provide the service of liquidity, or a "convenience yield". We can think of people enjoying the comfort provided by liquid assets, much like they enjoy the ambience of a nice restaurant. And just as a restaurant full of fashion models will – all else equal – have lower quality food, an asset or manager that offers higher liquidity should be expected to provide a lower pecuniary return.

(The idea of a convenience yield has been discussed by Cochrane, Koning, and others. This is also the entirety of the value behind cryptocurrencies.)

[Personal aside: This area is a core component of my own research agenda, as I currently envision it.]

 

V.
Conclusion: in equilibrium, assets or asset managers should not be undervalued or overvalued, on a risk-adjusted, fee-adjusted, search cost-adjusted, liquidity-adjusted basis. Likewise, in equilibrium, restaurants should not be underpriced or overpriced, once one takes into account their riskiness; the time spent searching for them on Yelp and reading reviews; and the ambience and other "convenience yield" services provided by the restaurant.








Aug 15, 2017

 
Confidence level (?): Low

Most people are probably somewhat overconfident. Most people – myself surely included – typically overestimate their own talents, and they (we) are overly confident in the precision of their estimates, underestimating uncertainty.

This bias has viscerally real, important consequences. Governments are overconfident that they can win wars quickly and easily; overconfident CEOs have a higher tendency to undertake mergers and issue more debt than their peers.

I claim, however, that this bias does not matter for asset pricing in particular. That is, stock prices (and other asset prices) are not affected by overconfident investors.

In fact, I claim that any kind of behavioral bias cannot in and of itself affect stock prices.

The idea that behavioral biases, on their own, can affect asset prices is one of the most widely held misconceptions about financial markets, if not the most widely held. Just because most people (myself included!) are blinded by cognitive biases – overconfidence, status quo bias, confirmation bias, etc. – does not mean that stock prices are at all affected or distorted.

If this seems crazy, let me try putting it another way: just because behavioral biases exist does not mean that you can get rich by playing the stock market and exploiting the existence of these biases.

The trick is that it only takes the existence of one rational, unconstrained arbitrageur to keep prices from deviating from their rational level.

To see this, consider two extremes.

All it takes is one
First, suppose everyone in the world is perfectly rational and unbiased, except for one poor fellow, Joe Smith. Joe is horribly overconfident, and thinks he's smarter than everyone else. He invests all of his money in Apple stock, insisting that everyone else is undervaluing the company, and pushing the Apple share price up.

Of course, since every other investor is perfectly rational and informed, they will notice this and immediately race to short Apple, betting against it until the price of Apple stock is pushed back to its rational level.

Now, consider the inverse situation. Everyone in the world is systematically biased and cognitively limited, except for one rational, informed investor, Jane Smith. Perhaps more realistically, instead of Jane Smith, the one rational agent is some secretive hedge fund.

Now, billions of irrational investors are pushing prices away from their rational value. However, as long as Rational Hedge Fund LLC has access to enough capital, this one rational agent can always buy an undervalued stock until the price gets pushed up to its rational level, or short an overvalued stock until the price gets pushed down to the rational level. Rational Hedge Fund LLC profits, and prices are kept at their rational levels.

Even more realistically, instead of a single hypervigilant rational hedge fund keeping all stocks at their respective rational levels, there could be many widely dispersed investors each with specialized knowledge in one stock or one industry, collectively working to keep prices in line.

The marginal investor
The real world, of course, is somewhere between these two extremes. Most people have a host of cognitive biases, which leads to "noise traders" randomly buying and selling stocks. However, there is also a small universe of highly active, often lightning fast rational investors who quickly arbitrage away any price distortions for profit.

It is these marginal investors who determine the price of stocks, not the biased investors. This is why I say that "cognitive biases don't matter for stock prices" – the existence of any unconstrained rational investors ensures that biases will not flow through to asset pricing.

The important caveat: the "limits to arbitrage"
There is an extremely important caveat to this story.

Note that I quietly slipped in the requirement that Rational Hedge Fund LLC must have "access to enough capital." If the rational investors cannot raise enough money to bet against the noisy irrational traders, then prices cannot be pushed to their rational equilibrium level.

(The importance of access to capital is more than just the ability to apply price pressure. It's also important for the marginal investor to be able to withstand the riskiness of arbitrage.)

This assumption of frictionless access to leverage clearly does not hold perfectly in the real world: lending markets are troubled by principal-agent problems, moral hazard, and other imperfections.

This (very important) friction is known as the "limits to arbitrage."

Summing up
It is irrationality in conjunction with limits to arbitrage that allows market prices to diverge from their rational levels. It is important to acknowledge that cognitive biases alone are not a sufficient condition for market inefficiency. Irrationality and limits to arbitrage are both necessary.

More pithily: Peanut butter alone is not enough to make a PB&J sandwich, and behavioral biases alone are not enough to make the stock market inefficient.








July 18, 2017

 
Confidence level (?): Very high

The Efficient Market Hypothesis (EMH) was famously defined by Fama (1991) as "the simple statement that security prices fully reflect all available information."

That is, you can't open the Wall Street Journal, read a news article from this morning about Google's great earnings numbers that were just released, and make money by buying Google stock. The positive information contained in the earnings numbers would already have been incorporated into Google's share price.

To put it another way, the EMH simply says that there is no such thing as a free lunch for investors.

Does this imply that stock prices (or other asset prices) are unpredictable? No! The EMH unequivocally does not mean that prices or returns are unpredictable.

This fallacy arises all the time. Some author claims to have found a way to predict returns and so declares, "The EMH is dead." Return predictability does not invalidate the EMH. This is important – the empirical evidence shows that returns are indeed eminently predictable.

The key lies with risk premia.

I. What are risk premia?
The price of a stock (or any other asset) can be decomposed into two parts:

  1. The (discounted) expected value of the stock's future payoffs
  2. A "risk premium"

The first part is the standard discounted present-value that you might read about in an accounting textbook. The second is the compensation required by the stock investor in order to bear the risk that the stock might drop in value, known as a risk premium.

To understand risk premia, suppose that I offer you the following deal. You can pay me $x, and then get to flip a coin: heads I give you $100, tails you get nothing. How much would you be willing to pay to have this opportunity?

Although the expected value of this bet is $50, you're probably only going to be willing to pay something like $45 for the chance to flip the coin, if that. The five-dollar difference is the compensation you demand in order to bear the risk that you could lose all your money – the risk premium.
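If it helps, here is a small Python sketch of this logic. The square-root utility function and the $1,000 starting wealth are arbitrary assumptions for illustration – any concave utility function gives a willingness to pay below the $50 expected value:

    import math

    def utility(wealth):
        return math.sqrt(wealth)   # an illustrative concave utility function

    start_wealth = 1_000.0

    # Willingness to pay x solves: 0.5*u(start - x + 100) + 0.5*u(start - x) = u(start).
    # Find it by bisection.
    lo, hi = 0.0, 100.0
    for _ in range(60):
        x = (lo + hi) / 2
        expected_u = 0.5 * utility(start_wealth - x + 100) + 0.5 * utility(start_wealth - x)
        lo, hi = (x, hi) if expected_u > utility(start_wealth) else (lo, x)

    print(round(x, 2))        # willingness to pay: a bit under the $50 expected value
    print(round(50 - x, 2))   # the gap is the risk premium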

II. Return predictability is compensation for risk
The above decomposition suggests that return predictability can either be the result of

  1. The ability to truly predict movements in the underlying value of the stock
  2. The ability to predict risk premia

If the first type of predictability were possible, this would in fact invalidate the EMH. However, the second sort of predictability – predictability of risk premia – allows for stock returns to be predictable, even under the EMH.

This is because, if only risk premia are predictable, then there is still no free lunch.

Sure, you can predict that a stock portfolio will outperform the market over the next year. However, this excess return is simply compensation for the fact that this set of stocks is extra risky – i.e., the portfolio has a high risk premium.

As an extreme example, consider the well-known fact that buying and holding a diverse basket of stocks predictably has higher expected returns than buying and holding short-term Treasury bills.

Is this a free lunch? Does the existence of the stock market invalidate the EMH? No. This return predictability exists only because equities are fundamentally riskier than T-bills.

III. Summing up
This is all to say that while returns may be predictable, it is likely that any profits earned from such predictable strategies are merely compensation for extra risk.

The EMH says that there is no free lunch from investing. Just because returns are predictable does not mean you can eat for free.

 

Postscript. There is another (outdated) theory, the "random walk hypothesis", defined as the claim that returns are not predictable. This is different from the EMH, which says that asset prices reflect all available information. The random walk hypothesis has been shown to be clearly empirically false, per links above.








Apr 9, 2017

 
Confidence level (?): Medium

Update: Selgin points out in correspondence, and Sumner points out in the comments below, that the discussion below implicitly uses variables in per capita terms.

This post continues the discussion from Scott Sumner's thoughtful reply to my critique of NGDP targeting from 2015. (Note to frequent readers: I previously published a reply to Scott, which I have since deleted.)

In short:

  1. Some economists see zero inflation as optimal in the long run. NGDP targeting cannot achieve this in the long run, except under discretion, as I discussed in my original post.
  2. On the other hand, as I discuss below, many models prescribe the Friedman rule for the optimal long-run rate of inflation. This can, in fact, be achieved under NGDP targeting, even without discretion!

I. The benefit of NGDP targeting is that inflation can fluctuate in the short run. But can NGDP targeting achieve a long-run optimal inflation rate?

Targeting NGDP rather than targeting inflation allows inflation to fluctuate in the short run. This is the major benefit of NGDP targeting, since it makes sense to have higher inflation in the short run when there is a cyclical growth slowdown and lower inflation when there is a growth boom (see Selgin, Sumner, Sheedy, myself).

This is an argument about the short or medium run, at the frequency of business cycles (say 2-5 years).

Separately, you could imagine – whether or not inflation is allowed to vary in the short run, as it would be under NGDP targeting – that there is a long-run rate of inflation which is optimal. That is, is there a "best" inflation rate at which the economy should ideally settle, at a 10+ year horizon?

If there is an optimal long-run inflation rate, you would hope that this could be achieved under NGDP targeting in the long-run, even while inflation is allowed to fluctuate in the short run.

II. The optimal long-run inflation rate
Economists have thought a lot about the question of what the long-run optimal inflation rate is. There are two competing answers [1]:

1. No inflation: One strand of the literature argues that the optimal long-run inflation rate is precisely zero, based on price stickiness. The argument goes: if the price level is kept stable, sticky prices cannot distort relative prices.

2. Friedman rule: Alternatively, another strand of the literature going back to Milton Friedman argues that the optimal inflation rate is the negative of the short-term risk-free real interest rate (i.e. slight deflation). The argument here is that this would set the nominal risk-free interest rate to zero. In this world, there would be no opportunity cost to holding money, since both cash and risk-free bonds would pay zero interest, and the economy could be flush with liquidity and the optimum quantity of money achieved.

These two schools of thought clearly contradict each other. We will consider each separately.

What we want to know is this: could NGDP targeting achieve the optimal inflation rate in the long run (even while allowing beneficial short-run fluctuations in inflation)?

III. NGDP targeting and zero long-run inflation
In a previous blog post, I critiqued NGDP targeting by pointing out that NGDP targeting could not achieve zero inflation in the long run unless the central bank could discretionarily change the NGDP target. In other words, I was arguing, based on the first strand of literature, that NGDP targeting was deficient in this respect.

The accounting is simple: NGDP growth = real growth + inflation. Under NGDP targeting without discretion, the growth rate of NGDP is fixed. But, real growth varies in the long run due to changing productivity growth – for example, real growth was higher in the 1960s than it has been in recent decades. As a result, the long-run inflation rate must vary and thus is unanchored.

Zero inflation can be achieved in the long run, but only at the cost of trusting the central bank to act discretionarily and appropriately modify the long-run NGDP target.

I think that such discretion would be problematic, for reasons I outline in the original post. I'll note, however, that I (now) assess that the benefits of NGDP targeting in preventing short-run recessions outweigh this smaller long-run cost.

IV. NGDP targeting and the Friedman rule
On the other hand – and I haven't seen this result discussed elsewhere before – NGDP targeting can achieve the Friedman rule for the optimal inflation rate in the long run without discretion. That is, under the logic of the second strand of literature, NGDP targeting can achieve the optimum. Here's the accounting logic:

The Friedman rule prescribes that the optimal inflation rate, pi*, be set equal to the negative of the real interest rate r so that the nominal interest rate is zero:
pi* = -r

Here's the kicker: Under a wide class of models (with log utility), the long-run real interest rate equals the rate of technological progress g plus the rate of time preference b. See Baker et al (2005) for a nice overview. As a result, the optimal inflation rate under the Friedman rule can be written:
pi* = -r = -(b+g)

This can be achieved under NGDP targeting without discretion! Here's how.

Suppose that the central bank targets a nominal GDP growth rate of -b, that is, an NGDP path that declines at the rate of time preference. Recall again, under NGDP targeting, NGDP growth = g + pi. Since the central bank is targeting an NGDP growth rate of -b, if we rearrange to solve for inflation, we get that
pi = NGDP growth - g = -b - g

That's the optimal inflation rate implied by the Friedman rule shown above. This result holds even if the long-run rate of productivity growth (g) changes.

Thus, we have shown that if the central bank targets an NGDP path that declines at the rate of time preference, then in the long run the Friedman rule will be achieved.
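A quick back-of-the-envelope check of this accounting, with parameter values chosen purely for illustration (they are not estimates):

    g = 0.02    # long-run productivity growth (assumed for illustration)
    b = 0.01    # rate of time preference (assumed for illustration)

    friedman_rule_inflation = -(b + g)           # pi* = -r = -(b + g)

    ngdp_growth_target = -b                      # the NGDP path proposed above
    implied_inflation = ngdp_growth_target - g   # from NGDP growth = g + pi

    print(friedman_rule_inflation, implied_inflation)   # both -0.03: they coincide

    g = 0.005                                    # even if trend growth later slows...
    print(-(b + g), ngdp_growth_target - g)      # ...both are -0.015: still the Friedman rule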

To summarize, under such a regime, the economy would get the short-run benefits of flexible inflation for which NGDP targeting is rightfully acclaimed; while still achieving the optimal long-run inflation rate.

This is a novel point in support of NGDP targeting, albeit a very specific version of NGDP targeting: an NGDP target of negative the rate of time preference.

V. Summing up
There's still the tricky problem that economists can't even agree on whether the Friedman rule or no-inflation is superior.

So, to sum up once more:

  1. NGDP targeting cannot achieve zero inflation in the long run without discretion, as discussed in my original post. This is unfortunate if zero inflation is long-run optimal.
  2. However, NGDP targeting – if targeting a growth rate of -b – can in fact achieve the Friedman rule in the long run without discretion. This is fortunate if the Friedman rule is the long-run optimal inflation rate.

To close this out, I'll note that an alternative middle ground exists… an NGDP target of 0%. This would see a long-run inflation rate of -g: not as low as -g-b as prescribed by the Friedman rule; but not as high as 0% as prescribed by no-inflationistas.

Such a policy is also known as a "productivity norm" (since long-run inflation is the negative of productivity growth), advocated prominently by George Selgin (1997).

[1] I ignore ZLB considerations, which typically imply a higher optimal inflation rate, since many advocates of NGDP targeting do not see the ZLB as a true policy constraint (myself included).








Dec 21, 2016

 
Confidence level (?): Very high

I. Marx vs. Smith and food banks
When Heinz produces too many Bagel Bites, or Kellogg produces too many Pop-Tarts, or whatever, these mammoth food-processing companies can donate their surplus food to Feeding America, a national food bank. Feeding America then distributes these corporate donations to local food banks throughout the country.

What's the economically optimal way to allocate the donations across the country?

Option one is what you might call "full communism." Under full communism, Feeding America collects the food donations and then top-down tells individual food banks what endowments they will be receiving, based on Feeding America's own calculation of which food banks need what.

Prior to 2005, this was indeed what occurred: food was distributed by centralized assignment. Full communism!

The problem was one of distributed versus centralized knowledge. While Feeding America had very good knowledge of poverty rates around the country, and thus could measure need in different areas, it was not as good at dealing with idiosyncratic local issues.

Food banks in Idaho don't need a truckload of potatoes, for example, and Feeding America might fail to take this into account. Or maybe the Chicago regional food bank just this week received a large direct donation of peanut butter from a local food drive, and then Feeding America comes along and says that it has two tons of peanut butter that it is sending to Chicago.

To an economist, this problem screams of the Hayekian knowledge problem. Even a benevolent central planner will be hard-pressed to efficiently allocate resources in a society since it is simply too difficult for a centralized system to collect information on all local variation in needs, preferences, and abilities.

This knowledge problem leads to option two: market capitalism. Unlike poorly informed central planners, the decentralized price system – i.e., the free market – can (often but not always) do an extremely good job of aggregating local information to efficiently allocate scarce resources. This result is known as the First Welfare Theorem.

Such a system was created for Feeding America with the help of four Chicago Booth economists in 2005. Instead of centralized allocation, food banks were given fake money – with needier food banks being given more – and allowed to bid for different types of food in online auctions. Prices are thus determined by supply and demand.

At midnight each day all of the (fake) money spent that day is redistributed, according to the same formula as the initial allocation. Accordingly, any food bank which does not bid today will have more money to bid with tomorrow.
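For concreteness, here is a stylized Python sketch of this daily recycling of fake money. The bank names, need-based weights, and amounts are hypothetical, and Feeding America's actual allocation formula is of course more involved:

    # Need-based shares and endowments are assumed for illustration only.
    weights = {"Chicago": 0.5, "Boise": 0.3, "Dallas": 0.2}
    balances = {bank: 1000 * share for bank, share in weights.items()}

    spent_today = {"Chicago": 200, "Boise": 0, "Dallas": 50}   # winning bids settled today

    pot = sum(spent_today.values())
    for bank in balances:
        balances[bank] -= spent_today[bank]      # pay for the food won at auction
        balances[bank] += pot * weights[bank]    # midnight: redistribute by the same formula

    print(balances)   # a bank that sat out today has more to bid with tomorrow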

Under this system, the Chicago food bank does not have to bid on peanut butter if it has just received a large peanut butter donation from another source. The Idaho food bank, in turn, can skip bidding for potatoes and bid for extra peanut butter at a lower price. It's win-win-win.

By all accounts, the system has worked brilliantly. Food banks are happier with their allocations; donations have gone up as donors have more confidence that their donations will actually be used. Chalk one up for economic theory.

II. MV=PY, information frictions, and food banks
This is all pretty neat, but here's the really interesting question: what is optimal monetary policy for the food bank economy?

Remember that food banks are bidding for peanut butter or cereal or mini pizzas with units of fake money. Feeding America has to decide if and how the fake money supply should grow over time, and how to allocate new units of fake money. That's monetary policy!

Here's the problem for Feeding America when thinking about optimal monetary policy. Feeding America wants to ensure that changes in prices are informative for food banks when they bid. In the words of one of the Booth economists who helped design the system:

"Suppose I am a small food bank; I really want a truckload of cereal. I haven't bid on cereal for, like, a year and a half, so I'm not really sure I should be paying for it. But what you can do on the website, you basically click a link and when you click that link it says: This is what the history of prices is for cereal over the last 5 years. And what we wanted to do is set up a system whereby by observing that history of prices, it gave you a reasonable instinct for what you should be bidding."

That is, food banks face information frictions: individual food banks are not completely aware of economic conditions and only occasionally update their knowledge of the state of the world. This is because obtaining such information is time-consuming and costly.

Relating this to our question of optimal monetary policy for the food bank economy: How should the fake money supply be set, taking into consideration this friction?

Obviously, if Feeding America were to randomly double the supply of (fake) money, then all prices would double, and this would be confusing for food banks. A food bank might go online to bid for peanut butter, see that the price has doubled, and mistakenly think that demand specifically for peanut butter has surged.

This "monetary misperception" would distort decision making: the food bank wants peanut butter, but might bid for a cheaper good like chicken noodle soup, thinking that peanut butter is really scarce at the moment.

Clearly, random variation in the money supply is not a good idea. More generally, how should Feeding America set the money supply?

One natural idea is to copy what real-world central banks do: target inflation.

The Fed targets something like 2% inflation. But, if the price of a box of pasta and other foods were to rise 2% per year, that might be confusing for food banks, so let's suppose a 0% inflation target instead.

It turns out inflation targeting is not a good idea! In the presence of the information frictions described above, inflation targeting will only sow confusion. Here's why.

As I go through this, keep in the back of your mind: if households and firms in the real-world macroeconomy face similar information frictions, then – and this is the punchline of this entire post – perhaps inflation targeting is a bad idea in the real world as well.

III. Monetary misperceptions
I demonstrate the following argument rigorously in a formal mathematical model in a paper, "Monetary Misperceptions: Optimal Monetary Policy under Incomplete Information," using a microfounded Lucas Islands model. The intuition for why inflation targeting is problematic is as follows.

Suppose the total quantity of all donations doubles.

You're a food bank and go to bid on cheerios, and find that there are twice as many boxes of cheerios available today as yesterday. You're going to want to bid at a price something like half as much as yesterday.

Every other food bank looking at every other item will have the same thought. Aggregate inflation thus would be something like -50%, as all prices would drop by half.

As a result, under inflation targeting, the money supply would simultaneously have to double to keep inflation at zero. But this would be confusing: Seeing the quantity of cheerios double but the price remain the same, you won't be able to tell if the price has remained the same because
(a) The central bank has doubled the money supply
or
(b) Demand specifically for cheerios has jumped up quite a bit

It's a signal extraction problem, and rationally you're going to put some weight on both of these possibilities. However, only the first possibility actually occurred.
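For intuition, here is a minimal sketch of that signal extraction, in the spirit of textbook Lucas-islands models rather than the exact model in my paper; the variances are assumptions chosen only for illustration:

    var_money = 1.0      # variance of aggregate money-supply surprises (assumed)
    var_demand = 3.0     # variance of cereal-specific demand surprises (assumed)

    # Attribute the surprise to each source in proportion to its share of total variance.
    weight_on_money = var_money / (var_money + var_demand)
    weight_on_demand = 1.0 - weight_on_money

    print(weight_on_money, weight_on_demand)   # 0.25 on "money doubled", 0.75 on "cheerios demand"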

This problem leads to all sorts of monetary misperceptions, as money supply growth creates confusion – hence the title of my paper.

Inflation targeting, in this case, is very suboptimal. Price level variation provides useful information to agents.

IV. Optimal monetary policy
As I work out formally in the paper, optimal policy is instead something close to a nominal income (NGDP) target. Under log utility, it is exactly a nominal income target. (I've written about nominal income targeting before more critically here.)

Nominal income targeting in this case means that the money supply should not respond to aggregate supply shocks. In the context of our food banks, this result means that the money supply should not be altered in response to an increase or decrease in aggregate donations.

Instead, if the total quantity of all donations doubles, then the price level should be allowed to fall by (roughly) half. This policy prevents the confusion described above.
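In quantity-theory terms: if nominal spending (MV, i.e. NGDP) is held on target, the price level is just nominal spending divided by output. A toy example with hypothetical numbers:

    ngdp_target = 100.0              # nominal spending held on target (hypothetical units)

    donations = 50.0
    print(ngdp_target / donations)   # price level = 2.0

    donations = 100.0                # total donations double...
    print(ngdp_target / donations)   # ...and the price level falls by half, to 1.0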

Restating, the intuition is this. Under optimal policy, the aggregate price level acts as a coordination mechanism, analogous to the way that relative prices convey useful information to agents about the relative scarcity of different goods. When total donations double, the aggregate price level signals that aggregate output is less scarce by halving.

It turns out that nominal income targeting is only exactly optimal (as opposed to approximately optimal) under some special conditions. I'll save that discussion for another post though.

Feeding America, by the way, does not target constant inflation. They instead target "zero inflation for a given good if demand and supply conditions are unchanged." This alternative is a move in the direction of a nominal income target.

V. Real-world macroeconomic implications
I want to claim that the information frictions facing food banks also apply to the real economy, and as a result, the Federal Reserve and other central banks should consider adopting a nominal income target. Let me tell a story to illustrate the point.

Consider the owner of an isolated bakery. Suppose one day, all of the customers seen by the baker spend twice as much money as the customers from the day before.

The baker has two options. She can interpret this increased demand as customers having come to appreciate the superior quality of her baked goods, and thus increase her production to match the new demand. Alternatively, she could interpret this increased spending as evidence that there is simply more money in the economy as a whole, and that she should merely increase her prices proportionally to account for inflation.

Economic agents confounding these two effects is the source of economic booms and busts, according to this model. This is exactly analogous to the problem faced by food banks trying to decide how much to bid at auction.

To the extent that these frictions are quantitatively important in the real world, central banks like the Fed and ECB should consider moving away from their inflation targeting regimes and toward something like a nominal income target, as Feeding America has.

VI. Summing up
Nominal income targeting has recently enjoyed a surge in popularity among academic monetary economists, so the fact that this result aligns with that intuition is pretty interesting.

To sum up, I'll use a metaphor from Selgin (1997).

Consider listening to a symphony on the radio. Randomly turning the volume knob up and down merely detracts from the musical performance (random variation in the price level is not useful). But, the changing volume of the orchestra players themselves, from quieter to louder and back down again, is an integral part of the performance (the price level should adjust with natural variations in the supply of food donations). The changing volume of the orchestra should not be smoothed out to maintain a constant volume (constant inflation is not optimal).

Central banks may want to consider allowing the orchestra to do its job, and reconsider inflation targeting as a strategy.








Dec 16, 2015

 
Confidence level (?): High

Behavioral economists have a concept called loss aversion. It's almost always described something like this:

"Loss aversion implies that one who loses $100 will lose more satisfaction than another person will gain satisfaction from a $100 windfall."
Wikipedia, as of December 2015

Sounds eminently reasonable, right? Some might say so reasonable, in fact, that it's crazy that those darn neoclassical economists don't incorporate such an obvious, fundamental fact about human nature in their models.

It is crazy – because it's not true! The pop definition of loss aversion given above – that 'losses hurt more than equivalent-size gains' – is precisely the concept of diminishing marginal utility (DMU) that is boringly standard in neoclassical price theory.

Loss aversion is, in fact, a distinct and (perhaps) useful concept. But somewhat obnoxiously, behavioral economists, particularly in their popular writings, have a tendency to conflate it with DMU in a way that makes the concept seem far more intuitive than it is, and in the process wrongly makes standard price theory look bad.

I'm not just cherry-picking a bad Wikipedia edit. I name names at the bottom of this post, listing where behavioral economists – Thaler, Kahneman, Sunstein, Dubner, etc. – have (often!) given the same misleading definition. It's wrong! Loss aversion is about reference dependence.

To restate, what I'm claiming is this:

  1. Behavioral economists use an incorrect definition of loss aversion when writing for popular audiences
  2. This incorrect definition is in fact the property of DMU that is assumed in all of neoclassical economics
  3. DMU is much more intuitive than the real definition of loss aversion, and so by using a false definition of loss aversion behavioral economists make neoclassical economics look unnecessarily bad and behavioral economics look misleadingly good

Let me walk through the difference between DMU and loss aversion painstakingly slowly:

Diminishing marginal utility
"Diminishing marginal utility" is the idea that the more you have of something, the less you get out of having a little bit more of it. For example:

If you own nothing but $1,000 and the clothes on your back, and I then give you $100,000, that is going to give you a heck of a lot more extra happiness than if you had $100 million and I gave you $100,000.

An important corollary follows immediately from this: losses hurt more than gains!

I made a super high quality illustration to depict this:

What we have here is a graph of your utility as a function of your wealth under extremely standard (i.e., non-behavioral) assumptions. The fact that the line flattens out as you get to higher wealth levels is the property of DMU.

We can also see that equivalently sized losses hurt more than gains. As you go from 10k wealth to 2k wealth (middle green line to bottom green line), your utility falls by more than the amount your utility rises if you go from 10k wealth to 18k wealth (middle green to top green lines), despite the change in wealth being the same 8k in both directions.

Standard economics will always assume DMU, thus capturing exactly the intuition of the idea described in the above Wikipedia definition of loss aversion.

More mathematically – and I'm going to breeze through this – if your utility is purely a function of your wealth, Utility=U(W), then we assume that U'(W)>0 but U''(W)<0, i.e. your utility function is concave. With these assumptions, the result that U(W+ε)-U(W) < U(W)-U(W-ε) follows from taking a Taylor expansion. See proof attached below.
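For those who would rather see numbers than a Taylor expansion, here is a quick numerical check using the same 2k / 10k / 18k wealth levels as the figure above; the square-root utility function is an arbitrary concave example, nothing special:

    import math

    u = math.sqrt    # an arbitrary concave utility function, purely for illustration

    gain = u(18_000) - u(10_000)    # utility gained moving up 8k
    loss = u(10_000) - u(2_000)     # utility lost moving down 8k

    print(round(gain, 1), round(loss, 1))   # roughly 34.2 vs 55.3
    print(loss > gain)                      # True: the equally sized loss hurts more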

Loss aversion
Loss aversion is a consequence of reference dependence and is an entirely different beast. The mathematical formulation was first made in Tversky and Kahneman (1991).

In words, loss aversion says this: Suppose you have nothing but the clothes you're wearing and $10,000 in your pocket, and then another $10,000 appears in your pocket out of nowhere. Your level of utility/happiness will now be some quantity given your wealth of $20,000.

Now consider a situation where you only own your clothes and the $30,000 in your pocket. Suppose suddenly $10,000 in your pocket disappears. Your total wealth is $20,000 – that is, exactly the same as the prior situation. Loss aversion predicts that in this situation, your level of utility will be lower than in the first situation, despite the fact that in both situations your wealth is exactly $20,000, because you lost money to get there.
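One way to see the role of the reference point is with a stylized gain-loss value function in the spirit of Tversky and Kahneman (1991). The piecewise-linear form and the loss-aversion coefficient below are illustrative assumptions, not the paper's exact specification:

    LOSS_AVERSION = 2.25   # "losses loom larger than gains"; the exact value is an assumption

    def gain_loss_utility(wealth, reference):
        change = wealth - reference
        return change if change >= 0 else LOSS_AVERSION * change

    # Same final wealth of $20,000, different reference points:
    print(gain_loss_utility(20_000, reference=10_000))   #  10000 -> felt as a gain
    print(gain_loss_utility(20_000, reference=30_000))   # -22500.0 -> felt as a (painful) loss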

Perhaps this concept of loss aversion is reasonable in some situations. It doesn't seem crazy to think that people don't like to lose things they had before.

But this concept is entirely different from the idea that 'people dislike losses more than they like gains' which sloppy behavioral economists go around blathering about. It's about reference dependence! Your utility depends on your reference point: did you start with higher or lower wealth than you currently have?

In their academic papers, behavioral economists are very clear on the distinction. The use of math in formal economic models imposes precision. But when writing for a popular audience in the less-precise language of English – see below for examples – the same economists slip into using an incorrect definition of loss aversion.

Conclusion
So, please, don't go around claiming that behavioral economists are incorporating some brilliant newfound insight that people hate losses more than they like gains. We've known about this in price theory since Alfred Marshall's 1890 Principles of Economics.

 

Addendum
It's kind of silly for me to write this post without naming names. Here we go:

1. Richard Thaler, one of the founding fathers of behavioral economics, in his 2015 bestseller, Misbehaving:

2. Richard Thaler, in the 2008 bestseller, Nudge:

3. Cass Sunstein (Oct. 2015), Harvard law and behavioral economics professor:

4. Daniel Kahneman, Nobel Prize-winning behavioral economist, in his 2011 bestseller, Thinking, Fast and Slow:

5. Stephen Dubner (Nov. 2005):

6. New York Times (Dec. 2013):

7. The Economist (Feb. 2015):

I should note that Tversky and Kahneman in their original paper describing loss aversion are admirably clear in their usage of the concept: the title of their QJE paper is Loss Aversion in Riskless Choice: A Reference-Dependent Model, explicitly highlighting the notion of reference dependence.









Sep 15, 2015

 
Confidence level (?): Very high

Until very recently – see last month's WSJ survey of economists – the FOMC was widely expected to raise the target federal funds rate this week at their September meeting. Whether or not the Fed should be raising rates is a question that has received much attention from a variety of angles. What I want to do in this post is answer that question from a very specific angle: the perspective of a New Keynesian economist.

Why the New Keynesian perspective? There is certainly a lot to fault in the New Keynesian model (see e.g. Josh Hendrickson). However, the New Keynesian framework dominates the Fed and other central banks across the world. If we take the New Keynesian approach seriously, we can see what policymakers should be doing according to their own preferred framework.

The punch line is that the Fed raising rates now is the exact opposite of what the New Keynesian model of a liquidity trap recommends.

If you're a New Keynesian, this is the critical moment in monetary policy. For New Keynesians, the zero lower bound can cause a recession, but need not result in a deep depression, as long as the central bank credibly promises to create an economic boom after the zero lower bound (ZLB) ceases to be binding.

That promise of future growth is sufficient to prevent a depression. If the central bank instead promises to return to business as normal as soon as the ZLB stops binding, the result is a deep depression while the economy is trapped at the ZLB, like we saw in 2008 and continue to see in Europe today. The Fed appears poised to validate earlier expectations that it would indeed return to business as normal.

If the New Keynesian model is accurate, this is extremely important. By not creating a boom today, the Fed is destroying any credibility it has for the next time we hit the ZLB (which will almost certainly occur during the next recession). It won't credibly be able to promise to create a boom after the recession ends, since everyone will remember that it did not do so after the 2008 recession.

The result, according to New Keynesian theory, will be another depression.

I. The theory: an overview of the New Keynesian liquidity trap

I have attached at the bottom of this post a reference sheet going into more detail on Eggertsson and Woodford (2003), the definitive paper on the New Keynesian liquidity trap. Here, I summarize at a high level – skip to section II if you are familiar with the model.

A. The NK model without a ZLB

Let's start by sketching the standard NK model without a zero lower bound, and then see how including the ZLB changes optimal monetary policy.

The basic canonical New Keynesian model of the economy has no zero lower bound on interest rates and thus no liquidity traps (in the NK context, a liquidity trap is defined as a period when the nominal interest rate is constrained at zero). Households earn income through labor and use that income to buy a variety of consumption goods and consume them to receive utility. Firms, which have some monopoly power, hire labor and sell goods to maximize their profits. Each period, a random selection of firms are not allowed to change their prices (Calvo price stickiness).

With this setup, the optimal monetary policy is to have the central bank manipulate the nominal interest rate such that the real interest rate matches the "natural interest rate," which is the interest rate which would prevail in the absence of economic frictions. The intuition is that by matching the actual interest rate to the "natural" one, the central bank causes the economy to behave as if there are no frictions, which is desirable.

In our basic environment without a ZLB, a policy of targeting zero percent inflation via a Taylor rule for the interest rate exactly achieves the goal of matching the real rate to the natural rate. Thus optimal monetary policy results in no inflation, no recessions, and everyone's the happiest that they could possibly be.
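As a rough illustration of what such a rule looks like, here is a textbook-style Taylor-type rule – a sketch, not the exact optimal rule from the model – with an assumed response coefficient:

    def policy_rate(natural_rate, inflation, inflation_target=0.0, phi=1.5):
        # Track the natural rate and lean against deviations of inflation from target.
        return natural_rate + inflation_target + phi * (inflation - inflation_target)

    print(policy_rate(natural_rate=0.02, inflation=0.00))   # 0.02: real rate = natural rate
    print(policy_rate(natural_rate=0.02, inflation=0.01))   # 0.035: tighten when inflation rises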

B. The NK liquidity trap

The New Keynesian model of a liquidity trap is exactly the same as the model described above, with one single additional equation: the nominal interest rate must always be greater than or equal to zero.

This small change has significant consequences. Whereas before zero inflation targeting made everyone happy, now such a policy can cause a severe depression.

The problem is that sometimes the interest rate should be less than zero, and the ZLB can prevent it from getting there. As in the canonical model without a ZLB, optimal monetary policy would still have the central bank match the real interest rate to the natural interest rate.

Now that we have a zero lower bound, however, if the central bank targets zero inflation, then the real interest rate won't be able to match the natural interest rate if the natural interest rate ever falls below zero!

And that, in one run-on sentence, is the New Keynesian liquidity trap.

Optimal policy is no longer zero inflation. The new optimal policy rule is considerably more complex and I refer you to the attached reference sheet for full details. But the essence of the idea is quite intuitive:

If the economy ever gets stuck at the ZLB, the central bank must promise that as soon as the ZLB is no longer binding it will create inflation and an economic boom.

The intuition behind this idea is that the promise of a future boom increases the inflation expectations of forward-looking households and firms. These increased inflation expectations reduce the real interest rate today. This in turn encourages consumption today, diminishing the depth of the recession today.
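The mechanism runs through the Fisher equation: with the nominal rate stuck at zero, higher expected inflation is the only way to push down the real rate. A minimal sketch, with hypothetical numbers:

    nominal_rate = 0.0   # stuck at the zero lower bound

    for expected_inflation in (0.00, 0.02, 0.04):
        real_rate = nominal_rate - expected_inflation   # Fisher equation (approximate)
        print(expected_inflation, real_rate)   # more expected inflation -> lower real rate today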

All of this happens today, despite the fact that the boom won't occur until perhaps far into the future! Expectations are important; indeed, they are the essence of monetary policy.

C. An illustration of optimal policy

Eggertsson (2008) illustrates this principle nicely in the following simulation. Suppose the natural rate is below zero for 15 quarters. The dashed line shows the response of the economy to a zero-inflation target, and the solid line the response to the optimal policy described above.

Under optimal policy (solid line), we see in the first panel that the interest rate is kept at zero even after period 15, when the ZLB ceases to bind. As a result, we see in panels two and three that the depth of the recession is reduced to almost nothing under the optimal policy; there is no massive deflation; and there's a nice juicy boom after the liquidity trap ends.

In contrast, under the dashed line – which you can sort of think of as closer to the Fed's current, history-independent policy – there is deflation and economic disaster.

II. We're leaving the liquidity trap; where's our boom?

To be completely fair, we cannot yet say that the Fed has failed to follow its own model. We first must show that the ZLB only recently has ceased or will cease to be binding. Otherwise, a defender of the Fed could argue that the lower bound could have ceased to bind years ago, and the Fed has already held rates low for an extended period.

The problem with showing this is that estimating the natural interest rate is extremely challenging, as famously argued by Milton Friedman (1968). That said, several different models using varied estimation methodologies all point to the economy still being on the cusp of the ZLB, and thus support the thesis of this post: the Fed is acting in serious error.

Consider, most tellingly, the New York Fed's own model! The NY Fed's medium-scale DSGE model is at its core the exact same as the basic canonical NK model described above, with a lot of bells and whistles grafted on. The calibrated model takes in a whole jumble of data – real GDP, financial market prices, consumption, the kitchen sink, forecast inflation, etc. – and spits out economic forecasts.

It can also tell us what it thinks the natural interest rate is. From the perspective of the New York Fed DSGE team, the economy is only just exiting the ZLB:

Barsky et al (2014) of the Chicago Fed perform a similar exercise with their own DSGE model and come to the same conclusion:

Instead of using a microfounded DSGE model, John Williams and Thomas Laubach – president of the Federal Reserve Bank of San Francisco and director of monetary affairs at the Board of Governors, respectively – use a reduced-form model estimated with a Kalman filter. Their model shows the natural rate in fact still below its lower bound (in green):

David Beckworth has a cruder but more transparent regression model here and also finds that the economy remains on the cusp of the ZLB (in blue):

If anyone knows of any alternative estimates, I'd love to hear in the comments.

With this fact established, we have worked through the entire argument. To summarize:

  1. The Fed thinks about the world through a New Keynesian lens
  2. The New Keynesian model of a liquidity trap says that to prevent a depression, the central bank must keep rates low even after the ZLB stops being binding, in order to create an economic boom
  3. The economy is only just now coming off the ZLB
  4. Therefore, a good New Keynesian should support keeping rates at zero.
  5. So: why is the Fed about to raise rates?!

III. What's the strongest possible counterargument?

I intend to conclude all future posts by considering the strongest possible counterarguments to my own. In this case, I see only two interesting critiques:

A. The NK model is junk

This argument is something I have a lot of sympathy for. Nonetheless, it is not a very useful point, for two reasons.

First, the NK model is the preferred model of Fed economists. As mentioned in the introduction, this is a useful exercise as the Fed's actions should be consistent with its method of thought. Or, its method of thought must change.

Second, other models give fairly similar results. Consider the more monetarist model of Auerbach and Obstfeld (2005) where the central bank's instrument is the money supply instead of the interest rate (I again attach my notes on the paper below).

Instead of prescribing that the Fed hold interest rates lower for longer as in Eggertsson and Woodford, Auerbach and Obstfeld's cash-in-advance model shows that to defeat a liquidity trap the Fed should promise a one-time permanent level expansion of the money supply. That is, the expansion must not be temporary: the Fed must continue to be "expansionary" even after the ZLB has ceased to be binding by keeping the money supply expanded.

This is not dissimilar in spirit to Eggertsson and Woodford's recommendation that the Fed continue to be "expansionary" even after the ZLB ceases to bind by keeping the nominal rate at zero.

B. The ZLB ceased to bind a long time ago

The second possible argument against my above indictment of the Fed is the argument that the natural rate has long since crossed the ZLB threshold and therefore the FOMC has targeted a zero interest rate for a sufficiently long time.

This is no doubt the strongest argument a New Keynesian Fed economist could make for raising rates now. That said, I am not convinced, partly because of the model estimations shown above. More convincing to me is the fact that we have not seen the boom that would accompany interest rates being below their natural rate. Inflation has been quite low and growth has certainly not boomed.

Ideally we'd have some sort of market measure of the natural rate (e.g. a prediction market). As a bit of an aside, as David Beckworth forcefully argues, it's a scandal that the Fed Board does not publish its own estimates of the natural rate. Such data would help settle this point.


 

I'll end things there. The New Keynesian model currently dominates macroeconomics, and its answer to whether the Fed should be raising rates in September is a resounding no. If you're an economist who finds value in the New Keynesian perspective, I'd be extremely curious to hear why you support raising rates in September if you do – or, if not, why you're not speaking up more loudly.

 








Jan 6, 2015

 
Confidence level (?): Medium

Edit: The critique in this post that NGDP targeting cannot achieve zero inflation in the long run without discretion is somewhat tempered by my 2017 follow-up here: perhaps zero long-run inflation would be inferior to a long-run Friedman rule; which in fact can be naturally implemented with NGDP targeting.

Summary:

  1. NGDP growth is equal to real GDP growth plus inflation. Thus, under NGDP targeting, if the potential real growth rate of the economy changes, then the full-employment inflation rate changes.
  2. New Keynesians advocate that the Fed adjust the NGDP target one for one with changes in potential GDP. However, this rule would be extremely problematic for market monetarists.
  3. Most importantly, it is simply not possible to estimate potential GDP in real time: an accurate structural model will never be built.
  4. Further: such a policy would give the Fed huge amounts of discretion; unanchor long term expectations, especially under level targeting; and be especially problematic if technological growth rapidly accelerates as some predict.

I want to discuss a problem that I see with nominal GDP targeting: structural growth slowdowns. This problem isn't exactly a novel insight, but it is an issue with which I think the market monetarist community has not grappled enough.

I. A hypothetical example

Remember that nominal GDP growth (in the limit) is equal to inflation plus real GDP growth. Consider a hypothetical economy where market monetarism has triumphed, and the Fed maintains a target path for NGDP growing annually at 5% (perhaps even with the help of a NGDP futures market). The economy has been humming along at 3% RGDP growth, which is the potential growth rate, and 2% inflation for (say) a decade or two. Everything is hunky dory.

But then – the potential growth rate of the economy drops to 2% due to structural (i.e., supply side) factors, and potential growth will be at this rate for the foreseeable future.

Perhaps there has been a large drop in the birth rate, shrinking the labor force. Perhaps a newly elected government has just pushed through a smorgasbord of measures that reduce the incentive to work and to invest in capital. Perhaps, most plausibly (and worrisomely!) of all, the rate of innovation has simply dropped significantly.

In this market monetarist fantasy world, the Fed maintains the 5% NGDP path. But maintaining 5% NGDP growth with potential real GDP growth at 2% means 3% steady state inflation! Not good. And we can imagine even more dramatic cases.
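To make the arithmetic explicit, steady-state inflation under a fixed NGDP growth target is pinned down by the growth-rate identity

$$\pi = g_{NGDP} - g^{pot}_{RGDP} = 5\% - 2\% = 3\%,$$

so every further percentage point of slowdown in potential growth translates one-for-one into a higher steady-state inflation rate.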

II. Historical examples

Skip this section if you're convinced that the above scenario is plausible

Say a time machine transports Scott Sumner back to 1980 Tokyo: a chance to prevent Japan's Lost Decade! Bank of Japan officials are quickly convinced to adopt an NGDP target of 9.5%, the rationale behind this specific number being that the average real growth in the 1960s and 70s was 7.5%, plus a 2% implicit inflation target.

Thirty years later, trend real GDP growth in Japan is around 0.0%, by Sumner's (offhand) estimation, and I don't doubt it. Had the BOJ maintained the 9.5% NGDP target in this alternate timeline, Japan would be seeing something like 9.5% inflation today.

Counterfactuals are hard: of course much else would have changed had the BOJ been implementing NGDPLT for over 30 years, perhaps including the trend rate of growth. But to a first approximation, the inflation rate would certainly be approaching 10%.

Or, take China today. China saw five years of double digit real growth in the mid-2000s, and not because the economy was overheating. I.e., the 12.5% and 14% growth in real incomes in China in 2006 and 2007 were representative of the true structural growth rate of the Chinese economy at the time. To be conservative, consider the 9.4% growth rate average over the decade, which includes the meltdown in 2008-9 and a slowdown in the earlier part of the decade.

Today, growth is close to 7%, and before the decade is up it very well could have a 5 handle. If the People's Bank had adopted NGDP targeting at the start of the millennium with a 9.4% real growth rate in mind, inflation in China today would be more than 2 percentage points higher than what the PBOC desired when it first set the NGDP target! That's not at all trivial, and would only become a more severe issue as the Chinese economy finishes converging with the developed world and growth slows still further.

This isn't only a problem for countries playing catch-up to the technological frontier. France has had a declining structural growth rate for the past 30 years, at first principally because of declining labor hours/poor labor market policies and then compounded by slowing productivity and population growth. The mess that is Russia has surely had a highly variable structural growth rate since the end of the Cold War. The United States today, very debatably, seems to be undergoing at least some kind of significant structural change in economic growth as well, though perhaps not as drastic.


Source: Margaret Jacobson, "Behind the Slowdown of Potential GDP"

III. Possible solutions to the problem of changing structural growth

There are really only two possible solutions to this problem for a central bank to adopt.

First, you can accept the higher inflation, and pray to the Solow residual gods that the technological growth rate doesn't drop further and push steady state inflation even higher. I find this solution completely unacceptable. Higher long term inflation is simply never a good thing; but even if you don't feel that strongly, you at least should feel extremely nervous about risking the possibility of extremely high steady state inflation.

Second, you can allow the central bank to periodically adjust the NGDP target rate (or target path) to account for perceived changes to the structural growth rate. For example, in the original hypothetical, the Fed would simply change its NGDP target path to grow at 4% instead of the previous 5%, so that real income grows at 2% and inflation continues at 2%.
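As a minimal sketch of what this second solution amounts to in practice (the function and the numbers are purely illustrative, not anything the Fed or any market monetarist has actually proposed):

```python
def adjusted_ngdp_target(potential_growth_estimate: float,
                         inflation_goal: float = 0.02) -> float:
    """Discretionary rule: reset the NGDP growth target whenever the central
    bank's estimate of potential real growth changes, so that steady-state
    inflation stays at the inflation goal."""
    return potential_growth_estimate + inflation_goal

# In the hypothetical above, potential growth falls from 3% to 2%,
# so the Fed cuts its NGDP target from 5% to 4%.
print(adjusted_ngdp_target(0.03))  # roughly 0.05
print(adjusted_ngdp_target(0.02))  # roughly 0.04
```

The whole rule hinges on the input: the Fed has to produce that estimate of potential growth somehow, which is exactly the problem discussed below.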

IV. This is bad – and particularly bad for market monetarists

This second solution, I think, is probably what Michael Woodford, Brad DeLong, Paul Krugman, and other non-monetarist backers of NGDP targeting would support. Indeed, Woodford writes in his Jackson Hole paper, "It is surely true – and not just in the special model of Eggertsson and Woodford – that if consensus could be reached about the path of potential output, it would be desirable in principle to adjust the target path for nominal GDP to account for variations over time in the growth of potential." (p. 46-7) Miles Kimball notes the same argument: in the New Keynesian framework, an NGDP target rate should be adjusted for changes in potential.

However – here's the kicker – allowing the Fed to change its NGDP target is extremely problematic for some of the core beliefs held by market monetarists. (Market monetarism as a school of thought is about more than just NGDP targeting – see Christensen (2011) – contra some.) Let me walk through a list of these issues now; by the end, I hope it will be clear why I think that Scott Sumner and others have not discussed this issue enough.

IVa. The Fed shouldn't need a structural model

For the Fed to be able to change its NGDP target to match the changing structural growth rate of the economy, it needs a structural model that describes how the economy behaves. This is the practical issue facing NGDP targeting (level or rate). However, the quest for an accurate structural model of the macroeconomy is an impossible pipe dream: the economy is simply too complex. There is no reason to think that the Fed's structural model could do a good job predicting technological progress. And under NGDP targeting, the Fed would be entirely dependent on that structural model.

Ironically, two of Scott Sumner's big papers on futures market targeting are titled, "Velocity Futures Markets: Does the Fed Need a Structural Model?" with Aaron Jackson (their answer: no), and "Let a Thousand Models Bloom: The Advantages of Making the FOMC a Truly 'Open Market'".

In these, Sumner makes the case for tying monetary policy to a prediction market, and in this way having the Fed adopt the market consensus model of the economy as its model of the economy, instead of using an internal structural model. Since the price mechanism is, in general, extremely good at aggregating dispersed information, this model would outperform anything internally developed by our friends at the Federal Reserve Board.

If the Fed had to rely on an internal structural model to adjust the NGDP target to match structural shifts in potential growth, this elegance would be completely lost! But it's more than just a loss of elegance: it's a huge roadblock to effective monetary policymaking, since the accuracy of said model would be highly questionable.

IVb. Rules are better than discretion

Old Monetarists always strongly preferred a monetary policy based on well-defined rules rather than discretion. This is for all the now-familiar reasons: the time-inconsistency problem; preventing political interference; creating accountability for the Fed; etc. Market monetarists are no different in championing rule-based monetary policy.

Giving the Fed the ability to modify its NGDP target is simply an absurd amount of discretionary power. It's one thing to give the FOMC the ability to decide how best to achieve its target, whether that be 2% inflation or 5% NGDP. It's another matter entirely to allow it to change that NGDP target at will. It removes all semblance of accountability, as the Fed could simply move the goalposts whenever it misses; and of course it entirely recreates the time inconsistency problem.

IVc. Expectations need to be anchored

Closely related to the above is the idea that monetary policy needs to anchor nominal expectations, perhaps especially at the zero lower bound. Monetary policy in the current period can never be separated from expectations about future policy. For example, if Janet Yellen is going to mail trillion dollar coins to every American a year from now, I am – and hopefully you are too – going to spend all of my or your dollars ASAP.

Because of this, one of the key necessary conditions for stable monetary policy is the anchoring of expectations for future policy. Giving the Fed the power to discretionarily change its NGDP target wrecks this anchor completely!

Say the Fed tells me today that it's targeting a 5% NGDP level path, and I go take out a 30-year mortgage under the expectation that my nominal income (which remember is equal to NGDP in aggregate) will be 5% higher year after year after year. This is important as my ability to pay my mortgage, which is fixed in nominal terms, is dependent on my nominal income.

But then Janet Yellen turns around and tells me tomorrow, "Joke's on you pal! We're switching to a 4% level target." It's simply harder for risk-averse consumers and firms to plan for the future when there's so much possible variation in future monetary policy.

IVd. Level targeting exacerbates this issue

Further, level targeting exacerbates this entire issue. The push for level targeting over growth rate targeting is at least as important to market monetarism as the push for NGDP targeting over inflation targeting, for precisely the reasoning described above. To keep expectations on track, and thus not hinder firms and households trying to make decisions about the future, the central bank needs to make up for past mistakes, i.e. level target.

However, when the central bank can change the target growth rate, level targeting has issues beyond even those of rate targeting. In particular: what happens if the Fed misses the level target one year, and decides at the start of the next to change the target growth rate of the level path?

For instance, say the Fed had adopted a 5% NGDP level target in 2005, which it maintained successfully in 2006 and 2007. Then a massive crisis hits in 2008, and the Fed misses its target for, say, three years running. By 2011, it looks like the structural growth rate of the economy has also slowed. Now, agents in the economy have to wonder: is the Fed going to try to return to its 5% NGDP path? Or is it going to shift down to a 4.5% path and not make up the difference? And will the base year of that new path be 2011? Or 2008?
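To put rough numbers on how much this ambiguity matters (all figures hypothetical, continuing the example above):

```python
# Hypothetical: NGDP is indexed to 100 in 2008, when the 5% level path is adopted.
# The crisis hits; by 2011 actual NGDP is only 105, well below the ~115.8 that the
# original path calls for. Compare the 2015 level target under two possible
# Fed responses.
original_path_2015 = 100 * 1.05 ** 7    # stick with the 2008-based 5% path: ~140.7
rebased_path_2015 = 105 * 1.045 ** 4    # rebase to 2011, slow the path to 4.5%: ~125.2

gap = original_path_2015 / rebased_path_2015 - 1
print(round(original_path_2015, 1), round(rebased_path_2015, 1), round(gap, 2))
```

That is roughly a 12% difference in the level of nominal income that a household signing a 30-year mortgage has to guess between – exactly the kind of uncertainty level targeting was supposed to eliminate.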

(Note: I am aware that had the Fed been implementing NGDPLT in 2008 the crisis would have been much less severe, perhaps not even a recession! The above is for illustration.)

(Also, I thank Joe Mihm for this point.)

IVe. This problem for NGDP targeting is analogous to the velocity instability problem for Friedman's k-percent rule

Finally, I want to make an analogy that hopefully emphasizes why I think this issue is so serious. Milton Friedman long advocated that the Fed adopt a rule whereby it would promise to keep the money supply (M2, for Friedman) growing at a steady rate of perhaps 3%. Recalling the equation of exchange, MV = PY, we can see that when velocity is constant, the k-percent rule is equivalent to NGDP targeting!
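Taking growth rates of the equation of exchange makes the analogy exact:

$$\frac{\dot M}{M} + \frac{\dot V}{V} = \frac{\dot P}{P} + \frac{\dot Y}{Y} = g_{NGDP},$$

so when velocity is constant, money growth equals NGDP growth, and a k-percent rule for M is a k-percent rule for NGDP.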

In fact, velocity used to be quite stable:


Source: FRED

For the fifteen or twenty years after 1963, when Friedman and Schwartz published A Monetary History, the rule probably would have worked brilliantly. But between high inflation and financial innovation in the late 70s and 80s, the stable relationship between velocity, income, and interest rates began to break down, and the k-percent rule would have been a disaster. This is because velocity – roughly the inverse of real, income-adjusted money demand – is a structural, real variable that depends on the technology of the economy and household preferences.

The journals of the 1980s are somewhat famously a graveyard of structural velocity models: attempts to find a universal model that could accurately explain past movements in velocity and predict future ones. It was a hopeless task: the economy is simply too complex. (I link twice to the same Hayek essay for a reason.) Hence the title of the Sumner and Jackson paper already referenced above.

Today, instead of hopelessly modeling money demand, we have economists engaged in the even more hopeless task of attempting to develop a structural model for the entire economy. Even today, when the supply side of the economy really changes very little year-to-year, we don't do that good of a job at it.

And (this is the kicker) what happens if the predictability of the structural growth rate breaks down to the same extent that the predictability of velocity broke down in the 1980s? What if, instead of the structural growth rate only changing a handful of basis points each year, we have year-to-year swings in the potential growth rate on the order of whole percentage points? I.e., one year the structural growth is 3%, but the next year it's 5%, and the year after that it's 2.5%?

I know that at this point I'm probably losing anybody who has bothered to read this far, but I think this scenario is entirely more likely than most people might expect. Rapidly accelerating technological progress in the next couple of decades as we reach the "back half of the chessboard", or even an intelligence explosion, could very well result in an extremely high structural growth rate that swings violently year to year.

However, it is hard to argue either for or against the techno-utopian vision I describe and link to above, since trying to estimate the future of productivity growth is really not much more than speculation. That said, it does seem to me that there are very persuasive arguments that growth will rapidly accelerate in the next couple of decades. I would point those interested in a more full-throated defense of this position to the work of Robin Hanson, Erik Brynjolfsson and Andrew McAfee, Nick Bostrom, and Eliezer Yudkowsky.

If you accept the possibility that we could indeed see rapidly accelerating technological change, an "adaptable NGDP target" would essentially force the future Janet Yellen to engage in an ultimately hopeless attempt to predict the path of the structural growth rate and to chase after it. I think it's clear why this would be a disaster.

V. An anticipation of some responses

Before I close this out, let me anticipate four possible responses.

1. NGDP variability is more important than inflation variability

Nick Rowe makes this argument here, and Sumner does as well, sort of, here. Ultimately, I think this is a good point, because of the problem of incomplete financial markets described by Koenig (2013) and Sheedy (2014): debt is priced in fixed nominal terms, and thus the ability to repay is dependent on nominal incomes.

Nevertheless, the fact that NGDP targeting has other good things going for it does not resolve the problem that if the potential growth rate falls, the long-run inflation rate will be higher. This is welfare-reducing for all the standard reasons. Because of this, it seems to me that there's not really a good way of determining whether NGDP level targeting or price level targeting is closer to optimal, and it's certainly not the case that NGDPLT is the monetary policy regime to end all other monetary policy regimes.

2. Target NGDP per capita instead!

You might argue that if the most significant reason that the structural growth rate could fluctuate is changing population growth, then the Fed should just target NGDP per capita. Indeed, Scott Sumner has often mentioned that he actually would prefer an NGDP per capita target. To be frank, I think this is an even worse idea! This would require the Fed to have a long term structural model of demographics, which is just a terrible prospect to imagine.

3. Target nominal wages/nominal labor compensation/etc. instead!

Sumner has also often suggested that perhaps nominal aggregate wage targeting would be superior to targeting NGDP, but that it would be too politically controversial. Funnily enough, the basic New Keynesian model with wage stickiness instead of price stickiness (and no zero lower bound) would recommend the same thing.

I don't think this solves the issue. Take the neoclassical growth or Solow model with Cobb-Douglas technology and preferences and no population growth. On the balanced growth path, the growth rate of wages = the potential growth rate of the economy = the growth rate of technology. For more general production functions and preferences, wages and output still grow at the same rate.
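To spell out the Cobb-Douglas case: with production $Y = K^{\alpha}(AL)^{1-\alpha}$ and a competitive labor market, the real wage is the marginal product of labor,

$$w = (1-\alpha)\frac{Y}{L}.$$

On the balanced growth path the capital-output ratio is constant and $L$ is fixed by assumption, so $Y$, and hence $w$, grows at the rate of labor-augmenting technology $A$.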

In other words, the growth rate of real wages parallels that of the potential growth rate of the economy. So this doesn't appear to solve anything, as it would still require a structural model.

4. Set up a prediction market for the structural growth rate!

I don't even know if this would work well with Sumner's proposal. But perhaps it would. In that case, my response is… stay tuned for my critique of market monetarism, part two: why handing policymaking over to prediction markets is a terrible idea.

VI. In conclusion

The concerns I outline above have driven me from an evangelist for NGDP level targeting to someone extremely skeptical that any central banking policy can maintain monetary equilibrium. The idea of optimal policy under NGDP targeting necessitating a structural model of the economy disturbs me, for a successful such model – as Sumner persuasively argues – will never be built. The prospect that NGDP targeting might collapse in the face of rapidly accelerating technological growth worries me, since it does seem to me that this very well could occur. And even setting aside the techno-utopianism, the historical examples described above, such as Japan in the 1980s, demonstrate that we have seen very large shifts in the structural growth rate in actual real-world economies.

I want to support NGDPLT: it is probably superior to price level or inflation targeting anyway, because of the incomplete markets issue. But unless there is a solution to this critique that I am missing, I am not sure that NGDP targeting is a sustainable policy for the long term, let alone the end of monetary history.








Dec 28, 2013

In 2008, Christina and David Romer published an interesting paper demonstrating that FOMC members are useless at forecasting economic conditions compared to the Board of Governors staff, and presented some evidence that mistaken FOMC economic forecasts were correlated with monetary policy shocks.

I've updated their work with another decade of data, and find that while the FOMC remained bad at forecasting over the extended period, the poor forecasting was not correlated with monetary policy shocks.

First, some background.

Background
Before every FOMC meeting, the staff at the Board of Governors produces the Greenbook, an in-depth analysis of current domestic and international economic conditions and, importantly for us, forecasts of all kinds of economic indicators a year or two out. The Greenbook is only released to the public with a major lag, so the last data we have is from 2007.

The FOMC members – the governors and regional bank presidents – prepare consensus economic forecasts twice a year, usually February and July, as part of the Monetary Policy Report they must submit to Congress. (Since October 2007, FOMC members have prepared projections at four FOMC meetings per year. That data, from the end of 2007, is not included in my dataset here, but I'll probably put it in when I update it in the future as more recent Greenbooks are released.)

Summary of Romer and Romer (2008)
The Romers took around 20 years of data from these two sources, from 1979 to 2001, and compared FOMC forecasts to staff forecasts. They estimate a regression of the form

$$X_t = \alpha + \beta S_t + \gamma P_t + \varepsilon_t$$

where X is the realized value of the variable (e.g. actual GDP growth in year t+1), S is the staff's projection of the variable (e.g. the staff's projected GDP growth next year), and P is the FOMC's projection of the variable (e.g. the FOMC's projected GDP growth next year).
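For concreteness, here is roughly what estimating that regression looks like (a sketch only: the file and column names are placeholders, not the Romers' or my actual code):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Placeholder dataset: one row per forecast date, containing the realized value
# of the variable, the staff (Greenbook) forecast, and the FOMC forecast.
df = pd.read_csv("fomc_vs_staff_forecasts.csv")

# X_t = a + b*S_t + c*P_t + e_t, run separately for inflation, real GDP growth,
# and unemployment. If FOMC forecasts add no information, c should be near zero.
fit = smf.ols("realized ~ staff_forecast + fomc_forecast", data=df).fit()
print(fit.summary())
```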

They find "not just that FOMC members fail to add information, but that their efforts to do so are counterproductive." Policymakers were no good at forecasting over this period.

They then ask whether these mistaken forecasts lead the FOMC to make policy errors, i.e. monetary policy shocks. The two use their own Romer and Romer (2004) measure, which I've updated here, as the measure of monetary policy shocks. They then regress the shock measure on the differences between the FOMC and staff forecasts:

$$M_t = a + b_{\pi}(P^{\pi}_t - S^{\pi}_t) + b_{y}(P^{y}_t - S^{y}_t) + b_{u}(P^{u}_t - S^{u}_t) + e_t$$

where M is the measure of shocks, and P and S are as before, here for inflation, real growth, and unemployment respectively. They only ran this regression from 1979 through 1996, as that was the latest the measure of shocks went up to in the 2004 paper.

They find that, "The estimates suggest that forecast differences may be one source of monetary shocks… An FOMC forecast of inflation one percentage point higher than the staff forecast is associated with an unusual rise in the federal funds rate of approximately 30 basis points."

That seemed like a very interesting result to me when I first read this paper. Could bad monetary policymaking be explained by the hubris of policymakers who thought they could forecast economic conditions better than the staff? It turns out, after I updated the data, this result does not hold.

Updating the data
I followed the same methodology as when I updated Romer and Romer (2004): first replicating the data to ensure I had the correct method before collecting the new data and updating. The data is from 1979 through 2007, and all my work is available here and here.

I find, first, that policymakers remained quite poor economic forecasters. Here is the updated version of Table 1 from the paper, with the old values for comparison:

The coefficients on the FOMC forecasts for inflation and unemployment are still right around zero, indicating that FOMC forecasts for these two variables contain no useful information.

However, it appears that once we extend the monetary policy shock regression from 1996 to 2007, the second result – that forecast differences are a source of monetary policy shocks – does not hold. Here is the updated version of Table 2 from the paper, again with old values for comparison:

When the Romers published their paper, the R-squared on the regression of monetary shocks on all three variables was 0.17. This wasn't exactly the strongest correlation, but for the social sciences it's not bad, especially considering that the monetary shock measure is fairly ad hoc.

As we can see in the updated regression, the R-squared is down to 0.05 with the extended data. That is just too weak a relationship to mean much. Thus, unfortunately, this result does not appear to hold.








Dec 21, 2013

Update: Now updated through 2008. Interestingly, the 2008 shock is not exceptionally large:


Original post:

I've updated the Romer and Romer (2004) series of monetary policy shocks. The main takeaway is this graph of monetary policy shocks by month, since 1969, where the gray bars indicate recession:

When the two published their paper, they only had access to data up through 1996, since Fed Greenbooks – upon which the series is based – are released with a large lag. I've updated it through 2007, the latest available, and will update it again next month when the 2008 Greenbooks are released.

The two interesting points in the new data are

  1. The negative policy shock before and during the 2001 recession
  2.  The negative policy shock in 2007 before the Great Recession

Below I'll go into the more technical notes of how this measure is constructed and my methodology, but the graph and the two points above are the main takeaway.

How is the R&R measure constructed?
First, the Romers derive a series of intended changes in the federal funds rate. (This is easy starting in the 1990s, since the FOMC began announcing when it wanted to change the FFR; before that, the two had to trawl through meeting minutes to figure it out.) They then use the Fed's internal Greenbook forecasts of inflation and real growth to control the intended FFR series for monetary policy actions taken in response to information about future economic developments, specifically RGDP growth, inflation, and unemployment.

In other words, they regress the change in the intended FFR around forecast dates on RGDP growth, inflation and unemployment. Then, as they put it, "Residuals from this regression show changes in the intended funds rate not taken in response to information about future economic developments. The resulting series for monetary shocks should be relatively free of both endogenous and anticipatory actions."

The equation they estimate is:

$$\Delta ff_m = \alpha + \beta\, ffb_m + \sum_{i=-1}^{2}\left[\gamma_i\, \Delta y_{mi} + \lambda_i\,(\Delta y_{mi} - \Delta y_{m-1,i})\right] + \sum_{i=-1}^{2}\left[\varphi_i\, \pi_{mi} + \theta_i\,(\pi_{mi} - \pi_{m-1,i})\right] + \rho\, u_{m0} + \varepsilon_m$$

where:
  • Δff is the change in the intended FFR around meeting m
  • ffb is the level of the target FFR before the change in the meeting m (included to capture any mean reversion tendency)
  • π, Δy, and u are the forecasts of inflation, real output growth, and the unemployment rate; note both that the current forecast and the change since the last meeting are used
  • The i subscripts refer to the horizon of the forecast: -1 is the previous quarter, 0 the current quarter, 1 the next quarter, 2 the next next quarter
  • All relative to the date of the forecast corresponding to meeting m; i.e. if the meeting is in early July 1980 and the forecast is in late June 1980, the contemporaneous forecast is for the second quarter of 1980
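Mechanically, constructing the shock series amounts to running that regression and keeping the residuals; a rough sketch (placeholder file and column names, not the original MATLAB code):

```python
import pandas as pd
import statsmodels.api as sm

# Placeholder dataset: one row per FOMC meeting, with the intended funds-rate
# change, its pre-meeting level, and Greenbook forecasts (levels and revisions)
# of inflation and real output growth at horizons -1 through +2, plus the
# current-quarter unemployment forecast.
gb = pd.read_csv("greenbook_by_meeting.csv")

horizons = ["m1", "0", "1", "2"]  # previous, current, next, and next-next quarter
cols = ["ffr_level_before"]
cols += [f"infl_fcst_{h}" for h in horizons]   # inflation forecasts
cols += [f"infl_rev_{h}" for h in horizons]    # revisions since the last meeting
cols += [f"rgdp_fcst_{h}" for h in horizons]   # output growth forecasts
cols += [f"rgdp_rev_{h}" for h in horizons]    # revisions since the last meeting
cols += ["unemp_fcst_0"]                       # current-quarter unemployment

X = sm.add_constant(gb[cols])
fit = sm.OLS(gb["d_intended_ffr"], X).fit()

# Residuals = intended funds-rate changes not explained by the Fed's own
# forecasts, i.e. the monetary policy shock series.
shocks = fit.resid
```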

The Romers show in their paper that, by this measure, negative monetary policy shocks have large and significant effects on output and the price level.

It is worth noting the limitations of this measure. It is based on the federal funds rate instrument, which is not a very good indicator of the stance of monetary policy. Additionally, if the FOMC changes its target FFR between meetings, any shock associated with that decision would not be captured by this measure.

Results
First, I replicated the Romer and Romer (2004) results to confirm I had the correct method. Then I collected the new data in Excel and ran the regression specified above in MATLAB. The data is available here and here (though there might have been errors when uploading to Google Drive).

The residuals are shown above in graph form; it is an updated version of figure 1a in Romer and Romer (2004).

The coefficients and statistics on the regression are (this is an updated version of table 1 in the original paper):

Last, for clarity, I have the monetary policy shock measure below with the top five extremes removed. This makes some periods more clear, especially the 2007 shock. Again, I will update this next month when the 2008 Greenbooks are released. It should be very interesting to see how large a negative shock there was in 2008.









 
Confidence level: very high

Written with very high confidence, and ready to be seen by the world on the same level as polished research.

 
Confidence level: high

Written with substantial confidence, though open to revision.

 
Confidence level: medium

Written with some confidence, while acknowledging uncertainty and/or the possibility of gaps in my knowledge.

 
Confidence level: low

This is something I currently believe, but I also estimate a nontrivial probability that my view will change later.

 
Confidence level: zero

I don't believe this or no longer believe this, in part or in whole.









Monetary economics

  1. Monetary misperceptions, food banks, and NGDP targeting
  2. The Fed's preferred model says that now is not the time to raise rates
  3. A practical critique of NGDP targeting
  4. NGDP targeting and the Friedman Rule

Asset pricing

  1. The "Efficient Restaurant Hypothesis": a mental model for finance (and food)
  2. Behavioral biases don't affect stock prices
  3. Yes, markets are efficient – *and* yes, stock prices are predictable

Behavioral economics

  1. Loss aversion is not what you think it is