Tuesday, October 24, 2017

Is Taylor a Hawk or Not?

Two Bloomberg articles published just a week apart call John Taylor, a contender for Fed Chair, first hawkish, then dovish. The first, by Garfield Clinton Reynolds, notes:
...The dollar rose and the 10-year U.S. Treasury note fell on Monday after Bloomberg News reported Taylor, a professor at Stanford University, impressed President Donald Trump in a recent White House interview. 
Driving those trades was speculation that the 70 year-old Taylor would push rates up to higher levels than a Fed helmed by its current chair, Janet Yellen. That’s because he is the architect of the Taylor Rule, a tool widely used among policy makers as a guide for setting rates since he developed it in the early 1990s.
But the second, by Rich Miller, claims that "Taylor’s Walk on Supply Side May Leave Him More Dove Than Yellen." Miller explains,
"While Taylor believes the [Trump] administration can substantially lift non-inflationary economic growth through deregulation and tax changes, Yellen is more cautious. That suggests that the Republican Taylor would be less prone than the Democrat Yellen to raise interest rates in response to a policy-driven economic pick-up."
What actually makes someone a hawk? Simply favoring rules-based policy is not enough. A central banker could use a variation of the Taylor rule that implies very little response to inflation, or that allows very high average inflation. Beliefs about the efficacy of supply-side policies also do not determine hawk or dove status. Let's look at the Taylor rule from Taylor's 1993 paper:
r = p + .5y + .5(p – 2) + 2,
where r is the federal funds rate, y is the percent deviation of real GDP from target, and p is inflation over the previous 4 quarters. Taylor notes (p. 202) that lagged inflation is used as a proxy for expected inflation, and y=100(Y-Y*)/Y* where Y is real GDP and Y* is trend GDP (a proxy for potential GDP).
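To make the rule concrete, here is a minimal sketch of the 1993 rule as a function (the function name and example inputs are mine, not Taylor's):

```python
def taylor_rule_1993(p, y):
    """Taylor's (1993) rule: r = p + 0.5*y + 0.5*(p - 2) + 2.

    p: inflation over the previous four quarters (percent)
    y: percent deviation of real GDP from trend, 100*(Y - Y*)/Y*
    Returns the implied federal funds rate (percent).
    """
    return p + 0.5 * y + 0.5 * (p - 2) + 2

# At 2% inflation and a zero output gap, the rule prescribes a 4% funds
# rate: a 2% real rate plus 2% inflation.
print(taylor_rule_1993(2, 0))  # 4.0
```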

The 0.5 coefficients on the y and (p-2) terms reflect Taylor's estimate of how the Fed approximately behaved, but in general a Taylor rule could have different coefficients, reflecting the central bank's preferences. The bank could also have an inflation target p* not equal to 2, replacing (p-2) with (p-p*). Mere commitment to following a Taylor rule does not tell you what steady-state inflation rate, or how much inflation volatility, a central banker would allow. For example, a central bank could follow a rule with p*=5, a relatively large coefficient on y, and a small coefficient on (p-5), allowing inflation to be both high and volatile.
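A generalized version of the rule makes the point explicit; the parameter names and the two example calibrations below are hypothetical:

```python
def taylor_rule(p, y, p_star=2.0, a_y=0.5, a_p=0.5, r_star=2.0):
    """Generalized Taylor rule: r = p + a_y*y + a_p*(p - p_star) + r_star."""
    return p + a_y * y + a_p * (p - p_star) + r_star

# Two equally committed rule-followers, facing 5% inflation and a zero
# output gap, prescribe very different rates depending on their parameters:
hawkish = taylor_rule(p=5, y=0)                               # p* = 2: 8.5%
dovish = taylor_rule(p=5, y=0, p_star=5, a_y=1.0, a_p=0.1)    # p* = 5: 7.0%
```

Both central bankers are "rules-based," but only one of them treats 5% inflation as a problem.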

What do "supply side" beliefs imply? Well, Miller thinks that Taylor believes the Trump tax and deregulatory policy changes will raise potential GDP, or Y*. For a given value of Y, a higher estimate of Y* implies a lower estimate of y, which implies lower r. So yes, in the very short run, we could see lower r from a central banker who "believes" in supply-side economics than from one who doesn't, all else equal.
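The arithmetic of that revision is simple to check; the GDP levels below are invented for illustration:

```python
def output_gap(Y, Y_star):
    """y = 100*(Y - Y*)/Y*, percent deviation of real GDP from potential."""
    return 100 * (Y - Y_star) / Y_star

Y = 100.0
# A supply-sider who revises potential GDP up from 100 to 102 sees a more
# negative output gap for the same observed Y...
gap_baseline = output_gap(Y, 100.0)  # 0.0
gap_revised = output_gap(Y, 102.0)   # about -1.96
# ...and with a 0.5 coefficient on y, that revision alone lowers the
# prescribed funds rate by about 0.98 percentage points.
```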

But what if Y* does not really rise as much as a supply-sider central banker thinks it will? Then the too-low r will result in higher p (and Y), to which the central bank will react by raising r. As long as the central bank follows the Taylor principle (the sum of the coefficients on p and (p-p*) in the rule is greater than 1, so the nominal rate rises more than one-for-one with inflation), equilibrium long-run inflation is p*.
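A deliberately crude simulation illustrates the self-correction. This is a toy adaptive model, not anything the Fed uses: I assume inflation falls when the real rate sits above its neutral level, and all parameter values are made up.

```python
# Toy dynamics: p[t+1] = p[t] - a*(r[t] - p[t] - r_star), with the rule
# r = p + phi*(p - p_star) + r_star. Substituting gives
# p[t+1] = p[t] - a*phi*(p[t] - p_star), which converges to p_star whenever
# phi > 0, i.e. whenever the rule satisfies the Taylor principle by raising
# the real rate when inflation is above target.
a, phi, p_star, r_star = 0.5, 0.5, 2.0, 2.0
p = 4.0  # inflation after the initial too-low-r mistake pushed it above target
for _ in range(40):
    r = p + phi * (p - p_star) + r_star
    p = p - a * (r - p - r_star)
print(round(p, 4))  # back at the 2% target
```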

The parameters of the Taylor rule reflect the central bank's preferences. The right-hand-side variables, like Y*, must be measured or forecast. How well they are measured reflects a central bank's competence at measuring and forecasting, which depends on factors ranging from the strength of its staff economists to the priors of the Fed Chair to the volatility and unpredictability of other economic conditions and policies.

Neither Taylor nor Yellen seems likely to change the inflation target to something other than 2 percent (and even if they wanted to, they could not unilaterally make that decision). They do likely differ in their preferences for stabilizing inflation versus stabilizing output, and in that respect I'd guess Taylor is more hawkish.

Yellen's past efforts to look at alternative measures of labor market conditions are also about Y*. Some versions of the Taylor rule use unemployment measures instead of output measures (the idea being that the two generally comove). Willingness to consider multiple measures of employment and/or output is really just an attempt to get a better measure of how far the real economy is from "potential." It doesn't make a person inherently more or less hawkish.

As an aside, this whole discussion presumes that monetary policy itself (or more generally, aggregate demand shifts) do not change Y*. Hysteresis theories reject that premise. 

Monday, October 23, 2017

Cowen and Sumner on Voters' Hatred of Inflation

A recent Scott Sumner piece has the declarative title, "Voters don't hate inflation." Sumner is responding to a piece by Tyler Cowen in Bloomberg, where Cowen writes:
Congress insists that the Fed is “independent”...But if voters hated what the Fed was doing, Congress could rather rapidly hold hearings and exert a good deal of influence. Over time there is a delicate balancing act, where the Fed is reluctant to show it is kowtowing to Congress, so it very subtly monitors its popularity so it doesn’t have to explicitly do so.
If we imposed a monetary rule on the Fed, even a theoretically optimal rule, it would stop the Fed from playing this political game. Many monetary rules call for higher rates of price inflation if the economy starts to enter a downturn. That’s often the right economic prescription, but voters hate high inflation. 
Emphasis added, and the emphasized bit is quoted by Scott Sumner, who argues that voters don't hate inflation per se, but hate falling standards of living. He adds, "How people feel about a price change depends entirely on whether it's caused by an aggregate supply shift or a demand shift."

The whole exchange made my head hurt a bit because it turns the usual premise behind macroeconomic policy design--and specifically, central bank independence and monetary policy rules--on its head. The textbook reasoning goes something like this. Policymakers facing re-election have an incentive to pursue expansionary macroeconomic policies (positive aggregate demand shocks). This boosts their popularity, because people enjoy the lower unemployment and don't really notice or worry about the inflationary consequences.

Even an independent central bank operating under discretion faces the classic "dynamic inconsistency" problem if it tries to commit to low inflation, resulting in suboptimally high (expected and actual) inflation. So monetary policy rules (the topic of Cowen's piece) are, in theory, a way for the central bank to "bind its hands" and help it achieve lower (expected and actual) inflation. An alternative that is sometimes suggested is to appoint a central banker who is more inflation averse than the public. If the problem is that the public hates inflation, how is this a solution?

Cowen seems to argue that a monetary rule would be unpopular, and hence not fully credible, exactly when it calls for policy to be expansionary. But such a rule, in theory, would have been put into place to prevent policy from being too expansionary. Without such a rule, policy would presumably be more expansionary, so if voters hate high inflation, they would really hate removing the rule.

One issue that came up frequently at the Rethinking Macroeconomic Policy IV conference was the notion that inflationary bias, and the implications for central banking that come with it, might be a thing of the past. There is certainly something to that story in the recent low inflation environment. But I can still hardly imagine circumstances in which expansionary policy in a downturn would be the unpopular choice among voters themselves. It may be unpopular among members of Congress for other reasons-- because it is unpopular among select powerful constituents, for example-- but that is another issue. And the members of Congress who are most in favor of imposing a monetary policy rule for the Fed are also, I suspect, the most inflation averse, so I find it hard to see how the potentially inflationary nature of rules is what would (a) make them politically unpopular and (b) lead Congress to thus restrict the Fed's independence.

Friday, October 13, 2017

Rethinking Macroeconomic Policy

I had the pleasure of attending “Rethinking Macroeconomic Policy IV” at the Peterson Institute for International Economics. I highly recommend viewing the panels and materials online.

The two-day conference left me wondering what it actually means to “rethink” macro. The conference title refers to rethinking macroeconomic policy, not macroeconomic research or analysis, but of course these are related. Adam Posen’s opening remarks expressed dissatisfaction with DSGE models, VARs, and the like, and these sentiments were occasionally echoed in the other panels in the context of the potentially large role of nonlinearities in economic dynamics. Then, in the opening session, Olivier Blanchard talked about whether we need a “revolution” or “evolution” in macroeconomic thought. He leans toward the latter, while his coauthor Larry Summers leans toward the former. But what could either of these look like? How could we replace or transform the existing modes of analysis?

I looked back on the materials from Rethinking Macroeconomic Policy of 2010. Many of the policy challenges discussed at that conference are still among the biggest challenges today. For example, low inflation and low nominal interest rates limit the scope of monetary policy in recessions. In 2010, raising the inflation target and strengthening automatic fiscal stabilizers were both suggested as possible policy solutions meriting further research and discussion. Inflation and nominal rates are still very low seven years later, and higher inflation targets and stronger automatic stabilizers are still discussed, but what I don’t see is a serious proposal for change in the way we evaluate these policy proposals.

Plenty of papers use basically standard macro models and simulations to quantify the costs and benefits of raising the inflation target. Should we care? Should we discard them and rely solely on intuition? I’d say: probably yes, and probably no. Will we (academics and think tankers) ever feel confident enough in these results to make a real policy change? Maybe, but then it might not be up to us.

Ben Bernanke raised probably the most specific and novel policy idea of the conference, a monetary policy framework that would resemble a hybrid of inflation targeting and price level targeting. In normal times, the central bank would have a 2% inflation target. At the zero lower bound, the central bank would allow inflation to rise above the 2% target until inflation over the duration of the ZLB episode averaged 2%. He suggested that this framework would have some of the benefits of a higher inflation target and of price level targeting without some of the associated costs. Inflation would average 2%, so distortions from higher inflation associated with a 4% target would be avoided. The possibly adverse credibility costs of switching to a higher target would also be minimized. The policy would provide the usual benefits of history-dependence associated with price level targeting, without the problems that this poses when there are oil shocks.
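The catch-up arithmetic behind that framework is easy to make concrete. The episode length and inflation numbers below are invented for illustration, not from Bernanke's remarks:

```python
# Suppose inflation ran at 1% for three years at the ZLB. For the whole
# episode to average 2%, the cumulative shortfall (3 years x 1 point) must
# be made up before the bank reverts to its normal 2% target.
target = 2.0
zlb_inflation = [1.0, 1.0, 1.0]  # hypothetical ZLB years
shortfall = sum(target - p for p in zlb_inflation)  # 3.0 points

# For example, two years of 3.5% inflation would make up the shortfall:
makeup = [3.5, 3.5]
episode = zlb_inflation + makeup
avg = sum(episode) / len(episode)
print(avg)  # 2.0 -- the episode averages the target
```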

It’s an exciting idea, and intuitively really appealing to me. But how should the Fed ever decide whether or not to implement it? Bernanke mentioned that economists at the Board are working on simulations of this policy. I would guess that these simulations involve many of the assumptions and linearizations that rethinking types love to demonize. So again: Should we care? Should we rely solely on intuition and verbal reasoning? What else is there?

Later, Jason Furman presented a paper titled, “Should policymakers care whether inequality is helpful or harmful for growth?” He discussed some examples of evaluating tradeoffs between output and distribution in toy models of tax reform. He began with the Mankiw and Weinzierl (2006) example of a 10 percent reduction in labor taxes paid for by a lump-sum tax. In a Ramsey model with a representative agent, this policy change would raise output by 1 percent. Replacing the representative agent with agents matching the actual 2010 distribution of U.S. incomes, only 46 percent of households would see their after-tax income increase and 41 percent would see their welfare increase. More generally, he claims that “the growth effects of tax changes are about an order of magnitude smaller than the distributional effects of tax changes—and the disparity between the welfare and distribution effects is even larger” (14). He concludes:
“a welfarist analyzing tax policies that entail tradeoffs between efficiency and equity would not be far off in just looking at static distribution tables and ignoring any dynamic effects altogether. This is true for just about any social welfare function that places a greater weight on absolute gains for households at the bottom than at the top. Under such an approach policymaking could still be done under a lexicographic process—so two tax plans with the same distribution would be evaluated on the basis of whichever had higher growth rates…but in this case growth would be the last consideration, not the first” (16).

As Posen then pointed out, Furman’s paper and his discussants largely ignored the discussions of macroeconomic stabilization and business cycles that dominated the previous sessions on monetary and fiscal policy. The panelists conceded that recessions, and hysteresis in unemployment, can exacerbate economic disparities. But the fact that stabilization policy was so disconnected from the initial discussion of inequality and growth shows just how much rethinking still has not occurred.

In 1987, Robert Lucas calculated that the welfare costs of business cycles are minimal. In some sense, we have “rethought” this finding. We know that it is built on assumptions of a representative agent and no hysteresis, among other things. And given the emphasis in the fiscal and monetary policy sessions on avoiding or minimizing business cycle fluctuations, clearly we believe that the costs of business cycle fluctuations are in fact quite large. I doubt many economists would agree with the statement that “the welfare costs of business cycles are minimal.” Yet, the public finance literature, even as presented at a conference on rethinking macroeconomic policy, still evaluates welfare effects of policy using models that totally omit business cycle fluctuations, because, within those models, such fluctuations hardly matter for welfare. If we believe that the models are “wrong” in their implications for the welfare effects of fluctuations, why are we willing to take their implications for the welfare effects of tax policies at face value?

I don’t have a good alternative—but if there is a Rethinking Macroeconomic Policy V, I hope some will be suggested. The fact that the conference speakers are so distinguished is both an upside and a downside. They have the greatest understanding of our current models and policies, and in many cases were central to developing them. They can rethink, because they have already thought, and moreover, they have large influence and loud platforms. But they are also quite invested in the status quo, for all they might criticize it, in a way that may prevent really radical rethinking (if it is really needed, which I’m not yet convinced of). (A more minor personal downside is that I was asked multiple times whether I was an intern.)

If there is a Rethinking Macroeconomic Policy V, I also hope that there will be a session on teaching and training. The real rethinking is going to come from the next generations of economists. How do we help them learn and benefit from the current state of economic knowledge without being constrained by it? This session could also touch on continuing education for current economists. What kinds of skills should we be trying to develop now? What interdisciplinary overtures should we be making?