Thursday, September 14, 2017

Consumer Forecast Revisions: Is Information Really so Sticky?

My paper "Consumer Forecast Revisions: Is Information Really so Sticky?" was just accepted for publication in Economics Letters. This is a short paper that I believe makes an important point. 

Sticky information models are one way of modeling imperfect information. In these models, only a fraction (λ) of agents update their information sets each period. If λ is low, information is quite sticky, and that can have important implications for macroeconomic dynamics. There have been several empirical approaches to estimating λ. With micro-level survey data, a non-parametric and time-varying estimate of λ can be obtained by calculating the fraction of respondents who revise their forecasts (say, for inflation) at each survey date. Estimates from the Michigan Survey of Consumers (MSC) imply that consumers update their information about inflation approximately once every 8 months.
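With micro panel data in hand, the non-parametric estimate is just the share of respondents whose forecast changed between survey waves, and 1/λ gives the implied average time between updates. A minimal sketch with synthetic, made-up data (the 40% updating share and forecast values are assumptions for illustration, not survey estimates):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-wave panel (all numbers are invented for illustration):
# each respondent reports an inflation forecast at two survey dates.
n = 1000
prev = rng.integers(0, 6, size=n).astype(float)
curr = prev.copy()
updaters = rng.random(n) < 0.4                 # suppose 40% truly update
curr[updaters] += rng.normal(0.0, 1.0, updaters.sum())

# Non-parametric estimate of lambda: share whose forecast changed.
lam = np.mean(prev != curr)

# Implied average spell between updates, in survey periods: 1/lambda.
mean_duration = 1.0 / lam
```

With λ around 0.4 per period, the implied spell between updates is about 2.5 periods; the MSC-based estimates described above correspond to a much lower λ.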

I show that two issues with estimates of information stickiness based on consumer survey microdata lead to substantial underestimation of the frequency with which consumers update their expectations. The first issue stems from data frequency. The rotating panel of Michigan Survey of Consumers (MSC) respondents takes the survey twice, six months apart. A consumer may have the same forecast at months t and t+6 but different forecasts in between. The second issue is that responses are reported to the nearest integer. A consumer may update her information, but if the update results in a sufficiently small revision, it will appear that she has not updated her information.
To quantify how much these issues matter, I use data from the New York Fed Survey of Consumer Expectations, which is available monthly and not rounded to the nearest integer. Updating frequency computed with these data is very high--at least 5 revisions per 8 months, as opposed to the 1 revision per 8 months found in the previous literature.

Then I transform the data so that it resembles the MSC data. First I round the responses to the nearest integer, which makes the estimated updating frequency decrease a little. Then I sample the data at the six-month frequency instead of monthly. This makes the estimated updating frequency decrease a lot, and I find estimates similar to the previous literature--updates about every 8 months.
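The two transformations can be mimicked with simulated data (all parameters below are assumptions for illustration, not estimates from either survey): rounding hides small revisions, and six-month sampling caps how many revisions can be detected at all.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate 13 monthly forecasts per consumer as a random walk with
# small monthly updates (sizes are assumptions, not survey estimates).
N, T = 5000, 13
forecasts = 2.0 + np.cumsum(rng.normal(0.0, 0.3, size=(N, T)), axis=1)

def detected_revisions(x):
    """Average number of forecast changes detected per consumer."""
    return np.mean(np.sum(x[:, 1:] != x[:, :-1], axis=1))

exact    = detected_revisions(forecasts)                    # every month differs
rounded  = detected_revisions(np.round(forecasts))          # rounding hides small moves
msc_like = detected_revisions(np.round(forecasts[:, ::6]))  # months 0, 6, 12 only
```

In this simulation every consumer truly revises every month, yet the rounded monthly data detect only a fraction of those revisions, and the rounded six-month data can detect at most two per year.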

So low-frequency data, and, to a lesser extent, rounded responses, result in large underestimates of revision frequency (or equivalently, overestimates of information stickiness). And if information is not really so sticky, then sticky information models may not be as good at explaining aggregate dynamics. Other classes of imperfect information models, or sticky information models combined with other classes of models, might be better.

Read the ungated version here. I will post a link to the official version when it is published.

Monday, August 21, 2017

New Argument for a Higher Inflation Target

Philippe Aghion, Antonin Bergeaud, Timo Boppart, Peter Klenow, and Huiyu Li discuss their recent work on the measurement of output and whether measurement bias can account for the measured slowdown in productivity growth. While the work is mostly relevant to discussions of the productivity slowdown and secular stagnation, I was interested in a corollary that ties it to discussions of the optimal level of the inflation target.

The authors note the high frequency of "creative destruction" in the US, which they define as when "products exit the market because they are eclipsed by a better product sold by a new producer." This presents a challenge for statistical offices trying to measure inflation:
The standard procedure in such cases is to assume that the quality-adjusted inflation rate is the same as for other items in the same category that the statistical office can follow over time, i.e. products that are not subject to creative destruction. However, creative destruction implies that the new products enter the market because they have a lower quality-adjusted price. Our study tries to quantify the bias that arises from relying on imputation to measure US productivity growth in cases of creative destruction.
They explain that this can lead to mismeasurement of TFP growth, which they quantify by examining changes in the share of incumbent products over time:
If the statistical office is correct to assume that the quality-adjusted inflation rate is the same for creatively destroyed products as for surviving incumbent products, then the market share of surviving incumbent products should stay constant over time. If instead the market share of these incumbent products shrinks systematically over time, then the surviving subset of products must have higher average inflation than creatively destroyed products. For a given elasticity of substitution between products, the more the market share shrinks for surviving products, the more the missing growth.
From 1983 to 2013, they estimate that "missing growth" averaged about 0.63% per year. This is substantial, but there is no clear time trend (i.e. there is not more missed growth in recent years), so it can't account for the measured productivity growth slowdown.
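A stylized rendering of the quoted logic (my simplification for illustration, not the paper's full specification): if surviving incumbents' market share falls from S0 to S1 over n years, and sigma is the elasticity of substitution between products, annual missing growth is roughly (1/(sigma-1)) ln(S0/S1)/n. The numbers below are hypothetical.

```python
import math

# Stylized calculation (an assumption-laden simplification, not the
# paper's full model): shrinking incumbent market share implies that
# imputation understates quality-adjusted growth.
sigma = 4.0           # assumed elasticity of substitution
S0, S1 = 1.00, 0.93   # hypothetical incumbent market shares
n = 4                 # years elapsed

missing_growth = (1.0 / (sigma - 1.0)) * math.log(S0 / S1) / n
# roughly 0.006, i.e. on the order of the 0.6% per year the authors find
```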

The authors suggest that the Fed should consider adjusting its inflation target upwards to "get closer to achieving quality-adjusted price stability." A few months ago, 22 economists including Joseph Stiglitz and Narayana Kocherlakota wrote a letter urging the Fed to consider raising its inflation target, in which they stated:
Policymakers must be willing to rigorously assess the costs and benefits of previously-accepted policy parameters in response to economic changes. One of these key parameters that should be rigorously reassessed is the very low inflation targets that have guided monetary policy in recent decades. We believe that the Fed should appoint a diverse and representative blue ribbon commission with expertise, integrity, and transparency to evaluate and expeditiously recommend a path forward on these questions.
The letter did not mention this measurement bias rationale for a higher target, but the blue ribbon commission they propose should take it into consideration.

Friday, August 18, 2017

The Low Misery Dilemma

The other day, Tim Duy tweeted:

It took me a moment--and I'd guess I'm not alone--to even recognize how remarkable this is. The New York Times ran an article with the headline "Fed Officials Confront New Reality: Low Inflation and Low Unemployment." Confront, not embrace, not celebrate.

The misery index is the sum of unemployment and inflation. Arthur Okun proposed it in the 1960s as a crude gauge of the economy, based on the fact that high inflation and high unemployment are both miserable (so high values of the index are bad). The misery index was pretty low in the 60s, in the 6% to 8% range, similar to where it has been since around 2014. Now it is around 6%. Great, right?

The NYT article notes that we are in an opposite situation to the stagflation of the 1970s and early 80s, when both high inflation and high unemployment were concerns. The misery index reached a high of 21% in 1980. (The unemployment data is only available since 1948).

Very high inflation and high unemployment are each individually troubling for the social welfare costs they impose (which are more obvious for unemployment). But observed together, they also troubled economists for seeming to run contrary to the Phillips curve-based models of the time. The tradeoff between inflation and unemployment wasn't what economists and policymakers had believed, and their misunderstanding probably contributed to the misery.

Though economic theory has evolved, the basic Phillips curve tradeoff idea is still an important part of central bankers' models. By models, I mean both the formal quantitative models used by their staffs and the way they think about how the world works. General idea: if the economy is above full employment, that should put upward pressure on wages, which should put upward pressure on prices.

So low unemployment combined with low inflation seem like a nice problem to have, but if they are indeed a new reality-- that is, something that will last--then there is something amiss in that chain of logic. Maybe we are not at full employment, because the natural rate of unemployment is a lot lower than we thought, or we are looking at the wrong labor market indicators. Maybe full employment does not put upward pressure on wages, for some reason, or maybe we are looking at the wrong wage measures. For example, San Francisco Fed researchers argue that wage growth measures should be adjusted in light of retiring Baby Boomers. Or maybe the link between wage and price inflation has weakened.

Until policymakers feel confident that they understand why we are experiencing both low inflation and low unemployment, they can't simply embrace the low misery. It is natural that they will worry that they are missing something, and that the consequences of whatever that is could be disastrous. The question is what to do in the meanwhile.

There are two camps for Fed policy. One camp favors a wait-and-see approach: hold rates steady until we actually observe inflation rising above 2%. Maybe even let it stay above 2% for a while, to make up for the lengthy period of below-2% inflation. The other camp favors raising rates preemptively, just in case we are missing some sign that inflation is about to spiral out of control. This latter possibility strikes me as unlikely, but I'm admittedly oversimplifying the concerns, and also haven't personally experienced high inflation.

Thursday, August 10, 2017

Macro in the Econ Major and at Liberal Arts Colleges

Last week, I attended the 13th annual Conference of Macroeconomists from Liberal Arts Colleges, hosted this year by Davidson College. I also attended the conference two years ago at Union College. I can't recommend this conference strongly enough!

The conference is a response to the increasing expectation of high quality research at many liberal arts colleges. Many of us are the only macroeconomist at our college, and can't regularly attend macro seminars, so the conference is a much-needed opportunity to receive feedback on work in progress. (The paper I presented last time just came out in the Journal of Monetary Economics!)
This time, I presented "Inflation Expectations and the Price at the Pump" and discussed Erin Wolcott's paper, "Impact of Foreign Official Purchases of U.S. Treasuries on the Yield Curve."

There was a wide range of interesting work. For example, Gina Pieters presented “Bitcoin Reveals Unofficial Exchange Rates and Detects Capital Controls.” M. Saif Mehkari's work on “Repatriation Taxes” is highly relevant to today's policy discussions. Most of the presenters and attendees were junior faculty members, but three more senior scholars held a panel discussion at dinner. Next year, the conference will be held at Wake Forest.

I also attended a session on "Macro in the Econ Major" led by PJ Glandon. A link to his slides is here. One slide presented the image below, prompting an interesting discussion about whether and how we should tailor what is taught in macro courses to our perception of the students' interests and career goals.

Monday, August 7, 2017

Labor Market Conditions Index Discontinued

A few years ago, I blogged about the Fed's new Labor Market Conditions Index (LMCI). The index attempts to summarize the state of the labor market using a statistical technique that captures the primary common variation from 19 labor market indicators. I was skeptical about the usefulness of the LMCI for a few reasons. And as it turns out, the LMCI is now discontinued as of August 3.

The discontinuation is newsworthy because the LMCI was cited in policy discussions at the Fed, even by Janet Yellen. The index became high-profile enough that I was even interviewed about it on NPR's Marketplace.

One issue that I noted with the index in my blog was the following:
A minor quibble with the index is its inclusion of wages in the list of indicators. This introduces endogeneity that makes it unsuitable for use in Phillips Curve-type estimations of the relationship between labor market conditions and wages or inflation. In other words, we can't attempt to estimate how wages depend on labor market tightness if our measure of labor market tightness already depends on wages by construction.
This corresponds to one reason that is provided for the discontinuation of the index: "including average hourly earnings as an indicator did not provide a meaningful link between labor market conditions and wage growth."

The other reasons provided for discontinuation are that "model estimates turned out to be more sensitive to the detrending procedure than we had expected" and "the measurement of some indicators in recent years has changed in ways that significantly degraded their signal content."

I also noted in my blog post and on NPR that the index is almost perfectly correlated with the unemployment rate, meaning it provides very little additional information about labor market conditions. (Or, interpreted differently, meaning that the unemployment rate provides a lot of information about labor market conditions.) The development of the LMCI was part of a worthy effort to develop alternative informative measures of labor market conditions that can help policymakers gauge where we are relative to full employment and predict what is likely to happen to prices and wages. Since resources and attention are limited, I think it is wise that they be directed toward developing and evaluating other measures.

Thursday, July 27, 2017

The Obesity Code and Economists as General Practitioners

"The past generation, like several generations before it, has indeed been one of greater and greater specialization…This advance has not been attained without cost. The price has been the loss of minds, or the neglect to develop minds, trained to cope with the complex problems of today in the comprehensive, overall manner called for by such problems.”
The above quote may sound like a recent criticism of economics, but it actually comes from a 1936 article, "The Need for `Generalists,'" by A. G. Black, Chief of the Bureau of Agricultural Economics, in the Journal of Farm Economics (p. 657). Just 12 years prior, John Maynard Keynes penned his much-quoted description of economists for a 1924 obituary of Alfred Marshall:
“The study of economics does not seem to require any specialized gifts of an unusually high order. Is it not, intellectually regarded, a very easy subject compared with the higher branches of philosophy or pure science? An easy subject at which few excel! The paradox finds its explanation, perhaps, in that the master-economist must possess a rare combination of gifts. He must be mathematician, historian, statesman, philosopher—in some degree. He must understand symbols and speak in words. He must contemplate the particular in terms of the general and touch abstract and concrete in the same flight of thought. He must study the present in the light of the past for the purposes of the future. No part of man’s nature or his institutions must lie entirely outside his regard” (p. 321-322).
While Keynes celebrated economist-as-generalist, Black's complaint about the trend of overspecialization, coupled with excessive "mathiness" and insularity, continues. This often-fair criticism frequently comes from within the profession--because who loves thinking about economists more than economists? But at the same time, there is a trend in the opposite direction, a trend towards taking an increasing scope of nature and institutions into regard. Nowhere is this more obvious than among the blogging, tweeting, punditing economists with whom I associate.

The other day, for example, Miles Kimball wrote a bunch of tweets about the causes of obesity. When I liked one of his tweets (because it suggested cheese is not bad for you, and how could I not like that?) he asked if I would read "The Obesity Code" by Jason Fung, which he blogged about earlier, and respond to the evidence.

I was quick to agree. Only later did I pause to consider the irony that I feel more confident in my ability to evaluate scholarship from a medical field in which I have zero training than I often feel when asked to review scholarship or policies in my field of supposed expertise, monetary economics. Am I being a master-economist à la Keynes, or simply irresponsible?

Kenneth Arrow's 1963 "Uncertainty and the Welfare Economics of Medical Care" is credited with the birth of health economics. In this article, Arrow notes that he is concerned only with the market and non-market institutions of medical services, and not health itself. But since then, health economics has broadened in scope, and incorporates the study of actual health outcomes (like obesity), to both acclaim and criticism. See my earlier post about economics research on depression, or consider the extremely polarized ratings of economist Emily Oster's book on pregnancy (which I like very much).

Jason Fung himself is not an obesity researcher, but rather a physician who specializes in end-stage kidney disease requiring dialysis. The foreword to his book remarks, "His credentials do not obviously explain why he should author a book titled The Obesity Code," before going on to justify his decision. So, while duly aware of my limited credentials, I feel willing to at least comment on the book and point out many of its parallels with macroeconomic research.

The Trouble with Accounting Identities (and Counting Calories)

Fung's book takes issue with the dominant "Calories In/Calories Out" paradigm for weight loss. This idea-- that to lose weight, you need to consume fewer calories than you burn-- is based on the First Law of Thermodynamics: energy can neither be created nor destroyed in an isolated system. Fung obviously doesn't dispute the Law, but he disputes its application to weight loss, premised on "Assumption 1: Calories In and Calories Out are independent of each other."

Fung argues that in response to a reduction in Calories In, the body will reduce Calories Out, citing a number of studies in which underfed participants started burning far fewer calories per day, becoming cold and unable to concentrate as the body expended fewer resources on heating itself and on brain functioning.

In economics, an accounting identity is an equality that by definition or construction must be true. Every introductory macro course teaches national income accounting: GDP=C+I+G+NX. What happens, we may ask, if government spending (G) increases by $1000? Most students would probably guess that GDP would also increase by $1000, but this relies on a ceteris paribus assumption. If consumption (C), investment (I), and net exports (NX) stay the same when G rises, then yes, GDP must rise by $1000. But if C, I, or NX is not independent of G, the response of GDP could very well be quite different. For example, in the extreme case that the government spending completely crowds out private investment, so I falls by $1000 (with no change in C or NX), then GDP will not change at all.
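The worked example can be written out directly (the dollar levels are hypothetical):

```python
def gdp(C, I, G, NX):
    """National income accounting identity: GDP = C + I + G + NX."""
    return C + I + G + NX

base = gdp(C=12000, I=3000, G=3500, NX=-500)    # hypothetical levels

# Naive ceteris paribus reading: G rises by 1000, nothing else moves.
naive = gdp(C=12000, I=3000, G=4500, NX=-500)

# Full crowding out: I falls one-for-one with the rise in G.
crowded = gdp(C=12000, I=2000, G=4500, NX=-500)
```

The identity holds in both scenarios; only the assumption about how the other components respond differs, and that assumption determines the answer.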

The First Law of Thermodynamics is also an accounting identity. It is true that if Calories In exceed Calories Out, we will gain weight, and vice versa. But it is not true that reducing Calories In leaves Calories Out unchanged. And according to Fung, Calories Out may respond so strongly to Calories In, almost one-for-one, that sustained weight loss will not occur.
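Fung's argument can be caricatured with a toy simulation (all numbers are assumptions for illustration, not physiology): suppose the body closes half of the gap between Calories Out and Calories In each week, so a fixed cut in intake produces a shrinking deficit rather than a sustained one.

```python
# Stylized dynamics: expenditure drifts toward intake, eroding the
# caloric deficit that the Calories In/Calories Out model treats as fixed.
cal_in, cal_out = 1500.0, 2000.0   # dieter cuts intake below expenditure
adjust = 0.5                       # assumed fraction of the gap closed per week

deficits = []
for week in range(20):
    deficits.append(cal_out - cal_in)
    cal_out -= adjust * (cal_out - cal_in)  # Calories Out responds to Calories In
```

The first-week deficit is large, but within a few months it has nearly vanished, which is the mechanism behind Fung's claim that sustained weight loss fails under simple caloric restriction.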

Proximate and Ultimate Causes

A caloric deficit (Calories In < Calories Out) is a proximate cause of weight loss, and a caloric surplus a proximate cause of weight gain. But proximate causes are not useful for policy prescriptions. Think again about the GDP=C+I+G+NX accounting identity. This tells us that we can increase GDP by increasing consumption. Great! But how can we increase consumption? We need to know the deeper determinants of the components of GDP. Telling a patient to increase her caloric deficit to lose weight is as practical as advising a government to boost consumption to achieve higher GDP, and neither effort is likely to be very sustainable. So most of Fung's book is devoted to exploring what he claims are the ultimate causes of body weight and the policy implications that follow.

Set Points 

One of the most important concepts in Fung's book is the "set point" for body weight. This is the weight at which the body "wants" to be; weight loss is unlikely to persist, as the body returns to its set weight by reducing basal energy expenditure.

An important question is what determines the set point. A second set of questions surrounds what happens away from the set point. In other words, what are the mechanisms by which homeostasis occurs? In macro models, too, we may focus on finding the equilibrium and on the off-equilibrium dynamics. The very idea that the body has a set point is controversial, or at least counterintuitive, as is the existence of certain set points in macro, especially the natural rate of unemployment.

The set point, according to Fung, is all about insulin. Reducing insulin levels and insulin resistance allows fat burning (lipolysis) so the body has plenty of energy coming in without the need to lower basal metabolism; this is the only way to reduce the set point. The whole premise of his hormonal obesity theory rests on this. It is the starting point for his explanation of why obesity occurs and what to do about it. Obesity occurs when the body's set point gradually increases over time, as insulin levels and insulin resistance rise in a "vicious cycle." So we need to understand the mechanisms behind the rise and the dynamics of this cycle.

Mechanisms and Models

Fung goes into great detail about the workings of insulin and related hormones and their interactions with glucose and fructose. This background all aims to support his proposals about the causes of obesity. The causes are multifactorial, but mostly have to do with the composition and timing of the modern diet (what and when we eat). Culprits include refined and processed foods, added sugar, the emphasis on low-fat/high-carb diets, high cortisol levels from stress and sleep deprivation, and frequent snacking. Fung cites dozens of empirical studies, some observational and others controlled trials, to support his hormonal obesity theory.

Here I am not entirely sure how closely to draw a parallel to economics. Macroeconomists also rely on models that lay out a series of mechanisms, and use (mostly) observational and (rarely) experimental data to test them, and like epidemiological researchers face challenges of endogeneity and omitted variable bias. But are biological mechanisms inherently different than economic ones because they are more observable, stable, and falsifiable? My intuition says yes, but I don't know enough about medical and biological research to be sure. Fung does not discuss the research behind scientists' knowledge of how hormones work, but only the research on health and weight outcomes associated with various nutritional strategies and drugs.

At the beginning of the book, Fung announces his refusal to even consider animal studies. This somewhat surprises me, as I thought that finding a result consistently across species could strengthen our conclusions, and mechanisms are likely to be similar, but he seems to view animal studies as totally uninformative for humans. If that is true, then why do we use animals in medical research at all?

Persistence Creates Resistance

So how does the body's set point rise enough that a person becomes obese? Fung claims that the Calories In/Calories Out model neglects the time-dependence of obesity, noting that it is much easier for a person who has been overweight for only a short while to lose weight. If someone has been overweight a long time, it is much harder, because they have developed insulin resistance. Insulin levels normally rise and fall over the course of the day without causing any problem. But persistently high levels of insulin, a hormonal imbalance, result in insulin resistance, leading to yet higher levels of insulin, and yet greater insulin resistance (and weight gain). Fung uses the cycle of antibiotic resistance as an analogy for the development of insulin resistance:
Exposure causes resistance...As we use an antibiotic more and more, organisms resistant to it are naturally selected to survive and reproduce. Eventually, these resistant organisms dominate, and the antibiotic becomes useless (p. 110).
He also uses the example of drug resistance: a cocaine user needs ever greater doses. "Drugs cause drug resistance" (p. 111). Macroeconomics provides its own metaphors. In the early conception of the Phillips Curve, it was believed that the inverse relationship between unemployment and inflation could be exploited by policymakers. Just as a doctor who wants to cure a bacterial infection may prescribe antibiotics, a policymaker who wants lower unemployment must just tolerate a little higher inflation. But the trouble with following such a policy is that as that higher inflation persists, people's expectations adapt. They come to expect higher inflation in the future, and that expectation is self-fulfilling, so it takes yet higher inflation to keep unemployment at or below its "set point."
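The accelerationist logic in that last paragraph can be sketched as a textbook adaptive-expectations Phillips curve (the parameters are made up, not a calibrated model): inflation equals expected inflation minus a slope times the unemployment gap, and expectations adapt to last period's inflation.

```python
# Accelerationist Phillips curve with adaptive expectations:
#     pi_t = pi_e_t - a * (u_t - u_star),   pi_e_{t+1} = pi_t
a, u_star = 0.5, 5.0   # assumed slope and natural rate
pi_e = 2.0             # initial expected inflation
path = []
for _ in range(10):
    u = 4.0                        # unemployment held below u_star
    pi = pi_e - a * (u - u_star)   # inflation overshoots expectations
    path.append(pi)
    pi_e = pi                      # expectations adapt; the cycle repeats
```

Holding unemployment below its "set point" delivers ever-rising inflation: each period's surprise is absorbed into expectations, so the same unemployment gap requires yet higher inflation, just as resistance builds with repeated doses.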

Institutional Interest and Influence

How a field evaluates evidence and puts it into practice--and even what research is funded and publicized--depends on the powerful institutions in the field and their vested interests, even if the interest is merely in saving face. According to Fung, the American Heart Association (AHA), snack food companies, and doctors repeatedly ignored evidence against the low-fat, high-carb diet and the Calories In/Calories Out model to make money or save face. His criticisms of the AHA, in particular, are reminiscent of those against the IMF for the policies it has imposed through the conditions of its loans.

Of course, the critics themselves may have biases or vested interests. Fung himself quite likely neglected to mention a number of studies that did not fit his theory. In an effort to sell books and promote his website and reputation, he very likely is oversimplifying and projecting more-than-warranted confidence. So how do I evaluate the book overall, and will I follow its recommendations for myself and my family?

First, while the book's title emphasizes obesity, it doesn't seem to be written only for readers who are or are becoming obese. It is not clear whether the recommendations presented in this book are useful for people who are already maintaining a healthy weight, but he certainly never suggests otherwise. And for a book so focused on hormones, he makes shockingly little distinction between male and female dietary needs and responses. Since I am breastfeeding twins, and am a still-active former college athlete, my hormonal balance and dietary needs must be far from average, and I'm not looking to lose weight. He also doesn't make much distinction between the needs of adults and kids (like my toddler).

Still, despite the fact that he presents himself as destroying the conventional wisdom on weight loss, most of his advice is unlikely to be controversial: eat whole foods, reduce added sugar, don't fear healthy fats. Before reading this book, I already knew I should try to do that, though sometimes chose not to. I was especially focused on nutrition during my twin pregnancy, and most of the advice was basically equivalent. After reading this book, I'm slightly more motivated, as I have somewhat more evidence as to why it is beneficial, and I still don't see how it could hurt. Other advice is less supported, but at least not likely to be harmful: avoid artificial sweeteners (even Stevia), eat vinegar.

His support of fasting and snack avoidance, and his views on insulin provision to diabetics, seem the least supported and most likely to be harmful. He says that fasting and skipping snacks and breakfast provides recurrent periods of very low insulin levels, reducing insulin resistance, but I don't see any concrete evidence of the length of time you must wait in between eating to reap benefits. He cites ancient Greeks, like Hippocrates of Kos, and a few case studies, as "evidence" of the benefits of fasting. Maybe it is my own proclivity for "grazing," and my observations of my two-year-old when we skip a snack, that makes me skeptical. This may work for some, but I'm in no rush to try it.

Friday, July 7, 2017

New Publication: Measuring Uncertainty Based on Rounding

For the next few weeks, you can download my new paper in the Journal of Monetary Economics for free here. The title is "Measuring uncertainty based on rounding: New method and application to inflation expectations." It became available online the same day that my twins were born (!!) but was much longer in the making, as it was my job market paper at Berkeley.

Here is the abstract:
The literature on cognition and communication documents that people use round numbers to convey uncertainty. This paper introduces a method of quantifying the uncertainty associated with round responses in pre-existing survey data. I construct micro-level and time series measures of inflation uncertainty since 1978. Inflation uncertainty is countercyclical and correlated with inflation disagreement, volatility, and the Economic Policy Uncertainty index. Inflation uncertainty is lowest among high-income consumers, college graduates, males, and stock market investors. More uncertain consumers are more reluctant to spend on durables, cars, and homes. Round responses are common on many surveys, suggesting numerous applications of this method.
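A toy illustration of the underlying signal (the paper builds a formal method around round responses; this just flags them): treat survey forecasts that are multiples of 5 as "round," and track their share as a crude uncertainty proxy. The responses below are invented.

```python
import numpy as np

# Hypothetical inflation-expectation responses from one survey wave.
responses = np.array([3, 5, 10, 2, 25, 4, 0, 15, 7, 5])

is_round = (responses % 5 == 0)    # multiples of 5 flagged as round
round_share = is_round.mean()      # share of round answers in this sample
```

Tracking this share across survey waves gives a time series in the spirit of the paper's uncertainty measure, though the paper's method goes further in mapping round responses into quantitative uncertainty.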