Monday, February 13, 2017

Thoughts on Angrist and Pischke's "Undergraduate Econometrics Instruction"

Joshua Angrist and Jörn-Steffen Pischke, coauthors of "Mastering 'Metrics," have just released a new NBER working paper called "Undergraduate Econometrics Instruction: Through Our Classes, Darkly." They argue that pedagogy has not kept pace with trends in economic research in the past few decades:
In the 1960s and 1970s, an empirical economist’s typical mission was to “explain” economic variables like wages or GDP growth. Applied econometrics has since evolved to prioritize the estimation of specific causal effects and empirical policy analysis over general models of outcome determination. Yet econometric instruction remains mostly abstract, focusing on the search for “true models” and technical concerns associated with classical regression assumptions. Questions of research design and causality still take a back seat in the classroom, in spite of having risen to the top of the modern empirical agenda. This essay traces the divergent development of econometric teaching and empirical practice, arguing for a pedagogical paradigm shift.
The "pedagogical paradigm shift" they call for would include three main components:
One is a focus on causal questions and empirical examples, rather than models and math. Another is a revision of the anachronistic classical regression framework, away from explaining economic processes and towards controlled statistical comparisons. The third is an emphasis on modern quasiexperimental tools.  
Since I am relatively new to both teaching and economics-- I didn't major in economics as an undergraduate, and did my Ph.D. from 2010 to 2015-- the first economics course that I designed and taught at Haverford quite naturally adhered to many of Angrist and Pischke's recommendations. The course, which I taught in Fall 2015 and Fall 2016, is called Advanced Macroeconomics, but is essentially an applied econometrics course on empirical macroeconomic policy analysis. The students in the course are typically juniors and seniors who have already taken econometrics.

On the first day of class, we read excerpts from the 1968 paper "Monetary and Fiscal Actions: A Test of Their Relative Importance in Economic Stabilization" by Andersen and Jordan. The authors want to test whether "the response of economic activity to fiscal actions relative to that of monetary actions is (1) greater, (2) more predictable, and (3) faster." They use very simple regression analysis, essentially regressing changes in GNP on changes in measures of monetary and fiscal actions. This type of regression is now called a "St. Louis Equation," since Andersen and Jordan were at the St. Louis Fed. I ask my students to interpret the regression results and evaluate the validity of the authors' conclusions about policy effectiveness. With some prodding, the students come up with some ideas about potential omitted variable bias and data concerns. But they don't think about reverse causality or the idea of a "controlled statistical comparison." I introduce the reverse causality issue, and much of the rest of the course focuses on quasiexperimental tools.
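A stylized version of such a regression (my notation, not Andersen and Jordan's exact specification) looks something like

$$\Delta Y_t = \alpha + \sum_{i=0}^{k} \beta_i \, \Delta M_{t-i} + \sum_{i=0}^{k} \gamma_i \, \Delta F_{t-i} + \varepsilon_t,$$

where $\Delta Y_t$ is the change in nominal GNP, $\Delta M$ is the change in a measure of monetary actions, and $\Delta F$ is the change in a measure of fiscal actions. The policy conclusions then rest on the relative size, statistical significance, and timing of the estimated $\beta$ and $\gamma$ coefficients-- which is exactly where concerns about omitted variables and reverse causality bite.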

The course has no textbook, but we use "Natural Experiments in Macroeconomics" by Nicola Fuchs-Schundeln and Tarek Hassan as the main reference. The course has four units: consumption, monetary policy, fiscal policy, and growth and distribution. In each unit, I assign natural experiment or quasiexperimental papers as well as other papers that attempt to achieve identification via other means, to varying degrees of success. The reading list was influenced by Christina Romer and David Romer's graduate course on Macroeconomic History at Berkeley, which introduced me to the notion of identification and ignited my interest in macroeconomics.

Angrist and Pischke also argue that "Regression should be taught the way it’s now most often used: as a tool to control for confounding factors" in contrast to "the traditional regression framework in which all regressors are treated equally." In other words, the coefficient of interest is on one of the regressors, while the other regressors serve as "control variables needed to insure that the regression-estimated effect of the variable of interest has a causal interpretation."
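In notation (mine, not a quote from the paper), the point is to teach students to read a regression like

$$Y_i = \alpha + \beta X_i + \gamma' W_i + \varepsilon_i$$

asymmetrically: $\beta$, the coefficient on the policy or treatment variable $X_i$, is the object of interest, while the control variables $W_i$ are included only to make comparisons across observations with different values of $X_i$ more nearly apples-to-apples. The elements of $\gamma$ are not themselves given a causal interpretation.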

This advice on teaching regression resonates with my experience co-teaching the economics senior thesis seminar at Haverford for the past two years. Over the summer, my research assistant Alex Rodrigue read through several years' worth of senior theses in the archives and documented the research question in each thesis. We noticed that many students use research questions of the form "What are the factors that affect Y?" and run a regression of Y on all the variables they can think of, treating all regressors equally and not attempting to investigate any particular causal relationship from one variable X to Y. The more successful theses posit a causal relationship from X to Y driven by specific economic mechanisms, then use regression analysis and other methods to estimate and interpret the effect. The latter type of thesis has more pedagogical benefits, whether or not the student can ultimately achieve convincing identification, because it leads the student to think more seriously about economic mechanisms.

Sunday, January 8, 2017

Post-Election Political Divergence in Economic Expectations

"Note that among Democrats, year-ahead income expectations fell and year-ahead inflation expectations rose, and among Republicans, income expectations rose and inflation expectations fell. Perhaps the most drastic shifts were in unemployment expectations:rising unemployment was anticipated by 46% of Democrats in December, up from just 17% in June, but for Republicans, rising unemployment was anticipated by just 3% in December, down from 41% in June. The initial response of both Republicans and Democrats to Trump’s election is as clear as it is unsustainable: one side anticipates an economic downturn, and the other expects very robust economic growth."
This is from Richard Curtin, Director of the Michigan Survey of Consumers. He is comparing the economic sentiments and expectations of Democrats, Independents, and Republicans who took the survey in June and December 2016. A subset of survey respondents take the survey twice, with a six-month gap. So these are the respondents who took the survey before and after the election. The results are summarized in the table below, and really are striking, especially with regard to unemployment. Inflation expectations also rose for Democrats and fell for Republicans (and the way I interpret the survey data is that most consumers see inflation as a bad thing, so lower inflation expectations mean greater optimism).

Notice, too, that self-declared Independents are more optimistic after the election than before. More of them are expecting lower unemployment and fewer are expecting higher unemployment. Inflation expectations also fell from 3% to 2.3%, and income expectations rose. Of course, this is likely based on a very small sample size.
Source: Richard Curtin, Michigan Survey of Consumers

Saturday, December 31, 2016

Pushing the Boundaries of Economics

As a macroeconomist, I mostly research the types of concepts that are more traditionally associated with economics, like inflation and interest rates. But one of the great things about economics training, in my opinion, is that it is general enough to let you follow much of what is going on in other fields. It is always interesting for me to read papers or attend seminars in applied microeconomics to see the wide (and expanding) scope of the discipline.

Gary Becker won the Nobel Prize in 1992 "for having extended the domain of microeconomic analysis to a wide range of human behaviour and interaction, including nonmarket behaviour" and "to aspects of human behavior which had previously been dealt with by other social science disciplines such as sociology, demography and criminology." The Freakonomics books and podcast have gone a long way in popularizing this approach. But it is not without its critics, both within and outside the profession.

For all that the economic way of thinking and the quantitative tools of econometrics can add in addressing a boundless variety of questions, there is also much that our analysis and tools leave out. In areas like health or criminology, the assumptions and calculations that seem perfectly reasonable to an economist may seem anywhere from misguided to offensive to a medical doctor or criminologist. Roland Fryer's working paper on racial differences in police use of force, for example, was prominently covered with both praise and criticism.

Another NBER working paper, released this week by Jonathan de Quidt and Johannes Haushofer, is also pushing the boundaries of economics, arguing that "depression has not received significant attention in the economics literature." By depression, they are referring to major depressive disorder (MDD), not a particularly severe recession. While neither of the authors holds a medical degree, Haushofer holds doctorates in both economics and neurobiology. In "Depression for Economists," they build a model in which individuals choose to exert either high or low effort; depression is induced by a negative "shock" to an individual's belief about her return to high effort.

In the model, the individual's income depends on her effort, amount of sleep, and food consumption. Her utility depends on her sleep, food consumption, and non-food consumption. She maximizes utility given her belief about her return to effort, which she updates in a Bayesian manner. If her belief about her return to effort declines (synonymous in the model with becoming depressed), she exerts less labor effort. Her total (food and non-food) consumption and utility unambiguously decrease, leading to "depressed mood." In the extreme, she may reduce her labor effort to zero, at which point she would stop learning more about her return to effort and get stuck in a "poverty trap."
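A minimal sketch of this setup, in my own notation rather than the authors' exact specification, is a utility maximization problem of the form

$$\max_{e,\, s,\, c_f,\, c_n} \; u(s, c_f, c_n) \quad \text{subject to} \quad c_f + c_n \le y(e, s, c_f; \hat{\theta}),$$

where $e$ is labor effort, $s$ is sleep, $c_f$ and $c_n$ are food and non-food consumption, and income $y$ is increasing in effort at a rate governed by $\hat{\theta}$, the individual's current belief about her return to effort. A negative shock to $\hat{\theta}$ is what the model calls depression: the perceived payoff to effort falls, so effort, income, consumption, and utility fall with it. And because beliefs are updated from observed outcomes, an individual who stops exerting effort also stops generating the information that could revise $\hat{\theta}$ upward.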

The depressed individual's sleeping and food consumption may either increase or decrease, as consumption motives become more important relative to production motives. In other words, she sleeps and eats closer to the amounts that she would choose if she cared only about the utility directly from sleeping and eating, and not about how her sleeping and eating choices affect her ability to produce.

While this result does match the empirical findings in the medical literature that depression may either reduce or increase sleep duration and lead to either over- or under-eating, it seems implausible to me that depressed individuals sleep ten or more hours a day because they just love sleeping, or lose their appetite because they don't enjoy food beyond its ability to help them be productive. I'm not an expert, but from what I understand there are physiological and chemical reasons for the change in sleep patterns and appetite that could be independent of a person's beliefs about their returns to labor effort.

However, the authors argue that an "advantage of our model is that it resonates with prominent psychological and psychiatric theories of depression, and the therapeutic approaches to which they gave rise." They refer in particular to "Charles Ferster, who argued that depression resulted from an overexposure to negative reinforcement and underexposure to positive reinforcement in the environment (Ferster 1973)...Ferster’s account of the etiology of depression is in line with how we model depression here, namely as a consequence of exposure to negative shocks." They also refer to the work of psychiatrist Aaron Beck (1967), whose suggestion that depression arises from "distorted thinking" motivates the use of Cognitive Behavioral Therapy (CBT), a standard treatment for depression.

The authors note that "Our main goal in writing this paper was to give economists a starting point for thinking and writing about depression using the language of economics. We have therefore kept the model as simple as possible." They also steer clear of suggesting any policy implications (other than implicitly providing support for CBT). It will be fascinating to see whether and how the medical community responds, and also to hear from economists who have themselves experienced depression.

Monday, December 5, 2016

The Future is Uncertain, but So Is the Past

In a recently-released research note, Federal Reserve Board economists Alan Detmeister, David Lebow, and Ekaterina Peneva summarize new survey results on consumers' inflation perceptions. The well-known Michigan Survey of Consumers asks consumers about their expectations of future inflation (over the next year and over the next 5 to 10 years), but does not ask them what they believe inflation has been in recent years.

In many macroeconomic models, inflation perceptions should be nearly perfect. After all, inflation statistics are publicly available, and anyone should be able to access them. The Federal Reserve commissioned questions on the Michigan Survey of Consumers asking about perceptions of inflation over the past year and over the past 5 to 10 years, using wording analogous to the questions about inflation expectations. As you might guess, consumers lack perfect knowledge of inflation in the recent past. If you're like most people (which, by dint of reading an economics blog, you are probably not), you probably haven't looked up inflation statistics or read the financial news recently.

But more surprisingly, consumers seem at least as uncertain about past inflation as they are about future inflation, if not more so. Take a look at these histograms of inflation perceptions and expectations from the February 2016 survey data:

Source: December 5 FEDS Note

Compare Panel A to Panel C. Panel A shows consumers' perceptions of inflation over the past 5 to 10 years, and Panel C shows their expectations for the next 5 to 10 years. Both panels show a great deal of dispersion, or variation across consumers. But also notice the response heaping at multiples of 5%. In both panels, over 10% of respondents choose 5%, and you also see more 10% responses than either 9% or 11% responses. In a working paper, I show that this response heaping is indicative of high uncertainty. Consumers choose a 5%, 10%, 15%, etc. response to indicate high imprecision in their estimates of future inflation. So it is quite surprising that even more consumers choose the 10%, 15%, 20%, and 25% responses for perceptions of past inflation than for expectations of future inflation.
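As a rough illustration of what a heaping-based measure looks like (a simple sketch, not the exact statistic in my working paper or the FEDS Note, and the responses below are made up), one could compute the share of responses that land on non-zero multiples of five:

```python
import numpy as np

def share_at_multiples_of_five(responses):
    """Share of responses at 5%, 10%, 15%, ... (excluding 0): a crude heaping measure."""
    r = np.round(np.asarray(responses, dtype=float))
    return float(((r % 5 == 0) & (r != 0)).mean())

# Hypothetical responses: heavier heaping suggests greater imprecision/uncertainty.
past_inflation_perceptions = [2, 3, 5, 5, 10, 10, 15, 1, 4, 20]
future_inflation_expectations = [2, 2, 3, 5, 3, 10, 4, 1, 2, 5]
print(share_at_multiples_of_five(past_inflation_perceptions))     # 0.6
print(share_at_multiples_of_five(future_inflation_expectations))  # 0.3
```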

The response heaping at multiples of 5% is also quite substantial for short-term inflation perceptions (Panel B). Without access to the underlying data, I can't tell for sure whether it is more or less prevalent than for expectations of future short-term inflation, but it is certainly noticeable.

What does this tell us? People are just as unsure about inflation in the relatively recent past as they are about inflation in the near- to medium-run future. And this says something important for monetary policymakers. A goal of the Federal Reserve is to anchor medium- to long-run inflation expectations at the 2% target. With strongly-anchored expectations, we should see most expectations near 2% with low uncertainty. If people are uncertain about longer-run inflation, it could either be that they are unaware of the Fed's inflation target, or aware but unconvinced that the Fed will actually achieve its target. It is difficult to say which is the case. The former would imply that the public needs to be better informed about economic concepts and the Fed, while the latter would imply that the Fed needs to improve its credibility among an already-informed public. Since perceptions are about as uncertain as expectations, this lends support to the idea that people are simply uninformed about inflation-- or that memory of economic statistics is relatively poor.

Thursday, November 10, 2016

Political Pressures on the Fed and the Trump Presidency

On Monday evening, Charles Weise gave a seminar at Haverford on "Political Pressures on Monetary Policy during the U.S. Great Inflation," a paper he published in 2012. In the paper, he details how Congress and presidents (especially Nixon) pressured the Fed, both directly and indirectly, to pursue the loose monetary policy that contributed to the Great Inflation of the 1970s.

The paper highlights the fact that although the Fed is nominally independent, Congress and the President can influence the Fed's actions by threatening to restrict the Fed's independence. This is not necessarily a bad thing. One way to try to make the Fed accountable to the public is to make the Fed accountable to publicly-elected officials. This can be achieved by several (imperfect) means-- hearings and testimonies and other transparency requirements, the appointment process, and (threatened) legislation. Problems arise when the interests of the elected officials are not in line with the interests of the electorate. In the 1970s, for example, Nixon's interest in maintaining low unemployment at the cost of high and rising inflation was for the sake of political gain and neglected adverse long-run consequences. Problems can also arise when the interests of elected officials are in line with those of the public, but elected officials' understanding of monetary policy is severely flawed.

It was coincidental that this talk was the evening before Election Day. The candidates' views on the Fed got less attention than many other issues and aspects of the campaign, but they did come up from time to time. Donald Trump, for example, claimed that "We are in a very big, ugly bubble...The Fed is more political than Hillary Clinton.”

Now, a big question is what Trump's election will mean for the future of the Fed. Beyond the relatively minor issue of whether this unexpected election result will cause the Fed to postpone its next rate hike, the larger issues have to do with legislation and future appointments.

In the Great Recession and ever since, we have seen many calls and proposals for more accountability for the Fed from both sides of the political spectrum. Most of these have at least some merit, even if they are misguided to varying degrees. They stem from a recognition that the Fed is powerful, and that its actions affect the distribution of resources and the health of the global economy. But the types of legislation that Trump seems likely to support would drastically restrict the Fed's independence and discretion-- he has even mentioned a desire to return to the gold standard.

Moreover, no legislation designed to promote accountability can be effective unless monetary policymakers are well-qualified technocrats who can skillfully implement policy. Janet Yellen's term as Fed Chair ends in 2018, and Trump has suggested that he will not reappoint her. This would represent a departure from the pattern established by Obama's reappointment of Ben Bernanke, who was originally appointed by George W. Bush. Obama's reappointment of Bernanke signalled that the Fed Chair was a technocratic position, not a partisan one. Yellen, like Bernanke, is well-credentialed for her post. Vice Chair Stanley Fischer's term also ends in 2018, and there are two other open seats on the Board of Governors. Monetary policy is complex enough that even a well-intentioned policymaker without substantial knowledge and skill could spell trouble. A policymaker who is neither well-intentioned nor highly skilled would almost guarantee disaster.

Finally, monetary policy will interact with other economic policies. Lower long-run growth means lower natural interest rates. This means that we are already uncomfortably close to the zero lower bound, and almost certain to hit it again with the next recession. Severely restrictive trade and immigration policy will even further reduce the economy's capacity for growth, compounding this problem.

Saturday, October 15, 2016

Independence at the CFPB and the Fed

One of my major motivations in starting this blog a few years ago was to have a space to grapple with the topic of central bank independence and accountability. One of the most important things I have learned since then is that independence and accountability are highly multi-dimensional concepts; different institutions can be granted different types of independence, and can fail to be accountable in countless ways. As a corollary, nominal or de jure independence does not guarantee de facto independence. Likewise, an institution may be accountable in name only.

A recent ruling by the U.S. Court of Appeals for the District of Columbia about the independence of the Consumer Financial Protection Bureau (CFPB) highlights the complexity of these issues. The CFPB was created under the Dodd-Frank Act of 2010. On Tuesday, a three-judge panel declared that this agency's particular form of independence is unconstitutional. Most notably, the Director of the CFPB-- currently Richard Cordray-- is removable only by the President, and only for cause.

The petitioner in the case, the mortgage lender PHH Corporation, which had been subject to a large fine from the CFPB, argued that the CFPB's structure violates Article II of the Constitution. The Appeals Court's decision provides some historical context:
"To carry out the executive power and be accountable for the exercise of that power, the President must be able to control subordinate officers in executive agencies. In its landmark decision in Myers v. United States, 272 U.S. 52 (1926), authored by Chief Justice and former President Taft, the Supreme Court therefore recognized the President’s Article II authority to supervise, direct, and remove at will subordinate officers in the Executive Branch.

In 1935, however, the Supreme Court carved out an exception to Myers and Article II by permitting Congress to create independent agencies that exercise executive power. See Humphrey’s Executor v. United States, 295 U.S. 602 (1935). An agency is considered “independent” when the agency heads are removable by the President only for cause, not at will, and therefore are not supervised or directed by the President. Examples of independent agencies include well-known bodies such as the Federal Communications Commission, the Securities and Exchange Commission, the Federal Trade Commission, the National Labor Relations Board, and the Federal Energy Regulatory Commission... To help mitigate the risk to individual liberty, the independent agencies, although not checked by the President, have historically been headed by multiple commissioners, directors, or board members who act as checks on one another. Each independent agency has traditionally been established, in the Supreme Court’s words, as a “body of experts appointed by law and informed by experience."
The decision goes on to add that "No head of either an executive agency or an independent agency operates unilaterally without any check on his or her authority. Therefore, no independent agency exercising substantial executive authority has ever been headed by a single person. Until now."

Although the Federal Reserve, unlike the CFPB, has a seven-member Board of Governors, several aspects of their governance are similar: the CFPB Director, like the seven members of the Federal Reserve Board of Governors, is nominated by the President and approved by the Senate. The CFPB Director's term length is 5 years, compared to 14 years for the Governors-- but importantly, both have terms longer than the 4-year Presidential term. The Chair and Vice Chair of the Fed are nominated from the Governors by the President and approved by the Senate for a 4-year term. Both the CFPB Director and the Fed Chair are required to give semi-annual reports to Congress. See these resources for a more detailed comparison of the structure and governance of independent federal agencies.

I find it striking that the phrase individual liberty appears 32 times in the 110-page decision. The very first paragraph states, "This is a case about executive power and individual liberty. The U.S. Government’s executive power to enforce federal law against private citizens – for example, to bring criminal prosecutions and civil enforcement actions – is essential to societal order and progress, but simultaneously a grave threat to individual liberty."

Even though both the CFPB and the Fed have substantial financial regulatory authority, the discourse on Federal Reserve independence does not focus so heavily on liberty (I've barely come across the word at all in my readings on the subject); instead, it focuses on independence as a potential threat to accountability. As I have previously written, "the term accountability has become 'an ever-expanding concept,'" and one that is often not usefully defined. The same might be said for the term liberty. Still, the two terms have different connotations. Accountability requires that the institution carry out its responsibilities satisfactorily, while liberty is more about what the institution doesn't do.

Accountability is a key concept in the literature on delegation of tasks to technocrats or politicians. In "Bureaucrats or Politicians?," Alberto Alesina and Guido Tabellini (2007) build a model in which politicians are held accountable by their desire for re-election, while top-level bureaucrats are held accountable by "career concerns." The social desirability of delegating a task to an unelected bureaucrat depends on how the task affects the distribution of resources or advantages-- and thus, on the strength of interest-group political pressure. As Alan Blinder writes:
"Some public policy decisions have -- or are perceived to have -- mostly general impacts, affecting most citizens in similar ways. Monetary policy, for example...is usually thought of as affecting the whole economy rather than particular groups or industries. Other public policies are more naturally thought of as particularist, conferring benefits and imposing costs on identifiable groups...When the issues are particularist, the visible hand of interest-group politics is likely to be most pernicious -- which would seem to support delegating authority to unelected experts. But these are precisely the issues that require the heaviest doses of value judgments to decide who should win and lose. Such judgments are inherently and appropriately political. It's a genuine dilemma."
The Federal Reserve's Congressional mandate is to promote price stability and maximum employment. Federal Reserve independence is intended to promote these objectives by alleviating political pressure to pursue overly accommodative monetary policy. Of course, as we have seen in recent years, the interest-group politics of central banking are more nuanced than a simple desire by incumbents for inflation. Interest rate policy and inflation affect different segments of the population in different ways. The CFPB is supposed to enforce federal consumer financial laws and protect consumers in financial markets. The average benefit of the CFPB to an individual consumer is probably fairly small, while the costs of regulation and enforcement to a smaller number of financial companies are large. This asymmetry means that political pressure on a financial regulator like the CFPB (or on the Fed, in its regulatory role) is likely to come from the side of the financial institutions. In Blinder's logic, this confers a large value on the delegation of authority to technocrats, while at the same time raising the importance of accountability for political legitimacy.

Tyler Cowen writes, "I say the regulatory state already has too much arbitrary power, and this [appeals court ruling] is a (small) move in the right direction." It is not the reduction of the regulatory state's power that will necessarily enhance either accountability or liberty, but the reduction of the arbitrariness of that power. This can come about through transparency (which the Fed typically cites as key to the maintenance of accountability), making policies and enforcement more predictable, less retroactive, and less uncertain. I don't know that the types of governance changes implied by the Appeals Court ruling (if it holds) will substantially affect the CFPB's transparency or make it any less capable of pursuing its goals, as I tend to agree with Senator Elizabeth Warren's interpretation that the ruling will only require “a small technical tweak.”

Tuesday, September 27, 2016

Why are Long-Run Inflation Expectations Falling?

Randal Verbrugge and I have just published a Federal Reserve Bank of Cleveland Economic Commentary called "Digging into the Downward Trend in Consumer Inflation Expectations." The piece focuses on long-run inflation expectations-- expectations for the next 5 to 10 years-- from the Michigan Survey of Consumers. These expectations have been trending downward since the summer of 2014, around the same time as oil and gas prices started to decline. It might seem natural to conclude that falling gas prices are responsible for the decline in long-run inflation expectations. But we suggest that this may not be the whole story.

First of all, gas prices have exhibited two upward surges since 2014, neither of which was associated with a rise in long-run inflation expectations. Second, the correlation between gas prices and inflation expectations (a relationship I explore in much more detail in this working paper) appears too weak to explain the size of the decline. So what else could be going on?

If you look at the histogram in Figure 2, below, you can see the distribution of inflation forecasts that consumers give in three different time periods: an early period, the first half of 2014, and the past year. The shaded gray bars correspond to the early period, the red bars to 2014, and the blue bars to the most recent period. Notice that there is some degree of "response heaping" at multiples of 5%. In another paper, I use this response heaping to help quantify consumers' uncertainty about long-run inflation. The idea is that people who are more uncertain about inflation, or have a less precise estimate of what it should be, tend to report a round number-- this is a well-documented tendency in how people communicate imprecision.
The response heaping has declined over time, corresponding to a fall in my consumer inflation uncertainty index for the longer horizon. As we detail in the Commentary, this fall in uncertainty helps explain the decline in the measured median inflation forecast. This follows from the fact that the common round forecasts, 5% and 10%, are higher than the common non-round forecasts.
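A toy illustration of this composition effect (made-up numbers, not the actual survey data): when fewer respondents give high round answers like 5% or 10%, the median shifts down even if the non-round answers are unchanged.

```python
import numpy as np

# Made-up long-run inflation forecasts: most non-round answers cluster near 2-4%,
# while highly uncertain respondents tend to answer with round 5% or 10%.
earlier_period = [2, 2, 3, 3, 3, 4, 5, 5, 10, 10]  # more heaping at 5% and 10%
recent_period  = [1, 2, 2, 3, 3, 3, 4, 4, 5, 10]   # less heaping
print(np.median(earlier_period), np.median(recent_period))  # 3.5 3.0 -- median falls as heaping declines
```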

There is also a notable change in the distribution of non-round forecasts over time. The biggest change is that 1% forecasts for long-run inflation are much more common than previously (see how the blue bar is higher than the red and gray bars for 1% inflation). I think this is an important sign that some consumers (probably those who are more informed about the economy and inflation) are noticing that inflation has been quite low for an extended period, and are starting to incorporate low inflation into their long-run expectations. In fact, more consumers now expect 1% inflation than expect 2%.