Friday, September 23, 2016

The Economics of Crime

On September 28, the Economics Department at Haverford College will hold its annual alumni forum. The topic this year is "The Economics of Crime and Incarceration." Our panelists will be Eric Sterling (Haverford class of '73), Executive Director of the Criminal Justice Policy Foundation, and Mark Kleiman (class of '72), Director of the Crime and Justice Program at New York University’s Marron Institute of Urban Management. In anticipation of the event, especially for any Haverford students who might be reading my blog, I wanted to do a quick survey of the literature on the economics of crime and some of the major topics and themes in this literature.

Why are crime and incarceration economics topics? In other words, given that there is an entire field--criminology--devoted to the study of crime, why are economists studying it as well?  Gary Becker suggested in 1968 that "a useful theory of criminal behavior can dispense with special theories of anomie, psychological inadequacies, or inheritance of special traits and simply extend the economist's usual analysis of choice" (p. 170).  In other words, he believed that criminal behavior could be modeled as a rational response to incentives; that the private and social costs of crime, and the costs of apprehension and conviction, could be quantified; and that a socially "optimal" (likely non-zero) level of crime could be computed.
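
To make Becker's idea concrete, here is a minimal sketch with invented numbers (my own illustration, not from Becker's paper): a risk-neutral individual offends only if the expected gain exceeds the expected punishment.

```python
# A toy version of Becker's (1968) rational-choice condition, with made-up
# numbers: a risk-neutral individual commits a crime only if the expected
# gain exceeds the expected punishment.

def commits_crime(gain, p_caught, punishment):
    """Return True if the expected payoff of offending is positive."""
    return gain - p_caught * punishment > 0

# Comparative statics: raising the probability of apprehension (more police)
# or the severity of punishment (longer sentences) deters the marginal offender.
print(commits_crime(gain=1000, p_caught=0.10, punishment=5000))  # True: 1000 > 500
print(commits_crime(gain=1000, p_caught=0.25, punishment=5000))  # False: 1000 < 1250
```

In this framework, enforcement policy works entirely through the expected-punishment term, which is why the probability of apprehension and the severity of sentences appear so prominently in the empirical literature.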

How does the criminal justice system affect the incentives for crime, and, in turn, criminal behavior? Causal effects are quite challenging to study empirically. For example, consider the question of whether a larger police force deters crime. Suppose the data shows a positive correlation between crime rates and size of police force. While it is possible that larger police forces cause more crime, it is also possible that causality runs in the reverse direction: cities with higher crime rates hire more police. Steven Levitt, whose "Freakonomics" fame came in part from his clever approaches to these types of questions, has looked for "instruments," or ways to identify exogenous variations in criminal justice policies.
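
To illustrate why the naive correlation misleads and how an instrument helps, here is a toy simulation of my own (entirely made-up data; the instrument is a stand-in for something like the electoral cycles in police hiring that Levitt actually used):

```python
import numpy as np

# A hypothetical instrumental-variables sketch with simulated data, in the
# spirit of Levitt's approach. All numbers are invented for illustration.
rng = np.random.default_rng(0)
n = 100_000
instrument = rng.normal(size=n)   # exogenous shifter of police levels
confound = rng.normal(size=n)     # unobserved driver of both crime and hiring
police = instrument + confound + rng.normal(size=n)
crime = -0.5 * police + 2.0 * confound + rng.normal(size=n)  # true effect: -0.5

# Naive OLS slope: biased (here even wrong-signed) because places with more
# underlying crime pressure hire more police.
ols = np.cov(police, crime)[0, 1] / np.var(police)

# IV (Wald) estimator: uses only the variation in police driven by the instrument.
iv = np.cov(instrument, crime)[0, 1] / np.cov(instrument, police)[0, 1]

print(f"OLS: {ols:+.2f}  IV: {iv:+.2f}  (true causal effect: -0.50)")
```

In this simulation the raw correlation between police and crime is positive even though the true causal effect is negative, which is exactly the reverse-causality trap described above.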

It is also difficult to identify causal effects of incarceration on criminal recidivism and other outcomes. Prison sentences are not "randomly assigned." So if we see that people who spend longer in prison are more likely to commit a second crime, we can't say whether the extra time in prison had a causal influence on recidivism. A recent working paper by Manudeep Bhuller, Gordon B. Dahl, Katrine V. Løken, and Magne Mogstad exploits the random assignment of criminal cases in Norway to judges who differ in their sentencing stringency. They find that imprisonment discourages further criminal behavior. The decline in recidivism is driven by people who were unemployed before incarceration and who participated in prison programs aimed at increasing employability. The authors conclude that "Contrary to the widely embraced 'nothing works' doctrine, these findings demonstrate that time spent in prison with a focus on rehabilitation can indeed be preventive." But since not all prison systems focus on rehabilitation, they add that "It is important to recognize that our results do not imply that prison is necessarily preventative in all settings. While this paper establishes an important proof of concept, evidence from other settings or populations would be useful to assess the generalizability of our findings."
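
For concreteness, here is a sketch of how a judge-stringency instrument is typically constructed; the column names and data below are hypothetical, not from Bhuller et al.'s dataset.

```python
import pandas as pd

# A leave-one-out "judge stringency" instrument: for each case, the fraction
# of the judge's OTHER cases that ended in incarceration, so a defendant's
# own outcome never enters his or her instrument value.
cases = pd.DataFrame({
    "judge_id":     [1, 1, 1, 2, 2, 2, 3, 3, 3],   # hypothetical case data
    "incarcerated": [1, 1, 0, 0, 0, 1, 1, 0, 1],
})

grp = cases.groupby("judge_id")["incarcerated"]
total = grp.transform("sum")
count = grp.transform("count")
cases["stringency"] = (total - cases["incarcerated"]) / (count - 1)
print(cases)
```

With random assignment of cases, stringency is correlated with whether a defendant is incarcerated but not with the defendant's underlying characteristics, so it can serve as an instrument in a recidivism regression.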

Some dimensions of crime are difficult to measure: many crimes go unreported or undetected, and black market activity is, by definition, hidden. Economists have devised indirect ways to measure illegal production and trade. See, for example, this study of elephant poaching and ivory smuggling. Online black markets, and other types of crime and fraud committed online, are also the subject of a growing economics literature.

Network economics is also applicable to the study of crime, since it can help explain how criminal networks form and operate.

Studies of the economics of crime are nearly always controversial. In part, this is because criminal justice itself is so controversial, so whenever an economic study draws implications about criminal justice, it is sure to find some resistance. In addition, many people find Becker's description of crime as a purely rational response to incentives to be lacking. Recall, for example, the controversy surrounding Roland Fryer's recent working paper on racial differences in police use of force. I think part of what people were uncomfortable with was the incorporation of racial discrimination into the utility function, and part was the distinction he made between "statistical discrimination" and racial bias.

I anticipate an interesting discussion on Wednesday and will try to update the blog with my impressions following the forum.

Sunday, August 28, 2016

The Fed on Facebook

The Federal Reserve Board of Governors has now joined you, your grandma, and 1.7 billion of your closest friends on Facebook. A press release on August 18 says that the Fed's Facebook page aims at "increasing the accessibility and availability of Federal Reserve Board news and educational content." This news is especially interesting to me, since a chapter of my dissertation-- now my working paper "Fed Speak on Main Street"-- includes some commentary on the Federal Reserve's use of social media.

When I wrote the paper, the Board of Governors did not yet have a Facebook page, though the Regional Federal Reserve Banks did. I noted that the most popular of these, the San Francisco Fed's page, had around 5000 "likes" (compared to 4.5 million for the White House). I wrote in my conclusion that "The Fed has begun to use interactive new media such as Facebook, Twitter, and YouTube, but its ad hoc approach to these platforms has resulted in a relatively small reach. Federal Reserve efforts to communicate via these media should continue to be evaluated and refined."

About a year later, the San Francisco Fed is up to around 6000 "likes," while the brand new Board of Governors page already has over 14,000. Only a handful of people post comments on the Regional Fed pages, and they are relatively benign. "Great story! I loved it!" and the SF Fed's response, "So glad you liked it, Ellen!" are the only comments below one recent story. Even critical comments are fairly measured: "adding more money into RE market only inflates housing prices, & creates more deserted neighborhoods," following a story on affordable housing in the Bay Area.

On the Board of Governors' page, however, hundreds of almost exclusively negative and outraged comments follow every piece of content. Several news stories describe the page as overrun by "trolls." "Tell me more about the private meeting on Jekyll island and the plans for public prosperity that some of the worlds richest and most powerful bankers made in secret, please," writes a commenter following a post about who owns the Fed.

It is not too surprising that the Board's page has drawn so much more attention than those of the Reserve Banks. One of the biggest recurring debates, dating to before the founding of the Fed, concerns the appropriate degree of centralization of power. The Fed's unusual structure reflects a string of compromises that leaves many unsatisfied. The Board in Washington, to many of the Fed's critics, represents unappealing centralization. To be sure, many of the commenters are likely unaware of the Fed's structure, and maybe of the existence of the regional Federal Reserve Banks. They know only to blame "the Fed," which to them is synonymous with the Board of Governors.

In my paper, I look at data from polls that have asked people a variety of questions about the Fed and the Fed Chair. Polls that ask people whom they credit or blame for economic performance appear in the table below. Most people don't think to blame the Fed for economic problems. If asked explicitly whether the Fed should be blamed, many say yes, but many others are unsure. Commenters on the Facebook page are not a representative sample of the population, of course. They are the ones who do blame the Fed.


Arguably, the negative attention on the Fed Board's page is better than no attention at all. As long as the Fed doesn't start censoring negative comments-- and maybe even considers responding to some common concerns in press conferences or speeches-- I think this could actually help its reputation for transparency and accountability. It will also be interesting to see whether the rate of interaction with the page tapers off as the novelty wears off.

Tuesday, August 16, 2016

More Support for a Higher Inflation Target

Ever since the FOMC announced in 2012 that 2% PCE inflation is consistent with the Fed's price stability mandate, economists have questioned whether the 2% target is optimal. In 2013, for example, Laurence Ball made the case for a 4% target. Two new NBER working papers out this week approach the question of the optimal inflation target from different angles. Both, I think, can be interpreted as supportive of a somewhat higher target-- or at least of the idea that moderately higher inflation has greater benefits and smaller costs than conventionally believed.

The first, by Marc Dordal-i-Carreras, Olivier Coibion, Yuriy Gorodnichenko, and Johannes Wieland, is called "Infrequent but Long-Lived Zero-Bound Episodes and the Optimal Rate of Inflation." One benefit of a higher inflation target is to reduce the occurrence of zero lower bound (ZLB) episodes, so understanding the welfare costs of these episodes is important in calculating an optimal inflation target. The authors explain that in standard models with a ZLB, normally-distributed shocks result in frequent but short-lived ZLB episodes. This is in contrast with the reality of infrequent but long-lived ZLB episodes. They build models that can generate such episodes and show that the welfare costs of ZLB episodes increase steeply with duration: eight successive quarters at the ZLB is costlier than two separate four-quarter episodes.
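
To see the flavor of the frequency-versus-duration point, here is a back-of-the-envelope simulation of my own (a toy calibration, not the paper's model): a shadow policy rate with i.i.d. normal shocks hits the zero bound in many short spells rather than the rare long ones we observe in the data.

```python
import numpy as np

# Simulate an AR(1) "shadow" policy rate with normal innovations and count
# how long its visits to the zero bound last. Parameters are illustrative.
rng = np.random.default_rng(1)
T, rho, mean_rate, sigma = 200_000, 0.9, 2.0, 1.0

shadow = np.empty(T)
shadow[0] = mean_rate
for t in range(1, T):
    shadow[t] = mean_rate + rho * (shadow[t - 1] - mean_rate) + sigma * rng.normal()

at_zlb = shadow < 0.0  # the actual policy rate would be max(shadow, 0)

# Collect the lengths of consecutive runs of quarters at the ZLB.
spells, run = [], 0
for hit in at_zlb:
    if hit:
        run += 1
    elif run:
        spells.append(run)
        run = 0

print(f"share of quarters at ZLB: {at_zlb.mean():.1%}")
print(f"mean spell length: {np.mean(spells):.1f} quarters")  # typically short
```

The typical spell in this kind of model lasts only a few quarters, whereas actual ZLB episodes (like the post-2008 one) have run far longer, which is the mismatch the authors set out to fix.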

If ZLB episodes are costlier, it makes sense to have a higher inflation target to reduce their frequency. The authors note, however, that the estimates of the optimal target implied by their models are very sensitive to modeling assumptions and calibration:
"We find that depending on our calibration of the average duration and the unconditional frequency of ZLB episodes, the optimal inflation rate can range from 1.5% to 4%. This uncertainty stems ultimately from the paucity of historical experience with ZLB episodes, which makes pinning down these parameters with any degree of confidence very difficult. A key conclusion of the paper is therefore that much humility is called for when making recommendations about the optimal rate of inflation since this fundamental data constraint is unlikely to be relaxed anytime soon."
The second paper, by Emi Nakamura, Jón Steinsson, Patrick Sun, and Daniel Villar, is called "The Elusive Costs of Inflation: Price Dispersion during the U.S. Great Inflation." This paper notes that in standard New Keynesian models with Calvo pricing, one of the main welfare costs of inflation comes from inefficient price dispersion. When inflation is high, prices get further from optimal between price resets. This distorts the allocative role of prices, as relative prices no longer accurately reflect relative costs of production. In a standard New Keynesian model, the implied cost of this reduction in production efficiency is about 10% if you move from 0% inflation to 12% inflation. This is huge-- an order of magnitude greater than the welfare costs of business cycle fluctuations in output. This is why standard models recommend a very low inflation target.

Empirical evidence of inefficient price dispersion is sparse, because inflation has fluctuated relatively little over the past few decades, the period for which BLS microdata on consumer prices are available. Nakamura et al. undertook the arduous task of extending the BLS microdata back to 1977, encompassing higher-inflation episodes. Calculating price dispersion within a category of goods can be problematic, because price dispersion may arise from differences in quality or features of the goods. The authors instead look at the absolute size of price changes, explaining, "Intuitively, if inflation leads prices to drift further away from their optimal level, we should see prices adjusting by larger amounts when they adjust. The absolute size of price adjustments should reveal how far away from optimal the adjusting prices had become before they were adjusted. The absolute size of price adjustment should therefore be highly informative about inefficient price dispersion."
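
To see what the Calvo mechanism predicts, here is a stripped-down simulation of my own (no idiosyncratic shocks, so it isolates the inflation-drift channel): higher trend inflation should mechanically raise the average absolute price change at the moment of adjustment. The paper's point is that this predicted pattern does not show up in the actual microdata.

```python
import numpy as np

# In a bare-bones Calvo world, prices adjust with a fixed probability each
# month, so higher trend inflation lets prices drift further from optimal
# between resets, raising the mean absolute adjustment size.
rng = np.random.default_rng(2)

def mean_abs_change(monthly_inflation, reset_prob=0.1, n=50_000, months=120):
    log_p = np.zeros(n)      # each item's log price
    optimal = 0.0            # common optimal log price, drifts with inflation
    sizes = []
    for _ in range(months):
        optimal += monthly_inflation
        resets = rng.random(n) < reset_prob          # the Calvo lottery
        sizes.append(np.abs(optimal - log_p[resets]))  # gap at adjustment
        log_p[resets] = optimal
    return np.concatenate(sizes).mean()

print(f" 2% annual inflation: {mean_abs_change(0.02 / 12):.3f}")
print(f"12% annual inflation: {mean_abs_change(0.12 / 12):.3f}")
```

In this toy model the statistic rises steeply with trend inflation; Nakamura et al.'s finding that it is flat in the data from 1977 to the present is what undermines the model's implied welfare cost.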

They find that the mean absolute size of price changes is fairly constant from 1977 to the present, and conclude that "There is, thus, no evidence that prices deviated more from their optimal level during the Great Inflation period when inflation was running at higher than 10% per year than during the more recent period when inflation has been close to 2% per year. We conclude from this that the main costs of inflation in the New Keynesian model are completely elusive in the data. This implies that the strong conclusions about optimality of low inflation rates reached by researchers using models of this kind need to be reassessed."

Wednesday, July 27, 2016

Guest Post by Alex Rodrigue: The Fed and Lehman

The following is a guest contribution by Alex Rodrigue, a math and economics major at Haverford College and my fantastic summer research assistant. This post, like many others I have written, discusses an NBER working paper, this one by Laurence Ball. Some controversy arose out of the media coverage of Roland Fryer's recent NBER working paper on racial differences in police use of force, which I also covered on my blog, since the working paper has not yet undergone peer review. I feel comfortable discussing working papers since I am not a professional journalist and am capable of discussing methodological and other limitations of research. The working paper Alex will discuss was, like the Fryer paper, covered in the New York Times. I don't think there's a clear-cut criterion for whether a newspaper should report on a working paper or not--certainly the standard should be more stringent for the NYT than for a blog--but in the case of the Ball paper, there is no question that the coverage was merited.

In his recently released NBER working paper, The Fed and Lehman Brothers: Introduction and Summary, Professor Laurence Ball of Johns Hopkins University summarizes his longer work concerning the actions taken by the Federal Reserve when Lehman Brothers experienced financial difficulties in 2008. The primary questions Professor Ball seeks to answer are why the Federal Reserve let Lehman Brothers fail, and whether the explanations for this decision given by Federal Reserve officials, specifically those provided by Chairman Ben Bernanke, hold up to scrutiny. I was fortunate enough to speak with Professor Ball about this research, along with a number of other Haverford students and economics professors, including the author of this blog, Professor Carola Binder.

Professor Ball’s commitment to unearthing the truth about the Lehman Brothers bankruptcy and the Fed’s response is evidenced by the thoroughness of his research, including his analysis of the convoluted balance sheets of Lehman Brothers and his investigation of all statements and testimonies of Fed officials and Lehman Brothers executives. Professor Ball even filed a Freedom of Information Act lawsuit against the Board of Governors of the Federal Reserve in an attempt to acquire all available documents related to his work. Although the suit was unsuccessful, his commitment to exhaustive research allowed for a comprehensive, compelling argument against the Federal Reserve’s justification of its actions in the wake of Lehman Brothers’ financial distress.

Among other investigations into the circumstances of Lehman Brothers’ failure, Ball analyzes the legitimacy of claims that Lehman Brothers lacked sufficient collateral for a legal loan from the Federal Reserve. By studying the balance sheets of Lehman Brothers from the period prior to their bankruptcy, Ball finds “Lehman’s available collateral exceeds its maximum liquidity needs by $115 billion, or about 25%”, meaning that the Fed could have offered the firm a legal, secured loan. This finding directly contradicts Chairman Ben Bernanke’s explanations for the Fed’s decision, calling into question the legitimacy of the Fed’s treatment of the firm.

If the given explanation for the Fed’s refusal to help Lehman Brothers is invalid, then what explanation is correct? Ball suggests Treasury Secretary Henry Paulson’s involvement in negotiations with the institution at the Federal Reserve Bank of New York, and his hesitance to be known as “Mr. Bailout,” as a possible reason for the Fed’s behavior. Paulson’s involvement in the case seems unusual to Professor Ball, especially because his position as Treasury Secretary gave him “no legal authority over the Fed’s lending decisions.” He also cites the failure of Paulson and Fed officials to anticipate the destructive effects of Lehman’s failure as another explanation for the Fed’s actions.

When asked about the future of Lehman Brothers had the Fed offered the loans necessary for its survival, Ball claims that the firm might have survived a bit longer, or at least long enough to wind down in a less destructive manner. He believes the Fed’s treatment of Lehman had less to do with the specific financial circumstances of the firm, and more with the timing of its collapse. In fact, Professor Ball finds that “in lending to Bear Stearns and AIG, the Fed took on more risk than it would have if it rescued Lehman.” Around the time Lehman Brothers reached out for assistance, Paulson had been stung by criticism of the Bear Stearns rescue and the government takeovers of Fannie Mae and Freddie Mac. If Lehman had failed before Fannie Mae and Freddie Mac or AIG, then maybe the firm would have received the loans it needed to survive.


The failure of Lehman Brothers was not without consequence. In discussion, Professor Ball cited a recent NYT article about his work, specifically mentioning his agreement with its assertion that the Fed’s decision to let Lehman Brothers fail worsened the Great Recession, contributed to public disillusionment with the government’s involvement in the financial sector, and potentially led to the rise of “Trumpism” today.

Thursday, July 21, 2016

Inflation Uncertainty Update and Rise in Below-Target Inflation Expectations

In my working paper "Measuring Uncertainty Based on Rounding: New Method and Application to Inflation Expectations," I develop a new measure of consumers' uncertainty about future inflation. The measure is based on a well-documented tendency of people to use round numbers to convey uncertainty or imprecision across a wide variety of contexts. As I detail in the paper, a strikingly large share of respondents on the Michigan Survey of Consumers report inflation expectations that are a multiple of 5%. I exploit variation over time in the distribution of survey responses (in particular, the amount of "response heaping" around multiples of 5) to create inflation uncertainty indices for the one-year and five-to-ten-year horizons.
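
As a stylized illustration, the raw ingredient of the index is a heaping statistic like the one below. The actual paper maps heaping into uncertainty via a model of "round" and "precise" response types, so this sketch is only the first step.

```python
import numpy as np

# A stylized heaping statistic: the share of nonzero inflation forecasts
# that are multiples of five. Responses below are invented for illustration.
def heaping_share(responses):
    r = np.asarray(responses)
    r = r[r != 0]   # zeros are multiples of 5 but uninformative about heaping
    return np.mean(r % 5 == 0)

example = [2, 3, 5, 5, 10, 1, 2, 25, 4, 15]   # made-up survey responses, in %
print(f"share at multiples of 5: {heaping_share(example):.0%}")
```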

As new Michigan Survey data becomes available, I have been updating the indices and posting them here. I previously blogged about the update through November 2015. Now that a few more months of data are publicly available, I have updated the indices through June 2016. Figure 1, below, shows the updated indices. Figure 2 zooms in on more recent years and smooths with a moving average filter. You can see that short-horizon uncertainty has been falling since its historical high point in the Great Recession, and long-horizon uncertainty has been at an historical low.

Figure 1: Consumer inflation uncertainty index developed in Binder (2015) using data from the University of Michigan Survey of Consumers. To download updated data, visit https://sites.google.com/site/inflationuncertainty/

Figure 2: Consumer inflation uncertainty index (centered 3-month moving average) developed in Binder (2015) using data from the University of Michigan Survey of Consumers. To download updated data, visit https://sites.google.com/site/inflationuncertainty/

The change in response patterns from 2015 to 2016 is quite interesting. Figure 3 shows histograms of the short-horizon inflation expectation responses given in 2015 and in the first half of 2016. The brown bars show the share of respondents in 2015 who gave each response, and the black lines show the share in 2016. For both years, heaping at multiples of 5 is apparent in the spikes at 5 (but not 4 or 6) and at 10 (but not 9 or 11), though the heaping is less pronounced than in years when the uncertainty index was higher. Notice also that in 2016, the share of 0% and 1% responses rose and the share of 2, 3, 4, 5, and 10% responses fell relative to 2015.

Some respondents take the survey twice with a 6-month gap, so we can see how people switch their responses. Of the respondents who chose a 2% forecast in the second half of 2015 (those plausibly aware of the 2% target), 18% switched to a 0% forecast and 24% switched to a 1% forecast when they took the survey again in 2016. The rise in 1% responses seems most noteworthy to me-- are people finally starting to notice slightly-below-target inflation and incorporate it into their expectations? I think it's too early to say, but it is worth tracking.
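
For anyone curious how such switching shares are computed, here is a sketch with hypothetical data and column names (not the actual Michigan panel variables):

```python
import pandas as pd

# Each row is a respondent observed in both waves, six months apart.
panel = pd.DataFrame({
    "forecast_2015h2": [2, 2, 2, 2, 5, 0, 2, 1],   # made-up forecasts, in %
    "forecast_2016":   [0, 1, 2, 1, 5, 0, 0, 1],
})

# Condition on the 2% forecasters in 2015 and tabulate where they ended up.
two_pct_2015 = panel[panel["forecast_2015h2"] == 2]
switch_shares = two_pct_2015["forecast_2016"].value_counts(normalize=True)
print(switch_shares)   # shares moving from 2% to 0%, 1%, staying at 2%, etc.
```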

Figure 3: Created by Binder with data from University of Michigan Survey of Consumers

Monday, July 11, 2016

Racial Differences in Police Use of Force

In an NBER working paper released today, Roland Fryer, Jr. uses the NYPD Stop, Question and Frisk database and the Public Police Contact Survey to conduct "An Empirical Analysis of Racial Differences in Police Use of Force." The paper also uses data that Fryer and his students coded from police reports in Houston, Austin, Dallas, Los Angeles, and several parts of Florida. The paper is worth reading in its entirety, and is also the subject of a New York Times article, which summarizes the main findings more thoroughly than I will do here.

Fryer estimates odds ratios to measure racial disparities in various types of outcomes. An odds ratio of 1 would mean that whites and blacks faced the same odds, while an odds ratio greater than 1 for blacks would mean that blacks were more likely than whites to receive that outcome. These odds ratios can be estimated with or without controlling for other variables. One outcome of interest is whether the police used any amount of force at the time of interaction. Panel A of the figure below shows the odds ratio by hour of the day. The point estimate is always above 1, and the 95% confidence interval is almost always above 1, meaning blacks are more likely to have force used against them than whites (and so are Hispanics). This disparity increases during daytime hours, with point estimates nearing 1.4 around 10 a.m.
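
As a refresher, an unadjusted odds ratio and its confidence interval can be computed from a 2x2 table as below; the counts are invented for illustration, and Fryer's adjusted estimates come from regressions with controls.

```python
import numpy as np

# Unadjusted odds ratio from a hypothetical 2x2 table of stops:
#                force used    no force
#   black           300          700
#   white           200          800
force_black, no_force_black = 300, 700
force_white, no_force_white = 200, 800

odds_ratio = (force_black / no_force_black) / (force_white / no_force_white)

# Standard 95% CI: the log odds ratio is approximately normal with
# standard error sqrt(sum of reciprocal cell counts).
se_log_or = np.sqrt(1/force_black + 1/no_force_black +
                    1/force_white + 1/no_force_white)
ci = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```

An interval that stays above 1, as in this toy table, is what the "confidence interval almost always above 1" statement about Panel A refers to.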

Panel B shows that the average use of force against both blacks and whites peaks at around 4 a.m. and is lowest around 8 a.m. The racial gap is present at all hours, but largest in the morning and early afternoon.

Fryer builds a model to help interpret whether the disparities evident in the data represent "statistical" or "taste-based" discrimination. Statistical discrimination would result if police used race as a signal for likelihood of compliance or likelihood of having a weapon, whereas taste-based discrimination would be ingrained in officers' preferences. The data are inconsistent with solely statistical discrimination: "the marginal returns to compliant behavior are the same for blacks and whites, but the average return to compliance is lower for blacks – suggestive of a taste-based, rather than statistical, discrimination."

Fryer notes that his paper enters "treacherous terrain" including, but not limited to, data reliability. The oversimplifications and cold calculations that necessarily accompany economic models never tell the whole story, but they can nonetheless promote useful debate. For example, since Fryer finds racial disparities in police use of violence but not in shootings, he notes that "To date, very few police departments across the country either collect data on lower level uses of force or explicitly punish officers for misuse of these tactics...Many arguments about police reform fall victim to the 'my life versus theirs, us versus them' mantra. Holding officers accountable for the misuse of hands or pushing individuals to the ground is not likely a life or death situation and, as such, may be more amenable to policy change."

Wednesday, July 6, 2016

Estimation of Historical Inflation Expectations

The final version of my paper "Estimation of Historical Inflation Expectations" is now available online in the journal Explorations in Economic History. (Ungated version here.)
Abstract: Expected inflation is a central variable in economic theory. Economic historians have estimated historical inflation expectations for a variety of purposes, including studies of the Fisher effect, the debt deflation hypothesis, central bank credibility, and expectations formation. I survey the statistical, narrative, and market-based approaches that have been used to estimate inflation expectations in historical eras, including the classical gold standard era, the hyperinflations of the 1920s, and the Great Depression, highlighting key methodological considerations and identifying areas that warrant further research. A meta-analysis of inflation expectations at the onset of the Great Depression reveals that the deflation of the early 1930s was mostly unanticipated, supporting the debt deflation hypothesis, and shows how these results are sensitive to estimation methodology.
This paper is part of a new "Surveys and Speculations" feature in Explorations in Economic History. Recent volumes of the journal open with a Surveys and Speculations article, where "The idea is to combine the style of JEL [Journal of Economic Literature] articles with the more speculative ideas that one might put in a book – producing surveys that can help to guide future research. The emphasis can either be on the survey or the speculation part." Other examples include "What we can learn from the early history of sovereign debt" by David Stasavage, "Urbanization without growth in historical perspective" by Remi Jedwab and Dietrich Vollrath, and "Surnames: A new source for the history of social mobility" by Gregory Clark, Neil Cummins, Yu Hao, and Dan Diaz Vidal. The referee and editorial reports were extremely helpful, so I really recommend this if you're looking for an outlet for a JEL-style paper with economic history relevance.

My paper grew out of a chapter in my dissertation. I became interested in inflation expectations in the Great Depression after serving as a discussant for a paper by Andy Jalil and Gisela Rua on "Inflation Expectations and Recovery from the Depression in 1933: Evidence from the Narrative Record." I also remember being struck by Christina Romer and David Romer's (2013, p. 68) remark that a whole “cottage industry” of research in the 1990s was devoted to the question of whether the deflation of 1930-32 was anticipated.

I found it interesting to think about why different papers came to different estimates of inflation expectations in the Great Depression by examining the methodological issues around estimating expectations when direct survey or market measures are not available. I later broadened the paper to consider the range of estimates of inflation expectations in the classical gold standard era and the hyperinflations of the 1920s.

A lot of my research focuses on contemporary inflation expectations, mostly using survey-based measures. Some of the issues that arise in characterizing historical expectations are still relevant even when survey or market-based measures of inflation expectations are readily available--issues of noise, heterogeneity, uncertainty, time-varying risk premia, etc. I hope this piece will also be useful to people interested in current inflation expectations in parts of the world where survey data is unreliable or nonexistent, or where markets in inflation-linked assets are underdeveloped.

What I enjoyed most about writing this paper was trying to determine and formalize the assumptions that various authors used to form their estimates, even when these assumptions weren't laid out explicitly. I also enjoyed conducting my first meta-analysis (thanks to the recommendation of the referee and editor). I found T. D. Stanley's JEL article on meta-analysis to be a useful guide.
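
For readers unfamiliar with the mechanics, the simplest fixed-effect meta-analysis is just an inverse-variance-weighted average of study-level estimates. Here is a minimal sketch with placeholder numbers (not the actual inputs from my paper):

```python
import numpy as np

# Fixed-effect meta-analysis: weight each study's estimate by the inverse
# of its variance, so more precise studies count more. Numbers are invented.
estimates = np.array([-0.8, -1.2, -0.3, -0.9])   # e.g., expected deflation, %
std_errors = np.array([0.4, 0.5, 0.3, 0.6])

weights = 1.0 / std_errors**2
pooled = np.sum(weights * estimates) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

print(f"pooled estimate: {pooled:.2f} (s.e. {pooled_se:.2f})")
```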