
WWP: Forests without borders?

As the semester winds to a close, I've been trying to get back into a better habit of reading new papers. Really, I'm just looking for more excuses to hang out at my local Blue Bottle coffee shop - great lighting, tasty snacks, wonderful caffeine, homegrown in Oakland...what more could I ask for? Sometimes doing lots of reading can feel like a slog - but this week, I stumbled across a really cool new working paper that's well worth a look. The paper, by Robin Burgess, Francisco Costa, and Ben Olken, is called "The Power of the State: National Borders and the Deforestation of the Amazon." Deforestation is a pretty big problem in the Amazon, especially since (on top of the idea that we might want to protect exhaustible natural resources for their own sake, and not pave paradise to put up a parking lot) tropical forests have a large role to play in combating climate change: trees serve as a pretty darn effective carbon sink. Plus they produce oxygen. Win-win! Despite these benefits, trees are also lucrative, and there are economic benefits to converting forests into farmland. This has led to a spate of deforestation.

To combat the destruction of the Amazon, Brazil enacted a series of anti-deforestation policies in 2005-06. It's important to understand whether these policies worked, and it's not obvious ex ante that they would: according to evidence from a friend (and badass spatial data guru/data scientist), Dan Hammer, a deforestation moratorium in Indonesia was unsuccessful: if anything, it led to an increased rate of deforestation. Oops. 


The key figure from Dan's paper. The grey bars indicate times when the Indonesian deforestation moratorium was in effect. The teal line is the deforestation rate in Malaysia, and the salmon line is the rate in Indonesia. Not much evidence here that Indonesia's deforestation rate decreased relative to Malaysia during (after) the gray periods.

The big challenge with studying deforestation, especially when it's been made illegal, is that it's tough to get data on. Going out and counting trees requires a huge effort, and people don't usually like to self-report illegal activity. Like Dan before them, Burgess, Costa, and Olken (hereafter BCO) turn to satellite imagery. As I've said before, I'm excited about the advances in remote sensing - there's an explosion of data, which can be harnessed to measure all kinds of things where we don't necessarily have good surveys (Marshall Burke, ahead of the curve as usual). BCO use 30 x 30 meter resolution data on forest cover from a paper published in Science - ahh, interdisciplinary (and a win for open data!).

Of course, it's not enough to just have a measurement of forest cover - in order to figure out the causal effect of Brazil's deforestation policies, the authors also need an identification strategy. Maybe this is because I've got regression discontinuities on the brain lately, but I think what BCO do is super cool. They use the border between Brazil and its neighbors in the Amazon to identify the effects of Brazil's policy. The argument is that, other than the deforestation policies in the different countries, parts of the Amazon just to the Brazilian side of the border look just like parts of the Amazon just to the opposite side of the border. This obviously isn't bullet-proof - you might worry that governance, institutions, infrastructure, populations, languages, etc. change discontinuously at the border. They do some initial checks to show that this isn't true (including a nice anecdote where the Brazilian president-elect accidentally walked into Bolivia for an hour before being stopped by the border patrol), which are decently compelling (though we're always worried about unobservables). Under this assumption, BCO run an RD comparing deforestation rates in Brazil to its neighbors:
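For the econometrically inclined, here's a toy sketch of the local linear RD regression that this kind of border comparison boils down to. All the numbers are made up (this is not BCO's data, code, or bandwidth choice) - the point is just the mechanics: a treatment dummy, distance to the border, and their interaction, estimated within a bandwidth of the cutoff.

```python
import numpy as np

rng = np.random.default_rng(0)

# Entirely made-up setup: distance to the border in km (negative = neighboring
# country, positive = Brazil), with a true 20-point drop in forest cover on
# the Brazilian side of the border.
dist = rng.uniform(-50, 50, 2000)
brazil = (dist > 0).astype(float)
cover = 90.0 - 0.1 * dist - 20.0 * brazil + rng.normal(0, 4, dist.size)

# Local linear RD: regress cover on a Brazil dummy, distance, and their
# interaction, using only observations within a bandwidth of the border.
bw = 25.0
mask = np.abs(dist) < bw
X = np.column_stack([
    np.ones(mask.sum()),
    brazil[mask],
    dist[mask],
    brazil[mask] * dist[mask],
])
beta, *_ = np.linalg.lstsq(X, cover[mask], rcond=None)
print(round(beta[1], 1))  # estimated jump at the border (true value: -20)
```

The coefficient on the dummy is the estimated discontinuity - in BCO's setting, the difference in forest cover right at the Brazilian border.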

The first key figure from BCO: in 2000, well before Brazil's aggressive anti-deforestation policies, the percent of forest cover was much lower on the Brazilian (right-hand) side of the border.


There's a clear visual difference between deforestation in Brazil and in its neighbors. But here's what I really like about the paper: even if you don't completely buy the static RD identifying assumption here, you have to agree that the following sequence of RD figures is pretty compelling.


This is a little annoying to compare to the first figure, since the y-axis here is different: this time, it's percent of forest cover lost each year - that's why Brazil appears higher in these figures than in the earlier graph. But it clearly pops out that in 2006, when Brazil's policies went into effect, the discontinuity disappears.

The cool thing about the data that the authors have, which differentiates it from many spatial RD papers, is that it's not static - they've got multiple years of data. This allows them to look at deforestation over time. Critically, even though Brazil's forest cover is much lower than its neighbors prior to the policy, its annual rate of forest cover loss slows dramatically in 2006, when the deforestation policies came into effect, and appears to remain equal to the neighboring country rate in 2007 and 2008. 

This is pretty strong evidence that the anti-deforestation policies put into place by the Brazilian government worked! You should still be slightly skeptical, and want to see a bunch of robustness checks (many of which are in the paper), but I really like this paper. It combines awesome remote sensing data with a quasi-experimental research design to study the effectiveness of important policies. It's not too often that we can be optimistic about the future of the Amazon - but it looks like we've got some reason to be hopeful here.

 

If you've made it all the way through this post, I'll reward you with John Oliver's new video on science.

WWP: Rain, rain, go away, you're hurting my human capital development

Greetings from String Matching Land! Since I seem to be trapped in an endless loop of making small edits to code and then waiting 25 minutes for it to run (break), I'm going to break out for a little bit and tell you about a cool new paper that I just finished reading. Also, it's Wednesday, so my WWP acronym remains intact. Nice.

Manisha Shah (a Berkeley ARE graduate!) at UCLA and Bryce Millett Steinberg, a Harvard PhD currently doing a postdoc at Brown who is on the job market this year (hire her! this profession needs more women!), have a new paper that I really like. I thought of this same idea (with slightly different data) recently, and then realized that this is forthcoming in the JPE (damn) - and it's excellently done. The writing is clear, the set-up is interesting, the data are cool, the empirics are credible, and the results are intuitive. Did I mention it's forthcoming in the JPE?

In this paper, Shah and Steinberg tackle a prominent question in development economics: what do economic shocks do to children at various stages of growth? There's a long literature on this, including the canonical Maccini and Yang paper (2009 AER), which finds that good rainfall shocks in early life dramatically improve outcomes for women as adults (in Indonesia). That paper does a great job of documenting a treatment effect (if you haven't read it yet, metaphorically put down this blog and go read it instead), but has less to say about the mechanisms behind it.

Shah and Steinberg take seriously the idea that rainfall shocks might affect human capital through multiple channels: good rain shocks could mean more income, and therefore more consumption and human capital, or good rain shocks might mean a higher opportunity cost of schooling, leading to less education and human capital development. They put together a very simple but elegant model of human capital decisions, and test its implications using a large dataset including some simple math and verbal tests from India. They show that good rain shocks are beneficial for human capital (as proxied by test scores) early in life, but lead to a decrease in human capital later in life. They demonstrate that children are in fact substituting labor for schooling in good harvesting years, and show that rainfall experienced in childhood matters for total years of schooling as well, which could help explain the Maccini and Yang result, though they don't find differential effects by gender.
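To see what this reduced form looks like in practice, here's a minimal simulation. The coefficients are invented, chosen only to match the signs Shah and Steinberg find (positive for early-life shocks, negative for school-age shocks) - this is not their data or their specification.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20_000

# Made-up data-generating process: good rainfall early in life raises test
# scores (income/nutrition channel); good rainfall at school age lowers them
# (opportunity-cost channel: kids work instead of attending school).
rain_early = rng.normal(size=n)   # standardized shock, in utero to age 2
rain_school = rng.normal(size=n)  # standardized shock, ages 5-16
score = 0.04 * rain_early - 0.03 * rain_school + rng.normal(0, 1, n)

# Regress test scores on both shocks at once.
X = np.column_stack([np.ones(n), rain_early, rain_school])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
print(beta[1] > 0, beta[2] < 0)  # opposite-signed effects by age window
```

The key design point is that both shocks enter the same regression, so the two channels can be separated by the age at which the rainfall hit.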

In the authors' own words (from the abstract):

Higher wages are generally thought to increase human capital production, particularly in the developing world. We introduce a simple model of human capital production in which investments and time allocation differ by age. Using data on test scores and schooling from rural India, we show that higher wages increase human capital investment in early life (in utero to age 2) but decrease human capital from ages 5-16. Positive rainfall shocks increase wages by 2% and decrease math test scores by 2-5% of a standard deviation, school attendance by 2 percentage points, and the probability that a child is enrolled in school by 1 percentage point. These results are long-lasting; adults complete 0.2 fewer total years of schooling for each year of exposure to a positive rainfall shock from ages 11-13. We show that children are switching out of school enrollment into productive work when rainfall is higher. These results suggest that the opportunity cost of schooling, even for fairly young children, is an important factor in determining overall human capital investment.
Obligatory stock photo of Indian school kids during the rainy season. Obviously not my own photo.


A few nitpicky points: I could've missed this, but the data are a repeated cross-section rather than a panel of students, so I wanted a little more discussion of whether selection into the dataset was driving the empirics. Also, when they start splitting things by age group, I'm surprised that they still have enough variation in test performance among 11-16-year-olds to estimate effects. I would've expected these students to max out the test metrics, given that the exam being administered tests incredibly basic numeracy and literacy skills. But maybe not. Finally, since I'm teaching my 1st-year econometrics students about figures soon, these graphs convey the message but aren't the sexiest. Personal gripe. All in all, though, this is a really nice paper - I urge you to go read it.

A final caveat: this is of course context-specific. I don't at all mean to suggest (and nor do the authors), for instance, that these results should have Californians glad that we're done with the rain and back to sunny weather. As much as I enjoy sunrise runs (n = 1) and sitting outside reading papers, I'd be happy with a little more of what El Niño's got to offer the West Coast.

WWP: Old fight, new tricks?

One of the most interesting papers I saw at the ASSA meetings in January was Ariel Ortiz-Bobea's new work on the climate and agriculture question. For anyone not in the know, there is a long (read: loooooooong) literature trying to estimate the effects climate change will have on agriculture. Most of this debate has focused on the US, largely for data reasons (and partly because US maize is way sexier than Kenyan maize...amirite?).  

An overly brief summary of this literature is the following:

  • In the Beginning, agronomists created the Crop Model. These models were created using test plots, and used to predict the effects of climate on agriculture.
  • Then, some economists came along, and made the point that the agronomists were selling the farmers short. Crop models ignore the potential for farmer adaptation. And thus the Ricardian model was born: these economists regress land values on average temperatures, plus a bunch of controls, and find mild-to-positive effects of climate change. 
  • But wait! Enter Team ARE. A second set of economists argued that the Ricardian approach, like most cross-sectional regressions, suffered from omitted variable bias. In particular, they note that the presence of irrigation dramatically changes agriculture, and suggested estimating different models for irrigated and non-irrigated regions (if you're keeping score at home, you can also implement this suggestion via an interacted model). When they account for irrigation, climate change looks pretty bad again.
  • A few years later, some other economists arrived on the scene. If you're worried about irrigation, they argued, you should be worried about a whole host of other omitted variables in the cross-section. But have we got the idea for you! These guys used a panel fixed effects model to remove time-invariant omitted variables - also sparking a debate about "weather vs. climate" (using short-run fluctuations rather than long-run variation to estimate the model in question) - and find again that climate change probably isn't so bad.
  • Unnnnnfortunately, our panel-data-wielding heroes had some data problems (brought to light by Team ARE). If you correct them, climate change harms US agriculture to the tune of tens of billions of dollars. Oops.
  • But the weather-vs-climate thing is still unsatisfying! So Team ARE: The Next Generation used a long-difference estimator to show that actually, farmers don't seem to be doing a better job responding to climate change over time - it'll still be bad.
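The cross-section-vs-fixed-effects point in that back-and-forth is easy to see in a stylized simulation. Everything here is invented (this is the textbook within estimator, not any particular paper's specification): county characteristics like soil or irrigation are correlated with average temperature, which biases the pooled cross-section but washes out once you demean by county.

```python
import numpy as np

rng = np.random.default_rng(2)
n_county, n_year = 200, 30

# Invented panel: county effects (soil, irrigation) are correlated with
# average temperature -- exactly the omitted variable the critics worried
# about in the cross-sectional (Ricardian) regressions.
county_fx = rng.normal(size=n_county)
avg_temp = 20 + 2 * county_fx
temp = avg_temp[:, None] + rng.normal(0, 1, (n_county, n_year))
log_yield = -0.05 * temp + county_fx[:, None] + rng.normal(0, 0.1, temp.shape)

# Pooled OLS (cross-sectional flavor): badly biased, since county_fx loads
# on temperature through avg_temp -- the sign even flips here.
X = np.column_stack([np.ones(temp.size), temp.ravel()])
b_ols, *_ = np.linalg.lstsq(X, log_yield.ravel(), rcond=None)

# Within (fixed-effects) estimator: demeaning by county absorbs county_fx,
# leaving only year-to-year weather variation to identify the slope.
t_dm = temp - temp.mean(axis=1, keepdims=True)
y_dm = log_yield - log_yield.mean(axis=1, keepdims=True)
b_fe = (t_dm * y_dm).sum() / (t_dm ** 2).sum()

print(round(b_ols[1], 2), round(b_fe, 2))  # pooled biased; FE near -0.05
```

The flip side, of course, is that the FE estimate is identified off short-run weather fluctuations - which is exactly the "weather vs. climate" critique in the bullets above.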

Here's where Ariel's new paper comes along. He notes that (for various reasons glossed over above) we might actually want to run a Ricardian-style model: in essence, weather vs. climate hasn't been fully resolved. At the same time, though, we should be worried about omitted variable bias - and in particular, about spatially-dependent omitted variable bias. The argument is pretty simple: most things that might be left out of an agriculture-climate regression (and would therefore bias it) vary smoothly over space. Conveniently for the econometrician, there are some newer estimators that we can use to understand the magnitude and direction of the bias that might result from these types of omitted variables. Ariel uses these tricks, and finds that (lo and behold) climate change might not be so bad for agriculture in the US after all.
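The spatial-smoothness intuition is worth seeing in miniature. This toy simulation is emphatically not Ariel's Spatial Durbin machinery - it just shows (with invented numbers) how a spatially smooth omitted variable biases the cross-section, and how comparing near neighbors, where the smooth confounder nearly cancels, recovers the truth. The real estimators exploit the same logic much more carefully.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000

# Invented example: counties along a line, with soil quality varying smoothly
# over space, unobserved, and correlated with climate.
loc = np.sort(rng.uniform(0, 10, n))
soil = np.sin(loc)                              # smooth spatial confounder
temp = 0.5 * loc + soil + rng.normal(0, 0.5, n)
land_value = -1.0 * temp + 2.0 * soil + rng.normal(0, 0.5, n)

# Naive cross-sectional (Ricardian-style) OLS: biased, because unobserved
# soil quality loads on temperature.
X = np.column_stack([np.ones(n), temp])
b_naive, *_ = np.linalg.lstsq(X, land_value, rcond=None)

# Differencing adjacent locations: anything varying smoothly over space
# (like soil here) nearly cancels, and the true effect of -1.0 reappears.
d_temp = np.diff(temp)
d_val = np.diff(land_value)
b_diff = (d_temp * d_val).sum() / (d_temp ** 2).sum()

print(round(b_naive[1], 2), round(b_diff, 2))  # naive biased toward zero
```

The catch, as Ariel's paper makes clear, is that you have to take a stand on exactly how the omitted variables vary over space - which is where the structural assumptions (and the wide confidence intervals) come in.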

Effects of climate change estimated using OLS: this is the original economist version.


New-fangled effects of climate change using the Spatial Durbin Model. Note the lack of hugely negative effects, especially towards the right of the figure.


This paper is full of technical details, makes some fairly strong structural assumptions about exactly how omitted variables vary over space, and ends up with fairly wide confidence intervals, but all in all, it makes a useful contribution to an important debate, and is worth a read. I'll be interested to see where it ends up, and how seriously the literature takes the re-posited suggestion that climate change really isn't that bad for US ag. If nothing else, this paper highlights just how important it is for us to figure out how to measure adaptation!

Bonus: If you've read this far down, you deserve something fun. Go check out my new favorite internet game. h/t Susanna & Paula.

 

Edited to fix links. Thanks to my usual blog-police for pointing this out.

Weekend Op-Ed: Delhi driving restrictions actually work [so far]!

New semester, new blog-resolutions. We're back with a WWP...except that the analysis I'm talking about here hasn't actually made its way into a working paper yet. That said, the work is interesting and cool, and extremely policy-relevant, so it's worth taking a minute to discuss, I think.

For those of you not up on your India news, Delhi's air pollution is horrendous. Air pollution data suggest that Delhi's PM2.5 and PM10 concentrations are the worst in the world - the city has even less breathable air than notoriously dirty Beijing. Having spent some time in Delhi last January, I can add some of my own anecdata (my new favorite Fowlie-ism) as well: after three days of staying and moving around in the city, I was hacking up a lung trying to walk up three flights of stairs to our airbnb. I'm certainly not the fittest environmental economist around, but a few steps don't usually give me trouble. 

So I was glad to hear that Delhi has recently been undertaking some efforts to improve its air quality. I was less glad to hear the method for doing so: between January 1 and January 15, Delhi implemented a pilot driving restriction. Cars with license plates ending in odd numbers would be allowed to drive on odd-numbered dates only, while cars with license plates ending in even numbers would be allowed to drive on even-numbered dates only. This sounds good - cutting the number of cars on the road by about half should have a drastic effect on air quality, right? The problem is that Mexico City has had a similar rule in place for years -  Hoy No Circula - and rockstar professor Lucas Davis took a look at its effects in a 2008 paper, published in the Journal of Political Economy. Unfortunately (I thought) for the Indian regulation, Lucas finds that license-plate-based restrictions lead to no detectable effect on air quality across a range of pollutants.

Here's Lucas' graphical evidence for Nitrogen Dioxide. If the policy had worked, we would've expected a discontinuous jump downwards at the gray vertical line. He shows similar figures for CO, NOx, Ozone, and SO2. 


Lucas provides an interesting possible explanation for the lack of change: he has suggestive evidence that drivers responded to the regulation by buying additional vehicles - in the Delhi case, if I have a license plate ending in 1, but really value driving on even-numbered days, I might go out and get a car with a plate ending in 2 instead. In light of this evidence, I was less-than-optimistic about the Delhi case. 

So what actually happened in Delhi? New evidence from Michael Greenstone, Santosh Harish, Anant Sudarshan, and Rohini Pande suggests that the Delhi driving restriction pilot did have a meaningful effect on pollution levels - on the order of 10-13 percent! (A more detailed overview of what they did is available here). These authors use a difference-in-differences design, in which they compare Delhi to similar cities before and after the policy went into effect, doing something like this:

Effect = (Delhi - Others)_Post - (Delhi - Others)_Pre
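In code, the arithmetic above is just a double difference. The PM2.5 numbers here are entirely made up for illustration - the actual levels, and the 10-13 percent estimate, come from the authors' analysis, not from anything below.

```python
# Hypothetical PM2.5 means (ug/m3) for Delhi and a comparison-city average,
# before and after the odd-even pilot began -- invented numbers.
pm = {
    ("delhi", "pre"): 300.0, ("delhi", "post"): 260.0,
    ("others", "pre"): 200.0, ("others", "post"): 195.0,
}

# Effect = (Delhi - Others)_Post - (Delhi - Others)_Pre
effect = (pm[("delhi", "post")] - pm[("others", "post")]) - (
    pm[("delhi", "pre")] - pm[("others", "pre")]
)
print(effect)  # -35.0: Delhi fell 35 ug/m3 more than the comparison cities
```

Subtracting the comparison cities' change nets out anything that hit all the cities at once (weather, regional shocks), which is exactly why the parallel-trends assumption matters.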

Under the identifying assumption that Delhi and the chosen comparison cities were on parallel emissions trajectories before the program went into effect, this estimation strategy is nice because it removes common shocks in air pollution. The money figure from this analysis shows the dip in pollution in Delhi starkly.

It looks like, in its brief pilot, Delhi was successful at reducing pollution using this policy. So why is the result so different from Mexico City's? Obviously, India and Mexico are very different contexts. It also seems like the channel Lucas highlighted, about vehicle owners purchasing more cars, is something that people would only do after being convinced that the policy would be permanent - so there might be additional adjustment that occurs that isn't picked up in a pilot like this. (Would you go out and buy a new car in response to someone telling you that over the next two weeks they're trying out a driving restriction? I don't think I have that kind of disposable income...) Also, the control group obviously matters a lot. I'd like to see (and expect to see, if this gets turned into an actual working paper) a further analysis of what's going on in the comparison cities over the same time period. The pollutants being measured are different - though I doubt that this actually affects much, given how highly correlated PM is with the pollutants measured in Lucas' paper.

In general, I'm encouraged to see both that Delhi is taking active steps to attempt to reduce air pollution, and that these steps are being evaluated in credible ways. As the authors point out in their Op-Ed, and as I've tried to highlight above, we should be cautious about extrapolating the successes of this pilot to the long run - a congestion or emissions pricing scheme might be a more effective long-run approach to tackling air pollution.

I'd also like to briefly highlight the importance of making air pollution data available for these kinds of analyses. There's a cool new initiative online called OpenAQ that's trying to download administrative data from pollution monitors and make this information publicly available - and they're not the only ones. Berkeley Earth is also providing some amazing data on Chinese air quality - and rumor has it they'll be adding more locations soon. Understanding the implications of air quality for health, productivity, and welfare is increasingly important as developing country cities grow and house millions in dirty environments - the more data that's out there to aid in this effort, the better.