Filed away

The Economist has a great interactive infographic this week on the dangers of what the research transparency community calls the "file-drawer problem." Not to sound like a whiny grad student or anything, but research is hard. We're under a lot of pressure to produce one or two big, punchy results that are exciting and attractive to journals. Of course, for every published paper, there are probably upwards of ten others locked away somewhere - in the past, in file cabinets in academic offices; these days, buried in a Dropbox.

The problem is that if we only publish those big, flashy results, we end up with a distorted view of the world. Suppose (I'm picking on an arbitrary set of studies here - I have no reason to think this is actually the case) that the only results on energy markets to get published are those showing evidence of the massive exercise of market power. We'd likely conclude that there is massive market power in electricity markets. It might be the case, though, that through a combination of authors not submitting and journals not publishing results that aren't statistically significantly different from zero, the real story is that most electricity markets show no evidence of market power. In that case, the scientific record misrepresents both the actual state of the world and the level of knowledge in the community.

Don't worry, little research project, you can come out of there!
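To make the distortion concrete, here's a minimal toy simulation (my own sketch, not from the Economist piece): generate a pile of studies of a truly null effect, let only the statistically significant ones out of the file drawer, and look at what the "published" record says.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

true_effect = 0.0   # the "real story": no effect at all
n_studies = 1000    # studies run, most destined for the file drawer
n_per_study = 100   # observations per study

published = []
for _ in range(n_studies):
    sample = rng.normal(true_effect, 1.0, size=n_per_study)
    _, p = stats.ttest_1samp(sample, 0.0)
    # authors submit, and journals accept, only significant results
    if p < 0.05:
        published.append(sample.mean())

print(f"True effect:                {true_effect:.2f}")
print(f"Mean |published| effect:    {np.mean(np.abs(published)):.2f}")
print(f"Share of studies published: {len(published) / n_studies:.1%}")
```

By construction the truth is zero, but every estimate that survives the significance filter looks like a real effect, so the published record ends up suggesting an effect of roughly a fifth of a standard deviation where there is none.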

So, how should we understand published results in light of this problem? There are a couple of things we can do to counter these biases. First, we can do more meta-analysis. How does this help? Meta-analyses let us formally aggregate what the literature currently says. Of course, a meta-analytic result will only incorporate research that made it out of the file drawer, but we can follow the example set by Sol Hsiang, Marshall Burke, and Ted Miguel in a reply in Climatic Change: test the sensitivity of our meta-analyses to the inclusion of the most contradictory studies, or even to hypothetical studies with null results, to assess how confident we should be in the published record (see Figure 1, Panel A in Hsiang, Burke, Miguel for a nice graphical depiction of this process).
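Here's a rough sketch of that kind of sensitivity check, using a simple fixed-effect (inverse-variance weighted) meta-analysis. The effect sizes and standard errors below are invented for illustration, and this is only in the spirit of the Hsiang, Burke, and Miguel exercise, not their actual procedure:

```python
import numpy as np

def pooled_estimate(effects, ses):
    """Fixed-effect (inverse-variance weighted) meta-analytic estimate."""
    effects, ses = np.asarray(effects), np.asarray(ses)
    weights = 1.0 / ses**2
    return np.sum(weights * effects) / np.sum(weights)

# hypothetical published effect sizes and standard errors, invented for illustration
effects = np.array([0.8, 0.6, 1.1, 0.5, 0.9])
ses = np.array([0.30, 0.25, 0.40, 0.20, 0.35])

print(f"Published studies only: {pooled_estimate(effects, ses):.3f}")

# sensitivity check: recompute as if k null results (effect = 0, with a
# typical standard error) had made it out of the file drawer
for k in (1, 5, 10):
    eff = np.concatenate([effects, np.zeros(k)])
    se = np.concatenate([ses, np.full(k, ses.mean())])
    print(f"Plus {k:2d} hypothetical null studies: {pooled_estimate(eff, se):.3f}")
```

If the pooled estimate collapses toward zero after adding only a handful of hypothetical nulls, the published consensus is fragile; if it barely moves, we can be more confident in it.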

Next, we can make sure our results (both statistically significant and not) actually end up in the world. A working paper on a website isn't the same as a publication in Econometrica, but it's a start. Pre-registration is a great way to create a record of what people are working on, and will hopefully enable follow-ups when pre-registered research never appears. Journals can also contribute by becoming more willing to publish properly executed null results. On the researcher end, I think we can also do more to poke at our null results: did we have the requisite sample size to detect an effect in the first place? Are we using the correct estimation strategy? Are our econometrics defensible? If so, we should be willing to stand by them.
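On the first of those questions, a quick power check is cheap to run. Here's a minimal sketch using statsmodels; the effect size (Cohen's d = 0.2) and the sample of 150 per group are placeholders, not numbers from any actual study:

```python
from statsmodels.stats.power import TTestIndPower

power_calc = TTestIndPower()

# sample size needed to detect an assumed small effect (Cohen's d = 0.2)
# at the conventional alpha = 0.05 with power = 0.8
n_required = power_calc.solve_power(effect_size=0.2, alpha=0.05, power=0.8)
print(f"Required n per group: {n_required:.0f}")

# power actually achieved with a placeholder sample of 150 per group
achieved = power_calc.solve_power(effect_size=0.2, alpha=0.05, nobs1=150)
print(f"Achieved power at n = 150: {achieved:.2f}")
```

If the achieved power is low, a null result says very little: the study was unlikely to detect the effect even if it were real.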

I for one have been staring at figures full of null results for the past couple of days. I'll do my part: I promise that some version of these results (whether they end up null or not) will appear as a post here, at the very least. Take that, file cabinet.

PS: Side plug for BITSS, the Berkeley Initiative for Transparency in the Social Sciences. The folks over there are working hard at making social science research better and more transparent (and keeping work out of electronic file cabinets everywhere!). Highly recommend.