There was a big kerfuffle recently about the fact that researchers were only able to replicate about half of prominent psychology results. This is potentially worrisome, and suggests that we should be doing more replication exercises where possible, as well as implementing new norms in journals and elsewhere that encourage better incentives among researchers and editors.
There's another facet of replication, though, that is maybe a step below re-running the whole experiment: simply re-running posted replication code, and making sure that the results reported in a study match what the code produces. In this sense, studies really should replicate. Andrew Chang and Phillip Li at the Fed just released a paper describing the results of this type of replication exercise on 60 published papers in good journals. Surprisingly (and, I would argue, embarrassingly), as their title ("Is Economics Research Replicable? Sixty Published Papers from Thirteen Journals Say 'Usually Not'") suggests, just over half of the attempted replications failed even after corresponding with the papers' authors - and the failure rate was higher without that correspondence. David Evans at the World Bank has a nice blog post that describes the paper in more detail. The ultimate suggestions? Journals should require the posting of replication code and data (even today, some top journals aren't doing this - here's looking at you, QJE!); authors should be clear about what version of statistical software and what non-standard packages were used to conduct the analysis; authors should post expected run-times for their code; authors should be sure to note in what order different code files should be run; and replication materials should absolutely include the code files that generate tables and figures. Seems like a reasonable set of requests (I myself have banged my head against a wall repeatedly trying to replicate code without proper documentation, packages, or software versions, so I'm very sympathetic to this cause).
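As a concrete illustration of the "document your software versions and packages" recommendation, here's a minimal sketch (my own, not from the Chang and Li paper) of how an author could mechanically record the interpreter version, package versions, and intended run order alongside their replication code; the package names and script names are hypothetical:

```python
# Sketch: record the environment details that replicators need -
# software version, non-standard package versions, and run order.
import importlib.metadata
import platform


def environment_manifest(packages, run_order):
    """Collect interpreter and package versions, plus the order in
    which the analysis scripts are meant to be run."""
    manifest = {
        "python_version": platform.python_version(),
        "platform": platform.platform(),
        "packages": {},
        "run_order": run_order,  # e.g. ["01_clean.py", "02_tables.py"]
    }
    for pkg in packages:
        try:
            manifest["packages"][pkg] = importlib.metadata.version(pkg)
        except importlib.metadata.PackageNotFoundError:
            # Flag anything missing rather than failing silently.
            manifest["packages"][pkg] = "NOT INSTALLED"
    return manifest


if __name__ == "__main__":
    # Hypothetical package list and script order, for illustration only.
    print(environment_manifest(["numpy", "pandas"],
                               ["01_clean.py", "02_estimate.py", "03_tables.py"]))
```

Dropping a file like this into a replication archive (or just printing it at the top of the log) would address three of the suggestions above at essentially zero cost to the author.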
Here's some great commentary from Nature about replication in research.
Also: if you want to be part of a burgeoning movement to correct these types of problems in the social sciences, you should go to the Berkeley Initiative for Transparency in the Social Sciences' annual meeting (or, better yet, submit a paper).