Monday, November 14, 2011

Flogging a dead horse

Back to randomizations.  For a practicing academic economist, the name of the game is to publish in refereed journals.  This means that your work is scrutinized by one, two, or sometimes three referees who, in their sarcastic reports, usually compare your efforts (unfavorably) to those of a chimpanzee equipped with a laptop.  And that's when they are being kind.

The purpose of this academic equivalent of Basic Training at Camp Lejeune (of "The few, the proud, the retarded" fame) is to ensure quality control over what gets published and what does not.  Most of the time it works fairly well, conditional on the referees being competent: that's a huge assumption, and most of us in the academic arena have been confronted with situations in which the chimpanzee simile should more usefully be applied to the person doing the refereeing.  And in this case I am engaging in gross understatement.

Whether the system works or not is of no import here.  Much has been written about this and I do not wish to descend into this particular gladiatorial ring, at least for the time being.  The question I do want to raise is the following: what happens when you submit a paper to a scholarly journal and that paper is based on a Randomized Control Trial (RCT)?  In particular, what happens when, Allah kareem, the referees give you a "revise and resubmit" (affectionately known as an R&R amongst the cognoscenti)?  If the requested changes are cosmetic, no sweat.  But what if the legitimate concerns of the referees have to do with your research design?  Humor me and let's work through a not-so-farfetched thought experiment.

Suppose that you have just spent three years and four hundred thousand dollars of World Bank money working on the impact evaluation of a major social program in a developing country, and that the impact evaluation in question is based on an RCT.  The referees tell you that your design is flawed.  Maybe they are right, and maybe they are wrong.  In the good old days, when you actually had to think through your identification strategy instead of blithely confining your statistical savvy to running a dozen power calculations using Optimal Design, there was a glimmer of hope.  You could be smart.  You could be imaginative.  You could (with a good dose of luck and the concomitant IQ) find some way of addressing the referees' concerns by modifying your identification strategy through some econometric tour de force.
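For readers who have never run one: the power calculations in question boil down to solving for the sample size needed to detect a hypothesized effect at a given significance level.  Here is a minimal sketch in Python of that sort of calculation (the numbers are purely illustrative and come from no actual evaluation; Optimal Design itself is a standalone point-and-click tool, so this uses statsmodels instead):

```python
# A minimal sketch, assuming a two-arm individually randomized trial with
# equal allocation.  The effect size, alpha, and power below are illustrative
# defaults, not figures from this post or from Optimal Design.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_arm = analysis.solve_power(
    effect_size=0.2,  # hypothesized standardized effect (Cohen's d)
    alpha=0.05,       # two-sided significance level
    power=0.8,        # target probability of detecting the effect
    ratio=1.0,        # control arm the same size as the treatment arm
)
print(f"Required sample size per arm: {n_per_arm:.0f}")  # roughly 394
```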

With an RCT, all of this is moot.  Either they buy it or they do not.  Your randomization and the deployment of a multi-million-dollar program are done.  Even with help from Stephen Hawking, you can't make time run backwards.  The bottom line?  You are dead in the water.

The response to this might be that you should have thought more carefully about the research design for your RCT in the first place, and consulted the illuminati of the field ex ante.  This is a valid point.  But the likely consequence of such incentives is that original research designs that do not toe the Randomista Party line to the letter will be condemned to the intellectual dung heap.  This is a shame, and does not advance scientific enquiry.  Though I doubt that any statistics will ever appear on this issue, my prior is that the refereeing process on such papers will turn into a thumbs-up or thumbs-down game, with little Neros exercising their power of academic life and death even more than they currently do.

There is probably nothing that can be done about this, so it is not worth getting one's knickers in a twist over.  I am simply flogging a dead horse.  In passing, the eponymous (pace William Safire...this is nonstandard usage) title of this post refers --for those of you who are so uncouth as not to have picked up the reference-- to the Sex Pistols.  Now there was a bunch of guys who knew how to randomize with their guitars.
