Thursday, November 10, 2011

The Talibans of Randomization

Economics, and Development Economics is no exception, is a field that is as fad-prone as Milan fashion week in the fall.  Karl Lagerfeld would definitely feel comfortable in my intellectual tribe if he mastered Pontryagin's Maximum Principle and knew how to bootstrap standard errors. 

The current fad in my academic field, which is, broadly speaking, the microeconomics of development, is randomization.  Often referred to as the RCT (Randomized Controlled Trial), this technique for evaluating the impact of a variety of social programs, ranging from conditional cash transfers (think of Bolsa Escola in Brazil) to HIV/AIDS testing, has achieved a remarkable degree of influence.  The high priestess of RCTs is, of course, Esther Duflo of MIT, though a plethora of other clerics populate the RCT madrassah.

I should be thrilled at this triumph of an MIT faculty member, being an MIT product myself.  Thrilled I am not.  Don't get me wrong.  I am not against RCTs in and of themselves.  Some of my best friends do them... I am even running several RCTs myself at this very moment.  But for those of us who have spent an inordinate amount of time working in the field, attempting to convince policymakers in developing countries to actually evaluate the effectiveness of their programs, RCTs have to a large extent (at least in the manner in which they are flogged by the randomistas) not lived up to their promise.  Worse, my current prior is that they are rapidly approaching the point where they become counterproductive.

There are probably three reasons for this.  First, the randomistas (this memorable characterization is due to Martin Ravallion of the Research Department of the World Bank), because of the purported purity of their faith, have a great deal of difficulty in actually talking to program initiators.  Once again, don't get me wrong.  There are lots of great and interesting RCTs going on out there (just take a look at what people such as Dean Karlan of Yale are doing).  But many RCTs are bogus in the sense that the NGOs (to take but one example) set up to implement the program being evaluated are under the total control of the randomistas themselves.  How many randomistas spend weeks or months in hot, steamy developing-country ministries actually explaining to policymakers why they should evaluate their policies, and why an RCT would be a smart way of doing so?  The basic point is that many randomistas are selling RCTs, not evaluation per se.  And it is evaluation per se, and not the chosen methodology, that matters in moving the policy agenda forward in the developing-country context.

Second, RCTs represent the triumph of a technique over actual ideas.  Of course, this point is related to the first.  One of the legitimate selling points of an RCT is that it allows one to eschew complex econometric procedures and untenable statistical assumptions in favor of a procedure that turns program deployment itself into the evaluation technique.  One translates an essentially epidemiological approach into a social science setting:  this is great! Economists need a good dose of Lex Parsimoniae.  Moreover, contrary to popular opinion, an RCT is often the fairest manner of deploying a program.  If not everyone can receive the treatment, what would you prefer?  That only villages with connections to the presidential palace get the program money, or that all villages have an equal chance?  Point taken.  But my basic argument here is that randomistas, because of their fervent adherence to dogma, often engage in practices that are deleterious to the cause of evaluation per se.  I once saw a very famous randomista make a brilliant presentation on methodology to a group of developing-country program managers.  After the presentation, one of the program managers (from Conakry) asked her what she would suggest as an alternative evaluation method, given that an RCT was not possible in the context of the research question the program wanted answered.  The randomista's answer? Change your research question, of course!  This is the tail wagging the dog, or looking for the car keys under the street lamp not because that is where you dropped them but because that is where the light is shining.  Can one do more harm to the cause of getting decisionmakers in developing countries to adopt evidence-based policymaking?  I rest my case.
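For readers who want to see, in the simplest possible terms, what "turning program deployment into the evaluation technique" amounts to, here is a minimal sketch.  It is my own illustrative toy, not anyone's actual study code: the villages, the outcome variable, and the effect size are all made up.  Randomly assign villages to treatment and control, compare mean outcomes, and bootstrap the standard error.

```python
# Minimal sketch of the logic behind an RCT evaluation (illustrative only).
# Assumptions: a made-up list of villages and a simulated outcome; no real data.
import random
import statistics

random.seed(42)

villages = [f"village_{i}" for i in range(200)]

# Random assignment: each village has an equal chance of receiving the program.
random.shuffle(villages)
treatment, control = villages[:100], villages[100:]

# Simulated outcomes (say, a school enrollment rate); the +0.05 is a made-up effect.
outcome = {v: random.gauss(0.60, 0.10) for v in control}
outcome.update({v: random.gauss(0.65, 0.10) for v in treatment})

def diff_in_means(treat, ctrl):
    return (statistics.mean(outcome[v] for v in treat)
            - statistics.mean(outcome[v] for v in ctrl))

estimate = diff_in_means(treatment, control)

# Bootstrap the standard error of the estimated program effect.
boot_estimates = []
for _ in range(1000):
    t = random.choices(treatment, k=len(treatment))
    c = random.choices(control, k=len(control))
    boot_estimates.append(diff_in_means(t, c))
se = statistics.stdev(boot_estimates)

print(f"Estimated program effect: {estimate:.3f} (bootstrap SE: {se:.3f})")
```

The point, of course, is that because assignment is random, a comparison of means is essentially all the econometrics you need.  That is the Lex Parsimoniae I was praising above; my quarrel is with the dogma, not the arithmetic.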

Third, randomistas often characterize what they do as the "Gold Standard" of evaluation.  Apart from the methodological idiocy of such a statement (more on this in a later post...), just think about it.  According to Peter Temin of MIT (or, if you want the movie version, Liaquat Ahamed's wonderfully readable and Pulitzer Prize-winning Lords of Finance), the Gold Standard was one of the main causes of the Great Depression. 

Some selling point.

1 comment:

  1. Very interesting article.
    A question I keep wondering about is how one should practice economics in the field, and how much trust we should place in the strategies that come out of economic research. Why have aid policies changed every 10 years, when all of them seem reasonable and ideologically sound, yet none have worked (or not yet)?