I'm not much of an econometrician, but at least I'm not afraid of trying to code things from scratch. This déformation professionnelle can probably be traced back to Frank Fisher who, in the first problem set in the PhD econometrics sequence at MIT, had us do OLS using a pocket calculator: yes, we had moved beyond Blaise Pascal's mechanical calculator of 1642 in those days. I guess you really respect PCs after you've been subjected to that sort of cruel and unusual punishment.
In this day and age, people are unfortunately used to pushing a button, or at least invoking some pre-packaged command, which will usually spew things out directly as a table into a LaTeX file, in an aesthetically pleasing format, wherever you want it to appear in the earth-shattering paper that you are writing. This is wonderful. It's simple. But do people, especially newly minted économètres en herbe, actually know what they are doing?
I know I sound antediluvian, but sometimes I really wonder... Let's take two concrete examples, which span the entire spectrum of the profession.
First, let's take someone on the highest (Great White Shark) rung of the economic food chain: Robert Barro. In a memorable 2005 Journal of Monetary Economics paper, Barro and Lee, apart from managing to run one of the crispest forbidden regressions in history (with a half dozen covariates included in the structural equation not appearing in the first-stage reduced forms) and, apparently without knowing it, multiplying their exclusion restrictions by a factor of three (by including a whole bunch of covariates in the first-stage reduced form that do not, in turn, appear in their structural equations), make the following statement (footnote 16 on p. 1253):
"Fixed-effects Tobit or probit models cannot be implemented because there does not exist a sufficient statistic allowing the fixed effects to be conditioned out of the likelihood. See Wooldridge (2002)."
Apart from demonstrating that Wooldridge's (2002) textbook was never opened (in section 16.8.2, on p. 541, Wooldridge shows how to use a garden-variety Mundlak procedure to correct for time-invariant heterogeneity in a panel data tobit regression), this would be news to Bo Honoré, one of the smartest econometricians out there. In a 1992 Econometrica paper, Honoré developed trimmed LAD, an estimator for precisely this model: Tobit with fixed effects. He even furnished the corresponding GAUSS code, called PANTOB --PANelTOBit, get it?-- which I used with two co-authors in a 1998 paper. Come on, Honoré's work appeared 13 years before the Barro and Lee paper, and Econometrica is not exactly the sort of journal that goes unnoticed! And Jim Powell was already dealing with this issue in an Econometrica article back in 1986. That's 25 years ago! But I guess the specific command didn't exist in Stata. And adding all of the covariates in their time-invariant country-specific means incarnation to a random-effects tobit, which is all that the miracle of the Mundlak procedure calls for, was just not worth the trouble.
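Lest anyone think the miracle requires divine intervention, here is a minimal sketch in R of the Mundlak device. The data frame and variable names are hypothetical, and I use the pooled tobit from the AER package purely for illustration; Wooldridge's version plugs the very same country means into a random-effects tobit.

library(AER)  # provides tobit(), a wrapper around survreg()

## Mundlak device: add the country-specific time averages of the covariates,
## so that time-invariant heterogeneity correlated with the regressors is
## absorbed by the means. Hypothetical data frame d: columns y, x1, x2, country.
d$x1_bar <- ave(d$x1, d$country)  # country mean of x1
d$x2_bar <- ave(d$x2, d$country)  # country mean of x2

## Tobit with the Mundlak correction terms, censored at zero.
fit <- tobit(y ~ x1 + x2 + x1_bar + x2_bar, left = 0, data = d)
summary(fit)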
Second, for the plankton end of the food chain, consider the following conversation that I have had with economics graduate students, on several continents, a depressing number of times:
Me: "Why did you include a lagged dependent variable in your econometric specification given that your theoretical model didn't call for one?"
Student: "I had to."
Me: "What do you mean by 'you had to'?"
Student: "I wanted to use GMM to estimate the equation."
Me: "So? GMM is just efficient IV, so do the Nike thing and 'just do it'! I still don't see why you included the lagged dependent variable... it doesn't appear in the theoretical model."
Student: "Actually, it's because xtabond2 doesn't let me estimate if I don't include a lagged dependent variable."
Me: "So, estimate with pure GMM without a lagged dependent variable!"
Student: "I can't find a command that allows me to do that."
Here is another prima facie case (see my post on the Talibans of Randomization) of the tail wagging the dog. The prosecution rests.
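And since the prosecution has rested, here is Exhibit A: "pure GMM without a lagged dependent variable" is a dozen lines of matrix algebra. A sketch in R, under my own (hypothetical) naming conventions, with y the outcome, X the regressors your theoretical model actually calls for, and Z the instruments.

## Two-step efficient GMM for y = X b + u with instrument matrix Z,
## coded from scratch -- no lagged dependent variable in sight.
gmm_iv <- function(y, X, Z) {
  n  <- nrow(X)
  A  <- t(X) %*% Z                                  # X'Z cross-moment matrix
  ## Step 1: 2SLS, i.e. GMM with weight matrix (Z'Z)^{-1}.
  W1 <- solve(crossprod(Z))
  b1 <- solve(A %*% W1 %*% t(A), A %*% W1 %*% crossprod(Z, y))
  ## Step 2: re-weight by the inverse of Z' diag(u^2) Z / n, which is
  ## robust to heteroskedasticity, and re-estimate.
  u  <- as.vector(y - X %*% b1)
  W2 <- solve(crossprod(Z * u) / n)
  b2 <- solve(A %*% W2 %*% t(A), A %*% W2 %*% crossprod(Z, y))
  V  <- n * solve(A %*% W2 %*% t(A))                # asymptotic vcov of b2
  list(coef = as.vector(b2), se = sqrt(diag(V)))
}

With Z = X this collapses to OLS; the point is that nothing in the estimator forces a lagged dependent variable into X.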
What is to be done about this? For Robert Barro and his merry band of co-authors, it is unfortunately too late. Arguments d'autorité rule in the rarefied atmosphere of top-level academic publishing, and anyway, he has some very interesting stuff to say. So why gripe about the minor point that he would have been failed by Jerry Hausman in his PhD econometrics class? Frank Fisher, one of the sharpest intelligences it has been my privilege to witness in action, would simply have given him a negative grade. I kid you not: Frank, quite reasonably in my humble opinion (now, with the benefit of hindsight, not back then...), preferred no answer to a stupid answer, and used to hand out negative grades regularly --hmmm, maybe I should consider this at the Graduate Institute... I wonder how that translates into the strange Swiss 6-point grading scale?
But for graduate students in Economics, there is still hope, although the manner in which graduate econometrics is usually taught leaves little room for optimism. I try to teach the stuff by getting people to code various procedures à l'ancienne in R, using basic matrix algebra. It's sort of like making your Hollandaise sauce from scratch, the old-fashioned way, instead of tearing open a plastic bag of the chemically synthesized ersatz stuff. But this is not exactly a recipe for popularity, by dint of the simple fact that most graduate students these days are not really used to having to actually understand what they estimate.
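To give a concrete taste of the Hollandaise, here is the first recipe on the menu: OLS by hand, nothing but (X'X)^{-1}X'y, checked against the canned routine on simulated data (a sketch; the simulated numbers are illustrative).

## OLS from scratch: b = (X'X)^{-1} X'y, with conventional standard errors.
ols <- function(y, X) {
  XtX_inv <- solve(crossprod(X))             # (X'X)^{-1}
  b       <- XtX_inv %*% crossprod(X, y)     # coefficient vector
  e       <- y - X %*% b                     # residuals
  s2      <- sum(e^2) / (nrow(X) - ncol(X))  # estimated error variance
  cbind(coef = as.vector(b), se = sqrt(diag(s2 * XtX_inv)))
}

## Sanity check: the two sets of numbers should coincide.
set.seed(1)
x <- rnorm(200); y <- 1 + 2 * x + rnorm(200)
ols(y, cbind(1, x))
coef(summary(lm(y ~ x)))[, 1:2]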
Indeed, by roughly mid-November (i.e. now), my partner in econometric crime Ugo Panizza and I are without doubt the two most hated men in Rigot, the building resembling a refugee camp near the entrance to the UN in Geneva, where we teach. Sometimes I wonder whether the daily demonstrations on the Place des Nations aren't going to turn into an ugly free-for-all, with angry protesters brandishing "no more R coding in Econometrics II!" placards breaking down my office door (which wouldn't be hard, given the state of our beloved building). I should probably keep a tear gas canister handy next to my desk.
Now I have no problem with being seriously unpopular in the name of a noble cause. As Stalin is purported to have quipped, it is far better to be feared than to be loved. I positively revel in the opprobrium. After all, if people are made significantly unhappy by having to struggle with code, it means that they had no idea how to do it in the first place, and probably did not really understand the procedures that they were running. The push-button antibodies are therefore likely to be taking hold in their organisms. And that, to be fair, is painful --had a tetanus shot recently?
It's probably a losing battle, but what the heck, there is something glorious about charging the machine gun nest on horseback with sabre drawn. And that poor dog really has to regain control of his tail.
Wednesday, November 16, 2011
Of sex and drugs and rock'n roll... and economic models
I guess, as the title of this post implies, that this is my Sex Pistols week (see the conclusion of my previous post). Economics, and development economics in particular, is not exactly the kind of discipline that is, to the best of my limited knowledge, associated with rock music, and by "rock" I mean truly classic rock (I'm showing my age here). All of this has to change.
We need to make development economics fun. We need to make development economics hip (I think it already is, but I am probably in a minority of one). We need to make development economics rock.
Here are a few suggestions in terms of some economic model-song pairings:
- George Akerlof's "Market for Lemons" (1970) model: "The lemon song," by Led Zeppelin.
- Rosenstein-Rodan's (1943) model of the Big Push: "When push comes to shove," by the Grateful Dead.
- Grossman and Helpman's "Quality Ladders in the Theory of Growth" (1991) paper: "Stairway to heaven," by Led Zeppelin.
- The Harris-Todaro model of migration (1969, 1970): "Should I stay or should I go," by the Clash.
- Robert E. Lucas's (1988) model of the mechanics of economic development: "Ted the mechanic," by Deep Purple. "Bob the mechanic" would have been ideal... can anyone mention this to Ian Paice?
- The Logic of Collective Action (1965) by Mancur Olson: "The logical song," by Supertramp (saccharine, but classic nevertheless).
- Robert Barro's (1974) "Are Government Bonds Net Wealth?" paper: "Taxman," by the Beatles.
- The Paul Romer paper that started everything in terms of endogenous growth models was "Crazy Explanations for the Productivity Slowdown" (1987). How about "Let's go crazy," by Prince (yeah, yeah, I know it's funky, but give me a break).
- The Ted Miguel et al. (2004) JPE paper that uses rainfall shocks to identify the impact of GDP growth on the likelihood of the outbreak of civil war: "The rain song," by Led Zeppelin (an alternative pairing would be "Fool in the rain," also by Led Zeppelin).
Monday, November 14, 2011
Flogging a dead horse
Back to randomizations. For a practicing academic economist, the name of the game is to publish in refereed journals. This means that your work is scrutinized by one, two, or sometimes three referees who usually, in their sarcastic reports, compare your efforts (unfavorably) to those of a chimpanzee equipped with a laptop. And that's when they are being kind.
The purpose of this academic equivalent of Basic Training at Camp Lejeune (of "The few, the proud, the retarded" fame) is to ensure quality control in terms of what gets published and what does not. Most of the time it works fairly well, conditional on the referees being competent: that's a huge assumption, and most of us in the academic arena have been confronted with situations in which the chimpanzee simile should more usefully be applied to the person doing the refereeing. And in this case I am engaging in gross understatement.
Whether the system works or not is of no import here. Much has been written about this and I do not wish to descend into this particular gladiatorial ring, at least for the time being. The question I do want to raise is the following: what happens when you submit a paper to a scholarly journal and that paper is based on a Randomized Control Trial (RCT)? In particular, what happens when, Allah kareem, the referees give you a "revise and resubmit" (affectionately known as R&R amongst the cognoscenti) that asks for revisions to the paper? If the requested changes are aesthetic, no sweat. But what if the legitimate concerns of the referees have to do with your research design? Humor me and let's work through a not-so-far-fetched thought experiment.
Suppose that you have just spent three years and four hundred thousand dollars of World Bank money working on the impact evaluation of a major social program in a developing country, and that the impact evaluation in question is based on an RCT. The referees tell you that your design is flawed. Maybe they are right, and maybe they are wrong. In the good old days, when you actually had to think through your identification strategy instead of blithely confining your statistical savvy to running a dozen power calculations using Optimal Design, there was a glimmer of hope. You could be smart. You could be imaginative. You could (with a good dose of luck and the concomitant IQ) find some way of addressing the referees' concerns by modifying your identification strategy through some econometric tour de force.
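For perspective, the push-button power calculation really is a one-liner (Optimal Design is a point-and-click program; base R will do it too, with purely illustrative numbers below):

## Sample size per arm for a two-sided two-sample t-test detecting an
## effect of 0.2 standard deviations with 80% power: roughly 394 per arm.
power.t.test(delta = 0.2, sd = 1, power = 0.8, sig.level = 0.05)

A dozen of those and your statistical savvy is, apparently, exhausted.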
With an RCT, all of this is moot. Either they buy it or they do not. Your randomization, and the deployment of a multi-million dollar program are done. Even with help from Stephen Hawking, you can't make time run backwards. The bottom line? You are dead in the water.
The response to this might be that you should have thought more carefully about the research design for your RCT in the first place, and consulted the illuminati of the field ex ante. This is a valid point. But the likely consequence of such incentives is that original research designs that do not toe the Randomista Party line to the letter will be condemned to the intellectual dung heap. This is a shame, and does not advance scientific enquiry. Though I doubt that any statistics will ever appear on this issue, my prior is that the refereeing process on such papers will turn into a thumbs-up or thumbs-down game, with little Neros exercising their power of academic life and death even more than they currently do.
There is probably nothing that can be done about this, so it is not worth getting one's knickers in a twist over. I am simply flogging a dead horse. In passing, the eponymous (pace William Safire... this is nonstandard usage) title of this post refers --for those of you who are so uncouth as not to have picked up the reference-- to the Sex Pistols. Now there was a bunch of guys who knew how to randomize with their guitars.
Friday, November 11, 2011
Liberté, égalité, austérité
It has long been apparent that France is something of an outlier when it comes to political debate. The Italian Communist party understood that it had to change way back in the days of Berlinguer (how many western communist parties had a bona fide nobleman as their head?), and the MSI (the heirs of the fascio, whose historic leader Almirante conducted some memorable debates with Berlinguer) did the same under Gianfranco Fini. Even the Greek Communist party adapted (though of course it split, as did the Italian party).
Nothing doing for the French Communist party, which is now as dead (politically) as a dodo. Remember, these are the people who even lagged behind the Soviets in terms of recognizing the nastiness that occurred in the Evil Empire under Uncle Joe. A significant (and vocal) fraction of the current crop of Socialist party cadres in France are the worthy continuators of this suicidal left-wing French tradition. Talk is of class warfare, of taxing the running dogs of capitalism (for example, penalizing companies that actually pay dividends to investors), and of hiring tens of thousands of new civil servants. If only they had confined their rhetoric to Johnny Hallyday's fiscal migration (Hallyday is a rock star and Jerry Lewis is a comedic genius --I know it's an easy cheap shot, but these two facts speak volumes...).
There are, thankfully, exceptions. In a recent TV debate between the Right and the Left (on France 2, if memory serves me correctly), the Socialists were represented by Michel Sapin, their secrétaire national pour l'économie: at times, he sounded as if he were inhabiting a 1970s time warp. And yet, he was actually minister of finance back in the early 90s, so he should have no excuse in terms of a quick reality check. Of course the Right was represented, among other people, by the turncoat minister Eric Besson, so they did not have much mediatic sex-appeal either. Thank God that the other left-winger on the program was François Chérèque, who has been leader of the CFDT trade union for almost ten years. This man exudes raison, but he is woefully alone in the socialist camp. When talking about pension reform, he actually knows the numbers and talks sense, without descending into the current socialist phraseology that would make Bukharin or Preobrazhensky feel at home. If more Chérèques don't appear in the French Socialist camp, Sarkozy will in all likelihood squeeze by in the 2012 presidential election, and the French Left will have no one but itself to blame. Again.
What does this all have to do with economic development? Well, we know from a huge corpus of empirical work on developing countries that political institutions matter in terms of determining who grows and who does not. If the French political system doesn't start to produce some sort of consensus as to how to reform the welfare and healthcare systems and improve incentives for job creation, France will become a "soon to be developing" country. In the meantime, at the very least, it's liberté, égalité, austérité.
Thursday, November 10, 2011
The Talibans of Randomization
Economics --and Development Economics is no exception-- is a field that is as fad-prone as Milan fashion week in the fall. Karl Lagerfeld would definitely feel comfortable in my intellectual tribe if he mastered Pontryagin's Maximum Principle and knew how to bootstrap standard errors.
The current fad in my academic field which is, broadly speaking, the microeconomics of development, is randomization. Often referred to as RCT (Randomized Control Trial), this technique for evaluating the impact of a variety of social programs, ranging from conditional cash transfers (think of Bolsa Escola in Brazil) to HIV/AIDS testing, has achieved a degree of influence that is remarkable. The high priestess of RCT is, of course, Esther Duflo of MIT, though a plethora of other clerics populate the RCT madrassah.
I should be thrilled at this triumph of an MIT faculty member, being an MIT product myself. Thrilled I am not. Don't get me wrong. I am not against RCTs in and of themselves. Some of my best friends do them... I am even running several RCTs myself at this very moment. But for those of us who have spent an inordinate amount of time working in the field attempting to convince policymakers in developing countries to actually evaluate the effectiveness of their programs, RCTs have, to a large extent (at least in terms of the manner in which they are flogged by the Randomistas) not lived up to their promise. Worse, my current prior is that they are rapidly approaching the point where they are counterproductive.
There are probably three reasons for this. First, the Randomistas (this memorable characterization is due to Martin Ravallion of the Research Department of the World Bank), because of the purported purity of their faith, have a great deal of difficulty in actually talking to program initiators. Once again, don't get me wrong. There are lots of great and interesting RCTs going on out there (just take a look at what people such as Dean Karlan of Yale are doing). But many RCTs are bogus in the sense that the NGOs (to take but one example) set up to implement the program being evaluated are under the total control of the randomistas themselves. How many randomistas spend weeks or months in hot steamy developing country ministries actually explaining to the policymakers why they should evaluate their policies, and why an RCT would be a smart manner of doing so? The basic point is that many randomistas are selling RCT, not evaluation per se. And it is evaluation per se, and not the chosen methodology, that is important in moving the policy agenda forward in the developing country context.
Second, RCTs represent the triumph of a technique over actual ideas. Of course, this point is related to the first. One of the legitimate selling points of an RCT is that it allows one to eschew complex econometric procedures and untenable statistical assumptions in favor of a procedure that turns program deployment itself into the evaluation technique. One translates an essentially epidemiological approach into a social science setting: this is great! Economists need a good dose of Lex Parsimoniae. Moreover, contrary to popular opinion, an RCT is often the fairest manner of deploying a program. If not everyone can receive the treatment, what would you prefer? That only villages with connections to the presidential palace get the program money, or that all villages have an equal chance? Point taken. But my basic argument here is that randomistas, because of their fervent adherence to dogma, often engage in practices that are deleterious to the cause of evaluation per se. I once saw a very famous randomista make a brilliant presentation on methodology to a group of developing country program managers. After the presentation, one of the program managers (from Conakry) asked her what she would suggest as an alternative evaluation method, given that an RCT was not possible in the context of the research question that the program wanted answers to. The randomista's answer? Change your research question, of course! This is the tail wagging the dog, or looking for the car keys under the street lamp not because that is where you dropped them but because that is where the light is shining. Can one be any more harmful in terms of getting decisionmakers in developing countries to adopt evidence-based policymaking? I rest my case.
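Incidentally, the "equal chance" deployment is itself a one-liner in R (village names hypothetical, seed fixed so that the draw can be audited):

set.seed(20111110)                                 # fixed seed: auditable, replicable
villages <- sprintf("village_%03d", 1:100)         # hypothetical village list
treated  <- sample(villages, length(villages) / 2) # every village has an equal chance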
Third, randomistas often characterize what they do as being the "Gold Standard" of evaluation. Apart from the methodological idiocy of such a statement (more on this in a later post...), just think about it. According to Peter Temin of MIT (or, if you want the movie version, Liaquat Ahamed's wonderfully readable and Pulitzer Prize-winning Lords of Finance) the Gold Standard was one of the main causes of the Great Depression.
Some selling point.