I don’t think anyone was surprised by this year’s “Nobel” prize in economics, which went to three American-based specialists in the design of on-the-ground experiments in low-income countries: Abhijit Banerjee, Esther Duflo and Michael Kremer. I think the award has merit, but it is important to keep in mind the severe limitations of the work being honored.
The context for this year’s prize is the long, mostly frustrating history of anti-poverty projects in the field of development economics. Much of the world, for reasons I’ll put to the side for now, is awash in poverty: billions of people lack access to decent sanitation, medical care, education, and physical and legal protection, all while struggling to put food on the table, keep a roof over their heads, and cope with increasing demands for mobility. A lot of money has been spent by aid organizations over the years to alleviate these conditions, without nearly enough to show for it. (My specialty, incidentally, has been child labor, which has been the focus of a large piece of this work.)
There have been various reactions to the lack of progress. One has been to argue that the effort has been too weak—that we need more money and ambition to turn the corner. This is the position of Jeffrey Sachs, for instance. Another is that the whole enterprise is misbegotten, a relic of colonialism that was always destined to fail. You can get this in either a right wing (William Easterly) or left wing (Arturo Escobar) version. (I critiqued the “left” stance on child labor here.) A third is where this Nobel comes in.
Maybe the reason development projects weren’t working was that they had never been properly tested before widespread adoption. Societies and the people in them are complicated, and ideas that make sense in the abstract often fail in practice. So really test them: set up controlled experiments whose design ensures that measured outcomes reflect causal mechanisms. A common element of these designs is the randomization of treatment, which prevents confounding differences between who ends up in a program and who ends up in the control group—hence the term “randomistas”.
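The logic of randomization can be shown in a few lines of simulation. This is a hedged illustration, not anything from the laureates’ studies: the population, the “wealth” confounder, the outcome equation and the effect size of 5 points are all invented for the sketch. It compares a naive comparison of self-selected enrollees against a randomized assignment, using only the Python standard library.

```python
import random
import statistics

random.seed(0)

# Hypothetical population: a latent "wealth" score that influences both
# who enrolls in a program and the outcome (say, a test score).
population = [random.gauss(0, 1) for _ in range(10_000)]

TRUE_EFFECT = 5.0  # assumed causal effect, for illustration only

def outcome(wealth, treated):
    # Outcome = baseline + confounder + treatment effect + noise.
    return 50 + 10 * wealth + (TRUE_EFFECT if treated else 0) + random.gauss(0, 2)

def diff_in_means(assignment):
    # assignment: list of (wealth, treated) pairs.
    treated = [outcome(w, True) for w, t in assignment if t]
    control = [outcome(w, False) for w, t in assignment if not t]
    return statistics.mean(treated) - statistics.mean(control)

# Self-selection: wealthier households are likelier to enroll, so the
# naive comparison bundles the wealth gap into the "program effect".
naive_diff = diff_in_means([(w, w > 0) for w in population])

# Randomization: treatment is independent of wealth, so the simple
# difference in means recovers something close to the true effect.
rct_diff = diff_in_means([(w, random.random() < 0.5) for w in population])

print(f"naive estimate: {naive_diff:.1f}, randomized estimate: {rct_diff:.1f}")
```

In this setup the naive estimate lands far above 5 (it absorbs the wealth gap between enrollees and non-enrollees), while the randomized estimate sits close to the assumed effect.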
Without question, the experimental approach has produced genuine insights. We have a much better sense, for instance, of the role played by institutional malfeasance in places like schools and hospitals: teachers who don’t teach, medical practitioners who don’t show up or don’t follow protocols. Just throwing money at organizations without reforming them is a dead end. In fact, implementing programs in ways that enhance their experimental value is central to the concept of adaptive management; it should be standard practice everywhere.
All the same, there are serious limitations to a strategy centered on experimental design. Here are a few:
1. Good experimental design yields internal validity: the measurements actually measure what they’re supposed to, and confounding influences are suppressed. External validity, the extent to which results can be generalized to a wider array of situations beyond the confines of the experiment, is a different matter. Two aspects of experimentalism raise questions on this front: the tendency for experiments to be small, local and time-bound (like a set of schools in one state in India in the mid-2000s), and the effects of experimental control itself, when a sort of artificiality creeps in. I’m familiar with the literature on experimentally designed conditional income transfers, for instance, where every new study, with a new country, time period or set of design tweaks, seems to alter the bottom line of what works and how.
2. The strategy of experimental design virtually requires a reductionist, small-bore approach to social change. A more sweeping, structural approach to poverty and inequality introduces too many variables and defeats experimental control. Thus, without any explicit ideological justification, we end up with incremental reformism when the entire social configuration may be the true culprit.
3. Carefully controlled social experiments can be very expensive! When I read the work of the prize-winners and their coauthors, I often find myself wondering how much it cost to do this research, and who paid for it. This is a form of Big Science, and it requires big support. That in turn lends power to the funding institutions, which can decide which problems and potential solutions deserve attention. In addition, on-the-ground experiments depend on participation from the institutions being studied. There is a tendency for randomista work to challenge the people on the lower rungs of the hierarchy, like the teachers and nurses mentioned above, and leave their bosses—not to mention the elites at the top—unexamined.
On balance, I think it’s fine that this prize honors experimentalism, but we shouldn’t lose sight of the larger picture. Using experimental methods to incorporate more learning into program administration should be standard practice; perhaps some day it will be. But the big problems in poverty and oppression are too complex and encompassing to be reduced to experimental bits, and there is no substitute for theoretical analysis and a willingness to take chances with large-scale collective action.
Good blog, Angry Bear.
I’m a K-12 practitioner. Have done a bit of the work tested by Kremer, Fryer, Angrist, et al, over the years.
Twenty years ago I was much more bullish on the RCT. But as you point out in #1, the external validity turned out to be more problematic than I’d expected.
Still, I would quibble with #2. I agree that RCTs should not constrain development of Big Ideas. But familiarity with specific RCT successes and failures could contribute to Better Big Ideas.
Welcome to Angry Bear. First time commenters always go to moderation to weed out spam, spammers and advertising.