My recent blog on randomised control trials led to enthusiastic comments about Robert Picciotto’s recent paper: Experimentalism and development evaluation: Will the bubble burst?.
I am delighted that Robert agreed to explain the main ideas in this guest blog. Robert (“Bob”) was previously Director-General, Evaluation, at the World Bank and is now a Visiting Professor at King’s College London.
Probing the paradox of the RCT craze in international development
The growing popularity of randomised control trials (RCTs) in the international development domain is not accidental. It reflects tensions within an economics profession humbled by the failure of standard development recipes. It is also the result of a well-funded campaign aimed at raising the bar of development evaluation quality — a campaign that has unfortunately backed the wrong horse.
The “randomistas” envision a new age of scientific progress in development economics. They point to the success of experimental methods in the medical establishment. Many of them are microeconomists intent on unseating macroeconomists from the commanding heights of development theory. They have found a willing audience among politicians and philanthropists: their randomised trials evoke rigour and objectivity, in contrast to the self-serving assessments all too often generated by internal evaluation units that lack independence.
Yet the evaluation community has learnt the hard way that experimental methods have a limited role. While RCTs are expected to assess attribution, i.e. to address the “does it work?” question, they are appropriate only where the intervention being evaluated is stable and relatively simple, and where it produces relatively quick and large effects compared with other potential influences.
The paradox of the RCT craze is especially pronounced in the development business, since development interventions take place in volatile and complex environments, and successful interventions tend to be tailor-made, adaptable and flexible. The stark reality is that most development programmes are not amenable to experimental treatment. Biomedical clinical trial procedures cannot be replicated in the economic and social domain, where reflexivity is the norm and feedback loops are legion: administering a pill is different from administering a social programme.
Even where RCTs are feasible, they tend to be expensive. They require scarce skills. Their statistical requirements are demanding, and they often face ethical constraints. Moreover, they do not enhance accountability, since they do not tackle the “why, who and so what” questions, nor are they designed to assign responsibility for the success or failure of a development intervention to individual partners. Finally, the insidious capture of medical research by vested interests demonstrates that threats to evaluation validity originate in lack of independence more than in methodological sloppiness.
The assault on non-experimental methods in development evaluation is eerily reminiscent of the “paradigm wars” that raged in the United States decades ago. Because RCT proponents in development are unaware of this history, they are condemning the development evaluation community to repeat it. But there is light at the end of the tunnel. Methodological dogmas feeding simplistic development narratives have begun to fade away. Rather than being satisfied by a methodological silver bullet, the widespread public yearning for social accountability will be sated only when development evaluations are fully independent and equipped with the full panoply of evaluation tools. Thus, when the dust settles and the doctrinal fires stop burning, mixed methods will emerge as the solution of choice, and experimental and quasi-experimental methods will be attempted only where feasible and appropriate, i.e. in relatively few cases.