Guest blog: Robert Picciotto on randomised control trials

My recent blog on randomised control trials led to enthusiastic comments about Robert Picciotto’s recent paper: Experimentalism and development evaluation: Will the bubble burst?

I am delighted that Robert agreed to explain the main ideas in this guest blog. Robert (“Bob”) was previously Director General, Evaluation, at the World Bank and is now a Visiting Professor at King’s College, London.

Robert Picciotto

Probing the paradox of the RCT craze in international development

The growing popularity of randomised control trials (RCTs) in the international development domain is not accidental. It reflects tensions within an economics profession humbled by the failure of standard development recipes. It is also the result of a well-funded campaign aimed at raising the bar in development evaluation quality that has unfortunately backed the wrong horse.

The “randomistas” visualize a new age of scientific progress in development economics. They point to the success of experimental methods in the medical establishment. Many of them are micro-economists intent on unseating macro-economists from the commanding heights of development theory. They have found a willing audience among politicians and philanthropists: their random trials evoke rigour and objectivity relative to the self-serving assessments all too often generated by internal evaluation units that lack independence.

Yet the evaluation community has learnt the hard way that experimental methods have a limited role. While RCTs are designed to assess attribution, i.e. to address the “does it work?” question, they are appropriate only where the intervention being evaluated is stable and relatively simple, and where it produces effects that are quick and large relative to other potential influences.

The paradox of the RCT craze is especially pronounced in the development business, since development interventions take place in volatile and complex environments and successful interventions tend to be tailor-made, adaptable and flexible. The stark reality is that most development programs are not amenable to experimental treatment. Biomedical clinical trial procedures cannot be replicated in the economic and social domain, where reflexivity is the norm and feedback loops are legion: administering a pill is different from administering a social programme.

Even where RCTs are feasible they tend to be expensive. They require scarce skills. Their statistical requirements are demanding and they often face ethical constraints. Finally, they do not enhance accountability since they do not tackle the “why, who and so what” questions. Nor are they designed to assign responsibility for the success or failure of a development intervention to individual partners. Furthermore, the insidious capture of medical research by vested interests demonstrates that threats to evaluation validity originate in lack of independence more than in methodological sloppiness.

The assault on non-experimental methods in development evaluation is eerily reminiscent of the “paradigm wars” that raged in the United States decades ago. Given that RCT proponents in development are unaware of this history, they are condemning the development evaluation community to repeat it. But there is light at the end of the tunnel. Methodological dogmas feeding simplistic development narratives have begun to fade away. Rather than being satisfied by a methodological silver bullet, the widespread public yearning for social accountability will be met when development evaluations are fully independent and equipped with the full panoply of evaluation tools. Thus, when the dust settles and the doctrinal fires stop burning, mixed methods will emerge as the solution of choice. Experimental and quasi-experimental methods will only be attempted where feasible and appropriate, i.e. in relatively few cases.

5 Responses

  1. Great to see these thoughts. We all agree that aid can be more effective and that well-formed questions and well-executed, applied research can offer many relevant clues about this. We all want to see deeper thinking behind the doing.
    But I am, like many others, worried about the implications of donors moving towards making RCTs yet another conditionality of aid, more food to satisfy their seemingly insatiable appetite for evidence. Here’s a post on my advice for donors on supporting and using RCTs:

  2. RE: How-matters — if people insist on making RCTs a pre-condition for aid, how about something simpler: local endorsements as THE prerequisite for local implementation?

    Example: Peace Corps SPA funds, which require a min. 25% community contribution for funding and WORK.

  3. Reblogged this on on think tanks and commented:
    Robert Picciotto on the RCT craze provides an excellent sense of history and a measure of what they can and cannot do.

    One thing though: not all those promoting them are micro-economists. Most, in fact, have little or no experience in quantitative research. A dangerous combination: hype and ignorance.

  4. The post is nice. He gives the idea of a new age of scientific progress in development economics. In a word, the post is very interesting.

  5. […] Picciotto gave a masterful critique of the limitations of RCTs on this blog. But they remain influential with policy makers. So, at least for the moment, Whitty […]
