Misusing randomised controlled trials

Building schools in Burkina Faso

I just reviewed two impact evaluations of education projects in West Africa. Both were required by the same major donor. Both were carried out by the same high-end US consultancy. Both used what they call the ‘gold standard’ of randomised controlled trials (RCTs).

Both seem to have been a shocking waste of time and money – because they used the wrong tools for the job. 

The evaluations were carefully planned, requiring a lot of money and local people’s time (e.g. just one of them surveyed 8,790 households and tested 21,730 children).

They conclude that one project had a big impact on primary school enrolment and test scores in Burkina Faso, because it built schools where there previously weren’t any. The other didn’t, because the project was stopped early and there were already schools in most of the villages where it operated in Niger.

So tell us something new! The reports don’t contribute new knowledge that managers or policy makers can use in the future. They have precisely no policy implications beyond this: it is more useful to build schools in places where there aren’t any already.

The reports mention some important development issues, like the Niger government’s decisions on education policy, and the adult literacy and community mobilisation efforts in Burkina Faso. But they don’t explore or discuss them, because these issues fall outside the RCT boundaries.

Nor do the evaluations help people on the ground learn from what they have done (e.g. school principals, government officials, parents, children or NGO staff).

These exercises may be politically useful for the donor, to demonstrate ‘accountability’ and report back to its own donors. But I doubt the donor can learn anything of practical use from the reports, or that the reports will influence its future actions.

I’m a fan of RCTs for the specific purpose of generating knowledge on policy-relevant questions. I really am! But this was never the intention here. To my mind, these reports are good examples of the misuse of RCTs, wasting a lot of people’s time and money that could and should have been spent better.

An alternative, participatory approach could have been much more useful for everyone involved: generating the same findings for external reporting (more cheaply), focusing on practical issues at different levels, and helping the people working on education beyond the lives of these projects (especially school principals, teachers and children) improve what they do.

11 Responses

  1. Thanks for sharing this experience, Alex – really useful and honest. It made me think of a meeting yesterday with Rob Lloyd at Bond, together with some other UK INGO staff. We were discussing how to use some identified principles of good measurement to assess whether an M&E process was robust. The key issues that came up (at least for me) were ‘appropriate’ and ‘justified’. It was agreed that there are some key aspects to good measurement, but that indiscriminately applying a ‘gold standard’ or ‘organisation favourite’ method was poor practice; the key was to identify the appropriate approach and then justify that method – be it RCT, observation, stakeholder-led, small or large sample, etc. As with many practices in development, there are many good approaches being identified, which is great, but how these are then discerningly applied is an equally important part of the puzzle.

  2. Let’s face it, RCTs are subject to the same risks and limitations as most other – dare I say all – tools which are employed to “properly” evaluate the impact of any development project. The list of these limitations is daunting, but one which is common is that the majority of them lack context and hence have little value in terms of policy recommendations. The few organizations which have begun to more actively evaluate the long-term impact of their programs have recognized this limitation and include metadata, albeit minimal, along with their assessments. However, it is indisputable that RCTs are heavily resource-dependent and typically provide a mere “snapshot in time” of what is occurring in whichever setting they are used. From a cost-benefit perspective, their value is questionable in many circumstances.

    As with any RCT used in an applied science field, they are highly susceptible to the influence of externalities. Unlike the use of RCTs in pure science fields (e.g. biomedical), environmental/social/economic contexts can have a tremendous influence on their findings, since such contexts are highly volatile and lack any constants. Additionally, RCTs are used by humans and on human conditions, and are therefore subject to human error. The effect of the human element can be illustrated by asking “Exactly how random is your RCT?”. For obvious reasons Alex cannot specify which agency was contracted to perform these RCTs, but we should not be naïve and assume that everything was conducted objectively and without worry of possibly exposing negative aspects of the contracting organization. Call me cynical, but I studied political science and international development and understand incentives.

    Nothing of what I am saying is novel to anyone in this field; it is just a reminder that the devil is always in the details. As with any tool used in the NGO field, it is important that we all recognize their limitations and make no decisions based solely on the report of one evaluation, indicator or set of indicators. As for waste, well… “efficiency” is sadly not a word often used to describe our sector. I learned this when I witnessed an organization in Northern Uganda receive 55,000 USD to study why women could not spend more time with their children. Hopefully these RCTs will provide some hard data for an evidence-based approach to the bigger issues which they identified, but maybe now I am being too optimistic.

    My final addition to this post is an article I read a few months ago which may be of interest to the audience of this topic: http://financialaccess.org/node/3795

  3. In a development discourse that is still ruled by academic economists, I respect those doing sound work with RCTs, such as Innovations for Poverty Action, which attempts to bridge this knowledge base with aid practitioners’ experience. We all agree that aid can be more effective and that well-formed questions and well-executed, applied research can offer many relevant clues about this. We all want to see deeper thinking behind the doing.
    Where I differ, however, is on some fundamental beliefs about what prevents this and what ails the aid industry overall. Is it a lack of information about “what works”? Or is it a lack of respect for local initiatives and understanding about complex power dynamics that impede authentic relationships among development partners? And if it’s the latter, are RCTs just a band-aid on a deeper issue?
    Here’s my advice for donors about using RCTs as measurement so we can avoid more situations like these: http://www.how-matters.org/2011/05/25/rcts-how-matters-advice-for-donors/

  4. Very interesting and sensible post. Do you mind sharing what RCT evaluations you’re referencing?

  5. Thanks, Alex, for sharing that “real-world” example of misuse of RCTs! In my view, those who promote the use of RCTs “no matter what the evaluand” suffer from myopia. As in your school-building example, some of the questions being asked could be answered through common sense! In any case, this points to the need for a wide-angle lens to determine what’s needed, before defaulting to a ‘microscope’ to measure what (supposedly) can be measured but misses the bigger point. As you’ve mentioned, much money is being wasted on such simplistic and irrelevant “experimental” research projects. Money that would be much better invested in development itself!

  6. RCT is not really a gold standard in the realm of social development. If it is applied in such an environment (which is often fluid), it sometimes leads to abuse, as JR rightly points out. Social development programs perhaps require designs and methodologies which are properly anchored in the context of the program implementation. Whereas RCTs can be useful for piloting innovations in a highly controlled, laboratory-like environment, social development programs require real-world evaluation designs and tools. In the real world, the context is always dynamic, perhaps requiring an adaptive evaluation approach so that we learn what works and what does not. This will not only help program implementers increase the possibility of program success but will also help in testing and fine-tuning the development hypothesis/ToC. Every program will indeed require a different evaluation approach, as dictated by the context and the information needs of the program stakeholders. Evaluators therefore need to be reflective and adaptive. In the real world, there is really no gold standard, as evaluation will most often have to fit the context and respond to the needs of the stakeholders.

    • Very relevant to this discussion — just this morning I read Professor Robert Picciotto’s EXCELLENT article, “Experimentalism and development evaluation: Will the bubble burst?”, in the current edition of “Evaluation”, the professional journal of the EES. An e-version is available at http://evi.sagepub.com/. I HIGHLY

  7. … recommend it to all of you reading this blog!

  8. I have also been reading Robert Picciotto’s excellent article. Thanks to Jim for sharing. It’s very insightful.

  9. […] blog on randomised controlled trials led to enthusiastic comments about Robert Picciotto’s recent paper: Experimentalism and […]

  10. […] When ‘impact’ means changes in poor people’s lives, then very many other factors, outside NGOs, influence it. So impact evaluations are expensive and tend to be distant from the issues that project managers face. They seldom improve real-time decision making; though they can inform future policies, if that’s what they’re designed to do. […]
