I just reviewed two impact evaluations of education projects in West Africa. Both were required by the same major donor. Both were carried out by the same high-end US consultancy. Both used what they call the ‘gold standard’ of randomised controlled trials (RCTs).
Both seem to have been a shocking waste of time and money – because they used the wrong tools for the job.
The evaluations were carefully planned, requiring a lot of money and local people’s time (e.g. just one of them surveyed 8,790 households and tested 21,730 children).
They conclude that one project had a big impact on primary school enrolment and test scores in Burkina Faso, because it built schools where there previously weren’t any. The other had no impact, because the project was stopped early and there were already schools in most of the Niger villages where it operated.
So tell us something new! The reports don’t contribute new knowledge for managers or policy makers in the future. They have precisely no policy implications beyond: it is more useful to build schools in places where there aren’t any already.
The reports mention some important development issues, like the Niger government’s decisions on education provision, and adult literacy & community mobilisation efforts in Burkina Faso. But they don’t explore or discuss them, because these issues fall outside the boundaries of the RCTs.
Nor do the evaluations help people on the ground learn from what they have done (e.g. school principals, government officials, parents, children or NGO staff).
These exercises may be politically useful for the donor, to demonstrate ‘accountability’ and report back to its own funders. But I doubt the donor can learn anything of practical use from the reports, or that the reports will influence its future actions.
I’m a fan of RCTs for the specific purpose of generating knowledge on policy relevant questions. I really am! But this was never the intention here. To my mind, these reports are good examples of the misuse of RCTs, wasting a lot of people’s time and money that could and should have been spent better.
An alternative, participatory approach could have been much more useful for everyone involved: generating the same findings for external reporting (more cheaply), focusing on practical issues at different levels, and helping the people working on education beyond the lives of these projects (especially school principals, teachers and children) improve what they do.