A few weeks ago, the Independent Commission for Aid Impact (ICAI) launched an open consultation “to understand which areas of UK overseas aid … the Commission should report on in its first three years”.
The ICAI is brand new, set up by the UK coalition government to increase scrutiny of DFID. It is independent from DFID and aims to carry out “20 evaluations, reviews and investigations” of aid per year. Reports will be published and sent to the select committee in parliament as well as DFID.
A small group of commissioners has just been appointed. Some commentators have noted that while they are highly experienced professionals, they have limited experience of evaluating aid and of the continuing debates about how to make evaluations most useful. The ICAI’s consultation runs until 7th April 2011.
How about this five-point submission? It’s built on the agenda and idea of performance set out in the rest of this website. It’s more about the ‘how’ than the ‘what’ of evaluation. I’d be very grateful for any suggestions for improvements.
PS See Rick Davies’ excellent blog on the consultation process and his submission.
1. Collect monitoring data first
Evaluations should be built on existing monitoring data, where it is available. Where it is not available, the first priority should be to build up monitoring data, in close dialogue with programme managers and local collaborators, including intended beneficiaries. The data should be useful for managers, helping them run high quality programmes that respond flexibly to local partners’ priorities. In order to improve the impact of aid, managers need to be continually learning how to improve their work, in dialogue with local collaborators. This is more important, and more developmental, than using evaluations to try to plan perfect policies and programmes in advance.
2. Collect feedback from collaborators and intended beneficiaries
Evaluation and monitoring systems should systematically collect feedback from local collaborators and intended beneficiaries, including their views of how well programmes are working for them. Examples show how this data can be quantified and benchmarked. The approach is in line with core development principles of helping people gain more power over the factors that influence their lives – including aid projects. It can bring to the surface the different views of different social groups, like men and women. It generates similar information to customer feedback in the commercial world. Amazingly, local people’s views are not often systematically monitored in development projects. Every evaluation and monitoring system should include them, unless there are specific reasons not to.
3. Do not assume that ‘what works’ in one place will work in another
‘What works’ in one place at one time will not necessarily work in other places at other times. Development assistance is not analogous to medical science. While human bodies share most characteristics with each other, the contexts in which development happens are all different and change over time. So randomised controlled trials are not always the most appropriate method for generating useful knowledge from programmes. Evaluation and monitoring systems should use a range of methods that are accessible and useful for the people who will use the data they produce – in particular, local programme managers.
4. Measure progress from the point of view of local people
Evaluation and monitoring data should consider whether development outcomes are improving from local people’s perspectives. This is different from asking whether a specific programme is achieving its goals. The former approach keeps agencies focused on the local people who are meant to benefit. It also encourages collaboration between development agencies, which is crucial for impact. The latter approach encourages an ‘agency-centred’ view which tends to fuel competition between agencies. It risks encouraging the back-to-front question: “how well are local people helping us meet our objectives?” rather than: “how well are we helping local people meet their objectives?”
5. Focus on value added, not just impact
Evaluation and monitoring systems should generate data on how well a development agency is contributing to people’s own efforts to improve their lives and societies. They should show how much value an agency is adding to the efforts of its collaborators. They should also generate data on longer-term impact and development outcomes. By focusing on agencies’ “value added”, measurement systems can shine a spotlight on how well agencies are doing their job, which they do have direct control over, rather than only on longer-term changes, which they do not have control over.