Advice for the new aid watchdog

A few weeks ago, the Independent Commission for Aid Impact (ICAI) launched an open consultation “to understand which areas of UK overseas aid … the Commission should report on in its first three years”.

The ICAI is brand new, set up by the UK coalition government to increase scrutiny of DFID. It is independent from DFID and aims to carry out “20 evaluations, reviews and investigations” of aid per year. Reports will be published and sent to the select committee in parliament as well as DFID.

A small group of commissioners has just been appointed. Some commentators have noted that while they are highly experienced professionals, they have limited experience of evaluating aid and of the ongoing debates about how to make evaluations most useful. The ICAI’s consultation runs until 7th April 2011.

How about this five-point submission? It’s built on the agenda and idea of performance set out in the rest of this website. It’s more about the ‘how’ than the ‘what’ of evaluation. I’d be very grateful for any suggestions for improvements.

PS See Rick Davies’ excellent blog on the consultation process and his submission.


1. Collect monitoring data first

Evaluations should be built on existing monitoring data, where it is available. Where it is not available, the first priority should be to build up monitoring data, in close dialogue with programme managers and local collaborators, including intended beneficiaries. The data should be useful for managers, helping them run high quality programmes that respond flexibly to local partners’ priorities. In order to improve the impact of aid, managers need to be continually learning how to improve their work, in dialogue with local collaborators. This is more important, and more developmental, than using evaluations to try to plan perfect policies and programmes in advance.

2. Collect feedback from collaborators and intended beneficiaries

Evaluation and monitoring systems should systematically collect feedback from local collaborators and intended beneficiaries, including their views of how well programmes are working for them. Examples show how this data can be quantified and benchmarked. The approach is in line with core development principles of helping people gain more power over the factors that influence their lives – including aid projects. It can bring to the surface the different views of different social groups, like men and women. It generates similar information to customer feedback in the commercial world. Amazingly, local people’s views are not often systematically monitored in development projects. Every evaluation and monitoring system should include them, unless there are specific reasons not to.

3. Do not assume that ‘what works’ in one place will work in another

‘What works’ in one place at one time will not necessarily work in other places at other times. Development assistance is not analogous to medical science. While human bodies share most characteristics with each other, the contexts that development happens in are all different and change over time. So randomised controlled trials are not always the most appropriate method for generating useful knowledge from programmes. Evaluation and monitoring systems should use a range of methods that are accessible and useful for the people who will use the data they produce – in particular, local programme managers.

4. Measure progress from the point of view of local people

Evaluation and monitoring data should consider whether development outcomes are improving from local people’s perspectives. This is different to asking whether a specific programme is achieving its goals. The former approach keeps agencies focused on the local people who are meant to benefit. It also encourages collaboration between development agencies, which is crucial for impact. The latter approach encourages an ‘agency-centred’ view which tends to fuel competition between agencies. It risks encouraging the back-to-front question: “how well are local people helping us meet our objectives” rather than: “how well are we helping local people meet their objectives”.

5. Focus on value added, not just impact

Evaluation and monitoring systems should generate data on how well a development agency is contributing to people’s own efforts to improve their lives and societies. They should show how much value an agency is adding to other collaborators. They should also generate data on longer term impact and development outcomes. By focusing on agencies’ “value added”, measurement systems can shine a spotlight on how well agencies are doing their job, which they do have direct control over, rather than only on longer term changes, which they do not have control over.

9 Responses

  1. Hi Alex
    1. This advice submission is a good move. As far as I can see the ICAI’s idea of consultation only extends as far as an online survey. So the first challenge will be to get your view above noticed by the Commissioners. Will you be emailing them a copy (or links to same)?
    2. You have inspired me to make a similar effort, on issues that concern me.
    3. Re your point 5, I think how to evaluate “value added” is a challenge and one worth writing more on, because it overlaps with the heavy emphasis on Value for Money in the ToRs for the contracted evaluators. There needs to be some serious and critically constructive thinking done in this area.

  2. Dear Alex,

    Special pleading time. There is a handful of organizations, such as Mango, People In Aid and RedR, that deliver our services to agencies and their staff, so the direct beneficiaries are not ‘the local people’ in the sense I think you imply. It is unlikely that local people will be able to tell the difference that better financial skills, HR policies or management and technical skills make to an agency’s delivery, but they truly do add value.

    How can we factor this in?


    Martin McCann

  3. Hi Alex, same point, they don’t seem to be asking for submissions. We put this short para in the comments section at the end of the survey:

    A very significant way to increase the value for money of UK aid is through increasing the local content of development programmes, particularly in comparison to channelling aid through multilaterals. Locally led programmes are likely to be more cost-effective. Our research suggests locally led disarmament, demobilisation and reintegration programmes delivered by local organisations cost one tenth of those delivered by multilaterals, and need not be smaller in scale. Hence, in the specific field of conflict resolution, channelling 10% of funding to locally led strategies would effectively almost double the programme’s output. Locally led programmes are also more likely to be sustainable and fit for purpose, as well as providing a multiplier effect through channelling income into the local area.

    Incidentally, the Multilateral Aid Review, which gives grades to different multilaterals, does not have a single criterion about supporting local efforts to make progress on development…

  4. Hi Alex,

    While I support your arguments and five main points, the fact that evaluations are often contracted out through a tender procedure easily undermines whatever is proposed to improve evaluations. NGOs are no longer an exception in this regard.
    The entire tender procedure has turned into an unfair playing field. The right candidates are hardly ever selected; there is so much going on under the table!
    Sorry, just a quick remark.


  5. Alex

    Thanks for taking the initiative here – the approach ICAI takes to evaluation will shape DfID projects for the next decade, so it is really important to get this right. As you say, the Commissioners appointed so far have lots of professional experience, but little knowledge of development evaluation, and the backgrounds of the Commissioners may tempt them towards an approach driven by preventing fraud/corruption and/or assessing value for money, rather than a developmental approach.

    In this context, your proposed response makes some very good points. The point about regularly surveying target beneficiaries as a core part of evaluations (where relevant) is critical, as is the need for flexibility in the approach used – randomised controlled trials are often not the answer!

    This is a great opportunity for a fresh look at DfID’s whole approach to evaluations. The current process of tendering evaluations, while meeting good governance requirements within the UK, perpetuates a system where evaluations are too often expensively commissioned to Northern agencies, with the expertise and resources to submit bids that tick all the boxes.

    A more developmental approach is needed – looking at how we can promote and support the capacity of local and national partners to carry out and use their own M&E, and developing national capacity in developing countries. This will in my view need a much simpler approach to evaluation than the technically complex approaches being promoted by 3ie and others currently.

    Ken Caldwell

  6. Alex, a good start. There is also a more profound issue behind this, which relates to a push to see development in very mechanistic, positivist terms. The MDGs have done no favours in this regard. If simplistic indicators are pushed as “development”, then I suppose we shouldn’t be surprised by an M&E solution which prioritises short-term “results” at the output level (numbers in school, with access to water, etc.). But the major challenges facing most poor people are not so easily categorised. As more countries hit middle-income status without necessarily resolving issues of inequality of power, access and resources, we are in danger of evaluating a series of indicators which seem increasingly less relevant to the real issues of longer-term structural poverty. To “play the game” and enter this “results” debate is to help depoliticise development, its underlying structural challenges and possible solutions. I would have thought that the current focus on the Middle East is a good example of countries failing to come to grips with structural inequality despite being resource rich.

    Your 5 points are a good start though, but the commission as constituted is in danger of being irrelevant almost from its inception, as increasingly is British aid, except in the field of what people euphemistically call social protection (welfare) as a means to reduce conflict and possible security threats.

  7. Quick update. I met Graham Ward, the Chief Commissioner of the ICAI, and his colleague Clare Robathan yesterday at an excellent event organised by Mango. They said:

    – They are in learning mode and very keen to hear comments about ‘how’ they should commission evaluations as well as ‘what’ they should evaluate.

    – There will be other opportunities for sector experts to engage with the ICAI, beyond the current public consultation.

    – They expect to announce who has won the four-year contract to deliver evaluations by the end of May.

    – Graham Ward specifically underlined the importance of listening to the voice of recipients as central to ICAI’s approach.

  8. Great start Alex, I think developmental evaluation should be an overriding principle. These are a few quick points:

    1) I think interest in RCTs will wane. Besides ethical arguments and low external validity, economists don’t like them as they are atheoretical. There have been several interesting critiques of RCTs by senior Bank officials recently, and DFID’s terms of reference for a review of impact assessment approaches note that RCTs can’t be used for a large part of its portfolio.

    2) Your point about value added is good and speaks to issues of attribution, which I think should be resisted on ethical grounds. (I notice DFID’s standard indicators suggest that they expect to be able to attribute at output level and not outcome, though I don’t know when they were proposed.) Christian Aid’s new M&E system is looking to capture data on its leverage – the particular role or value added it plays in change processes vis-à-vis other actors. I would make that a minimum requirement of any evaluation system.

    3) Christian Aid and ActionAid also seek to describe changes in power relations as an essential metric. I would propose that changes in power relations and equity are an essential aspect of all evaluation, and this begins to speak to some of Brian’s concerns. I would ask that every programme be informed by a power analysis and be assessed on what it aims to do/has done to contribute to greater equity and shifts in structural power relations.

    4) VFM metrics and sustainability: Somewhat related, perhaps there are ways to try to make the results debate more political. I have just been looking at some of the VFM approaches and metrics used by DFID country programmes for technocratic neoliberal service delivery projects with the kind of results Brian mentions. Some do not appear to consider sustainability of outputs or outcomes, which makes some VFM metrics fairly meaningless. So my next proposal would be that VFM metrics should not be ‘off the shelf’; they need to be contingent and carefully considered for each particular project and programme context being evaluated. Somehow they should include sustainability criteria, to ensure that a low unit cost for a health programme providing drugs for three years doesn’t mask the possible negative impacts of raising citizens’ short-term expectations and then taking the drugs away. I think a case could be made that the VFM of short-term, supply-driven service delivery approaches could be enhanced if they are implemented within programmes that include components to empower citizens to demand more of states in the longer term. Obviously we need evidence that this can work. I think ActionAid is trying to develop a VFM approach to illustrate that a rights-based approach is more cost-effective than direct service delivery, and has done so in India? Perhaps it would be worth trying to find out more about that?

    5) Perhaps complexity thinking could provide a framework for informing evaluation and VFM approaches. This would enable the identification of simple aspects of programme systems that perhaps can be assessed using reductionist cause-and-effect relationships, though I agree these should always be informed by ‘beneficiaries’. But a complexity framework could also illustrate areas where it makes no sense to take an RBM approach, and thus inform other more creative developmental approaches that would ultimately prove better value for money.

    6) Development of a framework to assess the VFM of various evaluation approaches that speaks to the issues you raise and some points above. I certainly think all proposed evaluation methodologies should be scrutinised to see if they make any sense or add value to those involved in implementing and receiving aid programmes.

  9. Is anyone else concerned that KPMG is one of the contractors in the ‘partnership’ while at the same time managing rather large DFID programmes (e.g. the Governance and Transparency Fund – about £100 million)?
