Hats off to Oxfam. But are they asking the right question?

Oxfam GB has just published their first project effectiveness reviews.

Impressively, they’re available online, telling an unvarnished story of what Oxfam achieved in 26 projects, along with the problems they’ve faced. This is great transparency. It’s also great for other NGOs to learn from their experience. So hats off to Oxfam!

Oxfam’s Effectiveness Reviews

Karl Hughes, Oxfam’s Programme Effectiveness Team Lead, recently explained their approach and what they found.

They randomly selected around 30 projects from a global portfolio of 1,200 across 55 countries. Then they carried out ‘relatively rigorous’ impact evaluations at a cost of around £10,000 each plus staff time.

Karl makes some comments about carrying out the evaluations:

  • They faced inevitable constraints of time and money. In some cases, this compromised the quality of the evaluations.
  • They tackled some hard-to-measure variables, like ‘resilience’.
  • In humanitarian responses, established quality standards were very useful.
  • The sample of projects is too small to draw overall conclusions about Oxfam’s effectiveness.

What They Found

Karl also comments on the findings:

  • Overall results are mixed. “For most projects, there is evidence of impact for some measures but none for others.”
  • Some projects have been highly effective, like a project in Pakistan that provided 48 hours’ advance warning of floods.
  • Others have been “disappointing”.
[Photo: Karl Hughes in a field]

Karl is impressed that managers are taking the reviews seriously, and committing themselves to acting on their results. I was impressed that their management responses are also published online.

Finally, he comments that the reviews are “in no way immune from internal controversy”, because some of the projects selected are small, and because of the time and resources required.

Six Reactions

First off, we’ve got to salute Oxfam on two fronts. They’re trying a new approach to a problem all big NGOs face: how to develop reliable ways of assessing performance and being more accountable to local people, donors and managers. And they’re doing it openly and transparently.

As a sector, we urgently need to pilot more new ways of doing this.

Secondly, it would be fascinating to know more about what field staff think about the reviews – along with the partners and people Oxfam works with. Do the reviews address their concerns & priorities? Have the reviews helped them learn new things and do their work better? Do they feel that the benefits are worth the time the reviews took?

Thirdly, what do senior managers or trustees think about the benefits & costs? I love that the process provides credible evidence of successes and failures. All too often, senior managers only hear about successes. This more balanced view could usefully inform high level strategies and policies (along with fundraising literature).

Fourth, this approach is specifically designed for a large generalist NGO, doing a lot of different things in different places. It wouldn’t be appropriate for a small NGO with a tight focus, which could be more intentional about which projects to evaluate.

Fifth, the individual reports include traffic light ratings showing how much evidence there is of impact for each outcome the project intended to achieve. This echoes the approach of the UK Independent Commission for Aid Impact. Personally, I think it’s terrific, allowing senior managers to compare performance across different projects. I guess it would have been controversial internally – it’s striking that the traffic lights aren’t included in the overall summary. No one likes being on the receiving end of this kind of rating!

Sixth, this stuff is time consuming and needs specialist skills. In particular, trying to measure ‘impact’ takes a lot of careful work for each different intervention. As Oxfam recognises, staff time and support have been a real constraint, even for such a large organisation.

Accountable to Whom?

Two major tests for this kind of exercise are: (a) who is Oxfam being accountable to, and which actions will be influenced as a result? and (b) how much does it help field staff do their work better?

I’m not 100% sure who Oxfam is aiming to be accountable to through these reviews. It feels like a mixture of internal accountability to senior managers / trustees, and external accountability, at a corporate level, to general donors. It’s not about helping staff be more accountable to the people they’re trying to help at the project level.

Given that everyone is always overworked, this raises the question of whether the exercise diverts staff time away from direct day-to-day work. It may also have sent a strong signal about organisational priorities.

Asking The Right Question?

Overall, we don’t yet know if the benefits of this approach are worth the costs, to everyone involved. The proof of the pudding will come if the system leads to better decisions and improved performance.

As I’ve written elsewhere, I’m not convinced that NGOs should spend the limited resources they have available for assessment on trying to evaluate impact. It tends to burn up too much precious staff time for not enough useful insight.

When ‘impact’ means changes in poor people’s lives, many other factors outside NGOs’ control influence it. So impact evaluations are expensive and tend to be distant from the issues that project managers face. They seldom improve real-time decision making, though they can inform future policies, if that’s what they’re designed to do.

Instead, NGOs could focus on measuring how much value they are adding for local people and how good a job they are doing in providing assistance. Unlike impact, this is in their control. It’s also directly related to the daily issues managers face. It may provide a basis for accountability that also drives immediate learning and improvement.

But of course, this is unproven – which is why experiments like Oxfam’s are so valuable.

7 Responses

  1. Good one Alex.

    I am also very pleased to see this bold approach to transparency. One great side-effect is that this can encourage leaders in other orgs (like World Vision) to greater openness. I will certainly bring this to their attention.

    WV are also experimenting with how to use the effectiveness agenda to put local decision makers in the driving seat. I am convinced that if monitoring and evaluation is designed primarily to help local decision makers make good decisions, this will increase their ownership of the monitoring (and evaluation), which will improve the quality of monitoring (and evaluation) data, which will then be more useful to higher-level decision makers and external accountability requirements, while all the time empowering the local level.

    At the moment, it seems like the information requirements of the aid agency or donor take precedence, at the expense of local decision makers, which is inevitably dis-empowering.

    Local decision makers can be aid agency project managers, or community members involved in multi-stakeholder local level projects.

  2. Thanks Alex – good piece, thought provoking. I agree that Oxfam is doing the right thing and promoting reflection & transparency, but your question on asking the right questions is also relevant. To me, it raises two important questions – the purpose (why should we be doing the reviews) and accountability (to whom). Answers to these two questions will significantly affect the design of such reviews and, subsequently, their use.

  3. Many thanks for sharing this Alex. The question of factors of impact outside the sphere of influence of NGOs is an interesting one. NGOs usually seek to influence these factors through advocacy and Oxfam is particularly active in this field. Oxfam recognizes that their community level inputs often have little impact due to global economic policies or national level governance challenges. It is important that NGOs map these factors and demonstrate that they understand the bigger context in which they work, which I believe Oxfam does.

    I believe that these reviews help to gain credibility and drive performance, although I agree with you that expensive and time consuming evaluations need to be decided on carefully and strategically.

  4. Great, thoughtful response from DFID’s evaluation team to Oxfam’s work: http://www.oxfamblogs.org/fp2p/?p=12254

    And a five point plan for improving evaluations:

    http://www.oxfamblogs.org/fp2p/?p=12262

    Interesting to see the reliance on good monitoring data. And the work involved in putting these five points into action.

  5. Hi Alex, great write up here and really interesting to read through this work. I am also very impressed that the report is 4 pages long, and the methodology piece only 1 page! You might also be interested to take a look at World Vision Australia’s work, also publicly available, which won them a transparency and quality award. Scroll down for Annual Programme Review. They just completed a new one that should be up there soon too. http://www.worldvision.com.au/aboutus/AnnualReportsAndReviews.aspx

  6. I really like your blog. Please keep it up.

  7. […] ratings of evaluations of all its programmes. Oxfam went some way down the same route with its Project Effectiveness Reviews. This is a great trend towards real transparency. Let’s hope other agencies will follow […]
