Oxfam GB has just published their first project effectiveness reviews.
Impressively, they’re available online, telling an unvarnished story of what Oxfam achieved in 26 projects, along with the problems they’ve faced. This is great transparency. It’s also a great way for other NGOs to learn from their experience. So hats off to Oxfam!
Oxfam’s Effectiveness Reviews
Karl Hughes, Oxfam’s Programme Effectiveness Team Lead, recently explained their approach and what they found.
They randomly selected around 30 projects from a global portfolio of 1,200 across 55 countries. Then they carried out ‘relatively rigorous’ impact evaluations at a cost of around £10,000 each plus staff time.
Karl makes some comments about carrying out the evaluations:
- They faced inevitable constraints of time and money. In some cases, this caused some problems with the quality of evaluations.
- They tackled some hard-to-measure variables, like ‘resilience’.
- In humanitarian responses, established quality standards were very useful.
- The sample of projects is too small to draw overall conclusions about Oxfam’s effectiveness.
What They Found
Karl also comments on the findings:
- Overall results are mixed. “For most projects, there is evidence of impact for some measures but none for others.”
- Some projects have been highly effective, like a project in Pakistan that provided 48 hours advance warning of floods.
- Others have been “disappointing”.
Karl is impressed that managers are taking the reviews seriously and committing themselves to acting on their results. I was impressed that their management responses are also published online.
Finally, he comments that the reviews are “in no way immune from internal controversy”, because some of the projects selected are small, and because of the time and resources required.
First off, we’ve got to salute Oxfam on two fronts. They’re trying a new approach to a problem all big NGOs face: how to develop reliable ways of assessing performance and being more accountable to local people, donors and managers. And they’re doing it openly and transparently.
As a sector, we urgently need to pilot more new ways of doing this.
Secondly, it would be fascinating to know more about what field staff think about the reviews – along with the partners and people Oxfam works with. Do the reviews address their concerns & priorities? Have the reviews helped them learn new things and do their work better? Do they feel that the benefits are worth the time the reviews took?
Thirdly, what do senior managers or trustees think about the benefits & costs? I love that the process provides credible evidence of successes and failures. All too often, senior managers only hear about successes. This more balanced view could usefully inform high level strategies and policies (along with fundraising literature).
Fourth, this approach is specifically designed for a large generalist NGO, doing a lot of different things in different places. It wouldn’t be appropriate for a small NGO with a tight focus, which could be more intentional about which projects to evaluate.
Fifth, the individual reports include traffic light ratings showing how much evidence there is of impact for each outcome the project intended to achieve. This echoes the UK Independent Commission on Aid Impact’s approach. Personally, I think it’s terrific, allowing senior managers to compare performance across different projects. I guess it would have been controversial internally – it’s striking that the traffic lights aren’t included in the overall summary. No one likes being on the receiving end of this kind of rating!
Sixth, this stuff is time-consuming and needs specialist skills. In particular, trying to measure ‘impact’ takes a lot of careful work for each different intervention. As Oxfam recognises, staff time and support have been a real constraint, even for such a large organisation.
Accountable to Whom?
Two major tests for this kind of exercise are: (a) who is Oxfam being accountable to, and which actions will be influenced as a result? and (b) how much does it help field staff do their work better?
I’m not 100% sure who Oxfam is aiming to be accountable to through these reviews. It feels like a mixture of internal accountability to senior managers / trustees, and external accountability, at a corporate level, to general donors. It’s not about helping staff be more accountable to the people they’re trying to help at the project level.
Given that everyone is always overworked, this raises the question of whether the exercise diverts staff time away from direct day-to-day work. It may also have sent a strong signal about organisational priorities.
Asking The Right Question?
Overall, we don’t yet know if the benefits of this approach are worth the costs, to everyone involved. The proof of the pudding will come if the system leads to better decisions and improved performance.
As I’ve written elsewhere, I’m not convinced that NGOs should spend the limited resources they have available for assessment on trying to evaluate impact. It tends to burn up too much precious staff time for not enough useful insight.
When ‘impact’ means changes in poor people’s lives, then very many other factors, outside NGOs, influence it. So impact evaluations are expensive and tend to be distant from the issues that project managers face. They seldom improve real-time decision making; though they can inform future policies, if that’s what they’re designed to do.
Instead, NGOs could focus on measuring how much value they are adding for local people and how good a job they are doing in providing assistance. Unlike impact, this is within their control. It’s also directly related to the daily issues managers face. It may provide a basis for accountability that also drives immediate learning and improvement.
But of course, this is unproven – which is why experiments like Oxfam’s are so valuable.