Here’s another powerful experiment by a major NGO on how to measure results: World Vision’s attempt, a few years ago, to roll out the same 12 standard impact indicators across all their programmes worldwide. (World Vision currently works in over 80 countries.)
I’m greatly impressed by how much material World Vision has published. Their original approach was ultimately unsuccessful. But they’ve continued to evolve the ideas. Both the original effort and their current materials have powerful lessons for other NGOs today.
Starting the journey: mandatory indicators
In 2003 World Vision asked all of its local programme offices to measure the same 12 indicators in the communities where they work. They were called Transformational Development Indicators. In 2008, World Vision evaluated the process. Subsequently, they made substantial changes – because the original idea didn’t work well enough to justify the time and costs involved.
The original 12 indicators included:
- Percent of boys and percent of girls who are enrolled in or have completed the first six years of formal education.
- Percent of children aged 12–23 months fully immunized.
- Community participation in development (analysed using specific rating guidelines).
The goals of the process and indicators were:
- To be valuable to communities, World Vision’s programmes and partners.
- To strengthen an organisational culture of quality and accountability.
- To meet the information needs of funding offices.
- To allow World Vision to review overall effectiveness.
By 2008, more than 700 local programme offices had measured these indicators. This is a huge effort, requiring a vast amount of staff time along with many guidelines, training sessions and research activities. A lot of this material is still available on their dedicated website.
The evaluation report questions the quality of the data, the use of the results and the costs of the process. These are familiar issues from any research exercise at this scale. The report includes stories of how useful the indicators were locally, when they were high quality – for instance, to guide programme strategies. But overall, it found that the indicators were not consistently relevant for their programmes:
The [indicator] surveys enabled World Vision staff to obtain a clearer picture of the situation in the communities directly or indirectly targeted by their programs. … However, in many cases there has been a disconnect between measurement of the [indicators] in the broader community and what [programme staff] feel responsible for. Even when there has been a second survey … there has been a reluctance to attribute those changes (or lack thereof) to World Vision’s work. (p. 5)
Effectively, World Vision’s staff found that they could invest their time either in running demanding development programmes or in measuring the standard indicators. But the indicators were not consistently relevant enough to the work being carried out for staff to combine the two.
This is an incredibly important conclusion. As a sector, we can learn from it. We don’t need to spend more time and money trying to roll out mandatory indicators across many varied contexts and programmes.
What happened next: local ownership
Seamus Anderson, from World Vision’s Programming Effectiveness Team, commented that the Transformational Development Indicators experience created a positive legacy, including:
- increased staff capacity in baseline assessments, monitoring and evaluation,
- greater understanding of the potential power of good-quality evidence, and a commitment to find quality data at reasonable cost,
- a need to balance the organisational requirements for evidence with the local programme and community capacity to generate good data,
- the need for monitoring and evaluation to be adapted so it’s relevant to the local context,
- the need for local ownership in any form of monitoring and evaluation.
(source: private email 26/3/13)
These, too, are powerful lessons for the sector.
World Vision has acted on them and developed their approach further. They’ve created a menu of non-mandatory indicators, called Child Well Being Outcomes. These are part of a wider approach to managing and reporting the quality of programmes, which includes a serious review of programme design as well as monitoring, reflection and evaluation – all (a) owned at a local level and (b) meeting global standards.
Best of all, they’ve published all this material on a simple and user-friendly website, available in three languages, and supported by a range of tested participatory tools. All available to everyone, for free: click on ‘programming tools’.
I’m not aware that the new approach has been evaluated. But it feels like a highly appropriate approach to balancing the demands for centralised reporting with the need to support local managers to run flexible programmes that are relevant to the local context.
In other words, it helps staff use their limited resources to do great monitoring, rather than focus on evaluation – which is 100% in line with recent well-informed comment.