Lessons from World Vision: Mandatory indicators don’t work!

Here’s another powerful experiment by a major NGO on how to measure results. It’s from World Vision, a few years ago. The experiment was to roll out the same 12 standard impact indicators across all their programmes, worldwide. (World Vision currently works in over 80 countries.)

I’m greatly impressed by how much material World Vision has published. Their original approach was ultimately unsuccessful. But they’ve continued to evolve the ideas. Both the original effort and their current materials have powerful lessons for other NGOs today.

Starting the journey: mandatory indicators

In 2003 World Vision asked all of its local programme offices to measure the same 12 indicators in the communities where they work. They were called Transformational Development Indicators. In 2008, World Vision evaluated the process. Subsequently, they made substantial changes – because the original idea didn’t work well enough to justify the time and costs involved.

The original 12 indicators included:

  • Percent of boys and percent of girls who are enrolled in or have completed the first six years of formal education. 
  • Percent of children aged 12–23 months fully immunized. 
  • Community participation in development (analysed using specific rating guidelines). 

The goals of the process and indicators were:

  1. To be valuable to communities, World Vision’s programmes and partners. 
  2. To strengthen an organisational culture of quality and accountability. 
  3. To meet the information needs of funding offices. 
  4. To allow World Vision to review overall effectiveness. 

By 2008, more than 700 local programme offices had measured these indicators. This is a huge effort, requiring a vast amount of staff time along with many guidelines, training sessions and research activities. A lot of this material is still available on their dedicated website.

The evaluation report questions the quality of data, the use of the results and the costs of the process. These are familiar issues for any research exercise at this scale. The report includes stories of how useful the indicators were locally, when they were of high quality – for instance, in guiding programme strategies. But overall, it found that the indicators were not consistently relevant to their programmes:

The [indicator] surveys enabled World Vision staff to obtain a clearer picture of the situation in the communities directly or indirectly targeted by their programs. … However, in many cases there has been a disconnect between measurement of the [indicators] in the broader community and what [programme staff] feel responsible for. Even when there has been a second survey … there has been a reluctance to attribute those changes (or lack thereof) to World Vision’s work. (p. 5)

Effectively, World Vision’s staff found that they could either invest their time in running demanding development programmes or in measuring the standard indicators. But the indicators were not consistently relevant enough to the work being carried out for staff to combine the two.

This is an incredibly important conclusion. As a sector, we can learn from it. We don’t need to spend more time and money trying to roll out mandatory indicators across many varied contexts and programmes.

What happened next: local ownership

Seamus Anderson, from World Vision’s Programming Effectiveness Team, commented that the Transformational Development Indicators experience created a positive legacy, including:

– increased staff capacity in baseline assessments, monitoring and evaluation,
– greater understanding of the potential power of good quality evidence, and a commitment to find quality data at reasonable cost,
– a need to balance the organisational requirements for evidence with the local programme and community capacity to generate good data,
– the need for monitoring and evaluation to be adapted so it’s relevant to the local context,
– the need for local ownership in any form of monitoring and evaluation.
 (source: private email 26/3/13)

These are further excellent and powerful lessons.

World Vision has acted on them and developed their approach further. They’ve created a menu of non-mandatory indicators, called Child Well-being Outcomes. These are part of a wider approach to managing and reporting the quality of programmes, which includes a serious review of programme design as well as monitoring, reflection and evaluation – all (a) owned at a local level and (b) meeting global standards.

Best of all, they’ve published all this material on a simple and user-friendly website, available in three languages, and supported by a range of tested participatory tools. All available to everyone, for free: click on ‘programming tools’.

I’m not aware that the new approach has been evaluated. But it feels like a highly appropriate approach to balancing the demands for centralised reporting with the need to support local managers to run flexible programmes that are relevant to the local context.

In other words, it helps staff use the limited resources available to do great monitoring, rather than focus on evaluation – which is 100% in line with recent, well-informed comment.

2 Responses

  1. Alex, WVI also published a peer-reviewed article on the TDI experience. The article talks candidly about our experience, and emphasises the practical lessons that we carried into our new approach. It explains the approach we’re taking now, and how it’s built on previous experience. It’s a good read & you may find it of interest.

    The abstract is available at the link below
    http://link.springer.com/content/pdf/10.1007%2Fs12187-011-9109-3

  2. Hi Alex, thanks for highlighting this experience. WVI learned a huge amount from this (now historical) experience and also from the global evaluation of it. We have come a long way since that time.

    A couple of clarifications:
    1. World Vision still uses standard indicators to measure progress across programmes, countries and regions, but the way it is now done is much improved.
    2. There is an equal focus on monitoring and evaluation – it’s just that the purpose and role of each have been better clarified to optimise utilisation.

    As the title of the article we published suggests (thanks Seamus for posting the link), the aim is to find the right balance between standardisation and flexibility.

    Standardisation: As a global organisation, finding a way to produce agency-level results is essential for accountability and learning. This requires a level of standardisation. Plus, staff time and resources are wasted developing outcome indicators where global good-practice ones already exist. We found hundreds of similarly worded indicators all measured slightly differently, preventing us from being able to use the data more broadly. It is difficult to draw meaningful lessons from thousands of idiosyncratic projects – lessons-learned documents just end up gathering dust. We need some way of synthesising and summarising.

    Flexibility: This is balanced with the focus on working with partners and community-led processes. So we have a “compendium” of standard indicators and programmes are required to use them – but they can pick and choose which ones. Three filters guide selection:
    – relevance to project or programme objectives,
    – appropriateness for the local context, and
    – alignment with the country office strategy.
    Selected indicators from this global set are measured alongside any context-specific indicators developed locally.

    Strengthening monitoring AND evaluation: WVI is focusing on strengthening its evidence base – and this includes improving our internal evaluation capacity, data quality and better utilisation of results. Good monitoring is essential, and the focus of our monitoring is to inform local decision-making. But this does not take away from the important role evaluation plays for accountability and learning. We need to know not only if we are doing what we said we would do, but if we are doing the right things. So local staff focus on monitoring and staff from the country office manage evaluation processes. It’s not an either/or – it’s both.

    We are currently piloting a process for country offices to produce their own summary reports, annually, on child well-being, focused on measuring progress (using monitoring data) and change (using baselines and evaluations) to generate important learning and recommendations to continually improve effectiveness. These reports use the variety of data available from one financial year. So far this has been successful and useful. In a few years we will need to again review if this has translated into more effective programmes and verifiable improvements in child wellbeing.
