Measuring results: the demand-side revolution

Upside down world

An excellent TV show this week followed a British midwife who worked for a fortnight in a Liberian hospital. The first thing she did was turn on two shiny new incubators that UNICEF had provided. No one had trained the staff to use them.

My wife was shocked: how could such expensive kit be provided without training? After years in aid, I wasn’t so surprised. It’s a familiar story, and it comes down to the incentives that shape how aid agencies actually run projects. Current incentives focus attention on planning and the supply side, rather than on taking responsibility for making things work on the ground.

With the growing pressure to measure results, there’s a real opportunity to bring together inspiring new practice on demand-side indicators: new ways of measuring performance that create the incentives to plug in the incubators and use them, not just dump them.

First, a note on the pressure to measure results. DFID’s relentless focus on results and value for money. Cash on delivery. Social impact investing. NGOs’ own efforts to improve performance. They all depend on measuring results better. Everyone wants to do it.

Second, the established ways are discredited. There’s no other word for it.

Tools like logical frameworks focus attention on planning. But we’ve reached the limits of how we can improve performance by better planning. We know social change is too complex to be mapped out in advance.

Impact evaluations assess performance after projects have finished, at a hefty cost. They may improve how future projects are planned. But they don’t provide managers with real-time information to improve work that’s up and running. And arguably, that’s the biggest need for performance information.

As a sector, we urgently need better ways of monitoring performance, so the work remains responsive to people’s needs. Monitoring should also empower local people, as one of development’s core aims. Finally, it should allow performance to be easily compared between organisations, so funds can be allocated to the best performers and standards rise across the board.

So what can we build on? An inspiring new set of approaches is emerging. Here are three examples from different fields:

In the UK, the Outcomes Star provides a structured way of assessing how homeless people are doing across 10 key dimensions like physical health, managing money and accommodation. For each dimension, there is a 10 point ‘journey of change’ scale. A homeless person and their case worker discuss together which point of the scale the person is at. Changes can be monitored over time, generating performance data almost as a by-product.
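To make the mechanics concrete, here is a minimal sketch (in Python, with invented dimension names and data, not the official Outcomes Star format) of how jointly agreed scores could be recorded at successive reviews and turned into a simple trend:

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical dimensions, loosely inspired by the Outcomes Star; the real
# tool defines ten, each with its own 10-point 'journey of change' scale.
DIMENSIONS = ["physical_health", "managing_money", "accommodation"]


@dataclass
class Review:
    """Scores (1-10) agreed jointly by the person and their case worker."""
    review_date: date
    scores: dict  # dimension -> agreed point on the journey-of-change scale


@dataclass
class Journey:
    person_id: str
    reviews: list = field(default_factory=list)

    def add_review(self, review: Review) -> None:
        self.reviews.append(review)

    def change_since_first(self) -> dict:
        """Change per dimension between the first and the latest review."""
        first, latest = self.reviews[0].scores, self.reviews[-1].scores
        return {d: latest[d] - first[d] for d in DIMENSIONS}


# Two reviews six months apart: performance data emerges almost as a
# by-product of the case work itself.
journey = Journey("person-001")
journey.add_review(Review(date(2011, 1, 10),
                          {"physical_health": 3, "managing_money": 2, "accommodation": 4}))
journey.add_review(Review(date(2011, 7, 12),
                          {"physical_health": 5, "managing_money": 4, "accommodation": 7}))
print(journey.change_since_first())
# -> {'physical_health': 2, 'managing_money': 2, 'accommodation': 3}
```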

In Bangladesh, a similar tool is used to measure empowerment. Designed in consultation with local community groups, it identifies specific indicators about what ‘empowerment’ means for them, grouped into 4 areas. Every year, the groups assess whether they have achieved indicators like attending school meetings, accessing welfare entitlements or keeping their own accounts. Changes in empowerment are compared between men and women, and used to monitor staff performance as well as reporting to donors.

The Keystone Partner Survey used a carefully designed questionnaire to define the ways that Northern NGOs expect to add value to their Southern partners. Over 1,000 partners filled it in, giving their perspective on how well the Northern NGOs do their job. It generated simple, comparable data that was benchmarked between organisations. Each NGO could see exactly how it performed compared to its peers and where it could improve.
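And a rough sketch of the benchmarking step (again hypothetical: invented ratings, organisation names and a simple 1–5 scale), just to show how comparable peer data might be produced:

```python
from statistics import mean

# Invented partner ratings (1-5) of how well each Northern NGO adds value.
# The real Keystone Partner Survey questionnaire is far more detailed.
ratings = {
    "NGO A": [4, 5, 3, 4, 4],
    "NGO B": [2, 3, 3, 2, 4],
    "NGO C": [5, 4, 4, 5, 3],
}

# Average score per NGO, plus the peer-group average each can compare against.
averages = {ngo: mean(scores) for ngo, scores in ratings.items()}
peer_average = mean(averages.values())

for ngo, avg in sorted(averages.items(), key=lambda item: item[1], reverse=True):
    print(f"{ngo}: {avg:.2f} (peer average {peer_average:.2f}, gap {avg - peer_average:+.2f})")
```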

These examples share important characteristics:

  1. A carefully designed tool defines the expected outputs & outcomes. Tools are designed in consultation with beneficiaries and partners. They are sector-specific and focus on proximate ‘value added’ and direct changes, rather than long term ‘impact’.
  2. The tools quantify beneficiaries’ perceptions. This makes data fairly cheap to collect and easy to use. It is also inherently empowering: beneficiaries’ views count.
  3. The tools are used as the basis for dialogue between organisations and beneficiaries, in order to measure progress and improve performance.
  4. The quantitative data allows trends to be monitored and different teams & organisations to be compared. The comparisons drive learning.

I wouldn’t suggest that beneficiaries’ perceptions are an infallible guide to ‘the truth’. But they are a crucial set of views about how well an organisation is actually helping people that agencies cannot afford to ignore. They can also provide great performance indicators.

The approach applies the logic of customer satisfaction to aid work and has strong similarities to Beyond Budgeting. It’s staggering that NGOs do not systematically gather local people’s perceptions of the quality of their work. Just imagine if they did: suppose NGOs’ funding was tied to local women’s perceptions of how useful their work was. That would be a real revolution.

A number of other examples are emerging, like the Coping Strategies Index, Listen First and – most widely used – Community Scorecards. New technologies make it cheap to hear from large numbers of people quickly and regularly.

It’s easy to start imagining other similar tools that could be developed. How about a Humanitarian Star, defining the key outcomes that aid agencies aim to achieve in emergency responses, such as access to clean water, shelter, information, protection etc? Or a Health Clinic Scorecard, or a generic Community Empowerment Framework?

There are some challenges to be tackled, like adapting generic tools to different contexts and making sure that women and marginalised people’s voices are heard. But the principle feels strong.

The goal is surely not to design approaches that are perfect, but approaches that are (a) credible in 80% of cases, and (b) better than logframes and the other ways we currently measure performance.

Imagine the power of matching up feedback from communities through a Community Scorecard with Cash on Delivery – or Social Impact Bonds. Generating a real link between resource allocation and local people’s perceptions of the work. Some terrific experiments are happening now.

NGOs urgently need a concerted effort to develop these demand-side measurement tools, so they can describe their results, be more accountable to the people they aim to assist and assess their value for money.

We must be able to do better than logframes and counting the number of people trained – or even incubators delivered – which we know is no guarantee our efforts are useful. We need to measure performance based on demand, not supply, and create the right incentives for responsive, effective assistance.

What other practical approaches are there? I’d love to hear.

15 Responses

  1. Thanks for this post Alex. Like you I’ve been thinking about the effects of supplier-centric planning – see http://www.brabyn.com/2011/02/supplier-induced-demand-should-you-lead.html – though my focus has been the ways in which suppliers induce demand.

    You make several valuable points about the models being developed to rebalance the “market” for aid to reflect the needs of the beneficiaries, but what do you think can or should be done to address the vested interests that shape the supplier agenda?

    I spoke recently to an epidemiologist who compared the moral hazards inherent in the US and UK healthcare systems, in which suppliers are rewarded for outputs (surgical procedures completed, etc.) rather than outcomes (satisfied, healthy patients) – and contrasted this with approaches being developed in Australia where healthcare providers are rewarded based on the reports of patients.

  2. Very interesting Alex.

    I’ve also been thinking about how to incentivise NGOs and aid agencies to develop better measurement tools and in particular, encourage beneficiary feedback.

    A number of independent “review and ranking” organisations (eg. http://greatnonprofits.org/ and http://www.givewell.org/) seem to be gaining popularity. They offer an important incentive to NGOs: visibility and credibility with the general public, which possibly leads to more funding.

    So the thought is: develop and promote public forums where better measurement and reporting of results are rewarded; this gives NGOs a financial and credibility incentive to innovate.

  3. Thanks Alex.

    It is good to see that these issues are being debated. Certainly at Peace Direct we are keen to help our partners develop their own M&E and use communities to identify the indicators and targets. We are about to have the third and final ‘Peace Exchange’ in Uganda next month where we hope we can conclude on a locally-led approach to evaluating peacebuilding.

    Our partners have identified the need to involve the communities in identifying and monitoring any evaluation indicators. In doing so, the indicators will be more relevant, as well as offering the communities greater transparency. This will lead to greater project support – something vital in peacebuilding projects – as communities will understand clearly how the project will benefit them. In turn, this will lead to greater accountability of us and our partner organisations as it will be the communities that set the parameters for success.

    This kind of inclusive approach is a win-win situation, as responsibility for M&E is shared with the communities. Because the indicators are identified by the communities, they are typically more visible, which makes them easier to collect. For example, rather than conducting a psychosocial questionnaire to assess the re-integration of a combatant, they look at indicators that are easy to see, such as building a house or starting to farm. The simplicity of these indicators means that staff with little or no M&E experience can still do the evaluation. In fact, it is feasible that someone within the community could be appointed to monitor progress.

    I will look at the links here with interest and as this approach develops I will keep you posted.

  4. It all sounds great and your argument is very convincing. However, I don’t think that getting rid of logframes is the answer. The problem with logframes is not the tool itself but rather the fact that they are often poorly completed. No matter what indicator you come up with, you can put it in a logframe.

    The major problem we face is that while developing indicators and results frameworks with beneficiaries is great in principle, it is extremely time consuming and more often than not there simply is not time to do this before submitting proposals.

    I totally agree that we need to be much more focused on measuring results/change as opposed to inputs, but I would advocate using the system we have – just use it better. Too often, we in the NGO world come up with new ways of doing things when all we really need is time to figure out how to improve the methods/approaches we’re already using.

  5. Thanks for the comments and emails. Some quick replies:

    Tariq, I agree that ratings agencies have an important role to play. But, given the diversity of rating agencies and funding sources, how strong is the incentive they create for agencies in reality?

    Tom, thanks very much for the example. It sounds like Peace Direct is doing more great work. I love what you say about local communities identifying indicators and a monitoring system they can run. Do please keep us all posted. How about Stewart’s point, about how long it takes to work this way?

    Stewart, you are spot on that developing indicators with communities can take time. It’d be interesting to see how Peace Direct find this. The examples in the blog are all flexible enough to be used in different situations. Once the initial investment has been made (and it has), the tools can be used and re-used much more easily.

    On logframes there’s strong evidence against you, I’m afraid, e.g: https://ngoperformance.org/related-initiatives/logframes/. They consistently do not deliver what is expected from them. In the face of this, we’ve got to look for alternatives.

  6. Nice little piece in this month’s Harvard Business Review: “Don’t Get Blinded by the Numbers” . Suggests the “management by numbers game” is losing currency in the business world — ‘more and more we are coming to see that strategy is about interpreting [data qualitatively] rather than about analyzing [the numbers].’ The best executives, it says, are realizing the importance of qualitative assessment in making decisions. ‘Increasingly firms are putting aside data-driven approaches and borrowing from disciplines like ethnography’. ‘Successful strategists of the future will have a holistic, empathetic understanding…’ The article goes on to say that classic business school education will need to adapt to the new emphasis on qualitative approaches. Seems to me the NGO sector usually lags behind the business sector — how long before they catch up with this one?

  7. […] feels very similar to the approach I blogged on this week about the demand-side revolution in measuring results, taken to a whole new level. There are huge challenges, for instance in balancing standardisation […]

  8. The critiques of logframes match my own experience with numerous NGOs; however, I also agree that “throwing the baby out with the bath water” is not what is needed. We use an approach that improves on logframes – Theory of Change (TOC) methodology – and have found more and more NGOs coming to us or others to do the same. Now, TOCs can be done well or poorly, so it still comes down to using the method to its fullest.

    Though often critiqued because it, too, is linear, TOC actually allows and provides for both feedback loops and “organic” developments. A TOC is far, far easier to read than a logframe and does convey when things happen in a time sequence. These boxes can be labeled as, and coincide with, outputs, objectives, etc. – or not. The point is that every box in a TOC represents a condition that is necessary for change, and that box can be called whatever you want (traditionally they are called outcomes).

    Theories can be reworked easily each time something new or unexpected is learned, or time passes, or context changes.

    And that is perhaps the most relevant and important aspect of any results-oriented method – that it take political context (and all context) into account, so that it feels real to the people using it, and they will, actually, use it as a guide for planning, for changing plans and for evaluation.

  9. While I don’t have much experience in international development, I have been developing performance measures for social service programs in the USA for over 20 years. Recently I have been working with a framework that simplifies the questions that need to be answered, although, as the post acknowledges and other commenters confirm, the right measures are complicated by the situation being measured and the context in which the work occurs.

    All that being said, Results-Based Accountability (Mark Friedman) asks three simple questions: How much did we do? How well did we do it? and Is anyone better off?

    My wife who works in the area of infant and young child feeding in emergencies has found some value in the simplicity of this framework as a starting point.

  10. […] need for sector specific, demand-side measures is more urgent than […]

  11. […] for the soundbite, Sceptical Secondo). Claire, supported by Penny Lawrence and Alex Jacobs (with an excellent link to some practical examples) thinks it […]

  12. […] I’ve argued that feedback is a crucial way of assessing performance, generating powerful, cost-effective data that puts beneficiaries first. It could also be used in […]

  13. […] NGOs, we urgently need other approaches to planning and monitoring our work, which focus on the people involved in development and how much value we are adding to their […]

  14. […] on lessons I have learnt in the last two years working on such projects. As NGOs, we urgently need other approaches to planning and monitoring our work which focus on the people involved in development and how much value we are adding to their […]

  15. […] on lessons I have learnt in the last two years working on such projects. As NGOs, we urgently need other approaches to planning and monitoring our work which focus on the people involved in development and how much value we are adding to their […]
