An excellent TV show this week followed a British midwife working for a fortnight in a Liberian hospital. The first thing she did was turn on two shiny new incubators that UNICEF had provided. No one had trained the staff to use them.
My wife was shocked: how could such expensive kit be provided without training? After years in aid, I wasn’t so surprised. It’s a familiar story, coming down to the incentives that shape how aid agencies actually run projects. Current incentives focus attention on planning and the supply side, rather than taking responsibility for making things work on the ground.
With the growing pressure to measure results, there’s a real opportunity to bring together inspiring new practice on demand-side indicators: new ways of measuring performance that create the incentives to plug in the incubators and use them, not just dump them.
First, a note on the pressure to measure results. DFID’s relentless focus on results and value for money. Cash on delivery. Social impact investing. NGOs’ own efforts to improve performance. They all depend on measuring results better. Everyone wants to do it.
Second, the established ways are discredited. There’s no other word for it.
Tools like logical frameworks focus attention on planning. But we’ve reached the limits of how we can improve performance by better planning. We know social change is too complex to be mapped out in advance.
Impact evaluations assess performance after projects have finished, at a hefty cost. They may improve how future projects are planned. But they don’t provide managers with real-time information to improve work that’s up and running. And arguably, that’s the biggest need for performance information.
As a sector, we urgently need better ways of monitoring performance, so the work remains responsive to people’s needs. Monitoring should also empower local people, as one of development’s core aims. Finally, it should allow performance to be easily compared between organisations, so funds can be allocated to the best performers and standards rise across the board.
So what can we build on? An inspiring new set of approaches is emerging. Here are three examples from different fields:
In the UK, the Outcomes Star provides a structured way of assessing how homeless people are doing across 10 key dimensions like physical health, managing money and accommodation. For each dimension, there is a 10-point ‘journey of change’ scale. A homeless person and their case worker discuss together where on the scale the person is. Changes can be monitored over time, generating performance data almost as a by-product.
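To make the mechanics concrete, here is a minimal sketch of that kind of scoring. This is not the official Outcomes Star tool; the dimensions, scores and review intervals are invented for illustration, but the logic (an agreed 1–10 score per dimension at each review, compared over time) follows the description above.

```python
# Illustrative sketch of Outcomes Star-style monitoring (not the official tool).
# Each review records an agreed 1-10 'journey of change' score per dimension;
# comparing reviews over time yields simple performance data as a by-product.

def score_change(first_review, later_review):
    """Per-dimension change between two reviews (dicts of dimension -> score)."""
    return {d: later_review[d] - first_review[d] for d in first_review}

# Invented example data: scores agreed between a person and their case worker.
intake = {"physical health": 3, "managing money": 2, "accommodation": 4}
six_months = {"physical health": 5, "managing money": 4, "accommodation": 6}

changes = score_change(intake, six_months)
avg = sum(changes.values()) / len(changes)
print(changes)  # movement along the scale, per dimension
print(avg)      # average movement across dimensions
```

Aggregating those per-person changes across a caseload is what turns a support conversation into comparable performance data.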
In Bangladesh, a similar tool is used to measure empowerment. Designed in consultation with local community groups, it identifies specific indicators of what ‘empowerment’ means for them, grouped into four areas. Every year, the groups assess whether they have achieved indicators like attending school meetings, accessing welfare entitlements or keeping their own accounts. Changes in empowerment are compared between men and women, and used to monitor staff performance as well as for reporting to donors.
The Keystone Partner Survey used a carefully designed questionnaire to define the ways that Northern NGOs expect to add value to their Southern partners. Over 1,000 partners filled it in, giving their perspective on how well the Northern NGOs do their job. It generated simple, comparable data that was benchmarked between organisations. Each NGO could see exactly how it performed compared to its peers and where it could improve.
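The benchmarking step is simple arithmetic. Below is a hedged sketch, with invented NGO names and ratings, of how partner feedback scores might be averaged per organisation and set against the peer-group average so each NGO can see where it stands; the real Keystone survey is of course richer than this.

```python
# Hedged sketch of Keystone-style benchmarking (invented names and scores).
# Partner ratings (here on a 1-5 scale) are averaged per NGO and compared
# against the peer-group average.

ratings = {
    "NGO A": [4, 5, 3, 4],
    "NGO B": [2, 3, 3, 2],
    "NGO C": [5, 4, 4, 5],
}

def benchmark(ratings):
    """Return each NGO's mean rating and its gap from the peer-group average."""
    means = {ngo: sum(r) / len(r) for ngo, r in ratings.items()}
    peer_avg = sum(means.values()) / len(means)
    return {ngo: (m, round(m - peer_avg, 2)) for ngo, m in means.items()}

for ngo, (mean, gap) in benchmark(ratings).items():
    print(ngo, mean, gap)  # a positive gap means above the peer average
```

The point of publishing the gap, not just the mean, is that it turns raw feedback into a comparison that drives learning between organisations.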
These examples share important characteristics:
- A carefully designed tool defines the expected outputs & outcomes. Tools are designed in consultation with beneficiaries and partners. They are sector-specific and focus on proximate ‘value added’ and direct changes, rather than long term ‘impact’.
- The tools quantify beneficiaries’ perceptions. This makes data fairly cheap to collect and easy to use. It is also inherently empowering: beneficiaries’ views count.
- The tools are used as the basis for dialogue between organisations and beneficiaries, in order to measure progress and improve performance.
- The quantitative data allows trends to be monitored and different teams & organisations to be compared. The comparisons drive learning.
I wouldn’t suggest that beneficiaries’ perceptions are an infallible guide to ‘the truth’. But they are a crucial set of views about how well an organisation is actually helping people that agencies cannot afford to ignore. They can also provide great performance indicators.
The approach applies customer-satisfaction thinking to aid work and has strong similarities to Beyond Budgeting. It’s staggering that NGOs do not systematically gather local people’s perceptions of the quality of their work. Just imagine if they did: suppose NGOs’ funding was tied to local women’s perceptions of how useful their work was. That would be a real revolution.
A number of other examples are emerging, like the Coping Strategies Index, Listen First and – most widely used – Community Scorecards. New technologies make it cheap to hear from large numbers of people quickly and regularly.
It’s easy to start imagining other similar tools that could be developed. How about a Humanitarian Star, defining the key outcomes that aid agencies aim to achieve in emergency responses, such as access to clean water, shelter, information, protection etc? Or a Health Clinic Scorecard, or a generic Community Empowerment Framework?
There are some challenges to be tackled, like adapting generic tools to different contexts and making sure that women and marginalised people’s voices are heard. But the principle feels strong.
The goal is surely not to design approaches that are perfect, but approaches that are (a) credible in 80% of different cases, and (b) better than logframes and how we currently measure performance.
Imagine the power of matching up feedback from communities through a Community Scorecard with Cash on Delivery – or Social Impact Bonds – generating a real link between resource allocation and local people’s perceptions of the work. Some terrific experiments are happening now.
NGOs urgently need a concerted effort to develop these demand-side measurement tools, so they can describe their results, be more accountable to the people they aim to assist and assess their value for money.
We must be able to do better than logframes and counting the number of people trained – or even incubators delivered – which we know is no guarantee our efforts are useful. We need to measure performance based on demand, not supply, and create the right incentives for responsive, effective assistance.
What other practical approaches are there? I’d love to hear.