Charity Awards 2024

Made to measure? How should charities assess effectiveness?

Charities are being asked to measure their impact, but the idea is being met with resistance. David Ainsworth looks into why.

Over the last decade, there has been a drive to get charities to measure their effectiveness – one which has arguably had indifferent success.

The underlying principles behind this drive are understandable. Charities, unlike commercial organisations, are not driven by money. They are driven by a desire to make people’s lives better. Yet while we have very precise metrics to measure how much money flows in and out of charities, we have much weaker tools to assess whether charities are making a positive difference in the world – whether they are delivering social impact (see What is impact?, below).

This appears to be a reversal of the proper order. Surely the important thing to measure and audit is not whether a charity is making money, but whether it is doing good?

As a result, we have had a drive to measure from several sources – from the public, from institutional funders and social lenders, and from within the sector itself, where those pursuing measurement-led models have been critical of their peers.

But this impact revolution has not fully taken hold. Many in the sector appear to view it with suspicion, and there is evidence that merely adopting the language of impact reporting, far from improving work, may actually make staff depressed (see Impact and happiness at work, below).

So what is going on? Does the sector need to do more to measure its impact, or not? And why the backlash?

Genevieve Maitland Hudson, head of evaluation and impact at Lottery-backed funder Power to Change, suggests that when we ask if the sector should measure its impact, this is really two questions, not one. “The first question is whether we should use metrics and measurement to assess the effectiveness of our work,” she says. “And the answer to this is yes, we absolutely should.

“But the second is whether we should report on the impact of our work, and this is much more complex and difficult.”

Measuring impact itself, as opposed to your outputs or outcomes, is a tricky process, she says. It requires a lot of paperwork and a lot of guesswork. And it requires you to start measurement before your programme starts, not after it has finished.

“Most people aren’t very good at processing information unless they have special training. As a result, impact measures shouldn’t be done by individual organisations. Social purpose organisations do not exist to provide homes for researchers. Impact reports should be done by professionals with the proper resources, looking at lots of organisations over a long time frame.

“That’s not happening. As a result, most impact reports are pure marketing.”

Caroline Fiennes, director of Giving Evidence, a consultancy which helps funders identify projects which deliver strong value for money, recommends that constructing your own measures should not be the starting point. “A proper measurement of impact is extremely complicated and expensive,” she says.

“It’s also unnecessary. There is usually already plenty of evidence about what works. Charities should start with that.

“Then if you do measure your impact, what are you going to do with that knowledge once you’ve got it? It’s only useful if you can compare it with something, and do something different as a result.”

Fiennes has spent considerable time reading the research into charitable interventions, and found startling variation in how useful projects are. She points to two very similar campaigns pushing for chlorinated water in Kenya, one of which saved twice as many lives as the other because it delivered chlorine to the water pump, not people’s houses. She also refers to a drug prevention scheme in the USA which probably increased marijuana use.

She says the first thing for anyone to do, before launching a new intervention, is to look at what is already proven to work. Most likely, someone has already identified the best solution.

Bad funding

The major problem is that evaluation and impact reports do not appear geared to presenting evidence which can be used to deliver better social outcomes in future. Julia Morley, a researcher at the London School of Economics, looked into social impact reporting on 138 charity and social enterprise websites, and found that there was little evidence that charities were using the data they collected to deliver better results.

Morley says the primary purpose of social impact reports appeared to be “business washing” – appearing rigorous and professional to impress institutional funders and social investors – rather than gathering data to improve the lives of vulnerable people.

So why is this happening? The answer, it appears, is that many funders and social investors have rightly looked to support charities that can evidence their effectiveness, but have inadvertently created perverse incentives. Charities which want to seek institutional funding must produce evidence and evaluations. As a result, any evaluations they do are likely to be designed to maximise the chances of attracting that cash.

Fiennes says that funders are drawn, unsurprisingly, to organisations which routinely report success. As a result, charities are frequently guilty of publication bias – only revealing results which say something they want to hear – and of overclaiming on outcomes.

This would not be a problem if funders which demand impact reports were good at weeding out the poor evidence, and using the best reporting to select grantees. But there is little evidence that charities which actually deliver better results routinely attract more money.

Maitland Hudson examined data from GrantNav, which collects data on grants given around the UK, to assess whether funders give preference to organisations which could demonstrate a track record of success. The answer was no. Funders followed a herd instinct, and just piled in.

“If grantmakers were funding on the basis of reliable results, then you would expect to see cycles of funding that leave sufficient time to elapse for interventions to be established, data to be collected and analysed, and results published,” she wrote last year for Civil Society News. “That would mean funding cycles like those of social research, with grants available for up to five years.

“This isn’t how funding operates. Instead, grants are distributed within relatively short periods and for continually evolving programmes.”

Maitland Hudson says it is questionable whether funders use much of the data they collect.

“I see this all the time,” she says. “People are asked to collect data. When you go to the funder, they aren’t doing anything with it. They just felt they ought to ask for it.”

Sarah Handley, a senior consultant in the measurement and evaluation team at NPC, a think tank often seen as pro-impact reporting, agrees that charities are often better off focusing on practical, immediately useable measures rather than in-depth external evaluation reports. “The reality is that a lot of organisations want to measure impact to deliver proof of effectiveness, because they’re working in a competitive funding environment,” she says. “We think they should be doing it because they want to improve.

“In an ideal world, the drive would come from charities wanting to measure their own effectiveness, and funders would be happy with that. If the funders were able to trust the charities to do a good job, then you might not have the false incentives that we have.

“Some funders have got it right, but some haven’t. We think funders need to be willing to spend money to support the process of evaluation. And they also need to assess their own impact, and the reporting requirements they’re asking for. If you’re a responsive funder, dealing with a wide range of grantees, you can’t ask for information in the same way as a specialist funder who knows their market very well.”

What next?

So what is the solution? How do we shift to a situation where charities measure their interventions effectively, and report on it honestly?

While measurement is not a good tool to attract funding, it is a good tool to support beneficiaries. So charities’ internal motivations should push us in the right direction.

But that is not enough by itself. Charities must be helped to build the necessary skills. And they will need to overcome resistance: individual charities are strongly invested in ensuring their own particular activities continue, whether or not they are the most effective, and the people delivering specific programmes and interventions have jobs they want to hold onto.

While charities might measure accurately internally, they will still be incentivised to exaggerate and cherry-pick when reporting externally. Funders’ behaviour needs to change to help reduce this.

However, we cannot be too prescriptive towards those who freely give away their own money. Charities would argue that it is usually better to have bad funding than no funding at all, so any incentives must not drive philanthropists out of the market.

One answer is scrutiny. Previous issues of Charity Finance have discussed whether the sector should introduce impact audits, in which external practitioners test the public benefit of a charity, and a Charity Navigator-type body to measure effectiveness against basic metrics.

If these processes did exist, they would be able to call out overreporting by operational bodies, but also provide funders with honest feedback on their own practices – something grant recipients cannot do at present without biting the hand that feeds them.

How to measure well

So if you are a finance director or chief executive looking to embed a culture of measurement into a charity, where should you start? What are the realistic goals, and where should you try to end up?

“Start by reading,” says Fiennes. “Find out what everyone else has done. A lot of charities are delivering things that other charities have done for years. Talk to them. Find out what results they have got.

“Do market research. Become an expert in the existing literature. Find out whether the results you are getting are in line with the rest of the literature.”

She gives the example of the Education Endowment Foundation, which recently carried out a randomised controlled trial assessing the effectiveness of breakfast clubs. “You can just read that and use it,” she says. “If you are running a breakfast club and your results are much worse, is it because you’re doing something wrong, or do you have a different beneficiary group? If your results are much better, have you discovered some great new intervention? Or are you measuring it wrong?

“Then move on to other simple things, like talking to your beneficiaries about whether they are happy with your service. I’m working with some mental health charities, and few did evaluations with their beneficiaries.”

Theory of change

A good way to approach measurement, says Handley, is to develop a theory of change. This is simply a process that begins with identifying what it is you want to achieve. What change do you want to implement, with what beneficiary group, in what area? Then the process identifies what interventions you believe might achieve that goal, and how you might measure whether those interventions are succeeding. Ultimately it moves on to a cycle of measure, test, alter, repeat (see Theory of Change, below).

“Ideally you’d start with a blank sheet of paper, but none of us do,” she says. “We’re all starting from somewhere already. The key, if you’re already delivering programmes, is to be prepared to change if something isn’t working.”

Most organisations can throw up vast quantities of data if you track everything, but not all of it can be used for effective measurement. To get a good sense of what is happening, it is important to identify the key metrics that you can track longitudinally. It helps if a history of data already exists.
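To make “tracking longitudinally” concrete, the minimal sketch below – in Python, with metric names and figures that are entirely hypothetical rather than drawn from any organisation quoted here – shows the sort of simple, consistent record that lets you see whether a key metric is moving in the right direction.

```python
from statistics import mean

# Hypothetical quarterly values for two key metrics, oldest first.
# In practice these would come from a case-management system or spreadsheet.
metrics_history = {
    "course_completions": [41, 38, 45, 52, 48, 57, 61, 64],
    "beneficiary_satisfaction_pct": [71, 74, 70, 76, 78, 75, 80, 82],
}

def year_on_year_change(values, window=4):
    """Compare the average of the latest quarters with the preceding ones."""
    recent, earlier = values[-window:], values[-2 * window:-window]
    return mean(recent) - mean(earlier)

for name, values in metrics_history.items():
    change = year_on_year_change(values)
    direction = "up" if change > 0 else "down" if change < 0 else "flat"
    print(f"{name}: latest {values[-1]}, average {direction} by {abs(change):.1f} on last year")
```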

But, as ever, measurement will have to deal with the vagaries of human behaviour. Maitland Hudson says it is vital to recognise that in almost any organisation, some form of measurement will already be in place, and it may not be the system that people are supposed to be using.

“Managers will be using some sort of evidence to reach decisions. Is it the bottom line? Is it the way people smile when they leave a service? If you’re going to introduce a measurement system, you have to start with what’s there.”

Before planning out a grand theory, she says, it is important to recognise the existing exigencies, and accommodate them in your work. Any evidence-based revolution will have to survive contact with the staff – prejudices, vested interests and all.

“You need to ask what the norm is, and where there might be pockets of resistance to change. You need to take people with you, not impose it from above. And that’s not measurement, it’s management.”

Simon Hopkins, chief executive of Turn2Us, a charity that helps people in financial hardship, agrees on the importance of management.

As with every process anywhere, culture and relationships are key.

“This has to come from the top,” he says. “It has to come from the chief executive. This isn’t one of those things where the chief executive just does PowerPoint and kisses babies, and the FD does all the work. The chief executive has to drive this.

“You need to get the culture right in your organisation. You need a culture of honesty and of high performance, where people understand the limitations and the uses of data. You need to get all the staff on board.”

He says that if you are trying to embed systematic measurement, you must be prepared to accept that not everything will work out as planned. “The sector struggles sometimes because we don’t have many people who can cope with ambiguity,” he says. “In the early days of developing a measurement culture, the lack of perfect answers becomes off-putting.

“Work out what you have got and what you can measure, and accept that where you start off from isn’t the most satisfactory level. Starting simply is so often the key.

“You will have stuff in your dashboard that says ‘measure under development’, and that’s okay. I was director of planning and performance in one of the biggest departments of government and we had ‘measure under development’ peppered all over our work. But you do need to have a plan for how you are going to fill those gaps.

“Get the right people together and be nosy. Ask what the data is telling you. Data will allow you to identify your weaknesses, and act on them.

“And make sure you communicate with staff and volunteers. You have to articulate your measures simply, in a way that everyone understands and accepts.”

Staff measures?

Maitland Hudson also says that you should not just sell your measurement culture to your staff. You should focus on collecting data on them, too. While measurement often focuses on external stakeholders, internal metrics can be more valuable.

There are two main reasons for this. Firstly, it is much easier to collect accurate data on your own employees – their performance and job satisfaction. Secondly, there is a strong body of evidence linking staff satisfaction to high performance.

“Happy staff are likely to have better relationships,” she says. “If the staff are happy, it’s usually a sign things are working. If they aren’t, it’s a sign there are problems.

“We’re often very bad at looking after staff in the sector. We assume they’re driven by their own motivations. We think it’s bad to offer good pay or permanent contracts or pensions. But without these things, staff get unhappy. And miserable staff have a significant impact on your beneficiaries.”

Keeping it simple

Broadly, Maitland Hudson agrees with Hopkins’ assertion that you should start simple. “Measurement is an ordinary part of everyday life,” she says. “If you’re a theatre director, you use simple measures to tell if a production is good or not. Reviews. Bookings. Standing ovations. How many people leave in the interval.

“The key thing is to have clear measures that tell you if you’re performing to your expectations, and where you can do better. That won’t look much like social impact measurement, but it will probably give you what you need. Just track outputs. Track whether your beneficiaries are happy.”

She gives the example of the Samaritans, which would struggle to truly measure its impact. It helps an unknown number of anonymous people not to commit suicide. “But you can just measure whether people are ringing up,” she says, “and whether they’re staying on the phone. If they are, your service is doing something useful.”

And she counsels accepting that there will always be limitations. “At the more academic end of the impact industry, there’s a very perfectionist idea of how measurement works,” she says. “It says that if you get it right you will know precisely how effective you are.

“That’s very difficult. In fact, it’s impossible. You can’t measure anything useful with certainty about human beings – particularly about how they behave.

“You have to understand the limitations of data, and what’s not worth doing.”

One key use for data is to tell you to stop doing things, she says. Too often, charities are reluctant to do this. “Sometimes evidence comes up against belief systems. And sometimes the evidence doesn’t win,” she says. “If the evidence is that something is a waste of time, and people want to carry on, you have to confront that.”

And sometimes the act of measurement itself creates results. For this reason, says Maitland Hudson, one key thing to do is to share your key metrics with your service users.

“Look at consecutive days of sobriety in anti-alcoholism programmes,” she says. “Is it the best measure? Perhaps not. But it’s simple. It gives people something they can understand, and something they can work to. That’s important.”

In short, then, the feeling of many experts and professionals when it comes to measurement is a surprisingly basic one: look at other people, start simple, be consistent, check what everyone else is doing, have a clear idea of what you want to achieve, change things that aren’t working, and rely on what works for you.

David Ainsworth is group online editor at Civil Society Media


What is impact?

Broadly, the work of the charity sector can be measured at four levels.

Inputs: Things you do, like holding a training course or publishing advice on a website.

Outputs: Things that happen as a result of your actions: people attend your course or read your advice.

Outcomes: People get jobs, or feel happier, or know more about their health, in part because of the support you provided.

Impact: This has two slightly different definitions, depending on who you talk to. Either it is your share of the outcomes – the proportion of the things which happened because of you. Or it is the benefit of the outcomes to the individuals and society – the value of the knowledge or the work, in terms of overall wellbeing.
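One way to see the distinction between the four levels is to lay them out for a single, invented programme. The sketch below, in Python, uses a hypothetical employment-training charity; every figure in it is made up for illustration.

```python
# A hypothetical employment-training programme, expressed at the four levels above.
programme = {
    "inputs": "Run a 12-week job-skills course and publish a CV guide online",
    "outputs": {"course_attendees": 120, "guide_downloads": 3400},
    "outcomes": {"attendees_in_work_after_six_months": 54},
    "impact": {
        # First definition: your share of the outcome – jobs that would not
        # have been found without the course (an estimate, not an observation).
        "jobs_attributable_to_course": 20,
        # Second definition: the value of the outcome to individuals and
        # society, using an assumed value per sustained job.
        "estimated_social_value_gbp": 20 * 9_000,
    },
}

print(programme["impact"])
```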


The problems of measurement

Really accurate measurement is hard, which is why many professionals are dubious about the value of charity self-evaluations. Problems include:

Identifying proxies

You cannot measure everything that happens. If you wish to assess impact, you have to use a limited number of key metrics as proxies for your overall effectiveness. But how do you know those are the right metrics? How do you know they really represent the change you want to see?

Valuation

How do you value the impact of what you do? It is charitable, after all, for a Christian to convert a Muslim to his religion, and vice versa. But this is a zero sum game – the number of believers remains unchanged – so the impact is hard to measure. Similarly, there is likely to be little public consensus on the impact of saving a donkey in the Sudan, or preserving the bones of a prehistoric fish, or campaigning for more rights for minorities. In these cases, quantifying impact is very hard, and a more narrative approach is needed.

Comparability

Then there is the problem of comparability. If your charity is serious about measuring its impact, it isn’t enough merely to report it. Your report must be comparable at least to those of other charities with similar funding models, delivering similar work. Otherwise, there is no way of assessing whether your intervention is more effective than anything else.

Overclaiming

Even if you’ve identified the change you want to see, and quantified its value, how do you know you caused it? Most charities deliver lots of interventions, alongside lots of other charities. So it is extremely hard to pick out the difference that one organisation is making. You cannot simply look at what would happen if your organisation was not there, because it forms part of an interlocking whole with other entities, and all are necessary together for success. The value that would be lost if you didn’t play your part in the system is a shared achievement, not something you solely created.

External factors

Is it your intervention making the difference, or are other factors responsible? Perhaps the economy is improving. Perhaps government policy has changed. Perhaps social pressures have shifted. What would have happened if you had not done anything? This means it is not enough to measure what’s happening to your beneficiaries. You need a control group who are similar to your beneficiaries in every way, except that they are not receiving your help.
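A worked example may help here. The sketch below, in Python, uses entirely invented employment rates to show why the comparison with a control group matters: without it, the charity would claim the whole improvement, including the part driven by external factors.

```python
from statistics import mean

# Hypothetical employment rates (%) before and after a programme, for the
# beneficiary group and for a similar control group receiving no help.
beneficiaries_before, beneficiaries_after = [28, 31, 30, 29], [44, 47, 43, 46]
control_before, control_after = [29, 30, 28, 31], [36, 35, 38, 34]

change_beneficiaries = mean(beneficiaries_after) - mean(beneficiaries_before)
change_control = mean(control_after) - mean(control_before)

# The control group captures what would have happened anyway – an improving
# economy, a policy change – so only the difference is credited to the charity.
print(f"Improvement among beneficiaries: {change_beneficiaries:.1f} points")
print(f"Improvement among the control group: {change_control:.1f} points")
print(f"Improvement attributable to the intervention: "
      f"{change_beneficiaries - change_control:.1f} points")
```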

Displacement

What if you aren’t solving the problem, but shifting it? If you want to keep Britain tidy, why not picnic in France? How can you be sure, even if your beneficiaries show improvements and your control group doesn’t, that you have not simply displaced the social problem to another area or another time? What if your crime prevention strategy in one part of the town is driving up crime down the road? What if your patients lose weight during your trial, and put it back on straight afterwards?

Distortion

You can’t measure something without moving it. So how do you know that what you’re measuring would have happened if you hadn’t been watching? Would your staff still have behaved in the same way? Would your beneficiaries? And how are you compensating for your own desire to see the programme succeed? How can you be sure you are not being selective about the evidence as a result?


Theory of change: A ten-step guide to good measurement practices

  1. Assess the external evidence to understand what an effective intervention looks like.
  2. Build a framework to assess what is happening already – the interventions, and the measures applied to them.
  3. Develop an idea of the results you would like to see – and the geographic area, the beneficiary group, and the timeframe you would like to see it in.
  4. Identify the interventions and approaches most likely to deliver those improvements.
  5. Develop a group of metrics which would indicate success, and a measurement framework that allows you to collect them. Include internal, staff-focused metrics. Build them into the delivery process.
  6. Deliver services and collect data.
  7. Assess the data to see whether you achieved the expected results. If not, identify why, and whether in your view you are still delivering useful results.
  8. Identify improvements and changes that might produce better results, including shutting down anything which is ineffective.
  9. Report honestly on those results, and benchmark against your peers.
  10. Return to the start of the process, and do it again.
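For organisations that run their measurement framework in a spreadsheet or a simple script, steps five to ten of the cycle might reduce to something like the following minimal Python sketch; the interventions, metrics and targets are invented for illustration, and the external evidence and staff buy-in of steps one to four still have to come first.

```python
# Hypothetical interventions, each with one key metric and a target (step 5).
interventions = {
    "breakfast_club": {"metric": "school_attendance_pct", "target": 92.0},
    "homework_club": {"metric": "school_attendance_pct", "target": 92.0},
}

def collect_latest_figure(name):
    """Stand-in for step 6: pull the latest figure from your records."""
    observed = {"breakfast_club": 94.5, "homework_club": 83.0}
    return observed[name]

for name, plan in interventions.items():
    result = collect_latest_figure(name)          # step 6: deliver and collect data
    if result >= plan["target"]:                  # step 7: assess against expectations
        decision = "continue, and report the result honestly"      # step 9
    else:
        decision = "change the approach, or shut the work down"    # step 8
    print(f"{name}: {plan['metric']} = {result} (target {plan['target']}) -> {decision}")
# Step 10: repeat the cycle at the next reporting period.
```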



Impact and happiness at work

It appears that just talking about measuring impact can be enough to make charitable staff miserable. That is the very early conclusion of research conducted by Julia Morley, a lecturer in accounting at the London School of Economics.

Morley’s findings are published in a draft paper entitled Social impact reporting as reputation management: Effective practice, symbolic adoption or businesswashing? They are based on studies of 128 websites of charities and other social purpose organisations, and 21 interviews with staff. Her first finding – backed up by comments elsewhere – is that charities are “business-washing”.

She says there is little evidence that impact reports commissioned by charities “actually lead to changes in structure and resource allocation internally”, adding: “They are using impact reporting as a marketing tool to legitimise themselves. It is purely symbolic.”

Another potential finding – at this stage Morley says it is only a “theoretical contribution” – is that the process of reporting on impact is creating a disconnect between how staff view their work, and how it is reported.

“Some people engaged with the description of charitable work using this language. But others found it demotivating.”

How people describe things is very important, she continues. “If the kind of language used by an organisation doesn’t fit with how people see themselves, those people can become disconnected from the organisation.”

She says that social purpose organisations are already susceptible to “means-end decoupling”, where they forget about the end goal – a better life for the beneficiary – and become focused on the means, sometimes leading to mission drift and a loss of impact. She believes that social impact reporting, with its particular terminology, could perhaps lead to “language means-end decoupling”, where the language used to describe the end separates it, in the minds of those delivering the work, from the work itself, making it harder for them to focus on the end goal: helping people.