We are hosting discussions on each of our ten ‘burning issues’. We’ll explore some of the big questions and hear different perspectives from across civil society.
Working with New Philanthropy Capital (NPC), we hosted an event to explore how to more meaningfully measure and evaluate social change. In this summary, I’ve focused on four questions – how, who, why and what next.
How do we measure and evaluate social change?
Social change is complex and often unpredictable. In contrast, many models of evaluation are linear or are used in a very fixed way. How do we articulate and capture complexity when trying to evaluate our work?
NPC started with a summary of their current thinking on the value of a systems-led approach, including the pitfalls of applying a linear version of Theory of Change. As one participant pointed out, conceptual models are useful only if you recognise that they are an approximation of reality, and switch easily between the two – ‘It’s not a map of the system, it’s a map of the system in your head’.
The systems-led approach more closely reflects the way that campaigners tend to think instinctively. Working to bring about social change requires an understanding of the ecosystem of different actors, focussing on what you can do and persuading others to play their part.
Some participants felt that there was too much pressure to demonstrate attribution, which can be difficult when it comes to campaigning. Maybe sometimes there is a benefit in keeping things simple – as one participant put it, just asking ‘if we weren’t here, what wouldn’t have happened?’
Who decides what we measure and evaluate?
Another issue we discussed was who defines ‘success’ and how this affects the way we measure it. Often funders determine what should be reported, but there can be a disconnect between the outcomes that a funder is looking for and the reality of what is achievable, useful or measurable by those doing the work.
Measurement and evaluation tend to put the organisation at the centre, rather than the beneficiaries or service-users – but ultimately, they should decide what success looks like. Are we doing enough to enable people to self-advocate, and reflect their lived experience? This doesn’t mean just being able to tick a box or produce a case study in a report; it’s about meaningful opportunities for people to review and express what is or isn’t working for them.
Why do we measure and evaluate change?
Often, it’s because we’re required to. As one individual pointed out, it can feel difficult to justify spending any of our limited budget on measurement and evaluation rather than frontline work. This is understandable, but good evaluation should, in fact, benefit our cause.
Essentially, it’s about reflecting on our work and finding ways to do it better. This informal reflection and review is what many grassroots change-makers do instinctively – asking what is working, what isn’t working, and how we can learn from that and improve. So why does it get more complicated than that?
Under organisational and funding pressures, it can be difficult to say that something didn’t work, even if you want to learn from it and make improvements. Campaigners may also be pushed towards doing what is short-term and easy to measure, because this can be more easily evaluated and reported. But this can stop us from pursuing the long-term, more difficult social change, where we might not see the impact straight away, but which is ultimately more rewarding.
Participants discussed the benefits of:
Long-term approaches – there was a broad call for patience, understanding and trust from funders and senior managers that change doesn’t happen overnight, or sometimes over a year, even if we can report short-term goals along the way.
Learning from failure – wider recognition that it’s good to fail and then learn from it is needed, perhaps following the example of Engineers without Borders Canada, which publishes an annual ‘failure report’.
Evaluating together – collective, collaborative or thematic evaluation, pooling the measurement and evaluation budgets of a number of organisations working on a similar issue to conduct a sector-wide evaluation. This could remove some of the organisational pressures that prevent meaningful review, and ensure limited resources are used effectively.
Self-reflection from funders – more meaningful consultation from funders, actively seeking honest feedback from grantees about the usefulness of evaluation methods.
Self-reflection from grantees – doing more to advocate for the ways in which we want to be held to account or evaluated. Many agreed that we need to work with funders, organisations and campaigners to establish realistic expectations, to be trusted to try new approaches, learn from what does or doesn’t work and implement improvements. A good example is the Paul Hamlyn Foundation’s ‘Explore and Test’ and ‘More and Better’ grants, which allow organisations to test out approaches, find what works, and build on it.
If you have any thoughts you would like to share on this ‘burning issue’, or any of the questions raised, please get in touch (firstname.lastname@example.org). Stay up to date on the latest social change thinking, and take part in future debates, by signing up to our newsletter and following us on social media.