In discussing evaluation and impact, it is easy to get caught up in a numbers game. There is tremendous pressure to report on how our work is making a difference, and scale (“Billions and billions served!”) seems like the most expedient way to demonstrate why our organizations and our programs matter.
When the goal is simple volume – for example, how many people can we move through this drive-in window? – the metrics can indeed be straightforward. But if the goal is changing social behaviors – how can we get students to stay in school, break the cycle of poverty, or improve health outcomes? – the numbers can tell us a great deal, but rarely can they tell a complete story.
To tell the whole story, we often need to put the numbers we do have into context. How often have we read – or even ourselves reported – a statement like this: “We reached 400 participants in our program.” For those readers not directly involved in our program, it may be difficult to know whether “400” is a cause for celebration (if, say, our goal was 350) or for alarm (we expected 800!). While this is of course a simplistic example, it is often the case that in an effort to satisfy and impress our donors and funders, we turn to a list of numbers – or perhaps even a series of colorful charts. But our charts and numbers don’t always express the information that is most important to understanding whether something is working, and why (or why not).
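The “400 participants” example above can be made concrete with a few lines of code. This is a minimal sketch: the function name and the figures (400 reached, against a goal of 350 or 800) are hypothetical, taken from the illustration in the text, not from real program data.

```python
# A sketch of reporting a raw count alongside its goal, so readers can
# judge whether the number is cause for celebration or alarm.

def contextualize(reached: int, goal: int) -> str:
    """Return a one-line summary that puts a raw count in context."""
    pct = reached / goal * 100
    status = "goal met" if reached >= goal else "goal missed"
    return f"Reached {reached} of {goal} planned ({pct:.0f}% of goal; {status})."

print(contextualize(400, 350))  # the same 400 reads as a success here...
print(contextualize(400, 800))  # ...and as a shortfall here
```

The point is not the code itself but the reporting habit: the raw count never appears without the benchmark that gives it meaning.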
Here are some common pitfalls of our over-reliance on quick data points:
We have an ambitious, long-term vision, but no clear intermediate goals.
We all know how to set a bold vision. So often, we want no less than to transform people’s lives: to improve health outcomes through better access to healthy foods, to lift families out of poverty through high-quality education. It is these noble goals that inspire us. But the paths to those outcomes are long and complicated, and very often cannot be traveled simply by increasing the scale of resources we put toward them. We have to understand – and articulate – what we believe the necessary steps along that path to be, and what the signposts toward our goal will look like.
It is also important to take into account what we have a reasonable degree of control over. Is demonstrating your impact reliant on data to which you have no reliable access? If so, what information can you track? How can you break down your theory of change to show you are affecting the behaviors or factors that contribute to the long-term goal?
We have data that shows steady increases (or decreases, if that’s the goal) over time, but no standards against which to evaluate these trends.
We are often encouraged by a graph that shows growth trends – it is satisfying to be able to say, “Look, we are up 25% from last year,” or “We have exceeded our goal.” But this report on performance can be of limited value if the goal we have set is not meaningful. For example, if we have increased attendance at our annual event every year, but each year’s goal is based on the prior year’s attendance, is that a meaningful measure? To pose the question another way, will a dip in attendance mean the event was not successful?
Or, if for example you are hosting the event with the expectation of growing your membership, it is not necessarily the total number of attendees that you want to see increasing, but the percentage of attendees who become members.
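The attendance-versus-membership distinction above can be sketched in a few lines. All of the numbers here are hypothetical, chosen only to show how a growing headline number can hide a falling conversion rate.

```python
# A sketch: if the event exists to grow membership, the metric to watch
# is the conversion rate (new members / attendees), not total attendance.

def conversion_rate(new_members: int, attendees: int) -> float:
    """Share of attendees who became members."""
    return new_members / attendees if attendees else 0.0

# Year 1: smaller event, stronger conversion.
year1 = conversion_rate(new_members=30, attendees=200)
# Year 2: attendance grew 50%, but conversion fell.
year2 = conversion_rate(new_members=33, attendees=300)

print(f"Year 1: {year1:.0%} of attendees joined")  # 15%
print(f"Year 2: {year2:.0%} of attendees joined")  # 11%
```

By the attendance metric alone, year two looks like a success; by the metric tied to the actual goal, it is a step backward.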
The information readily available to us and that we can easily track is necessary, but not sufficient to demonstrate progress toward our goals.
Most organizations maintain data on a number of things that are – relatively speaking – easy to track. Time sheets are a good example: they are easy to collect, you know what the numbers mean, and you even know what the goal should be (full-time employees work 40 hours, or staff are expected onsite five days per week). However, most managers would agree that attendance alone does not equate to productivity or quality of work. In fact, there are frequently more complicated processes in place to evaluate staff performance, often based on a set of criteria that are more difficult to quantify – including attitudes, behaviors, and aspirations.
In considering the data that you have and that you report, it may be helpful to think about the difference between metrics that demonstrate basic accountability and those that more thoroughly demonstrate what is of value to you – at the level of program or organization.
The comments I’ve made here are by no means intended to diminish the importance of setting clear, measurable goals and using reliable data to inform decision-making. Rather, they are a call to look closely at the numbers we are using and reporting, and to ask the difficult questions about whether they truly demonstrate what is meaningful to our work.
As always, I welcome your feedback and your questions. Would you like to discuss a current challenge or offer your program as a case study? Email me at: email@example.com