Measuring Impact

How do we know when something is actually working?


Real impact is the difference between what happened with you and what would have happened without you.

— Kevin Starr & Laura Hattendorf


It’s great to start something new with good intentions. But eventually we need to ask the question: is it actually working?

It might sound trivial on the surface. But when we try to measure and report on our impact, it inevitably ends up being much more difficult in practice. In some cases it’s very hard to separate the signal from the noise. In other cases it’s just too soon to tell.

So, how can we really know?


Cutting to the core

There are four different aspects we can try to measure:

  1. Inputs: the effort and resources we put in;
  2. Outputs: the activity and deliverables that effort directly produces;
  3. Outcomes: the changes that activity creates for the people we’re trying to help; and
  4. Sustained outcomes: outcomes that endure, or changes to the underlying system.

Once we understand these layers, the temptation is always to reach for the latter options: don’t just measure inputs, measure outputs; don’t just measure outputs, measure outcomes; don’t just measure outcomes, measure sustained outcomes or system changes; and so on.

However, there are two big problems with doing this:

First, when we try to measure outcomes it can be hard to isolate the effect of the specific things we do, because there are so many other factors that can influence those results over time. It requires us to show both that something happened, and that it happened because of the things we did. Even if we can point to those outcomes happening, that is still an incomplete answer: at best it shows a correlation. We need to be able to link the inputs with the outcomes, and attribute success. Otherwise we risk confusing activity for progress. We can measure the effort we put in, but we also need to be honest about how much of that is actually moving us closer to our desired outcome.

Second, and possibly even more importantly, we have to wait much longer before we get any answer. Sometimes it can be years or decades before outcomes are obvious, or even visible. In the meantime there is nothing to give anybody confidence that things are on track and that we’re building momentum. Without the feedback loop created by scrutiny and consequences, we are unlikely to meet our expectations.1

For something that is well established and has been done previously, it’s absolutely correct to ask about the outcomes rather than just measuring inputs or activity. But when we think about measuring something brand new and (as yet) unproven, that doesn’t work. There is too much uncertainty.


Throwing vs. Catching

When we learn to juggle it’s easy to assume that the important skill is catching.

But actually the key is throwing.2

This isn’t intuitive, but it makes sense when we think about it: if we can learn to accurately launch the balls (or flaming torches, bowling pins or chickens) we’re trying to juggle, so that they land where we can catch them effortlessly (without needing to lunge, and without distracting our attention), then catching takes care of itself.

We can use this same idea to help us improve how we report on our progress and measure our impact, even when we’re dealing with significant uncertainty.

We just need to clearly identify the important activity that we believe will lead to the outcomes we want over time, if we do a good job. Then we expand on exactly what we mean by “do a good job” (i.e. our equivalent of “throwing so that the ball lands where we can catch it easily”). That gives us something specific to start measuring immediately.

Taking this approach:

  • forces us to articulate up-front what we think is hard and what “good” means;
  • creates a much shorter and more easily measured feedback loop, so we can track our progress and improvement on a scale over time (which is much better than a binary success/fail measure);
  • eliminates more external variables; and
  • explicitly acknowledges the leap of faith we’re taking.

Of course, there is always a risk that we choose the wrong thing. But at least this way we have something concrete to be wrong about. And, as long as we acknowledge that when it happens, we can learn from being wrong and try to be less wrong next time.

Remember: we use evidence to identify problems, but we need experiments to solve them.


Intentions → Impact

These are the four simple questions that we need to answer in advance about any proposed solution:3

  1. Who does this help?
  2. What constraint do they have?
  3. How do we hope to reduce or remove that constraint for them?
  4. How will we show that it’s working?

The first three questions insist that we’re much more specific about the who, what and how.

Rather than starting with the solution and then trying to prove that it helps somebody, we should start with the specific customer we have in mind and the problems they actually have, then work backwards from there to the answer.

The fourth question gets to the specific thing(s) we can measure, what evidence we expect to have, and when we will know. It’s important that we define all of these things in advance. Otherwise it’s too easy to just shoot an arrow and then draw the bullseye around whatever it hits.

These measures also need to include some early indicators and milestones. It’s not enough to say that we will only be able to tell right at the end. When something is working we can generally measure something right away that demonstrates momentum.
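
To make that concrete, here is a minimal sketch of what writing these answers down in advance might look like. Everything in it (the names, the example, the milestones) is invented purely for illustration; the point is only that the bullseye exists before the arrow is fired:

    from dataclasses import dataclass

    # A hypothetical template for answering the four questions in advance.
    # Nothing here is prescriptive; it simply forces the answers to be
    # written down before the work starts.
    @dataclass
    class ProposedSolution:
        who: str              # 1. Who does this help?
        constraint: str       # 2. What constraint do they have?
        intervention: str     # 3. How do we hope to reduce or remove it?
        measures: list[str]   # 4. How will we show that it's working?

    # An invented example, including early indicators and milestones
    # alongside the eventual outcome we actually care about.
    example = ProposedSolution(
        who="first-time founders outside the main centres",
        constraint="no easy access to experienced investors",
        intervention="monthly office hours with local angel investors",
        measures=[
            "sessions booked per month (early indicator, from month 1)",
            "follow-up meetings arranged (milestone, by month 6)",
            "investments closed (outcome, by year 2)",
        ],
    )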

And, finally, we should also define a control group which can be used as a comparison, so that we can attribute any positive impact we measure to the things we’ve done.
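
The control group is what makes the epigraph’s definition computable: impact is what happened with us minus what would have happened without us. A toy example (with completely made-up numbers) shows the arithmetic:

    # Toy illustration of attributing impact using a control group.
    # The numbers are made up; a real comparison would also have to
    # account for sample size, selection effects and noise.

    helped  = [62, 71, 68, 75, 66]  # outcome measure for the group we helped
    control = [60, 63, 59, 65, 61]  # same measure for a comparable group we didn't

    def mean(values):
        return sum(values) / len(values)

    # "What happened with you" minus "what would have happened without you".
    estimated_impact = mean(helped) - mean(control)

    print(f"with us: {mean(helped):.1f}, without us: {mean(control):.1f}")
    print(f"estimated impact: {estimated_impact:+.1f}")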

If we already know it’s going to work it’s not an experiment. Doing all of these things forces us to be much more honest about the things we don’t know, which are also the things we might be wrong about.


When somebody asks “is it actually working?” we shouldn’t start by describing the things we’ve done, or the results so far. Instead, we should start by describing the people we’re trying to help and work backwards from there to show how the things we’re doing are actually helping them.

If we get this right, we can move beyond working on things that just feel good to things that are really good.


  1. See Ceri Evans’ definitions from Perform Under Pressure:

    • Expectations = what standard have we set ourselves?
    • Scrutiny = how are we going to know if we have achieved those standards?
    • Consequences = what happens if we do/don’t achieve those standards?
  2. Credit to Seth Godin for this metaphor.

  3. All of these questions are heavily inspired by the ideas generously shared by Laura Hattendorf & Kevin Starr at Mulago Foundation.

    They have spent more time than most thinking deeply about impact investment and how to identify and invest in the highest impact giving opportunities.

    If you’re interested in this, I recommend you start with the “What we look for?” section of Mulago’s “How we fund?” website, which identifies three things:

    1. A priority problem;
    2. A scalable solution; and
    3. An organisation that can deliver.

    Kevin’s PopTech talk from 2011, which explains all of these ideas in the context of their philanthropic work, is one I’ve referenced many times over the years, and I recommend it to anybody who is working on a startup derivative now.

