What impact are we having on the world?
What is the cumulative effect of the work we do? What are the outputs or outcomes from the actions we take? When we bump into something, how does it move?
It seems like such a simple question, but it’s actually complicated and difficult (sometimes impossible) to answer.
Actually, there are only three ways to be wrong about our impact: neglect, error and malice.
The first (and in my experience most common) way to be wrong is to simply avoid ever asking the question.
We do some work and observe some changes happening, but how do we know that the things we do are the cause of the changes that we see? If we don’t take time to try and answer this then we won’t be able to say with any conviction that there is a connection.
Behaving this way is the grown-up equivalent of the toddler hiding behind their hands in a game of hide-and-seek.
The world is never static, so it can take a lot of effort to isolate the impact of a specific action.
Let’s say we want to know if a new activity works (it could be anything from a new cancer drug to a new marketing campaign for your startup)…
The gold standard in terms of scientific rigour is a “double-blind” study: that is, randomly split the population we’re trying to change into two groups - the test group, who receive the intervention, and the “control” group, who don’t. Importantly, neither the test subjects nor those running the test know which group any individual is in until after the test is completed and the results are collected.
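The random split at the heart of such a trial can be sketched in a few lines. This is a minimal illustration, not a clinical-grade protocol - the function name and the fifty/fifty split are assumptions for the example:

```python
import random

def assign_groups(subject_ids, seed=42):
    """Randomly split subjects into a test group (who receive the
    intervention) and a control group (who don't).

    Hypothetical helper for illustration only. A fixed seed makes the
    assignment reproducible; in a real blinded trial the mapping would
    be held by a third party until the results are in.
    """
    rng = random.Random(seed)
    shuffled = list(subject_ids)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"test": shuffled[:half], "control": shuffled[half:]}

# Example: split 100 subjects into two groups of 50.
groups = assign_groups(range(100))
```

The essential property is that membership is decided by chance alone, so any pre-existing differences between people are spread evenly across both groups.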
If we can show that those who got the intervention have a different set of outcomes from those who didn’t, then we’ve proven a connection between the action we took and the impact it had. And if not, then we’ve still potentially learned something useful.
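One simple way to check whether the difference between the two groups is real, rather than luck, is a permutation test: repeatedly shuffle everyone into two random groups and see how often chance alone produces a gap as big as the one we observed. A rough sketch (the function name and defaults are my own):

```python
import random

def permutation_test(test_outcomes, control_outcomes, n_perm=10_000, seed=0):
    """Estimate how often a difference in group means at least as large
    as the observed one would appear by chance alone.

    Returns a fraction between 0 and 1; small values suggest the
    intervention, not luck, explains the gap.
    """
    rng = random.Random(seed)
    n = len(test_outcomes)
    observed = (sum(test_outcomes) / n
                - sum(control_outcomes) / len(control_outcomes))
    pooled = list(test_outcomes) + list(control_outcomes)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # re-deal everyone into two random groups
        diff = sum(pooled[:n]) / n - sum(pooled[n:]) / (len(pooled) - n)
        if abs(diff) >= abs(observed):
            extreme += 1
    return extreme / n_perm
```

If the intervention did nothing, the shuffled groups look just like the real ones and the returned fraction is large; if it did something, gaps that big almost never appear by shuffling.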
It’s not always possible to be so clean in our testing. But even in those cases there are still lots of ways we can observe results and collect data to try to establish this connection, if we’re so inclined. We can do a “single-blind” test - where the subjects don’t know which group they are in, but we do. Or even just a survey - though it’s amazing how much we can learn by comparing what people actually do with what they say they would do.
One common, but often overlooked, form of neglect is groupthink - that is, when there are enough people involved in an activity that everybody assumes somebody else is doing the work to test the connection, but actually nobody has.
So, we can easily be wrong just by neglecting to try and find out what our impact actually is.
The second way to be wrong is to make a mistake.
Really understanding our impact to a high standard is often difficult, expensive and time consuming. So there are almost infinite opportunities to introduce error, as we attempt to make it easier, cheaper or faster.
Any measurement is inherently inaccurate.1
In scientific testing, researchers typically make the distinction between accuracy (the difference between the measured value and the actual value) and precision (how similar multiple measurements of the same thing are to each other).
(Sometimes also called “validity” and “reliability”)
If we imagine throwing a bunch of darts at a dart board trying to hit the bullseye the accuracy of our throwing is measured by how close to the bullseye our darts are, while the precision is measured by how close to each other our darts are. If, for example, we throw five darts and they all hit double nineteen when we were aiming for the bullseye, then we could say our throwing is precise but inaccurate.
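The darts analogy translates directly into two numbers we can compute from any set of repeated measurements. In this sketch (my own naming), accuracy error is how far the average landing point sits from the target, and precision error is how scattered the throws are around their own average:

```python
import math

def accuracy_and_precision(throws, target):
    """Given (x, y) landing points and an (x, y) target, return:
    - accuracy error: distance from the mean landing point to the target
    - precision error: average distance of each throw from that mean
    """
    n = len(throws)
    mean = (sum(x for x, _ in throws) / n,
            sum(y for _, y in throws) / n)
    accuracy_error = math.dist(mean, target)
    precision_error = sum(math.dist(t, mean) for t in throws) / n
    return accuracy_error, precision_error
```

Darts tightly clustered on double nineteen while aiming at the bullseye would give a small precision error but a large accuracy error - precise, but inaccurate.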
Precise, but inaccurate (left) vs. Accurate, but imprecise (right)
There are lots of different types of errors that we can make when we’re trying to test for impact.
So, even when we are testing for impact, it’s still possible to be wrong when errors are introduced.
Last, but not least, the third way to be wrong about the impact we’re having is just to be dishonest.
Often the underlying (and unspoken) motive for doing this comes down to self-interest. Perhaps being honest would have a negative impact on our career or reputation; perhaps it would be politically expensive in our organisation or community; or perhaps too much has already been invested on the assumption that an action will have an impact for us to comfortably admit otherwise.
Whatever the reason, the con just requires us to know that there is no link between the things we do and the results we see. At that point we can either say nothing, and hope that nobody else notices, or (worse!) we can lie outright and pretend that the opposite is true.
The good news is those are the only three ways we can be wrong, and there are relatively easy things we can do to avoid all of them.
First, always assume that what we’re doing isn’t working, and then challenge ourselves to prove otherwise.
Secondly, try to switch from thinking in absolutes to thinking about how confident we are in our assumptions. Rather than saying “I know…” say “I’m x% sure that…”, and base the x on the actual measurements we’ve done.
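Putting a number on that confidence doesn’t have to be guesswork. For a measured rate (say, 30 conversions out of 100 trials), a standard confidence interval turns “I know it works” into “I’m 95% sure the true rate is between a and b”. A minimal sketch using the normal approximation - the function name is mine:

```python
import math

def confidence_interval(successes, trials, z=1.96):
    """95% confidence interval for an observed rate, using the normal
    approximation (z = 1.96 for 95%). Reasonable when trials is large
    and the rate isn't extreme; clamped to the valid range [0, 1].
    """
    p = successes / trials
    half_width = z * math.sqrt(p * (1 - p) / trials)
    return max(0.0, p - half_width), min(1.0, p + half_width)
```

The interval narrows as we collect more data, which is exactly the point: our stated confidence should track how much measuring we’ve actually done.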
This forces us to think about what corners we might be cutting in our testing and what impact that might have on the results. Listening to what others are saying helps us to identify any bias we may have. And it helps to create feedback loops in advance - especially for when the results don’t go the way we’d hoped. Always debrief and try to understand why, so that our results improve over time.
Lastly, be honest about the results. This is either very easy or very hard, depending on our character. It helps to set up external checks in advance, so that others who don’t have such a vested interest in the specific outcome can be the final judge. And work hard not to take it personally if and when the testing shows that the actions we took didn’t generate the result we were hoping for. The more important question is: what did we learn, and how can we apply those lessons to what we do next?
Remember, all we need to do to be right is to avoid being wrong.
Experimental Errors and Uncertainty, G. A. Carlson ↩︎