It’s not what the software does, it’s what the user does that matters.
There are only two industries that refer to their customers as users: high tech and illegal drugs. Is this the company we want to keep?
Anybody building software usually starts with the same goal, more or less. We all want to have lots of happy users.
And yet, most software is terrible - it’s complicated; it’s unreliable; it’s bloated with options and settings, which often feel like a long list of secret handshakes that we’ll never learn.
It’s actually even worse than that: because those of us who build software are so rarely in the room with the people using what we build, these frustrations are mostly invisible and overlooked, if not completely ignored.
We need to aim higher. The challenge isn’t to make it possible for users to be successful. The goal is to make this the default. Or, at the very least, far and away the majority outcome. Somebody seeing it for the first time should be able to just follow their nose, make the obvious choices, and end up in the right place.
Having users invariably makes us wrong. Then, hopefully, it makes us humble…
There is a pattern in software development that I’ve seen repeated so many times now that I think it’s worth codifying:
Imagine we are observing the first usability test on some software we have built.1
The first user to try it out completely misses the seemingly obvious cues in the user interface. The button they need to tap might well be big and red and flashing with a marching ants border, but they just don’t see it.2
“Dumb user” everybody thinks.
The second user also quickly ends up hopelessly lost.
“Two stupid users in a row … what are the odds?”
The third user. Same story.
At this point, we’re all hopefully slapping our foreheads and thinking “how could we do this better?”
The key is getting to the third user. Until then we haven’t really learnt anything.
This exact same pattern applies to bug reporting, once the software we’ve built is out in the wild - the first alert is probably random noise, the second is annoying, the third is a sign that there are actually two problems: something is wrong with the software and something is wrong with the person who ignored the first two warnings!
Of course, that assumes that we’re tracking errors or bugs in the first place. It is surprising how often that isn’t the case.
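If we are tracking errors at all, the “rule of three” above can be as simple as counting repeated error signatures and escalating once the same one turns up a third time. A minimal sketch (the `ErrorTracker` class and its threshold are illustrative, not from any particular library):

```python
from collections import Counter

class ErrorTracker:
    """Count repeated error signatures; escalate on the third occurrence."""

    def __init__(self, threshold=3):
        self.threshold = threshold
        self.counts = Counter()

    def record(self, signature):
        """Record one occurrence; return True once it crosses the threshold."""
        self.counts[signature] += 1
        return self.counts[signature] >= self.threshold

tracker = ErrorTracker()
tracker.record("crash in checkout")  # first: probably random noise
tracker.record("crash in checkout")  # second: annoying
if tracker.record("crash in checkout"):  # third: time to investigate
    print("escalate: crash in checkout")
```

The point isn’t the mechanism, which any error-tracking service provides out of the box; it’s that without some counting at all, the first two warnings are silently thrown away.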
Rather than treating testing as validation, we need to have the mindset that we’re likely wrong and that users can teach us how to be less wrong.
People are complex. It’s easy to say we just need to imagine being in their shoes, or watch them while they use our software, or be more mindful of errors that are reported. But actually we need to get in their heads and try to understand their context. We need to be aware of the other tools they already use and are more familiar with, because those tools shape how they expect our new thing to work.
That’s really hard work.
It’s too easily forgotten, as we get carried away by the buzz of building something new, and get sucked into the challenges and complications involved in the engineering and construction.
Our goal should be to build a “pit of success”3 - a shallow hole that people will just immediately fall into without even having to think about it (ideally without even realising it) and which leads them in the correct direction. We need to make this the path of least resistance, like a bobsled track - once we get people moving they should gather momentum and slide on down!
(If we see users repeatedly taking a different route then we should also pay attention to that, rather than putting up fences to stop them going that way - sometimes desire paths reveal a different and better product).
If we don’t have happy users, it’s not their fault. It’s our fault. We didn’t make it easy enough. We can do better.
It’s not what the software does, it’s what the user does that matters.
Perhaps, as a start, it would help to more often think of our users as real people…
1. Over the years the book I’ve most often recommended to anybody building a software product is Don’t Make Me Think by Steve Krug. It’s pretty dated now (it was originally published way back in 2000, although there is a revised edition from 2013), so most of the examples are throwbacks to a previous era of web design, but it is still worth reading just for the simple explanation of usability testing.
2. Anybody who has worked with me on UI design over the years will be familiar with my sarcastic request for a marching ants border, as a cynical reminder that just making things more obvious to us isn’t always the solution.
3. I first discovered the “pit of success” via Jeff Atwood: Falling into the pit of success. But I believe it was first described by Rico Mariani at Microsoft Research, quoted here by Brad Abrams: “In stark contrast to a summit, a peak, or a journey across a desert to find victory through many trials and surprises, we want our customers to simply fall into winning practices by using our [software]. To the extent that we make it easy to get into trouble we fail.”