A stitch in time saves nine — how a couple of hours of design research can save you from failure

Milly Schmidt
Insights & Observations
12 min read · May 20, 2021


Mistakes are unavoidable. This post is about helping you accept that we all make mistakes — and instead of trying to be perfect, we recommend you use research to make it safe for you and your team to make mistakes. In fact, a mindset that you’ve produced perfect work that doesn’t need testing deprives you of opportunities to learn through research.

Many problems can be fixed quickly if you catch them early enough — but the longer you leave them, the bigger they grow. Just as a dropped stitch becomes a gaping hole that ruins a sweater, a rogue assumption can ruin your project — and the earlier the assumption is introduced, the worse its effects can be.

Let’s start with a story

Let’s say you work as a marketing manager in a department store. One of your quarterly goals is to develop an outdoor campaign for new kitchen appliances. You quickly write a brief, contact the appliance manufacturer for some images and send the brief away to your agency to book in media. The creative comes back with a few options to choose from — you immediately circulate these with your internal stakeholders for feedback (perhaps your line manager, your brand representative, sales, customer service etc).

You’ve got your favorite design: one with a unique image and a cute pun. After the feedback rounds, one of your colleagues looks over your shoulder and says something that surprises you a bit: the creative looks a little like a person. You shrug it off — it’s just one opinion, and nobody else has mentioned it.

Your agency finalizes the creative and makes it look polished and beautiful. The design is really eye-catching, and you make sure your brand colors and fonts are on point. One of the designers makes an off-hand comment about not being sure what’s being advertised, but it’s not really their job to criticize the concept, so you approve it for publication — your sales team needs the campaign out immediately to start hitting targets, and the media team needs assets from the agency right now so as not to miss out on the premium spot by the highway.

The next day, you drive to work and catch a glimpse of a billboard by the side of the road carrying your freshly printed ad. As you drive by, you feel your stomach fall through the floor.

You’ve accidentally printed a billboard that evokes the face of the most hated person in all of human history.

Why do these things happen?

Why do we see funny listicles published with “top 10 marketing fails”?

How do oversights like this make it to production?

Why do projects sometimes go to primetime with “obvious” mistakes baked in?

Why don’t we catch them earlier?

The first step towards testing your assumptions is recognizing them

There are a few reasons why this happens, chief among them that teams get lost in assumptions. In the story I told, two different colleagues pointed out problems with the design, but the marketing manager ignored their feedback and never thought to test it at all.

There are lots of reasons why we fail to test and thus fail to discover these mistakes: running out of time, deferring to the highest paid person’s opinion, or simply not knowing how you’d go about validating or invalidating a hypothesis like that. But that’s why we’re here to help.

As a designer I know this one well — but we’re all making assumptions all the time; it’s just how we operate as humans. Some of the big ones that drove this hypothetical campaign to its conclusion might have been…

  • that the concept “bells and whistles” and the pun would be recognizable to the target market,
  • that the billboard is a great, highly visible placement,
  • that the kettle is a familiar, easily recognizable object that won’t cause any problems, and
  • that the brand is easily recognizable — that it’s clear who the advertiser is and what they’re advertising.

Each of those statements is an assumption, and once you get to the stage where you can articulate them, you can also start thinking about how risky they are and whether you need to test them.

Cognitive biases and sneaky thinking errors

Cognitive biases and thinking errors can trick us into thinking we know what’s what, when really we should be more critical of ourselves. They are quite tricky to navigate because they’re largely subconscious, happening in the background. This is where the behavioral psychology side of things comes in.

You might have had the experience of feeling like you’re in a “bubble” with your design. For me, once I’ve been working on a design for too long, I stop being able to really see it — I need to somehow see it with “fresh eyes”. When I show a design I’ve gone blind with to others, they often pick up something that I have completely missed. This high-level “blindness” can be broken down into three big biases at play (and probably many smaller ones).

The Mere Exposure Effect is a psychological phenomenon where people tend to develop a preference for things because they’re familiar with them. You tend to just like things that you have seen more of. This is why when we test multiple different designs with people we have to be aware that people will become attached to the designs they’ve already seen.

A team might also be affected by a cognitive bias called the IKEA effect, where we place disproportionate value on a product we’ve partially created. It came out of an experiment in which people who had assembled a piece of IKEA furniture were asked to value it against identical pre-assembled pieces, and they valued the ones they built themselves higher. We tend to overvalue work we’ve been a part of, and it becomes harder to view it dispassionately.

Finally, Escalation of Commitment is a human behavior pattern where an individual or group facing increasingly negative outcomes from a decision, action, or investment nevertheless continues the behavior instead of altering course — essentially a loss-avoidance behavior. For example, gamblers who keep upping their bets even though they’re losing are being deeply affected by the Escalation of Commitment thinking error.

Compounding and amplifying all of these is the pressure to deliver, which makes it even less likely that you’ll take your time and check things thoroughly. The closer you get to the pointy end of the deadline, the more interested you’ll be in delivering than in checking every possible assumption — unless you’ve already built these pauses and checkpoints into your process. One of the best ways to mitigate this is to find low-effort, low-friction tools for your testing so it doesn’t seem like such a big ask.

How do we avoid a fascist kettle? Test early and often

In the example above, the marketer could have tested many of the assumptions we listed and saved themselves a lot of time, money and reputational damage, both internally and with their customers.

This is the first and most important lesson — test early and often, much earlier than you think! Not after launch, not even a day before launch — test as early as concept stage for the absolute best results. In fact, the earlier you test, the more time you have to respond to the feedback and adjust course. It’s so common for even the most Agile teams not to leave enough time or budget to react to the feedback they’ve gathered.

Testing often also means testing at different levels of fidelity — at concept stage, when it’s mocked up, and finally when it’s in production. At each increase in fidelity, assumptions trickle into the project from different collaborators, different constraints and different decisions. The only way to catch the sneaky assumptions that continue to intrude on your project is to test regularly as the project evolves.

You also hit up against the sunk cost fallacy at the pointy end of those deadlines — another thinking error that makes us feel attached to something because of all the perceived or actual investment in it. You’ll often experience this as a fear of making changes to the creative, because the team perceives changes as rework and a waste of money. The key to mitigating the sunk cost fallacy is understanding that it’s never too late. It’s always better to learn about a mistake before it happens rather than after — especially when the risk of shipping something disastrous is bigger than the cost of rework.

Research doesn’t have to be an ordeal

Driven by an already steady increase in interest in research ops and a trend towards more distributed organisations, and then absolutely amplified by COVID-19, the world of online research tooling has exploded recently.

Researchers from all walks of life (software, UX, marketing) are leveraging tools that allow research to happen even when we can’t be in the room with our subjects. The biggest misconception about research is that it’s unavoidably time consuming: that you have to recruit hundreds of people for statistical significance and compensate them all with a lot of money, or work with a recruitment agency through a process that takes months.

The good news is that this is no longer the case: research can happen in as little as an hour or so, for a very low cost, if you use the right tools and know what you’re doing.

UsabilityHub lets you run tests at a relatively low cost, with broad reach and not a lot of time. Our panel of 340,000+ users across the world means that your results can come back in as little time as it takes to make a coffee — and there are lots of other tools out there too.

First impressions really matter

Studies indicate that individuals form an initial impression of an object within a short period of time: 3 seconds (Lindgaard et al., 2006); 4 seconds (Kaiser, 2001); 5 seconds (Perfetti, 2005); and 7 seconds (Ramsey, 2004) in human-to-human interaction.

In addition, recent studies indicate that this time span may be even briefer — as short as 50 milliseconds (Hotchkiss, 2006) — when applied to the online context. In short, first impressions count.

In a different study, researchers analyzed page visits from 205,000+ websites, each with more than 10,000 visits (in total, the aggregated data covered more than two billion individual site visit events). The results showed that the first ten seconds are critical in deciding whether a user stays or bounces.

So in 50 milliseconds, first impressions set in — that’s the limbic system, or the “lizard brain”, where gut instincts kick in. You then have around 10 critical seconds to grab someone’s attention: in that window, users look more carefully, engaging the prefrontal cortex, which is responsible for more complex reasoning. To gain several minutes of user attention, you must clearly communicate your value proposition within those 10 seconds.

Another framing is Daniel Kahneman’s fast and slow thinking: your fast, instinctive thinking and your slow, considered thinking both need to be catered for when you’re designing. All this data about attention might be a little confronting — but the good news is that there are some amazing tools out there that help you test exactly this.

Let’s test it!

I’ll share an extremely simple test that took only about an hour to create. First, I’m going to ask a user to view the image for five seconds and then answer some simple questions:

  • What is this ad about?
  • Did you notice anything unusual about the image?
  • Have you seen this billboard before?

I’m then going to show them the same image again, but without a time limit, giving them time to engage slow thinking, and then ask:

  • Now that you’ve had a bit of time to examine the image, how would you describe it?

And that’s it: that’s the whole test. You’re welcome to take it yourself. It took me about an hour to create, and I sent it out to the broadest possible audience. It cost me 150 credits, which is $150 USD on our platform: perhaps a lot compared to doing no research at all, but very cheap compared to a big moderated research project. And the turnaround time was five minutes.
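If it helps to see the shape of the test at a glance, the two-phase structure above can be sketched as plain data. This is just an illustration of the method, not any platform’s real API; the field names here are ones I’ve made up for the sketch:

```python
# A minimal sketch of the two-phase five-second test.
# Field names ("step", "duration_seconds", etc.) are illustrative only.
test_plan = [
    {
        "step": "timed exposure",          # fast thinking: first impressions
        "image": "billboard.png",
        "duration_seconds": 5,
        "questions": [
            "What is this ad about?",
            "Did you notice anything unusual about the image?",
            "Have you seen this billboard before?",
        ],
    },
    {
        "step": "untimed exposure",        # slow thinking: considered reasoning
        "image": "billboard.png",
        "duration_seconds": None,           # no time limit this time
        "questions": [
            "Now that you’ve had a bit of time to examine the image, "
            "how would you describe it?",
        ],
    },
]

# Four open, non-leading questions in total across both phases.
total_questions = sum(len(step["questions"]) for step in test_plan)
print(total_questions)  # 4
```

The point of the structure is that the same image is shown twice, once under time pressure and once without, so you hear from both the gut reaction and the considered one.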

Here are some of the results. Note that my questions were not leading — I didn’t ask “Does this look like Hitler to you?” (as this study did) — instead I left it very open, so I didn’t suggest the answer at all.

I think it should go without saying that even a single comment saying your design evokes the image of Hitler means you probably need to revise it. But as we’ve already gone over, there are many reasons why you might ignore just one or two comments.

However, getting those comments from totally unbiased people adds a certain amount of weight, making them much harder to ignore.

The three questions I asked are also very versatile and reusable. I simply asked:

  • A comprehension question for the limbic system: What was the ad about?
  • A question looking for blind spots: Did you notice anything unusual?
  • A double check for the prefrontal cortex: Now that you’ve had time to think about it, would you describe it differently?

These are questions you can use on almost any campaign. This generalized testing methodology is both simple and very powerful, and as you can see, it can help reveal things that you or your team might miss due to bias, thinking errors, time constraints and project pressure. In the case of our illustrative story, the team didn’t see the fascist kettle themselves, but if they had run this test, they may well have been surprised by the comments coming through. They would have discovered the error through research before they printed billboards across the country.

What did we learn?

Beware the bubble

Remember that as you proceed with your project, you and everyone involved are adding your own assumptions as you go along. Due to various biases and thinking errors, compounded by the pressure to deliver, you’ll lose your ability to see which of your assumptions are risky when you spend a lot of time in the bubble of the project. To mitigate this: do some research.

You don’t have to be a sophisticated scientist to do research

Research is complex, but you can avoid catastrophic project failure without being a scientist. So many tools exist that help you test those assumptions easily, cheaply and quickly. You can even make research fun — get your team involved! Getting feedback sounds scary, but in my experience it gives you a little dopamine hit when you learn that you’ve hit the mark, or find out something you’ve missed.

Research is easier than you might think

It takes less than an hour and less than $200 to set up a quick test and you should get the results back the same day if not faster. The questions you ask don’t have to be complex, either.

Testing assumptions helps you decrease the project risk

Obviously you can use research to avoid mistakes and mitigate risk. But secondary benefits exist as well — you can use research to support more ambitious plans and gambits. By running research, you might be able to get more buy-in from your team on an idea that’s a little more out there. By gaining insight from an outside authority, you’ll have the confidence to pitch your wild ideas without them sounding quite so scary.

In summary: real world feedback is the best — it’s so useful, energizing and important — it’s almost silly not to do it. You can view the results of the test we ran here.

If you’re looking to get started testing your ideas quickly, easily and affordably, you can sign up to UsabilityHub today to get started.



Director of product building design research tools at UsabilityHub.