Have you ever written a test plan that made perfect sense to you but totally confused your test participants? Whether you’re new to user research or a seasoned vet, it happens to everyone. But why does this happen? I’ll explain, but first I want to tell you a story. When a colleague of mine was in a speech class in high school, his teacher had everyone in the class write step-by-step instructions for making a peanut butter and jelly sandwich on a 3x5 card. Once they were done writing, the teacher called a couple of students to the front of the room and gave them each a different set of ingredients. He also gave them a randomly selected 3x5 card and told them to make a PB&J sandwich by following the exact instructions on their card.
Even though writing PB&J instructions sounds easy and straightforward, none of the instructions worked as intended. One student had to rip open the plastic bag of her Wonderbread to get her slices because the instructions on her card asked her to pull out a piece of bread (without specifying how to open the bag). And another student had to tear a french roll with his hands and spread the PB&J with his fingers because he didn’t have a knife. As you can probably imagine, it was a total mess.
The PB&J exercise illustrates the same thing that happens when a research participant goes off track during a test. The tasks and questions that made perfect sense to you are unclear to them. Uncertain about what they’re supposed to do, participants interpret your instructions differently than you intended. Once this happens, the test goes completely off course and the results aren’t useful or informative.
Test participants could go off course for a number of reasons. The wording of your test plan may be unclear or misleading. If that’s the case, it’s easy to fix. Doing a dry run should uncover all of the major flaws in your test design. But a bigger issue is that you might have recruited the wrong audience. You might have written a great test plan for people who use Wonderbread, but recruited participants who have a french roll and no knife. If you recruit the wrong people, there’s a far greater chance that your test results are going to be useless—no matter how well-designed your test plan is. Here are the lessons I learned from my coworker’s PB&J experience. If you want to make sure you recruit the right participants and design tests that produce actionable insights, then follow these 5 steps:
Getting great test results isn’t just about writing a great test plan. That’s part of it, but it’s certainly not the most important part. It’s about understanding who your users are so you can recruit the right participants for your test. How well do you know your users? Do you know what kind of bread they use to make their metaphorical PB&J sandwich? Do they use french rolls, wheat bread, Texas toast, or something else? What about utensils—do they have all the tools they need? If you don’t understand your users, how are you supposed to recruit the right participants for your test?
You could write the most perfect, academically rigorous test plan in the world, and it’s still going to give you useless feedback if you have the wrong audience going through it. On the other hand, if you have a mediocre test plan but you’ve recruited the right people, they’ll give you useful insights and feedback you can use to improve whatever you’re working on. Here are a few resources to help you recruit the right test participants for your study:
Now that you understand your customers and you’ve recruited the right people, it’s time to sit down to write your test plan. Remember: you’re not your user, so leave all of your assumptions out of it. Think about how your participants are going to experience the step-by-step flow of your test. Where might things go wrong? What parts are they going to have issues with? Where do you think they might get confused or lost? What do they need more clarification about? Proactively asking yourself these questions will help you see the pitfalls of your test before you launch it. Also, being aware of the assumptions that you’re making will help you avoid biased questions.
You have certain objectives you’re trying to accomplish when you’re designing a test. To make sure you get the kind of feedback that meets those objectives, you need to flip the script. Once you’re done writing your test plan, adopt the mindset of the person who’s taking your test and read through it. Ask yourself more questions. Does this question make sense? Is any of the wording here misleading? Am I asking a leading question? What’s the most effective way to frame this? Imagine you’re the target audience: is this test going to produce the kind of results you’re looking for? As you go through the test plan, make any changes you see fit.
Once you think your test plan is ready, launch a dry run and be prepared to make edits. Other people are going to do things that you didn’t expect. Pay attention to where the user gets confused or goes off track during your dry run. Their misinterpretations will show you exactly what you need to revise. Once you’ve analyzed your dry run and resolved all of the major issues in your test plan, your test is ready to launch.
If you want to get actionable user feedback that helps you make improvements to your website or your app, then you need to understand who your target audience is. That way you can recruit the right people for your test. My co-worker’s PB&J story taught me that I could write the best test plan in the world, but if I don’t recruit the right users, they’re not going to give me results that are accurate and useful. The absolute best results come when you can do both really well: recruit the right audience and write a great test plan for them. That’s how you get the type of feedback that helps you make better products. And if you want to learn more about writing test plans that produce helpful feedback, download our free eBook: A Complete Guide to User Testing Your Next Project!