A tricky part of qualitative usability testing is crafting test questions. The way you write or ask questions can affect the answers you get. There are plenty of usability testing tools available to help you. However, it's easier than you think if you use the funnel technique, in which a usability study starts with broad questions and narrows in as it goes, and follow these best practices:
Before you start, here’s what you need to know about common usability testing questions.
When gathering feedback, you'll choose between two common question types: open-ended and closed-ended questions.
Open-ended questions encourage free-form answers and can't be answered with a simple "yes" or "no." They start with words like "how," "what," "when," "where," "which," and "who."
TIP: Avoid "why" questions, which lead people to make up answers when they don't have one. Instead, say something like, "Please tell me more about that."
When conducting qualitative usability research, ask mostly open-ended questions because that's how you get human insight. A handful of interviews won't yield statistical significance, so there's no point in collecting answers meant for statistical analysis; focus on getting richer data instead.
Here are examples of open-ended questions:
Closed-ended questions have a set of definitive responses, such as "yes," "no," "A," "B," or "C." They're great for unmoderated surveys or text-box responses because users don't have to elaborate; they simply validate (or invalidate) an assumption. Because these answers can be analyzed statistically, they're better suited to quantitative research than qualitative.
Here are examples of closed-ended questions:
Now that we know the types of questions available to us, here are the types of usability test questions you need to know about:
Screener questions, also known as screeners, are questions intended to evaluate a contributor's qualifications and target specific groups of contributors. These multiple-choice questions eliminate contributors who don't qualify to participate in your study.
Screeners let you find contributors based on demographics: statistical data that describes a particular population through variables and subgroups, like the following examples.
You can also filter contributors based on psychographics: data that categorizes a population by characteristics like interests, activities, and opinions:
Before creating screener questions, identify the right target audience; this ensures you get actionable feedback. For example, if most of your customers fall into a specific age range or geographic location, those might be the parameters you set. However, if you want an unbiased outlook from people who may not be familiar with your product, consider recruiting users from outside your usual demographic.
The trick to effective screener questions is asking them in a way that identifies your audience without leading contributors to a particular answer or revealing specific information about your test.
For example, if you're looking to test a new mobile app intended for parents who live in the Midwestern United States, you want to find contributors who fit that criteria. Instead of asking someone if they live in the Midwest, you would ask, "In which of these regions do you live?" and give answer choices covering many different areas. Add distractor answers to your screener questions to deflect attention from the correct answer.
Here's how we would find our target audience of parents who live in the Midwestern United States via screener questions:
As you can see, we've mixed the response we're looking for in with distractor responses, increasing our odds of getting the right contributors without revealing details about the test or who we're looking for.
Now that you've set up screener questions to find the ideal contributors, it's time to learn more about each contributor before the test itself influences how they might answer. Pre-test usability questions give context to your contributor's actions and test answers. They can be open- or closed-ended questions.
For example, you might want to know how experienced your contributor is with mobile apps before the usability study. This will help you better understand why they take specific actions.
Here are some examples of pre-test questions:
In-test usability questions are questions directly related to your testing objective. They should start general and get more specific. Always ask open-ended questions during your test.
Whether qualitative or quantitative, usability testing helps you understand the what, why, and how behind your customers and their actions. You can discover bugs or errors, get customer feedback, know your audience, learn whether something works as expected, and more.
When running an unmoderated usability test, you want to ensure your questions allow for open and honest feedback. It's also a good idea to let contributors know you're open to critical or negative feedback. After a contributor finishes a task, here are some common open-ended questions to ask:
When running a moderated usability test, the moderator can probe deeper into the contributor’s responses. A good rule of thumb is to stay quiet and let the contributor do most of the talking, but here are questions for promoting feedback:
Follow-up usability test questions are a set of questions that end the session. These might include clarifying or probing questions.
After a usability test, you have another chance to ask contributors about their experience for additional context. Get feedback on the overall experience, and see if there's anything they want to talk about that you didn't ask. Follow-up questions can be closed- or open-ended.
As you can see, no test is complete without usability questions. For inspiration, take a look at some more usability testing examples, or browse the UserTesting templates gallery for your next project. Complete with pre-built templates designed by research experts, it can be used as-is or customized to fit your needs.
Bring customer perspectives into more decisions across teams and experiences to design, develop, and deliver products with more confidence and less risk.