When people find out I used to be a lawyer, the inevitable question follows: Why did you leave law for UX research? It’s a fair question; most lawyers don’t leave the practice of law, and the connection between law and UX research may not be readily apparent. I’m not the lawyer most people imagine in their heads or see on TV. I spent the first half of my legal career with a nonprofit law firm focused solely on helping people below the federal poverty line. I worked one-on-one with my clients to help them get out of abusive marriages, fight slumlords, stop abusive debt collection companies, and pursue many other forms of civil litigation.
I spent the second half of my legal career representing disabled blue-collar workers in over 500 Social Security Disability hearings, helping them obtain the benefits they needed to survive while unable to work. In short, I spent five years working in roles that required an incredible amount of empathy for people in situations entirely different from my own. That empathy is a cornerstone of user research.
As much as my clients needed help, they rarely volunteered the information I needed to help them. I had to get it out of them, in the most objective way possible, while understanding the situation from their point of view. This is exactly the task we face in user research.
The most crucial aspect of obtaining valid user research is not to lead the user. In trial law, we divide methods of questioning into two categories: leading and direct. Leading questions suggest an answer within the question itself. For example, “Are you reading this article because it’s just too early to be doing real work?” The question inherently directs you to either agree or disagree. Leading questions can also limit your response to a set of predefined answers, as in, “Are you reading this article because it’s just too early to be doing real work, or because you’re really just bored?” Either way, the result is the same: the person reading the question is prompted to give a certain response. Leading questions preclude the opportunity to hear an open answer.
UX researchers distinguish between open-ended and closed-ended questions. Direct questions are inherently open-ended. By asking open-ended questions, you allow people to answer in exactly their own words. For example, “Why are you reading this article?” I have suggested nothing to influence your answer. You may answer, “Because I’m bored,” or you may answer, “Because lawyers are fascinating.” Whatever your response, it was not suggested in the body of the question. Unlike a leading question, the entire universe of answers is available to you. Direct questions assume nothing, which allows the person answering to tell their own story entirely in their own words.
This gives us the clearest insight into the reasoning behind their actions. Leading questions, on the other hand, cloud responses with our own bias.
In user testing, we have one goal: understand user behaviors. Everything else in our world flows from that truth. Therefore, to observe natural user behaviors, we must avoid influencing those behaviors as much as possible. We can achieve this by using direct questions.
A direct question assumes nothing. Test questions should begin in the broadest possible sense. Questions should be simple and clear, using classic question words like when, where, who, and how. The test should flow naturally along the expected path of user behavior, but the test should always stay behind the user: in other words, the user should naturally arrive at the subject of the next question before it is asked. This way, by the time you give the direction to take an action, the user has either already completed the action or has already said that they anticipate it. This is the art of direct questions: the users lead themselves down the path while we merely observe.
To put some context to this, let’s imagine you’ve been asked to test the checkout process for an e-commerce website. You want to test the usability of the entire flow, from adding an item to the cart to completing the purchase. Let’s look at how such a test might run, first using direct questions:
Task 1: Please take a few minutes to explore this page. Remember to think aloud. What can you do on this page?
This is where we will get the best data on user behavior. What will draw their attention? Is the call to action clear? Does the user understand the purpose of the page and anticipate what will happen next? Because we suggested nothing to the user beyond requesting they observe the page, their behavior will be completely natural, and therefore, the most valuable and reliable for us.
Task 2: Imagine you want to buy a box of widgets. How would you do this?
Now we have directed the user to do something and have therefore influenced their behavior. The user will now search for some way to make a purchase, to the exclusion of other elements on the page. However, we have still used a direct question. We haven’t suggested a path for how to make a purchase. We want to observe the user’s behavior and listen to their thoughts as they carry out this task. Our best data lies in comparing the user’s behavior between Task 1 and Task 2. If the user did not clearly understand in Task 1 that the page is for buying widgets, then we have already identified the single biggest problem. If the user did understand the purpose of the page in Task 1, then Task 2 should flow naturally with their mental model for interaction and is therefore minimally disruptive to their natural behavior.
Task 3: Once you have selected a box of widgets, how would you complete the purchase?
This question would test the user’s understanding of the remaining checkout flow. Note that, again, there is no suggestion as to how the user should finish the process; we’ve left it up to them. This allows us to observe whether the process is intuitive to the user, which yields the best data for us.
The wrong way to test is to use leading questions that suggest an answer to the user and therefore influence the user’s behavior to match our own premeditated biases.
This is our new widget sales page. Is it clear that this page is designed for selling widgets?
This is a textbook leading question. We have told the user the answer we want. When presented with a leading question, most users will be inclined to answer affirmatively. Some users may say, “No, it’s not clear,” but even in that case, our data isn’t as good because we have limited the scope of the user’s response. Most of all, we have missed out on the biggest opportunity in user testing: observing natural, undirected behavior. With a question like this, we have learned nothing about how a real user will interact with the site.
If you want to buy a box of widgets, would you use the “Purchase Now” button, or would you drag the box of widgets to the shopping cart?
Again, we have suggested the answer to the user as well as limited their response. We will gather no useful data from this question: because we have told the user how to complete the task, we cannot know whether they would naturally use either of these methods.
The path to better user research begins with observing your users’ natural behaviors. By using direct questions effectively, you can build tests that let your users lead the way. Great experiences are achieved when we follow.