Usability testing is a fundamentally human process. But it can be greatly enhanced by employing automation in a thoughtful way.
There are lots of reasons for organizations to invest time in automating usability testing. When set up properly, automation will:
The speed and ease of automation is a double-edged sword. It's important to exercise caution when running automated tests: without careful preparation and piloting, you could end up with 50 bad sessions before you uncover the issue. With moderated testing, by contrast, if the first session goes poorly, a few tweaks to the procedure or stimulus before the next session are all that’s needed to course-correct.
In this article, we'll cover a standard usability testing process with an emphasis on required changes and best practices for automation.
Usability testing can range from evaluating designs for minor feature change(s) (A/B or multivariate testing) to benchmarking the usability of an application or site.
The purpose of your test determines the scope and specificity of your tasks. Here are a few examples:
Getting clarity at this stage will also help you to determine which usability testing tools will be the most appropriate for your needs.
This is where manual and automated usability testing differ the most. Unless you have an unlimited budget and no time constraints, manual usability testing is mostly limited to smaller numbers of participants and more qualitative data. With a small number of participants, you will likely uncover the glaring usability issues and a few helpful insights.
For high-stakes decisions, such as choosing between similar design variations or benchmarking an application, larger sample sizes are required, and automated usability testing is critical to reaching those numbers efficiently.
In most cases, you need both qualitative and quantitative data to drive design decisions, along with actionable, relatable insights and examples of specific issues. One approach is to automate a small number of think-aloud usability evaluations alongside a large automated benchmarking study. This provides high-confidence numerical measures of improvement over time, together with impactful clips and quotes that add detail.
You must convert those participant requirements into a clear screener.
If you're providing the screener to a panel company, it can be a simple document with clear participant acceptance criteria.
For automated usability testing platforms that incorporate participant recruitment, develop a screener with appropriate logic and conditions to qualify the right participants without making the ‘correct’ answers obvious.
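As a rough illustration of what that screener logic can look like when written down explicitly (the question names, answer options, and qualifying rules below are hypothetical, not any particular platform's API), decoy options can hide which answers actually qualify a respondent:

```python
# Hypothetical screener sketch -- question names, options, and rules are
# illustrative only, not a real platform's API.
# Goal: qualify healthcare workers who used a patient portal recently,
# without the qualifying answers standing out among the decoy options.

QUESTIONS = {
    "industry": ["Education", "Healthcare", "Retail", "Finance", "Other"],
    "portal_use": ["Within 6 months", "6-12 months ago", "Over a year ago", "Never"],
}

def screen(answers: dict) -> str:
    """Return 'qualify' or 'disqualify' for one respondent's answers."""
    if answers.get("industry") != "Healthcare":
        return "disqualify"      # decoys hide which industry is being targeted
    if answers.get("portal_use") != "Within 6 months":
        return "disqualify"      # recency condition
    return "qualify"

print(screen({"industry": "Healthcare", "portal_use": "Within 6 months"}))  # qualify
```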
With moderated usability testing, task success is sometimes subjective. If an automated test platform requires video analysis to determine task success, the determination may still be subjective. Success can also be determined from more objective criteria that the platform captures automatically.
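When success is scored automatically rather than from video review, it helps to write the rule down as an explicit check, for instance the final page reached, time on task, or the answer to a post-task verification question. A minimal sketch, assuming the platform can export fields like these (all names hypothetical):

```python
# Hypothetical automated success rule -- assumes the platform exports the
# final page reached, elapsed time, and a post-task verification answer.

def task_succeeded(session: dict) -> bool:
    reached_goal = session["final_url"].endswith("/order-confirmation")
    within_time = session["duration_seconds"] <= 300       # 5-minute limit
    verified = session["verification_answer"] == "B"       # e.g. the order total shown
    return reached_goal and within_time and verified

print(task_succeeded({
    "final_url": "https://example.com/order-confirmation",
    "duration_seconds": 240,
    "verification_answer": "B",
}))  # True
```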
For simple testing of design iterations, the test plan can be very brief, simply outlining the items discussed above for clarity and consensus within the research and design team.
For large benchmarking efforts, taking the time to preview test plans with stakeholders, including securing agreement on the tasks to be included, is critical to the impact of the end results.
With a test plan in hand, creating the study in an automation platform will be simpler and faster.
If your platform supports recruitment, pay particular attention to all the platform’s built-in participant attributes (the targeting criteria for invitations).
For example, if your platform allows you to select participants’ employment industry, choosing the relevant industry will tailor your invitations, reduce screen-outs, and possibly reduce or even eliminate your recruitment costs.
After that, use screener questions to eliminate potential participants who don’t match your intended profile. Creating segments with caps within the screener ensures that each subgroup of participants is appropriately filled.
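To make the idea of segments with caps concrete: a quota is essentially a per-subgroup counter that closes the subgroup once it fills. A simplified sketch, with made-up segment names and counts:

```python
# Illustrative quota logic for screener segments -- not any platform's API.

QUOTAS = {
    "new_customers": 25,        # cap per segment
    "returning_customers": 25,
}
filled = {segment: 0 for segment in QUOTAS}

def admit(segment: str) -> bool:
    """Admit a qualified respondent unless their segment is already full."""
    if filled[segment] >= QUOTAS[segment]:
        return False             # segment cap reached -> screen out
    filled[segment] += 1
    return True

print(admit("new_customers"))    # True until 25 have been admitted
```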
Next, just as with a moderated test, provide some introductory text, as well as scenarios and pre-task and post-task questions. More sophisticated automated platforms may include logic to ask one set of questions after task success and another after task failure.
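That branching is just conditional logic: one follow-up question set after task success, another after failure. A brief sketch, with purely illustrative question text:

```python
# Illustrative branching of post-task questions based on task outcome.

SUCCESS_QUESTIONS = [
    "How easy or difficult was this task?",     # e.g. a single-ease-style rating
    "What, if anything, slowed you down?",
]
FAILURE_QUESTIONS = [
    "What were you expecting to happen?",
    "Where did you get stuck?",
]

def post_task_questions(succeeded: bool) -> list[str]:
    return SUCCESS_QUESTIONS if succeeded else FAILURE_QUESTIONS

for question in post_task_questions(succeeded=False):
    print(question)
```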
Pilot testing is important for all usability tests, but with moderated tests, any issues detected in the first few sessions can often be corrected before many participant sessions are wasted.
With automated usability tests, the pace of completion and the quantity of participants mean that any setup errors compound quickly. Pilot test yourself, then with colleagues, and finally with a limited number of participants.
Launching an automated study is usually simple, but there are unique considerations. While moderated studies control the timing of participant interaction, automated recruitment platforms that rapidly recruit and fill a quota may unintentionally bias results toward whoever happens to be available and quick to respond at that particular moment.
These biases may be reduced by including segmentation criteria, but a more effective solution may be using an automation platform that sends invitations in waves spread out over a longer time period (such as a day).
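As a rough picture of the wave idea, invitations can be split into batches spaced across the day rather than sent all at once; the batch count and spacing below are made up for illustration:

```python
# Toy sketch of wave-based invitations: send small batches spread over a day
# so the sample is not dominated by whoever is online at launch time.

from datetime import datetime, timedelta

def schedule_waves(invitees: list[str], waves: int = 6, start: datetime | None = None):
    """Split invitees into evenly spaced waves across roughly one day."""
    start = start or datetime.now()
    gap = timedelta(hours=24 / waves)
    batch_size = -(-len(invitees) // waves)      # ceiling division
    for i in range(waves):
        batch = invitees[i * batch_size:(i + 1) * batch_size]
        yield start + i * gap, batch

for send_at, batch in schedule_waves([f"user{i}@example.com" for i in range(30)], waves=3):
    print(send_at.strftime("%H:%M"), len(batch), "invitations")
```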
As you can see, successful automation requires careful planning and attention throughout the testing process. To summarize, make sure you clarify the purpose and scope of your test, choose appropriate sample sizes and data types, write a careful screener, document the test plan and review it with stakeholders, set up the study thoughtfully in your platform, pilot test in stages, and launch in a way that avoids recruitment bias.
If you follow these steps, you'll set yourself up for faster, easier results through automated usability testing.