Think of monitoring your UX research project the way people monitor their health. We measure our blood pressure and compare it to a normal, healthy range. If it’s off, we seek help to diagnose the problem and take steps to bring the numbers back into that range. We run these tests as often as needed to make sure the numbers stay healthy. In the same way, we measure our products’ UX scores to keep them healthy.
Measuring experiences consistently and quantitatively has several benefits, as we’ll see below.
As we discussed last week, great UX research leaders know their own data. Maintaining a scorecard to report on your UX research practice is a great start, but great UX research leaders also measure the impact of UX research efforts on business KPIs.
It’s critical to measure and report on successes on a project-by-project basis in collaboration with the business or product owners. This can easily be done by showing the direct impact of UX improvements on metrics such as customer experience scores, net promoter scores, customer satisfaction scores, product conversion rate, or customer service call volume.
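One of those metrics, the Net Promoter Score, has a standard formula: the percentage of promoters (ratings of 9–10) minus the percentage of detractors (ratings of 0–6). Here’s a minimal sketch in Python; the ratings data is made up purely for illustration:

```python
# Minimal NPS sketch. Ratings are 0-10 answers to "How likely are you to
# recommend us?" -- the data below is hypothetical.
ratings = [10, 9, 8, 6, 10, 7, 9, 3, 10, 8]

promoters = sum(1 for r in ratings if r >= 9)   # ratings of 9-10
detractors = sum(1 for r in ratings if r <= 6)  # ratings of 0-6

nps = (promoters - detractors) / len(ratings) * 100
print(f"NPS: {nps:+.0f}")  # ranges from -100 to +100
```

Tracking a number like this before and after a UX change is what lets you show direct impact rather than assert it.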
In a recent survey, NN/g asked respondents how they judge the success of a design project.
Unfortunately, only 24% of respondents said that both quantitative and qualitative research was used to judge the success of their projects. Even worse, 18% admitted that they “don’t really know” if their design changes are actual improvements or not.
Many UX research leaders claim to be data driven, but do you know your product’s current UX score? Do you know whether it’s a healthy score?
Great UX leaders track their products’ UX scores and build consensus intelligently and confidently. By doing the same, you can accelerate your UX practice from old-fashioned, one-off research to a more continuous journey that yields big moves and big gains.
But what’s a ‘norm’ in UX research? There are several ways to establish one. You can compare ease-of-use ratings against the average score of similar task-based studies you’ve conducted in past years. Even better, conduct a competitive benchmark study of your top three competitors’ sites to see exactly where you stand.
When you measure a UX score consistently over time, you establish a baseline. This helps you and your stakeholders make informed go/no-go decisions.
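As a minimal sketch of what that looks like in practice (the study names, scores, and the 10% tolerance below are all hypothetical), you might average ease-of-use ratings from comparable past studies and flag any new score that falls below the norm:

```python
# Hypothetical ease-of-use scores (1-7 scale) from past task-based studies.
past_scores = {
    "2021 credit-card shopping study": 5.1,
    "2022 credit-card shopping study": 5.4,
    "2023 credit-card shopping study": 5.2,
}

baseline = sum(past_scores.values()) / len(past_scores)  # the 'norm'
new_score = 4.6  # latest study result (hypothetical)

print(f"Baseline (norm): {baseline:.2f}")
print(f"New score:       {new_score:.2f}")

# A simple go/no-go signal: flag scores well below the norm.
# The 10% tolerance is an arbitrary choice for illustration.
if new_score < baseline * 0.9:
    print("Below the norm -- investigate before shipping.")
else:
    print("Within the healthy range.")
```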
Great UX research leaders use consistent user experience measures and track them over time to show the trends. They also know why UX scores are up or down, because they ask the right questions at the right time. They can also propose solutions to improve the scores.
Let’s look at a case study from a major financial institution that wanted to measure the user experience of shopping for a credit card on its website.
The analytics report showed a large number of users dropping off before reaching the application stage, while other users stayed and successfully applied for a card. But the report couldn’t tell the team why users bailed, and it couldn’t tell them whether the experience was easy for those who stuck it out.
So how can you measure this particular user experience?
The customer survey report may have some information, but I can almost guarantee that the survey doesn’t have the specific feedback needed to diagnose the problem. In most cases, the satisfaction rating isn’t specific to the experience, which in this case is shopping for a credit card.
So how do we tackle these issues?
An unmoderated, task-based study is a good place to start looking for answers. From there, let’s follow in the footsteps of great UX research leaders.
Iterate, test, rinse and repeat. You’ll gradually see a rise in usability scores.
Generally speaking, if nearly half of users report that a task wasn’t easy, that’s a good sign to refine the design and test again. Each organization, of course, can set its own benchmarks for triggering a second look at a particular feature.
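A sketch of that kind of trigger might look like the following; the task names, votes, and 50% threshold are illustrative stand-ins, not real study data:

```python
# Hypothetical post-task survey results: did each participant rate the task easy?
task_results = {
    "find a card": [True, True, False, True, False, True],
    "compare cards": [False, False, True, False, True, False],
}

THRESHOLD = 0.5  # retest if fewer than half found the task easy; tune per org

for task, easy_votes in task_results.items():
    easy_rate = sum(easy_votes) / len(easy_votes)
    if easy_rate < THRESHOLD:
        print(f"'{task}': {easy_rate:.0%} found it easy -- refine and retest")
    else:
        print(f"'{task}': {easy_rate:.0%} found it easy -- OK for now")
```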
The data provided statistical evidence that the compare-cards feature needed major improvements. Once it was redesigned based on user feedback (those crucial whys), the ease-of-use score improved by almost 30%, the success rate went up by 22%, and the confidence rate jumped 10%.
We kept monitoring the analytics reports month over month with anticipation. And sure enough, twelve months after releasing the new design, the reports showed a much lower drop-off rate than before: 75% in 2013 vs. 48% in 2014.
Additionally, customer service reported a decrease in call volume on that subject: 65% in 2013 vs. 30% in 2014.
Here’s how the institution captured results before and after the redesign: a ‘before’ scorecard and an ‘after’ scorecard.
If you choose to do a competitive benchmark study, you can report the results as a single benchmark score.
The QXscore is an experience score that combines various measurements, collecting both behavioral data (such as task success, task time, and page views) and attitudinal data (such as ease of use, trust, and appearance). The purpose is to create a single benchmarking score for your product.
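The exact QXscore weighting isn’t spelled out here, so the sketch below only illustrates the general idea: normalize each behavioral and attitudinal measure to a 0–100 scale, then combine them into one number. The metric values, scales, and equal weighting are all assumptions for illustration:

```python
def to_percent(value, worst, best):
    """Normalize a raw measurement to a 0-100 scale (clamped)."""
    return max(0.0, min(100.0, (value - worst) / (best - worst) * 100))

# Behavioral data (hypothetical raw values)
behavioral = [
    to_percent(0.78, 0.0, 1.0),  # task success rate: 78%
    to_percent(95, 240, 30),     # task time in seconds (lower is better)
]

# Attitudinal data (hypothetical 1-7 ratings)
attitudinal = [
    to_percent(5.4, 1, 7),  # ease of use
    to_percent(5.9, 1, 7),  # trust
    to_percent(5.1, 1, 7),  # appearance
]

# Equal-weight average of the two halves -- an assumption, not the real formula.
behavioral_avg = sum(behavioral) / len(behavioral)
attitudinal_avg = sum(attitudinal) / len(attitudinal)
composite = (behavioral_avg + attitudinal_avg) / 2

print(f"Composite UX score: {composite:.0f} / 100")
```

However the pieces are weighted, the value of this kind of composite is that one number can be tracked over time and compared against competitors.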
Once you’ve entered your results into our UX scorecard calculator, we’ll generate a QXscorecard for your product.
It’s a simple, clear, and persuasive tool for communicating user research results to stakeholders, and it should help you get future buy-in.
If you’d like to know more about how UserTesting can help measure, manage and action your user research data, get in touch!
Executive leaders and stakeholders are likely to be interested in a quarterly or annual update on UX scores. However, they’ll also want to know how the ratings compare to the norm, so be sure to establish one.