Great UX leaders measure experience consistently and quantitatively

Posted on July 20, 2023

Think of monitoring your UX research project the way people monitor their health. We measure our blood pressure and compare it to a normal, healthy score. If it’s off, we seek help to diagnose the problem and take steps to bring the numbers back into a healthy range. We conduct these tests as often as needed to ensure we reach a good score. In the same way, we measure our products’ UX scores to keep them healthy.

Measuring experiences consistently and quantitatively lets you:

  • Establish a baseline score to measure the success of subsequent redesigns and improvements.
  • Show the value of usability research and its impact on business goals.
  • Earn influence over project prioritization, and lead more high-quality, impactful research within your organization.
  • Watch stakeholders promote the story and demand the same standard for every product throughout the development life cycle.

As we discussed last week, great UX research leaders know their own data. Maintaining a scorecard to report on your UX research practice is fantastic. Great UX research leaders also measure the impact of UX research efforts on business KPIs.

It’s critical to measure and report on successes on a project-by-project basis in collaboration with the business or product owners. This can easily be done by showing the direct impact of UX improvements on metrics such as customer experience scores, net promoter scores, customer satisfaction scores, product conversion rate, or customer service call volume.

In a recent survey, NN/g asked respondents how they judge the success of a design project.

Unfortunately, only 24% of respondents said that both quantitative and qualitative research was used to judge the success of their projects. Even worse, 18% admitted that they “don’t really know” if their design changes are actual improvements or not.

Many UX research leaders claim to be data driven. But do you know your product’s current UX score? Do you know if it’s a healthy score?

Great UX leaders track their products’ UX scores and build consensus intelligently and confidently. By doing the same, you can accelerate your UX practice from old-fashioned, one-off research to a more continuous journey that yields big moves and big gains.

But what’s a ‘norm’ in UX research? There are several ways to establish one. You can compare ease-of-use ratings with the average score of similar task-based studies you’ve conducted in years past. Even better, conduct a competitive benchmark study of your top three competitors’ sites to know where you stand in comparison.
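As a minimal sketch of that comparison (in Python, with invented numbers purely for illustration), you might compute a norm from past studies and check whether a new score falls meaningfully below it:

```python
from statistics import mean, stdev

# Hypothetical ease-of-use scores (1-5 scale) from past task-based studies,
# used to establish a norm. The figures are invented for illustration.
past_study_scores = [3.8, 4.1, 3.9, 4.0, 4.3]
current_score = 3.4

norm = mean(past_study_scores)
spread = stdev(past_study_scores)

print(f"norm: {norm:.2f} +/- {spread:.2f}")
if current_score < norm - spread:
    print("current score is well below the norm; investigate before shipping")
```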

When you measure a UX score consistently over time, you’ll establish a baseline. This helps you and your stakeholders make informed go/no-go decisions.

Great UX research leaders use consistent user experience measures and track them over time to show the trends. They also know why UX scores are up or down, because they ask the right questions at the right time. They can also propose solutions to improve the scores.

Measuring UX: a case study

Let’s look at a case study from a major financial institution that wanted to measure user experience when shopping for a credit card on the website.

Business goals:

  • Increase the credit card applications submitted online by 25% in a year
  • Decrease call volume in regard to credit card applications by 30% in a year

The challenge:

The analytics report showed a large number of users dropping off before reaching the application stage, while other users remained and successfully applied for a card. But the report couldn’t tell the team why users bailed, or whether the experience was easy for those who stuck it out.

So how can you measure this particular user experience?

The customer survey report may have some information, but I can almost guarantee that the survey doesn’t have the specific feedback needed to diagnose the problem. In most cases, the satisfaction rating isn’t specific to the experience, which in this case is shopping for a credit card.

The research strategy:

So how do we tackle these issues?

An unmoderated, task-based study is a good place to start looking for answers. From there, let’s follow in the footsteps of great UX research leaders.

  1. Gather a representative group of users and ask them to search for a credit card that suits their needs. What sample size do you need? This article can help.
  2. As they search, track their behaviors. What are they clicking on? Where are they looking?   
  3. Track the amount of time users spend on each task.
  4. Ask participants to rate the ease of completing each task. Be sure to ask why they found it easy or hard. That why is crucial.
  5. Ask users to assign an ease score (say, 1 to 5, with 5 being easiest) alongside their responses. Consistency is key here.
  6. Identify the usability issues and rank them by severity level with recommendations for improvements.

Iterate, test, rinse and repeat. You’ll gradually see a rise in usability scores.

Generally speaking, if nearly half of users report that a task wasn’t easy, that’s a good sign to refine the design and test again. Each organization, of course, can set its own benchmarks for triggering a second look at a particular feature.
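To make steps 2 through 5 concrete, here’s a minimal sketch of aggregating the raw results per task and applying that roughly-half heuristic. The data, field layout, and the 1-to-5 ease scale (5 = easiest) are assumptions for illustration, not outputs of any particular tool:

```python
from statistics import mean

# Hypothetical raw results from an unmoderated, task-based study.
# Each row: (participant, task, success, time in seconds, ease rating 1-5).
results = [
    ("p1", "find-card", True, 95, 4),
    ("p2", "find-card", True, 140, 5),
    ("p3", "find-card", True, 120, 4),
    ("p1", "compare-cards", True, 180, 3),
    ("p2", "compare-cards", False, 260, 2),
    ("p3", "compare-cards", False, 240, 1),
]

for task in sorted({row[1] for row in results}):
    rows = [row for row in results if row[1] == task]
    success_rate = mean(1 if r[2] else 0 for r in rows)
    avg_time = mean(r[3] for r in rows)
    avg_ease = mean(r[4] for r in rows)
    # Count a rating of 4 or 5 as "easy"; flag the task for another
    # design pass if fewer than half of participants found it easy.
    pct_easy = mean(1 if r[4] >= 4 else 0 for r in rows)
    flag = "  <- refine and retest" if pct_easy < 0.5 else ""
    print(f"{task}: success {success_rate:.0%}, avg time {avg_time:.0f}s, "
          f"avg ease {avg_ease:.1f}/5{flag}")
```

Tracked study over study, these per-task numbers become the baseline you compare each redesign against.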

The results:

The data provided statistical evidence that the compare-cards feature needed major improvements. Once it was redesigned based on user feedback (those crucial whys), the ease-of-use score improved by almost 30%. The success rate went up by 22% and the confidence rate jumped 10%.

We kept monitoring the analytics reports month over month with anticipation. And sure enough, twelve months after releasing the new design, the reports showed a lower drop-off rate than before: 75% in 2013 vs. 48% in 2014.

Additionally, customer service reported a decrease in call volume on that subject: 65% in 2013 vs. 30% in 2014.

Here’s how the institution captured results before and after the redesign:

[Scorecards: before and after the redesign]

If you choose to do a competitive benchmark study, you can report the results in the same scorecard format, with your product’s scores alongside each competitor’s.

The QXscore

The QXscore is an experience score that combines multiple measurements, gathering both behavioral data (such as task success, task time, and page views) and attitudinal data (such as ease of use, trust, and appearance) into a single benchmarking score for your product.
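As a rough illustration of the general idea (normalizing each measure onto a common 0-100 scale, then blending the behavioral and attitudinal families into one number), here’s a minimal sketch. The bounds, weights, and sample values are assumptions for illustration, not the actual QXscore formula:

```python
from statistics import mean

def to_percent(value, low, high):
    """Rescale a raw measure onto 0-100, clamped to its scale bounds."""
    return max(0.0, min(100.0, (value - low) / (high - low) * 100))

# Behavioral measures: task success as a proportion, and mean task time
# inverted against an assumed 300-second ceiling (faster = better).
behavioral = mean([
    to_percent(0.78, 0, 1),         # task success rate
    100 - to_percent(140, 0, 300),  # mean task time, inverted
])

# Attitudinal measures on an assumed 1-5 rating scale.
attitudinal = mean([
    to_percent(3.9, 1, 5),  # ease of use
    to_percent(4.2, 1, 5),  # trust
    to_percent(3.6, 1, 5),  # appearance
])

# Assumed 50/50 weighting between the two families of measures.
score = 0.5 * behavioral + 0.5 * attitudinal
print(f"combined experience score: {score:.0f}/100")
```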

Once you’ve entered your results into our UX scorecard calculator, we’ll generate a QXscorecard for your product.

It’s a simple, clear, and persuasive tool for communicating user research results to stakeholders, and it should help you get future buy-in.

If you’d like to know more about how UserTesting can help you measure, manage, and act on your user research data, get in touch!

What to expect:

Executive leaders and stakeholders are likely to be interested in a quarterly or annual update on UX scores. However, they’ll also want to know how the ratings compare to the norm, so be sure to establish one.

Next steps:

  • Create a shared Google Doc to track the scorecards you created in step one.
  • Identify your business metrics and their relationship to possible UX metrics.
  • Gather feedback and improve the metrics you track and research methods you use.
  • Establish and use a consistent user experience measure for all common tasks your customers perform when using your product.
  • Test your product before, during and after a redesign (while live in production).
