In this guide

Keeping score: the value of experience benchmarking for executives

    Customer experience metrics can uncover valuable insights that help your teams create better products. But if you’re reading this, you probably already know that. As an executive, customer experience is one of many business aspects you’re paying attention to on a daily basis. 

    The larger question is: how does an improved customer experience positively impact revenue and cost, the two fundamental metrics that make an executive’s world go round? It’s not an easy question to answer. One study found that nearly 60% of executives admit they don’t have the data needed to prove customer experience’s impact on the business. Without that knowledge, it’s difficult for executives to predict the outcomes of their digital investments.

    Experience benchmarking programs are one of the best ways to clearly tie customer experience to business value. In the near term, a benchmarking program can help you validate your designs against business objectives and compare your product with its competitors. As part of a long-term strategy of consistent user research, benchmarking your customer experience can inform better product decisions and demonstrate the positive impact of your team’s efforts over time.

    Benchmarking an experience, however, isn’t always straightforward. After all, each user’s interaction with your product is unique. If you’re going to understand the business impact of a customer’s experience, you need to capture that data and tie the right metrics back to business objectives. But what should those metrics be? Customer experience is a complex mix of quantitative and qualitative factors. How can those be measured?

    In this guide, we’ll discuss why benchmarking is so valuable and show you how to implement a benchmarking study in your organization. We’ll also show you how executives can benefit from implementing QXscore™, a reliable “check-engine” light for any digital experience that helps you mitigate risk and find opportunities for improvement. Additionally, you’ll see how UserTesting’s Professional Services team can act as an extension of your organization, ensuring your experiences align with the business goals that matter most to executives.

    What is experience benchmarking?

    Experience benchmarking is the process of tracking the progress of a product or brand’s customer experience over time. It can be measured through:

    • Different iterations of a prototype 
    • Different versions of an app
    • Before and after versions of a feature or function update
    • Comparing your products to your competitors’

    Whether you’re benchmarking your product against its own past performance or a competitor’s performance, there are multiple ways it can add value to your organization.

    Why benchmark your digital experiences?

    Benchmark studies add a quantitative component to your research that helps support your findings with trackable data. Over time, you’ll see trends that will help inform better design and business decisions.

    There are numerous benefits to benchmarking, but here are a few of the most common.

    Stakeholder alignment with quantitative, historical data

    Any user experience, by definition, is subjective to each individual. It’s not surprising, then, that UX researchers and design teams often rely on qualitative measures when evaluating their user experience.

    This creates a disconnect from data-driven stakeholders who prefer to reach statistical significance before making a change.

    Benchmarking your experiences not only adds a quantitative layer to research but also enables organizations to track results over time with data everyone can quickly understand. This empowers teams to create a stronger link between customer experience and key business metrics like revenue and retention rates.

    For example, let’s assume you know that product videos increase a customer’s likelihood to make a purchase—as well as their total order size. Benchmarking your experiences over time will allow you to demonstrate how improvements to your video experience lead to increases in purchases and total purchase amounts.

    Establish a competitive advantage

    It’s not just your products you should be interested in. While your teams will naturally strive to create the best experience possible, always improving upon a previous version, it’s hard to know how much better your product is if you’re not comparing it to your competition.

    When you incorporate competitive experience benchmarking with your UX strategy, you give yourself a clear, measurable way to keep score with your competition.

    No matter how great your product is now, there’s always a chance your competition will catch up with you. By establishing a baseline of KPIs and measuring them against your competition’s, you’ll always know where you stand and where you can improve.

    Understand omnichannel opportunities

    Your customers think of their interactions with a company as one continuous experience, not a series of separate experiences. That means they expect the same look, feel, performance, and level of service across all channels. Benchmarking across channels will help you better prioritize features and improvements, ensuring a consistent experience across all channels. This is what’s known as omnichannel testing.

    If your desktop site is where most of your digital efforts are focused, use your desktop experience as the baseline to benchmark your other channels against. Does the site fall short on smartphones or tablets? How does your mobile app measure up? To take it one step further, how do those experiences compare to any in-person experiences customers have with your brand?

    Uncover hidden strengths and weaknesses

    Like it or not, your competitors train your customers on what to expect. The rising bar for customer experience, set by every company your customers interact with (not just your competitors), shapes their expectations around ease of use, clarity, and speed.

    The good news is you can learn from your competition’s successes and failures through competitive analysis. If you know that the competition will implement a new feature, run benchmark studies to see how that feature resonates with customers before and after the change. The results will help inform future design decisions. Find out what’s working and what isn’t from your competitors and iterate to improve upon that.

    QXscore: The pillar metric for benchmarking studies

    "The value in providing the QXscore to product owners is that it gives us a common metric with our product counterparts to be able to track success over time."

    Lucas Lemasters, UX Research Principal at American Airlines

    The missing link in quantifying experience

    Even though we know the advantages of benchmarking, many companies struggle to identify metrics relevant to user experience and instead rely on indirect measures or outcomes—if they try to measure at all. This lack of meaningful metrics makes it difficult to demonstrate the business value of customer experience to executives. 

    We created QXscore, a holistic, easily understood standard for measuring user experience that quantifies customers’ attitudes and behaviors into a single score and identifies opportunities to improve. It can be used to measure user experience progress over time, relative to competitors, or across multiple lines of business, digital properties, and products.

    How QXscore works

    QXscore, short for Quality Experience Score, is the only measure of experience that quantifies both customers’ attitudes and their behaviors into a standardized score on a 100-point scale. Traditional metrics, like NPS or abandonment, only tell half of the story.

    These metrics show you what’s happening but not why it’s happening. QXscore creates a full picture by combining behavioral and attitudinal data, along with task-level insights, into one score, helping your team understand where to focus and what to prioritize. The following attitudinal and behavioral metrics are measured to generate one reliable metric for a digital experience:

    • Usability
    • Task success
    • Customer satisfaction
    • Reliability and trust
    • Appearance
    • Net Promoter Score

    As the data is collected, a scorecard is generated. Your final score, a number from 1 to 100, is your QXscore: the single experience score for this product.
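
    The exact weighting behind QXscore is proprietary, but the general idea of rolling several normalized metrics up into one 100-point composite can be sketched as follows. The metric names, values, and weights here are purely illustrative assumptions, not the actual QXscore formula.

```python
# Purely illustrative sketch: the real QXscore weighting is proprietary.
# Shows the general pattern of combining normalized attitudinal and
# behavioral metrics into a single 100-point composite score.

def composite_score(metrics: dict, weights: dict) -> float:
    """Weighted average of metrics already normalized to a 0-100 scale."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(metrics[name] * w for name, w in weights.items())

# Hypothetical component scores (each on a 0-100 scale) and weights
metrics = {
    "usability": 78, "task_success": 85, "satisfaction": 72,
    "trust": 80, "appearance": 70, "nps_normalized": 65,
}
weights = {
    "usability": 0.25, "task_success": 0.25, "satisfaction": 0.15,
    "trust": 0.15, "appearance": 0.10, "nps_normalized": 0.10,
}
print(round(composite_score(metrics, weights), 1))
```

    A composite like this is only as good as its components, which is why a standardized, consistently measured set of inputs matters more than the specific weights.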

    The advantages of QXscore

    It’s a simple, clear, and persuasive tool for communicating user research results to stakeholders and business leaders. Instead of trying to explain multitudes of metrics, being able to simply say, "We had a QXscore of 75, and now it’s an 84 after our latest iteration" is powerful, crystal-clear evidence of impact. 

    It can also be used as a diagnostic tool for improvement. If your QXscore is 75, you can easily see how it can be improved by diving into specific component areas and their related tasks. 

    QXscore allows teams to avoid DIY benchmarking programs, which can easily become biased and produce murky results that aren’t actionable for executives. It’s designed to help your team and the entire organization understand where to focus and what to prioritize without relying on subjective “gut feel,” and to ensure the voice of the customer drives business decisions.

    QXscore's effect on KPIs

    • Conversion rate: By identifying and addressing pain points in the customer journey, retailers can improve their conversion rates.

    • Customer satisfaction (CSAT): A higher QXscore indicates higher customer satisfaction, which can lead to increased loyalty and repeat business.

    • Net promoter score (NPS): Insights from QXscore help retailers enhance the overall customer experience, positively impacting NPS.

    • Customer retention: By continuously monitoring and improving QXscore, retailers can reduce churn and retain more customers.

    Examples of experience benchmarking success with QXscore

    American Airlines

    Challenge

    Most flyers now book their flights online, making the digital experience crucial for customer satisfaction and retention. American Airlines, which transports around half a million passengers daily and attracts hundreds of millions of website visitors annually, serves a wide audience and faces significant UX challenges. Even as a large organization, American Airlines needed a better way to measure and improve its digital product experiences. Integrating UserTesting into their workflow provided the necessary tools to quantify, benchmark, and demonstrate progress effectively.

    Solution

    Lucas Lemasters, UX Research Principal at American Airlines, stated that UserTesting offers essential quantitative and qualitative data for product decision-making. The effectiveness of UserTesting is especially evident in American's benchmarking programs, where it collaborates with Research Consultants to utilize QXscore. This proprietary metric assesses both the usability and enjoyment of digital experiences, guiding product decisions and success.

    American Airlines put its entire sequence of digital experiences for searching and booking flights in front of potential customers with UserTesting, and then they quantified the usability and enjoyment of their website by QXscoring four key customer experiences:

    • Booking a flight
    • Changing and managing a flight
    • Checking in for a flight
    • The AAdvantage program

    For the benchmarking programs, they measured the performance of their digital experiences against the same experiences from competing airlines, carefully tracking every conceivable engagement a customer could have when using the American Airlines website. 

    Results

    American’s UX team provides QXscores to the organization’s product owners. This gives the teams a common, quantifiable metric to track success over time. The teams share the related data sets and provide a common set of measures to senior executives to justify their decisions.

    Kimberly Cisek, VP of Customer Experience, says, “Without human insight, you may become reliant on data that is static. What you really want to know is how the customers are using and engaging with the functions that you put out there for them.” 

    Banco Sabadell

    Challenge

    Banco Sabadell strives to be the Spanish bank with the best digital experiences for customers, and they have a rich history of digital innovation to stand on. They were the first to launch online banking in Spain in 1998 and the first, years later, to offer digital payments via bank accounts. Banco Sabadell employs approximately 40 researchers and designers, who ensure that the bank’s digital experiences are robust and built for the needs of the customers. 

    Manel Garcia, Banco Sabadell’s Director of User Acquisition & Activation, references two key challenges. “There are two parts when it comes to digitalization. On one hand, you’re creating digital experiences for processes that exist—but are not yet digital. On the other hand, you are trying to build new digital products and services that do not yet exist nowadays for the bank.” 

    Banco Sabadell’s digital innovators took a bold approach to this challenge. They decided that clearly defining a numerical standard for all of their digital products, with the help of QXscore, would help them bring a higher caliber of experiences to the market–and secure the successful launch of any new function or redesign. 

    Solution

    In the past year, Banco Sabadell conducted over 500 tests with UserTesting, evaluating all proposed digital experiences for usability. They tested various functions, including online account creation, bill payments, loan simulations, and site navigation, to boost conversion rates.

    UserTesting's QXscore® metric is crucial for tracking customer usability and satisfaction. The bank believes QXscore correlates with its Net Promoter Score (NPS).

    Silver Bruna, Design Director, states, “UserTesting is integral to our design process. We ensure any design for engineers is validated with UserTesting. QXscore is our quality benchmark.” Every design requires a minimum QXscore of 85. Scores are recorded in Jira for Project Managers to decide if designs can advance to production.

    Results

    Measuring customer perceptions has revolutionized Banco Sabadell’s digital experience development. Shared metrics facilitate collaboration among designers, researchers, and project managers and help communicate with executives.

    This approach was particularly successful in developing the online account signup process. After running over 40 tests, the process was refined, leading to over half of new customers signing up online, a significant shift from in-branch signups. Standardizing processes has increased research project speed by 50%, and UserTesting has enhanced profitability by focusing on customer-valued functions.

    Prove the ROI of experience benchmarking with UserTesting Professional Services

    With an understanding of benchmarking's value in place, combined with the capabilities of QXscore, teams are armed with the tools to not only build a better experience for customers but to measure and prove business impact. 

    Organizations that want to ensure their benchmarking program is of the highest quality can bring on UserTesting’s Professional Services team. This group of research experts can establish a best-in-class benchmarking program, teach teams how to run it, and tie it directly to the KPIs their executive team cares about most.

    Professional services help organizations:

    • Connect the experiences you deliver to organizational outcomes by leveraging QXscore.
       
    • Link customer experience to organizational outcomes by benchmarking the performance of the experiences you deliver and communicating a clear roadmap for action.
       
    • Leverage UserTesting to capture actionable insights and communicate the value of improvements with the end-to-end delivery of a true experience-based benchmark. 

    Our expert research team helps you source participants, perform testing and analysis, and deliver a customized report of recommendations, letting you focus on implementing improvements.

    Depending on your needs, you can benchmark competitive experiences by comparing how your user experience performs relative to up to three experiences. Benchmarking longitudinal experiences helps you establish a baseline measurement of your desktop or mobile experiences that can be compared with future experience changes to quantify performance over time.

    Benchmarking programs run by a team of professionals reveal a wealth of insights you’d never have access to otherwise. Conducting experience research regularly and comparing your progress over time gives you the metrics you need to prove the ROI of your research.

    Experience benchmarking best practices

    For those running benchmarking studies independently, UserTesting allows you to set up, run, and analyze studies in an incredibly short time frame from anywhere in the world.

    Here’s how to get started conducting a benchmarking study of your own.

    1. Identify the metrics to capture

    After identifying your area of focus, it’s important to choose metrics that reflect your objectives and the overall KPIs of your business. What’s the difference between KPIs and metrics? KPIs (key performance indicators) reflect the overall goals of your business, such as revenue growth, retention, or user growth. Metrics are all the measurements that contribute to quantifying these higher-level goals. Typical metrics you could capture include these task-level behavioral measurements (what users did):

    • Task success
    • Task time
    • Pageviews
    • Problems and frustrations
    • Abandonment rate
    • Error rate
    • Number of clicks
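
    As a quick illustration of how the task-level measurements above roll up, here is a minimal sketch that summarizes a handful of session records. The data and field layout are hypothetical, invented for the example.

```python
from statistics import mean

# Hypothetical session records for one task: (completed, seconds, clicks)
sessions = [
    (True, 42.0, 9), (True, 55.5, 12), (False, 90.0, 21),
    (True, 38.2, 8), (False, 120.0, 30),
]

# Task success rate across all attempts
success_rate = 100 * sum(s[0] for s in sessions) / len(sessions)

# Time-on-task and click counts are typically reported for successful
# attempts only, since abandoned attempts skew the averages.
avg_time = mean(t for done, t, _ in sessions if done)
avg_clicks = mean(c for done, _, c in sessions if done)

print(success_rate, round(avg_time, 1), round(avg_clicks, 1))
```

    Tracking these same summaries study after study is what turns raw task data into a benchmark.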

    You may also want to capture attitudinal metrics (what users said). 

    Here’s an overview of common ways to benchmark attitudes in a quantifiable way:  

    • NPS: Net Promoter Score. This is a survey question you can include at the end of your tests. NPS helps you measure loyalty based on one direct question: “How likely, on a scale of 0–10, would you be to recommend this company/product/service/experience to a friend or colleague?” The score is calculated by subtracting the percentage of customers who responded with 0 to 6 (detractors) from the percentage who responded with 9 or 10 (promoters).
       
    • SUS: System Usability Scale. After each usability test, users complete a short questionnaire, and a score is derived from their responses. It uses a Likert scale (a 5- or 7-point scale that offers a range of options from one extreme attitude to the opposite, with a neutral attitude at the midpoint). This ascribes a quantitative (numerical) value to qualitative opinions.
       
    • SUPR-Q: Standardized User Experience Percentile Rank Questionnaire. This is an 8-item questionnaire for measuring the quality of the website user experience, providing measures of usability, credibility, loyalty, and appearance.
       
    • CSAT: Customer Satisfaction Score. This also measures customer satisfaction but doesn’t have the strict question limits of NPS or SUPR-Q: you can ask anything from a single question to a full-length survey. Results are measured as a percentage.
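
    The NPS and SUS calculations described above are standard and easy to sketch. The example responses below are made up; the NPS formula follows the definition given here, and the SUS formula is the standard scoring (each odd-numbered item contributes its score minus 1, each even-numbered item contributes 5 minus its score, and the sum is multiplied by 2.5).

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

def sus(responses):
    """System Usability Scale: ten 1-5 Likert responses -> 0-100 score.
    Odd-numbered items (positively worded) contribute score - 1;
    even-numbered items (negatively worded) contribute 5 - score."""
    assert len(responses) == 10
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # i is 0-based, so even i = odd item
        for i, r in enumerate(responses)
    ]
    return sum(contributions) * 2.5

print(nps([10, 9, 9, 7, 8, 6, 3, 10, 9, 5]))  # 5 promoters, 3 detractors -> 20.0
print(sus([5, 1, 5, 2, 4, 1, 5, 1, 4, 2]))    # -> 90.0
```

    Note that NPS can range from -100 to 100, so treat it differently from the 0–100 scales when charting benchmarks side by side.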

    2. Write your script

    Once you’ve established what you want to track, it’s time to write your test script. For benchmark studies, test scripts should be easy to understand, non-leading, and consistent.

    Remember that to achieve the most accurate results, your test script should be the same for every subsequent test after your initial baseline study and be achievable across any other sites you may be comparing against. 

    For example, if you’re trying to study the journey of a user finding a red duffel bag that costs less than $100, make sure that all the sites you’re testing sell an item that meets those criteria.

    Within UserTesting, you can drag and drop common tasks and questions, write your custom tasks, or even choose from our popular test templates. When benchmarking against your competition, the relative changes in results matter most—not just the score. 

    For example, if you’re benchmarking your brand against a global leader in your industry, your scores may not be as high, and that’s okay. What does matter, however, is how your score changes over time compared to their score. Are you gaining ground? Where are the biggest opportunities for improvement? What inspiration can you take from the user experience of a best-in-class brand?

    Create goal-oriented tasks

    Tasks should be straightforward, with a specific goal in mind. Tasks like “Sign up for an account” or “Find a product you’d like to buy” are good examples. The task should present your test participants with a specific goal without providing them with any guidance on how to accomplish that goal.

    Include evaluative questions 

    Each task should be paired with evaluative questions to help quantify the experience. These typically consist of a rating scale (e.g. “How difficult (1) or easy (5) was it to complete this task?”) or multiple choice (“Compared to other similar sites, was it easier, harder, about the same, or none of the above to complete this task?”).

    3. Identify your participant demographics

    Determine who you’ll want to get feedback from. In most cases, you’ll want to choose participants who match your target market. Consider basic demographics (like age, gender, and income) as well as criteria like job function, whether they are already customers, how often they shop online, or other relevant factors. With UserTesting’s large Contributor Network, you can choose demographics and set up custom screener questions to recruit your exact target audience to your study.

    Number of participants

    For many UserTesting studies, a minimum of 5 is a great starting point. However, if your budget allows, benchmark studies can benefit from using 10-50 participants. A larger sample size helps provide more quantitative data, making it easier to identify trends.
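
    One way to see why larger samples help: the margin of error on a proportion like task success shrinks with the square root of the sample size. Here is a minimal sketch using the standard normal approximation; the 70% success rate is a hypothetical figure.

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% normal-approximation margin of error for a success proportion.
    p is the observed proportion, n the sample size, z the critical value."""
    return z * math.sqrt(p * (1 - p) / n)

# With an observed 70% task-success rate, uncertainty at various sample sizes
for n in (5, 10, 20, 50):
    print(n, round(100 * margin_of_error(0.7, n), 1))
```

    With 5 participants the estimate is very rough, which is fine for spotting usability problems but weak for trend tracking; tightening the interval is the main payoff of the larger benchmark samples.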

    Consistent participant sample sizes

    While you don’t have to run your studies with the same individuals every time, you should use the same number of participants. If you establish your baseline study with 20 participants, then each subsequent study should also use 20 participants.

    Variety in participants

    If you’re benchmarking the user experience across multiple brands, to avoid fatiguing your test participants, don’t have each user evaluate each brand in your study. Instead, use a different set of participants for each experience you want to study and then compare the results objectively.

    4. Validate your tasks

    As you build your study, carefully test each task to make sure the metrics can be captured consistently. When benchmarking competitively, bear in mind that another company’s website can change without warning, so monitor it both while building your study and while the study is live to ensure your tasks still make sense. When you set up any usability test, you must define the tasks and determine what success or failure looks like.

    This means you’ll need to figure out how you validate the success of your task. 

    • Validation by URL: This can be used if task success depends on reaching a specific page or finding content available on a specific page. In usability testing, study participants are asked to perform a series of tasks on a website; they receive instructions for each task and are taken to a specific start URL to begin. Validation can occur on the last page of the process: if a participant navigates to that pre-programmed page URL, they are marked as a success.
       
    • Validation by question: If you’re testing a trading card website and you ask participants to find current financing incentives for a Muggsy Bogues rookie card, then task success can be validated by the question, “What are the current financing offers and incentives available for a Muggsy Bogues rookie card in the 94404 zip code?” Participants are presented with a list of pre-defined answer choices, and those who successfully complete the task should be able to answer correctly.
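
    As a generic illustration of the URL-based approach (this is not the UserTesting API, just the underlying idea), success can be marked by comparing the participant’s final page against a pre-programmed success path:

```python
from urllib.parse import urlparse

def url_task_success(final_url, success_path):
    """Generic sketch: mark a task successful if the participant's final
    page matches a pre-programmed success path, ignoring query strings
    and trailing slashes."""
    return urlparse(final_url).path.rstrip("/") == success_path.rstrip("/")

print(url_task_success("https://example.com/checkout/confirmation?id=42",
                       "/checkout/confirmation"))  # True
```

    Ignoring query strings and trailing slashes matters in practice; otherwise identical destinations reached via different links would be scored inconsistently.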

    5. Soft launch your study

    Benchmark studies deliver increasing levels of value as you run them repeatedly over time. That’s why it’s particularly important to QA (quality assurance) your study and run several dry runs with a small sample of users before your first benchmark study. This minimizes the risk of needing to change your study script or structure in the future, and gives you the assurance that you will be comparing apples to apples. Or, if running tests on a non-grocery site, pick your own metaphor.

    6. Conduct a baseline study

    To conduct effective experience benchmarking studies, you first need to understand where you’re starting from. Once you decide what you’ll be tracking, conduct the studies necessary to establish your baseline. You’ll compare your user experience (and your competitors’) to this over time.

    7. Commit to regular testing

    Establishing a regular testing schedule to track your progress over time is important. Once you’ve established a baseline, you can compare all future measurements against it. Some companies benchmark their digital experiences against their competition quarterly, while others do it monthly. Whatever cadence you choose, be sure to stick with it consistently.

    8. Analyze results

    If you’re planning on conducting several rounds of competitive benchmarking, it’s important to look out for the relative changes in results, not just the absolute change in your score. This is especially true if you’re comparing with competitors of a different scale or brand reputation than yours. If you have a high level of market share, and customers are familiar with your website, then your absolute performance against a new competitor who is less well known may not be the most relevant thing to note. 

    Comparing your overall experience with that of your competitors will be an interesting exercise and may provide the type of engaging content that gets stakeholders interested in research. However, the insights that provide you the direction and pointers on where to improve can be found within your task-level tests. Analyzing relative differences in time on task, ease of use, and task success for different tasks will provide the context of where to investigate further.

    QXscore: Your competitive advantage

    Learn more about how QXscore can transform your products and drive vital KPIs. 

    Frequently asked questions about experience benchmarking