    Get insights faster: a CX guide for agencies

    Whether through focus groups, in-lab testing, or guerrilla research, most agencies with a design practice use customer feedback as part of their process. But there’s a difference between treating it as a box to check and treating it as a differentiator. If you’re selling services on the promise that end-users will value the finished product, you need the proof to back it up.

    What sets the work you do for your clients above and beyond what they could get from another agency? How do you tap into CX insights efficiently when working on billable hours?

    Explaining the why behind their customers’ actions is something most companies can’t do on their own.

    Stories resonate with us as human beings and make information easier to digest and retain. While clients may not be eager to pay for traditional research methods that are complex, costly, and time-consuming, getting a better understanding of your client’s business and customers should be non-negotiable for agencies.

    More and more clients are asking for end-user feedback because budgets are being scrutinized. Clients want to feel confident that they’ll see results before they throw money at a problem. 

    With first-hand customer feedback, teams across your agency can:

    • Uncover real needs and challenges of an account as a starting point for designing better solutions and submitting hyper-relevant RFP responses
    • Make faster, more informed decisions and iterate using genuine feedback 
    • Maintain a customer-first approach throughout your process

    After all, understanding your client’s end-users and getting first-hand insight ensures you provide clients with the best possible solutions for their customers.  

    This guide will reiterate why having a deep understanding of your clients and their end-users is essential. Then, we’ll walk step by step through how to plan, conduct, and analyze studies quickly to get the most valuable insights as fast as possible. Lastly, we’ll talk about scaling this process across your agency to start more projects with existing clients and win over new accounts.

    Know your customers better than they know themselves

    Let’s face it, a lot of companies are out of touch with their customers. According to Capgemini, 75% of companies think they’re customer-centric, yet only 30% of consumers agree. We call this the empathy gap.

    As an agency, part of your job is to design solutions that help bridge this gap. By showing clients a healthy mix of qualitative and quantitative data, you can reinforce that CX is the deciding factor for whether businesses thrive or drift into irrelevance. Guiding them toward the light of customer empathy will not only improve their results but will save you some headaches too. 

    With UserTesting, Liquid Agency ran 12 tests over 3 weeks to build out a campaign. It was 75% faster than traditional research, only cost 25% as much, and gave the agency access to customer insights they would have otherwise missed.

    For us, UserTesting is key not only for sharpening the work we produce for our clients, but also to help build their comfort level with research and feedback. It's as much a means of education as it is a means of enabling customer understanding—a magical intersection that really makes a difference for us and our clients.
    Nathan Sundberg, Strategy Director, Liquid Agency

    Stop being the bad guy 

    While your clients may not always be right, their end-users are. This can put you on an awkward tightrope walk of giving clients what they want (or can afford) versus what’s best for their business. 

    The good news is that clients are reaching out to you because you’re an expert, so it’s up to you to set them on the right path (which also happens to be the path of least resistance for you). As a professional setter of expectations, you have the confidence and skill to tell a client that something of theirs isn’t awesome, and that you can help them make it awesome with customer feedback.

    Why not let your client’s customers tell them first-hand what they like and don’t like? This turns you into the faithful messenger rather than the crusher of ill-conceived ideas. Plus, you can feel more confident about designing solutions that will help your clients move the needle, with the data to back it up.


    Step-by-step guide for getting insights for clients

    All agencies have their process. It helps grease the wheels and keep things moving smoothly. So, where does getting feedback fit into this? The answer is everywhere. 

    Discovery (4-12 weeks)
    • Define the problem
    • Conceptual designs and prototypes

    Client buy-in
    • Present findings to the client
    • Discuss challenges

    Design and build (20+ weeks)
    • Break down features and roadmap
    • Start detailed design work

    With a remote testing option, you can test as frequently as needed. While most of us are accustomed to large, monolithic studies, the point of quick feedback is to gather it early and often. It’s a tool for iterating quickly at scale.

    Here’s how: 

    Define your problem

    The first step toward gathering actionable feedback is defining the problem you’re trying to solve for your client. If you don’t know what information you’re looking to obtain, you risk launching a study that fails to yield actionable insights. Instead, ask yourself: What am I trying to learn?

    You don’t need to uncover every issue or problem in a single, exhaustive study. While it’s tempting to cover as many challenges as possible, running with one specific objective is easier and more productive. You’re more likely to get focused feedback that guides you toward a solution or supports a particular decision or change. According to the Nielsen Norman Group, run as many small tests as you can afford, but test with no more than five end-users at a time. Otherwise, you risk diminishing returns.

    As you define your problem, think about the outcomes, results, or KPIs that will have the most significant impact on your client’s business.

    For example, if you notice that end-users who visit a specific page of your client’s website don’t typically convert, you can run a custom study on that page related to that KPI.
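
    If you want a quick way to spot which pages deserve a focused study, here’s a minimal Python sketch. The page names, numbers, and 2% threshold are all made up for illustration; swap in your client’s analytics export and whatever baseline fits their business.

    # Hypothetical analytics snapshot: visits and conversions per page (made-up numbers).
    page_stats = {
        "/home": {"visits": 12000, "conversions": 480},
        "/womens-shoes": {"visits": 5400, "conversions": 54},
        "/checkout": {"visits": 2100, "conversions": 168},
    }

    THRESHOLD = 0.02  # flag pages converting below 2%; adjust to the client's baseline

    for page, stats in page_stats.items():
        rate = stats["conversions"] / stats["visits"]
        note = "  <- candidate for a focused study" if rate < THRESHOLD else ""
        print(f"{page}: {rate:.1%} conversion{note}")

    A flagged page becomes the subject of its own small study, tied directly to the KPI you and the client agreed on.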

    UserTesting helps us continuously bring the customer back to the center of the conversation.
    Kane Ford, Senior Experience Designer, Chico's FAS

    Example of a problem that’s too complex:
    Can end-users easily find our products and make an informed purchase decision?

    This objective contains three very different components:

    1. Finding a product
    2. Getting informed
    3. Making a purchase
       

    Example of a well-defined problem:
    Can end-users find the information they need?

    Determine the best way to get the answers you seek and create your test

    When creating a new test, you’ll need to set up your tasks and questions, which you can edit until it’s finalized and ready to share. 

    You may have the choice between conducting a recorded test or a live interview. No hard and fast rules exist about which option is the better choice for a given situation, but you should try to pair your expected outcome with the type of study that will support your efforts.

    Find out: which qualitative method is right for you?

    Unmoderated test

    A self-guided test or study, also known as an unmoderated study, is unsupervised and completed by participants on their own time. Create the study with a series of prompts, including tasks and questions, which the participant follows as they complete the study. You’ll receive a recording of each completed session, showing the participant’s screen as they narrate their opinions and actions.

    Recorded tests are great if you have a product, like a website or an app, and you want the participants to go about an activity or action as they would in real life. Because participants are completing this study on their own and in their native environment, it’s as close as you can get to being “a fly on the wall”—observing how the consumer would naturally act and make decisions. 

    Moderated test

    Moderated tests, or user interviews, are great if you’re looking to investigate a topic, have a free-form conversation, such as a discovery call, or are early in your process and still learning about customer pain points and problems. 


    Identify the experience or flow you’re testing

    Next, you need to share the experience or flow you want to get feedback on. This could include live properties, such as websites and apps, unreleased products, prototypes, or landing pages and apps still in development. 

    Conducting a study on live assets entails providing a user with a URL or the app’s name in a digital marketplace (such as the App Store or Google Play). Similarly, many prototyping tools, such as InVision, and file hosting solutions, such as Google Drive or Dropbox, let you copy a shareable link.

    Pro tip: When sharing a URL, be sure to open sharing permissions so that all users can view the content. 

    Identify users

    In many cases, it’s helpful to get feedback from a wide range of users, spanning age, gender, and income level. This helps you understand how a broader audience perceives the experience. This way, you can determine whether it’s easy to navigate and relevant to many or whether it presents challenges to some groups.

    Or perhaps you have a particular customer profile or persona in mind, and you want to make sure that you get feedback from people who meet a specific list of criteria.
                
    It’s been demonstrated that five participants will uncover 85% of the usability problems on a website and that additional users will produce diminishing returns. So resist the temptation to “boil the ocean” by doubling or tripling the number of participants in an attempt to uncover 100% of your usability problems. It’s easier and more efficient to run a study with five participants, make changes, run another study with a different set of five users, and continue iterating until all significant challenges are resolved.
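
    If you want to see why the returns diminish, here’s a back-of-the-envelope Python sketch of the commonly cited Nielsen/Landauer model, which assumes each participant uncovers roughly 31% of a product’s usability problems. The 31% figure is an industry average, not a guarantee for any specific product.

    # Nielsen/Landauer model: share of usability problems found by n participants,
    # assuming each participant uncovers about 31% of the problems (commonly cited average).
    FOUND_PER_USER = 0.31

    for n in (1, 3, 5, 10, 15):
        share = 1 - (1 - FOUND_PER_USER) ** n
        print(f"{n:>2} participants -> ~{share:.0%} of problems found")

    Five participants land at roughly the 85% mark cited above, while each additional participant adds less and less.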
                        
    If you’re looking for trends and insights beyond fundamental usability issues, including a larger sample size is helpful. We recommend five to eight users per audience segment. So, for example, if you have three distinct buyer personas, you’ll want to set up each persona within a study for a total of 15–24 users overall.
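
    The arithmetic is simple enough to sanity-check in a few lines. This Python sketch uses hypothetical persona names together with the five-to-eight-per-segment guidance above:

    # Recommended participants per audience segment (per the guidance above).
    MIN_PER_SEGMENT, MAX_PER_SEGMENT = 5, 8

    personas = ["Bargain hunter", "Brand loyalist", "Gift shopper"]  # hypothetical segments

    low = len(personas) * MIN_PER_SEGMENT
    high = len(personas) * MAX_PER_SEGMENT
    print(f"{len(personas)} personas -> plan for {low}-{high} participants overall")
    # Prints: 3 personas -> plan for 15-24 participants overall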

    Build your test plan for unmoderated studies

    If you’re creating an unmoderated test, start assembling your study by creating a test plan. Your test plan is the list of instructions your users will follow, the tasks they’ll complete, and the questions they’ll answer during the study. 
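
    If it helps to picture the pieces before you open any testing tool, here’s a rough sketch, not tied to UserTesting’s actual interface or API, that models a test plan as one objective, an ordered list of tasks, and a handful of follow-up questions. The objective, tasks, and question are borrowed from examples elsewhere in this guide.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Task:
        prompt: str  # the instruction the participant sees
        kind: str    # "broad" or "specific"

    @dataclass
    class TestPlan:
        objective: str                 # the single problem this study should answer
        tasks: List[Task] = field(default_factory=list)
        questions: List[str] = field(default_factory=list)

    plan = TestPlan(
        objective="Can end-users find the information they need?",
        tasks=[
            Task("Explore this website for a few minutes, speaking your thoughts aloud.", "broad"),
            Task("Find a pair of shoes to purchase to wear to a formal event.", "broad"),
            Task("Go to the women's shoes page and select a red sneaker to buy.", "specific"),
        ],
        questions=["What three words would you use to describe this website?"],
    )

    Ordering the tasks from broad to specific in the plan mirrors the logical flow recommended later in this guide.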

    Create a series of tasks

    Tasks are the actions that you want the user to complete. One task is completed before moving on to the next, so keep task order in mind; it will guide the user from start to finish.
                            
    When creating tasks, focus on the specific problem you’re trying to solve. If you have multiple areas you want to test, break them up into different studies. In most cases, it’s best to keep your study to around 15 minutes, so keep this in mind as you plan your tasks. Consider using both broad and specific tasks.

    Broad tasks are open-ended and give users minimal explanation of how to perform an activity. They allow you to learn how users think and are helpful when seeking insight on branding, content, layouts, or any “intangibles” of the customer experience. Broad tasks are suitable for observing natural user behavior (what people do when given choice and freedom), which can yield valuable insights on usability, content, aesthetics, and emotional responses. Keep in mind that answers may vary significantly from one test participant to the next, so be prepared for diverse feedback when using broad tasks.

    Example: Find a pair of shoes to purchase to wear to a formal event. Share your thoughts out loud as you go. 
                                            
    Pro tip: When you’re not sure where to focus your test, use broad tasks like “explore this website or app for 10 minutes, speaking your thoughts aloud.” You’ll surely uncover new areas to study in a more targeted follow-up test. 

    Specific tasks are defined activities that users must follow. They provide the user with clear guidance on what actions to take and what features to speak about. Users are instructed to focus on a specific action, webpage, or app and talk precisely about what they think about the experience. 

    Specific tasks are best for situations in which you have identified a problem area or a defined place where you desire feedback.

    Example: Go to the women’s shoes page and select a red sneaker to buy. Explain how you go about doing this.


    Help clients access larger, more diverse audiences

    Adobe knew the average Photoshop user was having trouble leveraging a new feature, but they couldn’t connect with enough users of varying skill levels and expertise to find out more. With UserTesting, Adobe made a number of updates that positively impacted CX for the majority of their users.


    Help clients build empathy

    When The Home Depot launched an initiative to create meaningful experiences by better understanding customer emotion, they needed a tool that could pinpoint “the why” behind customer stories. With UserTesting, The Home Depot continues to increase customer empathy through product enhancements and more. 


    Help clients adopt customer-centricity

    When Costco Travel planned to launch a new feature, they had an idea for how it should work. However, the experience wasn’t intuitive for customers. With UserTesting, Costco Travel took a more customer-centric approach to the design, which increased bookings and led to 82% fewer support calls.

    Use a logical flow when planning tasks

    The structure of your study is essential. We recommend starting with broad tasks, such as exploring the home page, and then moving in a logical flow toward specific tasks, such as searching the site, finding an item, and adding it to the shopping cart in preparation for a purchase. The more natural the flow, the more realistic the study will be, putting your user at ease and enabling better, more authentic feedback.

    Example: Explore the global navigation options > search the site > find an item > create an account > checkout

    Your test plan should align with solving the problem you initially set for your study. If you’re interested in discovering the user’s natural journey or acquiring a better understanding of their motives and rationale as they navigate your products, give them the freedom to use the product in their own way. However, if you’re more focused on understanding attitudes and behaviors in a well-defined context, be specific in the tasks you assign and the questions you ask.

    Starting broad and moving to specific tasks is essential if you suspect that a task is complicated or carries a high risk of failure. Putting these tasks near the end of the study prevents the user from getting stuck or thrown off track early on, which could negatively impact the remainder of your study.

    Once you’ve mapped out a sequence of tasks in your test plan, start writing the questions. 

    Use clear, everyday language

    You want to provide a seamless experience for users completing your study. Avoid industry jargon or phrases that users might not know. Terms like “sub-navigation” probably won’t resonate with the average user, so don’t include them in your questions unless you’re confident users will understand.

    Include a time frame

    If you’re asking about some sort of frequency, such as how often a user visits a particular site, make sure you define the timeline clearly at the start of the question. This ensures consistency and accuracy in the responses that you receive. 
            
    Example: In the past six months, how often did you visit this website?    

    Pro tip: Avoid asking questions about what participants are likely to do in the future. This is not sound data. People don’t always do what they say or think they will do. 

    Frame questions to get standardized responses

    Gathering opinion and preference data can be tricky. To collect actionable insights that support changes and decisions (instead of a range of diverging views), standardize the experience so users answer the same question and provide helpful feedback.

    Example: What three words would you use to describe this website?

    Ensure rating scales are balanced

    Be fair, realistic, and consistent with the two ends of a rating spectrum. If you’re asking users to select the more accurate response from two options, ensure they’re weighing each end of the spectrum evenly.
                        
    Example of an unbalanced scale:

    “After going through the checkout process, to what extent do you trust or distrust this company?” 

    I distrust it slightly ←→ I trust it with my life

    Example of a balanced scale:

    “After going through the checkout process, to what extent do you trust or distrust this company?” 

    I strongly distrust this company ←→ I strongly trust this company

    Separate questions to dig into vague or conceptual ideas

    Some concepts are complex and can mean different things to different people. For example, satisfaction is a difficult concept, and people may weigh different factors when deciding whether or not they’re satisfied with a product or experience.

    To ensure that your study yields actionable feedback, break up complex concepts into separate questions. Then, consider all of the responses in aggregate when analyzing the results. You can even create a composite “satisfaction” rating based on the results from the smaller pieces. 
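
    As a minimal sketch of that aggregation, assuming equal weights and the 1-to-5 scales used in the examples that follow (where 1 is the most positive response), a composite could be computed like this:

    # Hypothetical component ratings from one participant (1 = most positive, 5 = most negative).
    component_scores = {
        "ease_of_use": 2,
        "visual_design": 1,
        "content_relevance": 3,
    }

    # Equal-weight composite; swap in weights if one component matters more to the client.
    composite = sum(component_scores.values()) / len(component_scores)
    print(f"Composite satisfaction score: {composite:.1f} (1 = best, 5 = worst)")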

    Example of question on a complex concept: 

    On a scale of 1 to 5, with 1 being “Very satisfied” and 5 being “Very dissatisfied,” how would you rate your satisfaction with this online catalog?

    Example of breaking up questions on a complex concept:  

    On a scale of 1 to 5, with 1 being “Very easy to use” and 5 being “Very difficult to use,” how would you rate the ease of use of this online catalog?

    On a scale of 1 to 5, with 1 being “Very appealing” and 5 being “Very unappealing,” how would you rate the visual design of this online catalog?

    Remember, one of the great benefits of qualitative research is getting to the “why” of human behavior. If you find yourself asking why a person might respond a certain way, you have the opportunity to include additional questions to better understand the source of an opinion.

    Avoid leading questions

    When asking questions, you can inadvertently influence the user’s response by including slight hints in the questions’ phrasing. In doing so, you skew the outcome of your studies, because you get biased, unreliable responses and results. Double-check that your questions are neutral, unbiased, and free of assumptions.
                                        
    Example of a leading question: 

    “How much better is the new version than the original home page?”

    Example of better wording for a similar question: 

    “Compare two versions of the home page. Which do you prefer?”  

    Launch a dry run

    Before launching your study to all users, we recommend conducting a dry run (sometimes called a pilot study) with just one or two users. This allows you to identify flaws or confusing instructions within your original test plan and make improvements before launching it fully.

    Review the responses and listen as participants work through each task and question. Take note of any trouble they encounter or of any responses that fail to provide the type, level, or quality of insight you expect.

    You may discover that the problem your client thinks they need to solve is not as big of a priority to their end-users as they believe it to be. You may find that an entirely different issue burdens end-users. So, don’t be afraid to follow where the data takes you. This is the kind of valuable insight that your clients need the most. 

    Pro tip: If you ask a user to complete a purchase on a website, provide a promotional code.

    Analyze your results

    Once you receive results, analyze the responses and begin extracting insights to support your recommendations.

    As you review answers to your questions, keep an eye out for similar responses and themes or any significant deviations. If a substantial number of users provide similar feedback, this could signal an issue that impacts a broader customer base and deserves extra attention. If one user or a small number of users share a unique piece of feedback, you can home in on that particular video to better understand why that user had such a different experience.
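
    For a rough first pass before you watch every video, a few lines of Python can tally recurring words across open-ended responses so you know which themes to review by hand. The responses and stopword list here are made up for illustration:

    from collections import Counter

    # Hypothetical open-ended responses collected from a study.
    responses = [
        "The checkout felt slow and the shipping options were confusing",
        "Loved the photos but checkout was confusing",
        "Easy to browse, though the filters were slow to load",
    ]

    STOPWORDS = {"the", "and", "was", "were", "to", "but", "a", "though"}

    word_counts = Counter(
        word.strip(",.")
        for response in responses
        for word in response.lower().split()
        if word.strip(",.") not in STOPWORDS
    )
    print(word_counts.most_common(5))  # repeated words like "checkout", "confusing", "slow" hint at themes

    Treat the tally as a pointer to the right recordings, not as a substitute for actually watching them.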

    Take note of user frustrations, as well as items that users find particularly helpful or exciting, so you can share both with clients. By knowing what people love about the experience, you avoid fixing something that isn’t broken. Learning what customers struggle with or enjoy is what qualitative studies are all about.

    One of the ways that CarMax makes superior products is by empowering its product team with customer empathy. The CarMax product team tests everything, which gives team members a sense of ownership over their projects. They connect with audiences throughout the design process to quickly evaluate, validate, and iterate on ideas that impact revenue and conversions.

    What I love about UserTesting is that we can launch a test and minutes later we are getting valuable customer feedback.
    Chip Trout, Sr. Manager, Product Design, CarMax

    Share your findings

    After you’ve uncovered insights, present them to your client along with your solution.

    It’s essential for everyone to remain objective and neutral when presenting and hearing feedback. Be careful not to place blame on your clients. If you have a lot of negative findings, choose your words carefully. For example, “users found this feature frustrating” is much easier to hear than “this feature is terrible.”

    While receiving negative feedback can be challenging, frame it as an opportunity for clients to improve their products and experience for customers. Encourage them to ask questions about the findings instead of making excuses. 

    Scale insights and win new business

    Now you understand how to leverage feedback from your clients’ end-users, and you’ve set your clients on the path toward creating great, customer-centric experiences. All that’s left is to do exactly the same for yourself. It’s time to walk the walk.

    Baking customer-centricity into your process is like joining a gym. At first, it might feel super awkward and you’ll probably be sore the next day. Over time, it gets easier and you’ll get stronger and more confident in your ability to build better CX.

    Be an example of customer-centricity for your clients

    We encourage you to gather customer insights throughout your development process and across multiple teams and departments.

    Get feedback early and often. This will help you better understand your clients’ customers’ pain points and challenge your clients to do more. Share insights with clients as proof that you’ve done the hard work and you know what you’re talking about. Leverage feedback when responding to RFPs so that you have the edge over the competition.


    Get actionable insights today

    Uncover human insights that make an impact. Book a meeting with our Sales team today to learn more.