How to generate ideas for CRO tests and how often should you run them?

Posted on July 20, 2023
4 min read


For businesses looking to provide a better user experience and improve conversion rates, there is a range of methods available, including voice of customer surveys, analysis of customer journeys, basket abandonment emails, and testing.

I’m not going to say that one method is better than another, as the truth is that the smartest marketers will use a blend of these techniques to find what works best for their own website.

However, this article will look at testing, which is the basis of much work around conversion rate optimization. Testing is vital as it allows businesses to put their ideas and theories into practice, providing clear, quantifiable evidence of how changes affect the behavior of users on a site.

Testing should remove guesswork from the equation. While many people may have ideas about how a website should look and which features are important, testing provides proof of what does and doesn’t work.

I asked several CRO practitioners and retailers for their views on testing and how they generate ideas for it.

Where do ideas for tests come from?

You need to find a starting point for testing, and this is where other CRO methods come into play. For example, analytics may help you to identify a page with higher-than-normal drop-out rates, or customers may provide feedback that points to an issue with checkout forms.

Armed with this information, you can observe users on the site to gain more insight, or carry out A/B tests to try out solutions.
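
To make that concrete, here’s a minimal sketch of the kind of analytics pass that might surface a leaky page. The sample figures and the 1.5x threshold are illustrative assumptions, not a prescribed method.

    # A sketch of flagging pages with higher-than-normal exit rates from
    # simple analytics counts. Sample data and the 1.5x threshold are
    # illustrative assumptions.

    # page -> (sessions that viewed the page, sessions that exited on it)
    page_stats = {
        "/home": (10_000, 2_100),
        "/product": (6_000, 1_500),
        "/checkout": (2_000, 1_100),  # suspiciously leaky
    }

    exit_rates = {page: exits / views for page, (views, exits) in page_stats.items()}
    site_average = sum(exit_rates.values()) / len(exit_rates)

    for page, rate in exit_rates.items():
        if rate > 1.5 * site_average:  # flag pages well above the site norm
            print(f"{page}: {rate:.0%} exit rate vs. {site_average:.0%} site average")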

Stuart McMillan, Deputy Head of Ecommerce at Schuh, looks at three main sources for tests:

  • A mixture of analytics and user testing.
  • Mystery shopping, combined with analytics and user testing.
  • Business need. For example, testing whether certain content provides value for money.

Ideas can come from various sources within the company too. Sean Collins, Head of CRM at Mr & Mrs Smith, regularly asks everyone in the company for ideas:

“The key is to then make an open prioritization session out of all the ideas so you pick the best ideas, not just the high-profile ones. And say thank you and credit the person who identified it.”

Paul Rouke, Founder and CEO at PRWD, also recommends using internal sources, such as ensuring that customer service teams are capturing and grouping feedback.

Another route is to analyze completed tests and session recordings in detail, both to identify other areas to improve and to generate new variations of those tests.

As Orangeclaw’s Chris Lake points out, there are lots of ideas out there already: “Ideas can come from site data, user research, customer feedback, team suggestions, competitor benchmarking, research, blog posts, events, and so on. I have a database of around 1,000 ideas for testing, which I cross-reference when analyzing sites.”

How often should you test?

UX and CRO are closely related, in that both are working toward the same goal of improving site performance. There may be some conflicts where end goals differ, but providing the best possible user experience generally serves the interests of both users and retailers.

As with improving usability, there’s always unfinished business with CRO.

You may have a great site, generating lots of sales, but that should be viewed as a temporary state of affairs – there are always ways to improve, and a need to keep up with the competition.

Testing should be part of this continuous optimization process, whether it’s user testing or A/B testing, or preferably both.

So, there’s a need for continuity, but there’s still the question of how many tests you need to generate reliable insights. Our contributors add the important caveat that quality should be prioritized.

Stuart McMillan emphasizes the need for statistical significance:

“This is a very rough rule of thumb, but look at the number of conversion events on your website per month, divide by 2,000 and that’ll roughly be the number of A/B tests you can run in a month. Why 2,000? Well, assuming a 50/50 split, that should be enough to either get statistical significance or to be fairly sure that running it for longer won’t improve the statistical significance.”
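
Expressed as code, the heuristic is simple arithmetic. Here’s a minimal sketch of Stuart’s rule of thumb; the traffic figure in the example is an assumption.

    # A sketch of Stuart McMillan's rule of thumb: monthly conversion events
    # divided by 2,000 gives a rough ceiling on A/B tests per month.
    # The example traffic figure below is an assumption.

    def max_tests_per_month(monthly_conversions: int, events_per_test: int = 2_000) -> int:
        """Rough number of A/B tests a site can support each month."""
        return monthly_conversions // events_per_test

    # Example: a site recording 14,500 conversion events a month
    print(max_tests_per_month(14_500))  # -> 7, so roughly seven tests a month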

There's another important, albeit slightly separate, point to consider: testing isn't solely about achieving positive outcomes. As Stuart points out, a test is considered a failure when the data is unreliable, not when a preferred variant doesn't win or the results are inconclusive.

“If your variant wins, great: you’ve got some new functionality that will make you money. But what if it lost and the control won? Well, the test was still interesting: why did this new fancy design, which is supposed to be better for customers, not actually make it better? What if it was a draw and they both had the same effect? That is also interesting; why are two quite different designs functionally equivalent to customers?”
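
For readers who want to see how “win,” “loss,” and “draw” get decided in practice, here’s a hedged sketch that classifies a finished 50/50 test using a standard two-sided two-proportion z-test. This is textbook statistics rather than anything Stuart prescribes, and the 5% significance level and sample counts are assumptions.

    # A sketch of classifying a finished 50/50 test as a win, loss, or draw
    # using a two-sided two-proportion z-test. The 5% significance level
    # (alpha) and the example counts are assumptions.
    from math import sqrt, erfc

    def classify(conv_a: int, n_a: int, conv_b: int, n_b: int, alpha: float = 0.05) -> str:
        p_a, p_b = conv_a / n_a, conv_b / n_b
        pooled = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
        se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_b - p_a) / se
        p_value = erfc(abs(z) / sqrt(2))                  # two-sided p-value
        if p_value >= alpha:
            return "draw (inconclusive)"
        return "variant wins" if p_b > p_a else "control wins"

    # Example: control converts 50 of 1,000 visitors; variant converts 75 of 1,000
    print(classify(50, 1_000, 75, 1_000))  # -> "variant wins"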

Then there’s the vital issue of putting quality first, a point well made by Paul Rouke:

“Before even thinking about how often you should carry out tests, put quality first – quality of the research, quality of the data analysis, quality of the hypothesis, quality of the UX design, quality of the copywriting. Once you establish quality as the foundation of your A/B testing efforts, then quantity of testing becomes a consideration. It’s the difference between sanity and vanity metrics in conversion optimization.”

Businesses that take CRO and UX seriously will allow their strategy to be driven by customer insight.

The quantity of tests should be secondary to their quality: the quality of the data used for analysis, the reliability of the hypothesis, the quality of the design, and more all matter far more than raw volume.
