Checkout testing is one of the most important things you can do for your ecommerce site, yet many businesses don’t prioritise it. Without testing, you’re essentially running a checkout without knowing what makes people convert and what causes them to abandon.
Dummies guide if you’re new to this terminology
[If you already know what this is, skip to the next heading!]
In ecommerce, AB and MVT (multivariate) testing are two methodologies for identifying the best-performing version of a content asset or webpage.
AB (more accurately ABn) tests two or more versions of the same asset against each other in real time. This requires a separate webpage for each test option; for example, you might test three different versions of your checkout sign-in page.
MVT offers a more sophisticated testing framework, enabling you to test different combinations of multiple assets within a single webpage simultaneously. This means you can test more variations, more quickly, with MVT than with AB. With MVT, you don’t need to create a separate webpage for each combination; the variants are served dynamically.
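To see why MVT scales better, it helps to count the combinations. This is a minimal sketch with hypothetical page elements (the headline, button and layout variants below are invented for illustration): an ABn test would need a separate page per combination, while MVT serves them dynamically from one page.

```python
from itertools import product

# Hypothetical elements under test on a single checkout page
headlines = ["Secure checkout", "Fast checkout", "Easy checkout"]
buttons = ["Pay now", "Complete order"]
layouts = ["single-column", "two-column"]

# Every combination of the three elements
combinations = list(product(headlines, buttons, layouts))
print(len(combinations))  # 3 x 2 x 2 = 12 combinations
```

With plain ABn you would need 12 separate pages to cover the same ground; add one more two-variant element and the count doubles again.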
Testing should start with a goal and hypothesis. Don’t run a test until you know what you’re trying to prove and why. To create your hypothesis, there needs to be an issue or challenge to address, in other words a reason to test. Usually an issue is flagged via web analytics data, customer feedback or internal review. The data is then interrogated to build context from which the testing hypothesis can be defined.
A good example is this case study from The Olympic Store. The goal was to reduce checkout abandonment, and the hypothesis was that the existing multi-step checkout increased the opportunities for exit. In the test, version B converted at 42.9% vs. 35.2% for the control.
Version A – multi-page checkout (control)
Version B – single page checkout (test)
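As a quick sanity check on those numbers, here is a minimal sketch of the lift calculation using the rates quoted above:

```python
# Conversion rates from the case study
control_rate = 0.352  # Version A, multi-page checkout
variant_rate = 0.429  # Version B, single-page checkout

absolute_lift = variant_rate - control_rate
relative_lift = absolute_lift / control_rate

print(f"Absolute lift: {absolute_lift:.1%}")  # 7.7 percentage points
print(f"Relative lift: {relative_lift:.1%}")  # roughly a 21.9% relative improvement
```

Quoting both the absolute and relative lift matters: "42.9% vs. 35.2%" sounds modest, but it's more than a fifth better in relative terms.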
Nailing down your goal & hypothesis is essential; without them the test is undirected and you’ll end up with results that are hard to interpret. Take the example below from Dell for their order confirmation page. At first glance, Version B looks better for customers because it has more service-related information and useful links. However, the goal of the test was to increase follow-on sales after purchase, not to improve customer information. In this context, Version A was designed to drive cross-selling activity and actually increased follow-on conversions by 15.3%.
Testing divergent conversion paths
Top-level conversion funnel data is often fundamentally flawed for ecommerce sites: it assumes a linear path, yet many checkouts comprise divergent paths based on visitor decisions. For example, multi-channel retailers offer both home delivery and collect in store, branching the checkout process.
The risk of optimising the checkout from a linear perspective is that you focus on the wrong areas and miss key touchpoints that drive abandonment. For example, one retailer used a flat conversion path for the checkout that bundled home delivery with collect in store. The conversion rate was slowly increasing vs. the previous year. All was good, right?
Actually, no. When the collect in store data was stripped out, the home delivery checkout funnel was performing worse than the prior year. The collect in store path was so successful that it masked conversion problems in the other checkout flows. The testing programme had been focused on the wrong areas and was quickly changed.
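The masking effect is easy to reproduce. This is a sketch with hypothetical session and order counts (not the retailer's real figures): the blended rate improves year on year even though the home delivery path is declining, because the collect in store path grew strongly.

```python
# Hypothetical (sessions, orders) by fulfilment path, previous vs. current year
funnel = {
    "home_delivery":    {"prev": (100_000, 3_500), "curr": (100_000, 3_200)},
    "collect_in_store": {"prev": (20_000, 1_000),  "curr": (40_000, 3_000)},
}

def conv(sessions, orders):
    return orders / sessions

# Per-path rates: home delivery is actually declining year on year
for path, years in funnel.items():
    print(path, f"{conv(*years['prev']):.2%} -> {conv(*years['curr']):.2%}")

# Blended rate across both paths: looks like steady improvement
blended = {
    year: conv(sum(funnel[p][year][0] for p in funnel),
               sum(funnel[p][year][1] for p in funnel))
    for year in ("prev", "curr")
}
print(f"blended: {blended['prev']:.2%} -> {blended['curr']:.2%}")
```

Segmenting the funnel by path before drawing conclusions is the whole lesson here: the aggregate number answered the wrong question.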
In Anil Batra’s blog post “Most likely your conversion rate is wrong” (from 2011 but still relevant), he uses the example of a checkout with an order-by-phone CTA. His argument is that the linear conversion funnel ignores conversions on this divergent path, so it either inflates or deflates the true conversion rate.
Micro conversion testing
A less well-known quirk of checkout testing is targeting micro-conversions. Top-level funnels are good for reporting but not so great for driving optimisation decisions. Take the example of using live chat during the checkout. Let’s assume that in a 5-step checkout you see the biggest exit between steps 3 and 4. A logical choice is to focus your AB tests there, right?
However, how do you know what to test? What is driving the exit? If you look at the micro-conversions taking place, you often get a clearer picture. Let’s say you isolate visits that use live chat and then discover that these have a much higher exit rate than non-chat visits. You’re able to pinpoint a problem, not necessarily ecommerce related, and factor this into the testing program.
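In practice this is a simple segmentation exercise. Here's a minimal sketch with hypothetical visit counts (invented for illustration) comparing exit rates between steps 3 and 4 for chat and non-chat visits:

```python
# Hypothetical visits entering step 3 and progressing to step 4, split by chat usage
segments = {
    "used_chat": {"entered": 1_200,  "progressed": 540},
    "no_chat":   {"entered": 18_800, "progressed": 14_100},
}

# Exit rate = share of visits that entered step 3 but never reached step 4
exit_rates = {
    name: 1 - s["progressed"] / s["entered"]
    for name, s in segments.items()
}
for name, rate in exit_rates.items():
    print(f"{name}: {rate:.0%} exit between steps 3 and 4")
```

If the chat segment exits at a much higher rate than the non-chat segment, the problem may lie with the chat experience itself rather than the page, which changes what you test.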
When there are external factors affecting checkout performance (e.g. quality of live chat support), then a pure focus on testing ecommerce page elements may miss the problem and you’ll be scratching your head as to why results aren’t improving.
Statistical significance of results
Ecommerce is always in a hurry, but your tests aren’t; they’re waiting for a sample size large enough for the results to be robust enough to draw conclusions from. You may find an ‘early winner’ and the temptation is to rush off and implement changes.
Be careful. The performance of test variations can fluctuate during the early phase of a test due to small samples and sampling bias; wait for a long-term pattern to emerge before committing to a decision.
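One common way to check whether a difference is real is a two-proportion z-test. This is a minimal sketch using only the Python standard library; the conversion counts are hypothetical, and the point is that the same relative lift can be noise at a small sample size and significant at a larger one.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical: a 10% relative lift (5.0% -> 5.5% conversion)
z_small, p_small = two_proportion_z(50, 1_000, 55, 1_000)        # small sample
z_large, p_large = two_proportion_z(2_500, 50_000, 2_750, 50_000)  # large sample

print(f"small sample: p = {p_small:.3f}")  # well above 0.05, could be noise
print(f"large sample: p = {p_large:.4f}")  # below 0.05, significant at 95%
```

The ‘early winner’ in the small-sample run is exactly the trap described above: the observed lift is identical, but only the larger sample supports a decision. (Note that repeatedly peeking at a running test and stopping at the first significant result inflates the false-positive rate.)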
Don’t forget the role of voice-of-customer data
As a parting thought, I wanted to remind you that using web analytics data and testing to optimise the checkout, whilst good practice, isn’t in itself going to solve the riddle. The data tells you what is happening, but not why. Be sure to integrate voice-of-customer activity such as on-site surveys and user testing to help build a clear picture.
Here are our top tips for AB & MVT testing in checkouts:
- Start each test by clearly defining the goal and hypothesis to help you plan a structured test.
- Identify micro-conversion paths and drill down to find out where problems lie.
- Don’t stop testing; visitor mix changes over time and technology isn’t foolproof, so don’t assume that just because the checkout works today you don’t need to test in future.
- Don’t use AB/MVT testing in isolation – learn to mix with voice-of-customer data to understand why things are happening.
Comments and questions
What do you think? Please drop by and share your comments, questions and experience. Please also share any relevant links you think readers would be interested in.
- There’s a useful reference post on statistically significant test results by Anil Batra.
- ABTests.com is a helpful repository of AB test case studies.
- Whichtestwon.com is a brilliant online resource for test results – I recommend going for the subscription to get access to the main vault of case studies.
- Corey Eridon wrote a helpful blog on The Critical Differences Between AB and Multivariate Tests for Hubspot.
- Elisa Gabbert wrote up surprising test results from 24 marketing experts for Wordstream.