The fundamental purpose of an A/B test is to improve the user experience and, in turn, increase the chance of converting visitors into leads. While you might already have several ideas you want to A/B test, you should pick the ones that are grounded in data-backed reasoning. In testing terms, the crux of a solid A/B test is its hypothesis.
Building a solid, data-backed hypothesis to guide your A/B testing is a good practice that enhances the quality of subsequent tests, builds your testing skills, and sharpens your intuition.
What is a hypothesis? And why is it important that you craft one so carefully?
A hypothesis is a theory that you form after rigorous data collection and then either validate or reject through testing.
Having a solid testing hypothesis will help you gain clear insights from a test, even if it is inconclusive. It also makes it easy to map conversion lifts to particular tests, which is useful when you want to reverse-engineer what caused a substantial lift.
Create a hypothesis primed to get you actionable A/B test results by making sure it checks off the three boxes below. Your hypothesis should include:
A problem identified through quantitative and qualitative analysis
A change that solves the problem
A quantifiable goal that measures the impact of the change
Here's a template you can use while deciding your A/B test hypothesis:
By [doing x], my visitors will [benefit y], which I can measure through [metric z].
Let's explore each of those three hypothesis elements in further detail.
Running quantitative and qualitative analysis to identify the problem [y]
Quantitative analysis:
Track website metrics to zero in on pages that receive relatively high traffic yet also show a high number of bounces or drop-offs. Then analyze visitor behavior on these pages with heatmaps, funnel analysis, and session recordings. Reports from each of these will reveal points of friction on your web pages.
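If you want a quick first pass at surfacing such pages from a raw analytics export, a short script can help. The following is a minimal sketch that assumes a hypothetical CSV export with page_url, sessions, and bounces columns and uses arbitrary thresholds; adjust the file name, column names, and cut-offs to match what your analytics tool actually provides.

```python
# Minimal sketch: flag high-traffic pages with a high bounce rate.
# Assumes a hypothetical analytics export "pageviews.csv" with columns:
# page_url, sessions, bounces (real exports will use different names).
import pandas as pd

df = pd.read_csv("pageviews.csv")

# Bounce rate per page = bounces / sessions
df["bounce_rate"] = df["bounces"] / df["sessions"]

# Arbitrary thresholds: traffic above the median and bounce rate above 50%.
high_traffic = df["sessions"] > df["sessions"].median()
high_bounce = df["bounce_rate"] > 0.50

candidates = df[high_traffic & high_bounce].sort_values("bounce_rate", ascending=False)
print(candidates[["page_url", "sessions", "bounce_rate"]])
```

Pages that show up here are the ones worth inspecting more closely with heatmaps and session recordings.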
Qualitative analysis:
Build a one-on-one connection with your visitors using on-site polls, in-app surveys, and usability tests. Collect feedback about user experience and isolate instances that prevent visitors from converting.
When you combine data from both of these analysis types, you can identify a problem your visitors are facing and give priority to the one that is costing you the most conversions.
For example, say you run a feedback poll and find that visitors are dropping off on the product page. In their responses, visitors say they aren't sure whether the product fits their requirements. This might be because neither the images nor the copy covers the product specifications clearly. This is the first element of our hypothesis.
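If your analyses surface more than one problem like this, a rough estimate of how many conversions each one costs you can help decide which to test first. The sketch below uses entirely hypothetical pages and numbers purely to illustrate the ranking logic.

```python
# Rank problems by a rough estimate of lost conversions, using hypothetical
# numbers: monthly drop-offs at the friction point and the share of poll
# respondents who cited that problem.
problems = [
    ("Product page: unclear specifications", 3200, 0.45),
    ("Checkout: surprise shipping cost", 1100, 0.60),
    ("Pricing page: confusing plan names", 900, 0.30),
]

# Estimated lost conversions = drop-offs * share of visitors citing the problem.
ranked = sorted(problems, key=lambda p: p[1] * p[2], reverse=True)
for page, dropoffs, share in ranked:
    print(f"{page}: ~{dropoffs * share:.0f} potentially lost conversions per month")
```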
Crafting a change that solves the problem [x]
You can make as many changes as you like to the treatment/variation (your solution), as long as they share a common theme and work together to solve the same problem.
Let's consider the previous example: The changes you plan to test can include adding a product specification table in the copy and updating the product images to ones with better context—say ones with measurements and perspective. So now we have the second element of our hypothesis.
Setting measurable goals for validating the hypothesis [z]
Make sure the treatment has a metric you can use to measure the effectiveness of the change. For instance, in our case, tracking the number of purchases from the product pages on the control and treatment can accurately quantify the impact of the change. That gives us the last element of our hypothesis. So, for the example we discussed, our final hypothesis will look something like this:
By adding a product specification table in the copy and updating the product images to ones with better context, my visitors will have a better understanding of the product and its capability, which I can measure through successful product purchases.
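Once the test has run, you can check whether the lift in purchases on the treatment is statistically meaningful. Here is a minimal sketch using a two-proportion z-test; the visitor and purchase counts below are hypothetical, and in practice you would plug in the numbers your testing tool reports for the control and treatment pages.

```python
# Minimal sketch: compare purchase (conversion) rates on control vs. treatment
# with a two-proportion z-test. All counts here are hypothetical placeholders.
from statsmodels.stats.proportion import proportions_ztest

control_visitors, control_purchases = 4800, 192        # hypothetical
treatment_visitors, treatment_purchases = 4750, 238    # hypothetical

counts = [treatment_purchases, control_purchases]
nobs = [treatment_visitors, control_visitors]

# One-sided test: is the treatment's purchase rate higher than the control's?
z_stat, p_value = proportions_ztest(counts, nobs, alternative="larger")

print(f"Control rate:   {control_purchases / control_visitors:.2%}")
print(f"Treatment rate: {treatment_purchases / treatment_visitors:.2%}")
print(f"p-value: {p_value:.4f}")
```

If the p-value falls below your chosen significance threshold (commonly 0.05), the treatment's higher purchase rate supports the hypothesis; otherwise, treat the result as inconclusive rather than as a loss.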
Not all A/B tests will give you a winning variation. Several might give you marginal lifts while others might end up being inconclusive. But having a solid, goal-based hypothesis increases the test's quality, making every test (inconclusive or not) an opportunity to learn something new about your visitors.
Want more? Here is a hack that touches on what you can learn from inconclusive A/B tests.