It is likely you want to try something new on your website. A brand-new piece of functionality, an added feature, different types of content, or even a new layout.
Whatever it is, you would like to know if your new toy actually adds value, right?
We test this with these four steps:
- Define a measurable hypothesis
- Sample an experimental group
- Implement the right test
- Communicate actionable insights
Let us run you through these steps by looking at how we experimented on the website of an e-commerce food retailer.
Step 1: define a measurable hypothesis
Our customer added a new feature that allowed visitors to browse recipes and add all the ingredients to their basket with a single push of a button.
On the surface, this seemed like a great idea. But does it really add value?
To test this, you need clear-cut objectives. To set them, return to the core question that drives your innovation: which key performance indicators (KPIs) should this contribute to?
Our customer wanted to find out whether visitors that added recipes became more loyal to their brand. Loyalty in itself is not a clearly quantifiable metric, so the question quickly became: how could we measure this? Was there a way to trace this in our data?
To assess the effect on your identified objective, you will need to pinpoint a single, measurable metric in your data. For the food retailer, this became the number of orders placed after the first use of the new recipe button.
This metric led us to the hypothesis that website visitors who added recipes to their basket would eventually place more orders than other customers.
Step 2: sample an experimental group
It is time to pick your sample group. This is a balancing act: a group that is too small may fail to yield statistically significant results, while a group that is too large increases the cost and risk of the experiment.
Statistically speaking, a bigger sample size is better. However, business constraints rarely allow an unlimited supply of respondents. Our general rule of thumb is that any sample of three to four digits (hundreds to a few thousand respondents) is usually acceptable.
For the food retailer, we decided to evaluate once 1 500 users had added a recipe to their baskets. Adding a control group of the same size, we ended up with an A/B test of 3 000 customers.
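To sanity-check whether a sample of a given size can detect the effect you care about, you can run a quick power calculation before launching the test. The sketch below uses the standard normal-approximation formula for a two-sample comparison; the effect size of 0.1 (Cohen's d) is an illustrative assumption, not a figure from the case study.

```python
from statistics import NormalDist

def sample_size_per_group(effect_size, alpha=0.05, power=0.8):
    """Approximate per-group sample size for a two-sided two-sample test,
    using the normal approximation. effect_size is Cohen's d."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for power = 0.80
    return 2 * ((z_alpha + z_beta) / effect_size) ** 2

# Detecting a small effect (d = 0.1) at 80% power needs roughly
# 1 570 users per group, in the same ballpark as the 1 500 above.
print(round(sample_size_per_group(0.1)))
```

Note how sensitive the result is to the effect size: detecting a medium effect (d = 0.5) needs only about 63 users per group, which is why small samples can still work when the expected lift is large.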
Step 3: implement the right test
When it comes to analyzing the results from an A/B test like this, you will need a healthy dose of common sense.
For example: if our group of 1 500 respondents turned out to be five times more loyal compared to our control group, we would not even need a statistical test to uncover the clear difference in loyalty.
In reality, however, results are almost never that clear-cut. The recipe-adding group showed a 13% increase in loyalty compared to our control group. To verify that this difference reflects genuinely higher brand loyalty rather than chance, we needed to perform a statistical test.
Of all the statistical tests available, we regularly use one of the following:
- T-test, in which we compare the means of two groups
- Analysis of Variance (ANOVA), in which we compare the means of more than two groups
- Chi-square, in which we test whether two categorical variables are dependent
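All three tests are readily available in SciPy. The sketch below runs them on simulated data: the group sizes of 1 500 mirror the case study, but the per-user order rates and the repurchase counts in the chi-square table are made-up numbers for illustration only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# T-test: compare mean order counts of two groups (simulated data; the
# control rate is ~13% below the treatment rate, echoing the case study).
treatment = rng.poisson(lam=3.0, size=1500)
control = rng.poisson(lam=2.65, size=1500)
t_stat, t_p = stats.ttest_ind(treatment, control)

# ANOVA: compare the means of more than two groups at once.
third_variant = rng.poisson(lam=2.8, size=1500)
f_stat, anova_p = stats.f_oneway(treatment, control, third_variant)

# Chi-square: test whether repurchasing depends on group membership.
table = np.array([[900, 600],    # treatment: repurchased vs. not
                  [800, 700]])   # control:   repurchased vs. not
chi2, chi_p, dof, expected = stats.chi2_contingency(table)

print(f"t-test p = {t_p:.4f}, ANOVA p = {anova_p:.4f}, chi2 p = {chi_p:.4f}")
```

A p-value below the conventional 0.05 threshold is what lets you call a difference like the 13% lift "significant" rather than noise.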
We performed a t-test and concluded that the 13% difference was statistically significant: the brand loyalty of the recipe-adding respondents was indeed higher. In other words, the new feature actually added value to our customer’s website.
Step 4: communicate actionable insights
With this knowledge in hand, it is time to make your results actionable. Like in the first step, keep your original KPI(s) in mind. What do you want to achieve?
In the case of the food retailer, we decided to make the recipe button more visible on the homepage and set up an email campaign promoting its usage. This led to a substantial increase in loyalty rates among their customer base.
This simplified case is just one of many examples of how data helps you improve your business. Convinced of our scientific approach to digital marketing? Reach out to us; our data consultants will gladly help you get started.