Published by Lovely Chauhan on September 8, 2015

A/B Testing: It’s Science, not Guesswork!

A/B testing has long been a favorite means for email marketers to understand customer behavior and optimize conversion rates. As marketers continue to experiment with and test their campaigns, I am going to delve into the science of A/B testing and how best to perform it so you get the best results.

A/B testing today looks pretty simple; you take two variants, test them on a data set, and bang! You have a winner. But is this the right way of doing it? What are we achieving here? How are we evaluating this? What’s the long-term goal we want to achieve by doing this? A/B testing is not just validation of guesswork. It’s a scientific way to optimize customer experience and maximize responses.

There are four important steps in A/B testing. Before performing any action, let’s look at the checklist of must-dos:

Step 1: Analyze target data:

Understand the data set and its behavior pattern before testing, and gauge its regular performance. Always combine quantitative and qualitative data to improve the test evaluation.

Example: For quantitative data, you must consider taking reasonable email counts for testing to derive conclusions from the outcome (refer to the screenshot below). And for qualitative data, always make sure you pick the most responsive data set for testing (openers or clickers of the past 60 days). This data set will give you the best and the most accurate results:

[Screenshot 1: reference email counts for testing]
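To make this concrete, here is a minimal Python sketch of the Step 1 filtering. The subscriber records, field names (last_open, last_click), and the minimum pool size are all assumptions for illustration; the actual counts to use are the ones in the screenshot above.

```python
from datetime import datetime, timedelta

# Hypothetical subscriber records; field names are assumptions for illustration.
subscribers = [
    {"email": "a@example.com", "last_open": datetime(2015, 8, 20), "last_click": None},
    {"email": "b@example.com", "last_open": None, "last_click": datetime(2015, 7, 30)},
    {"email": "c@example.com", "last_open": datetime(2015, 5, 1), "last_click": None},
]

def engaged_in_last_60_days(sub, today=datetime(2015, 9, 8)):
    """Keep only openers or clickers of the past 60 days (the qualitative filter).

    'today' is pinned to the post date so the sample records behave predictably.
    """
    cutoff = today - timedelta(days=60)
    return any(ts and ts >= cutoff for ts in (sub["last_open"], sub["last_click"]))

test_pool = [s for s in subscribers if engaged_in_last_60_days(s)]

# Quantitative sanity check: make sure the pool is large enough to draw conclusions.
MIN_TEST_SIZE = 1000  # assumed threshold; use the counts from the screenshot above
if len(test_pool) < MIN_TEST_SIZE:
    print("Pool too small for a reliable A/B test:", len(test_pool))
else:
    print("Pool ready for testing:", len(test_pool))
```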

Step 2: Form a hypothesis:

The second step is forming a good hypothesis. A good hypothesis is made up of three parts:

Variable: Isolate one element of the email (subject line, call to action, etc.) that you want to test
E.g. subject line (with and without the brand name)
Result: The predicted outcome, e.g. higher unique opens, more clicks, etc.
In our case, let’s say we predict that using the brand name in the subject line should fetch higher responses
Rationale: What assumption will be proven wrong if the experiment is a draw or loses?
The following is the result of our hypothesis:

[Screenshot 2: example hypothesis]
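One lightweight way to keep the hypothesis honest is to write its three parts down as a structured record before the experiment is built. A minimal sketch, assuming nothing beyond the three parts described above:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    variable: str   # the one element being isolated
    result: str     # the predicted outcome
    rationale: str  # the assumption a draw or loss would disprove

subject_line_test = Hypothesis(
    variable="Subject line: with brand name vs. without brand name",
    result="The version with the brand name gets a higher unique open rate",
    rationale="Recipients recognize and trust the brand, so naming it drives opens",
)
```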

Step 3: Construct an experiment:

Every experiment is a combination of two parts: content and design. Prepare the mailer along with the constant and the variable to test, and select the percentage of the data set to experiment on.

Look beyond the obvious ways to experiment; there are many different aspects that can be tested in an A/B split.

Example 1: Create two versions of your email creative and split them amongst your email lists: Mailer 1 with a responsive design and Mailer 2 with a non-responsive design. Research says that today almost 40-45% of emails are opened on a mobile device, and hence it becomes extremely important for mailers to be responsive in nature. (Refer to our blog: Respond to the responsiveness in mobile email marketing)

Example 2: Using different text in pre-headers – Variation in the pre-header text can also bring higher responses for an email campaign. (Refer to our blog: Is preheader a part of your email marketing strategy to increase open rates?)
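Once the variable and the creatives are ready, the split itself is mechanical. Below is a minimal Python sketch of randomly carving a test slice out of the pool and dividing it into groups A and B; the 20% test fraction is an assumption for illustration, not a recommendation from this post.

```python
import random

def ab_split(pool, test_fraction=0.2, seed=42):
    """Randomly split a test slice of the pool into equal-sized groups A and B."""
    subscribers = list(pool)
    random.Random(seed).shuffle(subscribers)
    test_size = int(len(subscribers) * test_fraction)
    test_slice = subscribers[:test_size]
    half = test_size // 2
    group_a = test_slice[:half]          # e.g. Mailer 1: responsive design
    group_b = test_slice[half:half * 2]  # e.g. Mailer 2: non-responsive design
    holdout = subscribers[test_size:]    # the rest of the list gets the winner later
    return group_a, group_b, holdout

group_a, group_b, holdout = ab_split(range(10000))
print(len(group_a), len(group_b), len(holdout))  # 1000 1000 8000
```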

Step 4: Evaluate results and build strategy:

For every experiment you run, you want to be sure that the observed change was not due to chance. Statistical significance provides that indicator. After evaluating the test, it is very important for the business to understand the learnings and build future strategies around them so that the business benefits.
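One common way to check this (not necessarily the exact method behind the screenshots above) is a two-proportion z-test on the open rates of the two groups. A minimal Python sketch with made-up counts:

```python
from math import sqrt, erf

def two_proportion_z_test(opens_a, sent_a, opens_b, sent_b):
    """Two-sided z-test for the difference between two open rates."""
    p_a, p_b = opens_a / sent_a, opens_b / sent_b
    p_pool = (opens_a + opens_b) / (sent_a + sent_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / sent_a + 1 / sent_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-tailed
    return z, p_value

# Illustrative numbers only: 1,000 emails per arm, 180 vs. 150 unique opens.
z, p = two_proportion_z_test(180, 1000, 150, 1000)
print(f"z = {z:.2f}, p = {p:.3f}")  # p < 0.05 would indicate a significant difference
```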

In Example 1, if the test result shows that Mailer 1 performed better, then the business must focus on building a mobile-first email strategy to optimize the user experience.

The best way to learn more is to keep experimenting, the unconventional way!
