
Running A/B Tests to Improve Your Lead Nurturing Campaigns


Email lead nurturing campaigns aren't dead! They're more effective than ever! The email experience is vastly different than it was a few years ago, but so too are the technologies and tactics employed by today’s companies. Email turns out to be the best channel for ROI in 2015 (Adestra). Its performance is unmatched in the digital realm.

According to the recently released Email Marketing Industry Census, in addition to ranking as the best source for ROI, revenue from email increased by 28% year-over-year for respondents. We see similar statistics from other studies and reports:

  • Email marketing yields an average 4,300% ROI for U.S.-based businesses. (DMA)
  • Companies using email to nurture leads generate more “sales-ready” leads and at a 33% lower cost. (Forrester Research)
  • For every $1 spent on email marketing, the average ROI is $44.25. (ExactTarget)
  • Email conversion rates are three times higher than social media. (McKinsey & Co.)

One way to maximize conversions in your email marketing and lead nurturing is by doing A/B testing. For any B2B company that’s serious about its marketing, A/B testing isn’t optional any more — it’s a standard part of lead nurturing campaigns.

How to A/B test your lead nurturing campaigns

A/B email split testing is a way to send two slightly different lead nurturing emails to portions of your audience so that you can see which one triggers the most engagement, and then use the more effective version for the final email send.

You can use A/B email marketing testing to test minor changes in your email, such as different design elements like headlines, subject lines, images, and colors.

Most people miss the fact that you can also start by making the entire email the variable: design two completely different emails and test them against each other. This type of testing yields the biggest improvements, so it’s a good place to start before continuing your optimization testing with smaller tweaks.

Decide what you want to test

If you want to get more granular than testing two entirely different emails, be sure to test only one thing at a time to get accurate results. Things you might consider testing include:

  • Call to action
  • Subject line
  • Testimonials to include
  • The layout of the message (one column or two column?)
  • Personalization (in the subject line or in the body of the email?)
  • Body text
  • Headline
  • Closing text
  • Images
  • Colors

Test the important things first. If not many people are opening your email, start with testing the subject line. Or, if you’re sending from a departmental email, try sending from a personal email address. If your emails are being opened, try testing the headline and call to action.

Determine your sample size and testing time frame

I’m not a statistician, but I know that my email test should have enough delivered emails to give me “statistically significant” results over a certain period of time. So I need a “big enough” sample size, and I need a defined “period of time.”

For landing pages, you can gather data over weeks or months. Need more traffic? Push out some social media links to the landing page. With email, it’s not so easy.

Your email is being sent to a certain list and that’s all — you can’t add more people to that list to get more responses. So you need to think about how to send an A/B test to the smallest portion of your list that will still give statistically significant results. Then you can pick the winner (either A or B) and send that email out to the rest of your list.

Do you have enough contacts on your list to run an A/B test?

Let’s cut to the chase: if you have a list of fewer than 1,000 contacts, simply send the “A” version to 50% of them and the “B” version to the other 50%. Compare the results between these two groups and learn for next time.
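
The split itself is easy to script. Here is a minimal Python sketch, assuming your contacts are exported as a simple list of email addresses (the function name and sample data are illustrative, not from any particular email platform). For larger lists, the same helper can carve out two fixed-size samples and leave the rest as a holdout that later receives the winning version, using the sample sizes discussed next.

    import random

    def split_for_ab_test(contacts, sample_size=None, seed=42):
        """Shuffle a contact list and split it into A, B, and holdout groups.

        With sample_size=None (small lists) the whole list is split 50/50 and
        the holdout is empty; otherwise each sample gets sample_size contacts
        and everyone left over waits for the winning version.
        """
        shuffled = list(contacts)              # copy so the original stays untouched
        random.Random(seed).shuffle(shuffled)  # fixed seed keeps the split reproducible

        if sample_size is None:
            half = len(shuffled) // 2
            return shuffled[:half], shuffled[half:], []

        group_a = shuffled[:sample_size]
        group_b = shuffled[sample_size:2 * sample_size]
        holdout = shuffled[2 * sample_size:]   # gets the winning email later
        return group_a, group_b, holdout

    # A small list (under 1,000 contacts): a straight 50/50 split, no holdout.
    contacts = ["contact{}@example.com".format(i) for i in range(800)]
    group_a, group_b, holdout = split_for_ab_test(contacts)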

If you have a list of at least 1,000 contacts, how many of them need to be in the A/B test in order to produce statistically significant results?

Let’s assume an 85% Confidence Level and a 5% Margin of Error. The confidence level is how sure you can be that your results reflect a real difference rather than chance; the margin of error is the amount of error you can tolerate. Both of these assumptions should be fine for email campaigns.

The table below shows the size of each sample. That is, the A sample and the B sample should BOTH be this size.

  List Size   Sample Size
  1,000       172
  2,000       188
  5,000       200
  10,000      204

If your list has 1,000 deliverable email addresses, then your A sample needs 172 people in it, and your B sample needs 172 people in it. The remaining 656 people should get the email that performs better in the A/B test. (Data from: Raosoft)
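
If you would rather calculate these numbers yourself than look them up, the standard sample-size formula for estimating a proportion, applied with a finite population correction, reproduces the table above. A minimal Python sketch (the function name is illustrative; requires Python 3.8+ for NormalDist):

    from math import ceil
    from statistics import NormalDist

    def ab_sample_size(list_size, confidence=0.85, margin_of_error=0.05):
        """Sample size per variation for a finite email list.

        Standard formula for estimating a proportion (worst case p = 0.5)
        with a finite population correction.
        """
        z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)      # two-sided z-score
        n_infinite = (z ** 2) * 0.25 / (margin_of_error ** 2)   # ignores list size
        n = n_infinite / (1 + (n_infinite - 1) / list_size)     # finite population correction
        return ceil(n)

    for size in (1000, 2000, 5000, 10000):
        print(size, ab_sample_size(size))   # 172, 188, 200, 204 -- matches the table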

Determine the testing timeframe

Your marketing emails are usually optimized to send at a certain time of day. So if you wait long enough for your test results to be statistically significant, you might miss out on being timely and relevant, thereby defeating the entire purpose of sending the email.

You can use past data to help make decisions about timing — look at when your email clicks/opens start to drop off. What percentage of total clicks did you get in the first day after your latest three email sends?

If you got 80% of your clicks in the first 24 hours, and 5% or less on succeeding days, it probably makes sense to cap your email timing at 24 hours. It’s simply not worth waiting and delaying your results for several more days.
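
Most email platforms let you export click timestamps, so you can check that 80% threshold directly. A small Python sketch, with made-up timestamps standing in for your export:

    from datetime import datetime, timedelta

    def share_of_clicks_within(send_time, click_times, window_hours=24):
        """Fraction of all clicks that arrived within window_hours of the send."""
        cutoff = send_time + timedelta(hours=window_hours)
        early = sum(1 for t in click_times if t <= cutoff)
        return early / len(click_times) if click_times else 0.0

    # Illustrative data: a 9am send and the hours at which each click came in.
    send = datetime(2015, 6, 1, 9, 0)
    clicks = [send + timedelta(hours=h) for h in (1, 3, 5, 8, 12, 20, 30)]

    if share_of_clicks_within(send, clicks) >= 0.8:
        print("Most clicks land within 24 hours -- a 24-hour test window is enough.")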

Finally, make sure to send both the A and the B versions of your email at the same time, since we know that open rates vary throughout the day.

Analyze the results

Once you’ve run your A/B test with a statistically significant number of contacts, and you’ve waited the full length of time (like 24 hours) so that results have mostly come in, it’s time to analyze the results. You probably want to look at three things:

  • Open rate
  • Click-through rate
  • Conversion rate (once they’re on your website)

Make a little spreadsheet so you can easily see the results:

  Sample      Emails Delivered   Open Rate   Click-through Rate   Conversion Rate
  Sample A    172                25% (43)    7% (12)              100% (12)
  Sample B    172                18% (31)    14% (24)             70% (17)

Notice in this example that the conversion rate for Sample B is lower, but it yielded more conversions overall. This is probably due to how well the email copy aligns with the landing page copy. Some more testing on this example would probably be useful. Maybe you could get the open rate to 25% AND the click-through rate to 14%. That would be a winner for sure!
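
If you would rather script the spreadsheet, the same rates fall out of the raw counts. A quick Python sketch using the counts from the example table (Sample B’s conversion rate comes out to 71% here; the table rounds it to 70%):

    def summarize(delivered, opens, clicks, conversions):
        """Open rate and click-through rate as a share of delivered emails;
        conversion rate as a share of clicks."""
        return {
            "open rate": opens / delivered,
            "click-through rate": clicks / delivered,
            "conversion rate": conversions / clicks if clicks else 0.0,
        }

    # Raw counts from the example table above.
    samples = {
        "Sample A": summarize(delivered=172, opens=43, clicks=12, conversions=12),
        "Sample B": summarize(delivered=172, opens=31, clicks=24, conversions=17),
    }

    for name, rates in samples.items():
        print(name, {metric: "{:.0%}".format(value) for metric, value in rates.items()})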
