Email campaigns and newsletters can generate repeat orders as well as new customers. Most likely, you already have a database of contacts who have opted in to receive information, and many of them have probably already ordered something from you. Everyone knows that it is easier and cheaper to keep existing customers than to acquire new ones.
That’s why it’s important to do A/B testing when creating new methods and formats for your email marketing campaign.
Improving the conversion rate of these campaigns can increase your profits far more than other marketing activities of comparable cost.
Decide what you will test
The first step in creating effective A/B testing is to decide what you will be testing. Although you can test more than one element, it is important to test only one at a time to get accurate results. Email message elements that you can test:
- Call to action (Example: test “Buy now” instead of “See rates and prices”)
- Email subject line (Example: “ABC sale” instead of “ABC discount”)
- Customer reviews (whether to include them at all)
- Email layout (Example: one or two columns, different placement of the email’s elements)
- Personalization (Example: “Dear Sergey Ivanovich” instead of “Sergey”)
- Body text
- Headline text
- Closing text
- Images
- Special offer (Example: “20% off” instead of “Free shipping”)
Each of these elements can have a direct impact on the overall conversion rate of your email campaigns. For example, the call to action will obviously influence how many people buy your product or go to your landing page. The subject line, on the other hand, will directly affect the number of people who open the email.
Think about this when deciding which elements to test first. If not many people open your emails, then it’s probably worth starting by testing the email subject line. The headline and call to action will affect the conversion rate more than images. Test the items that matter most first, gradually moving to the lesser ones.
Test the entire list of subscribers or just a part of it?
In most cases, testing is required on the entire list of subscribers. This is important for getting a more accurate picture of how users react to changes in your advertising campaigns. However, there are cases where you may not be able to test on the entire list:
- If you have a very large list of subscribers and the service you use for A/B testing charges for each email address included in the mailing list. In this case, test with as many subscribers as you can afford and make sure the addresses are chosen randomly for accurate results.
- If you’re trying to test something very dramatic, you might want to limit the number of people who can potentially see it. In this case, make sure that at least a few hundred people view each of the tested versions. However, it is better, of course, if it is several thousand.
- If you’re sending out a time-limited offer and you want to get as many conversions as possible, then test with a small sample (a few hundred subscribers) and then send the best version to the entire list.
The more users who take part in testing, the more accurate results you will get. Make sure the user separation is done randomly.
Manual selection of recipients (or even using two lists from different sources) can cause skewed results. The purpose of testing is to collect empirical data to find out which version of the item under test actually performs best.
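The random assignment described above can be sketched in a few lines. This is a minimal illustration, not part of any particular mailing tool; the function name and the sample addresses are hypothetical.

```python
import random

def split_subscribers(subscribers, seed=None):
    """Randomly split a subscriber list into two equal-sized test groups."""
    rng = random.Random(seed)      # seed only to make the example reproducible
    shuffled = subscribers[:]      # copy so the original list stays untouched
    rng.shuffle(shuffled)          # random assignment avoids skewed groups
    mid = len(shuffled) // 2
    return shuffled[:mid], shuffled[mid:]

# Hypothetical addresses for illustration
group_a, group_b = split_subscribers(
    ["ann@example.com", "bob@example.com", "eve@example.com", "dan@example.com"],
    seed=42,
)
```

Shuffling the whole list before cutting it in half is what prevents the skew that manual selection (or merging two lists from different sources) would introduce.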
What does success mean?
Before you send out emails using different email options, it’s important to decide what you’ll be testing and what you consider to be success. First, look at your previous results.
If you’ve been using the same email campaign style for months or even years, you’ll have plenty of data to build on. If the average conversion rate of your past campaigns is 10%, a reasonable first goal might be to increase it to 15%.
Of course, at the initial stage, the goal of A/B testing may simply be to increase the number of email opens. In this case, look at your previous data for that metric, and then decide how much of an increase you want to see. If you don’t see any improvement on the first A/B test, run a second one with two new versions.
Most email marketing services and software have built-in A/B testing tools. Examples of such tools: Campaign Monitor, MailChimp, Active Campaign.
If the software you use for your email campaigns does not include an A/B testing tool, you can run the test manually.
Simply split your current contact list in two, then send one version of the email to one list and another version to the second. After that, you will need to manually compare the results, although exporting the resulting data to a spreadsheet can help with processing.
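The manual comparison amounts to computing the same handful of rates for each version. A minimal sketch, assuming you already have the raw counts for each version (all figures below are hypothetical):

```python
def campaign_rates(sent, opened, clicked, converted):
    """Compute the rates worth comparing for one version of an email."""
    return {
        "open_rate": opened / sent,                              # opens per email sent
        "click_rate": clicked / sent,                            # site visits per email sent
        "conversion_rate": converted / clicked if clicked else 0.0,  # sales per visit
    }

# Hypothetical results for the two halves of the list
version_a = campaign_rates(sent=500, opened=150, clicked=60, converted=6)
version_b = campaign_rates(sent=500, opened=180, clicked=50, converted=10)
```

The same three columns are what you would set up in a spreadsheet if you export the data instead.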
After running the campaign with two different versions of the email, you should analyze the results.
There are several categories of metrics worth evaluating:
- the email open rate
- the click-through rate to the site
- the site conversion rate for this traffic source
The reasons for tracking the first two metrics are obvious. But many might wonder why we would track site conversions from this source at all. Doesn’t the conversion rate depend on the site itself rather than on a specific email campaign?
Yes and no at the same time. Ideally, the emails sent should have nothing to do with the conversion of the site as a whole. If one version of an email leads to 10% of the recipients going to the site, and the other to 15%, then the second email should lead to 50% more conversions than the first. But this does not always happen.
It is important that the message in the email you send is consistent with the message on the site itself. If you promise visitors a special offer but it is nowhere to be found on the site, you will lose customers. The same thing can happen if the emails don’t match the look and feel of your site: visitors may be confused when the page they land on doesn’t resemble the email that brought them there.
Make sure you track the conversion rate for each version of the email you send out to avoid the possibility of losing potential sales. The ultimate goal in this case is conversion, not just a transition to the site. It may happen that one version of the email brings more visitors to the site, but the conversion rate of the second email is much better.
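The trade-off described above is simple arithmetic: overall conversions per recipient are the click rate multiplied by the on-site conversion rate. A quick sketch with hypothetical numbers shows how the email with fewer clicks can still win:

```python
def conversions_per_recipient(click_rate, site_conversion_rate):
    """Overall conversions per recipient = clicks x on-site conversion."""
    return click_rate * site_conversion_rate

# Hypothetical: version A drives more traffic, version B converts it better
a = conversions_per_recipient(0.15, 0.02)  # 0.003 conversions per recipient
b = conversions_per_recipient(0.10, 0.04)  # 0.004 conversions per recipient
# b > a: version B wins despite bringing fewer visitors to the site
```

This is why the site conversion rate belongs in the comparison alongside opens and clicks.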
In this case, you can run a few more tests to find an email format that increases not only the number of click-throughs to the site but also the conversion rate.
Here are some tips to help you A/B test your email campaigns:
- Always test both versions of the email at the same time; this reduces the chance that results are skewed by timing.
- Test with as large a sample as you can; more data gives more accurate results.
- Listen to what the data you have collected from practice tells you, not your intuition.
- Use the tools available to you to quickly and easily conduct A/B testing.
- Test as early and often as possible for best results.
- Testing only one element at a time will give the best result. (If you want to test more than one, consider doing multivariate testing instead of A/B testing).