Optimize your Email Campaign Performance with Testing

White Paper

Email testing is the most important way to learn what engages your subscribers. By implementing a good testing strategy you’ll discover what really works, continuously improve your results, and increase the ROI of future email marketing campaigns. In this guide, you’ll discover:

- The two types of email tests and when to use them
- Which parts of an email can be tested, and how
- Essential steps to follow to ensure a good, reliable test
- How to get the most from your test results
- A handy planner to keep your tests and results in one place

Below is an excerpt of “Optimize your Email Campaign Performance with Testing”.
Email testing: the basics

Why you should test
You know how many people opened your last email. But do you know why? Testing is the best way to find out. It’s much simpler than it may seem, and it can help you achieve great results.

Practice beats theory
Testing tells you what really works for your audience. That beats relying on other people’s best-practice tips, as who says they’ll apply to your customers too? Email testing results in better ROI, as seen in the Econsultancy/Adestra Marketing Industry Census:

- Marketers who test their emails: 74% report a “good” or “excellent” ROI
- Marketers who don’t test their emails: 37% report a “good” or “excellent” ROI

Benefits of testing
Testing gives you feedback straight from your subscribers. The results:

- Improved results from your email marketing
- Deeper understanding of your customer base
- Better value from the other data you hold

How it works

Two tests to know about
There are two ways to test your emails:

- A/B testing, also known as split testing
- Multivariate testing

A/B (or split) testing: one simple change
How it works: change one variable, such as your subject line. Try to make one small change, like a single word, rather than rewriting the whole subject line. This gives you a more robust test; we’ll explain why later on.
What it’s for: comparing two different options, or testing a new idea.

Multivariate testing: multiple combinations
How it works: change several variables and combine them in multiple ways.
What it’s for: refining and optimizing an existing idea or design.

What can you test?
You can test virtually every element of your email, although some tests will deliver more conclusive results than others. It all depends on what you’re trying to find out. Try split-testing your:

- Subject line: long or short, humorous or offer-led, generic or personalized
- Sender: brand or company vs. a named individual
- Pre-header text
- Time or day of sending
- Buttons: colour or text
- Copy: length, content, tone, format
- Images: size, type, placement, image-text ratio
- Call to action: choice of words, words vs. images, placement, type of offer (such as free shipping vs. a discount)
- Footer: promoting social media accounts, logos, text

You’ve got a winner. Now what?
Send it to the rest of your database. And the best part is, you can choose to do this automatically. For example, MessageFocus lets you have the system launch the winning version (based on the criteria you set for what constitutes a “win”) after the waiting period you’ve specified.

Case study
Split-testing a subject line gave Oxford University Press English Learning a 6% uplift in their open rate when they launched a campaign about their first global webinar. They tried adding the word ‘Webinar’ at the beginning or end of the subject line, initially testing on 10% of the list. The MessageFocus split-testing feature then launched the winning subject line to the rest of their subscribers, resulting in an open rate of 30%.

Subject line A: Webinar – Placement testing: more than just grammar and vocabulary?
Subject line B: Placement testing: more than just grammar and vocabulary? – Webinar

[Download PDF to see Image]

Split scenario
Here’s how it might work with an A/B test.

[Download PDF to see Image]

You can also test more than two variants, such as an A/B/C test: for example, testing red, yellow, and green buttons. You’re still only testing one variable at a time, though you’ll need a bigger sample to get worthwhile data. Later on, we’ll look at establishing relevant sample sizes, and the statistical significance of your results.
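The split scenario above can be sketched in a few lines of code. This is a minimal illustration of the workflow, not a MessageFocus feature: the `send` and `open_rate` functions are hypothetical stand-ins for your email platform’s send call and open-tracking report.

```python
import random

def run_split_test(contacts, subject_a, subject_b, send, open_rate, sample_pct=0.10):
    """Sketch of an A/B split: test on a random sample, launch the winner.

    `send` and `open_rate` are hypothetical stand-ins for your email
    platform's send call and open-tracking report.
    """
    sample_size = int(len(contacts) * sample_pct)
    sample = random.sample(contacts, sample_size)   # pick randomly, never handpick
    half = sample_size // 2
    group_a, group_b = sample[:half], sample[half:2 * half]

    send(group_a, subject_a)   # both versions go out simultaneously
    send(group_b, subject_b)

    # ...after the waiting period you've specified, compare the results...
    winner = subject_a if open_rate(group_a) >= open_rate(group_b) else subject_b

    sampled = set(sample)
    remainder = [c for c in contacts if c not in sampled]
    send(remainder, winner)    # launch the winning version to the rest
    return winner
```

With a list of 1,000 contacts and a 10% sample, this sends each version to 50 subscribers and the winner to the remaining 900, mirroring the case study above.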
Essential steps

Five rules to follow when planning your email test:

- Decide what you’re going to test and why
- Preview and check your design before sending your test
- Choose your samples randomly – never try to handpick
- Send both versions simultaneously, unless you’re testing the send time
- Keep a record of the test and your findings

Case study
Condé Nast trialed three different subject lines with three different offers for their Wired Magazine campaign, using exclusivity, price and technology as the focus of the copy. After testing on 10% of the list, they launched the winning variant to the rest of the list. It got 35% higher open rates than the next best-performing version.

Subject line A: Your Exclusive Offer to WIRED
Subject line B: Get WIRED in print and on the iPad for just £9
Subject line C: Get WIRED on the iPad with your special offer

[Download PDF to see Image]

Get the best results

Dos and don’ts of split-testing
Follow these golden rules for great results:

Do start with a hypothesis
Know what you’re testing and why. Have a clear objective from the start and you’ll better understand the impact of your test.

Case study
Dobbies Garden Centre knew 68% of their opens came from mobile devices, but their emails weren’t optimized for mobile. Their hypothesis: adapting the format for mobile would get better results. To test the theory, they ran a split-test of their existing format against a new responsive template, and found it was worth making the change: the new template increased click-to-open rates by 400% overall. If your data shows a need for a responsive template, why not test and see what results it brings for you?
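Rule five above, keeping a record of the test and your findings, can be as simple as one small structure per test. A minimal sketch of such a record; the field names are our own illustration, not the layout of the planner included with this guide:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class TestRecord:
    """One row of a simple test planner: what you tested, why, and the outcome."""
    test_date: date
    hypothesis: str        # e.g. "Adding 'Webinar' to the subject lifts opens"
    variable: str          # the single element you changed
    variants: list         # the versions you compared
    metric: str            # the result you measured, e.g. "open rate"
    sample_size: int
    results: dict = field(default_factory=dict)   # variant -> measured rate
    winner: str = ""

    def record_winner(self):
        """Fill in the winner from whichever variant scored highest."""
        if self.results:
            self.winner = max(self.results, key=self.results.get)
        return self.winner
```

Recording each test this way also makes it easy to repeat the same test later and compare how the results change over time.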


Do keep track
Record your results and repeat the test over time to see if things change. A year can be a good interval, but this depends on the individual case.

Do watch your timing
Send both versions simultaneously, unless you’re testing different send times. Otherwise, the sending time would add a second variable to your split-test and skew the results.

Do understand what you’re testing
Know which element your test actually affects. Changing an image? Measure clickthroughs, not open rates. On the other hand, testing the subject line, sender name or pre-header text will most likely impact your open rates.

Do base the test on one change
It’s very important not to change more than one element at a time between your versions. Testing three or more colors for a call-to-action button is a valid split-test; changing the copy as well as the colors is not, because changing more than one element at a time makes it impossible to attribute the result. That’s why it’s also more reliable to test one word change at a time in a subject line or headline, rather than changing the whole sentence. It’s trickier with a tone-of-voice test, but we recommend you keep the message similar to retain some consistency. If you want to test multiple elements, use the multivariate testing methodology explained later in this guide.

Don’t muddle your variables
Be clear about what you’re testing:

- ‘Download your voucher now’ vs ‘Get your voucher now’ is a copy test
- ‘Download your voucher now’ vs ‘What are you waiting for?’ tests your call to action or tone of voice

Don’t stop testing
Even if you think you’ve found the perfect combination, you still need to carry on testing to see how things change. After all, your business doesn’t stay the same, and neither will your customers! You’ll have new products, promotions and messages. Their interests and finances may change, and the novelty of your latest enhancements may wear off after a while.
Statistical significance

Sample size: why it matters
Your sample needs to be significant in terms of numbers or percentage size. This is the most important thing to get right; otherwise your data won’t be reliable, and you won’t know whether your results are just down to random variation.

So what’s the minimum? Think about what you’re trying to measure. You need to take into account:

- Population: the total number of contacts who actually received your emails. For reliable results, you should have at least 1,000 contacts. If you don’t, try running the test multiple times in similar conditions.
- Confidence level: how sure you can be of your result, so you want it to be as high as possible. We recommend at least 95%.
- Confidence interval: the margin of error you are willing to accept on your test. When you begin experimenting with A/B tests, a confidence interval of 5 is a good starting point.

That’s because you’re not just looking for any change; you want to know there’s been enough change. To make it easier to find your minimum required sample size, try this calculator from Survey System.

Next step: read the results
The nature of your results will depend on which variable you’re testing and what you want to achieve. But just as your sample size needs to be statistically significant, so do your results. Again, we recommend a 95% confidence level in the result before firmly declaring a variant the winner. And don’t worry: establishing that needn’t be complicated. There are plenty of statistical significance calculators online to help you, including this one made by Kissmetrics.

Results you can measure
Results you might consider:

- Open rate
- Click-through rate
- Click-to-open rate
- Unsubscribe rate

What about multivariate testing?
You can use multivariate testing to test the impact of different combinations of elements. It’s more complex than A/B testing.
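The numbers behind these calculators can also be reproduced directly. A sketch using the standard normal-approximation formulas; the 95% confidence level and confidence interval of 5 are the guide’s recommendations, while the formulas themselves are the usual textbook ones, not a specific calculator’s implementation:

```python
import math

def min_sample_size(confidence=0.95, interval=5, p=0.5):
    """Minimum sample size for a confidence level and a confidence interval
    given in percentage points. p=0.5 is the worst case, used when you
    don't know the expected response rate in advance."""
    z = {0.90: 1.645, 0.95: 1.96, 0.99: 2.576}[confidence]  # normal critical values
    e = interval / 100.0
    return math.ceil(z ** 2 * p * (1 - p) / e ** 2)

def is_significant(opens_a, n_a, opens_b, n_b, confidence=0.95):
    """Two-proportion z-test: is the difference between two open rates real,
    or just random variation? True if significant at the given level."""
    p_a, p_b = opens_a / n_a, opens_b / n_b
    p_pool = (opens_a + opens_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = abs(p_a - p_b) / se
    p_value = math.erfc(z / math.sqrt(2))   # two-tailed p-value
    return p_value < 1 - confidence
```

For example, `min_sample_size(0.95, 5)` gives 385, the familiar figure most online calculators return for a large population, and `is_significant` tells you whether a 30% vs 24% open rate split on two groups of 1,000 is more than noise.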
How it works
Multivariate tests let you measure multiple factors and see how they interact with each other. For example, you could test two variables (image and call to action) using two variations of each:

[Download PDF to see Image]

Watch out for the steep increase
As we saw above, testing two variables with two variations each gives four possible versions of the email. Test three variables with three variations each and you’ll need 27 combinations. The more combinations you add, the bigger your sample size needs to be to get meaningful results.

It’s a complex process that needs experience
You’ll need to handle and analyze a lot of data just to check your results are statistically significant. Otherwise, you won’t know if they’re really down to your changes or just random variation. So it might be a good idea to involve a data analyst or someone with strong statistical skills.

Ready to start? Try these
Here are three ideas for A/B tests to get you started.

Three ways to split-test your subject line

Change one word:
Version A: Taste our new flavor – here’s your free drink
Version B: Taste our newest flavor – here’s your free drink

Change the offer:
Version A: Taste our new flavor – here’s your free drink
Version B: Taste our new flavor – here’s £5 off

Add personalization:
Version A: Taste our new flavor – here’s your free drink
Version B: Taste your new flavor, Sally – here’s your free drink

Find more inspiration in our Subject Line Analysis Report.

We hope you’ve found this Email Testing Guide useful and informative. You should now understand the different types of email tests, what you can test, and how to extract useful data from your findings. On the next page you’ll find a handy planner ready for you to print and fill in with your tests!
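The steep increase in multivariate combinations is easy to see by enumerating them: every extra variable or variation multiplies the number of email versions you must send. A small sketch (the variable names are illustrative):

```python
from itertools import product

def email_versions(variables):
    """Enumerate every combination of variations: one email version each.
    `variables` maps each variable name to its list of variations."""
    names = list(variables)
    return [dict(zip(names, combo)) for combo in product(*variables.values())]

# Two variables with two variations each: four versions, as in the example above.
versions = email_versions({
    "image": ["product shot", "lifestyle shot"],
    "call_to_action": ["Buy now", "Learn more"],
})
```

Running the same function on three variables with three variations each yields 27 versions, which is why the required sample size grows so quickly.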
