Dive into the world of online marketing and you’ll find yourself surrounded by blog posts offering the best practices for creating the highest performing marketing assets. From lead collection forms to catchy banner ads, there is no shortage of opinions on what is going to grow your business.

Which of these display ads do you think would resonate better with your audience? We can run a test to find out!

This may seem helpful, and it’s a good starting point, but “best practices” are designed to reach as many people as possible, which often disregards the specific needs of your customers. Your audience isn’t everyone. They’re driven by unique motivations and held back by unique concerns. You probably have a sense of these motivations and concerns, and your intuition will take you pretty far, but intuition is shaped by your own personal experience, and you won’t always be able to put yourself in your customers’ shoes.

This is why marketers lean on tried-and-tested techniques to validate their theories and improve customer conversion. In online marketing, one of the most widely used is A/B testing, a methodology that lets you improve your marketing steadily and with confidence.

So, what is A/B testing?

An A/B test, also called a split test, is a way of comparing how two different versions of a marketing asset perform with an audience. In digital marketing, customers are exposed to many different assets that will (hopefully) encourage them to buy: emails, landing pages, banner ads, onboarding flows, forms and shopping carts. A/B tests can be used to discover how creative changes to these assets—like new copy or images—will impact conversion rates.

A/B testing works by splitting an existing audience into two groups and presenting each group with a different version of a creative element like a landing page or a banner ad. Group A, acting as the control group, is shown the current creative, while group B, the treatment group, is shown the variant.

One of these Facebook ads was clearly more successful than the other. Is it the one you would have guessed?

How do I create an A/B test?

If this is your first experience with A/B testing, your primary obstacle will be the technological challenge of randomly splitting your audience. The emphasis here is on “random.” If you were to split your audience by, say, gender and show a different landing page version to each group, your results would be influenced more by the audience split than by the change you made to the landing page. As a rule, you want the two groups to look as much alike as possible.
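
If you’re curious what a “random split” looks like under the hood, here’s a minimal Python sketch of the kind of deterministic bucketing many testing tools use. The visitor IDs and experiment name are made up for illustration; the idea is simply that hashing each visitor’s ID gives an effectively random 50/50 split while still showing a returning visitor the same version every time.

```python
import hashlib

def assign_variant(visitor_id: str, experiment: str = "landing-page-hero") -> str:
    """Deterministically bucket a visitor into 'A' (control) or 'B' (treatment).

    Hashing the visitor ID together with an experiment name spreads visitors
    across the two groups effectively at random, but a given visitor always
    lands in the same group for that experiment.
    """
    digest = hashlib.md5(f"{experiment}:{visitor_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# Example: a handful of (hypothetical) visitor IDs land in roughly equal buckets.
for vid in ["u-1001", "u-1002", "u-1003", "u-1004"]:
    print(vid, assign_variant(vid))
```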

You will need different tools to split your audience depending on what you want to test. Advertising and email vendors (like Facebook and Mailchimp, respectively) typically have split-testing tools built into their platforms. To test on-site creative, like landing pages or funnel changes, you’ll need a more sophisticated set of tools. At 99designs we’ve used Optimizely and Unbounce successfully, and Google Analytics has a useful free tool as well. These tools let you run two versions of the same web page at the same time, show each to a random set of visitors, and see which one is better at getting people to take the action you want.

Google Analytics can help you split your audience into two groups.

Once you know how you’ll be splitting your audience, it’s time to define your experiment. This part will take some patience and attention! As with most experiments, the best starting place is your hypothesis, which is simply your prediction for the results of the test (and the reason you’re running the test in the first place). Your hypothesis should be specific and relate directly to your objectives, following an “if ____ then ____” formula.

An example might be: “If the landing page includes a relatable hero image, then the lead collection form completion rate will increase.”

With a clear hypothesis in mind, it’s time to do some math. It’s easy to look at results after a day and say, “Oh, the new version did better. Hooray!” But if you want to prove your hypothesis with numbers, you have to make sure your sample size is large enough to say the uplift was not the result of random chance. You can determine how long you’ll need to run a test to reach statistical significance by doing what’s called a pre-test analysis. Online tools like this one make it easy.

A/B testing guides like this one can walk you through a pre-test analysis.

Here’s an example: imagine you currently get 2,000 visits to your landing page every week and your current completion rate is 2%. (This is your control, nice and simple.) If the variant with new copy and images delivers an uplift of 50% (meaning a 3% completion rate), you’ll be able to get a statistically significant result within 2 weeks. By comparison, if the uplift is only 25%, it will take 6 weeks to reach significance. This is because a small change is harder to measure than a big one, and measuring it requires a larger population of visitors.
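
If you’d like to sanity-check numbers like these yourself, here’s a rough Python sketch of the same pre-test math. It uses a textbook approximation (Cohen’s h) for comparing two conversion rates and assumes a one-sided test at 5% significance with 80% power, which are common calculator defaults; your own tool may make slightly different assumptions, so treat this as a ballpark check rather than the exact calculation behind any particular calculator.

```python
import math

def visitors_per_group(p_control: float, lift: float,
                       z_alpha: float = 1.645, z_beta: float = 0.84) -> int:
    """Approximate visitors needed in each group to detect a relative lift.

    Uses the arcsine (Cohen's h) approximation for comparing two conversion
    rates. The defaults correspond to a one-sided test at 5% significance
    with 80% power.
    """
    p_variant = p_control * (1 + lift)
    h = 2 * math.asin(math.sqrt(p_variant)) - 2 * math.asin(math.sqrt(p_control))
    return math.ceil((z_alpha + z_beta) ** 2 / h ** 2)

weekly_visits = 2000                 # 1,000 visitors land in each group per week
for lift in (0.50, 0.25):            # the 50% and 25% uplift scenarios above
    n = visitors_per_group(0.02, lift)
    weeks = math.ceil(2 * n / weekly_visits)
    print(f"{lift:.0%} lift: ~{n:,} visitors per group, about {weeks} weeks")
```

With these assumptions the output comes out at roughly 2 weeks for the 50% uplift and 6 weeks for the 25% uplift, in line with the estimates above.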

Now, what do I do with the results?

You’ve split your audience and created your hypothesis, so now you can start the A/B test. Most tools will let you track progress while the test is running, and hopefully you’ll see one version performing better than the other. If the improvement is obvious, you may even be able to end the experiment early. But what if the opposite is true?

Email capture vs. CTA button

Let’s say you are testing a landing page with email collection against a simple call to action. Based on your weekly traffic, you were expecting to be able to confirm a 25% lift after 6 weeks. But 6 weeks have passed and you’ve only seen a 10% improvement. Because a smaller change is harder to have confidence in, you would need to run the test for 32 weeks at the current pace to gather an audience sample large enough to trust the 10% improvement.
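
Plugging the 10% improvement into the same rough formula from the sketch above (and assuming the same 2% baseline and 2,000 weekly visits as the earlier example) shows why the timeline balloons:

```python
import math

# Same back-of-the-envelope arcsine approximation as the sketch above,
# now with a 10% lift on an assumed 2% baseline and 2,000 weekly visitors.
p_control, p_variant, weekly_visits = 0.02, 0.022, 2000
h = 2 * math.asin(math.sqrt(p_variant)) - 2 * math.asin(math.sqrt(p_control))
n_per_group = math.ceil((1.645 + 0.84) ** 2 / h ** 2)  # one-sided 5% significance, 80% power
print(f"~{n_per_group:,} visitors per group, "
      f"about {math.ceil(2 * n_per_group / weekly_visits)} weeks")
```

Under those assumptions you need tens of thousands of visitors per group, which at the current traffic works out to roughly the 32 weeks mentioned above.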

Unless this is a really important landing page, that’s just too long to wait. In cases like this, it is important to recognize that the difference is just too small to easily prove and you’re better off spending your time and energy on a test that will have a bigger impact.

You can test more than just creative. Here we tested whether our customers were more interested in a dollar-off promotion or an upgrade.

Rinse and repeat

Soapy unicorn mascot by 3AM3I

Improvements from A/B testing are iterative, meaning you should be continuously learning more about your customers with each test. Every A/B test brings you closer to understanding what motivates them to buy from you and, equally important, what drives them away.

As you continue to A/B test, you’ll be able to form more informed hypotheses and identify more impactful tests to run, leading to a broader—and happier—customer base!

Some useful A/B testing resources

Questions or comments? Share your thoughts in the comments below.