June 22, 2018

Testing, Testing, ABC: How To Run Great Tests

Don’t buy more marketing tools: improve your metrics through testing. In this article, Reneé Doegar, Head of Marketing for the London Review of Books, explains how to run great tests in your organisation. In her quickfire presentation at the Figaro Digital Marketing Conference she will showcase some of the tests she’s run and share her results – the good, the bad, and the (rather) ugly!

Why Test?

These days, marketing professionals have a wealth of tools and channels available to them. These tend to pile up: you keep doing the activity that works, tack more and more onto it, and end up chasing new customers through the newest platforms. But that doesn’t need to be the case – you can make your existing channels work better for you.

A/B testing isn’t new: good marketing practice has been built around testing for decades. But with the growth of digital promotional channels, the set-up, running, measurement, and insight you can draw from testing are better than ever before, making it easy for testing to become the bedrock of your marketing strategy.

What You Need To Run A Test

First of all, you need a goal. Don’t test something just because! Conversion goals can be a whole range of things, from purchases to sign-ups to views to clicks. But if you don’t know what you are testing and why, you will never know whether it has won. If you spend just five minutes writing a list of possible goals, I expect you’d come up with several that you can use as a framework for your testing. Then simply arrange those goals into priorities (weighted by business goals, ease of implementation and so on) and you will have the bare bones with which to start building your testing strategy.

Second of all, you need a way to measure your testing. It is unhelpful to say that you want to test something to “make your brand more likeable” without a measure in place for that (an exit survey? analytics?). And since goals don’t have to be just “make more money” (they can be anything from “decrease bounce rates” to “improve social shares” to “increase basket value”), you have to make sure you have your measuring tools in place.

And a little testing tip: try to test one thing at a time, so you can pin down exactly what is driving the result you’re seeing.

Measuring A Completed Test

Marketers are constantly asking how long they need to run a test for. The truth is that there’s no easy answer – it all depends on how big a change you are making to whatever you are testing, and on the amount of traffic reaching your test. There are some helpful free tools online (google “test duration calculator”) that can help you determine how long your test should run. But don’t forget about other factors that could affect the test (unexpected traffic referrals to your site, for example, or a regional uplift from specific email content). If you get a reading that you don’t expect, always retest! For my organisation, I always run a test for at least two weeks (coinciding with our publication schedule) just in case there are fluctuations as a result of our content or web traffic.
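
If you’re curious what those duration calculators are actually working out, here is a minimal sketch in Python. It assumes a straightforward two-variant conversion test with the common defaults of 95 per cent confidence and 80 per cent power; the conversion rate, uplift, and traffic figures are invented purely for illustration.

```python
# A minimal sketch of what a "test duration calculator" works out, assuming a
# two-variant (A/B) test on conversion rate with the common defaults of
# 95 per cent confidence and 80 per cent power. All figures are illustrative.

def sample_size_per_variant(baseline_rate, minimum_detectable_effect):
    """Approximate visitors needed per variant (two-proportion z-test)."""
    z_alpha = 1.96   # two-sided 95 per cent confidence
    z_beta = 0.84    # 80 per cent power
    p1 = baseline_rate
    p2 = baseline_rate + minimum_detectable_effect
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ((z_alpha + z_beta) ** 2 * variance) / (minimum_detectable_effect ** 2)

# Example: a 2 per cent conversion rate, hoping to detect a lift to 2.5 per cent,
# with 1,000 visitors a day split evenly across the two variants.
visitors_needed = sample_size_per_variant(0.02, 0.005)
days_needed = (visitors_needed * 2) / 1000
print(f"~{visitors_needed:,.0f} visitors per variant, roughly {days_needed:.0f} days")
```

Run the numbers before you launch and you will know whether two weeks of your normal traffic is anywhere near enough to detect the change you care about.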

The other primary measurement you will need to figure out is how to pick a winner. It seems simple (you choose the one with the higher result!), but this can sometimes be misleading. There is a factor in testing you should consider, called the “confidence rating”. This is a measure of the statistical significance of your winning test, or the degree of certainty that your result isn’t just down to chance. It is usually expressed as a “p” value (the lower the value, the more confident you can be), and there are numerous free online calculators where you can enter your results and see how significant they are. This is a super helpful way to know whether the result of your test is significant enough to act on more widely across your other channels.
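
And if you would like to see the sums behind those significance calculators, here is a rough sketch, again assuming a simple two-variant conversion test; the visitor and conversion numbers are made up for the example.

```python
from math import sqrt, erfc

def ab_test_p_value(visitors_a, conversions_a, visitors_b, conversions_b):
    """Two-sided p-value for a two-proportion z-test (conversion rate, A vs B)."""
    rate_a = conversions_a / visitors_a
    rate_b = conversions_b / visitors_b
    # Pool the rates under the assumption that there is no real difference
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    standard_error = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (rate_a - rate_b) / standard_error
    return erfc(abs(z) / sqrt(2))  # two-sided p-value from the normal distribution

# Made-up results: variant B converts at 2.6 per cent vs 2.0 per cent for A.
p_value = ab_test_p_value(10_000, 200, 10_000, 260)
print(f"p-value: {p_value:.3f}")  # below 0.05 is the conventional bar for significance
```

If the p-value comes out below 0.05, the difference is conventionally treated as significant; any higher and you should treat your “winner” with caution (or keep the test running).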

You’re In Good Company

And remember, your tests may fail. In fact, they most likely will! But you’re not the only one:

  • Microsoft have said that around 1/3 of the ideas they tested were positive and statistically significant, 1/3 were flat, and 1/3 were negative and statistically significant
  • At Bing, the success rate is even lower
  • Google said that only 10 per cent of the 12,000 experiments they ran over one year led to business changes (stated in Uncontrolled by Jim Manzi)
  • Netflix “considers 90 per cent of what they try to be wrong” (Mike Moran, writing about Netflix).

But above all, the strongest advice I can give you is to make testing fun! The more fun it is, the more your teams will get involved, and the more you will see your metrics improve.


Reneé will be speaking about how to run great tests at the Figaro Digital Marketing Conference on 12 July 2018. Click here to see the full agenda and to book your ticket now.