Predictive Marketing

Test Results: Are They Reliable?

Testing is one of the building blocks of predictive marketing. Sites such as Marketing Experiments and Marketing Sherpa frequently report the results of marketing tests, with the implication being that you can apply the results to your own business. As we saw in my last post, however, you can’t infer best practices from somebody else’s test. You have to run tests with your own target audience to know what really works with them.

There is another huge problem with the reporting of test results on these sites. Let’s take a look at this test result, as reported on the Marketing Experiments site, regarding good vs. bad email copy:

[Figure: email test results showing a 49.5% difference in response]

What is wrong with this? There is absolutely no way to tell whether the results are statistically significant! If you are not familiar with the concept of statistical significance, it is essential that you understand it before you start testing.

Here’s a fact that tends to get lost in the shuffle when these popular sites review test results: if you ran the test again, you are virtually guaranteed to get different results. The % difference above may be 70%, or 30%, or even -20%, instead of 49.5%. If you haven’t thought about this before, it might seem somewhat shocking. But it’s true.
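You can see this variability for yourself with a quick simulation. The sketch below (hypothetical numbers: a true 2.0% vs. 3.0% conversion rate, 1,000 emails per arm) runs the same A/B email test five times and prints the observed % lift each time. The lifts bounce around the true 50% value, exactly as described above.

```python
import random

random.seed(42)

def simulate_test(rate_a, rate_b, n_per_arm):
    """Simulate one A/B email test and return the observed % lift of B over A."""
    conv_a = sum(random.random() < rate_a for _ in range(n_per_arm))
    conv_b = sum(random.random() < rate_b for _ in range(n_per_arm))
    return 100 * (conv_b - conv_a) / conv_a

# Hypothetical true rates: 2.0% vs 3.0% (a real 50% lift), 1,000 emails per arm
lifts = [simulate_test(0.02, 0.03, 1000) for _ in range(5)]
print([round(lift, 1) for lift in lifts])
```

Run it a few times with different seeds: each "test" reports a different lift, even though the true underlying difference never changes.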

To say that the results of a test are statistically significant means that the observed difference is very unlikely to be due to chance alone, so you can be confident that a repeat of the test would show similar results. Let’s take a look at one of the results I reviewed in my last post:

[Figure: subject line test results]

At the bottom of the test results, you see a statement that “The difference in conversion rates is statistically significant at the 99% level.” What does this mean? Roughly, that if the two subject lines actually performed identically, there would be less than a 1-in-100 chance of seeing a difference as large as the one observed. In practical terms, you can be very confident that the first subject line genuinely beats the second, and that repeating the test would usually show the same winner.

Tests are conducted to measure how much a particular marketing variable, such as copy, affects response on a small portion of your target audience before you roll the winner out to your entire target audience. Before you do that, you need to be extremely confident that the winning version really does improve conversions. That is where statistical significance comes in. If you don’t know the significance level of your tests, you are playing with fire, and you might get burned.
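The standard way to check significance for a conversion-rate test is a two-proportion z-test. The sketch below uses only Python’s standard library; the conversion counts (60/3,000 vs. 90/3,000) are hypothetical numbers, not taken from the tests above.

```python
from math import sqrt, erf

def significance(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test.

    conv_*: number of conversions in each arm; n_*: emails sent per arm.
    Returns (z, confidence), where confidence is the two-sided probability
    that a difference this large is not just random noise.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)              # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error
    z = (p_b - p_a) / se
    confidence = erf(abs(z) / sqrt(2))                    # normal CDF via erf
    return z, confidence

# Hypothetical test: 60/3000 (2.0%) vs 90/3000 (3.0%) conversions
z, conf = significance(60, 3000, 90, 3000)
print(f"z = {z:.2f}, confidence = {conf:.1%}")
```

Note that even with a 50% observed lift, this example comes in below the 99% level: sample size matters as much as the size of the difference.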

If you would like a free calculator to help you test the statistical significance of your tests, email me at rhodgson@predictive-marketing.com.


