Many email service providers admit that there has been a gradual decline in open rates over the past few years. While the open rate doesn’t tell the whole story on email success, it is still vital to measure. After all, if your audience doesn’t open your email, they have no chance to read it and respond to it.
One of the primary reasons cited for the decline is inbox clutter. According to Forrester Research, 60% of consumers believe they receive too much email. In another study, Customer Knowledge is Marketer Power, Forrester found that the chief reason cited by marketers who believe email will be less effective in two years is “too much clutter in consumer inboxes.” A belief that spam will drive the decline was cited by only 59%.
Clearly, we are all becoming increasingly numb to the steady stream of email arriving in our inboxes. A second, related reason often given for the decline in open rates is the increasing effectiveness of spam filters that help manage this flood of email.
A third reason, and a significant one, is technological. Opens are measured by including a tiny image (usually a 1-pixel-by-1-pixel GIF or JPEG) within the email. Once the images embedded in the email are served, the email is recorded as opened. The problem is that a lot of email readers don’t automatically display the images in an email. In fact, ExactTarget estimates that 50% of all email is now delivered to email readers that either don’t automatically render images or are unable to render images, such as Outlook, Gmail, AOL, and handheld devices such as Blackberries. Thus, there is an inherent bias toward undercounting opens.
If you’re running an email campaign, it’s important to know the true open rate, so you can gauge the true reach of your email message. There’s an easy way to do this. It’s based on the insight that click-throughs are always measured, even if opens aren’t. Even though the email reader may not be indicating an open, because it hasn’t rendered the images, the recipient of the email can still click on the links. That means that some recipients will be tracked as clicking through, but not opening an email. Let’s walk through an example.
Here’s the initial tracking information for an email:
Here’s how to estimate the true open rate:
- Download the list of the email addresses that have opened the email from your email service provider.
- Download the list of the email addresses that have clicked on a link in the email. Now match up the list of those who have clicked through, to see if they were tracked as opening the email. In the case above, it turns out that 105 recipients clicked a link in the email, but only 75 of them were tracked as having opened the email.
- Multiply the open rate above by the ratio 105/75. This gives an estimate of the true open rate, assuming that the recipients whose opens were not tracked clicked through at the same rate as those whose opens were. The revised tracking information is as follows:
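The adjustment above can be sketched in a few lines of code. The click counts (105 clickers, 75 of them tracked as opens) come from the example in the text; the 20% reported open rate is a hypothetical figure for illustration.

```python
# Estimate the true open rate from tracked opens and click-through data.
# 105 recipients clicked a link, but only 75 of them were tracked as
# having opened the email (the example figures from the text).

def estimated_open_rate(reported_open_rate, total_clickers, clickers_tracked_as_opened):
    """Scale the reported open rate by the ratio of all clickers to
    clickers whose opens were tracked."""
    correction = total_clickers / clickers_tracked_as_opened
    return reported_open_rate * correction

# Assume a reported open rate of 20% for illustration.
reported = 0.20
true_rate = estimated_open_rate(reported, total_clickers=105,
                                clickers_tracked_as_opened=75)
print(true_rate)  # 105/75 = 1.4, so the estimate is 40% higher: 0.28
```

The correction factor (here 1.4) is reusable: apply it to any open-rate figure reported for the same audience and email design.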
As you can see, because not all email readers render images, the estimated open rate in this case was actually 40% higher than reported. Here’s how you can use this information:
- In order to maximize your click-through rates, make sure the message in your emails does not rely on images. That way, if the recipient of your email doesn’t see the images, they can still respond to your message. As demonstrated above, this can help increase your open rates by 40% – or more.
- It’s vital to know what the real underlying trends are for your email campaigns, so you can make adjustments as necessary. You’re in a better position to know that if you monitor the estimated open rate, as described above, because it eliminates quirks in the tracking system. You need to make adjustments in your strategy based on real changes in customer behavior, rather than changes in the way email readers render images.
- With the estimated open rate, you now have a better estimate of the cumulative penetration of your message to your target audience. For example, if the reported rate shows a cumulative penetration of 33% after several emails, and you actually have a 40% higher open rate, a better estimate of your penetration is 1.4 x 33% or roughly 46%. You can then make better decisions about how to most effectively reach the rest of your target audience.
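The penetration arithmetic in the last bullet can be captured in a small helper. The 33% reported penetration and the 1.4 correction factor are the example figures from the text; note the cap, since an adjusted estimate can never exceed the full audience.

```python
# Adjust a reported cumulative penetration figure by the open-rate
# correction factor (1.4 in the example from the text).

def adjusted_penetration(reported_penetration, correction_factor):
    # Cap at 100%: the adjusted estimate can't exceed the full audience.
    return min(reported_penetration * correction_factor, 1.0)

print(adjusted_penetration(0.33, 1.4))  # ~0.462, i.e. roughly 46%
```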
SEO is a critical component of marketing for every website. There are many tips and techniques that are widely available that can help you increase the chances of getting a high ranking for the search keywords and phrases that are central to your marketing strategy. Everyone knows that a higher ranking is better, but exactly how high does your ranking have to be to generate significant traffic for your website? Is it possible to predict how much traffic you can generate for a given search phrase and ranking?
It is well known that you can use a resource such as the Google Keyword Tool to estimate monthly traffic for a keyword. Once you have that number, the question becomes: given a particular ranking, what percentage of those searches will result in a visit to your website? You can’t really create a reliable, comprehensive search phrase strategy without this critical piece of information.
There is a variety of counsel and opinion on this topic, not all of it consistent. For instance, one website, which provides research, training and educational services exclusively for the publishing industry, states the following rule of thumb:
“When your website or landing page turns up on page one in Google, you’re getting 100% visibility... But what happens when your landing page ends up on page two or three? We estimate that you’re getting about 32% Google visibility on page two, meaning only about 32% of users ever click through to page two, and a meager 7% visibility on page three. If you’re on page four or beyond, you simply don’t have a chance of being seen by your potential customers.”
The authors cited no source for this rule of thumb, nor any explanation of how they developed it. There are a number of other rules of thumb about click distributions floating around on the web that are entirely inconsistent with the above. I’m not going to dwell on these here; I’d rather get right to the data I believe is the most credible and useful.
SEO Click Distributions – The Best Data Available
Several eye-tracking studies conducted over the past few years have produced consistent results. Perhaps the best known among them is a study performed at Cornell University that showed the following:
Source: SEO Researcher
This data tells a far different tale than the rule of thumb cited above: the first three ranks get 80% of the clicks, and the first page gets 98.9% of the clicks!
You might object, and I would agree, that this data is derived from an eye-tracking study, not actual searches, which warrants some caution in extrapolating the results. Fortunately, some actual data is available. In 2006, AOL leaked data on over 36 million queries. The data was analyzed by Richard Hearne, and the results are as follows:
These results, by and large, are consistent with the Cornell eye-tracking study, in that the first page attracts an extremely high percentage of the clicks. The first three ranks garner 63% of the clicks; the top 10, 90%; the top 20, 94.5%. Here are the percentages for ranks 1-21, 31, and 41:
Viewed another way, an improvement in rank from second to first will almost quadruple the number of clicks. The number one ranking produces as many clicks as ranks two through eight combined. The drop-off in clicks is enormous by the time you get to the second page; a rank of 11 produces only .66% of the clicks; in comparison, a rank of 10 produces more than 4 times as many, and the number 1 rank more than 60 times as many!
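These drop-offs can be checked directly against the data. The click shares below are commonly cited approximations from the leaked 2006 AOL data set; treat the exact figures as assumptions, since the table itself is not reproduced here.

```python
# Approximate click share (%) by Google rank, from the leaked 2006 AOL
# data set. Figures are commonly cited approximations, not exact values.
click_share = {1: 42.30, 2: 11.92, 3: 8.44, 4: 6.03, 5: 4.86,
               6: 3.99, 7: 3.37, 8: 2.98, 9: 2.83, 10: 2.97, 11: 0.66}

# Moving from rank 2 to rank 1 almost quadruples clicks (~3.5x).
print(round(click_share[1] / click_share[2], 2))

# Rank 1 draws about as many clicks as ranks 2-8 combined (~41.6 vs 42.3).
print(round(sum(click_share[r] for r in range(2, 9)), 2))

# The cliff at page two: rank 10 gets over 4x the clicks of rank 11,
# and rank 1 gets over 60x.
print(round(click_share[10] / click_share[11], 1))
print(round(click_share[1] / click_share[11], 1))
```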
This click distribution has also been confirmed by an independent set of search data analyzed by Enquisite, a firm that specializes in search optimization software. Based on a proprietary data set of 300 million searches, the first page grabbed 89.71% of the clicks; the second 5.93%; the third, 1.85%, the fourth, .78%; and the fifth, .46%.
Since several independent methods have produced highly similar results, we can be confident that this data provides a reliable foundation on which to base an SEO strategy.
Implications for SEO Strategy
- The ranking you can achieve for any given search phrase depends on a number of factors, including how well you optimize your pages for the search phrase, your page rank, and the amount of competition. If you opt to compete for high volume search phrases with a lot of competition, you have to realistically weigh the chances that you can make the first page.
- A better option may be to pursue a long tail strategy, in which you set your sights on achieving a number one ranking on lower volume search phrases with lower levels of competition. This strategy necessarily involves multiple keywords in order to generate significant volumes of traffic for your website.
- But perhaps the best option of all, made possible by this data, would be to pursue a mixed strategy. The increase in traffic you can expect from improving your ranking for any particular search phrase can now be predicted. You can therefore weigh the incremental increase in your website traffic for an entire portfolio of search phrases, and allocate your efforts in a way that will optimize your ROI.
Testing is one of the building blocks of predictive marketing. Sites such as Marketing Experiments and Marketing Sherpa frequently report the results of marketing tests, with the implication being that you can apply the results to your own business. As we saw in my last post, however, you can’t infer best practices from somebody else’s test. You have to run tests with your own target audience to know what really works with them.
There is another huge problem with the reporting of test results on these sites. Let’s take a look at this test result, as reported on the Marketing Experiments site, regarding good vs. bad email copy:
What is wrong with this? There is absolutely no way to tell if the results are statistically significant! If you are not familiar with the concept of statistical significance, it is absolutely essential to understand if you are going to be testing.
Here’s a fact that tends to get lost in the shuffle when these popular sites review test results: if you ran the test again, you are virtually guaranteed to get different results. The % difference above may be 70%, or 30%, or even -20%, instead of 49.5%. If you haven’t thought about this before, it might seem somewhat shocking. But it’s true.
To say that the results of a test are statistically significant means that you can be confident that the next time that you run the test, you can expect similar results. Let’s take a look at one of the results I reviewed in my last post:
At the bottom of the test results, you see a statement that “The difference in conversion rates is statistically significant at the 99% level.” What does this mean? Roughly, it means that if the two subject lines actually performed equally well, there would be less than a 1% chance of seeing a difference this large from random variation alone; in other words, you can be highly confident that the first subject line genuinely beats the second.
Tests are conducted to measure how much a particular marketing variable, such as copy, affects response on a small portion of your target audience before you roll it out on a large scale to your entire target audience. Before you do that, you have to be extremely confident that, at the very least, you can show an improvement in conversions. That’s where statistical significance comes in. If you don’t know the level of statistical significance of your tests, you are playing with fire, and might get burned.
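One standard way to check significance for a test like this is a two-proportion z-test on the conversion counts of the two cells. The sketch below uses only the standard library; the sample figures (120 and 80 conversions out of 2,000 sends each) are hypothetical, not taken from the tests discussed here.

```python
# A minimal two-proportion z-test for comparing conversion rates
# between two email test cells.
from math import erf, sqrt

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return the z statistic and two-sided p-value for the difference
    between two conversion proportions."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled proportion under the null hypothesis of no difference.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical test: 120/2000 conversions vs 80/2000.
z, p = two_proportion_z_test(120, 2000, 80, 2000)
print(z, p)  # p below 0.01 means significant at the 99% level
```

If the p-value comes in above your threshold, the honest conclusion is that the test hasn’t demonstrated a difference, and a larger sample is needed.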
If you would like a free calculator to help you test the statistical significance of your tests, email me at email@example.com.
“It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so.” – Mark Twain
In today’s rapidly evolving markets, you can never take anything for granted. The average lifetime of so-called “best practices” is shorter than ever.
I experienced this firsthand recently with a couple of email tests designed to drive registrations for some upcoming conferences. Both conferences were targeted at highly technical IT audiences.
My past experience had always indicated that the best subject lines featured offers and calls to action, especially when closing in on a limited-time pricing deadline. So the following test results were as expected:
I had seen this result dozens of times before; invariably, the subject line that emphasized the dollar savings and created a sense of urgency had always emerged triumphant. Imagine my surprise, then, when I saw the results of the following test for a different conference directed at a similar audience:
Not only did the standard subject line offer no improvement on the alternative; it produced a result that was significantly worse. There is no doubt that, for this particular audience, the second subject line produced more conversions than the first.
The beauty of testing is that marketers don’t have to figuratively stumble around in the dark searching for the best way to communicate with customers. Test and learn strategies provide a way to find out directly, from prospects and customers, what they value and want most. There is no longer any excuse for a marketer to rely on hunches, anecdotes, and biased opinions in order to make marketing decisions. Even the seemingly most insignificant of decisions – the color of a registration button, for example – may have an effect on conversion rates which can be quantified.
However, when employing a test-and-learn approach to marketing, there is a trap to be avoided, which is illustrated by this example. Every business and every customer set is different. There is no one set of best practices that applies in every situation. Think of the body of knowledge you gain by testing as a set of “best guidelines” rather than best practices. And know that no testing program ever arrives at a final best answer. Your customers are always changing. Always be testing.