Predictive Marketing

The Best Time of Day to Tweet

Twitter has become an increasingly popular and important tool for businesses to keep in touch with their customers. Twitter is a medium unlike any other. Each tweet has a limited life-span – if it is not read within a short time of its being posted, the chances of it ever being read drop sharply. The constant stream of new tweets from the individuals each twitterer is following makes it unlikely that a tweet a few hours old will ever be read. Few twitterers capture all of their incoming tweets in RSS feeds, or take the time to examine all the latest tweets from more than a handful of individuals. For a business hoping to broadcast a message that is read by the most followers possible, timing is of the essence.

So then, what is the best time of day to tweet? Several approaches have been proposed to answer this question:

As you can see, there are a lot of different opinions about the best time to tweet. In order to develop the best answer possible to this question, I collected data over the course of several weeks for a business whose followers consist primarily of event professionals.

The data set consisted of several thousand tweets, including the username, the time and day of the tweet, and the tweet itself. For the purpose of this analysis, I assumed that the best indicator of a given twitterer’s degree of engagement was whether or not they had tweeted within a given hour. So in order to determine the best time of day to tweet, what is most important is not the number of tweets being posted at a particular time, but the number of unique users posting tweets. Here’s the data, in Eastern Time:
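
To make the counting method concrete, here is a minimal sketch in Python, assuming the collected tweets sit in a CSV file. The file name, column names, and follower count are hypothetical placeholders for your own data:

    from collections import defaultdict
    import csv
    from datetime import datetime

    # Tally *unique* users tweeting in each hour of the day, rather than
    # raw tweet counts. Assumes columns: username, timestamp, text.
    unique_users_by_hour = defaultdict(set)

    with open("tweets.csv", newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            hour = datetime.fromisoformat(row["timestamp"]).hour
            unique_users_by_hour[hour].add(row["username"])

    total_followers = 5000  # hypothetical size of the follower base

    for hour in sorted(unique_users_by_hour):
        active = len(unique_users_by_hour[hour])
        share = active / total_followers
        print(f"{hour:02d}:00 - {hour + 1:02d}:00  {active:4d} unique users  ({share:.1%} of followers)")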

For this group of followers, there are actually two optimal hours to tweet – 10:00 – 11:00 AM and 12:00 – 1:00 PM. Tweets during these two hours reach 23.7% of the total number of followers, an 18% advantage over the next best time, 11:00 AM – 12:00 PM, and a 31% advantage over 1:00 PM – 2:00 PM. These increases in total available audience are highly significant to a business with thousands of followers.

Notice that, for this particular group, a tweet during the hour beginning at 9:00 AM, the beginning of Gary McAffrey’s time window, would only reach an available audience that is two-thirds the size of that available during 10:00 – 11:00 AM and 12:00 – 1:00 PM. Malcolm Cole’s suggestion of 4:01 PM reaches an available audience that is less than half the size – only 41% – of that of the best time to tweet. Guy Kawasaki’s formula of four tweets varied over 8 – 12 hour intervals is a hit-or-miss proposition. In this particular case, the Social Media Guide is right on the money – the hour beginning at 9:00 AM Pacific/12:00 PM Eastern is best.

But does this pattern hold for every group of followers? Or does each group of followers have a unique pattern, a sort of “time fingerprint”? To answer this question, I examined a second group of followers of a CRM company. Here’s the data, once again expressed in Eastern Time:

This group is far different! The group following the CRM company is much more likely to be active during the morning hours, and its activity is more evenly distributed over the entire day. As a result, a tweet to this group reaches a maximum of 10.8% of the total available audience, compared to the group of event professionals, which peaked at 23.7%. The CRM group reaches its maximum at 11:00 AM – 12:00 PM, rather than the hour before or after, as in the previous case. So while it is a close approximation, the Social Media Guide guideline of 12:00 PM Eastern Time would, for this group, reach an audience 17% smaller than the peak 11:00 AM – 12:00 PM period.

As these two data sets demonstrate, there is no one best time to tweet for every business. Each business has a unique set of followers with their own Twitter “time fingerprint”. You have to track the habits of your own set of followers in order to determine the best single time of day for your business to tweet.

Develop this graph for your own set of followers (the counting sketch above is one way to start). How different is your group from these two?

One of the most important insights from these two examples is that at any given time, you can only reach 10% – 24% of your followers with a single tweet. In a future post, I’ll examine what percentage of a group of followers can be reached with multiple tweets.


How to Price Your Conference or Exhibition

Conferences, exhibitions, and events were the original forms of social media. In recent years attendees have been more difficult to attract, due to the rise of the Internet, the increased hassle of travel, and an economic recession. But even as event producers have struggled against these forces to maintain or grow attendance levels, they have in many cases ambitiously attempted to increase revenue by increasing prices to attend their events. And as the recent experience at a number of events demonstrates, this can be a formula for failure.

One of the most forthright and savvy publishing operations on the Web, Mequoda Daily, recently discovered that price increases can backfire. In their own words:

Mequoda Summit: Rolling Back Prices to 2009

A 3-day program for the price of 2 days

After a multi-month test we have decided to reduce the price of the Seventh Mequoda Summit. We originally tested a theory explained in today’s Mequoda Daily post. It basically consisted of our desire to add more content to this year’s Mequoda Summit, to further enhance the experience for our attendees.

So we went forth with the test. This included increasing the content of the Summit by 25%. To be able to support the time and resources spent on this additional content, we decided to increase the price by 14%.

As a result we concluded that a 25% increase in content and a 14% increase in price yielded a 38% decrease in attendance.

In turn we have ended our test, and have shared our results with all of our loyal readers. We hope that you consider our findings when planning live events in the future. We are also offering admittance to the Summit for last year’s price, which is $200 cheaper than our original offer for 2010.

This decision by Mequoda Daily is at once smart and courageous. I’m sure they saw registrations and revenue increase immediately.

Test Multiple Price Points to Determine the Best Price

The best way to determine the appropriate price for an event is by testing several different price points at the start of your marketing campaign. The graph below displays the results of a price test I recently conducted for a client at conference fees that ranged from $1,395 to $1,995. The test enabled us to determine that the best price was $1,595, which provided a projected incremental $60,000 in revenue over the next best price of $1,395, and more than $130,000 of incremental revenue over the worst outcome at $1,995.
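
Here is a minimal sketch of the projection arithmetic behind a test like this. The response rates and audience size below are hypothetical, chosen only to roughly echo the revenue differences above; the client’s actual test figures are not published here:

    # Project full-campaign revenue for each tested price point.
    # All inputs are hypothetical illustrations, not the client's data.
    test_results = {
        1395: 0.0400,  # price -> response rate observed in that test cell
        1595: 0.0388,
        1795: 0.0300,
        1995: 0.0245,
    }
    campaign_audience = 10000  # hypothetical size of the full campaign list

    for price, rate in sorted(test_results.items()):
        registrations = campaign_audience * rate
        revenue = registrations * price
        print(f"${price:,}: ~{registrations:.0f} registrations, ~${revenue:,.0f} revenue")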

Pricing is often a seat-of-the-pants decision for an event producer. There are many methods that can be employed to set the price for your conference – what your competitors charge, how many days it lasts, or how much content you have. Testing provides a way to find out directly from your customers what value they place on your product. The best strategy, and the one that will generate the most revenue, is to set your pricing through testing.

The Revenue Implications of Charging for Exhibit Attendance

Many conferences have an exhibit area that also provides a significant revenue stream. In order to maximize traffic on the exhibit floor, event management usually offers free passes to individuals who would like to visit the exhibits, but not attend conference sessions.

In some cases an event producer may decide to charge a nominal fee for passes to visit the exhibits to generate some additional revenue. This can be a major mistake.

Let’s take a look at some actual data and the revenue implications of charging for admission. In this case, the event producer decided to offer free admission to the exhibits if the attendee pre-registered, and to charge a $50 fee if the attendee registered on site. This was a change in policy from the previous year, when admission was free regardless of when the attendee registered. The change in policy permits a year-to-year comparison that provides a dramatic illustration of what can happen when a fee is charged for exhibit attendance.

The first thing to note about the data is the significant drop in on site registrations, which declined from 397 to 103, a drop of 74%, in contrast to an increase of 81% in pre-registrations. One could assume that if the $50 fee had not been applied for on site registrations, they would also have grown by 81%. So attendance grew by 39% (to 2,067), when it should have grown by 81% (to 2,680). Actual attendance was 23% lower than it should have been.
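
The counterfactual arithmetic is easy to verify with a few lines of Python. The snippet backs out the prior year’s total from the stated 39% growth; the small difference from the 2,680 quoted above is rounding in the stated percentages:

    # Verify the counterfactual: what attendance would have been if on-site
    # registrations had grown at the same 81% rate as pre-registrations.
    onsite_prev, onsite_curr = 397, 103
    total_curr = 2067
    pre_reg_growth = 0.81

    total_prev = round(total_curr / 1.39)  # back out last year's total attendance
    expected_total = round(total_prev * (1 + pre_reg_growth))

    onsite_change = (onsite_curr - onsite_prev) / onsite_prev
    shortfall = (expected_total - total_curr) / expected_total

    print(f"On-site registrations: {onsite_prev} -> {onsite_curr} ({onsite_change:.0%})")
    print(f"Expected attendance without the fee: ~{expected_total}")
    print(f"Shortfall vs. expectation: {shortfall:.0%}")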

This shortfall in attendance had a major, negative impact on exhibit sales. Since the size of the exhibit floor grew by 50% (exhibit space sales grew from $342,000 to $521,000), but attendance grew by only 39%, the density of attendees on the exhibit floor decreased by 11%. At an event, the size of the attendance is perceived through the density of attendees on the exhibit floor – how crowded it looks. Even though actual attendance grew by 39%, because the exhibit floor grew by 50%, it looked as though there were fewer visitors in attendance.

The effect on exhibit sales and revenue was immediate. The percentage of exhibitors who signed contracts on site to exhibit at the next conference dropped from 79% to 59%, resulting in a revenue level that was $104,200 lower than it should have been. This revenue loss far exceeded the $5,150 in revenue realized from charging the $50 on-site fee for exhibit passes.

This whole scenario could have been avoided by a simple price test on the exhibit pass at the start of the attendee marketing campaign. Event management would then have known the effect of the price increase on overall attendance, and could have made the pricing decision accordingly.

It never pays to set prices first and react later. Always be testing!

Email: What’s Your Real Open Rate?

Many email service providers admit that there has been a gradual decline in open rates over the past few years. While the open rate doesn’t tell the whole story on email success, it is still vital to measure. After all, if your audience doesn’t open your email, they have no chance to read it and respond to it.

One of the primary reasons cited for the decline is inbox clutter. According to Forrester Research, 60% of consumers believe they receive too much email. In another study, Customer Knowledge is Marketer Power, Forrester found that the chief reason cited by marketers who believe email will be less effective in two years is “too much clutter in consumer inboxes.” A belief that “SPAM” will drive the decline was cited by only 59%.

Clearly, we are all becoming increasingly numb to the steady stream of email arriving in our inboxes. A second, related reason often given for the decline in open rates is the increasing effectiveness of spam filters that help manage this flood of email.

A third reason, and a significant one, is technological. Opens are measured by including a tiny image (usually a 1 pixel by 1 pixel gif or jpeg) within the email. Once the images embedded in the email are served, the email is recorded as opened. The problem is that many email readers don’t automatically display the images in an email. In fact, ExactTarget estimates that 50% of all email is now delivered to email readers that either don’t automatically render images or are unable to render images, such as Outlook, Gmail, AOL, and handheld devices such as Blackberries. Thus, there is an inherent bias: not all opens are detected.
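
For the curious, here is the mechanism in miniature – a hypothetical sketch, not any particular provider’s implementation. The domain, path, and token scheme are invented for illustration:

    import uuid

    # Open tracking in miniature: each outgoing message embeds a 1x1 image
    # whose URL carries a per-recipient token. When an email reader requests
    # the image, the sender's server records an "open" for that recipient.
    def tracking_pixel(recipient_email: str) -> str:
        token = uuid.uuid5(uuid.NAMESPACE_URL, recipient_email)  # stable per-recipient ID
        return (f'<img src="https://mail.example.com/open/{token}.gif" '
                f'width="1" height="1" alt="">')

    print(tracking_pixel("reader@example.com"))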

If you’re running an email campaign, it’s important to know the true open rate, so you can gauge the true reach of your email message. There’s an easy way to estimate it, based on the insight that click-throughs are always measured, even when opens aren’t. Even though the email reader may not register an open, because it hasn’t rendered the images, the recipient can still click on the links. That means some recipients will be tracked as clicking through, but not as opening the email. Let’s walk through an example.

Here’s the initial tracking information for an email:

Here’s how to estimate the true open rate:

  1. Download the list of the email addresses that have opened the email from your email service provider.
  2. Download the list of the email addresses that have clicked on a link in the email. Now match up the list of those who have clicked through, to see if they were tracked as opening the email. In the case above, it turns out that 105 recipients clicked a link in the email, but only 75 of them were tracked as having opened the email.
  3. Multiply the open rate above by the ratio 105/75. This gives an estimate of the true open rate, assuming the same click-through-to-open ratio for the group that clicked a link in the email but was not tracked as having opened it. The revised tracking information (reproduced in the sketch below) is as follows:
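
Here is a minimal sketch of steps 1–3 in Python. The 105 and 75 figures come from the example above; the reported open rate and the file names are hypothetical stand-ins for the tracking data:

    # Estimate the true open rate from click-through data.
    def estimate_true_open_rate(reported_open_rate, clickers, clickers_tracked_open):
        """Scale the reported rate by the ratio of all clickers to the
        clickers who were also tracked as having opened the email."""
        return reported_open_rate * clickers / clickers_tracked_open

    # Steps 1 and 2: match the downloaded lists (hypothetical file names).
    opened = {line.strip().lower() for line in open("opened.txt")}
    clicked = {line.strip().lower() for line in open("clicked.txt")}
    clickers_tracked_open = len(clicked & opened)  # 75 in the example above

    # Step 3: scale the reported open rate (hypothetical 20% here).
    reported_rate = 0.20
    estimated = estimate_true_open_rate(reported_rate, len(clicked), clickers_tracked_open)

    print(f"Reported open rate:  {reported_rate:.1%}")
    print(f"Estimated open rate: {estimated:.1%} ({estimated / reported_rate - 1:.0%} higher)")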

As you can see, because not all email readers render images, the estimated open rate in this case was actually 40% higher than reported. Here’s how you can use this information:

  • In order to maximize your click-through rates, make sure that the message in your emails does not rely on images. That way, even if the recipient of your email doesn’t see the images, they can still respond to your message. As demonstrated above, this audience can be 40% larger than your reported opens suggest – or more.
  • It’s vital to know what the real underlying trends are for your email campaigns, so you can make adjustments as necessary. You’re in a better position to know that if you monitor the estimated open rate, as described above, because it eliminates quirks in the tracking system. You need to make adjustments in your strategy based on real changes in customer behavior, rather than changes in the way email readers render images.
  • With the estimated open rate, you now have a better estimate of the cumulative penetration of your message to your target audience. For example, if the reported rate shows a cumulative penetration of 33% after several emails, and you actually have a 40% higher open rate, a better estimate of your penetration is 1.4 x 33% or roughly 46%. You can then make better decisions about how to most effectively reach the rest of your target audience.

SEO: Predicting the Payoff

SEO Strategy – The Landscape

SEO is a critical component of marketing for every website. There are many tips and techniques that are widely available that can help you increase the chances of getting a high ranking for the search keywords and phrases that are central to your marketing strategy. Everyone knows that a higher ranking is better, but exactly how high does your ranking have to be to generate significant traffic for your website? Is it possible to predict how much traffic you can generate for a given search phrase and ranking?

It is well known that you can use a resource such as the Google Keyword Tool to estimate monthly traffic for a keyword. Once you have that number, the question becomes: given a particular ranking, what percentage of those searches will result in a visit to your website? You can’t really create a reliable, comprehensive search phrase strategy without this critical piece of information.

There is a variety of counsel and opinion on this topic, not all of it consistent. For instance, one website, which provides research, training and educational services exclusively for the publishing industry, states the following rule of thumb:

“When your website or landing page turns up on page one in Google, you’re getting 100% visibility... But what happens when your landing page ends up on page two or three? We estimate that you’re getting about 32% Google visibility on page two, meaning only about 32% of users ever click through to page two, and a meager 7% visibility on page three. If you’re on page four or beyond, you simply don’t have a chance of being seen by your potential customers.”

The authors cited no source for this rule of thumb, or explanation of how they developed it. There are a number of other rules of thumb about click distributions floating around on the web, which are entirely inconsistent with the above. I’m not going to dwell on these here; I’d rather get right to the data I believe is the most credible and useful.

SEO Click Distributions – The Best Data Available

Several eye-tracking studies have been conducted over the past few years, all of which produce consistent results. Perhaps the best-known among them is a study performed at Cornell University that showed the following:

Source: SEO Researcher

This data tells a far different tale than the rule of thumb cited above: the first three ranks get 80% of the clicks, and the first page gets 98.9% of the clicks!

You might object, and I would agree, that this data is derived from an eye-tracking study, not actual searches, and thus counsels some caution in extrapolating the results. Fortunately, there is some actual data available. In 2006, AOL leaked data on over 36 million queries. The data was analyzed by Richard Hearne, and the results are as follows:

These results, by and large, are consistent with the Cornell eye-tracking study, in that the first page attracts an extremely high percentage of the clicks. The first three ranks garner 63% of the clicks; the top 10, 90%; the top 20, 94.5%. Here are the percentages for ranks 1-21, 31, and 41:

Viewed another way, an improvement in rank from second to first will almost quadruple the number of clicks. The number one ranking produces as many clicks as ranks two through eight combined. The drop-off in clicks is enormous by the time you get to the second page; a rank of 11 produces only 0.66% of the clicks; in comparison, a rank of 10 produces more than 4 times as many, and the number 1 rank more than 60 times as many!

This click distribution has also been confirmed by an independent set of search data analyzed by Enquisite, a firm that specializes in search optimization software. Based on a proprietary data set of 300 million searches, the first page grabbed 89.71% of the clicks; the second, 5.93%; the third, 1.85%; the fourth, 0.78%; and the fifth, 0.46%.

Since several independent methods have produced highly similar results, we can have a high degree of confidence that this data provides a reliable foundation on which to base an SEO strategy.

Implications for SEO Strategy

  • The ranking you can achieve for any given search phrase depends on a number of factors, including how well you optimize your pages for the search phrase, your page rank, and the amount of competition. If you opt to compete for high volume search phrases with a lot of competition, you have to realistically weigh the chances that you can make the first page.
  • A better option may be to pursue a long tail strategy, in which you set your sights on achieving a number one ranking on lower volume search phrases with lower levels of competition. This strategy necessarily involves multiple keywords in order to generate significant volumes of traffic for your website.
  • But perhaps the best option of all, made possible by this data, would be to pursue a mixed strategy. The increase in traffic you can expect from improving your ranking for any particular search phrase can now be predicted (see the sketch after this list). You can therefore weigh the incremental increase in your website traffic for an entire portfolio of search phrases, and allocate your efforts in a way that will optimize your ROI.
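
Here is a minimal sketch of such a prediction. The click-share table is an approximation assembled for illustration – only the rank-11 figure (0.66%) and the aggregate shares (top three ≈ 63%, top ten ≈ 90%) are quoted directly above – and the search volume is a hypothetical Google Keyword Tool estimate:

    # Predict monthly organic visits for a search phrase at a given rank,
    # using approximate click shares patterned on the AOL data above.
    CTR_BY_RANK = {
        1: 0.420, 2: 0.120, 3: 0.085, 4: 0.060, 5: 0.049,
        6: 0.040, 7: 0.034, 8: 0.030, 9: 0.028, 10: 0.029,
        11: 0.0066,  # the rank-11 figure quoted in the post
    }

    def predicted_visits(monthly_searches: int, rank: int) -> float:
        """Estimated visits = search volume x click share for that rank."""
        return monthly_searches * CTR_BY_RANK.get(rank, 0.0)

    searches = 10_000  # hypothetical volume from the Google Keyword Tool
    for rank in (1, 2, 10, 11):
        print(f"Rank {rank:2d}: ~{predicted_visits(searches, rank):,.0f} visits/month")

Under these assumptions, moving a phrase from rank 11 to rank 10 is worth roughly a four-fold increase in visits, and reaching rank 1 more than a sixty-fold increase.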

