Predictive Marketing

How to Price Your Conference or Exhibition

Conferences, exhibitions, and events were the original forms of social media. In recent years attendees have been more difficult to attract, due to the rise of the Internet, the increased hassle of travel, and an economic recession. But even as event producers have struggled against these forces to maintain or grow attendance levels, they have in many cases ambitiously attempted to increase revenue by increasing prices to attend their events. And as the recent experience at a number of events demonstrates, this can be a formula for failure.

One of the most forthright and savvy publishing operations on the Web, Mequoda Daily, recently discovered that price increases can backfire. In their own words:

Mequoda Summit: Rolling Back Prices to 2009

A 3-day program for the price of 2-days

After a multi-month test we have decided to reduce the price of the Seventh Mequoda Summit. We originally tested a theory explained in today’s Mequoda Daily post. It basically consisted of our desire to add more content to this year’s Mequoda Summit, to further enhance the experience for our attendees.

So we went forth with the test. This included increasing the content of the Summit by 25%. To be able to support the time and resources spent on this additional content, we decided to increase the price by 14%.

As a result we concluded that a 25% increase in content and a 14% increase in price yielded a 38% decrease in attendance.

In turn we have ended our test, and have shared our results with all of our loyal readers. We hope that you consider our findings when planning live events in the future. We are also offering admittance to the Summit for last year’s price, which is $200 cheaper than our original offer for 2010.

This decision by Mequoda Daily is at once smart and courageous. I’m sure they saw registrations and revenue increase immediately.
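The arithmetic behind the rollback is worth spelling out. Taking the reported figures at face value, a 14% price increase combined with a 38% attendance decrease implies roughly a 29% drop in revenue:

```python
# Revenue impact of the Mequoda test, taking the reported figures
# at face value: price +14%, attendance -38%.
price_factor = 1.14        # 14% price increase
attendance_factor = 0.62   # 38% attendance decrease

revenue_factor = price_factor * attendance_factor
print(f"Revenue change: {revenue_factor - 1:+.1%}")  # about -29.3%
```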

Test Multiple Price Points to Determine the Best Price

The best way to determine the appropriate price for an event is by testing several different price points at the start of your marketing campaign. The graph below displays the results of a price test I recently conducted for a client, at conference fees that ranged from $1,395 to $1,995. The test enabled us to determine that the best price was $1,595, which provided a projected incremental $60,000 in revenue over the next best price of $1,395, and more than $130,000 of incremental revenue over the worst outcome at $1,995.
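Here is a minimal sketch of how such a projection works. The three price points are the ones tested above, but the cell sizes, response counts, and list size are hypothetical stand-ins, so the output will not reproduce the figures from my test:

```python
# Sketch of projecting full-campaign revenue from a price test.
# The three price points come from the test above; the cell sizes,
# response counts, and list size are hypothetical stand-ins.
test_cells = {
    1395: {"mailed": 2000, "registrations": 36},
    1595: {"mailed": 2000, "registrations": 33},
    1995: {"mailed": 2000, "registrations": 20},
}
full_list_size = 40_000  # hypothetical size of the full campaign list

for price, cell in test_cells.items():
    response_rate = cell["registrations"] / cell["mailed"]
    projected = response_rate * full_list_size * price
    print(f"${price}: {response_rate:.2%} response -> ${projected:,.0f}")
```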

Pricing is often a seat-of-the-pants decision for an event producer. There are many methods that can be used to set the price for your conference – what your competitors charge, how many days it lasts, or how much content you have. Testing provides a way to find out directly from your customers what value they place on your product. The best strategy, and the one that will generate the most revenue, is to set your pricing through testing.

The Revenue Implications of Charging for Exhibit Attendance

Many conferences have an exhibit area that also provides a significant revenue stream. In order to maximize traffic on the exhibit floor, event management usually offers free passes to individuals who would like to visit the exhibits, but not attend conference sessions.

In some cases an event producer may decide to charge a nominal fee for passes to visit the exhibits to generate some additional revenue. This can be a major mistake.

Let’s take a look at some actual data and the revenue implications of charging for admission. In this case, the event producer decided to offer free admission to the exhibits if the attendee pre-registered, and charge a $50 fee if the attendee registered on site. This was a change in policy from the previous year, when admission was free regardless of when the attendee registered. The change in policy permits a year to year comparison that provides a dramatic illustration of what can happen when a fee is charged for exhibit attendance.

The first thing to note about the data is the significant drop in on site registrations, which declined from 397 to 103, a drop of 74%, in contrast to an increase of 81% in pre-registrations. One could assume that if the $50 fee had not been applied for on site registrations, they would also have grown by 81%. So attendance grew by 39% (to 2,067), when it should have grown by 81% (to 2,680). Actual attendance was 23% lower than it should have been.
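The arithmetic works out as follows, using the figures from the comparison above:

```python
# Checking the year-over-year attendance figures cited above.
onsite_prev, onsite_curr = 397, 103   # on-site registrations
print(f"On-site change: {onsite_curr / onsite_prev - 1:+.0%}")  # -74%

actual = 2067     # actual total attendance, up 39%
expected = 2680   # attendance had it grown 81%, like pre-registrations
print(f"Attendance shortfall: {1 - actual / expected:.0%}")     # 23%
```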

The effect of this shortfall in attendance had a major, negative impact on exhibit sales. Since the size of the exhibit floor grew by 50% (exhibit space sales rose from $342,000 to $521,000), but attendance grew by only 39%, the density of attendees on the exhibit floor decreased by 11%. At an event, the size of the attendance is perceived through the density of attendees on the exhibit floor – how crowded it looks. Even though actual attendance grew by 39%, because the size of the exhibit floor grew by 50%, it looked like there were fewer visitors in attendance.

The effect on exhibit sales and revenue was immediate. The percentage of exhibitors who signed contracts on site to exhibit at the next conference dropped from 79% to 59%, resulting in a revenue level that was $104,200 lower than it should have been. This revenue loss far exceeded the $5,150 in revenue realized from charging a $50 on site fee for exhibit passes.
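Putting the two revenue figures side by side makes the trade-off stark:

```python
# Fee revenue collected vs. exhibit revenue lost, per the figures above.
onsite_registrations = 103
fee_revenue = onsite_registrations * 50   # $50 on-site fee
lost_exhibit_revenue = 104_200            # shortfall in on-site contracts

print(f"Fee revenue: ${fee_revenue:,}")                        # $5,150
print(f"Net effect:  ${fee_revenue - lost_exhibit_revenue:,}") # $-99,050
```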

This whole scenario could have been avoided by a simple price test on the exhibit pass at the start of the attendee marketing campaign. Event management would then have known the effect of the fee on overall attendance, and could have made the pricing decision accordingly.

It never pays to set prices first and react later. Always be testing!

You Can Measure Social Media ROI: The Incredible $20,000 Tweet

While some social media enthusiasts struggle with the question of how to measure the ROI from social media, the free market is alive and well and functioning. Consider this: an unnamed celebrity was recently paid $20,000 for a single tweet to endorse a product. A company called Sponsored Tweets matches advertisers with celebrities to create sponsored conversations on Twitter. According to Ted Murphy of Izea, the company that runs Sponsored Tweets, “It was actually an incredible value for the advertiser, since the net cost per click came out to less than $.50 per click.”
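The click volume implied by that cost per click is easy to back out:

```python
# Implied click volume behind the $20,000 tweet at under $0.50 per click.
tweet_cost = 20_000
cost_per_click = 0.50   # "less than $.50 per click"
print(f"Implied clicks: at least {tweet_cost / cost_per_click:,.0f}")  # 40,000
```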

Sound familiar? This is nothing more than Old School mass media advertising. Considering that there are 350 million Facebook users, 75 million Twitter users, and over 50 million LinkedIn users, it is not surprising that companies are figuring out ways to leverage the vast reach that these platforms can provide, sometimes in the most mercenary of ways.

But you don’t have to be a mass marketer to derive measurable ROI from social media. Take this example: a high tech conference, with a mere 350 Twitter followers, recently sent out a series of tweets promoting its conference. The links in each of the tweets were coded to enable tracking from a click on the link in the tweet through a completed registration. The result: $15,000 in registrations from new customers. The process works like a funnel: a follower sees the tweet, clicks the coded link, lands on the registration page, and completes a registration.

When you think about it, the progression from tweet to registration is much like an email campaign: the tweet copy plays the role of the email message, the follower list is the mailing list, the coded link is the tracked click-through, and the completed registration is the conversion.
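As a sketch of how the link coding might look – assuming UTM-style query parameters, since the conference’s actual tracking scheme isn’t specified – each tweet gets its own tagged URL so a registration can be traced back to it:

```python
# Sketch of coding tweet links for end-to-end tracking. UTM-style
# parameters are an assumption; the URL and campaign names are
# hypothetical, for illustration only.
from urllib.parse import urlencode

def tracked_link(base_url: str, tweet_id: str) -> str:
    """Tag a registration URL so a completed registration can be
    traced back to the specific tweet that produced the click."""
    params = {
        "utm_source": "twitter",
        "utm_medium": "social",
        "utm_campaign": "conference2010",
        "utm_content": tweet_id,   # identifies the individual tweet
    }
    return f"{base_url}?{urlencode(params)}"

print(tracked_link("http://example.com/register", "tweet-03"))
```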

So in one instance, the case of the $20,000 tweet, Twitter is being used as mass media. In another instance, a series of tweets promoting a conference that generated $15,000 in registrations, Twitter is being used as a direct response vehicle.

A lot of the confusion over measuring the ROI of social media is a result of its chameleon-like qualities. For businesses, it can be leveraged as mass media, one-to-one marketing, customer service, a business intelligence tool, a source of new product ideas, competitive intelligence, market research, and in a host of other ways. Methods of measuring the ROI of every one of these disciplines have already been established in other contexts. These methods can be adapted to measure ROI on social media. But only one in six companies measures the ROI from their social media investment today.

Your company can measure the ROI of social media, and continuously improve it. Here’s how:

  1. Establish clear objectives for the use of social media. As we have seen, social media can be used to achieve multiple objectives. You need to be clear about how you intend to achieve and measure the results of every one of them.
  2. Categorize each type of customer interaction according to the objective it will help achieve. On Twitter, for example, each tweet will fall into a different category, based on its objective.
  3. Develop a tracking system that enables you to measure the results of each customer interaction in comparison with the desired result (see the sketch after this list).
  4. Analyze the results in light of your objectives.
  5. Optimize your strategy – choose the tactics that are providing the most ROI. Eliminate the ones that aren’t.
  6. Go back to step 1. Set new objectives, and start the cycle again.
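Here is a minimal sketch of steps 2 through 5 – categorizing interactions, tallying tracked results, and comparing ROI by objective. All categories and figures are hypothetical:

```python
# Sketch of steps 2-5: categorize each interaction by objective,
# tally tracked costs and results, and compare ROI per category.
# All categories and figures below are hypothetical.
interactions = [
    {"category": "promotion",        "cost": 200.0, "revenue": 1500.0},
    {"category": "promotion",        "cost": 200.0, "revenue": 900.0},
    {"category": "customer_service", "cost": 150.0, "revenue": 0.0},
    {"category": "lead_generation",  "cost": 300.0, "revenue": 450.0},
]

totals = {}
for item in interactions:
    bucket = totals.setdefault(item["category"], {"cost": 0.0, "revenue": 0.0})
    bucket["cost"] += item["cost"]
    bucket["revenue"] += item["revenue"]

for category, t in sorted(totals.items()):
    roi = (t["revenue"] - t["cost"]) / t["cost"]
    print(f"{category}: ROI {roi:+.0%}")  # keep what works, cut what doesn't
```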

If you have clear objectives for your company’s use of social media, the ROI can be measured. Admittedly, some objectives may be harder to measure than others. But if you don’t measure it, you can’t improve it.

When approached in this way, I believe that social media will generate a high ROI for most companies. Social media advocates don’t need to struggle with the ROI question any longer. The ROI is there if companies approach their social media efforts the right way.

Email: What’s Your Real Open Rate?

Many email service providers admit that there has been a gradual decline in open rates over the past few years. While the open rate doesn’t tell the whole story on email success, it is still vital to measure. After all, if your audience doesn’t open your email, they have no chance to read it and respond to it.

One of the primary reasons cited for the decline is inbox clutter. According to Forrester Research, 60% of consumers believe they receive too much email. In another study, Customer Knowledge is Marketer Power, Forrester found that the chief reason cited by marketers who believe email will be less effective in two years is “too much clutter in consumer inboxes.” By comparison, a belief that spam will drive the decline was cited by only 59%.

Clearly, we are all becoming increasingly numb to the steady stream of email arriving in our inboxes. A second, related reason often given for the decline in open rates is the increasing effectiveness of spam filters that help manage this flood of email.

A third reason, and a significant one, is technological. Opens are measured by including a tiny image (usually a 1 pixel by 1 pixel GIF or JPEG) within the email. Once the images embedded in the email are served, the email is recorded as opened. The problem is that many email readers don’t automatically serve the images in an email. In fact, ExactTarget estimates that 50% of all email is now delivered to email readers that either don’t automatically render images or are unable to render images, such as Outlook, Gmail, AOL, and handheld devices such as BlackBerrys. Thus, there is an inherent bias toward not detecting all of the opens.
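To make the mechanism concrete, here is a minimal sketch of an open tracker: a web server that returns a 1 pixel by 1 pixel transparent GIF and logs which recipient requested it. The hostname and recipient id are hypothetical; real email service providers run this at scale:

```python
# Minimal sketch of how open tracking works under the hood: serve a
# 1x1 transparent GIF and log which recipient requested it.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

# Smallest valid transparent 1x1 GIF (43 bytes).
PIXEL = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\xff\xff\xff"
         b"!\xf9\x04\x01\x00\x00\x00\x00"
         b",\x00\x00\x00\x00\x01\x00\x01\x00\x00\x02\x02D\x01\x00;")

class OpenTracker(BaseHTTPRequestHandler):
    def do_GET(self):
        # The email embeds: <img src="http://tracker.example.com/open.gif?id=abc123">
        recipient = parse_qs(urlparse(self.path).query).get("id", ["unknown"])[0]
        print(f"open recorded for recipient {recipient}")
        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        self.send_header("Content-Length", str(len(PIXEL)))
        self.end_headers()
        self.wfile.write(PIXEL)

if __name__ == "__main__":
    HTTPServer(("", 8000), OpenTracker).serve_forever()
```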

If you’re running an email campaign, it’s important to know the true open rate, so you can gauge the true reach of your email message. There’s an easy way to estimate it. It’s based on the insight that click-throughs are always measured, even if opens aren’t. Even though the email reader may not register an open, because it hasn’t rendered the images, the recipient can still click on the links. That means that some recipients will be tracked as clicking through, but not as opening the email. Let’s walk through an example.

Here’s the initial tracking information for an email:

Here’s how to estimate the true open rate:

  1. Download the list of the email addresses that have opened the email from your email service provider.
  2. Download the list of the email addresses that have clicked on a link in the email. Now match up the list of those who have clicked through, to see if they were tracked as opening the email. In the case above, it turns out that 105 recipients clicked a link in the email, but only 75 of them were tracked as having opened the email.
  3. Multiply the open rate above by the ratio 105/75. This gives an estimate of the true open rate, assuming the same click-through-to-open ratio for the group that clicked a link in the email but was not tracked as having opened it (see the sketch after this list). The revised tracking information is as follows:
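Here is the calculation as a short sketch, using the 105/75 figures from the example. Since the original tracking table isn’t reproduced here, the reported open rate and the email addresses are hypothetical stand-ins:

```python
# Estimating the true open rate: 105 recipients clicked a link, but
# only 75 of them were tracked as opens.
opened = {"a@example.com", "b@example.com"}   # hypothetical ESP downloads
clicked = {"a@example.com", "c@example.com"}
untracked_clickers = clicked - opened          # clicked, never "opened"

adjustment = 105 / 75                          # ratio from the example: 1.4
reported_open_rate = 0.20                      # hypothetical reported rate
print(f"Estimated open rate: {reported_open_rate * adjustment:.0%}")  # 28%

# The same factor adjusts cumulative penetration (third bullet below):
print(f"Penetration: 33% reported -> {0.33 * adjustment:.0%} estimated")
```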

As you can see, because not all email readers render images, the estimated open rate in this case was actually 40% higher than reported. Here’s how you can use this information:

  • In order to maximize your click through rates, make sure the message in your emails does not rely on images. That way, if the recipients of your email don’t see the images, they can still respond to your message. As demonstrated above, this can help increase your open rates by 40% – or more.
  • It’s vital to know what the real underlying trends are for your email campaigns, so you can make adjustments as necessary. You’re in a better position to know that if you monitor the estimated open rate, as described above, because it eliminates quirks in the tracking system. You need to make adjustments in your strategy based on real changes in customer behavior, rather than changes in the way email readers render images.
  • With the estimated open rate, you now have a better estimate of the cumulative penetration of your message to your target audience. For example, if the reported rate shows a cumulative penetration of 33% after several emails, and you actually have a 40% higher open rate, a better estimate of your penetration is 1.4 x 33% or roughly 46%. You can then make better decisions about how to most effectively reach the rest of your target audience.

Test Results: Are They Reliable?

Testing is one of the building blocks of predictive marketing. Sites such as Marketing Experiments and Marketing Sherpa frequently report the results of marketing tests, with the implication being that you can apply the results to your own business. As we saw in my last post, however, you can’t infer best practices from somebody else’s test. You have to run tests with your own target audience to know what really works with them.

There is another huge problem with the reporting of test results on these sites. Let’s take a look at this test result, as reported on the Marketing Experiments site, regarding good vs. bad email copy:

[Image: email copy test results from Marketing Experiments, showing a 49.5% difference in response]

What is wrong with this? There is absolutely no way to tell if the results are statistically significant! If you are not familiar with the concept of statistical significance, it is absolutely essential to understand if you are going to be testing.

Here’s a fact that tends to get lost in the shuffle when these popular sites review test results: if you ran the test again, you are virtually guaranteed to get different results. The % difference above may be 70%, or 30%, or even -20%, instead of 49.5%. If you haven’t thought about this before, it might seem somewhat shocking. But it’s true.

To say that the results of a test are statistically significant means that you can be confident that the next time that you run the test, you can expect similar results. Let’s take a look at one of the results I reviewed in my last post:

[Image: subject line test results]

At the bottom of the test results, you see a statement that “The difference in conversion rates is statistically significant at the 99% level.” What does this mean? It means that if you repeatedly conducted the test, 99 times out of 100 the first subject line would once again beat the second subject line.

Tests are conducted to determine, on a small portion of your target audience, how much a particular marketing variable, such as copy, affects response, before you roll it out on a large scale to your entire target audience. And before you do that, you have to be extremely confident that, at the very least, you can expect an improvement in conversions. That’s where statistical significance comes in. If you don’t know the level of statistical significance of your tests, you are playing with fire, and might get burned.
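If you want to run the check yourself, a two-proportion z-test is one standard method for response tests like these. A minimal sketch follows, with hypothetical test-cell figures; the calculator mentioned below may use a different approach:

```python
# Sketch of one standard significance check for an A/B response test:
# a two-proportion z-test. The test-cell figures below are hypothetical.
from math import erf, sqrt

def two_proportion_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for a difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = abs(p_a - p_b) / se
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))  # normal-approximation CDF

p = two_proportion_p_value(120, 5000, 90, 5000)
print(f"p-value: {p:.3f}")  # below 0.05 -> significant at the 95% level
```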

If you would like a free calculator to help you test the statistical significance of your tests, email me at rhodgson@predictive-marketing.com.

