Predictive Marketing

How to Price Your Conference or Exhibition

Conferences, exhibitions, and events were the original forms of social media. In recent years attendees have been more difficult to attract, due to the rise of the Internet, the increased hassle of travel, and an economic recession. But even as event producers have struggled against these forces to maintain or grow attendance levels, they have in many cases ambitiously attempted to increase revenue by increasing prices to attend their events. And as the recent experience at a number of events demonstrates, this can be a formula for failure.

One of the most forthright and savvy publishing operations on the Web, Mequoda Daily, recently discovered that price increases can backfire. In their own words:

Mequoda Summit: Rolling Back Prices to 2009

A 3-day program for the price of 2-days

After a multi-month test we have decided to reduce the price of the Seventh Mequoda Summit. We originally tested a theory explained in today’s Mequoda Daily post. It basically consisted of our desire to add more content to this year’s Mequoda Summit, to further enhance the experience for our attendees.

So we went forth with the test. This included increasing the content of the Summit by 25%. To be able to support the time and resources spent on this additional content, we decided to increase the price by 14%.

As a result we concluded that a 25% increase in content and a 14% increase in price yielded a 38% decrease in attendance.

In turn we have ended our test, and have shared our results with all of our loyal readers. We hope that you consider our findings when planning live events in the future. We are also offering admittance to the Summit for last year’s price, which is $200 cheaper than our original offer for 2010.

This decision by Mequoda Daily is at once smart and courageous. I’m sure they saw registrations and revenue increase immediately.

Test Multiple Price Points to Determine the Best Price

The best way to determine the appropriate price for an event is to test several different price points at the start of your marketing campaign. In a price test I recently conducted for a client, with conference fees ranging from $1,395 to $1,995, we determined that the best price was $1,595: it projected an incremental $60,000 in revenue over the next best price of $1,395, and more than $130,000 of incremental revenue over the worst outcome at $1,995.
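Here is a minimal sketch of how a test readout like that can be projected to a full list. The price points match the test above, but the list size, cell sizes, and response counts are invented for illustration – they are not the client's actual data:

```python
# Hypothetical price-test projection. Prices match the test described above;
# list size, cell sizes, and response counts are invented for illustration.

list_size = 40000          # assumed size of the full marketing list
cell_size = 2000           # assumed prospects mailed per price cell

# price: responses observed in that test cell (illustrative numbers)
test_cells = {1395: 58, 1595: 54, 1795: 38, 1995: 25}

projections = {}
for price, responses in test_cells.items():
    rate = responses / cell_size                 # observed response rate
    projected_regs = rate * list_size            # scale to the full list
    projections[price] = projected_regs * price  # projected gross revenue

best = max(projections, key=projections.get)
for price in sorted(projections):
    print(f"${price}: projected revenue ${projections[price]:,.0f}")
print(f"Best test price: ${best}")
```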

Pricing is often a seat-of-the-pants decision for an event producer. There are many methods for setting a conference price – matching what competitors charge, scaling by the number of days, or by the amount of content on the program. Testing is different: it finds out directly from your customers what value they place on your product. The best strategy, and the one that will generate the most revenue, is to set your pricing through testing.

The Revenue Implications of Charging for Exhibit Attendance

Many conferences have an exhibit area that also provides a significant revenue stream. In order to maximize traffic on the exhibit floor, event management usually offers free passes to individuals who would like to visit the exhibits, but not attend conference sessions.

In some cases an event producer may decide to charge a nominal fee for passes to visit the exhibits to generate some additional revenue. This can be a major mistake.

Let’s take a look at some actual data and the revenue implications of charging for admission. In this case, the event producer decided to offer free admission to the exhibits if the attendee pre-registered, and to charge a $50 fee if the attendee registered on site. This was a change in policy from the previous year, when admission was free regardless of when the attendee registered. The policy change permits a year-to-year comparison that provides a dramatic illustration of what can happen when a fee is charged for exhibit attendance.

The first thing to note about the data is the significant drop in on-site registrations, which fell from 397 to 103 – a 74% decline – in contrast to an 81% increase in pre-registrations. It is reasonable to assume that, absent the $50 fee, on-site registrations would also have grown by 81%. Total attendance grew by 39% (to 2,067) when it should have grown by roughly 81% (to 2,680); actual attendance was 23% lower than it should have been.
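The arithmetic behind those figures can be checked directly from the numbers reported above (the prior year's pre-registration count is backed out from the totals):

```python
# Working the attendance numbers from the text. Known figures:
onsite_prev, onsite_now = 397, 103      # on-site registrations, year over year
total_now = 2067                        # actual total attendance this year
total_growth = 0.39                     # reported 39% growth
prereg_growth = 0.81                    # reported 81% growth in pre-registrations

total_prev = total_now / (1 + total_growth)       # ~1,487 prior attendance
prereg_prev = total_prev - onsite_prev            # ~1,090 prior pre-registrations
prereg_now = total_now - onsite_now               # 1,964 pre-registrations now

# If on-site registrations had also grown 81% (i.e., no $50 fee):
onsite_expected = onsite_prev * (1 + prereg_growth)   # ~719
total_expected = prereg_now + onsite_expected         # ~2,683, vs. 2,680 reported

shortfall = 1 - total_now / total_expected
print(f"Attendance shortfall: {shortfall:.0%}")       # ~23%
```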

This shortfall in attendance had a major, negative impact on exhibit sales. The exhibit floor grew by 50% (exhibit sales rose from $342,000 to $521,000), but attendance grew by only 39% – an 11-percentage-point gap, which meant the density of attendees on the exhibit floor actually declined. For an event, the size of the attendance is perceived through that density – how crowded the floor looks. Even though actual attendance grew by 39%, because the exhibit floor grew by 50% it looked as though fewer visitors were in attendance.

The effect on exhibit sales and revenue was immediate. The percentage of exhibitors who signed contracts on site to exhibit at the next conference dropped from 79% to 59%, leaving revenue $104,200 lower than it should have been. That loss far exceeded the $5,150 realized from charging the $50 on-site fee for exhibit passes.
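A quick back-of-the-envelope check, using the figures above, makes the trade-off explicit:

```python
# Weighing the one-time fee revenue against the exhibit revenue lost,
# using the figures reported above.
onsite_regs = 103
fee_revenue = onsite_regs * 50              # $5,150 collected at the door

attendance_growth, floor_growth = 0.39, 0.50
gap = floor_growth - attendance_growth      # 11-percentage-point gap
density_change = (1 + attendance_growth) / (1 + floor_growth) - 1

lost_exhibit_revenue = 104_200              # reported renewal shortfall
print(f"Attendance growth lagged floor growth by {gap:.0%}")
print(f"Floor density change: {density_change:+.0%}")
print(f"Fee revenue: ${fee_revenue:,} vs. lost exhibit revenue: ${lost_exhibit_revenue:,}")
print(f"Net effect of the $50 fee: ${fee_revenue - lost_exhibit_revenue:,}")
```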

This whole scenario could have been avoided by a simple price test on the exhibit pass at the start of the attendee marketing campaign. Event management would then have known the effect of the price increase on overall attendance, and could have made the pricing decision accordingly.

It never pays to set prices first and react later. Always be testing!

Test Results: Are They Reliable?

Testing is one of the building blocks of predictive marketing. Sites such as Marketing Experiments and Marketing Sherpa frequently report the results of marketing tests, with the implication being that you can apply the results to your own business. As we saw in my last post, however, you can’t infer best practices from somebody else’s test. You have to run tests with your own target audience to know what really works with them.

There is another huge problem with the reporting of test results on these sites. Let’s take a look at this test result, as reported on the Marketing Experiments site, regarding good vs. bad email copy:

[Image: email test results example]

What is wrong with this? There is no way to tell whether the results are statistically significant! If you are not familiar with the concept of statistical significance, it is essential that you understand it before you start testing.

Here’s a fact that tends to get lost in the shuffle when these popular sites review test results: if you ran the test again, you would almost certainly get different results. The difference above might be 70%, or 30%, or even -20%, instead of 49.5%. If you haven’t thought about this before, it may seem shocking, but it’s true.
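You can see this for yourself by simulating the same split test run several times. In this hypothetical sketch, the “true” conversion rates are fixed at 2.0% and 3.0% (a real 50% lift), yet the measured lift bounces around from run to run:

```python
# Simulating the same email test run repeatedly. The conversion rates
# are hypothetical: control truly converts at 2.0%, treatment at 3.0%.
import random

random.seed(1)
n = 2000                       # emails per arm in each simulated test
for run in range(5):
    a = sum(random.random() < 0.020 for _ in range(n))   # control conversions
    b = sum(random.random() < 0.030 for _ in range(n))   # treatment conversions
    lift = (b - a) / a if a else float("inf")
    print(f"Run {run + 1}: control={a}, treatment={b}, lift={lift:+.0%}")
```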

To say that the results of a test are statistically significant means you can be confident that the next time you run the test, you can expect similar results. Let’s take a look at one of the results I reviewed in my last post:

[Image: subject line test results]

At the bottom of the test results, you see a statement that “The difference in conversion rates is statistically significant at the 99% level.” What does this mean? It means that if you repeated the test, 99 times out of 100 the first subject line would once again beat the second.

Tests are conducted to measure, on a small portion of your target audience, how much a particular marketing variable – such as copy – affects response, before you roll it out on a large scale to your entire target audience. And before you do that, you have to be extremely confident that the change will, at the very least, improve conversions. That’s where statistical significance comes in. If you don’t know the level of statistical significance of your tests, you are playing with fire, and might get burned.
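For anyone curious what a significance calculator does under the hood, here is a minimal sketch of a two-proportion z-test – one standard way to compute the confidence level of a split test. The conversion counts below are illustrative, not taken from the tests shown above:

```python
# A minimal significance check for an A/B email test: a two-proportion
# z-test. Conversion counts are illustrative assumptions.
from math import sqrt, erf

def significance(conv_a, n_a, conv_b, n_b):
    """Return the two-sided confidence level that A and B truly differ."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # two-sided confidence level from the standard normal distribution
    return erf(abs(z) / sqrt(2))

# Example: 120 conversions out of 5,000 sends vs. 85 out of 5,000
level = significance(120, 5000, 85, 5000)
print(f"Confidence that the difference is real: {level:.1%}")   # ~98.6%
```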

If you would like a free calculator to help you test the statistical significance of your tests, email me at rhodgson@predictive-marketing.com.


Always Be Testing

“It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so.” – Mark Twain

In today’s rapidly evolving markets, you can never take anything for granted. The average lifetime of so-called “best practices” is shorter than ever.

I experienced this firsthand recently on a couple of email tests designed to drive registrations for some upcoming conferences. Both conferences were targeted at highly technical IT audiences.

My past experience had always indicated that the best subject lines carried offers and calls to action, especially when closing in on a limited-time pricing deadline. So the following test results were as expected:

[Image: subject line test results]

I had seen this result dozens of times before; the subject line that emphasized the dollar savings and created a sense of urgency had invariably emerged triumphant. Imagine my surprise, then, when I saw the results of the following test for a different conference directed at a similar audience:

[Image: subject line test 2 results]

Not only did the standard subject line offer no improvement over the alternative; it produced a result that was significantly worse. There is no doubt that, for this particular audience, the second subject line produced more conversions than the first.

The beauty of testing is that marketers don’t have to stumble around in the dark searching for the best way to communicate with customers. Test-and-learn strategies provide a way to find out directly, from prospects and customers, what they value and want most. There is no longer any excuse for a marketer to rely on hunches, anecdotes, and biased opinions to make marketing decisions. Even the most seemingly insignificant of decisions – the color of a registration button, for example – may have an effect on conversion rates that can be quantified.
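To make “quantifiable” concrete: before testing something like button color, you can estimate how large an audience the test needs before a given lift would even be detectable. Here is a rough sketch using the standard normal approximation; the baseline rate and lift are assumed numbers, not data from any test above:

```python
# Rough A/B test sample-size estimate: visitors needed per variant before
# a given relative lift would show up as statistically significant.
# Baseline rate and lift below are illustrative assumptions.
from math import ceil

def sample_size(p_base, lift, z_alpha=1.96, z_beta=0.84):
    """Visitors per variant for ~95% confidence and ~80% power."""
    p_test = p_base * (1 + lift)
    variance = p_base * (1 - p_base) + p_test * (1 - p_test)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p_base - p_test) ** 2)

# Detecting a 10% relative lift on a 2% baseline conversion rate:
print(sample_size(0.02, 0.10))   # ~80,600 visitors per variant
```

The takeaway from the arithmetic: small effects on small conversion rates demand large audiences, which is exactly why knowing the significance of a test matters before acting on it.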

However, when employing a test-and-learn approach to marketing, there is a trap to be avoided, illustrated by this example. Every business, and every customer set, is different. There is no one set of best practices that applies in every situation. Think of the body of knowledge you gain by testing as a set of “best guidelines” rather than best practices. And know that no testing program ever arrives at a final best answer. Your customers are always changing. Always be testing.
