Our society has become accustomed to real-time information. As technology becomes more prevalent, we grow increasingly impatient. In our business, that means that as soon as we deploy an email, we want to know how it performed. We don’t want to wait until the end of the campaign to assess its success. We want – and sometimes need – to understand performance in real time.

Let’s look at different campaigns from multiple clients and see how we can establish benchmarks useful for forecasting. This is a visualization of several recent campaigns for a number of our retail clients.

The chart shows cumulative open percentages over time. The Y-axis shows the cumulative percentage and the X-axis shows the hours since email deployment. In general, of the emails that are eventually opened, approximately 80% are opened within the first 24 hours and more than 96% within five days. Those trends are intuitive. But you can also see that campaigns don’t behave identically. And why should they? Campaigns are as different as the products or services they promote. Yet some emails achieve greatness in mere hours while others linger for days. Why are some emails “slower” to perform than others?

  • Was it sent at an inopportune time, such as 3:00 a.m.?
  • Was it sent during a holiday weekend?
  • Does the email only provide redundant content, such as an annoying reminder of an email already received?
  • Is the call-to-action vague?

Why are some emails “faster” than others?

  • Was it sent early on a Monday morning?
  • Was the offer some form of flash sale?
  • Are the subject line and the offer particularly enticing?
  • Is the audience a targeted subset of your subscriber base, one that is active and engaged?

Despite our continual demand for speed, remember that “slower” is not synonymous with “less effective.” Some emails contain so much content that consumers need more time to digest them. Some products are more expensive and require more research before the recipient makes a final decision. Avoid drawing general conclusions about a campaign’s success based on speed alone, and carefully consider all of the factors that contribute to each particular email’s success in your overall analysis.

The client-specific trends become clearer when we collapse individual campaigns, as shown here.

The retailer in blue has a much “quicker” overall open rate than the retailer in green. Again, that doesn’t mean the green retailer is less effective; it only means its customers open the emails at a slower pace.
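To see how a curve like this is built, here is a minimal sketch in Python (using pandas), under the assumption that you can export one row per open event with a client name, the deployment time, and the open timestamp; the column names and sample data below are purely illustrative.

```python
import pandas as pd

# Illustrative open-event data: client, campaign send time, and open time.
opens = pd.DataFrame({
    "client":    ["blue", "blue", "blue", "green", "green", "green"],
    "sent_at":   pd.to_datetime(["2023-11-01 08:00"] * 6),
    "opened_at": pd.to_datetime([
        "2023-11-01 09:10", "2023-11-01 14:30", "2023-11-02 07:45",
        "2023-11-01 12:00", "2023-11-03 18:20", "2023-11-05 21:05",
    ]),
})

# Whole hours elapsed between deployment and each open.
opens["hours_since_send"] = (
    (opens["opened_at"] - opens["sent_at"]).dt.total_seconds() // 3600
).astype(int)

# Cumulative share of each client's eventual opens reached by each hour.
opens_per_hour = opens.groupby(["client", "hours_since_send"]).size()
curve = opens_per_hour.groupby(level="client").cumsum().div(
    opens.groupby("client").size(), level="client"
)
print(curve)
```

Plotting each client’s curve from this output is what lets you compare a “fast” customer base against a “slow” one at a glance.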

So why does this matter? Because generalizations are just that – generalizations – and forecasts must be specific to each client and each situation. Forecasting total opens after only 24 hours by scaling up with a generic 80% benchmark will overstate the final metrics for the blue retailer, because that customer base historically opens its emails faster than the “average” consumer. And the generalization that “all email metrics are essentially final after 48 hours” would, in these clients’ cases, understate the true level of engagement.
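As a quick illustration of that overstatement, with purely hypothetical numbers: suppose a fast-opening client records 40,000 opens in the first 24 hours, and its own history shows roughly 90% of eventual opens arriving in that window.

```python
opens_24h = 40_000          # hypothetical opens recorded in the first 24 hours

generic_forecast  = opens_24h / 0.80   # generic "80% in 24 hours" benchmark -> 50,000
specific_forecast = opens_24h / 0.90   # this client's own benchmark -> ~44,400

# The generic benchmark projects roughly 5,600 opens that will never materialize.
print(round(generic_forecast), round(specific_forecast))
```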

When developing forecasting models, you must segment your campaigns and group only similar campaigns together. Compare your apples to other apples. Analyze each campaign in isolation, and consolidate for forecasting only those campaigns with similar performance. Subsets could include newsletters, friends-and-family campaigns, final reminders, and flash sales. You will also likely see seasonal differences. Even if you send emails multiple times each week, you will identify trends and be able to group campaigns into a finite number of subsets for predictive forecasting.
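A minimal sketch of that grouping, assuming a hypothetical table of historical campaigns tagged by segment along with the share of eventual opens reached at a few checkpoints, might look like this:

```python
import pandas as pd

# Hypothetical campaign history: one row per past campaign, tagged by segment,
# with the share of its eventual opens reached at 6, 24, and 120 hours.
history = pd.DataFrame({
    "segment":     ["newsletter", "newsletter", "flash_sale", "flash_sale", "final_reminder"],
    "pct_at_6h":   [0.42, 0.47, 0.68, 0.71, 0.55],
    "pct_at_24h":  [0.78, 0.81, 0.93, 0.95, 0.86],
    "pct_at_120h": [0.97, 0.96, 0.99, 0.99, 0.98],
})

# Average within each segment to get segment-specific benchmark curves
# instead of one generic curve applied to every campaign.
benchmarks = history.groupby("segment").mean(numeric_only=True)
print(benchmarks)
```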

Once you’ve determined which types of campaigns behave similarly, you can begin the forecasting process. For example, you may find that, for your Christmas preview campaign, after six hours you have 50% of the clicks and 27% of the total revenue. With those benchmarks in place, you can estimate the total number of engaged subscribers and the projected revenue. This is particularly important in the holiday season, when retailers’ marketing volume increases substantially and they demand faster turnaround. Using preliminary findings for projections allows you to be more nimble with your strategies.
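Here is a minimal sketch of that projection, assuming the six-hour benchmarks above (50% of clicks, 27% of revenue); the early results plugged in are illustrative only.

```python
# Benchmarks drawn from similar past campaigns: share of total clicks and
# revenue that historically arrives within the first 6 hours.
CLICK_SHARE_AT_6H   = 0.50
REVENUE_SHARE_AT_6H = 0.27

def project_totals(clicks_6h: float, revenue_6h: float) -> tuple[float, float]:
    """Scale 6-hour results up to projected campaign totals."""
    return clicks_6h / CLICK_SHARE_AT_6H, revenue_6h / REVENUE_SHARE_AT_6H

# Hypothetical early results: 12,500 clicks and $40,500 in revenue so far.
projected_clicks, projected_revenue = project_totals(12_500, 40_500.0)
print(f"Projected clicks:  {projected_clicks:,.0f}")    # 25,000
print(f"Projected revenue: ${projected_revenue:,.0f}")  # 150,000
```

The same division works for any metric once you know what share of it typically arrives by a given checkpoint.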

By establishing baselines for metrics such as open rates, click-through rates, conversion rates, and average order values, and then understanding the goal of each individual message, you can begin predicting the total value of each campaign while it is still running. The context those baselines bring is the only way to provide meaningful measurements to the stakeholders who want to know what’s happening right now.
