We know we need to test. Metrics abound, and while we analysts would be happy to play with numbers all day, context is crucial. What should define success?
Deliverability (# delivered / # sent): Unfortunately, “delivered” means simply “made it to the recipient,” not “made it to the inbox,” which means your message could be sitting in a junk mail or spam folder.
- INDICATES: Valid addresses where our message can be received. When testing new lists, compare deliverability rates to determine which are the cleanest sources of addresses.
- SO WHAT? Sending emails to addresses that never receive them is money down the drain. Plus, when deliverability sinks below a certain threshold, service providers can delay or block your sends. If that happens, many of the subscribers who want to hear from you can’t.
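As a quick illustration with hypothetical campaign counts (these numbers are invented for the example), the calculation is just the ratio above:

```python
# Hypothetical campaign counts, not from any real send.
sent = 50_000
delivered = 48_500  # accepted by recipients' mail servers; may still land in spam
deliverability = delivered / sent
print(f"Deliverability: {deliverability:.1%}")  # Deliverability: 97.0%
```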
Open Rate (# unique opens / # delivered): We don’t want to look at “total opens” because we want to identify the number of unique subscribers who opened our email. Total opens counts consumers who open the same message repeatedly. Open rate is a much better look at, well, who’s looking at your message.
- INDICATES: Subject line effectiveness. When testing subject lines, open rate is likely the only metric you should include in your evaluation.
- SO WHAT? A strong open rate is the first major step toward conversion. If the email doesn’t get opened, the recipient has no opportunity to convert. An open demonstrates both interest in your overall brand and intrigue with your current promotion.
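Using the same hypothetical campaign, the formula divides unique opens (each subscriber counted once) by delivered:

```python
# Hypothetical counts: 9,700 unique subscribers opened out of 48,500 delivered.
delivered = 48_500
unique_opens = 9_700  # each subscriber counted once, unlike "total opens"
open_rate = unique_opens / delivered
print(f"Open rate: {open_rate:.1%}")  # Open rate: 20.0%
```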
Click-Through Rate (# unique clicks / # delivered) or Click-to-Open Rate (# unique clicks / # unique opens):
- INDICATES: An attractive offer. In most situations where you’re testing creative options or campaign offers, Click-Through Rate (or Click-to-Open Rate) should be the success metric of choice.
- SO WHAT? The offer and the creative have to resonate with the target audience before recipients will consider the product.
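The two click metrics differ only in their denominators. A sketch with the same hypothetical counts:

```python
# Hypothetical counts for one campaign.
delivered = 48_500
unique_opens = 9_700
unique_clicks = 1_455
ctr = unique_clicks / delivered      # Click-Through Rate: clicks per delivered email
ctor = unique_clicks / unique_opens  # Click-to-Open Rate: clicks per unique open
print(f"CTR: {ctr:.1%}, CTOR: {ctor:.1%}")  # CTR: 3.0%, CTOR: 15.0%
```

CTOR is always the larger number, since opens are a subset of delivered; it isolates how well the content performed among people who actually saw it.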
Conversion Rate (# orders / # delivered):
- INDICATES: Right products, right time, right price, delivered to the right person. Conversion rates represent overall effectiveness, but they may not be the best testing success metric. When you are testing certain attributes, the natural differences in test groups may accumulate into a bias that masks success. Be sure to isolate the attribute you are testing and choose the metric that directly evaluates it.
- SO WHAT? All things come together with conversion rate. But remember that even when you do everything right, conversion can be a very small number. Further, comparing conversion rates between tests can be deceiving unless you have a very precisely defined test.
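Continuing the hypothetical example, note how small the resulting percentage typically is:

```python
# Hypothetical counts: 291 orders attributed to 48,500 delivered emails.
delivered = 48_500
orders = 291
conversion_rate = orders / delivered
print(f"Conversion rate: {conversion_rate:.2%}")  # Conversion rate: 0.60%
```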
Revenue per Email ($ revenue / # delivered):
- INDICATES: Overall gross sales per email. RPE is loosely tied to conversion rate, but it also represents overall effectiveness. If you are testing offers, RPE alone may show sales success, but net profitability is important too. Promotional emails with deep discounts can generate a lot of sales, yet not a lot of profit.
- SO WHAT? Retailers’ primary objectives are to sell more products more profitably. So when you are testing offers, consider the total and net sales to determine which is truly the best.
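With a hypothetical gross revenue figure for the same campaign, RPE works out to a per-email dollar value:

```python
# Hypothetical gross revenue attributed to the campaign.
delivered = 48_500
revenue = 21_825.00
rpe = revenue / delivered
print(f"Revenue per email: ${rpe:.2f}")  # Revenue per email: $0.45
```

Comparing RPE across offers is only half the story; as noted above, subtract the cost of the discount to see net profitability.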
You cannot test everything at once, so make a hypothesis. Define the single attribute that will vary between test groups. Determine the single metric of success. Isolate one metric at a time. Then test…and re-test. Determine statistical significance before you make permanent changes. And then, after some time has passed, test again.
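For rate metrics like open rate or CTR, statistical significance can be checked with a standard two-proportion z-test. A minimal sketch, using invented subject-line test counts:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(success_a, n_a, success_b, n_b):
    """Two-sided z-test for a difference in rates between two test groups."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical subject-line test: unique opens out of delivered, per group.
z, p = two_proportion_z_test(1_050, 5_000, 950, 5_000)
print(f"z = {z:.2f}, p = {p:.3f}")  # z = 2.50, p = 0.012
```

Here the 21% vs. 19% open-rate difference clears the conventional 0.05 threshold, so you could act on it; with smaller lists the same gap might not.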
The world is changing, and something you tested six months ago may perform very differently today. The upcoming holiday season is fertile ground for all kinds of testing. Test continually and correct your strategies during this retail season so that you keep improving.