Testing is an important aspect of marketing but is not particularly useful unless executed correctly. Going beyond our previous post “Holiday Rush: Five Helpful Holiday Testing Tips,” we will now identify important testing components that need to be implemented during the planning and analysis phases to ensure optimal results. Addressing these issues in the planning phase takes out some of the uncertainty in interpretation once you are analyzing your results, and ultimately encourages sound testing practices.


Hypothesis Testing: What’s Going To Happen?

The first thing to identify when creating a test is the question you want to answer. In formal hypothesis testing, one must first state the null hypothesis: a general declaration that the test scenario is not related to a change in the outcome measure. Examples of null hypotheses would be: the web page enhancements will not affect purchase behavior, responsive design templates will have no impact on email engagement, etc.

This hypothesis or statement is assumed to be true until evidence shows otherwise, which is where the test hypothesis comes in. Should there be a relationship between the test scenario and the outcome, what would it look like, and how would we identify it? When developing the test hypothesis, you must identify what result would cause you to reject the null hypothesis and conclude that the test had an effect on user behavior.
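To make this concrete, here is a minimal sketch of a formal test of a null hypothesis like "the web page enhancements will not affect purchase behavior," using a two-proportion z-test. The conversion counts and sample sizes are entirely hypothetical, and the 0.05 significance threshold is just a common convention; only Python's standard library is used.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Z statistic for the null hypothesis that two conversion rates are equal."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)           # pooled rate under the null
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

def normal_cdf(z):
    """Standard normal CDF via the error function (no external packages needed)."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Hypothetical results: control page vs. enhanced page
z = two_proportion_z(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
p_value = 2 * (1 - normal_cdf(abs(z)))  # two-tailed: "is there any difference?"
reject_null = p_value < 0.05            # True here, so we reject the null
```

With these made-up numbers the test condition converts noticeably better, the p-value falls below 0.05, and we would reject the null hypothesis that the enhancements had no effect.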

One-tailed Vs. Two-tailed Testing: Different Or Better?

When implementing a test, you not only need to identify what is being tested, but also what outcome you want to examine. In certain situations, simply identifying whether two scenarios are different is sufficient for your purposes. Can’t decide between a green banner or an orange banner? A two-tailed test would identify whether color makes a difference at all, and if so, how the two compare. In other instances, you may be implementing enhancements that take extra effort, manpower, or money, so the change would only be rolled out if it performs better than what has been done in the past; that is the question a one-tailed test answers. While the interpretations are similar in the end, this choice affects the analysis side and the statistics used to interpret the results.
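The practical difference shows up in the p-value. Below is a small illustration, assuming a hypothetical z statistic of 1.80 from a banner-color test: the same result clears a 0.05 threshold under a one-tailed test ("is the new banner better?") but not under a two-tailed test ("is it different at all?").

```python
import math

def normal_cdf(z):
    """Standard normal CDF, built from the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

z = 1.80  # illustrative z statistic; not from any real campaign

p_two_tailed = 2 * (1 - normal_cdf(abs(z)))  # difference in either direction
p_one_tailed = 1 - normal_cdf(z)             # improvement in one direction only

# One-tailed: ~0.036 (significant at 0.05); two-tailed: ~0.072 (not significant)
```

This is exactly why the choice must be made in the planning phase: picking the tail structure after seeing the results invites bias toward whichever version reaches significance.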

Identifying Top Performers: Not All KPIs Are Created Equal

Oftentimes marketers will run a test and easily identify the control versus the test, what they expect to happen, and how this will affect their marketing strategy. The missing component is what identifies the test as “better.” Selecting in advance which KPI is the most important indicator of performance reduces bias and encourages thoughtfulness about the purpose of the test. While in many situations all KPIs will perform similarly across the control/test conditions, there will be times when different KPIs tell different stories.

For example, in a subject line test, a common KPI for identifying the top performer is open rate. What happens if the test condition saw an extremely high open rate but significantly lower revenue generated? If you previously established open rate as the primary indicator, there should be no question which condition performed better. Outlining in the planning stages which KPI will identify the top performer removes bias when evaluating and interpreting the results.
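The principle can be captured in a few lines: declare the primary KPI before results exist, then let it alone pick the winner. All condition names and metric values below are made up for illustration.

```python
# Hypothetical subject line test results; every number here is illustrative.
results = {
    "control": {"open_rate": 0.18, "revenue_per_email": 0.42},
    "test":    {"open_rate": 0.27, "revenue_per_email": 0.31},
}

# Declared in the planning phase, before any data is collected.
PRIMARY_KPI = "open_rate"

# The winner is determined solely by the pre-declared primary KPI.
winner = max(results, key=lambda cond: results[cond][PRIMARY_KPI])
```

Here "test" wins on the pre-declared KPI even though "control" earned more revenue per email; had the primary KPI been chosen after seeing the results, either condition could be argued as the winner.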

Non-significant Vs. Significant Results: What Now?

Not Significant: Should your results show no significance, is all lost, and should you abandon all hope of making any meaningful changes? Probably not. Instead, this is a chance to reevaluate how the test was conducted and see whether any improvements can be made. Maybe next time, randomly sample across all lists instead of testing on a specific list, or test a new site design on a different portion of the site that has higher or more variable engagement values. Non-significance does not always equal failure; it should be treated as a valuable opportunity to learn from and to identify ways to optimize testing. That being said, the numbers don’t lie, so while it is disappointing when your expectations are not met, you should respect the outcome and acknowledge there was no statistical evidence of a meaningful effect.

Significant: Significant results! It worked! Success!… Now what? The point of running a test is to measure whether specific changes should be made to enhance and optimize your marketing strategies. In the end, you want to be able to implement new strategies, keeping in mind the level of generalizability of your results. Depending on the test situation, population sampled from, and other characteristics, not every test will translate to every situation. If you only tested a subject line with a single list in the US, it’s difficult to make a case for using this same strategy with a completely different audience in Australia. Obviously, this is something that should be considered on a case-by-case basis, but best practice is to generalize results to similar scenarios only.

Hopefully by implementing these helpful tips into your next testing plan, you will feel more confident in the execution and analysis of the tests that make the most sense for your business needs. By systematically identifying the most effective strategies, you and your team should be able to optimize implementation of these strategies and make this holiday season the most successful one yet!
