Are You Sabotaging Your Testing Plan?

Posted by Alexandra Braunstein

Most marketers agree testing is one of the most effective ways to optimize your email program and increase ROI. As outlined in Return Path’s recent eBook, All About A/B Testing, even a simple A/B test, such as testing subject lines or promotional offers, can provide valuable insights for future campaigns. However, in order to run a successful and impactful test, marketers need to avoid common pitfalls that can sabotage their results.

When running your test, always keep the following rules in mind:

  • Test one variable at a time. Limiting testing elements to one per campaign allows you to accurately measure the impact of each variable. For example, if you’re testing call-to-action language (such as “Shop Now” vs. “Click Here”), that should be the only difference between version A and version B of your test. If you change the call-to-action language and also alter its placement, it will be unclear which variable drove your results.
     
  • Send tests at the same time. As obvious as it may seem, this critically important detail is often forgotten, and it can skew results. Unless you are testing send time, deploy both versions of your test simultaneously; otherwise, send time becomes a second variable, and it will be hard to tell whether your test element or the timing drove the results.
     
  • Use a statistically significant sample size. One of the key factors in running a successful test is setting up your test audience correctly; otherwise, you’ll make important business decisions based on inaccurate results. Use Return Path’s sample size calculator to ensure your test audience is statistically significant. As a rule of thumb, aim to include 2,000-3,000 subscribers per test cell for a 95% confidence level and a 2% margin of error (see the sample size sketch after this list).
     
  • Be patient. Although you may be eager to incorporate test findings into your email program, tests need sufficient time to run in order to produce clear, accurate results. Wait 48 to 72 hours before determining a winner, and run the same test at least three times to confirm the results stay consistent before declaring a true winner (one way to check whether a difference is statistically meaningful is sketched after this list).
     
  • Track your results. Create a detailed spreadsheet outlining each test and its results. Track audience size, testing variables, subject line, deployment time, and all metrics used to determine success, such as opens, clicks, and conversions. Not only does this let you review past tests, it also makes it easy to share findings with your larger organization.
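
For readers who want to see where the sample size rule of thumb comes from, here is a minimal sketch (in Python, purely for illustration; the post itself doesn’t prescribe any tool) of the standard sample-size formula for estimating a rate such as an open or click rate. At a 95% confidence level and a 2% margin of error it yields 2,401 subscribers per cell, squarely inside the 2,000-3,000 range above. Return Path’s sample size calculator may use its own inputs, so treat this as a back-of-the-envelope check rather than a replacement.

```python
from math import ceil

def sample_size_per_cell(z: float = 1.96,
                         margin_of_error: float = 0.02,
                         expected_rate: float = 0.5) -> int:
    """Minimum subscribers per test cell to estimate a rate (open rate,
    click rate, etc.) within the given margin of error at the confidence
    level implied by z (1.96 ~ 95%).

    Standard formula: n = z^2 * p * (1 - p) / e^2, using p = 0.5 as the
    most conservative assumption when the true rate is unknown.
    """
    n = (z ** 2) * expected_rate * (1 - expected_rate) / (margin_of_error ** 2)
    return ceil(n)

# 95% confidence, 2% margin of error -> 2,401 subscribers per cell,
# which lands inside the 2,000-3,000 rule of thumb above.
print(sample_size_per_cell())  # 2401
```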
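
The post doesn’t specify how to decide that one version has actually beaten the other, so the following is just one common approach: a two-proportion z-test on a response metric such as clicks. The function name and the example numbers are illustrative assumptions, not Return Path’s methodology.

```python
from math import sqrt, erf

def two_proportion_z_test(successes_a: int, sends_a: int,
                          successes_b: int, sends_b: int) -> tuple[float, float]:
    """Compare response rates (opens, clicks, conversions) of versions A and B.

    Returns (z statistic, two-sided p-value). A p-value below 0.05 suggests
    the observed difference is unlikely to be random noise at the 95%
    confidence level.
    """
    rate_a = successes_a / sends_a
    rate_b = successes_b / sends_b
    pooled = (successes_a + successes_b) / (sends_a + sends_b)
    std_err = sqrt(pooled * (1 - pooled) * (1 / sends_a + 1 / sends_b))
    z = (rate_a - rate_b) / std_err
    # Two-sided p-value from the standard normal CDF, via math.erf.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Illustrative numbers only: 2,500 sends per cell, 540 vs. 610 clicks.
z, p = two_proportion_z_test(540, 2500, 610, 2500)
print(f"z = {z:.2f}, p = {p:.4f}")  # p below 0.05 points to a real difference
```

Running the same test at least three times, as recommended above, guards against a one-off result that happens to clear the significance bar by chance.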

Not only is testing a helpful way to optimize your email strategy, it’s also key to long-term program success. If testing isn’t already on your email to-do list, add it, and check out this post for 50 testing ideas to get started today!


About Alexandra Braunstein

Alexandra has been helping world-class brands grow and optimize their email marketing strategies and initiatives for over a decade. As an Email Strategist for Return Path, Alexandra uses her passion for analytical and creative thinking to help marketers refine their email programs, resulting in more emails delivered to the inbox, improved subscriber engagement, and increased ROI.

