The Scientific Method Applied to Your Email Program: A Simple How-To Guide
In the world of email marketing, it is best to be dynamic with your email program and be open to trying new things. Examples of changes may include cross-promoting across different business units to expand your subscriber file, incorporating symbols into your subject lines, trying out new email templates, or sending a re-engagement campaign to an old portion of your file.
If you are not looking into new ways to improve your program, you risk letting it become stale, which can lead to underperformance on deliverability and engagement metrics, lower conversion rates, and even damage to your brand image. If, on the other hand, you roll out significant changes all at once, you run the risk of abnormally high spam folder delivery, blocking at ISPs, or even blacklistings. To avoid either scenario, let your creative juices flow and try out your latest great idea – just be sure to test it in a careful, measured way.
Here are a few tips to consider when testing any change:
- Identify two small subsets of your subscriber file that are the same size and are also comparable qualitatively. Designate one to be the control group and the other to be the test group.
- Ensure you have a way to isolate and compare the test results of each group after the treatment period that you decide upon. One way to do this would be to isolate the test group to one IP, and the control group to another.
- Treat the test group with the change you are experimenting with for a period of time that will allow for measurable impact. If the groups that you are including in the project receive a weekly newsletter, it is probably best to test for at least 4 weeks. However, if the groups only receive a monthly newsletter, you may want to test for 3 or 4 months.
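If you want to set up the group split programmatically, the sketch below shows one way to do it: drawing two equal-size, non-overlapping random samples from a subscriber file. This is an illustrative example only – the function name, group size, and email addresses are all assumptions, not part of any particular email platform.

```python
import random

def split_control_test(subscribers, group_size, seed=42):
    """Draw two non-overlapping, equal-size random subsets from a
    subscriber list: one control group and one test group.
    (Hypothetical helper for illustration; sizes are assumptions.)"""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    sample = rng.sample(subscribers, group_size * 2)
    return sample[:group_size], sample[group_size:]

# Example: a 10,000-address file, 500 subscribers per group
subscribers = [f"user{i}@example.com" for i in range(10_000)]
control_group, test_group = split_control_test(subscribers, 500)
```

Sampling both groups randomly from the same file helps keep them qualitatively comparable, and fixing the random seed means you can reproduce the exact split later if you need to audit the test.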
Compare all metrics that you have available to you such as:
- Opens and clicks
- Inbox / Spam rates
- Unsubscribe rates
- Complaint rates
- Actual list size of both groups following treatment period
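A simple way to make that comparison concrete is to normalize each raw count by delivered volume so the two groups can be compared side by side. The sketch below assumes you can export per-group counts from your email platform; the field names and numbers here are made up for illustration.

```python
def engagement_rates(counts):
    """Turn raw per-group counts into rates per delivered email.
    (The dictionary fields are assumed, not any specific ESP's schema.)"""
    delivered = counts["delivered"]
    return {
        "open_rate": counts["opens"] / delivered,
        "click_rate": counts["clicks"] / delivered,
        "unsubscribe_rate": counts["unsubscribes"] / delivered,
        "complaint_rate": counts["complaints"] / delivered,
    }

# Hypothetical results after the treatment period
control = {"delivered": 500, "opens": 110, "clicks": 30,
           "unsubscribes": 2, "complaints": 1}
test = {"delivered": 500, "opens": 140, "clicks": 45,
        "unsubscribes": 3, "complaints": 1}

control_rates = engagement_rates(control)
test_rates = engagement_rates(test)
for metric in control_rates:
    print(f"{metric}: control {control_rates[metric]:.1%} "
          f"vs test {test_rates[metric]:.1%}")
```

Comparing rates rather than raw counts matters because list attrition during the treatment period (the last bullet above) can shrink one group more than the other.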
There is no reason to fear the consequences of making changes to your email program when you can test and measure the results on a small scale before rolling them out universally. Furthermore, encouraging results can help you earn the buy-in from those in your organization necessary to unleash the creativity that you know you have. So, make it a New Year's resolution to come up with one new idea for your email program, form a hypothesis, test it against a control group, and then compare results.
Do you need some inspiration for what ideas to try out? Check out the latest Return Path infographic that provides some crucial insights into the latest on mobile email. Some of the findings may make you re-think how well your emails are optimized for mobile. If you think adding symbols into your subject lines may get you better response rates, check out this article. To round out your new testing framework, check out another post describing five email testing pitfalls to avoid.