In-market testing can give you insights that are unavailable through other forms of research. But how do you ensure those insights are actionable and don't waste your resources and budget?
Create a Learning Agenda.
A Learning Agenda defines what to learn and how to measure both the test and its impact. Creating one ensures that every test delivers actionable insights that are scalable and transferable, and that the test has a large potential impact on the bottom line.
To illustrate the process, let's take a B2C client as an example. Consumer research indicated that, in their industry, Saturday delivery of consumer CRM emails was lifting engagement, which contradicted many other studies. The client decided to test the idea with their eNewsletter, since it had seen large and continual degradation in engagement.
Here are six simple steps to creating your own Learning Agenda:
- Develop a clear set of questions to test.
Don't be shy. Create as many as you think are relevant, and then prioritize them. Here are some examples of what to ask:
- Does Saturday deployment
- increase engagement metrics over previous weekday deployments?
- increase final conversion?
- increase the number of articles read (e.g., does the content-engagement index increase)?
- encourage inactive customers to engage?
- Do segments respond differently to day-of-the-week deployment?
Note that each question is unique. They all help determine how to set up the test and what data you need to measure.
- Create hypotheses to statistically measure success.
At this stage, also determine what action will be taken if the hypothesis is true.
Hypothesis 1: Saturday deployment increases open rates by 15%.
If TRUE, take these actions:
- Roll out with Saturday deployment for this campaign.
- Evaluate other email programs for testing day-of-the-week deployment.
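The article doesn't specify how statistical significance is measured. As one hedged sketch, a one-sided two-proportion z-test could compare Saturday open rates against a weekday baseline; all counts below are illustrative, not from the actual test:

```python
from statistics import NormalDist

def two_proportion_z_test(opens_a, sent_a, opens_b, sent_b):
    """One-sided z-test: is proportion B (test) greater than proportion A (control)?"""
    p_a = opens_a / sent_a
    p_b = opens_b / sent_b
    # Pooled proportion under the null hypothesis of no difference
    pooled = (opens_a + opens_b) / (sent_a + sent_b)
    se = (pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 1 - NormalDist().cdf(z)  # one-sided: B > A
    return z, p_value

# Illustrative numbers only:
# weekday baseline: 2,000 opens of 20,000 sent; Saturday: 2,300 of 20,000
z, p = two_proportion_z_test(2000, 20000, 2300, 20000)
print(f"z = {z:.2f}, p = {p:.4f}")  # reject the null at alpha = 0.05 if p < 0.05
```

Fixing the significance threshold before deployment keeps the "If TRUE" actions objective rather than judged after the fact.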
- Identify the success metrics mapped to the tactic's marketing objectives.
Next, it is important to identify all KPIs and the data needed to capture them. For this example, open rate, click-through rate and click-to-open rate are the success metrics, since engagement is the objective of the eNewsletter.
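For concreteness, those three success metrics can be computed directly from raw deployment counts. This is a minimal sketch with standard definitions; the field names and numbers are illustrative:

```python
def email_success_metrics(delivered, unique_opens, unique_clicks):
    """Standard email engagement metrics, returned as fractions."""
    return {
        "open_rate": unique_opens / delivered,
        "click_through_rate": unique_clicks / delivered,
        # Click-to-open isolates engagement among those who actually opened
        "click_to_open_rate": unique_clicks / unique_opens,
    }

# Illustrative counts only:
metrics = email_success_metrics(delivered=20000, unique_opens=2300, unique_clicks=460)
print(metrics)  # open rate 11.5%, CTR 2.3%, CTO 20% for these counts
```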
- Establish the diagnostic metrics.
These are often forgotten, but are necessary to evaluate why a test worked. They are mapped directly to the set of Learning Agenda questions. Diagnostic metrics include conversion rate, number of articles clicked/read, reactivation of inactive customers and response by different segments.
- Establish data collection rules.
A data map is created to ensure that all metrics can be measured. In this case, along with the standard data capture of engagement and conversion broken out by segment, response files from previous eNewsletter deployments must be available to answer some of the Learning Agenda questions.
- Determine the impact level of the test and its risk.
Most important, a simple business case is created to evaluate the possible impact of the test and the potential risks. This is a key element in a Learning Agenda. At KERN, we evaluate the possible benefits and business impact by leveraging previous campaign test data or insights.
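The article doesn't show the business-case math itself. As a hedged sketch, a simple expected-impact estimate could weight a projected revenue lift by a probability of success drawn from previous campaign tests; every figure and the function itself are illustrative assumptions, not KERN's actual model:

```python
def expected_test_impact(baseline_revenue, projected_lift, prob_success, test_cost):
    """Rough expected value of running a test (illustrative, not KERN's model)."""
    upside = baseline_revenue * projected_lift  # revenue gain if the test wins
    return prob_success * upside - test_cost

# Illustrative: $500k annual campaign revenue, 10% projected lift,
# 40% chance of success (from prior test data), $8k cost to run the test
ev = expected_test_impact(500_000, 0.10, 0.40, 8_000)
print(f"Expected impact: ${ev:,.0f}")  # a positive expected value favors testing
```

Even a back-of-the-envelope calculation like this makes the risk side explicit: if the expected value is negative, the test is deprioritized before any budget is spent.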
This robust test of Saturday deployment increased open rate by 11% and CTR by 24%. Additionally, for segment 1, the CTR jumped by 86%. For segment 2, it increased by 53%. Five additional hypotheses had positive results.
Following these six steps, the test was clearly defined and developed properly for deployment, eliminating costly mistakes. All of the Learning Agenda questions were answered, avoiding the need for retesting. The client took action immediately after the test results were presented, since all possible actions were included in the Learning Agenda. What more could you want from an in-market test?