Thomas Edison once said:
“If I find 10,000 ways something won't work, I haven't failed. I am not discouraged, because every wrong attempt discarded is another step forward.”
Experimentation, by its nature, includes taking steps in the wrong direction. But if we let the possibility of temporary regression keep us from trying new things, we will never improve. At Varolii, the Performance Management team is constantly working with our clients to optimize the results of their proactive customer outreach. In doing so, we develop and define Best Practices that can be built into our products. However, each client and each demographic is different, and Best Practices will only get you so far. By adopting a plan for experimentation, you can design and implement tests to improve performance and ROI.
What does it mean to experiment?
In the context of outbound communications, the goal is to contact and engage customers to provide service updates, time-sensitive information or, in some cases, to request payment on past-due accounts. The most effective strategy for any one of these objectives depends on a number of factors, but how can you know what is or isn’t working for your particular customer base?
Experiment & Analyze
The most common test scenario uses a “Before/After” methodology, where a single solution is modified. Data is then collected, analyzed and compared before and after the change to evaluate any impacts. The problem with this sort of linear approach is that it’s impossible to know whether differences in results are due entirely to the changes or to some outside factor, such as seasonality or shifting economic conditions. The only way to truly measure the success of an experiment is with a simultaneous A/B – or “Champion/Challenger” – environment.
With an A/B platform, two identical campaigns are built, and one of the campaigns is modified in a controlled way in order to capture data and measure the impact of each change. The change might involve testing a different channel (text vs. voice), orchestrating channels, trying a different tone, or reaching out at a different time of day or day of week. Input records are split so that a certain percentage is sent to the first solution and the rest to the second. The performance of the two solutions can then be analyzed, and performance differences can be definitively attributed to the changes.
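The record split described above can be sketched in a few lines of code. This is a hypothetical illustration, not how any particular platform implements it; the function name and the 10% challenger share are assumptions for the example. Hashing the customer ID, rather than drawing random numbers, keeps each customer in the same arm across repeated file loads, so a customer never bounces between the champion and challenger experience mid-test.

```python
import hashlib

def assign_arm(customer_id: str, challenger_pct: float = 0.10) -> str:
    """Deterministically assign a customer record to the champion or
    challenger campaign based on a hash of the customer ID."""
    digest = hashlib.sha256(customer_id.encode()).hexdigest()
    bucket = (int(digest, 16) % 10_000) / 10_000  # uniform in [0, 1)
    return "challenger" if bucket < challenger_pct else "champion"

# Split a nightly input file: roughly 10% of records go to the test campaign.
records = [f"cust-{n:05d}" for n in range(10_000)]
challenger_count = sum(1 for r in records if assign_arm(r) == "challenger")
print(challenger_count)
```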
What are the risks?
To be honest, there are risks. If you’re dealing with hundreds of thousands of customers and your solution doesn’t support an A/B environment, then you risk making a change that could negatively affect outcomes. Your only options in this case are to make additional adjustments and hope for improvement, or roll back the change entirely.
However, if you're able to leverage A/B campaigns, the risks can be greatly minimized. Within a controlled environment, where a percentage of volume is allocated to the test campaign, an experiment that doesn’t work as hoped can simply have its volume “dialed” back until it can be modified and tested again. The opposite is also true: if the test is successful, additional volume can be allocated in order to accelerate the positive results.
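The dial-back mechanic can be sketched as a simple allocation rule. Everything here is an assumption for illustration: the function name, the 5% step size, and the floor/ceiling bounds are not drawn from any real product. The floor keeps the test alive so it can keep collecting data after a tweak; the ceiling keeps an early winner from taking over all volume before the results are trusted.

```python
def dial_allocation(current_pct: float,
                    champion_rate: float,
                    challenger_rate: float,
                    step: float = 0.05,
                    floor: float = 0.05,
                    ceiling: float = 0.50) -> float:
    """Nudge the share of volume sent to the challenger campaign:
    up when it outperforms the champion, down when it underperforms.
    Bounds keep the test from vanishing or taking over entirely."""
    if challenger_rate > champion_rate:
        return min(current_pct + step, ceiling)
    return max(current_pct - step, floor)
```

In practice the rates fed into a rule like this would be smoothed over enough volume that one noisy day doesn't whipsaw the allocation.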
Perhaps the greatest risk to long-term efficacy, though, is choosing not to experiment at all.
What are the rewards?
Let’s use an example from a collections solution. The application was unchanged for nearly two years prior to experimenting with a few modifications. The first change improved automated payment rates by nearly 54%. The second experiment increased payments an additional 20%, nearly doubling their original rate. What does this mean in actual dollars?
We know that payments average roughly $200 for this solution. The first change brought in an additional $2.5M above the previous rates in its first 9 months; the second set of changes has brought in an additional $3.5M over the original rates thus far.
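The figures above can be sanity-checked with a little arithmetic, using only the numbers already stated: a 54% lift compounded with a further 20% lift is roughly 1.85x the original rate (hence “nearly doubling”), and at $200 per payment, $2.5M works out to about 12,500 additional payments.

```python
avg_payment = 200          # average payment for this solution, in dollars
first_lift_dollars = 2_500_000    # additional revenue, first change, first 9 months
second_lift_dollars = 3_500_000   # additional revenue, second change, to date

extra_payments = first_lift_dollars / avg_payment  # additional payments captured
combined_rate = 1.54 * 1.20                        # 54% lift compounded with 20%
total_lift = first_lift_dollars + second_lift_dollars

print(extra_payments, round(combined_rate, 2), total_lift)
```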
If this client had been unwilling to take a risk and make changes, they would still have a decent Return on Investment (ROI) using the automated solution, but they would have left more than six million dollars on the table these last 18 months, or would have at least spent additional time, money and resources to collect it.
Experimentation does not come without risk, but ask yourself: can you afford not to do it?
This is part one of a two part series exploring the work of Varolii’s Performance Management team and the results they deliver for our clients.