The primary problem retailers have with testing is that running a test always involves revenue risk. Typically, this inevitable loss is acceptable as long as the test produces meaningful insights that lead to greater profits. However, because retailers naturally shy away from anything that can directly cause revenue loss, they often forgo testing altogether, which leads to opportunity cost.
Like the revenue risk of testing itself, opportunity cost is ultimately lost revenue, and that's why it's important for retailers to test for optimization as much as they possibly can.
To understand what the opportunity cost of not testing for onsite optimizations entails, it’s important to first break down what website testing is at its core.
Onsite testing is the process of rendering two or more variations of a site element or experience, in order to discover which version performs better with users, based on a selected objective. This process is important because retailers don’t always know what the highest performing variation of every element on their site is, so they must test different variations to discover just that.
When a test is conclusive, it shows a clear winning variation, which allows the retailer to optimize by ensuring that variation is the one customers experience from that point forward. The increase in performance from this optimization often makes it easier for retailers to forgive the revenue they may have lost by showing weaker variations during the test. Even so, it's never easy for a retailer to accept revenue loss, no matter the pay-off.
Decreasing the Cost of Doing Business with Continuous Optimization
Let’s take an example of a standard split test with three different variations of a site element.
Typically, a retailer sets the test up to allocate the same amount of traffic to each of the variations over a set period of time. In other words, if they need a sample size of 300,000 visitors in order to achieve statistical significance, then the traffic allocation breakdown would be to deliver the three variations to 100,000 visitors each.
If the test then concludes that the first variation (A) yields $2.54 in revenue per visitor (RPV), while the second (B) yields $2.81 and the third (C) yields $2.66, then the retailer can deduce that Variation B is the top performer. The downside is that they also have to accept that by delivering Variations A and C to 200,000 visitors who could have seen Variation B instead, they lost that difference in revenue.
On the surface, this may seem insignificant, but when it’s all added up the loss amounts to:
- Variation A: 100,000 visitors with a $2.54 RPV = $254,000
- Variation B: 100,000 visitors with a $2.81 RPV = $281,000
- Variation C: 100,000 visitors with a $2.66 RPV = $266,000
If all traffic had been allocated to Variation B from the start, sales would have been $843,000, the theoretical maximum potential. Because some traffic went to the weaker variations, sales in this example amounted to only $801,000. That comes out to a $42,000 loss of revenue, which is the cost of running the test.
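The arithmetic above can be reproduced in a few lines. This is just a sketch using the hypothetical RPV figures from the example, not any particular testing tool's logic:

```python
# Hypothetical figures from the example: revenue per visitor (RPV)
# for each variation, with 300,000 visitors split equally.
rpv = {"A": 2.54, "B": 2.81, "C": 2.66}
visitors_per_variation = 100_000

# Actual revenue under an equal split across the three variations.
revenue = {v: rpv[v] * visitors_per_variation for v in rpv}
actual_total = sum(revenue.values())

# Theoretical maximum: every visitor sees the best variation (B).
max_potential = max(rpv.values()) * visitors_per_variation * len(rpv)

cost_of_test = max_potential - actual_total
print(f"Actual revenue:    ${actual_total:,.0f}")   # $801,000
print(f"Maximum potential: ${max_potential:,.0f}")  # $843,000
print(f"Cost of the test:  ${cost_of_test:,.0f}")   # $42,000
```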
The need to minimize this loss from the testing process is obvious. Hence the invention of Continuous Optimization, which uses machine learning to dynamically allocate more traffic to the winning variation as soon as a winner starts to emerge.
Let’s see how this changes the loss of revenue attributed to the test:
- Variation A: 69,000 visitors (23% of traffic) at a $2.54 RPV = $175,260
- Variation B: 156,000 visitors (52% of traffic) at a $2.81 RPV = $438,360
- Variation C: 75,000 visitors (25% of traffic) at a $2.66 RPV = $199,500
Dynamically allocating traffic to the winning variation during the test clearly reduces the revenue lost to running it. In fact, the merchant in this example would save over $12,000 with Continuous Optimization:
- Testing with equal allocation: $801,000
- Testing with Continuous Optimization: $813,120
- Theoretical maximum potential: $843,000
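The comparison can be verified with the same hypothetical figures. The visitor counts under dynamic allocation are taken from the example above:

```python
# RPV figures from the example, and visitor counts under each scheme.
rpv = {"A": 2.54, "B": 2.81, "C": 2.66}
equal = {"A": 100_000, "B": 100_000, "C": 100_000}
adaptive = {"A": 69_000, "B": 156_000, "C": 75_000}

def total_revenue(allocation):
    """Sum revenue across variations: visitors x RPV."""
    return sum(allocation[v] * rpv[v] for v in rpv)

equal_total = total_revenue(equal)        # $801,000
adaptive_total = total_revenue(adaptive)  # $813,120

print(f"Savings from dynamic allocation: ${adaptive_total - equal_total:,.0f}")
# Savings from dynamic allocation: $12,120
```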
Minimizing Human Error in Test Results with Continuous Optimization
Unfortunately, test results sometimes end up inconclusive, and retailers feel like they lost potential revenue for no good reason. Inconclusive results can occur for a myriad of reasons, but one of the most common is the test not running long enough to reach the confidence level needed to declare a winner. Retailers often end a test as soon as they believe it has reached a conclusion, but this layer of human judgment introduces error. That is another reason why testing solutions should include a Continuous Optimization feature.
With Continuous Optimization, not only is testing's negative effect on revenue minimized, but the need for human intervention to end a test is eliminated. This leads to fewer tests ending before they can reach conclusive results, and fewer still resulting in false conclusions.
A common way a test reaches a false conclusion: one variation appears to be winning early on, and the person running the test ends it to minimize revenue risk, not realizing that the current trend is only temporary. To avoid this, retailers should run tests for at least two to four weeks, or even longer.
To avoid these hasty conclusions, Continuous Optimization uses a short learning period at the beginning of a test to give each variation the opportunity to shine. Because the feature optimizes toward the chosen goal independently of volume, it can also change which variation it treats as the winner in real time.
For instance, if initially one variation is performing well but starts to underperform, regardless of the volume of goals met, Continuous Optimization will adjust itself to allocate more traffic to the new frontrunner. While volume and past learnings matter, the mechanism is based on the probability of future scenarios.
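The article doesn't specify which algorithm drives this kind of dynamic allocation, but one common approach in the industry is Thompson sampling, a multi-armed bandit technique. The sketch below is a minimal, assumed illustration of the mechanism described above (a learning period, then probability-based reallocation), with made-up conversion rates, not any vendor's actual implementation:

```python
import random

random.seed(7)

# Hypothetical conversion rates for three variations (unknown to the algorithm).
true_rates = {"A": 0.030, "B": 0.035, "C": 0.032}

# Beta(1, 1) priors: observed successes (alpha) and failures (beta) per variation.
alpha = {v: 1 for v in true_rates}
beta = {v: 1 for v in true_rates}
visits = {v: 0 for v in true_rates}

LEARNING_PERIOD = 300  # visits per variation before adaptive allocation begins

for i in range(30_000):
    if i < LEARNING_PERIOD * len(true_rates):
        # Learning period: round-robin so every variation gets a fair start.
        chosen = list(true_rates)[i % len(true_rates)]
    else:
        # Thompson sampling: draw from each posterior, serve the best draw.
        # A variation that starts underperforming loses traffic automatically.
        chosen = max(true_rates, key=lambda v: random.betavariate(alpha[v], beta[v]))
    visits[chosen] += 1
    if random.random() < true_rates[chosen]:
        alpha[chosen] += 1
    else:
        beta[chosen] += 1

print(visits)  # the current frontrunner accumulates most of the traffic
```

Because allocation is driven by posterior probabilities rather than raw goal counts, the frontrunner can change mid-test, which matches the behavior described above.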
Decreasing the cost of running tests while also minimizing room for human error in test results is something everyone can agree is highly beneficial to retailers everywhere.
Continuous Optimization should be used by retailers to explore the more adventurous ideas they may want to test. For example, if a merchant has two similar variations of their product detail page recommendations and a third very different variation, with no historical data to back the inclusion of it as a viable option, then this feature gives them the opportunity to include the more adventurous variation. After all, profound growth doesn’t come from maintaining the status quo.
To get started with our latest offering, get in touch with our team (via demo request or the chat box in the bottom right corner).