Why A/B testing is truly risk-free, even when every lost sale matters
People usually have plenty of unfounded excuses for not doing A/B testing. A major excuse right at the top of the list is “OMG, WE DON’T WANT ANY LOST SALES”. Its toned-down interpretation is that if test variations are not good enough, A/B testing can actually hurt a website’s conversion rate. This reasoning is a major deterrent for eCommerce (and other kinds of) websites where every sale matters. So, if a variation isn’t performing as well as the control (or default), sales are being lost during the A/B test. On the face of it, the argument sounds really convincing.
However, the reasoning is quite weak, and that is because:
a) Your benchmark should be overall site conversion rate, not the conversion rate of control
Poorly performing variations are the price you pay for doing A/B testing. That said, even if some variations aren’t doing well, other variations may be doing much better. Even if multiple variations perform badly but your overall conversion rate is good (perhaps due to one or two extremely well-performing variations), you should be in your comfort zone. In fact, the fear of poorly performing variations resulting in lost sales is completely irrelevant as long as the overall conversion rate (accounting for all variations) is higher than your existing (control’s) conversion rate. So, before you start to worry, take a peek at the right metrics.
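To make the point concrete, here is a quick sketch with made-up numbers: two variations underperform the control, yet the overall conversion rate across the whole test still beats the control’s rate because one variation does very well.

```python
# Hypothetical visitor and sale counts for a test with a control
# and three variations (numbers invented for illustration).
visitors = {"control": 1000, "var_a": 1000, "var_b": 1000, "var_c": 1000}
sales = {"control": 30, "var_a": 24, "var_b": 27, "var_c": 45}

control_rate = sales["control"] / visitors["control"]        # 3.00%
overall_rate = sum(sales.values()) / sum(visitors.values())  # 3.15%

print(f"control: {control_rate:.2%}, overall: {overall_rate:.2%}")
# var_a and var_b convert worse than control, but the overall rate
# is still higher - so the test as a whole is not "losing sales".
```

The metric that matters is `overall_rate` versus `control_rate`, not each variation taken in isolation.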
b) A secret ninja trick: you can always disable poorly performing variations
Most modern A/B testing tools (including Visual Website Optimizer) provide a one-click option to disable variations that you don’t want to keep in the test. If you think a particular variation is not performing well, simply disable it. (Remember: don’t disable a variation at the slightest hint of poor performance; always wait for statistical significance.)
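As a rough illustration of what “waiting for statistical significance” means, here is a sketch of a one-sided two-proportion z-test. The counts are hypothetical, and this is just one common way to test significance, not necessarily the exact method your A/B testing tool uses:

```python
import math

def z_test_worse(conv_c, n_c, conv_v, n_v):
    """One-sided two-proportion z-test: is the variation
    significantly WORSE than the control?
    Returns (z score, one-sided p-value)."""
    p_c, p_v = conv_c / n_c, conv_v / n_v
    p_pool = (conv_c + conv_v) / (n_c + n_v)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_c + 1 / n_v))
    z = (p_v - p_c) / se
    # P(Z <= z) under the standard normal, via the error function.
    p_value = 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return z, p_value

# Hypothetical counts: the variation looks worse, but is it significant?
z, p = z_test_worse(conv_c=120, n_c=2000, conv_v=90, n_v=2000)
if p < 0.05:
    print("significantly worse - safe to disable")
else:
    print("not significant yet - keep the variation running")
```

With these numbers the difference is significant at the 5% level, so disabling the variation would be justified; with smaller samples the same gap often is not, which is exactly why you should not pull the trigger too early.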
c) Still not convinced? You can always let Visual Website Optimizer monitor the test for you
Here comes the magic part. Visual Website Optimizer has recently introduced a feature called “Risk-Free Testing” whereby (if you choose) your test is monitored in the background and VWO automatically disables poorly performing variations without any human intervention. That means your test is guaranteed to perform at least as well as your website’s existing conversion rate, because all non-performing variations get disabled as soon as they are detected.
d) Okay, so you want the maximum conversion rate possible? You can even implement the winning variation on your site automatically
Another part of “Risk-Free Testing” is an option to automatically allocate 100% of traffic to the winning variation once a winner has been found (during background monitoring). This dramatically reduces the amount of attention you need to devote to the test, as it ensures that the winning variation will be implemented on your website automatically. No IT department is involved in post-test implementation, and your developers can go on a much-needed vacation.
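Conceptually, automatic winner allocation is just a change in how visitors are routed. Here is a minimal sketch of the idea (an illustration only, not VWO’s actual implementation; the names are made up):

```python
import random

# While the test runs, traffic is split evenly across variations.
# Once the monitoring logic declares a winner, every visitor is
# routed to that winning variation automatically.
variations = ["control", "var_a", "var_b"]
winner = None  # set by the monitoring logic once significance is reached

def assign_variation():
    if winner is not None:
        return winner                 # 100% of traffic to the winner
    return random.choice(variations)  # even split during the test

# During the test, visitors land on any of the three variations:
print(assign_variation())

# After the monitor declares a winner, everyone sees it:
winner = "var_a"
print(assign_variation())  # always "var_a" from now on
```

The point of the sketch is that “implementing the winner” requires no code change to the page itself; the traffic router simply stops splitting.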
e) In a nutshell, your only risk is the time invested. But, hey, look at the upside.
As argued above, “Risk-Free Testing” removes all risk involved in doing an A/B test. Given zero risk, what is the worst that can happen if you do an A/B test? Your time invested gets wasted? Yes, the 20 minutes spent thinking up a new headline or the two days spent redoing the sales page can indeed get wasted. However, look at the potential upside: a 20% increase in sales, a 90% increase in conversions, and more.
By the way, a “no-result” isn’t worthless – it tells you that your existing page is doing really well and you should avoid tinkering with it. Or it says you are testing the wrong sections, or your variations are poor. That is actually a lot of information and should give you interesting insights for your next A/B test.
To sum up this rather long post, here is a golden nugget for you: what cannot come down can only go up. (And, yes, we are talking about conversion rate here.)