What you really need to know about mathematics of A/B split testing

Posted in A/B Split Testing on

Recently, I published an A/B split testing case study in which an eCommerce store reduced its bounce rate by 20%. Some readers were concerned about the statistical significance of the results; their main worry was that 125-150 visitors per variation is not enough to produce reliable results. This concern is a typical by-product of a superficial knowledge of the statistics that powers A/B (and multivariate) testing. I'm writing this post as an essential primer on the mathematics of testing, so that you never jump to conclusions about the reliability of test results simply on the basis of visitor count. What exactly goes on behind A/B split testing? Imagine your website as a black box containing balls of two colors (red and green) in unequal proportions. Every time a visitor arrives on your website, he takes out a ball from that box: if it is green, he makes…
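The ball-drawing analogy is just a Bernoulli trial: each visitor "draws a green ball" (converts) with some fixed but unknown probability. A minimal sketch of that model in Python, where the `simulate_visitors` helper and the 10% conversion rate are illustrative assumptions, not figures from the case study:

```python
import random

def simulate_visitors(n_visitors, conversion_rate, seed=42):
    """Black-box model: each visitor independently draws a green ball
    (converts) with probability `conversion_rate`. Returns the number
    of conversions observed."""
    rng = random.Random(seed)  # seeded so runs are reproducible
    return sum(1 for _ in range(n_visitors) if rng.random() < conversion_rate)

# A small sample gives a noisy estimate of the true rate;
# a large one converges toward it.
small = simulate_visitors(150, 0.10)
large = simulate_visitors(100_000, 0.10)
print(small / 150, large / 100_000)
```

Running this a few times with different seeds shows why small samples alone can't settle the reliability question: the 150-visitor estimate bounces around the true 10% far more than the 100,000-visitor one.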

How reliable are your split test results?

Posted in A/B Split Testing on
Visual Website Optimizer Reports

With split testing, there is always a fear at the back of your mind that the results you observe are not real. This fear becomes especially prominent when you see an underdog variation beating your favorite variation by a huge margin. You may start telling yourself that the results are due to chance, or that you have too little data. So, how do you really make sure that the test results are reliable and can be trusted? Reliability here means that running the test for longer isn't going to change the results; whatever results you have now are real (and not due to chance). So, how do you determine the reliability of your A/B test? Hint: you don't. You let your tool do the work for you. Visual Website Optimizer employs a statistical technique in which your conversion events are treated as binomial variables. Above a certain sample size (10-15 visitors), binomial variables…
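To make the "is this due to chance?" question concrete, here is a sketch of one common way to compare two conversion rates, a two-proportion z-test. The function name and the example counts are my own illustration; VWO's actual computation may differ:

```python
import math

def z_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: returns the z statistic and the
    two-tailed p-value, the probability of seeing a difference
    this large if A and B actually convert at the same rate."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis (no real difference)
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-tailed p-value via the standard normal CDF (math.erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical test: 30/300 conversions on A vs 48/300 on B
z, p = z_test(30, 300, 48, 300)
print(z, p)  # p below 0.05 suggests the difference is unlikely to be chance
```

The same counts with a much smaller sample (say 15/150 vs 24/150) give a far larger p-value, which is exactly why raw visitor numbers alone don't tell you whether a result is trustworthy.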