
A/B Split Test Significance Calculator

Built with ❤️ for testing, optimization, UX, CRO, and design teams.

Control
  Number of Visitors
  Number of Conversions

Variation
  Number of Visitors
  Number of Conversions

P-Value: x.xx

Significant? Yes

The p-value is x.xx; hence, your results are statistically significant!

What do you think this means?

Awesome, you understand what a p-value means! Unfortunately, most people are unable to interpret p-values correctly, which is why we built VWO SmartStats, a Bayesian statistical engine that dispenses with the need for a p-value altogether.

Unfortunately, this isn't what the p-value actually means. Don't worry: most people are unable to interpret p-values correctly. That is why we built VWO SmartStats, a Bayesian statistical engine that dispenses with the need for a p-value altogether.

Variations  | Conversion Rate | Improvement | Probability to be best | Absolute potential loss | Conversions/Visitors
C Control   | Baseline        |             |                        |                         |
V Variation | -               |             |                        |                         |

Uncertainty Overlap

Variations  | Conversion Rate | Improvement | Significance Value | Conversions/Visitors
C Control   | Baseline        | -           |                    |
V Variation | -               |             |                    |

P-Value (range: 0 to 1): 0.334

Significance: No
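For reference, the classical calculation behind a p-value like the one above is typically a two-proportion z-test. A minimal sketch in Python, assuming a pooled two-tailed test; the visitor and conversion counts are illustrative, and VWO's exact computation may differ:

```python
import math

def ab_test_p_value(visitors_a, conversions_a, visitors_b, conversions_b):
    """Two-tailed p-value for a pooled two-proportion z-test."""
    rate_a = conversions_a / visitors_a
    rate_b = conversions_b / visitors_b
    # Pooled conversion rate under the null hypothesis (no difference)
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (rate_b - rate_a) / se
    # Standard normal CDF via the error function, folded to two tails
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Illustrative counts: control 80/1000 vs. variation 100/1000
p = ab_test_p_value(1000, 80, 1000, 100)
```

For these counts the p-value comes out above the conventional 0.05 threshold, so the result would not be called significant.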

Why do we use Bayesian statistics?

Intuitive Test Reports

We realized that our non-statistical users frequently (and wrongly) interpreted the frequentist p-value as a Bayesian posterior probability (the probability that the variation is better than the control). So we built the industry's first Bayesian statistical engine, which gives you an easily understandable result. An intuitive result ensures that you don't make a mistake while A/B testing revenue or other critical KPIs.


No Sample Sizing Required

VWO SmartStats relies on Bayesian inference, which, unlike a frequentist approach, doesn't require a minimum sample size. This lets you run A/B tests on parts of your website or apps that don't get a lot of traffic. However, more traffic on your tests allows VWO to estimate your conversion rates with greater certainty, making you more confident in your test results.
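In this kind of Bayesian approach, each conversion rate gets a Beta posterior that can be compared by simulation at any sample size. A sketch assuming uniform Beta(1, 1) priors and illustrative counts (VWO's actual engine is more sophisticated):

```python
import random

def prob_variation_beats_control(visitors_c, conv_c, visitors_v, conv_v,
                                 samples=100_000, seed=0):
    """Monte Carlo estimate of P(variation rate > control rate) under
    independent Beta(1 + conversions, 1 + failures) posteriors."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(samples):
        rate_c = rng.betavariate(1 + conv_c, 1 + visitors_c - conv_c)
        rate_v = rng.betavariate(1 + conv_v, 1 + visitors_v - conv_v)
        if rate_v > rate_c:
            wins += 1
    return wins / samples

# Illustrative counts: control 80/1000 vs. variation 100/1000
p_best = prob_variation_beats_control(1000, 80, 1000, 100)
```

Note how the same code works with 50 visitors or 50,000; more data simply narrows the posteriors and pushes the probability toward 0 or 1.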


Actionable Results, Faster

VWO SmartStats was engineered with one key metric in mind: speed. We traded off a small amount of accuracy for speed, just enough to get results quicker without impacting your bottom line. This frees up your time and enables you to test more. And on the off chance that you want to be absolutely sure, we calculate the maximum potential loss you'd be taking on, so you can decide whether that loss matches your risk appetite.
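One common way to compute a potential-loss figure like this is the expected loss from Monte Carlo draws of the two posteriors. A hedged sketch, again assuming Beta(1, 1) priors and illustrative counts; VWO's exact definition of "absolute potential loss" may differ:

```python
import random

def expected_loss_choosing_variation(visitors_c, conv_c, visitors_v, conv_v,
                                     samples=100_000, seed=0):
    """Monte Carlo estimate of the expected conversion-rate loss from
    choosing the variation in the cases where the control is better."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        rate_c = rng.betavariate(1 + conv_c, 1 + visitors_c - conv_c)
        rate_v = rng.betavariate(1 + conv_v, 1 + visitors_v - conv_v)
        # Only the draws where the control wins contribute to the loss
        total += max(rate_c - rate_v, 0.0)
    return total / samples

# Illustrative counts: control 80/1000 vs. variation 100/1000
loss = expected_loss_choosing_variation(1000, 80, 1000, 100)
```

A typical stopping rule is to ship the variation once this expected loss falls below a risk threshold you are comfortable with.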


Frequently Asked Questions

What is the null hypothesis?

The null hypothesis states that there is no difference between the control and the variation. This essentially means that the conversion rate of the variation will be similar to the conversion rate of the control.

What is a p-value?

The p-value is defined as the probability of getting results at least as extreme as the ones you observed, given that the null hypothesis is correct, where the null hypothesis in A/B testing is that the variation and the control are the same.

What is statistical significance?

Statistical significance quantifies whether a result is likely due to chance or to some factor of interest. When a finding is significant, it essentially means you can be confident that the difference is real, not that you simply got lucky (or unlucky) in choosing the sample.

What is statistical power?

Statistical power is the probability of finding an effect when the effect is real. A statistical power of 80% means that out of 100 tests where the variations truly differ, 20 tests will wrongly conclude that the variations are the same and that no effect exists.
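For a frequentist test, a power target translates into a required sample size. A sketch of the standard normal-approximation formula for a two-proportion test; the baseline rate and lift below are illustrative:

```python
import math

def visitors_per_group(base_rate, relative_lift):
    """Approximate visitors needed per group for a two-sided
    two-proportion z-test at alpha = 0.05 with 80% power
    (normal approximation)."""
    var_rate = base_rate * (1 + relative_lift)
    z_alpha = 1.959964  # Phi^-1(0.975), i.e. two-sided alpha = 0.05
    z_beta = 0.841621   # Phi^-1(0.80), i.e. power = 80%
    mean_rate = (base_rate + var_rate) / 2
    numerator = (z_alpha * math.sqrt(2 * mean_rate * (1 - mean_rate))
                 + z_beta * math.sqrt(base_rate * (1 - base_rate)
                                      + var_rate * (1 - var_rate))) ** 2
    return math.ceil(numerator / (base_rate - var_rate) ** 2)

# Illustrative: 10% baseline conversion, detect a 20% relative lift
n = visitors_per_group(0.10, 0.20)
```

Smaller lifts or lower baseline rates drive this number up quickly, which is why low-traffic pages are hard to test under a fixed-sample frequentist design.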

This is the median conversion rate you can expect from the variation. The 'best case' and 'worst case' conversion rates represent the 99% credible interval where the conversion rate is likely to be contained.
This is the median improvement you can expect over the baseline if you implement the variation. The 'best case' and 'worst case' values represent the 99% credible interval where improvement is likely to be contained.
The probability that a variation performs better than all other variations, including the control.
The ratio of the number of conversions to the total number of visitors.
In the area where there is an overlap among variations, we are uncertain about which variation is performing better. If your best-performing variation has a lot of uncertainty overlap, we strongly recommend running the test for a longer duration.
How much your conversion rate might still improve. If your absolute potential loss is 2% and the expected conversion rate is 10%, you still have a chance to improve this conversion rate and increase it to 12%.
Indicates the confidence you can have that a variation will perform better than the control. The higher the significance level, the greater the chance that the variation will perform better than the control (the original version). For example, a 95% chance to beat control means you have a confidence level of 95% that the variation will convert better than the control. However, note that there is still a 5% probability that the variation may not deliver as expected. Several factors influence the significance level of a variation, including the duration of the test, the number of visitors involved, and so on.
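The 99% credible intervals described above can be estimated by sampling the Beta posterior of an observed conversion rate. A sketch assuming a uniform Beta(1, 1) prior and illustrative counts:

```python
import random

def credible_interval(visitors, conversions, level=0.99,
                      samples=100_000, seed=0):
    """Monte Carlo credible interval for a conversion rate under a
    Beta(1 + conversions, 1 + failures) posterior."""
    rng = random.Random(seed)
    draws = sorted(rng.betavariate(1 + conversions, 1 + visitors - conversions)
                   for _ in range(samples))
    # Take the central `level` mass of the sorted posterior draws
    lo_idx = int((1 - level) / 2 * samples)
    hi_idx = int((1 + level) / 2 * samples) - 1
    return draws[lo_idx], draws[hi_idx]

# Illustrative: 100 conversions from 1,000 visitors (observed rate 10%)
worst, best = credible_interval(1000, 100)
```

Here `worst` and `best` play the role of the 'worst case' and 'best case' rates: the true conversion rate is believed to lie between them with 99% probability, given the data and the prior.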
