
## BLOG

### on Conversion Rate Optimization

Today’s case study is very simple, but it has deep ramifications for anyone selling anything online. It shows that if you’re not A/B testing your prices, you’re probably leaving money on the table.

Six Packs Abs Exercises is a website run by Carl Juneau that provides training videos and guides on how to get a set of “rock hard abs”. At the time of the test, the page selling the abs workout looked the same in both Control and Variation:

Control: When clicking “Add To Shopping Cart” visitors were taken to the checkout page where the price was \$19.95

Variation: Same checkout page, only change was that the price was now \$29.95

### The Result

The results: out of 1227 visitors who saw the original pricing (\$19.95), 1.1% ended up buying. Out of the 1375 visitors who saw the \$29.95 price, 1% ended up buying.

Split testing the prices with Visual Website Optimizer, Carl found that the two conversion rates were statistically the same, which means customers did not differentiate between the \$19.95 and \$29.95 price points. By A/B testing them, he made 61.67% more revenue from the Variation than from the Control. Normalizing to 1000 visitors at each price, the \$29.95 price would still make him an extra 36.48% in revenue.
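To make the arithmetic concrete, here is a quick sketch of the per-1,000-visitor revenue comparison. It uses only the numbers reported above (visitor counts, conversion rates, and prices); no other data from the test is assumed.

```python
# Revenue comparison using the figures reported in the case study.
control_rate, control_price = 0.011, 19.95      # 1.1% at $19.95
variation_rate, variation_price = 0.010, 29.95  # 1.0% at $29.95

# Revenue per 1,000 visitors at each price point
control_rpm = 1000 * control_rate * control_price       # 11 sales * $19.95
variation_rpm = 1000 * variation_rate * variation_price # 10 sales * $29.95
lift = variation_rpm / control_rpm - 1

print(f"Control:   ${control_rpm:.2f} per 1,000 visitors")    # $219.45
print(f"Variation: ${variation_rpm:.2f} per 1,000 visitors")  # $299.50
print(f"Revenue lift at the higher price: {lift:.2%}")        # 36.48%
```

The 61.67% figure quoted above comes from the raw purchase counts over the unequal traffic split, while this normalized version reproduces the 36.48% figure.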

### What do we get from this test?

While it’s obvious that you should A/B test your prices to maximize revenue, what’s equally important is that you learn the difference between how you and your customers value your offering. To explain, I’ll use a bit of economics.

Recall the traditional price elasticity curve: as the seller’s price increases, demand for the product decreases.

The price elasticity curve for this particular case looks different: demand stayed flat between the two price points (the green line in the chart). There can be two reasons for that:

1. The buyers value the abs training products higher than Carl anticipated, which is why increasing the price did not decrease the conversion rate.
2. The buyers were indifferent to the two prices and would have bought either way.

As you’ll realize, both situations are very favorable for SixPackAbsExercises.com and Carl. Now all he has to do is keep A/B testing his prices to see if he’s still leaving any money on the table.

### You should also check out our other Pricing related posts

Over the years, we’ve built up a large collection of posts and case studies related to pricing A/B tests. Check them out.

##### Siddharth Deswal

I do marketing at VWO.

1. I’d love to hear a statistician’s take on what you mean by “both conversion rates were statistically the same”.

While it’s obvious that the current conversion rates are almost identical, it seems to me that the possible margin of error could be high.

In the screenshot of the results, it says the control is 1.1% ± 0.4% and Variation 1 is 1% ± 0.3%.

If I understand correctly, the range in which we expect the control to be is 0.7% to 1.5%, and the range for Variation 1 is 0.7% to 1.3%.

This means that going forward, the control could improve to 1.5% while Variation 1 gets worse, down to 0.7%.

Looking at the actual numbers by themselves, I personally would have waited a little longer before reaching the conclusion that the difference between the conversion rates is less than 33% (which is what’s needed to declare the variation a winner).

On the other hand, factoring in the actual product that is being sold, from my experience having a \$29 price vs a \$19 price probably won’t be the deciding factor.

If a person believes the product will solve their problem, the \$10 difference won’t matter too much.

Ophir
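Ophir’s interval arithmetic can be sanity-checked with a textbook Wald approximation for a proportion. Note that this is a standard approximation, not necessarily the formula VWO’s tool uses, so the margins come out slightly wider than the ±0.4% / ±0.3% shown in the screenshot; the qualitative conclusion (heavily overlapping intervals) is the same.

```python
import math

def wald_ci(p, n, z=1.96):
    """Textbook Wald 95% confidence interval for a proportion."""
    half = z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half

ctrl_lo, ctrl_hi = wald_ci(0.011, 1227)  # control: 1.1% of 1227 visitors
var_lo, var_hi = wald_ci(0.010, 1375)    # variation: 1.0% of 1375 visitors

print(f"Control:   {ctrl_lo:.2%} to {ctrl_hi:.2%}")
print(f"Variation: {var_lo:.2%} to {var_hi:.2%}")

# Heavily overlapping intervals are why no winner can be declared.
overlap = ctrl_lo < var_hi and var_lo < ctrl_hi
print("Intervals overlap:", overlap)
```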

2. You know I love you guys, but I’m really surprised you posted that, with the statistics being so incredibly small.

That’s not even remotely close to being a valid test…

Tell him he should NOT move forward with his new change until he gets something statistically valid.

Jeremy Reeves
http://www.JeremyReeves.com

3. @Jan
Very interesting article. We’re putting up something similar on our Knowledgebase soon.

In the current test, the conversion rate is low, true, but the number of visitors tested is large. However, I’ve asked Carl to weigh in, and we should get further insight if he decides to provide more numbers.

@Jeremy,
Thanks for dropping by.

The reason we decided to go ahead with this case study is that it gives readers insight into the advantages of A/B testing prices.

And as Ophir says, in this industry something like this could very well happen.

4. Hey guys,

This is Carl.

I’m a PhD student in public health and I’ve passed five 135-hour statistics courses. I guess I know more about stats than most users.

Ophir, yes “the range which we expect the control to be is 0.7% to 1.5% and the range for variation 1 is 0.7% to 1.3%.” And yes, “This means what could happen going forward is that the control would improve to 1.5% and variation one would get worse to 0.7%.”

But what statistics does is tell you the chances of this happening. And the chance of Variation 1 beating the control in this case is 46%, which is close to the “null” of a 50% chance (heads or tails? Flip a coin: you’ve got a 50% chance).

Jeremy, you’re wrong. In statistics, you either find significant differences or you don’t. If you split test two identical pages, your test will never end. You could be split testing for 10 years and you’ll never find a statistically significant difference between two identical pages, because there is none. What you’ll see is that Variation 1 has a 50% chance of beating the control (heads or tails?).

So, after some time, what you gotta do is stop the test and conclude that no differences were found.

This is what I did in this case, and I think it’s correct.

Cheers guys,

Carl
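Carl’s 46% figure can be reproduced approximately with a simple Monte Carlo simulation over Beta posteriors. The exact purchase counts here (13 and 14) are an assumption, inferred by rounding the reported rates against the visitor counts; the choice of uniform Beta(1, 1) priors is also mine, not something stated in the post.

```python
import random

# Conversion counts inferred from the reported figures:
# 1.1% of 1227 ≈ 13 sales, 1.0% of 1375 ≈ 14 sales (assumed).
ctrl_conv, ctrl_n = 13, 1227
var_conv, var_n = 14, 1375

# Monte Carlo over Beta posteriors with uniform Beta(1, 1) priors.
random.seed(0)
draws = 200_000
wins = sum(
    random.betavariate(1 + var_conv, 1 + var_n - var_conv)
    > random.betavariate(1 + ctrl_conv, 1 + ctrl_n - ctrl_conv)
    for _ in range(draws)
)
prob = wins / draws
print(f"P(variation beats control) ≈ {prob:.1%}")
```

With these counts the probability lands in the mid-forties percent, consistent with the ~46% Carl quotes and close enough to a coin flip that neither price can be called a conversion-rate winner.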

5. Siddharth, great post by the way. Thanks for the link love.

6. @Carl – Huh? What’s there to be wrong about? For a test to be valid, it needs enough of a sample size. This test doesn’t have nearly enough statistics to show any kind of correlation either way.

Therefore, it is not valid.

“Maybe” the results would show that the new price gave a better result, but there’s no way to tell that either way, based on this test.

I do agree, though, that you should stop it since it took 5 months. In your case, you’d be much better off investing your time in getting more traffic, rather than split-testing.

7. @ Jeremy, buddy, you’re wrong all the way with your stats.

There are three mistakes just in:

“For a test to be valid, it needs enough of a sample size. This test doesn’t have nearly enough statistics to show any kind of correlation either way.”

Here’s a quick guide to get you started:

http://www.itl.nist.gov/div898/handbook/eda/section3/eda353.htm

I’m sure most of your clients won’t know the difference, but you’d do them and yourself a favor if you read and understood that.

-Carl

8. Haha, Carl, you can try to sound as smart as you’d like, if it makes you feel better.

The fact is…

YOU STILL HAVE A HIGHER CHANCE OF BEING WRONG THAN YOU DO OF BEING RIGHT.

Therefore, there’s absolutely no possible way I would ever consider this a valid test. You have absolutely NO idea whether these results would hold if you added more traffic/conversions.

Simple as that.

9. Okay, this is cool, but hard to implement on, let’s say, a Magento shop.
