
Learn How Experts Derive Insights from A/B Test Results


You conducted an A/B test—great! But what next?

How would you derive valuable insights from the results of A/B testing? And more importantly, how would you incorporate those insights into subsequent tests?

As Deloitte University Press Research on Industrialized Analytics suggests, acquiring information is just the first step of any robust data analysis program. Transforming that information into insights, and eventually those insights into actions, is what yields successful results.


This post talks about why and how you should derive insights from your A/B test results and eventually apply them to your conversion rate optimization (CRO) plan.

Analyzing Your A/B Test Results

No matter how the overall result of your A/B test turned out—positive, negative, or inconclusive—it is imperative to delve deeper and gather insights. Not only can this help you aptly measure the success (or failure) of your test, but it can also provide you with validation specific to your users.

As Bryan Clayton, CEO of GreenPal puts it, “It amazes me how many organizations conflate the value of A/B testing. They often fail to understand that the value of testing is to get not just a lift but more of learning.

Sure 5% and 10% lifts in conversion are great; however, what you are trying to find out is the learning about what makes your customers say ‘yes’ to your offer.
Only with A/B testing can you close the gap between customer logic and company logic and, gradually, over time, match the internal thought sequence that is going on in your customers’ heads when they are considering your offer on your landing page or within your app.”

Here is what you need to keep in mind while analyzing your A/B test results:

Tracking the Right Metric(s)

When you are analyzing A/B test results, check whether you are tracking the correct metric. If multiple metrics (secondary metrics along with the primary one) are involved, analyze each of them individually.

Ideally, you should track both micro and macro conversions.

Brandon Seymour, founder of Beymour Consulting rightly points out: “It’s important to never rely on just one metric or data source. When we focus on only one metric at a time, we miss out on the bigger picture. Most A/B tests are designed to improve conversions. But what about other business impacts such as SEO?

It’s important to make an inventory of all metrics that matter to your business, before and after every test that you run. In the case of SEO, it may require you to wait for several months before the impacts surface. The same goes for data sources. Reporting and analytics platforms aren’t accurate 100 percent of the time, so it helps to use different tools to measure performance and engagement. It’s easier to isolate reporting inaccuracies and anomalies when you can compare results across different platforms.”

Most A/B testing platforms have built-in analytics sections to track all the relevant metrics. You can also integrate these platforms with popular website analytics tools such as Google Analytics.
To track A/B test results via Google Analytics, you can refer to this article by ConversionXL.
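The micro/macro distinction above can be made concrete with a short sketch. The event log, visitor counts, and metric names below are hypothetical stand-ins for the data your testing or analytics platform would actually report; this only illustrates the bookkeeping of computing both conversion types per variation:

```python
from collections import defaultdict

# Hypothetical event log: (visitor_id, variation, event) tuples.
events = [
    ("u1", "control", "add_to_cart"), ("u1", "control", "purchase"),
    ("u2", "control", "add_to_cart"),
    ("u3", "variant", "add_to_cart"), ("u3", "variant", "purchase"),
    ("u4", "variant", "purchase"),
]
visitors = {"control": 4, "variant": 4}  # visitors bucketed into each arm

def conversion_rates(events, visitors, metric):
    """Unique converters on `metric` divided by total visitors, per variation."""
    converters = defaultdict(set)
    for visitor_id, arm, event in events:
        if event == metric:
            converters[arm].add(visitor_id)
    return {arm: len(converters[arm]) / n for arm, n in visitors.items()}

micro = conversion_rates(events, visitors, "add_to_cart")  # micro conversion
macro = conversion_rates(events, visitors, "purchase")     # macro conversion
```

Reading the two side by side is what surfaces the bigger picture Seymour describes: a variation can lift add-to-carts (micro) while leaving purchases (macro) flat, and each such gap is a separate insight to investigate.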

Conducting Post-Test Segmentation

You should also create different segments from your A/B tests and analyze them separately to get a clearer picture of what may be happening. Results derived from generic, nonsegmented testing can be misleading and lead to skewed actions.

There are broad types of segments you can create to divide your audience. Here is a set of segmentation approaches from Chadwick Martin Bailey:

  • Demographic
  • Attitudinal
  • Geographical
  • Preferential
  • Behavioral
  • Motivational

Post-test segmentation in VWO Testing allows you to deploy a variation to a specific user segment. For instance, if you notice that a test affected new and returning users differently (and notably), you may want to apply your variation only to the segment it worked for.

However, searching through many different segments after a test practically guarantees that some will show positive results through random chance alone (the multiple-comparisons problem). To avoid this, define your goal and the segments you care about clearly before the test begins.
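One defensible way to contain that risk is to tighten the significance threshold in proportion to the number of segments you inspect. The sketch below uses a frequentist two-proportion z-test with a Bonferroni correction; the per-segment numbers are invented for illustration, and a Bayesian engine such as VWO SmartStats would report its results differently:

```python
import math

def two_proportion_pvalue(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test p-value for a difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 1.0
    z = (p_b - p_a) / se
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided tail probability

# Hypothetical per-segment results: segment -> (control conv, n, variant conv, n)
segments = {
    "new visitors":       (120, 2000, 150, 2000),
    "returning visitors": (200, 1800, 205, 1800),
    "mobile":             (90, 1500, 110, 1500),
}

alpha = 0.05 / len(segments)  # Bonferroni: split the error budget across looks
for name, (ca, na, cb, nb) in segments.items():
    p = two_proportion_pvalue(ca, na, cb, nb)
    verdict = "significant" if p < alpha else "treat as noise until retested"
    print(f"{name}: p={p:.4f} -> {verdict}")
```

The correction makes the trade-off explicit: the more segments you slice after the fact, the stronger the evidence each one must show before you act on it.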

Delving Deeper into Visitor Behavior Analysis

You should also use visitor behavior analysis tools such as heatmaps, scrollmaps, visitor recordings, and so on to gather further insights into A/B test results. For example, consider the search bar on an eCommerce website. An A/B test on the search bar works only if users actually use it. Visitor recordings can reveal whether users find the search bar friendly and engaging. If the bar itself is hard to understand, all variations of it can fail to influence users.

Apart from giving insights on specific pages, visitor recordings can also help you understand user behavior across your entire website (or conversion funnel). You can learn how critical the page you are testing is to your conversion funnel.

Maintaining a Knowledge Repository

After analyzing your A/B tests, it is imperative to document the observations from them. This not only helps in transferring knowledge within the organization but also lets you use the observations as a reference later.

For instance, suppose you are developing a hypothesis for your product page and want to test the product image size. Using a structured repository, you can easily find similar past tests, which can help you identify patterns for that element.

To maintain a good knowledge base of your past tests, you need to structure it appropriately. You can organize past tests and the associated learning in a matrix, differentiated by their “funnel stage” (ToFu, MoFu, or BoFu) and “the element that was tested.” You can add other customized factors as well to enrich the repository.
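A matrix like this can live in a spreadsheet, but it can be kept just as simply as a list of structured records. The fields and lookup helper below are one hypothetical schema mirroring the funnel-stage and element axes described above, not a prescribed format:

```python
from dataclasses import dataclass

@dataclass
class TestRecord:
    name: str
    funnel_stage: str   # "ToFu", "MoFu", or "BoFu"
    element: str        # the element that was tested
    outcome: str        # "win", "loss", or "inconclusive"
    takeaway: str

# Hypothetical past tests in the repository.
repository = [
    TestRecord("Hero image size", "ToFu", "product image", "win",
               "Larger images lifted add-to-cart"),
    TestRecord("Checkout CTA copy", "BoFu", "CTA button", "inconclusive",
               "Copy change alone did not move purchases"),
]

def find_similar(repo, funnel_stage, element):
    """Look up past tests matching the stage and element of a new hypothesis."""
    return [r for r in repo
            if r.funnel_stage == funnel_stage and r.element == element]

matches = find_similar(repository, "ToFu", "product image")
```

Before writing the product-image hypothesis from the earlier example, a quick `find_similar(repository, "ToFu", "product image")` call surfaces the relevant prior learning in one step.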

Look at how Sarah Hodges, co-founder of Intelligent.ly, keeps track of her A/B test results: “At a previous company, I tracked tests in a spreadsheet on a shared drive that anyone across the organization could access. The document included fields for:

  • Start and end dates
  • Hypotheses
  • Success metrics
  • Confidence level
  • Key takeaways

Each campaign row also linked to a PDF with a full summary of the test hypotheses, campaign creative, and results. This included a high-level overview, as well as detailed charts, graphs, and findings.

At the time of deployment, I sent out a launch email to key stakeholders with a summary of the campaign hypothesis and test details, and attached the PDF. I followed up with a results summary email at the conclusion of each campaign.

Per my experience, concise email summaries were well-received; few users ever took a deep dive into the more comprehensive document.
Earlier, I created PowerPoint decks for each campaign I deployed, but ultimately found that this was time-consuming and impeded the agility of our testing program.”

Applying the Learning to Your Next A/B Test

After you have analyzed the tests and documented them according to a predefined theme, make sure that you visit the knowledge repository before conducting any new test.

The results from past tests shed light on user behavior on a website. With better understanding of the user behavior, your CRO team can have a better idea about building hypotheses. This can help the team create on-page surveys that are contextual to a particular set of site visitors.

Moreover, results from past tests can help your team come up with new hypotheses quickly. The team can identify areas where the win from a past A/B test can be replicated. It can also revisit failed tests, understand why they failed, and steer clear of repeating the same mistakes.

Your Thoughts

How do you analyze your A/B test results? Do you base your new test hypothesis on past learning?

Vaishali Jain
Majored in English literature and currently works with content at VWO. Loves her coffee, books, and movies.