Webinar

Meta-Analyses in Experimentation: The Whats and Hows

Duration - 40 minutes
Speaker
Ruben de Boer

Lead Conversion Manager

Key Takeaways

  • Implement an evidence-based prioritization system for your experiments. This involves using data from completed experiments to rank test ideas based on their win rate and average uplift per winner. This approach keeps the ranking of your test ideas up-to-date based on your latest experimentation results, leading to more successful experiments.
  • Consider the success of different behavioral hypotheses on different pages when prioritizing test ideas. For example, if the social proof hypothesis has been more successful on the homepage than the product finding hypothesis, prioritize test ideas related to social proof on the homepage.
  • Update the win rate and average uplift per winner after every experiment. This ensures that your prioritization score remains current and is based on the most recent results.
  • Add additional attributes to your prioritization model so it fits your business. These could include alignment with business goals or OKRs, the percentage of traffic that will see the change, the minimal detectable effect, revenue going through the page, urgency, and ease of implementation.
  • Balance easy and complex experiments for velocity and impact. While it may be tempting to prioritize easy experiments, complex ones can often have a greater impact. Therefore, it's important to maintain a balance between the two.

Summary of the session

In this webinar, Divyansh and Ruben delve into the intricacies of A/B testing and its role in conversion rate optimization (CRO). Ruben, a CRO expert, emphasizes the importance of learning from both successful and unsuccessful experiments. He explains how to analyze data from A/B tests to verify or deny behavioral hypotheses, and how this knowledge can drive revenue and conversion rates.

Ruben also demonstrates how to create a detailed A/B test report, highlighting the difference between results and learnings. He advises against simply summarizing results, instead encouraging attendees to answer key questions to gain valuable insights. These include whether the hypothesis was confirmed, how the results align with previous knowledge, and what can be inferred about customer needs and motivations. Ruben introduces the concept of evidence-based prioritization, which helps build on learnings and successes. The webinar concludes with a poll asking attendees about their current prioritization models. 

 

Webinar Video

Webinar Deck

Top questions asked by the audience

  • Can you share with us your process for user research, please? And which tool are you using to share with your team, Motion or Myra? Thank you for your generous sharing, Ruben.

    - by Hasna
    Got it. That's fine. So, the tools I use for user research can be a lot. I love everything related to polls and surveys, which can come from several tools like Hotjar. I love Usability Up, which is a nice tool, and UserZoom for remote usability testing, but I also work with clients who have used video apps, even with eye tracking. And for brainstorming sessions, I can use Myra. For sharing with your colleagues, I like that part: be creative, use your experimentation mindset, and see what sticks. If sending an email with summaries works perfectly, do that; if it's a Slack message, use that. If it's a monthly update, lunch-and-learn presentations, gamification, or whatever works for sharing in your organization, see what works there and experiment to find how you can best share learnings. I hope that kind of answers the question.
  • I just wanted to ask if there were some free tools for someone who would like to start, that Air-something. Sorry. I forgot the name.

    - by Linda
    Airtable. Yes. Airtable has a free version, and everything we covered in the presentation you can do in the free version of Airtable. So it's very easy, actually. The course is free. Airtable is free. So, yeah. There is a paid option, but you don't need it for this, as I showed in the presentation.

Transcription

Disclaimer- Please be aware that the content below is computer-generated, so kindly disregard any potential errors or shortcomings.

Divyansh from VWO: Hello. Hi, everyone. Thank you so much for joining the VWO webinar, where we always try to update and inspire you with everything around experimentation and conversion rate optimization. I’m your host, Divyansh. I’m a marketing manager at VWO, a full-funnel website experimentation platform. Today, we have a special guest who I feel a lot of people already know or will know after this presentation. Welcome, Ruben from Online Dialogue.

 

Ruben de Boer:

Thank you. 

 

D:

Sure. Before starting with the actual discussion, I want to let attendees know that you too can participate in this discussion. GoToWebinar does not allow me to switch on your cameras, but I can switch on your mic if you want to share your thoughts on the questions being discussed. Send me a request using the chat or a questions box from the control panel, and I’ll be happy to unmute you. Ruben.

Take it away.

 

RB:

Sure. Thanks. Great to be here.

I’ve been working with VWO for, I think, approximately 10 years now. So it’s a pleasure to be a speaker today. We’re gonna talk about meta analysis in experimentation, but let me first briefly introduce myself. This is me on one slide.

At least I tried to put everything on one slide. So my name is Ruben. I’ve been in the business of experimentation optimization for over 10 years now. Currently, I’m the lead CRO manager and consultant at Online Dialogue, an experimentation agency based in the Netherlands. I’m also the owner of Conversion Ideas.

And with this company, I want to help people learn and excel in conversion optimization and experimentation with online courses at very affordable prices. Currently, I have over 10,000 students worldwide. I’m also an international keynote speaker and blogger, and I regularly appear on podcasts. That’s me in my business life, but I also love optimizing in my personal life. I’m a big fan of personal growth, and I do sports seven, eight times a week.

So I love optimization in my personal life as well. I live in Amsterdam, Netherlands. I absolutely love that city and everything it has to offer. But for the third picture, I didn’t think it was very appropriate to show me drinking a beer at a festival in Amsterdam. So instead, I’ve decided to show a picture of me, my girlfriend Zoe, and our dog Happy in nature, because in our time off we love hiking in nature.

That’s me in a nutshell. Now let’s start with three questions. I hope everyone at home will participate: raise your hand if your answer to the following questions is yes, keep it raised, and put it down when it’s no. So the first question is pretty easy, to see if you’re paying attention: who here experiments? Raise your hands if yes.

If no, keep them down. I assume that almost everyone has their hand raised because you’re here for VWO, the VWO testing tool. I assume that most people run experiments and A/B tests.

Next question. Keep your hand raised if the answer to this question is yes as well. Who here documents experiment learnings? Yes? Raise your hands. No? Put them down.

Now the final question. Who combines learnings from most A/B tests to learn about customer behavior? 

So, of course, this all has to do with the presentation today.

Because we, as CRO specialists or those who work on experimentation and A/B testing, are always very eager to tell our managers that learning is just as important as winning experiments. We love to say that it does not matter if our test is a winner, a loser, or inconclusive, as long as we learn. If you did not answer yes to all three questions, or perhaps even if you did, you can likely be learning a lot more from your experiments. You can truly understand your website visitor in every step of the customer journey on your website and digital products. And you can use this for prioritization of your test ideas, your experiments.

You can use your learnings for your prioritization. And this will be the topic for the next 30 minutes. The goal of this presentation is that I’ll show you a few easy tweaks so you truly learn much more and also decrease your biases. I’ll show you how to use data analysis to know which hypothesis to address in which step of the customer journey, and how.

And this will help you to become a lot more successful at your job. Of course, if you have any questions, please feel free to ask, you don’t have to wait until the end of the presentation. So don’t be shy, and let me know your questions while I’m presenting. So let’s start at the beginning. Like I said, we all know learning is important.

And the rule of thumb in our market is that only 25% of experiment results are an improvement. Only 25 percent of our experiments improve our conversion rate, our user experience, and our revenue. At Online Dialogue, we have a win rate of approximately 40%, and one of the main reasons is that we learn. But if your win rate is 25%,

it basically means that 75% of what we do results in no growth for the business, or even decreases the user experience, conversions, and therefore revenue.

 

RB:

Still, with a win rate of 25%, 40%, or even a bit more, in our line of work we fail a lot. Right? And that is why we do conversion rate optimization. That is why we A/B test. Without testing, we would just implement everything on the website straight away, and that will not result in growth.

You simply do not know if you’re implementing winning changes, changes that make no impact at all, or even losing changes. If you do not test, you don’t know what you’re implementing, and your growth is pretty much stable. Now, when you run an experiment for every change on your website and digital products, you can implement only winners and not implement losers, and that results in growth for your organization. And this is how the big tech giants operate, those with an experimentation culture. And this is how they grow much faster than the other S&P 500 companies, as we see in the stock prices displayed on this slide. This slide shows the stock prices of the companies that we know have an experimentation culture, so they experiment with almost every change they make. And we can see in the experimentation index that the stock prices of those companies that experiment grow much, much faster.

But it’s not just because they only implement winning designs. They also learn from their experiments. Because if we do experimentation and conversion rate optimization correctly, we are going from a 75% failure rate to a 100% learning rate. And proper learning starts with the hierarchy of evidence. The hierarchy of evidence model comes from science, and it’s essential to understand for experimentation as well.

In science, this pyramid shows the hierarchy of evidence and is used to rank the strength of results obtained from scientific literature. For experimentation, the hierarchy of evidence looks like this, as displayed on the slide. At the bottom, we have expert opinion and competitors. Above that, we have user research, then we have data, then randomized controlled trials, which are A/B tests, and at the top, we have meta analysis. Higher in the pyramid means a higher quality of proof, and thus more reliable results and insights, and thus lower-risk decisions.

So at the top, we find the meta analysis. In science, a meta analysis is performed when multiple scientific studies address the same question. And this is because a single study can have a degree of error; it will have some degree of error. A meta analysis aims to derive a pooled estimate closest to the truth, and the same applies to experimentation. A single A/B test can be prone to errors.

You can, for instance, have a false positive in your test. This means that your test is a winner, but when you implement the change, there’s actually no difference. It doesn’t make any impact, a false positive. Another error could be that your A/B test is a winner, but for a different reason than stated in your hypothesis. Let’s look at a quick example.

So here we have an A/B test, and let’s assume that the hypothesis for this test was that social proof early on in the customer journey results in more conversions. So you add social proof elements, reviews, ratings, on an important landing page or perhaps the homepage. Now when this results in a winning experiment, it is very tempting to state that the hypothesis is correct. It is very tempting to state that indeed, social proof early on in the customer journey is important for your customers. However, maybe it was not the social proof that caused this experiment to be a winner.

Maybe the block highlighted in red here on the slide could be a conversion killer. When people see this, they will leave the website and not convert. Now, in the variant, the B variant, you place the social proof elements just below the header. This causes that block to move towards the bottom of the page. It goes lower on the page, meaning fewer people will see this block.

Thus conversions increase. It was not the social proof, but simply placing this block lower on the page that caused your experiment to win. So a single A/B test can be prone to errors. Therefore, we conduct a meta analysis to get the closest to the truth about what drives our visitors and our conversion rates. So we conduct multiple A/B tests to run a meta analysis, and we can do so in five steps.

Let’s go through them one by one. Of course, we start with step number 1, which is to create a behavioral hypothesis based on your research.

 

D:

Ruben, sorry, sorry to interrupt. We have a question from Hasna. Hasna, can I unmute you and, you can go ahead and ask your question?

 

RB:

Sure.

 

D:

I’ll just do that. Hasna, you can go ahead and ask your question to Ruben. I’ve unmuted you. I think Hasna is not available as of now. 

So Hasna has left the question in the questions panel. Hasna wanted to know: can you share with us your process for user research, please? And which tool are you using to share with your team, Motion or Myra? Thank you for your generous sharing, Ruben.

 

RB:

Okay. Which tools do I use for user research and for sharing with my colleagues? Is that correct? Is that the question?

 

D:

Yeah. Probably that is what Hasna meant.

 

RB:

Got it. That’s fine. So, the tools I use for user research can be a lot. I love everything related to polls and surveys, which can come from several tools like Hotjar. I love Usability Up, which is a nice tool, and UserZoom for remote usability testing, but I also work with clients who have used video apps, even with eye tracking.

And for brainstorming sessions, I can use Myra. For sharing with your colleagues, I like that part: be creative, use your experimentation mindset, and see what sticks. If sending an email with summaries works perfectly, do that; if it’s a Slack message, use that. If it’s a monthly update, lunch-and-learn presentations, gamification, or whatever works for sharing in your organization, see what works there and experiment to find how you can best share learnings.

I hope that kind of answers the question.

 

D:

Thank you, Ruben. You can go ahead. Also, Hasna has said thank you.

 

RB:

Sure. Alright. So, thanks. Nice question.

So, five steps to do your meta analysis and learn much more. As mentioned, step number 1 is to create a behavioral hypothesis based on your research, which indeed includes user research, a very relevant question. A behavioral hypothesis is a general hypothesis stating something about your visitors’ behaviors, needs, and motivations. It is a statement and hypothesis that is not tied to a single adjustment on your website. The hypothesis is based on your user data, your research, and completed experiments. And when you use different research sources, you cluster insights that belong together and create behavioral hypotheses.

So let’s look at a few examples, and let’s check out the case I use in one of my best-selling CRO courses on Udemy. In this course, I used the Google Merchandise Store as an example case to explain the conversion rate optimization process, including all the research, analysis, and A/B testing. In the Google Merchandise Store, you can purchase Google merchandise, as the name kind of explains already. So you can buy sweaters, shirts, hats, cups, everything with the Google logo on it. I used it in my course because the analytics account of this website is freely accessible. So if you want to practice Google Analytics or Universal Analytics, you can use this account. But let’s say we are working on this website together.

And let’s assume that together, we did research for the Google Merchandise Store. And we found that people buy Google merchandise because they love the brand. And let’s say you found it in several research sources, like a poll, interviews, surveys, science, etcetera. Let’s imagine we also found that people want to be part of the Google community. So those who buy from the Google Merchandise Store want to be part of the Google community.

And in science, we found that people identify themselves with a brand when they purchase merchandise. And in a poll, let’s imagine we found that customers are fans of Google, and in interviews, we found that our customers use many different Google products. So these are insights from our research, but they’re all related. Therefore, we can create a behavioral hypothesis that by elaborating on the brand and community feelings, our sales will increase. So this is a hypothesis based on research, on several research sources, and is not related to a single change on your website or digital product.

It is an overarching statement about your user behavior and needs based on our research. Now to run a meta analysis, we can test this behavioral hypothesis in several A/B tests. For instance, we can add a value proposition on the most important landing pages stating that people can become part of the Google community. We can display the number of Google fans worldwide. You can display pictures of a large Google event with fans wearing our merchandise.

And, of course, we can elaborate on the fact that we are the official Google Merchandise store. And I’m sure you can think of many more experiments based on this behavioral hypothesis. So you’re gonna run these experiments. And if you find a lot of winners, the behavioral hypothesis could be very true. And it means we learned something valuable, something validated by several A/B tests.

So for one behavioral hypothesis, we can verify or deny it with several A/B tests. Now, for your website or digital products, you want to aim for 5 to 10 behavioral hypotheses. So let’s look at 4 quick examples. Let’s say in our research, and again, you use different research sources, we found a customer problem: visitors have a hard time finding the right products on our website. This could lead to the behavioral hypothesis that by making it easier for visitors to find the right product, sales will increase.

Now let’s also imagine we found that visitors require social proof and a need to belong on our website and for our product. So: by increasing social proof, our sales will increase. Another strong behavioral hypothesis, not related to a single change, but an overarching hypothesis stating something about website visitors’ needs and behaviors. Two more examples, and then we will work with these examples. We found in research that visitors have a hard time choosing the right product. So: when we include guidance and advice on the right product, sales will increase. And for the last one, let’s imagine that in our research, our extensive research of course, we found that visitors are hesitant to purchase due to feelings of uncertainty related to the product delivery and terms. So: by providing certainty, sales will increase. Just a few quick examples.

So now we have 5 behavioral hypotheses based on our research. Now I’ll take a sip from my water. Take a second yourself to think about 1 or 2 behavioral hypotheses, for your website or your digital product. So think of 1 or 2 behavioral hypotheses for your website for a second. Alright.

Let’s go to step 2.

For every experiment, document the page, so where the experiment is running, the behavioral hypothesis related to the experiment, and the optimization strategy, or optimization direction, whatever you want to call it. So what is an optimization strategy? Optimization strategies are the most important ways to optimize your journey based on psychological knowledge. At Online Dialogue, we have these 5 optimization strategies, based on analyzing a lot of A/B tests and a lot of scientific papers.

But this is our model. There are several models in the market, and you can use the one you prefer to work with, of course. We have ability, attention, motivation, certainty, and choice architecture. Now I can make this a lot easier for you, thanks to my colleagues, the psychologists at Online Dialogue. Because someone on your website wants to complete a certain action, for instance, buying Google merchandise or signing up for a newsletter.

Now let’s imagine this action, which someone wants to do, is crossing a bridge. Ability: you want to cross the bridge, but the bridge is broken. So you want to purchase a product, but the checkout is broken. Attention: there are 2 bridges in front of you, a normal one and another made of gold.

Which one will you choose? Motivation: you could cross the bridge, but why would you? Certainty: you want to cross the bridge, but only if you trust the one who built it.

And finally, choice architecture: there are 2 prices for crossing the bridge, a normal ticket for €5 and a premium ticket for €80.50. Which one is most chosen, of course? And we would like to do the same on our websites. So those are the 5 strategies we use, but you can choose any framework you want.

So step 2 is to document the page, behavioral hypothesis, and optimization strategy for each experiment. In your documentation tool, it could look something like this. We use Airtable for our documentation. Of course, you can choose any tool you like. You could even try it in Google Sheets, but I would not recommend that at all.

Like I said, at Online Dialogue we love using Airtable with all our clients. If you want to start using Airtable, we have a free course on Udemy on how to set up your Airtable base. This course will help you set up your CRO process and insights in Airtable step by step. So if you want to start using Airtable, you can check out this course. It is free.

So you document the page, optimization strategy, and behavioral hypothesis. And, of course, you also want to document your test result and the uplift you found in your test. You can add more fields to your documentation tool, like primary goal, screenshots, etcetera, but these are the fields that are important for our meta analysis for the Google Merchandise Store. So let’s quickly recap.

You have the behavioral hypothesis based on extensive research. You run A/B tests related to the behavioral hypothesis on all pages. On the homepage, on the list page, on the product details page, and on all other pages on your website. Now in those A/B tests, you can use the optimization strategies. Some tests will be related to ability, others to motivation, attention, certainty, and choice architecture.

And when you also document the result of experiments, so a winning A/B test, a losing A/B test, or an inconclusive one, you start seeing a picture, a picture of what works, where, and how. Here we can see that the best page to address this hypothesis is the list page, and the best way to do so is using motivation in our A/B tests. So again, this is your documentation. We have this set up. You did all the hard work.
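For readers who want to mirror this kind of documentation outside Airtable, here is a minimal sketch of what one documented experiment could look like as a record. The field names (page, behavioral_hypothesis, strategy, result, uplift) are illustrative assumptions based on the fields Ruben describes, not an export of his actual Airtable base.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Experiment:
    """One completed A/B test, documented for later meta-analysis."""
    name: str
    page: str                       # where the test ran, e.g. "homepage"
    behavioral_hypothesis: str      # overarching hypothesis, e.g. "social proof"
    strategy: str                   # optimization strategy, e.g. "motivation"
    result: str                     # "winner", "loser" or "inconclusive"
    uplift: Optional[float] = None  # relative uplift if it won, e.g. 0.08 for 8%

# Illustrative (made-up) entries for the Google Merchandise Store example
experiments = [
    Experiment("Community value proposition", "homepage",
               "brand & community feeling", "motivation", "winner", 0.06),
    Experiment("Show number of Google fans worldwide", "homepage",
               "social proof", "certainty", "inconclusive"),
    Experiment("Delivery terms near the add-to-cart button", "product page",
               "certainty", "ability", "loser"),
]
```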

But, of course, with only this overview, it is nearly impossible to do the meta analysis by hand; it would be a lot of work. So that’s what we need step 3 for. Step 3 is to set up the meta analysis in your documentation tool, because we now have all the information we need for running multiple meta analyses.

For instance, we can cross-reference the behavioral hypothesis and the page. Here we see an example for the home page and the different behavioral hypotheses. With this data, you know what hypothesis to address in what step of the customer journey. In this example, the behavioral hypothesis related to social proof works great on the home page, because it has a win rate of 70% and an average uplift per winner of 8%. The behavioral hypothesis related to certainty does not work at all here.

We didn’t find any winning experiment. But this is the home page. For the checkout, you might see something completely different. In this example, it is the behavioral hypothesis related to brand feeling that works best in the checkout with a win rate of 60% and an average uplift of 7% for each winner.

Again, in this example, the certainty behavioral hypothesis performs worst. Now, if you would see the behavioral hypothesis related to certainty performing worst on all other pages as well, we might have to discard it. We found it in research, but in reality, it’s not that big of a problem. But we can make more combinations.

We can also make a combination of strategy and page. Here’s the ability strategy and how it performs on different pages. So ability experiments are most successful on the product page. And, of course, we can combine strategy and behavioral hypothesis. A/B tests related to motivation are, in this example, most successful in combination with the behavioral hypothesis of brand feeling, with the highest win rate of 70% and the highest average uplift per winner of 6%.

So with the data we have, with the page, the behavioral hypothesis, and, of course, the optimization strategy, we can make all these cross-references and really start learning what, where, and how with our experiments. In Airtable, it can look something like this. Of course, this is fake data, but we can see that the certainty and choice architecture strategies work best on this home page, with a win rate of 50%. So this is how it can look in Airtable. So, again, we are using A/B tests to run a meta analysis.
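The cross-references described here boil down to grouping the documented experiments and computing a win rate and an average uplift per winner for each combination. Below is a minimal sketch of that aggregation, reusing the hypothetical Experiment records from the earlier snippet; in Airtable this would be a grouped view or rollup rather than code.

```python
from collections import defaultdict

def meta_analysis(experiments, key=lambda e: (e.page, e.behavioral_hypothesis)):
    """Win rate and average uplift per winner for each (page, hypothesis) combination."""
    groups = defaultdict(list)
    for e in experiments:
        groups[key(e)].append(e)

    summary = {}
    for combo, tests in groups.items():
        winners = [t for t in tests if t.result == "winner"]
        win_rate = len(winners) / len(tests)
        avg_uplift = sum(t.uplift for t in winners) / len(winners) if winners else 0.0
        summary[combo] = {
            "tests": len(tests),
            "win_rate": win_rate,
            "avg_uplift_per_winner": avg_uplift,
        }
    return summary

# Swap the key for other cross-references, e.g. strategy x page:
# meta_analysis(experiments, key=lambda e: (e.strategy, e.page))
print(meta_analysis(experiments))
```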

We use A/B tests to verify or deny a behavioral hypothesis, to learn what drives our digital users and what drives our revenue and conversion rates. Now, you also want to use this information for the learnings of your A/B test. When you finish an A/B test, this is what a general A/B test report looks like. Right?

You start with the name of the A/B test, then the reason for it and the hypothesis, then the setup and screenshots of the experiment, then the results on the main KPIs. Then we get to the A/B test learnings, and we end with conclusions and recommendations. That’s the general report that most of us make when we complete an A/B test. Now, we want to talk about the A/B test learnings, because once you get to the learnings, I see many specialists write learnings like this.

So: overall, the variant results in a 4.3 percent uplift; the hypothesis is confirmed for mobile users; results on desktop are inconclusive; new users especially seem to like the change; and for users coming from a paid marketing campaign the uplift is highest at 5.1%, etcetera, etcetera. That’s how I see a lot of specialists write their learnings, but please do not do this. This is a summary of your results.

These are not learnings. Learnings and results are very different. With the data of your A/B test and the data of your meta analysis, you can answer the following questions to craft real valuable learnings and insights. So when you get to the A/B test learnings of your report or your analysis, answer these questions.

First, was the hypothesis confirmed?

If not, was the hypothesis wrong or the execution?

Next, a very important question. Combine the results of this experiment with what you already know from your meta analysis on that page, on that hypothesis, and on that strategy. Is it completely in line? Did you learn something new?

Is it surprising? What did you see in this A/B test combined with the meta analysis? Now, what could you say about your customers’ needs, motivations, and behaviors when you make this combination?

With this knowledge, is there anything you would change in your approach like a different hypothesis, different strategy, or test this on a different page? 

And when you answer these questions, that’s when you can come up with good follow-up experiments. If you answer these questions, you are learning, and you are creating great follow-up experiments based on those learnings. So we covered a lot already. There are still two steps to go.

But now you know which behavioral hypothesis to address on what page, in what step of your customer journey, and how, by using the correct optimization strategy, because we have all this information and can create all these meta analyses.

Now step 4, set up evidence-based prioritization. To get the most out of your meta analysis, you want to set up evidence-based prioritization. This will help you keep building on your learnings and successes as you will have many more winners. Thus you will strengthen your meta analysis with more and more proof. So this is step number 4.

Let’s do a quick poll, and I believe Divyansh has it prepared. The question is: what prioritization model do you currently use? I’m curious. Divyansh, can you start the poll?

 

D:

Sure, Ruben. The poll has been started. I request all the attendees to please answer the question.

 

RB:

So you have the ICE, the PIE model, PXL model, evidence-based, which I will explain in a few minutes, and others. And let’s give it a few more seconds. You can see the result. Right, Divyansh?

 

D:

Yeah. I can close the poll now.

 

RB:

Can you all see the results?

 

D:

Yeah. Yeah. I’ll just share the results.

 

RB:

Yeah. Okay. Ah, there we go. It’s very small in my window. So I see most use ICE, the PIE model is pretty popular, PXL, and some evidence-based, which is nice.

If it’s not my evidence-based model, I’m very curious how you use it. Great. Awesome. So, not a lot of evidence-based yet.

That’s good. Thanks. Yeah. There you go. Thank you.

You can see my slides again. So, step number 4: set up evidence-based prioritization. I’ll explain how it works. So if this is your data, here we see the page combined with the behavioral hypothesis again. On the homepage, in this example, we completed 130 experiments with these behavioral hypotheses and these results.

Right? Then a test idea related to the social proof hypothesis should get a higher prioritization score than an idea related to finding the right products on this page. Because your completed experiments have shown that the social proof hypothesis is much more successful on the home page compared to finding the right products. Here we see the data again for the combination homepage and the social proof behavioral hypothesis. Now the expected impact of your next experiments on the home page with social proof is your win rates times the average uplift per winner.

In this case, it’s 70% times 8%, which is 5.6%. That’s the expected impact of your next experiment. And this is evidence-based prioritization. After every experiment, the win rate and average uplift per winner get updated. Thus, the ranking of your test ideas stays up to date based on your latest experimentation results. And this will, of course, result in many more winning experiments, as the test ideas with the highest priority are based on results from completed experiments.
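As a small illustration of the arithmetic: the expected impact for a combination is simply win rate times average uplift per winner, and both numbers are recomputed whenever another experiment for that combination finishes. The figures below are the ones from the slide (70% win rate, 8% average uplift); the helper name and the follow-up numbers are assumptions for illustration only.

```python
def expected_impact(win_rate: float, avg_uplift_per_winner: float) -> float:
    """Expected impact of the next experiment for a page x hypothesis combination."""
    return win_rate * avg_uplift_per_winner

# Homepage x social proof, as on the slide: 70% win rate, 8% average uplift per winner
print(expected_impact(0.70, 0.08))  # 0.056 -> 5.6% expected impact

# After another winner with an 8% uplift (previously 7 winners out of 10 tests),
# the numbers are simply recomputed and the combination ranks even higher.
new_win_rate = 8 / 11                   # ~0.727
new_avg_uplift = (7 * 0.08 + 0.08) / 8  # still 0.08
print(expected_impact(new_win_rate, new_avg_uplift))  # ~0.058
```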

So, just to show it in a drawing, thanks to my colleague. Let’s say we ran an experiment on the home page with the social proof hypothesis, related to the need to belong. And it was a winner.

In that case, for this combination, the win rate increases. Therefore, the prioritization score increases. Thus, we will keep testing the home page and social proof combination, as test ideas related to the homepage and social proof get a higher prioritization score. Of course, when the next test results in an inconclusive or a loser, the win rate for this combination will drop, the prioritization score will drop, and perhaps we should test a different hypothesis or test the social proof hypothesis on a different page.

Now, you can add a few additional attributes to your prioritization model and score them as you like. Let’s cover a few examples. I would suggest adding 1 to 3 additional attributes, if you like and if applicable to your business. So number 1 could be alignment with business goals or OKRs: important test goals get a higher score.

You can add the percentage of traffic that will see the change, so a change above the fold gets a higher score than an experiment in the footer. You can add a score for the minimal detectable effect; of course, a lower minimal detectable effect means a higher score in the prioritization.

Revenue going through the page, you can use that as an extra score. Urgency: more urgent means a higher score. And ease. But do make sure, when you use ease, to balance easy and complex experiments for velocity and impact. So you can add 1 to 3 additional attributes.

But ensure that the evidence-based scores have the highest impact on the overall score, because that is what has proven to be successful on your website, with your website visitors and your product. You can, for instance, multiply it by a certain number. So, as an example, this could be the formula for your prioritization score. You have the win rate times the average uplift per winner; that’s the expected impact of your next experiment. We multiply that by 5. Then we add the business goals score, the MDE score, and the ease score, and that’s the prioritization score of your test idea, with the evidence-based part, the win rate times the average uplift per winner, having the highest impact on that score.
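Putting the pieces together, the formula sketched here could be expressed as below. The weight of 5 on the evidence-based part comes from the example; expressing the expected impact in percentage points so it dominates the score, and the 0–10 scales for business-goal alignment, MDE, and ease, are assumptions for illustration, so tune them to your own model.

```python
def prioritization_score(win_rate: float, avg_uplift_per_winner: float,
                         business_goal: float, mde_score: float, ease: float,
                         evidence_weight: float = 5.0) -> float:
    """Evidence-based prioritization score for a test idea.

    The evidence-based part (win rate x average uplift per winner, expressed in
    percentage points) is weighted so it has the highest impact on the score;
    the remaining business attributes are simply added on top.
    """
    expected_impact_pct = win_rate * avg_uplift_per_winner * 100  # 0.70 * 0.08 -> 5.6
    return evidence_weight * expected_impact_pct + business_goal + mde_score + ease

# Homepage x social proof test idea: 70% win rate, 8% average uplift per winner,
# plus assumed 0-10 scores for business-goal alignment, MDE, and ease.
print(prioritization_score(0.70, 0.08, business_goal=8, mde_score=6, ease=4))  # 46.0
```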

So, evidence-based prioritization. Again, use a documentation tool you like. It is reasonably easy to set this up in Airtable, where you can largely automate it. This makes it even quicker to get your prioritization score than with the more traditional models which we saw in the poll, like ICE and PXL. Once you’ve set it up correctly in Airtable, it will be automated, and it’s quite easy and fast to get your prioritization score.

If you want to do so, I have a video and article on LinkedIn on this topic. If you want to have the video, feel free to connect with me there, and I’ll happily share the link with you. 

So let’s recap. Five steps to truly get to know your digital users. The first one was to create the behavioral hypotheses based on your research and use several research sources.

2nd, for every experiment, document the page, behavioral hypothesis, and optimization strategy. 3rd, set up the meta analysis in the documentation tool. 4th, set up evidence-based prioritization, as we just covered. And now you may wonder: what is step 5? What is step 5?

Well, that is, of course, to celebrate because with your insights and with your learnings, many departments can benefit from this. Many departments can benefit from your meta analysis. You can share these learnings with online marketing, offline marketing, product innovation, and even higher management can benefit from your meta analysis. Because you know a lot about your customers. You have many more learnings.

And because of that, many more experimentation winners, many more A/B test winners. So step 5, don’t forget to party. So that’s it. Five steps. It might seem like a bit of extra work in the beginning, but believe me, you will learn much more and become much more successful once you set this up, I’ve seen tremendous results with my clients applying this to their process.

So thank you for listening. I share a lot of information about this and about experimentation on LinkedIn, so feel free to connect there. And if you’re looking for affordable online courses, including the free Airtable course, check out my Conversion Ideas website. Of course, if you need an agency like Online Dialogue, feel free to contact me as well. For now, thank you very much for listening, and I’ll answer any questions there are right now.

And if you have any questions later, feel free to ask later. Thank you.

 

D:

Ruben, we have got a few questions.

 

RB:

Perfect. Nice.

 

D:

Linda, I’ll just unmute you. You can go ahead with your question and ask Ruben directly. Linda, you can go ahead.

 

Linda:

Great. Yeah, I just wanted to ask if there were some free tools for someone who would like to start, that Air-something. Sorry. I forgot the name. Yeah.

 

RB:

Airtable. Yes. Airtable has a free version, and everything we covered in the presentation you can do in the free version of Airtable. So it’s very easy, actually.

The course is free. Airtable is free. So, yeah. There is a paid option, but you don’t need it for this, as I showed in the presentation.

 

Linda:

Okay. Thank you.

 

RB:

Very welcome. Any other questions?

 

D:

I request all the attendees: if you have any questions, do reach out to Ruben right now or, as he mentioned, later. You can also see his socials on the screen.

 

RB:

Yeah. Or email me whenever you have a question; feel free to contact me. I’ll be very happy to help you out. Yeah.

 

D:

It seems there are no more questions, Ruben. Thank you for this presentation. It was very well put, and I hope it was of value to our users. All of us enjoyed listening to you. Thank you so much, Ruben. I hope you have a great day.

I hope you all have a great day going forward as well. Thank you so much.

 

RB:

Likewise. Thank you very much. Have a great day.
