Webinar

Beyond Statistical Significance: Determining Impact Of Experimentation On Customer Lifetime Value (LTV)

Duration - 40 minutes
Speakers
Ruben Ugarte

Data and Decision Strategist

Bhavya Sahni

Ex-SaaS Revenue Professional

Key Takeaways

  • Analyzing user behavior and how it predicts certain outcomes, such as making a purchase, can provide valuable insights for business growth. This can be done by running reports on organized and readable data.
  • Optimizing certain events or behaviors can be predictive towards actions that businesses care about, such as increasing sales or customer engagement.
  • The use of tools like Compass can help in predicting outcomes based on user behavior. This can be particularly useful in understanding what actions lead to a purchase event.
  • Feedback from the audience is crucial for improving future webinars and addressing relevant topics. Participants are encouraged to share their thoughts and suggestions for future discussions.
  • Reach out for further questions or clarifications via email or through the speaker's website. Open communication channels are important for continuous learning and improvement.

Summary of the session

The webinar, led by seasoned data analytics expert Ruben Ugarte, delves into the complexities of data tracking, analytics, and the role of branding in product development. Ruben, with his vast experience in advising companies, emphasizes the importance of starting with questions rather than tools, focusing on established, production-ready tools, and using an assumptive scoring system to make tool selection more efficient.

Ruben concludes by inviting further questions via email or through his website. The webinar is beneficial for both VWO customers and non-customers, providing a comprehensive overview of experimentation.

Webinar Video

Webinar Deck

Top questions asked by the audience

  • How do you measure impact on LTV when your LTV is, quote-unquote, bad?

    That might be a case where LTV may not be the right metric or the right KPI to look at. You might need to look at something else, in the meantime. Right. So, a quick KPI example here for a SaaS growth team. Right? And I specify here because the product team will be different. So for a growth team, the North Star might be something like new paid subscribers. Then some of the KPIs supporting this or helping us understand this North Star metric are things like CAC, cost of acquisition, lifetime value, and sign-up rate. How well do users sign up from, let's say, the landing page to the actual product? And then we can have some supporting metrics. Maybe we want to break it down by market channel or marketing campaigns, maybe do a demographic breakdown, male versus female, age, any other demographic data that we can collect about our users. And then we, for example, had a client that had a very similar structure here, and their onboarding turned out to be one of the biggest gaps in their product. So the campaigns were driving users. The cost per acquisition was solid. It all seemed to work until they got to the onboarding. And their onboarding was really quite weak. So they would lose a lot of users in the onboarding, and users, of course, have to go through onboarding to become a paid subscriber. So you may have things like that where that helps you understand the entire picture.
  • What if the page URL is undefined, for example in the checkout, where the shipping page is a part of the overall checkout with a string abc.com/checkout?xyz=123, which is different every time? Would we use a regex to identify this page?

    - by Vlad Kovinsky
    I would say, yeah, you can use something like a regex to filter down to a specific page, and be able to create an event, whether you do it in code or within your analytics tool itself. So you create an event for each page of the checkout when the string changes. Let me know if that helps.
  • Do you include URL in your event data to report on data by page?

    - by Josh Fowler
    Yes. We do. Yeah. It's quite helpful. And for those who run single-page apps, which is very common nowadays, we always make sure that we're firing an event every time the page changes, even if it doesn't reload. So if just, let's say, the hash changes or the string changes, we still treat it like a page view and capture that URL.
  • How do you calculate LTV of an A/B test after the test is finished and implemented? I see the test increased revenue by 2% over 1 month. Do you simply extrapolate 2% month over month for the rest of the period?

    - by Christian Torgerson
    Yeah. So it's very much along those lines. Right? What can be reasonable for an impact? Right? If you run a test and your short-term metrics, say, your conversion rate, increased by 2%, you could extrapolate that going forward. What I think would be helpful is to measure it over the long term as well. Right? So looking at something like that cohort analysis report that we saw, and measuring the revenue of those users who viewed variation 1, who maybe converted higher, and seeing what their actual revenue is 12 months from now. That will tell you whether maybe the revenue was higher in the 1st month and then came back to the average over time. Or maybe it just kept going higher and higher. Right? So I think that will give you a sense of whether the impact was something short term or truly a long term impact.
  • Is it healthy to set GMS as your NSM?

    - by Yanice
    Yeah. It can be. So I would say it depends. We were just having a discussion, right, about the why behind the metric and what we want to accomplish with it. Be very careful about what you choose as your metric. Most companies are actually very successful at optimizing whatever the North Star metric is, whether it's experiments being run or GMS. So if you look out 12 months from now and you say, hey, you know, we improved GMS by 10%, 15%. Is that what we really want? Right? Is that gonna drive the other business outcomes that we care about, that you care about, that your team cares about? Right? And if you feel confident in that answer, then that can become a good North Star metric. But if you see some gaps in that assumption, you know, the idea that maybe if customers are running more experiments, they might still be unhappy or they might still cancel, they might still not be paying customers, then that could be something that might be worth double-checking.
  • In your LTV, did you only measure average revenue that occurred after this was broken down by ARPU?

    - by George
    That's probably the typical way you can measure it. I also find it helpful to look at the outliers on both ends. So maybe people who are really high on that LTV value. So who are they, right? Were they impacted by the test? Right? Maybe you find, you know, that's the famous phrase that, you know, averages lie, right? An average can lie and hide a lot of things. You might find that the average went up, but maybe it's mostly low-paying customers and a handful of really high-paying customers, right? So the average is helpful, but then also seeing the breakdown underneath that, right? Are the customers who are in that higher average the kind of customers you want over the long term? Right? An example here is you might run an A/B test that uses discounts, right? And you're discounting quite heavily and your average goes up. Just the sheer volume of people increases the average. But maybe the underlying composition of that average is not very suitable for the long term.
  • How can A/B test outcomes indicate causation for a customer upgrading to a higher plan? This is less about LTV and more about expansion.

    Yeah. So, really a very popular question, especially with my clients, is understanding the behaviors a user must do before becoming a paying customer. It's kinda like the holy grail for a lot of companies. It can be very impactful. There's no straight answer, but you can combine a handful of reports to do that. A report like this one, for example, is kinda what you can imagine. So you take your different behaviors, maybe not just sign-up, but them sending a message, right, or some other key product action, and then your outcome is them doing a purchase event or becoming a paying subscriber. You also have tools like, let's see, Compass. Compass is the same idea, right? You take users, you take some events, again, some behavior, and then you wanna see how well this will predict some kind of outcome, right, whether it's them becoming, let's say, the same thing we had before. Right? Some event and how that will predict them doing a purchase event, right, and you get a gauge of how predictive that is. So the same thing. Once you have your data in order and it's organized and it can be read, you can run a lot of these reports on your data and then be able to run some models on whether, if you optimize this or that event or behavior, that's gonna be predictive towards the outcomes you care about.

Reading Recommendations

  • Hooked: How to Build Habit-Forming Products

    by Nir Eyal

    Hooked by Nir Eyal delves into the psychology behind successful products and services, exploring how companies create habit-forming experiences. The book provides insights into how to create products that captivate consumers' attention and keep them coming back for more. Using the "Hook Model," Eyal outlines a four-step process—Trigger, Action, Variable Reward, and Investment—that drives consumer behavior and engagement. This insightful guide blends behavioral science and practical business strategies to help designers, marketers, and entrepreneurs build products that captivate users and create lasting habits.

  • Stillness is the Key

    by Ryan Holiday

    The book explores the importance of cultivating inner peace in a chaotic world. Drawing on timeless wisdom from Stoic, Buddhist, and Confucian philosophies, Holiday emphasizes the value of slowing down, embracing solitude, and practicing mindfulness. Through engaging stories and practical advice, the book guides readers toward achieving mental clarity, focus, and resilience, ultimately leading to a more fulfilling and balanced life.

Transcription

Disclaimer- Please be aware that the content below is computer-generated, so kindly disregard any potential errors or shortcomings.

Bhavya from VWO: Thank you so much for coming in here. Today’s topic is a very controversial subject. There have been a lot of debates around what we’re going to talk about today. Yet, in all those conversations that we have seen and heard to date about how to attribute revenue uplift to experimentation, we still haven’t found something which is really, really convincing. Whilst I was deliberating about this topic because of a lot of ongoing conversations with a lot of our key prospects and the partner ecosystem that we have, in terms of digital marketing agencies and solution partners, they’ve all come back to us and said, ‘Okay, VWO, we’ve run, say, 200 tests in a year, 40 tests won, we pushed all the final variations to production, but we still don’t know how much revenue these variations contributed over a long period of time.’ I’ve been reading about the gentleman who is seated with us today, Ruben. I’ve been reading his newsletter for quite a while. He writes a very, very thought-provoking newsletter, which goes by the name of Weekly Growth Needle, and he used to speak about all these things. He used to write very frequently about attribution challenges and experimentation, and just one fine day, I kind of saw myself emailing him and said, ‘Hey, Ruben, do you want to do this with us?’

Because we have been seeing a lot of people having a lot of comprehensive questions which have no solid answers to them. Ruben was kind and gracious enough to spend some time and not just tell us how the attribution model should be drafted, but actually show us a couple of viable examples. With this, I will hand over the baton to Ruben.

Ruben, why don’t you introduce yourself to the audience and kind of just take it from here?

 

Ruben Ugarte:

Yeah. Thanks. Thanks, Bhavya. Hey, hey, everyone. So as Bhavya mentioned, my name is Ruben, and I’m the Principal of Practico Analytics.

And to give you a little sense of my background, I’ve been working in the marketing space for about 15 years now, starting when I was 15 or 16, and I taught myself how to code, how to build websites. I was always very fascinated by this idea of selling stuff online, even 15, 20 years ago. I went on to run my own company. And over the past 5 years or so, I really specialize in data and analytics. So I work mostly with technology companies.

Typically, they have a web app or a mobile app, and we’re solving a lot of the problems that I think a lot of you are experiencing: attribution, right, understanding the impact of marketing dollars, trying to figure out what users are actually doing in your product, improving long-term retention, trying to measure the impact of A/B tests, and overall growth. So, you know, I’ll come in, help companies set up their data infrastructure or data foundation, and then use that to drive insights and answer questions that the company cares about. So in today’s webinar, we’re really looking at A/B testing, trying to understand the impact of a single A/B test, or maybe an entire program of A/B tests, on your entire product. And what I first want to start with, to set the expectation, is this idea that if you’re running A/B tests, if you have an established program or you have a few A/B tests at the moment, in a sense, you’re really winning. Right?

I was having this conversation with a hockey coach a couple of weeks ago, and he trained elite, young hockey players. We’re talking about the difference between what it takes to train, maybe a beginner hockey team versus an elite hockey team, and it’s just night and day. Right? There are different drills. You could focus on different things. You don’t have to worry about things like, are you gonna show up on time?

So it’s a different world. And this is really what we’re talking about here. We could talk about basic startup A/B tests, but we’re really saying: you’ve got tests. You’re running them. You have winners.

Let’s take it to the next level, right, let’s improve all of that and build on those successes. That’s good.

 

Bhavya:

So, Ruben, you work with a lot of clients. You know, you don’t just advise; you also do both strategy and execution. Right? I mean, there are a couple of stories. I mean, in your own observation, how many companies have a very established CRO program where there’s a lot of maturity?

There are many, multiple tests, both server-side as well as client-side. Not just pre-login pages are being tested, but in-app activity is being diced and sliced at multiple levels. What is your observation and where is it headed? I mean, if you can just throw some light, maybe that would kind of whet the appetite of everybody who is here.

 

Ruben:

Yeah. Yeah. So if I had to put a number, I think probably less than half of the companies have an established CRO program, or what they would call one. Most of them, and a lot of the clients that I work with, typically all function on the same sort of model. They have a product.

They have all the expertise they need in-house, for the most part. But everything is sort of driven by anecdotes. You know, we heard that a user wanted this. We think we should do this. They could do it.

They build a product. They build a marketing campaign. They find a way to measure it, whether that’s at a very rudimentary level or very advanced. And then they think, ah, it kinda worked. Let’s go from there.

And for a lot of companies, it’s typically a lot of volume. So you do enough things over the long term and some things work out. However, I am seeing more and more companies build upon this, get more cases, trying to build a stronger methodology in how they run tests, how they build the product, how they run campaigns, and make data typically a central focus of all their initiatives. But it takes time. And, you know, it takes time because companies typically have to hire roles that they don’t have right now: data analysts, data scientists, data engineers, marketers that are more technical in nature, that can sort of build most of the actual tests on their own using decent tools like VWO and Google Tag Manager and so on. So it’s an overall shift. And, you know, right now, most of it’s in the US and Canada, and a lot of the world, we’re in the zero unemployment economy.

So companies are struggling to find the right people to put in the right spots.

 

Bhavya:

Is that one of the biggest challenges that you see?

Ruben:

Yes. Right now, yeah. There’s money flowing into a lot of companies, especially, you know, venture-backed companies and companies who are growing and doing well, but finding the right people is tough. You know, I had a client talk to me about people like me in my role, being, myself, quite technical, like an engineer, but really spending most of my time in marketing. She was calling people like that unicorns, right, people who understand marketing and are technical at the same time.

And they wanted to hire more people like that. There just weren’t enough. So they were either trying to find engineers who maybe would like to learn marketing or find marketers who would like to learn coding, right? And we’re not talking about making them proficient in both fields, but having a basic understanding of both is crucial.

 

Bhavya:

Sure. I mean, yeah, that’s something that is globally recognized and everybody’s cognizant of that problem. Yeah, that really helps. Yeah.

 

Ruben:

Perfect. Also, for everyone on the webinar, if you have any questions, post them along the way. We’ll have a Q&A section at the end. But I’m hoping we can dive into a lot of the things, and it may be more relevant to tackle them as we go along.

 

Bhavya:

Yeah.

 

Ruben:

So we come back here. You know, we can see A/B tests as a snapshot in time, right? Imagine every dot here in this chart as an A/B test. And it’s great, you know, tools like VWO. We can see a test.

We can see the conversion rate and how, let’s say, a variation compared against the control. But we wanna be able to take this over the long term and say, hey, is this actually impacting the bottom line, right? So, be able to take all the snapshots and connect them going forward. Something that I tend to tell clients is, you know, when you have insights (imagine an insight being that you realize you have to rebuild your onboarding flow down to only three steps, or change the copy on the landing page), once you have the insight, you have to put it into some kind of vehicle, some kind of way of implementing that insight.

And there are many vehicles, right, A/B tests. It’s one of those, which is what we’re talking about here, but you can make changes to the copy. You can make changes to the product. You can make changes to marketing campaigns. So trying to find the right vehicle for an insight is also crucial.

 

Bhavya:

Sure.

 

Ruben:

So we’ll cover the entire process here. You know, it’s a matter of doing the right things consistently, and then we can sort of get to the end result, where we’re looking at A/B test data beyond any given test. And we’re really looking at three things. We wanna understand the right KPIs, talk a little bit about North Star metrics, look at data implementations as an overview, and then finally connect A/B testing data with analytics tools. You know, we can’t just jump into the third point because there are a lot of prerequisites and pre-work that needs to happen.

So we’ll spend a lot of time on each section, and then finish up and bring the whole story together.

 

Bhavya:

Yeah. I mean, Ruben, what’s important for both of us to understand today is that we have both VWO customers and people who might only have spoken to VWO at some point in time. So, whilst we are showing them the actual demo, let’s keep in mind that they might not all be VWO customers. So we have to kind of show them the minor nuances of the product there.

 

Ruben:

Perfect. So let’s jump into KPIs and metrics. So, North Star metrics, a very popular topic over the past couple of years. It is used at a lot of smaller companies or companies who are growing quite quickly. And the basic idea is choosing one metric that tends to really be the highest priority at that given time and then organizing your entire company or multiple teams around that metric.

Very effective, very simple idea: there’s a famous story from Facebook where their North Star metric in the early years was user growth. And every single initiative or campaign or idea was viewed through that lens. Is this gonna help us grow the user base? Yes or no? And they rejected a lot of ideas based on that.

Right? Any ideas around monetization, for example, would be rejected.

 

Bhavya:

So, it was detrimental to user growth, right?

 

Ruben:

Exactly. Yeah. Yeah. So it’s a very elegant idea. It does have pros and cons.

Again, it works well for small teams and, I think, for companies who are very disorganized, where a North Star metric can be a focus for them. But as a company gets larger and larger, it’s also a little too simplistic, right? It’s hard to boil down a business to just one metric. So what you see in larger companies is maybe a change from having a North Star metric for the entire company to having a North Star metric for your team, your growth team, or maybe you’re in charge of a part of the product and you have one metric for your product.

So it’s an evolution of how this can be used.

 

Bhavya:

Sure.

 

Ruben:

When looking at KPIs, some general guidelines here can be helpful. You know, there’s a difference also to be made here between KPIs and metrics. KPIs, from my perspective, are the critical numbers that matter to your team. There’s usually only a handful of them, 3 or 5, and metrics are things that can support that. They can just expand on that, and I have an example here coming up.

I know when planning this webinar, we were talking quite a bit about growth. And every company uses growth in a slightly different way. For example, for most of my clients, growth usually means user growth: acquiring more users and building the product. Revenue, interestingly enough, is not as relevant to them. Eventually, it is, but not initially, and this makes sense, you know, they are venture-backed. They are thinking about raising the next funding round, which is usually driven by growth, user growth, less so by revenue, right, kind of a very typical technology company model.

 

Bhavya:

Might not be the case anymore, Ruben.

 

Ruben:

Yes.

 

Bhavya:

And what you’re seeing happening now with the work and everything, it might not be the case anymore.

 

Ruben:

Exactly. Yeah. So there’s those, you know, edge cases.

 

Bhavya:

Yeah.

 

Ruben:

But for the webinar today, growth means revenue, right? So we’re talking about LTV. So we’re very focused on revenue. And, you know, there’s an interesting story here about vanity metrics.

I remember a few years ago, I was working with a tourism agency, and they, you know, used to spend dollars on campaigns. Okay. Marketing campaigns. And they showed me the reports because they wanted to get my feedback on them. And the reports were really mostly page views and bounce rate and page sessions. And I remember at that time thinking, wow, this is, like, just vanity metrics, right?

They’re trying to optimize the bounce rates. They’re trying to optimize how many pages per session someone views. And in hindsight, it wasn’t actually because their focus was really on brand, on getting enough eyeballs on a given page or a given campaign. So page views made sense for them, right? And that’s the other thing about KPIs: finding the KPIs that make sense for your team or your company at any given time is slightly different, right? Maybe you’re at the stage where LTV matters, maybe you’re not.

I know there was a question before the webinar about how to measure impact on LTV when your LTV is, quote-unquote, bad.

 

Bhavya:

Yeah.

 

Ruben:

That might be a case where LTV may not be the right metric or the right KPI to look at. You might need to look at something else, in the meantime. Right. So, a quick KPI example here for a SaaS growth team. Right?

And I specify here because the product team will be different. So for a growth team, the North Star might be something like new paid subscribers. Then some of the KPIs supporting this or helping us understand this North Star metric are things like CAC, cost of acquisition, lifetime value, and sign-up rate. How well do users sign up from, let’s say, the landing page to the actual product? And then we can have some supporting metrics. Maybe we want to break it down by market channel or marketing campaigns, maybe do a demographic breakdown, male versus female, age, any other demographic data that we can collect about our users.

And then we, for example, had a client that had a very similar structure here, and their onboarding turned out to be one of the biggest gaps in their product. So the campaigns were driving users. The cost per acquisition was solid. It all seemed to work until they got to the onboarding. And their onboarding was really quite weak.

So they would lose a lot of users in the onboarding, and users, of course, have to go through onboarding to become a paid subscriber. So you may have things like that where that helps you understand the entire picture.
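
To make the hierarchy Ruben just described concrete, here is a minimal sketch of the SaaS growth-team example as a plain JavaScript object; it simply restates the metrics named above and is not a prescribed schema.

```javascript
// A sketch of the KPI hierarchy described above: one North Star metric,
// a handful of KPIs that explain it, and the breakdowns used to dig deeper.
const growthTeamMetrics = {
  northStar: 'New paid subscribers',
  kpis: [
    'CAC (cost of acquisition)',
    'Lifetime value (LTV)',
    'Sign-up rate (landing page to product)',
  ],
  supportingBreakdowns: [
    'Marketing channel / campaign',
    'Demographics (gender, age, and other collected attributes)',
  ],
};

console.log(growthTeamMetrics.northStar); // "New paid subscribers"
```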

 

Bhavya:

So, why was that happening, Ruben?

 

Ruben:

In their case, the onboarding they designed initially, which was really designed based on the feel and anecdote on what they thought their users would want, just wasn’t very effective.

 

Bhavya:

And how did they figure this out? How did they discover it?

 

Ruben:

So we had a funnel report built in Amplitude, and we could see the drop-off rate from new users who just create an account to, let’s say, users who go through the onboarding or become a paying subscriber.

 

Bhavya:

Alright. So, funnel analysis is how we were able to get it. Okay. Yeah.

 

Ruben:

A tool that can help you go through this process, the KPI process, is something called a measurement plan. There are different examples online. Here’s an example we have used in the past. And effectively, it’s a Google document or a Word document, and we’re just organizing things into categories.

Right? So we may start with questions we want to answer, right? Which marketing channels are driving the best users? KPIs, tied to business objectives, some segmentation, right, can also be seen as metrics. Then other metrics that can support it and targets. It can be helpful sometimes to have very concrete targets.

This document is primarily built from the business side. Right? So these are business stakeholders. Product marketing teams are typically building this document.
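
For illustration only, one row of a measurement plan like the one Ruben describes might be captured as follows; the specific question, KPIs, segmentation, and target are hypothetical placeholders, not taken from the webinar's actual document.

```javascript
// A sketch of a measurement-plan row: the question first, then the KPIs,
// segmentation, supporting metrics, and target that help answer it.
const measurementPlan = [
  {
    question: 'Which marketing channels are driving the best users?',
    kpis: ['New paid subscribers', 'CAC'],            // tied to business objectives
    segmentation: ['Marketing channel', 'Campaign'],
    supportingMetrics: ['Sign-up rate', 'Onboarding completion rate'],
    target: 'CAC below a set dollar amount per paid subscriber', // placeholder
  },
];

console.log(JSON.stringify(measurementPlan, null, 2));
```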

 

Bhavya:

Of course.

 

Ruben:

And as I mentioned, it really depends on what you focus on. I was having a conversation the other day with a company that has very clear LTV numbers. They sell to corporate clients, and we were going through all the numbers and diving deep into every conversion rate, LTV, and churn. And he stopped me. He said, you know, Ruben, I appreciate that.

It matters to us, you know, I have P&L responsibility—Profit and Loss responsibility—but we’re really focused on the brand. So let’s figure out how we can quantify the impact of our brand or the strength of our brand and then build around that. Right? So we can’t assume that the company is always focused on the same things.

 

Bhavya:

So how did you end up doing it? I’m sorry. I mean, this has always been an area of interest, and I’m sure it is an area of interest for everybody. Because, I mean, even we here at VWO find it challenging every day to work out the brand quotient of all the campaigns that we are running.

How much is branding amplified? Of course, direct traffic is one of these metrics, but I mean, is there anything beyond that, in your experience?

 

Ruben:

It’s tough. Once you move away from concrete numbers like LTV, churn, you know, subscription rate, all that kind of stuff, it’s tough. And it really is in the eye of the beholder. Right? So every director or executive views it slightly differently.

In their case, brand to him meant, for example, customer satisfaction. Right? Are our customers happy? Are they reporting problems on those check-in calls? Things like PR mentions.

 

Bhavya:

Sure.

 

Ruben:

And how is the brand being mentioned? What’s the overall perception of the brand for new prospects when they go through a sales call? Right? So they’re qualitative metrics. We can try to quantify them in a simplified way, but they’re still qualitative so it’s limited to how we can interpret it.

 

Bhavya:

Of course. Of course. I mean, that is a webinar topic for me. That’s an entire topic for a separate webinar, maybe we’ll do one of these. Carry on, please.

 

Ruben:

Exactly. So we got KPIs. You got metrics. Let’s now focus on how we’re gonna implement data. This is a famous graph here.

There are lots of options, lots of tools that you can use. On the positive side, it does mean you have lots of choice. So today, I’ll show you maybe about 4 or 5 of the most popular ones. But, really, there are just lots of ways that you can slice this.

 

Bhavya:

As I was telling you the other day, if only we had all got a dollar for every single time we were exposed to this graphic, we’d all be millionaires by now. I have seen this now 10,000 times. I see it twice or thrice a day. It doesn’t make for a rosy picture, but, yeah, I mean, that’s what it is. That’s the reality now.

Exactly how do you navigate through all this when you have so many options? I’m sure you are also on advisory panels for a lot of companies who are trying to figure out their martech stack. I mean, is there something you can share, in very short, a formula or a blueprint of sorts, as to how to navigate this and pick out the best tools for you?

 

Ruben:

Yeah. Three things, actually. First is this idea of starting with questions and not tools. So saying we need Amplitude or we need Google Analytics is very tool-focused instead of saying we want to solve marketing attribution or we want to understand product performance, right?

So figure out the questions first. Number one. Two, really, for a lot of my clients, when we’re looking at a category of tools, let’s say marketing attribution, we focus only on the tools that we consider to be production-ready and more established. The thing is, I get pitches all the time from younger startups who are just starting out with a tool, and it’s a great place to be, but I can’t really recommend them to my clients. We already have so many things to worry about; we don’t want to worry about downtime or something that’s not going to load. So really, in any given category, usually just 3 or 5 options tend to dominate that category.

So the eighty-twenty rule in action.

 

Bhavya:

Yeah. That’s true.

 

Ruben:

So now we go from a whole category down to just 3 or 5 tools, and then, on the remaining tools, we use this simple format we call Assumptive Scoring. And we rank things based on what we care about. It’s a bit quantitative. And then we are able to come up with a choice. You know, a lot of companies spend enormous amounts of time researching tools.

But this process can help simplify it, and we can make decisions quite quickly.

 

Bhavya:

Of course.

 

Ruben:

Perfect. So, tools. There was a question before the webinar about attribution. I’ll actually ask you to maybe specify a little bit what kind of attribution you mean; web attribution and mobile attribution are slightly different. But when it comes to implementing data, what we want here is to actually track everything else around the product, right? We know the A/B test is one small part, a snapshot.

We now want to be able to show the remainder of the product. And that means tracking the activity that happens on the website, like the marketing website, when they sign up, when they go through the onboarding, when they actually use the product. And to do all this, we build a tracking plan. So let me actually leave my full screen if that works. Perfect. And I got you a real-life tracking plan here from a client. They’re a mobile app.

A consumer app for learning languages.

 

Bhavya:

So can you make this full screen, please, so that everybody can check it out?

 

Ruben:

Yeah. Let me go full screen. Okay. Yeah.

So this is a plan for a mobile app. And in their case, actually, the tracking is across 2 mobile apps, iOS and Android. They have a web component, and they combine it all into a single plan. So really what we’re doing here in this tracking plan is we’re trying to understand all the different actions a user can take in a product. For example, you know, they can buy a subscription. Here, we have some of the mobile app events, you know, when they log in, some of the ongoing events. And if you take any event, for example, here’s an event where they’re creating carts, very specific to the product.

We can define an event name, when this fires, and then properties and so on, and how we use the properties. And what we’re saying here is: this is all the data we want to track. We can then figure out what tool we want to send it to.

That may be Amplitude or Mixpanel or Google Analytics, or maybe just a data warehouse. Right? And, you know, we send it there. So tracking plan, there’s resources online for how to build this. But this is what… Let me exit here.

Go back to my presentation.
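
As a rough illustration, one row of a tracking plan like the one shown could be written out like this; the event name, trigger, properties, and destinations are invented for the sketch, not taken from the client's actual plan.

```javascript
// A sketch of a single tracking-plan entry: what the event is called,
// when it fires, which properties ride along, and where it gets sent.
const trackingPlanEntry = {
  event: 'Subscription Purchased',                  // hypothetical event name
  firesWhen: 'User completes checkout for any plan',
  platforms: ['iOS', 'Android', 'Web'],
  properties: {
    plan: 'string, e.g. "monthly" or "annual"',
    price: 'number, in the purchase currency',
    currency: 'string, ISO 4217 code',
  },
  destinations: ['Amplitude', 'Data warehouse'],
};

console.log(trackingPlanEntry.event);
```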

 

Bhavya:

That’s what a real-life tracking plan looks like.

 

Ruben:

Exactly. Yeah. So you’ll see in a moment that some of the data needs to come from this process, right? And effectively, we want to combine our A/B testing data with the rest of our product data.

So we can start to get a better sense of the long-term impact of any given test.

 

Bhavya:

Sure. And I’m assuming we will get to that in subsequent portions of the webinar, right?

 

Ruben:

Exactly. Yeah. So we got KPIs. We got our metrics.

We have to find some of the other data that we care about, maybe make choices on tools for marketing attribution or product behavior. And now we can continue on to the third step and be able to connect A/B tests and data.

 

Bhavya:

Before you do that, Ruben, I have a quick poll question that I need to show to all our folks here. So, guys, what we’re going to do is that we want to take a small pulse of how our audience thinks about their analytics plan right now. And how they rate them on a scale of 1 to 5. Rate your tracking analytics setup. Just take a moment, please, so that we all understand the collective maturity of your individual programs.

And, once you have taken that moment out, we will go back and we’ll continue with the rest of the webinar. Just take a moment, please.

And done. Thank you.

 

Ruben:

Okay. Let me go back here. Perfect. So let’s now look at the process for taking A/B testing data and sharing it, or combining it with the rest of your data. The overview of the process here, and we’ll look at real-life examples, technical examples, but the process that we really care about here is to be able to fire an event, just like we saw in the tracking plan before, whenever a user views an experiment. And this can happen on the web, it could happen on mobile, it could happen on the server side as a back-end experiment.

It doesn’t really matter. We’re gonna take that data now and combine it with the rest of our data, and we’re gonna connect it using some kind of identifier. So maybe an email or a user ID or something else. And then we can build reports with the A/B testing data.
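
Here is a minimal sketch of that pattern, assuming a Segment-style analytics.identify/analytics.track API is loaded on the page; the function, event name, and property names are illustrative, not a required schema.

```javascript
// Fire one event per experiment exposure and tie it to a stable identifier,
// so the A/B testing data can later be joined with the rest of the product data.
function onExperimentViewed(userId, experimentId, variationId) {
  // Associate this browser's event stream with a known user (email, user ID, ...).
  analytics.identify(userId);

  // Record the exposure itself; downstream tools can group on these properties.
  analytics.track('View A/B Test', {
    experiment_id: experimentId,
    variation_id: variationId,
    url: window.location.href,
  });
}

// Example usage once the testing tool reports which variation was served:
// onExperimentViewed('user_123', 'homepage-headline', 'variation-1');
```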

 

Bhavya:

Sure.

 

Ruben:

So let me show a basic demo here that I built for this webinar. So let’s start in VWO. Right? We’re running a test here, a web test in this case. And I was just changing the headline of my homepage.

Very simple.

 

Bhavya:

So you’re having this experiment on your home page. Yes. Just to clarify for everybody, you are running this experiment on your home page and that is for the purpose of this test.

 

Ruben:

Exactly. Yeah.

 

Bhavya:

Alright.

 

Ruben:

So you define your test, again, in any way, whether as a web test, mobile test, or server test. And then in something like…

 

Bhavya:

And what are you testing here, Ruben?

 

Ruben:

Sorry?

 

Bhavya:

What are you testing here? What is the exact one being tested? What is the exact element being tested or what is the nature of this experiment?

 

Ruben:

It’s the H1 on the home page. So this headline is actually on the home page.

 

Bhavya:

Okay. Yep.

 

Ruben:

So we’re running the test with one variation only. So when we build a test, we have the ability to take this data out and send it to other places. So this is under Settings and others. Right? The other tabs of a test typically tell you how the test is performing within VWO: reports, heat maps, click maps, all that.

And there’s a section like this one where we can integrate with 3rd-party products. So there are a couple of things out of the box that we can use. We have Google Analytics, the 2 versions of Google Analytics, the classic and the universal, Google Tag Manager, and Clicktale. And there are a few other ones here, you know, we can see Kissmetrics, WordPress, and a few other options. Yeah. For this demo, I’m actually gonna use Google Tag Manager.

So what I’m doing here is telling VWO to send experiment data to Google Tag Manager. And from there, I’ll do something with it. Right?

 

Bhavya:

Sure.

 

Ruben:

So if we’re here on the home page and I refresh this, you’ll see Google Tag manager here is running at the bottom.

 

Bhavya:

Mhmm.

 

Ruben:

So when we come to the home page, the A/B test runs, whether it’s the control or the variation. Right?

 

Bhavya:

Mhmm.

 

Ruben:

And let’s just see an event here. This is an event that VWO is firing into the data layer.

 

Bhavya:

Mhmm.

 

Ruben:

And it’s giving us some basic information. One, it’s giving us a campaign name right here. Right? It’s just ‘campaign 1’ here, but it’s whatever you name your campaign. And it’s telling us whether it’s the control or variation 1, 2, 3, 4, whatever it is.

 

Bhavya:

Sure.

Ruben:

Based on this, we can now actually fire an event from Google Tag Manager somewhere else. Right? So if we come here to Google Tag Manager, I have a small JavaScript event here that’s going to Segment. This is Segment.com, and then going to Amplitude. But from here, you can really send that anywhere. You could send it directly to Mixpanel, directly to Amplitude.

You could send it to a data warehouse, or something like Snowplow. Pretty much as long as you have a way to send events, you can do that here. And you’ll see that I’m firing an event. The event is called ‘View A/B test’, and it has 3 properties. One is the experiment ID.

This is the, you know, the VWO campaign ID. In this case, it’s campaign 1. And the second one is variation ID. So it’s just a control. It’s variation 1.

And then just the URL of the page. Right?
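
For reference, the forwarding step Ruben describes could look roughly like the JavaScript below inside a GTM Custom HTML tag, fired by a trigger on the data-layer event VWO pushes. The Data Layer Variable names ({{VWO Campaign}}, {{VWO Variation}}) are assumptions you would map to whatever keys your account actually pushes, and analytics.track assumes a Segment-style snippet is loaded on the page.

```javascript
// Contents of a GTM Custom HTML tag (wrapped in <script> tags in GTM),
// fired by a trigger on the custom event VWO pushes to the data layer.
// '{{VWO Campaign}}' and '{{VWO Variation}}' are GTM Data Layer Variables
// created for the campaign and variation values seen in the demo.
analytics.track('View A/B Test', {
  experiment_id: '{{VWO Campaign}}',   // e.g. "campaign 1"
  variation_id: '{{VWO Variation}}',   // "Control", "Variation 1", ...
  url: document.location.href
});
```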

 

Bhavya:

Sure. 

 

Ruben: 

So this event is fired every single time someone views that homepage experiment, whether it’s the control or the variation.

So now, you know, we’re…

 

Bhavya:

Every time there is a visitor who gets bucketed into one of these two, either control or variation, you send it to Google Tag Manager via the code snippet that you just showed us.

 

Ruben:

Exactly. Yeah.

 

Bhavya:

Yep. Yeah. Okay.

 

Ruben:

So now we can take the data out of VWO and then we can combine it. Right? So I got a demo here in Amplitude. And Amplitude is…

 

Bhavya:

For people who are uninitiated, Amplitude is a platform, a magnificent platform in fact, that helps both marketing acquisition as well as product teams put all the data in one repository and make it a central source of truth for all teams concerned.

 

Ruben:

Exactly. Yeah. Primarily used by software companies, but it’s very flexible. We can do a lot with it. So once we have the data out inside Amplitude, we can build, let’s say, a very simple funnel report.

VWO has a report like this, but we can also do it here. What we’re doing here is we’re taking our A/B test event, then we take some kind of second event, maybe the conversion that we care about. In this case, let’s say email sign-up. So when someone signs up. And in the funnel, we can break it down, right? We can break it down by control, which is blue, and variation 1. This is how many people viewed the A/B test over the period and then the conversion rate for each one.

Yep. Again, VWO has this, but now what you can start to see here is we can combine our A/B testing data with the rest of our data in a more flexible format. Because here we can then slice and dice this in many different ways. We can also look at something like a cohort analysis. Right?

And so now we can start going beyond simple funnel reports. Yeah. And really start looking at the long term impact of an A/B test.

 

Bhavya:

Sure.

 

Ruben:

So a cohort analysis is a report that comes from the medical world. And what we do here is we bucket users based on some kind of starting event. So in this case, we’re saying all the users who view A/B tests, specifically where they view variation 1.

 

Bhavya:

Yeah.

 

Ruben:

And then they return and do that something, right; that something might be a purchase, might be an email sign-up, whatever it is. So down here, we can then see the retention or long-term impact: how many users did the first action, view the A/B test, on a daily basis, and then, you know, looking out to day 3, day 4, day 6, what is the performance? Are they coming back and doing this over and over again? Right? Yeah.

There was actually a question before the webinar about looking at A/B testing data alongside the rest of your data, and this is really one of those reports, right? We can now take a user who views a specific A/B test and then track their performance in a very isolated way over days, weeks, months after that happened. Right?

 

Bhavya:

Yeah. So it just binds the entire story for you. It doesn’t just say, okay, variation 1 won by some percentage-point uplift. We’re actually going beyond the single number and seeing what happened after that.

Right? I mean, that is the point you’re trying to make. Right?

 

Ruben:

Exactly. Yeah. Yeah.

 

Bhavya:

Yeah. Yeah. Full circles to it. Okay. That’s cool.

That’s very cool. Yeah.

 

Ruben:

And, of course, here, you know, we could do multiple tests. Right? Instead of just a specific variation, we could do multiple campaigns here. So we could look at maybe a group of campaigns and how that performs over time.

 

Bhavya:

Sure.

 

Ruben:

And lastly, we can look at something like LTV. Right?

 

Bhavya:

Oh, that’s great. Yeah. I think that’s the promise of our webinar too. Yep.

 

Ruben: 

So recent sample data here, but we have an event here. Let’s say, you know, this is an event that has revenue attached to it. So it could be a purchase. It could be a subscription. Right?

And what we do is we take this event and we view it by revenue. Right? So we’re taking the average revenue in particular. So this is the average revenue for that event.

So then here, we can break it down into two buckets, effectively. We have all our users, which is everyone. So we’re basically getting the average revenue for everyone. And then we can actually take a second bucket for just this A/B test, right? So we’re looking at campaign 1.

So we’re saying all the users who view campaign 1. Right?

And we get to compare their revenue over the last 30 days. Right?

So we see in our sample data, you know, A/B test 1 performs a little bit better. But what we’re trying to capture here is taking our A/B test data, tagging users who view any given A/B test, whether it’s 1, 2, 50, or 100, and then grouping them across the rest of our metrics, right, and specifically long-term metrics. Right, because here, this is the last 30 days, but we could do this over the last year, over the last 6 months, 12 months, and start to get a better gauge: for users who saw a certain A/B test or viewed a certain variation, how do they actually perform across the rest of our metrics? Retention, revenue, LTV, all that kind of stuff.
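
A minimal sketch of the same comparison done by hand, assuming you have exported joined data with one record per user, the experiments they viewed, and their revenue over whatever window you care about; the numbers are made up.

```javascript
// Compare average revenue for all users vs. users exposed to one campaign.
function averageRevenue(users) {
  if (users.length === 0) return 0;
  return users.reduce((sum, u) => sum + u.revenue, 0) / users.length;
}

// Hypothetical joined export: one record per user.
const users = [
  { id: 'u1', experimentsViewed: ['campaign-1'], revenue: 120 },
  { id: 'u2', experimentsViewed: [],             revenue: 40 },
  { id: 'u3', experimentsViewed: ['campaign-1'], revenue: 95 },
];

const exposed = users.filter((u) => u.experimentsViewed.includes('campaign-1'));

console.log('All users:', averageRevenue(users).toFixed(2));        // 85.00
console.log('Saw campaign 1:', averageRevenue(exposed).toFixed(2)); // 107.50
```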

 

Bhavya:

Sure. Again, Ruben, we’ve got a very interesting question on the same lines from before the webinar. A lot of these purchases or revenue events that we’re talking about are purchases for eCommerce, or paying the subscription for a paid plan in terms of SaaS. How much of it is a correlation versus causation problem? Right?

I mean, yes, he or she signed up because of an A/B test, which might have influenced the purchase then, but say 6 months down the line, she expands. Alright? She signs up for a larger plan because she dives into the product, and not necessarily all those parts or features were in that A/B test. Right?

They’re universal. They’re for everybody. And because of that, when she had a fantastic experience with the product, she had a fantastic experience with support, and then she expands and buys a better subscription package. I’m sure there’s a correlation-versus-causation issue here.

Would this be 100% accurate in all environments? Or is there a disclaimer of sorts? Is there a caveat that we should kind of… I’ll have to simplify my question. Is it as simple as the format that we’re seeing right now, or are there a couple of red flags or caveats?

 

Ruben:

No, there’s definitely a red flag. Right? Correlation versus causation is a major issue. I think companies are actually being created right now just to solve this, and it’s a very hard problem to solve. The red flags are what you’d think: some are obvious, some are less obvious.

First, there’s a general common-sense question of what kind of impact a test can have. So let’s take our test here. We’re changing the headline on the home page. Will changing the headline on the homepage be able to affect the long-term revenue of a user? Probably not. Right?

It’s a very minor change. If you’re talking about maybe a whole different campaign, let’s say, Facebook users versus organic users, now we have a better case here, right, where the composition of those users may be completely different. So making sure that the tests you’re trying to measure over the long term are impactful enough is one of them. Right? Otherwise, you might just literally find randomness in a button change and then realize that it really didn’t matter.

Right? The second thing is that a lot of the correlation-versus-causation work that’s taking place today is driven by machine learning, or maybe AI if you prefer that term. But to be able to run models that can dive into this data and give you a probability of an impact on something, the prerequisite to all of that is having your data in place. Right?

So being able to take your data in the way we’re doing it right now and connect it from the A/B test to the rest of our events, that’s a prerequisite. Once you have that, in fact, tools like Amplitude actually have reports around this. So for example, if I go to the demo, I can show you a report. This is, as I mentioned, a big problem, and companies like Amplitude are trying to build easier ways to use machine learning with your data. So they actually have a report, which I believe they call ‘Predict’.

Let me see if I can load it here. ‘Impact’. Actually, they call it ‘Impact’. So it’s roughly the same idea. You know, how does performing an event, which could be the A/B testing event or a specific campaign, impact outcomes like revenue?

And it’ll take your same data and give you a chart of, let’s say, how users who sign up then go on to, let’s say, add an integration. Right. And it gives you an estimate, based on machine learning, actually, of what they think the impact of that event is on that outcome over the long term.
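
Amplitude's report does this with proper models, but the underlying question can be sketched naively once the data is joined: how often does the outcome happen for users who performed the event versus those who did not? A rough illustration with made-up data, not Amplitude's actual method.

```javascript
// Naive "impact" check: conversion rate among users who performed an event
// vs. those who did not. Real tools fit proper models; this only shows why
// having the joined data in place is the prerequisite.
function conversionRate(users, didEvent) {
  const group = users.filter((u) => u.didEvent === didEvent);
  if (group.length === 0) return 0;
  return group.filter((u) => u.converted).length / group.length;
}

const users = [
  { didEvent: true,  converted: true  },
  { didEvent: true,  converted: false },
  { didEvent: false, converted: false },
  { didEvent: false, converted: false },
];

const withEvent = conversionRate(users, true);     // 0.5
const withoutEvent = conversionRate(users, false); // 0.0
console.log('Naive lift:', withEvent - withoutEvent);
```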

 

Bhavya: 

Wow, got it. Got it! 

 

Ruben: 

But to do this…

 

Bhavya:

That actually answers a lot of correlations versus causation problems. I’m sure.

 

Ruben:

Exactly. Yeah. So this is a report that’s available now.

Right? So you can see it took me a minute to set up and run the report. But what takes a long time is to get the data in place and ensure the data is accurate and clean. And that’s really where a lot of my work is, because I tell companies, hey, you know, once we get our data in order and in place, we can run tons of machine learning models on it.

Some are easy like this, some are harder. But the issue companies face is that their data is just not organized.

 

Bhavya:

Of course. Ruben, we have a very interesting question coming in. Just a second. I’m sorry if I’m not pronouncing the name correctly. It’s Vlad Kovinsky.

What if the page URL is undefined, for example in the checkout, where the shipping page is a part of the overall checkout with a string abc.com/checkout?xyz=123, which is different every time? Would we use a regex to identify this page?

 

Ruben:

Yes. It does sound like it, yeah.

Yeah. Yeah. I would say, yeah, you can use something like a regex to filter down to a specific page, and be able to create an event, whether you do it in code or within your analytics tool itself. So you create an event for each page of the checkout when the string changes. Let me know if that helps.
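
As a minimal sketch of that idea, assuming the checkout URLs look like the one in the question (abc.com/checkout?xyz=123 with a query string that changes every time), a regular expression on the path lets every variant be treated as the same checkout page.

```javascript
// Match any URL whose path starts with /checkout, regardless of the
// ever-changing query string, so each visit maps to one "checkout" event.
const checkoutPattern = /\/checkout(\/|\?|$)/;

function isCheckoutPage(url) {
  const { pathname, search } = new URL(url);
  return checkoutPattern.test(pathname + search);
}

console.log(isCheckoutPage('https://abc.com/checkout?xyz=123')); // true
console.log(isCheckoutPage('https://abc.com/checkout?xyz=456')); // true
console.log(isCheckoutPage('https://abc.com/cart'));             // false
```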

 

Bhavya:

Just a second. Vlad, you can please type in the chat box here if this answers your question. I’m sure he will come back to us. Josh Fowler writes: do you include the URL in your event data to report on data by page?

 

Ruben:

Yes. Yes. We do. Yeah. It’s quite helpful.

And for those who run single-page apps, which is very common nowadays, we always make sure that we’re firing an event every time the page changes, even if it doesn’t reload. So if just, let’s say, the hash changes or the string changes, we still treat it like a page view and capture that URL.
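
A rough sketch of how that can be wired up in a single-page app, assuming a Segment-style analytics.page call; hooking popstate, hashchange, and history.pushState is the generic approach rather than any specific framework's API.

```javascript
// Fire a page view every time the SPA changes its URL, even without a reload:
// hash-only changes, history.pushState route changes, and back/forward moves.
function trackPage() {
  // analytics.page() auto-captures the current URL, path, and title.
  analytics.page();
}

window.addEventListener('popstate', trackPage);   // back/forward buttons
window.addEventListener('hashchange', trackPage); // hash-only changes

// Route changes made via history.pushState (the common SPA case).
const originalPushState = history.pushState;
history.pushState = function (...args) {
  originalPushState.apply(this, args);
  trackPage();
};

trackPage(); // initial load
```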

 

Bhavya:

That’s great. Christian Torgerson asks: how do you calculate the LTV of an A/B test after the test is finished and implemented? I think we’ve kind of answered that, but the question continues: I see the test increased revenue by 2% over 1 month. Do you simply extrapolate 2% month over month for the rest of the period? I think Ruben has very elaborately answered that, but Ruben, still go ahead and take a stab at it if you want to.

 

Ruben:

Yeah. So it’s very much along the lines. Right? What can be reasonable for an impact? Right?

If you run a test and your short-term metrics, say, your conversion rate, increased by 2%, you could extrapolate that going forward. What I think would be helpful is to measure it over the long term as well. Right? So looking at something like that cohort analysis report that we saw, and measuring the revenue of those users who viewed variation 1, who maybe converted higher, and seeing what their actual revenue is 12 months from now. That will tell you whether maybe the revenue was higher in the 1st month and then came back to the average over time.

Or maybe you just kept going higher and higher. Right? So I think that will give you a sense of if the impact that you did was something that’s short term or if it’s truly a long term impact.
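
As a tiny worked illustration of the difference, with entirely hypothetical numbers: naive month-over-month extrapolation of a one-month lift versus what the exposed cohort actually earned later.

```javascript
// Hypothetical numbers: a 2% revenue lift observed in month 1.
const baselineMonthlyRevenue = 100000; // made-up baseline, in dollars
const observedLift = 0.02;

// Naive extrapolation: assume the same 2% lift holds every month for a year.
const extrapolatedAnnualGain = baselineMonthlyRevenue * observedLift * 12; // 24000

// Cohort check: extra revenue actually measured from the exposed cohort,
// month by month after the test (made-up numbers where the effect fades).
const actualMonthlyGains = [2000, 1500, 800, 300, 0, 0, 0, 0, 0, 0, 0, 0];
const measuredAnnualGain = actualMonthlyGains.reduce((a, b) => a + b, 0); // 4600

console.log({ extrapolatedAnnualGain, measuredAnnualGain });
```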

 

Bhavya:

Alright. This is a little bit, I mean, it touches the statistical side of things, Ruben. How do you overcome differences in… Oh, okay, I think this is a product question. How do you overcome differences in Bayesian reports in VWO versus frequentist ones? Will the report in the tool be reliable enough?

Mizela, I will reach out to you offline.

I have a very detailed answer to that, but I don’t think this webinar is the right path forward. I will email you separately.

Is it healthy/efficient to set GMS as your NSM? Yanice asks us, Ruben. Do you have a clue as to what GMS stands for here?

 

Ruben:

I do not know. If you can specify that. I know a lot of acronyms, but not that one.

 

Bhavya:

Yeah, Yanice, it would be great if you can, kind of elaborate on what GMS stands for in your context. I’m sure it’s contextual. We are just not aware of that context. So it will be great if you can kind of come back to us with what GMS stands for. I’m sure.

Ruben will take a stab at it.

 

Ruben:

I could take a stab at it, but it might not be the right answer.

 

Bhavya:

Please go ahead. I’m sure it might just turn out to be right.

 

Ruben:

You know, actually…

 

Bhavya:

What do you think GMS means?

 

Ruben:

Oh, no. Google Tag Manager. I’m actually not sure what GMS means. Yeah.

 

Bhavya:

I mean, I know it can’t be a North Star metric for sure. So, alright, I’m sure he’ll come back to us. We had a lot of other questions before the webinar as well. So let me just pull it out. Can you please give me one moment?

So I have to pull out the sheet here. Until I pull that sheet out: what are you reading these days?

 

Ruben:

Yeah. So I’m actually reading the Hooked book by Nir Eyal, How to Build Habit-Forming Products. I’ve been seeing him online quite a bit. I found it quite interesting. A little bit of Ryan Holiday too, his new book, ‘Stillness Is the Key.’

 

Bhavya:

It’s key. Yeah.

 

Ruben:

Yeah. And then I think in the space itself, people like Brian Balfour are always writing great stuff.

Amplitude and Mixpanel. I think their stuff is also quite good. Again, they’re sort of at the forefront on how to make analysis easier, through a lot of reports that we’re seeing here. So it’s very interesting.

 

Bhavya:

There’s a question I personally have, Ruben, about analytics and North Star metrics. You’ve kind of spent some time on it, but of course you couldn’t go into detail; again, it’s a topic for another webinar altogether. But even for us, for example, at VWO: for a long time, we were deliberating about what our North Star metric should be.

Whether it should be the number of experiments that, on average, VWO customers run, and we should all chase increasing that number of experiments. Alright? And, I mean, our hypothesis was that the greater the number of experiments being run, the higher the success rate would be.

There would be more revenue. Everybody wins. Alright? Turns out, it wasn’t that great a metric, because once we started pushing our customers to run more experiments, they were kind of running them, and they would also get an uplift. More experiments means more winners, right?

But is this the kind of conversion that you need? A lot of customers came back to us and said: we’ve tried and run a couple of experiments based on your ideas, but what it has done is give us a higher number of leads or sales while our average value has gone down, because that’s not the customer we are running after. So then we were like, okay.

This is clearly not the right North Star metric. So what appeared very obvious was not that obvious. Alright. So say a company is setting up its data or its team on day 0. Alright.

And they have to pick a North Star metric. How do you think they should be going about it? What is the process here so that, unlike us, they do not have to go through trial and error and figure out, okay, no, that was not the right metric in the first place?

Ruben:

Yeah, yeah. You know, it’s tough. I find a lot of companies, and, I mean, really people in companies, have a hesitation to just go after what they really want. Right?

So in a lot of companies, I’ll come in and they start telling me, you know, we want to increase customer satisfaction and the number of users and people who do this and do that. And it kind of becomes like, okay, like, why do you want that? Well, we wanna increase revenue. Right? So the answer is they wanna increase revenue.

So the North Star metric should probably be increased revenue. Right? Okay. Then you can supplement it. Right, you can supplement it to say, you know, higher revenue would probably require higher customer satisfaction, which we measure in NPS, let’s say, right. So you can have supporting metrics, but I find it helps to keep diving deeper into the why… if someone wants more experiments, why do we care about that?

Because they’ll be happier or they’ll get a bigger lift, like you said, right? If they get a bigger lift, why do we care about that? Because they’ll be more likely to become, or stay, a subscriber, to be a paying customer for us. You keep going until the final why is, okay, it seems to be revenue.

Right? Venture-backed companies, on the other hand, typically don’t have this problem as much, because they know clearly that the most important thing for them over the next 18 months or 2 years is to raise a second funding round. And to do that second funding round, here is the handful of things investors care about, and we’re going to hit those. So it can be a little bit clearer what really matters to them, because they have a very constrained existence.

 

Bhavya:

So Yanice came back. GMS is equal to Gross Merchandise Sales.

 

Ruben:

Got it. Okay. And what was the question?

 

Bhavya:

Is it healthy to set GMS as your NSM?

 

Ruben:

NSM.

 

Bhavya:

North Star Metric.

 

Ruben:

Got it. Yeah. It can be. Yes. Yeah.

So I would say it depends. We were just having a discussion, right, about the why behind the metric and what you want to accomplish with it. Be very careful about what you choose as your metric. Most companies are actually very successful at optimizing whatever the North Star metric is, whether it’s experiments being run or GMS. So look out 12 months from now and say, hey, you know, we improved GMS by 10%, 15%.

Is that what we really want? Right? Is that gonna drive the other business outcomes that we care about, that you care about, that your team cares about? Right? And if you feel confident in that answer, then that can become a good North Star metric. But if you see some gaps in that assumption, you know, the idea that maybe if customers are running more experiments, they might still be unhappy or they might still cancel, they might still not be paying customers, then that could be something that might be worth double-checking.

 

Bhavya:

Sure. Sure. I hope that answers the question. Please come back to us if it doesn’t, and we will elaborate on it.

George has another question. In your LTV, did you only measure the average revenue that occurred after the test, or was this broken down by ARPU?

 

Ruben:

That’s probably the typical way you can measure it. I also find it helpful to look at the outliers on both ends. So maybe the people who are really high on LTV: who are they, right? Were they impacted by the test? And you know the famous phrase that averages lie, right? An average can lie and hide a lot of things.

You might find that the average went up, but maybe it’s a lot of low-paying customers and a handful of really high-paying customers, right? So the average is helpful, but then also look at the breakdown underneath that. Are the customers who make up that higher average the kind of customers you want over the long term? An example here is you might run an A/B test that uses discounts, right?

And you’re discounting quite heavily and your average goes up. Just the sheer volume of people increases the average. But maybe the underlying composition of that average is not very suitable for the long term.
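To make the point about averages concrete, here is a minimal sketch (not from the webinar itself) of breaking an LTV average down by test variant in Python with pandas. The column names variant and ltv, and the toy numbers, are hypothetical and only for illustration.

```python
# Minimal sketch: the mean LTV alone can hide a skewed mix of many
# low-paying customers plus a handful of very high-paying ones.
import pandas as pd

def ltv_breakdown(df: pd.DataFrame) -> pd.DataFrame:
    """Summarise LTV per variant: count, mean, median, 90th percentile,
    and the share of total LTV held by the top decile of customers."""
    return (
        df.groupby("variant")["ltv"]
          .agg(
              customers="count",
              mean_ltv="mean",
              median_ltv="median",
              p90_ltv=lambda s: s.quantile(0.90),
              top_decile_share=lambda s: s.nlargest(max(1, len(s) // 10)).sum() / s.sum(),
          )
          .round(2)
    )

if __name__ == "__main__":
    # Toy data: variant B's mean is pulled up by a single large account.
    data = pd.DataFrame({
        "variant": ["A"] * 5 + ["B"] * 5,
        "ltv": [100, 110, 95, 105, 100, 60, 70, 65, 75, 700],
    })
    print(ltv_breakdown(data))
```

In the toy data, variant B’s mean is higher than A’s even though its median is lower and most of its value sits in one account, which is exactly the gap between the average and its underlying composition described above.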

 

Bhavya:

Sure. I think we have time for one more question before we have to wrap up the show. I’ll pick this one; I think we’ve partly answered it already: interested specifically in how A/B test outcomes can indicate causation for a customer upgrading to a higher plan. This is less about LTV and more about expansion.

I think we’ve already answered it, but do you want to take one final stab at it?

 

Ruben:

Yeah. Yeah. So, really a very popular question, especially with my clients, is understanding the behaviors the user must do before becoming a paying customer. It’s kinda like the holy grail for a lot of companies. It can be very impactful. There’s no straight answer, but you can combine a handful of reports to do that.

A report like the one we saw earlier, for example, is the kind of thing you can imagine. You take your different behaviors, maybe not the sign-up itself, but them sending a message, right, or some other key product action, and then your outcome is a purchase event, them becoming a paying subscriber. You also have tools like, let’s see, Compass. Compass is the same idea, right? You take users, you take some events, again, some behavior, and then you want to see how well that predicts some kind of outcome, the same thing we had before. Some event, and how well it predicts them doing, say, a purchase event, and you get a gauge of how predictive that is. So the same thing.

Once you have your data in order, organized and readable, you can run a lot of these reports on it and then run some models on whether, if you optimize a certain event or behavior, that’s going to be predictive towards the outcomes you care about.
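As a rough illustration of the idea behind a Compass-style report (this is not Amplitude’s actual Compass implementation), here is a minimal pandas sketch that gauges how well a single behavior predicts a purchase event from a raw event log. The user_id and event column names, the send_message behavior, and the toy log are all hypothetical.

```python
# Minimal sketch: does a behavior (e.g. sending a message) predict purchase?
import pandas as pd

def behaviour_predictiveness(events: pd.DataFrame,
                             behaviour: str,
                             outcome: str = "purchase") -> pd.Series:
    """Compare the purchase rate of users who performed the behaviour
    against users who did not, and report the lift between the two."""
    per_user = (
        events.assign(
            did_behaviour=events["event"].eq(behaviour),
            did_outcome=events["event"].eq(outcome),
        )
        .groupby("user_id")[["did_behaviour", "did_outcome"]]
        .any()  # one row per user: did they ever do the behaviour / the outcome?
    )
    rates = per_user.groupby("did_behaviour")["did_outcome"].mean()
    with_rate = float(rates.get(True, 0.0))
    without_rate = float(rates.get(False, 0.0))
    return pd.Series({
        "purchase_rate_with_behaviour": with_rate,
        "purchase_rate_without_behaviour": without_rate,
        "lift": with_rate / without_rate if without_rate else float("inf"),
    })

if __name__ == "__main__":
    # Hypothetical event log: one row per (user, event).
    log = pd.DataFrame({
        "user_id": [1, 1, 2, 2, 3, 3, 4, 4, 5],
        "event": ["send_message", "purchase", "send_message", "purchase",
                  "signup", "purchase", "send_message", "signup", "signup"],
    })
    print(behaviour_predictiveness(log, behaviour="send_message"))
```

Tools like Compass go further than this, but the basic question is the same: do users who perform the behavior go on to the outcome at a meaningfully higher rate than users who do not?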

 

Bhavya:

Sure. It’s a wrap, Ruben. It was fantastic having you. And for everybody who’s still hanging out here, I think we’ve got around fifty-odd people still here. That’s great.

Just write back to me. The ideas you supply us with are the fodder for future webinars. What do you want to hear about next? If there’s something you want Ruben to come back and address, that’s even a possibility. We’d love to answer some of the questions that we couldn’t get to today.

Just write to me at bhavya.sahni@vwo.com and I’ll take it forward from there. Thank you so much, Ruben. Any parting notes to this lovely audience that is still here?

 

Ruben:

No. No. I think it was great. Let me know if you have any questions by email or on my website, and I had fun.

 

Bhavya:

Thank you so much, Ruben.

 

Ruben:

Have a good day!

 

Bhavya:

You too. And thank you, everybody. Thank you so much. Bye.
