
Beyond Statistical Significance: Determining Impact Of Experimentation On Customer Lifetime Value (LTV)

Duration - 40 minutes

Key Takeaways

  • Analyzing user behavior and how it predicts certain outcomes, such as making a purchase, can provide valuable insights for business growth. This can be done by running reports on organized and readable data.
  • Optimizing certain events or behaviors can be predictive towards actions that businesses care about, such as increasing sales or customer engagement.
  • The use of tools like Compass can help in predicting outcomes based on user behavior. This can be particularly useful in understanding what actions lead to a purchase event.
  • Feedback from the audience is crucial for improving future webinars and addressing relevant topics. Participants are encouraged to share their thoughts and suggestions for future discussions.
  • Reach out for further questions or clarifications via email or through the speaker's website. Open communication channels are important for continuous learning and improvement.

Summary of the session

The webinar, led by seasoned data analytics expert Ruben Ugarte, delves into the complexities of data tracking, analytics, and the role of branding in product development. Ruben, with his vast experience advising companies, emphasizes the importance of starting with questions rather than tools, focusing on established, production-ready tools, and using an assumptive scoring system to make tool selection more efficient.

Ruben concludes by inviting further questions via email or through his website. The webinar is beneficial for both VWO customers and non-customers, providing a comprehensive overview of experimentation.

Webinar Video

Webinar Deck

Top questions asked by the audience

  • How do you measure impact on LTV when your LTV is, quote-unquote, bad?

    That might be a case where LTV is not the right metric or KPI to look at, and you might need something else in the meantime. So, a quick KPI example for a SaaS growth team (and I specify growth because a product team would look different): the North Star might be something like new paid subscribers. Some of the KPIs supporting that North Star metric are things like CAC (cost of acquisition), lifetime value, and sign-up rate, i.e., how well users sign up from, say, the landing page into the actual product. Then we can have supporting metrics: maybe a breakdown by marketing channel or campaign, or a demographic breakdown such as gender, age, or any other demographic data we can collect about our users. We had a client with a very similar structure, and their onboarding turned out to be one of the biggest gaps in their product. The campaigns were driving users and the cost per acquisition was solid; it all seemed to work until users got to the onboarding, which was quite weak. They would lose a lot of users there, and users of course have to go through onboarding to become paid subscribers. A structure like that helps you understand the entire picture.
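The KPI hierarchy described above can be sketched in a few lines. The stage names and counts below are hypothetical, not from the webinar; computing step-to-step conversion rates is what makes a weak onboarding step stand out, as in the client example.

```python
# Hypothetical funnel behind a "new paid subscribers" North Star.
# Counts are illustrative; the onboarding step is made deliberately weak,
# mirroring the client example above.
funnel = [
    ("landing_page", 10_000),
    ("signed_up", 2_000),
    ("completed_onboarding", 300),
    ("paid_subscriber", 250),
]

# Step-to-step conversion rates reveal where users drop off.
rates = {
    f"{a}->{b}": nb / na
    for (a, na), (b, nb) in zip(funnel, funnel[1:])
}
weakest_step = min(rates, key=rates.get)
```

With these numbers, `weakest_step` points at the sign-up-to-onboarding transition, which is exactly the kind of gap the supporting metrics are meant to surface.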
  • What if the page URL is undefined? For example, the shipping page is part of the overall checkout, with a string that is different every time. Would we use a regex to identify this page?

    - by Vlad Kovinsky
    Yes, you can use something like a regex to filter down to a specific page and create an event, whether you do it in code or within your analytics tool itself. So you create an event for each page of the checkout when the string changes. Let me know if that helps.
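As a sketch of the regex approach: assuming a hypothetical URL shape like `/checkout/shipping/<random-string>`, the variable suffix can be stripped so every visit to a given step maps to one stable event name.

```python
import re
from typing import Optional

# Assumed (hypothetical) URL shape: /checkout/<step>/<per-session-string>.
# The trailing string differs on every visit, so we match on the step
# and ignore the rest when naming the event.
CHECKOUT_RE = re.compile(r"^/checkout/(?P<step>shipping|payment|review)/[^/]+$")

def checkout_event(path: str) -> Optional[str]:
    """Return a stable event name for a checkout page, or None."""
    m = CHECKOUT_RE.match(path)
    return f"checkout_{m.group('step')}_viewed" if m else None
```

The same pattern can usually be configured inside an analytics tool's URL-matching rules instead of in code.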
  • Do you include URL in your event data to report on data by page?

    - by Josh Fowler
    Yes, we do, and it's quite helpful. For those who run single-page apps, which is very common nowadays, we always make sure we're firing an event every time the page changes, even if it doesn't reload. So if just the hash or the query string changes, we still treat it like a page view and capture that URL.
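The rule described above (treat any hash or query-string change as a page view, even without a reload) can be sketched as a small comparison function; the URLs below are illustrative.

```python
from urllib.parse import urlsplit

def is_new_pageview(prev_url: str, new_url: str) -> bool:
    # In a single-page app the browser never reloads, so we compare the
    # path, query string, and hash ourselves; any change counts as a
    # page view worth firing an event for.
    prev, new = urlsplit(prev_url), urlsplit(new_url)
    return (prev.path, prev.query, prev.fragment) != (new.path, new.query, new.fragment)
```

In practice this check would live in the app's routing layer (or a history-change listener) so the analytics event fires with the full URL attached.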
  • How do you calculate LTV of an A/B test after the test is finished and implemented? I see the test increased revenue by 2% over 1 month. Do you simply extrapolate 2% month over month for the rest of the period?

    - by Christian Torgerson
    Yes, it's very much along those lines: what is a reasonable impact to claim? If you run a test and a short-term metric, say your conversion rate, increased by 2%, you could extrapolate that going forward. What I think would also be helpful is to measure it over the long term. So look at something like the cohort analysis report we saw, measure the revenue of the users who saw variation 1 and converted at the higher rate, and see what their actual revenue is 12 months from now. That will tell you whether revenue was higher only in the first month and then came back to the average over time, or whether it kept going higher. That gives you a sense of whether the impact was short term or truly long term.
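To make the difference concrete, here is a sketch with invented numbers: extrapolating a first-month lift forward versus measuring what the cohort actually earned over 12 months. All figures are hypothetical.

```python
# Hypothetical revenue per user per month, 12 months after the test.
# The variation wins by 2% in month 1, then regresses to the average,
# which is exactly the case long-term cohort measurement catches.
control = [100.0] * 12
variant = [102.0] + [100.0] * 11

# Naive approach: assume the month-1 lift repeats every month.
naive_annual_lift = 0.02 * sum(control)

# Cohort approach: compare what the two groups actually generated.
actual_annual_lift = sum(variant) - sum(control)
```

Here the naive extrapolation claims twelve times the lift the cohort actually delivered; if the lift had compounded instead, the cohort numbers would show that too.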
  • Is it healthy to set GMS as your NSM?

    - by Yanice
    It can be. I would say it depends; it comes back to the discussion we were just having about the why behind the metric and what we want to accomplish with it. Be very careful about what you choose as your metric, because most companies are actually very successful at optimizing whatever the North Star metric is, whether it's experiments being run or GMS. So if you look out 12 months from now and say, "we improved GMS by 10% or 15%," is that what we really want? Is that going to drive the other business outcomes that you and your team care about? If you feel confident in that answer, it can be a good North Star metric. But if you see gaps in that assumption, for example the idea that customers might run more experiments and still be unhappy, still cancel, or still not become paying customers, then it's worth double-checking.
  • In your LTV calculation, did you only measure average revenue, i.e., was this broken down by ARPU?

    - by George
    That's probably the typical way to measure it. I also find it helpful to look at the outliers on both ends, for example the people who are really high on that LTV value: who are they, and were they impacted by the test? There's the famous phrase that averages lie; an average can hide a lot of things. You might find that the average went up, but it's made up of a lot of low-paying customers and a handful of really high-paying customers. So the average is helpful, but also look at the breakdown underneath it: are the customers in that higher average the kind of customers you want over the long term? For example, you might run an A/B test that uses heavy discounts, and your average goes up because the sheer volume of people increases it, but the underlying composition of that average is not very suitable for the long term.
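The "averages lie" point can be sketched with invented LTV values: a discount-driven test fills the cohort with low payers plus a few big spenders, so the mean looks healthy while the median tells a different story.

```python
from statistics import mean, median

# Hypothetical post-test LTVs: 90 low-paying customers and 10 big spenders.
ltv = [10] * 90 + [500] * 10

avg_ltv = mean(ltv)        # inflated by the handful of big spenders
typical_ltv = median(ltv)  # what the typical customer is actually worth
```

With these numbers the mean is 59 while the median is 10, which is the kind of gap that looking at the breakdown underneath the average is meant to catch.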
  • How can A/B test outcomes indicate causation for a customer upgrading to a higher plan? For us this is less about LTV and more about expansion.

    Yeah, this is a really popular question, especially with my clients: understanding the behaviors a user must do before becoming a paying customer. It's kind of the holy grail for a lot of companies, and it can be very impactful. There's no single answer, but you can combine a handful of reports to get there. A report like this one is the kind of thing you can imagine: you take different behaviors, maybe not just sign-up but sending a message or some other key product action, and your outcome is the purchase event, i.e., becoming a paying subscriber. You also have tools like Compass, which is the same idea: you take users and some events, some behavior, and you want to see how well that predicts some kind of outcome, such as a purchase event, and you get a gauge of how predictive it is. Once your data is in order, organized, and readable, you can run a lot of these reports on it, and then run models on whether optimizing a certain event or behavior is going to be predictive toward the outcomes you care about.
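As a minimal sketch of that predictiveness idea: compare the purchase rate of users who performed a candidate behavior against those who did not. The event log below is invented, and this simple rate-ratio is a stand-in for what a tool like Compass computes with more rigor.

```python
# Hypothetical event log: for each user, whether they performed a key
# behavior (sent a message) and whether they later purchased.
users = [
    {"sent_message": True,  "purchased": True},
    {"sent_message": True,  "purchased": True},
    {"sent_message": True,  "purchased": False},
    {"sent_message": False, "purchased": True},
    {"sent_message": False, "purchased": False},
    {"sent_message": False, "purchased": False},
    {"sent_message": False, "purchased": False},
    {"sent_message": False, "purchased": False},
]

def purchase_rate(users, did_behavior):
    """Purchase rate among users who did (or did not) do the behavior."""
    group = [u for u in users if u["sent_message"] is did_behavior]
    return sum(u["purchased"] for u in group) / len(group)

# How much more likely are message-senders to purchase?
lift = purchase_rate(users, True) / purchase_rate(users, False)
```

A large lift suggests the behavior is worth optimizing toward, though correlation alone does not prove the behavior causes the purchase; that is where experiments come back in.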

Reading Recommendations

  • Hooked: How to Build Habit-Forming Products

    by Nir Eyal

    Hooked by Nir Eyal delves into the psychology behind successful products and services, exploring how companies create habit-forming experiences. The book provides insights into how to create products that captivate consumers' attention and keep them coming back for more. Using the "Hook Model," Eyal outlines a four-step process—Trigger, Action, Variable Reward, and Investment—that drives consumer behavior and engagement. This insightful guide blends behavioral science and practical business strategies to help designers, marketers, and entrepreneurs build products that captivate users and create lasting habits.

  • Stillness is the Key

    by Ryan Holiday

    The book explores the importance of cultivating inner peace in a chaotic world. Drawing on timeless wisdom from Stoic, Buddhist, and Confucian philosophies, Holiday emphasizes the value of slowing down, embracing solitude, and practicing mindfulness. Through engaging stories and practical advice, the book guides readers toward achieving mental clarity, focus, and resilience, ultimately leading to a more fulfilling and balanced life.


Disclaimer: Please note that the transcript below is computer-generated and may contain errors or shortcomings.

Bhavya from VWO: Thank you so much for coming in here. Today’s topic is a very controversial subject. There have been a lot of debates around what we’re going to talk about today. Yet, in all those conversations that we have seen and heard to date ...