Webinar

Continuous Experimentation: How to build an experimentation capability that helps you test more ideas rapidly

Duration - 60 minutes

Key Takeaways

  • Encourage experimentation within your organization and get senior leadership on board with this approach.
  • Utilize machine learning for meta-analysis of past experiments, but remember that it's always looking back and may not account for changes in the environment or competitors.
  • Human creativity is still a crucial part of the experimentation process and cannot be entirely replaced by machine learning.
  • Recordings and slides from webinars can be a valuable resource for those who couldn't attend or want to revisit the information.
  • Stay updated with new articles and insights from industry experts to continuously learn and improve.

Summary of the session

The webinar, hosted by Ajit, features Kevin Anderson, a seasoned marketing expert from Vista. Kevin emphasizes the importance of hypothesis-driven management, the power of A/B testing, and the need for marketers, data analysts, UX designers, and developers to understand and apply this approach daily. He encourages attendees to consider these insights for their career progression.

The webinar concludes with a Q&A session, where Kevin addresses questions about handling potential interaction effects between experiments and the role of communication and tooling in this process. This webinar is a must-watch for those interested in the practical application of marketing experimentation.

Webinar Video

Webinar Deck

Top questions asked by the audience

  • How do you handle potential interaction effects between all the experiments that run?

    - by Ricardo
    Yes. So how do we handle interaction effects between the experiments? I think there are two ways to deal with that. One aspect is just the organizational aspect: there are probably multiple teams working together that could interfere with each other, and that is solved by communication. The center of excellence can provide tooling giving insight into what kind of experiments are being developed, what stage they are in, and what kind of metrics they're trying to optimize, and that, in the end, will help teams understand: okay, this team is trying to do something that is probably interfering with something we are developing. So that is just communication, right, just collaboration, knowing what is being done; tooling can support that, but it won't solve it. The other part is that when overlapping experiments are being run, you need to have some models in place that account for interaction effects. There, the local team or the center of expertise can develop tools to enable people to see: okay, I've been running my experiment, but in the meantime, people in another experiment were affected or were part of my experiment. Sometimes you need to cross-check, right, whether the results differ when you segment them by the other experiments. (A minimal sketch of such a segmentation cross-check appears after this Q&A list.) So I think it's two ways: communication, and providing tooling to give that insight.
  • Let's say you're running 20-plus tests per month, what's the best way to keep everything organized and documented without creating loads of manual admin work, that is, manually creating results docs, etcetera?

    - by Mike
    Yeah. So if you are approaching a level like 20-plus experiments, how do you prevent it from becoming a lot of work to manage all those things? I think the trick here is to connect the reporting, or the structure of your program, with the actual work. So, for example, if you are moving something from development to test, then tracking that activity should automatically update the ticket of your experiment so that it's actually in a new state. What we currently use at Vista, and this may be a little bit technical, is Jira, and I think lots of organizations use it. We've set up a board where we ask people to document their hypothesis, and as soon as new information comes in or a ticket has been updated, we send this into Slack. So these are all automated processes: we take the responsibility for updating people who subscribe to a specific experiment, but people still need to update the ticket with relevant information. (A minimal sketch of this kind of Jira-to-Slack notification follows this list.) And automation only goes so far: in the end, yeah, people still need to do the work, right? We need to come up with a hypothesis, we need to build the variant, so that won't change. But what you need to prevent is that people have to do a whole pile of admin stuff on the side of that.
  • How do you get management to start focusing on experimentation efforts, on running experiments and sharing the results?

    Yeah. I think the best approach is for senior leaders to get inspired, or almost convinced, about experimentation by someone outside of your organization. So that means they go to a conference and they see a presentation from Booking.com or Amazon about the number of experiments that they are doing, and then they come back and say to someone in the organization, okay, I want this as well. I've heard so many stories about that. I think Booking started experimenting this way when someone joined one of the sessions from Ronny Kohavi while he was at Amazon. I think it was the CEO, even though back then it was a small company. And he came back and said to the team, okay, I want to build this experimentation capability; we need to do this as well. So that may be hard for you to organize from within, but try to look at areas where you see good examples and try to get that in front of your senior leadership. So, yeah, I think that's always the best approach.
  • What is your vision on the state of the art of machine learning being able to predict experiment results?

    - by Nicole
    This is a fascinating area, of course. The question behind this is: do we need all these people running A/B tests, or can we build something that almost predicts what will come out? I do see a lot of value in, like, meta-analysis of all the experiments that you have been running within your organization. The big problem here is that it's always looking back, right? And I think that's true for all machine learning, all predictions: we are relying on the data that we have. And if things change in the environment, or with competitors, or you have better tooling, well, dozens of things can change, then it's oftentimes better to just do a new experiment than to rely on all kinds of old results. Having said that, I think there's a huge benefit in having 10, 20, or even 30 experiments all showing the same direction on a specific topic. Then you can come to some kind of, well, almost a truth for your customers, and, of course, that needs to be taken into account in new development. (A small meta-analysis sketch follows this list.) I'm not so sure machine learning will take this away. I think the creativity part of humans is still very strong, and, hopefully, that will separate us from machines, in the short term and in the long future as well. But who knows? I might be wrong. I don't know. I think for the next 10, 20 years, at least my career, your career, this is a fascinating area, and machine learning is, well, mostly outside of that, I would say. That's my bet.
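
To make the first answer concrete, here is a minimal, illustrative Python sketch of the segmentation cross-check Kevin describes: compare one experiment's lift inside each bucket of another, concurrently running experiment. The data layout, field names, and the rough |z| > 2 threshold are assumptions for illustration, not Vista's actual tooling.

```python
import math
from collections import defaultdict

# Hypothetical per-user records: which bucket each user saw in two
# concurrently running experiments, and whether the user converted.
# In practice these rows would come from your analytics store.
users = [
    {"exp_a": "variant", "exp_b": "control", "converted": 1},
    # ... thousands more rows
]

def conversion_rate(rows):
    n = len(rows)
    return (sum(r["converted"] for r in rows) / n if n else 0.0), n

def lift_of_a_within(rows):
    """Lift of experiment A's variant over its control, inside one segment."""
    variant = [r for r in rows if r["exp_a"] == "variant"]
    control = [r for r in rows if r["exp_a"] == "control"]
    (pv, nv), (pc, nc) = conversion_rate(variant), conversion_rate(control)
    return pv - pc, pv, pc, nv, nc

# Segment experiment A's result by experiment B's buckets.
segments = defaultdict(list)
for row in users:
    segments[row["exp_b"]].append(row)

results = {}
for bucket, rows in segments.items():
    lift, pv, pc, nv, nc = lift_of_a_within(rows)
    results[bucket] = (lift, pv, pc, nv, nc)
    print(f"exp_b={bucket}: lift of A = {lift:+.4f} ({nv + nc} users)")

# Crude interaction check: z-test on the difference between the two lifts
# (only meaningful once every cell has traffic).
if {"control", "variant"} <= results.keys():
    l1, pv1, pc1, nv1, nc1 = results["control"]
    l2, pv2, pc2, nv2, nc2 = results["variant"]
    if min(nv1, nc1, nv2, nc2) > 0:
        var = (pv1 * (1 - pv1) / nv1 + pc1 * (1 - pc1) / nc1
               + pv2 * (1 - pv2) / nv2 + pc2 * (1 - pc2) / nc2)
        z = (l1 - l2) / math.sqrt(var) if var > 0 else 0.0
        print(f"interaction z-score: {z:.2f} (|z| > 2 hints at interference)")
```

If A's lift looks very different for users in B's variant than for users in B's control, the two experiments are likely interfering and the teams should coordinate, which is exactly the communication point Kevin makes.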
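And here is a hedged sketch of the automated Jira-to-Slack flow from the second answer. Slack's incoming-webhook endpoint (an HTTP POST of JSON with a "text" field) is a real, documented API, but the webhook URL, function name, and ticket fields below are placeholders; the actual wiring at Vista (Jira automation rules, subscriber lists) is not described in the webinar.

```python
import json
import urllib.request

# Placeholder: a real Slack incoming-webhook URL would go here.
SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"

def notify_experiment_update(ticket_key, hypothesis, old_status, new_status):
    """Post an experiment ticket's status change to a Slack channel.

    Intended to be called by whatever listens to Jira webhooks or
    automation rules when an experiment ticket transitions state.
    """
    message = {
        "text": (
            f"*{ticket_key}* moved from `{old_status}` to `{new_status}`\n"
            f"Hypothesis: {hypothesis}"
        )
    }
    req = urllib.request.Request(
        SLACK_WEBHOOK_URL,
        data=json.dumps(message).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:  # Slack returns 200 "ok"
        return resp.status == 200

# Hypothetical usage, triggered by a Jira status transition:
# notify_experiment_update("EXP-42", "Shorter checkout raises conversion",
#                          "In Development", "Running")
```

The design point Kevin makes is that the notification is a side effect of work people already do (moving a ticket), so the program documentation stays current without extra admin.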
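Finally, a small sketch of the kind of meta-analysis Kevin mentions in the last answer: pooling the measured lift from many past experiments on one theme with inverse-variance (fixed-effect) weighting. The experiment names, lifts, and standard errors are invented for illustration.

```python
import math

# Each tuple: (experiment name, observed lift, standard error of the lift).
# Illustrative numbers, e.g. 10-30 past experiments on one theme.
past_experiments = [
    ("trust-badge-2021", 0.012, 0.006),
    ("trust-badge-2022", 0.018, 0.007),
    ("trust-badge-2023", 0.009, 0.005),
]

def fixed_effect_meta(effects):
    """Inverse-variance weighted pooled effect and its standard error."""
    weights = [1.0 / se ** 2 for _, _, se in effects]
    pooled = sum(w * eff for w, (_, eff, _) in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

pooled, se = fixed_effect_meta(past_experiments)
print(f"pooled lift: {pooled:+.4f} ± {1.96 * se:.4f} (z = {pooled / se:.2f})")
# As Kevin cautions, this always looks backward: if the market, tooling,
# or competitors change, a fresh experiment beats extrapolating old data.
```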

Transcription

Disclaimer: Please be aware that the content below is computer-generated, so kindly disregard any potential errors or shortcomings.

Ajit from VWO: Okay. Good morning. Good afternoon. Good evening, based on where you are. The time is 12:32 CST. So those of you who have joined, a round of applause for your punctuality. And while folks are still joining ...