Webinar

Continuous Experimentation: How to build an experimentation capability that helps you test more ideas rapidly

Duration - 60 minutes
Speaker
Kevin Anderson

Product Manager, Experimentation

Key Takeaways

  • Encourage experimentation within your organization and get senior leadership on board with this approach.
  • Utilize machine learning for meta-analysis of past experiments, but remember that it's always looking back and may not account for changes in the environment or competitors.
  • Human creativity is still a crucial part of the experimentation process and cannot be entirely replaced by machine learning.
  • Recordings and slides from webinars can be a valuable resource for those who couldn't attend or want to revisit the information.
  • Stay updated with new articles and insights from industry experts to continuously learn and improve.

Summary of the session

The webinar, hosted by Ajit, features Kevin Anderson, Product Manager for Experimentation at Vista. Kevin emphasizes the importance of hypothesis-driven management, the power of A/B testing, and the need for marketers, data analysts, UX designers, and developers to understand and apply this approach daily. He encourages attendees to consider these insights for their career progression.

The webinar concludes with a Q&A session, where Kevin addresses questions about handling potential interaction effects between experiments and the role of communication and tooling in this process. This webinar is a must-watch for those interested in the practical application of marketing experimentation.

Webinar Video

Webinar Deck

Top questions asked by the audience

  • How do you handle potential interaction effects between all the experiments that run?

    - by Ricardo
    Yes. So how do we handle interaction effects between the experiments? I think there are two ways to deal with that. One aspect is the organizational aspect: multiple teams working together could interfere with each other. That is solved by communication, and the center of excellence can provide tooling that gives insight into what kind of experiments are being developed, what stage they are in, and what metrics they are trying to optimize. In the end, that helps teams to understand: okay, this team is trying to do something that probably interferes with something we are developing. So that is just communication, just collaboration, knowing what is being done. Tooling can support that, but it won't solve it. The other part is that, in case overlapping experiments are being run, you need to have some models in place that account for interaction effects. There, the local team or the center of expertise can develop tools to enable people to see: okay, I've been running my experiment, but in the meantime people in another experiment were affected or were part of my experiment. Sometimes you need to cross-check whether the results differ when you segment them by the other experiments. So I think it's two approaches: communication, and providing tooling that gives that insight.
  • Let's say you're running 20-plus tests per month: what's the best way to keep everything organized and documented without creating loads of manual admin work, that is, manually creating results docs, etcetera?

    - by Mike
    Yeah. So if you are approaching a level like 20-plus experiments, how do you prevent it from becoming a lot of work to manage all those things? I think the trick here is to connect the reporting, the structure of your program, with the actual work. So for example, if you are moving something from development to test, then that activity should automatically update the ticket of your experiment to reflect that it's in a new state. This may be a little bit technical, but what we currently use at Vista is Jira, and I think lots of organizations use it. We've set up a board where we ask people to document their hypothesis, and as soon as new information comes in, or a ticket has been updated, we send this into Slack. These are all automated processes. So we take responsibility for updating people who subscribe to a specific experiment, but people still need to update the ticket with relevant information. Automation only goes so far: in the end, people still need to do the work. We need to come up with a hypothesis, we need to build the variant, and that won't change. What you need to prevent is that people have to do a whole pile of admin work on the side of that.
  • How do you get management to start focusing on experimentation efforts, on running experiments and sharing the results?

    Yeah. I think the best approach is for senior leaders to get inspired, or almost convinced, about experimentation by someone outside of your organization. That means they go to a conference and see a presentation from Booking.com or Amazon about the number of experiments they are running, and then they come back and say to someone in the organization: okay, I want this as well. I've heard so many stories like that. I think Booking started experimenting this way when someone joined one of the sessions from Ronny Kohavi while he was at Amazon. I think it was the CEO, even though back then it was a small company. He came back and said to the team: I want to build this experimentation capability; we need to do this as well. So that may be hard for you to organize from within, but try to look at areas where you see good examples and try to get those in front of your senior leadership. So, yeah, I think that's always the best approach.
  • What is your vision on the state of the art of machine learning being able to predict experiment results?

    - by Nicole
    This is a fascinating area, of course. The question behind this is: do we still need all the people running A/B tests, or can we build something that almost predicts what will come out? I do see a lot of value in meta-analysis of all the experiments you have been running within your organization. The big problem is that it's always looking back. Right? And I think that's true for all machine learning: all predictions rely on the data that we have. If things change in the environment, or with competitors, or you get better tooling, well, dozens of things can change, then it's oftentimes better to just run a new experiment than to rely on all kinds of old results. Having said that, I think there's a huge benefit in having 10, 20, or even 30 experiments all showing the same direction on a specific topic. Then you can come to some kind of, well, almost a truth for your customers, and of course that needs to be taken into account in new development. I'm not so sure machine learning will take this away. I think the creativity of humans is still very strong, and hopefully that will separate us from machines, in the short term and in the long run as well. But who knows? I might be wrong. For the next 10 or 20 years, at least my career, your career, I think this is a fascinating area, and machine learning will stay mostly outside of it, I would say. That's my bet.

Transcription

Disclaimer: Please be aware that the content below is computer-generated, so kindly excuse any potential errors or shortcomings.

Ajit from VWO: Okay. Good morning, good afternoon, good evening, depending on where you are. The time is 12:32 CST. So those of you who have joined, a round of applause for your punctuality. And while folks are still joining in, let's give them a minute or two to settle in, so that they don't miss out on anything. And while we wait, how about you tell us the name of the city that you're joining us from? That way we can know how diverse this audience is. So I'll start with myself.

I'm from New Delhi. Meanwhile, please tell us the name of your city in the questions panel. Okay. So, Thomas is from Barcelona. Robbie is from New Delhi, India, and Kavesh is from Durban, South Africa. Dirk, forgive me if I'm not able to pronounce it, but Dirk is joining us from Luban. Shyla Burberg is from the Netherlands. Okay. Timo Roberts is from Hamburg.

Nice. So, Ricardo is from The Hague. Somebody asked: will I be recording the webinar? Yes, we are going to record the webinar, Youssef, and we are also going to send it over to you. No problem. So, a pretty diverse audience here, it seems, and thank you, all of you, for joining. And now I see it is 12:34 CST, so I think we are good to start. So without further ado, welcome to the webinar.

My name is Ajit. I'm a marketing manager at VWO. And in today's edition of the VWO webinar, we have a wonderful speaker who normally spends this time running optimization programs for Vista. Vista is a European company that is a marketing and design partner to millions of small businesses worldwide. Apart from running optimization programs, this gentleman travels around the world speaking at well-known, fancy growth conferences, which, by the way, would set you back $100 if you chose to attend. Okay?

But that's the cost we're going to spare you, like I said in the promotional email as well. So without further ado, please welcome Kevin Anderson. Hello, Kevin. Thank you so much for joining us today. Please tell us why you're here.

 

Kevin Anderson:

Thank you. Thank you for the opportunity to share. And it’s great that many people around the world are joining this webinar.

I'm Kevin from the Netherlands. And I'm here today to share how people and organizations can join the journey towards continuous experimentation. I'll explain everything about what continuous experimentation means. So that's the big goal for me in this webinar. So, yeah, my name is Kevin Anderson.

 

A:

Kevin, before you get started, can I make a small announcement for people who are still joining? I see that a decent number of people have joined by now. So folks, a few pointers. If you'd like to ask Kevin any questions, please add your question in the Questions tab.

Ask him anything. We have him here for 1 hour. Also, towards the end of the webinar, you will be able to ask Kevin a question directly.

Okay. So right now, you are muted; you're not audible. But towards the end, we are going to have a question session where you can ask your question directly. Please enjoy the webinar, and ask all your questions. Thank you so much. Meanwhile, I'm going to turn off my video so that all the attention is on Kevin. Okay. All the best, Kevin.

 

KA:

Thank you. Yeah. And that’s exactly what I intend to do here. So sharing some ideas, some inspiration from other companies, and research that I’ve been doing, but there’s a lot of time for you to ask questions. So let’s dive in.

A little bit about me. So Kevin Anderson, indeed, I work at Vista as a product manager for experimentation. On the side, I do a part-time PhD, which is quite a struggle, but it’s really interesting. And I want to start by sharing my journey into experimentation. So how did it all start for me?

Maybe you can relate. Maybe it’s completely different. But my journey started way back in 2008 when I worked here in the Netherlands in Amsterdam for a company called Postbank. People from the Netherlands might still know this brand. It’s now part of ING.

But I started my career there more or less as a web analyst; analyzing customer behavior and online data was my thing. One of the first big projects I was tasked with was analyzing behavior so that we could do a redesign for the new website where two brands got merged. So we had a Postbank brand and an ING brand: completely different companies at that time, completely different websites, and, as you can imagine, there was a lot of discussion.

A lot of people thought: no, we are right, I am right, this is the way we should do it. And, of course, you need to bring data to the table. My role as a web analyst was to provide insights into what customers were doing there.

The most interesting thing is that there were two guys on my team who had the role of experiment managers. And that was my first real introduction to A/B testing. They were extremely focused on running A/B tests: they were coming up with the hypotheses, they were creating the variations.

They were doing the analysis, running the A/B test from start to end; the full thing was their job. They came up with great insights and great learnings that in the end were almost always implemented. So that was one side: there was this small, isolated pocket of people running experiments.

One day, I brought one of these guys to a meeting, and he proposed that we could just run an A/B test. We had all these ideas; in this case, there were three ideas on how to structure a landing page. I know this was 2008, and it probably looks very ugly to us now, but at that time this completely shifted the conversation from opinions to data, because it shifted the conversation to: okay, what do we want to improve here?

What do we want to optimize for? And I think that is the power of A/B testing. It changes people's mindset to ask: okay, what are we optimizing for? What do we want to measure? You still need the creative part, but in the end the discussion focuses more on the outcomes.

And I think that is the biggest lesson I took with me when entering the A/B testing scene. Later on, I became the manager of that team, and from that moment I tried to push A/B testing within ING and Postbank. Now, years later, I've been reading lots of stuff and have probably seen this quote many times: you read stories about other companies like Amazon saying this is the place where you can run experiments, and that if you want to advance, you have to experiment. That was what Jeff Bezos was saying. And then I was reading stories from Microsoft, about the growth in the number of A/B tests that Bing was running. And this was all caused by this capability that was introduced.

I think Ron Kohavi was the main driver there, building this center of excellence within Microsoft, within Bing, that enabled all the others to run A/B tests and measure the impact of the new features or new releases they were doing. So that was another story. And close to me, here in Amsterdam, is a company called Booking.com. They ran 1,000 concurrent A/B tests across hundreds of teams. And my current manager, Lucas Vermeer, came from over there.

It's not about how much data we have as a company, but about the data we need to prove the thing that you want to prove. Experiments, or A/B tests, are just a way to create data. Right? So it's not so much about the data that you have.

Working at ING, a big financial institution, we had lots of data, but often you want to change something, and then you're entering a realm where there is no data, or not yet, at least. You still have to gather the data, and running an experiment is a way to gather that data more easily. So those are the big powerhouses.

Right? So then I was looking at my own situation at ING: we were running a couple of A/B tests, against all those massive scales that I just showed you from other companies. So that triggered me. And maybe it's a good point in time to ask a question here. Ajit, first, I'm really curious how many A/B tests you, the audience, are currently running every month. Could you put up the poll, please?

I'm curious to see. So maybe you're just starting; that's also fine. Right? But maybe you're already running a lot of A/B tests. I just want to get a feeling for what the audience is like right now.

 

A:

So 93% of the people have already voted, Kevin. Okay. I'm waiting for the remaining 7% who are still too lazy to vote. So, many people have voted by now. Can the remaining people also vote? Okay. I think they have gone to the kitchen to get water or something. Anyway, I'm closing the poll. Let's see. Should I share the results? Yes, please.

 

KA:

Okay. So this is interesting. Most of the people in the audience are running between 5 and 10 A/B tests per month. And that's interesting; that just blows my mind. And maybe this is the ceiling for your organization, but probably not. So we've been reading about and seeing all those powerhouses of A/B testing.

Then you look at your own organization and you think: well, at least we could improve. And that was the same story that I wrote about in this article; I think it was published in April, where I highlight the experimentation gap. I think you need to close the poll. Right?

 

A:

Yep. Give me a second.

 

KA:

Yeah. So it just blows my mind; that's my main message here. And this was the article that I was mentioning. So there's this big experimentation gap between companies who invest a lot in experimentation and make it easy to run A/B tests, and the rest, who are not even aware of it, or are maybe trying, maybe have one or two CRO specialists running around, but that's about it.

And then comes the question: why is that the case? What is hindering us? While so many companies claim that this is the way to go, what is hindering us from starting to move down this track?

So I have a second question along the way. Again, what do you think are the biggest blockers here? So this is the second poll, and don't worry, there won't be a poll every two slides. These are the only two.

I'm just really curious what you think are the biggest blockers here. Maybe it's the tool, maybe it's the culture, maybe it's a mix, but just pick one.

 

A:

So nearly 90% of people have voted again. The 7% who did not vote the last time have not voted this time either. So I'm closing the poll, Kevin.

 

KA:

Okay. So it's quite all over the place. The tool is the least-chosen option, so the tool is probably not the biggest issue; it's probably in the other areas. It's leadership, it's culture, it's processes, but maybe something else.

Right? I'm curious to know what that could be. And, if you could remove the poll again, please, thanks. I think this is an interesting exercise to do for your organization. This is the way I tried to do it at ING.

This is a fishbone diagram, which tries to map the causes of something that you are observing. In this case, I was observing that we were not running a lot of experiments. I was talking with a lot of people and seeing a lot of things happening in that company, so I came up with a lot of reasons why this was the case. In some areas, there was just no priority.

In some areas, people just weren't aware of the value, or they didn't need it. A lot of the time, people had no experience; in some areas, there was a lack of technology, so that's the tool thing. Those were the broad categories, but if you dive in a little bit deeper, you can start taking away the causes that are hindering people from running experiments. So I think this is something that you need to be doing.

And this is not a one-time exercise. Right? This is something that you keep on doing all the time: see if you can list the blockers in priority order, and then chip away at them bit by bit. This is also related to the flywheel paper; look at what Aleksander Fabijan and some other people wrote. You don't just change this overnight.

You just pick one thing, make it easier, make it better, and then pick the next one. So I think that's the way to go. Of course, that's the easy answer, so let me explain a little bit more about what you can start doing today.

I've shared a little bit of background, so here's my bias. Right? I'm sharing ideas here, but I am biased. First of all, I was with ING for over a decade; now, for almost eight months, I've been with Vista. I've been talking to a lot of people from the industry. I like to go to conferences, share ideas, and hear what other people are experiencing. I also read a lot of papers and books about experimentation and change management. So this is my context, and this is what I like to understand better.

I publish a newsletter every week where I share articles that I read and try to reflect on them a little from my perspective. This also helps me to understand: okay, what are the things that are blocking people from moving forward? I also have other work that is relevant in this area. A couple of weeks or months ago, this paper was released.

This is joint work by me and a couple of colleagues within ING, but also with Denise Fisher from bol.com, the biggest e-commerce company here in the Netherlands and Belgium, I guess. We made a deep dive into the organization, did a bunch of interviews, and then also ran a survey to quantify the results, to understand what the biggest blockers are. You can find this research online, so if you are interested, you can read it.

Another one that I wrote a couple of years ago was about server-side experimentation, brainstorming with people from 12 companies to see what is hindering people from moving on that front. So that's an interesting paper, I guess. And there is a recent one where I was a co-author.

That one is maybe more specific, but I think it's proof that you can improve your experimentation platform by building some automation and some detection on top of your third-party A/B testing tooling. So these are just things that I wanted to mention, that are out there, and that you can have a look at. Alright. Now I want to start with your situation. If we talk about continuous experimentation, we need to expand from, well, where we are today.

And most companies today are doing what we call CRO, which means conversion rate optimization. There's interesting research on this by Online Dialogue as well. If you look at vendors, tool companies, but also agencies, they pitch you: buy our tool, buy our services, and we can improve your conversion rates. And I think that's fine, but we need to go beyond that. And there are a couple of challenges here.

First of all, it's too often seen as a tactic. Right? To optimize conversion rates, we try to optimize landing pages, but we get a budget, and that budget is also shared with the AdWords team or the SEO team. So it sits in the realm of a tactic, and we are competing with others. But if you want to do experimentation on a larger scale, you are optimizing the whole process, not just one tactic.

I think the other problem is that we are so focused on winning tests. We don't talk about failures; sometimes we talk about learning, but most of the time we need to prove our ROI. How many people are working on this? How much does the tool cost? Did we deliver the ROI as a result of our investment?

And, of course, there are lots of things to learn, especially when most A/B tests "fail", so to speak. So I think the name is wrong. We're not in the game of optimizing conversion rates; that's just too narrow.

But maybe more important for you as a CRO specialist, or someone working in this industry: I think it limits our impact. We can do a lot more. This is what Craig Sullivan calls burning rubber in the A/B testing car park.

It’s lots of fun. We’re doing exciting things. There’s a lot of smoke. There’s a lot of activity, a lot of tension. But in the end, we’re still stuck in the car park.

Right? We're not getting anywhere. We're not driving the car to a nice destination. We're not moving the needle. We're just stuck there, but it feels like fun. And I think we can do more. Now, we started at zero, so let's expand. The first direction is that we need to embed experimentation. Embedding means that we are enabling others to run experiments: not only the CRO specialists, but also other people in the organization. Now let me explain the difference between CRO and embedded experimentation.

In one case, the product team owns the customer journey. They are building features, and the CRO team is doing the work of a CRO specialist: adding the A/B variant and trying to run tests. But in the end, you're interfering with what the product team is responsible for. The role of the product team here is to build and maintain features.

And the optimization happens outside of that product team. If we switch to the embedded experimentation model, the product team is also building the hypothesis, building the variant, and running the A/B test, while still building and maintaining features. And then the CRO team is enabling others: they become the center of excellence instead of the person or team doing all the work.

Now, the funny thing is that all the leaders in the industry have adopted this model. Right? Booking, Amazon, well, all the big brands you see up here: they all have this centralized center of excellence enabling others, but they don't do the experiments themselves. They don't come up with hypotheses.

They don't build the variant. Those teams enable others to do that work, and the center of excellence is responsible for taking away all the barriers. They train people, they enable tooling, they integrate everything. So that's the role of the center of excellence.

Another way of looking at it: in essence, companies are trying to optimize the customer experience, and I think this is true across the board. On one side, you have your customer. On the other side, you have a lot of teams organized in departments or tribes, and they are all optimizing for this one customer experience.

In this case, this was an example from ING: they are integrating stuff into the mobile banking app or the mobile website. Business people can use the content management system, the CMS, to create content, new pages, landing pages, etc., and developers can create features by coding them. This comes together in an integrated pipeline, which in the end is controlled and monitored; well, all the checks and balances are in place. And we push this live to the customer, and then we start iterating and, based on data, of course, improving it. Now, most CROs are currently just hacking the website, right?

It's like opening a small side door while forgetting that the whole organization is trying to optimize this. Really, implementing a third-party tool is oftentimes just client-side. The big benefit is that you can hire external developers, but in the end you're layering a different experience on top of the website or your app that all the other people in the company are not aware of. This causes a lot of risk issues, especially at a big financial institution, but I guess this is also true for many other companies. Now, the trick, of course, is not to create this backdoor, this website hacking; it's about integrating experimentation into the workflow.

Into that pipeline where on one side you have the organization and on the other side you have the customer. And this can be done with third-party tooling, but you can also build it yourself. Right? And it could well be a mix of both.

So what you often see is that there's an analytics component, a personalization component, and an experimentation component. This could be one tool, or it could be different services. Now, focusing on experimentation, I think you need to be able to do two things.

You need to be able to run pilots and rollouts, so some kind of feature toggling. On the other hand, you need some way to run controlled A/B tests, controlled randomized trials. And within that box of experimentation, there are a lot of other necessary components. First of all, you must be able to assign a visitor to a specific variant. You need to be able to manage those experiments.

You need to capture the data, and, of course, you need some kind of scorecard or dashboard. In the ideal situation, you also have a library of all the experiments that you have been running, tagged and searchable. In the scorecard, you need a statistical component that people can trust.

People need to be able to trust the results that they are getting. Oftentimes there's an API connecting those elements. If you look at features, you need to be able to toggle something on and off with an easy interface. Sometimes there's even personalization or targeting implemented here.

There needs to be a sticky ID, so that if customers come back, they will be assigned to the same variant; some kind of logic needs to be in place there. And, of course, it needs to be as secure as possible. I think this tooling aspect should be managed by one team with the sole responsibility of making it as easy as possible to integrate this stuff and to enable others to run experiments. So this is, I think, the way to go for many organizations.
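To make the sticky-ID requirement concrete, here is a minimal sketch in Python of deterministic bucketing. It illustrates the general technique only; it is not the implementation used at Vista or ING, and the function and experiment names are made up.

```python
import hashlib

def assign_variant(user_id: str, experiment_id: str,
                   variants=("control", "treatment")) -> str:
    """Deterministically bucket a user. Hashing (experiment, user) means the
    same visitor always lands in the same variant, with no stored state."""
    digest = hashlib.sha256(f"{experiment_id}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]  # roughly uniform split

# A returning visitor gets the same arm on every request:
assert assign_variant("user-42", "exp-homepage-cta") == \
       assign_variant("user-42", "exp-homepage-cta")
```

Keying the hash on the experiment ID as well keeps assignments independent across concurrent experiments.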

Now, a question for you. I'll be listing seven questions that you can take away from this. The first question is: how easy is it to make a change? In the end, you need to create teams with end-to-end responsibility and provide them with the service, the tooling, for A/B testing. Because if it's really hard to make a change, then oftentimes we go the CRO route and try to hack the website.

I think the better approach is to solve the things that are blocking you in general, because that also improves the situation for other releases you're doing. The second question you can ask is: how integrated is experimentation? How integrated is it into the pipeline for bringing something, an idea, to a customer? If it's not, I think that's probably what you should start working on today. And that doesn't mean you have to build your own experimentation platform.

You can work with a tool from the market and integrate that into your own systems. Now, the third question is related to this topic, and it's more personal. Right? If you are a CRO specialist, you probably love that the job is generic and spans domains: you're doing research; you may be proficient in HTML and CSS, building the variation in your A/B testing tool; you love some statistics, or want to do the analysis as well; and you have to present results to others.

So this is the whole life cycle of an experiment. But I think if you want to become the center of excellence, you need to let go of at least some parts of this life cycle. Coming up with the hypothesis, doing the creation, building the variant: I think that's the job of the local product teams or marketing teams, and you need to be comfortable with letting that go. Otherwise, people will keep looking at you and your team to run the experiments, and you will stay limited to your CRO bubble.

So we've been talking about how to move from zero to embedded experimentation. This is about tools and processes, but also your personal development: do you want to let go of this? And this is, I think, an important factor as well. Now, the second thing I want to talk about is expanding experimentation and becoming more strategic. So not only giving others the tools, but also doing bold stuff: moving beyond landing page optimization toward the very core processes in your organization that have an impact on customers, and trying to run experiments over there. This maybe sounds a little vague, but let me show you some examples. The first main message here is that experimentation is broader than CRO. So I think we need to call it experimentation.

I think that's a better term here, because it also entails CRO, but it's much more than that. In the end, experimentation is just applying the scientific method in our business. If you look at the scientific method, it's about observing something.

It's about coming up with a hypothesis, then saying: okay, I think this will happen. So you come up with a prediction, and then you run the experiment. And if you apply this scientific method to all the processes in your company, then you're running experiments at high velocity and at scale.

Now, of course, this concept is not new. It was popularized by Eric Ries in The Lean Startup, and it's also part of many big organizations; it's not only for startups. So you build something, you measure something, and then you learn from it, and those learnings you take into the next build again. And if you do it the other way around, if you start with: okay, what do we want to learn? What do we need to measure to come up with that learning? Then you know what you need to build. You can do this in multiple ways. The book is already quite a few years old now.

The funny thing is that most of the time, it's not build-measure-learn; it's actually lots of building, maybe some measuring, and a tiny bit of learning. It's mostly about building. Right? So we have this concept of feature factories: we just build new features, all the time.

And there's no demand for actually knowing whether it's been adopted by customers, or whether it's being appreciated by customers. I think this is the harsh reality, and if you relate: I've been there as well, so don't worry. Now, if we talk about continuous experimentation, I think there are two phases.

Here in blue are the concepts of the discovery phase. You need to continuously discover; you need to learn; you need to do research, aiming to come up with hypotheses. And those need to flow into the delivery cycle, and that's called continuous delivery. Right? Many companies have adopted agile, but most of the time this is just DevOps: developers and operations working together, picking what is most important, building that, delivering it to production, then having a look at the backlog and doing it again. This is fine, but you need to be sure that the backlog is being prioritized based on the research in that discovery phase. If you do this continuously, then we're talking about continuous experimentation. It's not a one-off, not just for one project or one A/B test.

It's your continuous life cycle of features that needs to go through this and be optimized. I think that's the big shift in many organizations now: well, we implemented agile, and maybe we ship software quicker, but is it solving the pain point of the customer? That's the next phase. Now I'm going to give you some examples, mostly from ING, of how we can become more strategic. One way we can use experimentation is to validate strategic programs.

That's maybe not an A/B test, but it is using the scientific method to see what the assumptions behind those strategic programs are. How do we know that they are true? What evidence can we gather? And maybe we should just pivot. Right? If there's no evidence, or no evidence being gathered while the program is already running, maybe we should pivot, or completely stop the program.

Ideally, of course, you want to do this before you commit to the big strategic program. But if you make this part of the life cycle of all the big projects you're running in your organization, then sometimes you just say: okay, we thought this was a good idea, we made a bet, but it's not working, so we're stopping it for now. And to me, that's also part of experimentation.

And maybe that's even the core functionality. So the question to you is: how often do you discuss strategy with your senior leaders? Especially if you're focused on landing page testing, or maybe some product pages: do you understand the big push the company is making? And do you challenge senior leaders about their vision, about their strategic program, with the experiment results you have access to? If you bring those two together, you can have a really interesting discussion, and you can move up the ladder not only career-wise but also in the impact your A/B tests or experiments can make.

Another example is that sometimes we need to apply more advanced techniques. Again from ING: there was a feature released that would make it easier for customers to start saving. So it was easy to set a savings goal or change it. And there were lots of other features in the realm of making finance easier, giving more control over your finances. Well, of course, you can measure the usage of that, but is it having an impact on the customer?

Sometimes those metrics are lagging. Right? They are so far away that you probably need other techniques to identify the causal relationship between implementing a feature and the impact. And the other thing is that sometimes you don't want to, or can't, run an A/B test: you just want to deliver something to everyone, and you don't want a control group that misses out on the benefits of the feature. So, question number five: in which areas of your business do you need to apply more advanced techniques? Maybe you have data scientists running around who are already doing this; if you start talking to each other, seeing what kinds of experiments you are running and the things they are working on, I think there are lots of interesting collaborations possible here.
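One such technique, added here as an illustration rather than an example from the talk, is a difference-in-differences comparison: when a feature ships to everyone and there is no randomized control group, you compare the metric's change for users who got the feature against the change for a comparable, untouched segment. The numbers below are invented.

```python
# Difference-in-differences sketch with made-up numbers. The estimate is only
# credible if both groups would have trended in parallel without the feature.
pre_treated, post_treated = 0.120, 0.138   # e.g. savings-goal usage rate
pre_control, post_control = 0.118, 0.121   # comparable segment, no feature

effect = (post_treated - pre_treated) - (post_control - pre_control)
print(f"Estimated effect of the feature: {effect:+.3f}")  # +0.015
```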

Now, this is probably completely different from the things you're used to. Right? If you're focused on online A/B testing, then thinking about experiments you could run on other processes in your organization is maybe a bit far-fetched, but I think it's really important. At ING, a pilot was run as a randomized controlled trial.

It offered unlimited leave. So people could say: I don't have a fixed 20 days of holiday; instead, in collaboration with my team, we come up with good agreements on how we define the work. And if that's all working fine, and you're delivering on the things you want to do and need to do, then taking another day off is, well, not a problem. Right? That's the core idea of unlimited leave.

But, of course, does that work in practice? What is the experience of people? You can do a lot of research and surveys, but in the end you get the best data if you actually try it. So in this case, ING, together with a university, did a study to see what works here and what doesn't. People who applied for this were arranged in a randomized controlled trial.

Some people just did the normal thing, and one group got the option of unlimited leave, which also depended on the rest of the team. People who didn't apply had to be part of the experiment too, and there was a control group. So this was a proper scientific study through which ING could see the impact of implementing this. I think there's going to be a paper out soon, so I can share more data on this as well.

Now, moving back to Vista: it prints a lot of things for small businesses, so in the factory there are also a lot of processes. Maybe we can run experiments by setting up machines differently or using different paper; there are a lot of things to optimize for. So this is also an area where people can run experiments, and one where we will be improving to enable that as well.

So the big question here, question number six, is: which other parts of the business should experiment? Don't focus only on the online part, the digital part. Think broader, maybe again by discussing it with senior leaders: call centers, mailing operations, factories, HR, all the parts of the organization around the digital area that can benefit from experimentation. In the end, experimentation is just a very good way to optimize the whole business. Alright.

So we've been talking about expanding experimentation in two ways: embedding it, enabling more people to do it themselves, and becoming more strategic, in other parts of the organization, and maybe with other techniques. And I think if you approach those two ways, in the end the culture of experimentation will arise almost by itself. Right?

If people are doing this, are being enabled to do this, and have the power to do this, then this culture of "what is your assumption? why don't we run an experiment here?" almost automatically arises. It's not easy.

It will take time, probably years. Maybe some people need to move within the organization, or outside of it. But I think, in the end, this is the way to go. Right?

You can't start by implementing a culture; the culture needs to arise from below. And, of course, you need to educate. You need to educate people not only about how to run an A/B test, but about how you come up with assumptions around a project.

How do you translate those into a hypothesis that you can test? And, again, this doesn't always have to be online; it can be much broader than that. At ING, I think 1,700 colleagues were trained in this methodology, so there was a big team pushing for that as well. Alright.

So you've probably been listening and thinking: okay, that is a lot, right? And there's no time. Right?

You only have time for this webinar now, maybe an hour max. But this A/B test needs to be developed; you need to move on. So, alright, where do we start?

True, it all costs time. But if you move a little bit further, I think the role of a CRO specialist, in case you are a CRO specialist now, will in the end dissolve into multiple roles. Right? We need people who are building and integrating tooling.

That's the career track I've chosen right now, at least. I like to work with developers to understand the problems and the friction out there, and to come up with solutions and implement them. But there's also a very important track to train people on the methodology.

So there's training, consulting hours, understanding the business question and coming up with the best experiment approach, teaching people how to fish. In the end, we need leaders who ask for this, leaders who give the mandate to build this experimentation capability. But we also need senior managers who ask for experiment results. So maybe you are on the management track, and this is your career path.

Become the manager who always asks: okay, what is your hypothesis? What are your assumptions? Why didn't we experiment? This is a way to go, and especially once you know the power of A/B testing, I think you are obliged to do this as well. And then, of course, we just need a lot of marketers, data analysts, UX designers, and developers who understand this approach and, more importantly, apply it on a day-to-day basis.

Right? We can't just have people telling us how important this is; we need people to do it. Use the tools, use the processes, actually run experiments, and do it yourself. Final question: what will be the next step in your career?

So if this sounds interesting, you can pick maybe one of the directions I just described. Of course, it will take a lot of time and a lot of work, but it's an interesting approach you can take. So these are the seven big questions I think you can ask today, and if you start working on answering, and probably more importantly, solving them, you will be heading towards this culture of experimentation.

Thank you. And I think we have time for questions. Yeah.

 

A:

That was a lovely presentation, Kevin. And we do have time for questions; we already have some in the questions panel. We'll go through those, and I'd also like folks to know that you are allowed to ask questions verbally. Currently you're muted, but in case you have any questions for Kevin, I'll unmute you and you can ask them directly.

So in case you have any questions for Kevin, please raise your hand; there's a hand icon that you have to click on. As soon as you click on it, I will get the signal that you have a question and I will unmute you. Meanwhile, we have a question in the chat as well, from Ricardo, who asked: "How do you handle potential interaction effects between all the experiments that run?"

 

KA:

Yes. So how do we handle interaction effects between the experiments? I think there are two ways to deal with that. One aspect is the organizational aspect.

Multiple teams working together could interfere with each other. That is solved by communication, and the center of excellence can provide tooling that gives insight into what kind of experiments are being developed, what stage they are in, and what metrics they are trying to optimize. In the end, that helps teams to understand: okay, this team is trying to do something that probably interferes with something we are developing. So that is just communication, just collaboration, knowing what is being done. Tooling can support that, but it won't solve it. The other part is that, in case overlapping experiments are being run, you need to have some models in place that account for interaction effects. There, the local team or the center of expertise can develop tools to enable people to see: okay, I've been running my experiment, but in the meantime people in another experiment were affected or were part of my experiment. Sometimes you need to cross-check whether the results differ when you segment them by the other experiments. So I think it's two approaches: communication, and providing tooling that gives that insight.
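To illustrate the cross-check Kevin describes, here is a minimal sketch, with made-up numbers, of segmenting one experiment's results by the variant of a concurrently running experiment. A lift that flips or shrinks sharply across segments hints at an interaction effect worth investigating.

```python
# rows: (variant in experiment A, variant in experiment B, visitors, conversions)
observed = [
    ("control",   "control",   5000, 250),
    ("control",   "treatment", 5000, 245),
    ("treatment", "control",   5000, 300),
    ("treatment", "treatment", 5000, 180),  # B's treatment seems to erase A's lift
]

for b_variant in ("control", "treatment"):
    rows = [r for r in observed if r[1] == b_variant]
    rate = {a: conv / n for a, _, n, conv in rows}
    lift = rate["treatment"] - rate["control"]
    print(f"Lift of experiment A within B={b_variant}: {lift:+.2%}")
```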

 

A:

Nice. We have another question; I got two questions from Mike. The first one is interesting: how many respondents did you get?

So I think he's asking about the number of poll respondents. Mike, I'll respond to that separately after the webinar; I'll have to see how many people responded. And his second question is: let's say you're running 20-plus tests per month, what's the best way to keep everything organized and documented without creating loads of manual admin work, that is, manually creating results docs, etcetera? Did you get that, Kevin, or should I repeat?

 

KA:

Yeah. So if you are approaching a level like 20-plus experiments, how do you prevent it from becoming a lot of work to manage all those things? I think the trick here is to connect the reporting, the structure of your program, with the actual work. So for example, if you are moving something from development to test, then that activity should automatically update the ticket of your experiment to reflect that it's in a new state. This may be a little bit technical, but what we currently use at Vista is Jira.

And I think lots of organizations use it. We've set up a board where we ask people to document their hypothesis, and as soon as new information comes in, or a ticket has been updated, we send this into Slack. These are all automated processes. So we take responsibility for updating people who subscribe to a specific experiment, but people still need to update the ticket with relevant information. Automation only goes so far: in the end, people still need to do the work.

Right? We need to come up with a hypothesis. We need to build the variant. That won't change, but what you need to prevent is that people have to do a whole pile of admin work on the side of that.
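As a rough sketch of this kind of automation (an illustration, not Vista's actual setup; the webhook URL and payload shape below are placeholders), a small handler can forward Jira status changes on experiment tickets into Slack, so nobody maintains status reports by hand.

```python
import json
import urllib.request

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder

def on_jira_issue_updated(event: dict) -> None:
    """Forward a (simplified) Jira 'issue updated' webhook payload to Slack,
    so experiment subscribers see status changes automatically."""
    key = event["issue"]["key"]                          # e.g. "EXP-123"
    status = event["issue"]["fields"]["status"]["name"]  # e.g. "In Test"
    summary = event["issue"]["fields"].get("summary", "")
    message = {"text": f"Experiment {key} moved to *{status}*: {summary}"}
    request = urllib.request.Request(
        SLACK_WEBHOOK_URL,
        data=json.dumps(message).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request)  # fire-and-forget notification
```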

 

A:

Kevin, this is my question. Is it documented anywhere how you do the documentation part at Vista? Normally, what we do at Wingify is document the findings in a Google Doc and maintain the repository in a Google Sheet, which is not ideal. It is a lot of work, and it's very confusing as well.

So what you do sounds interesting. Is it documented anywhere how you do it exactly, so that people like us can emulate it?

 

KA:

I think this is a nice topic for a new blog post, or maybe an ebook or paper. We haven't published about it, but I think it's interesting, and we're improving on it right now as well. It's been an OKR for the last quarter to optimize that flow in Jira, because we had a lot of questions.

Most questions were just skipped by many people, so we removed a lot of things. But we haven't published about it yet. So that's something I think we can do.

 

A:

Yeah, please do that and circulate it, at least with VWO, because we'd love to see that. Okay, we have another question: how do you get management to start focusing on experimentation efforts, on running experiments and sharing the results?

 

KA:

How do you focus management on that?

 

A:

How... yeah. How would we get management to start focusing on experimentation efforts? Yeah.

 

KA:

Yeah. I think the best approach is for senior leaders to get inspired, or almost convinced, about experimentation by someone outside of your organization. That means they go to a conference and see a presentation from Booking.com or Amazon about the number of experiments they are running, and then they come back and say to someone in the organization: okay, I want this as well. I've heard so many stories like that.

I think Booking started experimenting this way when someone joined one of the sessions from Ronny Kohavi while he was at Amazon. I think it was the CEO, even though back then it was a small company. And he came back and said to the team: I want to build this experimentation capability; we need to do this as well. So that may be hard for you to organize from within, but try to look at areas where you see good examples and try to get those in front of your senior leadership. So, yeah, I think that's always the best approach.

 

A:

Okay. I think we are getting a lot of questions. Okay? Folks, I would have you know that in case you want to ask a question directly, verbally, I will unmute you and you can ask it anytime. Okay?

So please feel free to do that. Meanwhile, we got another question from Nicole, and we have one or two more questions in line. Nicole asked: what is your vision on the state of the art of machine learning being able to predict experiment results?

 

KA:

This is a fascinating area, of course. The question behind this is: do we still need all the people running A/B tests, or can we build something that almost predicts what will come out? I do see a lot of value in meta-analysis of all the experiments you have been running within your organization. The big problem is that it's always looking back. Right? And I think that's true for all machine learning: all predictions rely on the data that we have.

And if things change in the environment, or with competitors, or you get better tooling, well, dozens of things can change, then it's oftentimes better to just run a new experiment than to rely on all kinds of old results. Having said that, I think there's a huge benefit in having 10, 20, or even 30 experiments all showing the same direction on a specific topic. Then you can come to some kind of, well, almost a truth for your customers. And, of course, that needs to be taken into account in new development.

I'm not so sure machine learning will take this away. I think the creativity of humans is still very strong, and hopefully that will separate us from machines, in the short term and in the long run as well. But who knows? I might be wrong. I don't know.

I think for the next 10 or 20 years, at least my career, your career, this is a fascinating area, and machine learning will stay mostly outside of it, I would say. That's my bet.
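To make the meta-analysis idea concrete, here is a minimal sketch, with invented numbers, that pools lift estimates from past experiments on one topic using inverse-variance (fixed-effect) weighting. It illustrates the general idea only; it is not a tool mentioned in the talk.

```python
# Relative lifts and their variances from four hypothetical past experiments.
lifts = [0.021, 0.034, 0.018, 0.029]
variances = [0.00010, 0.00025, 0.00012, 0.00020]

# Fixed-effect pooling: precision-weighted average of the individual lifts.
weights = [1.0 / v for v in variances]
pooled = sum(w * l for w, l in zip(weights, lifts)) / sum(weights)
se = (1.0 / sum(weights)) ** 0.5
print(f"Pooled lift: {pooled:.3f} (95% CI ±{1.96 * se:.3f})")
# Caveat from the talk: this always looks backward. When the environment
# shifts, a fresh experiment beats extrapolating from pooled history.
```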

 

A:

That's fascinating: getting the result of an experiment in advance. That would be interesting technology indeed.

Okay. A lot of people are also asking for the recording, so let me address that: yes, we are recording this, and we'll send it to you as well. We will also be publishing the recording as well as the slides of the webinar.

So feel free to go to the landing page a couple of days after the webinar, and you will find the recording as well as the slides there. Okay. Meanwhile, I'll also email people who have registered. "Will the slides be shared afterward?" Okay. Shaila, thanks for asking.

Okay. Kevin, where can we find your newsletter articles?

Okay. That's an excellent question, and I think I will take that one, Kevin. Along with the recording email, I'll also add the link to Kevin's newsletter.

Okay, Ricardo. So everyone who has registered will get the link to Kevin's newsletter. Okay. So, yeah, that'll be sorted.

So I think that's all the questions we have. Thank you so much for doing this; I enjoyed the webinar.

I enjoyed hosting it, and I am grateful that you chose to be our guest, a lot of persistence notwithstanding. For people who are still listening, I would have you know that Kevin said yes to this webinar a long time ago, but I only managed to get hold of him and make it happen now. Okay.

So he agreed long ago, but, you know, I am glad that it finally worked out. And I'm also grateful that we managed to do this, that we were able to help so many people and have them ask their questions directly to Kevin. Thank you so much, Kevin, for doing this.

 

KA:

You're welcome. And thanks for your persistence. It was fun to do, and I loved the questions. If anyone wants to discuss further, just reach out via the contact details here, and, well, go experiment, I would say.

 

Ajit:

Yeah. Go experiment. That's the last word we are leaving with. Thank you, folks. Thank you for joining.

Thank you, Kevin. Have a nice day. Bye.
