Key Takeaways
- Secure Buy-In: Ensure all stakeholders understand and support the experimentation process. This will help in creating a culture of experimentation within your organization.
- Establish a Defined Process: Create a clear, well-defined process for your experiments. This will provide a roadmap for your team to follow and ensure consistency in your experiments.
- Adhere to the Process: Stick to the process, even if an experiment doesn't yield the desired results. This will help you learn from your failures and improve future experiments.
- Create Visibility: Develop a system for tracking and sharing your experiments. This will create transparency and foster a data-driven decision-making culture.
- Learn from Successful Companies: Look at successful companies like Amazon and Booking.com for inspiration. They run thousands of experiments, and while you may not be able to match their scale, you can learn from their principles and approach.
Summary of the session
This webinar, hosted by Divyansh from VWO, featured Nils Koppelmann from 3tech discussing the importance of experimentation and conversion rate optimization. Nils emphasized the need for a strong experimentation program within a company, outlining three key building blocks: buy-in, process, and visibility. He stressed the importance of training and budgeting for experimentation efforts, as well as the necessity of having the right tools for research and A/B testing.
Nils also highlighted the crucial role of reporting in validating buy-in and maintaining a constant loop with stakeholders. He introduced the concept of ‘experimentation maturity’, explaining how it evolves from a few individuals running experiments to a company-wide culture of experimentation.
He further discussed the operational aspects of running an experimentation program, including the need for a systematic approach to monitoring experiments, defining success, and working in sprints. He encouraged attendees to document their process and regularly assess it.
Nils also mentioned a talk by Lukas Vermeer on detecting errors during experimentation. He concluded by urging attendees to check out his LinkedIn for more resources on scaling experimentation and inviting them to collaborate with his company. The webinar was interactive, encouraging attendee participation and questions.
Webinar Video
Webinar Deck
Transcription
Disclaimer: Please be aware that the content below is computer-generated, so kindly excuse any potential errors or shortcomings.
Nils Koppelmann:
Hey, man. Thanks for inviting me. Happy to be here today.
D:
Before starting with the actual discussion, I want to let attendees know that you too can participate. GoToWebinar does not allow me to switch on your cameras, but I can switch on your mics, so do share your thoughts on the questions being discussed. Send me a request using the chat or the questions box in the control panel, and I’ll be happy to unmute you. Nils, please take it away.
NK:
Cool. Cool. Give me one second because people keep calling me here.
So can you all see my screen already? I think so. Yeah. Alright.
Cool. So welcome, everyone. I’m happy to host this session today on the building blocks of a strong experimentation program.
So we work with clients ranging from smaller eCommerce brands to bigger ones, but also in other areas. And our goal as an agency is usually not just to be a siloed function where we run experiments and people in the company just get the fruits from them. Really, we want to build an experimentation program and scale it up so the company can later use that spirit of experimentation for its growth. With this talk, I want to give a bit of an idea of what we see as the three building blocks of an experimentation program; you actually saw that in the title already. But before we go in, I quickly want to tell a story. Unfortunately, I can’t reveal who it was specifically, but it was the CMO of one of our clients. A while back, I wrote him a message saying that I really appreciate working with him and his team, and that it’s always a pleasure. He replied with a bit of a story. I’ve translated it from German, because we’re a German company and the conversation initially took place in German, but he basically described how things turned around over the duration of us working together and building the experimentation program in the company. He described it as initially being a bit annoying, because it was extra work on top of how they had been rolling out features and changes to their websites and apps. But over time that actually changed, and the way they saw experimentation, the way they saw A/B testing, changed with it.
And so I’m really happy to share this, because it was a fun moment for us as well; it again shows that experimentation, once deployed in a company, and once it’s not only us working on running tests and rolling them out, can really have an impact. But that was a quick intro. If we go to the next slide, I quickly want to show you what we’ll talk through today. First we’ll look at people running experiments throughout a company, then the actual building blocks; there are basically three that we see: buy-in, process, and visibility. Then we’ll talk a bit about experimentation maturity, what it means for you, and what you can take away from it for the next day and, hopefully, for the growth of experimentation within your company. And quickly, I don’t want to make too much of it, but if you like this talk, if it’s at all interesting, check out my LinkedIn. I have some resources there about experimentation, my bits and bobs on what I think about how to scale experimentation in your company.
And, yeah, also, of course, if you wanna work with us, this is, the place to go and check out.
So, quickly, I mentioned people running experiments in your company. I like this graphic quite a bit. At the bottom end, in this circle, we have the singular case: experiments are run by maybe one or two people, but often heavily monitored by management or leadership, where every experiment first has to go through a rigid approval process. Not really what you would think of as an experimentation program, or an experimentation culture that encourages people to try things out. Right? Then there is the next bubble.
I described it as “some” because there might be some people working on experimentation, rolling out experiments and testing features, but what I see is that this is oftentimes limited to marketing. People run marketing experiments, not necessarily only A/B tests, but also ad tests, like trying out different ad sets or creatives. This is often where agencies start to come in. The company realizes, hey, we need to use experimentation; you might have heard about A/B testing, and you want to scale it to the next level. This is usually where agencies come in, or where a small, somewhat siloed team is working on experiments. And then there is, I just thought this infinity sign would describe it best, the stage where literally everyone in the company is allowed to run experiments, or at least to commission them and contribute to this process of experimentation that you want to grow. Right? So this is our idea.
Basically, our idea of how everybody can run experiments and how companies eventually grow. And you can see this in the examples of Microsoft, Booking.com, Amazon, all these big companies, where it’s not really one or two people running experiments. You can’t even imagine one person doing all of that for one of those companies; it’s really people in all the departments running experiments.
I want you to before we continue, quickly assess where you are or where your company is in these buckets. And, maybe later in the questions, you can share but also, just keep it in mind for a later portion of this webinar.
I promised some building blocks, right? And before I reveal them here on this slide (I think I already mentioned them), I quickly want to say something about how I see an experimentation program growing. An experimentation program won’t grow if you don’t have anybody enabling it. It won’t work if you don’t have anybody driving it. It won’t scale if you don’t have anybody growing it.
So the three building blocks that we see are, literally: buy-in, meaning getting leadership buy-in; then having a fundamental process, which is the core of this webinar as well, knowing each and every step in the experimentation program and having somebody look over it; and then visibility, because without visibility there is no way to grow it, since nobody will know that you run experiments. We’ll go into the individual building blocks throughout this webinar, so let’s start with buy-in as the first one. Buy-in is basically getting leadership, the people who give the money or who make the decisions, especially in companies that are less mature, to buy into the idea of experimentation.
They need to understand that experimentation, actually testing things rather than just creating a rigid roadmap of features or changes to the website, is what can really drive growth.
And they also need to understand what this does in terms of risk management and all these kinds of things. So if you were to take anything away, even without the process: get leadership to buy into the idea of experimentation. Sell them the idea, sell them what experimentation, running tests, and even the mindset behind it can do, and what kind of impact it can have on your company, your growth, and your data-driven decision-making. Right? This is step one: getting buy-in, and making buy-in from leadership and management an important part of experimentation.
Because without them, you won’t have any chance. Right? The second thing I really like to look at is setting clear goals, because if you don’t have a goal, it’s first of all very hard to sell the idea of experimentation; nobody knows which direction you’re going, what you want to achieve, and what kind of results it can lead to. And of course, if you’re already running tests, you know best that it’s very hard to know what you’re going to get out of it. People like to say, okay, we want a 30% increase in revenue, or sometimes even higher numbers. But is that really what counts? Isn’t it also partly about risk management, about making the right decisions for your company?
What I always suggest, and what we do with clients if they don’t have it already (most companies actually do, especially in the management tiers where decisions are being made), is a goal tree map, or some kind of representation of how you get where you want to go. So what I recommend is actually looking at a goal tree map, aligning with the goals you already have as a company, and working with your sprints, your quarters, your OKRs, however your company works right now. Don’t change that at all, but work with those metrics and goals to align your experimentation efforts with them.
So this is a very important step, and I would not suggest skipping it and getting directly into running experiments, because you need buy-in; if people on all levels don’t know what’s at stake, then there is no point. The third one is budget. And I also added agencies here, because especially if you’re not there yet, you need some kind of budget. But budget also extends to people: if you want to grow experimentation across the company, there might be new roles you need to hire for. And while I could make an entire presentation about how to budget for experimentation and the kinds of teams that can really help evolve your experimentation program over time, it comes down to thinking about the people you need and the people you already have, and creating a core team.
But also training: you don’t want people to just keep the same information and knowledge they have right now; you want them to actually grow over time. That generally already happens in most companies, but make sure to also budget for these things specifically for experimentation. And of course, I mean, we’re in a VWO webinar, but generally: think about what tools you need.
What research tools? What A/B testing tools? Or does your company maybe require a completely custom experimentation solution? Think about these things and get the budget for them. It’s super important, because without the budget you can try as hard as you want, but you just won’t get anywhere. In the end, the money might simply be missing. Right?
And then one thing, and we made a lot of mistakes here in the beginning, especially as an agency: what we figured out is that reporting is very, very crucial. Not just because it gives us an idea of where we are, how the program is evolving, or how individual tests perform, but also because it validates the buy-in we got initially and keeps a constant loop with the people who are still enabling us to do this. This is especially important in the beginning, because experimentation programs often start out as a pilot phase. Companies say, let’s try this for three months, six months, a year. Over this time, constantly remind people of the progress you’re making, but also share the learnings, share what went wrong.
And it’s very important to build the kind of mindset where people understand it’s not just about the revenue added, or whatever metric you’re optimizing for; it’s that people actually know what’s happening. This is super important. Later, when we talk about visibility, the third building block, I’ll talk a bit more about meeting and reporting cadence.
Alright, that’s the first building block done, I think. Before we go into the second one: funny enough, whoever at VWO created the post to promote today’s webinar included this quote, commonly attributed to W. Edwards Deming, and I’ll quickly read it to you.
“If you can’t describe what you’re doing as a process, then you just literally don’t know what you’re doing.” And this couldn’t be more true in experimentation, just as in so many other fields. The reason is that if you don’t really have a process, if you just catch a test idea here, implement an experiment there, and make a decision somewhere else, things just go their own weird ways. The real power of experimentation lies in having a process that you can optimize, and we’ll see later why this is so important. So let’s quickly dive into the process, because this is eventually what’s going to drive experimentation in your company.
And I probably won’t go into every single step, because that would take way more time than we have today. But if we look at how we, as an agency, start experimentation with clients, we usually have a couple of steps, a couple of phases, and on the next slide I’ll show you what that can look like. First, we need to define success. And you’ll see the similarities to what this means with regard to getting buy-in.
Obviously, as an agency, we most of the time already have buy-in from leadership, because they’re the ones who sign the contract with us. But defining success is so important because we need to understand, not just for reporting’s sake, what we should optimize for and where we can optimize. That’s usually phase one. Phase two is getting an understanding of your customers, understanding bottlenecks, and analyzing the website or app or whatever asset we’re optimizing as part of the scope of working with the client, and really doing deep research.
This is a part you’ll see again later, and it’s oftentimes skipped: just doing the research. It’s so easy to jump to the next best test ideas, but research is the foundation of creating good test ideas and of the plan and strategy for executing them. And then hypotheses. I see it so often, especially in amateur experimentation programs, that hypotheses come very far at the back, after implementation or even after evaluating experiment results, when people think about what they actually did. They come up with an idea and just implement it or test it, but never think about what the actual meta structure of this experiment, this A/B test, might be.
So, create sound hypotheses. I think in my recent newsletter I referred to a couple of articles about this, and I’m happy to share them on LinkedIn again, but there is very good material on how to create proper hypotheses based on the research that you’re hopefully already doing. A very, very important step.
And then planning and running your tests. Right? There are several frameworks, also shared by other agencies and people in the space, that you can use to plan how to run experiments over time. So it’s not just a one-time effort of, let’s create ten experiments and run them over the next half year, but an actual plan and strategy behind the kinds of tests you want to run, to learn more about your users but eventually also to improve the product or website you have. Prioritization obviously also goes into this, already as part of the hypothesis step. But even in running tests, there are so many things at play.
You need to design it. You need to develop it. You need to create a statistical design for the test, something that’s also oftentimes overlooked, because people jump to development too fast, or jump straight to hitting run for however many weeks in their testing tool.
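As a small illustration of what a statistical design step might contain, here is a minimal sample-size estimate for a two-proportion test using the standard normal approximation. The function name and the baseline rate, detectable effect, and power in the example are my own illustrative assumptions, not numbers from the talk:

```python
from statistics import NormalDist

def sample_size_per_variant(baseline, mde, alpha=0.05, power=0.8):
    """Rough sample size per variant for a two-proportion z-test.

    baseline: expected control conversion rate, e.g. 0.04
    mde: minimum detectable effect, absolute, e.g. 0.004 (+10% relative)
    """
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha / 2)  # two-sided significance level
    z_beta = nd.inv_cdf(power)           # desired statistical power
    p1, p2 = baseline, baseline + mde
    p_bar = (p1 + p2) / 2
    # normal-approximation formula for comparing two proportions
    n = ((z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
          + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2) / mde ** 2
    return int(n) + 1

# e.g. a 4% baseline conversion rate, hoping to detect a lift to 4.4%
n = sample_size_per_variant(0.04, 0.004)
```

Running a calculation like this before hitting "start" tells you roughly how long the test has to run, which is exactly the kind of step worth writing into the documented process.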
These are all steps where I very much encourage you to look at how you’re currently working and to write down the process very meticulously. There are things like QA: how does QA work, how is it executed on a technical level, and so on, all the way to the point where you hit that button and start the experiment. But then also, as a next step, create a system for monitoring experiments while they’re running, because you need to understand, based on metrics, whether an experiment is actually hurting you right now.
There was a great talk by Lukas Vermeer, which you can find online, about how you can detect when errors happen. It’s probably not applicable to every company that runs experiments, but it’s super important to have a plan for every step of the way, from the first insight all the way to ending an experiment. So I can only highly encourage you to write it down. It’s a bit of a tiresome process, but write the process down and also figure out ways to optimize it. And I think I skipped this: the full process that we have is usually accompanied by a more enclosed testing process that is actually iterative. So, just wanted to mention that we of course don’t redefine success in every cycle. Alright. Usually, how we work with clients is in retainers.
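To make the monitoring idea concrete, here is a minimal sketch of an automated guardrail check that flags a variant that looks significantly worse on a key metric. This is my own illustration, not the method from the error-detection talk just mentioned; the names and threshold are assumptions, and as the comment notes, a naive fixed-alpha test like this is only a starting point:

```python
from statistics import NormalDist

def guardrail_alert(ctrl_conv, ctrl_n, var_conv, var_n, alpha=0.01):
    """Return True when the variant looks significantly worse than control
    on a guardrail metric (one-sided two-proportion z-test).

    Caveat: repeatedly peeking with a fixed alpha inflates false alarms;
    production monitoring would use a sequential testing procedure.
    """
    p_ctrl = ctrl_conv / ctrl_n
    p_var = var_conv / var_n
    p_pool = (ctrl_conv + var_conv) / (ctrl_n + var_n)
    se = (p_pool * (1 - p_pool) * (1 / ctrl_n + 1 / var_n)) ** 0.5
    if se == 0:
        return False
    z = (p_var - p_ctrl) / se
    # alert only when the variant is worse, with one-sided p < alpha
    return NormalDist().cdf(z) < alpha

# nearly identical rates: no alert expected
ok = guardrail_alert(400, 10_000, 395, 10_000)
```

A check like this can run on a schedule against the testing tool's reported numbers, so a badly broken experiment gets caught before it does real damage.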
That doesn’t really matter for the scope of this presentation, but I put these slides in because this is something we use whenever we present to a client how we’ll be working together, and I just want to give you a quick glimpse at how this usually works. For some companies and situations, I’m happy to go into how this could actually look, but this can be a blueprint for how to kick off experimentation, the first couple of spins of the so-called experimentation flywheel. And you see the similarities to what I just presented. We start with defining success, then we do a deep dive, that’s at least what I called it here, where we do research and so on to find out where we are and where we want to be, find out more about users and their behavior, and of course also analyze the website. Then we work in sprints. This is also something you want to think about operationally: how do you work in these cycles? And this is something I can highly encourage: it doesn’t have to be a fancy presentation like this, it can simply be a document where you define the process and how you will operationally run and drive it. Do yourself a favor: write it down, share it with people, and make it something that you regularly reassess. And here I actually have a bit of a deep dive.
Let’s go into what these individual work packages often look like. We adjust them for clients, but you see we cover operations, we cover in what kind of sprints we work, we cover what is usually included as part of the continuous research and as part of reporting. What’s super important is not just that we do all these things, but that we have them written down somewhere. Not just because we’re an agency and need to get budget for things, but because it also helps us actually operate with this process. So that’s enough about us and how we do it; let’s go a bit into the foundations of the process. I mentioned it before: research. This is one of the things that is skipped so many times, but it can provide so much clarity and so much information that you would otherwise just miss.
In companies that already work in a user-centric way, this is maybe more obvious than in others. But put your user first. Talk with people, do qualitative analysis, look through customer-voice analysis, and make this part of the process at every step of the way, or at least at regular intervals. But also use data. So many companies have so much data that goes unused; it’s just collected over time but never really used. Use it for research.
You can find anomalies in behavior, in how people move through the customer journey and how they use the website, and use that to gather insights. But what do we do with all these insights, and later with test ideas and experiments? What often happens is people just write them down in Asana tasks, or, for more product-led companies, maybe in Jira. But what we really want is a sound, really good documentation of what happened at every step of the way. We mostly work with Airtable here, a recommendation I got from Rubin a couple of years back (maybe you’re listening), and it has been really, really helpful. As an agency, we’re now working on building our own tool, but Airtable also helps us document, scale this documentation process, and learn across clients. So make sure you’re creating a structure for documenting your process, so it doesn’t all go to waste. And the third part of the process: make sure you really understand how to create hypotheses so they actually mean something. So they’re not just a thing you do that lands somewhere in a presentation or slide deck, but the backbone of your strategy: they should actually represent what you’re doing, and they will eventually help you learn more from your experiments when you analyze them at the end.
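As a sketch of what structured documentation like this could look like, whether in Airtable or in code, here is a minimal record tying an insight to a hypothesis and an outcome. The field names and the example values are my own illustrative assumptions, not a schema from the talk:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class ExperimentRecord:
    """One documented experiment: insight -> hypothesis -> outcome.

    Field names are illustrative, not a prescribed schema.
    """
    insight: str                 # research finding that motivated the test
    hypothesis: str              # "If we change X, then Y, because Z"
    primary_metric: str
    started: Optional[date] = None
    ended: Optional[date] = None
    outcome: str = "pending"     # e.g. "win", "loss", "inconclusive"
    learnings: List[str] = field(default_factory=list)

rec = ExperimentRecord(
    insight="Session recordings show users missing the shipping-cost note",
    hypothesis=("If we show shipping costs on the product page, "
                "checkout abandonment will drop, because users are no "
                "longer surprised by costs at the last step"),
    primary_metric="checkout completion rate",
)
```

The point is less the tooling than the shape: every experiment carries its motivating insight and its hypothesis forward, so nothing gets lost between research and analysis.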
And then the fourth one, also very often missed: automation.
In the beginning, especially with Airtable, we didn’t have much automation in place. But what we realized over time is that there are so many recurring tasks that really nobody likes. Creating presentations, for example: they kind of look the same every single time, not in terms of content, but in terms of structure. But also things like gathering test ideas, notifying people when something has happened, creating tasks in your task management tool: all of this can be automated. And to continue with the bigger picture, the tool landscape can range across various things.
So we work with Airtable a lot here, and we’re also building our own Conversion Lab Studio, but then we use tools like Slack, and Asana or Jira on the client side a lot of the time. We want to connect all these tools so they make sense for us, and so we don’t have to do all the heavy lifting ourselves. What I recommend is looking at what kinds of connections you have in terms of APIs and leveraging them. It doesn’t cost a lot, but it can really help you automate processes.
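One small example of that kind of glue: building the JSON body for a Slack incoming-webhook message when an experiment finishes. The function name, message wording, and numbers are my own illustrative assumptions; only the payload construction is shown, since sending it is just one HTTP POST of this body to the webhook URL:

```python
import json

def experiment_finished_payload(name, outcome, lift_pct):
    """Build the JSON body for a Slack incoming-webhook message that
    announces a finished experiment. POSTing this body to the webhook
    URL (not shown) delivers the message to the channel."""
    text = (f"Experiment *{name}* finished: {outcome} "
            f"({lift_pct:+.1f}% on the primary metric)")
    return json.dumps({"text": text})

# hypothetical experiment name and result
payload = experiment_finished_payload("PDP shipping note", "win", 4.2)
```

The same pattern extends to the other recurring tasks mentioned above: a trigger in the documentation tool fires, a small script builds a payload, and the task tool or chat channel receives it without anyone doing the copy-paste work.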
So this of course requires you to know what your process looks like, and to identify at every step of the way where you can automate and which steps are repetitive. Right? Now let’s go into visibility, because we talked a lot about process and how to drive it, but I also want to give a bit of an idea about visibility. Without people knowing what you’re doing, even as an agency, if nobody in the company knows what we’re doing, how would we ever encourage them to participate in this process, to bring in their own ideas from other parts of the operation? So visibility is very, very important. I created this slide, I think, for a presentation at Conversion Hotel a couple of months back, in November, just to share how visibility through meetings and touchpoints over time can really help people become more engaged in this process. And something I referred to earlier: create a meeting and reporting cadence. Make sure you have certain meetings in the calendar where you talk with stakeholders and leadership, but then also think about all the other people who are, or could be, involved in the experimentation process.
What I really like to do, for example, and we’re maybe a bit of a special case because we’re an agency, is to bring new people into meetings regularly. Because if they have ideas that can help us or the company drive the experimentation process, or just come up with new ideas for experiments, then I really want that. I want people to have good ideas, and I want them to be able to put their ideas into the process, but this would never happen if there weren’t an interface for it.
Interfaces could be, for example, meetings where people share their ideas. But an interface could also be, and we usually have a Slack channel with clients, sometimes multiple, a pinned message where we tell people: hey, if you have an idea, just go to this form, quickly jot it down with a couple of extra pieces of information, and you’ll be notified when it actually makes it into an experiment. This is one way of tying people back into the process and making them part of it.
We also have scorecards and presentations that, in our case, are fully automated. When reports come in from experiments, once they’re ready and done, an automation creates these scorecards for us, which helps us understand and analyze experiments. So visibility can take a lot of shapes, depending on the company you’re in. But what I really encourage you to do is make more people part of it: be a bit of an evangelist in your organization and share with people.
That could be a newsletter, it could be a Slack channel, or it could be meetings where you sometimes come up with ideas together, make it a bit of a game, and place bets on what could work or not. Visibility will enable you to grow experimentation in the company. So if we come back to that small conversation from the beginning, you can see how a lot of these building blocks can help you go from “this is a bit of an annoyance” to actually having the team engaged. And since this was coming from the CMO of that company, who also has a good idea of what’s happening in various departments, I was very happy to see that.
Now, when we look at where you are as a company with your current experimentation program, we can also talk about experimentation maturity. Earlier we talked about how many people run or commission experiments throughout the company. If I break this down into a ladder, steps a company can walk up over time, there’s a framework called experimentation maturity with various levels. It starts with “initial”, where experiments are run on an ad hoc level. Then comes “defined”, I think it’s called, where there’s already a good process in place. Then integration, where the experimentation process is integrated across the tool landscape, for example, and into general operations.
Then management and measurement, where experimentation has actually spread across the organization. And then the ultimate level, optimization, where it’s not just about optimizing individual assets but about optimizing the process itself. If you think about where you want to be as a company, you probably want to be at the upper end of this ladder, these steps, this staircase, whatever you want to call it. And this is why it’s so important to take these building blocks, start at the bottom, get buy-in, create a process, and you’re probably already somewhere up here. Right? Even if it’s with an agency; it’s super legitimate to work with an agency in the beginning.
What’s important, though, is to make sure that the work done by the agency is not siloed in the company, but that the agency actually encourages you to run more experiments even outside of their scope. Then, step by step, you get to the point where experiments are run all across the company and where people actually have an interest in running them. And of course, this is achieved through visibility. And there we go.
Experimentation maturity is something that will take a lot of time. I wish it were a three-month project you could run in a company, but there are so many factors that contribute to it: culture; resources (we talked initially about budget); methodology, which is basically the process and all the bits and bobs that come with creating a sound process; infrastructure, making sure you have the tools and everything that’s needed (even infrastructure can mean meetings and having structure for them); and then integration, meaning leadership also making their decisions based on experiments.
And now that we talk about leadership and experiments, it’s a bit different, because previously we were talking purely about A/B tests. But this culture of making decisions based on data is where experimentation really shines, because people start understanding that their intuition might not always be right. And people will see that running experiments, which again doesn’t have to mean an A/B test, can sometimes just be a bet that you place on something; then you measure the outcome and see where you landed compared to your initial hypothesis.
So all these factors play into experimentation maturity, and that, combined with the building blocks, brings us back to the takeaways. It’s super simple, not in execution but in the way it’s constructed: get buy-in, create and define a process, stick to the process even if an experiment doesn’t work, and then create visibility over time. This is not the easiest thing, but if you take a couple of the things I suggested, and maybe come up with more ideas for creating visibility in your specific case, this can really help grow experimentation in your company and move you towards a more data-driven decision-making process.
And eventually, look at the examples of the big ones, like Amazon and Booking.com: they’re running experiments all the time, thousands of them. That’s probably not something you can achieve with just the couple of things I shared in this presentation, but the principle is the one that counts. So I encourage you: get this right, and you’re already off to a good start with experimentation in your company. And, yeah, that’s me saying thanks. I appreciate that so many people showed up, and I’m open to questions.