
Booking.com’s Playbook for Everyday Experimentation

Building a culture where testing is routine doesn’t happen overnight. This conversation breaks down how Booking.com embedded learning into day-to-day product work, from choosing what to test to handling failure, sharing insights, and scaling decisions across teams and regions—without slowing innovation.

Summary

This session looks at how experimentation became embedded in Booking.com’s product culture. Through practical examples, it shows how teams prioritize ideas, learn from failed tests, and make confident decisions at scale across products, teams, and markets.

Key Takeaways

  • Embedding testing into everyday workflows reduces bias and speeds up decision-making.
  • Moving quickly and learning often is more effective than waiting for perfect ideas.
  • Shared metrics and transparent insights help teams scale change without unintended impact.

Transcript

NOTE: This is a raw transcript and contains grammatical errors. The curated transcript will be uploaded soon.

Jordan, we’re thrilled to welcome you to Convex, VWO’s annual summit where experimentation leaders come together to share what really worked. Welcome to the summit.

Nice to be here.

Awesome. You’ve built an incredible reputation as a champion for everyday experimentation, and Booking.com has become the gold standard for turning testing into a company-wide habit.

We are so excited to dive into your playbook: how you make experimentation part of the daily rhythm, how you help teams learn, and what your learnings have been along the way. So shall we get started?

Yeah. Definitely.

Awesome, Jordan. So can you take us back to how experimentation first became a daily practice at Booking.com? What did those early steps look like? How did the culture shift start to happen?

I’m actually not a hundred percent sure how it became fully integrated at Booking. But what I do know is that when I started at Booking, and that was eight years ago, it was already part of the product development cycle.

So every product team, when they were building features, fixing bugs, or introducing new products, would always have experimentation as part of their development. You start with customer pain points and data research, then you create a hypothesis, you add more data to it, you start an experiment, you analyze, and you have this iteration loop. Experimentation was already part of that whole process.
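
The development loop described above (customer pain point, data research, hypothesis, experiment, analysis, iterate) can be sketched in code. This is a hypothetical outline, not Booking.com’s actual tooling; the lift numbers, thresholds, and function names are invented for illustration.

```python
import random

def run_experiment(hypothesis):
    """Stand-in for a real A/B test: returns a simulated conversion lift."""
    random.seed(len(hypothesis))        # deterministic for this sketch
    return random.uniform(-0.02, 0.02)  # lift between -2% and +2%

def iteration_loop(pain_point, max_iterations=3):
    """One pass through the cycle: hypothesis -> experiment -> analyze -> iterate."""
    learnings = []
    hypothesis = f"Addressing '{pain_point}' improves conversion"
    for i in range(max_iterations):
        lift = run_experiment(hypothesis)
        learnings.append((hypothesis, lift))
        if lift > 0.01:                 # a meaningful win: ship it
            return "ship", learnings
        # Otherwise refine the hypothesis with what we learned and retry.
        hypothesis = f"{hypothesis} (iteration {i + 2})"
    return "stop", learnings            # don't keep validating forever

decision, history = iteration_loop("unclear occupancy icon")
print(decision, len(history))
```

The design point mirrored here is that the experiment, not the debate, decides whether a change ships.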

So from that point of view, it’s actually quite efficient to make it part of your process. And then there’s also less discussion about whether you should or should not do it.

And that was already in place eight years ago. And the question I often get is: how did Booking become data-driven? Right?

So where is this coming from? I had the same question when I started, so I actually asked one of the old directors, and she didn’t know either. It was already there.

So the hypothesis, and it is just a hypothesis for how we became data-driven, is that it was always already there: our founders were engineers, and that was part of who they were, how they introduced it to the culture, and how they liked to work.

Airbnb, for instance, has UX designers as founders. Right? So the culture there might be a little bit different, and that may also explain why they’re reducing the number of experiments they’re running.

Because it’s such a gold standard in the industry, it’s really important to start from the get-go, and it’s really an inspiration for a lot of us. Now coming to the next part: obviously there are multiple ideas to test, and in fact product ideas are flying by the second. So how do you and your team decide what’s worth testing? Is there a method you use to prioritize which experiments actually get run?

I think that’s one of the most important and most difficult questions to answer. Right? So which experiments do you run?

Because ideas are cheap right now, especially with AI. You can just ask: okay, do the research, this is my website, give me a hundred ideas to test.

So what we usually do is start with key customer pain points. What are the pain points we want to solve for? And then from all these pain points, we get data to understand which one is the most urgent or has the biggest impact. That’s quite straightforward. But then you get to the solution part, where for every problem there’s an unlimited number of solutions. And then the question is: from all those solutions, how do you pick the right one? There are a lot of prioritization frameworks you can use, but they’re all somewhat biased.

I mean, if you ask somebody to think about the future potential impact, especially a product manager, they go: oh, it’s going to be huge. It’s going to be amazing.

And so all the ideas from that product manager will be prioritized sooner.

So we use a binary framework: every part of the prioritization framework asks yes-or-no questions to reduce bias. But even then it’s really difficult to pick the right idea. That also explains why most of our experiments actually end in failure. Right? Seventy-three percent of all our experiments don’t go live, and that is because it’s really difficult to predict which experiment you should run.
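
A binary prioritization framework like the one described, where every criterion is a yes-or-no question rather than an impact estimate, might look like this minimal sketch. The questions and example ideas are invented for illustration, not Booking.com’s actual checklist.

```python
# Hypothetical binary prioritization: every criterion is a yes/no question,
# so nobody can inflate a "future impact" estimate.
QUESTIONS = [
    "Is it tied to a known customer pain point?",
    "Do we have data suggesting the problem is real?",
    "Can we build it in under two weeks?",
    "Can we measure it with an existing metric?",
]

def score(idea_answers):
    """idea_answers: dict mapping question -> bool. Returns the yes-count."""
    return sum(bool(idea_answers.get(q, False)) for q in QUESTIONS)

def prioritize(ideas):
    """ideas: dict of name -> answers. Highest yes-count first."""
    return sorted(ideas, key=lambda name: score(ideas[name]), reverse=True)

ideas = {
    "clearer occupancy icon": {q: True for q in QUESTIONS},
    "redesign whole funnel": {QUESTIONS[0]: True},
}
print(prioritize(ideas))  # the small, well-evidenced idea ranks first
```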

And our solution is: don’t think too long. Just run an experiment. That may be our luxury, because we can run a lot of experiments all the time. So instead of thinking, okay, what is the best idea?

Okay, we have some filtering from the research and some data, but in the end, run as many experiments as fast as possible, because you don’t know. Don’t think about it too long. Just run an experiment.

If you’re just starting, and you don’t have a high experimentation velocity yet, then it will be a little bit more difficult.

Don’t use any frameworks that ask you to estimate impact, because that’s impossible. Use as much data as you can to understand.

I think that’s it. This is a super interesting question, so thanks for asking. Maybe you should ask AI that yourself, right? AI will know by now too. But this is the difference: you put high-quality ideas into the experimentation machine to see if they’re actually valuable.

Absolutely. And I think there’s no better way to learn than testing, and you’ve really championed that, obviously, at the helm of it. So maybe, Jordan, could you recall one such experiment that was very counterintuitive, where there were a lot of bets that it wasn’t worth running, but you built it through because of the velocity, and it really became an impact driver within the organization? Would you like to share that?

There’s an example I give during the talk I do on experimentation. We were trying to improve the way we show occupancy to our guests, to show how many people fit into a room. A different way to look at occupancy is to ask: how many people did you search for? So if I searched for two people and the room fits three, how do you communicate this? We changed one little icon just to make it clearer, and this experiment had such a negative impact that people were reaching out to me: hey, Jordan, what’s happening here? This is super negative. And the realization is: it’s such a small icon, and it might be a small change, but it really impacts the behavior of your customer.

And that’s why it had such a huge impact. It keeps on fascinating me that such small things can impact customer behavior so much.

Another thing is that we had a lot of discussions about forms: what works best? Do you have one big form with three questions, or three questions in three steps?

We keep on having this discussion. Seventy-three percent of our experiments are actually not adding value for customers. And it keeps on being fascinating: we try to hire the best people, we try to have reliable data and tooling, and still it’s so difficult to find something that actually moves the needle or creates an insight. Yeah. And this is why experimentation is so amazing. Right? For the curious mind, it is the best place to be in the world.

Hundred percent agree. But in a case where there are a couple of losses, a streak of losses, how do you make sure the momentum is kept up, and how does the team stay motivated and focused on the learning part of it?

Any culture tips there?

Yeah. So our teams are set up for running experiments, and we also give them nine guiding principles. And the first principle is: embrace failure. When you already see that most of your experiments are not successful, you understand this is part of the process. You try stuff, and you automatically learn that most things will fail.

So that helps when you start running experiments. But what is, I think, more difficult is understanding when to stop running experiments. Right? So you have an idea, you start running experiments, and it’s not working, not working.

You get learnings. This works a little bit; you see some behavioral metrics. But how many experiments do you run before you actually stop trying to validate this hypothesis?

Right? Is it five, ten, fifteen? It’s what they call: don’t fall in love with the solution, fall in love with the problem.

Right? That’s one of the really difficult things that you see at Booking a lot.

But at the beginning, I remember when I started running experiments, what really helped was to highlight and show the learnings. Right? People have worked: they create a hypothesis, they build experiments, they’re really proud of it, and they look forward to the result. And nobody likes to see that it’s not successful. But we always presented every experiment, with the learnings and the next steps, to a larger audience, so that everybody could still feel proud of the work they did, and everybody could still see that they’re on a trajectory to success. So sharing these learnings with a big audience really helps to make the team proud and show the progress.

And that is a perfect segue into my next question: Booking.com is known for sharing learnings fast. What practical routines or tools help spread insights across such a big organization so others can apply them?

I’m not aware that we’re actually known for sharing learnings fast. I think this is one of the big challenges at Booking. We have so many teams running experiments, and we don’t have an easy way to share those learnings. Every team shares the learnings within their own small department, and everybody there knows what’s going on. But for instance, the topic I used to work on was an app screen related to rooms.

And there were thirty teams running experiments on just that screen. So understanding what they were running, understanding what their learnings were, and then sharing it with everyone, and that’s the case for every screen.

We do have tooling for this: one internal tool that we built ourselves, which contains the hypotheses, the screenshots, all the data, the decisions, all the communication. So we have one repository of knowledge about each experiment. This helps, but finding something is really difficult. So our challenge is: how do we distill these generic learnings and then share them with the right teams?

At the moment, we don’t have the solution. Of course, the solution to everything at the moment is AI, so I’m expecting AI to solve this. We are testing an AI bot which can read all the information in our experimentation tool. It contains every experiment since the beginning of experimentation in two thousand five, and you can ask it questions, but we’re not there yet.

So, yeah, this is a challenge that we haven’t figured out yet.

On the other... oh, no. Sorry. You were saying?

Yeah. But I think that’s one incredible playbook for all of us to download: an experimentation and CRO practice that contains all the failures, the learnings, and probably what made Booking.com what it is today. That’s an incredible insight into how we can approach and build maturity within our experimentation, CRO, and personalization efforts.

One thing when it comes to sharing learnings is that we are fine with running similar experiments over time. One team might, say, add a new feature here, and then two years later a new team will have the same idea and run the same experiment. That’s totally fine, because things might change. Maybe it didn’t work then and it works now, or maybe it was a false positive then and it works now. So sharing learnings is not about preventing teams from rerunning the same experiment or the same idea; it’s that we want better ideas, and that’s helped when there’s one common way to share them.

And maybe a peek into how you manage this across geographies, because Booking.com is not just Booking.com in the UK or America.

It’s Booking.com everywhere. So how do you contextualize this information across such a vast team and organization at that scale?

Yeah.

So what we say internally is that we want to be locally relevant but globally scalable.

So for us, it’s not scalable if every country has its own website or every country is unique.

And because we want to run experiments with the biggest power, meaning the largest traffic, we want to experiment across the whole world all the time. So we run our experiments across all countries at the same time.

That being said, there are local differences, of course, when it comes to language. So we automatically translate into forty-three languages. Actually, it’s forty-four now.

We do have differences with currency, of course, and there are legal compliance differences per country. But we also know that using photos of Dutch people doesn’t work in Japan. Right? So there are local differences that we tune with our localization teams. Still, the more specific you become, the less scalable you become. So that’s the balance for us.

Absolutely.

And that’s a great peek there. Since we talked about a global marketplace with regional contexts: what really balances numbers with human stories? That is the call between quantitative data and qualitative feedback. So is there a time when user testing or feedback changed the direction of an experiment?

So most of the time, it’s the other way around. We do a lot of user research and a lot of surveys. We want to understand the biggest pain points and what our customers are thinking.

But what I see often is that people say one thing and actually do something else. So then we actually need experimentation to validate it. You use experimentation to validate the insights from the qualitative research.

On the other hand, experimentation doesn’t tell you the why. Right? That’s the amazing thing about talking to customers: you can actually try to understand the why.

So what we’re doing now is trying to use surveys at scale. It’s like a customer effort score; we ask it all across the funnel, and then we link it back to our experiments. Then we have the hard data, the conversion metrics and the behavioral metrics, and we also have what customers actually experience. Because we see that conversion is a really good proxy for customer experience, but it’s not a hundred percent. So now we’re trying to ask customers: is this actually better?

You know, what is missing? At scale, you can get really nice data points from a survey next to your experiment. Then you should be able to see when the hard data says it’s getting better and the soft data says no. Oh, okay.

So then we need to figure out what’s happening there. We’re not there yet, but that’s the trend I’m seeing, and it’s about what we actually want: we want to make sure we’re improving the experience and not just the KPIs.
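
The cross-check described here, hard conversion data next to soft survey data per experiment, could be sketched as follows. The field names, the customer-effort-score convention (lower is better), and the example numbers are all assumptions for illustration.

```python
def flag_disagreements(experiments):
    """Each experiment has a conversion lift (hard data) and a change in
    customer-effort score, where lower effort is better (soft data).
    Flag experiments where the two signals point in opposite directions."""
    flagged = []
    for exp in experiments:
        hard_better = exp["conversion_lift"] > 0
        soft_better = exp["effort_score_delta"] < 0  # effort went down = better
        if hard_better != soft_better:
            flagged.append(exp["name"])
    return flagged

experiments = [
    {"name": "sticky-price-banner", "conversion_lift": 0.8, "effort_score_delta": 0.3},
    {"name": "clearer-room-photos", "conversion_lift": 0.5, "effort_score_delta": -0.2},
]
print(flag_disagreements(experiments))  # banner lifts conversion but hurts experience
```

A flagged experiment is exactly the "hard data says better, soft data says no" case the speaker describes: a candidate for improving a KPI without improving the experience.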

I completely relate to that point. It’s the why that drives the how, and many of the solutions.

Now coming to my next part, which is around collaboration, Jordan. In an org at the scale of Booking.com, there are multiple teams, including product, tech, and marketing. How do you make sure collaboration happens seamlessly and there is an experimentation mindset across the teams?

So as I mentioned, in some parts, in this case the room screen, there are thirty teams running experiments in the same area.

And how do you combine all of this? Marketing has their own experimentation program, so on the product side, in the hotel funnel, we have limited contact with marketing. They have their own ways.

But we have another ten or twenty teams all running experiments across the funnel. So how do you manage this?

First of all, you don’t. It’s total chaos, and that’s fine. Because if anything breaks, the experimentation tool will tell us. Right? If one team is running an experiment and you break something somewhere else, the experimentation tool will say: hey, something is wrong. And you should be able to see if something is going wrong.

On the other hand, we try to communicate, and over-communicate, a lot. The teams running experiments try to share what is coming up and what they are planning to do, and then you can see if there’s any overlap or any conflict.

But whatever we try, it’s still total chaos. And I think that’s probably true in every company.

You can try to add more process, but that’s just not going to make work more fun. Or you can embrace the chaos and have the tool prevent the worst things.

And this also aligns with the discussion that has been on LinkedIn for a while about interaction effects. How do you manage interaction effects?

And I think the consensus seems to be that interaction effects between experiments don’t happen a lot, and when they do happen, the impact is limited. And maybe you should actually want this to happen, because then you can actually see where things are breaking.

And then you will learn from it. I mean, yeah.
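
A crude way to quantify the interaction effect between two overlapping experiments is the classic 2x2 contrast: compare the combined cell against what the two individual lifts would predict. The conversion numbers below are made up, and real platforms would add significance testing on top of this.

```python
def interaction_effect(cell_means):
    """cell_means: conversion rate for each (exp_a_on, exp_b_on) cell.
    Classic 2x2 interaction contrast: AB - A - B + baseline."""
    return (cell_means[(1, 1)] - cell_means[(1, 0)]
            - cell_means[(0, 1)] + cell_means[(0, 0)])

# Mostly additive example: each experiment adds roughly 0.01 on its own.
cells = {(0, 0): 0.100, (1, 0): 0.110, (0, 1): 0.112, (1, 1): 0.121}
effect = interaction_effect(cells)
print(round(effect, 3))  # close to zero: no meaningful interaction
```

When this contrast is near zero, the experiments compose additively, which matches the consensus the speaker mentions: overlapping experiments rarely distort each other much.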

But cooperation and communication are very important, and also quite difficult. I think everybody uses Slack. We also have an internal kind of Facebook-for-work where people can see what’s happening; it’s more of a pull medium, whereas Slack is a push medium.

Of course, you have email. So we have a lot of ways to communicate, and you need to over-communicate. But in the end, the most important thing is that if it breaks, the tooling lets you know it’s breaking.

And then you get back to experimentation.

Absolutely. I think chaos as a process is the way forward, not just the process or just the chaos. In twenty twenty-three, AI was on the back burner. In twenty twenty-four and twenty twenty-five, it really came front and center. And now we all know it really enables hyperscale and intelligence for a lot of teams. So how are you using AI at Booking.com, and how are you building the next generation of experimentation within your team?

Yeah. So for Booking, sorry, for my team, there are two parts... three parts, actually. There’s Booking as a company, which says AI is a key strategy.

It’s actually quite funny, because previously Booking was always a technology laggard; we were always following technology, not very innovative.

And then when AI came, we suddenly noticed that the CEO of OpenAI was on the board of Expedia, which is one of our biggest competitors. And I was like, oops, what’s happening here? So this got everybody spooked, and now AI is part of every strategy. So there are three parts. The first is the tooling that’s available to us as employees.

We have access to a lot of tooling. So when I open a browser, we use Glean, which is kind of like an internal AI.

It is based on ChatGPT. It’s connected to all our documents, our Gmail, our files, Slack, and you can ask questions, find teams, find products, create ideas. This is how we manage it at an employee level, and it has already saved a lot of work just by making searching and finding easier.

It’s quite fascinating. Then there’s the part as a product team: how do you incorporate AI in your products to make it easier for customers? There are already a couple of examples live, such as smart filters. You know, we have a gazillion filters when you search for a hotel, so how can you make it easier?

So we have AI that connects natural language with the filters, so you can actually filter easily. We have ways to summarize reviews and to summarize the differences between rooms. So we’re already using AI on the product side, and we’re learning how to develop this faster and faster. And when I hear stories that AI is not adding product value yet...

That’s not correct. We are already seeing it more and more. And it’s a different way to build products. It’s a more extensive process.

It’s more difficult to optimize these prompts, but it’s getting better and better, faster and easier. We’ve been doing this for a while now. And the last part is using AI in experimentation.

There’s a whole discussion about synthetic users, also inside Booking, but we’re not doing anything with that right now. The big question is how reliable this will be. Can you actually create agents or synthetic users that replicate all these kinds of behaviors, all these biases, all these fears? You know, when you’re trying to book a hotel and your kid walks in and you’re distracted, can it replicate that kind of behavior? Maybe it’s not even needed, but these are interesting questions.

How can you use AI for research? Right? Analyzing huge amounts of data. We’re already doing this in research, and also, as I mentioned, in the AI bot we use to ask questions across all the experiments in our experimentation tool.

So there are lots of little things, well, is it little? It’s actually quite big, that we’re using to make experimentation easier.

When it comes to analysis, we’re a bit slow there, because the current setup already makes it quite easy for teams to analyze their own experiments. But AI will make it much easier in the future to analyze your experiments and make the trade-offs you sometimes needed a data scientist for.

I think AI is testing and learning as we speak, and within the spirit of optimization it will be a really positive force for a lot of products and a lot of industries. And it’s a pleasure to learn there are so many ways it’s already having an impact.

I think it covers the next part of my questions, which is around... So one thing we need to remember is that AI is also a hype.

Right? So we need to be careful and keep on thinking: is this the right tool for the right problem?

You shouldn’t use AI for everything, for every product solution. Right?

Maybe you can just use normal machine learning, because that’s much cheaper and also better for the environment. So let’s keep on thinking. Let’s see the positive sides, but also be aware of the downsides and the risks that come with AI.

Completely agree, Jordan. So far there’s no substitute for human ingenuity. I’m yet to see a brilliant product company come out of ChatGPT. Maybe it’s there in the future, but there’s none as of yet. So I completely agree with your point that we’ll have to work out more efficient ways going forward.

And maybe also reflecting now on your really diverse experience across multiple industries: how do you think experimentation has evolved from, let’s say, finance and tech to travel? And what key industry differences do you see that could give our viewers an idea?

I think the main differences, from my experience, are in the metrics. Right? Within Booking we also have fintech departments and a supply department. So there are different parts of the business with different metrics, where the tooling needs to be implemented differently.

But the key thing, I think, is the metrics part: how do you measure impact from a fintech point of view or from a supply point of view, which is much deeper in the back end? It’s really difficult to measure the impact those kinds of changes have on the customer.

So, yeah, the main difference, from my personal experience, is the metrics.

I agree.

There’s got to be a measurable impact on each effort, be it in travel or in fintech. But are there any deeper numbers or set guardrails that you build into your experimentation program, and any advice for viewers that you would like to share?

So, definitely.

One of the things is that many teams are experimenting with different metrics in the same area. Luckily, most teams actually have the same metric, because if every team is optimizing for or working with the same metrics, that’s perfect: all the teams take the same direction. But of course, that’s not always the case.

So what we have is what we now call golden metrics, which are also guardrail metrics. These are key business metrics aligned with our Booking priorities and business goals, and these metrics are automatically added to each experiment.

That makes sure of this: hey, I’m changing this for my own department, this is an objective for my department. But, oh, wait, it’s actually hurting another team which is focusing on a business priority.

And by doing this, all teams become aware of any negative side effects on things they might not otherwise be aware of.

And this is how we try to make sure that teams have every opportunity to run any experiment they want, and they can use the metrics they need for it, but they also see what the impact is on the business metrics. Customer service tickets, for instance, is always one of those metrics.

And then there are also some specific metrics related to our Booking business priorities. And, yeah, it’s super cool to see this, because sometimes you don’t know that you’re actually breaking stuff somewhere else.
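
The idea of golden or guardrail metrics attached to every experiment can be sketched as a check that runs regardless of what a team chose to optimize. The metric names and thresholds here are invented, not Booking.com’s real guardrails.

```python
# Hypothetical guardrails: (direction, threshold % change). "min" means the
# metric must not drop below the threshold; "max" means it must not rise above.
GUARDRAILS = {
    "bookings_per_visitor": ("min", -0.5),
    "customer_service_tickets": ("max", 2.0),
}

def guardrail_violations(results):
    """results: metric -> observed % change. Return the breached guardrails."""
    breaches = []
    for metric, (direction, threshold) in GUARDRAILS.items():
        change = results.get(metric, 0.0)
        if direction == "min" and change < threshold:
            breaches.append(metric)
        if direction == "max" and change > threshold:
            breaches.append(metric)
    return breaches

# A team's experiment wins on its own metric but drags down bookings.
results = {"bookings_per_visitor": -1.2, "customer_service_tickets": 0.4}
print(guardrail_violations(results))  # the bookings drop breaches its floor
```

Because the guardrail set is shared, a team optimizing a local objective still sees when it hurts a metric another team is accountable for.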

I agree. And I think discipline is also a very important part of getting this through, being consistent with your practice. Maybe any tips or advice on getting started and building a lasting habit of experimentation, incorporating it into the daily workflows and the ideas you build and ship? Any tip that has made sure experimentation sticks with your team, and how can organizations that are just starting with experimentation aim towards being more mature in their practice?

Yeah. Let’s start with how to make it stick. I think what works best for us is actually making it part of the process.

I understand this isn’t possible for everyone, because every company has its own process. But if you can make it part of the process, then definitely do that. If it’s not, make sure there’s a top-down sponsor who can remind teams that every change should be an experiment, or that experimentation is important. And if you’re just starting, you’re probably not running a lot of experiments, so make sure any big decision is validated with an experiment.

And then, yeah, it helps to have a sponsor in leadership.

And yeah, how I personally started: I was one of these lone wolves. When you start, most of the time you see one person driving everything, what we call the lone wolf, the CRO wolf or experimentation wolf. This person is promoting it, being an ambassador, helping others, sharing learnings, organizing sessions, and experimentation may not even be part of this person’s role.

Right? It’s more of a side project, and it slowly starts growing. So it helps if you have, I mean, the carrot and the stick. Right?

The carrot is the lone-wolf hero, the specialist or experimentation person, and the stick is top-level sponsorship: someone saying, every month you tell me how many experiments you ran and what the results were. Right? So both of these things help when starting. And what also helps, and this is what I also share in my talk... actually, here I have a question for you.

Why do you think Booking actually started running experiments?

Question back to you. Right? Now that the tables are turned, let’s see how you perform. So why did Booking start running experiments?

I think it’s really important to know the right price for travel. If you are a Booking customer, you want the right price to really define the experience, and maybe the right visuals to complement it, and many other things. And that can be different for a person in India versus a person in the UK. So how would you be sure which is the right path going forward?

So that's validating product decisions. Right? But that was not the reason we started. The reason we actually started was managing expectations.

Because what was happening is that our hotels were calling us every time we made a change on our website: "Hey, you made a change. I think this might be impacting our business. What are you doing? Are you sure this is not hurting us?"

So we used experimentation to show: hey, now we actually have data showing this is not hurting your business. It's not hurting Booking. It's not breaking anything.

And that's actually how we started. So if you're starting and you have different stakeholders, there are multiple reasons to run experiments. Right? You can, like we did, manage expectations, but you could also focus on preventing losses.

Right? Because conversion rate optimization is the most difficult part, maybe you can already help your manager by saying, "Hey, let's make sure we know it's not breaking anything and we're not hurting our customers." Right? Everybody should know this.
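Jordan's "show it's not hurting" use case can be sketched as a simple one-sided check on conversion rates. This is a minimal illustration, not Booking.com's actual methodology; the `harm_check` helper and the traffic numbers are hypothetical.

```python
import math

def harm_check(conv_a, n_a, conv_b, n_b, z_crit=1.64):
    """One-sided check: is variant B significantly *worse* than control A?

    conv_a/conv_b are conversion counts, n_a/n_b are visitor counts.
    Returns (z, harmed); harmed=True means B's conversion rate is
    significantly lower than A's at the given one-sided critical value.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se                              # positive => B worse
    return z, z > z_crit

# Hypothetical traffic: control 10,000 visitors with 500 conversions,
# variant 10,000 visitors with 492 conversions.
z, harmed = harm_check(500, 10_000, 492, 10_000)      # no significant harm
```

With these numbers z is around 0.26, well below the 1.64 threshold, so the data supports telling a stakeholder "this change is not hurting the business," which is a much easier first win than proving an uplift.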

And if you're talking to some of your tech counterparts or tech stakeholders: hey, we can use this as a safe rollout. Right? You're wrapping the change in an experiment, so you can see if it's breaking anything.

If it is breaking, you can pinpoint which experiment it is, see where it's breaking, and easily stop it. You don't need to do a whole rollback; you just stop the experiment. So there are different ways to talk about experimentation with different stakeholders.
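The "experiment as safe rollout" idea can be sketched as a tiny feature-flag wrapper. This is an illustrative assumption, not Booking.com's system; `ExperimentRollout` and its methods are hypothetical names.

```python
class ExperimentRollout:
    """Minimal sketch of wrapping a change in an experiment as a safe rollout.

    Instead of deploying to everyone and rolling back on failure, only a
    fraction of traffic gets the new code path; stopping the experiment
    reverts everyone to the old path with no redeploy.
    """

    def __init__(self, name, traffic_share=0.1):
        self.name = name
        self.traffic_share = traffic_share
        self.running = True

    def in_variant(self, user_id):
        # Deterministic bucketing: the same user always gets the same path
        # within a process run.
        if not self.running:
            return False
        return (hash((self.name, user_id)) % 100) < self.traffic_share * 100

    def stop(self):
        # The kill switch: pinpoint the breaking experiment and stop it,
        # rather than rolling back the whole release.
        self.running = False

exp = ExperimentRollout("new-checkout-flow", traffic_share=0.1)
user_path = "new" if exp.in_variant(user_id=42) else "old"
exp.stop()  # something broke: everyone is back on the old path immediately
```

The design choice here is that "stop" is a data change, not a code change, which is exactly why stopping an experiment is cheaper and more precise than a full rollback.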

And then, next to improving the customer experience through conversion rate optimization, there's also innovation. Right? Big changes: major redesigns, changes in your algorithms, new products, new value propositions, and of course, now we also have AI. Right?

So does AI actually work? Does it add value? There's a whole set of use cases you can discuss and use experimentation for when you're starting.

Conversion rate optimization, or improving the experience for customers, is I think the most difficult one. So don't think you need to start with that. Start with preventing harm. That's also worth a lot of money, and it's much easier.

Yep. I agree.

And it helps with stakeholder buy-in. Exactly.

It builds out a mandate for the team, and a lot of important hurdles are automatically sorted. I completely agree. In many ways experimentation is counterintuitive; I too had a different perception, it seems. It's been an incredible peek into the massive scale of everyday experimentation at Booking.com.

Thank you so much, Jordan, for this incredible walkthrough.

If there's one thing I could add.

One of the things experimentation changed in me personally is control.

Yeah. What I always thought is that I needed to have control over which changes were happening in the area we're responsible for, what the results were, what everyone was doing, whether I was missing any information. But working at Booking, things happen so fast that you cannot stay up to date.

You cannot know everything. Most of your ideas fail, so many things are happening on the product side across multiple teams, and everybody can run their own experiments. What it taught me personally is that I can step back a little. Not let go of control entirely, but not be stressed out if I don't have full control. Right?

Because the experimentation tool will help me understand if something is breaking, and you have your long-term KPIs. In the end, what I'm trying to say is that experimentation helps me sleep better, and I hope it does for a lot of other people too.

I think that would be a really good alternate headline for us to test on this session. Yes.

Either "experimentation helps you sleep better" or "it helps you with everyday decisions." But I would obviously want to run a test on it.

Exactly. Nice. Okay.

Awesome.

Jordan, maybe as a parting thought, would you like to share any book, specific course, or reading that's been on your desk? I see there are a lot of books in the background. Any recommendation you'd like to give to our viewers?

If you want, it's right here. Okay. It's called Transformed, by Marty Cagan. This book talks about the product operating model, and this is actually how Booking works.

We didn't know we were doing this until he wrote the book, and then we said, hey, this sounds familiar. We're actually doing it.

So that's on a professional level. On a less professional level, I would recommend, where is it?

The Undoing Project. It's about the lives of Kahneman and Tversky, two psychologists.

There's also this other book, Thinking, Fast and Slow, but The Undoing Project is about their lives: how they came together and how they worked together. And it's even more fascinating when you read their backgrounds, how they became who they were, and how they arrived at these ideas. It's a fascinating insight into these two people.

So next to learning about their theories and the biases that a lot of people probably know, you also get an insight into how they became who they were, because both of them are no longer with us. So that's Transformed, and of course, Design for Impact.

It's from Erin, and it explains the whole process of experimentation in such an understandable way: the statistics, why you need a power calculation, the process of ideation. It's amazing how clearly she explains it all in one book. I'm really impressed.

It's a good one, and she was a champion of design. Yes.

A lot of wise words that are now becoming practice. So thank you for this recommendation. I think there could be a subsection of this video where we run through your entire library of books. A wholesome read.

You know, collected over the years.

Oh, so I'm not alone. Okay. Again, sorry, there are too many, but there's so much stuff to learn, so much to read. It's just fascinating.

So I think for a curious mind, experimentation is probably the best place to be. Yes.

Always be testing. So thank you. Thank you so much, Jordan, for all of these insights and the detailed walkthrough, for a lot of us to take inspiration from and really build out our own spirit of experimentation. With that, I think we'll close the session. Thanks again.

Thanks, Vas. Thank you so much for the invite. It was super fun. Thank you.

Speaker

Jorden Lentze

Senior Product Manager Apps, Booking.com
