Decathlon’s Blueprint for Server-Side Experimentation

Join Antoine Tissier of Decathlon as he shares expert insights on server-side experimentation, AI integration, and ethical eCommerce testing for global success.

Summary

Antoine Tissier, Decathlon's Lead Experimentation Analyst, delves into server-side experimentation, contrasting it with client-side testing and sharing insights into Decathlon's robust experimentation framework. He discusses overcoming challenges in large-scale experimentation, leveraging tools like PXL and RICE for prioritization, and integrating automation for efficient monitoring.

Antoine also explores ethical practices in data handling, the role of AI, and ways to structure teams for impactful experimentation in global eCommerce. His experiences showcase the importance of data-driven decision-making and collaboration across cross-functional teams.

Key Takeaways

  • Server-side experimentation enhances risk mitigation and scalability.
  • Effective frameworks like PXL and RICE minimize biases in test prioritization.
  • Automation tools, such as alerts, streamline monitoring and decision-making.
  • Privacy-conscious practices are essential in experimentation for ethical compliance.

Transcript

NOTE: This is a raw transcript and contains grammatical errors. The curated transcript will be uploaded soon.

Vipul: Hello, everyone. Welcome to Convex 2024, VWO’s annual virtual summit for optimization experts. Thousands of brands across the globe use VWO to optimize their customer experience by gathering insights, running experiments and personalizing their purchase journey. I feel honored to have Antoine with me here, who is the lead experimentation analyst at Decathlon.

Hi Antoine, how are you today?

Antoine: I’m good, I’m good, and you?

Vipul: I’m absolutely fine, and I’m excited to sit with you and listen to your insights on the topic of server-side experimentation. I think a good way to start this conversation would be to know a little bit more about you. So could you quickly let our audience know about your journey to becoming an experimentation analyst at Decathlon, and what sparked your interest in this field?

Antoine: Yeah, okay.

Actually, it’s a bit odd. As a student, I was not interested in statistics at all. I’m not passionate about growth or money either.

And I think what interests me the most in experimentation is the ability to have an unbiased evaluation of an idea, and the fact that you can measure the impact of your work. It sits at the intersection of many areas of expertise: you have to work with UX teams, product teams, IT teams, and data teams.

Before I joined Decathlon, I used to work in an international digital agency for 11 years. I started as a developer. I was very interested in the work of a team dedicated to business performance optimization. And they had very interesting use cases, so I often borrowed books from the library.

I joined that team as a digital analyst. After six years as a digital analyst on that team, I joined a mutual health insurance company, where I worked on analytics, privacy, and experimentation. I was a little bit of a jack of all trades, so I developed experiments.

I did the analysis, I used session recordings, and it even happened that I designed the markup. Then I talked with former teammates from the digital agency I had worked for.

One of them worked for Decathlon, and I was convinced that not a lot of French companies could do experimentation properly, so I really wanted to join Decathlon for it. Actually, I think not a lot of companies have enough resources to do proper experimentation, with enough traffic, product resources, and so on. It’s better to be in a big company, I think, to do this kind of thing.

Vipul: Okay, great. This entire conversation is going to be focused on server-side experimentation.

I’ll dive straight into the topic, because not a lot of businesses are actually able to implement or execute server-side experimentation. So I’m curious to know: how would you explain server-side experimentation as a concept to someone new to this field?

And what would be the key differences between client-side and server-side experimentation?

Antoine: Okay, fine. With client-side experimentation, the content of the page is overridden with additional JavaScript and CSS after the initial content is retrieved over the network. So it might be interesting, technically speaking, if you have a tiny little change.

You can even use a ‘what you see is what you get’ interface. But you can have a few issues with client-side experimentation. For instance, you might only be able to test visual changes, and you might get a flickering effect.

I don’t know if everybody knows what the flickering effect is: you might, for instance, briefly see the A version before the B version appears. It might bias the results, it might be confusing for the users, and it might impact UX and web performance.

I have seen client-side testing rely on a JavaScript resource weighing megabytes, and it had an impact on web performance and user experience. I’m also afraid that the code created with an A/B testing tool might be a little bit of single-use code: I think sometimes you have to rewrite it almost from scratch.

So it might not always be time efficient. I’m also afraid that if the roadmap of your product team and the overrides made by an experimentation team are not aligned, things might break sometimes. So I think it’s very important to have good communication if experimentation is handled by an experimentation team rather than by the product teams. And I’m also afraid that client-side experimentation tools are sometimes used to release changes while bypassing IT resources and so on.

And I’m afraid that might sometimes be counterproductive. On the other hand, server-side experimentation tools are not only used to test hypotheses, in my opinion. You can use them for feature management and risk mitigation. For instance, if you want to mitigate the risk of releasing a new feature in an application dedicated to authentication, you might decide to ramp it up to only 10% of users at the beginning, and only for a specific segment: for instance, users who authenticate from a very specific mobile application, not the eCommerce mobile application but another one. We have a look at the load on the servers and so on, and progressively we ramp up, doing a little bit of A/B testing to mitigate the risk.

So it’s not exactly the same kind of usage, I think.
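To make this concrete, here is a minimal Python sketch of the kind of server-side ramp-up Antoine describes: a deterministic hash places each user in a bucket, and the new feature is enabled only for a configurable percentage of a specific segment. The helper names (rollout_bucket, is_feature_enabled) and the segment value are hypothetical, not Decathlon’s actual implementation.

```python
import hashlib

def rollout_bucket(user_id: str, feature_key: str) -> float:
    """Deterministically map a user to a value in [0, 100) for this feature."""
    digest = hashlib.sha256(f"{feature_key}:{user_id}".encode()).hexdigest()
    return int(digest[:8], 16) / 0xFFFFFFFF * 100

def is_feature_enabled(user_id: str, source_app: str, percentage: float) -> bool:
    """Enable the new feature only for `percentage`% of users of a specific app."""
    if source_app != "partner-mobile-app":   # hypothetical segment filter
        return False
    return rollout_bucket(user_id, "new-auth-service") < percentage

# Start at 10% of the segment, then ramp up by raising the percentage.
print(is_feature_enabled("user-42", "partner-mobile-app", percentage=10.0))
```

Because the bucketing is deterministic, the same user keeps the same variant across requests, which is what makes a gradual ramp-up double as an A/B test.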

Vipul: Right. I mentioned that a lot of companies actually find it challenging to execute server-side experimentation, and for most of them the starting point is client-side experimentation. And, as you rightly said, there are flickering issues, and it’s mostly used to test smaller changes.

In your view, since you’ve been in this industry for so long, what are the biggest challenges that companies face when trying to implement server-side experimentation?

And, at Decathlon specifically, how have you been able to overcome those hurdles?

Antoine: I think when you release a server-side experiment, it’s almost like releasing the real feature. So if you don’t have enough resources to release new features and new versions of your website, and you struggle to do a new release, I think server-side experiments are not a good fit for your resources. In that situation, you might do other things than A/B testing or server-side experimentation to mitigate the risk.

So yes, you really need resources: resources for development, resources for product discovery, and so on. It’s not relevant if you are unable to release a new version of your website once a month.

Vipul: Got it. Can you walk us through the structure of the larger experimentation team at Decathlon, and the roles that are crucial for the success of a server-side experimentation program?

Antoine: I belong to a center of excellence dedicated to experimentation. We provide tooling, support, documentation, and training for other teams. Those teams might work on the eCommerce websites, the eCommerce mobile applications, or applications dedicated to authentication, for instance.

Within our CoE, our Center of Excellence, there is a team leader, a UX researcher, three experiment strategists who assist the product managers with designing their experiments, working on hypotheses, making decisions, and so on, and myself. I sometimes do pre-analysis and post-analysis, I create tools, and I have to define the experimentation process. But actually, our team is not in charge of experimentation.

I think if we were really in charge of experimentation, we would be a bottleneck. We only provide support. Product teams are in charge of experimentation, and we want those teams to be as autonomous as they can be. They have developers, product designers, and so on.

A few hundred users have access to our experimentation platform.

Vipul: Got it. And do you manage a team?

Antoine: I do not manage the team; I’m not the team leader. I just lead the skills related to the analysis of experiments and the process around experimentation.

Vipul: Got it. But do you think there are any skills specific to server-side experimentation as compared to client-side experimentation? Let’s say you were hiring for your team: would there be a certain set of skills and abilities you would look for in a candidate hired to run a server-side experimentation program?

Antoine: I think there are no unique requirements for server-side experimentation. Actually, the people working on experimentation are product managers, developers, product designers, and so on. The experiment is just the final validation of a new change on the website, so these are just the usual product roles.

That’s it. More generally speaking, if you work on experimentation, I think it’s important to be humble, because thanks to experimentation you realize that most of the time you’re wrong. And I think it’s also important

Vipul: That’s right.

Antoine: to be responsible and honest, because the insights that are shared will be used to make decisions. There are many things to learn related to experimentation: UX, project management, statistics, and so on. You are in the middle of many areas of expertise. When you work on experimentation, for instance in a center of excellence, I think it’s also interesting to be empathetic and to be, what I would say, an [inaudible].

I mean that you keep on learning, and you do not just assume you know anything and everything about what you do.

Vipul: Got it. In my experience, because I’ve been associated with VWO for, what, eight years now, I have a certain reading of how businesses actually implement experimentation programs in their companies.

One key insight that we were able to identify is that a lot of experimentation programs die because they’re not structured well; they’re executed on a very ad hoc basis, right?

Decathlon is such a big company, a multinational brand, right? Everybody purchases their sportswear from Decathlon; at least I do, so I can safely assume other people do as well.

So I’m keen to hear, and I believe the audience would be keen to hear as well: how do you structure your experimentation programs at Decathlon? And if there are any frameworks or methodologies you can share with the audience, I think that would be great. Of course, feel free to withhold any confidential data.

We’ll be keen to understand what frameworks you use and how Decathlon thinks about running experimentation programs.

Antoine: Yeah, okay, fine. I think the way our product teams are structured helps structure our testing program, because we have many product teams. For instance, we have a product team dedicated only to product ranking on the product list pages and a team dedicated to size selection on product pages, so that gives you a little idea of how many product teams we have. We worked with the agency Speero on a goal map for each product.

This goal map helps us ensure the product’s goals are aligned with the strategy of the company. It also helps us ensure that every metric needed by our product teams is correctly supported by our experimentation platform, our analysis tools, and so on. We also have something called XOS. It stands for Experimentation Operating System.

It’s an Airtable database built by Speero. Product teams can insert experimentation ideas into it: it can be a product designer, for instance, or the product manager. Even people outside the product teams can suggest an idea using a form.

Those ideas are then moderated by the product teams. For instance, a product manager might reject a hypothesis because the test would not make sense: they might consider that the change should not be tested but should just be done, for instance because it is a bug correction, or because we would not have enough audience in the segment to properly run a test anyway.

The experimentation ideas are then prioritized based on something called the PXL score. I will just show it to you and talk about this way of prioritizing experiments that we are using. Before we start, in my opinion it is very important to be clear about the rules.

I insist on the same thing for the prioritization of experiments, even if it is handled by a single person. The thing is, I think all people who work on experimentation are aware that most of the time we are wrong, and that it is difficult not to be biased when making a decision. For instance, we might be more convinced by our own ideas, or we might underestimate or overestimate the ideas of newcomers depending on the company they were working in, their age, or how they communicate. There are frameworks that ask you to score the potential of an idea between 1 and 10.

And I’m afraid that might be difficult to answer properly without any bias. I prefer explicit questions such as: is the change above the fold? Is your hypothesis backed up by qualitative and/or quantitative data? This is why we’ve chosen to use the PXL framework created by Speero. I’m not the author of it; Speero and CXL would probably explain it way better.

To use this framework, you have to define a list of questions that are quite explicit. You can see on the slide the default ones from the official template, for instance: is the change above the fold or not? It can be totally personalized.

You can add, modify, and remove questions, and this is what we did. For instance, we have a question about which device or set of devices the experiment will run on. The hypothesis score is then computed by adding up the scores of all the rated questions.

For instance, for the example hypothesis shown on the slide, there are questions such as ‘above the fold’ and ‘noticeable within five seconds’: ‘above the fold’ is scored one, and ‘noticeable within five seconds’ is scored two.

You add up these numbers, and when all the answers are added up you have the final result, your final PXL score. So we have personalized the PXL for our own needs.

To be honest, I was not totally convinced the formula should rely on addition. For instance, an experiment that should only run on specific devices, for a very specific set of product pages, might still rank high even if the implementation is not easy. So I think it might not always be relevant to have just an addition.

This is the reason why we actually mixed PXL and RICE. As I said, the clear advantage of PXL is that you have a list of explicit questions. On the other hand, the advantage of RICE is that the formula is not a simple addition: the RICE score equals the Reach score multiplied by the Impact score and the Confidence score, divided by the Effort score.

As you can see on this slide, the Reach score is about how many people will be impacted by your project within a given period of time, so reach weighs a lot in the scoring of your experiment. On our side, the Reach score might be computed based on the pages concerned, the device category, whether the change is above the fold or not, and whether it targets a specific segment or not. The Impact score is between zero and three; we have different questions that help us define it.

For instance: is it supposed to increase user motivation? Is it noticeable within five seconds? Is the change substantial, iterative, or disruptive? The Confidence part is between zero and 100 percent.

The questions related to this section are, for instance: is your hypothesis backed up by quantitative or qualitative data? The more sources that back up the hypothesis, the better. And the Effort score is based on an explicit list of options; I will show you a screenshot of the effort questions right after.

And the thing is, as we have an Airtable database that stores every hypothesis and every experiment result, we will be able in the future to do a meta-analysis and adjust the scores accordingly. For instance, if we notice that the chance that an experiment is a winner is highly correlated with usability tests, we will, in the future, better prioritize hypotheses backed up by usability tests. Here is just a screenshot of the different questions asked in the form when you create an experiment. You can see, for instance, that the effort question is the final one: you have to explicitly say whether you think the change can be made in a few hours, will take one day, or will take more than five days. It’s not 100 percent accurate, but it still helps to prioritize without a lot of bias.
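As an illustration of mixing the two frameworks, here is a rough Python sketch in which explicit PXL-style answers feed a RICE-style formula (Reach × Impact × Confidence ÷ Effort) instead of a simple addition. The questions and weights below are invented for the example; Decathlon’s actual Airtable questions and weighting are not public.

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    above_the_fold: bool            # PXL-style explicit question
    noticeable_in_5s: bool
    backed_by_quant_data: bool
    backed_by_qual_data: bool
    monthly_users_reached: int      # rough audience of the targeted pages/devices
    effort_days: float              # from the explicit effort options (hours, 1 day, 5+ days)

def rice_like_score(h: Hypothesis) -> float:
    # Reach: users affected within the chosen period.
    reach = h.monthly_users_reached

    # Impact (0-3), built from explicit yes/no questions instead of a gut-feel 1-10.
    impact = 1.0
    impact += 1.0 if h.above_the_fold else 0.0
    impact += 1.0 if h.noticeable_in_5s else 0.0

    # Confidence (0-100%), higher when more data sources back the hypothesis.
    confidence = 0.2
    confidence += 0.4 if h.backed_by_quant_data else 0.0
    confidence += 0.4 if h.backed_by_qual_data else 0.0

    # Effort in person-days; RICE divides by effort instead of adding it.
    effort = max(h.effort_days, 0.5)

    return reach * impact * confidence / effort

idea = Hypothesis(True, True, True, False, monthly_users_reached=120_000, effort_days=3)
print(round(rice_like_score(idea)))
```

Dividing by effort is what keeps a narrow, costly experiment from outranking a broad, cheap one, which is the weakness of pure addition that Antoine points out.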

Vipul: I was looking closely at these screenshots, and the question in my head was: how can someone actually quantify something like effort? Effort is quite subjective, right?

How do you put a number against human effort? In the screenshot you shared, you actually have a certain questionnaire of sorts, right?

One that you pass on to the team, and you must have assigned some number to it. Is that correct?

Antoine: Actually, I will not answer the effort question myself; I’m not in charge of answering it. I think it should be estimated and adjusted by developers. Maybe it will be initiated by the people who have the idea in mind, and then updated with the developers.

Vipul: Got it. Since you’ve already shown the frameworks, this makes me curious to know if you have an example to share with our audience: an example of a challenging server-side experiment that you or your team might have run at Decathlon. If you can, also share what your key learnings were.

It doesn’t necessarily have to be a winning experiment. As you rightly said, most experiments fail, right? So any example of the implementation of this framework would be really helpful.

Antoine: I will start by giving a few examples of things we handle on the server side, to give an idea of what kind of things we can handle there, and then I will pick one example. For instance, we are able to A/B test whether we provide guest checkout; I think this one could have been handled on the client side.

We were also able to mitigate the risk of switching to a completely new authentication application, and we managed to mitigate the risk when we had to switch to a new search engine. We are able to continuously optimize the ranking rules on our product list pages.

This is something that does not need any new release: the product manager is able to play with the ranking rules and to test new ones without releasing any new source code on the website. We are also about to test the way sizes are grouped together. For instance, on the product list page you have size filters, and sizing is not consistent across products: it might be a 46 on one product and 44 to 48 on another.

So we are about to A/B test the way product sizes are grouped together to make it easier for the users. We are also going to A/B test product natures, and it’s not just five or ten product natures; there are a lot of product natures at Decathlon.

It would have been difficult to do that in JavaScript using client-side experimentation, I think. Also, our experimentation platform is connected to Cloudera, so we can even test the performance of two different technical stacks. It’s also partially connected to Datadog, so we can observe the impact on web performance or on errors.

If I have to pick just one experiment, I will talk about the time we had to mitigate the risk of switching to a new search engine. The old search engine was easily optimized per country. We switched to a new one that was AI powered; it was supposed not to need fine-tuning and so on, because it relies on AI.

But thanks to the progressive rollout and the A/B testing, we managed to detect that it was not good enough at first, and to progressively fine-tune the new search engine before we fully switched to it.

Vipul: Got it. I checked out your LinkedIn profile and found something really interesting. You mentioned that you had worked on some automated alerts, some kind of mechanism, right?

So I noted a question: what is it, basically? What types of alerts does it generate? And how do they help your team respond more quickly to test outcomes?

Antoine: So the first alert is related to sample ratio mismatch.

Basically, when you are doing an A/B test, you usually expect to observe, for instance, a 50/50 split between the control group and the test group. It happens maybe around 10 percent of the time at Decathlon, but for other companies too, that there is a statistically significant gap between the observed split and the expected one. You might, for instance, see 51 percent versus 49 percent with millions of users.

In this kind of situation, you’re supposed to ignore the results because you have a selection bias. And actually it’s usually not due to a flaw in the experimentation tool; it’s usually due to an issue in the experimentation code you are applying.

On our side, I think it happens a lot because we want to improve the sensitivity of our experiments; we want to remove as much noise as we can. For instance, if we are trying to update something below the fold, we only want to measure users who have scrolled enough to see the change.

This kind of thing might create an SRM. Here is just an example of something I created in the past: a Slack alert, a Slack message that is sent when an SRM issue is detected.

The advantage of having a Slack alert is that people can interact with it. If it’s sent to a Slack channel, people can reply in the chat about the alert. So I think it’s very convenient.
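For readers who want to reproduce this kind of alert, an SRM check is essentially a chi-square goodness-of-fit test comparing the observed split with the expected one. The sketch below uses scipy and an illustrative alpha threshold; the surrounding alerting pipeline (queries, Slack webhook) is omitted.

```python
from scipy.stats import chisquare

def srm_check(control_users: int, treatment_users: int,
              expected_split=(0.5, 0.5), alpha: float = 0.001) -> bool:
    """Return True when the observed split is very unlikely under the expected split."""
    total = control_users + treatment_users
    expected = [total * expected_split[0], total * expected_split[1]]
    stat, p_value = chisquare([control_users, treatment_users], f_exp=expected)
    return p_value < alpha  # True -> probable SRM: ignore results and investigate

# 51% vs 49% is harmless with small samples but a red flag with millions of users.
print(srm_check(1_020_000, 980_000))   # likely True -> send the Slack alert
print(srm_check(510, 490))             # likely False
```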

What I’d like to say is that there is sometimes a debate between build or buy. We had this discussion a few years ago, and I think it might be better to build and buy, because whatever experimentation tool you have, it can sometimes be convenient to adapt it and to create additional things on top of it. For instance, I temporarily paused this alert because I also have it elsewhere in our experimentation platform, and I’m currently working on a report to more easily detect what created the SRM issue.

At the bottom of my screen you can see a specific page. Using this kind of tool, I managed to see that on that specific page there are users in the control group and almost zero users in the test group. That is not right; it cannot happen by chance.

So it’s probably an issue, and this is what I’m working on currently. You can also have another kind of alert. This one is currently running, and I’ll probably improve it in the future.

Usually, when you are working on experimentation, people use a fixed-horizon approach. It means that, before starting the experiment, they compute the number of weeks needed to detect an impact.

Then they start the experiment. They are supposed to stop the experiment and make a decision only after, for instance, two, three, or four weeks, depending on the number of weeks they computed beforehand. Theoretically, it’s great. But if you see a minus 20 percent conversion rate, I think it’s hard to tell your stakeholders that they need to wait three more weeks before making a decision with a minus 20 percent conversion rate.

So, thanks to these sequential analysis alerts, we can make a decision early without having a negative impact on the validity of our insights. When a sequential analysis alert reaches significance, we know we really need to stop the experiment, and we can make a decision early. Here is just an example of an issue we had in the past.

A sticky add-to-cart button is something that is sometimes said to be a good practice, so a country tried to set up such an A/B test. As you can see in the control, the add-to-cart button is below the fold, so the user has to scroll to see it.

The expected variant was just to have a sticky add-to-cart so the user can easily see the add-to-cart button: they don’t have to scroll, they can directly click on it. Thanks to the alerts, we detected early that the conversion rate had actually decreased significantly, and we had to stop early. When we had a look at it, the add-to-cart rate had also decreased significantly.

It didn’t make any sense, because we were trying to highlight and enhance the visibility of the add to cart. But the thing is, I think most of the time when people do QA of a mobile website, they use Chrome emulation from a desktop or a laptop. In this case, on real devices, the smart app banner sat on top of the sticky add-to-cart button, so the change effectively just hid the add-to-cart button.

This is the reason why it hurt conversion early on, and it had to be interrupted very fast. Another thing is that we can also have alerts in case something very positive happens. Sometimes it’s a very good sign.

We are happy with it. Currently, this is the case: I think we have a plus 2.8 percent with a sequential analysis alert, so it’s good.

But sometimes, and it happens a lot, we might have a result that seems too good to be true. This is something that can happen, so we actually printed sheets with the quote of Twyman’s law. I don’t know if it’s very famous, but Twyman’s law says that any figure that looks interesting or different is usually wrong. Actually, every time we had a plus 20% or 25%, we tried to do a deep dive to better understand what was going on, and usually there was an issue: a selection bias or something that went wrong. I think with an established eCommerce website, not a new one with low-hanging fruit, it’s highly unlikely to see something higher than 5%.

If you’re working at a startup or on a small website, I think it might sometimes be realistic to have something higher, because you are trying, for instance, to switch from features to benefits, you are working on emotions, or you are trying to fix big issues. So in those specific cases it might be realistic to have a plus 10 percent and so on.

But in our case, usually when you have a big impact, it’s just bad news. That’s it.
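Sequential monitoring like this can be implemented in several ways. One common approach from the literature is the mixture sequential probability ratio test (mSPRT) behind "always valid" p-values, which lets you stop as soon as a boundary is crossed without inflating the false-positive rate. The sketch below applies it to a conversion-rate difference under a normal approximation; the mixing parameter tau and the example numbers are arbitrary, and this is not necessarily the exact method Decathlon uses.

```python
import math

def msprt_significant(conv_a: int, n_a: int, conv_b: int, n_b: int,
                      alpha: float = 0.05, tau: float = 0.01) -> bool:
    """Mixture SPRT on the difference of two conversion rates (normal approximation).

    Returns True once the mixture likelihood ratio exceeds 1/alpha, so the check
    can run continuously instead of waiting for a fixed horizon.
    """
    n = min(n_a, n_b)                      # per-arm sample size (assumes similar sizes)
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    sigma2 = p_pool * (1 - p_pool)         # per-observation variance under the null
    theta_hat = p_b - p_a                  # observed lift (can be negative)

    # Mixture likelihood ratio with a N(0, tau^2) prior on the true lift.
    denom = 2 * sigma2 + n * tau**2
    log_lr = 0.5 * math.log(2 * sigma2 / denom) \
             + (n**2 * tau**2 * theta_hat**2) / (4 * sigma2 * denom)
    return log_lr >= math.log(1 / alpha)

# Example: a large conversion drop should trigger the alert early.
print(msprt_significant(conv_a=5_000, n_a=100_000, conv_b=4_000, n_b=100_000))
```

In an alerting setup, a function like this would be evaluated on each data refresh, and a Slack or chat message sent the first time it returns True.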

Vipul: Got it. That’s a great example, Antoine. Thank you so much for sharing it. I would like to change gears and turn a bit more analytical in nature.

I would like to know: what metrics do you consider most important when evaluating the results of a server-side experiment?

Antoine: I think it doesn’t change a lot depending on whether it’s server side or client side, actually. Most of the time, I prefer to work with conversion rates, because a simple conversion rate is a binomial metric, so we are able to get a significant result faster. When it’s relevant to use the conversion rate, I prefer to use it; otherwise it might be the number of baskets or the average revenue per user.

We are also more and more trying to reduce returns: good for the planet, good for the user experience, and good for the business. So we are working on that. Maybe what might be different between the server side and the client side is that, on the server side, I think you might create more non-inferiority tests.

I think sometimes your goal is not only to increase the metric; it might also be to mitigate the risk when you release a new feature, and you just want to be sure that it does not decrease by more than the non-inferiority margin you have chosen.
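A non-inferiority check on a conversion rate can be sketched as a one-sided two-proportion z-test against the chosen margin: the question is whether the new experience is credibly no worse than control minus the margin. This is a generic textbook formulation rather than Decathlon’s exact setup, and the margin below is just an example.

```python
import math
from statistics import NormalDist

def non_inferior(conv_ctrl: int, n_ctrl: int, conv_new: int, n_new: int,
                 margin: float = 0.002, alpha: float = 0.05) -> bool:
    """One-sided z-test: H0 says the new rate is worse than control by at least `margin`."""
    p_c, p_n = conv_ctrl / n_ctrl, conv_new / n_new
    se = math.sqrt(p_c * (1 - p_c) / n_ctrl + p_n * (1 - p_n) / n_new)
    z = (p_n - p_c + margin) / se
    # Reject H0 (declare non-inferiority) when z exceeds the one-sided critical value.
    return z > NormalDist().inv_cdf(1 - alpha)

# New checkout converts at 4.98% vs 5.00% control; is it within a 0.2-point margin?
print(non_inferior(conv_ctrl=5_000, n_ctrl=100_000, conv_new=4_980, n_new=100_000))
```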

Vipul: Got it. And how do you communicate these results to all the stakeholders, especially those who are non-technical?

Antoine: Yeah, so our team is not the only team in charge of communicating about experimentation. We are lucky to work with communication teams on this, and there are different newsletters that are sent to many people.

There is, for instance, a weekly newsletter that talks about the main takeaways, global results, and so on, and that also shares a form to invite people to suggest new experiments. There are sometimes newsletters, also weekly, that are related to a specific product. For instance, there’s a newsletter dedicated to the upper funnel, with hypotheses, results, running experiments, next steps, and so on.

There’s also a monthly meeting, and during this monthly meeting we are lucky because sometimes the stakeholders themselves share the most important results. It highlights the importance of experimentation within the company, so it’s a good thing, I think.

We also have XOS, as I said, the database, and the experimentation platform itself. And we have automated Slack messages and automated Google Chat messages that are shared with all the people who might be interested in the experiments.

Vipul: Perfect.

And since AI is a hot topic these days, have you already used it, integrated AI or machine learning into your experimentation process? And if not, what are your plans?

And how do you see AI becoming part of this entire server-side experimentation process?

Antoine: I’m not entirely comfortable answering, because we have a team called AI Factory and I do not belong to this team, so I might be a little bit out of my comfort zone. Experiments are run, partially by this team, to optimize product recommendation models.

Data scientists from this team are also working on off-site price experimentation, and they made a pricing engine for second-hand products. There are also things to come related to AI, but I cannot publicly disclose any details about them so far. It’s about products and also about our experimentation platform.

Vipul: Sure, no worries, no worries. We completely respect the privacy here, Antoine.

I just have one last question to get your opinion on: while running server-side experimentation, how do you ensure user privacy? Are there any specific considerations for eCommerce brands in this regard?

Antoine: Regarding user privacy, what I like with server-side experimentation is that we are in complete control of what is sent to the platform. For instance, in our case, on our main websites, so far we do not collect IP addresses, and there are no JavaScript resources or cookies created by this platform. So there’s no direct connection between the experimentation platform and the user’s browser.

We do not send IP addresses, emails, or anything like that. So even if the data were handed over to the U.S. government and so on, it would be of no use, in my opinion, because there’s nothing there.

So yes, regarding privacy, this is the main thing I can say: we do not collect any information in the A/B testing platform without consent.

Vipul: Got it. Perfect. Thank you so much for answering all these questions so patiently, Antoine, and for sharing examples.

I’d like to get some recommendations, and I think the audience would love to have some recommendations from you when it comes to books.

So we’d love to know what books you are currently reading and what you would recommend to the audience. Of course, it doesn’t all have to be work related; it could be anything outside of work as well.

Antoine: To be honest, I purchased many books, mostly dedicated to experimentation, but I often procrastinate on reading them. I think I’m not the only one in this case. The one I’m reading now is Continuous Discovery Habits by Teresa Torres. This book was given by Decathlon to the PMs working on our main eCommerce website. I haven’t finished it yet, but I like it, to be fair; I’m aligned with everything in it, and it’s very interesting.

One more thing: when I joined Decathlon four years ago, to be honest, I was a little bit stressed out, because it was a big responsibility. There is money at stake and so on, and I didn’t want to give bad advice. So when I joined Decathlon, I read two books that I think are very interesting.

The first is Statistical Methods in Online A/B Testing by Georgi Georgiev; it’s mainly about statistics, and I think it’s very interesting. The other is Trustworthy Online Controlled Experiments by Ron Kohavi, which I think is a very good one to start with.

Vipul: Perfect. Thank you so much for the book recommendations, Antoine. A lot of people actually recommend the book by Ron Kohavi; it’s sort of a Bible for everyone who wants to learn deeply about experimentation.

Thank you so much for taking the time to speak to us and share your insights and experience with the audience today. I really loved every minute of it. Thank you so much.

And have a great day ahead.

Antoine: Thank you!

Speaker

Antoine Tissier

Lead Experimentation Analyst, Decathlon
