Building a Center of Excellence for Experimentation

Discover Atlassian's journey in building a successful experimentation COE, integrating data-driven culture and innovative strategies for growth.

Summary

Mithun, Head of Experimentation at Atlassian, shares his extensive experience in the field of experimentation, spanning eBay, Target, and Atlassian. His journey includes transitioning from an analytics role to a technical one, building experimentation platforms, and establishing centralized experimentation practices.

At Atlassian, Mithun focuses on creating a Center of Excellence (COE) for experimentation, adapting to the unique challenges of a B2B company. He emphasizes the importance of integrating experimentation into daily workflows, ensuring it's a core part of the company culture. Mithun also discusses the differences in experimentation approaches between enterprise and SMB companies, highlighting the need for customized strategies in disparate teams.

Key Takeaways

  • Experimentation is central to a company's operations, intersecting with various teams and stages of product development.
  • Building a COE involves focusing on people, processes, and platforms, with an emphasis on creating rhythms and ceremonies around experimentation.
  • The approach to experimentation differs between enterprises and SMBs, with enterprises often building in-house platforms and SMBs leaning towards third-party solutions.
  • Successful experimentation programs are marked by a culture that values data-driven decision-making and integrates experimentation into regular practices.

Transcript

[00:00:00] Vipul Bansal: Hello, everyone. Welcome to ConvEx ’23, VWO’s annual virtual summit for growth and experimentation professionals. 

[00:00:15] My name is Vipul and I’m a Senior Marketing Manager at VWO. Thousands of brands across the globe use VWO as their experimentation platform to run A/B tests on their websites, apps, and products.

[00:00:28] I’m excited to have Mithun with me here who heads experimentation at Atlassian. 

[00:00:33] Hi, Mithun. Good to have you. 

[00:00:36] Mithun Yarlagadda: Hey Vipul. Glad to be here. 

[00:00:38] Vipul Bansal: Awesome. Awesome. 

[00:00:39] So without any further ado, Mithun, let’s start off our discussion. Can you tell us about your journey as an experimentation professional and how you got into your current position at Atlassian?

[00:00:52] Mithun Yarlagadda: Sure, sure. I did my master’s in computer science, with a background in DBMS and data analytics, and right after my master’s I started at eBay. I was very lucky to be part of the experimentation team there; that team was more of a think tank for experimentation at eBay. I got my exposure to experimentation there, having started as a data analyst.

[00:01:24] Eventually I got to a point where I was acting as a liaison between the analytics side and the platform. At some point, I wanted to move to the dark side, from the analytics side to the technical side. So I ended up being the PM for the experimentation platform at eBay.

[00:01:41] Then eventually I went to Target, where the opportunity was to build an experimentation platform and a program from scratch. That was a great opportunity for me, so I jumped right on that.

[00:01:56] At Target, we worked on replacing third-party tools with an in-house experimentation platform and a program centered around that platform.

[00:02:10] I had great learnings over four years of working there, and then the opportunity at Atlassian came through. Atlassian was a different kind of challenge: eBay was a C2C company, Target was a B2C company, and Atlassian is a B2B company. So the challenges and the problem space were very different.

[00:02:36] That definitely sounded exciting to me. They had never had a centralized experimentation team, so I was hired to centralize experimentation practices and create the experimentation center of excellence there. I’ve been there for the last three years now, and it’s been a great journey so far.

[00:02:56] Vipul Bansal: That also makes me curious, Mithun, because I was checking your LinkedIn profile. You have been in the space of experimentation almost throughout your career, right?

[00:03:05] So what led you to choose conversion rate optimization/experimentation as a field to work in, and how has your experience been?

[00:03:16] Mithun Yarlagadda: Yeah, I believe experimentation always occupies a sweet spot in any enterprise or company, right?

[00:03:28] Because experimentation cuts across engineering, dev, PM, PGM, analytics; whatever the team or role, experimentation is part of it. Being on an experimentation team, you have this luxury and advantage of working with all of those teams, following a feature optimization all the way from ideation to execution, then learning from it, and being completely privy to that whole iterative circle.

[00:04:08] So that’s an awesome place to be in. You feel like you’re exactly at the center of gravity right there.

[00:04:15] Yeah, once you get a kick out of it, you understand the passion behind it. Conversion rate optimization, of course, is an important funnel.

[00:04:30] The difference is that experimentation is more of a practice framework, whereas conversion rate optimization is a very specific goal and outcome that you’re looking for. In that sense I feel the two go together, though experimentation is a little more theoretical.

[00:04:46] CRO is practical in that sense. Again, as I said, it’s about being in that sweet spot, and the biggest advantage for me is being part of the whole journey from when an idea starts, to the point that you plan around it, execute it, and learn from it. Being part of that entire life cycle is what keeps me in experimentation.

[00:05:18] Vipul Bansal: Being associated with VWO for so many years, I have had the luxury, I would say, of speaking to several experimentation experts from different verticals and company sizes, and it’s a common theme across all these conversations: it’s all about being data oriented, but also about understanding customer psychology even better through data, and thinking through how to run an experiment.

[00:05:48] What could the problem be? You create a hypothesis, you think about the test, the metrics, and everything. That is definitely very exciting to almost every experimentation expert or leader I’ve spoken to in the past seven years that I’ve been at VWO.

[00:06:05] That’s really good to hear. And again, taking a leaf from your LinkedIn profile, you’ve been in this experimentation space for close to 16-17 years now, right? And of course, you must have spoken to several other experimentation experts as well.

[00:06:23] I just wanted to get your observations on the key differences you might have noticed between the approaches an enterprise company would follow vs. a small or medium-sized company with respect to experimentation.

[00:06:40] Mithun Yarlagadda: Yeah, absolutely. 

[00:06:41] That has been my journey. 

[00:06:44] If you look at eBay, with respect to the kind of traffic and use cases it has, it was more of an enterprise, whereas Atlassian is a group of products.

[00:06:58] So in a way it’s an amalgamation of multiple product teams under the roof of Atlassian. In that sense, yes, I moved from enterprise to here, and the challenges are very different, like you pointed out. An enterprise, as I said, with the scale it has, allows you to run much better experiments.

[00:07:20] When I say much better experiments, I mean the most standard experimentation process. You don’t have to think too much about which experimentation approach to choose because you have the luxury of the data there. And in general, what I’ve seen is that at enterprise companies you don’t need to sell experimentation. It’s already there; it already exists across most of the company.

[00:07:44] So it becomes much easier for you to bring that rigor, to follow the practice, and to keep up the quality of experimentation. And the tools are already there; you can see the investments have already been made.

[00:08:04] So it becomes much easier to build that iterative process. You’re not thinking about adding new tools because that problem is already solved for you. You’re focusing entirely on the learnings from the experiments.

[00:08:18] When it comes to SMBs, the teams are very much focused on their own priorities because they need to keep rolling out features and everything.

[00:08:26] So experimentation doesn’t necessarily come as part of their prioritization. It doesn’t come as part of their regular practices because it’s not a priority. That means there is additional work we need to do with the teams to educate them, to make them understand that experimentation is not going to get in their way but is going to be a friend that helps roll out features quickly.

[00:08:49] Because SMB companies want to ship features quickly, they want their cycles to be shorter, and there’s a general assumption that experimentation is going to slow that process down, which is the biggest objection we’ve always seen.

[00:09:05] So there is that additional work needed to educate them, to work with them, to show results, and also to make sure we get investments into the underlying systems to make the experimentation process much easier. I think that is primarily the biggest difference in how the culture prevails: in one place you don’t need to sell it, vs. the other where there’s a lot of investment you need to dedicate.

[00:09:31] Investments and tools, that’s really what it comes down to. You end up hacking together the best of what you already have, making something work to the point that you can prove the efficiency, and then eventually building up those investments going forward.

[00:09:49] Vipul Bansal: This has been a topic of debate whenever we talk about enterprise and SMBs, the debate around build vs buy.

[00:09:59] I’ve noticed that a lot of enterprise businesses invest in building their own experimentation platform as you were doing at eBay as well. 

[00:10:07] I’m not really sure about Atlassian, maybe you can share. But enterprise companies do invest in building their own experimentation platform instead of buying it.

[00:10:15] While, of course, because of the resources and other reasons, small and medium businesses tend to buy it. 

[00:10:21] So, of course, would love to know from you, what is Atlassian doing and why is it doing so? 

[00:10:30] Mithun Yarlagadda: Yeah, I’m smiling because that is exactly the focus and priority we are working on right now, and it has been a topic of discussion with all the teams from the point I joined.

[00:10:50] So you’re absolutely right: in enterprise companies, as I said, you don’t need to sell the idea of experimentation. They know it’s part of their foundation stack. It is part of the infrastructure, part of the data infrastructure. It is not seen as something superficial sitting on top of existing tech.

[00:11:10] When it comes to SMBs, that’s the point: every decision is scrutinized to the point that it needs to be prioritized.

[00:11:20] Looking at long term vs. short term, build is definitely long-term focused. There is an upfront cost that you can see, but it would be cheaper over a period of time because beyond maintenance there’s no more cost, and there is great flexibility on the features that you want to build.

[00:11:35] SMBs obviously tend to look at the short term, because, as I said, it’s: do I need it right now? Okay, let me bring something in, test it, and only when I see the value do I think further, right?

[00:11:50] So yeah, they tend to lean more towards buy. That’s the journey we are also having at Atlassian. I can’t go too much into details, but at some point we wanted to invest in building something internally. Again, as I said, prioritization shifts; with all the macroeconomic things happening around us, decisions need to be re-evaluated over a period of time.

[00:12:19] So it’s tending more towards commercial, third-party solutions, where, if not for the entirety of it, you bring individual parts together as a system and make it work.

[00:12:35] So yes, SMBs definitely tend to look more at commercial offerings. There are quite a few options out there right now.

[00:12:46] If you had asked me five years ago, there weren’t many, but right now I see a good chunk of choices out there.

[00:12:54] Vipul Bansal: That’s pretty insightful. The flexibility in terms of the kind of features you want is something that enterprise businesses can afford to create, and their requirements are often that complex, or if not that complex, then very, very specific to their needs.

[00:13:14] Mithun Yarlagadda: Just to add to that, scale is where it breaks down, right? Even for an enterprise company, you can get started with something small, but sooner or later you’ll realize that when you want to scale it beyond that particular scope, it won’t work.

[00:13:31] When I say it won’t work, I mean either you definitely need to build something internally, or the cost is going to be way too much, because most of these offerings are volume-based. The number of evaluations done is going to get in the way of the vision you put in place for experimentation or optimization in general.

[00:13:55] So sooner or later that will come into play, and then there’s scale and speed, of course, right? For SMBs, maybe there is some flexibility around how much risk mitigation they take, and in general they’re okay with the speed of these systems, but for an enterprise, that is something they can’t afford.

[00:14:25] At an enterprise, it’s going to be democratic: the moment you open up the platform, you should expect experimentation to be anywhere and done by anyone. For that scale and that kind of opportunity, it’s generally much better suited to build something internally, because you know the exact requirements, the kinds of profiles, and the kinds of people that are going to access it.

[00:14:50] Whereas with the commercial solutions, I’ve observed that they start more with website optimization; they’re much more focused on client-side optimization. The recent ones I’ve seen are also moving into server-side optimization and going beyond just web optimization.

[00:15:11] I think the maturity we might see in these commercial offerings could make them much closer to what you would have if you had built in-house. We’re not there yet, but we’re getting there.

[00:15:30] Vipul Bansal: Before starting this discussion, you mentioned your time at Target and how you built a center of excellence there.

[00:15:38] So I think for the benefit of our audience, it would be great if you could elaborate on what goes into building a COE, a center of excellence, and whether you’re building anything similar at Atlassian as well.

[00:15:51] Mithun Yarlagadda: There are three core pillars we plan around, learnings I had at Target that I was fortunate enough to bring to Atlassian: basically people, process, and platforms.

[00:16:10] So it’s a mix of everything. At Target, we had the luxury of building everything from scratch because we started with a blank slate. 

[00:16:17] So as we were building the platform, we started building the culture around it, the practices around it, the rhythms around it. To the point that, for example, we had a certification for experimentation, incentivized and gamified so that if you needed to run an experiment, you needed to be certified at different levels.

[00:16:38] For example, level one is just getting familiar with the platform we built. And what would that give you? There’s a course, and of course there are certain requirements they need to meet to get certified.

[00:16:50] But the incentive is that only if they’re level one certified can they create experiments in the tool, the platform. Level two certification is mostly around data science skills, and it gives you access to creating metrics.

[00:17:07] Level three is something a little beyond that, where you can propose additional statistical methods as part of your experimentation.

[00:17:15] You can go as deep as you want with experimentation, but level three is much more advanced in that sense. And we also want a level four at some point, which would get into even more advanced experimentation.
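To make the tiered-certification idea concrete, here is a minimal sketch of how certification levels might gate platform capabilities. It is purely illustrative: the level names, actions, and permission map are hypothetical, not Target’s or Atlassian’s actual system.

```python
from enum import IntEnum

class CertLevel(IntEnum):
    """Hypothetical certification tiers mirroring the levels described above."""
    NONE = 0
    PLATFORM_BASICS = 1   # level 1: familiar with the platform
    METRIC_AUTHOR = 2     # level 2: data science skills, can create metrics
    ADVANCED_STATS = 3    # level 3: can propose additional statistical methods

# Minimum certification required for each platform action (illustrative only).
REQUIRED_LEVEL = {
    "create_experiment": CertLevel.PLATFORM_BASICS,
    "create_metric": CertLevel.METRIC_AUTHOR,
    "propose_stat_method": CertLevel.ADVANCED_STATS,
}

def can_perform(user_level: CertLevel, action: str) -> bool:
    """Return True if the user's certification level unlocks the action."""
    return user_level >= REQUIRED_LEVEL[action]

# A level-one certified user can create experiments but not metrics.
assert can_perform(CertLevel.PLATFORM_BASICS, "create_experiment")
assert not can_perform(CertLevel.PLATFORM_BASICS, "create_metric")
```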

[00:17:28] But yeah, the point is that we had the luxury of the platform and centered the practices and culture around it. That’s one part of it. But the most important part is the ceremonies and rhythms that we have, right?

[00:17:41] So we have weekly meetings, call them stand-up meetings, governance meetings, prioritization meetings, but the goal and agenda are very similar: you’re creating this rhythm of backlog grooming for your experiment ideas and reviewing the experiments currently in the pipeline.

[00:18:01] That is very important because you’re constantly putting this idea in the back of your stakeholders’ heads, the squads that are working on things: what experiments are you running today, and what new experiments are you getting into the pipeline?

[00:18:15] It’s about instilling the idea that this is an ongoing process; we are not going to get just one experiment done. And the third thing is making sure we have the right triggers around it: we invested a lot into creating dashboards, metrics, and mechanisms. For example, we track metadata quality for experiments, and we review that metadata very frequently.

[00:18:41] And then we tag experiments that may need something more done to them.

[00:18:52] Just planting the idea that there is some sort of governance council on top of it creates incentive and accountability, more on the accountability side: there is a right way to do it, and probably you want someone looking at that. Our goal was to start with volume first and make sure we built a platform that’s easy and simple enough for anyone to just create an experiment.

[00:19:19] Once we had the volume, we started focusing on the quality. There are three values you can track, the three V’s as we call them: velocity, value, and volume.

[00:19:32] So we started with volume; velocity comes next. Velocity is the speed of experimentation, right? Yes, you’re getting experiments, but how fast can you run them? And the third one is value, which is super important: it’s where we take the learnings from experiments and share them across the organization, so that there’s that evangelism of experimentation going on.

[00:19:51] At Atlassian it’s a similar kind of approach that we want to take, but the challenge is that the teams are very disparate. We have multiple product lines, multiple product teams, so the cultures are very different across them.

[00:20:04] So there’s a little more customization to that, but essentially the goals and strategies are pretty similar in that sense.

[00:20:13] Vipul Bansal: Pretty comprehensive. I mean, I can only imagine the kind of work that went into building a COE, and I think this COE discussion is definitely a big takeaway for our audience watching right now.

[00:20:27] The levels you’ve defined, the certifications and what they allow an individual to do based on their level, and also the follow-ups, the monitoring part of it: I think it’s very comprehensive. Kudos for that.

[00:20:41] Mithun Yarlagadda: Vipul, I’ll also add one more part to it: how do you start thinking about staffing the COE team? Of course, the first question is, who’s looking after your methods?

[00:20:53] You want to have a resident data analyst who analyzes the metadata you’ve captured and checks that experiments are pulled together the right way, so that you can share the learnings across.

[00:21:02] You want to have a resident testing program manager of sorts; we saw this done super effectively at eBay.

[00:21:12] There are dedicated testing program managers who work with individual product teams on ideation and backlog grooming, or just on making it easy for any team to run experiments seamlessly.

[00:21:29] Again, not every company has the luxury of a dedicated testing program manager, but I’m saying that is the dream setup.

[00:21:36] And of course there’s a product manager who manages the tool and the experimentation program. Here I’ve seen two program managers: one focusing on the tooling itself, the platform, and the other focusing on the experimentation program, creating the playbooks, the specific goals, and everything around it.

[00:21:58] But at least we were able to identify the specific skills that are needed, and if not in the same team, we were able to find those experimentation champions elsewhere, work with them, use their capacity, and see how they can help further the experimentation practices.

[00:22:23] Vipul Bansal: That also makes me think: in your opinion, what does a successful testing program look like, and how do you measure its success to conclude, basically, hey, this is a successful testing program?

[00:22:41] Mithun Yarlagadda: I think that’s a good question. 

[00:22:44] So what’s our end goal? How do we know? 

[00:22:46] So experimentation should not necessarily be an additional process on top of your product building and feature optimization. It should just be part of your daily work.

[00:23:00] What does that mean? 

[00:23:02] If you’re a developer or engineer working on a feature, you’re obviously thinking, okay, once I have my feature ready, I’m going to configure an experiment around it and have a measurement around it. If that becomes second nature, that’s one part. And as a PM, you’re constantly thinking about ideas for experimentation.

[00:23:27] You always have a list of ideas to experiment with, an experimentation backlog. That should be the norm, and people should be obsessed about it. And then overall, if the company gets to a point where they’re obsessed with metrics and measurement as the means to make any decision, you’ve moved from intuition-based thinking to data-driven decisions and experiments.

[00:23:54] The moment you have the company caring about that, there is no need for you to celebrate experimentation separately. Experimentation is simply how you turn your intuition into a data-driven decision.

[00:24:12] And a successful testing company would obviously need the right tooling: a platform that can scale and enable experimentation on any experience surface, and a good reporting mechanism that is completely self-serve, to the point that a single person can configure an experiment, run it, look at the data, and make a decision, all in one place.

[00:24:42] Again, I go back to the same three pillars. On the people side, you have good resources to manage the level of understanding about experimentation. On the process side, you have these rhythms going on: weekly stand-ups for experimentation, for example, validation meetings, and readouts that come out of experimentation learnings and are shared widely. And then comes the platform, where you have a platform aimed at self-serve goals.

[00:25:17] I go back to the 3 Vs of volume, velocity, and value. We use them the same way: at Atlassian, we try to turn those into specific OKRs.

[00:25:26] How many experiments did you run last month? That’s it. Simple question. We want to ask the same question of anyone, all the way from leadership down to the engineer.

[00:25:35] If everyone is able to answer that same question, at least it will make them think: am I running enough experiments or not? And then velocity is, okay, how many experiments can I run this year? When it comes to planning, right when teams start thinking about the feature roadmap, we want to tie it back to how many experiments they can run, given duration, efficiency, and all of that, so we get better at it.

[00:25:59] That’s a good testament to how good an experimentation program we’re running, right? Again, going back to the readouts, it’s a mix of your ceremonies.

[00:26:07] Yes, you’re running experiments, but what’s happening after that? You can even measure value as how many feature rollouts come out of experiments.

[00:26:19] You can use that as your measure. So yeah, that’s how I want to think about it.
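As a rough illustration of turning the three V’s into numbers, here is a minimal sketch that computes volume, velocity, and value from a list of experiment records. The record fields and the exact definitions (days per experiment for velocity, rollout rate for value) are assumptions for the example, not Atlassian’s actual OKR definitions.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Experiment:
    """Hypothetical experiment record; field names are illustrative."""
    started: date
    ended: date
    rolled_out: bool  # did a feature rollout come out of this experiment?

def three_vs(experiments: list[Experiment]) -> dict[str, float]:
    """Compute rough volume / velocity / value numbers from a history.

    volume:   how many experiments ran
    velocity: average days per experiment (lower means faster)
    value:    share of experiments that led to a feature rollout
    """
    volume = len(experiments)
    avg_days = sum((e.ended - e.started).days for e in experiments) / volume
    value = sum(e.rolled_out for e in experiments) / volume
    return {"volume": volume, "velocity_days": avg_days, "value_rate": value}

history = [
    Experiment(date(2023, 1, 2), date(2023, 1, 16), True),
    Experiment(date(2023, 1, 10), date(2023, 2, 7), False),
    Experiment(date(2023, 2, 1), date(2023, 2, 15), True),
]
print(three_vs(history))
# {'volume': 3, 'velocity_days': 18.67, 'value_rate': 0.67} (roughly)
```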

[00:26:26] Vipul Bansal: Great. Could you also share an example of a successful test you might have run at Atlassian and what made it successful? And of course, feel free not to share the specifics, as you just mentioned, but I think the audience would love to hear some examples from you.

[00:26:43] Mithun Yarlagadda: Yeah, sure. I have a few examples. I’ll try to obscure the exact details to make sure I don’t run into any confidentiality issues.

[00:26:52] We had an experiment where a team was trying to optimize the pricing models for our offerings. The team had to go through multiple iterations, each with a hypothesis behind it, and we have different product lines.

[00:27:13] The team ran different experiments on that, and it’s very interesting. Yes, the pricing models came out to be successful, so they were able to launch and roll them out. But what was much more helpful was that user behavior is very different for the same pricing model depending on which product users are choosing. That may sound obvious, but having the data to back it up helped the team go back, iterate on that particular pricing model a few more times, and customize it for specific product lines. Most recently they ran their second or third iteration of the pricing model, and even that was successful.

[00:27:49] I’ll give you one more interesting example from the other side. This was a completely back-end experiment; it was supposed to be just a formality.

[00:28:00] There was a back-end platform change moving the content delivery system. Again, without going into much detail, it was essentially back end; there was no experience change for the end user. What would you expect? You’d expect everything to be fine. But since this was a team we partnered with closely, we ran an experiment on it.

[00:28:21] We created an experiment around that particular rollout process, and what did we learn? That the change had a big impact on the conversion metric they were tracking, which was a super surprising mystery: why would a back-end change impact a metric like that?

[00:28:42] Very interesting. The same page was rolled out across different product lines, similar to the previous example, and different product lines had different outcomes. One specific product line was a mystery because there was a negative impact. And because we ran an experiment, the team could act.

[00:28:59] Number one, we were able to learn faster, and the team rolled back that experience so we didn’t negatively impact the entire population. Then the team went back and identified things they would never have identified otherwise. For example, they identified page-load performance issues.

[00:29:19] And there was a handshake between multiple different services. Bottom line, that experiment really helped them see an impact they never expected; it was never part of anyone’s intuition at all. So that’s another example.

[00:29:36] In one example, you have a model, it’s successful, and you gain more insights. In the other, your intuition says this is back end, nothing is going to happen, and no, it’s a surprise. There are a lot of examples like that, even down to someone wanting to revamp the whole homepage or change the tagline at the top. Again, it’s intuition based, but we said we need to run an experiment. Yes, it’s coming from leadership; yes, someone believes he’s right; but no, we need an experiment.
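The back-end example boils down to a standard A/B comparison of conversion rates between the old and new delivery paths. Here is a minimal sketch of one common way to check for such an impact, a two-proportion z-test. The numbers and function are hypothetical; the session does not describe Atlassian’s actual analysis pipeline.

```python
from math import sqrt, erfc

def two_proportion_ztest(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided two-proportion z-test: did conversion rates differ?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)                 # pooled rate under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))   # standard error
    z = (p_b - p_a) / se
    p_value = erfc(abs(z) / sqrt(2))                         # two-sided p-value
    return z, p_value

# Hypothetical numbers: control on the old delivery system, treatment on the new one.
z, p = two_proportion_ztest(conv_a=4_100, n_a=50_000, conv_b=3_800, n_b=50_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # a significant negative z would flag the regression
```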

[00:30:08] Vipul Bansal: Yeah, interesting indeed. Thanks for sharing these examples, Mithun. As the head of experimentation at Atlassian, that’s a leadership role you carry, and it’s a really big company, right? Sitting at such a big company, what are some of the leadership responsibilities in managing and scaling a large-scale experimentation program, and how do you align it with the business goals and the company stakeholders?

[00:30:38] Mithun Yarlagadda: Absolutely. That’s a very important question; thanks for asking it. So companies have different cultures within them. There’s top-down and bottom-up, but it’s very effective when it comes top-down, right?

[00:30:54] When it comes to leadership, if the leadership can remind the teams again and again that experimentation needs to be part of their operational rhythm, and if they are obsessed with measurement as part of the team’s operations in general, that would go a lot further than a casual “hey, do you want to consider experimentation?”

[00:31:28] That’s going to be a very different kind of discussion vs. something leadership strongly believes in. I’ve seen successful companies where the leadership has weekly review sessions going over the metrics, going over the experiments, obsessing over the dashboards they have. That provides both incentive and accountability to the teams, because you know that if you made a difference, it can be measured and valued; there’s evidence right there, vs. claiming or assuming, okay, I did good work and I’m hoping it gets recognized.

[00:32:10] On the other hand, if you have evidence and leadership looking at that evidence, that gives a good incentive to you and to the team, and the same goes for accountability.

[00:32:23] When we break things, we’ll know sooner. Again, when I say accountability, I don’t want to put a negative connotation on it, as in punishing someone; failing fast is essentially part of experimentation, right?

[00:32:35] When you build something and you know it might fail, you want to know as soon as you can, and you want to make amendments as soon as you can.

[00:32:46] So it’s about having a leadership group that not only believes in CRO or experimentation but also operationalizes those beliefs into practices.

[00:32:58] I think that would make a lot of difference. Another example is monthly review sessions just going over the learnings: okay, what did we learn from the experiments in the last month, and what happened to those experiments? Just having this, again, goes back to the way we tried to create a program around it.

[00:33:20] At the end of all our slides, we had one single question for the leadership: did you ask your team, in the context of any feature rollout, did you run an experiment on it?

[00:33:33] Just ask that one question; that’s all you need to do. Okay, this is great, you are shipping this feature, but did you run an experiment on it? That would go a long way.

[00:33:46] Vipul Bansal: Your views on AI and experimentation, because there’s a lot of buzz around it lately. Being in this space, how do you think organizations can leverage AI to their benefit by mixing it into experimentation? And it would be great if you could also tell us which specific areas of experimentation AI can really help in.

[00:34:13] Mithun Yarlagadda: I don’t have a specific example where we know for sure that, oh yeah, this is how AI can help, but there are definitely some areas where I see it helping. Opportunity sizing in the pre-analysis part and the post-analysis are the obvious choices in my mind, and there are also certain experimentation techniques, like multi-armed bandits, for example, where you have multiple arms and a learning component we could already call AI.

[00:34:42] There’s a learning aspect to it where you define your thresholds upfront and then let the process identify the winner on top of that.

[00:34:52] Maybe there’s more sophistication we can arrive at with the aid of AI, in how refined those learning models can be in aiding the decision of identifying a winner across the multiple variants you have. Then there’s post-analysis for traditional frequentist experiments.

[00:35:12] Post-analysis right now is still manual for the most part. There may be something there where AI generates the insights for you, so that at the end you already have them in front of you. And similarly for pre-analysis, you could probably have some process or system that identifies the gaps, the biggest opportunities.

[00:35:39] It might be done in bits and pieces, but I believe there’s room to get much more rigorous and sophisticated around opportunity sizing and insight generation for sure.

[00:35:53] On the methods themselves, I think multi-armed bandits, and sequential testing, which is also coming in; maybe there is something more to those. I believe so.
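For readers unfamiliar with the multi-armed bandit technique Mithun mentions, here is a minimal sketch of one common approach, Thompson sampling, where each variant’s conversion rate gets a Beta posterior and traffic shifts toward the apparent winner. The variants, rates, and loop are illustrative assumptions; nothing here reflects Atlassian’s implementation.

```python
import random

def thompson_pick(arms: list[dict]) -> int:
    """Pick an arm by sampling each Beta(wins+1, losses+1) posterior."""
    draws = [random.betavariate(a["wins"] + 1, a["losses"] + 1) for a in arms]
    return draws.index(max(draws))

# Hypothetical variants with unknown true conversion rates.
true_rates = [0.05, 0.06, 0.045]
arms = [{"wins": 0, "losses": 0} for _ in true_rates]

random.seed(7)
for _ in range(20_000):          # simulate 20k visitors
    i = thompson_pick(arms)      # route the visitor to an arm
    if random.random() < true_rates[i]:
        arms[i]["wins"] += 1     # visitor converted
    else:
        arms[i]["losses"] += 1

traffic = [a["wins"] + a["losses"] for a in arms]
print(traffic)  # the 6% arm should end up receiving most of the traffic
```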

[00:36:03] Vipul Bansal: Of course, everything has a pro, and everything will have a con as well. So, just curious about the flip side.

[00:36:08] Mithun Yarlagadda: Yeah, I think the risk is always there. Because of that, we don’t know yet; there’s not enough trust in the decisions that a system can give us, right?

[00:36:28] As I said, not to the full extent yet. Let’s say you have a rollout, and you get to the point where your rollout decisions are based on AI; what if something breaks down, right?

[00:36:44] Maybe you build guardrails around it. Again, that’s how I keep thinking about it, because I consider experimentation itself your best guardrail.

[00:36:58] It ensures you don’t fail too big, you don’t fail too much; it’s about mitigating that risk. So maybe experimentation and AI together, with the right guardrails, is even better. I don’t know.

[00:37:11] But that’s where I see it at this point. It also depends on the quality of the data we feed into these systems.

[00:37:23] Garbage in, garbage out; I’ve seen a lot of those cases. There can be biases because of the quality of the data we feed into the system.

[00:37:32] So how are we safeguarding against that? How do we know that’s not the case? Those are things where, at least for now, I see that we can’t just leave it alone; we cannot be solely dependent on it.

[00:37:52] There are guardrails you need to think about, and other means you need to use to check for biases, for example. As I said, it’s still evolving, but I don’t think we’re at a point where we can blindly go with a decision made in the context of a feature rollout and everything around it.

[00:38:12] That’s why, at the end of an experiment, we frame the output as a recommendation on the decision rather than saying this is exactly what you need to do.

[00:38:25] We say that, given what the data shows, we believe it’s good to roll out this particular feature. I think that’s all. If you’re using AI up to that point and have a better decision system after it, maybe that’s the guardrail we need.

[00:38:41] Vipul Bansal: Yeah, definitely. I also feel that outsourcing your decision making to AI is something we should avoid at all costs. We should keep a human in the loop and let them use their own judgment to make the final decision.

[00:38:56] It’s really lovely to hear your views on AI and experimentation; thanks for that. I think it would also be lovely for our audience to hear about the books you’re currently reading, or any web series you’re watching; it would be great to get some recommendations there as well.

[00:39:20] Mithun Yarlagadda: Oh, books. I get bored with one area, so I keep switching from fiction to nonfiction to fantasy, everywhere.

[00:39:30] I like Outliers, Freakonomics, and that kind of data book. Recently I’ve been interested in reading about Indian history and different opinions on civilizations, and ‘Sapiens’ is one of my favorite books out there.

[00:39:50] Experimentation-specific, there’s Ronny Kohavi’s book on running online experiments, and several books like that. I’ll be honest, I haven’t finished all of them.

[00:40:03] What I generally do instead is pick up the papers and the presentations. That’s much more doable for me when it’s a serious topic.

[00:40:13] Web series, again, as I said, I keep wandering across genres. It’s all over the place: crime thrillers sometimes, maybe documentaries sometimes.

[00:40:27] You might find me boring in that sense, but yeah, I keep wandering across way too many things there.

[00:40:35] Vipul Bansal: Any name that you can recommend to our audience? 

[00:40:40] Mithun Yarlagadda: I don’t know. You’ll be surprised, but I’m yet to finish Stranger Things after all these seasons. So I want to go back to it.

[00:40:46] Naruto as well; I’m into Japanese anime to a point. And then I go back to the old ones because I haven’t finished them yet.

[00:40:56] So I keep going back to them and that’s where I am right now. Yeah. 

[00:41:03] Vipul Bansal: Great. Awesome. That brings us to the end of the discussion. This has been really insightful; honestly, everything you shared is an insight in itself.

[00:41:16] The highlight for me would be the COE you’ve built and the process around it; it was really comprehensive. If I started speaking about all the insights, I’d just end up giving a summary of the entire discussion, so I’ll let the audience members pick their own insights and let me know. And of course, feel free to connect with Mithun and let him know how you found this session.

[00:41:45] So thank you once again, Mithun, for taking the time to talk to us and for sharing your knowledge on experimentation with the audience. Thank you so much, and have a great day ahead.

[00:41:55] Mithun Yarlagadda: Thanks Vipul. I’m happy to do this. Thanks.

Speaker

Mithun Yarlagadda

Experimentation & Data Science, Atlassian
