
How AT&T Turns Data into Gold

Explore how Brianna Warthan from AT&T integrates experimentation into product development, accelerates feature launches, and leverages AI to drive product success.

Summary

Brianna Warthan, Associate Director of Product Management at AT&T, shares how experimentation drives product development and strategic decisions at AT&T. She discusses integrating client-side and server-side testing, prioritizing experiments across teams, and managing rollbacks for underperforming features. Through examples like AT&T's movers flow, she highlights the impact of structured experimentation in accelerating time-to-market and optimizing high-value features.

Brianna also explores AI's potential in automation, analytics, and scaling experimentation programs to enhance efficiency and decision-making in product management.

Key Takeaways

  • Experimentation is central to AT&T's product development, driving data-informed decisions.
  • Structured frameworks prioritize client-side testing for rapid, cost-effective insights.
  • Experimentation accelerates time-to-market by defining high-value features early.
  • AI holds promise for automating analytics, hypothesis generation, and scaling tests.

Transcript


Vipul: Hello, everyone. Welcome to Convex 2024, the annual virtual summit by VWO. Thousands of brands across the globe use VWO to optimize their customer experience by gathering insights, running experiments, and personalizing their purchase journey. I feel honored to have Brianna on stage with me; she is the Associate Director for Product Management at AT&T.

Hi, Brianna. How are you today?

Brianna: I’m doing well. Thank you very much for having me.

Vipul: Yes, I'm excited, in addition to being honored, about our conversation today, which will focus largely on product management as a practice and, most importantly, how experimentation is used as a process in the day-to-day work of a product manager. I think the audience would love to know about your background and your journey as a product management professional.

Brianna: Yeah, absolutely. My background is largely in experimentation. I joined AT&T over 10 years ago and fell into experimentation while it was still closer to a CRO program.

Over the years, we've recognized the value that experimentation can provide in driving decisions with data and equipping our product managers with known impacts as they work through their backlogs. So we've cultivated a program that has really transformed from typical content and CRO testing into impactful prototyping and product testing.

Vipul: Got it. Could you start by explaining how experimentation fits into the broader product development lifecycle at AT&T?

Brianna: Yeah, absolutely. Experimentation at AT&T is really woven into the fabric of our product management cycle; we serve as a foundational element for informed decision-making and strategic direction. We organize our efforts into four distinct categories.

When we approach our experimentation program, we align it to our operations model and consider the data output for each campaign we're working with. Product innovation is one category of work, intended to inform new product decisions and provide data that supports business cases for new investments. Product optimization is where we work closely with our funded roadmaps and product managers to optimize the decisions they make day to day in their backlogs. Product validation is where we work with product managers to rapidly test their new code, to see if their MVPs land the way they expected or if there's a need to go back and iterate. And content optimization, as a program, supports utilizing existing capabilities, making sure the capabilities out there are merchandised as effectively as they can be.

By organizing our experimentation model this way, we're working with the scope and meeting it where it's at in the delivery process; we measure concepts differently for innovation than we do for content. So the process at AT&T is really intended to test any type of scope that our customer might bring to us.

Vipul: Got it. AT&T is a very old business, right? It has a lot of products, and you handle a team focused on optimizing various different products. So as a product management leader, how do you prioritize experiments when you're dealing with diverse stakeholder needs?

Brianna: uh, that’s a really tricky problem that we over the years we have found working at a company of this size. Democratization is so important. Surfacing strategies is so important and supporting the success of your stakeholders. is very, very important.

So we leverage, um, really a center center of excellence model. So first we ration our program efforts across the different focus areas, um, for innovation, optimization, validation and optimization. We’re very deliberate about how much scope we pull in for any particular body of works that we’re certain to generate the data that our organization will Second, we dedicate resources to domain areas so that we’ve got resources available to invest in optimization at every point in our funnels. Um, that parallel track between product managers that are paid to optimize the products versus in experiment versus build the products and maintain them and make them the best they can be is really crucial to making sure that we have the autonomy to prioritize and, um, effectively according and meet all of our stakeholder needs.

Um, and then what is really, really beneficial. And helping to manage these prior type priorities is our collaborative intake process. We, we decentralize ideation and, uh, it really increases our program reach to support scope from any area of our organization, from ideation through deployment. So we might get ideas from our marketing teams.

We might get ideas from our marketing product managers. Our strategic business planning units, of course, from engineering, where the bulk of our product managers operate and then from our merchandising teams and throughout that intake process, because there’s so many participants in such a large company. We, we leverage our submission process for a readiness review, which operates as a center of excellence front door. We govern the quality and the inputs of the tickets to make sure that we’ve got, you know, enough hygiene, enough information, enough alignment to our business strategies that we can make a good impact assessment.

And then we leverage kind of a committee style review process where we loop in. People that participate frequently in our process and lead the charge for their business units, uh, to make sure that they can weigh in on intake scope items as they’re coming through the front door and they’re aware of things like potential results impacts. Uh, and, and then we share in the resourcing to make sure that people that want to participate in our concepts have the ability to do so. So we democratize everything that we can.

Vipul: Great. So, Brianna, since you manage a very large team, are these teams focused on just one specific region, or on several regions where there might be some cultural and language differences?

Brianna: We have quite a decentralized team, and over the years we've actually had a lot of different team compositions. We've had everyone fully centralized in one building, with developers, QA, and product managers all working together, and that worked fluidly and nicely.

But over the years we've decentralized, and we now have a completely international team. I lead the product managers in charge of experimentation for their site areas. We work on the AT&T.com main domain, and the folks on my team lead experimentation for their domain areas and work with the domain product managers. Now, the teams that participate in launching experiments are just everywhere. We've got folks in the Midwest, folks on the east and west coasts, and folks in Texas, of course.

Then a lot of our team is in India from a development perspective, and we have developers on the east coast as well. So we've got a team that really collaborates virtually, and it's actually pretty impressive how effective virtual communication has become.

Vipul: Are the hypotheses also different for different regions? The experimentation hypotheses, I mean.

Brianna: I think when we look at the types of ideas that come from culturally different areas, there are differences, because culturally people value different things. We tend to get a lot of really innovative, more technical efficiency tests from our offshore team partners. We do things like innovation jams, where we ask people to do side projects and submit ideas. A lot of the ideas and hypotheses that come from our more technical teams will definitely be operations-metrics oriented and efficiency oriented, and then you'll get very marketing-centric hypotheses from other areas.

Vipul: Got it. One area where I have observed rising interest from a product management perspective is feature management. It is becoming increasingly important to test features before actually rolling them out. So how do you go about it? How do you or your team use experimentation for progressive feature releases?

Brianna: The structure we have in place, which I shared earlier, organizing our work first through innovation, helps us justify the initial investment and level-set the MVP feature definition for what's important in that first step. Then, when we start working towards product optimization, that product-centric experimentation framework helps us carry the learning and the data-driven insights from the initial funding business case into product delivery. So a product manager might be working within their agile framework to decide which features to prioritize at the top of their roadmap. If they require our assistance, if they don't have the data they need to make those decisions with confidence, they'll tap the experimentation team and submit hypotheses focused on choosing which feature goes first, measuring which one has the most impact based on the rich data story they use to measure their product day to day. And then, when they do release those features, they might run validation tests to make sure they've hit the mark, or to see whether there's a need to optimize further.
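(Illustration: to make the progressive-release mechanics concrete, here is a minimal sketch of a percentage-based feature gate with deterministic bucketing, so a rollout can widen from, say, 5% to 50% without reshuffling users. The names and numbers are assumptions for the example, not AT&T's actual tooling.)

```typescript
// Hypothetical percentage-based feature gate for progressive releases.
// Node.js TypeScript; the gate name and user id are illustrative.
import { createHash } from "crypto";

interface FeatureGate {
  name: string;
  rolloutPercent: number; // 0-100: share of users who get the new feature
}

// Deterministically map a user to a bucket in [0, 100). The same user always
// lands in the same bucket, so widening rolloutPercent only adds users.
function bucketFor(userId: string, featureName: string): number {
  const digest = createHash("sha256").update(`${featureName}:${userId}`).digest();
  return digest.readUInt32BE(0) % 100;
}

function isEnabled(gate: FeatureGate, userId: string): boolean {
  return bucketFor(userId, gate.name) < gate.rolloutPercent;
}

// Start narrow, then widen once validation tests look healthy.
const gate: FeatureGate = { name: "movers-online-checkout", rolloutPercent: 5 };
console.log(isEnabled(gate, "user-12345"));
```

Hashing the user id together with the feature name keeps assignments stable and independent across features, which is what lets validation metrics from the early slice stay comparable as the rollout grows.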

Vipul: Got it. And in the context of feature rollbacks, what's your process when an experimentation result comes out negative?

Brianna: Yeah. Rollbacks, elegant rollbacks, are definitely part of any good product manager's skill set. Many of the product managers I work with leverage data and experimentation up front, so they're rarely in a position where they have to roll back; they usually know their impact going into it, because we've got that product innovation testing and that proactive optimization testing to equip them first. But when it happens, and sometimes it does, they'll usually use validation testing to get a rich look into why the feature is not performing, and to see whether they truly need to roll back or just need to optimize and define their MVP differently, perhaps with some quick optimization or fast-follow work. If they have identified something that's not working, my hope, as the product manager of experimentation, is that I help them find that when they're in the product optimization phase, not after they've written their code. That's really one of the benefits of this product-centric experimentation program: the ability to invest a lot less up front in client-side A/B tests than in writing code whose impact you don't know.

Vipul: Got it. This reminds me of the two sides of testing: client-side and server-side. I was speaking to Antoine from Decathlon earlier, and he focused largely on server-side experimentation. So what is it like at AT&T? Do you run more client-side experiments or more server-side experiments?

Brianna: Our innovation and our optimization work will typically be client-side. A big goal of our program is to generate usable data for our product managers. If the product manager has to write code to launch a server-side test first, that process is expensive in itself. So there are definitely use cases for server-side testing, but we will usually leverage our client-side capabilities to learn sooner, before we make that investment.

And we'll do what we can, within a definition process, to trim scenarios down to things that fit within the client-side area of feasibility. The folks on my team are experts in the art of the possible, really defining how we can create a client-side test that generates the data our product managers need to feel comfortable taking the next step in the project funding or prioritization process.

That's really where the product-centric framework comes into play. The folks on my team will ask: what data do we need to generate? Who do we need to influence? How much makes sense to invest at this point in the process? And they coach people through defining great experiments to take the next step in the roadmap.
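(Illustration: as a sketch of why a client-side test is cheap to stand up, here is a hypothetical browser-side experiment: bucket the visitor deterministically, apply the variant as a pure DOM change, and beacon an exposure event. The experiment name, selector, and endpoint are invented for the example; this is not AT&T's or VWO's implementation.)

```typescript
// Hypothetical client-side A/B test: no server-side code changes needed.
type Variant = "control" | "treatment";

// FNV-1a string hash, so a visitor keeps the same variant across visits.
function fnv1a(s: string): number {
  let h = 0x811c9dc5;
  for (let i = 0; i < s.length; i++) {
    h ^= s.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return h;
}

function assignVariant(visitorId: string, experiment: string): Variant {
  return fnv1a(`${experiment}:${visitorId}`) % 2 === 0 ? "control" : "treatment";
}

// Persist a visitor id so assignment is stable across sessions.
const visitorId = localStorage.getItem("visitorId") ?? crypto.randomUUID();
localStorage.setItem("visitorId", visitorId);

const experiment = "movers-cta-copy"; // illustrative experiment name
const variant = assignVariant(visitorId, experiment);

if (variant === "treatment") {
  // The whole variant is a front-end change, e.g. new call-to-action copy.
  document.querySelector("#cta-button")?.replaceChildren("Move your service online");
}

// Record the exposure so downstream analytics can compare the two groups.
navigator.sendBeacon("/exposures", JSON.stringify({ visitorId, experiment, variant }));
```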

Vipul: Perfect. Would you be able to walk us through an example where experimentation helped you take a major product decision at AT&T? Of course, feel free to hide any confidential information, but I think we could really use some insights from the example.

Brianna: Yeah, let me level-set this one for you. We have lots of examples of product tests, but my favorite this year has been a test we've done in our movers flow. We got a really tall order from a stakeholder late last year, and the example I'm going to share gives a great picture of how we can take a large concept, start it in product innovation, and trace it through optimization and validation, the full potential of this product-centric framework we leverage.

We received the request late last year as a product innovation concept from one of our internet teams, and it was a really tall order. It was the marketing team coming to us saying: hey, we can't close these mover transactions online. When a customer wants to move their internet service, there's a lot of investment required to make that transaction possible online, so we handle it mainly in our call centers, and we'd really love to be able to transact with this customer online.

So in Q4 of 2023, we took a deep look at this and asked: can we fit this into a client-side tool? What we were able to do was define a really solid experience, looking at certain flows within that larger project from a minimum-success perspective: how much of an impact do we need to see before this flow would be worth investing in? We defined the test in Q4 2023, and then in Q1 2024 we launched this large prototype to close these transactions online.

We measured it and optimized it as much as we could to make sure we had a really good experience. Then in Q2 we gathered data, observed, took the analysis, and recognized a really big win for our client. From there, we took the win from the product innovation program and moved it into product optimization. We tested offers to supplement it, and we went out and asked for funding for the feature. The funding got approved, and work is actively happening to realize that flow online. Now the experimentation product manager on my team is ideating on the next optimizations, and we'll start doing more client-side testing, which has a shorter path to production now that the foundation of that capability is moving online. It's a really powerful, effective framework that helps us meet scope where it's at in the delivery lifecycle.

Vipul: Okay, that sounds really good, and thank you so much for sharing that example, Brianna.

My next question is concerned with speed, with expectations primarily. Product development doesn't happen overnight; you don't start today and push out a feature tomorrow. It's a longer timeline to build features and push them out. But there's also rising interest in running more experiments. So how do you balance the need for rapid experimentation with the longer timelines of the product development process?

Brianna: Balancing experiment timelines with development timelines is indeed challenging. They're two completely different paths, two different realms in my opinion. But it is achievable with the right framework.

That product-centric framework, where we test up front for things that are not yet impacting products but will once they have funding, versus things that are actively on product teams' optimization plans, really helps with that. Our program relies on meeting work where it's at in the cycle, and we want to provide the most meaningful data at each step in the flow. So we consider the potential path to production at experiment intake. We ask that question before we break ground on defining an experiment, to make sure the concept's eventual development lifecycle is considered and we know what type of data we're generating and providing to our stakeholders.

When we introduce a parallel path to experimentation, like we have in our overall structure, we're not necessarily confined to the PI structure the product team is running in. We'll run more Kanban in our experiment framework and align to PI activities when it makes sense for the experiments. That makes more effective data utilization from our program possible, but we won't confine ourselves to the same timelines as the product team.

Additionally, say we find a really big optimization win, and the product or marketing teams want to see that code in production sooner. When it makes sense, we'll work with our product managers to bridge the gap: once we learn an experiment is impactful, we might scale it to a larger percentage of people through the test tool itself until the product team can build the feature out. That can be really helpful when you've got to make those decisions and you're not really sure which way to go on your product. We might have a couple of tests that are just optimizing that foundational code, and we work in really close partnership with the product teams to balance delivery timelines and scope definitions.

Vipul: Got it. So does it impact the time to market?

Brianna: For the eventual code delivery, when we leverage the experimentation program, it absolutely does impact the time to market, because it helps define the high-value features that are actually critical to the project, so you don't have so much waste. We've also learned over time that the more information we can share from the definition process, the easier it is to get through the product definition process. We're finding that we can get through the UX vetting and resolve a lot of the conflicts earlier in the process, so by the time the information hits product, it's a little more defined, and we can move a lot faster through delivery.

Vipul: Got it. So in your experience, what types of experiments have you found deliver the most valuable insights for product managers?

Brianna: I think the ones most important to the product managers, the concepts most aligned to program goals and product initiatives, are the most valuable. Right now we're focused on delivering as many product- and capacity-influencing insights as possible to our product managers for use in their day-to-day decisions.

Innovation results are always great; they're big, impressive projects. But product optimization tests are really the bread and butter of our program right now; that's the data our product managers can use. So the concepts with high potential for a product manager in my organization are the ones that answer tough questions or challenge assumptions, that help you fight the battles through testing so you don't have to fight them in code development, QA, or definition within your product teams. And then stories with undetermined impact or value: when the product manager doesn't know the potential impact, they might reach out to us and say, we need data before we move forward. Those are really meaningful.

The reason those are all meaningful concepts is that they've got the potential to move the most important product metrics and to impact capacity decisions, so that wherever possible we can move capacity from low-value activities to high-value activities.

Vipul: Got it. And in the context of measuring the impact of experimentation, what are the metrics you like to focus on?

Brianna: When we measure the impact of the experimentation program, I think it's important to measure experimentation as a product in itself. I've got metrics I use for campaigns by site area, by product, and by domain area, and we use those to guide our hypothesis decisions at the campaign level. But when we look at experimentation as a product, the process itself and how we're achieving our experimentation program goals, we track metrics over time and by audience: are we testing enough in certain areas, do we see correlations in metrics or site behaviors with testing volume, and are we delivering data to usable endpoints?

Things that have been meaningful over the years include time to value for products or changes. Digital adoption and CSAT are interesting to look at over long periods, trying to connect your program impacts to those metrics and measurement strategies. Funnel and revenue metrics, of course, play a part, and then operational efficiency and delivery efficiency metrics matter a lot.
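(Illustration: a toy version of the "correlations with testing volume" idea she mentions, computing the Pearson correlation between monthly experiment volume and a program-level metric such as CSAT. The numbers are invented; a correlation like this is a prompt for investigation, not proof of program impact.)

```typescript
// Hypothetical program-level check: does experiment volume track CSAT?
function pearson(xs: number[], ys: number[]): number {
  const n = xs.length;
  const mx = xs.reduce((a, b) => a + b, 0) / n;
  const my = ys.reduce((a, b) => a + b, 0) / n;
  let cov = 0, vx = 0, vy = 0;
  for (let i = 0; i < n; i++) {
    cov += (xs[i] - mx) * (ys[i] - my);
    vx += (xs[i] - mx) ** 2;
    vy += (ys[i] - my) ** 2;
  }
  return cov / Math.sqrt(vx * vy);
}

// Illustrative numbers only: experiments launched vs. CSAT, by month.
const testsPerMonth = [12, 15, 9, 18, 22, 20];
const csatByMonth = [71, 72, 70, 74, 76, 75];
console.log(pearson(testsPerMonth, csatByMonth).toFixed(2)); // ~0.99 for this data
```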

Vipul: Perfect. So in your view, what are the key components of an experimentation program structure that effectively support product management as a process?

Brianna: Very tactically, I organize experimentation strategies into areas of accountability. There's program operations: the folks who are defining your tests and looking at experimentation as a product. Then you have delivery: your technical folks who are building your tests and in it to break the mold every day with those test tools. And then analytics and results consumption, data utilization is what we call it: how much data are you generating, how well are you crafting your measurement strategies, and how effectively are you sharing that data with the organization? Those are really important bodies of work within an experimentation program that determine success.

Within those bodies, clear program objectives and usable program frameworks, ones that help decisions happen at the lowest level possible so you can move fast, are important. Defining the purpose of your test program is super important, because a CRO program is going to operate very differently than a product testing program; velocity, quality, and how you handle individual concepts will all differ based on that purpose, so it's important to have one. And then setting achievable short-term goals, and working with your stakeholder groups and your teams to make sure your framework is utilized effectively for that larger experimentation process, is important.

Vipul: Got it. AI is gaining pace in finding its utility in just about every sphere of business, and experimentation is no stranger to it. Looking ahead, how do you see AI becoming a part of experimentation in product management?

Brianna: I think AI is definitely already impacting a lot of the tactical day-to-day, things like organizing information for cross-impact purposes. I'm really excited that AI will probably help us achieve efficiencies in organizing information much more easily.

I'm excited for things like proactive analytics: spotting test opportunities in data sets, segment discovery for different strategies, understanding where some of the higher-value opportunities are with far less tactical manual analytics, and maybe anomaly detection capabilities. And connecting the rich data you can see to language generation models to start writing hypotheses without having to write hypotheses, automating a bit of that process based on standard inputs. A lot of that seems like it could be automated in really smart ways, where testing programs could scale from individual concepts to larger, program-relevant initiatives with a lot of automation, so scaling would not mean more hands on keyboards.
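(Illustration: to ground the "proactive analytics" idea, a small hypothetical sketch of spotting test opportunities in a data set: flag pages whose latest conversion rate drifts several standard deviations from their own history, candidates that a language model could then turn into draft hypotheses. It assumes nothing about AT&T's or VWO's AI tooling.)

```typescript
// Hypothetical opportunity spotter: a crude anomaly signal, not a formal test.
interface PageRates { page: string; rates: number[] } // daily conversion rates

const mean = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / xs.length;
const stddev = (xs: number[]) => {
  const m = mean(xs);
  return Math.sqrt(xs.reduce((a, x) => a + (x - m) ** 2, 0) / xs.length);
};

// Flag pages whose latest day sits more than `threshold` standard deviations
// from the historical mean: candidates for a test or a drafted hypothesis.
function spotOpportunities(data: PageRates[], threshold = 3): string[] {
  return data
    .filter(({ rates }) => {
      const history = rates.slice(0, -1);
      const latest = rates[rates.length - 1];
      const sd = stddev(history);
      return sd > 0 && Math.abs(latest - mean(history)) / sd > threshold;
    })
    .map(({ page }) => page);
}

// Invented numbers: the movers flow dips sharply on the latest day.
console.log(spotOpportunities([
  { page: "/movers", rates: [0.041, 0.043, 0.040, 0.042, 0.029] },
  { page: "/plans", rates: [0.061, 0.060, 0.062, 0.061, 0.060] },
])); // -> ["/movers"]
```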

Vipul: Are you already using AI at AT&T?

Brianna: Yeah, we have tools that I think everybody's dabbling in right now. We haven't mastered it, obviously; I don't think anybody has. But there are a lot of different ideas we kick around, even simple stuff. Think about it from my perspective: I work in a huge company, and my obligation for transparency some days feels unachievable. But with AI on the horizon, it seems like I can take lots and lots of information, write a little code, and have summaries sent out that meet the requirements of anybody I work with. So I'm excited about program applications like that, where leveraging technology could mean so much in easing some of the very tactical burdens of our day-to-day.

Vipul: Coming from a marketing background, I don't really ask AI to create campaigns for me, but when it comes to very tactical stuff, writing copy for landing pages and emails, AI has taken a lot of burden and headache off me. I'm really excited about how it will evolve in the future. I really hope it doesn't take away jobs and just stays the way it is right now. But yes, I too am quite excited about where it's headed.

And not just me; everyone at VWO is exploring different ways AI can lend a hand to a human when it comes to running experiments, and we've been able to introduce some features that do just that. So that brings us to the end of our conversation, Brianna. And, as has been the tradition, we would love to hear from you: what books are you currently reading? If you have any recommendations for our audience, it would be great to hear them.

Brianna: I don't know if I'd recommend it yet, but I'm currently reading 1984, because I've heard so much about it and I've never read it. I think I read parts of it long ago, but I've just picked it up because everybody lately has read that one, so I felt a little left out. It is an interesting read so far.

Vipul: I too picked up 1984 out of FOMO. It's very aggressive, and very depressing at times to read. I haven't been able to read it in full; the copy is still lying in my drawer, and I'm not sure I'll be able to finish it. Okay, let's not be too violent in this conversation, but you just want to do away with the bad characters in the book, because it resonates quite well with the real world we live in today.

Brianna: I think that’s what I’m finding. I think my style might be a bit more optimistic and less dystopian than

Vipul: Yeah, and that has been the nature of George Orwell's writing, because I have read his Animal Farm, and that was also a very aggressive read. It started slowly and built interest, but then it got really dark, and you just wanted to tear the book apart

Brianna: Right.

Vipul: because it was that aggressive in its writing and that impactful in its theme.

Brianna: Yeah. Yeah, it’s definitely interesting. But yeah, definitely emotion evoking read.

Vipul: Absolutely. And while we were having a conversation before starting this recording, you mentioned that you recently pivoted to Spotify and have been consuming a lot of Spotify content. So what would you like to recommend from Spotify?

Brianna: I just started a new Spotify account, and I've been spending a lot of time revisiting all of my old favorites to get my DJ to recommend things I like. So I've been listening to a lot of jazz and a lot of alternative stuff from my youth to onboard everything, lots of Nirvana, some Metallica,

Vipul: Okay.

Brianna: and lots of different music, just to get it into my library. It's a fun exercise to go from your Pandora algorithm that knows you so well to starting a whole new thing, where you have to reinvent yourself and your music.

Vipul: That rang a bell. I'm a 90s kid, and Metallica was quite popular in India back then. I remember borrowing a CD of Metallica songs from one of my classmates when I got my new computer. Honestly, I'm not a very big fan of metal music, but it brings back some memories from my childhood days.

Brianna: I'm not a big metal person either, but Symphony and Metallica is a really good one. It has lots of instrumentals; it's pretty cool. If you like jazz and that kind of thing, you might like it.

Vipul: Absolutely. I love to listen to jazz when I'm stressed out, all alone, and working. I just put jazz music on my speakers, and I'm able to focus on my work.

Brianna: Yeah.

Vipul: Great. It was lovely speaking to you, Brianna. Thank you so much for taking the time to speak with us and for sharing your insights and valuable experience with our audience today. I'm sure the audience noted down many of the insights you spoke about and will put them into practice in their respective organizations. Thank you so much once again, Brianna, and have a great day ahead.

Brianna: Likewise, you too. Thank you very much for having me.

Speaker

Brianna Warthan

Associate Director Product Management and Development, AT&T

Other Suggested Sessions

[Workshop] Psychology and Controversy - How to do A/B Testing the Right Way

Meet Oliver, Katja, and Ivan for a fun, deep dive into A/B testing's quirks, from bias hacks to guessing game winners and hot debates in CRO.

Beyond Basics: Addressing Complex Challenges in Experimentation

Join Dr. Ozbay to master online experimentation: managing multiple tests, analyzing complex data, and tailoring strategies for team success.

Experiments, Events, and More: Fireside Chat With Mark Kilens

Dive into a laid-back chat with Mark and Jan, spilling secrets from 17+ years in SaaS and experimenting in the virtual world. Don't miss their stories!