
From Website Funnels To Live Global Events: May Chin’s Incredible CRO Journey

Released on: 12/11/2025 Duration: 40 minutes
May Chin
Speaker May Chin Product Lead, TOKEN2049
Bhavan Kochhar
Host Bhavan Kochhar VP – Sales (Global - Enterprise), VWO

About this episode

Quick Description

Join our host Bhavan Kochhar as he sits down with May Chin for a wonderful conversation on the VWO Podcast. 

May leads product at TOKEN2049 and SuperAI, two of the world’s most prominent tech conferences, with over 20,000 attendees and 300+ sponsors.

In this episode, May shares her journey from building growth and experimentation functions at eCommerce brands like Zalora to orchestrating seamless physical and digital experiences at global-scale events.

The conversation explores: 

  • How product strategy differs between purely digital platforms and hybrid event experiences 
  • The role of AI and low-code tools in democratizing experimentation 
  • The process of aligning cross-functional teams under tight deadlines

Moreover, May discusses her philosophy of balancing automation with human discretion and shares practical advice for aspiring product and experimentation leaders in Asia.

Explore how VWO Copilot takes over the heavy lifting of optimization, allowing your team to focus on strategic decision-making that drives growth and results.   

Ideas you can apply 

  • AI and experimentation aren’t separate constructs but should work in the same feedback loop. AI can compress testing cycles by auto-generating variants and predicting outcomes, while human judgment can determine which experiments actually matter.
  • For in-person events, you’re no longer optimizing funnels but orchestrating moments. You care about whether an attendee can find their way from the keynote to the buffet to the after-party without friction, with only one shot to get it right.
  • Democratizing testing requires three things: strip away jargon, make tools invisible by running data collection in the background, and lower the cost of failure by framing experiments as scrappy pilots.
  • An event or conference is deemed successful when both attendees and sponsors win. Attendees should feel the experience is effortless, while sponsors can justify their six-figure spend with quantifiable ROI.
  • “Vibe coding” with AI enables non-technical teams to ship novel features in days without engineering resources, but human discretion remains the steering wheel that prevents crashes.

May’s 3-step framework to democratize experimentation and open it up for all teams

  1. Strip away the jargon: Replace “hypothesis test with confidence intervals” with “let’s try two versions and see which works better.”
  2. Make tools invisible: Automate data collection and statistical interpretation in the background, and output business-friendly results.
  3. Lower the cost of failure: Frame experiments as scrappy pilots with small groups before rolling out at scale.
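To make step 2 concrete, here is a minimal sketch, in Python, of how a testing tool can run the statistics in the background and hand non-technical teams a plain-language readout. This is illustrative only, not May’s or VWO’s actual tooling; the function name and the 95%-confidence cutoff are assumptions for the example.

```python
import math

def interpret_ab_test(name_a, conv_a, n_a, name_b, conv_b, n_b):
    """Compare two variants and return a plain-language verdict.

    Runs a two-proportion z-test behind the scenes so the reader
    never has to see p-values or confidence intervals.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se if se > 0 else 0.0
    if abs(z) < 1.96:  # roughly the 95% confidence threshold
        return (f"No clear winner yet: {name_a} converted at {p_a:.1%} and "
                f"{name_b} at {p_b:.1%}. Keep the test running.")
    winner, loser = (name_b, name_a) if p_b > p_a else (name_a, name_b)
    lift = (max(p_a, p_b) - min(p_a, p_b)) / min(p_a, p_b) * 100
    return (f"{winner} wins: about {lift:.0f}% more conversions than {loser}. "
            f"Safe to roll out {winner}.")
```

Fed 100 conversions from 1,000 visitors on one version against 150 from 1,000 on another, it declares the second version the winner with roughly 50% higher conversion; fed a 100-vs-105 split, it says there is no clear winner yet.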

Insights from May Chin 

“If you call something a ‘hypothesis test with confidence intervals,’ people tune out. But if you say, ‘Let’s try two versions of this email and see which gets more replies,’ suddenly marketing is eager to help launch the experiment without even realizing it’s an experiment.”

“No matter how elegant the system you build, if your people aren’t aligned, it will eventually fall apart. My job isn’t just to ship features. It’s to bring every team, from marketing and ops to tech and sponsors, together toward the same goal under intense time pressure.”

“While AI makes experimentation cheaper and faster, it’s really the experimentation aspect and constant testing that keep AI grounded in practicality. Automation and AI remove the speed bumps, but human discretion and testing are still the steering wheel. Without it, you’re moving faster but towards a crash.”


Key moments

[04:48]

Building for global conferences vs. consumer platforms

[15:13]

Using AI tools to empower non-technical teams

[20:28]

How to democratize experimentation across functions

[29:13]

Why teams should combine AI with experimentation

[35:14]

Early career advice that shaped May's journey

Transcript

Editor’s Note: This transcript was created using AI transcription and formatting tools. While we’ve reviewed it for accuracy, some errors may remain. If anything seems unclear, we recommend referring back to the episode above.

Episode Trailer

Where do you see like the biggest opportunity when it comes to the use of AI putting experimentation together? Experimentation can often be very slow and manual. AI can help to compress that cycle. This frees up humans to focus on the ‘why’ of testing rather than the ‘how’. From my experience, alignment definitely doesn’t come from telling people what to do.

It comes from giving teams a shared North Star. If everyone then takes a step back and agrees on the ultimate measure of success, then suddenly those KPIs don’t compete, but they connect with each other. Testing isn’t an exclusive domain for just engineers, technical team members or data scientists. At its core, it’s a discipline.

It’s simply innovation driven by the scientific method. Don’t just copy tech playbooks from larger companies in the West. You don’t have the same funding cushions or the same user dynamics. Here you need to be scrappier, more creative, more pragmatic, and also think deeper about how to leverage the people around you.

About VWO

Welcome to a brand new episode of the VWO Podcast, recognized as a finalist in the CMA Awards 2025 for Best B2B Podcast. With hours of in-depth conversations, we talk about using CRO to deliver what your users want while also driving long-term growth for your business. Before we speak to our guest, here’s a very quick overview of what we do.

VWO is a leading experience optimization platform that enables you to gather in-depth user insights and build winning experiences across your website and mobile app. VWO Copilot, our AI champion, handles the heavy lifting of suggesting test ideas, creating variations, and finding insights so you can focus on making strategic decisions that drive growth.

Now, without further delay, let’s jump right back to this conversation.

Guest Introduction

Bhavan: Hi everyone, and welcome back to the VWO Podcast. Uh, my name is Bhavan, and today I’ve got an incredible conversation lined up. I’m very excited to introduce May Chin. She leads product at TOKEN2049 and SuperAI, two of the world’s most prominent tech conferences. She oversees end-to-end technology strategy, from platform development to workflow automation, to AI integration and digital experiences, and all of this at a scale and efficiency of over 20,000 attendees and 300-plus sponsors.

We’re gonna dive into all of this as we speak with her. Before this, she built the growth, experimentation, and analytics functions from scratch at brands like Zalora. She’s also led product and growth initiatives at Mindvalley, Lazada, and ipr, where she scaled consumer platforms across Southeast Asia, driving acquisition, monetization, and retention.

We are very excited to dive into all of these and really get to know a little bit more about her product mindset and how she’s shaping tech experiences at such a global scale. So, without further ado, I wanna introduce May to everyone. May, welcome to the VWO Podcast. How are you today?

May Chin: Doing really good. Very busy week, but everything’s going smooth.

Bhavan: Uh, I, I know you were traveling, uh, for a few days and you’ve just come back. So thank you so much for taking time because I completely get how busy this, this week would be for you. Uh, so I’m gonna dive straight in before we start. Talking about a few questions that our audience always wants to hear more about.

I just wanna get a sense of: do you have any sort of rituals or habits that help you get into the zone before a podcast like this, or a big presentation, or a big meeting, anything like that?

May Chin: Hmm. It would usually be a quick dose of caffeine right before. I’ve spoken at a few conferences over the past couple of years and, you know, I still always get a little bit of stage fright, especially if it’s an in-person event. But what always helps me is a quick dose of caffeine, though I have to make sure I strike just the right balance. Too much ends up making me feel even worse. So I usually go for a quick shot of matcha and try to avoid coffee.

Bhavan: Yeah, I mean, someone told me that good nervous energy is always the right way to feel, because that keeps you on your toes, and like you rightly said, just the right dose of caffeine definitely helps. Alright, let’s get in. We’re gonna start with really trying to understand a bit more about what you do currently.

Building for Global Conferences vs Consumer Products

Bhavan: And I say that because you’ve had a diverse experience: e-commerce, digital learning at Mindvalley, and now overseeing tech experiences at TOKEN2049 and SuperAI. I just wanna understand: how does building for a global conference with 20,000-plus attendees and 300-plus sponsors differ from what you were doing earlier with consumer platforms? Like, how does your role differ?

May Chin: It’s actually extremely different, to be honest with you, but at the same time also very similar in certain ways. So let me get into that a little bit more, right? These worlds that I’ve come from, first of all a purely digital world, into something that has a very strong physical component, definitely do seem very distinct from each other, but in practice a lot of the same muscles are actually at work in the background. To give you some quick examples: when you are building for a purely digital platform, you would usually be optimizing for some kind of funnel, your typical acquisition, conversion, or retention, for example. The scale is digital and therefore usually very large in volume, your data is usually fairly clean and exhaustive, and you’re mostly thinking about how to get more users through some kind of happy path, such as a checkout path. But with a global conference, instead you suddenly have 20,000 people flying in from over a hundred different countries.

So the stakes are much higher in a certain sense, and all of these people who are flying in just for you are also expecting a seamless, world-class experience. As they should, given that they’ve taken the trouble to travel halfway around the globe just for us. So the notion of scale is very much still there, but I would say that the average expectation level per user tends to be much higher. The biggest difference and shift would be in moving away from optimizing funnels to orchestrating moments. At Zalora, for example, I cared a lot about cart abandonment, but currently at TOKEN2049 I care more about whether an attendee can find their way from a keynote to a buffet line to our after-party without friction, and also making sure they’re having fun along the way.

So while the end goal of helping the user succeed remains exactly the same, the means of how we get there is very different. And unlike e-commerce or purely digital platforms, where you can test and roll back fairly quietly, here it’s one shot. The event goes live and over 20,000 people stress-test your system at once, in real time. This tends to force a higher degree of quality even in your very first iteration, because physical experiences are much less forgiving. What makes things even harder is that I would be at the event myself, so there’s no way for me to hide from disgruntled users. So what this means is we tend to go for denser, higher-quality MVPs rather than scrappier ones. We also need to plan better and longer term for more edge cases. This also demands a higher degree of creativity, because you don’t only want a seamless experience, you also want to create one that people remember long after the event.

So this for me would be the major differences.

Defining Success: Attendees vs Sponsors

Bhavan: I think, having attended quite a few events myself, sometimes we just take for granted what we get at events, the experience we get at those, and the amount of hard work your teams go through just to give us that experience. I can’t even imagine the nights that everyone spends making sure everything’s fine.

And I just have a small follow-up question to that, because you just touched upon the fact that you’re no longer just building features that can serve audiences better; you’re actually building experiences for them. At the end of an event, how does your team define the success of the product from an attendee point of view?

I know you touched upon it, and also from a sponsor point of view as well. Like, how do you guys sit down and say, ‘This worked, but this did not work for us’?

May Chin: Mm-hmm. Yeah, for sure. And that’s a really important question. When it comes to a company like ours, it’s not enough to just ship features. The real job is making sure that the experience we build and deliver works for two very different groups of users at once: our attendees and our sponsors. And our sponsors are honestly the lifeblood of our event, and they mean a great deal to us, so this is really a crucial perspective. For attendees, the way we define success is really when the event feels effortless, where to a certain extent they don’t even notice all of the underlying tech and it somehow melts into the background. Which makes sense, because if someone is flying halfway around the globe for our event, they’re able to get their event badge in just a few seconds, find all of the right keynotes, and collect perks through event gamification or other key activities like that.

And they’re able to walk away feeling like they discovered special event opportunities they couldn’t have found elsewhere, and therefore they feel incentivized to tell everyone they know about this event. That’s really how we define success on the attendee side. But sponsors are very different in most cases.

They’re usually not there for the free food and coffee. They’re usually there to justify a six-figure spend to their upper management. So for them, quality means the extent to which they’re able to generate good leads that feed well into their pipeline, and also the extent to which they’re able to generate and measure brand visibility as a result of being at our event. Did they meet the right investors, partners, or customers? Did our event help them track ROI in ways that go beyond an anecdotal ‘Yes, our booth was very busy’? So I define product success as when both sides win, without them noticing the compromises along the way. If, for example, attendees feel engaged at the event but not bombarded by sponsor noise or a feeling of being oversold to, and at the same time sponsors still walk away saying, ‘This was the most valuable event we’ve done all year, and I’m able to quantify this,’ then that really is the sweet spot. Drawing a parallel to my previous life in e-commerce, back in the day we would usually measure some notion of user LTV. In events, I would say it’s roughly similar, but it’s more along the lines of repeatability: will this attendee or sponsor come back next year, and will they bring an additional friend or colleague along with them? If the answer is yes, then we’ve done our job well.

Prioritization Framework with Hard Deadlines

Bhavan: Fair enough. I think some really good points, and things that as an attendee and as a sponsor I’d love for the host to be thinking about. So thank you so much. One more question, and I really wanna ask you this. As the product leader, with timelines so tight close to the event, how does your prioritization engine work?

Like, what should we experiment with? What changes in the product should we bring up? Do you have a framework around that that people would love to know a little bit about?

May Chin: Yep. So the very inherent nature of an event-space company is that we often don’t get the luxury of endless A/B tests or long roadmaps. The event is a hard deadline set in stone. Our doors open, over 20,000 people walk in, and whatever we’ve built along the way either works or it doesn’t. So in this role, I tend to think about prioritization in three buckets. The first is what I call mission-critical hygiene: the foundational things that simply cannot fail. These include registration, badge printing, our floor plans, and payments. If these break, nothing else matters, and I honestly should be fired, in my opinion. The second bucket would be high-leverage differentiators. These are the features that build on the foundational aspects and elevate the experience to make our event truly stand out. For us, that might manifest in things such as our hackathon platform, our NFT-powered event side quest, or personalized sponsor tools. What I really want to stress here, though, is that these aren’t just shiny features. They’re what actually makes people say, ‘This conference feels different to any other event I’ve been to before.’ The third bucket would be nice-to-haves, which I would usually only green-light if they can be administered as fairly simple MVPs with relatively low downside risk, where we can ship a polished yet scrappy enough version without sacrificing our UI/UX too much, and most importantly, learn quickly from it rather than sinking months into perfection. Across all three of these buckets, the key is ruthless sequencing. Early on in my career at Zalora, I loved optimizing for everything at once, but here I’ve instead learned to ask: what’s the one domino we need to knock over so the rest of the experience cascades and holds together in a cohesive way? Having that tight timeline and hard event deadline really forces that clarity on you. And while this might sound really stressful to those who don’t work in the event industry, in a lot of ways the high stakes can be a blessing when you know that you’ll never get a redo.

When you know that there is no undo button, feature flag, or rollback, you tend to stop overthinking and start being very practical: MVP first, test with a small group, and ship what matters the most.
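May’s ‘test with a small group’ principle has a quantitative side worth making explicit: a small pilot can only reliably detect big effects. The sketch below (illustrative Python using a standard two-proportion sample-size approximation, not anything from the episode; the function name and thresholds are assumptions) estimates how many people a pilot needs per variant for a given expected change.

```python
import math

def pilot_size_per_group(p_baseline, p_expected):
    """Approximate people needed per variant to detect a change from
    p_baseline to p_expected with ~95% confidence and ~80% power,
    using the standard two-proportion sample-size approximation."""
    z_alpha, z_beta = 1.96, 0.84  # 95% confidence, 80% power
    variance = p_baseline * (1 - p_baseline) + p_expected * (1 - p_expected)
    delta = p_expected - p_baseline
    return math.ceil((z_alpha + z_beta) ** 2 * variance / delta ** 2)
```

Detecting a jump in uptake from 20% to 40% needs on the order of 80 people per variant, while a subtle 20% to 25% change needs over a thousand, which is why scrappy pilots fit big bets and subtle optimizations are left for full-scale rollouts.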

Reducing Complexity with Low-Code and AI

Bhavan: Perfect. Thank you so much. And this leads really well into my next question, and I know a bit of what you will talk about as well, because of when we met a couple of days ago. In order to improve efficiency, I know you lean on a lot of low-code and AI-driven tools, and you spoke about vibe coding when you and I were talking a little bit. I just wanna get a sense of how, in your role, you help your teams reduce complexity.

Um, without putting too much burden on the engineering team for high-impact work. How do you go about that, and how do you empower your team to be able to do that as well?

May Chin: Sure. Um, I think that’s a really good question, ’cause I do feel that there is a very big misconception about tech at conferences or events in general, which is that everything needs to be meticulously custom built in order to work well and properly. And a lot of people also think that the degree of complexity in your tech stack has a direct correlation to its efficacy. But the truth is, in events, oftentimes speed and reliability matter much more than building from scratch just for the sake of it, or just for the selfish pursuit of tech greatness, if that makes sense. AI adds another layer to this mindset as well, in that it enables us to ship novel features very quickly without major engineering resources. And this mindset culminated in the creation of a fully vibe-coded feature that we’ve built called Photo Finder. When I say vibe coded, it means that someone such as myself, with no technical background, comes in and actually does software engineering with AI assistance. What this Photo Finder feature enabled us to achieve was, instead of simply sending our attendees thousands of event photos and calling it a day, we used AI to build an integration with AWS’s facial recognition tool to auto-curate images based on a selfie uploaded by the user. And this was fully built from scratch in just two to three days, with no engineers involved at all. So the role of AI-powered tooling and the whole notion of low code or vibe coding in general, for me, is twofold: it democratizes execution for non-technical teams, and it also protects our overall engineering focus for the things that only engineers truly can do, such as making sure our backend database is structured in the right way.

Bhavan: sure. Yeah.

May Chin: So it’s a force multiplier. Um, and in a high visibility environment with immovable deadlines, that difference really means everything to us.
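For readers curious what a feature like Photo Finder might look like under the hood, here is a heavily simplified, hypothetical sketch. It assumes the event photos and the attendee’s selfie live in S3 and uses AWS Rekognition’s CompareFaces API through an injected client (e.g. `boto3.client("rekognition")`); the function name, bucket layout, and similarity threshold are invented for illustration and are not TOKEN2049’s actual implementation.

```python
# Assumes an AWS Rekognition client is passed in, e.g.:
#   import boto3
#   rekognition = boto3.client("rekognition")
def find_my_photos(rekognition, bucket, selfie_key, photo_keys, min_similarity=90):
    """Return the subset of event photos whose faces match the attendee's selfie."""
    matches = []
    for key in photo_keys:
        # Compare the selfie (source) against each event photo (target).
        resp = rekognition.compare_faces(
            SourceImage={"S3Object": {"Bucket": bucket, "Name": selfie_key}},
            TargetImage={"S3Object": {"Bucket": bucket, "Name": key}},
            SimilarityThreshold=min_similarity,
        )
        if resp.get("FaceMatches"):  # at least one face above the threshold
            matches.append(key)
    return matches
```

At real event scale you would likely index faces into a Rekognition collection once and search against it rather than comparing pairwise, but the pairwise loop is the plausible two-day-MVP shape of the idea.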

Human Discretion in AI-Driven Testing

Bhavan: Perfect, May. Thank you so much. And my next question probably leads into what you just spoke about. We’ve spoken about AI, and on a lot of podcasts people are talking about AI, but I really wanna know: what is your sense of human discretion coming in alongside automation and AI speeding things up,

and yet having certain checks and balances in place when you’re testing at such a scale?

May Chin: Yep. So I think it’s really important to have a balanced perspective here. While I’m the kind of person to expound the benefits of AI-powered tooling, low code, and vibe coding as a stack as well, I think it’s important to not get too starry-eyed about all of those new developments. The point is, while these tools can take away grunt work, they can’t really replace human judgment, at least not as of today. At TOKEN2049, we do run experiments all the time, but unlike a hundred-percent digital product, we don’t have infinite cycles of iteration. As I said, we only have one shot once the event goes live. This makes testing and human discretion not just optional but actually essential and mandatory. For example, if we automate badge printing and it misfires on day one, you have 200 angry people standing in line, and there’s nowhere to hide, because I would be at the event too. So human discretion is really what keeps us honest and grounded, in my opinion. It’s me asking, does this actually make sense in the real world and not just in theory? The role of testing for me is less about perfection and more about risk management. Can we dry-run a simple test with 50 people instead of 20,000 ahead of the event? Or can we shadow test a new product in parallel before switching it on? These constant checks allow space for human discretion, and most importantly help to prevent big disasters and inform the overall direction our roadmap should be moving in. So while things like automation and AI definitely remove the proverbial speed bumps, for me, human discretion and testing are still the steering wheel. Without it, yes, you are still moving faster, but towards a crash.

Democratizing Testing Across Teams

Bhavan: I think with AI kicking in, a lot of people’s dependencies have increased for sure, but it still needs human discretion. And thank you so much for putting it so well. Because you’ve just said that, I just wanted to get a better understanding, and maybe some practical steps and approaches as well.

We know you’ve championed democratizing testing across non-technical teams too. Now, beyond tools, would you be able to share with our listeners some practical steps and approaches you’ve found most effective, especially when different functions are testing for their own needs?

May Chin: So the very first step, I think, to adopting experimentation in a practical way is actually something a bit more philosophical, in the form of a mindset shift, which is to understand that testing isn’t an exclusive domain for just engineers, technical team members, or data scientists. At its core, it’s a discipline.

It’s simply innovation driven by the scientific method, and anyone in any role would be doing some kind of innovation. So why wouldn’t you apply scientific methods on top of it to maximize your chances of success? Right. So on a practical level, once you have this mindset in place, I’ve found that three things work best. The first is to strip away the jargon, which a lot of A/B testing practitioners, myself included, can fall into without realizing, for sure. If you refer to something as a ‘hypothesis test with confidence intervals,’ people will tune you out. If you instead say, ‘Let’s try two versions of this email and see which gets more replies,’ suddenly marketing is running to you and saying, ‘How can we launch this?’ without even realizing that it is an experiment, right? But as far as you’re concerned, this is just an activity that helps them to maximize…

Bhavan: True. True.

May Chin: And the second part of that is to try to make your tools as invisible as possible, because the truth is, when you look under the hood of your experimentation stack, it can be very daunting and it can be very…

Bhavan: Absolutely.

May Chin: So when I was at Zalora, we set up our testing tooling so any team member could launch an A/B test, and all of the data collection and statistical interpretation runs in the background. It gets parsed by proprietary logic that we built ourselves, which then outputs a business-friendly interpretation. So non-technical team members don’t even have to know how it works; they just see results. Third, and most important, is to do what you can to lower the cost of failure. In e-commerce, I learned that a lot of people hesitate to test when everything feels so high stakes, and it’s roughly the same here. If you instead frame experiments as scrappy pilots, ‘Let’s try this side quest mechanic, where we trial it with 100 people before rolling out to 20,000,’ suddenly people feel a lot braver. So for me, democratizing testing is really inherently more of a philosophical shift at its core, to be honest. It’s about building cultural levers within your organization where people feel less intimidated, that’s one, and also braver to partake in it in the first place. Once you give them the proper language to use, the tooling, the guardrails, and most importantly the safety net, suddenly it stops being so daunting and it just becomes an inherent way of how they approach innovation.

Aligning Cross-Functional Teams

Bhavan: I mean, I think some real solid advice; a lot of people should hear this. Some truth bombs as well. And I’m gonna lean on the first point you mentioned about KPIs. When different teams come over to you, how did you go about, at organizations like Zalora, aligning cross-functional teams on shared priorities when everyone has their own KPIs and pressures on delivering different things? And sometimes it’s not just scaling experiments; it’s also what you touched upon, like scaling people’s expectations.

Also, how did you go about running with so many different teams?
Also, how did you go about, about running with so many different teams?

May Chin: Right. So from my experience, alignment definitely doesn’t come from telling people what to do. It comes from giving teams a shared North Star. For example, marketing might be chasing ticket sales, our operations team might be chasing smooth check-ins, and our sponsors team wants ROI. But if everyone then takes a step back and agrees that the ultimate measure of success is whether attendees and sponsors come back next year, suddenly those KPIs don’t compete, but connect with each other. The practical side to enabling this is ruthless prioritization and transparency. In doing so, it really helps to make trade-offs very explicit. If we invest in feature X, it means delaying feature Y. Which one best serves our core goal of getting attendees and sponsors to return next year? Once the trade-offs are on the table, teams feel like they’re part of the decision rather than victims of it.

Bhavan: Correct.

May Chin: The other piece to that is shared rituals and routines, as teams drift towards working in silos. It helps to bring everyone into the same room, literally or virtually, and walk through the funnel end to end together. For example, ops can see what marketing needs, and marketing can see what tech is up against. When you have that shared context, it helps to build empathy, which is then a natural precursor to alignment, in my opinion. So while scaling people can definitely be really difficult, I do feel that once you anchor them to one North Star, even if the North Star is fairly abstract, just having that commonality in place helps to make trade-offs more transparent. It helps to keep everyone seeing the same picture, and the chaos becomes a lot more manageable.

Breaking Deadlocks in Prioritization

Bhavan: Yeah, and I just wanted to understand: you mentioned that you’ve put everything on the table, and then there are trade-offs. If there is a deadlock, how do you go about resolving that, and who takes the lead in solving it? I just really wanna understand that part.

May Chin: So for me, the people who take the lead on resolving a deadlock scenario should actually fall into two different buckets. The first should be the person who’s responsible for executing that initiative, as there is a direct opportunity cost for this person: the time that they spend building this feature versus something else, whichever would bring the highest value add for the company. The second bucket of decision makers should be the person who holds the most stake in the value add of this feature, and often this can be a completely different stakeholder to the person who’s executing the initiative. So it’s important for both of these parties to come together and arrive at a solid decision on priority. As for how to make that decision, it’s really important that it’s as quantitative as possible, or at least as empirical as possible. Often, in some companies, there are many subjective aspects that cannot be quantified, and that’s fine, but we can still be empirical, which is to take a step back and always focus on the North Star. For us at TOKEN2049, as I mentioned, it’s always about which initiative would maximize the chances of attendees or sponsors returning the next year. And if we distill it to that very simple tenet, it suddenly becomes a lot clearer which initiative would move the needle more on that front. Even if we are still unable to decide on the projected impact of two different initiatives, we can then take into account other factors, such as the relative effort associated with each; we should, of course, aim for the lowest-effort one first. And if we still arrive at a deadlock in those situations, we would then fall back on more subjective but equally important factors, such as overall company positioning or whether this would lead to a subjective competitive advantage for us compared to other events. These factors, while not directly quantifiable, are still important and can ultimately help you arrive at a more objective priority decision as well.

The Future: AI and Experimentation

Bhavan: Fair enough. I think that’s a really good way to break it down. Uh, let’s move a little bit towards the future, May, and I just wanna get your thoughts on this. You have hands-on experience of building various things, strategic leadership, but where do you see the biggest opportunity for product leaders when it comes to the use of AI?

Putting experimentation together with that to really deliver value in the next three to five years. Where do you see that?

May Chin: So for me, the real opportunity is, you know, not seeing AI and experimentation as two separate constructs, because they’re really not, but rather seeing the opportunity of synergizing them within the same feedback loop. What do I mean by that, right? Right now, experimentation can often be very slow and manual. You design, run, and analyze a test, rinse and repeat in perpetuity. AI can help to compress that cycle, for example by auto-generating experiment variant ideas, predicting likely outcomes, or surfacing the riskiest assumptions that you never thought of before you even run a test. This frees up humans to focus on the why of testing rather than the how. For this to work, though, I do think that product leaders need to improve their judgment for sure. This is where the whole notion of human oversight and discretion comes into play. While AI will help teams run 10 times more experiments with the same resources, the real value will still come from leaders who are able to discern which experiments matter and how to carefully select or interpret AI-driven ideas or results without just blindly following the AI output like a sheep.

Bhavan: Hmm.

May Chin: And in the events industry, how this translates: for example, AI could dynamically personalize event experiences, but it’s human judgment and experimentation on top of that which determines whether those personalizations actually do drive ROI for sponsors and delight for attendees. It would also be really naive to just assume that any kind of AI model can somehow optimize this personalization by itself and guide itself to our intended business outcomes.

There are still many different checkpoints where human course-correcting and steering would help this model achieve our KPIs in a more congruent way. So the real opportunity for me is not to replace human strategy directly, but rather to supercharge it. And while AI makes experimentation cheaper and faster for sure, it’s really the experimentation aspect of it, the constant testing and inspection of its results, that keeps AI grounded in practicality.

Mindset Shifts and Advice for Product Leaders

Bhavan: Uh, fair enough. I think that’s really good advice, and some of your thoughts there. And I bring this up because I remember talking to you about how you started and where you are right now. So I just wanna understand: from the time you started to now, what have been some of the biggest mindset shifts that you’ve seen in yourself as now a product leader?

And I’d love for you to just give some advice to people who are just starting out their journey. So what would your top two or three pieces of advice be for them, looking back at your own journey?

May Chin: Uh, when I was younger, earlier on in my career, it was really important for me to pursue product or technical greatness and to make sure that whatever tech stack I was working on was just the shiniest, most impressive thing possible, right? So the big mindset shift for me has been moving away from just focusing on my tech and tooling, and rather focusing on people as well, and how we orchestrate people. So early on in my career, as I mentioned, I obsessed over, I would say, the harder aspects of my job, such as funnels, metrics, experiments, tech and tooling: which button color drives more conversions, which feature drives more retention. But now, you know, at Token 2049, I’ve learned over the years that no matter how elegant the system you build, if your people aren’t aligned, it will fall apart anyway, and you end up delivering a lower value add to the company than you should be. So my job isn’t just to ship features; it’s to get all of my cross-functional departments, marketing, ops, sponsors, tech, all moving in the same direction under insane time pressure, and all of the tech infra is a means to an end for that, but it’s not the end game in itself. So for the next generation of leaders in this region, my advice would be: don’t just copy tech playbooks from larger companies in the West. We don’t have the same funding cushions or the same user dynamics here. You need to be scrappier, more creative, more pragmatic, and also think deeper about how to leverage the people around you, as this is really what helps you to move faster and to ship bigger ideas as well. Another aspect to that, and perhaps most importantly, is to also stay as close to the problem as possible. In this region in particular, the difference between success and failure, I’ve found, is not just brilliance or technical brilliance; it’s often execution.
And those who can balance their audacious vision with getting their hands dirty, people who can both sketch out an amazing roadmap and also fix your workflow when Zapier inevitably breaks at 2:00 AM and you have to jump on your laptop from your bed or from a cocktail bar. These are the people who will build the next wave of great companies here, in my opinion.

Bhavan: Uh, some really good advice, and I really feel that when the younger generation listens to some of the things you said, it really just shows the experience that you’ve lived through and what you’re talking about. And now, from sharing advice to receiving advice, I just wanna get a little bit of an understanding: what are those one or two pieces of advice that you got very early on which have really stayed true with you through your journey?

May Chin: The first would be to know very well what your strengths and weaknesses are, what you do and don’t bring to the table. Even for myself, early on in my career, when I wanted to break into product and I was urgently looking for a new product role, I tended to go for a spray-and-pray approach where I would apply to almost every single company under the sun with, you know, a very strong CV, but definitely taking more of a volume-first approach.

Right? What I would do instead, and also the advice that I would give to fresh grads, is, even before you apply for that first job, to know and understand yourself as the very first step. Because not every company is for you and not every role is for you. There are also many different flavors and styles of product management, and depending on which bucket you fall into, the volume of possible roles that would make sense for you gets greatly reduced. So it’s much better to take a more narrow, focused approach while you are pursuing your next role or your first role, before even beginning your job search. And it’s, you know, really from knowing yourself at your core that you’ll be able to see what’s worth your time pursuing or not.

Rapid Fire Round

Bhavan: Awesome. Thank you so much, May. Uh, this is pretty much all the questions, but I don’t wanna let you go. I wanna take two more minutes; I have five rapid-fire questions that I really want you to answer. Some of them focus on CRO, some of them focus on yourself. I’ll get started and see what we hear from you.

If you were starting your career in CRO today, what is the one thing you would do differently? From what you did earlier?

May Chin: Ooh. One thing I would do differently: I would focus less on small tweaks that ultimately don’t really matter, like button color changes or CTA changes, and instead leverage CRO to drive bigger innovations. Yep.

Bhavan: On the topic of CRO, one more question. What is the one CRO metric that you really wish people would stop obsessing over?

May Chin: Add-to-cart rates. Because at the end of the day, even if you see an increased add-to-cart rate and your checkout rate is not growing in tandem, you still have a net negative outcome.

Bhavan: I really felt you were gonna say that, but I’m glad you did. And because we’ve spoken about AI for a good 10 minutes, I really wanna know: while we’ve spoken about the good and the bad of AI, what are some of the areas you feel that AI will probably take over in the next three years?

May Chin: Anything that falls in the bucket of grunt work. So this would include things such as writing product requirement documents, for example, or synthesizing transcripts from stakeholder meetings into your feature roadmap. Things like this, I think, can definitely be replaced by AI. In terms of, you know, deeper areas that require higher levels of human discretion,

I honestly don’t see that yet. There are inherent limitations to AI, such as, you know, limited context windows, where I just don’t see this happening, at least not today.

Bhavan: No, absolutely. And I don’t know if you’re into, if you’re a big reader, but I wanna know, are there a couple of books that you would recommend our listeners to read? Uh, like when they’re starting out the journey or something you’re reading right now?

May Chin: Ooh. It’s a book that I really love, and it’s a fiction book, but I do think it’s very relevant to the AI landscape today: I, Robot by Isaac Asimov.

So he wrote that book in the fifties, but it’s insane how ahead of its time it was. He even predicted things such as AI hallucination.

Episode Transcript

Episode Trailer

Where do you see the biggest opportunity when it comes to the use of AI, putting experimentation together? Experimentation can often be very slow and manual. AI can help to compress that cycle. This frees up humans to focus on the ‘why’ of testing rather than the ‘how’. From my experience, alignment definitely doesn’t come from telling people what to do.

It comes from giving teams a shared North Star. If everyone then takes a step back and agrees on the ultimate measure of success, then suddenly those KPIs don’t compete, but they connect with each other. Testing isn’t an exclusive domain for just engineers, technical team members, or data scientists. At its core, it’s a discipline.

It’s simply innovation driven by the scientific method. Don’t just copy tech playbooks from larger companies in the West. You don’t have the same funding cushions or the same user dynamics. Here you need to be scrappier, more creative, more pragmatic, and also think deeper about how to leverage the people around you.

Introduction

Welcome to a brand new episode of the VWO Podcast, recognized as a finalist in the CMA Awards 2025 for Best B2B Podcast. With hours of in-depth conversations, we talk about using CRO to deliver what your users want while also driving long-term growth for your business. Before we speak to our guest, here’s a very quick overview of what we do.

VWO is a leading experience optimization platform that enables you to gather in-depth user insights and build winning experiences across your website and mobile app. VWO Copilot, our AI champion, handles the heavy lifting of suggesting test ideas, creating variations, and finding insights so you can focus on making strategic decisions that drive growth.

Now, without further delay, let’s jump right back to this conversation.

Bhavan: Hi everyone, and welcome back to the VWO Podcast. My name is Bhavan, and today I’ve got an incredible conversation lined up. I’m very excited to introduce May Chin. She leads product at Token 2049 and Super AI, two of the world’s most prominent tech conferences. She oversees end-to-end technology strategy, from platform development to workflow automation to AI integration and digital experiences, and all of this at a scale and efficiency of over 20,000 attendees and 300-plus sponsors.

We’re gonna dive into all of this as we speak with her. Before this, she built the growth, experimentation, and analytics functions from scratch at brands like Zalora. She’s also led product and growth initiatives at Mindvalley, Lazada, and ipr, where she scaled consumer platforms across Southeast Asia, driving acquisition, monetization, and retention.

We are very excited to dive into all of this and really get to know a little bit more about her product mindset and how she’s shaping tech experiences at such a global scale. So, without further ado, I wanna introduce May to everyone. May, welcome to the VWO Podcast. How are you today?

May Chin: Doing really good. Very busy week, but everything’s going smooth.

Bhavan: Uh, I know you were traveling for a few days and you’ve just come back, so thank you so much for taking the time, because I completely get how busy this week would be for you. So I’m gonna dive straight in. Before we get to the questions that our audience always wants to hear more about,

I just wanna get a sense of, uh, do you have any sort of rituals or habits that help you get into a zone before a podcast like this, or a big presentation, or a big meeting, anything like that?

May Chin: Hmm. It would usually be a quick dose of caffeine right before. I’ve spoken at a few conferences over the past couple of years, and, you know, I still always get a little bit of stage fright, especially if it’s an in-person event. But what always helps me is a quick dose of caffeine, though I have to make sure I strike just the right balance. Too much ends up making me feel even worse. So I usually go for a quick shot of matcha and try to avoid coffee.

Bhavan: Yeah, I mean, someone told me that, uh, good nervous energy is always the right way to feel, uh, because that keeps you on your toes, and like you rightly said, just the right dose of caffeine also definitely helps. Alright, let’s get in. Let’s get in. Uh, we’re gonna start with, uh, really trying to understand a bit more about, uh, what you do currently.

Building for Global Conferences vs Consumer Products

Bhavan: And I say that because you’ve had a diverse experience: e-commerce, digital learning at Mindvalley, and now overseeing tech experiences at Token 2049 and Super AI. I just wanna ask: how does building for a global conference with 20,000-plus attendees and 300-plus sponsors differ from what you were doing earlier with consumer products? Like, how does your role differ?

May Chin: It’s actually extremely different, to be honest with you, but at the same time also very similar in certain ways. So let me get into that a little bit more, right? You know, these worlds that I’ve come from, moving from a purely digital world into something that has a very strong physical component, definitely do seem very distinct from each other, but in practice a lot of the same muscles are actually at work in the background. To give you some quick examples of that: when you are building for a purely digital platform, you would usually be optimizing for some kind of funnel, your typical acquisition, conversion, or retention, for example. The scale is digital and therefore usually very large in volume, your data is usually fairly clean and exhaustive, and you’re mostly thinking about how to get more users through some kind of happy path, such as a checkout path, for example. But with a global conference, instead, you suddenly have 20,000 people flying in from over a hundred different countries.

So the stakes are much higher in a certain sense, and all of these people that are flying in just for you are also expecting a seamless, world-class experience, as they should, given that they’ve taken the trouble to travel halfway around the globe just for us. So the notion of scale is very much still there, but I would say that the average expectation level per user tends to be much higher. So the biggest difference and shift would be in moving away from optimizing funnels to rather orchestrating moments. At Zalora, for example, I cared a lot about cart abandonment, but currently at Token 2049, I care more about whether an attendee can find their way from a keynote to a buffet line

to our after-party without friction, and also making sure they’re having fun along the way as well. So while the end goal of helping the user succeed remains exactly the same, the means of how we get there are very different. And unlike e-commerce or, you know, purely digital platforms, where you can test and roll back fairly quietly, here it’s one shot.

The event goes live and over 20,000 people stress test your system at once in real time. This tends to force a higher degree of quality even in your very first iteration, because physical experiences are much less forgiving. What makes things even harder as well is that I would be at the event myself, so there’s no way for me to hide from disgruntled users. So what this means is we tend to go for denser, higher-quality MVPs rather than scrappier ones. We also need to plan better and longer term for more edge cases. This also demands a higher degree of creativity, because you don’t only want a seamless experience; you also want to create one that people remember long after the event.

So these, for me, would be the major differences.

Defining Success: Attendees vs Sponsors

Bhavan: I think, having attended quite a few events myself, sometimes we just take for granted what we get at events and the experience we have there, and the amount of hard work your teams go through just to give us that experience. I can’t even imagine the nights that everyone spends making sure everything’s fine.

And I just have a small follow-up question to that, because you just touched upon the fact that you’re no longer just building features that can serve audiences better, but you’re actually building experiences for them. How does your team, at the end of an event, define the success of the product from an attendee point of view?

I know you touched upon it, but also from a sponsor point of view as well. Like, how do you guys sit down and say, this worked, but this did not work for us?

May Chin: Mm-hmm. Yeah, for sure. And that’s a really important question. When it comes to a company like ours, it’s not enough to just ship features. The real job is making sure that the experience we build and deliver works for two very different groups of users at once, which is, of course, our attendees and our sponsors. And our sponsors are honestly the lifeblood of our event, and they mean a great deal to us, so this is really a crucial perspective. So for attendees, as we’re building our product for them, the way that we define success is really when the event feels effortless, where to a certain extent they don’t even notice all of the underlying tech and it somehow melts into the background. Which makes sense, because if someone is flying halfway around the globe for our event, they’re able to get their event badge in just a few seconds, find all of the right keynotes, and collect perks through event gamification or other key activities like that.

And they’re able to walk away feeling like they discovered special event opportunities they couldn’t have found elsewhere, and therefore they feel incentivized to tell everyone they know about this event. That’s really how we define success on the attendee side. But sponsors are very different in most cases.

They’re usually not there for the free food and coffee. They’re usually there to justify a six-figure spend to their upper management. So for them, quality means the extent to which they’re able to generate good leads that feed well into their pipeline, and also the extent to which they’re able to generate and measure brand visibility as a result of being at our event. Did they meet the right investors, partners, or customers? Did our event help them track ROI in ways that go beyond an anecdotal “yes, our booth was very busy”? So I define product success as when both sides win without them noticing the compromises along the way. If, for example, attendees feel engaged at the event but not bombarded by sponsor noise or a feeling of being oversold to, yet at the same time sponsors still walk away saying, “This was the most valuable event that we’ve done all year, and I’m able to quantify this,” then that really is the sweet spot. So again, drawing a parallel to, you know, my previous life in e-commerce, back in the day we would usually measure some notion of user LTV. In events, I would say it’s roughly similar, but it’s more along the lines of repeatability: will this attendee or sponsor come back next year, and will they bring an additional friend or colleague along with them? If the answer is yes, then we’ve done our job well.

Prioritization Framework with Hard Deadlines

Bhavan: Fair enough. I think some really good points, and things that as an attendee and as a sponsor I’d love for the host to be thinking about. So thank you so much. One more question, and I really wanna ask you this. As the product leader, you know, with timelines so tight close to the event, how does your prioritization engine work?

Like, what should we experiment with? What changes in the product should we bring up? Do you have a framework around that that people would love to know a little bit about?

May Chin: Yep. So the very inherent nature of an event-space company is that we often don’t get the luxury of endless A/B tests or long roadmaps. The event is a hard deadline set in stone. Our doors open, over 20,000 people walk in, and whatever we’ve built along the way either works or it doesn’t. So in this role, I tend to think about prioritization in three buckets. The first would be what I call mission-critical hygiene: the foundational things that simply cannot fail. These would include things such as registration, badge printing, our floor plans, payments. If these break, nothing else matters, and I honestly should be fired, in my opinion. The second bucket of priority would be high-leverage differentiators. These are the features that build on the foundational aspects and elevate the experience to make our event truly stand out. So for us, that might manifest in things such as our hackathon platform, our NFT-powered event side quest, or personalized sponsor tools. What I really want to stress here, though, is that these aren’t just shiny features. They’re what actually makes people say, “This conference feels different to any other event I’ve been to before.” The third bucket alongside that would be nice-to-haves, which I would usually only green-light if they can be administered in the form of fairly simple MVPs with relatively low downside risk, where we can ship a polished yet scrappy enough version without sacrificing our UI/UX too much, and most importantly, learn quickly from it rather than sinking months into perfection. So across all three of these buckets, the key is ruthless sequencing. And again, early on in my career at Zalora, I loved optimizing for everything at once, but here I’ve instead learned to ask: what’s the one domino we need to knock over so the rest of the experience cascades and holds together in a cohesive way?
And having that tight timeline and hard event deadline really forces that clarity on you. And while this might sound really stressful to those who don’t work in the event industry, in a lot of ways the high stakes can be a blessing when you know that you’ll never get a redo.

When you know that there is no undo button or feature flag or rollback, you tend to stop overthinking and start being very practical: MVP first, test with a small group, and ship what matters the most.

Reducing Complexity with Low-Code and AI

Bhavan: Perfect. Thank you so much. And this leads really well into my next question, and I know a bit of what you will talk about as well, because when we met a couple of days ago... I just wanna know, in order to improve efficiency, I know you lean on a lot of low-code and AI-driven tools, and you spoke about vibe coding when you and I were talking. But I just wanna get a sense: in your role, how do you help your teams reduce complexity?

Um, without putting too much burden on the engineering team for high-impact work. How do you go about that, and how do you empower your team to be able to do that as well?

May Chin: Sure. Um, I think, you know, that’s a really good question, ’cause I do feel that there is a very big misconception about tech at conferences or events in general, which is that everything needs to be meticulously custom-built in order to work well and to work properly. A lot of people also think that the degree of complexity in your tech stack has a direct correlation to its efficacy. But the truth is, in events, oftentimes speed and reliability matter much more than building from scratch just for the sake of it, or just for the selfish pursuit of tech greatness, if that makes sense. So, you know, AI adds another layer to this mindset as well, wherein it enables us to ship novel features very quickly without major engineering resources. And this mindset culminated in the creation of a fully vibe-coded feature that we built called Photo Finder. And when I say vibe-coded, it means that someone such as myself, with no technical background, comes in and actually does software engineering with AI assistance. So with this Photo Finder feature, what it enabled us to achieve was, instead of simply sending our attendees thousands of event photos and calling it a day, we used AI to build an integration with AWS’s facial recognition tool to auto-curate images based on the selfie uploaded by the user. And this was fully built from scratch in just two to three days, with no engineers involved at all. So the role of AI-powered tooling and the whole notion of low-code or vibe coding in general, for me, is twofold. It democratizes execution for non-technical teams, and it also protects our overall engineering focus for the things that only engineers truly can do, such as making sure our backend database is structured in the right way.

Bhavan: Sure, yeah.

May Chin: So it’s a force multiplier. Um, and in a high visibility environment with immovable deadlines, that difference really means everything to us.
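May doesn’t share the actual Photo Finder code, but a minimal sketch of how such a selfie-to-photo lookup could work against AWS Rekognition’s face-collection API might look like this. The collection name, similarity threshold, and helper names below are illustrative assumptions, not details from the episode; the sketch assumes the event photos were previously indexed into a Rekognition collection with their filenames as external image IDs.

```python
def pick_matches(face_matches, min_similarity=90.0):
    """Keep only Rekognition-style face matches above a similarity cutoff,
    returning the external image IDs (e.g. photo filenames), best match first."""
    keep = [m for m in face_matches if m["Similarity"] >= min_similarity]
    keep.sort(key=lambda m: m["Similarity"], reverse=True)
    return [m["Face"]["ExternalImageId"] for m in keep]


def find_my_photos(selfie_bytes, collection_id="event-photos-2049"):
    """Query an indexed Rekognition face collection with an attendee selfie.
    Assumes event photos were indexed earlier via index_faces()."""
    import boto3  # deferred import: only needed when actually calling AWS

    rekognition = boto3.client("rekognition")
    response = rekognition.search_faces_by_image(
        CollectionId=collection_id,
        Image={"Bytes": selfie_bytes},
        FaceMatchThreshold=90,
        MaxFaces=50,
    )
    return pick_matches(response["FaceMatches"])
```

Keeping the threshold filtering in a small pure function like `pick_matches` makes the curation logic easy to tune and test without touching AWS at all.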

Human Discretion in AI-Driven Testing

Bhavan: Perfect, May, thank you so much. And my next question probably leads into what you just spoke about. We’ve spoken about AI, and on a lot of podcasts people are talking about AI, but I really wanna know: what is your sense of where human discretion comes in, with automation and AI speeding things up?

And yet having certain checks and balances in place when you’re testing at such a scale.

May Chin: Yep. So I think it’s really important to have a balanced perspective here. And while I’m the kind of person to expound the benefits of AI-powered tooling, low-code, and vibe coding as a stack as well, I think it’s important to not get too starry-eyed about all of those new developments, because the point is, while these tools can take away grunt work, they can’t really replace human judgment, at least not as of today. At Token 2049, we do run experiments all the time, but unlike a hundred percent digital product, we don’t have infinite cycles of iteration. As I said, we only have one shot once the event goes live. So this makes testing and human discretion not just optional but actually essential and mandatory. For example, if we were to automate badge printing and it misfires on day one, you have 200 angry people standing in line, and there’s nowhere to hide, because I would be at the event too. So human discretion is really what keeps us honest and grounded, in my opinion. It’s me asking: does this actually make sense in the real world, and not just in theory? So the role of testing for me is less about perfection and more about risk management. Can we dry-run a simple test with 50 people instead of 20,000 ahead of the event? Or can we shadow-test a new product in parallel before switching it on? These constant checks allow space for human discretion and, most importantly, help to prevent big disasters and inform the overall direction that our roadmap should be moving in as well. So while things like automation and AI definitely remove the proverbial speed bumps, for me, human discretion and testing is still the steering wheel. And without it, yes, you are still moving faster, but towards a crash.
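The episode doesn’t describe how such a small pilot group would be selected. One common approach, sketched here purely as an assumption rather than Token 2049’s actual method, is deterministic hash-based bucketing, so the same attendee always lands in the same cohort across runs:

```python
import hashlib


def in_pilot(attendee_id, pilot_size=50, population=20000):
    """Deterministically place roughly `pilot_size` out of `population`
    attendees into a pilot cohort by hashing their ID into a bucket."""
    digest = hashlib.sha256(attendee_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % population
    return bucket < pilot_size
```

Because assignment depends only on the ID, there is no state to store, and re-running the check on-site or in a backend service gives the same answer for the same attendee.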

Democratizing Testing Across Teams

Bhavan: I think with AI kicking in, a lot of dependencies have increased for sure, but it still needs human discretion, and thank you so much for putting that so well. And because you’ve just said that, I just wanted to get a better understanding, and maybe some practical steps and approaches as well.

We know you’ve championed democratizing testing across non-technical teams as well. Now, beyond tools, would you be able to share with our listeners some practical steps and approaches you’ve found most effective, especially when different functions are testing for their own needs as well?

May Chin: So the very first step, I think, to adopt experimentation in a practical way is actually something a bit more philosophical, in the form of a mindset shift, which is to understand that testing isn’t an exclusive domain for just engineers, technical team members, or data scientists. At its core, it’s a discipline.

It’s simply innovation driven by the scientific method, and anyone in any role would be doing some kind of innovation. So why wouldn’t you apply scientific methods on top of it to maximize your chances of success? Right. So on a practical level, once you have this mindset in place, I’ve found that three things work best. The first is to strip away the jargon, which a lot of A/B testing practitioners, myself included, can fall into without realizing, for sure. If you refer to something as a hypothesis test with confidence intervals, people will tune you out. If you instead say, “Let’s try two versions of this email and see which gets more replies,” suddenly marketing is running to you and saying, “How can we launch this?” Without them even realizing that it is an experiment, right? But as far as you’re concerned, this is just an activity that helps them to maximize…

Bhavan: True. True.

May Chin: And the second part of that is to try to make your tools as invisible as possible, because the truth is, when you look under the hood of your experimentation stack, it can be very daunting and it can be very…

Bhavan: Absolutely.

May Chin: So when I was at Zalora, we set up our testing tooling so that any team member can launch an A/B test, and all of the data collection and statistical interpretation runs in the background. It gets parsed by proprietary logic that we built ourselves, which then outputs a business-friendly interpretation. So non-technical team members don’t even have to know how it works; they just see results. Third, and most important, is to do what you can to lower the cost of failure. In e-commerce, I learned that a lot of people hesitate to test when everything feels so high stakes, and it’s roughly the same here. So if you instead frame experiments as scrappy pilots, where, say, we trial this side-quest mechanic with 100 people before rolling it out to 20,000, suddenly people feel a lot braver.

So for me, democratizing testing is really inherently more of a philosophical shift at its core, to be honest. It’s about building cultural levers within your organization so that people feel less intimidated, that’s one, and also braver to partake in it in the first place. Once you give them the proper language, the tooling, the guard rails, and most importantly the safety net, suddenly it stops being so daunting and it just becomes an inherent part of how they approach innovation.
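To make the idea concrete: the "invisible tooling" May describes, where statistics run in the background and non-technical teams just see a verdict, could be as simple as a wrapper around a standard two-proportion z-test that returns plain English instead of p-values. This is a minimal, hypothetical sketch, not Zalora's actual logic; the function name, thresholds, and wording are all illustrative.

```python
from math import sqrt, erf

def interpret_ab_test(visitors_a, conversions_a, visitors_b, conversions_b):
    """Run a two-proportion z-test, but return a business-friendly verdict."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    # Pooled proportion and standard error for the difference
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal CDF (Phi(x) = 0.5 * (1 + erf(x / sqrt(2))))
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    lift = (p_b - p_a) / p_a * 100
    if p_value < 0.05:
        return f"Version B changed conversions by {lift:+.1f}% and the result is trustworthy."
    return "No clear winner yet; keep the test running or call it a tie."

# Example: 500 vs. 600 conversions out of 10,000 visitors each
print(interpret_ab_test(10000, 500, 10000, 600))
```

The point is the interface, not the statistics: the team asking "which email gets more replies" never sees z-scores or confidence levels, only a sentence they can act on.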

Aligning Cross-Functional Teams

Bhavan: I mean, I think some real solid advice; I think a lot of people should hear this. A truth bomb as well. And I’m gonna lean on the first point you mentioned about KPIs. At organizations like Zalora, how did you go about aligning cross-functional teams on shared priorities when, you know, everyone has their own KPIs and pressures to deliver different things? And sometimes it’s not just scaling experiments, it’s also what you touched upon: scaling people’s expectations.

Also, how did you go about running with so many different teams?

May Chin: Right. So from my experience, alignment definitely doesn’t come from telling people what to do; it comes from giving teams a shared North Star. For example, marketing might be chasing ticket sales, our operations team might be chasing smooth check-ins, and our sponsors team wants ROI. But if everyone then takes a step back and agrees that the ultimate measure of success is whether attendees and sponsors come back next year, suddenly those KPIs don’t compete; they connect with each other. The practical side to enabling this is ruthless prioritization and transparency. In doing so, it really helps to make trade-offs very explicit. If we invest in feature X, it means delaying feature Y; which one best serves our core goal of getting attendees and sponsors to return next year? And once the trade-offs are on the table, teams feel like they’re part of the decision rather than victims of it, and they show up for it.

Bhavan: Correct.

May Chin: The other piece to that is shared rituals and routines, because teams drift towards working in silos. So it helps to bring everyone into the same room, literally or virtually, and walk through the funnel end to end together. For example, ops can see what marketing needs, and marketing can see what tech is up against. When you have that shared context, it helps to build empathy, which is a natural precursor to alignment, in my opinion. So while scaling people can definitely be really difficult, I do feel that once you anchor them to one North Star, even if that North Star is fairly abstract, just having that commonality in place helps to make trade-offs more transparent. It keeps everyone seeing the same picture, and the chaos becomes a lot more manageable.

Breaking Deadlocks in Prioritization

Bhavan: Yeah, and I just wanted to understand: you mentioned that you’ve put everything on the table, and then there are trade-offs. If there is a deadlock, how do you go about resolving that, and who takes the lead in solving it? I just really wanna understand that part.

May Chin: So for me, the people who take the lead on resolving this deadlock scenario should actually fall into two different buckets. The first should be the person who’s responsible for executing that initiative, because there is a direct opportunity cost for this person, right? The time that they spend building this feature versus something else; which one would bring the highest value add for the company? The second bucket of decision makers should be the person who holds the most stake in the value add of this feature, and often this can be a completely different stakeholder from the person executing the initiative. So it’s important for both of these parties to come together and arrive at a solid decision on priority.

As for how to make that decision, it’s really important that it’s as quantitative as possible, or at least as empirical as possible. Often in some companies there are many subjective aspects that cannot be quantified, and that’s fine, but we can still be empirical, right? Which is to take a step back and always focus on the North Star. For us at TOKEN2049, as I mentioned, it’s always about which initiative would maximize the chances of attendees or sponsors returning the next year. And if we distill it to that very simple tenet, it suddenly becomes a lot clearer which initiative would move the needle more on that front. Even if we are still unable to decide between the projected impact of two different initiatives, we can then take into account other factors such as the relative effort associated with each; we should, of course, be aiming for the lowest-effort one first. And if we still arrive at a deadlock in those situations, we would then fall back onto more subjective, but equally important, factors such as overall company positioning, or whether this would lead to a competitive advantage for us compared to other events. These factors, while not directly quantifiable, are still important and ultimately can help you arrive at a more objective priority decision as well.
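The tie-break order May describes (projected impact first, then lowest effort, then subjective positioning) is effectively a lexicographic sort. A toy sketch of what that comparator might look like, with entirely illustrative field names and numbers:

```python
# Hypothetical backlog prioritization following the tie-break order:
# 1) higher projected impact, 2) lower effort, 3) higher positioning score.

def priority_key(initiative: dict) -> tuple:
    return (
        -initiative["projected_impact"],   # higher impact ranks first
        initiative["effort"],              # lower effort breaks impact ties
        -initiative["positioning_score"],  # subjective tiebreaker comes last
    )

backlog = [
    {"name": "sponsor portal", "projected_impact": 3, "effort": 5, "positioning_score": 2},
    {"name": "wayfinding app", "projected_impact": 3, "effort": 2, "positioning_score": 1},
    {"name": "ticket upsell",  "projected_impact": 5, "effort": 4, "positioning_score": 3},
]

ranked = sorted(backlog, key=priority_key)
for item in ranked:
    print(item["name"])
```

Here "ticket upsell" wins on impact outright, while the two impact-3 initiatives are separated by effort, exactly the fallback chain described above.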

The Future: AI and Experimentation

Bhavan: Fair enough. I think that’s a really good way to break it down. Let’s move a little bit towards the future, May, and I just wanna get your thoughts on this. You have hands-on experience of building various things and strategic leadership, but where do you see the biggest opportunity for product leaders when it comes to the use of AI?

Putting AI and experimentation together to really deliver value in the next three to five years; where do you see that?

May Chin: So for me, the real opportunity is, you know, not seeing AI and experimentation as two separate constructs, because they’re really not, but rather seeing the opportunity of synergizing them within the same feedback loop. What do I mean by that? Right now, experimentation can often be very slow and manual: you design, run, and analyze a test, rinse and repeat in perpetuity. AI can help compress that cycle, for example by auto-generating experiment variant ideas, predicting likely outcomes, or surfacing the riskiest assumptions that you never thought of, before you even run a test. This frees up humans to focus on the why of testing rather than the how. For this to work, though, I do think that product leaders need to improve their judgment. This is where the whole notion of human oversight and discretion comes into play. While AI will for sure help teams run 10 times more experiments with the same resources, the real value will still come from leaders who are able to discern which experiments matter and how to carefully select or interpret AI-driven ideas or results, without just blindly following the AI output like sheep.
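The loop May describes, AI proposing variants at volume while humans gate which experiments ship, could be sketched roughly as follows. Everything here is hypothetical: `ai_propose_variants` is a stand-in for a real model call, and the review rule is a placeholder for the human judgment she emphasizes.

```python
# Sketch of an AI-in-the-loop experiment pipeline (all names hypothetical).

def ai_propose_variants(baseline: str, n: int) -> list[dict]:
    # Placeholder for a generative-model call that drafts variant ideas
    # with a predicted lift attached; a real system would query an LLM here.
    return [
        {"copy": f"{baseline} (variant {i})", "predicted_lift": 0.01 * i}
        for i in range(1, n + 1)
    ]

def human_review(idea: dict) -> bool:
    # The judgment step: does this experiment actually matter?
    # A stand-in rule here; in practice a product leader decides.
    return idea["predicted_lift"] >= 0.02

def build_test_queue(baseline: str, n: int) -> list[dict]:
    ideas = ai_propose_variants(baseline, n)
    # AI compresses ideation; humans gate what actually gets tested.
    approved = [idea for idea in ideas if human_review(idea)]
    return sorted(approved, key=lambda idea: idea["predicted_lift"], reverse=True)

queue = build_test_queue("Buy tickets", 4)
```

The design point is that the model only widens the top of the funnel; nothing reaches a live audience without passing the human gate.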

Bhavan: Hmm.

May Chin: And in the events industry, here’s how this translates: for example, AI could dynamically personalize event experiences, but it’s human judgment and experimentation on top of that which determine whether those personalizations actually drive ROI for sponsors and delight for attendees. It would also be really naive to assume that any kind of AI model can somehow optimize this personalization by itself and guide itself to our intended business outcomes. There are still many different checkpoints where human course-correcting and steering would help this model achieve our KPIs in a more congruent way. So the real opportunity for me is not to replace human strategy directly, but rather to supercharge it. And while AI makes experimentation cheaper and faster for sure, it’s really the experimentation aspect, the constant testing and inspecting of its results, that keeps AI grounded in practicality.

Mindset Shifts and Advice for Product Leaders

Bhavan: Fair enough. I think that’s really good advice and some great thoughts. And I remember talking to you about how you started and where you are right now. So I just wanna understand: from the time you started to now, what have been some of the biggest mindset shifts that you’ve seen in yourself as, now, a product leader?

And I’d love for you to give some advice to people who are just starting out on their journey. What would your top two or three pieces of advice be for them, looking back at your own journey?

May Chin: When I was younger, earlier on in my career, it was really important for me to pursue product or technical greatness and to make sure that whatever tech stack I was working on was just the shiniest, most impressive thing possible, right? So the big mindset shift for me has been moving away from just focusing on my tech and tooling, and instead focusing on people as well, and how we orchestrate people. Early on in my career, as I mentioned, I obsessed over the harder aspects of my job: funnels, metrics, experiments, tech and tooling, which button color drives more conversions, which feature drives more retention. But now, at TOKEN2049, I’ve learned over the years that no matter how elegant the system you build, if your people aren’t aligned, it will fall apart anyway, and you end up delivering a lower value add to the company than you should be. So my job isn’t just to ship features; it’s to get all of my cross-functional departments, marketing, ops, sponsors, tech, moving in the same direction under insane time pressure. All of the tech infra is a means to that end, but it’s not the end game in itself.

So for the next generation of leaders in this region, my advice would be: don’t just copy tech playbooks from larger companies in the West. We don’t have the same funding cushions or the same user dynamics here. You need to be scrappier, more creative, more pragmatic, and also think deeper about how to leverage the people around you, as this is really what helps you move faster and ship bigger ideas as well. Another aspect to that, and perhaps most importantly, is to stay as close to the problem as possible. In this region in particular, the difference between success and failure, I’ve found, is not just technical brilliance; it’s often execution. Those who can balance an audacious vision with getting their hands dirty, people who can both sketch out an amazing roadmap and also fix your workflow when Zapier inevitably breaks at 2:00 AM and you have to jump on your laptop from your bed or from a cocktail bar, these are the people who will build the next wave of great companies here, in my opinion.

Bhavan: Some really good advice, and I really feel that when the younger generation listens to some of the things you said, it really just shows the experience that you’ve lived through. And now, from sharing advice to receiving advice, I just wanna get a little bit of an understanding: what are those one or two pieces of advice that you got very early on which have really stayed true with you through your journey?

May Chin: The first would be to know very well what your strengths and weaknesses are, what you do and don’t bring to the table. Even for myself, early on in my career, when I wanted to break into product and was urgently looking for a new product role, I tended to go for a spray-and-pray approach, where I would apply to almost every single company under the sun with, you know, a very strong CV, but definitely taking more of a volume-first approach.

Right? What I would do instead, and also the advice that I would give to fresh grads, is, even before you apply for that first job, to know and understand yourself as the very first step. Because not every company is for you, and not every role is for you. There are also many different flavors and styles of product management, and depending on which bucket you fall into, the volume of possible roles that would make sense for you gets greatly reduced. So it’s much better to take a more narrowly focused approach while you are pursuing your next role, or your first role. And it’s really from knowing yourself at your core that you’ll be able to see what’s worth your time pursuing or not.

Rapid Fire Round

Bhavan: Awesome. Thank you so much, May. This is pretty much all the questions, but I don’t wanna let you go yet. I wanna take two more minutes; I have five rapid-fire questions that I really want you to answer. Some of them focus on CRO, some of them focus on yourself. I’ll get started and see what we hear from you.

If you were starting your career in CRO today, what is the one thing you would do differently from what you did earlier?

May Chin: Ooh, one thing I would do differently. I would focus less on small tweaks that ultimately don’t really matter, like button color changes or CTA changes, and instead leverage CRO to drive bigger innovations.

Bhavan: On the topic of CRO, one more question: what is the one CRO metric that you really wish people would stop obsessing over?

May Chin: Add-to-cart rates. Because at the end of the day, even if you see an increased add-to-cart rate, if your checkout rate is not growing in tandem, you still have a net negative outcome.

Bhavan: I really felt you were gonna say that, but I’m glad you did. And because we’ve spoken about AI for a good 10 minutes, I really wanna know: while we’ve spoken about the good and the bad of AI, what are some of the areas you feel AI will probably take over in the next three years?

May Chin: Anything that falls in the bucket of grunt work. So this would include things such as writing product requirement documents, for example, or synthesizing transcripts from stakeholder meetings into your feature roadmap. Things like this, I think, can definitely be replaced by AI. In terms of, you know, deeper areas that require higher levels of human discretion,

I honestly don’t see that yet. There are inherent limitations to AI, such as, you know, limited context windows, where I just don’t see this happening, at least not today.

Bhavan: No, absolutely. And I don’t know if you’re a big reader, but I wanna know: are there a couple of books that you would recommend our listeners read, like when they’re starting out on their journey, or something you’re reading right now?

May Chin: Ooh. A book that I really love, and it’s a fiction book, but one I think is very relevant to the AI landscape today, is I, Robot by Isaac Asimov.

So he wrote that book in the fifties, but it’s insane how ahead of its time it was. He even predicted things such as AI hallucination,

Bhavan: Wow.

May Chin: things like that, you know? So it’s just a really fascinating book about the various pitfalls of AI, and how the way that AI operates is often analogous to human psychology and the way that our brains work. So I just think it’s really relevant to…

Bhavan: Could you share the name once again, please? I, Robot. Okay, perfect. Thank you.

May Chin: They made a terrible movie of it, though.

Bhavan: Oh yeah, I remember that.

May Chin: It’s nothing like the book.

Bhavan: Oh man, I’ll definitely take a note of that. And my last question: what is that one dream or goal that you really want to achieve in the next three years?

May Chin: The next dream or goal… I would say probably starting my own consultancy or mentorship business that specializes in getting fresh grads their first job in product. It’s something that means a lot to me. I don’t really have the time to do as much of that as I would like to now, but three or five years down the line, if I could do that full time, yeah, that would be the dream achieved for me.

Closing

Bhavan: May, from how passionately you spoke about some of the things today, I really feel you’d add so much value for people who are getting into the industry; I genuinely think you should consider that. But thank you so much for being candid, and thank you so much for all the answers you’ve given us.

This brings us towards the end of today’s conversation. May, it was a pleasure listening to some of the answers you gave; they were so real, which I think is what people are really looking for. They’re not looking for answers they can read somewhere else; you’ve really shared some of your own experiences.

I’ve enjoyed asking you these questions and hearing those answers. And until next time, I wish all our listeners happy experimentation, and keep learning. Thank you so much, everyone.
