Editor’s Note: This transcript was created using AI transcription and formatting tools. While we’ve reviewed it for accuracy, some errors may remain. If anything seems unclear, do refer to the episode.
Guest Introduction
Reuben John: Thank you for joining us today. I’m Reuben, Product Director at VWO, leading the Feature Experimentations product, and I’ll be your host for this episode. My guest today is Sourabh Gupta, a product leader with over 17 years of experience building and scaling FinTech and consumer tech platforms. He has spent his career turning complex financial systems into simple, trusted products people use every day.
Sourabh is currently the SVP of Product and Design at Flip, one of Indonesia’s largest FinTech platforms. He leads product, design and data across a growing portfolio of products and has played a key role in driving adoption, revenue and impact, also building and scaling teams and systems. In this conversation we’ll focus on how he approaches product leadership at scale and the principles that guide building and scaling durable FinTech platforms in high growth markets. Let’s get into it.
Hi Sourabh. Welcome to the VWO podcast.
Sourabh Gupta: Thanks, Reuben. Thanks for having me. Looking forward to it. While I’ve done a bunch of podcasts and conferences in the past, I think this topic is something that doesn’t get talked about that much. So that’s also why I’m looking forward to sharing my learnings on experimentation and product leadership, and hopefully people see value in it.
Conversation
Personal Habits and Rituals
Reuben John: Absolutely. I mean, again, it’s very exciting to have you on board. It’s not often that we get this sort of firsthand experience and perspective on something that’s so relevant in today’s market, right? It’s 2026. AI is taking over, channel efficiencies are getting crunched, roles are merging. So many new things are happening, right? So I think it’s the most relevant time to be discussing this. So yeah, let’s get into it. Before we get into the serious questions, I have an icebreaker round, so let me quickly get that out of the way. The question here is: do you have any rituals or habits that help you get in the zone, whether it’s before a podcast, a presentation, or a big meeting?
Sourabh Gupta: Interesting question. Honestly, I don’t think I have a ritual as such. What has happened over the years is that as your scope keeps increasing, you’re looking at multiple problems and context switching every 30 minutes or so, right? So I won’t say I have a ritual, but I’ve trained myself in a way that I’m able to switch context very quickly. Like, in one call I’m talking about expanding our loan portfolio, and in the next call I’m trying to solve an onboarding problem with the team, so I’m able to context switch effectively. What I do is, because you are managing multiple teams and multiple problems, I keep a tracker, prioritized in terms of what problems are top of mind, and I keep rotating through that list. So I know, okay, today these are the key problems I need to look at. And I do start that early in the day so that I know how to progress.
And I use Slack effectively for that because, as a tool, we use Slack at Flip to communicate, and a lot of discussions happen on different channels. So Slack reminders are a kind of lifesaver for me to keep track of everything that’s happening, where I need to check in, what I need to follow up on, and what’s to be prioritized for today versus later. So nothing fancy: pure focus, deep work, and just a to-do list. That’s what I use.
Reuben John: It is very interesting that you say that, right? In a world of such complex personal workflows that people are building, it’s ultimately the simplest thing that gets us through, right? That’s a very interesting perspective, and for folks who are joining us, that’s like a 101 on how to operate as a product leader. Yeah.
Sourabh Gupta: And I’ll give a quick tip on that as well, because a lot of my team members have asked me about this, right? Like, how am I able to stay on top of things all the time? How do I not drop off on threads? I think a lot of people have to-do lists, right? That’s the easy part. You can use different apps for it: Trello, whatever. The key is to build a habit: whatever your to-do list is, you need to look at it periodically, okay? If you’re not looking at it again and again, there’s no point; whatever list you have, wherever you’re maintaining it, you’ll keep losing track of things. This has to be something you’re doing multiple times a day to stay on top of it. Slack is one place where my muscle memory is like that: even if I’m in a meeting, I’ll quickly alt-tab to see what threads are open that I need to work on. So I think the habit part is the key. It’s not the tooling, it’s the habit.
Reuben John: Absolutely right. Habit and consistency are honestly what drive any system. Very interesting. Thanks for sharing, Sourabh.
Flip’s Pivotal Product Bets
Reuben John: And with that, let’s get into our main topic of the day, right? So right off the bat, let me get into this. Flip has become the market leader in consumer and merchant payments in Indonesia. Now, when you look back, what was the most pivotal product bet, let’s say an experimentation-driven product bet, that made this scale possible, according to you?
Sourabh Gupta: Sure. So I’ll give a bit of context, right? I’m not sure if all the listeners are aware of Flip, so I’ll start with some context. Flip started about a decade back, trying to solve the problem of moving money within Indonesia, kind of like a P2P transfers app, because there was a lot of friction back then, and moving money between banks in Indonesia was also very costly. That’s how we started and that’s how we found product-market fit. We bootstrapped for many years, and when we started to scale up, a question came up: we were solving the problem of moving money for users within Indonesia, and we had built our own payments platform for it, an engine with multiple payment rails connected and a lot of logic built in.
So the point that came to us was: if we’re solving this for users within Indonesia, why don’t we solve the same problem for businesses and merchants? Because they also need to move money and accept payments efficiently, at a lower cost, at scale, and with reliability. And there are people who need to move money outside of Indonesia, right? Those were the two key bets, I would say, that we took early on to diversify our product portfolio in payments while sticking true to our core value, which was to be in the business of moving money.
But if you ask me, is that the big win for us? I look at it differently. For me, what has happened over the years is that we have been able to change the culture and DNA of our team, where, be it product, business, or marketing, every team has defaulted to experimenting and iterating on any decision. So winning over the long term, for me, is not about one or two key bets. It’s about the small decisions we take on a daily basis, sprint by sprint, product by product, feature by feature, which have compounded to where we are today.
So today at Flip, we have a portfolio of 10 to 12 large products, and we are diversifying beyond just payments. Our motto today is built around three areas: send, spend, and save. We are building multiple products around that, beyond payments, touching lifestyle, and effectively cross-selling them to our users. Because payments, as you know, is a low-margin business, but that’s the funnel through which you get a large chunk of users, and then you cross-sell them alternate products, lifestyle products, where your margins and take rates are higher, and that’s how you increase your average revenue per user.
So to answer your question in a nutshell, for me, the biggest win is how the mindset of the team was changed and how each individual has put experimentation in their DNA and decision making, which has compounded over so many years.
Reuben John: Right. Thanks for sharing that. It’s so interesting, because what I’m hearing you say is that it’s not just the big bets that, let’s say, leadership or the founder takes. It’s also the compounded effect, and that’s a term I want to double click on: the compounded effect of every team member truly internalizing that any addition they make has to be backed by data, backed by some form of experimentation, right? Nothing is rolled out without that.
Sourabh Gupta: Absolutely. Because as you scale, right, it’s no longer the founders or two or three individuals who hold all the decisions. You have to democratize decisions, because otherwise you get decision fatigue. And if you don’t have the right foundation, then your team is not taking decisions the right way. That’s why I think this cultural change is important.
Reuben John: Absolutely, and it’s a great point to take away from this, right? A lot of companies, especially as they grow out of that bootstrap phase and get more mature, struggle with this: how do I ensure that I have a team that wins collectively and is not just dependent on a few strategic bets that may or may not work, right? That’s the difference between success and failure for an entire company at times. So, it’s very interesting. Thanks for sharing that.
Prioritizing Experimentation Across Product Portfolio
Reuben John: Which also makes a good segue to the next point, right? Now, as you scale, you have many products under your belt; I believe you currently have more than 10 products in your portfolio, right? For these products, how do you determine whether to focus experimentation on core revenue-driving journeys or on newer product areas where learning velocity could probably be higher?
Sourabh Gupta: Okay. So I think it’s an interesting point that you mention on the portfolio, right? Because a lot of people, when I interact with them, will ask, okay, at which stage is your product: is it zero to one, one to 10, or 10 to 100? But my point to them is always that when you are a FinTech or any growth-stage company, you don’t have one product. You have a portfolio of products, and different products are at different stages of maturity.
So today at Flip, I have a product which is, you could say, 10 to 100, serving millions of users. We do 20 to 30 million transactions per month on that product, moving a couple of billion dollars monthly, right? Then I also have a product at the one-to-10 stage, where it has found early product-market fit, and we are trying to figure out how to optimize our pricing and communication to scale it. And then we have products which are more zero-to-one experimentation, where we are still trying to figure out: is there demand for it? Does it solve an actual problem or not?
So depending on what the priority of the organization is, you decide which product to focus on, because you can always experiment on each of these products. It’s just that what you are validating, the hypothesis, will be different, right? For a zero-to-one product, you might want a go/no-go hypothesis validated. Whereas for a more mature product, you are trying to see, okay, can I improve this particular funnel by one percentage point? Because that percentage change might result in thousands of dollars of revenue, right?
So depending on the product stage, what you are testing or experimenting with also changes, and it comes back to what the organization’s focus is, right? Do they want to do optimization on these products, or do they want to focus on products which are giving them double-digit growth? For us, that is how we choose and decide where to focus based on our existing priorities.
The key difference is that when the product is mature, we’ll be more, you could say, guarded in terms of experimentation, because we know any mistake can lead to a larger impact, right? So we’ll be more cautious about what we’re experimenting with, making sure we have configured things correctly. Whereas for zero-to-one product experimentation, the focus is on learning velocity. We want to learn quickly, so we won’t be that strict about the parameters. It’s okay if it’s scrappy, because we will prioritize time to learning over the accuracy of an experiment. That’s how we look at experimentation across different products and prioritize accordingly.
Reuben John: Got it. And I think this is a key takeaway for many folks out there, right? Flipping it to a more product management angle: when folks talk about what product management means, it of course depends on the context of the industry and the company, whether it’s at a growing stage or mature, but it also depends on the product you’re working with, right? Your metrics are different accordingly, and the bets you take are different. One thing I really found interesting is learning velocity for zero-to-one products, right? It’s about building and failing quickly and understanding that, and not everything has to be linked to a revenue output.
I have a follow-up question there. Of course, anything we do ultimately ends up impacting our top line or bottom line, inevitably, right? It’s a function of whether it’s a leading or a lagging metric, and the degrees of separation there. Now, it typically happens that when you’re prioritizing experiments, the ones that have a not-so-obvious impact on, let’s say, some core business or product metric tend to get deprioritized. How do you, or how does Flip, address that? I think one hint you gave is, of course, learning velocity, but I’m just opening that out to you.
Sourabh Gupta: Sure. So what you mentioned is a problem when everyone is looking at only one or two areas, right? The way we work is, we have individual pods, or squads you could call them, each focused on their own problem area. So when we are talking about prioritization, it is happening within that particular pod. For that pod’s or squad’s focus area for the semester, if they need to run certain experiments, then, like you said, while everything ultimately moves your top line or bottom line, the leading metric for them could be something like KYC funnel improvement, right? But you can always translate that into more users coming in and more revenue, right?
But we generally don’t have this prioritization problem, because we prioritize within that particular team. Within the team, we figure out what their priorities are, because the moment you do it cross-team, all of these questions start coming in. And then it becomes harder for platform teams to experiment, because platform teams’ outcomes are generally not that visible or not that easily correlated with the top line, right? Versus a services team, where every transaction leads to a certain amount of revenue. That’s how I think we have been able to solve it; we generally don’t see that kind of conflict. We have staffed each team accordingly, and each team has the capability to run independent tests. That’s how we are democratizing it. There’s no centralization of testing; everyone is in charge of their own destiny. So we rarely see these conflicts.
Reuben John: Got it, got it. Again, you’re tying back to that experimentation culture you’re building, right? Absolutely makes sense.
Balancing Speed and Compliance in FinTech
Reuben John: Now, at the same time, Flip is in the finance space, right? And we know finance is typically a very regulated environment. As you mentioned, teams need to move fast, but at the same time you need to be compliant and cautious, because there’s internal risk management and there are also regulatory agencies monitoring you. In this scenario, how do you structure an experimentation system that speeds up launches while still ensuring compliance and safety?
Sourabh Gupta: Sure. So in regulated companies, Reuben, it’s often joked that your roadmap is determined by the regulators and not by the product team. But jokes aside, the point you are making stretches beyond just experimentation, right? I have heard this even about taking bets on product, where people will raise the question: when you have a regulatory setup, when everything needs to be validated with your regulator and compliance team, how do you move fast?
And from my experience, while there’s no silver bullet, the reason companies usually struggle, in my view, is that they look at compliance and regulation as an afterthought. They treat it as a checklist: once they have decided to do something, then they approach the compliance team or look at the regulations. But for us, and I think for companies who are more mature in this area, you need to make compliance and regulation part of your product development process.
And that’s what we have done: right from product conception, when we are ideating on a new idea, we loop in our compliance team. We get their buy-in, we talk to them, we have meetings with them. We explain what we are trying to do, and we try to get assurance that this is something we can go ahead with. And there are obviously gray areas, because compliance is not always black and white, right? That’s where we will seek an audience with the regulators as well, to make sure that whatever ideas and concepts we have are on the right side.
So firstly, that’s how we have so far been able to avoid situations where we launch something and need to roll it back: whatever we build already has the blessing of the regulator and our teams, and that has happened by embedding compliance within the product development process.
Now, when it comes to experimentation, what I have also realized with time is that for regulators, it’s more important to understand the intent, and not exactly what is written in a document, right? So when we are thinking about experiments as well, we understand what the intent of the regulator is and stay on the right side of it. And within the experimentation framework, you need to look at what you are experimenting with. Because there are areas where, even though we are a FinTech, if I’m experimenting on a particular design, a copy, an illustration, that doesn’t go against any particular regulation, right?
So we divide it into multiple layers. The UI layer is something we are very fast to experiment with, because we know it doesn’t touch anything regulated. Then there’s a decision layer, where we are a bit more guarded. And finally, anything that has to do with user data or money movement, we are absolutely cautious about what we do there, and it goes with guardrails.
And I’ll give you an example. We were doing a pricing change. From a regulation perspective, the requirement was that you need to communicate the change to the user upfront, and you need to communicate clearly whatever is changing. So when we were experimenting, it was about the best way to communicate it, right? We were following what the regulator intended, but we were still experimenting, and the experiment was about what gives us the better conversion. Do I show it as a fee change, as a popup, as a tooltip, or as an animation? That’s where we started to experiment. So far, I would say at Flip we have never had a problem, because of this philosophy of working together with regulators while still being innovative.
Reuben John: Got it, and thanks so much for sharing that. Because even with some of our large financial services customers, this is something we hear often, right? They struggle with keeping up that velocity and with the process of it, and those become key concerns and sometimes a blocker to even building that culture, as you said. So it’s super interesting to see how you combine process with what I see as a kind of prioritization matrix of what you can and cannot do easily, which then becomes the guiding principle for your entire team. Fantastic.
Building and Sustaining Experimentation Culture
Reuben John: And maybe going back to the broader point of democratizing the experimentation culture, right? Of course, we can always start something, but sustaining momentum is usually difficult, and clearly Flip has cracked that: you have every team, every person thinking about this and doing it day in and day out. I think what a lot of our audience would be curious to understand is, what does that operating model look like, or are there any rituals that help scale this within Flip’s product culture, where continuous experimentation is a core tenet of how you view every product release?
Sourabh Gupta: Sure. So when I started at Flip, back in 2021, we were still a small, early Series A company. We didn’t do a lot of experimentation; I think that wasn’t part of the culture. We didn’t have proper tooling for instrumentation, for analytics, for A/B testing. Nothing of that sort was in place. So I started from there, setting up the right foundations, getting the tools in place, getting the data in place.
But what I realized from my past experience was that just because I have the tooling, a framework, a process, and a template for how we should write experimentation docs, things will not change automatically. Change always needs to be driven by someone; that’s my learning. It doesn’t organically happen just because you’ve sent out an email saying, folks, from tomorrow this is what we will follow, and everyone just starts adopting it, right?
So I was very conscious of that: I would have to be the one who shepherds the team for the first three to six months to change this behavior. And then once I had some of these individual leaders on board, they could do the same with their teams and others as well.
So the strategy I used was to intentionally nudge people to experiment. Whenever we were discussing a particular topic and the question of options came up, should we go with option A or option B, which invariably happens in product discussions, a lot of the time I could have used my judgment or instinct to say we should take option A instead of B, because that’s where product sense comes in. But I would intentionally ask the teams: why don’t we test it out?
And that is what I think slowly became a culture, where we were not taking decisions on gut feel or personal opinions, even if we had solid judgment, right? And I’ll come to why judgment is important. We were intentionally making everyone default to an experiment, and that is how it became a behavior: everyone now knew that, okay, we need to take a decision, so let’s run an experiment and then figure it out.
Now, this sounds great, but I’ll tell you what the problem with it was, which also happened at Flip. Once this was done, what I started observing was that teams started to over-experiment. Not a lot of companies will admit that, and you won’t hear it from many people, but I could clearly see we were over-experimenting, because teams were defaulting to an experiment even where they needed to make a judgment call: to avoid debates, to avoid the he-said-she-said, they were just saying, okay, let’s do an experiment.
And you don’t want that long term, because you want your product team to build product sense and instinct. You want them to make judgment calls: okay, I have a very strong conviction on what will work. Because experiments are costly; they are not free. If I need to take a decision today, I can make a judgment call and move ahead with it. But if I need to experiment, I have to configure it, wait for rollout, wait for it to become statistically significant, and if it isn’t significant, change other parameters and go again, right? So it keeps going. Experiments are never free. They’re costly.
So that’s why it’s very important for teams to be conscious about what they’re experimenting with. That’s the balance we need to strike. One, I talked about how we need to guide the team to build this culture, and second, at the same time, we need to balance it so that teams don’t end up over-experimenting.
Reuben John: Right, and let me double click on that, because it’s such a classic problem that you just stated, right? Like, when do I go with gut feel on, let’s say, something incremental, where I know that if I add a particular feature it’s an incremental improvement, so why should I test it? And of course there’s a counterargument to that too, because there might be secondary or tertiary metrics that see an adverse impact. In this balance of things, what do you think has helped the most to regulate that? Is it driving it as part of a culture, as part of a process? Is it the tool that has helped? How do you strike that balance?
Sourabh Gupta: I think this is where it has to be a bit abstract and more of a culture rather than, you know, a document or an SOP, right? Because people are different. If I have two or three lead or group PMs right now, how each of them operates will be different, and the conviction they have for their product area will be different. So it becomes hard, and it’s something you cannot really put down as a process. It is more mentorship; it’s grooming people and telling them why this adds value.
Because, as I said, the goal is not to be prescriptive over the long term. The goal is to have people think the way you are thinking, but it has to be intrinsic for them. That is why the approach, at least the one that has worked for me, is to groom people to think differently so that they become the leaders of tomorrow, with strong product instincts of their own, balanced with an experimentation culture.
And one of the things that helps is asking: what is the cost of failure? What’s the cost of your judgment being wrong? If the cost is minimal, then we’d rather prioritize speed, where you take a quick decision and move, rather than waiting for the results of an experiment.
Reuben John: Got it. Got it. Economics 101: trade-offs are everywhere, aren’t they?
Empowering Cross-Functional Teams
Reuben John: So now let’s move on to the next part. We’ve been speaking about product folks making these judgment calls, right? But of course, you lead product, design, and data teams, and in general, product management sits at that confluence of, let’s say, every team in at least a tech-based organization, right? In that Venn diagram, we’re that middle person that everyone sort of puts—
Sourabh Gupta: Blamed with everything, right?
Reuben John: Blamed with everything. Something goes well, it’s the team; something goes bad, it’s the product manager. On a lighter note, not subscribing to that. Anyhow, coming to my point here: there are other teams, of course, that are part of this decision-making process, that are part of taking some of these calls. And each of these teams, design, data, even engineering and compliance, has different incentives and ways of thinking. Now, you lead three of these teams, but like you said, you also influence almost every other team in the organization. What do you think can be done to empower these other team members and their leaders to make experimentation decisions, while still aligning to that broad strategy that you set out for the product and the company?
Sourabh Gupta: You are right. In general, I haven’t found that the problem is with the approach; it’s not really about experimentation. It’s about having a shared goal. The problem with cross-functional teams is, as you mentioned, that their interpretation of a goal, or their incentives around it, might differ. So what is most important is a very clear understanding of the shared goal, while the activities each team does to achieve that goal might be different. It all needs to translate to the same result or outcome, right? That is what’s most important: you need to align on the problem space, not the solution space.
Because, you know, you can have autonomy, but without alignment it leads to chaos. And if you have alignment without autonomy, it leads to bureaucracy, right? So what we do is define goals more top-down in terms of what we want to achieve, but how we achieve those goals is for the teams to decide, through the bets they want to take. And this is mostly a cross-team initiative, for example product, engineering, data, and design; we have a cross-functional group, or a PD group as we call it.
Now, to achieve a particular outcome, the inputs from a PM versus a designer versus someone from data could be different. Those become their key results. But all of that combined helps move the overall metric we want to move. That is how we operate.
And every quarter when we kick off, after our planning, teams come and present their bets to us. The goal of the leadership is not to question the tactics but to pressure-test the assumptions: why do they think something will or won’t work? In general, our way of working in leadership is that we all have strong opinions, but those are loosely held. We’ll go with the team’s decision if they’re able to convince us why a particular bet makes sense and why others don’t. That’s how we operate. I’m not saying it’s perfect, but I think we at least fundamentally understand that the crux of cross-functional collaboration is the understanding of the shared goal.
And shared celebration is important. To your point about things going right or wrong, a lot of the time what also happens, depending on how mature the product org is, is that if a product is successful, the attention goes to the team that built it, right? But let’s say someone from the operations team worked really hard to make the product successful, managing a lot of customer complaints; that person never gets the limelight, right? So celebrating wins collectively and recognizing everyone’s effort is extremely important for cross-team collaboration to be successful.
Reuben John: Fantastic. Right. I think one key point that stood out to me is you have different KRAs, but you need to have that common metric, right? Many of us, or most companies, do structure this out, but how does that actually translate on the ground? I like that. And then lastly, celebrating those wins, absolutely. I think that does seem to be, sometimes, the sort of dissonance that happens, where folks do feel, Hey, okay, what am I even doing? Right.
And lastly, one point I really liked, for folks, maybe product managers specifically, listening to this. I think it's words to live by: having strong opinions, but, you know, sort of loosely enforcing them. Right? Like, sorry, do you mind just sort of giving me that quote again? 'Cause I think that's a—
Sourabh Gupta: Having strong opinions but loosely held.
Reuben John: Yeah. There you go.
Sourabh Gupta: I mean, because you see, a lot of times, you know, some exec will just stick to a point for the sake of sticking, right? Even when the data says otherwise, and that's not the kind of leadership, you know, we want to have. In fact, whenever I'm interviewing people and they're trying to understand the culture, my key point to them is always that I'll be more than happy if you come and tell me that I'm wrong on certain things. 'Cause that will tell me that you have thought through that particular problem statement, right? Like, I have no issues being proven wrong, because that tells me that the people I have in the team are solid. They are able to think beyond what I'm able to think, right? And that's one of the traits of good leadership, I think, that we should have in different organizations.
Reuben John: Absolutely. And a prerequisite to any success that you could possibly even imagine in experimentation. Right? So absolutely, absolutely makes sense. Awesome.
Learning from Product Failures
Reuben John: So now let's talk about failures, right? That's part and parcel of what a product manager goes through. If you haven't seen a failure as a product manager, then, you know, are you even a product manager? Right? And so when these experiments, or more critically, let's say, product launches, don't go as planned, or they fail, what have these moments really taught you as a product leader, and how do you leverage those learnings to make progress?
Sourabh Gupta: Okay, so Reuben, from my experience, what I have learned is, you can only know if a product launch is going to be successful once you have taken it out to the market, to the public. Before that, whatever you are doing is just trying to reduce the possibility of failure. You are doing your research, user design, you're doing usability tests, you are running A/B tests, you are doing controlled rollouts, you are doing detailed analysis, competitor analysis. All of that will just reduce the possibility of a failure. But nothing will guarantee a hundred percent whether your product is going to work or not until you hit the market.
And the reason for that is there are any number of things which can go wrong. Okay? You look at a competitor and you’ll say that, okay, the product works for that competitor. Let me do the same thing. But when you launch, it’s not working. Why? Maybe your brand is not positioned in the right way. Maybe people don’t look at you like they look at the competitor. Maybe how you did your GTM or communication did not resonate with the user. So there are so many different reasons for a product to not work.
So that is why, when you talk about failures or product launches, I look at it as a start. Like, launch for me is a start. It's not the end. Because the outcome is not launch, the outcome is to hit a certain number, right? So when you launch the product, it's just the beginning. And it's very rare that you launch a product and it automatically scales. In real life, once you launch a product, you really need to shepherd it, cradle it, and, you know, iterate over it for an extended period of time for it to start seeing success.
So that is why I think experiments or launch failures are just telling us about different hypotheses on what works and what doesn't work. It's a lesson for us. It's a learning for us, but we don't look at it really like a failure, right? I look at it literally like a learning, and we figure out, okay, what didn't work?
For example, and okay, this is a good one. So at Flip, we built a game voucher product where, you know, users could buy game vouchers. These are for your Mobile Legends, Free Fire type games. So you wanna buy a skin, a sword, or some asset, you can buy that. That's pretty big in Southeast Asia. I'm sure in India also it would be very similar. So what we did was, we have a consumer app, and we launched this product there. And from the market we knew that there was a huge demand for it. But when we launched it, we were hardly seeing any numbers.
Now obviously we started analyzing what's going on, what's happening, and superficially everything was right, right? There was no issue with the rollout. Our analysis was correct. Our research was correct. The product funnel was fine. I mean, there were no bugs there, et cetera. But after some investigation, it clicked for us: our product was built like a FinTech product, right? You come in, you are doing sign up, you are doing your KYC, and then you're landing, and then you have a lot of products.
Now, people who were buying these game vouchers, a lot of them were students. School-going kids, right? And they didn't have documents or IDs to do KYC, and for them, this FinTech product didn't seem like the right place where, you know, they would go to buy a game voucher. So we took this eureka moment and did a very basic website, which was actually built by our design team using AI tools. And we launched this product on the web, same supply of game vouchers, et cetera, but on web. And suddenly it just grew organically, right? We obviously did a bit of marketing, communication, et cetera, but it scaled very rapidly. The number of users we couldn't get in two months when it wasn't working, we got within a week.
So that was a eureka moment, right? So now, tying this back to your question, we couldn't have thought about this when we were building it, right? We followed the typical flow. You know, you did your analysis, built a PRD, UI, UX, development, everything done. So this comes from learning in the market. So that's why I think these are not failures, these are learnings. And the point is, how do you apply these learnings to achieve your end outcome?
Reuben John: Got it, got it. And so essentially you start off, understand, iterate and you improve those metrics that you set out. Right? And of course you need to have that clarity on those objectives and metrics as well. Absolutely. Makes sense. And I think, again, a big takeaway for a lot of us in this space, right? It’s you never have that winning strategy at the get go. It’s all about how you sort of address and iterate.
Now, one thing that I found very interesting here is, you know, how you went and did that RCA, that root cause analysis of sorts. So if I may double click on that: what helped you get this information in a way that was structured and quick enough for you to make that final judgment call? Would you attribute it to the tools? Would you attribute it to the process that you used? What helped you do that the easiest?
Sourabh Gupta: Sure. So in this case, we usually do a post-launch analysis as well to understand, you know, you build a product or a feature with an intent; after launch, is it meeting that intent or not? Right? So that's the fundamental of it. So we did that analysis. Obviously it took us a while, right? Firstly, we are seeing, okay, why is the traction not happening? We're trying to pump it up through marketing initiatives. But after one, one and a half months, that's where we said, okay, something doesn't seem to be working, because we have put in efforts from our side in terms of communication and GTM, and it's still not working.
So we started investigating after about one and a half months from launch. And what we did was, my guidance to the team was, let's talk to people, right? That's the fastest way to get a response. So we reached out to some of the users who were already using us, we talked to some members of our team who were avid gamers, and we tried to recruit people from other competitors, right? And that's how the insights started coming in about how we were distributing it.
And it didn't take us a very long time, right? Six, seven people we spoke to, and we started seeing a pattern of the same response, right? And that's where you know, okay, your hypothesis is correct. So generally in research, five is considered the number where you start seeing a repetitive pattern, right? So when we saw that pattern, we knew, okay, this seems to be the key problem. And we then took corrective action on it. It would've taken us a week to 10 days to get it out. And then we just went back to the drawing board to iterate over it.
Reuben John: Got it. Essentially just good old school product management, user research: get to a customer, get them on a phone call or just meet in person.
Sourabh Gupta: At least in this case, right? I mean, obviously different scenarios. Maybe you can get data through experiments as well, but in this scenario it was, you know, your old school talking to your users and understanding what’s happening.
Reuben John: Understood. Understood. Awesome. No, thanks for sharing that, Sourabh.
AI’s Impact on Product Development and Experimentation
Reuben John: Now, you mentioned your design team put up a website on their own just using AI tools, right? And it's 2026, if not 2025 already, the era of AI. Maybe for several years from here as well. But if I bring AI and experimentation together, right, how do you believe AI-assisted experimentation will change the way we look at product leadership? Right. And does this create new opportunities for product teams? Does it create more challenges? And what do you think is the guiding principle for product leaders, for product managers, when they start thinking of adopting AI within experimentation?
Sourabh Gupta: Okay, so I’ll take a step back, Reuben, on this. Instead of talking specifically on experimentation, let me talk more for AI driven product development because I think experimentation would be a subset of it. And let me cover on what we have done here, plus what I’m seeing in the market.
So today you'll see that product teams, PMs or, you know, other members of the product tribe, are already using AI, LLMs, to either write effectively and quickly, do some basic analysis, or do prototyping, vibe coding. So it's already happening at different levels, to a different extent, based on the organization's maturity. But what I see in the future is, besides your essentials for product management, which could be around strategy, communication, alignment, innovation, product sense, everything else is going to get automated.
I have a very strong conviction that your mid to low level product work will get automated. And I'll tell you what I mean by that. For all product teams, you know, a PRD is like a standard document; it has become very standard. I don't think it used to be that standard 10, 15 years ago, but now PRD is the term that is used everywhere. Like, okay, get me a PRD; the engineer will ask, get me a PRD; the designer will ask, get me a PRD. Right?
I think writing PRDs is mid to low level work because, fundamentally, you as a PM already have the thoughts in your head. You just need to put them out in a document, because people can't read your mind, but people can read a document. So they refer to this document to understand: what do you want to do? What is the problem that you're trying to solve? How do you measure it? What are the benefits? So on and so forth. Right?
What I'm trying to do at Flip, and what I'm very sure will be popular, is we are actually building an agent which is trained on your context, on your organization; you're training it on how you write in that organization, the templates, so on and so forth. And what you need to do is input certain parameters to that agent, and the agent will turn out a PRD. There are already tools for it; I think ChatPRD is one website which does that. But I think you need to build organization context for this to work effectively.
Now, this agent is very much possible, and I think we'll have it soon at Flip as well. But what I'm seeing in the future is that it's not going to be just one agent. It's going to be a bunch of agents. So imagine you as a PM start off, put in some fields and data, it creates a PRD, and that feeds another agent which is supposed to do risk assessment on your ideas. In FinTech, you know, you need to have this. That agent can be trained on your SOPs around fraud, risk, regulations. You can put all the documents from the regulators in it to understand, hey, is the idea you have compliant or not. And there is a feedback loop.
Now, the feedback loop goes back to the earlier agent, the earlier agent modifies the PRD and sends it back, then it goes, let's say, to an agent which takes up those requirements and creates a prototype, then it goes to an agent which creates a basic tech spec on how you are going to implement it. So the possibilities are immense, in my view. So this is the future towards which I think we are going, and what I think will happen is your low level work, anything that does not require your brain power, will get automated.
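[Editor's illustration] The agent chain Sourabh describes can be sketched in a few lines of Python. All names here (`draft_prd`, `risk_review`, `run_pipeline`, the stubbed compliance rules) are hypothetical stand-ins; in a real system each function would wrap an LLM agent loaded with the organization's templates and regulator documents.

```python
# Minimal sketch of a PRD-drafting agent feeding a risk-review agent,
# with the feedback loop described in the conversation. Agent calls are
# stubbed with plain functions for illustration.

def draft_prd(params, feedback=None):
    """PRD-drafting agent (stub): turns input parameters into a draft,
    folding in any mitigations requested by the risk reviewer."""
    return {"problem": params["problem"],
            "metrics": params["metrics"],
            "mitigations": list(feedback or [])}

def risk_review(prd):
    """Risk-assessment agent (stub): returns unmitigated compliance issues.
    In practice this would be trained on fraud/risk SOPs and regulations."""
    required = {"fraud-check", "kyc-flow"}   # hypothetical regulator rules
    return sorted(required - set(prd["mitigations"]))

def run_pipeline(params, max_rounds=3):
    """Loop: draft -> review -> feed issues back to the drafter."""
    feedback = []
    for _ in range(max_rounds):
        prd = draft_prd(params, feedback)
        issues = risk_review(prd)
        if not issues:          # compliant: hand off to prototype/spec agents
            return prd
        feedback.extend(issues)  # feedback loop to the earlier agent
    raise RuntimeError("could not resolve risk issues")

prd = run_pipeline({"problem": "game vouchers on web",
                    "metrics": ["weekly buyers"]})
print(prd["mitigations"])  # ['fraud-check', 'kyc-flow']
```

The same loop structure extends naturally to the prototype and tech-spec agents mentioned above: each agent consumes the previous agent's output and can push feedback upstream.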
Now, when it comes to experimentation, because of the sheer compute power that, you know, AI brings in today, where you are able to solve things or get answers very quickly, some of the things which I definitely think will happen are: your hypothesis generation will become much faster. Synthesizing data will become much faster. Pattern detection across experiments. Simulating outcomes, so if there's something risky, you can simulate it before taking it to the users; you can mock it all much more easily today than you could before. Faster failure detection, like you can understand when your experiment is going sideways.
So I think the possibilities are immense. There is no limit to it. I mean, the more you’ll think, the more you’ll figure out on what you can use AI for. And that’s what we have done at Flip as well, like my guidance to the team is look at AI to solve your day-to-day problems. If you have a particular problem, think how we can solve it. And that is where the adoption is happening because you can’t always think about product problems to solve. So people are starting by solving their individual problems.
Someone will say, hey, I have a problem. And okay, this is a real example, let me share it with you. So I had a PM, and he had this problem where he was like, I go into a meeting, I come back, there's a thread in Slack with like 90 messages, and I need to go through it one by one just to understand what the context is. He, on his own, using the n8n builder, built a Slack thread summarizer. This was done before Slack launched it as a feature. So today Slack has this feature, but this PM in my team built it to solve his own problem and democratized it within the organization. So that's the power of AI that I see is going to change things.
So to your point on product leaders, with product leadership, as I said, there are key points around strategy, communication, alignment, and product sense, judgment. Those are very important. So I don't think in the near future AI is going to replace that. So I don't think AI is going to replace product leadership, but I think AI will replace product leaders who don't use AI. So that's how I see the difference.
Reuben John: Yeah. That's interesting. And so to summarize, essentially there are two layers here, right? One is, tying it back to that point you mentioned earlier, how the team works closely with compliance and legal teams, right? How do you enable that in a more seamless fashion, where you have these agents doing that work for them, so you're not bound by human or time constraints. And then a second sub-thread over there is, you know, how do you also empower yourself to give more time to experimentation by delegating those other, easier tasks to the AI layer, right? So that's one part.
And the second aspect is, you know, the art of actually running that experiment, right? Wherein you spoke about AI generating hypotheses, AI generated summaries, failure detection, et cetera, right? And combined together, you're building the culture and you're easing the process around this, which is fascinating. Right. And so if I may just double click on that: in your current A/B testing tool, or any tool that you use really to run experiments, do you have that AI layer already embedded into the workflow?
Sourabh Gupta: So from a tool perspective, they have that capability, but we haven't seen it to be very advanced, to be honest. So at times we will just take a dump of the data and analyze it separately using other AI tools. That's what we have been doing so far. But I do sense that a lot of these analytics companies, et cetera, also understand the power of AI, and what they started off by doing was making it a paid module. But once you do that, right, you understand that it's very easy to take the data and go somewhere else and analyze it. So what companies are doing now is making AI a default part of using their product, right? So that if people see value, obviously they're gonna stick to the platform. So that's something I'm seeing in the market.
Reuben John: Right. And you know, at VWO, we have something called VWO Copilot, which has been free by default from the get go. I think we recognized how we need to put that out there, and it's integrated into everything that we do. Right.
AI-Driven Development and Experimentation Controls
Reuben John: Okay. So I'm just doing a time check here. Okay. Let me flip it around a bit. Right. So you mentioned how AI is making things faster for your team. Things that were taking time are reduced because the grunt work is done by someone else. Now, that's from the perspective of a product manager. Take that perspective from an engineering person: you know, everyone's using some sort of AI coding tool, right? Whether it's there officially or unofficially, it's happening regardless of whether you want it or not. And when you combine these two things together, essentially what's happening is your releases are happening much faster.
Sourabh Gupta: Yep.
Reuben John: Possibly 10x faster than before. And it's a good thing. It could also be a bad thing, 'cause who knows what's getting shipped out there, right? Is every word or every piece of code being reviewed? Maybe, maybe not. And so, in such a world where delivery is now, let's say, 10x of what it was, where do you see the relevance of something that allows you to control rollouts and experiment at the same time?
Sourabh Gupta: Sure. So to be honest, I mean, we use Claude, et cetera, for our engineering teams. But it's not that we are deploying things without review. As I said, the fundamental is to give superpowers to the teams so they can execute much more in a given amount of time, faster, et cetera. But your process or the guardrails that you have, be it on reviewing the PRs, merge requests, et cetera, that doesn't change, because we don't think AI is advanced enough to know that context. Right.
So far for us, what we can say is at least 20% of our delivery time has been saved, or like, we have been able to do more with AI today, but it has not had a detrimental impact on the other processes that we have. We continue to do phased rollouts and run experiments, because that is not fundamentally changing. What is changing is you are able to build something much faster, more quickly and efficiently. Right. But the process that you had before that is still there. At least from what I'm seeing, and from the other companies I have talked to, no one is fully relying on AI to just, you know, deploy and ship everything. It's right now being used as an assistant. Right. That's how everyone is putting it.
Reuben John: Right. And I think Flip also benefits from, as you mentioned, having that very, very clear process of phased rollouts, right? So you're doing gradual release management, you have a process around it, you probably have tools to help you do that. And I think that's something that maybe more maturing companies or growth stage companies will have to adopt at an earlier stage, perhaps, because you'll be shipping out faster. How do you put a guardrail around it, right? How do you put a basic feature flag around it and just have a kill switch ready? And what are the kinds of tools you can leverage to do that? Right. And if the same tools allow you to also do downstream experimentation and impact analysis, perfect. It makes sense.
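[Editor's illustration] For readers unfamiliar with the mechanics, the "feature flag plus kill switch" guardrail mentioned here can be sketched in a few lines. This is a hypothetical minimal version, not any particular vendor's API; real flagging tools layer targeting rules, audit logs, and experiment integration on top of the same idea.

```python
import hashlib

class FeatureFlag:
    """Percentage-based rollout flag with an instant kill switch."""

    def __init__(self, name, rollout_percent=0):
        self.name = name
        self.rollout_percent = rollout_percent
        self.killed = False          # the kill switch

    def is_enabled(self, user_id):
        if self.killed:
            return False
        # Stable hash of flag name + user id, so each user lands in a
        # consistent bucket (0-99) across sessions.
        digest = hashlib.sha256(f"{self.name}:{user_id}".encode()).hexdigest()
        bucket = int(digest, 16) % 100
        return bucket < self.rollout_percent

# Phased rollout: expose the new (possibly AI-assisted) release to ~10%.
flag = FeatureFlag("game-voucher-web", rollout_percent=10)
exposed = [u for u in range(1000) if flag.is_enabled(f"user-{u}")]

# Something goes sideways: flip the kill switch, no redeploy needed.
flag.killed = True
assert not flag.is_enabled("user-1")
```

The key design choice is the stable hash: a user who saw the feature yesterday sees it again today, which keeps downstream experiment analysis clean.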
Recurring Blind Spots in FinTech Experimentation
Reuben John: Okay. Now, of course, as you mentioned, you get invited to a lot of conferences, regulator round tables, right? And so in your observation, what are the typical recurring blind spots or misconceptions that you hear about experimentation and product risk? Right. Especially in the world of FinTech.
Sourabh Gupta: Yeah. I think one of the most common things that I hear is that risk or security measures are inversely proportional to your user experience, right?
Reuben John: Hmm.
Sourabh Gupta: It can be true to a certain extent, but a lot of the time you need to have this positive friction, right? And that positive friction defines an experience. Let me give an example, right? I mean, today, when you want to make payments from, let's say, your wallet, you'll have a PIN by default, okay? That PIN doesn't need to be there. I mean, fundamentally it's just there as an additional security layer. But today, if I build a product where you are doing a transfer and it doesn't have a PIN, as a user, you'll be scared to use that product. So while the additional step of entering a PIN has added friction, that friction acts as a positive reinforcement for the user to feel secure. Okay?
So that's why the notion of, you know, risk rules always being inversely proportional to the experience, in my view, doesn't hold. So that's the first misconception that a lot of folks have. People should understand that risk rules need to be in the background and not in the foreground. So you should have context based rules. It should not be one size fits all.
On experimentation as well, when it comes to regulated companies, a lot of the time people think that experiments mean you are risking users' money, right? You are exposing them to more risk. But in my view, experiments actually de-risk things, right? It's a de-risking mechanism, because if I do one big bang release after six months, it could be that there are a lot of problems. But if I break that into smaller monthly releases, a phased rollout which goes just to a certain audience, not to all of our audience, then I'll know and learn much earlier, and I'll de-risk any big issues that would happen if I did one big bang release.
So that's also, I think, something which is kind of a misconception. And that's where I feel that, you know, data led leadership is actually the most compliant form of leadership, because it removes gut-feeling errors from the equation.
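[Editor's illustration] The de-risking argument above is ultimately arithmetic. A quick back-of-envelope sketch (the user count and stage percentages are hypothetical) shows how a phased rollout caps how many users ever see a serious defect, compared with a big bang release:

```python
# Hypothetical numbers: how many users are exposed to a shipped defect?
TOTAL_USERS = 1_000_000
stages = [0.01, 0.05, 0.25, 1.00]   # phased rollout percentages

# Big bang release: the whole base is exposed the moment the defect ships.
big_bang_exposed = TOTAL_USERS

# Phased rollout: expand stage by stage, watching metrics at each stage.
# Assume the defect is caught at the 5% stage and the rollout is halted.
phased_exposed = int(TOTAL_USERS * stages[1])

print(big_bang_exposed, phased_exposed)  # 1000000 50000
```

Under these assumptions, 95% of the user base never encounters the issue, which is the "learning earlier at a smaller scale" point made in the conversation.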
Reuben John: Absolutely makes sense. Right? And so three things you called out there: one is, more risk controls don't necessarily mean a poor user experience, right? And that's true with your end user as with, I guess, internal processes, as you mentioned. It cannot be the loudest voice in the room. Right. The second part is it's not one size fits all. And the third part being, more experimentation actually de-risks, it doesn't necessarily add risk.
Sourabh Gupta: Because you’re learning earlier, right? You’re learning much sooner at a smaller scale instead of exposing a particular issue to all of your users.
Reuben John: Absolutely, absolutely makes sense. Fantastic.
Unlearning Product Instincts
Reuben John: For our final question here, and I think it's sort of a summation of a lot of points that you've mentioned, but maybe for our viewers, having it in one place would help. The question is: every product leader gets there through trial and reflection, right? You're never there on the first day, as you also mentioned. What is that product instinct that you had to unlearn in order to truly become an experimentation leader?
Sourabh Gupta: Okay, so this will touch upon something I've already shared initially as well, so the theme will be common here. But this is quite interesting because, you know, it was kind of a wow moment for me as well, because I didn't realize this was happening. So I have historically been very good with product sense. I think that's a skill I've been very strong at, from when I was an IC to now, when I'm a product leader, right? I would generally be able to take a judgment call on a decision and would get it right, honestly, a lot of the time. And that would lead to situations where I would kind of argue with my peers: hey, why do we want to experiment on a certain thing? Because, you know, I'm quite convinced that the approach I'm taking is right, and by experimenting we are delaying our time to market, and so on.
So while this skill was good from that perspective, it became a problem when I became a leader, especially when I came to Flip and was, you know, building the entire team. Because what I realized was it started delaying things, because the team was always looking for my opinion. Okay? I became the bottleneck for my team. And while my goal was to democratize decision making, because I was good at this, they would default to coming to me to ask my opinion: what do you want to do?
So that is where I had to kind of unlearn and change this habit, where even if, let's say, I had a strong opinion on something, I would let the team go ahead and experiment with it. I would say, okay, maybe this is a better idea, but yeah, why don't you experiment with it? And then let's see what happens. And I think that's true not only for me. I've talked to other product leaders, and they also have this similar problem where, you know, as a product leader, somehow you feel that you need to have all the right answers all the time. Like, if your team comes to you with something, you have to share an opinion with them.
But I don't think, you know, product leaders have a crystal ball where they can see the future, right? Product sense is great, but I think if it leads to you being the center of all decision making, then that leads to problems, because there could be some blind spots you're not aware of. It could lead to decision fatigue, and your accuracy might come down.
So that's, I think, a behavior that I had to consciously unlearn. Even if I had an instinct, my instinct was an input, okay, it wasn't a verdict. I would say, okay, here's what I think will happen, but why don't you go and test it out? So that was a learning for me.
Reuben John: Yeah. And it's such an interesting one, right, because I believe it applies at any level of being a product person, right? Like, whether you are starting off or whether you are a product leader leading an entire portfolio of products, as you are. Right? And so even for someone who's starting out, it's so important to have that collaborative spirit, not just within the product team, but also with your engineers, with your design folks, with the end number of teams that you work with, right? And that's how it scales, right? And so it's such a powerful learning that you just shared there. And maybe just to record what you mentioned earlier, which I think is like the quote for me in this session: you know, strong opinions, but loosely held. So it applies here too. Fantastic.
Rapid Fire Round
Reuben John: With that, we are, I think, through the main part of this podcast, on what we wanted to discuss and get your thoughts on. We do have an interesting segment coming up right after this, but before we go there, an icebreaker round, sort of, if you're ready. It'll be a light one, nothing related to work. What's your favorite travel destination and why?
Sourabh Gupta: Hmm, good one. Honestly, I don't think I have a favorite one. I think at different points of time, the ones you have visited more recently appeal to you more. That's what I've realized. Like, if you would ask me that question five years ago, I would've probably said Amsterdam. I lived there for over a year and I loved it. But today, if you ask me the question, probably I'll say Labuan Bajo, which is on Flores Island, towards the east of Indonesia. Because when I went there, I had a fantastic time. One of the best coral reefs I've ever seen. So I've named one place, but yeah, I'm a travel junkie. Love to travel, so each place offers its own value.
Reuben John: Yeah. Awesome. And you're bang in the middle of such a beautiful, you know, region, right? I think it's one of those few places which still have intact coral reefs, right?
Sourabh Gupta: Yeah, I mean, there are spots, obviously there are areas where, you know, over tourism has damaged it, but still, the more east you go, where there are fewer people, fewer tourists, the reefs are much more alive. And it's also the land of fire, as it's called, right? Because there are a bunch of active volcanoes all over Indonesia.
Reuben John: Oh, super cool. Have to plan this. I might just—
Sourabh Gupta: You should, you should come over sometime. It’s a good—
Reuben John: For sure. For sure. I might just take you up on that. Awesome, awesome. Thanks for sharing that. And with that we’re in our final segment, and so this is gonna be a rapid fire segment where it’s a mix of both, right? It’s work, it’s somewhat non-work, but still work related. But if you’re ready, let’s start.
Sourabh Gupta: Okay.
Reuben John: Okay. First question. If you’re starting a career in product today, what is the one thing that you would do differently?
Sourabh Gupta: I think, learn to align more effectively. 'Cause I think that was a learning which came a little later. 'Cause I started my career in product after a certain bit of experience, right? So the way of operating in product is a bit different, where, as you said in the beginning, right, it's at the intersection of everything. So your skills of alignment are extremely important in this role.
Reuben John: Okay. Got it. Super. One thing, your non-industry friends still don’t understand about your job.
Sourabh Gupta: Yeah, I mean, I think the constant question is, so what exactly do you do? Right?
Reuben John: Okay.
Sourabh Gupta: That's the general question. Like, I'm like, okay, I do this. I'm building products. I work with like 10 different teams. But they're like, but what do you own, right? What do you really do? Right? So, yeah, it's a hard one to explain. And that's exactly why product management is not a discipline which gets taught in many places, right? You'll have engineering, you'll have design related courses; you have very few actual degree courses for product management, because of the nature of it, right? So what is interesting is, you excel in this role if you are great with ambiguity, which is the absolute opposite of, you know, engineering and design, where you want to work with clarity. In product management, it's ambiguity which makes you great. So yeah.
Reuben John: Yeah. No, absolutely resonate. I have a sub-question to that. I’m just adding here. Do your parents understand what you do?
Sourabh Gupta: Oh, absolutely not.
Reuben John: Okay, so it’s the same everywhere. Great. Next question. Who’s that one person that every product professional must follow?
Sourabh Gupta: Good one. Hmm. I mean, there are a bunch of leaders from the Valley, right, who have very strong opinions, and I have followed Shreyas Doshi as an example. I think Lenny’s podcast and his newsletter are just amazing. Just before this call, I was briefly checking out his latest post, which was with, I think, the founder of Andreessen Horowitz. And they were actually talking about the same thing on AI that we are doing at Flip, that the lines are blurring between what an engineer is doing, what a designer is doing, what a PM is doing. And that, I think, is spot on. That’s how the world is changing.
So I think if there’s one person someone should follow is, I think Lenny, because the content that you’re getting, like you don’t need to go to 10 different leaders, right? He’s curating the best from the world and bringing it to you. So that would be my recommendation.
Reuben John: Got it. Awesome. Next question, three books that you would recommend to our listeners.
Sourabh Gupta: Okay. I think for someone starting in product management, Inspired by Marty Cagan is a default, right? While it might be a bit idealistic, I think as a foundation it’s a very good book, you know, talking about basic product management, how to go about it, the fundamentals, et cetera.
Sourabh Gupta: I think the second book that a PM should really read is The Lean Startup by Eric Ries. That’s where the whole concept of experimenting fast and learning fast was elaborated by him so long back, and it still holds value today. And I think the last book is Hooked, by Nir Eyal, which talks about how you should be building habit-forming products. Between these three, one is fundamental, one is on design and UX, and one is on experimentation. I think these are very good for your foundation. After that, obviously, there are many other books you can read, but for a product management foundation, these three are pretty good.
Reuben John: Yeah, thanks for sharing that. I think Marty Cagan famously said product management is about building a team of missionaries, not mercenaries. I think that’s from this book, right? I still remember that. It’s such a powerful quote. Awesome. Next question, fifth question. What’s the best piece of advice you’ve ever received?
Sourabh Gupta: Okay, so this is not typically about product. But it becomes very important for product managers, because PMs have to be, by default, great at leadership, right? That’s a quality you need to have, because when you work with so many different cross-functional stakeholders, it’s very critical. So one piece of advice that I got, which might be a bit controversial as well, but it really works, is: you don’t align in a meeting. You align before a meeting. You communicate in a meeting.
So if you’re coming to a meeting with 10 different folks and you think you’ll present your idea and everyone will just be okay with it and toe the line, that rarely happens. The fundamental is that you talk to different people before that meeting. You align them before the meeting, so that when you come and present, everyone is already bought into the idea. That was advice given to me by one of my mentors, and I think it has worked well for me.
Reuben John: Yeah, no, absolutely resonate with that. Especially for a product person. It’s just alignment everywhere. Right. So, awesome. One thing that AI will probably take over in the next three years.
Sourabh Gupta: I think PRD writing. That’s the one thing which comes to my mind, as I was explaining earlier. I think all the low-level work will get automated, and I very strongly believe that everyone is going to have some sort of assistant doing that work for them three years from now. All the grunt work will just go away. Basic analysis, writing documents, et cetera, will just be done by AI.
Reuben John: Yeah, no, absolutely makes sense. If not product management, what other profession would you have chosen?
Sourabh Gupta: I mean, I can talk about what I would’ve chosen. I don’t know if I would’ve made it to it or not, but I was really passionate about joining the armed forces. So if not product management, maybe I might be in the Army or somewhere.
Reuben John: Okay. Did not expect that one. Okay, nice. Eighth question. One product metric that you wish people would stop obsessing over.
Sourabh Gupta: MAU. Especially for fintechs and organizations like ours, we only look at transacting users. We don’t care about users who are just coming to the platform. A lot of times I’ll see people say, okay, this many people came to our platform, but if they’re coming and not doing anything, what’s the point of that? Now, “active” has different definitions, but what I’m trying to say is that, for me, the value is always more with people who come and transact on your platform, if that’s the kind of product it is. If it’s social media, et cetera, obviously the metrics are different. But for a FinTech kind of product, you need people to come and transact, right? That’s the whole purpose.
Reuben John: Yeah. I agree. I think that definition of “active” is the most debated definition, especially in a zero-to-one project. What does active mean, right? And so, yeah. Okay. Last question. A dream or goal you want to achieve in the next three years?
Sourabh Gupta: On the organization side, obviously, I want to scale up Flip to a different level. But on the personal side, I would definitely want to work in the ESG space as well, in whatever capacity I can, because that’s something which is very close to my heart. I haven’t been able to contribute much there so far, but, as you said, it’s a dream, right? So it’s a dream to be able to contribute there.
Closing
Reuben John: Awesome. That’s good to hear. Well, we are at the end of our session here, so thank you so much for this very insightful, very deep conversation, or rather discussion, that we had. It was really great to get your thoughts on some of these very pertinent topics, some of which are still debated, right? We still don’t know the answers. So it’s really great to get that perspective. The takeaway is clear: effective product leadership is less about certainty and more about clear thinking, steady execution and learning through change. So, thanks so much.
Sourabh Gupta: Thanks for having me. Yeah.
Reuben John: Yeah, no, thanks so much. And to our listeners and viewers as well, thank you for spending time with us. If this episode resonated, feel free to share it with someone on your team, and don’t forget to subscribe for more conversations like this. We’ll see you in the next episode of the VWO Podcast. Thank you so much.
Sourabh Gupta: Thank you.