
Weaving Psychographics into Experimentation

Discover how to weave psychographics into experimentation for impactful user insights and data-driven decision-making in business.

Summary

Rubén Nesher García Cabo, Head of Experimentation at Bitso, shared insights into his role and the significance of experimentation in business. He emphasized the importance of understanding user behavior and making data-driven decisions. Rubén highlighted the necessity of building strong hypotheses, working closely with stakeholders, and fostering an experimentation culture within the organization.

He also discussed the challenges of prioritizing experiments and the need for continuous monitoring and adaptation in dynamic environments like the crypto sector. Rubén underscored the value of both qualitative and quantitative data in shaping experiments and the growing relevance of machine learning in experimental design.

Key Takeaways

  • Understanding user behavior to build effective experiments and improve the user experience
  • Creating strong hypotheses for experiments by combining qualitative and quantitative data
  • Fostering an experimentation culture within the organization to encourage innovation and risk-taking

Transcript

[00:00:00] Arjun Kunnath: Hello and welcome everyone to ConvEx ’23, the annual online summit by VWO, dedicated to all those working in growth and experimentation. 

[00:00:14] For those who might not know, VWO is a top-tier conversion rate optimization platform. It’s your one-stop solution for A/B testing, behavior analytics, and personalization, all designed to supercharge your business.

[00:00:25] I’m Arjun, a product marketer here at VWO, and I’m thrilled beyond measure to introduce our guest today. Joining us from the front lines of Bitso, we have Rubén, the mastermind leading their experimentation division. Rubén, it’s an absolute pleasure to have you with us. How are you today? 

[00:00:41] Rubén Nesher García Cabo: Thank you Arjun, and yeah, really happy to be here.

[00:00:44] All good and excited to have this conversation with you.

[00:00:49] Arjun Kunnath: Great. Awesome. So Rubén, let’s start the discussion with your day-to-day roles and responsibilities as the Head of Experimentation at Bitso. Could you walk us through that?

[00:01:00] Rubén Nesher García Cabo: Yeah, for sure. So, at Bitso, the experimentation team aims to be an innovation machine inside the company.

[00:01:12] We’re always working with different areas across the organization, including marketing, growth, and product teams, to find new ways to improve the user experience, and doing all of this in a data-driven way using experiments. So we’re working on experiments to build new products and features, to optimize flows, and to run communication campaigns. And in the end, the most important thing in all of this is understanding our users, right?

[00:01:49] So, as I was saying, alongside those experiments we’re trying to understand our users, and this is very important for building strong hypotheses. So we also work a lot with surveys, and we have a behavioral economics team that is always doing behavioral analysis, trying to understand those users and motivate those experiments.

[00:02:12] So that’s, in a nutshell, what we do on my team at Bitso.

[00:02:22] Arjun Kunnath: Awesome. Awesome. I’m super excited to ask you the next couple of questions. So, Rubén, I just wanted to understand what sparked your interest in the field of experimentation and how did you decide to make this your career focus?

[00:02:37] Rubén Nesher García Cabo: Yeah, so entering this path was more by chance. Some friends introduced me to this kind of work, and it turned out to be something super exciting for me. And after I started digging into what experimentation is and what the behavioral sciences are,

[00:03:04] yeah, I made this my job. And I really like these approaches because making data-driven decisions is super important for any organization, right? And also the importance of innovating. That’s what I like about experimentation: you’re always trying something new, trying to understand the whys and the why-nots, and in the end it’s fun.

[00:03:32] It’s playing a little bit: understanding what we can do differently, asking bold questions, making bold moves. Understanding how people think and how they make decisions is, for me, super satisfying and super interesting.

[00:03:52] And in the end, we’re always trying to build things that benefit our users, right? So I think that is really exciting.

[00:04:01] Arjun Kunnath: Absolutely. Absolutely. I think getting into users’ minds and understanding how they go about things makes a lot of sense, at least from a business point of view, right? You know what to put out there, how to communicate, and what products and features to release. Makes a lot of sense.

[00:04:19] Yeah. Yeah, and I think you’ve chosen the right profession. I see that you’ve recently moved up to the position of Head of Experimentation at Bitso.

[00:04:31] So how has this affected your viewpoint on team dynamics, corporate culture, and long term planning for CRO?

[00:04:41] Rubén Nesher García Cabo: Yeah. Okay, so when you’re trying to bring experimentation into your company, or trying to build an experimentation team and a data-driven team, one of the most important things is how you create an agenda, right? And now that I am, as you said, Head of Experimentation, that is one of my main concerns.

[00:05:09] How do we create this agenda of experiments, this roadmap of experiments, inside the company? It has different caveats and different things you need to consider in order to have a complete and successful agenda. For example, you need to be very mindful of how you work with stakeholders, right?

[00:05:34] It’s very important that everyone feels they’re part of it: part of the process, part of discovering and building these solutions, part of understanding their own users. So making everyone feel included is really important, right? Also, driving an experimentation culture is something that has advanced a lot, but I think in a lot of companies there’s still plenty of room to build that culture and to be able and willing to take risks.

[00:06:11] That’s something that is also very important. At Bitso we have a very strong experimentation culture and that willingness to take risks. In other companies it could be harder, but everywhere, I think, that willingness to take risks is going to be super important. Something else that changed my perspective is the importance of being able to prioritize and compromise, right?

[00:06:49] When you’re always thinking about experiments, there are a lot of things you want to do and a lot of things everyone wants to test, and sometimes it’s hard to do them all. So being able to prioritize and understand what the most impactful thing you could do at that moment is, I think, is super important, and also compromising on some things so you can build a strong agenda together with stakeholders.

[00:07:22] Yeah, I think that’s one of the most important things. 

[00:07:29] Arjun Kunnath: A couple of questions regarding this. One, it’s fairly clear that Bitso is super into experimentation, right? Has the experimentation culture always been like that? Second, you said we have to make compromises; among the list of ideas you want to test, you have to cherry-pick the ones you think are the most impactful.

[00:07:55] So what’s your take? How do you go about it?

[00:07:58] Rubén Nesher García Cabo: Okay. So regarding the first question, I don’t think a company is either completely experimental and data-driven or not; it’s more like a spectrum. And I think we are advancing a lot, right? There’s still room, and one or two years ago we weren’t as experimental or data-driven as we are now. And that requires a lot of effort from the data teams.

[00:08:37] It requires being able to show people the benefits of doing this, and once you start getting interesting results, the culture starts getting a boost.

[00:08:54] But we’re doing a lot of things. For example, we built a whole training for product teams so they understand why we need to run an experiment, why it’s important, why it matters, and how we can do this together. Working together on this and, as I mentioned before, making them feel part of the process is going to be super important for that culture to actually get a boost. And can you remind me of the second question?

[00:09:29] Arjun Kunnath: Yeah. So regarding the compromises that you have to make, given that you are an experimenter and you have tons of ideas, how do you decide what to go after?

[00:09:39] Rubén Nesher García Cabo: Okay. So that’s a tricky one. First, you can have estimated impacts on the initiatives that you have, right? And that estimated impact depends on the importance of the product or the flow that you’re experimenting with. 

[00:10:00] Maybe you have two experiments and one is focused on the most important product in your company and the other one is just a flow, like it’s important, but it’s not like the core of the business, right? So maybe you should focus on the one that is core for your business. 

[00:10:19] Also, you can have previous experiments that can tell you, okay, we know that these kinds of initiatives work well with our users, and we have tried these other things in the past and haven’t seen great results from them.

[00:10:35] So maybe we should focus on the ones we’re more certain normally work, right? That’s the kind of thing we try to look at to see which one we should prioritize, and the other important factor is effort.

[00:10:51] Because maybe you have this super impactful experiment, but it will require half a year to make it happen. And if you have something that you can ship in two weeks, maybe that’s the way to go.

[00:11:05] So it’s understanding the trade-off between how complex an experiment is and the expected benefit it has, depending on what you have learned in previous experiments and on the importance of that population or that part of the business.

[00:11:23] Or, for example, you can have benchmarking or a literature review, right? That can also guide you toward what can have more impact, because maybe you have seen that a particular experiment has worked in other places, or maybe you have seen a paper that shows these results.

[00:11:45] So those hints about the potential impact are, I think, super important for making that prioritization. Because in the end, you won’t know the impact until you actually experiment, right? That’s the magic of it.

[00:12:05] Arjun Kunnath: Right. Absolutely. I think this is somewhere along the lines of an ICE framework, right? If I’m not wrong, that’s what you’d ideally follow with your experiments.

[00:12:15] Rubén Nesher García Cabo: Yeah. 
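
For a concrete picture of the ICE-style prioritization mentioned above, here is a minimal, hypothetical sketch in Python. The experiment names and scores are invented purely for illustration and do not describe Bitso’s actual scoring.

```python
# Hedged sketch of ICE-style prioritization: Impact x Confidence / Effort.
# All names and numbers below are hypothetical.

experiments = [
    # (name, impact 1-10, confidence 1-10, effort 1-10)
    ("Reorder selection flow",   8, 7, 3),
    ("Change default action",    6, 8, 2),
    ("New onboarding game",      9, 4, 8),
    ("Reword CTA on buy screen", 4, 6, 1),
]

def ice_score(impact, confidence, effort):
    """Higher impact and confidence raise the score; higher effort lowers it."""
    return impact * confidence / effort

ranked = sorted(experiments, key=lambda e: ice_score(*e[1:]), reverse=True)
for name, impact, confidence, effort in ranked:
    print(f"{name:26s} ICE = {ice_score(impact, confidence, effort):5.1f}")
```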

[00:12:15] Arjun Kunnath: Great. Great. Nice. Interesting. So now that you’re leading a team specifically designated to run experiments, could you share your strategy for task allocation, delegation, and ensuring the CROs around you achieve what they’re supposed to achieve?

[00:12:37] How do you go about that? 

[00:12:39] Rubén Nesher García Cabo: Yeah, okay. So it’s actually related to creating this agenda, right? Inside my team we try to do continuous planning. Each quarter, for example, we try to plan which projects we’re going to focus on. And, as I mentioned, it’s something we work on together with stakeholders.

[00:13:08] So we define what their priorities are and what experiments we can do, and we try to create a roadmap together. And here it’s very important that we take an active approach in this planning.

[00:13:23] Sometimes an experimentation team can fall into being passive or reactive to what’s happening in the company, right? And that can lead you to focus on initiatives that don’t have a big impact.

[00:13:40] So you want to focus on things that are actually going to have an impact on your organization and your user experience, right?

[00:13:47] So this active approach, working together with stakeholders, is super important. And once we have this, inside the team we try to allocate tasks in a way that, first of all, takes advantage of people’s comparative skills. For example, maybe you have some experiments that are more financial in nature.

[00:14:20] So maybe we take someone who is more knowledgeable about financial things; or if it’s something more behavioral, we try to give it to someone who has more expertise in behavioral analysis. We also sometimes delegate initiatives by team.

[00:14:42] So maybe one person focuses on one product and another person works with another product, and that also helps a lot with this idea of working on things together, right?

[00:14:54] They feel part of the team they’re working with, and it also gives them a deeper sense of that product. Because if you’re running experiments, you need to understand the product, all its implications, and how different experiments can be correlated or how they interact. So having a person, or a group of people, focus on one product can be really good in that sense, because they have a more holistic view of what the product, marketing, or growth area is focused on.

[00:15:38] So that’s normally how we try to allocate projects. And in the end, something I think is really, really important is to always visualize the stage each experiment is in.

[00:15:52] So, for example, maybe one experiment is just in the ideation phase, another is already in development, and another is already running.

[00:16:03] So you need to understand what stage each experiment is in and allocate them to people so that everyone has a good flow of experiments. You don’t want to give someone only experiments that are already running, because then they’ll have to do a lot of evaluations all at once afterwards.

[00:16:30] And you don’t want someone to be fully in charge of a lot of experiments that are still in ideation, because you’ll end up with a bottleneck on their capacity and effort. So you want each person to have a complete flow: diagnostics, designing solutions, running the experiment, and then evaluating results.

[00:16:53] Having that flow is going to be very important, and you also want to allocate effort so that everyone has projects that are a priority as well as projects that are lower priority, well distributed across your team.

[00:17:13] So everyone is working on things that have great impact, but also has space for other projects that are not as impactful but that we still want to carry forward.

[00:17:22] Arjun Kunnath: Workflow management, I think, is super crucial for any project, and especially given the stakes that experimentation has, it’s super crucial here. Right.

[00:17:33] Interesting. Interesting. So Rubén, while going through your profile, your enthusiasm for the behavioral sciences was super clear.

[00:17:46] Could you share how you blend insights into human behavior with your experimental design? Simply put, how do you incorporate behavioral science at Bitso?

[00:17:59] Rubén Nesher García Cabo: Yeah. Okay. So, the important thing here is you want to build things that people need or that people want.

[00:18:09] You don’t want to build things just because your company wants or needs them. What I’m saying is, you’re building for a person, so you need to take that person into account. And that person has their own motivations, their own behavioral barriers, their own cognitive biases, and their own way of making decisions and behaving.

[00:18:44] So we need to take all of that into account in order to build something that actually works. One of the tenets of behavioral economics, particularly as applied to all of this, is that you don’t want to push people to use something they don’t want, right?

[00:19:06] So you want to build it in a way that they understand why it’s useful for them, and make it as easy as possible for them to use. That’s why having a behavioral approach to your experiments is super, super important. So what we do with behavioral science and behavioral economics is understand the decision-making of our users and do a deep dive into their behavior.

[00:19:40] That way we’re able to create solutions that, in the end, are adjusted to what they need. For example, in the case of flow optimization, what we do is map the flow: let’s understand where the drop-offs are, where users are falling out of the flow, and why they are not able to complete it.

[00:20:07] You try to put yourself in the shoes of the user and understand the reasons behind them dropping out of the flow. There could be a lot of behavioral barriers there. Maybe the user feels choice overload, right?

[00:20:28] Or maybe the user is ego-depleted by the end of your flow. There could be a lot of different things that, in the end, are affecting the decision-making process. Or it could be about motivation: maybe your flow is super neat, but they don’t feel motivated to complete it.

[00:20:49] So we do that analysis to understand the motivations and the cognitive barriers a user is facing before changing a flow. That’s the analysis we do to understand the decision-making process a bit more and why users take the actions they take.

[00:21:13] We try to understand that from the perspective of the user, so we’re able to build experiments that address those needs. I don’t know if I’m explaining myself clearly.

[00:21:24] Arjun Kunnath: Absolutely, absolutely. I actually have a follow-up question. With behavioral science, I just want to understand your approach to this research. I’m sure there are tons of data available at your disposal, right? How do you go about dissecting and analyzing all this data pouring in from your web and mobile applications to understand customer behavior and decide what kind of tests to run? If you could throw some light on your approach.

[00:21:57] Rubén Nesher García Cabo: Yeah. Okay. In the end, we try to decide everything that we’re building through an experiment. That’s what tells you, in the end, what works and what doesn’t, and the impact you can expect from that change.

[00:22:16] But to motivate those experiments, we do a lot of research using data that comes from our web page or from our app, right? And, as I told you in the previous question, we use a lot of that data to understand the flows. You want to understand, at each step of the way, where users are falling out.

[00:22:44] For example, you want to understand the drop rates at each step of the flow. You may also want to see how much time users are spending on each step, whether they’re going back, or how they interact with each screen. You also want to understand whether there are patterns in your user behavior.

[00:23:11] At Bitso, for example, we can gather a lot of information on what users are doing in the app: which products they’re using, which assets they’re buying. And you can start seeing whether there are patterns depending on different user characteristics.

[00:23:32] For example, maybe we see a relationship between how long users have been with Bitso and the number of products they have.

[00:23:44] Or maybe we start seeing that users who open the app more tend to buy more assets. I’m giving not-necessarily-real examples, but it’s about understanding those patterns. And you can have a lot of user characteristics, right? You can see, for example, how long they have been with your company.

[00:24:06] You can see their age. At Bitso, for example, we can see the country; maybe you can see different areas across a state or across a country. So you have a lot of different characteristics, and you can start seeing whether there are different patterns in what users are doing with your product. In the end, there are different business analytics you can do to understand that user behavior, and that gives you hints about what is happening.
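
As an illustration of the step-by-step drop-rate analysis Rubén describes, here is a minimal pandas sketch. The event log, step names, and flow are hypothetical assumptions, not Bitso’s actual data or pipeline.

```python
import pandas as pd

# Hypothetical event log: one row per step each user completed in a flow.
events = pd.DataFrame(
    [(1, "start"), (1, "select_goal"), (1, "confirm"),
     (2, "start"), (2, "select_goal"),
     (3, "start"),
     (4, "start"), (4, "select_goal"), (4, "confirm")],
    columns=["user_id", "step"],
)
step_order = ["start", "select_goal", "confirm"]

# Unique users reaching each step, in flow order.
reached = (events.groupby("step")["user_id"].nunique()
                 .reindex(step_order, fill_value=0))

funnel = pd.DataFrame({"users": reached})
funnel["pct_of_start"] = funnel["users"] / funnel["users"].iloc[0]
funnel["drop_from_prev"] = 1 - funnel["users"] / funnel["users"].shift(1)
print(funnel)
```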

[00:24:47] But you can also have more complex analytics or more complex models. At Bitso, for example, we have a machine learning team, and sometimes the models they are building also give a lot of information about the users. With just a simple regression analysis, you can start understanding what makes the probability of using a product increase.
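
The "simple regression" idea above can be sketched with a logistic regression on synthetic data. The user features, coefficients, and data below are hypothetical and only meant to show the shape of such an analysis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical user features: tenure in months, app opens last week, age.
tenure    = rng.integers(0, 48, n)
app_opens = rng.poisson(3, n)
age       = rng.integers(18, 70, n)
X = np.column_stack([tenure, app_opens, age])

# Synthetic "uses product" label: longer tenure and more opens raise the odds.
logits = -2.0 + 0.05 * tenure + 0.3 * app_opens
uses_product = rng.random(n) < 1 / (1 + np.exp(-logits))

model = LogisticRegression(max_iter=1000).fit(X, uses_product)
for name, coef in zip(["tenure", "app_opens", "age"], model.coef_[0]):
    print(f"{name:10s} coefficient: {coef:+.3f}")
```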

[00:25:18] And those things can shed a lot of light on what you can do with an experiment. But you also have qualitative data. Here we’re talking, for example, about surveys, or you can do interviews, or maybe you have a literature review.

[00:25:44] Maybe you see that a particular paper, or a particular company, made this move, ran this experiment, and got these results, and that can motivate the kind of experiments you want to do. And something that gives a lot of data for future experiments, in the end, is past experiments.

[00:26:06] What do I mean by this? When you’re running an A/B test, even if it doesn’t work, it sheds a lot of light on what you can expect from user behavior. Or if you build a new experiment, understanding the decisions users make inside, for example, a new screen can also tell you what you should do next as a future experiment.

[00:26:37] And you can also see patterns inside an experiment. You can say, okay, I know this experiment has this impact, but do I see the same impact, for example, for users who are really young versus users who are over 40? Am I seeing this impact for users who have these products, or is it different for users who have these other products? Or maybe you see that your experiment has more impact on users who are really new to your company and just starting to explore. Those kinds of patterns that you find in past experiments can also shed a lot of light on future ones.

[00:27:28] Arjun Kunnath: Right. So between qualitative and quantitative, I’m sure you’re analyzing both. Is the focus more on one than the other? And how do you generally collate these ideas from different sources? How do you come up with a pattern? What’s the process there?

[00:27:50] Rubén Nesher García Cabo: I think both of them are really important. We try to focus more on quantitative data than qualitative data, but I think they give you different aspects of the same thing, and you want to see patterns in both. In the end, you can rank your insights by how reliable they are.

[00:28:21] And the most reliable insight you can have is probably quantitative. In surveys and interviews you can dig a little deeper, but in the end you’re limited by the number of people you can ask, whereas behavioral data you probably have for all your users.

[00:28:41] And maybe you only get answers from a particular segment. So we try to rely a bit more on quantitative data, but qualitative data is also really valuable to understand what you can experiment on later, and it gives you hints. The strongest insight you can have, in the end, is when the two of them point in the same direction.

[00:29:15] Maybe you find some qualitative data that points you in one direction, and then you also see the same pattern, or the same hint, in the quantitative data. That’s a super strong signal about where you should go.

[00:29:33] So you should try to see whether they match; whether what you see in the qualitative data is also happening in the quantitative data you have. Having those two together can give you super strong hints. But normally we lean more on quantitative, and qualitative is more about going a little further to understand motivations: What are users feeling at that moment? What are they expecting? What was their reaction? Those are things you cannot see in the data but are also really important.

[00:30:20] Then you go to the data and, considering that they were feeling this and that this was the motivation, ask: What are we seeing in the data? Were they able to complete the flow in the end? Did they drop off? What happened? So I think the interaction of these two is super important.

[00:30:38] I hope I answered your question. I don’t know if there was something else. 

[00:30:42] Arjun Kunnath: That’s about it. So basically what you’re trying to say is qualitative data is used to reinforce your findings from your quantitative data just to ensure that you’re moving in the right direction.

[00:30:54] Would I be wrong if I said that? 

[00:30:58] Rubén Nesher García Cabo: Yeah, but it also gives you different things that you cannot see in something quantitative. So it’s more of a complementary thing. More than just reinforcing the quantitative, I think they complement each other, and you can be more sure of what you’re doing if both of them point in the same direction.

[00:31:31] Does that make sense? 

[00:31:32] Arjun Kunnath: Yes. Yes, it does. It does. Awesome. 

[00:31:36] Now Rubén, could you give us an inside look at the experimental frameworks or processes that you’ve implemented at Bitso? Any standout successes that had a positive impact, or any setbacks? We would love to know how you turned them into learning opportunities.

[00:31:59] Rubén Nesher García Cabo: Yeah, okay. In the process for running experiments, and I think every time we do an experiment this is the process we should follow, the first thing is creating a good diagnostic, right? And that is related to what we discussed in the previous questions.

[00:32:27] It’s about digging a little deeper into the data, and as we said, you can get that from different sources. You need a good diagnostic to understand what is happening, what problem you’re trying to solve, and what you’re seeing in the data. As we mentioned, that can come from a model or from patterns you’re seeing in the data.

[00:32:54] It can come from an interview or from a literature review; all of those things help you create a diagnostic to understand what is happening, what the problem is, and what we could do. And in the end, that diagnostic should be translated into a hypothesis.

[00:33:12] Scientific method 101: creating a very strong hypothesis is super important. It may sound basic, but it’s super important to make it clear, because when you’re working in a company it’s very easy to just jump directly to building a solution.

[00:33:36] We say, “Oh, I saw that this company is doing this thing and I want to do it too,” and we jump directly to the solution and start working on it. But it’s not backed by any diagnostic or any data, and you don’t have a strong hypothesis about what you’re trying to answer. In the end, the hypothesis is what tells you what you’re testing with your experiments.

[00:34:05] So being able to build that hypothesis is super important. You do the diagnostics, you define a hypothesis, and then you start designing solutions and the experiment itself. Here you could go from a simple A/B test to multiple treatments and variants to test different approaches and different hypotheses. We could talk a lot about different possible experimental designs, but you use the one that lets you answer the hypotheses you have.

[00:34:48] Then you go ahead with running the experiment and evaluating your results. And once you have the final output and the final results, this is where it gets very important. You ask: is this a good result? First of all, is it statistically significant?

[00:35:16] And on the other hand, is it economically significant? You want to know that the impact you’re seeing is not just noise, but you also want to see that it’s economically meaningful, and that the costs you’re incurring with the experiment are not higher than the impact you’re getting. And if it is good, then you also need to start thinking:

[00:35:41] Is this something I want to iterate on to make it better? Is it something I should personalize a little? For example, if we see different results for different people, maybe you want to personalize or target your intervention a bit more.
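
To make the statistical-versus-economic significance check concrete, here is a hedged sketch: a two-proportion z-test on conversion rates plus a back-of-the-envelope value-versus-cost comparison. All numbers are hypothetical; the talk does not detail Bitso’s actual evaluation methodology.

```python
import math
from statistics import NormalDist

# Hypothetical A/B test results.
n_control, conv_control = 20_000, 1_000   # 5.00% conversion
n_variant, conv_variant = 20_000, 1_150   # 5.75% conversion

p_c, p_v = conv_control / n_control, conv_variant / n_variant
p_pool = (conv_control + conv_variant) / (n_control + n_variant)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_control + 1 / n_variant))
z = (p_v - p_c) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))   # two-sided test

# Economic check: is the extra value worth more than the experiment's cost?
value_per_conversion = 12.0      # hypothetical
monthly_traffic = 200_000        # hypothetical
build_and_run_cost = 8_000.0     # hypothetical

extra_conversions = (p_v - p_c) * monthly_traffic
monthly_uplift_value = extra_conversions * value_per_conversion

print(f"lift = {p_v - p_c:.4f}, z = {z:.2f}, p-value = {p_value:.4f}")
print(f"estimated monthly uplift: ${monthly_uplift_value:,.0f} vs cost ${build_and_run_cost:,.0f}")
```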

[00:35:59] To give some examples of experiments we have done following this whole process: in improving flows, while doing diagnostics on a particular flow in Bitso, we found that a particular place in the app was not working properly.

[00:36:22] A lot of users were dropping out of the flow, and what we noticed is that first you were choosing a method to do an action, and then you were deciding the action you wanted to do. It didn’t make a lot of sense, because there were a lot of options.

[00:36:57] So what we did in that part of the flow was simply switch the order in which users selected things, and that allowed us to group the options. Now, when the user went to that screen, they saw fewer options, and those options were clearer about their final goal.

[00:37:24] Because if they selected the method first, it was not clear what the goal or the action was. By putting the goal first, we were able to group the different methods, reduce the options the user was facing, and get much closer to their mental model of how these things work.

[00:37:46] With all that diagnosis, we created the hypothesis that by making this change we would have an impact, and the flow improved by, I think, 20-30%, just from that change. That was a super successful experiment.

[00:38:12] Another example is changing defaults, if you do a correct diagnosis and build a strong hypothesis. For example, we started seeing that for a lot of users, the action they wanted to take was not the default we were showing them. Changing the default led to much, much better conversion in that flow. And talking about this process, as I mentioned, once you have run an experiment it doesn’t end there.

[00:38:49] When you have the result of an experiment, it doesn’t end there, because there are probably some targeting efforts you can make that could be really impactful.

[00:38:59] For example, we have a campaign where we send price notifications to users. Depending on the changes we see in the price of a particular asset, we send them messages so they’re notified of any important movements.

[00:39:21] But some users are more sensitive to these than others, right? So targeting those notifications let us have much more impact with fewer users, sending messages only to the users who actually find this feature useful. And I could talk about a lot of different examples of things we’re doing.

[00:39:47] For example, we have built different games for users to feel more engaged and discover new things in the app. As I mentioned, changing defaults, and even small things: even the wording of a particular button can be really impactful if it is not well written, because words can carry a negative feeling.

[00:40:16] Or words can imply a different mental model from what you are expecting. So testing small things like the wording of a particular CTA can also be really impactful. And I don’t know if I answered everything; you mentioned something about losses.

[00:40:41] Arjun Kunnath: Yeah. While I’m sure there are a lot of learnings from all the successes, there could also be one or two, or a few, losers among all the campaigns that you run. So how do you generally turn them into learning opportunities?

[00:41:01] Rubén Nesher García Cabo: Okay. Yeah. I really like that question because when you’re doing experiments you must be willing to fail. 

[00:41:24] In experimentation in particular, failures are sometimes more insightful than winners, because they can tell you a lot about your user behavior.

[00:41:39] For example, if you run an experiment thinking that your users have a certain behavior, that they are expecting something in particular, and you test it and find that it is not the case, that is very conclusive.

[00:42:01] First of all, that tells you these initiatives are not for our users, so let’s stop putting effort into this. And it’s better to have that failure once, in a simple experiment, than to keep iterating and doing a lot of work on the same thing, only to find out months and months afterwards that it didn’t work.

[00:42:26] So those failures give you a lot of insight into what your users are expecting and what they are sensitive to. Maybe your users are expecting monetary rewards, or maybe they are just looking for educational features, or maybe they want to feel more engaged with your company.

[00:42:55] And it depends on each company, and on each product inside a company. So being able to say, “this is not a path that I want to take,” is super important.

[00:43:10] So, the quick answer: yes, we have had losses in the past with our experiments, but that’s part of it. And in the end, the losses are the things that make you learn quicker, because they tell you, okay, this is not a path to go down, so let’s focus on something else.

[00:43:30] Arjun Kunnath: Makes sense. Makes a lot of sense. Rubén, crypto is famously dynamic. How does that influence your experimental approach?

[00:43:42] How do you keep pace with user feedback and convert it into significant experiments within the crypto domain?

[00:43:55] Rubén Nesher García Cabo: So this has very important implications for experiment design. The first one is, you need to have longer experiments, right? If you only experiment for, I don’t know, three days, maybe you have the problem that in that moment the market had a huge boom, and maybe you’re overestimating the impact. So you want to have somewhat longer experiments so that you’re able to see impacts across different movements of the market, and even across different days of the week or days of the month.

[00:44:39] Maybe, for example, you want to see that the impact of your experiment is the same on payday and after payday. It also implies a lot of continuous monitoring. For our campaigns that are always on, we’re always keeping a control group. Even if the experimental phase is over and the campaign is already always-on, being able to monitor and continue tracking the impact of your initiatives is going to be super important.

[00:45:13] Even more so if you are in these dynamic places, in these dynamic sectors. Being able to see that your initiatives are still having the same impact as in the experimental phase is going to be super important, and if something changes, you’re able to see that the initiative needs to be adapted to the new context.
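
The always-on holdout Rubén describes can be illustrated with a small monitoring sketch: compare treated users against the control group week by week and flag weeks where the lift drifts away from what the original experiment measured. The data, column names, and thresholds below are hypothetical assumptions, not Bitso’s actual monitoring setup.

```python
import pandas as pd

# Hypothetical weekly results for an always-on campaign with a holdout group.
weekly = pd.DataFrame({
    "week":          ["W1", "W2", "W3", "W4"],
    "treated_users": [18_000, 18_500, 19_000, 18_200],
    "treated_conv":  [1_080, 1_110, 1_064, 946],
    "holdout_users": [2_000, 2_050, 2_100, 2_020],
    "holdout_conv":  [100, 104, 107, 103],
})

weekly["treated_rate"] = weekly["treated_conv"] / weekly["treated_users"]
weekly["holdout_rate"] = weekly["holdout_conv"] / weekly["holdout_users"]
weekly["lift"] = weekly["treated_rate"] - weekly["holdout_rate"]

expected_lift = 0.010   # lift measured in the original experiment (hypothetical)
tolerance = 0.004       # alert threshold (hypothetical)
weekly["needs_review"] = (weekly["lift"] - expected_lift).abs() > tolerance

print(weekly[["week", "treated_rate", "holdout_rate", "lift", "needs_review"]])
```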

[00:45:41] It’s also going to be very important to always be challenging things. We are not a company that is completely sold on one solution, saying “this is our solution and we’re not going to change it.” We’re always trying to iterate, always trying to challenge what we already have.

[00:46:03] And always testing it. Sometimes people come to me and say, “I was thinking that this thing we have created is not the best approach; I think we should go with this other approach,” and I’m like, okay, let’s test it, right? That is part of the experimentation culture.

[00:46:26] Not being married to a solution, and always challenging. In dynamic environments in particular, I think that is super, super important.

[00:46:39] So, wrapping up, I think it’s having longer experiments, continuous monitoring and always challenging what you have. 

[00:46:51] Always keeping an eye on the flows and results of an experiment is, I think, super crucial. But one more thing that I think is super important: you cannot be extremely reactive to the environment, particularly if you are in a dynamic setup like the crypto world, as you mentioned.

[00:47:20] You cannot be super reactive, like, “okay, we have this market right now, so let’s build solutions for that market,” and then the market changes, “so let’s build this particular feature for this moment in the market.” We’re trying to focus on the fundamentals, and on solutions, features, innovations, and communication campaigns that, in the end, work regardless of the market, right?

[00:47:51] I don’t know if I’m explaining myself, but you don’t want to be reactive to what is happening in the market. You want to create solutions that still work whether the market is up or down. That is our goal, and I think other companies in similarly dynamic sectors should also try to focus on those fundamentals, on things that are going to work regardless of what is happening outside.

[00:48:27] Arjun Kunnath: Yeah, and I think it’s very easy to be influenced by all these market dynamics when coming to conclusions.

[00:48:35] So given that, you suggested that other companies in this industry should also keep this in mind and be wary of these things, right?

[00:48:46] So how do you envision the future of experimentation in this industry? Are there any trends you’ve spotted recently, anything you could share with us?

[00:48:57] Rubén Nesher García Cabo: Like experimentation in? 

[00:49:00] Arjun Kunnath: Yeah, in the crypto space. 

[00:49:04] Rubén Nesher García Cabo: In the crypto space specifically, I’m not sure. I’m going to mention what is going on with experimentation in general, applied to every sector, not necessarily crypto.

[00:49:21] We have seen that experimentation is something that is growing in general. As we mentioned in the first questions, there are a lot of things regarding culture where we still have room to cover, but I think that is advancing in different places.

[00:49:45] But something that is becoming super important is the intersection between machine learning and causal inference. There are different methods; one, for example, is called the causal forest. These are incredible tools for targeting different initiatives.

[00:50:19] These are methods that allow you to get more detail in your causal inference and, for example, assign a causal impact to each person; that is a way of knowing how sensitive a person is to a particular experiment. That interaction of machine learning tools and causal inference is something that is getting a lot of importance in the crypto sector.

[00:50:52] But I also think that, in general, it is something that is gaining a lot of strength: being able to use these machine learning tools to target, to personalize, and to take your experiments to the next level. And in the end, I think all the technology improvements we have, and that will come in the future, are going to be super important as ways to learn faster.
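
Rubén mentions causal forests as a way to assign a causal impact to each person; libraries such as econml implement them directly. The sketch below shows the underlying per-user effect idea with a simpler two-model ("T-learner") uplift approach in plain scikit-learn on synthetic data. Everything here is an illustrative assumption, not Bitso’s actual tooling.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
n = 10_000

# Synthetic users: tenure in months and weekly app opens.
X = np.column_stack([rng.integers(0, 48, n), rng.poisson(3, n)])
treated = rng.integers(0, 2, n).astype(bool)

# Synthetic outcome: newer, more active users respond more to the treatment.
base = 0.05 + 0.002 * X[:, 1]
effect = np.clip(0.04 - 0.0008 * X[:, 0], 0, None)
y = (rng.random(n) < base + treated * effect).astype(float)

# T-learner: one outcome model per arm; per-user effect = difference in predictions.
model_t = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[treated], y[treated])
model_c = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[~treated], y[~treated])
per_user_effect = model_t.predict(X) - model_c.predict(X)

# Target the intervention at the users estimated to be most sensitive to it.
most_sensitive = np.argsort(per_user_effect)[-1_000:]
print("mean estimated effect for the top 1,000 users:",
      round(float(per_user_effect[most_sensitive].mean()), 4))
```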

[00:51:21] There are a lot of technology tools that allow you to make changes quicker, obtain results quicker, and actually iterate while you learn.

[00:51:36] There are really interesting tools that allow you to run different treatments and, through an automated process, say, okay, this is the one that is working.

[00:51:53] So let’s take that one and just use it, and the whole system keeps learning while it is testing new things. I think those technological improvements are going to be super important in the experimentation world in general.
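
The "system that learns while it tests" that Rubén describes is commonly implemented as a multi-armed bandit. Here is a minimal Thompson-sampling sketch over three hypothetical variants; the conversion rates are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
true_rates = [0.040, 0.052, 0.047]   # hypothetical conversion rate per variant
n_variants = len(true_rates)

# Beta(1, 1) prior on each variant's conversion rate.
successes = np.ones(n_variants)
failures = np.ones(n_variants)

for _ in range(20_000):                       # one loop iteration = one user
    sampled = rng.beta(successes, failures)   # sample a plausible rate per variant
    arm = int(np.argmax(sampled))             # show the variant that looks best right now
    converted = rng.random() < true_rates[arm]
    successes[arm] += converted
    failures[arm] += 1 - converted

traffic = successes + failures - 2
print("traffic per variant:", traffic.astype(int))
print("estimated rates:    ", np.round(successes / (successes + failures), 4))
```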

[00:52:16] Arjun Kunnath: Right. Interesting. Interesting. I think we are all looking forward to what comes our way. 

[00:52:24] Now, Rubén, this is one of the last questions that I have for you before we end the session. It’s a bit of a fun question. I just want to understand what’s on your reading list or watch list right now.

[00:52:36] Any books you’re devouring or any series you’re binge-watching that you’d like to recommend to our audience?

[00:52:41] Rubén Nesher García Cabo: Okay. So, books: I just finished ‘La Sombra del Viento’ by Carlos Ruiz Zafón, and that’s an amazing book. I just finished it yesterday, so that’s the one I have top of mind. I’m a novel guy, so I really liked that book, and I’m also currently reading ‘Dune’ by Frank Herbert.

[00:53:14] Those two novels, I think, are amazing. And in terms of behavioral books, I’m just about to start reading ‘Noise’ by Daniel Kahneman, and I also just finished ‘Alchemy’ by Rory Sutherland, also a really good book.

[00:53:42] Those are the ones I have top of mind right now, because I just read them or I’m about to start them. But yeah, really good books.

[00:53:51] TV series: I’m not much of a TV series guy, but I just watched one on Apple TV called ‘Severance’. If you like sci-fi as I do, it’s a very good TV series.

[00:54:10] It’s just one season so far. I think they’re working on the second one but it is really interesting and yeah really fun to watch. 

[00:54:21] Arjun Kunnath: Nice, nice, nice. I’m definitely going to try that series. Thank you. Thank you so much Rubén for the recommendations. 

[00:54:28] That brings us to the end of the discussion today.

[00:54:31] Rubén, I can’t thank you enough for taking the time to join us and share your wealth of knowledge. 

[00:54:37] Your experience and wisdom with experimentation are truly inspiring, and we’re grateful for your openness and enthusiasm.

[00:54:44] To our audience, I’m sure you’ve found this discussion as enlightening as I have and we hope you can implement some of Rubén’s wisdom in your own experimentation.

Speaker

Rubén Nesher García Cabo

Head of Experimentation & Behavioral Science, Bitso
