Key Takeaways
- Prioritize your hypotheses using the ICE methodology (Impact, Confidence, Ease). This will help you rank your ideas and determine which ones to test first.
- If you're not confident in the data you're gathering from your tests, pause everything and ensure the integrity of your data. Don't take action on data that could be incorrect.
- Make sure you're tracking the right data elements to reach the correct conclusion. Your data must be accurate before you can move on to A/B testing.
- To onboard operational team members to an online experimentation platform, consider demonstrating the benefits and ease of online testing compared to offline methods.
- If you're unsure about how to conduct offline experiments online, don't hesitate to reach out to experts or use professional networks to gain more insights.
Summary of the session
The webinar, hosted by Vipul from VWO, featured Arash Vakil, Founder of Disrupt With Us, who shared his expertise on overcoming challenges in creating an experimentation culture within organizations. Arash, drawing on his extensive experience across B2B and B2C verticals, discussed the importance of A/B testing and the fear of being wrong, which often hinders its implementation.
He emphasized the need for a cross-functional team to generate diverse ideas and the importance of starting small to build momentum. The webinar provided valuable insights into data-driven decision-making, hypothesis generation, and the importance of testing for successful business strategies.
Webinar Video
Webinar Deck
Top questions asked by the audience
- Question (by Vipul): I just wanted to understand from you, since you've been a C-level executive for quite a long time now: what is culture really, and why is it so hard to build anyway?
  Answer: Culture is a very big thing, and there are a number of ways to answer that. It really starts from the top: how that individual leads and how you operate all play a role in your culture. Your workflows are part of your culture too. Some organizations are so design-focused that you can't put out a test that isn't a beautiful design; everything has to go through design cycles. So culture varies from organization to organization. It's super hard to build the right one for your organization, but over time, the most successful organizations figure it out.
- Question (by Darren Russo): What cross-functional teams do you recommend having in your working group, and how many is too many?
  Answer: I'd recommend starting with a small pod, a small group of people, to at least get things started. You may want a representative from marketing and a representative from engineering, because you'll be able to bounce ideas off each other quickly, in real time: How feasible is this? Is this test too much for us to take on? Can we do it? Do we have the data to support the test? So think about a representative from each of the major functions, probably about 3 to 4 people, as a kind of brain trust to get started. You could theoretically replicate that pod for the different products within your organization.
- Question (by Akash): How do you proceed with the hypothesis, and what are the key elements in the research? If I see a problem I think I need to fix and I need to establish a hypothesis, what key elements should I put in my research before I decide which hypothesis to go with?
  Answer: Thank you for your question. The question is basically: we've got a whole bunch of hypotheses, so what data do we need, and which one should we start with? That is prioritization, the next step after gathering your hypotheses, because you'll want to rank these different ideas. There are many frameworks for this; I'll use the ICE methodology here: Impact, Confidence, and Ease. Impact is how impactful you think the test is going to be, Confidence is how confident you are in its ability to move the needle for the business, and Ease is how easy it is to get the test going. Rank your hypotheses on those dimensions and you'll have an ordered list of what to attack first. I hope that answers your question.
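To make the ranking concrete, here is a minimal sketch of ICE scoring in Python. The 1-10 scale, the averaging convention, and the example hypotheses are illustrative assumptions for this recap, not something prescribed in the webinar.

```python
# Minimal, illustrative sketch of ICE prioritization (Impact, Confidence, Ease).
from dataclasses import dataclass

@dataclass
class Hypothesis:
    name: str
    impact: int      # 1-10: how impactful the test could be
    confidence: int  # 1-10: how confident you are it moves the needle
    ease: int        # 1-10: how easy it is to get the test running

    @property
    def ice_score(self) -> float:
        # One common convention is the average of the three scores;
        # some teams multiply them instead.
        return (self.impact + self.confidence + self.ease) / 3

# Hypothetical backlog of test ideas.
backlog = [
    Hypothesis("Shorter signup form", impact=8, confidence=6, ease=9),
    Hypothesis("New pricing page layout", impact=9, confidence=5, ease=4),
    Hypothesis("Copy change on CTA button", impact=5, confidence=7, ease=10),
]

# Highest ICE score first: this becomes the order in which you run tests.
for h in sorted(backlog, key=lambda h: h.ice_score, reverse=True):
    print(f"{h.name}: ICE = {h.ice_score:.1f}")
```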
- Question (by Vipul): What do you do if you're not confident in the data you are gathering from all the tests you're running, or in the insights you're gathering?
  Answer: I've certainly come across this with multiple teams. The first step is to pause everything you're doing and get that data correct, because any A/B test result you put out will be questioned and could be wrong, and you don't want to take action on data that might be incorrect. You have to ensure the integrity of your data and make sure that, for whatever you are testing, you track the data elements needed to reach the correct conclusion. You've got to get your data right first before you can move on to A/B testing.
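As one illustration of what "getting the data right" can look like in practice (this specific check is not prescribed in the webinar), a sample ratio mismatch test flags broken tracking or biased assignment before you read any results. The traffic counts and the planned 50/50 split below are hypothetical.

```python
# Sample ratio mismatch (SRM) check: if the observed traffic split deviates
# strongly from the planned allocation, pause and audit tracking before
# acting on any A/B test result.
from scipy.stats import chisquare

# Hypothetical counts of users recorded in each variation.
observed = [10_240, 9_610]              # control, variant
expected = [sum(observed) / 2] * 2      # planned 50/50 split

stat, p_value = chisquare(f_obs=observed, f_exp=expected)
if p_value < 0.001:
    print(f"Possible sample ratio mismatch (p={p_value:.4g}): "
          "pause and audit your tracking before reading results.")
else:
    print("Traffic split looks consistent with the planned allocation.")
```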
- Question (by Marcela): Oftentimes these teams are interdisciplinary and formed from different roles. Do you think the members of these teams necessarily need to share a common technical knowledge about this? And how can one go about repeating the correct way of applying a test?
  Answer: Awesome question. Absolutely not, it is not required. It goes back to what I mentioned earlier about having a diverse team from multiple groups. You want the techies and you want the non-techies, because each person comes at the same problem from a different direction. Someone who's not technical might come up with, and I'm sorry to say this, a more human method of solving the problem, perhaps through copy changes or a design element, whereas someone on the engineering side, to take a very simplistic example, might want to see if they can speed up the transitions between stages to push people through your registration process. So to answer your question, a technical background is not required when you're forming this kind of cross-functional group.
- Question (by Vipul): What would you suggest: should the experimentation team be spread across the world, or should there be a separate team for each region?
  Answer: It certainly depends on the organization. What I have seen are pods that attack problems on a cross-functional basis, as well as teams that focus just on mobile or just on the web. The reason is that a specific group may have domain expertise that is really important and insightful for developing those tests, or the historical knowledge of why things are the way they are. This goes back to the Trello board I mentioned: put things in a repository and capture as much data as you can there, because that will help you refine your hypotheses and develop better, potentially more impactful ones moving forward.
Transcription
Disclaimer: Please be aware that the content below is computer-generated, so kindly excuse any potential errors or shortcomings.