Key Takeaways
- Use VWO for testing: VWO is a reliable tool for testing different aspects of your website, such as messaging, layouts, navigation bars, and CTAs. It also allows for segmentation, such as distinguishing between logged-in users and new visitors, or users from different geographical locations.
- Start small with testing: Begin with a small percentage (like 20%) when testing new changes to your website. Monitor the results and if everything seems to be going well, gradually increase the percentage.
- Monitor and adjust tests: If a test seems to be failing or negatively impacting your KPIs, stop and investigate. It could be a technical issue or the idea itself might not be working. Make adjustments as necessary.
- Balance self-service and contact sales: Self-service packages require users to sign up first, but there should also be an option for users to contact sales directly if they prefer. Maintaining both paths is an ongoing balance, but offering them caters to different user preferences.
- Gradual implementation: Implement changes gradually and monitor the results. Rarely will a test drastically drop results unless there are technical issues. By doing things gradually and checking regularly, you can ensure the effectiveness of your tests.
Summary of the session
The webinar, led by Nimrod Kozol, Head of Growth Marketing at Lusha, focused on the complexities of website optimization for growth, using VWO as a testing tool. He discussed the importance of segmentation, identifying different user groups, and the balance between business needs and testing. He shared Lusha's approach of starting small with tests, gradually increasing them, and monitoring the impact on KPIs.
He also addressed the challenge of balancing different user funnels, emphasizing the need for users to sign up before accessing self-service packages. The host facilitated a Q&A session, addressing queries about testing methods, user segmentation, and balancing business needs with testing.
Top questions asked by the audience
- Can you talk about the level of complexity in Lusha's tests? Did Lusha build up its own experimentation platform internally, or did you rely on third-party testing tools like VWO? (asked by Tim)

  So, first of all, obviously, we rely on VWO. I mean, that's our testing tool for these tests, obviously; that's what we use in order to test, whether it's messaging or showing different pages to different IPs. Now, I'm not quite sure, like, where exactly you wanna go to talk about the complexity. Most of our tests, in the end, are around messaging, layouts, and showing different CTAs. Now, the complexity comes from the segmentation itself, like our ability to identify between logged-in users and new visitors, or users coming from a US IP versus, you know, a UK IP or an India IP. So those are usually the complexities.
- How did you manage the need to test different versions and be agile with it, versus the need for complex functions on your website? And essentially, again, the question is around the platform: did you use the same platform, or did you outgrow it? (asked by Ruck)

  And for the second question, about how to balance our business needs with testing, usually what we do is start small; we don't necessarily always start with the 50/50. We'll start with 20%. We'll see how that goes. We'll see that everything is in place. Now, obviously, if a test seems to really fail, this is a struggle that everybody in growth has. On the one hand, we have KPIs we have to reach, and testing is supposed to help that, but sometimes testing obviously hurts that. Perhaps we tested a different homepage that lowered our conversion by 10%. Perhaps we implemented a navigation bar that decreased conversion. So, we do it gradually. We start with 20%. We see that everything's okay. We see that things are in the right direction. And then we'll gradually increase it to 50%. Now, obviously, we'll give it a week or two. If we see after a few days that it's really bringing results down, then obviously we'll stop and we'll see; perhaps it's a technical issue, or perhaps the idea just wasn't good. But I have to say that rarely happens. Usually, we'll find that the test is just a bit below the control, or perhaps it's the same. We haven't really encountered, unless it was a technical issue, a case in which it just dropped us by 50%. But I think when you do things gradually and you check, then you'll probably be fine.
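The gradual rollout described above (start a variant at 20% of traffic, ramp to 50% only if KPIs hold) can be sketched in a few lines. This is a minimal illustration, not Lusha's or VWO's actual implementation; the function names and the 10% relative-drop guardrail are assumptions chosen for the example.

```python
import random

def assign_bucket(variant_share: float) -> str:
    """Randomly assign a visitor to 'control' or 'variant'.

    variant_share is the fraction of traffic sent to the variant
    (e.g. 0.2 at the start of a test, 0.5 after ramp-up).
    """
    return "variant" if random.random() < variant_share else "control"

def should_ramp_up(control_conv: float, variant_conv: float,
                   max_relative_drop: float = 0.10) -> bool:
    """Ramp from 20% toward 50% only if the variant hasn't dropped
    conversion by more than the guardrail (10% relative, by default)."""
    if control_conv == 0:
        return False  # no baseline signal yet; keep waiting
    relative_change = (variant_conv - control_conv) / control_conv
    return relative_change >= -max_relative_drop

# Control converts at 5.0%, variant at 4.8%: within the guardrail.
print(should_ramp_up(0.050, 0.048))  # True -> keep ramping toward 50/50
# Variant at 4.0% is a 20% relative drop: stop and investigate.
print(should_ramp_up(0.050, 0.040))  # False
```

The guardrail check mirrors the advice in the answer: a failing test usually shows up as a modest gap versus control, so the threshold should catch real regressions without killing tests over noise.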
- Does the Lusha site have a pricing page or an area where a user can directly sign up for a paid package? (asked by Matthew)

  Okay. So first of all, all the self-service users have to first sign up. You cannot go through the self-service package without signing up. So that's the first thing. And, obviously, yes, on the pricing page, you'll be able to see all the self-service packages, and you're able to see the contact sales option. We do have a number of users who go directly to "Contact sales". This does happen. It's very hard to attribute what exactly drove a specific user, because a lot of times it isn't attributed to a specific campaign; the user came in organically and immediately contacted sales. But, definitely, it's a constant balance. And like I said, we're still a PLG company, and I think that's what's important. Even though our KPI is to improve demos, at the end it's through the product itself. What we're simply trying to do is make it easier for those who, when we perhaps already know they want to purchase an enterprise package, to get there. And we support that through the website, through the content, and by really putting the contact sales CTA in the right places.
- How long do you let your tests run? Is it time-based or based on the number of users, and how much data do you generally need to make a decision, as per Lusha's experience? (asked by Nikita)

  Obviously, this is also a question that we battle with a lot. First of all, some tests, for instance, the enterprise page that I showed, are very small. That test had a big impact, but you don't get that many users going to the enterprise page, so it could run for a very long time before you get masses of data. At some point, you really have to look, even after perhaps 100 or 200 sessions, at whether there is a difference and whether you are already creating an impact; I believe you don't necessarily have to wait. In other cases, like the home page, we might run the test for a few weeks or a month, or sometimes, if it's a page that has a lot of volume, we can even do it in a matter of a week or a week and a half. So I don't think there's a rule of thumb. There's a lot of being conscious and just making a decision based on the amount of volume and what you see from the data you currently have. And, for instance, on PPC, a lot of times we'll actually run tests for shorter, since you have volumes of data and you can, you know, reach a lot of decisions quickly.
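One rough way to sanity-check whether a low-volume test (say, 100 to 200 sessions per arm, as in the enterprise-page example above) is showing a real difference rather than noise is a two-proportion z-test. This sketch uses only the standard library; the session counts and conversion numbers are illustrative, not Lusha's actual data.

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Z statistic for the difference between two conversion rates.

    conv_a / n_a: conversions and sessions for the control,
    conv_b / n_b: conversions and sessions for the variant.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference).
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Low-volume page: 100 sessions per arm, 10% vs. 18% conversion.
z = two_proportion_z(10, 100, 18, 100)
print(round(z, 2))  # ~1.63: |z| < 1.96, so not yet significant at 95%
```

This matches the spirit of the answer: with little traffic, even a visibly large lift may not clear the usual significance bar, so the call ends up being a judgment based on volume and the data you have so far.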