Webinar

User Research: The Superpower Behind Experimentation Programs

Duration - 50 minutes
Speaker

Chris Gibbins

Chief Experience Officer

Key Takeaways

  • Embrace change and experimentation: Making significant changes to your website or product can lead to substantial improvements in conversion rates. Don't be afraid to test new ideas and approaches.
  • Use data to validate changes: Whenever you make changes, use data to validate whether those changes are effective. This can help you understand whether your changes are worth the effort and resources.
  • Consider the impact of changes across the entire user journey: Changes made on one page can impact conversions on another. When testing, consider the entire user journey, not just isolated pages or features.
  • Prioritize down-funnel metrics: While engagement metrics like button clicks and page views are important, prioritize down-funnel metrics like form submissions, transactions, and revenue. These metrics are more closely tied to your bottom line and can provide a clearer picture of the impact of your changes.
  • Don't run more than one test on the same page at the same time: This can lead to confusion and skewed results. Instead, have a clear strategy for when and where you launch your tests.

Summary of the session

The webinar, hosted by a representative from VWO, featured Chris, an expert in user research and experimentation. The discussion focused on the importance of both qualitative and quantitative research in understanding customer behavior and uncovering opportunities.

Chris emphasized the value of usability testing not just for assessing prototypes, but also for discovery research. He also highlighted the need for analytics to understand the scale of a problem and determine its priority in the business roadmap. The session was interactive, with attendees asking insightful questions about user research methods and experimentation.

The webinar concluded with a Q&A segment, with the host promising to forward any unanswered questions to Chris for follow-up. Attendees were also encouraged to connect with Chris on LinkedIn for further discussion.

Webinar Video

Webinar Deck

Top questions asked by the audience

  • Which product or platform is best for moderated testing, other than the coffee shop approach?

    - by Jimmy
    Well, we actually use Lookback. That's the software we use, and it works pretty well. We used to do everything in our labs in London, and then COVID came and forced us to do everything remotely. At the time, we were thinking, oh my god, it's ruined all our usability testing, but in fact, tools like Lookback made it possible to do remote testing and still keep it moderated. There are some challenges because people have to download something onto their device and get it working, but overall, the pros outweigh the cons. The good thing is you still get picture-in-picture: you can see their face, like, on the bottom left here, and you see their screen as well. There are just a few issues on iOS where sometimes you can't see their face, etcetera. I think the best thing with it, Jimmy, is that anybody can log in, watch, and write notes, which appear on the right-hand side of that small screen. So you can write notes and comments without the user knowing, and you can get your whole team involved watching everything. And the notes are time-stamped, so when you want to create a video, you can easily create the clip of where you took that note of that observation. So that's a really good tool.
  • Do you have any top tips for good screener questions just to make sure you get the right type of users within the moderated tests?

    - by Jimmy
    Yeah, that's a good question. Actually, we want people who haven't done much research before. You have to ask it in the right way, but we don't want people who have done research within the last 3 or 6 months, so that's an important one. You have to watch out for the problems with the panels out there. There are tools that make it really, really easy to run more unmoderated tests, but the panels have users who are doing tests all the time, and you can start to see it: they're almost ready with their opinions. They'll say, oh yeah, that color shouldn't be blue anymore. And of course, we don't really care what they think in terms of design; we just want them to do that journey as if they're a real shopper and go ahead and find a hotel or buy that dress, for example. The other one which is really good is: don't recruit people who have used that product or website before. If you're a retailer, for example, make sure there's an open question where they say where they've shopped and where they haven't, and then you can make sure they're not a regular customer, because a regular customer of your website will have already learned all the problems with it. They're not the best people to use. But also, you can put in a list of your competitors and recruit the people who use those competitors, and then they're the perfect target audience. Hope that makes sense.
  • When you experiment for a long time, you see a lot of things that work and don't work. In an experimentation culture, you form notions like: this is going to work, this is not going to work, maybe I should add a pop-up here, or I should remove this. How do you prevent your team from reaching that saturation, where it's "okay, let's just do this and get this client over with, and not do anything further"? How do you eliminate this generalization or saturation from our teams?

    That's really good. If I'm hearing you correctly, it's almost like they're solutionizing first, aren't they? Some people will jump straight to the solution, and it's just a generic solution. It's like making everything sticky, which seems to be a trend at the moment: make the button sticky, make this sticky, make that sticky. Again, it's jumping to a solution. I think the trick is, as soon as you get people to think about what the problem is they're trying to solve, it gets them away from that way of thinking, away from jumping too quickly to an A/B test. That's the power of research. So there are things like making sure everybody actually has a customer problem in the test plan, and also making sure it is based on real evidence, not something made up or the "I'll just quickly go and find some data to back up my idea" type of thing, which I mentioned earlier. Those types of methods can help people get away from that solutionizing way of working, and culture.
  • Which would be your preferred method of user research if you want to leverage the volume of feedback over the quality of feedback? With surveys, for instance, you can get volume, but with interviews you can get far greater insight.

    - by Alex
    Yeah, that's a really good question. I think it's always best to do both. That's why I go on a lot about the qual and the quant. It's good to get really close to the customer, to get rich insights, to understand why things are going on, as well as to observe their natural behavior. That's why usability testing is so good. Unfortunately, a lot of teams only use it for assessing a prototype, almost assessing or validating the idea they already have. What they don't use it enough for is discovery research, for uncovering opportunities in the first place. So that's really good, but you also need the quant side. You need to use tools like analytics or behavior analytics to help you find out the scale of that problem, for example the number of users going through the problematic journey, and that will give you a real idea as to whether you should focus on it in your roadmap or not. So it's that combination, but definitely real face-to-face research with your target audience. And you don't need to do many; like I said, two or three days is plenty.

Transcription

Disclaimer: Please be aware that the content below is computer-generated, so kindly disregard any potential errors or shortcomings.

Vipul from VWO: Thank you so much for tuning in live for yet another insightful session of VWO webinars. As always, my name is Vipul, and I’m the senior marketing manager at VWO, a full-funnel experimentation platform. I’m really stoked to introduce our expert speaker tonight. Of course, a lot of you would already know him: Mr. Chris Gibbins. He is the Chief Experience Officer at CreativeCX, a highly technical experimentation consultancy, going by their LinkedIn page. So feel free to reach out to them if you need any help regarding experimentation.

Hi, Chris. Happy to have you on stage today.

I hope you’re feeling good.

Chris Gibbins:

That was a good intro. It’s really great to be here, and I’m really excited to be talking about this subject, which is very close to my heart: how user research can really help you drive your experimentation program forward. It really is the superpower behind experimentation.

So, yeah, I’m really excited to show you some examples today and talk through a few points.

V:

Also, I myself am going to be a very keen audience member tonight and will be taking down points from the presentation. I’ve seen the entire presentation, guys, so it’s going to be really insightful, and I really mean it. You’ll see it once we progress into the presentation. But before I jump off the stage and let Chris hold the mic, I just wanted to share one thing: we really want this session to be interactive, away from the mundane, unidirectional webinars that you might be experiencing at other places.

So if you have any questions to ask Chris, or if you have any observations or opinions, just feel free to drop your point in the questions box, as you’ve seen on your GoToWebinar control panel. I’ll be happy to unmute you so you can speak your heart. You can ask your questions with more context to Chris, or just let your observation be known to everybody else in the audience today.

And, yeah, let’s make it interactive, a bidirectional kind of dialogue. Looking forward to your questions and participation. With that, Chris, the stage is yours.

CG:

Great. Yes. And if there’s anything you’re not too clear on, or anything you need a bit of explanation around, just type that into the box and we can address it as we go through. I wanted to start with a quick thing about us.

We’re an experimentation consultancy based in London, England, and that is our primary focus. Actually, it’s probably worth me just highlighting what we mean by experimentation. Experimentation is the broad title that covers all our optimization work, our personalization work, and also our product experimentation when it comes to server-side testing, feature flagging, and all that really great stuff. These are the three things we do at our company. The first is that we spend a lot of time understanding customers, products, and services, and we do this through user research, which I’m going to talk about today quite a bit.

Second, of course, is the core thing: optimization, personalization, and innovation. The third thing, which we’ve been doing a lot of recently, is scale. This is all about helping the clients we work with to scale their own experimentation practices. So we do a lot of training, as you can see here with our CTO giving some training to one of our client teams, but we also help build centers of excellence, which is a really big thing now in the experimentation world, helping teams to build an experimentation center of excellence, and a lot of technical enablement as well.

We were one of the first consultancies in the UK to help people really get hold of server-side experimentation, and we’ve since onboarded many clients with all kinds of highly technical skills on that side. So that’s us. We’re lucky to work with a whole range of different clients and sectors, from retail to travel and finance, which is great. And I suppose the last thing is just that experimentation is growing really fast at the moment, as you can see from our client list, but also because companies are finally realizing the real value of experimentation, the value it brings, which is really exciting.

So for example, the value it brings when you’re driving incremental and measurable growth, that kind of optimization area. There’s the whole thing around eliminating guesswork and making better decisions. And the slightly newer thing is the realization that experimentation helps you avoid rolling out harmful changes. In the past, with optimization, you’d have these amazing CRO and optimization teams in a company working in a little team over here. And yet over there, in another part of the company, there are product teams releasing massive great features and huge redesigns of parts of the website without A/B testing them.

So, basically, taking huge risks. But recently, people have started to realize that that’s a bad thing to do. Experimentation can now help them roll out new features and big new parts of the website in a much safer way, and we’ve been helping them do that. The other part, which is really exciting, is thinking about experimentation as a safety net for adventurous ideas and innovation. If you didn’t have A/B testing, for example, you’d have to play it pretty safe. You’d have to make really small changes, and you could never really be adventurous.

And that’s an exciting part of experimentation. And then a couple more. The other thing that’s really good is testing bold ideas early and cost-effectively. So rather than, say, conducting a six-month project before you finally roll out the brand-new redesign or navigation system or something like that, only to find out it doesn’t work.

You can use experimentation in a creative way to find that out early, through things like painted-door and fake-door tests, or through smart use of experimentation to release a more minimal version of what you really want to put live. Then after one month you’ll know, rather than waiting six months and wasting all those resources. And finally, one of the reasons I got into experimentation in the first place is to be more customer-centric. When we experiment, when we put A/B tests out there, when we release our solutions to the world to test at scale, our customers vote with their taps and clicks on what works best for them. And there’s no better way, really, to find out whether something is truly customer-centric than running an A/B test.

Okay. So that’s the really exciting part, and that’s why I think experimentation has grown so fast at the moment. And that’s why it’s such a joy to be in this world right now and helping clients achieve some amazing things.

One of the things we do is a lot of experimentation audits, and I thought I’d put up a few of the common themes we’re seeing that are related to research, a few of the common areas where people can improve at the moment. There’s a whole bunch here; I’ll talk through them fairly quickly and then cover a few in more detail. So overall, there are quite low win rates out there. People are struggling to find winning solutions.

There’s low velocity: people are testing maybe a few tests a month, and/or only testing on one part of the website while, like I said before, releasing loads of changes elsewhere that aren’t tested. There’s an interesting area around playing it too safe and a lack of originality and diversity. We see that a lot when we do these audits and look at their roadmaps and their ideas.

You see the ideas are all very much the same, and they really need to push the boundaries a little and look a bit further afield. Or they’re often just copying their competitors, so they’re too focused on their competitors. I’m sure some of that relates to you guys out there as well: when you work within an industry, you can sometimes be a bit too inward-looking because you’re worried about your competition. You want to be better than them. You’re always watching them.

And a lot of the time, it’s much better to look elsewhere, outside of your sector. The other thing is that experimentation teams are disconnected from research teams. You get a lot of experimentation teams who are more on the quant side, and they’re very, very good at the analysis and the data side, but they miss the research, which I’m going to talk about today. And it’s often not because they lack an amazing research team; sometimes they don’t even know the names of who’s on that research team. It’s that disconnected.

A few more things. There’s a real lack of evidence out there. There’s some confirmation bias, which I will go into in a bit, and there are also poor problem statements. We see a lot of problem statements which are just "conversion rates are too low on this page". Conversion rates being too low isn’t really a customer problem.

And it’s not that helpful either. It leads to a weak hypothesis, which is the other problem: for example, "by making this button sticky, we will increase conversion rates". Again, that’s a weakness. It doesn’t solve a problem that was even uncovered through research at all. There isn’t a problem.

It’s not like "because of poor visibility, we’re making it more prominent", where the poor visibility was uncovered through research, so it’s real evidence. There’s none of that going on here. And the last thing is there’s a lot of HARKing going on in A/B test results as well, which I’ll talk about in a second. So those are the general themes, and a lot of these relate to a lack of research, a lack of evidence behind the ideas.

The ideas are just coming from nowhere, from thin air, without that evidence. Okay, I hope that all makes sense as an intro. I’ve flown through it quite quickly, but obviously, let me know if there are any questions. Now, does anyone out there know what HARKing is?

This is the first question for you guys. Those who know, do you want to type into the questions box what HARKing means? See if you can do it without googling it. It’s also a chance to make sure the questions box is working. Okay.

Anybody wanna have a go?

V:

So we do have 2 inputs. — Oh, great. — One from Mike and one from Emma Travis. Hi, Emma. Mike says hypothesizing after the results are known.

And Emma says the same, "hypothesizing after the results", with a question mark.

CG:

Yeah. They’re both right. Well done, guys. Hi, Emma. So, yeah, absolutely.

V:

A third answer just rolled in: weighing too much personal opinion in the results, from Sara Kanemur. Sorry if I pronounced your name incorrectly, Sara. But yeah, that’s what she says: weighing too much personal opinion in the results.

CG:

Yeah, which is kind of what it means, but not specifically. It actually means hypothesizing after the results are known, and this is something we see everywhere. It’s where people, once they see the results, start almost changing what the original hypothesis was to fit what they now know.

It’s bad practice, but it happens all the time. When you look at the final results, you’re not really looking at what you tested. It’s a human bias. It comes from this willingness to always find a winner, to always be right about what you’re doing. Some teams aren’t so comfortable with getting it wrong sometimes, which is the whole nature of experimentation, and that’s where this comes from.

The second thing I wanted to bring out (I’m just going to bring out three things) is a really interesting thing we see a lot. This is what I’m calling confirmation bias within problem statements. It happens when you get a team that has, they think, a fantastic A/B test idea. It may come from a competitor or just be something they think would be cool to do.

The next thing that happens is they realize: oh yes, we need to run our experimentation properly, so now we need a proper problem statement. And what they do, after they’ve got the idea, is only then go and hunt for some supporting data to back their great idea, ignoring everything else. This happens all the time, especially when there is no discovery research for the ideas to come from in the first place. I think this diagram, which illustrates confirmation bias, is a really good way of looking at this.

So we have their cool A/B test idea on the right. All the facts and evidence are on the left, but the only bits of data they look at are the bits related to their cool idea, and they ignore all the other evidence. This is a really good example (or really bad example) of confirmation bias, which we see all the time in problem statements. And it’s a tricky one, because you could say: look, we’ve got data. We have evidence behind these ideas.

We’ve been into the behavioral analytics tool, and we found some relevant heatmaps and data related to this idea. The problem is you’re only looking for data to back up the idea you’re doing. But if you turn things on their head and everything comes from discovery research, or continual discovery research that you’re carrying out, then it’s a different story. The ideas are stemming from there, so naturally you will pick the ideas with good evidence behind them. I thought that was a good one to bring out, and I hope that makes sense to everybody.

And then the other thing is around low success and win rates. This is a problem in one sense, but it also depends on where you’re at in terms of maturity. If you’re on a very unoptimized website, it’s quite easy to get a lot of winners and a very high win rate. If you’re on a very optimized website, then obviously it’s harder: you have to work harder to find those winning solutions. You do a lot more research and a lot more work, test several variations, etcetera.

There’s a bit of data widely available out there. Our win rate, and I’m not just bragging, is about 40%, which is about average for agencies and consultancies. Of course, remember that some of our clients bring us in because they haven’t done a lot of optimization yet, so we can get really high win rates at the start. But also these days, the interesting thing is we have some clients where we’re helping them build a center of excellence and helping all their product teams across the whole organization to experiment and run A/B tests. Then naturally, the win rate gets lower and lower as more people start testing everything.

You might think that’s a bad thing, but actually it’s a good thing, because all those features they would have rolled out anyway are now being A/B tested. So we’re saving them from rolling out harmful experiences. Finding solutions that are significantly better than the control is hard. And the other realization is that we’re pretty rubbish at predicting human behavior.

And for any of you out there who aren’t really into experimentation yet (as I’m sure a lot of you are; of course, that’s why we do it, isn’t it?), when you realize human behavior is hard to predict, when you realize a lot of those ideas you thought were no-brainers aren’t, it even inspires you around this whole concept of experimentation. It’s why you need to test everything, but also why you need to work harder at coming up with ideas. So one of the key themes for today is how teams increase the odds of finding more winning solutions.

There are a few levers you can pull to improve this. The first is simply the quantity of ideas, and I’ve used an analogy here around kissing more frogs to find more winners. I’m not sure if everybody knows this analogy, but it’s about kissing enough frogs to find the prince. So find ways to increase the number of ideas, and this might be from testing everywhere: not just one little journey of the website or the funnel, but everywhere. Test your apps, test across devices, start A/B testing in more than one place.

The next one is really important: moving from A/B to A/B/n. What I mean is there are so many teams out there just testing one variation against the control. There’s a lot of evidence around this: doing A/B/n tests, A/B/C/D tests, testing two or three or four variations, significantly increases the chance of finding winning solutions. Effectively, you’re just testing more ideas, more executions and solutions to a problem.
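A rough illustration of why testing more variations helps (a back-of-envelope sketch, not a calculation from the webinar; it assumes each variation independently has the same chance of beating the control, which real variations rarely do):

    # Sketch: chance that at least one of n variations beats the control,
    # assuming each variation independently wins with probability p
    # (an illustrative simplification).
    def p_at_least_one_winner(p: float, n: int) -> float:
        return 1 - (1 - p) ** n

    for n in (1, 2, 3, 4):
        print(f"{n} variation(s): {p_at_least_one_winner(0.2, n):.0%}")
    # 1 variation(s): 20%
    # 2 variation(s): 36%
    # 3 variation(s): 49%
    # 4 variation(s): 59%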

The third one is democratized experimentation. I know this is quite an overused term these days, but it’s about allowing other people to test, not just a small siloed team in your organization. Get all the product teams testing, get the marketing teams testing, get CRM testing, etc. That’s how you improve: you will eventually find some really interesting winning solutions. And the last thing is reducing the cost of an experiment.

If you reduce the cost, through maybe automating some of the analysis, making it easier to build tests, or building extensions for different teams, that can all improve the velocity. Okay, hope you’re with me there. And this is where user research comes in again. In our experience, it’s a brilliant way to drive more ideas within your organization.

It’s the serendipity effect of research. It’s the surprises you get when you actually observe users through usability testing or interviews, for example; you always get surprises. Then, when you ideate (which is the picture on the right) from all those really rich findings from real people using your products and services, it’s the perfect inspiration for generating a ton more ideas, which again leads to more success in finding winning ideas. It also helps your teams build empathy with the end user and keeps everybody user-centric.

This is just an example of an opportunity audit from research we did with a client, to show you how many ideas can come from just one round of research: 105 findings and problem statements. We ended up with 61 A/B test ideas, 5 personalization ideas, 2 ideas that led to much larger initiatives, and a ton of fixes as well. And again, this came from the discovery research we carried out. Okay.

Velocity is important, but it only gets you so far. The next two levers, which are very research-related again, help you improve the quality of your A/B tests: problem understanding and quality of execution. For this, I used this graph where I painted all the lovely frogs on again. Remember the frogs?

Remember, before, we were trying to create as many ideas (as many frogs on here) as possible, but there are two very important factors that determine whether a solution is going to be better than the status quo. The first is your level of problem understanding: how well do you understand the problem you’re solving, from no idea at all to "I completely understand the problem"?

If you completely understand the problem, it’s much easier to solve. If you have no idea at all, you’re just guessing; you’re going to take a hundred goes before you figure out a better solution. The second factor is the quality of execution. This ranges from poorly designed, buggy code...

...whether in the feature you’re building or in the A/B test you’re coding, all the way to high-quality design, usability, and code. The important thing is you can have an amazing design, on the far right side, and you’d think: oh, this is the Ferrari, this is amazing. But on these three occasions, you didn’t have enough understanding of the problem. You didn’t carry out your research.

That’s the reason they weren’t in that top red area; they weren’t winners. You didn’t have the problem understanding. And the same is true with the top area. These five here: you actually had a fantastic understanding of the problem you were solving, but this is almost more frustrating somehow.

You had a great idea, you’d done your research, you knew what was going on, but it falls over at the end because you introduced another usability issue in the variation, or it wasn’t coded very well, or there was a performance issue and you had a huge flicker or hitch, and therefore it wasn’t a true depiction of what you were really after. So what we need to do is use research and analysis to truly understand the problem.

The more effort you put into that area, the more chance of getting winning variations. But at the same time, you need to keep improving the quality of execution as well: from the UX design to the quality of coding (ensuring there’s no flicker), but also the data quality of experiments and experiment trustworthiness. If you fall over on how you build your experiment, you won’t notice or uncover that winning solution. I hope that makes sense.

I wanted to look in more detail at problem understanding, because this is absolutely key to where the research comes in. The reason you do all this discovery research, which I’ve mentioned quite a few times, is to uncover confusion, unmet needs, expectations, priorities, and also the objections, distrust, and anxieties, which come up a lot. Another thing is the interesting user behaviors and workarounds. Sometimes we see these behaviors in, say, usability testing (on the right): we see people copy information into a spreadsheet because the journey isn’t showing them what they’ve got in their basket, for example, and these can lead to really innovative solutions.

But unless you were watching and observing human behavior, you wouldn’t uncover these things. Also, a big thing at the moment is the lack of trust people have; as soon as their trust in the business is lost, it affects the whole rest of the journey. Again, research is the only real way to uncover those things. The second thing is to quantify the extent of the opportunities: how many people are affected by the particular issue, which comes from quant research and analytics, and also which audiences are affected.

Okay, so those are all the reasons why we do this stuff. The most effective research technique in our experience is usability testing, which is really underused. And moderated (even remote moderated) as opposed to unmoderated is so much more valuable.

I know it’s a bit more difficult to do, but the value you get is massively increased. Then user interviews: we do a lot of jobs-to-be-done interviews, especially when exploring new business opportunities and new products for our clients. Post-purchase surveys are good as well; the classic that works really well is "what nearly stopped you from buying today?". And also, at the moment, a lot of call center insights.

If you can get your hands on call center insights, that’s an amazing resource of qualitative insight that can lead to some amazing ideas. And the last one is sales team interviews, especially in retail. If you can talk to the sales team who are selling to customers, it’s amazing what they know. Often they’ll know things like the common objections customers have to buying the thing you’re selling, and then you can put counter-objections on the website, which is a great idea for A/B tests.

But it’s really important to have that qual and quant balance, with a focus on behavior analytics too: the Contentsquares, the FullStorys, the Quantum Metrics, and also VWO Insights, all that lovely stuff. Because it can help you quantify, for example, how many people are having the issue we saw in the user research. There are also jobs-to-be-done surveys, which are all about quantifying the importance of the needs we uncover. These are the techniques we’re having the most success with, and again, they lead to amazingly rich and powerful ideas.

So the most powerful combination comes from a mix of qual and quant, which helps you build really strong, evidence-based problem statements and "how might we"s, which are great inspiration for more creative ideation sessions, and ultimately lead to powerful data-driven hypotheses for tests with several variations. Because you’ve got all this inspiration, you’re not struggling to come up with more than one variation, and it leads to higher overall success rates. Okay, that was a lot. So that it sinks in really well, I thought I’d leave you with a client example.

It’s a great example of the serendipitous nature of user research, and also a perfect example of problem understanding. This is all about the login step. If you look at this website, it’s a travel website for booking accommodation, and this is the login step: when you’ve added a room, this is the next step before you go on to book it. From an expert point of view, it’s got prominent CTAs (reasonably prominent), a clear booking summary, and the secure checkout padlock, so you’d tick that feature. It’s got a guest checkout option, which is another best-practice feature.

And it’s got a fairly low drop-off: 73% continue to the next step, which is quite good, really. There are quite a few going back to the basket, which is the step before, and it’s got a low exit rate. Okay.

So if you were looking at that, it seems kind of okay. What I’d like to do now is ask you all a question: what do you think was the most important problem or opportunity at this step? We’ve got a bit of research coming up, and I’ll show you the finding in a minute, but it’d be great to hear from you if anybody can look at this page and guess what the most impactful thing would be.

What’s the biggest opportunity? Does anybody want to guess?

V:

Chris, are you able to view the questions box?

CG:

I can’t, actually. No, it’s not showing up here. So I’m relying on you, if you don’t mind.

V:

Okay, no problem, I’ll read them out. We do have one input, from Alex Taylor. He’s saying: having to register for a new account.

CG:

Okay. Yeah. That’s good. Actually, that’s an interesting one.

That’s related. Let me show you what happens in reality. When we worked with this client, we actually carried out an opportunity audit: we did usability testing on the whole journey to find opportunities, and then we started from there. So let me show you this clip.

And, hopefully, you can all hear this. So this is where he’s gonna come in.

V:

Yeah, just before you play the video, there’s one more input that came in, from Odette: eliminating the login step.

CG:

Ah, yes. Yeah. That’s a good one. Okay. Alright.

Cool. So those are close, but this is going to reveal what the exact problem was. This is a clip from this guy, Mark. That’s not his real picture, actually, but he is a real guy, a real target audience member who hadn’t done any user research before; he was perfect for this. I’ll let him talk through the step.

So you have to log in or check in as well.

I’m not... I don’t have an account. "You don’t have an account to stay with us? It’s easy to set one up later." Alright.

What does that mean? "Log in, and check out as a guest." But if I don’t have an account, how do I log in? And I can’t check out as a guest? Yeah? Ah, okay.

So you have to make an account. Oh, you know what? That’ll be a pain in the ass for me. I’m sorry, I’m not dealing with that.

So I hope that was interesting. Could you all hear it alright? That’s the first part. When this happened, I remember my UX team thinking: that’s really interesting.

We didn’t expect it at all. It seems like he’s a little bit confused by the login and checkout area. And then later on, there’s this other clip where it really reveals itself.

Oh, okay. You know, in terms of "check out as a guest", I just don’t like... I think what’s unusual is "check out". Why is it not "check in"? Right, okay. I want to check in as a guest, not check out; isn’t checkout at the end of the stay?

Right. Yeah. Okay. So that’s really it: checkout is at the end.

I mean, you know, this is the start of the process. I’m trying to check in.

So I hope that was revealing to you all. I mean, it was a real surprise; a perfect example of a real surprise from user research. "Check out as a guest" is really not the best terminology for a hotel website. There’s this confusion between checkout and check-in, because in the real world, checking in is going into the hotel and getting your room.

The other thing is the word "guest" as well, because you’re a guest of a hotel. This was forcing people to go down the login route as an alternative, but they’re not registered yet. So, yeah, it caused a lot of confusion. The interesting thing is, if you were going only by best practices, you’d be looking at some amazing sites like the Baymard Institute, which talks all about how important it is to have guest checkout on retail sites, but also in travel.

It talks about guest checkout as a massive recommendation: everybody should do it. So you would be none the wiser unless you’d carried out proper user research and based your experiment ideas on that. And that’s why this is so important, and why this is a really good example. What this allowed us to do was identify a really clear problem, write a really good customer problem statement, and form a powerful hypothesis.

Off the back of that and the ideation sessions, we constructed four variations plus the control for this experiment. And the fantastic thing was that variation 4, amongst the whole range of things we experimented with, won and was significantly better than the control. So it’s a really good story. And yes, you’re right: we did actually eliminate the login step in the end, but we also tested changing the language, which worked as well.

So, yeah, that was the result, and it’s great. Actually, a really important point: when you start to catalog all your experiment ideas, and when you’ve done enough experiments, as we have over the years, you start to see that the ones based on research have a much higher success rate than the ones based on, say, a client idea or the HiPPO’s idea. And again, that goes back to the problem understanding and execution diagram I showed you. So that’s it; I wanted to leave you with a few practical tips before we finish.

I think we’re just about out of time now. One of the things I mentioned earlier was silos. It’s really important in bigger organizations to effectively make friends with your research teams. We keep coming across experimentation and research teams who don’t even know each other’s names; they’re that separated. To fix this, you’ve got to open your arms to the research teams. One of the tricks we’ve found works really well is to give them credit for the winners that come from research and make them feel part of the process.

And then, like I said before, keep track of where the ideas come from in your A/B tests, so when you’re looking at your program metrics, you can see what the win rates are for tests that come from research and data versus, for example, company ideas (see the sketch below). The other really important thing is to start scheduling user research by default: schedule monthly usability testing sessions before you even know what journey you’ll be testing. Try to enshrine that customer-centric approach in everything you do.
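One lightweight way to do that idea-source tracking, sketched here with pandas (the column names "source" and "won" and the sample data are hypothetical, purely to show the shape of the analysis; this is not a tool from the webinar):

    # Hypothetical log: one row per concluded experiment, tagged with
    # where the idea came from and whether the variation won.
    import pandas as pd

    tests = pd.DataFrame({
        "source": ["research", "research", "hippo", "competitor", "research", "hippo"],
        "won":    [True, False, False, False, True, True],
    })

    # Win rate per idea source (True counts as 1, so the mean is the win rate).
    print(tests.groupby("source")["won"].mean().sort_values(ascending=False))
    # research   ~0.67, hippo 0.50, competitor 0.00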

Then, when the time comes, you can say: okay, we really want to test this journey at the moment, and put that forward. Like I said before, you get so much more value from moderated research, so get the experimenters and the optimizers to observe and take notes. Even when the research team is conducting that type of real face-to-face customer research, make sure everyone else is involved. And make sure you have evidence-based problem statements, like I’ve mentioned before, to avoid that confirmation bias.

And the last one: I understand that some of you are in small organizations and don’t have the resources, a separate research team, or specialists in this area. I would advise actually learning it yourself: go out there, read books like Don’t Make Me Think, and learn how to run and moderate usability testing sessions yourself. I really think that’s a good idea as well. But that’s it.

We’ll be distributing everything afterward, but if you have any questions, I’ll be happy to answer them.

V:

Cool, that was really insightful. I hope the audio was coming through well for you guys in the audience, and that you were able to understand what the clip was all about and what the person taking the test was saying.

But, yeah, this has been great. If you have any questions or observations of your own on usability research or user research in general, we have a few minutes to have you on stage to share them. By the way, a question rolled in; the question is from Heidi.

CG:

Hey, Heidi.

V:

Heidi, would you like me to switch on your mic so that you can ask your question with more context? Let me just go ahead and unmute you. Ah, okay, no problem. So Heidi’s question is: how many users would you advise in a moderated test, Chris?

CG:

Well, at the minimum, we would advise 5 participants per device. Normally, with our clients, we have these days where we’re testing on mobile and desktop, so we do 2 days, 5 a day. That’s what we recommend. You don’t really want to go under that; it’s a really good number.

Of course, if you have tons of different audience types, really different demographics, then you’ll need to do 5 with each of those demographics. But normally that will get you started, and you’ll get a load of amazing stuff.
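For background on that rule of thumb (not something cited in the webinar, just useful context): Nielsen and Landauer’s classic problem-discovery model estimates the share of usability problems found by n users as 1 − (1 − L)^n, where L ≈ 0.31 is the average probability that a single user encounters a given problem. A quick sketch:

    # Nielsen & Landauer problem-discovery model: share of usability
    # problems found by n test users, with L = average probability that
    # a single user encounters a given problem (~0.31 in their studies).
    def problems_found(n: int, L: float = 0.31) -> float:
        return 1 - (1 - L) ** n

    for n in (1, 3, 5, 10):
        print(f"{n:>2} users: ~{problems_found(n):.0%} of problems")
    #  1 users: ~31% of problems
    #  3 users: ~67% of problems
    #  5 users: ~84% of problems
    # 10 users: ~98% of problems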

V:

Yeah. Just adding my own experience on top of that: several years back, I was doing research, not usability testing, but more about understanding what our customers really want from the product. The biggest challenge of running that research and the surveys was acquiring the participants. I got a list of users of a product from our Salesforce and cold-emailed a lot of them, and a lot of negative replies came in.

They did come in, you know, asking me to just stop emailing them, but I did not. I was able to interview 14 or 15 people at that time. It was really, really fun, and really, really insightful as well. So it definitely works. I have one more question.

Please wait. Yeah.

CG:

I’m just gonna jump in on the recruitment: we use a professional recruitment company that goes and finds the right people. Of course, that costs a bit more money, because you have to pay the fees of another company plus quite large incentives, but if it’s a big business, they can afford it. Then you just have to be creative if you don’t have the budget. So maybe use adverts and social media to try and get users in other ways.

Or, what we used to do in the olden days was actually walk into a cafe and offer to buy somebody a coffee, and then run ad hoc usability testing with them on their device there and then. And again, that was great for 10 minutes.

V:

Nice. So the next question is from Jimmy. Jimmy, would you like me to switch on your mic so you can ask your question directly to Chris?

Oh, okay, cool. Let me just unmute you from my side. I’ve unmuted you; you can unmute from your side and ask your question.

Jimmy:

Hi, Chris.

CG:

Hey. How are you doing? Hey, Jimmy.

J:

Good. My question was just around, in your experience, which product or platform is best for moderated testing, other than the coffee-shop approach?

CG:

Well, we actually use Lookback. That’s the software we use, and it works pretty well. We used to do everything in our labs in London, and then COVID came, and that forced us to do everything remotely.

And, actually, at the time we were thinking, oh my god, it’s ruined all our usability testing. But in fact, tools like Lookback made it possible to do remote testing and still keep it moderated. There are some challenges, with people having to download something onto their device and get it working, but overall, the pros outweigh the cons. And the good thing is you still get the picture-in-picture.

You can see their face, like, on the bottom left here, and you see their screen as well. There are just a few issues on iOS; sometimes you can’t see their face, etcetera. I think the best thing with it, Jimmy, is that anybody can then log in and watch and write notes, which appear on the right-hand side of that small screen. So you can write notes and comments and things without the user knowing.

So, actually, you can get your whole team involved watching everything. And the notes are time-stamped, so when you want to create a video, you can easily create the clip of where you took that note of that observation. So it’s a really good tool.

J:

Thank you. Do you have any top tips for good screener questions, just to make sure you get the right type of users within the moderated tests?

CG:

Yeah, that’s a good question. Actually, we want people who haven’t done much research before. You have to ask it in the right way, but we don’t want people who have done research within the last 3 or 6 months. So that’s an important one.

You have to watch out for the problems with the panels out there. There are tools that make it really, really easy to run more unmoderated tests, but the panels have users who are just doing tests all the time. And you can kind of start to see it, actually: they’re almost ready with their opinions. They’ll say, oh, yeah, that color shouldn’t be blue anymore.

And, of course, we don’t really care what they think in terms of design; we just want them to do that journey as if they’re a real shopper, and just go ahead and find a hotel or buy that dress, for example. The other one which is really good is: don’t recruit people who have used that product or website before. So if you’re a retailer, for example, make sure there’s an open question where they say where they’ve shopped and where they haven’t. Then you can make sure they’re not a regular customer, because a regular customer of your website will have already learned all the problems with it.

So they’re not the best people to use. But also, you can put in a list of your competitors and recruit the people who use those competitors, and then they’re the perfect target audience. Hope that makes sense.

J:

Yeah, makes sense. Thanks a lot. That’s really useful.

V:

Thank you so much, Jimmy, for your question. In the interest of time, I’ll just take 2 more questions; a few more have rolled in. I’ll take one question from — let me just unmute you.

I’ve unmuted you from my side; you can go ahead and unmute yourself and ask your question.

Audience:

Hello. Yeah. Can you hear me?

CG:

Yes, I can. Yeah, go ahead.

Audience:

Hi, Chris. Thank you for the session. I had this question: when you experiment for a long time, you see a lot of things that work and that don’t work, and in an experimentation culture you form this notion that, okay, this is going to work.

This is not going to work, maybe I should add a pop-up here, or I should remove this. So how do you prevent your team from reaching that saturation, where it’s: okay, yeah, let’s do this and get this client over with, let’s not do anything further? How do you eliminate this generalization or saturation from our teams?

CG:

No, that’s really good. If I’m hearing you correctly, it’s almost like they’re solutionizing first, aren’t they? You can kind of get this: some people will jump straight to the solution, and it’s just a generic solution.

It’s like making everything sticky, which seems to be a trend at the moment: make the button sticky, make this sticky, make that sticky. Again, it’s jumping to a solution. I think the trick is, as soon as you get people to think about what the problem is they’re trying to solve, it gets them away from that way of thinking, away from jumping too quickly to an A/B test. And that’s the power of research.

So there are things like making sure everybody actually has a customer problem in the test plan, and also making sure it’s based on real evidence, not just something made up or the "I’ll just quickly go and find some data to back up my idea" type of thing, which I mentioned earlier. Those types of methods can help people get away from that kind of solutionizing way of working, and culture.

V:

Right, that’s what Chris mentioned in his presentation as well: eliminating the kind of bias that builds up once you’ve started seeing a lot of successes or even failures. You build an idea, you become biased towards what’s building in your head, and you just want to jump onto that hint whenever you get a chance. That’s tough, but it’s doable. So the next question, the last question for tonight, I’ll take from Alex.

Alex Taylor. Alex, let me just unmute you; I’ve done so from my end. You can unmute and ask your question.

CG:

Hey, Alex.

Alex:

Or maybe I just — Hello. Can you hear me now?

CG:

Yeah. That’s good to hear. We can hear you.

A:

So, yeah, a quick question: which would be your preferred method of user research if you want to leverage the volume of feedback over the quality of feedback? With surveys, for instance, you can get volume, but with interviews you can get far greater insight.

CG:

Yeah, that’s a really good question. I think it’s always best to do both. That’s why I go on a lot about the qual and the quant. It’s good to get really close to the customer, to get rich insights, to understand why things are going on, as well as to observe their natural behavior.

So that’s why usability testing is so good. Unfortunately, a lot of teams only use it for assessing a prototype, almost assessing or validating the idea they already have. What they don’t use it enough for is discovery research, for uncovering opportunities in the first place. So that’s really good, but you also need the quant side. You need to use tools like analytics or behavior analytics to help you find out the scale of that problem, for example the number of users going through the problematic journey, and that will give you a real idea as to whether you should focus on it in your roadmap or not.

So it’s that combination, but definitely real face-to-face research with your target audience. And you don’t need to do many; like I said, two or three days is plenty.

V:

I hope that answers your question, Alex. I see there are a few more questions; I’ll send them to Chris, and he’ll be able to answer them for you at his leisure. But you can also reach out to Chris on LinkedIn.

You can find him easily on LinkedIn; just type in his name and you’ll find him, and you can catch hold of him and ask all your pressing questions regarding experimentation, CRO, and anything else. Feel free to do that, or you can reach out to us as well; we’ll be happy to help you out with any experimentation or CRO questions. With that, Chris, thank you so much. It was really, really insightful.

It was really interesting today. I would also love to thank our attendees for asking really encouraging questions. Thank you so much, everyone, for tuning in live at this hour, and have a great day ahead. Bye bye.

Thanks so much, everybody. I will speak to you soon, hopefully.

 
