
[Workshop] Create a Data-Driven Experimentation Program With Speero's Research Playbook

Unlock Speero's B2B experimentation secrets with an interactive workshop, turning research into test hypotheses, plus all the tools you need to DIY!

Transcript


Hello, and welcome once again to ConvEx 2022, the annual conference on experimentation by VWO, a full-funnel experimentation platform. Today we have with us Emma Travis, as you can see in front of you right now. She is the Director of Research at Speero, by CXL, and of course she will introduce herself in more detail in the subsequent slides.

But I just wanted to walk you through how this workshop will happen. This workshop is titled Create a Data-Driven Experimentation Program with Speero’s Research Playbook. Emma has put a lot of effort into creating and designing a Miro board, so we will be sharing the link to the Miro board with you.

Emma, shall we share it right now, or do you want me to share it after your presentation?

Feel free to share it now. What I would say is, obviously, I don’t want people to be distracted by the Miro board, but it would be good to make sure in the background that everybody can access it. So if we share it now, if you can just click on that link and make sure you don’t have any access issues, then we’ll be able to make sure that everybody has the access they need.

Yeah, now you can see my screen. So this is the link; you can click on it and land on the page. I’ve created a readable link for you: just go to vwo.com slash L slash convex. That’s an L and not an I. Go to vwo.com slash L slash convex, and you will be able to enter the Miro board that Emma has designed for this workshop.

So quickly head over to this URL to participate in the workshop. Do not interact with the board yet; we will come to the Miro board after Emma finishes the initial part of her presentation.

Perfect. Great. Thank you so much, Vipul. Hi everybody. It’s great to see all of you here, or at least, well, not really see you, but I’m sure you’re all there.

I hope you’ve enjoyed the sessions so far. As I mentioned earlier, I enjoyed taking part in that little quiz and testing my knowledge. As Vipul mentioned, my focus for today will be on the role of research in your experimentation programs. So I’m going to think, first of all, for a few minutes about why it’s so important to take a data-driven, research-focused approach to experimentation.

But more importantly, I want this session to be relatively hands-on, both in terms of the workshop session we’ve already talked about, and in the sense that I want you to take away from today a range of tools, frameworks, and processes that will help you actually run this type of research yourselves, within the teams and businesses you’re working in.

In terms of how the next 40 minutes will work, I will try to keep an eye on the time. Obviously we’re a bit behind, so I’ll just set my timer. As I mentioned, I’ll start by talking about the rationale for taking this data-driven approach.

Then I’m going to introduce you to our playbook, our how-to guide. We actually developed this playbook with our sister company, CXL, and essentially it’s a bible for how to run this research. I will provide links to this playbook as a follow-up to this session as well. It’s completely free.

There are no strings attached; we just want to continue the CXL ethos of sharing our knowledge within the industry. So that playbook will provide you with the how-to guide. Obviously, we don’t have the six hours we would probably need to go through it in detail today.

So I just want to introduce to you the concept of ResearchXL: what it is, why each of the research methods is included, and the top-level steps for running that type of research. Then we’re going to jump into the workshop session. Obviously, we’ve already talked about the Miro board.

Hopefully you can all access that in the background as I’m speaking. This is really where I want to illustrate how we turn insights from research activities into hypotheses for A/B tests. It’s about connecting the dots between user research and data analysis on the one hand and actual test ideas on the other.

We’ll use some dummy data today, but you will be able to take away that framework for a workshop to run internally within the businesses you’re working in. And then hopefully there’ll be a bit of time for Q&A. I believe there’s a way for you to add questions; I’m sure you know more about that than me, since you’ve been in more sessions than I have.

But yeah, Vipul will be gathering up those questions, and I’ll look to share and talk about those.

And I encourage everyone to drop their questions into the GoToWebinar control panel itself this time, so that I can recognize who is actually asking the question and unmute you from your side, so that you can ask your question or share any observation

regarding anything that’s going on at that particular moment by coming onto the stage itself. So feel free to either raise your hand or send in your question using the questions panel, and I will take care of unmuting you. Please go ahead.

Perfect. Thanks. Cool. So just before we get started, a little bit about me and about Speero.

Speero is ultimately the agency arm of CXL, which many of you will likely have heard of in terms of the training in the CXL Institute. From Speero’s perspective, we are an agency specializing in experimentation, or conversion rate optimization, along with user research, design, and analytics services. We work with a range of businesses across the world, and we also have a team all across the world as well.

I’ve been working at Speero as Research Director for two years now, but I’ve been in UX and experimentation for over 10 years, with a big focus on user research. I’ve spent years running interviews, running usability studies, following people around supermarkets with weird technology that measures brain waves, and all of these kinds of weird and wonderful research activities.

So that’s really my passion; it’s what brings me to my role as Research Director at Speero, and it’s also what brings me here today. So let’s get started. Ultimately, there are two ways you can think of running an experimentation program, or at least coming up with ideas for A/B tests.

As part of an experimentation program, option one is basing ideas on guesswork: assumptions and opinions, often from the top down. So, you know, maybe the CEO, or the highest-paid person, or whoever shouts the loudest. This can lead to what we call a scattergun approach to experimentation and A/B testing: testing lots of random things with no real rationale, reason, or clear strategy

in terms of what you’re testing and why. Option two, on the other hand, is the type of experimentation program we try to build with our clients: being data-driven and customer-focused. And really, in order to do those things, you have to be regularly conducting research and regularly talking to your customers

to understand what matters to them. These types of programs also tend to be more considered and more strategic, because they’re focused on identifying real customer problems. As such, we tend to extend the types of experiments we run into more innovative business-strategy experiments, versus the traditional button-color test or moving-something-above-the-fold kind of tests.

So you might be thinking to yourself, well, what’s so wrong with option one? I guess you could argue that if you throw enough things at a wall, eventually something will stick, right? If you test enough random ideas, you’ll find some things that work along the way, and to an extent, this is true.

But from a business perspective, this doesn’t make logical sense, and there are two main reasons why: this approach requires over-investment in two things we are generally all short of, time and money. Let me dig quickly into each of those. From a time perspective, this scattergun approach ultimately means it takes longer to find what really matters to your users, your potential customers, and your customers.

What we’re showing on this slide is the range of outcomes we can have from any given test: winners, losers, and flat tests. We’ll ignore the error one for now. Now, what I’m not saying is that being data-driven and doing user research means you’ll only have winning tests from now on.

That is unrealistic. And besides, a losing test isn’t necessarily a bad thing. A losing test actually shows a significant shift in behavior, just in the wrong direction. You found something that resonated with your customers in some way, so it’s actually a really great opportunity for an iteration.

But what I am saying is that running research and being data-driven will help reduce the time you waste on tests that simply have no impact, and therefore help you focus on the things that really matter to your customers. And then there’s the money aspect. The longer you spend wasting time on things that don’t really matter, or don’t move the needle on the important metrics you’re looking at,

the more money you’re wasting. You’re acquiring traffic, potentially spending billions of dollars to acquire new customers and new visitors to your website, but you’re not making the most of that traffic: A, because you’re not converting them, and B, because you’re not learning from them by testing things that really matter and that get you significant test results, whether those are winning tests or losing tests. You’re not shifting behavior in that sense.

What we see is that many businesses are growing on paper in terms of traffic, but revenue and profitability are not following suit. That’s because there’s still this addiction, or obsession, with acquisition and acquiring traffic, and not enough focus on converting that traffic into paying customers.

This isn’t a new phenomenon in the digital world. I’ve been working in the industry for over 10 years, and it’s something we’ve regularly talked about, especially when I worked in full-service digital agencies. I’ve dealt with clients spending millions or maybe even billions of dollars on paid search strategies

and then scraping the barrel for a budget for things like research and experimentation. It’s not enough to just acquire traffic, and now it’s also not enough to just A/B test random things. A/B testing alone isn’t going to be the silver bullet you’re looking for. What is going to be the silver bullet is using experimentation to solve real user pain points and real customer problems.

And we can only do that by gathering a deep customer understanding, and we can only do that by running research. So I want to introduce you to Speero’s ResearchXL model. You can see how thorough this methodology is, not just because of the number of research methods included, but, more importantly, because of the different types of research we pull together and the way we identify key themes across those research methods.

These research methods on their own are not necessarily anything brand new that you won’t have heard of before; the magic really happens when we conduct multiple research methods over a similar period of time and triangulate the data across them.

One of the reasons for this is that I like to think of ResearchXL, or this type of research methodology, as a bit of a jigsaw puzzle. Each of these research methods is great at gathering a particular type of data and actually quite bad at gathering a different type of data. I’ll give an example to explain what I mean.

Usability studies, for example, are a great way to identify on-site friction areas: the traditional UX usability issues we might uncover. Things like users struggling to complete a form due to poor error messaging, or, on an e-commerce website, struggling to find the product they’re looking for because the site search doesn’t work very well.

These are, I guess, the more traditional UX issues, and usability studies are an absolutely brilliant way to find out where those areas of friction are. On the other hand, a customer survey is never going to get you insight into on-site friction, and we should never try to mold a survey to get that type of data, because it’s simply not the right method for it.

However, what a customer survey will get you is some really great insight into your customers’ motivations: what matters to them in general when shopping for the products you sell or researching the services you offer. We’re not going to find usability issues, but we are going to get that deeper motivational data.

And again, it’s about thinking of the pieces of that puzzle; we can use these pieces of information in different ways to optimize our website. So, for example, usability studies provide friction data, which can help us identify UX fixes and A/B tests focused on fixing UX issues, whereas motivational data is more useful when thinking about things like website copy, website content, and potentially pieces of functionality on your website, to help address those motivational aspects.

So all of these different pieces of research have their role. They all have their benefits, and they all have things they can’t offer and can’t do. That’s why it’s important to do a wider spectrum of research activities in most cases, versus focusing on just one, because then you only have one piece of the puzzle, so to speak.

Now, this is a framework we’ve recently put together, and what I want to point out is that we developed the ResearchXL framework from years of working with clients on these types of projects. What we find is that this combination of research activities is a really great balance between effort and value, and tends to be appropriate in most cases for the businesses we work with, at least.

That said, we have tailored, and continue to tailor, ResearchXL to specific client needs. So you might be sitting there thinking, well, we could never do X, Y, and Z because of A, B, and C reasons. Well, that doesn’t mean you can’t do any research. This matrix, which we’re actively working on at the moment, is about mapping out those different research methods to help businesses understand what type of research they could do, and what’s realistic for them in their circumstances, based on things like cost and effort, as well as the value you get from those different research methods. An example of when we might flex the ResearchXL model relates to business size.

We would be looking for somewhere in the region of 200 to 300 survey responses when running a customer survey with one customer segment. There are some caveats to that, because we would want to calculate an appropriate sample size based on the particular business, but that would be our minimum threshold.

Now, in order to gather 200 to 300 survey responses, even with a 10 percent response rate, we’re going to need something like 2,000 to 3,000 customers to send that survey to. And a 10 percent response rate is relatively high, so we’d probably need more than that.
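
To put rough numbers on that arithmetic, here is a minimal Python sketch of the outreach calculation (illustrative only; the function name and the example rates are assumptions, not something from the talk):

```python
import math

def required_outreach(target_responses: int, response_rate: float) -> int:
    """Estimate how many customers to send a survey to.

    target_responses: minimum completed responses wanted (e.g. 200-300).
    response_rate: expected completion rate as a fraction (0.10 = 10%).
    """
    if not 0 < response_rate <= 1:
        raise ValueError("response_rate must be a fraction between 0 and 1")
    # Round up: you cannot contact a fraction of a customer.
    return math.ceil(target_responses / response_rate)

# At a (relatively optimistic) 10% response rate, 200-300 responses
# means contacting roughly 2,000-3,000 customers, as described above.
print(required_outreach(200, 0.10))  # 2000
print(required_outreach(300, 0.10))  # 3000
print(required_outreach(300, 0.05))  # 6000 at a more conservative 5% rate
```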

So in that case, what we would probably do is switch out that customer survey for something like customer interviews, or at least run a smaller-sample survey and supplement it with customer interviews. What we lose in sample size with customer interviews, we gain in depth of insight, because we’re able to ask more detailed questions and probe a bit more on their answers.

So we get a richer set of data in that sense, but the sample size is lower. It’s all a bit of a balancing act. I wanted to share this with you because you can take it away to help you plan research internally. On this slide, I’ve also shown how we would definitely recommend

triangulating these different research methods, which I’ve shown on this side here. Again, this is about filling in as many of those pieces of the puzzle as possible. In this scenario, heuristic review is relatively low effort, but we’d also say it’s relatively low value, in the sense that it’s as close to opinion as we ever get, because it’s a group of experts

evaluating a website based on UX principles. So we’d really want to validate something relatively low in value with something much higher in value, like customer interviews, but also with something from a behavioral perspective, like analytics analysis. We’d also always recommend

at least one qualitative or attitudinal research activity and one quantitative or behavioral research activity, at the very least. If anyone has a specific use case you’re dealing with at the moment that you’d like my help with, please reach out on LinkedIn, or in the Q&A, or wherever, because I guess there’s never really

a right or wrong answer. Maybe sometimes there is, actually, but often it’s a balancing act, and there are a few considerations in terms of the best set of research activities to do in any given scenario. Cool. So before we jump in,

I’m sorry, sorry for interrupting.

We do have one question that just came in, from Luca. Let me just read it to see whether it’s a valid question or a comment. Okay, it’s a valid question. Luca, let me unmute you; since it’s a long question, I believe it will require a bit more context from your side. Let me find your name in the list.

Yeah, found it. I’ve unmuted you now, Luca. You can unmute yourself from your side as well and ask your question over audio with a bit more context. Can you hear me? Yes.

I went really deep into one question concerning user testing. More specifically, I found this sort of issue with panelists who go straight to completing the main task, such as, I don’t know, purchasing a trip or a vacation, and they don’t really care about some sort of sub-task that I really want them to focus on, such as adding travel insurance or a transportation service. I really want to understand if they take care of it, or if they think about it at all. Do you have any suggestions?

Sure. I’ve got a couple of questions before I answer.

Are you primarily talking about remote usability studies or remote user testing versus moderated?

No, no, no. I’m talking about moderated user testing.

Okay. So in that sense, I would be thinking about how I was moderating. For example, making sure that users understand exactly what is expected of them within the session.

I would also approach it in terms of how you write the script. One thing I like to do when writing a script for usability studies is to have a set of tasks that are very open, because we want to observe natural user behavior as much as possible. Obviously it’s not completely natural, because it’s a research setting.

But we want to give users the opportunity to act naturally. For the example you gave, which I think was about adding insurance, we almost want to see if they do that on their own, before we tell them to, if that makes sense. So I would make sure to prime them by saying things like, “I’m really interested in how you would do this normally, if I wasn’t here watching you.” I’d maybe also mention that we have plenty of time to go through the task, so there’s no need to rush, and that there’s no right or wrong way of doing things.

We really want to put them at ease, but also give them the opportunity to do what they would do naturally. Then, if you have a specific research objective, for example gathering insight into the insurance add-ons, we would add a second or third task that is more specifically aimed at that research objective, to make sure we gather the insight we need.

But yeah, I think it really comes down to how you moderate and the questions you ask, and also how you prime the participants to understand that they can do what they would normally do. The other important thing to mention is that if they don’t add insurance within the checkout flow, or whatever the example is, that’s interesting in itself.

It’s quite easy to jump very quickly into very specific tasks, but actually we want to understand: if they didn’t select insurance themselves, why not? Then we can ask probing questions about that and identify opportunities that way. I feel like that was quite a long answer.

Does that help at all? Yeah, yeah, great tip. Thank you. Okay, no worries. Any more questions before we carry on?

None, Emma. You can go forward. Okay,

perfect. Thank you very much. Is this working? Okay, cool. So the last thing I’m going to talk through before we jump into the workshop session is the five main steps for running ResearchXL. Actually, these five steps apply to any research you might be running.

So for the example I gave, of a heuristic review, analytics analysis, and customer interviews, these would be the same five steps. I’m going to talk through all of them quite briefly, and the reason I want to do this is that where we’re going to end up in the workshop session is on steps four and five.

I want to prime you on what happens before we get to the stage we’ll practice in the workshop. Step number one is preparation. There’s quite a lot to prepare and plan for with all of these different research activities. From the survey perspective, this includes things like: what questions are we going to ask? There’s a whole art in developing good survey questions and asking the right types of questions.

But also things like: what tool are we going to use to run the survey? How are we going to send it out? Are we going to offer an incentive to encourage people to complete it? Then from a usability study perspective, and obviously we were just talking about this in that question and answer:

what questions are we going to ask, what tasks are we going to set, and also the types of people we want to include, such as what demographics and what screener questions we might want to use. For things like polls, we need to think about the trigger logic.

How long are we going to wait until we trigger the poll? Maybe it’s exit intent; if so, what are we going to do about mobile, where there’s no such thing as exit intent? There are all these different things to think about when preparing each of these research activities.
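
As a rough illustration of that kind of trigger planning, here is a hypothetical Python sketch of per-device poll rules. Real polling tools configure this differently; all the names and thresholds here are invented for the example:

```python
# Hypothetical trigger rules: exit intent on desktop, with a
# time-on-page fallback on mobile, where exit intent doesn't exist.
POLL_TRIGGER_RULES = {
    "desktop": {"trigger": "exit_intent"},
    "mobile": {"trigger": "time_on_page", "delay_seconds": 30},
}

def should_trigger_poll(device: str, seconds_on_page: float, exit_intent: bool) -> bool:
    """Decide whether to show the poll under the simple rules above."""
    rule = POLL_TRIGGER_RULES.get(device)
    if rule is None:
        return False
    if rule["trigger"] == "exit_intent":
        return exit_intent
    if rule["trigger"] == "time_on_page":
        return seconds_on_page >= rule["delay_seconds"]
    return False

print(should_trigger_poll("desktop", 5, exit_intent=True))   # True
print(should_trigger_poll("mobile", 45, exit_intent=False))  # True
```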

Again, a lot of this is available in the playbook I mentioned at the start, in terms of what you need to think about for each research activity. Then comes data gathering. Now, I’d like to say this is the point where we just sit back, relax, and let the data flood in, but unfortunately that isn’t always the case.

There are definitely still things we need to be doing at this point, and quality assurance is one of them. Thinking about the question that just came in, there is definitely a quality aspect to things like usability studies, especially if we’re running remote studies. I would recommend setting one up, watching it back, and making sure the quality was good: did they understand the tasks?

Did they understand the questions? Was there anything they misinterpreted? Was the flow correct? You can preview the flow yourself, but sometimes, until an actual user has been through it, you don’t really know, just as with any user experience. Also, with things like surveys and polls, we want to keep checking on sample size and how we’re getting on.

If we’re not getting on very well, are there changes we need to make to the approach? Do we need to send follow-ups or reminders? Can we stop the poll and start analysis? So we want to keep an eye on how the data is coming in. Step number three is coding, or analysis. What I’m showing on screen here are some screenshots from the coding templates we use for things like surveys and polls.

There is also the analysis that happens for things like heat maps and session recordings, which obviously doesn’t happen in this same format. But for surveys and polls, what we’re doing is essentially quantifying the insights.

For closed or multiple-choice questions, we’re simply quantifying the groups and the trends in how people answered. But a lot of the time, especially with surveys, we’re also asking open-ended questions, and for those we do a kind of sentiment analysis.

We actually do that manually, because we’ve never had a lot of luck doing it via a tool. That’s another reason we try to cap the number of responses at a sensible amount, like 200 to 300: that’s a manageable set to go through manually and tag with different sentiment tags.

Again, all the templates are available within the playbook, but this coding is essentially where we take all the individual pieces of raw data and condense them down into key insights that we can quantify. Next up is thematic analysis, and this is actually where we start to run some workshop sessions.
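
To illustrate what that coding step produces, here is a minimal Python sketch that quantifies manually applied tags. The responses and tag names are invented for the example, and this assumes the sentiment tagging itself has already been done by hand, as described above:

```python
from collections import Counter

# Open-ended survey responses, each manually tagged during coding.
coded_responses = [
    {"text": "Shipping took too long", "tags": ["delivery", "negative"]},
    {"text": "Easy checkout, loved it", "tags": ["checkout", "positive"]},
    {"text": "Couldn't find the returns policy", "tags": ["returns", "negative"]},
    {"text": "Delivery cost surprised me at checkout", "tags": ["delivery", "checkout", "negative"]},
]

# Condense individual pieces of raw data into quantified insights per tag.
tag_counts = Counter(tag for response in coded_responses for tag in response["tags"])
for tag, count in tag_counts.most_common():
    print(f"{tag}: {count}")
```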

This is going to be the first thing we do as part of the workshop we’re jumping into in a few minutes. What I wanted to show you here is a live example of that. On the left-hand side of the slide, which is all done in Miro as well, you can see some top-level research insights that have come from each individual research activity.

You can see this is what we found from user testing, this is what we found from the customer survey, and so on. There’s also a reason they’re all color-coded: it helps us later to understand where those research insights came from. Then what we do is essentially a card-sorting activity on our own research

insights. For anybody that’s into research like me, this is quite enjoyable, because it feels a bit like the Matrix: it’s doing research on research. Maybe that’s just me being weird. But essentially what we’re doing is a drag-and-drop exercise, grouping these insights into groups that make sense.

I’ll show you an example. From here, we dragged these ones over into this group, and you can see we started by adding ones that were all to do with delivery, and then there are a few that relate to returns as well. So we started to create this delivery-and-returns group.

This was for an e-commerce client. Down here, we grouped lots of different issues related to the checkout. Something else to point out: you can see there are two Post-it notes here with the exact same thing written on them. That’s actually a really positive thing, because it means we identified that in the heuristic review and also identified the same thing in the user testing.

This is where the strength of the signal starts to grow, because we have multiple research sources pointing towards the same problem. Once we’ve done this drag-and-drop exercise, just as with card sorting, we also create labels that describe these groups.

And this is where we can start going from all of these individual research insights to larger, overarching customer problems: the main areas where customers struggle. The other thing I’d point out is that these shouldn’t be based around particular areas of the website.
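
To show what that triangulation looks like as data, here is a hypothetical Python sketch that groups insights by theme and counts how many distinct research methods support each one. The themes, methods, and insights are all invented for the example:

```python
from collections import defaultdict

# Insights from the thematic grouping, tagged with their source method.
insights = [
    {"theme": "delivery_and_returns", "method": "heuristic_review"},
    {"theme": "delivery_and_returns", "method": "user_testing"},
    {"theme": "delivery_and_returns", "method": "customer_survey"},
    {"theme": "checkout_friction", "method": "user_testing"},
    {"theme": "checkout_friction", "method": "analytics"},
    {"theme": "trust_credibility", "method": "customer_survey"},
]

methods_per_theme = defaultdict(set)
for insight in insights:
    methods_per_theme[insight["theme"]].add(insight["method"])

# Themes supported by more independent methods carry a stronger signal.
for theme, methods in sorted(methods_per_theme.items(), key=lambda kv: -len(kv[1])):
    print(f"{theme}: {len(methods)} methods ({', '.join(sorted(methods))})")
```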

In this scenario we did have one related to checkout, for example, but these can be site-wide issues; they should be customer problems rather than just pages of a website. Another one that comes up quite frequently is trust, or credibility, or rather a lack of credibility.

That’s something that could be fixed or dealt with across all pages of the website, but it’s a key customer problem. So that shows how we take all those individual insights and start creating these larger key themes, or customer problems. The fifth and final step is the workshop we’re going to jump into in a minute.

In terms of the benefits of running this as a workshop session, there are a few I wanted to highlight. As I mentioned previously, it really helps get you out of the weeds, so to speak. It can be very overwhelming, when you finish a research project, to figure out how to move forward.

This hypothesis workshop really helps with that. There are also a few points about team engagement and stakeholder management. I mentioned at the start those top-down experimentation programs that are all focused on ideas from the highest-paid person, et cetera.

Wider team engagement is also a big part of improving your experimentation maturity: gathering ideas from the wider team, from the customer service team, et cetera. So by getting as many people involved in this workshop as possible, you get people engaged, but it also helps

keep them honest and focused on coming up with ideas based on the data, versus coming up with random ideas. And finally, it really helps communicate where experiment ideas have come from. What I’ve found when running these workshops with clients is that if you include them, you get less pushback and fewer questions later in the process, because everybody understands why we’re testing that specific thing: it came out of the workshop, and it relates to the research. So it can really help longer-term with buy-in into the different ideas as well.

So just to finish off before we jump into the workshop, this is a call-out to the playbook I mentioned. There is a link to it in the slides, and I’ll make sure you get access in any follow-ups that go out. As I mentioned, it’s completely publicly available; there are no strings attached to you having access to it.

There are so many pages to it, with so much information and advice on how to run each research method, but also on how to report and present insights, as well as links to various example briefs, coding templates, reporting templates, and so on. Hopefully that’s really useful, and please reach out if you have any questions,

or if you choose to undertake this kind of project yourselves. Cool. So it’s time now to jump into the workshop. I guess it might be a good moment to check whether anyone has any more questions before we do that.

No more questions, Emma. And I just want to drop in a note. I don’t want to make you rush, but we’re running a bit behind schedule.

So it would be great if you could skip a few parts, if there are any. I know it’s a very comprehensive Miro board that you’ve created, but it would be great if we could finish the workshop in maybe 10 to 15 minutes.

Yeah, sure, that’s absolutely fine. We can make it work.

What I would suggest is that I’ll talk through how the workshop works, and everybody that’s in the board, which I can see here, can be working as I explain. Rather than what I was planning, which was putting on a timer and spending five minutes on each part, it probably makes sense to just work as I speak.

This Miro board tracks from left to right in terms of the different tasks. You’ll hopefully recognize the research insights part; that’s the part on the left-hand side with the different color-coded Post-it notes. What you can see there are insights from individual research activities, and these are what I would describe as the main insights from each research activity.

As I described previously, the first step is what we call thematic analysis. We’re going to do it a little differently here: we’re not going to do the drag-and-drop exercise, just because it would get really messy on the board. So what I want you to do is go to where it says “task” just here, the first task with all the white Post-it notes.

I’m going to ask you to read through the research insights and start coming up with some themes. And here’s a Speero tip: what I would describe as a theme is something that starts to come up multiple times across different research methods.

I’ve given you an example in that first one, which is a lack of clarity around the offering. People in message testing and copy testing were confused about what the tool actually is, and similarly, users in the usability studies had the same questions. These all generate ideas for key themes.

So that’s the first part of the workshop. Given the time, I’ll put on a two-minute timer, and I’d ask you to start filling in these white Post-it notes with ideas for what those key themes might be. Hopefully that makes sense to everyone. I’m going to go with two minutes, and I’ve got some music to accompany us on this activity as well.

Here we go.

It’s a great idea, and I hope the instructions for interacting on the Miro board are clear enough; just let us know if they’re not. I can see a couple

of people adding things, which is a good sign, I think.

Great. So now I’m going to send you a link, or if you are on Mentimeter, the link is right in front of you. But let me type it out for you in the chat window: vwo.com slash L slash convex.

I don’t know if you can see it, but there’s a lot of stuff going on here.

Okay, just 20 seconds left to add a few more ideas for those key themes.

I’m really happy to see that people understood the assignment and are filling in the sticky notes with their observations.

Perfect. Ordinarily, we would obviously have much more time to do this and really dig into these insights, but this is just to show how we would pull the insights together and come up with those themes. It’s great to see some here, and since I’m quite familiar with the content on this board, I can already see some of the ones I recognize.

Plans and pricing is a major one, like the lack of transparency around pricing. Another one I can see there is trust, and that whole credibility and lack-of-social-proof area. A lot of people are picking up on the same things we picked up on as well. So, moving on to the next task.

This is where we can start to really get some test ideas onto the board. Now, anybody can look at a website and come up with random test ideas, but that’s obviously not what we’re about. What we want to do is look at the website through the lens of the research, essentially through the customer’s lens, with those key customer problems we’ve just identified

top of mind. What we would then do is review key pages of the website. In this scenario we’ve got a homepage; we might want to do this on a full website or a full key user journey. But ultimately, this is where we can start thinking about how we could solve some of these key customer problems.

So the next task, and again we’ll go with another two minutes for this, is to grab these Post-it notes up here, drag and drop them onto the page, and come up with some test ideas that actually solve these problems. Just to remind you, some of the main things that came up were a lack of trust and credibility, some confusion about what the offering actually is and how it works, and confusion about pricing.

Admittedly, the pricing one might not be so relevant on the homepage. Great, perfect; I can already see some people adding ideas. I’m just going to set the two minutes going. While you’re all working, one thing I’ll also talk through is my next Speero tip, which is to think about the “how might we” statement. How might we improve credibility on this website?

How might we improve transparency around pricing? This relates to the fact that, for any customer problem, there are endless ways we could solve it. This isn’t about coming up with the one and only test idea that solves the problem, because that doesn’t exist.

There are many different ways we could solve this problem, and in many cases it might even be a combination of different things that actually solves it. So the “how might we” statement really helps frame those ideas: yes, there are multiple different ideas, and framing them this way helps us come up with test ideas as well.

It’s so cool to see everybody working on the board and having ideas. The other thing this highlights is how quickly you can come up with lots of really great test ideas that are grounded in research, and we’ve only been doing this for about five minutes.

So if you did this on a bigger scale, you can see how many test ideas you’d come up with.

I’ll leave you for 14 more seconds.

Okay.

Perfect. Again, I’ve said it already, but it’s really cool to see how we’ve gone from the research to thinking about specific test ideas. Now the final task, and this is probably the part we can spend the least amount of time on, is about hypothesis

creation, ultimately. You’ll remember I called this a hypothesis generation workshop. What this isn’t is a random-test-idea workshop, so it’s really important that we come away not just with a load of random test ideas, but with actual hypotheses for A/B tests. Now, there are a lot of different hypothesis frameworks out there,

and none of them are necessarily wrong. Well, some of them might be, but it doesn’t really matter which framework you use, as long as you use a framework and stick to it. The reason we use this particular framework is that it starts with a “because” statement. That’s really important: we want to keep ourselves honest.

We want to stay focused on why we’re running these tests. So, in the example I’ve given you: because ResearchXL highlighted that users question the credibility of the tool. We could get more specific and say: because usability study participants struggled with X, Y, or Z, or because the survey we ran highlighted something; we can reference any of these different research activities.

But ultimately, it’s about making sure we’re calling back to the data or the research as part of the hypothesis. Then we think about the test idea: what do we propose to do? How do we think we could solve that problem? We can drag and drop those from the test-idea part of the task we’ve just run.

Then, finally, we also think about measurement: what do we expect to happen? This is where we start creeping into actually planning a test. It’s very important to think about the metrics we want to affect as part of the experiment we’re going to run, prior to running it.

That’s important for a couple of reasons. Firstly, we need to make sure we’re tracking those metrics. And secondly, it keeps us honest, because it’s very easy to manipulate data afterwards to tell a story. We want to make sure we focus on the metrics that matter, the metrics we expect to be impacted by this particular test idea.
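
As a small illustration of that because / test idea / measurement structure, here is a Python sketch of how a hypothesis could be captured. The field names and the example content are assumptions for illustration, not Speero’s actual template:

```python
from dataclasses import dataclass, field

@dataclass
class Hypothesis:
    """One A/B test hypothesis in the because / change / measure shape."""
    because: str    # the research insight motivating the test
    we_propose: str # the change we will test
    we_expect: str  # the behavioral outcome we predict
    metrics: list[str] = field(default_factory=list)  # decided BEFORE the test runs

example = Hypothesis(
    because="ResearchXL highlighted that users question the credibility of the tool",
    we_propose="add customer logos and third-party review scores to the homepage",
    we_expect="more visitors will proceed from the homepage to sign-up",
    metrics=["homepage to sign-up click-through rate", "trial sign-ups"],
)
print(example.because)
```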

In the interest of time, I’m just going to put on a one-minute timer, so let’s get started. What I would love for you to do is create your hypothesis. At this point we’re working vertically down the board to fill in those three Post-it notes: why are we running this test? What do we propose to do?

And how do we expect to measure the impact? You can keep access to this board as well, if you want to fill these in after the fact.

While people fill in their responses: we received a comment a while back. A viewer said that this Miro board is really awesome; would you be willing to share it later?

Yes, yes, absolutely. That is absolutely the plan. I’m not sure exactly how things are getting shared afterwards, but I will absolutely share the link to this with Vipul so that he can organize the distribution, along with the playbook I mentioned, which I’ve actually got open just here.

You can click on all of these to dig into the different research activities, and they all have their own playbooks. So yeah, we’ll make sure you have access to all these different resources, frameworks, Miro boards, and everything.

Sure, just send it over to me by email, and I’ll share it with all the registrants for ConvEx 2022.

Sure, I will do that. Awesome. So that actually brings me to the end of this mini workshop and presentation session. It’s great to see you all still interacting with the board, and I appreciate you taking part in the workshop as well as listening to me. I hope you found this useful, and that you can use some of these frameworks and tools as part of

your experimentation efforts. Please reach out to me if you have any questions or things you’d like to discuss, or if you need any help with any research you’re looking to conduct. So

that’s me done. Whatever questions you have, you can either send them to me or put them in the questions panel right now, or you can also find Emma Travis, or anybody from Speero for that matter, and ask them your questions; they can help you out with anything relating to experimentation.

Of course, it goes without saying, CXL has been working in the field of experimentation for, I think, over a decade now. So feel free to reach out to Emma, and I’m sure she’ll be able to help you with your answers. Thank you so much, Emma, once again, for bearing with me as I rushed things up a bit.

I know this is not the ideal way to run a workshop, but in the interest of time, we had to do it this way. Thank you so much once again, Emma, and thanks as well to everyone in the audience for interacting with Emma’s Miro board.

Speaker

Emma Travis


Director of Research, Speero
