Webinar

Two Frameworks You Can Use to Improve the Hypotheses of Your Tests

Duration - 45 minutes
Speakers
Haley Carpenter

Senior CX Strategist

Shanaz Khan

Brand Marketing

Key Takeaways

  • Use the PXL prioritization model: This model is less subjective than the ICE model and breaks the factors down into more specific metrics. It helps you prioritize insights based on their location on the page, noticeability, data backing, and ease of implementation (a scoring sketch follows this list).
  • Prioritize insights that are above the fold and noticeable within 5 seconds: These insights are more likely to be seen by users and therefore have a higher impact.
  • Back your insights with data: The more research methods that support an insight, the more confident you can be in its validity. This can include user testing, qualitative feedback, digital analytics, heat maps, or eye tracking.
  • Consider the effort of implementation: Insights that take less time to implement should be given higher priority. The PXL model uses a scale to score the effort, with less time-consuming tasks scoring higher.
  • Use the PXL model to sort and prioritize all your insights: This will help you know what to do first in each category, minimizing arguments and getting everyone on the same page. This model can be used across any industry or vertical.
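
For a concrete sense of how these factors combine into a score, here is a minimal sketch in Python. The point values follow the webinar (above the fold: 1 point; noticeable within 5 seconds: 2 points; one point per research method backing the insight; effort scored 0 to 3, with faster work scoring higher); the field names and helper are illustrative, not the official PXL template.

```python
# Minimal, illustrative PXL-style scorer. Point values follow the webinar;
# field names are hypothetical, not the official template's columns.
from dataclasses import dataclass, field

@dataclass
class Insight:
    name: str
    above_the_fold: bool
    noticeable_in_5s: bool
    supporting_research: set = field(default_factory=set)  # e.g. {"user_testing", "heat_maps"}
    implementation_hours: float = 0.0

def pxl_score(i: Insight) -> int:
    score = 1 if i.above_the_fold else 0      # impact: visible without scrolling
    score += 2 if i.noticeable_in_5s else 0   # impact: noticed within 5 seconds
    score += len(i.supporting_research)       # confidence: one point per method
    if i.implementation_hours < 4:            # effort: less work scores higher
        score += 3
    elif i.implementation_hours <= 8:
        score += 2
    elif i.implementation_hours <= 48:        # "under 2 days"
        score += 1
    return score

print(pxl_score(Insight("Fix broken form validation", True, True,
                        {"user_testing", "digital_analytics"}, 2)))  # -> 8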

Summary of the session

The webinar, hosted by Shanaz, Marketing Manager at VWO, featured Haley Carpenter, a CX strategist at Speero by CXL. Haley, who has extensive experience in user research and running experimentation programs, shared her insights on practical frameworks to improve test hypotheses. She drew from her experiences working with companies like Vitamix, NextEra Energy, and OLaplex.

The session aimed to help attendees understand how to break down complex problems and identify significant opportunities. The webinar concluded with a Q&A session, allowing attendees to engage directly with Haley and gain further insights.

Webinar Video

Webinar Deck

Top questions asked by the audience

  • Do you use the PXL and ResearchXL frameworks when managing the interplay of demand generation and CRO/CX?

    - by Devin O'Grady
    Yes. You can use this across, I would say, almost any application or prioritization. Like I said, it is customizable. So if there's something that doesn't apply to your scenario, your business case, whatever, it can definitely be changed. But I would still encourage you to use these frameworks. Absolutely.
  • Do you have an example of a checklist of A/B tests?

    - by Andre
    Not off the top of my head that I can share; that would all be client information, unfortunately. But I assure you, every time I have run the ResearchXL framework and cranked out a research presentation for a client, I've ended up with so many tests, varying from 20 to 30 to 50. It can just really go on and on. And especially if you're doing that continuous research, you'll have a continuous list of tests that you can pull from.
  • When ranking or scoring, how do we make sure that we are not biased towards an idea that we believe will work, but don't necessarily have data to back it?

    How do we prioritize those ideas and tests, and sort of score them? Yeah, that's a good question. So the PXL template that everyone should be getting access to after today's webinar should be as objective as possible. You shouldn't have to worry about bias when you go to fill it out, because there aren't any questions like, how do you feel about this example, or what do you think about this test? As I went through in the examples: is it above the fold? Can you notice it in 5 seconds? Are you adding or removing anything? All of those are really easy yes/no, objective questions, so you don't have to worry about bias there. And say you did not find it in any research, and it's just a heuristic idea that you came up with based on your expertise: it can still go in the framework. But in that confidence section, where it asks if you found it in any research, it will just get zeros across the board. Right? So it will score lower because it wasn't found in any research and was perhaps just heuristic. But the bias shouldn't be in there. You should be able to use the PXL template and not have to worry about it. And absolutely still put your heuristic ideas in there.
  • I would like to add politics as a column for context, but I'm not sure what score to give, or how to score, the politics column when prioritizing ideas.

    - by Dragos
    You're getting into some muddy waters there. You're trying to cheat the system; I know what you're doing. You know, I can't lie, we do put a political score column into some of our frameworks, but this is opening up the door for some subjectivity and some cheating of the system. So I get the need for it, because, while I'm not thrilled to admit that we have used a political score column, sometimes there's really just no way around it. I do understand that politics can get intense and messy, and sometimes you just have to put those in, but I would really advise you to keep a column like that out if you can. And if you have to put a political column in there, maybe just keep it so it has a low weight. So maybe just make the score 0 or 1. Don't crank it up to a 3 and give 3 extra points for that, because that's introducing more of that subjectivity. Then you're essentially saying, I can give all of my ideas way more points and jack them up to the top if I think there's a political need for it. So, hopefully, that answers your question. Like I said, it gets muddy. Try not to add one, but if you have to, give it a low weight, so I would do, like, a 1. And be very stingy with using it.
  • For working with new clients, where do you start? Is there a process for what you analyze first and second?

    - by Jake Young
    Yeah. Yeah. Excellent question. So if I get a new client for research, honestly, we just go at it as hard as we can, all at once. So they come in and we try and get as much data and setup at the outset as we can. So we try and get the survey set up. We try and get a poll set up. If we're doing any kind of interviews, we try and get those set up. Heat maps, everything in that model, we try and get as much set up as possible so that we can get that data collection done, because that's really what takes the longest. But like I said, that is a lot all at one time, and not everyone has resources for that. So you can absolutely attack that model one piece at a time or a couple of pieces at a time. I don't really think it matters. Where you start, I think, would be on a case-by-case basis. What makes sense? So pick what makes sense to you and what aligns with your priorities, or maybe you have some pressing issues where user testing might make sense to start first. Something like that. It really can be done either way, where you tackle it all at once or in pieces. And then, you know, after we get the data, we analyze it and do a readout, where you could do that readout all at once, or do the readout in pieces if you're attacking the research in pieces. Yeah, and then you just put that all into the PXL and run with it in your testing program. That's the TL;DR.

Transcription

Disclaimer: Please be aware that the content below is computer-generated, so kindly disregard any potential errors or shortcomings.

Shanaz from VWO: Hey, everyone. Welcome to another session of VWO webinars. I’m Shanaz, Marketing Manager at VWO, a full-funnel A/B testing, experimentation, conversion rate, and experience optimization platform. Today, we have with us Haley Carpenter, CX Strategist at Speero by CXL. Haley has a wide range of experience from user research to building and running experimentation programs.

She’s passionate about helping clients break down complex problems and find what the biggest opportunities are. Haley works with Vitamix, NextEra Energy, OLaplex, Procore, and Toast. If you could please turn on your camera, Haley, so that the audience can see you. Great. It’s great to have you here today, Haley, and I’m really excited for this session. Before we start the webinar, I’d like to thank everybody for joining us from all across the world. I’d also like to inform you all that we’ll be taking up questions at the end of the webinar, so feel free to shoot your questions in the chat box at any given point during the session. With that, Haley, the stage is all yours.

 

Haley Carpenter:

Perfect. Thank you, Shanaz. And I just realized my mic was off a second ago, but I think we’re good to go now. And thank you for hosting me today. I’m so excited and happy to be here, and I am very passionate about this topic.

So I’m ready to have fun with this. I hope everyone is ready as well, and happy Saint Patrick’s Day. I know I’m not being very festive and I’m not in my green, so I’m probably gonna get pinched once or twice today, but that’s okay. Saint Patrick’s Day aside, thanks for taking some time from your busy schedules.

Everyone always has a lot going on. So as Shanaz said, we are here for a webinar on practical frameworks to improve your test hypotheses. And just to start out, Shanaz has given me a great intro, thanks for that, so I hopefully have a little bit of authority to talk on this topic. Taking it back just a little bit, here’s a bit more about myself.

I started at Hanapin Marketing, now known as Brainlabs. They focused primarily on PPC, so I started out doing landing page optimization for those paid campaigns. And now, of course, I’m here at Speero as a senior CX strategist, where we do full-funnel website optimization with a focus on customer experience as well.

And I do wanna say I love love love connecting with people on LinkedIn. So if you’re on LinkedIn, please find me and send me a request to connect. I would love to hear from you. And especially if you have any feedback or follow-up questions, don’t hesitate to send me a message there. But that’s plenty about me.

We are not here to hear about me, so we will jump right into it. You see a GIF of an adorable child getting spaghetti thrown in his face, and you might wonder what on earth this has to do with testing. I will explain, but first I want to ask, how many of you… Yes?

 

Shanaz:

I’m so sorry to butt in. We can’t see your screen.

 

Haley:

Oh, no. Let’s see if we can get that to work. How about now? Yeah. Perfect.

So you should see now a GIF of an adorable child getting spaghetti thrown in his face. And before I explain how this relates to testing, I want to just throw out a scenario, and I want you to think about it for a moment. How many of you either have testing programs where this happens, or really any case where you’re in a meeting, you’re trying to come up with ideas, and the HiPPO, the highest paid person in the room, comes in, bulldozes everything you had on the list or the next idea, and says, no, this is what we’re working on next because I think it’s best because of x, y, and z, or I think this will be the most impactful? Or you and your team are simply coming up with ideas and subjectively choosing what to move forward on. Think about when and how often this happens where you work.

Wanna let it sink in just for a second. Okay. I ask this because I see this happen all the time. Time and time again. Many clients come in, and they are subjectively picking things to work on next, subjectively prioritizing.

There’s no method to the madness. There’s no framework, and I frequently see the boss coming in and just bulldozing everything. And you might be wondering how this relates to spaghetti. I promise it all ties together; I will explain. For those of you who have made pasta, I’m guessing some of you have probably learned the trick where you have your pot.

You just pick one noodle out at random, you fling it at the wall, and you hope that it sticks. And supposedly, if it does stick, it tells you that the pasta is ready, and if it falls off and doesn’t stick yet, it needs to cook longer. As a side note, I used to do this all the time as a kid. It’s super fun.

So if you’ve never tried this, I encourage you to go home and try it sometime when you make pasta. But it’s similar to testing. And if you’ve been in the industry for a long time, or even if you’re new, you might have heard the term spaghetti testing. This relates because I see all the time, before clients come to us, that they have their bucket of spaghetti. They have their bucket of ideas, tests, initiatives, whatever it might be, and like I said, there are no frameworks to prioritize or work on anything.

They are just picking an idea, a test idea, a project, whatever, at random, flinging it up at the wall and hoping that it wins or that it gets the results that you’re looking for. This is a terrible method for testing programs and really prioritization in general. I don’t want anyone doing spaghetti testing. We don’t recommend it. If you’re doing it, I hope that you’ve stopped after today.

Today, the spaghetti testing ends for good, please. And this is bad, you know, like I said, for a number of reasons. And one thing is you want to have data. You want to have a data-driven program. And if you don’t and if you’re picking these random noodles, your tests are gonna have a lesser chance of winning.

You might as well not be doing it. Tests not backed by research and data are just more likely to fail. Like I said, we see this over and over and over. So the frameworks today are gonna come to the rescue. And moving on, this is exactly what we’re gonna cover.

The 2 frameworks are our ResearchXL model and our prioritization framework. The research model will go through what research to conduct and where, and the PXL prioritization framework goes hand in hand with that. It’s kind of the peanut butter and jelly combination, so to speak, the mac and cheese, and the PXL prioritization framework will do just that: it will help you prioritize your research insights. Here you can see a visualization of our ResearchXL model. This is going to be what leads you to those big results, to get things going up and to the right.

So it’s going to lead to ideas, not out of your head, not out of your boss’s head, not out of the HiPPO’s head. Because you’re not gonna come up with good ideas, most likely, anyway, because you’re too close. You’re too invested. You’re too in the weeds all the time, and you’re not your target audience.

So this will get you those proper insights. And shockingly, a lot of companies don’t do any research at all, or very little, and that’s just not going to cut it anymore. Everyone needs to be doing research, and it needs to be continuous, happening all the time in some fashion, and having a framework makes that even easier and better. And I do see this a lot as well.

Clients come in and they only have one type of data, or they’re only doing quantitative research, and you need both. You need the numbers and the why. If you have one without the other, you’re gonna be missing out on a lot. You need that full picture. The continuous part is important because behavior and perceptions change over time, and COVID is a great example of this.

And so using COVID as an example, I’m sorry. I know everyone is probably a little tired of hearing about COVID, but it’s perfect for this. Say you did a poll as research pre-COVID. You’ve got some insights.

You ran a test based on those insights. You found a winning iteration and you implemented that all pre-COVID. You’re like, okay. I’m good now on this page or wherever you did this. COVID hit.

It changed behavior and perception. So people are now worried about contactless deliveries. How does that work? What precautions is your company taking to keep everyone safe? There are all kinds of questions and new things arising.

People were not worried about masks before. People didn’t think about social distancing or all sorts of things. So now post-COVID, that poll that you did is probably gonna have some different answers, or you could run a new iteration of that poll to get updated insights, and that winning test iteration that you tried and then implemented might not be a winning test iteration anymore, or you could probably level that up and optimize it to align with these new behaviors and perceptions. But if you don’t do research ongoing or at all, you’re not going to learn these things.

You’re not going to figure it out. You’re not gonna be on top of all these changes. So with that, look at this model which gives you what to do. It’s outlined right here. It’s an objective framework that everybody can get on board with, and you can keep track of what you’re doing.

Certainly, you could add to this model if there are other methodologies that you prefer, or that you do want to do, or maybe do every now and then. You could swap things in and out, perhaps. But, truly, this should be what you need; you shouldn’t really have to change anything around. It covers a lot, as you can see, and they all work together.

So if we look at the outside, those are all the methodologies. Analytics, audio, user testing, polls, chat logs, etc. go around, and they all work together to get those insights in the middle, in that red circle, that we’re all looking for, that will get things going up and to the right. And another note about why you don’t want just one research method: you could do one, say you just did user testing. Great.

I’ll give you that it’s better than doing nothing at all, but the more the better, really. Because in an ideal world, say you found an insight in user testing; that’s one pillar. Then you could find it in a survey. Then you could also find it in a heat map, perhaps in another flavor, where you’re triangulating and you find something about it in numerous sources. Then your signal’s even stronger. Then you can be more confident in that insight, turn it into a test or whatever, and be more confident that you’re going to get a bigger impact. You want to triangulate your insights as much as you can.

And this model is not only going to tell you what to do and make sure that you cover your bases, it’s also going to make sure that you cover all of your bases as far as different optimization areas go. So it’s going to cover behavior, friction, fears, uncertainties, doubts, anxiety, best practices, motivation, clarity, and relevancy, all huge areas that need to be optimized and looked at, and this model will ensure all of that is covered. And remember how I said that you want quantitative and qualitative data, so you have the numbers and the why?

This model ensures that you get a good mix of both; you’ll cover both throughout this model. So that’s that. And I’m not going to go into the nitty-gritty of how to do user testing, or how to set up and launch a poll, because we’d be here for a much longer amount of time; those could all be their own webinars, a series of webinars. So if you don’t know how to do all of these, or any of them for that matter, don’t worry.

There are so many resources available, thankfully, in today’s world that can go into the nitty-gritty of how to do these and teach you what to do to make sure that you’re doing them the correct way. Also shameless plug here. We have CXL Institute. If you haven’t heard of it, you haven’t checked it out, or even if you do know of it, and you still have some questions about how to do some of these research methodologies, go check out CXL Institute. There are a lot of great resources there.

I even participated in a conversion research sprint over the summer that focused just on this model and how to implement it correctly. So go check that out, and no worries, again, like I said, if you don’t know how to do all of these or any of these; just learn piece by piece, or fill in the gaps in what you don’t know right now. And I’m willing to say that any effort toward research, toward learning, is better than just choosing to not do anything at all. And then you might be wondering about the output of this. Okay.

So say I do one research method or all of these. What is the output? And I will say that the primary output to worry about is the list of insights that you will get from doing this. I will say that generally, we deliver the insights in a presentation deck of some kind, but that’s not really the primary concern here. I don’t really care how you deliver them or put them together.

Just know that you’ll get a list of insights from this that you’ll be able to move on. And so you have the research model now. Say you do these, you get your list of insights, and you might be like, okay. Great. But how am I going to decide what to do first?

What to move on? Is everything going to be a test? I don’t know. Don’t worry. I have that covered.

So let’s go to the next slide here. You have your research, you have your insights, and now we’re going to take that list and classify each item into 1 of 5 buckets. Of course, you’ll have your tests. But not everything is a test. Right?

Not everything you’re gonna set up, in VWO, perhaps, or Optimizely. Some things you might need to investigate further. For example, say you did an analytics analysis and you found some weird bounce rates or some weird numbers that look broken, perhaps, and you need to investigate that further. That would get a set of binoculars. Those little icons at the top.

Some things are instrumentation issues. Say you need to add a tag for Hotjar in GTM or a script of some kind. That’s just an instrumentation thing. Some things you still want to test.

You know, you found an opportunity, a target area, a target page that you need to test, but you haven’t quite drilled down into that nitty gritty solution yet. You haven’t drilled down into what the exact test will look like, you need to think about it some more. So that would be what we call a hypothesized item. A good example of this would be, you know, you need to redo a sign-up and a billing flow. That’s a big initiative.

There’s gonna be a lot of details there. It could look a number of different ways. And that’s going to require some more discussion and some more thought. You need to hypothesize about that more, hence, it could get the hypothesized label. And then eventually you could break that down into a number of tests, or when you drill down into that specific test idea, then perhaps change it over to a test classification.

The last bucket is what we call a JDI, or “just do it,” where it’s simply a no-brainer. It’s easy to implement. You should just do it right away. An example of this is broken form validation. That’s a no-brainer. You just need to fix that if it doesn’t work.

You need to just do it. And I will say that when we do research, get our research insights, and break them down, we mostly get a lot of tests, a lot of JDIs, some hypothesized items, a couple of investigate items, and a couple of instrumentation items. But the breakdown looks a little different each time that you do it.
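
As a rough illustration of this classification step, the five buckets could be encoded as a simple enum; the names below are hypothetical, chosen only to mirror the labels described above:

```python
# Illustrative sketch of the five-bucket classification (hypothetical names).
from enum import Enum

class Bucket(Enum):
    TEST = "test"                        # a concrete test idea for VWO/Optimizely
    INVESTIGATE = "investigate"          # needs more digging (the binoculars icon)
    INSTRUMENTATION = "instrumentation"  # tracking work, e.g. a Hotjar tag in GTM
    HYPOTHESIZE = "hypothesize"          # target area, solution not drilled down yet
    JDI = "just_do_it"                   # no-brainer fix, e.g. broken form validation

insights = [
    ("Broken form validation on signup", Bucket.JDI),
    ("Weird bounce rate on pricing page", Bucket.INVESTIGATE),
    ("Redo the sign-up and billing flow", Bucket.HYPOTHESIZE),
]

for description, bucket in insights:
    print(bucket.value, "-", description)
```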

So now you have your research model. You have your list of insights. You’ve classified those insights into one of these 5 buckets. And now you might be asking, still, that’s great, but how do I prioritize them? Don’t worry. Our second framework will cover that.

But before I jump into it: we recently did our experimentation program benchmark report and released that. One of the stats that came out of it was that only 18% of respondents strongly agreed that hypotheses are prioritized objectively using metrics of impact potential and ease of implementation. This is bananas. So crazy. However, I’m not surprised, because it does align with my personal experience. You would be surprised at how many clients, at the outset of working with us, don’t have a framework. They don’t have a process for objectively prioritizing tests, or just any internal project for that matter, really any of the work. It’s just that spaghetti kind of process where you pick something based on subjective opinions, fling it at the wall, and hope that it gets the results that you’re looking for. And they certainly don’t have metrics that they’re scoring things on, like impact potential and ease of implementation. Goodness, no. So let’s change this. Please, let’s change this number. Let’s up the percentage of companies that do these things. And this PXL framework will help you do that.

So there is no excuse anymore for not doing this. And you’re getting a template from the webinar. So there’s definitely no excuse not to start doing this and using this. Because it will be in Google Sheets, you can make a copy and start doing this immediately, which is what I promised. And that was my intent.

I want people to be able to walk away from this and start heading in the right direction right away. I’m all about things you can move on right away and all about usable action items. So with that said, here’s the PXL framework, and this will help you objectively (keyword: as objectively as possible) rank test ideas. And you don’t have to put just the tests in here. You can put all of the insights from all of the buckets through this framework.

And I will say the template is in Google Sheets, and you can certainly do this in there, but a lot of times we actually put the PXL framework into Airtable. We love Airtable as well. It’s really tool-agnostic; I don’t care where you put this.

Just put it somewhere and start using it. But I will say that I found that Google Sheets and Airtable are my favorites. But anyway, so you have the screenshot. If you look on the far left, that’s where you’ll start putting all of your insights. And then you’re not gonna see in here a column for the different issue classifications.

But you can certainly add one, and I would recommend adding a column for that with a label, so you can have your tests, your JDIs, your investigation items, and so forth. Because then you can group by those and sort by them. And then in the middle here, you have all of the different metrics that we’re gonna use for the scoring. And on the far right, you have the result column, which I’ll talk about here in a bit; it’s that yellow column. And you see at the top here: impact, confidence, and effort. This is just an expanded ICE model.

And I’m gonna talk about the ICE model for just a second. You might be like, but why not just use that? It’s really subjective. If you scored something, I scored something, and another colleague scored something using that ICE model, more often than not, I’m willing to bet that we would get different scores, and it would be really difficult to get a true signal on prioritization. Because if I’m sitting here thinking about an insight, okay, how impactful is this gonna be?

Well, what in the heck does impact mean? Confidence. Where does my confidence come from? What does that mean? How do I judge that effort?

How much effort? What’s a lot of effort? What’s a little effort? It’s very subjective. What we do in the PXL is break those out further into the metrics that you see here.

So I’m gonna start at the left. Say you start there with above the fold, and you just ask: is this insight on the left above the fold? Yes or no? Very straightforward. Pretty hard to argue about that; it’s pretty clear. And if it is above the fold, it’s just a yes or no: yes, it gets a one; no, it gets a 0. The next column is noticeable within 5 seconds.

Pretty hard to debate. Yes or no question. So if yes, give it two points. If no, give it zero points. And then you just keep moving down those questions.

And if you’re asking why things like “is it above the fold?” and “is it noticeable within 5 seconds?” are important: well, if it’s above the fold and you can see it within 5 seconds (and the same goes for any of the other questions that we have listed here as the metrics), then, for those 2 particular questions at least, more eyeballs are gonna see it.

So it’s gonna be more impactful. You’re probably not gonna have to scroll if it’s above the fold, so it’s easier to see. Probably gonna be more impactful. You get the idea. If we go over to the confidence section, this is really asking, did you find this in any kind of research? Is this backed by data? 

So in each column, we have user testing, qualitative feedback, digital analytics, heat maps, or eye tracking. You can really break this out as much as you want if you have other research methods that you wanna put in here and consider. And like I said before, the more research methods that you find an insight in, the more you’re able to triangulate insights, and the more confident you can be. Right? That makes sense. And then the last column is effort, so ease of implementation.

And here we have a scale, just in the template, that we found to work pretty well. So does it take less than 4 hours? Give it three points. Up to 8 hours, two points; under 2 days, one point; more than 2 days, 0 points. So the harder something is, the fewer points it gets.

You score all of your insights. You have everything in here. You have the buckets. And then you’ll get those final result scores on the far right in that yellow column. You can call it whatever you want: result, result score, PXL score. It doesn’t matter, but that’s how you’re going to sort and prioritize. And what we do is you can do that for every bucket: group your tests and sort from high to low there; group your JDIs, sort from high to low there, and know what you need to just do first. You can group your items to investigate, sort them from high to low, and so forth. So you can do this for everything.
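
To make that sort-and-prioritize step concrete, here is a minimal, hypothetical sketch; the bucket labels, insight names, and scores are invented for illustration. In practice this is simply a group-by and a descending sort on the score column in Google Sheets or Airtable.

```python
# Minimal sketch of sorting each bucket by PXL result score (made-up data).
from collections import defaultdict

scored = [
    ("jdi", "Fix broken form validation", 11),
    ("test", "Shorten the signup form", 9),
    ("test", "Rewrite the hero headline", 7),
    ("investigate", "Weird bounce rate on pricing page", 5),
]

# Group by bucket, then sort each bucket from highest score to lowest.
by_bucket = defaultdict(list)
for bucket, name, score in scored:
    by_bucket[bucket].append((score, name))

for bucket, items in by_bucket.items():
    items.sort(reverse=True)  # highest PXL score first within each bucket
    print(bucket, items)
```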

Like I said, I just love this framework because it’s so useful. You can use it across any industry, any vertical; we use it for all of our clients, and I’ve seen it work over and over again successfully. And I will say, those conversations where someone tries to come in and bulldoze ideas, or where you’re just subjectively picking those noodles and having a lot of conversation about them: this will minimize that. It will minimize arguments and heated discussions and really get everyone on the same page.

It’s a framework that everyone can buy into. And it is completely customizable. So like I said, you could add different research methods. You can tweak the questions, you can tweak the scoring. Say you feel, for your business, it makes sense for something to be more heavily weighted.

Just make the scale a little bit larger: say it’s 0 to 1, change it to 0 to 3 or 0 to 2. Or say you feel something needs to be weighted a little less because it’s not as important to your particular business. That’s okay. If it’s a score from 0 to 2, change it to 0 to 1, or 0 to a half. Completely customizable. If there’s another impact question that you think should be considered, certainly put that in there.
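
One way to keep that kind of reweighting tidy, sketched here with hypothetical question names, is to hold each question's maximum points in a single config, so changing a weight is a one-line edit rather than a formula rewrite:

```python
# Hypothetical per-question weights for a customized PXL sheet.
WEIGHTS = {
    "above_the_fold": 1,
    "noticeable_in_5s": 2,
    "adds_or_removes_element": 2,  # halve or double these as fits your business
}

# answers maps each yes/no question to 1 (yes) or 0 (no).
def weighted_score(answers: dict) -> int:
    return sum(WEIGHTS[q] * answers.get(q, 0) for q in WEIGHTS)

print(weighted_score({"above_the_fold": 1, "noticeable_in_5s": 1}))  # -> 3
```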

You can change that effort scale. Totally customizable. And I do want to make a note that generally, once people get those result scores, they’re like, okay, great, so we just work from top to bottom, right?

We just start with the highest and go to the lowest. It’s not necessarily wrong. You can absolutely do that, but I do wanna call out one thing that the framework is not great at or doesn’t necessarily account for. And that is generally, at least when I’m strategizing, I like to have a good mix of easy, fast, quick, maybe smaller initiatives running, that you can just crank out one after the other and have running at all times. Those will be at the top of the list, toward the top of the list.

But keeping in mind that effort was calculated into these scores, that means that the harder ones are gonna fall toward the bottom, and I like to have some of those going as well. You need to have a balance of both. Generally, those bigger, harder initiatives that fall toward the bottom are big effort, big lift, and probably higher risk, but with bigger potential for reward. Not that the easy, smaller tests don’t have big potential for reward either. But I do generally find that the bigger, bolder test ideas are harder and do fall toward the bottom.

So I would recommend that you start using this framework because I’m hopeful that everyone will go back and start using this. Just keep in mind that you probably wanna pull from the top and also try and fold in some from the bottom as well. And one last thing. When you score your test ideas, I encourage you to check how everything fell after you scored them. It’s usually on point.

But just double-check. Maybe you have some internal projects that correlate and need to be coordinated with some stuff that fell really low on the list. I’m not saying cheat the system and subjectively move things up to the top as you see fit, because that defeats the whole point. But if you do see something that you think ranked incorrectly because of a real business reason, an objective reason (you know, maybe something was also brought up by your customers), you can try and move that up a bit or tweak the scoring.

But like I said, that should not be happening on a regular basis; I do encourage you to check the scoring once you have gone through this exercise. So now you have the ResearchXL research framework. You have your list of insights, which you have classified into different buckets and run through the PXL framework, and you have your objectively prioritized list of tests, JDIs, instrumentation items, investigation items, and hypothesized items. You are set to go. No more spaghetti testing.

No more excuses for that. And I do want to say that if you are a VWO user, or you’re considering VWO, their Plan feature has a couple of things that you can use in conjunction with what I’ve talked about here today. They have an observations feature where you can put those research insights and keep track of them. I still encourage you to use the PXL framework in Google Sheets or Airtable or somewhere, but the benefit of having them in VWO is that they’re right there in your testing platform, for you to grab and use, with that context living with the test, so that for anyone who jumps into the testing platform, bam, it’s right there.

They don’t have to ask you; they can just see it in the platform. There’s also a place to keep your hypotheses. So you have your observations, which are your research insights, and your hypotheses for your test classification items, that you can keep in here. Same thing: it’s great, it’s right in your testing platform, so for anyone who jumps in, it lives right there. And they do have ICE metrics in here. It’s not our expanded version, but you could certainly coordinate this with the PXL and maybe come up with some kind of aggregate score for the I, the C, and the E, and put that in here, so it lives in VWO as well.

And then if you’re new to the industry and you’re like, I still don’t know what to test, even from the research, or you’re just looking for inspiration, come to the ideas feature of VWO Plan. You should most definitely get ideas and tests from the research, but you can certainly use this in coordination or in conjunction with that, and, like I said, just to get some inspiration as well. So that’s it. You should be all set, and I will open up the floor now for some Q&A.

 

Shanaz:

Hey, Haley. That was a really insightful presentation. I’m sure all of our attendees are definitely going to implement one of those templates and, you know, even give VWO Plan a try, take it for a spin and see how that works out for their use case. So, yeah, thank you very much for this presentation. It was really, really insightful.

I definitely enjoyed it, and I hope everybody else did too. So, we have a lot of questions, but let me just take a couple of them. The first question is from Devin O’Grady. I hope I’m pronouncing your name correctly.

He says, thanks for your time and the presentation. Quick question: do you use the PXL and ResearchXL frameworks when managing the interplay of demand generation and CRO/CX?

 

Haley:

Yes. You can use this across, I would say, almost any application or prioritization. Like I said, it is customizable. So if there’s something that doesn’t apply to your scenario, your business case, whatever, it can definitely be changed. But I would still encourage you to use these frameworks. Absolutely.

 

Shanaz:

I hope that answered your question, Devin. Thank you, Haley, for answering that. The next question is from Andre. Hello again, Andre, nice to see you today.

So he says, hello, Haley. Nice to meet you. The presentation is amazing. Do you have an example of a checklist of A/B tests?

 

Haley:

Not off the top of my head that I can share; that would all be client information, unfortunately. But I assure you, every time I have run the ResearchXL framework and cranked out a research presentation for a client, I’ve ended up with so many tests, varying from 20 to 30 to 50. I mean, it can just really go on and on. And especially if you’re doing that continuous research, you’ll just have a continuous list of tests that you can pull from.

 

Shanaz:

Alright. So, Andre, we don’t have a checklist of sorts, but if you go to our website, vwo.com, you should see a section of guides in the footer. There’s a guide on A/B testing, and you’ll find a calendar there. It’s less of a checklist and more of how you can scale from doing one test, to 2 tests, to multiple tests, and how to progress from there.

So maybe you could give that a look and see if that helps. If not, please feel free to shoot a message to either Haley or me, and we’ll see how we can help you.

 

Haley:

Right? Yeah. Absolutely.

 

Shanaz:

Okay. The next question is, when ranking or scoring, how do we make sure that we are not biased towards an idea that we believe will work, but don’t necessarily have data to back it? I think the question is how do we prioritize, objectively?

 

Haley:

How do we prioritize those ideas and tests, and sort of score them? Yeah. That’s a good question. So the PXL template that everyone should be getting access to after today’s webinar should be as objective as possible.

You shouldn’t have to worry about bias when you go to fill that out. Because there aren’t any questions, like, how do you feel about this example, or what do you think about this test? As I went through the examples, is it above the fold? Can you notice it in 5 seconds? Are you adding or removing anything?

All of those are really easy yes/no, objective questions, so you don’t have to worry about the bias there. And then say you did not find it in any research, and it’s just a heuristic idea that you came up with based on your expertise. It can still go in the framework. But in that confidence section, where it asks if you found it in any research, it will just get zeros across the board. Right?

So it will score lower because it wasn’t found in any research, and it was perhaps just heuristic. But the bias shouldn’t be in there. You should be able to use the PXL template and not have to worry about it. And absolutely still put your heuristic ideas in there.

 

Shanaz:

Yeah. Thanks for answering that, Haley. Another question is by Dragos. He says that he would like to add politics as a column for context, but he’s not sure what score to give, or how to score, the politics column when prioritizing ideas.

Haley:

You’re getting into some muddy waters there. You’re trying to cheat the system. I know what you’re doing. You know, I can’t lie, we do put a political score column into some of our frameworks, but this is opening up the door for some subjectivity and some cheating of the system.

So I get the need for it, because, while I’m not thrilled to admit that we have used a political score column, sometimes there’s really just no way around it. I do understand that politics can get intense and messy, and sometimes you just have to put those in, but I would really advise you to keep a column like that out if you can. And if you have to put a political column in there, maybe just keep it so it has a low weight. So maybe just make the score 0 or 1.

Don’t crank it up to a 3 and give 3 extra points for that, because that’s introducing more of that subjectivity. Then you’re essentially saying, I can give all of my ideas way more points and jack them up to the top if I think there’s a political need for it. So, hopefully, that answers your question. Like I said, it gets muddy. Try not to add one, but if you have to, give it a low weight, so I would do, like, a 1. And be very stingy with using it.

 

Shanaz:

Yeah. Fair enough. I think that should answer the question. The next question is from Jake Young. Jake is asking: for working with new clients, where do you start? Is there a process for what you analyze first and second? Basically, I guess, what Jake wants to know is how you go about working with new clients, where you start, and how you take it from there.

 

Haley:

Yeah. Yeah. Excellent question. So if I get a new client for research, honestly, we just go at it as hard as we can, all at once. So they come in and we try and get as much data and setup at the outset as we can.

So we try and get the survey set up. We try and get a poll set up. If we’re doing any kind of interviews, we try and get those set up. Heat maps, everything in that model, we try and get as much set up as possible so that we can get that data collection done, because that’s really what takes the longest. But like I said, that is a lot all at one time, and not everyone has resources for that.

So you can absolutely attack that model one piece at a time, or a couple of pieces at a time. I don’t really think it matters. Where you start, I think, would be on a case-by-case basis. What makes sense? So pick what makes sense to you and what aligns with your priorities, or maybe you have some pressing issues where user testing might make sense to start first. Something like that. It really can be done either way, where you tackle it all at once or in pieces. And then, you know, after we get the data, we analyze it and do a readout, where you could do that readout all at once, or do the readout in pieces if you’re attacking the research in pieces. Yeah, and then you just put that all into the PXL and run with it in your testing program. That’s the TL;DR.

 

Shanaz:

Yeah, I agree with you, and I think data is everything. Start with data, and the data will tell you what step to take next, both quantitative and qualitative.

 

Haley:

Yeah. Exactly.

 

Shanaz:

Thanks for answering that. I hope that answered your question, Jake. So we’ll take the last question for you. This one is more of a personal one: any podcasts or materials that you follow to keep yourself continuously updated?

 

Haley:

Oh, that’s a good question. Of course, the CXL blog. I honestly like Feedly, and I just have a ton of sources in my Feedly account that get aggregated, which I check so I can see all of the posts from a particular day or particular week pretty quickly. Really, if you just search for marketing resources in Feedly (I just have the free version), a ton will come up, and you can just start following a bunch.

I think those are the big ones. And then I do love books also, and a book that isn’t super specific to the topics I talked about today, but that I love and tell everyone about, is Linchpin. If you haven’t read Linchpin, shout out to that book. I love it. Go check it out. It’s a great one. But, yeah, Feedly. Check it out.

 

Shanaz:

Yeah. I’ll add those to my to-do, to-read, and to-watch lists as well, and will try them whenever I have a little more space. So, yeah, thank you so much, Haley, for coming and doing this webinar with us. I am pretty sure everybody who attended is going to take back a lot of actionable insights from here, and they’re going to use the templates that you provided.

And, yeah, I think this is going to be very helpful, especially for those who were struggling to, you know, figure out what to test, what ideas to test, and what ideas not to test. So thank you so much, Haley. It was a pleasure having you here, and it was a very, very insightful presentation. And thank you, everyone, for joining us, being patient, and staying through the entire webinar.

I know webinars can get a little tiring now, since COVID has been around for a very long time, but there’s no better way to learn from each other than to connect across the world and exchange ideas. So, yeah, we’ll be sharing the recording and the presentation with everybody who attended, as well as those who couldn’t attend. So in the next 24 to 48 hours, you will see an email from me in your inboxes. And, yeah, thank you so much, Haley. It was a great, great pleasure having you here today.

 

Haley:

Yes. Likewise. Thank you, and thank you, everyone. Have a good one. Happy Saint Patrick’s Day.
