Webinar

Starting Experimentation and Scaling to Personalization

Speaker

Sarah Fruy

VP of Marketing

Key Takeaways

  • Embrace a culture of experimentation across your entire company, even if initial ideas don't work as expected, such as the slider example.
  • Personalization can be effective, but it's important to find the right balance to avoid coming off as creepy. Using company names instead of personal names was found to be more acceptable.
  • Use data to back up your strategies and win over skeptics within your company. For instance, the personalization experiment led to improvements in demo and free trial CTAs.
  • Start small with new strategies, prove their effectiveness, and then expand them throughout the site.
  • Tie the results of your experiments back to your company goals and financial outcomes. Attribution is crucial to understanding the return on investment from your experiments.

Summary of the session

The webinar, led by Sarah Fruy, former VP of Marketing at Linqia, focuses on the importance of adaptable and data-driven marketing strategies. Sarah shares her experiences in changing the North Star Metric based on the evolving goals of the business, emphasizing the need to understand the customer journey and what drives results. She also discusses the role of experimentation in validating assumptions and formulating hypotheses.

The host facilitated an engaging Q&A session, addressing questions about eliminating assumptions in hypothesis formulation. Attendees are assured that the recording and presentation will be shared post-webinar.

Webinar Video

Top questions asked by the audience

  • In your opinion, does testing increase CPA for paid media?

    - by Maria
    Is it increased CPA for paid media? I would argue that it could reduce your cost per acquisition because you're going to reduce wasted media spend. I've managed paid media programs in the past, and I required of my team: if you're going to put a new campaign out there, we need to be testing something. We need to be learning from it, whether it's the right creative, the right messaging, or whatever. Testing can also be a litmus test because flat results are also interesting. If you're trying to test something new and you get a flat result, that means you're not rocking the boat. So not having a loser is also sometimes a winner, depending on what you're trying to do. But in my experience, testing has driven down my cost per acquisition for demand gen.
  • Have you ever used experimentation results to prove your boss wrong? How do you structure the results so you can take them to senior managers and effectively make your case?

    - by Peter
    So I think that's really about making sure that you structure your experiment in the right way. Like I said, I had an executive who was really passionate about a certain data point when it came to our marketing strategy and never wanted to rock the boat on it. If you have a hypothesis that would challenge that, frame it up and document it. I'll give you a real example so that we can speak to this. At my company, we used to do a live demo, and it was just ingrained in the culture that we had to have the live demo because people needed to ask questions. And if they couldn't ask questions, they weren't gonna move forward in the sales process, because it was a really complicated SaaS product at the time. And I was like, well, I don't always want to talk to somebody when I'm in the buying cycle. I wanna challenge this, right? So what happens if I do a recorded version of the demo? And so we did an A/B test, and we had a live demo form and a recorded demo form, and you could pick which one you wanted to do. The recorded demo blew away the submissions for our live demo by something like 60%. You don't always get results that outsized, but it was a really big win for the organization. And so then it was like, okay, the demo that we recorded was an hour. What if we did it in 15 minutes? What if we did it in 8 minutes? We started to iterate on the results of that success, and then I was able to go back to the team: hey, we don't need to do a live demo every single week. We can do these recordings and start to optimize them, and that led to a whole resource center of on-demand learnings.
And so this one point, where someone in the organization, or a group of people, was saying, “Hey, this is the best way to do it. We don't wanna change it,” and I challenged that, actually unlocked a whole new program of training. So I think it's really about being able to run the experiment and prove whether the person is right. There's always that chance, and then you can at least validate their opinion. But if they're wrong, having an experiment, being able to document it, package it up, and bring it back to management means you can start to change behaviors around something that may be out of date or incorrect for your business.
  • Did you gate the demo? What was the follow-up?

    - by Jessica Miller
    Yes. We were gating the demo because, in terms of our lead gen program, the demo was one of our highest-performing lead gen assets. That being said, we had ungated versions that we would send out as well, so that was another thing that we were testing. I think nowadays that's really important, right? Gating or ungating assets, whether it's an ebook, a white paper, or a demo. But at our company, even though we were seeing a lot of these results, when we ungated it, it was only for people that we knew, because we didn't wanna jeopardize the massive amount of leads that we were getting; it was too important to the business. So that's sometimes where you have to weigh things outside of just the experimentation results: if we lose these people, we felt it was too risky to ungate it at that point in time.
  • How long do we need to test in order to determine whether it was a failure or not?

    - by Diana Gonzalez
    Statsig, meaning statistical significance. Sometimes I would cut a test early because it just looked like it was tanking. So volatility is something you need to pay attention to as you're looking at the experiment. Is the test leveling out? Early on, maybe before you get to statsig, you can see that the test is flattening out and you're seeing more even results. If the volatility is still bouncing around, you might wanna wait. But generally, this will be a part of your experimentation program guidelines: what's your threshold for statistical significance? It doesn't need to be 100%. It might be 80%. You're gonna have to weigh that risk with your team on what you feel comfortable with. At one point, we wanted to increase the velocity of our tests, and we lowered the threshold for statistical significance for my team as a way to do that. So there are different levers that you can pull, but again, having consensus with your team on what you feel comfortable with when it comes to making decisions is important there.
  • Can you give me an example of a small test you have run versus a bigger, more involved experiment?

    - by Travis
    Sure. So a smaller test would be, like, on the homepage, that Get Started button; we changed the copy on that many, many times. Should it be a demo CTA? Should it be a free trial CTA? Should it be a contact us CTA? That's really simple, right? We're just changing the copy. I can go in there and do that myself. I don't need a big team to mock something up and come up with a whole workflow plan. So that, to me, is a low-effort, easy test to run. Something more complicated would be when we would look at our forms. Forms are really important for conversion marketing. When we're changing the language and the layout of that page, I would need a designer to come up with a new concept. Sometimes it would be changing the design while the form stays the same; sometimes we would be changing the form fields. As you get into all those buttons and different things like that, it can be a lot more complicated from a design perspective, because you need to mock that up and work within all these different labels and design guidelines. So that would be something more complicated. It's really just looking at the effort: what resources do you need? Do you need to pull in a product marketer, a copywriter, or a designer? How much development work is this gonna require? Changing the copy is really easy, while changing your forms is gonna require a developer to actually build out that new form so that you can test it, and that requires a lot more effort. So those would be two examples of a small test versus a bigger test.
  • What is the typical overarching North Star goal of experimentation in marketing?

    - by Pascal
    I wouldn't say there's one North Star Metric that works for every organization; it really needs to be specific to your business. At Pantheon, our North Star Metric for the experimentation program changed over time based on the goals of the business. The last one that we had before I left was that we wanted to increase the number of hand-raisers on our website. We defined hand-raisers as people who engaged with us on chat, contacted us through the Contact Us form, or called our phone line directly. We wanted to increase the volume of people raising their hands because, when we looked at the data, we saw that people who reached out to us directly through these contact formats, raising their hand like, “Hey, I wanna speak with you,” were more likely to convert to paid customers. The more we could get people to engage with us directly in those formats, the more likely we were to win their business. So our North Star Metric became increasing the volume of interactions with our hand-raisers. At other times in the business, like when I first joined, our demo form was one of those things where it was the most important transaction on the whole website, and everything had to map to getting more demo fills. Over time, we realized that wasn't always the most important path. When we looked at the data, we saw a lot of people were coming to us through our pricing page, and that was another really important metric. So that North Star Metric changed over time. It's really about taking a deep dive and mapping out your customer journey on your website. What is actually driving the results that you're trying to achieve? Is it sales? Is it talking to a salesperson? Is it a click to cart?
Or maybe you want people to register for your webinars because those are really important to your business. You just really need to figure out, by looking at your customer journey, what is going to achieve the best results, and then anchor on that as your North Star Metric. But don't have too many; it should be one metric, because you need to be able to deprioritize: “Well, this didn't help me get more hand-raisers. It's a great experiment; I'm gonna put it in the backlog. But right now we're really trying to get people to reach out to us through our contact channels, and this is not in service of that.” And so that's a way for you to prioritize work that's going to serve your North Star Metric.
  • If you eliminate assumptions 100%, how else can a hypothesis be formulated?

    - by Pascal
    So you're trying to prove out a concept that you feel is true. The way that I would go about that is, if you have things like surveys or other data on your website, you can leverage experimentation to prove something to be true even when you don't have that actual result yet. To give an example: that second section on our homepage was an area where we talked to a bunch of people and looked at our scroll rates, and we realized people weren't moving past the first part of our site. When we did the user interviews, the thing that came up most commonly was that they thought they had already hit the bottom of the website, and that's why they weren't scrolling down the page. So we got qualitative insight, and then we started to change our layout to address that and push people down the page, so that we could turn that data point on its head and change the experience to drive the results we wanted. I hope that's answering the question. But I would say pulling in and looking at data, forming your hypothesis around other pieces of information, and proving it out on your website is a way to make hypotheses without basing them on assumptions. So maybe you have a demand gen campaign and one title, or one call to action, is working better. You bring that over to the website. It might behave differently because it's a different environment, right? That CTA on your banner ad might work really well out in the wild where you have all your campaigns running; you bring it in-house on your website, and it won't work. And so then we're getting back to this assumption thing: I had a data point, this content works really well over here, but it's not working well over there.
And that's why we experiment, right? To either validate things that are working somewhere else or understand that different environments yield different results. And audiences adapt as things change over time. That's the other thing: your experiments will decay. You're gonna have, maybe, an outlier campaign that does well, but you can't say that it's a winner forever for your business. For example, we changed one of our buttons to pink, and we saw the engagement rate go up really, really hard because it was just so shocking to see a pink button on the website. Eventually, people are gonna get used to seeing that pink button, so maybe you need to change it to blue next time. That's one of those things where you need to constantly be testing, because results will decay over time, and you need to keep things fresh in order to keep people engaged and continue to improve results for your business.
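The statistical-significance check discussed in the answer on test duration above can be sketched numerically. The following is an illustrative two-proportion z-test in Python; it is not how VWO or any particular platform computes significance, and the conversion numbers are made up for the example:

```python
from math import sqrt, erf

def z_test(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-proportion z-test for an A/B experiment.

    Returns the z-score and the two-sided p-value."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    # Pooled conversion rate under the null hypothesis (no real difference)
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    # Normal CDF via erf, then a two-sided p-value
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical numbers: 200/4000 control conversions vs. 260/4000 for the variant
z, p = z_test(200, 4000, 260, 4000)
# An 80% confidence threshold, as mentioned in the answer, means calling a
# winner at p < 0.20; stricter programs wait for p < 0.05 (95% confidence).
significant_at_80 = p < 0.20
```

Lowering the threshold from 95% to 80% confidence, as described in the answer about increasing test velocity, trades certainty for speed: more tests get called earlier, and a larger share of those calls will be wrong.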

Transcription

Disclaimer- Please be aware that the content below is computer-generated, so kindly disregard any potential errors or shortcomings.

Ajit from VWO: Let me talk about today’s webinar. We have Sarah Fruy, who is VP of Marketing at Linqia. Sarah has tremendous marketing experience of 20 years. She’s currently working as VP of Marketing at Linqia, which is an influencer marketing company that delivers guaranteed influencer ROI.

Prior to this stint, she rose up the ranks at Pantheon by starting an experimentation practice that did not previously exist there. She then leveled up the game by introducing personalization. I would have you all know that this is the first personalization webinar that VWO is doing, and we are honored that Sarah has come on board to do the session for us.

So without further ado, let me introduce Sarah to you. Hey, Sarah.

 

Sarah Fruy:

Hi, everyone. Wonderful to be here. As we kind of get our bearings together, it would be great to know what roles you all have in your company. If you could put up the poll, that would be amazing. 

I would just love to know, like, are we talking to marketers or developers? Product people out there, maybe. This will help me kinda contextualize some of the talking points as we get into, the discussion. 

So we’d just love to know, yeah, what function you have if you wanna, like, make any comments in the chat about where you’re dialing in from. I’m out in Oakland, California. Normally, it’s sunny in California, but early in the morning, we have a lot of fog, especially in the summer.

We call our fog Carl out here. It has a name. So, yeah,

 

A:

I see that 36% of people have voted, so I think the remaining 64% of people are still in the kitchen. 81% of people have voted by now. And it’s surprising because I see that the majority of people are from product, which is surprising to us as organizers because, in the other webinars, we get a majority of people from marketing.

So there’s something about the session, Sarah, that people from product are joining in large numbers. 86% have voted, and I’m not going to close the poll until at least 90% of people vote. So come on, folks, please share your thoughts. Okay, we crossed 90%, so I’m closing the poll. And here are the results.

 

SF:

Great. So, yeah, we do have quite a few marketers out there. That’s good to see. I’m gonna be speaking from the marketing perspective myself. A lot of product folks, which makes a lot of sense, some designers, sales.

No developers. That’s a little interesting. I have worked very closely with developers through all of my experimentation programs, especially on the website. So, again, welcome, everyone. I really appreciate you joining me today. As a young woman going through school, there always seemed to be this great divide in education: you’re either into math and science or you’re into liberal arts, left brain or right brain. It was very polarizing. And as a child with a wild imagination, I chose the liberal arts path. So I resigned myself to believing that I was skilled at art, but not so much at science. And then I got really into photography and quickly saw these two worlds blend together.

That eventually led me to a career in marketing. And nowadays, we talk about art and science in this world all the time. You know, as a marketer, you have to be good at math and analytics to understand how your content is performing. You need to be creative to think about different ways to express key messages for your organization that will drive sales, donations, sign-ups, you know, whatever your North Star Metric (NSM) is. And these things go hand in hand.

And to get the most value out of the decisions you’re required to make, I believe that experimentation is essential. So, today I’m going to talk to you about how I built an experimentation program at Pantheon and scaled it to enable personalization on our website. 

So we’re gonna start off by talking about the pitfalls of decision-making without experimentation and then go into identifying ways to prove the need for experimentation at your company. How to get buy-in, that’s really important, right? You need to sell these things to the bosses. And finally, how to level up your practice to include personalization. 

So I’ve been talking for a little bit. I suppose this time I introduce myself.

You know, I’m Sarah Fruy, again, the VP of Marketing at Linqia. Linqia is the calm in the chaos of influencer marketing. We are a full-service, tech-enabled platform that enables campaigns for the world’s leading brands, from influencer selection to creative strategy to scale. Prior to joining Linqia, I served in a number of marketing roles at Pantheon, during which time our company rose to unicorn status.

And it is here that I caught the testing bug and helped build a culture of experimentation at the company. In addition to that, I am a certified Scrum Master and Agile Marketer with over 15 years of online marketing, digital media, and website operations experience, along with marketing strategy and digital marketing certificates from the Cornell Johnson Graduate School of Management. Previously, I worked at emerging media companies like Say Media as well as heritage brands, including the San Francisco Chronicle. I’m a guest contributor on many blogs and a frequent public speaker. You can follow me on Twitter, though I’ve been a little quiet on there lately. I try to be more active on LinkedIn, so there’s a lovely QR code if you want to connect.

And when I’m not at work (I live in Oakland, California), I’ve got two young kids that keep me really busy. I love hiking and playing with my dog. I’m outside a lot; that’s one of the best parts about being on the West Coast.

So enough about me. Let’s, let’s dig in. 

So since you signed up for this webinar, you clearly find value in experimentation; it’s no longer a nice-to-have. As Mark Okerstrom, CEO of Expedia, put it, “In an increasingly digital world, if you don’t do large-scale experimentation, you’re dead.” This is really table stakes, I think, for any company looking to thrive in this changing market.

Why?

Because, you know, decision-making without experimentation can anchor your team on out-of-date or misleading information, you know, quite often this means working off of assumptions. So I can’t tell you how many times I’ve been in an organization where an executive in the company tells me some marketing-related facts that may or may not be true, but it’s so ingrained in the company’s folklore that everyone just, like, anchors on it and, like, they can’t question its validity. And I don’t doubt that at some point in time, these things were actually true. 

But, you know, audiences change over time. People’s behaviors change over time.

You know, what worked over a year ago or even a few months ago may not be effective tomorrow. You know, this creates a high-risk situation for your company because you’re making decisions on potentially out-of-date information without validating those assumptions and this also results in a lack of innovation when you work with a set-it-and-forget-it mindset. Your team isn’t challenging the status quo. They’re less likely to produce new ideas, products, and methodologies that can help you grow your business. So when you lack testing and innovation, it’s nearly impossible to keep up with the rapidly changing environment that surrounds us today.

People are concerned about the economy, war, global warming. These are really significant issues that impact consumer purchase behavior and how they interact with your brand. So if you weren’t testing on a regular basis, your decisions will lack authority and can potentially hurt your business. Or at the very least prevent it from reaching its full potential. 

This leads me to my next point on “How to prove the need for experimentation in your organization.” 

At a higher level, properly managed experimentation programs will yield positive ROI, return on investment, for your business. You put this much money in, you’re gonna get this much money out. And ideally, you don’t wanna be a cost center for the business. You wanna be driving profits. So you wanna move from relying on assumptions to making data-driven decisions, and data-driven decisions will lead to faster iterations because you can fail fast when something’s going wrong. You can identify that through your testing and then pivot to try something new when your original hypothesis didn’t pan out.

And you’ll also get early signals when your hypothesis is correct and can reallocate those resources to fuel that innovation so you can get to success faster. So as you build up a cadence of testing, you’ll need additional resources, which enables cross-functional collaboration. I say this all the time, but great ideas can come from anywhere. That’s one of my favorite parts about working in experimentation: I’ll be sitting at the lunch table or having some kind of virtual coffee chat, and one of my colleagues will be like, oh, I heard you were working on this. Have you ever thought about that?

And, like, I just love, you know, getting these ideas from people all over the company, where, you know, it really breaks down those silos. So if your company really struggles with silos, I think experimentation is a way to, like, you know, collaborate together. We can get product and marketing talking together. We can get, you know, sales into the mix and HR and all these different you know, parts of the business.

So, yeah, having, you know, the right resources and data will really enable you to track the value of experiments generated, which can then be translated into ROI for your company. 

So another way to think about this is through the concept of Growth Levers for your business. Growth Levers are changes whose effectiveness can be measured with statistical significance. For example, how does presenting two CTAs side by side on your homepage compare to presenting a single CTA? Growth levers are tied to a North Star Metric that you care about, such as the overall conversion rate for your website.

Now, a single small change to a growth lever won’t result in a statistically significant impact on your North Star Metric. But you do know that growth lever wins are moving the North Star Metric. It’s like looking at the hour hand of a clock: you won’t see it move, but if the second hand is moving, you know the hour hand is moving. The key is to do a lot of iteration on your growth levers, where you can measure the impact and move the overarching metric.

You know, said another way, lots of small changes lead to a big change. 

In fact, Microsoft Bing’s experimentation unit conducted dozens of monthly improvements that collectively boosted revenue per search by 10 to 25% a year. That’s really impressive. They accomplished this by embracing failure and learning from it. When you run a high volume of small experiments, they’re not all gonna be winners. I promise you, there are gonna be a lot of duds in the mix. But in order to have a successful experimentation program, you need to be comfortable with failing and being wrong sometimes.
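The compounding arithmetic behind “lots of small changes lead to a big change” is easy to check. Here is a quick sketch with made-up lift numbers (these are not Bing’s actual figures):

```python
# Twelve months of experiments, each shipping a modest ~1.5% winning lift.
# Small wins compound multiplicatively rather than simply adding up.
monthly_lift = 0.015
annual_lift = (1 + monthly_lift) ** 12 - 1
print(f"{annual_lift:.1%}")  # 19.6%, inside the 10-25% yearly range cited for Bing
```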

That was something that was kind of humbling when I first got into this. As a marketer, historically, you have these great ideas and you’re pitching them to everyone around you, and it’s so instinctive: this is what we’re gonna do. And then all of a sudden, you bring the data into the mix and you’re like, wow. Even though that design looks better, our audience doesn’t click on it. Even though that copy, to me, is much more appealing.

It’s not resonating with the people that I’m trying to talk to. And, as soon as you can kind of remove yourself from the equation and just let the data speak to you, that really can kind of unlock your team, to uncover all kinds of insights because you take the ego out of it. I think that that’s really important when it comes to experimentation. You can’t get bummed out if your idea didn’t win or have some kind of, like, tally about, like, I got this many experiments right and that many wrong. Like, that’s not helpful.

The helpful part is that if you fail, you make sure you’re learning from it. There have been a lot of points in my career where some of the biggest insights I’ve gotten came from things that didn’t go the way I expected, and from paying attention to why that happened so that I could course-correct and get the big wins. And the other thing, when you’re doing a lot of frequent experimentation: you’re gonna have small, regular experiments, but you also need to mix in some that are swinging for the fences. Something really big and more complicated, because those can have a potentially even greater payoff.

They’re riskier because they’re more complicated and require more resources. But I tend to do a mix of three or four small experiments and one bigger experiment over the course of a month or any given sprint period.

So we’ve been talking a bit about the why. Now let’s get into the how. In this next section, I’m gonna share three ways that I’ve been able to get executive buy-in, not only for my experimentation program but for other big ideas as well. I think that’s what’s interesting about this topic: yes, we’re focused on experimentation, but hopefully you can take some of my tips here, bring them back to your jobs, and win over the powers that be to execute other programs that maybe you’re noodling over or that you think would help the business or your team.

So, one of the first things that I tend to advise people on is ensuring top-down bottom-up consensus. So what that means is, like, not only do you need executive support, but you also need the people who are, you know, boots on the ground that are doing this work to be excited about it too. You can’t just say like, “Hey, I’m the boss. We’re gonna run an experimentation program. This is how we’re doing it.”  

Because the people who are doing the work might not be as excited about it. Gaining that momentum, getting people into it, giving them special projects, and making sure that they’re just as passionate about this as you are matters at all levels of the business, because if the people doing the work aren’t excited about it, they’re not gonna be as successful. And you need executive buy-in so that when things get expensive, when things go wrong, you have backup: someone who will say, “Hey, Sarah, or whatever your name is, has a really great idea. I support her on this. We need to see this through,” especially when some of the experiments might not go the right way.

That can be scary for people who aren’t used to running experiments: “Well, it didn’t go right. We need to cancel this thing right away.” Again, embracing failure is a really important part of having a strong experimentation program. But from a cultural perspective, getting top-down, bottom-up support across the organization is really important.

And that starts before you make your pitch, so that when people start investigating the idea, you already have a rally of people who are excited about what you want to do.

Next, I would suggest building a business case for your resources. And so this might involve a spreadsheet typically. It could be a presentation, but you’re gonna wanna look at the budget, you know, what tools, software do you need, things like VWO, for example, staff, I’ve worked with developers, user interface designers, data analysts, copywriters, product marketers, you name it.

There are a lot of different people that might be pulled into your experimentation group. So identify them and quantify: “Hey, I need this much time from them per week.” It’s not gonna be a huge commitment, or maybe it is for some people, but spell that out for the executive team. Making sure that you have the right data pipelines, especially as we get to the part about personalization, and making sure that you have really strong data is, again, table stakes for building an experimentation program. So you need to make sure you have really strong Google Analytics, or whatever you’re using to measure your website, and that, if you’re identifying visitors, the data is good.

It’s clean. It’s up to date. And I think this is something, again, a lot of marketers struggle with, like attribution, but your experimentation program will only be as strong as the data you have coming into it. And so that’s really important. So again, yeah, putting together a business plan, making it very cohesive: how much money you’re gonna wanna spend, who you need to be a part of this, packaging it up in a really professional way so that when you go to the management team, you have all your ducks in a row. You know what you’re talking about, and you can answer any questions they might have, because the budget is always going to be a really important part of this, right? 

Next, you should establish a strong foundation with a single North Star Metric. I think that sometimes with programs, it’s easy to get distracted. And so what I’ve found for my team is having one overarching goal is really essential for success, and that helps you, you know, prioritize work. So everything is mapping up to that north star. 

And, you know, as you’re building this foundation, one of the decisions that you’ll need to make is: are you going to have, like, decentralized teams or centralized teams? 

Meaning, like, is there a dedicated squad that’s always going to be running your experimentation programs, or are you gonna be pulling individuals from different parts of the organization? My team at Pantheon was a bit of a hybrid. I had some dedicated resources that were just, like, a part of the experimentation team. And then every quarter, I would kind of pull in different folks from across the organization and swap people in and out so that I could get, you know, fresh ideas and different functions participating. And that was a way for me to keep the program fresh, keep it interesting, and not sort of get stuck in a rut of the same people all the time coming up with the same ideas. 

I think having fresh folks cycle in was one of our keys to success. So, in that kind of debate between centralized versus decentralized, I actually recommend a hybrid. You know, having strong program management guidelines is also really important. So, you know, having a regular meeting every week, having a backlog of experimentation ideas, again, identifying what your North Star Metric is, and what other pieces of information are really important to making decisions in terms of, like, is this test a winner or is it not. Occasionally, you might have a loser, but because of things that are going on inside your organization, you’re gonna run with that losing test idea anyway, because maybe you’re trying to get a new round of funding. 

And even though the language didn’t work, you need investors to see a certain perspective on your website or something like that. Or sales are really struggling, and even though people aren’t clicking on a certain form, they need the website to say a certain thing to support what they’re doing out in the market. There are going to be times when you’re going to have to make tough decisions internally: even though the data suggests you move in one direction, maybe the business requires that you move in another. And so having some guidelines around that is really important. So is having a strong prioritization framework. 

Again, I have an agile background. So what I work with is a backlog of ideas. You know, you have, like, a list of “Here are all the things that we could do,” and then you sort of map them to the North Star Metric, and you look at things like, you know, how complicated is this test, and what resources do we need? You know, maybe for some of them you need a designer and a writer and a product marketer and a few other folks coming together, and so it’s gonna take a couple weeks of production just to get the test built. 

Whereas another one is like, “Hey, this is just a quick copy change,” or “We’re just gonna, like, you know, move some things around on the page,” and that’s a lot lower effort. And so, you know, when I was speaking earlier about having, like, a lot of small tests: what can you do quickly and easily to sort of get to validation fast? And then mix that in with, you know, bigger tests that take more effort but will provide deeper insights. So you sort of have that balance of, like, a regular cadence of quick, small tests and then some bigger ones that might take a couple weeks to come to fruition. 
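A prioritization pass like this can be sketched as a simple score over the backlog. The sketch below uses an ICE-style score (impact, confidence, ease); this is an illustrative framework and example data, not the exact scoring Sarah’s team used.

```python
# Illustrative ICE-style backlog prioritization (hypothetical items and scores).
def ice_score(item):
    # impact: how much the test should move the North Star Metric (1-10)
    # confidence: how sure we are it will work (1-10)
    # ease: inverse of effort, so a quick copy change scores high (1-10)
    return item["impact"] * item["confidence"] * item["ease"]

backlog = [
    {"name": "CTA copy change", "impact": 5, "confidence": 6, "ease": 9},
    {"name": "Form redesign",   "impact": 8, "confidence": 5, "ease": 3},
    {"name": "Homepage slider", "impact": 6, "confidence": 4, "ease": 4},
]

# Highest score first: quick wins rise to the top, big bets still rank on impact.
for item in sorted(backlog, key=ice_score, reverse=True):
    print(f'{item["name"]}: {ice_score(item)}')
```

With these made-up numbers, the quick copy change outranks the heavier form redesign, which matches the cadence described above: a steady stream of small tests mixed with bigger bets.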

And you know, when it comes to testing, experiments can take a matter of hours, a matter of days, or a matter of months. It really depends on the traffic to your website. So sometimes people ask me about that, and it really just sort of depends on the volume of people you have going through the tests themselves. 
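As a rough illustration of why traffic drives duration, here is a back-of-the-envelope estimate using Lehr’s rule of thumb for sample size (roughly 80% power at a 5% two-sided significance level); the conversion rate, lift, and traffic numbers are hypothetical.

```python
# Back-of-the-envelope test duration estimate using Lehr's rule of thumb:
# n per variant ~= 16 * p(1-p) / delta^2, where delta is the absolute lift.
def days_to_run(daily_visitors, baseline_rate, relative_lift, variants=2):
    delta = baseline_rate * relative_lift                      # absolute effect to detect
    n_per_variant = 16 * baseline_rate * (1 - baseline_rate) / delta ** 2
    total_sample = n_per_variant * variants
    return total_sample / daily_visitors

# Hypothetical: 3% baseline conversion, hoping to detect a 20% relative lift,
# with 2,000 visitors per day entering the test.
print(round(days_to_run(daily_visitors=2000, baseline_rate=0.03, relative_lift=0.20), 1))
# -> about 12.9 days; halve the traffic and the same test takes roughly a month
```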

The last point here I’d like to make is, you know, you also wanna establish ethical testing practices. There are a lot of people on the internet these days, and you need to respect them. You know, Facebook, in the past, has gotten blown up for doing some tests where they were manipulating people’s emotions and things like that. I would just be really cautious as a team and have some guidelines in place so that when you’re running these experiments, you’re making sure that it’s a healthy experience for your customers and that you’re not doing anything that’s gonna mess with their emotions.

And, you know, you’ll have to, as an organization, determine what ethical decisions are appropriate for your business. But I would just always keep in mind that, at the end of the day, we are experimenting on people. So keep that focus at the heart of what you’re doing, and have empathy for your audience to make sure you’re not doing anything that would make you, or someone you care about, uncomfortable. 

So, yeah, another quote here. I’ve got a few of them in here. I think a lot of people know Amazon does a ton of testing, and they really credit that to the frequency, you know, how many experiments they run per year, per month, per week, per day. So this has to just become kind of part of your everyday culture. 

Experimentation is kind of like a lifestyle once you get into it. It’s just it becomes a part of your regular workflow and, and that’s how you move your business forward really. 

So next section, you know, maximizing the impact of your experimentation program. 

I’ve kind of given you the ways that I’ve built the foundation, and now here’s some advice on how to take it to the next level. So, the first point here would be to eliminate the guesswork: you know, data must trump opinions. I said this earlier, you know, when I first started working on experimentation, how humbling it was to realize that the data proved me wrong a lot. You know, my instincts weren’t always correct, and I couldn’t rely on them the way I used to. And that’s not to say that your instincts are going to be misleading; they’re going to help you come up with experiment ideas. 

And, again, like, sometimes knowing what doesn’t work is gonna be as important to your business as knowing what does. But at the end of the day, you know, again, take your ego out of it. The data wins.

Next, you know, we really wanna create, and I’ve used this phrase probably a few times now, but it’s something I’m really passionate about, a culture of experimentation. So, again, having these continuous cycles of iteration, like I was just talking about with Amazon. You know, to successfully innovate, you really need to make experimentation an integral part of your everyday life, especially when budgets are tight, because that’s gonna help you make really smart decisions. This is not a place to cut when things are getting a little scary when it comes to financial situations.

I know there are a lot of people making cuts with employees, and budgets are getting tightened and things like that, with, like, this recession looming in the United States and other areas. And so when it does get scary, I think it’s more important than ever to experiment to make sure that you’re making smart decisions. 

And again, you know, this idea of a culture of experimentation should really extend across your organization. How can other departments build testing into what they’re doing, whether it’s finance, HR, or sales? Like, it shouldn’t just sit with marketing or your web team or the product team; you really want everyone to be thinking about this.

Like, I’ve managed demand gen functions in my career, and every time we’ve launched a new ad campaign, I make sure that we’re A/B testing the ads. I work at, you know, an influencer marketing company. We, you know, put out all of our influencer posts organically on their social media channels, we look at them to see which is performing the best, then we identify the top creative, and we start to put that onto paid, and we’ll test the visuals. We might do multivariate testing. We’ll, you know, look at the copy and different things like that.

And so this idea of experimentation really should extend across your organization in all different ways. You know, whether it’s HR, if you’re doing employee studies and things like that: asking people, surveying them, running experiments on, like, what’s the best way to onboard somebody. So, if you want to build more support for your experimentation program and make it successful, you need to make sure that all of your colleagues are really excited and thinking, “How can I test something to make my team perform better and help the business move forward?” 

And so having little lunch and learns and things like that to share your best practices as an experimenter is going to help, you know, make your program more successful, because people are going to believe in the value of experimentation. So I think that’s really important. 

You know, another thing about, you know, building experimentation is that it creates opportunities for learning.

So again, like, if y’all do lunch and learns at your company, or there are all-hands meetings or things like that, raise your hand: “Hey, I wanna present this week. I’ve got this great experiment that we went through.”

We learned a ton of stuff. I wanna share it with the company. And that might inspire other people to make changes in their behavior or how they talk. You know, it’s really kind of wonderful to see, when I’ve shared experiments, how they inspire people, my colleagues, in ways that I really didn’t expect. Another thing that experimentation can do is create healthy debate among team members. You know, there are some organizations where it’s this very top-down mentality, again, of, like, the boss says so, and this is what everybody has to do.

And I think, from an employee perspective, that doesn’t always feel great. Right? You know, you want to empower people to have a voice and feel like they’re contributing. And I think an experimentation program is just a wonderful way to give people an opportunity to raise their voice: “Hey, I’ve got this great idea. I’d love to share it with you. Can we talk about it?” 

And then, you know, when that experiment maybe wins, like, at that all-hands, call that person out, congratulate them: “We got this idea from somebody on the finance team. We weren’t expecting it, but this test blew us away. You know, we’d love to have more contributors next time.” And also, you know, I’ve had a lot of experiments that can be kind of controversial, and the team is like, “Hey, we should do this. We should do that.” And having those debates, I think, is important because it helps you consider different aspects, different perspectives, and stuff like that, and always having more diversity in your decision-making is gonna make your decisions stronger.

And so I think encouraging debate is important: not just coming in every week and being like, “Hey, we queued this up as the most important,” but checking with the team, making sure the priorities are still in line, you know, evaluating your backlog to make sure there’s nothing new we need to include. Maybe there’s a new product that got released, and we need to, like, test something about that that we weren’t expecting. So, yeah, healthy debate is really important. 

And then, I am repeating myself a little bit here in this presentation, but I just think it’s so important that I’ll hit this note one more time: embrace failure and learn from it. I will stop there because I’ve said it a few times, but I just think it’s really important. 

Next, yeah, you wanna socialize and celebrate your wins and learnings. So if you wanna grow your experimentation program, it needs visibility. Right?

Do you have a company newsletter that you can put some stats in? Again, all-hands, lunch and learns, or different things like that. And if you’re doing OKRs, like objectives and key results. I don’t know what goal-setting framework your different organizations use, but the company I’m at right now, and the previous one I was at, we used OKRs. So we have company-level goals that we’re trying to hit.

Make sure that your experimentation program maps back to those company goals, so that when you go to your boss, when you go to the management team, you’re saying, “Hey, we ran all these experiments and moved this goal to help get us into the green this quarter, which helped us hit these targets,” and things like that. And so don’t work in a silo of, like, this is only serving my needs. 

Make sure that it serves the company and that you can socialize that in a meaningful way to express the impact that you’re making. And that will also help you, you know, nurture an environment where all employees can contribute. The more you can tie this back to the company’s success, the more you know, support you’re gonna have, and that’s gonna help you get more funding, more staff, more resources, which are all really important to scaling your program. 

So this data’s a little bit old, but I just think it’s, you know, an interesting data point: insights-driven businesses are growing by more than 30% annually and were on track to earn $1.8 trillion by 2021. So use data points like this, again, in your business case and stuff like that to prove to the management team that making data-driven decisions and leveraging experimentation is only going to help your company make more money.

Not doing it, to me, is risky. That’s where you’re going to, like, lose out on potential opportunities to grow the business, accelerate your efforts, and innovate on your products, and all these wonderful things that experimentation can produce for you. 

So, leveling up to personalization. I know this is, you know, a hot topic for a lot of folks who joined today. So one of the ways that we kind of leveled up to personalization at Pantheon was fostering collaborative brainstorming sessions. 

So, again, gathering a diverse set of stakeholders. You know, this includes not only people from different backgrounds and different ethnicities, but also different roles in the company and different age groups. I think that anytime you’re looking to make meaningful decisions, you really need to have diversity within the people who are making them. Otherwise, you’re gonna have a lot of people who are too much the same and are going to kinda, like, agree with each other. And it’s really important to collaborate with lots of different types of people when you’re creating this kind of culture. 

You also wanna look at data and analytics. So things like customer surveys, interviews, funnel reporting, engagement metrics, heat mapping, cohort analysis, you know, the list goes on and on and on. 

 

A:

Can we take this moment, Sarah, to take some questions from the audience? I see that we got a question here from Maria. Towards the beginning, she asked, “In your opinion, does testing increase CPA for paid media?”

 

SF:

Does testing increase CPA for paid media? I would argue that it could reduce your cost per acquisition, because you’re going to reduce wasted media spend. Like I said, I’ve managed demand gen programs in the past, and I, like, require of my team: if you’re going to put, you know, a new campaign out there, we need to be testing something. We need to be learning from it, whether it’s the right creative, the right messaging, or whatever. 

It can also just kind of, like, be a litmus test, because, like, flat results are also interesting. Like, if you’re trying to test something new and you get a flat test, that means that you’re not rocking the boat. So not having a loser is also sometimes a winner, depending on what you’re trying to do. But in my experience, testing has driven down my cost per acquisition for demand gen. Yeah.

 

A:

Right. And I see there is one more question. Actually, it’s the same question from two people: they’re asking if we would be sharing the recording and the presentation. So the answer is yes, we would, and, you know, that applies to everyone. If you’re registered for the webinar, you will get the slides. 

And there is one more question. So I had asked in the chat, “Have you ever used experimentation results to prove your boss wrong?” And Peter has asked, “How do you do that? How do you use the experimentation results and ensure that you can take them to the senior managers and be able to prove them wrong? How do you do that effectively?”

 

SF:

So I think it’s really about making sure that you structure your experiment in the right way. Again, like I said, I had an executive who was really passionate about a certain data point when it came to our marketing strategy and never wanted to kind of rock the boat on it. And so if you have a hypothesis that would challenge that, you know, frame it up, document it. You know, we had said that most of our customers like to watch the demo before talking to a salesperson. 

Here, I’ll give you an example. I’ll give you a real example so that we can speak to this. At my company, we used to do a live demo, and it was just ingrained in the culture that we had to have the live demo because people needed to ask questions, and if they couldn’t ask questions, they weren’t gonna move through the sales process, because it was a really complicated SaaS product at the time. And I was like, well, you know, I don’t really always want to talk to somebody when I’m in the buying cycle. 

Like, I wanna challenge this, right? So what happens if I do a recorded version of the demo? And so we did an A/B test, where we had a live demo form and a recorded demo form, and you could pick which one you wanted to do. The recorded demo, like, blew away the submissions for our live demo by, like, 60% or something like that. It was a big win. You don’t always get results that outsized, but it was a really big win for the organization. 

And so then it was like, okay, well, the demo that we recorded was an hour. What if we did it in 15 minutes? What if we did it in 8 minutes? We started to iterate on the results of that success, and then I was able to go back to the team and say, hey, we don’t need to do a live demo every single week. We can do these recordings and start to optimize the recordings. And then that led to a whole resource center of, like, on-demand learnings.

And so, like, from this one point, you know, where someone in the organization, or a group of people, was saying, “Hey, this is the best way to do it. We don’t wanna change it,” and I challenged that, it actually unlocked a whole new program of training. And so I think it’s really just, you know, being able to run the experiment and prove whether the person is right.

Like, there’s always that chance, and then you can at least validate their opinion. But if they’re wrong, having an experiment, being able to document that, package it up, and bring it back to management, then you can start to change behaviors around something that may be out of date or incorrect for your business.

 

A:

Your example is generating a lot of interest, Sarah. We have gotten, like, 4 follow-up questions on that, but in the interest of time, we’re going to take only two. So, Jessica Miller is asking, “Did you gate the demo? What was the follow-up?”

 

SF:

Yes, we were gating the demo, because, in terms of our lead gen program, the demo was, like, one of the highest lead gen assets that we had. That being said, we had ungated versions that we would send out as well. So that was another thing that we were testing. I think nowadays that’s really important, right? Like, gating or ungating assets, whether it’s, like, an ebook, a white paper, a demo. But at our company, that was one of those things where, even though we were seeing a lot of these results, when we did ungate it, it was, like, for people that we knew. 

Because we didn’t wanna jeopardize the massive amount of leads that we were getting; it was too important to the business. So that’s sometimes where you have to weigh things outside of just the experimentation results. Like, if we were going to lose those people, we felt it was too risky to ungate it at that point in time.

 

A:

Yeah. And we got another question from Diana Gonzalez. She asked, “How long do we need to test in order to determine if it was a failure or not?”

 

SF:

Statsig: statistical significance. So, sometimes I would cut a test early because it just looked like it was tanking. So volatility is something you need to pay attention to as you’re looking at the experiment. Like, is the test leveling out? And early on, maybe before you get to statsig, you can see that the test is flattening out, and you’re seeing more even results. If the volatility is still kind of bouncing around, you might wanna wait. But generally, like, this will be a part of your experimentation program guidelines: what’s your threshold for statistical significance?

It doesn’t need to be 100%. It might be 80%. You’re gonna have to weigh that risk with your team on what you feel comfortable with. But, you know, at one point, we wanted to increase the velocity of our tests, and we lowered the threshold for statistical significance for my team as a way to do that. So there are different levers that you can pull, but again, having consensus with your team on what you feel comfortable with when it comes to making decisions is important there.
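The statsig check described here can be sketched as a two-proportion z-test, with the confidence threshold treated as a team-level dial (80% versus 95%). This is a minimal illustration; the visitor and conversion counts below are made up.

```python
import math

# Minimal two-proportion z-test for an A/B test, with a tunable
# confidence threshold (e.g. 0.80 instead of the usual 0.95).
def is_significant(conv_a, n_a, conv_b, n_b, confidence=0.80):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_value < (1 - confidence), p_value

# Hypothetical counts: control converts 120/4000, variant converts 150/4000.
sig, p = is_significant(120, 4000, 150, 4000, confidence=0.80)
print(sig, round(p, 3))
```

With these made-up numbers, the lift clears an 80% confidence bar but not a 95% one, which is exactly the kind of velocity-versus-risk trade-off described above.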

 

A:

Cool. Thanks, Sarah. So we have gotten more questions, but I think, in the interest of time, we are going to save them for the end. So, yeah, let’s continue then.

 

SF:

So the next point here would be, you know, when you are thinking about personalization, stay focused on the needs of your customers. So, personalization can sometimes be creepy, right, and we really need to establish where that line is. 

I’m gonna talk through an experiment in a bit that we ran on our home page, and a big concern internally was, are we gonna creep people out by using personalization in this particular instance? And so, you know, being human and putting your customers at the center of what you do is really important. You want both qualitative and quantitative data in your research when you’re thinking about personalization.

And so, you know, not just looking at the numbers sometimes, but also interviewing people: you know, would this turn you off if you saw, you know, your name and job title and things like that, or, you know, whatever information you’re looking to surface from a personalization perspective? I think more and more people are sort of expecting personalization, so, over time, depending on the nature of your business, this will be, like, less of a concern. But really trying to understand where the line is with your audience is important. Again, really ensuring good data. 

So, you know, when personalization goes wrong, it can hurt your business. So make sure that if you are going to start to use people’s names or titles or bits of information about them, and expose that on the website or wherever you’re using personalization, the data is right. Because, you know, if you call me by the wrong name… I’ve gotten emails before where it’s, like, my name in brackets, and it’s like, really, a salesperson? You want me to book a meeting with you, and you can’t even, like, set up your emails to populate my first name properly? Or you have an out-of-date title or something like that. Like, that’s something that’s kind of gonna turn people away, so making sure your data is really strong is important.

And then another way to help with personalization is, again, this culture of experimentation, you know, potentially incentivizing employees to submit ideas. That’s a great way to kind of grow your program and, you know, tap into different perspectives. 

So here is an example of the homepage from when I was managing the website at Pantheon. One of the things that we did was some qualitative research. Like I said, there’s gonna be quantitative and qualitative, and we did some user interviews.

And when they saw the logos, right below that first section of the home page, that black piece that you see on the left, they thought that they had hit the footer. And so one of our experiments was to sort of redesign that experience, and we came up with this sliding bar. 

So, our home page was one of the areas where we did a lot of experimentation. Before we got to that point, this was another, like, a previous iteration of our home page. Initially, you know, our North Star Metric was really getting people to watch the demo, which I was talking about earlier. But we wanted to test a hypothesis: if we added a second CTA to our home page, would it hurt our conversions or increase the conversions for the page? And so the idea was, if we gave people an option, then it would force them to make a decision. 

And in doing this, we saw a 53.5% improvement on our demo CTA and a 10.8% improvement on our free trial CTA. So the “Get Started” button in the upper right is our free trial. And so even though that button already existed, adding it with a different title increased the conversion rate there by about 10%. And the demo alone went up by, you know, 53%, just by adding an option. 

And so I think experimentation can really, like, bring some really surprising revelations, and this was, like, a really big win for our company early on in my experimentation program. So it’s something that I like to show a lot. Again, you know, when I was running demand gen, at one point we were trying to change our homepage tagline, which felt really risky to me. And so I leveraged experimentation through our demand gen program. 

We ran a bunch of banner ads, and this is kind of what I was talking about earlier: the banner ads with the new messaging didn’t show, like, an increase in conversions, and they didn’t show a decrease. They were pretty flat compared to the results that we were seeing with our previous campaigns, and so that gave me confidence before I took the change to the home page. 

Let me run this little experiment with some paid media to get some quick insights, you know, with a larger pool of people, before I bring it to a riskier part of, you know, our brand, which was the homepage. 

And so that’s one of the ways that you can kind of, like, test small before you even bring it to the website: looking at some of your paid campaigns to get some of those early signals of, like, are we heading in the right direction or not? So that’s a note that I wanted to hit here. This was the story I wanted to tell you about on the second slide. So, again, we had some folks that we interviewed who thought that they had hit the footer when they saw these logos.

And so we’re like, okay, “What if we put a slider in?” And I know that there’s, like, a lot of research that says don’t use sliders, but we thought, hey, it’s gonna work for us. It’s gonna work for us. And so we put a slider in, and it didn’t work for us.

Even though there was this motion, and we thought that that would pull people down the page, it was not an effective way to get people to engage and scroll farther down. And so we, you know, tried some other things. There’s, like, a menu that we rolled out, things like that. But that was one of those where our team was like, it’s gonna work, it’s gonna work, but you have to test it to know. 

And here is, an example of personalization.

So, we had the ability with our team to show people’s first names or titles when they hit our home page, and this was where the team kinda sat down and we were like, where is the creepy line? Where’s the creepy line? And we all kind of agreed, after some debate, that using people’s first names was gonna be too creepy. But, you know, this was, like, pre-pandemic; a lot of people were working in offices. It was really easy for us to say, like, oh, you work at, like, Lyft, for example, when you hit the site. So we felt people would be comfortable with us identifying their business versus their name, even though we had the ability to identify their name and could have been like, “Hey, Sarah, superpower your web team with Web Ops.”

 

There was a lot of pushback internally when I first wanted to start doing personalization on the website because people were just like, “Oh, we’re gonna you know, turn people off. They’re not gonna wanna do this.” 

I was like, “No, I really think that people want this. They expect this. It’s gonna make us look more high-tech.” 

And sure enough, when we ran this experiment, we saw a 3.3% improvement on our demo CTA and a 1.5% improvement on our free trial CTA. And so this is where we’re kinda talking about those growth levers. Like, all of these, like, little wins start to add up. So you personalize here. You add a little bit there a little bit there, and then all of a sudden this North Star Metric is, like, moving the needle. 

So I think it can be tricky. But this is where, again, having really diverse team members and multiple points of view is gonna be really important, to make sure that as you roll out personalization, it’s gonna be effective for your organization. But, like, to that person earlier who was asking how to prove your boss wrong: this was something where, when I did this a couple of years ago, I got a lot of pushback. Our company didn’t feel comfortable with personalization at that point, and I used the experimentation program to win people over, being like, the data shows this isn’t turning people away. It’s actually helping our business.

And so then I was able to roll out other personalization opportunities throughout the site. So you gotta start small and then kinda go from there. So, you know, to recap, you really want to embrace a culture of experimentation across your entire company. You wanna tie those results back to your company goals as much as you can, financially. You know, attribution is important, to say we spent this much money on this experiment, we got this many leads, and that led to this many, you know, customer acquisitions and this much revenue.

And so, like, I’ve had experiments that, you know, paid for my whole program for a year because they were so big, and that’s one of the things that has gotten me more staff and more support over the years: really tying it back to the company goals and the revenue, to prove I’m not a cost center. I’m helping us make money. I’m helping us achieve our goals, and that’s really important. And then also, you know, I think at the center of all this is your customer. So make sure that as you’re running the experiments, you’re thinking of the person who’s on the other end: “How can I make this experience better for them?” 

Yes, all these company goals and revenue things are important, but your customer should really be at the heart of everything that you’re doing: improving the user experience, improving their connection with your brand, so that they, you know, become attached to you and want to do business with you moving forward. 

So thank you, and we have a little bit more time for more questions. I’d love to hear from you all.

 

A:

Yes. And we do have more questions, and I’ll also take this opportunity to remind attendees that we have a final few minutes to share your questions with Sarah. So don’t waste that. If you have anything, you know, add it in the chat, and also if you want to ask directly, let me know and I’ll unmute you so you can ask Sarah yourself. Meanwhile, we have got this question from Travis: “Can you give me an example of a small test you have run versus a bigger, more involved experiment?”

 

SF:

Sure. So, a smaller test would be, like, on the homepage, you know, where I showed that get started button, we changed the copy on that many, many times. Should it be a demo CTA? Should it be a free trial CTA? Should it be a contact us CTA?

And so that’s really simple. Right? We’re just changing the copy. I can go in there and do that myself. I don’t need a big team to, like, mock something up and, you know, come up with a whole workflow plan. So that to me is a pretty low-effort, easy test to run. Something more complicated would be, like, you know, when we would look at our forms. Forms are really important for conversion marketing.

And so when we’re changing the language and the layout of that page, you know, I would need a designer to come up with, you know, a new concept. Sometimes it would be changing the design, but the form stays the same. Sometimes we would be changing the form fields.

And so as you get into, like, all those buttons and different things like that, like, that can be a lot more complicated from a design perspective. Because you need to, like, mock that up and have all these different labels and design guidelines and things like that. And so that would be something that would be more complicated. And so it’s really just like looking at the effort of, you know, what resources do you need? Do you need to pull in a product marketer, or a copywriter or a designer? How much development work is this gonna require?

Again, like, changing the copy is really easy for some of these tests, whereas, like, changing your forms is gonna require a developer to actually build out that new form so that you can test it. And so that requires a lot more effort. And so those would be kind of like two examples of a small test versus a bigger test.

 

A:

Cool. We have got another question from Pascal. I’m not sure if I’m pronouncing his name right, so if I’m pronouncing it wrong, then, you know, please forgive me. Meanwhile, he’s asking, “What is the typical overarching North Star goal of experimentation in marketing?”

 

SF:

I wouldn’t say there’s, like, one North Star Metric that works for every organization; it really needs to be specific to your business. At Pantheon, our North Star Metric for the experimentation program changed over time based on, like, the goals of the business. The last one that we had before I left was that we wanted to increase the number of hand raises on our website. And we defined hand-raisers as people who engaged with us on chat,

contacted us through the Contact Us form, or called our phone line directly, and so we wanted to increase the volume of people who were raising their hands. Because what we saw when we looked at the data was that people who reached out to us directly through these contact formats and raised their hand, like, “Hey, I wanna speak with you,” were more likely to convert to paid customers. And so the more we could get people to engage with us directly in those formats, the more likely we were to win their business. And so our North Star Metric became increasing the volume of those hand-raiser interactions. Other times in the business, you know, like, our demo form when I first joined was one of those things where it was, like, the most important transaction on the whole website, and everything had to map to, like, getting more demo fills.

Over time, we realized that like, that wasn’t always the most important path. When we looked at the data, we saw a lot of people were coming to us through our pricing page, and that was another really important metric. And so that, you know, that North Star Metric changed over time. 

And so it’s really taking a deep dive, mapping out your customer journey on your website. What is actually driving the results that you’re trying to achieve? Is it sales? Is it talking to a salesperson? Is it a click to cart? Or, you know, maybe you want people to register for your webinars because those are really important to your business?

Like, you know, the folks here are excited about all of you joining because it helps some of their goals. And so, you know, you just really need to figure out, looking at your customer journey, what is going to achieve the best results, and then, you know, anchor on that as your North Star Metric. But don’t have too many. It should be one metric, because you need to be able to deprioritize.

Well, you know, this didn’t help me get more hand-raisers. It’s a great experiment, so I’m gonna put it in the backlog, but right now, we’re really trying to get people to reach out to us through our contact channels, and this is not in service of that. And so that’s a way for you to prioritize work that’s going to serve your North Star Metric.

 

A:

Oh, great. And the second question from Pascal is, “If you eliminate assumptions 100%, how else can a hypothesis be formulated?”

 

SF:

So you’re just trying to sort of, like, prove out a concept that you feel is true. The way that I would go about that is, like, if you have, again, things like surveys, or if there’s other data on your website, and you want to leverage experimentation to prove something to be true but you don’t have, like, that actual result, you can use experimentation to do that. Let me think of an example of that.

So even, like, that second section on our homepage, that was an area where, you know, we talked to a bunch of people. We looked at our scroll rates. We realized people weren’t moving past the first part of our site. When we did the user interviews, the thing that came up most commonly was that they thought they had hit the bottom of the website already, and that’s why they weren’t scrolling down to the bottom of the page. So we got qualitative insight. And then we started to change our layout to address that and push people down the page, so that we could turn that data point on its head and change the experience to drive the results that we wanted.

I hope that’s answering the question.

But I would say pulling in and looking at data, and then forming your hypothesis around other pieces of information and proving it out on your website, is a way to make hypotheses without basing them on assumptions. So maybe you have a demand gen campaign and this one title, or call to action, is working better. You bring that over to the website. It might be different because it’s a different environment, right? That CTA on your banner ad might work really well out in the wild where you have all your campaigns running. You bring it in-house on your website, and it won’t work. And so then we’re getting back to this assumption thing of, like, I had a data point. This content works really well over here, but then it’s not working well over here. And that’s why we experiment, right? To sort of either validate things that are working somewhere else, or understand that different environments yield different results.

And people will adapt to it as it changes over time. That’s the other thing: like, your experiments will decay.

You’re gonna have, like, maybe an outlier campaign that does well, but you can’t say that that’s a winner forever for your business. Like, we changed one of our buttons to be pink, and we saw the engagement rate go up really, really hard because it was just, like, so shocking to see a pink button on the website.

Eventually, people are gonna get used to seeing that pink button on your website. So maybe you need to change it to blue next time. And so that’s one of those things where you need to constantly be testing, because, you know, the results will decay over time, and you need to keep things fresh in order to keep people engaged and continue to improve results for your business.

 

A:

Thank you, Sarah. I see that we have got more questions now, but in the interest of time, we are forced to skip them. Sorry, folks. We are going to go through each of the questions, and we’ll select some of them to pass on to Sarah, and we can perhaps have that conversation later.

Meanwhile, thank you so much, everyone, for joining. I see that nearly 80% of the people who joined at the beginning of the session have stayed for the full hour. So thank you, folks, and thank you, Sarah, for holding this session.

Really appreciate it. Really appreciate your time. And thank you, folks, for staying and enjoying the session. Any last words for our attendees who are still staying back?

 

SF:

Just keep testing. You never know what you’re gonna learn. I really appreciate y’all for joining me today. And, yeah, feel free to reach out on LinkedIn. That’s probably the best way to get in touch with me, and we’ll share the recording. So thank you so much.

 

A:

Keep testing and keep personalizing. With that note, we are ending this now. Thank you so much, folks. All the best.
