How To Build A Culture Of Experimentation
It’s one thing to run an A/B test correctly and get a meaningful uplift. It’s another thing entirely to transform your organization into one that cares about and respects experimentation.
This is the goal, though. You can only generate so much additional revenue if you’re the only rogue CRO at your company. When everyone is in the game, that’s when you stride past the competition.
It’s not just about the tools you use, or even the skills, but also about the people involved. But organizational matters tend to be a bit complex, as anything that involves humans is. How do you build a culture of experimentation?
This article will outline 9 tips for doing so.
1. Get Stakeholder Buy-in for CRO, and Establish Program Principles
First things first—we need to get everybody on the same page.
It used to be more difficult to convince people of the value of conversion optimization. Now, it seems that it is more mainstream, and most people buy into the benefits.
We know from conducting our State of the Industry Report that CRO is being more widely adopted, and those adopting it are increasingly establishing systems and guidelines for their programs. All of this is good.
If you’re just getting started on your CRO journey, though, don’t fret. There are some simple and tactical ways you can start establishing a vision.
First, if you don’t have full buy-in from stakeholders, make sure you have at least one influential executive sponsor who is on your side. If you don’t have this, you won’t go far. (Programs tend to have a substantial ramp-up period before you see a good return.)
Second, write down your program principles and guidelines up front. I like to create a “principles” document for any team I’m on (and a personal one as well), just so that we know what our operating principles are and how to make decisions when things are ambiguous.
Here’s an example of a principles document from my team at HubSpot (just a small section of it, but you get the point):
Of course, we have tons of documentation on everything from how we run experiments to our goals, and more.
Andrew Anderson gave a great example of his CRO program principles in a CXL blog post:
- All test ideas are fungible.
- More tests does not equal more money.
- It is always about efficiency.
- Discovery is a part of efficiency.
- Type 1 errors are the worst possible outcome.
- Don’t target just for the sake of it.
- The least efficient part of optimization is the people (with you also included).
Yours could look completely different, but make sure you script the critical plays up front and don’t leave any questions hanging in the air. This will help stakeholders understand what you’re up to and will also help onboard new employees on your team when they get started.
2. Embrace the Power of “I Don’t Know”
With most marketing efforts, we expect a linear model. We expect that for X effort or money we put into something, we should receive Y as the output (where Y > X).
Experimentation is somewhat different. It may be more valuable to think of experimentation as building a portfolio of investments, as opposed to a machine with a predictable output (like how you’d view SEO or PPC).
According to almost every reputable source, many tests are going to fail. You’re not going to be right. Your idea is not going to outperform the control.
This is okay.
If, for every 5 tests that fail, you get 1 true winner, you’re probably already ahead. That’s because, on the 5 tests that failed to improve conversion rates, you are only “losing” money during the test period. You didn’t set them live for good, so you mitigated the risk of a suboptimal decision. (This alone is a great benefit!)
Outside of that, the one test that did win should add some compounding value over time. A 5% lift here and a 2% lift there add up; and eventually, you’ve got the rolling equivalent of a portfolio with compounding returns:
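To make the portfolio framing concrete, here’s a quick back-of-the-envelope calculation. The baseline rate and lift figures are illustrative, not from a real program:

```python
# Winning lifts compound: a 5% win followed by a 2% win multiplies,
# rather than adds, against the baseline.
baseline_rate = 0.10          # hypothetical baseline conversion rate
lifts = [0.05, 0.02]          # relative lifts from two winning tests

rate = baseline_rate
for lift in lifts:
    rate *= 1 + lift

total_lift = rate / baseline_rate - 1
print(f"Combined relative lift: {total_lift:.1%}")  # 7.1%, slightly more than 5% + 2%
```

Two wins in a row are worth slightly more than the sum of their lifts, and the gap widens as the portfolio grows.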
A side point to the whole “embrace I don’t know” thing is that you shouldn’t seek to test things to validate only what you think is right. The best possible case is that something wins that you didn’t think would win.
That’s how Andrew Anderson frequently frames conversion optimization, saying in this post that “the truth is, in optimization, the more often we prove our own perceptions wrong, the better the results we are getting.”
Ronny Kohavi, too, makes the point that a valuable experiment is when the “absolute value of delta between expected outcome and actual outcome is large.” In other words, if you thought it would win and it wins, you haven’t learned much.
3. Make It a Game
Humans like competition; it and other elements of gamification can help increase engagement and genuine interest in experimentation.
How can you gamify your experimentation process? Some tools, such as GrowthHackers’ NorthStar, embed this competition right into the product with features like a leaderboard:
You can create leaderboards for ideas submitted, experiments run, or even the win rate of experiments. Though, as with any choice in metric, be careful of unintended incentives.
For example, if you create a leaderboard for win rate, people may be disincentivized from trying out crazy, creative ideas. It’s not a certainty, but keep an eye on your operational metrics and the behaviors they encourage.
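To see the incentive problem in miniature, here’s a sketch of a win-rate leaderboard computed from a hypothetical experiment log (the record format and names are made up for illustration):

```python
from collections import defaultdict

# Hypothetical experiment log: (owner, won?) pairs.
experiments = [
    ("alice", True), ("alice", False), ("alice", False),
    ("bob", True), ("bob", True),
]

runs = defaultdict(int)
wins = defaultdict(int)
for owner, won in experiments:
    runs[owner] += 1
    wins[owner] += won

# Ranking purely by win rate rewards bob (2 tests, 100% wins) over
# alice (3 tests, 33%) -- even though alice ran more experiments.
leaderboard = sorted(runs, key=lambda o: wins[o] / runs[o], reverse=True)
print(leaderboard)  # ['bob', 'alice']
```

A metric like this quietly punishes people who take swings, which is why many programs rank on experiments launched rather than win rate.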
4. Adopt the Vernacular
Sometimes, a culture can be shifted by subtle uses of language.
How does your company explain strategic decision making? How do you talk about ideas? How do you propose new tactics? What words do you use?
If you’re like many companies, you talk about what is “right” or “wrong,” what you have done in the past, or what you think will work. All of this, of course, is nourishing for the hungry, hungry HiPPO (the Highest Paid Person’s Opinion), which thrives on appeals to expertise.
What if, instead, you talked in terms of upside, risk mitigation, experimentation, and cost versus opportunity?
The world sort of opens up for those interested in experimentation. Obviously, you still have to be grounded in reality. You can’t throw insane test ideas at the wall and hope that everyone jumps on board.
But if you can propose your ideas in the context of a “what if,” something you can test out with an A/B test rather quickly, you can probably get people on board.
“We see here that 40% of our users are dropping off at this stage of the funnel. We’ve done a small amount of user research and have found that our web form is probably too long. It would take us very little time to code up X, Y, and Z variants, and we’d have a definitive answer in 4 weeks. The upside is big. The risk is low. Shall we run the experiment?”
It’s much harder to argue against something like this.
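Incidentally, the “definitive answer in 4 weeks” part of a pitch like that can be sanity-checked with a standard two-proportion sample-size estimate. The baseline and target conversion rates below are hypothetical:

```python
import math

def sample_size_per_variant(p1, p2, alpha=0.05, power=0.80):
    """Approximate visitors needed per variant to detect a move from
    conversion rate p1 to p2 with a two-sided z-test."""
    z_alpha = 1.96   # two-sided 5% significance level
    z_beta = 0.84    # 80% power
    pbar = (p1 + p2) / 2
    num = (z_alpha * math.sqrt(2 * pbar * (1 - pbar))
           + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p2 - p1) ** 2)

# E.g., hoping to lift form completion from 10% to 12%:
n = sample_size_per_variant(0.10, 0.12)
print(n)  # roughly 3,800 visitors per variant
```

If 4 weeks of traffic comfortably covers that number per variant, the “definitive answer” framing holds up; if not, you know to propose a bigger change or a longer run before the meeting, not after.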
Most of persuasion is framing. If the person you are trying to convince feels attacked or threatened (“you think a scrappy A/B test is better than my 25 years of experience?!”), you’re not going to get far.
If you pull people into the ideation process and propose ideas as experiments with lots of upside, it’s easier to get people involved in the process. Or maybe just start throwing the words “hypothesis,” “experiment,” “statistically significant,” “risk mitigation,” and “uncertainty reduction” into all of your conversations, and hope that people follow along.
It doesn’t need to be limited to experimentation, either. You can make it normal to talk about pulling the data, running cohort analyses, doing user research, and so on. These should be normal decision-making processes that replace gut feel and opinions.
5. Evangelize Your Wins
It’s important to stop and take the time to smell the roses. When you win, celebrate! And make sure that others know about it.
It’s through this process of evangelization that you both cement the impact and results you’re creating in others’ minds and recruit others to become interested in running their own experiments.
How do you evangelize your wins? Many ways:
- Have a company Wiki? Write your experiments there!
- Send a weekly email including a roundup of the experiments.
- Schedule a weekly experiment readout that anyone can attend.
- If possible, write external case studies on your blog. This isn’t always possible, but it can be a great way to attract interested candidates to your program.
I’m sure there are many other interesting and creative ways to celebrate and evangelize wins as well. Let me know in the comments how your company does it.
6. Define Your Experiment Workflow/Protocol
If you want everyone to get involved with experimentation, make sure that everyone understands the rules. How does someone set up a test? Do they need to work with a centralized specialist team or can they just run it themselves? Do they need to pull development resources? If so, from where?
These are all questions that can cause hesitation, especially for new employees, and this hesitation can really hinder experimentation throughput.
This is why it’s so beneficial to have someone, or a team, owning the experimentation process.
Even if you don’t have someone in charge of the program, though, you can still build out the documentation and protocol. At the very least, you can create an “experimentation checklist” or FAQ that can answer the most common questions.
In Switch, Chip and Dan Heath wrote:
“To spark movement in a new direction, you need to provide crystal-clear guidance. That’s why scripting is important – you’ve got to think about the specific behavior that you’d want to see in a tough moment, whether the tough moment takes place in a Brazilian railroad system or late at night in your own snack-packed pantry.”
“Clarity dissolves resistance,” they say.
7. Invest in Ongoing Education and Growth Opportunities
This is anecdotal, but I’ve found that the best organizations, those that run very mature experimentation programs, tend to invest heavily in employee development.
That means granting generous education stipends for conferences, books, courses, and internal trainings.
Different companies can have different protocols as well. Airbnb, for example, sends everyone through data school when they start at the company. HubSpot gives you a generous education allowance.
There are tons of great CRO-specific programs out there nowadays, particularly through CXL Institute. Some programs I think everyone should run through:
- Intermediate Google Analytics
- A/B testing mastery course
- Form Optimization
- CRO Certification Program
8. Embed Subtle Triggers in Your Organization
I’ve found one of the most powerful forces in an organization is inertia. It’s exponentially harder to get people to use a new system or program than it is to incorporate new elements into the current system.
So what systems can you use to inject triggers that inspire experimentation?
For one, if you use Slack, this is certainly easy. Most products integrate with Slack—Airtable, Trello, GrowthHackers Northstar, and others—so you can easily set up notifications to appear when someone creates a test idea or launches a test.
Just seeing these messages can nudge others to contribute more often. It keeps the program salient.
Whatever triggers you can embed in your current ecosystem—even better if they’re automated—can be used to help nudge people toward contributing more test ideas and experiment throughput.
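As a concrete sketch of an automated trigger, Slack’s incoming webhooks accept a simple JSON payload. The webhook URL, emoji, and message format here are placeholders, not values from any real program:

```python
import json
import urllib.request

def build_payload(owner, test_name):
    """Build the Slack message announcing a newly launched experiment."""
    return {"text": f":test_tube: {owner} just launched an experiment: *{test_name}*"}

def notify_slack(webhook_url, payload):
    """POST the payload to a Slack incoming webhook."""
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

payload = build_payload("dana", "Shorter signup form")
# notify_slack("https://hooks.slack.com/services/...", payload)  # placeholder URL
print(payload["text"])
```

Hang a call like this off whatever your team already uses to log experiments (Airtable, Trello, a spreadsheet), and the nudge happens without anyone remembering to send it.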
9. Remove Roadblocks
According to the Fogg Behavior Model, there are 3 components that factor into someone taking an action: motivation, ability, and a prompt (trigger).
I think the ability, or the ease at which someone can accomplish something, is a lever that we tend to forget about.
Sure, you can wow stakeholders with potential uplifts and revenue projections. You can embed triggers in your organization through Slack notifications and weekly meetings so that people don’t forget about the program. But what about making it easier for everyone who wants to run a test?
That’s the approach Booking.com seems to have taken, at least according to this paper they wrote on democratizing experimentation.
Some of their tips include:
- Establish safeguards.
- Make sure data is trustworthy.
- Keep a knowledge base of test results.
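One common safeguard from the experimentation literature is a sample ratio mismatch (SRM) check: if the observed traffic split deviates significantly from the configured split, the test’s data can’t be trusted. A minimal stdlib-only sketch (the counts and alert threshold are illustrative):

```python
import math

def srm_detected(count_a, count_b, expected_ratio=0.5, alpha=0.001):
    """Chi-square test (1 degree of freedom) for sample ratio mismatch
    between two variants expected to split traffic at expected_ratio."""
    total = count_a + count_b
    exp_a = total * expected_ratio
    exp_b = total * (1 - expected_ratio)
    chi2 = (count_a - exp_a) ** 2 / exp_a + (count_b - exp_b) ** 2 / exp_b
    # Survival function of the chi-square distribution with 1 d.o.f.
    p_value = math.erfc(math.sqrt(chi2 / 2))
    return p_value < alpha

print(srm_detected(5000, 5100))  # False: a split this close is normal noise
print(srm_detected(5000, 5500))  # True: something is skewing assignment
```

Running a check like this automatically on every live test is exactly the kind of guardrail that lets newcomers experiment without silently shipping broken results.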
To summarize, do everything you can to onboard new experimenters and mitigate their potential to mess up experiments. Of course, everyone has to go through the beginner phase of A/B testing, where they’re expected to mess things up more often than not. The trick, however, is to make things less intimidating while also making it less likely that the newbie may drastically mess up the site.
If you can do that, you’ll soon have an excited crowd anxiously waiting to run their own experiments.
Conclusion

An organization with a mature testing program knows that almost all of its success depends on a nourishing experimentation culture. You cannot operate at scale, and truly efficiently, with only one or a handful of rogue experimenters.
The program needs to be propped up by influential executive stakeholders, and everyone in the company needs to buy into the basic process of making evidence-based decisions through research and experiments.
This article outlines some ideas I’ve seen to be effective in establishing a culture of experimentation, though it’s clearly context dependent and not limited to the items on this list.
Got any cool ideas for implementing a culture of experimentation? Make sure you let me know!