
Breaking Down Silos: Experimentation and Personalization at Virgin Media O2

A practical conversation on how experimentation and personalization work together in a large organization. The discussion covers testing strategies for major campaigns, how teams stay aligned, and how experimentation scales across multiple channels.

Summary

Charlotte and Chris explain how customer lifecycle, mindset, and channel preference guide personalization decisions. Using Black Friday as an example, they describe a phased approach: early learning tests, then optimization with multi-armed bandits, plus segment-based insights. They also cover the shift from a siloed CRO setup to a centralized model, using Opportunity Solution Trees to connect product, research, and testing. The conversation ends with how training, stakeholder involvement, and technical support help expand testing across web, app, comms, and call center channels.

Key Takeaways

  • Customer context determines the right message and channel.
  • Use learning tests early, then switch to performance-focused tactics.
  • Alignment and training are key to scaling across teams and channels.

Transcript

NOTE: This is a raw transcript and contains grammatical errors. The curated transcript will be uploaded soon.

Hey, Charlotte.

Hi, Chris. How are you?

Yeah. Very good. Thank you. Thanks so much for volunteering to be on the VWO ConvEx webinar.

Oh, that’s okay. It’s a pleasure to be here.

And as we were just saying, we talk loads about experimentation anyway, so we’re gonna have a good conversation today.

Yeah. We could talk about it for hours.

But welcome everybody to the session, and we’re going to be talking with Charlotte here. So Charlotte, would you like to introduce yourself?

Yeah, sure. So I’m Charlotte Golding. I head up experimentation at Virgin Media O2. My team primarily focuses on the existing customer side of Virgin Media O2. So typically we play around a lot with personalization, trying to ensure that the customer experience is super personalized for our existing customers.

Fantastic, and my name’s Chris Gibbons. I’m Chief Experimentation Officer at Creative CX, and we’re very lucky to work with Virgin Media O2 and to support Charlotte with the work that she does over there. We’re an experimentation consultancy, so we’re one hundred percent focused on that fantastic subject.

And we’re all about enabling our clients, making experimentation easy for all of our clients to do and supporting them with resource when they need it. That’s what we do. So fantastic, let’s move on to the three or four questions I’ve got for you today, Charlotte, and I think we can really talk around the subject.

So first one is: what does experimentation really look like at Virgin Media O2?

So it’d be great if you can kind of go through how it works, what it looks like, and of course all the fantastic personalization work you do as well.

Yeah. Sure. So I’ve actually prepared like a little bit of a view so that you can understand how it works.

So I’m gonna focus primarily on Black Friday, because I feel like that’s quite a nice use case that’s actually quite relevant at the moment. I think a lot of people are looking at Black Friday right now. So in terms of how it works, from a personalization perspective but also from an experimentation one, we tend to start with trying to understand the customer context.

So thinking about the frame of mind that the customer is going to be in when they visit your website, when they visit your app, or when they engage with a communication from you. Things such as customer life cycle, mindset, and channel preference are super key. Life cycle focuses on the relationship that you have with your customer. So are they a new customer?

Do you not have that relationship yet? Are they currently within contract with you? Are they nearing the end of their contract? Or are they going out of contract?

Because that will really influence kind of like their frame of mind.

And then moving on to mindset. That’s more specific to the relationship your customer has with you at that specific moment in time, as opposed to overall. So are they experiencing, perhaps, some service problems? Are they not a particularly happy customer at the moment?

Perhaps they’ve got some payment issues, or are they just all around feeling like a super loyal customer?

And then we look at channel preference. Channel preference is around ensuring that you’re talking to your customer at the right time, through the right medium. So right message, right medium. Do they want to engage with you via SMS, email, or push notification? Maybe they prefer the app, or maybe they prefer the website. These kinds of things are going to heavily influence how you personalize to your customers.
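These three dimensions, life cycle, mindset, and channel preference, can be pictured as inputs to a decision rule that picks a message and a medium. A minimal Python sketch, where every field name and rule is an invented illustration rather than Virgin Media O2’s actual logic:

```python
from dataclasses import dataclass


@dataclass
class CustomerContext:
    lifecycle: str           # "new", "in_contract", "end_of_contract", "out_of_contract"
    mindset: str             # e.g. "service_issue", "payment_issue", "loyal"
    preferred_channel: str   # e.g. "push", "sms", "email"


def choose_message(ctx: CustomerContext) -> tuple[str, str]:
    """Return an illustrative (message_theme, channel) pair."""
    if ctx.mindset == "service_issue":
        theme = "service_recovery"    # fix the problem before selling anything
    elif ctx.lifecycle == "end_of_contract":
        theme = "renewal_offer"       # highly engaged, price-sensitive
    elif ctx.lifecycle == "new":
        theme = "welcome_onboarding"
    else:
        theme = "general_promo"
    return theme, ctx.preferred_channel


# An end-of-contract customer who prefers push notifications
print(choose_message(CustomerContext("end_of_contract", "loyal", "push")))
# → ('renewal_offer', 'push')
```

The point is only that the same campaign resolves to different messages and mediums per customer; a production system would learn these rules rather than hard-code them.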

That’s really interesting. You don’t treat everyone the same; you’ve moved to treating people differently, because then you can better serve them, can’t you? If everybody started off with the same journey, the same experience, it would be a compromise, and it would be long-winded for some people. So I think it’s really interesting the way that you’ve broken this down into very logical dimensions.

Yeah, absolutely. We found specifically with channel preference, more so this year, that’s where we’ve really started to heavily personalize, because I think it’s such a personal thing.

Like, I personally prefer being engaged with via push notification if there’s a sale on; if there’s an ASOS sale, I like to know via a push notification. And all of the customers are different. I think it’s just important to ensure that you personalize via channel preference.

Absolutely. And the user needs are completely different for a brand new customer versus someone who’s in contract or near the end of contract. I think that’s the key thing that people forget sometimes: what they know about the brand is going to be completely different, and therefore the information you present them needs to be different, doesn’t it? In a different priority.

Oh, absolutely. Yeah. There’s actually an example that I’m going to talk about in a little bit, around the differences that we’ve actually seen in the past between customers who are in contract, near the end of their contract, and out of contract.

Because they just have a completely different interest in engaging with you. We find that typically customers who are in contract can be less engaged. Whereas customers who are nearing the end of their contract are super engaged, because they want to try and find the right price.

And then customers who are out of contract care a little bit less about all of that, because typically they’re just rolling.

Yeah. No. Exactly. Should we move on then to the next?

Yeah. Go for it.

And then the next thing that we typically look at specifically during Black Friday is we tend to think about it in phases, and we then personalize per phase. So phase one is the “Black Friday is coming” phase.

So how can you personalize around the fact that Black Friday is coming up? We tend to run a lot of experiments around lead generation, trying to get customers to opt in to marketing comms, essentially, so that we can reach out to them, contact them, and start to generate the overall hype. And the experiment would therefore be different for the different customer life cycles or mindsets that we spoke about earlier.

And then when Black Friday is live, it’s a very similar approach. You’re trying to get customers engaged right at the start of Black Friday. At Virgin Media O2, our Black Friday campaigns typically run longer than just the standard few days, although I think a lot of companies have moved to this model anyway.

So it’s a week, isn’t it?

Well, ours is longer. It’s longer this year. I don’t wanna give any spoilers but okay.

Wow. Yeah. Last year, it went on for around a month, which is quite a long period of time. Getting customers engaged that early on can be a real struggle, so you really need to tailor your message to them. And then when Black Friday is ending as well, that’s when you need to really lay it on.

Try and get the customer right at the end, for those who haven’t purchased. And then for those who haven’t, there’s the “Black Friday is over” phase: trying to get them excited about the next campaign that’s going to be coming up.

Again, trying to get them to opt in to marketing comms, start to get them excited about the next step, and move into that aftercare phase as well, caring for the customers after they’ve purchased.

And of course, the tricky thing here is targeting, isn’t it? Because you don’t want to present the wrong messaging to the wrong person. If someone’s already got a really good Black Friday deal, you don’t want to be sending them tons of messages at that point. So I think that’s the careful balance.

Yeah, absolutely. I mean, one thing that we use is a model called NBA, which stands for next best action, and that really helps here. So if we know that a customer has recently purchased, our NBA model will say, don’t try and hammer them again, essentially.

We could do something different, rather than trying to push them into purchasing. Because actually at that point, they’ve recently purchased, and they’re ready to move into the aftercare phase.

Yeah. Fascinating. And how do you treat Black Friday and campaigns like this? When it comes to experimentation and A/B tests, do you treat them as a learning type of test, or is it all about maximizing that period during the actual sale?

What I mean is: is it all about maximizing the sales during the period, or learning for the next sale?

It’s a bit of a mix, really. Because our Black Friday is quite long, we’re able to do a mix of experiments over that period of time. So at the start of it, we take the approach where we’re more trying to learn and understand what’s going to work best. And we do that through a mix of methods. It can be with A/B tests, or sometimes, if we’ve got a lot of variations that we want to test but maybe don’t have the traffic for it in a specific area, we might use something like Stats Accelerator, which would essentially push the traffic to the variation that has the highest probability of being statistically significant, whether it’s positive or negative.

But yeah. So we have the right mix, and then more so towards the end, that’s when we would just go maximize: we can just send the traffic to the variation that has the highest probability of generating orders.

For the audience out there, the Stats Accelerator is a type of multi-armed bandit, isn’t it? Yeah. So it’s where, based on the performance of any particular variation, which can be great for sales periods and content promotions, you start to vary that traffic, don’t you? The algorithms start to push more traffic towards the one that’s doing well for whatever your chosen primary metric is.

Yeah, exactly that. So as I said, it’s a real mix, because with the bandits as well, it’s not particularly statistically sound in that sense. It’s literally just trying to maximize the gain. So that’s the trade-off that we play towards the end.

Yeah, and the primary metric is really key, actually. When we’ve run multi-armed bandits before, choosing what your metric is is really important, because otherwise you can skew results. If you choose the wrong metric, of course, it’s just a machine, isn’t it? It’s just driving towards that metric. But if you chose conversion rate where your ARV was most important, it can drive the sales of your lowest-priced items if you’re in ecommerce, for example. So you could actually reduce your revenues in that period if you chose the wrong metric. It’s something you need to really think about.
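The traffic-shifting behaviour being discussed is, at its simplest, an epsilon-greedy multi-armed bandit: mostly exploit the best-performing variation on the chosen metric, occasionally explore the others. A toy sketch in Python (Stats Accelerator’s actual algorithm is more sophisticated; this only illustrates the principle, and the numbers are invented):

```python
import random


def epsilon_greedy(conversions: list[int], visitors: list[int],
                   epsilon: float = 0.1) -> int:
    """Pick a variation index: with probability epsilon explore at
    random; otherwise exploit the highest observed conversion rate."""
    if random.random() < epsilon:
        return random.randrange(len(conversions))
    rates = [c / v if v else 0.0 for c, v in zip(conversions, visitors)]
    return max(range(len(rates)), key=rates.__getitem__)


# Variation 1 converts at 6% vs variation 0 at 4%, so it soaks up
# most of the next 1000 allocations while ~10% still explores.
random.seed(0)
picks = [epsilon_greedy([40, 60], [1000, 1000]) for _ in range(1000)]
print(picks.count(1))  # the large majority of the 1000 allocations
```

Note how the metric passed in (here, conversions) is all the algorithm sees, which is exactly why choosing the wrong primary metric skews where the traffic goes.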

Yeah. Absolutely. Nice. So this is a little example that I wanted to talk about.

So in this example, which is a Black Friday test, the experiment is called the upper-funnel overlay test, and it’s powered by our product recommendations, which is where the personalization lens comes in.

So essentially, what we were doing here is before the customers enter the store where they can transact, we were showing customers, like, the best offers that were available to them.

And essentially, the algorithm would show customers the product first that we believe they have the highest propensity to purchase, which is where, like, the personalization lens starts to come in.

Interestingly, we did see a twelve percent increase in conversion rate overall, which is awesome.

It’s an experiment that we then decided to run after Black Friday in other campaigns, and it’s been a success; it’s now the control for us, which is great. So that’s kind of how personalization and experimentation look at Virgin Media O2.

Another lens to put on here: for this experiment, we did chunk it up quite heavily to really hone in on the different contract stages, looking at customers who are in contract, out of contract, and nearing the end of their contract, because that really changed the overall effect. Overall we did see a positive increase in conversion rate, but across the different customer groups we really saw a difference in behavior.

For the customers who were nearing the end of their contract, we saw quite a big spike in calls to the call center, for example, when they saw this. They were seeing really great products for them, and they’re like, oh, that’s brilliant, but I wanna talk to someone on the phone to make sure I’m getting the right price. Whereas for customers who are out of contract, we didn’t see that behavior.

That’s really interesting, isn’t it? So is this something you do in your post-test analysis?

Do you look at the key segments to see if there are any extra learnings to take into the next test? Is this kind of a standard approach for you?

Yeah. Exactly that. So after we run our experiment, you’re right, Chris, we then run our evaluation to see if there are any key points that we can take through into our next tests, and that’s something we carry on with. From last year’s Black Friday campaign, we’ve taken the learnings and we’re bringing them through to this year so that we can develop on top of them. We took some of those learnings through the rest of this year, but there are some pretty key learnings that I think are quite specific to Black Friday and are worth testing again and iterating on during that specific Black Friday period.
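The segment evaluation they describe, comparing how in-contract and end-of-contract customers responded, amounts to computing relative conversion-rate lift per segment. A minimal sketch, with wholly invented numbers rather than Virgin Media O2’s data:

```python
def lift_by_segment(results: dict[str, dict[str, tuple[int, int]]]) -> dict[str, float]:
    """For each segment, return the relative conversion-rate lift of
    the variant over the control. Each arm is (conversions, visitors)."""
    lifts = {}
    for segment, arms in results.items():
        rate = {arm: conv / vis for arm, (conv, vis) in arms.items()}
        lifts[segment] = (rate["variant"] - rate["control"]) / rate["control"]
    return lifts


# Illustrative numbers only
results = {
    "in_contract":     {"control": (50, 1000), "variant": (55, 1000)},
    "end_of_contract": {"control": (80, 1000), "variant": (100, 1000)},
}
lifts = lift_by_segment(results)
print(lifts)  # end_of_contract shows a much larger lift than in_contract
```

A flat overall result can hide exactly this kind of divergence, which is why the per-segment pass feeds the next round of hypotheses.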

Yeah. And like you said before, it’s that mix of tests to learn from and take forward to future sales periods, and just maximizing the benefit with multi-armed bandits. Yeah.

Which I think is really interesting. And I think that’s a good tip for anybody out there, really: it’s using the right technique at the right time, isn’t it? Most of the time you do want to learn. You do want to do robust A/B testing and have some really good rigor on the evaluation side of things, so that you’re sure of a result and can take really valuable learnings from it.

But there are definitely periods when the multi-armed bandit, to just maximize your business metrics during that period, is the best approach to take. Yeah. So I think that’s really interesting. Okay, let’s move on to the next question, which is all about operating models.

Of course, there’s no real right way when it comes to operating models: what the team structure is, how you as an experimentation and personalization team support the rest of the business.

And we work with tons of different companies, and they all have slightly different ways of working, and I love it. I think in the experimentation community generally, it can sometimes come across like there’s only one way of doing it; it’s very black and white in terms of this is the correct way, this is the wrong way. I don’t think that’s the case. Businesses like yours, Charlotte, are very large and really complicated and will have different structures internally. So I think it’s all about taking inspiration from different models and then tailoring them to what works best.

But yeah, fascinated to hear your thoughts on this. This is quite a complicated slide, but it’s a summary of the variations of the different models out there. On the far left, we’ve got the place where a lot of people start, which is the typical CRO team: very much a self-contained team, as you can see in the blue box on the left, that has all the skills it needs. They usually have a couple of all-rounders who do a bit of user research and a bit of usability testing; they uncover the problems, solve them themselves, and then launch client-side, front-end, JavaScript-injection type experiments. And they tend to keep things a bit smaller as well because, of course, they can’t do too many big changes.

Whilst at the same time, they might have loads of product teams and other business units doing loads of things, but not A/B testing. So it’s a really interesting point: these guys are doing some really good work here, but they can end up quite siloed and misaligned with the rest of the business.

And there’s also limits to what they can do, which I think is interesting.

Yeah, absolutely. I was gonna say, that’s very similar to how we started.

We started off as more of a siloed CRO team, to your point. And it was so clear that there was so much misalignment between us and product.

But we’ve now moved away from that model. To your point, there’s no right or wrong; it’s just how it ended up being set up in our org. We actually became quite siloed.

But we sit within products, so that makes things a little bit easier for us, I suppose.

And we all focus on the same OKRs and objectives now, which makes things a lot easier.

Yeah. That makes so much sense, doesn’t it? But it’s quite a difficult move for some people, isn’t it? How do you move from one to the other? I mean, how was it initiated? How did it happen at your organization?

So it happened through a lot of calls, a lot of workshops, a lot of retrospectives.

Luckily for us, the people that we were working with, like, they’re naturally very, very curious.

So they love the fact that when we run an experiment, we can really clearly see what works and what doesn’t work for customers.

So they were very bought into that. It was then just a case of taking them on that journey and showing that we could be more aligned overall to our objectives and work closer together.

So we held a series of sessions in person, and we educated the wider business on experimentation and what good experimentation looks like from our perspective.

And off the back of that, we then started to work together to form something called an OST, which stands for opportunity solution tree.

And with that, I know we don’t have a visual of it, but you can certainly look it up online.

Teresa Torres created it. It’s a super interesting model. You start with a big objective at the top of your tree; it’s essentially a customer problem. Underneath that you have your objectives, so that you can impact your primary goal. Underneath those you have the different opportunities to support your objectives, and underneath those you have your experiments.

And we found that way of thinking really helped to bring experimentation and product closer together, moving in one direction towards the same goal, essentially.
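The tree Charlotte describes, an objective at the root with opportunities branching down into experiments, is easy to picture as a small recursive structure. A hypothetical sketch in Python, with invented labels standing in for a real opportunity solution tree:

```python
from dataclasses import dataclass, field


@dataclass
class Node:
    label: str
    children: list["Node"] = field(default_factory=list)


# Hypothetical OST: objective -> opportunities -> experiments
ost = Node("Increase Black Friday conversion", [
    Node("Customers unsure which offer fits them", [
        Node("Experiment: personalized offer overlay"),
        Node("Experiment: offer comparison table"),
    ]),
    Node("Customers drop off at checkout", [
        Node("Experiment: simplified basket flow"),
    ]),
])


def count_experiments(node: Node) -> int:
    """Walk the tree and count the experiment leaves."""
    if node.label.startswith("Experiment:"):
        return 1
    return sum(count_experiments(child) for child in node.children)


print(count_experiments(ost))  # → 3
```

The value of the structure is exactly the traceability discussed here: every experiment hangs off an opportunity, which hangs off the objective, so research and test results stay attributable.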

No, I really love that approach. It’s very similar to ours; we use flavors of that in our problem-first approach to whatever we do.

And again, it’s brilliant for alignment, because for a start it makes everybody feel good, since you’re working towards solving a real problem that affects your customers and the business.

So it gives everybody a common purpose, I think. And these are just tools; experimentation is just a methodology to help you get somewhere, and I think that can really help align things.

Yeah. That’s really good to hear. I think it’s also great for directing research, making research more actionable, and helping you to attribute value and business impact to research. In the industry at the moment, there are a lot of disgruntled UX researchers moaning about how businesses don’t listen, don’t read their reports, and don’t do a lot with the fantastic insights they’ve uncovered.

The reason is very often that it’s just not connected to any team activities. It isn’t in something like an OST, an opportunity solution tree. So there aren’t experiments coming off it, and therefore it’s very hard to attribute value to the underlying opportunity-discovery work. So I think it can really help to achieve quite a lot of things. It can give everybody a purpose, from UX researchers all the way down to experimentation.

Wow.

So yeah.

So, yeah. Absolutely. We hold opportunity solution tree sessions on a biweekly basis. We have user researchers that attend, the product owners are the ones that primarily run it and champion it, and experimentation is within that session as well.

And sometimes we get analytics too, which is great.

And you’re right, Chris. It really helps everyone to focus on the problems that we’re trying to solve as a group, as opposed to working in silos. It means that we can talk about the customer problems that we can see coming through that are a key priority, which then helps us to delegate the tasks amongst the group, where user research will say, oh, that’s interesting, I’m going to go and run a piece of user research to bring back to the session so that we can build on the problem.

And it really helped to get everyone working towards the same goal. It’s been good.

It should stop companies spending so much money on solutions to nonexistent problems too, which, to be honest, is what a lot of companies do: they waste an awful lot of money creating features and building new journeys without much evidence behind them.

Then they wonder why they’re not successful in uncovering winning experiences. So it can really help with that too. No, fantastic. A good side point there, actually.

Just to finish up this diagram, right on the right side is the decentralized model. We don’t see much of this, but we did a massive experimentation audit of a huge company who were absolutely running it this way: they had fifteen or twenty different teams all running experimentation, but all doing it differently, because they didn’t have any kind of experimentation team keeping the standards up and didn’t have any key governance around experimentation. So it was all a bit like the Wild West.

And the consequence of that is that data is measured in different ways, and then the business eventually loses trust in the data and loses trust in experimentation.

So the solution for that company, because some teams were really good at experimentation but some needed to catch up, was to have a center-of-excellence experimentation team that started to get involved with all the teams and began to create some common processes and standards, with a focus on tooling, culture, etcetera, to bring everything together. And that brings me to the whole middle area, which is centers of excellence. These vary in style, but often the first center of excellence is still a centralized model; they’re just communicating, hence the arrows, with the product teams.

So they’re effectively building all the experiments at this point, and analyzing all the experiments, but they’re at least aligned and focused a bit more on enablement and culture. And I think that’s quite a good first step towards a point where you want to get everybody involved but still keep maximum control in the early days.

But then a very common next step to move towards is to start to enable product teams to experiment themselves and to do more of the process themselves.

For example, we start getting them to design and build their own simple tests, while the center-of-excellence team will maybe be involved more in the complex tests.

And often at this point, it’s quite a good idea for the CoE to keep control of things like the analysis, because it’s quite complicated and there’s also a lot of room for, and I’m not being critical of product teams here, but there can be a tendency to do some HARKing, which is hypothesizing after the results are known: possibly changing what the primary metric or the hypothesis was after you know the results, to try and make your team look better. That kind of thing does happen a little bit. Or what I’ve seen recently is when you start off with a superiority experiment, you really wanted it to win, and then you change the rules afterwards to say actually it was a non-inferiority test.

It was like, “we were aiming for a flat result anyway, let’s roll it out,” that type of thing. So again, it’s changing the rules afterwards; you see that type of thing. And then lastly, there’s the federated approach, Center of Excellence Federated, which is almost like the third stage. That’s where the experimentation team, the center of excellence, becomes very much just about enablement, support, and tooling, making the life of the product teams easier, and the product teams themselves do all the experimentation.

But this does still rely on good governance and good standards and processes. This is almost like the end game, I think, and probably the one that a lot of large organizations will aim towards eventually.

So, it’s interesting. But again, it’s not fixed; I put three levels here in the center of excellence, but you could easily create another one and give it a different name.

Yeah. I feel like we sit at the moment within the centralized bucket: we moved from a standard CRO team to the CoE centralized model, with the ambition to become hybrid, which is what we’re moving towards. We have some areas of the business that are already hybrid, but we have other areas where we’re not quite hybrid yet. We’re in the process at the moment of rolling out server-side experimentation with Creative CX. So some of our product areas do have server-side set up and running, which means they can build their own tests; they just need support from the experimentation specialists with the analysis. And naturally with coming up with the experiments, because we do that as a group within the OST sessions.

But the majority of the business still sits within the centralized bucket, while we’re starting to build out server-side and move on to the hybrid approach.

Yeah. And I think that’s a really good balance, isn’t it? It does seem like a good way of working.

But yeah, it’s interesting also that the scale of your team will change, I think. That’s always interesting too. The type of skill sets within your team, within the experimentation center of excellence, will vary depending on what stage you’re at. They’ll probably go up and down in terms of the skills that are needed, really, so I think that’s fascinating too.

Yeah. Absolutely. We’ve quite enjoyed it, because it’s allowed people to grow naturally with the team.

I set up the experimentation team within this existing customer side, gosh, almost four years ago now. Since then, we’ve had the team join essentially from the very start, most of them. And it’s allowed people to grow with it, where they’re developing new skills for this centralized model and further skills in the hybrid model. Because when you move into the hybrid model, stakeholder engagement is important throughout the process, but my gosh, it’s extremely important when you hit hybrid, because you’re more into the product space and working even closer with them. So having that really good relationship with product and other stakeholders that you might not have engaged with previously becomes even more important, beyond what we were focusing on before.

And I think the mindset changes too, doesn’t it? Because when you’re on the left in a CRO team, it’s all about the next winning experiment. You’re right in the weeds, as such, and it’s about the win.

It’s about that one thing. And, of course, there’s only so much you can do if you want to have maximum impact on a business. There’s only a few of you, so there’s only so much you can do in terms of uncovering really good problems and solving them through experimentation.

So it makes so much sense to go outward and introduce experimentation to the whole business, to start selling it in, to make everything everyone else is doing, all their results and outcomes, so much better. Personally, that’s what gets me excited. It’s teaching people to fish as opposed to fishing yourself, isn’t it?

Yeah. Absolutely.

We’ve also seen this as we start to expand outside of just web experimentation, because we started primarily just running experiments on the web, whereas we now also run experiments within comms and within the app. And our team, we’re not necessarily trained to set up the experiments within those areas.

So for example, we don’t have the developers to build experiments within the app, and we don’t have people in the team who are going to actually set up the experiments within comms. So, yeah, I completely agree there, Chris. It’s interesting to move more towards the education piece, which is very much where we’re at now: helping to train the other teams on experimentation and what makes a good experiment.

Fantastic. Really interesting stuff to hear, Charlotte. This is all about your vision, and at Creative CX, it's our job to help you achieve your vision and your objectives at Virgin Media O2. And in particular, this is all about the work you're doing in enablement, which follows on really nicely from the last question, actually.

Yeah, absolutely. So I'll talk a little bit about the enablement piece, but first let me set the tone with the vision we're trying to achieve, just to add context. Our vision is essentially to make experimentation happen everywhere throughout the business.

And the reason for that is so we can move away from the behavior of thinking, this is the best idea in the world and this is what's going to be executed. Because the reality is we're all human and we don't always make the best choice, so it's important to run experiments to ensure we're putting the right thing in front of the customers. Now with Creative CX, what we've been doing is essentially setting up server-side experimentation.

What that's going to enable us to do is run experiments beyond just the areas we work within. So, for example, that means running experiments beyond just the web: from web to app, across the chatbots, across the call centers, across comms, so that we can have a joined-up experimentation approach. Which definitely isn't an easy task. It takes a lot, both in terms of educating the business, bringing them around to what experimentation is and how it works.

But also the technical enablement part, to ensure that it's set up properly and people are competent in the implementation of it, which is very much what we're doing with Creative CX.
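One common building block of the kind of joined-up, server-side experimentation described here is deterministic assignment: hashing the customer ID with the experiment ID so the same customer lands in the same variant whether they arrive via web, app, comms, or the call center. This is a generic sketch of that idea, not Virgin Media O2's or Creative CX's actual implementation; all names are illustrative.

```python
import hashlib

def assign_variant(user_id: str, experiment_id: str,
                   variants=("control", "treatment")):
    """Deterministically bucket a user into a variant.

    Hashing the experiment ID together with the user ID gives every
    experiment an independent split, while keeping assignment stable
    for the same customer across every channel.
    """
    digest = hashlib.sha256(f"{experiment_id}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # map hash to [0, 1]
    index = min(int(bucket * len(variants)), len(variants) - 1)
    return variants[index]

# The same customer gets the same variant on any channel:
assert assign_variant("cust-123", "exp-42") == assign_variant("cust-123", "exp-42")
```

Because the split depends only on the two IDs, web, app, and comms systems can each compute it independently without a shared lookup table.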

And something that really underpins it, something we've actually spoken about a lot, Chris, is the education piece, which is where you guys are really supporting in terms of the training. Because you can very well just give people a tool, but you can't expect them to know how to use it, or how to use it to the extent of what's possible, which I feel is a huge benefit of what you guys actually do. And we're doing this both across Virgin Media and O2 and, as I say, across all of the channels.

So it’s a very big mission that we’ve been facing with you guys.

So far, it's going pretty well actually, I think.

Like, it’s set up within the app. We are experimenting within comms. We’re looking at the call center.

We’re implementing it on the web.

So, yeah, the vision's in a good place so far.

Fantastic. And the team are loving it too, so it's been really enjoyable for them. I think it's a nice challenge.

It's great with you guys because, I mean, it's complicated, but it's great because you have all these different channels, different segments, and different platforms to consider. It really pushes the solution to its limits, which is great as well. And I think people forget how important the call center is. I think when you're in customer experience like yourself, it's important you start to see the channels all as one.

They should all be working together to achieve common goals, and it's important that you don't treat them separately. And I think it's only the more advanced, more mature companies like yours that have the vision to do that. But there's so much opportunity to do it that way, because in the users' and the customers' minds, they forget what platform or channel they're using. They just think of it as Virgin Media O2, don't they?

And I want to get an answer to my problem or I want to find out the best package for this.

So in their eyes, they already see it as a single, all-encompassing channel. It's just that in many organizations we silo everything, don't we, and think of things separately.

Yeah, exactly. The customer doesn't care about the silos. They don't know about them.

No, exactly.

But I feel like we're very similar to a lot of large organizations, where there are so many silos.

So experimentation has been, I think, quite a nice thing to start to break down those silos, because you can really see the impact that you're having, with some solid data, which I think is the thing that's really starting to turn it around for the business.

It's that, yes, we're a little bit siloed, but those silo walls are gradually starting to break down. It's not something that happens fast, at least not at Virgin Media O2, but it does help to chip away at them, it certainly does.

Yeah, and even with your ambition around experimenting everywhere, on different platforms, which is absolutely something we believe in very strongly, even just getting tests on apps is interesting. I was running a conference session last year actually, at one of the big conferences in Europe, and I ran it on app testing: why aren't people testing on apps?

And there were tons of really experienced experimentation programs in the group, and we were talking about this. They're very mature programs on the web, but hardly anybody was testing on apps. It was quite shocking really, but it's because it's just that bit more technically challenging, and it doesn't sit naturally, especially with client-side A/B testing tools.

They've never really made that side of things easy. It's a completely different mindset, isn't it? You have to embed experimentation into the current team who work on the app, rather than have that, say, CRO-team methodology of coming in to optimize an existing team's work.

And I think that's the barrier sometimes, that kind of technical barrier.

Yeah. We certainly saw that technical barrier.

One thing for us that we felt really helped was the planning.

So we're lucky that the product owners in the app are phenomenal with their planning. One of the product owners we work with is very bought into experimentation, and he was super keen to embed it within his roadmap. Because to your point, Chris, there are so many additional complications where you need to think about release windows. If you miss a release window, it rolls over into the next one, and then it impacts your experimentation velocity. And if you communicate that too widely without enough context, it can make a lot of people upset. So it's about ensuring you communicate it in the right way, because app experimentation is so exciting, right?

But the communication of setting that piece up, and ensuring that you're planning ahead as far as possible, is something that we found really helped with those complications.

The tech leads as well have been super supportive in this, which has been great. They were brought along on that journey, so they've naturally been quite bought into experimentation within the app. They're involved in our evaluation sessions, so after we run an experiment, they can see what the impact was. And that was something that really excited them and got them bought into what the next test is going to be, so we can see that result.

And you hit the nail on the head there, didn't you? They're brought in early to the whole idea, to the whole conversation. And I think that's the key. That's also where things can go wrong sometimes, when people aren't brought in.

People are just told, this is what's happening now, suddenly you need to do this extra task in your development. But if you're brought along on the whole concept, and why you're doing it, early on, then you're winning the hearts and minds of those key individuals, aren't you? And that sounds like what you've done over there, getting them on board first, because that doesn't always happen.

Sometimes we've seen that with the designers and engineers.

If it's delivered to them the wrong way, they can think it's just another task. Almost like, oh, it's something else I have to do: I have to make this an experiment, or I have to do this extra implementation work.

Yeah.

So it's really interesting, isn't it? The people come first, really, don't they? The mindset comes before any of the technical stuff.

Yeah, absolutely. And one thing that's interestingly quite different within the app, and something we're keen to roll out across the rest of the products we work with, is that the tech lead within the app is involved in the OST sessions, and they have a level of accountability within those sessions. I think that's what changes it, because they see what the customer problems are, and they also participate in coming up with the experiment ideas and the solutions to try to solve them. Their part is to think about the technical complications as part of that conversation. So they're bought in right from the very start: they get excited about the experiments they're going to be building, and then they get to see the result at the end.

And to be honest, I think that's maybe the secret sauce: the communication, bringing people along on the journey.

I think that's great advice for everyone. Fantastic. So I think we've covered a lot of things, haven't we? But I wonder if there was anything more around that, because those last few snippets of advice are great for anybody, if they can start to involve people from all disciplines in the problem they're solving, in the ideation sessions. To be honest, that's why we love collaborative ideation sessions. What we've done a lot of for a few years now is our problem-first collaborative ideation sessions, where you don't go into a room saying, look, we're going to optimize the product details page, with no insights, no objective, just optimize this, get more people to buy from us. You can end up with the worst ideation sessions when that happens.

Everybody just goes to their competitor sites and starts pinching ideas, and you end up with very simple things: button colors, make it sticky, make it flash, make it whatever.

And then it's devoid of meaning, really.

But with our sessions, we get the UX researchers from the client, we get all the insight teams, to find, in advance, the most important problems along the whole journey, and then they present that at the start of the session. So everybody ideating is ideating to solve a real, known problem, and you end up with all the potential solutions you can experiment with one hundred percent focused on a problem. Which is very different to the other method of cherry-picking data to support your idea afterwards.

And again, it comes back to what you were saying, Charlotte, around getting people involved, because if you invite the tech leads to those sessions, it'll make all the difference.

Yeah, I feel like those are some super key points there. Communication is so key: bringing people along on that journey with you, so they feel like they're a part of it. And naturally, they then feel more bought into it, because they see their test idea go live and they're like, that was my idea. Then naturally, they're more bought in along the way.

It starts even beyond that, right, starting with the customer problem statements and really getting into them, trying to understand how you can solve for the customer, as opposed to thinking about it as a whole customer base. How can you solve that problem for a specific group of customers, to really try to make a difference for them? Which I suppose is just communicating with the customer, really. Communication is super key.

It's the concept that there's something about A/B testing which, in the past, has almost encouraged people to take shortcuts, to rush to the end too quickly. Because it's amazing, isn't it? You can test all kinds of solutions and actually see real uplifts in your business metrics, so it's tempting to just start at the end: we just want to make more money now, guys, how can we make more money with these ideas? But the key thing for selling more, for getting more sign-ups, for answering people's questions, for reducing calls in the call center, is to remove the obstacles, isn't it?

It's to understand what the real user needs are at that moment in time with these customers, and to address them. I mean, that's what it's all about. There's no shortcut: you have to put in the hard work to find out what those problems and opportunities really are in the first place.

Yeah.

Absolutely agree. The opportunity solution tree really helped us with that, in terms of thinking about the customer first and foremost, but also helping to break down those silos we've spoken about. Because when we were looking at the OSTs, we started by creating multiple OSTs for all of the different product areas. But over time, we realized there's such a heavy overlap in the customer problem statements. We've got overlapping problem statements on web, app, call center, chatbot. Who would have thought it? So being able to see it so visually in front of you really helped us move that needle, helped with the communication, and broke down the silos across all of those different teams, getting them started with experimentation.

And who produces them? Who manages the OSTs?

So we started with experimentation owning them.

However, this has very much evolved since then. So we started by championing the OSTs, but now they've moved over to sit with product, which is something that we championed for.

And I do really believe it's something that should sit with product. They're the product owners; they should be the ones in that data constantly. They're there to understand the customer.

And it makes sense for it to sit with the product owner, who owns it for their product.

Experimentation still participates in those sessions really heavily, but, yeah, it's driven by the product team now.

Which comes all the way back, doesn't it, to the operating model? To championing something first, a very experimentation-focused technique, and then educating and handing it over to the product teams to manage themselves, which is moving towards that hybrid.

I think that's fantastic, and I think that's a really good way to end, actually. Charlotte, it's been wonderful speaking to you, as always. I just want to say a massive thank you for coming on this webinar today. Thank you so much.

Thank you, Chris. It's been a pleasure. It's always lovely chatting to you about experimentation. We could chat about it for hours.

Speakers

Chris Gibbins


Chief Experience Officer, Creative CX

Charlotte Golding


Head of Experimentation, Virgin Media O2

