
Designing Customer-Centric Experiments: Insights from Specsavers

Explore Melanie's strategies for crafting customer-focused experiments at Specsavers, enhancing digital journeys and scaling innovation.

Summary

Melanie Kyrklund, Global Head of Experimentation at Specsavers, oversees the optimization of digital journeys for booking appointments, ordering prescription glasses, and contact lenses. Starting just before the pandemic, Melanie transitioned the experimentation strategy in-house, building a team to manage deployment and support capabilities like global testing tools, server-side integration, and analytics. Her focus is on scaling experimentation, enabling teams to conduct their own tests, and moving towards a center of excellence approach.

Melanie emphasizes the importance of aligning experimentation with user needs, business partnering, and integrating with product development. She advocates for embedding experimentation into daily workflows to foster a culture of experimentation and recognizes the need for patience in seeing results in large organizations.

Key Takeaways

  • User-Centric Approach: Melanie uses a User Needs framework, focusing on user research to identify key customer needs and tasks, which guides the experimentation process.
  • Team Structure: The experimentation team at Specsavers is small but works closely with regional optimization teams, product owners, and an external agency. The goal is to scale up and enable more in-house experimentation.
  • Balancing Data and Intuition: While data-driven decision-making is crucial, there's room for human intuition and expertise, especially in the discovery phase or when data alone doesn't provide clear direction.

Transcript

[00:00:00] Ishan Goel: Hello everyone. Welcome to VWO’s annual virtual summit for growth and experimentation professionals. I’m Ishan Goel. I am the Associate Director of Data Science at VWO. Thousands of brands across the globe use VWO as their experimentation platform to run A/B tests on their websites, apps, and products. 

[00:00:22] Today we have here, Melanie, who is the Global Head of Experimentation from Specsavers.

[00:00:28] I’m very excited to have you here, Melanie. Thank you for joining us. 

[00:00:34] Melanie Kyrklund: Likewise, thanks for inviting me. 

[00:00:38] Ishan Goel: We’ll be discussing with Melanie some questions about her work, about how they use experimentation, and how their experimentation journey has gone. So Melanie, it would be great if you could start by explaining your roles and responsibilities as the Global Head of Experimentation at Specsavers.

[00:01:00] Melanie Kyrklund: Sure, I think first of all I can give a bit of more information on Specsavers as a company. It’s a large optical retailer. We’re present in 11 countries now. And within those countries we run a network of thousands of stores where customers can go to for their eye health checks, prescriptions, and also hearing checks.

[00:01:22] And in my role as Head of Experimentation, I’m really focused on the optimization of our digital journeys where we help customers book their appointments in the store, order online prescription glasses, which obviously is a newer market to be in, as well as contact lenses. And I’m part of a global function that’s servicing both product teams, like product owners, as well as regional commercial teams.

[00:01:48] I started there three years ago. Just before the pandemic. So very weird time to start a new job. I think it was a month before we went into lockdown. And at that time I was the sole lead on conversion rate optimisation, working with an external agency. And what I did was bring experimentation strategy in-house.

[00:02:09] So the actual understanding of the areas to go after, and the tests we want to run, sits in-house. And over the past three years I’ve built out a team. So I have some expert specialists working on my team, helping with the deployment of tests. And on top of that it’s about building all the supporting capabilities we need in order to scale experimentation.

[00:02:32] So making sure we have one global testing tool that is correctly implemented, starting to work on the integration server side with engineering, which has been really exciting and making sure we’re running pilots with recommendations, with personalization, and also looking at experimentation analytics.

[00:02:54] So we’re a very busy team. And I think our vision for the next couple of years is to really start scaling experimentation and enabling teams to do it themselves. So moving towards that real center of excellence approach. So we’re at an exciting time where we’ve built really mature operations and we can take it a step further to further impact the business, and the way we work.

[00:03:17] Ishan Goel: How big is the experimentation team at Specsavers currently? 

[00:03:23] Melanie Kyrklund: So, currently, I only have a team of two, who are fully servicing all the regions and the product teams. We also partner very closely with regional optimization teams as well as the product owners. And we have an external agency that’s helping us with the actual development and deployment of tests as well as ad hoc research. But the plan is to bring more of that in-house over the next year as we scale. Because we’ll really be moving towards enabling experimentation within teams as opposed to the real doing of the experiments that we’ve also been focused on.

[00:04:04] Ishan Goel: You’re taking help from an agency for the research and the designing of experiments and all those things, as I understand. I would want to ask you a question there. What strategies and frameworks do you use to ensure that all the experiments that are designed are customer centric? 

[00:04:21] Melanie Kyrklund: So there’s a few.

[00:04:23] The first one that springs to mind is the one I typically use if I’m running an experimentation program with a broad remit, where I have full ownership of the experience and there’s a lot I need to understand end-to-end. And the one I’m referring to is what I call the User Needs framework. So that’s actually using user research a lot, listing what the key needs are of the customer and the tasks that they need to do, and grouping those together to create buckets that you can really dive into.

[00:04:57] So to give you an example, when I had to start working on the online glasses market, and obviously that is quite a new capability for Specsavers, online prescription glasses, and overall it’s a newer market to be in, one of the key user needs I dived into was product discovery. So customers, and this is quite a common one, customers need to find a product that’s suited to their needs.

[00:05:25] The second is around size and fit. Users need to understand if those glasses fit their face, if they’re correct in terms of sizing. 

[00:05:33] And another one would be Specsavers customers needing to locate their prescription and entering it online.

[00:05:40] So this is just an example of the few of the user needs that were then pulled out. And then research was conducted into every single area, both qualitative and quantitative. And that allows you then to really create an in-depth experimentation track around that user needs and be able to really go after it cohesively and create a reliable body of knowledge that can then be fed back into product development.

[00:06:09] Because in the absence of that, your efforts can become quite fragmented across the user journey. So yes, User Needs is my go to approach. 

[00:06:22] Ishan Goel: That’s quite interesting. So is it that you come up with a list of hypotheses from that research? Do you reduce them down to a list of hypotheses that you want to test out?

[00:06:34] Is that how it works? 

[00:06:37] Melanie Kyrklund: Yeah, absolutely. So as you go into areas, you’ll start understanding what the specific pain points are. You’ll get insight into that, and then you’ll create hypotheses that tie back to that insight. Then downstream that will create solutions that you start A/B testing in order to address those pain points or needs that you’ve identified. 

[00:06:58] Ishan Goel: A lot of times, our knowledge of what the user wants lies with human expertise.

[00:07:06] And a lot of times we want to trust the data instead, knowing that a human might not know best what the customer wants. So how do you balance data-driven decision making with the importance of human intuition and expertise in the experimentation process? 

[00:07:23] Melanie Kyrklund: Sure. So it’s a very interesting question.

[00:07:25] I’d say that the two can coexist. If you look at the experimentation process, and if we take the discovery process as an example, sometimes you’ll run qualitative and quantitative research, and the two will sort of marry up and give you a clear picture of what the problem is. However, sometimes if you’re in the discovery phase, for example, you might only have analytics to rely on, and then the why you are seeing things becomes less clear cut.

[00:07:59] And there, I think you can rely on your gut, on your expertise, or even on your creativity to make that jump between what I’m seeing and why I’m seeing it. The second area is around Prioritization. And actually I was just discussing this with my team, because mostly you’ll have a backlog, which is really based on what’s been backed up by data, research, and business objectives that you need to go after.

[00:08:29] However, sometimes some of those ideas might not be ready to go into development and you have resource available. So then I think it’s fine to start also prioritizing work that’s based on expertise. 

What you know has worked before on other sites that you might have worked on, and it’s just worth a shot. If it’s a really easy change, again, it’s probably worth a shot, because the more tests you run, the more probability you have of unearthing things which are influential and important to the customer. Or you could run something on gut feel. It shouldn’t be that your whole program is based on that, but if you have a backlog which is 80% research-based and then you allow for a bit of flexibility for that 20%, then I think that’s a perfectly valid approach.

[00:09:17] I see the two coexisting. 

[00:09:18] Ishan Goel: I want to ask a question there. Is it sometimes a barrier to experiment things when things have to go out fast? Like there might be deadlines, and you need to make changes to the product. Do you experiment with everything or there are some things that you choose to experiment with and some things that need to go out fast that you don’t want to experiment with?

[00:09:43] Melanie Kyrklund: Well, it varies. Some things just get done to be honest. We’re not in a position where I think we have the requisite traffic to just experiment on absolutely everything. 

[00:09:56] Ishan Goel: That’s quite interesting.

[00:09:57] So what are some common barriers and resistances apart from this that you have encountered in establishing the culture of experimentation in your company? And how have you solved those barriers for the culture part of things? 

[00:10:16] Melanie Kyrklund: I’ve worked at a few large organizations in experimentation. So one thing I’ve seen come up repeatedly is experimentation remaining siloed. So experimentation will typically grow out of one team or a few teams.

[00:10:40] It will be focused on client side testing. And over time you’re running a considerable amount of experiments, but not a lot of them are being hard coded. And there’s a disconnect between what you’re doing on the front end and what’s happening in engineering. And this situation can go on for years or a couple of years and it’s suboptimal.

[00:11:04] And I’ve realized that the best way to overcome this is to have those discussions with product and engineering as soon as possible, and find stakeholders who will support your cause of integrating experimentation within the engineering processes, doing the work to go server side, and potentially changing the way you do feature rollouts as part of that in order to validate them. So I would say integration with product engineering is the best way to overcome this silo and really start embedding experimentation into day-to-day workflows and ways of working.

[00:11:46] And then I think the second barrier I’ve found to an experimentation culture, and in answer to this question, I’m thinking of culture also as in the way things are done, is when teams get excited about experimentation and aren’t doing it very well, to be honest. So you’ll see them maybe not adhering to the right statistical standards.

[00:12:17] The hypothesis generation may be poor. So I think having teams go off on a tangent and start doing experimentation in a suboptimal way also kind of goes against the creation of that sort of transparent culture. So I think the way to overcome that is the center of excellence approach.

[00:12:45] Whereby there’s active training, documentation, and best practices that can be embedded into the team. And it’s a known role within the organization that the center of excellence will be taking on these responsibilities of training and guiding people through experimentation. 

[00:13:04] Ishan Goel: The two barriers that you explained, one is the silos problem and the other one is not knowing how to correctly execute experimentation.

Do you have a central team that is responsible for analyzing, reading, and understanding the data, or is everyone who runs their own experiments in charge of it? 

[00:13:30] Melanie Kyrklund: Yeah. Interesting question because we are going to start changing the way we do this over the next year, also because of you know, GA4.

So what we have is common standards when it comes to statistics. It’s agreed that for teams running experimentation, be it at a regional level or a group level, we apply frequentist methods. There’s X% statistical power and X% significance, which are standard ways of evaluating the experiments.
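As a rough illustration of what those frequentist inputs imply in practice, the sample size needed per arm can be sketched with a standard two-proportion z-test formula. This is a generic sketch, not Specsavers’ actual calculation, and the conversion rates below are made-up numbers:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_arm(p_baseline, p_variant, alpha=0.05, power=0.80):
    """Visitors needed per arm for a two-sided two-proportion z-test
    (normal approximation), at the given significance level and power."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for 95% significance
    z_beta = NormalDist().inv_cdf(power)            # ~0.84 for 80% power
    variance = p_baseline * (1 - p_baseline) + p_variant * (1 - p_variant)
    effect = p_variant - p_baseline
    return ceil((z_alpha + z_beta) ** 2 * variance / effect ** 2)

# Detecting a lift from a 10% to an 11% conversion rate needs on the order
# of 14,000+ visitors per arm, which is why lower-traffic markets force
# longer test durations:
n = sample_size_per_arm(0.10, 0.11)
```

The calculation makes concrete why, as discussed later in the session, markets with less traffic have to accept longer test durations: the required sample size grows with the inverse square of the effect being detected.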

[00:14:09] And then the reports are generated using Google Analytics data within Google Sheets. Now, when it comes to my team, they spend a lot of time with stakeholders within the digital product teams or the regional commercial teams, following up with a full report on the experiment and discussing the outcome of the experiment, and what the next steps are.

[00:14:33] Now with the move to GA4 and our desire to move towards more of a center of excellence approach, we’re looking at ways we can create standard templates in-house. And I think automating the output of reports, and the way a decision is presented within the report, will help safeguard the way experimentation is done within the business.

[00:15:02] And also enable the scaling of individual teams and their ability to run experiments. So that’s going to be a big focus for us, looking at creating Power BI reports to serve individual teams over the next year, rather than us doing a lot of the analysis and then the outcome discussions with several stakeholders. 

[00:15:31] Ishan Goel: So from the process that you have at Specsavers, you keep the entire interpretation and statistics in-house and are not relying on third-party services for that. Is that the case? Or do you calculate the statistics and all the numbers in-house only after the data is collected?

[00:15:52] Melanie Kyrklund: We have an agency that helps us with the templates. Those are standardized. So we have a reporting dashboard and then there’s an output report off the back of that. Now the actual interpretation of the results and the follow up on the next steps is something we do in partnership with the agency and with our regional and digital product owner stakeholders.

[00:16:18] For the most part, the strategy, the next steps is owned by Specsavers. So we’ll take the data as it comes in, we’ll evaluate it but really understanding what that means to us as a business is something we do internally. 

[00:16:32] Ishan Goel: So has there been a time when an experiment resulted in a failure, or the interpretation was wrong? How did you deal with it?

[00:16:44] Melanie Kyrklund: On the first part of the question, which is failed experiments, I guess this is just a part of life as an experimenter: you’ll see a neutral result, or a negative one. Obviously, if it’s something that’s taking a strong downturn from the outset, then it’s probably pretty clear that something’s gone wrong with the actual execution of the experiment. So let’s look at an experiment which is run correctly.

[00:17:08] Every experiment is an opportunity to learn and to iterate on your approach. Upon having a result of any sort, obviously the behavioral analysis and the deep dives that you go into, will help shape what the next step will be and understand why you’re seeing that result and how you can refine your hypothesis or move to a different one.

[00:17:35] So that’s just an intrinsic part of working. 

[00:17:41] Ishan Goel: Embedding that culture into an organization is very difficult: having everyone who runs experiments adopt the mindset that a failed experiment is something that can help you in the future, and not something you have to shut down very quickly.

[00:18:00] Was that very difficult at Specsavers to build that culture? 

[00:18:04] Melanie Kyrklund: No, not at Specsavers, actually. I think there’s an acceptance of experimentation company-wide, and also that comes top-down. And an openness towards really focusing on the customer, because that’s what it is about.

[00:18:23] It’s not about your own attachment to a feature and success, and wanting to deliver something just because it’s in the plan. It’s about being customer centric and making the right decisions for the customer. And I think that’s at the heart of Specsavers as a business. 

[00:18:46] Ishan Goel: How does cross-functional collaboration of teams help in the experimentation process at Specsavers and contribute to better outcomes?

[00:18:55] So teams would be learning from each other in the experimentation processes. Yeah, how does that cross functional collaboration help? 

[00:19:04] Melanie Kyrklund: Well, I guess being a group function or a global function, cross functional collaboration is probably what we do. And so we intersect with the whole business. We have digital product owners where we’re actually more embedded into their ways of working.

So as business objectives come in and user insights come in, there’ll be a discussion with the product owner, the experimentation analyst, the business analyst, and, you know, regional stakeholders, in order to understand what the best possible solution is to address some of these objectives or insights we’ve unearthed.

[00:19:50] And then similarly, we interface with the regional teams as well. The commercial teams are really responsible for trading and the results that they garner from their digital experiences. And then the collaboration there is different. It’s about looking at a business objective and really partnering with them so that they can meet their business objectives and that is more of a localized approach. 

[00:20:13] Whereas in global product development, obviously we have to understand the regional nuances, but it’s all about scalability and coming up with a strong global solution that everyone can benefit from. Interestingly, we also have a lot of teams that come to us for pilots. So, for example, the product team responsible for the way our frames are shown on our website might have some ideas around the display of the frames using lifestyle imagery, for example, and then we can collaborate with them to really break down a pilot or business case that can help inform the next steps of what they need to do.

[00:20:59] So yes, I think cross-functional collaboration is at the heart of our experimentation process because we’re a global function. 

[00:21:07] Ishan Goel: Can you share a specific example of an experiment that you ran at Specsavers that challenged the existing assumptions of the company, or generally the results were very surprising to what you guys knew before?

[00:21:20] Melanie Kyrklund: Well, I think what I find most surprising is the conflict between what users tell us they want and their behavior once we give them what they have asked for. This has happened both within appointment bookings and in the eCommerce flow: we’ll see a substantial amount of feedback about information that’s missing, to the point that we then prioritize it for experimentation, to help inform customers in their buying journey or their booking journey.

[00:21:56] But when we actually run an A/B test to introduce this information, we’ll see a decrease, like a significant decrease in conversion. So actually that additional information is causing some sort of friction within that decision making process. Even though it’s very much aligned to the feedback we got on what was missing in their evaluation of a booking journey or an eCommerce journey. 

[00:22:25] So I think this is what I’ve consistently seen as most surprising. It’s the disconnect between the two. 

[00:22:33] Ishan Goel: You are pointing to a very prevalent problem in the business space. Repeatedly you’ve seen that what customers tell you when you talk to them, and how they actually act on the product, are contrasting and different. Is it very common, or does it only happen once or twice? 

[00:22:56] Melanie Kyrklund: What I’m highlighting is that this was a substantial amount of feedback. So if you look at the top three feedback buckets we get from our voice of customer.

[00:23:12] This request was coming from the top three buckets of customer feedback. We thought it would have an impact because it was so prevalent as feedback. So I guess that’s more where the dissonance for me takes place. It’s not the prevalence; it’s just something that was really asked for, and then it didn’t work.

[00:23:37] Ishan Goel: The process of collecting and prioritizing what customers ask for, specifically for this surprising experiment, did that change as well? Or could it not be changed or improved any further? 

[00:23:51] Melanie Kyrklund: The rationale behind us developing that and prioritizing that was correct.

[00:23:55] It’s just, you know, humans can be irrational in their behavior, as we very well know. And this specific feedback we’ve got, we haven’t completely parked it. We still think that there’s something that can be done in terms of execution, how we do it, where we do it. So our thinking in this space will evolve.

[00:24:19] Ishan Goel: What methodologies and techniques do you use at Specsavers to generate innovative ideas for experimentation and avoid falling into predictable patterns? 

[00:24:31] Melanie Kyrklund: Yeah, good question. I do have to be honest and say that my team is very much involved in the exploitation phase predominantly. So, existing products: how do we continue to optimize and develop them, introducing features of varying amounts of complexity. So that is very much the remit of my team.

[00:24:53] Now we do have a product owner who is completely dedicated to new technology and piloting that within our markets, and where possible we partner to help out with these pilots. And we also have an innovation hub at Specsavers, which is focused more on that mid-to-long-term horizon of innovation, again based on technology advancements that are happening, as well as market forces that have been identified. 

[00:25:24] So my remit really is within that exploitation phase as opposed to innovation. But, having said that, I think we also constantly look at ways we can improve the way we do experimentation. It’s not just about innovation in the test ideas.

[00:25:41] So for example, at the moment we’re very much focused on how we can make decisions more quickly within the experimentation program. So that involves looking at perhaps different statistical methods so that we can come to conclusions a lot faster as we don’t always have a lot of traffic in our markets to enable very rapid experimentation.
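One family of methods that allows decisions sooner than a fixed-sample test is sequential testing, where the data is evaluated as it arrives and the test stops as soon as the evidence is decisive. As a minimal sketch of the idea (this is a generic illustration, not the method Specsavers or VWO uses; the rates and thresholds are hypothetical), Wald’s sequential probability ratio test for a conversion rate looks like:

```python
from math import log

def sprt(stream, p0=0.10, p1=0.11, alpha=0.05, beta=0.20):
    """Wald's SPRT for a Bernoulli conversion rate: update a log-likelihood
    ratio after every visitor and stop as soon as it crosses a boundary,
    instead of waiting for a fixed sample size."""
    upper = log((1 - beta) / alpha)   # crossing this: conclude rate is p1
    lower = log(beta / (1 - alpha))   # crossing this: conclude rate is p0
    llr, n = 0.0, 0
    for n, converted in enumerate(stream, start=1):
        llr += log(p1 / p0) if converted else log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "variant", n
        if llr <= lower:
            return "baseline", n
    return "inconclusive", n

# With a large true lift the test stops very early:
print(sprt([1, 0, 1, 1, 0, 1, 1, 1], p0=0.10, p1=0.30))  # ('variant', 4)
```

The trade-off is that sequential boundaries must be wider than a single fixed-sample threshold to keep the false-positive rate controlled, but on average they conclude much faster, which matters most when traffic is scarce.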

[00:26:08] Ishan Goel: I definitely appreciate your honest answer there. And the way you have segregated it into exploitation and exploration. 

I have an interesting follow-up question. When you say exploitation, what is this existing repository of ideas that you’re exploiting? Is it from past experiments, or from seeing what competitors or other people are doing on their products, or from talking to customers?

[00:26:35] So can you expand a bit on what corpus are you exploiting on? 

[00:26:42] Melanie Kyrklund: It’s all of those. It’s understanding how we can improve the performance of our existing journeys. And yes, there’ll be various methods we use in order to come up with ideas. So obviously there’s a qualitative and quantitative analysis and combining the two in order to understand how we can address known pain points and continue to iterate on the experience we have.

[00:27:10] So yeah, I think that’s at the core of the exploitation. It’s a constant examination of how our digital journeys are performing, what customer feedback is coming in, also what business objectives we need to attain within markets and how we can help achieve those. So it’s a balance of all those areas and continued optimization.

[00:27:36] Ishan Goel: Interesting. Quite interesting. So you are slowly planning to move further: as the low-hanging fruit gets settled, you’ll probably move into exploring more uncharted territories, as I’d say. 

[00:27:50] Melanie Kyrklund: When it comes to online prescription glasses, there’s more room for exploration, right?

[00:27:56] Because it’s a relatively new market. So we are still at the core trying to understand who our users are from that perspective, who wants to buy glasses online. We know they’re quite different to our store customers, for example, their needs are quite different as well. So I think within that remit, there’s definitely a lot more room for exploration. But if you look at other parts of the journey, such as the appointment booking journey, that’s really about exploiting and optimizing, introducing new features to quite a mature product track within, let’s say our portfolio at Specsavers.

[00:28:41] Ishan Goel: Interesting. Thank you. Thank you for throwing light on that. Moving forward, I have a question that generally, when experiment results are generated, rich experimentation culture demands that results are discussed and communicated efficiently to all the stakeholders. So what infrastructure or frameworks or processes do you have at Specsavers to ensure that?

[00:29:08] Melanie Kyrklund: As mentioned, my team is in constant contact with all the teams that are experimenting. So experiments are discussed at length within those sessions, and learnings from other markets can be shared where relevant. This occurs both with commercial teams and product owners.

[00:29:30] And then on top of that, there’s quarterly reporting that goes a layer up. That also happens. So yeah, we’re set up in such a way and embedded in a lot of the teams and have regular catch ups where those results are communicated. 

[00:29:53] Ishan Goel: How many experiments in a month are you guys running because that also determines how much can this be discussed and communicated?

[00:30:01] Melanie Kyrklund: Yeah, exactly. So, we’re hitting about 100 a year now, which I think is good given that we do have to adhere to longer test durations. That’s just a function of the traffic you have in certain markets. And that’s I think why we can still handle those regular meetings with all the stakeholders.

[00:30:34] But I think we are reaching a phase where in order to scale, it’s become clear that we need to start handing over the experimentation to the individual teams and give them the tools to run more of it themselves. I think a lot of organizations who have reached a certain level of maturity within the operations come to that point where we think actually we’ve become the bottleneck to experimentation, scaling, because we can’t possibly ideate, build tests, analyze, and evolve the strategy of the program with a central team. And the minute you start putting experimentation into individual teams, you get a lot more ideas, you get a lot more tests, and a lot more impact because individual teams can really focus in on the KPIs that matter to them in a lot more detail, a lot more surgically. 

[00:31:35] So building the analytics infrastructure that I was speaking to you about, once we move to GA4 within Power BI, will help to drive the adoption of experimentation within teams, along with common standards around the way experiments need to be interpreted, which is very important. 

[00:31:57] Ishan Goel: That’s quite a good number to start with. So would you be running a lot of experiments in parallel as well, on the same pages, or are they mostly exclusive? 

[00:32:06] Melanie Kyrklund: Well, we’re running tests across different parts of the website. So we have the booking flow, we have glasses journey, as well as contact lenses and different markets. We don’t come across running experiments concurrently that often. However, it’s not something I’m averse to. 

[00:32:27] Ishan Goel: So when you do run experiments concurrently, it works fine? 

[00:32:32] Melanie Kyrklund: We’re not Booking.com.

[00:32:34] So we don’t have to start introducing all those checks for early detection of potential interactions. If you have two experiments running on a product page, I think for the most part you can just let them run. 
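For teams that do want such a check, one common approach is to test whether one experiment’s lift differs depending on which arm of the other experiment a visitor saw. This is a generic difference-in-differences sketch with hypothetical counts, not a procedure described in the session:

```python
from math import sqrt
from statistics import NormalDist

def interaction_z_test(cells):
    """Difference-in-differences z-test for an interaction between two
    concurrent experiments. cells[(a, b)] = (conversions, visitors) for
    arm a of experiment A (0=control, 1=variant) and arm b of experiment B."""
    rate = {k: conv / n for k, (conv, n) in cells.items()}
    # Does B's lift differ between the two arms of A?
    diff = (rate[(1, 1)] - rate[(1, 0)]) - (rate[(0, 1)] - rate[(0, 0)])
    se = sqrt(sum(rate[k] * (1 - rate[k]) / cells[k][1] for k in cells))
    z = diff / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical counts where B lifts conversion by ~2pp in both arms of A,
# i.e. no interaction, so the p-value is close to 1:
cells = {(0, 0): (100, 1000), (0, 1): (120, 1000),
         (1, 0): (110, 1000), (1, 1): (130, 1000)}
z, p = interaction_z_test(cells)
```

When the test finds no significant interaction, the two experiments’ results can be read independently, which matches the pragmatic stance of letting them run.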

[00:32:51] Ishan Goel: I would want to understand, from your journey leading experimentation efforts at Specsavers, what were your key lessons or insights that you would like to share with companies starting to embark on their experimentation journey? 

[00:33:08] Melanie Kyrklund: Yeah, so I guess these learnings have come from working in a few large organizations now on experimentation.

[00:33:17] One is that business partnering is really important. So you have to really understand what the product teams are trying to achieve, what the commercial teams are trying to achieve, and align your experimentation activities to those to help them meet those needs. And that’s very crucial towards building an experimentation function that is valued within the organization.

[00:33:46] So business partnering first and foremost.

[00:33:49] Obviously making sure that the program is relevant to user needs as well. So we discussed that framework and how you can anchor your program within known user needs. 

[00:34:03] Secondly, the integration with product development going server side is absolutely key.

[00:34:09] And experimentation culture is created when it’s embedded in the ways of working. You can have a solid experimentation program which is doing quite well, and you can have support from management, but in the end it’s not going to change the culture and the ways of working if you remain siloed.

[00:34:27] So integration with product development and engineering is key. 

[00:34:31] Another point is around scaling experimentation: knowing when it’s time for your team to step back from the actual running of experiments and understand that you’ve become the bottleneck. And that in order to have more ideas, more impact, and to scale experimentation, you really have to start putting experimentation into the teams, creating that autonomy to run experiments within teams. So coming from a background of running experiments, moving to enabling experiments is quite a shift for many experimenters. 

[00:35:09] And then I think lastly, it’s just patience. You need patience. If you think about it, change takes time, especially within large organizations.

[00:35:18] If it takes on average 10 years, and you’re there for, I don’t know, maybe three years within a large organization, you’re probably not going to execute your vision within those three years. So I think you need to have a realistic view of how long the change is going to take and remain focused. Yes, and have patience to see the results you want.

[00:35:41] Ishan Goel: One broader question: what books are you currently reading? If you are a book person. 

[00:35:50] Melanie Kyrklund: My team will laugh when they hear about this, because I’ve been going on about long-format reading now. A few months ago, I realized that through social media, you’re just always receiving small snippets of information.

[00:36:07] And you don’t necessarily take the time to digest any of the topics or really delve into them deeper. So, I decided that I was going to go long format only and start picking a subject and then reading a book or going into blog posts about it. And my topic was AI. 

[00:36:26] So the book I’m reading is by a lady called Melanie Mitchell.

[00:36:30] She’s a professor in artificial intelligence. And it’s called, Artificial Intelligence: A Guide for Thinking Humans. 

[00:36:36] So it’s about the history of AI, how it works, ethical considerations and so forth. And I read all of 70 pages of the book so far. So I’m making very slow progress. I’m trying to stay away from social media so that I can give it the focus that it needs.

[00:36:58] However, it was published a few years ago. If you look at large language models and neural networks, and how those evolved, that’s not in there. I have to read online about that side of things. So yeah, artificial intelligence is the topic. 

[00:37:18] Ishan Goel: I can understand. And the field is moving so fast. 

[00:37:20] Melanie Kyrklund: It is. Yeah. Yeah. Everyone’s all over it at the moment. Yeah. 

[00:37:24] Ishan Goel: Yeah. But that’s amazing. I have a wonderful book recommendation from you as well. And that’s great. We had a lot of learnings from you, Melanie. 

[00:37:34] Melanie Kyrklund: Thank you. Thank you for inviting me and for this conversation. I appreciate it. 

[00:37:41] Ishan Goel: Thank you. Thanks a lot.

Speaker

Melanie Kyrklund

Global Head of Experimentation, Specsavers
