How A 250-Year-Old Company Adapts To The Changing Consumer
Encyclopaedia Britannica combined customer research and testing to recognize their changing reader base and bring their consumers new products and on-site tools for a better learning experience.
Mani: Welcome to ConvEx, where we are celebrating experiment-driven marketing. My name is Mani and I handle product marketing at VWO. Today, we have with us Elizabeth Romanski, Manager of Consumer Marketing and Analytics at Encyclopedia Britannica. Encyclopedia Britannica, of course, needs no introduction; it has been at the forefront of the information revolution for the past 250 years. I am very happy to have you with us today, Elizabeth. How are you doing today?
Elizabeth: I’m great. Thank you very much for having me.
Mani: So before Elizabeth starts her presentation, I want to inform all our viewers that you can join ConvEx’s official LinkedIn group and ask all the questions you might have from this presentation there. With that Elizabeth, the stage is all yours.
Elizabeth: Wonderful. Thank you so much. Not many companies can say they have successfully survived the business world for more than 250 years. It’s no small feat. It requires constant adaptability, ingenuity, research, drive, patience, and a dose of daring.
Encyclopedia Britannica just celebrated its 250th anniversary last December, and while we may look quite a bit different than we did at our beginning, we have remained true to our mission ever since 1768, and for the next 250 years we will continue to inspire curiosity and the joy of learning in our customers. To ensure that we do this, we must always be user-focused. Recently, we reset our assumptions about who our users were and what they wanted from our consumer site britannica.com. But our company was founded around the idea of creating a new encyclopedia for the general reader, so we were always user-focused, whether we knew it in those early years or not. In 1768, two entrepreneurs, Colin Macfarquhar and Andrew Bell, and one editor, William Smellie, decided to go against the dictionary and encyclopedic conventions of the day by printing the first-ever Encyclopædia Britannica. Their goal with this new style of encyclopedia was to be more utilitarian for readers. With this first edition, readers could find the reference material they were looking for more quickly than they ever could with the other encyclopedias of the time.
[…] this edition launched a long journey of edition after edition of Encyclopaedia Britannica. Each one was revamped for accuracy, usability, and authority. The 15th edition, first published in 1974, would ultimately be our last edition. In that span of time, the company had undergone acquisitions and growth, and the needs of our users were changing. By the late 1990s, our users were going digital, and so were we. We went digital in 1981 with the publication of the first digital encyclopedia in history. And then in 1994, we launched Britannica Online, the first encyclopedic resource to debut on the internet.
Although Britannica has always kept its readers at the forefront, the last 20 years have proven challenging in keeping up with the ever-changing consumer. I think we can all agree that technology is moving at an incredible pace. In 1999, most people were still fascinated with the internet as a new tool, but now it’s ingrained in our everyday lives, and new platforms like voice and AI are being explored more and more within our digital ecosystem.
This is where user and market research, along with testing, came into play for us over the last decade. We thought we knew who our readers were: they were intellectuals, people who wanted to read long articles, gather facts, and learn, and they were willing to spend the time to go through our entire articles, whether in the early years of print or once we were digital online.
Although we thought this, our consumers, and our analytics about those consumers, were beginning to tell a very different story.
On our consumer site britannica.com, we started seeing more and more of our users spending less time on our articles than they ever used to. We realized that things were changing. So we needed to better define who our users actually were; there was a sense that they probably weren’t who we thought they were many years before.
And equally important to knowing and defining who our users were right now, we also wanted to understand why they were coming to our site, because it was clear that consumer behavior was changing, in parallel with shifts in the technology and information landscape.
Digital media was, and still is, taking the world by storm. In fact, just last year eMarketer reported that US adults spend an average of 6.3 hours a day with digital media. That’s insane!
So, knowing all of this, we decided we needed to begin a multi-year research project focused on building a better understanding of who our users were now.
We wanted to learn within this project what we were doing well, what we could do better, and what we could potentially create in terms of new products that were aimed to fit our changing consumers’ needs. So at the beginning of this project we had three main goals.
The first was that we needed to better define who was visiting our site britannica.com. The second goal was that we needed to understand why they were visiting. And the third goal was that we needed to define what they wanted when they arrived at our site. We began creating and posting surveys on britannica.com. The surveys asked questions related to the reason behind their visit to our site: whether they were looking for a specific topic, what amount of information they were looking for when they came to our articles, and whether they had a lot of knowledge of the topic they were researching ahead of time.
With this survey, we had over 34,000 respondents from across the globe. After analyzing their responses, we were able to categorize them and find several commonalities. We found that 48% of our respondents were academic users. They were coming to our site for schoolwork: homework, papers, projects, research assignments, studying, all of those. 30% were coming to our site for personal interests or projects. Some were writing books, fact-checking things, or simply wanting to learn more about something that they loved, and some were even preparing to have conversations with colleagues and family about topics they enjoyed discussing. The remainder of the respondents were a mix of users who were coming to our site for work or casual interest, or who were very frequent users of our site. They were kind of our loyalists.
In addition to learning who our users were through this survey, we also learned that they felt our content was trustworthy, our brand respected, and our articles comprehensive and extensive. All wonderful things to hear from our users. However, most users, mainly those falling into that academic group, weren’t reading as many article pages as they used to. In fact, they weren’t really reading articles all the way through at all. It wasn’t because we were not providing the answers or content they were looking for; rather, these users now wanted to come to the site with a very specific question in mind so that they could get that question answered quickly and move on.
They were coming to our articles with the intent of getting quick facts and answers, with very minimal reading.
This was a very crucial insight into the needs of our users, especially for an encyclopedia like ours. We were excited to learn this and very eager to start finding a solution for them. Our brains were already whirring with ideas.
But before we could immediately begin devoting resources to redesigning all of our articles to suit these users’ needs, we knew we needed to take a breath and slow down. After the initial thrill of all these exciting new learnings, we realized there was more information gathering and testing that still needed to be done. While we had just identified a user need and a problem users found with our articles, we also needed to understand what kinds of questions they were bringing to our articles that they wanted answered. So, we then asked those 34,000 survey respondents to share what they wanted to know about the topics they had indicated they came to our site to look up. They were given a long-form box to list out all of the questions they had coming to those articles.
From this, we collected about three thousand responses, each listing multiple questions, that indicated what users wanted to know when they came to specific articles. After sorting and categorizing all of these questions, we discovered two key things: one, users had a lot of the same questions; and two, the questions were mainly focused on our articles about events, places, and biographies. You can see some example questions on the slide, but there were also others like: How many nautical miles is Dunkirk from Dover? What were the causes of the Salem witch trials? And how did William Shakespeare die?
The process of sorting all the submissions of questions told us what our users wanted to know, but we now needed to determine how best to deliver these answers to them.
So, this began a lot of discussion and debate over several different designs we could use to provide this information on our site. After all this debate, we decided on one concept and began to craft our A/B test for this project. Now, you may be wondering why we needed to test this concept when we had already found out from our survey that users wanted quick answers and facts. Wasn’t that enough to get started? To answer this, I want to reiterate that the users who responded to the survey made up only a portion of our site’s audience. Not everybody who comes to our site responded to our survey. Additionally, of those who did respond, not everyone indicated that they needed quick answers.
And lastly, while users wanted quick answers, as a business we wanted to keep KPIs like user engagement and page views on track. So, we really had to make sure that we were balancing our business needs with our consumers’ needs.
This section of the project is where VWO came into play. My team uses VWO frequently for a variety of A/B testing needs, and I knew, based on my experience with the tool, that this would be the perfect project to execute with it. We first identified several articles to test this new feature on, based on some of the most popular articles from the survey, and then we settled on a main hypothesis along with a few test goals. Internally, this new feature we were testing was called the Q&A Accordion Module. As you can see in the screenshot, we had several different variations, and the accordion modules had an average of about five questions that users could click on to get answers to. The questions in these modules were specific to the article they were presented on; they were not only generated and curated by our editors but also included questions our users had provided, where there was a very common interest in that question. For example, how William Shakespeare died is a question we included in this module because it was very commonly asked among those who were surveyed.
In our test, we had the main control, which was the article as it stood, in this case William Shakespeare. We also had a variation A, which presented these Shakespeare-specific Q&A accordion modules with just text and additional links to other articles, like Anne Hathaway, as well as in-line links to read more about Shakespeare within that article. We then had a second variation B, which had the same basic template, but we also wanted to incorporate images, as you can see in this example with an image next to the Anne Hathaway link, that might entice users and show them what they were getting into when they clicked the link.
Over ten days, more than 11,000 visitors were part of this first test on the William Shakespeare article. With this test we were measuring engagement with the questions to see: a) if users even opened up the accordions to view the answers; b) if these users clicked on any of the in-line links to get to other articles; and lastly, c) we wanted to make sure that this accordion module did not adversely affect any other general metrics on the article page. We wanted to make sure, for example, that this accordion module wasn’t distracting users from clicking and engaging with any other part of our article. We wanted to enhance the article, so we needed to make sure it wasn’t detracting from it.
So in this next slide I’m going to show you a recording of an anonymous user who interacted with our test.
What VWO allows us to do is have these anonymous recordings of users who are part of our testing pool, so that we can really get a sense of how they’re actually interacting not only with the page the test is on, but also with the testing portion itself – in this case, the Q&A accordions.
So as you can see, this user is reading the initial introduction of our article and then scrolling down.
This is our accordion module: they’re opening it up, seeing what we’re providing, and then actually engaging with it and clicking through to another article. So this shows us not only that, yes, they are engaging with this module, but that they’re incorporating it into their learning and clicking through to a secondary article – in this case, Anne Hathaway – to learn more about Shakespeare’s life and, here, his wife.
Through this VWO test, it was determined that the Q&A accordions were a success – both variations A and B got over 20% engagement. But it was further determined that the first variation, the one with no additional images, was actually the winner. It seems that users were not really enticed by visuals, as you might have expected. They just wanted the answer. It didn’t matter to them whether there were additional visuals or not; they were simply looking for the facts. Over 23% of users tested clicked open the accordions to see the answers to the questions and also clicked through to read more of the articles and other related questions.
But after we figured out with the Shakespeare article that it was a success, we needed to make sure that this same concept, the same accordion module, was actually going to work for other articles. What if it only worked on William Shakespeare? What if users on other articles didn’t want it? So, to determine whether or not this module was a success for our site, and not just that one article, we needed to repeat this test on other articles. We ran a few other tests on articles like our Galileo biography, the Civil Rights Movement event article, World War II, and Alexander Graham Bell. Each of these tests was run through VWO, and they, in fact, showed the same results: users were opening up the accordions, they liked the accordions, and they actually preferred them without any extra images.
These findings gave our team the confidence we needed. Through surveys, we identified a user need for quicker content that answered predetermined questions.
The free-form responses, which allowed users to indicate what questions they frequently had, showed us that there were very similar questions about a very specific main group of our articles – as you may remember, our biographies, places, and event articles. And through A/B testing with VWO, we found which design and variation worked best for providing this type of information to our users. All of this research and testing allowed us to see our users in a new way. They have different needs than they had a few years ago, and by acknowledging their influence on our site, we were able to address their needs and find a winning solution for them.
After our testing confirmed that the accordions were a solution, we began making these changes to many of our other popular articles, in addition to those articles that respondents indicated they were coming to most frequently on our site.
Here are some examples of the final design and some articles it appears on. As you can see at the top, we actually shortened it: in the final design we have only three questions instead of five, and we also adjusted some of the treatment. Since this testing, and since we’ve rolled these accordions out to several articles, we have continued to see strong engagement. Users come to Britannica because they trust our content, and in the midst of all the information swirling around the internet, it’s more important than ever that users have the certainty that the information they are going to get is accurate. Our Q&A accordions now give users quick facts, the answers that they want, and they can also be confident that these are in fact the right answers and facts. They’re accurate and they’re backed by a 250-year-old company that knows how to get this done.
With the success of our Q&A accordions, we are now already thinking ahead to what can be iterated from this. What are the next ways to continue to provide this sort of content, in ways our users would enjoy even more? We’ve begun future iterations of these Q&A accordions to see how they can be translated into videos. A handful of these videos are actually already live on our most popular articles, like William Shakespeare, so our users can already see them. These new videos borrow the basic template of our Q&A accordions by presenting answers to the questions users want, but in video format. We are also testing ways to create more jump links in our articles that allow our users to dive even deeper into our content – so much lies ahead of us and our users.
Yes, we have now figured out a solution to our initial discovery of our users’ needs. But that can change, and it will. Users – not just on britannica.com but users in general – are constantly evolving. And to keep up we must always test, learn, implement … test, learn, implement. It’s a continuous, non-stop cycle for us to understand how our users are changing and what their needs are on a day-to-day basis. With each wave of this, we’re getting more insight into our consumers than we had before, even with the previous iteration. This helps us make the best Britannica that we can for our users. For 250 years, we have remained the pivotal place of knowledge for students, curious minds, and lifelong learners.
With content that is always extensively fact-checked and curated by experts around the world, we will always have the facts, because now, more than ever, facts matter. And Britannica is how you discover them. Thank you.
Mani: Thank you, Elizabeth. That was indeed a very insightful presentation. I learned a lot of new facts about Britannica itself, such as that you have 1 million-plus pages. That’s a humongous amount of content that you are managing.
Elizabeth: And it’s always growing.
Mani: Yeah, and 20 million monthly uniques is also a lot of people coming to your site. I have a few questions, and in no particular order I will throw them at you and have you attempt them one by one. So, firstly, I wanted to understand who is responsible for, you know, making such changes or devising optimization opportunities at Britannica. Is it the marketing team’s KPI, is there a special team inside Britannica, or is it a cross-functional team?
Elizabeth: So, it’s first and foremost a collaboration. I’m on the marketing team, but we work very closely with product, UX, advertising, everybody. Our consumer site is, for the main part, ad-supported, but we also have a subscription side to the business, where users who pay for a subscription get all of our content unlocked and ad-free, plus special features. Because of that, whenever there is an idea from the marketing or product team, there needs to be oversight of what other units within our company need, so it’s a constant balance. We also understand that so many people in our company have amazing ideas, so we need to make sure that we are listening to everybody. And then the UX team is crucial for us, because they’re the ones who really know the users on a day-to-day basis, and they can help us with something like the accordion module – making sure it’s presented in a way that users will actually be willing to engage with and that will be beneficial to them. So it’s a cross-functional collaboration.
Mani: I think that’s a very valid point you make here, because I’ve seen companies struggle with building that sort of experimentation culture. With such a huge audience, plus all the work that goes into building the Britannica encyclopedia, it’s really commendable that you have a cross-functional team doing this.
Elizabeth: Yeah, so that’s crucial.
Mani: Yeah. From that stems my second question, actually: you have been using VWO for a long, long time now. Maybe you can give me some other examples of where you and your team have used VWO for testing.
Elizabeth: Sure, there are several different examples. We can go as simple as this one: as you may have seen from our videos and screenshots of our articles, all of the cross-links to other articles within our site are blue. In one of our early tests with VWO, we wanted to make sure that blue – you know, it’s a brand color for us – is actually something users identify as a clickable link. Was there a better color that could further increase engagement? So, we tried very different colors, from orange to red to green, lots of different colors.
And it turned out, using VWO’s heatmaps and recordings along with the regular click engagement, that no one likes red interlinks, and in fact blue was the best. So that was one example that was a great initial learning for us in terms of using VWO. It also solidified that we were doing something right: our color was good, our engagement was strong, and for this portion of the test a different color didn’t help – it didn’t increase engagement, and the other colors actually decreased engagement.
So that’s one example, and then I briefly mentioned the subscription side. That’s another area where we were able to use VWO to test messaging and value propositions: Can we test different subscription calls to action within the article? Is there a better placement on the page? How do we get users into that registration flow? We’ve used VWO in several instances, and our roadmap has a lot of different ideas for making sure we balance our content with these value propositions for why you should become a subscriber, and for how best to do that. Whether that’s redesigning the button or even redesigning and testing the subscription flow, we’ve been using VWO for that too.
So we’ve used VWO for, as I said, very simple stylistic things, for more expansive subscription tests, and then obviously for the accordion module, which was a huge growth for us within the product.
Mani: All right. Coming to the test you spoke about, how did you qualify which pages you wanted to attack first in terms of testing? What was the metric you used to decide which pages to go after first?
Elizabeth: Yeah. So, I think the first way we decided was actually based on our survey respondents.
So, our survey users had indicated several different questions, but we were also able to see which articles they visited, and from there we found about 20 articles that about 80% of those survey respondents were going to. So that was our first step. Then we wanted to make sure that we were hitting an article that had a lot of traffic naturally, so that we could get a lot of users coming with a variety of different intents, whether they were coming for school or for business. So first, based on the survey, these were the 20 or so articles that were most common. Then we checked our analytics to see what page-view numbers matched those articles. And from that, we determined which articles to tackle first. William Shakespeare, of the articles we tested, had the most traffic, and it’s one that so many people come to for so many different reasons – they want to know how he died, they want to prepare for a speech or a paper. So, we knew we would get a very diverse testing pool.
Mani: Yeah, the Bard trumps them all, I think!
Mani: Elizabeth, you briefly mentioned, and touched upon in your last answer, that you have a roadmap in place, which is another interesting thing I would really like to know about. How did you a) come up with building this roadmap, and b) were there any challenges that you faced in creating such a roadmap?
Elizabeth: Yeah, so obviously there’s one person who kind of decides, “I’m going to start this and tackle it,” and then, once that framework is set up, we try to hold meetings where different people from product, UX, etc. come together to see what we want to test – whether we know it from analytics, or we noticed it from word of mouth on social media, anything we can see: maybe we should try and test that, or maybe this isn’t working for us anymore. From there we brainstorm, put everything out, and then it’s a matter of prioritization: what amount of effort is behind each test, what will it involve, is the timing right? For the consumer side, we tend to see more traffic in the fall, so is it best to test then, or do we want to test in a lower-traffic month? So, it’s a lot of consideration across a lot of different factors.
So, that’s why we make sure we try and build out a roadmap that can really allow us to always have something in queue to start testing and exploring. But, obviously, there are ones that just come about, and we need to get that tested now, we need to see what’s happening now. So, it’s a little bit of a roadmap and a little bit of, you just kind of see how things are going.
Mani: Quite interesting. This is another thing many companies struggle with, to be very honest, since a lot of stakeholders are involved: bringing them all to the same desk and then having them chart this roadmap out. I think you guys are doing a commendable job out there.
Elizabeth: Well, thank you very much.
Mani: One last question – actually, a couple more, but one of the important questions here: you briefly touched upon being ahead of your consumers, and one thing that is now being discussed across digital circles, social groups, LinkedIn, etc. is voice being the next search. I just want to know what is happening at Encyclopedia Britannica when it comes to voice, or, you know, recognizing voice search as the next big thing?
Elizabeth: So, we are definitely trying to understand the voice-search realm. But what I can talk about and share with regards to voice is one thing that we have launched as our first step into it: an Amazon Alexa skill and a Google Assistant skill called Britannica’s Guardians of History. It’s a voice game targeted at kids who want to learn more about history.
But also for any user who’s interested in that kind of voice game and experience. The first chapter, which has launched, is about Greece – Ancient Greece, the Olympics, all of that – and it kind of takes you back into that time period. Your challenge as a user is to understand and interact with different characters on the platform and, in a way, save history, because there are different breaks in time where things aren’t going the way you want. So, you’re able to interact with the device, learn a lot, and also get codes linking back to our articles, so you can go to our site and get access to all of this different, very unique content for this particular Guardians of History. So that’s a big one that we just launched about a month ago, and there’s certainly lots more coming, but you’ll just have to wait and see.
Mani: Interesting! I think this weekend I have a “Britannica and chill” plan versus “Netflix and chill” – this is the game I’m going to play.
Elizabeth: That would be great!
Mani: Seems very interesting, and I’m particularly interested in Greek history, so I think it’s a really good start for me. Again, Elizabeth, a lot of savvy marketers and professionals are watching this, and this question is one that comes to all of us: What are you reading these days? What are the blogs or books you are reading right now?
Elizabeth: So, from a non-marketer perspective, I just have to put in a plug for […] by Julia Allen. I know it’s across all the best-seller lists – it is so good. But one of the other ones is Outliers by Malcolm Gladwell. It’s a very old book – well, let me correct that, it’s several years old, which in the marketing world is kind of old – and one you wouldn’t necessarily think of as a great marketing read, because “marketing” isn’t in the title. But Outliers really explores very different ways to think about people and events; there’s a lot that goes into who you are, how different people are, and how different people think. It really challenges the way you think, so that as marketers you can always try to see every side of something and make sure you’re being very creative as you develop different solutions for users.
Mani: Interesting. Outliers is one of my favorite books as well. I’m also big on psychology and how much psychology applies to what we do as marketing professionals. So, interesting – I like that book as well.
Elizabeth: Yeah. Revisit it.
Mani: One last question to sum this up: if our audience wants to connect with you and has some questions, what are the best ways to reach you?
Elizabeth: Yeah. I am definitely on LinkedIn – just Elizabeth Romanski, you can search me and it’s kind of the best way. So, please connect, visit our site and you know, definitely explore Britannica Guardians of History. It’s a lot of fun, a lot of fun.
Mani: Thank you so much for the session with us today, Elizabeth. It’s been a lovely, lovely session. I hope you also had fun preparing for it.
Elizabeth: I did. It was a great challenge.
Mani: Yeah. Thank you so much once again for your time, and it was lovely talking to you.
Elizabeth: Thank you.