
Gathering and Synthesizing User Testing Insights with AI

Strong digital decisions depend on understanding real user needs. This talk shows how direct user feedback—combined with AI—can surface clearer insights, streamline decision-making, and power smarter personalization across websites, content, and marketing systems.

Summary

The session focuses on how qualitative research methods such as user testing, surveys, and preference studies reveal insights that raw metrics often miss. Using practical examples and real performance results, it demonstrates how AI can help identify patterns in research data, share insights across teams, and train internal systems to make decisions grounded in what users actually want.

Key Takeaways

  • Direct user feedback uncovers clarity and intent that quantitative data alone cannot provide.
  • AI can accelerate insight discovery by organizing, synthesizing, and distributing research at scale.
  • Research-informed AI enables more relevant content, personalization, and decision-making across teams.

Transcript


Hey there. My name is Josh Jennings, and I’m the senior web strategy manager at Commerce, formerly BigCommerce; we just recently rebranded. I’m really excited to be speaking to you today about user research and how you can synthesize it to get some really good insights.

We’re going to talk a little bit about user research itself. I’ve been doing website work for about eleven years now, and I’ve found that user research and user testing are among the best ways to get insights out of a website. And with AI coming in, there are a lot of really good opportunities to make that process much more efficient.

So this presentation is going to be in two different parts. Part one, we’re going to talk about user research, what it is, and then we’re going to look at some examples of success. And then part two, we’ll talk about AI and how we can start to use AI to synthesize that research, and then what the future of AI looks like, how we can start to use it for even greater things as we build AI into the greater systems of our businesses.

Before I get into all that, though, I want to talk a little bit about my history. So again, I’ve been doing website work for about eleven years now. And so this is kind of what it was like for me as a young individual coming out of college, going into meetings. I had tons of ideas for our websites, things that I thought were going to work.

And I came in there with the belief that I was right. And then this was me when I was asked for the data to support my ideas.

I didn’t really have any. And unfortunately, you know, when you don’t come into meetings with data, there’s a lot of doubt about what you bring to the table, right? Data is essential for coming up with ideas that are going to work on a website, right? So the question then becomes, where do we get that data?

There’s a few different hard truths about marketing teams that I think, you know, need to be addressed when it comes to data. The first one is that marketing teams need data, but this isn’t quite the truth itself, right? The truth really is that marketing teams need insights, and data is what we use to get those insights.

The second hard truth is that insights can be hard to get generally, right? In this day and age, our data warehouses are massive. We have so much data available to us. It can be hard to parse through all of that data and find something, some nugget of truth that we can use to inform our marketing efforts, make actionable decisions.

So quantitative data, despite being so prevalent, can be very hard to dig through to find something worth acting on, right? And the last hard truth is that insights, once we get them, need to be distributed out to the wider team. If they just live inside our brains and inside our tests, they’re not doing any good. The real benefit of insights pulled from our data comes when they’re distributed out to the wider marketing team so that everyone can use them.

Every team can enhance their marketing efforts through the insights, through the knowledge you’ve gained from these tests. But that can be really hard to do, right? How do you make sure people can find the information? How do you make sure they remember it? How do you make sure they understand it in the correct context, that they know exactly what that information is saying about the users and how they can use it in their own work?

AI agents also need data.

And this is becoming more problematic as AI becomes more prominent in our business usage, right? Sixty-two percent of IT leaders say their data is not fully ready to handle AI. And the biggest problem there is unstructured data, which makes it hard for AI to parse through everything and pull out information that is really beneficial.

So with all of those things in mind, there are really three things that we need to do. The first one is that we need to get data that is reliable, that we can trust.

Then we need to take that data and we need to turn it into insights.

And then we need to distribute those insights out to the team. Right?

And I believe that user research and user testing (I’ll use those terms interchangeably throughout) are amazing ways to get data very quickly that is very actionable. And with AI, we can make it a lot easier to get that data, synthesize it, and distribute it out to the team, as well as use it to enhance our AI agents so they make smarter decisions on our behalf based on the user research we’ve done.

The real benefit is this: user research lets you get actionable insights about your website in just a few days, where analytics or A/B testing can take a really long time. User testing is usually a much faster process. And you do it simply by asking users for feedback. This isn’t a particularly new way of doing research, right? It’s been done forever.

Marketing companies, business owners have been getting feedback from their users for a very, very long time.

This is not something new, but it is really, really valuable. And in the day and age of massive amounts of quantitative data, we kind of sometimes forget to go back to the basics, to go back to simply asking users to give us their feedback and use that information to make really informed decisions.

So I want to go through a few examples here of what I mean by user research, just to give everyone kind of something to take away from this, some things you can start doing right away.

The first one is screen recording. So very simply asking users to record themselves while they explore your website.

They can talk through it as they go. You know, you can watch where they go to try to find certain things on your website. What it’s great for is getting a really unfiltered view into how users try to find information on your website and where they get confused in that process.

There’s a huge difference between looking at a user flowchart in your analytics, where you’re looking at drop-off rates, and watching a user come to your website and somehow find a case study from five years ago that’s totally out of date and isn’t even a client anymore. It’s very eye-opening to watch a user dig through your content and find something you never thought they would find. It’s illuminating, right? Very, very worth doing.

The second thing is messaging feedback. This is where you present a web page to a user with all of the messaging you’ve strategized and thought about, and ask them to give their impressions of the message, right? What this is great for is making sure users can comprehend what you’re trying to say. Does it resonate with them? Do they understand it?

If users can’t understand your message, then your page is never going to work, right? And quantitative data will never be able to tell you whether a page is understandable by users without tons of A/B testing to try to figure out what works.

User testing will tell you this in a very short period of time. If they don’t understand it, you’ll know it right away because they will tell you. Users will absolutely tell you when they don’t understand what you’re trying to say.

The next type of testing is preference testing. It’s very similar to messaging feedback, but here you present multiple versions of a section of your website and ask users to choose which one they like best and explain why.

What this is really great for is not necessarily knowing what the best option is. When you show three versions of a piece of content, very often you’ll see that one of those versions just doesn’t work for users. So it’s really great for quickly rejecting bad ideas. If you’re planning an A/B test and have three different ideas for what you might want to test, I’d highly recommend throwing them into a preference test first. See which version users don’t understand or don’t like, then take that one out of your A/B test so you can get data much more quickly, or substitute in a new version that might actually work better. It’ll save you a ton of time running A/B tests that have no chance of winning. Getting that feedback early through preference testing is way more valuable.
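As a rough sketch of that triage step, here is one way to tally preference-test votes and flag a clear loser before committing to an A/B test. The function name, the vote data, and the 15 percent cutoff are my own illustration, not something from the talk:

```python
from collections import Counter

def weakest_variant(votes, min_share=0.15):
    """Tally preference-test votes and flag a variant to drop.

    votes: list of variant labels chosen by each participant,
    e.g. ["A", "B", "A", "C", ...] (hypothetical data).
    Returns the least-chosen variant if its vote share falls below
    min_share, else None (no clear loser to cut).
    """
    counts = Counter(votes)
    total = sum(counts.values())
    label, n = min(counts.items(), key=lambda kv: kv[1])
    return label if n / total < min_share else None

# 20 participants each pick their preferred version of a hero section.
votes = ["A"] * 9 + ["B"] * 9 + ["C"] * 2
print(weakest_variant(votes))  # prints "C": it drew only 10% of votes
```

The threshold is deliberately conservative; the point is only to cut variants that users overwhelmingly reject, not to crown a winner, which is still the A/B test’s job.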

And the last one, definitely not something new, is user surveys. User surveys, or customer surveys, have been done forever, right? They used to be done by direct mail, sending surveys out to people through letters.

Now we can do it much more quickly through the web. There are tons of tools that let you send surveys to particular users. It’s an extremely valuable way of getting a lot of great perspective on your users and what they’re interested in.

Totally worth doing as many of these as you can do because the more insights you have into your users, the more that you can structure all the rest of your tests around what users actually want.

So let’s talk about a quick example of this.

Let’s talk about the Got Milk campaign. Back in 1993, the California Milk Processor Board hired an ad agency to make a new ad campaign for milk. The problem they ran into, though, was that people seemed to know everything there was to know about milk, right?

When asked about this campaign, Jeff Manning, the executive director of the California Milk Processor Board, said, “What could you say about milk? It was white and it came in gallons. People felt that they knew all there was to know about it, so it was hard to find a strategic platform.”

So what the agency did was some research to see if they could figure out what people actually thought about milk. They ran focus groups and found something particularly interesting: people viewed milk as an addition to food. They didn’t really think about milk on its own. Nobody ever went out to buy milk alone; they always thought about it with some other kind of food. Think cereal and milk, cookies and milk, coffee and milk, right? It was always noticed when it wasn’t around to pair with other kinds of food.

And this is a quote from Jeff Goodby, co-founder and partner of the agency they hired, Goodby, Silverstein & Partners. He said, “Luckily, some woman at a focus group that we did said, ‘The only time that I even notice milk is if I run out of it.’ And we made twenty-five years of advertising out of that sentence.” And if there’s one thing you take away from this presentation that I think is extraordinarily important, it’s that this was not luck.

Finding this insight was not luck on the part of the ad agency, right? It was research. It was taking the time to sit down with users and ask them what they thought about the product. They didn’t know what they were going to come out of that research with, but it’s what they learned, and they made twenty-five years of advertising out of it, right?

The result is that Got Milk became one of the most successful advertising campaigns of all time, very widely recognized, with tons of celebrity endorsements. In the first year, there was a seven-point-seven percent increase in milk sales across the entire state of California, right?

And I put here that it ran for twenty years. It has actually continued in some form even longer than that, right? So this campaign persisted for an extraordinarily long time, all based on that one insight they got from one woman in a focus group.

And Jeff Goodby, talking about the results of this campaign, said the lesson is that sometimes, if you listen to the world around you, it tells you what to do. And this is so true of user research and user testing. If you just ask users what they think about the things you’re putting out, you’re very likely going to get some really fantastic insights that you can use in your marketing.

So let’s get to a more modern, specific example. This is a project I worked on recently where I used user testing to get insights on a landing page. This was the page we were working on; we were sending the vast majority of our paid traffic to it.

So what I did is I put this page into a user testing system and asked users to tell me what they thought about everything. We got a lot of feedback like this: “Oprah for ambitious growth,” which was our headline for the page, was seen as very vague. Users said there were way too many messages on this page.

They were unsure how it differed from other platforms.

And that section toward the bottom, which you can see is cut off here, they thought should be farther up. They really liked the content in that section, but it was way too far down the page. They wanted that information sooner.

So based on those testing insights, I ran a number of sequential A/B tests to see if I could improve the performance of this page. One of the first things we did, very simply, was move that feature section from the bottom up the page. We added an image to it, added some subheadlines, and compressed all of the higher-level information toward the top.

This resulted in a twenty two percent increase in button clicks and an eleven percent increase in demo requests on this page.

Then, following the same insight that users really wanted more of the feature information, we turned those individual features from that grid into cards where we visually showed each feature. This resulted in a twenty percent increase in demos.

And then, knowing that users really wanted a comparison of our key features with competitors, we added that to the top with a new headline focused on that comparison, the differentiator. This produced a forty-six percent increase in button click rates and a fifty-four percent increase in demo rates.

All of these tests were directly based on the insights we got from those original user tests. So it’s not like we were coming into this blind. We knew exactly what we wanted to test based on what users told us was really important to them.

So, all of that user research is fantastic on its own, and you can do tons of things with it. But when you add AI, all of this becomes not only easier to do, it also becomes easier to distribute, and there are more ways you can use the information. It can enrich AI agents.

So we’re gonna talk about that. There are three real ways I think you can use AI to enhance user research. The first is identifying trends. The second is distributing the insights you get from those trends. And the third, which I believe is the future of AI, is enriching AI agents at scale using the user research you’ve gathered.

So first and foremost, identifying trends.

In user research, once you’ve gotten a lot of feedback from users, it can be a lot of information to parse: so many quotes, so much information. AI can be used really effectively to extract trends from that data based on what users have told you, right? You can see in this graphic how there’s all this mismatched data.

Grabbing those and pulling them into themes is something AI can do really well. The way I recommend doing this is creating a custom GPT that looks for particular types of information. It takes some tweaking, I will say, to get good results; sometimes the AI will go off in weird directions. So keep working on it until you get what you want.

Make sure you give it specific instructions for things to look for, similar words, phrases, positive versus negative reactions, messaging, confusion, features that people are particularly interested in. Those are all really good things to include.

Always, when you ask it to dig through this data for you, have it start with trends and themes, and only then do insights and recommendations. Don’t ask for these two things together. If you ask for them together, the AI is going to go off in really crazy directions trying to pull insights and recommendations straight out of the data. That’s not what you want. You don’t want the AI to look for insights in the raw data; you want it to find trends and themes first and then extrapolate the insights from the trends it finds.
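The two-step ordering above can be sketched as two separate prompts, where the second consumes the first’s output. The prompt wording and function names here are my own illustration of the idea, not the talk’s actual custom GPT instructions, and the calls to a model are left out:

```python
def themes_prompt(feedback_quotes):
    """Step 1: ask only for trends and themes -- no recommendations yet."""
    quotes = "\n".join(f"- {q}" for q in feedback_quotes)
    return (
        "You are analyzing raw user-testing feedback.\n"
        "Group the quotes below into recurring themes. Look for similar "
        "words and phrases, positive vs. negative reactions, messaging "
        "confusion, and features users ask about. Do NOT give insights "
        "or recommendations yet -- themes only.\n\n" + quotes
    )

def insights_prompt(themes_output):
    """Step 2: only once themes exist, ask for insights and recommendations."""
    return (
        "Here are the themes extracted from our user research:\n\n"
        f"{themes_output}\n\n"
        "Now extrapolate insights and concrete recommendations from these "
        "themes. Ground every recommendation in a specific theme."
    )

# Each prompt goes out as its own model call; the themes the model
# returns for step 1 become the input to step 2.
step1 = themes_prompt(["The headline is vague", "Where is pricing?"])
step2 = insights_prompt("Theme: users can't find pricing information.")
```

Keeping the steps as separate calls also gives you a natural checkpoint to apply the human review the talk recommends: read the themes yourself before letting the model turn them into recommendations.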

And the last thing, which is true for pretty much every AI you use: make sure you apply human intuition, right? Validate what the AI gives you, read through the data yourself, make sure it feels right, and correct the AI when it’s wrong. If you don’t, it’s probably going to keep running off in wrong directions on its own. So check its work and confirm it’s correct.

The second thing, and this is what I find extremely exciting about AI, is the distribution of these insights, right?

So once you’ve done a collection of these user research tests, you’re going to have a lot of data about your users, but it’s all going to be separate, right? You’ve done all these different studies, all these different user research tests, and you want to take all that data and combine it. AI is fantastic for this. If you go into ChatGPT and create a custom GPT, you can configure it with a knowledge base of all the user tests you’ve done. I uploaded mine as PDFs, which is not the ideal way to do this, but it is an option.

You can upload all that stuff. You can also upload things like product information. So I’ve got all of my features listed out. I’ve got all of our services listed out.

Once you upload this, ChatGPT becomes a smart search engine for all of your user testing insights. You can ask the GPT, “Hey, what do users say about our integrations?” and it can crawl through all that user feedback and tell you what those users are saying.

Your whole team can then use this to learn from and brainstorm on these user tests, right? They’re not asking an AI search engine about random information pulled from the internet; they’re asking it about information you’ve learned from your users specifically. It becomes a website strategist that can make recommendations based on the data, so it can create entire outlines for pages. And it can be a quality assurance reviewer: if you or your team create something and want to make sure it aligns with the user research, you can just upload it and ask, “Hey, does this align?” The AI tells you yes or no, and what you can fix.

So here’s an example of what this looks like before and after. I asked the GPT to create a content outline for a particular landing page for BigCommerce, and these were the two versions it produced. On the left, using a normal GPT, you can see it’s just a long, massive list of information with very disparate links. It’s not really well thought out; it’s just dumping whatever it could quickly find on the internet.

Whereas in the after version, based on the custom GPT trained on the user testing research, it knows exactly what users want to see. Going top to bottom, you get very specific sections, very specific information and features laid out. This migration and onboarding section doesn’t even exist on the left side, but it’s something we found in testing that users really wanted to know: how do I migrate to your platform? That would never have been included had you gone with the left version. So the AI enriched by this user research is way more intelligent than the version relying on whatever research the AI does on its own.

Then, once you have this ability to improve your AI agents with your user research, you have an opportunity to enrich AI agents at scale with very relevant user data. As we start to make AI agents more involved in our wider processes, like personalizing web content, here’s what the old version of that looked like: the user visits the web page, and we collect whatever information on them we possibly can.

That gets sent over to an AI agent, and the agent just kind of guesses what content is most relevant based on that information, right? And it puts something onto the page.

The new process is that when a user visits the web page and we collect their information, the AI agent uses the user research we have to personalize the content based on what we know users want to read, and drops that onto the page. So we’re no longer guessing what users want from vague demographic data; we’re personalizing the content using user research insights, so we know exactly what they want to see, right? Instead of a muck of information and guesswork, we get very well-structured, very specific recommendations from the AI.
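At its simplest, the research-backed step can be thought of as a lookup from visitor segment to the content users in that segment said they wanted, with a generic fallback. The segments, copy, and names below are invented for illustration, loosely echoing the landing-page findings described earlier, not an actual system from the talk:

```python
# Hypothetical mapping distilled from user research: what each visitor
# segment told us they most want to see first.
RESEARCH_PLAYBOOK = {
    "migrating": "How to migrate to our platform, step by step",
    "comparing": "Feature-by-feature comparison with competitors",
    "evaluating": "Card-based overview of key features",
}
DEFAULT_HERO = "Platform overview"

def personalize_hero(visitor):
    """Pick hero content from research findings instead of guessing from
    demographics; fall back to a generic hero for unknown segments."""
    return RESEARCH_PLAYBOOK.get(visitor.get("segment"), DEFAULT_HERO)

print(personalize_hero({"segment": "migrating"}))
print(personalize_hero({"country": "US"}))  # no segment known -> generic hero
```

In practice an AI agent would sit where this dictionary is, but the design point is the same: its choices are constrained by documented user research rather than free-form guessing.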

And then what the future of this looks like is that you can apply this across your entire AI agent ecosystem, right?

Every single AI you have is specifically trained on exactly what users want to see. Personalization, automated email campaigns, lead enrichment, marketing campaign ideation, first-draft copywriting, whatever it is, all trained on exactly what users have told you they want. So everything is directly focused on user needs.

In conclusion, again, I think user research is an undervalued marketing research strategy. You can get extremely valuable insights very, very quickly.

It’s not hard to do on your own, and with AI it becomes way easier. So it’s totally worth integrating into your systems and processes. And once you’ve got those insights from users, if you integrate them into your AI, then everything you do with AI moving forward is smarter, more intelligent, and can be scaled across your entire business, right? So it’s really worth doing.

Take some time to look into this, start integrating user research into your methods. You’ll find some really, really valuable stuff very, very quickly.

If you want to connect, here’s my information. Feel free to reach out to me on LinkedIn.

And I’d love to connect and talk about this stuff. I do this all the time, and I really enjoy it.

Yeah, thank you all so much. I really appreciate it.

Speaker

Josh Jennings


CRO Manager, BigCommerce

