From Data Dump to Discovery: Using AI to Decode User Behavior in GA4
Learn how to use AI to extract meaningful insights from GA4 data — especially when experiments appear inconclusive. This session walks through practical frameworks for structuring prompts, preparing clean datasets, identifying behavioral patterns, and avoiding common AI pitfalls. Real examples show how deeper analysis can uncover conversion blockers, refine lead scoring, and turn raw analytics into actionable decisions.
Summary
This talk explores how AI can help marketers move beyond surface-level metrics and uncover actionable insights within GA4 data. Through detailed examples — including inconclusive A/B tests and SaaS lead analysis — it demonstrates how structured prompting, focused data exports, and proper context enable AI to identify behavioral signals that traditional reporting often misses. The session also highlights data preparation best practices, privacy considerations, and techniques for validating AI outputs to ensure reliable results.
Key Takeaways
- Inconclusive experiments often contain valuable behavioral insights that AI can surface through multidimensional analysis.
- Clean, focused data exports and structured prompts are essential for accurate AI-driven analysis.
- AI outputs must be validated, contextualized, and aligned with business constraints before acting on recommendations.
Transcript
NOTE: This is a raw transcript and contains grammatical errors. The curated transcript will be uploaded soon.
Hello, everyone. Hello.
Welcome, welcome to day two at Convex 2025. It’s my pleasure to welcome Dana. Thank you so much, Dana, for allowing us this opportunity, and this stage is yours.
Awesome. Thank you so much for having me, and thanks everybody for sharing in the chat where you’re joining from. I live on Vancouver Island in Canada, which is not where Vancouver is. Just a different place.
But I love how many different time zones and areas of the world, that people are joining from. So, yeah, drop where you’re from in the chat. I saw someone else from Northern Ontario, Canada. So joining from Toronto.
Yeah. I grew up near Toronto. I grew up in Hamilton, Ontario. So fun to see people from all across Canada and around the world.
And so in the discussion today, obviously, we’re gonna be talking about using AI to decode user behavior specifically for GA4, but I also wanna encourage you: there’s a Q&A section. We’re gonna have lots of time for Q&A. I wanted to not make the presentation too long so we can have a discussion about your questions too.
So start thinking about what those questions are that you have about using AI to analyze data or just GA4 generally. You know? Like, I am here as a resource to help you. And if you don’t know who I am, hi.
I’m Dana DiTomaso. I’m the founder of Kick Point Playbook, and I’m the president of Kick Point, which is a Canadian-based digital marketing agency. We started in Edmonton, but now we’re fully remote across Canada. And over the next hour, I’m gonna show you how to stop staring at inconclusive experiment results wondering what now, and start extracting insights that actually drive your optimization strategy forward.
So by the end of this, what I’m hoping is that you’ll have everything you need to run your first AI analysis. And one thing that I really wanna point out as well is that in the engagement section, there is a files tab. And in that files tab, there is a PDF download. And that PDF download has some example prompts that I’m gonna go over in the presentation today.
And one of them actually has the actual prompt I use for real clients, and one of them is a bit more stripped down just so you can reuse it for yourself. So you can see the kinds of things that I genuinely use. I personally use Claude, but this stuff works in ChatGPT as well for extracting insights from GA4 and other pieces of data. So we’re gonna start off with a quick poll.
Yes. And I’m glad someone else is a big Claude fan as well. So the first poll I have is: how many experiments have you run in the last six months that gave you inconclusive results? And so, you know, I’m assuming you’re running maybe an experiment or two a month.
Let’s see. You know? Maybe maybe more, maybe less. Yeah. So, I mean, some of you might be running only zero to two experiments, and both of those were inconclusive.
If you let me know as well, I’d be curious to know, like, how many experiments do you run in general over the course of six months? You know, do you find overall, percentage wise, that you have a lot of inconclusive ones: no statistical significance, no clear winner, just like, oh, well, I guess that didn’t work, and I’m gonna move on from it. And, you know, be honest about your, I’m not gonna say failures, just the experiments that you feel didn’t give you what you wanted, because this is a safe space. We’re all here to talk about how to make things better, and I think it’s good to share those stories that you might perceive as a failure as well as the success stories.
And I say this as someone who’s been doing this. You know, I’ve been working in this field for twenty five years now. I have seen a lot of not good decisions that have been made, and I think it’s really important for us not to have that sort of survivorship bias of, like, well, every test I make is great. That’s not true.
No one is that good. So remember that, you know, everybody’s got a weird failed test in their past, and I think that that’s totally okay to have that. So what I’m hoping is that we can move past these inconclusive tests and realize that they’re not actually failures.
These tests are data gold mines that you probably just haven’t analyzed deeply enough. So let’s, for example, say you’re running an A/B test and you see no significant conversion difference. You’re only looking at one dimension: whether or not it was a conversion, for example. And so I just had a comment here.
It’s hard to reach statistical significance. Yeah. It takes a while. But that test also helped you collect hundreds of behavioral signals, scroll depth, time on page, element interactions, exit points.
AI can process that simultaneously and find patterns that tell you exactly what’s happening with your users even when conversion rates don’t change. And so that’s what we’re gonna learn to do today. There’s a comment from Dewey. Says when they were experimenting, they had a one in four success rate, however, and suspect the PM was peeking.
Yeah. That’s hard, trying to keep people away from the experiment, like, please don’t click on the experiment and ruin it. I hear that. So I’m gonna paint a picture for you that might feel familiar.
You’re running experiments, which is great. But your tools are collecting all that behavioral data, and, you know, it’s hard to look at all the behavioral data. You look at dashboards, and you try to find patterns. And, you know, you’ve got eighteen emails, and a Teams call has just come in. And the human brain is not built to process multiple dimensions simultaneously.
We talk about multitasking, but the reality is, like, we’re not computers. We’re humans. And, you know, you might notice when you look at your data that mobile users behave differently or that one traffic source has slightly better engagement. But connecting all those different dots together when you don’t have time to really dig into it, or finding the non-obvious patterns, that’s where you really hit the wall.
And that’s exactly where AI excels in helping you get that data out of this so called failed experiment. So those of you who are likely already familiar with experimenting with AI in your various analytics workflows are probably familiar with this. But what AI is really good at is pattern recognition.
And I say that because when I really say AI, I’m talking about LLMs. LLMs, large language models, they are pattern recognition machines. They’re guessing what the next word might be based on, you know, our next paragraph or next sentence based on common patterns that they have in their training dataset. So it is really good at pattern recognition.
It’s bad at math, and it’s bad at coming up with things all on its own, but it is very good at pattern recognition. That’s what it’s built for. The other thing that it’s good at is multidimensional analysis. Instead of looking at device performance and source performance and behavior separately, AI can analyze all of that together, and it will find those interactions.
It will find those patterns. You can also use AI to generate hypotheses, especially as follow-ups to your completed tests. You know, AI is an interesting brainstorming tool. Not everything it comes up with is gold.
Sometimes you’re like, wow, AI is really out to lunch on that one. And that’s totally fine, but it is great for those big ideas that you may not have thought of based on the information that you’re feeding it. And the last thing that I really enjoy using AI for is connecting behavioral signals to outcomes in ways that may not be obvious.
For example, it could find that users who watch your product video actually browse fewer pages before converting, for example. That could be counterintuitive, but it is actionable data that you can work on.
So AI, on the other hand, is not good at a few things, which we know. Math. I already talked about that. But here’s some other things that AI is not good at.
So it has zero business context unless you give it that business context. Even if you provide the website, sometimes it won’t go look at it. It’ll just pretend it did. It doesn’t know you’re about to rebrand, or that you can’t change the checkout because it’s hosted on Shopify, or that Q4 is your biggest season, or, if you’re in B2B, your slowest season.
It also can’t tell you what’s realistic to test. It can suggest ridiculous things that you know you can’t do because it’s gonna take six months to build, or your web services team is in a lockdown for the end of the year and can’t push anything new. Also, AI will find correlations and present them as meaningful without understanding causation. My favorite example is if you look at a chart of left handedness over the years, and I say this because I’m a left handed person.
And if you look up any chart of left handedness over the years, you’ll see that it was relatively low, and then all of a sudden, you know, it started to spike in the sixties and seventies. Well, does that mean there were more left handed people? No. It was just more socially acceptable to be left handed.
Now an AI would look at that and say, wow, this is an epidemic, but it doesn’t understand the context behind the societal acceptance of being left handed. So that’s where it will look at those correlations and present them as causation, but it’s not true. And so you always need to apply that layer of: is the AI onto something, or is it just making it up with the data that you get back from it?
And finally, remember, as I mentioned: AI is only as good as the data that you give it. If you dump messy, unfocused data into it that isn’t formatted well, or you’re just giving it everything and hoping something sticks, you will get messy, unfocused insights as a result, which is why my next section on data preparation is gonna be really important for you. So let’s look at our next poll. One of the questions that I have for you is: what is your biggest challenge when it comes to behavioral analysis?
And so that poll is just gonna pop up on the stage. There we go. Okay. So what is your biggest issue that you run into with behavioral analysis? Yeah. So don’t know what to look for, can’t connect insights.
You know, it’s funny that not a lot of people are saying too much data, because I feel like there used to be a problem with too much data. And now people sort of are like, I’m just gonna ignore the data that I can’t pay attention to.
That’s fine. But, yeah, for sure, don’t know what to look for and can’t connect insights to action. That is really helpful. And so I think this is where AI really is that sweet spot of being able to help you figure out what to look for or at least give you some ideas of what to look for.
And then, also, the next thing is connecting that insight to action. Now, of course, you gotta use your human brain a little bit for that fourth thing, connecting insights to action, but AI can certainly lead you along that path. Thank you so much. Those are great responses.
Alright. So I wanna start off again by reminding everyone: data preparation is the foundation of good analysis. This is true, period, whether you’re using AI tools for analysis or doing it yourself.
I think we’ve all gotten a bit lazy in that we just toss up a bunch of data and figure, you know, AI will figure it out. Our tools do so much for us. Why can’t it figure it out? Good data prep is the difference between getting useful insights and getting useful-sounding garbage.
And this is where, I teach a university course on data analytics, and I can tell, unfortunately, which students are using ChatGPT to write their assignments, because they give me useful-sounding garbage as opposed to something actually useful. So remember, you really wanna cut back on the amount of data you give AI. You just wanna make sure you’re giving it what is necessary.
And this is my golden rule about AI. Only give AI what it needs, absolutely nothing more.
You could export every single field from the analytics tool, paste it into ChatGPT and say analyze this. You’re gonna get crappy results back, but you could certainly do that. The better approach is to start with a specific question.
Ask yourself, what is the thing that I wanna answer? What is the question I’m answering here? And then you’re gonna export only the dimensions and metrics from whatever tool you’re using. And, of course, I talk about GA4 a lot because that’s my primary focus.
But, really, this works for any analytics tool at all that you’re using. You could be using Amplitude. You could be using Matomo. You could be using Adobe Analytics.
It doesn’t matter. You could be using a spreadsheet. I don’t care. I’m not gonna judge you.
But what you wanna make sure you’re doing is you’re cutting it back just what’s necessary for that specific question. So if you’re analyzing, for example, why mobile users aren’t converting, you don’t need desktop data in that analysis. If you’re looking at one experiment, you don’t need data from six other tests. Make sure to focus.
Focus is your friend when it comes to AI analysis of really anything, not just data, but everything.
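If your export lives in a CSV, that focusing step takes only a few lines of pandas. This is a minimal sketch; the column names and numbers here are invented placeholders, not actual GA4 field names from any real export:

```python
import io
import pandas as pd

# A toy GA4-style export. Columns and values are illustrative only.
csv_text = """pagePath,deviceCategory,country,sessions,conversions
/pricing,mobile,CA,1200,24
/pricing,desktop,CA,1500,45
/blog/post-1,mobile,US,600,3
"""
export = pd.read_csv(io.StringIO(csv_text))

# Question: why aren't mobile users converting?
# Keep only the rows and columns that answer that specific question.
focused = export.loc[
    export["deviceCategory"] == "mobile",
    ["pagePath", "sessions", "conversions"],
]
print(focused)
```

The point is the shape of the workflow: decide the question first, then slice the export down before anything goes near the AI tool.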
So for example, these are the kinds of things that I typically export. This will depend, of course, on your question, but the most common scenarios I’ve listed here. So if you’re analyzing an experiment, of course, you’re gonna need that experiment identifier because you need to know which was experiment a, b, c, etcetera. You should have session counts because that’s necessary to determine significance. You should pick your conversion metric, whatever that might be. And then I also recommend including these behavioral signals, scroll depth, time on page, clicks on specific elements.
If your hypothesis is that device matters, maybe you’re trying really hard to get the mobile site fixed and you know that mobile sucks and so you’re gonna compare mobile versus desktop, make sure to add that. But if it’s not important to the analysis, don’t include it. You could always if you’re like, I’m not getting anything conclusive, I’m gonna toss in desktop versus mobile later, you could do that as a secondary follow-up for sure. But when you’re first starting out, include as little data as you need, just like the bare minimum. So, for example, if we’re doing a behavioral pattern analysis and we’re thinking, what do my converters do differently versus the people who don’t convert? You would need page or element identifiers, you know, what page are they on, user segments, engagement metrics, and whether or not they converted.
Now what’s not on this list is user IDs, timestamps down to the second, IP addresses. You don’t need that level of detail, and it raises privacy issues. So keep it aggregate. Keep it focused. Don’t share anything that might expose personal information, especially those of you in jurisdictions like the European Union that have very specific privacy rules about this stuff.
And since I mentioned privacy, I wanna cover a really important point. Never, ever, ever export personally identifiable information to an AI tool. No names, no email addresses, no IP addresses, no user IDs that could identify an individual. Aggregate data only. And you’ll also wanna check your AI tool’s data policies. Some tools use their inputs to train their models unless you opt out.
There are also situations, if you are specifically in the European Union, where you need to have a data processing agreement with the AI tool that you’re using. I am not a lawyer. This is not legal advice, but you really wanna make sure that you are checking with your legal department, if you are a big enough company to have one, to make sure that you are okay before you upload this stuff. If I’m looking at just aggregate data, like page names and session counts, for example, and those pages are just publicly facing pages on my website, that’s just aggregate data.
That would be fine. But then as soon as we get into anything like logged-in user data or anything that might be personally identifiable, this is where you really do need to check before you go ahead and start using an AI tool. And, also, for example, if you’re using tools like Claude that have data retention and will look up previous chats, make sure you understand that data retention policy. And, also, document what you export and when.
Because if you ever get audited for GDPR or CCPA or any of the other acronym privacy policies out there, you’ll need to know what data left your systems. So this is not paranoia. This is just basic data hygiene. Just document it even if you wrote down, like, I exported this data on this date, and this is where I saved it, and then I destroyed it afterwards or whatever it might be.
It’s just good practice to get into.
So once you export your data, before you upload it to AI, you also need to clean it. Remove those metadata rows that exports often include. You know, the ones that say, like, report generated on or they have some summary stats at the bottom or the top. AI will try to review that as data, and it will get confused.
You’ll also wanna make sure your column headers are clear. "Session source medium" is better than "Dimension 3"; it’s not gonna understand that. And then also filter out noise.
So for example, if you’re doing a page level analysis, if you have pages with only two sessions, that’s not meaningful. They’ll distort pattern recognition. Just get them out of there.
And then you’ll also want to decide what to do with unset or null values. So if you’re using GA4, for example, you might see "(not set)" in brackets. That is an absence of data. It just means the data doesn’t exist.
Sometimes that’s meaningful. Sometimes it’s just data quality issues. Make a conscious choice on whether or not you’re going to include that particular data as well. And then if you’re combining data from multiple sources, make sure your date ranges match.
I know it seems basic, but I’ve seen people compare January data from one tool to February data from another, and they’re wondering why the insights don’t line up. And even a day off or a time zone can mess things up. For example, if you’re using Google Search Console to export data for, say, search engine optimization purposes, that tool’s date range or, pardon me, time zone is Pacific time because that’s where Google is. That’s where I am too.
It’s very convenient for me at least. But for the rest of you, it might be less convenient. So keep that in mind when you are comparing data: if you are on, say, the opposite side of the world from California, your date ranges are gonna be a little bit weird when you try to blend GA4 data with Google Search Console data in an AI tool. Just remember that these little things can mess up that analysis, which makes you make a wrong decision, and we really want to avoid that.
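The cleanup steps just described can be sketched in a few lines. Everything below is illustrative: the metadata lines, the vague "Dimension 3" header, and the 10-session cutoff are made-up stand-ins, not a real GA4 export format:

```python
import io
import pandas as pd

# Simulated export with the junk described above: metadata rows,
# a vague header, low-traffic noise, and "(not set)" values.
csv_text = """# report generated on 2025-01-15
# property: example.com
Dimension 3,pagePath,sessions
google / organic,/pricing,850
(not set),/pricing,12
google / organic,/old-page,2
"""

# comment="#" skips the metadata rows the export tacked on.
df = pd.read_csv(io.StringIO(csv_text), comment="#")

# Rename the vague header to something the AI will understand.
df = df.rename(columns={"Dimension 3": "sessionSourceMedium"})

# Filter out noise: rows with too few sessions to be meaningful.
# The cutoff of 10 is a judgment call, not a rule.
df = df[df["sessions"] >= 10]

# Make a conscious choice about "(not set)": here we drop those rows.
df = df[df["sessionSourceMedium"] != "(not set)"]

print(df)
```

Each step maps to one of the points above: strip metadata, clarify headers, drop noise, decide on nulls. Date-range alignment across tools still has to be checked by hand before blending.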
So now let’s get into the techniques. And as I mentioned, in the files tab, there’s a download you can access with all the prompts that I’m gonna be going over. So don’t forget to download that. So first, I wanna look at a so-called failed experiment or, you know, inconclusive, whatever you wanna call it. The most common scenario I see: you ran an A/B test. It came back inconclusive.
So this is a product page test. Control converted at 2.3%. The variation converted at 2.5%. Ten thousand sessions per variation, not statistically significant. You know, traditional analysis says that didn’t work, on to the next test. But we’re gonna dig in deeper and find out what actually happened here.
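As a quick aside, you can confirm that result really is inconclusive yourself, and since, as noted earlier, the AI is bad at math, it is worth doing outside the chat. A minimal two-proportion z-test sketch using the numbers above (2.3% and 2.5% of 10,000 sessions each):

```python
import math

# Numbers from the example: 230/10,000 (2.3%) vs 250/10,000 (2.5%).
n_a, conv_a = 10_000, 230
n_b, conv_b = 10_000, 250

p_a, p_b = conv_a / n_a, conv_b / n_b
p_pool = (conv_a + conv_b) / (n_a + n_b)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (p_b - p_a) / se

# Two-sided p-value from the standard normal CDF.
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

print(f"z = {z:.2f}, p = {p_value:.2f}")  # p lands far above 0.05: inconclusive
```

The z-score comes out well under the 1.96 threshold, which is exactly why traditional analysis shrugs and moves on, and why the behavioral signals become the interesting part.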
So for this particular analysis, what we’re exporting is the variant name or the ID from whatever system you’re using, VWO, for example. You could export the name; it just depends on what it is you’re using, and then pages. And then for metrics, I am including conversions. You do need that for context, but then I’m adding on these behavioral metrics. In this case, this was an ecommerce website, and we included sessions, conversions, scroll depth, and time on page.
And you’ll notice as well I have average time on page here. That’s not actually a metric in GA four, but you can create it as a calculated metric by taking the average session duration and dividing it by the number of pages, which is a little bit messy. But, hey. You know what?
We we take what we can get. You could also and this is something where you might have to set this up in advance if you’re not recording this stuff. But, for example, did people click on product images? Did they click on the reviews?
Did they add it to the cart? There is an add to cart event you could export. And the exit rate. And, again, if you dug into GA4 and you’re like, there’s no exit rate here.
There is. It’s in explorations. They don’t give you the rate. They just give you the number of exits.
You can easily calculate the exit rate by dividing the number of exits by the number of views. So that is something for sure that you’ll want to get, and you can only access that in explorations. You can’t add it as a custom metric, unfortunately.
But, yeah, just keep that in mind. So instead of having a whole bunch of data, I’m getting really focused, clean, specific to our question data, and that’s it.
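The two derived metrics just mentioned are simple divisions once you have the exploration export. A minimal sketch, following the rough formulas described above; both are approximations by design, and the numbers are invented:

```python
def exit_rate(exits: int, views: int) -> float:
    """Exit rate: exits divided by views, both from an Explorations export."""
    return exits / views if views else 0.0

def avg_time_on_page(avg_session_duration_s: float, pages_per_session: float) -> float:
    """Rough average time on page: session duration spread across pages."""
    return avg_session_duration_s / pages_per_session if pages_per_session else 0.0

# Illustrative numbers only.
print(exit_rate(320, 1600))           # 320/1600 = 0.2, a 20% exit rate
print(avg_time_on_page(180.0, 4.0))   # 180s over 4 pages = 45.0s per page
```

These are the kinds of calculated columns worth adding to the CSV before upload, so the AI gets the metric directly instead of being asked to do the division itself.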
And the other thing I’ve noticed is that a lot of people make mistakes with AI by not using a longer prompt. I write prompts that are, like, paragraphs long because I’m providing as much data as I possibly can. So I’m gonna go through an example, step by step, of the kind of prompt that I use. And, again, this is in the files tab that you can download.
The first part of this prompt is the context, and I know there’s a lot of text on the page. Again, this is in the file. I just wanted to give you basically what I’m asking. So what kind of site do you have?
Who are you targeting? What variation was the control? What did you do in the other variant? How long did the test run?
What other information would help the AI make a call on whether or not that was a good test?
And then the second part is the hypothesis. What were you expecting to happen? Be specific in the results. Don’t just say you expected it to perform better, but how much better? In this case, we expected a fifteen to twenty percent conversion lift based on similar tests. Didn’t happen. Why didn’t it happen?
Now you talk about the goal. What is the point of this AI query? Why are you asking the AI to do this? In our case, we’re asking if variation b actually created better user engagement even though it didn’t lift conversions.
And what might the conversion blocker be? What are we trying to figure out here?
The fourth part is the data. And so in the data, you always want to explain what’s in the data. And the reason why I include this information is because you’re thinking, oh, you know what? I’ve uploaded this file.
It’ll see what’s in there. Sometimes AI can be kind of not bright, and it doesn’t necessarily look at the file, or maybe the file is corrupted or didn’t upload properly, and it can just go off on its own if you don’t stop it and say, hey, this is the information you should expect to see. And then it can compare that to the file and tell you before it starts the analysis, oh, you know what?
You said I was gonna see these dimensions and metrics. I don’t actually see that in the CSV that you’ve provided. Let’s double check that information. So including this kind of information is really important.
You don’t want it to make up stuff because you accidentally uploaded the wrong file for the wrong client or whatever it might be. It happens.
Now we have the actual ask we’re making of the AI. What do you want it to do? You want this to be as specific as possible so it doesn’t veer off course. And, usually, when I’m formatting the request, I will include numbered points.
First thing I want you to look at. Second thing I want you to look at. Third thing. Fourth thing.
And you could just have one thing. It doesn’t matter. Just number it so it knows that this is the one thing you wanted to look at. Because, also, in its response, it will often refer back to, you know, the specific question that you asked, and it will write your information back to you as well.
So by being really specific in the request, you can include a lot more information for the AI to make sure that it’s actually doing what you want it to do. I’m never saying just analyze this. Tell me what you see.
And then for constraints: what are the things that you want the AI to avoid doing, for example? Or what constraints do you want the AI to put on its analysis? So you could say, focus only on differences greater than ten percent, and prioritize insights that are actionable within thirty days. And if it doesn’t know what’s gonna be actionable within thirty days, then you might need to provide that context as well, in this constraint section, or back at part one where we had the context information; you could include it there as well.
You could, for example, say a follow-up question could be mobile versus desktop. So, you know, let me know if you feel that you would like me to include this information in a follow-up analysis to see if there are relevant patterns. That would be another good use as well.
And then finally, we’re almost at the end of this prompt. What do you wanna get back? Who are you presenting the data to? What do you need?
Do you need a chart? Ask for a chart. Do you need an executive summary? Ask for an executive summary.
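Putting the parts together, the prompt structure walked through above (context, hypothesis, goal, data, request, constraints, output) can be kept as a reusable template. This is a sketch, not the actual client prompt from the files tab; every placeholder value below is invented:

```python
# Reusable skeleton for the prompt structure described in the talk.
PROMPT_TEMPLATE = """\
CONTEXT: {context}

HYPOTHESIS: {hypothesis}

GOAL: {goal}

DATA: The attached CSV contains these columns: {columns}.
Confirm you can see all of them before starting the analysis.

REQUEST:
{numbered_requests}

CONSTRAINTS: {constraints}

OUTPUT: {output_format}
"""

prompt = PROMPT_TEMPLATE.format(
    context="Ecommerce product page A/B test, ran 4 weeks, ~10,000 sessions per variant.",
    hypothesis="Expected a 15-20% conversion lift from variation B; it didn't happen.",
    goal="Determine whether variation B improved engagement and what blocks conversion.",
    columns="variant, sessions, conversions, scroll_depth, avg_time_on_page, add_to_cart, exit_rate",
    numbered_requests="1. Compare engagement between variants.\n2. Identify likely conversion blockers.",
    constraints="Focus only on differences greater than 10%; prioritize insights actionable within 30 days.",
    output_format="An executive summary plus one chart suggestion.",
)
print(prompt)
```

Keeping the sections explicit makes it easy to reuse the same skeleton across tests and only swap the values, and the DATA section doubles as the file sanity check described earlier.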
And I’m just gonna go on a little bit of a tangent here and talk about tables.
Normal people, people who do not work in experimentation or analysis or whatever, they don’t like looking at tables. If you’re presenting to your CMO or your CEO, don’t present data in a table. If you do have to show a table, include, like, a bar or a heat map or literally anything at all to make it visually interesting because that is what is gonna resonate with the audience. Tables are boring, and they do a very poor job of communicating data and results.
Like, yeah, they communicate data obviously in things like Excel, Google Sheets, etcetera. But in a presentation, do not. Do not use tables. Less tables, more charts.
Thank you. That’s my side rant. Now we’ll get back to the actual presentation. Alright. So what you’ll get back from the AI, and, again, this was Claude.
You probably recognize it if you use Claude. It’ll help you figure out what you’re gonna do next. This response continued on and on and on. I just screenshotted part three.
I think this was a five part response. But this is where you get the useful data because it’s telling you what it needs to continue the analysis. So it’s saying, you know, based on the metrics available, I can see behavior on the product page, but not what happens after customers click add to cart. So it’s saying, you know what?
I can get this far, but no further. So the product page itself is not the blocker. What we don’t know and we do need to know is what happens at add to cart clicks. So what is the follow-up data as a result of this?
And, again, this whole response was extremely long. I did get a really useful executive summary, which we then obviously rewrote, because you never wanna present AI output flat out. You always wanna put your own spin on it. You wanna add more information, but this tells us where we need to go.
So the result of this was actually that the checkout was the issue, not the product page. This was not a Shopify site. This was a site that they had built their own custom ecommerce product, which, you know, I’m never a fan of. There’s lots of great ecommerce products out there.
Please don’t build your own. Then you are an ecommerce software company that happens to sell stuff on the side. But this particular client had built their own ecommerce platform, and the issue was that the checkout was the problem. We tweaked some factors.
We added some new payment options. They only had credit cards, and people were looking for Apple Pay. People have gotten so used to Shopify. They were actually looking for the shop button.
And then, you know, we changed up the free shipping threshold, and there we go. So one of the questions that just popped in: if the A/B variant changes are specific to the PDP, shouldn’t the checkout experience be the same across both test and control?
Yep. The issue was that we actually tried making the PDP more engaging because we were expecting that the PDP was the problem, in terms of, like, the PDP wasn’t capturing people’s attention and getting people to the checkout. But it turned out the PDP change was capturing people’s attention. It was encouraging more add to cart clicks.
So more people were adding it to the cart, they got to the checkout, and then they were just dropping right off. So if we only looked at the purchases as the conversion and we didn’t actually look at the user behavior, we would not know that, actually, our changes had been good. The problem was not with the PDP itself. The problem was with the checkout issue.
And, actually, what ended up happening is that the free shipping threshold change ended up being the winner, and it was just five dollars. That’s all it was. So, you know, sometimes it’s just little things like that. So, hopefully, Kevin, that answers your question on what the issue might be.
So part of it too is also making sure that when you put your data in there that you are analyzing, you know, more than just conversions, you’re focusing on other things as well. But then AI can also help us figure out, like, what else is going on here.
So, hopefully, that helps you figure out that that first technique, that first experiment that we did. So here’s another technique, and this is behavioral pattern recognition. And in this case, we’re looking across multiple dimensions.
So this is when you have a really frustrating situation.
Lots of engagement, lots of traffic, but conversion is not where you want it. You need to understand what actual converters do differently from people who are just browsing around. AI really excels at this because it can look at device, source, behavior, and outcome all at once. So for this particular experiment, we’re looking at a B2B SaaS company, and we have this particular data available to us.
And when I say user segments, I'm not talking about, like, individual user IDs. I'm saying that these people are converters, non-converters, etcetera. And in this case, this particular client has their Salesforce integrated with GA4. So we can see that the lead filled out the form, but then we can also see those events pushing back into GA4 as the lead hits each stage of the Salesforce process.
So do these people ultimately become clients or not? We did have this nice integration, which helped us get a little bit better data than we would typically get with a standard GA4 implementation.
So in this case, it's the same prompt framework. I'm not gonna include the entire prompt in the slides. You can get it from the file download. But I did change some details for anonymity in the analysis, so make sure to make it really specific to your circumstances when you adapt it.
So, again, this is a bit small. I wanted to include the entire context. So the context is, you know, who I'm working with: a project management SaaS platform for IT and operations teams, what size companies, how long their sales cycle is. They have a free trial sign-up that requires a business email, and then their sales team qualifies leads.
So there's lots of trial sign-ups, but only twelve percent are converting. And so people say, oh, the sales team is the problem. The sales team says there's too many tire kickers coming in. So what are the behavioral patterns that distinguish the high-intent people, the people who are more likely to close, so that the sales team can figure out who they're gonna talk to first, and then also work on your content strategy as a result?
Okay. So in this case, we’re providing as much detail as we can. Here’s the information in the CSV. User segments are trial converters, people who went on the site and signed up for a trial and then did convert to paid.
Trial non-converters, so they signed up for a trial but did not convert to paid, at least in this ninety-day period. And then non-trial visitors, people who went to the site and never signed up. And we were looking at only organic search, paid search, LinkedIn, direct, and referral. They do a lot of LinkedIn campaigns.
We did look at desktop and mobile just to see if that was an issue. And then what pages were viewed as well. So we cut it down to specific page groups in terms of the pages that we were providing. And then we included metrics, etcetera, just to help the AI figure out what was going on.
Again, this is in the file that you can download.
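To make the "clean, focused export" idea concrete, here's a rough sketch in Python. The column names (`segment`, `channel`, and so on) are made up for illustration, not from the actual client file; the point is trimming the CSV down to only the segments and channels your question is about before handing it to the AI.

```python
import csv
import io

# Hypothetical segment and channel labels -- adapt to your own GA4 export.
SEGMENTS = {"trial_converter", "trial_nonconverter", "nontrial_visitor"}
CHANNELS = {"organic_search", "paid_search", "linkedin", "direct", "referral"}

def focus_export(raw_csv: str) -> list[dict]:
    """Keep only the segments and channels the analysis needs,
    so the AI prompt stays small and on-topic."""
    rows = csv.DictReader(io.StringIO(raw_csv))
    return [
        r for r in rows
        if r["segment"] in SEGMENTS and r["channel"] in CHANNELS
    ]

raw = """segment,channel,device,page_group,sessions
trial_converter,linkedin,desktop,pricing,120
trial_nonconverter,organic_search,mobile,blog,340
nontrial_visitor,tiktok,mobile,home,90
"""

focused = focus_export(raw)
print(len(focused))  # 2 -- the off-topic tiktok row is dropped
```

The same filtering could be done in a spreadsheet; the habit that matters is deciding up front which segments and channels belong in the prompt.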
So we did get actual useful suggestions based on this data. It came up with some action items for marketing, including what to prioritize, and it actually came up with an entire lead scoring framework based on this new information.
I also noticed that AI loves giving people to-do lists, which I'm not mad about. And there were more than three items here for the sales team, but I had to cut it off for length. And also, number four was pretty specific to who they are, so I removed it for anonymity. So what we found interesting about this was that people who converted downloaded case studies. And one of the things that we also found was that if people watched the video, they actually didn't turn into customers. It was a negative factor.
So what was really interesting was that, basically, if they visited once and they converted, great. And if they took longer than that, they probably weren't gonna be a good customer. So people who watch videos aren't customers. A long average time on the pricing page was a negative purchase factor, but a positive trial conversion factor, so that would be wasting the sales team's time. If they spent, like, fifteen minutes on the pricing page debating, or even came back to it several times over the course of weeks, price might have been an issue for them, and they were less likely to become a customer.
Far more purchasers viewed competitor comparison pages. So they were clearly thinking, you know, here's our top three, who should we go with? And if a user signed up for a trial in three or fewer sessions, they were more likely to purchase, which is kinda counterintuitive.
You'd think people would be doing research. But in this case, if people came there fast and ended up trialing almost right away, they were a really fantastic lead. And then we could work that into lead scoring. So the more sessions someone had, the more we'd actually reduce their lead score in Salesforce.
And then the sales team would be able to say, oh, you know what? This person has a super high lead score because they’ve only been to the site once. They already signed up for a trial. They came via organic, which is a positive lead score as well, and so I’m gonna contact them right away for a follow-up.
Then that was really effective for the sales team in terms of just focusing on leads who are more likely to convert. And then when they ran out of time for those, they obviously could work on some of the tire kickers, but the lead scoring system really helped them, prioritize what was important.
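Those behavioral signals can be turned into a simple scoring function. This is a toy sketch of the idea, not the client's actual Salesforce configuration; the field names and weights here are illustrative, chosen only to reflect the positive and negative factors described above.

```python
def lead_score(lead: dict) -> int:
    """Toy lead-scoring sketch based on the behavioral signals
    from the talk. Weights are illustrative, not production values."""
    score = 0
    if lead.get("downloaded_case_study"):
        score += 20          # converters downloaded case studies
    if lead.get("viewed_competitor_comparison"):
        score += 15          # comparison-page viewers purchased more
    if lead.get("source") == "organic":
        score += 10          # organic was a positive signal
    if lead.get("sessions_before_trial", 0) <= 3:
        score += 25          # fast trial sign-up was the strongest signal
    else:
        # more sessions correlated with lower close rates
        score -= 5 * (lead["sessions_before_trial"] - 3)
    if lead.get("watched_video"):
        score -= 10          # video watchers tended not to buy
    if lead.get("minutes_on_pricing", 0) > 10:
        score -= 15          # long pricing deliberation = likely price objection
    return score

hot = {"downloaded_case_study": True, "source": "organic",
       "sessions_before_trial": 1}
print(lead_score(hot))  # 55 -- contact this one first
```

In practice these scores would live in Salesforce rather than a script, but sketching the logic this way is a good check that the AI's suggested framework is internally consistent before you implement it.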
Now I've shown you these two examples, so I'm gonna talk a little bit about how you can apply this, and I wanna cover some of the mistakes I see people make that lead to bad insights, which we really want to avoid.
So I know I just showed you these super lengthy prompts. I do want you to include all this detail. But in the request section, when you do your first prompt, just start off with one question as you get used to using the system. You know?
And here's some examples of what you could include. What do high-value visitors do differently? Which traffic source converts best? Etcetera. You know?
Definitely, check that out.
And then you can use the structure that I went over earlier. Of course, once you get more comfortable with this, you can change it up, add in more sections. But I find that having a structure means that I am thinking about each of these sections as I supply them to the AI, and I make sure I’m covering each of these bases when I prompt. So that way, I make sure not to forget anything to include that is really important for the AI to know.
Alright. So the next thing, we’re gonna move on to another poll. Alright. Now that you have taken a look at this information and I’ve shown you some of the prompts, what is the first thing that you would want to take a look at? If I can get that poll on the screen. Thank you.
Would you like to analyze a recent experiment, traffic source performance, user journey patterns, content engagement?
Okay. Yeah. User journey patterns. That is a big one for sure. And analyzing user journey patterns is really difficult because the tools do not make it easy to do so.
And I would say that if you are really focusing on user journey patterns, to be frank, GA4 may not be the right tool. It does not necessarily do the best job of it. I find that there might be better tools to look at user journey paths. But, certainly, you know, one of the real downsides of GA4 is that they have this really promising exploration called a user path or path exploration, I think.
Yeah. Path exploration. But you can't export it to a CSV. You can only export it as a PDF.
And so that is something that I find really frustrating about GA4: it does not necessarily do a great job of user journey patterns. I do have a course on LinkedIn Learning, which people can check out, and that does go into some of the prompts that I have on user journey mapping. I can include a link to that when we get to the Q&A section. But I think for sure it's something that, you know, GA4 just doesn't do a great job of, and I feel like it should do a better job of it.
So we can definitely talk about more user journey stuff, when we get to the q and a, and I will dig out that link, and I’ll provide it in the chat as well when we get to the q and a. And, also, if you wanna check out my courses on LinkedIn learning, don’t feel that you have to buy a LinkedIn premium membership. If you have a library card, most libraries include free access to LinkedIn learning. So you could also do that.
Or if you’re with the university, your university library might also have access to LinkedIn learning.
And some portions of that course I will be publishing on my blog in the future as well, particularly the user journey stuff, because that is a really tricky thing to unlock for sure. Alright. So now let's get back to the presentation before I run out of time for some good Q&A.
Alright. So we’re all familiar with the idea of hallucinations in AI, of course. But how can you spot that it is actually hallucinating? Something that’s overly specific is a really good sign it’s gone off the rails.
So, they spent exactly three minutes and forty-seven seconds? That's far too specific. I don't believe you. So sometimes I will ask it things.
You previously concluded X. What specific data point led to that? Basically, this is the show-your-work prompt. You know, when you're doing a math exam and you have to write your work out, this is what you can ask the AI as a follow-up.
It’s like a fact checking situation. And I will, often ask AI this if I don’t trust that it’s actually coming up with this information. And, also, this last part here, rate your confidence high, medium, low is a really good way, for the AI to admit that maybe it was just, like, making stuff up a little bit, which happens.
AI will never say no. It is deeply, absurdly confident.
And I've had AI plow ahead with an analysis even though I uploaded a totally blank file and it couldn't see the data. It just made it up. This happens less now than it used to in the beginning. You really do wanna double-check and get it to fact-check, because you don't wanna take this analysis and say, oh, yeah, this looks great, present it, and then it turns out that it's a load of nothing. So you really wanna make sure that it's actually operating within the parameters you gave it, and not just, you know, following its built-in instruction to always answer the question no matter what.
And AI can also sometimes go on and on and on about all the things you can try, which is super nice for it, but we are all real human beings who need to sleep and, you know, have lives other than jobs. And maybe we have a hobby or two that isn’t working on improving conversion rates all day. So make sure to get it to prioritize based on this kind of framework as well. So this is a follow-up question I will ask as well.
And then you also always always want to include business context. AI loves assuming you have unlimited budget. Everyone’s totally ready to jump on board with all of your recommendations at a moment’s notice, which we know is not the reality of, you know, life in marketing or generally. So tell it that.
This is a simple example, but another one could be the CMO was really against investing budget in SEO. They think the traffic from AI tools is around thirty to forty percent when you know it’s actually around five percent. What would you do about that? Or tell it your budget.
Be as specific as you can.
And I do actually have a CMO that we're working with right now who thinks that traffic from AI tools is around thirty to forty percent. It's not. It's five percent. So I also have a post that I can toss in the chat afterwards on setting up a channel to track how much traffic you're getting from AI tools in GA4, which can be a nice wake-up call for any leadership who might have bought really hard into the AI-driving-traffic-to-your-website hype. It's not. Google is still fine. And, really, ChatGPT is just Google in a trench coat. It's just presenting Google results, so keep that in mind too.
Alright. So now I just wanna leave you with this one last thought, and then we're gonna get to the Q&A. So start thinking about those questions that you wanna ask. And remember, not just about what I've talked about here, but other stuff about GA4 as well.
I wanna leave you with this one last thought. The teams that win in terms of getting more conversions, in terms of getting more business, they are not the ones with the most data. They are the ones who are able to use that data most effectively, and they are the ones who can turn that data into action the fastest. Everyone has data.
Everyone has so much data now. That is not the issue. You know, maybe ten years ago, fifteen years ago, twenty years ago, people didn’t necessarily have a ton of data, but we are all absolutely drowning in data now. Data is not the issue.
It’s really what you do with that data that’s gonna count and make the difference for you.
So I hope that you found this workshop valuable. I’ve included my link tree here if you wanna follow-up with me at all. And, now we’re gonna hop into some questions. And don’t forget to download the prompt file as well.
Thank you for such a fascinating workshop. It has obviously led to some of our chatter. Folks, now would be the right time to ask any questions, maybe build up the discussion.
We have a couple of minutes to spare.
So I Yeah.
I was gonna say there's definitely, like, some spicy chatting going on with the YouTube video that Craig Sullivan is sharing, which I have to check out. And there is something that Craig was raising in the chat, actually, that I wanna address: the count of users being a big problem.
So user counts and session counts are different in GA4, and I just want people to have that distinction. A user is typically a device. Unless you are reporting user IDs to GA4, a user is a device. That's it.
Now, a session is estimated in GA4. If you need that to be accurate, as in completely accurate, then you need to use something like BigQuery for analysis. But the kinds of organizations that demand that kind of certainty are probably using BigQuery anyway. And I'm also gonna say, too, that people use ad blockers.
People say no to cookie consent. Like, there is no absolute accuracy in analytics. So you do need to be just, like, a little bit comfortable with having some ambiguity in your results. And I know saying this is, like, heresy.
You know, we’re talking about, like, statistical significance and p values and all the things that, you know, we learned about in stats class. I mean, I have a geography degree. I took stats for geographers, but we still learned about these things. And you just still need to be comfortable with a little bit of ambiguity.
People do weird things on the Internet. People will visit on one device and convert on another device. Right? Like, I am a Gen Xer.
Buying something on a mobile still feels weird. I prefer to do things like spending money on a desktop computer. There's lots of me out there who like doing this as well. So just keep in mind that, unfortunately, your data is never gonna be as perfect as you want it to be.
And, also, really, it’s just yeah. Just keep that in mind. And okay. So I’m gonna dig up this link as well because, Craig, you’re sharing your link about the dashboard.
I also have a blog post specifically on how to, create a channel for AI tools in GA four, which you can also use, which you can also use in Looker Studio as well. So I’m just gonna toss that in the chat so that people have that resource as well. And I have a ton of different resources about AI.
Okay. So should we get into the q and a now?
Yep. Absolutely. So Okay. There are other questions in q and a tab. I’ll take them one on one. Kick off with a a question from Kevin.
K.
How specifically does AI perform multidimensional analysis well within GA four? Well
It does not really perform multidimensional analysis well, honestly, not within GA4 itself. GA4 is an excellent data collection engine. It is not a good data reporting engine. So this is where you really do need to go somewhere else; AI tools are what I prefer.
You know? Or you can create a Looker Studio dashboard, etcetera. But, really, you gotta get the data out of GA4 to do any useful analysis. And if you're working with big datasets, I do recommend looking at BigQuery.
There's a new product called Dataform for GA4. The team that runs it is excellent. It is a great product. If you want to start looking at your raw GA4 data and do some analysis on it, I do recommend checking that out, especially if you don't wanna have to learn everything from scratch right away to work with that raw data.
So yeah. I would I would check that out.
There’s also a follow-up.
If you wanna follow up, there's a separate question from this one; it's on your screen right now.
Yeah. What are the most common or standard practices within GA four to monitor AI visibility and traffic? Yes. So I just shared one of them in the chat about how to check and report on traffic from AI tools.
And then I also have, let's see, another post I'm gonna share. You might have noticed as well that when you click on a link from, say, an AI overview, you get a big long weird link in your URL, and you can actually track when people come to your site via that link. That will help you track whether people are coming to your site via AI Overviews, featured snippets, or People Also Ask results in GA4. The process that I've just linked to does require you to use Google Tag Manager.
Most of you probably are, but I would recommend checking that out as well.
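The "AI tools channel" idea boils down to bucketing sessions by referrer hostname. Here's an illustrative sketch of that logic in Python; the domain list is an assumption on my part, not an official GA4 channel definition, and you'd extend it with whatever AI referrers actually show up in your reports.

```python
from urllib.parse import urlparse

# Illustrative list of AI tool referrer domains -- extend as needed.
AI_REFERRER_DOMAINS = {
    "chat.openai.com", "chatgpt.com", "perplexity.ai",
    "gemini.google.com", "copilot.microsoft.com",
}

def classify_channel(referrer: str) -> str:
    """Rough equivalent of a custom 'AI tools' channel group:
    bucket a session by its referrer hostname."""
    host = urlparse(referrer).hostname or ""
    host = host.removeprefix("www.")
    if host in AI_REFERRER_DOMAINS:
        return "AI tools"
    return "Other"

print(classify_channel("https://chatgpt.com/c/abc123"))  # AI tools
print(classify_channel("https://www.google.com/"))       # Other
```

In GA4 itself you'd express this as a custom channel group condition on session source rather than code, but the matching logic is the same.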
Awesome. Kevin, do let us know if that answers your question or if there’s a follow-up.
Yeah. And if there’s a follow-up, for sure, definitely make sure to ask that too.
Yep. Here’s another question coming in from Ronaldo.
Yeah. This is a great question, Ronaldo. Do I give the context about the possibility of ad blockers blocking the GA four pixel losing data? Yep.
Absolutely. I do. And I also think that this is something that you should make sure to share, at all times with your leadership as well. So not only ad blockers.
And, of course, if you're using server-side tracking, then ad blockers might not be as much of an issue. If you're using a tool like Segment, for example, sometimes that can get around ad blockers, blah blah blah. So, you know, you can get around some things. But the reality is that in most cases, people will figure out ways to block you.
And there are also privacy browsers that impact data as well. For example, we have Safari. Safari has a technology built in called ITP, Intelligent Tracking Prevention. ITP stops cookies from being around for longer than seven days.
So that means that, you know, if I come to your website today and then I come to your website two weeks from now, I’m gonna look like two brand new people even though I had been in your website before. And that’s a privacy function. There’s really not much you can do about that.
Users from Safari will also show up as coming from a different geography. There's also some new stuff out there where parameters, for example from HubSpot or other email programs, just get ripped out of URLs. There's always a little bit of panic that they're gonna start ripping out UTM parameters as well. That hasn't happened yet, but keep an eye out for it too. So there's also that kind of technology blocking.
And then there's also just the weird way that people use the Internet. My go-to example: I live in an area of the world where you need to spray for ants every year or else the ants will take over the house, because even though I live in Canada, it never gets cold enough here for the bugs to die off. So you gotta have a pest control person out every year to do a perimeter spray. And our old guy retired, so I had to Google for a new guy.
And so I did a whole bunch of searching. This person had the best SEO presence and the best Google Business Profile. I'm like, I really wanna reward this company for investing in the thing that I do for a living. So I called them, and it turned out they could only come when my wife was gonna be home and I wasn't.
So I texted her the link, and she tapped the link in the text message and did the booking. But the thing is, the booking then shows up as direct. They will have no idea how she got there, because I sent her a link without any UTMs attached to it. She tapped on a link in a text message.
It's gonna show up as direct. And so there are lots of different ways, too. For example, you know, people will remove UTMs before they pass along URLs. They will directly type in a product name. They will do tons of research on you.
And then when it actually comes to buy, it’s someone else in procurement who is directly brand searching for you and then clicking on your website and then filling out a form, for example. So there’s lots of different ways that analytics is going to be broken simply by the way that people use the Internet. So that is really something to keep in mind, and it’s something to explain to your leadership about as well. That that direct traffic is not necessarily, like, people with bookmarks.
It’s people just using the Internet the way that people use the Internet, and that’s okay.
Yeah. And if you have a lot of direct traffic, like more than half, you might wanna take a look at your configuration for issues, but often it's just people being weird on the Internet. For example, on Analytics Playbook, I sell courses to marketers. Marketers love ad blockers.
Only half of my sales actually show up in GA4 with this client-side tracking. And out of the half of sales that do show up, half of them are attributed to direct, because everybody removes the UTMs that I put on absolutely everything before they come to the website. So, like, if I do this professionally for a living and I can't even track it, what hope do the rest of us have? Right?
So definitely something to keep in mind when it comes to, tracking people. I hope that helps.
Yeah.
Absolutely. Thank you for the personal anecdote there too. I hope that answers your question. Moving on to the next one from Polyus.
Yeah. Yeah. So there's definitely some other tools out there that do an okay job with it. I would say you could even try out tools like Amplitude, for example, which are more of a roll-your-own analytics platform.
But, I mean, it really depends, and I don't have the context here of, like, how many visitors you have or how big your user journeys are. You know? There are ways to track this sort of thing in GA4, but it does require advanced setup. So you just have to decide if the amount of effort it's gonna take to do this in-depth user tracking is worth it beyond what you can get out of a path exploration in GA4.
So I would encourage you to start by using the path exploration in GA4. And then if you need more information beyond that, you can start to look at different tools or different ways of recording that information. So, for example, I'll often record what I call an internal link event in GA4, which records when people click from page to page to page. And then we can look at the analysis of those internal links, because in the internal link event, I will have the page that they're on, and then I will have the page that they went to.
And then I can use that information to reconstruct the paths along with some timestamps. So there are definitely ways to do it in GA4 itself. It does just require some advanced work. And, again, decide if it's worth it.
You know, do you really need to know that six people went from this page to this page, or can you just look at the path exploration and determine, yep, this is good enough for the analysis you need to do?
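To show what that reconstruction step looks like once the internal link events are exported, here's a small sketch. The event field names (`user`, `ts`, `from`, `to`) are hypothetical stand-ins for however you name the parameters in your own GA4 setup.

```python
from collections import defaultdict

def reconstruct_paths(events: list[dict]) -> dict:
    """Rebuild per-user page paths from a custom 'internal link'
    event that records the page the click happened on ('from')
    and the page it led to ('to'), ordered by timestamp."""
    by_user = defaultdict(list)
    for e in sorted(events, key=lambda e: (e["user"], e["ts"])):
        by_user[e["user"]].append(e)
    paths = {}
    for user, clicks in by_user.items():
        # start from the first 'from' page, then follow each click
        path = [clicks[0]["from"]]
        path += [c["to"] for c in clicks]
        paths[user] = path
    return paths

events = [
    {"user": "a", "ts": 2, "from": "/pricing", "to": "/signup"},
    {"user": "a", "ts": 1, "from": "/home", "to": "/pricing"},
]
print(reconstruct_paths(events)["a"])  # ['/home', '/pricing', '/signup']
```

This kind of reconstructed path table is also a good candidate for the focused CSV export approach from earlier in the talk, since AI can then summarize the common routes.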
I hope that answers your question, Polyus. Moving on to Okay. The last question maybe.
Yeah. The best way to track experiment information in GA4. So this will depend on the individual tool. Whatever the tool recommends, VWO or whatever, follow those steps for setting it up, for sure. The tools that I like best, like VWO, do push that information; VWO does have a GA4 integration. So you definitely wanna make sure you've got that set up.
And in terms of whether to use data layer pushes or a direct integration, it does depend a lot on how your website is set up.
If you're, you know, using a plug-in, for example, then you could probably just use that integration. Data layer pushes are one of the more reliable ways to get data into GA4. And particularly if you're working with a site that has anything funny like infinite scroll, or a single-page application, called a SPA (you would know if you had one of those), a data layer push would be the best way to go. But, again, follow VWO's recommendations on how to get that experiment information into GA4. They know what's gonna work best.
Absolutely. And then if I do recall that there’s a ebook that we authored a while back, I’ll look for it, probably send that across, and maybe it’ll be of help. I think that is about it.
There’s one more question from Minoxi. Just a second.
Yep. Yeah. Oh, yeah. And I just wanna say too, Joanna just mentioned the chat about using audiences in GA four. Yes. Please set up audiences.
I love setting up audiences.
So I am a huge fan of figuring out different audiences that I can track. And I'm just digging up a traffic analysis post I published a little while ago on how to create AI traffic analyses as well. This is where you're looking at first-touch, return-touch, zero-touch, and any-touch AI audiences in GA4. And I find that audiences is one of the most powerful and most overlooked sections of GA4.
And one of the reasons why I really like using audiences is that audiences in GA4 can be based on several factors over a period of time. It doesn't have to be in the same session. You could say, I wanna create an audience of people who came to the website and then a week later came back, or, you know, added to cart, thought about it, etcetera. And then you can create events based on people being added to an audience.
And there’s lots of interesting stuff you can do once you start to get into that. So, in that post, I just posted into the chat.
I talk about cohort explorations in GA4, and that's where creating events based on audience inclusion can be a really powerful way to build fantastic cohort analyses in GA4. So I recommend checking that out as well.
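For anyone who wants to see the mechanics, the cohort idea can be hand-rolled from exported visit data. This is purely illustrative (GA4's cohort exploration does this in the UI); the data shape here is an assumption, with each row being a user and the week they visited.

```python
from collections import defaultdict

def weekly_cohorts(visits: list[dict]) -> dict:
    """Group users by the week of their first visit, then count
    how many came back in each later week -- a hand-rolled
    version of a cohort exploration (illustrative only)."""
    first_week = {}
    for v in sorted(visits, key=lambda v: v["week"]):
        first_week.setdefault(v["user"], v["week"])
    cohorts = defaultdict(lambda: defaultdict(set))
    for v in visits:
        offset = v["week"] - first_week[v["user"]]
        cohorts[first_week[v["user"]]][offset].add(v["user"])
    # counts per cohort week, keyed by weeks-since-first-visit
    return {c: {k: len(users) for k, users in weeks.items()}
            for c, weeks in cohorts.items()}

visits = [
    {"user": "a", "week": 0}, {"user": "b", "week": 0},
    {"user": "a", "week": 1},  # only user a returned in week 1
]
print(weekly_cohorts(visits))  # {0: {0: 2, 1: 1}}
```

Swapping "week of first visit" for "week the user entered an audience" is exactly the audience-inclusion trick described above.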
Awesome. Awesome. Thank you for the compliment also there, Joanna.
And here’s probably the last question before we k.
Conclude day two. Yeah.
Can we be sure, or double-check, that even after giving a detailed prompt, AI will not misinterpret the data and will give the best insight? Yeah. Definitely use those fact-checking prompts. I'm just gonna go back to the fact-checking slide as well.
Where was it? Hang on here. I just want to let’s see.
Sorry. I’m trying to find my right window. Okay. We’re gonna go back to the fact checking. Okay.
So use this. This is my way of checking. And AI is pretty honest too.
And sometimes you can just ask it, why did you do that, or why did you come up with that? And it will tell you. I mean, AI doesn’t have an ego. Right?
It's not gonna be like, oh, I'm trying to pretend that I know everything, so I can't possibly admit I don't. You know? I find AI can be like a junior employee who's like, I can't possibly say I don't know everything, so I'm gonna answer yes to everything, when they don't actually know everything.
And as you get confident in your career, past that junior stage, you're like, oh, I can say I'm gonna look into that, and that's totally okay. I'm not gonna get fired. AI is like that junior employee. So just ask it.
Ask it, look, why did you come up with this? How did you get here? And "rate your confidence," I find, is also a really good thing to ask, just to make sure that you're getting what you should be getting out of AI tools.
And if it truly feels suspicious, start over. That’s my best advice as well.
Yeah. Thank you so much, Dana, for allowing us the opportunity and for hosting this amazing workshop. And we’ll be looking forward to doing many more collaborations together. Hopefully.
And thank you so much for having me, and thank you, everybody. It was early for me; I don't know if it was early or late for you, but I appreciate all of you joining from all over the place and listening to my AI stuff today.
Awesome.