How To Conduct User Research To Supercharge Your Optimization Efforts
What Is User Research?
Usability.gov defines user research as “understanding user behaviour, needs, and motivations through observation techniques, task analysis, and other feedback methodologies”. User research has become entrenched in Epiphany’s CRO process, allowing us to understand what makes users tick and helping us uncover the ‘why’ behind analytics data.
To allow us to run qualitative and quantitative research for all of our CRO clients, we’ve recently completed the development of our in-house user research facility, Mindseye. Mindseye has been created with users at its heart: we’ve purposely built a relaxed, informal, and homely environment to encourage natural behaviour.
We’ve also invested in both desktop and mobile eye tracking technology, which provides another layer of data to draw upon to support our recommendations. Eye tracking also helps us tap into users’ subconscious behaviour, meaning we don’t have to rely solely on what users tell us.
What Are Its Benefits?
There are many benefits to adopting the user-research method to inform a CRO programme:
Firstly, understanding why users behave in a certain way helps us develop solutions that genuinely address that behaviour, increasing the likelihood that a solution will have the desired impact. For example, analytics data might tell us that the basket has a high exit rate. Without user research, we might hypothesise that the cause is the prominence of the promo code field, and develop an AB test centred around reducing its prominence. However, as we’re not genuine users of that website, and don’t necessarily fit the target audience or demographic, our hypothesis could easily be wrong. Conducting user research means we understand what’s genuinely causing the high drop-off, for example a lack of clarity around delivery options, and can therefore develop a relevant solution that is more likely to generate a positive outcome and avoids wasting development resources. The impact of taking this more informed approach to CRO is illustrated by our 80% AB test win rate.
Secondly, user research helps remove opinion and subjectivity from optimisation efforts. Paul Rouke from PRWD describes the negative impact the traditional HiPPO mentality (the highest-paid person’s opinion) can have on optimisation efforts: “the traditional HiPPO in business is the thing that so often is seen as the opposite of progress, engagement, leadership, inspiration, collaboration, and humility.” Running AB tests based on what the highest-paid person thinks, rather than tests informed by data, can be detrimental to embedding an optimisation culture within a business. Conducting regular user research highlights the importance of the user and helps communicate this to the wider business. Again, the ultimate benefit is that optimisation efforts can be focused on the things that genuinely impact users, increasing the chances of success.
Finally, user research can help with prioritisation. Prioritising AB tests is an extremely important part of any optimisation programme. At Epiphany, we’ve developed our own prioritisation methodology, which includes scoring each test hypothesis on the amount of supporting quantitative and qualitative evidence. Conducting user research helps us validate test ideas by building an understanding of how severely an issue affects the user experience. This ensures we’re focused on what genuinely impacts users, as opposed to what merely deviates from UX best practice, which, although it has its place, tends to assume all users, websites, and businesses are the same.
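To make the idea of evidence-based prioritisation concrete, here’s a minimal Python sketch. The weights, evidence counts, and hypothesis names are invented for illustration and are not our actual scoring methodology:

```python
# Illustrative sketch of evidence-weighted test prioritisation.
# The weights and categories here are assumptions for the example only.

def prioritise(hypotheses):
    """Rank hypotheses so those with the most supporting evidence come first."""
    def score(h):
        # Quantitative evidence (analytics, heatmaps) and qualitative
        # evidence (user research findings) both add weight; severity
        # captures how badly the issue affects the user experience.
        return 2 * h["quant_evidence"] + 2 * h["qual_evidence"] + h["severity"]
    return sorted(hypotheses, key=score, reverse=True)

backlog = [
    {"name": "Clarify delivery options", "quant_evidence": 2,
     "qual_evidence": 3, "severity": 4},
    {"name": "Reduce promo code prominence", "quant_evidence": 1,
     "qual_evidence": 0, "severity": 1},
]

ranked = prioritise(backlog)
print(ranked[0]["name"])  # the hypothesis with the most supporting evidence
```

The point of a model like this is simply that a hypothesis backed by both analytics data and user research outranks one based on opinion alone.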
How Does It Work?
There’s quite a lot involved in running user research, and although it can be done quickly and cheaply, we’re firm believers that you get out what you put in. There are four key things to think about:
1. Recruitment

It’s important that the participants you include in your user research provide a true representation of your actual users. There are a few ways to recruit them: through UX research platforms, via social media, or from your existing database. Although demographic criteria such as age and gender are important, it’s also key to consider the habits and behaviours you want your participants to have: which device(s) do they use to access the internet, how does this differ depending on the task, are they the sole or joint decision-maker, and do they have a genuine interest in or need for your product or service? Pretending to part with your money is very different from actually parting with it, so it’s essential that the sessions are as realistic as possible and that your participants are genuinely in the market for your product or service.
Top tip – Set up a recruitment survey for potential participants to complete. This should include answering questions on a range of different topics, for example, ‘Which of the following items are you currently in the market for? (select all that apply)’ rather than ‘Are you in the market for a new sofa?’. This prevents potential participants from knowing what the research is focused on and as such ensures they answer genuinely.
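As a small, hypothetical sketch of how such screener responses might be filtered, the snippet below only qualifies respondents whose multi-select answers include the product being researched. The question format, names, and answers are all invented for the example:

```python
# Hypothetical screener filter: only participants whose multi-select
# answers include the product being researched (a sofa, in this example)
# qualify. The distractor options in the survey stop respondents
# guessing what the study is about.

TARGET_ITEM = "sofa"

responses = [
    {"name": "Aisha", "in_market_for": ["sofa", "holiday"]},
    {"name": "Ben",   "in_market_for": ["laptop"]},
    {"name": "Carol", "in_market_for": ["sofa", "car insurance"]},
]

qualified = [r["name"] for r in responses if TARGET_ITEM in r["in_market_for"]]
print(qualified)  # only the respondents genuinely in the market
```

Because the target item is buried among distractors, a respondent who selects it is far more likely to be genuinely in the market than one answering a yes/no question.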
2. The Script

Writing a script for the research sessions helps ensure consistency across sessions, as well as communicating to clients or colleagues how the sessions will run. Script creation should be focused on the research objectives, i.e. the purpose of the research; thinking about these should help you develop your key tasks, questions, and prompts. The script should also include rough timings to help you plan the session and prevent over- or under-running. Including a spare task, undertaken only if a participant is particularly quick to complete the key tasks and provide feedback, ensures you make the most of the available time.
Top tip – Start the script with very open tasks that don’t lead users to behave in a particular way. If there are specific areas of the website you want feedback on, include more specific tasks later in the session. Always give participants the chance to interact and share feedback naturally first. If they don’t come across the page or element you were looking to gather feedback on naturally, this is an interesting insight in itself. We start the majority of user research sessions from a blank browser screen, allowing participants to browse for the relevant product or service completely naturally, before taking them to a specific website.
3. Moderation

The key to moderating user research sessions effectively is to observe and listen: the less you speak, the better, and silence is often golden. Your role is firstly to reassure the participant by explaining what will happen, being open, friendly, and (where appropriate) informal, and helping them relax into the situation. Being engaged and providing cues of encouragement while participants give feedback also reassures them that they are providing useful insight.
Top tip – Start the session with some pre-session questions. These should start very broadly and then become more specific to the topic or service the research is focused on. For example, if you or your client sells holidays, start by asking where they’ve been recently or about any holidays they have coming up, then find out more about how they normally book, with which companies, and why. These questions help you learn more about your participant, which will help you run the rest of the session effectively, and they’re also an opportunity to get your participant talking.
4. Analysis

When it comes to analysis, it’s always best to observe live via a stream from the research facility. When observing the sessions, take note of what participants do as well as what they say (the two aren’t always aligned). Participants don’t always know when they’ve done something interesting, so keep an eye on the eye tracking to make sure you don’t miss insights that participants don’t verbalise.
Top tip – If there’s a large group of people observing the research, encourage everyone to note down their top 3-5 insights from each session on post-it notes. Reviewing everyone’s top insights between sessions will facilitate debate and help identify the key themes.
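The same tallying exercise can be sketched in a few lines of Python: count every observer’s noted insights and the recurring themes surface on their own. The notes below are invented for the example:

```python
from collections import Counter

# Each inner list holds one observer's top post-it insights from the sessions.
observer_notes = [
    ["delivery options unclear", "promo code distracting", "trust badges missed"],
    ["delivery options unclear", "search bar ignored"],
    ["delivery options unclear", "promo code distracting"],
]

# Counting across all observers highlights the key themes to debate.
themes = Counter(note for notes in observer_notes for note in notes)
print(themes.most_common(2))
```

An insight noted independently by several observers is a much stronger candidate for a test hypothesis than one person’s pet theory, which is exactly what the post-it review between sessions is designed to reveal.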
The Role Of Eye Tracking
User research can be, and frequently is, conducted without eye tracking. However, there are many benefits to investing in technology that enhances the output of your research techniques. Any effective CRO programme involves drawing on multiple data sources to validate AB test hypotheses. User research provides a strong source of qualitative evidence, and including eye tracking overlays another quantitative source that further strengthens the supporting evidence.
Eye tracking also makes the research sessions more natural by removing the need for participants to think out loud as they go. ‘Think aloud’ is a common methodology; however, it can cause participants to alter their behaviour, as they are forced to verbalise, and therefore rationalise, what they are doing. Eye tracking enables an alternative methodology called ‘retrospective think aloud’ (RTA): post-task interviews in which the eye tracking footage is played back to participants to help them recall what they were thinking and feeling at each point, meaning they can complete the tasks in peace.
The final benefit of using eye tracking within CRO programmes is validating insights gathered from heatmaps. Heatmaps play an important role in identifying optimisation opportunities; however, mouse and eye movements are often very different. Heavy mouse movement over a section of content might be taken to suggest that the user was engaged with that content, but this isn’t necessarily the case. Observing participants interacting with your website through eye tracking helps you understand whether mouse movement accurately reflects where users are actually looking and engaging, helping you make more informed optimisation decisions.
What Results Can I Expect?
Here’s an example from one of our clients, Goodmove. The user research we undertook in Mindseye identified that the homepage evoked negative feelings, with participants explaining that some of the terminology felt ‘spammy’ and ‘click-bait’. This led to an ABn test, run through VWO, which experimented with the use of the word ‘FREE’ on the homepage. The test included multiple variations, such as removing the capitalisation of ‘FREE’ and replacing ‘FREE’ with ‘personalised’. It resulted in a 7% increase in sign-ups, highlighting the importance of gathering user feedback to support conversion rate optimisation.