Remote user research and testing have never been more at the forefront of UX researchers' minds and toolkits.
Whether adopted out of necessity or choice, remote UX research has its share of pros and cons, and like any method that involves people, there's a fair share of tales about how remote interviews can go wrong.
Here are a few common issues with remote user research and some helpful guidance on how to deal with the things that inevitably go wrong.
For those more acquainted with in-house research, the first thing you quickly realize with remote interviews is the loss of control over the environment in which the research takes place. Your user can join the call from anywhere.
Granted, this is common for a number of research techniques like in-field or guerrilla testing. However, when preparing for those methods, you do have a rough idea of where you’ll be conducting the research and you can plan ahead.
For remote research, you have no idea where your user is until you’re both connected on the call. In that moment you have to deal with whatever is thrown at you – children crying, neighbors arguing, traffic noise, etc.
Even on the researcher's side, our homes are not designed to be professional environments, and we have no control over other people's actions, like flatmates singing or pets howling.
During a recent remote interview, I was in my kitchen at the dining table, and my user was in their living room on the couch. The user was a senior in their academic field currently working in Saudi Arabia, and I wanted to make a good impression. Unfortunately, little did I know that my interview coincided with the ice cream truck driving around the streets.
While the ice cream jingle was at some distance and inaudible on the recording, my dog’s incessant howling at the ice cream truck was a little too audible.
The only thing I could do was apologize and move my dog into my bedroom, away from the kitchen. However, this only added to my dog's anxiety, and his howling grew even louder from further away. Towards the end of the call, the user's child joined in by crying, and others in the room became noisy whilst preparing for one of their daily prayers.
The call was brought to an early finish with not all questions covered, but the fact that the user's environment was just as noisy as mine was a reassuring reminder that this can happen to users as well as researchers.
We will sometimes lose control over our environments. We have to understand that this can have an impact on the research, especially if your user is constantly distracted or the audio is poor.
As a facilitator, when this happens you need to assess the situation and make the call on whether the interview should continue or not. Ask yourself questions like: Can the user still hear me clearly? Is the distraction temporary or ongoing? Will the recording still be usable for analysis?
Judging the severity of your answers will help you make the right decision for your research and your user.
Remote research and online video calls have been around for years, with many different tools to choose from. The range of available tools is not only confusing; there is enough choice that every user has their own preference.
There is nothing wrong with choice, but problems arise when every tool seems to require installation or account creation, increasing the risk of users dropping out.
I've had users ask to use specific tools, usually ones not designed for conducting remote research. And even if you're exclusively interviewing academics and scientists, no industry or domain guarantees your users are tech-savvy.
Offering plain-speaking guides on what the process will involve and how to install any necessary software will provide users with reassurance before the interview. Even if your users are more on the tech-savvy side, it’s always handy to have guides detailing as granularly as possible what they need to do, when and how.
One particularly distracting behavior in remote interviews is a user insisting on seeing your face during the call.
It’s completely normal to want to see the face of whomever you’re talking to. But some users will insist on seeing the researcher’s face the entire time, to the point where they continually switch between the camera view and the prototype.
In a recent interview, the user was fixated on moving the camera view, rearranging windows so that they could see my face and the prototype at the same time.
I had to sit watching my user switch back and forth between my face and the prototype as I explained the context of the prototype and the scenario for the usability test. Not only was my face distracting to me as I delivered the scenario, it was clear that the user was also distracted from listening to what I was saying.
The research was impacted at this point because it wasn’t clear how much of the context or scenario the user had heard and understood. As a facilitator I felt uncomfortable asking them to stop moving my face around the screen, as this could be easily misconstrued as rude. I also couldn’t use body language to help convey that I meant it in a friendly tone.
You’ll need to politely ask the user to move any distracting window away from the key section they need to interact with, while also trying not to lead the participant towards using the key section that needs testing. It’s a difficult balance, but with the right level of tact, it shouldn’t be a problem for the user.
Given the current situation of working from home, there is less access to the usual range of devices on which prototypes can be tested. This means the first time I see my prototype on a non-Mac machine is during the interview, after the user has opened it.
There have been a few times when my user has finally moved my face to the opposite side of the screen from where I need them to look, only to reveal that my prototype was designed with too high a resolution in mind. Often the key section of the interface, built to always display above the fold, is now off-screen both horizontally and vertically.
Fortunately, the research is not impacted too much, depending on the prototype and the research questions. What it does mean is that usability issues stemming from users not seeing a feature because of screen size have to be discarded, because they have no merit.
This takes up time in the interviews, as your user may have to scroll to search for a feature they would normally see. It also takes up time in the analysis, as you have to determine whether the user could not find the feature because of the resolution mismatch or because of a genuine usability issue.
During current lockdown measures, while we may not be able to test prototypes on different devices, try viewing the prototype at different screen resolutions instead. Resolutions on Windows machines tend to be lower, so if you're on a Mac, try viewing the prototype at the smallest possible resolution to see how it looks.
As an alternative, you can ask colleagues who use different resolutions to view the prototype and send screenshots of how it looks, so that you can change it accordingly.
The combination of technical issues, distractions, and users obsessing over moving your face around the screen means that remote research can take just that little bit longer than normal.
While it's not a huge issue, it does mean that some elements of the research may not be covered, delaying the delivery of the overall research.
I've had users, both in-house and remote, sign up to interviews thinking it's an hour-long session where they can provide whatever feedback they like, whilst ignoring all your questions.
They've sidetracked every question with a long-winded answer and managed to bring every question back to what they wanted to talk about. You'd be amazed how quickly a conversation can go off topic and how many topics a user can cover in one answer.
To get around this, do all you can to mitigate whatever takes up the most time. If you know the video tool requires installation, send the link to the call at least 20 minutes beforehand and let users know they may need to install software.
Users getting distracted or taking longer than anticipated is simply a common feature of research. Some users will fly through all your tasks, while others only complete half. Just know this is not a reflection of you as a facilitator.
With users who seem adamant about sharing their own off-topic feedback, try telling them that there will be a set amount of time after the actual test in which to air semi-related feedback.
It is a strange time indeed, with high levels of uncertainty and social distancing. Not only for us personally, but also in the way it’s affecting our working relationships and the relationships we have with users.
We now have to overcome these new constraints and remember to have empathy for our users, as we have no idea what they're going through either.
There are common troubles across all types of research, so many that people have filled books with their own tales from the trenches. Remote research is no different. It takes experience to understand what can go wrong and how to deal with it.
Unfortunately, experience takes time, and right now it feels like we don't have any time to get to grips with our newfound remote working situations. So given the constraints we're facing, we have to be realistic and learn to expect the unexpected.
Just like with other tried and tested research techniques, we should learn from these challenges and keep fighting the good fight of spending as much time with our users as we can.