Working in a UX agency means that sometimes you’ll get a limited budget for an entire project or for certain parts of it.
In my time at Grapefruit, it has been proved to me time and time again that most clients will choose to cut costs in the validation phase of a project. As designers, we need to educate our clients about our desired process on a daily basis. Still, it’s often difficult for them to understand the value of usability tests. So what can we do to test our ideas, prototypes and even live products and marketing campaign websites?
In this article, I will describe a few testing methods that we can use by ourselves or with the help of our colleagues. Since these methods are used internally, without the help of actual users, I’ve called them internal evaluations.
In the various design teams I’ve been part of, I’ve often experienced contrasting practices for testing products. I’ll mention some of them:
In a perfect environment, all of these practices would be used in any given project. And since we’ve already established that the perfect environment doesn’t exist, I’ll continue the article by describing how an internal evaluation works.
The two internal evaluation methods that our team likes and has used most are Informal Action Analysis and Cognitive Walkthroughs. Let’s discover how they work and how to set them up.
Before getting into each method, I’d like to mention that both of them are task-based. This means that the expert who is about to evaluate a functionality of a product will have to do so starting from a specific task. A task is similar to a Job-to-be-done (“The customer job-to-be-done starts when they want something that will improve their life, and it ends when they obtain or give up on obtaining the object of their desire”), except it’s more specific and detailed. In a good design process, tasks are determined in the research phase of the project.
Let’s take a look at a specific example of a task. In the following example I’ve considered that the product is an ecommerce website that sells electronics, and the user’s task is to purchase a specific product. The task might be framed like this:
I want to buy a smartwatch. I’d like the watch to be black. The only features I’m interested in are sleep monitoring and counting my walking steps. It’s really important that the watch has good battery life and that I won’t need to charge it constantly. I want the watch to be under 100 dollars. I’m brand agnostic, but I’m aware of the big brands in this niche.
I will use this example of a task to describe the two internal evaluation methods mentioned above.
Informal Action Analysis is a method where experts have to accomplish given tasks, noting down each step they go through in the flow. At each step, they then write down any problems of layout, usability or perception they encounter, along with potential ways of improvement.
Following the above-mentioned task example, an expert would walk through the purchase flow and note down each step they took along the way.
An expert might then write down the problems encountered at each step: maybe the available filtering system is not enough for the user, maybe the user expected to be able to compare two products and no such feature is available, and so on.
Layout problems (e.g. “The ‘Checkout’ button was not visible and I had trouble finding it”) and wording problems (e.g. “The link ‘Discover more’ was not very self-explanatory. I didn’t understand what I was going to discover after clicking on it.”) might also be written down by experts.
After a number of experts have done the analysis (from our experience, four or more), patterns will begin to emerge. It’s then the facilitator’s job to collect all the insights, rank them by importance and turn them into solutions.
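To make the pattern-spotting step concrete, here is a minimal sketch (in Python, with entirely hypothetical issue labels and expert notes, not data from a real evaluation) of how a facilitator might tally the problems flagged by each expert, so that issues raised independently by several people surface first:

```python
from collections import Counter

# Hypothetical issue tags noted by four experts during an
# Informal Action Analysis of the smartwatch purchase task.
expert_notes = [
    ["filters-too-coarse", "no-product-comparison", "checkout-button-hidden"],
    ["filters-too-coarse", "checkout-button-hidden", "vague-discover-link"],
    ["no-product-comparison", "filters-too-coarse"],
    ["checkout-button-hidden", "filters-too-coarse", "vague-discover-link"],
]

# Count how many experts flagged each issue; problems flagged
# independently by several experts are likely real patterns.
counts = Counter(issue for notes in expert_notes for issue in notes)

for issue, n in counts.most_common():
    print(f"{issue}: flagged by {n} of {len(expert_notes)} experts")
```

In practice the facilitator does this with sticky notes or a spreadsheet rather than code, but the principle is the same: normalize the wording of each observation, count recurrences, and rank.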
Cognitive Walkthroughs are similar to Informal Action Analysis, in that the experts involved are also given specific tasks to accomplish.
In addition to each task, the experts will also be given the actual steps that the designer thought of and used to develop the specific user flows and functionalities. For each given step, they will have to write down answers to the following four questions:

1. Does the user understand that this step is needed to reach their goal?
2. Will the user see the correct action they need to perform to produce the outcome?
3. Will the user recognize that this (not another) is the action they need to take?
4. Will the user understand the feedback?
Let’s see how this would work for our example and assume that the expert is given the same task, along with seven predefined steps to go through.
For each of the seven given steps, the expert will have to answer the four questions mentioned earlier. Here’s an example of how an expert might answer them for Step 2.
Step 2 — Press on the option ‘Categories’ and select ‘Smartwatches’.
Does the user understand that this step is needed to reach their goal? – Not necessarily. My initial reaction was to go to the search option and type ‘smartwatch.’ The ‘Categories’ option is not that visible.
Will the user see the correct action they need to perform to produce the outcome? – In this particular example, ‘Categories’ was not an action I could easily see I was able to take. The button itself was barely visible, although I’m used to its position, so that wasn’t necessarily a problem for me.
Will the user recognize that this (not another) is the action they need to take? – Again, my initial impulse was to find the smartwatch list by using the search option. My answer to this question is ‘no.’
Will the user understand the feedback? – After I clicked on the ‘Categories’ option, it displayed a list of all the product categories on the website, which was the anticipated result. It was then easy to select the ‘Smartwatches’ category.
Again, after a number of experts have done the analysis, patterns will not be difficult to discover. This method is particularly good because the predefined steps and questions give the experts sharper angles of analysis.
A few months back, one of our clients at Grapefruit asked us to perform an evaluation of their loyalty program platform. They wanted to run a series of A/B tests on this website and needed some starting points. Our job was to discover some of the bigger problems users encountered on the platform. Since they were already familiar with our internal evaluation approach, we decided to use it for this project as well.
We kickstarted the project with a workshop involving all the stakeholders. We used the workshop to refresh their memory of how these methods work. We also outlined the most important actions users can take on the platform.
After the initial workshop, we began the actual work. We wrote down the tasks (10 in total), aiming to cover the most important actions on the platform. The tasks are the same for Informal Action Analysis and for Cognitive Walkthroughs (and are complemented by the desired steps for the latter).
Every colleague at our agency is more than welcome to participate as an expert in these sessions, and this project was no exception. We invited eight colleagues from teams like design, development, content and even administration. Some of them were new to these methods and some had already participated in past evaluations. We organized a meeting where we presented the two methods and made sure that each of them knew exactly what to do.
After we settled on the group of experts, we divided them into two teams and split the 10 tasks into two sets. The evaluation would then go like this:
The evaluations were done individually. Since everyone was involved in other projects as well, we gave them three days for the actual evaluations. After each colleague finished their evaluation, we switched the tasks, the methods and the types of devices (desktop/mobile) between the two groups, as follows:
After another three days of evaluations, we gathered the insights and started working on a very detailed report of problems, divided into three clusters: Crucial Problems, Important Problems and Less Important Problems (more than 60 problems were discovered in this particular internal evaluation).
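As an illustration, severity clustering can be reduced to a simple rule of thumb. The sketch below (Python; the problem names, counts and the bucketing rule are all hypothetical examples, not the actual criteria we used on this project) assigns each problem to one of the three clusters based on whether it blocks task completion and how many of the eight experts ran into it:

```python
# Hypothetical records: (problem, experts_affected, blocks_task)
problems = [
    ("checkout-button-hidden", 7, True),
    ("filters-too-coarse", 6, False),
    ("vague-discover-link", 3, False),
]

def severity(experts_affected, blocks_task, total_experts=8):
    # Assumed rule of thumb: anything that blocks the task is crucial;
    # non-blocking problems hit by at least half the experts are
    # important; the rest are less important.
    if blocks_task:
        return "Crucial"
    if experts_affected >= total_experts // 2:
        return "Important"
    return "Less Important"

for name, n, blocks in problems:
    print(f"{name}: {severity(n, blocks)}")
```

Whatever the exact rule, writing it down before triage keeps the ranking consistent across the 60-plus problems and defensible in front of the client.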
For each of the problems, we suggested specific solutions (with sketches where necessary). The report was compiled into a presentation that also included descriptions of the two methods and of how we set up the evaluation sessions, for future reference.
Here are a few tips that we’ve learned after this and other evaluation projects:
We’re constantly surrounded by articles, books and courses about the ‘perfect process.’ A lot of the time, this ‘process’ isn’t possible, and it’s our responsibility to make the most of the minimal tools available. Internal evaluations won’t replace usability tests, but they will help you discover potential problems in a product with far less effort.