Moderated and unmoderated user testing each have pros and cons. By leveraging unmoderated user testing with branching logic, you can get the best of both.
According to Experience Dynamics, 84% of companies plan to increase their focus on measuring and analyzing customer experience, and 73% of those not doing it now plan to start in the next year. Large companies like LinkedIn are very focused on conducting user research in a scalable way. Donna Driscoll, Principal User Researcher at LinkedIn, says, “I create programs that get as many product and engineering partners to participate in research and come along on the journey of understanding who our members and customers are and what they need.”
So let’s take a step back and analyze the types of user testing that are possible.
Three types of user testing
The idea of usability testing and user experience design has been around for a while. In the first century BC, Vitruvius, a Roman engineer and inspiration to people like Leonardo da Vinci, came up with three design principles, one of which was “utilitas,” centered around a design’s usefulness and suitability. (SmashingMag)
But it’s only recently, during the interactive computing age, that UX has taken on a life of its own, distinct from human factors and general design. Designers have become increasingly immersed in usability and user experience.
User testing may be performed on existing products (“My analytics and conversion results tell me there is a problem, but now I need to understand why it is occurring”), on prototypes during an agile UX design sprint, and even on competitor assets or best-practice company assets.
Whatever the motivation and objectives of the usability testing program, the user testing methodology generally falls into one of three buckets:
- Lab-based
- Online moderated user testing
- Online unmoderated user testing
Lab-based usability testing
In lab-based UX testing, a moderator and a participant sit inside a lab with the product. The moderator follows a user testing script, asking the participant to complete a list of tasks on the prototype (whether it’s a physical product or a beta version of a site).
User testing tasks can be of three types: scripted tasks, natural tasks, and decontextualized tasks. Scripted tasks are highly directive, guiding the participant through a specific flow, feature, or journey.
Natural tasks are based on a real-life scenario, with little or no supervision, allowing participants to react as they wish to the scenario and the asset.
Decontextualized tasks separate the activity from the asset being studied, in order to understand the participant’s mental model: how they think and feel about an object or activity.
Regardless of the task type, moderated usability testing offers a great deal of flexibility: the moderator can probe into each participant’s specific actions and answers, and react in a personalized way to what each participant does.
The downside is a total lack of scalability—each participant needs a usability moderator—and because of that, there’s a large cost and timing drawback to this type of user testing. You also have limits on your sample size and quality because of geography (unless you’re willing to pay people to travel).
In addition, there’s a phenomenon called the “observer effect” that may influence results: people inherently act differently when others are around than when they are alone. In other words, the user testing data may not be accurate.
Online moderated usability testing
The second method of user experience testing leverages user testing software that lets participants access a site, mobile app, or prototype and usability test it wherever they are located, from their homes or offices, while interacting with a moderator who follows a user testing script and can observe and direct them.
Doing user testing online and remotely lets people experience a product in their natural environment. It solves the problem of geography and, partially, of observer bias. The drawback: like lab-based usability testing, it is not scalable, so it suffers from the same limitations of cost, time, and logistics.
This is the most important drawback. Online moderated user testing isn’t scalable—for every half hour of a single participant’s time, you need to have half an hour of a UX researcher’s time.
So when a company is trying to run an agile UX design sprint and iteratively come up with new designs to user test, it will run up against the limit of how many UX researchers it has available to moderate at any one time. Additionally, there’s still a risk of an observer effect with this method.
The third methodology: online unmoderated usability testing
The last method is unmoderated user testing, which means that a software platform is used to guide the participant with a predefined user testing script and without the use of a moderator.
This type of user testing is highly scalable, allowing a single UX researcher to set up a user test in minutes and have results from 5, 10, 20, 30 or even hundreds of participants within hours.
In the past, it came with a trade-off: you could not probe testers with different questions or tasks depending on what they said or did.
At Userlytics, we set out to bridge that gap. We designed a scalable qualitative user testing platform that can vary its instructions and questions according to how each participant reacts to the asset being tested, without the need for a moderator. In other words, a user testing platform that is scalable, yet also responsive and interactive.
Unmoderated user testing with branching logic
Conditional user testing logic bridges the gap.
With branching logic user testing features, you can set conditions so that, depending on each participant’s actions and responses, they are sent to a different set of user testing tasks, instructions, and questions.
This enables a highly sophisticated user testing script that can be set up within minutes, run in a scalable fashion, and gather results from hundreds or even thousands of participants within days or even hours.
Did the participant succeed in the task? Send them to the next task. Did they fail? Send them to a slightly different task (or prototype asset) to see if they succeed with that one. This takes the combination of A/B testing and user testing to a whole new level!
Or, ask participants whether they currently use a certain product category. If they do, ask them to try your own product. If they do not, ask them why, and probe whether the category might be suitable for them, and how.
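To make the concept concrete, here is a minimal sketch of how such a branching script could be expressed in code. It is purely illustrative: the task names, the routing function, the outcome type, and the meal-kit scenario are all hypothetical and do not represent the actual Userlytics script format or API.

```typescript
// A minimal sketch of a branching user testing script, assuming a
// hypothetical structure: every task names the next task to show as a
// function of the participant's outcome. Not the real Userlytics API.

type TaskId = string;

// What the platform records for a step: either a pass/fail task result
// or a free-form answer to a question.
type Outcome =
  | { kind: "task"; success: boolean }
  | { kind: "question"; answer: string };

interface Task {
  id: TaskId;
  prompt: string;
  // Branching logic: decide which task comes next (null ends the session).
  next: (outcome: Outcome) => TaskId | null;
}

// Hypothetical script combining both branching patterns described above:
// a screener question that routes participants, and a success/failure
// branch that retries a failed task on an alternative prototype.
const script: Record<TaskId, Task> = {
  screener: {
    id: "screener",
    prompt: "Do you currently use a meal-kit delivery service?",
    next: (o) =>
      o.kind === "question" && o.answer === "yes" ? "tryProductA" : "probeWhyNot",
  },
  tryProductA: {
    id: "tryProductA",
    prompt: "Using prototype A, add a vegetarian plan to your cart.",
    next: (o) => (o.kind === "task" && o.success ? "wrapUp" : "tryProductB"),
  },
  tryProductB: {
    id: "tryProductB",
    // Failure on prototype A retries the same task on prototype B,
    // turning the test into a lightweight A/B comparison.
    prompt: "Using prototype B, add a vegetarian plan to your cart.",
    next: () => "wrapUp",
  },
  probeWhyNot: {
    id: "probeWhyNot",
    prompt: "What keeps you from using services like this today?",
    next: () => "wrapUp",
  },
  wrapUp: {
    id: "wrapUp",
    prompt: "Any final thoughts on what you just saw?",
    next: () => null,
  },
};

// Walk one participant's recorded outcome through the script.
function nextTask(current: TaskId, outcome: Outcome): TaskId | null {
  return script[current].next(outcome);
}

console.log(nextTask("screener", { kind: "question", answer: "yes" })); // tryProductA
console.log(nextTask("tryProductA", { kind: "task", success: false })); // tryProductB
```

The point of the sketch is that each step’s routing is just a function of the participant’s recorded outcome, which is what allows a single predefined script to behave differently for every participant without a live moderator.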
By adding branching logic to the unmoderated user testing equation, the scalability of unmoderated user testing is matched with the personalized set of instructions and questions that moderated usability testing allows for.
So now you can have both!
About the Author: Userlytics
Since 2009 we have been helping enterprises, governmental organizations, non-profits, agencies and startups optimize their user experience, or UX. With our state-of-the-art platform, massive global participant panel and unlimited accounts/seats for democratizing user research, we are the best all-in-one solution for remote user testing.
Schedule a Free Demo