
Online Usability Testing Tools: what to look for?

 By Userlytics
 Jan 12, 2018

Everybody working in the IT industry has seen case study videos in which users talk about glitches in software, options missing from apps, and their ideal features and interfaces.

Running a remote unmoderated usability study seems to be an easy way to uncover those glitches, discover missing options and explore pain points versus nice-to-haves. But is it that easy to get relevant results? Yes and no. No, because the number of potential obstacles in planning a remote user experience study is nearly endless. Yes, because with experience and the right help it is all manageable.

What is the secret knowledge that an experienced usability researcher possesses? Apart from a vision of the research goals and the different ways of reaching them, it comprises a basket of small how-to’s, shortcuts, and knowledge of what can go wrong and ways to avoid mishaps.

Users misunderstanding a task? Tasks that require too much time? Asking the same question twice because a user has already used a button or option? Each of these details seems minor, but they can waste valuable time and resources for both participants and researchers.

Many of these issues and time inefficiencies can be avoided by choosing an appropriate platform for remote unmoderated testing. So the question becomes: what should you look for? Automated surveys and metrics? In-line video annotations? Branching logic?

Selecting the right usability testing tools is critical for capturing comprehensive insights into user behavior, ensuring your usability studies are both effective and efficient.

First of all, you need to think of the results: what do you want to achieve at the end of the study?

Quantitative usability testing data? What kind? Qualitative user experience videos? With or without the participant's webcam view in addition to the screen recording and audio? Both?

In an ideal world, all of this is available in several formats: picture-in-picture videos, hyperlinked annotations, downloadable video and quantitative data, pre-formatted question types such as the System Usability Scale, Single Ease Question and Net Promoter Score, as well as other metrics such as time on task (min, max and average) and success/fail rates.


In terms of video recordings, the most advanced platforms will offer picture-in-picture videos, where you can see your respondents and the screens they are interacting with as they follow a "think-aloud" protocol. That way you can observe whether they are angry, annoyed, tired, or surprised and delighted by beautiful designs, and why.


Usability experts need to see facial expressions for a practical reason: to identify the critical moments, events and comments that may be the most persuasive for their clients, stakeholders and other decision-makers. No chart or data table is as compelling as watching real users experience pain, or moments of delight, while using existing or proposed workflows and features.

One way to bring the most important events and comments from recorded usability test sessions to stakeholders or team members is to share links to video excerpts, which can be done with hyperlinked annotations on a shareable UX research project dashboard.
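As a purely hypothetical illustration of what sits behind such a shareable excerpt, the Python sketch below models an annotation with timestamps and builds a link that jumps to the annotated moment; the fields, the share_url helper and the URL scheme are all invented and do not reflect any specific platform's API.

```python
# Hypothetical model of a video annotation and a shareable excerpt link.
from dataclasses import dataclass

@dataclass
class Annotation:
    session_id: str     # recorded session the note belongs to
    start_seconds: int  # where the excerpt begins in the recording
    end_seconds: int    # where the excerpt ends
    note: str           # the researcher's comment

def share_url(annotation: Annotation, dashboard_base: str) -> str:
    # "?t=" as a start-time parameter is an illustrative convention, not a real endpoint.
    return f"{dashboard_base}/sessions/{annotation.session_id}?t={annotation.start_seconds}"

clip = Annotation("abc123", start_seconds=95, end_seconds=130,
                  note="Participant could not find the checkout button")
print(share_url(clip, "https://dashboard.example.com"))
# -> https://dashboard.example.com/sessions/abc123?t=95
```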


Some platforms enable private labeling and customizing the look of the shared UX research project dashboard.


And some also enable a localized language landing page and user interface for respondents.


In addition to recorded sessions (qualitative usability data), metrics (quantitative user experience data) are hugely important. The first metrics used in any study are success/failure rates and time on task. If possible, the remote user testing platform you use should capture these metrics automatically; they can then be used to benchmark prototypes and/or production assets against best practices or the competition.
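For illustration, here is a minimal Python sketch of how these two baseline metrics could be aggregated from raw task results. The data structure and field names are invented for the example and are not any particular platform's export format.

```python
# Minimal sketch (invented data): aggregating success/failure rate and
# time-on-task statistics for a single task across participants.

task_results = [
    {"participant": "P1", "success": True,  "seconds": 42},
    {"participant": "P2", "success": False, "seconds": 118},
    {"participant": "P3", "success": True,  "seconds": 67},
    {"participant": "P4", "success": True,  "seconds": 51},
]

times = [r["seconds"] for r in task_results]
success_rate = sum(r["success"] for r in task_results) / len(task_results)

print(f"Success rate: {success_rate:.0%}")  # Success rate: 75%
print(f"Time on task: min {min(times)}s, max {max(times)}s, "
      f"avg {sum(times) / len(times):.1f}s")  # min 42s, max 118s, avg 69.5s
```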

There are a number of widely used usability testing metrics, such as the System Usability Scale, Single Ease Question and Net Promoter Score, that also allow you to benchmark the asset you are testing against industry averages. A number of platforms include these question types pre-formatted and automatically calculated, making your life easier as you scale your user research program.
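The arithmetic behind these pre-formatted questions is standard and worth knowing even if the platform calculates it for you. Below is a short Python sketch of the usual SUS and NPS scoring formulas; the response values are invented examples.

```python
# Standard scoring formulas; the sample responses are invented.

def sus_score(responses: list[int]) -> float:
    """System Usability Scale: 10 items rated 1-5.
    Odd-numbered items contribute (value - 1), even-numbered items (5 - value);
    the sum is multiplied by 2.5 to yield a 0-100 score."""
    assert len(responses) == 10
    total = sum((v - 1) if i % 2 == 0 else (5 - v)  # i == 0 is item 1 (odd)
                for i, v in enumerate(responses))
    return total * 2.5

def nps(ratings: list[int]) -> float:
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(r >= 9 for r in ratings)
    detractors = sum(r <= 6 for r in ratings)
    return 100 * (promoters - detractors) / len(ratings)

print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # 85.0
print(nps([10, 9, 8, 6, 10, 7, 3, 9]))            # 25.0
```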


Speaking of time, we need to think not only about the researchers' time, but also about the participants' time and, even more valuable, their attention. As Jakob Nielsen wrote a dozen years ago, if participants stumble upon a huge obstacle on their way to a goal, they won't be able to focus on other usability bloopers. Once we find something that needs to be changed, there is no need to "validate" that issue over and over again, or to ask the same question twice. Branching logic of the form "if you answered X to this question, skip to question Z" is available on the most advanced user testing platforms. It lets you build sophisticated test scripts, dive into the reasons for a failure only with the participants who failed, or give those who failed an easier follow-up task to restore their confidence for the remaining UX tasks.
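As a rough sketch of how such a branching rule can be expressed, here is a small Python example. The script structure, question IDs and next_question helper are hypothetical; real platforms expose this kind of logic through their own test-script editors.

```python
# Hypothetical test script: "if you answered X to this question, skip to question Z".

script = {
    "Q1": {"prompt": "Were you able to complete the checkout task?",
           "branches": {"no": "Q2"},   # only failures see the follow-up
           "default": "Q3"},           # successes skip ahead
    "Q2": {"prompt": "What stopped you from completing the checkout?",
           "default": "Q3"},
    "Q3": {"prompt": "How easy was the task overall (1-7)?",
           "default": None},           # end of script
}

def next_question(current_id, answer):
    step = script[current_id]
    return step.get("branches", {}).get(answer, step["default"])

print(next_question("Q1", "no"))   # Q2 -> dive into the reasons for the failure
print(next_question("Q1", "yes"))  # Q3 -> skip straight to the rating question
```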


Participants experimenting with various features while looking for something else is in the nature of UX studies. When conducting a study in a lab, it is easy to notice that a user has already done the task he or she is approaching, and to change the test plan in real time: "You have already read/watched/played that, now just share your feedback."

In an unmoderated remote study there is no way to watch the video while the session is in progress and change the instructions for the participant, which is another area where branching logic comes in handy: if you suspect that users may already have done something related to a forthcoming task, ask them openly: "Did you have a chance to read/watch/do this?" Chances are several users did, and they can be directed not to perform the task again, which also leaves them more time and attention for other areas. That is another way branching logic in usability task flows can be leveraged.

I hope you find these tips and tricks useful, and that they help you avoid the fate of the shoemaker's children who, as the saying goes, go barefoot… ;-)
