Determining the number of participants for a UX research study has been an ongoing debate for years, particularly when it comes to qualitative studies. The Nielsen Norman Group’s famous 5-user rule has become a go-to guideline for many researchers, but is it sufficient?
The reality is that the answer is highly subjective and depends on several factors: the type of study, the researcher’s approach and beliefs about what will produce the best results, and the practical process of running a study until reaching saturation and confidence.
Today’s UX research goals are broader and more complex, which makes the popular 5-user rule somewhat outdated. In this article, we’ll explain why we believe studies should be conducted with at least 10 participants to establish a reliable baseline of actionable insights.
Origins of the 5-user Rule
In 1993, Jakob Nielsen and Tom Landauer published research that significantly shaped how companies approached usability testing. Their mathematical model demonstrated that the first few test participants reveal the majority of usability issues, with diminishing returns after five users. According to their findings, testing five users would identify roughly 85% of usability problems.
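The model behind this claim is a simple cumulative-discovery curve: if each user independently exposes a given problem with probability L, then n users are expected to find a share of 1 − (1 − L)ⁿ of all problems. A minimal sketch, using the average L ≈ 0.31 that Nielsen and Landauer reported (the value varies from project to project):

```python
# Nielsen & Landauer's cumulative-discovery model:
#   found(n) = 1 - (1 - L)**n
# L is the probability that a single user exposes a given problem;
# L = 0.31 is the average they reported, not a universal constant.

def share_of_problems_found(n_users: int, L: float = 0.31) -> float:
    """Expected share of usability problems uncovered by n users."""
    return 1 - (1 - L) ** n_users

for n in (1, 3, 5, 10, 15):
    print(f"{n:2d} users -> {share_of_problems_found(n):.0%}")
```

With L = 0.31, five users land at roughly 84–85%, which is where the famous figure comes from. Note how sensitive the curve is to L: for rarer problems (smaller L), five users find far less.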
This ‘discovery’ made user testing accessible to companies with limited budgets. Before this research, comprehensive user testing was usually done only by large corporations that could afford it.
It’s important to note that Nielsen’s 5-user rule really only applies when you’re dealing with a uniform user group. In fact, Nielsen himself notes that additional users need to be tested when the audience comprises highly distinct user groups. In other words, if your goal is to catch obvious interface problems in a controlled setting with similar users, five participants could potentially reveal most issues.
Why the 5-User Rule No Longer Fits Modern UX Research
The 5-user rule has been quoted for years as if it were a universal truth. But in practice, limiting yourself to five participants leaves major gaps. Our research team has seen firsthand why modern usability studies require a broader lens.
Here are four reasons why the 5-user rule is not enough:
1. Diverse Audiences Need Broader Coverage
Digital products rarely serve a single, uniform user group. They’re used by people with different backgrounds, goals, and contexts. Testing with only five people risks overweighting individual quirks and underrepresenting entire segments.
“With just five users, especially in a diverse audience, results can feel biased. At 10 or 15, you can make an educated generalization that users felt in a particular way, not just one.” – Ananya, Userlytics UX Researcher
2. Not Every Participant Adds Equal Value
In a perfect world, every participant is thoughtful, expressive, and perfectly aligned with the test goals. In reality, some sessions fall flat: participants may be quiet, distracted, or simply less engaged. If you only recruit five, you risk ending up with a thin dataset. By recruiting 10, you can count on at least 7–8 strong, insight-rich sessions.
“The 5-user rule assumes perfect participants and that’s not reality. You always get a few interviews that aren’t very rich in feedback, so we recruit 10 to end up with 7–8 reliable interviews.” – Tiago, Userlytics UX Researcher
3. Complex UX Problems Surface Randomly
Not all usability issues appear consistently. Some are rare but critical and are triggered by specific behaviors or a combination of actions. These edge cases only emerge when more users are tested.
In one SaaS study, our team ran 24 unmoderated sessions across several markets to evaluate exit popups. While the feature was designed to appear only when users tried to close a tab, we discovered it was also triggering during normal browsing, like scrolling. This inconsistency only affected a few participants, but it created significant frustration. With fewer participants, this critical issue would likely have gone undetected.
4. Modern Research Goes Beyond Bug Hunting
The 5-user rule was based on studies focused on catching interface bugs. Today, usability research often digs deeper: into motivations, behaviors, and emotions. Capturing the why behind actions requires more perspectives. Five participants may surface problems, but they won’t provide enough variation to explain the reasoning behind them.
In short, five users might give you an early snapshot, but they don’t give you the confidence or coverage needed for business decisions.
At Userlytics, we recommend starting with 10+ participants for moderated and unmoderated studies to balance efficiency with reliability. This way, you capture both common issues and the subtle insights that drive better design.
What Research Shows Today: Faulkner’s Study
In her landmark study, Beyond the 5-user assumption: Benefits of increased sample sizes in usability testing, Laura Faulkner tested groups of 60 users and analyzed how many problems smaller subsets would uncover. What she found was striking: five users might uncover as many as 95% of issues, but in other cases as few as 55%. The variability was simply too high for reliable decision-making.
Instead, Faulkner found that with 10 users, the lowest percentage of problems identified by any single set of participants was 80%, and with 20 users, 95%.
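Faulkner’s method, drawing many random subsets from a larger pool and tracking the worst-performing subset, is easy to illustrate. The sketch below is a Monte Carlo toy model, not her actual data: the problem count and per-problem discovery probabilities are invented for illustration, but it shows the same effect, namely that the worst case improves sharply as the sample grows.

```python
import random

random.seed(7)

N_USERS = 60     # pool size, as in Faulkner's study
N_PROBLEMS = 20  # hypothetical number of seeded problems (assumption)
N_DRAWS = 100    # random subsets drawn per sample size

# Hypothetical per-problem discovery probabilities: a mix of
# common issues and rare edge cases (invented for illustration).
hit_prob = [random.uniform(0.05, 0.6) for _ in range(N_PROBLEMS)]

# The set of problems each simulated user would surface in a session.
users = [
    {p for p in range(N_PROBLEMS) if random.random() < hit_prob[p]}
    for _ in range(N_USERS)
]
all_found = set().union(*users)  # everything the full pool uncovers

def worst_case_share(sample_size: int) -> float:
    """Lowest share of discoverable problems found across N_DRAWS subsets."""
    worst = 1.0
    for _ in range(N_DRAWS):
        subset = random.sample(users, sample_size)
        found = set().union(*subset)
        worst = min(worst, len(found) / len(all_found))
    return worst

for size in (5, 10, 20):
    print(f"{size:2d} users -> worst subset finds {worst_case_share(size):.0%}")
```

The key insight is that averages hide risk: a 5-user subset can get lucky or unlucky, and it’s the unlucky draws that sink product decisions.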
Her study reinforces what our researchers see in practice: recruiting 10 participants as a minimum for moderated and unmoderated studies dramatically reduces the risk of missing usability problems and provides reliable conclusions that inform design decisions.
Two Key Principles Researchers Consider when Determining Sample Sizes
Beyond Faulkner’s study, UX researchers point to two concepts that help explain why more than five users are usually required:
- Saturation: Insights plateau after a certain point, but typically not as early as the fifth participant. The deeper patterns become consistent around participants 8–10, giving you the reliable insights you need.
- Confidence: With larger samples, variance decreases, which makes findings easier to defend with stakeholders. When you present your research, you want data that holds up under scrutiny and drives real decisions.
It’s important to keep in mind that sample sizes are never absolute. Every study is a balance of time, cost, and the level of certainty a team requires. However, when product teams want to know how widespread an issue is or how many users it impacts, those questions typically require larger samples than typical 5-user tests can provide.
Our Recommended Participant Ranges for UX Research
At Userlytics, we’ve learned that 10 participants make a solid baseline for qualitative research, but every study has its own personality. Different methods and objectives call for different sample sizes, which is why we work with flexible ranges that blend industry research with our own experience from thousands of projects.
Unmoderated Testing
When there’s no researcher in the room to ask follow-up questions or clarify confusing moments, you need more voices to paint a complete picture. Participants might misunderstand instructions or skip over details that would be gold in a live conversation.
We recommend 10–25 participants (5 per profile at a minimum). You can spot major friction points with smaller groups, but the real story usually emerges when you hit 15–25 participants.
“With unmoderated testing, more is definitely better until you hit about 25. After that, you’re just hearing the same issues over and over.” – Ananya, Userlytics UX Researcher
Moderated Testing
Live sessions give you superpowers. You can dig deeper when someone hesitates, ask “why” when they make unexpected choices, and catch those subtle moments that reveal how people really think. This means smaller groups can pack a serious punch.
We recommend between 8–20 participants, depending on how complex your research questions are.
“Ten is my sweet spot. You usually start seeing patterns around participant six or seven, but I always push to 10. That gives me 7–8 really solid sessions I can stake my reputation on.” – Tiago, Userlytics UX Researcher
Quantitative Studies
When you need numbers that represent real populations, the math gets serious. These studies are all about statistical confidence, which means you need enough people to make your findings credible beyond your test group.
We recommend 100+ participants as a baseline, with larger samples (200 or more) providing tighter confidence intervals.
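These baselines follow from the standard margin-of-error formula for a proportion, e = z·√(p(1−p)/n), under the normal approximation. A quick sketch using the conservative worst case p = 0.5 and a 95% confidence level:

```python
from math import sqrt

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Margin of error for an observed proportion p with n participants.

    p = 0.5 is the conservative worst case; z = 1.96 corresponds to
    a 95% confidence level (normal approximation).
    """
    return z * sqrt(p * (1 - p) / n)

for n in (50, 100, 200, 400):
    # e.g. n=100 gives roughly ±9.8%
    print(f"n={n:3d} -> ±{margin_of_error(n):.1%}")
```

At n = 50 the margin is roughly ±14 percentage points, at n = 100 about ±10, and at n = 200 about ±7, which is why doubling the sample tightens conclusions noticeably but with diminishing returns (the margin shrinks with √n).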
“For quantitative work, bigger is always better for the statistics. Fifty gives you a decent foundation, but 200 is where you really start feeling confident about your conclusions.” – Ananya, Userlytics UX Researcher
Tree Testing
Tree testing helps evaluate navigation structures and requires larger samples to uncover reliable patterns. We recommend 8–20 participants, depending on the size and complexity of the tree.
Card Sorting
Card sorting explores how people group and label information. Because categorization varies widely across individuals, larger samples are essential to capture consistent patterns. We recommend 15–30 participants.
Niche or Specialized Audiences
Some projects involve hard-to-reach profiles, such as highly specialized professionals. When audiences are this specialized, we take a best-effort approach and collaborate closely with clients to determine when the sample size will provide meaningful insights, when to close recruitment, or whether to pursue additional recruitment channels. Maybe that’s eight medical specialists instead of 20, or 12 enterprise software administrators instead of 50. Sometimes we’ll pause recruitment early; other times we’ll tap into specialized third-party networks to hit our targets.
Our ranges aren’t arbitrary numbers. The idea is to make sure that every study is both realistic to execute and rigorous enough to bet product decisions on. Whether it’s 10 people in a moderated session or 200 in a quantitative survey, we’re always working toward the same goal: insights that are reliable enough for teams to act on with confidence.
Note that the recommended ranges above are per profile type.
Practical Guidance for UX Teams
Budgets, timelines, and internal expectations shape every study. The challenge for research teams is to balance rigor with pragmatism, making sure the results are both actionable and credible.
When budgets are tight, the best move is to narrow the scope instead of cutting participant numbers. For example, focusing on a homepage hero banner or checkout CTA will provide more reliable findings than spreading a handful of sessions across an entire user flow.
When there’s pressure to reduce participants, hold the line on quality. Ten is a strong baseline because it ensures patterns are real and not just individual quirks. In rare cases, compromises can work—like running 8 shorter sessions—but only when the research goals are sharply defined.
When stakeholders ask for very large numbers, it often makes sense to combine methods. A small set of unmoderated or moderated interviews can surface the “why,” while a quantitative survey provides the scale to show how widespread an issue is. This mixed approach offers both depth and confidence without exhausting budgets or timelines.
The takeaway for research teams is that sample sizes are never just numbers. They are strategic decisions about where to invest limited resources to get the most credible insights.
Rethinking the 5-User Rule
The 5-user rule shaped an important chapter in usability testing history, but today’s products, audiences, and research goals demand more. At Userlytics, we’ve seen time and again that relying on only five participants leaves too many blind spots.
A baseline of 10 participants gives researchers the confidence of 7–8 strong, insight-rich sessions they can rely on. From there, the right sample size depends on the method, the complexity of the audience, and the questions teams need answered. The key is matching sample sizes to research goals so teams can act on findings without hesitation.
There’s no magic number that fits every study. But there is a clear principle: the more reliable your research inputs, the more confident your business decisions. That’s why our recommended ranges are designed to balance efficiency with rigor, so every customer walks away with insights they can trust and can use to better serve their users.