Usability
Attribute

The product is easy and efficient to use, with a clear, intuitive interface that requires minimal effort to navigate.
Author: Jay Thomas
Creator of the framework “Think Like the User”

Introduction

In the Usability validation stage, we assess how intuitive and user-friendly the product is for the end user. This phase focuses on how easily users complete key tasks in the interface, on reducing cognitive friction, and on ensuring smooth, comfortable interactions.
The main tool for validating Usability is usability testing, which reveals how real users interact with the product in practice. This method uncovers potential barriers, difficulties, and unexpected behaviors—highlighting areas where the interface can be optimized for easier onboarding and faster task completion.

Process Structure

To properly evaluate this attribute—i.e., to test how easy and intuitive the product is to use—you should follow this sequence of steps:
Analyze the product features to be tested
Formulate test hypotheses
Write usability test questions and tasks
Prepare the test script
Recruit participants for testing
Run a pilot session and launch testing
Analyze the results

When to Work on the Attribute?

The Usability attribute is assessed last, only after the Appearance attribute has been addressed.
Once your idea prototypes are tested and you've confirmed that the user’s task is being fully solved—compared to alternatives or competitors—you move on to final UI design. These final mockups are then used for usability testing.
Let’s break down the entire process in detail.


Analyzing Product Features

At this stage, I break down the product into distinct usage scenarios and describe every interface element a user interacts with while completing their task. The breakdown follows this hierarchy:
Task
The overarching goal the user wants to achieve with the product.
Usage Scenario
A logically separate process the user follows to accomplish a specific outcome within the task.
Functional Stage
A specific feature or subtask—a sequence of actions that forms part of the scenario.
Step
A single user action within that stage.
Interface Element
The actual UI component the user interacts with (e.g., buttons, dropdowns, input fields).
Example: If you're testing a property listing card in a real estate app, there’s a stage where the user evaluates the apartment's appearance through photos.
You document this, including every UI element the user interacts with, in a table. For the photo-evaluation stage above, it might look like this:
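Task | Usage Scenario | Functional Stage | Step | Interface Element
Buy an apartment | Review a property listing | Evaluate the apartment's appearance through photos | Open the photo gallery | Photo thumbnail
Buy an apartment | Review a property listing | Evaluate the apartment's appearance through photos | Browse the photos | Gallery arrows / swipe area
Buy an apartment | Review a property listing | Evaluate the apartment's appearance through photos | Zoom in on a detail | Full-screen photo viewer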
Breaking a complex product into smaller, structured pieces allows you to write much clearer, more focused usability test tasks later on.

Formulating Test Hypotheses

A hypothesis in usability testing is a reasoned assumption about how a user will interact with specific interface elements to complete a task.
Key rules for writing strong hypotheses:
It must be verifiable—you can confirm or disprove it.
It should be clear and unambiguous—no vague or overly complex phrasing.
It should describe specific user actions, not subjective feelings (e.g., avoid words like “likes” or “finds convenient”).
Avoid negations or negative phrasing such as “won’t be able to,” “won’t understand,” etc.
Bad Example
The user won’t try to find the report manually.
Good Example
The user will use the search bar to find the report.

Evaluating the Impact of Usability Issues

After identifying potential issues, you need to assess how much impact each one may have on both the user experience and business outcomes.
This helps you prioritize test areas—put the most critical issues at the beginning of the session and save lower-priority ones for later. If time runs short, you can skip or postpone those lower-priority areas.
To assess severity, I use a 1 to 5 scale for both user and business impact:
1 = Minor inconvenience, no major effect on experience
5 = Major blocker or breakdown in core functionality, with a high cost to users or business
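For example (illustrative numbers): a mislabeled filter icon might score 2 for user impact and 1 for business impact, for a combined severity of 3, while a payment step that fails silently scores 5 on both counts, for a combined 10 and top priority in the session.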

Writing Questions and Tasks for Usability Testing

Creating the Main Task

Usability tasks are designed to observe real user interaction and uncover actual problems. Each task should be phrased to encourage natural behavior and reflect a real-world use case.
I combine two methods: observation and think-aloud. The participant is given a task and asked to voice their thought process. I observe whether they encounter any of the issues we identified earlier. If they do, I follow up with clarifying questions to uncover the root cause.
Main task formula: Ask the participant to complete a specific functional stage in a specific usage scenario.
Task Template
You need to [describe the goal or step]. What will you do?
Examples:
You need to assess whether the apartment suits you based on the photos. What will you do?
You need to evaluate the apartment layout. What will you do?
You want to apply for a mortgage on this apartment. What will you do?

Writing Follow-Up Questions

Step 1: Explore the Problem

If a user encounters a problem, assess its criticality and impact on task completion.
Questions to ask:
Why is this a problem for you?
How do you handle this outside of the app?

Step 2: Interface Use & Interaction

Observe if the user interacts with the interface element as expected. If not, ask:
Did you notice this element?
What do you think it’s for?

Step 3: Check Element Understanding

Ask if the user understands the purpose of each key UI element.
If not, the issue lies in poor affordance or visibility.

Step 4: Clarify Expectations

Check if the user’s expectations match the actual behavior of the element.
If not, check how often the same mismatch shows up across participants.

Creating the SEQ Question

To measure perceived task difficulty, I use the Single Ease Question (SEQ) with a 7-point scale, asking it after each task. The key is the follow-up:
“What else felt uncomfortable?”
This often uncovers new problems even if the task was rated as easy.
Single Ease Question (SEQ)
How easy was it to [complete this step or task]?
What else felt uncomfortable or confusing during the task?
Examples:
How easy was it to explore the building features?
How easy was it to understand the apartment layout?
How easy was it to review the purchasing terms?
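Since SEQ yields a number for each task, the ratings can be aggregated across participants to spot the hardest tasks. A minimal sketch of that tally, with hypothetical ratings (published benchmarks put the average SEQ at roughly 5.5 on the 7-point scale):

from statistics import mean

# Hypothetical SEQ answers (1-7) collected after each task.
seq_ratings = {
    "Explore the building features": [6, 5, 7, 4, 6],
    "Understand the apartment layout": [3, 4, 2, 5, 3],
}

for task, ratings in seq_ratings.items():
    # A low mean flags a task worth digging into during analysis.
    print(f"{task}: mean SEQ = {mean(ratings):.1f} (n={len(ratings)})")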

Preparing the Usability Test Script

The test script consists of three key parts:
Introduction
A brief welcome, explanation of the session’s purpose, format, and expected duration.
Main Section
Detailed walkthrough of the test scenarios, tasks, and follow-up questions, including the SEQ rating to gather additional insights.
Closing
Thank the participant, explain what happens next, and invite them to participate in future sessions.

Recruiting Participants for Usability Testing

Participant Sourcing Methods

Existing Users
If the product is already live—or you're working within a larger company launching a new product—you can recruit current users.
Send them an email invitation offering to join a research session.
Friends & Personal Network
This is common when working on early-stage products. Use social media to recruit through:
Posts or stories announcing your search
Relevant group chats or channels
Direct Outreach to Strangers
Reach out to people who comment on relevant social media posts
Approach potential users in their natural environment: gyms, schools, expos, forums, conferences
Online Listings
Post requests on classified ad boards (relevant to your audience or region).
AI can assist with recruitment strategy.
Use ChatGPT or DeepSeek to brainstorm potential channels. Here's a sample prompt to generate ideas:
I’m developing [your product, e.g., a nutrition tracking app for people gaining muscle mass] for users in [target geography, e.g., Australia]. I need to conduct usability testing with potential users. Give me a list of all possible recruitment channels, including specific examples, sources, and links.

Screening Participants

When recruiting, focus on behavioral experience—not demographic traits.
Questions about age, income, or marital status—and persona-based recruitment—are useless for usability testing.
Instead, ask whether participants have relevant experience with the task or scenario you're testing. Examples:
Testing a secondary housing purchase flow? You need users who’ve recently searched for or bought a home on the secondary market.
Testing a mortgage application flow for IT professionals? Recruit developers who've actually applied for that type of mortgage.

How Many Users You Need

In most cases, 5 participants are enough.
According to Jakob Nielsen, a leading usability expert, testing with 5 users can uncover around 85% of usability issues.
Beyond that, insight returns diminish—each additional user is less likely to reveal new problems.
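The math behind this is Nielsen and Landauer's model: the share of problems found with n participants is approximately 1 - (1 - L)^n, where L is the proportion of issues a single participant uncovers (around 31% in their data). With five participants that gives 1 - 0.69^5 ≈ 0.84, which is where the 85% figure comes from.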

Running a Pilot Test and Launching the Main Study

To reduce stress and uncertainty before running the main sessions, it’s good practice to start with a pilot test.
Invite one participant with relevant experience—ideally a friend or someone you know—and run the full testing flow as you would with actual users.
This helps identify any weak points in your script, timing issues, or unclear tasks before launching the main round.

Analyzing Usability Test Results

The analysis process follows this structure:
Review the recordings (video/audio of each session)
Document all identified issues
Rate each issue by its impact on the user
Rate each issue by its impact on the business
Calculate the combined severity score (user + business impact)
Record how frequently each issue occurred
Decide how and how quickly to respond to each issue
The result is a prioritized list of usability problems ready for the team to work on.
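As a sketch of what that scoring can look like in practice, assuming the combined severity is simply the sum of the two 1-to-5 ratings described earlier (the issue data below is hypothetical):

from dataclasses import dataclass

@dataclass
class Issue:
    description: str
    user_impact: int      # 1-5 rating from the session notes
    business_impact: int  # 1-5 rating
    frequency: int        # number of sessions where the issue occurred

    @property
    def severity(self) -> int:
        # Combined severity score: user impact + business impact.
        return self.user_impact + self.business_impact

issues = [
    Issue("Mortgage button not noticed on the listing card", 4, 5, 4),
    Issue("Photo gallery zoom gesture unclear", 2, 1, 3),
    Issue("Layout plan opens too small to read", 3, 2, 5),
]

# Rank by combined severity first, then by how often the issue occurred.
for issue in sorted(issues, key=lambda i: (i.severity, i.frequency), reverse=True):
    print(f"[severity {issue.severity}, seen in {issue.frequency} sessions] {issue.description}")

The output is the same prioritized list described above, with the most damaging and most frequent issues at the top.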
Optionally, you can run a second round of testing to verify that the most critical issues were fixed.
More than two cycles are rarely worth it—insight returns usually plateau.

Jay Thomas

A UX strategist with a decade of experience in building and leading UX research and design teams. He specializes in implementing Jobs to be Done (JTBD) methodologies and designing both complex B2B admin panels and high-traffic consumer-facing features used by millions.
Previously, he led UX development at DomClick, where he scaled the UX research team and built a company-wide design system. He is a guest lecturer at HSE and Bang Bang Education and has studied JTBD at Harvard Business School.
Jay has worked with ONY, QIWI, Sber, CIAN, Megafon, Shell, MTS, Adidas, and other industry leaders, helping them create data-driven, user-centered experiences that drive engagement and business growth.