Reliability
Attribute

The product is reliable and fully solves the user’s task from start to finish without friction
Author: Jay Thomas
Creator of the framework “Think Like the User”

Introduction

The Reliability attribute in the user-centered framework “Think Like the User” means that a product or feature doesn’t just formally exist (Functionality), but actually completes the user's task from start to finish. In other words, the product is “reliable” in fulfilling the entire Job Story—it addresses the full need, not just isolated steps.
For example, if a user wants to prevent food from burning while cooking, simply having a timer is functionality. But automatically turning off the stove when the timer ends is a reliable solution that completes the entire job. The methodology below focuses specifically on validating whether a solution fully covers the Job Story. It does not include usability testing—that's covered separately under the Usability attribute.

Process Structure

To properly develop the Reliability attribute, follow a series of structured steps. Overall, the validation process includes preparing and conducting Solution Interviews with users, followed by analyzing the results. Key stages:
Prioritizing Desired Outcomes and Scenarios
Decide which user needs (Desired Outcomes) and usage scenarios to test first.
Preparing the Opportunity Score Table
Create a tool for recording user ratings and calculating importance/satisfaction metrics.
Preparing the Prototype
Develop a prototype or concept that covers key steps in the Job Story and Desired Outcomes.
Preparing Interview Questions and Tasks
Create an interview script: which scenarios the user will perform and which questions you'll ask (regarding importance and satisfaction).
Conducting User Interviews
Run a series of Solution Interviews (usually 5–8) using the prepared script.
Analyzing Results
Compile the scores, calculate the Opportunity Score, interpret the data, and determine whether the solution is reliable.
Each step will be examined in detail below, with a focus on practical execution and success criteria.

When to Work on the Attribute?

The Reliability attribute should be evaluated after the Functionality attribute has been covered. This means that user needs must already be explored, hidden opportunities and explicit demands identified, and JTBD statements formulated—Key Job Story, Job Stories, and Desired Outcomes.
If any gaps emerge during the Reliability phase, they must be addressed before moving on (e.g., to final UI design or usability testing).
Let’s break down the entire process in detail.

Prioritizing Scenarios and Desired Outcomes for Validation

Before conducting a Solution Interview, it’s crucial to determine which Desired Outcomes and scenarios are most critical to validate first. You can’t realistically test everything—so focus on what matters most.
Evaluate the importance of Desired Outcomes
Use insights from previous research (especially from the Functionality stage) to identify which Desired Outcomes are most important to users. Assign them priority—using a 1–10 scale or labels like Low/Medium/High. Focus on the most critical needs.
Assess current satisfaction
If you already know how well users' current solutions address these outcomes (from prior studies or assumptions), note that. A combination of high importance + low satisfaction highlights a vulnerability—your solution must prove its reliability here. Mark these areas in the Opportunity Score table (we’ll cover that next).
Identify critical sub-steps of the Job Story
Every user task includes key moments where failure undermines trust in the whole solution.
For example, in an app for athletes tracking nutrition for muscle gain, entering initial data and receiving personalized recommendations are make-or-break steps. If there’s a glitch here, trust is lost.
Identify those critical sub-steps and focus your interview scenarios and Desired Outcomes around them.
Keep scope realistic
In one interview, you can realistically cover about 5–7 sub-steps with 5–10 Desired Outcomes per step (including follow-up discussion). So limit your focus to only the most essential Desired Outcomes—those without which the product cannot succeed.
Less critical or already well-addressed ones can wait.
As a result, you’ll have a shortlist of key scenarios (situations) to explore—each tied to a specific sub-step—and a set of high-priority Desired Outcomes to validate.
Next: we’ll prepare the tools needed for this validation.

Preparing the Opportunity Score Table

To collect and analyze data from the Solution Interviews, we use a structured scoring table—the Opportunity Score Table. It helps organize user responses and see how well your concept improves each Desired Outcome.

Table Structure

The table includes:
A list of Desired Outcomes (rows)
Separate sheets/pages for each participant (usually 5–8)
Automatically calculated metrics
For each Desired Outcome, you’ll capture:
Importance—How important this outcome is to the user (rated before showing the prototype, on a scale from 1 to 10; 10 = extremely important)
Satisfaction Before—How well the user’s current solution satisfies the outcome (before seeing your prototype)
Satisfaction After—How well your proposed solution addresses the outcome (after prototype interaction)
Opportunity Score—A calculated metric showing how poorly the outcome is currently satisfied. The higher the OS, the more potential for improvement (calculated automatically using Importance and Satisfaction delta)
Satisfaction Difference (Δ Satisfaction)—Difference between Satisfaction After and Before (shows whether perception improved due to your concept)
Summary—Text summary of change (e.g., “Improved,” “No Change,” “Worsened”)
Priority—Final action priority: High / Medium / Low / Not Applicable. Based on importance, satisfaction, and whether the user is familiar with the outcome. Use “Not applicable” if the user has no experience with that outcome.
Once filled out across interviews, this table gives you a clear view of how each outcome performs across your sample—where satisfaction improved, and where your solution still falls short.
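For teams that build this table in a spreadsheet or a small script, here is a minimal sketch (Python) of the per-outcome calculations. It assumes the classic ODI-style formula OS = Importance + max(Importance − Satisfaction After, 0), which matches the thresholds used later in this guide; if your table template defines the Opportunity Score differently, adjust the function accordingly.

def outcome_metrics(importance, satisfaction_before, satisfaction_after):
    # All inputs are 1-10 ratings from a single participant.
    # Opportunity Score: how important the outcome is and how unmet it remains
    # after trying the prototype (assumed formula, see the note above).
    opportunity = importance + max(importance - satisfaction_after, 0)
    delta = satisfaction_after - satisfaction_before
    if delta > 0:
        summary = "Improved"
    elif delta < 0:
        summary = "Worsened"
    else:
        summary = "No Change"
    return {"Opportunity Score": opportunity, "Δ Satisfaction": delta, "Summary": summary}

# Example: very important outcome, weak current solution, strong prototype result
print(outcome_metrics(importance=9, satisfaction_before=3, satisfaction_after=8))
# {'Opportunity Score': 10, 'Δ Satisfaction': 5, 'Summary': 'Improved'}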

Preparing the Table Before Interviews

Beforehand, pre-fill the table with all selected Desired Outcomes (each as a separate row). Leave space for the Importance and both Satisfaction ratings. Optionally add a Comments column to capture key quotes or explanations—these will help in your later analysis.

Preparing the Prototype for Testing

The prototype (or solution concept) is what you’ll show users during the Solution Interview to gather their feedback. It’s critical that the prototype covers the key Desired Outcomes identified during prioritization. The user must be able to try—or at least imagine—how the solution works in areas that matter most to them.
Your prototype can be:
An interactive prototype (easiest for users to understand)
A clickable design mockup
A scenario description or slide-based demo (if visual prototyping is difficult)
The key requirement: the participant must clearly understand the idea behind the solution and how it addresses their Job Story. If an important outcome is missing from the prototype, the interview will surface this—users may express concern or suggest something is lacking. That’s a red flag indicating a gap in Reliability.
During the interview, you’ll present the prototype step by step, in the context of specific scenarios (sub-steps of the job). Before the interview, make sure you have a screen or description prepared for each scenario. For example, if you’re testing the scenario “Measure current body fat percentage” in a fitness app, the prototype must allow the user to input data and see the calculated body fat percentage. Otherwise, the user can’t assess the reliability of that step.
Finally, test the prototype yourself or with a colleague (run a mock interview) to ensure it works smoothly. During the actual interview, nothing should distract the participant (e.g., bugs, confusing layouts). Remember: the goal is to evaluate trust in the concept, not the user’s ability to handle an unfinished prototype.

Designing the Interview Scenario and Questions

Once the prototype is ready, it’s time to plan the Solution Interview scenario—how the session will flow, and what questions or tasks the user will go through. This type of interview is a hybrid between a user interview and a usability test, but it’s structured specifically around testing selected use scenarios.

General Structure of a Solution Interview

For each prioritized scenario (sub-step), follow the same sequence: introduce the scenario, ask how important the outcome is, ask how satisfied the user is with their current approach, let them try your solution, ask about satisfaction again, and discuss their reasoning. Repeat the sequence for each important scenario (the full template is below).
Example: In a nutrition-tracking app for athletes gaining muscle mass, one scenario might be “Determining the current phase (bulking or cutting)”. First, the user rates how important it is to get that phase right, and how satisfied they are with their current method. Then, you let them try your solution (e.g. a feature that recommends a phase), and finally, they rate how well your solution performs.

Interview Structure Template

Repeat this sequence for each key sub-step:
Scenario Introduction
The moderator sets the scene: “Now imagine a situation where you... [describe the scenario and context].” This helps immerse the participant in the relevant use case.
Importance
Ask: “How important is it for you to... [Desired Outcome for this step]?”
The user gives a 1–10 rating.
Example: The outcome “Minimize the risk of incorrect BMR calculation” becomes “How important is it for you to avoid errors in BMR calculation?”
Or: “Minimize time spent calculating BMR” becomes “How important is it for you to calculate BMR quickly?”
Satisfaction (Before)
Ask: “How satisfied are you currently with how you achieve this?”
Example: “How satisfied are you with your current method for calculating BMR?”
User rates from 1 to 10 (before seeing your prototype).
Scenario Execution
Give an instruction or task: “Please try to... / Let’s look at... [user is asked to perform or review something in the prototype].”
Either let them complete the sub-task in the prototype or show a static screen or concept if interactivity isn’t available.
The goal is to simulate real usage without helping or guiding them.
Example: For the sub-step “Measure current body fat %”, the task may involve inputting data and viewing results. If the prototype isn’t functional, show a mockup or description of how it works.
Satisfaction (After)
Ask: “Now that you’ve tried the solution, how satisfied are you with how it supports [sub-step]?”
The user gives a 1–10 rating again.
Example questions:
“How satisfied are you with the BMR calculation through our app?”
“Was the calculation speed suitable for you?”
“Do you feel the app suggested an accurate method for estimating body fat %?”
Follow-up Questions
Ask: “Why did you rate it this way? What did you particularly like or dislike?”
Also explore trust: “Do you feel the solution reliably solves your problem? What would you do if... [negative scenario]?”
These questions help uncover signals of trust or doubt and provide evidence to evaluate Reliability.
Repeat this process for each key scenario. At the end of the interview, ask a wrap-up question like:
“Overall, how would you describe the reliability of this solution? Do you feel your task is fully solved?”
Then thank the participant and end the session.

Note

If a participant has no experience with a given sub-step (e.g. they’ve never done something you’re testing), just skip it. Leave the cells blank in your table. The Priority for that Outcome should be marked as “Not applicable.” These gaps are valid and factored into the analysis.

Interview Script Structure

A Solution Interview script includes four key parts (similar to a classic in-depth interview):
Introduction
Screening
Main Session
Closing

Introduction

Before starting, establish trust with the respondent. You're a stranger, and most participants have never taken part in such research. It's essential to explain who you are, what will happen, and what you expect from them.

Screening

Screening questions are a crucial part of preparing for a Solution Interview. They ensure data relevance and research efficiency. If a respondent lacks the required experience, the interview won’t be useful.
Ask two quick questions to assess relevance:
Has the user ever faced this Job Story?
Helps determine if they've encountered the problem you're researching.
“Do you currently track your nutrition to gain muscle mass?”
When was the last time they did this?
Checks how fresh and relevant their experience is.
“When was the last time you tracked your nutrition?”
If the participant isn’t a fit, end the session politely:
Acknowledge the mismatch:
"Thanks so much for your willingness to help. Unfortunately, we need to dive deeper into a different experience. Apologies for any inconvenience."
Ask to stay in touch:
"We’d love to reach out again for future research—your experience might be very helpful for another area of the product."
This approach preserves the relationship and leaves a good impression, which is useful for future sessions.

Main Session

Start with easy, open-ended questions to warm up the participant and help them recall their real experience. For example:
Can you walk me through how you currently solve your task X?
“Can you walk me through how you currently track your nutrition when gaining muscle mass—from start to finish?”
Then move into prototype testing—show your solution, observe reactions, and go through the structured steps (importance, satisfaction before/after, etc.).

Closing

People enjoy feeling helpful. Even if the session wasn’t perfect, I always thank participants, highlight what I learned, and keep the tone warm. That goodwill often leads to stronger sessions later on.
I also ask if they’re open to future interviews. I’ve never heard “no,” and it’s a good practice. For example:
That’s all the questions I had. We’ve gathered a lot of useful insights—thank you for your time!
Would you be open to participating in future sessions like this?
Do you have any questions for me?
Thanks again. Wishing you a great day!

Recruiting Participants

Number of Participants

For each user role (you can read about segments and roles in the Functionality attribute), 5 to 8 participants is usually sufficient. Research shows that most core problems and behavior patterns emerge within the first 5–6 interviews. Additional sessions help validate findings and occasionally reveal rare issues, but the rate of new insights typically drops after 8–10 interviews.
So, 5–8 respondents per role is optimal for most UX research.

Methods for Finding Participants

Existing Users
If the product is already live—or you're working within a larger company launching a new product—you can recruit current users.
Send them an email invitation offering to join a research session.
Friends & Personal Network
This is common when working on early-stage products. Use social media to recruit through:
Posts or stories announcing your search
Relevant group chats or channels
Direct Outreach to Strangers
Reach out to people who comment on relevant social media posts
Approach potential users in their natural environment: gyms, schools, expos, forums, conferences
Online Listings
Post requests on classified ad boards (relevant to your audience or region).
AI can assist with recruitment strategy. Use ChatGPT or DeepSeek to brainstorm potential channels. Here’s a sample prompt to generate ideas:
I’m developing [your product, e.g., a nutrition tracking app for people gaining muscle mass] for users in [target geography, e.g., Australia]. I need to conduct Solution Interviews with potential users. Give me a list of all possible recruitment channels, including specific examples, sources, and links.

Conducting the Interview

Best Practices for Conducting Interviews

Your main goal during the interview is to honestly assess how much the user trusts your solution. Here are several key practices:
Interview users for whom the Job Story is relevant
Make sure the participant is part of the target audience and has actually experienced the problem your product addresses. If they’ve never faced the Job Story, their answers will be abstract or useless. It’s better to spend more time recruiting than to waste a session on irrelevant data.
Build trust and set context early
Respondents may feel unsure at first. Explain the purpose of the research, emphasize that their honest feedback matters, and that there are no right or wrong answers. This is especially important when discussing trust/distrust—they need to feel safe sharing doubts.
Stick to the script—but stay flexible
Follow your prepared guide to collect all necessary scores, but if a user brings up something valuable spontaneously—let them talk. If they think aloud about potential problems during a task, listen carefully. Note key phrases or record the session (with consent) to capture these insights.
Don’t lead or justify the prototype
Avoid interfering during tasks. If a user struggles or misunderstands a feature, don’t help them immediately—log it as a potential issue.
Once they’ve completed the task, ask:
“What did you think this feature would do?”
Listen carefully. Don’t correct them mid-way—you’ll lose insight into whether the interface communicates reliably. After hearing their assumptions, you can explain the intended function so the rest of the session continues clearly.
Allow time to think and speak
After each task—especially when asking about trust—pause. Don’t rush to fill silence. They may need a few seconds to reflect and share something valuable.
Encourage elaboration:
“Can you tell me more about what you're thinking here?”
Deeper concerns often surface with a delay—be patient.
Clarify unclear reactions
If they smirk, frown, or show other nonverbal cues, ask what they were thinking or feeling. This can reveal hidden doubts or moments where they were impressed by the reliability.
All of these tactics serve one purpose: to get an unfiltered, honest view of how much the user trusts your solution.

Common Mistakes in Interviews and Analysis

Even experienced researchers can make mistakes that reduce the value of results. Below is a checklist of common pitfalls to avoid:
Leading questions
Be careful when phrasing questions about reliability.
Bad:
“You liked how the app handled that, right?”
—this subtly pressures the user into saying “yes.”
Better:
“How did that feel to you?”, “Do you trust this approach? Why?”
Avoid justifying the product in your question (e.g., “We’ll improve this later, but would you trust it for now?”). Keep questions open and neutral.
Ignoring or misusing rating scales
Always collect numerical ratings (importance, satisfaction) using the prepared scale. Mistake: forgetting to ask for a number and settling for “yeah, I guess it's fine.” Numerical data is necessary for calculations and comparison. Also make sure the user understands the scale (e.g., that 10 = highest importance/satisfaction).
Confusing reliability with usability
Stay focused on whether the user’s needs are fully addressed and whether they trust the outcome, not on ease of use. If they start complaining about a clunky UI—note it, but steer the conversation back:
“Thanks for the feedback on the design—we’ll take that into account. But in terms of the result—did it give you what you needed? Do you trust it?”
Otherwise, you risk wasting the interview on minor usability talk instead of evaluating the solution’s core reliability.
Overlooking user doubts
The most important insights come from uncertainty. If you hear phrases like “I’m not sure if…” or “That part worried me a bit…”—don’t move on. Ask follow-ups:
“What exactly caused doubt?”, “Why do you feel unsure?”, “What would you do in that case?”
These responses highlight where reliability breaks down.
Trying to “correct” the user’s opinion
Some researchers feel defensive when hearing criticism and start explaining:
“But our app already has that feature!” or “That issue is easy to fix…”
Don’t do this. Your job is to gather feedback, not defend the concept. Even if the user missed something, don’t “correct” them. Misunderstandings or distrust signal one of two things: the concept is weak, or your prototype didn’t communicate it clearly. Either way, that’s valuable data—not a misunderstanding to “fix.”
Incomplete result logging
Immediately after the session, fill out your results table completely while the details are still fresh. A common mistake is to postpone and then forget why a user gave a score of 5 instead of 8. Write down the user’s explanation in the comments for each row. Also, mark “not applicable” outcomes explicitly so they aren’t treated as zeroes in calculations.
Shallow analysis with no categories
Finally, don’t just glance at the scores and say “seems fine/bad”. Break the data down:
What needs were fully met? Partially met? Not met?
Group repeated concerns into categories
Highlight high and low Opportunity Scores
This structured analysis (covered in the next section) leads to evidence-based decisions on what to improve.
Avoiding these mistakes will improve your data quality and confidence in your findings for the Reliability attribute.

Analyzing Results and Criteria for Successful Reliability Validation

Once all interviews are completed, it's time to determine whether your solution concept is truly reliable—i.e., whether it solves the user’s task without raising doubts. The analysis includes both quantitative evaluation (via the Opportunity Score table) and qualitative interpretation (comments, behavior, and user quotes).
Consolidate Data into the Table
Collect scores from all respondents into a single summary table (each respondent on a separate sheet). The table will automatically calculate:
Average Importance, Satisfaction Before, and Satisfaction After
Average Opportunity Score
A Summary Status ("Improved" / "No Change" / "Worsened") based on Satisfaction Difference
A Priority level: High / Medium / Low
The before-vs-after satisfaction dynamic shows whether the solution worked. Check how many Outcomes improved, stayed the same, or got worse. Also watch for red flags: if an Outcome was rated as highly important but received low post-use satisfaction, that’s a serious issue.
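If you keep the per-respondent sheets in code rather than a spreadsheet, the consolidation step might look like the following Python sketch (outcome names are borrowed from the BMR example earlier; note that “not applicable” rows are skipped, not counted as zeroes).

# Each respondent's sheet: Desired Outcome -> (Importance, Satisfaction Before, Satisfaction After),
# or None when the outcome was "not applicable" for that participant.
respondents = [
    {"Avoid errors in BMR calculation": (9, 3, 8), "Calculate BMR quickly": (7, 5, 7)},
    {"Avoid errors in BMR calculation": (10, 4, 9), "Calculate BMR quickly": None},
]

def consolidate(sheets):
    rows_by_outcome = {}
    for sheet in sheets:
        for outcome, ratings in sheet.items():
            if ratings is not None:  # skip "not applicable" answers entirely
                rows_by_outcome.setdefault(outcome, []).append(ratings)
    summary = {}
    for outcome, rows in rows_by_outcome.items():
        n = len(rows)
        importance, before, after = (sum(r[i] for r in rows) / n for i in range(3))
        delta = after - before
        status = "Improved" if delta > 0 else "Worsened" if delta < 0 else "No Change"
        summary[outcome] = {"n": n, "Importance": importance, "Satisfaction Before": before,
                            "Satisfaction After": after, "Δ": delta, "Summary": status}
    return summary

for outcome, row in consolidate(respondents).items():
    print(outcome, row)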
Interpret the Opportunity Score (OS)
OS helps you rank improvement opportunities. In the context of Reliability, it indicates how important a Desired Outcome is and how unmet it remains. Define threshold levels to guide your action plan:
OS ≥ 15 (high): Outcome is very important but poorly satisfied—critical to fix. Typically occurs when Importance is 9–10 and Satisfaction After remains low (or even drops below Satisfaction Before). Priority: High
OS ≈ 10–14 (medium): Some potential for improvement. May show partial gain, or high importance with only slight improvement. Priority: Medium
OS < 10 (low): Either Outcome isn’t important or it’s already well covered. Low OS means there’s no big “opportunity gap.” If Importance is low—no need to improve. If Importance is high and Satisfaction is also high—mission accomplished. Priority: Low
OS is a support metric, not a verdict. Always interpret it alongside raw data. A low OS from a low-importance Outcome doesn’t mean success—just that the Outcome itself wasn’t crucial.
Set Your Own Threshold Rules
Create your own thresholds for interpreting ratings:
Importance: 8–10 = high, 5–7 = medium, <5 = low
Satisfaction After: 8–10 = high (well-resolved), 5–7 = medium, <5 = low (unsatisfactory)
Satisfaction Difference (Δ): +1 or more = improved, 0 = no change, negative = worse
Having these thresholds helps you draw clear conclusions and remain consistent across projects.
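One way to keep the rules consistent is to encode them once and apply them to every averaged Outcome. A small Python sketch using the example thresholds above (the cut-offs are illustrative and should be tuned per project):

def classify(importance, satisfaction_after, delta):
    # Thresholds follow the example rules above; adjust per project.
    importance_level = "high" if importance >= 8 else "medium" if importance >= 5 else "low"
    satisfaction_level = "high" if satisfaction_after >= 8 else "medium" if satisfaction_after >= 5 else "low"
    change = "improved" if delta >= 1 else "worse" if delta < 0 else "no change"
    return {"importance": importance_level, "satisfaction": satisfaction_level, "change": change}

# Example: an important outcome that the prototype resolved well
print(classify(importance=9.5, satisfaction_after=8.5, delta=5.0))
# {'importance': 'high', 'satisfaction': 'high', 'change': 'improved'}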
Criteria for Confirming Reliability
Now, use the scores, summaries, and observations to answer the key question: Did we validate the Reliability of our solution?
Checklist for a positive outcome:
All top-priority Outcomes are satisfied: If your highest-priority Outcomes (from your prioritization phase) received high Satisfaction After scores—this strongly indicates success. If most of them remained poorly rated, your solution failed to prove reliability.
Improvement over existing methods: Compare Before vs After for each important Outcome. Success means satisfaction went up, especially in areas where the current solution was weak. If satisfaction didn’t increase, or worse—declined—your solution failed. The Summary field will flag these cases (“No Change” / “Worsened”)—pay close attention.
Opportunity Score thresholds: Ideally, no high OS (15+) should remain for critical Outcomes. If all key OS are low (<10), you’ve likely covered most needs well. Some medium OS can be accepted, but high OS must be resolved before release.
Qualitative signs of trust/distrust: Beyond numbers, rely on user behavior and language. Signs of trust = users are confident and ready to use the solution. Signs of distrust = doubts, skepticism, workarounds. If most users showed confidence, Reliability is likely validated. If not—your concept needs refinement.
In summary, you’ve validated Reliability if:
All important needs are well satisfied
Users find your solution better or equal to their current method
No serious doubts are expressed
No high-risk Opportunity Scores remain
If these are true—proceed with confidence (e.g., to UI design and usability testing).
If not—back to the drawing board.

Categorizing Cases by User Trust Level

During analysis, it’s helpful to classify each interview case into one of three categories: complete Job Story coverage, partial coverage, or no coverage at all. This effectively reflects the user’s level of trust in the solution. Let’s break down the signs of each situation:

Complete Job Story Coverage (High Trust)

The solution fully addresses the user’s task, and the user is ready to trust it 100%. Signs of full coverage:
The user expressed no significant doubts throughout the interview. They completed all test scenarios without fallback plans or additional steps.
When asked directly if their task was solved, the answer is highly positive. For example: “Yes, everything I need is here.” Satisfaction ratings are close to 10/10 across most Desired Outcomes.
Their language shows clear satisfaction: “That’s exactly what I was missing.” They may ask implementation questions like “When will this be available?”—but they don’t question the reliability of the solution itself.
Even when prompted with potential edge cases, the user remains confident. They might say something like: “Well, I trust the system will handle it.” That indicates trust: they believe the product won’t fail under pressure.
If most of your interviews fall into this category—congratulations. Your concept shows strong Reliability. Users are willing to rely on it without fear or backups. That’s the goal of this attribute.

Partial Job Story Coverage (Moderate Trust)

The solution solves the problem only partially. The user sees value and may use it in some cases, but still has concerns or limited trust. Signs of partial coverage:
The user believes the core task is solved, but secondary needs or edge cases are not. For example: “Yeah, doing X is easier now, but I’d still have to handle Y manually.”
They define usage boundaries: “I’d use this for simple cases, but for complex ones I’d stick to my old method.” In other words, they trust the product only in low-risk or limited contexts.
Direct trust questions receive cautious, non-committal answers. A typical phrase: “Maybe I’d try it, but I’d wait for some reviews or updates.” Satisfaction scores are mid-range (5–7 out of 10).
The user still holds onto a backup plan. For example: “I’d keep a copy of the data just in case.” This indicates they’re not yet confident enough to fully rely on the product.
In short, the solution covers parts of the Job Story, but not fully enough for the user to drop their old concerns. Trust is conditional: they see some benefit but aren’t ready to rely on it completely.
For the product team, this means the solution needs refinement—either by expanding coverage, handling edge cases better, or boosting confidence (e.g., through certifications, testimonials, or guarantees) to convert these users to full trust.

No Job Story Coverage (Low Trust)

This is the most critical case: the user says the solution doesn’t solve their task—or doesn’t do it reliably—so they don’t plan to use it. Signs of no coverage:
Responses show skepticism or disappointment. For example: “Honestly? I’m not sure this would work for me…” This might mean the wrong Job Story was targeted, or a key user need was missed.
The user clings to their current method and shows no intent to switch. You’ll hear things like: “It’s easier the old way, even if it has flaws.”
Trust is clearly absent. Quotes like: “I wouldn’t risk anything important with this tool.” or “I’m afraid it would glitch when I really need it.” Even if their current workflow has pain points, they see this product as not better—and maybe worse.
The table reflects this: low Satisfaction After (often 3–4 or less), no improvement or even decline, high Opportunity Scores, and red flags on important Outcomes.
This means either (1) you targeted the wrong problem, or (2) the solution is so unreliable that users would rather deal with current frustrations than risk using it. Either way, these cases demand a serious rethink. Step back: revalidate your Job Story (Functionality stage), or rebuild key reliability mechanisms from the ground up.

Decoding User Reactions: Signs of Trust and Distrust

During analysis, don’t just rely on scores—watch for behavioral and verbal signals. Below is a checklist of user reactions and how to interpret them, helping you decode what the user actually felt.
Doubt expressed as a question
If the user asks things like “Will the system handle it if...?”—that’s a clear sign of distrust. They’ve already spotted a potential failure point and are probing you. These questions often mask concerns:
Technical: “Will it freeze? Could data be lost?”
Logical: “Will it calculate this case correctly?”
Support-related: “Where do I go if something goes wrong?”
Each question like this is a red flag. Document all doubts and categorize them later—this shows which reliability aspects need attention (performance, accuracy, support, etc.).
Mention of a workaround or duplicated actions
If the user says they’d do something “just in case,” they don’t fully trust the system. Classic examples:
“I’d still export a backup to Excel—just in case.” → lack of trust in data safety.
“I’ll keep a manual log in parallel.” → fear of data loss or errors.
“I’ll call a manager to double-check.” → doesn’t trust automation.
Each workaround = a reliability gap. The user feels the need to safeguard themselves—your ideal product should eliminate that need. Document all such behaviors; they may lead to new Desired Outcomes or design requirements.
“Red flags” mentioned by multiple users
If several users independently raise the same concern—pay attention. For example, if 3 out of 5 users ask about offline access, that’s a strong signal: offline mode is crucial for trust. Unaddressed, this could break your concept’s Reliability. Prioritize repeating red flags.
Completeness ≠ Trust
Sometimes everything is functionally “there,” but the user still hesitates. They say things like: “Yeah, it all seems fine, but I’m still unsure…” This often reflects perceived, not actual, reliability.
Why? Maybe:
The design doesn’t give enough feedback.
There’s not enough transparency.
The user doesn't feel in control.
In such cases, adding features won’t help—you need to communicate trust better. Add visual cues like save indicators, security explanations, undo options, etc. These reduce anxiety and increase perceived reliability.
To wrap up the analysis, create a short summary for each interview:
Was the Job Story fully, partially, or not covered?
What was the user’s trust level (high / medium / low)?
What were the main reasons for distrust (specific doubts, quotes, unmet Outcomes)?
Then aggregate the results across all interviews:
What % of users fully trust the solution?
What % trust it with conditions?
What % rejected it?
This directly informs what to prioritize for improving the product’s Reliability, showing which issues are most widespread and critical.
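The roll-up itself is simple arithmetic; for example (a Python sketch with hypothetical per-interview trust labels):

from collections import Counter

# One trust label per interview, assigned during analysis: "full", "partial", or "none"
cases = ["full", "partial", "full", "none", "partial", "full"]

counts = Counter(cases)
for level in ("full", "partial", "none"):
    share = 100 * counts[level] / len(cases)
    print(f"{level} trust: {share:.0f}% of participants")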

Conclusion

Reliability is one of the four pillars of UX quality (alongside Functionality, Usability, and Visual Appeal). Remove one—and the whole structure starts to wobble. That’s why it’s critical to give reliability proper attention early in the development process: make sure your solution truly addresses user needs and earns their trust.
By following the method described here—hypothesis prep, structured interviews, analysis using Opportunity Score and qualitative trust signals—you’ll get a clear view of how reliable your product feels to users, and where to improve.
Once you’ve built strong confidence in your product’s Reliability (users say things like “This is exactly what solves my problem—and I trust it”), you can move on to other areas: visual refinement, usability testing, etc.
The result? A digital product that people love—because it solves their problems fully, effectively, and pleasantly, from start to finish, with no stress or second-guessing. That’s the kind of product that actually wins.

Jay Thomas

A UX strategist with a decade of experience in building and leading UX research and design teams. He specializes in implementing Jobs to be Done (JTBD) methodologies and designing both complex B2B admin panels and high-traffic consumer-facing features used by millions.
Previously, he led UX development at DomClick, where he scaled the UX research team and built a company-wide design system. He is a guest lecturer at HSE and Bang Bang Education and has studied JTBD at Harvard Business School.
Jay has worked with ONY, QIWI, Sber, CIAN, Megafon, Shell, MTS, Adidas, and other industry leaders, helping them create data-driven, user-centered experiences that drive engagement and business growth.