The goal of usability testing is to bridge the gap between design assumptions and actual user behavior. It identifies critical friction points, collects quantitative data on performance (e.g., time on task, error rates), and measures qualitative satisfaction.
☝ Core Principle: In usability testing, we test the product, not the person. If a participant struggles, it's a design failure, not a user error.
The Usability Research Cycle
A structured, iterative approach ensures that insights lead to actionable design updates.
- Define Goals — What specific questions do you want to answer?
- Select Methodology — Lean (guerrilla), moderated, or unmoderated.
- Choose Participants — Aim for about 5 participants per user type; Nielsen's research suggests this uncovers roughly 85% of usability issues on average.
- Create Scenarios — Design realistic tasks that mirror actual user flows.
- Conduct the Test — Observe behavior, record interactions, and listen to "thinking aloud."
- Review & Report — Analyze patterns and prioritize fixes based on severity.
- Iterate — Update the design and conduct a second round of testing to validate fixes.
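The "5 participants" guideline in step 3 comes from the Nielsen–Landauer problem-discovery model: the share of problems found by n participants is 1 − (1 − p)^n, where p is the probability that a single participant encounters a given problem (Nielsen's published average is p ≈ 0.31; your product's value may differ). A quick sketch:

```python
def problems_found(n: int, p: float = 0.31) -> float:
    """Expected share of usability problems uncovered by n participants.

    Nielsen-Landauer model: found = 1 - (1 - p)^n, where p is the
    probability that one participant encounters a given problem.
    """
    return 1 - (1 - p) ** n

for n in (1, 3, 5, 15):
    print(f"{n:>2} participants -> {problems_found(n):.0%} of problems found")
```

With the default p, five participants land near the 85% figure, and returns diminish quickly beyond that, which is why the cycle above favors several small rounds over one large one.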
1. Preparing the Session
Before you start, you must be able to answer the "W" questions (plus one "How"):
- Why are you running the test? (e.g., validating a new checkout flow)
- Who are the participants? (Screener questions are vital here)
- What system/functionality is being tested?
- How will you collect data? (Analytics, heatmaps, or video recording)
Preliminary Conversation
Your main goal is to make the participant feel safe.
- "We are trying to determine the strengths and weaknesses of this product."
- "Just do whatever comes naturally."
- "Don't feel bad if you hit a snag—we're testing the system, not you."
- The "Think Aloud" Protocol: Encourage them to verbalize their thought process as they navigate.
2. Questioning Techniques
The way you phrase questions determines the quality of your insights.
During the Test
- Avoid Leading Questions: Instead of "Do you like this button?", use "What are your thoughts on this design?"
- The Funnel Technique: Start with broad impressions and move toward specific details.
- Answer Q with Q: If a user asks "How do I do this?", respond with "What would you expect to happen if you tried X?"
- Let Them Fail: Resist the urge to help. Watching where users get stuck is the most valuable part of the test.
Post-Test & Feedback
- Be Specific: "How did this experience compare to [Competitor]?" or "What would you do if we weren't here?"
- Quantify with Likert Scales: Ask users to rate their confidence or ease of use (e.g., 1-7 scale from "Strongly Disagree" to "Strongly Agree").
- Sequence General to Specific: Ask for overall impressions before diving into specific UI components.
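For Likert-based quantification, one widely used instrument is the 10-item System Usability Scale (SUS), answered on a 1–5 agreement scale. A minimal scorer using the standard SUS arithmetic (the sample responses below are invented):

```python
def sus_score(responses: list[int]) -> float:
    """Score a 10-item SUS questionnaire of 1-5 Likert responses.

    Odd-numbered items are positively worded (contribute score - 1);
    even-numbered items are negatively worded (contribute 5 - score).
    The summed contributions are scaled by 2.5 to a 0-100 range.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs exactly ten responses in the range 1-5")
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # 0-based index: even index = odd item
        for i, r in enumerate(responses)
    )
    return total * 2.5

print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 1]))  # one hypothetical participant
```

A 7-point scale like the one above works the same way for ad-hoc questions; SUS simply gives you a benchmarkable 0–100 number to track across test rounds.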
3. Mobile-First Best Practices
Mobile testing introduces unique physical and contextual constraints.
- Thumb Reach & Orientation: Verify that key interactive elements are within "the thumb zone"—the area easily reachable when holding a phone with one hand.
- Device Fidelity: Always use the correct device for the user (iOS vs. Android) to ensure system-level patterns feel natural.
- Recording Context: Record not just the screen, but also the fingers and body language. "Fat-fingered misses" are common usability issues that screen recordings alone might miss.
- Minimize Input: Mobile typing is cumbersome. Test how easily users can complete tasks using lists, suggestions, or auto-fill instead of typing.
- Connectivity: Consider how the app behaves when signal drops—does it save state?
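The save-state concern can be prototyped before any mobile code exists. A minimal sketch of draft persistence (the function names and file location are illustrative, not from any real SDK): serialize in-progress input on every change, so a dropped connection or killed app loses nothing.

```python
import json
from pathlib import Path

DRAFT_FILE = Path("checkout_draft.json")  # illustrative storage location

def save_draft(form_state: dict) -> None:
    """Persist in-progress input after every change, not only on submit."""
    DRAFT_FILE.write_text(json.dumps(form_state))

def restore_draft() -> dict:
    """Reload the last saved state on launch; empty dict if none exists."""
    if DRAFT_FILE.exists():
        return json.loads(DRAFT_FILE.read_text())
    return {}

# Simulate a user mid-checkout when the app is killed, then relaunched.
save_draft({"street": "123 Main St", "step": 2})
print(restore_draft())
```

In a test session, you can probe this directly: put the device in airplane mode mid-task and watch whether the participant's work survives.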
4. Roles in the Room
For the best results, a session should have at least two people on the research team:
- Facilitator: Manages the session, asks the questions, and ensures the participant feels comfortable. Focuses on the human connection.
- Logger/Observer: Records key events, behaviors, and moments of participant anxiety or hesitation. A second set of eyes reduces bias and ensures a shared record of the session.
5. Tools & Resources
The Toolkit
- Prototyping: UXPin, Figma
- Remote Testing: Lookback, UserTesting, Maze
- Analytics & Observation: UXCam, Hotjar