Quickfire usability evaluation
Instructor: Dr. Monica Pereira
Email: monica.pereira@brunel.ac.uk
Evaluation: Definition and Purpose
What it is and isn’t
Why we do it
Fit with User-Centered Design (UCD) and Usability Engineering (UE)
Role of Evaluation:
Informal and formal evaluation
Goals and factors to consider
Methods of Evaluation:
Discount usability testing
Heuristic evaluation and cognitive walkthrough
Strategies for quick and cheap evaluations
Timing of Evaluation:
When and how often to evaluate in the UCD process
Formal Technical Reviews: Include non-developers
Technical Software Testing:
White-box testing: Basis path and control structure testing
Black-box testing: Focus on functional requirements
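As an illustration (not from the module materials), the sketch below contrasts the two approaches on a hypothetical ticket_price function: the black-box test is derived only from the stated functional requirement, while the white-box tests follow the function's control structure so that each branch (basis path) is exercised.

```python
import unittest

def ticket_price(age: int, peak: bool) -> float:
    """Hypothetical fare rule: children travel half price, off-peak is cheaper."""
    if age < 0:
        raise ValueError("age must be non-negative")
    base = 10.0 if peak else 7.0   # branch 1: peak vs off-peak
    if age < 16:                   # branch 2: child discount
        base *= 0.5
    return base

class BlackBoxTests(unittest.TestCase):
    # Derived only from the functional requirement, not from the code.
    def test_child_off_peak_fare(self):
        self.assertEqual(ticket_price(age=10, peak=False), 3.5)

class WhiteBoxTests(unittest.TestCase):
    # Derived from the control structure: one test per basis path.
    def test_adult_peak(self):
        self.assertEqual(ticket_price(age=30, peak=True), 10.0)

    def test_adult_off_peak(self):
        self.assertEqual(ticket_price(age=30, peak=False), 7.0)

    def test_child_peak(self):
        self.assertEqual(ticket_price(age=10, peak=True), 5.0)

    def test_invalid_age(self):
        with self.assertRaises(ValueError):
            ticket_price(age=-1, peak=True)

if __name__ == "__main__":
    unittest.main()
```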
Software Testing Strategies:
Unit Testing
Integration Testing
Validation Testing
System Testing: Recovery, Security, Stress, Performance
Note: These techniques focus on the software itself, not on user interaction, which is the focus of this module
Shift focus from "Does the software achieve its task?" to "How well and in what way does the software assist the user?"
Key question: How to assess and measure usability meaningfully?
Evaluation: Process to gather information about system usability to improve or assess the system. (Preece, 1994)
Evaluation Method: Procedure for collecting relevant data on the usability of a computer system. (Preece, 1994)
Key questions addressed:
How to choose between interface alternatives?
How to improve an existing design?
Compare alternative designs and assess how well usability is being engineered toward target goals
Check conformance to standards
Understand real-world efficiency and effectiveness
How effectively does the design facilitate task completion?
Are users able to manage emergency scenarios?
Usability metrics for success and complexity:
Example: How easy is a ticket machine to use?
Is the size of the screen correlated with selection errors?
Are escape mechanisms available for erroneous commands?
What is the rate of user retention, and what is the impact on organizational adoption?
Assess system functionality and appropriateness
Evaluate the effect of the interface on user experience
Ease of learning, usability, and user expectations
Identify specific design problems, including contextual aspects
Characteristics of users
Types of user activities
Environment where the study takes place
Nature of the artifact being evaluated
The evaluator acts as a critical friend to provide constructive criticism to designers
Imagine being blindfolded, locked in a car trunk, and driven around, then dropped onto an arbitrary page of a website. Can you answer these questions?
What site is this?
What page am I on?
What are the key sections available?
What are my local navigation options?
How can I search?
Overreliance on 'common sense'
Incorrect assumptions about typical users (e.g., testing only on yourself)
Failure to involve representative users
Delaying testing until it is too late
Star life cycle includes:
Requirements specification
Conceptual design
Prototyping
Evaluation (ISO 13407)
Formative: Helps in decision-making and identifying issues early
Summative: Conducted at the end of the design process to ensure goals are met
Iterative testing favored over late summative testing
Analytical Testing: Expert review, involving opinions of usability experts
Abstract Testing: Drawing from existing data, such as cognitive analysis
Empirical Testing: Direct user testing through qualitative studies
Heuristic Evaluation: Scorecard approach assessing against usability principles
Cognitive Walkthrough: Step through the user's actions to simulate task performance (see the sketch after this list)
Formal Usability Inspections: Collaborative meetings to discuss strengths and weaknesses
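A hypothetical sketch of how a cognitive walkthrough might be driven in practice, assuming the commonly cited walkthrough questions are asked at every step of a scripted task; the task steps and the interactive prompt here are illustrative, not a prescribed tool.

```python
# Illustrative only: walk through each step of a task and record the
# evaluator's judgement on the standard walkthrough questions.
WALKTHROUGH_QUESTIONS = [
    "Will the user try to achieve the right effect?",
    "Will the user notice that the correct action is available?",
    "Will the user associate the action with the effect they want?",
    "If the correct action is taken, will the user see that progress is being made?",
]

# Hypothetical task: buying a ticket from a machine.
task_steps = [
    "Select destination",
    "Choose ticket type",
    "Insert payment",
    "Collect ticket and change",
]

def run_walkthrough(steps):
    """Prompt the evaluator for a yes/no answer per question per step."""
    problems = []
    for step in steps:
        for question in WALKTHROUGH_QUESTIONS:
            answer = input(f"[{step}] {question} (y/n): ").strip().lower()
            if answer != "y":
                problems.append((step, question))
    return problems

if __name__ == "__main__":
    for step, question in run_walkthrough(task_steps):
        print(f"Potential problem at '{step}': failed on '{question}'")
```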
Experts evaluate the user interface against usability principles
Duration: Typically 1-2 hours
Output: List of usability issues identified
Each problem should include:
Description
Anticipated user difficulties
Context of the problem
Assumed causes
Provide feedback appropriate to the system's response time
Use user-friendly terminology and conventions
Features like Cancel and Undo should be readily available
Maintain user expectations through standard practices
Design to prevent errors from occurring initially
Minimize the user's memory load by keeping options visible
Provide shortcuts for experienced users
Avoid overloading users with unnecessary information
Clear error messages without technical jargon
Provide help and documentation resources that are easily accessible and relevant
Problems should be rated by frequency and impact to prioritize corrective action (a sketch of one such scheme follows below)
Experts should combine findings to build a comprehensive issue map
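A minimal, hypothetical sketch (not part of the module materials) of how findings from several evaluators might be recorded and prioritized: each problem carries a description, context, and assumed cause, and a severity score is computed from rated frequency and impact so the combined issue map can be sorted for corrective action.

```python
from dataclasses import dataclass

@dataclass
class UsabilityIssue:
    description: str     # what the problem is
    context: str         # where in the interface it occurs
    assumed_cause: str   # suspected reason for the user's difficulty
    frequency: int       # 1 (rare) .. 4 (very common), rated by the evaluator
    impact: int          # 1 (cosmetic) .. 4 (blocks the task)

    @property
    def severity(self) -> int:
        # Simple illustrative weighting: frequent, high-impact problems come
        # first. Published schemes (e.g. Nielsen's severity ratings) also
        # consider persistence.
        return self.frequency * self.impact

def issue_map(findings_per_expert: list[list[UsabilityIssue]]) -> list[UsabilityIssue]:
    """Merge every evaluator's findings and order them by severity."""
    combined = [issue for findings in findings_per_expert for issue in findings]
    return sorted(combined, key=lambda issue: issue.severity, reverse=True)

expert_a = [UsabilityIssue("No undo after payment", "checkout screen",
                           "missing Cancel/Undo control", frequency=2, impact=4)]
expert_b = [UsabilityIssue("Jargon in error message", "login form",
                           "technical wording", frequency=3, impact=2)]

for issue in issue_map([expert_a, expert_b]):
    print(issue.severity, issue.description)
```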
Lack of ecological validity and potential biases
Discuss the necessity of combining expert reviews with user testing
Evaluate through observing usage patterns
Strategies like 'break it' testing, think-aloud methods, and software logging with attention to user privacy
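A minimal sketch (hypothetical helper names, not a prescribed tool) of software logging that respects user privacy: interaction events are recorded with a hashed session identifier and no free-text input, so usage patterns can be analysed without storing personally identifiable data.

```python
import hashlib
import json
import time

def anonymise(session_id: str) -> str:
    """Replace the raw session identifier with a one-way hash."""
    return hashlib.sha256(session_id.encode()).hexdigest()[:12]

def log_event(session_id: str, event: str, target: str, log_path: str = "usage.log") -> None:
    """Append one interaction event; no user-entered text is recorded."""
    record = {
        "t": time.time(),              # timestamp for sequencing events
        "session": anonymise(session_id),
        "event": event,                # e.g. "click", "error", "undo"
        "target": target,              # UI element involved, not its content
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example: record that the user backed out of an erroneous command.
log_event("alice-2024-03-01", event="undo", target="ticket-type-selector")
```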
This session focused on heuristic evaluation and related methods for catching usability problems early in the design process
Real user feedback remains critical for validating the design's effectiveness and adaptability