Subtopic Deep Dive
System Usability Scale
Research Guide
What is System Usability Scale?
The System Usability Scale (SUS) is a standardized 10-item questionnaire developed by John Brooke in 1986 for assessing the usability of systems and interfaces.
SUS yields a score from 0 to 100, with higher scores indicating better usability, and has been applied to software, websites, and physical devices. Its reliability has been confirmed in large-scale analyses spanning hundreds of usability studies (Bangor et al., 2008).
Why It Matters
SUS enables rapid usability benchmarking in product design, shortening iteration cycles in engineering (Stanton et al., 2006). In human-robot interaction, SUS helps evaluate how well different autonomy levels meet user expectations (Beer et al., 2014). Health care innovations use SUS to validate patient-facing interfaces, improving adoption rates (Melles et al., 2020). Industry 4.0 systems apply SUS to keep human factors central in automated logistics (Neumann et al., 2020).
Key Research Challenges
SUS Score Interpretation Variability
Adjective labels attached to SUS score ranges differ across domains, complicating cross-study comparisons (Bangor et al., 2009). Benchmarks also vary between novice and expert users, and Stanton et al. (2006) note that ergonomics methods require contextual adaptation.
Cultural Validity Limitations
SUS items carry Western response-style assumptions, reducing reliability in non-English contexts, and validation studies report score inflation in collectivist cultures. Carayon and Smith (2000) highlight work organization factors that affect questionnaire-based ergonomics.
Supplementing SUS with Tasks
SUS lacks task-specific metrics, requiring pairing with hierarchical task analysis. Stanton (2005) extends HTA for usability breakdowns. Robot autonomy frameworks demand SUS integration with performance data (Beer et al., 2014).
Essential Papers
Human Factors Methods: A Practical Guide for Engineering and Design
Neville A. Stanton, Paul M. Salmon, Laura Rafferty et al. · 2006 · 872 citations
Human Factors Methods: A Practical Guide for Engineering and Design presents more than ninety design and evaluation methods, and is designed to act as an ergonomics methods manual, aiding both stud...
Hierarchical task analysis: Developments, applications, and extensions
Neville A. Stanton · 2005 · Applied Ergonomics · 570 citations
Toward a Framework for Levels of Robot Autonomy in Human-Robot Interaction
Jenay M. Beer, Arthur D. Fisk, Wendy A. Rogers · 2014 · Journal of Human-Robot Interaction · 544 citations
A critical construct related to human-robot interaction (HRI) is autonomy, which varies widely across robot platforms. Levels of robot autonomy (LORA), ranging from teleoperation to fully autonomou...
Industry 4.0 and the human factor – A systems framework and analysis methodology for successful development
Patrick Neumann, Sven Winkelhaus, Eric H. Grosse et al. · 2020 · International Journal of Production Economics · 514 citations
The fourth industrial revolution we currently witness changes the role of humans in operations systems. Although automation and assistance technologies are becoming more prevalent in production and...
Work organization and ergonomics
Pascale Carayon, Michael J. Smith · 2000 · Applied Ergonomics · 439 citations
Human factors for informatics usability
· 1992 · Applied Ergonomics · 356 citations
Innovating health care: key characteristics of human-centered design
Marijke Melles, Armaĝan Albayrak, Richard Goossens · 2020 · International Journal for Quality in Health Care · 322 citations
Abstract Human-centered design is about understanding human needs and how design can respond to these needs. With its systemic humane approach and creativity, human-centered design can play an esse...
Reading Guide
Foundational Papers
Start with Stanton et al. (2006), a manual that covers SUS among 90+ ergonomics methods. Follow with Carayon and Smith (2000) on work organization contexts. Beer et al. (2014) provides HRI-specific SUS applications.
Recent Advances
Neumann et al. (2020) analyzes SUS in Industry 4.0 human factors (514 citations). Melles et al. (2020) applies SUS to health care design (322 citations). Sgarbossa et al. (2020) extends to production logistics.
Core Methods
SUS pairs with hierarchical task analysis (Stanton, 2005). Real-time ergonomic feedback complements SUS (Vignais et al., 2012). Frameworks like LORA integrate SUS scoring (Beer et al., 2014).
How PapersFlow Helps You Research System Usability Scale
Discover & Search
Research Agent uses searchPapers with 'System Usability Scale validation ergonomics' to surface Stanton et al. (2006); citationGraph then reveals 872 downstream citations, including Beer et al. (2014). exaSearch uncovers domain adaptations, and findSimilarPapers links to Neumann et al. (2020) for Industry 4.0 usability.
Analyze & Verify
Analysis Agent runs readPaperContent on Stanton et al. (2006) to extract 90+ methods including SUS benchmarks, applies CoVe verification against SUS norms, and uses runPythonAnalysis to compute GRADE scores on reliability metrics from 10 papers. Statistical verification confirms SUS Cronbach's alpha above 0.9 across ergonomics datasets.
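As a rough illustration of the kind of reliability check runPythonAnalysis could perform, here is a minimal Cronbach's alpha computation (the function name and sample data are our own sketch, not PapersFlow output):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (respondents x items) score matrix.

    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
    """
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()   # per-item sample variances
    total_var = scores.sum(axis=1).var(ddof=1)     # variance of respondent totals
    return k / (k - 1) * (1 - item_vars / total_var)

# Perfectly consistent items yield alpha = 1.0.
print(cronbach_alpha([[1, 1], [2, 2], [3, 3]]))
```

Applied to per-item SUS responses from a study, an alpha above 0.9 would support the internal-consistency claims reported in the ergonomics literature.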
Synthesize & Write
Synthesis Agent detects gaps in SUS-robot autonomy literature via contradiction flagging between Beer et al. (2014) and recent Industry 4.0 papers. Writing Agent uses latexEditText for SUS results tables, latexSyncCitations for 20-paper bibliography, latexCompile for publication-ready report, and exportMermaid for usability workflow diagrams.
Use Cases
"Compute average SUS scores from 15 ergonomics papers using Python."
Research Agent → searchPapers('SUS ergonomics validation') → Analysis Agent → readPaperContent(15 PDFs) → runPythonAnalysis(pandas aggregation, matplotlib SUS histogram) → CSV export of means by domain.
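The pandas aggregation step in that chain might look like the following sketch (the domain labels and score values are purely illustrative, not data extracted from the cited papers):

```python
import pandas as pd

# Illustrative per-study SUS means (made-up values, not from the cited studies).
df = pd.DataFrame({
    "domain": ["HRI", "HRI", "Industry 4.0", "Health care", "Health care"],
    "sus_mean": [71.2, 65.8, 69.5, 74.0, 68.3],
})

# Aggregate mean SUS and study count per domain, then export to CSV.
by_domain = df.groupby("domain")["sus_mean"].agg(["mean", "count"])
by_domain.to_csv("sus_means_by_domain.csv")
print(by_domain)
```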
"Write LaTeX appendix comparing SUS in human-robot papers."
Research Agent → citationGraph(Stanton 2006) → Synthesis → gap detection → Writing Agent → latexEditText(SUS table) → latexSyncCitations(Beer 2014, Neumann 2020) → latexCompile → PDF with compiled benchmarks.
"Find GitHub repos implementing SUS calculators from ergonomics papers."
Research Agent → searchPapers('SUS implementation code') → Code Discovery → paperExtractUrls → paperFindGithubRepo → githubRepoInspect(SUS Python tools) → verified repo list with ergonomics examples.
Automated Workflows
Deep Research workflow scans 50+ papers via searchPapers on 'SUS human factors' and structures a SUS benchmark report with GRADE grading. DeepScan applies a 7-step CoVe chain: search → read → Python stats → verify → synthesize gaps → LaTeX → critique. Theorizer generates SUS extension theory from Stanton (2006) methods and Beer (2014) autonomy data.
Frequently Asked Questions
What is the System Usability Scale?
SUS is a 10-item Likert questionnaire that scores usability from 0 to 100, developed by John Brooke (1986). Scores above 68 indicate above-average usability. It appears in Stanton et al. (2006) as a core ergonomics evaluation method.
What are common SUS administration methods?
Administer SUS immediately after task completion, using 5-point scales from 'strongly disagree' to 'strongly agree'. Odd-numbered items are positively worded; even-numbered items are negatively worded and reverse-scored. Stanton et al. (2006) detail SUS within their collection of 90+ human factors methods.
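The standard scoring rule can be sketched in a few lines of Python (a minimal illustration; the function name sus_score is ours, not a PapersFlow tool):

```python
def sus_score(responses):
    """Compute a SUS score (0-100) from ten 1-5 Likert responses.

    Odd-numbered items (1, 3, ...) are positively worded: contribution = response - 1.
    Even-numbered items (2, 4, ...) are negatively worded: contribution = 5 - response.
    The summed contributions (0-40) are multiplied by 2.5.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("expected ten responses on a 1-5 scale")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))  # i = 0 is item 1 (odd)
    return total * 2.5

# Best possible answers on every item score 100.
print(sus_score([5, 1] * 5))  # → 100.0
```

A uniform response of 3 on every item yields the scale midpoint of 50, which is why the empirical average of about 68, not 50, is the usual benchmark for "above average" usability.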
What are key papers on SUS in ergonomics?
Stanton et al. (2006) covers SUS in practical engineering guides (872 citations). Beer et al. (2014) applies SUS to robot autonomy levels (544 citations). Neumann et al. (2020) benchmarks SUS in Industry 4.0 systems.
What are open problems in SUS research?
Lack of universal benchmarks across cultures and devices persists. Integration with real-time feedback systems needs validation (Vignais et al., 2012). Robot and Industry 4.0 contexts require SUS extensions (Beer et al., 2014; Neumann et al., 2020).
Research Ergonomics and Human Factors with AI
PapersFlow provides specialized AI tools for Engineering researchers. Here are the most relevant for this topic:
AI Literature Review
Automate paper discovery and synthesis across 474M+ papers
Paper Summarizer
Get structured summaries of any paper in seconds
Code & Data Discovery
Find datasets, code repositories, and computational tools
AI Academic Writing
Write research papers with AI assistance and LaTeX support
See how researchers in Engineering use PapersFlow
Field-specific workflows, example queries, and use cases.
Start Researching System Usability Scale with AI
Search 474M+ papers, run AI-powered literature reviews, and write with integrated citations — all in one workspace.
See how PapersFlow works for Engineering researchers
Part of the Ergonomics and Human Factors Research Guide