Subtopic Deep Dive
System Usability Scale (SUS)
Research Guide
What is the System Usability Scale (SUS)?
The System Usability Scale (SUS) is a 10-item standardized questionnaire developed by John Brooke in 1986 to measure perceived usability of systems on a 0-100 scale.
SUS items alternate positive and negative statements rated on a 5-point Likert scale, yielding a single composite score. John Brooke introduced it in 1986 as part of Digital Equipment Corporation's usability engineering program (Brooke, 2013, 715 citations). James R. Lewis's 2018 review documents over 1,400 studies using SUS since inception (Lewis, 2018, 1664 citations).
Why It Matters
SUS enables quick benchmarking of interface usability across domains such as mobile apps (Kaya et al., 2019, 149 citations) and serious games (Moreno-Ger et al., 2012, 110 citations). Its reliability supports cross-cultural adaptations, including French (Gronier & Baudet, 2021, 125 citations) and Chinese versions (Wang et al., 2019, 65 citations). Lewis et al. (2015, 121 citations) showed SUS correlates strongly with UMUX-LITE, aiding comparisons in industry evaluations. Many studies treat the historical mean of 68 as an acceptability threshold, influencing product design decisions.
Key Research Challenges
Dimensionality of SUS
Debate persists on whether SUS measures one or two factors. Borsci et al. (2009, 256 citations) tested unidimensional versus two-factor models using confirmatory factor analysis. Alternative models showed varying fit across samples.
Cross-cultural Validation
SUS requires rigorous translation and psychometric testing for non-English languages. Gronier and Baudet (2021, 125 citations) validated the French F-SUS via back-translation and factor analysis. Wang et al. (2019, 65 citations) revised Chinese SUS through interviews and reliability checks.
Benchmark Interpretation
Defining universal SUS score thresholds remains inconsistent across contexts. Lewis (2018, 1664 citations) reviewed adjective ratings and percentile ranks for scores. Domain-specific norms, like mobile apps, need more data (Kaya et al., 2019, 149 citations).
Essential Papers
The System Usability Scale: Past, Present, and Future
James R. Lewis · 2018 · International Journal of Human-Computer Interaction · 1.7K citations
The System Usability Scale (SUS) is the most widely used standardized questionnaire for the assessment of perceived usability. This review of the SUS covers its early history from inception in the ...
SUS: a retrospective
John Brooke · 2013 · Journal of Usability Studies · 715 citations
Rather more than 25 years ago, as part of a usability engineering program, I developed a questionnaire---the System Usability Scale (SUS)---that could be used to take a quick measurement of how peo...
On the dimensionality of the System Usability Scale: a test of alternative measurement models
Simone Borsci, Stefano Federici, Marco Lauriola · 2009 · Cognitive Processing · 256 citations
Usability Measurement of Mobile Applications with System Usability Scale (SUS)
Aycan Kaya, Reha Ozturk, Çiğdem Altın Gümüşsoy · 2019 · Lecture notes in management and industrial engineering · 149 citations
Psychometric Evaluation of the F-SUS: Creation and Validation of the French Version of the System Usability Scale
Guillaume Gronier, Alexandre Baudet · 2021 · International Journal of Human-Computer Interaction · 125 citations
While the System Usability Scale (SUS) is probably one of the most widely used questionnaires to measure the perceived ease of use of interactive systems, there is currently no scientific valid tra...
Rethinking Thinking Aloud
Obead Alhadreti, Pam Mayhew · 2018 · 121 citations
This paper presents the results of a study that compared three think-aloud methods: concurrent think-aloud, retrospective think-aloud, and a hybrid method. The three methods were compared through a...
Measuring Perceived Usability: The SUS, UMUX-LITE, and AltUsability
James R. Lewis, Brian S. Utesch, Deborah E. Maher · 2015 · International Journal of Human-Computer Interaction · 121 citations
The purpose of this research was to investigate various measurements of perceived usability, in particular, to assess (a) whether a regression formula developed previously to bring Usability Metric...
Reading Guide
Foundational Papers
Start with Brooke (2013) for SUS origin and Lewis (2018) for historical context and benchmarks. Follow with Borsci et al. (2009) to understand dimensionality debates.
Recent Advances
Study Gronier & Baudet (2021) for French validation methods and Kaya et al. (2019) for mobile applications. Wang et al. (2019) details Chinese adaptation process.
Core Methods
Core methods include the SUS scoring formula, factor analysis for construct validity, and regression-based norming (Lewis et al., 2015), plus cross-cultural back-translation and reliability testing (Gronier & Baudet, 2021).
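The reliability-testing side of these methods can be illustrated with a short Cronbach's alpha computation. This is a generic sketch, not code from any of the cited papers; SUS validation studies typically report alpha above roughly 0.8 for the full 10-item scale.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) array of item scores.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(row totals)),
    where k is the number of items. Sample variances use ddof=1.
    """
    items = np.asarray(items, dtype=float)
    n, k = items.shape
    item_vars = items.var(axis=0, ddof=1)       # per-item variance
    total_var = items.sum(axis=1).var(ddof=1)   # variance of summed scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)
```

With perfectly consistent items (every respondent gives the same rating to each item), alpha is exactly 1.0; real SUS data lands lower.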
How PapersFlow Helps You Research the System Usability Scale (SUS)
Discover & Search
Research Agent uses citationGraph on Lewis (2018) to map 1664 citing papers, revealing validation studies like Gronier & Baudet (2021). exaSearch queries 'SUS cross-cultural validation' to find adaptations beyond provided lists. findSimilarPapers on Brooke (2013) uncovers foundational retrospectives.
Analyze & Verify
Analysis Agent runs readPaperContent on Borsci et al. (2009) to extract factor loadings, then verifyResponse with CoVe checks model fit claims against GRADE B evidence. runPythonAnalysis computes SUS score distributions from Lewis (2015) datasets via pandas for statistical verification.
Synthesize & Write
Synthesis Agent detects gaps in SUS benchmarks for emerging domains like AI interfaces, flagging contradictions between unidimensional and two-factor claims (Borsci et al., 2009). Writing Agent applies latexEditText to draft SUS analysis sections, latexSyncCitations for 10+ references, and latexCompile for publication-ready reports; exportMermaid visualizes SUS item structure.
Use Cases
"Compute average SUS scores and plot distributions from mobile app studies"
Research Agent → searchPapers('SUS mobile') → Analysis Agent → runPythonAnalysis(pandas groupby on scores from Kaya et al. 2019) → matplotlib histogram output with means and SDs.
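The analysis step in this pipeline could look roughly like the following in plain pandas. The scores here are made up for illustration; real values would come from the studies surfaced by the search step, such as Kaya et al. (2019).

```python
import pandas as pd

# Hypothetical per-participant SUS scores for two mobile apps
# (illustrative only, not data from any cited study).
df = pd.DataFrame({
    "app": ["A", "A", "A", "B", "B", "B"],
    "sus": [72.5, 80.0, 65.0, 55.0, 60.0, 47.5],
})

# Per-app mean, standard deviation, and sample size.
summary = df.groupby("app")["sus"].agg(["mean", "std", "count"])
print(summary)
```

A histogram would follow with `df["sus"].plot.hist()` once matplotlib is available.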
"Draft LaTeX section comparing SUS and UMUX-LITE with citations"
Synthesis Agent → gap detection(Lewis 2015) → Writing Agent → latexEditText('comparison table') → latexSyncCitations(8 papers) → latexCompile(PDF with SUS vs UMUX correlation plot).
"Find GitHub repos with SUS calculator code from usability papers"
Research Agent → paperExtractUrls(Brooke 2013) → Code Discovery → paperFindGithubRepo → githubRepoInspect(yields Python SUS scoring scripts with example data).
Automated Workflows
Deep Research workflow scans 50+ SUS papers via citationGraph from Lewis (2018), producing structured report with score benchmarks and psychometrics. DeepScan applies 7-step CoVe to validate claims in Gronier & Baudet (2021), checkpointing translations. Theorizer generates hypotheses on SUS dimensionality from Borsci et al. (2009) models.
Frequently Asked Questions
What is the SUS questionnaire?
SUS consists of 10 statements rated 1-5, alternating positively worded items (odd-numbered, e.g., 'I think that I would like to use this system frequently') and negatively worded items (even-numbered, e.g., 'I found the system unnecessarily complex'). Score = 2.5 × [(sum of odd-item ratings − 5) + (25 − sum of even-item ratings)], ranging 0-100. Brooke (2013) recounts developing it for quick post-task assessments.
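The standard scoring formula translates directly into a few lines of Python (a minimal sketch; function name and validation are our own):

```python
def sus_score(ratings):
    """Compute the SUS score from ten 1-5 ratings, item 1 first.

    Odd-numbered (positively worded) items contribute (rating - 1);
    even-numbered (negatively worded) items contribute (5 - rating).
    The summed contributions are scaled by 2.5 to a 0-100 score.
    """
    if len(ratings) != 10 or any(not 1 <= r <= 5 for r in ratings):
        raise ValueError("expected ten ratings, each between 1 and 5")
    odd = sum(ratings[0::2])   # items 1, 3, 5, 7, 9
    even = sum(ratings[1::2])  # items 2, 4, 6, 8, 10
    return 2.5 * ((odd - 5) + (25 - even))
```

For example, a maximally positive response pattern (all 5s on odd items, all 1s on even items) scores 100, and all-neutral 3s score 50.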
What are common SUS methods?
Administer the questionnaire after user tasks, compute the composite score, and interpret it against norms: the historical mean of 68 sits near the 50th percentile. Lewis (2018) reviews adjective ratings: 80+ is excellent, below 50 poor. Validate adaptations via factor analysis (Borsci et al., 2009).
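The interpretation step can be sketched as a simple banding function. The cut-offs below approximate the adjective ratings reviewed in Lewis (2018); exact boundaries vary between published norming studies, so treat them as assumptions.

```python
def sus_adjective(score):
    """Map a SUS score to a rough adjective band.

    Cut-offs approximate the adjective-rating ranges discussed in
    Lewis (2018); published norms differ slightly on exact boundaries.
    """
    if not 0 <= score <= 100:
        raise ValueError("SUS scores range from 0 to 100")
    if score >= 80:
        return "excellent"
    if score >= 68:
        return "above average"
    if score >= 50:
        return "marginal"
    return "poor"
```

Used after `sus_score`, this turns raw questionnaire responses into a reportable usability band.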
What are key SUS papers?
Brooke (2013, 715 citations) retrospective; Lewis (2018, 1664 citations) history and prospects; Borsci et al. (2009, 256 citations) dimensionality test. Lewis et al. (2015, 121 citations) compares with UMUX-LITE.
What open problems exist in SUS research?
Standardizing benchmarks across domains; resolving one vs. two-factor structure; expanding validated translations. Lewis (2018) calls for more norming data; Kaya et al. (2019) highlights mobile-specific gaps.
Research Usability and User Interface Design with AI
PapersFlow provides specialized AI tools for Computer Science researchers. Here are the most relevant for this topic:
AI Literature Review
Automate paper discovery and synthesis across 474M+ papers
Code & Data Discovery
Find datasets, code repositories, and computational tools
Deep Research Reports
Multi-source evidence synthesis with counter-evidence
AI Academic Writing
Write research papers with AI assistance and LaTeX support
See how researchers in Computer Science & AI use PapersFlow
Field-specific workflows, example queries, and use cases.
Start Researching the System Usability Scale (SUS) with AI
Search 474M+ papers, run AI-powered literature reviews, and write with integrated citations — all in one workspace.
See how PapersFlow works for Computer Science researchers