Subtopic Deep Dive
Usability Evaluation of Mobile Applications
Research Guide
What is Usability Evaluation of Mobile Applications?
Usability evaluation of mobile applications assesses the effectiveness, efficiency, and satisfaction of user interactions with touch-based, context-aware mobile interfaces.
Researchers develop mobile-specific metrics that go beyond desktop standards, addressing gesture controls and multi-device testing. Key works include Harrison et al. (2013), who propose a new usability model from a literature review (710 citations), and Hoehle and Venkatesh's (2015) instrument for mobile app usability (414 citations). More than ten highly cited papers span 1993–2015.
Why It Matters
Mobile apps dominate daily interactions for billions of users, making usability evaluation critical for designs that remain accessible in varied contexts such as movement and location. Harrison et al. (2013) show that functionality gains often come at the expense of usability, guiding better metrics. Hoehle and Venkatesh (2015) provide instruments adopted in industry for app testing, while Kjeldskov and Stage's (2004) techniques improve field evaluations, reducing user errors in apps such as navigation and health trackers.
Key Research Challenges
Context-Aware Testing
Mobile use occurs in dynamic environments like walking, complicating lab-based evaluations. Pascoe et al. (2000) highlight HCI issues in fieldwork (320 citations). Dix et al. (2000) propose location taxonomies to address this (288 citations).
Gesture Interface Metrics
Standard desktop metrics fail to capture touch gestures and spatial awareness. Fitzmaurice (1993) introduces spatially aware palmtop concepts (491 citations). Kjeldskov and Stage (2004) develop new evaluation techniques for mobile systems (301 citations).
Multi-Device Usability
Evaluating consistency across phones, tablets, and wearables lacks unified models. Maguire (2001) defines context within usability activities (264 citations). Seffah et al. (2006) consolidate metrics adaptable to mobile (636 citations).
Essential Papers
Usability of mobile applications: literature review and rationale for a new usability model
Rachel Harrison, Derek Flood, David Duce · 2013 · Journal of Interaction Science · 710 citations
The usefulness of mobile devices has increased greatly in recent years allowing users to perform more tasks in a mobile context. This increase in usefulness has come at the expense of the usability...
Usability measurement and metrics: A consolidated model
Ahmed Seffah, Mohammad Donyaee, Rex B. Kline et al. · 2006 · Software Quality Journal · 636 citations
Situated information spaces and spatially aware palmtop computers
George Fitzmaurice · 1993 · Communications of the ACM · 491 citations
George W. Fitzmaurice, University of Toronto, Toronto, Ont., Canada.
Designing electronic collaborative learning environments
Paul A. Kirschner, Jan-Willem Strijbos, Karel Kreijns et al. · 2004 · Educational Technology Research and Development · 489 citations
Mobile Application Usability: Conceptualization and Instrument Development
Hartmut Hoehle, Viswanath Venkatesh · 2015 · MIS Quarterly · 414 citations
This paper presents a mobile application usability conceptualization and survey instrument following the 10-step procedure recommended by MacKenzie et al. (2011). Specifically, we adapted Apple’s u...
Socially translucent systems
Thomas Erickson, David Nichol Smith, Wendy A. Kellogg et al. · 1999 · 333 citations
We take as our premise that it is possible and desirable to design systems that support social processes. We describe Loops, a project which takes this approach to supporting computer-mediated comm...
Using while moving: HCI issues in fieldwork environments
Jason Pascoe, Nick Ryan, David Morse · 2000 · ACM Transactions on Computer-Human Interaction · 320 citations
Reading Guide
Foundational Papers
Start with Harrison et al. (2013) for the mobile usability model (710 citations), then Seffah et al. (2006) for consolidated metrics (636 citations), and Fitzmaurice (1993) for spatial concepts (491 citations) to build a core understanding.
Recent Advances
Study Hoehle and Venkatesh (2015) for instrument development (414 citations) and Kjeldskov and Stage (2004) for evaluation techniques (301 citations).
Core Methods
Core techniques: context taxonomies (Dix et al., 2000), fieldwork evaluation (Pascoe et al., 2000), and mobile-specific evaluation techniques (Kjeldskov and Stage, 2004).
How PapersFlow Helps You Research Usability Evaluation of Mobile Applications
Discover & Search
Research Agent uses searchPapers and exaSearch to find mobile usability papers like 'Usability of mobile applications: literature review' by Harrison et al. (2013), then citationGraph reveals connected works like Hoehle and Venkatesh (2015) with 414 citations.
Analyze & Verify
Analysis Agent applies readPaperContent to extract metrics from Seffah et al. (2006), verifies claims with verifyResponse (CoVe), and uses runPythonAnalysis on usability datasets for statistical validation such as SUS score distributions; GRADE grading assesses evidence strength in Kjeldskov and Stage (2004).
Synthesize & Write
Synthesis Agent detects gaps in gesture metrics across papers, flags contradictions in context models; Writing Agent uses latexEditText, latexSyncCitations for Harrison et al. (2013), and latexCompile to generate evaluation reports with exportMermaid for usability workflow diagrams.
Use Cases
"Analyze SUS scores from mobile app studies in provided papers using Python."
Research Agent → searchPapers → Analysis Agent → readPaperContent (Harrison 2013, Hoehle 2015) → runPythonAnalysis (pandas aggregation of metrics) → matplotlib plot of score distributions.
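The aggregation step in this workflow can be sketched as follows. The scoring rule is the standard System Usability Scale formula (odd-numbered items contribute response − 1, even-numbered items contribute 5 − response, and the sum is scaled by 2.5 to a 0–100 range); the per-participant responses and the study labels "A"/"B" are hypothetical placeholders, not data from the cited papers.

```python
import pandas as pd

def sus_score(responses):
    """Compute the System Usability Scale score (0-100) from ten 1-5 Likert responses."""
    odd = sum(responses[i] - 1 for i in range(0, 10, 2))   # items 1, 3, 5, 7, 9
    even = sum(5 - responses[i] for i in range(1, 10, 2))  # items 2, 4, 6, 8, 10
    return (odd + even) * 2.5

# Hypothetical per-participant responses extracted from two studies
df = pd.DataFrame({
    "study": ["A", "A", "B", "B"],
    "responses": [
        [4, 2, 4, 1, 5, 2, 4, 2, 4, 1],
        [5, 1, 4, 2, 4, 1, 5, 2, 5, 1],
        [3, 3, 3, 3, 3, 3, 3, 3, 3, 3],
        [4, 2, 3, 2, 4, 2, 4, 3, 4, 2],
    ],
})
df["sus"] = df["responses"].apply(sus_score)
summary = df.groupby("study")["sus"].agg(["mean", "std"])
print(summary)
```

From here, the per-study means could be plotted with matplotlib (e.g., `summary["mean"].plot.bar()`) to visualize the score distributions mentioned above.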
"Draft LaTeX report on new mobile usability model with citations."
Synthesis Agent → gap detection → Writing Agent → latexEditText (intro section) → latexSyncCitations (add Harrison 2013) → latexCompile → PDF with integrated figures.
"Find code for mobile gesture usability tools from papers."
Research Agent → paperExtractUrls (Kjeldskov 2004) → Code Discovery → paperFindGithubRepo → githubRepoInspect → evaluation scripts for field testing.
Automated Workflows
Deep Research workflow scans 50+ mobile HCI papers via searchPapers, structures report with metrics from Seffah et al. (2006). DeepScan applies 7-step analysis with CoVe checkpoints on Pascoe et al. (2000) fieldwork data. Theorizer generates theory on context-aware models from Dix et al. (2000) and Maguire (2001).
Frequently Asked Questions
What defines usability evaluation of mobile applications?
It assesses effectiveness, efficiency, and satisfaction in mobile contexts, accounting for touch gestures and location factors, per Harrison et al. (2013).
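As a minimal illustration of the effectiveness and efficiency components (satisfaction is usually measured separately with questionnaires such as SUS), the sketch below computes a task completion rate and a time-based efficiency from a task log; the field names and numbers are invented for the example, not taken from any cited study.

```python
# Hypothetical task log: one record per user attempt at a single task
tasks = [
    {"user": 1, "completed": True,  "time_s": 42.0, "errors": 0},
    {"user": 2, "completed": True,  "time_s": 55.5, "errors": 1},
    {"user": 3, "completed": False, "time_s": 90.0, "errors": 3},
]

# Effectiveness: fraction of attempts that completed the task
effectiveness = sum(t["completed"] for t in tasks) / len(tasks)

# Time-based efficiency: completed goals per second, averaged over attempts
# (failed attempts contribute 0)
efficiency = sum(t["completed"] / t["time_s"] for t in tasks) / len(tasks)

print(f"effectiveness = {effectiveness:.2f}")
print(f"efficiency = {efficiency:.4f} goals/s")
```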
What are key methods in this subtopic?
Methods include new evaluation techniques (Kjeldskov and Stage, 2004), consolidated metrics (Seffah et al., 2006), and mobile instruments (Hoehle and Venkatesh, 2015).
What are foundational papers?
Harrison et al. (2013, 710 citations) review the literature to propose a new model; Seffah et al. (2006, 636 citations) provide consolidated metrics; Fitzmaurice (1993, 491 citations) covers spatially aware palmtops.
What open problems exist?
Challenges include multi-device consistency (Maguire, 2001) and fieldwork HCI (Pascoe et al., 2000); gaps in gesture metrics persist.
Research Usability and User Interface Design with AI
PapersFlow provides specialized AI tools for Computer Science researchers. Here are the most relevant for this topic:
AI Literature Review
Automate paper discovery and synthesis across 474M+ papers
Code & Data Discovery
Find datasets, code repositories, and computational tools
Deep Research Reports
Multi-source evidence synthesis with counter-evidence
AI Academic Writing
Write research papers with AI assistance and LaTeX support
See how researchers in Computer Science & AI use PapersFlow
Field-specific workflows, example queries, and use cases.
Start Researching Usability Evaluation of Mobile Applications with AI
Search 474M+ papers, run AI-powered literature reviews, and write with integrated citations — all in one workspace.
See how PapersFlow works for Computer Science researchers