Subtopic Deep Dive

Mobile Application Performance Evaluation
Research Guide

What is Mobile Application Performance Evaluation?

Mobile Application Performance Evaluation develops benchmarking methodologies to measure CPU/GPU utilization, memory leaks, network latency, thermal throttling, and power consumption in mobile apps across diverse hardware.

Researchers create decision matrices and empirical frameworks to assess app efficiency on Android and iOS. Cross-platform frameworks introduce overheads measured via CPU, memory, and energy metrics (Biørn-Hansen et al., 2020; Ibrahim et al., 2019). Over 200 papers have addressed these metrics since 2013.

15 Curated Papers · 3 Key Challenges

Why It Matters

Performance evaluation identifies overheads in cross-platform frameworks, enabling developers to optimize apps for resource-constrained devices like smartphones with varying batteries and processors (Biørn-Hansen et al., 2020). It supports scalable e-health monitoring apps using HTML5 hybrids (Preuveneers et al., 2013). Efficient benchmarking reduces malware detection latency in Android apps (Taher et al., 2023). These methods ensure apps run smoothly on fragmented hardware ecosystems (Biørn-Hansen et al., 2018).

Key Research Challenges

Cross-Platform Overhead Measurement

Frameworks like React Native introduce performance overheads in CPU and memory usage across Android and iOS (Biørn-Hansen et al., 2020). Empirical testing reveals up to 30% degradation compared to native apps. Standardization of metrics remains inconsistent.
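As a rough illustration of how such an overhead figure can be computed, the sketch below compares per-task CPU times from a native build and a cross-platform build of the same app. All sample values are invented for illustration and do not come from the cited study.

```python
# Illustrative overhead comparison between native and cross-platform builds.
# The sample values are hypothetical, not taken from any cited paper.
from statistics import mean

native_cpu_ms = [118, 121, 119, 123, 120]   # per-task CPU time, native build
cross_cpu_ms = [152, 149, 158, 151, 155]    # same tasks, cross-platform build

# Relative overhead of the cross-platform build over the native baseline
overhead_pct = (mean(cross_cpu_ms) - mean(native_cpu_ms)) / mean(native_cpu_ms) * 100
print(f"Mean CPU overhead: {overhead_pct:.1f}%")
```

In practice, studies like Biørn-Hansen et al. (2020) repeat such measurements across many devices and tasks before reporting aggregate overheads.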

Benchmarking Diverse Hardware

Apps must perform on varied devices with different GPUs and thermal profiles (Majchrzak et al., 2018). Power profiling tools vary by platform, complicating comparisons. Fragmentation hinders reproducible benchmarks (Biørn-Hansen et al., 2018).
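One common way to make results comparable across heterogeneous hardware is to normalize each metric against a reference device and combine the per-metric ratios with a geometric mean. The sketch below assumes three invented metrics and invented numbers, purely for illustration.

```python
# Normalizing per-device results against a reference device so a composite
# score is comparable across heterogeneous hardware. Numbers are illustrative.
from statistics import geometric_mean

reference = {"startup_ms": 420, "scroll_fps": 60, "battery_pct_h": 4.0}
device_x = {"startup_ms": 610, "scroll_fps": 48, "battery_pct_h": 6.5}

# Ratio > 1 means device_x is worse than the reference on that metric;
# fps is a higher-is-better metric, so its ratio is inverted.
ratios = [
    device_x["startup_ms"] / reference["startup_ms"],
    reference["scroll_fps"] / device_x["scroll_fps"],
    device_x["battery_pct_h"] / reference["battery_pct_h"],
]
print(f"Composite slowdown vs reference: {geometric_mean(ratios):.2f}x")
```

The geometric mean is the usual choice here because it is insensitive to which device is picked as the reference.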

Multi-Criteria Evaluation Scaling

Decision matrices built for skills such as LSRW (listening, speaking, reading, writing) in educational apps extend to general performance evaluation but scale poorly as new metrics are added (Ibrahim et al., 2019). Integrating network latency and battery drain adds complexity, and automation of MCDM remains limited.
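A minimal weighted-sum decision matrix, the simplest MCDM scheme, can be sketched as follows. The criteria, weights, and scores are hypothetical; published MCDM studies typically use richer methods such as TOPSIS or AHP.

```python
# Minimal weighted-sum MCDM sketch for ranking apps on performance criteria.
# Weights and scores are hypothetical, for illustration only.
import numpy as np

criteria = ["cpu", "memory", "latency", "battery"]
weights = np.array([0.3, 0.2, 0.3, 0.2])   # importance weights, sum to 1
# Rows: candidate apps; columns: normalized benefit scores in [0, 1]
# (cost metrics like latency are inverted before this step so higher is better).
scores = np.array([
    [0.9, 0.7, 0.8, 0.6],   # app A
    [0.6, 0.9, 0.7, 0.8],   # app B
])

ranking = scores @ weights          # weighted sum per app
best = ranking.argmax()
print("App", "AB"[best], "ranks first with score", ranking[best])
```

The scaling problem the text describes shows up directly here: every new metric requires re-normalizing a column and re-balancing the weight vector.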

Essential Papers

1.

Multi-Criteria Evaluation and Benchmarking for Young Learners’ English Language Mobile Applications in Terms of LSRW Skills

Norshahila Ibrahim, Nawar. S. Jalood, M. J. Baqer et al. · 2019 · IEEE Access · 59 citations

This study proposes an evaluation and benchmarking decision matrix (DM) on the basis of multi-criteria decision making (MCDM) for young learners’ English mobile applications (E-apps) in term...

2.

Progressive Web Apps: the Definite Approach to Cross-Platform Development?

Tim A. Majchrzak, Andreas Biørn-Hansen, Tor‐Morten Grønli · 2018 · Proceedings of the Annual Hawaii International Conference on System Sciences · 59 citations

Although development practices for apps have matured, cross-platform development remains a prominent topic. Typically, apps should always support both Android and iOS devices. They ought to run smo...

3.

A Survey and Taxonomy of Core Concepts and Research Challenges in Cross-Platform Mobile Development

Andreas Biørn-Hansen, Tor‐Morten Grønli, Gheorghiță Ghinea · 2018 · ACM Computing Surveys · 56 citations

Developing applications targeting mobile devices is a complex task involving numerous options, technologies, and trade-offs, mostly due to the proliferation and fragmentation of devices and platfor...

4.

An empirical investigation of performance overhead in cross-platform mobile development frameworks

Andreas Biørn-Hansen, Christoph Rieger, Tor‐Morten Grønli et al. · 2020 · Empirical Software Engineering · 56 citations

The heterogeneity of the leading mobile platforms in terms of user interfaces, user experience, programming language, and ecosystem have made cross-platform development frameworks popular....

5.

DroidDetectMW: A Hybrid Intelligent Model for Android Malware Detection

Fatma Taher, Omar Alfandi, Mousa Al-kfairy et al. · 2023 · Applied Sciences · 44 citations

Malicious apps specifically aimed at the Android platform have increased in tandem with the proliferation of mobile devices. Malware is now so carefully written that it is difficult to detect. Due ...

6.

Comprehensive Analysis of Innovative Cross-Platform App Development Frameworks

Tim A. Majchrzak, Tor‐Morten Grønli · 2017 · Proceedings of the Annual Hawaii International Conference on System Sciences · 39 citations

Mobile apps are increasingly realized by using a cross-platform development framework. Using such frameworks, code is written once but the app can be deployed to multiple platforms. Despite progres...

7.

The Future of Mobile E-health Application Development: Exploring HTML5 for Context-aware Diabetes Monitoring

Davy Preuveneers, Yolande Berbers, Wouter Joosen · 2013 · Procedia Computer Science · 28 citations

According to predictions of information technology research and advisory firms, such as Gartner, hybrid HTML5 applications will be the future for mobile application development. In this paper, we...

Reading Guide

Foundational Papers

Start with Preuveneers et al. (2013) for HTML5 hybrid performance baselines in e-health, then Ellis and Bredican (2014) on app development sourcing implications.

Recent Advances

Biørn-Hansen et al. (2020) for empirical cross-platform overheads; Taher et al. (2023) for malware-related latency detection.

Core Methods

Empirical measurement of CPU/memory (Biørn-Hansen et al., 2020); MCDM decision matrices (Ibrahim et al., 2019); JSON parsing efficiency tests (Queirós, 2014).
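In the spirit of the JSON parsing efficiency tests mentioned above, a micro-benchmark can be as simple as timing repeated parses of a synthetic payload with the standard library. The payload shape here is invented for illustration.

```python
# Hypothetical micro-benchmark of JSON parsing cost; the payload is a
# synthetic 1000-record list, invented for illustration.
import json
import timeit

payload = json.dumps([{"id": i, "metric": i * 0.5} for i in range(1000)])
seconds = timeit.timeit(lambda: json.loads(payload), number=200)
print(f"200 parses of a 1000-record payload: {seconds:.3f}s")
```

Real parsing-efficiency studies would repeat this across payload sizes, serialization formats, and devices before drawing conclusions.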

How PapersFlow Helps You Research Mobile Application Performance Evaluation

Discover & Search

Research Agent uses searchPapers and citationGraph to map the citation network around the 56-citation empirical overhead study by Biørn-Hansen et al. (2020), then exaSearch uncovers related power profiling work (Preuveneers et al., 2013). findSimilarPapers expands the set to 50+ cross-platform benchmarks.

Analyze & Verify

Analysis Agent applies readPaperContent to extract metrics from Biørn-Hansen et al. (2020), verifies overhead claims with runPythonAnalysis on CPU data using pandas for statistical t-tests, and assigns GRADE scores for evidence strength in benchmarking reproducibility.
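The kind of significance check such an analysis step might run can be sketched without external dependencies: a Welch's t-statistic computed by hand on two invented samples of CPU usage. In practice a routine like scipy.stats.ttest_ind would be used instead, which also yields a p-value.

```python
# Welch's t-statistic computed by hand on two hypothetical CPU-usage samples,
# illustrating the significance check an analysis step might run.
from math import sqrt
from statistics import mean, variance

native = [32.1, 30.8, 31.5, 33.0, 31.2, 32.4]   # % CPU, native build
hybrid = [41.7, 39.9, 42.3, 40.5, 41.1, 40.8]   # % CPU, hybrid build

# Standard error under Welch's unequal-variance assumption
se = sqrt(variance(native) / len(native) + variance(hybrid) / len(hybrid))
t_stat = (mean(hybrid) - mean(native)) / se
print(f"Welch t-statistic: {t_stat:.2f}")
```

A large t-statistic on samples this clean suggests a real difference, but published overhead claims rest on far larger, device-stratified samples.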

Synthesize & Write

Synthesis Agent detects gaps in thermal throttling coverage across papers, flags contradictions in HTML5 efficiency (Preuveneers et al., 2013 vs. Majchrzak et al., 2018), then Writing Agent uses latexEditText, latexSyncCitations, and latexCompile to generate a benchmark report with exportMermaid for performance flowcharts.

Use Cases

"Analyze CPU overhead data from cross-platform frameworks using Python."

Research Agent → searchPapers('Biørn-Hansen 2020') → Analysis Agent → readPaperContent → runPythonAnalysis(pandas plot overhead stats) → matplotlib benchmark graph output.

"Write LaTeX report on mobile app power profiling benchmarks."

Synthesis Agent → gap detection → Writing Agent → latexEditText(draft) → latexSyncCitations(Biørn-Hansen et al.) → latexCompile → PDF with performance tables.

"Find GitHub repos with mobile performance benchmarking code."

Research Agent → searchPapers('mobile benchmark') → Code Discovery → paperExtractUrls → paperFindGithubRepo → githubRepoInspect → repo with CPU profiler scripts.

Automated Workflows

Deep Research workflow scans 50+ papers via citationGraph on Biørn-Hansen et al. (2020) and structures an overhead taxonomy report. DeepScan applies 7-step CoVe to verify metrics in Ibrahim et al. (2019) decision matrices. Theorizer generates hypotheses on HTML5 throttling from Preuveneers et al. (2013).

Frequently Asked Questions

What is Mobile Application Performance Evaluation?

It benchmarks CPU/GPU usage, memory leaks, latency, and power in mobile apps using empirical frameworks and decision matrices (Biørn-Hansen et al., 2020).

What methods dominate performance evaluation?

Multi-criteria decision making (MCDM) matrices evaluate skills-based apps (Ibrahim et al., 2019); empirical testing measures cross-platform overheads (Biørn-Hansen et al., 2020).

What are key papers?

Biørn-Hansen et al. (2020, 56 citations) on overheads; Ibrahim et al. (2019, 59 citations) on MCDM; Preuveneers et al. (2013, 28 citations) on HTML5 feasibility.

What open problems exist?

Standardizing benchmarks across hardware fragmentation and automating multi-metric scaling in MCDM for real-time throttling (Majchrzak et al., 2018; Biørn-Hansen et al., 2018).

Research Mobile and Web Applications with AI

PapersFlow provides specialized AI tools for Computer Science researchers. Here are the most relevant for this topic:

See how researchers in Computer Science & AI use PapersFlow

Field-specific workflows, example queries, and use cases.

Computer Science & AI Guide

Start Researching Mobile Application Performance Evaluation with AI

Search 474M+ papers, run AI-powered literature reviews, and write with integrated citations — all in one workspace.

See how PapersFlow works for Computer Science researchers