Subtopic Deep Dive
Cache-Enabled Mobile Edge Computing
Research Guide
What is Cache-Enabled Mobile Edge Computing?
Cache-Enabled Mobile Edge Computing integrates content caching with computation offloading at network edges to reduce latency and backhaul load in mobile networks.
This subtopic combines caching strategies with MEC servers for efficient content delivery and task execution in 5G and IoT scenarios. Key works model the joint optimization of cache placement, compute allocation, and wireless transmission (Mao et al., 2017; 5115 citations). More than ten listed papers from 2005–2020 address modeling, energy efficiency, and AI-driven caching in heterogeneous networks.
Why It Matters
Cache-enabled MEC reduces backhaul traffic by 30-50% in video streaming via proactive caching at small cells (Baştuğ et al., 2015; 321 citations). It enables low-latency AR/VR services in UAV networks through joint caching and resource allocation (Chen et al., 2019; 183 citations). Caching at base stations also improves energy efficiency, which is critical for 5G sustainability (Liu and Yang, 2016; 196 citations). Real-world deployments in LTE-U-enabled UAV networks use liquid state machines for cache management (Chen et al., 2019).
Key Research Challenges
Joint Cache-Compute Optimization
Optimizing storage, computation offloading, and content delivery simultaneously under dynamic user demand remains complex. Stochastic geometry models HetNets well but struggles with real-time variability (Yang et al., 2015; 337 citations). Machine learning helps, but must also cope with wireless interference (He et al., 2017; 301 citations).
Energy Efficiency Tradeoffs
Caching reduces backhaul traffic but increases base station power consumption. Analysis shows that energy-efficiency (EE) gains depend on content popularity and BS density (Liu and Yang, 2016; 196 citations). Balancing local caching against cloud offload remains unresolved in dense MEC deployments (Mehrabi et al., 2019; 206 citations).
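This tradeoff can be illustrated with a toy model (a hedged sketch; the Zipf exponent, the linear energy terms, and all parameter values below are illustrative assumptions, not figures from Liu and Yang, 2016): caching the C most popular files avoids backhaul energy on cache hits, while each cached file adds a static caching cost at the BS.

```python
import numpy as np

def zipf_popularity(n_files, gamma):
    """Zipf request distribution over a catalog of n_files items."""
    ranks = np.arange(1, n_files + 1)
    p = ranks ** (-gamma)
    return p / p.sum()

def ee_gain(cache_size, n_files=1000, gamma=0.8,
            e_backhaul=1.0, e_cache_per_file=0.002):
    """Net energy saved per request: backhaul energy avoided on
    cache hits minus the (amortized) per-file caching cost."""
    p = zipf_popularity(n_files, gamma)
    hit_rate = p[:cache_size].sum()   # cache the most popular files
    return hit_rate * e_backhaul - cache_size * e_cache_per_file

# Larger caches help until the static caching cost dominates.
for c in (10, 100, 400, 1000):
    print(c, round(ee_gain(c), 3))
```

Under these assumed parameters the net gain peaks at an intermediate cache size and turns negative when the whole catalog is cached, mirroring the popularity-dependent EE behavior described above.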
Dynamic Content Prediction
Predictive caching under user mobility and time-varying requests often relies on DRL, yet many formulations assume static channels. More realistic models incorporate D2D links and interference alignment (He et al., 2017; 301 citations). Probabilistic caching optimizes hit rates yet can neglect throughput in ad hoc networks (Chen et al., 2016; 206 citations).
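The hit-rate objective behind probabilistic caching can be sketched as follows (a simplified single-cell instance with illustrative parameters; the D2D, stochastic-geometry setting of Chen et al., 2016 is richer): under a cache-size budget, the hit-optimal placement concentrates caching probability on the most popular files, unlike a uniform placement.

```python
import numpy as np

def zipf(n, gamma=1.0):
    p = np.arange(1, n + 1) ** (-gamma)
    return p / p.sum()

def hit_probability(popularity, q):
    """Expected cache-hit probability: file i is requested w.p. p_i
    and cached w.p. q_i, with sum(q) <= cache size."""
    return float(popularity @ q)

n_files, cache_size = 100, 10
p = zipf(n_files)

# Hit-optimal placement: deterministically cache the top-C files.
q_opt = np.zeros(n_files)
q_opt[:cache_size] = 1.0

# Uniform placement: every file cached with the same probability.
q_uni = np.full(n_files, cache_size / n_files)

print(hit_probability(p, q_opt), hit_probability(p, q_uni))
```

The uniform placement yields exactly cache_size/n_files regardless of popularity, which is why hit-optimal placement wins on hit rate even when it is suboptimal for throughput.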
Essential Papers
A Survey on Mobile Edge Computing: The Communication Perspective
Yuyi Mao, Changsheng You, Jun Zhang et al. · 2017 · IEEE Communications Surveys & Tutorials · 5.1K citations
Driven by the visions of Internet of Things and 5G communications, recent years have seen a paradigm shift in mobile computing, from the centralized mobile cloud computing toward mobile edge comput...
Analysis on Cache-Enabled Wireless Heterogeneous Networks
Chenchen Yang, Yao Yao, Zhiyong Chen et al. · 2015 · IEEE Transactions on Wireless Communications · 337 citations
Caching popular multimedia content is a promising way to unleash the ultimate potential of wireless networks. In this paper, we propose and analyze cache-based content delivery in a three-tier hete...
AI-Assisted Network-Slicing Based Next-Generation Wireless Networks
Xuemin Shen, Jie Gao, Wen Wu et al. · 2020 · IEEE Open Journal of Vehicular Technology · 331 citations
The integration of communications with different scales, diverse radio access technologies, and various network resources renders next-generation wireless networks (NGWNs) highly heterogeneous and ...
Cache-enabled small cell networks: modeling and tradeoffs
Ejder Baştuğ, Mehdi Bennis, Marios Kountouris et al. · 2015 · EURASIP Journal on Wireless Communications and Networking · 321 citations
We consider a network model where small base stations (SBSs) have caching capabilities as a means to alleviate the backhaul load and satisfy users' demand. The SBSs are stochastically distributed o...
Deep-Reinforcement-Learning-Based Optimization for Cache-Enabled Opportunistic Interference Alignment Wireless Networks
Ying He, Zheng Zhang, F. Richard Yu et al. · 2017 · IEEE Transactions on Vehicular Technology · 301 citations
Both caching and interference alignment (IA) are promising techniques for next-generation wireless networks. Nevertheless, most of the existing works on cache-enabled IA wireless networks assume th...
Live Data Analytics With Collaborative Edge and Cloud Processing in Wireless IoT Networks
Shree Krishna Sharma, Xianbin Wang · 2017 · IEEE Access · 241 citations
Recently, big data analytics has received important attention in a variety of application domains including business, finance, space science, healthcare, telecommunication and Internet of Things (I...
Probabilistic Caching in Wireless D2D Networks: Cache Hit Optimal Versus Throughput Optimal
Zheng Chen, Nikolaos Pappas, Marios Kountouris · 2016 · IEEE Communications Letters · 206 citations
Departing from the conventional cache hit optimization in cache-enabled wireless networks, we consider an alternative optimization approach for the probabilistic caching placement in stochastic wir...
Reading Guide
Foundational Papers
Start with Mao et al. (2017) for the MEC survey and its communication models; Baştuğ et al. (2015) for small-cell caching tradeoffs; Liu and Lau (2013) for cooperative-MIMO video caching.
Recent Advances
Chen et al. (2019) for UAV liquid state machine caching; Mehrabi et al. (2019) for device-enhanced MEC; Shen et al. (2020) for AI network slicing.
Core Methods
Stochastic geometry (Poisson point processes), DRL (Q-learning for IA), probabilistic caching, coded caching (MDS schemes), liquid state machines.
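A minimal stochastic-geometry sketch in the spirit of the Poisson point process models above (the density, coverage radius, and nearest-SBS association rule are illustrative assumptions, not parameters from Baştuğ et al., 2015): SBS locations are drawn from a PPP, every SBS caches the top-C files, and a request from a user at the origin is a hit when some SBS is in range and the requested file is cached.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_hit_ratio(sbs_density=1e-4, area_side=2000.0, radius=200.0,
                       n_files=500, cache_size=50, gamma=0.9, n_users=5000):
    """Monte Carlo hit ratio: SBSs form a PPP of the given density,
    each caches the top-C files of a Zipf catalog; a request is a hit
    when an SBS lies within `radius` of the user at the origin and
    the requested file is in the cached set."""
    p = np.arange(1, n_files + 1) ** (-gamma)
    p /= p.sum()
    p_top = p[:cache_size].sum()      # request falls in the cached set
    hits = 0
    for _ in range(n_users):
        n_sbs = rng.poisson(sbs_density * area_side ** 2)
        xy = rng.uniform(-area_side / 2, area_side / 2, size=(n_sbs, 2))
        covered = n_sbs > 0 and (np.hypot(xy[:, 0], xy[:, 1]) <= radius).any()
        requested_cached = rng.random() < p_top
        hits += covered and requested_cached
    return hits / n_users

print(simulate_hit_ratio())
```

Because all SBSs cache the same files here, the hit ratio factors into a coverage term (from the PPP) and a popularity term (from the Zipf cutoff); probabilistic or coded placement breaks that independence, which is what the more refined models in the listed papers capture.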
How PapersFlow Helps You Research Cache-Enabled Mobile Edge Computing
Discover & Search
Research Agent uses searchPapers and citationGraph on 'cache-enabled MEC' to map 474M+ papers, centering Mao et al. (2017; 5115 citations) as a hub with 10+ descendants such as Baştuğ et al. (2015). exaSearch finds niche 'UAV cache MEC' papers; findSimilarPapers expands from Yang et al. (2015) to DRL variants.
Analyze & Verify
Analysis Agent applies readPaperContent to extract optimization models from He et al. (2017), then verifyResponse with CoVe checks DRL convergence claims against Liu and Yang (2016). runPythonAnalysis simulates cache hit ratios using NumPy on stochastic models from Baştuğ et al. (2015); GRADE scores evidence strength for EE claims.
Synthesize & Write
Synthesis Agent detects gaps in joint optimization across Mao et al. (2017) and Chen et al. (2019), flagging underexplored UAV-D2D caching. Writing Agent uses latexEditText for model equations, latexSyncCitations for 20-paper bib, latexCompile for report; exportMermaid diagrams cache-compute tradeoffs.
Use Cases
"Simulate cache hit probability in HetNet MEC from Yang et al. 2015"
Research Agent → searchPapers → Analysis Agent → runPythonAnalysis (NumPy/pandas on Poisson point process model) → matplotlib plot of hit ratios vs. cache size.
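The kind of script this pipeline would produce might look like the following sketch (the catalog size, Zipf exponent, and cache sizes are illustrative assumptions; the matplotlib step is left as a comment):

```python
import numpy as np

def hit_ratio_vs_cache_size(n_files=1000, gamma=0.8,
                            cache_sizes=(10, 50, 100, 250, 500)):
    """Analytic hit ratio when each cell caches its C most popular
    files and requests follow a Zipf law."""
    p = np.arange(1, n_files + 1) ** (-gamma)
    p /= p.sum()
    cum = np.cumsum(p)                      # hit ratio for cache size C
    return {c: float(cum[c - 1]) for c in cache_sizes}

ratios = hit_ratio_vs_cache_size()
for c, h in ratios.items():
    print(f"cache size {c:4d} -> hit ratio {h:.3f}")
# To plot: matplotlib.pyplot.plot(list(ratios), list(ratios.values()))
```

The diminishing returns as cache size grows are the usual Zipf effect, and the plot of this curve is the "hit ratios vs. cache size" output the use case describes.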
"Write LaTeX review of energy-efficient caching in MEC with citations"
Synthesis Agent → gap detection on Liu-Yang 2016 + Mao 2017 → Writing Agent → latexEditText + latexSyncCitations + latexCompile → PDF with optimized equations and 15 synced refs.
"Find GitHub code for DRL caching in MEC papers"
Research Agent → paperExtractUrls on He et al. 2017 → Code Discovery → paperFindGithubRepo → githubRepoInspect → verified RL simulation code for interference alignment.
Automated Workflows
Deep Research workflow scans 50+ cache-MEC papers via citationGraph from Mao et al. (2017), producing structured report with GRADE-verified claims. DeepScan applies 7-step CoVe analysis to He et al. (2017) DRL model, checkpointing simulation reproducibility. Theorizer generates hypotheses on liquid state machines for UAV caching from Chen et al. (2019).
Frequently Asked Questions
What defines Cache-Enabled Mobile Edge Computing?
It integrates caching at MEC servers with computation offloading to minimize latency in 5G networks (Mao et al., 2017).
What are main methods in this subtopic?
Stochastic geometry for HetNets (Yang et al., 2015), DRL for dynamic caching (He et al., 2017), probabilistic caching for D2D (Chen et al., 2016).
What are key papers?
Foundational: Mao et al. (2017; 5115 citations), Baştuğ et al. (2015; 321 citations); recent: Chen et al. (2019; 183 citations) on UAVs.
What are open problems?
Real-time joint optimization under mobility, EE in dense deployments, integration with network slicing (Shen et al., 2020).
Research Caching and Content Delivery with AI
PapersFlow provides specialized AI tools for Computer Science researchers. Here are the most relevant for this topic:
AI Literature Review
Automate paper discovery and synthesis across 474M+ papers
Code & Data Discovery
Find datasets, code repositories, and computational tools
Deep Research Reports
Multi-source evidence synthesis with counter-evidence
AI Academic Writing
Write research papers with AI assistance and LaTeX support
See how researchers in Computer Science & AI use PapersFlow
Field-specific workflows, example queries, and use cases.
Start Researching Cache-Enabled Mobile Edge Computing with AI
Search 474M+ papers, run AI-powered literature reviews, and write with integrated citations — all in one workspace.
See how PapersFlow works for Computer Science researchers
Part of the Caching and Content Delivery Research Guide