Subtopic Deep Dive

Few-Shot Meta-Learning
Research Guide

What is Few-Shot Meta-Learning?

Few-shot meta-learning trains models on episodes of small tasks so that they adapt rapidly to new classes from only a few examples. The area spans optimization-based methods such as MAML and metric-based approaches such as Prototypical Networks (ProtoNet).

Work in this area optimizes a meta-learner across a distribution of tasks so that it generalizes quickly to new ones (Finn et al., 2017). Key works include Meta-SGD (Li et al., 2017, 839 citations) for fast few-shot adaptation and Reptile (Nichol et al., 2018, 646 citations) for scalable first-order updates. Surveys such as Hospedales et al. (2021, 257 citations) cover 100+ papers on neural meta-learning techniques.

10 Curated Papers · 3 Key Challenges

Why It Matters

Few-shot meta-learning enables AI adaptation in data-scarce settings like robotics and medical imaging, where collecting full datasets is infeasible. Meta-SGD (Li et al., 2017) accelerates learning in personalized federated systems (Jiang et al., 2019, 368 citations). Reptile (Nichol et al., 2018) scales to domain generalization (Li et al., 2018, 1169 citations), impacting low-data regimes in physical sciences.

Key Research Challenges

Second-Order Optimization Costs

Computing Hessian-like updates in MAML-style methods demands substantial memory and time. Reptile (Nichol et al., 2018, 646 citations) addresses this with a first-order approximation, and On First-Order Meta-Learning Algorithms (Nichol et al., 2018, 544 citations) analyzes the trade-offs of such gradient approximations.
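
The cost gap is easy to see in code: Reptile's meta-update needs only the difference between adapted and initial weights, never a second derivative. A minimal NumPy sketch on a hypothetical family of quadratic toy tasks (all names and hyperparameters here are illustrative, not taken from the papers):

```python
import numpy as np

rng = np.random.default_rng(0)

def inner_sgd(theta, grad_fn, lr=0.1, steps=5):
    """Plain SGD on one task, starting from the meta-parameters."""
    phi = theta.copy()
    for _ in range(steps):
        phi = phi - lr * grad_fn(phi)
    return phi

def reptile_step(theta, grad_fns, meta_lr=0.05):
    """Reptile meta-update: nudge theta toward each task's adapted weights.
    Only the difference (phi - theta) is used -- no second derivatives,
    which is what makes the method first-order."""
    for grad_fn in grad_fns:
        phi = inner_sgd(theta, grad_fn)
        theta = theta + meta_lr * (phi - theta)
    return theta

# Toy task family: minimize ||theta - c||^2 for a task-specific center c.
def make_grad(c):
    return lambda th: 2.0 * (th - c)  # gradient of the quadratic task loss

centers = rng.normal(loc=[3.0, -2.0], size=(20, 2))  # 20 task optima
theta = np.zeros(2)
for _ in range(300):
    batch = rng.choice(len(centers), size=3, replace=False)
    theta = reptile_step(theta, [make_grad(centers[i]) for i in batch])
```

On quadratic tasks like these, theta tends to settle near the mean of the task optima, an initialization from which a few inner SGD steps reach any individual task's optimum; the same update rule applies unchanged to neural-network weights.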

Domain Shift Generalization

Models overfit to meta-training distributions, failing on shifted targets (Li et al., 2018, 1169 citations). Meta-learning for domain generalization extracts invariant features across tasks. Surveys note persistent gaps in cross-domain few-shot performance (Hospedales et al., 2021).

Task Distribution Design

Episodic training requires diverse task samplings for robust meta-learners (Vanschoren, 2019, 293 citations). Poor distributions lead to meta-overfitting. Huisman et al. (2021, 330 citations) survey optimization and metric-based pitfalls in task construction.

Essential Papers

1.

Learning to Generalize: Meta-Learning for Domain Generalization

Da Li, Yongxin Yang, Yi-Zhe Song et al. · 2018 · Proceedings of the AAAI Conference on Artificial Intelligence · 1.2K citations

Domain shift refers to the well-known problem that a model trained in one source domain performs poorly when applied to a target domain with different statistics. Domain Generalization (DG) techniqu...

2.

Meta-SGD: Learning to Learn Quickly for Few-Shot Learning

Zhenguo Li, Fengwei Zhou, Fei Chen et al. · 2017 · arXiv (Cornell University) · 839 citations

Few-shot learning is challenging for learning algorithms that learn each task in isolation and from scratch. In contrast, meta-learning learns from many related tasks a meta-learner that can learn ...

3.

Reptile: a Scalable Metalearning Algorithm

Alex Nichol, John Schulman · 2018 · arXiv (Cornell University) · 646 citations

This paper considers metalearning problems, where there is a distribution of tasks, and we would like to obtain an agent that performs well (i.e., learns quickly) when presented with a previously u...

4.

On First-Order Meta-Learning Algorithms

Alex Nichol, Joshua Achiam, John Schulman · 2018 · arXiv (Cornell University) · 544 citations

This paper considers meta-learning problems, where there is a distribution of tasks, and we would like to obtain an agent that performs well (i.e., learns quickly) when presented with a previously ...

5.

Improving Federated Learning Personalization via Model Agnostic Meta Learning

Yihan Jiang, Jakub Konečný, Keith Rush et al. · 2019 · arXiv (Cornell University) · 368 citations

Federated Learning (FL) refers to learning a high quality global model based on decentralized data storage, without ever copying the raw data. A natural scenario arises with data created on mobile ...

6.

Deep Domain-Adversarial Image Generation for Domain Generalisation

Kaiyang Zhou, Yongxin Yang, Timothy M. Hospedales et al. · 2020 · Proceedings of the AAAI Conference on Artificial Intelligence · 361 citations

Machine learning models typically suffer from the domain shift problem when trained on a source dataset and evaluated on a target dataset of different distribution. To overcome this problem, domain...

Reading Guide

Foundational Papers

The curated list contains no pre-2015 papers; start with Meta-SGD (Li et al., 2017, 839 citations) for the core few-shot formulation, then Reptile (Nichol et al., 2018, 646 citations) for practical scaling.

Recent Advances

Hospedales et al. (2021, 257 citations) survey for neural methods overview; Huisman et al. (2021, 330 citations) for deep meta-learning taxonomy; Jiang et al. (2019) for federated extensions.

Core Methods

Episodic training samples a support set and a query set per task; optimization-based methods meta-learn the update rule itself (Meta-SGD); first-order methods approximate meta-gradients via repeated SGD directions (Reptile); metric-based methods classify by distance to class prototypes.
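
The metric-based step at the end of that pipeline is compact enough to sketch directly. A ProtoNet-style nearest-prototype classifier in NumPy, using the identity as a stand-in for a trained embedding network (the data here is synthetic and illustrative):

```python
import numpy as np

def prototypes(support_x, support_y, n_classes):
    """Class prototype = mean embedding of that class's support examples."""
    return np.stack([support_x[support_y == c].mean(axis=0)
                     for c in range(n_classes)])

def classify(query_x, protos):
    """Assign each query to the nearest prototype (squared Euclidean)."""
    d = ((query_x[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)

# A 3-way, 2-shot episode in 2-D embedding space.
support_x = np.array([[0.0, 0.0], [0.2, 0.0],   # class 0
                      [5.0, 5.0], [5.2, 5.0],   # class 1
                      [0.0, 5.0], [0.0, 5.2]])  # class 2
support_y = np.array([0, 0, 1, 1, 2, 2])
protos = prototypes(support_x, support_y, n_classes=3)
print(classify(np.array([[0.1, 0.1], [4.9, 5.1], [0.1, 4.9]]), protos))  # [0 1 2]
```

In a full ProtoNet, the raw inputs would first pass through a learned encoder, and training backpropagates a softmax over negative distances through that encoder.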

How PapersFlow Helps You Research Few-Shot Meta-Learning

Discover & Search

Research Agent uses searchPapers('few-shot meta-learning MAML Reptile') to retrieve 250+ papers, then citationGraph on 'Meta-SGD: Learning to Learn Quickly for Few-Shot Learning' (Li et al., 2017) reveals clusters around Nichol et al. (2018) works, while findSimilarPapers expands to domain generalization links like Li et al. (2018). exaSearch uncovers episodic training variants.

Analyze & Verify

Analysis Agent applies readPaperContent to 'Reptile: a Scalable Metalearning Algorithm' (Nichol et al., 2018), verifyResponse with CoVe cross-checks first-order claims against 'On First-Order Meta-Learning Algorithms' (Nichol et al., 2018), and runPythonAnalysis recreates Meta-SGD inner-loop gradients using NumPy for statistical verification. GRADE scores evidence strength on convergence rates.
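
For intuition on what such a NumPy recreation involves: Meta-SGD's inner-loop update is a single elementwise expression in which the learning rate is a learned per-parameter vector rather than MAML's shared scalar. A minimal sketch with made-up numbers:

```python
import numpy as np

def meta_sgd_inner(theta, alpha, grad):
    """Meta-SGD inner-loop step: theta' = theta - alpha * grad.
    alpha is a learned per-parameter learning-rate vector; in vanilla
    MAML it would be a single shared scalar."""
    return theta - alpha * grad

theta = np.array([1.0, -2.0, 0.5])   # current meta-parameters (made up)
alpha = np.array([0.1, 0.01, 0.5])   # learned elementwise rates (made up)
grad = np.array([0.4, 0.4, 0.4])     # gradient of one task's support loss
adapted = meta_sgd_inner(theta, alpha, grad)
# adapted == [0.96, -2.004, 0.3]: each coordinate moves at its own rate
```

In the full algorithm, both theta and alpha receive outer-loop gradients backpropagated through this step, which is where the second-order terms discussed above arise.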

Synthesize & Write

Synthesis Agent detects gaps in second-order scalability via contradiction flagging across Li et al. (2017) and Nichol et al. (2018), while Writing Agent uses latexEditText for meta-learning sections, latexSyncCitations for 10+ references, and latexCompile to produce arXiv-ready surveys. exportMermaid visualizes episodic training pipelines.

Use Cases

"Reimplement Reptile algorithm and compare convergence to MAML on toy tasks"

Research Agent → searchPapers('Reptile Nichol') → Analysis Agent → readPaperContent + runPythonAnalysis (NumPy gradient descent sandbox plots loss curves) → researcher gets matplotlib convergence graphs and code snippet.

"Write LaTeX survey on metric vs optimization meta-learning with citations"

Synthesis Agent → gap detection on ProtoNet/MAML → Writing Agent → latexEditText + latexSyncCitations (Hospedales 2021, Li 2017) + latexCompile → researcher gets compiled PDF with equation-rendered inner-loop math.

"Find GitHub repos implementing Meta-SGD for few-shot classification"

Research Agent → citationGraph('Meta-SGD Li 2017') → Code Discovery → paperExtractUrls → paperFindGithubRepo → githubRepoInspect → researcher gets top 5 repos with code quality scores and adaptation benchmarks.

Automated Workflows

Deep Research workflow scans 50+ meta-learning papers via searchPapers → citationGraph, producing structured reports ranking Li et al. (2017) impact. DeepScan applies 7-step CoVe to verify Reptile scalability claims against Nichol et al. (2018) baselines. Theorizer generates hypotheses on first-order improvements from survey data (Huisman et al., 2021).

Frequently Asked Questions

What defines few-shot meta-learning?

Few-shot meta-learning uses episodic training over a distribution of tasks to enable rapid adaptation from 1-5 examples per class, through either optimization-based (MAML-style) or metric-based (ProtoNet-style) methods.
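
The episodic setup in that definition is straightforward to sketch. A hypothetical N-way K-shot episode sampler in NumPy (dataset and all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_episode(X, y, n_way=3, k_shot=2, n_query=2):
    """Sample one N-way K-shot episode: a small support set for adaptation
    and a disjoint query set for evaluating the adapted model."""
    classes = rng.choice(np.unique(y), size=n_way, replace=False)
    sup_idx, qry_idx = [], []
    for c in classes:
        idx = rng.permutation(np.flatnonzero(y == c))
        sup_idx.extend(idx[:k_shot])                    # K support examples
        qry_idx.extend(idx[k_shot:k_shot + n_query])    # disjoint queries
    return (X[sup_idx], y[sup_idx]), (X[qry_idx], y[qry_idx])

X = rng.normal(size=(100, 8))          # toy dataset: 100 examples, 8 features
y = np.repeat(np.arange(10), 10)       # 10 classes, 10 examples each
(sx, sy), (qx, qy) = sample_episode(X, y)
print(sx.shape, qx.shape)  # (6, 8) (6, 8)
```

The meta-learner adapts on the support set and is scored on the query set, so the outer loop optimizes post-adaptation performance rather than raw fit.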

What are core methods in few-shot meta-learning?

Optimization-based: MAML computes task-specific gradient updates, and Meta-SGD (Li et al., 2017) additionally learns the inner-loop learning rates. First-order: Reptile (Nichol et al., 2018) uses repeated SGD directions. Metric-based methods compare embeddings, classifying queries by distance to class prototypes.

What are key papers?

Meta-SGD (Li et al., 2017, 839 citations) for quick learning; Reptile (Nichol et al., 2018, 646 citations) for scalability; surveys by Hospedales et al. (2021, 257 citations) and Huisman et al. (2021, 330 citations).

What open problems exist?

Cross-domain generalization (Li et al., 2018); scalable second-order methods beyond Nichol et al. (2018); diverse task distributions to avoid meta-overfitting (Vanschoren, 2019).

Research Domain Adaptation and Few-Shot Learning with AI

PapersFlow provides specialized AI tools for Computer Science researchers. Here are the most relevant for this topic:

See how researchers in Computer Science & AI use PapersFlow

Field-specific workflows, example queries, and use cases.

Computer Science & AI Guide

Start Researching Few-Shot Meta-Learning with AI

Search 474M+ papers, run AI-powered literature reviews, and write with integrated citations — all in one workspace.

See how PapersFlow works for Computer Science researchers