PapersFlow Research Brief

Artificial Intelligence in Healthcare and Education
Research Guide

What is Artificial Intelligence in Healthcare and Education?

Artificial Intelligence in Healthcare and Education is the application of machine-learning and related AI methods to improve clinical care and health systems while also supporting teaching, learning, assessment, and training for health professionals and students.

The research literature on Artificial Intelligence in Healthcare and Education includes 103,440 works in the provided dataset; no 5-year growth rate is reported. "Artificial intelligence in healthcare: past, present and future" (2017) describes how the increasing availability of healthcare data and advances in analytics techniques have enabled AI applications across healthcare. "Systematic review of research on artificial intelligence applications in higher education – where are the educators?" (2019) synthesizes AI-in-higher-education research and explicitly interrogates the role of educators in that body of work.

Papers: 103.4K
5yr Growth: N/A
Total Citations: 721.4K

Why It Matters

In healthcare, AI is used for clinically consequential tasks such as diagnosis, prediction, and decision support, but these uses raise accountability and safety questions that make explainability and interpretability central concerns. Davenport and Kalakota (2019) classify healthcare AI applications into categories that “involve diagnosis and treatment recommendations,” situating AI as a tool used by payers, providers, and life-sciences companies rather than only as a research prototype. In "High-performance medicine: the convergence of human and artificial intelligence" (2018), Topol argues that the practical goal is a convergence of clinician expertise and AI capabilities, implying that clinical workflows and training must adapt together.

In education, large language models are framed as both opportunity and risk: in "ChatGPT for good? On opportunities and challenges of large language models for education" (2023), Kasneci et al. analyze how systems like ChatGPT can support learning while introducing challenges that educators and institutions must manage. A concrete, high-stakes example of why this matters is the requirement for responsible decision-making when AI influences patient care or student progression: in "Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead" (2019), Rudin argues that practitioners facing high-stakes decisions should prefer interpretable models over post-hoc explanations of black boxes, a position that directly affects how clinical decision support tools are selected and how assessment-related educational AI is governed.

Reading Guide

Where to Start

Start with "Artificial intelligence in healthcare: past, present and future" (2017) because it is explicitly written as a survey of the current status of AI applications in healthcare and discusses future directions, providing a structured entry point before narrower debates about explainability or model choice.

Key Papers Explained

A coherent reading path links clinical motivation, methods, and governance. Jiang et al., in "Artificial intelligence in healthcare: past, present and future" (2017), establish application areas and context; Esteva et al., in "A guide to deep learning in healthcare" (2018), then focus on deep learning as a major methodological toolkit for those applications. Topol, in "High-performance medicine: the convergence of human and artificial intelligence" (2018), reframes these methods as components of human–AI clinical practice rather than replacements for clinicians. The explainability and accountability layer is developed by Adadi and Berrada in "Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)" (2018) and systematized by Barredo Arrieta et al. in "Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI" (2019), while Rudin, in "Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead" (2019), takes a strong normative position on model choice in high-stakes contexts. On the education side, Zawacki‐Richter et al., in "Systematic review of research on artificial intelligence applications in higher education – where are the educators?" (2019), set an institutional and stakeholder lens, and Kasneci et al., in "ChatGPT for good? On opportunities and challenges of large language models for education" (2023), update the discussion for large language models.

Paper Timeline

Papers ordered chronologically; the most-cited paper is marked.

1. Artificial intelligence in healthcare: past, present and future (2017, 4.3K citations)
2. High-performance medicine: the convergence of human and artificial intelligence (2018, 7.0K citations)
3. Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI) (2018, 5.2K citations)
4. A guide to deep learning in healthcare (2018, 4.0K citations)
5. Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI (2019, 7.9K citations, most cited)
6. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead (2019, 7.7K citations)
7. Systematic review of research on artificial intelligence applications in higher education – where are the educators? (2019, 4.1K citations)

Advanced Directions

Advanced work, as framed by the provided highly cited papers, centers on (1) translating XAI taxonomies into auditable requirements for responsible AI (Barredo Arrieta et al., 2019) while addressing critiques of post-hoc explanation for high-stakes decisions (Rudin, 2019), (2) integrating deep learning pipelines into real clinical workflows consistent with human–AI convergence (Topol, 2018; Esteva et al., 2018), and (3) building education deployments of large language models that explicitly account for both opportunities and challenges (Kasneci et al., 2023) and incorporate educators as core stakeholders rather than peripheral users (Zawacki‐Richter et al., 2019).

Papers at a Glance

1. Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI (2019, Information Fusion, 7.9K citations)
2. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead (2019, Nature Machine Intelligence, 7.7K citations)
3. High-performance medicine: the convergence of human and artificial intelligence (2018, Nature Medicine, 7.0K citations)
4. Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI) (2018, IEEE Access, 5.2K citations)
5. Artificial intelligence in healthcare: past, present and future (2017, Stroke and Vascular Neurology, 4.3K citations)
6. Systematic review of research on artificial intelligence applications in higher education – where are the educators? (2019, International Journal of Educational Technology in Higher Education, 4.1K citations)
7. A guide to deep learning in healthcare (2018, Nature Medicine, 4.0K citations)
8. ChatGPT for good? On opportunities and challenges of large language models for education (2023, Learning and Individual Differences, 3.9K citations)
9. Machine Learning in Medicine (2019, New England Journal of Medicine, 3.4K citations)
10. The potential for artificial intelligence in healthcare (2019, Future Healthcare Journal, 3.4K citations)

Recent Preprints

Artificial intelligence (AI) for social innovation in health education: promoting health literacy through personalized ai-driven learning tools – a systematic review

Dec 2025 link.springer.com Preprint

Artificial Intelligence (AI) is transforming health education by enabling personalized, adaptive, and scalable approaches that may enhance aspects of health literacy. Despite rapid adoption, compre...

Integrating artificial intelligence into medical education: a narrative systematic review of current applications, challenges, and future directions

Aug 2025 link.springer.com Preprint

Artificial Intelligence (AI) is reshaping both healthcare delivery and the structure of medical education. This narrative review synthesizes insights from 14 studies exploring how AI is being integ...

Artificial intelligence in undergraduate medical education: an updated scoping review

Nov 2025 bmcmededuc.biomedcentral.com Preprint

With the rise of artificial intelligence (AI), its use in healthcare, covering diagnostics, decision support, administration, and population health, has steadily expanded [1]. Moreover, the imp...

Effectiveness of generative artificial intelligence-based teaching versus traditional teaching methods in medical education: a meta-analysis of randomized controlled trials

Aug 2025 bmcmededuc.biomedcentral.com Preprint

Artificial intelligence (AI) has demonstrated remarkable capabilities across diverse medical applications, potentially revolutionizing healthcare delivery systems. This systematic review and meta-a...

A generative AI teaching assistant for personalized learning in medical education

Nov 2025 nature.com Preprint

Medical education faces a scalability crisis, where rising class sizes strain individualized instruction, while students increasingly adopt unvalidated Generative AI (GenAI) tools for individualize...

Latest Developments

Recent developments in AI for healthcare and education as of February 2026 include the increasing adoption of agentic AI capable of making decisions, refining reasoning, and proactively identifying issues in healthcare (bigri.io, 01/05/2026), and the emergence of AI agents that enhance patient care, health systems, and biomedical science (bcg.com, 01/05/2026). In medical education, AI is being integrated to support learning, improve training, and address challenges, with ongoing research exploring its applications, challenges, and future prospects (med.miami.edu, 12/16/2025; mededu.jmir.org, 10/23/2025).

Frequently Asked Questions

What is the difference between explainable AI and interpretable models in high-stakes healthcare and education decisions?

In "Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead" (2019), Rudin argues that high-stakes settings should prioritize inherently interpretable models rather than relying on post-hoc explanations of black-box systems. In "Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI" (2019), Barredo Arrieta et al. frame XAI as a set of concepts and taxonomies aimed at responsible AI, which can include but is not limited to interpretable modeling.
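The distinction can be made concrete in code. The sketch below, a minimal illustration on synthetic data (the dataset, features, and model choices are assumptions for demonstration, not drawn from the cited papers), contrasts an inherently interpretable model, whose decision rules can be read directly, with a black-box model explained post hoc by a surrogate that only approximates its behavior:

```python
# Minimal sketch: interpretable-by-design model vs. post-hoc surrogate
# explanation of a black box. Synthetic data; purely illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=5, random_state=0)

# Interpretable by design: a shallow tree whose rules can be inspected directly.
interpretable = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
rules = export_text(interpretable)  # the model *is* its own explanation

# Black box plus post-hoc explanation: fit a shallow surrogate tree to mimic
# the random forest's predictions. The surrogate approximates, but does not
# guarantee, the black box's reasoning; that fidelity gap is the crux of
# Rudin's argument against post-hoc explanation in high-stakes settings.
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(
    X, black_box.predict(X)
)
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate fidelity to black box: {fidelity:.2f}")
```

The printed fidelity is typically below 1.0: wherever the surrogate disagrees with the black box, the "explanation" describes a model that is not the one making the decision.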

How is deep learning typically positioned for clinical use in healthcare AI research?

Esteva et al., in "A guide to deep learning in healthcare" (2018), present deep learning as a practical approach for healthcare problems, emphasizing how model development connects to healthcare data and clinical objectives. Jiang et al., in "Artificial intelligence in healthcare: past, present and future" (2017), situate such methods within a broader history of AI in healthcare and discuss their future directions.

Why do responsible AI and transparency recur as core themes across healthcare and education AI?

Barredo Arrieta et al., in "Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI" (2019), explicitly link explainability to responsible AI through concepts, taxonomies, and challenges. Adadi and Berrada, in "Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)" (2018), survey explainable AI as a response to the opacity of black-box systems, which is especially acute when decisions affect patient outcomes or student evaluation.

Which papers provide broad, field-level overviews of AI in healthcare suitable for newcomers?

Jiang et al., in "Artificial intelligence in healthcare: past, present and future" (2017), survey the status of AI applications in healthcare and discuss future directions. Topol, in "High-performance medicine: the convergence of human and artificial intelligence" (2018), provides a clinician-facing framing of how AI and human expertise can converge in medical practice.

How does the higher-education literature characterize the role of educators in AI adoption?

Zawacki‐Richter et al., in "Systematic review of research on artificial intelligence applications in higher education – where are the educators?" (2019), synthesize research on AI applications in higher education while explicitly questioning how educators are represented within that research. Kasneci et al., in "ChatGPT for good? On opportunities and challenges of large language models for education" (2023), further frame the educator-relevant opportunities and challenges introduced by large language models in educational settings.

Which healthcare stakeholders are described as current users of AI applications, and for what types of tasks?

Davenport and Kalakota, in "The potential for artificial intelligence in healthcare" (2019), state that several types of AI are already employed by payers, providers of care, and life-sciences companies. The same paper groups key application categories around diagnosis and treatment recommendations, aligning stakeholder adoption with clinically relevant tasks.

Open Research Questions

  • Which XAI approaches described in "Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI" (2019) can be operationalized as acceptance criteria for clinical decision support tools without relying on post-hoc explanations of black-box models criticized in "Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead" (2019)?
  • How can healthcare deep-learning development practices discussed in "A guide to deep learning in healthcare" (2018) be aligned with the human–AI workflow convergence goals articulated in "High-performance medicine: the convergence of human and artificial intelligence" (2018) to reduce deployment failure in real clinical settings?
  • Which evaluation designs can jointly measure educational benefit and risk for large language models, consistent with the opportunities-and-challenges framing in "ChatGPT for good? On opportunities and challenges of large language models for education" (2023) and the educator-participation concerns raised in "Systematic review of research on artificial intelligence applications in higher education – where are the educators?" (2019)?
  • What governance and accountability mechanisms are needed when AI systems influence both patient care and educational assessment, given the black-box impediments surveyed in "Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)" (2018) and the high-stakes interpretability argument in "Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead" (2019)?

Research Artificial Intelligence in Healthcare and Education with AI

PapersFlow provides specialized AI tools for researchers in your field. Here are the most relevant for this topic:

Start Researching Artificial Intelligence in Healthcare and Education with AI

Search 474M+ papers, run AI-powered literature reviews, and write with integrated citations — all in one workspace.