Subtopic Deep Dive

Scientific Retractions
Research Guide

What Are Scientific Retractions?

A scientific retraction is the formal withdrawal of a published paper due to misconduct, error, plagiarism, or ethical violations.

Retractions have increased significantly: Fang et al. (2012) analyzed 2,047 PubMed-indexed biomedical retractions and found that 67.4% stemmed from misconduct versus 21.3% from error. Steen et al. (2013) attribute the rising rate to lower publication barriers and to changes in author and institutional behavior. Fanelli's (2009) meta-analysis of surveys estimates that 1.97% of scientists admit to having falsified data, while 33.7% report awareness of colleagues' misconduct.

15 Curated Papers · 3 Key Challenges

Why It Matters

Retractions erode trust in the scientific literature, and retracted papers often continue to be cited; Fang and Casadevall's (2011) Retraction Index quantifies journal-level retraction frequency relative to publication volume. Grieneisen and Zhang's (2012) survey reveals that retractions span disciplines and are not dominated by misconduct accusations. Understanding retraction patterns aids post-publication review, and Fanelli et al.'s (2017) meta-assessment links research biases to reproducibility problems across fields.

Key Research Challenges

Quantifying Misconduct Prevalence

Surveys underestimate true misconduct rates due to self-reporting bias: Fanelli (2009) reports that only 1.97% of scientists admit falsification even though 33.7% report awareness of colleagues' misconduct. Fang et al.'s (2012) classification of 67.4% of retractions as misconduct requires a nuanced distinction between error and misconduct. Standardized metrics such as those proposed by Steen et al. (2013) remain debated amid rising retraction volumes.

Tracking Retraction Impacts

Citations to retracted papers persist after retraction, a problem highlighted by Fang and Casadevall's (2011) Retraction Index work. Grieneisen and Zhang (2012) note that only a minority of cases involve flawed data, yet retractions span a broad range of disciplines. Measuring network effects is hampered by incomplete citation databases.

Explaining Retraction Increases

Steen et al. (2013) link the rise to shifts in author and institutional behavior and to publication pressures. Fanelli et al. (2017) probe biases in meta-analyses across fields. There is still no consensus on how to distinguish systemic from behavioral drivers.

Essential Papers

1.

How Many Scientists Fabricate and Falsify Research? A Systematic Review and Meta-Analysis of Survey Data

Daniele Fanelli · 2009 · PLoS ONE · 1.9K citations

The frequency with which scientists fabricate and falsify data, or commit other forms of scientific misconduct is a matter of controversy. Many surveys have asked scientists directly whether they h...

2.

Misconduct accounts for the majority of retracted scientific publications

Ferric C. Fang, R. Grant Steen, Arturo Casadevall · 2012 · Proceedings of the National Academy of Sciences · 1.2K citations

A detailed review of all 2,047 biomedical and life-science research articles indexed by PubMed as retracted on May 3, 2012 revealed that only 21.3% of retractions were attributable to error. In con...

3.

Academic Integrity considerations of AI Large Language Models in the post-pandemic era: ChatGPT and beyond

Mike Perkins · 2023 · Journal of University Teaching and Learning Practice · 586 citations

This paper explores the academic integrity considerations of students’ use of Artificial Intelligence (AI) tools using Large Language Models (LLMs) such as ChatGPT in formal assessments. We examine...

4.

The Hong Kong Principles for assessing researchers: Fostering research integrity

David Moher, L.M. Bouter, Sabine Kleinert et al. · 2020 · PLoS Biology · 495 citations

For knowledge to benefit research and society, it must be trustworthy. Trustworthy research is robust, rigorous, and transparent at all stages of design, execution, and reporting. Assessment of res...

5.

Why Has the Number of Scientific Retractions Increased?

R. Grant Steen, Arturo Casadevall, Ferric C. Fang · 2013 · PLoS ONE · 408 citations

The increase in retracted articles appears to reflect changes in the behavior of both authors and institutions. Lower barriers to publication of flawed articles are seen in the increase in number a...

6.

Retracted Science and the Retraction Index

Ferric C. Fang, Arturo Casadevall · 2011 · Infection and Immunity · 400 citations

ABSTRACT Articles may be retracted when their findings are no longer considered trustworthy due to scientific misconduct or error, they plagiarize previously published work, or they are found to vi...

7.

A Systematic Review of Research on the Meaning, Ethics and Practices of Authorship across Scholarly Disciplines

Ana Marušić, Lana Bošnjak, Ana Jerončić · 2011 · PLoS ONE · 397 citations

High prevalence of authorship problems may have severe impact on the integrity of the research process, just as more serious forms of research misconduct. There is a need for more methodologically ...

Reading Guide

Foundational Papers

Start with Fanelli (2009) for misconduct prevalence surveys (1,891 citations), then Fang et al. (2012) for retraction causes (1,186 citations), followed by Fang and Casadevall's (2011) Retraction Index.

Recent Advances

Perkins (2023) on AI tools and academic integrity; Moher et al. (2020) on the Hong Kong Principles; Fanelli et al. (2017) on bias meta-assessment.

Core Methods

Systematic reviews and meta-analyses (Fanelli 2009), PubMed retraction database classification (Fang et al. 2012), Retraction Index computation (Fang and Casadevall 2011), and survey synthesis (Marušić et al. 2011).
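The meta-analytic pooling used in prevalence surveys such as Fanelli (2009) can be sketched minimally. This is a simplified fixed-effect (inverse-variance) pooled proportion, not the random-effects model used in the actual paper, and the survey counts below are invented for illustration:

```python
# Hypothetical survey results: (n_respondents, n_admitting_falsification)
surveys = [(1000, 18), (450, 11), (2300, 47)]

def pooled_proportion(surveys):
    """Fixed-effect (inverse-variance weighted) pooled proportion sketch."""
    weights, estimates = [], []
    for n, k in surveys:
        p = k / n
        var = p * (1 - p) / n  # binomial variance of a sample proportion
        weights.append(1 / var)
        estimates.append(p)
    return sum(w * p for w, p in zip(weights, estimates)) / sum(weights)

print(f"Pooled admission rate: {pooled_proportion(surveys):.2%}")
```

The pooled estimate always lies between the smallest and largest individual survey proportions; a real meta-analysis would also report heterogeneity and confidence intervals.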

How PapersFlow Helps You Research Scientific Retractions

Discover & Search

The Research Agent uses searchPapers and citationGraph to map retraction trends from Fang et al. (2012), surfacing the clusters behind the 67.4% misconduct rate. exaSearch uncovers interdisciplinary patterns beyond biomedicine, while findSimilarPapers links Fanelli's (2009) surveys to recent AI-integrity papers such as Perkins (2023).

Analyze & Verify

The Analysis Agent applies readPaperContent to parse retraction reasons in Fang et al. (2012), with verifyResponse (CoVe) cross-checking misconduct rates against Steen et al. (2013). runPythonAnalysis computes Retraction Index trends from Fang and Casadevall's (2011) citation data using pandas, with GRADE grading of evidence strength for prevalence claims.
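A Retraction Index computation in the spirit of Fang and Casadevall (2011) is straightforward with pandas. This is a minimal sketch (roughly, retractions per 1,000 published articles over a fixed window); the journal names and counts below are hypothetical:

```python
import pandas as pd

# Hypothetical journal-level counts over a fixed publication window
df = pd.DataFrame({
    "journal": ["Journal A", "Journal B", "Journal C"],
    "articles_published": [12000, 4500, 30000],
    "retractions": [6, 1, 9],
})

# Retraction Index: retractions per 1,000 published articles
df["retraction_index"] = df["retractions"] / df["articles_published"] * 1000

print(df.sort_values("retraction_index", ascending=False))
```

Fang and Casadevall computed this per journal and observed a correlation with journal impact factor, so a real analysis would merge in impact-factor data as well.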

Synthesize & Write

The Synthesis Agent detects gaps in misconduct tracking after Perkins (2023), flagging contradictions between Fanelli's (2009) surveys and retraction databases. The Writing Agent uses latexEditText and latexSyncCitations for the Fang, Steen, and Casadevall papers, and latexCompile for reports; exportMermaid visualizes retraction citation networks.

Use Cases

"Analyze retraction rate trends by discipline using Python."

Research Agent → searchPapers (retr* AND Fang) → Analysis Agent → runPythonAnalysis (pandas plot of Grieneisen 2012 rates) → matplotlib trend graph output.

"Draft LaTeX review on misconduct retractions."

Synthesis Agent → gap detection (Fanelli 2009 vs Fang 2012) → Writing Agent → latexSyncCitations (Steen 2013) → latexCompile → PDF with integrated Retraction Index table.

"Find code for retraction database analysis."

Research Agent → paperExtractUrls (Fanelli papers) → Code Discovery → paperFindGithubRepo → githubRepoInspect → Python scripts for citation network simulation.
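The first use case above ends in a matplotlib trend graph. A minimal sketch of that final plotting step, with invented per-discipline retraction rates standing in for the real Grieneisen and Zhang (2012) figures:

```python
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # headless backend, suitable for script pipelines
import matplotlib.pyplot as plt

# Hypothetical retraction rates (per 10,000 papers) by discipline and year
df = pd.DataFrame({
    "year": [2005, 2010, 2015, 2020] * 2,
    "discipline": ["Biomedicine"] * 4 + ["Social Sciences"] * 4,
    "rate": [1.1, 2.0, 3.4, 4.8, 0.4, 0.9, 1.6, 2.3],
})

# One trend line per discipline
fig, ax = plt.subplots()
for name, group in df.groupby("discipline"):
    ax.plot(group["year"], group["rate"], marker="o", label=name)
ax.set_xlabel("Year")
ax.set_ylabel("Retractions per 10,000 papers")
ax.set_title("Retraction rate trends by discipline (illustrative data)")
ax.legend()
fig.savefig("retraction_trends.png")
```

Swapping the hypothetical DataFrame for rates extracted from the actual papers yields the trend graph the workflow describes.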

Automated Workflows

Deep Research workflow conducts systematic reviews of 50+ retraction papers, chaining searchPapers → citationGraph → GRADE grading for Fanelli (2009) meta-analysis validation. DeepScan applies 7-step CoVe analysis to verify Fang et al. (2012) 67.4% misconduct claim against Steen et al. (2013) trends. Theorizer generates hypotheses on AI-era retractions linking Perkins (2023) to historical patterns.

Frequently Asked Questions

What causes most scientific retractions?

Fang et al. (2012) found that 67.4% of 2,047 retracted biomedical papers were attributable to misconduct, versus 21.3% to error.

How common is research fabrication?

Fanelli's (2009) meta-analysis shows that 1.97% of scientists admit to falsifying data, while 33.7% report knowing of colleague misconduct.

What are key papers on retractions?

Foundational: Fanelli (2009, 1,891 citations) and Fang et al. (2012, 1,186 citations); recent: Perkins (2023, 586 citations) on AI and academic integrity.

What open problems exist?

Quantifying undetected misconduct, measuring post-retraction citation persistence (Fang and Casadevall 2011), and predicting further increases amid publication pressures (Steen et al. 2013).

Research Academic Integrity and Plagiarism with AI

PapersFlow provides specialized AI tools for Social Sciences researchers. Here are the most relevant for this topic:

See how researchers in Social Sciences use PapersFlow

Field-specific workflows, example queries, and use cases.

Social Sciences Guide

Start Researching Scientific Retractions with AI

Search 474M+ papers, run AI-powered literature reviews, and write with integrated citations — all in one workspace.

See how PapersFlow works for Social Sciences researchers