Subtopic Deep Dive
Questionable Research Practices
Research Guide
What Are Questionable Research Practices?
Questionable Research Practices (QRPs) are common deviations from ideal research standards, including selective reporting and p-hacking, that undermine scientific reproducibility while stopping short of outright fabrication or falsification.
QRPs occur across disciplines: pooled surveys find that 1.97% of scientists admit to having fabricated or falsified data, up to 33.7% admit other questionable practices, and 14.12% report falsification by colleagues (Fanelli, 2009, 1891 citations). Retractions have risen alongside publication pressures and changing author and institutional behavior (Steen et al., 2013, 408 citations), and meta-assessments detect bias patterns across thousands of meta-analyses spanning many fields (Fanelli et al., 2017, 373 citations).
Why It Matters
QRPs fuel the replication crisis: most retracted articles involve error or selective reporting rather than outright misconduct (Grieneisen & Zhang, 2012, 348 citations). They distort meta-analyses and the policy decisions built on them, and related integrity problems such as authorship disputes compound the damage (Marušić et al., 2011, 397 citations). Reforms such as the Hong Kong Principles propose assessing researchers on integrity practices rather than publication metrics alone (Moher et al., 2020, 495 citations), while Fanelli's surveys quantify prevalence and inform training programs and journal policies.
Key Research Challenges
Measuring QRP Prevalence
Surveys rely on self-reports, which understate prevalence because of social desirability bias: only 1.97% of scientists admit to having fabricated or falsified data (Fanelli, 2009). Observational methods such as retraction indices capture severe cases but miss subtler QRPs (Fang & Casadevall, 2011, 400 citations), and meta-assessments struggle with variability across fields (Fanelli et al., 2017).
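The retraction index gives a concrete sense of what such observational measures look like. The sketch below is a rough Python illustration in the spirit of Fang & Casadevall (2011), scaling a journal's retractions per 1,000 articles published over a fixed window; the journal names and counts are invented for illustration and are not data from the paper.

```python
# Rough sketch of a retraction-index-style metric: retractions over a time
# window, scaled per 1,000 published articles (in the spirit of Fang &
# Casadevall, 2011). Journal names and counts are hypothetical.
import pandas as pd

journals = pd.DataFrame({
    "journal":     ["Journal A", "Journal B", "Journal C"],
    "retractions": [12, 3, 1],               # retractions in the window (hypothetical)
    "articles":    [45_000, 18_000, 9_500],  # articles published in the window (hypothetical)
})

journals["retraction_index"] = 1_000 * journals["retractions"] / journals["articles"]
print(journals.sort_values("retraction_index", ascending=False))
```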
Quantifying Reproducibility Impact
Bias patterns inflate effect sizes in meta-analyses, complicating reproducibility assessments (Fanelli et al., 2017, 373 citations). Rising retraction counts reflect changing author and institutional behavior and lower barriers to publishing flawed work, not simply more misconduct (Steen et al., 2013), and distinguishing QRPs from honest error remains unresolved (Grieneisen & Zhang, 2012).
Incentive-Driven Reforms
Publication pressures encourage QRPs such as selective reporting, and entrenched incentives resist change despite frameworks like the Hong Kong Principles (Moher et al., 2020). Authorship ethics vary by discipline, hindering uniform standards (Marušić et al., 2011), and the known limitations of peer review leave many QRPs undetected (Tennant & Ross-Hellauer, 2020).
Essential Papers
How Many Scientists Fabricate and Falsify Research? A Systematic Review and Meta-Analysis of Survey Data
Daniele Fanelli · 2009 · PLoS ONE · 1.9K citations
The frequency with which scientists fabricate and falsify data, or commit other forms of scientific misconduct is a matter of controversy. Many surveys have asked scientists directly whether they h...
The Hong Kong Principles for assessing researchers: Fostering research integrity
David Moher, L.M. Bouter, Sabine Kleinert et al. · 2020 · PLoS Biology · 495 citations
For knowledge to benefit research and society, it must be trustworthy. Trustworthy research is robust, rigorous, and transparent at all stages of design, execution, and reporting. Assessment of res...
What is open peer review? A systematic review
Tony Ross‐Hellauer · 2017 · F1000Research · 408 citations
Background: “Open peer review” (OPR), despite being a major pillar of Open Science, has neither a standardized definition nor an agreed schema of its features and implem...
Why Has the Number of Scientific Retractions Increased?
R. Grant Steen, Arturo Casadevall, Ferric C. Fang · 2013 · PLoS ONE · 408 citations
The increase in retracted articles appears to reflect changes in the behavior of both authors and institutions. Lower barriers to publication of flawed articles are seen in the increase in number a...
Retracted Science and the Retraction Index
Ferric C. Fang, Arturo Casadevall · 2011 · Infection and Immunity · 400 citations
ABSTRACT Articles may be retracted when their findings are no longer considered trustworthy due to scientific misconduct or error, they plagiarize previously published work, or they are found to vi...
A Systematic Review of Research on the Meaning, Ethics and Practices of Authorship across Scholarly Disciplines
Ana Marušić, Lana Bošnjak, Ana Jerončić · 2011 · PLoS ONE · 397 citations
High prevalence of authorship problems may have severe impact on the integrity of the research process, just as more serious forms of research misconduct. There is a need for more methodologically ...
Meta-assessment of bias in science
Daniele Fanelli, Rodrigo Costas, John P. A. Ioannidis · 2017 · Proceedings of the National Academy of Sciences · 373 citations
Significance Science is said to be suffering a reproducibility crisis caused by many biases. How common are these problems, across the wide diversity of research fields? We probed for multiple bias...
Reading Guide
Foundational Papers
Start with Fanelli (2009, 1891 citations) for QRP prevalence baselines, then Fang & Casadevall (2011, 400 citations) for retraction metrics, and Marušić et al. (2011, 397 citations) for authorship issues to build core understanding.
Recent Advances
Study Fanelli et al. (2017, 373 citations) for bias meta-assessments and Moher et al. (2020, 495 citations) for reform principles to grasp current advances.
Core Methods
Core techniques include survey meta-analyses (Fanelli, 2009), retraction indexing (Fang & Casadevall, 2011), and bias pattern detection in meta-analyses (Fanelli et al., 2017).
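As an illustration of the first technique, the sketch below pools survey-reported admission rates with a logit-transformed DerSimonian-Laird random-effects model, one standard way of combining proportions. The survey names and counts are hypothetical placeholders, not the data or code from Fanelli (2009).

```python
# Minimal random-effects meta-analysis of survey-reported QRP admission rates.
# Survey names and counts are illustrative, not data from Fanelli (2009).
import numpy as np
import pandas as pd

surveys = pd.DataFrame({
    "survey":   ["A", "B", "C", "D"],
    "admitted": [4, 11, 7, 19],          # respondents admitting a QRP (hypothetical)
    "n":        [220, 540, 310, 880],    # respondents per survey (hypothetical)
})

p = surveys["admitted"] / surveys["n"]
logit = np.log(p / (1 - p))                                               # logit-transformed proportions
var = 1 / surveys["admitted"] + 1 / (surveys["n"] - surveys["admitted"])  # approximate variance of the logit

# Fixed-effect weights, then DerSimonian-Laird estimate of between-survey variance
w = 1 / var
fixed = np.sum(w * logit) / np.sum(w)
q = np.sum(w * (logit - fixed) ** 2)
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - (len(surveys) - 1)) / c)

# Random-effects pooled estimate with a 95% CI, back-transformed to a proportion
w_re = 1 / (var + tau2)
pooled = np.sum(w_re * logit) / np.sum(w_re)
se = np.sqrt(1 / np.sum(w_re))

def inv_logit(x):
    return 1 / (1 + np.exp(-x))

print(f"Pooled admission rate: {inv_logit(pooled):.2%} "
      f"(95% CI {inv_logit(pooled - 1.96 * se):.2%} to {inv_logit(pooled + 1.96 * se):.2%}), "
      f"tau^2 = {tau2:.3f}")
```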
How PapersFlow Helps You Research Questionable Research Practices
Discover & Search
Research Agent uses searchPapers and exaSearch to find Fanelli (2009) on QRP prevalence, then citationGraph reveals downstream impacts in Fanelli et al. (2017) meta-assessments and Steen et al. (2013) retraction analyses. findSimilarPapers expands to related surveys on authorship ethics (Marušić et al., 2011).
Analyze & Verify
Analysis Agent applies readPaperContent to extract survey data from Fanelli (2009), then runPythonAnalysis with pandas computes meta-analytic prevalence rates across the included surveys. verifyResponse (CoVe) with GRADE grading checks claims about retraction causes against Fang & Casadevall (2011), flagging contradictions arising from self-report bias.
Synthesize & Write
Synthesis Agent detects gaps in evaluations of QRP reforms following Moher et al. (2020) and flags tensions between rising retraction counts (Steen et al., 2013) and persistent bias patterns (Fanelli et al., 2017). Writing Agent uses latexEditText, latexSyncCitations for the Fanelli papers, and latexCompile to generate reports with exportMermaid diagrams of incentive flows.
Use Cases
"Analyze p-hacking rates in Fanelli surveys using Python"
Research Agent → searchPapers('Fanelli 2009') → Analysis Agent → readPaperContent → runPythonAnalysis(pandas meta-analysis of survey data) → statistical output with prevalence CIs and visualizations.
"Draft LaTeX review on retraction increases"
Research Agent → citationGraph(Steen 2013) → Synthesis Agent → gap detection → Writing Agent → latexEditText(structured sections) → latexSyncCitations(Fang 2011) → latexCompile → PDF with integrated figures.
"Find code for QRP simulation models from papers"
Research Agent → findSimilarPapers(Fanelli 2017) → Code Discovery → paperExtractUrls → paperFindGithubRepo → githubRepoInspect → downloadable R scripts for bias simulation.
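The third use case asks for bias-simulation code. As a self-contained illustration of what such a simulation can look like, the minimal Python sketch below shows how one common QRP, testing several outcomes and reporting only the smallest p-value, inflates the false-positive rate well beyond the nominal 5% even when no true effect exists. All parameters are hypothetical, and this is not the code referenced in any cited paper.

```python
# Simulate a simple selective-reporting strategy under a true null: test five
# outcomes and report only the best p-value. Parameters are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_sims, n_per_group, n_outcomes = 5_000, 30, 5

def one_study():
    # Both groups come from the same distribution, so any "effect" is a false positive.
    a = rng.normal(size=(n_per_group, n_outcomes))
    b = rng.normal(size=(n_per_group, n_outcomes))
    pvals = stats.ttest_ind(a, b, axis=0).pvalue
    return pvals[0], pvals.min()     # single pre-specified outcome vs. best of five

honest, hacked = map(np.array, zip(*(one_study() for _ in range(n_sims))))
print(f"False-positive rate, single pre-specified outcome: {np.mean(honest < 0.05):.3f}")
print(f"False-positive rate, reporting the best of {n_outcomes} outcomes: {np.mean(hacked < 0.05):.3f}")
```

Pre-registering a single primary outcome, or correcting for the number of tests performed, brings the inflated rate back toward the nominal 5%.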
Automated Workflows
Deep Research workflow conducts systematic reviews of 50+ QRP papers starting with searchPapers('Fanelli meta-analysis'), yielding structured reports on prevalence trends. DeepScan applies 7-step verification to retraction datasets from Grieneisen & Zhang (2012), with CoVe checkpoints. Theorizer generates incentive models from Moher et al. (2020) principles and Fanelli surveys.
Frequently Asked Questions
What defines Questionable Research Practices?
QRPs include selective reporting, p-hacking, and other deviations that stop short of outright fabrication or falsification; in pooled surveys, up to 33.7% of scientists admitted to at least one such practice and up to 72% reported observing them in colleagues (Fanelli, 2009).
What methods detect QRPs?
Surveys (Fanelli, 2009), retraction indices (Fang & Casadevall, 2011), and meta-assessments of bias patterns (Fanelli et al., 2017) identify prevalence and impacts.
What are key papers on QRPs?
Fanelli (2009, 1891 citations) meta-analyzes surveys; Steen et al. (2013) explains retraction rises; Moher et al. (2020) proposes integrity assessments.
What open problems exist in QRP research?
Underreporting in self-surveys, distinguishing QRPs from errors, and enforcing reforms against incentives remain unresolved (Fanelli et al., 2017; Marušić et al., 2011).
Research Academic integrity and plagiarism with AI
PapersFlow provides specialized AI tools for Social Sciences researchers. Here are the most relevant for this topic:
Systematic Review
AI-powered evidence synthesis with documented search strategies
AI Literature Review
Automate paper discovery and synthesis across 474M+ papers
Deep Research Reports
Multi-source evidence synthesis with counter-evidence
Find Disagreement
Discover conflicting findings and counter-evidence
See how researchers in Social Sciences use PapersFlow
Field-specific workflows, example queries, and use cases.
Start Researching Questionable Research Practices with AI
Search 474M+ papers, run AI-powered literature reviews, and write with integrated citations — all in one workspace.
See how PapersFlow works for Social Sciences researchers
Part of the Academic integrity and plagiarism Research Guide