
Research Data Sharing Barriers
Research Guide

What are Research Data Sharing Barriers?

Research Data Sharing Barriers are the institutional, legal, cultural, and technical obstacles that prevent scientists in academia from openly sharing their research data.

Surveys of thousands of researchers worldwide identify concerns over data ownership and publication priority, lack of time, and an absence of incentives as the primary barriers (Tenopir et al., 2011, 1407 citations). Follow-up studies show modest improvements but persistent cultural resistance (Tenopir et al., 2015, 444 citations). Roughly 50% of high-impact papers still fail to make their data publicly available despite journal policies (Alsheikh-Ali et al., 2011, 369 citations).

15 Curated Papers · 3 Key Challenges

Why It Matters

These barriers block the reproducibility checks and meta-analyses essential to scientific progress, as highlighted by the replication crisis that open-data mandates aim to address (Allen and Mehler, 2019). In the long tail of science, heterogeneous local data practices complicate the design of sharing infrastructure (Wallis et al., 2013). Overcoming these barriers enables implementation of the FAIR principles, expanding reuse across a scholarly record of 250M+ papers (Mons et al., 2017). Tennant et al. (2016) cite estimates of $1B+ in annual economic gains from the accelerated discoveries that accessible data enable.

Key Research Challenges

Cultural Resistance to Sharing

Researchers prioritize publications over data curation because the career benefits of sharing are unclear (Tenopir et al., 2011). Surveys spanning 88 countries report that 40-60% of respondents are reluctant to share, citing fears of competition (Tenopir et al., 2015). Misaligned incentives sustain this resistance despite NSF data-management mandates.

Data Ownership and Legal Fears

Scientists fear losing intellectual property or publication priority when they share raw data (Wallis et al., 2013). Journal data-availability policies exist, but compliance falls below 50% in high-impact venues (Alsheikh-Ali et al., 2011). IP concerns also deter deposits in institutional repositories.

Technical and Time Barriers

A lack of standardized formats and metadata hinders FAIR compliance (Mons et al., 2017); a minimal example of the metadata involved appears below. The time needed to prepare data for sharing exceeds what researchers in long-tail fields can spare (Wallis et al., 2013). Infrastructure gaps persist despite OECD guidelines (Pilat and Fukasaku, 2007).
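
To make the metadata burden concrete, here is a minimal sketch of the kind of machine-readable record FAIR compliance expects for each dataset. The field names loosely follow common repository conventions (e.g., DataCite) and are illustrative assumptions, not a schema prescribed by Mons et al. (2017):

    # Minimal, illustrative dataset record: each field supports one FAIR property.
    # Field names are assumptions loosely modeled on DataCite-style metadata.
    dataset_record = {
        "identifier": "doi:10.xxxx/example",  # Findable: persistent identifier
        "title": "Survey responses on data sharing practices",
        "creators": ["Example Lab"],
        "license": "CC-BY-4.0",               # Reusable: explicit usage rights
        "format": "text/csv",                 # Interoperable: open, standard format
        "access_url": "https://repository.example.org/datasets/123",  # Accessible
        "variables": {                        # Reusable: documented columns
            "barrier": "reported sharing barrier (categorical)",
            "pct": "share of respondents citing it (0-100)",
        },
    }

Even a record this small illustrates why preparation time becomes a barrier: every field must be curated by hand for every dataset.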

Essential Papers

1. Data Sharing by Scientists: Practices and Perceptions

Carol Tenopir, Suzie Allard, Kimberly Douglass et al. · 2011 · PLoS ONE · 1.4K citations

Barriers to effective data sharing and preservation are deeply rooted in the practices and culture of the research process as well as the researchers themselves. New mandates for data management pl...

2. Open science challenges, benefits and tips in early career and beyond

Christopher Allen, David Marc Anton Mehler · 2019 · PLoS Biology · 593 citations

The movement towards open science is a consequence of seemingly pervasive failures to replicate previous research. This transition comes with great benefits but also significant challenges that are...

3. The academic, economic and societal impacts of Open Access: an evidence-based review

Jonathan Tennant, François Waldner, Damien Jacques et al. · 2016 · F1000Research · 533 citations

Ongoing debates surrounding Open Access to the scholarly literature are multifaceted and complicated by disparate and often polarised viewpoints from engaged stakeholders. At the current sta...

4. If We Share Data, Will Anyone Use Them? Data Sharing and Reuse in the Long Tail of Science and Technology

Jillian C. Wallis, Elizabeth Rolando, Christine L. Borgman · 2013 · PLoS ONE · 483 citations

Research on practices to share and reuse data will inform the design of infrastructure to support data collection, management, and discovery in the long tail of science and technology. These are re...

5. Changes in Data Sharing and Data Reuse Practices and Perceptions among Scientists Worldwide

Carol Tenopir, Elizabeth D. Dalton, Suzie Allard et al. · 2015 · PLoS ONE · 444 citations

The incorporation of data sharing into the research lifecycle is an important part of modern scholarly debate. In this study, the DataONE Usability and Assessment working group addresses two primar...

6. Cloudy, increasingly FAIR; revisiting the FAIR Data guiding principles for the European Open Science Cloud

Barend Mons, Cameron Neylon, Jan Velterop et al. · 2017 · Information Services & Use · 421 citations

The FAIR Data Principles propose that all scholarly output should be Findable, Accessible, Interoperable, and Reusable. As a set of guiding principles, expressing only the kinds of behaviours that ...

7. Citizen science in environmental and ecological sciences

Dilek Fraisl, Gerid Hager, Baptiste Bedessem et al. · 2022 · Nature Reviews Methods Primers · 413 citations

Reading Guide

Foundational Papers

Start with Tenopir et al. (2011, 1407 citations) for the primary barrier taxonomy, drawn from a survey of 1324 scientists; follow with Wallis et al. (2013) on reuse concerns in the long tail of science and Alsheikh-Ali et al. (2011) on the failures of journal data-availability policies.

Recent Advances

Allen and Mehler (2019) link barriers to the replication crisis; Mons et al. (2017) propose FAIR-based solutions; Carroll et al. (2021) extend the discussion to Indigenous data governance.

Core Methods

Global surveys of scientists (Tenopir et al.); journal policy audits (Alsheikh-Ali et al.); reuse interviews in small labs (Wallis et al.); FAIR principle frameworks (Mons et al.).

How PapersFlow Helps You Research Research Data Sharing Barriers

Discover & Search

Research Agent calls searchPapers('data sharing barriers Tenopir') to retrieve Tenopir et al. (2011, 1407 citations); citationGraph then surfaces the 2015 follow-up, and exaSearch uncovers domain-specific surveys. findSimilarPapers expands the set to 50+ related works on cultural barriers, as sketched below.
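
As an illustration, a discovery pipeline chaining these tools might look like the following Python sketch; the call signatures and return shapes are assumptions made for readability, not the documented PapersFlow API:

    # Hypothetical discovery pipeline; the tool names come from this guide,
    # but the signatures and return shapes are assumed for illustration.
    def discover_barrier_literature(agent):
        seed = agent.searchPapers("data sharing barriers Tenopir")[0]  # Tenopir et al. (2011)
        follow_ups = agent.citationGraph(seed.id, direction="citing")  # e.g., the 2015 follow-up
        surveys = agent.exaSearch("domain-specific data sharing surveys")
        related = agent.findSimilarPapers(seed.id, limit=50)           # cultural-barrier neighbors
        return {"seed": seed, "follow_ups": follow_ups,
                "surveys": surveys, "related": related}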

Analyze & Verify

Analysis Agent runs readPaperContent on Tenopir et al. (2011) to extract barrier rankings, verifies the survey statistics via verifyResponse (CoVe) against the raw percentages, and uses runPythonAnalysis for GRADE grading of evidence strength (A-grade for the n=1324 sample). Statistical verification then checks the reported reuse rates for inconsistencies; a minimal sketch of such a check follows.
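
For instance, the kind of sanity check runPythonAnalysis might run on extracted survey statistics can be sketched in plain pandas. The percentages are the ones quoted elsewhere in this guide; the consistency rules are illustrative, not the agent's actual verification logic:

    import pandas as pd

    # Barrier percentages quoted in this guide for Tenopir et al. (2011);
    # the checks below are illustrative, not the agent's real rules.
    barriers = pd.DataFrame({
        "barrier": ["priority over data", "lack of time"],
        "pct": [49, 36],
    })

    # Basic consistency checks on extracted statistics.
    assert barriers["pct"].between(0, 100).all(), "percentage out of range"
    n = 1324  # reported sample size
    barriers["respondents"] = (barriers["pct"] / 100 * n).round().astype(int)
    print(barriers)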

Synthesize & Write

Synthesis Agent detects gaps such as the lack of post-2020 incentive studies, flags contradictions between the 2011 and 2015 Tenopir trends, and uses exportMermaid to produce barrier-causality diagrams like the sketch below. Writing Agent applies latexEditText to draft policy sections, latexSyncCitations to keep 20+ references consistent, and latexCompile to produce a camera-ready PDF.
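
The barrier-causality diagram exportMermaid produces might look like the following; the node labels are drawn from the Key Research Challenges above, while the graph structure and output format are assumptions:

    # Illustrative Mermaid source for a barrier-causality diagram; the labels
    # come from this guide's challenge taxonomy, the edges are assumptions.
    mermaid_src = """
    graph TD
        incentives[Misaligned incentives] --> cultural[Cultural resistance]
        ip[IP and priority fears] --> withholding[Data withholding]
        cultural --> withholding
        time[Preparation time] --> withholding
        withholding --> repro[Reduced reproducibility]
    """
    with open("barrier_causality.mmd", "w") as f:
        f.write(mermaid_src)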

Use Cases

"Analyze citation trends in data sharing barrier surveys using Python"

Research Agent → searchPapers → Analysis Agent → runPythonAnalysis(pandas citation count plot from Tenopir papers) → matplotlib trend visualization exported as PNG.
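
A minimal version of that analysis step, using the citation counts quoted in this guide; the file name and chart styling are arbitrary choices:

    import matplotlib
    matplotlib.use("Agg")  # render off-screen so the script runs headless
    import matplotlib.pyplot as plt
    import pandas as pd

    # Citation counts quoted in this guide for the two Tenopir surveys.
    df = pd.DataFrame({"year": [2011, 2015], "citations": [1407, 444]})

    fig, ax = plt.subplots()
    ax.bar(df["year"].astype(str), df["citations"])
    ax.set_xlabel("Survey year")
    ax.set_ylabel("Citations")
    ax.set_title("Citations of Tenopir et al. data sharing surveys")
    fig.savefig("tenopir_citations.png")  # exported as PNG, as in the workflow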

"Draft LaTeX review on cultural barriers with synced citations"

Synthesis Agent → gap detection → Writing Agent → latexEditText('expand Tenopir section') → latexSyncCitations(15 refs) → latexCompile → PDF with barrier taxonomy table.

"Find GitHub repos with open data barrier survey code"

Research Agent → paperExtractUrls(Tenopir 2011) → Code Discovery → paperFindGithubRepo → githubRepoInspect → cleaned CSV of 10 survey datasets for reuse.

Automated Workflows

The Deep Research workflow conducts a systematic review of 50+ barrier papers: searchPapers → citationGraph → GRADE grading of all abstracts → a structured report with a barrier-prevalence matrix. DeepScan applies a 7-step CoVe chain to verify Tenopir survey claims against the 2015 data, as sketched below. Theorizer generates policy-incentive hypotheses by flagging contradictions across 20 studies.
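
As a rough illustration of the chain-of-verification idea, a claim is decomposed into independently answered check questions before it is accepted. The helper below is a hypothetical stand-in for the agent's internal lookups, and the three questions compress the 7-step chain for brevity:

    # Hypothetical chain-of-verification loop; verify_question() stands in for
    # the agent's lookup against the 2015 survey data.
    def cove_verify(claim: str, verify_question) -> bool:
        # Decompose the claim into checks, phrased so True means "check passed".
        questions = [
            f"Does the cited survey actually report: {claim}?",
            f"Is the sample size consistent with: {claim}?",
            f"Is this finding still consistent with later surveys: {claim}?",
        ]
        answers = [verify_question(q) for q in questions]  # answered one at a time
        return all(answers)  # accept the claim only if every check passes

    # Usage with a stub verifier (a real agent would query the literature):
    ok = cove_verify("49% cite priority over data as a barrier", lambda q: True)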

Frequently Asked Questions

What is the top cited paper defining data sharing barriers?

Tenopir et al. (2011) surveyed 1324 scientists across disciplines, finding priority over data (49%) and lack of time (36%) to be the top barriers (1407 citations).

What methods identify sharing barriers?

Large-scale surveys (Tenopir et al., 2011, n=1324; Tenopir et al., 2015, 89 countries) combined with journal audits (Alsheikh-Ali et al., 2011, 50 journals) quantify cultural, legal, and technical obstacles.

Which papers have highest citations on this topic?

Tenopir et al. (2011, 1407 citations), Wallis et al. (2013, 483 citations), and Tenopir et al. (2015, 444 citations) lead; the Tenopir studies draw on global scientist surveys, while Wallis et al. rely on interviews in long-tail research settings.

What are major open problems in data sharing barriers?

The effectiveness of post-2020 incentives remains untested (a gap since Tenopir et al., 2015); domain-specific solutions for long-tail sciences are lacking (Wallis et al., 2013); and compliance monitoring beyond journal self-policing is still needed (Alsheikh-Ali et al., 2011).

Research Research Data Sharing Barriers with AI

PapersFlow provides specialized AI tools for Computer Science researchers; the workflows and use cases above cover those most relevant to this topic.

See how researchers in Computer Science & AI use PapersFlow

Field-specific workflows, example queries, and use cases.

Computer Science & AI Guide

Start Researching Research Data Sharing Barriers with AI

Search 474M+ papers, run AI-powered literature reviews, and write with integrated citations — all in one workspace.

See how PapersFlow works for Computer Science researchers