PapersFlow Research Brief

Social Sciences · Decision Sciences

Meta-analysis and systematic reviews
Research Guide

What are meta-analysis and systematic reviews?

Meta-analysis and systematic reviews are methods for synthesizing evidence from multiple studies: systematic reviews identify, select, and critically appraise the relevant research, while meta-analyses statistically combine the results to estimate overall effects.

This field encompasses 83,703 works on guidelines like the PRISMA statement, inconsistency measures such as I², risk of bias assessment, publication bias detection, and effect size interpretation. "Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement" by Moher et al. (2009) has received 82,229 citations and established reporting standards. "The PRISMA 2020 statement: an updated guideline for reporting systematic reviews" by Page et al. (2021) reflects methodological advances with 81,217 citations.

Topic Hierarchy

Social Sciences → Decision Sciences → Statistics, Probability and Uncertainty → Meta-analysis and systematic reviews
Papers: 83.7K · 5yr Growth: N/A · Total Citations: 2.9M


Why It Matters

Systematic reviews and meta-analyses inform clinical practice guidelines and funding decisions by providing synthesized evidence from multiple studies. Moher et al. (2009) note that clinicians rely on them to stay updated, and granting agencies require them to avoid redundant research, as seen in "Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement" with 82,229 citations. Higgins et al. (2003) introduced I² in "Measuring inconsistency in meta-analyses," used in Cochrane Reviews to assess study consistency, with 60,472 citations. Egger et al. (1997) developed a funnel plot asymmetry test in "Bias in meta-analysis detected by a simple, graphical test," predicting discordance between early meta-analyses and large trials, cited 54,040 times.

Reading Guide

Where to Start

"Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement" by Moher et al. (2009) first, as it introduces core reporting standards essential for understanding systematic review methodology, with 82,229 citations.

Key Papers Explained

Moher et al. (2009) in "Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement" establishes baseline reporting guidelines, updated by Page et al. (2021) in "The PRISMA 2020 statement: an updated guideline for reporting systematic reviews" to incorporate methodological advances. Higgins et al. (2003) in "Measuring inconsistency in meta-analyses" complements these by providing I² for heterogeneity assessment. Egger et al. (1997) in "Bias in meta-analysis detected by a simple, graphical test" adds bias detection tools, while DerSimonian and Laird (1986) in "Meta-analysis in clinical trials" details random-effects estimation underlying many syntheses.

Paper Timeline

1977 · "The Measurement of Observer Agreement for Categorical Data" · 75.9K cites
1997 · "Bias in meta-analysis detected by a simple, graphical test" · 54.0K cites
2003 · "Measuring inconsistency in meta-analyses" · 60.5K cites
2009 · "Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement" · 82.2K cites (most cited)
2009 · "Preferred Reporting Items for Sy…" · 61.2K cites
2009 · "Preferred reporting items for sy…" · 45.1K cites
2021 · "The PRISMA 2020 statement: an updated guideline for reporting systematic reviews" · 81.2K cites

Papers ordered chronologically; the most-cited paper is marked.

Advanced Directions

Recent preprints show no major new developments in the last six months. Open frontiers include adapting PRISMA to emerging synthesis types, such as living systematic reviews, and improving bias-detection tools for big-data evidence.


Frequently Asked Questions

What is the PRISMA statement?

The PRISMA statement provides guidelines for reporting systematic reviews and meta-analyses. "Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement" by Moher et al. (2009) updates the earlier QUOROM guidelines to ensure transparent reporting of why a review was conducted, what methods were used, and what was found. It has 82,229 citations.

How is inconsistency measured in meta-analyses?

Inconsistency is quantified using I², the percentage of variability across study results that is due to heterogeneity rather than chance. Higgins et al. (2003) in "Measuring inconsistency in meta-analyses" explain I²'s role in Cochrane Reviews for assessing the consistency of study results. The paper has 60,472 citations.
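I² follows directly from Cochran's Q statistic: I² = max(0, (Q − df)/Q) × 100. A minimal sketch in Python, with hypothetical effect sizes and within-study variances (the function name and example data are illustrative, not taken from the paper):

```python
def i_squared(effects, variances):
    """I-squared inconsistency measure: max(0, (Q - df) / Q) * 100."""
    # Inverse-variance weights and the fixed-effect pooled estimate.
    weights = [1 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    # Cochran's Q: weighted squared deviations from the pooled estimate.
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    return max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

# Hypothetical example: three studies with effects 0.2, 0.5, 0.8
# and equal variance 0.04 give Q = 4.5, df = 2, so I² ≈ 55.6%.
print(i_squared([0.2, 0.5, 0.8], [0.04, 0.04, 0.04]))
```

Identical study results yield Q = 0 and hence I² = 0, i.e. no inconsistency beyond chance.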

What methods detect publication bias in meta-analysis?

Publication bias is detected using funnel plot asymmetry tests. Egger et al. (1997) in "Bias in meta-analysis detected by a simple, graphical test" propose a graphical test comparing effect estimates against sample size to predict discordance with large trials. It has 54,040 citations.
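Egger's test regresses each study's standard normal deviate (effect divided by its standard error) on its precision (1/SE); in a symmetric funnel, the regression line runs through the origin, so an intercept far from zero suggests small-study effects. A minimal ordinary-least-squares sketch with hypothetical inputs (this computes only the intercept, not its significance test):

```python
def egger_intercept(effects, standard_errors):
    """Intercept of the Egger regression: (effect / SE) on (1 / SE)."""
    y = [e / se for e, se in zip(effects, standard_errors)]  # standard normal deviates
    x = [1 / se for se in standard_errors]                   # precisions
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    # Ordinary least squares: slope, then intercept.
    slope = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
             / sum((xi - mean_x) ** 2 for xi in x))
    return mean_y - slope * mean_x

# Hypothetical symmetric data: every study estimates the same effect,
# so the intercept is (numerically) zero.
print(egger_intercept([0.5, 0.5, 0.5], [0.1, 0.2, 0.3]))
```

In practice the intercept is tested against zero with a t-test on its standard error; a full implementation would report that as well.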

What are key reporting items in PRISMA 2020?

PRISMA 2020 updates the 2009 guidelines to address methodological advances. Page et al. (2021) in "The PRISMA 2020 statement: an updated guideline for reporting systematic reviews" emphasize transparent reporting of review rationale, methods, and results. It has 81,217 citations.

How is observer agreement measured for categorical data?

Observer agreement for categorical data uses kappa statistics based on observed and expected agreement. Landis and Koch (1977) in "The Measurement of Observer Agreement for Categorical Data" present methodology for reliability studies, with 75,893 citations.
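Kappa contrasts observed agreement with the agreement expected by chance from the raters' marginal totals: κ = (p_o − p_e) / (1 − p_e). A small sketch for a two-rater contingency table, using hypothetical counts:

```python
def cohens_kappa(table):
    """Cohen's kappa from a square contingency table of two raters' counts."""
    n = sum(sum(row) for row in table)
    # Observed agreement: proportion of items on the diagonal.
    p_o = sum(table[i][i] for i in range(len(table))) / n
    # Chance agreement: product of the marginal proportions, summed.
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    p_e = sum(r * c for r, c in zip(row_totals, col_totals)) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical 2x2 table: 35/50 agreements observed, 0.5 expected
# by chance, giving kappa = 0.4.
print(cohens_kappa([[20, 5], [10, 15]]))
```

On the descriptive scale Landis and Koch propose, values around 0.4 fall at the boundary of "fair" and "moderate" agreement.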

What is the DerSimonian-Laird method in meta-analysis?

The DerSimonian-Laird method estimates heterogeneity variance in random-effects meta-analysis of clinical trials. DerSimonian and Laird (1986) describe it in "Meta-analysis in clinical trials," cited 38,396 times.
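The DerSimonian-Laird estimator obtains the between-study variance τ² from Cochran's Q by the method of moments: τ² = max(0, (Q − df)/C), where C = ΣW − ΣW²/ΣW and the W are inverse-variance weights. A sketch under hypothetical data (variable names are illustrative):

```python
def dersimonian_laird_tau2(effects, variances):
    """Method-of-moments estimate of between-study variance tau^2."""
    w = [1 / v for v in variances]  # fixed-effect inverse-variance weights
    pooled = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    # Cochran's Q and its degrees of freedom.
    q = sum(wi * (e - pooled) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    # Scaling constant C = sum(W) - sum(W^2) / sum(W); truncate tau^2 at zero.
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    return max(0.0, (q - df) / c)

# Hypothetical example: effects 0.2, 0.5, 0.8 with equal variance 0.04
# give Q = 4.5, df = 2, C = 50, so tau^2 = 0.05.
print(dersimonian_laird_tau2([0.2, 0.5, 0.8], [0.04, 0.04, 0.04]))
```

The random-effects pooled estimate then reweights each study by 1/(variance + τ²), which pulls the weights toward equality as heterogeneity grows.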

Open Research Questions

  • How can I² be improved to better distinguish true heterogeneity from sampling variability across diverse study designs?
  • What graphical tests enhance detection of publication bias beyond funnel plot asymmetry in small-study effects?
  • How should risk of bias assessments integrate qualitative and quantitative data in evidence synthesis?
  • What extensions of PRISMA guidelines address reporting for network meta-analyses and individual participant data?
  • How do effect size interpretations vary by field when synthesizing diagnostic accuracy studies?

Research Meta-analysis and systematic reviews with AI

PapersFlow provides specialized AI tools for Decision Sciences researchers working on this topic.

See how researchers in Economics & Business use PapersFlow

Field-specific workflows, example queries, and use cases.

Economics & Business Guide

Start Researching Meta-analysis and systematic reviews with AI

Search 474M+ papers, run AI-powered literature reviews, and write with integrated citations — all in one workspace.

See how PapersFlow works for Decision Sciences researchers