Subtopic Deep Dive

Expert Judgment in Forecasting
Research Guide

What is Expert Judgment in Forecasting?

Expert judgment in forecasting involves structured protocols for eliciting, aggregating, and integrating subjective opinions from domain experts to produce probabilistic predictions, often combined with statistical models.

Research focuses on methods like Delphi surveys and Bayesian elicitation to mitigate biases in expert opinions (Garthwaite et al., 2005, 807 citations). Studies compare expert adjustments to statistical forecasts, showing over-reliance on human advice (Önkal et al., 2009, 323 citations). Applications span public policy and crisis forecasting, with ~10 key papers exceeding 200 citations.

15 Curated Papers · 3 Key Challenges

Why It Matters

Expert judgment fills gaps in data-sparse scenarios, such as COVID-19 forecasting, where Petropoulos and Makridakis (2020, 597 citations) used expert inputs for early predictions amid limited data. Morgan (2014, 663 citations) demonstrates its role in public policy, informing decisions on climate risks and health crises despite elicitation challenges. Burgman et al. (2011, 309 citations) reveal that credentials poorly predict expert accuracy, and Craig et al. (2002, 259 citations) draw similar cautionary lessons from a retrospective review of long-term US energy forecasts.

Key Research Challenges

Bias in Elicitation

Experts exhibit overconfidence and anchoring biases during probability distribution elicitation. Garthwaite et al. (2005) review methods to encode distributions while addressing these issues. Morgan (2014) warns of misuse leading to flawed policy advice.

Expert Selection Errors

Traditional credentials fail to identify high-performing experts in forecasting tasks. Burgman et al. (2011) find no correlation between status and accuracy across domains. This complicates aggregation in novel dilemmas.

Human vs Statistical Weighting

Forecasters overweight human expert advice over statistical models, degrading accuracy. Önkal et al. (2009) show this in stock price adjustments. Integrating judgments requires structured protocols to balance inputs.

Essential Papers

1.

Statistical Methods for Eliciting Probability Distributions

Paul H. Garthwaite, Joseph B. Kadane, Anthony O’Hagan · 2005 · Journal of the American Statistical Association · 807 citations

Elicitation is a key task for subjectivist Bayesians. While skeptics hold that it cannot (or perhaps should not) be done, in practice it brings statisticians closer to their clients and subject-mat...

2.

Use (and abuse) of expert elicitation in support of decision making for public policy

M. Granger Morgan · 2014 · Proceedings of the National Academy of Sciences · 663 citations

The elicitation of scientific and technical judgments from experts, in the form of subjective probability distributions, can be a valuable addition to other forms of evidence in support of public p...

3.

Forecasting the novel coronavirus COVID-19

Fotios Petropoulos, Spyros Makridakis · 2020 · PLoS ONE · 597 citations

What will be the global impact of the novel coronavirus (COVID-19)? Answering this question requires accurate forecasting the spread of confirmed cases as well as analysis of the number of deaths a...

4.

The relative influence of advice from human experts and statistical methods on forecast adjustments

Dilek Önkal, Paul Goodwin, Mary E. Thomson et al. · 2009 · Journal of Behavioral Decision Making · 323 citations

Decision makers and forecasters often receive advice from different sources including human experts and statistical methods. This research examines, in the context of stock price forecasti...

5.

Expert Status and Performance

Mark A. Burgman, Marissa F. McBride, Raquel Ashton et al. · 2011 · PLoS ONE · 309 citations

Expert judgements are essential when time and resources are stretched or we face novel dilemmas requiring fast solutions. Good advice can save lives and large sums of money. Typically, experts are ...

6.

What Can History Teach Us? A Retrospective Examination of Long-Term Energy Forecasts for the United States

Paul Craig, Ashok Gadgil, Jonathan Koomey · 2002 · Annual Review of Energy and the Environment · 259 citations

This paper explores how long-term energy forecasts are created and why they are useful. It focuses on forecasts of energy use in the United States for the year 2000 but considers only lo...

7.

A new accuracy measure based on bounded relative error for time series forecasting

Chao Chen, Jamie Twycross, Jonathan M. Garibaldi · 2017 · PLoS ONE · 259 citations

Many accuracy measures have been proposed in the past for time series forecasting comparisons. However, many of these measures suffer from one or more issues such as poor resistance to outliers and...

Reading Guide

Foundational Papers

Start with Garthwaite et al. (2005) for core elicitation methods, then Morgan (2014) for policy pitfalls and Önkal et al. (2009) for integration challenges; together these establish the field's protocols and document its biases (807, 663, and 323 citations).

Recent Advances

Petropoulos and Makridakis (2020) apply forecasting to COVID-19; Chen et al. (2017) introduce a bounded relative error measure for comparing forecast accuracy (597, 259 citations).

Core Methods

Delphi iteration, Bayesian prior encoding (Garthwaite et al., 2005), performance-weighted aggregation (Burgman et al., 2011), advice adjustment protocols (Önkal et al., 2009).
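Performance-weighted aggregation can be sketched in a few lines. The snippet below is an illustrative toy, not the exact scheme from Burgman et al. (2011): experts who were better calibrated on past seed questions (lower mean Brier score) receive larger weights in a linear pool. All expert names and numbers are hypothetical.

```python
# Toy performance-weighted aggregation of expert probability forecasts.
# Experts with lower mean Brier score on seed questions get more weight.
# All data here is invented for illustration.

def brier(prob, outcome):
    """Brier score for one binary forecast (lower is better)."""
    return (prob - outcome) ** 2

def performance_weights(past_forecasts):
    """past_forecasts: {expert: [(prob, outcome), ...]} on seed questions."""
    scores = {
        e: sum(brier(p, o) for p, o in fs) / len(fs)
        for e, fs in past_forecasts.items()
    }
    # Invert so a better (lower) Brier score yields a larger weight, then normalize.
    inv = {e: 1.0 - s for e, s in scores.items()}
    total = sum(inv.values())
    return {e: v / total for e, v in inv.items()}

def aggregate(current_probs, weights):
    """Weighted linear pool of the experts' current probabilities."""
    return sum(weights[e] * p for e, p in current_probs.items())

past = {
    "A": [(0.9, 1), (0.8, 1), (0.2, 0)],  # well calibrated
    "B": [(0.6, 0), (0.7, 0), (0.5, 1)],  # poorly calibrated
}
w = performance_weights(past)
pooled = aggregate({"A": 0.75, "B": 0.40}, w)
```

Equal weighting is the special case where every expert gets weight 1/n; performance weighting simply replaces that uniform prior with calibration evidence.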

How PapersFlow Helps You Research Expert Judgment in Forecasting

Discover & Search

Research Agent uses searchPapers and exaSearch to find high-citation works like Garthwaite et al. (2005), then citationGraph reveals connections to Morgan (2014) and Önkal et al. (2009), while findSimilarPapers uncovers related bias mitigation studies.

Analyze & Verify

Analysis Agent applies readPaperContent to extract elicitation protocols from Garthwaite et al. (2005), verifies claims with CoVe chain-of-verification, and runs PythonAnalysis to compute bias metrics from Burgman et al. (2011) datasets using GRADE for evidence strength in expert performance.

Synthesize & Write

Synthesis Agent detects gaps in expert-statistical integration from Önkal et al. (2009), flags contradictions in Morgan (2014), and uses latexEditText with latexSyncCitations for drafting reviews; Writing Agent compiles via latexCompile and exportMermaid for Delphi process diagrams.

Use Cases

"Replicate bias analysis from Burgman et al. 2011 on expert forecasting performance"

Research Agent → searchPapers('Burgman expert status') → Analysis Agent → readPaperContent → runPythonAnalysis (pandas correlation on performance data) → statistical output with p-values and plots.
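The pandas step in that chain might look like the sketch below. This is a hypothetical stand-in, not the agent's actual code, and the data is synthetic: it correlates a status proxy (years of experience) with forecast accuracy, mirroring the question Burgman et al. (2011) ask.

```python
# Hypothetical stand-in for the runPythonAnalysis step: correlate a
# status proxy with forecast accuracy. The data below is synthetic and
# constructed so that accuracy is unrelated to seniority, echoing the
# Burgman et al. (2011) finding; it is not their dataset.
import pandas as pd

df = pd.DataFrame({
    "years_experience": [5, 12, 20, 8, 30, 3, 15, 25],
    "accuracy":         [0.71, 0.55, 0.60, 0.74, 0.58, 0.69, 0.62, 0.57],
})

# Pearson correlation between status proxy and accuracy.
r = df["years_experience"].corr(df["accuracy"])
```

In a real replication you would load the published dataset, add a significance test, and plot the relationship; the point here is only the shape of the analysis.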

"Draft LaTeX review of Delphi methods in expert elicitation"

Synthesis Agent → gap detection on Garthwaite 2005 + Morgan 2014 → Writing Agent → latexEditText → latexSyncCitations → latexCompile → formatted PDF with cited protocols.

"Find GitHub repos implementing Bayesian expert elicitation"

Research Agent → searchPapers('Bayesian elicitation code') → Code Discovery → paperExtractUrls → paperFindGithubRepo → githubRepoInspect → repo summaries with forecasting scripts.

Automated Workflows

Deep Research workflow conducts systematic review of 50+ papers on expert judgment, chaining searchPapers → citationGraph → GRADE grading for structured elicitation report. DeepScan applies 7-step analysis with CoVe checkpoints to verify bias claims in Önkal et al. (2009). Theorizer generates hypotheses on optimal expert aggregation from Garthwaite et al. (2005) and Burgman et al. (2011).

Frequently Asked Questions

What defines expert judgment in forecasting?

Structured elicitation of probability distributions from domain experts using protocols like Delphi or Bayesian methods, integrated with statistical forecasts (Garthwaite et al., 2005).

What are key methods for expert elicitation?

Methods include fixed-quantile encoding and roulette techniques for distributions, with aggregation via equal weighting or performance-based models (Garthwaite et al., 2005; Morgan, 2014).
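Fixed-quantile encoding can be illustrated with a minimal sketch, assuming a normal distribution is an acceptable fit (real elicitations often use other families): the expert states a median and a 90th percentile, and the two judgments pin down the distribution's parameters. The scenario and numbers are invented.

```python
# Minimal fixed-quantile encoding sketch: fit a normal distribution to
# an expert's stated median and 90th percentile. Stdlib only; the
# elicited numbers are hypothetical.
import math

Z90 = 1.2815515655446004  # standard-normal 90th-percentile z-score

def fit_normal_from_quantiles(median, q90):
    """Return (mu, sigma) of the normal matching the stated quantiles."""
    mu = median
    sigma = (q90 - median) / Z90
    return mu, sigma

def normal_cdf(x, mu, sigma):
    """CDF of the fitted normal, via math.erf."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

# Expert judgment: "median demand is 100 units; 90% sure it is below 130."
mu, sigma = fit_normal_from_quantiles(100.0, 130.0)
p_exceeds_150 = 1.0 - normal_cdf(150.0, mu, sigma)
```

Roulette-style elicitation works the other way around: the expert allocates chips to bins, and the analyst fits a distribution to the resulting histogram rather than to named quantiles.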

What are the most cited papers?

Garthwaite et al. (2005, 807 citations) on statistical elicitation; Morgan (2014, 663 citations) on policy applications; Petropoulos and Makridakis (2020, 597 citations) on COVID forecasting.

What open problems exist?

Improving expert selection beyond credentials (Burgman et al., 2011), balancing human-statistical weights (Önkal et al., 2009), and scaling elicitation for real-time crises like COVID-19.

Research Forecasting Techniques and Applications with AI

PapersFlow provides specialized AI tools for Decision Sciences researchers; the workflows described above are the most relevant for this topic.

See how researchers in Economics & Business use PapersFlow

Field-specific workflows, example queries, and use cases.

Economics & Business Guide

Start Researching Expert Judgment in Forecasting with AI

Search 474M+ papers, run AI-powered literature reviews, and write with integrated citations — all in one workspace.

See how PapersFlow works for Decision Sciences researchers