Subtopic Deep Dive
List Experiments Survey Method
Research Guide
What Is the List Experiment Survey Method?
List experiments are an indirect survey technique in which respondents report only the number of true statements on a list; a randomly assigned treatment group sees the same list plus a sensitive item, so the difference in mean counts estimates the item's prevalence without any individual admission.
The design was developed to reduce social desirability bias on sensitive topics such as prejudice and corruption (Blair and Imai, 2012; 609 citations). Key methods include univariate difference-in-means, regression models, and non-compliance adjustments (Imai, 2011; 318 citations). More than ten foundational papers since 2008 address design and analysis.
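The aggregate logic can be sketched with synthetic data. All parameters below, such as a 30% true prevalence and three innocuous items, are illustrative assumptions, not figures from the cited papers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic respondents: 3 innocuous items plus one sensitive item.
# Assumed true prevalence of the sensitive trait: 0.30 (illustrative only).
n = 2000
treat = rng.integers(0, 2, n)            # 1 = list includes the sensitive item
innocuous = rng.binomial(3, 0.5, n)      # count of innocuous "true" items
sensitive = rng.binomial(1, 0.30, n)     # latent sensitive trait
y = innocuous + treat * sensitive        # reported item count

# Difference-in-means estimator of prevalence; no individual ever
# admits the sensitive item directly.
est = y[treat == 1].mean() - y[treat == 0].mean()
se = np.sqrt(y[treat == 1].var(ddof=1) / (treat == 1).sum()
             + y[treat == 0].var(ddof=1) / (treat == 0).sum())
print(f"estimated prevalence: {est:.3f} (SE {se:.3f})")
```

With a few thousand respondents the estimate recovers the assumed 30% prevalence to within a few percentage points.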
Why It Matters
List experiments improve polling accuracy on controversial issues such as racial attitudes and corruption, as validated in empirical studies (Rosenfeld, Imai, and Shapiro, 2015; 236 citations). They enable reliable public opinion data for policy-making, reducing underreporting in direct surveys (Krumpal, 2011; 2630 citations). Applications span political science and social research, with tools like LISTIT for modeling (Corstange, 2008; 248 citations).
Key Research Challenges
Non-compliance Detection
Respondents may violate the 'no design effect' assumption by changing their answers to the control items when the sensitive item is present. Glynn (2013; 395 citations) introduces a 'statistical truth serum' design to detect and adjust for such behavior. Blair, Coppock, and Moor (2020; 252 citations) analyze 30 years of list experiments for patterns of sensitivity bias.
Multivariate Covariate Adjustment
Estimating sensitive item effects with respondent covariates requires specialized regression. Imai (2011; 318 citations) develops multivariate models for item count technique. Blair and Imai (2012; 609 citations) provide statistical framework for analysis.
Design Optimization
Optimal list length and item selection balance statistical power against bias. Corstange (2008; 248 citations) models the list experiment with the LISTIT estimator. Rosenfeld, Imai, and Shapiro (2015; 236 citations) validate list experiments empirically against other indirect methods.
Essential Papers
Determinants of social desirability bias in sensitive surveys: a literature review
Ivar Krumpal · 2011 · Quality & Quantity · 2.6K citations
Statistical Analysis of List Experiments
Graeme Blair, Kosuke Imai · 2012 · Political Analysis · 609 citations
The validity of empirical research often relies upon the accuracy of self-reported behavior and beliefs. Yet eliciting truthful answers in surveys is challenging, especially when studying sensitive...
What Can We Learn with Statistical Truth Serum?
Adam Glynn · 2013 · Public Opinion Quarterly · 395 citations
Due to the inherent sensitivity of many survey questions, a number of researchers have adopted an indirect questioning technique known as the list experiment (or the item-count technique) in order ...
Multivariate Regression Analysis for the Item Count Technique
Kosuke Imai · 2011 · Journal of the American Statistical Association · 318 citations
Abstract The item count technique is a survey methodology that is designed to elicit respondents’ truthful answers to sensitive questions such as racial prejudice and drug use. The method is also k...
How to Run Surveys: A Guide to Creating Your Own Identifying Variation and Revealing the Invisible
Stefanie Stantcheva · 2023 · Annual Review of Economics · 281 citations
Surveys are an essential approach for eliciting otherwise invisible factors such as perceptions, knowledge and beliefs, attitudes, and reasoning. These factors are critical determinants of social, ...
Reading Guide
Foundational Papers
Start with Blair and Imai (2012) for core statistical analysis (609 citations), then Imai (2011) for multivariate extensions, and Glynn (2013) for truth serum diagnostics.
Recent Advances
Blair, Coppock, and Moor (2020; 252 citations) on sensitivity bias over 30 years; Rosenfeld, Imai, and Shapiro (2015; 236 citations) for validation studies.
Core Methods
Difference-of-means for basic estimates; respondent-level regression with covariates (Imai, 2011); non-parametric bounds and the statistical truth serum (Glynn, 2013); LISTIT for multivariate modeling (Corstange, 2008).
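Imai (2011) derives maximum-likelihood and nonlinear-least-squares estimators for the item count technique; as a simpler hedged sketch, an OLS model with a treatment-covariate interaction approximates how estimated prevalence varies with one respondent covariate (synthetic data, illustrative coefficients):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4000
x = rng.normal(size=n)                       # one respondent covariate (illustrative)
treat = rng.integers(0, 2, n)
p = 1 / (1 + np.exp(-(-1.0 + 1.2 * x)))      # assumed: prevalence rises with x
y = rng.binomial(3, 0.5, n) + treat * rng.binomial(1, p)

# OLS with a treatment-covariate interaction: b_t + b_tx * x traces
# how the estimated prevalence of the sensitive item varies with x.
X = np.column_stack([np.ones(n), treat, x, treat * x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
b0, b_t, b_x, b_tx = beta
print(f"prevalence at x=0: {b_t:.3f}, slope in x: {b_tx:.3f}")
```

The linear interaction is only an approximation to the logistic data-generating process; the maximum-likelihood estimators in Imai (2011) model the count distribution directly.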
How PapersFlow Helps You Research List Experiments Survey Method
Discover & Search
Research Agent uses searchPapers('list experiment survey method') to retrieve Blair and Imai (2012), then citationGraph to map 609 citing works and findSimilarPapers for Glynn (2013). exaSearch uncovers applications in political polling.
Analyze & Verify
Analysis Agent applies readPaperContent on Imai (2011) multivariate models, then runPythonAnalysis to simulate regression on sample data with pandas/NumPy. verifyResponse (CoVe) with GRADE grading checks estimator assumptions against Blair and Imai (2012) benchmarks; statistical verification confirms non-compliance tests from Glynn (2013).
Synthesize & Write
Synthesis Agent detects gaps in non-compliance handling across Krumpal (2011) and Blair, Coppock, and Moor (2020), and flags contradictions among bias models. Writing Agent uses latexEditText for experiment design sections, latexSyncCitations for 10+ papers, and latexCompile for the full report; exportMermaid diagrams the list experiment workflow.
Use Cases
"Simulate list experiment regression for corruption prevalence with covariates"
Research Agent → searchPapers → Analysis Agent → runPythonAnalysis (pandas regression on Imai 2011 model) → matplotlib density plot output with p-values and confidence intervals.
"Write LaTeX methods section comparing list experiments to direct questions"
Synthesis Agent → gap detection (Rosenfeld et al. 2015) → Writing Agent → latexEditText + latexSyncCitations (Blair Imai 2012, Glynn 2013) → latexCompile → PDF with tables and bibliography.
"Find GitHub repos implementing LISTIT from Corstange paper"
Research Agent → paperExtractUrls (Corstange 2008) → Code Discovery → paperFindGithubRepo → githubRepoInspect → R code for list experiment modeling with examples.
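A minimal sketch of the analysis the first use case describes, using synthetic data: the regions and prevalence figures are hypothetical, and this is ordinary pandas code, not a PapersFlow API call.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 3000
df = pd.DataFrame({
    "region": rng.choice(["north", "south"], n),
    "treat": rng.integers(0, 2, n),
})
# Hypothetical ground truth: bribery prevalence 0.15 (north) vs 0.35 (south).
p = np.where(df["region"] == "north", 0.15, 0.35)
df["count"] = rng.binomial(3, 0.5, n) + df["treat"].to_numpy() * rng.binomial(1, p)

# Subgroup difference-in-means: estimated corruption prevalence by region.
means = df.groupby(["region", "treat"])["count"].mean().unstack("treat")
prevalence = means[1] - means[0]
print(prevalence)
```

Splitting the difference-in-means by a covariate like this is the crudest form of heterogeneity analysis; the regression approach of Imai (2011) handles continuous covariates and smaller subgroups more efficiently.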
Automated Workflows
Deep Research workflow conducts systematic review: searchPapers (list experiments) → citationGraph (Blair Imai 2012 cluster) → readPaperContent on top 20 → structured report with GRADE scores. DeepScan applies 7-step analysis with CoVe checkpoints on Glynn (2013) truth serum, verifying assumptions statistically. Theorizer generates theory on sensitivity bias from Krumpal (2011) + Blair, Coppock, and Moor (2020) patterns.
Frequently Asked Questions
What defines a list experiment?
Respondents count true statements on a list shown with or without a sensitive item; the treatment-control difference in mean counts estimates prevalence (Blair and Imai, 2012). For example, mean counts of 2.1 (treatment) and 1.8 (control) imply a prevalence of roughly 30%.
What are main analysis methods?
Univariate difference-of-means, multivariate regression (Imai, 2011), the statistical truth serum for non-compliance (Glynn, 2013), and LISTIT modeling (Corstange, 2008).
What are key papers?
Blair and Imai (2012; 609 citations) for analysis; Imai (2011; 318 citations) for regression; Krumpal (2011; 2630 citations) for bias review.
What open problems exist?
Detecting non-compliance robustly (Blair, Coppock, and Moor, 2020); optimal designs for small samples; integration with machine learning for heterogeneous effects.
Research Survey Sampling and Estimation Techniques with AI
PapersFlow provides specialized AI tools for Mathematics researchers. Here are the most relevant for this topic:
AI Literature Review
Automate paper discovery and synthesis across 474M+ papers
Paper Summarizer
Get structured summaries of any paper in seconds
AI Academic Writing
Write research papers with AI assistance and LaTeX support
See how researchers in Physics & Mathematics use PapersFlow
Field-specific workflows, example queries, and use cases.
Start Researching List Experiments Survey Method with AI
Search 474M+ papers, run AI-powered literature reviews, and write with integrated citations — all in one workspace.
See how PapersFlow works for Mathematics researchers