Subtopic Deep Dive
Racial Discrimination in Resume Audits
Research Guide
What is Racial Discrimination in Resume Audits?
Racial discrimination in resume audits refers to field experiments that send resumes with racially identifiable names to real job postings in order to measure callback disparities.
Researchers send otherwise identical resumes, differing only in names that signal White (e.g., Emily, Greg) or Black (e.g., Lakisha, Jamal) identities, to employers. Bertrand and Mullainathan (2004) found that White-sounding names received 50% more callbacks in Chicago and Boston. A meta-analysis by Quillian et al. (2017, 812 citations) of field experiments from 1990 to 2015 found no decline in discrimination over that period.
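The headline comparison in such audits reduces to a difference in two callback proportions. A minimal sketch of that test, using hypothetical counts (not Bertrand and Mullainathan's actual data) and a two-proportion z-test built from the standard library:

```python
import math

def callback_gap_test(calls_a, n_a, calls_b, n_b):
    """Two-proportion z-test for a difference in callback rates.

    Returns (rate_a, rate_b, z, two_sided_p), where rates are
    callbacks / resumes sent for each name group.
    """
    p_a, p_b = calls_a / n_a, calls_b / n_b
    pooled = (calls_a + calls_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Normal CDF via the error function; two-sided p-value.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, z, p_value

# Hypothetical audit: 2,400 resumes per name group (illustrative only).
rate_w, rate_b, z, p = callback_gap_test(232, 2400, 155, 2400)
```

With counts of this magnitude the gap is highly significant, which mirrors why audit studies with a few thousand resumes can detect callback disparities reliably.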
Why It Matters
These audits quantify racial bias in hiring: Bertrand and Mullainathan (2004, 637 citations) showed a roughly 50% relative callback gap persisting across occupations, and Gaddis (2014, 397 citations) revealed that college selectivity amplifies discrimination against Black applicants. Findings inform EEOC policy and diversity training; Quillian et al. (2019, 271 citations) compared discrimination levels across 97 field experiments in nine countries, highlighting cross-national variation.
Key Research Challenges
Name Perception Variability
Names signal race imperfectly, and perceptions vary with perceiver demographics. Gaddis (2017, 270 citations) surveyed perceptions of 1,400 names used in audits, finding that Lakisha was perceived as Black by 92% of respondents while only 42% rated the name as stereotypically urban. This complicates audit validity, since associations beyond race (such as class or 'ghetto' stereotypes) may influence callbacks.
No Temporal Decline in Bias
Meta-analytic evidence shows discrimination has been stable for decades. Quillian et al. (2017, 812 citations) analyzed 28 field experiments (1990-2015) and found no change in Black-White callback ratios despite decades of anti-discrimination law. Neumark (2018, 508 citations) reviews persistent gaps that differences in resume quality do not explain.
Credential Interaction Effects
High-status credentials do not erase bias. Gaddis (2014, 397 citations) tested race-by-college-selectivity interactions, showing that an elite degree boosts callbacks more for White applicants than for Black applicants. Deming et al. (2016, 271 citations) found that employers' valuation of postsecondary credentials varies by institutional sector and selectivity.
Essential Papers
Meta-analysis of field experiments shows no change in racial discrimination in hiring over time
Lincoln Quillian, Devah Pager, Ole Hexel et al. · 2017 · Proceedings of the National Academy of Sciences · 812 citations
Significance Many scholars have argued that discrimination in American society has decreased over time, while others point to persisting race and ethnic gaps and subtle forms of prejudice. The ques...
Are Emily and Greg More Employable than Lakisha and Jamal? A Field Experiment on Labor Market Discrimination
Marianne Bertrand, Sendhil Mullainathan · 2004 · American Economic Review · 637 citations
We perform a field experiment to measure racial discrimination in the labor market. We respond with fictitious resumes to help-wanted ads in Boston and Chicago newspapers. To manipulate perception of...
Experimental Research on Labor Market Discrimination
David Neumark · 2018 · Journal of Economic Literature · 508 citations
Understanding whether labor market discrimination explains inferior labor market outcomes for many groups has drawn the attention of labor economists for decades— at least since the publication of ...
Discrimination in the Credential Society: An Audit Study of Race and College Selectivity in the Labor Market
S. Michael Gaddis · 2014 · Social Forces · 397 citations
Racial inequality in economic outcomes, particularly among the college educated, persists throughout US society. Scholars debate whether this inequality stems from racial differences in human capit...
Bias in Online Freelance Marketplaces
Anikó Hannák, Claudia Wagner, David García et al. · 2017 · 273 citations
Online freelancing marketplaces have grown quickly in recent years. In theory, these sites offer workers the ability to earn money without the obligations and potential social biases associated wit...
Do Some Countries Discriminate More than Others? Evidence from 97 Field Experiments of Racial Discrimination in Hiring
Lincoln Quillian, Anthony Heath, Devah Pager et al. · 2019 · Sociological Science · 271 citations
Comparing levels of discrimination across countries can provide a window into large-scale social and political factors often described as the root of discrimination. Because of difficulties in measur...
The Value of Postsecondary Credentials in the Labor Market: An Experimental Study
David Deming, Noam Yuchtman, Amira Abulafi et al. · 2016 · American Economic Review · 271 citations
We study employers' perceptions of the value of postsecondary degrees using a field experiment. We randomly assign the sector and selectivity of institutions to fictitious resumes and apply to real...
Reading Guide
Foundational Papers
Start with Bertrand and Mullainathan (2004, 637 citations) for the core audit design and its roughly 50% callback gap. Follow with Gaddis (2014, 397 citations) on credential interactions. Pager et al. (2009, 197 citations) adds evidence from low-wage labor markets.
Recent Advances
The Quillian et al. (2017, 812 citations) meta-analysis documents stable discrimination rates over time. Quillian et al. (2019, 271 citations) benchmarks 97 international field experiments. Gaddis (2017, 270 citations) validates name-race signaling.
Core Methods
Resume audits randomize names (White/Black signals) across real job ads. Meta-analyses use multilevel models for callback ratios (Quillian et al. 2017). Perception surveys test name stereotypes (Gaddis 2017).
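The randomization step described above can be sketched as assigning name signals within matched resume pairs sent to the same job ad. The name lists and pairing logic here are a simplified illustration, not an actual study protocol:

```python
import random

# Illustrative name pools in the Bertrand-Mullainathan style.
WHITE_NAMES = ["Emily Walsh", "Greg Baker"]
BLACK_NAMES = ["Lakisha Washington", "Jamal Jones"]

def assign_names(job_ads, seed=0):
    """For each job ad, send one resume with a White-signaling name and
    one with a Black-signaling name, randomizing name choice and order."""
    rng = random.Random(seed)
    assignments = []
    for ad in job_ads:
        pair = [
            ("white", rng.choice(WHITE_NAMES)),
            ("black", rng.choice(BLACK_NAMES)),
        ]
        rng.shuffle(pair)  # randomize which resume template gets which name
        assignments.append({"ad": ad, "resumes": pair})
    return assignments

audit = assign_names(["ad-001", "ad-002", "ad-003"])
```

Randomizing within pairs means each employer receives one resume from each group, so employer-level differences cancel out of the callback comparison.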
How PapersFlow Helps You Research Racial Discrimination in Resume Audits
Discover & Search
Research Agent uses searchPapers('racial discrimination resume audit') to retrieve the Quillian et al. (2017) meta-analysis (812 citations); citationGraph then reveals Bertrand and Mullainathan as the most-cited foundational work. findSimilarPapers on Gaddis (2014) surfaces name-perception studies; exaSearch uncovers the 97-experiment cross-national comparison in Quillian et al. (2019).
Analyze & Verify
Analysis Agent runs readPaperContent on Quillian et al. (2017) to extract meta-analytic callback ratios, verifies the response with CoVe against the reported field data, and uses runPythonAnalysis (pandas) to compute discrimination rates from Gaddis (2014) tables. GRADE grading rates the Neumark (2018) review as A-level evidence on audit methodology.
Synthesize & Write
Synthesis Agent detects gaps such as post-2015 trends beyond Quillian et al. (2017) and flags contradictions between U.S. stability and the international variation in Quillian et al. (2019). Writing Agent uses latexEditText for audit results tables, latexSyncCitations for 10+ papers, latexCompile for the report, and exportMermaid for callback-disparity flowcharts.
Use Cases
"Replicate Bertrand-Mullainathan callback gaps with Python meta-analysis"
Research Agent → searchPapers('Bertrand Mullainathan 2003') → Analysis Agent → readPaperContent + runPythonAnalysis(pandas, NumPy) → outputs replicated 50% callback disparity stats and matplotlib disparity plot.
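As a sanity check on the headline number in this use case: the roughly 50% figure is the relative ratio of the approximate published callback rates (about 9.65% for White-sounding and 6.45% for Black-sounding names), treated here as given inputs:

```python
# Approximate published callback rates from Bertrand and Mullainathan.
white_rate = 0.0965  # White-sounding names
black_rate = 0.0645  # Black-sounding names

relative_gap = white_rate / black_rate - 1  # relative callback advantage
percent_more = round(relative_gap * 100)    # headline "percent more callbacks"
```

Note that the absolute gap (about 3.2 percentage points) and the relative gap (about 50%) describe the same disparity; audit papers usually report both.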
"Write LaTeX review comparing Quillian meta-analysis to Gaddis credentials study"
Synthesis Agent → gap detection → Writing Agent → latexEditText(structured abstract) → latexSyncCitations(Quillian 2017, Gaddis 2014) → latexCompile → outputs polished PDF with synced bibliography.
"Find code for resume audit name perception surveys like Gaddis 2017"
Research Agent → paperExtractUrls(Gaddis 2017) → Code Discovery → paperFindGithubRepo → githubRepoInspect → outputs survey simulation scripts and analysis notebooks.
Automated Workflows
Deep Research workflow conducts systematic review of 50+ audit papers via searchPapers → citationGraph → structured report with GRADE scores on temporal trends from Quillian et al. (2017). DeepScan applies 7-step analysis with CoVe verification to Neumark (2018), checkpointing methodology critiques. Theorizer generates hypotheses on name-credential interactions from Gaddis (2014) + Deming et al. (2016).
Frequently Asked Questions
What defines racial discrimination in resume audits?
Field experiments send identical resumes with White-sounding (Emily, Greg) vs. Black-sounding (Lakisha, Jamal) names to job ads and measure callback rates. Bertrand and Mullainathan (2004) reported 50% more callbacks for White-sounding names.
What methods dominate this research?
Correspondence audits manipulate perceived race via names while holding qualifications constant. Gaddis (2017) validated name-race perceptions; Quillian et al. (2017) meta-analyzed 24 U.S. studies using hierarchical models.
What are key papers?
Bertrand and Mullainathan (2004, 637 citations) launched the paradigm with the Chicago-Boston experiment. Quillian et al. (2017, 812 citations) showed no decline in discrimination. Gaddis (2014, 397 citations) tested college-selectivity effects.
What open problems remain?
Post-2015 trends are unknown, since the Quillian et al. (2017) meta-analysis ends in 2015. International generalizability varies (Quillian et al. 2019). Interactions with AI screening and gig platforms remain underexplored despite Hannák et al. (2017).
Research Names, Identity, and Discrimination with AI
PapersFlow provides specialized AI tools for Social Sciences researchers. Here are the most relevant for this topic:
Systematic Review
AI-powered evidence synthesis with documented search strategies
AI Literature Review
Automate paper discovery and synthesis across 474M+ papers
Deep Research Reports
Multi-source evidence synthesis with counter-evidence
Find Disagreement
Discover conflicting findings and counter-evidence
See how researchers in Social Sciences use PapersFlow
Field-specific workflows, example queries, and use cases.
Start Researching Racial Discrimination in Resume Audits with AI
Search 474M+ papers, run AI-powered literature reviews, and write with integrated citations — all in one workspace.
See how PapersFlow works for Social Sciences researchers