Subtopic Deep Dive
Gender Discrimination via Names in Hiring
Research Guide
What is Gender Discrimination via Names in Hiring?
This topic covers field experiments in which resumes bearing gendered names are sent to real job postings to measure callback disparities in hiring.
Correspondence studies manipulate perceived gender through names on otherwise identical resumes to test for hiring bias (Bertrand & Mullainathan, 2003, 637 citations). These experiments reveal persistent gender penalties, often intersecting with age or motherhood signals (Lahey, 2008, 158 citations). More than ten foundational and recent audit studies quantify discrimination levels across sectors.
Why It Matters
Field experiments like Bertrand and Mullainathan (2003) established the correspondence method's power, finding that resumes with White-sounding names received roughly 50% more callbacks than identical resumes with Black-sounding names; the same design underpins gender-name audits and has informed equal-opportunity policy. Gaddis (2014, 397 citations) extended the design to credentials, finding that gendered and racialized name signals interact with college selectivity in shaping employability. Neumark (2018, 508 citations) reviews how such biases sustain wage gaps, guiding HR interventions and legal reforms for equity.
Key Research Challenges
Intersectional Bias Measurement
Studies struggle to isolate gender from race or class signals in names (Gaddis, 2014). Intersectionality requires multi-factor manipulations, increasing experimental complexity (Pager et al., 2009). Standardization across diverse name pools remains inconsistent.
Temporal Stability of Bias
Quillian et al. (2017, 812 citations) found no decline in discrimination over time, challenging intervention efficacy claims. Longitudinal designs face resume fatigue and market shifts (Neumark, 2018). Replicating effects across decades demands updated name databases.
Generalizability Across Contexts
US-centric findings like Bertrand and Mullainathan (2003) may not hold internationally (Quillian et al., 2019, 271 citations). Cultural name perceptions vary, limiting cross-country comparisons. Low-wage sectors show amplified biases (Pager et al., 2009).
Essential Papers
Meta-analysis of field experiments shows no change in racial discrimination in hiring over time
Lincoln Quillian, Devah Pager, Ole Hexel et al. · 2017 · Proceedings of the National Academy of Sciences · 812 citations
Significance Many scholars have argued that discrimination in American society has decreased over time, while others point to persisting race and ethnic gaps and subtle forms of prejudice. The ques...
Are Emily and Greg More Employable than Lakisha and Jamal? A Field Experiment on Labor Market Discrimination
Marianne Bertrand, Sendhil Mullainathan · 2003 · 637 citations
We perform a field experiment to measure racial discrimination in the labor market. We respond with fictitious resumes to help-wanted ads in Boston and Chicago newspapers. To manipulate perception of...
Experimental Research on Labor Market Discrimination
David Neumark · 2018 · Journal of Economic Literature · 508 citations
Understanding whether labor market discrimination explains inferior labor market outcomes for many groups has drawn the attention of labor economists for decades— at least since the publication of ...
Discrimination in the Credential Society: An Audit Study of Race and College Selectivity in the Labor Market
S. Michael Gaddis · 2014 · Social Forces · 397 citations
Racial inequality in economic outcomes, particularly among the college educated, persists throughout US society. Scholars debate whether this inequality stems from racial differences in human capit...
Do Some Countries Discriminate More than Others? Evidence from 97 Field Experiments of Racial Discrimination in Hiring
Lincoln Quillian, Anthony Heath, Devah Pager et al. · 2019 · Sociological Science · 271 citations
Comparing levels of discrimination across countries can provide a window into large-scale social and political factors often described as the root of discrimination. Because of difficulties in measur...
The Value of Postsecondary Credentials in the Labor Market: An Experimental Study
David Deming, Noam Yuchtman, Amira Abulafi et al. · 2016 · American Economic Review · 271 citations
We study employers' perceptions of the value of postsecondary degrees using a field experiment. We randomly assign the sector and selectivity of institutions to fictitious resumes and apply to real...
Prejudice and Discrimination Toward Immigrants
Victoria M. Esses · 2020 · Annual Review of Psychology · 217 citations
Prejudice and discrimination toward immigrants, and the consequences of these negative attitudes and behavior, are key determinants of the economic, sociocultural, and civic-political future of rec...
Reading Guide
Foundational Papers
Start with Bertrand and Mullainathan (2003, 637 citations) for the core resume-audit method; Gaddis (2014, 397 citations) for credential interactions; Lahey (2008, 158 citations) for age-gender penalties.
Recent Advances
Quillian et al. (2017, 812 citations) meta-analysis shows stable discrimination; Neumark (2018, 508 citations) economic review; Quillian et al. (2019, 271 citations) cross-country comparisons.
Core Methods
Correspondence audits send matched resumes with gendered names to real vacancies (Bertrand & Mullainathan, 2003). Callback outcomes are typically modeled with logistic regression, and meta-regression is used to track temporal trends in pooled effects (Quillian et al., 2017).
How PapersFlow Helps You Research Gender Discrimination via Names in Hiring
Discover & Search
PapersFlow's Research Agent uses searchPapers to query 'gender names hiring callbacks field experiments,' retrieving Bertrand and Mullainathan (2003). citationGraph maps connections to Gaddis (2014) and Lahey (2008); findSimilarPapers expands to 50+ audit studies; exaSearch uncovers hidden preprints on name-gender inference.
Analyze & Verify
Analysis Agent employs readPaperContent on Bertrand and Mullainathan (2003) to extract callback rates; verifyResponse with CoVe cross-checks claims against Quillian et al. (2017); runPythonAnalysis replicates meta-regressions from Neumark (2018) using pandas for effect size pooling. GRADE grading scores methodological rigor in audit designs.
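Effect-size pooling of the kind mentioned above can be sketched with pandas and inverse-variance (fixed-effect) weights on log odds ratios; the study labels and numbers below are invented for illustration, not extracted from any cited paper:

```python
# Sketch: fixed-effect pooling of callback log odds ratios across audit
# studies using inverse-variance weights. Numbers are illustrative only.
import numpy as np
import pandas as pd

studies = pd.DataFrame({
    "study":  ["A", "B", "C", "D"],
    "log_or": [-0.35, -0.20, -0.50, -0.10],  # callback log odds ratios
    "se":     [0.15, 0.10, 0.20, 0.12],      # standard errors
})
w = 1 / studies["se"] ** 2                   # inverse-variance weights
pooled = np.average(studies["log_or"], weights=w)
se_pooled = np.sqrt(1 / w.sum())
lo, hi = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled
print(f"pooled OR = {np.exp(pooled):.2f}, "
      f"95% CI [{np.exp(lo):.2f}, {np.exp(hi):.2f}]")
```

Random-effects variants (e.g., DerSimonian-Laird) add a between-study variance term, which matters when audit contexts differ widely.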
Synthesize & Write
Synthesis Agent detects gaps like missing international gender-name studies; Writing Agent uses latexEditText for resume experiment tables, latexSyncCitations for 20-paper bibliographies, latexCompile for report PDFs, and exportMermaid for discrimination effect flowcharts.
Use Cases
"Replicate callback rates from Bertrand Mullainathan 2003 with modern data"
Research Agent → searchPapers → Analysis Agent → runPythonAnalysis (pandas meta-analysis on 10 audit papers) → CSV export of pooled gender effects with 95% CIs.
"Draft LaTeX review of gender name discrimination studies"
Synthesis Agent → gap detection → Writing Agent → latexEditText (intro/methods) → latexSyncCitations (Bertrand 2003 et al.) → latexCompile → PDF with integrated bibliography.
"Find code for resume audit name gender inference"
Research Agent → paperExtractUrls (Gaddis 2014) → Code Discovery → paperFindGithubRepo → githubRepoInspect → Python scripts for name-gender probability models.
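A name-gender probability model of the sort such scripts implement can be sketched with a simple count table; the names and counts below are invented placeholders, not real SSA baby-name figures:

```python
# Sketch: dictionary-based name-gender probability model, as audit studies
# use to select strongly gendered first names. Counts are invented.
from typing import Optional

# name -> (female_count, male_count); toy values for illustration
NAME_COUNTS = {
    "emily": (18000, 50),
    "greg": (30, 9000),
    "taylor": (6000, 5500),
}

def p_female(name: str) -> Optional[float]:
    """Return P(female | name) from the count table, or None if unseen."""
    counts = NAME_COUNTS.get(name.lower())
    if counts is None:
        return None
    f, m = counts
    return f / (f + m)

def strongly_gendered(name: str, threshold: float = 0.95) -> bool:
    """True when the name signals one gender at >= threshold probability."""
    p = p_female(name)
    return p is not None and (p >= threshold or p <= 1 - threshold)
```

Audit designs typically keep only names like "Emily" or "Greg" that clear a high threshold, and drop ambiguous names like "Taylor" that would blur the treatment.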
Automated Workflows
Deep Research workflow conducts systematic review: searchPapers (gender hiring names) → citationGraph → readPaperContent on top 50 → GRADE + runPythonAnalysis for meta-effects report. DeepScan applies 7-step verification: exaSearch → verifyResponse/CoVe on claims → contradiction flagging across Quillian (2017) vs. Lahey (2008). Theorizer generates hypotheses on algorithm-moderated biases from Bertrand (2003) and Williams et al. (2018).
Frequently Asked Questions
What defines gender discrimination via names in hiring?
Field experiments send identical resumes that differ only in gendered names to job ads and measure the resulting callback gap; the design follows race-focused correspondence audits such as Bertrand and Mullainathan (2003). Reported callback penalties for female names vary substantially by sector, occupation, and study.
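The callback gap itself is a two-proportion comparison; a toy sketch with invented counts (not data from any cited study):

```python
# Sketch: callback gap and two-proportion z-test for a gendered-name audit.
# All counts below are invented for illustration.
import math

male_callbacks, male_sent = 120, 1200      # 10.0% callback rate
female_callbacks, female_sent = 90, 1200   # 7.5% callback rate

p_m = male_callbacks / male_sent
p_f = female_callbacks / female_sent
gap = p_m - p_f                            # absolute callback gap
relative = 1 - p_f / p_m                   # share fewer callbacks, female names

# Pooled two-proportion z-test for the gap
p_pool = (male_callbacks + female_callbacks) / (male_sent + female_sent)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / male_sent + 1 / female_sent))
z = gap / se
print(f"gap = {gap:.3f}, relative = {relative:.0%}, z = {z:.2f}")
```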
What methods are used?
Audit/correspondence studies manipulate names like Emily vs. Greg (Bertrand & Mullainathan, 2003). Meta-analyses pool callback ratios (Quillian et al., 2017). Intersectional designs add motherhood cues (Lahey, 2008).
What are key papers?
Bertrand and Mullainathan (2003, 637 citations) pioneered the correspondence design with White- vs. Black-sounding names; gender effects were secondary. Gaddis (2014, 397 citations) tested how name signals interact with college credentials. Neumark (2018, 508 citations) reviews experimental work on labor market discrimination.
What open problems exist?
Cross-national generalizability (Quillian et al., 2019). Algorithmic hiring interactions (Williams et al., 2018). Long-term bias trends post-2019.
Research Names, Identity, and Discrimination with AI
PapersFlow provides specialized AI tools for Social Sciences researchers. Here are the most relevant for this topic:
Systematic Review
AI-powered evidence synthesis with documented search strategies
AI Literature Review
Automate paper discovery and synthesis across 474M+ papers
Deep Research Reports
Multi-source evidence synthesis with counter-evidence
Find Disagreement
Discover conflicting findings and counter-evidence
See how researchers in Social Sciences use PapersFlow
Field-specific workflows, example queries, and use cases.
Start Researching Gender Discrimination via Names in Hiring with AI
Search 474M+ papers, run AI-powered literature reviews, and write with integrated citations — all in one workspace.
See how PapersFlow works for Social Sciences researchers