PapersFlow Research Brief
Expert finding and Q&A systems
Research Guide
What are expert finding and Q&A systems?
Expert finding and Q&A systems are computational methods for identifying and retrieving experts from online communities, especially on question answering platforms and social media, to route questions effectively and assess answer quality.
This field encompasses 11,077 works focused on expertise identification, question routing, and relevance criteria in community question answering. Techniques address user motivations, information-seeker satisfaction, and knowledge sharing in social media environments. Evaluation metrics like cumulated gain have become standard for rewarding rankings that place highly relevant experts and answers first.
Topic Hierarchy
Research Sub-Topics
Expertise Retrieval in Community Question Answering
This sub-topic focuses on algorithms and models for identifying experts in platforms like Stack Overflow or Yahoo Answers. Researchers develop techniques for expertise profiling and question-expert matching.
Question Routing in Expert Finding Systems
This sub-topic explores methods to direct user queries to the most suitable answerers based on topical expertise. Researchers investigate routing models, relevance ranking, and performance evaluation metrics.
Answer Quality Assessment in Social Q&A
This sub-topic examines criteria and machine learning approaches for evaluating response quality and relevance. Researchers study user satisfaction, acceptance prediction, and quality prediction models.
User Motivations in Knowledge Sharing Platforms
This sub-topic investigates psychological and social factors driving participation in expert finding and Q&A sites. Researchers analyze motivations, disincentives, and behavioral models.
Expertise Identification from Social Media
This sub-topic develops techniques to detect domain experts using social media signals like posts and interactions. Researchers focus on topic modeling, network analysis, and implicit expertise inference.
Why It Matters
Expert finding and Q&A systems enable efficient knowledge sharing in online communities by routing questions to capable answerers, improving response quality on platforms like Stack Overflow and academic forums. For instance, "ArnetMiner" by Tang et al. (2008) extracts researcher profiles from the web and integrates them with publication data into academic social networks, supporting expert discovery (2,093 citations). "Finding high-quality content in social media" by Agichtein et al. (2008) separates reliable user-generated content from spam, helping platforms prioritize expert contributions (1,238 citations).
Reading Guide
Where to Start
Start with "Cumulated gain-based evaluation of IR techniques" by Järvelin and Kekäläinen (2002), as it provides the foundational metric for assessing expert-retrieval rankings (4,504 citations).
Key Papers Explained
"Cumulated gain-based evaluation of IR techniques" by Järvelin and Kekäläinen (2002) establishes graded relevance evaluation, which "Learning to rank for information retrieval" by Liu (2010) builds on via pointwise, pairwise, and listwise methods for ranking experts. "ArnetMiner" by Tang et al. (2008) applies these to extract profiles for academic expert networks, while "Finding high-quality content in social media" by Agichtein et al. (2008) extends quality assessment to social Q&A. "Accurately interpreting clickthrough data as implicit feedback" by Joachims et al. (2005) refines feedback signals for such systems.
Paper Timeline
Most-cited paper highlighted in red. Papers ordered chronologically.
Advanced Directions
Research continues on learning to rank and on integrating implicit feedback, as seen in highly cited works such as Liu (2010) and Joachims et al. (2005); no recent preprints have displaced these core methods.
Papers at a Glance
| # | Paper | Year | Venue | Citations | Open Access |
|---|---|---|---|---|---|
| 1 | Cumulated gain-based evaluation of IR techniques | 2002 | ACM Transactions on In... | 4.5K | ✕ |
| 2 | ArnetMiner | 2008 | — | 2.1K | ✕ |
| 3 | Models in information behaviour research | 1999 | Journal of Documentation | 2.0K | ✕ |
| 4 | Learning to rank for information retrieval | 2010 | — | 1.9K | ✕ |
| 5 | Improving recommendation lists through topic diversification | 2005 | — | 1.8K | ✕ |
| 6 | Consensus measurement in Delphi studies | 2012 | Technological Forecast... | 1.6K | ✕ |
| 7 | Collaborative Deep Learning for Recommender Systems | 2015 | — | 1.6K | ✕ |
| 8 | Accurately interpreting clickthrough data as implicit feedback | 2005 | — | 1.4K | ✕ |
| 9 | Crowdsourcing systems on the World-Wide Web | 2011 | Communications of the ACM | 1.3K | ✕ |
| 10 | Finding high-quality content in social media | 2008 | — | 1.2K | ✕ |
Frequently Asked Questions
What is cumulated gain in expert finding evaluation?
Cumulated gain measures retrieval effectiveness by crediting rankings that place highly relevant documents or experts near the top of large result sets. "Cumulated gain-based evaluation of IR techniques" by Järvelin and Kekäläinen (2002) introduced this metric (4,504 citations), prioritizing graded relevance over binary judgments.
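The discounted variant of this metric can be sketched in a few lines. The sketch below uses the now-common `rel / log2(rank + 1)` discount rather than Järvelin and Kekäläinen's exact formulation, and the relevance grades in the example are invented:

```python
import math

def dcg(relevances):
    """Discounted cumulated gain with the common log2(rank + 1) discount.
    (Järvelin & Kekäläinen's original leaves the first b ranks undiscounted.)"""
    return sum(rel / math.log2(rank + 1)
               for rank, rel in enumerate(relevances, start=1))

def ndcg(relevances):
    """Normalize by the ideal (descending-relevance) ordering of the same items."""
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

# Invented graded relevance labels (0-3) for a ranked list of five experts.
print(round(ndcg([3, 2, 3, 0, 1]), 3))  # → 0.972
```

Because the discount shrinks logarithmically, swapping a relevant expert from rank 1 to rank 3 costs far more than a swap deep in the list, which matches the intuition that early positions matter most.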
How does ArnetMiner support expert finding?
ArnetMiner extracts researcher profiles automatically from the web and integrates publication data into academic social networks. Tang et al. (2008) describe its focus on mining expertise, cited 2093 times for enabling expert retrieval in scholarly contexts.
What methods improve answer quality in Q&A systems?
Methods separate high-quality user-generated content from spam and low-effort posts by analyzing features of contributions and their authors. "Finding high-quality content in social media" by Agichtein et al. (2008) proposes such techniques (1,238 citations), emphasizing relevance criteria and user satisfaction.
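The feature-based approach can be sketched as a simple extraction step feeding a downstream classifier. The specific features and field names below are illustrative assumptions, not Agichtein et al.'s actual feature set:

```python
def answer_features(answer):
    """Toy content and author features for answer-quality classification,
    in the spirit of feature-based quality assessment. The features and
    dict keys here are illustrative assumptions only."""
    words = answer["text"].split()
    return {
        "n_words": len(words),
        "avg_word_len": sum(len(w) for w in words) / max(len(words), 1),
        "has_link": "http" in answer["text"],
        "answerer_accept_rate": answer.get("accept_rate", 0.0),
    }

example = {"text": "See the official docs at http://example.com for details.",
           "accept_rate": 0.8}
print(answer_features(example)["has_link"])  # → True
```

A real system would combine many more content, usage, and community signals and train a supervised model on labeled high- and low-quality answers.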
How is implicit feedback used in question routing?
Clickthrough data serves as implicit feedback for ranking, though it is biased by how users examine result lists. In "Accurately interpreting clickthrough data as implicit feedback", Joachims et al. (2005) validate strategies for extracting preferences from clicks against eye-tracking data and explicit relevance judgments (1,383 citations).
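One of the paper's preference-extraction strategies, "Click > Skip Above", can be sketched as follows (a simplified reading of the strategy; the answerer IDs are hypothetical):

```python
def click_skip_above(ranking, clicked):
    """'Click > Skip Above' (simplified): a clicked result is inferred to be
    preferred over every non-clicked result ranked above it."""
    prefs = []
    for i, result in enumerate(ranking):
        if result in clicked:
            prefs.extend((result, above) for above in ranking[:i]
                         if above not in clicked)
    return prefs

# Hypothetical answerer IDs; the user clicked the results at ranks 2 and 4.
print(click_skip_above(["a", "b", "c", "d"], clicked={"b", "d"}))
# → [('b', 'a'), ('d', 'a'), ('d', 'c')]
```

The resulting pairs are relative preferences, not absolute relevance labels, which is exactly why they pair naturally with pairwise learning-to-rank methods.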
What role does learning to rank play in expert retrieval?
Learning to rank applies pointwise, pairwise, and listwise approaches to optimize expert and answer retrieval. Liu (2010) surveys these methods in "Learning to rank for information retrieval" (1,946 citations), including their relevance to Q&A systems.
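The pairwise family reduces ranking to classifying preference pairs. A minimal perceptron-style sketch of that idea follows; it is not any specific published algorithm, and the expert features are invented:

```python
def train_pairwise(pairs, dim, epochs=50, lr=0.1):
    """Minimal perceptron-style pairwise ranker: for each preference
    (better, worse), nudge w until w·better > w·worse. A sketch of the
    pairwise idea only, not a specific published method."""
    w = [0.0] * dim
    for _ in range(epochs):
        for better, worse in pairs:
            margin = sum(wi * (b - c) for wi, b, c in zip(w, better, worse))
            if margin <= 0:  # pair ordered wrongly (or tied): update w
                w = [wi + lr * (b - c) for wi, b, c in zip(w, better, worse)]
    return w

def score(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

# Invented expert features: (answer acceptance rate, topic overlap with query).
pairs = [((0.9, 0.8), (0.2, 0.3)), ((0.7, 0.6), (0.4, 0.1))]
w = train_pairwise(pairs, dim=2)
assert score(w, (0.9, 0.8)) > score(w, (0.2, 0.3))
```

Pointwise methods would instead regress an absolute relevance score per expert, and listwise methods optimize a metric such as nDCG over the whole ranked list.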
Open Research Questions
- How can biases in implicit feedback from clicks be fully corrected for unbiased expert ranking in social Q&A?
- What integration of deep learning improves collaborative filtering for sparse rating data in expert recommendation?
- Which diversification techniques best balance topic coverage while maintaining expert relevance in recommendation lists?
- How do models of information behavior predict user motivations for contributing high-quality answers in communities?
- What metrics beyond cumulated gain capture satisfaction in real-time question routing to experts?
Recent Trends
The field comprises 11,077 works with a steady focus on expertise identification and question routing, anchored by classics such as "Cumulated gain-based evaluation of IR techniques" (2002; 4,504 citations) and "ArnetMiner" (2008; 2,093 citations). The absence of recent preprints reshaping the area points to continued reliance on established IR techniques.
Research Expert finding and Q&A systems with AI
PapersFlow provides specialized AI tools for Computer Science researchers. Here are the most relevant for this topic:
AI Literature Review
Automate paper discovery and synthesis across 474M+ papers
Code & Data Discovery
Find datasets, code repositories, and computational tools
Deep Research Reports
Multi-source evidence synthesis with counter-evidence
AI Academic Writing
Write research papers with AI assistance and LaTeX support
See how researchers in Computer Science & AI use PapersFlow
Field-specific workflows, example queries, and use cases.
Start Researching Expert finding and Q&A systems with AI
Search 474M+ papers, run AI-powered literature reviews, and write with integrated citations — all in one workspace.