PapersFlow Research Brief


Expert finding and Q&A systems
Research Guide

What is Expert finding and Q&A systems?

Expert finding and Q&A systems are computational methods for identifying and retrieving experts from online communities, especially on question answering platforms and social media, to route questions effectively and assess answer quality.

This field encompasses 11,077 works focused on expertise identification, question routing, and relevance criteria in community question answering. Techniques address user motivations, information seeker satisfaction, and knowledge sharing in social media environments. Evaluation methods like cumulated gain have become standard for ranking highly relevant experts and answers first.

Topic Hierarchy

Physical Sciences → Computer Science → Information Systems → Expert finding and Q&A systems
Papers: 11.1K
5yr Growth: N/A
Total Citations: 83.8K


Why It Matters

Expert finding and Q&A systems enable efficient knowledge sharing in online communities by routing questions to capable answerers, improving response quality on platforms like Stack Overflow or academic forums. For instance, "ArnetMiner" by Tang et al. (2008) extracts researcher profiles from the web and integrates publication data into networks, supporting expert discovery in academic social networks with 2093 citations. "Finding high-quality content in social media" by Agichtein et al. (2008) identifies reliable user-generated content amid spam, aiding platforms in prioritizing expert contributions, as evidenced by its 1238 citations and application to social media quality assessment.

Reading Guide

Where to Start

"Cumulated gain-based evaluation of IR techniques" by Järvelin and Kekäläinen (2002) first, as it provides the foundational metric for assessing expert retrieval rankings with 4504 citations.

Key Papers Explained

"Cumulated gain-based evaluation of IR techniques" by Järvelin and Kekäläinen (2002) establishes graded relevance evaluation, which "Learning to rank for information retrieval" by Liu (2010) builds on via pointwise, pairwise, and listwise methods for ranking experts. "ArnetMiner" by Tang et al. (2008) applies these to extract profiles for academic expert networks, while "Finding high-quality content in social media" by Agichtein et al. (2008) extends quality assessment to social Q&A. "Accurately interpreting clickthrough data as implicit feedback" by Joachims et al. (2005) refines feedback signals for such systems.

Paper Timeline

1999 · Models in information behaviour research · 2.0K cites
2002 · Cumulated gain-based evaluation of IR techniques · 4.5K cites (most cited)
2005 · Improving recommendation lists through topic diversification · 1.8K cites
2008 · ArnetMiner · 2.1K cites
2010 · Learning to rank for information retrieval · 1.9K cites
2012 · Consensus measurement in Delphi studies · 1.6K cites
2015 · Collaborative Deep Learning for Recommender Systems · 1.6K cites

Papers are ordered chronologically, with the most-cited paper marked.

Advanced Directions

Research continues on learning to rank and on integrating implicit feedback, directions anchored by highly cited works such as Liu (2010) and Joachims et al. (2005); no recent preprints in this set alter the core methods.

Papers at a Glance

| # | Paper | Year | Venue | Citations |
|---|-------|------|-------|-----------|
| 1 | Cumulated gain-based evaluation of IR techniques | 2002 | ACM Transactions on In... | 4.5K |
| 2 | ArnetMiner | 2008 | | 2.1K |
| 3 | Models in information behaviour research | 1999 | Journal of Documentation | 2.0K |
| 4 | Learning to rank for information retrieval | 2010 | | 1.9K |
| 5 | Improving recommendation lists through topic diversification | 2005 | | 1.8K |
| 6 | Consensus measurement in Delphi studies | 2012 | Technological Forecast... | 1.6K |
| 7 | Collaborative Deep Learning for Recommender Systems | 2015 | | 1.6K |
| 8 | Accurately interpreting clickthrough data as implicit feedback | 2005 | | 1.4K |
| 9 | Crowdsourcing systems on the World-Wide Web | 2011 | Communications of the ACM | 1.3K |
| 10 | Finding high-quality content in social media | 2008 | | 1.2K |

Frequently Asked Questions

What is cumulated gain in expert finding evaluation?

Cumulated gain measures retrieval effectiveness by rewarding systems that place highly relevant documents or experts at the top of large output sets. "Cumulated gain-based evaluation of IR techniques" by Järvelin and Kekäläinen (2002) introduced this metric, which has 4504 citations and prioritizes graded relevance over binary judgments.
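As an illustrative sketch, here is one common formulation of discounted cumulated gain and its normalized variant (the paper itself explores several discount choices, including leaving the top ranks undiscounted, so this is a representative form rather than the exact original definition):

```python
import math

def dcg(relevances):
    """Discounted cumulated gain: graded relevance, discounted by log2 of rank."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg(relevances):
    """Normalize against the ideal (descending-relevance) ordering."""
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

# graded relevance (0-3) of the experts a system returned, in rank order
ranking = [3, 2, 3, 0, 1, 2]
print(round(ndcg(ranking), 3))  # a value in (0, 1]; 1.0 means the ideal order
```

Because the discount shrinks with rank, swapping a highly relevant expert into an early position raises the score far more than the same swap lower down, which is exactly the behavior the metric is designed to reward.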

How does ArnetMiner support expert finding?

ArnetMiner extracts researcher profiles automatically from the web and integrates publication data into academic social networks. Tang et al. (2008) describe its focus on mining expertise, cited 2093 times for enabling expert retrieval in scholarly contexts.

What methods improve answer quality in Q&A systems?

Methods identify high-quality user-generated content by analyzing social media contributions beyond spam. "Finding high-quality content in social media" by Agichtein et al. (2008) proposes techniques for this, with 1238 citations, emphasizing relevance criteria and user satisfaction.

How is implicit feedback used in question routing?

Clickthrough data serves as implicit feedback for ranking, though biased by user examination patterns. Joachims et al. (2005) in "Accurately interpreting clickthrough data as implicit feedback" validate this against eyetracking and judgments, with 1383 citations.
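The "click > skip above" preference heuristic that Joachims et al. validate can be sketched as follows (function and variable names here are illustrative, not from the paper):

```python
def click_preferences(ranking, clicked):
    """Joachims et al.'s 'click > skip above' heuristic: a clicked result
    is inferred to be preferred over every higher-ranked result that the
    user examined but did not click."""
    clicked = set(clicked)
    prefs = []
    for i, doc in enumerate(ranking):
        if doc in clicked:
            prefs.extend((doc, skipped)
                         for skipped in ranking[:i] if skipped not in clicked)
    return prefs

print(click_preferences(["a", "b", "c", "d"], ["c"]))
# → [('c', 'a'), ('c', 'b')]  ("c" preferred over skipped "a" and "b")
```

The key insight is that clicks yield only relative preferences, not absolute relevance labels, because users scan from the top and rarely examine results below the last click.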

What role does learning to rank play in expert retrieval?

Learning to rank applies pointwise, pairwise, and listwise approaches to optimize expert and answer retrieval. Liu (2010) in "Learning to rank for information retrieval" covers these methods, cited 1946 times for their relevance to Q&A systems.
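A minimal pairwise learning-to-rank sketch, assuming hypothetical two-dimensional answer features (votes and answerer reputation, neither taken from Liu's survey), might look like this:

```python
def score(w, x):
    """Linear ranking score: dot product of weights and features."""
    return sum(wi * xi for wi, xi in zip(w, x))

def train_pairwise(pairs, dim, lr=0.1, epochs=50):
    """Perceptron-style pairwise training: for each (preferred, other)
    feature pair, nudge the weights until the preferred item scores higher."""
    w = [0.0] * dim
    for _ in range(epochs):
        for better, worse in pairs:
            if score(w, better) <= score(w, worse):  # ordering violated
                w = [wi + lr * (b - c) for wi, b, c in zip(w, better, worse)]
    return w

# hypothetical features per candidate answer: [votes, answerer reputation]
pairs = [([3.0, 1.0], [1.0, 0.5]),   # first answer judged better
         ([2.0, 2.0], [0.5, 1.0])]
w = train_pairwise(pairs, dim=2)
assert all(score(w, b) > score(w, c) for b, c in pairs)
```

Pointwise methods would instead regress an absolute relevance label per item, and listwise methods optimize a loss over the whole ranked list; the pairwise form shown here sits between the two.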

Open Research Questions

  • How can biases in implicit feedback from clicks be fully corrected for unbiased expert ranking in social Q&A?
  • What integration of deep learning improves collaborative filtering for sparse rating data in expert recommendation?
  • Which diversification techniques best balance topic coverage while maintaining expert relevance in recommendation lists?
  • How do models of information behavior predict user motivations for contributing high-quality answers in communities?
  • What metrics beyond cumulated gain capture satisfaction in real-time question routing to experts?

Research Expert finding and Q&A systems with AI

PapersFlow provides specialized AI tools for Computer Science researchers.

See how researchers in Computer Science & AI use PapersFlow

Field-specific workflows, example queries, and use cases.

Computer Science & AI Guide

Start Researching Expert finding and Q&A systems with AI

Search 474M+ papers, run AI-powered literature reviews, and write with integrated citations — all in one workspace.

See how PapersFlow works for Computer Science researchers