Subtopic Deep Dive

Cyberbullying Detection in Conversations
Research Guide

What is Cyberbullying Detection in Conversations?

Cyberbullying Detection in Conversations applies sequence modeling and graph-based methods to identify bullying patterns across multi-turn threaded discussions and user interactions on social media platforms.

Researchers focus on temporal dynamics, victim-perpetrator relationships, and contextual cues in conversations to improve detection accuracy. Key datasets include benchmark corpora for hate speech and cyberbullying (Poletto et al., 2020, 362 citations). Over 20 papers since 2014 address conversational aspects, building on foundational work like Dadvar (2014, 21 citations).

11 Curated Papers · 3 Key Challenges

Why It Matters

Detecting cyberbullying in conversations enables timely platform interventions, reducing harm to vulnerable users on sites like Twitter. Models capturing multi-turn harassment support mental health protections by flagging escalating conflicts early (Zhang et al., 2018, 174 citations). Peer-to-peer analysis reveals instigator-target dynamics, aiding targeted moderation (ElSherief et al., 2018, 152 citations). Community interaction studies inform scalable detection across web platforms (Kumar et al., 2018, 307 citations).

Key Research Challenges

Contextual Ambiguity in Threads

Hate speech detection struggles with sarcasm and evolving context across conversation turns. Gao and Huang (2017, 223 citations) show context-aware models outperform isolated text classifiers. Datasets lack sufficient threaded annotations (Poletto et al., 2020, 362 citations).
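The intuition behind context-aware classification can be shown with a minimal sketch: instead of scoring each comment in isolation, tokens from earlier turns in the thread contribute (at reduced weight) to the score. The lexicon, weights, and threshold below are invented for illustration, not taken from any of the cited papers.

```python
# Toy context-aware scoring: a turn is scored on its own tokens plus
# down-weighted tokens from preceding turns in the thread.
ABUSE_LEXICON = {"idiot": 1.0, "loser": 1.0, "stupid": 0.8}

def score_turn(turn, context_turns, context_weight=0.5):
    """Score a turn; context tokens contribute at a reduced weight."""
    def lex_score(text):
        return sum(ABUSE_LEXICON.get(tok, 0.0) for tok in text.lower().split())
    return lex_score(turn) + context_weight * sum(lex_score(t) for t in context_turns)

def classify_turn(turn, context_turns, threshold=0.9):
    return score_turn(turn, context_turns) >= threshold

# An ambiguous reply is flagged only when the surrounding thread is abusive.
thread = ["You are such an idiot", "nobody asked you, loser"]
print(classify_turn("same to you", []))      # False: harmless in isolation
print(classify_turn("same to you", thread))  # True: abusive context
```

The same ambiguous reply flips from negative to positive once thread context is included, which is exactly the gap isolated-text classifiers miss.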

Temporal Dynamics Modeling

Capturing escalation patterns in multi-turn interactions requires sequence models handling long dependencies. Zhang et al. (2018, 174 citations) identify early failure signs in conversations. Limited benchmarks hinder progress (Vidgen and Derczynski, 2020, 269 citations).
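A minimal sketch of escalation detection: given per-turn toxicity scores from any upstream classifier, flag the conversation once scores jump sharply within a sliding window. The window size, threshold, and scores are illustrative assumptions, not values from Zhang et al. (2018).

```python
# Toy escalation detector: flag the first turn whose toxicity score
# exceeds the score `window` turns earlier by at least `jump`.
def first_escalation_turn(scores, window=3, jump=0.3):
    """Return the index of the first escalating turn, or None."""
    for i in range(window, len(scores)):
        if scores[i] - scores[i - window] >= jump:
            return i
    return None

scores = [0.05, 0.10, 0.12, 0.45, 0.70]   # per-turn toxicity estimates
print(first_escalation_turn(scores))       # 3: escalation detected at turn 3
```

Real sequence models (e.g., recurrent networks over turn embeddings) learn such trends rather than hard-coding them, but the flagging logic for early intervention looks much the same.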

Victim-Perpetrator Graph Extraction

Graph-based methods face challenges in noisy social media graphs for identifying bullying roles. ElSherief et al. (2018, 152 citations) analyze peer-to-peer hate targets. Community conflicts add complexity (Kumar et al., 2018, 307 citations).
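Role extraction from an interaction graph can be sketched with directed edges recording who sent a hostile message to whom: users with high out-degree are candidate instigators, users with high in-degree candidate targets. The event list and degree threshold are invented for illustration.

```python
# Toy victim-perpetrator extraction from a directed interaction graph.
from collections import Counter

hostile_events = [            # (sender, recipient) of hostile messages
    ("alice", "bob"), ("alice", "bob"), ("alice", "carol"),
    ("dave", "bob"),
]

out_deg = Counter(src for src, _ in hostile_events)  # hostility sent
in_deg = Counter(dst for _, dst in hostile_events)   # hostility received

instigators = [u for u, d in out_deg.items() if d >= 2]
targets = [u for u, d in in_deg.items() if d >= 2]
print(instigators)  # ['alice']
print(targets)      # ['bob']
```

On real platforms the hard part is upstream of this step: deciding which noisy interactions count as hostile edges at all, which is where the cited peer-to-peer analyses focus.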

Essential Papers

1. Social Media, Political Polarization, and Political Disinformation: A Review of the Scientific Literature

Joshua A. Tucker, Andrew M. Guess, Pablo Barberá et al. · 2018 · SSRN Electronic Journal · 1.1K citations

2. Resources and benchmark corpora for hate speech detection: a systematic review

Fabio Poletto, Valerio Basile, Manuela Sanguinetti et al. · 2020 · Language Resources and Evaluation · 362 citations

Hate Speech in social media is a complex phenomenon, whose detection has recently gained significant traction in the Natural Language Processing community, as attested by several recent re...

3. Community Interaction and Conflict on the Web

Srijan Kumar, William L. Hamilton, Jure Leskovec et al. · 2018 · 307 citations

Users organize themselves into communities on web platforms. These communities can interact with one another, often leading to conflicts and toxic interactions. However, little is known about the...

4. Directions in abusive language training data, a systematic review: Garbage in, garbage out

Bertie Vidgen, Leon Derczynski · 2020 · PLoS ONE · 269 citations

Data-driven and machine learning based approaches for detecting, categorising and measuring abusive content such as hate speech and harassment have gained traction due to their scalability, robustn...

5. Human-Machine Collaboration for Content Regulation

Shagun Jhaver, Iris Birman, Éric Gilbert et al. · 2019 · ACM Transactions on Computer-Human Interaction · 261 citations

What one may say on the internet is increasingly controlled by a mix of automated programs, and decisions made by paid and volunteer human moderators. On the popular social media site Reddit, moder...

6. Detecting Online Hate Speech Using Context Aware Models

Lei Gao, Ruihong Huang · 2017 · 223 citations

In the wake of a polarizing election, the cyber world is laden with hate speech. Context accompanying a hate speech text is useful for identifying hate speech, which however has been largely overloo...

7. Challenges and frontiers in abusive content detection

Bertie Vidgen, Alex H. S. Harris, Dong Nguyen et al. · 2019 · 192 citations

Online abusive content detection is an inherently difficult task. It has received considerable attention from academia, particularly within the computational linguistics community, and performance ...

Reading Guide

Foundational Papers

Start with Dadvar (2014, 21 citations) for early cyberbullying detection uniting expert knowledge and machine learning, providing a baseline against which conversational extensions build.

Recent Advances

Study Poletto et al. (2020, 362 citations) for benchmark corpora, Zhang et al. (2018, 174 citations) for early signs of conversational failure, and ElSherief et al. (2018, 152 citations) for peer-to-peer hate dynamics.

Core Methods

Sequence models for temporal patterns (Zhang et al., 2018), context-aware classifiers (Gao and Huang, 2017), graph analysis for communities (Kumar et al., 2018).

How PapersFlow Helps You Research Cyberbullying Detection in Conversations

Discover & Search

Research Agent uses searchPapers and exaSearch to query 'cyberbullying detection conversational models', retrieving Poletto et al. (2020); citationGraph then reveals 362 citing works on benchmarks, and findSimilarPapers surfaces Gao and Huang (2017) for context-aware extensions.

Analyze & Verify

Analysis Agent applies readPaperContent on Zhang et al. (2018) to extract conversational failure metrics, verifyResponse with CoVe checks model claims against Dadvar (2014), and runPythonAnalysis reimplements sequence classifiers with GRADE scoring for F1 verification on hate speech splits.

Synthesize & Write

Synthesis Agent detects gaps in multi-turn datasets via contradiction flagging across Vidgen et al. (2019) and Poletto et al. (2020), while Writing Agent uses latexEditText for methods sections, latexSyncCitations for 50+ refs, and latexCompile to generate arXiv-ready surveys with exportMermaid for escalation flowcharts.

Use Cases

"Reproduce cyberbullying sequence model from Zhang et al. 2018 with modern baselines"

Research Agent → searchPapers('conversational failure cyberbullying') → Analysis Agent → readPaperContent + runPythonAnalysis (pandas LSTM baseline on extracted data) → GRADE F1 scores output with statistical plots.

"Draft survey on graph-based cyberbullying detection in threads"

Synthesis Agent → gap detection (Kumar 2018 + ElSherief 2018) → Writing Agent → latexEditText(draft) → latexSyncCitations(20 papers) → latexCompile → PDF with victim graphs via exportMermaid.

"Find GitHub code for context-aware hate detection models"

Research Agent → searchPapers('Gao Huang 2017 context hate') → Code Discovery → paperExtractUrls → paperFindGithubRepo → githubRepoInspect → verified impl with runPythonAnalysis benchmarks.

Automated Workflows

Deep Research workflow conducts systematic review: searchPapers(50+ cyberbullying conversation papers) → citationGraph → DeepScan(7-step verify on top-10 like Poletto 2020) → structured report. Theorizer generates hypotheses on graph-sequence hybrids from Kumar (2018) + Zhang (2018). DeepScan with CoVe chain verifies temporal model claims across datasets.

Frequently Asked Questions

What defines cyberbullying detection in conversations?

It uses sequence and graph models to detect bullying patterns in multi-turn threads, focusing on temporal escalation and user roles (Zhang et al., 2018).

What are key methods?

Context-aware classifiers (Gao and Huang, 2017), conversational failure detection (Zhang et al., 2018), and peer-to-peer graphs (ElSherief et al., 2018).

What are major papers?

Foundational: Dadvar (2014); High-impact: Poletto et al. (2020, 362 cites), Kumar et al. (2018, 307 cites), Gao and Huang (2017, 223 cites).

What open problems exist?

Scarce threaded datasets, sarcasm and shifting context across conversation turns, and scalable graph extraction from noisy interactions (Vidgen et al., 2019; Vidgen and Derczynski, 2020).

Research Hate Speech and Cyberbullying Detection with AI

PapersFlow provides specialized AI tools for Computer Science researchers. Here are the most relevant for this topic:

See how researchers in Computer Science & AI use PapersFlow

Field-specific workflows, example queries, and use cases.

Computer Science & AI Guide

Start Researching Cyberbullying Detection in Conversations with AI

Search 474M+ papers, run AI-powered literature reviews, and write with integrated citations — all in one workspace.

See how PapersFlow works for Computer Science researchers