Abstract
We present a survey of methods for assessing and enhancing the quality of online discussions, focusing on the potential of LLMs. While online discourse aims, at least in theory, to foster mutual understanding, it often devolves into harmful exchanges such as hate speech, threatening social cohesion and democratic values. Recent advancements in LLMs enable artificial facilitation agents to not only moderate content but also actively improve the quality of interactions. Our survey synthesizes ideas from NLP and the Social Sciences to provide (a) a new taxonomy of discussion quality evaluation, (b) an overview of intervention and facilitation strategies, (c) a new taxonomy of conversation facilitation datasets, and (d) an LLM-oriented roadmap of good practices and future research directions from technological and societal perspectives.
They have an interesting terminology section on page 2:
- Facilitation vs. Moderation
- Ex-Post moderation: moderation happening just after the user has posted some content
- Discussion, deliberation, dialogue, debate: the survey focuses on discussions and on deliberations (structured discussions centered on opinion sharing), in contrast to dialogues (collaborative) and debates (competitive, organized).
Questions:
- What is the taxonomy they use on discussion quality evaluation?
- What is the LLM-oriented roadmap of good practices and future research directions?
- Evaluation: LLMs can be used as scalable, cost-effective automated annotators for discussion quality, though they currently struggle with subjective dimensions like pragmatic understanding
- Model and prompt selection: carefully select models and tailor prompt engineering to the specific quality dimension being evaluated
- Intervention adaptation: facilitation strategies must be adapted to the specific legal frameworks, rules, and social norms of the target community or platform
- Simulation: experiments should utilize… actually I’m gonna stop here. This isn’t relevant.