Argument Quality Assessment in the Age of Instruction-Following Large Language Models
CoRR (2024)
Abstract
The computational treatment of arguments on controversial issues has been
subject to extensive NLP research, due to its envisioned impact on opinion
formation, decision making, writing education, and the like. A critical task in
any such application is the assessment of an argument's quality - but it is
also particularly challenging. In this position paper, we start from a brief
survey of argument quality research, where we identify the diversity of quality
notions and the subjectiveness of their perception as the main hurdles towards
substantial progress on argument quality assessment. We argue that the
capabilities of instruction-following large language models (LLMs) to leverage
knowledge across contexts enable a much more reliable assessment. Rather than
just fine-tuning LLMs towards leaderboard chasing on assessment tasks, they
need to be instructed systematically with argumentation theories and scenarios
as well as with ways to solve argument-related problems. We discuss the
real-world opportunities and ethical issues emerging thereby.