Prompting and Fine-Tuning Open-Sourced Large Language Models for Stance Classification
arXiv (2023)
Abstract
Stance classification, the task of predicting the viewpoint of an author on a
subject of interest, has long been a focal point of research in domains ranging
from social science to machine learning. Current stance detection methods rely
predominantly on manual annotation of sentences, followed by training a
supervised machine learning model. However, this manual annotation process is
laborious, which hampers the approach's ability to
generalize across different contexts. In this work, we investigate the use of
Large Language Models (LLMs) as a stance detection methodology that can reduce
or even eliminate the need for manual annotations. We investigate 10
open-source models and 7 prompting schemes, finding that LLMs are competitive
with in-domain supervised models but are not necessarily consistent in their
performance. We also fine-tuned the LLMs, but found that the fine-tuning
process does not necessarily lead to better performance. In general, we
observe that LLMs do not routinely outperform smaller supervised machine
learning models, and thus call for stance detection to become a benchmark
that LLMs also optimize for. The code used in this study is available at
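To make the abstract's prompting setup concrete, the sketch below shows one plausible zero-shot stance-classification prompt and a simple label parser. The template wording, the `FAVOR`/`AGAINST`/`NONE` label set, and the function names are illustrative assumptions, not the paper's actual prompting schemes.

```python
# Illustrative zero-shot stance-classification prompting, in the spirit of
# the schemes the abstract describes. The template and labels are
# assumptions, not the paper's exact prompts.

STANCE_LABELS = ["FAVOR", "AGAINST", "NONE"]

def build_stance_prompt(text: str, target: str) -> str:
    """Compose a zero-shot prompt asking an LLM for the author's stance."""
    return (
        "Classify the stance of the text below toward the target.\n"
        f"Answer with exactly one of: {', '.join(STANCE_LABELS)}.\n\n"
        f"Target: {target}\n"
        f"Text: {text}\n"
        "Stance:"
    )

def parse_stance(completion: str) -> str:
    """Map a raw model completion onto one of the allowed labels."""
    upper = completion.strip().upper()
    for label in STANCE_LABELS:
        if upper.startswith(label):
            return label
    return "NONE"  # fall back when the model answers off-format

if __name__ == "__main__":
    prompt = build_stance_prompt(
        "Wind farms are ruining our coastline.", "renewable energy"
    )
    print(prompt)
    print(parse_stance(" against, clearly."))  # -> AGAINST
```

The prompt would be sent to any of the open-source LLMs under study; constraining the answer to a fixed label set and parsing the completion leniently is one common way to turn free-form generations into classifier outputs.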