Probing in Context: Toward Building Robust Classifiers via Probing Large Language Models

arXiv (Cornell University), 2023

Abstract
Large language models are able to learn new tasks in context, where they are provided with instructions and a few annotated examples. However, the effectiveness of in-context learning depends on the provided context, and performance on a downstream task can vary considerably with the instruction. Importantly, this dependency on the context can surface in unpredictable ways; for example, a seemingly more informative instruction might lead to worse performance. In this paper, we propose an alternative approach, which we term in-context probing. As in in-context learning, we contextualize the representation of the input with an instruction, but instead of decoding the output prediction, we probe the contextualized representation to predict the label. Through a series of experiments on a diverse set of classification tasks, we show that in-context probing is significantly more robust to changes in instructions. We further show that probing is competitive with or superior to finetuning, and can be particularly helpful for building classifiers on top of smaller models with only a hundred training examples.
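The abstract describes the core recipe: encode the instruction together with the input, take the model's contextualized representation, and train a lightweight probe on it rather than decoding a label. Below is a minimal sketch of that recipe, assuming a HuggingFace causal LM; the model name (gpt2), the instruction template, the choice of the final token's last-layer hidden state, and the logistic-regression probe are all illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch of in-context probing (illustrative, not the authors' exact code).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.linear_model import LogisticRegression

model_name = "gpt2"  # assumption: any causal LM that exposes hidden states
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, output_hidden_states=True)
model.eval()

def contextualized_representation(instruction: str, text: str) -> torch.Tensor:
    """Encode instruction + input; return the last-layer hidden state of the
    final token as the contextualized representation of the example."""
    prompt = f"{instruction}\n{text}"
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    with torch.no_grad():
        outputs = model(**inputs)
    # hidden_states[-1] has shape (batch, seq_len, hidden); take the last token.
    return outputs.hidden_states[-1][0, -1, :]

# Train a linear probe on a small labeled set instead of decoding labels.
instruction = "Classify the sentiment of the following review."  # assumed template
train_texts = ["great movie", "terrible plot"]                   # toy data
train_labels = [1, 0]

X = torch.stack(
    [contextualized_representation(instruction, t) for t in train_texts]
).numpy()
probe = LogisticRegression(max_iter=1000).fit(X, train_labels)

x_new = contextualized_representation(instruction, "loved it").numpy()
print(probe.predict([x_new]))  # predicted class label
```

Because the LM stays frozen and only the probe is trained, refitting on the order of a hundred examples is cheap, which is the low-data regime the abstract highlights.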
Keywords
building robust classifiers, context, language, models