Promptly Predicting Structures: The Return of Inference
CoRR (2024)
Abstract
Prompt-based methods have been used extensively across NLP to build zero- and
few-shot label predictors. Many NLP tasks are naturally structured: that is,
their outputs consist of multiple labels which constrain each other. Annotating
data for such tasks can be cumbersome. Can the promise of the prompt-based
paradigm be extended to such structured outputs? In this paper, we present a
framework for constructing zero- and few-shot linguistic structure predictors.
Our key insight is that we can use structural constraints – and combinatorial
inference derived from them – to filter out inconsistent structures predicted
by large language models. We instantiate this framework on two structured
prediction tasks and five datasets. Across all cases, our results show that
enforcing consistency not only constructs structurally valid outputs, but also
improves performance over the unconstrained variants.
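The core idea of filtering inconsistent structures can be illustrated with a toy sketch. The example below is hypothetical (the BIO tagging constraint, the per-token scores, and the brute-force search are illustrative stand-ins, not the paper's actual tasks or inference algorithm): given per-label scores for each token, it discards every candidate sequence that violates the structural constraint and returns the best remaining one.

```python
from itertools import product

# Hypothetical illustration, not the paper's actual method: score every
# candidate BIO tag sequence with per-token probabilities (stand-ins for
# scores elicited from an LLM prompt), then keep only sequences satisfying
# the structural constraint that an I- tag must continue a span of the
# same entity type.

LABELS = ["O", "B-PER", "I-PER"]

def is_consistent(seq):
    """BIO constraint: I-X may only follow B-X or I-X."""
    prev = "O"
    for tag in seq:
        if tag.startswith("I-") and prev[2:] != tag[2:]:
            return False
        prev = tag
    return True

def best_consistent(token_probs):
    """Highest-scoring label sequence among the structurally valid ones."""
    candidates = product(LABELS, repeat=len(token_probs))
    def score(seq):
        return sum(p[t] for p, t in zip(token_probs, seq))
    return max((s for s in candidates if is_consistent(s)), key=score)

# Toy per-token scores: the unconstrained per-token argmax would yield
# ("O", "I-PER", "O"), which is structurally invalid.
probs = [
    {"O": 0.6, "B-PER": 0.3, "I-PER": 0.1},
    {"O": 0.2, "B-PER": 0.1, "I-PER": 0.7},
    {"O": 0.9, "B-PER": 0.05, "I-PER": 0.05},
]
print(best_consistent(probs))  # ('B-PER', 'I-PER', 'O')
```

Brute-force enumeration is exponential in sequence length; in practice such constrained argmax problems are solved with combinatorial inference such as Viterbi decoding or integer linear programming, but the filtering principle is the same.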