Self-Demos: Eliciting Out-of-Demonstration Generalizability in Large Language Models
arXiv (2024)
Abstract
Large language models (LLMs) have shown promising in-context
learning (ICL) abilities, adapting swiftly to new tasks with only a few
demonstrations. However, current few-shot methods heavily depend on
high-quality, query-specific demos, which are often lacking. When faced with
out-of-demonstration (OOD) queries, methods that rely on hand-crafted demos or
external retrievers might fail. To bridge the gap between limited demos and OOD
queries, we propose Self-Demos, a novel prompting method that elicits the
inherent generalizability in LLMs by query-aware demo generation. The generated
demos strategically interpolate between existing demos and the given query,
transforming the query from OOD to ID. To evaluate the effectiveness of our
approach, we manually constructed OOD-Toolset, a dataset in the tool-using
scenario with over 300 real-world APIs and 1000 instances, each consisting of
three tool-use cases as demos and an OOD query. Thorough experiments on our
dataset and two public math benchmarks have shown that our method can
outperform state-of-the-art baselines in the OOD setting. Moreover, we conduct
a range of analyses to validate Self-Demos's generalization and provide more
insights.
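The abstract describes a two-stage idea: first generate query-aware demos that interpolate between the seed demos and the OOD query, then answer the query with the enlarged, now in-distribution context. A minimal sketch of that flow is below; the prompt wording, the `llm` callable, and the `n_new` parameter are illustrative assumptions, not the paper's actual templates.

```python
from typing import Callable, List, Tuple

Demo = Tuple[str, str]  # a (question, answer) pair

def self_demos(seed_demos: List[Demo], query: str,
               llm: Callable[[str], str], n_new: int = 2) -> str:
    """Sketch of Self-Demos-style prompting:
    (1) generate demos tailored to `query`,
    (2) answer `query` with seed + generated demos in context.
    Prompt templates here are hypothetical."""
    seed_block = "\n".join(f"Q: {q}\nA: {a}" for q, a in seed_demos)

    # Stage 1: query-aware demo generation, interpolating between
    # the existing demos and the given query.
    gen_prompt = (
        f"Here are some examples:\n{seed_block}\n\n"
        f"Write {n_new} new Q/A examples similar to this query: {query}"
    )
    generated = llm(gen_prompt)

    # Stage 2: answer the query with the generated demos added,
    # turning an out-of-demonstration query into an in-distribution one.
    answer_prompt = (
        f"Here are some examples:\n{seed_block}\n{generated}\n\n"
        f"Q: {query}\nA:"
    )
    return llm(answer_prompt)
```

Passing the model as a callable keeps the sketch testable: any function from prompt string to completion string can stand in for the LLM.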