PhaseEvo: Towards Unified In-Context Prompt Optimization for Large Language Models
CoRR (2024)
Abstract
Crafting an ideal prompt for Large Language Models (LLMs) is a challenging
task that demands significant resources and expert human input. Existing work
treats the optimization of prompt instruction and in-context learning examples
as distinct problems, leading to sub-optimal prompt performance. This research
addresses this limitation by establishing a unified in-context prompt
optimization framework, which aims to achieve joint optimization of the prompt
instruction and examples. However, formulating such optimization in the
discrete and high-dimensional natural language space introduces challenges in
terms of convergence and computational efficiency. To overcome these issues, we
present PhaseEvo, an efficient automatic prompt optimization framework that
combines the generative capability of LLMs with the global search proficiency
of evolutionary algorithms. Our framework features a multi-phase design
incorporating innovative LLM-based mutation operators to enhance search
efficiency and accelerate convergence. We conduct an extensive evaluation of
our approach across 35 benchmark tasks. The results demonstrate that PhaseEvo
outperforms state-of-the-art baseline methods by a large margin while
maintaining good computational efficiency.
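To make the idea of LLM-driven evolutionary prompt search concrete, the sketch below shows a generic loop that scores candidate prompts (instruction plus in-context examples), keeps the fittest, and produces children via LLM-based crossover and mutation operators. This is a minimal illustration of the general technique, not the authors' PhaseEvo implementation or its multi-phase schedule; the `llm` and `score` functions are placeholders the reader must supply (an LLM API call and a task-specific evaluation metric, respectively).

```python
import random


def llm(prompt: str) -> str:
    """Placeholder for an LLM call; replace with a real API client."""
    raise NotImplementedError("plug in an LLM backend here")


def score(candidate_prompt: str, dev_set) -> float:
    """Placeholder: evaluate a candidate prompt on a small dev set."""
    raise NotImplementedError("plug in a task-specific metric here")


def mutate(candidate: str) -> str:
    # LLM-based mutation: ask the model to rewrite the prompt while keeping its intent.
    return llm(
        "Rewrite the following prompt to improve clarity and coverage, "
        f"keeping its intent:\n\n{candidate}"
    )


def crossover(parent_a: str, parent_b: str) -> str:
    # LLM-based crossover: merge two parent prompts into one child.
    return llm(
        "Combine the strengths of these two prompts into a single prompt:\n\n"
        f"Prompt A:\n{parent_a}\n\nPrompt B:\n{parent_b}"
    )


def evolve(initial_candidates, dev_set, generations=5, population_size=8):
    population = list(initial_candidates)
    for _ in range(generations):
        # Rank the population by fitness on the dev set.
        ranked = sorted(population, key=lambda p: score(p, dev_set), reverse=True)
        parents = ranked[: population_size // 2]  # keep the fittest half
        children = []
        while len(parents) + len(children) < population_size:
            a, b = random.sample(parents, 2)
            children.append(mutate(crossover(a, b)))  # crossover, then mutate
        population = parents + children
    return max(population, key=lambda p: score(p, dev_set))
```

In a joint instruction-and-example setting, each candidate string would bundle both the task instruction and its demonstration examples, so a single fitness score drives their co-optimization rather than tuning the two parts separately.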