Towards Generalist Prompting for Large Language Models by Mental Models
CoRR (2024)
Abstract
Large language models (LLMs) have demonstrated impressive performance on many
tasks. However, to achieve optimal performance, specially designed prompting
methods are still needed. These methods either rely on task-specific few-shot
examples that require a certain level of domain knowledge, or are designed to
be simple but only perform well on a few types of tasks. In this work, we
attempt to introduce the concept of generalist prompting, which operates on the
design principle of achieving optimal or near-optimal performance on a wide
range of tasks while eliminating the need for manual selection and
customization of prompts tailored to specific problems. Furthermore, we propose
MeMo (Mental Models), an innovative prompting method that is simple in design
yet effectively fulfills the criteria of generalist prompting. MeMo distills
the cores of various prompting methods into individual mental models and allows
LLMs to autonomously select the most suitable mental models for the problem,
achieving or approaching state-of-the-art results on diverse tasks such
as STEM, logical reasoning, and commonsense reasoning in zero-shot settings. We
hope that the insights presented herein will stimulate further exploration of
generalist prompting methods for LLMs.
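
The abstract describes a two-stage mechanism: the LLM first picks a mental model suited to the problem, then solves the problem under that model, all zero-shot. The paper's actual prompt templates and mental-model catalog are not given here, so the following is a minimal Python sketch of that idea only; the mental-model descriptions, prompt wording, and the complete() helper are illustrative assumptions, not the authors' implementation.

# Minimal sketch of a MeMo-style two-stage prompt (assumptions throughout).
MENTAL_MODELS = {
    # Hypothetical distillations of existing prompting methods into short
    # "mental model" descriptions; the paper's real catalog may differ.
    "step-by-step reasoning": "Break the problem into small steps and solve them in order.",
    "plan then solve": "First outline a plan for the problem, then carry the plan out.",
    "analogy": "Recall a similar solved problem and adapt its solution.",
}

def complete(prompt: str) -> str:
    """Placeholder for any LLM call (e.g., a chat-completion client)."""
    raise NotImplementedError

def memo_answer(question: str) -> str:
    # Stage 1: let the LLM autonomously select the most suitable mental model.
    menu = "\n".join(f"- {name}: {desc}" for name, desc in MENTAL_MODELS.items())
    choice = complete(
        f"Question: {question}\n"
        f"Candidate mental models:\n{menu}\n"
        "Name the single most suitable mental model for this question."
    )
    # Stage 2: solve the question zero-shot under the chosen mental model.
    return complete(
        f"Apply the mental model '{choice.strip()}' to answer the question:\n{question}"
    )

Because selection happens inside the model rather than via a human-chosen prompt, no task-specific few-shot examples or manual prompt customization are needed, which is the generalist-prompting property the abstract claims.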