ATM: Adversarial Tuning Multi-agent System Makes a Robust Retrieval-Augmented Generator
CoRR (2024)
Abstract
Large language models (LLMs) have been shown to benefit substantially from
retrieval augmentation, which alleviates hallucinations on knowledge-intensive
questions. Retrieval-augmented generation (RAG) adopts information-retrieval
techniques, supplying semantically relevant documents as the generator's input
context and thereby injecting external knowledge. However, today's Internet is
flooded with LLM-generated content: many documents are "related yet useless",
or even contain fake knowledge fabricated by LLMs, which introduces extra
noise and distracts the generator from producing correct answers. To this end,
we cast the training of the RAG generator as a multi-agent
adversarial-defensive system, guiding the generator to better judge whether a
specific document helps answer the question through Adversarial Tuning in a
Multi-agent (ATM) system, thereby strengthening the generator's robustness in
the RAG pipeline. After rounds of multi-agent iterative tuning, we find that
the ATM Generator can eventually discriminate useful documents from LLM
fabrications and achieve better performance than strong baselines.
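The adversarial-defensive loop described in the abstract (an attacker agent injects fabricated documents; the generator learns to keep only genuinely useful ones) can be sketched in miniature. This is a toy illustration under stated assumptions, not the paper's implementation: the function names, the keyword-overlap scoring heuristic, and the fixed marker list are all hypothetical stand-ins for the tuned generator's learned judgment.

```python
# Toy sketch of an ATM-style attack-defend round. All names and the
# scoring heuristic below are illustrative assumptions, not the paper's method.

def attacker_fabricate(question, n=2):
    """Attacker agent: produce 'related yet useless' documents for the question."""
    return [f"{question} (fabricated claim #{i})" for i in range(n)]

def generator_score(doc, useful_markers):
    """Stand-in for the generator's tuned 'taste': count useful keyword hits."""
    return sum(1 for marker in useful_markers if marker in doc)

def adversarial_round(question, real_docs, useful_markers):
    """One round: attacker mixes fabrications into the retrieved pool,
    and the generator keeps the documents it judges most useful."""
    pool = real_docs + attacker_fabricate(question)
    ranked = sorted(pool,
                    key=lambda d: generator_score(d, useful_markers),
                    reverse=True)
    return ranked[: len(real_docs)]

question = "capital of France"
real_docs = ["Paris is the capital of France",
             "France's capital city is Paris"]
markers = ["Paris", "capital"]
kept = adversarial_round(question, real_docs, markers)
print(kept)  # the two real documents outscore the fabrications
```

In the actual system, repeated rounds of this loop would update the generator itself, so that its document judgments improve against progressively harder fabrications.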