Large Language Model Evaluation Via Multi AI Agents: Preliminary results
arXiv (2024)
Abstract
As Large Language Models (LLMs) have become integral to both research and
daily operations, rigorous evaluation is crucial. This assessment is important
not only for individual tasks but also for understanding their societal impact
and potential risks. Despite extensive efforts to examine LLMs from various
perspectives, there is a noticeable lack of multi-agent AI models specifically
designed to evaluate the performance of different LLMs. To address this gap, we
introduce a novel multi-agent AI model that aims to assess and compare the
performance of various LLMs. Our model consists of eight distinct AI agents,
each responsible for retrieving code for a common task description from a
different advanced language model, including GPT-3.5, GPT-3.5 Turbo, GPT-4,
GPT-4 Turbo, Google Bard, LLAMA, and Hugging Face. The model calls each
language model's API to retrieve code for a given high-level description.
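To make this retrieval step concrete, the following is a minimal sketch of one such agent, assuming the openai Python client (v1 interface) with OPENAI_API_KEY set in the environment; the CodeRetrievalAgent class, the model names, and the prompt wording are illustrative stand-ins, not the authors' implementation.

```python
"""Minimal sketch of a code-retrieval agent; an illustration under the
assumptions above, not the paper's implementation."""
from openai import OpenAI


class CodeRetrievalAgent:
    """One agent per backend: sends a task description, returns code."""

    def __init__(self, model: str):
        self.model = model
        self.client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def retrieve_code(self, description: str) -> str:
        # Ask the backend for code only, so the output can be executed later.
        response = self.client.chat.completions.create(
            model=self.model,
            messages=[
                {"role": "system",
                 "content": "Return only Python code implementing the task."},
                {"role": "user", "content": description},
            ],
            temperature=0.0,  # keep outputs repeatable across models
        )
        return response.choices[0].message.content


# One agent per model under comparison (illustrative subset of the eight).
agents = {name: CodeRetrievalAgent(name)
          for name in ("gpt-3.5-turbo", "gpt-4")}
```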
Additionally, we developed a verification agent tasked with the critical role
of evaluating the code generated by its counterparts. We integrate the
HumanEval benchmark into the verification agent to assess the generated
code's performance, providing insight into each model's capabilities and
efficiency.
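The verification step can be pictured as below: a sketch that drives OpenAI's human-eval harness (https://github.com/openai/human-eval), whose read_problems and check_correctness helpers are assumed to be installed; evaluate_model is a hypothetical helper for illustration, not the authors' verification agent.

```python
"""Sketch of HumanEval-based verification under the assumptions above."""
from human_eval.data import read_problems
from human_eval.execution import check_correctness


def evaluate_model(generate, timeout: float = 5.0) -> float:
    """Run every HumanEval task through `generate` and return the pass rate.

    `generate` maps a HumanEval prompt (a function stub) to a completion
    that finishes the stub, e.g. agents["gpt-4"].retrieve_code.
    """
    problems = read_problems()  # task_id -> prompt, tests, entry point
    passed = 0
    for task_id, problem in problems.items():
        completion = generate(problem["prompt"])
        # Executes prompt + completion against the task's unit tests
        # in a sandboxed subprocess.
        result = check_correctness(problem, completion, timeout)
        passed += int(result["passed"])
    return passed / len(problems)
```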
Our initial results indicate that GPT-3.5 Turbo performs better than the
other models. This preliminary analysis serves as a side-by-side benchmark
of their performance. Our future goal is to enhance the evaluation process
by incorporating the Mostly Basic Python Problems (MBPP) benchmark, which is
expected to further refine our assessment. Additionally, we plan to share
the developed model with twenty practitioners from various backgrounds to
test it and collect their feedback for further improvement.
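The side-by-side comparison itself then reduces to ranking the per-model pass rates; the snippet below reuses the hypothetical agents and evaluate_model names from the sketches above rather than reproducing the paper's reported numbers.

```python
# Hypothetical end-to-end run; scores come from evaluate_model(),
# not from the paper's reported results.
results = {name: evaluate_model(agent.retrieve_code)
           for name, agent in agents.items()}

# Print the models ranked by HumanEval pass rate, best first.
for model, rate in sorted(results.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{model:<16} {rate:6.1%}")
```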