CoderUJB: An Executable and Unified Java Benchmark for Practical Programming Scenarios
arXiv, 2024
Abstract
In the evolving landscape of large language models (LLMs) tailored for
software engineering, the need for benchmarks that accurately reflect
real-world development scenarios is paramount. Current benchmarks are either
too simplistic or fail to capture the multi-tasking nature of software
development. To address this, we introduce CoderUJB, a new benchmark designed
to evaluate LLMs across diverse Java programming tasks that are executable and
reflective of actual development scenarios, acknowledging Java's prevalence in
real-world software production. CoderUJB comprises 2,239 programming questions
derived from 17 real open-source Java projects and spans five practical
programming tasks. Our empirical study on this benchmark investigates the
coding abilities of various open-source and closed-source LLMs, examining the
effects of continued pre-training on code in specific programming languages
and of instruction fine-tuning on their performance. The findings indicate that while
LLMs exhibit strong potential, challenges remain, particularly in
non-functional code generation (e.g., test generation and defect detection).
Importantly, our results advise caution regarding continued pre-training on
specific programming languages and instruction fine-tuning, as these
techniques can hinder model performance on certain tasks, suggesting the need
for more nuanced strategies. CoderUJB thus marks a significant step towards more realistic
evaluations of programming capabilities in LLMs, and our study provides
valuable insights for the future development of these models in software
engineering.