Enhancing LLM-Based Coding Tools through Native Integration of IDE-Derived Static Context
CoRR (2024)
Abstract
Large Language Models (LLMs) have achieved remarkable success in code
completion, as evidenced by their essential roles in developing code assistant
services such as Copilot. Being trained on in-file contexts, current LLMs are
quite effective in completing code for single source files. However, it is
challenging for them to conduct repository-level code completion for large
software projects that require cross-file information. Existing research on
LLM-based repository-level code completion identifies and integrates cross-file
contexts, but it suffers from inaccurate context retrieval and from the limited
context length of LLMs.
In this paper, we argue that Integrated Development Environments (IDEs) can
provide direct, accurate and real-time cross-file information for
repository-level code completion. We propose IDECoder, a practical framework
that leverages IDE-native static contexts for cross-file context construction and
diagnosis results for self-refinement. IDECoder utilizes the rich cross-file
context information available in IDEs to enhance the capabilities of LLMs for
repository-level code completion. We conducted preliminary experiments to
validate the performance of IDECoder and observed that this synergy represents
a promising trend for future exploration.
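The loop the abstract describes can be illustrated with a minimal sketch: gather cross-file context from the IDE's static analysis, prompt an LLM, then feed IDE diagnostics back to the model for self-refinement. All names below (`idecoder_complete`, `Diagnostic`, the callback signatures) are illustrative assumptions, not the paper's actual API.

```python
# Hypothetical sketch of the IDECoder workflow described in the abstract:
# (1) build a prompt from IDE-derived cross-file context plus in-file code,
# (2) ask the LLM for a completion,
# (3) run IDE diagnostics on the result and re-prompt until clean.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Diagnostic:
    message: str  # e.g. "undefined name 'parse_config'"


def idecoder_complete(
    in_file_code: str,
    cross_file_context: List[str],                 # signatures/types from IDE static analysis
    llm: Callable[[str], str],                     # prompt -> completion
    diagnose: Callable[[str], List[Diagnostic]],   # IDE diagnostics on candidate code
    max_refinements: int = 2,
) -> str:
    # Cross-file context is prepended so the model sees project-wide symbols.
    prompt = "\n".join(cross_file_context) + "\n" + in_file_code
    completion = llm(prompt)
    for _ in range(max_refinements):
        errors = diagnose(in_file_code + completion)
        if not errors:
            break  # diagnostics are clean; accept the completion
        # Self-refinement: feed the diagnostic messages back to the LLM.
        feedback = "\n".join(d.message for d in errors)
        completion = llm(prompt + completion + "\n# Fix these errors:\n" + feedback)
    return completion
```

In a real IDE integration, `cross_file_context` and `diagnose` would be backed by the IDE's indexer and checker (e.g. via the Language Server Protocol), which is what distinguishes this design from retrieval-based approaches.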