Interactive Code Generation Via Test-Driven User-Intent Formalization

Shuvendu K. Lahiri, Sarah Fakhoury, Aaditya Naik, Georgios Sakkas, Saikat Chakraborty, Madanlal Musuvathi, Piali Choudhury, Curtis von Veh, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao

arXiv (Cornell University), 2022

Cited by 19 | Views 90
Abstract
Large language models (LLMs) have shown great potential in automating significant aspects of coding by producing natural code from informal natural language (NL) intent. However, when interacting with LLMs, users have no guarantees that the code suggestions produced correctly satisfy the intent they provided. In fact, it is hard to define a notion of correctness, since natural language can be ambiguous and lacks a formal semantics. In this paper, we propose the workflow of interactive test-driven code generation, which leverages lightweight user feedback to (a) formalize the user intent using generated tests that can be useful for debugging, and (b) produce an improved set of code suggestions by pruning and ranking candidate code suggestions. We describe a language-agnostic abstract algorithm and a concrete implementation, TiCoder. We perform an automated evaluation of TiCoder on the MBPP and HumanEval code generation benchmarks. Our results with the OpenAI Codex LLM are promising: our best algorithm improves the pass@1 code generation accuracy (in absolute percentages) by between 22.49% and 37.71% for MBPP and between 24.79% and 53.98% for HumanEval, using between 1 and 5 simulated user queries.
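
The workflow described in the abstract can be pictured as a simple loop: the model proposes code candidates and candidate tests, the user is asked to approve or reject a small number of tests, and the approved tests are used to prune and rank the code suggestions. The Python sketch below is only illustrative of that idea; the function names, candidate counts, and the fallback when no candidate passes a test are assumptions, not the paper's actual TiCoder interface.

```python
# Illustrative sketch only: gen_code, gen_tests, and user_approves are
# hypothetical stand-ins, not the paper's actual TiCoder API.
from typing import Callable, List, Tuple


def passes(code: str, test: str) -> bool:
    """Check whether a candidate implementation satisfies a candidate test.

    Both arguments are assumed to be self-contained Python snippets; the
    test raises (e.g. via assert) if the implementation is wrong.
    """
    namespace: dict = {}
    try:
        exec(code, namespace)   # define the candidate function(s)
        exec(test, namespace)   # exercise them with the test
        return True
    except Exception:
        return False


def interactive_codegen(
    intent: str,
    gen_code: Callable[[str, int], List[str]],   # LLM call: intent -> code suggestions
    gen_tests: Callable[[str, int], List[str]],  # LLM call: intent -> candidate tests
    user_approves: Callable[[str], bool],        # lightweight user feedback on one test
    max_queries: int = 5,
) -> Tuple[List[str], List[str]]:
    """Prune and rank code candidates using tests the user approves."""
    candidates = gen_code(intent, 32)            # sample sizes here are arbitrary
    tests = gen_tests(intent, 16)
    approved: List[str] = []

    for test in tests[:max_queries]:
        if user_approves(test):                  # user confirms the test matches intent
            approved.append(test)
            surviving = [c for c in candidates if passes(c, test)]
            if surviving:                        # prune only if something survives
                candidates = surviving
        # A rejected test could also serve as a negative signal (omitted here).

    # Rank remaining candidates by how many approved tests they satisfy.
    candidates.sort(key=lambda c: sum(passes(c, t) for t in approved), reverse=True)
    return candidates, approved
```

In this reading, the approved tests double as a lightweight formalization of the user's intent and remain available afterwards for debugging, which is the dual benefit the abstract highlights.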
Keywords
Source Code Analysis, Dynamic Test Generation, Code Clone Detection