Smart generation of code tracing questions for assessment in introductory programming

COMPUTER APPLICATIONS IN ENGINEERING EDUCATION (2023)

Abstract
Teaching programming is a challenging activity nowadays, especially in introductory programming courses, which are typically massively attended. Writing functional programs is a cognitive skill that many students, as novices in programming, find difficult to master. It is equally challenging to assess this ability. Research has shown that students need to learn how to read and understand programs before they can learn to write them. Code tracing questions are a suitable way to assess knowledge of the semantics of programming constructs (understanding what programs do). However, when designing such questions, teachers need to be aware of the complexity of the code and must always try to create questions with code snippets of the same or similar complexity, especially when preparing different versions of the same test (for large student groups). In this paper we propose a new model for smart automatic generation of code tracing questions. The model uses a suitable source code metric to measure code complexity and to enable the generation of questions containing code snippets of consistent complexity. We also introduce CodeCPP, a newly developed Moodle plugin that implements the proposed model. The evaluation of CodeCPP, through a year of use in teaching programming as well as feedback from selected teachers with many years of experience in teaching programming courses, shows positive and promising results. The system's utilization confirms that the proposed model enables fair and objective knowledge assessment. Furthermore, the feedback provided by teachers and students is highly positive.
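
The abstract describes pooling generated questions by a source code complexity metric so that every test version draws snippets of matching difficulty. The sketch below is a minimal illustration only: it uses a hypothetical keyword-count score (not the metric chosen in the paper, and not CodeCPP's actual implementation) to group candidate C++ snippets by equal complexity.

```python
# Illustrative sketch (assumed metric, not the paper's): score C++ snippets
# with a naive control-flow keyword count and pool snippets of equal score,
# so each test version can draw a snippet of the same complexity.
import re
from collections import defaultdict

# Hypothetical stand-in for a "suitable source code metric".
CONTROL_KEYWORDS = re.compile(r"\b(if|else|for|while|do|switch|case)\b")

def complexity_score(snippet: str) -> int:
    """Count control-flow keywords as a rough complexity proxy."""
    return len(CONTROL_KEYWORDS.findall(snippet))

def group_by_complexity(snippets: list[str]) -> dict[int, list[str]]:
    """Pool snippets sharing the same score; one pool feeds one question slot."""
    pools: dict[int, list[str]] = defaultdict(list)
    for code in snippets:
        pools[complexity_score(code)].append(code)
    return pools

if __name__ == "__main__":
    candidates = [
        "int s = 0; for (int i = 0; i < 5; i++) s += i;",
        "int p = 1; for (int i = 1; i <= 4; i++) p *= i;",
        "int x = 3; if (x > 2) x--; else x++;",
    ]
    for score, pool in sorted(group_by_complexity(candidates).items()):
        print(f"complexity {score}: {len(pool)} snippet(s)")
```

In such a scheme, two versions of a test would both pull their tracing question from the same pool, which is one way to keep the versions comparable in difficulty; the paper's model and metric may differ.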
Keywords
automatic question generation systems, code tracing questions, introductory programming, programming knowledge assessment