Anchoring Code Understandability Evaluations Through Task Descriptions

2022 IEEE/ACM 30th International Conference on Program Comprehension (ICPC), 2022

Abstract
In code comprehension experiments, participants are usually told at the beginning what kind of code comprehension task to expect. Describing experiment scenarios and experimental tasks will influence participants in ways that are sometimes hard to predict and control. In particular, describing or even mentioning the difficulty of a code comprehension task might anchor participants and their perception of the task itself. In this study, we investigated in a randomized, controlled experiment with 256 participants (50 software professionals and 206 computer science students) whether a hint about the difficulty of the code to be understood in a task description anchors participants in their own code comprehensibility ratings. Subjective code evaluations are a commonly used measure for how well a developer in a code comprehension study understood code. Accordingly, it is important to understand how robust these measures are to cognitive biases such as the anchoring effect. Our results show that participants are significantly influenced by the initial scenario description in their assessment of code comprehensibility. An initial hint of hard to understand code leads participants to assess the code as harder to understand than participants who received no hint or a hint of easy to understand code. This affects students and professionals alike. We discuss examples of design decisions and contextual factors in the conduct of code comprehension experiments that can induce an anchoring effect, and recommend the use of more robust comprehension measures in code comprehension studies to enhance the validity of results.
Keywords
code comprehension, anchoring effect, empirical study design, software metrics