The Importance of Discerning Flaky from Fault-triggering Test Failures: A Case Study on the Chromium CI

arXiv (2023)

Abstract
Flaky tests are tests that pass and fail on different executions of the same version of a program under test. They waste valuable developer time by making developers investigate false alerts (flaky test failures). To deal with this problem, many prediction methods that identify flaky tests have been proposed. While promising, the actual utility of these methods remains unclear since they have not been evaluated within a continuous integration (CI) process. In particular, it remains unclear what the impact of missed faults, i.e., the treatment of fault-triggering test failures as flaky, is across CI cycles. To fill this gap, we apply state-of-the-art flakiness prediction methods to the Chromium CI and assess their performance. Perhaps surprisingly, we find that, despite the methods' high precision (99.2%), their application leads to numerous missed faults, approximately 76.2% of all regression faults. To explain this result, we analyse the fault-triggering failures and show that flaky tests have a strong fault-revealing capability, i.e., they reveal more than 1/3 of all regression faults, indicating an inherent limitation of all methods that focus on identifying flaky tests rather than flaky test failures. Going a step further, we build failure-focused prediction methods and optimize them by considering new features. Interestingly, we find that these methods perform better than the test-focused ones, with the MCC increasing from 0.20 to 0.42. Overall, our findings imply that, on the one hand, future research should focus on predicting flaky test failures instead of flaky tests and, on the other, that more thorough experimental methodologies are needed when evaluating flakiness prediction methods.
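The abstract contrasts a very high precision (99.2%) with a low MCC and a high missed-fault rate. As a rough illustration of how these can coexist under heavy class imbalance, the following minimal Python sketch computes precision and MCC from a confusion matrix in which flaky failures vastly outnumber fault-triggering ones. The counts are invented for illustration only and are not the paper's data.

```python
# Illustrative sketch (hypothetical counts, not the paper's data): why a flakiness
# predictor can show very high precision yet still miss many fault-triggering failures.
import math

def precision(tp, fp):
    """Fraction of failures predicted as flaky that truly are flaky."""
    return tp / (tp + fp)

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient over the flaky / fault-triggering confusion matrix."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Hypothetical CI failure counts; "positive" = predicted flaky.
tp = 9_900   # flaky failures correctly flagged as flaky
fp = 80      # fault-triggering failures wrongly flagged as flaky (missed faults)
fn = 600     # flaky failures not flagged
tn = 25      # fault-triggering failures correctly surfaced as real alerts

print(f"precision    = {precision(tp, fp):.3f}")     # high: flaky failures dominate the positives
print(f"MCC          = {mcc(tp, tn, fp, fn):.2f}")   # low: the rare fault class is poorly separated
print(f"missed faults = {fp / (fp + tn):.1%}")        # large share of real faults silenced
```

With these made-up counts, precision stays near 0.99 while most fault-triggering failures are misclassified as flaky, which mirrors the kind of gap the study reports between per-prediction precision and fault-level impact.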
Keywords
test failures