QuoTe: Quality-oriented Testing for Deep Learning Systems

ACM Transactions on Software Engineering and Methodology (2023)

Abstract
Recently, there has been significant growth of interest in applying software engineering techniques to the quality assurance of deep learning (DL) systems. One popular direction is deep learning testing: given a property to test, defects of DL systems are found either by fuzzing or by guided search with the help of certain testing metrics. However, recent studies have revealed that the neuron coverage metrics commonly used by most existing DL testing approaches are not necessarily correlated with model quality (e.g., robustness, the most studied model property), and are also not an effective measure of confidence in the model quality after testing. In this work, we address this gap by proposing a novel testing framework called QuoTe (i.e., Quality-oriented Testing). A key part of QuoTe is a quantitative measurement of 1) the value of each test case in enhancing the model property of interest (often via retraining), and 2) the convergence quality of the model property improvement. QuoTe utilizes the proposed metric to automatically select or generate valuable test cases for improving model quality. The proposed metric is also a lightweight yet strong indicator of how well the improvement has converged. Extensive experiments on both image and tabular datasets with a variety of model architectures confirm the effectiveness and efficiency of QuoTe in improving DL model quality, i.e., robustness and fairness. As a generic quality-oriented testing framework, future adaptations can be made to other domains (e.g., text) as well as to other model properties.