Methodology and Guidelines for Evaluating Multi-Objective Search-Based Software Engineering

ICSE Companion (2023)

Abstract
Search-Based Software Engineering (SBSE) has become an increasingly important research paradigm for automating and solving a variety of software engineering tasks. When a task involves more than one objective or criterion to be optimized, it is called multi-objective. In such a scenario, the outcome is typically a set of incomparable solutions (i.e., solutions that are mutually Pareto-nondominated), and a common question faced by many SBSE practitioners is: how should the obtained sets be evaluated using the right methods and indicators in the SBSE context? In this comprehensive technical briefing, we provide a systematic methodology and guidelines for answering this question. We start by discussing why formal evaluation methods and indicators are needed for multi-objective optimization problems in general, and we present the results of a survey on how they have predominantly been used in SBSE. This is followed by a detailed introduction to representative evaluation methods and quality indicators used in SBSE, including their behaviors and preferences. Along the way, we demonstrate patterns and examples of potentially misleading usages and choices of evaluation methods and quality indicators from the SBSE community, highlighting their consequences. We then present a systematic methodology that can guide the selection and use of evaluation methods and quality indicators for a given SBSE problem, together with pointers that we hope will spark dialogue about future directions for this important research topic in SBSE. Lastly, we showcase several real-world multi-objective SBSE case studies, in which we demonstrate the consequences of incorrect usage and exemplify how to apply the guidance provided.
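As a rough illustration of two notions the abstract relies on, Pareto nondominance and quality indicators, the following sketch shows a nondominated-set filter and a simple 2-D hypervolume computation for minimization problems. This is not code from the briefing; the function names and the choice of hypervolume as the example indicator are our own assumptions.

```python
def dominates(a, b):
    """a Pareto-dominates b if a is no worse in every objective and
    strictly better in at least one (assuming minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(solutions):
    """Filter objective vectors down to the mutually nondominated set,
    i.e., the kind of incomparable solution set the abstract describes."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o != s)]

def hypervolume_2d(front, ref):
    """Area dominated by a 2-D nondominated front, bounded by the
    reference point `ref` (minimization; larger hypervolume is better)."""
    hv, prev_y = 0.0, ref[1]
    for x, y in sorted(front):          # ascending in objective 1
        hv += (ref[0] - x) * (prev_y - y)
        prev_y = y
    return hv
```

For example, filtering `[(1, 3), (2, 2), (3, 1), (3, 3), (2, 4)]` leaves the front `[(1, 3), (2, 2), (3, 1)]`, whose hypervolume with reference point `(4, 4)` is 6.0. Real evaluations would use a vetted implementation (e.g., from an established multi-objective optimization library) rather than this sketch.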
Keywords
search-based software engineering, multiobjective optimization, quality indicators