Initial findings on the evaluation of a model-based testing tool in the test design process

2019 ACM/IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM)

Abstract
Background: One way to reduce test cost is to automate testing tasks. Model-based testing (MBT) tools use system behaviour models as inputs to automatically generate tests. In the literature, few experiments evaluate the impact of using an MBT tool on test case productivity, test coverage, and bug detection rate.

Aims: This work is the first part of an evaluation of the impact of using TaRGeT, an MBT tool whose input models are use cases authored in natural language. We assess the effect of using the tool on test case productivity (number of test steps produced per hour).

Method: A quasi-experiment was carried out to compare the productivity of creating functional tests manually against the productivity of designing tests with TaRGeT. Application size and subjects' experience were controlled, and the collected data were statistically analysed.

Results: The mean productivity when using TaRGeT was 30% higher than the mean productivity when designing tests manually. Despite this difference, no statistically significant difference in productivity could be detected between using TaRGeT and not using it.

Conclusions: We discuss possible reasons for this behaviour and other findings; moreover, we present lessons learned for future experiments.
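The abstract defines productivity as test steps produced per hour and reports a statistical comparison without naming the test used. As a minimal sketch of such an analysis (not the paper's actual procedure), assuming invented sample data and a Mann-Whitney U test, which is a common choice for small samples in quasi-experiments, the comparison could look like:

```python
# Hypothetical sketch only: the data below are invented, and the choice of
# Mann-Whitney U is an assumption (the abstract says only "statistically
# analysed"). Productivity = test steps produced / hours spent.
from scipy.stats import mannwhitneyu

manual_steps_per_hour = [12.0, 15.5, 9.8, 14.2, 11.1]   # manual test design
target_steps_per_hour = [16.3, 18.0, 13.9, 17.5, 14.8]  # design with TaRGeT

stat, p_value = mannwhitneyu(
    manual_steps_per_hour, target_steps_per_hour, alternative="two-sided"
)
print(f"U = {stat}, p = {p_value:.3f}")
# A p-value >= 0.05 here would mirror the paper's result: a higher mean
# productivity with the tool, but no detectable statistical difference.
```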
Keywords
quasi-experiment, test design, model-based testing