From Anecdotal Evidence to Quantitative Evaluation Methods: A Systematic Review on Evaluating Explainable AI
arXiv · Artificial Intelligence (2023)
University of Twente | University of Duisburg-Essen
Abstract
The rising popularity of explainable artificial intelligence (XAI) for understanding high-performing black boxes has raised the question of how to evaluate explanations of machine learning (ML) models. While interpretability and explainability are often presented as a subjectively validated binary property, we consider them a multifaceted concept. We identify 12 conceptual properties, such as Compactness and Correctness, that should be evaluated to comprehensively assess the quality of an explanation. Our so-called Co-12 properties serve as a categorization scheme for systematically reviewing the evaluation practices of more than 300 papers that introduce an XAI method, published in the past 7 years at major AI and ML conferences. We find that one in three papers evaluates exclusively with anecdotal evidence, and one in five papers evaluates with users. This survey also contributes to the call for objective, quantifiable evaluation methods by presenting an extensive overview of quantitative XAI evaluation methods. Our systematic collection of evaluation methods provides researchers and practitioners with concrete tools to thoroughly validate, benchmark, and compare new and existing XAI methods. The Co-12 categorization scheme and our identified evaluation methods open up opportunities to include quantitative metrics as optimization criteria during model training, so that models can be optimized for accuracy and interpretability simultaneously.
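To make the abstract's closing idea concrete, here is a minimal sketch (not taken from the paper) of using a quantitative interpretability metric as an optimization criterion alongside accuracy. An L1 sparsity penalty on a linear model's weights stands in as a crude proxy for the Co-12 "Compactness" property; the trade-off weight lambda_compact and the synthetic data are assumptions for illustration only.

# Sketch: jointly optimizing task accuracy and a compactness proxy.
# lambda_compact is a hypothetical trade-off hyperparameter; the L1
# penalty is one possible quantitative stand-in for Compactness.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(256, 20)             # synthetic features
y = (X[:, 0] + X[:, 1] > 0).long()   # labels depend on 2 features only

model = nn.Linear(20, 2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
criterion = nn.CrossEntropyLoss()
lambda_compact = 1e-2                # weight of the interpretability term

for epoch in range(200):
    optimizer.zero_grad()
    logits = model(X)
    task_loss = criterion(logits, y)                  # accuracy objective
    compactness = model.weight.abs().sum()            # sparsity proxy
    loss = task_loss + lambda_compact * compactness   # joint objective
    loss.backward()
    optimizer.step()

# With the penalty active, most weights shrink toward zero, so the
# learned explanation (the weight vector) stays compact.
print("non-negligible weights:", (model.weight.abs() > 1e-2).sum().item())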
Keywords
Explainable artificial intelligence, interpretable machine learning, evaluation, explainability, interpretability, quantitative evaluation methods, explainable AI, XAI
Related Papers
Xgail: Explainable Generative Adversarial Imitation Learning for Explainable Human Decision Analysis (2020) · Cited 37