Using SHAP to Measure Interpretability of Neuronal Feature Visualization.

VINCI (2023)

Abstract
Neuronal feature visualization is widely used in Explainable Artificial Intelligence (XAI). It provides an intuitive depiction of the features extracted by an individual neuron in a Convolutional Neural Network (CNN). However, it is extremely tedious for human users to identify highly interpretable visualizations by manually browsing the massive number of neurons contained in a CNN. Inspired by the Shapley Value Method in coalitional game theory, this paper proposes a metric that quantitatively measures the interpretability of a neuronal feature visualization by calculating the similarity between the SHAP (SHapley Additive exPlanation) image and the visualization. This metric helps human users quickly find highly interpretable neuronal feature visualizations for understanding the classification results of a CNN.
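The core of the proposed metric is a similarity score between two images: a SHAP attribution map and a neuron's feature visualization. The sketch below illustrates one plausible realization of such a score using mean-centered cosine similarity over NumPy arrays; the function name, the normalization, and the choice of cosine similarity are assumptions for illustration, not the authors' exact procedure.

```python
import numpy as np

def visualization_interpretability(shap_image, feature_vis):
    """Hypothetical sketch of the paper's metric: similarity between a
    SHAP attribution map and a neuron's feature visualization.

    Both inputs are 2-D arrays of the same shape. The images are
    mean-centered and compared with cosine similarity, yielding a score
    in [-1, 1]; higher means the visualization better matches the
    SHAP attributions. The normalization scheme is an assumption.
    """
    a = np.asarray(shap_image, dtype=float).ravel()
    b = np.asarray(feature_vis, dtype=float).ravel()
    # Mean-center each image so constant offsets do not affect the score.
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    if denom == 0.0:
        return 0.0  # a constant image carries no spatial signal
    return float(np.dot(a, b) / denom)
```

With such a score in hand, ranking all neurons of a layer by their score against a class's SHAP image would surface the most interpretable visualizations first, which is the browsing shortcut the abstract describes.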