Joint Gaussian mixture model for versatile deep visual model explanation

Zhouyang Xie, Tianxiang He, Shengzhao Tian, Yan Fu, Junlin Zhou, Duanbing Chen

Knowledge-Based Systems (2023)

Abstract
Explainable AI (XAI) promotes understandability and credibility in the complex decision-making processes of machine learning models. Post-hoc explanations for deep neural networks are crucial because they focus on faithfully explaining the learned representations, decision mechanisms, and uncertainty of a trained model. Explaining deep convolutional neural networks (DCNNs) is particularly challenging because of the high dimensionality of deep features and the complexity of model inference. Most post-hoc explanation methods follow a single explanation paradigm, which limits the diversity and consistency of the explanations. This study proposes the joint Gaussian mixture model (JGMM), a probabilistic model that jointly models deep inter-layer features and produces faithful and consistent post-hoc explanations. JGMM explains deep features by a Gaussian mixture model and model inference by the posterior distribution of the latent component variables. JGMM enables a versatile explanation framework that unifies interpretable proxy models with the mining of global and local explanatory examples. Experiments are performed on various DCNN image classifiers in comparison with other explanation methods. The results show that JGMM can efficiently produce versatile, consistent, faithful, and understandable explanations.

Highlights:
• A versatile explanation method with proxy models and explanatory examples is explored.
• The joint Gaussian mixture model, a probabilistic model of deep features, is proposed.
• JGMM produces versatile, understandable, and consistent explanations for deep CNNs.
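To make the core idea concrete, here is a minimal, hypothetical sketch (not the authors' JGMM implementation): fit a Gaussian mixture to feature vectors standing in for one layer's deep features, then read off the posterior distribution over latent components, which is the kind of quantity the abstract describes for explaining model inference. The synthetic data and component count are assumptions for illustration only.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Stand-in for deep features from one DCNN layer: two clusters in 8-D.
features = np.vstack([
    rng.normal(0.0, 1.0, size=(100, 8)),
    rng.normal(4.0, 1.0, size=(100, 8)),
])

# Fit a Gaussian mixture over the feature space.
gmm = GaussianMixture(n_components=2, random_state=0).fit(features)

# Posterior p(component | feature): a soft assignment of each sample
# to a latent component, interpretable as "which prototype explains it".
posterior = gmm.predict_proba(features)
print(posterior.shape)        # (200, 2)
print(posterior.sum(axis=1))  # each row sums to 1
```

Samples with high posterior mass on a given component would serve as that component's explanatory examples; the JGMM extends this idea by modelling features jointly across layers.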
Keywords
XAI, Post-hoc model explanation, Gaussian mixture model, Deep-learning interpretability