Explaining CNNs Using Knowledge Extraction and Graph Analysis

Frontiers in Artificial Intelligence and Applications (2023)

Abstract
Explainable Artificial Intelligence (XAI) has recently become an active research field due to the need for transparency and accountability when deploying AI models for high-stakes decision making. Although state-of-the-art Convolutional Neural Networks (CNNs) have achieved great performance in computer vision, understanding their decision processes, especially when a mistake occurs, remains a known challenge. The research direction presented in this chapter stems from the idea that combining knowledge with deep representations can be the key to more transparent decision making. Specifically, we have proposed a graph representation, called a co-activation graph, that serves as an intermediate representation between the knowledge encoded within a trained CNN and the semantics contained in external knowledge bases. Given a trained CNN, in this chapter we first show how a co-activation graph can be created and exploited to generate global insights into the inner workings of the deep model. Then, we illustrate in detail how background knowledge from external knowledge bases can be connected to the graph in order to generate local textual explanations, both factual and counterfactual, based on semantic attributes. Our results indicate that graph analysis approaches such as community analysis, node centrality, and link prediction applied to co-activation graphs can reveal important insights into how CNNs work, and enable both global and local semantic explanations for deep learning models. At the end of the chapter we discuss interesting research directions that are being investigated in the area of using knowledge graphs and graph analysis for the explainability of deep learning models.
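Since the abstract only outlines the approach, the following is a minimal sketch of how such a co-activation graph might be constructed and analysed. It assumes a PyTorch CNN (an off-the-shelf resnet18 here) and networkx for the graph analysis; the spatial mean-pooling of feature maps, the Pearson-correlation edge weights, the 0.7 threshold, and the random stand-in batch are all illustrative assumptions, not the authors' exact procedure.

```python
# Illustrative sketch: build a co-activation graph from a trained CNN,
# then apply community detection, centrality, and link prediction.
import torch
import torchvision
import networkx as nx
import numpy as np

model = torchvision.models.resnet18(weights="IMAGENET1K_V1").eval()

# Record per-filter activations with forward hooks; each feature map is
# averaged spatially so every filter yields one value per image.
activations = {}

def make_hook(name):
    def hook(module, inputs, output):
        activations.setdefault(name, []).append(
            output.detach().mean(dim=(2, 3))  # (batch, channels)
        )
    return hook

for name, module in model.named_modules():
    if isinstance(module, torch.nn.Conv2d):  # in practice one may restrict layers
        module.register_forward_hook(make_hook(name))

# Run images through the network; random tensors stand in for a real dataset.
with torch.no_grad():
    model(torch.randn(64, 3, 224, 224))

# Stack activations into one matrix: rows = images, columns = filters.
names, columns = [], []
for layer, batches in activations.items():
    acts = torch.cat(batches)
    for i in range(acts.shape[1]):
        names.append(f"{layer}:{i}")
        columns.append(acts[:, i].numpy())
matrix = np.stack(columns, axis=1)

# Edge weight = pairwise co-activation, here Pearson correlation; dead
# filters produce NaN rows, which nan_to_num zeroes out. The 0.7 cut-off
# keeping only strongly co-activating pairs is an assumption.
corr = np.nan_to_num(np.corrcoef(matrix, rowvar=False))
graph = nx.Graph()
graph.add_nodes_from(names)
rows, cols = np.where(np.triu(corr, k=1) > 0.7)
graph.add_weighted_edges_from(
    (names[r], names[c], float(corr[r, c])) for r, c in zip(rows, cols)
)

# Graph analysis as named in the abstract: communities, centrality, and
# link prediction (Jaccard coefficient as one common choice).
communities = nx.community.greedy_modularity_communities(graph, weight="weight")
centrality = nx.degree_centrality(graph)
suggested = list(nx.jaccard_coefficient(graph, [(names[0], names[-1])]))
print(f"{graph.number_of_nodes()} nodes, {len(communities)} communities")
```

On a real dataset, the discovered communities and high-centrality filters can then be inspected against class labels or knowledge-base attributes, which corresponds to the step the chapter describes for producing factual and counterfactual semantic explanations.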
Keywords

CNNs, knowledge extraction, graph