Towards Explaining Deep Neural Networks Through Graph Analysis

DEXA Workshops (2019)

Abstract
Due to its potential to solve complex tasks, deep learning is being used across many different areas. The complexity of neural networks, however, makes it difficult to explain the whole decision process used by the model, which makes understanding deep learning models an active research topic. In this work we address this issue by extracting the knowledge acquired by trained Deep Neural Networks (DNNs) and representing this knowledge in a graph. The proposed graph encodes statistical correlations between neurons' activation values in order to expose the relationships between neurons in the hidden layers and both the input layer and the output classes. Two initial experiments in image classification were conducted to evaluate whether the proposed graph can help in understanding and explaining DNNs. We first show how it is possible to explore the proposed graph to find which neurons are most important for predicting each class. Then, we use graph analysis to detect groups of classes that are similar to each other and to examine how these similarities affect the DNN. Finally, we use heatmaps to visualize which parts of the input layer are responsible for activating each neuron in the hidden layers. The results show that by building and analysing the proposed graph it is possible to gain relevant insights into the DNN's inner workings.
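To make the abstract's core idea concrete, the following is a minimal sketch of how such a correlation graph might be built and queried. The abstract does not specify the correlation measure, the edge threshold, or the graph library; this sketch assumes Pearson correlation between recorded hidden-neuron activations and one-hot class indicators, an arbitrary threshold of 0.05, and networkx for the graph. All names and parameters here are hypothetical illustrations, not the authors' implementation.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)

# Stand-in data: activations of H hidden neurons over N inputs, plus labels.
# In practice these would be recorded from forward passes of a trained DNN.
N, H, C = 1000, 32, 10
activations = rng.random((N, H))      # hidden-layer activations, shape (N, H)
labels = rng.integers(0, C, size=N)   # class of each input

# One-hot class indicators so class membership can be correlated
# against each neuron's activation values.
indicators = np.eye(C)[labels]        # shape (N, C)

# Pearson correlation between every neuron and every class indicator:
# np.corrcoef over the stacked columns yields an (H+C) x (H+C) matrix
# whose off-diagonal block holds the neuron-to-class correlations.
corr = np.corrcoef(np.hstack([activations, indicators]), rowvar=False)
neuron_class_corr = corr[:H, H:]      # shape (H, C)

# Build the graph: neuron and class nodes, with an edge wherever the
# absolute correlation exceeds a threshold (the value is an assumption).
THRESHOLD = 0.05
G = nx.Graph()
for h in range(H):
    for c in range(C):
        w = neuron_class_corr[h, c]
        if abs(w) >= THRESHOLD:
            G.add_edge(f"neuron_{h}", f"class_{c}", weight=w)

# Query the graph: rank each class's neighbouring neurons by edge weight
# to find the neurons most important for predicting that class.
for c in range(C):
    node = f"class_{c}"
    if node in G:
        ranked = sorted(G[node].items(),
                        key=lambda kv: abs(kv[1]["weight"]), reverse=True)
        print(f"{node}: top neurons {[n for n, _ in ranked[:3]]}")
```

The same graph structure would also support the abstract's second experiment: class-similarity groups could be detected by comparing which neuron neighbourhoods different class nodes share, for example via a community-detection or clustering pass over the graph.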
Keywords
deep neural networks,graph analysis,neural networks