BTSC: Binary tree structure convolution layers for building interpretable decision‐making deep CNN

CAAI Transactions on Intelligence Technology (2024)

Abstract
Although deep convolutional neural networks (DCNNs) have achieved great success in the computer vision field, such models are considered to lack interpretability in decision-making. One of the fundamental issues is that their decision mechanism is regarded as a "black-box" operation. The authors design a binary tree structure convolution (BTSC) module and control the activation level of particular neurons to build an interpretable DCNN model. First, the authors design the BTSC module, in which each parent node generates two independent child layers, and then integrate it into a normal DCNN model. The main advantages of the BTSC are as follows: 1) child nodes of different parent nodes do not interfere with each other; 2) parent and child nodes can inherit knowledge. Second, considering the activation level of neurons, the authors design an information-coding objective to guide neural nodes to learn the particular information coding that is expected. Through experiments, the authors verify that: 1) decisions made by both ResNet and DenseNet models can be explained well based on the "decision information flow path" (known as the decision-path) formed in the BTSC module; 2) the decision-path can reasonably interpret the decision-reversal mechanism (robustness mechanism) of the DCNN model; 3) the credibility of decision-making can be measured by the matching degree between the actual and expected decision-paths.
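The abstract only outlines the idea, so the sketch below is a hypothetical PyTorch rendering of it, not the authors' implementation: each parent feature map feeds two independent child convolution branches, the most-activated node at each tree level traces a "decision-path", and a stand-in information-coding loss pushes the leaf expected for each class to be the most active. The `BTSCNode`/`BTSCModule`/`coding_loss` names, all layer sizes, and the mean-activation scoring are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class BTSCNode(nn.Module):
    """One parent node of the binary tree structure convolution (BTSC):
    the parent's feature map feeds two independent child conv branches,
    so children of different parents share no weights or gradients
    (hypothetical layer sizes)."""

    def __init__(self, in_channels: int, child_channels: int):
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv2d(in_channels, child_channels, 3, padding=1),
                nn.BatchNorm2d(child_channels),
                nn.ReLU(inplace=True),
            )
        self.left, self.right = branch(), branch()

    def forward(self, parent_features):
        # Both children inherit the parent's features as input
        # ("knowledge inheritance") but transform them independently.
        return self.left(parent_features), self.right(parent_features)


class BTSCModule(nn.Module):
    """A binary tree of BTSCNodes appended to a backbone (e.g. a
    ResNet/DenseNet trunk); `depth` levels give 2**depth leaves."""

    def __init__(self, in_channels: int, child_channels: int, depth: int = 2):
        super().__init__()
        self.levels = nn.ModuleList()
        ch = in_channels
        for level in range(depth):
            # one BTSCNode per parent feature map at this level
            self.levels.append(nn.ModuleList(
                BTSCNode(ch, child_channels) for _ in range(2 ** level)))
            ch = child_channels

    def forward(self, x):
        feats, per_level = [x], []
        for nodes in self.levels:
            children = []
            for parent, node in zip(feats, nodes):
                children.extend(node(parent))
            feats = children
            per_level.append(children)
        return per_level  # activations at every tree level


def activation_levels(level_feats):
    # Mean activation per node at one level: (batch, n_nodes).
    return torch.stack([f.mean(dim=(1, 2, 3)) for f in level_feats], dim=1)


def coding_loss(leaf_feats, expected_leaf):
    """Stand-in for the information-coding objective: push the leaf
    assigned to each sample's class to be the most activated one."""
    return F.cross_entropy(activation_levels(leaf_feats), expected_leaf)


btsc = BTSCModule(in_channels=64, child_channels=32, depth=2)
x = torch.randn(4, 64, 8, 8)              # backbone feature maps
levels = btsc(x)
# The index of the most active node at each depth forms the decision-path.
path = [activation_levels(lvl).argmax(dim=1) for lvl in levels]
loss = coding_loss(levels[-1], torch.randint(0, 4, (4,)))
```

Keeping each child branch's weights separate is what would make the path readable: under this reading of the abstract, only the branches a sample actually excites carry its information forward, so comparing the traced path against the path expected for the predicted class gives the credibility measure the authors describe.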