LeGrad: An Explainability Method for Vision Transformers via Feature Formation Sensitivity
CoRR (2024)
Abstract
Vision Transformers (ViTs), with their ability to model long-range
dependencies through self-attention mechanisms, have become a standard
architecture in computer vision. However, the interpretability of these models
remains a challenge. To address this, we propose LeGrad, an explainability
method specifically designed for ViTs. LeGrad computes the gradient with
respect to the attention maps of ViT layers, considering the gradient itself as
the explainability signal. We aggregate the signal over all layers, combining
the activations of the last as well as intermediate tokens to produce the
merged explainability map. This makes LeGrad a conceptually simple and
easy-to-implement tool for enhancing the transparency of ViTs. We evaluate
LeGrad in challenging segmentation, perturbation, and open-vocabulary settings,
showcasing its versatility compared to other state-of-the-art explainability
methods and demonstrating its superior spatial fidelity and robustness to
perturbations. A demo and the code are available at
https://github.com/WalBouss/LeGrad.
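As a rough illustration of the mechanism the abstract describes, the sketch below (PyTorch) takes the gradient of a scalar activation with respect to each layer's attention map, treats that gradient itself as the explainability signal, and merges the per-layer signals into one map. The function name legrad_sketch, its input conventions, and the head/query averaging choices are assumptions for illustration, not the authors' reference implementation (see the repository above for that).

import torch
import torch.nn.functional as F

def legrad_sketch(attn_maps, score):
    """Hypothetical sketch of a LeGrad-style aggregation.

    attn_maps: list of per-layer attention tensors, each of shape
               (heads, tokens, tokens), still attached to the autograd
               graph that produced `score`.
    score:     scalar activation to explain, e.g. the similarity between
               the [CLS] token and a target embedding.
    """
    # Gradient of the score w.r.t. every layer's attention map; the
    # gradient itself serves as the explainability signal.
    grads = torch.autograd.grad(score, attn_maps, retain_graph=True)

    layer_maps = []
    for g in grads:
        g = F.relu(g)            # keep only positively contributing entries
        g = g.mean(dim=0)        # average over attention heads
        # Averaging over the query dimension yields one score per token.
        layer_maps.append(g.mean(dim=0))

    # Aggregate the signal over all layers into a single merged map.
    merged = torch.stack(layer_maps).mean(dim=0)

    # Drop the [CLS] token (assumed at index 0) and normalise to [0, 1].
    patch_scores = merged[1:]
    patch_scores = (patch_scores - patch_scores.min()) / (
        patch_scores.max() - patch_scores.min() + 1e-8
    )
    return patch_scores  # one relevance score per patch token

The returned per-patch scores can then be reshaped to the patch grid and upsampled to image resolution to obtain the final heatmap.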