Sparse Attacks for Manipulating Explanations in Deep Neural Network Models

Ahmad Ajalloeian, Seyed Mohsen Moosavi-Dezfooli, Michalis Vlachos, Pascal Frossard

23rd IEEE International Conference on Data Mining, ICDM 2023 (2023)

Abstract
We investigate methods for manipulating classifier explanations while keeping the predictions unchanged. Our focus is on sparse attacks, which seek to alter only a minimal number of input features. We present an efficient and novel algorithm for computing sparse perturbations that alter the explanations but leave the predictions unaffected. We demonstrate that, compared to PGD attacks with an ℓ0 constraint, our algorithm generates sparser perturbations while producing greater discrepancies between the original and manipulated explanations. Moreover, we show that it is possible to conceal the attribution of the k most significant features in the original explanation by perturbing fewer than k features of the input. We present results for both image and tabular datasets, and we emphasize the significance of sparse-perturbation-based attacks for building trustworthy models in high-stakes applications. Our research reveals important vulnerabilities in explanation methods that should be taken into account when developing reliable explanation methods. Code can be found at https://github.com/ahmadajal/sparse_expl_attacks
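The paper's implementation lives in the linked repository. As a rough illustration of the general idea described in the abstract, the sketch below is a hypothetical PyTorch implementation, not the authors' algorithm: it runs PGD-style updates that push a gradient-based saliency map away from its original value, adds a penalty that softly keeps the predicted class fixed, and projects the perturbation onto an ℓ0 ball by zeroing all but its k largest-magnitude entries. All hyperparameters (k, steps, lr, pred_weight) are illustrative assumptions; note also that for plain ReLU networks the second-order gradients used here vanish almost everywhere, so explanation attacks typically substitute a smooth activation such as softplus.

```python
# Hypothetical sketch of a sparse explanation attack (not the paper's algorithm).
import torch
import torch.nn.functional as F

def saliency(model, x, create_graph=False):
    """Gradient explanation: |d logit_c / d x| for the predicted class c."""
    x = x if x.requires_grad else x.clone().requires_grad_(True)
    logits = model(x)
    score = logits.gather(1, logits.argmax(dim=1, keepdim=True)).sum()
    (grad,) = torch.autograd.grad(score, x, create_graph=create_graph)
    return grad.abs()

def sparse_expl_attack(model, x, k=20, steps=100, lr=0.05, pred_weight=1.0):
    """Perturb at most k input features per sample to move the explanation
    while softly keeping the original prediction. Inputs assumed in [0, 1]."""
    model.eval()
    with torch.no_grad():
        orig_pred = model(x).argmax(dim=1)
    orig_expl = saliency(model, x).detach()
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        x_adv = (x + delta).clamp(0, 1)
        expl = saliency(model, x_adv, create_graph=True)
        # Minimizing this loss maximizes the explanation discrepancy
        # and penalizes any drift away from the original prediction.
        loss = -F.mse_loss(expl, orig_expl) \
               + pred_weight * F.cross_entropy(model(x_adv), orig_pred)
        loss.backward()
        with torch.no_grad():
            delta -= lr * delta.grad.sign()
            # l0 projection: keep only the k largest-magnitude entries per
            # sample, using scatter so ties cannot inflate the sparsity budget.
            flat = delta.abs().flatten(1)
            idx = flat.topk(k, dim=1).indices
            mask = torch.zeros_like(flat).scatter_(1, idx, 1.0)
            delta *= mask.view_as(delta)
            delta.grad.zero_()
    return (x + delta).clamp(0, 1).detach()
```

In a full attack one would also verify after the loop that the prediction is unchanged and report the final ℓ0 norm of the perturbation; the scatter-based mask is used because a threshold test would keep more than k entries when magnitudes tie.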
Keywords
Explainable AI, Deep Neural Networks, Adversarial Attacks, Sparse Perturbation, Fairness