Adversarial Learning for Coordinate Regression through k-layer Penetrating Representation

Mengxi Jiang, Yulei Sui, Yunqi Lei, Xiaofei Xie, Cuihua Li, Yang Liu, Ivor W. Tsang

IEEE Transactions on Dependable and Secure Computing (2024)

Abstract
Adversarial attacks are a crucial step in evaluating the reliability and robustness of deep neural network (DNN) models. Most existing attack approaches apply an end-to-end gradient update strategy to generate adversarial examples for classification or regression problems. However, few of them consider DNN models with non-differentiable operations (e.g., coordinate regression models), where end-to-end backpropagation is blocked and gradient calculation fails. In this paper, we present a new adversarial example generation approach for both untargeted and targeted attacks on coordinate regression models with non-differentiable operations. The novelty of our approach lies in a k-layer penetrating representation: we perturb the hidden feature distribution of the k-th layer through relational guidance to influence the final output, so that end-to-end backpropagation is not required. Rather than modifying a large portion of the pixels in an image, the proposed approach modifies only a very small set of input pixels. These pixels are carefully and precisely selected via three correlations between the input pixels and the hidden features of the k-th layer of a DNN, thus significantly reducing the adversarial perturbation on a clean image. We successfully apply the proposed approach to two different tasks (i.e., 2D and 3D human pose estimation), which are typical applications of coordinate regression learning. Comprehensive experiments demonstrate that our approach achieves better performance while applying much less adversarial perturbation to clean images.
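The core idea of attacking through a differentiable prefix can be illustrated with a toy model. The following is a minimal NumPy sketch, not the paper's actual method: the network shapes, the single correlation criterion (gradient magnitude), and the step size are all illustrative assumptions. It shows a coordinate-style model whose final argmax is non-differentiable, computes gradients only up to the k-th layer features, and perturbs just a few highly correlated input pixels.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "coordinate regression" model: x -> h (k-th layer features) -> heatmap -> argmax.
# The argmax makes the model non-differentiable end to end.
W1 = rng.normal(size=(16, 64))   # 64 input pixels -> 16 hidden features (layer k)
W2 = rng.normal(size=(10, 16))   # hidden features -> 10-bin heatmap

def predict(x):
    h = np.tanh(W1 @ x)
    heatmap = W2 @ h
    return int(np.argmax(heatmap))   # predicted coordinate (non-differentiable step)

x = rng.normal(size=64)
j_star = predict(x)

# Gradient of the winning heatmap bin w.r.t. the input, computed only through
# the differentiable prefix (it stops before the argmax):
h_pre = W1 @ x
grad_h = W2[j_star] * (1.0 - np.tanh(h_pre) ** 2)   # d heatmap[j*] / d h_pre
grad_x = W1.T @ grad_h                              # d heatmap[j*] / d x

# Sparse untargeted attack: perturb only the few pixels most strongly
# correlated (by gradient magnitude, an illustrative stand-in for the
# paper's three correlations) with the features driving the prediction.
k_pixels = 5
idx = np.argsort(-np.abs(grad_x))[:k_pixels]
x_adv = x.copy()
x_adv[idx] -= 0.5 * np.sign(grad_x[idx])   # push the winning bin's score down

print("clean prediction:", j_star)
print("adversarial prediction:", predict(x_adv))
print("pixels modified:", k_pixels, "of", x.size)
```

Only five of the sixty-four input values change, mirroring the abstract's point that the perturbation is confined to a small, carefully selected pixel set rather than spread across the whole image.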