Hide and Seek: On the Stealthiness of Attacks Against Deep Learning Systems

Computer Security - ESORICS 2022, Part III (2022)

Abstract
With the growing popularity of artificial intelligence (AI) and machine learning (ML), a wide spectrum of attacks against deep learning (DL) models has been proposed in the literature. Both evasion attacks and poisoning attacks attempt to use adversarially altered samples to fool the victim model into misclassifying them. While such attacks claim to be, or are expected to be, stealthy, i.e., imperceptible to human eyes, such claims are rarely evaluated. In this paper, we present the first large-scale study on the stealthiness of adversarial samples used in attacks against deep learning. We have implemented 20 representative adversarial ML attacks on six popular benchmarking datasets. We evaluate the stealthiness of the attack samples using two complementary approaches: (1) a numerical study that adopts 24 metrics for image similarity or quality assessment; and (2) a user study with three sets of questionnaires that collected 30,000+ annotations from 1,500+ responses. Our results show that the majority of the existing attacks introduce non-negligible perturbations that are not stealthy to human eyes. We further analyze the factors that contribute to attack stealthiness. We examine the correlation between the numerical analysis and the user studies, and demonstrate that some image quality metrics may provide useful guidance in attack designs, while there is still a significant gap between assessed image quality and the visual stealthiness of attacks.
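To illustrate the kind of numerical stealthiness check the abstract refers to, the sketch below computes two common image-quality quantities between a clean image and a perturbed copy: peak signal-to-noise ratio (PSNR) and the L-infinity norm of the perturbation. This is a minimal, hypothetical example using NumPy on a random toy image; the paper's actual 24 metrics and datasets are not reproduced here.

```python
import numpy as np


def psnr(clean, adv, max_val=255.0):
    """Peak signal-to-noise ratio (dB) between a clean and a perturbed image.

    Higher PSNR means the perturbed image is numerically closer to the original.
    """
    mse = np.mean((clean.astype(np.float64) - adv.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)


# Toy example (not from the paper): a random "clean" image and a copy
# with a small bounded perturbation, as an evasion attack might produce.
rng = np.random.default_rng(0)
clean = rng.integers(0, 256, size=(32, 32, 3)).astype(np.float64)
adv = np.clip(clean + rng.uniform(-8.0, 8.0, size=clean.shape), 0.0, 255.0)

print(f"PSNR: {psnr(clean, adv):.1f} dB")
print(f"L-inf perturbation: {np.abs(adv - clean).max():.1f}")
```

A study like this one would compute such metrics over every attack/dataset pair and then compare the scores against human annotations, since, as the abstract notes, good metric scores do not guarantee visual stealthiness.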
Keywords
Adversarial machine learning, Attacks