Exploring the Adversarial Frontier: Quantifying Robustness via Adversarial Hypervolume
arXiv (2024)
Abstract
The escalating threat of adversarial attacks on deep learning models,
particularly in security-critical fields, has underscored the need for robust
deep learning systems. Conventional robustness evaluations have relied on
adversarial accuracy, which measures a model's performance under a specific
perturbation intensity. However, this singular metric does not fully
encapsulate the overall resilience of a model against varying degrees of
perturbation. To address this gap, we propose a new metric termed adversarial
hypervolume, assessing the robustness of deep learning models comprehensively
over a range of perturbation intensities from a multi-objective optimization
standpoint. This metric enables in-depth comparison of defense mechanisms
and exposes the merely trivial robustness gains afforded by less potent
defensive strategies. Additionally, we adopt a novel training algorithm that
enhances adversarial robustness uniformly across various perturbation
intensities, in contrast to methods narrowly focused on optimizing adversarial
accuracy. Our extensive empirical studies validate the effectiveness of the
adversarial hypervolume metric, demonstrating its ability to reveal subtle
differences in robustness that adversarial accuracy overlooks. This research
contributes a new measure of robustness and establishes a standard for
assessing and benchmarking the resilience of current and future defensive
models against adversarial threats.
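The abstract frames robustness as a curve of adversarial accuracy over a range of perturbation budgets rather than a single point. As a rough illustration of that idea (not the paper's exact formulation), the two-objective hypervolume in this setting reduces to the area dominated by the (perturbation, accuracy) points up to a maximum budget. The following is a minimal sketch; the function name, the rectangle-rule approximation, and the example numbers are illustrative assumptions.

```python
def adversarial_hypervolume(epsilons, accuracies):
    """Illustrative sketch: area under the step curve of adversarial
    accuracy versus perturbation budget epsilon.

    Assumes each accuracy was measured at the paired epsilon and that
    accuracy is non-increasing in epsilon; the last point marks the
    largest budget considered (reference point on the epsilon axis).
    """
    pts = sorted(zip(epsilons, accuracies))
    area = 0.0
    for (e0, a0), (e1, _a1) in zip(pts, pts[1:]):
        # Accuracy measured at e0 is credited over the interval [e0, e1).
        area += (e1 - e0) * a0
    return area


# Hypothetical evaluation: accuracy 0.9 at eps=0, 0.7 at eps=0.1,
# 0.5 at eps=0.2. Two models with equal accuracy at a single epsilon
# can still differ in this area, which is the gap the metric targets.
hv = adversarial_hypervolume([0.0, 0.1, 0.2], [0.9, 0.7, 0.5])
```

A single adversarial-accuracy number corresponds to sampling this curve at one epsilon; the area aggregates behavior across the whole budget range, which is why it can separate defenses that look identical at one perturbation intensity.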