Group-based Robustness: A General Framework for Customized Robustness in the Real World
arXiv (2023)
Abstract
Machine-learning models are known to be vulnerable to evasion attacks that
perturb model inputs to induce misclassifications. In this work, we identify
real-world scenarios where the true threat cannot be assessed accurately by
existing attacks. Specifically, we find that conventional metrics measuring
targeted and untargeted robustness do not appropriately reflect a model's
ability to withstand attacks from one set of source classes to another set of
target classes. To address the shortcomings of existing methods, we formally
define a new metric, termed group-based robustness, that complements existing
metrics and is better suited to evaluating model performance in certain attack
scenarios. We show empirically that group-based robustness allows us to
distinguish models' vulnerability to specific threat models in situations
where traditional robustness metrics do not apply. Moreover, to
measure group-based robustness efficiently and accurately, we 1) propose two
loss functions and 2) identify three new attack strategies. We show empirically
that with comparable success rates, finding evasive samples using our new loss
functions saves computation by a factor as large as the number of targeted
classes, and finding evasive samples using our new attack strategies saves time
by up to 99% compared to brute-force search methods. Finally, we propose a
defense method that increases group-based robustness by up to 3.52×.
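
The abstract does not spell out the two proposed loss functions, but the claimed savings (one attack run covering a whole group of target classes, rather than one run per class) suggest a loss defined over a set of targets. Below is a minimal sketch of one plausible such loss, assuming a margin-style formulation taken over the target group; the name group_margin_loss, its signature, and this exact formulation are illustrative assumptions, not the paper's definitions.

    import torch

    def group_margin_loss(logits: torch.Tensor, target_classes: list[int]) -> torch.Tensor:
        # Illustrative sketch (not the paper's loss): margin between the best
        # logit inside the target group and the best logit outside it.
        # Minimizing this drives each input toward *some* class in the target
        # group, so a single attack run can cover the entire group instead of
        # requiring one targeted run per class.
        num_classes = logits.shape[1]
        target_mask = torch.zeros(num_classes, dtype=torch.bool, device=logits.device)
        target_mask[target_classes] = True  # mark the target group
        target_best = logits[:, target_mask].max(dim=1).values   # best target-group logit
        other_best = logits[:, ~target_mask].max(dim=1).values   # best non-target logit
        # Hinge at zero: once a target-group class already dominates, the
        # example contributes no further loss.
        return (other_best - target_best).clamp(min=0).mean()

Minimizing a loss of this kind inside a standard iterative attack loop (e.g., PGD over the input perturbation) would yield the per-group rather than per-class cost the abstract describes, since the number of optimization runs no longer scales with the number of target classes.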