Adversarial Training for Better Robustness

Smart Technologies for Sustainable and Resilient Ecosystems (2023)

Abstract
As the vulnerabilities of neural networks are gradually exposed, the security of deep learning has attracted serious attention from researchers. Adversarial training is a promising way to enhance the robustness of deep learning models: it defends against white-box targeted attacks by learning from deliberately crafted adversarial samples. In recent years, researchers have proposed many algorithms to improve adversarial training, for example by increasing training effectiveness and reducing its limitations. In this survey, we propose a novel taxonomy to categorize the progress of adversarial training and analyze its current constraints. Beyond presenting an overall picture of adversarial training in terms of adversarial attacks and robustness, we conclude with a summary and an outlook for the area.
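To make the mechanism described above concrete, the following is a minimal sketch of the standard min-max (PGD-based) formulation of adversarial training in PyTorch. It is an illustration of the general technique, not a specific algorithm from this survey; the function names (`pgd_attack`, `adversarial_train_epoch`) and hyperparameters (eps, alpha, steps) are illustrative assumptions, and a classifier `model` with a data loader over inputs in [0, 1] is assumed.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Craft adversarial examples by projected gradient descent in an L-inf ball."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()  # random start
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Take a signed ascent step on the loss, then project back into the eps-ball around x.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def adversarial_train_epoch(model, loader, optimizer, device="cuda"):
    """One epoch of adversarial training: fit the model on worst-case perturbed inputs."""
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = pgd_attack(model, x, y)           # inner maximization: find adversarial samples
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)   # outer minimization: learn from them
        loss.backward()
        optimizer.step()
```

Replacing the clean inputs with their adversarial counterparts in each training step is what distinguishes this loop from standard training; many of the algorithms surveyed vary the inner attack, the loss, or the mixing of clean and adversarial samples.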
Keywords
Adversarial training, Adversarial robustness, Adversarial example