Defense strategies for Adversarial Machine Learning: A survey.

Comput. Sci. Rev. (2023)

Abstract
Adversarial Machine Learning (AML) is a recently introduced attack technique that aims to deceive Machine Learning (ML) models by supplying deliberately falsified inputs that render those models ineffective. Consequently, most researchers focus on detecting new AML attacks that can undermine existing ML infrastructures, while overlooking the significance of defense strategies. This article surveys the existing literature on AML attacks and defenses, with a special focus on a taxonomy of recent work on AML defense techniques across application domains such as audio, cyber-security, Natural Language Processing (NLP), and computer vision. The survey also examines the methodology of the defense solutions and compares them against several criteria, such as whether they are attack- and/or domain-agnostic, whether they deploy appropriate AML evaluation metrics, and whether they share their source code and/or evaluation datasets. To the best of our knowledge, this is the first survey that systematizes existing knowledge focusing solely on defense solutions against AML and provides directions for future research on tackling the increasing threat of AML. © 2023 Elsevier Inc. All rights reserved.
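As a minimal illustration of the kind of attack the abstract describes, the sketch below applies a Fast Gradient Sign Method (FGSM)-style perturbation to a toy logistic classifier. The weights, input, and perturbation budget are made up for demonstration; this is not drawn from the surveyed paper itself.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy linear classifier with hypothetical, fixed weights.
w = np.array([2.0, -3.0, 1.0])

def predict(x):
    """Return the predicted class (0 or 1) of input x."""
    return 1 if sigmoid(w @ x) >= 0.5 else 0

def input_gradient(x):
    """Gradient of the logistic loss w.r.t. the input, for true label y = 1:
    d/dx [-log sigmoid(w @ x)] = -(1 - sigmoid(w @ x)) * w"""
    return -(1.0 - sigmoid(w @ x)) * w

x = np.array([1.0, -0.5, 0.5])   # clean input, classified as 1
eps = 1.0                        # perturbation budget (per-feature L-inf bound)

# FGSM step: move each feature by eps in the direction that increases the loss.
x_adv = x + eps * np.sign(input_gradient(x))

print(predict(x), predict(x_adv))  # the adversarial input flips the prediction
```

The perturbation is bounded per feature by `eps`, yet it is enough to flip the model's decision; the defense techniques catalogued in the survey aim to detect or resist exactly this kind of falsified input.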
Keywords
Survey, Machine Learning, Adversarial Machine Learning, Defense methods, Computer vision, Cybersecurity, Natural Language Processing, Audio