Adversarial Robustness on Image Classification With k-Means

Rollin Omari, Junae Kim, Paul Montague

IEEE Access (2024)

Abstract
Attacks and defences in adversarial machine learning literature have primarily focused on supervised learning. However, it remains an open question whether existing methods and strategies can be adapted to unsupervised learning approaches. In this paper we explore the challenges and strategies in attacking a k-means clustering algorithm and in enhancing its robustness against adversarial manipulations. We evaluate the vulnerability of clustering algorithms to adversarial attacks on two datasets (MNIST and Fashion-MNIST), emphasising the associated security risks. Our study investigates the impact of incremental attack strength on training, introduces the concept of transferability between supervised and unsupervised models, and highlights the sensitivity of unsupervised models to sample distributions. We additionally introduce and evaluate an adversarial training method that improves testing performance in adversarial scenarios, and we highlight the importance of various parameters in the proposed training method, such as continuous learning, centroid initialisation, and adversarial step-count. Overall, our study emphasises the vulnerability of unsupervised learning and clustering algorithms to adversarial attacks and provides insights into potential defence mechanisms.
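To make the attack setting concrete: a minimal sketch (not the paper's exact method; the function names, the two-centroid toy data, and the step size `eps` are illustrative assumptions) of a centroid-targeted perturbation that nudges a sample toward its nearest competing centroid until its k-means cluster assignment flips.

```python
import numpy as np

def assign(x, centroids):
    """Index of the nearest centroid under Euclidean distance."""
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

def perturb_toward_competitor(x, centroids, eps):
    """Move x a distance eps toward its nearest *other* centroid.

    This is an illustrative attack sketch: a small, bounded step in
    input space aimed at crossing the decision boundary between the
    sample's own cluster and the closest competing cluster.
    """
    own = assign(x, centroids)
    dists = np.linalg.norm(centroids - x, axis=1)
    dists[own] = np.inf                      # exclude the sample's own centroid
    target = int(np.argmin(dists))           # nearest competing centroid
    direction = centroids[target] - x
    return x + eps * direction / np.linalg.norm(direction)

# Toy example: two well-separated centroids, a sample near the first.
centroids = np.array([[0.0, 0.0], [4.0, 0.0]])
x = np.array([1.5, 0.0])

x_adv = perturb_toward_competitor(x, centroids, eps=0.6)
print(assign(x, centroids), assign(x_adv, centroids))  # 0 1
```

A perturbation of magnitude 0.6 is enough to flip the assignment here because the sample already sits close to the midpoint between the two centroids; the paper's study of incremental attack strength corresponds to varying such a step size.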
Keywords
Training, Clustering algorithms, Robustness, Testing, Mathematical models, Unsupervised learning, Sensitivity, Adversarial machine learning, Machine learning, Adversarial examples, adversarial robustness, adversarial training, k-means clustering