Robust Pervasive Detection for Adversarial Samples of Artificial Intelligence in IoT Environments

IEEE Access(2019)

Abstract
Nowadays, artificial intelligence technologies (e.g., deep neural networks) are widely used in the Internet of Things (IoT) to provide smart services and process sensing data. Evolving neural networks even exceed human cognitive performance on some tasks. However, the accuracy of these models depends to some extent on the quality of the training data. Carefully crafted adversarial perturbations, when added to images, are sufficient to deceive a model. Such attacks cause classifiers trained with neural networks to misidentify objects and thus fail completely. On the other hand, the various defensive methods that have been proposed suffer from two criticisms. First, they achieve unsatisfactory detection rates because of low robustness toward adversarial samples. Second, their excessive dependence on the outputs of specific network layers hinders the emergence of universal schemes. In this paper, we propose the large margin cosine estimation (LMCE) detection scheme to overcome these shortcomings, making detection independent and universal. We illustrate the principle of our approach and analyze the significance of several important parameters. Moreover, we model various types of adversarial attacks, establish the proposed defense mechanism against them, and evaluate our approach from different aspects. The method is validated on a range of standard datasets, including MNIST, CIFAR-10, and SVHN. The evaluation clearly reflects the robustness and pervasiveness of the approach against various white-box and semi-white-box attacks.
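As a concrete illustration of the kind of attack the abstract describes, the sketch below applies an FGSM-style sign perturbation to a toy linear classifier. This is a hypothetical example, not the paper's LMCE scheme: the weights, input, and budget are invented, and the point is only that a small, well-aimed perturbation flips the prediction.

```python
import numpy as np

# Toy linear classifier with fixed (hypothetical) weights.
rng = np.random.default_rng(0)
w = rng.normal(size=8)       # classifier weights
x = rng.normal(size=8)       # a "clean" input

def predict(v):
    """Binary label from the sign of the linear score."""
    return int(w @ v > 0)

clean = predict(x)

# For a linear score, the gradient w.r.t. the input is w itself,
# so the worst-case L-infinity step of budget eps is eps * sign(w),
# taken in the direction that lowers the current class's score.
eps = abs(w @ x) / np.abs(w).sum() + 1e-3   # just enough to cross the boundary
step = (-1.0 if clean == 1 else 1.0) * eps * np.sign(w)
x_adv = x + step

print("clean label:", clean, "adversarial label:", predict(x_adv))
```

The same sign-of-gradient step underlies attacks on deep networks; there the gradient is obtained by backpropagation rather than read off directly, but the budgeted perturbation is equally imperceptible relative to the image.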
Keywords
Artificial neural networks, machine learning, data security, computer hacking, detection algorithms