Deep Bayesian Image Set Classification Approach for Defense against Adversarial Attacks.

2023 International Conference on Digital Image Computing: Techniques and Applications (DICTA)

Abstract
Deep learning has become an integral part of many pattern recognition and computer vision systems in recent years due to its outstanding achievements in object recognition, facial recognition, and scene understanding. However, deep neural networks (DNNs) can be fooled by an adversary with high confidence. In practice, the vulnerability of deep learning systems to carefully perturbed images, known as adversarial examples, poses a dire security threat in physical-world applications. To address this, we present what is, to our knowledge, the first image-set-based adversarial defense approach. Image set classification has shown exceptional performance for object and face recognition owing to its intrinsic ability to handle appearance variability. We propose a robust deep Bayesian image set classification framework as a defense against a broad range of adversarial attacks. We extensively evaluate the performance of the proposed technique with several voting strategies, and we further analyse the effects of image size, perturbation magnitude, and the ratio of perturbed images in each image set. We also compare our technique with recent state-of-the-art defense methods and with single-shot recognition. The empirical results demonstrate superior performance on the CIFAR-10, MNIST, ETH-80, and Tiny ImageNet datasets. Our code is available at https://github.com/ai-voyage/imageset-adversarial-defence.git.
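To make the general idea of the abstract concrete, the sketch below illustrates one plausible reading of "deep Bayesian image set classification with voting": Monte Carlo dropout approximates the Bayesian posterior for each image in a set, and a majority vote over the per-image predictions yields the set-level label. The network architecture, the number of MC samples, and the specific voting rule are illustrative assumptions for this sketch, not the authors' released implementation (see the repository linked above for that).

```python
# Minimal sketch (assumptions, not the authors' code): MC-dropout Bayesian
# inference per image, followed by a majority vote over the image set.
import torch
import torch.nn as nn

class SmallBayesianCNN(nn.Module):
    """A small CNN whose dropout layers stay active at test time (MC dropout)."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2), nn.Dropout2d(0.3),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Sequential(nn.Dropout(0.5), nn.Linear(64, num_classes))

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

@torch.no_grad()
def predict_image_set(model, image_set, mc_samples=20, num_classes=10):
    """Average MC-dropout posteriors per image, then majority-vote over the set."""
    model.train()  # keep dropout active so each forward pass is a posterior sample
    votes = []
    for image in image_set:  # image: (3, H, W)
        x = image.unsqueeze(0)
        probs = torch.stack([model(x).softmax(-1) for _ in range(mc_samples)])
        votes.append(probs.mean(0).argmax(-1))  # posterior-mean prediction
    votes = torch.cat(votes)
    return torch.bincount(votes, minlength=num_classes).argmax().item()

# Usage: an "image set" is simply a stack of views of the same object;
# adversarially perturbed views are outvoted by the clean ones.
model = SmallBayesianCNN()
image_set = torch.randn(8, 3, 32, 32)  # e.g. 8 CIFAR-10-sized views
print(predict_image_set(model, image_set))
```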
Keywords
Adversarial Attack, Adversarial Defence, Deep Learning, Image Set Classification