Adversarially Robust Continual Learning

IEEE International Joint Conference on Neural Networks (IJCNN), 2022

Abstract
Recent approaches in continual learning (CL) have focused on extracting various types of features from multi-task datasets to prevent catastrophic forgetting, without formally evaluating the quality, robustness, and usefulness of these features. Recently, it has been shown that adversarial robustness can be understood by decomposing learned features into robust and non-robust types. Robust features have been used to build robust datasets and have been shown to increase adversarial robustness significantly. However, there has been no assessment of using such robust features in CL frameworks to harden CL models against adversarial attacks. Current CL algorithms use standard features (a mixture of robust and non-robust features) and thus produce models vulnerable to both natural and adversarial noise. This paper presents an empirical study demonstrating the importance of robust features in the context of class incremental learning (CIL). We adopted the publicly available CIFAR10 dataset for our CIL experiments. We used the CIFAR10-Corrupted dataset to evaluate the robustness of the standard, robust, and non-robust models against various types of corruption, including brightness, contrast, and Gaussian noise. To test these models against adversarially attacked input, we created a new dataset using the projected gradient descent (PGD) and fast gradient sign method (FGSM) algorithms. Our experiments show that models trained on standard features (a mixture of robust and non-robust features) obtained higher clean accuracy than models trained on either robust or non-robust features alone. However, the models trained using standard and non-robust features performed poorly under noisy and adversarial conditions compared to the models trained using robust features, and the model trained using non-robust features performed worst of all under corruption and adversarial attack. Our study underlines the significance of using robust features in CIL.
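As context for the attack datasets described above, the following is a minimal sketch (not the authors' released code) of how adversarially attacked copies of input batches can be generated with FGSM and PGD in PyTorch. The perturbation budget eps, step size alpha, and iteration count are illustrative assumptions, not values reported in the paper.

import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    # Fast gradient sign method: a single signed-gradient step
    # of size eps away from the correct label.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0, 1).detach()

def pgd(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    # Projected gradient descent: iterated signed-gradient steps,
    # projected back onto the L-infinity ball of radius eps around x.
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # project onto eps-ball
        x_adv = x_adv.clamp(0, 1)                 # keep pixels in valid range
    return x_adv.detach()

Passing each CIFAR10 test batch through fgsm or pgd against a trained classifier would yield an attacked evaluation set of the kind the abstract describes.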
Keywords
nonrobust features, adversarial robustness, robust datasets, adversarially robust continual learning, CL algorithm, class incremental learning, CIL, CIFAR10-Corrupted dataset, projected gradient descent algorithm, PGD algorithm, fast gradient sign method, FGSM algorithm, catastrophic forgetting