Learning to Unlearn: Instance-Wise Unlearning for Pre-trained Classifiers

AAAI 2024

Abstract
Since the recent advent of regulations for data protection (e.g., the General Data Protection Regulation), there has been increasing demand for deleting information learned from sensitive data in pre-trained models without retraining from scratch. The inherent vulnerability of neural networks to adversarial attacks and unfairness also calls for a robust method to remove or correct information in an instance-wise fashion, while retaining predictive performance across the remaining data. To this end, we consider instance-wise unlearning, whose goal is to delete information on a set of instances from a pre-trained model, by either misclassifying each instance away from its original prediction or relabeling the instance to a different label. We also propose two methods that reduce forgetting on the remaining data: 1) utilizing adversarial examples to overcome forgetting at the representation level, and 2) leveraging weight importance metrics to pinpoint network parameters guilty of propagating unwanted information. Both methods require only the pre-trained model and the data instances to forget, allowing painless application to real-life settings where the entire training set is unavailable. Through extensive experimentation on various image classification benchmarks, we show that our approach effectively preserves knowledge of the remaining data while unlearning given instances in both single-task and continual unlearning scenarios.
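The core trade-off the abstract describes — pushing forget instances away from their original predictions while anchoring the model's behavior elsewhere — can be sketched on a toy softmax classifier. Everything below (the random anchor point, the hyperparameters, the function name `unlearn_instance`) is an illustrative assumption, not the paper's implementation: the paper uses crafted adversarial examples and weight-importance masks, whereas this sketch uses a fixed reference point as a stand-in for the knowledge to preserve.

```python
import numpy as np

# Hedged toy sketch of instance-wise unlearning on a linear softmax
# classifier. All names and hyperparameters here are illustrative
# assumptions, not the paper's actual method.

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def unlearn_instance(W, x_forget, y_forget, x_anchor,
                     steps=200, lr=0.5, lam=1.0):
    """Push x_forget away from its original label y_forget, while
    penalizing drift of the output on x_anchor (a stand-in for the
    remaining-data knowledge the paper preserves via adversarial
    examples and weight importance)."""
    W = W.copy()
    p_anchor_ref = softmax(x_anchor @ W)       # output to preserve
    onehot = np.eye(W.shape[1])[y_forget]
    for _ in range(steps):
        # Gradient ASCENT on the forget instance's cross-entropy:
        p_f = softmax(x_forget @ W)
        grad_forget = np.outer(x_forget, p_f - onehot)
        # Gradient DESCENT on drift of the anchor's prediction:
        p_a = softmax(x_anchor @ W)
        grad_anchor = np.outer(x_anchor, p_a - p_anchor_ref)
        W += lr * grad_forget - lr * lam * grad_anchor
    return W

# "Pre-trained" 2-class linear model on 4-d inputs.
W = rng.normal(size=(4, 2))
x_forget = rng.normal(size=4)
y_forget = int(np.argmax(softmax(x_forget @ W)))  # original prediction
x_anchor = rng.normal(size=4)                     # knowledge to keep

W_new = unlearn_instance(W, x_forget, y_forget, x_anchor)
print("prediction flipped:",
      int(np.argmax(softmax(x_forget @ W_new))) != y_forget)
```

The anchor term plays the role the abstract assigns to adversarial examples: a reference output that the update is not allowed to drift far from, so that unlearning one instance does not erase behavior on nearby inputs.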
Keywords
ML: Classification and Regression, CV: Adversarial Attacks & Robustness, CV: Applications, CV: Learning & Optimization for CV, CV: Low Level & Physics-based Vision, CV: Other Foundations of Computer Vision, CV: Representation Learning for Vision, ML: Adversarial Learning & Robustness, ML: Other Foundations of Machine Learning, ML: Privacy