
Self-Supervised Pretraining for Differentially Private Learning

Arash Asadian, Evan Weidner, Lei Jiang

ICLR 2023 (2022)

Abstract
We demonstrate that self-supervised pretraining (SSP) is a scalable solution to deep learning with differential privacy (DP) in image classification, regardless of the size of the available public datasets. When public datasets are lacking, we show that the features generated by SSP on only a single image enable a private classifier to obtain much better utility than non-learned handcrafted features under the same privacy budget. When a moderate- or large-sized public dataset is available, the features produced by SSP greatly outperform features trained with labels on various complex private datasets under the same privacy budget. We also compare multiple DP-enabled training frameworks for training a private classifier on the features generated by SSP. Finally, we report a non-trivial utility of 25.3% on a private ImageNet-1K dataset when ε = 3. Our source code can be found at https://github.com/UnchartedRLab/SSP.
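The recipe the abstract describes, a small private classifier trained with DP-SGD on top of frozen SSP features, can be illustrated with a short sketch. Below is a minimal, hypothetical Python example using PyTorch and Opacus; the feature dimensions, placeholder tensors, and hyperparameters are illustrative assumptions and not the paper's actual configuration.

# Hypothetical sketch: differentially private linear classifier on frozen
# SSP features. The feature extraction step (e.g., a frozen SimCLR-style
# encoder) is assumed to have happened already; only the linear head below
# receives noisy, clipped gradients.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

# Placeholder features/labels standing in for outputs of a frozen SSP encoder.
features = torch.randn(10_000, 2048)
labels = torch.randint(0, 1000, (10_000,))
loader = DataLoader(TensorDataset(features, labels), batch_size=1024)

model = nn.Linear(2048, 1000)  # the private classifier
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()

# DP-SGD via Opacus, calibrated to a target budget such as epsilon = 3.
privacy_engine = PrivacyEngine()
model, optimizer, loader = privacy_engine.make_private_with_epsilon(
    module=model,
    optimizer=optimizer,
    data_loader=loader,
    target_epsilon=3.0,
    target_delta=1e-5,
    epochs=10,
    max_grad_norm=1.0,  # per-sample gradient clipping bound
)

for epoch in range(10):
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()

Freezing the encoder keeps the number of privately trained parameters small, which is what generally makes DP-SGD's clipping and noise tolerable at tight budgets like ε = 3.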
Keywords
differential privacy,contrastive learning,learned features,one image