Understanding Practical Membership Privacy of Deep Learning
CoRR (2024)
Abstract
We apply a state-of-the-art membership inference attack (MIA) to
systematically test the practical privacy vulnerability of fine-tuning large
image classification models. We focus on understanding the properties of data
sets and samples that make them vulnerable to membership inference. In terms of
data set properties, we find a strong power-law dependence between the number
of examples per class in the data and the MIA vulnerability, as measured by the
true positive rate of the attack at a low false positive rate. For an
individual sample, large gradients at the end of training are strongly
correlated with MIA vulnerability.
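The vulnerability metric used above, true positive rate (TPR) at a fixed low false positive rate (FPR), can be computed from attack scores by thresholding at the appropriate quantile of the non-member score distribution. A minimal sketch, assuming the attack outputs a real-valued membership score per sample (the function name, score distributions, and FPR target here are illustrative, not from the paper):

```python
import numpy as np

def tpr_at_fpr(member_scores, nonmember_scores, target_fpr=0.001):
    """TPR of a score-threshold attack at a fixed low FPR.

    The threshold is chosen so that at most `target_fpr` of
    non-members are (falsely) flagged as members.
    """
    # Threshold: the (1 - target_fpr) quantile of non-member scores.
    threshold = np.quantile(nonmember_scores, 1.0 - target_fpr)
    # TPR: fraction of true members whose score exceeds the threshold.
    return float(np.mean(np.asarray(member_scores) > threshold))

# Toy example with synthetic (hypothetical) attack scores:
rng = np.random.default_rng(0)
members = rng.normal(1.0, 1.0, 10_000)     # scores for training members
nonmembers = rng.normal(0.0, 1.0, 10_000)  # scores for non-members
print(tpr_at_fpr(members, nonmembers, target_fpr=0.001))
```

Reporting TPR at a low FPR (rather than average accuracy or AUC) emphasizes whether an attacker can confidently identify even a few members, which is the regime relevant to practical privacy harm.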