MemDefense: Defending against Membership Inference Attacks in IoT-based Federated Learning via Pruning Perturbations

IEEE Transactions on Big Data (2024)

Abstract
Relying on large-scale device deployments, the Internet of Things (IoT) provides massive data support for resource sharing and intelligent decision-making, but privacy risks increase accordingly. As a popular distributed learning framework, Federated Learning (FL) is widely used because participants collaboratively train models by sharing only model parameters rather than raw data. However, FL is not immune to emerging attacks such as the membership inference attack. For resource-constrained IoT devices, it is therefore challenging to design a defense against membership inference attacks that ensures high model utility, strong membership privacy, and acceptable time efficiency. In this paper, we propose MemDefense, a lightweight defense mechanism that prevents membership inference attacks on both local and global models in IoT-based FL while maintaining high model utility. MemDefense adds crafted pruning perturbations to local models at each round of FL through two key components, i.e., a parameter filter and a noise generator. Specifically, the parameter filter selects the model parameters that have little impact on test accuracy yet contribute most to membership inference attacks. The noise generator then finds the pruning noise that reduces the attack accuracy while preserving high model accuracy, protecting each participant's membership privacy. We comprehensively evaluate MemDefense with different deep learning models and multiple benchmark datasets. The experimental results show that the low-cost MemDefense drastically reduces the attack accuracy with only a limited drop in classification accuracy, meeting the requirements for model utility, membership privacy, and time efficiency.
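To make the described per-round pipeline concrete, below is a minimal illustrative Python sketch of the two components. It assumes a magnitude-based proxy for "parameters with little impact on test accuracy", a small grid of candidate pruning scales, a fixed accuracy-drop budget (max_acc_drop), and caller-supplied evaluation callbacks; the names parameter_filter and noise_generator and all of these choices are assumptions made for illustration, not the paper's actual implementation.

import numpy as np

def parameter_filter(weights, frac=0.1):
    # Illustrative proxy: treat the smallest-magnitude parameters as those
    # with little impact on test accuracy (the paper's selection criterion
    # may differ).
    flat = np.abs(weights.ravel())
    k = max(1, int(frac * flat.size))
    return np.argpartition(flat, k)[:k]  # indices of the k smallest entries

def noise_generator(weights, idx, eval_model_acc, eval_attack_acc,
                    scales=(0.0, 0.1, 0.5, 1.0), max_acc_drop=0.02):
    # Search candidate pruning-noise scales and keep the perturbation that
    # most reduces attack accuracy while the model-accuracy drop stays
    # within the budget. Both evaluation callbacks are placeholders.
    base_acc = eval_model_acc(weights)
    best_w, best_attack = weights, eval_attack_acc(weights)
    for s in scales:
        perturbed = weights.copy().ravel()
        perturbed[idx] *= (1.0 - s)  # shrink / prune the selected parameters
        perturbed = perturbed.reshape(weights.shape)
        if base_acc - eval_model_acc(perturbed) <= max_acc_drop:
            attack = eval_attack_acc(perturbed)
            if attack < best_attack:
                best_w, best_attack = perturbed, attack
    return best_w

# Toy usage with random weights and dummy (hypothetical) evaluation callbacks.
rng = np.random.default_rng(0)
w = rng.normal(size=(64, 32))
idx = parameter_filter(w, frac=0.1)
defended = noise_generator(
    w, idx,
    eval_model_acc=lambda m: 0.90 - 0.01 * np.abs(m).mean(),  # dummy utility
    eval_attack_acc=lambda m: 0.75 - 0.05 * (m == 0).mean(),  # dummy MIA accuracy
)

In an actual FL deployment, the callbacks would be replaced by evaluation on a local validation set and by a shadow-model membership inference attack, and the perturbed parameters would be uploaded in place of the original local update each round.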
Keywords
IoT, Federated Learning, membership inference attack, defense, pruning perturbations