A Game-theoretic Federated Learning Framework for Data Quality Improvement

IEEE Transactions on Knowledge and Data Engineering (2022)

Abstract
Federated learning is a promising distributed machine learning paradigm that has been playing a significant role in privacy-preserving machine learning tasks. However, alongside all its achievements, the framework has limitations. First, traditional frameworks assume that all clients want to improve model accuracy and that participation is therefore voluntary. In reality, clients usually want to be appropriately compensated for the data and resources they must commit to the training process before contributing. Second, today's frameworks allow clients to perturb their parameter updates locally, which introduces a great deal of noise into the trained model and can seriously impact model accuracy. To address these concerns, we have developed a private reward game that incentivizes clients to contribute high-quality data to the training process. The game converges to a Nash equilibrium under the guarantee of joint differential privacy, and each client maximizes their reward by following an equilibrium strategy. The noise injected into the model is reduced by introducing a centralized differential privacy model that aggregates the parameters and compensates clients via a data trading market. Experimental simulations demonstrate the rationale behind, and the effectiveness of, the proposed game approach. Additionally, we present comparisons between different training models to demonstrate the performance of the proposed approach in real-world scenarios.
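The abstract contrasts local perturbation (each client noising its own update) with centralized differential privacy (noise added once to the server-side aggregate). A minimal sketch of the centralized variant is below; the function name, clipping bound, and noise scale are illustrative assumptions, not the paper's actual mechanism or parameters.

```python
import numpy as np

def aggregate_with_central_dp(client_updates, clip_norm=1.0, noise_scale=0.1, seed=None):
    """Average clipped client updates, then add Gaussian noise once at the server.

    Illustrative sketch: centralized DP perturbs the aggregate rather than each
    client's update, so the total noise reaching the model is far smaller than
    when every client perturbs locally. `clip_norm` bounds any single client's
    influence (the sensitivity of the average); `noise_scale` stands in for a
    noise multiplier derived from the privacy budget.
    """
    rng = np.random.default_rng(seed)
    clipped = []
    for u in client_updates:
        u = np.asarray(u, dtype=float)
        norm = np.linalg.norm(u)
        if norm > clip_norm:
            # Clip to the sensitivity bound before averaging.
            u = u * (clip_norm / norm)
        clipped.append(u)
    avg = np.mean(clipped, axis=0)
    # One shot of Gaussian noise at the server, calibrated to the clipping bound.
    noise = rng.normal(0.0, noise_scale * clip_norm / len(clipped), size=avg.shape)
    return avg + noise
```

With n clients, the server-side noise standard deviation scales as 1/n, whereas purely local perturbation leaves per-client noise in every update before averaging; this is the accuracy gain the abstract attributes to the centralized model.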
Keywords
Differential privacy, joint differential privacy, game theory, federated learning