Accuracy Degrading: Toward Participation-Fair Federated Learning

IEEE Internet of Things Journal (2023)

Abstract
Centralized learning now faces data-mapping and security constraints that make it difficult to carry out. Federated learning, with its distributed learning architecture, has changed this situation. By restricting training to participants' local devices, federated learning meets the model-training needs of multiple data sources while better protecting data privacy. In real-world deployments, however, federated learning must achieve fairness in addition to privacy protection. In practice, participants with specific motives may join the training process only briefly to obtain the current global model while contributing little to the federation as a whole, which is unfair to the participants who joined earlier. We propose the FedACC framework, which uses a server-initiated global-model accuracy-control method, to address this issue. Besides measuring the accumulated contributions of newly joined participants and providing each participant with a model whose accuracy matches its contribution, FedACC also guarantees the validity of participant gradients computed on the accuracy-decayed model. Under FedACC, users do not have access to the full version of the current global model early in their participation; instead, they must accumulate a certain amount of contribution before seeing the full-accuracy model. We introduce an additional differential-privacy mechanism to further protect clients' privacy. Experiments demonstrate that FedACC obtains about a 10%–20% accuracy gain over state-of-the-art methods while balancing the fairness, performance, and security of federated learning.
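The abstract does not spell out how the server degrades model accuracy or applies the differential-privacy step, so the following is only a minimal sketch of one plausible realization in Python. All names here (degrade_model, dp_sanitize, contribution, threshold, max_noise_std, clip_norm, sigma) are hypothetical, and Gaussian weight perturbation merely stands in for whatever degradation operator FedACC actually uses.

```python
import numpy as np

def degrade_model(global_weights, contribution, threshold,
                  max_noise_std=0.1, rng=None):
    """Return an accuracy-degraded copy of the global model.

    Noise magnitude shrinks as the client's accumulated contribution
    approaches the required threshold; at or above the threshold the
    client receives the full-accuracy model (zero added noise).
    """
    rng = rng or np.random.default_rng()
    # Fraction of the required contribution still outstanding, in [0, 1].
    deficit = max(0.0, 1.0 - contribution / threshold)
    noise_std = max_noise_std * deficit
    return [w + rng.normal(0.0, noise_std, size=w.shape)
            for w in global_weights]

def dp_sanitize(gradient, clip_norm=1.0, sigma=0.5, rng=None):
    """Standard clip-and-noise step: bound the gradient norm, then add
    Gaussian noise, as in typical differentially private aggregation."""
    rng = rng or np.random.default_rng()
    norm = max(np.linalg.norm(gradient), 1e-12)  # guard against zero norm
    clipped = gradient * min(1.0, clip_norm / norm)
    return clipped + rng.normal(0.0, sigma * clip_norm, size=gradient.shape)

# A newly joined client with little accumulated contribution receives a
# noisier (lower-accuracy) copy of the global model.
weights = [np.ones((4, 4)), np.zeros(4)]
degraded = degrade_model(weights, contribution=0.2, threshold=1.0)
```

In this reading, the server keeps a running contribution score per participant and scales the perturbation it injects into the distributed model accordingly, so clients only converge to the full-accuracy model after sustained participation.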
Keywords
Data privacy, deep learning, fairness, federated learning