Two-Phase Sparsification with Secure Aggregation for Privacy-Aware Federated Learning

IEEE Internet of Things Journal (2024)

Abstract
As a typical privacy-aware machine learning paradigm, federated learning (FL) lets edge clients train individually on their private data while a central server aggregates a global model, so that privacy leakage can be prevented. The massive communication overhead caused by exchanging updated weights between clients and the server is one of the main obstacles to this strategy. Prior work compresses the weights through quantization, gradient sparsification, and knowledge distillation, but most of these approaches cannot be readily applied to secure aggregation in privacy-aware FL. Other research has made progress by directly applying benchmark secure aggregation protocols on top of non-privacy-aware FL, and graph-based and gradient-based sparsification has been widely adopted in previous studies; however, the resulting reductions in communication cost remain unsatisfactory. In this paper, we present a novel communication-efficient privacy-aware FL algorithm from a distinct perspective. We design a new Two-Phase Sparsification with Secure Aggregation (TPSSA) algorithm. In the subnetwork phase, we identify sparse subnetworks by freezing the initial random weights of sufficiently overparametrized networks: all edge clients collaboratively train to discover their subnetwork inside a dense, randomly weighted neural network, and the server then aggregates these to compute the global model. In the gradient phase, for each pair of edge clients, we introduce pairwise multiplicative random masks that identify the sparsification pattern, so that the masks of surviving clients are correctly cancelled out during aggregation at the server. Theoretical analysis provides convergence, privacy, and performance guarantees. We show improvements in accuracy, communication, and computation over traditional and sparsified secure aggregation benchmarks on two real-world datasets.
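The abstract does not spell out the mask construction, and TPSSA uses multiplicative masks rather than the additive ones shown here. The Python sketch below only illustrates the general pairwise-masking idea behind secure aggregation that the abstract references: each pair of clients derives a shared random mask from a common seed, one adds it and the other subtracts it, so the masks cancel exactly when the server sums the masked updates. The function names (pairwise_mask, mask_update) and toy values are illustrative assumptions, not the authors' implementation.

import numpy as np

def pairwise_mask(seed: int, dim: int) -> np.ndarray:
    # Both clients of a pair derive the same mask from a shared seed.
    rng = np.random.default_rng(seed)
    return rng.standard_normal(dim)

def mask_update(client_id: int, update: np.ndarray,
                peer_seeds: dict) -> np.ndarray:
    # Add +mask for peers with a larger id, -mask for peers with a smaller id,
    # so every pairwise mask appears once with each sign across all clients.
    masked = update.copy()
    for peer_id, seed in peer_seeds.items():
        mask = pairwise_mask(seed, update.shape[0])
        masked += mask if client_id < peer_id else -mask
    return masked

# Toy example: three clients, 4-dimensional update vectors, pairwise shared seeds.
dim = 4
updates = {0: np.ones(dim), 1: 2 * np.ones(dim), 2: 3 * np.ones(dim)}
seeds = {(0, 1): 11, (0, 2): 22, (1, 2): 33}

masked_updates = []
for cid, upd in updates.items():
    peer_seeds = {p: s for (a, b), s in seeds.items()
                  for p in (a, b) if cid in (a, b) and p != cid}
    masked_updates.append(mask_update(cid, upd, peer_seeds))

# The server only sees masked updates, yet their sum equals the true sum
# because every pairwise mask cancels.
aggregate = np.sum(masked_updates, axis=0)
assert np.allclose(aggregate, sum(updates.values()))
print(aggregate)  # [6. 6. 6. 6.]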
Keywords
Servers, Computational modeling, Security, Training, Protocols, Machine learning algorithms, Aggregates, Communication-efficient, gradient sparsification, privacy-aware federated learning (FL), probabilistic subnetwork, secure aggregation (SA)