Gradient Compression with a Variational Coding Scheme for Federated Learning

2021 International Conference on Visual Communications and Image Processing (VCIP)

Abstract
Federated Learning (FL), a distributed machine learning architecture, emerged to enable intelligent analysis of the massive data generated at network edge devices. Under this paradigm, a model is learned jointly and in parallel at the edge devices, without sending voluminous raw data to a central FL server. This not only keeps training time feasible by reducing network latency but also preserves data privacy. Nonetheless, when thousands of edge devices are attached to an FL framework, limited network resources inevitably impose intolerable training latency. In this work, we propose a novel model-update compression method to address this issue. The proposed method learns multiple Gaussian distributions that best describe the high-dimensional gradient parameters. At the FL server, the high-dimensional gradients are repopulated from these Gaussian distributions using the likelihood-function parameters communicated to the server. Since the distribution parameters amount to only a small fraction of the values in the high-dimensional gradients themselves, the proposed method saves significant uplink bandwidth while preserving model accuracy. Experimental results validate this claim.
Keywords
federated learning, update compression, Gaussian distribution, Gaussian-Mixture-Model, federated averaging
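A minimal sketch of the idea summarized in the abstract, not the authors' implementation: a client fits a small Gaussian mixture to its flattened gradient values and uploads only the mixture parameters, and the server repopulates a gradient of the same dimensionality by sampling from the received mixture. The component count K, the helper names compress/decompress, and the use of scikit-learn's GaussianMixture are assumptions made for illustration only.

import numpy as np
from sklearn.mixture import GaussianMixture

K = 4  # assumed number of Gaussian components; not specified by the abstract

def compress(gradient, n_components=K):
    # Client side: fit a small Gaussian mixture to the flattened gradient values
    # and keep only the mixture parameters (a few scalars per component).
    values = gradient.reshape(-1, 1)
    gmm = GaussianMixture(n_components=n_components, covariance_type="diag").fit(values)
    return gmm.weights_, gmm.means_, gmm.covariances_, gradient.shape

def decompress(weights, means, covariances, shape, seed=0):
    # Server side: repopulate a gradient of the original dimensionality by
    # drawing samples from the communicated mixture.
    rng = np.random.default_rng(seed)
    n = int(np.prod(shape))
    comps = rng.choice(len(weights), size=n, p=weights / weights.sum())
    samples = rng.normal(means[comps, 0], np.sqrt(covariances[comps, 0]))
    return samples.reshape(shape)

if __name__ == "__main__":
    grad = np.random.default_rng(1).normal(size=(256, 128))  # stand-in for a layer gradient
    params = compress(grad)
    rebuilt = decompress(*params)
    # The uplink payload is K weights, means, and variances plus the shape,
    # instead of the 256 * 128 gradient values themselves.
    print(grad.shape, rebuilt.shape)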