Neural Network Coding of Difference Updates for Efficient Distributed Learning Communication

IEEE Transactions on Multimedia (2024)

Abstract
Distributed learning requires frequent communication of neural network update data. For this, we present a set of new compression tools, jointly called differential neural network coding (dNNC). dNNC is specifically tailored to efficiently code incremental neural network updates and includes tools for federated BatchNorm folding (FedBNF), structured and unstructured sparsification, tensor row skipping, quantization optimization, and temporal adaptation for improved context-adaptive binary arithmetic coding (CABAC). Furthermore, dNNC provides a new parameter update tree (PUT) mechanism, which allows identifying updates for different neural network parameter sub-sets and their relationships in synchronous and asynchronous neural network communication scenarios. Most of these tools have been included in the standardization process of the NNC standard (ISO/IEC 15938-17) edition 2. We benchmark dNNC in multiple federated and split learning scenarios using a variety of NN models and data, including vision transformers and large-scale ImageNet experiments: it achieves compression efficiencies of 60% in comparison to NNC standard edition 1 for transparent coding cases, i.e., without degrading inference or training performance. This corresponds to a reduction in the size of the NN updates to less than 1% of their original size. Moreover, dNNC reduces the overall energy consumption required for communication in federated learning systems by up to 94%.
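
To make the basic idea behind difference-update coding concrete, the sketch below shows a minimal, hypothetical encode/decode pair in NumPy: it computes the difference between a client's updated weights and the shared base model, applies unstructured (magnitude-based) sparsification, and uniformly quantizes the surviving entries. This is not the dNNC or NNC (ISO/IEC 15938-17) reference implementation; function names, thresholds, and step sizes are illustrative assumptions, and the entropy-coding stage (CABAC) that the paper's tools target is omitted.

```python
# Minimal sketch of the difference-update idea (assumed, not the dNNC/NNC codec):
# send a sparsified, quantized delta instead of the full parameter tensor.
import numpy as np

def encode_update(w_new, w_base, sparsity_threshold=1e-3, step_size=1e-2):
    """Compute the difference update, zero out small entries (unstructured
    sparsification), and quantize the survivors to integer levels."""
    delta = w_new - w_base
    delta[np.abs(delta) < sparsity_threshold] = 0.0          # sparsify
    q_levels = np.round(delta / step_size).astype(np.int32)  # quantize
    return q_levels, step_size                                # CABAC stage omitted

def decode_update(w_base, q_levels, step_size):
    """Reconstruct the updated parameters from the base model and the
    dequantized difference update."""
    return w_base + q_levels.astype(np.float32) * step_size

# Example: one layer's weights before and after a local training round.
rng = np.random.default_rng(0)
w_base = rng.normal(size=(64, 128)).astype(np.float32)
w_new = w_base + 0.01 * rng.normal(size=(64, 128)).astype(np.float32)

q_levels, step = encode_update(w_new, w_base)
w_rec = decode_update(w_base, q_levels, step)
print("nonzero fraction:", np.count_nonzero(q_levels) / q_levels.size)
print("max reconstruction error:", np.abs(w_rec - w_new).max())
```

In the paper's setting, the quantized levels would additionally be entropy coded (e.g., with temporally adapted CABAC contexts), and mechanisms such as the parameter update tree would track which base model a given delta refers to.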
Keywords
Neural Network Coding, Federated Learning, Transfer Learning, Split Learning, Efficient NN Communication, ISO/IEC MPEG Standards, Federated BatchNorm Folding