Adaptive Differential Filters for Fast and Communication-Efficient Federated Learning

IEEE Conference on Computer Vision and Pattern Recognition (2022)

Abstract
Federated learning (FL) scenarios inherently generate a large communication overhead by frequently transmitting neural network updates between clients and the server. To minimize the communication cost, introducing sparsity in conjunction with differential updates is a commonly used technique. However, sparse model updates can slow down convergence or unintentionally skip certain update aspects, e.g., learned features, if error accumulation is not properly addressed. In this work, we propose a new scaling method operating at the granularity of convolutional filters which (1) compensates for highly sparse updates in FL processes, (2) adapts the local models to new data domains by enhancing some features in the filter space while diminishing others, and (3) encourages extra sparsity in updates and thus achieves higher compression ratios, i.e., savings in the overall data transfer. Compared to unscaled updates and previous work, experimental results on different computer vision tasks (Pascal VOC, CIFAR10, Chest X-Ray) and neural networks (ResNets, MobileNets, VGGs) in uni-directional, bi-directional, and partial-update FL settings show that the proposed method improves the performance of the central server model while converging faster and reducing the total amount of transmitted data by up to 377×.
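The abstract describes the mechanism only at a high level: sparse differential updates, client-side error accumulation, and a per-filter scaling step that compensates for the sparsification. The paper's exact scaling rule is not reproduced here, so the NumPy sketch below is purely illustrative; the top-k sparsifier, the norm-matching scale factor, the error-feedback residual, and all names and tensor shapes are assumptions for illustration, not the authors' published algorithm.

```python
import numpy as np

def sparsify_topk(delta, keep_ratio=0.01):
    """Keep only the largest-magnitude entries of a differential update
    (top-k sparsification); all other entries are zeroed out."""
    flat = np.abs(delta).ravel()
    k = max(1, int(keep_ratio * flat.size))
    thresh = np.partition(flat, flat.size - k)[flat.size - k]
    return delta * (np.abs(delta) >= thresh)

def scale_filters(sparse_delta, dense_delta, eps=1e-12):
    """Hypothetical per-filter compensation: rescale each convolutional
    filter (axis 0 of an [out_ch, in_ch, kH, kW] tensor) of the sparse
    update so its L2 norm matches the corresponding filter norm of the
    dense update, restoring the energy removed by sparsification."""
    scaled = np.empty_like(sparse_delta)
    for f in range(sparse_delta.shape[0]):
        scale = np.linalg.norm(dense_delta[f]) / (np.linalg.norm(sparse_delta[f]) + eps)
        scaled[f] = sparse_delta[f] * scale
    return scaled

# Client-side error feedback: carry over what sparsification dropped.
residual = np.zeros((64, 32, 3, 3))           # hypothetical conv-layer shape
dense_update = np.random.randn(64, 32, 3, 3)  # stand-in for a local training delta
accumulated = dense_update + residual
sparse_update = sparsify_topk(accumulated, keep_ratio=0.01)
residual = accumulated - sparse_update        # error kept for the next round
transmitted = scale_filters(sparse_update, accumulated)
```

Under these assumptions, only `transmitted` (sparse, hence cheap to encode) leaves the client, while the residual preserves the dropped update mass so that repeated rounds do not silently skip learned features.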
Keywords
adaptive differential filters,federated learning scenarios,communication overhead,neural network updates,communication cost,differential updates,sparse model updates,convergence speed,error accumulation,scaling method,convolutional filters,highly sparse updates,FL processes,local models,data domains,filter space,extra sparsity,higher compression ratios,data transfer,unscaled updates,different computer vision tasks,neural networks,bidirectional update FL settings,partial update FL settings,central server model,transmitted data