Initialisation and Topology Effects in Decentralised Federated Learning
arXiv (2024)
Abstract
Fully decentralised federated learning enables the collaborative training of
individual machine learning models on distributed devices across a network
while keeping the training data localised. This approach enhances data privacy
and eliminates both the single point of failure and the need for central
coordination. Our research highlights that the effectiveness of decentralised
federated learning is significantly influenced by the network topology of the
connected devices. A simplified numerical model for studying the early
behaviour of these systems leads us to an improved artificial neural network
initialisation strategy that leverages the distribution of eigenvector
centralities of the nodes in the underlying network, radically improving
training efficiency. Additionally, our study explores the scaling behaviour
and the choice of environmental parameters under the proposed initialisation
strategy. This work paves the way for more efficient and scalable artificial
neural network training in distributed, uncoordinated environments, offering a
deeper understanding of the intertwined roles of network structure and
learning dynamics.
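The abstract does not specify how the eigenvector centralities enter the initialisation, so the following is only a minimal sketch of one plausible reading: compute each node's eigenvector centrality by power iteration on the communication graph's adjacency matrix, then scale that node's initial weight standard deviation by its (normalised) centrality. The scaling rule, the base standard deviation, and the weight shapes are all hypothetical illustrations, not the paper's method.

```python
import numpy as np

def eigenvector_centrality(adj, iters=200, tol=1e-10):
    """Leading eigenvector of the adjacency matrix via power iteration."""
    n = adj.shape[0]
    v = np.ones(n) / np.sqrt(n)
    for _ in range(iters):
        w = adj @ v
        norm = np.linalg.norm(w)
        if norm == 0:          # disconnected degenerate case
            return v
        w /= norm
        if np.linalg.norm(w - v) < tol:
            return w
        v = w
    return v

# Example: a 4-node ring topology (every node has degree 2).
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
c = eigenvector_centrality(adj)

# Hypothetical rule: per-node init std proportional to centrality,
# normalised so the mean std equals a chosen base value.
base_std = 0.05
stds = base_std * c / c.mean()

rng = np.random.default_rng(0)
# One (hypothetical) 8x8 weight matrix per node, drawn with that node's std.
weights = {i: rng.normal(0.0, stds[i], size=(8, 8)) for i in range(len(c))}
```

On a regular graph such as this ring, all centralities coincide, so every node falls back to the same base standard deviation; heterogeneous topologies would produce node-dependent initial scales.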