On-Demand Model and Client Deployment in Federated Learning with Deep Reinforcement Learning
CoRR (2024)
Abstract
In Federated Learning (FL), restricted user participation limits access to data
from diverse locations and user types, posing a significant challenge.
Expanding client access and diversifying data improve models by incorporating
varied perspectives, thereby increasing adaptability. However, dynamic and
mobile environments introduce further difficulties: certain devices may become
inaccessible as FL clients, affecting data availability and client-selection
methods. To address this, we propose an On-Demand solution that deploys new
clients on-the-fly using Docker containers. Our solution employs Deep
Reinforcement Learning (DRL) to manage client availability and selection while
accounting for data shifts and the complexities of container deployment. It
provides an autonomous, end-to-end approach to model deployment and client
selection. The DRL strategy is built on a Markov Decision Process (MDP)
framework with a Master Learner and a Joiner Learner, and the designed cost
functions capture the complexity of dynamic client deployment and selection.
Simulated tests show that our architecture adapts readily to environmental
changes and responds to On-Demand requests, improving client availability,
capability, accuracy, and learning efficiency while surpassing heuristic and
tabular reinforcement learning solutions.
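To make the MDP framing concrete, the following is a minimal, purely
illustrative sketch of reinforcement learning over client deployment and
selection. It is NOT the paper's implementation: the paper uses DRL with a
Master Learner and a Joiner Learner, whereas this sketch uses a simple tabular
epsilon-greedy update, and the state encoding, transition model, and cost
weights below are all assumptions made for illustration.

```python
import random

# Hypothetical MDP sketch: state = tuple of client-availability flags,
# actions = select an existing client or deploy a new containerized one.
ACTIONS = ["select", "deploy"]

def cost(state, action):
    # Toy cost (assumed weights): deploying a container carries a fixed
    # overhead; selecting when few clients are available is penalized.
    available = sum(state)
    if action == "deploy":
        return 1.0
    return 2.0 if available == 0 else 1.0 / available

def step(state, action, rng):
    # Toy transition: deployment adds an available client; in a mobile
    # environment an existing client may drop out with some probability.
    state = list(state)
    if action == "deploy":
        state.append(1)
    elif state and rng.random() < 0.3:
        state[rng.randrange(len(state))] = 0
    return tuple(state)

def train(q, episodes=200, horizon=10, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular epsilon-greedy learning of cost-to-go values (minimization)."""
    rng = random.Random(seed)
    for _ in range(episodes):
        state = (1, 1, 0)
        for _ in range(horizon):
            if rng.random() < eps:
                action = rng.choice(ACTIONS)
            else:
                action = min(ACTIONS, key=lambda a: q.get((state, a), 0.0))
            c = cost(state, action)
            nxt = step(state, action, rng)
            best_next = min(q.get((nxt, a), 0.0) for a in ACTIONS)
            q[(state, action)] = (1 - alpha) * q.get((state, action), 0.0) \
                + alpha * (c + gamma * best_next)
            state = nxt
    return q
```

In the paper's setting the tabular dictionary would be replaced by neural
function approximation, which is precisely why the authors report that the DRL
approach surpasses tabular reinforcement learning baselines in dynamic
environments with large state spaces.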