CAFE: Carbon-Aware Federated Learning in Geographically Distributed Data Centers
CoRR (2023)
Abstract
Training large-scale artificial intelligence (AI) models demands significant
computational power and energy, leading to an increased carbon footprint with
potential environmental repercussions. This paper delves into the challenges of
training AI models across geographically distributed (geo-distributed) data
centers, emphasizing the balance between learning performance and carbon
footprint. We consider Federated Learning (FL) as a solution, which prioritizes
model parameter exchange over raw data, ensuring data privacy and compliance
with local regulations. Given the variability in carbon intensity across
regions, we propose a new framework called CAFE (short for Carbon-Aware
Federated Learning) to optimize training within a fixed carbon footprint
budget. Our approach incorporates coreset selection to assess learning
performance, employs the Lyapunov drift-plus-penalty framework to cope with the
unpredictability of future carbon intensity, and devises an efficient algorithm
to handle the combinatorial complexity of data center selection. Through
extensive simulations using real-world carbon intensity data, we demonstrate
the efficacy of our algorithm, highlighting its superiority over existing
methods in optimizing learning performance while minimizing environmental
impact.
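The abstract's drift-plus-penalty idea can be illustrated with a minimal sketch: a virtual queue tracks cumulative overuse of the carbon budget, and each round's data center selection trades off a performance gain against queue-weighted carbon cost. All function names, the per-data-center utilities, and the assumption that the objective decomposes additively across data centers are illustrative assumptions, not the paper's actual CAFE algorithm.

```python
# Hypothetical sketch of a Lyapunov drift-plus-penalty selection rule in the
# spirit of CAFE. Utilities, carbon costs, and the additive decomposition are
# assumptions for illustration only.

def select_data_centers(utilities, carbon, Q, V):
    """Pick data centers for one federated learning round.

    utilities[i]: assumed learning-performance gain of training at DC i
    carbon[i]:    carbon cost of training at DC i this round
    Q:            virtual queue tracking carbon-budget overuse so far
    V:            trade-off weight between performance and carbon
    """
    # If the objective is additive across data centers, drift-plus-penalty
    # decomposes per DC: include DC i iff V * utility - Q * carbon > 0.
    return [i for i in range(len(utilities))
            if V * utilities[i] - Q * carbon[i] > 0]

def update_queue(Q, carbon_used, per_round_budget):
    # The virtual queue grows when a round exceeds its carbon allowance
    # and drains (down to zero) when the round stays under budget.
    return max(Q + carbon_used - per_round_budget, 0.0)

# One illustrative round with three data centers.
chosen = select_data_centers([3.0, 1.0, 2.0], [1.0, 2.0, 0.5], Q=1.0, V=1.0)
carbon_used = sum([1.0, 2.0, 0.5][i] for i in chosen)
Q_next = update_queue(1.0, carbon_used, per_round_budget=1.0)
```

As `Q` grows, carbon-heavy data centers are pruned more aggressively, which is the mechanism that keeps long-run usage near the budget without needing to predict future carbon intensity.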
Keywords
federated learning, geographically distributed data