TablePuppet: A Generic Framework for Relational Federated Learning
arXiv (2024)
Abstract
Current federated learning (FL) approaches view decentralized training data
as a single table, divided among participants either horizontally (by rows) or
vertically (by columns). However, these approaches fall short when the
training data spans relational tables distributed across multiple databases.
In this scenario, obtaining the training data requires intricate SQL
operations, such as joins and unions, that are either costly to execute or
restricted by privacy concerns. This raises the question: can we run FL
directly on distributed relational tables?
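
For intuition, here is a minimal sketch of what the centralized alternative would have to do. All table and column names (users, orders, uid, amount) are hypothetical; the real schemas are application-specific and live in separate databases. The snippet materializes the training table via a union of horizontal partitions followed by a join of vertical tables, and incidentally shows the duplicate tuples a join introduces:

```python
# Hypothetical schemas for illustration only.
import pandas as pd

users_a = pd.DataFrame({"uid": [1, 2], "age": [30, 41]})   # users at site A
users_b = pd.DataFrame({"uid": [3], "age": [25]})          # users at site B
orders  = pd.DataFrame({"uid": [1, 1, 3], "amount": [5.0, 7.0, 2.0]})

# UNION of horizontal partitions, then JOIN of vertical tables.
users = pd.concat([users_a, users_b], ignore_index=True)
train = users.merge(orders, on="uid")

# uid 1 now appears twice: these are the duplicate tuples that joins
# introduce, which any learning framework over joins must account for.
print(train)
```

Shipping the raw tables to one site like this is exactly what cost and privacy constraints rule out, which motivates running FL on the tables in place.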
In this paper, we formalize this problem as relational federated learning
(RFL). We propose TablePuppet, a generic framework for RFL that decomposes the
learning process into two steps: (1) learning over join (LoJ) followed by (2)
learning over union (LoU). In a nutshell, LoJ pushes learning down onto the
vertical tables being joined, and LoU further pushes learning down onto the
horizontal partitions of each vertical table. TablePuppet incorporates
computation/communication optimizations to deal with the duplicate tuples
introduced by joins, as well as differential privacy (DP) to protect against
both feature and label leakage. We demonstrate the efficiency of TablePuppet
in combination with two widely used ML training algorithms, stochastic gradient
descent (SGD) and alternating direction method of multipliers (ADMM), and
compare their computation/communication complexity. We evaluate the SGD/ADMM
algorithms developed atop TablePuppet by training diverse ML models. Our
experimental results show that TablePuppet achieves model accuracy comparable
to the centralized baselines running directly atop the SQL results. Moreover,
ADMM takes less communication time than SGD to converge to similar model
accuracy.
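
To make the two-step decomposition concrete, below is a minimal, self-contained sketch in plain NumPy using a linear model with squared loss. The class and parameter names (VerticalParty, shard counts, learning rate) are our own illustrative assumptions, not TablePuppet's actual API, and the paper's duplicate-tuple optimizations and DP noise are omitted. LoJ appears as a coordinator summing partial scores from the vertically joined tables; LoU appears as each vertical table computing its partial scores shard by shard over its horizontal partitions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d1, d2 = 256, 4, 3                       # rows; feature widths of two vertical tables
X1, X2 = rng.normal(size=(n, d1)), rng.normal(size=(n, d2))
w_true = rng.normal(size=d1 + d2)
y = np.concatenate([X1, X2], axis=1) @ w_true + 0.01 * rng.normal(size=n)

class VerticalParty:
    """One vertical table, itself split into horizontal shards (the LoU level)."""
    def __init__(self, X, n_shards):
        self.shards = np.array_split(X, n_shards)    # horizontal partitions
        self.w = np.zeros(X.shape[1])                # this party's weight slice

    def partial_scores(self):
        # LoU: each shard scores its own rows locally; the party stacks
        # the results back in row order before replying to the coordinator.
        return np.concatenate([s @ self.w for s in self.shards])

    def apply_gradient(self, g, lr):
        # g = d(loss)/d(score) per row. Split it row-aligned with the shards
        # so each shard computes its local weight-gradient contribution.
        offsets = np.cumsum([len(s) for s in self.shards])[:-1]
        grad = sum(s.T @ gs for s, gs in zip(self.shards, np.split(g, offsets)))
        self.w -= lr * grad / len(g)

parties = [VerticalParty(X1, 2), VerticalParty(X2, 2)]
for _ in range(500):
    # LoJ: the coordinator sees only the sum of partial scores,
    # never the raw features held by each party.
    scores = sum(p.partial_scores() for p in parties)
    g = scores - y                                   # squared-loss gradient w.r.t. score
    for p in parties:
        p.apply_gradient(g, lr=0.1)

mse = np.mean((sum(p.partial_scores() for p in parties) - y) ** 2)
print(f"final MSE: {mse:.6f}")
```

The per-iteration exchange of scores and gradients in this sketch is the communication that the paper's SGD and ADMM variants structure differently; we make no attempt to reproduce those details here.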