An Adaptive Gradient Privacy-Preserving Algorithm for Federated XGBoost

Hongyi Cai, Jianping Cai, Lan Sun

2023 2nd Asia Conference on Algorithms, Computing and Machine Learning (CACML 2023)

Abstract
Federated learning (FL) is a machine learning framework in which a model is trained jointly by multiple parties. We investigate privacy preservation for XGBoost, a gradient boosting decision tree (GBDT) model, in the FL setting. While recent work relies on cryptographic schemes to protect model gradients, these methods are computationally expensive. In this paper, we propose an adaptive gradient privacy-preserving algorithm based on differential privacy (DP), which is more computationally efficient. Our algorithm perturbs individual data by computing an adaptive per-sample gradient mean and adding calibrated noise during XGBoost training, so that the perturbed gradients remain usable for model updates. Training accuracy and communication efficiency are preserved while the definition of DP is satisfied. We show that the proposed algorithm outperforms other DP methods in prediction accuracy and approaches the lossless federated XGBoost model while being more efficient.
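To make the idea concrete, the sketch below illustrates one round of gradient perturbation of the kind the abstract describes: per-sample gradients are clipped to an adaptively chosen bound, and noise calibrated to that bound is added before the gradients leave a party. The function name, the choice of the Laplace mechanism, and the specific adaptive rule (using the batch's mean absolute gradient as the clipping bound) are illustrative assumptions; the paper's exact mechanism is not specified in the abstract.

```python
import numpy as np

def perturb_gradients(grads, epsilon, rng=None):
    """Perturb per-sample XGBoost gradients under epsilon-DP.

    Hypothetical sketch: the clipping bound is set adaptively from
    the mean absolute gradient of the current batch; the paper's
    exact adaptive rule is not given in the abstract.
    """
    rng = np.random.default_rng() if rng is None else rng
    grads = np.asarray(grads, dtype=float)

    # Adaptive bound: mean absolute gradient of this round's samples
    # (assumption standing in for the paper's adaptive gradient mean).
    clip = np.mean(np.abs(grads))

    # Clip each sample's gradient to [-clip, clip], so one record's
    # contribution has L1 sensitivity at most 2 * clip.
    clipped = np.clip(grads, -clip, clip)

    # Laplace mechanism: noise scale = sensitivity / epsilon.
    noise = rng.laplace(loc=0.0, scale=2.0 * clip / epsilon,
                        size=grads.shape)
    return clipped + noise
```

In a federated XGBoost round, a party would apply such a step to the first-order gradients (and analogously to the second-order Hessians) it computes locally, before sharing them for split finding.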
Keywords
federated learning, gradient boosting decision tree, differential privacy, security