On the Conflict of Robustness and Learning in Collaborative Machine Learning
CoRR (2024)
Abstract
Collaborative Machine Learning (CML) allows participants to jointly train a
machine learning model while keeping their training data private. In scenarios
where privacy is a strong requirement, such as health-related applications,
safety is also a primary concern. This means that privacy-preserving CML
processes must produce models that output correct and reliable decisions
even in the presence of potentially untrusted participants. In response
to this issue, researchers propose to use robust aggregators that rely
on metrics which help filter out malicious contributions that could compromise
the training process. In this work, we formalize the landscape of robust
aggregators in the literature. Our formalization allows us to show that
existing robust aggregators cannot fulfill their goal: either they rely on
distance-based metrics that cannot accurately identify targeted malicious
updates, or they propose methods whose success directly conflicts with the
ability of CML participants to learn from one another and that therefore cannot
eliminate the risk of manipulation without preventing learning.