FedMM: Federated Multi-Modal Learning with Modality Heterogeneity in Computational Pathology
ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)(2024)
Abstract
The fusion of complementary multimodal information is crucial in
computational pathology for accurate diagnostics. However, existing multimodal
learning approaches necessitate access to users' raw data, posing substantial
privacy risks. While Federated Learning (FL) serves as a privacy-preserving
alternative, it falls short in addressing the challenges posed by heterogeneous
(yet possibly overlapping) modality data across various hospitals. To bridge
this gap, we propose a Federated Multi-Modal (FedMM) learning framework that
federatedly trains multiple single-modal feature extractors to enhance
subsequent classification performance, rather than training a unified
multimodal fusion model as in existing FL approaches. Any participating
hospital, even one with small-scale datasets or limited devices, can leverage
these federatedly trained extractors to perform local downstream tasks (e.g.,
classification) while ensuring data privacy. Through comprehensive evaluations
on two publicly available datasets, we demonstrate that FedMM notably
outperforms two baselines in accuracy and AUC metrics.
Keywords
Multimodal Fusion, Federated Learning, Histology Image, Genomic Signal, Modality Heterogeneity
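The abstract's core idea, federatedly training one feature extractor per modality so that hospitals with heterogeneous (yet possibly overlapping) modality coverage can all participate, can be sketched as per-modality federated averaging. The snippet below is a minimal illustration, not the paper's implementation: the function name, the weight layout, and the use of plain FedAvg over extractor parameters are all assumptions made for exposition.

```python
# Illustrative sketch (not the paper's code): per-modality federated
# averaging. Each hospital contributes updates only for the modalities
# it actually holds, so each modality's extractor is averaged over a
# possibly different subset of clients.
import numpy as np

def fedavg_per_modality(client_updates):
    """client_updates: list of dicts mapping modality name -> weight array.

    A hospital lacking a modality simply omits that key. Returns one
    averaged weight array per modality (hypothetical parameter layout).
    """
    aggregated = {}
    for update in client_updates:
        for modality, weights in update.items():
            aggregated.setdefault(modality, []).append(weights)
    return {m: np.mean(ws, axis=0) for m, ws in aggregated.items()}

# Three hypothetical hospitals with heterogeneous modality coverage:
clients = [
    {"histology": np.array([1.0, 2.0]), "genomic": np.array([0.0, 4.0])},
    {"histology": np.array([3.0, 4.0])},   # histology only
    {"genomic": np.array([2.0, 0.0])},     # genomics only
]
avg = fedavg_per_modality(clients)
# Each modality is averaged over the two clients that hold it.
```

Each hospital could then attach a small local classifier on top of the returned extractors, which matches the abstract's point that even resource-limited sites only need to run local downstream tasks.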