DiCoM – Diverse Concept Modeling towards Enhancing Generalizability in Chest X-Ray Studies
CoRR (2024)
Abstract
Chest X-Ray (CXR) is a widely used clinical imaging modality and has a
pivotal role in the diagnosis and prognosis of various lung- and heart-related
conditions. Conventional automated clinical diagnostic tool design strategies,
relying on radiology reads and supervised learning, entail the cumbersome
requirement of high-quality annotated training data. To address this challenge,
self-supervised pre-training has proven to outperform supervised pre-training
in numerous downstream vision tasks, representing a significant breakthrough in
the field. However, medical imaging pre-training significantly differs from
pre-training with natural images (e.g., ImageNet) due to unique attributes of
clinical images. In this context, we introduce Diverse Concept Modeling
(DiCoM), a novel self-supervised training paradigm that leverages a
student-teacher framework to learn diverse concepts and hence an effective
representation of the CXR data. Rather than modeling only a single primary
label within an image, DiCoM effectively harnesses the information from all
the concepts inherent in the CXR. The pre-trained model is subsequently
fine-tuned to address diverse domain-specific tasks. Our proposed
paradigm consistently demonstrates robust performance across multiple
downstream tasks on multiple datasets, highlighting the success and
generalizability of the pre-training strategy. To establish the efficacy of our
methods, we analyze both the power of the learned representations and the speed
of convergence (SoC) of our models. Across diverse data and tasks, DiCoM
achieves better results than other state-of-the-art pre-training strategies in
most cases. This, combined with its higher SoC and generalization capabilities,
positions DiCoM as a foundation model for CXRs, a widely used imaging modality.
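The abstract names a student-teacher framework but gives no implementation details. As a rough illustration only, one common instantiation of such frameworks is a DINO-style loop, where the teacher's weights are an exponential moving average (EMA) of the student's and the student is trained to match sharpened teacher targets. Every function name and hyperparameter below is an illustrative assumption, not DiCoM's actual objective:

```python
import numpy as np

def ema_update(teacher_w, student_w, momentum=0.996):
    # Teacher weights track an exponential moving average of the student's;
    # no gradients flow to the teacher.
    return momentum * teacher_w + (1.0 - momentum) * student_w

def softmax(x, temp):
    # Temperature-scaled softmax with the usual max-subtraction for stability.
    z = x / temp
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, t_student=0.1, t_teacher=0.04):
    # Cross-entropy between sharpened teacher targets (low temperature)
    # and the student's softened predictions.
    p_t = softmax(teacher_logits, t_teacher)
    p_s = softmax(student_logits, t_student)
    return float(-(p_t * np.log(p_s + 1e-12)).sum())

rng = np.random.default_rng(0)
student_w = rng.normal(size=4)
teacher_w = student_w.copy()

# One schematic training step: in practice the student moves by gradient
# descent on the distillation loss over two augmented views of the same CXR;
# here a random perturbation stands in for that step.
student_w = student_w - 0.1 * rng.normal(size=4)
teacher_w = ema_update(teacher_w, student_w)
```

The EMA teacher provides slowly changing targets, which is what lets a single network bootstrap representations from many co-occurring concepts in an image without labels.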