Multi-modal Vision-Language Model for Generalizable Annotation-Free Pathology Localization and Clinical Diagnosis

arXiv (2024)

Abstract
Automatically identifying pathologies in medical images aids the understanding of how diseases emerge and progress, and this ability is crucial for clinical diagnosis. However, existing deep learning models rely heavily on expert annotations and lack generalization in open clinical environments. In this study, we present a generalizable vision-language model for Annotation-Free pathology Localization (AFLoc). The core strength of AFLoc lies in its extensive multi-level semantic structure-based contrastive learning, which comprehensively aligns multi-granularity medical concepts from reports with abundant image features, adapting to diverse expressions of pathologies, including unseen ones, without relying on expert image annotations. We demonstrate a proof of concept on chest X-ray images, with extensive experimental validation across 6 distinct external datasets encompassing 13 types of chest pathologies. The results show that AFLoc surpasses state-of-the-art methods in pathology localization and classification, and even outperforms the human benchmark in locating 5 different pathologies. We further verify its generalization ability by applying it to retinal fundus images. These results showcase AFLoc's versatility and underscore its suitability for diagnosis in complex clinical environments.
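The core mechanism the abstract describes, contrastive alignment of multi-granularity report concepts with image features, can be sketched in PyTorch as follows. This is a minimal illustration, assuming three text granularities (word, sentence, report) paired with three image granularities (local, region, global), each aligned with a symmetric InfoNCE loss; the level names, the pooling of each level into a (B, D) embedding batch, and the per-level loss weights are illustrative assumptions, not AFLoc's published implementation.

```python
import torch
import torch.nn.functional as F

def info_nce(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss between matched batches of image and text embeddings.

    img_emb, txt_emb: (B, D) tensors; the i-th image is assumed to match
    the i-th text within the batch, all other pairs serve as negatives.
    """
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature            # (B, B) similarity matrix
    targets = torch.arange(img_emb.size(0), device=img_emb.device)
    # Average the image-to-text and text-to-image directions.
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

def multilevel_alignment_loss(image_feats, text_feats, weights=(1.0, 1.0, 1.0)):
    """Weighted sum of contrastive losses over hypothetical granularity pairs.

    image_feats: dict with keys 'local', 'region', 'global' -> (B, D) tensors.
    text_feats:  dict with keys 'word', 'sentence', 'report' -> (B, D) tensors.
    Both the key names and the pairing are assumptions for this sketch.
    """
    pairs = [("local", "word"), ("region", "sentence"), ("global", "report")]
    loss = 0.0
    for w, (img_key, txt_key) in zip(weights, pairs):
        loss = loss + w * info_nce(image_feats[img_key], text_feats[txt_key])
    return loss

# Usage with random embeddings standing in for encoder outputs:
B, D = 8, 128
image_feats = {k: torch.randn(B, D) for k in ("local", "region", "global")}
text_feats = {k: torch.randn(B, D) for k in ("word", "sentence", "report")}
print(multilevel_alignment_loss(image_feats, text_feats))
```

Because each granularity contributes its own loss term, fine-grained levels (word-to-local) can drive pathology localization while coarse levels (report-to-global) support image-level classification, which is consistent with the abstract's claim that one model serves both tasks.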