Learning Robust Medical Image Segmentation from Multi-source Annotations

Yifeng Wang, Li Luo, Mingxiang Wu, Qiong Wang, Hao Chen

arXiv (Cornell University), 2023

Abstract
Collecting annotations from multiple independent sources can mitigate the impact of potential noise and bias from any single source, and is a common practice in medical image segmentation. However, learning segmentation networks from multi-source annotations remains challenging due to the uncertainties introduced by inter-annotator variance and variable image quality. In this paper, we propose an Uncertainty-guided Multi-source Annotation Network (UMA-Net), which guides the training process via uncertainty estimation at both the pixel and image levels. First, we develop an annotation uncertainty estimation module (AUEM) to learn the pixel-wise uncertainty of each annotation, which then guides the network to learn from reliable pixels through an uncertainty-weighted segmentation loss. Second, we propose a quality assessment module (QAM) to assess the image-level quality of input samples based on the previously estimated annotation uncertainties. Importantly, we introduce an auxiliary predictor that learns from low-quality samples instead of discarding them, preserving their representation knowledge in the backbone without accumulating errors directly in the primary predictor. Extensive experiments demonstrate the effectiveness and feasibility of the proposed UMA-Net on various datasets, including 2D chest X-ray segmentation, fundus image segmentation, and 3D breast DCE-MRI segmentation.
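To make the two uncertainty mechanisms in the abstract concrete, the following is a minimal NumPy sketch, not the authors' implementation: a pixel-wise uncertainty-weighted segmentation loss standing in for the AUEM-guided loss, and a mean-uncertainty threshold standing in for the QAM's image-level routing. All function names, shapes, and the threshold are hypothetical simplifications.

```python
import numpy as np

def weighted_seg_loss(pred, annotations, uncertainties, eps=1e-7):
    """Uncertainty-weighted binary cross-entropy over multiple
    annotation sources (hypothetical stand-in for the AUEM-guided loss).

    pred:          (H, W) predicted foreground probabilities
    annotations:   (S, H, W) binary masks from S independent sources
    uncertainties: (S, H, W) per-source pixel uncertainty in [0, 1]
    """
    pred = np.clip(pred, eps, 1.0 - eps)
    # Reliability weight: low annotation uncertainty -> high weight,
    # so the network learns mainly from pixels annotators agree on.
    w = 1.0 - uncertainties
    bce = -(annotations * np.log(pred)
            + (1.0 - annotations) * np.log(1.0 - pred))
    return float((w * bce).sum() / (w.sum() + eps))

def is_high_quality(uncertainties, thresh=0.5):
    """Hypothetical QAM stand-in: treat a sample as high quality when
    its mean annotation uncertainty falls below a threshold. Low-quality
    samples would be routed to the auxiliary predictor rather than
    discarded."""
    return float(uncertainties.mean()) < thresh
```

Note that when every pixel is fully uncertain the weights vanish and the loss contribution goes to zero, which mimics down-weighting unreliable annotations; the image-level gate then decides whether the sample trains the primary or the auxiliary predictor.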
Keywords
segmentation, medical image