
C-MADA: Unsupervised Cross-Modality Adversarial Domain Adaptation Framework for Medical Image Segmentation

Medical Imaging 2022: Image Processing (2022)

Abstract
Deep learning models have obtained state-of-the-art results for medical image analysis. However, convolutional neural networks (CNNs) require massive amounts of labelled data to achieve high performance. Moreover, many supervised learning approaches assume that the training/source dataset and test/target dataset follow the same probability distribution. This assumption is rarely satisfied by real-world data, and when models are tested on an unseen domain their performance degrades significantly. In this work, we present an unsupervised Cross-Modality Adversarial Domain Adaptation (C-MADA) framework for medical image segmentation. C-MADA applies image-level and feature-level adaptation in a two-step sequential manner. First, images from the source domain are translated to the target domain through unpaired image-to-image adversarial translation with a cycle-consistency loss. Then, a U-Net is trained on the mapped source-domain images and the target-domain images in an adversarial manner to learn domain-invariant feature representations and produce segmentations for the target domain. Furthermore, to improve the network's segmentation performance, information about the shape, texture, and contour of the predicted segmentation is included during the adversarial training. C-MADA is tested on the task of brain MRI segmentation from the crossMoDA Grand Challenge and ranked within the top 15 submissions of the challenge.
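The first step described above is an unpaired translation network trained adversarially with a cycle-consistency loss, in the style of CycleGAN. Below is a minimal PyTorch sketch of that objective for the source-to-target direction; SmallGenerator, SmallDiscriminator, and the loss weight lambda_cyc are illustrative placeholders, not the paper's architectures or hyperparameters.

```python
# Minimal sketch of the image-level step: unpaired source->target translation
# trained adversarially with a cycle-consistency loss (CycleGAN-style).
# SmallGenerator / SmallDiscriminator are toy placeholders, not the paper's nets.
import torch
import torch.nn as nn

class SmallGenerator(nn.Module):
    """Toy convolutional generator mapping one MRI modality to another."""
    def __init__(self, channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, channels, 3, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class SmallDiscriminator(nn.Module):
    """Toy patch discriminator judging real vs. translated target images."""
    def __init__(self, channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(16, 1, 4, stride=2, padding=1),
        )
    def forward(self, x):
        return self.net(x)

G_st = SmallGenerator()      # source -> target generator
G_ts = SmallGenerator()      # target -> source generator (closes the cycle)
D_t = SmallDiscriminator()   # target-domain discriminator

adv_loss = nn.MSELoss()      # least-squares GAN loss (one common choice)
cyc_loss = nn.L1Loss()       # cycle-consistency penalty

def generator_loss(x_s, lambda_cyc=10.0):
    """Adversarial + cycle loss for the source->target direction; the
    symmetric target->source terms are analogous and omitted here."""
    fake_t = G_st(x_s)                                # translate to target style
    pred = D_t(fake_t)
    loss_adv = adv_loss(pred, torch.ones_like(pred))  # try to fool D_t
    rec_s = G_ts(fake_t)                              # map back to source
    loss_cyc = cyc_loss(rec_s, x_s)                   # round trip must match input
    return loss_adv + lambda_cyc * loss_cyc

x_s = torch.randn(2, 1, 64, 64)  # a batch of (unpaired) source-domain slices
print(generator_loss(x_s).item())
```

The second step trains the segmentation network on the translated source images, which retain the source labels, while a domain discriminator pushes its features to be indistinguishable between mapped-source and real target images. The sketch below uses a gradient-reversal layer as the adversarial mechanism and omits the shape/texture/contour cues the paper additionally feeds the discriminator; TinyUNet and DomainHead are assumed toy stand-ins for the U-Net and discriminator.

```python
# Minimal sketch of the feature-level step: supervised segmentation on mapped
# source images plus adversarial feature alignment with the target domain via
# a gradient-reversal layer. TinyUNet / DomainHead are illustrative placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips the gradient sign in the backward
    pass, so minimizing the domain loss maximizes domain confusion upstream."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None

class TinyUNet(nn.Module):
    """Toy encoder-decoder standing in for the U-Net; returns logits + features."""
    def __init__(self, in_ch=1, n_classes=2):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
                                 nn.MaxPool2d(2))
        self.dec = nn.Sequential(nn.Upsample(scale_factor=2),
                                 nn.Conv2d(16, n_classes, 3, padding=1))
    def forward(self, x):
        feats = self.enc(x)
        return self.dec(feats), feats

class DomainHead(nn.Module):
    """Classifies whether features come from mapped-source or target images."""
    def __init__(self, ch=16):
        super().__init__()
        self.net = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                 nn.Linear(ch, 1))
    def forward(self, feats, lam=1.0):
        return self.net(GradReverse.apply(feats, lam))

seg_net, dom_head = TinyUNet(), DomainHead()

def train_step(x_src2tgt, y_src, x_tgt, lam=1.0):
    """One adversarial step: segment mapped-source images, confuse the domain head."""
    logits, f_s = seg_net(x_src2tgt)
    seg_loss = F.cross_entropy(logits, y_src)   # supervised on source labels
    _, f_t = seg_net(x_tgt)                     # unlabeled target features
    d = dom_head(torch.cat([f_s, f_t]), lam)    # domain predictions
    labels = torch.cat([torch.zeros(len(f_s), 1), torch.ones(len(f_t), 1)])
    dom_loss = F.binary_cross_entropy_with_logits(d, labels)
    return seg_loss + dom_loss  # gradient reversal makes this a min-max game

x_s = torch.randn(2, 1, 64, 64)          # source images mapped to target style
y_s = torch.randint(0, 2, (2, 64, 64))   # source segmentation masks
x_t = torch.randn(2, 1, 64, 64)          # unlabeled target images
print(train_step(x_s, y_s, x_t).item())
```

The combined objective trades off segmentation accuracy on the labelled (translated) source against domain confusion, which is what drives the features toward domain invariance.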
Keywords
Domain Adaptation, Unsupervised Learning, Transfer Learning, Semi-Supervised Learning, Meta-Learning