Deep Learning for Tumour Segmentation with Missing Data

Neuro-Oncology (2022)

Abstract
AIMS
Brain tumour segmentation remains a challenging task, complicated by the marked heterogeneity of imaging appearances and their distribution across multiple modalities: FLAIR, T1-weighted, T2-weighted, and contrast-enhanced T1-weighted (T1CE) sequences. However, the use of all four imaging sequences is not always possible. The causes for this are legion, with common examples including corruption by image artefacts and acquisition constraints, such as those imposed in pre-operative stealth studies. We therefore aimed to quantify how well tumour segmentation models perform with incomplete imaging data.

METHOD
We developed a collection of 30 state-of-the-art, nnU-Net-derived deep learning tumour segmentation models and deployed them across all possible combinations of imaging modalities, trained and tested with five-fold cross-validation on the 2021 BraTS-RSNA glioma population of 1251 patients, with additional out-of-sample comparison to neuroradiologist hand-labelled lesions from our own centre.

RESULTS
Regardless of the imaging available, models largely performed well. The best models with varying degrees of missingness were as follows: single sequence available, FLAIR (Dice 0.938); two sequences available, FLAIR + T1CE (Dice 0.943); three sequences available, FLAIR + T1CE + T2 (Dice 0.945). In comparison, a model with complete data (FLAIR + T1 + T1CE + T2) achieved a similar Dice coefficient of 0.945.

CONCLUSION
Tumour segmentation models with missing sequences, a common situation in clinical practice, still delineate lesions well, often with performance comparable to when all data are available. This provides an opportunity for quantitative imaging in patients and clinical situations wherein full MRI acquisitions are not possible.
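To make the evaluation setup concrete, the sketch below illustrates (not the authors' code) the two ingredients named in the abstract: enumerating every non-empty combination of the four MRI sequences, and scoring a predicted tumour mask against a reference mask with the Dice coefficient. The function names, array shapes, and toy masks are assumptions for illustration only.

```python
# Illustrative sketch of the evaluation described in the abstract: enumerate
# modality combinations and compute a Dice coefficient on binary masks.
# Names and toy data are hypothetical, not the paper's implementation.
from itertools import combinations

import numpy as np

SEQUENCES = ("FLAIR", "T1", "T1CE", "T2")


def modality_subsets(sequences=SEQUENCES):
    """Yield every non-empty combination of input sequences (15 in total)."""
    for k in range(1, len(sequences) + 1):
        yield from combinations(sequences, k)


def dice_coefficient(pred: np.ndarray, ref: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary segmentation masks."""
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + ref.sum() + eps))


if __name__ == "__main__":
    # Toy 3D masks standing in for a model prediction and a hand-labelled lesion.
    rng = np.random.default_rng(0)
    reference = rng.random((64, 64, 64)) > 0.7
    prediction = reference.copy()
    prediction[:8] = ~prediction[:8]  # perturb part of the prediction

    print(sum(1 for _ in modality_subsets()), "modality combinations")
    print(f"Dice: {dice_coefficient(prediction, reference):.3f}")
```

The 15 non-empty subsets explain why one model per available-modality combination is needed when handling missing sequences by dedicated training rather than by imputation.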
Keywords
tumour segmentation, missing data, deep learning