
Pixelwise Gradient Model with GAN for Virtual Contrast Enhancement in MRI Imaging

Cancers (2024)

Abstract
Simple Summary: This paper presents a novel approach to producing virtual contrast-enhanced (VCE) images for nasopharyngeal cancer (NPC) without the use of contrast agents, which carry certain risks. The model uses a pixelwise gradient term to capture the shape, and a GAN term to capture the texture, of real contrast-enhanced T1C images. With accuracy similar to existing models, our method shows an advantage in reproducing texture closer to that of realistic contrast-enhanced images. These results are assessed by various measures: mean absolute error (MAE), mean square error (MSE) and structural similarity index (SSIM) for similarity accuracy; and total mean square variation per mean intensity (TMSVPMI), total absolute variation per mean intensity (TAVPMI), Tenengrad function per mean intensity (TFPMI) and variance function per mean intensity (VFPMI) for texture. Several variations of the model, including fine-tuning of the hyperparameters, different normalization methods applied to the images and training on a single modality, have also been investigated to determine the optimal performance.

Abstract: Background: The development of advanced computational models for medical imaging is crucial for improving diagnostic accuracy in healthcare. This paper introduces a novel approach for virtual contrast enhancement (VCE) in magnetic resonance imaging (MRI), focusing in particular on nasopharyngeal cancer (NPC). Methods: The proposed model, the Pixelwise Gradient Model with GAN for Virtual Contrast Enhancement (PGMGVCE), combines pixelwise gradient methods with generative adversarial networks (GANs) to enhance T1-weighted (T1-w) and T2-weighted (T2-w) MRI images. This approach combines the benefits of both modalities to simulate the effects of gadolinium-based contrast agents, thereby reducing the associated risks. Various modifications of PGMGVCE, including changing hyperparameters, using different normalization methods (z-score, Sigmoid and Tanh) and training the model with T1-w or T2-w images only, were tested to optimize the model's performance. Results: PGMGVCE demonstrated accuracy similar to the existing model in terms of mean absolute error (MAE) (8.56 ± 0.45 for Li's model; 8.72 ± 0.48 for PGMGVCE), mean square error (MSE) (12.43 ± 0.67 for Li's model; 12.81 ± 0.73 for PGMGVCE) and structural similarity index (SSIM) (0.71 ± 0.08 for Li's model; 0.73 ± 0.12 for PGMGVCE). However, it showed improvements in texture representation, as indicated by total mean square variation per mean intensity (TMSVPMI) (0.124 ± 0.022 for ground truth; 0.079 ± 0.024 for Li's model; 0.120 ± 0.027 for PGMGVCE), total absolute variation per mean intensity (TAVPMI) (0.159 ± 0.031 for ground truth; 0.100 ± 0.032 for Li's model; 0.153 ± 0.029 for PGMGVCE), Tenengrad function per mean intensity (TFPMI) (1.222 ± 0.241 for ground truth; 0.981 ± 0.213 for Li's model; 1.194 ± 0.223 for PGMGVCE) and variance function per mean intensity (VFPMI) (0.0811 ± 0.005 for ground truth; 0.0667 ± 0.006 for Li's model; 0.0761 ± 0.006 for PGMGVCE). Conclusions: PGMGVCE presents an innovative and safe approach to VCE in MRI, demonstrating the power of deep learning in enhancing medical imaging. This model paves the way for more accurate and risk-free diagnostic tools in medical imaging.
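The texture comparison rests on four measures that are, by name, local-variation or sharpness statistics normalized by the mean image intensity. Below is a minimal Python/NumPy sketch of how such per-mean-intensity texture measures could be computed; the abstract does not give the exact formulas, so the definitions here (and the function name texture_per_mean_intensity) are illustrative assumptions, not the paper's implementation.

import numpy as np
from scipy import ndimage


def texture_per_mean_intensity(img: np.ndarray) -> dict:
    """Plausible sketch of the four texture metrics named in the abstract.

    Each measure of local variation / sharpness is divided by the mean
    intensity so that brighter images are not trivially scored as "sharper".
    """
    img = img.astype(np.float64)
    mean_intensity = img.mean() + 1e-8  # guard against division by zero

    # Forward differences along image rows and columns.
    dy = np.diff(img, axis=0)
    dx = np.diff(img, axis=1)

    # Total mean square variation per mean intensity (TMSVPMI).
    tmsvpmi = (np.mean(dy ** 2) + np.mean(dx ** 2)) / mean_intensity

    # Total absolute variation per mean intensity (TAVPMI).
    tavpmi = (np.mean(np.abs(dy)) + np.mean(np.abs(dx))) / mean_intensity

    # Tenengrad function per mean intensity (TFPMI): mean squared Sobel
    # gradient magnitude, normalized by mean intensity.
    gx = ndimage.sobel(img, axis=1)
    gy = ndimage.sobel(img, axis=0)
    tfpmi = np.mean(gx ** 2 + gy ** 2) / mean_intensity

    # Variance function per mean intensity (VFPMI).
    vfpmi = img.var() / mean_intensity

    return {"TMSVPMI": tmsvpmi, "TAVPMI": tavpmi, "TFPMI": tfpmi, "VFPMI": vfpmi}

Under definitions of this kind, higher values indicate more high-frequency texture, which matches the reported ordering in the abstract: ground truth highest, PGMGVCE close behind, and Li's model lowest (i.e., smoother than the real contrast-enhanced images).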
Keywords
virtual contrast enhancement, tumor contrast, MR-guided radiotherapy, nasopharyngeal carcinoma