
Deep Learning Approach for Fusion of Magnetic Resonance Imaging-Positron Emission Tomography Image Based on Extract Image Features Using Pretrained Network (VGG19)

Journal of Medical Signals and Sensors (2022)

Abstract
Background: Image fusion combines the information of several different images into a single image. In this paper, we present a deep learning approach for the fusion of magnetic resonance imaging (MRI) and positron emission tomography (PET) images.
Methods: We fused MRI and PET images automatically with a pretrained convolutional neural network (CNN, VGG19). First, the PET image was converted from red-green-blue (RGB) space to hue-saturation-intensity (HSI) space to preserve its hue and saturation information. We then extracted features from the images with the pretrained CNN and used the weights derived from the MRI and PET features to construct the fused image, multiplying the weights with the source images. To compensate for the resulting loss of contrast, a constant coefficient of the original image was added to the final result. Finally, quantitative criteria (entropy, mutual information, discrepancy, and overall performance [OP]) were applied to evaluate the fusion results, and the method was compared with the most widely used methods in the spatial and transform domains.
Results: The quantitative measurements were an entropy of 3.0319, mutual information of 2.3993, discrepancy of 3.8187, and OP of 0.9899. Based on these quantitative assessments, our method was the best and simplest way to fuse the images, especially among spatial-domain methods.
Conclusion: The proposed method yields more accurate MRI-PET image fusion.
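The pipeline described in the Methods section (RGB-to-HSI conversion of the PET image, VGG19 feature extraction, weight-map construction, and weighted fusion plus a constant contrast-restoring term) can be sketched roughly as follows. This is a minimal Python illustration assuming PyTorch/torchvision; the layer used for feature extraction, the normalization of the two activation maps into weights, and the constant coefficient `alpha` are assumptions, not the authors' exact choices.

```python
import numpy as np
import torch
import torchvision.models as models

# Pretrained VGG19 feature extractor (convolutional part only).
vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features.eval()

def activation_map(gray_img, layer_idx=8):
    """Saliency map: sum of absolute VGG19 activations at an assumed layer."""
    x = torch.from_numpy(gray_img).float()[None, None]      # 1 x 1 x H x W
    x = x.repeat(1, 3, 1, 1)                                 # VGG19 expects 3 channels
    with torch.no_grad():
        for i, layer in enumerate(vgg):
            x = layer(x)
            if i == layer_idx:
                break
    act = x.abs().sum(dim=1, keepdim=True)                   # 1 x 1 x h x w
    act = torch.nn.functional.interpolate(                   # resize back to input size
        act, size=gray_img.shape, mode="bilinear", align_corners=False)
    return act[0, 0].numpy()

def fuse(mri, pet_intensity, alpha=0.3):
    """Fuse the MRI image with the PET intensity channel (HSI space).

    `alpha` is an assumed constant coefficient that adds part of the
    original intensity back to compensate for reduced contrast.
    """
    a_mri, a_pet = activation_map(mri), activation_map(pet_intensity)
    w_mri = a_mri / (a_mri + a_pet + 1e-8)                   # normalised weight maps
    w_pet = 1.0 - w_mri
    fused = w_mri * mri + w_pet * pet_intensity
    return fused + alpha * pet_intensity                     # contrast-restoring term
```

In the scheme described in the abstract, the fused result would replace the intensity channel of the PET image before converting HSI back to RGB; that colour-space conversion is omitted here for brevity.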
Keywords
Convolutional neural network, hue-saturation-intensity space, image fusion, VGG19
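For completeness, the entropy and mutual-information criteria mentioned in the abstract can be computed from intensity histograms, as in the hypothetical helper functions below (8-bit grayscale inputs assumed; `discrepancy` is taken here as the mean absolute difference between the fused and source images, which is an assumption about the paper's definition).

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy (bits) of the intensity histogram."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_information(img_a, img_b, bins=256):
    """Mutual information between two images from their joint histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)                  # marginal of img_a
    py = pxy.sum(axis=0, keepdims=True)                  # marginal of img_b
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz]))

def discrepancy(fused, source):
    """Mean absolute intensity difference between fused and source image."""
    return np.abs(fused.astype(float) - source.astype(float)).mean()
```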