

Deep Learning Approach for Fusion of Magnetic Resonance Imaging-Positron Emission Tomography Image Based on Extract Image Features using Pretrained Network (VGG19).

Author Information

Amini Nasrin, Mostaar Ahmad

Affiliations

Department of Biomedical Engineering and Medical Physics, Faculty of Medicine, Shahid Beheshti University of Medical Sciences, Tehran, Iran.

Publication Information

J Med Signals Sens. 2021 Dec 28;12(1):25-31. doi: 10.4103/jmss.JMSS_80_20. eCollection 2022 Jan-Mar.

Abstract

BACKGROUND

Image fusion is a way of combining the information from several different images into a single image. In this paper, we present a deep learning network approach for the fusion of magnetic resonance imaging (MRI) and positron emission tomography (PET) images.

METHODS

We fused MRI and PET images automatically with a pretrained convolutional neural network (CNN, VGG19). First, the PET image was converted from red-green-blue (RGB) space to hue-saturation-intensity (HSI) space to preserve the hue and saturation information. We then extracted features from the images using the pretrained CNN and used the weights derived from the MRI and PET images to construct a fused image: the fused image was obtained by multiplying the weights by the images. To compensate for the resulting loss of contrast, we added a constant coefficient of the original image to the final result. Finally, quantitative criteria (entropy, mutual information, discrepancy, and overall performance [OP]) were applied to evaluate the fusion results. We compared our method with the most widely used methods in the spatial and transform domains.

RESULTS

The entropy, mutual information, discrepancy, and OP values for our method were 3.0319, 2.3993, 3.8187, and 0.9899, respectively. Based on these quantitative assessments, our method was the best and simplest way to fuse the images, particularly among spatial-domain methods.

CONCLUSION

We conclude that our method for MRI-PET image fusion is more accurate than the compared methods.


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e32c/8804594/4c28ed03b23b/JMSS-12-25-g001.jpg
