College of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing, China.
National Research Base of Intelligent Manufacturing Service, Chongqing Technology and Business University, Chongqing, China.
BMC Med Imaging. 2024 Sep 5;24(1):232. doi: 10.1186/s12880-024-01418-x.
Many image fusion methods have been proposed to leverage the advantages of functional and anatomical images while compensating for their shortcomings. These methods integrate functional and anatomical images into a single image that also presents physiological and metabolic organ information, giving them far greater diagnostic efficiency than single-modality images. Most existing multimodal medical image fusion methods are based on multiscale transformation, in which pyramid features are obtained through the transformation: low-resolution levels are used to analyse approximate image features, high-resolution levels are used to analyse detailed image features, and different fusion rules are applied to fuse features at each scale. Although such multiscale-transformation-based methods can effectively fuse multimodal medical images, considerable detail is lost during the forward and inverse transformations, resulting in blurred edges and a loss of detail in the fused images. To overcome this problem, a multimodal medical image fusion method based on interval gradients and convolutional neural networks is proposed. First, the method decomposes each image via interval gradients into structure and texture images. Second, deep neural networks are used to extract perception images. Three fusion rules are then applied to fuse the structure, texture, and perception images, respectively. Finally, the fused components are combined, and a colour transformation yields the final fused image. Compared with the reference algorithms, the proposed method performs better on multiple objective indicators.
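The decompose-then-fuse pipeline described above can be sketched in a simplified form. The snippet below is a minimal illustration, not the paper's actual method: it substitutes a plain box filter for the interval-gradient-guided smoothing, omits the CNN perception branch and the colour transformation, and uses simple averaging and max-absolute rules for the structure and texture layers. All function names and fusion rules here are illustrative assumptions.

```python
import numpy as np

def smooth(img, k=5):
    """Box-filter smoothing -- a simplified stand-in for the
    interval-gradient-guided smoothing used in the paper."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def decompose(img):
    """Split an image into a structure layer (smoothed base) and a
    texture layer (residual), mirroring the two-layer decomposition."""
    structure = smooth(img)
    texture = img - structure
    return structure, texture

def fuse(a, b):
    """Fuse two single-channel images layer by layer:
    average the structure layers, keep the larger-magnitude texture."""
    sa, ta = decompose(a)
    sb, tb = decompose(b)
    structure = 0.5 * (sa + sb)                            # averaging rule
    texture = np.where(np.abs(ta) >= np.abs(tb), ta, tb)  # max-absolute rule
    return structure + texture

# Toy example on random "images" standing in for registered modalities.
rng = np.random.default_rng(0)
a = rng.random((32, 32))
b = rng.random((32, 32))
f = fuse(a, b)
print(f.shape)  # → (32, 32)
```

Because the texture layer is the exact residual of the smoothing, fusing an image with itself reconstructs it perfectly, which is a convenient sanity check for this style of decomposition.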