Suppr 超能文献



Multimodal medical image fusion based on interval gradients and convolutional neural networks.

Affiliations

College of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing, China.

National Research Base of Intelligent Manufacturing Service, Chongqing Technology and Business University, Chongqing, China.

Publication Info

BMC Med Imaging. 2024 Sep 5;24(1):232. doi: 10.1186/s12880-024-01418-x.

DOI: 10.1186/s12880-024-01418-x
PMID: 39237900
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11375917/
Abstract

Many image fusion methods have been proposed to leverage the complementary advantages of functional and anatomical images while compensating for their individual shortcomings. These methods integrate functional and anatomical images and present physiological and metabolic organ information, giving them far greater diagnostic efficiency than single-modality images. Most existing multimodal medical image fusion methods are based on multiscale transformation, which obtains pyramid features through multiscale decomposition: low-resolution levels are used to analyse approximate image features, high-resolution levels are used to analyse detailed image features, and different fusion rules are applied to achieve feature fusion at each scale. Although fusion methods based on multiscale transformation can effectively achieve multimodal medical image fusion, much detailed information is lost during the forward and inverse transformations, resulting in blurred edges and a loss of detail in the fused images. To overcome this problem, a multimodal medical image fusion method based on interval gradients and convolutional neural networks is proposed. First, the method uses interval gradients to decompose each image into structure and texture images. Second, deep neural networks are used to extract perception images. Three separate rules are then used to fuse the structure, texture, and perception images. Finally, the fused components are combined and a colour transformation is applied to obtain the final fused image. Compared with the reference algorithms, the proposed method performs better on multiple objective evaluation metrics.
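The decompose-then-fuse pipeline described in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration under stated assumptions, not the paper's implementation: a plain box blur stands in for the interval-gradient filter, the CNN perception branch and the colour transformation are omitted, and the function names (`box_blur`, `decompose`, `fuse`) are hypothetical.

```python
import numpy as np

def box_blur(img, r=2):
    """Edge-padded box filter; a crude stand-in for the paper's
    interval-gradient smoothing step."""
    h, w = img.shape
    pad = np.pad(img, r, mode="edge")
    k = 2 * r + 1
    out = np.zeros((h, w), dtype=float)
    for dy in range(k):            # sum the k*k shifted windows
        for dx in range(k):
            out += pad[dy:dy + h, dx:dx + w]
    return out / (k * k)

def decompose(img, r=2):
    """Split an image into a smooth structure layer and a residual
    texture layer, so that structure + texture == img exactly."""
    structure = box_blur(img, r)
    return structure, img - structure

def fuse(img_a, img_b, r=2):
    """Fuse two registered single-channel images: average the
    structure layers, keep the larger-magnitude texture detail."""
    sa, ta = decompose(img_a, r)
    sb, tb = decompose(img_b, r)
    structure = 0.5 * (sa + sb)
    texture = np.where(np.abs(ta) >= np.abs(tb), ta, tb)
    return structure + texture
```

Because the decomposition is exactly invertible, fusing an image with itself returns the image unchanged, which makes a convenient sanity check; the paper instead fuses structure, texture, and a CNN-derived perception layer with three distinct rules.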


Figures (PMC11375917):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b75a/11375917/fcee64465b04/12880_2024_1418_Fig1_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b75a/11375917/8367dddb7ac2/12880_2024_1418_Fig2_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b75a/11375917/1bca499a3627/12880_2024_1418_Fig3_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b75a/11375917/4b892c348925/12880_2024_1418_Fig4_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b75a/11375917/9154c8512680/12880_2024_1418_Fig5_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b75a/11375917/f0fc401ea1cc/12880_2024_1418_Fig6_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b75a/11375917/c81772302d20/12880_2024_1418_Fig7_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b75a/11375917/8e5b060a31a9/12880_2024_1418_Fig8_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b75a/11375917/ee4a7e0fdf48/12880_2024_1418_Fig9_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b75a/11375917/8445360cc573/12880_2024_1418_Fig10_HTML.jpg

Similar Articles

1. Multimodal medical image fusion based on interval gradients and convolutional neural networks.
   BMC Med Imaging. 2024 Sep 5;24(1):232. doi: 10.1186/s12880-024-01418-x.
2. Multimodal medical image fusion via laplacian pyramid and convolutional neural network reconstruction with local gradient energy strategy.
   Comput Biol Med. 2020 Nov;126:104048. doi: 10.1016/j.compbiomed.2020.104048. Epub 2020 Oct 8.
3. A multibranch and multiscale neural network based on semantic perception for multimodal medical image fusion.
   Sci Rep. 2024 Jul 30;14(1):17609. doi: 10.1038/s41598-024-68183-3.
4. [CT and MRI fusion based on generative adversarial network and convolutional neural networks under image enhancement].
   Sheng Wu Yi Xue Gong Cheng Xue Za Zhi. 2023 Apr 25;40(2):208-216. doi: 10.7507/1001-5515.202209050.
5. CDRNet: Cascaded dense residual network for grayscale and pseudocolor medical image fusion.
   Comput Methods Programs Biomed. 2023 Jun;234:107506. doi: 10.1016/j.cmpb.2023.107506. Epub 2023 Mar 23.
6. A multiscale double-branch residual attention network for anatomical-functional medical image fusion.
   Comput Biol Med. 2022 Feb;141:105005. doi: 10.1016/j.compbiomed.2021.105005. Epub 2021 Nov 3.
7. Multi-Modality Medical Image Fusion Using Convolutional Neural Network and Contrast Pyramid.
   Sensors (Basel). 2020 Apr 11;20(8):2169. doi: 10.3390/s20082169.
8. Image Fusion and Stylization Processing Based on Multiscale Transformation and Convolutional Neural Network.
   Comput Intell Neurosci. 2022 Apr 28;2022:1181189. doi: 10.1155/2022/1181189. eCollection 2022.
9. A sum-modified-Laplacian and sparse representation based multimodal medical image fusion in Laplacian pyramid domain.
   Med Biol Eng Comput. 2019 Oct;57(10):2265-2275. doi: 10.1007/s11517-019-02023-9. Epub 2019 Aug 14.
10. Research of Multimodal Medical Image Fusion Based on Parameter-Adaptive Pulse-Coupled Neural Network and Convolutional Sparse Representation.
    Comput Math Methods Med. 2020 Jan 24;2020:3290136. doi: 10.1155/2020/3290136. eCollection 2020.

References Cited in This Article

1. Multi-modal medical image fusion by Laplacian pyramid and adaptive sparse representation.
   Comput Biol Med. 2020 Aug;123:103823. doi: 10.1016/j.compbiomed.2020.103823. Epub 2020 Jun 20.
2. An adaptive two-scale biomedical image fusion method with statistical comparisons.
   Comput Methods Programs Biomed. 2020 Nov;196:105603. doi: 10.1016/j.cmpb.2020.105603. Epub 2020 Jun 12.
3. DRPL: Deep Regression Pair Learning For Multi-Focus Image Fusion.
   IEEE Trans Image Process. 2020 Mar 2. doi: 10.1109/TIP.2020.2976190.
4. Adopting Quaternion Wavelet Transform to Fuse Multi-Modal Medical Images.
   J Med Biol Eng. 2017;37(2):230-239. doi: 10.1007/s40846-016-0200-6. Epub 2017 Mar 9.
5. Anatomical-functional image fusion by information of interest in local Laplacian filtering domain.
   IEEE Trans Image Process. 2017 Dec;26(12):5855-5866. doi: 10.1109/TIP.2017.2745202. Epub 2017 Aug 25.
6. Dual-tree complex wavelet transform and image block residual-based multi-focus image fusion in visual sensor networks.
   Sensors (Basel). 2014 Nov 26;14(12):22408-30. doi: 10.3390/s141222408.
7. Multiscale medical image fusion in wavelet domain.
   ScientificWorldJournal. 2013 Dec 22;2013:521034. doi: 10.1155/2013/521034. eCollection 2013.
8. Group-sparse representation with dictionary learning for medical image denoising and fusion.
   IEEE Trans Biomed Eng. 2012 Dec;59(12):3450-9. doi: 10.1109/TBME.2012.2217493. Epub 2012 Sep 6.