
Medical image fusion using enhanced cross-visual cortex model based on artificial selection and impulse-coupled neural network.

Authors

Xu Wanni, Fu You-Lei, Xu Huasen, Wong Kelvin K L

Affiliations

Xiamen Academy of Arts and Design, Fuzhou University, Xiamen 361024, China; Department of Computer Information Engineering, Nanchang Institute of Technology, Nanchang 330044, China.

Department of Computer Information Engineering, Nanchang Institute of Technology, Nanchang 330044, China; Fine Art and Design College, Quanzhou Normal University, Quanzhou 362000, China.

Publication Information

Comput Methods Programs Biomed. 2023 Feb;229:107304. doi: 10.1016/j.cmpb.2022.107304. Epub 2022 Dec 9.

DOI: 10.1016/j.cmpb.2022.107304
PMID: 36586176
Abstract

OBJECTIVE

The traditional intersecting cortical model (ICM) is widely used in applications such as image edge detection and image segmentation. However, several model parameters must be set manually, which tends to reduce accuracy and increase cost. Because medical images have more complex edges, contours and details, better-suited combinatorial algorithms are needed to handle the pathological diagnosis of multiple cerebral infarcts and acute strokes, making the findings more broadly applicable and of greater clinical value.
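The abstract does not give the model equations, but a commonly cited ICM formulation (a simplified pulse-coupled neural network) uses a decaying feeding state, a dynamic threshold, and binary pulse output. The sketch below is a minimal illustration of that general formulation; the parameter values `f`, `g`, `h` and the 3x3 linking kernel are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def convolve2d_same(img, kernel):
    """Tiny same-size 2D filtering with zero padding (no SciPy needed)."""
    kh, kw = kernel.shape
    p = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros_like(img, dtype=float)
    for dy in range(kh):
        for dx in range(kw):
            out += kernel[dy, dx] * p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def icm_pulse_maps(stimulus, iterations=10, f=0.9, g=0.8, h=20.0):
    """Run a basic ICM over a normalized image; return the binary pulse
    (firing) map produced at each iteration.  f/g/h are assumed values."""
    F = np.zeros_like(stimulus, dtype=float)     # feeding state
    theta = np.ones_like(stimulus, dtype=float)  # dynamic threshold
    Y = np.zeros_like(stimulus, dtype=float)     # pulse output
    # symmetric linking kernel: neighbours' pulses nudge each neuron
    W = np.array([[0.5, 1.0, 0.5], [1.0, 0.0, 1.0], [0.5, 1.0, 0.5]])
    maps = []
    for _ in range(iterations):
        F = f * F + stimulus + convolve2d_same(Y, W)
        Y = (F > theta).astype(float)            # neurons above threshold fire
        theta = g * theta + h * Y                # firing raises the threshold sharply
        maps.append(Y.copy())
    return maps
```

The sequence of pulse maps encodes edge and texture information, which is why ICM-family models are popular as fusion decision mechanisms.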

METHODS

To better solve the medical image fusion and diagnosis problem, this paper introduces an image fusion algorithm that combines NSCT with an improved ICM, and proposes separate fusion rules for the low-frequency and high-frequency sub-bands. The method is applied to the fusion of CT/MRI images and compared against three other fusion algorithms (NSCT-SF-PCNN, NSCT-SR-PCNN and Adaptive-PCNN), and the simulated fusion results are analyzed and validated.
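NSCT is not available in common Python libraries, so the structural shape of such a pipeline can be sketched with a crude single-level low/high split (a box-filter low pass) standing in for the NSCT decomposition: average the low-frequency band, keep the larger-magnitude high-frequency coefficient. The actual paper uses multi-scale, multi-directional NSCT sub-bands and an improved-ICM decision rule, so everything below is only an illustrative stand-in.

```python
import numpy as np

def box_blur(img, k=5):
    """Crude low-pass: average of k*k shifted copies with edge padding."""
    p = np.pad(img, k // 2, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def fuse_two_band(img_a, img_b, k=5):
    """Split each image into low- and high-frequency bands, fuse each band
    with its own rule, then recombine.  Stand-in for NSCT + improved ICM."""
    low_a, low_b = box_blur(img_a, k), box_blur(img_b, k)
    high_a, high_b = img_a - low_a, img_b - low_b
    fused_low = (low_a + low_b) / 2.0                     # low band: average
    fused_high = np.where(np.abs(high_a) >= np.abs(high_b),
                          high_a, high_b)                 # high band: max |coeff|
    return fused_low + fused_high
```

Because each input reconstructs exactly as low + high, fusing an image with itself returns the image unchanged, which is a quick sanity check on any band-split fusion rule.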

RESULTS

According to the experimental findings, the proposed algorithm outperforms the other fusion algorithms in terms of both five objective evaluation metrics and subjective evaluation. The NSCT transform was combined with the improved ICM, and the outcomes were evaluated against those of the other fusion algorithms. CT/MRI medical images of healthy brain tissue, multiple cerebral infarcts and acute strokes were fused using this technique.
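The abstract does not name the five objective metrics. Two metrics that appear frequently in this fusion literature, and so are plausible candidates, are information entropy and spatial frequency; both are sketched below purely as examples of what "objective evaluation metrics" means here.

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy (bits) of the grey-level histogram.
    Assumes 8-bit grey levels in [0, 256)."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                       # drop empty bins before log
    return float(-(p * np.log2(p)).sum())

def spatial_frequency(img):
    """Row/column gradient energy; higher values suggest more detail."""
    f = img.astype(float)
    rf = np.sqrt(np.mean(np.diff(f, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(f, axis=0) ** 2))  # column frequency
    return float(np.sqrt(rf ** 2 + cf ** 2))
```

A good fused image is generally expected to score at least as high on such metrics as either source image, since fusion should preserve the detail of both.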

CONCLUSION

Medical image fusion using the Adaptive-PCNN produces satisfactory results, not only in improved image clarity but also in outstanding edge information, high contrast and brightness.


Similar Articles

1. Medical image fusion using enhanced cross-visual cortex model based on artificial selection and impulse-coupled neural network.
Comput Methods Programs Biomed. 2023 Feb;229:107304. doi: 10.1016/j.cmpb.2022.107304. Epub 2022 Dec 9.
2. NSCT-based multimodal medical image fusion using pulse-coupled neural network and modified spatial frequency.
Med Biol Eng Comput. 2012 Oct;50(10):1105-14. doi: 10.1007/s11517-012-0943-3. Epub 2012 Jul 24.
3. Infrared and visible image fusion method of dual NSCT and PCNN.
PLoS One. 2020 Sep 18;15(9):e0239535. doi: 10.1371/journal.pone.0239535. eCollection 2020.
4. Feature-Motivated Simplified Adaptive PCNN-Based Medical Image Fusion Algorithm in NSST Domain.
J Digit Imaging. 2016 Feb;29(1):73-85. doi: 10.1007/s10278-015-9806-4.
5. Multimodal medical image fusion algorithm based on pulse coupled neural networks and nonsubsampled contourlet transform.
Med Biol Eng Comput. 2023 Jan;61(1):155-177. doi: 10.1007/s11517-022-02697-8. Epub 2022 Nov 7.
6. A New Pulse Coupled Neural Network (PCNN) for Brain Medical Image Fusion Empowered by Shuffled Frog Leaping Algorithm.
Front Neurosci. 2019 Mar 20;13:210. doi: 10.3389/fnins.2019.00210. eCollection 2019.
7. Medical Image Fusion Based on Sparse Representation and PCNN in NSCT Domain.
Comput Math Methods Med. 2018 May 24;2018:2806047. doi: 10.1155/2018/2806047. eCollection 2018.
8. A Novel Fusion Framework Based on Adaptive PCNN in NSCT Domain for Whole-Body PET and CT Images.
Comput Math Methods Med. 2017;2017:8407019. doi: 10.1155/2017/8407019. Epub 2017 Apr 3.
9. Pulse Coupled Neural Network-Based Multimodal Medical Image Fusion via Guided Filtering and WSEML in NSCT Domain.
Entropy (Basel). 2021 May 11;23(5):591. doi: 10.3390/e23050591.
10. Multi-focus image fusion algorithm based on focus detection in spatial and NSCT domain.
PLoS One. 2018 Sep 20;13(9):e0204225. doi: 10.1371/journal.pone.0204225. eCollection 2018.

Cited By

1. MBRARN: multibranch residual attention reconstruction network for medical image fusion.
Med Biol Eng Comput. 2023 Nov;61(11):3067-3085. doi: 10.1007/s11517-023-02902-2. Epub 2023 Aug 25.
2. Sparse Representation-Based Multi-Focus Image Fusion Method via Local Energy in Shearlet Domain.
Sensors (Basel). 2023 Mar 7;23(6):2888. doi: 10.3390/s23062888.