


Multi-Focus Image Fusion Method Based on Multi-Scale Decomposition of Information Complementary.

Authors

Wan Hui, Tang Xianlun, Zhu Zhiqin, Li Weisheng

Affiliations

College of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China.

College of Computer and Information Science, Chongqing Normal University, Chongqing 401331, China.

Publication

Entropy (Basel). 2021 Oct 19;23(10):1362. doi: 10.3390/e23101362.

DOI: 10.3390/e23101362
PMID: 34682086
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC8534655/
Abstract

Multi-focus image fusion is an important method used to combine the focused parts from source multi-focus images into a single full-focus image. Currently, the key to the multi-focus image fusion problem is how to accurately detect the focus regions, especially when the source images captured by cameras exhibit anisotropic blur and misregistration. This paper proposes a new multi-focus image fusion method based on the multi-scale decomposition of complementary information. Firstly, this method uses two groups of large-scale and small-scale decomposition schemes that are structurally complementary, to perform two-scale double-layer singular value decomposition of the image separately and obtain low-frequency and high-frequency components. Then, the low-frequency components are fused by a rule that integrates image local energy with edge energy. The high-frequency components are fused by the parameter-adaptive pulse-coupled neural network model (PA-PCNN), and according to the feature information contained in each decomposition layer of the high-frequency components, different detailed features are selected as the external stimulus input of the PA-PCNN. Finally, according to the structurally complementary two-scale decomposition of the source image and the fusion of high- and low-frequency components, two initial decision maps with complementary information are obtained. By refining the initial decision maps, the final fusion decision map is obtained to complete the image fusion. In addition, the proposed method is compared with 10 state-of-the-art approaches to verify its effectiveness. The experimental results show that the proposed method can more accurately distinguish the focused and non-focused areas in both the pre-registered and unregistered cases, and its subjective and objective evaluation indicators are slightly better than those of the existing methods.
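The pipeline described above (SVD-based two-scale decomposition, then an energy-based rule for the low-frequency components) can be sketched as follows. This is an illustrative NumPy sketch, not the paper's exact algorithm: the truncation rank `k`, the 3x3 window, and the simple max-local-energy selection rule are simplifying assumptions, and the PA-PCNN high-frequency fusion and decision-map refinement steps are omitted.

```python
import numpy as np

def svd_two_scale(img, k=1):
    """Split an image into a low-frequency approximation (top-k singular
    components) and a high-frequency residual via truncated SVD."""
    U, s, Vt = np.linalg.svd(img, full_matrices=False)
    low = (U[:, :k] * s[:k]) @ Vt[:k, :]  # rank-k approximation
    high = img - low                      # detail residual
    return low, high

def local_energy(comp, win=3):
    """Sum of squared values over a sliding window, a common
    activity measure for choosing between source components."""
    pad = win // 2
    p = np.pad(comp ** 2, pad, mode="reflect")
    out = np.zeros(comp.shape, dtype=float)
    for dy in range(win):
        for dx in range(win):
            out += p[dy:dy + comp.shape[0], dx:dx + comp.shape[1]]
    return out

# Toy example: decompose two "source" images and fuse the
# low-frequency components by picking the higher local energy.
a = np.random.default_rng(0).random((8, 8))
b = np.random.default_rng(1).random((8, 8))
la, ha = svd_two_scale(a)
lb, hb = svd_two_scale(b)
mask = local_energy(la) >= local_energy(lb)
fused_low = np.where(mask, la, lb)
```

In the paper, this per-pixel selection is additionally combined with edge energy for the low-frequency rule, and the high-frequency layers feed a PA-PCNN whose firing maps drive the fusion decision.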


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c792/8534655/44410505ecfa/entropy-23-01362-g019.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c792/8534655/87a5caa4a8ac/entropy-23-01362-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c792/8534655/56d19da2edea/entropy-23-01362-g002a.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c792/8534655/bdd7aea2590e/entropy-23-01362-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c792/8534655/5f9bbb6646a4/entropy-23-01362-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c792/8534655/48be629b11e8/entropy-23-01362-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c792/8534655/549fb348fe3b/entropy-23-01362-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c792/8534655/b1da9efcf0b6/entropy-23-01362-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c792/8534655/cf36b5017ad0/entropy-23-01362-g008a.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c792/8534655/fb8a9cafedbd/entropy-23-01362-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c792/8534655/74ba63856453/entropy-23-01362-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c792/8534655/45577321697c/entropy-23-01362-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c792/8534655/9e79d4ed4423/entropy-23-01362-g012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c792/8534655/df408abf2596/entropy-23-01362-g013.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c792/8534655/c907ea1b88f5/entropy-23-01362-g014a.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c792/8534655/4cfeee46e9c2/entropy-23-01362-g015.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c792/8534655/9cffe08a2f7f/entropy-23-01362-g016.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c792/8534655/f2764b3e79a3/entropy-23-01362-g017.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c792/8534655/884dd13ac8e4/entropy-23-01362-g018.jpg

Similar Articles

1. Multi-Focus Image Fusion Method Based on Multi-Scale Decomposition of Information Complementary. Entropy (Basel). 2021 Oct 19;23(10):1362. doi: 10.3390/e23101362.
2. Multi-Focus Color Image Fusion Based on Quaternion Multi-Scale Singular Value Decomposition. Front Neurorobot. 2021 Jun 23;15:695960. doi: 10.3389/fnbot.2021.695960. eCollection 2021.
3. A novel medical image fusion method based on multi-scale shearing rolling weighted guided image filter. Math Biosci Eng. 2023 Jul 21;20(8):15374-15406. doi: 10.3934/mbe.2023687.
4. Multi-Focus Image Fusion Based on Hessian Matrix Decomposition and Salient Difference Focus Detection. Entropy (Basel). 2022 Oct 25;24(11):1527. doi: 10.3390/e24111527.
5. Multi-modal Medical Image Fusion Algorithm Based on Spatial Frequency Motivated PA-PCNN in the NSST Domain. Curr Med Imaging. 2021;17(5):634-643. doi: 10.2174/1573405616666201118123220.
6. Entropy-Based Image Fusion with Joint Sparse Representation and Rolling Guidance Filter. Entropy (Basel). 2020 Jan 18;22(1):118. doi: 10.3390/e22010118.
7. Multi-focus image fusion method using energy of Laplacian and a deep neural network. Appl Opt. 2020 Feb 20;59(6):1684-1694. doi: 10.1364/AO.381082.
8. Multi-Band Texture Image Fusion Based on the Embedded Multi-Scale Decomposition and Possibility Theory. Guang Pu Xue Yu Guang Pu Fen Xi. 2016 Jul;36(7):2337-43.
9. Simulation analysis of visual perception model based on pulse coupled neural network. Sci Rep. 2023 Jul 28;13(1):12281. doi: 10.1038/s41598-023-39376-z.
10. Multi-Modal Image Fusion Based on Matrix Product State of Tensor. Front Neurorobot. 2021 Nov 15;15:762252. doi: 10.3389/fnbot.2021.762252. eCollection 2021.

References Cited in This Article

1. Multi-Focus Color Image Fusion Based on Quaternion Multi-Scale Singular Value Decomposition. Front Neurorobot. 2021 Jun 23;15:695960. doi: 10.3389/fnbot.2021.695960. eCollection 2021.
2. An Image Fusion Method Based on Sparse Representation and Sum Modified-Laplacian in NSCT Domain. Entropy (Basel). 2018 Jul 11;20(7):522. doi: 10.3390/e20070522.
3. Global-Feature Encoding U-Net (GEU-Net) for Multi-Focus Image Fusion. IEEE Trans Image Process. 2021;30:163-175. doi: 10.1109/TIP.2020.3033158. Epub 2020 Nov 18.
4. A multi-focus image fusion method via region mosaicking on Laplacian pyramids. PLoS One. 2018 May 17;13(5):e0191085. doi: 10.1371/journal.pone.0191085. eCollection 2018.
5. Image fusion with guided filtering. IEEE Trans Image Process. 2013 Jul;22(7):2864-75. doi: 10.1109/TIP.2013.2244222. Epub 2013 Jan 30.
6. A new automatic parameter setting method of a simplified PCNN for image segmentation. IEEE Trans Neural Netw. 2011 Jun;22(6):880-92. doi: 10.1109/TNN.2011.2128880. Epub 2011 May 5.
7. Gradient-based multiresolution image fusion. IEEE Trans Image Process. 2004 Feb;13(2):228-37. doi: 10.1109/tip.2004.823821.