
The Multi-Focus-Image-Fusion Method Based on Convolutional Neural Network and Sparse Representation

Author Information

Wei Bingzhe, Feng Xiangchu, Wang Kun, Gao Bian

Affiliation

School of Mathematics and Statistics, Xidian University, Xi'an 710071, China.

Publication Information

Entropy (Basel). 2021 Jun 28;23(7):827. doi: 10.3390/e23070827.

DOI: 10.3390/e23070827
PMID: 34203573
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC8306545/
Abstract

Multi-focus image fusion is a crucial branch of image processing, and many methods have been developed from different perspectives to solve this problem. Among them, sparse representation (SR)-based and convolutional neural network (CNN)-based fusion methods have been widely used. The SR-based model fuses the source image patches and is essentially a local method with a nonlinear fusion rule. The CNN-based model, on the other hand, maps the source images directly to the fused image according to a decision map learned by the CNN; this fusion is a global one with a linear fusion rule. Combining the advantages of these two approaches, a novel fusion method that applies a CNN to assist SR is proposed in order to obtain a fused image with more precise and abundant information. In the proposed method, source image patches are fused based on SR and a new weight obtained by the CNN. Experimental results demonstrate that the proposed method not only clearly outperforms the SR and CNN methods in terms of visual perception and objective evaluation metrics, but also outperforms other state-of-the-art methods, while the computational complexity is greatly reduced.
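
The patch-level fusion rule described in the abstract can be illustrated with a short sketch. The Python code below is not the authors' implementation: the fixed DCT dictionary (in place of a learned dictionary), the tiny OMP solver, and the cnn_focus_weight stand-in (local variance in place of a trained CNN decision map) are all illustrative assumptions. It only shows the general idea of a per-patch CNN-style weight modulating an SR-based fusion of sparse codes.

```python
# Minimal sketch (not the authors' code) of CNN-assisted sparse-representation
# multi-focus fusion: each pair of source patches is fused from its sparse
# codes, with the mixing weight supplied by a focus/decision estimate that the
# paper obtains from a trained CNN. Dictionary, solver, and weight function
# here are simplified stand-ins.
import numpy as np

PATCH = 8          # patch side length
STRIDE = 4         # overlapping patches
K = 6              # sparsity level for OMP

def dct_dictionary(n=PATCH, atoms=16):
    """Overcomplete 2-D DCT dictionary (a common stand-in for a learned one)."""
    base = np.cos(np.outer(np.arange(n), np.arange(atoms)) * np.pi / atoms)
    base[:, 1:] -= base[:, 1:].mean(axis=0)
    D = np.kron(base, base)                       # (n*n, atoms*atoms)
    return D / np.linalg.norm(D, axis=0)

def omp(D, y, k=K):
    """Tiny orthogonal matching pursuit: k-sparse code of y over dictionary D."""
    residual, idx = y.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        residual = y - D[:, idx] @ coef
    code = np.zeros(D.shape[1])
    code[idx] = coef
    return code

def cnn_focus_weight(pa, pb):
    """Placeholder for the CNN decision map: returns w in [0, 1], the estimated
    degree to which patch A is the in-focus one. Approximated here by relative
    local activity (variance); the paper derives this weight from a trained CNN."""
    va, vb = pa.var(), pb.var()
    return va / (va + vb + 1e-12)

def fuse(img_a, img_b):
    """Fuse two registered grayscale source images (float arrays in [0, 1])."""
    D = dct_dictionary()
    fused = np.zeros_like(img_a)
    hits = np.zeros_like(img_a)
    H, W = img_a.shape
    for i in range(0, H - PATCH + 1, STRIDE):
        for j in range(0, W - PATCH + 1, STRIDE):
            pa = img_a[i:i+PATCH, j:j+PATCH]
            pb = img_b[i:i+PATCH, j:j+PATCH]
            w = cnn_focus_weight(pa, pb)                 # CNN-style weight
            ca = omp(D, pa.ravel() - pa.mean())          # sparse codes (SR)
            cb = omp(D, pb.ravel() - pb.mean())
            cf = w * ca + (1.0 - w) * cb                 # weighted fusion of codes
            mean_f = w * pa.mean() + (1.0 - w) * pb.mean()
            pf = (D @ cf + mean_f).reshape(PATCH, PATCH)
            fused[i:i+PATCH, j:j+PATCH] += pf            # overlap-add
            hits[i:i+PATCH, j:j+PATCH] += 1.0
    return fused / np.maximum(hits, 1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a, b = rng.random((64, 64)), rng.random((64, 64))
    print(fuse(a, b).shape)   # (64, 64)
```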


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/38a8/8306545/108093302523/entropy-23-00827-g001.jpg
Figure 2: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/38a8/8306545/01f6c023ac60/entropy-23-00827-g002.jpg

Similar Articles

1. The Multi-Focus-Image-Fusion Method Based on Convolutional Neural Network and Sparse Representation. Entropy (Basel). 2021 Jun 28;23(7):827. doi: 10.3390/e23070827.
2. Infrared and Visible Image Fusion through Details Preservation. Sensors (Basel). 2019 Oct 20;19(20):4556. doi: 10.3390/s19204556.
3. Image decomposition fusion method based on sparse representation and neural network. Appl Opt. 2017 Oct 1;56(28):7969-7977. doi: 10.1364/AO.56.007969.
4. Medical image fusion using segment graph filter and sparse representation. Comput Biol Med. 2021 Apr;131:104239. doi: 10.1016/j.compbiomed.2021.104239. Epub 2021 Jan 29.
5. Multi-Scale Mixed Attention Network for CT and MRI Image Fusion. Entropy (Basel). 2022 Jun 19;24(6):843. doi: 10.3390/e24060843.
6. Multimodal medical image fusion using convolutional neural network and extreme learning machine. Front Neurorobot. 2022 Nov 16;16:1050981. doi: 10.3389/fnbot.2022.1050981. eCollection 2022.
7. Sparse Representation-Based Multi-Focus Image Fusion Method via Local Energy in Shearlet Domain. Sensors (Basel). 2023 Mar 7;23(6):2888. doi: 10.3390/s23062888.
8. Multimodal Medical Image Fusion using Rolling Guidance Filter with CNN and Nuclear Norm Minimization. Curr Med Imaging. 2020;16(10):1243-1258. doi: 10.2174/1573405616999200817103920.
9. Structural Similarity Loss for Learning to Fuse Multi-Focus Images. Sensors (Basel). 2020 Nov 20;20(22):6647. doi: 10.3390/s20226647.
10. A New Deep Learning Based Multi-Spectral Image Fusion Method. Entropy (Basel). 2019 Jun 5;21(6):570. doi: 10.3390/e21060570.

Cited By

1. Gaussian of Differences: A Simple and Efficient General Image Fusion Method. Entropy (Basel). 2023 Aug 15;25(8):1215. doi: 10.3390/e25081215.
2. Conditional Random Field-Guided Multi-Focus Image Fusion. J Imaging. 2022 Sep 5;8(9):240. doi: 10.3390/jimaging8090240.

References

1. An Image Fusion Method Based on Sparse Representation and Sum Modified-Laplacian in NSCT Domain. Entropy (Basel). 2018 Jul 11;20(7):522. doi: 10.3390/e20070522.
2. Regional multifocus image fusion using sparse representation. Opt Express. 2013 Feb 25;21(4):5182-97. doi: 10.1364/OE.21.005182.
3. Group-sparse representation with dictionary learning for medical image denoising and fusion. IEEE Trans Biomed Eng. 2012 Dec;59(12):3450-9. doi: 10.1109/TBME.2012.2217493. Epub 2012 Sep 6.
4. Objective Assessment of Multiresolution Image Fusion Algorithms for Context Enhancement in Night Vision: A Comparative Study. IEEE Trans Pattern Anal Mach Intell. 2012 Jan;34(1):94-109. doi: 10.1109/TPAMI.2011.109. Epub 2011 May 19.