


A Novel Multi-Focus Image Fusion Network with U-Shape Structure.

Authors

Pan Tao, Jiang Jiaqin, Yao Jian, Wang Bin, Tan Bin

Affiliations

School of Remote Sensing and Information Engineering, Wuhan University, Wuhan 430000, China.

School of Artificial Intelligence, The Open University of Guangdong, Guangzhou 510000, China.

Publication

Sensors (Basel). 2020 Jul 13;20(14):3901. doi: 10.3390/s20143901.

DOI: 10.3390/s20143901
PMID: 32668784
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC7412084/
Abstract

Multi-focus image fusion has become a very practical image processing task. It uses multiple images focused on different depth planes to create a single all-in-focus image. Although extensive studies have been conducted, the performance of existing methods is still limited by inaccurate detection of the focus regions used for fusion. Therefore, in this paper we propose a novel U-shape network that can generate an accurate decision map for multi-focus image fusion. The Siamese encoder of our U-shape network preserves, from each source image separately, low-level cues with rich spatial details as well as high-level semantic information. We also introduce ResBlocks to expand the receptive field, which enhances the network's ability to distinguish between focus and defocus regions. Moreover, in the bridge stage between the encoder and decoder, spatial pyramid pooling is adopted as a global perception fusion module to capture sufficient context information for learning the decision map. Finally, we use a hybrid loss that combines binary cross-entropy loss and structural similarity loss for supervision. Extensive experiments demonstrate that the proposed method achieves state-of-the-art performance.
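The core idea in the abstract is that the network predicts a per-pixel decision map, and the all-in-focus result is then composed from the two source images according to that map. A minimal NumPy sketch of this final fusion step, assuming a decision map with values in [0, 1] (the function and variable names are illustrative, not taken from the paper's code):

```python
import numpy as np

def fuse_with_decision_map(src_a, src_b, decision):
    """Blend two source images with a per-pixel decision map.

    decision[i, j] close to 1 means pixel (i, j) is in focus in src_a;
    close to 0 means it is in focus in src_b.
    """
    d = decision.astype(np.float64)
    return d * src_a + (1.0 - d) * src_b

# Toy example: left half in focus in A, right half in focus in B.
a = np.full((2, 4), 10.0)
b = np.full((2, 4), 20.0)
d = np.zeros((2, 4))
d[:, :2] = 1.0
fused = fuse_with_decision_map(a, b, d)
```

With a binary decision map this reduces to copying each pixel from whichever source is in focus; a soft map in (0, 1) blends the sources near focus boundaries.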

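The abstract states that supervision uses a hybrid of binary cross-entropy and structural similarity losses. A minimal sketch of such a hybrid loss in NumPy, assuming a single-window (global) SSIM and an equal weighting between the two terms; the paper's exact SSIM window size and loss weights are not given here, so `alpha` and the constants are illustrative:

```python
import numpy as np

def bce_loss(pred, target, eps=1e-7):
    # Binary cross-entropy over the predicted decision map.
    p = np.clip(pred, eps, 1.0 - eps)
    return -np.mean(target * np.log(p) + (1.0 - target) * np.log(1.0 - p))

def ssim_global(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    # Single-window SSIM over the whole map (values assumed in [0, 1]).
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def hybrid_loss(pred, target, alpha=0.5):
    # alpha weights BCE against the structural dissimilarity term (1 - SSIM).
    return alpha * bce_loss(pred, target) + (1 - alpha) * (1.0 - ssim_global(pred, target))

# A perfect prediction yields a much smaller loss than an inverted one.
t = np.array([[0.0, 1.0], [1.0, 0.0]])
good = hybrid_loss(t, t)
bad = hybrid_loss(1.0 - t, t)
```

The BCE term drives per-pixel classification of focus vs. defocus, while the SSIM term rewards structural agreement between the predicted map and the ground truth.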

Figures: sensors-20-03901-g001 through g014 and appendix figures g0A1–g0A3 (full-size images available via PMC7412084).

Similar Articles

1. Polyp segmentation network with hybrid channel-spatial attention and pyramid global context guided feature fusion.
Comput Med Imaging Graph. 2022 Jun;98:102072. doi: 10.1016/j.compmedimag.2022.102072. Epub 2022 May 11.
2. CPFNet: Context Pyramid Fusion Network for Medical Image Segmentation.
IEEE Trans Med Imaging. 2020 Oct;39(10):3008-3018. doi: 10.1109/TMI.2020.2983721. Epub 2020 Mar 27.
3. An efficient U-shaped network combined with edge attention module and context pyramid fusion for skin lesion segmentation.
Med Biol Eng Comput. 2022 Jul;60(7):1987-2000. doi: 10.1007/s11517-022-02581-5. Epub 2022 May 10.
4. Global-Feature Encoding U-Net (GEU-Net) for Multi-Focus Image Fusion.
IEEE Trans Image Process. 2021;30:163-175. doi: 10.1109/TIP.2020.3033158. Epub 2020 Nov 18.
5. Image Deblurring Using Multi-Stream Bottom-Top-Bottom Attention Network and Global Information-Based Fusion and Reconstruction Network.
Sensors (Basel). 2020 Jul 3;20(13):3724. doi: 10.3390/s20133724.
6. Semantic-Aware Fusion Network Based on Super-Resolution.
Sensors (Basel). 2024 Jun 5;24(11):3665. doi: 10.3390/s24113665.
7. Multiscale Cross-Connected Dehazing Network With Scene Depth Fusion.
IEEE Trans Neural Netw Learn Syst. 2024 Feb;35(2):1598-1612. doi: 10.1109/TNNLS.2022.3184164. Epub 2024 Feb 5.
8. LapUNet: a novel approach to monocular depth estimation using dynamic laplacian residual U-shape networks.
Sci Rep. 2024 Oct 9;14(1):23544. doi: 10.1038/s41598-024-74445-x.
9. MFAN: Multi-Level Features Attention Network for Fake Certificate Image Detection.
Entropy (Basel). 2022 Jan 13;24(1):118. doi: 10.3390/e24010118.
