
An α-Matte Boundary Defocus Model-Based Cascaded Network for Multi-focus Image Fusion.

Author Information

Ma Haoyu, Liao Qingmin, Zhang Juncheng, Liu Shaojun, Xue Jing-Hao

Publication Information

IEEE Trans Image Process. 2020 Aug 26;PP. doi: 10.1109/TIP.2020.3018261.

DOI: 10.1109/TIP.2020.3018261
PMID: 32845840
Abstract

Capturing an all-in-focus image with a single camera is difficult since the depth of field of the camera is usually limited. An alternative method to obtain the all-in-focus image is to fuse several images that are focused at different depths. However, existing multi-focus image fusion methods cannot obtain clear results for areas near the focused/defocused boundary (FDB). In this paper, a novel α-matte boundary defocus model is proposed to generate realistic training data with the defocus spread effect precisely modeled, especially for areas near the FDB. Based on this α-matte defocus model and the generated data, a cascaded boundary-aware convolutional network termed MMF-Net is proposed and trained, aiming to achieve clearer fusion results around the FDB. Specifically, the MMF-Net consists of two cascaded subnets for initial fusion and boundary fusion. These two subnets are designed to first obtain a guidance map of FDB and then refine the fusion near the FDB. Experiments demonstrate that with the help of the new α-matte boundary defocus model, the proposed MMF-Net outperforms the state-of-the-art methods both qualitatively and quantitatively.
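The core of the α-matte defocus model is that a pixel near the focused/defocused boundary (FDB) is not purely focused or purely defocused, but a soft mixture of the two source images governed by an alpha matte in [0, 1]. The following is a minimal NumPy sketch of that compositing idea only — it is illustrative, not the authors' MMF-Net implementation, and the function name and toy arrays are invented for this example.

```python
import numpy as np

def alpha_matte_fuse(img_a, img_b, alpha):
    """Composite two source images focused at different depths.

    alpha == 1 keeps img_a, alpha == 0 keeps img_b; intermediate
    values model the defocus spread effect near the FDB, where a
    pixel receives contributions from both focused and defocused
    content.
    """
    alpha = np.clip(alpha, 0.0, 1.0)
    return alpha * img_a + (1.0 - alpha) * img_b

# Toy 1-D "images": img_a is sharp on the left, img_b on the right.
img_a = np.array([1.0, 1.0, 1.0, 0.0, 0.0])
img_b = np.array([0.0, 0.0, 0.0, 1.0, 1.0])

# A hard 0/1 matte would create an abrupt seam at the boundary;
# a soft ramp across the boundary region mimics the smooth
# transition the model aims to reproduce in its training data.
soft_alpha = np.array([1.0, 1.0, 0.75, 0.25, 0.0])
fused = alpha_matte_fuse(img_a, img_b, soft_alpha)
```

In the paper, this blending underlies the generated training data, and the two cascaded subnets of MMF-Net then learn to first estimate a guidance map of the FDB and afterwards refine the fusion in that boundary region.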


Similar Articles

1. An α-Matte Boundary Defocus Model-Based Cascaded Network for Multi-focus Image Fusion.
IEEE Trans Image Process. 2020 Aug 26;PP. doi: 10.1109/TIP.2020.3018261.
2. Defocus Image Deblurring Network With Defocus Map Estimation as Auxiliary Task.
IEEE Trans Image Process. 2022;31:216-226. doi: 10.1109/TIP.2021.3127850. Epub 2021 Dec 7.
3. Joint Depth and Defocus Estimation From a Single Image Using Physical Consistency.
IEEE Trans Image Process. 2021;30:3419-3433. doi: 10.1109/TIP.2021.3061901. Epub 2021 Mar 9.
4. HFCF-Net: A hybrid-feature cross fusion network for COVID-19 lesion segmentation from CT volumetric images.
Med Phys. 2022 Jun;49(6):3797-3815. doi: 10.1002/mp.15600. Epub 2022 Mar 30.
5. Structural Similarity Loss for Learning to Fuse Multi-Focus Images.
Sensors (Basel). 2020 Nov 20;20(22):6647. doi: 10.3390/s20226647.
6. A Novel Multi-Focus Image Fusion Network with U-Shape Structure.
Sensors (Basel). 2020 Jul 13;20(14):3901. doi: 10.3390/s20143901.
7. All-in-focus image reconstruction under severe defocus.
Opt Lett. 2015 Apr 15;40(8):1671-4. doi: 10.1364/OL.40.001671.
8. Defocus Map Estimation From a Single Image Based on Two-Parameter Defocus Model.
IEEE Trans Image Process. 2016 Dec;25(12):5943-5956. doi: 10.1109/TIP.2016.2617460. Epub 2016 Oct 13.
9. Spatially varying defocus map estimation from a single image based on spatial aliasing sampling method.
Opt Express. 2024 Mar 11;32(6):8959-8973. doi: 10.1364/OE.519059.
10. DRPL: Deep Regression Pair Learning For Multi-Focus Image Fusion.
IEEE Trans Image Process. 2020 Mar 2. doi: 10.1109/TIP.2020.2976190.

Cited By

1. KCUNET: Multi-Focus Image Fusion via the Parallel Integration of KAN and Convolutional Layers.
Entropy (Basel). 2025 Jul 24;27(8):785. doi: 10.3390/e27080785.