
EPI Light Field Depth Estimation Based on a Directional Relationship Model and Multiviewpoint Attention Mechanism.

Affiliation

School of Information Science and Engineering, Wuhan University of Science and Technology, Wuhan 430081, China.

Publication Information

Sensors (Basel). 2022 Aug 21;22(16):6291. doi: 10.3390/s22166291.

DOI: 10.3390/s22166291
PMID: 36016052
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9416155/
Abstract

Light field (LF) image depth estimation is a critical technique for LF-related applications such as 3D reconstruction, target detection, and tracking. The refocusing property of LF images provides rich information for depth estimation; however, it remains challenging in occlusion regions, edge regions, and under noise interference. The epipolar plane image (EPI) of an LF is well suited to depth estimation because of its multidirectionality and pixel consistency, which allow LF depth estimation to be converted into calculating the EPI slope. This paper proposes an EPI LF depth estimation algorithm based on a directional relationship model and an attention mechanism. Unlike subaperture LF depth estimation methods, the proposed method takes EPIs as input images. Specifically, a directional relationship model is used to extract direction features of the horizontal and vertical EPIs, respectively. Then, a multiviewpoint attention mechanism combining channel attention and spatial attention assigns more weight to the EPI slope information. Subsequently, multiple residual modules eliminate redundant features that interfere with the EPI slope information, using a small-stride convolution operation to avoid losing key EPI slope information. The experimental results show that the proposed algorithm outperforms the compared algorithms in terms of accuracy.
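The core idea the abstract relies on, that depth estimation reduces to measuring the slope of lines in an EPI, can be illustrated with a minimal synthetic sketch. This is an illustrative toy (NumPy, peak tracking plus a line fit), not the paper's network: a scene point at disparity d shifts by d pixels per view, so it traces a line of slope d in the EPI.

```python
import numpy as np

def make_epi(n_views=9, width=64, disparity=1.5, center=20.0):
    """Synthesize a horizontal EPI: one scene point traces a line whose
    slope (pixel shift per view) equals its disparity."""
    epi = np.zeros((n_views, width))
    x = np.arange(width)
    for v in range(n_views):
        mu = center + disparity * v                     # point position in view v
        epi[v] = np.exp(-0.5 * (x - mu) ** 2)           # Gaussian intensity blob
    return epi

def estimate_slope(epi):
    """Recover the EPI line slope (= disparity): locate the intensity peak
    in each view with a sub-pixel centroid, then fit a line across views."""
    n_views, width = epi.shape
    x = np.arange(width)
    peaks = (epi * x).sum(axis=1) / epi.sum(axis=1)     # per-view centroid
    slope, _intercept = np.polyfit(np.arange(n_views), peaks, 1)
    return slope

epi = make_epi(disparity=1.5)
print(round(estimate_slope(epi), 2))  # recovered disparity ≈ 1.5
```

Real EPIs contain many overlapping lines, occlusions, and noise, which is why the paper replaces this kind of explicit peak tracking with learned direction features and attention.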


Figures:
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6e68/9416155/b91f05751018/sensors-22-06291-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6e68/9416155/aaa3558b3426/sensors-22-06291-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6e68/9416155/1f74d4074d0a/sensors-22-06291-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6e68/9416155/7619fa9d83f7/sensors-22-06291-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6e68/9416155/b3d95a0013d0/sensors-22-06291-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6e68/9416155/308e3ebadecf/sensors-22-06291-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6e68/9416155/2388574fc037/sensors-22-06291-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6e68/9416155/eff941dafae1/sensors-22-06291-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6e68/9416155/296f5b3dd671/sensors-22-06291-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6e68/9416155/6bbcd686a590/sensors-22-06291-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6e68/9416155/949fd647e740/sensors-22-06291-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6e68/9416155/64fbfa5c5d26/sensors-22-06291-g012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6e68/9416155/e8aa8ebf6c56/sensors-22-06291-g013.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6e68/9416155/c79bc6fcb75d/sensors-22-06291-g014.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6e68/9416155/a10dc90bf18e/sensors-22-06291-g015.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6e68/9416155/018382e1172f/sensors-22-06291-g016.jpg

Similar Articles

1
EPI Light Field Depth Estimation Based on a Directional Relationship Model and Multiviewpoint Attention Mechanism.
Sensors (Basel). 2022 Aug 21;22(16):6291. doi: 10.3390/s22166291.
2
Depth Estimation from Light Field Geometry Using Convolutional Neural Networks.
Sensors (Basel). 2021 Sep 10;21(18):6061. doi: 10.3390/s21186061.
3
Fast depth estimation with cost minimization for structured light field.
Opt Express. 2021 Sep 13;29(19):30077-30093. doi: 10.1364/OE.434548.
4
Light Field Depth Estimation via Stitched Epipolar Plane Images.
IEEE Trans Vis Comput Graph. 2024 Oct;30(10):6866-6879. doi: 10.1109/TVCG.2023.3344132. Epub 2024 Sep 5.
5
EANet: Depth Estimation Based on EPI of Light Field.
Biomed Res Int. 2021 Dec 28;2021:8293151. doi: 10.1155/2021/8293151. eCollection 2021.
6
Beyond Photometric Consistency: Geometry-Based Occlusion-Aware Unsupervised Light Field Disparity Estimation.
IEEE Trans Neural Netw Learn Syst. 2024 Nov;35(11):15660-15674. doi: 10.1109/TNNLS.2023.3289056. Epub 2024 Oct 29.
7
Benchmark Data Set and Method for Depth Estimation from Light Field Images.
IEEE Trans Image Process. 2018 Jul;27(7):3586-3598. doi: 10.1109/TIP.2018.2814217. Epub 2018 Mar 9.
8
Large DOF microscopic fringe projection profilometry with a coaxial light-field structure.
Opt Express. 2022 Feb 28;30(5):8015-8026. doi: 10.1364/OE.452361.
9
RCA-LF: Dense Light Field Reconstruction Using Residual Channel Attention Networks.
Sensors (Basel). 2022 Jul 14;22(14):5254. doi: 10.3390/s22145254.
10
Automatic 3D reconstruction of SEM images based on Nano-robotic manipulation and epipolar plane images.
Ultramicroscopy. 2019 May;200:149-159. doi: 10.1016/j.ultramic.2019.02.014. Epub 2019 Feb 19.

Cited By

1
Light Field Image Super-Resolution Using Deep Residual Networks on Lenslet Images.
Sensors (Basel). 2023 Feb 10;23(4):2018. doi: 10.3390/s23042018.

References

1
Disentangling Light Fields for Super-Resolution and Disparity Estimation.
IEEE Trans Pattern Anal Mach Intell. 2023 Jan;45(1):425-443. doi: 10.1109/TPAMI.2022.3152488. Epub 2022 Dec 5.
2
A Lightweight Depth Estimation Network for Wide-Baseline Light Fields.
IEEE Trans Image Process. 2021;30:2288-2300. doi: 10.1109/TIP.2021.3051761. Epub 2021 Jan 26.
3
A Framework for Learning Depth From a Flexible Subset of Dense and Sparse Light Field Views.
IEEE Trans Image Process. 2019 Dec;28(12):5867-5880. doi: 10.1109/TIP.2019.2923323. Epub 2019 Jun 21.
4
Robust Light Field Depth Estimation Using Occlusion-Noise Aware Data Costs.
IEEE Trans Pattern Anal Mach Intell. 2018 Oct;40(10):2484-2497. doi: 10.1109/TPAMI.2017.2746858. Epub 2017 Aug 31.
5
Variational light field analysis for disparity estimation and super-resolution.
IEEE Trans Pattern Anal Mach Intell. 2014 Mar;36(3):606-19. doi: 10.1109/TPAMI.2013.147.