Suppr 超能文献



A Framework for Learning Depth From a Flexible Subset of Dense and Sparse Light Field Views.

Author Information

Shi Jinglei, Jiang Xiaoran, Guillemot Christine

Publication Information

IEEE Trans Image Process. 2019 Dec;28(12):5867-5880. doi: 10.1109/TIP.2019.2923323. Epub 2019 Jun 21.

DOI: 10.1109/TIP.2019.2923323
PMID: 31247553
Abstract

In this paper, we propose a learning-based depth estimation framework suitable for both densely and sparsely sampled light fields. The proposed framework consists of three processing steps: initial depth estimation, fusion with occlusion handling, and refinement. The estimation can be performed from a flexible subset of input views. The fusion of initial disparity estimates, relying on two warping error measures, allows us to obtain an accurate estimation in occluded regions and along contours. In contrast with methods relying on the computation of cost volumes, the proposed approach does not need any prior information on the disparity range. Experimental results show that the proposed method outperforms state-of-the-art light field depth estimation methods, including prior methods based on deep neural architectures.
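The fusion step described in the abstract can be illustrated with a minimal sketch: per-view initial disparity maps are combined with weights derived from warping errors, so that views whose warped image disagrees with the reference (e.g. in occluded regions) contribute less. The softmin weighting, the `beta` parameter, and the single photometric error measure below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def warping_error(warped_view, reference):
    # Per-pixel photometric error between a view warped to the reference
    # position and the reference view itself. This is one simple error
    # measure; the paper relies on two warping error measures.
    return np.abs(warped_view - reference)

def fuse_disparity_estimates(estimates, warp_errors, beta=1.0):
    # Softmin-weighted fusion of per-view initial disparity maps:
    # low-error views dominate, which helps near occlusions and contours.
    # estimates, warp_errors: arrays of shape (n_views, H, W).
    weights = np.exp(-beta * warp_errors)
    weights /= weights.sum(axis=0, keepdims=True)
    return (weights * estimates).sum(axis=0)
```

In the full pipeline, the fused map would then be passed to a refinement stage; note that nothing here requires a predefined disparity range, consistent with the abstract's contrast with cost-volume methods.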


Similar Articles

1. A Framework for Learning Depth From a Flexible Subset of Dense and Sparse Light Field Views. IEEE Trans Image Process. 2019 Dec;28(12):5867-5880. doi: 10.1109/TIP.2019.2923323. Epub 2019 Jun 21.
2. Robust and dense depth estimation for light field images. IEEE Trans Image Process. 2017 Apr;26(4):1873-1886. doi: 10.1109/TIP.2017.2666041. Epub 2017 Feb 8.
3. Continuous Depth Map Reconstruction From Light Fields. IEEE Trans Image Process. 2015 Nov;24(11):3257-65. doi: 10.1109/TIP.2015.2440760. Epub 2015 Jun 3.
4. Depth Estimation for Light-Field Images Using Stereo Matching and Convolutional Neural Networks. Sensors (Basel). 2020 Oct 30;20(21):6188. doi: 10.3390/s20216188.
5. FPattNet: A Multi-Scale Feature Fusion Network with Occlusion Awareness for Depth Estimation of Light Field Images. Sensors (Basel). 2023 Aug 28;23(17):7480. doi: 10.3390/s23177480.
6. Accurate Light Field Depth Estimation With Superpixel Regularization Over Partially Occluded Regions. IEEE Trans Image Process. 2018 Oct;27(10):4889-4900. doi: 10.1109/TIP.2018.2839524.
7. Deep Coarse-to-Fine Dense Light Field Reconstruction With Flexible Sampling and Geometry-Aware Fusion. IEEE Trans Pattern Anal Mach Intell. 2022 Apr;44(4):1819-1836. doi: 10.1109/TPAMI.2020.3026039. Epub 2022 Mar 4.
8. Fast and Accurate Depth Estimation from Sparse Light Fields. IEEE Trans Image Process. 2019 Dec 17. doi: 10.1109/TIP.2019.2959233.
9. A Novel Occlusion-Aware Vote Cost for Light Field Depth Estimation. IEEE Trans Pattern Anal Mach Intell. 2022 Nov;44(11):8022-8035. doi: 10.1109/TPAMI.2021.3105523. Epub 2022 Oct 4.
10. Sheared Epipolar Focus Spectrum for Dense Light Field Reconstruction. IEEE Trans Pattern Anal Mach Intell. 2024 May;46(5):3108-3122. doi: 10.1109/TPAMI.2023.3337516. Epub 2024 Apr 3.

Cited By

1. Research on depth measurement calibration of light field camera based on Gaussian fitting. Sci Rep. 2024 Apr 16;14(1):8774. doi: 10.1038/s41598-024-59479-5.
2. EPI Light Field Depth Estimation Based on a Directional Relationship Model and Multiviewpoint Attention Mechanism. Sensors (Basel). 2022 Aug 21;22(16):6291. doi: 10.3390/s22166291.
3. Depth Estimation for Integral Imaging Microscopy Using a 3D-2D CNN with a Weighted Median Filter. Sensors (Basel). 2022 Jul 15;22(14):5288. doi: 10.3390/s22145288.
4. RCA-LF: Dense Light Field Reconstruction Using Residual Channel Attention Networks. Sensors (Basel). 2022 Jul 14;22(14):5254. doi: 10.3390/s22145254.