

Unsupervised Monocular Depth Estimation from Light Field Image

Authors

Zhou Wenhui, Zhou Enci, Liu Gaomin, Lin Lili, Lumsdaine Andrew

Publication

IEEE Trans Image Process. 2019 Oct 3. doi: 10.1109/TIP.2019.2944343.

DOI: 10.1109/TIP.2019.2944343
PMID: 31603783
Abstract

Learning-based depth estimation from light fields has made significant progress in recent years. However, most existing approaches operate under a supervised framework, which requires vast quantities of ground-truth depth data for training. Furthermore, accurate depth maps for light fields are hardly available except in a few synthetic datasets. In this paper, we exploit the multi-orientation epipolar geometry of the light field and propose an unsupervised monocular depth estimation network. It predicts depth from the central view of the light field without any ground-truth information. Inspired by the inherent depth cues and geometric constraints of the light field, we introduce three novel unsupervised loss functions: a photometric loss, a defocus loss, and a symmetry loss. We evaluated our method on a public 4D light-field synthetic dataset. As the first unsupervised method published on the 4D Light Field Benchmark website, our method achieves satisfactory performance on most error metrics. Comparison experiments with two state-of-the-art unsupervised methods demonstrate the superiority of our method. We also demonstrate the effectiveness and generality of our method on real-world light-field images.
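The abstract does not spell out how its losses are computed, but the photometric loss it names is conventionally built by warping neighboring sub-aperture views toward the central view with the predicted disparity and penalizing the reconstruction error. The sketch below is a hypothetical, minimal NumPy illustration of that idea, not the paper's implementation: it assumes the standard light-field epipolar relation that a central-view pixel (x, y) projects to (x + du·d, y + dv·d) in the view at angular offset (du, dv), and the function names `warp_view` and `photometric_loss` are our own.

```python
import numpy as np

def warp_view(view, disparity, du, dv):
    """Warp one sub-aperture view toward the central view.

    Under the light-field epipolar constraint, the central-view pixel
    (x, y) with disparity d corresponds to (x + du * d, y + dv * d) in
    the view at angular offset (du, dv). Nearest-neighbor sampling with
    border clamping keeps this sketch simple.
    """
    h, w = view.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip(np.round(xs + du * disparity).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys + dv * disparity).astype(int), 0, h - 1)
    return view[src_y, src_x]

def photometric_loss(central, views, offsets, disparity):
    """Mean absolute reconstruction error between the central view and
    each neighboring view warped by the predicted disparity map."""
    errors = [np.abs(central - warp_view(v, disparity, du, dv)).mean()
              for v, (du, dv) in zip(views, offsets)]
    return float(np.mean(errors))
```

With an accurate disparity map, the warped neighbors align with the central view and the loss approaches zero (up to occlusions and border pixels), which is what makes it usable as a training signal without ground-truth depth; the defocus and symmetry losses the paper adds address cases where photometric consistency alone is ambiguous.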


Similar articles

1. Unsupervised Monocular Depth Estimation from Light Field Image.
   IEEE Trans Image Process. 2019 Oct 3. doi: 10.1109/TIP.2019.2944343.
2. Light-field-depth-estimation network based on epipolar geometry and image segmentation.
   J Opt Soc Am A Opt Image Sci Vis. 2020 Jul 1;37(7):1236-1243. doi: 10.1364/JOSAA.388555.
3. Occlusion-Aware Unsupervised Learning of Depth From 4-D Light Fields.
   IEEE Trans Image Process. 2022;31:2216-2228. doi: 10.1109/TIP.2022.3154288. Epub 2022 Mar 8.
4. Benchmark Data Set and Method for Depth Estimation from Light Field Images.
   IEEE Trans Image Process. 2018 Jul;27(7):3586-3598. doi: 10.1109/TIP.2018.2814217. Epub 2018 Mar 9.
5. Depth Estimation from Light Field Geometry Using Convolutional Neural Networks.
   Sensors (Basel). 2021 Sep 10;21(18):6061. doi: 10.3390/s21186061.
6. Unsupervised Monocular Depth Estimation via Recursive Stereo Distillation.
   IEEE Trans Image Process. 2021;30:4492-4504. doi: 10.1109/TIP.2021.3072215. Epub 2021 Apr 27.
7. Beyond Photometric Consistency: Geometry-Based Occlusion-Aware Unsupervised Light Field Disparity Estimation.
   IEEE Trans Neural Netw Learn Syst. 2024 Nov;35(11):15660-15674. doi: 10.1109/TNNLS.2023.3289056. Epub 2024 Oct 29.
8. Unsupervised Monocular Depth Estimation With Channel and Spatial Attention.
   IEEE Trans Neural Netw Learn Syst. 2024 Jun;35(6):7860-7870. doi: 10.1109/TNNLS.2022.3221416. Epub 2024 Jun 3.
9. Cascade light field disparity estimation network based on unsupervised deep learning.
   Opt Express. 2022 Jul 4;30(14):25130-25146. doi: 10.1364/OE.453020.
10. Unsupervised Learning of Monocular Depth and Ego-Motion with Optical Flow Features and Multiple Constraints.
    Sensors (Basel). 2022 Feb 11;22(4):1383. doi: 10.3390/s22041383.

Cited by

1. Efficiency-Accuracy Trade-Off in Light Field Estimation with Cost Volume Construction and Aggregation.
   Sensors (Basel). 2024 Jun 1;24(11):3583. doi: 10.3390/s24113583.
2. Joint estimation of depth and motion from a monocular endoscopy image sequence using a multi-loss rebalancing network.
   Biomed Opt Express. 2022 Apr 11;13(5):2707-2727. doi: 10.1364/BOE.457475. eCollection 2022 May 1.