

Line-Constrained Camera Location Estimation in Multi-Image Stereomatching

Authors

Donné Simon, Goossens Bart, Philips Wilfried

Affiliation

IPI-UGent-imec, B-9000 Ghent, Belgium.

Publication

Sensors (Basel). 2017 Aug 23;17(9):1939. doi: 10.3390/s17091939.

DOI: 10.3390/s17091939
PMID: 28832501
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC5620956/
Abstract

Stereomatching is an effective way of acquiring dense depth information from a scene when active measurements are not possible. So-called lightfield methods take a snapshot from many camera locations along a defined trajectory (usually uniformly linear or on a regular grid; we will assume a linear trajectory) and use this information to compute accurate depth estimates. However, they require the locations for each of the snapshots to be known: the disparity of an object between images is related to both the distance of the camera to the object and the distance between the camera positions for the two images. Existing solutions use sparse feature matching for camera location estimation. In this paper, we propose a novel method that uses dense correspondences to do the same, leveraging an existing depth estimation framework to also yield the camera locations along the line. We illustrate the effectiveness of the proposed technique for camera location estimation both visually, for the rectification of epipolar plane images, and quantitatively, through its effect on the resulting depth estimation. Our proposed approach is a valid alternative to sparse techniques, while still executing in a reasonable time on a graphics card due to its highly parallelizable nature.
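The relation the abstract relies on — disparity is proportional to the camera baseline and inversely proportional to depth (d = f·B/Z for focal length f in pixels, baseline B, depth Z) — can be sketched numerically. This is a minimal illustration with hypothetical values, not the paper's implementation; it only shows why disparities observed from a reference view determine the camera positions along the line up to a global scale:

```python
def disparity(f, baseline, depth):
    """Disparity (pixels) of a point at `depth` between two cameras
    separated by `baseline` along a linear trajectory: d = f * B / Z."""
    return f * baseline / depth

# Hypothetical setup: focal length 800 px, scene point at 4 m depth,
# cameras at known offsets along the line (ground truth, in metres).
f, Z = 800.0, 4.0
offsets = [0.0, 0.1, 0.2, 0.4]
disps = [disparity(f, b, Z) for b in offsets]   # [0.0, 20.0, 40.0, 80.0]

# Inverting the relation: disparities are proportional to the offsets,
# so normalising by any one of them recovers the camera locations up to
# an unknown global scale (here the factor f / Z = 200).
scale = disps[-1] / offsets[-1]
recovered = [d / scale for d in disps]           # matches `offsets`
```

In practice the scale is unknown, which is why the paper resolves the camera locations jointly with depth inside a dense estimation framework rather than from a single point.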


Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7995/5620956/97fb9f994e49/sensors-17-01939-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7995/5620956/884bd93a6330/sensors-17-01939-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7995/5620956/87daba02750d/sensors-17-01939-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7995/5620956/8b2a4056666b/sensors-17-01939-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7995/5620956/bdfb95a07f75/sensors-17-01939-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7995/5620956/0eff60e32e0b/sensors-17-01939-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7995/5620956/6fbc8cdfc5be/sensors-17-01939-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7995/5620956/fb5a98e27bf4/sensors-17-01939-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7995/5620956/b7d45102fcb5/sensors-17-01939-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7995/5620956/653160cb1d5b/sensors-17-01939-g010.jpg

Similar Articles

1. Line-Constrained Camera Location Estimation in Multi-Image Stereomatching.
   Sensors (Basel). 2017 Aug 23;17(9):1939. doi: 10.3390/s17091939.
2. Triple-Camera Rectification for Depth Estimation Sensor.
   Sensors (Basel). 2024 Sep 20;24(18):6100. doi: 10.3390/s24186100.
3. One-dimensional dense disparity estimation for three-dimensional reconstruction.
   IEEE Trans Image Process. 2003;12(9):1107-19. doi: 10.1109/TIP.2003.815257.
4. Unsupervised deep learning for depth estimation with offset pixels.
   Opt Express. 2020 Mar 16;28(6):8619-8639. doi: 10.1364/OE.385328.
5. 6-DOF Pose Estimation of a Robotic Navigation Aid by Tracking Visual and Geometric Features.
   IEEE Trans Autom Sci Eng. 2015 Oct;12(4):1169-1180. doi: 10.1109/TASE.2015.2469726. Epub 2015 Oct 5.
6. GFI-Net: Global Feature Interaction Network for Monocular Depth Estimation.
   Entropy (Basel). 2023 Feb 26;25(3):421. doi: 10.3390/e25030421.
7. Unsupervised Monocular Visual Odometry for Fast-Moving Scenes Based on Optical Flow Network with Feature Point Matching Constraint.
   Sensors (Basel). 2022 Dec 9;22(24):9647. doi: 10.3390/s22249647.
8. A Maximum Likelihood Approach for Depth Field Estimation Based on Epipolar Plane Images.
   IEEE Trans Image Process. 2019 Feb;28(2):827-840. doi: 10.1109/TIP.2018.2871753.
9. Sparse-to-Local-Dense Matching for Geometry-Guided Correspondence Estimation.
   IEEE Trans Image Process. 2023;32:3536-3551. doi: 10.1109/TIP.2023.3287500. Epub 2023 Jun 29.
10. Enhanced RGB-D Mapping Method for Detailed 3D Indoor and Outdoor Modeling.
    Sensors (Basel). 2016 Sep 27;16(10):1589. doi: 10.3390/s16101589.
