

Accurate feature point detection method exploiting the line structure of the projection pattern for 3D reconstruction.

Author Information

Ha Minhtuan, Pham Dieuthuy, Xiao Changyan

Publication Information

Appl Opt. 2021 Apr 10;60(11):2926-2937. doi: 10.1364/AO.414952.

DOI: 10.1364/AO.414952
PMID: 33983185
Abstract

3D imaging methods that use a grid pattern suit real-time applications: they decode quickly and accurately and can produce a dense 3D map. However, like other spatial coding methods, they struggle to match the accuracy of time-multiplexing approaches because of scene inhomogeneity. To overcome these challenges, this paper proposes a convolutional-neural-network-based feature point detection method that exploits the line structure of the projected grid pattern. First, two dedicated data sets are designed to train models that separately extract the vertical and horizontal stripes from the image of the deformed pattern. The predictions of the trained models on test images are then fused into a single skeleton image from which feature points are detected. Experimental results show that the proposed method achieves higher localization accuracy in feature point detection than previous approaches.
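
The fusion step the abstract describes (merging the two stripe predictions into a single skeleton image and reading feature points off its crossings) is concrete enough to sketch. Below is a minimal Python illustration, assuming the two trained CNNs output binary masks of the vertical and horizontal stripes; the scikit-image skeletonization and the centroid-based crossing detection are illustrative assumptions, not the paper's implementation.

# Minimal sketch of the skeleton-fusion step, NOT the paper's implementation.
# Assumes two trained CNNs have already produced binary stripe masks.
import numpy as np
from scipy.ndimage import binary_dilation, center_of_mass, label
from skimage.morphology import skeletonize

def detect_feature_points(vertical_mask, horizontal_mask):
    """Fuse binary stripe masks and return their crossings as feature points.

    Grid-pattern feature points are the intersections of the vertical and
    horizontal stripes: each mask is thinned to a one-pixel skeleton, and
    the overlap of the two skeletons (after a one-pixel dilation to
    tolerate slight misalignment) marks the crossings.
    """
    v_skel = skeletonize(vertical_mask.astype(bool))
    h_skel = skeletonize(horizontal_mask.astype(bool))

    # Dilate each skeleton by one pixel so near-misses still intersect.
    crossings = binary_dilation(v_skel) & binary_dilation(h_skel)

    # Each connected blob of crossing pixels yields one feature point,
    # taken at the blob centroid in (row, col) coordinates.
    labeled, n = label(crossings)
    if n == 0:
        return np.empty((0, 2))
    return np.array(center_of_mass(crossings, labeled, range(1, n + 1)))

On an ideal mask pair this returns one (row, col) coordinate per grid crossing; the centroid heuristic only approximates the sub-pixel localization the paper targets.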


Similar Articles

1. Complete grid pattern decoding method for a one-shot structured light system.
   Appl Opt. 2020 Mar 20;59(9):2674-2685. doi: 10.1364/AO.381149.

2. Hierarchical Recurrent Neural Hashing for Image Retrieval With Hierarchical Convolutional Features.
   IEEE Trans Image Process. 2018;27(1):106-120. doi: 10.1109/TIP.2017.2755766.

3. Deep feature descriptor based hierarchical dense matching for X-ray angiographic images.
   Comput Methods Programs Biomed. 2019 Jul;175:233-242. doi: 10.1016/j.cmpb.2019.04.006. Epub 2019 Apr 22.

4. Dense 3D Reconstruction from High Frame-Rate Video Using a Static Grid Pattern.
   IEEE Trans Pattern Anal Mach Intell. 2014 Sep;36(9):1733-47. doi: 10.1109/TPAMI.2014.2300490.

5. An Improved Method for Stable Feature Points Selection in Structure-from-Motion Considering Image Semantic and Structural Characteristics.
   Sensors (Basel). 2021 Apr 1;21(7):2416. doi: 10.3390/s21072416.

6. A novel end-to-end classifier using domain transferred deep convolutional neural networks for biomedical images.
   Comput Methods Programs Biomed. 2017 Mar;140:283-293. doi: 10.1016/j.cmpb.2016.12.019. Epub 2017 Jan 6.

7. SLAM-based dense surface reconstruction in monocular Minimally Invasive Surgery and its application to Augmented Reality.
   Comput Methods Programs Biomed. 2018 May;158:135-146. doi: 10.1016/j.cmpb.2018.02.006. Epub 2018 Feb 8.

8. High-Capacity Spatial Structured Light for Robust and Accurate Reconstruction.
   Sensors (Basel). 2023 May 12;23(10):4685. doi: 10.3390/s23104685.

9. A quasi-dense approach to surface reconstruction from uncalibrated images.
   IEEE Trans Pattern Anal Mach Intell. 2005 Mar;27(3):418-433. doi: 10.1109/TPAMI.2005.44.