Suppr 超能文献


Enhancement of RGB-D Image Alignment Using Fiducial Markers.

Affiliations

Institute of Electronics and Informatics Engineering of Aveiro, University of Aveiro, 3810-193 Aveiro, Portugal.

Department of Mechanical Engineering, University of Aveiro, 3810-193 Aveiro, Portugal.

Publication

Sensors (Basel). 2020 Mar 9;20(5):1497. doi: 10.3390/s20051497.

DOI: 10.3390/s20051497
PMID: 32182872
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC7085533/
Abstract

Three-dimensional (3D) reconstruction methods generate a 3D textured model from the combination of data from several captures. As such, the geometrical transformations between these captures are required. The process of computing or refining these transformations is referred to as alignment. It is often a difficult problem to handle, in particular due to a lack of accuracy in the matching of features. We propose an optimization framework that takes advantage of fiducial markers placed in the scene. Since these markers are robustly detected, the problem of incorrect matching of features is overcome. The proposed procedure is capable of enhancing the 3D models created using consumer level RGB-D hand-held cameras, reducing visual artefacts caused by misalignments. One problem inherent to this solution is that the scene is polluted by the markers. Therefore, a tool was developed to allow their removal from the texture of the scene. Results show that our optimization framework is able to significantly reduce alignment errors between captures, which results in visually appealing reconstructions. Furthermore, the markers used to enhance the alignment are seamlessly removed from the final model texture.
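The core idea above — using robustly detected fiducial markers to supply reliable correspondences between captures — can be illustrated with a rigid-alignment step. The snippet below is a minimal sketch, not the authors' full optimization framework: it assumes matched 3D marker positions from two captures are already available and recovers the rotation and translation between them via the Kabsch algorithm.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst.

    src, dst: (N, 3) arrays of corresponding 3D points, e.g. fiducial
    marker corners detected in two RGB-D captures.
    """
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in the recovered rotation
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t
```

Because marker detections are largely free of mismatches, even a closed-form fit like this gives a good pairwise initialization; the paper's contribution lies in refining the transformations jointly across all captures.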


[Article figures g001–g028 omitted; available with the full text at https://pmc.ncbi.nlm.nih.gov/articles/PMC7085533/]

Similar articles

1. Enhancement of RGB-D Image Alignment Using Fiducial Markers.
   Sensors (Basel). 2020 Mar 9;20(5):1497. doi: 10.3390/s20051497.
2. 3D Reconstruction and alignment by consumer RGB-D sensors and fiducial planar markers for patient positioning in radiation therapy.
   Comput Methods Programs Biomed. 2019 Oct;180:105004. doi: 10.1016/j.cmpb.2019.105004. Epub 2019 Aug 3.
3. A Fast and Robust Extrinsic Calibration for RGB-D Camera Networks.
   Sensors (Basel). 2018 Jan 15;18(1):235. doi: 10.3390/s18010235.
4. Robust Texture Mapping Using RGB-D Cameras.
   Sensors (Basel). 2021 May 7;21(9):3248. doi: 10.3390/s21093248.
5. Indoor Scene Point Cloud Registration Algorithm Based on RGB-D Camera Calibration.
   Sensors (Basel). 2017 Aug 15;17(8):1874. doi: 10.3390/s17081874.
6. Robust RGB-D SLAM Using Point and Line Features for Low Textured Scene.
   Sensors (Basel). 2020 Sep 2;20(17):4984. doi: 10.3390/s20174984.
7. Enhanced RGB-D Mapping Method for Detailed 3D Indoor and Outdoor Modeling.
   Sensors (Basel). 2016 Sep 27;16(10):1589. doi: 10.3390/s16101589.
8. SLAM-based dense surface reconstruction in monocular Minimally Invasive Surgery and its application to Augmented Reality.
   Comput Methods Programs Biomed. 2018 May;158:135-146. doi: 10.1016/j.cmpb.2018.02.006. Epub 2018 Feb 8.
9. Multi-Cue-Based Circle Detection and Its Application to Robust Extrinsic Calibration of RGB-D Cameras.
   Sensors (Basel). 2019 Mar 29;19(7):1539. doi: 10.3390/s19071539.
10. Intensity-based 2D-3D spine image registration incorporating a single fiducial marker.
    Acad Radiol. 2005 Jan;12(1):37-50. doi: 10.1016/j.acra.2004.09.013.

Cited by

1. A Light-Weight Practical Framework for Feces Detection and Trait Recognition.
   Sensors (Basel). 2020 May 6;20(9):2644. doi: 10.3390/s20092644.

References

1. Validation, Reliability, and Responsiveness Outcomes Of Kinematic Assessment With An RGB-D Camera To Analyze Movement In Subacute And Chronic Low Back Pain.
   Sensors (Basel). 2020 Jan 27;20(3):689. doi: 10.3390/s20030689.
2. 3D Virtual Reconstruction of the Ancient Roman of the Fucino Lake.
   Sensors (Basel). 2019 Aug 10;19(16):3505. doi: 10.3390/s19163505.
3. A Novel Method for Extrinsic Calibration of Multiple RGB-D Cameras Using Descriptor-Based Patterns.
   Sensors (Basel). 2019 Jan 16;19(2):349. doi: 10.3390/s19020349.
4. Surgical Robot with Environment Reconstruction and Force Feedback.
   Annu Int Conf IEEE Eng Med Biol Soc. 2018 Jul;2018:1861-1866. doi: 10.1109/EMBC.2018.8512695.
5. Toward a More Complete, Flexible, and Safer Speed Planning for Autonomous Driving via Convex Optimization.
   Sensors (Basel). 2018 Jul 6;18(7):2185. doi: 10.3390/s18072185.
6. Image-Based Localization Aided Indoor Pedestrian Trajectory Estimation Using Smartphones.
   Sensors (Basel). 2018 Jan 17;18(1):258. doi: 10.3390/s18010258.
7. Indoor Scene Point Cloud Registration Algorithm Based on RGB-D Camera Calibration.
   Sensors (Basel). 2017 Aug 15;17(8):1874. doi: 10.3390/s17081874.
8. 3-D Imaging Systems for Agricultural Applications-A Review.
   Sensors (Basel). 2016 Apr 29;16(5):618. doi: 10.3390/s16050618.
9. Robot Guidance Using Machine Vision Techniques in Industrial Environments: A Comparative Review.
   Sensors (Basel). 2016 Mar 5;16(3):335. doi: 10.3390/s16030335.
10. UAV-Based Photogrammetry and Integrated Technologies for Architectural Applications--Methodological Strategies for the After-Quake Survey of Vertical Structures in Mantua (Italy).
    Sensors (Basel). 2015 Jun 30;15(7):15520-39. doi: 10.3390/s150715520.