
A semi-automatic image-based close range 3D modeling pipeline using a multi-camera configuration.

Affiliation

Department of Geomatics, National Cheng-Kung University, No.1, University Road, Tainan 701, Taiwan.

Publication

Sensors (Basel). 2012;12(8):11271-93. doi: 10.3390/s120811271. Epub 2012 Aug 14.

DOI:10.3390/s120811271
PMID:23112656
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC3472884/
Abstract

The generation of photo-realistic 3D models is an important task for the digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline that takes advantage of a multi-camera configuration and a multi-image matching technique that does not require any markers on or around the object. Multiple digital single-lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration, which can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are applied to images taken with the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters remain the same as the calibrated results, even when the target has changed. Based on this invariant characteristic, the whole 3D modeling pipeline can be performed completely automatically once the whole system has been calibrated and the software seamlessly integrated. Several experiments were conducted to prove the feasibility of the proposed system. The imaged objects include a human being, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations, were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. This demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to the large quantities of antiques stored in museums.
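The pipeline's key idea — calibrate the rig once with coded targets, then reuse the same exterior orientation parameters for every subsequent object — can be sketched with a standard pinhole projection. This is a minimal illustration in Python/NumPy, not the paper's implementation; the intrinsics, poses, and 0.5 m baseline are made-up values for the sketch:

```python
import numpy as np

def project(X, K, R, t):
    """Project a 3D world point X into pixel coordinates for a camera
    with intrinsics K and calibrated exterior orientation (R, t)."""
    x_cam = R @ X + t            # world frame -> camera frame
    x_img = K @ x_cam            # camera frame -> homogeneous pixel coords
    return x_img[:2] / x_img[2]  # perspective division

# Illustrative intrinsics: 2000 px focal length, principal point (1500, 1000)
K = np.array([[2000.0,    0.0, 1500.0],
              [   0.0, 2000.0, 1000.0],
              [   0.0,    0.0,    1.0]])

# Two cameras of a fixed rig: reference pose and a 0.5 m baseline shift.
# These (R, t) pairs play the role of the calibrated exterior orientations.
poses = [
    (np.eye(3), np.zeros(3)),
    (np.eye(3), np.array([-0.5, 0.0, 0.0])),
]

X = np.array([0.0, 0.0, 2.0])   # an object point 2 m in front of the rig
for i, (R, t) in enumerate(poses):
    u, v = project(X, K, R, t)
    print(f"camera {i}: ({u:.1f}, {v:.1f})")
```

Because R and t are fixed by the one-time rig calibration, only the object point X changes between acquisitions; this is what lets multi-image matching run on each new target without re-triangulating the camera poses.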

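The two reported accuracy figures are linked by the standard photogrammetric convention that relative accuracy is the ratio of absolute error to the object's reference dimension. A quick arithmetic check (the implied reference dimension is derived here, not stated in the abstract):

```python
# Reported accuracy of the best result
abs_accuracy_mm = 0.26          # absolute accuracy, millimetres
relative_denominator = 17_333   # relative accuracy 1:17,333

# Reference dimension implied by the two figures together:
# relative accuracy = absolute error / reference dimension
ref_dim_mm = abs_accuracy_mm * relative_denominator
print(f"implied reference dimension ~ {ref_dim_mm / 1000:.2f} m")  # ~ 4.51 m
```

So a 0.26 mm error at 1:17,333 corresponds to a reference dimension of roughly 4.5 m for the scanned scene.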

Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0f32/3472884/55c134267b1a/sensors-12-11271f17.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0f32/3472884/8625b76dc794/sensors-12-11271f1.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0f32/3472884/e4e0827f3ab7/sensors-12-11271f2.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0f32/3472884/d778fcf3cb21/sensors-12-11271f3.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0f32/3472884/5050b270c090/sensors-12-11271f4.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0f32/3472884/0da582d2329a/sensors-12-11271f5.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0f32/3472884/25717e11ba92/sensors-12-11271f6.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0f32/3472884/5fed1ef87635/sensors-12-11271f7.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0f32/3472884/316f442a5da2/sensors-12-11271f8.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0f32/3472884/5aabe6497b2d/sensors-12-11271f9.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0f32/3472884/315f4f21be11/sensors-12-11271f10.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0f32/3472884/3e9563c2de58/sensors-12-11271f11.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0f32/3472884/c469e8bffec9/sensors-12-11271f12.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0f32/3472884/90d10b7eccb7/sensors-12-11271f13.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0f32/3472884/49da279783b1/sensors-12-11271f14.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0f32/3472884/8c7dda6f2bdd/sensors-12-11271f15.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0f32/3472884/1504ffd2df44/sensors-12-11271f16.jpg

Similar Articles

1
A semi-automatic image-based close range 3D modeling pipeline using a multi-camera configuration.
Sensors (Basel). 2012;12(8):11271-93. doi: 10.3390/s120811271. Epub 2012 Aug 14.
2
Minimal camera networks for 3D image based modeling of cultural heritage objects.
Sensors (Basel). 2014 Mar 25;14(4):5785-804. doi: 10.3390/s140405785.
3
Quality Analysis of 3D Point Cloud Using Low-Cost Spherical Camera for Underpass Mapping.
Sensors (Basel). 2024 May 30;24(11):3534. doi: 10.3390/s24113534.
4
A New Approach for Inspection of Selected Geometric Parameters of a Railway Track Using Image-Based Point Clouds.
Sensors (Basel). 2018 Mar 6;18(3):791. doi: 10.3390/s18030791.
5
Calibration of the Relative Orientation between Multiple Depth Cameras Based on a Three-Dimensional Target.
Sensors (Basel). 2019 Jul 8;19(13):3008. doi: 10.3390/s19133008.
6
Reconstruction method and optimum range of camera-shooting angle for 3D plant modeling using a multi-camera photography system.
Plant Methods. 2020 Aug 31;16:118. doi: 10.1186/s13007-020-00658-6. eCollection 2020.
7
Optimal Lateral Displacement in Automatic Close-Range Photogrammetry.
Sensors (Basel). 2020 Nov 4;20(21):6280. doi: 10.3390/s20216280.
8
In-air versus underwater comparison of 3D reconstruction accuracy using action sport cameras.
J Biomech. 2017 Jan 25;51:77-82. doi: 10.1016/j.jbiomech.2016.11.068. Epub 2016 Dec 2.
9
Full-automatic self-calibration of color digital cameras using color targets.
Opt Express. 2011 Sep 12;19(19):18164-74. doi: 10.1364/OE.19.018164.
10
3D Static Point Cloud Registration by Estimating Temporal Human Pose at Multiview.
Sensors (Basel). 2022 Jan 31;22(3):1097. doi: 10.3390/s22031097.

Cited By

1
Interpretation and Transformation of Intrinsic Camera Parameters Used in Photogrammetry and Computer Vision.
Sensors (Basel). 2022 Dec 7;22(24):9602. doi: 10.3390/s22249602.

References

1
Accurate, dense, and robust multiview stereopsis.
IEEE Trans Pattern Anal Mach Intell. 2010 Aug;32(8):1362-76. doi: 10.1109/TPAMI.2009.161.
2
An efficient solution to the five-point relative pose problem.
IEEE Trans Pattern Anal Mach Intell. 2004 Jun;26(6):756-77. doi: 10.1109/TPAMI.2004.17.
3
Stereo processing by semiglobal matching and mutual information.
IEEE Trans Pattern Anal Mach Intell. 2008 Feb;30(2):328-41. doi: 10.1109/TPAMI.2007.1166.
4
Detailed 3D reconstruction of large-scale heritage sites with integrated techniques.
IEEE Comput Graph Appl. 2004 May-Jun;24(3):21-9. doi: 10.1109/mcg.2004.1318815.