Fast and Automatic Reconstruction of Semantically Rich 3D Indoor Maps from Low-quality RGB-D Sequences

Affiliations

Research Institute for Smart Cities & Shenzhen Key Laboratory of Spatial Information Smart Sensing and Services, School of Architecture and Urban Planning, Shenzhen University, Shenzhen 518060, China.

State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430000, China.

Publication information

Sensors (Basel). 2019 Jan 27;19(3):533. doi: 10.3390/s19030533.

DOI: 10.3390/s19030533
PMID: 30691244
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC6387083/
Abstract

Semantically rich indoor models are increasingly used throughout a facility's life cycle for different applications. With the decreasing price of 3D sensors, it is convenient to acquire point cloud data from consumer-level scanners. However, most existing methods for 3D indoor reconstruction from point clouds involve a tedious manual or interactive process due to line-of-sight occlusions and complex space structures. Using the multiple types of data obtained by RGB-D devices, this paper proposes a fast and automatic method for reconstructing semantically rich indoor 3D building models from low-quality RGB-D sequences. Our method is capable of identifying and modelling the main structural components of indoor environments, such as spaces, walls, floors, ceilings, windows, and doors, from the RGB-D datasets. The method includes space division and extraction, opening extraction, and global optimization. For space division and extraction, rather than distinguishing room spaces based on the detected wall planes, we interactively define the start-stop position for each functional space (e.g., room, corridor, kitchen) during scanning. Then, an interior-elements filtering algorithm is proposed for wall component extraction, and a boundary generation algorithm is used for space layout determination. For opening extraction, we propose a new noise-robust method for opening generation based on the properties of the convex hull, the octree structure, Euclidean clusters, and the camera trajectory, which remains applicable to data collected in indoor environments despite inevitable occlusions. A global optimization approach for planes is designed to eliminate the inconsistency among planes sharing the same global plane, and to maintain plausible connectivity between walls as well as the relationships between walls and openings. The final model is stored according to the CityGML 3.0 standard. Our approach allows for the robust generation of semantically rich 3D indoor models and has strong applicability and reconstruction power for complex real-world datasets.
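The opening-extraction step mentioned in the abstract relies on Euclidean clustering to group candidate point sets before checking them against the convex hull and the camera trajectory. The sketch below is a minimal illustration of Euclidean clustering (region growing over a k-d tree neighborhood graph) using SciPy; it is not the authors' implementation, and the function name and the `tol`/`min_size` parameters are illustrative choices only.

```python
import numpy as np
from scipy.spatial import cKDTree

def euclidean_clusters(points, tol=0.05, min_size=10):
    """Group 3D points into clusters by region growing: two points belong
    to the same cluster if they are connected through neighbors closer
    than `tol`. Clusters smaller than `min_size` are discarded as noise."""
    tree = cKDTree(points)
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        queue, cluster = [seed], [seed]
        while queue:
            idx = queue.pop()
            # Expand the cluster with all unvisited neighbors within `tol`.
            for nb in tree.query_ball_point(points[idx], tol):
                if nb in unvisited:
                    unvisited.discard(nb)
                    queue.append(nb)
                    cluster.append(nb)
        if len(cluster) >= min_size:
            clusters.append(cluster)
    return clusters

# Two well-separated runs of points along x form two clusters.
a = np.c_[np.arange(20) * 0.01, np.zeros(20), np.zeros(20)]
b = a + np.array([10.0, 0.0, 0.0])
found = euclidean_clusters(np.vstack([a, b]), tol=0.05, min_size=10)
```

In the paper's pipeline, such clusters (together with convex-hull geometry, the octree occupancy structure, and visibility along the camera trajectory) help separate genuine openings such as doors and windows from holes caused by occlusion.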


Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1ea4/6387083/c6523c826ceb/sensors-19-00533-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1ea4/6387083/f0a69acad335/sensors-19-00533-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1ea4/6387083/b4f436d241ef/sensors-19-00533-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1ea4/6387083/c6f70408b773/sensors-19-00533-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1ea4/6387083/acdade02a0fb/sensors-19-00533-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1ea4/6387083/69d579e85535/sensors-19-00533-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1ea4/6387083/3ad2b3a776b8/sensors-19-00533-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1ea4/6387083/5026f202c774/sensors-19-00533-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1ea4/6387083/bd60f73b80bf/sensors-19-00533-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1ea4/6387083/9eba317c1687/sensors-19-00533-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1ea4/6387083/c846fc1e5fa7/sensors-19-00533-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1ea4/6387083/bc8dc2ce59e5/sensors-19-00533-g012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1ea4/6387083/685f28202f18/sensors-19-00533-g013.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1ea4/6387083/61ea01b13908/sensors-19-00533-g014.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1ea4/6387083/585863eb861d/sensors-19-00533-g015.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1ea4/6387083/8bf6fcf4cd6b/sensors-19-00533-g016.jpg

Similar articles

1. Fast and Automatic Reconstruction of Semantically Rich 3D Indoor Maps from Low-quality RGB-D Sequences.
Sensors (Basel). 2019 Jan 27;19(3):533. doi: 10.3390/s19030533.
2. Indoor Scene Point Cloud Registration Algorithm Based on RGB-D Camera Calibration.
Sensors (Basel). 2017 Aug 15;17(8):1874. doi: 10.3390/s17081874.
3. Automatic Reconstruction of Multi-Level Indoor Spaces from Point Cloud and Trajectory.
Sensors (Basel). 2021 May 17;21(10):3493. doi: 10.3390/s21103493.
4. Automatic Indoor Reconstruction from Point Clouds in Multi-room Environments with Curved Walls.
Sensors (Basel). 2019 Sep 2;19(17):3798. doi: 10.3390/s19173798.
5. Automatic Indoor as-Built Building Information Models Generation by Using Low-Cost RGB-D Sensors.
Sensors (Basel). 2020 Jan 4;20(1):293. doi: 10.3390/s20010293.
6. Enhanced RGB-D Mapping Method for Detailed 3D Indoor and Outdoor Modeling.
Sensors (Basel). 2016 Sep 27;16(10):1589. doi: 10.3390/s16101589.
7. Space Subdivision in Indoor Mobile Laser Scanning Point Clouds Based on Scanline Analysis.
Sensors (Basel). 2018 Jun 5;18(6):1838. doi: 10.3390/s18061838.
8. 3D modeling of building indoor spaces and closed doors from imagery and point clouds.
Sensors (Basel). 2015 Feb 3;15(2):3491-512. doi: 10.3390/s150203491.
9. Progressive Model-Driven Approach for 3D Modeling of Indoor Spaces.
Sensors (Basel). 2023 Jun 26;23(13):5934. doi: 10.3390/s23135934.
10. Point-Plane SLAM Using Supposed Planes for Indoor Environments.
Sensors (Basel). 2019 Sep 2;19(17):3795. doi: 10.3390/s19173795.

Cited by

1. Single-Shot Structured Light Sensor for 3D Dense and Dynamic Reconstruction.
Sensors (Basel). 2020 Feb 17;20(4):1094. doi: 10.3390/s20041094.

References

1. Robust and Efficient CPU-Based RGB-D Scene Reconstruction.
Sensors (Basel). 2018 Oct 28;18(11):3652. doi: 10.3390/s18113652.
2. Dense RGB-D SLAM with Multiple Cameras.
Sensors (Basel). 2018 Jul 2;18(7):2118. doi: 10.3390/s18072118.
3. Geometric Integration of Hybrid Correspondences for RGB-D Unidirectional Tracking.
Sensors (Basel). 2018 May 1;18(5):1385. doi: 10.3390/s18051385.
4. A New Calibration Method for Commercial RGB-D Sensors.
Sensors (Basel). 2017 May 24;17(6):1204. doi: 10.3390/s17061204.
5. Enhanced RGB-D Mapping Method for Detailed 3D Indoor and Outdoor Modeling.
Sensors (Basel). 2016 Sep 27;16(10):1589. doi: 10.3390/s16101589.