


BLAINDER-A Blender AI Add-On for Generation of Semantically Labeled Depth-Sensing Data.

Affiliations

Virtual Reality and Multimedia Group, Institute of Computer Science, Freiberg University of Mining and Technology, 09599 Freiberg, Germany.

Operating Systems and Communication Technologies Group, Institute of Computer Science, Freiberg University of Mining and Technology, 09599 Freiberg, Germany.

Publication

Sensors (Basel). 2021 Mar 18;21(6):2144. doi: 10.3390/s21062144.

DOI: 10.3390/s21062144
PMID: 33803908
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC8003152/
Abstract

Common Machine-Learning (ML) approaches for scene classification require a large amount of training data. However, for classification of depth sensor data, in contrast to image data, relatively few databases are publicly available and manual generation of semantically labeled 3D point clouds is an even more time-consuming task. To simplify the training data generation process for a wide range of domains, we have developed the add-on package for the open-source 3D modeling software Blender, which enables a largely automated generation of semantically annotated point-cloud data in virtual 3D environments. In this paper, we focus on classical depth-sensing techniques Light Detection and Ranging (LiDAR) and Sound Navigation and Ranging (Sonar). Within the add-on, different depth sensors can be loaded from presets, customized sensors can be implemented and different environmental conditions (e.g., influence of rain, dust) can be simulated. The semantically labeled data can be exported to various 2D and 3D formats and are thus optimized for different ML applications and visualizations. In addition, semantically labeled images can be exported using the rendering functionalities of Blender.

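The core idea described in the abstract (casting rays into a virtual 3D scene, tagging each hit with the semantic label of the struck object, and perturbing ranges to mimic conditions such as rain or dust) can be sketched in plain Python. The scene layout, sensor parameters, and noise model below are illustrative assumptions for a minimal horizontal LiDAR sweep over labeled spheres; they are not BLAINDER's actual API.

```python
import math
import random

# Hypothetical scene: each object is a sphere with a center, radius, and a
# semantic label. This mimics, in miniature, tagging each ray hit with the
# label of the object it strikes.
SCENE = [
    {"center": (5.0, 0.0, 0.0), "radius": 1.0, "label": "tree"},
    {"center": (8.0, 3.0, 0.0), "radius": 2.0, "label": "building"},
]

def ray_sphere_hit(origin, direction, sphere):
    """Return the distance to the nearest forward intersection, or None."""
    ox, oy, oz = (origin[i] - sphere["center"][i] for i in range(3))
    dx, dy, dz = direction
    b = 2.0 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - sphere["radius"] ** 2
    disc = b * b - 4.0 * c  # direction is unit length, so a == 1
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

def scan(origin=(0.0, 0.0, 0.0), h_steps=90, fov_deg=90.0,
         noise_sigma=0.0, dropout=0.0, rng=None):
    """Sweep rays in the horizontal plane; return (x, y, z, label) hits."""
    rng = rng or random.Random(0)
    points = []
    for i in range(h_steps):
        ang = math.radians(-fov_deg / 2 + fov_deg * i / (h_steps - 1))
        d = (math.cos(ang), math.sin(ang), 0.0)
        # Keep the closest labeled hit across all scene objects.
        best = min(
            ((t, s["label"]) for s in SCENE
             if (t := ray_sphere_hit(origin, d, s)) is not None),
            default=None,
        )
        if best is None or rng.random() < dropout:  # dropout ~ rain/dust loss
            continue
        t, label = best
        t += rng.gauss(0.0, noise_sigma)            # Gaussian range noise
        points.append((origin[0] + t * d[0],
                       origin[1] + t * d[1],
                       origin[2] + t * d[2], label))
    return points

pts = scan(noise_sigma=0.02, dropout=0.1)
labels = {label for *_, label in pts}
```

In the real add-on, the scene geometry and per-object labels come from Blender, and an export step writes the annotated points to 2D/3D formats (or rendered segmentation images); this sketch only illustrates the labeling and noise-injection concept.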

Figures (full-text figure links, g001–g020):

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f96b/8003152/13e6deb36a2d/sensors-21-02144-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f96b/8003152/8d9547230d22/sensors-21-02144-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f96b/8003152/b919a34ad342/sensors-21-02144-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f96b/8003152/262fc3eb8851/sensors-21-02144-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f96b/8003152/29ee003d6e16/sensors-21-02144-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f96b/8003152/d92de6d2ebb7/sensors-21-02144-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f96b/8003152/05dd4e747457/sensors-21-02144-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f96b/8003152/6fc370ea3e19/sensors-21-02144-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f96b/8003152/e8a01499bb88/sensors-21-02144-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f96b/8003152/3fbd176ca5aa/sensors-21-02144-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f96b/8003152/6327b4d29907/sensors-21-02144-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f96b/8003152/6d495d3e6b66/sensors-21-02144-g012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f96b/8003152/db023e4316ae/sensors-21-02144-g013.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f96b/8003152/61f3b50127c1/sensors-21-02144-g014.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f96b/8003152/64d92b1ff935/sensors-21-02144-g015.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f96b/8003152/b6efadf74899/sensors-21-02144-g016.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f96b/8003152/bc52d8be4610/sensors-21-02144-g017.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f96b/8003152/21556010019e/sensors-21-02144-g018.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f96b/8003152/0f75dfe9511d/sensors-21-02144-g019.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f96b/8003152/16a33e8879dd/sensors-21-02144-g020.jpg

Similar Articles

1. BLAINDER-A Blender AI Add-On for Generation of Semantically Labeled Depth-Sensing Data. Sensors (Basel). 2021 Mar 18;21(6):2144. doi: 10.3390/s21062144.
2. Development and Validation of LiDAR Sensor Simulators Based on Parallel Raycasting. Sensors (Basel). 2020 Dec 15;20(24):7186. doi: 10.3390/s20247186.
3. Deep Learning for LiDAR Point Cloud Classification in Remote Sensing. Sensors (Basel). 2022 Oct 16;22(20):7868. doi: 10.3390/s22207868.
4. On-Ground Vineyard Reconstruction Using a LiDAR-Based Automated System. Sensors (Basel). 2020 Feb 18;20(4):1102. doi: 10.3390/s20041102.
5. Virtual Disassembling of Historical Edifices: Experiments and Assessments of an Automatic Approach for Classifying Multi-Scalar Point Clouds into Architectural Elements. Sensors (Basel). 2020 Apr 11;20(8):2161. doi: 10.3390/s20082161.
6. Semantic-Based Building Extraction from LiDAR Point Clouds Using Contexts and Optimization in Complex Environment. Sensors (Basel). 2020 Jun 15;20(12):3386. doi: 10.3390/s20123386.
7. Comparison of Depth Camera and Terrestrial Laser Scanner in Monitoring Structural Deflections. Sensors (Basel). 2020 Dec 30;21(1):201. doi: 10.3390/s21010201.
8. A Methodology to Model the Rain and Fog Effect on the Performance of Automotive LiDAR Sensors. Sensors (Basel). 2023 Aug 3;23(15):6891. doi: 10.3390/s23156891.
9. Modeling and Analysis of a Direct Time-of-Flight Sensor Architecture for LiDAR Applications. Sensors (Basel). 2019 Dec 11;19(24):5464. doi: 10.3390/s19245464.
10. Weakly Supervised Adversarial Learning for 3D Human Pose Estimation from Point Clouds. IEEE Trans Vis Comput Graph. 2020 May;26(5):1851-1859. doi: 10.1109/TVCG.2020.2973076. Epub 2020 Feb 13.

Cited By

1. Semantic segmentation using synthetic images of underwater marine-growth. Front Robot AI. 2025 Jan 8;11:1459570. doi: 10.3389/frobt.2024.1459570. eCollection 2024.
2. GPU Rasterization-Based 3D LiDAR Simulation for Deep Learning. Sensors (Basel). 2023 Sep 28;23(19):8130. doi: 10.3390/s23198130.

References

1. Development and Validation of LiDAR Sensor Simulators Based on Parallel Raycasting. Sensors (Basel). 2020 Dec 15;20(24):7186. doi: 10.3390/s20247186.