Suppr 超能文献


Stabilization and Validation of 3D Object Position Using Multimodal Sensor Fusion and Semantic Segmentation.

Affiliation

Computer Science Department, Technical University of Cluj-Napoca, 28 Memorandumului Street, 400114 Cluj Napoca, Romania.

Publication

Sensors (Basel). 2020 Feb 18;20(4):1110. doi: 10.3390/s20041110.

DOI: 10.3390/s20041110
PMID: 32085608
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC7070899/
Abstract

The stabilization and validation process of the measured position of objects is an important step for high-level perception functions and for the correct processing of sensory data. The goal of this process is to detect and handle inconsistencies between different sensor measurements, which result from the perception system. The aggregation of the detections from different sensors consists in the combination of the sensorial data in one common reference frame for each identified object, leading to the creation of a super-sensor. The result of the data aggregation may end up with errors such as false detections, misplaced object cuboids or an incorrect number of objects in the scene. The stabilization and validation process is focused on mitigating these problems. The current paper proposes four contributions for solving the stabilization and validation task, for autonomous vehicles, using the following sensors: trifocal camera, fisheye camera, long-range RADAR (Radio detection and ranging), and 4-layer and 16-layer LIDARs (Light Detection and Ranging). We propose two original data association methods used in the sensor fusion and tracking processes. The first data association algorithm is created for tracking LIDAR objects and combines multiple appearance and motion features in order to exploit the available information for road objects. The second novel data association algorithm is designed for trifocal camera objects and has the objective of finding measurement correspondences to sensor fused objects such that the super-sensor data are enriched by adding the semantic class information. The implemented trifocal object association solution uses a novel polar association scheme combined with a decision tree to find the best hypothesis-measurement correlations. 
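The idea of a data-association cost that mixes appearance and motion features, as in the LIDAR tracking algorithm above, can be sketched as follows. This is an illustrative greedy gated assignment, not the paper's implementation; the feature set (position, cuboid size, velocity), the weights, and the gate value are assumptions chosen for the example.

```python
import math

def association_cost(track, det, w_pos=0.5, w_size=0.3, w_vel=0.2):
    """Weighted cost combining motion features (position, velocity)
    and an appearance feature (cuboid size). Weights are illustrative."""
    d_pos = math.dist(track["pos"], det["pos"])
    d_size = abs(track["size"] - det["size"])
    d_vel = math.dist(track["vel"], det["vel"])
    return w_pos * d_pos + w_size * d_size + w_vel * d_vel

def associate(tracks, dets, gate=2.0):
    """Greedy nearest-cost assignment with a gating threshold.
    Returns (track_index, detection_index) pairs; detections whose
    best cost exceeds the gate are left unassigned."""
    pairs, used = [], set()
    for ti, tr in enumerate(tracks):
        best, best_cost = None, gate
        for di, de in enumerate(dets):
            if di in used:
                continue
            c = association_cost(tr, de)
            if c < best_cost:
                best, best_cost = di, c
        if best is not None:
            used.add(best)
            pairs.append((ti, best))
    return pairs
```

A production tracker would typically replace the greedy loop with an optimal assignment (e.g. the Hungarian algorithm), but the combined multi-feature cost is the point being illustrated.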
Another contribution we propose for stabilizing object position and unpredictable behavior of road objects, provided by multiple types of complementary sensors, is the use of a fusion approach based on the Unscented Kalman Filter and a single-layer perceptron. The last novel contribution is related to the validation of the 3D object position, which is solved using a fuzzy logic technique combined with a semantic segmentation image. The proposed algorithms have a real-time performance, achieving a cumulative running time of 90 ms, and have been evaluated using ground truth data extracted from a high-precision GPS (global positioning system) with 2 cm accuracy, obtaining an average error of 0.8 m.
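At the core of the Unscented Kalman Filter mentioned above is the unscented transform: a small set of sigma points is propagated through the (possibly nonlinear) motion model and the transformed mean and covariance are recovered from the weighted samples. The sketch below uses Julier's symmetric sigma-point set on a toy constant-velocity state; it is a minimal illustration, not the paper's UKF/perceptron fusion pipeline.

```python
import numpy as np

def sigma_points(mean, cov, kappa=1.0):
    """Generate 2n+1 sigma points and weights (Julier's symmetric set)."""
    n = len(mean)
    S = np.linalg.cholesky((n + kappa) * cov)  # matrix square root
    pts = [mean] + [mean + S[:, i] for i in range(n)] \
                 + [mean - S[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    return np.array(pts), w

def unscented_transform(pts, w, f):
    """Propagate sigma points through f and recover mean/covariance."""
    fp = np.array([f(p) for p in pts])
    mean = w @ fp
    diff = fp - mean
    cov = diff.T @ (w[:, None] * diff)
    return mean, cov
```

For a linear model the transform is exact, which makes it easy to sanity-check; its value in a fusion stack is that the same machinery handles nonlinear sensor and motion models without linearization.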

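The fuzzy-logic validation step combined with a semantic segmentation image can be illustrated as follows: project the fused 3D object into the image, measure how much of its 2D footprint lands on segmentation labels consistent with a road object, and map that overlap ratio through a fuzzy membership function to a validity degree. The class names, breakpoints, and ramp shape here are illustrative assumptions, not values from the paper.

```python
def ramp(x, a, b):
    """Piecewise-linear fuzzy membership: 0 below a, 1 above b."""
    if x <= a:
        return 0.0
    if x >= b:
        return 1.0
    return (x - a) / (b - a)

def validate_object(bbox, seg_mask, valid_classes=("car", "pedestrian")):
    """Fuzzy validity score from the overlap between a projected 2D
    bounding box (x0, y0, x1, y1) and matching segmentation labels.
    seg_mask is a row-major grid of per-pixel class labels."""
    x0, y0, x1, y1 = bbox
    pixels = [seg_mask[y][x] for y in range(y0, y1) for x in range(x0, x1)]
    overlap = sum(p in valid_classes for p in pixels) / max(len(pixels), 1)
    # Breakpoints (0.2, 0.7) are illustrative: below 20% overlap the
    # detection is rejected outright, above 70% it is fully trusted.
    return ramp(overlap, 0.2, 0.7)
```

A thresholded crisp decision would discard borderline objects; the fuzzy degree instead lets downstream logic weight uncertain detections.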

Figures
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1562/7070899/8cf46be72d08/sensors-20-01110-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1562/7070899/65bd8c32dff6/sensors-20-01110-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1562/7070899/514882287fee/sensors-20-01110-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1562/7070899/b87ac8cd2f25/sensors-20-01110-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1562/7070899/36bd773a024f/sensors-20-01110-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1562/7070899/978cf3cc4d4c/sensors-20-01110-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1562/7070899/126728329d12/sensors-20-01110-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1562/7070899/def4a15ef8f9/sensors-20-01110-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1562/7070899/44338ca035e0/sensors-20-01110-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1562/7070899/fd3be614f1f3/sensors-20-01110-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1562/7070899/0043a813e6ff/sensors-20-01110-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1562/7070899/2248ead727f6/sensors-20-01110-g012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1562/7070899/6590e40e8fd9/sensors-20-01110-g013.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1562/7070899/06d3451761cb/sensors-20-01110-g014.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1562/7070899/2a94717f8b6c/sensors-20-01110-g015.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1562/7070899/13b86d6782bf/sensors-20-01110-g016.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1562/7070899/fb7cf6435f2e/sensors-20-01110-g017.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1562/7070899/f64a561b08e5/sensors-20-01110-g018.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1562/7070899/63bbe0dbab8f/sensors-20-01110-g019.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1562/7070899/e3412a6c3897/sensors-20-01110-g020.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1562/7070899/321fb325184e/sensors-20-01110-g021.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1562/7070899/1358498e2aee/sensors-20-01110-g022.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1562/7070899/f9cdaba67017/sensors-20-01110-g023.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1562/7070899/ea3c3b5df5ee/sensors-20-01110-g024.jpg

Similar articles

1. Stabilization and Validation of 3D Object Position Using Multimodal Sensor Fusion and Semantic Segmentation.
   Sensors (Basel). 2020 Feb 18;20(4):1110. doi: 10.3390/s20041110.
2. Real-Time Hybrid Multi-Sensor Fusion Framework for Perception in Autonomous Vehicles.
   Sensors (Basel). 2019 Oct 9;19(20):4357. doi: 10.3390/s19204357.
3. Cooperative Multi-Sensor Tracking of Vulnerable Road Users in the Presence of Missing Detections.
   Sensors (Basel). 2020 Aug 26;20(17):4817. doi: 10.3390/s20174817.
4. Multitarget-Tracking Method Based on the Fusion of Millimeter-Wave Radar and LiDAR Sensor Information for Autonomous Vehicles.
   Sensors (Basel). 2023 Aug 3;23(15):6920. doi: 10.3390/s23156920.
5. Improving the Robustness of Object Detection Through a Multi-Camera-Based Fusion Algorithm Using Fuzzy Logic.
   Front Artif Intell. 2021 May 26;4:638951. doi: 10.3389/frai.2021.638951. eCollection 2021.
6. High Definition 3D Map Creation Using GNSS/IMU/LiDAR Sensor Integration to Support Autonomous Vehicle Navigation.
   Sensors (Basel). 2020 Feb 7;20(3):899. doi: 10.3390/s20030899.
7. End-to-End Multimodal Sensor Dataset Collection Framework for Autonomous Vehicles.
   Sensors (Basel). 2023 Jul 29;23(15):6783. doi: 10.3390/s23156783.
8. Multi-View Fusion-Based 3D Object Detection for Robot Indoor Scene Perception.
   Sensors (Basel). 2019 Sep 21;19(19):4092. doi: 10.3390/s19194092.
9. Pole-Like Object Extraction and Pole-Aided GNSS/IMU/LiDAR-SLAM System in Urban Area.
   Sensors (Basel). 2020 Dec 13;20(24):7145. doi: 10.3390/s20247145.
10. Dynamic Multi-LiDAR Based Multiple Object Detection and Tracking.
    Sensors (Basel). 2019 Mar 26;19(6):1474. doi: 10.3390/s19061474.

Cited by

1. Sensor Fusion in Autonomous Vehicle with Traffic Surveillance Camera System: Detection, Localization, and AI Networking.
   Sensors (Basel). 2023 Mar 22;23(6):3335. doi: 10.3390/s23063335.
2. Part-Based Obstacle Detection Using a Multiple Output Neural Network.
   Sensors (Basel). 2022 Jun 7;22(12):4312. doi: 10.3390/s22124312.
3. Robust Data Association Using Fusion of Data-Driven and Engineered Features for Real-Time Pedestrian Tracking in Thermal Images.
   Sensors (Basel). 2021 Nov 30;21(23):8005. doi: 10.3390/s21238005.