
Stabilization and Validation of 3D Object Position Using Multimodal Sensor Fusion and Semantic Segmentation.

Affiliation

Computer Science Department, Technical University of Cluj-Napoca, 28 Memorandumului Street, 400114 Cluj Napoca, Romania.

Publication Information

Sensors (Basel). 2020 Feb 18;20(4):1110. doi: 10.3390/s20041110.

Abstract

The stabilization and validation of the measured position of objects is an important step for high-level perception functions and for the correct processing of sensory data. The goal of this process is to detect and handle inconsistencies between different sensor measurements that arise within the perception system. The aggregation of detections from different sensors consists of combining the sensor data into one common reference frame for each identified object, leading to the creation of a super-sensor. The result of the data aggregation may contain errors such as false detections, misplaced object cuboids or an incorrect number of objects in the scene. The stabilization and validation process focuses on mitigating these problems. This paper proposes four contributions for solving the stabilization and validation task for autonomous vehicles, using the following sensors: trifocal camera, fisheye camera, long-range RADAR (Radio Detection and Ranging), and 4-layer and 16-layer LIDARs (Light Detection and Ranging). We propose two original data association methods used in the sensor fusion and tracking processes. The first data association algorithm is designed for tracking LIDAR objects and combines multiple appearance and motion features to exploit the information available for road objects. The second novel data association algorithm is designed for trifocal camera objects and aims to find measurement correspondences to sensor-fused objects so that the super-sensor data are enriched with semantic class information. The implemented trifocal object association solution uses a novel polar association scheme combined with a decision tree to find the best hypothesis-measurement correlations. Another contribution, aimed at stabilizing the position and the unpredictable behavior of road objects observed by multiple types of complementary sensors, is a fusion approach based on the Unscented Kalman Filter and a single-layer perceptron. The last novel contribution addresses the validation of the 3D object position, which is solved using a fuzzy logic technique combined with a semantic segmentation image. The proposed algorithms achieve real-time performance, with a cumulative running time of 90 ms, and have been evaluated against ground-truth data extracted from a high-precision GPS (Global Positioning System) with 2 cm accuracy, obtaining an average error of 0.8 m.
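As an illustration of the first contribution, the sketch below shows how appearance and motion cues can be combined into a single association cost and solved with the Hungarian algorithm; tracks and detections are assumed to be dictionaries of NumPy arrays with `centroid`, `size` and `velocity` keys. The feature set, weights and gating threshold are illustrative assumptions, as the abstract does not specify the exact features used for LIDAR objects.

```python
# Hypothetical sketch of a feature-weighted data association step for LIDAR
# tracks and detections; the paper's actual features and thresholds are not
# given in the abstract.
import numpy as np
from scipy.optimize import linear_sum_assignment

def association_cost(track, det, w_pos=0.5, w_size=0.3, w_vel=0.2):
    """Combine motion (position, velocity) and appearance (cuboid size) cues
    into one scalar cost. The weights are illustrative, not the paper's."""
    pos_dist = np.linalg.norm(track["centroid"] - det["centroid"])
    size_dist = np.linalg.norm(track["size"] - det["size"])
    vel_dist = np.linalg.norm(track["velocity"] - det["velocity"])
    return w_pos * pos_dist + w_size * size_dist + w_vel * vel_dist

def associate(tracks, detections, max_cost=3.0):
    """Solve the track-to-detection assignment with the Hungarian algorithm,
    discarding pairs whose combined cost exceeds the gating threshold."""
    cost = np.array([[association_cost(t, d) for d in detections] for t in tracks])
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_cost]
```

`associate()` returns the accepted (track index, detection index) pairs; unmatched tracks and detections would then be handled by the tracker's creation and deletion logic.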

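The position-stabilization contribution is described as an Unscented Kalman Filter combined with a single-layer perceptron. The following is a minimal sketch of the UKF part only, using a constant-velocity model and the filterpy library; the state layout, noise values and the omission of the perceptron are simplifying assumptions, not the paper's configuration.

```python
# Minimal constant-velocity UKF for smoothing a fused 2D object position.
# State is [x, y, vx, vy]; measurements are fused positions from the super-sensor.
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

def fx(state, dt):
    """Constant-velocity motion model."""
    x, y, vx, vy = state
    return np.array([x + vx * dt, y + vy * dt, vx, vy])

def hx(state):
    """Measurement model: the super-sensor reports the 2D position only."""
    return state[:2]

points = MerweScaledSigmaPoints(n=4, alpha=0.1, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=4, dim_z=2, dt=0.05, fx=fx, hx=hx, points=points)
ukf.x = np.array([0.0, 0.0, 0.0, 0.0])   # initial state
ukf.P = np.eye(4)                         # initial state covariance
ukf.R = np.diag([0.5, 0.5])               # measurement noise (illustrative)
ukf.Q = np.eye(4) * 0.01                  # process noise (illustrative)

for z in [np.array([1.0, 0.2]), np.array([2.1, 0.4])]:  # fused position measurements
    ukf.predict()
    ukf.update(z)
    print(ukf.x[:2])                      # stabilized position estimate
```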
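For the final contribution, one plausible reading of "a fuzzy logic technique combined with a semantic segmentation image" is to project the fused 3D position into the camera, inspect the segmentation labels around the projection, and pass the ratio of object-class pixels through a fuzzy membership function. The class ids, window size and membership breakpoints below are hypothetical, not the paper's values.

```python
# Hypothetical sketch of validating a fused 3D object position against a
# semantic segmentation image. `seg` is a 2D array of class labels and `K` is
# the 3x3 camera intrinsic matrix.
import numpy as np

OBJECT_CLASSES = {11, 12, 13}  # e.g. pedestrian, rider, vehicle in some label map

def project_point(p_cam, K):
    """Pinhole projection of a 3D point (camera frame) with intrinsics K."""
    uvw = K @ p_cam
    return int(round(uvw[0] / uvw[2])), int(round(uvw[1] / uvw[2]))

def fuzzy_membership(x, low=0.2, high=0.6):
    """Piecewise-linear membership: 0 below `low`, 1 above `high`."""
    return float(np.clip((x - low) / (high - low), 0.0, 1.0))

def validate_position(p_cam, K, seg, window=15, accept=0.5):
    """Return (is_valid, confidence) for a 3D object position in the camera frame."""
    if p_cam[2] <= 0:                       # object behind the camera
        return False, 0.0
    u, v = project_point(p_cam, K)
    h, w = seg.shape
    if not (0 <= u < w and 0 <= v < h):     # projects outside the image
        return False, 0.0
    patch = seg[max(0, v - window):v + window + 1,
                max(0, u - window):u + window + 1]
    ratio = np.isin(patch, list(OBJECT_CLASSES)).mean()
    confidence = fuzzy_membership(ratio)
    return confidence >= accept, confidence
```

The fuzzy score can be thresholded to reject misplaced cuboids whose projections fall on background classes such as road or sky.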

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1562/7070899/8cf46be72d08/sensors-20-01110-g001.jpg
