School of Automotive Studies, Tongji University, Shanghai 201804, China.
Sensors (Basel). 2023 Mar 21;23(6):3296. doi: 10.3390/s23063296.
High-precision maps are widely used by intelligent-driving vehicles for localization and planning tasks. Vision sensors, especially monocular cameras, are favoured in mapping approaches due to their high flexibility and low cost. However, monocular visual mapping suffers severe performance degradation under adverse illumination, such as on low-light roads or in underground spaces. To address this issue, we first introduce an unsupervised learning approach that improves keypoint detection and description on monocular camera images. By emphasizing the consistency between feature points in the learning loss, visual features in dim environments can be extracted more reliably. Second, to suppress scale drift in monocular visual mapping, we present a robust loop-closure detection scheme that integrates feature-point verification with multi-grained image similarity measurements. Experiments on public benchmarks show that our keypoint detection approach is robust to varied illumination. In scenario tests covering both underground and on-road driving, we demonstrate that our approach reduces scale drift in scene reconstruction and achieves a mapping accuracy gain of up to 0.14 m in textureless or low-illumination environments.
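The loop-closure scheme described above combines two signals: a coarse, multi-grained image similarity and a fine feature-point verification. The paper does not give its implementation; the following is a minimal sketch of that two-stage idea, assuming global image descriptors compared by cosine similarity and local keypoint descriptors verified by a mutual-distance ratio test (the function names, thresholds, and descriptor formats here are illustrative, not the authors' actual method).

```python
import numpy as np

def global_similarity(d1, d2):
    # Coarse check: cosine similarity between whole-image descriptors
    # (e.g. pooled CNN features or bag-of-words vectors).
    return float(np.dot(d1, d2) / (np.linalg.norm(d1) * np.linalg.norm(d2)))

def match_ratio(desc_a, desc_b, ratio=0.8):
    # Fine check: fraction of keypoints in image A that find a confident
    # match in image B, using a Lowe-style ratio test between the best
    # and second-best descriptor distances. Requires >= 2 keypoints in B.
    dists = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    order = np.argsort(dists, axis=1)
    rows = np.arange(len(desc_a))
    best = dists[rows, order[:, 0]]
    second = dists[rows, order[:, 1]]
    return float(np.mean(best < ratio * second))

def is_loop_closure(g1, g2, desc_a, desc_b,
                    sim_thresh=0.8, inlier_thresh=0.3):
    # Accept a loop-closure candidate only if BOTH the coarse image-level
    # similarity and the local feature verification pass; the two-stage
    # gating is what suppresses false closures (thresholds are assumed).
    if global_similarity(g1, g2) < sim_thresh:
        return False
    return match_ratio(desc_a, desc_b) >= inlier_thresh
```

In practice the accepted closure would then feed a pose-graph constraint that corrects accumulated scale drift; here only the candidate gating is sketched.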