Chang Yimeng, Hu Jun, Xu Shiyou
School of Electronics and Communication Engineering, Sun Yat-sen University, Shenzhen 518107, China.
Sensors (Basel). 2023 Sep 15;23(18):7921. doi: 10.3390/s23187921.
With the rapid development of autonomous driving and robotics applications in recent years, visual Simultaneous Localization and Mapping (SLAM) has become a hot research topic. The majority of visual SLAM systems rely on the assumption of scene rigidity, which may not always hold true in real applications. In dynamic environments, SLAM systems that do not account for dynamic objects will easily fail to estimate the camera pose. Some existing methods attempt to address this issue by simply excluding the dynamic features lying on moving objects, but this may lead to a shortage of features for tracking. To tackle this problem, we propose OTE-SLAM, an object-tracking-enhanced visual SLAM system, which not only tracks the camera motion but also tracks the movement of dynamic objects. Furthermore, we perform joint optimization of both the camera pose and the object 3D position, enabling a mutual benefit between visual SLAM and object tracking. Experimental results demonstrate that the proposed approach improves the accuracy of the SLAM system in challenging dynamic environments. The improvements include maximum reductions in absolute trajectory error and relative trajectory error of 22% and 33%, respectively.