Yang Wang, Chao Huang, Yi Zhang, Shuyi Tan
School of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing, China.
Information Accessibility Engineering R&D Center, Chongqing University of Posts and Telecommunications, Chongqing, China.
PLoS One. 2025 Jul 18;20(7):e0328052. doi: 10.1371/journal.pone.0328052. eCollection 2025.
Visual Simultaneous Localization and Mapping (VSLAM) is a key technology for autonomous navigation of mobile robots. However, feature-based VSLAM systems still face two major challenges in complex dynamic environments: insufficient feature reliability and significant dynamic interference, which make improved matching robustness an urgent need. This paper proposes a dynamic adaptive VSLAM system based on a High-repeatability and High-reliability feature matching network (2HR-Net), which improves localization accuracy in dynamic environments through three key innovations. First, the 2HR feature detection network is designed, integrating the K-Means clustering algorithm into L2-Net to achieve feature point detection with both high repeatability and high reliability. Second, the lightweight YOLOv8n model is integrated to detect and remove feature points in dynamic regions in real time, effectively reducing the impact of dynamic interference on pose estimation. Finally, a weight-sharing Siamese matching network with a dual-branch feature fusion strategy and a similarity optimization algorithm is proposed to enhance feature matching accuracy. The proposed algorithm was validated on the publicly available TUM dataset. The experimental results show that the proposed feature detection method achieved a repeatability rate of approximately 70% across various dynamic scenarios, significantly higher than that of traditional methods such as ORB-SLAM3, whose repeatability typically falls below 40%. In addition, compared with ORB-SLAM3, the root mean square error (RMSE) and standard deviation (S.D.) of the Absolute Trajectory Error (ATE) in various dynamic scenarios were reduced by approximately 90%, indicating higher localization accuracy and stability.
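Detector repeatability, as quoted above, is commonly measured as the fraction of keypoints in one view that are re-detected within a small pixel threshold in a second view whose keypoints have been projected into the first view's image plane. The sketch below illustrates that standard definition only; the function name, the 3-pixel threshold, and the toy arrays are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def repeatability(kp_a: np.ndarray, kp_b_in_a: np.ndarray, eps: float = 3.0) -> float:
    """Fraction of view-A keypoints re-detected in view B.

    kp_b_in_a holds view-B keypoints already projected into view A's
    image plane (e.g. via a known homography or ground-truth pose).
    A keypoint counts as repeated if some projected keypoint lies
    within eps pixels of it -- a common repeatability definition.
    """
    if len(kp_a) == 0 or len(kp_b_in_a) == 0:
        return 0.0
    # pairwise Euclidean distances, shape (num_a, num_b)
    d = np.linalg.norm(kp_a[:, None, :] - kp_b_in_a[None, :, :], axis=2)
    return float(np.mean(d.min(axis=1) <= eps))

# toy check: one of two keypoints is re-detected within 3 px
kp_a = np.array([[0.0, 0.0], [10.0, 10.0]])
kp_b = np.array([[1.0, 0.0], [50.0, 50.0]])
score = repeatability(kp_a, kp_b)  # 0.5
```

Under this definition, the ~70% figure reported for 2HR-Net means roughly seven in ten detected keypoints reappear at the corresponding location in a second dynamic-scene view.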
The results thus demonstrate that the proposed method outperforms mainstream systems such as ORB-SLAM3 in feature repeatability, matching accuracy, and localization precision, providing an effective solution for robust VSLAM in dynamic environments.
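The ATE statistics cited above (RMSE and S.D. of the per-frame translational error) follow the standard definition used with the TUM benchmark. A minimal sketch, assuming the two trajectories are already time-associated and aligned (e.g. by a Horn/Umeyama fit); the function name and toy data are illustrative, not from the paper:

```python
import numpy as np

def ate_stats(gt_xyz: np.ndarray, est_xyz: np.ndarray) -> tuple[float, float]:
    """RMSE and S.D. of the translational Absolute Trajectory Error.

    Both inputs are (N, 3) position arrays, assumed time-associated
    and aligned to a common frame beforehand.
    """
    err = np.linalg.norm(gt_xyz - est_xyz, axis=1)  # per-frame error (m)
    rmse = float(np.sqrt(np.mean(err ** 2)))
    sd = float(np.std(err))
    return rmse, sd

# toy usage: a constant 0.1 m offset gives RMSE 0.1 m and S.D. 0
gt = np.zeros((5, 3))
est = gt + np.array([0.1, 0.0, 0.0])
rmse, sd = ate_stats(gt, est)
```

A ~90% reduction in both statistics relative to ORB-SLAM3 therefore means both the typical error magnitude and its spread across the trajectory shrink by an order of magnitude.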