An Qing, Li Shao, Wan Yanglu, Xuan Wei, Chen Chao, Zhao Bufan, Chen Xijiang
Hubei Engineering Research Center for BDS-Cloud High-Precision Deformation Monitoring, Artificial Intelligence School, Wuchang University of Technology, Wuhan 430223, China.
School of Safety Science and Emergency Management, Wuhan University of Technology, Wuhan 430079, China.
Sensors (Basel). 2025 Aug 26;25(17):5304. doi: 10.3390/s25175304.
Most existing Simultaneous Localization and Mapping (SLAM) systems rely on the assumption of a static environment to achieve reliable and efficient mapping. However, such methods often suffer from degraded localization accuracy and mapping consistency in dynamic settings, as they lack explicit mechanisms to distinguish static from dynamic elements. To overcome this limitation, we present BMP-SLAM, a vision-based SLAM approach that integrates semantic segmentation and Bayesian motion estimation to robustly handle dynamic indoor scenes. To enable real-time dynamic object detection, we integrate YOLOv5, a semantic segmentation network that identifies and localizes dynamic regions within the environment, into a dedicated dynamic target detection thread. In parallel, the data-association-based Bayesian motion probability proposed in this paper effectively eliminates dynamic feature points and reduces the impact of dynamic targets in the environment on the SLAM system. To support robot navigation in complex indoor scenes, the proposed system combines semantic keyframe information with the dynamic object detection outputs to reconstruct high-fidelity 3D point cloud maps of indoor environments. Evaluation on the TUM RGB-D dataset indicates that BMP-SLAM outperforms ORB-SLAM3, improving trajectory tracking accuracy by 96.35%. Comparative evaluations further demonstrate that the proposed system achieves superior performance in dynamic environments, exhibiting both lower trajectory drift and higher positioning precision than state-of-the-art dynamic SLAM methods.
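The per-feature Bayesian motion probability described above can be sketched as a recursive Bayes update driven by whether each feature point falls inside a YOLOv5 dynamic-object mask. This is a minimal illustration, not the paper's actual formulation: the prior, the two likelihood values, and the rejection threshold below are illustrative assumptions, and the function names are hypothetical.

```python
def update_motion_probability(p_prior, in_dynamic_mask,
                              p_hit_given_dynamic=0.9,
                              p_hit_given_static=0.1):
    """One Bayes step: posterior P(dynamic | observation).

    in_dynamic_mask -- whether the feature lies inside a segmented
    dynamic region in the current frame (the observation).
    The two likelihoods model segmentation reliability and are
    illustrative values, not the paper's parameters.
    """
    if in_dynamic_mask:
        lik_dyn, lik_sta = p_hit_given_dynamic, p_hit_given_static
    else:
        lik_dyn, lik_sta = 1.0 - p_hit_given_dynamic, 1.0 - p_hit_given_static
    num = lik_dyn * p_prior
    den = num + lik_sta * (1.0 - p_prior)
    return num / den


def filter_static_features(features, mask_hits, p_init=0.5, threshold=0.6):
    """Discard feature points whose posterior dynamic probability
    exceeds the threshold; the survivors are treated as static and
    passed on to tracking and mapping."""
    kept = []
    for feat, hit in zip(features, mask_hits):
        if update_motion_probability(p_init, hit) < threshold:
            kept.append(feat)
    return kept
```

In a full system this update would run per keyframe, carrying each feature's posterior forward as the next prior, so that features repeatedly observed inside dynamic masks are eliminated while briefly occluded static features survive.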