

W-VSLAM: A Visual Mapping Algorithm for Indoor Inspection Robots.

Author information

Luo Dingji, Huang Yucan, Huang Xuchao, Miao Mingda, Gao Xueshan

Affiliations

School of Mechatronical Engineering, Beijing Institute of Technology, Beijing 100081, China.

School of Mechanical Engineering, Jiangsu Ocean University, Lianyungang 222005, China.

Publication information

Sensors (Basel). 2024 Aug 30;24(17):5662. doi: 10.3390/s24175662.

Abstract

In recent years, with the widespread application of indoor inspection robots, high-precision, robust environmental perception has become essential for robotic mapping. Addressing the issues of visual-inertial estimation inaccuracies due to redundant pose degrees of freedom and accelerometer drift during the planar motion of mobile robots in indoor environments, we propose a visual SLAM perception method that integrates wheel odometry information. First, the robot's body pose is parameterized in SE(2) and the corresponding camera pose is parameterized in SE(3). On this basis, we derive the visual constraint residuals and their Jacobian matrices for reprojection observations using the camera projection model. We employ the concept of pre-integration to derive pose-constraint residuals and their Jacobian matrices and utilize marginalization theory to derive the relative pose residuals and their Jacobians for loop closure constraints. This approach solves the nonlinear optimization problem to obtain the optimal pose and landmark points of the ground-moving robot. A comparison with the ORB-SLAM3 algorithm reveals that, in the recorded indoor environment datasets, the proposed algorithm demonstrates significantly higher perception accuracy, with root mean square error (RMSE) improvements of 89.2% in translation and 98.5% in rotation for absolute trajectory error (ATE). The overall trajectory localization accuracy ranges between 5 and 17 cm, validating the effectiveness of the proposed algorithm. These findings can be applied to preliminary mapping for the autonomous navigation of indoor mobile robots and serve as a basis for path planning based on the mapping results.
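The core idea of parameterizing the body pose in SE(2) while the camera pose lives in SE(3) can be illustrated with a minimal sketch. The frame conventions, the body-to-camera extrinsic `T_bc`, and the pinhole intrinsics `K` below are illustrative assumptions, not the paper's exact notation: the planar pose (x, y, yaw) is lifted to a full SE(3) transform, composed with the camera extrinsics, and a standard reprojection residual is formed against an observed pixel.

```python
import numpy as np

def se2_to_se3(x, y, theta):
    """Lift a planar SE(2) body pose (x, y, yaw) to a 4x4 SE(3) matrix.

    Assumes the robot moves on the z = 0 ground plane with yaw-only
    rotation, matching the planar-motion parameterization.
    """
    T = np.eye(4)
    c, s = np.cos(theta), np.sin(theta)
    T[:3, :3] = np.array([[c, -s, 0.0],
                          [s,  c, 0.0],
                          [0.0, 0.0, 1.0]])
    T[:3, 3] = [x, y, 0.0]
    return T

def reprojection_residual(T_wb, T_bc, p_w, uv_obs, K):
    """Pinhole reprojection residual for a world-frame landmark p_w.

    T_wb: body pose in the world frame (SE(3), e.g. from se2_to_se3);
    T_bc: assumed body-to-camera extrinsic; K: 3x3 camera intrinsics.
    """
    T_wc = T_wb @ T_bc                               # camera pose in world
    p_c = np.linalg.inv(T_wc) @ np.append(p_w, 1.0)  # landmark in camera frame
    uv_pred = (K @ p_c[:3])[:2] / p_c[2]             # pinhole projection
    return uv_obs - uv_pred                          # 2D pixel-space residual
```

In a full estimator, residuals of this form (together with the wheel-odometry pre-integration and loop-closure terms) would be stacked into a nonlinear least-squares problem and minimized over the planar pose variables and landmarks; the Jacobians the paper derives are with respect to the SE(2) parameters, not the full six degrees of freedom.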


Article figure: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/779d/11397734/0021b62935d2/sensors-24-05662-g001.jpg
