Yu Shumei, Wu Junyi, Xu Haidong, Sun Rongchuan, Sun Lining
School of Mechanical and Electrical Engineering, Soochow University, Suzhou, China.
Front Neurorobot. 2020 Sep 25;14:568091. doi: 10.3389/fnbot.2020.568091. eCollection 2020.
This paper describes an improved brain-inspired simultaneous localization and mapping algorithm (RatSLAM) that extracts visual features from saliency maps generated by a frequency-tuned (FT) model. In the traditional RatSLAM algorithm, the visual template is organized as a one-dimensional vector whose values depend only on pixel intensity, which makes the feature susceptible to changes in illumination. In contrast to that approach, which generates visual templates directly from raw RGB images, we propose using an FT model to convert RGB images into saliency maps from which the visual templates are obtained. Templates extracted from saliency maps retain more of the feature information present in the original images. Our experimental results demonstrate that loop closure detection accuracy improved, as measured by the number of loop closures detected by our method compared with the traditional RatSLAM system. We additionally verified that the proposed FT model-based visual templates improve the robustness of RatSLAM when identifying familiar visual scenes.
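As a rough illustration of the pipeline the abstract describes, the sketch below implements the frequency-tuned saliency model of Achanta et al. (2009), which the FT acronym refers to, followed by a RatSLAM-style one-dimensional template comparison. The function names, the 60-pixel template width, and the column-averaging step are illustrative assumptions, not the authors' implementation.

```python
import cv2
import numpy as np

def ft_saliency(bgr_image):
    """Frequency-tuned saliency (Achanta et al., 2009): per-pixel
    Euclidean distance in Lab space between the Gaussian-blurred
    image and the mean Lab color of the whole image."""
    blurred = cv2.GaussianBlur(bgr_image, (5, 5), 0)
    lab = cv2.cvtColor(blurred, cv2.COLOR_BGR2LAB).astype(np.float64)
    mean_lab = lab.reshape(-1, 3).mean(axis=0)
    saliency = np.linalg.norm(lab - mean_lab, axis=2)
    # Normalize to [0, 1] so templates are comparable across frames.
    span = saliency.max() - saliency.min()
    return (saliency - saliency.min()) / (span + 1e-9)

def visual_template(saliency_map, width=60):
    """Collapse a saliency map into a one-dimensional template by
    column-averaging a downsampled map (RatSLAM-style intensity
    profile, here computed on saliency instead of raw pixels)."""
    small = cv2.resize(saliency_map, (width, width))
    return small.mean(axis=0)

def template_distance(t_a, t_b):
    """Mean absolute difference between two templates; a value below
    a tuned threshold marks the current scene as familiar, i.e., a
    loop closure candidate."""
    return float(np.mean(np.abs(t_a - t_b)))
```

The full RatSLAM matcher additionally shifts templates against each other and keeps the minimum difference to tolerate small viewpoint changes; that refinement is omitted here for brevity.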