Robustness Improvement of Visual Templates Matching Based on Frequency-Tuned Model in RatSLAM.

Author Information

Yu Shumei, Wu Junyi, Xu Haidong, Sun Rongchuan, Sun Lining

Affiliations

School of Mechanical and Electrical Engineering, Soochow University, Suzhou, China.

Publication Information

Front Neurorobot. 2020 Sep 25;14:568091. doi: 10.3389/fnbot.2020.568091. eCollection 2020.

Abstract

This paper describes an improved brain-inspired simultaneous localization and mapping (RatSLAM) that extracts visual features from saliency maps using a frequency-tuned (FT) model. In the traditional RatSLAM algorithm, the visual template feature is organized as a one-dimensional vector whose values only depend on pixel intensity; therefore, this feature is susceptible to changes in illumination intensity. In contrast to this approach, which directly generates visual templates from raw RGB images, we propose an FT model that converts RGB images into saliency maps to obtain visual templates. The visual templates extracted from the saliency maps retain more of the feature information present in the original images. Our experimental results demonstrate that the accuracy of loop closure detection was improved, as measured by the number of loop closures detected by our method compared with the traditional RatSLAM system. We additionally verified that the proposed FT model-based visual templates improve the robustness of familiar visual scene identification by RatSLAM.
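To make the pipeline described above concrete, the sketch below shows how a frequency-tuned saliency map (in the sense of Achanta et al.'s FT model: the distance of each pixel's Lab value from the mean Lab value of a lightly blurred image) can be computed and then collapsed into a one-dimensional visual template for comparison. This is a minimal illustration under stated assumptions, not the authors' implementation: the function names, the 64-element template width, the column-averaging reduction, and the sum-of-absolute-differences comparison are all illustrative choices.

```python
import cv2
import numpy as np


def ft_saliency(bgr_image):
    """Frequency-tuned saliency: distance of each pixel's Lab vector from the image mean."""
    # Smooth first to suppress fine texture and noise (the FT model uses a small Gaussian).
    blurred = cv2.GaussianBlur(bgr_image, (5, 5), 0)
    lab = cv2.cvtColor(blurred, cv2.COLOR_BGR2LAB).astype(np.float32)
    mean_lab = lab.reshape(-1, 3).mean(axis=0)
    saliency = np.linalg.norm(lab - mean_lab, axis=2)
    # Normalize to [0, 1] so maps from different frames are comparable.
    return (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-8)


def visual_template(saliency_map, width=64):
    """Collapse a saliency map into a 1-D template by averaging each column (assumed reduction)."""
    resized = cv2.resize(saliency_map, (width, width // 2))
    template = resized.mean(axis=0)
    # Zero-mean / unit-variance normalization reduces sensitivity to global illumination changes.
    return (template - template.mean()) / (template.std() + 1e-8)


def template_distance(t1, t2):
    """Sum of absolute differences; a small distance marks a familiar scene (loop closure candidate)."""
    return float(np.abs(t1 - t2).sum())


if __name__ == "__main__":
    # "scene_a.jpg" and "scene_b.jpg" are hypothetical camera frames.
    img_a = cv2.imread("scene_a.jpg")
    img_b = cv2.imread("scene_b.jpg")
    d = template_distance(visual_template(ft_saliency(img_a)),
                          visual_template(ft_saliency(img_b)))
    print("template distance:", d)
```

In a RatSLAM-style system, a new frame's template would be compared against all stored templates in this way, and a distance below a threshold would be treated as a revisited scene and fed to the pose-cell network as a loop closure cue.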


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/20f5/7546858/df0fadd68c66/fnbot-14-568091-g0001.jpg
