
Optimizing Appearance-Based Localization with Catadioptric Cameras: Small-Footprint Models for Real-Time Inference on Edge Devices.

Authors

Rostkowska Marta, Skrzypczyński Piotr

Affiliation

Institute of Robotics and Machine Intelligence, Poznan University of Technology, 60-965 Poznan, Poland.

Publication

Sensors (Basel). 2023 Jul 18;23(14):6485. doi: 10.3390/s23146485.

DOI: 10.3390/s23146485
PMID: 37514780
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10385632/
Abstract

This paper considers the task of appearance-based localization: visual place recognition from omnidirectional images obtained from catadioptric cameras. The focus is on designing an efficient neural network architecture that accurately and reliably recognizes indoor scenes on distorted images from a catadioptric camera, even in self-similar environments with few discernible features. As the target application is the global localization of a low-cost service mobile robot, the proposed solutions are optimized toward being small-footprint models that provide real-time inference on edge devices, such as Nvidia Jetson. We compare several design choices for the neural network-based architecture of the localization system and then demonstrate that the best results are achieved with embeddings (global descriptors) yielded by exploiting transfer learning and fine tuning on a limited number of catadioptric images. We test our solutions on two small-scale datasets collected using different catadioptric cameras in the same office building. Next, we compare the performance of our system to state-of-the-art visual place recognition systems on the publicly available COLD Freiburg and Saarbrücken datasets that contain images collected under different lighting conditions. Our system compares favourably to the competitors both in terms of the accuracy of place recognition and the inference time, providing a cost- and energy-efficient means of appearance-based localization for an indoor service robot.
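As a rough illustration of the retrieval step the abstract describes (a query embedding matched against a database of global descriptors), the sketch below performs nearest-neighbour place recognition by cosine similarity. It is a minimal stand-in, not the paper's implementation: the CNN that produces the embeddings (transfer learning plus fine-tuning in the paper) is replaced by random vectors, and the function name `recognize_place`, the 128-dimensional descriptors, and the room labels are illustrative assumptions.

```python
# Sketch of the matching stage in appearance-based localization:
# each reference image is reduced to a fixed-size embedding (global
# descriptor), and a query is localized by nearest-neighbour search.
# Random vectors stand in for the CNN-derived embeddings of the paper.
import numpy as np

def l2_normalize(v: np.ndarray) -> np.ndarray:
    """Scale a vector (or each row of a matrix) to unit L2 norm."""
    norm = np.linalg.norm(v, axis=-1, keepdims=True)
    return v / np.maximum(norm, 1e-12)

def recognize_place(query: np.ndarray, db: np.ndarray, labels: list) -> str:
    """Return the label of the database embedding most similar to the
    query (cosine similarity = dot product of unit-normalized vectors)."""
    sims = l2_normalize(db) @ l2_normalize(query)
    return labels[int(np.argmax(sims))]

rng = np.random.default_rng(0)
db = rng.normal(size=(5, 128))               # 5 reference places, 128-D descriptors
labels = [f"room_{i}" for i in range(5)]
query = db[3] + 0.05 * rng.normal(size=128)  # noisy view of room_3
print(recognize_place(query, db, labels))    # prints "room_3"
```

In the paper's setting the database would hold descriptors of reference views of known places; since a query costs only one matrix-vector product over unit-normalized embeddings, this matching step is cheap enough for real-time inference on an edge device.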


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/04a2/10385632/24098d62e2b5/sensors-23-06485-g016.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/04a2/10385632/5d3b7c9e3cc8/sensors-23-06485-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/04a2/10385632/92c43d3cbe9f/sensors-23-06485-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/04a2/10385632/965025ca8cce/sensors-23-06485-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/04a2/10385632/6b52770bf7e4/sensors-23-06485-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/04a2/10385632/75e6afb00c15/sensors-23-06485-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/04a2/10385632/ca3cfaf1954d/sensors-23-06485-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/04a2/10385632/20f2a467a8db/sensors-23-06485-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/04a2/10385632/1ad6bf088dd0/sensors-23-06485-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/04a2/10385632/445362e6dadc/sensors-23-06485-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/04a2/10385632/ace6b63e4cdd/sensors-23-06485-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/04a2/10385632/d9d926da1ae3/sensors-23-06485-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/04a2/10385632/f7ed339415df/sensors-23-06485-g012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/04a2/10385632/234ccd148e17/sensors-23-06485-g013.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/04a2/10385632/1206ee76f32b/sensors-23-06485-g014.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/04a2/10385632/1add1f040b27/sensors-23-06485-g015.jpg

Similar Articles

1. Optimizing Appearance-Based Localization with Catadioptric Cameras: Small-Footprint Models for Real-Time Inference on Edge Devices.
Sensors (Basel). 2023 Jul 18;23(14):6485. doi: 10.3390/s23146485.
2. Self-Localization of Mobile Robots Using a Single Catadioptric Camera with Line Feature Extraction.
Sensors (Basel). 2021 Jul 9;21(14):4719. doi: 10.3390/s21144719.
3. Accurate and Robust Monocular SLAM with Omnidirectional Cameras.
Sensors (Basel). 2019 Oct 16;19(20):4494. doi: 10.3390/s19204494.
4. Estimating the position and orientation of a mobile robot with respect to a trajectory using omnidirectional imaging and global appearance.
PLoS One. 2017 May 2;12(5):e0175938. doi: 10.1371/journal.pone.0175938. eCollection 2017.
5. Performance of global-appearance descriptors in map building and localization using omnidirectional vision.
Sensors (Basel). 2014 Feb 14;14(2):3033-64. doi: 10.3390/s140203033.
6. Map building and Monte Carlo localization using global appearance of omnidirectional images.
Sensors (Basel). 2010;10(12):11468-97. doi: 10.3390/s101211468. Epub 2010 Dec 14.
7. Visual Information Fusion through Bayesian Inference for Adaptive Probability-Oriented Feature Matching.
Sensors (Basel). 2018 Jun 26;18(7):2041. doi: 10.3390/s18072041.
8. Position estimation and local mapping using omnidirectional images and global appearance descriptors.
Sensors (Basel). 2015 Oct 16;15(10):26368-95. doi: 10.3390/s151026368.
9. Indoor Place Category Recognition for a Cleaning Robot by Fusing a Probabilistic Approach and Deep Learning.
IEEE Trans Cybern. 2022 Aug;52(8):7265-7276. doi: 10.1109/TCYB.2021.3052499. Epub 2022 Jul 19.
10. Parametric distortion-adaptive neighborhood for omnidirectional camera.
Appl Opt. 2015 Aug 10;54(23):6969-78. doi: 10.1364/AO.54.006969.
