Suppr 超能文献


RU-SLAM: A Robust Deep-Learning Visual Simultaneous Localization and Mapping (SLAM) System for Weakly Textured Underwater Environments.

Authors

Wang Zhuo, Cheng Qin, Mu Xiaokai

Affiliations

Science and Technology on Underwater Vehicle Laboratory, Harbin Engineering University, Harbin 150001, China.

Qingdao Innovation and Development Center, Harbin Engineering University, Qingdao 266000, China.

Publication

Sensors (Basel). 2024 Mar 18;24(6):1937. doi: 10.3390/s24061937.

DOI: 10.3390/s24061937
PMID: 38544200
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10975413/
Abstract

Accurate and robust simultaneous localization and mapping (SLAM) systems are crucial for autonomous underwater vehicles (AUVs) to perform missions in unknown environments. However, directly applying deep learning-based SLAM methods to underwater environments poses challenges due to weak textures, image degradation, and the inability to accurately annotate keypoints. In this paper, a robust deep-learning visual SLAM system is proposed. First, a feature generator named UWNet is designed to address weak texture and image degradation problems and extract more accurate keypoint features and their descriptors. Further, the idea of knowledge distillation is introduced based on an improved underwater imaging physical model to train the network in a self-supervised manner. Finally, UWNet is integrated into the ORB-SLAM3 to replace the traditional feature extractor. The extracted local and global features are respectively utilized in the feature tracking and closed-loop detection modules. Experimental results on public datasets and self-collected pool datasets verify that the proposed system maintains high accuracy and robustness in complex scenarios.
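The abstract says UWNet is trained in a self-supervised manner using an improved underwater imaging physical model; the paper's own model is not reproduced here. As a point of reference, a minimal sketch of the standard underwater image-formation model (direct transmission plus backscatter), which such pipelines typically use to synthesize degraded/clean training pairs without manual keypoint annotation, might look like this — `underwater_degrade` and all parameter values are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def underwater_degrade(clean, depth, beta=0.8, background=0.3):
    """Synthesize an underwater-degraded image from a clean one using the
    standard formation model I = J*t + B*(1 - t), where the transmission
    t = exp(-beta * depth) follows Beer-Lambert attenuation.

    clean      : float image in [0, 1], shape (H, W) or (H, W, C)
    depth      : per-pixel scene depth in metres, shape (H, W)
    beta       : water attenuation coefficient (assumed value)
    background : veiling-light (backscatter) intensity B (assumed value)
    """
    t = np.exp(-beta * depth)        # transmission map, 1 at depth 0
    if clean.ndim == 3:              # broadcast over colour channels
        t = t[..., None]
    return clean * t + background * (1.0 - t)
```

Pairs produced this way can supervise a feature network: keypoints detected on the clean image become targets for the degraded one, sidestepping the annotation problem the abstract mentions.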

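The abstract also notes that UWNet's global features drive the closed-loop (loop-closure) detection module. A generic sketch of descriptor-based loop detection — cosine similarity between the current frame's global descriptor and those of past keyframes — is shown below; `detect_loop` and the threshold value are assumptions for illustration, not the paper's method:

```python
import numpy as np

def detect_loop(query_desc, keyframe_descs, threshold=0.85):
    """Return indices of keyframes whose global descriptor has cosine
    similarity with the query descriptor at or above `threshold`.

    query_desc     : 1-D global descriptor of the current frame
    keyframe_descs : 2-D array, one global descriptor per past keyframe
    threshold      : similarity cut-off (assumed value)
    """
    q = query_desc / np.linalg.norm(query_desc)
    K = keyframe_descs / np.linalg.norm(keyframe_descs, axis=1,
                                        keepdims=True)
    sims = K @ q                     # cosine similarity per keyframe
    return np.flatnonzero(sims >= threshold)
```

In a full system, candidates returned here would still be geometrically verified before a loop constraint is added to the pose graph.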

Figures (PMC):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bcfd/10975413/18967327ead1/sensors-24-01937-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bcfd/10975413/4c2942cecf6b/sensors-24-01937-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bcfd/10975413/d4b79cdccb4e/sensors-24-01937-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bcfd/10975413/855463b346fd/sensors-24-01937-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bcfd/10975413/930f3e0a98b1/sensors-24-01937-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bcfd/10975413/43257a702770/sensors-24-01937-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bcfd/10975413/65dcf1575283/sensors-24-01937-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bcfd/10975413/527fd8ea500c/sensors-24-01937-g008.jpg

Similar Articles

1
RU-SLAM: A Robust Deep-Learning Visual Simultaneous Localization and Mapping (SLAM) System for Weakly Textured Underwater Environments.
Sensors (Basel). 2024 Mar 18;24(6):1937. doi: 10.3390/s24061937.
2
HFNet-SLAM: An Accurate and Real-Time Monocular SLAM System with Deep Features.
Sensors (Basel). 2023 Feb 13;23(4):2113. doi: 10.3390/s23042113.
3
SEG-SLAM: Dynamic Indoor RGB-D Visual SLAM Integrating Geometric and YOLOv5-Based Semantic Information.
Sensors (Basel). 2024 Mar 25;24(7):2102. doi: 10.3390/s24072102.
4
Improved Point-Line Feature Based Visual SLAM Method for Complex Environments.
Sensors (Basel). 2021 Jul 5;21(13):4604. doi: 10.3390/s21134604.
5
BY-SLAM: Dynamic Visual SLAM System Based on BEBLID and Semantic Information Extraction.
Sensors (Basel). 2024 Jul 19;24(14):4693. doi: 10.3390/s24144693.
6
A Visual-Inertial Pressure Fusion-Based Underwater Simultaneous Localization and Mapping System.
Sensors (Basel). 2024 May 18;24(10):3207. doi: 10.3390/s24103207.
7
Semantic visual simultaneous localization and mapping (SLAM) using deep learning for dynamic scenes.
PeerJ Comput Sci. 2023 Oct 10;9:e1628. doi: 10.7717/peerj-cs.1628. eCollection 2023.
8
Integrating Sparse Learning-Based Feature Detectors into Simultaneous Localization and Mapping-A Benchmark Study.
Sensors (Basel). 2023 Feb 18;23(4):2286. doi: 10.3390/s23042286.
9
Robust visual SLAM algorithm based on target detection and clustering in dynamic scenarios.
Front Neurorobot. 2024 Jul 23;18:1431897. doi: 10.3389/fnbot.2024.1431897. eCollection 2024.
10
Marine Application Evaluation of Monocular SLAM for Underwater Robots.
Sensors (Basel). 2022 Jun 21;22(13):4657. doi: 10.3390/s22134657.
