

A Real-Time Semantic Map Production System for Indoor Robot Navigation.

Authors

Alqobali Raghad, Alnasser Reem, Rashidi Asrar, Alshmrani Maha, Alhmiedat Tareq

Affiliations

Saudi Data and AI Authority, Riyadh 12382, Saudi Arabia.

Information Technology Department, Faculty of Computers and Information Technology, University of Tabuk, Tabuk 71491, Saudi Arabia.

Publication

Sensors (Basel). 2024 Oct 17;24(20):6691. doi: 10.3390/s24206691.

DOI: 10.3390/s24206691
PMID: 39460171
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11511299/
Abstract

Although grid maps help mobile robots navigate indoor environments, they often lack the semantic information that would allow the robot to perform advanced autonomous tasks. In this paper, a semantic map production system is proposed to facilitate indoor mobile robot navigation tasks. The developed system combines LiDAR technology with a vision-based system to obtain a semantic map with rich information, and it has been validated using the Robot Operating System (ROS) and the You Only Look Once (YOLO) v3 object detection model in simulation experiments conducted in indoor environments, adopting low-cost, small-size, low-memory computers for increased accessibility. The obtained results are efficient in terms of object recognition accuracy, object localization error, and semantic map production precision, with an average map construction accuracy of 78.86%.
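The fusion step the abstract describes (a vision-based detector tagging cells of a LiDAR-built occupancy grid with object classes) can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' implementation: the function name, detection format, and grid resolution are assumptions, and a real system would use ROS messages and YOLOv3 outputs rather than plain dicts.

```python
import math

GRID_RES_M = 0.1  # assumed occupancy-grid resolution (metres per cell)

def project_detection(robot_pose, detection, grid, labels):
    """Fuse one camera detection (with a LiDAR-estimated range) into the map.

    robot_pose: (x, y, heading_rad) of the robot in world coordinates.
    detection:  dict with 'class_name' (from the detector), 'range_m'
                (from LiDAR) and 'bearing_rad' (from the image position).
    grid:       2D occupancy grid, row-major (0 = free, 1 = occupied).
    labels:     dict mapping (row, col) cells to semantic class names.
    """
    x, y, heading = robot_pose
    # World coordinates of the detected object.
    wx = x + detection["range_m"] * math.cos(heading + detection["bearing_rad"])
    wy = y + detection["range_m"] * math.sin(heading + detection["bearing_rad"])
    # Convert to grid indices (rounding avoids float truncation artefacts).
    col = int(round(wx / GRID_RES_M))
    row = int(round(wy / GRID_RES_M))
    if 0 <= row < len(grid) and 0 <= col < len(grid[0]):
        grid[row][col] = 1                            # mark the cell occupied
        labels[(row, col)] = detection["class_name"]  # attach the semantic tag
    return row, col

# Example: on a 2 m x 2 m map, the robot at (0.5, 0.5) facing +x sees a
# chair 1 m straight ahead, so the label lands at world (1.5, 0.5).
grid = [[0] * 20 for _ in range(20)]
labels = {}
cell = project_detection((0.5, 0.5, 0.0),
                         {"class_name": "chair", "range_m": 1.0,
                          "bearing_rad": 0.0},
                         grid, labels)
```

Repeating this for every detection while the robot explores yields the label layer of the semantic map on top of the ordinary occupancy grid.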


Figures (PMC, sensors-24-06691-g001 to g014):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/43d0/11511299/ac23b79fbf8f/sensors-24-06691-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/43d0/11511299/9b3885939b3a/sensors-24-06691-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/43d0/11511299/b6bd29993710/sensors-24-06691-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/43d0/11511299/ed5b1f3b0d29/sensors-24-06691-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/43d0/11511299/f676119da52b/sensors-24-06691-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/43d0/11511299/cb0f0f5eee34/sensors-24-06691-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/43d0/11511299/385fb422d500/sensors-24-06691-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/43d0/11511299/6585c9101b2a/sensors-24-06691-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/43d0/11511299/234c0b1d959c/sensors-24-06691-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/43d0/11511299/4ec30eb84fc2/sensors-24-06691-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/43d0/11511299/200f8e1e043a/sensors-24-06691-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/43d0/11511299/4b1251f6b9ca/sensors-24-06691-g012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/43d0/11511299/e2a11c04c5af/sensors-24-06691-g013.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/43d0/11511299/ccfa84591c97/sensors-24-06691-g014.jpg

Similar articles

1. A Real-Time Semantic Map Production System for Indoor Robot Navigation.
Sensors (Basel). 2024 Oct 17;24(20):6691. doi: 10.3390/s24206691.
2. Autonomous Navigation System of Greenhouse Mobile Robot Based on 3D Lidar and 2D Lidar SLAM.
Front Plant Sci. 2022 Mar 10;13:815218. doi: 10.3389/fpls.2022.815218. eCollection 2022.
3. Monocular camera and laser based semantic mapping system with temporal-spatial data association for indoor mobile robots.
Multimed Tools Appl. 2023 Mar 7:1-26. doi: 10.1007/s11042-023-14796-1.
4. Improved Hybrid Model for Obstacle Detection and Avoidance in Robot Operating System Framework (Rapidly Exploring Random Tree and Dynamic Windows Approach).
Sensors (Basel). 2024 Apr 2;24(7):2262. doi: 10.3390/s24072262.
5. Research and Implementation of Autonomous Navigation for Mobile Robots Based on SLAM Algorithm under ROS.
Sensors (Basel). 2022 May 31;22(11):4172. doi: 10.3390/s22114172.
6. Safe and Robust Mobile Robot Navigation in Uneven Indoor Environments.
Sensors (Basel). 2019 Jul 7;19(13):2993. doi: 10.3390/s19132993.
7. Learning indoor robot navigation using visual and sensorimotor map information.
Front Neurorobot. 2013 Oct 7;7:15. doi: 10.3389/fnbot.2013.00015. eCollection 2013.
8. Vision-based Mobile Indoor Assistive Navigation Aid for Blind People.
IEEE Trans Mob Comput. 2019 Mar;18(3):702-714. doi: 10.1109/TMC.2018.2842751. Epub 2018 Jun 1.
9. ROS-Based Autonomous Navigation Robot Platform with Stepping Motor.
Sensors (Basel). 2023 Mar 31;23(7):3648. doi: 10.3390/s23073648.
10. Navigation and Self-Semantic Location of Drones in Indoor Environments by Combining the Visual Bug Algorithm and Entropy-Based Vision.
Front Neurorobot. 2017 Aug 29;11:46. doi: 10.3389/fnbot.2017.00046. eCollection 2017.

Cited by

1. Utilizing a deep neural network for robot semantic classification in indoor environments.
Sci Rep. 2025 Jul 1;15(1):21937. doi: 10.1038/s41598-025-07921-7.
2. Multi-Domain Indoor Dataset for Visual Place Recognition and Anomaly Detection by Mobile Robots.
Sci Data. 2025 May 19;12(1):817. doi: 10.1038/s41597-025-05124-3.
3. Vision-Based Collision Warning Systems with Deep Learning: A Systematic Review.
J Imaging. 2025 Feb 17;11(2):64. doi: 10.3390/jimaging11020064.

References

1. A survey on deep multimodal learning for computer vision: advances, trends, applications, and datasets.
Vis Comput. 2022;38(8):2939-2970. doi: 10.1007/s00371-021-02166-7. Epub 2021 Jun 10.
2. Visual SLAM for robot navigation in healthcare facility.
Pattern Recognit. 2021 May;113:107822. doi: 10.1016/j.patcog.2021.107822. Epub 2021 Jan 16.