

3D Recognition Based on Sensor Modalities for Robotic Systems: A Survey

Affiliations

Department of Electrical and Computer Engineering, College of Information and Communication Engineering, Sungkyunkwan University, Suwon 16419, Korea.

Publication

Sensors (Basel). 2021 Oct 27;21(21):7120. doi: 10.3390/s21217120.

DOI: 10.3390/s21217120
PMID: 34770429
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC8587961/
Abstract

3D visual recognition is a prerequisite for most autonomous robotic systems operating in the real world. It empowers robots to perform a variety of tasks, such as tracking, understanding the environment, and human-robot interaction. Autonomous robots equipped with 3D recognition capability can better perform their social roles through supportive task assistance in professional jobs and effective domestic services. For active assistance, social robots must recognize their surroundings, including objects and places to perform the task more efficiently. This article first highlights the value-centric role of social robots in society by presenting recently developed robots and describes their main features. Instigated by the recognition capability of social robots, we present the analysis of data representation methods based on sensor modalities for 3D object and place recognition using deep learning models. In this direction, we delineate the research gaps that need to be addressed, summarize 3D recognition datasets, and present performance comparisons. Finally, a discussion of future research directions concludes the article. This survey is intended to show how recent developments in 3D visual recognition based on sensor modalities using deep-learning-based approaches can lay the groundwork to inspire further research and serves as a guide to those who are interested in vision-based robotics applications.


Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c128/8587961/905bb5ed76f3/sensors-21-07120-g037.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c128/8587961/4b225899cc69/sensors-21-07120-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c128/8587961/4b48c43876bb/sensors-21-07120-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c128/8587961/b57834bdf8ea/sensors-21-07120-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c128/8587961/a11203c12760/sensors-21-07120-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c128/8587961/70138c8e0a7c/sensors-21-07120-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c128/8587961/c6667ac28a5f/sensors-21-07120-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c128/8587961/2a94e2af8b30/sensors-21-07120-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c128/8587961/d405c17dbedf/sensors-21-07120-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c128/8587961/d644fc7748e9/sensors-21-07120-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c128/8587961/c02ca774c5e1/sensors-21-07120-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c128/8587961/42f7afdfb684/sensors-21-07120-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c128/8587961/7c56483067c8/sensors-21-07120-g012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c128/8587961/33c1197a4a1b/sensors-21-07120-g013.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c128/8587961/a0fdb8b0825f/sensors-21-07120-g014.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c128/8587961/00c3bb805f23/sensors-21-07120-g015.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c128/8587961/a6fbba60f0d2/sensors-21-07120-g016.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c128/8587961/40f148c2fc35/sensors-21-07120-g017.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c128/8587961/843ba6b1f8ad/sensors-21-07120-g018.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c128/8587961/f557ef87025d/sensors-21-07120-g019.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c128/8587961/d29f2de8f0ed/sensors-21-07120-g020.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c128/8587961/97c073ec788a/sensors-21-07120-g021.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c128/8587961/c0273e39920e/sensors-21-07120-g022.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c128/8587961/86962f2a7da7/sensors-21-07120-g023.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c128/8587961/909d7900d991/sensors-21-07120-g024.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c128/8587961/cc57005c626e/sensors-21-07120-g025.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c128/8587961/cc80d48f2c06/sensors-21-07120-g026.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c128/8587961/cb2543a6f00f/sensors-21-07120-g027.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c128/8587961/552f97bfb03f/sensors-21-07120-g028.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c128/8587961/9b520ded852c/sensors-21-07120-g029.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c128/8587961/394a336443b0/sensors-21-07120-g030.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c128/8587961/f786185e7d09/sensors-21-07120-g031.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c128/8587961/f0e395ee8e85/sensors-21-07120-g032.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c128/8587961/7f03c1b591d6/sensors-21-07120-g033.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c128/8587961/af77149649a1/sensors-21-07120-g034.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c128/8587961/21862bd7dcee/sensors-21-07120-g035.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c128/8587961/2d93f181caa9/sensors-21-07120-g036.jpg

Similar Articles

1. 3D Recognition Based on Sensor Modalities for Robotic Systems: A Survey.
Sensors (Basel). 2021 Oct 27;21(21):7120. doi: 10.3390/s21217120.
2. A Survey on Deep-Learning-Based LiDAR 3D Object Detection for Autonomous Driving.
Sensors (Basel). 2022 Dec 7;22(24):9577. doi: 10.3390/s22249577.
3. A Passive Learning Sensor Architecture for Multimodal Image Labeling: An Application for Social Robots.
Sensors (Basel). 2017 Feb 11;17(2):353. doi: 10.3390/s17020353.
4. Learning-based control approaches for service robots on cloth manipulation and dressing assistance: a comprehensive review.
J Neuroeng Rehabil. 2022 Nov 3;19(1):117. doi: 10.1186/s12984-022-01078-4.
5. RoboCoV Cleaner: An Indoor Autonomous UV-C Disinfection Robot with Advanced Dual-Safety Systems.
Sensors (Basel). 2024 Feb 2;24(3):0. doi: 10.3390/s24030974.
6. Vision-Based Learning from Demonstration System for Robot Arms.
Sensors (Basel). 2022 Mar 31;22(7):2678. doi: 10.3390/s22072678.
7. Transfer of Learning from Vision to Touch: A Hybrid Deep Convolutional Neural Network for Visuo-Tactile 3D Object Recognition.
Sensors (Basel). 2020 Dec 27;21(1):113. doi: 10.3390/s21010113.
8. Performance evaluation of 3D vision-based semi-autonomous control method for assistive robotic manipulator.
Disabil Rehabil Assist Technol. 2018 Feb;13(2):140-145. doi: 10.1080/17483107.2017.1299804. Epub 2017 Mar 22.
9. Visual Sensing and Depth Perception for Welding Robots and Their Industrial Applications.
Sensors (Basel). 2023 Dec 8;23(24):9700. doi: 10.3390/s23249700.
10. Sensor Fusion-Based Cooperative Trail Following for Autonomous Multi-Robot System.
Sensors (Basel). 2019 Feb 17;19(4):823. doi: 10.3390/s19040823.

Cited By

1. Advancements in Learning-Based Navigation Systems for Robotic Applications in MRO Hangar: Review.
Sensors (Basel). 2024 Feb 21;24(5):1377. doi: 10.3390/s24051377.
2. A Multi-Sensor Fusion Approach Based on PIR and Ultrasonic Sensors Installed on a Robot to Localise People in Indoor Environments.
Sensors (Basel). 2023 Aug 5;23(15):6963. doi: 10.3390/s23156963.

References

1. Efficient 3D Point Cloud Feature Learning for Large-Scale Place Recognition.
IEEE Trans Image Process. 2022;31:1258-1270. doi: 10.1109/TIP.2021.3136714. Epub 2022 Jan 25.
2. Relation Graph Network for 3D Object Detection in Point Clouds.
IEEE Trans Image Process. 2021;30:92-107. doi: 10.1109/TIP.2020.3031371. Epub 2020 Nov 18.
3. Event-Based Vision: A Survey.
IEEE Trans Pattern Anal Mach Intell. 2022 Jan;44(1):154-180. doi: 10.1109/TPAMI.2020.3008413. Epub 2021 Dec 7.
4. Deep Learning for 3D Point Clouds: A Survey.
IEEE Trans Pattern Anal Mach Intell. 2021 Dec;43(12):4338-4364. doi: 10.1109/TPAMI.2020.3005434. Epub 2021 Nov 3.
5. Large-Scale Place Recognition Based on Camera-LiDAR Fused Descriptor.
Sensors (Basel). 2020 May 19;20(10):2870. doi: 10.3390/s20102870.
6. From Points to Parts: 3D Object Detection From Point Cloud With Part-Aware and Part-Aggregation Network.
IEEE Trans Pattern Anal Mach Intell. 2021 Aug;43(8):2647-2664. doi: 10.1109/TPAMI.2020.2977026. Epub 2021 Jul 1.
7. Convolutional Neural Networks as a Model of the Visual System: Past, Present, and Future.
J Cogn Neurosci. 2021 Sep 1;33(10):2017-2031. doi: 10.1162/jocn_a_01544.
8. SECOND: Sparsely Embedded Convolutional Detection.
Sensors (Basel). 2018 Oct 6;18(10):3337. doi: 10.3390/s18103337.
9. Learning Effective RGB-D Representations for Scene Recognition.
IEEE Trans Image Process. 2018 Sep 28. doi: 10.1109/TIP.2018.2872629.
10. Fine-Tuning CNN Image Retrieval with No Human Annotation.
IEEE Trans Pattern Anal Mach Intell. 2019 Jul;41(7):1655-1668. doi: 10.1109/TPAMI.2018.2846566. Epub 2018 Jun 12.