
Robot System Assistant (RoSA): Towards Intuitive Multi-Modal and Multi-Device Human-Robot Interaction.

Affiliations

Neuro-Information Technology, Otto-von-Guericke-University Magdeburg, 39106 Magdeburg, Germany.

Publication

Sensors (Basel). 2022 Jan 25;22(3):923. doi: 10.3390/s22030923.

DOI: 10.3390/s22030923
PMID: 35161671
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC8838571/
Abstract

This paper presents an implementation of RoSA, a Robot System Assistant, for safe and intuitive human-machine interaction. The interaction modalities were chosen and previously reviewed using a Wizard of Oz study, which showed a strong propensity for speech and pointing gestures. Based on these findings, we design and implement a new multi-modal system for contactless human-machine interaction based on speech, facial, and gesture recognition. We evaluate our proposed system in an extensive study with multiple subjects to examine the user experience and interaction efficiency. The results show that our method achieves usability scores similar to the entirely human remote-controlled robot interaction in our Wizard of Oz study. Furthermore, our framework's implementation is based on the Robot Operating System (ROS), allowing modularity and extendability for our multi-device and multi-user method.
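The abstract describes fusing speech, face, and gesture recognition into robot commands on a modular, ROS-based framework. As an illustrative sketch only (the paper's actual node layout, message types, and the names `SpeechEvent`, `GestureEvent`, and `fuse` below are hypothetical, not RoSA's API), grounding a deictic utterance with a pointing gesture from the same identified user might look like this:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical modality events; field names are illustrative only.
@dataclass
class SpeechEvent:
    user_id: str   # identity from face recognition
    text: str      # recognized utterance, e.g. "pick that up"

@dataclass
class GestureEvent:
    user_id: str
    target: str    # object the pointing gesture resolves to

@dataclass
class RobotCommand:
    user_id: str
    action: str
    target: str

def fuse(speech: SpeechEvent, gesture: Optional[GestureEvent]) -> Optional[RobotCommand]:
    """Combine an utterance with a pointing gesture from the same user.

    Deictic words ("that", "there") are grounded via the gesture target;
    without a matching gesture from the same user, the command cannot
    be grounded and is rejected.
    """
    if "that" in speech.text or "there" in speech.text:
        if gesture is None or gesture.user_id != speech.user_id:
            return None  # deictic reference, but no gesture from this user
        # Take the leading verb as the action (toy command grammar).
        return RobotCommand(speech.user_id, speech.text.split()[0], gesture.target)
    return None

cmd = fuse(SpeechEvent("u1", "pick that up"), GestureEvent("u1", "red_block"))
print(cmd)  # RobotCommand(user_id='u1', action='pick', target='red_block')
```

In a ROS deployment each modality would arrive as messages on its own topic and the fused command would be published to a command topic; this single-process sketch shows only the fusion rule, not the middleware.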

Figures g001 through g018 of the article (sensors-22-00923) are available via the full-text link: https://pmc.ncbi.nlm.nih.gov/articles/PMC8838571/

Similar articles

1
Robot System Assistant (RoSA): Towards Intuitive Multi-Modal and Multi-Device Human-Robot Interaction.
Sensors (Basel). 2022 Jan 25;22(3):923. doi: 10.3390/s22030923.
2
Iconic Gestures for Robot Avatars, Recognition and Integration with Speech.
Front Psychol. 2016 Feb 17;7:183. doi: 10.3389/fpsyg.2016.00183. eCollection 2016.
3
An Interactive Astronaut-Robot System with Gesture Control.
Comput Intell Neurosci. 2016;2016:7845102. doi: 10.1155/2016/7845102. Epub 2016 Apr 11.
4
Hybrid Target Selections by "Hand Gestures + Facial Expression" for a Rehabilitation Robot.
Sensors (Basel). 2022 Dec 26;23(1):237. doi: 10.3390/s23010237.
5
Understanding Multimodal User Gesture and Speech Behavior for Object Manipulation in Augmented Reality Using Elicitation.
IEEE Trans Vis Comput Graph. 2020 Dec;26(12):3479-3489. doi: 10.1109/TVCG.2020.3023566. Epub 2020 Nov 10.
6
AMiCUS-A Head Motion-Based Interface for Control of an Assistive Robot.
Sensors (Basel). 2019 Jun 25;19(12):2836. doi: 10.3390/s19122836.
7
Enhancing Human-Robot Collaboration through a Multi-Module Interaction Framework with Sensor Fusion: Object Recognition, Verbal Communication, User of Interest Detection, Gesture and Gaze Recognition.
Sensors (Basel). 2023 Jun 21;23(13):5798. doi: 10.3390/s23135798.
8
A Framework for Real-Time Gestural Recognition and Augmented Reality for Industrial Applications.
Sensors (Basel). 2024 Apr 10;24(8):2407. doi: 10.3390/s24082407.
9
Hand gesture guided robot-assisted surgery based on a direct augmented reality interface.
Comput Methods Programs Biomed. 2014 Sep;116(2):68-80. doi: 10.1016/j.cmpb.2013.12.018. Epub 2014 Jan 2.
10
Improving gesture-based interaction between an assistive bathing robot and older adults via user training on the gestural commands.
Arch Gerontol Geriatr. 2020 Mar-Apr;87:103996. doi: 10.1016/j.archger.2019.103996. Epub 2019 Dec 13.

Cited by

1
Robot System Assistant (RoSA): evaluation of touch and speech input modalities for on-site HRI and telerobotics.
Front Robot AI. 2025 Jul 30;12:1561188. doi: 10.3389/frobt.2025.1561188. eCollection 2025.
2
Application of augmented reality and surgical robotic navigation in total hip and knee replacement.
Front Surg. 2025 Jul 28;12:1591756. doi: 10.3389/fsurg.2025.1591756. eCollection 2025.
3
Editorial for the Special Issue Recognition Robotics.
Sensors (Basel). 2023 Oct 17;23(20):8515. doi: 10.3390/s23208515.

References

1
Head pose estimation in computer vision: a survey.
IEEE Trans Pattern Anal Mach Intell. 2009 Apr;31(4):607-26. doi: 10.1109/TPAMI.2008.106.
4
Assessing the Value of Multimodal Interfaces: A Study on Human-Machine Interaction in Weld Inspection Workstations.
Sensors (Basel). 2023 May 24;23(11):5043. doi: 10.3390/s23115043.
5
Recent advancements in multimodal human-robot interaction.
Front Neurorobot. 2023 May 11;17:1084000. doi: 10.3389/fnbot.2023.1084000. eCollection 2023.
6
No-code robotic programming for agile production: A new markerless-approach for multimodal natural interaction in a human-robot collaboration context.
Front Robot AI. 2022 Oct 4;9:1001955. doi: 10.3389/frobt.2022.1001955. eCollection 2022.