A comparison of visual and auditory EEG interfaces for robot multi-stage task control.

Author Information

Arulkumaran Kai, Di Vincenzo Marina, Dossa Rousslan Fernand Julien, Akiyama Shogo, Ogawa Lillrank Dan, Sato Motoshige, Tomeoka Kenichi, Sasai Shuntaro

Affiliations

Araya Inc., Tokyo, Japan.

Publication Information

Front Robot AI. 2024 May 9;11:1329270. doi: 10.3389/frobt.2024.1329270. eCollection 2024.

Abstract

Shared autonomy holds promise for assistive robotics, whereby physically-impaired people can direct robots to perform various tasks for them. However, a robot that is capable of many tasks also introduces many choices for the user, such as which object or location should be the target of interaction. In the context of non-invasive brain-computer interfaces for shared autonomy (most commonly electroencephalography-based), the two most common choices are to provide either auditory or visual stimuli to the user, each with their respective pros and cons. Using the oddball paradigm, we designed comparable auditory and visual interfaces to speak/display the choices to the user, and had users complete a multi-stage robotic manipulation task involving location and object selection. Users displayed differing competencies and preferences for the different interfaces, highlighting the importance of considering modalities outside of vision when constructing human-robot interfaces.
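
For illustration only (not taken from the paper), the sketch below shows how the oddball paradigm can drive a choice selection: each candidate option is spoken or flashed repeatedly in random order, and the attended option is identified by the stronger event-related response (a P300-like deflection) it evokes. The window sizes, amplitudes, function names, and the simple averaging decoder are all assumptions made for this toy example.

```python
# Illustrative sketch only: a toy oddball-paradigm choice decoder.
# The P300 window, amplitudes, and the averaging decoder are assumptions
# for this example, not the authors' implementation.
import numpy as np

rng = np.random.default_rng(0)

N_CHOICES = 4     # e.g. candidate objects or locations offered by the robot
N_REPEATS = 10    # presentations of each stimulus (spoken word or flash)
EPOCH_LEN = 200   # samples kept after each stimulus onset

def oddball_sequence(n_choices, n_repeats):
    """Randomised stimulus order; every option is rare within the stream,
    so the attended one should elicit a stronger P300-like response."""
    order = np.repeat(np.arange(n_choices), n_repeats)
    rng.shuffle(order)
    return order

def synthetic_epoch(is_target):
    """Toy EEG epoch: Gaussian noise plus a late positive deflection
    (a crude stand-in for a P300) when the stimulus is attended."""
    epoch = rng.normal(0.0, 1.0, EPOCH_LEN)
    if is_target:
        epoch[60:120] += 1.5
    return epoch

def decode_choice(order, epochs, n_choices):
    """Average epochs per stimulus and pick the option with the largest
    mean amplitude inside the assumed P300 window."""
    scores = [epochs[order == c, 60:120].mean() for c in range(n_choices)]
    return int(np.argmax(scores))

attended = 2  # the option the user is attending to
order = oddball_sequence(N_CHOICES, N_REPEATS)
epochs = np.stack([synthetic_epoch(c == attended) for c in order])
print("decoded choice:", decode_choice(order, epochs, N_CHOICES))
```

In practice, real EEG epochs and a trained classifier would replace the synthetic data and the window-average decoder used here.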

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2b6d/11111866/1677e6a7b3d6/frobt-11-1329270-g001.jpg
