Arulkumaran Kai, Di Vincenzo Marina, Dossa Rousslan Fernand Julien, Akiyama Shogo, Ogawa Lillrank Dan, Sato Motoshige, Tomeoka Kenichi, Sasai Shuntaro
Araya Inc., Tokyo, Japan.
Front Robot AI. 2024 May 9;11:1329270. doi: 10.3389/frobt.2024.1329270. eCollection 2024.
Shared autonomy holds promise for assistive robotics, whereby physically impaired people can direct robots to perform various tasks for them. However, a robot capable of many tasks also presents the user with many choices, such as which object or location should be the target of interaction. In the context of non-invasive brain-computer interfaces for shared autonomy (most commonly based on electroencephalography), the two most common options are to provide either auditory or visual stimuli to the user, each with its respective pros and cons. Using the oddball paradigm, we designed comparable auditory and visual interfaces that speak or display the choices to the user, and had users complete a multi-stage robotic manipulation task involving location and object selection. Users displayed differing competencies and preferences across the interfaces, highlighting the importance of considering modalities beyond vision when constructing human-robot interfaces.
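The oddball paradigm mentioned above presents a stream of stimuli in which the attended target occurs rarely, eliciting a detectable neural response (e.g. the P300). A minimal sketch of generating such a stimulus sequence is shown below; the function and parameter names (`oddball_sequence`, `target_prob`, the example choice labels) are illustrative assumptions, not taken from the paper, and real BCI interfaces typically flash all choices equally often, with the "oddball" effect arising from the user's selective attention.

```python
import random

def oddball_sequence(choices, target, n_trials=40, target_prob=0.2, seed=0):
    """Generate a randomized oddball stimulus sequence.

    The attended `target` appears on only a small fraction of trials
    (`target_prob`), so each of its presentations is a rare, surprising
    event relative to the frequent non-target stimuli.
    """
    rng = random.Random(seed)
    n_target = max(1, round(n_trials * target_prob))
    nontargets = [c for c in choices if c != target]
    # Mix the rare target trials with randomly chosen non-target trials,
    # then shuffle so the target's position is unpredictable.
    seq = [target] * n_target + [
        rng.choice(nontargets) for _ in range(n_trials - n_target)
    ]
    rng.shuffle(seq)
    return seq

seq = oddball_sequence(["cup", "bottle", "book", "phone"], target="cup")
```

Each element of `seq` would correspond to one spoken word (auditory interface) or one highlighted item (visual interface); a classifier then identifies which stimulus evoked the target response.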