Uehara Akira, Kawamoto Hiroaki, Sankai Yoshiyuki
Institute of Systems and Information Engineering, University of Tsukuba, Tsukuba, Ibaraki, Japan.
Center for Cybernics Research, University of Tsukuba, Tsukuba, Ibaraki, Japan.
Front Robot AI. 2025 Mar 18;12:1462243. doi: 10.3389/frobt.2025.1462243. eCollection 2025.
With the increasing employment of people with disabilities and growing needs for elderly care and childcare, it is necessary to enhance independence and freedom across generations and spaces. This study aimed to develop a human-collaborative robot that takes multimodal vital information as input within cybernics space, which fuses the "human" with "cyber/physical space," and to confirm its feasibility experimentally. The robot allows the user to operate it via gaze and bio-electrical signals (BES) that reflect the user's intentions, and to transition seamlessly among three modes (i.e., assistant, jockey, and ghost). In the assistant mode, the user collaborates with the robot in the physical space through a system that includes a head-mounted display (HMD) for gaze measurement, a BES measurement unit, a personal mobility system, and an arm-hand system. The HMD can be flipped up and down for hands-free control. The BES measurement unit captures weak signals that leak to the skin surface and indicate the user's voluntary movement intentions; the main unit processes these signals to generate control commands for the various actuators. The personal mobility system features omni-wheels for tight turning, and the arm-hand system can handle payloads of up to 500 g. In the jockey mode, the user remotely operates a small mobile base equipped with a display and a camera, moving it through the physical space. In the ghost mode, the user navigates a virtual space and inputs commands using a smart key and a remote-control device integrated with IoT and wireless communication. Switching among the control modes is estimated from the BES measured at the user's upper arm, the gaze direction, and the user's position, enabling movement, mobility, and manipulation without physical body movement. In basic experiments involving able-bodied participants, the macro averages of recall, precision, and F score were 1.00, 0.90, and 0.94, respectively, in the assistant mode, and 0.85, 0.92, and 0.88, respectively, in the ghost mode. Therefore, a human-collaborative robot utilizing multimodal vital information is feasible for supporting daily-life tasks, contributing to a safer and more secure society by addressing various challenges of daily life.
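As a concrete illustration of the mode-switching idea described above, the following is a minimal Python sketch. The signal names, thresholds, and the select_mode function are hypothetical assumptions introduced for illustration; the abstract does not specify the authors' actual estimator.

```python
# Hypothetical sketch of mode selection from multimodal vital inputs.
# All thresholds and feature names are illustrative assumptions; the
# paper's actual estimation method is not given in the abstract.
from dataclasses import dataclass


@dataclass
class VitalInputs:
    bes_rms: float      # RMS amplitude of upper-arm BES (arbitrary units)
    gaze_x: float       # horizontal gaze direction on the HMD, -1..1
    gaze_y: float       # vertical gaze direction on the HMD, -1..1
    near_robot: bool    # whether the user is positioned at the robot


BES_THRESHOLD = 0.5     # assumed activation threshold for voluntary intent


def select_mode(v: VitalInputs) -> str:
    """Return 'assistant', 'jockey', 'ghost', or 'idle' from fused inputs."""
    if v.bes_rms < BES_THRESHOLD:
        return "idle"                 # no voluntary intent detected
    if v.near_robot:
        return "assistant"            # collaborate in the physical space
    if abs(v.gaze_x) > 0.8:           # gaze dwell at screen edge: remote base
        return "jockey"
    return "ghost"                    # otherwise operate in the virtual space


if __name__ == "__main__":
    sample = VitalInputs(bes_rms=0.7, gaze_x=0.9, gaze_y=0.0, near_robot=False)
    print(select_mode(sample))        # -> "jockey"
```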
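For reference, the reported macro averages are unweighted means of the per-class recall, precision, and F score across the three modes. The sketch below shows that computation; the confusion counts are made-up illustrative values, not the study's data.

```python
# Macro-averaged recall, precision, and F score from per-class confusion
# counts. The counts below are invented for illustration; only the macro
# averaging itself reflects the metric reported in the abstract.
from statistics import mean

# per-class (true positives, false positives, false negatives)
counts = {
    "assistant": (48, 5, 0),
    "jockey":    (45, 3, 6),
    "ghost":     (47, 4, 4),
}


def prf(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f


per_class = [prf(*c) for c in counts.values()]
macro_p = mean(p for p, _, _ in per_class)
macro_r = mean(r for _, r, _ in per_class)
macro_f = mean(f for _, _, f in per_class)
print(f"macro precision={macro_p:.2f}, recall={macro_r:.2f}, F={macro_f:.2f}")
```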