Zhu Bo, Zhang Daohui, Chu Yaqi, Zhao Xingang, Zhang Lixin, Zhao Lina
State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, China.
Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang, China.
Front Neurorobot. 2021 Jul 16;15:692562. doi: 10.3389/fnbot.2021.692562. eCollection 2021.
Patients who have lost limb control, such as those with upper-limb amputation or high paraplegia, are usually unable to care for themselves. Establishing a natural, stable, and comfortable human-computer interface (HCI) for controlling rehabilitation robots and other controllable equipment would resolve many of their difficulties. In this study, a complete limbs-free face-computer interface (FCI) framework based on facial electromyography (fEMG), covering both offline analysis and online control of mechanical equipment, was proposed. Six facial movements involving the eyebrows, eyes, and mouth were used in this FCI. In the offline stage, 12 models, eight types of features, and three feature combination methods for model input were studied and compared in detail. In the online stage, four well-designed sessions were introduced in which a robotic arm was controlled to complete a drinking-water task in three ways (by touch screen, and by fEMG with and without audio feedback) to verify and compare the performance of the proposed FCI framework. Three features and one model with an average offline recognition accuracy of 95.3% (maximum 98.8%, minimum 91.4%) were selected for the online scenarios. The fEMG control mode with audio feedback performed better than the one without. All subjects completed the drinking task within a few minutes using the FCI. The average and smallest time differences between touch-screen control and fEMG control with audio feedback were only 1.24 and 0.37 min, respectively.
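The offline stage described above (feature extraction from fEMG windows, feature combination, and model selection by recognition accuracy) can be illustrated with a minimal Python sketch. The abstract does not name the three selected features or the chosen model, so the sketch assumes three classic time-domain EMG features (mean absolute value, waveform length, root mean square), a serial feature combination, an LDA classifier, and synthetic data; it is illustrative only, not the authors' implementation.

```python
# Minimal sketch of an offline fEMG recognition pipeline.
# Features, model, channel count, and data are assumptions, not the paper's.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def extract_features(window: np.ndarray) -> np.ndarray:
    """Three time-domain features per channel; window: (n_samples, n_channels)."""
    mav = np.mean(np.abs(window), axis=0)                  # mean absolute value
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)   # waveform length
    rms = np.sqrt(np.mean(window ** 2, axis=0))            # root mean square
    return np.concatenate([mav, wl, rms])                  # serial combination

# Hypothetical data: 300 windows of 200 samples x 4 facial EMG channels,
# each labeled with one of six facial movements (0..5).
rng = np.random.default_rng(0)
windows = rng.standard_normal((300, 200, 4))
labels = rng.integers(0, 6, size=300)

X = np.stack([extract_features(w) for w in windows])
clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, X, labels, cv=5)  # offline recognition accuracy
print(f"cross-validated accuracy: {scores.mean():.3f}")
```

In this framing, comparing "12 models, eight feature types, and three combination methods" amounts to repeating the cross-validation above over a grid of classifiers and feature sets and keeping the combination with the best accuracy, which is how the 95.3% average figure would be obtained.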