Kim Chunghwan, Kim Chaeyoon, Kim HyunSub, Kwak HwyKuen, Lee WooJin, Im Chang-Hwan
Department of Electronic Engineering, Hanyang University, Seoul, 04763 Republic of Korea.
Department of HY-KIST Bio-Convergence, Hanyang University, Seoul, 04763 Republic of Korea.
Biomed Eng Lett. 2023 Apr 11;13(3):465-473. doi: 10.1007/s13534-023-00277-9. eCollection 2023 Aug.
The rapid expansion of virtual reality (VR) and augmented reality (AR) into various applications has increased the demand for hands-free input interfaces in situations where traditional control methods are inapplicable (e.g., for paralyzed individuals who cannot move their hands). Facial electromyogram (fEMG), the bioelectric signal generated by facial muscles, can address this problem: because fEMG signals vary with facial gestures, they can be used to discriminate gestures and thereby generate discrete hands-free control commands. This study implemented an fEMG-based facial gesture recognition system for generating discrete commands to control an AR or VR environment. fEMG signals around the eyes were recorded, assuming that the fEMG electrodes were embedded in the VR head-mounted display (HMD). Sixteen discrete facial gestures were classified using linear discriminant analysis (LDA) with Riemannian geometry features. Because the electrodes were far from the facial muscles associated with some gestures, certain similar gestures were indistinguishable from each other; this study therefore determined the facial gesture combinations yielding the highest classification accuracy for 3-15 commands. An analysis of fEMG data acquired from 15 participants showed that the optimal facial gesture combinations increased accuracy by 4.7 percentage points compared with randomly selected combinations. Moreover, this study is the first to investigate the feasibility of a subject-independent facial gesture recognition system that does not require individual user training sessions. Lastly, the online hands-free control system was successfully applied to a media player to demonstrate the applicability of the proposed system.
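The classification pipeline named in the abstract (Riemannian geometry features fed to LDA) can be sketched as follows. This is not the authors' code: the epoch shapes, the arithmetic-mean reference point (a cheap stand-in for the true Riemannian mean), the shrinkage constant, and the synthetic two-class data are all illustrative assumptions. The sketch computes per-trial spatial covariance matrices from multichannel fEMG epochs, projects them into the tangent space at a reference covariance, and classifies the resulting feature vectors with a minimal equal-prior LDA (nearest class mean under the pooled within-class Mahalanobis metric).

```python
# Hedged sketch of Riemannian tangent-space features + LDA (illustrative only).
import numpy as np
from scipy.linalg import sqrtm, logm, inv

rng = np.random.default_rng(0)

def covariances(epochs):
    # epochs: (n_trials, n_channels, n_samples) -> one SPD covariance per trial
    n_trials, n_ch, n_samp = epochs.shape
    return np.array([e @ e.T / n_samp for e in epochs])

def tangent_features(covs, c_ref):
    # Project SPD matrices to the tangent space at the reference covariance,
    # then vectorize the upper triangle (off-diagonals weighted by sqrt(2)).
    p = np.real(inv(sqrtm(c_ref)))
    iu = np.triu_indices(covs.shape[1])
    w = np.where(iu[0] == iu[1], 1.0, np.sqrt(2.0))
    feats = []
    for c in covs:
        s = np.real(logm(p @ c @ p))  # matrix log of the whitened covariance
        feats.append(s[iu] * w)
    return np.array(feats)

class SimpleLDA:
    # Minimal multi-class LDA: nearest class mean under the pooled
    # within-class (Mahalanobis) metric, equal priors assumed.
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.means_ = np.array([X[y == k].mean(axis=0) for k in self.classes_])
        resid = np.concatenate(
            [X[y == k] - m for k, m in zip(self.classes_, self.means_)])
        cov = resid.T @ resid / len(resid) + 1e-6 * np.eye(X.shape[1])  # shrinkage
        self.icov_ = np.linalg.inv(cov)
        return self

    def predict(self, X):
        # Per-sample squared Mahalanobis distance to each class mean
        d = [np.einsum('ij,jk,ik->i', X - m, self.icov_, X - m)
             for m in self.means_]
        return self.classes_[np.argmin(d, axis=0)]

# Toy demo: two synthetic "gesture" classes with different channel covariances
def make_epochs(scale, n=40, c=4, t=200):
    mix = np.eye(c) + scale * 0.1 * rng.standard_normal((c, c))
    return np.array([mix @ rng.standard_normal((c, t)) for _ in range(n)])

epochs = np.concatenate([make_epochs(1.0), make_epochs(5.0)])
y = np.array([0] * 40 + [1] * 40)

covs = covariances(epochs)
c_ref = covs.mean(axis=0)  # arithmetic mean as a stand-in for the Riemannian mean
X = tangent_features(covs, c_ref)
clf = SimpleLDA().fit(X, y)
acc = (clf.predict(X) == y).mean()
```

In practice, a library such as pyRiemann (`TangentSpace` followed by an LDA classifier) implements the same idea with a proper geometric-mean reference; the sketch above only illustrates the structure of the feature extraction.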
The online version contains supplementary material available at 10.1007/s13534-023-00277-9.