Department of Mechanical Engineering, San Diego State University, San Diego, CA 92182, USA.
Sensors (Basel). 2024 Jun 25;24(13):4125. doi: 10.3390/s24134125.
This study aims to demonstrate the feasibility of a new wireless electroencephalography (EEG)-electromyography (EMG) wearable approach that generates characteristic mixed EEG-EMG patterns during mouth movements, with the goal of detecting distinct movement patterns in people with severe speech impairments. This paper describes a method for detecting mouth movement based on a new signal processing technique suitable for sensor integration and machine learning applications. It examines the relationship between mouth motion and brainwaves in an effort to develop a nonverbal interface for people who have lost the ability to communicate, such as those with paralysis. A set of experiments was conducted to assess the efficacy of the proposed feature selection method, and the resulting classification of mouth movements was found to be meaningful. EEG-EMG signals were also collected during silent mouthing of phonemes, and a few-shot neural network trained on these signals classified the phonemes with 95% accuracy. This approach to collecting and processing bioelectrical signals for phoneme recognition offers a promising avenue for future communication aids.
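The abstract's few-shot phoneme classification can be illustrated with a minimal nearest-prototype sketch: a handful of labeled support trials per phoneme define a class prototype (mean feature vector), and a new trial is assigned to the nearest prototype. All names, dimensions, and the synthetic data below are illustrative assumptions, not details from the paper; the authors' actual network and features are not specified here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each trial is a feature vector extracted from a
# windowed EEG-EMG recording (e.g., band power per channel).
n_classes = 3      # e.g., three silently mouthed phonemes
n_support = 5      # "few-shot": labeled examples per phoneme
n_features = 16

# Synthetic stand-in data: one well-separated cluster per phoneme class.
centers = rng.normal(size=(n_classes, n_features)) * 3.0
support = np.stack([c + rng.normal(scale=0.5, size=(n_support, n_features))
                    for c in centers])          # (classes, shots, features)

def prototypes(support_set):
    """Mean embedding of the support examples for each class."""
    return support_set.mean(axis=1)             # (classes, features)

def classify(query, protos):
    """Nearest-prototype label for one query feature vector."""
    d = np.linalg.norm(protos - query, axis=1)  # Euclidean distances
    return int(np.argmin(d))

protos = prototypes(support)
query = centers[1] + rng.normal(scale=0.5, size=n_features)
print(classify(query, protos))  # prints the predicted class index
```

In a real system the raw signals would first pass through the feature-selection stage the abstract describes; a learned embedding network (as in prototypical networks) would replace the raw feature space used here.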