Department of Mathematics, The University of Alabama, Tuscaloosa, AL 35487-0350, United States of America.
Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, IN 47907, United States of America.
J Neural Eng. 2021 Mar 3;18(2). doi: 10.1088/1741-2552/abdb3b.
Understanding and differentiating brain states is an important task in cognitive neuroscience, with applications in health diagnostics such as distinguishing neurotypical development from autism spectrum disorder, or coma/vegetative state from locked-in syndrome. Electroencephalography (EEG) analysis is a particularly useful tool for this task because EEG data can detect millisecond-level changes in brain activity across a range of frequencies, non-invasively and relatively inexpensively. The goal of this study is to apply machine learning methods to EEG data in order to classify visual language comprehension across multiple participants.

26-channel EEG was recorded from 24 Deaf participants while they watched videos of sign language sentences played in time-direct and time-reversed formats, simulating interpretable versus uninterpretable sign language, respectively. Sparse optimal scoring (SOS) was applied to the EEG data to classify which type of video a participant was watching, time-direct or time-reversed. The use of SOS also reduced the dimensionality of the feature set, improving model interpretability.

Analysis of frequency-domain EEG data yielded an average out-of-sample classification accuracy of 98.89%, far superior to the time-domain analysis. This high classification accuracy suggests the model can identify neural responses to visual linguistic stimuli that are common across participants.

The significance of this work lies in determining necessary and sufficient neural features for classifying the high-level neural process of visual language comprehension across multiple participants.
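The classification setup described in the abstract (sparse linear classification of frequency-domain EEG features, with sparsity doubling as feature selection) can be sketched as follows. This is an illustrative sketch only, not the authors' implementation: an L1-penalized logistic regression stands in for sparse optimal scoring, and the synthetic band-power data, band count, and effect sizes are all assumptions made for the example.

```python
# Sketch of sparse classification on frequency-domain EEG features,
# in the spirit of the SOS step described in the abstract.
# NOTE: synthetic data; L1 logistic regression is a stand-in for SOS.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# 26 channels (as in the study) x 5 frequency bands (assumed layout)
n_trials, n_channels, n_bands = 200, 26, 5
n_features = n_channels * n_bands

# Synthetic band-power features; class 1 ("time-direct") shifts a
# small subset of features, so a sparse model should suffice.
X = rng.normal(size=(n_trials, n_features))
y = rng.integers(0, 2, size=n_trials)
informative = rng.choice(n_features, size=8, replace=False)
X[np.ix_(y == 1, informative)] += 1.5

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

# L1 penalty drives most coefficients to exactly zero, giving the
# built-in dimensionality reduction the abstract attributes to SOS.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
clf.fit(X_tr, y_tr)

acc = clf.score(X_te, y_te)
n_nonzero = int(np.sum(clf.coef_ != 0))
print(f"held-out accuracy: {acc:.2f}")
print(f"nonzero coefficients: {n_nonzero}/{n_features}")
```

The nonzero coefficients identify which channel/band features drive the decision, which is the interpretability benefit of sparsity that the study emphasizes.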