IEEE Trans Neural Syst Rehabil Eng. 2021;29:2456-2463. doi: 10.1109/TNSRE.2021.3127724. Epub 2021 Dec 3.
When people listen to speech, neural activity tracks entropy fluctuations in the acoustic envelope of the signal. This signal-based entrainment has been shown to underlie speech parsing and comprehension. In this electroencephalography (EEG) study, we compute sign language users' cortical tracking of changes in the visual dynamics of the communicative signal in time-forward videos of sign language and their time-reversed counterparts, and assess the relative contribution of response frequencies between 0.2 and 12.4 Hz to comprehension using a machine-learning approach to brain-state classification. Lower EEG response frequencies (0.2-4 Hz) yield 100% classification accuracy, while cortical tracking of the visual envelope at higher frequencies is less informative. This suggests that signers rely on low-frequency visual information, such as the envelope of the visual signal, for sign language comprehension. Given the speed of comprehension responses in real-time language processing, this further suggests that fluent signers employ a predictive processing heuristic based on sign language knowledge.
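The band-wise classification idea described above can be sketched in code: extract band-limited power features from epoched EEG and compare how well each band separates two stimulus conditions. The sketch below uses synthetic data; the band edges, the SVM classifier, the sampling rate, and the log-band-power features are illustrative assumptions, not the authors' actual pipeline (for instance, the low cutoff is set to 0.5 Hz rather than 0.2 Hz for filter stability at this epoch length).

```python
# Hypothetical sketch of band-limited brain-state classification.
# Synthetic "forward" trials carry a low-frequency envelope-tracking
# component; "reversed" trials are noise only. All parameters are
# illustrative assumptions, not taken from the study.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
fs = 256                              # sampling rate in Hz (assumed)
n_trials, n_samples = 60, fs * 4      # 4-second epochs (assumed)

def bandpass(x, lo, hi, fs):
    """Zero-phase Butterworth band-pass filter along the last axis."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x, axis=-1)

# Build synthetic single-channel EEG epochs.
t = np.arange(n_samples) / fs
envelope = np.sin(2 * np.pi * 2.0 * t)   # 2 Hz tracking component
X_raw = rng.standard_normal((n_trials, n_samples))
y = np.repeat([0, 1], n_trials // 2)     # 0 = reversed, 1 = forward
X_raw[y == 1] += 0.8 * envelope          # tracking present only in "forward"

def band_power_features(X, bands, fs):
    """Log band power per trial for each (lo, hi) frequency band."""
    feats = [np.log(np.mean(bandpass(X, lo, hi, fs) ** 2, axis=-1))
             for lo, hi in bands]
    return np.stack(feats, axis=-1)

# Compare a low-frequency band against a higher band.
low_feats = band_power_features(X_raw, [(0.5, 4.0)], fs)
high_feats = band_power_features(X_raw, [(8.0, 12.4)], fs)

acc_low = cross_val_score(SVC(), low_feats, y, cv=5).mean()
acc_high = cross_val_score(SVC(), high_feats, y, cv=5).mean()
print(f"low-band accuracy:  {acc_low:.2f}")
print(f"high-band accuracy: {acc_high:.2f}")
```

On this toy data, the low band separates the two conditions nearly perfectly while the high band hovers near chance, mirroring the pattern the abstract reports for forward versus reversed sign language videos.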