Mayor Torres Juan Manuel, Clarkson Tessa, Hauschild Kathryn M, Luhmann Christian C, Lerner Matthew D, Riccardi Giuseppe
Department of Information Engineering and Computer Science, University of Trento, Povo Trento, Italy.
Department of Psychology, Temple University, Philadelphia, Pennsylvania.
Biol Psychiatry Cogn Neurosci Neuroimaging. 2022 Jul;7(7):688-695. doi: 10.1016/j.bpsc.2021.03.015. Epub 2021 Apr 16.
Individuals with autism spectrum disorder (ASD) exhibit frequent behavioral deficits in facial emotion recognition (FER). It remains unknown whether these deficits arise because facial emotion information is not encoded in their neural signal or because it is encoded but fails to translate to FER behavior (deployment). This distinction has functional implications, including constraining when differences in social information processing occur in ASD, and guiding interventions (i.e., developing prosthetic FER vs. reinforcing existing skills).
We utilized a discriminative and contemporary machine learning approach, deep convolutional neural networks, to classify facial emotions viewed by individuals with and without ASD (N = 88) from concurrently recorded electroencephalography (EEG) signals.
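The classification approach described above can be illustrated with a minimal sketch: a single temporal convolution over a multichannel EEG epoch, pooled over time and read out to emotion classes with a softmax. All shapes, filter counts, and weights below are hypothetical placeholders, not the architecture used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical EEG epoch: 32 channels x 384 time samples
# (e.g., 1.5 s at 256 Hz); shapes are illustrative, not the paper's.
n_channels, n_times, n_classes = 32, 384, 4  # e.g., 4 facial emotions

x = rng.standard_normal((n_channels, n_times))

# One convolutional layer: 8 temporal kernels of width 16,
# spanning all channels, followed by ReLU and global average pooling.
n_filters, k = 8, 16
W = rng.standard_normal((n_filters, n_channels, k)) * 0.01
b = np.zeros(n_filters)

def conv_pool(x):
    """Valid 1D convolution over time, ReLU, then mean over time."""
    out_len = n_times - k + 1
    feats = np.zeros((n_filters, out_len))
    for f in range(n_filters):
        for t in range(out_len):
            feats[f, t] = np.sum(W[f] * x[:, t:t + k]) + b[f]
    return np.maximum(feats, 0).mean(axis=1)  # shape: (n_filters,)

# Linear read-out to emotion classes with a softmax (untrained weights).
V = rng.standard_normal((n_classes, n_filters)) * 0.1

def predict_proba(x):
    z = V @ conv_pool(x)
    e = np.exp(z - z.max())
    return e / e.sum()

p = predict_proba(x)  # one probability per emotion class
```

In practice such a network would be trained on many labeled trials; the sketch only shows the forward pass from a raw EEG epoch to class probabilities.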
The convolutional neural network classified facial emotions with high accuracy for both ASD and non-ASD groups, even though individuals with ASD performed more poorly on the concurrent FER task. In fact, convolutional neural network accuracy was greater in the ASD group and was not related to behavioral performance. This pattern of results replicated across three independent participant samples. Moreover, feature importance analyses suggested that a late temporal window of neural activity (1000-1500 ms) may be uniquely important in facial emotion classification for individuals with ASD.
Our results reveal for the first time that facial emotion information is encoded in the neural signal of individuals with (and without) ASD. Thus, observed difficulties in behavioral FER associated with ASD likely arise from difficulties in decoding or deployment of facial emotion information within the neural signal. Interventions should focus on capitalizing on this intact encoding rather than promoting compensation or FER prostheses.