Escuela de Ingeniería y Ciencias, Tecnologico de Monterrey, Ave. Eugenio Garza Sada 2501 Sur, Col. Tecnológico, Monterrey, N.L., 64700, México.
BMC Med. 2024 Mar 14;22(1):121. doi: 10.1186/s12916-024-03341-y.
Socio-emotional impairments are among the diagnostic criteria for autism spectrum disorder (ASD), but current evidence has substantiated both altered and intact recognition of emotional prosody. Here, a Bayesian framework of perception is considered, which suggests that oversampling of sensory evidence impairs perception in highly variable environments, whereas reliable hierarchical structures of spectral and temporal cues would foster emotion discrimination by autistics.
Event-related spectral perturbations (ERSP) extracted from electroencephalographic (EEG) data indexed the perception of anger, disgust, fear, happiness, neutral, and sadness prosodies while participants listened to speech uttered by (a) human voices or (b) synthesized voices characterized by reduced volatility and variability of the acoustic environment. The assessment of perceptual mechanisms was extended to the visual domain by analyzing behavioral accuracy on a non-social task that emphasized the dynamics of precision weighting between bottom-up evidence and top-down inferences. Eighty children (mean age 9.7 years; standard deviation 1.8) volunteered, including 40 autistic children. Symptomatology was assessed at the time of the study via the Autism Diagnostic Observation Schedule, Second Edition, and parents' responses on the Autism Spectrum Rating Scales. A mixed within-between analysis of variance was conducted to assess the effects of group (autism versus typical development), voice, emotion, and the interactions between factors. A Bayesian analysis was implemented to quantify the evidence in favor of the null hypothesis in cases of non-significance. Post hoc comparisons were corrected for multiple testing.
Autistic children presented impaired emotion differentiation while listening to speech uttered by human voices, which improved when the acoustic volatility and variability of the voices were reduced. Divergent neural patterns were observed between neurotypical and autistic children, emphasizing different mechanisms for perception. Accordingly, behavioral measurements on the visual task were consistent with an over-precision ascribed to environmental variability (sensory processing) that weakened performance. Unlike autistic children, neurotypicals could differentiate the emotions induced by all voices.
This study outlines behavioral and neurophysiological mechanisms that underpin responses to sensory variability. Neurobiological insights into the processing of emotional prosodies emphasize the potential of acoustically modified emotional prosodies to improve emotion differentiation by autistics.
BioMed Central ISRCTN Registry, ISRCTN18117434. Registered on September 20, 2020.