Wang Zhihao, Chen Mai, Goerlich Katharina S, Aleman André, Xu Pengfei, Luo Yuejia
Shenzhen Key Laboratory of Affective and Social Neuroscience, Magnetic Resonance Imaging Center, Center for Brain Disorders and Cognitive Sciences, Shenzhen University, Shenzhen, China.
Department of Biomedical Sciences of Cells & Systems, Section Cognitive Neuroscience, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands.
Psychophysiology. 2021 Jun;58(6):e13806. doi: 10.1111/psyp.13806. Epub 2021 Mar 20.
Alexithymia has been associated with emotion recognition deficits in both the auditory and visual domains. Although emotions are inherently multimodal in daily life, little is known about abnormalities of emotional multisensory integration (eMSI) in relation to alexithymia. Here, we employed an emotional Stroop-like audiovisual task while recording event-related potentials (ERPs) in individuals with high alexithymia levels (HA) and low alexithymia levels (LA). During the task, participants had to indicate whether a voice was spoken with sad or angry prosody while ignoring a simultaneously presented static face that was either emotionally congruent or incongruent with the voice. We found that HA performed worse and showed higher P2 amplitudes than LA, independent of emotion congruency. Furthermore, difficulties in identifying and describing feelings correlated positively with P2 amplitude, and P2 amplitude correlated negatively with behavioral performance. Bayesian statistics showed no group differences in eMSI or in the classical integration-related ERP components (N1 and N2). Thus, although individuals with alexithymia showed deficits in auditory emotion recognition, as indexed by poorer performance and higher P2 amplitudes, the present findings suggest an intact capacity to integrate emotional information from multiple channels in alexithymia. Our work provides valuable insights into the relationship between alexithymia and the neuropsychological mechanisms of emotional multisensory integration.
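The abstract does not specify the authors' analysis pipeline; as a minimal sketch of how an analysis of this kind is often implemented, the following uses MNE-Python to extract mean P2 amplitudes per audiovisual condition. The file name, event labels, electrode picks, and the 150-250 ms fronto-central time window are illustrative assumptions, not details taken from the study.

```python
# Hypothetical sketch (not the authors' pipeline): per-condition mean P2
# amplitude extraction with MNE-Python. File path, event names, channel
# picks, and time window are assumptions for illustration only.
import mne

# Load preprocessed, epoched EEG data (-epo.fif is MNE's epochs format).
epochs = mne.read_epochs("sub01-epo.fif")

# Assumed event labels: voice emotion (sad/angry) x face congruency.
conditions = [
    "sad/congruent", "sad/incongruent",
    "angry/congruent", "angry/incongruent",
]

# The P2 is commonly quantified as the mean amplitude over fronto-central
# electrodes in roughly the 150-250 ms post-stimulus window; the exact
# channels and window here are placeholder choices.
picks = ["Fz", "FCz", "Cz"]
tmin, tmax = 0.15, 0.25

for cond in conditions:
    evoked = epochs[cond].average(picks=picks)       # condition-wise ERP
    window = evoked.copy().crop(tmin, tmax)          # restrict to P2 window
    p2_uv = window.data.mean() * 1e6                 # volts -> microvolts
    print(f"{cond}: mean P2 = {p2_uv:.2f} µV")
```

Such per-subject, per-condition amplitudes would then feed into the group-level comparisons (HA vs. LA, congruent vs. incongruent) and the brain-behavior correlations described above.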