CoML/ENS/CNRS/EHESS/INRIA/PSL Research University, Paris, France; NPI/ENS/INSERM U855/UPEC/PSL Research University, Créteil, France.
CoML/ENS/CNRS/EHESS/INRIA/PSL Research University, Paris, France.
Cortex. 2022 Oct;155:150-161. doi: 10.1016/j.cortex.2022.05.024. Epub 2022 Jul 19.
Patients with Huntington's disease (HD) suffer from disturbances in the perception of emotions: they do not correctly read the bodily, vocal, and facial expressions of others. With regard to the expression of emotions, it has been shown that they are impaired in expressing emotions through the face, but until now little research has been conducted on their ability to express emotions through spoken language. To better understand emotion production in both voice and language in HD, we tested 115 individuals in a single-centre prospective observational follow-up study: 68 patients (HD), 22 participants carrying the mutant HD gene without any motor symptoms (pre-manifest HD, preHD), and 25 controls. Participants were recorded in interviews in which they were asked to recall sad, angry, happy, and neutral stories. Emotion expression through voice and language was investigated by comparing the identifiability of the emotions expressed by controls, preHD participants, and HD patients in these interviews. To assess vocal and linguistic expression of emotions separately and in a blind design, we used machine learning models rather than a human jury performing a forced-choice recognition test. Results showed that patients with HD had difficulty expressing emotions through both voice and language compared with preHD participants and controls, who behaved similarly and above chance. In addition, we did not find any differences in emotion expression between preHD participants and healthy controls. We further validated our newly proposed methodology with a human jury on the speech produced by the controls. These results are consistent with the hypothesis that emotional deficits in HD are caused by impaired sensorimotor representations of emotions, in line with embodied cognition theories. This study also shows how machine learning models can be leveraged to assess emotion expression in a blind and reproducible way.
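The abstract's core idea, scoring "identifiability" as how well a classifier recovers the intended emotion from speech features and comparing that score against chance, can be illustrated with a toy sketch. Everything here is hypothetical: synthetic two-dimensional "acoustic" features, a simple nearest-centroid classifier, and leave-one-out evaluation; the study's actual models and features are not described at this level of detail.

```python
import random

random.seed(0)

EMOTIONS = ["neutral", "happy", "sad", "angry"]

# Hypothetical emotion centroids in a 2-D acoustic feature space
# (e.g., mean pitch vs. energy); purely illustrative values.
CENTROIDS = {
    "neutral": (0.0, 0.0),
    "happy": (3.0, 3.0),
    "sad": (-3.0, 3.0),
    "angry": (3.0, -3.0),
}

def sample(emotion, n=10, noise=0.5):
    """Draw n synthetic feature vectors around an emotion's centroid."""
    cx, cy = CENTROIDS[emotion]
    return [(cx + random.gauss(0, noise), cy + random.gauss(0, noise))
            for _ in range(n)]

data = [(x, e) for e in EMOTIONS for x in sample(e)]

def centroid(points):
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def classify(x, centroids):
    """Assign x to the emotion with the nearest centroid (squared distance)."""
    return min(centroids,
               key=lambda e: (x[0] - centroids[e][0]) ** 2
                           + (x[1] - centroids[e][1]) ** 2)

# Leave-one-out evaluation: refit centroids without the held-out sample.
correct = 0
for i, (x, true_emotion) in enumerate(data):
    train = [item for j, item in enumerate(data) if j != i]
    cents = {e: centroid([xx for xx, ee in train if ee == e])
             for e in EMOTIONS}
    correct += classify(x, cents) == true_emotion

accuracy = correct / len(data)
chance = 1 / len(EMOTIONS)  # forced choice among 4 emotions -> 0.25
print(f"identifiability: {accuracy:.2f} (chance = {chance:.2f})")
```

In this framing, a group whose expressed emotions the classifier recovers above chance (like the controls and preHD participants in the abstract) yields a high accuracy, while impaired expression would push accuracy toward the 0.25 chance level.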