Liu Ying, Wang Zixuan, Yu Ge
Key Laboratory of Cold Region Urban and Rural Human Settlement Environment Science and Technology, Ministry of Industry and Information Technology, School of Architecture, Harbin Institute of Technology, Harbin, China.
Front Psychol. 2021 Aug 25;12:707809. doi: 10.3389/fpsyg.2021.707809. eCollection 2021.
This research uses facial expression recognition software (FaceReader) to explore the influence of different sound interventions on the emotions of older people with dementia. The field experiment was carried out in the public activity space of an older adult care facility. Three intervention sound sources were used: music, a stream, and birdsong. Data collected through the Self-Assessment Manikin Scale (SAM) were compared with facial expression recognition (FER) data. FaceReader identified differences in the emotional responses of older people with dementia to the different sound interventions and revealed changes in facial expressions over time. Facial expression valence was significantly higher in all three sound interventions than in the no-sound condition (p < 0.01). The indices of sadness, fear, and disgust differed significantly between the sound interventions. For example, in the birdsong intervention, the disgust index initially increased by 0.06 over the first 20 s and then declined linearly, with an average reduction of 0.03 per 20 s. In addition, valence and arousal were significantly lower when the sound intervention began before, rather than concurrently with, the start of the activity (p < 0.01). Moreover, in the birdsong and stream interventions, there were significant differences between intervention days (p < 0.05 or p < 0.01). Furthermore, facial expression valence differed significantly by age and gender. Finally, a comparison of the SAM and FER results showed that, in the music intervention, the valence in the first 80 s helps to predict dominance (R² = 0.600) and acoustic comfort (R² = 0.545); in the stream sound intervention, the valence in the first 40 s helps to predict pleasure (R² = 0.770) and acoustic comfort (R² = 0.766); and in the birdsong intervention, the valence in the first 20 s helps to predict dominance (R² = 0.824) and arousal (R² = 0.891).
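The windowed-valence regression described in the abstract can be made concrete with a short sketch. The following Python code is not the authors' implementation; the column names, sampling rate, and data layout are assumptions for illustration. It computes valence in the way FaceReader defines it (intensity of "happy" minus the intensity of the strongest negative expression), averages it over an initial time window, and regresses SAM ratings on that windowed valence to obtain R² values of the kind reported here.

    # Minimal sketch of the windowed FER-valence -> SAM regression.
    # Assumed inputs: one per-frame DataFrame of expression intensities
    # per participant, plus one SAM score per participant.
    import numpy as np
    import pandas as pd
    from scipy.stats import linregress

    FPS = 5  # assumed FaceReader sampling rate (frames per second)

    def valence_series(df: pd.DataFrame) -> pd.Series:
        # Valence = "happy" intensity minus the strongest negative expression.
        negatives = df[["sad", "angry", "scared", "disgusted"]].max(axis=1)
        return df["happy"] - negatives

    def window_mean(df: pd.DataFrame, seconds: int) -> float:
        # Mean valence over the first `seconds` of the recording.
        return valence_series(df.iloc[: seconds * FPS]).mean()

    def fit_r2(frames: dict, sam: dict, seconds: int) -> float:
        # One point per participant: early-window valence vs. SAM score.
        x = np.array([window_mean(frames[p], seconds) for p in sam])
        y = np.array([sam[p] for p in sam])
        return linregress(x, y).rvalue ** 2  # R², as in the abstract

    # e.g. fit_r2(frames, sam_dominance, seconds=80) for the music condition,
    # or seconds=40 (stream) and seconds=20 (birdsong).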