Camacho M Catalina, Williams Elizabeth M, Balser Dori, Kamojjala Ruchika, Sekar Nikhil, Steinberger David, Yarlagadda Sishir, Perlman Susan B, Barch Deanna M
Department of Psychological and Brain Sciences, Washington University in St. Louis, One Brookings Drive, St. Louis, MO 63130 USA.
Department of Psychiatry, Washington University in St. Louis, 4444 Forest Park Drive, St. Louis, MO 63110 USA.
Affect Sci. 2022 Jan 20;3(1):168-181. doi: 10.1007/s42761-021-00100-7. eCollection 2022 Mar.
Social information processing is vital for inferring emotional states in others, yet affective neuroscience has only begun to scratch the surface of how we represent emotional information in the brain. Most previous affective neuroscience work has used isolated stimuli, such as static images of affective faces or scenes, to probe affective processing. While this work has provided rich insight into the initial stages of emotion processing (encoding cues), activation to isolated stimuli offers limited insight into later phases of emotion processing, such as the interpretation of cues or interactions between cues and established cognitive schemas. Recent work has highlighted the potential value of using complex video stimuli to probe socio-emotional processing, underscoring the need for standardized video coding schemas as this exciting field expands. Toward that end, we present a standardized and open-source coding system for complex videos, two fully coded videos, and a Python library for video and code processing. The EmoCodes manual coding system provides an externally validated and replicable system for coding complex cartoon stimuli, with future plans to validate the system for other video types. The Python library provides automated tools for extracting low-level features from video files, as well as tools for summarizing and analyzing the manual codes for use in neuroimaging analyses. Materials can be freely accessed at https://emocodes.org/. These tools represent an important step toward replicable and standardized study of socio-emotional processing using complex video stimuli.
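As a rough illustration of what "low-level feature" extraction from a video file can look like (this is a minimal sketch using OpenCV, not the EmoCodes library's actual API; see https://emocodes.org/ for the real tools), the example below computes mean frame luminance per second of video, a commonly used low-level visual covariate in naturalistic neuroimaging analyses. The file name is hypothetical.

```python
# Illustrative sketch only: approximates the kind of low-level visual feature
# extraction an automated video-processing tool might perform. Not the EmoCodes API.
import cv2
import numpy as np

def mean_luminance_per_second(video_path: str) -> np.ndarray:
    """Return the average grayscale intensity of each one-second bin of the video."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if FPS metadata is missing
    frame_means = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        frame_means.append(gray.mean())
    cap.release()
    # Average frame-level luminance within each one-second window
    step = int(round(fps))
    return np.array([np.mean(frame_means[i:i + step])
                     for i in range(0, len(frame_means), step)])

# Example usage (hypothetical file name):
# luminance = mean_luminance_per_second("cartoon_clip.mp4")
```

A time course like this can then be aligned with manually coded emotional content to check whether the codes of interest are confounded with low-level visual properties before neuroimaging analysis.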
The online version contains supplementary material available at 10.1007/s42761-021-00100-7.