Department of Psychology and Neuroscience and Institute of Cognitive Science, University of Colorado, Boulder, CO, USA.
Institute for Behavioral Genetics, University of Colorado, Boulder, CO, USA.
Sci Adv. 2019 Jul 24;5(7):eaaw4358. doi: 10.1126/sciadv.aaw4358. eCollection 2019 Jul.
Theorists have suggested that emotions are canonical responses to situations ancestrally linked to survival. If so, then emotions may be afforded by features of the sensory environment. However, few computational models describe how combinations of stimulus features evoke different emotions. Here, we develop a convolutional neural network that accurately decodes images into 11 distinct emotion categories. We validate the model using more than 25,000 images and movies and show that image content is sufficient to predict the category and valence of human emotion ratings. In two functional magnetic resonance imaging studies, we demonstrate that patterns of human visual cortex activity encode emotion category-related model output and can decode multiple categories of emotional experience. These results suggest that rich, category-specific visual features can be reliably mapped to distinct emotions, and that they are coded in distributed representations within the human visual system.
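To make the modeling approach concrete, the sketch below shows one way an image-to-emotion classifier with 11 output categories could be set up as a convolutional network with an 11-way classification head. This is a minimal illustration, not the authors' reported implementation: the pretrained backbone, preprocessing, and training loop are assumptions chosen for clarity.

```python
# Hypothetical sketch: an 11-category image-to-emotion classifier built by
# replacing the final layer of a pretrained CNN backbone. The backbone choice
# and training details are illustrative assumptions, not the paper's method.
import torch
import torch.nn as nn
from torchvision import models, transforms

NUM_EMOTION_CATEGORIES = 11  # distinct emotion categories, as in the abstract

# Pretrained convolutional backbone with an 11-way classification head swapped in.
model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(model.classifier[6].in_features, NUM_EMOTION_CATEGORIES)

# Standard ImageNet-style preprocessing for input images.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def train_step(images: torch.Tensor, labels: torch.Tensor,
               optimizer: torch.optim.Optimizer) -> float:
    """One cross-entropy gradient step on a batch of emotion-labeled images
    (the labeled dataset itself is assumed here)."""
    model.train()
    optimizer.zero_grad()
    logits = model(images)                      # (batch, 11) category scores
    loss = nn.functional.cross_entropy(logits, labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In a transfer-learning setup like this, the model's category-wise outputs (logits or softmax probabilities) are what would be compared against human emotion ratings and brain activity patterns.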