Lammers Sebastian, Bente Gary, Tepest Ralf, Jording Mathis, Roth Daniel, Vogeley Kai
Department of Psychiatry, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany.
Cognitive Neuroscience (INM-3), Institute of Neuroscience and Medicine, Research Center Jülich, Jülich, Germany.
Front Robot AI. 2019 Sep 27;6:94. doi: 10.3389/frobt.2019.00094. eCollection 2019.
Others' movements inform us about their current activities as well as their intentions and emotions. Research on the distinct mechanisms underlying action recognition and emotion inferences has been limited by a lack of suitable comparative stimulus material. Problematic confounds can derive from low-level physical features (e.g., luminance) as well as from higher-level psychological features (e.g., stimulus difficulty). Here we present a standardized stimulus dataset that allows action and emotion recognition to be addressed with identical stimuli. The stimulus set consists of 792 computer animations of a neutral avatar based on full-body motion capture protocols. Motion capture was performed on 22 human volunteers, who were instructed to perform six everyday activities (mopping, sweeping, painting with a roller, painting with a brush, wiping, sanding) in three different moods (angry, happy, sad). Five-second clips of each motion protocol were rendered into AVI files using two virtual camera perspectives per clip. In contrast to video stimuli, the computer animations allowed us to standardize the physical appearance of the avatar and to control lighting and coloring conditions, thus reducing stimulus variation to movement alone. To control for low-level optical features of the stimuli, we developed and applied a set of MATLAB routines that extract basic physical features of the stimuli, including the average background-foreground proportion and frame-by-frame pixel change dynamics. This information was used to identify outliers and to homogenize the stimuli across action and emotion categories, yielding a smaller stimulus subset (n = 83 animations within the 792-clip database) that contained only two different actions (mopping, sweeping) and two different moods (angry, happy). To further homogenize this stimulus subset with regard to psychological criteria, we conducted an online observer study (n = 112 participants) to assess the recognition rates for actions and moods, which led to a final sub-selection of 32 clips (eight per category) within the database. The ACASS database and its subsets provide unique opportunities for research in social psychology, social neuroscience, and applied clinical studies on communication disorders. All 792 AVI files, the selected subsets, MATLAB code, annotations, and motion capture data (FBX files) are available online.
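For illustration, the following is a minimal MATLAB sketch of the kind of low-level feature extraction described above, not the published ACASS routines. It assumes the avatar is rendered against a uniform dark background and computes, for a single clip, the per-frame foreground (avatar) pixel proportion and the mean frame-by-frame pixel change; the file name and background threshold are hypothetical.

% Minimal sketch (not the published ACASS code): extract two low-level
% features from one rendered clip. Assumes a uniform dark background;
% file name and threshold are illustrative.
v = VideoReader('stimulus_clip.avi');    % hypothetical clip name
prevFrame = [];
fgProps   = [];   % per-frame proportion of foreground (avatar) pixels
pixChange = [];   % mean absolute pixel change between consecutive frames
while hasFrame(v)
    frame = im2double(rgb2gray(readFrame(v)));
    fgMask = frame > 0.05;               % assumed background threshold
    fgProps(end+1) = mean(fgMask(:));
    if ~isempty(prevFrame)
        pixChange(end+1) = mean(abs(frame(:) - prevFrame(:)));
    end
    prevFrame = frame;
end
fprintf('Mean foreground proportion: %.4f\n', mean(fgProps));
fprintf('Mean frame-to-frame pixel change: %.4f\n', mean(pixChange));

Averaging these two measures over all clips and comparing them across action and emotion categories is one straightforward way to flag outliers for the homogenization step described in the abstract.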