Trettenbrein Patrick C, Zaccarella Emiliano
Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany.
International Max Planck Research School on Neuroscience of Communication: Structure, Function, and Plasticity (IMPRS NeuroCom), Leipzig, Germany.
Front Psychol. 2021 Feb 19;12:628728. doi: 10.3389/fpsyg.2021.628728. eCollection 2021.
Researchers in the fields of sign language and gesture studies frequently present their participants with video stimuli showing actors performing linguistic signs or co-speech gestures. Up to now, such video stimuli have mostly been controlled only for some technical aspects of the video material (e.g., duration of clips, encoding, framerate, etc.), leaving open the possibility that systematic differences between video stimulus materials may be concealed in the actual motion properties of the actors' movements. Computer vision methods such as OpenPose enable the fitting of body-pose models to the consecutive frames of a video clip and thereby make it possible to recover the movements performed by the actor in a particular video clip without the use of a marker-based or markerless motion-tracking system during recording. The OpenPoseR package provides a straightforward and reproducible way of working with these body-pose model data extracted from video clips using R, allowing researchers in the fields of sign language and gesture studies to quantify the amount of motion (velocity and acceleration) pertaining only to the movements performed by the actor in a video clip. These quantitative measures can be used to control for differences in the movements of an actor across stimulus video clips or, for example, between different conditions of an experiment. In addition, the package also provides a set of functions for generating plots for data visualization, as well as an easy-to-use way of automatically extracting metadata (e.g., duration, framerate, etc.) from large sets of video files.
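To illustrate the core idea described above, the following is a minimal sketch of how velocity and acceleration can be derived from body-pose keypoint coordinates via frame-to-frame finite differences. Note that OpenPoseR itself is an R package; this Python sketch, including the `motion_profile` function name and its interface, is a hypothetical illustration of the general technique, not the package's actual API.

```python
import numpy as np

def motion_profile(keypoints, fps):
    """Quantify motion from tracked body-pose keypoints.

    Hypothetical illustration (not OpenPoseR's actual interface).

    keypoints: array of shape (n_frames, n_keypoints, 2) holding the x/y
               pixel coordinates of each body-pose keypoint per frame.
    fps:       framerate of the video clip (frames per second).

    Returns (velocity, acceleration): per-frame-transition scalars obtained
    by summing the Euclidean displacement of all keypoints between
    consecutive frames and differentiating with respect to time.
    """
    dt = 1.0 / fps
    # Euclidean displacement of every keypoint between consecutive frames
    displacement = np.linalg.norm(np.diff(keypoints, axis=0), axis=2)
    # Total movement per frame transition, converted to pixels per second
    velocity = displacement.sum(axis=1) / dt
    # Change in velocity between transitions, in pixels per second squared
    acceleration = np.diff(velocity) / dt
    return velocity, acceleration
```

A measure of this kind depends only on the actor's recovered movements, so it can be compared across clips or experimental conditions regardless of how the clips were recorded or encoded.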