

Controlling Video Stimuli in Sign Language and Gesture Research: The OpenPoseR Package for Analyzing OpenPose Motion-Tracking Data in R

Authors

Trettenbrein Patrick C, Zaccarella Emiliano

Affiliations

Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany.

International Max Planck Research School on Neuroscience of Communication: Structure, Function, and Plasticity (IMPRS NeuroCom), Leipzig, Germany.

Publication

Front Psychol. 2021 Feb 19;12:628728. doi: 10.3389/fpsyg.2021.628728. eCollection 2021.

Abstract

Researchers in the fields of sign language and gesture studies frequently present their participants with video stimuli showing actors performing linguistic signs or co-speech gestures. Up to now, such video stimuli have mostly been controlled only for some of the technical aspects of the video material (e.g., duration of clips, encoding, framerate, etc.), leaving open the possibility that systematic differences in video stimulus materials may be concealed in the actual motion properties of the actor's movements. Computer vision methods such as OpenPose enable the fitting of body-pose models to the consecutive frames of a video clip and thereby make it possible to recover the movements performed by the actor in a particular video clip without the use of a point-based or markerless motion-tracking system during recording. The OpenPoseR package provides a straightforward and reproducible way of working with these body-pose model data extracted from video clips using OpenPose, allowing researchers in the fields of sign language and gesture studies to quantify the amount of motion (velocity and acceleration) pertaining only to the movements performed by the actor in a video clip. These quantitative measures can be used for controlling differences in the movements of an actor in stimulus video clips or, for example, between different conditions of an experiment. In addition, the package also provides a set of functions for generating plots for data visualization, as well as an easy-to-use way of automatically extracting metadata (e.g., duration, framerate, etc.) from large sets of video files.
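The core quantification described in the abstract, per-frame velocity and acceleration of a tracked keypoint, can be sketched with simple finite differences over the frame-wise positions that a body-pose model yields. The following is a minimal Python illustration of that idea, not the OpenPoseR API (which is an R package); the keypoint coordinates and frame rate here are hypothetical.

```python
# Sketch: quantifying motion from body-pose keypoint trajectories via
# finite differences. Hypothetical data; illustrates the underlying idea
# (velocity and acceleration per frame), not the OpenPoseR interface.

def frame_velocities(keypoints, fps):
    """Euclidean displacement per second between consecutive frames.

    keypoints: list of (x, y) positions of one tracked keypoint, one per frame.
    fps: frame rate of the video clip.
    """
    dt = 1.0 / fps
    velocities = []
    for (x0, y0), (x1, y1) in zip(keypoints, keypoints[1:]):
        dist = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        velocities.append(dist / dt)
    return velocities

def frame_accelerations(velocities, fps):
    """Change in speed per second between consecutive velocity samples."""
    dt = 1.0 / fps
    return [(v1 - v0) / dt for v0, v1 in zip(velocities, velocities[1:])]

# Example: a wrist keypoint moving 3 px per frame horizontally at 25 fps.
wrist = [(0.0, 0.0), (3.0, 0.0), (6.0, 0.0), (9.0, 0.0)]
v = frame_velocities(wrist, fps=25)
a = frame_accelerations(v, fps=25)
print(v)  # [75.0, 75.0, 75.0]  pixels per second
print(a)  # [0.0, 0.0]          constant speed
```

Summary statistics over such per-frame series (e.g., mean velocity per clip) are the kind of quantitative measure that can then be compared across stimulus clips or experimental conditions.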


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/efa7/7932993/7808bb32329e/fpsyg-12-628728-g001.jpg
