Ülkü Arslan Aydın, Sinan Kalkan, Cengiz Acartürk
Cognitive Science Program, Middle East Technical University, Ankara, Turkey.
Computer Science Department, Middle East Technical University, Ankara, Turkey.
J Eye Mov Res. 2018 Nov 12;11(6). doi: 10.16910/jemr.11.6.2.
The analysis of dynamic scenes has been a challenging domain in eye tracking research. This study presents a framework, named MAGiC, for analyzing gaze contact and gaze aversion in face-to-face communication. MAGiC provides an environment that automatically detects and tracks the conversation partner's face, overlays gaze data on the face video, and incorporates speech by means of speech-act annotation. Specifically, MAGiC integrates eye tracking data for gaze, audio data for speech segmentation, and video data for face tracking. MAGiC is an open source framework, and its usage is demonstrated via publicly available video content and wiki pages. We explored the capabilities of MAGiC through a pilot study and showed that it facilitates the analysis of dynamic gaze data by reducing the annotation effort and the time spent on manual analysis of video data.
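The core analysis the abstract describes, deciding per video frame whether the tracked gaze point falls on the conversation partner's face, can be illustrated with a minimal sketch. This is not MAGiC's actual API; it assumes the face tracker yields a bounding box per frame and the eye tracker's gaze point has already been mapped into the same video coordinate system, with all names (`Frame`, `classify_gaze`) chosen here for illustration.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Frame:
    face_box: Tuple[int, int, int, int]  # (x, y, w, h) from the face tracker
    gaze: Tuple[int, int]                # (gx, gy) gaze point in video coordinates

def classify_gaze(frames: List[Frame]) -> List[str]:
    """Label each frame 'contact' if the gaze point lies inside the
    tracked face box, otherwise 'aversion'."""
    labels = []
    for f in frames:
        x, y, w, h = f.face_box
        gx, gy = f.gaze
        inside = x <= gx <= x + w and y <= gy <= y + h
        labels.append("contact" if inside else "aversion")
    return labels

frames = [
    Frame(face_box=(100, 80, 60, 60), gaze=(130, 100)),  # gaze lands on the face
    Frame(face_box=(100, 80, 60, 60), gaze=(300, 200)),  # gaze directed elsewhere
]
print(classify_gaze(frames))  # ['contact', 'aversion']
```

Runs of consecutive labels could then be aligned with speech-act annotations to study how gaze contact and aversion pattern with the dialogue, which is the kind of multimodal integration the framework automates.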