IEEE Trans Cybern. 2015 Mar;45(3):506-20. doi: 10.1109/TCYB.2014.2329673. Epub 2014 Jun 27.
Detecting deception in interpersonal dialog is challenging because deceivers exploit the give-and-take of interaction to adapt to any sign of skepticism in an interlocutor's verbal and nonverbal feedback. Human detection accuracy is poor, often no better than chance. In this investigation, we consider whether automated methods can produce better results and whether emphasizing possible disruptions in interactional synchrony can signal whether an interactant is truthful or deceptive. We propose a data-driven, unobtrusive framework based on visual cues that consists of face tracking, head movement detection, facial expression recognition, and interactional synchrony estimation. Analyses were conducted on 242 video samples from an experiment in which deceivers and truth-tellers interacted with professional interviewers either face-to-face or through computer mediation. Results revealed that the framework is able to automatically track the head movements and expressions of both interlocutors, extract normalized, meaningful synchrony features, and learn classification models for deception recognition. Further experiments show that these features reliably capture interactional synchrony and effectively discriminate deception from truth.
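The abstract does not specify how the synchrony features are computed. As an illustrative sketch only (every function name and parameter below is hypothetical, not from the paper), one common way to estimate interactional synchrony between two interlocutors' head-movement time series is windowed, lag-shifted Pearson correlation:

```python
# Hypothetical sketch: windowed, lagged Pearson correlation as a simple
# proxy for interactional synchrony between two movement time series.
# The paper's actual feature extraction may differ.

def pearson(x, y):
    """Pearson correlation of two equal-length sequences (0.0 if degenerate)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    if sx == 0 or sy == 0:
        return 0.0
    return sxy / (sx * sy)

def synchrony_score(a, b, window=30, max_lag=5):
    """Average over windows of the max absolute lagged correlation.

    a, b: per-frame head-movement magnitudes for the two interlocutors.
    window: window length in frames; max_lag: max temporal offset searched.
    """
    scores = []
    for start in range(0, len(a) - window + 1, window):
        wa = a[start:start + window]
        best = 0.0
        for lag in range(-max_lag, max_lag + 1):
            lo = max(0, start + lag)
            hi = lo + window
            if hi > len(b):
                continue
            best = max(best, abs(pearson(wa, b[lo:hi])))
        scores.append(best)
    return sum(scores) / len(scores) if scores else 0.0
```

Under this sketch, strongly coupled movement yields a score near 1, while uncorrelated or flat movement yields a score near 0; such per-dyad scores could then feed a downstream classifier, as the framework describes.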