Rosenblum Lawrence D, Yakel Deborah A, Baseer Naser, Panchal Anjani, Nodarse Brynn C, Niehus Ryan P
Department of Psychology, University of California, Riverside 92521, USA.
Percept Psychophys. 2002 Feb;64(2):220-9. doi: 10.3758/bf03195788.
Two experiments tested whether isolated visible speech movements can be used for face matching. Visible speech information was isolated with a point-light methodology. Participants were asked to match articulating point-light faces to a fully illuminated articulating face in an XAB task. The first experiment tested single-frame static face stimuli as a control. The results revealed that the participants were significantly better at matching the dynamic face stimuli than the static ones. Experiment 2 tested whether the observed dynamic advantage was based on the movement itself or on the fact that the dynamic stimuli consisted of many more static, ordered frames. For this purpose, the frame rate was reduced, and the frames were shown in a random order, in the correct order with incorrect relative timing, or in the correct order with correct relative timing. The results revealed better matching performance with the correctly ordered and timed frame stimuli, suggesting that the matches were based on the movement itself. These findings suggest that speaker-specific visible articulatory style can provide information for face matching.