Luo Xiaoxiao, Kang Guanlan, Guo Yu, Yu Xingcheng, Zhou Xiaolin
School of Psychological and Cognitive Sciences, Peking University, Beijing, 100871, China.
Institute of Psychological and Brain Sciences, Zhejiang Normal University, Zhejiang, 321004, China.
Atten Percept Psychophys. 2020 May;82(4):1928-1941. doi: 10.3758/s13414-019-01918-x.
This study investigates whether and how value-associated faces affect audiovisual speech perception and the accompanying eye-movement patterns. In a training phase, participants learned to associate particular faces with the presence or absence of monetary reward; in a subsequent test phase, they identified syllables spoken by talkers in video clips whose faces had or had not been associated with reward. The syllables were either congruent or incongruent with the talkers' mouth movements, and, crucially, some of the incongruent syllables could elicit the McGurk effect. Results showed that the McGurk effect occurred more often for reward-associated faces than for non-reward-associated faces. Moreover, a signal detection analysis revealed that participants adopted a lower criterion and showed higher discriminability for reward-associated faces than for non-reward-associated faces. Surprisingly, eye-movement data showed that participants spent more time looking at, and fixated more often on, the extraoral (nose/cheek) area for reward-associated faces than for non-reward-associated faces, whereas the opposite pattern was observed for the oral (mouth) area. A correlation analysis demonstrated that, across participants, the more reward increased looking at the extraoral area in the training phase, the larger the increase in the proportion of McGurk responses (and the less time spent looking at the oral area) in the test phase. These findings not only demonstrate that value-associated faces enhance the influence of visual information on audiovisual speech perception but also highlight the importance of the extraoral facial area in the value-driven McGurk effect.
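For readers unfamiliar with the criterion and discriminability measures mentioned above, the following is a minimal sketch of the standard equal-variance signal detection formulas (not necessarily the authors' exact computation), assuming a hit rate H and a false-alarm rate FA:

d' = z(H) - z(FA),    c = -[z(H) + z(FA)] / 2,

where z(·) denotes the inverse of the standard normal cumulative distribution function. Under these conventions, a higher d' reflects better discrimination of congruent from incongruent audiovisual syllables, and a lower (more liberal) c reflects a greater tendency to report the visually influenced percept.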