McMaster University, Hamilton, ON, Canada.
Psychon Bull Rev. 2012 Feb;19(1):66-72. doi: 10.3758/s13423-011-0176-8.
We are constantly exposed to our own face and voice, and we identify our own faces and voices as familiar. However, the influence of self-identity upon self-speech perception is still uncertain. Speech perception is a synthesis of both auditory and visual inputs; although we hear our own voice when we speak, we rarely see the dynamic movements of our own face. If visual speech and identity are processed independently, no processing advantage would be expected from viewing one's own highly familiar face. In the present experiment, the relative contributions of facial and vocal inputs to speech perception were evaluated with an audiovisual illusion. Our results indicate that auditory self-speech confers a processing advantage, whereas visual self-speech does not. The data thereby support a model of visual speech as dynamic movement processed separately from speaker recognition.