Department of Psychology, Macquarie University, Sydney, New South Wales, Australia.
Psychon Bull Rev. 2010 Jun;17(3):317-22. doi: 10.3758/PBR.17.3.317.
In four experiments, we examined whether facial expressions used while singing carry musical information that can be "read" by viewers. In Experiment 1, participants saw silent video recordings of sung melodic intervals and judged the size of the interval they imagined the performers to be singing. Participants discriminated interval sizes on the basis of facial expression and discriminated large from small intervals when only head movements were visible. Experiments 2 and 3 confirmed that facial expressions influenced judgments even when the auditory signal was available. When matched with the facial expressions used to perform a large interval, audio recordings of sung intervals were judged as being larger than when matched with the facial expressions used to perform a small interval. The effect was not diminished when a secondary task was introduced, suggesting that audio-visual integration is not dependent on attention. Experiment 4 confirmed that the secondary task reduced participants' ability to make judgments that require conscious attention. The results provide the first evidence that facial expressions influence perceived pitch relations.