City, University of London, London, UK.
University of Sussex, Falmer, UK.
Sci Rep. 2017 Apr 21;7:46413. doi: 10.1038/srep46413.
Are sight and sound out of synch? Signs that they are have been dismissed for over two centuries as an artefact of attentional and response bias, to which traditional subjective methods are prone. To avoid such biases, we measured performance on objective tasks that depend implicitly on achieving good lip-synch. We measured the McGurk effect (in which incongruent lip-voice pairs evoke illusory phonemes), and also identification of degraded speech, while manipulating audiovisual asynchrony. Peak performance was found at an average auditory lag of ~100 ms, but this varied widely between individuals. Participants' individual optimal asynchronies showed trait-like stability when the same task was re-tested one week later, but measures based on different tasks did not correlate. This discounts the possible influence of common biasing factors, suggesting instead that our different tasks probe different brain networks, each subject to its own intrinsic auditory and visual processing latencies. Our findings call for renewed interest in the biological causes and cognitive consequences of individual sensory asynchronies, leading potentially to fresh insights into the neural representation of sensory timing. A concrete implication is that speech comprehension might be enhanced by first measuring each individual's optimal asynchrony and then applying a compensatory auditory delay.
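The proposed compensation procedure — measure performance across a range of audiovisual asynchronies, locate the individual's peak, then delay the audio by that amount — can be sketched as follows. This is a minimal illustration, not the authors' analysis pipeline: it assumes a simple quadratic fit to the performance curve, and the data values are invented for the example.

```python
import numpy as np

def optimal_asynchrony(lags_ms, accuracy):
    """Fit a quadratic to task accuracy vs. auditory lag and return the
    lag (ms) at which performance peaks. A real study would use a
    proper psychometric model; the quadratic is a stand-in."""
    a, b, c = np.polyfit(lags_ms, accuracy, 2)
    if a >= 0:
        raise ValueError("curve is not concave; no interior maximum")
    return -b / (2 * a)  # vertex of the fitted parabola

# Synthetic data for one hypothetical observer whose performance
# peaks near a ~100 ms auditory lag (illustrative, not from the paper).
lags = np.array([-300, -200, -100, 0, 100, 200, 300], dtype=float)
acc  = np.array([0.35, 0.50, 0.68, 0.80, 0.83, 0.72, 0.55])

peak_ms = optimal_asynchrony(lags, acc)
# Compensatory delay: shift the audio stream by the individual's
# optimal auditory lag so stimuli arrive at their personal best synch.
compensatory_delay_ms = peak_ms
```

Because the paper finds that optimal asynchronies differ across tasks, any such calibration would be task-specific: the delay estimated from a McGurk measurement need not transfer to degraded-speech identification.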