Department of General Neurology, Center of Neurology, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany.
Front Psychol. 2013 Aug 16;4:530. doi: 10.3389/fpsyg.2013.00530. eCollection 2013.
In blind people, the visual channel cannot assist face-to-face communication via lipreading or visual prosody. Nevertheless, the visual system may enhance the evaluation of auditory information due to its cross-links to (1) the auditory system, (2) supramodal representations, and (3) frontal action-related areas. Apart from feedback or top-down support of, for example, the processing of spatial or phonological representations, experimental data have shown that the visual system can impact auditory perception at more basic computational stages such as temporal signal resolution. For example, blind as compared to sighted subjects are more resistant to backward masking, and this ability appears to be associated with activity in visual cortex. Regarding the comprehension of continuous speech, blind subjects can learn to use accelerated text-to-speech systems for "reading" texts at ultra-fast speaking rates (>16 syllables/s), far exceeding the normal range of 6 syllables/s. A functional magnetic resonance imaging study has shown that this ability significantly covaries with BOLD responses in, among other brain regions, bilateral pulvinar, right visual cortex, and left supplementary motor area. Furthermore, magnetoencephalographic measurements revealed a particular component in right occipital cortex phase-locked to the syllable onsets of accelerated speech. In sighted people, the "bottleneck" for understanding time-compressed speech seems related to higher demands for buffering phonological material and is, presumably, linked to frontal brain structures. On the other hand, the neurophysiological correlates of the functions overcoming this bottleneck seem to depend upon early visual cortex activity. The present Hypothesis and Theory paper outlines a model that aims at binding these data together, based on early cross-modal pathways that are already known from various audiovisual experiments on cross-modal adjustments during space, time, and object recognition.