Department of Neuroscience, Georgetown University Medical Center, Washington, DC 20007.
Department of Speech Language & Hearing Sciences, George Washington University, Washington, DC 20052.
J Neurosci. 2023 Jul 5;43(27):4984-4996. doi: 10.1523/JNEUROSCI.1710-22.2023. Epub 2023 May 17.
It has been postulated that the brain is organized by "metamodal," sensory-independent cortical modules capable of performing tasks (e.g., word recognition) in both "standard" and novel sensory modalities. However, this theory has primarily been tested in sensory-deprived individuals, with mixed evidence in neurotypical subjects, limiting its support as a general principle of brain organization. Critically, current theories of metamodal processing do not specify requirements for successful metamodal processing at the level of neural representations. Specification at this level may be particularly important in neurotypical individuals, in whom novel sensory modalities must interface with existing representations for the standard sense. Here we hypothesized that effective metamodal engagement of a cortical area requires congruence between stimulus representations in the standard and novel sensory modalities in that region. To test this, we first used fMRI to identify bilateral auditory speech representations. We then trained 20 human participants (12 female) to recognize vibrotactile versions of auditory words using one of two auditory-to-vibrotactile algorithms. The vocoded algorithm attempted to match the encoding scheme of auditory speech, while the token-based algorithm did not. Crucially, using fMRI, we found that only in the vocoded group did trained vibrotactile stimuli recruit speech representations in the superior temporal gyrus and lead to increased coupling between them and somatosensory areas. Our results advance our understanding of brain organization by providing new insight into unlocking the metamodal potential of the brain, thereby informing the design of novel sensory substitution devices that aim to tap into existing processing streams in the brain.

SIGNIFICANCE STATEMENT It has been proposed that the brain is organized by "metamodal," sensory-independent modules specialized for performing certain tasks.
This idea has inspired therapeutic applications such as sensory substitution devices, which, for example, enable blind individuals "to see" by transforming visual input into soundscapes. Yet other studies have failed to demonstrate metamodal engagement. Here, we tested the hypothesis that metamodal engagement in neurotypical individuals requires matching the encoding schemes between stimuli from the novel and standard sensory modalities. We trained two groups of subjects to recognize words generated by one of two auditory-to-vibrotactile transformations. Critically, only vibrotactile stimuli that were matched to the neural encoding of auditory speech engaged auditory speech areas after training. This suggests that matching encoding schemes is critical to unlocking the brain's metamodal potential.