Husain Fatima T, Horwitz Barry
Brain Imaging and Modeling Section, National Institute on Deafness and Other Communication Disorders, National Institutes of Health, Building 10, Rm 8S235-D, 9000 Rockville Pike, Bethesda, MD 20892, USA.
J Physiol Paris. 2006 Jul-Sep;100(1-3):133-41. doi: 10.1016/j.jphysparis.2006.09.006. Epub 2006 Oct 31.
In this article, we review a combined experimental-neuromodeling framework for understanding brain function with a specific application to auditory object processing. Within this framework, a model is constructed using the best available experimental data and is used to make predictions. The predictions are tested by conducting specific, directed experiments, and the resulting data are matched with the simulated data. The model is then refined or tested on new data and generates new predictions, which in turn lead to better-focused experiments. The auditory object processing model was constructed using available neurophysiological and neuroanatomical data from mammalian studies of auditory object processing in the cortex. Auditory objects are brief sounds such as syllables, words, and melodic fragments. The model can simultaneously simulate neuronal activity at a columnar level and neuroimaging activity at a systems level while processing frequency-modulated tones in a delayed-match-to-sample task. The simulated neuroimaging activity was quantitatively matched with neuroimaging data obtained from experiments; both the simulations and the experiments used similar tasks, sounds, and other experimental parameters. We then used the model, without changing any of its parameters, to investigate the neural bases of the auditory continuity illusion, a type of perceptual grouping phenomenon. Perceptual grouping enables the auditory system to integrate brief, disparate sounds into cohesive perceptual units. The neural mechanisms underlying the auditory continuity illusion have not been studied extensively with conventional neuroimaging or electrophysiological techniques. Our modeling results agree with behavioral studies in humans and an electrophysiological study in cats. The results predict a particular set of bottom-up cortical processing mechanisms that implement perceptual grouping, and also attest to the robustness of our model.
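To make the modeling approach concrete, the following is a minimal, hypothetical sketch (not the authors' published model) of the general style of simulation the abstract describes: a single excitatory-inhibitory cortical "column" unit of the Wilson-Cowan type, driven by a frequency-modulated tone, with a crude systems-level "neuroimaging" proxy taken as integrated absolute synaptic activity. All parameter values, variable names, and the stimulus envelope are illustrative assumptions.

# Hypothetical sketch: one excitatory-inhibitory column unit responding to an
# FM tone, plus an integrated-synaptic-activity proxy for a regional imaging
# signal. Parameters and stimulus are illustrative, not taken from the paper.
import numpy as np

def sigmoid(x, gain=4.0, thresh=0.5):
    # Saturating response function of a neural population
    return 1.0 / (1.0 + np.exp(-gain * (x - thresh)))

dt = 1e-3                        # integration step (s)
t = np.arange(0.0, 1.0, dt)      # 1 s of simulated time

# FM tone: instantaneous frequency sweeps 1 -> 2 kHz; the unit is driven by a
# normalized envelope of that sweep, gated on between 0.1 s and 0.4 s.
freq = 1000.0 + 1000.0 * t / t[-1]
stim = np.where((t > 0.1) & (t < 0.4), freq / freq.max(), 0.0)

E = np.zeros_like(t)             # excitatory population activity
I = np.zeros_like(t)             # inhibitory population activity
syn = np.zeros_like(t)           # integrated absolute synaptic activity

w_ee, w_ei, w_ie = 1.6, 1.2, 1.0 # illustrative coupling weights
tau = 0.01                       # population time constant (s)

for k in range(1, len(t)):
    inp_E = w_ee * E[k-1] - w_ie * I[k-1] + stim[k-1]  # input to E population
    inp_I = w_ei * E[k-1]                              # input to I population
    E[k] = E[k-1] + dt / tau * (-E[k-1] + sigmoid(inp_E))
    I[k] = I[k-1] + dt / tau * (-I[k-1] + sigmoid(inp_I))
    # Summed absolute synaptic input, accumulated over time, stands in for a
    # regional systems-level (PET/fMRI-like) signal.
    syn[k] = syn[k-1] + dt * (abs(inp_E) + abs(inp_I))

print(f"peak E activity: {E.max():.3f}, integrated synaptic signal: {syn[-1]:.3f}")

A full model of the kind described in the abstract would connect many such units along an auditory processing pathway and embed them in a delayed-match-to-sample task; this sketch only illustrates the two levels of description (unit dynamics and an aggregate imaging proxy) simulated simultaneously.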