Resolving the time course of visual and auditory object categorization.

Affiliations

Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany.

Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany.

Publication Information

J Neurophysiol. 2022 Jun 1;127(6):1622-1628. doi: 10.1152/jn.00515.2021. Epub 2022 May 18.

Abstract

Humans can effortlessly categorize objects, both when they are conveyed through visual images and spoken words. To resolve the neural correlates of object categorization, studies have so far primarily focused on the visual modality. It is therefore still unclear how the brain extracts categorical information from auditory signals. In the current study, we used EEG (n = 48) and time-resolved multivariate pattern analysis to investigate 1) the time course with which object category information emerges in the auditory modality and 2) how the representational transition from individual object identification to category representation compares between the auditory modality and the visual modality. Our results show that 1) auditory object category representations can be reliably extracted from EEG signals and 2) a similar representational transition occurs in the visual and auditory modalities, where an initial representation at the individual-object level is followed by a subsequent representation of the objects' category membership. Altogether, our results suggest an analogous hierarchy of information processing across sensory channels. However, there was no convergence toward conceptual modality-independent representations, thus providing no evidence for a shared supramodal code.

Object categorization operates on inputs from different sensory modalities, such as vision and audition. This process has mainly been studied in vision. Here, we explore auditory object categorization. We show that auditory object category representations can be reliably extracted from EEG signals and, similar to vision, auditory representations initially carry information about individual objects, which is followed by a subsequent representation of the objects' category membership.
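The core analysis named in the abstract, time-resolved multivariate pattern analysis, trains and tests a classifier independently at every time point of the EEG epoch, yielding one decoding accuracy per time point. The following is a minimal sketch of that idea using synthetic data; the array shapes, the binary category labels, and the injected late-window signal are placeholders for illustration, not the study's actual recordings or pipeline.

```python
# Time-resolved MVPA sketch: decode object category at each time point.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic EEG-like data: trials x channels x time points (placeholder sizes).
n_trials, n_channels, n_times = 200, 64, 50
X = rng.standard_normal((n_trials, n_channels, n_times))
y = rng.integers(0, 2, n_trials)  # hypothetical binary category labels

# Inject a weak category signal in a late window, mimicking category
# information that emerges only later in the epoch.
X[y == 1, :, 30:] += 0.3

# Cross-validated decoding accuracy, computed separately per time point.
accuracy = np.empty(n_times)
for t in range(n_times):
    clf = LogisticRegression(max_iter=1000)
    accuracy[t] = cross_val_score(clf, X[:, :, t], y, cv=5).mean()
```

Plotting `accuracy` against time would show chance-level decoding early and above-chance decoding once the (here, simulated) category signal appears, which is the kind of time course the study reads off real EEG data.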

