

Decoding Auditory Saliency from Brain Activity Patterns during Free Listening to Naturalistic Audio Excerpts.

Affiliations

School of Automation, Northwestern Polytechnical University, Xi'an, China.

Cortical Architecture Imaging and Discovery Lab, Department of Computer Science and Bioimaging Research Center, The University of Georgia, Athens, GA, USA.

Publication Information

Neuroinformatics. 2018 Oct;16(3-4):309-324. doi: 10.1007/s12021-018-9358-0.

Abstract

In recent years, natural stimuli such as audio excerpts or video streams have received increasing attention in neuroimaging studies. Compared with conventional simple, idealized, and repeated artificial stimuli, natural stimuli contain more unrepeated, dynamic, and complex information that is closer to real life. However, there is no direct correspondence between such stimuli and any specific sensory or cognitive function of the brain, which makes it difficult to apply traditional hypothesis-driven analysis methods (e.g., the general linear model (GLM)). Moreover, traditional data-driven methods (e.g., independent component analysis (ICA)) lack quantitative modeling of the stimuli, which may limit their analytical power. In this paper, we propose a sparse representation based decoding framework to explore the neural correlates between computational audio features and functional brain activity under free-listening conditions. First, we adopt a biologically plausible auditory saliency feature to quantitatively model the audio excerpts, and in parallel develop a sparse representation/dictionary learning method to learn an over-complete dictionary basis of brain activity patterns. Then, we reconstruct the auditory saliency features from the learned fMRI-derived dictionaries. After that, a group-wise analysis procedure is conducted to identify the associated brain regions and networks. Experiments showed that the auditory saliency feature can be decoded well from brain activity patterns by our method, and that the identified brain regions and networks are consistent and meaningful. Finally, our method is evaluated against the ICA method, and the experimental results demonstrate its superiority.
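The pipeline described in the abstract (quantify the audio with a saliency feature, learn an over-complete dictionary of brain activity patterns by sparse coding, then reconstruct the saliency time course from the learned temporal atoms) can be illustrated roughly as follows. This is a minimal sketch on toy arrays using scikit-learn; the array sizes, regularization values, and the use of DictionaryLearning and Lasso are illustrative assumptions, not the authors' actual implementation or parameters.

# Minimal sketch (assumed, not the paper's code): sparse dictionary
# learning on fMRI signals, then reconstruction of an auditory
# saliency time course from the learned temporal atoms.
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.linear_model import Lasso
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Toy stand-ins: T fMRI time points x V voxels, plus a saliency time
# course of length T (in practice derived from the audio, convolved
# with a hemodynamic response function, and resampled to the scan TR).
T, V = 120, 800
fmri = rng.standard_normal((T, V))
saliency = rng.standard_normal(T)

# Learn an over-complete dictionary: voxels are treated as samples, so
# each of the K atoms is a temporal activity pattern of length T, and
# K > T makes the dictionary over-complete.
K = 150
dico = DictionaryLearning(
    n_components=K,
    alpha=1.0,                       # sparsity penalty (assumed value)
    transform_algorithm="lasso_lars",
    max_iter=15,
    random_state=0,
)
codes = dico.fit_transform(fmri.T)   # (V, K) sparse loadings per voxel
atoms = dico.components_             # (K, T) temporal dictionary atoms

# Reconstruct the saliency feature as a sparse linear combination of
# the learned temporal atoms (a simple stand-in for the decoding step).
reg = Lasso(alpha=0.05).fit(atoms.T, saliency)
saliency_hat = reg.predict(atoms.T)
print("in-sample R^2:", r2_score(saliency, saliency_hat))

# Voxels whose codes load on the most predictive atoms (largest |coef|)
# would then be mapped back onto the brain for the group-wise analysis.
predictive_atoms = np.argsort(np.abs(reg.coef_))[-10:]

On real data, the reconstruction quality would be assessed with held-out runs or cross-validation rather than the in-sample fit shown here, and the spatial maps of the predictive atoms would feed the group-wise region and network analysis mentioned in the abstract.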

