Inter-individual deep image reconstruction via hierarchical neural code conversion.

Affiliations

Graduate School of Informatics, Kyoto University, Yoshida-honmachi, Sakyo-ku, Kyoto, 606-8501, Japan.

Department of Neuroinformatics, ATR Computational Neuroscience Laboratories, Hikaridai, Seika, Soraku, Kyoto, 619-0288, Japan.

Publication Information

Neuroimage. 2023 May 1;271:120007. doi: 10.1016/j.neuroimage.2023.120007. Epub 2023 Mar 11.

Abstract

The sensory cortex is characterized by general organizational principles such as topography and hierarchy. However, measured brain activity given identical input exhibits substantially different patterns across individuals. Although anatomical and functional alignment methods have been proposed in functional magnetic resonance imaging (fMRI) studies, it remains unclear whether and how hierarchical and fine-grained representations can be converted between individuals while preserving the encoded perceptual content. In this study, we trained a functional alignment method called a neural code converter, which predicts a target subject's brain activity pattern from a source subject's pattern given the same stimulus, and analyzed the converted patterns by decoding hierarchical visual features and reconstructing perceived images. The converters were trained on fMRI responses to identical sets of natural images presented to pairs of individuals, using voxels in the visual cortex covering V1 through the ventral object areas, without explicit labels of the visual areas. We decoded the converted brain activity patterns into the hierarchical visual features of a deep neural network using decoders pre-trained on the target subject, and then reconstructed images from the decoded features. Without explicit information about the visual cortical hierarchy, the converters automatically learned the correspondence between visual areas at the same levels. Deep neural network feature decoding at each layer showed higher decoding accuracies from the corresponding levels of visual areas, indicating that hierarchical representations were preserved after conversion. Visual images were reconstructed with recognizable silhouettes of objects even with relatively small amounts of data for converter training. Decoders trained on data pooled from multiple individuals through conversion showed a slight improvement over those trained on a single individual. These results demonstrate that hierarchical and fine-grained representations can be converted by functional alignment while preserving sufficient visual information to enable inter-individual visual image reconstruction.
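The pipeline described in the abstract has three steps: train a converter that maps a source subject's voxel pattern to a target subject's pattern for the same stimulus, apply it to held-out source-subject data, and pass the converted pattern to feature decoders pre-trained on the target subject. The following is a minimal sketch of that pipeline, assuming a simple ridge-regression converter and decoder on randomly generated placeholder arrays; the variable names, dimensions, and hyperparameters are illustrative assumptions, not the paper's actual implementation.

# Minimal sketch of the conversion-and-decoding pipeline, assuming a
# linear (ridge) neural code converter and a ridge feature decoder.
# All array sizes, names, and hyperparameters are illustrative
# placeholders, not the paper's actual implementation.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Placeholder fMRI responses of two subjects to the same stimuli
# (n_stimuli x n_voxels); real data would come from visual-cortex voxels
# (V1 through the ventral object areas) of each subject.
X_source_train = rng.standard_normal((1200, 500))   # source subject
Y_target_train = rng.standard_normal((1200, 480))   # target subject

# 1) Train the converter: predict the target subject's activity pattern
#    from the source subject's pattern for the same stimulus.
converter = Ridge(alpha=100.0)
converter.fit(X_source_train, Y_target_train)

# 2) Convert held-out source-subject patterns into the target's voxel space.
X_source_test = rng.standard_normal((50, 500))
Y_target_converted = converter.predict(X_source_test)

# 3) Decode hierarchical DNN features with a decoder pre-trained on the
#    target subject (stubbed here as one ridge model for a single DNN
#    layer); the decoded features would then drive image reconstruction.
dnn_layer_features = rng.standard_normal((1200, 1000))  # placeholder layer features
feature_decoder = Ridge(alpha=100.0).fit(Y_target_train, dnn_layer_features)
decoded_features = feature_decoder.predict(Y_target_converted)
print(decoded_features.shape)  # (50, 1000)

In practice one such decoder would be trained per DNN layer, so that the decoded multi-layer features can be fed to an image-reconstruction module, mirroring the hierarchical decoding described in the abstract.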
