Wang Haibao, Ho Jun Kai, Cheng Fan L, Aoki Shuntaro C, Muraki Yusuke, Tanaka Misato, Park Jong-Yun, Kamitani Yukiyasu
Graduate School of Informatics, Kyoto University, Kyoto, Japan.
Department of Neuroinformatics, ATR Computational Neuroscience Laboratories, Kyoto, Japan.
Nat Comput Sci. 2025 Jul;5(7):534-546. doi: 10.1038/s43588-025-00826-5. Epub 2025 Jul 11.
Inter-individual variability in fine-grained functional topographies poses challenges for scalable data analysis and modeling. Functional alignment techniques can help mitigate these individual differences, but they typically require paired brain data in which the same stimuli are presented to different individuals, which are often unavailable. Here we present a neural code conversion method that overcomes this constraint by optimizing conversion parameters based on the discrepancy between the stimulus contents represented by the original and converted brain activity patterns. This approach, combined with hierarchical features of deep neural networks as latent content representations, achieves conversion accuracies that are comparable with methods using shared stimuli. The converted brain activity from a source subject can be accurately decoded using the target subject's pre-trained decoders, producing high-quality visual image reconstructions that rival within-individual decoding, even with data across different sites and limited training samples. Our approach offers a promising framework for scalable neural data analysis and modeling and a foundation for brain-to-brain communication.
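The abstract describes optimizing a neural code converter by minimizing the discrepancy between the latent contents (deep neural network features) decoded from the original and the converted brain activity, without requiring shared stimuli. Below is a minimal, hypothetical sketch of that idea, not the authors' implementation: it assumes linear feature decoders for each subject, a single DNN feature layer rather than the hierarchical features used in the paper, and a simple linear converter trained by gradient descent. All variable names and dimensions are illustrative placeholders.

```python
# Minimal sketch of content-loss-based neural code conversion (illustrative only).
import torch

n_src_vox, n_tgt_vox, n_feat, n_samples = 500, 600, 1000, 1200

# Hypothetical pre-trained linear DNN-feature decoders (voxels -> features)
# for the source and target subjects; random placeholders here.
W_src = torch.randn(n_src_vox, n_feat)
W_tgt = torch.randn(n_tgt_vox, n_feat)

# Source-subject fMRI patterns (samples x voxels); the stimuli need not be
# shared with the target subject.
X_src = torch.randn(n_samples, n_src_vox)

# Linear neural code converter: source voxel space -> target voxel space.
C = torch.zeros(n_src_vox, n_tgt_vox, requires_grad=True)
opt = torch.optim.Adam([C], lr=1e-2)

for step in range(2000):
    opt.zero_grad()
    feat_src = X_src @ W_src          # content decoded from original activity
    feat_conv = (X_src @ C) @ W_tgt   # content decoded from converted activity
    # Content loss: discrepancy between the two latent representations.
    loss = torch.mean((feat_conv - feat_src) ** 2)
    loss.backward()
    opt.step()

# After optimization, X_src @ C lies in the target subject's voxel space and
# could, in principle, be fed to the target's pre-trained decoders or a
# reconstruction pipeline.
X_converted = (X_src @ C).detach()
```

In this toy setup the converter is fit purely from the content discrepancy, mirroring the paper's key point that no paired responses to identical stimuli are needed; the actual method additionally exploits hierarchical DNN features as the latent content representation.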