Transformer: A Pure Transformer Framework for fMRI-Based Representations of Human Brain Function.

Author Information

Tian Xiaoxi, Ma Hao, Guan Yun, Xu Le, Liu Jiangcong, Tian Lixia

Publication Information

IEEE J Biomed Health Inform. 2025 Jan;29(1):468-481. doi: 10.1109/JBHI.2024.3471186. Epub 2025 Jan 7.

Abstract

Effective representation learning is essential for neuroimage-based individualized predictions. Numerous studies have been performed on fMRI-based individualized predictions, leveraging the sample-wise, spatial, and temporal interdependencies hidden in fMRI data. However, these studies failed to fully utilize the effective information hidden in fMRI data, as only one or two of these types of interdependencies were analyzed. To effectively extract representations of human brain function by fully leveraging all three types of interdependencies, we establish a pure transformer-based framework, Transformer, exploiting the transformer's strong ability to capture interdependencies within the input data. Transformer consists mainly of three transformer modules: the Batch Transformer module addresses sample-wise similarities and differences, the Region Transformer module handles complex spatial interdependencies among brain regions, and the Time Transformer module captures temporal interdependencies across time points. Experiments on age, IQ, and sex prediction based on two public datasets demonstrate the effectiveness of the proposed Transformer. As the only hypothesis is that sample-wise, spatial, and temporal interdependencies extensively exist within the input data, the proposed Transformer can be widely used for representation learning based on multivariate time series. Furthermore, the pure transformer framework makes it convenient to understand the driving factors underlying the predictive models built on Transformer.
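The three-module design described above can be illustrated with a minimal sketch: given parcellated fMRI data as a tensor of shape (samples, regions, timepoints), self-attention is applied along each of the three axes in turn. This is an illustrative sketch only, assuming a plain scaled dot-product attention without learned projections, multiple heads, residual connections, or normalization; the function names and axis conventions are hypothetical, not taken from the paper.

```python
import numpy as np

def self_attention(x):
    """Plain scaled dot-product self-attention over rows of x (n_tokens, d).
    A real transformer module would first project x to queries, keys, values."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)
    return w @ x

def three_axis_attention(fmri):
    """fmri: (samples, regions, timepoints) of parcellated BOLD signals."""
    S, R, T = fmri.shape
    # Batch-Transformer-like step: attend across samples,
    # treating each sample's flattened (region, time) signal as its feature vector
    x = self_attention(fmri.reshape(S, R * T)).reshape(S, R, T)
    # Region-Transformer-like step: attend across brain regions within each sample
    x = np.stack([self_attention(s) for s in x])
    # Time-Transformer-like step: attend across time points within each sample
    x = np.stack([self_attention(s.T).T for s in x])
    return x  # same shape as the input: (S, R, T)

out = three_axis_attention(np.random.rand(4, 10, 30))
```

The key point is that the same attention primitive captures all three types of interdependencies simply by choosing which axis plays the role of the token dimension; the actual framework stacks trainable transformer blocks rather than a single parameter-free attention pass.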
