Asadi Nima, Olson Ingrid R, Obradovic Zoran
Department of Computer and Information Sciences, College of Science and Technology, Temple University, Philadelphia, PA, USA.
Department of Psychology and Neuroscience, College of Liberal Arts, Temple University, Philadelphia, PA, USA.
Netw Neurosci. 2023 Jan 1;7(1):22-47. doi: 10.1162/netn_a_00281. eCollection 2023.
Representation learning is a core component in data-driven modeling of various complex phenomena. Learning a contextually informative representation can especially benefit the analysis of fMRI data because of the complexities and dynamic dependencies present in such datasets. In this work, we propose a framework based on transformer models to learn an embedding of fMRI data that takes the spatiotemporal contextual information in the data into account. This approach takes the multivariate BOLD time series of brain regions as well as their functional connectivity network simultaneously as input to create a set of meaningful features that can in turn be used in various downstream tasks such as classification, feature extraction, and statistical analysis. The proposed spatiotemporal framework uses the attention mechanism together with a graph convolutional neural network to jointly inject contextual information about the dynamics of the time series data and their connectivity into the representation. We demonstrate the benefits of this framework by applying it to two resting-state fMRI datasets, and provide further discussion of its advantages over several commonly adopted architectures.
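The two ingredients the abstract names, temporal self-attention over the BOLD time series and graph convolution over the functional connectivity network, can be illustrated with a minimal numpy sketch. This is not the authors' implementation; the shapes, the use of Pearson correlation for connectivity, and the single-layer symmetric-normalized graph convolution are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X):
    """Scaled dot-product self-attention over the time axis.
    X: (T, N) -- each time point is a vector of regional BOLD values.
    Returns context-weighted features of the same shape."""
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)          # (T, T) temporal context weights
    return softmax(scores, axis=-1) @ X    # mix time points by relevance

def graph_convolution(X, A, W):
    """One graph-convolution layer with symmetric normalization
    (self-loops added), propagating region features over the
    functional connectivity graph.
    X: (N, d) region features, A: (N, N) connectivity, W: (d, k)."""
    A_hat = A + np.eye(A.shape[0])                      # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ X @ W, 0.0)  # ReLU

# Toy input: N brain regions, T time points of BOLD signal (dimensions assumed).
rng = np.random.default_rng(0)
N, T, k = 8, 30, 4
bold = rng.standard_normal((N, T))       # multivariate BOLD time series
conn = np.abs(np.corrcoef(bold))         # functional connectivity (|Pearson r|)

# Temporal context via attention, then spatial context via the graph.
temporal = self_attention(bold.T)        # (T, N) context-aware time series
W = rng.standard_normal((T, k)) * 0.1    # learnable projection (random here)
embedding = graph_convolution(temporal.T, conn, W)   # (N, k) region embedding
print(embedding.shape)                   # one k-dimensional vector per region
```

The resulting `(N, k)` embedding, one vector per brain region informed by both temporal dynamics and connectivity, is the kind of representation the abstract describes feeding into downstream classification or statistical analysis.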