Iqbal Zafar, Rahman Md Mahfuzur, Zia Qasim, Popov Pavel, Fu Zening, Calhoun Vince D, Plis Sergey
Department of Computer Science, Georgia State University, Atlanta, GA 30302, USA.
Center for Translational Research in Neuroimaging and Data Science (TReNDS), Atlanta, GA 30303, USA.
Brain Sci. 2025 Sep 1;15(9):954. doi: 10.3390/brainsci15090954.
This study aims to develop and validate an interpretable deep learning framework that leverages self-supervised time reversal (TR) pretraining to identify consistent, biologically plausible functional network biomarkers across multiple neurological and psychiatric disorders.
We pretrained a hierarchical LSTM model using a TR pretext task on the Human Connectome Project (HCP) dataset. The pretrained weights were transferred to downstream classification tasks on five clinical datasets (FBIRN, BSNIP, ADNI, OASIS, and ABIDE) spanning schizophrenia, Alzheimer's disease, and autism spectrum disorder. After fine-tuning, we extracted latent features and employed a logistic regression probing analysis to decode class-specific functional network contributions. Models trained from scratch without pretraining served as a baseline. Statistical tests (one-sample and two-sample t-tests) were performed on the latent features to assess their discriminative power and consistency.
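The TR pretext task described above can be illustrated with a minimal sketch. This is not the authors' implementation; the function name, array shapes (windows of ICA time courses), and 50/50 labeling scheme are illustrative assumptions. The core idea is simply that half the input windows are flipped along the time axis and the model is trained to predict which ones were reversed:

```python
import numpy as np

def make_tr_batch(windows, rng):
    """Build one time-reversal (TR) pretext batch.

    windows: array of shape (batch, time, features), e.g. fMRI
    component time courses. For each window we draw a binary label;
    label 1 means the window is reversed along the time axis. The
    pretext classifier is then trained to recover these labels.
    """
    batch = windows.copy()
    labels = rng.integers(0, 2, size=len(batch))  # 1 = time-reversed
    batch[labels == 1] = batch[labels == 1, ::-1, :]
    return batch, labels

# Illustrative shapes only: 8 windows, 100 time points, 53 components.
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 100, 53))
xb, yb = make_tr_batch(x, rng)
```

Because the labels are generated from the data itself, no diagnostic annotations are needed at this stage; the clinical labels enter only during fine-tuning on the downstream datasets.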
TR pretraining consistently improved classification performance in four out of five datasets, with AUC gains of up to 5.3%, particularly in data-scarce settings. Probing analyses revealed biologically meaningful and consistent patterns: schizophrenia was associated with reduced auditory network activity, Alzheimer's with disrupted default mode and cerebellar networks, and autism with sensorimotor anomalies. TR-pretrained models produced more statistically significant latent features and demonstrated higher consistency across datasets (e.g., Pearson correlation = 0.9003 for schizophrenia probing vs. -0.67 for non-pretrained). In contrast, non-pretrained models showed unstable performance and inconsistent feature importance.
Time Reversal pretraining enhances both the performance and interpretability of deep learning models for fMRI classification. By enabling more stable and biologically plausible representations, TR pretraining supports clinically relevant insights into disorder-specific network disruptions. This study demonstrates the utility of interpretable self-supervised models in neuroimaging, offering a promising step toward transparent and trustworthy AI applications in psychiatry.