PAIR Comparison between Two Within-Group Conditions of Resting-State fMRI Improves Classification Accuracy.

Author information

Zhou Zhen, Wang Jian-Bao, Zang Yu-Feng, Pan Gang

Affiliations

College of Computer Science and Technology, Zhejiang University, Hangzhou, China.

Center for Cognition and Brain Disorders and the Affiliated Hospital, Hangzhou Normal University, Hangzhou, China.

Publication information

Front Neurosci. 2018 Jan 9;11:740. doi: 10.3389/fnins.2017.00740. eCollection 2017.

Abstract

Classification approaches have been increasingly applied to differentiate patients and normal controls using resting-state functional magnetic resonance imaging (RS-fMRI) data. Although most previous classification studies have reported promising accuracy within individual datasets, achieving high levels of accuracy with multiple datasets remains challenging for two main reasons: high dimensionality and high variability across subjects. We used two independent RS-fMRI datasets (n = 31 and 46, respectively), both with eyes-closed (EC) and eyes-open (EO) conditions. For each dataset, we first reduced the number of features to a small number of brain regions with paired t-tests, using the amplitude of low frequency fluctuation (ALFF) as a metric. Second, we employed a new method for feature extraction, named the PAIR method, examining EC and EO as paired conditions rather than independent conditions. Specifically, for each dataset, we obtained EC minus EO (EC-EO) maps of ALFF from half of the subjects (n = 15 for dataset-1, n = 23 for dataset-2) and obtained EO-EC maps from the other half (n = 16 for dataset-1, n = 23 for dataset-2). A support vector machine (SVM) was then used for classification of the EC and EO RS-fMRI maps. The mean classification accuracy of the PAIR method was 91.40% for dataset-1 and 92.75% for dataset-2 in the conventional frequency band of 0.01-0.08 Hz. For cross-dataset validation, we applied the classifier from dataset-1 directly to dataset-2, and vice versa. The mean accuracy of cross-dataset validation was 94.93% for dataset-1 to dataset-2 and 90.32% for dataset-2 to dataset-1 in the 0.01-0.08 Hz range. For the UNPAIR method (treating EC and EO as independent conditions), classification accuracy was substantially lower (mean 69.89% for dataset-1 and 82.97% for dataset-2), and lower still for cross-dataset validation (64.69% for dataset-1 to dataset-2 and 64.98% for dataset-2 to dataset-1) in the 0.01-0.08 Hz range. In conclusion, for within-group design studies (e.g., paired conditions or follow-up studies), we recommend the PAIR method for feature extraction. In addition, dimensionality reduction with strong prior knowledge of specific brain regions should also be considered for feature selection in neuroimaging studies.
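The abstract describes a three-step pipeline: paired t-test feature selection on ALFF, signed EC-EO / EO-EC difference maps as features, and SVM classification. The sketch below illustrates that pipeline under stated assumptions rather than reproducing the authors' implementation: ALFF values are taken as pre-computed per-region vectors, random numbers stand in for real maps, and the number of retained regions and the linear kernel are illustrative choices made here, using SciPy and scikit-learn.

```python
# Minimal sketch of the PAIR idea described in the abstract (not the authors' code).
# Assumed inputs: per-subject ALFF vectors for the EC and EO conditions.
import numpy as np
from scipy.stats import ttest_rel
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_subjects, n_regions = 31, 90                         # dataset-1 size; parcellation size is assumed
alff_ec = rng.normal(size=(n_subjects, n_regions))     # placeholder ALFF maps, eyes-closed
alff_eo = rng.normal(size=(n_subjects, n_regions))     # placeholder ALFF maps, eyes-open

# Step 1: dimensionality reduction -- paired t-tests on ALFF between EC and EO,
# keeping only the regions with the strongest paired differences.
t_vals, p_vals = ttest_rel(alff_ec, alff_eo, axis=0)
selected = np.argsort(p_vals)[:10]                     # number of retained regions is an assumption

# Step 2: PAIR feature extraction -- treat EC and EO as paired conditions and
# build signed difference maps: EC-EO for one half of subjects, EO-EC for the other
# (mirroring the 15/16 split reported for dataset-1).
half = n_subjects // 2
X = np.vstack([
    alff_ec[:half, selected] - alff_eo[:half, selected],    # EC-EO maps
    alff_eo[half:, selected] - alff_ec[half:, selected],    # EO-EC maps
])
y = np.concatenate([np.ones(half), -np.ones(n_subjects - half)])  # +1 = EC-EO, -1 = EO-EC

# Step 3: SVM on the difference maps (linear kernel is an assumption).
clf = SVC(kernel="linear").fit(X, y)
print("training accuracy:", clf.score(X, y))
```

Cross-dataset validation as described in the abstract would then amount to building difference maps from the second dataset in the same way and scoring them with the classifier trained on the first (e.g., `clf.score(X_dataset2, y_dataset2)`).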


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6cf5/5767225/c341bfecb388/fnins-11-00740-g0001.jpg
