Fratello Michele, Caiazzo Giuseppina, Trojsi Francesca, Russo Antonio, Tedeschi Gioacchino, Tagliaferri Roberto, Esposito Fabrizio
Department of Medical, Surgical, Neurological, Metabolic and Aging Sciences, Second University of Naples, Naples, Italy.
Department of Medicine, Surgery and Dentistry "Scuola Medica Salernitana", University of Salerno, Baronissi, Salerno, Italy.
Neuroinformatics. 2017 Apr;15(2):199-213. doi: 10.1007/s12021-017-9324-2.
Brain connectivity analyses using voxels as features are not robust enough for single-patient classification because of inter-subject anatomical and functional variability. To construct more robust features, voxels can be aggregated into clusters that are maximally coherent across subjects. Moreover, combining multi-modal neuroimaging with multi-view data integration techniques makes it possible to generate multiple independent connectivity features for the same patient. Structural and functional connectivity features were extracted from multi-modal MRI images with a clustering technique and used for the multi-view classification of different phenotypes of neurodegeneration by an ensemble learning method (random forest). Two different multi-view models (intermediate and late data integration) were trained on, and tested for the classification of, individual whole-brain default-mode network (DMN) and fractional anisotropy (FA) maps from 41 amyotrophic lateral sclerosis (ALS) patients, 37 Parkinson's disease (PD) patients and 43 healthy control (HC) subjects. Both multi-view data models exhibited ensemble classification accuracies significantly above chance. In ALS patients, the multi-view models achieved the best performances (intermediate: 82.9%, late: 80.5% correct classification) and were more discriminative than each single-view model. In PD patients and controls, the multi-view models' performances were lower (PD: 59.5%, 62.2%; HC: 56.8%, 59.1%) but higher than those of at least one single-view model. Training the models only on patients yielded more than 85% of patients correctly discriminated as ALS or PD and maximal performances for the multi-view models. These results highlight the potential of mining complementary information from the integration of multiple data views in the classification of connectivity patterns from multi-modal brain images in the study of neurodegenerative diseases.
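The following is a minimal sketch, not the authors' pipeline, contrasting the two multi-view strategies named in the abstract (intermediate versus late data integration) with random forests in scikit-learn. The arrays X_dmn and X_fa are hypothetical per-subject cluster features standing in for the DMN and FA views, the feature dimensionality and cross-validation scheme are assumptions, and the data here are synthetic placeholders.

```python
# Illustrative sketch only: synthetic stand-ins for the DMN (functional) and
# FA (structural) cluster features; labels follow the cohort sizes reported
# in the abstract (41 ALS, 37 PD, 43 HC).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_subjects = 121                                # 41 + 37 + 43
X_dmn = rng.normal(size=(n_subjects, 50))       # functional-connectivity view (assumed shape)
X_fa = rng.normal(size=(n_subjects, 50))        # structural-connectivity view (assumed shape)
y = np.repeat(["ALS", "PD", "HC"], [41, 37, 43])

# Intermediate integration: concatenate the two feature views and train a
# single random forest on the joint representation.
X_joint = np.hstack([X_dmn, X_fa])
rf_joint = RandomForestClassifier(n_estimators=500, random_state=0)
pred_intermediate = cross_val_predict(rf_joint, X_joint, y, cv=5)

# Late integration: train one forest per view and fuse the predicted class
# probabilities (here by simple averaging) before taking the argmax.
rf_dmn = RandomForestClassifier(n_estimators=500, random_state=0)
rf_fa = RandomForestClassifier(n_estimators=500, random_state=0)
proba_dmn = cross_val_predict(rf_dmn, X_dmn, y, cv=5, method="predict_proba")
proba_fa = cross_val_predict(rf_fa, X_fa, y, cv=5, method="predict_proba")
classes = np.unique(y)                          # probability columns follow this ordering
pred_late = classes[np.argmax((proba_dmn + proba_fa) / 2, axis=1)]

print("intermediate integration accuracy:", accuracy_score(y, pred_intermediate))
print("late integration accuracy:        ", accuracy_score(y, pred_late))
```

On real features, the cross-validated accuracies printed at the end would be compared against chance level and against the single-view forests (rf_dmn, rf_fa used alone), which is the comparison the abstract reports.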