Ingalhalikar Madhura, Parker William A, Bloy Luke, Roberts Timothy P L, Verma Ragini
Section of Biomedical Image Analysis, University of Pennsylvania, Philadelphia, PA, USA.
Med Image Comput Comput Assist Interv. 2012;15(Pt 3):468-75. doi: 10.1007/978-3-642-33454-2_58.
The paper presents a method for learning multimodal classifiers from datasets in which not all subjects have data from all modalities. Usually, subjects with a severe form of pathology are the ones who fail to satisfactorily complete the study, especially when it involves multiple imaging modalities. A classifier capable of handling subjects with unequal numbers of modalities prevents discarding any subjects, as is traditionally done, thereby broadening the scope of the classifier to more severe pathology. It also allows the classifier to be designed to include as much of the available information as possible, and facilitates testing subjects with missing modalities on the constructed classifier. The presented method employs an ensemble-based approach in which several subsets of complete data are formed and each is trained with an individual classifier. The outputs of these classifiers are fused in a weighted aggregation step, yielding an optimal probabilistic score for each subject. The method is applied to a spatio-temporal dataset for autism spectrum disorder (ASD) (96 patients with ASD and 42 typically developing controls) consisting of functional features from magnetoencephalography (MEG) and structural connectivity features from diffusion tensor imaging (DTI). A clear distinction between ASD and controls is obtained, with an average 5-fold accuracy of 83.3% and a testing accuracy of 88.4%. The fusion classifier's performance is superior both to classification using single modalities and to a multimodal classifier trained only on complete data (78.3%). The presented multimodal classifier framework is applicable to all modality combinations.
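The core idea, training one classifier per complete-data subset and fusing their probabilistic outputs with a weighted aggregation, can be illustrated with a minimal sketch. This is not the authors' implementation: the synthetic two-modality data, the choice of logistic regression as the base classifier, and the use of per-classifier training accuracy as fusion weights are all illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic two-modality dataset; modality B is missing for some subjects.
n = 120
y = rng.integers(0, 2, n)                    # binary labels (e.g. ASD vs control)
X_a = rng.normal(y[:, None], 1.0, (n, 5))    # modality A features: all subjects
X_b = rng.normal(y[:, None], 1.0, (n, 4))    # modality B features
has_b = rng.random(n) > 0.3                  # roughly 30% lack modality B

# One classifier per modality, each trained only on subjects with that modality.
clf_a = LogisticRegression().fit(X_a, y)
clf_b = LogisticRegression().fit(X_b[has_b], y[has_b])

# Fusion weights: here, each classifier's training accuracy (an assumption;
# the paper's aggregation weights are derived differently).
w_a = clf_a.score(X_a, y)
w_b = clf_b.score(X_b[has_b], y[has_b])

def fused_score(xa, xb=None):
    """Weighted aggregation of the per-modality class probabilities
    that are available for this subject."""
    p = w_a * clf_a.predict_proba(xa.reshape(1, -1))[0, 1]
    w = w_a
    if xb is not None:                       # subject also has modality B
        p += w_b * clf_b.predict_proba(xb.reshape(1, -1))[0, 1]
        w += w_b
    return p / w                             # normalised probabilistic score
```

A subject with both modalities is scored by `fused_score(X_a[i], X_b[i])`; a subject missing modality B is still scored by `fused_score(X_a[i])`, rather than being discarded.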