Deligani Roohollah Jafari, Borgheai Seyyed Bahram, McLinden John, Shahriari Yalda
Department of Electrical, Computer and Biomedical Engineering; University of Rhode Island, Kingston, RI 02881, USA.
Interdisciplinary Neuroscience Program; University of Rhode Island, Kingston, RI 02881, USA.
Biomed Opt Express. 2021 Feb 26;12(3):1635-1650. doi: 10.1364/BOE.413666. eCollection 2021 Mar 1.
Multimodal data fusion is one of the primary current directions in neuroimaging research, aiming to overcome the fundamental limitations of individual modalities by exploiting complementary information across modalities. Electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) are especially compelling modalities due to their potentially complementary features reflecting the electro-hemodynamic characteristics of neural responses. However, current multimodal studies lack a comprehensive, systematic approach to properly merging the complementary features of their multimodal data. Identifying a systematic approach to properly fuse EEG-fNIRS data and exploit their complementary potential is crucial to improving performance. This paper proposes a framework for classifying fused EEG-fNIRS data at the feature level, relying on a mutual information-based feature selection approach that accounts for the complementarity between features. The goal is to optimize the complementarity, redundancy, and relevance of multimodal features with respect to the class labels, i.e., pathological condition versus healthy control. Nine amyotrophic lateral sclerosis (ALS) patients and nine healthy controls underwent multimodal data recording during a visuo-mental task. Multiple spectral and temporal features were extracted and fed to a feature selection algorithm followed by a classifier, which selected the optimized subset of features through a cross-validation process. The results demonstrated considerably improved hybrid classification performance compared with the individual modalities and with conventional classification without feature selection, suggesting the potential efficacy of our proposed framework for wider neuro-clinical applications.
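The abstract describes a mutual information-based feature selection step that trades off relevance to the class labels against redundancy among already-selected features. The paper's exact criterion (including its complementarity term) is not given in the abstract, so the following is only an illustrative sketch of the closely related greedy mRMR (max-relevance, min-redundancy) idea, using scikit-learn's MI estimators; the function name `mrmr_select` and the toy data are assumptions for demonstration, not the authors' implementation.

```python
# Illustrative mRMR-style greedy feature selection (NOT the paper's exact
# algorithm): pick features maximizing MI with labels (relevance) minus the
# mean MI with already-selected features (redundancy).
import numpy as np
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

def mrmr_select(X, y, n_select, random_state=0):
    """Greedily select n_select column indices of X by relevance - redundancy."""
    n_features = X.shape[1]
    # Relevance: MI between each feature and the (discrete) class labels.
    relevance = mutual_info_classif(X, y, random_state=random_state)
    selected = [int(np.argmax(relevance))]
    while len(selected) < n_select:
        best_score, best_idx = -np.inf, None
        for j in range(n_features):
            if j in selected:
                continue
            # Redundancy: mean MI between candidate j and selected features.
            redundancy = np.mean([
                mutual_info_regression(
                    X[:, [j]], X[:, s], random_state=random_state
                )[0]
                for s in selected
            ])
            score = relevance[j] - redundancy
            if score > best_score:
                best_score, best_idx = score, j
        selected.append(best_idx)
    return selected

# Toy demo: columns 0 and 1 track the label (and are mutually redundant),
# columns 2 and 3 are pure noise.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)
X = np.column_stack([
    y + 0.3 * rng.standard_normal(200),
    y + 0.3 * rng.standard_normal(200),
    rng.standard_normal(200),
    rng.standard_normal(200),
])
picked = mrmr_select(X, y, n_select=2)
print(picked)
```

In a hybrid pipeline like the one the abstract outlines, `X` would hold the concatenated spectral and temporal EEG and fNIRS features, and the selected subset would then be passed to the classifier inside each cross-validation fold.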