Annu Int Conf IEEE Eng Med Biol Soc. 2022 Jul;2022:3342-3345. doi: 10.1109/EMBC48229.2022.9871605.
Electroencephalography (EEG) signals can effectively measure the level of human decision confidence. However, EEG signals are difficult to acquire in practice due to expensive equipment and complex operation, whereas eye movement signals are much easier to acquire and process. To tackle this problem, we propose a cross-modality deep learning method based on deep canonical correlation analysis (CDCCA), which transforms each modality separately and coordinates the different modalities into a hyperspace using specific canonical correlation analysis constraints. In our proposed method, only eye movement signals are used as inputs in the test phase, while knowledge from EEG signals is learned during the training stage. Experimental results on two human decision confidence datasets demonstrate that our proposed method outperforms existing single-modal approaches trained and tested on eye movement signals, and maintains competitive accuracy in comparison with multimodal models.
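The coordination step above rests on canonical correlation analysis (CCA), which finds maximally correlated linear projections of two views. The paper's actual model (CDCCA) uses deep networks and its own training objective; as a minimal sketch of the underlying CCA computation only, the canonical correlations between two feature matrices (e.g. EEG features and eye-movement features, one row per sample) can be obtained from the singular values of the whitened cross-covariance. The function name and regularization constant here are illustrative assumptions, not the authors' code:

```python
import numpy as np

def canonical_correlations(X, Y, reg=1e-4):
    """Canonical correlations between two views X, Y (rows = samples).

    reg is a small ridge term added to each covariance for numerical
    stability (an assumption of this sketch, not from the paper).
    """
    n = X.shape[0]
    Xc = X - X.mean(axis=0)          # center each view
    Yc = Y - Y.mean(axis=0)
    Sxx = Xc.T @ Xc / (n - 1) + reg * np.eye(X.shape[1])
    Syy = Yc.T @ Yc / (n - 1) + reg * np.eye(Y.shape[1])
    Sxy = Xc.T @ Yc / (n - 1)

    def inv_sqrt(S):
        # symmetric inverse square root via eigendecomposition
        w, V = np.linalg.eigh(S)
        return V @ np.diag(w ** -0.5) @ V.T

    # singular values of the whitened cross-covariance are the
    # canonical correlations, in descending order
    T = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)
    return np.linalg.svd(T, compute_uv=False)
```

In a deep CCA setting, X and Y would be the outputs of the two modality-specific networks, and the (negated) sum of the top canonical correlations would serve as the training loss, encouraging the eye-movement representation to align with the EEG representation seen only at training time.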