
A Cross-modality Deep Learning Method for Measuring Decision Confidence from Eye Movement Signals.

Publication information

Annu Int Conf IEEE Eng Med Biol Soc. 2022 Jul;2022:3342-3345. doi: 10.1109/EMBC48229.2022.9871605.

Abstract

Electroencephalography (EEG) signals can effectively measure the level of human decision confidence. However, EEG signals are difficult to acquire in practice because of their expensive cost and complex operation, whereas eye movement signals are much easier to acquire and process. To tackle this problem, we propose a cross-modality deep learning method based on deep canonical correlation analysis (CDCCA), which transforms each modality separately and coordinates the different modalities into a hyperspace using specific canonical correlation analysis constraints. In our proposed method, only eye movement signals are used as inputs in the test phase, while knowledge from EEG signals is learned during the training stage. Experimental results on two human decision confidence datasets demonstrate that our method outperforms existing single-modal approaches trained and tested on eye movement signals, and maintains competitive accuracy in comparison with multimodal models.
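The idea of coordinating two modalities through canonical correlation constraints can be illustrated with classical linear CCA. The sketch below is a hypothetical minimal example, not the paper's CDCCA implementation (the function name, regularization, and toy data are our own assumptions): it finds linear projections of EEG-like features `X` and eye-movement-like features `Y` that are maximally correlated in a shared space.

```python
import numpy as np

def cca_projections(X, Y, k=1, reg=1e-3):
    """Classical linear CCA between two views (e.g. EEG features X and
    eye movement features Y). Returns projection matrices Wx, Wy and the
    top-k canonical correlations. Covariances are regularized for
    numerical stability. Illustrative sketch only, not the paper's CDCCA."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Sxx = X.T @ X / (n - 1) + reg * np.eye(X.shape[1])
    Syy = Y.T @ Y / (n - 1) + reg * np.eye(Y.shape[1])
    Sxy = X.T @ Y / (n - 1)
    # Whiten each view via Cholesky factors, then the SVD of the whitened
    # cross-covariance yields the canonical directions and correlations.
    Lx_inv = np.linalg.inv(np.linalg.cholesky(Sxx))
    Ly_inv = np.linalg.inv(np.linalg.cholesky(Syy))
    T = Lx_inv @ Sxy @ Ly_inv.T
    U, s, Vt = np.linalg.svd(T)
    Wx = Lx_inv.T @ U[:, :k]
    Wy = Ly_inv.T @ Vt[:k].T
    return Wx, Wy, s[:k]

# Toy data: both views share one latent factor z plus noise dimensions.
rng = np.random.default_rng(0)
z = rng.normal(size=(500, 1))
X = np.hstack([z + 0.1 * rng.normal(size=(500, 1)), rng.normal(size=(500, 2))])
Y = np.hstack([z + 0.1 * rng.normal(size=(500, 1)), rng.normal(size=(500, 2))])
Wx, Wy, corrs = cca_projections(X, Y, k=1)
```

In the deep variant described in the abstract, the linear projections are replaced by per-modality neural networks trained under such correlation constraints, so that at test time the eye movement branch alone can be used while EEG-derived structure is retained from training.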

