
Learning Generalized Representations of EEG between Multiple Cognitive Attention Tasks.

Author Information

Ding Yi, Jun Ang Nigel Wei, Aung Phyo Wai Aung, Guan Cuntai

Publication Information

Annu Int Conf IEEE Eng Med Biol Soc. 2021 Nov;2021:306-310. doi: 10.1109/EMBC46164.2021.9629575.

Abstract

Attention can be measured with different types of cognitive tasks, such as the Stroop task, the Eriksen Flanker task, and the Psychomotor Vigilance Task (PVT). Despite their differing content, all three tasks require visual attention. To learn generalized representations from the electroencephalogram (EEG) across different cognitive attention tasks, extensive intra- and inter-task attention classification experiments were conducted on data from the three tasks using SVM, EEGNet, and TSception. In cross-validated intra-task experiments, TSception achieved significantly higher classification accuracy than the other methods: 82.48%, 88.22%, and 87.31% for the Stroop, Flanker, and PVT tests respectively. In inter-task experiments, the deep learning methods outperformed SVM, with most of the accuracy drops not being significant. Our experiments indicate that common knowledge exists across cognitive attention tasks, and that deep learning methods can learn generalized representations better.
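The abstract describes a two-part evaluation protocol: cross-validated classification within each task (intra-task), and training on one task while testing on another (inter-task) to probe whether the learned representation generalizes. The sketch below illustrates that protocol only, using an SVM baseline from scikit-learn on synthetic placeholder features; the feature extraction, data shapes, labels, and cross-validation settings are assumptions for illustration, not the paper's actual pipeline.

```python
# Minimal sketch of the intra- vs. inter-task evaluation protocol.
# Placeholder data stands in for real EEG features (e.g., band power
# per channel); this is not the authors' code or exact setup.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def make_task(n_trials=200, n_features=64):
    """Hypothetical per-task data: (trials, features) feature matrix
    and binary attention labels (attentive vs. inattentive)."""
    X = rng.normal(size=(n_trials, n_features))
    y = rng.integers(0, 2, size=n_trials)
    return X, y

tasks = {name: make_task() for name in ("Stroop", "Flanker", "PVT")}

# Intra-task: cross-validated accuracy within each task.
for name, (X, y) in tasks.items():
    acc = cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean()
    print(f"intra-task {name}: {acc:.3f}")

# Inter-task: train on one task, test on each other task, to measure
# how much accuracy drops when the task changes.
for src, (Xs, ys) in tasks.items():
    clf = SVC(kernel="rbf").fit(Xs, ys)
    for tgt, (Xt, yt) in tasks.items():
        if tgt != src:
            print(f"{src} -> {tgt}: {clf.score(Xt, yt):.3f}")
```

On random placeholder features both loops hover around chance; in the paper's setting, small intra-to-inter accuracy drops for EEGNet and TSception are what indicate task-general representations.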

