SAST-GCN: Segmentation Adaptive Spatial Temporal-Graph Convolutional Network for P3-Based Video Target Detection.

Authors

Lu Runnan, Zeng Ying, Zhang Rongkai, Yan Bin, Tong Li

Affiliations

Henan Key Laboratory of Imaging and Intelligent Processing, People's Liberation Army of China (PLA) Strategic Support Force Information Engineering University, Zhengzhou, China.

Key Laboratory for Neuroinformation of Ministry of Education, School of Life Sciences and Technology, University of Electronic Science and Technology of China, Chengdu, China.

Publication

Front Neurosci. 2022 Jun 2;16:913027. doi: 10.3389/fnins.2022.913027. eCollection 2022.

Abstract

Detecting video-induced P3 is crucial to building a video target detection system based on the brain-computer interface. However, studies have shown that the brain response patterns corresponding to video-induced P3 are dynamic and determined by the interaction of multiple brain regions. This paper proposes a segmentation adaptive spatial-temporal graph convolutional network (SAST-GCN) for P3-based video target detection. To make full use of the dynamic characteristics of the P3 signal, the data are segmented according to the processing stages of the video-induced P3, and brain network connections are constructed for each segment. The spatial-temporal features of the EEG data are then extracted by adaptive spatial-temporal graph convolution to discriminate targets from non-targets in the video. In particular, a style-based recalibration module is added to select feature maps with higher contributions and to increase the feature extraction ability of the network. The experimental results demonstrate the superiority of the proposed model over the baseline methods. In addition, the ablation experiments indicate that segmenting the data to construct the brain connections effectively improves recognition performance by reflecting the dynamic connection relationships between EEG channels more accurately.

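The abstract describes a pipeline of stage-wise segmentation, per-segment adaptive brain-connection graphs, spatial-temporal graph convolution, style-based recalibration, and target/non-target classification. The paper itself provides no code here, so the following PyTorch sketch is only an illustration of that pipeline under assumed settings: the channel count, segment boundaries, hidden size, kernel size, and the simplified SRM variant are all hypothetical and not taken from the paper.

```python
# Hypothetical sketch of the SAST-GCN pipeline described in the abstract.
# Shapes, segment boundaries, and layer sizes are assumptions, not the
# authors' published configuration.
import torch
import torch.nn as nn


class SRM(nn.Module):
    """Style-based recalibration: reweight feature maps by their mean/std 'style'."""
    def __init__(self, channels):
        super().__init__()
        # Channel-wise fully connected layer over the (mean, std) style vector.
        self.cfc = nn.Conv1d(channels, channels, kernel_size=2, groups=channels, bias=False)
        self.bn = nn.BatchNorm1d(channels)

    def forward(self, x):                      # x: (batch, feat, nodes, time)
        flat = x.flatten(2)                    # (batch, feat, nodes*time)
        style = torch.stack([flat.mean(-1), flat.std(-1)], dim=-1)   # (batch, feat, 2)
        gate = torch.sigmoid(self.bn(self.cfc(style)))               # (batch, feat, 1)
        return x * gate.unsqueeze(-1)          # scale feature maps by their gates


class AdaptiveSTGraphConv(nn.Module):
    """One spatial-temporal block with a learnable (adaptive) adjacency matrix."""
    def __init__(self, in_feat, out_feat, n_nodes, t_kernel=9):
        super().__init__()
        self.adj = nn.Parameter(torch.eye(n_nodes) + 0.01 * torch.randn(n_nodes, n_nodes))
        self.spatial = nn.Conv2d(in_feat, out_feat, kernel_size=1)
        self.temporal = nn.Conv2d(out_feat, out_feat, kernel_size=(1, t_kernel),
                                  padding=(0, t_kernel // 2))
        self.srm = SRM(out_feat)
        self.act = nn.ReLU()

    def forward(self, x):                      # x: (batch, feat, nodes, time)
        a = torch.softmax(self.adj, dim=-1)    # row-normalised adaptive graph
        x = torch.einsum("bfnt,nm->bfmt", x, a)  # propagate features along graph edges
        x = self.act(self.spatial(x))
        x = self.act(self.temporal(x))
        return self.srm(x)                     # recalibrate feature maps


class SASTGCNSketch(nn.Module):
    """Segment the epoch by assumed P3 processing stages, encode each segment
    with its own adaptive ST-graph block, then classify target vs. non-target."""
    def __init__(self, n_channels=62, seg_bounds=(0, 50, 150, 250), hidden=32):
        super().__init__()
        self.seg_bounds = seg_bounds
        n_segments = len(seg_bounds) - 1
        self.blocks = nn.ModuleList(
            AdaptiveSTGraphConv(1, hidden, n_channels) for _ in range(n_segments)
        )
        self.head = nn.Linear(n_segments * hidden, 2)

    def forward(self, eeg):                    # eeg: (batch, channels, time)
        feats = []
        for (t0, t1), block in zip(zip(self.seg_bounds, self.seg_bounds[1:]), self.blocks):
            seg = eeg[:, None, :, t0:t1]       # (batch, 1, channels, seg_len)
            feats.append(block(seg).mean(dim=(2, 3)))   # global average pooling
        return self.head(torch.cat(feats, dim=-1))      # target / non-target logits


if __name__ == "__main__":
    model = SASTGCNSketch()
    logits = model(torch.randn(4, 62, 250))    # 4 epochs, 62 channels, 250 samples
    print(logits.shape)                        # torch.Size([4, 2])
```

The key design point the abstract emphasises is that each segment gets its own adjacency and convolution block, so the learned graph can differ across P3 processing stages; the sketch mirrors that by instantiating one `AdaptiveSTGraphConv` per segment rather than sharing weights.
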

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8a8c/9201684/ff935b5c6e70/fnins-16-913027-g001.jpg
