Liu Xulong, Jia Ziwei, Xun Meng, Wan Xianglong, Lu Huibin, Zhou Yanhong
School of Computer and Communication Engineering, Northeastern University, Qinhuangdao, 066004, China.
Hebei Key Laboratory of Marine Perception Network and Data Processing, Northeastern University, Qinhuangdao, 066004, China.
Med Biol Eng Comput. 2025 Jun 5. doi: 10.1007/s11517-025-03386-y.
The integration of brain-computer interface (BCI) and virtual reality (VR) systems offers transformative potential for spatial cognition training and assessment. By leveraging artificial intelligence (AI) to analyze electroencephalogram (EEG) data, brain activity patterns during spatial tasks can be decoded with high precision. In this context, a hybrid neural network named MSFHNet is proposed, optimized for extracting spatiotemporal features from spatial cognitive EEG signals. The model employs a hierarchical architecture where its temporal module uses multi-scale dilated convolutions to capture dynamic EEG variations, while its spatial module integrates channel-spatial attention mechanisms to model inter-channel dependencies and spatial distributions. Cross-stacked modules further refine discriminative features through deep-level fusion. Evaluations demonstrate the superiority of MSFHNet in the beta2 frequency band, achieving 98.58% classification accuracy and outperforming existing models. This innovation enhances EEG signal representation, advancing AI-powered BCI-VR systems for robust spatial cognitive training.
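To make the temporal module's key idea concrete, the sketch below illustrates multi-scale dilated 1D convolution over a toy multi-channel EEG-like signal. This is an illustrative NumPy example, not the authors' MSFHNet implementation: the kernel values, dilation rates (1, 2, 4), and the concatenation scheme are assumptions chosen only to show how dilation widens the temporal receptive field without adding parameters.

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    """'Valid' 1D convolution with a dilated kernel.

    x: (channels, time) signal; kernel: (k,) filter shared across channels.
    Dilation d inserts d-1 gaps between taps, widening the receptive
    field to (k-1)*d + 1 samples without adding parameters.
    """
    k = len(kernel)
    span = (k - 1) * dilation + 1
    out_len = x.shape[1] - span + 1
    out = np.zeros((x.shape[0], out_len))
    for t in range(out_len):
        taps = x[:, t : t + span : dilation]  # pick every d-th sample
        out[:, t] = taps @ kernel
    return out

# Toy EEG-like input: 4 channels, 32 time samples (values are random,
# purely for illustration).
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 32))
kernel = np.array([0.25, 0.5, 0.25])  # assumed smoothing filter

# Multi-scale branch: apply the same kernel at dilations 1, 2, and 4,
# then concatenate along the feature axis, as a multi-scale temporal
# module might combine short- and long-range dynamics.
features = [dilated_conv1d(x, kernel, d) for d in (1, 2, 4)]
min_len = min(f.shape[1] for f in features)  # align lengths before stacking
stacked = np.concatenate([f[:, :min_len] for f in features], axis=0)
print(stacked.shape)  # (12, 24): 3 scales x 4 channels, trimmed time axis
```

Each branch sees the same input at a different temporal scale: dilation 1 captures fast local variations, while dilation 4 spans a window three times wider with the same three-tap kernel, which is the motivation for multi-scale dilated convolutions in EEG feature extraction.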