Sun Qikai
Sports Department of Zhejiang A&F University, Hangzhou, Zhejiang, China.
Front Neurorobot. 2024 Dec 20;18:1499734. doi: 10.3389/fnbot.2024.1499734. eCollection 2024.
In recent years, advances in wearable devices and biosignal analysis technologies have made sports performance analysis an increasingly active research field, driven in particular by the growing demand for real-time monitoring of athletes' conditions in training and competition. Traditional methods of sports performance analysis typically rely on video data or sensor data for motion recognition. However, unimodal data often fails to fully capture an athlete's neural state, limiting accuracy and real-time performance on complex movement patterns. Moreover, these methods struggle with multimodal data fusion, making it difficult to fully leverage the deep information carried by electroencephalogram (EEG) signals.
To address these challenges, this paper proposes a "Cerebral Transformer" model based on EEG signals and video data. By employing an adaptive attention mechanism and cross-modal fusion, the model effectively combines EEG signals and video streams to achieve precise recognition and analysis of athletes' movements. The model's effectiveness was validated through experiments on four datasets: SEED, DEAP, eSports Sensors, and MODA. The results show that the proposed model outperforms existing mainstream methods in terms of accuracy, recall, and F1 score, while also demonstrating high computational efficiency.
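As a rough illustration of the cross-modal fusion idea described above, the following Python (PyTorch) sketch shows EEG features attending to video features through a multi-head attention layer. All module names, feature dimensions, and the ten-class output are assumptions made for this example; they are not taken from the paper's actual architecture.

# Illustrative sketch only: a cross-modal attention fusion block in the spirit
# of the model described above. Dimensions and class count are assumed.
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    def __init__(self, eeg_dim=128, video_dim=256, d_model=128, n_heads=4, n_classes=10):
        super().__init__()
        # Project both modalities into a shared embedding space.
        self.eeg_proj = nn.Linear(eeg_dim, d_model)
        self.video_proj = nn.Linear(video_dim, d_model)
        # EEG tokens query video tokens (cross-modal attention).
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)
        self.classifier = nn.Linear(d_model, n_classes)

    def forward(self, eeg, video):
        # eeg:   (batch, eeg_windows, eeg_dim)  -- per-window EEG features
        # video: (batch, frames, video_dim)     -- per-frame video features
        q = self.eeg_proj(eeg)
        kv = self.video_proj(video)
        fused, _ = self.cross_attn(q, kv, kv)      # EEG queries, video keys/values
        fused = self.norm(fused + q)               # residual connection
        return self.classifier(fused.mean(dim=1))  # pool over time, then classify

# Usage with random tensors standing in for real EEG / video features:
model = CrossModalFusion()
eeg = torch.randn(2, 50, 128)
video = torch.randn(2, 30, 256)
logits = model(eeg, video)  # shape: (2, 10)

In this sketch the EEG stream drives the queries so that neural activity selects the relevant video frames; the reverse direction, or a symmetric two-way fusion, would be an equally plausible reading of the abstract.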
The significance of this study lies in providing a more comprehensive and efficient solution for sports performance analysis. Through cross-modal data fusion, it not only improves the accuracy of complex movement recognition but also supports the monitoring of athletes' neural states, with important applications in sports training and medical rehabilitation.