Department of Neurosurgery, Neurological Institute, Taipei Veterans General Hospital, Taipei, Taiwan; School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan; Department of Biomedical Engineering, National Yang Ming Chiao Tung University, Taipei, Taiwan; Department of Neurological Surgery, University of Washington, Seattle, WA, USA; The Ph.D. Program for Neural Regenerative Medicine, College of Medical Science and Technology, Taipei Medical University, Taipei, Taiwan.
Department of Neurosurgery, Neurological Institute, Taipei Veterans General Hospital, Taipei, Taiwan; School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan.
J Neurosci Methods. 2024 Nov;411:110251. doi: 10.1016/j.jneumeth.2024.110251. Epub 2024 Aug 14.
Electroencephalography (EEG) and electrocorticography (ECoG) recordings have been used to decode finger movements from brain activity. Traditional methods have focused on power changes in a single frequency band and relied on machine learning models that require manual feature extraction.
This study introduces a 3D convolutional neural network (3D-CNN) model to decode finger movements from ECoG data. The model employs adaptive, explainable AI (xAI) techniques to interpret the physiological relevance of the brain signals. ECoG signals recorded from epilepsy patients during awake craniotomy were processed to extract power spectral density across multiple frequency bands. These data formed a 3D matrix used to train the 3D-CNN to predict finger trajectories.
The 3D-CNN model predicted finger movements accurately, with root-mean-square error (RMSE) values of 0.26-0.38 for single-finger movements and 0.20-0.24 for combined movements. Two xAI techniques, Grad-CAM and SHAP, identified the high-gamma (HG) band as crucial for movement prediction and localized the cortical regions involved in different finger movements. These findings highlight the physiological significance of the HG band in motor control.
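For reference, the RMSE metric used to score the predicted trajectories is computed as below. The trajectory values are made-up illustrative numbers, not the study's data; only the formula itself follows from the abstract.

```python
import numpy as np

def rmse(pred, target):
    """Root-mean-square error between predicted and actual finger trajectories."""
    pred, target = np.asarray(pred, dtype=float), np.asarray(target, dtype=float)
    return float(np.sqrt(np.mean((pred - target) ** 2)))

# Toy trajectories normalized to [0, 1] (illustrative values only).
actual = np.array([0.0, 0.2, 0.5, 0.9, 1.0])
predicted = np.array([0.1, 0.25, 0.45, 0.8, 0.95])
print(round(rmse(predicted, actual), 3))  # → 0.074
```

On normalized trajectories, an RMSE in the 0.2-0.4 range (as reported) means the average prediction error is a modest fraction of the full movement range.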
The 3D-CNN model outperformed traditional machine learning approaches by effectively capturing spatial and temporal patterns in the ECoG data. The xAI techniques also provided clearer insight into the model's decision-making process than the "black box" behavior of standard deep learning models allows.
The proposed 3D-CNN model, combined with xAI methods, enhances the decoding accuracy of finger movements from ECoG data. This approach offers a more efficient and interpretable solution for brain-computer interface (BCI) applications, emphasizing the HG band's role in motor control.