Zanola Andrea, Tshimanga Louis Fabrice, Del Pup Federico, Baiesi Marco, Atzori Manfredo
Department of Neuroscience, University of Padua, 35128 Padua, Italy.
Padua Neuroscience Center, 35128 Padua, Italy.
J Neural Eng. 2025 Aug 13;22(4). doi: 10.1088/1741-2552/adf6e6.
This work presents xEEGNet, a novel, compact, and explainable neural network for electroencephalography (EEG) data analysis. It is fully interpretable and reduces overfitting through a major parameter reduction. As an applicative use case to develop our model, we focused on the classification of common dementia conditions, Alzheimer's disease and frontotemporal dementia, versus controls; xEEGNet, however, is broadly applicable to other neurological conditions involving spectral alterations. We used ShallowNet, a simple and popular model in the EEGNet family, as a starting point. Its structure was analyzed and gradually modified to move from a 'black box' model to a more transparent one, without compromising performance. The learned kernels and weights were analyzed from a clinical standpoint to assess their medical significance. Model variants, including ShallowNet and the final xEEGNet, were evaluated with a robust nested leave-n-subjects-out cross-validation for unbiased performance estimates. Variability across data splits was explained using the embedded EEG representations, grouped by class and set, with pairwise separability quantifying group distinction. Overfitting was measured through the training-validation loss correlation and the training speed. xEEGNet uses only 168 parameters, 200 times fewer than ShallowNet, yet retains interpretability, resists overfitting, achieves comparable median performance (-1.5%), and reduces performance variability across splits. This variability is explained by the embedded EEG representations: higher accuracy correlates with greater separation between test-set controls and Alzheimer's cases, without significant influence from the training data. The capability of xEEGNet to filter specific EEG bands, learn band-specific topographies, and use the appropriate EEG spectral bands for disease classification demonstrates its interpretability.
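As a toy illustration of the subject-wise evaluation scheme described above (not the authors' code; subject counts and fold sizes are hypothetical), a nested leave-n-subjects-out split can be sketched by holding out n subjects per outer fold for testing and carving validation subjects out of the remaining pool, so no subject's recordings ever leak between sets:

```python
import numpy as np

def leave_n_subjects_out(subjects, n, seed=0):
    """Yield (rest, held_out) subject splits, holding out n subjects per fold."""
    rng = np.random.default_rng(seed)
    subjects = rng.permutation(np.unique(subjects))
    for start in range(0, len(subjects), n):
        held_out = subjects[start:start + n]
        rest = np.setdiff1d(subjects, held_out)
        yield rest, held_out

# Nested scheme: the outer loop holds out test subjects, the inner loop
# carves validation subjects out of the remaining pool.
subject_ids = np.arange(12)  # 12 toy subjects
for train_pool, test_subj in leave_n_subjects_out(subject_ids, n=3):
    for train_subj, val_subj in leave_n_subjects_out(train_pool, n=3, seed=1):
        # No subject appears in more than one of train/val/test.
        assert set(train_subj).isdisjoint(val_subj)
        assert set(train_subj).isdisjoint(test_subj)
        assert set(val_subj).isdisjoint(test_subj)
        break  # one inner fold shown per outer fold
```

Splitting by subject rather than by epoch is what makes the resulting performance estimates unbiased: EEG epochs from one person are highly correlated, so epoch-level splits inflate accuracy.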
While large deep learning models are typically prioritized for performance, this study shows that smaller architectures such as xEEGNet can be equally effective for pathology classification from EEG data.
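The pairwise separability used above to explain split-to-split variability can be illustrated with a simple score: the distance between two groups' embedding centroids divided by their pooled within-group spread (an illustrative Fisher-style metric; the paper's exact quantification may differ, and all data here are synthetic):

```python
import numpy as np

def pairwise_separability(a, b):
    """Toy separability between two embedding groups (rows = samples):
    distance between group means over the pooled within-group spread."""
    mu_a, mu_b = a.mean(axis=0), b.mean(axis=0)
    spread = np.sqrt(a.var(axis=0).mean() + b.var(axis=0).mean()) + 1e-12
    return np.linalg.norm(mu_a - mu_b) / spread

# Hypothetical 8-dimensional embeddings for two diagnostic groups.
rng = np.random.default_rng(0)
controls = rng.normal(0.0, 1.0, size=(50, 8))
patients = rng.normal(2.0, 1.0, size=(50, 8))
print(pairwise_separability(controls, patients))
```

Under this kind of score, test splits whose control and Alzheimer's embeddings land far apart yield higher values, matching the reported correlation between separability and accuracy.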