IEEE Trans Neural Syst Rehabil Eng. 2022;30:1536-1547. doi: 10.1109/TNSRE.2022.3180155. Epub 2022 Jun 10.
Deep learning (DL) methods have been widely used for seizure prediction from electroencephalogram (EEG) recordings in recent years. However, DL methods usually involve numerous multiplication operations, resulting in high computational complexity. In addition, most current approaches in this field focus on designing models with special architectures to learn representations, ignoring the intrinsic patterns in the data. In this study, we propose a simple and effective end-to-end method combining an adder network with supervised contrastive learning (AddNet-SCL). The method replaces the massive multiplications in the convolution process with additions to reduce the computational cost. Besides, contrastive learning is employed to use label information effectively: points of the same class are clustered together in the projection space, while points of different classes are pushed apart. Moreover, the proposed model is trained by combining the supervised contrastive loss from the projection layer with the cross-entropy loss from the classification layer. Since the adder network uses the l1-norm distance as the similarity measure between the input features and the filters, the gradient function of the network changes, so an adaptive learning rate strategy is employed to ensure the convergence of AddNet-SCL. Experimental results show that the proposed method achieves 94.9% sensitivity, an area under the curve (AUC) of 94.2%, and a false positive rate (FPR) of 0.077/h on 19 patients in the CHB-MIT database, and 89.1% sensitivity, an AUC of 83.1%, and an FPR of 0.120/h on the Kaggle database. These competitive results show that the method has broad prospects in clinical practice.
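The core idea of the adder network, measuring similarity with the negative l1-norm distance instead of multiplicative cross-correlation, can be sketched as follows. This is an illustrative NumPy sketch of that operation on a single 1-D signal; the function name, shapes, and loop structure are assumptions for clarity, not the paper's actual implementation.

```python
import numpy as np

def adder_conv1d(x, filters):
    """Multiplication-free 1-D 'convolution' in the adder-network style:
    each output value is the negative l1 distance between an input window
    and a filter, so feature extraction uses only additions/subtractions.

    x:       (length,) input signal (e.g. one EEG channel)
    filters: (n_filters, k) filter bank
    returns: (n_filters, length - k + 1) feature map
    """
    n_filters, k = filters.shape
    out_len = x.shape[0] - k + 1
    out = np.empty((n_filters, out_len))
    for f in range(n_filters):
        for t in range(out_len):
            # Larger value (closer to 0) means the window is more
            # similar to the filter; a perfect match gives exactly 0.
            out[f, t] = -np.abs(x[t:t + k] - filters[f]).sum()
    return out

# Toy usage: the first window [1, 2, 3] matches the filter exactly.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
w = np.array([[1.0, 2.0, 3.0]])
fm = adder_conv1d(x, w)  # fm[0, 0] == 0.0
```

Because the l1 distance has a sign-based (not smooth multiplicative) gradient, standard learning rates behave differently per layer, which is why the abstract's adaptive learning rate strategy is needed during training.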