Parani Paras, Mohammad Umair, Saeed Fahad
Knight Foundation School of Computing and Information Sciences, Florida International University, Miami, FL 33172, USA.
Proc IEEE Int Conf Big Data. 2024 Dec;2024:4941-4945. doi: 10.1109/bigdata62323.2024.10825319.
Predicting seizures ahead of time would have a significant positive clinical impact for people with epilepsy. Advances in machine learning/artificial intelligence (ML/AI) have provided the tools needed for such predictive tasks. To date, advanced deep learning (DL) architectures such as convolutional neural networks (CNNs) and long short-term memory (LSTM) networks have been used with mixed results. However, the highly connected activity exhibited by epileptic seizures necessitates more capable ML techniques that can better capture the complex, interconnected neurological processes involved. Other challenges include variability in EEG sensor data quality, differing epilepsy and seizure profiles, a lack of annotated datasets, and the absence of ML-ready benchmarks. In addition, successful models must perform inference in near real-time on limited hardware compute capacity. To address these challenges, we propose a lightweight architecture, called ESPFormer, whose novelty lies in its simpler design, smaller model size, and lower computational footprint for real-time inference compared to other works in the literature. To quantify the performance of this lightweight model, we compared it against a custom-designed residual neural network (ResNet), a pre-trained vision transformer (ViT), and a pre-trained large language model (LLM). We tested ESPFormer on MLSPred-Bench, the largest patient-independent seizure prediction dataset, comprising 12 benchmarks. Our results demonstrate that ESPFormer achieves the best prediction accuracy on 4 of the 12 benchmarks, with average improvements of 2.65% over the LLM, 3.35% over the ViT, and 17.65% over the ResNet, and comparable results on the remaining benchmarks. These results indicate that a lightweight transformer architecture may outperform resource-intensive LLM-based models for real-time EEG-based seizure prediction.
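For readers unfamiliar with the general approach, the sketch below illustrates what a lightweight transformer classifier for windowed EEG might look like. The abstract does not specify ESPFormer's actual layers or hyperparameters, so the module name `LightweightEEGTransformer`, the patching scheme, and every dimension below are illustrative assumptions rather than the authors' implementation.

```python
# Illustrative sketch only: all hyperparameters and module names are assumptions,
# not the ESPFormer architecture described in the paper.
import torch
import torch.nn as nn

class LightweightEEGTransformer(nn.Module):
    """Small transformer encoder over patch embeddings of a multichannel EEG window.

    Classifies a fixed-length window as preictal (seizure expected soon)
    or interictal (no seizure expected).
    """

    def __init__(self, n_channels=19, window_len=1024, patch_len=64,
                 d_model=64, n_heads=4, n_layers=2, n_classes=2):
        super().__init__()
        self.n_patches = window_len // patch_len
        # Each temporal patch (all channels concatenated) is projected to d_model.
        self.patch_embed = nn.Linear(n_channels * patch_len, d_model)
        self.pos_embed = nn.Parameter(torch.zeros(1, self.n_patches, d_model))
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, dim_feedforward=2 * d_model,
            dropout=0.1, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):
        # x: (batch, n_channels, window_len) raw EEG samples
        b, c, t = x.shape
        patch_len = t // self.n_patches
        patches = x.unfold(-1, patch_len, patch_len)          # (b, c, n_patches, patch_len)
        patches = patches.permute(0, 2, 1, 3).reshape(b, self.n_patches, -1)
        tokens = self.patch_embed(patches) + self.pos_embed   # (b, n_patches, d_model)
        encoded = self.encoder(tokens)
        pooled = encoded.mean(dim=1)                          # average-pool over patches
        return self.head(pooled)                              # (b, n_classes) logits

if __name__ == "__main__":
    model = LightweightEEGTransformer()
    dummy = torch.randn(8, 19, 1024)   # 8 windows, 19 channels, 1024 samples each
    print(model(dummy).shape)          # torch.Size([8, 2])
    print(sum(p.numel() for p in model.parameters()), "parameters")
```

With the default settings above, the model stays in the tens of thousands of parameters, which is the kind of footprint that makes near real-time inference on modest hardware plausible; the actual ESPFormer design and size may differ.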