Parani Paras, Mohammad Umair, Saeed Fahad
Knight Foundation School of Computing and Information Sciences, Florida International University, Miami, FL, USA.
2025 8th Int Conf Data Sci Mach Learn Appl. 2025 Feb:132-137. doi: 10.1109/cdma61895.2025.00028. Epub 2025 Mar 7.
Repeated unprovoked seizures are a major source of concern for people with epilepsy. Predicting seizures before they occur is of interest to both machine-learning scientists and clinicians, and is an active area of research. The variability of EEG sensors, the diversity of seizure types, and the specialized knowledge required to annotate the data complicate the large-scale annotation essential for supervised predictive models. To address these challenges, we propose using Vision Transformers (ViTs) and Large Language Models (LLMs) originally trained on publicly available image or text data. Our work leverages these pre-trained models by refining the input, embedding, and classification layers in a minimalistic fashion to predict seizures. Our results demonstrate that LLMs outperform ViTs in patient-independent seizure prediction, achieving a sensitivity of 79.02%, about 8% higher than ViTs and about 12% higher than a custom-designed ResNet-based model. Our work demonstrates the feasibility of pre-trained models for seizure prediction and their potential for improving the quality of life of people with epilepsy. Our code and related materials are available open-source at: https://github.com/pcdslab/UtilLLM_EPS/.
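To make the "minimalistic" adaptation described in the abstract concrete, the sketch below shows one plausible way to repurpose a frozen pre-trained LLM backbone for binary preictal-vs-interictal EEG classification: a trainable linear layer projects EEG features into the transformer's embedding space in place of token embeddings, and a small classification head replaces the language-model head. The GPT-2 backbone, feature dimensions, and mean pooling are illustrative assumptions, not the authors' exact configuration (see the repository above for the actual implementation).

```python
# Hypothetical sketch: adapting a frozen pre-trained LLM (GPT-2 is an
# assumption; the abstract does not name the backbone) for seizure
# prediction by retraining only the input projection and classifier.
import torch
import torch.nn as nn
from transformers import GPT2Model

class EEGSeizurePredictor(nn.Module):
    def __init__(self, n_channels=22, n_features=128, hidden=768):
        super().__init__()
        self.backbone = GPT2Model.from_pretrained("gpt2")
        for p in self.backbone.parameters():
            p.requires_grad = False  # keep pre-trained weights frozen
        # Trainable input layer: project per-timestep EEG features into
        # the transformer's embedding dimension (768 for GPT-2 base).
        self.input_proj = nn.Linear(n_channels * n_features, hidden)
        # Trainable classification head: preictal vs. interictal.
        self.classifier = nn.Linear(hidden, 2)

    def forward(self, eeg):
        # eeg: (batch, seq_len, n_channels * n_features)
        embeds = self.input_proj(eeg)
        out = self.backbone(inputs_embeds=embeds).last_hidden_state
        return self.classifier(out.mean(dim=1))  # pool over the sequence

# Usage: a batch of 4 EEG windows, 16 timesteps each.
model = EEGSeizurePredictor()
logits = model(torch.randn(4, 16, 22 * 128))
print(logits.shape)  # torch.Size([4, 2])
```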