
Utilizing Pretrained Vision Transformers and Large Language Models for Epileptic Seizure Prediction.

Authors

Parani Paras, Mohammad Umair, Saeed Fahad

Affiliation

Knight Foundation School of Computing and Information Sciences, Florida International University, Miami, FL, USA.

Publication

2025 8th Int Conf Data Sci Mach Learn Appl (2025). 2025 Feb;2025:132-137. doi: 10.1109/cdma61895.2025.00028. Epub 2025 Mar 7.

Abstract

Repeated unprovoked seizures are a major source of concern for people with epilepsy. Predicting seizures before they occur is of interest to both machine-learning scientists and clinicians, and is an active area of research. The variability of EEG sensors, the diversity of seizure types, and the specialized knowledge required to annotate the data complicate the large-scale annotation process essential for supervised predictive models. To address these challenges, we propose the use of Vision Transformers (ViTs) and Large Language Models (LLMs) that were originally trained on publicly available image or text data. Our work leverages these pre-trained models by refining the input, embedding, and classification layers in a minimalistic fashion to predict seizures. Our results demonstrate that LLMs outperform ViTs in patient-independent seizure prediction, achieving a sensitivity of 79.02%, which is 8% higher than the ViTs and about 12% higher than a custom-designed ResNet-based model. Our work demonstrates the feasibility of pre-trained models for seizure prediction and their potential to improve the quality of life of people with epilepsy. Our code and related materials are available open-source at: https://github.com/pcdslab/UtilLLM_EPS/.
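
The abstract describes adapting pretrained models by minimally refining their input, embedding, and classification layers. Purely as an illustration of that idea, and not the authors' implementation (which is available in the linked repository), the sketch below assumes, hypothetically, that EEG windows are rendered as multi-channel spectrogram images and that torchvision's vit_b_16 with ImageNet weights serves as a frozen ViT backbone; only the replaced patch projection and classification head would then be trained.

# A minimal sketch (not the authors' exact implementation) of adapting a
# pretrained Vision Transformer for preictal-vs-interictal classification.
# Hypothetical assumptions: EEG windows are converted to per-channel
# spectrograms of shape (n_eeg_channels, 224, 224), and torchvision's
# vit_b_16 with ImageNet weights is used as a frozen backbone.

import torch
import torch.nn as nn
from torchvision.models import vit_b_16, ViT_B_16_Weights


def build_eeg_vit(n_eeg_channels: int = 22, n_classes: int = 2) -> nn.Module:
    """Swap the input projection and classifier of a pretrained ViT for EEG spectrograms."""
    model = vit_b_16(weights=ViT_B_16_Weights.IMAGENET1K_V1)

    # Freeze the pretrained transformer blocks; only the layers replaced
    # below will have trainable parameters.
    for p in model.parameters():
        p.requires_grad = False

    # Refine the input/embedding layer: replace the original 3-channel patch
    # projection with one that accepts n_eeg_channels spectrogram channels.
    model.conv_proj = nn.Conv2d(
        in_channels=n_eeg_channels,
        out_channels=model.hidden_dim,
        kernel_size=model.patch_size,
        stride=model.patch_size,
    )

    # Refine the classification layer: a new head for preictal vs. interictal.
    model.heads = nn.Sequential(nn.Linear(model.hidden_dim, n_classes))
    return model


if __name__ == "__main__":
    model = build_eeg_vit()
    # Dummy batch: 4 EEG windows, 22 channels, each rendered as a 224x224 spectrogram.
    x = torch.randn(4, 22, 224, 224)
    logits = model(x)  # shape: (4, 2)
    print(logits.shape)

Freezing the backbone keeps the number of trainable parameters small, which is in the spirit of the minimalistic refinement the abstract describes; the exact layers the authors modify are documented in their repository.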


Similar Articles

PSAQ-ViT V2: Toward Accurate and General Data-Free Quantization for Vision Transformers.
IEEE Trans Neural Netw Learn Syst. 2024 Dec;35(12):17227-17238. doi: 10.1109/TNNLS.2023.3301007. Epub 2024 Dec 2.

