Department of Informatics Engineering, CISUC, Univ Coimbra, Coimbra, Portugal.
Epilepsy Center, Department Neurosurgery, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany.
Epilepsia Open. 2023 Jun;8(2):285-297. doi: 10.1002/epi4.12748. Epub 2023 Apr 27.
Many state-of-the-art methods for seizure prediction using the electroencephalogram are based on machine learning models that are black boxes, which weakens clinicians' trust in them for high-risk decisions. Seizure prediction is a multidimensional time-series problem typically addressed by continuous sliding-window analysis and classification. In this work, we critically review which explanations increase trust in models' decisions for seizure prediction. We developed three machine learning methodologies with different levels of model transparency to explore their explainability potential: a logistic regression, an ensemble of 15 support vector machines, and an ensemble of three convolutional neural networks. For each methodology, we quasi-prospectively evaluated performance in 40 patients (the testing data comprised 2055 hours and 104 seizures). We selected patients with good and poor performance to explain the models' decisions. Then, using grounded theory, we evaluated how these explanations helped specialists (data scientists and clinicians working in epilepsy) understand the obtained model dynamics. We derived four lessons for better communication between data scientists and clinicians. We found that the goal of explainability is not to explain the system's decisions but to improve the system itself. Model transparency is not the most significant factor in explaining a model decision for seizure prediction. Even when using intuitive and state-of-the-art features, it is hard to understand brain dynamics and their relationship with the developed models. We increased understanding by developing, in parallel, several systems that explicitly deal with changes in signal dynamics, which helped build a more complete problem formulation.
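To make the described pipeline concrete, the following is a minimal sketch (not the authors' implementation) of sliding-window EEG classification with the most transparent of the three models, a logistic regression. The sampling rate, window length, and relative band-power features are illustrative assumptions; the commented quasi-prospective split (train on early data, test on later data) mirrors the evaluation protocol only in spirit.

```python
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression

FS = 256        # assumed sampling rate (Hz)
WINDOW_S = 5    # assumed window length (s)
STEP_S = 5      # assumed step between windows (s)

def sliding_windows(eeg, fs=FS, window_s=WINDOW_S, step_s=STEP_S):
    """Yield consecutive windows of shape (channels, window_samples)."""
    win, step = window_s * fs, step_s * fs
    for start in range(0, eeg.shape[1] - win + 1, step):
        yield eeg[:, start:start + win]

def band_power_features(window, fs=FS):
    """Relative power in classic EEG bands, averaged over channels."""
    freqs, psd = welch(window, fs=fs, axis=-1)
    bands = [(0.5, 4), (4, 8), (8, 13), (13, 30), (30, 45)]
    total = np.trapz(psd, freqs, axis=-1).mean()
    feats = []
    for lo, hi in bands:
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(np.trapz(psd[:, mask], freqs[mask], axis=-1).mean() / total)
    return np.array(feats)

def features_and_labels(eeg, window_labels):
    """Build the feature matrix; labels are 1 = preictal, 0 = interictal."""
    X = np.vstack([band_power_features(w) for w in sliding_windows(eeg)])
    y = np.asarray(window_labels[:len(X)])
    return X, y

# Quasi-prospective use (sketch): fit on a patient's earlier seizures,
# then score later, unseen windows chronologically.
# model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
# preictal_probability = model.predict_proba(X_test)[:, 1]
```

The logistic regression's coefficients over such interpretable features are one example of the kind of explanation compared in the paper against less transparent models (the SVM and CNN ensembles).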