Upadhyaya Dipak P, Prantzalos Katrina, Thyagaraj Suraj, Shafiabadi Nassim, Fernandez-Baca Vaca Guadalupe, Sivagnanam Subhashini, Majumdar Amitava, Sahoo Satya S
Department of Population and Quantitative Health Sciences, Case Western Reserve University School of Medicine, Cleveland, OH, USA.
Department of Neurology, University Hospitals Cleveland Medical Center, Cleveland, OH, USA.
medRxiv. 2023 Oct 19:2023.06.25.23291874. doi: 10.1101/2023.06.25.23291874.
The rapid adoption of machine learning (ML) algorithms in a wide range of biomedical applications has highlighted issues of trust and a lack of understanding of the results generated by ML algorithms. Recent studies have focused on developing interpretable ML models and establishing guidelines for transparency and ethical use, ensuring the responsible integration of machine learning in healthcare. In this study, we demonstrate the effectiveness of ML interpretability methods in providing important insights into the dynamics of brain network interactions in epilepsy, a serious neurological disorder affecting more than 60 million persons worldwide. Using high-resolution intracranial electroencephalogram (EEG) recordings from a cohort of 16 patients, we developed high-accuracy ML models to categorize these brain activity recordings into seizure or non-seizure classes, followed by the more complex multi-class task of delineating the stages of seizure progression across different parts of the brain. We applied three distinct types of interpretability methods to the high-accuracy ML models to understand the relative contributions of different categories of brain interaction patterns, including multi-foci interactions, which play an important role in distinguishing between different brain states. The results of this study demonstrate for the first time that post-hoc interpretability methods enable us to understand why ML algorithms generate a given set of results and how variations in input values affect the accuracy of the ML algorithms. In particular, we show that interpretability methods can be used to identify brain regions and interaction patterns that have a significant impact on seizure events. These findings highlight the importance of implementing ML algorithms together with interpretability methods in studies of aberrant brain networks and in the wider domain of biomedical research.
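As an illustration of the workflow summarized above, the following is a minimal sketch, assuming scikit-learn, a random forest classifier, synthetic stand-in data, and permutation importance as one representative post-hoc interpretability method applied to EEG-derived brain-interaction features. The abstract does not name the specific models, features, or interpretability methods used in the study, so every concrete choice below is an assumption for illustration only.

# Minimal sketch (assumptions only): train a seizure vs. non-seizure classifier
# on synthetic stand-in features and apply one post-hoc interpretability method.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for EEG-derived brain-interaction features:
# each row is one recording segment, each column one interaction pattern.
n_segments, n_features = 500, 20
X = rng.normal(size=(n_segments, n_features))
y = rng.integers(0, 2, size=n_segments)  # 0 = non-seizure, 1 = seizure

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

# Train a classifier for the binary seizure vs. non-seizure task.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# Post-hoc interpretability: permutation importance estimates how much the
# held-out accuracy drops when each input feature is shuffled, i.e., how
# variations in input values affect the accuracy of the trained model.
result = permutation_importance(
    clf, X_test, y_test, n_repeats=20, random_state=0, scoring="accuracy"
)

# Rank interaction-pattern features by their impact on the predictions.
ranking = np.argsort(result.importances_mean)[::-1]
for idx in ranking[:5]:
    print(f"feature {idx}: importance = {result.importances_mean[idx]:.3f}")

The same pattern extends to the multi-class setting described in the abstract by replacing the binary labels with labels for the stages of seizure progression; the interpretability step is unchanged because permutation importance operates on any fitted classifier.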