
On the clinical acceptance of black-box systems for EEG seizure prediction.

Author information

Pinto Mauro F, Leal Adriana, Lopes Fábio, Pais José, Dourado António, Sales Francisco, Martins Pedro, Teixeira César A

Affiliations

Department of Informatics Engineering, CISUC, University of Coimbra, Coimbra, Portugal.

Department of Neurosurgery, Epilepsy Center, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany.

Publication information

Epilepsia Open. 2022 Jun;7(2):247-259. doi: 10.1002/epi4.12597. Epub 2022 Apr 11.

Abstract

Seizure prediction may be the solution for epileptic patients whose seizures are not controlled by drugs or surgery. Despite 46 years of research, few devices/systems have undergone clinical trials and/or been commercialized, and the most recent state-of-the-art approaches, such as neural network models, are not used to their full potential. This points to social barriers against new methodologies, rooted in data bias, patient safety, and legislation compliance. In the form of a literature review, we performed a qualitative study to analyze the seizure prediction ecosystem and identify these social barriers. With Grounded Theory, we drew hypotheses from the data, while with Actor-Network Theory we considered that technology shapes social configurations and interests, which is fundamental in healthcare. We obtained a social network that describes the ecosystem, and we propose research guidelines aimed at clinical acceptance. Our most relevant conclusion is the need for model explainability, but not necessarily for intrinsically interpretable models, in the case of seizure prediction. Accordingly, we argue that it is possible to develop robust prediction models, including black-box systems to some extent, while avoiding data bias, ensuring patient safety, and still complying with legislation, provided they can deliver human-comprehensible explanations. Due to skepticism and patient-safety concerns, many authors advocate the use of transparent models, which may limit their performance and potential. Our study highlights a possible path, through model explainability, to overcoming these barriers while allowing the use of more computationally robust models.
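To make the abstract's distinction concrete, the sketch below (not from the paper) shows one way a black-box seizure-prediction classifier can be paired with a post-hoc explanation method rather than being replaced by an intrinsically interpretable model. The data, feature names, and model choice are illustrative assumptions; the point is only that a human-comprehensible explanation (here, permutation feature importance) can be layered on top of an opaque model.

```python
# Minimal sketch (illustrative assumptions, not the authors' method):
# a black-box classifier over hypothetical per-window EEG features,
# explained post hoc with permutation importance.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical features extracted from EEG windows (e.g., band powers).
feature_names = ["delta_power", "theta_power", "alpha_power",
                 "beta_power", "gamma_power", "line_length"]
X = rng.normal(size=(500, len(feature_names)))
# Synthetic labels: 1 = preictal window, 0 = interictal window,
# weakly driven by two features so the explanation has signal to find.
y = ((0.8 * X[:, 0] - 0.6 * X[:, 3]
      + rng.normal(scale=0.5, size=500)) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# A non-interpretable ("black-box") model...
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_tr, y_tr)

# ...made explainable post hoc: how much does shuffling each feature
# degrade held-out accuracy? Larger drops = more influential features.
result = permutation_importance(model, X_te, y_te,
                                n_repeats=20, random_state=0)
for name, mean, std in zip(feature_names,
                           result.importances_mean,
                           result.importances_std):
    print(f"{name:>12}: {mean:+.3f} +/- {std:.3f}")
```

The printed ranking is the kind of human-comprehensible explanation the abstract argues for: a clinician can check whether the features the model relies on are physiologically plausible, without requiring the model itself to be transparent.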


Figure (EPI4-7-247-g002): https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d1e8/9159247/8df3c661bfbf/EPI4-7-247-g002.jpg
