Towards Interpretable Machine Learning in EEG Analysis.

Affiliations

Institute of Medical Informatics, Medical Faculty, RWTH Aachen University, Aachen, Germany.

Institute of Medical Informatics, University of Münster, Münster, Germany.

Publication Information

Stud Health Technol Inform. 2021 Sep 21;283:32-38. doi: 10.3233/SHTI210538.

Abstract

In this paper, a machine learning model for automatic detection of abnormalities in electroencephalography (EEG) is dissected into parts, so that the influence of each part on the classification accuracy score can be examined. The most successful setup, several shallow artificial neural networks aggregated via voting, achieves an accuracy of 81%. Stepwise simplification of the model shows the expected decrease in accuracy, but a naive model that thresholds a single extracted feature (relative wavelet energy) still achieves 75%, which remains well above the random-guess baseline of 54%. These results suggest the feasibility of building a simple classification model that reaches accuracy scores close to state-of-the-art research while remaining fully interpretable.
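
The naive single-feature baseline mentioned above can be illustrated with a short sketch. The Python snippet below is a minimal illustration, not the authors' implementation: it computes relative wavelet energy with PyWavelets and applies a fixed threshold to one sub-band. The wavelet family ("db4"), decomposition level, chosen sub-band index, threshold value, and sampling rate are all assumptions made for this example; the abstract does not specify them.

import numpy as np
import pywt  # PyWavelets

def relative_wavelet_energy(signal, wavelet="db4", level=5):
    # Discrete wavelet decomposition of one EEG channel; the energy of each
    # sub-band is normalised by the total energy across all sub-bands.
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    energies = np.array([float(np.sum(c ** 2)) for c in coeffs])
    return energies / energies.sum()

def classify_recording(signal, band_index=1, threshold=0.20):
    # Naive interpretable classifier: label the recording "abnormal" when the
    # relative energy of one chosen sub-band exceeds a fixed cut-off.
    # band_index and threshold are placeholder values for illustration only.
    rwe = relative_wavelet_energy(signal)
    return "abnormal" if rwe[band_index] > threshold else "normal"

if __name__ == "__main__":
    # Synthetic 10-second, 250 Hz signal standing in for a real EEG channel.
    fs = 250
    t = np.arange(0, 10, 1 / fs)
    eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
    print(classify_recording(eeg))

Because the decision reduces to comparing one normalised energy value against a cut-off, the resulting rule is fully interpretable, which is the trade-off against ensemble accuracy that the paper examines.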

