
Understanding risk factors for postoperative mortality in neonates based on explainable machine learning technology.

Affiliations

The Children's Hospital, Zhejiang University School of Medicine, National Clinical Research Center for Child Health, Hangzhou, China.

Rhode Island Hospital, Brown University, United States.

Publication information

J Pediatr Surg. 2021 Dec;56(12):2165-2171. doi: 10.1016/j.jpedsurg.2021.03.057. Epub 2021 Apr 5.

Abstract

PURPOSE

We aimed to introduce an explainable machine learning technology to help clinicians understand the risk factors for neonatal postoperative mortality at different levels.

METHODS

A total of 1481 neonatal surgeries performed between May 2016 and December 2019 at a children's hospital were included in this study. Perioperative variables, including vital signs during surgery, were collected and used to predict postoperative mortality. Several widely used machine learning methods were trained and evaluated on split datasets. The model with the best performance was explained by SHAP (SHapley Additive exPlanations) at different levels.
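The workflow above (train several models on a split dataset, pick the best by validation AUROC, then explain it with SHAP's TreeExplainer) can be sketched as below. This is a hypothetical illustration on synthetic data, not the authors' code or dataset; the feature counts, class balance, and hyperparameters are assumptions made for the example.

```python
# Hypothetical sketch of the pipeline described in METHODS
# (synthetic stand-in data; not the authors' actual code or dataset).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the perioperative dataset:
# 1481 cases with a rare positive (mortality) class.
X, y = make_classification(n_samples=1481, n_features=10,
                           weights=[0.95], random_state=0)
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# Train a candidate model and evaluate it on the held-out split.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
print(f"validation AUROC: {auc:.2f}")

# Explain the fitted forest with SHAP's TreeExplainer, if installed:
# shap_values gives per-case, per-feature contributions to the prediction.
try:
    import shap
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X_val)
except ImportError:
    shap_values = None  # shap not available; explanation step skipped
```

In practice one would repeat the fit/evaluate step for each candidate method and keep the model with the best validation AUROC before running the SHAP explanation.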

RESULTS

The random forest model achieved the best performance with an area under the receiver operating characteristic curve of 0.72 in the validation set. TreeExplainer of SHAP was used to identify the risk factors for neonatal postoperative mortality. The explainable machine learning model not only explains the risk factors identified by traditional statistical analysis but also identifies additional risk factors. The visualization of feature contributions at different levels by SHAP makes the "black-box" machine learning model easily understood by clinicians and families. Based on this explanation, vital signs during surgery play an important role in eventual survival.

CONCLUSIONS

The explainable machine learning model not only exhibited good performance in predicting neonatal surgical mortality but also helped clinicians understand each risk factor and each individual case.

