Department of Epidemiology and Health Statistics, College of Public Health, Xinjiang Medical University, Urumqi, Xinjiang, China.
BMC Endocr Disord. 2022 Aug 26;22(1):214. doi: 10.1186/s12902-022-01121-4.
The internal workings of machine learning algorithms are complex, and these algorithms are often treated as low-interpretability "black box" models, making it difficult for domain experts to understand and trust them. This study uses metabolic syndrome (MetS) as an entry point to analyze and evaluate the value of model interpretability methods in addressing the difficulty of interpreting predictive models.
The study collects data from a chain of health examination institutions in Urumqi from 2017 to 2019; 39,134 records remain after preprocessing steps such as deletion and imputation. Recursive feature elimination (RFE) is used for feature selection to reduce redundancy; MetS risk prediction models (logistic regression, random forest, XGBoost) are built on the selected feature subset, and accuracy, sensitivity, specificity, the Youden index, and AUROC are used to evaluate classification performance; post-hoc, model-agnostic interpretation methods (variable importance, LIME) are used to interpret the predictive models' results.
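The pipeline described above (RFE feature selection, multiple classifiers, evaluation by sensitivity/specificity/Youden index/AUROC) can be sketched with scikit-learn. This is an illustrative sketch, not the authors' code: the synthetic dataset, feature counts, and model settings are placeholders, and a gradient-boosting stand-in could replace XGBoost if that library is unavailable.

```python
# Hedged sketch of the paper's workflow: RFE feature selection, then model
# training and evaluation. Data are synthetic placeholders, not the real
# health-examination records.
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, confusion_matrix

# Synthetic stand-in for the preprocessed examination data.
X, y = make_classification(n_samples=2000, n_features=30, n_informative=10,
                           random_state=0)

# Recursive feature elimination: keep a reduced subset (18 in the paper).
selector = RFE(LogisticRegression(max_iter=1000), n_features_to_select=18)
X_sel = selector.fit_transform(X, y)

X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, test_size=0.3,
                                          random_state=0, stratify=y)

def evaluate(model):
    """Fit a classifier and report the abstract's evaluation metrics."""
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
    sens = tp / (tp + fn)          # sensitivity (recall)
    spec = tn / (tn + fp)          # specificity
    youden = sens + spec - 1       # Youden index
    auroc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    return {"sensitivity": sens, "specificity": spec,
            "youden": youden, "auroc": auroc}

metrics_lr = evaluate(LogisticRegression(max_iter=1000))
metrics_rf = evaluate(RandomForestClassifier(random_state=0))
```

Note the Youden index is computed directly from sensitivity and specificity (J = sensitivity + specificity − 1), which is how it is normally derived from a confusion matrix.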
RFE screens out eighteen physical examination indicators, effectively resolving the redundancy in the examination data. The random forest and XGBoost models achieve higher accuracy, sensitivity, specificity, Youden index, and AUROC values than logistic regression, and the XGBoost model achieves higher sensitivity, Youden index, and AUROC values than random forest. The study applies variable importance, LIME, and partial dependence plots (PDP) for global and local interpretation of the best-performing MetS risk prediction model (XGBoost); the different interpretation methods offer distinct insights into the model's results, allow greater flexibility in model selection, and can visualize the process and reasons behind the model's decisions. The interpretable risk prediction model helps identify risk factors associated with MetS: in addition to traditional risk factors such as overweight and obesity, hyperglycemia, hypertension, and dyslipidemia, MetS was also associated with other factors, including age, creatinine, uric acid, and alkaline phosphatase.
Applying model interpretability methods to black box models not only preserves flexibility in model choice but also compensates for the models' lack of interpretability. Model interpretability methods can serve as a novel means of identifying variables that are more likely to be good predictors.