IBM Slovenija d.o.o., Ameriška ulica 8, 1000, Ljubljana, Slovenia.
Anton Trstenjak Institute of Gerontology and Intergenerational Relations, Resljeva cesta 7, 1000, Ljubljana, Slovenia.
BMC Med Inform Decis Mak. 2024 Oct 29;24(1):317. doi: 10.1186/s12911-024-02714-w.
Ageing is one of the most important challenges facing our society. Evaluating how a person is ageing matters in many contexts, from giving personalized recommendations to informing long-term care eligibility. Machine learning can be utilized for this purpose; however, user reservations towards "black-box" predictions call for increased transparency and explainability of results. This study aimed to explore the potential of developing a machine learning-based healthy ageing scale that provides explainable results that informal carers can trust and understand.
In this study, we used data from 696 older adults collected via personal field interviews as part of independent research. Exploratory factor analysis was used to identify candidate healthy-ageing aspects, and a web annotation application was developed to visualize them. Key aspects were selected by gerontologists, who then used the web annotation application to rate healthy ageing for each older adult on a Likert scale. Logistic Regression, Decision Tree, Random Forest, KNN, SVM and XGBoost were used for multi-class machine learning, with AUC OvO, AUC OvR, F1, precision and recall used for evaluation. Finally, SHAP was applied to the best model's predictions to make them explainable.
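The evaluation setup described above (several classifiers compared on macro-averaged OvO/OvR AUC and F1) can be sketched as follows. This is an illustrative sketch, not the authors' code: the data here are synthetic stand-ins for the interview-derived features, and only two of the listed classifiers are shown for brevity.

```python
# Sketch of multi-class training and evaluation with macro-averaged metrics,
# assuming scikit-learn; synthetic data replaces the real interview features.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in: 696 respondents, 5 Likert-style healthy-ageing classes.
X, y = make_classification(n_samples=696, n_features=20, n_informative=10,
                           n_classes=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)

results = {}
for name, model in [("LogReg", LogisticRegression(max_iter=1000)),
                    ("RandomForest", RandomForestClassifier(random_state=0))]:
    model.fit(X_tr, y_tr)
    proba = model.predict_proba(X_te)
    results[name] = {
        # One-vs-one and one-vs-rest AUC, macro-averaged across classes.
        "auc_ovo": roc_auc_score(y_te, proba, multi_class="ovo", average="macro"),
        "auc_ovr": roc_auc_score(y_te, proba, multi_class="ovr", average="macro"),
        "f1_macro": f1_score(y_te, model.predict(X_te), average="macro"),
    }
```

The same loop extends naturally to the other classifiers named in the abstract (Decision Tree, KNN, SVM, XGBoost).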
The experimental results show that human annotations of healthy ageing could be modelled using machine learning, with XGBoost showing superior performance among the algorithms tested: a macro-averaged AUC OvO of 0.92 and a macro-averaged F1 of 0.76. SHAP was applied to generate local explanations for predictions, showing how each feature influences a given prediction.
The resulting explainable predictions represent a step toward practical implementation of the scale in decision support systems. A decision support system incorporating an explainable model could reduce user reluctance towards the utilization of AI in healthcare and provide explainable, trusted insights to informal carers or healthcare providers as a basis for shaping tangible actions to improve ageing. Furthermore, the cooperation with gerontology specialists throughout the process helped integrate expert knowledge into the model.