Predicting postoperative chronic opioid use with fair machine learning models integrating multi-modal data sources: a demonstration of ethical machine learning in healthcare.

Author Information

Soley Nidhi, Rattsev Ilia, Speed Traci J, Xie Anping, Ferryman Kadija S, Taylor Casey Overby

Affiliations

Institute for Computational Medicine, Whiting School of Engineering, Johns Hopkins University, Baltimore, MD 21218, United States.

Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD 21218, United States.

Publication Information

J Am Med Inform Assoc. 2025 Jun 1;32(6):985-997. doi: 10.1093/jamia/ocaf053.

Abstract

OBJECTIVE

Building upon our previous work on predicting chronic opioid use from electronic health record (EHR) and wearable data, this study leveraged the Health Equity Across the AI Lifecycle (HEAAL) framework to (a) fine-tune the previously built model with genomic data and evaluate model performance in predicting chronic opioid use and (b) apply IBM's AIF360 pre-processing toolkit to mitigate bias related to gender and race and evaluate the model performance using various fairness metrics.
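For context, the group fairness metrics referred to here (such as demographic parity and equal opportunity gaps) can be computed in a few lines. The sketch below is an illustrative stand-alone implementation, not the study's actual AIF360-based evaluation, and the helper names are assumptions, not from the paper:

```python
def group_rates(values, groups):
    """Mean of `values` within each protected-attribute group."""
    rates = {}
    for g in set(groups):
        members = [v for v, gg in zip(values, groups) if gg == g]
        rates[g] = sum(members) / len(members)
    return rates

def demographic_parity_gap(y_pred, groups):
    """Largest gap in positive-prediction rate across groups."""
    r = group_rates(y_pred, groups)
    return max(r.values()) - min(r.values())

def equal_opportunity_gap(y_true, y_pred, groups):
    """Largest gap in true-positive rate across groups,
    computed only over examples whose true label is positive."""
    positives = [(p, g) for t, p, g in zip(y_true, y_pred, groups) if t == 1]
    r = group_rates([p for p, _ in positives], [g for _, g in positives])
    return max(r.values()) - min(r.values())

# Toy example: two groups with different prediction patterns.
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [1, 1, 0, 0, 1, 0, 0, 0]
groups = ["F", "F", "F", "F", "M", "M", "M", "M"]
dp_gap = demographic_parity_gap(y_pred, groups)         # 0.25
eo_gap = equal_opportunity_gap(y_true, y_pred, groups)  # 0.5
```

A model that is perfectly fair under either criterion would score a gap of 0; AIF360 exposes these and related metrics through its `ClassificationMetric` class.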

MATERIALS AND METHODS

Participants included approximately 271 All of Us Research Program subjects with EHR, wearable, and genomic data. We fine-tuned 4 machine learning models on the new dataset. The SHapley Additive exPlanations (SHAP) technique identified the best-performing predictors. The AIF360 preprocessing toolkit was applied to improve fairness with respect to gender and race.

RESULTS

Adding genomic data improved performance over the prior model, with the area under the curve increasing from 0.90 (95% CI, 0.88-0.92) to 0.95 (95% CI, 0.89-0.95). Key predictors included the Dopamine D1 Receptor (DRD1) variant rs4532, general type of surgery, and time spent in physical activity. The reweighing preprocessing technique applied to the stacking algorithm effectively improved the model's fairness across racial and gender groups without compromising performance.
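The reweighing technique mentioned above assigns each training example a weight so that the protected attribute and the outcome become statistically independent in the weighted data. The study used IBM's AIF360 implementation; the function below is an illustrative from-scratch sketch of the same Kamiran-Calders reweighing idea, not the study's code:

```python
from collections import Counter

def reweighing_weights(groups, labels):
    """Kamiran-Calders reweighing: each (group, label) cell gets weight
    P(group) * P(label) / P(group, label), so that after weighting the
    protected attribute and the outcome are statistically independent."""
    n = len(labels)
    n_group = Counter(groups)
    n_label = Counter(labels)
    n_cell = Counter(zip(groups, labels))
    return [
        (n_group[g] * n_label[y]) / (n * n_cell[(g, y)])
        for g, y in zip(groups, labels)
    ]

# Toy data: group A is mostly labeled positive, group B mostly negative.
groups = ["A", "A", "A", "B", "B", "B"]
labels = [1, 1, 0, 0, 0, 1]
weights = reweighing_weights(groups, labels)
# [0.75, 0.75, 1.5, 0.75, 0.75, 1.5]
```

Under-represented (group, label) combinations receive weights above 1 and over-represented ones below 1; the weights are then passed to the learner (e.g., via `sample_weight` in scikit-learn estimators) so the model trains on a balanced view of the data without altering any labels or features.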

CONCLUSION

We leveraged 2 dimensions of the HEAAL framework to build a fair artificial intelligence (AI) solution. Integrating multi-modal datasets (including wearable and genomic data) and applying bias-mitigation strategies can help models assess risk more fairly and accurately across diverse populations, promoting fairness in AI in healthcare.

