


What makes clinical machine learning fair? A practical ethics framework.

Author Information

Hoche Marine, Mineeva Olga, Rätsch Gunnar, Vayena Effy, Blasimme Alessandro

Affiliations

Department of Computer Science, Biomedical Informatics Group, ETH Zurich, Zurich, Switzerland.

AI Center, ETH Zurich, Zurich, Switzerland.

Publication Information

PLOS Digit Health. 2025 Mar 18;4(3):e0000728. doi: 10.1371/journal.pdig.0000728. eCollection 2025 Mar.

DOI: 10.1371/journal.pdig.0000728
PMID: 40100898
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11918422/
Abstract

Machine learning (ML) can offer a tremendous contribution to medicine by streamlining decision-making, reducing mistakes, improving clinical accuracy and ensuring better patient outcomes. The prospects of a widespread and rapid integration of machine learning into clinical workflows have attracted considerable attention, not least because of their complex ethical implications, with algorithmic bias among the most frequently discussed. Here we introduce and discuss a practical ethics framework inductively generated via normative analysis of the practical challenges in developing an actual clinical ML model (see case study). The framework can be used to identify, measure and address bias in clinical machine learning models, thus improving fairness with respect to both model performance and health outcomes. We detail a proportionate approach to ML bias by defining the demands of fair ML in light of what is ethically justifiable and, at the same time, technically feasible given inevitable trade-offs. Our framework enables ethically robust and transparent decision-making in both the design and the context-dependent aspects of ML bias mitigation, thus improving accountability for both developers and clinical users.


Figures (from PMC11918422):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5ee6/11918422/bd9df3886c1f/pdig.0000728.g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5ee6/11918422/19d7be9b60a7/pdig.0000728.g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5ee6/11918422/fc0cee3faf6b/pdig.0000728.g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5ee6/11918422/2aea04cfc06e/pdig.0000728.g004.jpg

Similar Articles

1. What makes clinical machine learning fair? A practical ethics framework.
PLOS Digit Health. 2025 Mar 18;4(3):e0000728. doi: 10.1371/journal.pdig.0000728. eCollection 2025 Mar.
2. An empirical characterization of fair machine learning for clinical risk prediction.
J Biomed Inform. 2021 Jan;113:103621. doi: 10.1016/j.jbi.2020.103621. Epub 2020 Nov 18.
3. Utilizing large language models for gastroenterology research: a conceptual framework.
Therap Adv Gastroenterol. 2025 Apr 1;18:17562848251328577. doi: 10.1177/17562848251328577. eCollection 2025.
4. The Medicine Revolution Through Artificial Intelligence: Ethical Challenges of Machine Learning Algorithms in Decision-Making.
Cureus. 2024 Sep 14;16(9):e69405. doi: 10.7759/cureus.69405. eCollection 2024 Sep.
5. The future of Cochrane Neonatal.
Early Hum Dev. 2020 Nov;150:105191. doi: 10.1016/j.earlhumdev.2020.105191. Epub 2020 Sep 12.
6. Architectural Design of a Blockchain-Enabled, Federated Learning Platform for Algorithmic Fairness in Predictive Health Care: Design Science Study.
J Med Internet Res. 2023 Oct 30;25:e46547. doi: 10.2196/46547.
7. Ethical and regulatory considerations in the use of AI and machine learning in nursing: A systematic review.
Int Nurs Rev. 2025 Mar;72(1):e70010. doi: 10.1111/inr.70010.
8. Algorithmic fairness in computational medicine.
EBioMedicine. 2022 Oct;84:104250. doi: 10.1016/j.ebiom.2022.104250. Epub 2022 Sep 6.
9. A scoping review of fair machine learning techniques when using real-world data.
J Biomed Inform. 2024 Mar;151:104622. doi: 10.1016/j.jbi.2024.104622. Epub 2024 Mar 6.
10. Learning Fair Representations via Distance Correlation Minimization.
IEEE Trans Neural Netw Learn Syst. 2024 Feb;35(2):2139-2152. doi: 10.1109/TNNLS.2022.3187165. Epub 2024 Feb 5.

Cited By

1. Biases in AI: acknowledging and addressing the inevitable ethical issues.
Front Digit Health. 2025 Aug 20;7:1614105. doi: 10.3389/fdgth.2025.1614105. eCollection 2025.
2. Population health management through human phenotype ontology with policy for ecosystem improvement.
Front Artif Intell. 2025 Aug 1;8:1496937. doi: 10.3389/frai.2025.1496937. eCollection 2025.

References

1. Sync fast and solve things-best practices for responsible digital health.
NPJ Digit Med. 2024 May 4;7(1):113. doi: 10.1038/s41746-024-01105-9.
2. New AI regulation in the EU seeks to reduce risk without assessing public benefit.
Nat Med. 2024 May;30(5):1235-1237. doi: 10.1038/s41591-024-02874-2.
3. To do no harm - and the most good - with AI in health care.
Nat Med. 2024 Mar;30(3):623-627. doi: 10.1038/s41591-024-02853-7.
4. Embed equity throughout innovation.
Science. 2023 Sep 8;381(6662):1029. doi: 10.1126/science.adk6365. Epub 2023 Sep 7.
5. Defining representativeness of study samples in medical and population health research.
BMJ Med. 2023 May 16;2(1):e000399. doi: 10.1136/bmjmed-2022-000399. eCollection 2023.
6. Stuck in translation: Stakeholder perspectives on impediments to responsible digital health.
Front Digit Health. 2023 Feb 6;5:1069410. doi: 10.3389/fdgth.2023.1069410. eCollection 2023.
7. Sources of bias in artificial intelligence that perpetuate healthcare disparities-A global review.
PLOS Digit Health. 2022 Mar 31;1(3):e0000022. doi: 10.1371/journal.pdig.0000022. eCollection 2022 Mar.
8. Garbage in, Garbage out-Words of Caution on Big Data and Machine Learning in Medical Practice.
JAMA Health Forum. 2023 Feb 3;4(2):e230397. doi: 10.1001/jamahealthforum.2023.0397.
9. Imputation Strategies Under Clinical Presence: Impact on Algorithmic Fairness.
Proc Mach Learn Res. 2022;193:12-34.
10. Enabling Fairness in Healthcare Through Machine Learning.
Ethics Inf Technol. 2022;24(3):39. doi: 10.1007/s10676-022-09658-7. Epub 2022 Aug 31.