

Not all biases are bad: equitable and inequitable biases in machine learning and radiology.

Authors

Pot Mirjam, Kieusseyan Nathalie, Prainsack Barbara

Affiliations

Department of Political Science, University of Vienna, Universitätsstraße 7, 1010 Wien, Austria.

OLEA MEDICAL, 93 Ave. des Sorbiers, 13600 La Ciotat, France.

Publication

Insights Imaging. 2021 Feb 10;12(1):13. doi: 10.1186/s13244-020-00955-7.

Abstract

The application of machine learning (ML) technologies in medicine generally but also in radiology more specifically is hoped to improve clinical processes and the provision of healthcare. A central motivation in this regard is to advance patient treatment by reducing human error and increasing the accuracy of prognosis, diagnosis and therapy decisions. There is, however, also increasing awareness about bias in ML technologies and its potentially harmful consequences. Biases refer to systematic distortions of datasets, algorithms, or human decision making. These systematic distortions are understood to have negative effects on the quality of an outcome in terms of accuracy, fairness, or transparency. But biases are not only a technical problem that requires a technical solution. Because they often also have a social dimension, the 'distorted' outcomes they yield often have implications for equity. This paper assesses different types of biases that can emerge within applications of ML in radiology, and discusses in what cases such biases are problematic. Drawing upon theories of equity in healthcare, we argue that while some biases are harmful and should be acted upon, others might be unproblematic and even desirable-exactly because they can contribute to overcome inequities.


Similar articles

Ensuring Fairness in Machine Learning to Advance Health Equity.
Ann Intern Med. 2018 Dec 18;169(12):866-872. doi: 10.7326/M18-1990. Epub 2018 Dec 4.

Diagnostic biases in translational bioinformatics.
BMC Med Genomics. 2015 Aug 1;8:46. doi: 10.1186/s12920-015-0116-y.

Cited by

What makes clinical machine learning fair? A practical ethics framework.
PLOS Digit Health. 2025 Mar 18;4(3):e0000728. doi: 10.1371/journal.pdig.0000728. eCollection 2025 Mar.

References

On the ethics of algorithmic decision-making in healthcare.
J Med Ethics. 2020 Mar;46(3):205-211. doi: 10.1136/medethics-2019-105586. Epub 2019 Nov 20.

Health Care Disparities in Radiology: A Primer for Resident Education.
Curr Probl Diagn Radiol. 2019 Mar-Apr;48(2):108-110. doi: 10.1067/j.cpradiol.2018.05.007. Epub 2018 May 31.
