Pot Mirjam, Kieusseyan Nathalie, Prainsack Barbara
Department of Political Science, University of Vienna, Universitätsstraße 7, 1010 Vienna, Austria.
Olea Medical, 93 Avenue des Sorbiers, 13600 La Ciotat, France.
Insights Imaging. 2021 Feb 10;12(1):13. doi: 10.1186/s13244-020-00955-7.
The application of machine learning (ML) technologies in medicine generally, and in radiology specifically, is hoped to improve clinical processes and the provision of healthcare. A central motivation in this regard is to advance patient treatment by reducing human error and increasing the accuracy of prognosis, diagnosis, and therapy decisions. There is, however, also increasing awareness of bias in ML technologies and its potentially harmful consequences. Biases refer to systematic distortions of datasets, algorithms, or human decision making. These systematic distortions are understood to have negative effects on the quality of an outcome in terms of accuracy, fairness, or transparency. But biases are not only a technical problem that requires a technical solution. Because they often also have a social dimension, the 'distorted' outcomes they yield often have implications for equity. This paper assesses different types of biases that can emerge within applications of ML in radiology and discusses in which cases such biases are problematic. Drawing upon theories of equity in healthcare, we argue that while some biases are harmful and should be acted upon, others might be unproblematic and even desirable, precisely because they can contribute to overcoming inequities.