Institute for Biomedical Ethics, University of Basel, Basel, Switzerland.
Center for Legal Medicine, University of Geneva, Geneva, Switzerland.
Med Health Care Philos. 2021 Sep;24(3):341-349. doi: 10.1007/s11019-021-10008-5. Epub 2021 Mar 13.
Machine Learning (ML) is on the rise in medicine, promising improved diagnostic, therapeutic and prognostic clinical tools. While these technological innovations are bound to transform health care, they also bring new ethical concerns to the forefront. One particularly elusive challenge concerns discriminatory algorithmic judgements based on biases inherent in the training data. A common line of reasoning distinguishes between justified differential treatments that mirror true disparities between socially salient groups, and unjustified biases that do not, leading to misdiagnosis and erroneous treatment. In the curation of training data, however, this strategy runs into severe problems, since distinguishing between the two can be next to impossible. We therefore plead for a pragmatist approach to dealing with algorithmic bias in healthcare environments. Drawing on a recent reformulation of William James's pragmatist understanding of truth, we recommend that, instead of aiming at a supposedly objective truth, outcome-based therapeutic usefulness should serve as the guiding principle for assessing ML applications in medicine.