Bioethics Department, The Hospital for Sick Children, Toronto, Ontario, Canada.
Vector Institute, Toronto, Ontario, Canada.
J Am Med Inform Assoc. 2020 Dec 9;27(12):2024-2027. doi: 10.1093/jamia/ocaa085.
Accumulating evidence demonstrates the impact of bias that reflects social inequality on the performance of machine learning (ML) models in health care. Because ML tools are intended to inform healthcare decision making more broadly, they require attention to adequately quantify the impact of bias and to reduce its potential to exacerbate inequalities. We suggest that taking a patient safety and quality improvement approach to bias can support the quantification of bias-related effects in ML. Drawing from the ethical principles underpinning these approaches, we argue that patient safety and quality improvement lenses support the quantification of relevant performance metrics, in order to minimize harm while promoting accountability, justice, and transparency. We identify specific methods for operationalizing these principles with the goal of attending to bias to support better decision making in light of controllable and uncontrollable factors.
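As an illustration of what "quantification of relevant performance metrics" can mean in practice, the sketch below stratifies a standard performance metric (sensitivity) across patient subgroups, a common first step in auditing an ML model for bias-related harm. This is not a method described in the article; the data, column names, and decision threshold are all hypothetical.

```python
# A minimal sketch (not from the article) of quantifying bias-related
# performance differences by stratifying sensitivity across subgroups.
# All data, column names, and the 0.5 threshold are hypothetical.
import pandas as pd
from sklearn.metrics import recall_score

# Hypothetical model outputs: true label, predicted risk score, and a
# demographic group indicator for each patient.
df = pd.DataFrame({
    "y_true":  [1, 0, 1, 1, 0, 1, 0, 1],
    "y_score": [0.9, 0.2, 0.4, 0.8, 0.6, 0.3, 0.1, 0.7],
    "group":   ["A", "A", "A", "A", "B", "B", "B", "B"],
})
df["y_pred"] = (df["y_score"] >= 0.5).astype(int)  # assumed threshold

# Per-subgroup sensitivity (recall); a large gap between groups flags
# a potential safety issue for the disadvantaged group.
by_group = df.groupby("group").apply(
    lambda g: recall_score(g["y_true"], g["y_pred"])
)
print(by_group)
print("sensitivity gap:", by_group.max() - by_group.min())
```

In a patient safety framing, such subgroup gaps would be tracked over time like any other quality indicator, rather than evaluated once at deployment.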