Muyskens Kathryn, Ballantyne Angela, Savulescu Julian, Nasir Harisan Unais, Muralidharan Anantharaman
Centre for Biomedical Ethics, Yong Loo Lin School of Medicine, National University of Singapore, Singapore.
Department of Primary Health Care and General Practice, University of Otago, Dunedin, New Zealand.
Asian Bioeth Rev. 2024 Oct 31;17(1):167-185. doi: 10.1007/s41649-024-00315-3. eCollection 2025 Jan.
A significant ethical tension in resource allocation and public health ethics lies between utility and equity. We explore this tension in the context of health AI through an examination of a diagnostic AI screening tool for diabetic retinopathy developed by a team of researchers at Duke-NUS in Singapore. While this tool was found to be effective, it was not equally effective across every ethnic group in Singapore, performing worse for the minority Malay population than for the Chinese majority. We discuss the normatively problematic nature of bias in health AI and explore the ways in which bias can interact with various forms of social inequality. From there, we examine the specifics of the diabetic retinopathy case and weigh up the trade-offs between utility and equity. Ultimately, we conclude that it is ethically permissible to prioritise utility over equity where certain criteria hold. Given that any medical AI is likely to retain some bias from its training data, which may itself reflect wider social inequalities, we argue that it is permissible to implement an AI tool with residual bias where: (1) its introduction reduces the influence of biases (even if overall inequality is worsened), and/or (2) the utility gained is significant enough and shared across groups (even if unevenly).