Lysaght Tamra, Lim Hannah Yeefen, Xafis Vicki, Ngiam Kee Yuan
Centre for Biomedical Ethics, Yong Loo Lin School of Medicine, National University of Singapore, Singapore.
Nanyang Business School, Nanyang Technological University, Singapore.
Asian Bioeth Rev. 2019 Sep 12;11(3):299-314. doi: 10.1007/s41649-019-00096-0. eCollection 2019 Sep.
Artificial intelligence (AI) is set to transform healthcare. Key ethical issues to emerge with this transformation encompass the accountability and transparency of the decisions made by AI-based systems, the potential for group harms arising from algorithmic bias, and the professional roles and integrity of clinicians. These concerns must be balanced against the imperative of generating public benefit through more efficient healthcare systems enabled by the vastly greater and more accurate computational power of AI. In weighing these issues, this paper applies the deliberative balancing approach of Xafis et al. (2019). The analysis applies relevant values identified from the framework to demonstrate how decision-makers can draw on them to develop and implement AI-assisted support systems in healthcare and clinical practice ethically and responsibly. Please refer to Xafis et al. (2019) in this special issue of the Asian Bioethics Review for more information on how this framework is to be used, including a full explanation of the key values involved and the balancing approach used in the case study at the end of this paper.