School of Medicine, Cardiff University, Cardiff, UK
J Med Ethics. 2022 Apr;48(4):272-277. doi: 10.1136/medethics-2020-106905. Epub 2021 Feb 16.
The UK Government's Code of Conduct for data-driven health and care technologies, specifically artificial intelligence (AI)-driven technologies, comprises 10 principles that outline a gold standard of ethical conduct for AI developers and implementers within the National Health Service. Considering the importance of trust in medicine, in this essay I aim to evaluate the conceptualisation of trust within this piece of ethical governance. I examine the Code of Conduct, specifically Principle 7, and extract two positions: a principle of rationally justified trust, which posits that trust should be grounded in sound epistemological bases, and a principle of value-based trust, which views trust in an all-things-considered manner. I argue that rationally justified trust is largely infeasible when trusting AI, owing to AI's complexity and inexplicability. By contrast, I show how value-based trust is more feasible, as it is the form of trust individuals intuitively use; furthermore, it complies better with Principle 1. I therefore conclude the essay by suggesting that the Code of Conduct endorse the principle of value-based trust more explicitly.