Andrea Ferrario, Michele Loi, Eleonora Viganò
Department of Management, Technology and Economics, ETH Zurich, Zurich, Switzerland.
Digital Society Initiative (DSI) and Institute of Biomedical Ethics and History of Medicine (IBME), University of Zurich, Zurich, Switzerland.
J Med Ethics. 2020 Nov 25;47(6):437-8. doi: 10.1136/medethics-2020-106922.
In his recent article 'Limits of trust in medical AI', Hatherley argues that if we believe the motivations usually recognised as relevant for interpersonal trust must apply to interactions between humans and medical artificial intelligence, then these systems do not appear to be appropriate objects of trust. In this response, we argue that it is possible to discuss trust in medical artificial intelligence (AI) if one refrains from simply assuming that trust only describes human-human interactions. To do so, we consider an account of trust that distinguishes trust from reliance in a way that is compatible with trusting non-human agents. On this account, to trust a medical AI is to rely on it with little monitoring and control of the elements that make it trustworthy. This attitude does not presuppose properties of the AI system that, in fact, only humans can have. This account of trust applies, in particular, to all cases where a physician relies on a medical AI's predictions to support their decision-making.