Hastings Cent Rep. 2021 Jul;51(4):38-45. doi: 10.1002/hast.1248. Epub 2021 Apr 6.
The use of opaque, uninterpretable artificial intelligence systems in health care can be medically beneficial, but it is often viewed as potentially morally problematic on account of this opacity: the systems are black boxes. Alex John London has recently argued that opacity is not generally problematic, given that many standard therapies are explanatorily opaque and that we can rely on statistical validation of the systems in deciding whether to implement them. But is statistical validation sufficient to justify implementing these AI systems in health care, or is it merely one of the necessary criteria? I argue that accountability, which plays an important role in preserving the patient-physician trust that allows the institution of medicine to function, contributes further to an account of AI system justification. Hence, I endorse the vanishing accountability principle: accountability in medicine must be preserved in addition to statistical validation. AI systems that introduce problematic gaps in accountability should not be implemented.