Kiener Maximilian
The Queen's College, Faculty of Philosophy, The University of Oxford, High Street, Oxford, OX1AW UK.
AI Soc. 2021;36(3):705-713. doi: 10.1007/s00146-020-01085-w. Epub 2020 Oct 22.
This paper focuses on the use of 'black box' AI in medicine and asks whether the physician needs to disclose to patients that even the best AI comes with the risks of cyberattacks, systematic bias, and a particular type of mismatch between AI's implicit assumptions and an individual patient's background situation. Considering current clinical practice, I argue that, under certain circumstances, these risks do need to be disclosed. Otherwise, the physician either vitiates a patient's informed consent or violates a more general obligation to warn him about potentially harmful consequences. To support this view, I argue, first, that the already widely accepted conditions in the evaluation of risks, i.e. the 'nature' and 'likelihood' of risks, speak in favour of disclosure and, second, that principled objections against the disclosure of these risks do not withstand scrutiny. Moreover, I explain that these risks are exacerbated by pandemics like the COVID-19 crisis, which further emphasises their significance.