Aalborg University, Centre for Applied Ethics and Philosophy of Science, Department of Communication and Psychology, A. C. Meyers Vænge 15, 2450 Copenhagen SV, Denmark.
University of Manchester, Centre for Social Ethics and Policy, School of Law, Manchester M13 9PL, United Kingdom; Center for Medical Ethics, Faculty of Medicine, University of Oslo, Norway.
Artif Intell Med. 2020 Jul;107:101901. doi: 10.1016/j.artmed.2020.101901. Epub 2020 Jun 9.
The problem of the explainability of AI decision-making has attracted considerable attention in recent years. In considering AI diagnostics, we suggest that explainability should be explicated as 'effective contestability'. Taking a patient-centric approach, we argue that patients should be able to contest the diagnoses of AI diagnostic systems, and that effective contestation of patient-relevant aspects of AI diagnoses requires the availability of different types of information about 1) the AI system's use of data, 2) the system's potential biases, 3) the system's performance, and 4) the division of labour between the system and health care professionals. We justify and define thirteen specific informational requirements that follow from 'contestability'. We further show not only that contestability is a weaker requirement than some of the proposed criteria of explainability, but also that it does not introduce poorly grounded double standards for AI and health care professionals' diagnostics, and does not come at the cost of AI system performance. Finally, we briefly discuss whether the contestability requirements introduced here are domain-specific.