Research Center on Ethical, Legal, and Social Issues, Osaka University, Suita, Osaka, Japan.
Front Public Health. 2023 Jul 17;11:1142062. doi: 10.3389/fpubh.2023.1142062. eCollection 2023.
Public and private investments in developing digital health technologies, including artificial intelligence (AI), are intensifying globally. Japan is a key case study given major governmental investments, in part through a Cross-Ministerial Strategic Innovation Promotion Program (SIP) for an "Innovative AI Hospital System." Yet there has been little critical examination of the SIP Research Plan, particularly from an ethics perspective. This paper reports an analysis of the Plan to identify the extent to which it addressed the ethical considerations set out in the World Health Organization's 2021 Guidance on the Ethics and Governance of Artificial Intelligence for Health. A coding framework was created based on the six ethical principles proposed in the Guidance and was used as the basis for a content analysis. A total of 101 references to aspects of the framework were identified in the Plan, but attention to the ethical principles was uneven, ranging from the strongest focus on the potential benefits of AI to healthcare professionals and patients (n = 44; Principle 2) to no consideration of the need for responsive or sustainable AI (n = 0; Principle 6). Ultimately, the findings show that the Plan reflects insufficient consideration of the ethical issues that arise from developing and implementing AI for healthcare purposes. This case study is used to argue that, given the ethical complexity of using digital health technologies, the full range of ethical concerns put forward by the WHO must urgently be made visible in future plans for AI in healthcare.