Ratti Emanuele, Morrison Michael, Jakab Ivett
Department of Philosophy, Cotham House, University of Bristol, Bristol, BS6 6JL, UK.
HeLEX - Centre for Health, Law and Emerging Technologies, Faculty of Law, University of Oxford, St Cross Building, Room 201, St Cross Road, Oxford, OX1 3UL, UK.
BMC Med Ethics. 2025 May 27;26(1):68. doi: 10.1186/s12910-025-01198-1.
Artificial Intelligence (AI) is being designed, tested, and in many cases actively employed in almost every aspect of healthcare, from primary care to public health. It is by now well established that any application of AI carries an attendant responsibility to consider the ethical and societal aspects of its development, deployment, and impact. However, in the rapidly developing field of AI, developments such as machine learning, neural networks, generative AI, and large language models have the potential to raise new and distinct ethical and social issues compared to, for example, automated data processing or more 'basic' algorithms.
This article presents a scoping review of the ethical and social issues pertaining to AI in healthcare, with a novel two-pronged design. One strand of the review (SR1) consists of a broad review of the academic literature restricted to a recent timeframe (2021-23), to better capture up-to-date developments and debates. The second strand (SR2) consists of a narrow review, limited to prior systematic and scoping reviews on the ethics of AI in healthcare, but extended over a longer timeframe (2014-2024) to capture longstanding and recurring themes and issues in the debate. This strategy offers a practical way to handle the increasingly voluminous literature on the ethics of AI in healthcare while accounting for both its depth and its evolution.
SR1 captures the heterogeneity of audiences, medical fields, and ethical and societal themes (and their tradeoffs) raised by AI systems. SR2 provides a comprehensive picture of how scoping reviews on ethical and societal issues in AI in healthcare have been conceptualized, as well as the trends and gaps identified.
Our analysis shows that the typical approach to ethical issues in AI, based on appeals to general principles, is increasingly unlikely to do justice to the nuances and specificities of the ethical and societal issues raised by AI in healthcare as the technology moves from abstract debate and discussion to situated, real-world applications and concerns in healthcare settings.