Centre for the Study of Professions, Oslo Metropolitan University, Oslo, Norway.
Department of Global Public Health and Primary Care, University of Bergen, Bergen, Norway.
Sci Eng Ethics. 2022 Apr 1;28(2):17. doi: 10.1007/s11948-022-00369-2.
This article examines the role of medical doctors, AI designers, and other stakeholders in making applied AI and machine learning ethically acceptable on the general premises of shared decision-making in medicine. Recent policy documents, such as the EU strategy on trustworthy AI, and the research literature have often suggested that AI could be made ethically acceptable through increased collaboration between developers and other stakeholders. The article articulates and examines four central alternative models of how AI can be designed and applied in patient care, which we call the ordinary evidence model, the ethical design model, the collaborative model, and the public deliberation model. We argue that the collaborative model is the most promising for most AI technology, while the public deliberation model is called for when the technology is recognized as fundamentally transforming the conditions for ethical shared decision-making.