Department of Health Humanities and Bioethics and Department of Philosophy, University of Rochester, Rochester, New York.
Department of Bioethics, Hospital for Sick Children, and Dalla Lana School of Public Health, University of Toronto, Toronto, Ontario, Canada.
J Nucl Med. 2023 Oct;64(10):1509-1515. doi: 10.2967/jnumed.123.266110. Epub 2023 Aug 24.
The deployment of artificial intelligence (AI) has the potential to make nuclear medicine and medical imaging faster, cheaper, more effective, and more accessible. This is possible, however, only if clinicians and patients feel that these AI medical devices (AIMDs) are trustworthy. Highlighting the need to ensure health justice by fairly distributing benefits and burdens while respecting individual patients' rights, the AI Task Force of the Society of Nuclear Medicine and Molecular Imaging has identified 4 major ethical risks that arise during the deployment of AIMDs: autonomy of patients and clinicians, transparency of clinical performance and limitations, fairness toward marginalized populations, and accountability of physicians and developers. We provide preliminary recommendations for governing these ethical risks to realize the promise of AIMDs for patients and populations.