Diagnostic Radiology 3, Department of Translational Research, University of Pisa, Pisa, Italy.
Radiology Unit, Department of Diagnostic and Preventive Medicine, Sant'Orsola-Malpighi University Hospital, Bologna, Italy.
Radiol Med. 2020 Jun;125(6):517-521. doi: 10.1007/s11547-020-01135-9. Epub 2020 Jan 31.
The aim of this paper is to answer the question "Who or what is responsible for the benefits and harms of using artificial intelligence in radiology?" When human beings make decisions, responsibility for an action normally attaches directly to the agent who performed it: you have an effect on others, and therefore you are responsible for what you do and for what you decide to do. But when the decision is made not by you but by an artificial intelligence system, it becomes difficult, and important, to ascribe responsibility when something goes wrong. The manuscript addresses the following statements: (1) when using AI, the radiologist remains responsible for the diagnosis; (2) radiologists must be trained in the use of AI, since they are responsible for the actions of the machines; (3) radiologists involved in R&D have a responsibility to guide compliance with the rules for trustworthy AI; (4) the radiologist's responsibility is at risk of validating the unknown (the "black box"); (5) the radiologist's decisions may be biased by AI automation; (6) there is a risk of a paradox: increasing AI tools to compensate for the shortage of radiologists; (7) informed consent and quality measures are needed. Future legislation must outline the contours of professional responsibility with respect to services performed autonomously by AI, balancing the professional's ability to influence, and therefore correct, the machine against the sphere of autonomy that technological evolution would instead like to grant to robots.