Kotter Elmar, Pinto Dos Santos Daniel
Klinik für Diagnostische und Interventionelle Radiologie, Universitätsklinikum Freiburg, Hugstetterstr. 55, 79106 Freiburg, Germany.
Institut für Diagnostische und Interventionelle Radiologie, Uniklinik Köln, Kerpener Str. 62, 50937 Köln, Germany.
Radiologie (Heidelb). 2024 Jun;64(6):498-502. doi: 10.1007/s00117-024-01286-0. Epub 2024 Mar 18.
The introduction of artificial intelligence (AI) into radiology promises to enhance efficiency and improve diagnostic accuracy, yet it also raises numerous ethical questions. These include data protection, the future role of radiologists, liability when using AI systems, and the avoidance of bias. To prevent data bias, training datasets must be compiled carefully and be representative of the target population. Accordingly, the upcoming European Union AI Act sets particularly high requirements for the datasets used to train medical AI systems. Cognitive bias occurs when radiologists place too much trust in the results provided by AI systems (overreliance). To date, diagnostic AI systems have been used almost exclusively as "second look" systems. If diagnostic AI systems are to be used in the future as "first look" systems, or even as autonomous AI systems, in order to enhance efficiency in radiology, the question of liability must be addressed, comparable to liability for autonomous driving. Such use of AI would also significantly change the role of radiologists.