AI in radiology: Legal responsibilities and the car paradox.

Affiliations

MRI Unit, Radiology Department, HT Medica, Jaén, Spain.

NLP Department, HT Medica, Jaén, Spain.

Publication information

Eur J Radiol. 2024 Jun;175:111462. doi: 10.1016/j.ejrad.2024.111462. Epub 2024 Apr 10.

Abstract

The integration of AI in radiology raises significant legal questions about responsibility for errors. Radiologists fear AI may introduce new legal challenges, despite its potential to enhance diagnostic accuracy. AI tools, even those approved by regulatory bodies like the FDA or CE, are not perfect, posing a risk of failure. The key issue is how AI is implemented: as a stand-alone diagnostic tool or as an aid to radiologists. The latter approach could reduce undesired side effects. However, it's unclear who should be held liable for AI failures, with potential candidates ranging from engineers and radiologists involved in AI development to companies and department heads who integrate these tools into clinical practice. The EU's AI Act, recognizing AI's risks, categorizes applications by risk level, with many radiology-related AI tools considered high risk. Legal precedents in autonomous vehicles offer some guidance on assigning responsibility. Yet, the existing legal challenges in radiology, such as diagnostic errors, persist. AI's potential to improve diagnostics raises questions about the legal implications of not using available AI tools. For instance, an AI tool improving the detection of pediatric fractures could reduce legal risks. This situation parallels innovations like car turn signals, where ignoring available safety enhancements could lead to legal problems. The debate underscores the need for further research and regulation to clarify AI's role in radiology, balancing innovation with legal and ethical considerations.
