Contaldo Maria Teresa, Pasceri Giovanni, Vignati Giacomo, Bracchi Laura, Triggiani Sonia, Carrafiello Gianpaolo
Postgraduation School in Radiodiagnostics, University of Milan, 20122 Milan, Italy.
Information Society Law Center, Department "Cesare Beccaria", University of Milan, 20122 Milan, Italy.
Diagnostics (Basel). 2024 Jul 12;14(14):1506. doi: 10.3390/diagnostics14141506.
The application of Artificial Intelligence (AI) facilitates medical activities by automating routine tasks for healthcare professionals. AI augments but does not replace human decision-making, thus complicating the process of addressing legal responsibility. This study investigates the legal challenges associated with the medical use of AI in radiology, analyzing relevant case law and literature, with a specific focus on professional liability attribution. In the case of an error, the primary responsibility remains with the physician, with possible shared liability with developers according to the framework of medical device liability. If there is disagreement with the AI's findings, the physician must not only follow their own clinical judgment but also justify their choices according to prevailing professional standards. Regulations must balance the autonomy of AI systems with the need for responsible clinical practice. Effective use of AI-generated evaluations requires knowledge of data dynamics and metrics like sensitivity and specificity, even without a clear understanding of the underlying algorithms: the opacity (referred to as the "black box phenomenon") of certain systems raises concerns about the interpretation and actual usability of results for both physicians and patients. AI is redefining healthcare, underscoring the imperative for robust liability frameworks, meticulous updates of systems, and transparent patient communication regarding AI involvement.
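The abstract notes that effective use of AI-generated evaluations requires familiarity with metrics like sensitivity and specificity. As a minimal illustration (not part of the article), these metrics can be computed from the four confusion-matrix counts; all numbers below are hypothetical.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Return (sensitivity, specificity) from binary confusion-matrix counts.

    sensitivity = TP / (TP + FN)  # true positive rate: diseased cases detected
    specificity = TN / (TN + FP)  # true negative rate: healthy cases cleared
    """
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity


# Hypothetical example: an AI tool flagging suspicious radiographs
sens, spec = sensitivity_specificity(tp=90, fn=10, tn=80, fp=20)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
# sensitivity=0.90, specificity=0.80
```

A physician weighing an AI finding against their own reading can use these figures without knowing the underlying algorithm: a high-sensitivity, lower-specificity tool, as in this sketch, is more useful for ruling disease out than for confirming it.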