Cestonaro Clara, Delicati Arianna, Marcante Beatrice, Caenazzo Luciana, Tozzo Pamela
Legal Medicine Unit, Department of Cardiac, Thoracic, Vascular Sciences and Public Health, University of Padua, Padua, Italy.
Front Med (Lausanne). 2023 Nov 27;10:1305756. doi: 10.3389/fmed.2023.1305756. eCollection 2023.
Artificial intelligence (AI) in medicine is an increasingly studied and widespread phenomenon, applied in multiple clinical settings. Alongside its many potential advantages, such as easing clinicians' workload and improving diagnostic accuracy, the use of AI raises ethical and legal concerns, to which there is still no unanimous response. A systematic literature review on medical professional liability related to the use of AI-based diagnostic algorithms was conducted using the public electronic database PubMed, selecting studies published from 2020 to 2023. The systematic review was performed according to the 2020 PRISMA guidelines. The literature review highlights how the issue of liability in cases of AI-related error and patient harm has received growing attention in recent years. The application of AI and diagnostic algorithms moreover raises questions about the risks of using unrepresentative populations during development and about the completeness of the information given to patients. Concerns about the impact on the fiduciary relationship between physician and patient, and on the subject of empathy, have also been raised. The use of AI in the medical field and the application of diagnostic algorithms have introduced a revolution in the doctor-patient relationship, resulting in multiple possible medico-legal consequences. The regulatory framework on medical liability when AI is applied is therefore inadequate and requires urgent intervention, as there is no single, specific regulation governing the liability of the various parties involved in the AI supply chain, nor of end-users. Greater attention should be paid to the inherent risks of AI and the consequent need for regulations on product safety, as well as for the maintenance of minimum safety standards through appropriate updates.