Marinelli S, De Paola L, Stark M, Montanari Vergallo G
School of Law, Polytechnic University of Marche, Ancona, Italy.
Department of Anatomical, Histological, Forensic and Orthopedic Sciences, Sapienza University of Rome, Rome, Italy.
Clin Ter. 2025 Mar-Apr;176(Suppl 1(2)):77-82. doi: 10.7417/CT.2025.5192.
This article aims to identify the opportunities and risks of Artificial Intelligence tools (AIT) applied to clinical practice, while also reflecting on their impact on the doctor-patient relationship.
The authors conducted a systematic literature review following the PRISMA guidelines, covering the period from 2019 to October 2024. The academic databases PubMed and Scopus were searched using the keywords and search strings "artificial intelligence", "healthcare", "informed consent", and "doctor-patient relationship" in titles, abstracts, and keywords.
AIT have proven useful in significantly reducing the time spent on bureaucratic tasks and in minimizing errors compared to traditional medicine. However, their effectiveness strongly depends on the quantity and quality of the data used for training. Additionally, the decision-making process lacks transparency, because neither AIT nor even their programmers can explain the resulting diagnostic and therapeutic recommendations. Human supervision of AI work is therefore essential.
The potential risks that AI poses to patient safety and personal data security require governments to urge those involved in the production of AI tools to adhere to specific ethical standards, developed with the participation of all stakeholders, including patients.