Department of Anesthesia and Intensive Care, Infermi Hospital, Romagna Local Health Authority, Viale Settembrini 2, Rimini, 47923, Italy.
Health Services Research, Evaluation and Policy Unit, Romagna Local Health Authority, Viale Settembrini 2, Rimini, 47923, Italy.
J Clin Monit Comput. 2024 Aug;38(4):931-939. doi: 10.1007/s10877-024-01157-y. Epub 2024 Apr 4.
The integration of Clinical Decision Support Systems (CDSS) based on artificial intelligence (AI) into healthcare is a groundbreaking evolution with enormous potential, but its development and ethical implementation present unique challenges, particularly in critical care, where physicians often deal with life-threatening conditions requiring rapid action and patients unable to participate in the decision-making process. Moreover, the development of AI-based CDSS is complex and must address multiple sources of bias, including data acquisition, health disparities, domain shifts during clinical use, and cognitive biases in decision-making. In this scenario, algor-ethics is mandatory: it emphasizes the integration of 'Human-in-the-Loop' and 'Algorithmic Stewardship' principles, along with the benefits of advanced data engineering. The establishment of Clinical AI Departments (CAID) is necessary to lead AI innovation in healthcare, ensuring ethical integrity and human-centered development in this rapidly evolving field.