School of Health and Welfare, Halmstad University, Halmstad, Sweden.
Stud Health Technol Inform. 2023 May 18;302:676-677. doi: 10.3233/SHTI230234.
Artificial intelligence (AI) is predicted to improve health care, increase efficiency, and save time and resources, especially in the context of emergency care, where many critical decisions are made. Research shows an urgent need to develop principles and guidance to ensure ethical AI use in healthcare. This study aimed to explore healthcare professionals' perceptions of the ethical aspects of implementing an AI application to predict the mortality risk of patients in emergency departments. The analysis used an abductive qualitative content analysis based on the principles of medical ethics (autonomy, beneficence, non-maleficence, and justice), the principle of explicability, and the new principle of professional governance, which emerged from the analysis. Two conflicts and/or considerations tied to each ethical principle emerged from the analysis, elucidating healthcare professionals' perceptions of the ethical aspects of implementing the AI application in emergency departments. The results related to sharing information from the AI application, resources versus demands, providing equal care, using AI as a support system, trust in AI, AI-based knowledge, professional knowledge versus AI-based information, and conflicts of interest in the healthcare system.