Department of Preventive and Population Medicine, Office of Clinical Epidemiology, Analytics, and Knowledge [OCEAN], Tan Tock Seng Hospital, Singapore, Singapore.
Nanyang Business School, Nanyang Technological University, Singapore, Singapore.
Front Public Health. 2024 Jul 1;12:1420032. doi: 10.3389/fpubh.2024.1420032. eCollection 2024.
The increasing use of artificial intelligence (AI) in healthcare is changing practice and raises ethical implications for AI adoption in medicine. We assessed medical doctors' ethical stances in situations arising from the adoption of an AI-enabled Clinical Decision Support System (AI-CDSS) for antibiotic prescribing decision support at a healthcare institution in Singapore.
We conducted in-depth interviews with 30 doctors of varying medical specialties and designations between October 2022 and January 2023. Our interview guide was anchored on the four pillars of medical ethics. We used clinical vignettes with the following hypothetical scenarios: (1) Using an antibiotic AI-enabled CDSS's recommendations for a tourist, (2) Uncertainty about the AI-CDSS's recommendation of a narrow-spectrum antibiotic vs. concerns about antimicrobial resistance, (3) Patient refusing the "best treatment" recommended by the AI-CDSS, (4) Data breach.
More than half of the participants realized that the AI-enabled CDSS could have misrepresented non-local populations only after being prompted to consider the AI-CDSS's data source. When deciding between a broad- and a narrow-spectrum antibiotic, most participants preferred to exercise their clinical judgment over the AI-enabled CDSS's recommendations in their patients' best interest. Two-thirds of participants prioritized beneficence over patient autonomy by convincing patients who refused the best-practice treatment to accept it. Many were unaware of the implications of data breaches.
The current position on legal liability for the use of AI-enabled CDSS remains unclear with respect to doctors, hospitals, and CDSS providers. A comprehensive ethical, legal, and regulatory framework, perceived organizational support, and adequate knowledge of AI and ethics are essential for successfully implementing AI in healthcare.