Tan Sing Chee, Modra Lucy, Hensman Tamishta
Northern Health, Australia.
Austin Health, Australia.
Crit Care Resusc. 2025 Jun 26;27(2):100115. doi: 10.1016/j.ccrj.2025.100115. eCollection 2025 Jun.
In Australian intensive care units (ICUs), Artificial Intelligence (AI) promises to enhance efficiency and improve patient outcomes. However, ethical concerns surrounding AI must be addressed before widespread adoption. We examine the ethical challenges of AI using the framework of the four pillars of biomedical ethics (beneficence, nonmaleficence, autonomy, and justice) and discuss the need for a fifth pillar of explicability. We consider the risks of perpetuating inequities, privacy breaches, and unintended harms, particularly in disadvantaged populations such as First Nations people. We advocate for a national strategy for ICUs that guides the ethical implementation of AI and aligns with existing National AI Frameworks. Our recommendations for the implementation of safe and ethical AI in the ICU include education, developing guidelines, and ensuring transparency in AI decision-making. A coordinated strategy is essential to balance AI's benefits with the ethical responsibility to protect patients and healthcare providers in critical care settings.