From black box to clarity: Strategies for effective AI informed consent in healthcare.
Author information
Chau M, Rahman M G, Debnath T
Affiliation information
Faculty of Science and Health, School of Dentistry and Medical Sciences, Charles Sturt University, Boorooma, New South Wales, Australia.
Faculty of Business, Justice and Behavioural Sciences, School of Computing, Mathematics and Engineering, Charles Sturt University, Port Macquarie, New South Wales, Australia.
Publication information
Artif Intell Med. 2025 May 24;167:103169. doi: 10.1016/j.artmed.2025.103169.
BACKGROUND
Informed consent is fundamental to ethical medical practice, ensuring that patients understand the procedures they undergo, the associated risks, and available alternatives. The advent of artificial intelligence (AI) in healthcare, particularly in diagnostics, introduces complexities that traditional informed consent forms do not adequately address. AI technologies, such as image analysis and decision-support systems, offer significant benefits but also raise ethical, legal, and practical concerns regarding patient information and autonomy.
MAIN BODY
The integration of AI in healthcare diagnostics necessitates a re-evaluation of current informed consent practices to ensure that patients are fully aware of AI's role, capabilities, and limitations in their care. Existing standards, such as those in the UK's National Health Service and the US, highlight the need for transparency and patient understanding but often fall short when applied to AI. The "black box" phenomenon, where the inner workings of AI systems are not transparent, poses a significant challenge. This lack of transparency can lead to over-reliance or distrust in AI tools by clinicians and patients alike. Additionally, the current informed consent process often fails to provide detailed explanations about AI algorithms, the data they use, and inherent biases. There is also a notable gap in the training and education of healthcare professionals on AI technologies, which impacts their ability to communicate effectively with patients. Ethical and legal considerations, including data privacy and algorithmic fairness, are frequently inadequately addressed in consent forms. Furthermore, integrating AI into clinical workflows presents practical challenges that require careful planning and robust support systems.
CONCLUSION
This review proposes strategies for redesigning informed consent forms. These include using plain language, visual aids, and personalised information to improve patient understanding and trust. Implementing continuous monitoring and feedback mechanisms can ensure the ongoing effectiveness of these forms. Future research should focus on developing comprehensive regulatory frameworks and enhancing communication techniques to convey complex AI concepts to patients. By improving informed consent practices, we can uphold ethical standards, foster patient trust, and support the responsible integration of AI in healthcare, ultimately benefiting both patients and healthcare providers.