Barry Solaiman, Abeer Malik
HBKU Law, Qatar.
Weill Cornell Medicine, Qatar.
Med Law Rev. 2025 Jan 4;33(1). doi: 10.1093/medlaw/fwae033.
This article argues that the integration of artificial intelligence (AI) into healthcare, particularly under the European Union's Artificial Intelligence Act (AI-Act), has significant implications for the doctor-patient relationship. While historically paternalistic, Western medicine now emphasises patient autonomy within a consumeristic paradigm, aided by technological advancements. However, hospitals worldwide are adopting AI more rapidly than ever before, potentially reshaping patient care dynamics. Three potential pathways emerge: enhanced patient autonomy, increased doctor control via AI, or the disempowerment of both parties as decision-making shifts to private entities. This article contends that, without addressing flaws in the AI-Act's risk-based approach, private entities could be empowered at the expense of patient autonomy. While proposed directives such as the AI Liability Directive (AILD) and the revised Directive on Liability for Defective Products (revised PLD) aim to mitigate risks, they may not remedy the limitations of the AI-Act. Caution must be exercised in the future interpretation of the emerging regulatory architecture to protect patient autonomy and to preserve the central role of healthcare professionals in the care of their patients.