Mouchabac Stéphane, Adrien Vladimir, Falala-Séchet Clara, Bonnot Olivier, Maatoug Redwan, Millet Bruno, Peretti Charles-Siegfried, Bourla Alexis, Ferreri Florian
Sorbonne Université, AP-HP Department of Psychiatry, Hôpital Saint-Antoine, Paris, France.
Sorbonne Université, iCRIN Psychiatry (Infrastructure of Clinical Research In Neurosciences - Psychiatry), Brain and Spine Institute (ICM), INSERM, CNRS, Paris, France.
Front Psychiatry. 2021 Jan 22;11:622506. doi: 10.3389/fpsyt.2020.622506. eCollection 2020.
Patients' decision-making abilities are often impaired in psychiatric disorders. The legal framework of psychiatric advance directives (PADs) was designed to provide care to patients in these situations while respecting their free and informed consent. The implementation of artificial intelligence (AI) within Clinical Decision Support Systems (CDSS) may improve the complex decisions that are often made in situations covered by PADs. Still, it raises theoretical and ethical issues that this paper aims to address. First, the paper examines each level at which AI could intervene in the PAD drafting process: which data sources it could access, whether its data-processing capabilities should be limited, at what moments it should be used, and its place in the contractual relationship between the parties (patient, caregivers, and trusted person). Second, it focuses on ethical principles and how they should be taken into account in the future of the PAD drafting process, whether they are medical principles (autonomy, beneficence, non-maleficence, justice) applied to AI or AI principles (loyalty and vigilance) applied to medicine. Some general guidelines are proposed in conclusion: AI must remain a decision support system, acting as a partner to each party of the PAD contract; patients should be able to choose a personalized type of AI intervention, or no AI intervention at all; they should stay informed, i.e., understand the functioning and relevance of AI through educational programs; finally, a committee should be created to ensure the principle of vigilance by auditing these new tools in terms of successes, failures, security, and relevance.