Joint Research Unit-UMR 7354, Law, Religion, Business and Society, University of Strasbourg, 67000 Strasbourg, France.
Biomedicine Research Center of Strasbourg (CRBS), UR 3072, "Mitochondria, Oxidative Stress and Muscle Plasticity", University of Strasbourg, 67000 Strasbourg, France.
Sensors (Basel). 2024 May 28;24(11):3491. doi: 10.3390/s24113491.
In the last few decades, our healthcare system has undergone an ongoing transformation, with greater use of sensors for remote care and of artificial intelligence (AI) tools. In particular, sensors enhanced by new algorithms with learning capabilities have proven their value for better patient care. Sensors and AI systems are no longer only non-autonomous devices, such as those used in radiology or surgical robots; there are now novel tools with a certain degree of autonomy that aim to largely modulate the medical decision. Thus, there will be situations in which the doctor makes the decision and has the final say, and other cases in which the doctor might only apply the decision presented by the autonomous device. As these are two hugely different situations, they should not be treated the same way, and different liability rules should apply. Despite real interest in the promise of sensors and AI in medicine, doctors and patients remain reluctant to use them. One important reason is the lack of a clear definition of liability. Nobody wants to be at fault, or even prosecuted, for following the advice of an AI system, notably when it has not been perfectly adapted to a specific patient. Fears are present even with simple sensor and AI use, such as during telemedicine visits based on very useful, clinically pertinent sensors, where there is a risk of missing an important parameter, and, of course, when AI appears "intelligent", potentially replacing the doctors' judgment. This paper aims to provide an overview of the liability of the health professional in the context of the use of sensors and AI tools in remote healthcare, analyzing four regimes: the contract-based approach, the approach based on breach of the duty to inform, the fault-based approach, and the approach related to the good itself. We will also discuss future challenges and opportunities in the promising domain of sensor and AI use in medicine.