Kiseleva Anastasiya, Kotzinos Dimitris, De Hert Paul
LSTS Research Group (Law, Science, Technology and Society), Faculty of Law, Vrije Universiteit Brussel, Brussels, Belgium.
ETIS Research Lab, Faculty of Computer Science, CY Cergy Paris University, Cergy-Pontoise, France.
Front Artif Intell. 2022 May 30;5:879603. doi: 10.3389/frai.2022.879603. eCollection 2022.
The lack of transparency is one of the fundamental challenges of artificial intelligence (AI), but the concept of transparency might be even more opaque than AI itself. Researchers in different fields who attempt to provide solutions to improve AI's transparency articulate different but neighboring concepts that include, besides transparency, explainability and interpretability. Yet there is no common taxonomy either within one field (such as data science) or between different fields (law and data science). In certain areas like healthcare, the requirements of transparency are crucial since the decisions directly affect people's lives. In this paper, we suggest an interdisciplinary vision of how to tackle the issue of AI's transparency in healthcare, and we propose a single point of reference for both legal scholars and data scientists on transparency and related concepts. Based on an analysis of European Union (EU) legislation and the computer science literature, we submit that transparency shall be considered a "way of thinking" and an umbrella concept characterizing the process of AI's development and use. Transparency shall be achieved through a set of measures such as interpretability and explainability, communication, auditability, traceability, information provision, record-keeping, data governance and management, and documentation. This approach to transparency is general in nature, but transparency measures shall always be contextualized. By analyzing transparency in the healthcare context, we submit that it shall be viewed as a system of accountabilities of the involved subjects (AI developers, healthcare professionals, and patients) distributed across different layers (the insider, internal, and external layers, respectively). The transparency-related accountabilities shall be built into the existing accountability picture, which justifies the need to investigate the relevant legal frameworks. These frameworks correspond to different layers of the transparency system: the requirement of informed medical consent corresponds to the external layer of transparency, while the Medical Devices Framework is relevant to the insider and internal layers. We investigate these frameworks to inform AI developers of what is already expected of them with regard to transparency. We also identify gaps in the existing legislative frameworks concerning AI's transparency in healthcare and suggest solutions to fill them.