Bak Marieke, Madai Vince Istvan, Fritzsche Marie-Christine, Mayrhofer Michaela Th, McLennan Stuart
Department of Ethics, Law and Humanities, Amsterdam UMC, University of Amsterdam, Amsterdam, Netherlands.
QUEST Center for Responsible Research, Berlin Institute of Health (BIH), Charité Universitätsmedizin Berlin, Berlin, Germany.
Front Genet. 2022 Jun 13;13:929453. doi: 10.3389/fgene.2022.929453. eCollection 2022.
Artificial intelligence (AI) in healthcare promises to make healthcare safer, more accurate, and more cost-effective. Public and private actors have been investing significant amounts of resources into the field. However, to benefit from data-intensive medicine, particularly from AI technologies, one must first and foremost have access to data. It has been previously argued that the conventionally used "consent or anonymize approach" undermines data-intensive medicine, and worse, may ultimately harm patients. Yet, this is still a dominant approach in European countries and framed as an either-or choice. In this paper, we contrast the different data governance approaches in the EU and their advantages and disadvantages in the context of healthcare AI. We detail the ethical trade-offs inherent to data-intensive medicine, particularly the balancing of data privacy and data access, and the subsequent prioritization between AI and other effective health interventions. If countries wish to allocate resources to AI, they also need to make corresponding efforts to improve (secure) data access. We conclude that it is unethical to invest significant amounts of public funds into AI development whilst at the same time limiting data access through strict privacy measures, as this constitutes a waste of public resources. The "AI revolution" in healthcare can only realise its full potential if a fair, inclusive engagement process spells out the values underlying (trans)national data governance policies and their impact on AI development, and priorities are set accordingly.