School of Medicine, University of Dundee, Dundee, UK
Dundee Law School, School of Humanities, Social Sciences and Law, University of Dundee, Dundee, UK
J Med Ethics. 2023 Nov 23;49(12):838-843. doi: 10.1136/jme-2022-108696.
The digitalisation of health and the use of health data in artificial intelligence and machine learning (ML), including for applications that will in turn be used in healthcare, are major themes permeating current UK and other countries' healthcare systems and policies. Obtaining rich and representative data is key to robust ML development, and UK health data sets are particularly attractive sources for this. However, ensuring that such research and development is in the public interest, produces public benefit and preserves privacy presents key challenges. Trusted research environments (TREs) are positioned as a way of balancing the diverging interests in healthcare data research with privacy and public benefit. Using TRE data to train ML models presents various challenges to the balance previously struck between these societal interests, challenges which have hitherto not been discussed in the literature. These include the possibility of personal data being disclosed in ML models, the dynamic nature of ML models and how public benefit may be (re)conceived in this context. For ML research to be facilitated using UK health data, TREs and others involved in the UK health data policy ecosystem need to be aware of these issues and work to address them in order to continue to ensure a 'safe' health and care data environment that truly serves the public.