Health Law Institute, Faculty of Law, University of Alberta, Edmonton, AB, T6G 2H5, Canada.
BMC Med Ethics. 2021 Sep 15;22(1):122. doi: 10.1186/s12910-021-00687-3.
Advances in healthcare artificial intelligence (AI) are occurring rapidly, and there is growing discussion about how to manage its development. Many AI technologies end up owned and controlled by private entities. The nature of AI implementation could mean that such corporations, as well as clinics and public bodies, will have a greater-than-typical role in obtaining, using, and protecting patient health information. This raises privacy issues relating to implementation and data security.
The first set of concerns involves the access, use and control of patient data in private hands. Some recent public-private partnerships for implementing AI have resulted in poor protection of privacy. As such, there have been calls for greater systemic oversight of big data health research. Appropriate safeguards must be in place to maintain privacy and patient agency. Private custodians of data can be influenced by competing goals and should be structurally encouraged to ensure data protection and to deter alternative uses of the data. A second set of concerns relates to the external risk of privacy breaches through AI-driven methods. The ability to deidentify or anonymize patient health data may be compromised or even nullified by new algorithms that have successfully reidentified such data. This could increase the risk to patient data under private custodianship.
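The reidentification risk described above often arises through linkage attacks, in which quasi-identifiers (e.g., postal code, birth year, sex) retained in a "deidentified" dataset are joined against a public, named dataset. The sketch below uses entirely hypothetical data and field names to illustrate the general mechanism; it is not a method from the article itself.

```python
# Hypothetical illustration of a linkage re-identification attack:
# an "anonymized" health dataset still carries quasi-identifiers that
# can be matched against a public, named dataset.

anonymized_health_records = [
    {"zip": "12345", "birth_year": 1970, "sex": "F", "diagnosis": "diabetes"},
    {"zip": "12345", "birth_year": 1982, "sex": "M", "diagnosis": "asthma"},
]

public_named_records = [
    {"name": "A. Smith", "zip": "12345", "birth_year": 1970, "sex": "F"},
    {"name": "B. Jones", "zip": "67890", "birth_year": 1982, "sex": "M"},
]

def reidentify(health_rows, named_rows):
    """Link records whose quasi-identifiers match exactly one named record."""
    matches = []
    for h in health_rows:
        candidates = [
            n for n in named_rows
            if (n["zip"], n["birth_year"], n["sex"])
            == (h["zip"], h["birth_year"], h["sex"])
        ]
        if len(candidates) == 1:  # a unique match re-identifies the patient
            matches.append((candidates[0]["name"], h["diagnosis"]))
    return matches

print(reidentify(anonymized_health_records, public_named_records))
# The first health record matches exactly one named record, so the
# "anonymized" diagnosis is linked back to a name.
```

Defences such as k-anonymity work by ensuring no combination of quasi-identifiers is unique, which defeats the unique-match condition in this sketch.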
We are currently in a familiar situation in which regulation and oversight risk falling behind the technologies they govern. Regulation should emphasize patient agency and consent, and should encourage increasingly sophisticated methods of data anonymization and protection.