Denise Almeida, Konstantin Shmarko, Elizabeth Lomas
Department of Information Studies, UCL, London, UK.
Department of Economics, UCL, London, UK.
AI Ethics. 2022;2(3):377-387. doi: 10.1007/s43681-021-00077-w. Epub 2021 Jul 29.
The rapid development of facial recognition technologies (FRT) has forced complex ethical choices in balancing individual privacy rights against societal safety. Within this space, the increasingly commonplace use of these technologies by law enforcement agencies offers a particular lens for probing this complex landscape, its application, and the acceptable extent of citizen surveillance. This analysis focuses on the regulatory contexts and recent case law in the United States (USA), United Kingdom (UK), and European Union (EU) concerning the use and misuse of FRT by law enforcement agencies. The USA is one of the main global regions in which the technology is evolving most rapidly, yet it has only a patchwork of legislation with comparatively little emphasis on data protection and privacy. Within the EU and the UK, by contrast, there has been a critical focus on developing accountability requirements, particularly in the context of the EU's General Data Protection Regulation (GDPR) and its legal emphasis on Privacy by Design (PbD). Globally, however, there is no standardised human rights framework or set of regulatory requirements that can be readily applied to FRT rollout. This article offers a discursive examination of the ethical and regulatory dimensions at play in these spaces, including data protection and human rights frameworks. It concludes that data protection impact assessments (DPIA) and human rights impact assessments, together with greater transparency, regulation, audit, and explanation of FRT use in individual contexts, would improve FRT deployments. In addition, it sets out ten critical questions that, it suggests, need to be answered for the successful development and deployment of FRT and AI more broadly, and which should be addressed by lawmakers, policy makers, AI developers, and adopters.