Parde Connor J, Strehle Virginia E, Banerjee Vivekjyoti, Hu Ying, Cavazos Jacqueline G, Castillo Carlos D, O'Toole Alice J
School of Behavioral and Brain Sciences, The University of Texas at Dallas, USA.
University of Maryland Institute of Advanced Computer Studies, University of Maryland, USA.
ACM Trans Appl Percept. 2023 Jul;20(3). doi: 10.1145/3609224.
Deep convolutional neural networks (DCNNs) have achieved human-level accuracy in face identification (Phillips et al., 2018), though it is unclear how accurately they discriminate highly similar faces. Here, humans and a DCNN performed a challenging face-identity matching task that included identical twins. Participants (N = 87) viewed pairs of face images of three types: same-identity pairs, general-imposter pairs (different identities from similar demographic groups), and twin-imposter pairs (identical twin siblings). The task was to determine whether each pair showed the same person or different people. Identity comparisons were tested in three viewpoint-disparity conditions: frontal to frontal, frontal to 45° profile, and frontal to 90° profile. Accuracy for discriminating matched-identity pairs from twin-imposter pairs and general-imposter pairs was assessed in each viewpoint-disparity condition. Humans were more accurate for general-imposter pairs than for twin-imposter pairs, and accuracy declined as the viewpoint disparity between the images in a pair increased. A DCNN trained for face identification (Ranjan et al., 2018) was tested on the same image pairs presented to humans. Machine performance mirrored the pattern of human accuracy, but was at or above that of all humans in all but one condition. Human and machine similarity scores were compared across all image-pair types. This item-level analysis showed that human and machine similarity ratings correlated significantly in six of nine image-pair types (correlations ranged from 0.38 to 0.63), suggesting general accord between the perception of face similarity by humans and the DCNN. These findings also contribute to our understanding of DCNN performance in discriminating high-resemblance faces, demonstrate that the DCNN performs at or above the level of humans, and suggest a degree of parity between the features used by humans and the DCNN.
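As an illustration of the item-level analysis described in the abstract, the following is a minimal Python sketch of how machine and human similarity scores for one image-pair type could be compared. It assumes the DCNN produces an embedding vector per image, scores each pair with cosine similarity, and correlates those scores with mean human ratings using a Pearson correlation; these choices, and all function and variable names, are illustrative assumptions rather than the authors' reported pipeline.

```python
import numpy as np
from scipy.stats import pearsonr


def cosine_similarity(a, b):
    # Similarity between two DCNN face embeddings (assumed scoring function)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def item_level_correlation(embedding_pairs, human_ratings):
    """Correlate machine similarity scores with mean human ratings for one
    image-pair type (e.g., twin imposters in the frontal-to-frontal condition).

    embedding_pairs : list of (emb_a, emb_b) embedding tuples, one per image pair
    human_ratings   : mean human identity ratings for the same pairs, same order
    """
    machine_scores = np.array([cosine_similarity(a, b) for a, b in embedding_pairs])
    r, p = pearsonr(machine_scores, np.asarray(human_ratings))
    return r, p


# Hypothetical usage with random data: 20 image pairs, 512-dimensional embeddings
rng = np.random.default_rng(0)
pairs = [(rng.standard_normal(512), rng.standard_normal(512)) for _ in range(20)]
ratings = rng.uniform(1, 5, size=20)  # e.g., ratings on a 1-5 same/different scale
print(item_level_correlation(pairs, ratings))
```

In this sketch, a separate correlation would be computed for each of the nine image-pair types (three pair types crossed with three viewpoint-disparity conditions) to reproduce the kind of item-level comparison the abstract reports.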