Department of Neurosurgery, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan.
Department of Medical Information Engineering, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan.
Neuroinformatics. 2023 Jul;21(3):575-587. doi: 10.1007/s12021-023-09631-9. Epub 2023 May 25.
Head CT, which includes the facial region, can visualize the face through 3D reconstruction, raising the concern that individuals may be identified. We developed a new de-identification technique that distorts the faces in head CT images. The head CT images to be distorted were labeled "original images," and the images serving as deformation targets were labeled "reference images." Reconstructed face models were created for both, with 400 control points placed on the facial surfaces. All voxel positions in the original image were moved and deformed according to the deformation vectors required to move each control point to its corresponding control point on the reference image. Three face detection and identification programs were used to determine face detection rates and match confidence scores. Equivalence tests of intracranial volume were performed before and after deformation, and correlation coefficients between intracranial pixel-value histograms were calculated. The output accuracy of a deep learning model for intracranial segmentation was assessed with the Dice Similarity Coefficient before and after deformation. The face detection rate was 100%, and match confidence scores were < 90. Equivalence testing of intracranial volume showed statistical equivalence before and after deformation. The median correlation coefficient between intracranial pixel-value histograms before and after deformation was 0.9965, indicating high similarity. Dice Similarity Coefficient values for the original and deformed images were statistically equivalent. We developed a technique that de-identifies head CT images while maintaining the accuracy of deep learning models: the images are deformed to prevent face identification, with minimal change to the original information.
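The deformation step can be pictured as scattered-data interpolation of the control-point displacements into a dense warp field that is then applied to every voxel. The Python sketch below assumes a thin-plate-spline interpolator and backward warping with linear resampling; the 400 corresponding control points come from the abstract, but the kernel choice, the dense-grid evaluation, and every function and variable name are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch of control-point-driven deformation: displacement vectors
# defined at sparse facial control points are extended to a dense field with
# a thin-plate-spline interpolator, and the CT volume is resampled along the
# warped coordinates (backward mapping). All names and parameters here are
# illustrative assumptions, not the published code.
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.ndimage import map_coordinates

def deform_volume(volume: np.ndarray,
                  original_points: np.ndarray,    # (400, 3) control points on the original face
                  reference_points: np.ndarray    # (400, 3) corresponding points on the reference face
                  ) -> np.ndarray:
    """Warp `volume` so that its facial control points move to the reference positions."""
    # Backward mapping: for each output voxel, estimate where to sample the input volume.
    displacement = original_points - reference_points
    field = RBFInterpolator(reference_points, displacement, kernel="thin_plate_spline")
    # Evaluating on every voxel is shown for clarity; a real implementation would
    # restrict this to the facial region or use a coarser grid for tractability.
    grid = np.indices(volume.shape).reshape(3, -1).T.astype(float)   # (N, 3) output voxel coords
    sample_coords = grid + field(grid)                                # input coords to read from
    warped = map_coordinates(volume, sample_coords.T, order=1, mode="nearest")
    return warped.reshape(volume.shape)
```

The evaluation metrics reported after deformation (a median histogram correlation of 0.9965 and statistically equivalent Dice Similarity Coefficients) can be computed as sketched below. The bin count, Hounsfield-unit range, and function names are assumptions for illustration only.

```python
# Sketch of the two similarity measures named in the abstract: the Dice
# Similarity Coefficient between binary intracranial masks, and the Pearson
# correlation between intracranial pixel-value histograms. Bin settings and
# the HU range are assumed, not taken from the paper.
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice Similarity Coefficient of two boolean segmentation masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * intersection / denom if denom else 1.0

def histogram_correlation(values_a: np.ndarray, values_b: np.ndarray,
                          bins: int = 256, value_range=(-1024, 3071)) -> float:
    """Pearson correlation between pixel-value histograms of two intracranial regions."""
    hist_a, _ = np.histogram(values_a, bins=bins, range=value_range)
    hist_b, _ = np.histogram(values_b, bins=bins, range=value_range)
    return float(np.corrcoef(hist_a, hist_b)[0, 1])
```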
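In practice, both metrics would be computed once on the original intracranial region and once on the deformed one, and equivalence testing (e.g., two one-sided tests) would then be applied to the paired values; the specific statistical procedure used by the authors is not detailed in the abstract.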