Medical Image Analysis Lab, School of Computing Science, Simon Fraser University, Burnaby V5A 1S6, Canada.
Med Image Anal. 2022 Apr;77:102329. doi: 10.1016/j.media.2021.102329. Epub 2021 Dec 30.
We present an automated approach to detect and longitudinally track skin lesions on 3D total-body skin surface scans. The acquired 3D mesh of the subject is unwrapped to a 2D texture image, where a trained object detection model, Faster R-CNN, localizes the lesions within the 2D domain. These detected skin lesions are mapped back to the 3D surface of the subject and, for subjects imaged multiple times, we construct a graph-based matching procedure to longitudinally track lesions, which considers the anatomical correspondences between pairs of meshes, the geodesic proximity of corresponding lesions, and the inter-lesion geodesic distances. We evaluated the proposed approach on 3DBodyTex, a publicly available dataset composed of 3D scans imaging the coloured skin (textured meshes) of 200 human subjects. We manually annotated locations that appeared to the human eye to contain a pigmented skin lesion, and tracked a subset of lesions occurring on the same subject imaged in different poses. Compared to three human annotators, our results suggest that the trained Faster R-CNN detects lesions at a performance level similar to that of the human annotators. Our lesion tracking algorithm achieves an average matching accuracy of 88% on a set of detected corresponding pairs of prominent lesions of subjects imaged in different poses, and an average longitudinal accuracy of 71% when additional errors due to lesion detection are included. As there is currently no other large-scale publicly available dataset of 3D total-body skin lesions, we publicly release over 25,000 manual 3DBodyTex annotations, which we hope will further research on total-body skin lesion analysis.
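The core idea behind matching lesions across scans via inter-lesion geodesic distances can be illustrated with a minimal sketch. This is not the authors' implementation: it substitutes Euclidean distances on synthetic 3D coordinates for true on-surface geodesic distances, ignores the anatomical mesh correspondences the paper also uses, and solves the resulting assignment by brute force, which is only feasible for a handful of lesions. All function names and the toy coordinates are invented for illustration.

```python
# Illustrative sketch (hypothetical, not the paper's method): match lesions
# between two scans of the same subject by finding the assignment whose
# inter-lesion distance matrix best agrees across the scans.
from itertools import permutations

def pairwise_distances(points):
    """Euclidean distance matrix for 3D points (a stand-in for the
    on-surface geodesic distances used in the paper)."""
    n = len(points)
    d = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            d[i][j] = sum((a - b) ** 2
                          for a, b in zip(points[i], points[j])) ** 0.5
    return d

def match_lesions(points_a, points_b):
    """Return the permutation mapping scan-A lesions to scan-B lesions that
    minimizes the total discrepancy between inter-lesion distance matrices.
    Brute force over all permutations; only viable for small lesion counts."""
    da, db = pairwise_distances(points_a), pairwise_distances(points_b)
    n = len(points_a)
    best, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        cost = sum(abs(da[i][j] - db[perm[i]][perm[j]])
                   for i in range(n) for j in range(n))
        if cost < best_cost:
            best, best_cost = perm, cost
    return best

# Same three lesions; the second scan lists them in a shuffled order,
# mimicking a subject re-imaged in a different pose:
scan_a = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 2.0, 0.0)]
scan_b = [scan_a[2], scan_a[0], scan_a[1]]
print(match_lesions(scan_a, scan_b))  # (1, 2, 0): lesion i in A maps to that index in B
```

Because the inter-lesion distances are pose-invariant (geodesics along the skin surface change little when the body articulates), the matching recovers the correct correspondence even though the lesion order differs between scans. A practical implementation would replace the brute-force search with a polynomial-time assignment solver.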