Department of Computational Science and Engineering, Yonsei University, Seoul, Republic of Korea.
Phys Med Biol. 2020 Apr 23;65(8):085018. doi: 10.1088/1361-6560/ab7a71.
The annotation of three-dimensional (3D) cephalometric landmarks in 3D computerized tomography (CT) has become an essential part of cephalometric analysis, which is used for diagnosis, surgical planning, and treatment evaluation. Automating 3D landmarking with high precision remains challenging due to the limited availability of training data and the high computational burden. This paper addresses these challenges by proposing a hierarchical deep-learning method consisting of four stages: 1) a basic landmark annotator for 3D skull pose normalization, 2) a deep-learning-based coarse-to-fine landmark annotator on the midsagittal plane, 3) a low-dimensional representation of all landmarks using a variational autoencoder (VAE), and 4) a local-to-global landmark annotator. The VAE enables the learning of 3D morphological features from two-dimensional images, as well as similarity/dissimilarity representation learning over the concatenated vectors of cephalometric landmarks. The proposed method achieves an average 3D point-to-point error of 3.63 mm for 93 cephalometric landmarks using a small number of training CT datasets. Notably, the VAE captures variations of craniofacial structural characteristics.
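The VAE stage described above compresses the concatenated landmark coordinates (93 landmarks, so a 279-dimensional vector) into a low-dimensional latent code. A minimal sketch of that idea is below; the hidden and latent sizes, the tanh activations, and the randomly initialised weights are all hypothetical stand-ins, since the abstract does not specify the network architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

N_LANDMARKS = 93           # number of cephalometric landmarks (from the paper)
IN_DIM = N_LANDMARKS * 3   # concatenated (x, y, z) coordinates
HID, LATENT = 128, 16      # hypothetical layer sizes, not given in the abstract

# Randomly initialised weights stand in for trained parameters.
W_enc = rng.normal(0, 0.05, (IN_DIM, HID))
W_mu = rng.normal(0, 0.05, (HID, LATENT))
W_logvar = rng.normal(0, 0.05, (HID, LATENT))
W_dec1 = rng.normal(0, 0.05, (LATENT, HID))
W_dec2 = rng.normal(0, 0.05, (HID, IN_DIM))

def encode(x):
    # Map a landmark vector to the mean and log-variance of q(z|x).
    h = np.tanh(x @ W_enc)
    return h @ W_mu, h @ W_logvar

def reparameterize(mu, logvar):
    # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I).
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decode(z):
    # Map a latent code back to a reconstructed landmark vector.
    return np.tanh(z @ W_dec1) @ W_dec2

def elbo_terms(x):
    mu, logvar = encode(x)
    z = reparameterize(mu, logvar)
    x_hat = decode(z)
    recon = np.mean((x - x_hat) ** 2)                         # reconstruction error
    kl = -0.5 * np.mean(1 + logvar - mu**2 - np.exp(logvar))  # KL(q(z|x) || N(0, I))
    return x_hat, z, recon, kl

x = rng.normal(size=(IN_DIM,))   # one synthetic concatenated landmark vector
x_hat, z, recon, kl = elbo_terms(x)
print(z.shape, x_hat.shape)      # (16,) (279,)
```

Distances between latent codes `z` for different skulls are one way such a model can express the similarity/dissimilarity of craniofacial landmark configurations mentioned in the abstract.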