Department of Orthodontics, School of Stomatology, Capital Medical University, Beijing, China.
Department of Engineering Physics, Tsinghua University, Beijing, China.
Orthod Craniofac Res. 2024 Dec;27(6):893-902. doi: 10.1111/ocr.12830. Epub 2024 Jul 5.
To establish an automatic deep learning-based soft-tissue analysis model that performs landmark detection and measurement calculation on orthodontic facial photographs, enabling a more comprehensive quantitative evaluation of soft tissues.
A total of 578 frontal photographs and 450 lateral photographs of orthodontic patients were collected to construct the datasets. All images were manually annotated by two orthodontists with 43 frontal-image landmarks and 17 lateral-image landmarks. Automatic landmark detection models were established, consisting of a high-resolution network, a feature fusion module based on depthwise separable convolution, and a prediction module based on pixel shuffle. Ten measurements for frontal images and eight measurements for lateral images were defined. Separate test sets were used to evaluate the performance of the frontal and lateral models. The mean radial error of the landmarks and the measurement errors were calculated and statistically analysed to evaluate reliability.
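As a rough illustration of the architecture described above, the sketch below combines a depthwise separable convolution block for feature fusion with a pixel-shuffle head that upsamples fused features into per-landmark heatmaps. It is a minimal sketch only: the abstract does not give the paper's layer sizes, channel counts or training details, so the module names, dimensions and the 43-landmark output used here are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch; all channel counts and module names are assumptions.
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise 3x3 conv followed by a pointwise 1x1 conv (feature fusion block)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

class HeatmapHead(nn.Module):
    """Pixel-shuffle head that upsamples fused features into per-landmark heatmaps."""
    def __init__(self, in_ch, num_landmarks, upscale=4):
        super().__init__()
        # Expand channels so that PixelShuffle(upscale) yields `num_landmarks` maps.
        self.expand = nn.Conv2d(in_ch, num_landmarks * upscale ** 2, kernel_size=3, padding=1)
        self.shuffle = nn.PixelShuffle(upscale)

    def forward(self, x):
        return self.shuffle(self.expand(x))  # (B, num_landmarks, H*upscale, W*upscale)

# Example: fuse a placeholder backbone feature map and predict 43 frontal-image landmarks.
features = torch.randn(1, 256, 64, 64)              # stand-in for backbone output
fused = DepthwiseSeparableConv(256, 128)(features)
heatmaps = HeatmapHead(128, num_landmarks=43)(fused)
print(heatmaps.shape)                                # torch.Size([1, 43, 256, 256])
```

Landmark coordinates would then be read off as the peak location of each heatmap channel, which is a common design choice for keypoint detection with high-resolution backbones.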
The mean radial error was 14.44 ± 17.20 pixels for the landmarks in the frontal images and 13.48 ± 17.12 pixels for the landmarks in the lateral images. There was no statistically significant difference between the model-predicted and manually annotated measurements except for the midfacial-lower facial height index. A total of 14 measurements showed high consistency.
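For clarity, the pixel errors reported above can be understood as mean radial error in its standard sense, i.e. the mean Euclidean distance between predicted and manually annotated landmark positions. The snippet below is a hedged sketch of that computation under this assumption; the coordinate arrays are placeholder values, not data from the study.

```python
# Standard mean radial error (MRE): mean Euclidean distance in pixels between
# predicted and manually annotated landmarks. Values below are illustrative only.
import numpy as np

def mean_radial_error(pred, truth):
    """pred, truth: (N, 2) arrays of landmark (x, y) coordinates in pixels."""
    radial = np.linalg.norm(pred - truth, axis=1)   # per-landmark Euclidean distance
    return radial.mean(), radial.std()

pred = np.array([[120.0, 84.0], [301.5, 96.2]])     # model predictions (example values)
truth = np.array([[118.0, 80.0], [300.0, 99.0]])    # manual annotations (example values)
mre, sd = mean_radial_error(pred, truth)
print(f"MRE = {mre:.2f} ± {sd:.2f} px")
```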
Based on deep learning, we established automatic soft-tissue analysis models for orthodontic facial photographs that can automatically detect 43 frontal-image landmarks and 17 lateral-image landmarks while performing comprehensive soft-tissue measurements. The models can assist orthodontists in efficient and accurate quantitative soft-tissue evaluation for clinical application.