Qi Baiyan, Sasi Lekshmi, Khan Suhel, Luo Jordan, Chen Casey, Rahmani Keivan, Jahed Zeinab, Jokerst Jesse V
Aiiso Yufeng Li Family Department of Chemical and Nano Engineering, University of California San Diego, La Jolla, CA 92093, United States.
Herman Ostrow School of Dentistry, University of Southern California, Los Angeles, CA 90089, United States.
Dentomaxillofac Radiol. 2025 Mar 1;54(3):210-221. doi: 10.1093/dmfr/twaf001.
To identify landmarks in ultrasound periodontal images and automate the image-based measurements of gingival recession (iGR), gingival height (iGH), and alveolar bone level (iABL) using machine learning.
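The "image-based" (i) prefix signals that these metrics are computed as distances between landmarks detected in the ultrasound frame. A minimal sketch of that conversion is shown below, assuming each landmark is available as a pixel coordinate, the metrics are Euclidean distances between assumed landmark pairs, and a hypothetical pixel spacing of 30 µm; none of these choices are taken from the paper.

```python
# A minimal sketch of landmark-to-metric conversion, assuming each landmark (GM,
# CEJ, ABC) is available as a (row, col) pixel coordinate and that each metric is
# a Euclidean distance between a pair of landmarks scaled by the pixel spacing.
# The pairings and the 30 um/pixel spacing are illustrative assumptions, not the
# paper's exact definitions.
import math

PIXEL_SPACING_UM = 30.0  # hypothetical micrometers per pixel

def distance_um(p, q, spacing_um=PIXEL_SPACING_UM):
    """Euclidean distance between two pixel coordinates, in micrometers."""
    return math.hypot(p[0] - q[0], p[1] - q[1]) * spacing_um

# Hypothetical landmark coordinates predicted for one frame.
gm, cej, abc = (120, 84), (155, 90), (210, 101)
iGR = distance_um(gm, cej)    # gingival recession ~ GM-to-CEJ distance (assumed)
iGH = distance_um(gm, abc)    # gingival height ~ GM-to-ABC distance (assumed)
iABL = distance_um(cej, abc)  # alveolar bone level ~ CEJ-to-ABC distance (assumed)
print(f"iGR={iGR:.0f} um, iGH={iGH:.0f} um, iABL={iABL:.0f} um")
```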
We imaged 184 teeth from 29 human subjects. The dataset included 1580 frames for training and validating the U-Net convolutional neural network model and 250 frames from new teeth, not used in training, to test generalization performance. The predicted landmarks, including the tooth, gingiva, bone, gingival margin (GM), cementoenamel junction (CEJ), and alveolar bone crest (ABC), were compared to manual annotations. We further demonstrated automated measurements of the clinical metrics iGR, iGH, and iABL.
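For orientation, the sketch below shows a pared-down U-Net of the kind described, written in PyTorch; the channel widths, input size, number of classes, and loss are illustrative assumptions, not the authors' architecture or training setup.

```python
# A minimal sketch, assuming a PyTorch U-Net for multi-class segmentation of
# grayscale ultrasound frames (e.g., background, tooth, gingiva, bone); the
# channel widths, number of classes, and input size are illustrative assumptions.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, the standard U-Net building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )

class SmallUNet(nn.Module):
    def __init__(self, in_channels=1, num_classes=4):
        super().__init__()
        self.enc1 = conv_block(in_channels, 32)
        self.enc2 = conv_block(32, 64)
        self.enc3 = conv_block(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)
        self.dec2 = conv_block(128, 64)   # 64 (skip) + 64 (upsampled)
        self.up1 = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
        self.dec1 = conv_block(64, 32)    # 32 (skip) + 32 (upsampled)
        self.head = nn.Conv2d(32, num_classes, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)                 # full resolution
        e2 = self.enc2(self.pool(e1))     # 1/2 resolution
        e3 = self.enc3(self.pool(e2))     # 1/4 resolution (bottleneck)
        d2 = self.dec2(torch.cat([self.up2(e3), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)              # per-pixel class logits

if __name__ == "__main__":
    model = SmallUNet(in_channels=1, num_classes=4)
    frames = torch.randn(2, 1, 256, 256)          # batch of grayscale ultrasound frames
    logits = model(frames)                        # (2, num_classes, 256, 256)
    labels = torch.randint(0, 4, (2, 256, 256))   # manual annotations as class maps
    loss = nn.CrossEntropyLoss()(logits, labels)  # standard segmentation objective
    print(logits.shape, float(loss))
```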
Over 98% of predicted GM, CEJ, and ABC distances were within 200 µm of the manual annotations. Bland-Altman analysis revealed biases (machine learning vs ground truth) of -0.1 µm, -37.6 µm, and -40.9 µm, with 95% limits of agreement of [-281.3, 281.0] µm, [-203.1, 127.9] µm, and [-297.6, 215.8] µm for iGR, iGH, and iABL, respectively, when compared to manual annotations. On the test dataset, the biases were 167.5 µm, 40.1 µm, and 78.7 µm, with 95% CIs of [-1175, 1510] µm, [-910.3, 990.4] µm, and [-1954, 1796] µm for iGR, iGH, and iABL, respectively.
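For reference, the Bland-Altman summary quoted above reduces to a mean difference (bias) and bias ± 1.96 standard deviations of the paired differences (the 95% limits of agreement). A minimal sketch with hypothetical paired measurements:

```python
# A minimal sketch of the Bland-Altman summary: bias is the mean difference
# between automated and manual measurements, and the 95% limits of agreement
# are bias +/- 1.96 SD of the differences. The arrays below are hypothetical
# placeholders, not the study data.
import numpy as np

def bland_altman(auto_um, manual_um):
    """Return (bias, lower LoA, upper LoA) in the same units as the inputs."""
    diff = np.asarray(auto_um, dtype=float) - np.asarray(manual_um, dtype=float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired iGR measurements in micrometers.
auto = [1020.0, 980.0, 1500.0, 760.0, 1210.0]
manual = [1000.0, 1005.0, 1480.0, 790.0, 1190.0]
bias, lo, hi = bland_altman(auto, manual)
print(f"bias = {bias:.1f} um, 95% LoA = [{lo:.1f}, {hi:.1f}] um")
```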
The proposed machine learning model demonstrates robust prediction performance and has the potential to improve the efficiency of clinical periodontal diagnosis by automating landmark identification and the measurement of clinical metrics.