Department of Oral and Maxillofacial Radiology, Beijing Stomatology Hospital, School of Stomatology, Capital Medical University, Beijing, China.
Department of Oral and Maxillofacial Radiology, Peking University School and Hospital of Stomatology & National Center of Stomatology & National Clinical Research Center for Oral Diseases & National Engineering Laboratory for Digital and Material Technology of Stomatology & Beijing Key Laboratory of Digital Stomatology & Research Center of Engineering and Technology for Computerized Dentistry Ministry of Health & NMPA Key Laboratory for Dental Materials, Beijing, China.
Dentomaxillofac Radiol. 2023 Feb;52(3):20220345. doi: 10.1259/dmfr.20220345.
This study aimed to evaluate the performance of ResNet models in the detection of vertical root fractures (VRF) on cone-beam computed tomography (CBCT) images.
A CBCT image dataset consisting of 28 teeth (14 intact and 14 with VRF; 1641 slices) from 14 patients, and another dataset containing 60 teeth (30 intact and 30 with VRF; 3665 slices) from an in vitro VRF model, were used to establish convolutional neural network (CNN) models for VRF detection. ResNet, a widely used CNN architecture, was fine-tuned at different depths to detect VRF. The sensitivity, specificity, accuracy, positive predictive value (PPV), negative predictive value (NPV), and area under the receiver operating characteristic curve (AUC) for the VRF slices classified by the CNN on the test set were compared. Two oral and maxillofacial radiologists independently reviewed all CBCT images of the test set, and intraclass correlation coefficients (ICCs) were calculated to assess interobserver agreement between the two radiologists.
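The abstract does not state how these classification metrics were computed; as a point of reference, the standard definitions from a 2×2 confusion matrix can be sketched in plain Python (the function name and example labels below are illustrative, not taken from the study):

```python
def binary_metrics(y_true, y_pred):
    """Compute slice-level classification metrics for a binary task
    (1 = VRF, 0 = intact) from paired true and predicted labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "sensitivity": tp / (tp + fn),   # true-positive rate (recall)
        "specificity": tn / (tn + fp),   # true-negative rate
        "accuracy": (tp + tn) / len(y_true),
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# Toy example (not study data): 3 VRF and 3 intact slices
metrics = binary_metrics([1, 1, 1, 0, 0, 0], [1, 1, 0, 0, 0, 1])
```

The same counts would be tallied over all test-set slices in practice; the formulas themselves are standard.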
The AUCs of the models trained on the patient data were 0.827 (ResNet-18), 0.929 (ResNet-50), and 0.882 (ResNet-101). The AUCs of the models trained on the mixed data improved to 0.927 (ResNet-18), 0.936 (ResNet-50), and 0.893 (ResNet-101). The maximum AUCs, both achieved by ResNet-50, were 0.929 (95% CI: 0.908-0.950) on the patient data and 0.936 (95% CI: 0.924-0.948) on the mixed data, which is comparable to the AUCs obtained by the two oral and maxillofacial radiologists: 0.937 and 0.950 on the patient data, and 0.915 and 0.935 on the mixed data, respectively.
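The AUC values reported above can be estimated without explicitly tracing the ROC curve, using the rank-based Mann-Whitney statistic (the probability that a randomly chosen positive slice scores higher than a randomly chosen negative one). A minimal sketch, with an illustrative function name and toy scores not drawn from the study:

```python
def auc_mann_whitney(y_true, scores):
    """Estimate ROC AUC as the fraction of positive/negative score
    pairs in which the positive scores higher (ties count as 0.5)."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: 2 VRF slices and 2 intact slices with model scores
auc = auc_mann_whitney([1, 1, 0, 0], [0.9, 0.4, 0.6, 0.2])
```

Confidence intervals such as those quoted in the abstract are typically obtained by bootstrapping this statistic or by DeLong's method; the abstract does not say which was used here.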
Deep-learning models showed high accuracy in detecting VRF on CBCT images. The data obtained from the in vitro VRF model enlarge the training set, which benefits the training of deep-learning models.