Chindanuruks T, Jindanil T, Cumpim C, Sinpitaksakul P, Arunjaroensuk S, Mattheos N, Pimkhaokham A
Department of Oral and Maxillofacial Surgery, Faculty of Dentistry, Chulalongkorn University, Bangkok, Thailand; Oral and Maxillofacial Surgery and Digital Implant Surgery Research Unit, Faculty of Dentistry, Chulalongkorn University, Bangkok, Thailand.
Department of Radiology, Faculty of Dentistry, Chulalongkorn University, Bangkok, Thailand.
Int J Oral Maxillofac Surg. 2025 May;54(5):452-460. doi: 10.1016/j.ijom.2024.11.008. Epub 2024 Dec 4.
The aim of this study was to develop and validate a convolutional neural network (CNN) algorithm for the detection of impacted mandibular third molars in panoramic radiographs and the classification of the surgical extraction difficulty level. A dataset of 1730 panoramic radiographs was collected; 1300 images were allocated to training and 430 to testing. The performance of the model was evaluated using the confusion matrix for multiclass classification, and its scores were compared to those of two human experts. The area under the precision-recall curve of the YOLOv5 model ranged from 72% to 89% across the variables in the surgical difficulty index. The area under the receiver operating characteristic curve showed promising results for the YOLOv5 model in classifying third molars into three surgical difficulty levels (micro-average AUC 87%). Furthermore, the algorithm's scores demonstrated good agreement with those of the human experts. In conclusion, the YOLOv5 model has the potential to accurately detect and classify the position of mandibular third molars, with high performance for every criterion in radiographic images. The proposed model could serve as an aid in improving clinician performance and could be integrated into a screening system.
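The abstract reports a multiclass confusion matrix, per-criterion precision-recall AUC, a micro-average ROC AUC, and agreement with human experts. The sketch below is not the authors' code; it is a minimal illustration, using scikit-learn and toy data, of how such metrics could be computed for a three-level surgical difficulty classification. All variable names (y_true, y_pred, y_prob, expert_scores) are illustrative assumptions.

```python
# Illustrative sketch (not the published method): multiclass evaluation metrics
# of the kind described in the abstract, computed with scikit-learn on toy data.
import numpy as np
from sklearn.preprocessing import label_binarize
from sklearn.metrics import (confusion_matrix, roc_auc_score,
                             average_precision_score, cohen_kappa_score)

classes = [0, 1, 2]                      # three surgical difficulty levels
y_true = np.array([0, 2, 1, 2, 0, 1])    # ground-truth labels (toy data)
y_pred = np.array([0, 2, 1, 1, 0, 1])    # model's predicted labels
y_prob = np.array([[0.8, 0.1, 0.1],      # model's per-class probabilities
                   [0.1, 0.2, 0.7],
                   [0.2, 0.6, 0.2],
                   [0.1, 0.5, 0.4],
                   [0.7, 0.2, 0.1],
                   [0.2, 0.7, 0.1]])

# Multiclass confusion matrix
cm = confusion_matrix(y_true, y_pred, labels=classes)

# Micro-average ROC AUC across the three difficulty levels
y_true_bin = label_binarize(y_true, classes=classes)
micro_roc_auc = roc_auc_score(y_true_bin, y_prob, average="micro")

# Area under the precision-recall curve, per class and micro-averaged
pr_auc_per_class = average_precision_score(y_true_bin, y_prob, average=None)
micro_pr_auc = average_precision_score(y_true_bin, y_prob, average="micro")

# Agreement between the model and one human expert (weighted Cohen's kappa;
# the expert_scores array is a hypothetical rater, not study data)
expert_scores = np.array([0, 2, 1, 2, 0, 2])
kappa = cohen_kappa_score(y_pred, expert_scores, weights="quadratic")

print(cm)
print(micro_roc_auc, micro_pr_auc, pr_auc_per_class, kappa)
```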