Ketenci Çay Fatmanur, Yeşil Çağrı, Çay Oktay, Yılmaz Büşra Gül, Özçini Fatma Hasene, İlgüy Dilhan
Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Yeditepe University, Istanbul, Turkey.
Faculty of Dentistry, Yeditepe University, Caddebostan, Bağdat St. Nu:238, Kadıköy, İstanbul, 34728, Turkey.
Clin Oral Investig. 2025 Jan 31;29(2):101. doi: 10.1007/s00784-025-06156-0.
This study aimed to apply the DeepLabv3+ model and compare it with the U-Net model for detecting and segmenting apical lesions on panoramic radiographs.
A total of 260 panoramic images containing apical lesions in different regions were collected and randomly divided into training and test datasets. All images were manually annotated for apical lesions with the Computer Vision Annotation Tool software by two independent dental radiologists and a master reviewer. The DeepLabv3+ model, one of the state-of-the-art deep semantic segmentation models, was implemented in the Python programming language with the TensorFlow library and applied to the prepared datasets. The model was compared with the U-Net model, which has been applied to apical lesion detection and other medical image segmentation problems in the literature.
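For readers unfamiliar with the architecture, the sketch below outlines a DeepLabv3+-style binary segmentation model in TensorFlow/Keras. It is illustrative only: the abstract does not specify the backbone, input size, or layer taps, so the MobileNetV2 encoder, the 512x512 input, and the named feature layers are assumptions rather than the authors' configuration.

```python
# Minimal DeepLabv3+-style sketch (assumed setup, not the paper's code):
# MobileNetV2 backbone, 512x512 input, single-channel lesion mask output.
import tensorflow as tf
from tensorflow.keras import layers

def aspp(x, filters=256):
    # Atrous Spatial Pyramid Pooling: parallel dilated convolutions plus
    # global image pooling, the defining component of DeepLabv3+.
    branches = [layers.Conv2D(filters, 1, padding="same", activation="relu")(x)]
    for rate in (6, 12, 18):
        branches.append(layers.Conv2D(filters, 3, padding="same",
                                      dilation_rate=rate, activation="relu")(x))
    pool = layers.GlobalAveragePooling2D(keepdims=True)(x)
    pool = layers.Conv2D(filters, 1, activation="relu")(pool)
    pool = layers.UpSampling2D(size=(x.shape[1], x.shape[2]),
                               interpolation="bilinear")(pool)
    x = layers.Concatenate()(branches + [pool])
    return layers.Conv2D(filters, 1, padding="same", activation="relu")(x)

def deeplabv3_plus(input_shape=(512, 512, 3)):
    backbone = tf.keras.applications.MobileNetV2(
        input_shape=input_shape, include_top=False)
    high = backbone.get_layer("block_13_expand_relu").output  # stride 16
    low = backbone.get_layer("block_3_expand_relu").output    # stride 4
    x = aspp(high)
    x = layers.UpSampling2D(4, interpolation="bilinear")(x)   # back to stride 4
    low = layers.Conv2D(48, 1, padding="same", activation="relu")(low)
    x = layers.Concatenate()([x, low])                        # decoder fusion
    x = layers.Conv2D(256, 3, padding="same", activation="relu")(x)
    x = layers.UpSampling2D(4, interpolation="bilinear")(x)   # full resolution
    # Sigmoid output: per-pixel lesion-vs-background probability.
    out = layers.Conv2D(1, 1, activation="sigmoid")(x)
    return tf.keras.Model(backbone.input, out)

model = deeplabv3_plus()
model.compile(optimizer="adam", loss="binary_crossentropy")
```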
The DeepLabv3+ and U-Net models were applied to the same datasets with the same hyperparameters. The AUC and recall of the DeepLabv3+ model were 29.96% and 61.06% better, respectively, than those of the U-Net model. However, the U-Net model achieved 69.17% and 25.55% better precision and F1-score, respectively, than the DeepLabv3+ model. The difference in the IoU results of the two models was not statistically significant.
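As a reference for how such pixel-wise scores are typically obtained, the snippet below computes precision, recall, F1, IoU, and AUC from predicted probability maps. The 0.5 threshold and the pooling of all test pixels are illustrative assumptions, not necessarily the paper's exact evaluation protocol.

```python
# Standard pixel-wise segmentation metrics (assumed 0.5 threshold and
# micro-averaging over all pixels; the paper's protocol may differ).
import numpy as np
from sklearn.metrics import roc_auc_score

def segmentation_metrics(y_true, y_prob, threshold=0.5):
    """y_true: binary ground-truth masks; y_prob: predicted probabilities."""
    t = y_true.astype(bool).ravel()
    p = y_prob.ravel() >= threshold
    tp = np.sum(t & p)                        # lesion pixels correctly found
    fp = np.sum(~t & p)                       # background flagged as lesion
    fn = np.sum(t & ~p)                       # lesion pixels missed
    precision = tp / (tp + fp + 1e-8)
    recall = tp / (tp + fn + 1e-8)            # sensitivity to lesion pixels
    f1 = 2 * precision * recall / (precision + recall + 1e-8)
    iou = tp / (tp + fp + fn + 1e-8)          # intersection over union
    auc = roc_auc_score(t, y_prob.ravel())    # threshold-free ranking quality
    return dict(precision=precision, recall=recall, f1=f1, iou=iou, auc=auc)
```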
This paper comprehensively evaluated the DeepLabv3+ model and compared it with the U-Net model. Our experimental findings indicate that DeepLabv3+ outperforms the U-Net model by a substantial margin on both the AUC and recall metrics. Based on these results, we encourage researchers to use and improve the DeepLabv3+ model for detecting apical lesions.
The DeepLabv3+ model has the potential to improve clinical diagnosis and treatment planning and to save time in the clinic.