

DeepLabv3+ method for detecting and segmenting apical lesions on panoramic radiography.

Author Information

Ketenci Çay Fatmanur, Yeşil Çağrı, Çay Oktay, Yılmaz Büşra Gül, Özçini Fatma Hasene, İlgüy Dilhan

Affiliations

Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Yeditepe University, Istanbul, Turkey.

Faculty of Dentistry, Yeditepe University, Caddebostan, Bağdat St. Nu:238, Kadıköy, İstanbul, 34728, Turkey.

Publication Information

Clin Oral Investig. 2025 Jan 31;29(2):101. doi: 10.1007/s00784-025-06156-0.

Abstract

OBJECTIVE

This study aimed to apply the DeepLabv3+ model and compare it with the U-Net model for detecting and segmenting apical lesions on panoramic radiographs.

METHODS

A total of 260 panoramic images containing apical lesions in different regions were collected and randomly divided into training and test datasets. All images were manually annotated for apical lesions with the Computer Vision Annotation Tool (CVAT) software by two independent dental radiologists and a master reviewer. The DeepLabv3+ model, one of the state-of-the-art deep semantic segmentation models, was implemented in the Python programming language with the TensorFlow library and applied to the prepared datasets. The model was compared with the U-Net model, which has been applied to apical lesion segmentation and other medical image segmentation problems in the literature.
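As an illustration of the kind of model described above, the following is a minimal, self-contained sketch of the DeepLabv3+ idea (an encoder, an atrous spatial pyramid pooling block, and a light decoder that fuses low-level features) written in TensorFlow/Keras. It is not the authors' implementation: the small encoder, the 512x512 single-channel input size, the atrous rates, and the loss/metric choices are illustrative assumptions.

    # Minimal DeepLabv3+-style segmentation sketch (TensorFlow/Keras).
    # NOT the study's code; sizes, depths and atrous rates are assumptions.
    import tensorflow as tf
    from tensorflow.keras import layers

    def conv_bn_relu(x, filters, kernel_size=3, dilation_rate=1):
        """Conv (optionally atrous) -> BatchNorm -> ReLU."""
        x = layers.Conv2D(filters, kernel_size, padding="same",
                          dilation_rate=dilation_rate, use_bias=False)(x)
        x = layers.BatchNormalization()(x)
        return layers.ReLU()(x)

    def aspp(x, filters=256, rates=(6, 12, 18)):
        """Atrous Spatial Pyramid Pooling: parallel atrous convs + image pooling."""
        branches = [conv_bn_relu(x, filters, kernel_size=1)]
        branches += [conv_bn_relu(x, filters, dilation_rate=r) for r in rates]
        pool = layers.GlobalAveragePooling2D(keepdims=True)(x)
        pool = conv_bn_relu(pool, filters, kernel_size=1)
        pool = layers.UpSampling2D(size=(x.shape[1], x.shape[2]),
                                   interpolation="bilinear")(pool)
        x = layers.Concatenate()(branches + [pool])
        return conv_bn_relu(x, filters, kernel_size=1)

    def build_deeplabv3plus_like(input_shape=(512, 512, 1)):
        inputs = layers.Input(shape=input_shape)
        # Tiny encoder: low-level features at 1/4, deep features at 1/16 resolution.
        x = conv_bn_relu(inputs, 32)
        x = layers.MaxPooling2D()(x)                 # 1/2
        x = conv_bn_relu(x, 64)
        x = layers.MaxPooling2D()(x)                 # 1/4
        low_level = conv_bn_relu(x, 64)              # low-level skip features
        x = layers.MaxPooling2D()(low_level)         # 1/8
        x = conv_bn_relu(x, 128)
        x = layers.MaxPooling2D()(x)                 # 1/16
        x = conv_bn_relu(x, 256)
        # Decoder: ASPP context, upsample, fuse with reduced low-level features.
        x = aspp(x)
        x = layers.UpSampling2D(4, interpolation="bilinear")(x)
        low = conv_bn_relu(low_level, 48, kernel_size=1)
        x = layers.Concatenate()([x, low])
        x = conv_bn_relu(x, 256)
        x = conv_bn_relu(x, 256)
        x = layers.UpSampling2D(4, interpolation="bilinear")(x)
        outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)  # lesion probability map
        return tf.keras.Model(inputs, outputs)

    model = build_deeplabv3plus_like()
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.Recall(),
                           tf.keras.metrics.Precision(),
                           tf.keras.metrics.AUC()])

In practice the simple encoder above would be replaced by a stronger pretrained backbone, which is the usual DeepLabv3+ design choice; the sketch only shows how the ASPP and decoder stages fit together for a binary lesion mask.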

RESULTS

The DeepLabv3+ and U-Net models were applied to the same datasets with the same hyperparameters. The AUC and recall of DeepLabv3+ were 29.96% and 61.06% better, respectively, than those of the U-Net model. However, the U-Net model achieved 69.17% better precision and a 25.55% better F1-score than the DeepLabv3+ model. The difference in the IoU results of the two models was not statistically significant.
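For reference, the pixel-wise metrics contrasted above (precision, recall, F1, IoU) can be computed from a predicted probability map and a ground-truth lesion mask as in the short NumPy sketch below. The 0.5 threshold and the toy arrays are assumptions for illustration, and AUC would additionally require sweeping thresholds over the probability map rather than a single binarization.

    # Pixel-wise segmentation metrics from a probability map and a binary mask.
    # Illustrative only; threshold and example arrays are assumptions.
    import numpy as np

    def segmentation_metrics(prob_map, gt_mask, threshold=0.5, eps=1e-7):
        """Return precision, recall, F1 and IoU for one binary segmentation."""
        pred = (prob_map >= threshold).astype(np.uint8)
        gt = gt_mask.astype(np.uint8)
        tp = np.sum((pred == 1) & (gt == 1))   # correctly predicted lesion pixels
        fp = np.sum((pred == 1) & (gt == 0))   # background predicted as lesion
        fn = np.sum((pred == 0) & (gt == 1))   # missed lesion pixels
        precision = tp / (tp + fp + eps)
        recall = tp / (tp + fn + eps)
        f1 = 2 * precision * recall / (precision + recall + eps)
        iou = tp / (tp + fp + fn + eps)
        return {"precision": precision, "recall": recall, "f1": f1, "iou": iou}

    # Toy example: a 4x4 prediction against a 4x4 ground-truth mask.
    gt = np.zeros((4, 4)); gt[1:3, 1:3] = 1
    prob = np.zeros((4, 4)); prob[1:3, 1:4] = 0.9
    print(segmentation_metrics(prob, gt))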

CONCLUSIONS

This paper comprehensively evaluated the DeepLabv3+ model and compared it with the U-Net model. Our experimental findings indicate that DeepLabv3+ outperforms the U-Net model by a substantial margin on both the AUC and recall metrics. Based on these results, we encourage researchers to use and improve the DeepLabv3+ model for detecting apical lesions.

CLINICAL RELEVANCE

The DeepLabv3+ model has the potential to improve clinical diagnosis and treatment planning and to save time in the clinic.


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/761f/11785705/4483b0caf1f9/784_2025_6156_Fig1_HTML.jpg
