
Automated detection and classification of osteolytic lesions in panoramic radiographs using CNNs and vision transformers.

Authors

van Nistelrooij Niels, Ghanad Iman, Bigdeli Amir K, Thiem Daniel G E, von See Constantin, Rendenbach Carsten, Maistreli Ira, Xi Tong, Bergé Stefaan, Heiland Max, Vinayahalingam Shankeeth, Gaudin Robert

Affiliations

Department of Oral and Maxillofacial Surgery, Radboud University Medical Center, P.O. Box 9101, Nijmegen, 6500 HB, the Netherlands.

Department of Oral and Maxillofacial Surgery, Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt Universität zu Berlin, Augustenburger Platz 1, Berlin, 13353, Germany.

Publication information

BMC Oral Health. 2025 Jun 21;25(1):950. doi: 10.1186/s12903-025-06209-6.

Abstract

BACKGROUND

Diseases underlying osteolytic lesions of the jaws are characterized by the resorption of bone tissue and are often asymptomatic, which delays their diagnosis. Well-defined lesions (benign cyst-like lesions) and ill-defined lesions (osteomyelitis or malignancy) can be detected early in a panoramic radiograph (PR) by an experienced examiner, but most dentists lack the appropriate training. To support dentists, this study aimed to develop and evaluate deep learning models for the detection of osteolytic lesions in PRs.

METHODS

A dataset of 676 PRs (165 well-defined, 181 ill-defined, 330 control) was collected from the Department of Oral and Maxillofacial Surgery at Charité Berlin, Germany. The osteolytic lesions were segmented pixel-wise and labeled as well-defined or ill-defined. Four instance-segmentation architectures (Mask R-CNN with a Swin-Tiny or ResNet-50 backbone, Mask DINO, and YOLOv5) were trained with five-fold cross-validation. Their effectiveness was evaluated with sensitivity, specificity, F1-score, and AUC, and failure cases were reviewed.
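As a minimal illustration of the evaluation protocol described above (not the authors' code), the sketch below shows the per-class metrics named in the Methods computed from confusion-matrix counts, along with a plain five-fold index split over the 676 radiographs; all counts and the split scheme are assumptions for demonstration only.

```python
# Illustrative sketch only: per-class detection metrics from
# confusion-matrix counts, plus a simple contiguous five-fold split.

def five_fold_splits(n_samples, k=5):
    """Yield (train_indices, val_indices) for k-fold cross-validation."""
    indices = list(range(n_samples))
    fold_size = n_samples // k
    for fold in range(k):
        start = fold * fold_size
        stop = (fold + 1) * fold_size if fold < k - 1 else n_samples
        val = indices[start:stop]
        train = indices[:start] + indices[stop:]
        yield train, val

def sensitivity(tp, fn):
    """True positive rate: lesions correctly detected."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True negative rate: controls correctly left unflagged."""
    return tn / (tn + fp)

def f1_score(tp, fp, fn):
    """Harmonic mean of precision and recall for one lesion class."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

For example, with hypothetical counts tp=8, fp=2, fn=2 the F1-score is 0.8; in practice these counts would come from matching predicted lesion masks to the pixel-wise annotations at a chosen overlap threshold.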

RESULTS

Mask R-CNN with a Swin-Tiny backbone was the most effective (well-defined F1 = 0.784, AUC = 0.881; ill-defined F1 = 0.904, AUC = 0.971), and the architectures that included vision transformer components were more effective than those without. Model errors were observed around the maxillary sinus, at tooth extraction sites, and in radiolucent bands.

CONCLUSIONS

Promising deep learning models were developed for the detection of osteolytic lesions in PRs, particularly those with vision transformer components (Mask R-CNN with Swin-Tiny and Mask DINO). These results underline the potential of vision transformers for enhancing the automated detection of osteolytic lesions, offering a significant improvement over traditional deep learning models.


Figure 7: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/15a8/12182663/130a83198cd8/12903_2025_6209_Fig7_HTML.jpg
