
Automatic X-ray teeth segmentation with grouped attention.

Authors

Zhong Wenjin, Ren XiaoXiao, Zhang HanWen

Affiliations

Macquarie University, Sydney, Australia.

The University of New South Wales, Sydney, Australia.

Publication

Sci Rep. 2025 Jan 2;15(1):64. doi: 10.1038/s41598-024-84629-0.


DOI:10.1038/s41598-024-84629-0
PMID:39747360
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11696191/
Abstract

Detection and segmentation of teeth from X-rays aid healthcare professionals in accurately determining the shape and growth trends of teeth. However, small dataset sizes due to patient privacy, high noise, and blurred boundaries between periodontal tissue and teeth pose challenges to the models' transferability and generalizability, making them prone to overfitting. To address these issues, we propose a novel model, named Grouped Attention and Cross-Layer Fusion Network (GCNet). GCNet effectively handles numerous noise points and significant individual differences in the data, achieving stable and precise segmentation on small-scale datasets. The model comprises two core modules: Grouped Global Attention (GGA) modules and Cross-Layer Fusion (CLF) modules. The GGA modules capture and group texture and contour features, while the CLF modules combine these features with deep semantic information to improve prediction. Experimental results on the Children's Dental Panoramic Radiographs dataset show that our model outperformed existing models such as GT-U-Net and Teeth U-Net, with a Dice coefficient of 0.9338, sensitivity of 0.9426, and specificity of 0.9821. The GCNet model also demonstrates clearer segmentation boundaries compared to other models.
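The Dice coefficient, sensitivity, and specificity reported above follow their standard definitions for binary segmentation masks. A minimal sketch of how these metrics are computed (this is not the authors' evaluation code; `seg_metrics` is a hypothetical helper name):

```python
import numpy as np

def seg_metrics(pred, gt):
    """Compute Dice, sensitivity, and specificity for two binary masks.

    pred, gt: arrays of equal shape; nonzero entries mark the tooth region.
    """
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()      # predicted tooth, truly tooth
    tn = np.logical_and(~pred, ~gt).sum()    # predicted background, truly background
    fp = np.logical_and(pred, ~gt).sum()     # predicted tooth, actually background
    fn = np.logical_and(~pred, gt).sum()     # missed tooth pixels
    dice = 2 * tp / (2 * tp + fp + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return dice, sensitivity, specificity
```

Under these definitions, GCNet's reported Dice of 0.9338 measures overlap between predicted and ground-truth tooth regions, while sensitivity (0.9426) and specificity (0.9821) measure the recall of tooth pixels and background pixels, respectively.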

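The abstract describes the GGA modules as grouping features and attending over them globally. As an illustration only (not the authors' GGA implementation; `grouped_attention`, the channel-wise grouping, and the identity projections are assumptions made for brevity), attention applied independently within channel groups can be sketched as:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def grouped_attention(x, num_groups):
    """Self-attention run independently within each channel group.

    x: (tokens, channels) feature matrix; channels must divide evenly
    into num_groups. Query/key/value projections are omitted (identity)
    to keep the sketch short.
    """
    n, c = x.shape
    assert c % num_groups == 0, "channels must be divisible by num_groups"
    outs = []
    for g in np.split(x, num_groups, axis=1):
        # Scaled dot-product attention within this group of channels.
        scores = softmax(g @ g.T / np.sqrt(g.shape[1]))
        outs.append(scores @ g)
    return np.concatenate(outs, axis=1)
```

Splitting channels into groups keeps each attention map small, which is one plausible way to control parameter count and overfitting on small datasets; the paper should be consulted for the actual GGA design.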

Figures 1-14 (PMC full text):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ea02/11696191/7cf2d57dd2b3/41598_2024_84629_Fig1_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ea02/11696191/796ef3703ba9/41598_2024_84629_Fig2_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ea02/11696191/a9eec17fb169/41598_2024_84629_Fig3_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ea02/11696191/b52e9155a52d/41598_2024_84629_Fig4_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ea02/11696191/4326009046f8/41598_2024_84629_Fig5_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ea02/11696191/4403d2f7a491/41598_2024_84629_Fig6_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ea02/11696191/c3dd30509b95/41598_2024_84629_Fig7_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ea02/11696191/039274bf75cf/41598_2024_84629_Fig8_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ea02/11696191/4b678db4824c/41598_2024_84629_Fig9_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ea02/11696191/41f4b39ac4d3/41598_2024_84629_Fig10_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ea02/11696191/0d22edb4f6c3/41598_2024_84629_Fig11_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ea02/11696191/8d970e24e4fe/41598_2024_84629_Fig12_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ea02/11696191/28227b0dd6bf/41598_2024_84629_Fig13_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ea02/11696191/dc5c5a102028/41598_2024_84629_Fig14_HTML.jpg

Similar Articles

[1]
Automatic X-ray teeth segmentation with grouped attention.

Sci Rep. 2025-1-2

[2]
Teeth U-Net: A segmentation model of dental panoramic X-ray images for context semantics and contrast enhancement.

Comput Biol Med. 2023-1

[3]
A dual-labeled dataset and fusion model for automatic teeth segmentation, numbering, and state assessment on panoramic radiographs.

BMC Oral Health. 2024-10-9

[4]
MADR-Net: multi-level attention dilated residual neural network for segmentation of medical images.

Sci Rep. 2024-6-3

[5]
Segmentation of teeth in panoramic dental X-ray images using U-Net with a loss function weighted on the tooth edge.

Radiol Phys Technol. 2021-3

[6]
Enhancing teeth segmentation using multifusion deep neural net in panoramic X-ray images.

J Xray Sci Technol. 2023

[7]
STC-UNet: renal tumor segmentation based on enhanced feature extraction at different network levels.

BMC Med Imaging. 2024-7-19

[8]
Assessment of CNNs, transformers, and hybrid architectures in dental image segmentation.

J Dent. 2025-5

[9]
Deep learning for automatic mandible segmentation on dental panoramic x-ray images.

Biomed Phys Eng Express. 2023-3-10

[10]
ASD-Net: a novel U-Net based asymmetric spatial-channel convolution network for precise kidney and kidney tumor image segmentation.

Med Biol Eng Comput. 2024-6

Cited By

[1]
A visualization system for intelligent diagnosis and statistical analysis of oral diseases based on panoramic radiography.

Sci Rep. 2025-5-25

References

[1]
Children's dental panoramic radiographs dataset for caries segmentation and dental disease detection.

Sci Data. 2023-6-14

[2]
Teeth U-Net: A segmentation model of dental panoramic X-ray images for context semantics and contrast enhancement.

Comput Biol Med. 2023-1

[3]
TransMorph: Transformer for unsupervised medical image registration.

Med Image Anal. 2022-11

[4]
Transfer learning for medical image classification: a literature review.

BMC Med Imaging. 2022-4-13

[5]
Deep learning for caries detection: A systematic review.

J Dent. 2022-7

[6]
Inf-Net: Automatic COVID-19 Lung Infection Segmentation From CT Images.

IEEE Trans Med Imaging. 2020-8

[7]
A review of the application of deep learning in medical image classification and segmentation.

Ann Transl Med. 2020-6

[8]
UNet++: A Nested U-Net Architecture for Medical Image Segmentation.

Deep Learn Med Image Anal Multimodal Learn Clin Decis Support (2018). 2018-9

[9]
Deep learning in medical image registration: a review.

Phys Med Biol. 2020-10-22

[10]
A survey on deep learning in medical image analysis.

Med Image Anal. 2017-7-26
