
Evaluating masked self-supervised learning frameworks for 3D dental model segmentation tasks.

Author information

Krenmayr Lucas, von Schwerin Reinhold, Schaudt Daniel, Riedel Pascal, Hafner Alexander, Geserick Marc

Affiliations

Cooperative Doctoral Program for Data Science and Analytics, University of Ulm, 89081, Ulm, Germany.

Department of Computer Science, University of Applied Sciences, 89081, Ulm, Germany.

Publication information

Sci Rep. 2025 May 14;15(1):16818. doi: 10.1038/s41598-025-01014-1.

Abstract

The application of deep learning to dental models is crucial for automated computer-aided treatment planning. However, developing highly accurate models requires a substantial amount of accurately labeled data, which is challenging to obtain, especially in the medical domain. Masked self-supervised learning has shown great promise in overcoming data scarcity, but its effectiveness has not been well explored in the 3D domain, particularly on dental models. In this work, we investigate the applicability of four recently published masked self-supervised learning frameworks (Point-BERT, Point-MAE, Point-GPT, and Point-M2AE) for improving downstream tasks such as tooth and brace segmentation. These frameworks were pre-trained on a proprietary dataset of over 4000 unlabeled 3D dental models and fine-tuned on the publicly available Teeth3DS dataset for tooth segmentation and on a self-constructed braces segmentation dataset. Through a set of experiments, we demonstrate that pre-training can enhance the performance of downstream tasks, especially when training data is scarce or imbalanced, a critical factor for clinical usability. Our results show that the benefits are most noticeable when training data is limited but diminish as more labeled data becomes available, providing insights into when and how this technique should be applied to maximize its effectiveness.
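As an illustration of the pre-training strategy the abstract describes, the sketch below shows the patch-masking step shared by Point-MAE-style frameworks: a point cloud is grouped into local patches, a random fraction is hidden, and the network is trained to reconstruct the masked patches from the visible ones. This is not the authors' code; function names and parameter values (patch count, mask ratio) are illustrative assumptions.

```python
# Minimal sketch of masked patch preparation for point-cloud self-supervised
# pre-training (Point-MAE style). Hypothetical names and parameters.
import numpy as np

def group_patches(points, num_patches=64, patch_size=32, seed=0):
    """Group a point cloud of shape (N, 3) into local patches:
    sample random centers, then take each center's nearest neighbours."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), num_patches, replace=False)]
    dists = np.linalg.norm(points[None, :, :] - centers[:, None, :], axis=-1)
    idx = np.argsort(dists, axis=1)[:, :patch_size]
    return points[idx]  # (num_patches, patch_size, 3)

def mask_patches(patches, mask_ratio=0.6, seed=0):
    """Randomly hide a fraction of patches. The encoder sees only the
    visible patches; the masked ones become reconstruction targets."""
    rng = np.random.default_rng(seed)
    n = len(patches)
    mask = np.zeros(n, dtype=bool)
    mask[rng.choice(n, int(mask_ratio * n), replace=False)] = True
    return patches[~mask], mask

if __name__ == "__main__":
    cloud = np.random.default_rng(1).standard_normal((2048, 3))
    patches = group_patches(cloud)          # (64, 32, 3)
    visible, mask = mask_patches(patches)   # (26, 32, 3), 38 patches masked
    print(visible.shape, int(mask.sum()))
```

After pre-training on unlabeled scans with this objective, the encoder weights are reused and fine-tuned on the labeled segmentation task, which is where the paper reports the largest gains in the low-data regime.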

