Sivamani Chidambaram Rajmohan, Dhandapani Ragavesh, Rajmohan Sudha
Department of Prosthodontics, Oman Dental College, Muscat, OMN.
Department of Electrical and Communication Engineering, National University of Science and Technology, Muscat, OMN.
Cureus. 2025 May 26;17(5):e84816. doi: 10.7759/cureus.84816. eCollection 2025 May.
Background Classifying large dental radiographic datasets enables efficient data management and retrieval, facilitating quick access to specific types of radiographs for clinical or research purposes. It also supports advanced analytics, research, and the development of Artificial Intelligence (AI) tools. This study aimed to develop an automated workflow to improve the efficiency of dental radiograph classification. The workflow covers the entire process, from retrieving Digital Imaging and Communications in Medicine (DICOM) files to converting them into Joint Photographic Experts Group (JPEG) format and classifying them using Convolutional Neural Networks (CNNs) on a large dataset.
Materials and methods This cross-sectional machine learning study used 48,329 dental radiographs to develop an automated classification workflow based on CNNs. The workflow involved retrieving the 48,329 DICOM files, standardizing them to a uniform size, and converting them to JPEG using the Pydicom library. Image preprocessing, including normalization, prepared the images for machine learning analysis. Several models, namely ResNet-50, AlexNet, and a custom CNN, were trained, validated, and tested on distinct datasets.
Results The trained models were then deployed to classify the full dataset of 48,329 images. AlexNet demonstrated the highest performance, with a 95.98% detection rate and no errors, while ResNet-50 achieved 92.3% accuracy with 194 errors, and the custom CNN model showed a 77.25% detection rate with 1,623 errors.
Conclusion The study established an effective automated workflow for dental radiograph classification, demonstrating that CNN models significantly improve classification accuracy and efficiency.
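The abstract reports that DICOM files were standardized to a uniform size, normalized, and converted to JPEG with the Pydicom library. The following is a minimal sketch of how such a conversion step could look in Python; the directory names, the 224x224 target size, and the min-max scaling are illustrative assumptions, since the abstract does not specify these details.

```python
# Hedged sketch of a DICOM-to-JPEG conversion step using Pydicom.
# Paths, target size, and the min-max scaling are assumptions, not the study's exact settings.
from pathlib import Path

import numpy as np
import pydicom
from PIL import Image


def dicom_to_jpeg(dicom_path, jpeg_path, size=(224, 224)):
    """Read a DICOM file, scale its pixel data to 8-bit, resize, and save as JPEG."""
    ds = pydicom.dcmread(dicom_path)
    pixels = ds.pixel_array.astype(np.float32)

    # Min-max normalization to [0, 255] so the radiograph can be stored as an 8-bit JPEG.
    pixels -= pixels.min()
    if pixels.max() > 0:
        pixels /= pixels.max()
    image = Image.fromarray((pixels * 255).astype(np.uint8))

    # Standardize every radiograph to a uniform size before saving.
    image = image.resize(size)
    image.save(jpeg_path, format="JPEG")


if __name__ == "__main__":
    source_dir = Path("dicom_files")   # hypothetical input directory
    output_dir = Path("jpeg_files")    # hypothetical output directory
    output_dir.mkdir(exist_ok=True)
    for dicom_file in source_dir.glob("*.dcm"):
        dicom_to_jpeg(dicom_file, output_dir / (dicom_file.stem + ".jpg"))
```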
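For the classification stage, the abstract names ResNet-50, AlexNet, and a custom CNN but does not state the framework or training settings. The sketch below shows one plausible way to fine-tune ResNet-50 on the converted JPEGs using PyTorch and torchvision; the folder layout, class count, pretrained weights, and hyperparameters are assumptions for illustration only.

```python
# Hedged sketch of fine-tuning ResNet-50 on the converted JPEGs.
# Framework choice (PyTorch/torchvision >= 0.13), class count, and hyperparameters are assumptions.
import torch
from torch import nn, optim
from torchvision import datasets, models, transforms

NUM_CLASSES = 4  # hypothetical number of radiograph categories

preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # ImageNet backbones expect 3 channels
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Assumes JPEGs are arranged as jpeg_files/<class_name>/<image>.jpg
train_data = datasets.ImageFolder("jpeg_files", transform=preprocess)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle=True)

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # replace the ImageNet head

criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(5):  # illustrative epoch count
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

An analogous script with torchvision's AlexNet (replacing its final classifier layer) or a small custom CNN would cover the other two models reported in the study.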