Department of Electrical and Computer Engineering, North South University, Dhaka 1229, Bangladesh.
Adelaide Dental School, The University of Adelaide, Adelaide, SA 5005, Australia.
Int J Environ Res Public Health. 2023 Mar 31;20(7):5351. doi: 10.3390/ijerph20075351.
Access to oral healthcare is not uniform globally, particularly in rural areas with limited resources, and this limits the potential of automated diagnostics and advanced tele-dentistry applications. Digital caries detection and progression monitoring through photographic communication are influenced by multiple variables that are difficult to standardise in such settings. The objective of this study was to develop a novel, cost-effective virtual computer vision AI system that predicts dental cavitations from non-standardised photographs with reasonable clinical accuracy.
A set of 1703 augmented images was obtained from 233 de-identified tooth specimens. Images were acquired with a consumer smartphone, without any standardised apparatus. The study utilised state-of-the-art ensemble modelling, test-time augmentation, and transfer learning. Derivatives of the "you only look once" (YOLO) algorithm, v5s, v5m, v5l, and v5x, were evaluated independently; an ensemble of the best-performing models was then combined with test-time augmentation and transfer learned with ResNet50, ResNet101, VGG16, AlexNet, and DenseNet. The outcomes were evaluated using precision, recall, and mean average precision (mAP).
The YOLO model ensemble achieved a mean average precision (mAP) of 0.732, an accuracy of 0.789, and a recall of 0.701. When transferred to VGG16, the final model demonstrated a diagnostic accuracy of 86.96%, a precision of 0.89, and a recall of 0.88, surpassing all other baseline methods of object detection from free-hand, non-standardised smartphone photographs.
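The reported metrics follow standard definitions, which can be sketched as below. The confusion-matrix counts and precision-recall points used in the test are illustrative only, not the study's data; the AP routine uses the all-point interpolation common to COCO-style detection evaluators, with mAP being the mean of AP across classes.

```python
import numpy as np

def precision_recall_accuracy(tp, fp, fn, tn):
    """Classification metrics from confusion-matrix counts:
    precision = TP/(TP+FP), recall = TP/(TP+FN),
    accuracy = (TP+TN)/all."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, accuracy

def average_precision(recalls, precisions):
    """AP as the area under an interpolated precision-recall curve
    (all-point interpolation). mAP averages this value over classes."""
    r = np.concatenate(([0.0], recalls, [1.0]))
    p = np.concatenate(([0.0], precisions, [0.0]))
    # Make precision monotonically non-increasing from right to left.
    for i in range(len(p) - 2, -1, -1):
        p[i] = max(p[i], p[i + 1])
    # Integrate over the recall steps.
    idx = np.where(r[1:] != r[:-1])[0]
    return float(np.sum((r[idx + 1] - r[idx]) * p[idx + 1]))
```

Note that precision and accuracy can diverge substantially when classes are imbalanced, which is why detection studies report mAP and recall alongside accuracy.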
A virtual computer vision AI system combining model ensembling, test-time augmentation, and transfer learning was developed to predict dental cavitations from non-standardised photographs with reasonable clinical accuracy. This model can improve access to oral healthcare in rural areas with limited resources, and has the potential to aid automated diagnostics and advanced tele-dentistry applications.