Suppr 超能文献


Explainable CNN-Radiomics Fusion and Ensemble Learning for Multimodal Lesion Classification in Dental Radiographs.

Authors

Can Zuhal, Aydin Emre

Affiliations

Computer Engineering Department, Engineering and Architecture Faculty, Eskisehir Osmangazi University, Eskisehir 26040, Türkiye.

Publication

Diagnostics (Basel). 2025 Aug 9;15(16):1997. doi: 10.3390/diagnostics15161997.

DOI: 10.3390/diagnostics15161997
PMID: 40870848
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC12385016/
Abstract

Background/Objectives: Clinicians routinely rely on periapical radiographs to identify root-end disease, but interpretation errors and inconsistent readings compromise diagnostic accuracy. We therefore developed an explainable, multimodal AI framework that (i) fuses two data modalities, deep CNN embeddings and radiomic texture descriptors extracted only from lesion-relevant pixels selected by Grad-CAM, and (ii) makes every prediction transparent through dual-layer explainability (pixel-level Grad-CAM heatmaps + feature-level SHAP values).

Methods: A dataset of 2285 periapical radiographs was processed using six CNN architectures (EfficientNet-B1/B4/V2M/V2S, ResNet-50, Xception). For each image, a Grad-CAM heatmap generated from the penultimate layer of the CNN was thresholded to create a binary mask delineating the region most responsible for the network's decision. Radiomic features (first-order, GLCM, GLRLM, GLDM, NGTDM, and shape2D) were then computed only within that mask, ensuring that handcrafted descriptors and learned embeddings referred to the same anatomic focus. The two feature streams were concatenated, optionally reduced by principal component analysis or SelectKBest, and fed to random forest or XGBoost classifiers; five-view test-time augmentation (TTA) was applied at inference. Pixel-level interpretability was provided by the original Grad-CAM, while SHAP quantified the contribution of each radiomic and deep feature to the final vote.

Results: Raw CNNs achieved ca. 52% accuracy and AUC values near 0.60. Multimodal fusion raised performance dramatically: the Xception + radiomics + random forest model achieved 95.4% accuracy and an AUC of 0.9867, and adding TTA increased these to 96.3% and 0.9917, respectively. The top ensemble, Xception and EfficientNet-V2S fusion vectors classified with XGBoost under five-view TTA, reached 97.16% accuracy and an AUC of 0.9914, with false-positive and false-negative rates of 4.6% and 0.9%, respectively. Grad-CAM heatmaps consistently highlighted periapical regions, while SHAP plots revealed that radiomic texture heterogeneity and high-level CNN features jointly contributed to correct classifications.

Conclusions: By tightly integrating CNN embeddings, mask-targeted radiomics, and a two-tiered explainability stack (Grad-CAM + SHAP), the proposed system delivers state-of-the-art lesion detection with a transparent decision process, addressing both accuracy and trust.
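The core of the Methods — thresholding a Grad-CAM heatmap into a binary mask, computing radiomic descriptors only inside that mask, and concatenating them with CNN embeddings — can be sketched in plain NumPy. This is an illustrative reconstruction, not the authors' code: the threshold value, the feature set (a few first-order statistics rather than the full GLCM/GLRLM/GLDM/NGTDM/shape2D families), and the random placeholder embedding are all assumptions.

```python
import numpy as np

def gradcam_mask(heatmap, thresh=0.5):
    """Binarize a Grad-CAM heatmap to keep lesion-relevant pixels.
    The threshold value is illustrative; the abstract does not specify it."""
    h = (heatmap - heatmap.min()) / (np.ptp(heatmap) + 1e-8)
    return h >= thresh

def first_order_features(image, mask):
    """First-order radiomic descriptors computed only within the mask,
    mirroring the paper's idea of mask-targeted radiomics."""
    px = image[mask].astype(float)
    hist, _ = np.histogram(px, bins=32)
    p = hist / px.size
    return {
        "mean": px.mean(),
        "std": px.std(),
        "skewness": ((px - px.mean()) ** 3).mean() / (px.std() ** 3 + 1e-8),
        "entropy": -(p * np.log2(p + 1e-12)).sum(),
    }

# Toy data: a synthetic radiograph and a heatmap that fires on a 20x20 patch.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64))
cam = np.zeros((64, 64))
cam[20:40, 20:40] = 1.0

mask = gradcam_mask(cam)
radiomic_vec = np.array(list(first_order_features(img, mask).values()))
cnn_embedding = rng.standard_normal(128)  # placeholder for a real CNN embedding
fused = np.concatenate([cnn_embedding, radiomic_vec])  # fed to RF/XGBoost in the paper
```

In the paper, `fused` (optionally after PCA or SelectKBest) would be the input vector for the random forest or XGBoost classifier.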

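The five-view test-time augmentation applied at inference amounts to averaging class probabilities over several deterministic views of each radiograph. A minimal sketch follows; the specific five views chosen here (identity, two flips, two 90° rotations) and the toy stand-in classifier are assumptions, as the abstract does not enumerate the views.

```python
import numpy as np

def five_view_tta(image, predict_proba):
    """Average class probabilities over five deterministic views of the image."""
    views = [
        image,
        np.fliplr(image),      # horizontal flip
        np.flipud(image),      # vertical flip
        np.rot90(image, 1),    # 90-degree rotation
        np.rot90(image, 3),    # 270-degree rotation
    ]
    probs = np.stack([predict_proba(v) for v in views])
    return probs.mean(axis=0)

def toy_model(img):
    # Toy stand-in for a real classifier: P(lesion) grows with mean intensity.
    p_lesion = img.mean() / 255.0
    return np.array([1.0 - p_lesion, p_lesion])

img = np.full((32, 32), 128.0)
p = five_view_tta(img, toy_model)  # averaged [P(healthy), P(lesion)]
```

Because flips and rotations leave the class of a periapical lesion unchanged, averaging over views reduces prediction variance, which is consistent with the reported accuracy gain from 95.4% to 96.3% when TTA was added.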

Figures (g001–g009):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/500c/12385016/f7272f67b34f/diagnostics-15-01997-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/500c/12385016/00fce48b9e60/diagnostics-15-01997-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/500c/12385016/aacbbcefa478/diagnostics-15-01997-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/500c/12385016/9eae48e44e6e/diagnostics-15-01997-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/500c/12385016/78304e67f33a/diagnostics-15-01997-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/500c/12385016/ca6f87896c4e/diagnostics-15-01997-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/500c/12385016/002dcffb1284/diagnostics-15-01997-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/500c/12385016/5012a92605bf/diagnostics-15-01997-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/500c/12385016/c2709481a3bf/diagnostics-15-01997-g009.jpg

Similar Articles

1
Explainable CNN-Radiomics Fusion and Ensemble Learning for Multimodal Lesion Classification in Dental Radiographs.
Diagnostics (Basel). 2025 Aug 9;15(16):1997. doi: 10.3390/diagnostics15161997.
2
A Multimodal MRI-Based Model for Colorectal Liver Metastasis Prediction: Integrating Radiomics, Deep Learning, and Clinical Features with SHAP Interpretation.
Curr Oncol. 2025 Jul 30;32(8):431. doi: 10.3390/curroncol32080431.
3
CXR-MultiTaskNet: a unified deep learning framework for joint disease localization and classification in chest radiographs.
Sci Rep. 2025 Aug 31;15(1):32022. doi: 10.1038/s41598-025-16669-z.
4
Development and Validation of a Convolutional Neural Network Model to Predict a Pathologic Fracture in the Proximal Femur Using Abdomen and Pelvis CT Images of Patients With Advanced Cancer.
Clin Orthop Relat Res. 2023 Nov 1;481(11):2247-2256. doi: 10.1097/CORR.0000000000002771. Epub 2023 Aug 23.
5
Prescription of Controlled Substances: Benefits and Risks.
6
Rad-EfficientNet: Improving Breast MRI Diagnosis Through Integration of Radiomics and Deep Learning.
IEEE J Biomed Health Inform. 2025 Aug;29(8):5667-5674. doi: 10.1109/JBHI.2025.3551840.
7
A deep learning approach to direct immunofluorescence pattern recognition in autoimmune bullous diseases.
Br J Dermatol. 2024 Jul 16;191(2):261-266. doi: 10.1093/bjd/ljae142.
8
Integrative radiomics of intra- and peri-tumoral features for enhanced risk prediction in thymic tumors: a multimodal analysis of tumor microenvironment contributions.
BMC Med Imaging. 2025 Jul 17;25(1):286. doi: 10.1186/s12880-025-01790-2.
9
Stabilizing machine learning for reproducible and explainable results: A novel validation approach to subject-specific insights.
Comput Methods Programs Biomed. 2025 Jun 21;269:108899. doi: 10.1016/j.cmpb.2025.108899.
10
Deep Learning and Image Generator Health Tabular Data (IGHT) for Predicting Overall Survival in Patients With Colorectal Cancer: Retrospective Study.
JMIR Med Inform. 2025 Aug 19;13:e75022. doi: 10.2196/75022.

References Cited in This Article

1
Segmentation of periapical lesions with automatic deep learning on panoramic radiographs: an artificial intelligence study.
BMC Oral Health. 2024 Nov 1;24(1):1332. doi: 10.1186/s12903-024-05126-4.
2
Periapical lesion detection in periapical radiographs using the latest convolutional neural network ConvNeXt and its integrated models.
Sci Rep. 2024 Oct 25;14(1):25429. doi: 10.1038/s41598-024-75748-9.
3
Explainable AI-based Deep-SHAP for mapping the multivariate relationships between regional neuroimaging biomarkers and cognition.
Eur J Radiol. 2024 May;174:111403. doi: 10.1016/j.ejrad.2024.111403. Epub 2024 Mar 2.
4
Deep Learning in Diagnosis of Dental Anomalies and Diseases: A Systematic Review.
Diagnostics (Basel). 2023 Jul 27;13(15):2512. doi: 10.3390/diagnostics13152512.
5
Analysis of Deep Learning Techniques for Dental Informatics: A Systematic Literature Review.
Healthcare (Basel). 2022 Sep 28;10(10):1892. doi: 10.3390/healthcare10101892.
6
Artificial Intelligence in Dentistry: Past, Present, and Future.
Cureus. 2022 Jul 28;14(7):e27405. doi: 10.7759/cureus.27405. eCollection 2022 Jul.
7
Artificial Intelligence for Caries Detection: Value of Data and Information.
J Dent Res. 2022 Oct;101(11):1350-1356. doi: 10.1177/00220345221113756. Epub 2022 Aug 22.
8
Combining radiomics and deep convolutional neural network features from preoperative MRI for predicting clinically relevant genetic biomarkers in glioblastoma.
Neurooncol Adv. 2022 Apr 22;4(1):vdac060. doi: 10.1093/noajnl/vdac060. eCollection 2022 Jan-Dec.
9
Applications of artificial intelligence and machine learning in orthodontics: a scoping review.
Prog Orthod. 2021 Jul 5;22(1):18. doi: 10.1186/s40510-021-00361-9.
10
Machine learning in dental, oral and craniofacial imaging: a review of recent progress.
PeerJ. 2021 May 17;9:e11451. doi: 10.7717/peerj.11451. eCollection 2021.