Ma Junjie, Yang Fang, Yang Rong, Li Yuan, Chen Yongjing
Department of Gastrointestinal Surgery, Shanxi Province Cancer Hospital/Shanxi Hospital Affiliated to Cancer Hospital, Chinese Academy of Medical Sciences/Cancer Hospital Affiliated to Shanxi Medical University, Taiyuan, Shanxi, China.
Head and Neck Radiotherapy Ward, Shanxi Province Cancer Hospital/Shanxi Hospital Affiliated to Cancer Hospital, Chinese Academy of Medical Sciences/Cancer Hospital Affiliated to Shanxi Medical University, Taiyuan, Shanxi, China.
Front Immunol. 2025 May 29;16:1596085. doi: 10.3389/fimmu.2025.1596085. eCollection 2025.
Gastric cancer incidence has risen in recent years, demanding accurate and timely detection to improve patient well-being. Traditional cancer detection techniques suffer from limited explainability and precision, motivating an interpretable, AI-based gastric cancer detection system.
This work proposes a novel deep-learning (DL) fusion approach to gastric cancer detection that combines three DL architectures: Visual Geometry Group (VGG16), Residual Network-50 (ResNet-50), and MobileNetV2. Fusing the models leverages robust feature extraction and global contextual understanding, both well suited to image data, to improve the accuracy of cancer detection systems. The proposed approach then employs an Explainable Artificial Intelligence (XAI) technique, Local Interpretable Model-Agnostic Explanations (LIME), to provide insight into and transparency of the model's decision-making process through visualizations. The LIME visualizations help identify the specific image regions that contribute to the model's decision, which may aid clinical applications.
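The late-fusion wiring described above can be sketched in miniature. The following NumPy illustration is a hypothetical stand-in, not the paper's implementation: the random-projection "backbones" and untrained linear head merely mimic the shapes of VGG16, ResNet-50, and MobileNetV2 pooled features, and only the concatenation-based fusion step is the point.

```python
import numpy as np

# Illustrative sketch of late feature fusion, assuming the three
# pretrained backbones (VGG16, ResNet-50, MobileNetV2) are available
# as feature extractors. The random-projection "backbones" and the
# untrained linear head below are hypothetical stand-ins, not the
# paper's trained models.

def make_backbone(out_dim, seed):
    # Stand-in for a CNN truncated before its classifier: global-average-
    # pool the RGB image, then project to the backbone's embedding size.
    W = np.random.default_rng(seed).normal(size=(out_dim, 3))
    return lambda img: W @ img.mean(axis=(0, 1))

vgg16_feats = make_backbone(512, 1)         # VGG16 pooled-feature size
resnet50_feats = make_backbone(2048, 2)     # ResNet-50 pooled-feature size
mobilenetv2_feats = make_backbone(1280, 3)  # MobileNetV2 pooled-feature size

def fused_features(img):
    # Late fusion: concatenate the three embeddings into one vector.
    return np.concatenate([vgg16_feats(img), resnet50_feats(img),
                           mobilenetv2_feats(img)])

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Linear classification head over the 3840-dim fused embedding.
head_W = np.random.default_rng(4).normal(size=(2, 512 + 2048 + 1280)) * 0.01

def predict(img):
    # Returns [p(benign), p(cancer)] for an HxWx3 image in [0, 1].
    return softmax(head_W @ fused_features(img))

img = np.random.default_rng(5).random((224, 224, 3))
probs = predict(img)
print(probs.shape)  # (2,)
```

In a real pipeline each backbone would be a pretrained CNN with its classification head removed, and the fused embedding would feed a trained classifier; the design choice illustrated here is concatenation-based (late) fusion rather than averaging the three models' predictions.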
Experimental results show that the fusion model improves accuracy by 7% over the individual stand-alone models, reaching 97.8%. LIME highlights the critical image regions that drive the cancer detection.
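The kind of region attribution reported here can be approximated with a model-agnostic occlusion sketch: mask one image region at a time and record how much the predicted probability drops. This is only in the spirit of LIME, which perturbs superpixels and fits a local linear surrogate; the grid segmentation and toy classifier below are hypothetical, not the authors' model.

```python
import numpy as np

# Simplified, model-agnostic occlusion analysis in the spirit of LIME:
# score each image region by how much masking it changes the predicted
# probability. The toy classifier is a hypothetical stand-in that
# responds to overall image brightness.

def toy_cancer_prob(img):
    # Stand-in classifier: sigmoid of the image's mean intensity.
    return float(1.0 / (1.0 + np.exp(-10 * (img.mean() - 0.5))))

def region_importance(img, grid=4):
    """Mask each cell of a grid x grid partition and record the drop
    in predicted probability (larger drop = more important region)."""
    base = toy_cancer_prob(img)
    h, w = img.shape[:2]
    ch, cw = h // grid, w // grid
    importance = np.zeros((grid, grid))
    for i in range(grid):
        for j in range(grid):
            masked = img.copy()
            masked[i*ch:(i+1)*ch, j*cw:(j+1)*cw] = 0.0  # occlude one cell
            importance[i, j] = base - toy_cancer_prob(masked)
    return importance

# A bright patch in the top-left corner should dominate the explanation.
img = np.full((64, 64, 3), 0.2)
img[:16, :16] = 1.0
imp = region_importance(img)
top = np.unravel_index(np.argmax(imp), imp.shape)
print(top)  # most influential grid cell: (0, 0)
```

A real LIME run would instead sample many random superpixel on/off masks, query the model on each perturbed image, and fit a weighted linear model whose coefficients rank the superpixels; the occlusion grid above conveys the same intuition with far less machinery.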
The enhanced accuracy of gastric cancer detection makes the approach well suited to clinical applications. By presenting explanations of the model's decisions, LIME ensures trustworthy and reliable predictions, making the system useful for medical practitioners. This research contributes to developing an AI-driven, trustworthy cancer detection system that supports clinical decisions and improves patient outcomes.