A novel framework for esophageal cancer grading: combining CT imaging, radiomics, reproducibility, and deep learning insights.

Author Information

Alsallal Muna, Ahmed Hanan Hassan, Kareem Radhwan Abdul, Yadav Anupam, Ganesan Subbulakshmi, Shankhyan Aman, Gupta Sofia, Joshi Kamal Kant, Sameer Hayder Naji, Yaseen Ahmed, Athab Zainab H, Adil Mohaned, Farhood Bagher

Affiliations

Electronics and Communication Department, College of Engineering, Al-Muthanna University, Education Zone, Al-Muthanna, Iraq.

College of Pharmacy, Alnoor University, Mosul, Iraq.

Publication Information

BMC Gastroenterol. 2025 May 10;25(1):356. doi: 10.1186/s12876-025-03952-6.

Abstract

OBJECTIVE

This study aims to create a reliable framework for grading esophageal cancer. The framework combines feature extraction, deep learning with attention mechanisms, and radiomics to ensure accuracy, interpretability, and practical use in tumor analysis.

MATERIALS AND METHODS

This retrospective study used data from 2,560 esophageal cancer patients collected from multiple clinical centers between 2018 and 2023. The dataset included CT scan images and clinical information, representing a range of cancer grades and types. Standardized CT imaging protocols were followed, experienced radiologists manually segmented the tumor regions, and only high-quality data were included. A total of 215 radiomic features were extracted using the SERA platform. Two deep learning models, DenseNet121 and EfficientNet-B0, each enhanced with attention mechanisms, were used to improve accuracy. A combined classification approach used both radiomic and deep learning features, and machine learning models such as Random Forest, XGBoost, and CatBoost were applied. These models were validated with strict training and testing procedures to ensure effective cancer grading.
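To make the described pipeline concrete, the sketch below shows one plausible way to pair an attention-enhanced DenseNet121 feature extractor with radiomic features and an XGBoost classifier. The attention block, layer sizes, placeholder arrays, and hyperparameters are illustrative assumptions; the abstract does not specify the authors' exact implementation.

    # Minimal sketch, assuming a squeeze-and-excitation style attention block
    # and simple concatenation of radiomic and deep features (not confirmed
    # by the paper). All data below are synthetic placeholders.
    import numpy as np
    import torch
    import torch.nn as nn
    from torchvision import models
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score
    from xgboost import XGBClassifier

    class ChannelAttention(nn.Module):
        """Channel attention over the backbone's feature maps (assumed design)."""
        def __init__(self, channels, reduction=16):
            super().__init__()
            self.fc = nn.Sequential(
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
                nn.Sigmoid(),
            )

        def forward(self, x):                        # x: (N, C, H, W)
            w = self.fc(x.mean(dim=(2, 3)))          # global average pool -> channel weights
            return x * w.unsqueeze(-1).unsqueeze(-1)

    class DeepFeatureExtractor(nn.Module):
        """DenseNet121 trunk + attention, returning a pooled feature vector."""
        def __init__(self):
            super().__init__()
            backbone = models.densenet121(weights=None)   # pretrained weights optional
            self.features = backbone.features             # convolutional trunk
            self.attention = ChannelAttention(1024)        # DenseNet121 ends with 1024 channels
            self.pool = nn.AdaptiveAvgPool2d(1)

        def forward(self, x):
            x = self.attention(self.features(x))
            return self.pool(x).flatten(1)                 # (N, 1024) deep feature vector

    # Extract deep features, concatenate with radiomic features, classify with XGBoost.
    extractor = DeepFeatureExtractor().eval()
    ct_batch = torch.randn(32, 3, 224, 224)                # placeholder CT patches
    with torch.no_grad():
        deep_feats = extractor(ct_batch).numpy()           # (32, 1024)

    radiomic_feats = np.random.rand(32, 215)               # placeholder for 215 SERA features
    labels = np.tile([0, 1], 16)                           # placeholder binary tumor grades

    X = np.hstack([radiomic_feats, deep_feats])            # simple feature concatenation
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3,
                                               stratify=labels, random_state=0)
    clf = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
    clf.fit(X_tr, y_tr)
    print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))

In practice the extractor would be fine-tuned on the CT data and real grade labels would replace the placeholders; the concatenation step is the simplest form of the "combined classification approach" named in the methods.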

RESULTS

This study analyzed the reliability and performance of radiomic and deep learning features for grading esophageal cancer. Radiomic features were classified into four reliability levels based on their intraclass correlation coefficient (ICC) values; most had excellent (ICC > 0.90) or good (0.75 < ICC ≤ 0.90) reliability. Deep learning features extracted from DenseNet121 and EfficientNet-B0 were categorized in the same way, and some showed poor reliability. Machine learning models, including XGBoost and CatBoost, were tested for their ability to grade cancer. XGBoost with recursive feature elimination (RFE) gave the best results for radiomic features, with an area under the curve (AUC) of 91.36%. For deep learning features, XGBoost with principal component analysis (PCA) performed best with DenseNet121, while CatBoost with RFE performed best with EfficientNet-B0, achieving an AUC of 94.20%. Combining radiomic and deep features led to significant improvements, with XGBoost achieving the highest AUC of 96.70%, accuracy of 96.71%, and sensitivity of 95.44%. Combining the DenseNet121 and EfficientNet-B0 models in an ensemble achieved the best overall performance among the deep learning models, with an AUC of 95.14% and accuracy of 94.88%.
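The reliability filtering and feature-selection steps above can be sketched as follows: features are binned by ICC, the less reliable ones are dropped, and XGBoost is wrapped in recursive feature elimination. The thresholds for the lower two ICC bands, the synthetic arrays, and the RFE settings are assumptions for illustration; the abstract only states the excellent and good cut-offs and the final performance figures.

    # Minimal sketch of ICC-based reliability binning followed by RFE + XGBoost.
    # ICC values, labels, and feature matrices are synthetic placeholders.
    import numpy as np
    from sklearn.feature_selection import RFE
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score
    from xgboost import XGBClassifier

    def icc_category(icc):
        """Four reliability levels; the lower two cut-offs are assumed conventions."""
        if icc > 0.90:
            return "excellent"
        if icc > 0.75:
            return "good"
        if icc >= 0.50:
            return "moderate"
        return "poor"

    rng = np.random.default_rng(0)
    X = rng.random((300, 215))                   # placeholder radiomic feature matrix
    y = rng.integers(0, 2, size=300)             # placeholder grades (binary here)
    icc = rng.uniform(0.4, 1.0, size=215)        # placeholder ICC per feature

    keep = np.array([icc_category(v) in ("excellent", "good") for v in icc])
    X_reliable = X[:, keep]                      # drop poorly reproducible features

    X_tr, X_te, y_tr, y_te = train_test_split(X_reliable, y, test_size=0.3,
                                               stratify=y, random_state=0)
    selector = RFE(XGBClassifier(n_estimators=200, eval_metric="logloss"),
                   n_features_to_select=30, step=10)
    selector.fit(X_tr, y_tr)
    proba = selector.predict_proba(X_te)[:, 1]
    print("AUC:", roc_auc_score(y_te, proba))

An ensemble such as the DenseNet121 + EfficientNet-B0 combination reported above could then be formed by averaging the class probabilities of classifiers trained on each model's features, though the exact fusion rule is not stated in the abstract.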

CONCLUSIONS

This study improves esophageal cancer grading by combining radiomics and deep learning. It enhances diagnostic accuracy, reproducibility, and interpretability, and supports personalized treatment planning through better tumor characterization.

CLINICAL TRIAL NUMBER

Not applicable.

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c7cb/12065308/37e794e57e72/12876_2025_3952_Fig1_HTML.jpg
