Suppr 超能文献


Exploring vision transformers and XGBoost as deep learning ensembles for transforming carcinoma recognition.

Authors

Raju Akella Subrahmanya Narasimha, Venkatesh K, Padmaja B, Kumar C H N Santhosh, Patnala Pattabhi Rama Mohan, Lasisi Ayodele, Islam Saiful, Razak Abdul, Khan Wahaj Ahmad

Affiliations

Department of Computer Science and Engineering (Data Science), Institute of Aeronautical Engineering, Dundigul, Hyderabad, Telangana, 500043, India.

Department of Networking and Communications, School of Computing, SRM Institute of Science and Technology, Kattankulathur, Chennai, Tamilnadu, 603203, India.

Publication

Sci Rep. 2024 Dec 3;14(1):30052. doi: 10.1038/s41598-024-81456-1.

DOI: 10.1038/s41598-024-81456-1
PMID: 39627293
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11614869/
Abstract

Early detection of colorectal carcinoma (CRC), one of the most prevalent forms of cancer worldwide, significantly enhances patient prognosis. This research presents a new method for improving CRC detection using deep learning ensembles within a computer-aided diagnosis (CADx) framework. The method combines pre-trained convolutional neural network (CNN) models, such as ADaRDEV2I-22, DaRD-22, and ADaDR-22, with Vision Transformers (ViT) and XGBoost. The study addresses the challenges of imbalanced datasets and the need for sophisticated feature extraction in medical image analysis. The CKHK-22 dataset initially comprised 24 classes; we refined it to 14 classes, improving data balance and quality, which in turn enabled more precise feature extraction and better classification results. We created two ensemble models: the first uses Vision Transformers to capture long-range spatial relationships in the images, while the second combines CNN features with XGBoost for structured classification. We applied DCGAN-based augmentation to increase the dataset's diversity. Experiments showed substantial performance gains: the ADaDR-22 + Vision Transformer ensemble achieved the best results, with a testing accuracy of 93.4% and an AUC of 98.8%, while the ADaDR-22 + XGBoost model reached an accuracy of 92.2% and an AUC of 97.8%. These findings demonstrate the efficacy of the proposed ensemble models in detecting CRC and highlight the importance of well-balanced, high-quality datasets. The proposed method significantly enhances clinical diagnostic accuracy and the capabilities of medical image analysis for early CRC detection.


Figures:
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0c33/11614869/91ac06998506/41598_2024_81456_Fig1_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0c33/11614869/dadb8e3b7a53/41598_2024_81456_Fig2_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0c33/11614869/799f8c1a6e83/41598_2024_81456_Fig3_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0c33/11614869/c01db10ca944/41598_2024_81456_Fig4_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0c33/11614869/5ee9d0d1d1a5/41598_2024_81456_Fig5_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0c33/11614869/31fe55e96e59/41598_2024_81456_Fig6_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0c33/11614869/2196d3f70ad9/41598_2024_81456_Fig7_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0c33/11614869/88b797db811d/41598_2024_81456_Fig8_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0c33/11614869/03c0311d82aa/41598_2024_81456_Fig9_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0c33/11614869/cbe5dd158b92/41598_2024_81456_Fig10_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0c33/11614869/d1524121d6cf/41598_2024_81456_Fig11_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0c33/11614869/7ca0cd59cc70/41598_2024_81456_Fig12_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0c33/11614869/83232398dba7/41598_2024_81456_Fig13_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0c33/11614869/263bcf86625d/41598_2024_81456_Fig14_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0c33/11614869/e50e1e4d7dc3/41598_2024_81456_Fig15_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0c33/11614869/bee700a79567/41598_2024_81456_Fig16_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0c33/11614869/cd50a523e2af/41598_2024_81456_Fig17_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0c33/11614869/461a906bbd90/41598_2024_81456_Fig18_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0c33/11614869/a6c91024d1bc/41598_2024_81456_Fig19_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0c33/11614869/5966c38e3881/41598_2024_81456_Fig20_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0c33/11614869/19c3f8e5ad73/41598_2024_81456_Fig21_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0c33/11614869/5ef88bdf2395/41598_2024_81456_Fig22_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0c33/11614869/11c11ba1b5bd/41598_2024_81456_Fig23_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0c33/11614869/b0cf703e683d/41598_2024_81456_Fig24_HTML.jpg

Similar Articles

1. Exploring vision transformers and XGBoost as deep learning ensembles for transforming carcinoma recognition.
   Sci Rep. 2024 Dec 3;14(1):30052. doi: 10.1038/s41598-024-81456-1.
2. EnsemDeepCADx: Empowering Colorectal Cancer Diagnosis with Mixed-Dataset Features and Ensemble Fusion CNNs on Evidence-Based CKHK-22 Dataset.
   Bioengineering (Basel). 2023 Jun 19;10(6):738. doi: 10.3390/bioengineering10060738.
3. Brain tumor segmentation and detection in MRI using convolutional neural networks and VGG16.
   Cancer Biomark. 2025 Mar;42(3):18758592241311184. doi: 10.1177/18758592241311184. Epub 2025 Apr 4.
4. ResViT FusionNet Model: An explainable AI-driven approach for automated grading of diabetic retinopathy in retinal images.
   Comput Biol Med. 2025 Mar;186:109656. doi: 10.1016/j.compbiomed.2025.109656. Epub 2025 Jan 16.
5. Vision transformer and deep learning based weighted ensemble model for automated spine fracture type identification with GAN generated CT images.
   Sci Rep. 2025 Apr 25;15(1):14408. doi: 10.1038/s41598-025-98518-7.
6. Color-CADx: a deep learning approach for colorectal cancer classification through triple convolutional neural networks and discrete cosine transform.
   Sci Rep. 2024 Mar 22;14(1):6914. doi: 10.1038/s41598-024-56820-w.
7. Augmented histopathology: Enhancing colon cancer detection through deep learning and ensemble techniques.
   Microsc Res Tech. 2025 Jan;88(1):298-314. doi: 10.1002/jemt.24692. Epub 2024 Sep 30.
8. Pure Vision Transformer (CT-ViT) with Noise2Neighbors Interpolation for Low-Dose CT Image Denoising.
   J Imaging Inform Med. 2024 Oct;37(5):2669-2687. doi: 10.1007/s10278-024-01108-8. Epub 2024 Apr 15.
9. ColoRectalCADx: Expeditious Recognition of Colorectal Cancer with Integrated Convolutional Neural Networks and Visual Explanations Using Mixed Dataset Evidence.
   Comput Math Methods Med. 2022 Nov 10;2022:8723957. doi: 10.1155/2022/8723957. eCollection 2022.
10. Pathological Insights: Enhanced Vision Transformers for the Early Detection of Colorectal Cancer.
    Cancers (Basel). 2024 Apr 8;16(7):1441. doi: 10.3390/cancers16071441.

Cited By

1. A hybrid TCN-XGBoost model for agricultural product market price forecasting.
   PLoS One. 2025 May 2;20(5):e0322496. doi: 10.1371/journal.pone.0322496. eCollection 2025.

References

1. Automated system for classifying uni-bicompartmental knee osteoarthritis by using redefined residual learning with convolutional neural network.
   Heliyon. 2024 May 14;10(10):e31017. doi: 10.1016/j.heliyon.2024.e31017. eCollection 2024 May 30.
2. Colon and lung cancer classification from multi-modal images using resilient and efficient neural network architectures.
   Heliyon. 2024 May 3;10(9):e30625. doi: 10.1016/j.heliyon.2024.e30625. eCollection 2024 May 15.
3. DVFNet: A deep feature fusion-based model for the multiclassification of skin cancer utilizing dermoscopy images.
   PLoS One. 2024 Mar 20;19(3):e0297667. doi: 10.1371/journal.pone.0297667. eCollection 2024.
4. Personalized circulating tumor DNA monitoring improves recurrence surveillance and management after curative resection of colorectal liver metastases: a prospective cohort study.
   Int J Surg. 2024 May 1;110(5):2776-2787. doi: 10.1097/JS9.0000000000001236.
5. MSCDNet-based multi-class classification of skin cancer using dermoscopy images.
   PeerJ Comput Sci. 2023 Aug 29;9:e1520. doi: 10.7717/peerj-cs.1520. eCollection 2023.
6. EnsemDeepCADx: Empowering Colorectal Cancer Diagnosis with Mixed-Dataset Features and Ensemble Fusion CNNs on Evidence-Based CKHK-22 Dataset.
   Bioengineering (Basel). 2023 Jun 19;10(6):738. doi: 10.3390/bioengineering10060738.
7. Primate brain pattern-based automated Alzheimer's disease detection model using EEG signals.
   Cogn Neurodyn. 2023 Jun;17(3):647-659. doi: 10.1007/s11571-022-09859-2. Epub 2022 Aug 12.
8. DSCC_Net: Multi-Classification Deep Learning Models for Diagnosing of Skin Cancer Using Dermoscopic Images.
   Cancers (Basel). 2023 Apr 6;15(7):2179. doi: 10.3390/cancers15072179.
9. Classification of Cervical Spine Fracture and Dislocation Using Refined Pre-Trained Deep Model and Saliency Map.
   Diagnostics (Basel). 2023 Mar 28;13(7):1273. doi: 10.3390/diagnostics13071273.
10. Cancer incidence estimates for 2022 & projection for 2025: Result from National Cancer Registry Programme, India.
    Indian J Med Res. 2022 Oct-Nov;156(4&5):598-607. doi: 10.4103/ijmr.ijmr_1821_22.