Exploring vision transformers and XGBoost as deep learning ensembles for transforming carcinoma recognition.

Author Information

Raju Akella Subrahmanya Narasimha, Venkatesh K, Padmaja B, Kumar C H N Santhosh, Patnala Pattabhi Rama Mohan, Lasisi Ayodele, Islam Saiful, Razak Abdul, Khan Wahaj Ahmad

Affiliations

Department of Computer Science and Engineering (Data Science), Institute of Aeronautical Engineering, Dundigul, Hyderabad, Telangana, 500043, India.

Department of Networking and Communications, School of Computing, SRM Institute of Science and Technology, Kattankulathur, Chennai, Tamil Nadu, 603203, India.

Publication Information

Sci Rep. 2024 Dec 3;14(1):30052. doi: 10.1038/s41598-024-81456-1.


DOI: 10.1038/s41598-024-81456-1
PMID: 39627293
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11614869/
Abstract

Early detection of colorectal carcinoma (CRC), one of the most prevalent cancers worldwide, significantly improves patient prognosis. This research presents a new method for improving CRC detection using a deep learning ensemble within a computer-aided diagnosis (CADx) framework. The method combines pre-trained convolutional neural network (CNN) models, such as ADaRDEV2I-22, DaRD-22, and ADaDR-22, with Vision Transformers (ViT) and XGBoost. The study addresses the challenges of imbalanced datasets and the need for sophisticated feature extraction in medical image analysis. The CKHK-22 dataset initially comprised 24 classes; we refined it to 14 classes, improving data balance and quality, which in turn enabled more precise feature extraction and better classification results. We created two ensemble models: the first used Vision Transformers to capture long-range spatial relationships in the images, while the second combined CNNs with XGBoost for structured feature classification. We also applied DCGAN-based augmentation to increase the dataset's diversity. Experiments showed substantial performance gains: the ADaDR-22 + Vision Transformer ensemble achieved the best results, with a testing accuracy of 93.4% and an AUC of 98.8%, while the ADaDR-22 + XGBoost model reached an accuracy of 92.2% and an AUC of 97.8%. These findings demonstrate the efficacy of the proposed ensemble models for CRC detection and underscore the importance of well-balanced, high-quality datasets. The proposed method significantly enhances clinical diagnostic accuracy and the capabilities of medical image analysis for early CRC detection.
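The abstract describes the two ensemble branches only at a high level, so the sketch below illustrates just the second branch (pretrained CNN features fed to XGBoost) under explicit assumptions: a torchvision ResNet-50 stands in for the authors' pretrained backbones (ADaRDEV2I-22, DaRD-22, and ADaDR-22 are not reproduced here), and random tensors replace the 14-class CKHK-22 images. The ViT branch would be analogous, fine-tuning a vision transformer (e.g., torchvision's vit_b_16) directly on the images and combining its class probabilities with this branch.

```python
"""Hypothetical sketch of the CNN-features -> XGBoost branch described in the
abstract. ResNet-50 and the random data below are stand-ins, not the authors'
models or dataset."""
import numpy as np
import torch
from torchvision.models import resnet50, ResNet50_Weights
from xgboost import XGBClassifier

NUM_CLASSES = 14       # abstract: CKHK-22 refined from 24 to 14 classes
SAMPLES_PER_CLASS = 4  # tiny dummy set, only to keep the sketch runnable

# 1) Pretrained CNN used purely as a feature extractor (classification head removed).
backbone = resnet50(weights=ResNet50_Weights.DEFAULT)  # downloads ImageNet weights
backbone.fc = torch.nn.Identity()                      # expose the 2048-d pooled features
backbone.eval()

# 2) Dummy images and labels in place of the colorectal image dataset.
images = torch.randn(NUM_CLASSES * SAMPLES_PER_CLASS, 3, 224, 224)
labels = np.repeat(np.arange(NUM_CLASSES), SAMPLES_PER_CLASS)

with torch.no_grad():
    features = backbone(images).numpy()                # shape: (N, 2048)

# 3) XGBoost classifies the extracted feature vectors (the "structured data" step).
clf = XGBClassifier(
    objective="multi:softprob",
    n_estimators=200,
    max_depth=4,
    learning_rate=0.1,
)
clf.fit(features, labels)
probs = clf.predict_proba(features)                    # (N, NUM_CLASSES) class probabilities
print("predicted classes:", probs.argmax(axis=1)[:10])
```

In the paper's setup these probabilities would be evaluated on a held-out test split (accuracy, AUC), and DCGAN-generated images would be added to the training set before feature extraction; neither step is shown here.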

[Figures 1-24: images omitted here; available with the full text at PMC11614869.]

Similar Articles

[1]
Exploring vision transformers and XGBoost as deep learning ensembles for transforming carcinoma recognition.

Sci Rep. 2024-12-3

[2]
EnsemDeepCADx: Empowering Colorectal Cancer Diagnosis with Mixed-Dataset Features and Ensemble Fusion CNNs on Evidence-Based CKHK-22 Dataset.

Bioengineering (Basel). 2023-6-19

[3]
Brain tumor segmentation and detection in MRI using convolutional neural networks and VGG16.

Cancer Biomark. 2025-3

[4]
ResViT FusionNet Model: An explainable AI-driven approach for automated grading of diabetic retinopathy in retinal images.

Comput Biol Med. 2025-3

[5]
Vision transformer and deep learning based weighted ensemble model for automated spine fracture type identification with GAN generated CT images.

Sci Rep. 2025-4-25

[6]
Color-CADx: a deep learning approach for colorectal cancer classification through triple convolutional neural networks and discrete cosine transform.

Sci Rep. 2024-3-22

[7]
Augmented histopathology: Enhancing colon cancer detection through deep learning and ensemble techniques.

Microsc Res Tech. 2025-1

[8]
Pure Vision Transformer (CT-ViT) with Noise2Neighbors Interpolation for Low-Dose CT Image Denoising.

J Imaging Inform Med. 2024-10

[9]
ColoRectalCADx: Expeditious Recognition of Colorectal Cancer with Integrated Convolutional Neural Networks and Visual Explanations Using Mixed Dataset Evidence.

Comput Math Methods Med. 2022-11-10

[10]
Pathological Insights: Enhanced Vision Transformers for the Early Detection of Colorectal Cancer.

Cancers (Basel). 2024-4-8

Cited By

[1]
A hybrid TCN-XGBoost model for agricultural product market price forecasting.

PLoS One. 2025-5-2

References

[1]
Automated system for classifying uni-bicompartmental knee osteoarthritis by using redefined residual learning with convolutional neural network.

Heliyon. 2024-5-14

[2]
Colon and lung cancer classification from multi-modal images using resilient and efficient neural network architectures.

Heliyon. 2024-5-3

[3]
DVFNet: A deep feature fusion-based model for the multiclassification of skin cancer utilizing dermoscopy images.

PLoS One. 2024

[4]
Personalized circulating tumor DNA monitoring improves recurrence surveillance and management after curative resection of colorectal liver metastases: a prospective cohort study.

Int J Surg. 2024-5-1

[5]
MSCDNet-based multi-class classification of skin cancer using dermoscopy images.

PeerJ Comput Sci. 2023-8-29

[6]
EnsemDeepCADx: Empowering Colorectal Cancer Diagnosis with Mixed-Dataset Features and Ensemble Fusion CNNs on Evidence-Based CKHK-22 Dataset.

Bioengineering (Basel). 2023-6-19

[7]
Primate brain pattern-based automated Alzheimer's disease detection model using EEG signals.

Cogn Neurodyn. 2023-6

[8]
DSCC_Net: Multi-Classification Deep Learning Models for Diagnosing of Skin Cancer Using Dermoscopic Images.

Cancers (Basel). 2023-4-6

[9]
Classification of Cervical Spine Fracture and Dislocation Using Refined Pre-Trained Deep Model and Saliency Map.

Diagnostics (Basel). 2023-3-28

[10]
Cancer incidence estimates for 2022 & projection for 2025: Result from National Cancer Registry Programme, India.

Indian J Med Res. 2022
