Chanchal Amit Kumar, Lal Shyam, Suresh Shilpa
School of Computing, MIT Vishwaprayag University, Solapur, Maharashtra, 413255, India.
Department of Electronics and Communication Engineering, National Institute of Technology Karnataka, Surathkal, Mangaluru, Karnataka, 575025, India.
Sci Rep. 2025 Aug 5;15(1):28585. doi: 10.1038/s41598-025-10712-9.
Kidney cancer is a leading cause of cancer-related mortality, with renal cell carcinoma (RCC) being the most prevalent form, accounting for 80-85% of all renal tumors. Traditional diagnosis of kidney cancer requires manual examination and analysis of histopathology images, which is time-consuming, error-prone, and dependent on the pathologist's expertise. Recently, deep learning algorithms have gained significant attention in histopathology image analysis. In this study, we developed an efficient and robust deep learning architecture called RenalNet for the classification of RCC subtypes from kidney histopathology images. RenalNet is designed to capture cross-channel and inter-spatial features at three different scales simultaneously and combine them. Cross-channel features refer to the relationships and dependencies between different data channels, while inter-spatial features refer to patterns within small spatial regions. The architecture contains a CNN module called Multiple Channel Residual Transformation (MCRT), which focuses on the most relevant morphological features of RCC by fusing information from multiple paths. Further, to improve the network's representation power, a CNN module called Group Convolutional Deep Localization (GCDL) is introduced, which effectively integrates three different feature descriptors. As part of this study, we also introduce a novel benchmark dataset for the classification of RCC subtypes from kidney histopathology images. We obtained digital hematoxylin and eosin (H&E)-stained whole-slide images (WSIs) from The Cancer Genome Atlas (TCGA) and extracted regions of interest (ROIs) under the supervision of experienced pathologists, from which the patches were created. To demonstrate that the proposed model generalizes and is not tied to a single dataset, it was evaluated on three well-known datasets. Compared with the best-performing state-of-the-art model, RenalNet achieves accuracies of 91.67%, 97.14%, and 97.24% on the three datasets. Additionally, the proposed method significantly reduces the number of parameters and FLOPs, demonstrating computational efficiency with 2.71 × [Formula: see text] FLOPs and 0.2131 × [Formula: see text] parameters.
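To make the multi-scale, residual fusion idea described in the abstract concrete, the sketch below shows a generic PyTorch block with three parallel convolutional paths at different kernel sizes whose outputs are fused and combined with a residual shortcut. The class name, channel counts, kernel sizes, and fusion strategy are illustrative assumptions; this is not the authors' MCRT or GCDL implementation.

```python
# Minimal sketch of a multi-scale residual block, assuming concatenation-based
# fusion of three paths (1x1, 3x3, 5x5) followed by a 1x1 projection and a
# residual shortcut. Hypothetical stand-in for the ideas behind RenalNet's MCRT.
import torch
import torch.nn as nn


class MultiScaleResidualBlock(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        # Three parallel convolutional paths at different receptive-field sizes.
        self.paths = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(channels, channels, kernel_size=k, padding=k // 2),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            )
            for k in (1, 3, 5)
        ])
        # 1x1 convolution fuses the concatenated multi-scale features back to `channels`.
        self.fuse = nn.Conv2d(3 * channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        multi_scale = torch.cat([path(x) for path in self.paths], dim=1)
        return torch.relu(self.fuse(multi_scale) + x)  # residual combination


if __name__ == "__main__":
    block = MultiScaleResidualBlock(64)
    patch_features = torch.randn(1, 64, 224, 224)  # e.g. features of an H&E patch
    print(block(patch_features).shape)             # torch.Size([1, 64, 224, 224])
```

Concatenation followed by a 1x1 projection is only one plausible way to "fuse information from multiple paths"; weighted summation or attention-based fusion would fit the same description.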