Department of Computer Science and Information Technology, University of Balochistan, Quetta 87300, Pakistan.
College of Computer Science and Information Systems, Najran University, Najran 61441, Saudi Arabia.
Sensors (Basel). 2021 Aug 9;21(16):5361. doi: 10.3390/s21165361.
The classification of whole slide images (WSIs) provides physicians with an accurate analysis of diseases and helps them treat patients effectively; the classification can also be linked to further detailed analysis and diagnosis. Deep learning (DL) has made significant advances in the medical field, including the use of magnetic resonance imaging (MRI) scans, computed tomography (CT) scans, and electrocardiograms (ECGs) to detect life-threatening conditions such as heart disease, cancer, and brain tumors. Pathology, however, still needs further advancement, and the main hurdle slowing progress is the shortage of large labeled datasets of histopathology images for training models. The Kimia Path24 dataset was created specifically for the classification and retrieval of histopathology images; it contains 23,916 histopathology patches spanning 24 tissue texture classes. A transfer-learning-based framework is proposed and evaluated on two widely used DL models, Inception-V3 and VGG-16. To improve their performance, we use their pre-trained weights and concatenate the resulting features with the image vector, which then serves as input for training the same architecture. Experiments show that the proposed approach improves the accuracy of both models: the patch-to-scan accuracy of VGG-16 rises from 0.65 to 0.77, and that of Inception-V3 from 0.74 to 0.79.
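The fusion step described above (concatenating pre-trained-backbone features with the raw image vector before retraining) can be sketched as follows. This is a minimal illustration, not the paper's exact pipeline: the 224x224x3 patch size, the stand-in feature extractor, and the `fuse` helper are all assumptions; a real implementation would obtain the feature vector from a frozen VGG-16 or Inception-V3 network.

```python
import numpy as np

NUM_CLASSES = 24  # Kimia Path24 tissue texture classes

def backbone_features(patch: np.ndarray) -> np.ndarray:
    """Stand-in for a frozen VGG-16 / Inception-V3 feature extractor.

    A real pipeline would run the patch through a pre-trained network
    (e.g. model.predict(patch[None])); here, global average pooling
    over the spatial dimensions yields a fixed-length placeholder."""
    return patch.mean(axis=(0, 1))  # (H, W, C) -> (C,)

def fuse(patch: np.ndarray) -> np.ndarray:
    """Concatenate backbone features with the flattened image vector,
    producing the combined input used to retrain the classifier."""
    return np.concatenate([backbone_features(patch), patch.reshape(-1)])

# Hypothetical input size; the paper does not specify patch dimensions here.
patch = np.random.rand(224, 224, 3).astype(np.float32)
fused = fuse(patch)
print(fused.shape)  # (3 + 224*224*3,) = (150531,)
```

The fused vector would then be fed into a dense classification head with `NUM_CLASSES` outputs; the key design choice is that the backbone stays frozen, so only the head is trained on the (relatively small) histopathology dataset.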