Department of E&C Engineering, National Institute of Technology Karnataka, Surathkal, Mangaluru, Karnataka, 575025, India.
Department of Pathology, Kasturba Medical College, Mangalore, Manipal Academy of Higher Education, Manipal, India.
Int J Comput Assist Radiol Surg. 2021 Dec;16(12):2159-2175. doi: 10.1007/s11548-021-02497-9. Epub 2021 Oct 7.
The increasing incidence of cancer worldwide has become a major public health issue. Manual histopathological analysis is a common diagnostic method for cancer detection. Due to the complex structure and wide variability in the texture of histopathology images, diagnosing these images manually is challenging for pathologists. Automatic segmentation of histopathology images for cancer diagnosis has therefore become an active research area in recent years. The purpose of the proposed method is the segmentation and analysis of histopathology images for diagnosis using an efficient deep learning algorithm.
To improve segmentation performance, we propose a deep learning framework that consists of a high-resolution encoder path, an atrous spatial pyramid pooling (ASPP) bottleneck module, and a powerful decoder. Compared to benchmark segmentation models, which have a deep and thin path, our network is both wide and deep, effectively leveraging the strengths of residual learning as well as the encoder-decoder architecture.
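The abstract does not give implementation details, but the idea behind an ASPP bottleneck can be illustrated with a minimal NumPy sketch: several parallel convolutions whose kernel taps are dilated at different rates, so each branch sees a different receptive-field size over the same feature map. The function names, the fixed averaging kernel, and the rate set `(1, 6, 12, 18)` below are illustrative assumptions, not the paper's implementation (a real ASPP module uses learned kernels, an image-level pooling branch, and a 1x1 projection over the stacked responses).

```python
import numpy as np

def dilated_conv2d(x, kernel, rate):
    """'Same'-padded 2-D convolution with an atrous (dilated) 3x3 kernel:
    the kernel taps are spaced `rate` pixels apart, enlarging the
    receptive field without adding parameters."""
    k = kernel.shape[0]
    pad = rate * (k // 2)
    xp = np.pad(x, pad)
    H, W = x.shape
    out = np.zeros((H, W), dtype=float)
    for i in range(k):
        for j in range(k):
            out += kernel[i, j] * xp[i * rate:i * rate + H, j * rate:j * rate + W]
    return out

def aspp(x, rates=(1, 6, 12, 18)):
    """Minimal ASPP sketch: run parallel dilated convolutions at several
    rates over the same input and stack the responses."""
    kernel = np.full((3, 3), 1.0 / 9.0)  # illustrative averaging kernel
    return np.stack([dilated_conv2d(x, kernel, r) for r in rates])
```

On a constant input, the small-rate branch averages only nearby pixels while large-rate branches reach far beyond the local neighbourhood, which is exactly the multi-scale context the bottleneck is meant to capture.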
We performed careful experimentation and analysis on three publicly available datasets, namely a kidney dataset, the Triple-Negative Breast Cancer (TNBC) dataset, and the MoNuSeg histopathology image dataset. We used two widely preferred performance metrics, the F1 score and the aggregated Jaccard index (AJI), to evaluate the proposed model. The measured (F1, AJI) values are (0.9684, 0.9394) on the kidney dataset, (0.8419, 0.7282) on the TNBC histopathology dataset, and (0.8344, 0.7169) on the MoNuSeg dataset.
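The two metrics named above can be sketched in plain Python. The pixel-coordinate-set representation of masks and the matching rule below are simplifying assumptions for illustration: F1 is computed pixel-wise over the foreground, and AJI matches each ground-truth nucleus to the overlapping prediction with the highest IoU, counting unmatched predictions as false positives in the union term.

```python
def f1_score(gt_px, pred_px):
    """Pixel-wise F1 between two sets of foreground pixel coordinates."""
    tp = len(gt_px & pred_px)
    fp = len(pred_px - gt_px)
    fn = len(gt_px - pred_px)
    return 2 * tp / (2 * tp + fp + fn) if (tp or fp or fn) else 1.0

def aji(gt_instances, pred_instances):
    """Aggregated Jaccard Index over instance masks (sets of pixels).
    Each ground-truth instance is greedily matched to the unused
    prediction with the highest IoU; unmatched ground truths and
    unmatched predictions both inflate the union term."""
    used = set()
    inter_sum = union_sum = 0
    for g in gt_instances:
        best_iou, best_j = 0.0, None
        for j, p in enumerate(pred_instances):
            if j in used:
                continue
            inter = len(g & p)
            if inter == 0:
                continue
            iou = inter / len(g | p)
            if iou > best_iou:
                best_iou, best_j = iou, j
        if best_j is None:
            union_sum += len(g)          # missed instance
        else:
            used.add(best_j)
            p = pred_instances[best_j]
            inter_sum += len(g & p)
            union_sum += len(g | p)
    for j, p in enumerate(pred_instances):
        if j not in used:
            union_sum += len(p)          # spurious prediction
    return inter_sum / union_sum if union_sum else 1.0
```

AJI is the stricter of the two: it penalises both missed nuclei and spurious detections through the union term, which is why AJI values in the reported results sit below the corresponding F1 scores.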
Our proposed method yields better results than benchmark segmentation methods on all three histopathology datasets. The visual segmentation results corroborate the high F1 and AJI scores, indicating accurate predictions by the proposed model.