IEEE Trans Med Imaging. 2020 Jul;39(7):2395-2405. doi: 10.1109/TMI.2020.2971006. Epub 2020 Feb 3.
Digital histology images are amenable to analysis with convolutional neural networks (CNNs) due to the sheer volume of pixel data they contain. Owing to computational and memory constraints, CNNs are generally used for representation learning from small image patches (e.g., 224×224) extracted from digital histology images. However, this approach fails to incorporate high-resolution contextual information in histology images. We propose a novel way to incorporate a larger context via a context-aware neural network operating on images of 1792×1792 pixels. The proposed framework first encodes the local representation of a histology image into high-dimensional features and then aggregates these features, taking their spatial organization into account, to make a final prediction. We evaluated the proposed method on two colorectal cancer datasets for the task of cancer grading. Our method outperformed traditional patch-based approaches, problem-specific methods, and existing context-based methods. We also present a comprehensive analysis of different variants of the proposed method.
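The two-stage idea in the abstract, encoding local patches and then aggregating them with their spatial layout preserved, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the patch size (224×224) and context size (1792×1792) come from the abstract, while `encode_patch` is a hypothetical stand-in for the local CNN encoder and `FEAT_DIM` is an arbitrary placeholder dimensionality.

```python
import numpy as np

PATCH = 224          # local patch size, as in the abstract
CONTEXT = 1792       # context image size, as in the abstract
GRID = CONTEXT // PATCH  # 8x8 grid of patches
FEAT_DIM = 16        # placeholder feature dimensionality (assumption)

def encode_patch(patch: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for the local CNN encoder:
    reduces a 224x224 patch to a FEAT_DIM-dimensional vector."""
    return patch.reshape(FEAT_DIM, -1).mean(axis=1)

def context_aware_features(image: np.ndarray) -> np.ndarray:
    """Encode every patch and keep the 8x8 spatial layout of features,
    so a downstream aggregator can exploit spatial organization."""
    grid = np.empty((GRID, GRID, FEAT_DIM))
    for i in range(GRID):
        for j in range(GRID):
            patch = image[i * PATCH:(i + 1) * PATCH,
                          j * PATCH:(j + 1) * PATCH]
            grid[i, j] = encode_patch(patch)
    return grid  # shape (8, 8, FEAT_DIM): input to the spatial aggregator

image = np.random.rand(CONTEXT, CONTEXT)  # toy single-channel image
features = context_aware_features(image)
print(features.shape)  # (8, 8, 16)
```

The key design point is that the encoder's output is a feature *grid*, not a pooled vector, so the second stage can reason over spatial relationships between neighboring patches rather than treating them as an unordered bag.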