Diagnostic Image Analysis Group and the Department of Pathology, Radboud University Medical Center, Nijmegen, the Netherlands.
Med Image Anal. 2021 Feb;68:101890. doi: 10.1016/j.media.2020.101890. Epub 2020 Oct 29.
We propose HookNet, a semantic segmentation model for histopathology whole-slide images, which combines context and details via multiple branches of encoder-decoder convolutional neural networks. Concentric patches at multiple resolutions with different fields of view feed different branches of HookNet, and intermediate representations are combined via a hooking mechanism. We describe a framework to design and train HookNet for achieving high-resolution semantic segmentation and introduce constraints to guarantee pixel-wise alignment in feature maps during hooking. We show the advantages of using HookNet in two histopathology image segmentation tasks where tissue type prediction accuracy strongly depends on contextual information, namely (1) multi-class tissue segmentation in breast cancer and (2) segmentation of tertiary lymphoid structures and germinal centers in lung cancer. We show the superiority of HookNet when compared with single-resolution U-Net models working at different resolutions, as well as with a recently published multi-resolution model for histopathology image segmentation. We have made HookNet publicly available by releasing the source code as well as in the form of web-based applications based on the grand-challenge.org platform.
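The core idea of the hooking mechanism (combining a center-cropped context-branch feature map with a target-branch feature map, under a pixel-alignment constraint) can be sketched as follows. This is a minimal illustrative NumPy stand-in, not the released HookNet implementation; the function name `hook` and the even-size-difference check are assumptions made for this sketch.

```python
import numpy as np

def hook(context_feat, target_feat):
    """Center-crop the context-branch feature map so its spatial size
    matches the target-branch feature map, then concatenate channels.

    Both inputs are (channels, height, width) arrays. Pixel-wise
    alignment requires an integer crop offset on both sides, so the
    spatial size difference between the two maps must be even.
    """
    _, ch, cw = context_feat.shape
    _, th, tw = target_feat.shape
    dh, dw = ch - th, cw - tw
    assert dh >= 0 and dw >= 0 and dh % 2 == 0 and dw % 2 == 0, \
        "context map must be larger, with an even size difference"
    top, left = dh // 2, dw // 2
    cropped = context_feat[:, top:top + th, left:left + tw]
    # Channel-wise concatenation of the aligned representations.
    return np.concatenate([cropped, target_feat], axis=0)

# Example: a 32-channel context map (36x36) hooked into a
# 32-channel target map (28x28) yields a 64-channel 28x28 map.
combined = hook(np.zeros((32, 36, 36)), np.ones((32, 28, 28)))
print(combined.shape)  # (64, 28, 28)
```

In the actual model this combination happens between intermediate layers of the two encoder-decoder branches, which is what motivates the alignment constraints described in the paper.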