Institute for AI in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany; Cancer Research Center Cologne Essen (CCCE), West German Cancer Center Essen, University Hospital Essen (AöR), Essen, Germany.
Comput Med Imaging Graph. 2023 Jul;107:102238. doi: 10.1016/j.compmedimag.2023.102238. Epub 2023 May 11.
The segmentation of histopathological whole slide images into tumourous and non-tumourous tissue types is a challenging task that requires both local and global spatial context to classify tumourous regions precisely. Identifying subtypes of tumour tissue complicates the problem further, as the sharpness of separation decreases and the pathologist's reasoning relies even more heavily on spatial context. However, identifying detailed tissue types is crucial for providing personalized cancer therapies. Due to the high resolution of whole slide images, existing semantic segmentation methods are restricted to isolated image sections and cannot process context information beyond them. To take a step towards better context comprehension, we propose a patch neighbour attention mechanism that queries the neighbouring tissue context from a patch embedding memory bank and infuses the resulting context embeddings into the bottleneck hidden feature maps. Our memory attention framework (MAF) mimics a pathologist's annotation procedure of zooming out and considering the surrounding tissue context, and it can be integrated into any encoder-decoder segmentation method. We evaluate the MAF on two public breast cancer and liver cancer data sets and an internal kidney cancer data set using well-established segmentation models (U-Net, DeeplabV3) and demonstrate its superiority over other context-integrating algorithms, achieving a substantial improvement of up to 17% in Dice score. The code is publicly available at https://github.com/tio-ikim/valuing-vicinity.
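To make the mechanism concrete, the following is a minimal PyTorch sketch of the patch neighbour attention idea described above: a bottleneck feature map of the current patch queries a memory bank of neighbouring patch embeddings, and the attended context is fused back into the bottleneck. All names (PatchNeighbourAttention, embed_dim, fuse), the use of nn.MultiheadAttention, and the fusion via a 1x1 convolution are illustrative assumptions, not the authors' implementation; see the linked repository for the actual code.

```python
# Hedged sketch of a patch-neighbour attention module (not the authors' code).
import torch
import torch.nn as nn


class PatchNeighbourAttention(nn.Module):
    """Queries a memory bank of neighbouring patch embeddings and fuses the
    attended context into the bottleneck feature map of an encoder-decoder."""

    def __init__(self, embed_dim: int = 256, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        # 1x1 convolution projecting the concatenated (bottleneck + context)
        # channels back to the original bottleneck width.
        self.fuse = nn.Conv2d(2 * embed_dim, embed_dim, kernel_size=1)

    def forward(self, bottleneck: torch.Tensor, neighbour_bank: torch.Tensor) -> torch.Tensor:
        # bottleneck:     (B, C, H, W) hidden feature map of the current patch
        # neighbour_bank: (B, N, C) embeddings of N surrounding patches,
        #                 looked up from a precomputed memory bank
        b, c, h, w = bottleneck.shape

        # Use the globally pooled bottleneck as the query over its neighbourhood.
        query = bottleneck.mean(dim=(2, 3)).unsqueeze(1)               # (B, 1, C)
        context, _ = self.attn(query, neighbour_bank, neighbour_bank)  # (B, 1, C)

        # Broadcast the context vector over the spatial grid and fuse it
        # with the bottleneck (residual-style).
        context_map = context.transpose(1, 2).reshape(b, c, 1, 1).expand(b, c, h, w)
        return bottleneck + self.fuse(torch.cat([bottleneck, context_map], dim=1))


if __name__ == "__main__":
    maf = PatchNeighbourAttention(embed_dim=256, num_heads=4)
    bottleneck = torch.randn(2, 256, 16, 16)   # e.g. a U-Net bottleneck for one patch
    neighbour_bank = torch.randn(2, 8, 256)    # embeddings of 8 neighbouring patches
    print(maf(bottleneck, neighbour_bank).shape)  # torch.Size([2, 256, 16, 16])
```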