IEEE Trans Med Imaging. 2018 Mar;37(3):792-802. doi: 10.1109/TMI.2017.2781228.
It is generally recognized that color information is central to the automatic and visual analysis of histopathology tissue slides. In practice, pathologists rely on color, which reflects the presence of specific tissue components, to establish a diagnosis. Similarly, automatic histopathology image analysis algorithms rely on color or intensity measures to extract tissue features. With increasing access to digitized histopathology images, color variation and its implications have become a critical issue. These variations result not only from the many factors involved in the preparation of tissue slides but also from the digitization process itself. Consequently, different strategies have been proposed to alleviate stain-related inconsistencies in automatic image analysis systems. Such techniques generally rely on collecting color statistics to perform color matching across images. In this work, we propose a different approach to stain normalization that we refer to as stain transfer. We design a discriminative image analysis model equipped with a stain normalization component that transfers stains across datasets. Our model comprises a generative network, which learns dataset-specific staining properties and image-specific color transformations, as well as a task-specific network (e.g., a classifier or segmentation network). The model is trained end-to-end using a multi-objective cost function. We evaluate the proposed approach in the context of automatic histopathology image analysis on three datasets and two different analysis tasks: tissue segmentation and classification. The proposed method achieves superior results in terms of accuracy and quality of normalized images compared to various baselines.
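The color-statistics matching that the abstract identifies as the common baseline can be sketched as follows. This is a minimal illustration in the style of Reinhard-style normalization, assuming per-channel mean/standard-deviation matching applied directly to the image channels (real implementations typically first convert to a perceptual color space such as LAB); the function name and epsilon value are illustrative, not from the paper.

```python
import numpy as np

def match_color_stats(source, target):
    """Shift per-channel mean and std of `source` to those of `target`.

    A sketch of the color-statistics matching baseline: standardize each
    channel of the source image, then rescale it to the target image's
    channel statistics. Operates on H x W x C uint8 arrays.
    """
    src = source.astype(np.float64)
    tgt = target.astype(np.float64)
    src_mean = src.mean(axis=(0, 1))
    src_std = src.std(axis=(0, 1)) + 1e-8   # avoid division by zero
    tgt_mean = tgt.mean(axis=(0, 1))
    tgt_std = tgt.std(axis=(0, 1))
    out = (src - src_mean) / src_std * tgt_std + tgt_mean
    return np.clip(out, 0, 255).astype(np.uint8)
```

Because such baselines only align global color statistics, they cannot model dataset-specific staining properties per se, which is the limitation the learned stain-transfer model described above is designed to address.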