StainNet: A Fast and Robust Stain Normalization Network.

Author Information

Kang Hongtao, Luo Die, Feng Weihua, Zeng Shaoqun, Quan Tingwei, Hu Junbo, Liu Xiuli

Affiliations

Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China.

Ministry of Education (MOE) Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China.

Publication Information

Front Med (Lausanne). 2021 Nov 5;8:746307. doi: 10.3389/fmed.2021.746307. eCollection 2021.

Abstract

Stain normalization often refers to transferring the color distribution to the target image and has been widely used in biomedical image analysis. Conventional stain normalization is usually achieved through a pixel-by-pixel color mapping model that depends on a single reference image, which makes it hard to accurately achieve the style transformation between image datasets. In principle, this difficulty can be well solved by deep learning-based methods; however, their complicated structure results in low computational efficiency and artifacts in the style transformation, which has restricted their practical application. Here, we use distillation learning to reduce the complexity of deep learning methods and propose a fast and robust network called StainNet to learn the color mapping between the source image and the target image. StainNet can learn the color mapping relationship from a whole dataset and adjust color values in a pixel-to-pixel manner. The pixel-to-pixel manner restricts the network size and avoids artifacts in the style transformation. Results on the cytopathology and histopathology datasets show that StainNet can achieve performance comparable to deep learning-based methods. Computation results demonstrate that StainNet is more than 40 times faster than StainGAN and can normalize a 100,000 × 100,000 whole slide image in 40 s.
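The abstract describes StainNet as a pixel-to-pixel color mapping network trained by distilling a heavier style-transfer model. The sketch below illustrates that idea in PyTorch; the layer count, channel width, the 1×1-convolution design, and the L1 distillation loss against a pre-trained teacher (e.g., a StainGAN generator) are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of a StainNet-style pixel-wise color mapping network (assumptions noted above).
import torch
import torch.nn as nn


class StainNetSketch(nn.Module):
    """Pixel-to-pixel color mapping: every layer is a 1x1 convolution, so each output
    pixel's color depends only on that pixel's input color (no spatial context)."""

    def __init__(self, in_channels=3, out_channels=3, hidden=32, n_layers=3):
        super().__init__()
        layers, c = [], in_channels
        for _ in range(n_layers):
            layers += [nn.Conv2d(c, hidden, kernel_size=1), nn.ReLU(inplace=True)]
            c = hidden
        layers.append(nn.Conv2d(c, out_channels, kernel_size=1))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)


def distill_step(student, teacher, source_batch, optimizer):
    """One distillation step: the small student regresses the teacher's normalized output."""
    with torch.no_grad():
        target = teacher(source_batch)          # teacher = pre-trained normalization model
    pred = student(source_batch)
    loss = nn.functional.l1_loss(pred, target)  # pixel-wise L1 distillation loss (assumed)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    student = StainNetSketch()
    teacher = StainNetSketch()  # placeholder; in practice a pre-trained StainGAN generator
    opt = torch.optim.Adam(student.parameters(), lr=1e-3)
    dummy = torch.rand(2, 3, 64, 64)  # a toy batch of source-stain image patches
    print(distill_step(student, teacher, dummy, opt))
```

Because every layer is a 1×1 convolution, the whole network is effectively a learned per-color lookup, which is why such a model can run over a very large whole slide image quickly.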

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0f3e/8602577/25a9f5859fd6/fmed-08-746307-g0001.jpg
