Department of Computer and Information Science, University of Konstanz, Konstanz, Germany.
Department of Research Pathology, Genentech Inc, South San Francisco, California.
Mod Pathol. 2024 Nov;37(11):100591. doi: 10.1016/j.modpat.2024.100591. Epub 2024 Aug 13.
Despite recent advances, the adoption of computer vision methods into clinical and commercial applications has been hampered by the limited availability of the accurate ground truth tissue annotations required to train robust supervised models. Generating such ground truth can be accelerated by annotating tissue molecularly using immunofluorescence (IF) staining and mapping these annotations to a post-IF hematoxylin and eosin (H&E) (terminal H&E) stain. Mapping the annotations between IF and terminal H&E increases both the scale and accuracy with which ground truth can be generated. However, discrepancies between terminal H&E and conventional H&E caused by IF tissue processing have limited this implementation. We sought to overcome this challenge and achieve compatibility between these parallel modalities using synthetic image generation, in which a cycle-consistent generative adversarial network was applied to transfer the appearance of conventional H&E such that it emulates terminal H&E. These synthetic emulations allowed us to train a deep learning model for the segmentation of epithelium in terminal H&E that could be validated against the IF staining of epithelial-based cytokeratins. The combination of this segmentation model with the cycle-consistent generative adversarial network stain transfer model enabled performant epithelium segmentation in conventional H&E images. The approach demonstrates that the training of accurate segmentation models for the breadth of conventional H&E data can be executed free of human expert annotations by leveraging molecular annotation strategies such as IF, so long as the tissue impacts of the molecular annotation protocol are captured by generative models that can be deployed prior to the segmentation process.
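The inference pipeline the abstract describes composes two models: a stain transfer generator that maps conventional H&E into the terminal-H&E appearance domain, followed by an epithelium segmentation network trained on terminal H&E. The sketch below illustrates only that composition; both `stain_transfer` and `segment_epithelium` are hypothetical placeholders (a crude color recoloring and an intensity threshold), not the paper's trained CycleGAN or segmentation model.

```python
import numpy as np

def stain_transfer(tile: np.ndarray) -> np.ndarray:
    """Placeholder for a trained CycleGAN generator G: conventional -> terminal H&E.

    Here approximated by shifting per-channel color statistics toward an
    assumed (illustrative, not measured) terminal-H&E color profile.
    """
    target_mean = np.array([0.62, 0.45, 0.70])  # assumed RGB means
    target_std = np.array([0.18, 0.20, 0.15])   # assumed RGB std devs
    mean = tile.mean(axis=(0, 1))
    std = tile.std(axis=(0, 1)) + 1e-8
    return np.clip((tile - mean) / std * target_std + target_mean, 0.0, 1.0)

def segment_epithelium(tile: np.ndarray) -> np.ndarray:
    """Placeholder for the segmentation network trained on terminal H&E.

    Stands in for the real model with a simple intensity threshold so the
    pipeline runs end to end; output is a binary epithelium mask.
    """
    gray = tile.mean(axis=2)
    return (gray < 0.55).astype(np.uint8)

def predict_conventional(tile: np.ndarray) -> np.ndarray:
    # Key point of the approach: conventional H&E is first mapped into the
    # terminal-H&E domain so the segmentation model sees its training domain.
    return segment_epithelium(stain_transfer(tile))

rng = np.random.default_rng(0)
tile = rng.random((256, 256, 3))  # stand-in RGB tile with values in [0, 1]
mask = predict_conventional(tile)
print(mask.shape, mask.dtype)  # (256, 256) uint8
```

In practice both placeholders would be replaced by trained networks; the structural point is that the generator is deployed strictly before segmentation, so the segmentation model never has to generalize across the IF-induced domain shift.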