Berijanian Maryam, Schaadt Nadine S, Huang Boqiang, Lotz Johannes, Feuerhake Friedrich, Merhof Dorit
Department of Computational Mathematics, Science and Engineering (CMSE), Michigan State University, East Lansing, USA.
Institute of Imaging and Computer Vision, RWTH Aachen University, Aachen, Germany.
J Pathol Inform. 2023 Jan 25;14:100195. doi: 10.1016/j.jpi.2023.100195. eCollection 2023.
Deep learning tasks, which require large numbers of images, are widely applied in digital pathology. This poses a particular challenge for supervised tasks, since manual image annotation is an expensive and laborious process. The situation worsens further when the images exhibit large variability. Coping with this problem requires methods such as image augmentation and synthetic image generation. In this regard, unsupervised stain translation via GANs has recently gained much attention, but a separate network must be trained for each pair of source and target domains. This work enables unsupervised many-to-many translation of histopathological stains with a single network while seeking to maintain the shape and structure of the tissues.
StarGAN-v2 is adapted for unsupervised many-to-many stain translation of histopathology images of breast tissue. An edge detector is incorporated to motivate the network to maintain the shape and structure of the tissues and to produce an edge-preserving translation. Additionally, a subjective test is conducted with medical and technical experts in the field of digital pathology to evaluate the quality of the generated images and to verify that they are indistinguishable from real images. As a proof of concept, breast cancer classifiers are trained with and without the generated images to quantify the effect of augmentation with the synthesized images on classification accuracy.
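The abstract does not specify how the edge detector enters the training objective; a common way to realize such edge preservation is to penalize differences between the edge maps of the source image and its translation. The following is a minimal illustrative sketch of that idea using Sobel filters (the function names `sobel_edges` and `edge_consistency_loss` are hypothetical, not from the paper):

```python
import numpy as np

def sobel_edges(img):
    """Approximate edge magnitude of a 2D grayscale image via Sobel filters."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)  # horizontal gradient kernel
    ky = kx.T                                                         # vertical gradient kernel
    pad = np.pad(img, 1, mode="edge")  # replicate borders so output keeps the input shape
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    return np.hypot(gx, gy)  # gradient magnitude per pixel

def edge_consistency_loss(source, translated):
    """Mean absolute difference between the edge maps of two images.

    In an edge-preserving GAN objective, a term like this would be added
    (with some weight) to the generator loss, so that the translated image
    keeps the tissue boundaries of the source image.
    """
    return float(np.mean(np.abs(sobel_edges(source) - sobel_edges(translated))))
```

An unchanged image yields a loss of zero, while a translation that erases structure (e.g. a flat image) is penalized; in practice such a term would be computed with differentiable convolutions inside the training loop rather than with explicit Python loops.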
The results show that adding an edge detector helps to improve the quality of the translated images and to preserve the general structure of the tissues. Quality control and subjective tests with our medical and technical experts show that the real and artificial images cannot be distinguished, confirming that the synthetic images are technically plausible. Moreover, this research shows that, by augmenting the training dataset with the outputs of the proposed stain translation method, the accuracy of breast cancer classifiers based on ResNet-50 and VGG-16 improves by 8.0% and 9.3%, respectively.
This research indicates that a translation from an arbitrary source stain to other stains can be performed effectively within the proposed framework. The generated images are realistic and could be employed to train deep neural networks to improve their performance and cope with the problem of insufficient numbers of annotated images.