Juhong Aniwat, Li Bo, Yao Cheng-You, Yang Chia-Wei, Agnew Dalen W, Lei Yu Leo, Huang Xuefei, Piyawattanametha Wibool, Qiu Zhen
Department of Electrical and Computer Engineering, Michigan State University, East Lansing, MI 48823, USA.
Institute for Quantitative Health Science and Engineering, Michigan State University, East Lansing, MI 48823, USA.
Biomed Opt Express. 2022 Dec 5;14(1):18-36. doi: 10.1364/BOE.463839. eCollection 2023 Jan 1.
Traditionally, a high-performance microscope with a large numerical aperture is required to acquire high-resolution images. Such images, however, are typically very large, making them difficult to manage, to transfer across a computer network, or to store on systems with limited capacity. Image compression is therefore commonly applied to reduce file size, at the cost of image resolution. Here, we demonstrate custom convolutional neural networks (CNNs) for super-resolution enhancement of low-resolution images and for characterization of cells and nuclei in hematoxylin and eosin (H&E)-stained breast cancer histopathology images, using a combination of generator and discriminator networks, a super-resolution generative adversarial network based on aggregated residual transformations (SRGAN-ResNeXt), to facilitate cancer diagnosis in low-resource settings. The network substantially enhances image quality, with a peak signal-to-noise ratio above 30 dB and a structural similarity index above 0.93, performance superior to both bicubic interpolation and the well-known SRGAN deep-learning method. In addition, a second custom CNN segments the high-resolution breast cancer images generated by our model, achieving an average Intersection over Union of 0.869 and an average Dice similarity coefficient of 0.893 on the H&E images. Finally, we propose jointly trained SRGAN-ResNeXt and Inception U-net models, which use the weights of the individually trained SRGAN-ResNeXt and Inception U-net networks as pre-trained weights for transfer learning. The jointly trained models yield further improved and promising results. We anticipate that these custom CNNs can help compensate for the inaccessibility of advanced microscopes and whole-slide imaging (WSI) systems by recovering high-resolution images from low-performance microscopes in remote, resource-constrained settings.
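The abstract does not spell out the generator's internals; as a minimal sketch, assuming PyTorch, the aggregated-residual-transformation (ResNeXt) building block that gives SRGAN-ResNeXt its name can be expressed as a grouped-convolution residual block. The channel width, cardinality, and bottleneck factor below are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch of a ResNeXt-style residual block (aggregated residual
# transformations via grouped convolution). Hyperparameters are
# illustrative assumptions, not the paper's values.
import torch
import torch.nn as nn


class ResNeXtBlock(nn.Module):
    def __init__(self, channels=64, cardinality=32, bottleneck=4):
        super().__init__()
        mid = cardinality * bottleneck  # width of the grouped transform
        self.transform = nn.Sequential(
            nn.Conv2d(channels, mid, kernel_size=1, bias=False),
            nn.BatchNorm2d(mid),
            nn.ReLU(inplace=True),
            # One grouped 3x3 convolution realizes `cardinality` parallel
            # transformations ("aggregated residual transformations").
            nn.Conv2d(mid, mid, kernel_size=3, padding=1,
                      groups=cardinality, bias=False),
            nn.BatchNorm2d(mid),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid, channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        # The aggregated transformations are added to the identity shortcut.
        return x + self.transform(x)


x = torch.randn(1, 64, 32, 32)   # a 64-channel feature map
print(ResNeXtBlock()(x).shape)   # torch.Size([1, 64, 32, 32])
```

In a super-resolution generator, a stack of such blocks would typically be followed by upsampling layers (e.g., sub-pixel convolution), as in the original SRGAN.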
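The reported figures of merit are standard. A minimal NumPy sketch of PSNR (for the super-resolution output) and of IoU and the Dice coefficient (for the segmentation masks) follows; it is illustrative only, not the paper's evaluation code. SSIM is omitted here (skimage.metrics.structural_similarity is a common implementation).

```python
# Minimal NumPy sketch of the reported metrics. Assumes 8-bit images
# for PSNR and binary masks for IoU/Dice.
import numpy as np


def psnr(reference, restored, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two same-sized images."""
    diff = reference.astype(np.float64) - restored.astype(np.float64)
    mse = np.mean(diff ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)


def iou(pred, target):
    """Intersection over Union for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    return np.logical_and(pred, target).sum() / union if union else 1.0


def dice(pred, target):
    """Dice similarity coefficient for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    total = pred.sum() + target.sum()
    return 2.0 * np.logical_and(pred, target).sum() / total if total else 1.0


mask = np.random.rand(128, 128) > 0.5
print(iou(mask, mask), dice(mask, mask))  # 1.0 1.0 for identical masks
```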