Zaka Ur Rehman, Mohammad Faizal Ahmad Fauzi, Wan Siti Halimatul Munirah Wan Ahmad, Fazly Salleh Abas, Phaik-Leng Cheah, Seow-Fan Chiew, Lai-Meng Looi
Faculty of Engineering, Multimedia University, Cyberjaya 63100, Malaysia.
Institute for Research, Development and Innovation, IMU University, Bukit Jalil, Kuala Lumpur 57000, Malaysia.
Cancers (Basel). 2024 Nov 11;16(22):3794. doi: 10.3390/cancers16223794.
Fluorescence in situ hybridization (FISH) is widely regarded as the gold standard for evaluating human epidermal growth factor receptor 2 (HER2) status in breast cancer; however, it requires specialized training and suffers from signal degradation due to dye quenching. Silver-enhanced in situ hybridization (SISH) is an automated alternative that uses permanent staining suitable for bright-field microscopy. Determining HER2 status involves distinguishing "Amplified" from "Non-Amplified" regions by assessing HER2 and centromere 17 (CEN17) signals in SISH-stained slides. This study is the first to leverage deep learning for classifying Normal, Amplified, and Non-Amplified regions within HER2-SISH whole slide images (WSIs), which are considerably more complex to analyze than hematoxylin and eosin (H&E)-stained slides. Our proposed approach consists of a two-stage process: first, we evaluate deep-learning models on annotated image regions, and then we apply the most effective model to WSIs for region identification and localization. Subsequently, pseudo-color maps representing each class are overlaid, and the WSIs are reconstructed with these mapped regions. Using a private dataset of HER2-SISH breast cancer slides digitized at 40× magnification, we achieved a patch-level classification accuracy of 99.9% and a generalization accuracy of 78.8% by applying transfer learning with a Vision Transformer (ViT) model. The robustness of the model was further evaluated through k-fold cross-validation, yielding an average accuracy of 98%, with metrics reported alongside 95% confidence intervals to ensure statistical reliability. This method shows significant promise for clinical applications, particularly in assessing HER2 expression status in HER2-SISH histopathology images. It provides an automated solution that can aid pathologists in efficiently identifying HER2-amplified regions, thus enhancing diagnostic outcomes for breast cancer treatment.
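The WSI reconstruction step described above (overlaying a pseudo-color map per class on the patch grid) can be sketched as follows. This is an illustrative sketch, not the authors' code: the class-to-color assignments and the row-major patch ordering are assumptions for demonstration only.

```python
# Sketch: rebuild a low-resolution pseudo-color map of a WSI from
# per-patch class predictions. Colors per class are hypothetical choices.
CLASS_COLORS = {
    "Normal": (0, 255, 0),         # green (assumed)
    "Amplified": (255, 0, 0),      # red (assumed)
    "Non-Amplified": (0, 0, 255),  # blue (assumed)
}

def build_pseudo_color_map(patch_preds, grid_w, grid_h):
    """patch_preds: predicted class names in row-major patch order.
    Returns a grid_h x grid_w grid of RGB tuples."""
    assert len(patch_preds) == grid_w * grid_h, "prediction count must match grid"
    return [
        [CLASS_COLORS[patch_preds[r * grid_w + c]] for c in range(grid_w)]
        for r in range(grid_h)
    ]

# Toy 2x2 patch grid of predictions
preds = ["Normal", "Amplified", "Non-Amplified", "Normal"]
cmap = build_pseudo_color_map(preds, grid_w=2, grid_h=2)
```

In practice each cell of `cmap` would be alpha-blended over the corresponding patch of the downsampled WSI to produce the reconstructed overlay.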
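The reported k-fold evaluation (mean accuracy with a 95% confidence interval) can be computed as sketched below. The fold accuracies here are hypothetical placeholders, and the interval uses a normal approximation; for small k, a Student-t critical value would be more appropriate.

```python
import statistics

def mean_ci95(scores):
    """Mean of per-fold accuracies with a normal-approximation 95% CI.
    scores: list of k fold accuracies in [0, 1]."""
    k = len(scores)
    mean = statistics.fmean(scores)
    sem = statistics.stdev(scores) / k ** 0.5  # standard error of the mean
    half = 1.96 * sem                          # z for 95% (normal approx.)
    return mean, (mean - half, mean + half)

# Hypothetical accuracies from 5-fold cross-validation
fold_acc = [0.981, 0.979, 0.984, 0.978, 0.980]
mean, (lo, hi) = mean_ci95(fold_acc)
```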