Huang Pan, Tan Xiaoheng, Zhou Xiaoli, Liu Shuxian, Mercaldo Francesco, Santone Antonella
IEEE J Biomed Health Inform. 2022 Apr;26(4):1696-1707. doi: 10.1109/JBHI.2021.3108999. Epub 2022 Apr 14.
Laryngeal cancer tumor (LCT) grading is a challenging task in P63 immunohistochemical (IHC) histopathology images due to the small differences between LCT levels in pathology images, the lack of precision in lesion regions of interest (LROIs), and the paucity of LCT pathology image samples. The key to solving the LCT grading problem is to transfer knowledge from other images and to identify more accurate LROIs, but the following problems occur: 1) transferring knowledge without a priori experience often causes negative transfer and creates a heavy workload due to the abundance of image types, and 2) convolutional neural networks (CNNs) that construct deep models by stacking layers cannot sufficiently identify LROIs, often deviate significantly from the LROIs focused on by experienced pathologists, and are prone to providing misleading second opinions. Therefore, we propose a novel fusion attention block network (FABNet) to address these problems. First, we propose a model transfer method based on clinical a priori experience and sample analysis (CPESA) that analyzes transferability by integrating clinical a priori experience using indicators such as the relationship between the cancer onset location and morphology and the texture and staining degree of cell nuclei in histopathology images; our method further validates these indicators by the probability distribution of cancer image samples. Then, we propose a fusion attention block (FAB) structure, which can both provide an advanced non-uniform sparse representation of images and extract spatial relationship information between nuclei; consequently, the LROI can be more accurate and more relevant to pathologists. We conducted extensive experiments; compared with the best baseline model, the classification accuracy improved by 25%, demonstrating that FABNet performs better on different cancer pathology image datasets and outperforms other state-of-the-art (SOTA) models.
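The abstract describes the FAB as combining a sparse (gated) representation of the feature map with spatial relationship information between nuclei. The paper's actual architecture is not given here, so the following is only a minimal NumPy sketch of that general idea, assuming a squeeze-style channel gate for the sparse representation and dot-product self-attention over spatial positions for the pairwise relationships; the function name and fusion rule are hypothetical, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fusion_attention_block(feat):
    """Hypothetical fusion attention sketch over a (C, H, W) feature map:
    a channel gate (non-uniform re-weighting of channels) is fused with
    spatial self-attention that relates every position to every other."""
    C, H, W = feat.shape
    # Channel branch: global average pooling -> softmax gate per channel
    ch_gate = softmax(feat.mean(axis=(1, 2)))            # shape (C,)
    # Spatial branch: dot-product self-attention over the H*W positions,
    # standing in for "spatial relationships between nuclei"
    x = feat.reshape(C, H * W)                           # (C, HW)
    attn = softmax(x.T @ x / np.sqrt(C), axis=-1)        # (HW, HW)
    spatial = (x @ attn.T).reshape(C, H, W)              # re-weighted map
    # Fuse: channel gate scales the spatial branch, with a residual add
    return feat + ch_gate[:, None, None] * spatial
```

In a real network both branches would be learned (e.g. 1x1 convolutions producing queries/keys); here fixed pooling and raw features are used only to keep the sketch self-contained.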