Yan Rui, Yang Zhidong, Li Jintao, Zheng Chunhou, Zhang Fa
High Performance Computer Research Center, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100045, China.
University of Chinese Academy of Sciences, Beijing 101408, China.
Biology (Basel). 2022 Jun 29;11(7):982. doi: 10.3390/biology11070982.
Since pathological images have distinct characteristics that differ from natural images, directly applying a general convolutional neural network cannot achieve good classification performance, especially on fine-grained classification problems (such as pathological image grading). Inspired by the clinical experience that decomposing a pathological image into different components is beneficial for diagnosis, in this paper, we propose a Divide-and-Attention Network (DANet) for Hematoxylin-and-Eosin (HE)-stained pathological image classification. The DANet utilizes a deep-learning method to decompose a pathological image into nuclei and non-nuclei parts. With such decomposed pathological images, the DANet first performs feature learning independently in each branch, and then focuses on the most important feature representation through a branch selection attention module. In this way, the DANet can learn representative features with respect to different tissue structures and adaptively focus on the most important ones, thereby improving classification performance. In addition, we introduce deep canonical correlation analysis (DCCA) constraints in the feature fusion process of the different branches. The DCCA constraints play the role of branch fusion attention, maximizing the correlation between branches and ensuring that the fused branches emphasize specific tissue structures. Experimental results on three datasets demonstrate the superiority of the DANet, with an average classification accuracy of 92.5% on breast cancer classification, 95.33% on colorectal cancer grading, and 91.6% on breast cancer grading.
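The DCCA constraint described above maximizes the total canonical correlation between the feature representations of the two branches. As a minimal sketch (not the authors' implementation), the correlation objective can be computed from two batches of branch features by whitening each branch's covariance and summing the singular values of the cross-correlation matrix; the negated value would serve as the constraint loss. The function name, the regularization parameter `reg`, and the feature shapes are illustrative assumptions.

```python
import numpy as np

def dcca_correlation(h1, h2, reg=1e-4):
    """Total canonical correlation between two branch feature matrices
    h1, h2 of shape (n_samples, n_features). A DCCA-style constraint
    would minimize the negative of this value during training.
    (Illustrative sketch; `reg` regularizes the covariance estimates.)"""
    n = h1.shape[0]
    # Center each branch's features.
    h1 = h1 - h1.mean(axis=0)
    h2 = h2 - h2.mean(axis=0)
    # Regularized within-branch covariances and cross-covariance.
    s11 = h1.T @ h1 / (n - 1) + reg * np.eye(h1.shape[1])
    s22 = h2.T @ h2 / (n - 1) + reg * np.eye(h2.shape[1])
    s12 = h1.T @ h2 / (n - 1)

    def inv_sqrt(m):
        # Inverse matrix square root via eigendecomposition
        # (m is symmetric positive definite after regularization).
        w, v = np.linalg.eigh(m)
        return v @ np.diag(1.0 / np.sqrt(w)) @ v.T

    # Whitened cross-correlation; its singular values are the
    # canonical correlations between the two branches.
    t = inv_sqrt(s11) @ s12 @ inv_sqrt(s22)
    return np.linalg.svd(t, compute_uv=False).sum()
```

Feeding a branch's features against themselves yields a correlation near the feature dimension (the maximum), while independent random features score far lower, which is what lets the constraint pull the fused branches toward agreement.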