
A mutual reconstruction network model for few-shot classification of histological images: addressing interclass similarity and intraclass diversity.

Authors

Li Xiangbo, Zhang Yinghui, Ge Fengxiang

Affiliations

Huitong College, Beijing Normal University, Zhuhai, China.

College of Education for the Future, Beijing Normal University, Zhuhai, China.

Publication information

Quant Imaging Med Surg. 2024 Aug 1;14(8):5443-5459. doi: 10.21037/qims-24-253. Epub 2024 Jul 25.

Abstract

BACKGROUND

The automated classification of histological images is crucial for the diagnosis of cancer. The limited availability of well-annotated datasets, especially for rare cancers, poses a significant challenge for deep learning methods due to the small number of relevant images. This has motivated the development of few-shot learning approaches, which are of considerable clinical importance because they are designed to overcome data scarcity in deep learning for histological image classification. Traditional methods, however, often ignore the challenges of intraclass diversity and interclass similarity in histological images. To address this, we propose a novel mutual reconstruction network model aimed at meeting these challenges and improving the few-shot classification performance of histological images.

METHODS

The key to our approach is the extraction of subtle, discriminative features. We introduce a feature enhancement module (FEM) and a mutual reconstruction module to increase differences between classes while reducing variance within classes. First, we extract features of support and query images using a feature extractor. These features are then processed by the FEM, which uses a self-attention mechanism for self-reconstruction of features, enhancing the learning of detailed features. The enhanced features are then input into the mutual reconstruction module, which uses the enhanced support features to reconstruct the enhanced query features and vice versa. Query samples are classified by a weighted combination of the distance between the query features and the reconstructed query features and the distance between the support features and the reconstructed support features.
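The following is a minimal PyTorch sketch of the pipeline as described above: a self-attention feature-enhancement step, mutual reconstruction between support and query features, and classification by a weighted sum of the two reconstruction distances. The class and function names (FeatureEnhancement, reconstruct, classify_query), the ridge-style closed-form reconstruction, and the weighting parameter alpha are illustrative assumptions, not the authors' exact implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureEnhancement(nn.Module):
    """Self-attention over spatial positions, so each feature map is reconstructed from
    itself; one plausible reading of the paper's self-reconstruction step."""
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim, bias=False)
        self.k = nn.Linear(dim, dim, bias=False)
        self.v = nn.Linear(dim, dim, bias=False)

    def forward(self, x):
        # x: (batch, positions, dim) -- flattened spatial locations of a feature map
        attn = torch.softmax(self.q(x) @ self.k(x).transpose(1, 2) / x.shape[-1] ** 0.5, dim=-1)
        return x + attn @ self.v(x)  # residual self-reconstruction

def reconstruct(target, basis, lam=0.1):
    """Rebuild each row of `target` as a ridge-regularised linear combination of `basis` rows
    (an assumed closed form; the paper may use a different reconstruction)."""
    gram = basis @ basis.t() + lam * torch.eye(basis.shape[0], device=basis.device)
    weights = target @ basis.t() @ torch.linalg.inv(gram)  # (m, n) combination coefficients
    return weights @ basis                                  # (m, dim) reconstructed features

def classify_query(query_feat, support_feats_per_class, alpha=0.5):
    """Score each class by a weighted sum of the two reconstruction distances described
    in the abstract: query vs. reconstructed query, and support vs. reconstructed support."""
    scores = []
    for support in support_feats_per_class:          # support: (k_shot * positions, dim)
        q_rec = reconstruct(query_feat, support)      # query rebuilt from support features
        s_rec = reconstruct(support, query_feat)      # support rebuilt from query features
        dist = alpha * F.mse_loss(query_feat, q_rec) + (1 - alpha) * F.mse_loss(support, s_rec)
        scores.append(-dist)                          # smaller distance -> higher class score
    return torch.stack(scores).argmax()               # index of the predicted class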

RESULTS

We extensively evaluated our model using a specially created few-shot histological image dataset. The results showed that in a 5-way 10-shot setup, our model achieved an impressive accuracy of 92.09%, a 23.59% improvement in accuracy over the model-agnostic meta-learning (MAML) method, which does not focus on fine-grained attributes. In the more challenging 5-way 1-shot setting, our model also performed well, demonstrating an 18.52% improvement over ProtoNet, which does not address this challenge. Additional ablation studies indicated the effectiveness and complementary nature of each module and confirmed our method's ability to parse small differences between classes and large variations within classes in histological images. These findings strongly support the superiority of our proposed method in the few-shot classification of histological images.
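To make the evaluation setting concrete, below is a minimal sketch of the standard N-way K-shot episodic sampling protocol that terms such as "5-way 10-shot" refer to. The function name, the query count, and the in-memory images_by_class mapping are illustrative assumptions, not the authors' exact evaluation code.

import random

def sample_episode(images_by_class, n_way=5, k_shot=10, n_query=15, rng=random):
    """Draw one episode: n_way classes, with k_shot support and n_query query images per class."""
    classes = rng.sample(sorted(images_by_class), n_way)        # e.g., 5 histological classes
    support, query = [], []
    for label, cls in enumerate(classes):
        picks = rng.sample(images_by_class[cls], k_shot + n_query)
        support += [(img, label) for img in picks[:k_shot]]     # labelled support set
        query += [(img, label) for img in picks[k_shot:]]       # to be classified
    return support, query

Reported accuracy is then the mean query accuracy over many such randomly sampled episodes.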

CONCLUSIONS

The mutual reconstruction network provides outstanding performance in the few-shot classification of histological images, successfully overcoming the challenges of similarities between classes and diversity within classes. This marks a significant advancement in the automated classification of histological images.


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/64e5/11320516/c88f6f104344/qims-14-08-5443-f1.jpg
