
A mutual reconstruction network model for few-shot classification of histological images: addressing interclass similarity and intraclass diversity.

Authors

Li Xiangbo, Zhang Yinghui, Ge Fengxiang

Affiliations

Huitong College, Beijing Normal University, Zhuhai, China.

College of Education for the Future, Beijing Normal University, Zhuhai, China.

Publication

Quant Imaging Med Surg. 2024 Aug 1;14(8):5443-5459. doi: 10.21037/qims-24-253. Epub 2024 Jul 25.

DOI: 10.21037/qims-24-253
PMID: 39144045
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11320516/
Abstract

BACKGROUND

The automated classification of histological images is crucial for cancer diagnosis. The limited availability of well-annotated datasets, especially for rare cancers, poses a significant challenge for deep learning methods. This has motivated few-shot learning approaches, which are of considerable clinical importance because they are designed to overcome data scarcity in deep learning for histological image classification. Traditional methods often ignore the intraclass diversity and interclass similarity of histological images. To address this, we propose a novel mutual reconstruction network model aimed at meeting these challenges and improving the few-shot classification performance on histological images.

METHODS

The key to our approach is the extraction of subtle and discriminative features. We introduce a feature enhancement module (FEM) and a mutual reconstruction module to increase differences between classes while reducing variance within classes. First, we extract features of support and query images using a feature extractor. These features are then processed by the FEM, which uses a self-attention mechanism for self-reconstruction of features, enhancing the learning of detailed features. These enhanced features are then input into the mutual reconstruction module. This module uses enhanced support features to reconstruct enhanced query features and vice versa. The classification of query samples is based on weighted calculations of the distances between query features and reconstructed query features and between support features and reconstructed support features.
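The pipeline above can be sketched numerically. The following is a minimal illustration, not the authors' implementation: scaled dot-product self-attention stands in for the FEM's self-reconstruction, ridge-regression reconstruction stands in for the mutual reconstruction module, and `alpha` is a hypothetical weight between the two distance terms; feature vectors are assumed to come from an upstream feature extractor.

```python
import numpy as np

def self_attention_enhance(feats):
    """Self-reconstruct a feature set with scaled dot-product self-attention.

    feats: (n, d) array of per-image feature vectors.
    Returns an enhanced (n, d) array (attention-weighted recombination).
    """
    d = feats.shape[1]
    scores = feats @ feats.T / np.sqrt(d)        # (n, n) similarity logits
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)      # row-wise softmax
    return attn @ feats                          # re-weighted features

def reconstruct(basis, targets, lam=0.1):
    """Ridge-regression reconstruction of `targets` rows from `basis` rows."""
    gram = basis @ basis.T + lam * np.eye(basis.shape[0])
    w = targets @ basis.T @ np.linalg.inv(gram)  # least-squares coefficients
    return w @ basis

def classify(query, support_by_class, alpha=0.5):
    """Assign query features to the class with the smallest weighted
    sum of the two reconstruction distances (support->query and
    query->support), mirroring the mutual reconstruction idea."""
    q = self_attention_enhance(query)
    scores = {}
    for cls, sup in support_by_class.items():
        s = self_attention_enhance(sup)
        q_hat = reconstruct(s, q)   # reconstruct queries from supports
        s_hat = reconstruct(q, s)   # reconstruct supports from queries
        d_q = np.linalg.norm(q - q_hat)
        d_s = np.linalg.norm(s - s_hat)
        scores[cls] = alpha * d_q + (1 - alpha) * d_s
    return min(scores, key=scores.get)
```

Reconstruction-based scoring rewards classes whose support features span the query features (and vice versa), which is what lets the classifier tolerate large within-class variation while still separating visually similar classes.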

RESULTS

We extensively evaluated our model using a specially created few-shot histological image dataset. The results showed that in a 5-way 10-shot setup, our model achieved an impressive accuracy of 92.09%. This is a 23.59% improvement in accuracy over the model-agnostic meta-learning (MAML) method, which does not focus on fine-grained attributes. In the more challenging 5-way 1-shot setting, our model also performed well, demonstrating an 18.52% improvement over ProtoNet, which does not address this challenge. Additional ablation studies indicated the effectiveness and complementary nature of each module and confirmed our method's ability to parse small differences between classes and large variations within classes in histological images. These findings strongly support the superiority of our proposed method in the few-shot classification of histological images.
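For context, an N-way K-shot episode samples N classes with K labelled support images each plus held-out query images, and accuracy is averaged over many episodes. A minimal sketch of episode construction, with function and parameter names that are illustrative rather than taken from the paper:

```python
import random
from collections import defaultdict

def sample_episode(pool, n_way=5, k_shot=10, q_per_class=5, seed=None):
    """Sample one N-way K-shot episode from a pool of (item, label) pairs.

    Returns (support, query): lists of (item, label) with k_shot support
    and q_per_class query items per sampled class, drawn without
    replacement so support and query never overlap.
    """
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for item, label in pool:
        by_class[label].append(item)
    classes = rng.sample(sorted(by_class), n_way)  # pick N classes
    support, query = [], []
    for c in classes:
        picks = rng.sample(by_class[c], k_shot + q_per_class)
        support += [(x, c) for x in picks[:k_shot]]
        query += [(x, c) for x in picks[k_shot:]]
    return support, query
```

The 5-way 1-shot setting is simply `n_way=5, k_shot=1`: with a single support image per class there is far less within-class evidence, which is why it is the harder benchmark.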

CONCLUSIONS

The mutual reconstruction network provides outstanding performance in the few-shot classification of histological images, successfully overcoming the challenges of similarities between classes and diversity within classes. This marks a significant advancement in the automated classification of histological images.

Figures (PMC):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/64e5/11320516/c88f6f104344/qims-14-08-5443-f1.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/64e5/11320516/841098f17352/qims-14-08-5443-f2.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/64e5/11320516/adca67b60ef7/qims-14-08-5443-f3.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/64e5/11320516/fe8b0e199d5b/qims-14-08-5443-f4.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/64e5/11320516/06bdac8e8361/qims-14-08-5443-f5.jpg

Similar articles

1. A mutual reconstruction network model for few-shot classification of histological images: addressing interclass similarity and intraclass diversity. Quant Imaging Med Surg. 2024 Aug 1;14(8):5443-5459. doi: 10.21037/qims-24-253. Epub 2024 Jul 25.
2. Bi-Directional Ensemble Feature Reconstruction Network for Few-Shot Fine-Grained Classification. IEEE Trans Pattern Anal Mach Intell. 2024 Sep;46(9):6082-6096. doi: 10.1109/TPAMI.2024.3376686. Epub 2024 Aug 6.
3. Fine-Grained 3D-Attention Prototypes for Few-Shot Learning. Neural Comput. 2020 Sep;32(9):1664-1684. doi: 10.1162/neco_a_01302. Epub 2020 Jul 20.
4. BSNet: Bi-Similarity Network for Few-shot Fine-grained Image Classification. IEEE Trans Image Process. 2021;30:1318-1331. doi: 10.1109/TIP.2020.3043128. Epub 2020 Dec 23.
5. Feature relocation network for fine-grained image classification. Neural Netw. 2023 Apr;161:306-317. doi: 10.1016/j.neunet.2023.01.050. Epub 2023 Feb 4.
6. A medical image classification method based on self-regularized adversarial learning. Med Phys. 2024 Nov;51(11):8232-8246. doi: 10.1002/mp.17320. Epub 2024 Jul 30.
7. Dual Attention Relation Network With Fine-Tuning for Few-Shot EEG Motor Imagery Classification. IEEE Trans Neural Netw Learn Syst. 2024 Nov;35(11):15479-15493. doi: 10.1109/TNNLS.2023.3287181. Epub 2024 Oct 29.
8. Analysis of Few-Shot Techniques for Fungal Plant Disease Classification and Evaluation of Clustering Capabilities Over Real Datasets. Front Plant Sci. 2022 Mar 7;13:813237. doi: 10.3389/fpls.2022.813237. eCollection 2022.
9. Transductive Relation-Propagation With Decoupling Training for Few-Shot Learning. IEEE Trans Neural Netw Learn Syst. 2022 Nov;33(11):6652-6664. doi: 10.1109/TNNLS.2021.3082928. Epub 2022 Oct 27.
10. Self-supervised learning for remote sensing scene classification under the few shot scenario. Sci Rep. 2023 Jan 9;13(1):433. doi: 10.1038/s41598-022-27313-5.
