

Hagnifinder: Recovering magnification information of digital histological images using deep learning.

Author Information

Zhang Hongtai, Liu Zaiyi, Song Mingli, Lu Cheng

Affiliations

School of Computer and Cyber Sciences, Communication University of China, Beijing 100024, China.

Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou 510080, China.

Publication Information

J Pathol Inform. 2023 Feb 16;14:100302. doi: 10.1016/j.jpi.2023.100302. eCollection 2023.

DOI: 10.1016/j.jpi.2023.100302
PMID: 36923447
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10009300/
Abstract

BACKGROUND AND OBJECTIVE

Training a robust cancer diagnostic or prognostic artificial intelligence model on histology images requires a large number of representative cases with labels or annotations, which are difficult to obtain. The histology snapshots available in published papers and case reports can be used to enrich the training dataset. However, the magnifications of these invaluable snapshots are generally unknown, which limits their usage. A robust magnification predictor is therefore required to exploit such diverse snapshot repositories spanning different diseases. This paper presents Hagnifinder, a magnification prediction model for H&E-stained histological images.

METHODS

Hagnifinder is a regression model based on a modified convolutional neural network (CNN) that contains 3 modules: a Feature Extraction Module, a Regression Module, and an Adaptive Scaling Module (ASM). In the training phase, the Feature Extraction Module first extracts the image features. Second, the ASM addresses the uneven distribution of the learned feature values. Finally, the Regression Module estimates the mapping between the regularized extracted features and the magnifications. To train a robust model, we construct a new dataset named Hagni40, consisting of 94,643 H&E-stained histology image patches at 40 different magnifications across 13 cancer types, drawn from The Cancer Genome Atlas. To verify the performance of Hagnifinder, we measure prediction accuracy under maximum allowable differences (0.5, 1, and 5) between the predicted magnification and the actual magnification. We compare Hagnifinder with state-of-the-art methods on the public BreakHis dataset and on Hagni40.
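The abstract describes its evaluation metric in words only: a prediction counts as correct when the gap between predicted and actual magnification is within a maximum allowable difference (0.5, 1, or 5). A minimal sketch of that tolerance-based accuracy, with a hypothetical function name and made-up sample values, might look like:

```python
def accuracy_at_tolerance(predicted, actual, tolerance):
    """Fraction of predictions whose absolute error is within `tolerance`.

    Mirrors the evaluation protocol described in the abstract: a prediction
    is counted as correct when |predicted - actual| <= the maximum
    allowable difference (0.5, 1, or 5 in the paper).
    """
    if len(predicted) != len(actual):
        raise ValueError("predicted and actual must have the same length")
    hits = sum(1 for p, a in zip(predicted, actual) if abs(p - a) <= tolerance)
    return hits / len(predicted)

# Hypothetical predicted vs. ground-truth magnifications (not from the paper):
pred = [9.6, 20.4, 40.8, 4.9]
true = [10, 20, 40, 5]
print(accuracy_at_tolerance(pred, true, 0.5))  # 3 of 4 within 0.5 -> 0.75
print(accuracy_at_tolerance(pred, true, 1.0))  # all 4 within 1.0 -> 1.0
```

Because the model regresses a continuous magnification rather than picking one of a few discrete classes, this widening-tolerance metric is what lets the paper report accuracy over 40 distinct magnification levels.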

RESULTS

Hagnifinder provides consistent prediction accuracy, with a mean accuracy of 98.9%, across the 40 magnifications and 13 cancer types when ResNet50 is used as the feature extractor. Compared with state-of-the-art methods, which focus on classifying 4-5 magnification levels, Hagnifinder achieves the best or comparable performance on the BreakHis and Hagni40 datasets.

CONCLUSIONS

The experimental results suggest that Hagnifinder can be a valuable tool for predicting the associated magnification of any given histology image.


Figures (gr1-gr9):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3dc5/10009300/914aff35ede8/gr1.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3dc5/10009300/cdac4630ab9f/gr2.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3dc5/10009300/02d91e521908/gr3.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3dc5/10009300/d19ed7efe27c/gr4.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3dc5/10009300/127ada5b571c/gr5.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3dc5/10009300/9646187c4076/gr6.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3dc5/10009300/93d09b0819f0/gr7.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3dc5/10009300/de1734a14627/gr8.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3dc5/10009300/84b0b79fb9b6/gr9.jpg

Similar Articles

1
Hagnifinder: Recovering magnification information of digital histological images using deep learning.
J Pathol Inform. 2023 Feb 16;14:100302. doi: 10.1016/j.jpi.2023.100302. eCollection 2023.
2
CroMAM: A Cross-Magnification Attention Feature Fusion Model for Predicting Genetic Status and Survival of Gliomas Using Histological Images.
IEEE J Biomed Health Inform. 2024 Dec;28(12):7345-7356. doi: 10.1109/JBHI.2024.3431471. Epub 2024 Dec 5.
3
Convolutional neural network with parallel convolution scale attention module and ResCBAM for breast histology image classification.
Heliyon. 2024 May 8;10(10):e30889. doi: 10.1016/j.heliyon.2024.e30889. eCollection 2024 May 30.
4
Feature Generalization for Breast Cancer Detection in Histopathological Images.
Interdiscip Sci. 2022 Jun;14(2):566-581. doi: 10.1007/s12539-022-00515-1. Epub 2022 Apr 28.
5
Convolutional Rebalancing Network for the Classification of Large Imbalanced Rice Pest and Disease Datasets in the Field.
Front Plant Sci. 2021 Jul 5;12:671134. doi: 10.3389/fpls.2021.671134. eCollection 2021.
6
Recognizing Magnification Levels in Microscopic Snapshots.
Annu Int Conf IEEE Eng Med Biol Soc. 2020 Jul;2020:1416-1419. doi: 10.1109/EMBC44109.2020.9175653.
7
Weakly Supervised Deep Learning for Whole Slide Lung Cancer Image Analysis.
IEEE Trans Cybern. 2020 Sep;50(9):3950-3962. doi: 10.1109/TCYB.2019.2935141. Epub 2019 Sep 2.
8
Deep Multi-Magnification Similarity Learning for Histopathological Image Classification.
IEEE J Biomed Health Inform. 2023 Mar;27(3):1535-1545. doi: 10.1109/JBHI.2023.3237137. Epub 2023 Mar 7.
9
Classification of benign and malignant subtypes of breast cancer histopathology imaging using hybrid CNN-LSTM based transfer learning.
BMC Med Imaging. 2023 Jan 30;23(1):19. doi: 10.1186/s12880-023-00964-0.
10
Deep Learning-Based Mapping of Tumor Infiltrating Lymphocytes in Whole Slide Images of 23 Types of Cancer.
Front Oncol. 2022 Feb 16;11:806603. doi: 10.3389/fonc.2021.806603. eCollection 2021.

Cited By

1
SAMPLER: unsupervised representations for rapid analysis of whole slide tissue images.
EBioMedicine. 2024 Jan;99:104908. doi: 10.1016/j.ebiom.2023.104908. Epub 2023 Dec 14.

References

1
Feature-driven local cell graph (FLocK): New computational pathology-based descriptors for prognosis of lung cancer and HPV status of oropharyngeal cancers.
Med Image Anal. 2021 Feb;68:101903. doi: 10.1016/j.media.2020.101903. Epub 2020 Nov 16.
2
A prognostic model for overall survival of patients with early-stage non-small cell lung cancer: a multicentre, retrospective study.
Lancet Digit Health. 2020 Nov;2(11):e594-e606. doi: 10.1016/s2589-7500(20)30225-9. Epub 2020 Oct 19.
3
Recognizing Magnification Levels in Microscopic Snapshots.
Annu Int Conf IEEE Eng Med Biol Soc. 2020 Jul;2020:1416-1419. doi: 10.1109/EMBC44109.2020.9175653.
4
HistoQC: An Open-Source Quality Control Tool for Digital Pathology Slides.
JCO Clin Cancer Inform. 2019 Apr;3:1-7. doi: 10.1200/CCI.18.00157.
5
Classification and mutation prediction from non-small cell lung cancer histopathology images using deep learning.
Nat Med. 2018 Oct;24(10):1559-1567. doi: 10.1038/s41591-018-0177-5. Epub 2018 Sep 17.
6
Deep learning for digital pathology image analysis: A comprehensive tutorial with selected use cases.
J Pathol Inform. 2016 Jul 26;7:29. doi: 10.4103/2153-3539.186902. eCollection 2016.
7
Deep learning as a tool for increased accuracy and efficiency of histopathological diagnosis.
Sci Rep. 2016 May 23;6:26286. doi: 10.1038/srep26286.
8
A Dataset for Breast Cancer Histopathological Image Classification.
IEEE Trans Biomed Eng. 2016 Jul;63(7):1455-62. doi: 10.1109/TBME.2015.2496264. Epub 2015 Oct 30.
9
Triaging Diagnostically Relevant Regions from Pathology Whole Slides of Breast Cancer: A Texture Based Approach.
IEEE Trans Med Imaging. 2016 Jan;35(1):307-15. doi: 10.1109/TMI.2015.2470529. Epub 2015 Aug 20.
10
Breast cancer histopathology image analysis: a review.
IEEE Trans Biomed Eng. 2014 May;61(5):1400-11. doi: 10.1109/TBME.2014.2303852.