

CellRegNet: Point Annotation-Based Cell Detection in Histopathological Images via Density Map Regression

Author information

Jin Xu, An Hong, Chi Mengxian

Affiliation

School of Computer Science and Technology, University of Science and Technology of China, Hefei 230000, China.

Publication information

Bioengineering (Basel). 2024 Aug 10;11(8):814. doi: 10.3390/bioengineering11080814.

DOI: 10.3390/bioengineering11080814
PMID: 39199772
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11352042/
Abstract

Recent advances in deep learning have shown significant potential for accurate cell detection via density map regression using point annotations. However, existing deep learning models often struggle with multi-scale feature extraction and integration in complex histopathological images. Moreover, in multi-class cell detection scenarios, current density map regression methods typically predict each cell type independently, failing to consider the spatial distribution priors of different cell types. To address these challenges, we propose CellRegNet, a novel deep learning model for cell detection using point annotations. CellRegNet integrates a hybrid CNN/Transformer architecture with innovative feature refinement and selection mechanisms, addressing the need for effective multi-scale feature extraction and integration. Additionally, we introduce a contrastive regularization loss that models the mutual exclusiveness prior in multi-class cell detection cases. Extensive experiments on three histopathological image datasets demonstrate that CellRegNet outperforms existing state-of-the-art methods for cell detection using point annotations, with F1-scores of 86.38% on BCData (breast cancer), 85.56% on EndoNuke (endometrial tissue) and 93.90% on MBM (bone marrow cells), respectively. These results highlight CellRegNet's potential to enhance the accuracy and reliability of cell detection in digital pathology.
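The core idea the abstract describes — regressing density maps built from point annotations — can be sketched briefly: each annotated cell centre is smoothed into a unit-mass Gaussian, so the map's integral equals the cell count, and a network is trained to regress these maps. The helper name, the Gaussian sigma, and the product-form exclusiveness penalty below are illustrative assumptions for intuition only, not CellRegNet's exact formulation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def points_to_density_map(points, shape, sigma=4.0):
    """Rasterize point annotations into a density map.

    Each annotated cell centre contributes a unit-mass Gaussian,
    so the map integrates to the number of annotated cells.
    """
    density = np.zeros(shape, dtype=np.float64)
    for y, x in points:
        density[int(y), int(x)] += 1.0
    # Default 'reflect' boundary mode preserves total mass near edges.
    return gaussian_filter(density, sigma=sigma)

# Two cell classes annotated on a 64x64 patch (hypothetical points).
class_a = points_to_density_map([(16, 16), (40, 20)], (64, 64))
class_b = points_to_density_map([(50, 50)], (64, 64))

# The map's integral recovers the count, the property that
# density-map regression relies on.
count_a = class_a.sum()  # ~2.0

# Illustrative mutual-exclusiveness penalty (an assumed form, not the
# paper's loss): penalize locations where two class density maps are
# simultaneously high, since one pixel cannot hold two cell types.
exclusiveness_penalty = float(np.sum(class_a * class_b))
```

Training would then minimize a regression loss (e.g. MSE) between predicted and target maps per class, plus a term of this exclusiveness flavour; at inference, local maxima of the predicted map give detections.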


Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/041a/11352042/b2da7443f4b9/bioengineering-11-00814-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/041a/11352042/519367e8b2be/bioengineering-11-00814-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/041a/11352042/a30cb517b7e5/bioengineering-11-00814-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/041a/11352042/68298ff85773/bioengineering-11-00814-g004.jpg

Similar Articles

1
CellRegNet: Point Annotation-Based Cell Detection in Histopathological Images via Density Map Regression.
Bioengineering (Basel). 2024 Aug 10;11(8):814. doi: 10.3390/bioengineering11080814.
2
Celiac disease diagnosis from endoscopic images based on multi-scale adaptive hybrid architecture model.
Phys Med Biol. 2024 Mar 18;69(7). doi: 10.1088/1361-6560/ad25c1.
3
A modality-collaborative convolution and transformer hybrid network for unpaired multi-modal medical image segmentation with limited annotations.
Med Phys. 2023 Sep;50(9):5460-5478. doi: 10.1002/mp.16338. Epub 2023 Mar 15.
4
Transformer-based unsupervised contrastive learning for histopathological image classification.
Med Image Anal. 2022 Oct;81:102559. doi: 10.1016/j.media.2022.102559. Epub 2022 Jul 30.
5
ETU-Net: edge enhancement-guided U-Net with transformer for skin lesion segmentation.
Phys Med Biol. 2023 Dec 22;69(1). doi: 10.1088/1361-6560/ad13d2.
6
MTU: A multi-tasking U-net with hybrid convolutional learning and attention modules for cancer classification and gland Segmentation in Colon Histopathological Images.
Comput Biol Med. 2022 Nov;150:106095. doi: 10.1016/j.compbiomed.2022.106095. Epub 2022 Sep 21.
7
Spatial-aware contrastive learning for cross-domain medical image registration.
Med Phys. 2024 Nov;51(11):8141-8150. doi: 10.1002/mp.17311. Epub 2024 Jul 19.
8
TGMIL: A hybrid multi-instance learning model based on the Transformer and the Graph Attention Network for whole-slide images classification of renal cell carcinoma.
Comput Methods Programs Biomed. 2023 Dec;242:107789. doi: 10.1016/j.cmpb.2023.107789. Epub 2023 Sep 3.
9
MS-TCNet: An effective Transformer-CNN combined network using multi-scale feature learning for 3D medical image segmentation.
Comput Biol Med. 2024 Mar;170:108057. doi: 10.1016/j.compbiomed.2024.108057. Epub 2024 Jan 28.
10
Attention-Based Deep Neural Networks for Detection of Cancerous and Precancerous Esophagus Tissue on Histopathological Slides.
JAMA Netw Open. 2019 Nov 1;2(11):e1914645. doi: 10.1001/jamanetworkopen.2019.14645.

References Cited in This Article

1
A whole-slide foundation model for digital pathology from real-world data.
Nature. 2024 Jun;630(8015):181-188. doi: 10.1038/s41586-024-07441-w. Epub 2024 May 22.
2
Towards a general-purpose foundation model for computational pathology.
Nat Med. 2024 Mar;30(3):850-862. doi: 10.1038/s41591-024-02857-3. Epub 2024 Mar 19.
3
Computational pathology: A survey review and the way forward.
J Pathol Inform. 2024 Jan 14;15:100357. doi: 10.1016/j.jpi.2023.100357. eCollection 2024 Dec.
4
Difference-Deformable Convolution with Pseudo Scale Instance Map for Cell Localization.
IEEE J Biomed Health Inform. 2023 Nov 6;PP. doi: 10.1109/JBHI.2023.3329542.
5
Endometriosis and gynaecological cancers: molecular insights behind a complex machinery.
Prz Menopauzalny. 2021 Dec;20(4):201-206. doi: 10.5114/pm.2021.111276. Epub 2021 Dec 6.
6
SAU-Net: A Universal Deep Network for Cell Counting.
ACM BCB. 2019 Sep;2019:299-306. doi: 10.1145/3307339.3342153.
7
Deep learning in histopathology: the path to the clinic.
Nat Med. 2021 May;27(5):775-784. doi: 10.1038/s41591-021-01343-4. Epub 2021 May 14.
8
Deep neural network models for computational histopathology: A survey.
Med Image Anal. 2021 Jan;67:101813. doi: 10.1016/j.media.2020.101813. Epub 2020 Sep 25.
9
Weakly Supervised Deep Nuclei Segmentation Using Partial Points Annotation in Histopathology Images.
IEEE Trans Med Imaging. 2020 Nov;39(11):3655-3666. doi: 10.1109/TMI.2020.3002244. Epub 2020 Oct 28.
10
Deep High-Resolution Representation Learning for Visual Recognition.
IEEE Trans Pattern Anal Mach Intell. 2021 Oct;43(10):3349-3364. doi: 10.1109/TPAMI.2020.2983686. Epub 2021 Sep 2.