Suppr 超能文献


Nuclei instance segmentation using a transformer-based graph convolutional network and contextual information augmentation.

Authors

Wang Juan, Zhang Zetao, Wu Minghu, Ye Yonggang, Wang Sheng, Cao Ye, Yang Hao

Affiliations

School of Electrical and Electronic Engineering, Hubei University of Technology, Hongshan District, Wuhan, Hubei Province, China; Hubei Key Laboratory for High-efficiency Utilization of Solar Energy and Operation Control of Energy Storage System, Hubei University of Technology, China.

School of Electrical and Electronic Engineering, Hubei University of Technology, Hongshan District, Wuhan, Hubei Province, China.

Publication

Comput Biol Med. 2023 Oct 25;167:107622. doi: 10.1016/j.compbiomed.2023.107622.

DOI: 10.1016/j.compbiomed.2023.107622
PMID: 39491378
Abstract

Nucleus instance segmentation is an important task in medical image analysis involving cell-level pathological analysis and is of great significance for many biomedical applications, such as disease diagnosis and drug screening. However, the high-density and tight-contact between cells is a common feature of most cell images, which poses a great technical challenge for nuclei instance segmentation. The latest research focuses on CNN-based methods for nuclei instance segmentation, which typically rely on bounding box regression and non-maximum suppression to locate nuclei. However, this frequently results in poor local bounding boxes for nuclei that are adhered or clustered together. In response to the challenges of high-density and tight-contact in cellular images, we propose a novel end-to-end nuclei instance segmentation model. Specifically, we first employ the Swin Transformer as the backbone network of our model, which captures global multi-scale information by combining the global modelling capability of transformers and the local modelling capability of convolutional neural networks (CNNs). Additionally, we integrate a graph convolutional feature fusion module (GCFM), that combines deep and shallow features to learn an affinity matrix. The module also adopts graph convolution to guide the network in learning the object-level local information. Finally, we design a hybrid dilated convolution module (HDC) and insert it into the backbone network to enhance the contextual information over a large range. These components assist the network in extracting rich features. The experimental results demonstrate that our algorithm outperforms several state-of-the-art models on the DSB2018 and LIVECell datasets.
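The hybrid dilated convolution (HDC) idea described in the abstract — widening the receptive field by sampling the input at spaced-out positions — can be illustrated with a minimal sketch. This is a generic NumPy toy, not the paper's implementation: the function name `dilated_conv2d`, the averaging kernel, and the dilation rates are all illustrative assumptions.

```python
import numpy as np

def dilated_conv2d(x, kernel, dilation):
    """Naive 2D dilated (atrous) convolution with 'valid' padding."""
    kh, kw = kernel.shape
    eff_h = (kh - 1) * dilation + 1   # effective kernel height
    eff_w = (kw - 1) * dilation + 1   # effective kernel width
    H, W = x.shape
    out = np.zeros((H - eff_h + 1, W - eff_w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Sample the input at strided positions: the kernel "skips"
            # (dilation - 1) pixels between taps, widening the receptive field.
            patch = x[i:i + eff_h:dilation, j:j + eff_w:dilation]
            out[i, j] = np.sum(patch * kernel)
    return out

# An HDC-style stack applies increasing dilation rates so the combined
# receptive field covers a large context with no extra parameters.
x = np.arange(49, dtype=float).reshape(7, 7)
k = np.ones((3, 3)) / 9.0              # 3x3 averaging kernel (toy choice)
y1 = dilated_conv2d(x, k, dilation=1)  # 3x3 receptive field
y2 = dilated_conv2d(x, k, dilation=2)  # 5x5 receptive field, same 9 weights
print(y1.shape, y2.shape)  # (5, 5) (3, 3)
```

Stacking such layers with mixed rates (e.g. 1, 2, 5) is the usual way to avoid the "gridding" artefact of a single large dilation.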

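The graph-convolution step that the abstract's feature-fusion module (GCFM) builds on can likewise be sketched generically. The standard GCN propagation rule, ReLU(D^-1/2 (A + I) D^-1/2 X W), stands in here for the paper's module; the affinity matrix `A`, node features `X`, and weights `W` are toy values, not the authors' learned quantities.

```python
import numpy as np

def gcn_layer(A, X, W):
    """One GCN propagation step: ReLU(D^-1/2 (A + I) D^-1/2 X W)."""
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))  # D^-1/2 as a vector
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]  # symmetric normalisation
    return np.maximum(A_norm @ X @ W, 0.0)

# Toy 4-node affinity graph (a chain); features and weights are illustrative.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.eye(4)               # one-hot node features
W = np.full((4, 2), 0.5)    # hypothetical weight matrix
H = gcn_layer(A, X, W)
print(H.shape)  # (4, 2)
```

Each output row mixes a node's features with those of its neighbours, which is how an affinity matrix can guide a network toward object-level local information.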

Similar articles

1. Nuclei instance segmentation using a transformer-based graph convolutional network and contextual information augmentation. Comput Biol Med. 2023 Oct 25;167:107622. doi: 10.1016/j.compbiomed.2023.107622.
2. TGDAUNet: Transformer and GCNN based dual-branch attention UNet for medical image segmentation. Comput Biol Med. 2023 Dec;167:107583. doi: 10.1016/j.compbiomed.2023.107583. Epub 2023 Oct 21.
3. Dual encoder network with transformer-CNN for multi-organ segmentation. Med Biol Eng Comput. 2023 Mar;61(3):661-671. doi: 10.1007/s11517-022-02723-9. Epub 2022 Dec 29.
4. A 3D hierarchical cross-modality interaction network using transformers and convolutions for brain glioma segmentation in MR images. Med Phys. 2024 Nov;51(11):8371-8389. doi: 10.1002/mp.17354. Epub 2024 Aug 13.
5. Multi-task approach based on combined CNN-transformer for efficient segmentation and classification of breast tumors in ultrasound images. Vis Comput Ind Biomed Art. 2024 Jan 26;7(1):2. doi: 10.1186/s42492-024-00155-w.
6. SwinBTS: A Method for 3D Multimodal Brain Tumor Segmentation Using Swin Transformer. Brain Sci. 2022 Jun 17;12(6):797. doi: 10.3390/brainsci12060797.
7. CPFTransformer: transformer fusion context pyramid medical image segmentation network. Front Neurosci. 2023 Dec 7;17:1288366. doi: 10.3389/fnins.2023.1288366. eCollection 2023.
8. VSmTrans: A hybrid paradigm integrating self-attention and convolution for 3D medical image segmentation. Med Image Anal. 2024 Dec;98:103295. doi: 10.1016/j.media.2024.103295. Epub 2024 Aug 24.
9. G2ViT: Graph Neural Network-Guided Vision Transformer Enhanced Network for retinal vessel and coronary angiograph segmentation. Neural Netw. 2024 Aug;176:106356. doi: 10.1016/j.neunet.2024.106356. Epub 2024 May 3.
10. SwinCross: Cross-modal Swin transformer for head-and-neck tumor segmentation in PET/CT images. Med Phys. 2024 Mar;51(3):2096-2107. doi: 10.1002/mp.16703. Epub 2023 Sep 30.

Cited by

1. A new Similarity Based Adapted Louvain Algorithm (SIMBA) for active module identification in p-value attributed biological networks. Sci Rep. 2025 Apr 2;15(1):11360. doi: 10.1038/s41598-025-95749-6.
2. Multimodal multi-instance evidence fusion neural networks for cancer survival prediction. Sci Rep. 2025 Mar 26;15(1):10470. doi: 10.1038/s41598-025-93770-3.