Attention-enhanced multiscale feature fusion network for pancreas and tumor segmentation.

Authors

Dong Kaiqi, Hu Peijun, Zhu Yan, Tian Yu, Li Xiang, Zhou Tianshu, Bai Xueli, Liang Tingbo, Li Jingsong

Affiliations

Engineering Research Center of EMR and Intelligent Expert System, Ministry of Education, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, China.

Research Center for Data Hub and Security, Zhejiang Laboratory, Hangzhou, China.

Publication

Med Phys. 2024 Dec;51(12):8999-9016. doi: 10.1002/mp.17385. Epub 2024 Sep 22.

DOI: 10.1002/mp.17385
PMID: 39306864
Abstract

BACKGROUND

Accurate pancreas and pancreatic tumor segmentation from abdominal scans is crucial for diagnosing and treating pancreatic diseases. Automated and reliable segmentation algorithms are highly desirable in both clinical practice and research.

PURPOSE

Segmenting the pancreas and tumors is challenging due to their low contrast, irregular morphologies, and variable anatomical locations. Additionally, the substantial difference in size between the pancreas and small tumors makes this task difficult. This paper proposes an attention-enhanced multiscale feature fusion network (AMFF-Net) to address these issues via 3D attention and multiscale context fusion methods.

METHODS

First, to prevent missed segmentation of tumors, we design residual depthwise attention modules (RDAMs) that extract global features by expanding the receptive fields of shallow layers in the encoder. Second, hybrid transformer modules (HTMs) are proposed to model deep semantic features, suppressing irrelevant regions while highlighting critical anatomical characteristics. Additionally, a multiscale feature fusion module (MFFM) fuses semantic features from adjacent higher and lower scales to address the size imbalance issue.
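The adjacent-scale fusion idea behind the MFFM can be illustrated with a minimal NumPy sketch. This is a hedged simplification, not the paper's implementation: the function names and the choice of nearest-neighbor upsampling and average-pool downsampling are assumptions made for illustration. The neighboring shallower feature map is downsampled, the deeper map is upsampled, and all three are concatenated along the channel axis so the middle scale sees context from both sides.

```python
import numpy as np

def upsample2x(x):
    # Nearest-neighbor 2x upsampling of a (C, H, W) feature map.
    return x.repeat(2, axis=1).repeat(2, axis=2)

def downsample2x(x):
    # 2x2 average pooling of a (C, H, W) feature map.
    c, h, w = x.shape
    return x.reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))

def fuse_adjacent_scales(shallow, mid, deep):
    # Resample the two neighboring scales to the middle scale's
    # spatial size, then stack everything along the channel axis.
    return np.concatenate(
        [downsample2x(shallow), mid, upsample2x(deep)], axis=0
    )

shallow = np.random.rand(16, 32, 32)  # fine scale, few channels
mid     = np.random.rand(32, 16, 16)  # target scale
deep    = np.random.rand(64, 8, 8)    # coarse scale, many channels

fused = fuse_adjacent_scales(shallow, mid, deep)
print(fused.shape)  # (112, 16, 16)
```

In the actual network the concatenation would typically be followed by a learned convolution that mixes the stacked channels; the sketch only shows the resample-and-stack step.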

RESULTS

The proposed AMFF-Net was evaluated on the public MSD dataset, achieving a DSC of 82.12% for the pancreas and 57.00% for tumors. It also demonstrated effective segmentation performance on the NIH and private datasets, outperforming previous state-of-the-art (SOTA) methods. Ablation studies verify the effectiveness of the RDAMs, HTMs, and MFFM.
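The DSC figures above refer to the standard Dice similarity coefficient. A minimal sketch of how it is computed on binary masks follows; this is illustrative only and not the authors' evaluation code:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    # DSC = 2|P ∩ T| / (|P| + |T|) for binary segmentation masks.
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

pred   = np.array([[1, 1, 0],
                   [0, 1, 0]])
target = np.array([[1, 0, 0],
                   [0, 1, 1]])

print(round(dice_coefficient(pred, target), 3))  # 2*2/(3+3) -> 0.667
```

A DSC of 1.0 means perfect overlap; the gap between the pancreas score (82.12%) and the tumor score (57.00%) reflects how much harder small, low-contrast tumors are to delineate.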

CONCLUSIONS

We propose an effective deep learning network for pancreas and tumor segmentation from abdominal CT scans. The proposed modules can better leverage global dependencies and semantic information and achieve significantly higher accuracy than the previous SOTA methods.


Similar Articles

1. MAD-UNet: A deep U-shaped network combined with an attention mechanism for pancreas segmentation in CT images.
   Med Phys. 2021 Jan;48(1):329-341. doi: 10.1002/mp.14617. Epub 2020 Dec 7.
2. Multiscale unsupervised domain adaptation for automatic pancreas segmentation in CT volumes using adversarial learning.
   Med Phys. 2022 Sep;49(9):5799-5818. doi: 10.1002/mp.15827. Epub 2022 Jul 27.
3. A transformer-guided cross-modality adaptive feature fusion framework for esophageal gross tumor volume segmentation.
   Comput Methods Programs Biomed. 2024 Jun;251:108216. doi: 10.1016/j.cmpb.2024.108216. Epub 2024 May 11.
4. Hepatic and portal vein segmentation with dual-stream deep neural network.
   Med Phys. 2024 Aug;51(8):5441-5456. doi: 10.1002/mp.17090. Epub 2024 Apr 22.
5. LUNETR: Language-Infused UNETR for precise pancreatic tumor segmentation in 3D medical image.
   Neural Netw. 2025 Jul;187:107414. doi: 10.1016/j.neunet.2025.107414. Epub 2025 Mar 15.
6. MP-FocalUNet: Multiscale parallel focal self-attention U-Net for medical image segmentation.
   Comput Methods Programs Biomed. 2025 Mar;260:108562. doi: 10.1016/j.cmpb.2024.108562. Epub 2024 Dec 9.
7. Deep multi-scale feature fusion for pancreas segmentation from CT images.
   Int J Comput Assist Radiol Surg. 2020 Mar;15(3):415-423. doi: 10.1007/s11548-020-02117-y. Epub 2020 Jan 22.
8. Skin lesion segmentation with a multiscale input fusion U-Net incorporating Res2-SE and pyramid dilated convolution.
   Sci Rep. 2025 Mar 7;15(1):7975. doi: 10.1038/s41598-025-92447-1.
9. STC-UNet: renal tumor segmentation based on enhanced feature extraction at different network levels.
   BMC Med Imaging. 2024 Jul 19;24(1):179. doi: 10.1186/s12880-024-01359-5.

Cited By

1. Federated prediction for scalable and privacy-preserved knowledge-based planning in radiotherapy.
   ArXiv. 2025 May 20:arXiv:2505.14507v1.
2. Relationship between pancreatic morphological changes and diabetes in autoimmune pancreatitis: Multimodal medical imaging assessment has important potential.
   World J Radiol. 2024 Nov 28;16(11):703-707. doi: 10.4329/wjr.v16.i11.703.