

A bidirectional multilayer contrastive adaptation network with anatomical structure preservation for unpaired cross-modality medical image segmentation.

Affiliations

Center for Biomedical Imaging and Bioinformatics, School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, 430074, China.

Institute of Artificial Intelligence, Huazhong University of Science and Technology, Wuhan, 430074, China.

Publication Information

Comput Biol Med. 2022 Oct;149:105964. doi: 10.1016/j.compbiomed.2022.105964. Epub 2022 Aug 19.

DOI: 10.1016/j.compbiomed.2022.105964
PMID: 36007288
Abstract

Multi-modal medical image segmentation has achieved great success with supervised deep learning networks. However, because of domain shift and limited annotation information, unpaired cross-modality segmentation tasks remain challenging. Unsupervised domain adaptation (UDA) methods can alleviate the performance degradation in cross-modality segmentation by transferring knowledge between domains, but current methods still suffer from model collapse, adversarial training instability, and mismatch of anatomical structures. To tackle these issues, we propose a bidirectional multilayer contrastive adaptation network (BMCAN) for unpaired cross-modality segmentation. First, a shared encoder is adopted to learn modality-invariant encoding representations for image synthesis and segmentation simultaneously. Second, to retain anatomical structure consistency in cross-modality image synthesis, we present a structure-constrained cross-modality image translation approach for image alignment. Third, we construct a bidirectional multilayer contrastive learning approach to preserve anatomical structures and enhance encoding representations, which utilizes two groups of domain-specific multilayer perceptron (MLP) networks to learn modality-specific features. Finally, a semantic information adversarial learning approach is designed to learn structural similarities of semantic outputs for output-space alignment. The proposed method was tested on three cross-modality segmentation tasks: brain tissue, brain tumor, and cardiac substructure segmentation. Experimental results show that BMCAN achieves state-of-the-art segmentation performance on all three tasks compared with other UDA methods, with fewer training components and better feature representations for overcoming overfitting and domain shift. The proposed method can efficiently reduce the annotation burden of radiologists in cross-modality image analysis.
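To make the titular contrastive component concrete, below is a minimal PyTorch sketch of a bidirectional, multilayer, patch-wise InfoNCE loss with modality-specific MLP projection heads, in the spirit of the mechanism the abstract describes. The paper's actual architecture and hyperparameters are not reproduced on this page, so every name here (ProjectionMLP, bidirectional_multilayer_loss), the projection width, the patch count n_patches, and the temperature tau are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: bidirectional multilayer patch-wise InfoNCE with
# modality-specific MLP heads, loosely following the component described in
# the abstract. All names and hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProjectionMLP(nn.Module):
    """Two-layer MLP head that embeds sampled feature vectors from one encoder layer."""
    def __init__(self, in_dim: int, out_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, out_dim), nn.ReLU(inplace=True), nn.Linear(out_dim, out_dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.normalize(self.net(x), dim=-1)  # unit-length embeddings

def patch_nce(v_q: torch.Tensor, v_k: torch.Tensor, tau: float = 0.07) -> torch.Tensor:
    """InfoNCE over sampled patches: the matching location is the positive,
    all other sampled locations act as negatives. v_q, v_k: (B, P, D)."""
    b, p, _ = v_q.shape
    logits = torch.bmm(v_q, v_k.transpose(1, 2)) / tau          # (B, P, P)
    labels = torch.arange(p, device=v_q.device).expand(b, -1)   # diagonal = positive
    return F.cross_entropy(logits.reshape(-1, p), labels.reshape(-1))

def bidirectional_multilayer_loss(feats_a, feats_b, heads_a, heads_b,
                                  n_patches: int = 64, tau: float = 0.07):
    """feats_a / feats_b: lists of (B, C_l, H_l, W_l) feature maps taken from
    several layers of the shared encoder for the two modalities.
    heads_a / heads_b: one modality-specific ProjectionMLP per layer
    (in_dim must equal that layer's channel count C_l)."""
    total = 0.0
    for f_a, f_b, h_a, h_b in zip(feats_a, feats_b, heads_a, heads_b):
        b, c, h, w = f_a.shape
        idx = torch.randint(0, h * w, (n_patches,), device=f_a.device)
        # Sample the same spatial locations in both modalities' feature maps.
        x_a = f_a.flatten(2).permute(0, 2, 1)[:, idx]  # (B, P, C)
        x_b = f_b.flatten(2).permute(0, 2, 1)[:, idx]
        v_a, v_b = h_a(x_a), h_b(x_b)
        # Bidirectional: contrast A against B and B against A.
        total = total + patch_nce(v_a, v_b, tau) + patch_nce(v_b, v_a, tau)
    return total / (2 * len(heads_a))
```

In training, this term would be combined with the structure-constrained translation objective and the output-space adversarial objective the abstract describes.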

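The output-space alignment step can be sketched in the same spirit: a small convolutional discriminator is trained to distinguish source-domain softmax segmentation maps from target-domain ones, and the segmenter is trained to fool it on target images. The discriminator architecture and the LSGAN-style loss below are assumptions for illustration, since the paper's discriminator design is not given on this page.

```python
# Illustrative sketch only: LSGAN-style output-space alignment on softmax
# segmentation maps, loosely following the semantic adversarial step in the
# abstract. The discriminator architecture and loss form are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class OutputDiscriminator(nn.Module):
    """Patch discriminator over (B, K, H, W) class-probability maps."""
    def __init__(self, n_classes: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_classes, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, stride=2, padding=1),  # per-patch real/fake score
        )

    def forward(self, prob_maps: torch.Tensor) -> torch.Tensor:
        return self.net(prob_maps)

def output_space_losses(disc, logits_src, logits_tgt):
    """Returns (discriminator loss, segmenter adversarial loss).
    logits_src / logits_tgt: raw (B, K, H, W) segmentation logits."""
    p_src, p_tgt = logits_src.softmax(dim=1), logits_tgt.softmax(dim=1)
    # Discriminator: label source predictions 1 ("real"), target predictions 0.
    d_src, d_tgt = disc(p_src.detach()), disc(p_tgt.detach())
    d_loss = (F.mse_loss(d_src, torch.ones_like(d_src)) +
              F.mse_loss(d_tgt, torch.zeros_like(d_tgt)))
    # Segmenter: make target predictions look like source ones to the critic.
    g_tgt = disc(p_tgt)
    g_loss = F.mse_loss(g_tgt, torch.ones_like(g_tgt))
    return d_loss, g_loss
```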

Similar Articles

1. A modality-collaborative convolution and transformer hybrid network for unpaired multi-modal medical image segmentation with limited annotations.
Med Phys. 2023 Sep;50(9):5460-5478. doi: 10.1002/mp.16338. Epub 2023 Mar 15.
2. Unsupervised Bidirectional Cross-Modality Adaptation via Deeply Synergistic Image and Feature Alignment for Medical Image Segmentation.
IEEE Trans Med Imaging. 2020 Jul;39(7):2494-2505. doi: 10.1109/TMI.2020.2972701. Epub 2020 Feb 10.
3. Disentangled representation and cross-modality image translation based unsupervised domain adaptation method for abdominal organ segmentation.
Int J Comput Assist Radiol Surg. 2022 Jun;17(6):1101-1113. doi: 10.1007/s11548-022-02590-7. Epub 2022 Mar 17.
4. DDA-Net: Unsupervised cross-modality medical image segmentation via dual domain adaptation.
Comput Methods Programs Biomed. 2022 Jan;213:106531. doi: 10.1016/j.cmpb.2021.106531. Epub 2021 Nov 14.
5. Unsupervised deep consistency learning adaptation network for cardiac cross-modality structural segmentation.
Med Biol Eng Comput. 2023 Oct;61(10):2713-2732. doi: 10.1007/s11517-023-02833-y. Epub 2023 Jul 14.
6. Self-Attentive Spatial Adaptive Normalization for Cross-Modality Domain Adaptation.
IEEE Trans Med Imaging. 2021 Oct;40(10):2926-2938. doi: 10.1109/TMI.2021.3059265. Epub 2021 Sep 30.
7. Unsupervised domain adaptive building semantic segmentation network by edge-enhanced contrastive learning.
Neural Netw. 2024 Nov;179:106581. doi: 10.1016/j.neunet.2024.106581. Epub 2024 Jul 30.
8. Unsupervised Cross-Modality Adaptation via Dual Structural-Oriented Guidance for 3D Medical Image Segmentation.
IEEE Trans Med Imaging. 2023 Jun;42(6):1774-1785. doi: 10.1109/TMI.2023.3238114. Epub 2023 Jun 1.
9. Reducing annotation burden in MR: A novel MR-contrast guided contrastive learning approach for image segmentation.
Med Phys. 2024 Apr;51(4):2707-2720. doi: 10.1002/mp.16820. Epub 2023 Nov 13.