
Deep coupled registration and segmentation of multimodal whole-brain images

Author affiliations

Ministry of Education Key Laboratory of Intelligent Computation and Signal Processing, Information Materials and Intelligent Sensing Laboratory of Anhui Province, School of Electronics and Information Engineering, Anhui University, Hefei, Anhui, 230601, China.

SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, Jiangsu, 210096, China.

Publication information

Bioinformatics. 2024 Nov 1;40(11). doi: 10.1093/bioinformatics/btae606.

DOI: 10.1093/bioinformatics/btae606
PMID: 39400311
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11543610/
Abstract

MOTIVATION

Recent brain mapping efforts are producing large-scale whole-brain images using different imaging modalities. Accurate alignment and delineation of anatomical structures in these images are essential for numerous studies. These requirements are typically modeled as two distinct tasks: registration and segmentation. However, prevailing methods fail to fully explore and utilize the inherent correlation and complementarity between the two tasks. Furthermore, variations in brain anatomy, brightness, and texture pose another formidable challenge in designing multi-modal similarity metrics. A high-throughput approach capable of overcoming the bottleneck of multi-modal similarity metric design, while effectively leveraging the highly correlated and complementary nature of the two tasks, is highly desirable.

RESULTS

We introduce a deep learning framework for joint registration and segmentation of multi-modal brain images. Under this framework, the registration and segmentation tasks are deeply coupled and collaborate at two hierarchical layers. In the inner layer, we establish a strong feature-level coupling between the two tasks by learning a unified common latent feature representation. In the outer layer, we introduce a mutually supervised dual-branch network to decouple latent features and facilitate task-level collaboration between registration and segmentation. Since the latent features we designed are also modality-independent, the bottleneck of designing a multi-modal similarity metric is essentially addressed. Another merit offered by this framework is the interpretability of latent features, which allows intuitive manipulation of feature learning, thereby further enhancing network training efficiency and the performance of both tasks. Extensive experiments conducted on both multi-modal and mono-modal datasets of mouse and human brains demonstrate the superiority of our method.
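The two-layer coupling described above can be sketched in miniature: a shared encoder maps images of any modality to a modality-independent latent code (the inner-layer, feature-level coupling), and two branches then decode that shared code into a displacement field and a label map (the outer-layer, task-level collaboration). This is a toy illustration under loose assumptions, not the authors' DCRS implementation; all function names and the toy "images" are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Latent:
    """Modality-independent latent features (the inner-layer coupling)."""
    anatomy: list  # structural content shared by both tasks

def shared_encoder(image):
    # Hypothetical encoder: images from any modality are mapped into one
    # common latent space, here crudely approximated by intensity
    # normalization so that modality-specific brightness is factored out.
    peak = max(image)
    return Latent(anatomy=[v / peak for v in image])

def registration_branch(lat_moving, lat_fixed):
    # Outer layer, branch 1: predict a displacement from the latent codes
    # rather than from raw intensities, sidestepping the need for a
    # hand-crafted multi-modal similarity metric.
    return [f - m for m, f in zip(lat_moving.anatomy, lat_fixed.anatomy)]

def segmentation_branch(lat, threshold=0.5):
    # Outer layer, branch 2: decode the same latent code into a label map;
    # a real network would predict per-voxel class probabilities.
    return [1 if v > threshold else 0 for v in lat.anatomy]

if __name__ == "__main__":
    # "Moving" and "fixed" toy images in different modalities (the
    # different intensity scales stand in for modality differences).
    moving, fixed = [10.0, 20.0, 40.0], [1.0, 3.0, 4.0]
    lat_m, lat_f = shared_encoder(moving), shared_encoder(fixed)
    field = registration_branch(lat_m, lat_f)  # drives warping
    seg = segmentation_branch(lat_f)           # label map on fixed image
    # Mutual supervision (task-level collaboration): in training, warped
    # labels of the moving image would supervise `seg`, and label overlap
    # would in turn regularize `field`.
    print(field, seg)
```

In the paper's actual framework both branches are trained networks and the mutual supervision is expressed as loss terms; the sketch only shows how a single modality-free latent code lets one representation serve both tasks.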

AVAILABILITY AND IMPLEMENTATION

The code is available at https://github.com/tingtingup/DCRS.


Figures:
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0863/11543610/48adfb33b03d/btae606f1.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0863/11543610/2d62aeb4e1bf/btae606f2.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0863/11543610/2447736e3133/btae606f3.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0863/11543610/36a175ba5148/btae606f4.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0863/11543610/488f4f164999/btae606f5.jpg

Similar articles

1. Deep coupled registration and segmentation of multimodal whole-brain images.
   Bioinformatics. 2024 Nov 1;40(11). doi: 10.1093/bioinformatics/btae606.
2. A modality-collaborative convolution and transformer hybrid network for unpaired multi-modal medical image segmentation with limited annotations.
   Med Phys. 2023 Sep;50(9):5460-5478. doi: 10.1002/mp.16338. Epub 2023 Mar 15.
3. Semi-supervised multi-modal medical image segmentation with unified translation.
   Comput Biol Med. 2024 Jun;176:108570. doi: 10.1016/j.compbiomed.2024.108570. Epub 2024 May 8.
4. Joint learning-based feature reconstruction and enhanced network for incomplete multi-modal brain tumor segmentation.
   Comput Biol Med. 2023 Sep;163:107234. doi: 10.1016/j.compbiomed.2023.107234. Epub 2023 Jul 4.
5. Multi-Modal Brain Tumor Data Completion Based on Reconstruction Consistency Loss.
   J Digit Imaging. 2023 Aug;36(4):1794-1807. doi: 10.1007/s10278-022-00697-6. Epub 2023 Mar 1.
6. Self-Supervised Multi-Modal Hybrid Fusion Network for Brain Tumor Segmentation.
   IEEE J Biomed Health Inform. 2022 Nov;26(11):5310-5320. doi: 10.1109/JBHI.2021.3109301. Epub 2022 Nov 10.
7. Learning multi-modal brain tumor segmentation from privileged semi-paired MRI images with curriculum disentanglement learning.
   Comput Biol Med. 2023 Jun;159:106927. doi: 10.1016/j.compbiomed.2023.106927. Epub 2023 Apr 21.
8. VoxResNet: Deep voxelwise residual networks for brain segmentation from 3D MR images.
   Neuroimage. 2018 Apr 15;170:446-455. doi: 10.1016/j.neuroimage.2017.04.041. Epub 2017 Apr 23.
9. SwinCross: Cross-modal Swin transformer for head-and-neck tumor segmentation in PET/CT images.
   Med Phys. 2024 Mar;51(3):2096-2107. doi: 10.1002/mp.16703. Epub 2023 Sep 30.
10. A 3D hierarchical cross-modality interaction network using transformers and convolutions for brain glioma segmentation in MR images.
   Med Phys. 2024 Nov;51(11):8371-8389. doi: 10.1002/mp.17354. Epub 2024 Aug 13.
