
MRI Cross-Modality Image-to-Image Translation.

Affiliations

State Key Laboratory of Software Development Environment and Key Laboratory of Biomechanics and Mechanobiology of Ministry of Education and Research Institute of Beihang University in Shenzhen, Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, 100191, China.

Ping An Technology (Shenzhen) Co., Ltd., Shanghai, 200030, China.

Publication Information

Sci Rep. 2020 Feb 28;10(1):3753. doi: 10.1038/s41598-020-60520-6.

Abstract

We present a cross-modality generation framework that learns to generate translated modalities from given modalities in MR images. Our method performs Image Modality Translation (IMT) with a deep learning model built on conditional generative adversarial networks (cGANs). The framework jointly exploits low-level features (pixel-wise information) and high-level representations (e.g., brain tumors and brain structures such as gray matter) across modalities, which is important for resolving the challenging complexity of brain structures. Building on this framework, we first propose a cross-modality registration method that fuses deformation fields to draw on the cross-modality information carried by the translated modalities. Second, we propose an approach for MRI segmentation, translated multichannel segmentation (TMS), in which given modalities and translated modalities are segmented together by fully convolutional networks (FCNs) in a multichannel manner. Both methods successfully use the cross-modality information to improve performance without adding any extra data. Experiments demonstrate that our framework advances the state of the art on five brain MRI datasets, and we observe encouraging results for cross-modality registration and segmentation on several widely adopted brain datasets. Overall, our work can serve as an auxiliary method in medical use and be applied to various tasks in medical fields.
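To make the TMS idea concrete, the following is a minimal, hypothetical PyTorch sketch, not the authors' implementation: the tiny networks, layer sizes, and all names are placeholder assumptions. It shows a stand-in cGAN generator producing a translated modality, which is then concatenated with the given modality along the channel axis and passed to an FCN-style segmenter.

```python
# Illustrative sketch only (not the paper's code). Shows how a translated
# modality from a cGAN generator could be stacked with the given modality
# and segmented in a multichannel manner (TMS). All names are hypothetical.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Stand-in for the cGAN generator mapping one MR modality to another."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, 3, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class TinyFCNSegmenter(nn.Module):
    """Stand-in for the FCN segmenter; input has 2 channels: given + translated."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, num_classes, 1),
        )
    def forward(self, x):
        return self.net(x)

generator = TinyGenerator().eval()   # conceptually a pretrained IMT generator
segmenter = TinyFCNSegmenter()

given = torch.randn(1, 1, 128, 128)  # given modality, e.g. a T1 slice
with torch.no_grad():
    translated = generator(given)    # translated modality, e.g. synthetic T2

# Translated multichannel segmentation: concatenate along the channel axis,
# so the segmenter sees both modalities without any extra acquired data.
multichannel_input = torch.cat([given, translated], dim=1)  # (1, 2, 128, 128)
logits = segmenter(multichannel_input)                      # per-class score maps
```

The key step in the sketch is the channel-wise concatenation: the segmenter receives cross-modality information from the translated image while the dataset itself is unchanged.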


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6ea6/7048849/3bbe4c68abcb/41598_2020_60520_Fig1_HTML.jpg
