
Self-Attentive Spatial Adaptive Normalization for Cross-Modality Domain Adaptation.

Publication Information

IEEE Trans Med Imaging. 2021 Oct;40(10):2926-2938. doi: 10.1109/TMI.2021.3059265. Epub 2021 Sep 30.

Abstract

Despite the successes of deep neural networks on many challenging vision tasks, they often fail to generalize to new test domains that are not distributed identically to the training data. Domain adaptation becomes more challenging for cross-modality medical data with a notable domain shift. Given that specific annotated imaging modalities may not be accessible or complete, our proposed solution is based on the cross-modality synthesis of medical images to reduce the costly annotation burden on radiologists and bridge the domain gap in radiological images. We present a novel approach for image-to-image translation in medical images, capable of operating in supervised or unsupervised (unpaired image data) setups. Built upon adversarial training, we propose a learnable self-attentive spatial normalization of the deep convolutional generator network's intermediate activations. Unlike previous attention-based image-to-image translation approaches, which are either domain-specific or require distortion of the source domain's structures, we highlight the importance of auxiliary semantic information for handling geometric changes and preserving anatomical structures during image translation. We achieve superior results for cross-modality segmentation between unpaired MRI and CT data on multi-modality whole heart and multi-modal brain tumor MRI (T1/T2) datasets compared to state-of-the-art methods. We also observe encouraging results in cross-modality conversion for paired MRI and CT images on a brain dataset. Furthermore, a detailed analysis of the cross-modality image translation and thorough ablation studies confirm our proposed method's efficacy.
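The abstract describes a spatially adaptive (SPADE-style) normalization of the generator's intermediate activations, modulated by auxiliary semantic information and gated by attention. The sketch below is a minimal, illustrative PyTorch interpretation of such a layer and is not the authors' published implementation: the class name SelfAttentiveSPADE, the guidance_channels and hidden parameters, and the single-channel sigmoid gate (a simplified spatial-attention stand-in for the paper's self-attention mechanism) are all assumptions made for illustration.

```python
# Hedged sketch of a self-attentive spatially adaptive normalization layer.
# All names and the attention gate design are illustrative assumptions, not
# the published method.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SelfAttentiveSPADE(nn.Module):
    """Normalize activations, then re-scale/shift them with spatially varying
    gamma/beta maps predicted from an auxiliary semantic guidance map, gated
    by a lightweight spatial attention mask."""

    def __init__(self, num_features: int, guidance_channels: int, hidden: int = 64):
        super().__init__()
        # Parameter-free normalization of the generator's intermediate activations.
        self.norm = nn.InstanceNorm2d(num_features, affine=False)
        # Shared encoder for the semantic guidance (e.g. a segmentation map).
        self.shared = nn.Sequential(
            nn.Conv2d(guidance_channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Spatially varying modulation parameters.
        self.to_gamma = nn.Conv2d(hidden, num_features, kernel_size=3, padding=1)
        self.to_beta = nn.Conv2d(hidden, num_features, kernel_size=3, padding=1)
        # Simplified spatial attention gate over the guidance features.
        self.to_attn = nn.Conv2d(hidden, 1, kernel_size=1)

    def forward(self, x: torch.Tensor, guidance: torch.Tensor) -> torch.Tensor:
        # Resize the guidance map to the activation resolution.
        guidance = F.interpolate(guidance, size=x.shape[-2:], mode="nearest")
        feat = self.shared(guidance)
        gamma = self.to_gamma(feat)
        beta = self.to_beta(feat)
        attn = torch.sigmoid(self.to_attn(feat))  # (N, 1, H, W) spatial gate
        # Attention-weighted spatially adaptive modulation of the normalized input.
        return self.norm(x) * (1 + attn * gamma) + attn * beta


if __name__ == "__main__":
    layer = SelfAttentiveSPADE(num_features=256, guidance_channels=8)
    x = torch.randn(2, 256, 32, 32)    # generator activations
    seg = torch.randn(2, 8, 128, 128)  # auxiliary semantic map
    print(layer(x, seg).shape)         # torch.Size([2, 256, 32, 32])
```

In this reading, the normalization itself carries no learned affine parameters; all modulation is predicted spatially from the semantic guidance, so anatomical structure encoded in the guidance map can steer the translation rather than being distorted by it.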

