

United adversarial learning for liver tumor segmentation and detection of multi-modality non-contrast MRI.

Affiliations

Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, 250358, China; Digital Imaging Group of London, London, ON, Canada.

Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, 250358, China.

Publication Information

Med Image Anal. 2021 Oct;73:102154. doi: 10.1016/j.media.2021.102154. Epub 2021 Jun 29.

Abstract

Simultaneous segmentation and detection of liver tumors (hemangioma and hepatocellular carcinoma (HCC)) using multi-modality non-contrast magnetic resonance imaging (NCMRI) is crucial for clinical diagnosis. However, it remains a challenging task because: (1) HCC information on NCMRI is insufficient, which makes liver tumor feature extraction difficult; (2) the diverse imaging characteristics across multi-modality NCMRI make feature fusion and selection difficult; and (3) NCMRI provides no specific information distinguishing hemangioma from HCC, which makes liver tumor detection difficult. In this study, we propose a united adversarial learning framework (UAL) for simultaneous liver tumor segmentation and detection using multi-modality NCMRI. UAL first uses a multi-view aware encoder to extract multi-modality NCMRI information for liver tumor segmentation and detection. In this encoder, a novel edge dissimilarity feature pyramid module is designed to facilitate complementary multi-modality feature extraction. Second, a newly designed fusion and selection channel fuses the multi-modality features and decides which features to select. Then, the proposed mechanism of coordinate sharing with padding integrates the segmentation and detection tasks so that both can perform united adversarial learning in a single discriminator. Lastly, an innovative multi-phase radiomics-guided discriminator exploits clear and specific tumor information to improve multi-task performance via an adversarial learning strategy. UAL is validated on the corresponding multi-modality NCMRI (i.e., T1FS pre-contrast MRI, T2FS MRI, and DWI) and three-phase contrast-enhanced MRI of 255 clinical subjects. The experiments show that UAL achieves a Dice similarity coefficient of 83.63%, a pixel accuracy of 97.75%, an intersection-over-union of 81.30%, a sensitivity of 92.13%, a specificity of 93.75%, and a detection accuracy of 92.94%, demonstrating that UAL has great potential for the clinical diagnosis of liver tumors.
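
For reference, the reported segmentation metrics (Dice similarity coefficient, pixel accuracy, intersection-over-union, sensitivity, and specificity) follow their standard definitions over the confusion matrix of predicted versus ground-truth tumor masks. The sketch below is a minimal NumPy implementation of those standard formulas, not code from the paper; the function name `segmentation_metrics` and its interface are illustrative assumptions. The detection accuracy reported in the abstract refers to tumor-type classification (hemangioma vs. HCC) and is not covered by this mask-level sketch.

```python
import numpy as np

def segmentation_metrics(pred, target):
    """Standard overlap metrics for binary tumor masks (illustrative sketch).

    pred, target: arrays of the same shape with 1 = tumor voxel, 0 = background.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)

    # Confusion-matrix counts over all voxels.
    tp = np.logical_and(pred, target).sum()
    tn = np.logical_and(~pred, ~target).sum()
    fp = np.logical_and(pred, ~target).sum()
    fn = np.logical_and(~pred, target).sum()
    eps = 1e-8  # guards against division by zero for empty masks

    return {
        "dice": 2 * tp / (2 * tp + fp + fn + eps),
        "pixel_accuracy": (tp + tn) / (tp + tn + fp + fn + eps),
        "iou": tp / (tp + fp + fn + eps),
        "sensitivity": tp / (tp + fn + eps),
        "specificity": tn / (tn + fp + eps),
    }
```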

