Automated diagnosis of prostate cancer in multi-parametric MRI based on multimodal convolutional neural networks.

Author information

Le Minh Hung, Chen Jingyu, Wang Liang, Wang Zhiwei, Liu Wenyu, Cheng Kwang-Ting Tim, Yang Xin

Affiliation

School of Electronics and Communications, Huazhong University of Science and Technology, Wuhan, People's Republic of China.

Publication information

Phys Med Biol. 2017 Jul 24;62(16):6497-6514. doi: 10.1088/1361-6560/aa7731.

Abstract

Automated methods for prostate cancer (PCa) diagnosis in multi-parametric magnetic resonance imaging (MP-MRIs) are critical for alleviating requirements for interpretation of radiographs while helping to improve diagnostic accuracy (Artan et al 2010 IEEE Trans. Image Process. 19 2444-55, Litjens et al 2014 IEEE Trans. Med. Imaging 33 1083-92, Liu et al 2013 SPIE Medical Imaging (International Society for Optics and Photonics) p 86701G, Moradi et al 2012 J. Magn. Reson. Imaging 35 1403-13, Niaf et al 2014 IEEE Trans. Image Process. 23 979-91, Niaf et al 2012 Phys. Med. Biol. 57 3833, Peng et al 2013a SPIE Medical Imaging (International Society for Optics and Photonics) p 86701H, Peng et al 2013b Radiology 267 787-96, Wang et al 2014 BioMed. Res. Int. 2014). This paper presents an automated method based on multimodal convolutional neural networks (CNNs) for two PCa diagnostic tasks: (1) distinguishing between cancerous and noncancerous tissues and (2) distinguishing between clinically significant (CS) and indolent PCa. Specifically, our multimodal CNNs effectively fuse apparent diffusion coefficients (ADCs) and T2-weighted MP-MRI images (T2WIs). To effectively fuse ADCs and T2WIs we design a new similarity loss function to enforce consistent features being extracted from both ADCs and T2WIs. The similarity loss is combined with the conventional classification loss functions and integrated into the back-propagation procedure of CNN training. The similarity loss enables better fusion results than existing methods as the feature learning processes of both modalities are mutually guided, jointly facilitating CNN to 'see' the true visual patterns of PCa. The classification results of multimodal CNNs are further combined with the results based on handcrafted features using a support vector machine classifier. To achieve a satisfactory accuracy for clinical use, we comprehensively investigate three critical factors which could greatly affect the performance of our multimodal CNNs but have not been carefully studied previously. (1) Given limited training data, how can these be augmented in sufficient numbers and variety for fine-tuning deep CNN networks for PCa diagnosis? (2) How can multimodal MP-MRI information be effectively combined in CNNs? (3) What is the impact of different CNN architectures on the accuracy of PCa diagnosis? Experimental results on extensive clinical data from 364 patients with a total of 463 PCa lesions and 450 identified noncancerous image patches demonstrate that our system can achieve a sensitivity of 89.85% and a specificity of 95.83% for distinguishing cancer from noncancerous tissues and a sensitivity of 100% and a specificity of 76.92% for distinguishing indolent PCa from CS PCa. This result is significantly superior to the state-of-the-art method relying on handcrafted features.
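As a rough illustration of the training objective described in the abstract, the sketch below (PyTorch) pairs an ADC branch and a T2WI branch and adds a feature-similarity term to the two classification losses, so that consistent features are encouraged across modalities. The backbone layers, the Euclidean (MSE) form of the similarity term, and the weight lambda_sim are illustrative assumptions, not the authors' published architecture or loss formulation.

# Minimal sketch of a dual-branch multimodal CNN objective:
# per-modality classification losses plus a similarity loss on the
# extracted features. Backbone, similarity form, and lambda_sim are
# assumptions for illustration only.

import torch
import torch.nn as nn
import torch.nn.functional as F


class ModalityBranch(nn.Module):
    """One CNN branch, e.g. for ADC or T2WI image patches."""

    def __init__(self, feat_dim: int = 128, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        f = self.features(x)          # modality-specific feature vector
        return f, self.classifier(f)  # features and class logits


def multimodal_loss(adc, t2wi, labels, branch_adc, branch_t2, lambda_sim=0.1):
    """Classification losses for both branches plus a similarity loss that
    pushes the two modalities toward consistent feature representations."""
    f_adc, logits_adc = branch_adc(adc)
    f_t2, logits_t2 = branch_t2(t2wi)

    cls_loss = F.cross_entropy(logits_adc, labels) + F.cross_entropy(logits_t2, labels)
    sim_loss = F.mse_loss(f_adc, f_t2)  # assumed Euclidean similarity term
    return cls_loss + lambda_sim * sim_loss

Because the similarity term is differentiable, it back-propagates through both branches alongside the classification losses, which is how the paper describes the mutual guidance between modalities. The subsequent step of combining CNN scores with handcrafted-feature scores via an SVM is not shown here.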

