Department of Biomedical Engineering, Duke University, Durham, NC, USA.
Ultrasound Med Biol. 2024 Nov;50(11):1716-1723. doi: 10.1016/j.ultrasmedbio.2024.07.012. Epub 2024 Aug 22.
A deep neural network (DNN) was trained to generate a multiparametric ultrasound (mpUS) volume from four input ultrasound-based modalities (acoustic radiation force impulse [ARFI] imaging, shear wave elasticity imaging [SWEI], quantitative ultrasound-midband fit [QUS-MF], and B-mode) for the detection of prostate cancer.
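The abstract does not describe the network architecture or the software framework; the sketch below is a minimal, hypothetical per-voxel formulation in PyTorch in which the four co-registered modality values at each voxel are mapped to a single cancer-likelihood value and reassembled into an mpUS volume. The class name MpUSNet, the layer sizes, and the volume dimensions are assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

class MpUSNet(nn.Module):
    """Hypothetical per-voxel MLP: 4 modality values -> 1 cancer-likelihood logit."""
    def __init__(self, n_modalities: int = 4, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_modalities, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (n_voxels, 4) -> (n_voxels, 1)
        return self.net(x)

# Score a hypothetical volume stored as four co-registered 3-D arrays.
model = MpUSNet()
vol_shape = (64, 64, 48)                    # hypothetical volume dimensions
voxels = torch.rand(*vol_shape, 4)          # ARFI, SWEI, QUS-MF, B-mode per voxel
logits = model(voxels.reshape(-1, 4))       # flatten to a voxel table
mpus_volume = torch.sigmoid(logits).reshape(vol_shape)
```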
A DNN was trained using co-registered ARFI, SWEI, MF, and B-mode data obtained in men with biopsy-confirmed prostate cancer prior to radical prostatectomy (15 subjects, comprising 980,620 voxels). Data were obtained using a commercial scanner that was modified to allow user control of the acoustic beam sequences and provide access to the raw image data. For each subject, the index lesion and a non-cancerous region were manually segmented using visual confirmation based on whole-mount histopathology data.
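A minimal training sketch under stated assumptions: voxel-wise binary labels derived from the manual lesion and benign segmentations, binary cross-entropy loss, and the Adam optimizer, none of which are specified in the abstract. The helper make_voxel_table, the stand-in network, and the mask geometry are hypothetical.

```python
import torch
import torch.nn as nn

def make_voxel_table(arfi, swei, mf, bmode, lesion_mask, benign_mask):
    """Stack co-registered volumes into an (n_voxels, 4) feature table with 0/1
    labels, keeping only voxels inside the manually segmented regions."""
    feats = torch.stack([arfi, swei, mf, bmode], dim=-1).reshape(-1, 4)
    labels = lesion_mask.reshape(-1).float()
    keep = (lesion_mask | benign_mask).reshape(-1)
    return feats[keep], labels[keep]

# Hypothetical single-subject volumes (the study used 15 subjects, 980,620 voxels).
shape = (64, 64, 48)
arfi, swei, mf, bmode = (torch.rand(shape) for _ in range(4))
lesion_mask = torch.zeros(shape, dtype=torch.bool)
lesion_mask[20:30, 20:30, 10:20] = True
benign_mask = torch.zeros(shape, dtype=torch.bool)
benign_mask[40:55, 40:55, 25:40] = True

x, y = make_voxel_table(arfi, swei, mf, bmode, lesion_mask, benign_mask)
model = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 1))  # stand-in network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()
for _ in range(100):                        # illustrative number of epochs
    optimizer.zero_grad()
    loss = loss_fn(model(x).squeeze(-1), y)
    loss.backward()
    optimizer.step()
```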
In a prostate phantom, the DNN increased lesion contrast-to-noise ratio (CNR) compared to a previous approach that used a linear support vector machine (SVM). In the in vivo test datasets (n = 15), the DNN-based mpUS volumes clearly portrayed histopathology-confirmed prostate cancer and significantly improved CNR compared to the linear SVM (2.79 ± 0.88 vs. 1.98 ± 0.73, paired-sample t-test p < 0.001). In a sub-analysis in which the input modalities to the DNN were selectively omitted, the CNR decreased with fewer inputs; both stiffness- and echogenicity-based modalities were important contributors to the multiparametric model.
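The abstract does not give the exact CNR formula; the sketch below uses one common definition, CNR = |μ_lesion − μ_background| / √(σ_lesion² + σ_background²), and mirrors the reported paired-sample t-test across 15 subjects. The per-subject values are hypothetical draws from the reported means and standard deviations, for illustration only.

```python
import numpy as np
from scipy import stats

def cnr(volume, lesion_mask, background_mask):
    """Contrast-to-noise ratio between the lesion and a benign reference region
    (one common definition; the study's exact formula is not stated in the abstract)."""
    lesion = volume[lesion_mask]
    background = volume[background_mask]
    return abs(lesion.mean() - background.mean()) / np.sqrt(lesion.var() + background.var())

# Hypothetical per-subject CNR values drawn from the reported mean +/- SD.
rng = np.random.default_rng(0)
cnr_dnn = rng.normal(2.79, 0.88, size=15)
cnr_svm = rng.normal(1.98, 0.73, size=15)
t_stat, p_value = stats.ttest_rel(cnr_dnn, cnr_svm)  # paired-sample t-test
```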
The findings from this study indicate that a DNN can be optimized to generate mpUS prostate volumes with high CNR from ARFI, SWEI, MF, and B-mode and that this approach outperforms a linear SVM approach.