
Retrospective T2 quantification from conventional weighted MRI of the prostate based on deep learning.

Author Information

Sun Haoran, Wang Lixia, Daskivich Timothy, Qiu Shihan, Han Fei, D'Agnolo Alessandro, Saouaf Rola, Christodoulou Anthony G, Kim Hyung, Li Debiao, Xie Yibin

Affiliations

Biomedical Imaging Research Institute, Cedars-Sinai Medical Center, Los Angeles, CA, United States.

Department of Bioengineering, University of California Los Angeles, Los Angeles, CA, United States.

Publication Information

Front Radiol. 2023 Oct 11;3:1223377. doi: 10.3389/fradi.2023.1223377. eCollection 2023.

Abstract

PURPOSE

To develop a deep learning-based method to retrospectively quantify T2 from conventional T1- and T2-weighted images.

METHODS

Twenty-five subjects were imaged using a multi-echo spin-echo sequence to estimate reference prostate T2 maps. Conventional T1- and T2-weighted images were acquired as the input images. A U-Net-based neural network was developed to estimate T2 maps directly from the weighted images, trained with a four-fold cross-validation strategy. The structural similarity index (SSIM), peak signal-to-noise ratio (PSNR), mean percentage error (MPE), and Pearson correlation coefficient were calculated to evaluate the quality of the network-estimated T2 maps. To explore the potential of this approach in clinical practice, retrospective T2 quantification was performed on a high-risk prostate cancer cohort (Group 1) and a low-risk active surveillance cohort (Group 2). Tumor and non-tumor T2 values were evaluated by an experienced radiologist based on region-of-interest (ROI) analysis.
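As a rough illustration of the evaluation step described above, the sketch below compares a network-estimated T2 map against its multi-echo spin-echo reference using PSNR, SSIM, MPE, and the Pearson correlation coefficient. It relies on standard scikit-image and SciPy routines; the array names, the prostate mask, and the exact MPE definition (mean absolute percentage error over masked voxels) are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch of the evaluation metrics named in METHODS (PSNR, SSIM, MPE,
# Pearson r). Array names, the prostate mask, and the MPE definition are
# assumptions, not the authors' implementation.
import numpy as np
from scipy.stats import pearsonr
from skimage.metrics import peak_signal_noise_ratio, structural_similarity


def evaluate_t2_map(t2_est, t2_ref, mask):
    """Compare an estimated T2 map (ms) to a reference map within a prostate mask.

    t2_est, t2_ref : 2D float arrays of T2 values in ms
    mask           : boolean array selecting prostate voxels
    """
    data_range = float(t2_ref.max() - t2_ref.min())

    psnr = peak_signal_noise_ratio(t2_ref, t2_est, data_range=data_range)
    ssim = structural_similarity(t2_ref, t2_est, data_range=data_range)

    est, ref = t2_est[mask], t2_ref[mask]
    # Mean percentage error, taken here as the mean absolute percentage error
    # over masked voxels (assumed definition).
    mpe = np.mean(np.abs(est - ref) / ref) * 100.0
    r, _ = pearsonr(est, ref)

    return {"PSNR_dB": psnr, "SSIM": ssim, "MPE_percent": mpe, "Pearson_r": r}
```

In the study these metrics were reported per subject under four-fold cross-validation; the function above reflects only the per-map comparison.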

RESULTS

The T2 maps generated by the trained network were consistent with the corresponding reference maps. Prostate tissue structures and contrast were well preserved, with a PSNR of 26.41 ± 1.17 dB, an SSIM of 0.85 ± 0.02, and a Pearson correlation coefficient of 0.86. Quantitative ROI analyses performed on 38 prostate cancer patients revealed estimated T2 values of 80.4 ± 14.4 ms and 106.8 ± 16.3 ms for tumor and non-tumor regions, respectively. ROI measurements showed a significant difference between tumor and non-tumor regions on the estimated T2 maps (P < 0.001). In the two-timepoint active surveillance cohort, patients classified as progressors exhibited lower estimated T2 values in the tumor ROIs at the second time point than at the first. Additionally, the T2 difference between the two time points was significantly greater for progressors than for non-progressors (P = 0.010).
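To make the ROI comparisons concrete, the sketch below contrasts per-patient tumor and non-tumor mean T2 values, and the between-timepoint T2 change for progressors versus non-progressors. The abstract does not state which statistical tests were used; a paired t-test and a Mann-Whitney U test are shown only as plausible choices, and all variable names are illustrative.

```python
# Illustrative ROI statistics for the comparisons reported in RESULTS.
# The specific tests are not stated in the abstract; these are example choices.
import numpy as np
from scipy.stats import ttest_rel, mannwhitneyu


def compare_tumor_vs_nontumor(tumor_t2, nontumor_t2):
    """tumor_t2, nontumor_t2 : per-patient mean ROI T2 values (ms), paired by patient."""
    tumor_t2 = np.asarray(tumor_t2, dtype=float)
    nontumor_t2 = np.asarray(nontumor_t2, dtype=float)
    _, p_value = ttest_rel(tumor_t2, nontumor_t2)  # paired test (assumed choice)
    return {
        "tumor_mean_ms": tumor_t2.mean(),
        "tumor_sd_ms": tumor_t2.std(ddof=1),
        "nontumor_mean_ms": nontumor_t2.mean(),
        "nontumor_sd_ms": nontumor_t2.std(ddof=1),
        "p_value": p_value,
    }


def compare_progression(delta_t2_progressors, delta_t2_nonprogressors):
    """Unpaired comparison of T2 change between time points for the two subgroups
    (Mann-Whitney U shown as one possible nonparametric choice)."""
    stat, p_value = mannwhitneyu(delta_t2_progressors, delta_t2_nonprogressors)
    return {"statistic": stat, "p_value": p_value}
```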

CONCLUSION

A deep learning method was developed to estimate prostate T2 maps retrospectively from clinically acquired T1- and T2-weighted images, which has the potential to improve prostate cancer diagnosis and characterization without requiring extra scans.


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e27b/10598780/057f114fec24/fradi-03-1223377-g001.jpg
