Joint Cancer Segmentation and PI-RADS Classification on Multiparametric MRI Using MiniSegCaps Network.

Author Information

Jiang Wenting, Lin Yingying, Vardhanabhuti Varut, Ming Yanzhen, Cao Peng

Affiliation

Department of Diagnostic Radiology, University of Hong Kong, Hong Kong SAR, China.

Publication Information

Diagnostics (Basel). 2023 Feb 7;13(4):615. doi: 10.3390/diagnostics13040615.

Abstract

MRI is the primary imaging approach for diagnosing prostate cancer. The Prostate Imaging Reporting and Data System (PI-RADS) on multiparametric MRI (mpMRI) provides fundamental interpretation guidelines but suffers from inter-reader variability. Deep learning networks show great promise in automatic lesion segmentation and classification, helping to ease the burden on radiologists and reduce inter-reader variability. In this study, we proposed a novel multi-branch network, MiniSegCaps, for prostate cancer segmentation and PI-RADS classification on mpMRI. The MiniSeg branch output the segmentation together with the PI-RADS prediction, guided by the attention map from the CapsuleNet. The CapsuleNet branch exploited the spatial relationship of prostate cancer to anatomical structures, such as the zonal location of the lesion, which also reduced the training sample size requirement owing to its equivariance properties. In addition, a gated recurrent unit (GRU) was adopted to exploit spatial knowledge across slices, improving through-plane consistency. Based on the clinical reports, we established a prostate mpMRI database from 462 patients paired with radiologically estimated annotations. MiniSegCaps was trained and evaluated with fivefold cross-validation. On 93 testing cases, our model achieved a Dice coefficient of 0.712 for lesion segmentation, and 89.18% accuracy and 92.52% sensitivity for PI-RADS classification (PI-RADS ≥ 4) in patient-level evaluation, significantly outperforming existing methods. In addition, a graphical user interface (GUI) integrated into the clinical workflow can automatically produce diagnosis reports based on the results from MiniSegCaps.
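The abstract outlines a two-branch design: a segmentation branch gated by an attention map from a capsule-based classification branch, with a GRU fusing features across slices for through-plane consistency, and a Dice coefficient reported for segmentation. The sketch below is a minimal, hypothetical PyTorch illustration of that general structure; the class names, layer sizes, the plain classification head standing in for CapsuleNet, the attention gating, and the dice_coefficient helper are assumptions for illustration, not the published MiniSegCaps architecture.

```python
# Hypothetical sketch of a two-branch segmentation/classification network with
# slice-wise GRU fusion, in the spirit of the abstract (not the authors' code).
import torch
import torch.nn as nn

class TwoBranchSegClassifier(nn.Module):
    def __init__(self, in_ch=3, n_classes=2, feat=16):
        super().__init__()
        # Lightweight shared encoder (stand-in for the MiniSeg backbone)
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(),
        )
        # Segmentation branch: per-pixel lesion probability
        self.seg_head = nn.Conv2d(feat, 1, 1)
        # Classification branch (simplified stand-in for CapsuleNet): produces
        # PI-RADS class logits and a one-channel attention map that gates the
        # segmentation features
        self.attn_head = nn.Conv2d(feat, 1, 1)
        self.cls_pool = nn.AdaptiveAvgPool2d(1)
        self.gru = nn.GRU(input_size=feat, hidden_size=feat, batch_first=True)
        self.cls_head = nn.Linear(feat, n_classes)

    def forward(self, volume):
        # volume: (batch, slices, channels, H, W)
        b, s, c, h, w = volume.shape
        x = self.encoder(volume.reshape(b * s, c, h, w))     # (b*s, feat, H, W)
        attn = torch.sigmoid(self.attn_head(x))              # attention map
        seg = torch.sigmoid(self.seg_head(x * attn))         # gated segmentation
        # Slice-wise pooled features run through the GRU across slices
        pooled = self.cls_pool(x).flatten(1).reshape(b, s, -1)
        fused, _ = self.gru(pooled)
        logits = self.cls_head(fused.reshape(b * s, -1))     # per-slice PI-RADS logits
        return seg.reshape(b, s, 1, h, w), logits.reshape(b, s, -1)

def dice_coefficient(pred, target, eps=1e-6):
    """Dice = 2|P∩T| / (|P| + |T|), the segmentation metric reported above."""
    pred, target = (pred > 0.5).float(), (target > 0.5).float()
    inter = (pred * target).sum()
    return (2 * inter + eps) / (pred.sum() + target.sum() + eps)

# Example: 2 patients, 20 axial slices, 3 mpMRI channels (e.g. T2w/DWI/ADC)
model = TwoBranchSegClassifier()
masks, pirads_logits = model(torch.randn(2, 20, 3, 64, 64))
print(masks.shape, pirads_logits.shape)  # (2, 20, 1, 64, 64), (2, 20, 2)
```

Running slice-wise pooled features through a GRU is one simple way to share context between neighbouring slices, which is the through-plane consistency idea the abstract attributes to the GRU.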

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2312/9955952/35f0d37c6e6d/diagnostics-13-00615-g001.jpg
