Shanghai Key Laboratory of Magnetic Resonance, East China Normal University, Shanghai, People's Republic of China.
Department of Radiology, Obstetrics and Gynecology Hospital, Fudan University, Shanghai, People's Republic of China.
Sci Rep. 2023 Feb 16;13(1):2770. doi: 10.1038/s41598-023-29814-3.
To establish a deep learning (DL) model for differentiating borderline ovarian tumor (BOT) from epithelial ovarian cancer (EOC) on conventional MR imaging. We retrospectively enrolled 201 patients with pathologically proven ovarian tumors (102 BOTs and 99 EOCs) at the Obstetrics and Gynecology Hospital of Fudan University between January 2015 and December 2017. All imaging data were reviewed on a picture archiving and communication system (PACS) server. Both T1-weighted imaging (T1WI) and T2-weighted imaging (T2WI) MR images were used to determine the lesion area. We trained a U-net++ model with deep supervision to segment the lesion area on MR images. The segmented regions were then fed into a DL-based classification model to categorize ovarian masses automatically. For ovarian lesion segmentation, the mean Dice similarity coefficient (DSC) of the trained U-net++ model on the testing dataset reached 0.73 ± 0.25, 0.76 ± 0.18, and 0.60 ± 0.24 on sagittal T2WI, coronal T2WI, and axial T1WI images, respectively. The DL model based on the combined T2WI network differentiated BOT from EOC with a significantly higher AUC of 0.87, an accuracy of 83.7%, a sensitivity of 75.0%, and a specificity of 87.5%. In comparison, the radiologist achieved an AUC of only 0.75, with an accuracy of 75.5%, a sensitivity of 96.0%, and a specificity of 54.2% (P < 0.001). The trained DL network model derived from routine MR imaging can help distinguish BOT from EOC with high accuracy, outperforming the radiologists' assessment.
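For readers unfamiliar with the segmentation metric reported above, the following is a minimal sketch (not the authors' code) of how the Dice similarity coefficient is typically computed between a predicted lesion mask and a manual annotation; the mask shapes and smoothing term are illustrative assumptions.

```python
import numpy as np


def dice_similarity_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary masks, with a small eps to avoid division by zero."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)


# Hypothetical example: compare a predicted lesion mask against a manual annotation
# on a 256 x 256 MR slice (values chosen only to illustrate the computation).
pred_mask = np.zeros((256, 256), dtype=np.uint8)
true_mask = np.zeros((256, 256), dtype=np.uint8)
pred_mask[100:150, 100:150] = 1
true_mask[110:160, 110:160] = 1
print(f"DSC: {dice_similarity_coefficient(pred_mask, true_mask):.2f}")
```

A DSC of 1.0 indicates perfect overlap and 0.0 indicates no overlap, so the reported values of 0.73-0.76 on T2WI reflect substantially better lesion delineation than the 0.60 obtained on axial T1WI.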