Park Junghoan, Bae Jae Seok, Kim Jong-Min, Witanto Joseph Nathanael, Park Sang Joon, Lee Jeong Min
Department of Radiology, Seoul National University Hospital, 101, Daehak-ro, Jongno-gu, Seoul, 03080, Republic of Korea.
Research and Science Division, MEDICAL IP Co., Ltd., Seoul, Republic of Korea.
Abdom Radiol (NY). 2023 Aug;48(8):2547-2556. doi: 10.1007/s00261-023-03962-6. Epub 2023 May 24.
Liver Imaging Reporting and Data System (LI-RADS) is limited by interreader variability. Thus, our study aimed to develop a deep-learning model for classifying LI-RADS major features using subtraction images from magnetic resonance imaging (MRI).
This single-center retrospective study included 222 consecutive patients who underwent resection for hepatocellular carcinoma (HCC) between January 2015 and December 2017. Subtraction arterial, portal venous, and transitional phase images of preoperative gadoxetic acid-enhanced MRI were used to train and test the deep-learning models. Initially, a three-dimensional (3D) nnU-Net-based deep-learning model was developed for HCC segmentation. Subsequently, a 3D U-Net-based deep-learning model was developed to assess three LI-RADS major features (nonrim arterial phase hyperenhancement [APHE], nonperipheral washout, and enhancing capsule [EC]), using assessments by board-certified radiologists as the reference standard. HCC segmentation performance was assessed using the Dice similarity coefficient (DSC), sensitivity, and precision. The sensitivity, specificity, and accuracy of the deep-learning model for classifying LI-RADS major features were calculated.
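The segmentation metrics named above (DSC, sensitivity, precision) are standard overlap measures between a predicted mask and a reference mask. A minimal sketch of how they are typically computed on binary voxel masks (this is an illustration of the standard definitions, not the authors' actual evaluation code):

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """Compute DSC, sensitivity, and precision for binary masks.

    DSC = 2|P ∩ G| / (|P| + |G|), sensitivity = TP / (TP + FN),
    precision = TP / (TP + FP), with the reference mask `gt` as ground truth.
    """
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    tp = np.logical_and(pred, gt).sum()          # overlap voxels
    dsc = 2.0 * tp / (pred.sum() + gt.sum())     # Dice similarity coefficient
    sensitivity = tp / gt.sum()                  # fraction of reference recovered
    precision = tp / pred.sum()                  # fraction of prediction that is correct
    return dsc, sensitivity, precision

# Toy example: 1D "masks" standing in for 3D voxel volumes
pred = [1, 1, 1, 0, 0, 0]
gt   = [1, 1, 0, 1, 0, 0]
dsc, sens, prec = segmentation_metrics(pred, gt)
print(round(dsc, 3), round(sens, 3), round(prec, 3))
```

In 3D MRI segmentation the same formulas are applied voxel-wise to the full volume; per-phase scores are then averaged, as in the results reported below.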
The average DSC, sensitivity, and precision of our model for HCC segmentation were 0.884, 0.891, and 0.887, respectively, across all phases. Our model demonstrated a sensitivity, specificity, and accuracy of 96.6% (28/29), 66.7% (4/6), and 91.4% (32/35), respectively, for nonrim APHE; 95.0% (19/20), 50.0% (4/8), and 82.1% (23/28), respectively, for nonperipheral washout; and 86.7% (26/30), 54.2% (13/24), and 72.2% (39/54), respectively, for EC.
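The classification figures above follow directly from confusion-matrix counts implied by the reported fractions. As a worked check, for nonrim APHE the fractions 28/29 (sensitivity), 4/6 (specificity), and 32/35 (accuracy) correspond to TP=28, FN=1, TN=4, FP=2:

```python
def classification_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)                  # true-positive rate
    specificity = tn / (tn + fp)                  # true-negative rate
    accuracy = (tp + tn) / (tp + fn + tn + fp)    # overall agreement
    return sensitivity, specificity, accuracy

# Counts implied by the nonrim APHE results: 28/29, 4/6, 32/35
sens, spec, acc = classification_metrics(tp=28, fn=1, tn=4, fp=2)
print(f"{sens:.1%} {spec:.1%} {acc:.1%}")  # 96.6% 66.7% 91.4%
```

The same arithmetic reproduces the washout (TP=19, FN=1, TN=4, FP=4) and EC (TP=26, FN=4, TN=13, FP=11) results.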
We developed an end-to-end deep-learning model that classifies LI-RADS major features using subtraction MRI images. Our model exhibited satisfactory performance in classifying LI-RADS major features.