Hu Lei, Pei Chong, Xie Li, Liu Zhen, He Nianan, Lv Weifu
Department of Ultrasound, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, Anhui 230001, China.
Department of Respiratory and Critical Care Medicine, The First People's Hospital of Hefei City, The Third Affiliated Hospital of Anhui Medical University, Hefei 230001, China.
Endocrinology. 2022 Oct 11;163(11). doi: 10.1210/endocr/bqac135.
We aimed to develop deep learning models based on shear-wave elastography (SWE) images of perinodular regions and ultrasound (US) images of thyroid nodules (TNs), and to determine their performance in predicting thyroid cancer. A total of 1747 American College of Radiology Thyroid Imaging Reporting & Data System category 4 (TR4) TNs in 1582 patients were included in this retrospective study. US images, SWE images, and 2 quantitative SWE parameters (maximum elasticity of TNs; 5-point average maximum elasticity of TNs) were obtained. Based on US and SWE images of TNs and perinodular tissue, 7 single-image convolutional neural network (CNN) models [US; internal SWE; 0.5 mm, 1.0 mm, 1.5 mm, and 2.0 mm perinodular SWE; and whole SWE region of interest (ROI)] and 6 fusion-image CNN models (US + internal SWE, US + 0.5 mm SWE, US + 1.0 mm SWE, US + 1.5 mm SWE, US + 2.0 mm SWE, US + ROI SWE) were established using ResNet18. All CNN models and quantitative SWE parameters were built on a training cohort (1247 TNs) and evaluated on a validation cohort (500 TNs). In predicting thyroid cancer, the US + 2.0 mm SWE CNN model achieved the highest area under the curve (AUC) for 10 mm < TNs ≤ 20 mm (0.95 for training; 0.92 for validation) and TNs > 20 mm (0.95 for training; 0.92 for validation), while the US + 1.0 mm SWE CNN model achieved the highest AUC for TNs ≤ 10 mm (0.95 for training; 0.92 for validation). CNN models based on the fusion of SWE segmentation images and US images improve the radiological diagnostic accuracy for thyroid cancer.