Jan Ya-Ting, Tsai Pei-Shan, Huang Wen-Hui, Chou Ling-Ying, Huang Shih-Chieh, Wang Jing-Zhe, Lu Pei-Hsuan, Lin Dao-Chen, Yen Chun-Sheng, Teng Ju-Ping, Mok Greta S P, Shih Cheng-Ting, Wu Tung-Hsin
Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Taipei, 112, Taiwan.
Department of Radiology, MacKay Memorial Hospital, Taipei, Taiwan.
Insights Imaging. 2023 Apr 24;14(1):68. doi: 10.1186/s13244-023-01412-x.
BACKGROUND: To develop an artificial intelligence (AI) model with radiomics and deep learning (DL) features extracted from CT images to distinguish benign from malignant ovarian tumors. METHODS: We enrolled 149 patients with pathologically confirmed ovarian tumors. A total of 185 tumors were included and divided into training and testing sets in a 7:3 ratio. All tumors were manually segmented from preoperative contrast-enhanced CT images. CT image features were extracted using radiomics and DL. Five models with different combinations of feature sets were built. Benign and malignant tumors were classified using machine learning (ML) classifiers. The model's performance was compared with that of five radiologists on the testing set. RESULTS: Among the five models, the best-performing model was the ensemble model combining the radiomics, DL, and clinical feature sets. The model achieved an accuracy of 82%, a specificity of 89%, and a sensitivity of 68%. Compared with the junior radiologists' averaged results, the model had higher accuracy (82% vs 66%) and specificity (89% vs 65%) with comparable sensitivity (68% vs 67%). With the assistance of the model, the junior radiologists achieved higher average accuracy (81% vs 66%), specificity (80% vs 65%), and sensitivity (82% vs 67%), approaching the performance of the senior radiologists. CONCLUSIONS: We developed a CT-based AI model that can differentiate benign from malignant ovarian tumors with high accuracy and specificity. This model significantly improved the performance of less-experienced radiologists in ovarian tumor assessment and may guide gynecologists toward better therapeutic strategies for these patients.
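The abstract reports the workflow only at a high level. As a rough illustration of the kind of pipeline described in METHODS (radiomics features extracted from manually segmented contrast-enhanced CT, a 7:3 train/test split, and an ML classifier scored by accuracy, sensitivity, and specificity), the minimal Python sketch below uses pyradiomics and a random-forest classifier. These choices, and every name and parameter in the sketch, are assumptions for illustration rather than the authors' implementation; the DL feature branch, the clinical features, and the ensemble step of the published model are not reproduced here.

import numpy as np
from radiomics import featureextractor            # pyradiomics
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, recall_score
from sklearn.model_selection import train_test_split


def extract_radiomics(image_path, mask_path):
    # Handcrafted radiomics features from one contrast-enhanced CT volume
    # and its manually segmented tumor mask (NIfTI files assumed).
    extractor = featureextractor.RadiomicsFeatureExtractor()
    features = extractor.execute(image_path, mask_path)
    # Keep numeric feature values; skip pyradiomics diagnostic metadata.
    return np.array([float(v) for k, v in features.items()
                     if not k.startswith("diagnostics")])


def build_feature_matrix(image_paths, mask_paths):
    # One feature row per tumor; DL and clinical features could be
    # concatenated column-wise here to form combined feature sets.
    return np.vstack([extract_radiomics(img, msk)
                      for img, msk in zip(image_paths, mask_paths)])


def train_and_evaluate(X, y, seed=0):
    # 7:3 train/test split, mirroring the ratio reported in METHODS.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=seed)
    clf = RandomForestClassifier(n_estimators=200, random_state=seed)
    clf.fit(X_train, y_train)
    y_pred = clf.predict(X_test)
    return {
        "accuracy": accuracy_score(y_test, y_pred),
        "sensitivity": recall_score(y_test, y_pred, pos_label=1),  # malignant = 1
        "specificity": recall_score(y_test, y_pred, pos_label=0),  # benign = 0
    }

A caller would pass paired lists of CT and mask file paths to build_feature_matrix, then pass the resulting matrix and benign(0)/malignant(1) labels to train_and_evaluate to obtain the three metrics reported in RESULTS.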