Su Chang, Miao Kuo, Zhang Liwei, Yu Xuemei, Guo Zhiyao, Li Daoshuang, Xu Mingda, Zhang Qiming, Dong Xiaoqiu
Department of Ultrasound, Fourth Affiliated Hospital of Harbin Medical University, No. 37 Yi Yuan Street, Harbin 150086, China.
J Imaging Inform Med. 2025 Jun 24. doi: 10.1007/s10278-025-01566-8.
This study aimed to develop and validate a multimodal deep learning model that leverages 2D grayscale ultrasound (US) images alongside readily available clinical data to improve diagnostic performance for ovarian cancer (OC). A retrospective analysis was conducted involving 1899 patients who underwent preoperative US examinations and subsequent surgery for adnexal masses between 2019 and 2024. A multimodal deep learning model was constructed for diagnosing OC and for extracting US morphological features from the images. The model's performance was evaluated using metrics such as receiver operating characteristic (ROC) curves, accuracy, and F1 score. The multimodal deep learning model outperformed the image-only model, achieving areas under the curve (AUCs) of 0.9393 (95% CI 0.9139-0.9648) and 0.9317 (95% CI 0.9062-0.9573) in the internal and external test sets, respectively. The model significantly improved radiologists' AUCs for OC diagnosis and enhanced inter-reader agreement. For US morphological feature extraction, the model demonstrated robust performance, attaining accuracies of 86.34% and 85.62% in the internal and external test sets, respectively. Multimodal deep learning has the potential to enhance the diagnostic accuracy and consistency of radiologists in identifying OC. The model's effective feature extraction from ultrasound images underscores the capability of multimodal deep learning to automate the generation of structured ultrasound reports.
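The abstract reports three evaluation metrics: AUC from ROC analysis, accuracy, and F1 score. As a minimal sketch of how such metrics are computed from a classifier's predicted probabilities, the example below uses scikit-learn on illustrative toy labels and probabilities (assumed values, not the study's data; 1 = ovarian cancer):

```python
# Hedged sketch: computing AUC, accuracy, and F1 from predicted
# probabilities. Labels/probabilities are illustrative only.
from sklearn.metrics import roc_auc_score, accuracy_score, f1_score

y_true = [0, 0, 1, 1, 0, 1, 1, 0]                  # 1 = ovarian cancer (toy data)
y_prob = [0.1, 0.6, 0.8, 0.9, 0.2, 0.7, 0.4, 0.3]  # model's predicted probabilities
y_pred = [int(p >= 0.5) for p in y_prob]           # binarize at a 0.5 threshold

auc = roc_auc_score(y_true, y_prob)   # threshold-free ranking quality
acc = accuracy_score(y_true, y_pred)  # fraction of correct predictions
f1 = f1_score(y_true, y_pred)         # harmonic mean of precision and recall
print(f"AUC={auc:.4f}  accuracy={acc:.4f}  F1={f1:.4f}")
# → AUC=0.9375  accuracy=0.7500  F1=0.7500
```

Note that AUC is computed from the raw probabilities (it measures how well cases are ranked above controls), while accuracy and F1 depend on the chosen decision threshold.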