
Automated and reusable deep learning (AutoRDL) framework for predicting response to neoadjuvant chemotherapy and axillary lymph node metastasis in breast cancer using ultrasound images: a retrospective, multicentre study.

Author information

You Jingjing, Huang Yue, Ouyang Lizhu, Zhang Xiao, Chen Pei, Wu Xuewei, Jin Zhe, Shen Hui, Zhang Lu, Chen Qiuying, Pei Shufang, Zhang Bin, Zhang Shuixing

Affiliations

Department of Radiology, The First Affiliated Hospital of Jinan University, Guangzhou, Guangdong, China.

Department of Ultrasound, The First Affiliated Hospital of Kunming Medical University, Kunming, Yunnan, China.

Publication information

EClinicalMedicine. 2024 Feb 27;69:102499. doi: 10.1016/j.eclinm.2024.102499. eCollection 2024 Mar.

Abstract

BACKGROUND

Previous deep learning models have been proposed to predict pathological complete response (pCR) and axillary lymph node metastasis (ALNM) in breast cancer. However, these models often relied on multiple frameworks, required manual annotation, and discarded low-quality images. We aimed to develop an automated and reusable deep learning (AutoRDL) framework for tumor detection and for prediction of pCR and ALNM using ultrasound images of varying quality.

METHODS

The AutoRDL framework includes a You Only Look Once version 5 (YOLOv5) network for tumor detection and a progressive multi-granularity (PMG) network for pCR and ALNM prediction. The training cohort and the internal validation cohort were recruited from Guangdong Provincial People's Hospital (GPPH) between November 2012 and May 2021. The two external validation cohorts were recruited from the First Affiliated Hospital of Kunming Medical University (KMUH) between January 2016 and December 2019 and from Shunde Hospital of Southern Medical University (SHSMU) between January 2014 and July 2015. Before model training, super-resolution via iterative refinement (SR3) was applied to improve the spatial resolution of low-quality images from KMUH. We developed three models for predicting pCR and ALNM: a clinical model based on multivariable logistic regression, an image model using the PMG network, and a combined model that integrates both clinical and image data through the PMG network.
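As an illustration of the "clinical model" component described above, the sketch below fits a multivariable logistic regression on synthetic data. This is not the study's model: the covariates, effect sizes, and data are hypothetical placeholders chosen only to show the general approach (fit on a training split, score a held-out split by AUC).

```python
# Illustrative sketch only: a multivariable logistic-regression "clinical model"
# of the kind the abstract describes, fitted on synthetic data.
# Feature names and coefficients are hypothetical, not the study's variables.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 500
# Hypothetical clinical covariates (e.g., age, tumor size, receptor status).
X = np.column_stack([
    rng.normal(55, 10, n),   # age (years)
    rng.normal(25, 8, n),    # tumor size (mm)
    rng.integers(0, 2, n),   # receptor status (binary)
])
# Synthetic binary outcome loosely dependent on the covariates.
logits = 0.03 * (X[:, 0] - 55) + 0.05 * (X[:, 1] - 25) + 0.8 * X[:, 2]
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.3, random_state=0
)
clinical_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
auc = roc_auc_score(y_val, clinical_model.predict_proba(X_val)[:, 1])
print(f"Clinical-model validation AUC: {auc:.3f}")
```

In the study itself, the clinical model's predictions would be evaluated alongside (and fused with) the PMG image model; the fitting-and-scoring pattern is the same.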

FINDINGS

The YOLOv5 network demonstrated excellent accuracy in tumor detection, achieving average precisions of 0.880-0.921 during validation. For pCR prediction, the combined model with SR3 outperformed the combined model without SR3, the image model with SR3, the image model without SR3, and the clinical model (AUC: 0.833 vs 0.822 vs 0.806 vs 0.790 vs 0.712; all p < 0.05) in external validation cohort 1 (KMUH). Consistently, the combined model with SR3 exhibited the highest accuracy in ALNM prediction, surpassing the combined model without SR3, the image model with SR3, the image model without SR3, and the clinical model (AUC: 0.825 vs 0.806 vs 0.802 vs 0.787 vs 0.703; all p < 0.05) in external validation cohort 1 (KMUH). In external validation cohort 2 (SHSMU), the combined model also outperformed the clinical and image models (AUC: 0.819 vs 0.712 vs 0.806; both p < 0.05).
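The findings above rest on pairwise statistical comparisons of AUCs. The abstract does not state which test was used (DeLong's test is the common choice for correlated ROC curves); the sketch below uses a paired bootstrap on synthetic scores as a simple, assumption-light stand-in to show how such a p-value can be obtained.

```python
# Illustrative sketch: comparing two models' AUCs on the same cases with a
# paired bootstrap. Labels and scores are synthetic; this is a stand-in for
# whatever test (e.g., DeLong's) the study actually used.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 400
y = rng.integers(0, 2, n)                 # synthetic binary labels
scores_a = y * 0.6 + rng.random(n) * 0.8  # stronger model's scores
scores_b = y * 0.3 + rng.random(n)        # weaker model's scores

diffs = []
for _ in range(2000):
    idx = rng.integers(0, n, n)           # resample cases with replacement
    if y[idx].min() == y[idx].max():
        continue                          # AUC needs both classes present
    diffs.append(roc_auc_score(y[idx], scores_a[idx])
                 - roc_auc_score(y[idx], scores_b[idx]))
diffs = np.asarray(diffs)
# Two-sided bootstrap p-value for the null "AUC_a == AUC_b".
p = 2 * min((diffs <= 0).mean(), (diffs >= 0).mean())
print(f"mean AUC difference {diffs.mean():.3f}, bootstrap p ~ {p:.4f}")
```

The paired resampling (same bootstrap indices for both models) matters: it accounts for the correlation between the two models' scores on shared cases, which an unpaired comparison would ignore.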

INTERPRETATION

Our proposed AutoRDL framework can automatically predict pCR and ALNM in real-world settings, and it has the potential to assist clinicians in optimizing individualized treatment options for patients.

FUNDING

National Key Research and Development Program of China (2023YFF1204600); National Natural Science Foundation of China (82227802, 82302306); Clinical Frontier Technology Program of the First Affiliated Hospital of Jinan University, China (JNU1AF-CFTP-2022-a01201); Science and Technology Projects in Guangzhou (202201020022, 2023A03J1036, 2023A03J1038); Science and Technology Youth Talent Nurturing Program of Jinan University (21623209); and Postdoctoral Science Foundation of China (2022M721349).


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b93f/10909626/472d9f03ca2c/gr1.jpg
