
Deep-learning based discrimination of pathologic complete response using MRI in HER2-positive and triple-negative breast cancer.

Affiliations

Department of Radiology, Korea University Guro Hospital, Korea University College of Medicine, Seoul, Korea.

Innovative Medical Technology Research Institute, Seoul National University Hospital, Seoul, Republic of Korea.

Publication information

Sci Rep. 2024 Oct 4;14(1):23065. doi: 10.1038/s41598-024-74276-w.

Abstract

Distinguishing between pathologic complete response and residual cancer after neoadjuvant chemotherapy (NAC) is crucial for treatment decisions, but the current imaging methods face challenges. To address this, we developed deep-learning models using post-NAC dynamic contrast-enhanced MRI and clinical data. A total of 852 women with human epidermal growth factor receptor 2 (HER2)-positive or triple-negative breast cancer were randomly divided into a training set (n = 724) and a validation set (n = 128). A 3D convolutional neural network model was trained on the training set and validated independently. The main models were developed using cropped MRI images, but models using uncropped whole images were also explored. The delayed-phase model demonstrated superior performance compared to the early-phase model (area under the receiver operating characteristic curve [AUC] = 0.74 vs. 0.69, P = 0.013) and the combined model integrating multiple dynamic phases and clinical data (AUC = 0.74 vs. 0.70, P = 0.022). Deep-learning models using uncropped whole images exhibited inferior performance, with AUCs ranging from 0.45 to 0.54. Further refinement and external validation are necessary for enhanced accuracy.
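The paper does not publish its network architecture in the abstract, so the following is a minimal, illustrative sketch only: a small 3D convolutional neural network (PyTorch) that takes a cropped post-NAC DCE-MRI tumor volume and outputs a probability of pathologic complete response, with an optional concatenation of clinical variables in the spirit of the "combined model." Layer sizes, the crop shape, and the clinical features are assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a 3D CNN for pCR vs. residual-cancer classification
# from a cropped DCE-MRI volume; all hyperparameters are illustrative.
import torch
import torch.nn as nn

class DCEMRI3DCNN(nn.Module):
    def __init__(self, in_channels: int = 1, n_clinical: int = 0):
        super().__init__()
        # Three conv blocks progressively downsample the cropped volume
        # (e.g. 1 x 64 x 64 x 64) into a compact image embedding.
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1),
            nn.BatchNorm3d(16), nn.ReLU(inplace=True), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.BatchNorm3d(32), nn.ReLU(inplace=True), nn.MaxPool3d(2),
            nn.Conv3d(32, 64, kernel_size=3, padding=1),
            nn.BatchNorm3d(64), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),            # -> (B, 64, 1, 1, 1)
        )
        # Optional fusion with clinical variables ("combined model" idea):
        # the image embedding is concatenated with a clinical-feature vector.
        self.classifier = nn.Linear(64 + n_clinical, 1)

    def forward(self, volume, clinical=None):
        x = self.features(volume).flatten(1)    # (B, 64)
        if clinical is not None:
            x = torch.cat([x, clinical], dim=1)
        return self.classifier(x).squeeze(1)    # logit; sigmoid -> P(pCR)

# Example forward pass on a dummy batch of two cropped delayed-phase volumes.
model = DCEMRI3DCNN(in_channels=1, n_clinical=2)
volumes = torch.randn(2, 1, 64, 64, 64)
clinical = torch.randn(2, 2)                    # placeholder clinical features
probs = torch.sigmoid(model(volumes, clinical)) # per-case probability of pCR
print(probs.shape)                              # torch.Size([2])
```

In a setup like this, validation-set performance would typically be summarized with the area under the ROC curve (for example, scikit-learn's roc_auc_score applied to the predicted probabilities and the pathology-confirmed labels), matching the AUC comparisons reported in the abstract.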


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c5e8/11452398/b3c51cdbf3f8/41598_2024_74276_Fig1_HTML.jpg
