

Explainable breast cancer molecular expression prediction using multi-task deep-learning based on 3D whole breast ultrasound.

Author Information

Huang Zengan, Zhang Xin, Ju Yan, Zhang Ge, Chang Wanying, Song Hongping, Gao Yi

Affiliations

School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, Guangdong, 518055, China.

Department of Ultrasound, Xijing Hospital, Fourth Military Medical University, No. 127 Changle West Road, Xi'an, 710032, China.

Publication Information

Insights Imaging. 2024 Sep 19;15(1):227. doi: 10.1186/s13244-024-01810-9.

Abstract

OBJECTIVES

To noninvasively estimate three breast cancer biomarkers: estrogen receptor (ER), progesterone receptor (PR), and human epidermal growth factor receptor 2 (HER2), and to enhance prediction performance and interpretability via multi-task deep learning.

METHODS

The study included 388 breast cancer patients who underwent 3D whole breast ultrasound system (3DWBUS) examinations at Xijing Hospital between October 2020 and September 2021. Two predictive models were developed: a single-task model and a multi-task model. The former predicts biomarker expression only, while the latter combines tumor segmentation with biomarker prediction to enhance interpretability. Performance evaluation included individual and overall prediction metrics, and DeLong's test was used for performance comparison. The models' attention regions were visualized using Grad-CAM++.
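The abstract does not specify the training objective. As an illustrative sketch only, a multi-task objective of this kind is commonly formulated as a segmentation loss (e.g., soft Dice) plus a per-biomarker classification loss (e.g., binary cross-entropy); the function names and the weighting factor `lam` below are assumptions, not the paper's implementation.

```python
import math

# Hypothetical multi-task objective: Dice loss for the tumor-segmentation
# branch plus mean binary cross-entropy over the three biomarker heads
# (ER, PR, HER2). Illustrative only; not the paper's actual loss.

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss over a flattened probability mask."""
    inter = sum(p * t for p, t in zip(pred, target))
    denom = sum(pred) + sum(target)
    return 1.0 - (2.0 * inter + eps) / (denom + eps)

def bce(p, y, eps=1e-7):
    """Binary cross-entropy for one predicted probability p vs. label y."""
    p = min(max(p, eps), 1.0 - eps)
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def multi_task_loss(seg_pred, seg_mask, biomarker_probs, biomarker_labels, lam=1.0):
    """Segmentation loss plus lam-weighted mean classification loss."""
    seg = dice_loss(seg_pred, seg_mask)
    cls = sum(bce(p, y) for p, y in zip(biomarker_probs, biomarker_labels))
    return seg + lam * cls / len(biomarker_labels)

# Example: near-perfect predictions yield a near-zero combined loss.
loss = multi_task_loss([1.0, 1.0, 0.0], [1, 1, 0], [0.99, 0.99, 0.01], [1, 1, 0])
print(round(loss, 4))
```

The segmentation branch acts as an auxiliary task: forcing the shared encoder to localize the tumor plausibly biases the classification heads toward lesion features, which is one common rationale for the improved attention maps reported below.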

RESULTS

All patients were randomly split into a training set (n = 240, 62%), a validation set (n = 60, 15%), and a test set (n = 88, 23%). In the individual evaluation of ER, PR, and HER2 expression prediction on the test set, the single-task and multi-task models achieved AUCs of 0.809 and 0.735 for ER, 0.688 and 0.767 for PR, and 0.626 and 0.697 for HER2, respectively. In the overall evaluation, the multi-task model performed better on the test set, achieving a higher macro AUC of 0.733 versus 0.708 for the single-task model. Grad-CAM++ visualization revealed that the multi-task model focused more strongly on diseased tissue areas, improving the interpretability of how the model works.
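The reported macro AUCs are consistent with the unweighted mean of the three per-biomarker test-set AUCs, which is the standard definition of macro averaging. A minimal check:

```python
# Macro AUC = unweighted mean of per-label AUCs. Reproducing the
# abstract's overall numbers from its individual test-set AUCs.

def macro_auc(per_label_aucs):
    """Unweighted mean of per-label AUCs."""
    aucs = list(per_label_aucs)
    return sum(aucs) / len(aucs)

single_task = {"ER": 0.809, "PR": 0.688, "HER2": 0.626}
multi_task = {"ER": 0.735, "PR": 0.767, "HER2": 0.697}

print(round(macro_auc(single_task.values()), 3))  # 0.708
print(round(macro_auc(multi_task.values()), 3))   # 0.733
```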

CONCLUSION

Both models demonstrated strong performance, with the multi-task model excelling in accuracy and offering improved interpretability on noninvasive 3DWBUS images via Grad-CAM++.

CRITICAL RELEVANCE STATEMENT

The multi-task deep learning model effectively predicts breast cancer biomarker expression, offering direct biomarker identification and improved clinical interpretability, potentially boosting the efficiency of targeted drug screening.

KEY POINTS

Tumoral biomarkers are paramount for determining breast cancer treatment.

The multi-task model improves both prediction performance and interpretability in clinical practice.

3D whole breast ultrasound system-based deep learning models excelled at predicting breast cancer biomarkers.


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8306/11424596/ce5497ce715d/13244_2024_1810_Fig1_HTML.jpg
