Chen Qijian, Wang Lihui, Deng Zeyu, Wang Rongpin, Wang Li, Jian Caiqing, Zhu Yue-Min
Key Laboratory of Advanced Medical Imaging and Intelligent Computing of Guizhou Province, Engineering Research Center of Text Computing, Ministry of Education, State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China.
Med Image Anal. 2025 Apr;101:103435. doi: 10.1016/j.media.2024.103435. Epub 2024 Dec 30.
Deep learning methods have been widely used for various glioma predictions. However, they are usually task-specific, segmentation-dependent, and lack interpretable biomarkers. Accurately predicting glioma histological grade and molecular subtypes at the same time while providing reliable imaging biomarkers remains challenging. To achieve this, we propose a novel cooperative multi-task learning network (CMTLNet) which consists of a task-common feature extraction (CFE) module, a task-specific unique feature extraction (UFE) module, and a unique-common feature collaborative classification (UCFC) module. In CFE, a segmentation-free tumor feature perception (SFTFP) module is first designed to extract tumor-aware features in a classification manner rather than a segmentation manner. Then, based on the multi-scale tumor-aware features extracted by the SFTFP module, CFE uses convolutional layers to further refine these features, from which the task-common features are learned. In UFE, the task-specific unique features are extracted based on orthogonal projection and conditional classification strategies. In UCFC, the unique and common features are fused with an attention mechanism to adapt them to the different glioma prediction tasks. Finally, deep feature-guided interpretable radiomic biomarkers for each glioma prediction task are explored by combining SHAP values and correlation analysis. Through comparisons with recently reported methods on a large multi-center dataset comprising over 1800 cases, we demonstrated the superiority of the proposed CMTLNet, with the mean Matthews correlation coefficient in the validation and test sets improved by (4.1%, 10.7%), (3.6%, 23.4%), and (2.7%, 22.7%) for the glioma grading, 1p/19q, and IDH status prediction tasks, respectively.
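The orthogonal-projection idea used in UFE can be illustrated with a minimal NumPy sketch (the function name and vector shapes here are hypothetical; the paper applies this strategy to learned feature maps, not raw vectors): each feature vector is stripped of its component along the task-common direction, leaving only the orthogonal, task-specific residual.

```python
import numpy as np

def orthogonal_unique(features: np.ndarray, common: np.ndarray) -> np.ndarray:
    """Remove the component of each row of `features` that lies along the
    task-common direction `common`, keeping only the orthogonal residual.
    This is a toy sketch of an orthogonal-projection decorrelation step."""
    # Unit vector along the common feature direction
    c = common / np.linalg.norm(common)
    # Scalar projection of each row onto c, then subtract that component
    proj = features @ c                     # shape: (n_samples,)
    return features - np.outer(proj, c)     # residuals orthogonal to c

f = np.array([[3.0, 1.0], [1.0, 2.0]])
c = np.array([1.0, 0.0])
u = orthogonal_unique(f, c)
print(u)  # components along the common direction are removed
```

By construction, the returned residuals have zero inner product with the common direction, so the "unique" features carry no information already captured by the shared representation.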
In addition, we found that some radiomic features are highly correlated with otherwise uninterpretable deep features and that their variation trends are consistent across multi-center datasets, so they can be taken as reliable imaging biomarkers for glioma diagnosis. The proposed CMTLNet thus provides an interpretable tool for glioma multi-task prediction, which is beneficial for precise glioma diagnosis and personalized treatment.
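The correlation half of the biomarker analysis (the SHAP half is omitted here) can be sketched as follows; the function name and data shapes are hypothetical, assuming each matrix holds one case per row and one feature per column over the same set of cases:

```python
import numpy as np

def deep_radiomic_correlation(radiomic: np.ndarray, deep: np.ndarray) -> np.ndarray:
    """Pearson correlation between each radiomic feature (columns of `radiomic`)
    and each deep feature (columns of `deep`), computed over matched cases."""
    # Standardize each column, then a scaled inner product gives Pearson r
    r = (radiomic - radiomic.mean(axis=0)) / radiomic.std(axis=0)
    d = (deep - deep.mean(axis=0)) / deep.std(axis=0)
    return (r.T @ d) / radiomic.shape[0]    # shape: (n_radiomic, n_deep)

# Toy check: a radiomic feature that is a linear function of a deep feature
rng = np.random.default_rng(0)
deep_feats = rng.normal(size=(100, 3))
radiomic_feats = 2.0 * deep_feats[:, [0]] + 1.0   # one synthetic radiomic feature
corr = deep_radiomic_correlation(radiomic_feats, deep_feats)
print(corr[0, 0])  # close to 1.0
```

Radiomic features whose correlations with task-relevant deep features stay high and keep the same sign across centers are the candidates the abstract describes as reliable imaging biomarkers.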