Mai Wenfeng, Fan Xiaole, Zhang Lingtao, Li Jian, Chen Liting, Hua Xiaoyu, Zhang Dong, Li Hengguo, Cai Minxiang, Shi Changzheng, Liu Xiangning
Medical Imaging Center, The First Affiliated Hospital of Jinan University, Guangzhou, China.
Department of Ultrasound, The First Affiliated Hospital of Jinan University, Guangzhou, China.
Ann Med. 2025 Dec;57(1):2520401. doi: 10.1080/07853890.2025.2520401. Epub 2025 Jun 18.
Accurate preoperative diagnosis of parotid gland tumors (PGTs) is crucial for surgical planning, since malignant tumors require more extensive excision. Although fine-needle aspiration biopsy is the diagnostic gold standard, its sensitivity for detecting malignancy is limited. While deep learning (DL) models based on magnetic resonance imaging (MRI) are common in medicine, they have been less studied for parotid gland tumors. This study used a 2.5D imaging approach (incorporating inter-slice information) to train a DL model to differentiate between benign and malignant PGTs.
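The abstract does not specify how the 2.5D input is assembled; a minimal sketch of one common variant, stacking the target slice with its immediate neighbours as input channels, is shown below. The three-slice window, array shapes, and function name are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def make_2p5d_input(volume: np.ndarray, slice_idx: int, n_adjacent: int = 1) -> np.ndarray:
    """Stack a target slice with its neighbours along the channel axis.

    volume: 3D MRI array of shape (num_slices, H, W).
    Returns an array of shape (2*n_adjacent + 1, H, W), clamping at volume edges.
    """
    num_slices = volume.shape[0]
    idxs = [min(max(slice_idx + off, 0), num_slices - 1)
            for off in range(-n_adjacent, n_adjacent + 1)]
    return np.stack([volume[i] for i in idxs], axis=0)

# Example: a 3-channel 2.5D input centred on slice 10 of a 24-slice placeholder volume.
t2fs = np.random.rand(24, 256, 256).astype(np.float32)
x = make_2p5d_input(t2fs, slice_idx=10)  # shape (3, 256, 256)
```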
This retrospective study included 122 parotid tumor patients, using MRI and clinical features to build predictive models. In the traditional model, univariate analysis identified statistically significant features, which were then used in multivariate logistic regression to determine independent predictors. The model was built using four-fold cross-validation. The deep learning model was trained using 2D and 2.5D imaging approaches, with a transformer-based architecture employed for transfer learning. The model's performance was evaluated using the area under the receiver operating characteristic curve (AUC) and confusion matrix metrics.
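As a rough illustration of the traditional pipeline (univariate screening followed by multivariate logistic regression under four-fold cross-validation), the sketch below uses scikit-learn. The feature names, placeholder data, and the choice of a chi-square test are assumptions for illustration, not the study's actual variables or statistics.

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)

# Hypothetical feature table: one row per patient, binary label (1 = malignant).
df = pd.DataFrame({
    "ill_defined_boundary": rng.integers(0, 2, 122),
    "peritumoral_invasion": rng.integers(0, 2, 122),
    "malignant": rng.integers(0, 2, 122),
})
features = ["ill_defined_boundary", "peritumoral_invasion"]

# Univariate screening: retain categorical features with chi-square p < 0.05.
selected = []
for col in features:
    _, p, _, _ = chi2_contingency(pd.crosstab(df[col], df["malignant"]))
    if p < 0.05:
        selected.append(col)
selected = selected or features  # fall back so the sketch still runs on random data

# Multivariate logistic regression on the retained features, assessed with
# stratified four-fold cross-validation using AUC as the score.
cv = StratifiedKFold(n_splits=4, shuffle=True, random_state=0)
aucs = cross_val_score(LogisticRegression(max_iter=1000),
                       df[selected], df["malignant"], cv=cv, scoring="roc_auc")
print("Per-fold AUC:", aucs.round(2))
```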
In the traditional model, tumor boundary and peritumoral invasion were identified as independent predictors of malignancy in PGTs, and the model was constructed from these features. It achieved an AUC of 0.79 but showed low sensitivity (0.54). In contrast, the DL model based on 2.5D T2 fat-suppressed images performed better, with an AUC of 0.86 and a sensitivity of 0.78.
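For clarity on the reported metrics, the sketch below shows how AUC and sensitivity (recall for the malignant class, derived from the confusion matrix at a 0.5 threshold) are typically computed; the labels and probabilities are placeholders, not the study's data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

# Placeholder labels (1 = malignant) and predicted probabilities for illustration.
y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])
y_prob = np.array([0.2, 0.4, 0.7, 0.9, 0.3, 0.6, 0.1, 0.35])

auc = roc_auc_score(y_true, y_prob)

# Sensitivity from the confusion matrix at a 0.5 probability threshold.
tn, fp, fn, tp = confusion_matrix(y_true, (y_prob >= 0.5).astype(int)).ravel()
sensitivity = tp / (tp + fn)
print(f"AUC = {auc:.2f}, sensitivity = {sensitivity:.2f}")
```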
The 2.5D imaging technique, when integrated with a transformer-based transfer learning model, demonstrates significant efficacy in differentiating between benign and malignant PGTs.
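The abstract does not name the specific transformer backbone; a minimal transfer learning sketch using an ImageNet-pretrained ViT-B/16 from torchvision is given below as one plausible setup. The backbone choice, head replacement, and input size are assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn
from torchvision.models import vit_b_16, ViT_B_16_Weights

# Load an ImageNet-pretrained Vision Transformer and replace its classification
# head for the binary benign/malignant task (transfer learning).
model = vit_b_16(weights=ViT_B_16_Weights.IMAGENET1K_V1)
model.heads.head = nn.Linear(model.heads.head.in_features, 2)

# A 2.5D input (three stacked slices) maps directly onto the three channels
# expected by the pretrained backbone; images are resized to 224 x 224.
x = torch.randn(4, 3, 224, 224)   # batch of 4 placeholder 2.5D inputs
logits = model(x)                 # shape (4, 2)
```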