Tang Jingli, Wang Hao, Wu Dinghui, Kong Yan, Huang Jianfeng, Han Shuguang
School of Internet of Things Engineering, Jiangnan University, Wuxi, 214122, China.
Key Laboratory of Light Industry, Jiangnan University, Wuxi, 214122, China.
J Imaging Inform Med. 2024 Dec 4. doi: 10.1007/s10278-024-01339-9.
Radiation pneumonitis (RP) is an inflammatory reaction in the lungs caused by radiation therapy. The etiology of RP is complex and varied, making it challenging to establish accurate predictive models. The aim of this study is to develop a dual-modal prediction model for RP using patients' pre-radiotherapy lung CT images together with clinical and dose data. In this paper, an RP prediction model utilizing dual-modal data is proposed. First, for the CT image data, the Med3D transfer-learning network, pretrained on a large-scale medical dataset, is employed to extract image features. To adapt the network to the RP task, the last convolutional block, augmented with a 3D channel attention mechanism, is fine-tuned, and once training stabilizes the best-performing model is saved for deep feature extraction. An autoencoder (AE) then compresses the deep features to reduce their dimensionality. Second, the clinical and dose features are screened using univariate analysis and LASSO regression. Finally, the two feature groups are each multiplied by a respective adaptive rate to achieve data fusion, and the fused features are input into a binary classification model for training and prediction. The KAN classifier with adaptive feature fusion demonstrates superior performance, achieving precision rates of 74% and 76%, a recall rate of 73%, and an AUC of 86%, surpassing the results obtained from single-modality approaches. Experimental results on a dataset of 117 thoracic cancer patients receiving radiotherapy show that the dual-modal data fusion model predicts RP more effectively than single-modality models.
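Below is a minimal PyTorch sketch of the adaptive-rate feature fusion step described above, assuming each modality has already been reduced to a fixed-length vector (CT deep features after the AE, and the screened clinical/dose features). The learnable scalars `alpha_ct` and `alpha_clin`, the feature dimensions, and the small MLP head standing in for the paper's KAN classifier are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class AdaptiveFusionClassifier(nn.Module):
    """Illustrative dual-modal fusion: each modality's feature vector is
    scaled by a learnable adaptive rate, the scaled vectors are concatenated,
    and a binary classifier produces RP / no-RP logits. The MLP head below
    is a stand-in for the KAN classifier used in the paper."""

    def __init__(self, ct_dim: int = 128, clin_dim: int = 16, hidden: int = 64):
        super().__init__()
        # Adaptive rates (one per modality), learned jointly with the classifier.
        self.alpha_ct = nn.Parameter(torch.tensor(1.0))
        self.alpha_clin = nn.Parameter(torch.tensor(1.0))
        self.head = nn.Sequential(
            nn.Linear(ct_dim + clin_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 2),  # binary output: RP vs. no RP
        )

    def forward(self, ct_feat: torch.Tensor, clin_feat: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.alpha_ct * ct_feat,
                           self.alpha_clin * clin_feat], dim=1)
        return self.head(fused)

# Usage with random stand-in features for a batch of 4 patients.
model = AdaptiveFusionClassifier()
logits = model(torch.randn(4, 128), torch.randn(4, 16))
loss = nn.CrossEntropyLoss()(logits, torch.tensor([0, 1, 0, 1]))
loss.backward()  # gradients also flow into the adaptive rates
```

Because the adaptive rates are trained end to end with the classifier, the relative weighting of the imaging and clinical/dose modalities is learned from the data rather than fixed by hand.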