Zhou Jinzhao, Zhang Xingming, Zhu Ziwei, Lan Xiangyuan, Fu Lunkai, Wang Haoxiang, Wen Hanchun
School of Computer Science and Engineering, South China University of Technology, Guangzhou 510641, China.
Department of Computer Science, Hong Kong Baptist University, Hong Kong.
IEEE Trans Circuits Syst Video Technol. 2021 Mar 4;32(5):2535-2549. doi: 10.1109/TCSVT.2021.3063952. eCollection 2022 May.
The outbreak of coronavirus disease (COVID-19) was a nightmare for citizens, hospitals, healthcare practitioners, and the economy in 2020. The overwhelming number of confirmed and suspected cases posed an unprecedented challenge to hospitals' management capacity and medical resource distribution. To reduce the possibility of cross-infection and to attend to each patient according to their severity level, expert diagnosis and sophisticated medical examinations are often required but are hard to fulfil during a pandemic. To facilitate the assessment of a patient's severity, this paper proposes a multi-modality feature learning and fusion model for end-to-end COVID-19 patient severity prediction using blood-test-supported electronic medical records (EMR) and chest computerized tomography (CT) scan images. To evaluate a patient's severity from the co-occurrence of salient clinical features, the High-order Factorization Network (HoFN) is proposed to learn the impact of a set of clinical features without tedious feature engineering. In parallel, an attention-based deep convolutional neural network (CNN) with pre-trained parameters is used to process the lung CT images. Finally, to achieve cohesion of the cross-modality representation, we design a loss function that shifts the deep features of both modalities into the same feature space, which improves the model's performance and robustness when one modality is absent. Experimental results demonstrate that the proposed multi-modality feature learning and fusion model achieves high performance in an authentic scenario.
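The abstract does not specify the internals of HoFN or the alignment loss; as a rough illustration of the two ideas, the sketch below implements (a) the classic second-order factorization-machine interaction term, a standard way to model feature co-occurrence without manual feature engineering, and (b) a simple mean-squared-distance penalty that pulls the two modality embeddings toward a shared feature space. Both are generic stand-ins, not the paper's actual formulations; all function and variable names here are illustrative.

```python
import numpy as np

def fm_pairwise_interactions(x, V):
    """Second-order factorization-machine term:
    0.5 * sum_f [ (sum_i v_if x_i)^2 - sum_i v_if^2 x_i^2 ],
    which scores all pairwise feature co-occurrences in O(n*k).
    x: (n_features,) clinical feature vector; V: (n_features, k) latent factors."""
    linear_sq = (x @ V) ** 2          # square of the factor-weighted sum, shape (k,)
    sq_linear = (x ** 2) @ (V ** 2)   # sum of squared per-feature terms, shape (k,)
    return 0.5 * np.sum(linear_sq - sq_linear)

def alignment_loss(emr_feat, ct_feat):
    """Mean squared distance between the EMR and CT embeddings —
    a simple L2 stand-in for a loss that encourages both modalities
    to occupy the same feature space."""
    return np.mean((emr_feat - ct_feat) ** 2)

# Toy usage: two binary clinical features with unit latent factors.
x = np.array([1.0, 1.0])
V = np.array([[1.0], [1.0]])
score = fm_pairwise_interactions(x, V)   # 0.5 * (4 - 2) = 1.0

# Identical embeddings incur zero alignment penalty.
penalty = alignment_loss(np.zeros(8), np.zeros(8))
```

Keeping the interaction term factorized (via `V`) rather than learning a dense pairwise weight matrix is what makes such models practical on small clinical datasets with many features.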