Li Thomas Z, Still John M, Xu Kaiwen, Lee Ho Hin, Cai Leon Y, Krishnan Aravind R, Gao Riqiang, Khan Mirza S, Antic Sanja, Kammer Michael, Sandler Kim L, Maldonado Fabien, Landman Bennett A, Lasko Thomas A
Biomedical Engineering, Vanderbilt University, Nashville, TN 37212, USA.
Biomedical Informatics, Vanderbilt University, Nashville, TN 37212, USA.
Med Image Comput Comput Assist Interv. 2023 Oct;14221:649-659. doi: 10.1007/978-3-031-43895-0_61. Epub 2023 Oct 1.
The accuracy of predictive models for solitary pulmonary nodule (SPN) diagnosis can be greatly increased by incorporating repeat imaging and medical context, such as electronic health records (EHRs). However, clinically routine modalities such as imaging and diagnostic codes can be asynchronous and irregularly sampled over different time scales, which poses obstacles to longitudinal multimodal learning. In this work, we propose a transformer-based multimodal strategy to integrate repeat imaging with longitudinal clinical signatures from routinely collected EHRs for SPN classification. We perform unsupervised disentanglement of latent clinical signatures and leverage time-distance scaled self-attention to jointly learn from clinical signature expressions and chest computed tomography (CT) scans. Our classifier is pretrained on 2,668 scans from a public dataset and 1,149 subjects with longitudinal chest CTs, billing codes, medications, and laboratory tests from the EHRs of our home institution. Evaluation on 227 subjects with challenging SPNs revealed a significant AUC improvement over a longitudinal multimodal baseline (0.824 vs 0.752 AUC), as well as improvements over a single cross-sectional multimodal scenario (0.809 AUC) and a longitudinal imaging-only scenario (0.741 AUC). This work demonstrates the significant advantages of a novel approach for co-learning longitudinal imaging and non-imaging phenotypes with transformers. Code available at https://github.com/MASILab/lmsignatures.
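The core idea behind time-distance scaled self-attention is to down-weight attention between tokens (scans or clinical signature expressions) that were acquired far apart in time. The exact formulation is given in the paper and the linked repository; the sketch below is a hypothetical minimal version, assuming a simple reciprocal decay on pairwise time gaps and an illustrative function name, not the authors' implementation:

```python
import numpy as np

def time_distance_attention(q, k, v, times, alpha=1.0):
    """Single-head self-attention with logits scaled by temporal distance.

    q, k, v : (n, d) arrays of token embeddings (one token per clinical
              event or scan); times : (n,) acquisition times in days.
    alpha   : decay rate (learnable in practice; fixed here for illustration).
    """
    d = q.shape[-1]
    logits = q @ k.T / np.sqrt(d)                   # standard scaled dot-product
    dist = np.abs(times[:, None] - times[None, :])  # pairwise time gaps
    decay = 1.0 / (1.0 + alpha * dist)              # down-weight distant pairs
    scaled = logits * decay
    attn = np.exp(scaled - scaled.max(axis=-1, keepdims=True))
    attn /= attn.sum(axis=-1, keepdims=True)        # row-wise softmax
    return attn @ v

# With identical queries/keys, attention is driven purely by time distance,
# so each token attends most to temporally nearby tokens.
q = k = np.ones((3, 4))
v = np.eye(3, 4)
times = np.array([0.0, 1.0, 10.0])
out = time_distance_attention(q, k, v, times)
```

This scaling lets a single transformer ingest irregularly sampled, asynchronous tokens from both modalities without resampling them onto a common time grid.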