Li Thomas Z, Xu Kaiwen, Gao Riqiang, Tang Yucheng, Lasko Thomas A, Maldonado Fabien, Sandler Kim L, Landman Bennett A
Biomedical Engineering, Vanderbilt University, Nashville, TN, USA 37235.
School of Medicine, Vanderbilt University, Nashville, TN, USA 37235.
Proc SPIE Int Soc Opt Eng. 2023 Feb;12464. doi: 10.1117/12.2653911. Epub 2023 Apr 3.
Features learned from single radiologic images are unable to provide information about whether and how much a lesion may be changing over time. Time-dependent features computed from repeated images can capture those changes and help identify malignant lesions by their temporal behavior. However, longitudinal medical imaging presents the unique challenge of sparse, irregular time intervals in data acquisition. While self-attention has been shown to be a versatile and efficient learning mechanism for time series and natural images, its potential for interpreting temporal distance between sparse, irregularly sampled spatial features has not been explored. In this work, we propose two interpretations of a time-distance vision transformer (ViT) by using (1) vector embeddings of continuous time and (2) a temporal emphasis model to scale self-attention weights. The two algorithms are evaluated based on benign versus malignant lung cancer discrimination of synthetic pulmonary nodules and lung screening computed tomography studies from the National Lung Screening Trial (NLST). Experiments evaluating the time-distance ViTs on synthetic nodules show a fundamental improvement in classifying irregularly sampled longitudinal images when compared to standard ViTs. In cross-validation on screening chest CTs from the NLST, our methods (0.785 and 0.786 AUC respectively) significantly outperform a cross-sectional approach (0.734 AUC) and match the discriminative performance of the leading longitudinal medical imaging algorithm (0.779 AUC) on benign versus malignant classification. This work represents the first self-attention-based framework for classifying longitudinal medical images. Our code is available at https://github.com/tom1193/time-distance-transformer.
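The two mechanisms described above can be sketched in code. The snippet below is an illustrative interpretation, not the paper's actual implementation: function names, the sinusoidal form of the continuous-time embedding, the exponential decay used as the temporal emphasis model, and the `tau` time constant are all assumptions chosen to make the idea concrete.

```python
import torch
import torch.nn.functional as F

# Hypothetical sketch of the two time-distance mechanisms; names and exact
# formulas are illustrative assumptions, not the authors' released code.

def continuous_time_embedding(t, dim=64):
    """Sinusoidal embedding of continuous acquisition times (e.g. days since
    baseline scan), analogous to positional encodings but indexed by
    real-valued time rather than integer sequence position."""
    # t: (batch, seq_len) real-valued scan times
    freqs = torch.exp(torch.linspace(0, -6, dim // 2))   # geometric frequency ladder
    angles = t.unsqueeze(-1) * freqs                      # (batch, seq, dim/2)
    return torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)

def temporal_emphasis_attention(q, k, v, t, tau=180.0):
    """Self-attention whose weights are scaled by a decaying function of the
    pairwise time distance |t_i - t_j|, so temporally distant scans
    contribute less to each token's update."""
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d ** 0.5           # (batch, seq, seq)
    dt = (t.unsqueeze(-1) - t.unsqueeze(-2)).abs()        # pairwise time distances
    emphasis = torch.exp(-dt / tau)                       # monotone decay in time distance
    attn = F.softmax(scores, dim=-1) * emphasis
    attn = attn / attn.sum(dim=-1, keepdim=True)          # renormalize rows to sum to 1
    return attn @ v
```

Because the embedding and the emphasis term both depend only on the real-valued scan times, neither requires the visits to be evenly spaced, which is what lets such a model handle the sparse, irregular intervals of longitudinal screening studies.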