Department of Information and Statistics, Chungnam National University, Daejeon, Republic of Korea.
Department of Artificial Intelligence, Sungkyunkwan University, Suwon, Republic of Korea.
PLoS One. 2023 Jun 29;18(6):e0287301. doi: 10.1371/journal.pone.0287301. eCollection 2023.
Recent advancements in computer vision and neural networks have facilitated medical imaging survival analysis for various medical applications. However, challenges arise when patients have multiple images from multiple lesions: current deep learning methods produce a separate survival prediction for each image, complicating interpretation of the results at the patient level. To address this issue, we developed a deep learning survival model that provides accurate predictions at the patient level. We propose a deep attention long short-term memory embedded aggregation network (DALAN) for histopathology images, designed to perform feature extraction and aggregation of lesion images simultaneously. This design enables the model to learn imaging features efficiently from lesions and to aggregate lesion-level information to the patient level. DALAN comprises a weight-shared CNN, attention layers, and LSTM layers. The attention layer calculates the significance of each lesion image, while the LSTM layer combines the weighted information to produce a comprehensive representation of the patient's lesion data. We evaluated DALAN against several naive aggregation methods and competing models on simulated and real datasets. DALAN outperformed the competing methods in terms of c-index on the MNIST and Cancer dataset simulations, and on the real TCGA dataset it achieved a higher c-index of 0.803±0.006 than the naive methods and the competing models. DALAN effectively aggregates multiple histopathology images, demonstrating a comprehensive survival model built on attention and LSTM mechanisms.
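The aggregation pipeline the abstract describes (per-lesion CNN features, attention scoring, LSTM combination into one patient-level representation, and a scalar risk output) can be sketched in miniature. The following NumPy sketch is illustrative only, not the authors' implementation: the feature dimension, hidden size, attention scoring rule, and weight values are all hypothetical assumptions, and the CNN features are assumed precomputed.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    # numerically stable softmax over lesion scores
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_weights(features, w):
    # score each lesion feature vector, normalize to importances summing to 1
    return softmax(features @ w)

def lstm_aggregate(seq, Wx, Wh, b, hidden):
    # minimal LSTM cell unrolled over the attention-weighted lesion sequence;
    # the final hidden state serves as the patient-level representation
    h = np.zeros(hidden)
    c = np.zeros(hidden)
    for x in seq:
        z = Wx @ x + Wh @ h + b
        i, f, o, g = np.split(z, 4)
        i = 1.0 / (1.0 + np.exp(-i))   # input gate
        f = 1.0 / (1.0 + np.exp(-f))   # forget gate
        o = 1.0 / (1.0 + np.exp(-o))   # output gate
        g = np.tanh(g)                 # candidate cell state
        c = f * c + i * g
        h = o * np.tanh(c)
    return h

# toy patient: 3 lesion images -> 8-dim CNN features (assumed precomputed)
d, hidden, n_lesions = 8, 4, 3
features = rng.standard_normal((n_lesions, d))

w_att = rng.standard_normal(d)                 # hypothetical attention weights
alpha = attention_weights(features, w_att)     # per-lesion importance
weighted = features * alpha[:, None]           # weight each lesion's features

Wx = rng.standard_normal((4 * hidden, d)) * 0.1
Wh = rng.standard_normal((4 * hidden, hidden)) * 0.1
b = np.zeros(4 * hidden)
patient_repr = lstm_aggregate(weighted, Wx, Wh, b, hidden)

# a single Cox-style risk score per patient, not per image
risk = rng.standard_normal(hidden) @ patient_repr
print(alpha, patient_repr.shape, risk)
```

The key point the sketch mirrors is that, unlike naive per-image prediction, the attention and LSTM stages collapse a variable number of lesion images into one representation, so the survival head emits exactly one risk score per patient.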