Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA.
The Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University School of Medicine, Atlanta, Georgia, USA.
Med Phys. 2021 Dec;48(12):7747-7756. doi: 10.1002/mp.15321. Epub 2021 Nov 13.
Ultrasound (US) imaging is an established imaging modality capable of offering video-rate volumetric images without ionizing radiation, making it a promising candidate for intra-fraction motion tracking in radiation therapy. In this study, we developed a deep learning-based method to address the challenges of motion tracking with US imaging.
We present a Markov-like network, which is implemented via generative adversarial networks, to extract features from sequential US frames (one tracked frame followed by untracked frames) and thereby estimate a set of deformation vector fields (DVFs) through the registration of the tracked frame and the untracked frames. The positions of the landmarks in the untracked frames are finally determined by shifting landmarks in the tracked frame according to the estimated DVFs. The performance of the proposed method was evaluated on the testing dataset by calculating the tracking error (TE) between the predicted and ground truth landmarks on each frame.
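The final two steps above, propagating landmarks by the estimated DVF and scoring them with the tracking error, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the (H, W, 2) per-pixel displacement layout, and the nearest-pixel DVF lookup are all assumptions made for clarity.

```python
import numpy as np

def shift_landmark(landmark_xy, dvf):
    """Move a landmark from the tracked frame into an untracked frame by
    looking up the estimated displacement at its (rounded) pixel position.
    `dvf` has assumed shape (H, W, 2) holding per-pixel (dx, dy)."""
    x, y = landmark_xy
    dx, dy = dvf[int(round(y)), int(round(x))]
    return np.array([x + dx, y + dy])

def tracking_error(predicted_xy, ground_truth_xy, mm_per_pixel=1.0):
    """Tracking error (TE): Euclidean distance between the predicted and
    ground-truth landmark positions, scaled to millimeters."""
    diff = np.asarray(predicted_xy) - np.asarray(ground_truth_xy)
    return float(np.linalg.norm(diff)) * mm_per_pixel

# Toy example: a uniform 2-pixel displacement in x
dvf = np.zeros((64, 64, 2))
dvf[..., 0] = 2.0
pred = shift_landmark((10.0, 20.0), dvf)   # lands at (12, 20)
err = tracking_error(pred, (12.0, 20.0))   # 0.0 mm for a perfect match
```

In practice the DVF would come from the generative adversarial registration network, and sub-pixel landmark positions would call for interpolating the DVF rather than rounding to the nearest pixel.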
The proposed method was evaluated using the MICCAI CLUST 2015 dataset, collected with seven US scanners and eight transducer types, and the Cardiac Acquisitions for Multi-structure Ultrasound Segmentation (CAMUS) dataset, acquired with GE Vivid E95 ultrasound scanners. The CLUST dataset contains 63 2D and 22 3D US image sequences from 42 and 18 subjects, respectively; the CAMUS dataset includes 2D US images from 450 patients. On the CLUST dataset, the proposed method achieved a mean tracking error of 0.70 ± 0.38 mm on the 2D sequences and 1.71 ± 0.84 mm on the 3D sequences for the publicly available annotations. On the CAMUS dataset, a mean tracking error of 0.54 ± 1.24 mm was achieved for landmarks in the left atrium.
This study demonstrates a novel motion tracking algorithm for US images based on modern deep learning techniques. The proposed method offers millimeter-level tumor motion prediction in real time and thus has the potential to be integrated into routine tumor motion management in radiation therapy.