
Real-time Markerless Tracking of Lung Tumors based on 2-D Fluoroscopy Imaging using Convolutional LSTM.

Author information

Peng Tengya, Jiang Zhuoran, Chang Yushi, Ren Lei

Affiliations

Medical Physics Graduate Program, Duke Kunshan University, Kunshan, Jiangsu, 215316, China.

Medical Physics Graduate Program, Duke University, 2424 Erwin Road Suite 101, Durham, NC 27705, USA.

Publication information

IEEE Trans Radiat Plasma Med Sci. 2022 Feb;6(2):189-199. doi: 10.1109/trpms.2021.3126318. Epub 2021 Nov 13.

Abstract

PURPOSE

To investigate the feasibility of tracking targets in 2-D fluoroscopic images using a novel deep learning network.

METHODS

Our model is designed to capture the consistent motion of tumors in fluoroscopic images with a neural network. Specifically, the model is trained with generative adversarial methods and uses a coarse-to-fine architecture. Convolutional LSTM (long short-term memory) modules are introduced to account for the temporal correlation between frames of the fluoroscopic sequence. The model was trained and tested on a digital X-CAT phantom in two studies. Series of coherent 2-D fluoroscopic images covering the full respiration cycle were fed into the model to predict the localized tumor regions. In the first study, which tested a wide range of scenarios, phantoms with different scales, tumor positions, tumor sizes, and respiration amplitudes were generated to evaluate the accuracy of the model comprehensively. In the second study, which tested a specific sample, phantoms were generated with fixed body and tumor sizes but different respiration amplitudes to investigate the effect of motion amplitude on tracking accuracy. Tracking accuracy was quantitatively evaluated using intersection over union (IOU), tumor area difference, and center-of-mass difference (COMD).
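A convolutional LSTM differs from a standard LSTM in that its gate transitions are convolutions rather than dense layers, so the hidden state keeps the 2-D spatial layout of each fluoroscopic frame while the cell memory carries motion information across frames. The following is a minimal NumPy/SciPy sketch of a single ConvLSTM cell; the paper's actual network is a far larger coarse-to-fine, adversarially trained model, and the channel counts, kernel size, and initialization here are illustrative assumptions only:

```python
import numpy as np
from scipy.signal import convolve2d

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def conv(x, kernels):
    """Sum of per-channel 'same'-padded 2-D convolutions.
    x: (C, H, W) feature map; kernels: (C, k, k), one kernel per input channel."""
    return sum(convolve2d(xc, kc, mode="same") for xc, kc in zip(x, kernels))

class ConvLSTMCell:
    """One convolutional LSTM step: the four gates (input, forget, cell, output)
    are computed by convolutions, preserving the frame's spatial structure."""

    def __init__(self, in_ch, hid_ch, k=3, seed=0):
        rng = np.random.default_rng(seed)
        # one kernel stack per gate, for the input and hidden paths
        self.Wx = rng.normal(0.0, 0.1, (4, hid_ch, in_ch, k, k))
        self.Wh = rng.normal(0.0, 0.1, (4, hid_ch, hid_ch, k, k))
        self.b = np.zeros((4, hid_ch, 1, 1))
        self.hid_ch = hid_ch

    def step(self, x, h, c):
        # gate pre-activations: (4, hid_ch, H, W)
        z = np.stack([
            np.stack([conv(x, self.Wx[g, o]) + conv(h, self.Wh[g, o])
                      for o in range(self.hid_ch)])
            for g in range(4)
        ]) + self.b
        i, f, g_, o = sigmoid(z[0]), sigmoid(z[1]), np.tanh(z[2]), sigmoid(z[3])
        c = f * c + i * g_          # cell memory accumulates motion across frames
        h = o * np.tanh(c)          # spatial hidden state for the next frame
        return h, c

# run a short sequence of 32x32 single-channel frames through the cell
cell = ConvLSTMCell(in_ch=1, hid_ch=4)
h = np.zeros((4, 32, 32))
c = np.zeros((4, 32, 32))
for t in range(5):
    frame = np.random.default_rng(t).random((1, 32, 32))
    h, c = cell.step(frame, h, c)
```

Because the recurrence is convolutional, the same cell can be applied to frames of any spatial size, which is what makes this module a natural fit for frame-by-frame tumor tracking.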

RESULTS

In the first, comprehensive study, the mean IOU and Dice coefficient reached 0.93±0.04 and 0.96±0.02, respectively. The mean tumor area difference was 4.34%±4.04%, and the COMD averaged 0.16 cm and 0.07 cm in the SI (superior-inferior) and LR (left-right) directions, respectively. In the second, amplitude study, the mean IOU and Dice coefficient reached 0.98 and 0.99, the mean tumor area difference was 0.17%, and the COMD averaged 0.03 cm and 0.01 cm in the SI and LR directions, respectively. These results demonstrate the robustness of our model against breathing variations.
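The metrics reported above can all be computed from a predicted and a ground-truth binary tumor mask. The following is a minimal NumPy sketch of plausible definitions; the paper's exact formulas (in particular its pixel-spacing convention for COMD) are not given in this abstract, so the `spacing` argument here is an assumption:

```python
import numpy as np

def iou(pred, gt):
    """Intersection over union of two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union

def dice(pred, gt):
    """Dice coefficient: 2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def area_difference(pred, gt):
    """Tumor area difference as a fraction of the ground-truth area."""
    return abs(int(pred.sum()) - int(gt.sum())) / gt.sum()

def comd(pred, gt, spacing=(1.0, 1.0)):
    """Center-of-mass difference per image axis, in physical units.
    `spacing` is an assumed (row, col) pixel size, e.g. (SI, LR) in cm."""
    def com(mask):
        rows, cols = np.nonzero(mask)
        return np.array([rows.mean(), cols.mean()])
    return np.abs(com(pred) - com(gt)) * np.asarray(spacing)

# toy example: predicted mask shifted one pixel from the ground truth
pred = np.zeros((4, 4), bool); pred[0:2, 0:2] = True
gt = np.zeros((4, 4), bool);   gt[1:3, 0:2] = True
scores = (iou(pred, gt), dice(pred, gt), area_difference(pred, gt))
shift = comd(pred, gt, spacing=(0.1, 0.1))
```

In the toy example the masks have equal area but are offset by one row, so the area difference is zero while the IOU, Dice, and COMD all register the misalignment.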

CONCLUSION

Our study showed the feasibility of using deep learning to track targets in x-ray fluoroscopic projection images without the aid of markers. The technique can be valuable for real-time target verification with fluoroscopic imaging, both before and during treatment, in lung SBRT.


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e350/8979268/e31c65f1fb3a/nihms-1776973-f0001.jpg
