
Metal artifact reduction on cervical CT images by deep residual learning.

Author information

School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, Guangdong, China.

Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, 510515, Guangdong, China.

Publication information

Biomed Eng Online. 2018 Nov 27;17(1):175. doi: 10.1186/s12938-018-0609-y.

Abstract

BACKGROUND

Cervical cancer is the fifth most common cancer among women and the third leading cause of cancer death in women worldwide. Brachytherapy is the most effective treatment for cervical cancer. For brachytherapy, computed tomography (CT) imaging is essential because it conveys the tissue density information needed for dose planning. However, the metal artifacts caused by brachytherapy applicators remain a challenge for the automatic processing of image data in image-guided procedures and for accurate dose calculation. An effective metal artifact reduction (MAR) algorithm for cervical CT images is therefore in high demand.

METHODS

A novel residual learning method based on a convolutional neural network (RL-ARCNN) is proposed to reduce metal artifacts in cervical CT images. For MAR, a dataset is first generated by simulating various metal artifacts; it includes artifact-insert, artifact-free, and artifact-residual images. Numerous image patches are extracted from this dataset to train the deep residual learning artifact-reduction CNN (RL-ARCNN). The trained model can then be applied for MAR on cervical CT images.
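The paper does not include code here; the following is a minimal PyTorch sketch of the residual-learning idea the abstract describes: a CNN trained on simulated artifact-insert patches to estimate the artifact residual, which is subtracted from the input to yield an artifact-reduced patch. The class name `ResidualARCNN`, the layer depth, channel width, and the 40 × 40 patch size are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch of a residual-learning MAR network (assumed architecture,
# not the authors' exact RL-ARCNN configuration).
import torch
import torch.nn as nn

class ResidualARCNN(nn.Module):
    """CNN that predicts the metal-artifact residual of a CT patch."""

    def __init__(self, depth: int = 10, channels: int = 64):
        super().__init__()
        layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1),
                       nn.BatchNorm2d(channels),
                       nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(channels, 1, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The network outputs the estimated artifact residual; subtracting it
        # from the input gives the artifact-reduced patch.
        residual = self.body(x)
        return x - residual

# One training step: regress the artifact-reduced output toward the
# artifact-free patch (equivalent to learning the residual image).
model = ResidualARCNN()
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

artifact_patch = torch.randn(8, 1, 40, 40)  # simulated artifact-insert patches
clean_patch = torch.randn(8, 1, 40, 40)     # corresponding artifact-free patches

optimizer.zero_grad()
loss = loss_fn(model(artifact_patch), clean_patch)
loss.backward()
optimizer.step()
```

At inference time, the trained model is applied to full cervical CT slices (or tiled patches) in the image domain, so no sinogram data is required.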

RESULTS

The proposed method provides a good MAR result with a PSNR of 38.09 dB on the test set of simulated artifact images. The PSNR of residual learning (38.09 dB) is higher than that of ordinary learning (37.79 dB), showing that learning the artifact residual with a CNN achieves favorable artifact reduction. Moreover, for a 512 × 512 image, the average artifact-removal time is less than 1 s.
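For reference, PSNR here compares the artifact-reduced image against the artifact-free ground truth. A minimal NumPy sketch of the metric is shown below; the abstract does not state the peak-value convention, so the sketch assumes the peak is taken from the reference image.

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray) -> float:
    """Peak signal-to-noise ratio in dB between a reference (artifact-free)
    image and a test (artifact-reduced) image of the same shape."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    peak = reference.max()  # assumption: peak value taken from the reference
    return 10.0 * np.log10(peak ** 2 / mse)
```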

CONCLUSIONS

RL-ARCNN demonstrates that residual learning with a CNN markedly reduces metal artifacts, improving the visualization of critical structures and the confidence of radiation oncologists in target delineation. Metal artifacts are removed efficiently without sinogram data or complicated post-processing.


Fig. 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e639/6260559/5d4a440ba8e4/12938_2018_609_Fig1_HTML.jpg
