Department of Oncology, Xiangya Hospital, Central South University, Changsha 410008.
National Clinical Research Center for Geriatric Diseases, Xiangya Hospital, Changsha 410008.
Zhong Nan Da Xue Xue Bao Yi Xue Ban. 2022 Aug 28;47(8):1058-1064. doi: 10.11817/j.issn.1672-7347.2022.220101.
Automatic delineation of organs at risk (OARs) can help doctors make radiotherapy plans efficiently and accurately, and can effectively improve the accuracy and therapeutic effect of radiotherapy. This study therefore aims to propose an automatic OAR delineation method applicable to both the after-loading and external irradiation scenarios of cervical cancer radiotherapy, and to exploit the structural similarity of OARs across the two scenarios to improve segmentation accuracy for OARs that are difficult to segment.
Our model adopted an ensemble learning strategy. Models pre-trained separately on the after-loading and external irradiation data were introduced into the ensemble model as feature extraction modules. Data from the two scenarios were trained alternately, so that the model captured both the scenario-specific features of the OARs and the features shared by the OARs across scenarios. Computed tomography (CT) images from 84 after-loading cases and 46 external irradiation cases were collected as the training data set. Five-fold cross-validation was adopted to split the training and test sets, and the five-fold average Dice similarity coefficient (DSC) served as the figure of merit for evaluating the segmentation model.
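To make the ensemble idea concrete, the following is a minimal sketch of how two scene-specific pre-trained encoders could be reused as frozen feature extraction modules feeding a shared segmentation head, together with a DSC implementation. This is an illustrative assumption rather than the architecture reported in the paper: the ConvBlock placeholder, the 64-channel feature width, the fusion layer, the four-class output, and the 256x256 slice size are all hypothetical.

```python
import torch
import torch.nn as nn

def dice_coefficient(pred, target, eps=1e-6):
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred = pred.float().flatten()
    target = target.float().flatten()
    intersection = (pred * target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

class ConvBlock(nn.Module):
    """Simplified stand-in for a residual U-Net encoder stage."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

class EnsembleSegmenter(nn.Module):
    """Hypothetical ensemble: two scene-specific pre-trained encoders
    (after-loading and external irradiation) used as frozen feature
    extraction modules, followed by a shared segmentation head."""
    def __init__(self, encoder_al, encoder_ext, num_classes):
        super().__init__()
        self.encoder_al = encoder_al    # pre-trained on after-loading CT
        self.encoder_ext = encoder_ext  # pre-trained on external irradiation CT
        for p in self.encoder_al.parameters():
            p.requires_grad = False     # keep pre-trained weights fixed
        for p in self.encoder_ext.parameters():
            p.requires_grad = False
        # fuse the two feature maps and predict per-pixel class scores
        self.fuse = ConvBlock(2 * 64, 64)
        self.head = nn.Conv2d(64, num_classes, 1)

    def forward(self, x):
        f_al = self.encoder_al(x)
        f_ext = self.encoder_ext(x)
        fused = self.fuse(torch.cat([f_al, f_ext], dim=1))
        return self.head(fused)

if __name__ == "__main__":
    enc_al = ConvBlock(1, 64)   # placeholders for the pre-trained encoders
    enc_ext = ConvBlock(1, 64)
    model = EnsembleSegmenter(enc_al, enc_ext, num_classes=4)
    ct_slice = torch.randn(1, 1, 256, 256)  # dummy single-channel CT slice
    logits = model(ct_slice)
    print(logits.shape)  # torch.Size([1, 4, 256, 256])
```

In this sketch the frozen encoders supply the scenario-specific features while only the fusion layer and head are trained, alternating batches from the two data sets; the actual training procedure and network depth in the study may differ.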
The DSCs of the OARs (the rectum and bladder in the after-loading images and the bladder in the external irradiation images) were higher than 0.7. Compared with delineating OARs using an independent residual U-Net (Res-Unet) model, the proposed model effectively improved the segmentation of the difficult OARs (the sigmoid colon in the after-loading CT images and the rectum in the external irradiation images), with DSC increases of more than 3%.
Compared with the dedicated single-scenario models, our ensemble model achieves comparable OAR segmentation results for the different treatment options in cervical cancer radiotherapy, which may shorten the time doctors spend delineating OARs and improve their work efficiency.