Department of Hematology Oncology and Radiation Physics, Skåne University Hospital, Lund, Sweden.
Department of Translational Sciences, Medical Radiation Physics, Lund University, Malmö, Sweden.
J Appl Clin Med Phys. 2021 Dec;22(12):51-63. doi: 10.1002/acm2.13446. Epub 2021 Oct 8.
Radiotherapy (RT) datasets can suffer from variations in the annotation of organ at risk (OAR) and target structures. Annotation standards exist, but their coverage of prostate targets is limited. This restricts the use of such data for supervised machine learning, which requires consistently annotated data. The aim of this work was to develop a modality-independent deep learning (DL) model for automatic classification and annotation of prostate RT DICOM structures. Delineated prostate OARs, support structures, and target structures (gross tumor volume [GTV]/clinical target volume [CTV]/planning target volume [PTV]), with or without separate vesicles and/or lymph nodes, were extracted as binary masks from 1854 patients. An image modality-independent 2D InceptionResNetV2 classification network was trained with varying amounts of training data using four image input channels. Channels 1-3 consisted of orthogonal 2D projections of each individual binary structure. The fourth channel contained a summation of the other available binary structure masks. Structure classification performance was assessed in independent CT (n = 200 patients) and magnetic resonance imaging (MRI) (n = 40 patients) test datasets and in an external CT dataset (n = 99 patients) from another clinic. A weighted classification accuracy of 99.4% was achieved during training. The unweighted classification accuracy and the weighted average F1 score across structures were 98.8% and 98.4% in the CT test dataset, and 98.6% and 98.5% in the MRI test dataset, respectively. The external CT dataset yielded corresponding results of 98.4% and 98.7% when analyzed for trained structures only; on the full dataset the results were 79.6% and 75.2%. Most misclassifications in the external CT dataset occurred because multiple CTVs and PTVs were fused together, a configuration not represented in the training data.
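The four-channel network input described above can be sketched in plain NumPy. This is a hypothetical helper, not the authors' implementation: the projection operation (here, a maximum-intensity projection), the projection axis used for the context channel, and the nearest-neighbour resizing are all assumptions made for illustration.

```python
import numpy as np

def make_input_channels(structure_mask, other_masks, size=128):
    """Build a 4-channel 2D input from 3D binary masks.

    Channels 1-3: orthogonal 2D projections (axial, coronal, sagittal)
    of the structure to be classified.
    Channel 4: a projection of the summed remaining structure masks,
    providing anatomical context for the classifier.
    """
    channels = [
        structure_mask.max(axis=0),  # axial projection
        structure_mask.max(axis=1),  # coronal projection
        structure_mask.max(axis=2),  # sagittal projection
    ]
    # Summation of the other available binary structure masks, then a
    # projection along one axis (the choice of axis is an assumption).
    context = np.sum(other_masks, axis=0).max(axis=0)
    channels.append(context)

    def resize(img):
        # Nearest-neighbour index sampling to a common network input
        # size (placeholder for a proper image resampling routine).
        ys = np.linspace(0, img.shape[0] - 1, size).astype(int)
        xs = np.linspace(0, img.shape[1] - 1, size).astype(int)
        return img[np.ix_(ys, xs)]

    return np.stack([resize(c).astype(np.float32) for c in channels], axis=-1)
```

The resulting `(size, size, 4)` array can then be fed to a 2D classification network such as InceptionResNetV2.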
Our proposed DL-based method for automated renaming and standardization of prostate radiotherapy annotations shows great potential. Clinic-specific contouring standards, however, need to be represented in the training data for successful use. Source code is available at https://github.com/jamtheim/DicomRTStructRenamerPublic.
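For reference, the two evaluation metrics reported above, unweighted (overall) classification accuracy and the support-weighted average F1 score, can be computed as follows. This is a minimal pure-NumPy sketch of the standard metric definitions; the label names in the usage note are invented for illustration.

```python
import numpy as np

def unweighted_accuracy(y_true, y_pred):
    """Fraction of structures assigned the correct class label."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.mean(y_true == y_pred))

def weighted_f1(y_true, y_pred):
    """Average of per-class F1 scores, weighted by class support."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    total = 0.0
    for c in np.unique(y_true):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        total += f1 * np.sum(y_true == c)  # weight F1 by class support
    return float(total / len(y_true))
```

For example, with hypothetical labels `y_true = ["CTV", "CTV", "PTV", "Bladder"]` and predictions `y_pred = ["CTV", "PTV", "PTV", "Bladder"]`, both metrics evaluate to 0.75.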