Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, 9700 RB, Groningen, The Netherlands.
Department of Radiation Science and Technology, Faculty of Applied Sciences, Delft University of Technology, 2629 JB, Delft, The Netherlands.
Sci Rep. 2022 Feb 2;12(1):1822. doi: 10.1038/s41598-022-05868-7.
For image-guided small animal irradiations, the whole workflow of imaging, organ contouring, irradiation planning, and delivery is typically performed in a single session, requiring continuous administration of anaesthetic agents. Automating contouring leads to a faster workflow, which limits exposure to anaesthesia, thereby reducing its impact on experimental results and on animal wellbeing. Here, we trained the 2D and 3D U-Net architectures of no-new-Net (nnU-Net) for autocontouring of the thorax in mouse micro-CT images. We trained the models only on native CTs and evaluated their performance on an independent testing dataset (i.e., native CTs not included in training and validation). Unlike previous studies, we also tested model performance on an external dataset (i.e., contrast-enhanced CTs) to see how well the models predict on CTs entirely different from those they were trained on. We also assessed the interobserver variability using the generalized conformity index ([Formula: see text]) among three observers, providing a stronger human baseline for evaluating automated contours than previous studies. Lastly, we quantified the reduction in contouring time compared to manual contouring. The results show that the 3D models of nnU-Net achieve superior segmentation accuracy and are more robust to unseen data than the 2D models. For all target organs, the mean surface distance (MSD) and the 95th percentile Hausdorff distance (95p HD) of the best-performing model for this task (nnU-Net 3d_fullres) are within 0.16 mm and 0.60 mm, respectively. These values are below the minimum required contouring accuracy of 1 mm for small animal irradiations and improve significantly upon the state-of-the-art 2D U-Net-based AIMOS method. Moreover, the conformity indices of the 3d_fullres model also compare favourably to the interobserver variability for all target organs, whereas the 2D models perform poorly in this regard. Importantly, the 3d_fullres model offers a 98% reduction in contouring time.
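The abstract does not specify how the evaluation metrics were implemented, but the quantities it reports can be computed from binary segmentation masks in a few lines. Below is a minimal sketch, assuming voxelized organ masks and the standard definitions: symmetric mean surface distance (MSD), 95th percentile Hausdorff distance (95p HD), and a generalized conformity index over multiple observers (pairwise intersections over pairwise unions). Function and variable names are illustrative, not from the paper.

```python
import numpy as np
from itertools import combinations
from scipy.ndimage import distance_transform_edt, binary_erosion

def surface_distances(a, b, spacing=(1.0, 1.0, 1.0)):
    """Distances (in mm) from surface voxels of mask `a` to the surface of mask `b`."""
    surf_a = a & ~binary_erosion(a)
    surf_b = b & ~binary_erosion(b)
    # EDT of the complement of b's surface: each voxel's distance to b's surface.
    dist_to_b = distance_transform_edt(~surf_b, sampling=spacing)
    return dist_to_b[surf_a]

def msd_and_hd95(pred, ref, spacing=(1.0, 1.0, 1.0)):
    """Symmetric mean surface distance and 95th percentile Hausdorff distance."""
    d_pr = surface_distances(pred, ref, spacing)
    d_rp = surface_distances(ref, pred, spacing)
    msd = np.concatenate([d_pr, d_rp]).mean()
    hd95 = max(np.percentile(d_pr, 95), np.percentile(d_rp, 95))
    return msd, hd95

def generalized_conformity_index(masks):
    """Sum of pairwise intersections over sum of pairwise unions across observers."""
    inter = sum((a & b).sum() for a, b in combinations(masks, 2))
    union = sum((a | b).sum() for a, b in combinations(masks, 2))
    return inter / union
```

For perfectly agreeing contours the surface distances are zero and the conformity index is 1; contours drawn by three observers (or a model vs. observers) can be compared by passing their masks to `generalized_conformity_index`.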