

AAR-RT - A system for auto-contouring organs at risk on CT images for radiation therapy planning: Principles, design, and large-scale evaluation on head-and-neck and thoracic cancer cases.

Author Information

Wu Xingyu, Udupa Jayaram K, Tong Yubing, Odhner Dewey, Pednekar Gargi V, Simone Charles B, McLaughlin David, Apinorasethkul Chavanon, Apinorasethkul Ontida, Lukens John, Mihailidis Dimitris, Shammo Geraldine, James Paul, Tiwari Akhil, Wojtowicz Lisa, Camaratta Joseph, Torigian Drew A

Affiliations

Medical Image Processing Group, Department of Radiology, University of Pennsylvania, 602 Goddard Building, 3710 Hamilton Walk, 6th Floor, Rm 602W, Philadelphia, PA 19104, United States.


Publication Information

Med Image Anal. 2019 May;54:45-62. doi: 10.1016/j.media.2019.01.008. Epub 2019 Jan 29.

Abstract

Contouring (segmentation) of Organs at Risk (OARs) in medical images is required for accurate radiation therapy (RT) planning. In current clinical practice, OAR contouring is performed with low levels of automation. Although several approaches have been proposed in the literature for improving automation, it is difficult to gauge how well these methods would perform in a realistic clinical setting. This is chiefly due to three key factors: the small number of patient studies used for evaluation, the lack of performance evaluation as a function of input image quality, and the lack of precise anatomic definitions of OARs. In this paper, extending our previous body-wide Automatic Anatomy Recognition (AAR) framework to RT planning of OARs in the head and neck (H&N) and thoracic body regions, we present a methodology called AAR-RT to overcome some of these hurdles. AAR-RT follows AAR's 3-stage paradigm of model-building, object-recognition, and object-delineation.

Model-building: Three key advances were made over AAR. (i) AAR-RT (like AAR) starts with a computationally precise definition of the two body regions and all of their OARs; ground-truth delineations of OARs are then generated strictly following these definitions. We retrospectively gathered patient data sets, together with the associated contour data sets previously created during routine clinical RT planning in our Radiation Oncology department, and mended the contours to conform to these definitions. We then derived an Object Quality Score (OQS) for each OAR sample and an Image Quality Score (IQS) for each study, both on a 1-to-10 scale, based on quality grades assigned to each OAR sample following 9 key quality criteria. Only studies with high IQS and high OQS for all of their OARs were selected for model building; IQS and OQS were also employed to evaluate AAR-RT's performance as a function of image/object quality. (ii) In place of AAR's previous hand-crafted hierarchy for organizing OARs, we devised a method to find an optimal hierarchy for each body region, where optimality is based on minimizing object recognition error. (iii) In addition to the parent-to-child relationships encoded in the hierarchy of previous AAR, we developed a directed probability graph technique that further improves recognition accuracy by learning, and encoding in the model, "steady" relationships that may exist among OAR boundaries in the three orthogonal planes.
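The abstract does not state which algorithm finds the recognition-error-minimizing hierarchy in (ii). One plausible formulation, sketched below purely for illustration (the function optimal_hierarchy and the error table pairwise_error are assumptions, not the paper's stated method), casts hierarchy selection as a minimum spanning arborescence over pairwise parent-to-child recognition-error estimates:

    # Hypothetical sketch: pick, for each OAR, the parent from which it is
    # localized most reliably, so that the hierarchy as a whole minimizes
    # total recognition error.
    import networkx as nx

    def optimal_hierarchy(oars, pairwise_error, root="body_region"):
        """oars: list of OAR names; pairwise_error[(p, c)]: estimated
        recognition error of child c when localized from parent p."""
        g = nx.DiGraph()
        for p in [root] + list(oars):
            for c in oars:
                if p != c:
                    g.add_edge(p, c, weight=pairwise_error[(p, c)])
        # Edmonds' algorithm: minimum-weight directed spanning tree rooted
        # at `root`, the only node with no incoming edges.
        return nx.minimum_spanning_arborescence(g)

In such a formulation, the pairwise error estimates would be obtained empirically, for example by localizing each candidate child object from each candidate parent on the training scans and recording the resulting recognition error.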
Object-recognition: The two key improvements over the previous approach are (i) use of the optimal hierarchy for the actual recognition of OARs in a given image, and (ii) refined recognition that makes use of the trained probability graph.

Object-delineation: We use a kNN classifier confined to the fuzzy object mask localized by the recognition step, and then optimally fit the fuzzy mask to the kNN-derived voxel cluster to bring back the shape constraint on the object.
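As a rough illustration of this delineation step (a minimal sketch; the voxel features, the value of k, and the mask-support threshold below are assumptions, and the concluding optimal mask-fitting step is only indicated):

    # kNN delineation confined to the fuzzy mask placed by recognition.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    def delineate(ct, fuzzy_mask, train_feats, train_labels, k=7, tau=0.1):
        """ct, fuzzy_mask: 3D arrays of equal shape; fuzzy_mask holds object
        membership in [0, 1]. train_feats: n x 2 array of (intensity,
        membership) samples; train_labels: n binary object labels."""
        knn = KNeighborsClassifier(n_neighbors=k).fit(train_feats, train_labels)
        support = fuzzy_mask > tau          # classify only inside mask support
        feats = np.column_stack([ct[support], fuzzy_mask[support]])
        seg = np.zeros(ct.shape, dtype=bool)
        seg[support] = knn.predict(feats).astype(bool)
        # ...then optimally fit the fuzzy mask to this voxel cluster to
        # restore the shape constraint (omitted here).
        return seg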

Evaluation: We evaluated AAR-RT on 205 thoracic and 298 H&N studies (503 in total), involving both planning and re-planning scans and a total of 21 organs (9 thoracic, 12 H&N). The studies were gathered from two patient age groups for each gender, 40-59 years and 60-79 years; the number of 3D OAR samples analyzed from the two body regions was 4301. IQS and OQS tended to cluster at the two ends of the score scale. Accordingly, we considered two quality groups for each gender, good and poor. Good-quality data sets typically had OQS ≥ 6 and had distortions, artifacts, pathology, etc. in no more than 3 slices through the object. The number of model-worthy data sets used for training was 38 for thorax and 36 for H&N, and the remaining 479 studies were used for testing AAR-RT. Accordingly, we created 4 anatomy models, one each for: thorax male (20 model-worthy data sets), thorax female (18), H&N male (20), and H&N female (16).

On "good" cases, AAR-RT's recognition accuracy was within 2 voxels and its delineation boundary distance was within about 1 voxel, similar to the variability observed between two dosimetrists manually contouring 5-6 OARs in each of 169 studies. On "poor" cases, AAR-RT's errors hovered around 5 voxels for recognition and 2 voxels for boundary distance. Performance was similar on planning and re-planning cases, and there was no gender difference in performance. AAR-RT's recognition operation is much more robust than its delineation operation. Understanding object and image quality, and how they influence performance, is crucial for devising effective object recognition and delineation algorithms. OQS appears to be more important than IQS in determining accuracy. Streak artifacts arising from dental implants and fillings, and beam hardening from bone, pose the greatest challenge to auto-contouring methods.
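The voxel-unit boundary distances above can be computed as an average symmetric surface distance between the auto-contour and the ground-truth contour; the sketch below uses that common definition, which may differ in detail from the paper's exact metric:

    # Average symmetric surface distance, in voxel units.
    from scipy import ndimage

    def boundary(mask):
        """Voxels of a binary mask that touch the background."""
        return mask & ~ndimage.binary_erosion(mask)

    def mean_boundary_distance(seg, gt):
        bs, bg = boundary(seg), boundary(gt)
        # distance_transform_edt gives each voxel's distance (in voxels)
        # to the nearest zero voxel, i.e., the nearest boundary voxel.
        d_to_gt = ndimage.distance_transform_edt(~bg)
        d_to_seg = ndimage.distance_transform_edt(~bs)
        return 0.5 * (d_to_gt[bs].mean() + d_to_seg[bg].mean())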

Similar Articles

1
Body-wide hierarchical fuzzy modeling, recognition, and delineation of anatomy in medical images.
Med Image Anal. 2014 Jul;18(5):752-71. doi: 10.1016/j.media.2014.04.003. Epub 2014 Apr 24.
2
Object recognition in medical images via anatomy-guided deep learning.
Med Image Anal. 2022 Oct;81:102527. doi: 10.1016/j.media.2022.102527. Epub 2022 Jun 25.
3
Auto-contouring via Automatic Anatomy Recognition of Organs at Risk in Head and Neck Cancer on CT images.
Proc SPIE Int Soc Opt Eng. 2018 Feb;10576. doi: 10.1117/12.2293946. Epub 2018 Mar 13.
4
Hierarchical model-based object localization for auto-contouring in head and neck radiation therapy planning.
Proc SPIE Int Soc Opt Eng. 2018 Feb;10578. doi: 10.1117/12.2294042. Epub 2018 Mar 12.
5
Automatic anatomy recognition in whole-body PET/CT images.
Med Phys. 2016 Jan;43(1):613. doi: 10.1118/1.4939127.
6
Disease quantification on PET/CT images without explicit object delineation.
Med Image Anal. 2019 Jan;51:169-183. doi: 10.1016/j.media.2018.11.002. Epub 2018 Nov 10.

Cited By

1
Deep learning for autosegmentation for radiotherapy treatment planning: State-of-the-art and novel perspectives.
Strahlenther Onkol. 2025 Mar;201(3):236-254. doi: 10.1007/s00066-024-02262-2. Epub 2024 Aug 6.
2
Optimal strategies for modeling anatomy in a hybrid intelligence framework for auto-segmentation of organs.
Proc SPIE Int Soc Opt Eng. 2024 Feb;12928. doi: 10.1117/12.3006617. Epub 2024 Mar 29.
3
Auto-segmentation of thoracic brachial plexuses for radiation therapy planning.
Proc SPIE Int Soc Opt Eng. 2023 Feb;12466. doi: 10.1117/12.2655159. Epub 2023 Apr 3.
4
Anatomy-guided deep learning for object localization in medical images.
Proc SPIE Int Soc Opt Eng. 2022 Feb-Mar;12032. doi: 10.1117/12.2612566. Epub 2022 Apr 4.
5
Integration of artificial intelligence in lung cancer: Rise of the machine.
Cell Rep Med. 2023 Feb 21;4(2):100933. doi: 10.1016/j.xcrm.2023.100933. Epub 2023 Feb 3.
6
Obtaining the potential number of object models/atlases needed in medical image analysis.
Proc SPIE Int Soc Opt Eng. 2020 Feb;11315. doi: 10.1117/12.2549827. Epub 2020 Mar 16.
7
Educative Impact of Automatic Delineation Applied to Head and Neck Cancer Patients on Radiation Oncology Residents.
J Cancer Educ. 2023 Apr;38(2):578-589. doi: 10.1007/s13187-022-02157-9. Epub 2022 Apr 1.
8
Anatomy Recognition in CT Images of Head & Neck Region via Precision Atlases.
Proc SPIE Int Soc Opt Eng. 2021;11596. doi: 10.1117/12.2581234. Epub 2021 Feb 15.

References

1
Auto-contouring via Automatic Anatomy Recognition of Organs at Risk in Head and Neck Cancer on CT images.
Proc SPIE Int Soc Opt Eng. 2018 Feb;10576. doi: 10.1117/12.2293946. Epub 2018 Mar 13.
2
Image Quality and Segmentation.
Proc SPIE Int Soc Opt Eng. 2018 Feb;10576. doi: 10.1117/12.2293622. Epub 2018 Mar 13.
3
Multi-Scale Deep Reinforcement Learning for Real-Time 3D-Landmark Detection in CT Scans.
IEEE Trans Pattern Anal Mach Intell. 2019 Jan;41(1):176-189. doi: 10.1109/TPAMI.2017.2782687. Epub 2017 Dec 12.
4
Hierarchical Vertex Regression-Based Segmentation of Head and Neck CT Images for Radiotherapy Planning.
IEEE Trans Image Process. 2018 Feb;27(2):923-937. doi: 10.1109/TIP.2017.2768621.
5
Joint Segmentation of Multiple Thoracic Organs in CT Images with Two Collaborative Deep Architectures.
Deep Learn Med Image Anal Multimodal Learn Clin Decis Support (2017). 2017 Sep;10553:21-29. doi: 10.1007/978-3-319-67558-9_3. Epub 2017 Sep 9.
6
Cancer statistics, 2018.
CA Cancer J Clin. 2018 Jan;68(1):7-30. doi: 10.3322/caac.21442. Epub 2018 Jan 4.
7
Clinical evaluation of atlas and deep learning based automatic contouring for lung cancer.
Radiother Oncol. 2018 Feb;126(2):312-317. doi: 10.1016/j.radonc.2017.11.012. Epub 2017 Dec 5.
8
Segmentation of organs at risk in thoracic CT images using a SharpMask architecture and conditional random fields.
Proc IEEE Int Symp Biomed Imaging. 2017 Apr;2017:1003-1006. doi: 10.1109/ISBI.2017.7950685. Epub 2017 Jun 19.
9
Anatomically Constrained Neural Networks (ACNNs): Application to Cardiac Image Enhancement and Segmentation.
IEEE Trans Med Imaging. 2018 Feb;37(2):384-395. doi: 10.1109/TMI.2017.2743464. Epub 2017 Sep 26.
