

Interactive lung segmentation in abnormal human and animal chest CT scans.

Author Information

Kockelkorn Thessa T J P, Schaefer-Prokop Cornelia M, Bozovic Gracijela, Muñoz-Barrutia Arrate, van Rikxoort Eva M, Brown Matthew S, de Jong Pim A, Viergever Max A, van Ginneken Bram

Affiliations

Image Sciences Institute, University Medical Center Utrecht, 3584 CX Utrecht, The Netherlands.

Department of Radiology, Meander Medical Centre, 3813 TZ Amersfoort, The Netherlands and Diagnostic Image Analysis Group, Radboud University Nijmegen Medical Centre, 6525 GA Nijmegen, The Netherlands.

Publication Information

Med Phys. 2014 Aug;41(8):081915. doi: 10.1118/1.4890597.

Abstract

PURPOSE

Many medical image analysis systems require segmentation of the structures of interest as a first step. For scans with gross pathology, automatic segmentation methods may fail. The authors' aim is to develop a versatile, fast, and reliable interactive system to segment anatomical structures. In this study, this system was used for segmenting lungs in challenging thoracic computed tomography (CT) scans.

METHODS

In volumetric thoracic CT scans, the chest is segmented and divided into 3D volumes of interest (VOIs) containing voxels with similar densities. These VOIs are automatically labeled as either lung tissue or nonlung tissue. The automatic labeling results can be corrected using an interactive or a supervised interactive approach. When using the supervised interactive system, the user is shown the classification results per slice, after which the user can adjust incorrect labels. The system is retrained continuously, taking the corrections and approvals of the user into account. In this way, the system learns to make a better distinction between lung tissue and nonlung tissue. When using the interactive framework without supervised learning, the user corrects all incorrectly labeled VOIs manually. Both interactive segmentation tools were tested on 32 volumetric CT scans of pigs, mice, and humans, all containing pulmonary abnormalities.
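
The continuous retraining loop described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the abstract does not specify the per-VOI features, the classifier, or the review interface, so the scikit-learn k-NN classifier and the hypothetical helpers voi_features and get_user_corrections below stand in for those parts.

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    def supervised_interactive_segmentation(voi_features, initial_labels, get_user_corrections):
        # voi_features: (n_vois, n_features) array of per-VOI descriptors (assumed).
        # initial_labels: automatic lung/nonlung label per VOI.
        labels = np.asarray(initial_labels).copy()
        reviewed = np.zeros(len(labels), dtype=bool)
        clf = KNeighborsClassifier(n_neighbors=5)  # stand-in classifier (assumption)
        while True:
            # The user reviews the classification slice by slice, corrects wrong
            # VOI labels, and approves correct ones (hypothetical interface).
            corrections, approvals, done = get_user_corrections(labels)
            for idx, lab in corrections.items():
                labels[idx] = lab
                reviewed[idx] = True
            for idx in approvals:
                reviewed[idx] = True
            if done:
                break
            # Retrain on every user-confirmed VOI, then relabel the unreviewed rest.
            if reviewed.any() and (~reviewed).any():
                clf.fit(voi_features[reviewed], labels[reviewed])
                labels[~reviewed] = clf.predict(voi_features[~reviewed])
        return labels

Each pass retrains only on VOIs the user has confirmed and relabels the remaining ones, which is how the system gradually improves its lung/nonlung distinction as corrections accumulate.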

RESULTS

On average, supervised interactive lung segmentation took under 9 min of user interaction. Algorithm computing time was 2 min on average, but could easily be reduced. On average, 2.0% of all VOIs in a scan had to be relabeled. Lung segmentation using the interactive segmentation method took 13 min on average and involved relabeling 3.0% of all VOIs. The resulting segmentations corresponded well to manual delineations of eight axial slices per scan, with an average Dice similarity coefficient of 0.933.
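
The Dice similarity coefficient used for this evaluation is the standard overlap measure 2|A ∩ B| / (|A| + |B|) between a segmentation A and a manual delineation B. A minimal sketch of how it can be computed on binary masks (not the authors' evaluation code):

    import numpy as np

    def dice_coefficient(seg_a, seg_b):
        # 2 * |A ∩ B| / (|A| + |B|) for two binary masks of equal shape.
        a = np.asarray(seg_a, dtype=bool)
        b = np.asarray(seg_b, dtype=bool)
        total = a.sum() + b.sum()
        if total == 0:
            return 1.0  # both masks empty: treat as perfect agreement
        return 2.0 * np.logical_and(a, b).sum() / total

A value of 1 indicates perfect overlap and 0 indicates no overlap, so the reported average of 0.933 corresponds to a close match with the manual delineations.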

CONCLUSIONS

The authors have developed two fast and reliable methods for interactive lung segmentation in challenging chest CT images. Neither system requires prior knowledge of the scans under consideration, and both work on a variety of scans.

