
Improved automated tumor segmentation in whole-body 3D scans using multi-directional 2D projection-based priors.

Author Information

Tarai Sambit, Lundström Elin, Sjöholm Therese, Jönsson Hanna, Korenyushkin Alexander, Ahmad Nouman, Pedersen Mette A, Molin Daniel, Enblad Gunilla, Strand Robin, Ahlström Håkan, Kullberg Joel

Affiliations

Department of Surgical Sciences, Uppsala University, SE-75185, Uppsala, Sweden.

Antaros Medical AB, SE-43153, Mölndal, Sweden.

Publication Information

Heliyon. 2024 Feb 15;10(4):e26414. doi: 10.1016/j.heliyon.2024.e26414. eCollection 2024 Feb 29.

Abstract

Early cancer detection, guided by whole-body imaging, is important for the overall survival and well-being of patients. While various computer-assisted systems have been developed to expedite and enhance cancer diagnostics and longitudinal monitoring, the detection and segmentation of tumors, especially from whole-body scans, remain challenging. To address this, we propose a novel end-to-end automated framework that first generates a tumor probability distribution map (TPDM), incorporating prior information about tumor characteristics (e.g. size, shape, location). Subsequently, the TPDM is integrated with a state-of-the-art 3D segmentation network along with the original PET/CT or PET/MR images. This aims to produce more meaningful tumor segmentation masks than using the baseline 3D segmentation network alone. The proposed method was evaluated on three independent image cohorts (autoPET, CAR-T, cHL) containing different cancer forms, acquired with different imaging modalities and acquisition parameters, with lesions annotated by different experts. The evaluation demonstrated the superiority of our proposed method over the baseline model by significant margins in terms of Dice coefficient and lesion-wise sensitivity and precision. Many of the extremely small tumor lesions (i.e. the most difficult to segment) were missed by the baseline model but detected by the proposed model without additional false positives, resulting in clinically more relevant assessments. On average, an improvement of 0.0251 (autoPET), 0.144 (CAR-T), and 0.0528 (cHL) in overall Dice was observed. In conclusion, the proposed TPDM-based approach can be integrated with any state-of-the-art 3D U-Net, yielding potentially more accurate and robust segmentation results.
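The abstract describes fusing a 3D tumor probability distribution map (TPDM), derived from multi-directional 2D projection-based priors, with the original image volume as input to a 3D segmentation network. A minimal NumPy sketch of that fusion step is shown below; the function names, the back-projection by broadcasting, and the averaging rule are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def tpdm_from_projections(prob_maps):
    """Combine per-axis 2D tumor-probability maps into a 3D prior (TPDM).

    prob_maps: dict mapping a projection axis (0, 1, or 2) to a 2D
    probability map whose shape equals the 3D volume with that axis
    removed. Each 2D map is broadcast back along its projection axis,
    and the per-axis priors are averaged into one 3D map.
    """
    # Infer the 3D shape from the 2D maps (each map omits one axis).
    shape = [None, None, None]
    for ax, pm in prob_maps.items():
        remaining = [a for a in range(3) if a != ax]
        for a, s in zip(remaining, pm.shape):
            shape[a] = s
    # Back-project: insert the missing axis, then broadcast across it.
    back_projected = [
        np.broadcast_to(np.expand_dims(pm, ax), shape)
        for ax, pm in sorted(prob_maps.items())
    ]
    return np.mean(back_projected, axis=0)

def add_prior_channel(volume, tpdm):
    """Stack the image volume and the TPDM as network input channels."""
    return np.stack([volume, tpdm], axis=0)
```

In practice the 2D probability maps would come from networks applied to maximum-intensity projections of the PET volume; the sketch only shows how such maps could be lifted back to 3D and attached as an extra input channel.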


Graphical abstract: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7e01/10882139/a04c98d3e5b5/gr1.jpg
