

Segment-and-Classify: ROI-Guided Generalizable Contrast Phase Classification in CT Using XGBoost.

Authors

Hou Benjamin, Mathai Tejas Sudharshan, Mukherjee Pritam, Wang Xinya, Summers Ronald M, Lu Zhiyong

Affiliations

Division of Intramural Research, National Library of Medicine, National Institutes of Health, Bethesda, MD, USA.

Imaging Biomarkers and Computer Aided Diagnosis Lab, Clinical Center, National Institutes of Health, Bethesda, MD, USA.

Publication

ArXiv. 2025 May 1:arXiv:2501.14066v2.

Abstract

PURPOSE

To automate contrast phase classification in CT using organ-specific features extracted from a widely used segmentation tool with a lightweight decision tree classifier.

MATERIALS AND METHODS

This retrospective study utilized three public CT datasets from separate institutions. The phase prediction model was trained on the WAW-TACE dataset (median age: 66 [60, 73]; 185 males), and externally validated on the VinDr-Multiphase (146 males; 63 females; 56 unknown) and C4KC-KiTS (median age: 61 [50, 68]; 123 males) datasets. Contrast phase classification was performed using organ-specific features extracted by TotalSegmentator, followed by prediction with a gradient-boosted decision tree classifier.

RESULTS

On the VinDr-Multiphase dataset, the phase prediction model achieved the highest or comparable AUCs across all phases (>0.937), with superior F1-scores in the non-contrast (0.994), arterial (0.937), and delayed (0.718) phases. Statistical testing indicated significant performance differences only in the arterial and delayed phases (p<0.05). On the C4KC-KiTS dataset, the phase prediction model achieved the highest AUCs across all phases (>0.991), with superior F1-scores in arterial/venous (0.968) and delayed (0.935) phases. Statistical testing confirmed significant improvements over all baseline models in these two phases (p<0.05). Performance in the non-contrast class, however, was comparable across all models, with no statistically significant differences observed (p>0.05).

CONCLUSION

The lightweight model demonstrated strong performance relative to all baseline models, and exhibited robust generalizability across datasets from different institutions.


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5979/12306821/4cf9f4ac7025/nihpp-2501.14066v2-f0001.jpg
